Administration Guide
MicroStrategy ONE
September 2024
Copyright © 2024 by MicroStrategy Incorporated. All rights reserved.
Trademark Information
The following are either trademarks or registered trademarks of MicroStrategy Incorporated or its affiliates in the United States and certain other
countries:
Dossier, Enterprise Semantic Graph, Expert.Now, Hyper.Now, HyperIntelligence, HyperMobile, HyperVision, HyperWeb, Intelligent Enterprise,
MicroStrategy, MicroStrategy 2019, MicroStrategy 2020, MicroStrategy 2021, MicroStrategy AI, MicroStrategy Analyst Pass, MicroStrategy Architect,
MicroStrategy Architect Pass, MicroStrategy Auto, MicroStrategy Cloud, MicroStrategy Cloud Intelligence, MicroStrategy Command Manager,
MicroStrategy Communicator, MicroStrategy Consulting, MicroStrategy Desktop, MicroStrategy Developer, MicroStrategy Distribution Services,
MicroStrategy Education, MicroStrategy Embedded Intelligence, MicroStrategy Enterprise Manager, MicroStrategy Federated Analytics, MicroStrategy
Geospatial Services, MicroStrategy Identity, MicroStrategy Identity Manager, MicroStrategy Identity Server, MicroStrategy Insights, MicroStrategy
Integrity Manager, MicroStrategy Intelligence Server, MicroStrategy Library, MicroStrategy Mobile, MicroStrategy Narrowcast Server, MicroStrategy
ONE, MicroStrategy Object Manager, MicroStrategy Office, MicroStrategy OLAP Services, MicroStrategy Parallel Relational In-Memory Engine
(MicroStrategy PRIME), MicroStrategy R Integration, MicroStrategy Report Services, MicroStrategy SDK, MicroStrategy System Manager, MicroStrategy
Transaction Services, MicroStrategy Usher, MicroStrategy Web, MicroStrategy Workstation, MicroStrategy World, Usher, and Zero-Click Intelligence.
The following design marks are either trademarks or registered trademarks of MicroStrategy Incorporated or its affiliates in the United States and certain
other countries:
Other product and company names mentioned herein may be the trademarks of their respective owners.
Specifications subject to change without notice. MicroStrategy is not responsible for errors or omissions. MicroStrategy makes no warranties or
commitments concerning the availability of future products or versions that may be planned or under development.
CONTENTS
Best Practices for MicroStrategy System Administration 13
STG_CT_DEVICE_STATS 2087
STG_CT_EXEC_STATS 2090
STG_CT_MANIP_STATS 2100
STG_IS_CACHE_HIT_STATS 2107
STG_IS_CUBE_REP_STATS 2112
STG_IS_DOC_STEP_STATS 2117
STG_IS_DOCUMENT_STATS 2125
STG_IS_INBOX_ACT_STATS 2133
STG_IS_MESSAGE_STATS 2142
STG_IS_PERF_MON_STATS 2150
STG_IS_PR_ANS_STATS 2153
STG_IS_PROJ_SESS_STATS 2159
STG_IS_REP_COL_STATS 2162
STG_IS_REP_SEC_STATS 2165
STG_IS_REP_SQL_STATS 2168
STG_IS_REP_STEP_STATS 2177
STG_IS_REPORT_STATS 2188
STG_IS_SCHEDULE_STATS 2201
STG_IS_SESSION_STATS 2204
STG_MSI_STATS_PROP 2212
l Use the project life cycle of development, testing, production to fully test
your reports, metrics, and other objects before releasing them to users.
l Once Intelligence Server is up and running, you can adjust its governing
settings to better suit your environment. For detailed information about
these settings, see Chapter 8, Tune Your System for the Best
Performance.
l If you have multiple machines available to run Intelligence Server, you can
cluster those machines to improve performance and reliability. See
Chapter 9, Cluster Multiple MicroStrategy Servers.
l Create caches for commonly used reports and documents to reduce the
database load and improve the system response time. See Chapter 10,
Improving Response Time: Caching.
Creating reports based on Intelligent Cubes can also greatly speed up the
processing time for reports. Intelligent Cubes are part of the OLAP
Services features in Intelligence Server. See Chapter 11, Managing
Intelligent Cubes.
You can automate the delivery of reports and documents to users with the
Distribution Services add-on to Intelligence Server.
l The first tier consists of two databases: the data warehouse, which
contains the information that your users analyze; and the MicroStrategy
metadata, which contains information about your MicroStrategy projects.
For an introduction to these databases, see Storing Information: the Data
Warehouse and Indexing your Data: MicroStrategy Metadata.
l The third tier in this system is MicroStrategy Web or Mobile Server, which
delivers the reports to a client. For an introduction to MicroStrategy Web,
see Chapter 13, Administering MicroStrategy Web and Mobile.
l The last tier is the MicroStrategy Web client, Library client, Workstation
client, or MicroStrategy Mobile app, which provides documents and
reports to the users.
To help explain how the MicroStrategy system uses the metadata to do its
work, imagine that a user runs a report that totals revenue for a certain
region in a quarter of the year. The metadata stores information about how
the revenue metric is to be calculated, information about which rows and
tables in the data warehouse to use for the region, and the most efficient
way to retrieve the information.
The role of the physical warehouse schema is further explained in the Basic
Reporting Help.
l Application objects are the objects that are necessary to run reports.
These objects are generally created by a report designer and can include
reports, report templates, filters, metrics, prompts, and so on. These
objects are built in Developer or Command Manager. The Basic Reporting
Help and Advanced Reporting Help are devoted to explaining application
objects.
You can manage your projects using the System Administration Monitor. For
details, see Managing and Monitoring Projects, page 44.
In older systems you may encounter a 6.x Project connection (also two-
tier) that connects directly to a MicroStrategy version 6 project in read-
only mode.
This section describes the ODBC standard for connecting to databases and
creating data source names (DSNs) for the ODBC drivers that are bundled
with the MicroStrategy applications.
The diagram below illustrates the three-tier metadata and data warehouse
connectivity used in the MicroStrategy system.
The diagram shown above illustrates projects that connect to only one data
source. However, MicroStrategy allows connection to multiple data sources
in the following ways:
l You can integrate MDX cube sources such as SAP BW, Microsoft Analysis
Services, and Hyperion Essbase with your MicroStrategy projects. For
information on integrating these MDX cube sources into MicroStrategy,
see the MDX Cube Reporting Help.
You can also create and edit a project source using the Project Source
Manager in Developer. When you use the Project Source Manager, you must
specify the Intelligence Server machine to which to connect. It is through
this connection that Developer users retrieve metadata information.
For procedures to connect to the data warehouse, see the Installation and
Configuration Help.
l Cached: connections that are still connected to a database but not actively
submitting a query to a database
A cached connection is used for a job if the following criteria are satisfied:
l The connection string for the cached connection matches the connection
string that will be used for the job.
Intelligence Server does not cache any connections that have pre- or post-
SQL statements associated with them because these options may
drastically alter the state of the connection.
l Join options, such as the star join and full outer join
l Metric calculation options, such as when to check for NULLs and zeros
For more information about all the VLDB properties, see SQL Generation
and Data Processing: VLDB Properties.
When you create the metadata for a MicroStrategy project, the database-
specific information is loaded from a file supplied by MicroStrategy (called
Database.pds). If you get a new release from MicroStrategy, the metadata
is automatically upgraded using the Database.pds file with the metadata
update process. The Administrator is the only user who can upgrade the
metadata. Do this by clicking Yes when prompted to update the metadata.
This happens when you connect to an existing project after installing a new
MicroStrategy release.
The MicroStrategy system cannot detect when you upgrade or change the
database used to store the MicroStrategy metadata or your data warehouse.
If you upgrade or change the database that is used to store the metadata or
data warehouse, you can manually update the database type to apply the
default properties for the new database type.
l Loads newly supported database types, such as properties for the newest
database servers that were recently added.
l Loads updated properties for existing database types that are still
supported.
l Keeps properties for existing database types that are no longer supported.
If there were no updates for an existing database type, but the properties
for it have been removed from the Database.pds file, the process does
not remove them from your metadata.
For more information about VLDB properties, see SQL Generation and Data
Processing: VLDB Properties.
You may need to manually upgrade the database types if you chose not to
run the update metadata process after installing a new release.
2. Select Upgrade.
The Readme lists all DBMSs that are supported or certified for use with
MicroStrategy.
l Loads existing report cache files from automatic backup files into memory
for each loaded project (up to the specified maximum RAM setting)
This occurs only if report caching is enabled and the Load caches on
startup feature is enabled.
l Loads schedules
You can set Intelligence Server to load MDX cube schemas when it starts,
rather than loading MDX cube schemas upon running an MDX cube
report. For more details on this and steps to load MDX cube schemas
when Intelligence Server starts, see the Configuring and Connecting
Intelligence Server section of the Installation and Configuration Help.
The user who submitted a canceled job sees a message in the History List
indicating that there was an error. The user must resubmit the job.
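For example, the following Server Control Utility commands use the rs
(register-service) and us (unregister-service) commands to register and
unregister an Intelligence Server instance as a service. Both commands are
described in Managing MicroStrategy Services from Command Line Using Server
Control Utility, page 39: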
mstrctl -s IntelligenceServer rs
mstrctl -s IntelligenceServer us
You can start and stop Intelligence Server manually as a service using any
of the following methods:
l If you are already using Developer, you may need to start and stop
Intelligence Server from within Developer. For instructions, see
Developer, page 35.
l You can start and stop Intelligence Server as part of a Command Manager
script. For details, see Command Manager, page 36.
l Finally, you can start and stop Intelligence Server from the command line
using the MicroStrategy Server Control Utility. For instructions, see Command
Line, page 36.
l You must have the Configuration access permission for the server definition
object. For information about object permissions in MicroStrategy, see
Controlling Access to Objects: Permissions, page 89. For a list of the
permission groupings for server definition objects, see Controlling Access to
Objects: Permissions, page 89.
l To remotely start and stop the Intelligence Server service in Windows, you
must be logged in to the remote machine as a Windows user with
administrative privileges.
Service Manager
l MicroStrategy Listener
For instructions on how to use Service Manager, click Help from within
Service Manager.
Service Manager requires that port 8888 be open. If this port is not open,
contact your network administrator.
2. If the icon is not present in the system tray, then from the Windows
Start menu, point to All Programs, then MicroStrategy Tools, then
select Service Manager.
2. In the Server drop-down list, select the name of the machine on which
the service is installed.
4. Click Options.
6. Click OK.
You can also set this using the Services option in the Microsoft
Windows Control Panel.
4. Click Options.
5. On the Intelligence Server Options tab, select the Enabled check box
for the Re-starter Option.
Developer
You can start and stop a local Intelligence Server from Developer. You
cannot start or stop a remote Intelligence Server from Developer; you must
use one of the other methods to start or stop a remote Intelligence Server.
For the Command Manager syntax for starting and stopping Intelligence
Server, see the Command Manager Help (press F1 from within Command
Manager). For a more general introduction to MicroStrategy Command
Manager, see Chapter 15, Automating Administrative Tasks with Command
Manager.
You can start and stop Intelligence Server from a command prompt, using
the MicroStrategy Server Control Utility. This utility is invoked by the
command mstrctl. By default the utility is in C:\Program Files
(x86)\Common Files\MicroStrategy\ in Windows, and in
~/MicroStrategy/bin in UNIX.
For detailed instructions on how to use the Server Control Utility, see
Managing MicroStrategy Services from Command Line Using Server Control
Utility, page 39.
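For example, a minimal sketch of starting and stopping Intelligence Server as
a service from the command line, assuming the service is named
IntelligenceServer and the server instance is named ServerInstance (both
names are placeholders for your environment):

mstrctl -s IntelligenceServer start --service ServerInstance
mstrctl -s IntelligenceServer stop ServerInstance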
You can start and stop Intelligence Server and choose a startup option using
the Windows Services window.
l To change the startup type, select a startup option from the drop-
down list.
l Automatic means that the service starts when the computer starts.
l Disabled means that you cannot start the service until you change
the startup type to one of the other types.
5. Click OK.
Some advanced tuning settings are only available when starting Intelligence
Server as a service. If you change these settings, they are applied the next
time Intelligence Server is started as a service.
Executing this file from the command line displays the following
administration menu in Windows, and a similar menu in UNIX.
To use these options, type the corresponding letter on the command line and
press Enter. For example, to monitor users, type U and press Enter. The
information is displayed.
Server Control Utility can also be used to start, stop, and restart other
MicroStrategy services—such as the Listener, Distribution Manager,
Execution Engine, or Enterprise Manager Data Loader services—and to view
and set configuration information for those services.
The following table lists the commands that you can perform with the Server
Control Utility. Server Control Utility commands generally take a form
similar to the following, where the machine and login options apply only
when you manage a service on a remote machine:
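mstrctl [-m machinename] [-l login] -s servicename command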
Where:
l login is the login for the machine hosting the server instance or service,
and is required if you are not logged into that machine. You are prompted
for a password.
lm, list-machines
List machines that the Server Control Utility can see and affect.

list-servers
List servers. This command does not require a service name.

list-odbc-dsn
List ODBC data source names (DSNs). This command does not require a service
name.

Configure a service

gsvc instancename [> filename.xml], get-service-configuration instancename [> filename.xml]
Display the configuration information for a service, in XML format. You can
optionally specify a file to save the configuration properties to. For more
information, see Using Files to Store Output and Provide Input, page 43.

ssvc instancename [< filename.xml], set-service-configuration instancename [< filename.xml]
Specify the configuration information for a service, in XML format. You can
optionally specify a file to read the configuration properties from. For more
information, see Using Files to Store Output and Provide Input, page 43.

Configure a server

gsic instancename [> filename.xml], get-server-instance-configuration instancename [> filename.xml]
Display the configuration information for a server instance, in XML format.
You can optionally specify a file to save the configuration properties to.
For more information, see Using Files to Store Output and Provide Input,
page 43.

gdi, get-default-instance
Display the default instance for a service.

sdi instancename, set-default-instance instancename
Set an instance of a service as the default instance.

ci instancename, create-instance instancename
Create a new server instance.

cpi instancename newinstancename, copy-instance instancename newinstancename
Create a copy of a server instance. Specify the name for the new instance as
newinstancename.

di instancename, delete-instance instancename
Delete a server instance.

rs instancename, register-service instancename
Register a server instance as a service.

us instancename, unregister-service instancename
Unregister a registered server instance as a service.

gl instancename, get-license instancename
Display the license information for a service instance.

gs instancename, get-status instancename
Display the status information for a server instance.

start --service instancename
Start a server instance as a service.

stop instancename
Stop a server instance that has been started as a service.

pause instancename
Pause a server instance that has been started as a service.
For example, the following command saves the default server instance
configuration to an XML file:
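A sketch of such a command, assuming the service is named IntelligenceServer
and its default instance is named ServerInstance (both names are placeholders
for your environment):

mstrctl -s IntelligenceServer gsic ServerInstance > serverconfig.xml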
l Project, which helps you keep track of the status of all the projects
contained in the selected project source. For detailed information, see
Managing Project Status, Configuration, or Security: Project View, page
45.
l Cluster, which helps you manage how projects are distributed across the
servers in a cluster. For detailed information, see Managing Clustered
Intelligence Servers: Cluster View, page 47.
To view the status of a project, select the List or Details view, and click the
+ sign next to the project's name. A list of all the servers in the cluster
expands below the project's name. The status of the project on each server
is shown next to the server's name. If your system is not clustered, there is
only one server in this list.
From the Project view, you can access a number of administrative and
maintenance functions. You can:
l View the change journal for a project (for details, see Monitor System
Activity: Change Journaling, page 828)
You can also schedule any of these maintenance functions from the
Schedule Administration Tasks dialog box. To access this dialog box, right-
click a project in the Project view and select Schedule Administration
Tasks. For more information, including detailed instructions on scheduling a
task, see Scheduling Administrative Tasks, page 1328.
3. To see a list of all the projects on a node, click the + sign next to that
node. The status of the project on the selected server is shown next to
the project's name.
These tasks are all available by right-clicking a server in the Cluster view.
You can also load or unload projects from a machine, or idle or resume
projects on a machine for maintenance (for details, see Setting the Status of
a Project, page 48) by right-clicking a project on a server. For more detailed
information about any of these options, see Manage your Projects Across
Nodes of a Cluster, page 1169.
l Loaded, page 48
l Unloaded, page 49
For example scenarios where the different project idle modes can help to
support project and data warehouse maintenance tasks, see Project and
Data Warehouse Maintenance Example Scenarios, page 54.
Loaded
A project in Loaded mode appears as an available project in Developer and
MicroStrategy Web products. In this mode, user requests are accepted and
processed as normal.
Unloaded
Unloaded projects are still registered on Intelligence Server, but they do not
appear as available projects in Developer or MicroStrategy Web products,
even for administrators. Nothing can be done in the project until it is loaded
again.
A project unload request is fully processed only when all executing jobs for
the project are complete.
Request Idle
Request Idle mode helps to achieve a graceful shutdown of the project
rather than modifying a project from Loaded mode directly to Full Idle mode.
In this mode, Intelligence Server:
l Stops accepting new user requests from the clients for the project.
l Completes jobs that are already being processed. If a user requested that
results be sent to their History List, the results are available in their
History List after the project is resumed.
Setting a project to Request Idle can be helpful to manage server load for
projects on different clusters. For example, in a cluster with two nodes
named Node1 and Node2, the administrator wants to redirect load
temporarily to the project on Node2. The administrator must first set the
project on Node1 to Request Idle. This allows existing requests to finish
execution for the project on Node1, and then all new load is handled by the
project on Node2.
Execution Idle
A project in Execution Idle mode is ideal for Intelligence Server maintenance
because this mode restricts users in the project from running any job in
Intelligence Server. In this mode, Intelligence Server:
l Stops executing all new and currently executing jobs and, in most cases,
places them in the job queue. This includes jobs that require SQL to be
submitted to the data warehouse and jobs that are executed in Intelligence
Server, such as answering prompts.
l Allows users to continue to request jobs, but execution is not allowed and
the jobs are placed in the job queue. Jobs in the job queue are displayed
as "Waiting for project" in the Job Monitor. When the project is resumed,
Intelligence Server resumes executing the jobs in the job queue.
This mode allows you to perform maintenance tasks for the project. For
example, you can still view the different project administration monitors,
create reports, create attributes, and so on. However, tasks such as
element browsing, exporting, and running reports that are not cached are
not allowed.
Warehouse Execution Idle
In this mode, Intelligence Server:
l Accepts new user requests from clients for the project, but it does not
submit any SQL to the data warehouse.
l Stops any new or currently executing jobs that require SQL to be executed
against the data warehouse and, in most cases, places them in the job
queue. These jobs display as "Waiting for project" in the Job Monitor.
When the project is resumed, Intelligence Server resumes executing the
jobs in the job queue.
l Completes any jobs that do not require SQL to be executed against the
data warehouse.
Full Idle
Full Idle is a combination of Request Idle and Execution Idle. In this mode,
Intelligence Server does not accept any new user requests and active
requests are canceled. When the project is resumed, Intelligence Server
does not resubmit the canceled jobs and it places an error message in the
user's History List. The user can click the message to resubmit the request.
This mode allows you to stop all Intelligence Server and data warehouse
processing for a project. However, the project still remains in Intelligence
Server memory.
Partial Idle
Partial Idle is a combination of Request Idle and Warehouse Execution Idle.
In this mode, Intelligence Server does not accept any new user requests.
Any active requests that require SQL to be submitted to the data warehouse
are queued until the project is resumed. All other active requests are
completed.
This mode allows you to stop all Intelligence Server and data warehouse
processing for a project, while not canceling jobs that do not require any
warehouse processing. The project still remains in Intelligence Server
memory.
4. Select the options for the idle mode that you want to set the project to:
l Request Idle (Request Idle): all executing and queued jobs finish
executing, and any newly submitted jobs are rejected.
l Execution Idle (Execution Idle for All Jobs): all executing, queued,
and newly submitted jobs are placed in the queue, to be executed
when the project resumes.
l Full Idle (Request Idle and Execution Idle for All jobs): all
executing and queued jobs are canceled, and any newly submitted
jobs are rejected.
l Partial Idle (Request Idle and Execution Idle for Warehouse jobs):
all executing and queued jobs that submit SQL against the data
warehouse are canceled, and any newly submitted jobs are rejected.
Any currently executing and queued jobs that do not require SQL to be
executed against the data warehouse are executed.
5. Click OK. The Idle/Resume dialog box closes and the project goes into
the selected mode. If you are using clustered Intelligence Servers, the
project mode is changed for all nodes in the cluster.
l Two projects, named Project1 and Project2, use the same data
warehouse. Project1 needs dedicated access to the data warehouse for a
specific length of time. The administrator first sets Project2 to Request
Idle. After existing activity against the data warehouse is complete,
Project2 is restricted against executing on the data warehouse. Then, the
administrator sets Project2 to Warehouse Execution Idle mode to allow
data warehouse-independent activity to execute. Project1 now has
dedicated access to the data warehouse until Project2 is reset to Loaded.
Processing Jobs
Any request submitted to Intelligence Server from any part of the
MicroStrategy system is known as a job. Jobs may originate from servers
such as the Subscription server or Intelligence Server's internal scheduler,
or from client applications such as MicroStrategy Library, MicroStrategy
Workstation, MicroStrategy Web, Mobile, Integrity Manager, or another
custom-coded application.
The Job Monitor shows you which jobs are currently executing and lets you
cancel jobs as necessary. For information about the job monitor, see
Monitoring Currently Executing Jobs, page 76.
You can assign a priority level to each job according to factors such as the
type of request, the user or user group requesting the job, the source of the
job (such as Developer, Mobile, or MicroStrategy Web), the resource cost of
the job, or the project containing the job. Jobs with a higher priority have
precedence over jobs with a lower priority, and they are processed first if
there is a limit on the resources available. For detailed information on job
priority, including instructions on how to prioritize jobs, see Prioritize Jobs,
page 1086.
Those components are the stops the job makes along what is called a
pipeline: the path that the job takes as Intelligence Server works on it.
4. The result is sent back to the client application, which presents the
result to the user.
Most of the actual processing that takes place is done in steps 2 and 3
internally in Intelligence Server. Although the user request must be received
and the final results must be delivered (steps 1 and 4), those are relatively
simple tasks. It is more useful to explain how Intelligence Server works.
Therefore, the rest of this section discusses Intelligence Server activity as it
processes jobs. This includes:
Being familiar with this material should help you to understand and interpret
statistics, Enterprise Manager reports, and other log files available in the
system. This may help you to know where to look for bottlenecks in the
system and how you can tune the system to minimize their effects.
l A report instance is a container for all objects and information needed and
produced during report execution including templates, filters, prompt
answers, generated SQL, report results, and so on.
Component Function

Metadata Server
Controls all access to the metadata for the entire project.

Object Server
Creates, modifies, saves, loads, and deletes objects from metadata. Also
maintains a server cache of recently used objects. The Object Server does
not manipulate metadata directly. The Metadata Server does all
reading/writing from/to the metadata; the Object Server uses the Metadata
Server to make any changes to the metadata.

Query Engine
Sends the SQL generated by the SQL Engine to the data warehouse for
execution.

SQL Engine Server
Generates the SQL needed for the report.
2. The Resolution Server checks for prompts. If the report has one or
more prompts, the user must answer them. For information about these
extra steps, see Processing Reports with Prompts, page 60.
3. The Report Server checks the internal cache, if the caching feature is
turned on, to see whether the report results already exist. If the report
exists in the cache, Intelligence Server skips directly to the last step
and delivers the report to the client. If no valid cache exists for the
report, Intelligence Server creates the task list necessary to execute
the report. For more information on caching, see Result Caches, page
1203.
Prompts are resolved before the Server checks for caches. Users may
be able to retrieve results from cache even if they have personalized
the report with their own prompt answers.
4. The Resolution Server obtains the report definition and any other
required application objects from the Object Server. The Object Server
retrieves these objects from the object cache, if possible, or reads them
from the metadata via the Metadata Server. Objects retrieved from
metadata are stored in the object cache.
5. The SQL Generation Engine creates the optimized SQL specific to the
RDBMS being used in the data warehouse. The SQL is generated
according to the definition of the report and associated application
objects retrieved in the previous step.
6. The Query Engine runs the SQL against the data warehouse. The report
results are returned to Intelligence Server.
1. The Resolution Server determines that the report contains unanswered
prompts and that the user must supply the necessary information.
2. Intelligence Server puts the job in a sleep mode and tells the Result
Sender component to send a message to the client application
prompting the user for the information.
3. The user completes the prompt, and the client application sends the
user's prompt selections back to Intelligence Server.
5. This cycle repeats until all prompts in the report are resolved.
All regular report processing resumes from the point at which Intelligence
Server checks for a report cache, if the caching feature is turned on.
Reports can also connect to Intelligent Cubes that can be shared by multiple
reports. These Intelligent Cubes also allow the Analytical Engine to perform
additional analysis without requiring any processing on the data warehouse.
For information on personal Intelligent Cubes and Intelligent Cubes, see the
In-memory Analytics Help.
This process is called object browsing and it creates what are called object
requests. It can cause a slight delay that you may notice the first time you
expand or select a folder. The retrieved object definitions are then placed in
Intelligence Server's memory (cache) so that the information is displayed
immediately the next time you browse the same folder. This is called object
caching. For more information on this, see Object Caches, page 1276.
Component Function

Metadata Server
Controls all access to the metadata for the entire project.

Object Server
Creates, modifies, saves, loads, and deletes objects from metadata. Also
maintains a server cache of recently used objects.

Source Net Server
Receives, de-serializes, and passes metadata object requests to the Object
Server.
2. The Object Server checks for an object cache that can service the
request. If an object cache exists, it is returned to the client and
Intelligence Server skips to the last step in this process. If no object
cache exists, the request is sent to the Metadata Server.
3. The Metadata Server reads the object definition from the metadata
repository.
4. The requested objects are received by the Object Server, where they are
deposited into the memory object cache.
For a more thorough discussion of attribute elements, see the section in the
Basic Reporting Help about the logical data model.
When users request attribute elements from the system, they are said to be
element browsing and create what are called element requests. More
specifically, this happens when users:
l Use the Design Mode on MicroStrategy Web to edit the report filter
Component Function

DB Element Server
Transforms element requests into report requests and then sends report
requests to the warehouse.

Element Net Server
Receives, de-serializes, and passes element request messages to the Element
Server.

Element Server
Creates and stores server element caches in memory. Manages all element
requests in the project.

Query Engine
Sends the SQL generated by the SQL Engine to the data warehouse for
execution.

SQL Engine Server
Generates the SQL needed for the report.
2. The Element Server checks for a server element cache that can service
the request. If a server element cache exists, the element cache is
returned to the client. Skip to the last step in this process.
4. The Report Server receives the request and creates a report instance.
6. The SQL Engine Server generates the necessary SQL to satisfy the
request and passes it to the Query Engine Server.
7. The Query Engine Server sends the SQL to the data warehouse.
2. The Document Server inspects all dataset reports and prepares for
execution. It consolidates all prompts from datasets into a single
prompt to be answered. All identical prompts are merged so that the
resulting prompt contains only one copy of each prompt question.
5. After Intelligence Server has completed all the report execution jobs,
the Analytical Engine receives the corresponding report instances to
begin the data preparation step. Document elements are mapped to the
corresponding report instance to construct internal data views for each
element.
6. The Analytical Engine evaluates each data view and performs the
calculations that are required to prepare a consolidated dataset for the
entire document instance. These calculations include calculated
expressions, derived metrics, and conditional formatting. The
consolidated dataset determines the number of elements for each
group and the number of detail sections.
2. The dashboard server consolidates all prompts from child reports into a
single prompt to be answered. Any identical prompts are merged so
that the resulting single prompt contains only one copy of each prompt
question.
1. The user makes a request from a web browser. The request is sent to
the web server via HTTP or HTTPS.
Exporting a report from MicroStrategy Web products lets users save the
report in another format that may provide additional capabilities for sharing,
printing, or further manipulation. This section explains the additional
processing the system must do when exporting a report in one of several
formats. This may help you to understand when certain parts of the
MicroStrategy platform are stressed when exporting.
For information about governing report size limits for exporting, see Limit
the Information Displayed at One Time, page 1098 and the following
sections.
Export to Comma Separated File (CSV) and Export to Excel with Plain Text
are done completely on Intelligence Server. These formats contain only report
data and no formatting information. The only difference between these two
formats is the internal "container" that is used.
1. MicroStrategy Web product receives the request for the export and
passes the request to Intelligence Server. Intelligence Server takes the
XML containing the report data and parses it for separators, headers
and metric values.
2. Intelligence Server then outputs the titles of the units in the Row axis.
All these units end up in the same row of the result text.
3. Intelligence Server then outputs the title and header of one unit in the
Column axis.
4. Repeat step 3 until all units in the Column axis are completed.
5. Intelligence Server outputs all the headers of the Row axis and all
metric values one row at a time.
To export to Excel, users must first set their Export preferences by clicking
Preferences, then User preferences, then Export, and select the Excel
version they want to export to.
1. MicroStrategy Web product receives the request for the export to Excel
and passes the request to Intelligence Server. Intelligence Server
produces a report by combining the XML containing the report data with
the XSL containing formatting information.
3. Users can then choose to view the Excel file or save it depending on the
client machine operating system's setting for viewing Excel files.
Export to PDF
4. Narrowcast Server submits one report per user or one multipage report
for multiple users, depending on the service definition.
l Executing
l Canceling
The Job Monitor displays which tasks are running on an Intelligence Server.
When a job has completed it no longer appears in the monitor. You can view
a job's identification number; the user who submitted it; the job's status; a
description of the status and the name of the report, document, or query;
and the project executing it.
3. Because the Job Monitor does not refresh itself, you must periodically
refresh it to see the latest status of jobs. To do this, press F5.
5. To view more details for all jobs displayed, right-click in the Job Monitor
and select View options. Select the additional columns to display and
click OK.
At times, you may see "Temp client" in the Network Address column. This
may happen when Intelligence Server is under a heavy load and a user
accesses the list of available projects. Intelligence Server creates a
temporary session that submits a job request for the available projects and
then sends the list to the MicroStrategy Web client for display. This
temporary session, which remains open until the request is fulfilled, is
displayed as Temp client.
To Cancel a Job
2. Press DELETE, and then confirm whether you want to cancel the job.
OEMs may use silent installations; however, it is more common for OEMs to
use a response file installation.
SETTING UP USER SECURITY
MicroStrategy has a robust security model that enables you to create users
and groups, and control what data they can see and what objects they can
use. The security model is covered in the following sections:
Privileges
Permissions
Users are defined in the MicroStrategy metadata and exist across projects.
You do not have to define users for every project you create in a single
metadata repository.
Each user has a unique profile folder in each project. This profile folder
appears to the user as the "My Personal Objects" folder. By default, other
users' profile folders are hidden. To view them, in the Developer
Preferences dialog box, in the Developer: Browsing category, select the
Display Hidden Objects check box.
For a list of the privileges assigned to each group, see the List of Privileges
section.
Authentication-Related Groups
These groups are provided to assist you in managing the different ways in
which users can log into the MicroStrategy system. For details on the
different authentication methods, see Chapter 3, Identifying Users:
Authentication.
l 3rd Party Users: Users who access MicroStrategy projects through third-
party (OEM) software.
l LDAP Users: The group into which users that are imported from an LDAP
server are added.
l Developer: Developers can design new reports from scratch, and create
report components such as consolidations, custom groups, data marts,
documents, drill maps, filters, metrics, prompts, and templates.
l Web Analyst: Web Analysts can create new reports with basic report
functionality, and use ad hoc analysis from Intelligent Cubes with
interactive, slice and dice OLAP.
Administrator Groups
l System Monitors: The System Monitors groups provide an easy way to
give users basic administrative privileges for all projects in the system.
Users in the System Monitors groups have access to the various
monitoring and administrative tools.
Privileges
Privileges allow users to access and work with various functionality within
the software. All users created in the MicroStrategy system are assigned a
set of privileges by default.
For a list of all user and group privileges in MicroStrategy, see the List
of Privileges section.
To see which users are using certain privileges, use the License Manager.
See Using License Manager, page 728.
3. Right-click the user and select Grant access to projects. The User
Editor opens to the Project Access dialog box. The privileges that the
user has for each project are listed, as well as the source of those
privileges (inherent to user, inherited from a group, or inherited from a
security role).
Permissions
Permissions allow users to interact with various objects in the MicroStrategy
system. All users created in the MicroStrategy system have certain access
rights to certain objects by default.
To Delete a User
If a Narrowcast user exists that inherits authentication from the user that
you are deleting, you must also remove the authentication definition from
that Narrowcast user. For instructions, see the MicroStrategy Narrowcast
Server Administration Guide.
4. Click OK.
5. Click No. The folder and its contents remain on the system and
ownership is assigned to Administrator. You may later assign
ownership and access control lists for the folder and its contents to
other users.
6. Click Yes and the folder and all of its contents are deleted.
l Temp client: At times, you may see "Temp client" in the Network
Address column. This may happen when Intelligence Server is under
a heavy load and a user accesses the Projects or Home page in
MicroStrategy Web (the pages that display the list of available
projects). Intelligence Server creates a temporary session that
submits a job request for the available projects and then sends the
list to the MicroStrategy Web client for display. This temporary
session, which remains open until the request is fulfilled, is
displayed as "Temp client."
To Disconnect a User
2. Press Delete.
To modify an object's ACL, you must access the Properties dialog box
directly from Developer. If you access the Properties dialog box from
within an editor, you can view the object's ACL but cannot make any
changes.
3. For the User or Group (click Add to select a new user or group), from
the Object drop-down list, select the predefined set of permissions, or
select Custom to define a custom set of permissions. If the object is a
folder, you can also assign permissions to objects contained in that
folder using the Children drop-down list.
4. Click OK.
3. To add new users or groups to the object's access control list (ACL):
l Select the users or groups that you want to add to the object's ACL.
l Click Add.
4. To remove a user or group from the object's ACL, click the X next to the
user or group's name.
5. When you are finished modifying the object's permissions, click OK.
For example, for the Northeast Region Sales report you can specify the
following permissions:
l The Managers and Executive user groups have View access to the report.
l The Developers user group (people who create and modify your
applications) has Modify access.
l The Everyone user group (any user not in one of the other groups) should
have no access to the report at all, so you assign the Denied All
permission grouping.
The default ACL of a newly created object has the following characteristics:
l The owner (the user who created the object) has Full Control permission.
l Permissions for all other users are set according to the Children ACL of
the parent folder.
Newly created folders inherit the standard ACLs of the parent folder. They
do not inherit the Children ACL.
l When creating new schema objects, if the Everyone user group is not
defined in the ACL of the parent folder, Developer adds the Everyone user
group to the ACL of the new schema object, and sets the permissions to
Custom. If the Everyone user group has permissions already assigned in
the parent folder ACL, they are inherited properly. Please note that
Workstation does not add the Everyone user group to the ACL of the new
schema object.
For example, if the Children setting of the parent folder's ACL includes
Full Control permission for the Administrator and View permission for the
Everyone group, then the newly created object inside that folder will have
Full Control permission for the owner, Full Control for the Administrator,
and View permission for Everyone.
l When a user group belongs to another user group, granting one group
permissions and denying the other any permissions will cause both
groups to have the Denied All permission.
l Modifying the ACL of a shortcut object does not modify the ACL of that
shortcut's parent object.
l When you move an object to a different folder, the moved object retains
its original ACLs until you close and reopen the project in Developer.
Using Save As to move an object to a new folder will update the ACLs for
all objects except metrics. When editing or moving a metric, you should
copy the object and place the copy in a new folder so the copied object
inherits its ACL from the Children ACL of the folder into which it is
copied.
Grouping: View
Description: Grants permission to access the object for viewing only, and to
provide translations for an object's name and description.
Permissions granted: Browse, Read, Use, Execute

Grouping: Modify
Description: Grants permission to view and/or modify the object.
Permissions granted: Browse, Read, Write, Delete, Use, Execute

Grouping: Denied All
Description: Explicitly denies all permissions for the object. None of the
permissions are assigned.
Permissions granted: none; all are denied

Grouping: Consume (only available in MicroStrategy Web)
Description: (Intelligent Cube only) Grants permission to create and execute
reports based on this Intelligent Cube.
Permissions granted: Browse, Read, Use

Grouping: Add (only available in MicroStrategy Web)
Description: (Intelligent Cube only) Grants permission to create and execute
reports based on this Intelligent Cube, and republish/re-execute the
Intelligent Cube to update the data.
Permissions granted: Browse, Read, Use, Execute

Grouping: Collaborate (only available in MicroStrategy Web)
Description: (Intelligent Cube only) Grants permission to create and execute
reports based on this Intelligent Cube, republish/re-execute the Intelligent
Cube to update the data, and modify the Intelligent Cube.
Permissions granted: Browse, Read, Write, Delete, Use, Execute
The permissions actually assigned to the user or group when you select a
permission grouping are explained in the table below.
Permission Definition

Read
View the object's definition in the appropriate editor, and view the
object's access control list. When applied to a language object, allows
users to see the language in the Translation Editor but not edit strings for
this language.

Write
Modify the object's definition in the appropriate editor and create new
objects in the parent object. For example, add a new metric in a report or
add a new report to a document.

Use
Use the object when creating or modifying other objects. For example, the
Use permission on a metric allows a user to create a report containing that
metric. For more information, see Permissions and Report/Document Execution,
page 99. When applied to a language object, allows users to edit and save
translations, and to select the language for display in their Developer or
MicroStrategy Web language preferences. This permission is checked at design
time, and when executing reports against an Intelligent Cube.
A user with Use but not Execute permission for an Intelligent Cube can
create and execute reports that use that Intelligent Cube, but cannot
publish the Intelligent Cube.
When you give users only Browse access to a folder, using the Custom
permissions, they can see that folder displayed, but cannot see a list of
objects within the folder. However, if they perform a search, and objects
within that folder match the search criteria, they can see those objects. To
deny a user the ability to see objects within a folder, you must deny all
access directly to the objects in the folder.
As with other objects in the system, you can create an ACL for a server
object that determines what system administration permissions are assigned
to which users. These permissions are different from the ones for other
objects (see table below) and determine what capabilities a user has for a
specific server. For example, you can configure a user to act as an
administrator on one server, but as an ordinary user on another. To do this,
you must modify the ACL for each server definition object by right-clicking
the Administration icon, selecting Properties, and then selecting the
Security tab.
The table below lists the groupings available for server objects, the
permissions each one grants, and the tasks each allows you to perform on
the server.
Grouping: Configuration
Permissions granted: Browse, Read, Write, Delete, Control
Tasks allowed: Cancel jobs, Trigger events, Change server definition
properties, Change statistics settings, Delete server definition, Grant
server rights to other users
l User identity: The user identity is what determines an object's owner when
an object is created. The user identity also determines whether the user
has been granted the right to access a given object.
l Special privileges: A user may possess a special privilege that causes the
normal access checks to be bypassed:
l Bypass All Object Security Access Checks allows the user to ignore the
access checks for all objects.
1. Permissions that are directly denied to the user are always denied.
2. Permissions that are directly granted to the user, and not directly
denied, are always granted.
3. Permissions that are denied by a group, and not directly granted to the
user, are denied.
4. Permissions that are granted by a group, and not denied directly or by
another group, are granted.
5. Any permissions that are not granted, either directly or by a group, are
denied.
For example, user Jane does not have any permissions directly assigned for
a report. However, Jane is a member of the Designers group, which has Full
Control permissions for that report, and is also a member of the Managers
group, which has Denied All permissions for that report. In this case, Jane is
denied all permissions for the report. If Jane is later directly granted View
permissions for the report, she would have View permissions only.
l Everyone: Browse
l Public/Guest: Browse
l Inherited ACL
l Administrator: Default
l Everyone: View
l Public/Guest: View
This means that new users, as part of the Everyone group, are able to
browse the objects in the Public Objects folder, view their definitions
and use them in definitions of other objects (for example, create a
report with a public metric), and execute them (execute reports).
However, new users cannot delete these objects, or create or save new
objects to these folders.
l Personal folders
This means that new users can create objects in these folders and have
full control over those objects.
l The Use permission allows the user to reference or use the object when
they are modifying another object. This permission is checked at object
design time, and when executing reports against an Intelligent Cube.
A user may have four different levels of access to an object using these two
new permissions:
l Both Use and Execute permissions: The user can use the object to create
new reports, and can execute reports containing the object.
l Execute permission only: The user can execute previously created reports
containing the object, but cannot create new reports that use the object. If
the object is an Intelligent Cube, the user cannot execute reports against
that Intelligent Cube.
l Use permission only: The user can create reports using the object, but
cannot execute those reports.
A user with Browse, Read, and Use (but not Execute) permissions for an
Intelligent Cube can create and execute reports that use that Intelligent
Cube, but cannot publish the Intelligent Cube.
l Neither Use nor Execute permission: The user cannot create reports
containing the object, nor can the user execute such reports, even if the
user has Execute rights on the report.
If the user does not have access to a metric used to define a report, the
report execution continues, but the metric is not displayed in the report for
that user.
If the user does not have access to objects used in a prompt, such as an
attribute in an element list prompt, object prompt, or attribute qualification
prompt, or a metric in a metric qualification prompt or object prompt, the
prompt is considered not applied if the prompt answer is optional.
Alternatively, the execution may fail with an error about the lack of access
if a prompt answer is required.
With the Enable Web personalized drill paths check box cleared (and
thus, XML caching enabled), the attributes to which all users in
MicroStrategy Web can drill are stored in a report's XML cache. In this case,
users see all attribute drill paths whether they have access to them or not.
When a user selects an attribute drill path, Intelligence Server then checks
whether the user has access to the attribute. If the user does not have
access (for example, because of Access Control Lists), the drill is not
performed and the user sees an error message.
Alternatively, if you select the Enable Web personalized drill paths check
box, at the time the report results are created (not at drill time), Intelligence
Server checks which attributes the user may access and creates the report
XML with only the allowed attributes. This way, the users only see their
available drill paths, and they cannot attempt a drill action that is not
allowed. With this option enabled, you may see performance degradation on
Intelligence Server. This is because it must create XML for each report/user
combination rather than using XML that was cached.
For more information about XML caching, see Types of Result Caches, page
1209.
Based on their different privileges, the users and user groups can perform
different types of operations in the MicroStrategy system. If a user does not
have a certain privilege, that user does not have access to that privilege's
functionality. You can see which users are using certain privileges by using
License Manager (see Using License Manager, page 728).
For a complete list of privileges and what they control in the system, see the
List of Privileges section.
1. From Developer User Manager, edit the user with the User Editor or
edit the group with the Group Editor.
Rather than assigning individual users and groups these privileges, it may
be easier for you to create Security Roles (collections of privileges) and
assign them to users and groups. Then you can assign additional privileges
individually when there are exceptions. For more information about security
roles, see Defining Sets of Privileges: Security Roles, page 106.
You can grant, revoke, and replace the existing privileges of users, user
groups, or security roles with the Find and Replace Privileges dialog box.
This dialog box allows you to search for the user, user group, or security role
and change their privileges, depending on the tasks required for their work.
To access the Find and Replace Privileges dialog box, in Developer, right-
click the User Manager and select Find and Replace Privileges.
l Privileges assigned to any security roles that are assigned to the user
within the project (see Defining Sets of Privileges: Security Roles, page
106)
l System Monitors and its member groups have privileges based on their
expected roles in the company. To see the privileges assigned to each
group, right-click the group and select Grant Access to Projects.
Several of the predefined user groups form hierarchies, which allow groups
to inherit privileges from any groups at a higher level within the hierarchy.
These hierarchies are as follows:
In the case of the MicroStrategy Web user groups, the Web Analyst inherits
the privileges of the Web Reporter. The Web Professional inherits the
privileges of both the Web Analyst and Web Reporter. The Web Professional
user group has the complete set of MicroStrategy Web privileges.
l Web Reporter
l Web Analyst
l Web Professional
l Analyst
l Developer
The various System Monitors user groups inherit the privileges of the
System Monitors user group and therefore have more privileges than the
System Monitors group. In addition, each has its own specific set of
privileges that are not shared by the other System Monitors groups.
l System Monitors
This group inherits the privileges of the Analyst, Mobile User, Web Reporter,
and Web Analyst groups.
l International Users
Security roles exist at the project source level, and can be used in any
project registered with Intelligence Server. A user can have different
security roles in each project. For example, an administrator for the
development project may have a Project Administrator security role in that
project, but the Normal User security role in all other projects on that server.
For information about how privileges are inherited from security roles and
groups, see Controlling Access to Functionality: Privileges, page 101.
3. Double-click the security role you want to assign to the user or group.
5. From the Select a Project drop-down list, select the project for which
to assign the security role.
6. From the drop-down list of groups, select the group containing a user or
group you want to assign the security role to. The users or groups that
are members of that group are shown in the list box below the drop-
down list.
l By default, users are not shown in this list box. To view the users as
well as the groups, select the Show users check box.
8. Click the > icon. The user or group moves to the Selected members
list. You can assign multiple users or groups to the security role by
selecting them and clicking the > icon.
9. When you are finished assigning the security role, click OK.
3. From the File menu, point to New, and select Security Role.
7. To assign the role to users, select the Members tab and follow the
instructions in To Assign a Security Role to Users or Groups in a
Project, page 107.
8. Click OK.
1. In Developer, log in the project source you want to remove the security
role from.
5. Click Yes.
You can also assign security roles to a user or group in the User Editor or
Group Editor. From the Project Access category of the editor, you can
specify what security roles that user or group has for each project.
You can assign roles to multiple users and groups in a project through the
Project Configuration dialog box. The Project Access - General category
displays which users and groups have which security roles in the project,
and allows you to re-assign the security roles.
You can also use Command Manager to manage security roles. Command
Manager is a script-based administrative tool that helps you perform
complex administrative actions quickly. For specific syntax for security role
management statements in Command Manager, see Security Role
Management in the Command Manager on-line help (from Command
Manager, press F1, or select the Help menu). For general information about
Command Manager, see Chapter 15, Automating Administrative Tasks with
Command Manager.
If you are using UNIX, you must use Command Manager to manage your
system's security roles.
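For illustration only, a Command Manager script along the following lines creates a security role and assigns it to a user for a project. The role, user login, and project names are placeholders, and the exact statement outlines can vary by version, so verify them against the Security Role Management outlines in the Command Manager Help before running anything:

CREATE SECURITY ROLE "Regional Analysts" DESCRIPTION "Analysts for the Northeast region";
GRANT SECURITY ROLE "Regional Analysts" TO USER "jsmith" FOR PROJECT "MicroStrategy Tutorial";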
3. In the Select a security role drop-down list, select the security role
that contains the user or group who you want to deny project access.
4. On the right-hand side of the Project access - General dialog, select the
user or group who you want to deny project access. Then click the left
arrow to remove that user or group from the security role.
5. Using the right arrow, add any users to the security role for whom you
want to grant project access. To see the users contained in each group,
highlight the group and check the Show users check box.
6. Make sure the user or group whose access you want to deny does not
appear in the Selected members pane on the right-hand side of the
dialog. Then click OK.
7. In Developer, under the project source that contains the project you are
restricting access to, expand Administration, then expand User
Manager.
8. Click on the group to which the user belongs who you want to deny
project access for. Then double-click on the user in the right-hand side
of Developer.
10. In the Security Role Selection row, under the project you want to
restrict access to, review the Security Role Selection drop-down list.
Make sure that no security role is associated with this project for this
user.
When the user attempts to log in to the project, they receive the message
"No projects were returned by this project source."
For example, your company security policy may require you to keep the user
security administrator for your projects separate from the project resource
administrator. Rather than specifying the privileges for each administrator
individually, you can assign the Project Security Administrator role to one
administrator and a separate role to the administrator who manages project
resources.
l Consumer, who can only view a dashboard or document they have access
to.
l Power Users, which have the largest subset of privileges of any security
role.
l Project Security Administrators, who create users and manage user and
object security.
The ways in which data access can be controlled are discussed below:
1. In Developer, log into your project. You must log in as a user with
administrative privileges.
6. Double-click the new connection mapping in the Users column. Click ...
(the browse button).
7. Select the desired user or group and click OK. That user or group is
now associated with the connection mapping.
8. Click OK.
l All other users have limited access (warehouse login ID = "MSTR users")
In this case, you would need to create a user connection mapping within
MicroStrategy for the CEO. To do this:
This is shown in the diagram below in which the CEO connects as CEO
(using the new database login called "CEO") and all other users use the
default database login "MSTR users."
Both the CEO and all the other users use the same project, database
instance, database connection (and DSN), but the database login is
different for the CEO.
Connection mappings can also be made for user groups and are not limited
to individual users. Continuing the example above, if you have a Managers
group within the MicroStrategy system that can access most data in the data
warehouse (warehouse login ID = "Managers"), you could create another
database login and then create another connection mapping to assign it to
the Managers user group.
Another case in which you may want to use connection mappings is if you
need to have users connect to two data warehouses using the same project.
In this case, both data warehouses must have the same structure so that the
project works with both. This may be applicable if you have a data
warehouse with domestic data and another with foreign data and you want
users to be directed to one or the other based on the user group to which
they belong when they log in to the MicroStrategy system.
l "US users" connect to the U.S. data warehouse (data warehouse login ID
"MSTR users")
In this case, you would need to create a user connection mapping within
MicroStrategy for both user groups. To do this, you would:
l Create two connection mappings in the MicroStrategy project that link the
groups to the different data warehouses via the two new database
connection definitions
The project, database instance, and database login can be the same, but
the connection mapping specifies different database connections (and
therefore, different DSNs) for the two groups.
You can configure each project to use connection mappings, the linked
warehouse login ID, or both when users execute reports or documents, or
browse attribute elements. If passthrough execution is enabled, the project
uses the linked warehouse login ID and password as defined in the User
Editor (Authentication tab). If no warehouse login ID is linked to a user,
Intelligence Server uses the default connection and login ID for the project's
database instance.
l RDBMS auditing: If you want to be able to track which users are accessing
the RDBMS down to the individual database query, each user needs to
connect with their own RDBMS account. Mapping multiple users to the same
RDBMS account blurs the ability to track which users have issued which
RDBMS queries.
l Teradata spool space: If you use the Teradata RDBMS, note that it has a
limit for spool space set per account. If multiple users share the same
RDBMS account, they are collectively limited by this setting.
l RDBMS security views: If you use security views, each user needs to log
in to the RDBMS with a unique database login ID so that a database
security view is enforced.
1. In Developer, log into your project. You must log in as a user with
administrative privileges.
5. To use warehouse credentials for all database instances, select the For
all database instances option.
7. Click OK.
For example, two regional managers can have two different security filters
assigned to them for their regions: one has a security filter assigned to them
that only shows the data from the Northeast region, and the other has a
security filter that only shows data from the Southwest region. If these two
regional managers run the same report, they may see different report
results.
l Using a Single Security Filter for Multiple Users: System Prompts, page
136
When a user whose security filter restricts them to the TV subcategory
browses the elements of the Category attribute, they only see the Electronics
category. Within the Electronics category, they see only the TV subcategory.
Within the TV subcategory, they see all Items within that subcategory.
When this user executes a simple report with Category, Subcategory, and
Item in the rows, and Revenue in the columns, only the Items from the TV
Subcategory are returned, as shown in the example below.
If this user executes another report with Category in the rows and Revenue
in the columns, only the Revenue from the TV Subcategory is returned, as
shown in the example below. The user cannot see any data from attribute
elements that are outside the security filter.
A security filter comes into play when a user is executing reports and
browsing elements. The qualification defined by the security filter is used in
the WHERE clause for any report that is related to the security filter's
attribute. By default, this is also true for element browsing: when a user
browses through a hierarchy to answer a prompt, they only see the attribute
elements that the security filter allows them to see. For instructions on how
to disable security filters for element browsing, see To Disable Security
Filters for Element Browsing, page 125.
Security filters are used as part of the cache key for report caching and
element caching. This means that users with different security filters cannot
access the same cached results, preserving data security. For more
information about caching, see Chapter 10, Improving Response Time:
Caching.
Each user or group can be directly assigned only one security filter for a
project. Users and groups can be assigned different security filters for
different projects. In cases where a user inherits one or more security filters
from any groups that they belong to, the security filters may need to be
merged. For information about how security filters are merged, see Merging
Security Filters, page 131.
3. From the Choose a project drop-down list, select the project that you
want to create a security filter for.
5. Select one:
l To create a new security filter, click New. The Security Filter Editor
opens.
6. In the left side of the Security Filter Manager, in the Security Filters
tab, browse to the security filter that you want to apply, and select that
security filter.
7. In the right side of the Security Filter Manager, select Security Filters.
8. Browse to the user or group that you want to apply the security filter to,
and select that user or group.
9. Click > to apply the selected security filter to the selected user or
group.
1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.
5. Click OK.
This behavior can be modified by using the top range attribute and bottom
range attribute properties.
The top and bottom range attributes can be set to the same level.
The examples below use a report with Category, Subcategory, and Item on
the rows, and three metrics in the columns:
l Revenue
The user executing this report has a security filter that restricts the
Subcategory to the TV element.
In the example report below, the user's security filter does not specify a top
or bottom range attribute. Item-level detail is displayed for only the items
within the TV category. The Subcategory Revenue is displayed for all items
within the TV subcategory. The Category Revenue is displayed for all items
in the Category, including items that are not part of the TV subcategory.
However, only the Electronics category is displayed. This illustrates how the
security filter Subcategory=TV is raised to the category level such that
Category=Electronics is the filter used with Category Revenue.
If a top range attribute is specified, then the user cannot see any data
outside of their security filter. This is true even at levels above the top level,
regardless of whether metrics with absolute filtering are used.
In the example report below, the user's security filter specifies a top range
attribute of Subcategory. Here, the Category Revenue is displayed for only
the items within the TV subcategory. The security filter Subcategory=TV is
not raised to the Category level, because Category is above the specified
top level of Subcategory.
If a bottom range attribute is specified, the user cannot see data aggregated
at a lower level than the bottom level.
In the example report below, the user's security filter specifies a bottom
range attribute of Subcategory. Item-level detail is not displayed, because
Item is a level below the bottom level of Subcategory. Instead, data for the
entire Subcategory is shown for each item. Data at the Subcategory level is
essentially the lowest level of granularity the user is allowed to see.
You assign top and bottom range attributes to security filters in the Security
Filter Manager. You can assign range attributes to a security filter for all
users, or to the security filters per user.
You can assign the same attribute to a security filter as a top and bottom
range attribute. A security filter can have multiple top or bottom range
attributes as long as they are from different hierarchies. You cannot assign
multiple attributes from the same hierarchy to either a top or bottom range.
However, you can assign attributes from the same hierarchy if one is a top
range attribute and one is a bottom range attribute. For example, you can
assign Quarter (from the Time hierarchy) and Subcategory (from the
Products hierarchy) as top range attributes, and Month (from the Time
hierarchy) and Subcategory as bottom range attributes.
To modify security filters, you must have the Use Security Filter Manager
privilege.
2. From the Choose a project drop-down list, select the project that you
want to modify security filters for.
4. Browse to the attribute that you want to set as a top or bottom range
attribute, and select that attribute.
5. To apply a top or bottom range attribute to a security filter for all users:
l Browse to the security filter that you want to apply the range attribute
to.
l Expand that security filter, and select either the Top range
attributes or Bottom range attributes folder.
l Click > to apply the selected attribute to the selected security filter.
l Browse to the user or group that you want to apply the range attribute
to.
l Expand that user or group and select the security filter that you want
to apply the range attribute to.
l Expand that security filter, and select either the Top range
attributes or Bottom range attributes folder.
l Click > to apply the selected attribute to the selected security filter for
the selected user or group.
7. Click OK.
For the examples in these sections, consider a project with the following
user groups and associated security filters:
You control how security filters are merged at the project level. You can
change the merge settings in the Project Configuration Editor for the
selected project, in the Security Filter category. After making any changes to
the security filter settings, you must restart Intelligence Server for those
changes to take effect.
Changing how security filters are merged does not automatically invalidate
any result caches created for users who have multiple security filters.
MicroStrategy recommends that you invalidate all result caches in a project
after changing how security filters are merged for that project. For
instructions on how to invalidate all result caches in a project, see
Managing Result Caches, page 1221.
By default, security filters are merged with an OR if they are related, and
with an AND if they are not related. That is, if two security filters are related,
the user can see all data available from either security filter. However, if the
security filters are not related, the user can see only the data available in
both security filters.
Two security filters are considered related if the attributes that they derive
from belong in the same hierarchy, such as Country and Region, or Year and
Month. In the example security filters given above, the Electronics, TV, and
Movies security filters are all related, and the Northeast security filter is not
related to any of the others.
Using this merge method, a user who is a member of both the Electronics
and Drama groups can see data from the Electronics category and the
Drama subcategory, as shown below:
A user who is a member of both the Movies and Drama groups can see data
from all subcategories in the Movies category, not just the Drama subcategory.
Data for the Movies category from outside the Northeast region is not
available to this user, nor is data for the Northeast region for other
categories.
The following examples show how the data engine treats related and
unrelated attributes.
l Related Attributes
l Unrelated Attributes
Related Attributes
Two security filters are considered related if the attributes that they derive
from belong in the same hierarchy with a one-to-one or one-to-many
relation, such as Manager and Call Center, Country and Region, or Year and
Month.
There are some advanced cases that fall into related or unrelated
categories. Related cases are sibling relations with a one-to-many
relationship to a common child/parent attribute, such as Region and
Distribution Center or MicroStrategy User and Distribution Center, where
respective security filters merge using OR.
Unrelated Attributes
Two security filters are considered unrelated if the relationship between the
attributes they derive from is many-to-many, such as Item and Catalog.
There are some advanced cases that fall into unrelated categories.
Unrelated cases are siblings that contain a join path that goes up and down
multiple times, such as Employee and Month of Year, where respective
security filters merge using AND, not OR. Notice how the join path may start
from Employee, all the way to Quarter, then come down to Month, and then
go up again to Month of Year.
You can also configure Intelligence Server to always merge security filters
with an AND, regardless of whether they are related.
As in the first method, a user who is a member of both the Movies and
Northeast groups would see only information about the Movies category in
the Northeast region.
A user who is a member of both the Movies and Drama groups would see
only data from the Drama subcategory of Movies, as shown below:
Data for the other subcategories of the Movies category is not available to this user.
To configure how security filters are merged, you must have the Configure
Project Basic privilege.
5. Click OK.
l Like other prompt objects, answers to system prompts are used to match
caches. Therefore, users do not share caches for reports that contain
different answers to system prompts.
If you are using LDAP authentication in your MicroStrategy system, you can
import LDAP attributes into your system as system prompts. You can then
use these system prompts in security filters, in the same way that you use
the User Login system prompt, as described above. For instructions on how
to import LDAP attributes as system prompts, see Manage LDAP
Authentication, page 189.
2. From the Choose a project drop-down list, select the project that you
want to create a security filter for.
4. Click New.
8. Type your custom expression in the Custom Expression area. You can
drag and drop a system prompt or other object to include it in the
custom expression. For detailed instructions on creating custom
expressions in filters, see the Filters section of the Advanced Reporting
Help.
9. When you have finished typing your custom expression, click Validate
to make sure that its syntax is correct.
10. Click Save and close. Type a name for the security filter and click
Save.
You can use a system prompt to apply a single security filter to all users in a
group. For example, you can create a security filter using the formula
User@ID=?[User Login] that displays information only for the element of
the User attribute that matches the user's login.
For a more complex example, you can restrict Managers so that they can
only view data on the employees that they supervise. Add the User Login
prompt to a security filter in the form Manager=?[User Login]. Then
assign the security filter to the Managers group. When a manager named
John Smith executes a report, the security filter generates SQL for the
condition Manager='John Smith' and only John Smith's employees' data
is returned.
You can also use the User Login system prompt to implement security filter
functionality at the report level, by defining a report filter with a system
prompt. For example, you can define a report filter with the User Login
prompt in the form Manager=?[User Login]. Any reports that use this
filter return data only to those users who are listed as Managers in the
system.
Security Views
Most databases provide a way to restrict access to data. For example, a
user may be able to access only certain tables, or they may be restricted to
certain rows and columns within a table. The subset of data available to a
user is called the user's security view.
Security views are often used when splitting fact tables by columns and
splitting fact tables by rows (discussed below) cannot be used. The rules
that determine which rows each user is allowed to see typically vary so much
that users cannot be separated into a manageable number of groups. In the
extreme, each user is allowed to see a different set of rows.
Note that restrictions on tables, or rows and columns within tables, may not
be directly evident to a user. However, they do affect the values displayed in
a report. You need to inform users as to which data they can access so that
they do not inadvertently run a report that yields misleading final results. For
example, if a user has access to only half of the sales information in the data
warehouse but runs a summary report on all sales, the summary reflects
only half of the sales. Reports do not indicate the database security view
used to generate the report.
Customer ID   Customer Address   Member Bank      Transaction Amount ($)   Current Balance ($)
123456        12 Elm St.         1st National     400.80                   40,450.00
908974        45 Crest Dr.       People's Bank    3,000.00                 100,009.00
562055        1 Ocean Blvd.      Eastern Credit   888.50                   1,000.00
You can split the table into separate tables (based on the value in Member
Bank), one for each bank: 1st National, Eastern Credit, and so on. In this
example, the table for 1st National bank would look like this:
Customer ID   Customer Address   Member Bank    Transaction Amount ($)   Current Balance ($)
123456        12 Elm St.         1st National   400.80                   40,450.00

Similarly, the table for Eastern Credit would contain only that bank's row:

Customer ID   Customer Address   Member Bank      Transaction Amount ($)   Current Balance ($)
562055        1 Ocean Blvd.      Eastern Credit   888.50                   1,000.00
In most RDBMSs, fact tables that are split by rows are invisible to system
users. Although there are many physical tables, the system "sees" one
logical fact table.
Splitting fact tables by rows for security reasons should not be confused
with the support that Intelligence Server provides for splitting fact tables by
rows (partitioning) for performance benefits. For more information about
partitioning, see the Advanced Reporting Help.
Each new table has the same primary key, but contains only a subset of the
fact columns in the original fact table. Splitting fact tables by columns allows
fact columns to be grouped based on user community. This makes security
administration simple because permissions are granted to entire tables
rather than to columns. For example, suppose a fact table contains the key
labeled Customer ID and fact columns as follows:
Customer ID   Customer Address   Member Bank   Transaction Amount ($)   Current Balance ($)
You can split the table into two tables, one for the marketing department and
one for the finance department. The marketing fact table would contain
everything except the financial fact columns, as follows:

Customer ID   Customer Address   Member Bank

The second table, used by the finance department, would contain only the
financial fact columns and none of the marketing-related information, as follows:
Customer ID   Transaction Amount ($)   Current Balance ($)
When users or groups are merged, the properties of the object being merged
are transferred to the destination user or group. Then the user or group that
is being merged is removed from the metadata and only the destination user
or group remains.
For example, you want to merge UserB into UserA. In this case UserA is
referred to as the destination user. In the wizard, this is shown in the image
below:
When you open the User Merge Wizard and select a project source, the
wizard locks that project configuration. Other users cannot change any
configuration objects until you close the wizard. For more information about
locking and unlocking projects, see Lock Projects, page 760.
You can also merge users in batches if you have a large number of users to
merge. Merging in batches can significantly speed up the merge process.
Batch-merging is an option in the User Merge Wizard. Click Help for details
on setting this option.
For example, if UserA has the Web user privilege and UserB has the Web
user and Web Administration privileges, after the merge, UserA has both
Web user and Web Administration privileges.
l If neither user has a security role for a project, the destination user does
not have a security role on that project.
l If the destination user has no security role for a project, the user inherits
the role from the user to be merged.
l If the destination user and the user to be merged have different security
roles, then the existing security role of the destination user is kept.
l If you are merging multiple users into a single destination user and each of
the users to be merged has a security role, then the destination user takes
the security role of the first user to be merged. If the destination user also
has a security role, the existing security role of the destination user is
kept.
4. Specify whether you want to have the wizard select the users/groups to
merge automatically (you can verify and correct the merge candidates),
or if you want to manually select them.
6. Select the users or groups to be merged and click > to move them to the
right-hand side.
7. Click Finish.
Ensure that the Administrator password has been changed. When you install
Intelligence Server, the Administrator account comes with a blank password
that must be changed.
Set up access controls for the database (see Controlling Access to Data,
page 113). Depending on your security requirements you may need to:
l Set up security roles for users and groups to assign basic privileges and
permissions
at both the server and the project levels.) Do not assign delete privileges
to the guest user account.
l Assign the Denied All permission to a special user or group so that, even if
permission is granted at another level, permission is still denied
l If you are working with sensitive or confidential data, enable the setting to
encrypt all communication between MicroStrategy Web server and
Intelligence Server.
l Admin.aspx
IDENTIFYING USERS: AUTHENTICATION
l Import your user database into the MicroStrategy metadata, or link your
users' accounts in your user database with their accounts in
MicroStrategy. For example, you can import users in your LDAP directory
into the MicroStrategy metadata, and ensure that their LDAP credentials
are linked to the corresponding MicroStrategy users. Depending on the
authentication mode you choose, the following options are available:
l You can import their accounts from an LDAP directory, or from a text
file. For the steps to import users, refer to the System Administration
Help in Developer.
l You can use a Command Manager script to edit the user information in
the metadata, and link the users' MicroStrategy accounts to their
accounts in your user database.
l For all project sources that the above applications connect to.
Authentication Modes
Several authentication modes are supported in the MicroStrategy
environment. The main difference between the modes is the authentication
authority used by each mode. The authentication authority is the system that
verifies and accepts the login/password credentials provided by the user.
3. On the Advanced tab, select the appropriate option for the default
authentication mode that you want to use.
4. Click OK twice.
For more information, see the MicroStrategy for Office page in the
Readme and the MicroStrategy for Office Help.
You may decide to map several users to the same MicroStrategy user
account. These users would essentially share a common login to the system.
Consider doing this only if you have users who do not need to create their
own individual objects, and if you do not need to monitor and identify each
individual user uniquely.
By default, all users connect to the data warehouse using one RDBMS login
ID, although you can change this using Connection Mapping. For more
information, see Connecting to the Data Warehouse, page 22. In addition,
standard authentication is the only authentication mode that allows a user or
system administrator to change or expire MicroStrategy passwords.
Password Policy
A valid password is a password that conforms to any specifications you may
have set. You can define the following characteristics of passwords:
l Whether a user must change their password when they first log into
MicroStrategy
l The number of past passwords that the system remembers, so that users
cannot reuse a recent password
l Whether a user can include their login and/or name in the password
l Whether or not characters rotated from the last password are allowed in
new passwords
l The minimum number of special characters, that is, symbols, that the
password must contain
The expiration settings are made in the User Editor and can be set for each
individual user. The complexity and remembered password settings are
made in the Security Policy Settings dialog box, and affect all users.
This dynamically created guest user is not the same as the "Guest" user
which is visible in the User Manager.
By default, guest users have no privileges; you must assign this group any
privileges that you want the guest users to have. Privileges that are grayed
out in the User Editor are not available by default to a guest user. Other than
the unavailable privileges, you can determine what the guest user can and
cannot do by modifying the privileges of the Public/Guest user group and by
granting or denying it access to objects. For more information, see
Controlling Access to Functionality: Privileges, page 101 and Controlling
Access to Objects: Permissions, page 89.
All objects created by guest users must be saved to public folders and are
available to all guest users. Guest users may use the History List, but their
messages in the History List are not saved and are purged when the guest
users log out.
By default, anonymous access is disabled at both the server and the project
levels.
1. In Developer, log into the project source with a user that has
administrative privileges.
7. Click OK.
You can also set up MicroStrategy Office to use LDAP authentication. For
information, see the MicroStrategy for Office Help.
3. The authentication user searches the LDAP directory for the user who
is logging in via Developer or MicroStrategy Web, based on the DN of
the user logging in.
4. If this search successfully locates the user who is logging in, the user's
LDAP group information is retrieved.
l The connection details for your LDAP server. The information required is
as follows:
l Obtain a valid certificate from your LDAP server and save it on the
machine where Intelligence Server is installed.
l The user name and password of an LDAP user who can search the LDAP
directory. This user is called the authentication user, and is used by the
Intelligence Server to connect to the LDAP server. Typically, this user
has administrative privileges for your LDAP server.
l Details of your LDAP SDK. The LDAP SDK is a set of connectivity file
libraries (DLLs) that MicroStrategy uses to communicate with the LDAP
server. For information on the requirements for your LDAP SDK, and for
steps to set up the SDK, see Setting Up LDAP SDK Connectivity, page
167.
l Determine whether you want to use connection pooling with your LDAP
server. With connection pooling, you can reuse an open connection to the
LDAP server for subsequent operations. The connection to the LDAP
server remains open even when the connection is not processing any
operations (also known as pooling). This setting can improve performance
by removing the processing time required to open and close a connection
to the LDAP server for each operation.
l Determine whether you want to import LDAP user and group information
into the MicroStrategy metadata. A MicroStrategy group is created for
each LDAP group. The following options are available:
l Import users and groups into MicroStrategy: If you choose this option, a
MicroStrategy user is created for each user in your LDAP directory.
Users can then be assigned additional privileges and permissions in
MicroStrategy.
l If you choose to import LDAP user and group information into the
MicroStrategy metadata, determine the following:
l Determine whether you want to import LDAP user and group information
into the MicroStrategy metadata when users log in, and whether the
information is synchronized every time users log in.
l Determine whether you want to import LDAP user and group information
into the MicroStrategy metadata in batches, and whether you want the
information to be synchronized according to a schedule.
l If you want to import LDAP user and group information in batches, you
must provide search filters to import the users and the groups. For
example, if your organization has 1,000 users in the LDAP directory, of
whom 150 need to use MicroStrategy, you must provide a search filter
that imports the 150 users into the MicroStrategy metadata. For
information on defining search filters, see Defining LDAP Search Filters
to Verify and Import Users and Groups at Login, page 170.
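For illustration only, if those 150 users are all members of a single directory group, the user search filter for the batch import might look like the following. The group DN shown is hypothetical, and the memberOf attribute is specific to directories such as Microsoft Active Directory; substitute the object classes and attributes that your LDAP schema actually uses:

(&(objectclass=person) (memberOf=cn=MicroStrategy Users,ou=Groups,dc=example,dc=com))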
To understand how this setting affects the way the users and groups are
imported into MicroStrategy, see the following diagram:
If you choose to import two nested groups when MicroStrategy imports LDAP
groups, the groups associated with each user are imported, up to two levels
above the user. In this case, for User 1, the groups Domestic and Marketing
would be imported. For User 3, Developers and Employees would be
imported.
l The Trusted Authenticated Request User ID for a 3rd party user. When a
3rd party user logs in, this Trusted Authenticated Request User ID will
be used to find the linked MicroStrategy user.
You can create security filters based on the LDAP attributes that you
import. For example, you import the LDAP attribute countryName,
create a security filter based on that LDAP attribute, and then you assign
that security filter to all LDAP users. Now, when a user from Brazil views
a report that breaks down sales revenue by country, they only see the
sales data for Brazil.
Once you have collected the above information, you can use the LDAP
Connectivity Wizard to set up your LDAP connection. The steps are
described in Setting up LDAP Authentication in MicroStrategy Web, Library,
and Mobile, page 185.
Intelligence Server requires that the version of the LDAP SDK you are using
supports the following:
l LDAP v. 3
l SSL connections
For LDAP to work properly with Intelligence Server, the 64-bit LDAP
libraries must be used.
The following image shows how the behavior of the various elements in an LDAP
configuration affects other elements in the configuration.
1. The behavior between Intelligence Server and the LDAP SDK varies
slightly depending on the LDAP SDK used. The Readme provides an
overview of these behaviors.
2. The behavior between the LDAP SDK and the LDAP server is identical,
no matter which LDAP SDK is used.
MicroStrategy recommends that you use the LDAP SDK vendor that
corresponds to the operating system vendor on which Intelligence Server is
running in your environment. Specific recommendations are listed in the
Readme, with the latest set of certified and supported LDAP SDKs,
references to MicroStrategy Tech Notes with version-specific details, and
SDK download location information.
1. Download the LDAP SDK DLLs onto the machine where Intelligence
Server is installed.
l Linux environment: Modify the LDAP.sh file located in the env folder
of your MicroStrategy installation to point to the location of the LDAP
SDK libraries. The detailed procedure is described in the procedure
To Add the LDAP SDK Path to the Environment Variable in UNIX,
page 169 below.
This procedure assumes you have installed an LDAP SDK. For high-level
steps to install an LDAP SDK, see High-Level Steps to Install the LDAP SDK
DLLs, page 169.
3. Open the LDAP.sh file in a text editor and add the library path to the
MSTR_LDAP_LIBRARY_PATH environment variable. For example:
MSTR_LDAP_LIBRARY_PATH='/path/LDAP/library'
It is recommended that you store all libraries in the same path. If you
have several paths, you can add all paths to the MSTR_LDAP_LIBRARY_PATH
environment variable and separate them by a colon (:). For example:
MSTR_LDAP_LIBRARY_PATH='/path/LDAP/library:/path/LDAP/library2'
4. Remove Write privileges from the LDAP.sh file by typing the command
chmod a-w LDAP.sh and then pressing Enter.
Additionally, you can specify search filters, which help narrow down the
users and groups to search.
The following sections describe the search settings that you can configure:
l Highest Level to Start an LDAP Search: Search Root, page 171 provides
examples of these parameters as well as additional details of each
parameter and some LDAP server-specific notes.
The following table, based on the diagram above, provides common search
scenarios for users to be imported into MicroStrategy. The search root is the
root to be defined in MicroStrategy for the LDAP directory.
l To include all users and groups from Operations, Consultants, and
Technology, set the search root to Departments, with an exclusion clause in
the User/Group search filter to exclude users who belong to Marketing and
Administration.
l To include all users and groups from Technology and Operations but not
Consultants, set the search root to Departments, with an exclusion clause in
the User/Group search filter to exclude users who belong to Consultants.
For some LDAP vendors, the search root cannot be the LDAP tree's root. For
example, both Microsoft Active Directory and Sun ONE require a search to
begin from the domain controller RDN (dc). The image below shows an
example of this type of RDN, where "dc=sales, dc=microstrategy, dc=com":
Once Intelligence Server locates the user in the LDAP directory, the search
returns the user's Distinguished Name, and the password entered at user
login is verified against the LDAP directory. Intelligence Server uses the
authentication user to access, search in, and retrieve the information from
the LDAP directory.
Using the user's Distinguished Name, Intelligence Server searches for the
LDAP groups that the user is a member of. You must enter the group search
filter parameters separately from the user search filter parameters (see
Finding Groups: Group Search Filters, page 173).
l #LDAP_LOGIN# can be used in this filter to represent the LDAP user login.
Depending on your LDAP server vendor and your LDAP tree structure, you
may need to try different attributes within the search filter syntax above. For
example, (&(objectclass=person) (uniqueID=#LDAP_LOGIN#)),
where uniqueID is the LDAP attribute name your company uses for
authentication.
The group search filter is generally in one of the following forms (or the
following forms may be combined, using a pipe | symbol to separate the
forms):
l (&(objectclass=LDAP_GROUP_OBJECT_CLASS) (LDAP_MEMBER_LOGIN_
ATTR=#LDAP_LOGIN#))
l (&(objectclass=LDAP_GROUP_OBJECT_CLASS) (LDAP_MEMBER_DN_
ATTR=#LDAP_DN#))
l (&(objectclass=LDAP_GROUP_OBJECT_CLASS) (gidNumber=#LDAP_
GIDNUMBER#))
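As a purely illustrative instantiation of the second form, assuming a directory in which groups use the group object class and record member DNs in a member attribute (as Microsoft Active Directory does), the group search filter might be:

(&(objectclass=group) (member=#LDAP_DN#))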
The group search filter forms listed above have the following placeholders:
The connection to the LDAP server remains open even when the connection is not processing any operations
(also known as pooling). This setting can improve performance by removing
the processing time required to open and close a connection to the LDAP
server for each operation.
You may have multiple LDAP servers which work together as a cluster of
LDAP servers.
l Whether the LDAP password is incorrect, has been locked out, or has
expired
l Whether the LDAP user account has been disabled, or has been identified
as an intruder and is locked out
If MicroStrategy can verify that none of these restrictions are in effect for
this user account, MicroStrategy performs an LDAP bind, and successfully
authenticates the user logging in. This is the default behavior for users and
groups that have been imported into MicroStrategy.
You can choose to have MicroStrategy verify only the accuracy of the user's
password with which the user logged in, and not check for additional
restrictions on the password or user account. To support password
comparison authentication, your LDAP server must also be configured to
allow password comparison only.
A user with user login UserA and password PassA logs in to MicroStrategy at
9:00 A.M. and creates a new report. The user schedules the report to run at
3:00 P.M. later that day. Since there is no report cache, the report will be
executed against the database. At noon, an administrator changes UserA's
password to PassB. UserA does not log back into MicroStrategy, and at 3:00
P.M. the scheduled report is run with the credentials UserA and PassA, which
are passed to the database. Since these credentials are now invalid, the
scheduled report execution fails.
To prevent this problem, schedule password changes for a time when users
are unlikely to run scheduled reports. In the case of users using database
passthrough execution who regularly run scheduled reports, inform them to
reschedule all reports if their passwords have been changed.
Connection Type   Benefits   Considerations
The options for importing users into MicroStrategy are described in detail in
the following sections:
You can modify your import settings at any time, for example, if you choose
not to import users initially, but want to import them at some point in the
future. The steps to modify your LDAP settings are described in Manage
LDAP Authentication, page 189.
You can choose to import LDAP users and groups at login, in a batch
process, or a combination of the two. Imported users are automatically
members of MicroStrategy's LDAP Users group, and are assigned the
access control list (ACL) and privileges of that group. To assign different
ACLs or privileges to a user, you can move the user to another
MicroStrategy user group.
When an LDAP user is imported into MicroStrategy, you can also choose to
import that user's LDAP groups. If a user belongs to more than one group,
all the user's groups are imported and created in the metadata. Imported
LDAP groups are created within MicroStrategy's LDAP Users folder and in
MicroStrategy's User Manager.
LDAP users and LDAP groups are all created within the MicroStrategy LDAP
Users group at the same level. While the LDAP relationship between a user
and any associated groups exists in the MicroStrategy metadata, the
relationship is not visually represented in Developer. For example, looking
in the LDAP Users folder in MicroStrategy immediately after an import or
synchronization, you might see the following list of imported LDAP users and
groups:
Removing a user from the LDAP directory does not affect the user's
presence in the MicroStrategy metadata. Deleted LDAP users are not
automatically deleted from the MicroStrategy metadata during
synchronization. You can revoke a user's privileges in MicroStrategy, or
remove the user manually.
The link between an LDAP user or group and the MicroStrategy user or
group is maintained in the MicroStrategy metadata in the form of a shared
Distinguished Name.
The user's or group's LDAP privileges are not linked with the MicroStrategy
user. In MicroStrategy, a linked LDAP user or group receives the privileges
of the MicroStrategy user or group to which it is linked.
Because guest users are not present in the metadata, there are certain
actions these users cannot perform in MicroStrategy, even if the associated
privileges and permissions are explicitly assigned. Examples include most
administrative actions.
l The user does not have a History List, because the user is not physically
present in the metadata.
l The User Connection monitor records the LDAP user's user name.
l Intelligence Server statistics record the session information under the user
name LDAP USER.
l If an LDAP user or group has been given new membership to a group that
has not been imported or linked to a group in MicroStrategy and import
options are turned off, the group cannot be imported into MicroStrategy
and thus cannot apply its permissions in MicroStrategy.
l When users and groups are deleted from the LDAP directory, the
corresponding MicroStrategy users and groups that have been imported
from the LDAP directory remain in the MicroStrategy metadata. You can
revoke users' and groups' privileges in MicroStrategy and remove the
users and groups manually.
Consider a user named Joe Doe who belongs to a particular group, Sales,
when he is imported into MicroStrategy. Later, he is moved to a different
group, Marketing, in the LDAP directory. The LDAP user Joe Doe and LDAP
groups Sales and Marketing have been imported into MicroStrategy. Finally,
the user name for Joe Doe is changed to Joseph Doe, and the group name
for Marketing is changed to MarketingLDAP.
The images below show a sample LDAP directory with user Joe Doe being
moved within the LDAP directory from Sales to Marketing.
The following table describes what happens with users and groups in
MicroStrategy if users, groups, or both users and groups are synchronized.
l You have collected the information for your LDAP server, and made
decisions regarding the LDAP authentication methods you want to use, as
described in Checklist: Information Required for Connecting Your LDAP
Server to MicroStrategy, page 162
l If you want Intelligence server to access your LDAP server over a secure
SSL connection, you must do the following:
1. Obtain a valid certificate from your LDAP server and save it on the
machine where Intelligence server is installed. The steps to obtain the
certificate depend on your LDAP vendor, and the operating system that
your LDAP server runs on. For specific steps, refer to the
documentation for your LDAP vendor.
l Port: The network port that the LDAP server uses. For clear text
connections, the default value is 389. If you want Intelligence server
to access your LDAP over an encrypted SSL connection, the default
value is 636.
l Novell: Provide the path to the certificate, including the file name.
l IBM: Use Java GSKit 7 to import the certificate, and provide the key
database name with full path, starting with the home directory.
l Open LDAP: Provide the path to the directory that contains the CA
certificate file cacert.pem, the server certificate file
servercrt.pem, and the server certificate key file
serverkey.pem.
7. Click Next.
10. When you have entered all the information, click Finish to exit the
LDAP Connectivity Wizard. You are prompted to test the LDAP
connection. It is recommended that you test the connection to catch any
errors with the connection parameters you have provided.
1. In the Folder List, right-click the project source, and select Modify
Project Source.
3. Click OK.
3. In the Login area, for LDAP Authentication, select the Enabled check
box.
5. Click Save.
1. Launch the Library Admin page by entering the following URL in your
web browser
http://<FQDN>:<port>/MicroStrategyLibrary/admin
2. On the Library Web Server tab, select LDAP from the list of available
Authentication Modes.
3. Click Save.
l If you want to modify the settings for importing users into MicroStrategy,
for example, if you initially chose not to import users, and now want to
import users and groups, see Importing LDAP Users and Groups into
MicroStrategy, page 190.
l Depending on the way your LDAP directory is configured, you can import
additional LDAP attributes for users, for example, a countryCode
attribute, indicating the user's location. These additional LDAP attributes
can be used to create security filters for users, such as displaying data
that is relevant to the user's country. For information on creating these
security filters, see Using LDAP Attributes in Security Filters, page 196.
3. Expand the LDAP category. The LDAP settings are displayed. You can
modify the following:
l Your LDAP server settings, such as the machine name, port, and so
on.
l Your LDAP SDK information, such as the location of the LDAP SDK
DLL files.
l The LDAP search filters that Intelligence Server uses to find and
authenticate users.
l Importing users and groups in batches: The list of users and groups is
returned from user and group searches on your LDAP directory.
MicroStrategy users and groups are created in the MicroStrategy metadata
for all imported LDAP users and groups.
l For information on setting up user and group import options, see Importing
Users and Groups into MicroStrategy, page 191.
l Once you have set up user and group import options, you can import
additional LDAP information, such as users' email addresses, or specific
LDAP attributes. For steps, see Importing Users' Email Addresses, page
193.
You can choose to import users and their associated groups when a user
logs in to MicroStrategy for the first time.
l Ensure that you have reviewed the information and made decisions regarding
your organization's policy on importing and synchronizing user information,
described in the following sections:
l If you want to import users and groups in batches, you must define the LDAP
search filters to return lists of users and groups to import into MicroStrategy.
For information on defining search filters, see Checklist: Information
Required for Connecting Your LDAP Server to MicroStrategy, page 162.
3. Expand the LDAP category, then expand Import, and then select
Import/Synchronize.
4. If you want to import user and group information when users log in, in
the Import/Synchronize at Login area, do the following:
l To import users in batches, select Import Users. You must also enter
a user search filter in the Enter search filter for importing list of
6. To modify the way that LDAP user and group information is imported,
for example, to import group names as the LDAP distinguished name,
under the LDAP category, under Import, click User/Group.
7. Click OK.
Once a user or group is created in MicroStrategy, the users are given their
own inboxes and personal folders. Additionally, you can do the following:
l Import users' email addresses. For steps, see Importing Users' Email
Addresses, page 193.
l Assign privileges and security settings that control what a user can access
in MicroStrategy. For information on assigning security settings after
users are imported, see User Privileges and Security Settings after Import,
page 194.
If you have a license for MicroStrategy Distribution Services, then when you
import LDAP users, either in a batch or at login, you can import these email
addresses as contacts associated with those users.
3. Expand the LDAP category, then expand Import, and select Options.
6. From the Device drop-down list, select the email device that the email
addresses are to be associated with.
7. Click OK.
select the group and right-click the user) and select Edit. The Project
Access tab displays all privileges for each project in the project source.
The process of synchronizing users and groups can modify which groups a
user belongs to, and thus modify the user's privileges and security settings.
6. Click OK.
SSO credentials to their LDAP user names, and import the LDAP user and
group information into MicroStrategy. For information about configuring a
single sign-on system, see Enable Single Sign-On Authentication, page 198.
Depending on the SSO authentication system you are using, refer to one of
the following sections for steps:
Once you have created system prompts based on your LDAP attributes, you
can use those system prompts in security filters to restrict the data that your
users can see based on their LDAP attributes. For information about using
system prompts in security filters, including instructions, and for general
information about security filters, see Restricting Access to Data: Security
Filters, page 121.
3. Expand the LDAP category, then expand the Import category, and then
select Attributes.
4. From the Select LDAP Attributes drop-down list, select the LDAP
attribute to import.
5. From the Data Type drop-down list, select the data type of that
attribute.
6. Click Add.
7. Click OK.
By default, an LDAP user can log in to a project source even if the LDAP
attributes that are used in system prompts are not defined for that user. To
increase the security of the system, you can prevent LDAP users from
logging in to a project source unless all LDAP attributes that are used in
system prompts are defined for that user.
When you select this option, you prevent all LDAP users from logging in to
the project source if they do not have all the required LDAP attributes. This
affects all users using LDAP authentication, and also any users using
Windows, Trusted, or Integrated authentication if those authentication
systems have been configured to use LDAP. For example, if you are using
Trusted authentication with a SiteMinder single sign-on system, and
SiteMinder is configured to use an LDAP directory, this option prevents
SiteMinder users from logging in if they do not have all the required LDAP
attributes.
l If your system uses multiple LDAP servers, make sure that all LDAP
attributes used by Intelligence Server are defined on all LDAP servers. If
a required LDAP attribute is defined on LDAP server A and not on LDAP
server B, and the User login fails if LDAP attribute value is not read
from the LDAP server checkbox is selected, users from LDAP server B
will not be able to log in to MicroStrategy.
To Only Allow Users with All Required LDAP Attributes to Log In to the
System
3. Expand the LDAP category, then expand the Import category, and then
select Attributes.
4. Select the User logon fails if LDAP attribute value is not read from
the LDAP server checkbox.
5. Click OK.
Troubleshooting
There may be situations where you can encounter problems or errors while
trying to integrate MicroStrategy with your LDAP directory. For
troubleshooting information and procedures, see Troubleshooting LDAP
Authentication, page 2918.
l The object names correspond to the bean IDs in the configuration XML
files. You must view the configuration files to identify which Java classes
define those beans.
2. The multi-mode login filter recognizes this is a SAML login request and
delegates the work to the mstrMultiModeFilter SAML login filter
bean.
Bean: mstrSamlEntryPoint
Class: com.microstrategy.auth.saml.authnrequest.SAMLEntryPointWrapper
Description: A subclass of LoginUrlAuthenticationEntryPoint that performs a redirect to where it is set in the constructor by the String redirectFilterUrl parameter.

Bean: mstrSamlAuthnRequestFilter
Class: org.springframework.security.saml2.provider.service.servlet.filter.Saml2WebSsoAuthenticationRequestFilter
Description: By default, this filter responds to the /saml/authenticate/** endpoint and the result is a redirect that includes a SAMLRequest parameter containing the signed, deflated, and encoded <saml2:AuthnRequest>.
Customization
The constructor arguments must be exactly the same as the original if you
do not customize them.
// Imports shown for clarity; the exact packages depend on the Spring Security
// version bundled with your deployment.
import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.security.saml2.provider.service.authentication.Saml2AuthenticationRequestFactory;
import org.springframework.security.saml2.provider.service.servlet.filter.Saml2WebSsoAuthenticationRequestFilter;
import org.springframework.security.saml2.provider.service.web.Saml2AuthenticationRequestContextResolver;

public class MySAMLAuthenticationRequestFilter extends Saml2WebSsoAuthenticationRequestFilter {

    public MySAMLAuthenticationRequestFilter(
            Saml2AuthenticationRequestContextResolver authenticationRequestContextResolver,
            Saml2AuthenticationRequestFactory authenticationRequestFactory) {
        // Pass the original arguments through to the Spring Security filter
        super(authenticationRequestContextResolver, authenticationRequestFactory);
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
            FilterChain filterChain) throws ServletException, IOException {
        // >>> Your logic here
        super.doFilterInternal(request, response, filterChain);
    }
}
The two constructor arguments and property must be exactly the same as
the original if you don't customize them.
<bean id="mstrSamlAuthnRequestFilter" class="MySAMLAuthenticationRequestFilter">
    <constructor-arg ref="samlAuthenticationRequestContextResolver"/>
    <constructor-arg ref="samlAuthenticationRequestFactory"/>
    <property name="redirectMatcher" ref="samlRedirectMatcher"/>
</bean>
5. The SAML processing filter calls the login success handler, which
redirects the browser to the original request.
Bean: mstrSamlProcessingFilter
Class: com.microstrategy.auth.saml.response.SAMLProcessingFilter
Description: This is the core filter that is responsible for handling the SAML login response (SAML assertion) that comes from the IDP server.

Bean: samlAuthenticationProvider
Class: com.microstrategy.auth.saml.response.SAMLAuthenticationProviderWrapper
Description: This bean is responsible for authenticating a user based on information extracted from the SAML assertion.

Bean: samlIserverCredentialProvider
Class: com.microstrategy.auth.saml.response.SAMLIServerCredentialsProvider
Description: This bean is responsible for creating and populating an IServerCredentials instance, defining the credentials for creating Intelligence Server sessions. The IServerCredentials object is passed to the Session Manager's login method, which creates the Intelligence Server session.
Customization
The following content uses the real class name, instead of the bean name.
You can find the bean name in SAMLConfig.xml.
2. Override the convert method and call super.convert, which can get
Saml2AuthenticationToken, a subclass of
SAMLAuthenticationToken.
3. Extract the information from the raw request in line 11 and return an
instance that is a subclass of Saml2AuthenticationToken.
public class MySAMLConverter extends SAMLAuthenticationTokenConverter {
    public MySAMLConverter(Saml2AuthenticationTokenConverter delegate) {
        super(delegate);
    }

    @Override
    public Saml2AuthenticationToken convert(HttpServletRequest request) {
        Saml2AuthenticationToken samlAuthenticationToken = super.convert(request);
        // >>> Extract info from request that you are interested in
        return samlAuthenticationToken;
    }
}
<bean
id=
"samlAuthenticationConverter"
class="com.microstrategy.custom.MySAMLConverter">
<constructor-arg
ref="saml2AuthenticationConverter"/>
</bean>
You can customize this login process at the following three specific time
points, as illustrated in the diagram above:
The two constructor arguments must be exactly the same as the original if
you don't customize them.
<bean
id=
"samlAuthenticationProvider"
class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
<property name="assertionValidator"
ref="samlAssertionValidator"/>
<property name="responseAuthenticationConverter"
ref="samlResponseAuthenticationConverter"/>
</bean>
@Override
public Authentication authenticate(Authentication authentication) throws AuthenticationException {
    // >>> Your logic here
    return super.authenticate(authentication);
}
The two constructor arguments must be exactly the same as the original if
you don't customize them.
<bean
id=
"samlAuthenticationProvider"
class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
<property name="assertionValidator"
ref="samlAssertionValidator"/>
<property name="responseAuthenticationConverter"
ref="samlResponseAuthenticationConverter"/>
</bean>
The two constructor arguments must be exactly the same as the original if
you don't customize them.
<bean
id=
"samlAuthenticationProvider"
class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
<property name="assertionValidator"
ref="samlAssertionValidator"/>
<property name="responseAuthenticationConverter"
ref="samlResponseAuthenticationConverter"/>
</bean>
It is not uncommon for your web and IDP servers to have system clocks that
are not perfectly synchronized. For that reason, you can configure the
default SAMLAssertionValidator assertion validator with some
tolerance.
<bean
id=
"samlAssertionValidator"
class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
<property name="responseSkew" value="300"/>
</bean>
<bean
id=
"samlAssertionValidator"
class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
<property name="maxAuthenticationAge" value="2592000"/><!-- 30 days
-->
</bean>
The new spring SAML framework performs minimal validation on SAML 2.0
assertions. After verifying the signature, it:
2. Perform your own validation. For example, you can use OpenSAML's
OneTimeUseConditionValidator to also validate a <OneTimeUse>
condition.
<bean
id=
"samlAssertionValidator"
class="com.microstrategy.custom.MySAMLAssertionValidator">
<property name="maxAuthenticationAge" value="2592000"/><!-- 30
days -->
<property name="responseSkew" value="300"/>
</bean>
To set properties, see how to set a clock skew or authentication age for
timestamp validation.
<bean
id=
"samlResponseAuthenticationConverter"
class="com.microstrategy.custom.MyResponseAuthenticationConverter"/>
l The object names correspond to the bean IDs in the configuration XML
files. You must view the configuration files to identify which Java classes
define those beans.
Bean Description

l mstrSamlEntryPoint (com.microstrategy.auth.saml.authnrequest.SAMLEntryPointWrapper): A subclass of LoginUrlAuthenticationEntryPoint that performs a redirect to the location set in the constructor by the String redirectFilterUrl parameter.

l mstrSamlAuthnRequestFilter (org.springframework.security.saml2.provider.service.servlet.filter.Saml2WebSsoAuthenticationRequestFilter): By default, this filter responds to the /saml/authenticate/** endpoint and the result is a redirect that includes a SAMLRequest parameter containing the signed, deflated, and encoded <saml2:AuthnRequest>.
Customization
public class MySAMLAuthenticationRequestFilter extends Saml2WebSsoAuthenticationRequestFilter {
    public MySAMLAuthenticationRequestFilter(Saml2AuthenticationRequestContextResolver authenticationRequestContextResolver, Saml2AuthenticationRequestFactory authenticationRequestFactory) {
        super(authenticationRequestContextResolver, authenticationRequestFactory);
    }

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException {
        //>>> Your logic here
        super.doFilterInternal(request, response, filterChain);
    }
}
The two constructor arguments and property must be exactly the same as
the original if you don't do any customizations to them.
<bean
id=
"mstrSamlAuthnRequestFilter" class="MySAMLAuthenticationRequestFilter">
<constructor-arg
ref="samlAuthenticationRequestContextResolver"/>
<constructor-arg
ref="samlAuthenticationRequestFactory"/>
<property name="redirectMatcher"
ref="samlRedirectMatcher"/>
</bean>
5. The SAML processing filter calls the login success handler, which
redirects the browser to the original request.
Bean Description

l mstrSamlProcessingFilter (com.microstrategy.auth.saml.response.SAMLProcessingFilter): This is the core filter that is responsible for handling the SAML login response (SAML assertion) that comes from the IDP server.

l samlAuthenticationProvider (com.microstrategy.auth.saml.response.SAMLAuthenticationProviderWrapper): This bean is responsible for authenticating a user based on information extracted from the SAML assertion.

l userDetails (com.microstrategy.auth.saml.SAMLUserDetailsServiceImpl): This bean is responsible for creating and populating an IServerCredentials instance, defining the credentials for creating Intelligence server sessions. The IServerCredentials object is saved to the HTTP session, which is used to create the Intelligence server session for future requests.
Customization

For clarity, the following content uses the real class name instead of the bean name. You can find the bean name in SAMLConfig.xml.
2. Override the convert method and call super.convert, which can get
Saml2AuthenticationToken, a subclass of
SAMLAuthenticationToken.
3. Extract the information from the raw request in line 11 and return an
instance that is a subclass of Saml2AuthenticationToken:
public class MySAMLConverter extends SAMLAuthenticationTokenConverter {
    public MySAMLConverter(Saml2AuthenticationTokenConverter delegate) {
        super(delegate);
    }

    @Override
    public Saml2AuthenticationToken convert(HttpServletRequest request) {
        Saml2AuthenticationToken samlAuthenticationToken = super.convert(request);
        // >>> Extract info from request that you are interested in
        return samlAuthenticationToken;
    }
}
<bean
id=
"samlAuthenticationConverter"
class="com.microstrategy.custom.MySAMLConverter">
<constructor-arg
ref="saml2AuthenticationConverter"/>
</bean>
You can customize this login process at the following three specific time
points, as illustrated in the diagram above:
The two constructor arguments must be exactly the same as the original if
you don't customize them.
<bean
id=
"samlAuthenticationProvider"
class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
<property name="assertionValidator"
ref="samlAssertionValidator"/>
<property name="responseAuthenticationConverter"
ref="samlResponseAuthenticationConverter"/>
</bean>
The two constructor arguments must be exactly the same as the original if
you don't customize them.
<bean
id=
"samlAuthenticationProvider"
class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
<property name="assertionValidator"
ref="samlAssertionValidator"/>
<property name="responseAuthenticationConverter"
ref="samlResponseAuthenticationConverter"/>
</bean>
The two constructor arguments must be exactly the same as the original if
you don't customize them.
<bean
id=
"mstrSamlProcessingFilter"
class="com.microstrategy.custom.MySAMLProcessingFilterWrapper">
<constructor-arg ref="samlAuthenticationConverter" />
<property name="authenticationManager" ref="authenticationManager" />
<property name="authenticationSuccessHandler"
ref="successRedirectHandler" />
<property name="authenticationFailureHandler"
ref="failureRedirectHandler" />
</bean>
It is not uncommon for your web and IDP servers to have system clocks that
are not perfectly synchronized. For that reason, you can configure the
default SAMLAssertionValidator assertion validator with some
tolerance.
<bean
id=
"samlAssertionValidator"
class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
<property name="responseSkew" value="300"/>
</bean>
Your IdP may allow users to stay authenticated for longer periods than this, and you may need to change the default value.
<bean
id=
"samlAssertionValidator"
class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
<property name="maxAuthenticationAge" value="2592000"/><!-- 30 days
-->
</bean>
The new spring SAML framework performs minimal validation on SAML 2.0
assertions. After verifying the signature, it:
2. Perform your own validation. For example, you can use OpenSAML's
OneTimeUseConditionValidator to also validate a <OneTimeUse>
condition.
<bean
id=
"samlAssertionValidator"
class="com.microstrategy.custom.MySAMLAssertionValidator">
<property name="maxAuthenticationAge" value="2592000"/><!-- 30
days -->
<property name="responseSkew" value="300"/>
</bean>
To set properties, see how to set a clock skew or authentication age for
timestamp validation.
<bean
id=
"samlResponseAuthenticationConverter"
class="com.microstrategy.custom.MyResponseAuthenticationConverter"/>
l IDPMetadata.xml
l SPMetadata.xml
l SamlKeystore.jks
l MstrSamlConfig.xml
2. Restore the files listed above to the same location after upgrading.
auth.modes.available=1048576
auth.modes.default=1048576
auth.admin.authMethod=2
l org.springframework.security.saml.SAMLCredential is replaced by com.microstrategy.auth.saml.response.SAMLCredential. This class is exactly the same as the previous one.

l org.springframework.security.saml.SAMLCredential is replaced by com.microstrategy.auth.saml.SAMLUserDetailsService. An extra loadSAMLProperties method is added. This method is called in SAMLRelyingPartyRegistration's constructor when the app is launched. Subclasses should take advantage of the SAMLConfig instance and set internal properties.

l org.springframework.security.providers.ExpiringUsernameAuthenticationToken is replaced by com.microstrategy.auth.saml.response.SAMLAuthentication. This class is a replacement of the previous authentication token, which has the same properties as the old one.
If you are using utility classes from v2.6.7, you must migrate them to their parity classes in v4.1.0.
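For example, a customization that used the old credential class would only need its import changed, based on the parity list above (an illustrative sketch, not an exhaustive list of required changes):

// Before (v2.6.7)
import org.springframework.security.saml.SAMLCredential;

// After (v4.1.0)
import com.microstrategy.auth.saml.response.SAMLCredential;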
<bean
id=
"samlAssertionValidator"
class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
<property name="maxAuthenticationAge"
value="2592000"/><!-- 30 days -->
<property name="responseSkew" value="300"/>
</bean>
l IDPMetadata.xml
l SPMetadata.xml
l SamlKeystore.jks
l MstrSamlConfig.xml
2. Restore the files listed above to the same location after upgrading.
defaultloginmode=1048576
enableloginmode=1048576
springAdminAuthMethod=2
framework. The following table contains some useful parity classes for
your upgrade. If you are using them, directly change their class name to
the new one.
l org.springframework.security.saml.SAMLCredential is replaced by com.microstrategy.auth.saml.response.SAMLCredential. This class is exactly the same as the previous one.

l org.springframework.security.saml.SAMLCredential is replaced by com.microstrategy.auth.saml.SAMLUserDetailsService. An extra loadSAMLProperties method is added. This method is called in SAMLRelyingPartyRegistration's constructor when the app is launched. Subclasses should take advantage of the SAMLConfig instance and set internal properties.

l org.springframework.security.providers.ExpiringUsernameAuthenticationToken is replaced by com.microstrategy.auth.saml.response.SAMLAuthentication. This class is a replacement of the previous authentication token, which has the same properties as the old one.
If you are using utility classes from v2.6.7, you must migrate them to their parity classes in v4.1.0.
<bean
id=
"samlAssertionValidator"
class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
<property name="maxAuthenticationAge" value="2592000"/><!-- 30 days
-->
<property name="responseSkew" value="300"/>
</bean>
See SAML Customization for MicroStrategy Web and Mobile for details.
l After entering the SAML user name and password in the login page, the
login fails.
Troubleshooting
Check the SAML response network trace. If you see the following response
and there’s no assertion appended, follow the solution below.
urn:oasis:names:tc:SAML:2.0:status:Responder
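In the raw SAML response, this status is typically carried in a status code element similar to the following fragment (shown here for illustration; the exact XML your IdP returns may differ):

<samlp:Status>
<samlp:StatusCode Value="urn:oasis:names:tc:SAML:2.0:status:Responder"/>
</samlp:Status>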
Solution
Additional steps may need to be followed after the upgrade. The steps are simple and often just require replacing classes with newer and more secure classes.
l SAML single logout is now supported on Tomcat and Jboss. See Generate
SAML Configuration Files to enable single logout for your application.
Local logout is used by default.
1. Open a browser and access the SAML configuration page with a URL in
this format:
http://<FQDN>:<port>/<MicroStrategyLibrary>/saml/config/open
l General:
l Entity base URL: This is the URL the IdP will send and receive
SAML requests and responses. The field will be automatically
generated when you load the configuration page, but it should
always be double checked. It should be the application URL end
users would use to access the application.
l Behind the proxy: Using a reverse proxy or load balancer can alter
the HTTP headers of the messages sent to the application server.
These HTTP headers are checked against the destination specified
in the SAML response to make sure it is sent to the correct
destination. A mismatch between the two values can cause the
message delivery to fail. To prevent this, select Yes if
MicroStrategy Library runs behind a reverse proxy or load balancer.
The base URL field is set to the front-end URL. Select No if you are
not using a reverse proxy or load balancer.
l Logout mode: Select Local to prevent users from being logged out
from all other applications controlled by SSO. Select Global to log
out users from other applications controlled by SSO. Make sure that
l Encryption:
These options control how user attributes received from the SAML
responses are processed. If the SAML attribute names are
configurable on the IdP side, you may leave all options as default. If your IdP sends SAML attributes with fixed names, the values must be changed on the application side to match.
l Group format:
<saml2:Attribute Name="Groups"
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema- instance"
xsi:type="xs:string">IdPGroupA </saml2:AttributeValue>
<saml2:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:type="xs:string">IdPGroupB </saml2:AttributeValue>
</saml2:Attribute>
MicroStrategy Library needs a metadata file from the IdP to identify which
service you are using.
This file name is case sensitive and must be saved exactly as shown
above.
most IdPs. Exact configuration details may differ depending on your IdP.
Consult your identity provider's documentation for specific instructions.
Manual Registration
The SPMetadata.xml file contains all of the information needed for manual
configuration.
l The entityID= parameter is the same EntityID you provided in the SAML
config page
Required Attributes:
l Name ID - Maps to Trusted Authenticated Request User ID of the
MicroStrategy user as defined in MicroStrategy Developer.
Optional Attributes:
l DisplayName - Used to populate or link to a MicroStrategy user's Full
name
Attribute names are case sensitive. Make sure any SAML attribute name
configured here is an exact match to the application configuration.
In the case where IdP does not allow customization of SAML attribute
names and provides fixed names instead, you may modify the
corresponding attribute names in MstrSamlConfig.xml generated
previously.
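For example, if your IdP sends fixed attribute names for the display name, email, and groups, the corresponding entries in MstrSamlConfig.xml might look like the following sketch (element names taken from the samples elsewhere in this guide; the values are placeholders to replace with the names your IdP actually sends):

<userInfo>
<displayNameAttributeName>DisplayName</displayNameAttributeName>
<emailAttributeName>EMail</emailAttributeName>
<groupAttributeName>Groups</groupAttributeName>
</userInfo>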
1. Launch the Library Admin page by entering the following URL in your
web browser
http://<FQDN>:<port>/MicroStrategyLibrary/admin
2. On the Library Web Server tab, select SAML from the list of available
Authentication Modes.
4. Click Save.
In 2021 Update 2 or later, the Library admin pages support basic and SAML
authentication when only SAML authentication is enabled. The admin pages
authentication is governed by the auth.admin.authMethod parameter in
the WEB-INF/classes/config/configOverride.properties file. If
the parameter is not mentioned in the file, you can add it as shown below.
l auth.admin.authMethod = 1 (Default)
l auth.admin.authMethod = 2
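For example, a minimal configOverride.properties sketch that enables SAML as the only login mode and protects the admin pages with SAML authentication might look like this (the property names and values match the samples in this guide; verify them against your own deployment):

auth.modes.available=1048576
auth.modes.default=1048576
auth.admin.authMethod=2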
The Library admin pages are protected by the SAML admin groups mentioned in the saml/config/open form. These admin groups are linked to the groups on the Identity Provider (IDP) side. Only members who belong to the IDP admin groups can access the admin pages. Users that do not belong to the admin group receive a 403 Forbidden error.
The administrator can change the parameter value as per the requirements.
A Web application server restart is required for the changes to take effect.
The Library admin pages cannot be protected by the SAML admin groups
when multiple authentication modes are enabled.
4. Locate <filter
class="ch.qos.logback.classic.filter.ThresholdFilter">
and change the level to be "DEBUG".
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>DEBUG</level>
</filter>
Once the behavior you are investigating has been reproduced, edit
logback.xml once again and change level="DEBUG" back to
level="ERROR".
l Upload IDPMetadata
l Configure Logging
To launch the page that generates the configuration files, open a browser
and enter the following URL:
<web application_path>/saml/config/open
To access, you are prompted for the application server's admin credentials.
http://<FQDN>:<port>/MicroStrategyWeb/saml/config/open
http://<FQDN>:<port>/MicroStrategyMobile/saml/config/open
Existing SAML Configuration Files
If you have already configured SAML, you can download the following SAML
configuration files and verify the content.
l IDPMetadata.xml
l SPMetadata.xml
l SamlKeystore.jks
Upload IDPMetadata
You can upload or change the existing IDPMetadata.xml file with the
metadata file generated by the Identity Provider.
SAML Configuration Generation
l General
Some IdPs may require Entity ID to be the web application URL. SAML
standards state it can be any string as long as a unique match can be
found among the IdP's registered entity IDs. Follow the requirements for
your specific IdP.
l Entity base URL: This is the URL the IdP will send and receive SAML
requests and responses. The field will be automatically generated when
you load the configuration page, but it should always be double checked.
http://<FQDN>:<port>/MicroStrategyWeb
http://<FQDN>:<port>/MicroStrategyMobile
l Behind the proxy: Using a reverse proxy or load balancer can alter the
HTTP headers of the messages sent to the application server. These
HTTP headers are checked against the destination specified in the
SAML response to make sure it is sent to the correct destination. A
mismatch between the two values can cause the message delivery to
fail. To prevent this, select Yes if MicroStrategy Library runs behind a
reverse proxy or load balancer. The base URL field is set to the front-
end URL. Select No if you are not using a reverse proxy or load
balancer.
l Encryption
These options control how user attributes received from the SAML
responses are processed. If the SAML attribute names are configurable on
the IdP side, you may leave all options as default. If your IdP sends SAML attributes with fixed names, the values must be changed on the web application side to match.
l Group format
<saml2:Attribute Name="Groups"
NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema- instance"
xsi:type="xs:string">IdPGroupA </saml2:AttributeValue>
<saml2:AttributeValue xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:type="xs:string">IdPGroupB </saml2:AttributeValue>
</saml2:Attribute>
l Group Attribute: Groups
l Admin Groups: IdPGroupA,IdPGroupB
To register MicroStrategy Web with your IdP, you need to do the following:
l Register MicroStrategy Web with your IdP using the SPMetadata.xml file
you generated in the previous step.
Each SAML-compliant IdP has a different way to perform these steps. The
sections below provide a general overview of the process.
Required attributes
Optional attributes
Attribute names are case sensitive. Make sure any SAML attribute
name configured here is an exact match to the web application
configuration.
In the case where IdP does not allow customization of SAML attribute
names and provides fixed names instead, you may modify the
corresponding attribute names in MstrSamlConfig.xml generated
previously.
When configuring assertion attributes, make sure you set up users who
belong to a group (for example admin) with the same group name as
defined when generating configuration files in MicroStrategy Web
(step 2 in Generate and Manage SAML Configuration Files).
Otherwise, no user will be able to access the web administrator page
after the web.xml file has been modified and the Web server
restarted. Use Groups as SAML Attribute Name.
To enable SAML in the Web application for 2021 or older versions, modify
the web.xml file located in the WEB-INF folder of the MicroStrategy Web
installation directory.
2. Delete the first and last line of the web.xml fragment shown below to
enable SAML authentication.
<!--
<context-param>
<param-name>contextInitializerClasses</param-name>
<param-
value>com.microstrategy.auth.saml.config.ConfigApplicationContextInitia
lizer</param-value>
</context-param>
<filter>
<filter-name>springSecurityFilterChain</filter-name>
<filter-
class>org.springframework.web.filter.DelegatingFilterProxy</filter-
class>
</filter>
<filter-mapping>
<filter-name>springSecurityFilterChain</filter-name>
<url-pattern>/servlet/*</url-pattern>
</filter-mapping>
<filter-mapping>
<filter-name>springSecurityFilterChain</filter-name>
<url-pattern>/saml/*</url-pattern>
</filter-mapping>
<listener>
<listener-
class>org.springframework.web.context.ContextLoaderListener</listener-
class>
</listener>
-->
To disable SAML in a Web application for the 2021 platform release and
older versions, modify the web.xml file located in the WEB-INF folder of
the MicroStrategy Web installation directory.
1. Replace the web.xml file of the Web application with the original file
that you saved.
Configure Logging
It is not recommended to leave the file as is, since the relative file path is very unreliable and the log can end up anywhere; it usually cannot be found in the Web application folder. Use full file paths to fully control the log location.
${catalina.home}/webapps/MicroStrategy/WEB-INF/log/SAML/SAML.log
In 2021 Update 2 or later, the Web admin pages support SAML and basic authentication when SAML authentication is enabled. The admin pages authentication is governed by the springAdminAuthMethod parameter:
l springAdminAuthMethod = 2
l springAdminAuthMethod = 1
The administrator can change the parameter value as per the requirements.
A Web server restart is required for the changes to take effect.
MicroStrategy Integration
The authorization rule has been added to Web.config out of the box. Once
you install the IIS URL Authorization module, you will automatically get
protection for Admin pages.
Configuring the New Plugin
This is best done from the command line. You will also need admin
privileges.
Configuring the IIS7 DLL
Verifying the installation
https://localhost/Shibboleth.sso/Status
This must be run as localhost, and should return XML containing information
about Shibboleth. The latest Shibboleth version only supports
HTTP connections.
MicroStrategy Integration
MicroStrategy User Mapping
MicroStrategy Developer
To map users using MicroStrategy Developer, open: User Manager > Edit
User Properties > Authentication > Metadata > Trusted Authentication
Request > User ID.
1. Configure %SHIBBOLETH_INSTALL_
DIR%\etc\shibboleth\shibboleth2.xml
shibboleth2.xml – site
with
shibboleth2.xml – host
<Host name="sp.example.org">
<Path name="secure"
authType="shibboleth"
requireSession="true"/>
</Host>
with
<Host name="FULLY_QUALIFIED_SERVICE_PROVIDER_HOST_NAME">
<Path name="MicroStrategy"
authType="shibboleth"
requireSession="true"/>
<Path name="MicroStrategyMobile"
authType="shibboleth"
requireSession="true"/>
</Host>
l Replace entityID value with a suitable entity name for your new
service provider:
shibboleth2.xml - entityID
<ApplicationDefaults entityID="https://sp.example.org/shibboleth"
REMOTE_USER="eppn persistent-id targeted-id"
cipherSuites=
"ECDHE+AESGCM:ECDHE:!aNULL:!eNULL:!LOW:!EXPORT:!RC4:!SHA:!SSLv2">
with
<ApplicationDefaults entityID="https://FULLY_QUALIFIED_SERVICE_
PROVIDER_HOST_NAME/shibboleth"
REMOTE_USER="eppn persistent-id targeted-id"
cipherSuites=
"ECDHE+AESGCM:ECDHE:!aNULL:!eNULL:!LOW:!EXPORT:!RC4:!SHA:!SSLv2">
l Set SSO entityID with your SAML Identity Provider: This may be
obtained from the Identity Provider metadata by replacing:
<SSO entityID="https://idp.example.org/idp/shibboleth"
discoveryProtocol="SAMLDS"
discoveryURL="https://ds.example.org/DS/WAYF">
SAML2 SAML1
</SSO>
<SSO entityID="YOUR_SSO_SAML_ENTITY_ID">
SAML2 SAML1
</SSO>
<MetadataProvider
type="XML"
url="https://adfs.example.org/federationmetadata/2007-
06/federationmetadata.xml"/>
<MetadataProvider
type="XML"
file="partner-metadata.xml"/>
2. Configure %SHIBBOLETH_INSTALL_
DIR%\etc\shibboleth\attribute-map.xml to extract several
fields from the SAML assertion, which MicroStrategy will associate with
an Intelligence Server user. See AttributeNaming on the Shibboleth site
for more information.
<Attribute
name=
"http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccount
name"
id="SBUSER"/>
<Attribute
name="urn:oid:0.9.2342.19200300.100.1.1"
id="SBUSER"
nameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic"/>
ADFS
3. Expand to the following: ADFS > Trust Relationships > Relying Party
Trusts.
5. When you reach the "Select Data Source" option, you need the
Shibboleth Service Provider metadata. Enter:
https://YOUR_MICROSTRATEGY_WEB_URL/Shibboleth.sso/Metadata
If the HTTP URL metadata does not work, you may have to manually
download and upload the metadata file.
7. When finished, you may be prompted to edit claim rules. If not, you can
right-click your new client and select Edit claim rules.
8. Click Add Rule under the Issuance Claim Rules tab. The Add Transform Claim Rule Wizard appears.
10. Set the following fields to values consistent with the Shibboleth
attribute-map.xml configuration from above.
Keycloak
The Identity Provider will need to ensure the user identity field is also included
in the SAML assertion generated when a user is authenticated. The exact
field depends upon the Identity Provider. The user identity will be associated
with the SAML parameter name of
urn:oid:0.9.2342.19200300.100.1.1. This parameter must be
consistent with the parameter with the same name in the Shibboleth Service
Provider attribute-map.xml declaration.
1. Locate restful-api-1.0-SNAPSHOT-jar-with-
dependencies.jar in the WEB-INF/lib of the MicroStrategy Library
file directory.
4. Find <bean
class="org.springframework.security.saml.context.SAML
ContextProviderImpl" id="contextProvider"/> in the file and
replace it with the following bean:
<bean id="contextProvider"
class="org.springframework.security.saml.context.SAMLContextProviderLB">
<property name="scheme" value="https"/>
<property name="serverName" value="your external hostname"/>
<property name="serverPort" value="443"/>
<property name="includeServerPortInRequestURL" value="false"/>
l The bean class is different from the original; it has been changed to SAMLContextProviderLB.
1. Open the Users and Badges tab and click Configure in the User
Management section.
1. On the Logical Gateways tab, click the Edit link in the Web Application login section.
3. Map the SAML Attribute Name to the User Field that contains the
appropriate Active Directory Attribute configured in the previous
step.
4. Click Save.
Create an Application
1. Sign in to the Azure portal. If you have already launched Azure, under
Manage, go to Azure Active Directory and select Enterprise
applications.
2. At the top, select New application > Create your own application.
3. Provide a Name for the application, select the Non-gallery app option,
and click Create.
1. In the Set up single sign-on tile, click Get Started and select SAML
as the sign-on method.
3. Click Save.
Assertion Attributes
<auth:ClaimType
Uri="http://schemas.microsoft.com/identity/claims/displayname"
xmlns:auth="http://docs.oasis-open.org/wsfed/authorization/200706">
<auth:DisplayName>Display Name</auth:DisplayName>
<auth:Description>Display name of the user.</auth:Description>
</auth:ClaimType>
<auth:ClaimType
Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"
xmlns:auth="http://docs.oasis-open.org/wsfed/authorization/200706">
<auth:DisplayName>Email</auth:DisplayName>
<auth:Description>Email address of the user.</auth:Description>
</auth:ClaimType>
<auth:ClaimType
Uri="http://schemas.microsoft.com/ws/2008/06/identity/claims/groups"
xmlns:auth="http://docs.oasis-open.org/wsfed/authorization/200706">
<auth:DisplayName>Groups</auth:DisplayName>
2. Copy these values and paste them between the <userInfo> tags in
MstrSamlConfig.xml, located in the deployment folder.
<userInfo>
<groupAttributeName>http://schemas.microsoft.com/ws/2008/06/identity/clai
ms/groups</groupAttributeName>
<groupFormat>Simple</groupFormat>
<dnAttributeName>DistinguishedName</dnAttributeName>
<displayNameAttributeName>http://schemas.microsoft.com/identity/claims/di
splayname</displayNameAttributeName>
<emailAttributeName>http://schemas.xmlsoap.org/ws/2005/05/identity/claims
/emailaddress</emailAttributeName>
<adminGroups>2109318c-dee4-4658-8ca0-51623d97c611</adminGroups>
<roleMap/>
</userInfo>
Azure AD only sends the IDs. For admin permissions, the Object ID
must also be copied.
<adminGroups>36198b4e-7193-4378-xxx4-715e65edb580</adminGroups>
<roleMap/>
</userInfo>
Troubleshooting
Once web.xml has been changed to include SAML support, it refers to the
metadata and configuration files in the resources/SAML folder. If the Web
deployment fails to start, it is possible the generated files from the
resources/SAML/stage folder were not copied over. Copy the required
files to the SAML folder and restart the application.
The App ID URI does not match the entityID set in the SP metadata.
Review the URIs and correct the names accordingly. Changes can be made
in SPMetadata.xml, MstrSamlConfig.xml, and Azure. Restart the
application after you finalize the changes.
Azure SAML group number limitation
If the number of groups the user belongs to exceeds the limit (150 for SAML), the group attributes carried in the SAML assertion are limited and group mapping won't work as expected. Refer to the Azure documentation for more details.
3. In any browser, enter the URL using the format <ADFS Server
base URL>/<Metadata entry point> to download the
metadata file to the browser's Downloads folder.
2. Establish trust between the Web server and Intelligence server. See
Single Sign-On with SAML Authentication for JSP Web and Mobile for
more information.
5. Register MicroStrategy Web, Library, and Mobile with the ADFS server.
5. In the Select Data Source pane, select Import data about the relying party from a file.
6. Add claim rules for the registered relying party trust. See Integrating
SAML Support for ADFS for more information.
2. One by one, add the claim rules and complete the setup on ADFS.
The following are examples of rule creation and a list of created
rules. In this example, the same claim rule is created as specified
in the following screenshot.
b. Select the name shown in the screenshot for the claim rule
name.
f. The following is the list of the mappings for the rules used in
this example:
7. Map the OAuth user to the MicroStrategy user for login control.
Create an Application
4. Click Create.
2. Click Next.
<dnAttributeName>DistinguishedName</dnAttributeName>
<displayNameAttributeName>DisplayName</displayNameAttributeName>
<emailAttributeName>EMail</emailAttributeName>
<groupAttributeName>Groups</groupAttributeName>
Use the filter to select the groups that are sent over. To send over all
the groups, select Regex and enter .* into the field.
2. Go to Assignments.
4. Go to Sign On.
MicroStrategy and Snowflake also support single sign-on (SSO) using the SAML protocol, with Okta as an Identity Provider (IdP).
5. Troubleshooting
Once you've completed all steps, you can troubleshoot the configuration.
Troubleshoot and Test the Configuration
The Okta account used as IdP for Snowflake must be the same account
used to authenticate MicroStrategy.
1. Introduction to OAuth
l session:role-any
l openid
l profile
l email
l offline_access
1. Testing Procedure
Once the database instance is created, it can be used to add tables to the
project schema via MicroStrategy Developer.
After adding tables to the project schema, another database connection can
be created for OAuth authentication.
iv. Locate the Login redirect URIs section and click Add URI.
A connection mapping can be created for the analyst to use the Snowflake_
SSO_DSN_OAuth connection, and for the administrator to use the
Snowflake_SSO_DSN_Basic connection. For more information on
connection mapping, see Controlling Access to the Database: Connection
Mappings.
5. Click OK.
9. Click OK.
Using an analyst user mapped to the Okta user (as explained in Mapping
SAML Users to MicroStrategy), log in to MicroStrategy Web.
1. In the Data Import dialog, select the primary database instance for the
project. For example, Snowflake_SSO.
At this point, you are authenticated to Snowflake and can access data and dashboards with your credentials.
Execute Dashboards
Troubleshooting
Cause: The callback URL was not added to the whitelist of valid redirect
URLs.
Solution: Add the appropriate callback URL to the whitelist of valid URLs as
described in Whitelist the Callback URL.
Solution: You need to authenticate to Snowflake via the Data Import dialog.
Client ID or Client Secret not found in metadata. Error in Process method of Component: QueryEngineServer, Project MicroStrategy TPCH, Job 69, Error Code = -2147212544
Solution: Confirm the connection mapping is mapped correctly for the user.
Change the default database connection for the database instance.
Snowflake Driver Log
To enable the Snowflake driver log, see KB48422: How to enable debug log for newly bundled Snowflake driver.
Related Content
KB484275: Best practices for using the Snowflake Single Sign-on (SSO)
feature
2. Create single sign-on with SAML authentication for JSP Web and Mobile.
Generate and Modify Configuration Files
l For Web:
https://<FQDN>:<port>/MicroStrategy/saml/conf
ig/open
l For Library:
https://<FQDN>:<port>/MicroStrategyLibrary/sa
ml/config/open
l For Mobile:
https://<FQDN>:<port>/MicroStrategyMobile/sam
l/config/open
Enable SAML authentication for the 2021 platform release or older versions
<context-param>
<param-name>contextConfigLocation</param-name>
<param-
value>classpath:resources/SAML/SpringSAMLConfig.xml<
/param-value>
</context-param>
<context-param>
<param-name>contextInitializerClasses</param-
name>
<param-
value>com.microstrategy.auth.saml.config.ConfigAppli
cationContextInitializer</param-value>
</context-param>
<filter>
<filter-name>springSecurityFilterChain</filter-
name>
<filter-
class>org.springframework.web.filter.DelegatingFilte
rProxy</filter-class>
</filter>
<filter-mapping>
<filter-name>springSecurityFilterChain</filter-
name>
<url-pattern>/servlet/*</url-pattern>
</filter-mapping>
<filter-mapping>
<filter-name>springSecurityFilterChain</filter-
name>
<url-pattern>/saml/*</url-pattern>
</filter-mapping>
<listener>
<listener-
class>org.springframework.web.context.ContextLoaderL
istener</listener-class>
</listener>
1. Open
https://<FQDN>:<port>/MicroStrategyLibrary/saml/co
nfig/open to generate configuration files.
checkbox.
l [MicroStrategyLibrary Root]\WEB-
INF\classes\auth\SAML\custom\SAML2OAuth.xml
<bean id="oAuthTokenProvider"
class="com.microstrategy.auth.saml.implicitoauth.MicrosoftAzureAD">
<property name="authorizationEndpoint" value=""/>
<property name="clientID" value=""/>
<property name="redirectUri" value=""/>
With the Project Schema
l In MicroStrategy Developer:
l In MicroStrategy Web:
Without the Project Schema
To use the database instance without the project schema, you must
either have basic or OAuth authentication.
l In MicroStrategy Developer:
6. Click OK.
l In MicroStrategy Web:
Users must have the Set OAuth parameters for Cloud App
sources privilege under Client-Web.
After the database instance is created, you can set the OAuth
parameters in MicroStrategy Web.
l For Client ID, click on the app > Overview > Application
(client) ID, and locate the ID.
If you have multiple MicroStrategy Users or User Groups and want to give
access to the same database instance but with different database logins,
see Controlling Access to the Database: Connection Mappings
In a primary database connection, users that are not mapped into the
secondary database connection use the default database connection. In a
secondary database connection, users in a specific group use the mapped
database connection.
For example, the administrator uses basic authentication, while other users
use OAuth authentication. All users can use the project schema. You must
set the default connection to use standard authentication for the Warehouse
Catalog to work in Developer:
l In MicroStrategy Developer
l In MicroStrategy Developer
f. Click OK.
g. Click New.
checkbox.
Connection mapping.
After the database instance is created, you can set the OAuth
parameters in MicroStrategy Web.
l For Client ID, click on the app > Overview > Application
(client) ID, and locate the ID.
l For Client Secret, click on the app > Certificates & secrets,
and locate the secret. If necessary, create a new secret.
l For Directory (tenant) ID, click on the app > Overview, and
locate the ID.
l For Scope, click on the app > API permissions, click on the
API/Permission name, and locate the URL. The URL is in the
format like https://[AzureDomain]/
[id]/session:scope-any.
Related Content
KB484275: Best practices for using the Snowflake Single Sign-on (SSO)
feature
3. Known limitations
4. Appendix
l Configure Snowflake
l SAML configuration
l OAuth configuration
3. Click Next.
The redirect URI does not accept query parameters. In this case,
?env=3172 is removed in the AD FS setting. However, you must enter
the full URL when adding an OAuth redirect URI, which is
automatically regenerated by MicroStrategy. Some identity providers
(IdP), like Twitter, prevent using such URI, but still accept the request.
6. Click Next.
16. Issue Transform Rules for Web API applications as shown in the
following screenshots.
Upon completing the steps in this section, you should have generated all
OAuth parameters for Snowflake connectivity:
l Resource: https://<account_
identifier>.snowflakecomputing.com/fed/login
You can create Snowflake database instances with or without the project
schema.
DRIVER={SnowflakeDSIIDriver
ODBC};SERVER=sample.snowflakecomputing.com;DATABASE=SNOWF
LAKE_SAMPLE;SCHEMA=SAMPLE_SCHEMA;WAREHOUSE=SAMPLE_
WH;AUTHENTICATOR=oauth;TOKEN=?MSTR_OAUTH_TOKEN;
JDBC;DRIVER=
{net.snowflake.client.jdbc.SnowflakeDriver};URL=
{jdbc:snowflake://sample.snowflakecomputing.com/?authenti
cator=oauth&db=SNOWFLAKE_SAMPLE&warehouse=SAMPLE_
WH&schema=public&token=?MSTR_OAUTH_TOKEN};
With the Project Schema
l In MicroStrategy Developer:
l In MicroStrategy Web:
Database instances created via MicroStrategy can be used for the project
schema, but cannot be used for connection mapping.
Without the Project Schema
To use the database instance without the project schema, you must either
have basic or OAuth authentication.
l In MicroStrategy Developer:
6. Click OK.
l In MicroStrategy Web:
Users must have the Set OAuth parameters for Cloud App sources
privilege under Client-Web.
After the database instance is created, you can set the OAuth
parameters in MicroStrategy Web.
l For Client Secret, use the secret generated the first time you
created the server application.
See Enable Seamless Login Between Web, Library, and Workstation for
more information.
Limitations
l An end-to-end single sign-on (SSO) workflow is not supported.
Appendix
AD FS Configuration for SAML Authentication
Snowflake Configuration
SAML Configuration
The following is a sample workflow for configuring OAuth for Snowflake.
OAuth Configuration
See Configure Custom Clients for External OAuth - Snowflake
Documentation to configure OAuth for Snowflake.
User Mapping
The following pieces of information sent over in the SAML response can be
used to map to a MicroStrategy user:
l Name ID: MicroStrategy looks for a match between the Name ID and User
ID in the Trusted Authenticated Request setting.
MicroStrategy checks for matches in the exact order they are presented.
If no match is found, it means the SAML user does not yet exist in
MicroStrategy, and is denied access. You can choose to have SAML users
imported to MicroStrategy if no match is found, see Importing and Syncing
SAML Users.
Group Mapping
The way MicroStrategy maps user groups is determined by the entries made
in the Group Attribute and Group Format fields when the SAML
configuration files were generated for your application. Groups are mapped
between an identity provider and MicroStrategy in one of two ways:
This setting can be found in Developer by opening Group Editor > Group
Definition > General.
New users and their associated groups can be dynamically imported into
MicroStrategy during application log in. You can also configure Intelligence
server to sync user information for existing MicroStrategy users each time
they log in to an application. The following settings are accessed from the
Intelligence Server Configuration > Web Single Sign-on > Configuration
window in Developer.
Import user and Sync user are not available unless this setting is turned on.
All users imported this way are placed in the "3rd party users" group in MicroStrategy, and are not physically added to any MicroStrategy groups that match their group membership information.
l Synch user at logon: Allows MicroStrategy to update the fields used for
mapping users with the current information provided by the SAML
response.
This option also updates all of a user's group information and imports groups into "3rd party users" if matching groups are not found. This may result in unwanted extra groups being created and stored in the metadata.
l Library Web
This applies to MicroStrategy 2021 Update 6 and newer versions.
MicroStrategy Web:
https://<FQDN>:<port>/MicroStrategy/saml/config/op
en
Mobile Server:
https://<FQDN>:<port>/MicroStrategyMobile/saml/con
fig/open
2. Enable the application to initiate single logout using the Azure console.
a. Open the Azure console and in the Single sign-on tab, edit the
Basic SAML Configuration.
MicroStrategy Web:
https://<FQDN>:<port>/MicroStrategy/saml/SingleLog
out
Mobile Server:
https://<FQDN>:<port>/MicroStrategyMobile/saml/Sin
gleLogout
Library Web
https://<FQDN>:<port>/MicroStrategyLibrary/saml/Si
ngleLogout
l MicroStrategy Web
l Library Web
This applies to MicroStrategy 2021 Update 6 and newer versions.
2. Enable the application to initiate single logout using the Okta console.
-----BEGIN CERTIFICATE-----
MIIDoDCCAgigAwIBAgIEFJ1sZDANBgkqhkiG9w0BAQwFADASMRAwDgYDVQQDDAdz
aWduS2V5MB4Xn707jRnJRiDr8qNverYFLJwjNZo=
-----END CERTIFICATE-----
3. Download IDPMetadata.xml.
For single sign-on with integrated authentication to work, users must have
user names and passwords that are printable, US-ASCII characters. This
limitation is expected behavior in Kerberos. This limitation is important to
keep in mind when creating a multilingual environment in MicroStrategy.
For the Active Directory user account that you will associate with the SPN:
2. In the Account options section, clear the check box next to Account
is sensitive and cannot be delegated.
Once the user has been created, a Service Principal Name for the
Intelligence server must be attached to the user using the setspn
command.
C:\Windows\system32>
C:\Windows\system32>setspn.exe -L mstrsvr_acct
Registered ServicePrincipalNames for CN=MicroStrategy Server
Account,CN=Users,DC=vmnet-esx-mstr,DC=net:
C:\Windows\system32>
C:\Windows\system32>setspn -A MSTRSVRSvc/exampleserver.example.com:34952
your_service_account
Registering ServicePrincipalNames for CN=your_service_account,CN=Users,DC=example,DC=com
MSTRSVRSvc/exampleserver.example.com:34952
Updated object
1. After creating the SPN, open the associated service user account.
2. On the Delegation tab select Trust this user for delegation to any
service (Kerberos only).
1. After creating the SPN, open the associated service user account.
3. Click Add.
4. Provide the service account for the destination services then select a
registered service from the list.
l Add the Intelligence server to the list of services that accept delegated
credentials.
l Add the data source services to the list of services that accept delegated
credentials.
Install Kerberos 5
You must have Kerberos 5 installed on your Linux machine that hosts
Intelligence server. Your Linux operating system may come with Kerberos 5
Ensure that the Environment Variables are Set
Once you have installed Kerberos 5, you must ensure that the following
environment variables have been created:
The variables must be set when the Intelligence server starts in order to
take effect.
l ${KRB5_HOME} (Optional): Location of all Kerberos configuration files, for example /etc/krb5.
l ${KRB5_CONFIG} (Required): Location of the default Kerberos configuration file, for example /etc/krb5/krb5.conf.
l ${KRB5CCNAME} (Optional): Location of the Kerberos credential cache, for example /etc/krb5/krb5_ccache.
l ${KRB5_KTNAME} (Required): Location of the Kerberos keytab file, for example /etc/krb5/krb5.keytab.
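One way to make sure the required variables are set when Intelligence server starts is to export them in the startup environment, for example (the paths shown are the defaults listed above; adjust them to your installation):

export KRB5_CONFIG=/etc/krb5/krb5.conf
export KRB5_KTNAME=/etc/krb5/krb5.keytab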
You must create and configure the krb5.keytab file. The steps to
configure this file on your Linux machine are provided in the procedure
below.
l DOMAIN_REALM: The domain realm for your Intelligence server, which must
be entered in uppercase.
2. Retrieve the key version number for your Intelligence server service
principal name, using the following command:
kvno MSTRSVRSvc/ISMachineName:ISPort@DOMAIN_REALM
ktutil
addent -password -p MSTRSVRSvc/ISMachineName:ISPort@DOMAIN_REALM -k
KeyVersionNumber -e EncryptionType
wkt /etc/krb5/krb5.keytab
exit
The command should run without prompting you for a username and
password.
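For example, a verification along the following lines (an assumption; substitute your own principal and keytab path) reads the key from the keytab instead of prompting for a password:

kinit -k -t /etc/krb5/krb5.keytab MSTRSVRSvc/ISMachineName:ISPort@DOMAIN_REALM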
You must create and configure a file named krb5.conf. This file is stored
in the /etc/krb5/ directory by default.
If you create a krb5.conf file in a directory other than the default, you
must update the KRB5_CONFIG environment variable with the new location.
Refer to your Kerberos documentation for steps to modify the KRB5_
CONFIG environment variable.
[libdefaults]
default_realm = DOMAIN_REALM
default_keytab_name = FILE:/etc/krb5/krb5.keytab
forwardable = true
no_addresses = true
[realms]
DOMAIN_REALM = {
kdc = DC_Address:88
admin_server = DC_Admin_Address:749
}
[domain_realm]
.domain.com = DOMAIN_REALM
domain.com = DOMAIN_REALM
.subdomain.domain.com = DOMAIN_REALM
subdomain.domain.com = DOMAIN_REALM
3. On the Connection tab, under Server Name, type the server name exactly as it appears in the Service Principal Name created in Active Directory Account Configuration, with the format MSTRSVRSvc/<hostname>:<port>@<realm>.
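For example, using the sample SPN registered earlier in this section (for illustration only), the server name would be:

MSTRSVRSvc/exampleserver.example.com:34952@EXAMPLE.COM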
2. Right click on a user and select Edit > Authentication > Metadata.
4. Click OK.
3. Expand the LDAP category, then expand Import, and then select
Options.
7. Click OK.
You must create a Service Principal Name (SPN) for your J2EE application
server, and map it to the domain user that the application server runs as.
The SPN identifies your application server as a service that uses Kerberos.
For instructions on creating an SPN, see Active Directory Account
Configuration.
HTTP/ASMachineName
l ASMachineName: This is the fully qualified host name of the server where
the application server is running. It is of the form machine-
name.example.com. Integrated authentication will only function when
accessing the application server using the ASMachineName used to
register the SPN. If the fully qualified host name was registered as SPN,
then using the machine name or IP address will not work. Should the
application server be accessible through FQDN and machine name,
additional SPNs will need to be registered to the AD service account.
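For example, an SPN registration for the application server's service account might look like the following (a sketch that reuses the sample host name above and the sample account j2ee-http mentioned later in this section; replace both with your own values):

setspn -A HTTP/machine-name.example.com j2ee-http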
You must create and configure a krb5.keytab file for the application
server. In UNIX, you must use the kutil utility to create this file. In
Windows, you must use the ktpass utility to create the keytab file.
DOMAIN_REALM: The domain realm for the application server. It is of the form
EXAMPLE.COM, and must be entered in uppercase.
Keytab_Path: For J2EE application servers under Windows, this specifies the
location of the krb5.keytab file. It is of the form
C:\temp\example.keytab.
ASUser and ASUserPassword: The user account for which the SPN was
registered, for example j2ee-http and its password.
If your application server and Intelligence server are hosted on the same
machine, it is required that you use separate keytab and configuration files
for each. For example, if you are using krb5.keytab and krb5.conf for
the Intelligence server, use krb5-http.keytab and krb5-http.conf for
the application server.
2. Retrieve the key version number for your application server service
principal name, using the commands shown below:
kinit ASUser
kvno ASUser
ktutil
addent -password -p ASUser@DOMAIN_REALM -k KeyVersionNumber -e
EncryptionType rc4-hmac
wkt /etc/krb5/krb5.keytab
exit
ktpass ^
-out Keytab_Path ^
-princ ASUser@DOMAIN_REALM ^
-pass ASUserPassword ^
-crypto RC4-HMAC-NT ^
-pType KRB5_NT_PRINCIPAL
For Linux only: If your application server and Intelligence server are hosted
on the same machine, it is required that you use a separate configuration
file. For example, if you created krb5.conf for the Intelligence server, use
krb5-http.conf for the application server.
[libdefaults]
default_realm = DOMAIN_REALM
default_keytab_name = Keytab_Path
forwardable = true
no_addresses = true
[realms]
DOMAIN_REALM = {
kdc = DC_Address:88
admin_server = DC_Admin_Address:749
}
[domain_realm]
.domain.com = DOMAIN_REALM
domain.com = DOMAIN_REALM
.subdomain.domain.com = DOMAIN_REALM
subdomain.domain.com = DOMAIN_REALM
Depending on the version of the Java Development Kit (JDK) used by your
application server, the format of the jaas.conf file varies slightly. Refer to
your JDK documentation for the appropriate format. Sample jaas.conf files
for the Sun and IBM JDKs follow. The following variables are entered in the
.accept section of the jaas.conf file:
com.sun.security.jgss.krb5.accept {
com.sun.security.auth.module.Krb5LoginModule required
principal="ASUser@DOMAIN_REALM"
useKeyTab=true
doNotPrompt=true
storeKey=true
debug=true;
};
com.ibm.security.jgss.initiate {
com.ibm.security.auth.module.Krb5LoginModule required
useDefaultKeytab=true
principal="ASUser@DOMAIN_REALM"
credsType=both
debug=true
storeKey=true;
};
Save the jaas.conf file to the same location as your krb5.conf file.
Configure the JVM Startup Parameters
For your J2EE-compliant application server, you must set the appropriate
JVM startup parameters. The variables used are described below:
-Djava.security.auth.login.config=JAAS_Path
-Djava.security.krb5.conf=KRB5_Path
-Djavax.security.auth.useSubjectCredsOnly=false
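How these parameters are passed depends on your application server. As an illustration only, on Apache Tomcat they could be appended to CATALINA_OPTS in a setenv script (the file paths are assumptions; point them at your own jaas.conf and Kerberos configuration files):

CATALINA_OPTS="$CATALINA_OPTS -Djava.security.auth.login.config=/etc/krb5/jaas.conf -Djava.security.krb5.conf=/etc/krb5/krb5-http.conf -Djavax.security.auth.useSubjectCredsOnly=false"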
<filter>
<display-name>SpnegoFilter</display-name>
<filter-name>SpnegoFilter</filter-name>
<filter-class>com.microstrategy.web.filter.SpnegoFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>SpnegoFilter</filter-name>
<servlet-name>mstrWeb</servlet-name>
</filter-mapping>
<filter>
<display-name>SpnegoFilter</display-name>
<filter-name>SpnegoFilter</filter-name>
<filter-class>com.microstrategy.mobile.filter.SpnegoFilter</filter-class>
</filter>
<filter-mapping>
<filter-name>SpnegoFilter</filter-name>
<servlet-name>mstrMobileAdmin</servlet-name>
</filter-mapping>
Restart your application server for all of the above settings to take effect.
1. Launch the Library Admin page by entering the following URL in your
web browser
http://<FQDN>:<port>/MicroStrategyLibrary/admin
2. On the Library Web Server tab, select Integrated from the list of
available Authentication Modes.
3. Click Save.
Restart your application server for all the above settings to take effect.
3. Select the Directory Security tab, and then under Anonymous access
and authentication control, click Edit.
6. Click OK.
Currently, ASP Web can only delegate users from the same domain.
l For IIS version 7 and older: If ASP runs on domain account, the account
needs to be an administrator or be enabled to act as part of the operating
system.
It is recommended that you create a Service Principal Name (SPN) for IIS,
and map it to the domain user that the application server runs as. The SPN
identifies your application server as a service that uses Kerberos. For
instructions on creating an SPN, refer to the Kerberos documentation.
HTTP/ASMachineName
l ASMachineName: This is the fully qualified host name of the server where
the application server is running. It is of the form machine-
name.example.com.
Configure the krb5.ini File
If you configure Kerberos on IIS to host the web server, you must configure
the krb5.ini file. This file is included with an installation of MicroStrategy
Web, and can be found in the following directory:
The path listed above assumes you have installed MicroStrategy in the
C:\Program Files (x86) directory.
Once you locate the krb5.ini file, open it in a text editor. The content
within the file is shown below:
[libdefaults]
default_realm = <DOMAIN NAME>
default_keytab_name = <path to keytab file>
forwardable = true
no_addresses = true
[realms]
<REALM_NAME> = {
kdc = <IP address of KDC>:88
admin_server = <IP address of KDC admin>:749
}
[domain_realm]
.domain.com = <DOMAIN NAME>
domain.com = <DOMAIN NAME>
.subdomain.domain.com = <DOMAIN NAME>
subdomain.domain.com = <DOMAIN NAME>
4. Click Save.
l Linux: <tomcat_
directory>/webapps/MicroStrategyLibrary/WEB-
INF/classes/config
l auth.kerberos.debug=false
l auth.kerberos.isInitiator=true
Computer\HKEY_LOCAL_
MACHINE\SOFTWARE\Policies\Google\Chrome
AuthNegotiateDelegateAllowlist.
1. Open the Windows Control Panel and go to Network and Internet >
Internet Options.
information.
Second, you must also configure the browser to place the MicroStrategy
Web site in a security zone that can serve credentials. For security reasons,
Edge only allows Kerberos delegation to sites within the Intranet and
Trusted Sites zones. See FAQs about Enhanced Security Configuration on
the Microsoft site for more information about zones. For this reason, if
MicroStrategy Web is not automatically detected as belonging to either of
these zones, you need to add it to one of these zones manually.
3. Click Close.
macOS has built in support for Kerberos. You must add your account for
Kerberos authentication using either the Ticket Viewer app or kinit
command-line tool.
$ kinit user_name@REALM_NAME
user_name@REALM_NAME's Password:
$
1. Once you Add Your Account in macOS, you must configure Chrome’s
AuthServerAllowlist with any domains that require Kerberos
authentication.
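For example, one way to set this policy for Chrome on macOS is with the defaults command (an illustration; replace the domain pattern with your own, and note that managed policy profiles are the preferred mechanism in enterprise environments):

defaults write com.google.Chrome AuthServerAllowlist "*.example.com"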
3. You may need to restart your machine for the changes to take effect.
To learn more about the policies, see Chrome Enterprise policy list.
3. You may need to restart your machine for the changes to take effect.
Mozilla Firefox
3. Double-click on each flag and enter the host of the MicroStrategy Web
site, as shown below:
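The flags involved are typically network.negotiate-auth.trusted-uris and network.negotiate-auth.delegation-uris. As an illustration, with a MicroStrategy Web host of mstrweb.example.com (a placeholder), the values would be:
network.negotiate-auth.trusted-uris = mstrweb.example.com
network.negotiate-auth.delegation-uris = mstrweb.example.com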
l You have configured the settings for importing users from your LDAP
directory, as described in Manage LDAP Authentication, page 189.
3. Go to LDAP > Import > Options. The Import Options are displayed.
7. Click OK.
7. In the Database Instance pane, enable the check boxes for all the
database instances for which you want to use integrated authentication,
as shown below.
8. Click OK.
http://<FQDN>:<port>/MicroStrategyLibrary/admin
2. On the Library Web Server tab, select Trusted from the list of
available Authentication Modes.
5. Click Save.
Primary User Identifier Enter the OIDC claim used to identify users.
By default, the OIDC claim is email.
Login Name Enter the OIDC claim for the login name. By default, the
OIDC claim is email.
Full name Enter the OIDC claim used to display users' full names in
MicroStrategy. By default, the OIDC claim attribute is name.
Email Enter the OIDC claim used as the user's email in MicroStrategy.
By default, the OIDC claim attribute is email.
Select Import User at Login to allow all users in your AD to use their
credentials to log in to MicroStrategy.
6. In step 5, click Test Configuration to test with the credentials that you
provided above.
7. Click Save. In 2021 Update 2 or later, you see the message shown
below. Workstation automatically creates a trusted relationship and
enables OIDC authentication, along with standard authentication.
If you are using an older build, you may need to manually create
the trusted relationship and enable OIDC authentication mode on the
Library admin page.
http://<FQDN>:<port>/MicroStrategyLibrary/admin
In 2021 Update 2 or newer, the Library admin pages support basic and OIDC
authentication when only OIDC authentication is enabled. The admin pages
authentication is governed by the auth.admin.authMethod parameter in
the WEB-INF/classes/config/configOverride.properties file. If
the parameter is not mentioned in the file, you can add it as shown below.
l auth.admin.authMethod = 1 (Default)
l auth.admin.authMethod = 2
The Library admin pages are protected by the OIDC admin groups
specified in the OIDC configuration form. These admin groups are linked
to groups on the Identity Provider (IDP) side. Only members that belong to
the IDP admin groups can access the admin pages. Users that do not
belong to an admin group receive a 403 Forbidden error.
The administrator can change the parameter value as needed.
A Web application server restart is required for the changes to take effect.
The Library admin pages cannot be protected by the OIDC admin groups
when multiple authentication modes are enabled.
Enable OIDC Logging
4. Locate <filter
class="ch.qos.logback.classic.filter.ThresholdFilter">
and change the level to be "DEBUG".
<filter class="ch.qos.logback.classic.filter.ThresholdFilter">
<level>DEBUG</level>
</filter>
7. Once the behavior you are investigating has been reproduced, edit
logback.xml once again and change level="DEBUG" back to
level="ERROR".
Enable OIDC Authentication
The client ID and native client ID are the same for MicroStrategy Web.
4. Under Claim Map, provide the scope to map IDP users with
MicroStrategy users.
In MicroStrategy 2021 Update 2 or later, the Web admin pages support OIDC
and basic authentication when OIDC authentication is enabled. The admin
pages authentication is governed by the springAdminAuthMethod
parameter located in the WEB-INF/xml/sys_defaults.properties file.
l springAdminAuthMethod = 1 (Default)
l springAdminAuthMethod = 2
The administrator can change the parameter value as needed. A
web application server restart is required for the changes to take effect.
It is not recommended to leave the file path as is: a relative path is
resolved unpredictably, and the log file often cannot be found in the
Web application folder. Use a full file path here to fully control the log
location.
${catalina.home}/webapps/MicroStrategy/WEB-INF/log/OIDC/OIDC.log
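As a sketch only, a logback file appender pinned to this full path could look like the following; the appender name and pattern are illustrative and may differ from the ones already defined in your logback.xml.
<appender name="OIDC" class="ch.qos.logback.core.FileAppender">
<file>${catalina.home}/webapps/MicroStrategy/WEB-INF/log/OIDC/OIDC.log</file>
<encoder>
<pattern>%d{yyyy-MM-dd HH:mm:ss} %-5level %logger{36} - %msg%n</pattern>
</encoder>
</appender>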
2. From File, go to Get data > Get data to get started > MicroStrategy
for Power BI.
3. Click Connect.
4. Enter your REST API URL and add the #OIDCMode parameter to the
end of the URL. For example,
https://mstr.mycompany.com/MicroStrategyLibrary/#OIDCMode.
5. Click OK.
10. Proceed with data import. See MicroStrategy for Power BI for
information.
l Prerequisites
l MicroStrategy Configuration
l End-to-End Testing
l Test Workstation
l Test Library
2. Upload the driver into the JDBC folder on the MicroStrategy Intelligence
server machine (<MSTR_INSTALL_HOME>/JDBC). See the example
paths below.
Linux
/opt/MicroStrategy/JDBC
Windows
Follow the steps below to prepare your application in Okta and Azure AD.
Okta
6. Click Save.
Azure AD
3. Under the Implicit grant and hybrid flows section, select ID tokens.
Create a Custom Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:ListBucketMultipartUploads",
"s3:AbortMultipartUpload",
"s3:ListBucket",
"s3:GetBucketLocation",
"s3:ListMultipartUploadParts"
],
"Resource": [
"arn:aws:s3:::tec-gd-gateway-data",
"arn:aws:s3:::tec-gd-gateway-data/athena",
"arn:aws:s3:::tec-gd-gateway-data/athena/*"
]
}
]
}
2. Under Trust Relationships > Trusted Entities for the IAM role, add
the AWS OIDC Identity Providers created above.
Example:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "GDAzure",
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::xxxxxxx:oidc-
provider/login.microsoftonline.com/4ca8943a-xxxx-xxxx-868e-
c5bdb4d59fee/v2.0"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"login.microsoftonline.com/4ca8943a- xxxx-xxxx -868e-
c5bdb4d59fee/v2.0:aud": "833d15da- xxxx-xxxx -ae3a-ca7a79432950"
}
}
},
{
"Sid": "GDOkta",
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::xxxxxxx:oidc-provider/dev-
xxxxxx.okta.com/oauth2/aus5xhhzgxxxx2ZZ5d7"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"dev-xxxxxx.okta.com/oauth2/aus5xhhzgxxxx2ZZ5d7:aud":
"xxxxxxxx"
}
}
}
]
}
This example includes both Azure AD and Okta. You can find more
details in Configuring a role for GitHub OIDC identity provider.
6. In the left pane, click Privileges and add the following privileges:
l Use Workstation
9. Click Save.
https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategy/servlet/mstrWebAdmin
3. Click Setup next to the trust relationship between the Web server and
MicroStrategy Intelligence server.
4. Enter the user credentials with admin privileges and click Create Trust
Relationship.
6. Under OIDC Configuration, complete the remaining fields. For the Okta
Native app, leave Client Secret empty.
Client Secret This field is only required when the Azure application is a
Web app. If you deployed a Public client/native app in Create a
Custom Policy, you can leave this field blank.
Native Client ID This is the same as the client ID, unless configured
otherwise.
Redirect URI The default web redirect URI. This should not be changed
unless configured otherwise.
Claim Map
l Full Name The user display name attribute. The default value for this
field is name.
l User ID The user distinguished login attribute. The default value for
this field is email.
l Email The user email address attribute. The default value for this
field is email.
l Groups The user group attribute. The default value for this field is
groups.
Example: ["WebAdmin","SystemAdmin"]
Members belonging to WebAdmin and SystemAdmin can access the
admin pages.
Follow the steps in Manage OAuth Enterprise Security with Identity and
Access Management (IAM) Objects to create an enterprise security object.
For Okta, choose Okta from the identity provider drop-down and enter the
Client ID, OAuth URL, and Token URL for your Okta application. Use the
following format for the URLs:
https://dev-xxxxxx.okta.com/oauth2/microstrategy/v1/authorize
https://dev-xxxxxx.okta.com/oauth2/microstrategy/v1/token
4. Enter a Name.
6. Enter a Name, select OAuth as the connection method, and enter the
required connection information.
AWS Region The AWS region of the Athena and AWS Glue instance
that you want to connect to.
AWS Role ARN The Amazon Resource Name (ARN) of the role that
you want to assume when authenticated through JWT. This is the name
you created in Prepare AWS IAM Objects.
9. Click Save.
10. Select the Projects to which the data source is assigned and in which
it can be accessed.
Test Workstation
Test Library
4. Click Create.
2. Click Create.
l Security Setting
1. Open Workstation.
7. When you are finished, confirm the new OIDC/SAML configurations are
listed.
4. Click Save.
Security Setting
Administrators can enable a security setting that mitigates the risk of two
individuals from different IDP servers sharing the same name ID.
4. Once this security setting is enabled, navigate to Users & Groups >
Edit Specific User > Authentication > Namespace (Multi-SSO only)
and select the desired SSO scopes for the user's login. If a user
attempts to log in with an SSO scope different than the one configured
in the system, the login attempt is rejected.
l Create an Application
Create an Application
1. Sign in to the Azure portal. If you have already launched Azure Active
Directory, under Manage, select App registration.
https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategyLibrary/auth/oidc/login
5. Click Register.
l http://127.0.0.1
l com.microstrategy.hypermobile://auth
l com.microstrategy.dossier.mobile://auth
l com.microstrategy.mobile://auth
l https://env-xxxx.customer.cloud.microstrategy.com/MicroStrategyLibrary/static/oidc/success.html
l https://env-xxxxxx.customer.cloud.microstrategy.com:443/MicroStrategy/auth/oidc/login
l https://env-xxxxxx.customer.cloud.microstrategy.com:443/MicroStrategyMobile/auth/oidc/login
7. Click Save.
12. In the navigation pane, locate Manifest and download the manifest file.
13. In the navigation pane, locate Overview and take note of the Client ID
for later.
14. Click Endpoints and copy the OpenID Connect metadata document
field.
15. Add group claims by choosing Token configuration > Add group
claims > ID and save the defined group claim.
https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategyLibrary/admin
4. Log in, deselect Standard, and click Save. For more information, see
Enable OIDC Authentication for MicroStrategy Library.
https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategy/servlet/mstrWebAdmin
3. Click Setup next to the trust relationship between the Web server and
MicroStrategy Intelligence server.
4. Enter the user credentials with admin privileges and click Create Trust
Relationship.
Client Secret This field is only required when the Azure application is a
Web app. If you deployed a Public client/native app in Create an
Application, you can leave this field blank.
Native Client ID This is the same as the client ID, unless configured
otherwise.
Redirect URI The default web redirect URI. This should not be changed
unless configured otherwise.
Claim Map
l Full Name The user display name attribute. The default value for this
field is name.
l User ID The user distinguished login attribute. The default value for
this field is email.
l Email The user email address attribute. The default value for this
field is email.
l Groups The user group attribute. The default value for this field is
groups.
Example: ["WebAdmin","SystemAdmin"]
Members belonging to WebAdmin and SystemAdmin can access the
admin pages.
l Create an Application
Create an Application
4. Click Next.
7. Add the following Web, Mobile, Library and Desktop application URIs
under Sign-in redirect URIs. Replace the environment-specific URIs
with your environment name.
l https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategyLibrary/auth/oidc/login
l com.microstrategy.hypermobile://auth
l com.microstrategy.dossier.mobile://auth
l https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategyLibrary/static/oidc/success.html
l local://plugins
l http://127.0.0.1
l http://127.0.0.1:51892
l http://127.0.0.1:51893
l http://127.0.0.1:51894
l http://127.0.0.1:51895
l http://127.0.0.1:51896
l http://127.0.0.1:51897
l com.microstrategy.mobile://auth
l https://env-xxxxxx.customer.cloud.microstrategy.com:443/MicroStrategy/auth/oidc/login
l https://env-xxxxxx.customer.cloud.microstrategy.com:443/MicroStrategyMobile/auth/oidc/login
8. Click Save.
10. On the General tab, under Client Credentials, take note of the client ID
for future reference.
11. Select the Sign On tab. Under the OpenID Connect ID Token, take note
of the issuer for future reference.
12. Next to OpenID Connect ID Token, click Edit. In the Groups claim
filter, choose Matches regex, enter a value of .*, and click Save.
13. Select the Assignments tab and verify that the users that need to
access the application are assigned.
3. Select Okta as the identity provider from the dropdown in the first step.
4. Verify that all URIs mentioned in the second step are already added to
the Okta application.
5. Provide the Client ID and Issuer for the Okta application in the third
step.
6. Verify the default User claim mappings and Import user at Login
setting.
https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategy/servlet/mstrWebAdmin
3. Click Setup next to the trust relationship between the Web server and
MicroStrategy Intelligence server.
4. Enter the user credentials with admin privileges and click Create Trust
Relationship.
Client Secret This field is only required when the Okta application is a
Web app. If you deployed a Public client/native app in Create an
Application, you can leave this field blank.
Native Client ID This is the same as the client ID, unless configured
otherwise.
Redirect URI The default web redirect URI. This should not be changed
unless configured otherwise.
Claim Map
l Full Name The user display name attribute. The default value for this
field is name.
l User ID The user distinguished login attribute. The default value for
this field is email.
l Email The user email address attribute. The default value for this
field is email.
l Groups The user group attribute. The default value for this field is
groups.
Admin Groups Select the admin groups whose members can access the
admin pages. You can have multiple admin groups.
["WebAdmin","SystemAdmin"]
Members belonging to WebAdmin and SystemAdmin can access the
admin pages.
Restart the Web server after completing all the above steps for the changes
to take effect.
Create an Application
8. Add all four scopes including address, email, phone, and profile.
11. Take note of the Client ID, CLIENT SECRET, and ISSUER under the
Configuration tab. You will need this information later to configure
MicroStrategy with the Ping application.
Google does not expose an OAuth scope to obtain user groups as part of
the OIDC flow. Therefore, MicroStrategy cannot retrieve group information
for Google users and cannot map Google groups to MicroStrategy
administrator groups.
Create Application
3. Under Authorized redirect URIs, enter the Library URL and add
/auth/oidc/login to the end of the URL, as shown below.
https://env-xxxxxx.customer.cloud.microstrategy.com/MicroStrategyLibrary/auth/oidc/login
4. Click Create.
Create iOS OAuth Client ID
4. Click Create.
Create Android OAuth Client ID
6. Click Create.
3. Click Create.
Configure Web Client
1. In Client ID and Client Secret, enter the values from the client you
created in Create Web OAuth Client ID.
2. The Library Web URI is generated automatically. Ensure you add this
URI to the Authorized redirect URLs in the client you created in
Create Web OAuth Client ID.
Configure iOS Client
1. In Client ID, enter the value from the client you created in Create
iOS OAuth Client ID.
2. In Redirect URI Scheme, enter the iOS URL scheme from the
iOS client you created in Create iOS OAuth Client ID.
Configure Android Client
1. In Client ID, enter the value from the client you created in Create
Android OAuth Client ID.
2. In Package Name, enter the Package name from the client you created
in Create Android OAuth Client ID.
Configure Workstation Client
1. In Client ID and Client Secret, enter the values from the client you
created in Create Workstation OAuth Client ID.
Enable OIDC Auth Mode for MicroStrategy Library
5. Click Save.
l http://localhost
4. Under the Implicit grant and hybrid flows section, select the ID tokens
checkbox.
2. Set OIDC as the login mode and complete the required fields.
l For Client ID, go to the app > Overview > Application (client) ID,
and locate the ID.
l For Client Secret, go to the app > Certificates & secrets, and locate
the secret. If necessary, create a new secret.
l For Issuer, go to App > Overview > Endpoints, open the URL of
OpenID Connect metadata document, and copy the issuer value. For
example, https://login.microsoftonline.com/[Directory tenant ID]/v2.0.
l Email: email
l Groups: groups
l For Admin Groups, go to the app > Groups > Overview, and locate
the Object Id. If the Object Id value is set, only users in the group can
access the mstrWebAdmin pages.
{
"iams":[{
"clientId":"XXXXXXX",
"clientSecret":"XXXXXXX",
"nativeClientId": "XXXXXXX",
"id":"test",
"issuer":"https://login.microsoftonline.com/XXXXXXX/v2.0",
"redirectUri":"https://XXXXXXX/MicroStrategyLibrary/auth/oidc/login",
"blockAutoProvisioning": true,
"claimMap": {
"email": "email",
"fullName": "name",
"userId": "upn",
"groups": "groups"
},
"default": true,
"mstrIam": true,
"scopes": [
"openid",
"profile",
"email",
"offline_access"
],
"vendor": {
"name": "MicroStrategy IAM",
"version": "Azure AD"
}
}]
}
click Setup.
2. Set OIDC as the login mode and complete the required fields using the
same values used for OIDC configuration for MicroStrategy Web.
3. Add the Mobile Redirect URI to Azure AD > Snowflake OAuth Client
Application > Authentication > Web Redirect URIs.
You can create Snowflake database instances with or without the project
schema.
With the Project Schema
l In MicroStrategy Developer:
l In MicroStrategy Web:
Database instances created via MicroStrategy can be used for the project
schema, but cannot be used for connection mapping.
Without the Project Schema
To use the database instance without the project schema, you must have
either basic or OAuth authentication.
l In MicroStrategy Developer:
6. Click OK.
l In MicroStrategy Web:
Users must have the Set OAuth parameters for Cloud App sources
privilege under Client-Web.
3. After the database instance is created, you can set the OAuth
parameters in MicroStrategy Web.
l For Client ID, click on the app > Overview > Application
(client) ID, and locate the ID.
l For Client Secret, click on the app > Certificates & secrets,
and locate the secret. If necessary, create a new secret.
l For Directory (tenant) ID, click on the app > Overview, and
locate the ID.
l For Scope, click on the app > API permissions, click on the
API/Permission name, and locate the URL. The URL is in the
format like https://[AzureDomain]/
[id]/session:scope-any.
If you have multiple MicroStrategy users or user groups and want to give
access to the same database instance but with different database logins,
see Controlling Access to the Database: Connection Mappings.
In a primary database connection, users that are not mapped into the
secondary database connection use the default database connection. In a
secondary database connection, users in a specific group use the mapped
database connection.
For example, the administrator uses basic authentication, while other users
use OAuth authentication. All users can use the project schema. You must
set the default connection to use standard authentication for the Warehouse
Catalog to work in Developer:
l In MicroStrategy Developer
l In MicroStrategy Developer
6. Click OK.
7. Click New.
l For Client ID, click on the app > Overview > Application
(client) ID, and locate the ID.
l For Client Secret, click on the app > Certificates & secrets,
and locate the secret. If necessary, create a new secret.
l For Directory (tenant) ID, click on the app > Overview, and
locate the ID.
l For Scope, click on the app > API permissions, click on the
API/Permission name, and locate the URL. The URL is in the
format like https://[AzureDomain]/
[id]/session:scope-any.
2. Open WSAuth.log and if SSO is being used, the log should have
the following content:
Related Content
KB484275: Best practices for using the Snowflake Single Sign-on (SSO)
feature
l Integrate MicroStrategy with Azure SQL Database for Single Sign-On with
OIDC
l Integrate MicroStrategy with Azure SQL Database for Single Sign-On with
SAML
You can create an OAuth application using the steps below or follow
Microsoft's documentation.
5. Enter a name for the client such as Azure SQL OAuth Client.
7. Click Register.
11. For testing purposes, select a client secret that never expires.
15. Select Web and add redirect URIs. Here are some samples:
l https://<FQDN>:<port>/MicroStrategy/servlet/mstrWeb?evt=3172
l http://localhost/
16. In the Implicit grant and hybrid flows section, select ID tokens.
Integrate MicroStrategy with Azure SQL Database for Single Sign-On with
OIDC
l Enable MicroStrategy Web OIDC with Azure AD
l Integrate MicroStrategy with Azure SQL Database for Single Sign-On with
SAML
2. Set OIDC as the login mode and input the necessary values.
c. For Client Secret, go to App > Certificates & secrets. Copy the
value and paste it into Client Secret.
d. For the Issuer, go to App > Overview > Endpoints. Open the
'OpenID Connect metadata document' URL and copy the issuer value.
For example, https://login.microsoftonline.com/[Directory tenant
ID]/v2.0.
h. For Admin Groups, enter the groups' Object Id as the value. If the
value is set, only users in the group can access the MicroStrategy
Web Admin page.
{
"iams":[{
"clientId":"XXXXXXX",
"clientSecret":"XXXXXXX",
"nativeClientId": "XXXXXXX",
"id":"test",
"issuer":"https://login.microsoftonline.com/XXXXXXX/v2.0",
"redirectUri":"https://XXXXXXX/MicroStrategyLibrary/auth/oidc/lo
gin",
"blockAutoProvisioning": true,
"claimMap": {
"email": "email",
"fullName": "name",
"userId": "upn",
"groups": "groups"
},
"default": true,
"mstrIam": true,
"scopes": [
"openid",
"profile",
"email",
"offline_access"
],
"vendor": {
"name": "MicroStrategy IAM",
"version": "Azure AD"
}
}]
}
2. Choose OIDC Authentication for the Login mode and enter the
necessary values.
b. Add the mobile redirect URI in Azure AD > Azure SQL OAuth
Client Application > Authentication > Web Redirect URIs.
Integrate MicroStrategy with Azure SQL Database for Single Sign-On with
SAML
See Integrate MicroStrategy With Snowflake for Single Sign-On With SAML
using Azure AD to integrate MicroStrategy with Azure SQL Database for
Single Sign On with SAML.
See Enable Seamless Login Between Web, Library, and Workstation for
more information.
You can create Azure SQL Database DB instances with or without the
project schema.
With the Project Schema
In MicroStrategy Developer:
In MicroStrategy Web:
1. Choose Add Data > New Data to open the Data Source dialog.
Without the Project Schema
In MicroStrategy Developer:
6. Click OK.
In MicroStrategy Web:
1. Choose Add Data > New Data to open the Data Source dialog.
5. Fill out the required fields. To locate the Client ID, click on the app. Go
to Overview > Application (client) ID, and locate the ID.
6. For the Client Secret, click on the app. Go to Certificates & secrets,
and locate the secret. If necessary, create a new secret.
7. For the Directory (tenant) ID, click on the app. Go to Overview and
locate the ID.
Workstation: http://localhost
Users must have the Set OAuth parameters for Cloud App sources
privilege under Client-Web.
If you have multiple MicroStrategy users or user groups and want to give
access to the same database instance, but with different database logins,
see Controlling Access to the Database: Connection Mappings.
In a primary database connection, users that are not mapped into the
secondary database connection use the default database connection. In a
secondary database connection, users in a specific group use the mapped
database connection.
For example, the administrator uses basic authentication, while other users
use OAuth authentication. All users can use the project schema. You must
set the default connection to use standard authentication for the Warehouse
Catalog to work in Developer:
f. Click OK.
g. Click New.
1. Check to see if the ID token was saved into the user runtime. Open the
MicroStrategy Diagnostics and Performance Logging Tool and enable
the Kernel XML API log.
4. Check the WSAuth.log file. If you are using SSO, the log should look
similar to the one below.
User Mapping
l User ID: MicroStrategy looks for a match of the Name ID to the User ID of
the Trusted Authenticated Request setting.
If no match is found, this means the OIDC user does not yet exist in
MicroStrategy and will be denied access. You can choose to have OIDC
users imported into MicroStrategy if no match is found. See Importing and
Syncing OIDC Users below for more information.
New users and their associated groups can be dynamically imported into
MicroStrategy during application login. You can also configure the
Intelligence server to sync user information for existing MicroStrategy users
each time they log in to an application. The following settings are accessed
from the Intelligence Server Configuration > Web Single Sign-on >
Configuration window in Developer.
Import user and Sync user are not available unless this setting is
turned on.
All users imported this way are placed in the 3rd party users group in
MicroStrategy and are not physically added to any MicroStrategy groups
that match their group membership information.
l Sync user at logon: Allows MicroStrategy to update the fields used for
mapping users with the current information provided by the OIDC
response.
This option also updates all of a user's group information and imports
groups into 3rd party users if matching groups are not found. This may
result in unwanted extra groups being created and stored in the metadata.
Before following the steps below, you must create an Azure App and note
its Client ID, Client Secret, and Directory/Tenant ID. You must also have
access to at least one Azure account.
1. Open the Workstation window with the Navigation pane in smart mode.
3. Log into your environment. You must have Administrator privileges.
5. Click the plus icon (+) next to All Users and enter the required fields.
6. In the left pane, click Privileges and add the following privileges:
l Use Workstation
User ID.
For more information on mapping existing users, see Mapping OIDC Users
to MicroStrategy.
You must set up the gcloud utility to perform the following procedure. For more
information on setting up the gcloud utility, see gcloud CLI overview.
4. Set the workforce pool privileges for your organization's needs and set
the following minimum privileges:
Follow the steps in Manage OAuth Enterprise Security with Identity and
Access Management (IAM) Objects to create an enterprise security object.
Create Google BigQuery JDBC or ODBC Data Source with OAuth OBO
1. Open the Workstation window with the Navigation pane in smart mode.
2. In the Navigation pane, click the plus icon (+) next to Data Sources.
4. Enter a name for the data source and select the project(s) that will use
it.
7. Click Save.
8. Click Save.
1. Open the Workstation window with the Navigation pane in smart mode.
4. In the Navigation pane, click the plus icon (+) next to Datasets.
Check out the following topics to enable single sign-on using Google:
l Known Limitations
To set up OIDC login with Google, see Integrate OIDC Support with Google.
1. Open the Workstation window with the Navigation pane in smart mode.
3. Log into your environment. You must have Administrator privileges.
5. Click the plus icon (+) next to All Users and enter the required fields.
6. In the left pane, click Privileges and add the following privileges:
l Use Workstation
9. Click Save.
For more information on mapping existing users, see Mapping OIDC Users
to MicroStrategy.
4. Select the Google IdP type and register the MicroStrategy environment
as an application with the provided Login Redirect URIs.
5. In the Workstation drop-down, enter the Client ID for each client that
you created in the previous step.
8. Click Save.
4. Enter a Name.
6. Enter a Name.
9. Click Save.
10. Select the Projects to which the data source is assigned and in which
it can be accessed.
d. Click Create.
The Apple Safari web browser does not support Windows authentication
with MicroStrategy Web.
Use the procedures in the rest of this section to enable single sign-on with
Windows authentication in MicroStrategy Web. For high-level steps to
configure these settings, see Steps to Enable Single Sign-On to
MicroStrategy Web Using Windows Authentication, page 542.
You can also create MicroStrategy users from existing Windows users by importing
either user definitions or group definitions.
There are several configurations that you must make to enable Windows
authentication in MicroStrategy Web. To properly configure MicroStrategy
Web, Microsoft Internet Information Services (IIS), and the link between
Microsoft and MicroStrategy users, follow the procedure Steps to Enable
Single Sign-On to MicroStrategy Web Using Windows Authentication, page
542.
Before continuing with the procedures described in the rest of this section, you
must first set up a Windows domain that contains a domain name for each user
that you want to allow single sign-on access to MicroStrategy Web with
Windows authentication.
The steps to perform this configuration are provided in the procedure below,
which may vary depending on your version of IIS. The following links can
help you find information on how to enable integrated authentication for your
version of IIS:
If you are using IIS 7 on Windows Server 2008, ensure the following:
3. Select the Directory Security tab, and then under Anonymous access
and authentication control, click Edit.
6. Click OK.
2. Click the ISAPI Filters tab. A list of ISAPI filters for your IIS installation
is shown.
3. Click Add.
1. In IIS, select the default web site. The Default Web Site Home page is
shown.
2. In the Default Web Site Home page, double-click ISAPI Filters. A list of
ISAPI filters for your IIS installation is shown.
4. In the Filter name field, type a name for the filter. For example,
MicroStrategy ISAPI Filter.
8. Click OK.
3. Navigate to the MicroStrategy user you want to link a Windows user to.
Right-click the MicroStrategy user and select Edit.
l Click Browse to select the user from the list of Windows users
displayed.
6. Click OK.
When using LDAP with MicroStrategy, you can reduce the number of times a
user needs to enter the same login and password by linking their Windows
system login with their LDAP login used in MicroStrategy.
For example, a user logs in to their Windows machine with a linked LDAP
login and password and is authenticated. The user then opens Developer
and connects to a project source using Windows authentication. Rather than
having to enter their login and password again to log in to MicroStrategy, the
login and password that were authenticated when the user logged in to their
machine are used to authenticate the user. During this process, the user account
and any relevant user groups are imported and synchronized for the user.
The LDAP Server is configured as the Microsoft Active Directory Server domain
controller, which stores the Windows system login information.
3. Expand the LDAP category, then expand Import, and then select
Options.
5. Click OK.
4. Click OK.
There are two ways to enable access to MicroStrategy Web using Windows
authentication. Access can be enabled for the MicroStrategy Web
application as a whole, or it can be enabled for individual projects at the
project level.
4. Click Save.
2. At the upper left of the page, click the MicroStrategy icon, and select
Preferences.
6. Click Apply.
l For Internet Explorer, you must enable integrated authentication for the
browser, as well as add the MicroStrategy Web server URL as a trusted
site. Depending on your security policy, integrated authentication may be
l For Firefox, you must add the MicroStrategy Web server URL as a trusted
site. The URL must be listed in the about:config page, in the settings
network.negotiate-auth.trusted-uris and network.negotiate-
auth.delegation-uris.
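As an illustration, both settings can list the MicroStrategy Web server host; mstrweb.example.com below is a placeholder.
network.negotiate-auth.trusted-uris = mstrweb.example.com
network.negotiate-auth.delegation-uris = mstrweb.example.com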
l MicroStrategy Web
l MicroStrategy Mobile
For more information, see the MicroStrategy for Office page in the
Readme and the MicroStrategy for Office Help.
In this security model, there are several layers. For example, when a user
logs in to Tivoli, Tivoli determines whether the user's credentials are valid. If
the user logs in with valid credentials to Tivoli, the user directory (such as
LDAP) determines whether that valid user can connect to MicroStrategy. The
user's MicroStrategy privileges are stored within the MicroStrategy Access
Control List (ACL). What a user can and cannot do within the MicroStrategy
application is stored on Intelligence Server in the metadata within these
ACLs. For more information about privileges and ACLs in MicroStrategy, see
Chapter 2, Setting Up User Security.
The distinguished name of the user passed from the third-party provider is
URL-decoded by default within MicroStrategy Web, Mobile, or Web Services
before it is passed to the Intelligence Server.
You must complete all of the following steps to ensure proper configuration
of your authentication provider and MicroStrategy products.
For steps to create a junction (in Tivoli), a Web Agent (in SiteMinder), or a
Webgate (Oracle Access Manager), refer to the product's documentation.
If you use Internet Information Services (IIS) as your web server for
MicroStrategy Web or Web Services, you must enable anonymous
authentication to the MicroStrategy virtual directories to support SSO
authentication to MicroStrategy Web, Mobile, or Office. This is discussed in
Enabling Anonymous Authentication for Internet Information Services, page
563.
3. Scroll down to the Login area and, under Login mode, select the
Enabled check box next to Trusted Authentication Request. Also
select the Default option next to Trusted Authentication Request, as
shown below:
5. Click Save.
4. Click Save.
If you are using multiple Intelligence Server machines in a cluster, you must
first set up the cluster, as described in Chapter 9, Cluster Multiple
MicroStrategy Servers, and then establish trust between Web or Mobile
Server and the cluster.
l Web administration
6. Type a User name and Password in the appropriate fields. The user
must have administrative privileges for MicroStrategy Web or Mobile,
as applicable.
For example, you can use the URLs for the applications using Tivoli, as
follows:
MicroStrategy Web:
https://MachineName/JunctionName/MicroStrategy/asp
MicroStrategy Mobile:
https://MachineName/JunctionName/MicroStrategyMobile/asp
To Verify the Trust Relationship
4. On the left, expand the Web Single Sign-on category, and verify that
the trusted relationship is listed in the Trusted Web Application
Registration list.
5. Click OK.
8. Click Save.
2. In the Microsoft Office ribbon, under the MicroStrategy Office tab, click
MicroStrategy Office. MicroStrategy Office starts, with a list of project
sources you can connect to.
3. From the list of project sources on the left, select the project source you
want to enable trusted authentication for.
4. In the right pane, enter the login ID and password for a user with
administrative privileges, and click Get Projects. A list of projects is
displayed.
1. In the Web Services URL field, enter the URL for the Tivoli Junction or
SiteMinder Web Agent, as applicable, that you created for
MicroStrategy Web Services.
2. Click OK.
To Choose a Trusted Authentication Provider
l If you are using IIS as your application server, open the web.config
file in a text editor, such as Notepad. By default, the file is located in
TRUSTEDAUTHPROVIDER=1
If you are using a custom authentication provider, you must make additional
modifications to the custom_security.properties file, which is located
by default in C:\Program Files (x86)\MicroStrategy\Web
Services\resources. For information on these modifications, refer to the
MicroStrategy Developer Library (MSDL).
<ProjectSource>
<ProjectSourceName>Name</ProjectSourceName>
<ServerName>Name</ServerName>
<AuthMode>MWSSimpleSecurityPlugIn</AuthMode>
<PortNumber>0</PortNumber>
</ProjectSource>
4. Save projectsources.xml.
If you use Internet Information Services (IIS) as your web server, you must
enable anonymous authentication to the MicroStrategy virtual directory to
support SSO authentication to MicroStrategy Web, Web Services or Mobile.
The steps to perform this configuration are provided below and may vary
depending on your version of IIS. See the Microsoft documentation for more
information about using anonymous authentication with IIS.
l IIS 7
l IIS 8
l IIS 10
5. Click OK.
6. Click OK.
l Allowed guest access to MicroStrategy Web. The Tivoli user inherits the
privileges of the Public/Guest group in MicroStrategy. Guest access to
MicroStrategy Web is not necessary for imported or linked Tivoli users.
For steps to perform this configuration, see Enabling Guest Access to
MicroStrategy Web or Mobile for Tivoli Users, page 567.
l Security privileges are not imported from Tivoli; these must be defined in
MicroStrategy by an administrator.
6. Click OK.
As an alternative to importing users, you can link (or associate) Tivoli users
to existing MicroStrategy users to retain the existing privileges and
configurations defined for the MicroStrategy users. Linking Tivoli users
rather than enabling Tivoli users to be imported when they log in to
MicroStrategy Web enables you to assign privileges and other security
settings for the user prior to their initial login.
3. In the folder list on the left, expand Administration, and then expand
User Manager.
The name you type in the User ID field should be the same as the one
that the user employs when providing their Tivoli login credentials.
8. Click OK.
If you choose to not import or link Tivoli users to a MicroStrategy user, you
can enable guest access to MicroStrategy Web for the Tivoli users. Guest
users inherit their privileges from the MicroStrategy Public/Guest group.
You are logged in to the MicroStrategy project with your Tivoli user
credentials.
If you are prompted to display both secure and non-secure items on the
web page, you can configure your web browser to hide this warning
message. Refer to your web browser documentation regarding this
configuration.
l The object names correspond to the bean IDs in the configuration XML
files. You must view the configuration files to identify which Java classes
define those beans.
2. The multi-mode login filter recognizes this is a SAML login request and
delegates the work to the mstrMultModeFilter SAML login filter
bean.
Bean Descriptions
Bean ID: mstrEntryPoint
Java class: com.microstrategy.auth.saml.authnrequest.SAMLEntryPointWrapper
Description: A subclass of LoginUrlAuthenticationEntryPoint that performs a redirect to the URL set in the constructor by the String redirectFilterUrl parameter.

Bean ID: mstrSamlAuthnRequestFilter
Java class: org.springframework.security.saml2.provider.service.web.Saml2WebSsoAuthenticationRequestFilter
Description: By default, this filter responds to the /saml/authenticate/** endpoint. The result is a redirect that includes a SAMLRequest parameter containing the signed, deflated, and encoded <saml2:AuthnRequest>.
Customization
Prior to /saml/authenticate
To customize before /saml/authenticate redirect:
<bean id="mstrSamlAuthnRequestFilter"
class="MySAMLAuthenticationRequestFilter">
<constructor-arg ref="samlAuthenticationRequestContextResolver"/>
</bean>
package com.microstrategy.custom.auth;

import ...;

public class MyAuthnRequestCustomizer extends SAMLAuthenticationAuthnRequestCustomizer {
    @Override
    public void accept(OpenSaml4AuthenticationRequestResolver.AuthnRequestContext authnRequestContext) {
        super.accept(authnRequestContext);
        AuthnRequest authnRequest = authnRequestContext.getAuthnRequest();
        // Add your AuthnRequest customization here...
    }
}
<bean id="authnRequestCustomizer"
class="com.microstrategy.custom.auth.MyAuthnRequestCustomizer"/>
5. The SAML processing filter calls the login success handler, which
redirects the browser to the original request.
Bean Descriptions
Bean ID: mstrSamlProcessingFilter
Java class: org.springframework.security.saml2.provider.service.web.authentication.Saml2WebSsoAuthenticationFilter
Description: The core filter that is responsible for handling the SAML login response (SAML assertion) that comes from the IDP server.

Bean ID: samlAuthenticationProvider
Java class: com.microstrategy.auth.saml.response.SAMLAuthenticationProviderWrapper
Description: This bean is responsible for authenticating a user based on information extracted from the SAML assertion.

Bean ID: samlIserverCredentialsProvider
Java class: com.microstrategy.auth.saml.SAMLIServerCredentialsProvider
Description: This bean is responsible for creating and populating an IServerCredentials instance that defines the credentials for creating Intelligence server sessions. The IServerCredentials object is passed to the Session Manager's login method, which creates the Intelligence server session.
Customization
The following content uses the real class name, instead of the bean name.
You can find the bean name in SAMLConfig.xml.
2. Override the convert method and call super.convert, which can get
com.microstrategy.auth.saml.response.SAMLAuthenticationToken, a
subclass of Saml2AuthenticationToken.
3. Extract the information from the raw request, then return an instance
that is a subclass of Saml2AuthenticationToken:
<bean id="samlAuthenticationConverter"
class="com.microstrategy.custom.MySAMLConverter">
<constructor-arg ref="saml2AuthenticationConverter"/>
</bean>
The authentication provider authenticates a user based on information extracted
from a SAML assertion and logs in to the Intelligence Server by calling the
internal login method.
You can customize this login process at the following three time points:
<bean id="samlAuthenticationProvider"
class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
<property name="assertionValidator" ref="samlAssertionValidator"/>
<property name="responseAuthenticationConverter"
ref="samlResponseAuthenticationConverter"/>
</bean>
@Override
public Authentication authenticate(Authentication authentication)
        throws AuthenticationException {
    // >>>> Do something before or after calling super.authenticate()
    Authentication authResult = super.authenticate(authentication);
    return authResult;
}
}
<bean id="samlAuthenticationProvider"
class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
<property name="assertionValidator" ref="samlAssertionValidator"/>
<property name="responseAuthenticationConverter"
ref="samlResponseAuthenticationConverter"/>
</bean>
It is common for your web and IDP servers to have system clocks that are
not perfectly synchronized. You can configure the default
SAMLAssertionValidator assertion validator with some tolerance.
<bean id="samlAssertionValidator"
class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
<property name="responseSkew" value="300"/>
</bean>
<bean id="samlAssertionValidator"
class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
<property name="maxAuthenticationAge" value="2592000"/><!-- 30 days
-->
</bean>
The new Spring SAML framework performs minimal validation on SAML 2.0
assertions. After verifying the signature, the Spring SAML framework:
2. Perform your own validation. For example, you can use OpenSAML's
OneTimeUseConditionValidator to also validate a <OneTimeUse>
condition:
<bean id="samlAssertionValidator"
class="com.microstrategy.custom.MySAMLAssertionValidator">
<property name="maxAuthenticationAge" value="2592000"/><!-- 30
days -->
<property name="responseSkew" value="300"/>
</bean>
To set properties, see Set a Clock Skew for Timestamp Validation or Set an
Authentication Age for Timestamp Validation.
Customize Intelligence Server Credentials Object with the SAML Assertion
Information
package com.microstrategy.custom.auth;
import ...;
public class MySAMLUserDetailsService extends
SAMLIServerCredentialsProvider {
@Override
public Object loadUserBySAML(SAMLCredential samlCredential) throws
AuthenticationException {
SAMLIServerCredentials iServerCredentials =
(SAMLIServerCredentials) super.loadUserBySAML(samlCredential);
return iServerCredentials;
}
}
<bean id="samlIserverCredentialsProvider"
class="com.microstrategy.auth.saml.SAMLIServerCredentialsProvider">
<!-- SAML Attribute mapping -->
<property name="displayNameAttributeName" value="DisplayName" />
<property name="dnAttributeName" value="DistinguishedName" />
<property name="emailAttributeName" value="EMail" />
<property name="groupAttributeName" value="Groups" />
package com.microstrategy.custom.auth;
import ...;
public class MySAMLUserDetailsService implements SAMLUserDetailsService {
@Override
public Object loadUserBySAML(SAMLCredential samlCredential) throws
UsernameNotFoundException {
SAMLIServerCredentials iServerCredentials = new
SAMLIServerCredentials();
return iServerCredentials;
}
@Override
public void loadSAMLProperties(SAMLConfig samlConfig) {
// load attributes from MstrSamlConfig.xml from start up, so that it could be utilized by `loadUserBySAML(...)`
}
}
<bean id="samlIserverCredentialsProvider"
class="com.microstrategy.custom.auth.MySAMLUserDetailsService">
l The object names correspond to the bean IDs in the configuration XML
files. You must view the configuration files to identify which Java classes
define those beans.
Bean Descriptions
Bean ID: mstrEntryPoint
Java class: com.microstrategy.auth.saml.authnrequest.SAMLEntryPointWrapper
Description: A subclass of LoginUrlAuthenticationEntryPoint that performs a redirect to the URL set in the constructor by the String redirectFilterUrl parameter.

Bean ID: mstrSamlAuthnRequestFilter
Java class: org.springframework.security.saml2.provider.service.web.Saml2WebSsoAuthenticationRequestFilter
Description: By default, this filter responds to the /saml/authenticate/** endpoint. The result is a redirect that includes a SAMLRequest parameter containing the signed, deflated, and encoded <saml2:AuthnRequest>.
Customization
<bean id="mstrSamlAuthnRequestFilter"
class="MySAMLAuthenticationRequestFilter">
<constructor-arg ref="samlAuthenticationRequestContextResolver"/>
</bean>
package com.microstrategy.custom.auth;

import ...;

public class MyAuthnRequestCustomizer extends SAMLAuthenticationAuthnRequestCustomizer {
    @Override
    public void accept(OpenSaml4AuthenticationRequestResolver.AuthnRequestContext authnRequestContext) {
        super.accept(authnRequestContext);
        AuthnRequest authnRequest = authnRequestContext.getAuthnRequest();
        // Add your AuthnRequest customization here...
    }
}
<bean id="authnRequestCustomizer"
class="com.microstrategy.custom.auth.MyAuthnRequestCustomizer"/>
5. The SAML processing filter calls the login success handler, which
redirects the browser to the original request.
Bean Descriptions
Bean ID: mstrSamlProcessingFilter
Java class: com.microstrategy.auth.saml.response.SAMLProcessingFilterWrapper
Description: The core filter that is responsible for handling the SAML login response (SAML assertion) that comes from the IDP server.

Bean ID: samlAuthenticationProvider
Java class: com.microstrategy.auth.saml.response.SAMLAuthenticationProviderWrapper
Description: This bean is responsible for authenticating a user based on information extracted from the SAML assertion.

Bean ID: userDetails
Java class: com.microstrategy.auth.saml.SAMLUserDetailsServiceImpl
Description: This bean is responsible for creating and populating an IServerCredentials instance that defines the credentials for creating Intelligence server sessions. The IServerCredentials object is saved to the HTTP session, which is used to create the Intelligence server session for future requests.
Customization
The following content uses the real class name, instead of the bean name.
You can find the bean name in SAMLConfig.xml.
2. Override the convert method and call super.convert, which can get
com.microstrategy.auth.saml.response.SAMLAuthenticationToken, a
subclass of Saml2AuthenticationToken.
3. Extract the information from the raw request, then return an instance
that is a subclass of Saml2AuthenticationToken:
<bean id="samlAuthenticationConverter"
class="com.microstrategy.custom.MySAMLConverter">
<constructor-arg ref="saml2AuthenticationConverter"/>
</bean>
You can customize this login process at the following three time points:
<bean id="samlAuthenticationProvider"
class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
<property name="assertionValidator" ref="samlAssertionValidator"/>
<property name="responseAuthenticationConverter"
ref="samlResponseAuthenticationConverter"/>
</bean>
com.microstrategy.auth.saml.response.SAMLAuthenticationProvider and
overrides the authenticate method:
<bean id="samlAuthenticationProvider"
class="com.microstrategy.custom.MySAMLAuthenticationProviderWrapper">
<property name="assertionValidator" ref="samlAssertionValidator"/>
<property name="responseAuthenticationConverter"
ref="samlResponseAuthenticationConverter"/>
</bean>
@Override
public Authentication attemptAuthentication(HttpServletRequest request,
        HttpServletResponse response) throws AuthenticationException {
    Authentication authResult = super.attemptAuthentication(request, response);
    // >>>> Do something after the user login ---> Point ③ in the above diagram
    return authResult;
}
}
<bean id="mstrSamlProcessingFilter"
class="com.microstrategy.custom.MySAMLProcessingFilterWrapper">
<constructor-arg ref="samlAuthenticationConverter" />
<property name="authenticationManager" ref="authenticationManager" />
<property name="authenticationSuccessHandler"
ref="successRedirectHandler" />
<property name="authenticationFailureHandler"
ref="failureRedirectHandler" />
<property name="requiresAuthenticationRequestMatcher"
ref="samlSsoMatcher" />
</bean>
It is common for your web and IDP servers to have system clocks that are
not perfectly synchronized. You can configure the default
SAMLAssertionValidator assertion validator with some tolerance.
<bean id="samlAssertionValidator"
class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
<property name="responseSkew" value="300"/>
</bean>
<bean id="samlAssertionValidator"
class="com.microstrategy.auth.saml.response.SAMLAssertionValidator">
<property name="maxAuthenticationAge" value="2592000"/><!-- 30 days
-->
</bean>
The new Spring SAML framework performs minimal validation on SAML 2.0
assertions. After verifying the signature, the Spring SAML framework:
2. Perform your own validation. For example, you can use OpenSAML's
OneTimeUseConditionValidator to also validate a <OneTimeUse>
condition:
<bean id="samlAssertionValidator"
class="com.microstrategy.custom.MySAMLAssertionValidator">
<property name="maxAuthenticationAge" value="2592000"/><!-- 30
days -->
<property name="responseSkew" value="300"/>
</bean>
To set properties, see Set a Clock Skew for Timestamp Validation or Set an
Authentication Age for Timestamp Validation.
Customize Intelligence Server Credentials Object with the SAML Assertion
Information
package com.microstrategy.custom.auth;
import ...;
public class MySAMLUserDetailsService extends SAMLUserDetailsServiceImpl
{
@Override
public Object loadUserBySAML(SAMLCredential samlCredential) throws
UsernameNotFoundException {
SAMLIServerCredentials iServerCredentials =
(SAMLIServerCredentials) super.loadUserBySAML(samlCredential);
return iServerCredentials;
}
}
<bean id="userDetails"
class="com.microstrategy.custom.auth.MySAMLUserDetailsService">
<!-- SAML Attribute mapping -->
<property name="displayNameAttributeName" value="DisplayName" />
<property name="dnAttributeName" value="DistinguishedName" />
<property name="emailAttributeName" value="EMail" />
<property name="groupAttributeName" value="Groups" />
package com.microstrategy.custom.auth;
import ...;
public class MySAMLUserDetailsService implements SAMLUserDetailsService {
@Override
public Object loadUserBySAML(SAMLCredential samlCredential) throws
UsernameNotFoundException {
SAMLIServerCredentials iServerCredentials = new
SAMLIServerCredentials();
return iServerCredentials;
}
@Override
public void loadSAMLProperties(SAMLConfig samlConfig) {
// load attributes from MstrSamlConfig.xml from start up, so that it could be utilized by `loadUserBySAML(...)`
}
}
<bean id="userDetails"
class="com.microstrategy.custom.auth.MySAMLUserDetailsService">
The users in your LDAP directory can log into MicroStrategy Web by:
l Scanning a QR code using the Badge app on their smart phones, if Badge
is configured as the primary authentication method.
The high-level steps to enable Badge authentication for Web and Mobile are
as follows:
2. Add your LDAP directory to your Identity network. For steps to add your
LDAP directory to Identity, see the Identity Help.
3. If you are importing users from LDAP, connect LDAP by leveraging the
connection between LDAP and your MicroStrategy Identity Server.
Alternatively, you can manually connect your LDAP directory to
MicroStrategy. Otherwise, import your MicroStrategy user data into the
Identity network. For more information, see the Identity Help.
You have created an Identity network and badges for your users. Your network
is the group of users in your organization who can use the Badge app on their
smart phone to validate their identity to log into MicroStrategy. For steps to
create an Identity network, see the Identity Help.
4. To change the image that is displayed on the login page when users
open MicroStrategy Web, click Import an Icon. Select an image to
display and click Open.
7. Note the values for Organization ID, Application ID, and Token. You
use these values to configure MicroStrategy Intelligence Server.
8. Click Done.
You have upgraded your MicroStrategy metadata. For steps to upgrade your
MicroStrategy metadata, see the Upgrade Help.
If you are enabling two-factor authentication for Web using Badge, you have
added at least one user to the Two-factor Exempt (2FAX) user group in your
MicroStrategy project. MicroStrategy users who are members of the Two-factor
Exempt (2FAX) group are exempt from two-factor authentication, and do not
need to provide a Badge Code to log into MicroStrategy Web. It is
recommended that these users have a secure password for their accounts and
use their accounts for troubleshooting MicroStrategy Web.
Ensure that you configure your LDAP server information correctly in your
Intelligence Server. If it is not configured correctly, two-factor authentication
cannot be used and therefore users will not be able to log into the server.
1. From the Windows Start menu, select All Programs > MicroStrategy
Tools > Web Administrator.
3. Click Setup.
4. Click Save.
9. Click Save.
10. Return to the Mobile Configuration page and repeat the modify steps
for each other configuration name where you want to enable Badge
authentication.
Important Considerations
The following are some points to keep in mind while configuring seamless
login between Web and Library.
l For Web and Library configuration, use the same Intelligence Server.
2. Go to Preferences.
4. Click Apply.
6. Click Save.
5. Enter your MicroStrategy Web URL into the Link field under
MicroStrategy Web
(<FQDN>:<port>/MicroStrategy/servlet/mstrWeb).
6. Click Save.
Related Topics
KB485196: A seamless login error occurs when launching MicroStrategy
Library from MicroStrategy Web
MicroStrategyLibrary/WEB-
INF/classes/config/configOverrides.properties
2. Set the default mode to one of the corresponding values below. When
setting auth.modes.default, make sure the value is one of the
auth.modes.available.
Mode        Value
Standard    1
Guest       8
LDAP        16
Trusted     64
Integrated  128
SAML        1048576
OIDC        4194304
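For example, a minimal configOverrides.properties entry that offers Standard and SAML login and defaults to SAML might look like the following. The comma-separated format and the specific values shown here are illustrative; adjust them to the modes you actually allow.

auth.modes.available=1,1048576
auth.modes.default=1048576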
This is done anonymously because the user has not yet logged in to a
specific project. Because a warehouse database is not associated with
the project source itself, users are not authenticated until they select a
project to use. For more information about anonymous authentication,
including instructions on enabling it for a project source, see Implement
Anonymous Authentication, page 158.
2. The user selects a project, and then logs in to that project using their
data warehouse login ID and password. They are authenticated against
the data warehouse database associated with that project.
Authentication Examples
Below are a few examples of how the different methods for user
authentication can be combined with different methods for database
authentication to achieve the security requirements of your MicroStrategy
system. These examples illustrate a few possibilities; other combinations
are possible.
4. Link users to their respective database user IDs using the Warehouse
passthrough Login and Warehouse passthrough password boxes
for each user. For details on each option, click Help.
Bank that only has access to its corresponding table. Depending on the user
ID used to log in to the RDBMS, a different table is used in SQL queries.
Although there are only a small number of user IDs in the RDBMS, there are
many more users who access the MicroStrategy application. When users
access the MicroStrategy system, they log in using their MicroStrategy user
names and passwords. Using connection maps, Intelligence Server uses
different database accounts to execute queries, depending on the user who
submitted the report.
Security Configurations in MicroStrategy
MicroStrategy's software platform ships with hardened security configurations by default where possible. In many cases, security configurations depend on the infrastructure where the system is operated; in other cases, they depend on organizational requirements and unique operating environments. This section documents the security configurations available to further harden a MicroStrategy deployment.
Secure Communication in MicroStrategy
You must have the private key file that you created while requesting a
certificate for Intelligence Server.
1. From the Start menu, choose All Programs > MicroStrategy Tools >
Configuration Wizard.
4. In the SSL Configuration page, enable the Configure SSL check box.
5. Click the button next to the Certificate field and browse to the
certificate you created for Intelligence Server.
6. Click the button next to the Key field and browse to the private key file
you created while requesting the certificate for Intelligence Server.
7. In the Password field, type the password that you used while creating
the private key for the certificate.
8. In the SSL Port field, type the port number to use for SSL access. By
default, the port is 39321.
CA Signed Public
Signed by: a public certification authority, for example, Thawte
Client Truststore: <JRE>/lib/security/cacerts
l No additional server configuration required.
l Configuring secure communication for MicroStrategy Web and Mobile Server, Developer, and client applications.
l Configure secure communication for MicroStrategy Library on Windows.

CA Signed Enterprise
Signed by: self-signed Enterprise root CA
Client Truststore: /WEB-INF/trusted.jks
l Enterprise root certificate must be added to each client Truststore. Contact your IT Administrator for a copy of your enterprise CA certificate chain.
l Configuring secure communication for MicroStrategy Web and Mobile Server, Developer, and client applications.
l Configure secure communication for MicroStrategy Library on Windows.

Self-Signed Certificate
Signed by: self-signed by the certificate creator
Client Truststore: /WEB-INF/trusted.jks
l Certificate must be added to the client Truststore.
l Truststore must contain a certificate from each Intelligence Server.
l Configuring secure communication for MicroStrategy Web and Mobile Server, Developer, and client applications.
l Configure secure communication for MicroStrategy Library on Windows.
Once you have populated the Keystore on Intelligence Server with your SSL
certificate and private key, follow the steps below to add the necessary
certificate to the client Truststore.
<MICROSTRATEGY_JRE>/bin/keytool -importcert -trustcacerts -alias "<certificate_common_name>" -keystore trusted.jks -storepass mstr123 -file cert.pem
l Any alias value may be used, but the certificate common name is
recommended, as long as the alias is unique in the Truststore.
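To confirm that the certificate was imported, you can list the contents of the Truststore with the same keytool utility. This is a verification sketch that reuses the keystore and password shown in the command above:

<MICROSTRATEGY_JRE>/bin/keytool -list -v -keystore trusted.jks -storepass mstr123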
1. Create a simple text file, for example truststore.txt, and add the
certificate information for each certificate created during client
Keystore creation.
5. Click the Browse button next to the Truststore field and select the
truststore.txt file containing your client certificate information.
For steps to configure SSL on your application server, see the link below to
view the official documentation for your server type.
1. From the Start menu choose All Programs > MicroStrategy Tools and
select Web Administrator or Mobile Administrator.
4. Click Save.
l You must use the Configuration Wizard to set up SSL for Intelligence Server,
as described in Secure Communication in MicroStrategy.
l Your CA's SSL certificate. If you are using a commercial CA, refer to their
documentation for instructions to download their certificate.
l A .pem certificate containing both the SSL certificate and the CSR for
Intelligence Server.
You must perform the following tasks to verify the server's certificate:
4. Click OK.
5. For the Mobile Server that has SSL enabled, from the Request Type
drop-down list, select HTTPS.
6. Click Save.
7. Repeat this procedure for every configuration that includes the above
Mobile Server.
If you are using MicroStrategy 2021 Update 2 or a later version, the legacy
MicroStrategy Office add-in cannot be installed from Web.
For more information, see the MicroStrategy for Office page in the Readme
and the MicroStrategy for Office Help.
3. In the Web Services URL field, replace the http:// prefix with
https://.
4. Click OK.
You can enable an HTTPS connection between the Refine Server and Web Server to secure the data wrangling service. For ASP and IIS servers, the default connection between the Refine Server and Web Server is HTTP, so you must complete the following steps to manually enable HTTPS.
2. In the webapp folder, create a new folder and put the server keystore
file in this folder.
Server]. If the Refine Server does not exist, then create one.
2. Create a folder named keystore and put the client TrustStore file (created in Prepare the Self-Signed Certificate) in this folder.
3. Restart Tomcat.
Troubleshooting
2. Create a string value and name it after your log file. For example, https.
3. Create a log file with the name you entered in the registry. For example, https.log.
4. Put the log file under the Intelligence Server installation path,
C:\Program Files (x86)\MicroStrategy\Intelligence
Server\.
"OtherOptions"="-Drefine.verbosity=info"
1. In your browser, enter the URL to access Web and Web Services. By
default, these are:
l Web (J2EE):
http://hostname/MicroStrategy/servlet/mstrWeb, where
hostname is the name of the server that Web is running on.
l Web Services:
http://hostname/MicroStrategyWS/MSTRWS.asmx, where
hostname is the name of the server that Web Services is running on.
Converting Files
To set up SSL for your MicroStrategy environment, you will need to have
your certificates and key files in .pem, .crt, and .key formats. If you have
files from your IT administrator that do not have these extensions, they must
be converted.
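As an illustration, assuming the OpenSSL utility described below is available, a DER-encoded certificate or a PKCS#12 bundle can typically be converted to .pem with commands such as these (file names are placeholders):

openssl x509 -inform der -in certificate.der -out certificate.pem
openssl pkcs12 -in certificate.pfx -out certificate.pem -nodes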
You can set up a CA server using the OpenSSL utility. If you are using a
UNIX or Linux machine, OpenSSL should be installed by default. If you are
using a Windows machine, you can download the OpenSSL utility from
http://www.openssl.org/.
l Create the directories and configuration files for the CA. See Creating the
Directories and Configuration Files for Your CA, page 644.
l Create the server's private key and root certificate. See Creating the
Private Key and Root Certificate for the CA, page 646.
l Configure OpenSSL to use the server's private key and certificate to sign
certificate requests. See Configuring OpenSSL to Use your Private Key
and Root Certificate, page 647.
private: A subdirectory to store the CA's private key. For example, devCA/private.
newcerts: A subdirectory to store the new certificates in an unencrypted format. For example, devCA/newcerts.
2. In the root directory for the CA, use a text editor to create the following
files:
Filename: serial (no extension)
Description: Contains the serial number for the next certificate. When you create the file, you must add the serial number for the first certificate. For example, 01.
The default installation folder may depend on the distribution you are
using. For example, for Red Hat Enterprise Linux, the default folder
is /etc/pki/tls.
This procedure assumes that you have followed all the steps in Creating the
Directories and Configuration Files for Your CA, page 644.
2. To create the private key and root certificate, type the following
command, and press Enter:
Where:
l devCApath: The root directory for your CA, which is created as part
of the procedure described in Creating the Directories and
Configuration Files for Your CA, page 644. For example,
/etc/pki/tls/devCA.
3. You are prompted for a pass-phrase for the key, and for information
about your CA, such as your location, organization name, and so on.
Use a strong pass-phrase to secure your private key, and type the
required information for the CA. The private key and root certificate are
created.
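The original command is not reproduced above. For reference only, an OpenSSL invocation of this general form, using the devCA directory and openssl.dev.cnf configuration file described in this procedure, creates a private key and self-signed root certificate; the output file names are assumptions based on common OpenSSL conventions:

openssl req -new -x509 -config devCApath/openssl.dev.cnf -keyout devCApath/private/cakey.pem -out devCApath/cacert.pem -days 3650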
This procedure assumes that you have completed the following steps:
l Create the files and directory structure for your CA, including a copy of the
default OpenSSL configuration file, as described in Creating the Directories
and Configuration Files for Your CA, page 644.
l Create a private key and root certificate for your CA, as described in Creating
the Private Key and Root Certificate for the CA, page 646.
1. Use a text editor, such as Notepad, to open the copy of the OpenSSL
configuration file in your CA's root directory. For example,
openssl.dev.cnf.
l dir: Change this value to the root folder that you created for your
CA. For example, /etc/pki/tsl/devCA.
If you are using a UNIX or Linux machine, the OpenSSL utility should be
installed by default. If you are using a Windows machine, you can download
the OpenSSL utility from http://www.openssl.org/.
2. Type a secure pass-phrase for the key, and press Enter. The key file is
created.
Where Server_key.key is the private key file that you created, and
Server_CSR is the CSR file.
When you have entered all the required information, the CSR file is created.
3. Repeat this procedure for every application that you need a certificate
for.
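The guide's original commands are not shown above. As an illustration only, a private key and certificate signing request matching the file names referenced in this procedure could be generated with OpenSSL as follows:

openssl genrsa -des3 -out Server_key.key 2048
openssl req -new -key Server_key.key -out Server_CSR.csr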
This procedure assumes that you have completed the following steps:
l Create the files and directory structure for your CA, including a copy of the
default OpenSSL configuration file, as described in Creating the Directories
and Configuration Files for Your CA, page 644.
l Create a private key and root certificate for your CA, as described in Creating
the Private Key and Root Certificate for the CA, page 646.
l Configure OpenSSL to use the private key and root certificate, as described
in Configuring OpenSSL to Use your Private Key and Root Certificate, page
647.
l Create a certificate signing request (CSR file) for the applications that
require SSL certificates, as described in Generating an SSL Certificate
Signing Request, page 648. Copy the CSR file to the server that hosts your
CA.
The default installation folder may depend on the distribution you are
using. For example, for Red Hat Enterprise Linux, the default folder
is /etc/pki/tls.
Where:
l devCApath: The root directory for your CA, which is created as part
of the procedure described in Creating the Directories and
Configuration Files for Your CA, page 644. For example,
/etc/pki/tls/devCA.
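The signing command itself is not reproduced above. As a sketch, signing a CSR with the CA configured in the previous procedures typically uses the openssl ca command; the file names here are assumptions:

openssl ca -config devCApath/openssl.dev.cnf -in Server_CSR.csr -out Server_cert.pem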
<security-constraint>
<web-resource-collection>
<web-resource-name>NoAccess</web-resource-name>
<url-pattern>/plugins/<plugin name>/jsp/*</url-pattern>
<url-pattern>/plugins/<plugin name>/WEB-INF/*</url-pattern>
</web-resource-collection>
<auth-constraint />
<user-data-constraint>
<transport-guarantee>NONE</transport-guarantee>
</user-data-constraint>
</security-constraint>
MicroStrategy recommends you place your server-side files for JSP deployment in the WEB-INF and jsp folders. If your plugin has sensitive files in other folders, you can add more <url-pattern> entries for those folders in web.xml to ensure they cannot be accessed, as shown in the example below.
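For example, to also block a hypothetical scripts folder in the same plugin, add another entry to the <web-resource-collection> shown above:

<url-pattern>/plugins/<plugin name>/scripts/*</url-pattern>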
MicroStrategy recommends you place your server-side files for ASP deployment in the WEB-INF and asp folders. If your plugin has sensitive files in other folders, you can copy the same web.config to the corresponding location.
Whitelist URLs
The URL whitelist is enabled by default, but allows all domains and all
protocols. The configuration must be adjusted to be more restrictive. Any
changes require a restart of the web server (Tomcat) to be applied.
The following section is present inside the exploded WAR file for
MicroStrategy Web and Library. Inside the \WEB-INF\ folder is the
web.xml file used for configuring the URL whitelist:
<filter>
<filter-name>redirectResponseFilter</filter-name>
<filter-class>com.microstrategy.web.filter.RedirectResponseFilter</filter-
class>
<init-param>
<param-name>allowedProtocols</param-name>
<param-value>*</param-value>
</init-param>
<init-param>
<param-name>domains</param-name>
<param-value>*</param-value>
</init-param>
</filter>
<filter-mapping>
<filter-name>redirectResponseFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
l allowedProtocols
l domains
After editing the URL whitelist, you must restart the application server.
<filter>
<filter-name>redirectResponseFilter</filter-name>
<filter-class>com.microstrategy.web.filter.RedirectResponseFilter</filter-
class>
<init-param>
<param-name>allowedProtocols</param-name>
<param-value></param-value>
</init-param>
<init-param>
<param-name>domains</param-name>
<param-value></param-value>
</init-param>
</filter>
<filter-mapping>
<filter-name>redirectResponseFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
<filter>
<filter-name>redirectResponseFilter</filter-name>
<filter-class>com.microstrategy.web.filter.RedirectResponseFilter</filter-
class>
<init-param>
<param-name>allowedProtocols</param-name>
<param-value>*</param-value>
</init-param>
<init-param>
<param-name>domains</param-name>
<param-value>*.microstrategy.com</param-value>
</init-param>
</filter>
<filter-mapping>
<filter-name>redirectResponseFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
<filter>
<filter-name>redirectResponseFilter</filter-name>
<filter-class>com.microstrategy.web.filter.RedirectResponseFilter</filter-
class>
<init-param>
<param-name>allowedProtocols</param-name>
<param-value>https</param-value>
</init-param>
<init-param>
<param-name>domains</param-name>
<param-value>*.microstrategy.com</param-value>
</init-param>
</filter>
<filter-mapping>
<filter-name>redirectResponseFilter</filter-name>
<url-pattern>/*</url-pattern>
</filter-mapping>
Fields
Password Settings
Security Level: Includes the following password settings. It provides four sets of predefined setting values for the administrator to use: Default, Low, Medium, and High. Select Customize from the drop-down list to view the following settings for a customized configuration.
l Lock after (failed attempts) Specify the number of failed login attempts
allowed. Once a user has this many failed login attempts in a row, the user
is locked out of the MicroStrategy account until an administrator unlocks
the account. Setting this value to No Limit indicates that users are never
locked out of their accounts. The default setting is No Limit.
l Allow user login and full name in password When this option is
disabled, Intelligence Server ensures that new passwords do not contain
the user's login or part of the user's name. This option is enabled by
default.
Authentication Settings
Update pass-through credentials on successful login: Select to update, or disable updating, the user's database and LDAP credentials on a successful MicroStrategy login.
Content Settings
Enable custom HTML and JavaScript content in dashboards Enabling
this option allows users with the appropriate access to display third-party
Web applications or custom HTML and JavaScript directly in the dashboard.
This option is enabled by default. Although the ability to display Web
applications or custom HTML and JavaScript directly in a dashboard is
governed by user privileges, MicroStrategy recommends disabling these
features to ensure a secure environment.
If the URL is permitted by any of the specified URLs in the whitelist, then the
information is retrieved. The wildcard character (*) is allowed in the whitelist
as part of the URL. This allows you to have one URL in the whitelist that
encompasses many target URLs.
l Include URLs external to your own domain where you know content is
required.
l Avoid specifying the URL of the local machine where the MicroStrategy
product is running.
l If you must use the local MicroStrategy server machine to host content,
specify the exact location on the machine for the content.
l Key Store: Contains keys used to encrypt the metadata and file caches.
These keys are encrypted by the master key.
l Secure Bundle Code: The password used to protect the Secure Bundle
file.
Windows
To enable the Encryption Key Manager:
5. Click OK.
Linux
To enable the Encryption Key Manager:
Change
[HKEY_LOCAL_MACHINE\SOFTWARE\MicroStrategy\Feature Flags]
"KE/EncryptionKeyManager"=dword:00000000
to
[HKEY_LOCAL_MACHINE\SOFTWARE\MicroStrategy\Feature Flags]
"KE/EncryptionKeyManager"=dword:00000001
custom Web content without auditing and allow custom HTML content. For
more information, see Disable Granular Controls of HTML or JavaScript
Content in an Environment, which reverts the content behavior similar to
MicroStrategy ONE Update 12 and earlier.
If you are using the Content Inspector tool with certified objects, see
KB486729: Use Content Inspector with Certified Objects.
There are additional settings that control the HTML and JavaScript behavior
in Web. For more information, see How to Control the Use of HTML and
JavaScript in Web.
l In the Attribute Editor, assign the attribute form as HTML tag. See
Attribute Editor for more information.
l In the Metric Editor, select the checkbox next to Set as HTML content.
l Insert an HTML container and add HTML text to it. For more information,
see Add an HTML Container.
5. Click OK.
By default, the setting is toggled off which means the HTML and JavaScript
content is disabled. The content can be enabled after performing content
inspection (for more information, see Audit and Allow Custom HTML
Content) or at the content level, as shown below.
5. Click OK.
Report
1. Open a report.
5. Click OK.
Document
1. Open a document.
5. Click OK.
Dashboard
1. Edit a dashboard.
4. Click OK.
l Attribute forms with the HTML tag type and HTML containers can only be
added to a dashboard or document if the user has the Create custom
HTML and JavaScript content or Web create custom HTML and
JavaScript content privileges.
l Metrics and derived metrics with the HTML data type can only be added if
the user has the Create custom HTML and JavaScript content or Web
create custom HTML and JavaScript content privileges. Only metrics
configured with the HTML data type will render custom HTML content in
grids.
l Project schema attributes with HTML Tag type forms can only be created
by users with the Create schema objects privilege.
l Attributes with the HTML Tag type and text metrics can be created using
Data Import and can be added to a dashboard if the user has the Web
manage Document and Dashboard datasets privilege, in addition to one
of the following privileges:
o Access data (files) from Local, URL, DropBox, Google Drive, Sample
Files, Clipboard
o Access data from Cloud App (Google Analytics, Salesforce Reports,
Facebook, Twitter)
o Allow data from Databases, Google BigQuery, BigData, OLAP, BI tools
l Metrics with the data type Text may contain custom markup or code. Metrics with text can be created in a dashboard or document if the user has the following privileges:
o Create Derived Metrics
o Web create Derived Metrics and Derived Attributes
l Project metrics can only be created by users with the Use Metric Editor or
Web Use Metric Editor privilege.
If an owner has the Create custom HTML and JavaScript content or Web
create custom HTML and JavaScript content privileges, the owner is able
to see the HTML or JavaScript content rendered on the dashboard,
document, report, or Bot even if the content is set to disabled.
5. Click OK.
The HTML Container button disappears from the dashboard toolbar. Users with the appropriate access can edit a dashboard and remove their HTML container but cannot add it back.
Grids that previously displayed images, links, or other custom content using
attribute forms with the HTML Tag type display a yellow icon with a tool tip
that indicates the content is disabled.
The contents of any text metrics are encoded before they display on the
grid. Any markup or code displays as raw text in the browser or mobile
client. When you export the grid data as a CSV, the rendering behavior is
the same.
You can display hyperlinks in grids using attribute forms with the URL type.
Mobile Caching
After you disable the setting, the change immediately applies to all mobile
clients. If a dashboard is open when the setting is disabled, the change
applies when the user closes and reopens the dashboard.
History List
After you disable the setting, the change also applies to any content sent to
a History List. Custom HTML and JavaScript will not be disabled for History
List entries and caches that were created when the setting was enabled.
MicroStrategy suggests that administrators manually delete all History List entries and caches when the setting is disabled.
l Use Workstation
l Use Developer
Use Workstation
1. In Command Manager, create a new script file and enter the following
content:
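The script content is not shown above. As an illustration only, a Command Manager statement that purges result caches in a project looks like the following; the project name is a placeholder and the exact statement for your task may differ:

PURGE REPORT CACHING IN PROJECT "MicroStrategy Tutorial";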
Use Developer
For details to purge result caches in a project, see Purging all Result
Caches in a Project.
This setting mitigates security risks associated with custom HTML and
JavaScript that execute when users edit or consume content.
When you use the Content Inspector for the first time, all HTML content is
disabled in your environment.
1. In Workstation, go to Environments.
You can filter the objects by Content Type or other criteria in the Filter
panel.
2. Click General and navigate to the Web user session idle time (sec)
field.
4. Click OK.
Configuration determine how often the check occurs. The check uses the lower of the two fields' values.
In execution, a user session can last longer than what is defined, depending
on when the user session was created. For example, if the session is
checked every 20 minutes and the session is set to expire after 20 minutes,
the session can last up to 39 minutes and 59 seconds if the session was
created just after a session check completes. See below for a graphical
representation:
At 10:21 AM, the user reaches their "soft timeout" or the time when the
session should be terminated. However, because the Intelligence Server
performed a session validity check one minute earlier, the session is not
terminated and lasts 39 minutes until the Intelligence Server ends the
session at 10:40 AM. Once the session is terminated, if a user returns to
their screen, they are presented with the MicroStrategy Web login screen.
The login screen will not display if the web server session is still active and
the Allow automatic login if session is lost setting is enabled.
For more information about the Allow automatic login if session is lost setting, see KB12867.
enableFilePathValidation=1
After you enable the feature, only files under the web application root
folder can be accessed. For example, apache-tomcat-
9\webapps\MicroStrategyLibrary.
Access to other files that are outside of the web application root folder
will be denied. If you do not want access to other files, you can skip the
next step.
2. If you want to allow access to files that are outside of the web
application root folder, the file path pattern must be added to the
mstrExternalConfigurationFileAllowList file.
3. Once your configuration is complete, restart the web server to apply the
changes.
SameSite prevents the browser from sending cookies along with cross-site
requests. The main goal is to mitigate the risk of cross-origin information
leakage. It also provides protection against cross-site request forgery
attacks. Possible values are as follows:
l Strict Prevents the cookie from being sent by the browser to the target
site in all cross-site browsing contexts, even when following a regular link.
3. In the left pane, click Library and scroll down to the Cookies section.
Learn more about the other settings on this dialog in View and Edit Library
Administration Settings.
3. In the left pane, click Library and scroll down to the Cookies section.
<wls:session-descriptor>
<wls:cookie-path>/;SameSite=NONE</wls:cookie-path>
</wls:session-descriptor>
3. In the left pane, click Library and scroll down to the Cookies section.
5. In JBoss, navigate to
jboss/standalone/configuration/standalone.xml.
samesite-cookie(mode=NONE)
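Apache Tomcat is not covered above. If Library is deployed on Tomcat, the equivalent SameSite behavior is typically configured through the CookieProcessor element in conf/context.xml; this is a sketch, not a MicroStrategy-documented step:

<CookieProcessor sameSiteCookies="none" />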
SameSite prevents the browser from sending cookies along with cross-site
requests. The main goal is to mitigate the risk of cross-origin information
leakage. It also provides protection against cross-site request forgery
attacks. Possible values are as follows:
l Strict Prevents the cookie from being sent by the browser to the target
site in all cross-site browsing contexts, even when following a regular link.
4. Contact your IT team to configure and enable SSL and apply the
necessary certificates on the IIS server.
<session-descriptor>
<cookie-path>/;SameSite=NONE</cookie-path>
<cookie-secure>true</cookie-secure>
</session-descriptor>
4. In JBoss, navigate to
jboss/standalone/configuration/standalone.xml.
samesite-cookie(mode=NONE)
4. Locate the Allow Arbitrary Loads property and update the value to
NO.
HSTS Implications
After HSTS is enabled, all HTTP requests from a particular domain name (for
example, myweb.server.com) convert to HTTPS requests by the browser.
HSTS will affect all other applications hosted on your domain. Before
enabling HSTS, MicroStrategy suggests that your IT or Network team
evaluate it.
Enable HSTS
Configuring HSTS varies for each application server; see your vendor documentation for more information. You can use the following links to configure HSTS:
l Tomcat
o https://tomcat.apache.org/tomcat-9.0-doc/config/filter.html#HTTP_
Header_Security_Filter
o https://docs.microfocus.com/SM/9.52/Hybrid/Content/security/concepts/
support_of_http_strict_transport_security_protocol.htm
l IIS
o Use the following custom header solution:
<system.webServer>
<httpProtocol>
<customHeaders>
<add name="Strict-Transport-Security" value="max-
age=31536000"/>
</customHeaders>
</httpProtocol>
</system.webServer>
o https://docs.microsoft.com/en-
us/iis/configuration/system.webserver/httpprotocol/customheaders/
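For Tomcat, the HTTP Header Security Filter referenced in the link above can be enabled in web.xml. The following is a minimal sketch with an example max-age value:

<filter>
  <filter-name>httpHeaderSecurity</filter-name>
  <filter-class>org.apache.catalina.filters.HttpHeaderSecurityFilter</filter-class>
  <init-param>
    <param-name>hstsEnabled</param-name>
    <param-value>true</param-value>
  </init-param>
  <init-param>
    <param-name>hstsMaxAgeSeconds</param-name>
    <param-value>31536000</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>httpHeaderSecurity</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>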
You can also View and Edit Library Administration Settings in Workstation.
Overview Tab
The Library Admin Overview tab provides the ability to examine the machine
name, port, state, and Intelligence and Collaboration servers. The top right
corner of each component displays its current status. Click Edit to change
settings for Intelligence and Collaboration servers. You can also choose to
expose or hide the Collaboration service feature to users.
The Collaboration server uses 3000 as the default port. If the administrator
uses 0 as the port number when configuring the Collaboration server, it will
be replaced with 3000.
The Intelligence server uses 34952 as the default port. If the administrator
uses 0 as the port number, the Library server tries to connect to an
Intelligence server under port 34952.
Ensure the input is the DNS of an Intelligence Server and not other
intermediate components. Load balancers are not supported between the
MicroStrategy Library and Intelligence Server layers, as load balancing is
handled within these layers by MicroStrategy Library.
l Error icons in red: The system is not running as expected. Click to view a
more detailed message.
l Warning icons in yellow: There are some minor issues that should be
addressed.
Timeout Settings
If you receive a Timeout Error in the Library Admin control panel, modify the
timeout settings in the Library server configuration file
configOverride.properties to a larger value. For example:
http.connectTimeout=5000
http.requestTimeout=10000
Library Server
The connection URL for MicroStrategy Library.
MicroStrategy Web
This provides a connection URL to MicroStrategy Web, which enables the
Web options in Library.
Authentication Modes
The Authentication Modes section allows an administrator to dictate which authentication modes to allow. When the authentication mode is
Security Settings
Chrome Web Browser version 80 and above introduces new changes which
may impact embedding. For more information, see KB484005: Chrome v80
Cookie Behavior and the Impact on MicroStrategy Deployments.
To enable CORS, select All. When this security setting is updated and
saved, restart the Library application.
Mobile Configuration
The mobile configuration link provides the mobile server link, which can be copied and accessed.
Global Search
Select Enable Global Search to allow users to search for content outside
their personal library.
Modeling Service
For more information on how to configure the secret key in the Modeling
service, see Modeling Service Configuration Properties.
Connection Method
This setting allows the administrator to manage the default approach of how
the Library server locates the Modeling services.
l Specified URLs: Allows the Library server to refer to the specified URL
path to locate the Modeling service and set up the connection. If it fails, it
falls back to Auto-discovery.
Any changes to this section only impact new user sessions. Existing users must log out and log back in for the changes to take effect.
Enable TLS
Enable this option if the Modeling service is HTTPS enabled, but with a
private Root CA certificate or self-signed certificate. In this case, a
trustStore file and its password must be configured within
configOverride.properties. For a Modeling service that is HTTPS
enabled with a public Root CA certificate, disable this option.
For more information on how to set up the HTTPS connection between the
Library server and the Modeling service, see Configure HTTPS Connection
Between Library Server and Modeling Service.
This section allows for the input of Machine name and Port Number for the
Intelligence server.
l Enable TLS
l Enable Logging
To access this section, the communication between the Library server and
the Collaboration server needs to be established first: no errors or warnings
from the Library server to the Collaboration server on the overview page.
You can choose to show Comments only, Discussions only, or hide the
Collaboration panel entirely across the environment. To do this, you can use
the checkboxes or the toggle to easily turn off functionality globally. Once you make new selections in Library Admin, end users see the changes immediately.
Enable TLS
The configuration was last updated in MicroStrategy ONE Update 12. See
the following steps to enable the encryption of secret values in your
environment using one of the following methods:
l MicroStrategy Library
l MicroStrategy Web
MicroStrategy Library
1. Open the configOverride.properties config file, which is located
in Tomcat Folder/webapps/MicroStrategyLibrary/WEB-
INF/classes/config/configOverride.properties.
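The remaining step for Library is not shown above. Assuming Library uses the same feature flag as Web, which is an assumption rather than a documented statement here, the override would be:

modelservice.featureflag.propertyEncryptionEnabled = true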
MicroStrategy Web
1. Open the sys_defaults.properties config file, which is located in
Tomcat Folder/webapps/MicroStrategy/WEB-INF/xml/sys_
defaults.properties.
2. Add the
modelservice.featureflag.propertyEncryptionEnabled =
true flag and save the file.
You must have metadata version MicroStrategy ONE (June 2024) or later to upgrade your metadata encryption to AES-256.
3. Click Try Out and modify the request body by providing your user name
and password.
4. Click Execute.
c. Set the proper X-MSTR-AuthToken from step 5. You can also get
this via inspecting the browser network XHR requests.
d. Click Execute.
9. Set the proper X-MSTR-AuthToken from step 5. You also can get this
via inspecting the browser network XHR requests.
l Named User Licenses, page 724, in which the number of users with access to specific functionality is restricted
l CPU Licenses, page 726, in which the number and speed of the CPUs
used by MicroStrategy server products are restricted
For example, the Web Use Filter Editor privilege is a Web Professional
privilege. If you assign this privilege to User1, then Intelligence Server
grants a Web Professional license to User1. If you only have one Web
Professional license in your system and you assign any Web Professional
privilege, for example Web Edit Drilling And Links, to User2, Intelligence
Server displays an error message when any user attempts to log in to
MicroStrategy Web.
The Administrator user that is created with the repository is not considered
in the licensed user count.
To fix this problem, you can either change the user privileges to match the
number of licenses you have, or you can obtain additional licenses from
MicroStrategy. License Manager can determine which users are causing the
metadata to exceed your licenses and which privileges for those users are
causing each user to be classified as a particular license type (see Using
License Manager, page 728).
For more information about the privileges associated with each license type,
see the List of Privileges section. Each privilege group has an introduction
indicating any license that the privileges in that group are associated with.
Every user must be associated with a base server module license type,
either the "Server-Reporter" or "Server-Intelligence" license. For more
information about the license types, see Privileges by License Type.
l The Client - Reporter and Client - Web licenses are linked together in a
hierarchy that allows users to inherit specific privilege sets. The hierarchy
is set up such that the Client - Reporter license is a subset of the Client -
Web license. This means that if you have a Client - Web license, in
addition to the privilege set that comes with that license, you will
automatically inherit the privileges that come with the Client - Reporter
license.
For steps to manually verify your Named User licenses using License
Manager, see Audit Your System for the Proper Licenses, page 734. You
can configure the time of day that Intelligence Server verifies your Named
User licenses.
3. Specify the time in the Time to run license check (24 hr format) field.
4. Click OK.
CPU Licenses
When you purchase licenses in the CPU format, the system monitors the
number of CPUs being used by Intelligence Server in your implementation
and compares it to the number of licenses that you have. You cannot assign
privileges related to certain licenses if the system detects that more CPUs
are being used than are licensed. For example, this could happen if you
have MicroStrategy Web installed on two dual-processor machines (four
CPUs) and you have a license for only two CPUs.
To fix this problem, you can either use License Manager to reduce the
number of CPUs being used on a given machine so it matches the number of
licenses you have, or you can obtain additional licenses from MicroStrategy.
To use License Manager to determine the number of CPUs licensed and, if
necessary, to change the number of CPUs being used, see Using License
Manager, page 728.
For steps to manually verify your CPU licenses using License Manager, see
Audit Your System for the Proper Licenses, page 734.
After the system has been out of compliance for fifteen days, an additional
error message is displayed to all users when they log into a project source,
warning them that the system is out of compliance with the available
licenses. This error message is only a warning, and users can still log in to
the project source.
After the system has been out of compliance for thirty days, the products
identified as out-of-compliance have their privileges marked as unavailable
in the User Editor in MicroStrategy Developer. In addition, if the system is
out of compliance with Named User licenses, the privileges associated with
the out-of-compliance products are disabled in the User Editor, Group
Editor, and Security Role Editor to prevent them from being assigned to any
additional users.
If the time mentioned in the out of compliance message is longer than the
validity of your key, then your product will not be accessible after the key
expires.
You can check for and manage the following licensing issues:
l More copies of a MicroStrategy product are installed and being used than
you have licenses for.
l More users are using the system than you have licenses for.
l More CPUs are being used with Intelligence Server than you have licenses
for.
In both GUI mode and command line mode, License Manager allows you to:
From this information, you can determine whether you have the number of
licenses that you need. You can also print a report, or create and view a
Web page with this information.
l Update licenses by providing the new license key, without re-installing the
products. For example, you can:
l Change the number of CPUs being used for a given MicroStrategy product,
such as Intelligence Server or MicroStrategy Web, if your licenses are
based on CPUs.
l Trigger a license verification check after you have made any license
management changes, so the system can immediately return to normal
behavior.
l View your MicroStrategy installation history including all license keys that
have been applied.
If the edition is not an Evaluation edition, the expiration date has a value
of "Never."
For detailed steps to perform all of these procedures, see the License
Manager Help (from within License Manager, press F1).
l Windows GUI: From the Windows Start menu, point to All Programs,
then MicroStrategy Tools, and then select License Manager. License
Manager opens in GUI mode.
l Windows command line: From the Start menu, select Run. Type CMD and
press Enter. A command prompt window opens. Type malicmgr and
press Enter. License Manager opens in command line mode, and
instructions on how to use the command line mode are displayed.
Named User Overview
Named User Detail
The Named User Details chapter provides two pages: the Product Details
page and the User by Product page.
The Product Details page provides more in-depth analysis of license usage
at the product level, as well as detailed information on each user and their
associated privileges.
CPU License
The CPU License chapter provides you with the number of CPUs related to
your license.
Troubleshooting
If you're having trouble running the dashboard, see
KB482878: Troubleshooting the Platform Analytics Compliance Telemetry
Dashboard.
To audit your system, perform the procedure below on each server machine
in your system.
In rare cases, an audit can fail if your metadata is too large for the Java
Virtual Machine heap size. For steps to modify the Java Virtual Machine
heap size in your system registry settings, see MicroStrategy Tech Notes
TN6446 and TN30885.
In command line mode, the steps to audit licenses vary from those
below. Refer to the License Manager command line prompts to guide
you through the steps to audit licenses.
5. Select the Everyone group and click Audit. A folder tree of the
assigned licenses is listed in the Number of licenses pane.
7. Count the number of licenses per product for enabled users. Disabled
users do not count against the licensed user total, and should not be
counted in your audit.
8. Click Print.
9. For detailed information, click Report to create and view XML, HTML,
and CSV reports. You can also have the report display all privileges for
each user based on the license type. To do this, select the Show User
Privileges in Report check box.
10. Total the number of users with each license across all machines.
You must update your license key on all machines where MicroStrategy
products are installed. License Manager updates the license information for
the products that are installed on that machine.
In command line mode, the steps to update your license vary from
those below. Refer to the License Manager command line prompts to
guide you through the steps to update your license.
4. Type or paste the new key in the New License Key field and click Next.
If you have one or more products that are licensed based on CPU
usage, the Upgrade window opens, showing the maximum number of
CPUs each product is licensed to use on that machine. You can
change these numbers to fit your license agreement. For example, if
you purchase a license that allows more CPUs to be used, you can
increase the number of CPUs being used by a product.
5. The results of the upgrade are shown in the Upgrade Results dialog
box. License Manager can automatically request an Activation Code for
your license after you update.
Related Topics
KB484614: How to set the CPU affinity for MicroStrategy Web JSP in Linux
After installation you can specify CPU affinity through the MicroStrategy
Service Manager. This requires administrator privileges on the target
machine.
3. Click Options.
6. Click OK.
If the target machine contains more than one physical processor and the
MicroStrategy license key allows more than one CPU to run Intelligence
Server Universal Edition, you are prompted to provide the number of CPUs
to be deployed. The upper limit is either the number of licensed CPUs or the
physical CPU count, whichever is lower.
Each Linux platform exposes its own set of functionality to bind processes to
processors. However, Linux also provides commands to easily change the
processor assignments. As a result, Intelligence Server periodically checks
its own CPU affinity and takes steps whenever the CPU affinity mask does
not match the overall CPU licensing. Whenever your licenses do not match
your deployment, CPU affinity is automatically adjusted to the number of
CPUs necessary to be accurate again.
This automatic adjustment for CPU affinity attempts to apply the user's
specified CPU affinity value when it adjusts the system, but it may not
always be able to do so depending on the availability of processors. For
example, if you own two CPU licenses and CPU affinity is manually set to
use Processor 1 and Processor 2, the CPU affinity adjustment may reset
CPU usage to Processor 0 and Processor 1 when the system is
automatically adjusted.
mstrctl -s [IntelligenceServerName] rs
Whenever you change the CPU affinity, you must restart the machine.
This section describes settings that may interact with CPU affinity that you
must consider, and provides steps to update CPU affinity in your
environment.
IIS Versions
CPU affinity can be configured on machines running IIS 6.0 or 7.0. The
overall behavior depends on how IIS is configured. The following cases are
considered:
l Worker process isolation mode: In this mode, the CPU affinity setting is
applied at the application pool level. When MicroStrategy Web CPU
affinity is enabled, it is applied to all ASP.NET applications running in the
same application pool. By default, MicroStrategy Web runs in its own
application pool. The CPU affinity setting is shared by all instances of
MicroStrategy Web on a given machine. Worker process isolation mode is
the default mode of operation on IIS 6.0 when the machine has not been
upgraded from an older version of Windows.
l IIS 5.0 compatibility mode: In this mode, all ASP.NET applications run in
the same process. This means that when MicroStrategy Web CPU affinity
is enabled, it is applied to all ASP.NET applications running on the Web
server machine. A warning is displayed before installation or before the
CPU affinity tool (described below) attempts to set the CPU affinity on a
machine with IIS running in IIS 5.0 compatibility mode.
This is the default mode of operation when the machine has been upgraded
from an older version of Windows.
Both IIS 6.0 and IIS 7.0 support a "Web Garden" mode, in which IIS creates
some number of processes, each with affinity to a single CPU, instead of
creating a single process that uses all available CPUs. The administrator
specifies the total number of CPUs that are used. The Web Garden settings
can interact with and affect MicroStrategy CPU affinity.
The Web Garden setting should not be used with MicroStrategy Web. At
runtime, the MicroStrategy Web CPU affinity setting is applied after IIS sets
the CPU affinity for the Web Garden feature. Using these settings together
can produce unintended results.
In both IIS 6.0 and IIS 7.0, the Web Garden feature is disabled by default.
l In worker process isolation mode, the Web Garden setting is applied at the application pool level. You specify the number of CPUs to be used, and IIS creates that number of w3wp.exe instances. Each of the instances runs all of the ASP.NET applications
The MAWebAff.exe tool lists each physical CPU on a machine. You can add
or remove CPUs or disable CPU affinity using the associated check boxes.
Clearing all check boxes prevents the MicroStrategy Web CPU affinity
setting from overriding any IIS-related CPU affinity settings.
l Determines the set of data warehouse tables to be used, and therefore the
set of data available to be analyzed.
l Contains all schema objects used to interpret the data in those tables.
Schema objects include objects such as facts, attributes, and hierarchies.
l Contains all application objects used to create reports and analyze the
data. Application objects include objects such as reports, metrics, and
filters.
l Defines the security scheme for the user community that accesses these
objects. Security objects include objects such as security roles, privileges,
and access control lists.
l For details on how to implement the project life cycle in your MicroStrategy environment, see Implement the Recommended Life Cycle, page 751.
To set up the development, test, and production projects so that they all
have related schemas, you need to first create the development project. For
instructions on how to create a project, see the Project Design Help. Once
the development project has been created, you can duplicate it to create the
test and production projects using the Project Duplication Wizard. For
detailed information about the Project Duplication Wizard, see Duplicate a
Project, page 753.
Once the projects have been created, you can migrate specific objects
between them via Object Manager. For example, after a new metric has
been created in the development project, you can copy it to the test project.
For detailed information about Object Manager, see Copy Objects Between
Projects: Object Manager, page 762.
You can also merge two related projects with the Project Merge Wizard. This
is useful when you have a large number of objects to copy. The Project
Merge Wizard copies all the objects in a given project to another project. For
an example of a situation in which you would want to use the Project Merge
Wizard, see Real-Life Scenario: New Version From a Project Developer,
page 749. For detailed information about Project Merge, see Merge Projects
to Synchronize Objects, page 809.
To help you decide whether you should use Object Manager or Project Merge, see Compare Project Merge to Object Manager, page 759.
The Project Comparison Wizard can help you determine what objects in a
project have changed since your last update. You can also save the results
of search objects and use those searches to track the changes in your
projects. For detailed information about the Project Comparison Wizard, see
Compare and Track Projects, page 818. For instructions on how to use
search objects to track changes in a project, see Track Your Projects with
the Search Export Feature, page 821.
Integrity Manager helps you ensure that your changes have not caused any
problems with your reports. Integrity Manager executes some or all of the
reports in a project, and can compare them against another project or a
previously established baseline. For detailed information about Integrity
Manager, see Chapter 16, Verifying Reports and Documents with Integrity
Manager.
This combination of the two projects creates Project version 2.1, as shown in
the diagram below.
The vendor's new Version 2 project has new objects that are not in yours,
which you feel confident in moving over. But some of the objects in the
Version 2 project may conflict with objects that you had customized in the
Version 1.2 project. How do you determine which of the Version 2 objects
you want move into your system, or which of your Version 1.2 objects to
modify?
You could perform this merge object-by-object and migrate them manually
using Object Manager, but this will be time-consuming if the project is large.
It may be more efficient to use the Project Merge tool. With this tool, you can
define rules for merging projects that help you identify conflicting objects
and handle them a certain way. Project Merge then applies those rules while
merging the projects. For more information about using the MicroStrategy
Project Merge tool, see Merge Projects to Synchronize Objects, page 809.
Once the objects have been created and are relatively stable, they can
be migrated to the test project for testing. For instructions on how to
migrate objects, see Update Projects with New Objects, page 758.
Testing involves making sure that the new objects produce the
expected results, do not cause data errors, and do not put undue strain
on the data warehouse. If the objects are found to contain errors, these
errors are reported to the development team so that they can be fixed
and tested again. For more information about the test project, see
Recommended Scenario: Development, Test, and Production, page
746.
Once the objects have been thoroughly tested, they can be migrated to
the production project and put into full use. For instructions on how to
migrate objects, see Update Projects with New Objects, page 758.
The project life cycle does not end with the first migration of new objects into
the production project. A developer may come up with a new way to use an
attribute in a metric, or a manager may request a specific new report. These
objects pass through the project life cycle in the same way as the project's
initial objects.
Duplicate a Project
Duplicating a project is an important part of the application life cycle. If you
want to copy objects between two projects, MicroStrategy recommends that
the projects have related schemas. This means that one must have originally
been a duplicate of the other, or both must have been duplicates of a third
project.
Do not refresh the warehouse catalog in the destination project. Refresh the
warehouse catalog in the source project, and then use Object Manager to
move the updated objects into the destination project. For information about
the warehouse catalog, see the Optimizing and Maintaining your Project
section in the Project Design Help.
If you are copying a project to another project source, you have the option to
duplicate configuration objects as well. Specifically:
l You can choose whether to duplicate all configuration objects, or only the
objects used by the project.
l You can choose to duplicate all users and groups, only the users and
groups used by the project, no users and groups, or a custom selection of
users and groups.
that the new project's schema is identical to the source project's schema.
To duplicate a project:
You must have the Bypass All Object Security Access Checks privilege for that
project.
You must be a member of the System Administrator group for the target project
source.
You must have the Create Schema Objects privilege for the target project
source.
1. From Object Manager select the Project menu (or from Developer
select the Schema menu), then select Duplicate Project.
2. Specify the project source and project information that you are copying
from (the source).
3. Specify the project source and project information that you are copying
to (the destination).
6. Specify whether you want to see the event messages as they happen
and, if so, what types. Also specify whether to create log files and, if so,
what types of events to log, and where to locate the log files. By default
Project Duplicator shows you error messages as they occur, and logs
most events to a text file. This log file is created by default in
C:\Program Files (x86)\Common Files\MicroStrategy\.
You can also use the settings file to run the wizard in command-line mode.
The Project Duplication Wizard command line interface enables you to
duplicate a project without having to load the graphical interface, or to
schedule a duplication to run at a specific time. For example, you may want
to run the project duplication in the evening, when the load on Intelligence
Server is lessened. You can create an XML settings file, and then use the
Windows AT command or the Unix scheduler to schedule the duplication to
take place at night.
After saving the settings from the Project Duplication Wizard, invoke the
Project Duplication Wizard executable ProjectDuplicate.exe. By
default this executable is located in C:\Program Files (x86)\Common
Files\MicroStrategy.
Where:
l -md indicates that the metadata of the destination project source will be
updated if it is older than the source project source's metadata.
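The full command syntax is not reproduced above. As a sketch only, the executable is typically invoked with the saved XML settings file and optional switches such as -md; the settings-file switch shown here is an assumption, not confirmed syntax:

ProjectDuplicate.exe -f duplication_settings.xml -md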
MicroStrategy has the following tools available for updating the objects in a
project:
l Project Merge migrates all the objects in a project at once. For information
about Project Merge, see Merge Projects to Synchronize Objects, page
809.
l Object Manager can move just a few objects, or just the objects in a few
folders. Project Merge moves all the objects in a project.
l Object Manager must locate the dependents of the copied objects and
then determine their differences before performing the copy operation.
Project Merge does not do a dependency search, since all the objects in
the project are to be copied.
l The Project Merge Wizard allows you to store merge settings and rules in
an XML file. These rules define what is copied and how conflicts are
resolved. Once they are in the XML file, you can load the rules and
"replay" them with Project Merge. This can be useful if you need to
perform the same merge on a recurring schedule. For example, if a project
developer sends you a new project version quarterly, Project Merge can
make this process easier.
l Project Merge can be run from the command prompt in Microsoft Windows.
An added benefit of this feature is that project merges can be scheduled
using the at command in Windows and can be run silently in an
installation routine.
Lock Projects
When you open a project in Project Merge, you automatically place a
metadata lock on the project. You also place a metadata lock on the project
If you lock a project by opening it in Object Manager, you can unlock the
project by right-clicking the project in Object Manager, and choosing
Disconnect from Project Source.
Only the user who locked a project, or another user with the Bypass All
Object Security Access Checks and Create Configuration Objects
privileges, can unlock a project.
Object Manager and Project Merge both copy multiple objects between
projects. Use Object Manager when you have only a few objects that need to
be copied. For the differences between Object Manager and Project Merge,
see Compare Project Merge to Object Manager, page 759.
l To create an update package, you must have either the Use Object
Manager privilege or the Use Object Manager Read-only privilege for the
project from which you are creating an update package.
However, you can move objects from the older version to the updated
project if the older version is interoperable with the updated version. For
detailed information about interoperability between versions of
MicroStrategy, see the Readme.
If you need to allow other users to change objects in projects while the
projects are opened in Object Manager, you can configure Object Manager
to connect to projects in read-only mode. You can also allow changes to
configuration objects by connecting to project sources in read-only mode.
l You cannot copy objects into a read-only project or project source. If you
connect to a project in read-only mode, you can still move, copy, and
delete objects in a project, but you cannot copy objects from another
project into that project.
6. Click OK.
Copy Objects
Object Manager can copy application, schema, and configuration objects.
l Application objects include reports and documents, and the objects used
to create them, such as templates, metrics, filters, prompts, and searches.
Folders are also considered to be application objects and configuration
objects.
If you use Object Manager to copy a user or user group between project
sources, the user or group reverts to default inherited access for all projects
in the project source. To copy a user or group's security information for a
project, you must copy the user or group in a configuration update package.
For background information on these objects, including how they are created
and what roles they perform in a project, see the Project Design Help.
l When copying MDX cubes between projects, make sure that the conflict
resolution action for the cubes, cube attributes, and reports that use the
cubes is set to Replace.
l If you need to copy objects from multiple folders at once, you can create a
new folder, and create shortcuts in the folder to all the objects you want to
copy. Then copy that folder. Object Manager copies the folder, its contents
(the shortcuts), and their dependencies (the target objects of those
shortcuts) to the new project.
l If you are using update packages to update the objects in your projects,
use the Export option to create a list of all the objects in each update
package.
l To log in to a project source using Object Manager, you must have the
Use Object Manager privilege for that project.
2. In the list of project sources, select the check box for the project source
you want to access. You can select more than one project source.
3. Click Open.
1. In the Folder List, expand the project that contains the object you want
to copy, then navigate to the object.
3. Expand the destination project in which you want to paste the object,
and then select the folder in which you want to paste the object.
If you are copying objects between two different project sources, two
windows are open within the main Object Manager window. In this
case, instead of right-clicking and selecting Copy and Paste, you can
drag and drop objects between the projects.
5. If you copied any schema objects, you must update the destination
project's schema. Select the destination project, and from the Project
menu, select Update Schema.
Copy Configuration Objects
1. In the Folder Lists for both the source and destination projects, expand
the Administration folder, then select the appropriate manager for the
type of configuration object you want to copy (Database Instance
Manager, Schedule Manager, or User Manager).
2. From the list of objects displayed on the right-hand side in the source
project source, drag the desired object into the destination project
source and drop it.
To display the list of users on the right-hand side, expand User Manager,
then on the left-hand side select a group.
If the object you are copying does exist in the destination project, a conflict
occurs and Object Manager opens the Conflict Resolution dialog box. For
information about how to resolve conflicts, see Resolve Conflicts when
Copying Objects, page 777.
When you migrate an object to another project, by default any objects used
by that object in its definition (its used dependencies) are also migrated.
You can exclude certain objects and tables from the dependency check and
migration. For instructions, see Excluding Dependent Attributes or Tables
from Object Migration, page 774.
Used Dependencies
When you migrate an object to another project, any objects used by that
object in its definition (its used dependencies) are also migrated. The order
of these dependent relationships is maintained.
1. After you have opened a project source and a project using Object
Manager, in the Folder List select the object.
2. From the Tools menu, select Object used dependencies. The Used
dependencies dialog box opens and displays a list of objects that the
selected object uses in its definition. The image below shows the used
dependencies of the Revenue metric in the MicroStrategy Tutorial
project: in this case, the used dependency is the Revenue base
formula.
3. In the Used dependencies dialog box, you can do any of the following:
l View used dependencies for any object in the list by selecting the
object and clicking the Object used dependencies toolbar icon.
l Open the Used-by dependencies dialog box for any object in the list
by selecting the object and clicking the Object used-by
dependencies icon on the toolbar. For information about used-by
dependencies, see Used-By Dependencies, page 771.
l View the properties of any object, such as its ID, version number, and
access control lists, by selecting the object and from the File menu
choosing Properties.
Used-By Dependencies
Used-by dependents are not automatically migrated with their used objects.
However, you cannot delete an object that has used-by dependencies
without first deleting its used objects.
1. After you have opened a project source and a project using Object
Manager, from the Folder List select the object.
l View used-by dependencies for any object in the list by selecting the
object and clicking the Object used-by dependencies icon on the
toolbar.
l Open the Used dependencies dialog box for any object in the list by
selecting the object and clicking the Object used dependencies icon
on the toolbar. For information about used dependencies, see Used
Dependencies, page 769.
l View the properties of any object, such as its ID, version number, and
access control lists, by selecting the object and from the File menu
choosing Properties.
For example, a user copies a report from the source project to the
destination project. In the source project, all dependents of the report are
stored in the Public Objects\Report Dependents folder. Object
Manager looks in the destination project's Public Objects folder for a
subfolder named Report Dependents (the same path as in the source
project). If the folder exists, the dependent objects are saved in that folder.
If the destination project does not have a folder in Public Objects with the
name Report Dependents, Object Manager creates it and saves all dependent
objects there.
When you create an update package, click Add All Used Dependencies to
make sure all used dependencies are included in the package. If the
dependent objects for a specific object do not exist in either the destination
project source or in the update package, the update package cannot be
applied. If you choose not to add dependent objects to the package, make
sure that all dependent objects are included in the destination project
source.
Object Dependencies
Some objects have dependencies that are not immediately obvious. These
are listed below:
l Folders have a used dependency on each object in the folder. If you copy
a folder using Object Manager, all the objects in that folder are also
copied.
A folder that is copied as part of an update package does not have a used
dependency on its contents.
l Security filters, users, and user groups have a used dependency on the
user groups they belong to. If you copy a security filter, user, or user
group, the groups that it belongs to are also copied.
Groups have a used-by dependency on the users and security filters that
are associated with them. Copying a group does not automatically copy
the users or security filters that belong to that group. To copy the users or
security filters in a group, select the users from a list of that group's used-
by dependents and then copy them.
Attributes used in fact entry levels are not dependents of the fact.
l Exclude all parent attributes from an attribute and Exclude all child
attributes from an attribute: An attribute has a used dependency on its
parent and child attributes in a hierarchy. Thus, migrating an attribute may
result in migrating its entire hierarchy. To exclude the parent or child
attributes from being migrated, select the corresponding option.
For attributes, the lookup table must always exist in the destination
project, so it is always migrated.
3. Select the check boxes for the objects you want to exclude from Object
Manager's dependency checking.
4. Click OK.
4. Click OK.
The ability to retain the name, description, and long description is important
in internationalized environments. When replacing the objects to resolve
conflicts, retaining these properties of the objects in the destination project
facilitates support of internationalized environments. For example, if the
destination project contains objects with French names but the source
project has been developed in English (including English names), you can
retain the French names and descriptions for objects in the destination
project. Alternately, you can update the project with the English names and
not change the object itself.
4. From the Language for metadata and warehouse data if user and
project level preferences are set to default drop-down list, select
whether copied objects use the locale settings from Developer or from
the machine's regional settings.
6. To resolve translations with a different action than that specified for the
object associated with the translation, select the Enable advanced
conflict resolution check box.
8. Click OK.
When copying objects across projects with Object Manager, if an object with
the same ID as the source object exists anywhere in the destination project,
a conflict occurs and the Conflict Resolution dialog box (shown below)
opens. It prompts you to resolve the conflict.
The possible conflicts are:

Exists identically: The object ID, object version, and path are the same in the source and destination projects.

Exists differently: The object ID is the same in the source and destination projects, but the object versions are different. The path may be the same or different.

Exists identically except for path: The object ID and object version are the same in the source and destination projects, but the paths are different. This occurs when one of the objects exists in a different folder.
If your language preferences for the source and destination projects are different, objects that are identical between the projects may be reported as Exists Identically Except For Path. This occurs because, when different languages are used for the path names, Object Manager treats the paths as different.
If you resolve the conflict with the Replace action, the destination object is updated to reflect the path of the source object.

Exists identically except for Distribution Services objects: (User only) The object ID and object version of the user are the same in the source and destination projects, but at least one associated Distribution Services contact or contact group is different. This may occur if you modified a contact or contact group linked to this user in the source project. If you resolve the conflict with the Replace action, the destination user is updated to reflect the contacts and contact groups of the source user.

Does not exist: The object exists in the source project but not in the destination project. If you clear the Show new objects that exist only in the source check box in the Migration category of the Object Manager Preferences dialog box, objects that do not exist in the destination project are copied automatically with no need for conflict resolution.
The actions you can choose from include the following:

Replace: Replace moves the object into the same parent folder as the source object. If the parent path is the same between source and destination but the grandparent path is different, Replace may appear to do nothing because Replace puts the object into the same parent path.
Non-empty folders in the destination location will never have the same version ID and modification time as the source, because the folder is copied first and the objects are added to it, thus changing the version ID and modification times during the copy process.

Use newer: If the source object's modification time is more recent than the destination object's, the Replace action is used.

Use older: If the source object's modification time is more recent than the destination object's, the Use existing action is used.

Merge (user/group only): The privileges, security roles, groups, and Distribution Services addresses and contacts of the source user or group are added to those of the destination user or group.

Do not move (table only): The selected table is not created in the destination project. This option is only available if the Allow to override table creation for non-lookup tables that exist only at source project check box in the Migration category of the Object Manager Preferences dialog box is selected.

Force replace (Update packages only): Replace the object in the destination project with the version of the object in the update package, even if both versions of the object have the same Version ID.

Delete (Update packages only): Delete the object from the destination project. The version of the object in the update package is not imported into the destination project.
If the object in the destination has any used-by dependencies when you import the update package, the import will fail.
Warehouse and other database tables associated with the objects moved
are handled in specific ways, depending on your conflict resolution choices.
For details, see Conflict Resolution and Tables, page 783.
The schema has been modified. In order for the changes to take
effect, you must update the schema.
To update the project schema, from the Object Manager Project menu,
select Update Schema. For details about updating the project schema, see
the Optimizing and Maintaining your Project section in the Project Design
Help.
To Resolve a Conflict
1. Select the object or objects that you want to resolve the conflict for.
You can select multiple objects by holding down SHIFT or CTRL when
selecting.
2. Choose an option from the Action drop-down list (see table above).
l Application objects
l Schema objects
l Configuration objects
l Folders
You can set a different default action for objects specifically selected by the
user, and for objects that are included because they are dependents of
selected objects. For example, you can set selected application objects to
default to Use newer to ensure that you always have the most recent
version of any metrics and reports. You can set dependent schema objects
to default to Replace to use the source project's version of attributes, facts,
and hierarchies.
These selections are only the default actions. You can always change the
conflict resolution action for a given object when you copy that object.
3. Make any changes to the default actions for each category of objects.
4. Click OK.
l If you resolve a conflict with the Replace action, and the access control
lists (ACL) of the objects are different between the two projects, you can
choose whether to keep the existing ACL in the destination project or
replace it with the ACL from the source project.
l If you add a new object to the destination project with the Create New or
Keep Both action, you can choose to have the object inherit its ACL from
the destination folder instead of keeping its own ACL. This is helpful when
copying an object into a user's profile folder, so that the user can have full
control over the object.
The Use Older or Use Newer actions always keep the ACL of whichever
object (source or destination) is used.
l To keep the ACL of the destination object, select Keep existing ACL when
replacing objects.
l To use the ACL of the source object, select the option to replace the
existing ACL when replacing objects. If this option is selected, the ACL is
replaced even if the source and destination objects are identical.
4. Under ACL option on new objects, select how to handle the ACL for
new objects added to the destination project:
l To use the ACL of the source object, select Keep ACL as in the
source objects.
l To inherit the ACL from the destination folder, select Inherit ACL
from the destination folder.
5. Click OK.
You can choose not to create a dependent table in the destination project by
changing the Action for the table from Create New to Ignore. You can also
choose not to migrate any dependent tables by specifying that they not be
included in Object Manager's dependency search. For detailed information,
including instructions, see What happens when You Copy or Move an
Object, page 769.
The following list and related tables explain how the attribute - table or fact -
table relationship is handled, based on the existing objects and tables and
the conflict resolution action you select.
In the following list and tables, attribute, fact, and table descriptions refer to
the destination project. For example, "new attribute" means the attribute is
new to the destination project: it exists in the source project but not the
destination project.
l New attribute or fact, existing table: The object in the source project
contains a reference to the table in its definition. The table in the
destination project has no reference to the object because the object is not
present in the destination project. In this case the new object will have the
same references to the table as it did in the source project.
Object Action: What happens in the destination project

Use Existing: The object does not reference the table.
Replace: The object has the same references to the table as it does in the source project.

Use Existing: The object does not reference the table.
Replace: The object has the same references to the table as it does in the source project.

Use Existing: The object has the same references to the table as it did before the action.
For example, you have several developers who are each responsible for a
subset of the objects in the development project. The developers can submit
update packages, with a list of the objects in the packages, to the project
administrator. The administrator can then import those packages into the
test project to apply the changes from each developer. If a change causes a
problem with the test project, the administrator can undo the package import
process.
If your update package includes any schema objects, you may need to
update the project schema after importing the package. For more
information about updating the schema after importing an update package,
see Update Packages and Updating the Project Schema, page 807.
l Force Replace: Replace the object in the destination project with the
version of the object in the update package, even if both versions of the
object have the same Version ID.
l Delete: Delete the object from the destination project. The version of the
object in the update package is not imported into the destination project.
If the object in the destination has any used-by dependencies when you
import the update package, the import will fail.
To update your users and groups with the project access information for
each project, you must create a project security update package for each
project. You create these packages at the same time that you create the
configuration update package, by selecting the Create project security
packages check box and specifying which projects you want to create a
project security update package for. For detailed instructions on creating a
configuration update package and project security update packages, see
Creating a Configuration Update Package, page 792.
You must import the configuration update package before importing the
project security update packages.
You can also create update packages from the command line, using rules
specified in an XML file. In the Create Package dialog box, you specify a
container object, such as a folder, search object, or object prompt, and
specify the conflict resolution rules. Object Manager creates an XML file
based on your specifications. You can then use that XML file to create an
update package that contains all objects included in the container. For more
information and instructions, see Creating an Update Package from the
Command Line, page 794.
You can also open this dialog box from the Conflict Resolution dialog
box by clicking Create Package. In this case, all objects in the Conflict
Resolution dialog box, and all dependents of those objects, are
automatically included in the package.
Adding Objects to the Package
l Drag and drop objects from the Object Browser into the Create
Package dialog box.
l Click Add. Select the desired objects and click >. Then click OK.
l Click Add. You can import the results of a previously saved search
object.
2. To add the dependents of all objects to the package, click Add all used
dependencies.
If the dependent objects for a specific object do not exist in either the
destination project source or in the update package, the update
package cannot be applied. If you choose not to add dependent
objects to the package, make sure that all dependent objects are
included in the destination project source.
Configuring the Package
2. Select the schema update options for this package. For more details on
these options, see Update Packages and Updating the Project Schema,
page 807.
3. Select the ACL options for objects in this package. For more details on
these options, see Resolve Conflicts when Copying Objects, page 777.
Saving the Package
1. Enter the name and location of the package file in the Save As field.
The default file extension for update packages is .mmp.
You can set the default location in the Object Manager Preferences
dialog box, in the Object Manager: Browsing category.
3. When you have added all objects to the package, click Proceed.
You can also open this dialog box from the Conflict Resolution dialog
box by clicking Create Package. In this case, all objects in the Conflict
Resolution dialog box, and all dependents of those objects, are
automatically included in the package.
Adding Configuration Objects to the Package
3. When the objects are loaded in the search area, click and drag them to
the Create Package dialog box.
4. When you have added all the desired objects to the package, close the
Configuration - Search for Objects dialog box.
5. To add the dependents of all objects to the package, click Add all used
dependencies.
If the dependent objects for a specific object do not exist in either the
destination project source or in the update package, the update
package cannot be applied. If you choose not to add dependent
objects to the package, make sure that all dependent objects are
included in the destination project source.
2. In the Projects area, select the check boxes next to the projects you
want to create project security packages for.
Configuring the Package
If you are creating project security update packages, you must select
Replace as the conflict resolution action for all users and groups.
Otherwise the project-level security information about those users and
groups is not copied into the destination project.
2. Select the ACL options for objects in this package. For more details on
these options, see Resolve Conflicts when Copying Objects, page 777.
Saving the Package
1. Enter the name and location of the package file in the Save As field.
The default file extension for update packages is .mmp.
3. When you have added all objects to the package, click Proceed.
You may want to schedule the creation of an update package at a later time,
so that the project is not locked during normal business hours.
You can use Object Manager to create an XML file specifying what objects
are to be included in the update package. That XML file can then be used to
create the package from the command line.
The XML file specifies a container object in the source project, that is, a
folder, search object, or object prompt. When you create the package from
the XML file, all objects included in that container object are included in the
update package, as listed in the table below:
To Create an XML File for Creating an Update Package from the Command Line
Adding a Container Object to the Package
1. Click Add.
2. You need to specify what to use as a container object. You can use a
search object, object prompt, or folder. To specify a search object or
object prompt as the container object:
l Type the name of the folder in the field, or click ... (the browse button)
and browse to the folder.
5. Click OK.
6. To add the dependents of all objects to the package, select the Add all
used dependencies check box. All dependent objects of all objects
included in the container object will be included in the package when it
is created.
If the dependent objects for a specific object do not exist in either the
destination project or in the update package, the update package
cannot be applied. If you choose not to include dependent objects in
the package, make sure that all dependent objects are included in the
destination project.
Configuring the Package
2. Select the schema update options for this package. For more details on
these options, see Update Packages and Updating the Project Schema,
page 807.
3. Select the ACL options for objects in this package. For more details on
these options, see Resolve Conflicts when Copying Objects, page 777.
Saving the XML File
1. Enter the name and location of the package file to be created by this
XML in the Save As field. The default file extension for update
packages is .mmp.
You can set the default location in the Object Manager Preferences
dialog box, in the Object Manager: Browsing category.
2. Click Create XML. You are prompted to type the name and location of
the XML file. By default, this is the same as the name and location of
the package file, with an .xml extension instead of .mmp.
3. Click Save.
Creating a package from the command line locks the project metadata for
the duration of the package creation. Other users cannot make any changes
to the project until it becomes unlocked. For detailed information about the
effects of locking a project, see Lock Projects, page 760.
-sp Password: Log into the project source with this password (the login ID to be used is stored in the XML file).

-smp Password: Log into the project with this password (the login ID to be used is stored in the XML file).
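When you later create the package by calling projectmerge.exe (see To Create an Update Package from an XML File, page 797), the command might look like the following sketch. The XML file path and both passwords are placeholders, and whether the password is appended directly to the switch (as the -f example elsewhere in this chapter suggests) should be verified against the Help for your version.

projectmerge -fc:\packages\q3_update.xml -spSourcePwd -smpProjectPwd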
You can also create the XML file to create an update package without
opening Object Manager. To do this, you first copy a sample XML file that
contains the necessary parameters, and then edit that copy to include a list
of the objects to be migrated and conflict resolution rules for those objects.
This is the only way to create an XML file to create a configuration update
package.
Sample package creation XML files for project update packages and
configuration update packages are in the Object Manager folder. By default
this folder is C:\Program Files (x86)\MicroStrategy\Object
Manager\.
The XML file has the same structure as an XML file created using the
Project Merge Wizard. For more information about creating an XML file for
use with Project Merge, see Merge Projects to Synchronize Objects, page
809.
2. Edit your copy of the XML file to include the following information, in the
appropriate XML tags:
l AddDependents:
l Yes for the package to include all dependents of all objects in the
package.
l ConnectionMode:
4. Login: The user name to connect to the project source. You must
provide a password for the user name when you run the XML file from
the command line.
5. For a project update package, you can specify conflict resolution rules
for individual objects. In an Operation block, specify the ID (GUID) and
Type of the object, and the action to be taken. For information about
the actions that can be taken in conflict resolution, see Resolve
Conflicts when Copying Objects, page 777.
7. When you are ready to create the update package from the XML file,
call the Project Merge executable, projectmerge.exe, as described
in To Create an Update Package from an XML File, page 797.
You can make changes to an update package after it has been created. You
can remove objects from the package, change the conflict resolution rules
for objects in the package, and set the schema update and ACL options for
the package.
You cannot add objects to an update package once it has been created.
Instead, you can create a new package containing those objects.
3. In the Selected Package field, type the name and path of the update
package, or click ... (the browse button) to browse to the update
package.
4. Click Edit. The Editing pane opens at the bottom of the dialog box, as
shown below.
When you edit a package, the Create New action is changed to the
Replace action.
7. To remove an object from the update package, select the object and
click Remove.
8. You can also change the schema update options (for a project update
package only) or the access control list conflict resolution options. For
10. When you are done making changes to the update package, click Save
As. The default new name for the update package is the original name
of the package with a date and time stamp appended. Click Save.
If you are importing a package that is stored on a machine other than the
Intelligence Server machine, make sure the package can be accessed by the
Intelligence Server machine.
Before importing any project security update packages, you must import the
associated configuration update package.
Importing a package causes the project metadata to become locked for the
duration of the import. Other users cannot make any changes to the project
until it becomes unlocked. For detailed information about the effects of
locking a project, see Lock Projects, page 760.
You can import an update package into a project or project source in the
following ways:
l From within Object Manager: You can use the Object Manager graphical
interface to import an update package.
Scheduler to import the package at a later time, such as when the load on
the destination project is light.
The command line Import Package utility only supports Standard and
Windows Authentication. If your project source uses a different form of
authentication, you cannot use the Import Package utility to import an
update package.
You can also create an XML file to import an update package from the
command line, similar to using an XML file to create an update package
as described in Creating an Update Package from the Command Line,
page 794.
2. From the Tools menu, select Import Package (for a project update
package) or Import Configuration Package (for a configuration
update package).
3. In the Selected Package field, type the name and path of the update
package, or click ... (the browse button) to browse to the update
package.
5. To create a log file describing the changes that would be made if the
update package were imported, instead of importing the update
package, select the Generate Log Only checkbox.
6. Click Proceed.
Any objects that exist in different folders in the update package and
the destination project are handled according to the Synchronize
folder locations in source and destination for migrated objects
preference in the Migration category in the Object Manager
Preferences dialog box.
7. If the package made any changes to the project schema, you may need
to update the schema for the changes to take effect. To update the
project schema, from the Object Manager Project menu, select Update
Schema.
-u UserName -p Password: Log into the project source with this MicroStrategy username and password, using standard authentication (required unless you are using Windows authentication).

-f PackageLocation: The location of the update package to import. The location must be specified relative to the Intelligence Server machine, not relative to the machine running the Import Package utility.

-l LogLocation: The location of the log file. The location must be specified relative to the machine running the Import Package utility.
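Putting these parameters together, an import might look like the following sketch. The executable name is not given in this excerpt, so <ImportPackageUtility> is a placeholder for the actual Import Package utility binary in your Object Manager installation; the user name, password, and paths are also placeholders. Note that the -f path must be valid from the Intelligence Server machine's point of view, while the -l path is local to the machine running the utility.

<ImportPackageUtility> -u Administrator -p MyPassword -f "D:\packages\sales_update.mmp" -l "C:\logs\import_package.log"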
Create the XML File
2. From the Tools menu, select Import Package (for a project update
package) or Import Configuration Package (for a configuration
update package).
3. In the Selected Package field, type the name and path of the update
package, or click ... (the browse button) to browse to the update
package.
5. Click Proceed. You are prompted to type the name and location of the
XML file. By default, this is the same as the name and location of the
package file, with an .xml extension instead of .mmp. Click Save.
6. When you are ready to import the update package, call the Project
Merge executable, projectmerge.exe, with the following parameters:
-sp Password: Log into the project source with this password (the login ID to be used is stored in the XML file).

-smp Password: Log into the project with this password (the login ID to be used is stored in the XML file).
where "Filename" is the name and location of the update package, and
"ProjectName" is the name of the project that the update is to be applied
to.
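A minimal sketch of the assumed statement is shown below; the package path and project name are placeholders, and the exact IMPORT PACKAGE outline should be verified in the Command Manager Help for your version.

IMPORT PACKAGE "D:\packages\sales_update.mmp" FOR PROJECT "MicroStrategy Tutorial";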
If the package made any changes to the project schema, you need to update
the schema for the changes to take effect. The Command Manager statement
for updating the schema (sketched after the following option) can, among
other things:
l Recalculate table keys and fact entry levels, if you changed the key
structure of a table or if you changed the level at which a fact is stored.
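A minimal sketch of the schema update statement, assuming the basic UPDATE SCHEMA ... FOR PROJECT form (the optional keywords, such as the one that recalculates table keys, vary by version, so check the Command Manager outline for their exact names):

UPDATE SCHEMA FOR PROJECT "MicroStrategy Tutorial";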
The update package cannot recalculate the object client cache size, and it
cannot update the schema logical information. These tasks must be
performed manually. So, for example, if you import an attribute that has a
new attribute form, you must manually update the project schema before any
objects in the project can use that attribute form.
l In Object Manager, select the project and, from the Project menu, select
Update Schema.
l In Developer, log into the project and, from the Schema menu, select
Update Schema.
For more detailed information about updating the project schema, see the
Optimizing and Maintaining your Project section in the Project Design Help.
When you import an update package, you have the option of creating an
undo package at the same time as the import. Alternately, you can choose to
create an undo package without importing the associated update package.
You import an undo package in the same way as you import any update
package. When you import an undo package, the Version ID and
Modification Date of all objects in the undo package are restored to their
values before the original update package was imported.
The Intelligence Server change journal records the importing of both the
original update package and the undo package. Importing an undo package
does not remove the change journal record of the original update package.
For more information about the change journal, see Monitor System Activity:
Change Journaling, page 828.
The rules that you use to resolve conflicts between the two projects in
Project Merge can be saved to an XML file and reused. You can then
execute Project Merge repeatedly using this rule file. This allows you to
schedule a project merge on a recurring basis. For more details about
scheduling project merges, see Merge Projects with the Project Merge
Wizard, page 811.
Project Merge migrates an entire project. All objects are copied to the
destination project. Any objects that are present in the source project but not
the destination project are created in the destination project.
l To merge two projects that do not have related schemas, the projects
must either have been created with MicroStrategy 9.0.1 or later, or have
been updated to version 9.0.1 or later using the Perform system object
ID unification option. For information about this upgrade, see the
Upgrade Help.
l Project Merge does not transfer user and group permissions on objects.
To migrate permissions from one project to another, use a project security
update package. For more information, see Copy Objects in a Batch:
Update Packages, page 786.
Projects may need to be merged at various points during their life cycle.
These points may include:
In either case, you must move objects from development to testing, and then
to the production projects that your users use every day.
l If an object ID does not exist in the destination project, the object is copied
from the source project to the destination project.
l If an object exists in the destination project and has the same object ID
and version in both projects, the objects are identical and a copy is not
performed.
l If an object exists in the destination project and has the same object ID in
both projects but a different version, there is a conflict that must be
resolved. The conflict is resolved by following the set of rules specified in
the Project Merge Wizard and stored in an XML file. The possible conflict
Merging projects with the Project Merge Wizard does not update the
modification date of the project, as shown in the Project Configuration
Editor. This is because, when copying objects between projects, only the
objects themselves change. The definition of the project itself is not
modified by Project Merge.
After going through the steps in the wizard, you can either execute the
merge right away or save the rules and settings in a Project Merge XML file.
You can use this file to run Project Merge from the Windows command
prompt (see Running Project Merge from the Command Line, page 813) or to
schedule a merge (see Scheduling a Project Merge, page 816).
The following scenario runs through the Project Merge Wizard several times,
each time fine-tuning the rules, and the final time actually performing the
merge.
Both the source and the destination project must be loaded for the project
merge to complete. For more information on loading projects, see Setting
the Status of a Project, page 48.
1. Go to Start > All Programs > MicroStrategy Tools > Project Merge
Wizard.
2. Follow the steps in the wizard to set your options and conflict resolution
rules.
For details about all settings available when running the wizard, see
the Help (press F1 from within the Project Merge Wizard). For
information about the rules for resolving conflicts, see Resolve
Conflicts when Merging Projects, page 817.
3. Near the end of the wizard, when you are prompted to perform the
merge or generate a log file only, select Generate log file only. Also,
choose to Save Project Merge XML. At the end of the wizard, click
Finish. Because you selected to generate a log file only, this serves as
a trial merge.
4. After the trial merge is finished, you can read through the log files to
see what would have been copied (or not copied) if the merge had
actually been performed.
5. Based on what you learn from the log files, you may want to change
some of the conflict resolution rules you set when going through the
wizard. To do this, run the wizard again and, at the beginning of the
wizard, choose to Load Project Merge XML that you created in the
previous run. As you proceed through the wizard, you can fine-tune the
settings you specified earlier. At the end of the wizard, choose to
Generate the log file only (thereby performing another trial) and
choose Save Project Merge XML. Repeat this step as many times as
necessary until the log file indicates that objects are copied or skipped
as you desire.
6. When you are satisfied that no more rule changes are needed, run the
wizard a final time. At the beginning of the wizard, load the Project
Merge XML as you did before. At the end of the wizard, when prompted
to perform the merge or generate a log file only, select Perform merge
and generate log file.
The settings for this routine must be saved in an XML file which can easily
be created using the Project Merge Wizard. Once created, the XML file
serves as the input parameter to the command.
projectmerge -f[ ] -sp[ ] -dp[ ] -smp[ ] -dmp[ ] -sup[ ] -MD -SU -lto -h

-f[ ]: Specifies the path and file name (without spaces) of the XML file to use. (You must have already created the file using the Project Merge Wizard.) Example: -fc:\files\merge.xml

-SU: Updates the schema of the DESTINATION project after the Project Merge is completed. This update is required when you make any changes to schema objects (facts, attributes, or hierarchies).

If the XML file contains a space in the name or the path, you must enclose the name in double quotes, as in the sketch below.
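For example (the file path here is a placeholder; -f and -SU are the switches described above):

projectmerge -f"c:\merge files\dev_to_test.xml" -SU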
The Project Merge Wizard can perform multiple simultaneous merges from
the same project source. This can be useful when you want to propagate a
change to several projects simultaneously.
To do this, you must modify the Project Merge XML file, and then make a
copy of it for each session that you want to run.
3. Make one copy of the XML file for each session of the Project Merge
Wizard you want to run.
l Ensure that each file uses a different Project Merge log file name.
7. For each XML file, run one instance of the Project Merge Wizard from
the command line.
For a list of the syntax options for this command, see Running Project Merge
from the Command Line, page 813.
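One way to launch the parallel sessions, assuming projectmerge is on the system path and that each copy of the XML file already points to a different destination project and log file (the file names below are placeholders), is to start each instance in its own command window:

start projectmerge -fc:\merge\merge_projectA.xml
start projectmerge -fc:\merge\merge_projectB.xml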
2. Change the drive to the one on which the Project Merge utility is
installed. The default installation location is the C: drive (the prompt
appears as: C:\>)
When you define the rules for Project Merge to use, you first set the default
conflict resolution action for each category of objects (schema, application,
and configuration). (For a list of objects included in each category, see Copy
Objects.) Then you can specify conflict resolution rules at the object type
level (attributes, facts, reports, consolidations, events, schedules, and so
on). Object type rules override object category rules. Next you can specify
rules for specific folders and their contents, which override the object type
and object category rules. Finally you can specify rules for specific objects,
which, in turn, override object type rules, object category rules, and folder
rules.
For example, the Use Newer action replaces the destination object with the
source object if the source object has been modified more recently than the
destination object. If you specified the Use newer action for all metrics, but
the Sales metric has been changed recently and is not yet ready for the
production system, you can specify Use existing (use the object in the
destination project) for that metric only and it will not be replaced.
The available actions are:

Use existing: No change is made to the destination object. The source object is not copied.

Replace: The destination object is replaced by the source object.
Non-empty folders in the destination location will never have the same version ID and modification time as the source, because the folder is copied first and the objects are added to it, thus changing the version ID and modification times during the copy process.

Use newer: If the source object's modification time is more recent than the destination object's, the Replace action is used. Otherwise, the Use existing action is used.

Use older: If the source object's modification time is more recent than the destination object's, the Use existing action is used. Otherwise, the Replace action is used.
Object Manager, see Copy Objects Between Projects: Object Manager, page
762.
You can track changes to your projects with the MicroStrategy Search
feature, or retrieve a list of all unused objects in a project with the Find
Unreferenced Objects feature of Object Manager.
For the source project, you specify whether to compare objects from the
entire project, or just from a single folder and all its subfolders. You also
specify what types of objects (such as reports, attributes, or metrics) to
include in the comparison.
You can print this result list, or save it as a text file or an Excel file.
Since the Project Comparison Wizard is a part of Object Manager, you can
also select objects from the result set to immediately migrate from the
source project to the destination project. For more information about
migrating objects using Object Manager, see Copy Objects Between
Projects: Object Manager, page 762.
For example, you can create a search object in the development project that
returns all objects that have been changed after a certain date. This lets you
know what objects have been updated and need to be migrated to the test
project. For more information about development and test projects, see The
Project Life Cycle, page 746.
l The user who was logged in when the search was performed.
l Any search criteria entered into the tabs of the Search for Objects dialog
box.
l A list of all the objects returned by the search, including any folders. The
list includes object names and paths (object locations in the Developer
interface).
3. After your search is complete, from the Tools menu in the Search for
Objects dialog box, select Export to Text. The text file is saved by
Finding unused objects is a part of Object Manager, and thus requires the
Use Object Manager privilege to run. For an overview of Object Manager,
see Copy Objects Between Projects: Object Manager, page 762.
4. In the Look In field, enter the folder you want to start your search in.
l Freeform SQL and Query Builder. For information on Freeform SQL and
Query Builder, see the Advanced Reporting Help.
l Import Data, which lets you use MicroStrategy Web to import data from
different data sources, such as an Excel file, a table in a database, or the
results of a SQL query, with minimum project design requirements.
Managed objects are stored in a special system folder, and can be difficult
to delete individually due to how these objects are created and stored. If you
use one of the features listed above, and then decide to remove some or all
of that feature's related reports and MDX cubes from the project, there may
be unused managed objects included in your project that can be deleted.
For example, you decide to delete a single Freeform SQL report that
automatically created a new managed object named Store. When you delete
the report, the managed object Store is not automatically deleted. You do
not plan to use the object again; however, you do plan to create more
Freeform SQL reports and want to keep the database instance included in
the project. Instead of deleting the entire Freeform SQL schema, you can
delete only the managed object Store.
If you are removing MDX cube managed objects, you must also remove
any MDX cubes that these managed objects depend on.
5. Click OK.
For example, you can create a separate database instance for your Freeform
SQL reports in your project. Later on, you may decide to no longer use
Freeform SQL, or any of the reports created with the Freeform SQL feature.
After you delete all the Freeform SQL reports, you can remove the Freeform
SQL database instance from the project. Once you remove the database
instance from the project, any Freeform SQL managed objects that
depended solely on that database instance can be deleted.
You can implement the same process when removing database instances for
Query Builder, SAP BW, Essbase, and Analysis Services.
1. Remove all reports created with Freeform SQL, Query Builder, or MDX
cubes.
If you are removing MDX cube managed objects, you must also remove
all imported MDX cubes.
5. Clear the check box for the database instance you want to remove from
the project. You can only remove a database instance from a project if
the database instance has no dependent objects in the project.
Projects loaded on specific nodes of the cluster: Manage your Projects Across Nodes of a Cluster, page 1169

Intelligent Cubes, whether they are loaded on Intelligence Server: Managing Intelligent Cubes: Intelligent Cube Monitor, page 1287

Quick search indices and their status: Monitor Quick Search Indices
Before you can view a system monitor, you must have the appropriate privilege
to access that monitor. For example, to view the Job Monitor, you must have
the Monitor Jobs privilege. For more information about privileges, see
Controlling Access to Functionality: Privileges, page 101.
In addition, you must have Monitoring permission for the server definition that
contains that monitor. You can view and modify the ACL for the server
definition by right-clicking the Administration icon, selecting Properties, and
then selecting the Security tab. For more information about permissions and
ACLs, see Controlling Access to Objects: Permissions, page 89.
1. In Developer, log in to the project source that you want to monitor. You
must log in as a user with the appropriate administrative privilege.
The logged information includes items such as the user who made the
change, the date and time of the change, and the type of change (such as
saving, copying, or deleting an object). With change journaling, you can
keep track of all object changes, from simple user actions such as saving or
You must have the Audit Change Journal privilege to view the change
journal.
To view the detailed information for a change journal entry, double-click that
entry. Each entry contains the following information:
User name: The name of the MicroStrategy user that made the change.

Transaction timestamp: The date and time of the change, based on the time on the Intelligence Server machine.

Transaction type: The type of change and the target of the change. For example, Delete Objects, Save Objects, or Enable Logging.

Transaction source: The application that made the change. For example, Developer, Command Manager, MicroStrategy Web, or Scheduler.

Project name: The name of the project that contains the object that was changed. If the object is a configuration object, the project name is listed as <Configuration>.

Comments: Any comments entered in the Comments dialog box at the time of the change.

Machine name: The name of the machine that the object was changed on.

Change type: The type of change that was made. For example, Create, Change, or Delete.
This information can also be viewed in the columns of the change journal. To
change the visible columns, right-click anywhere in the change journal and
select View Options. In the View Options dialog box, select the columns you
want to see.
5. Click OK.
For example:
l To find out when certain users were given certain permissions, you can
view entries related to Users.
You can also quickly filter the entries so that you see the entries for an
object or the changes made by a specific user. To do this, right-click one of
the entries for that object or that user and select either Filter view by
object or Filter view by user. To remove the filter, right-click in the change
journal and select Clear filter view.
4. To see changes made in a specific time range, enter the start and end
time and date.
5. To view all transactions, not just those that change the version of an
object, clear the Show version changes only and Hide Empty
Transactions check boxes.
6. Click OK to close the dialog box and filter the change journal.
l To see the changes made by this user, select Filter view by user.
When you export the change journal, any filters that you have used to view
the results of the change journal are also applied to the export. If you want
to export the entire change journal, make sure that no filters are currently in
use. To do this, right-click in the change journal and select Clear filter
view.
2. Right-click Change Audit and select Export list. The change journal is
exported to a text file.
A prompt is displayed informing you that the list was exported, noting the
folder and file name, and asking if you want to view the file. To view the
file, click Yes.
When you purge the change journal, you specify a date and time. All entries
in the change journal that were recorded prior to that date and time are
deleted. You can purge the change journal for an individual project, or for all
projects in a project source.
3. Set the date and time. All data recorded before this date and time is
deleted from the change journal.
4. To purge data for all projects, select the Apply to all projects check
box. To purge data relating to the project source configuration, leave
this check box cleared.
5. Click Purge Now. When the warning dialog box opens, click Yes to
purge the data, or No to cancel the purge. If you click Yes, change
journal information recorded before the specified date is deleted.
If you are logging transactions for this project source, a Purge Log
transaction is logged when you purge the change journal.
6. Click Cancel.
3. Under Purge Change Journal, set the date and time. All change
journal data for this project from before this date and time will be
deleted from the change journal.
5. Click Purge Now. When the warning dialog box opens, click Yes to
purge the data, or No to cancel the purge. If you click Yes, change
journal information for this project from before the specified date and
time is deleted.
6. Click OK.
You can enable change journaling for any number of projects in a project
source. For each project, when change journaling is enabled, all changes to
all objects in that project are logged.
You can also enable change journaling at the project source level. In this
case information about all changes to the project configuration objects, such
as users or schedules, is logged in the change journal.
If your metadata database grows too large due to change journaling, a best
practice is to keep records active for only a certain number of days and
archive older records. You can set a specific number of days using
Developer.
5. In the Comments field, enter any comments that you may have about
the reason for enabling or disabling change journaling.
7. Click OK.
4. Click OK.
You can disable the requests for object comments from the Developer
Preferences dialog box.
3. Clear the Display change journal comments input dialog check box.
4. Click OK.
The statistics that are logged for each project are set in the Project
Configuration Editor, in the Statistics: General subcategory. The options
are as follows:
Report job tables/columns accessed: Data warehouse tables and columns accessed by each report.

Mobile Clients Manipulations (this option is available if Mobile Clients is selected): Detailed statistics on actions performed by end users on a mobile client.

Only purge statistics logged from the current Intelligence Server: Purge statistics from the database if they are from the Intelligence Server you are now using. This is applicable if you are using clustered Intelligence Servers.
You can log different statistics for each project. For example, you may want
to log the report job SQL for your test project when tracking down an error. If
you logged report job SQL for your production project, and your users are
running many reports, the statistics database would quickly grow to an
unwieldy size.
Intelligence Server can collect and log information from the MicroStrategy
Server Jobs and MicroStrategy Server Users categories. On UNIX or Linux,
Intelligence Server can also collect and log information from the following
categories:
l Memory
l System
l Process
l Processor
l Network Interface
l Physical Disk
If the Diagnostics option does not appear on the Tools menu, it has
not been enabled. To enable this option, from the Tools menu,
select MicroStrategy Developer Preferences. In the General
category, in the Advanced subcategory, select the Show
Diagnostics Menu Option check box and click OK.
5. In the Statistics column, select the check boxes for the counters that
you want to log to the statistics repository.
8. From the File menu, select Save. The changes that you have made to
the logging properties are saved.
If you are using Enterprise Manager to monitor your statistics, the database
that hosts the staging tables also contains the Enterprise Manager data
warehouse. The information in the staging tables is processed and loaded
into the data warehouse as part of the data load process. For information
about the structure of the Enterprise Manager data warehouse, see the
Enterprise Manager Data Dictionary. For steps on configuring Enterprise
Manager and scheduling data loads, see the Enterprise Manager Help.
l SQL Server
l Oracle
l Teradata
l Sybase ASE
For information about the specific versions of each database that are
supported, see the Readme.
Under single instance session logging, you must still specify which statistics
are logged for each individual project in the project source, as described in
Overview of Intelligence Server Statistics, page 838.
startup. For details on the possible side effects of not loading all projects,
see MicroStrategy Tech Note TN14591.
5. Click OK.
l Configure your system for single instance session logging, so that all
projects for a project source use the same statistics repository. This can
reduce duplication, minimize database write time, and improve
performance. For information about single instance session logging, see
Overview of Intelligence Server Statistics, page 838.
l Use the sizing guidelines (see Sizing Guidelines for the Statistics
Repository, page 844) to plan how much hard disk space you need for the
statistics repository.
l When the Basic Statistics, Report Job Steps, Document Job Steps, Report SQL, Report Job Tables/Columns Accessed, and Prompt Answers statistics are logged, a user executing a report increases the statistics database size by an average of 70 kilobytes. This value assumes that large and complex reports are run as often as small reports. In contrast, in an environment where more than 85 percent of the reports that are executed return fewer than 1,000 cells, the average report increases the statistics database size by less than 10 kilobytes.
To determine how large a database you need, multiply the space required
for a report by the number of reports that will be run over the amount of time
you are keeping statistics. For example, you may plan to keep the statistics
database current for six months and archive and purge statistics data that
are older than six months. You expect users to run an average of 400 reports
per day, of which 250, or 63 percent, return fewer than 1,000 rows, so you
assume that each report will increase the statistics table by about 25
kilobytes.
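Working through this example: 400 reports per day over six months (about 183 days) is roughly 73,000 report executions; at about 25 kilobytes each, the statistics repository needs on the order of 1.8 GB for that period, plus room for growth.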
Do not store the statistics in the same database that you are using for either
your MicroStrategy metadata or your data warehouse.
l To use an existing database, note its Data Source Name (DSN). This DSN
is used when you create the statistics tables.
3. Select the Statistics & Enterprise Manager option and clear the other
options. Click Next.
4. From the DSN drop-down list, select the Data Source Name for the
database that will contain your Enterprise Manager repository (the
same database that you will use to log Intelligence Server statistics).
Any table in this database that has the same name as a MicroStrategy
statistics table is dropped. For a list of the MicroStrategy statistics
tables, see the Intelligence Server Statistics Data Dictionary.
5. In the User Name and Password fields, enter a valid login and
password for the data warehouse database.
The user name you specify must have permission to create and drop
tables in the database, and permission to create views.
6. If you want to use a custom SQL script for creating the repository, click
Advanced.
l In the Script field, the default script file name is displayed. The
selected script depends on the database type that you specified
earlier.
7. Click Next.
Clicking Yes deletes the existing tables and all information in them.
8. Click Finish.
2. Right-click the project that you want to monitor and select Project
Configuration.
If you are using single instance session logging, the project that you
select to configure must be the project that you selected when you set
up single instance session logging.
3. Expand the Database Instances category, and select the SQL Data
warehouses subcategory.
4. You need to create a new database instance for the statistics repository
database. Click New.
5. In the Database instance name field, type in a name for the statistics
repository database instance.
8. In the Database connection name field, type a name for the database
connection.
9. From the ODBC Data Sources list, select the Data Source Name used
to connect to the statistics repository database.
11. You need to create a new database login to log in to the database
instance. On the General tab, click New.
12. Type a name for the new database login in the Database login field.
16. From the Statistics database instance drop-down list, select your new
statistics database instance.
2. Select the DSN for your statistics and Enterprise Manager repository
and click Modify.
l Sybase: click the Advanced tab and select the Enable Describe
Parameter checkbox.
4. Click OK twice.
You must specify what statistics to log for all projects that log statistics.
Single instance session logging (see Overview of Intelligence Server
Statistics, page 838) causes all projects on a project source to share the
same statistics database, but not to log the same statistics.
2. Right-click the project that you want to monitor and select Project
Configuration.
5. To log advanced statistics, select the checkboxes for the statistics you
want to log. For information about each check box, see Overview of
Intelligence Server Statistics, page 838.
6. Click OK.
7. To begin logging statistics, unload and reload the project for which you
are logging statistics:
The feature flag is controlled via Command Manager with the following
scripts:
To verify that failed statistics messages are being captured by this feature:
l 2019-10-29 08:52:54.675[HOST:env-168909laiouse1][PID:8701][THR:139814881539840] Open file /opt/mstr/MicroStrategy/log/FailedSentOutMessages/KafkaFailMessage_1 for storing fail messages.
l 2019-10-29 08:52:54.675[HOST:env-168909laiouse1][PID:8701][THR:139814881539840] Start processing fail messages in file /opt/mstr/MicroStrategy/log/FailedSentOutMessages/KafkaFailMessage_1.
l 2019-10-29 08:52:54.675[HOST:env-168909laiouse1][PID:8701][THR:139814881539840] Finish processing 4 fail messages in the file.
Feature Behavior
Intelligence Server will create a folder named "FailedSentOutMessages"
in the same folder as the DSSErrors.log file. By default this is:
l Linux: /opt/MicroStrategy/log
Once the file has been read by Intelligence Server, the file will be deleted
from the disk.
In MicroStrategy ONE, the file count limit is set to 2560 log files with a default file size limit of 4 MB per file. The file count limit can be modified through the following registry setting:
HKEY_LOCAL_MACHINE\SOFTWARE\MicroStrategy\Data Sources\CastorServer\KafkaProducer Fail Message File Count
The default file size limit of 4 MB cannot be changed; only the file count limit can be changed.
Example
l Linux: Open [HKEY_LOCAL_MACHINE\SOFTWARE\MicroStrategy\Data Sources\CastorServer] and add a new line:
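The exact line to add is environment specific. As a purely illustrative sketch (the value name is the one listed above; the data format and the count of 1280 are assumptions to confirm with MicroStrategy support), the entry might look like:
"KafkaProducer Fail Message File Count"="1280"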
l If you switch the metadata in your environment with a copy of the currently used metadata, such as a backup of the current metadata, a rebuild is required. Quick Search cannot distinguish between copies of a metadata.
l If you change your index folder, a rebuild is required. If the index folder is changed via MicroStrategy Developer, the index for all loaded projects is rebuilt automatically. You do not need to rebuild the index manually.
l If you cannot search your objects normally, a rebuild may be necessary.
If the Diagnostics option does not appear on the Tools menu, it has
not been enabled. To enable this option, go to Tools > Preferences
> Developer > General > Advanced > Show Diagnostics Menu
Option check box and click OK.
To configure the server instance with the logging settings that are used by
this machine, select CastorServer Instance and then select the Use
Machine Default Diagnostics Configuration check box.
This log destination is intended for use for interactive testing and
troubleshooting purposes, and should not be used in production
deployments.
Logging the Kernel XML API component can cause the log file to grow very
large. If you enable this diagnostic, make sure the log file you select in the
File Log column has a Max File Size (KB) of at least 2000. For instructions
on how to set the maximum size of a log file, see Creating and Managing
Log Files, page 874
5. Click Save.
You may need to restart Intelligence Server for the new logging
settings to take effect.
Once the system begins logging information, you can analyze it by viewing
the appropriate log file. For instructions on how to read a MicroStrategy log
file, see Creating and Managing Log Files, page 874.
Diagnostics Configuration
These log messages can be recorded in a MicroStrategy log file. They can also be recorded in the operating system's log, such as the Windows Event Viewer.
l Error: This dispatcher logs the final message before an error occurs,
which can be important information to help detect the system component
and action that caused or preceded the error.
l Fatal: This dispatcher logs the final message before a fatal error occurs,
which can be important information to help detect the system component
and action that caused or preceded the server fatality.
l Info: This dispatcher logs every operation and manipulation that occurs on
the system.
Component and dispatcher:
l Authentication Server: Trace
l Kernel: Scope Trace, Cache Trace, Scheduler Trace, User Trace, Trace
You can enable or disable performance logging without having to clear all
the logging settings. To enable logging to a file, make sure the Log
Counters drop-down list is set to Yes. To enable logging to the statistics
database, make sure the Persist Statistics drop-down list is set to Yes.
2. From the Log Destination drop-down box, select the file to log
performance counter data to.
To create a new performance log file, from the Log Destination drop-
down box, select <New>. For instructions on using the Log Destination
Editor to create a new log file see Creating and Managing Log Files,
page 874.
3. In the Logging Frequency (sec) field, type how often, in seconds, that
you want the file log to be updated with the latest performance counter
information.
5. In the Logging Frequency (min) field, type how often, in minutes, that
you want the statistics database to be updated with the latest
performance counter information.
7. When you are finished configuring the performance counter log file,
click Save.
The table below lists the major MicroStrategy software features and the
corresponding performance counters that you can use to monitor those
features. For example, if the Attribute Creation Wizard seems to be running
slowly, you can track its performance with the DSS AttributeCreationWizard,
DSS ProgressIndicator, and DSS PropertySheetLib performance counters.
Attribute Creation Wizard: DSS AttributeCreationWizard, DSS ProgressIndicator, DSS PropertySheetLib (Function Level Tracing).
DSS EditorContainer
DSS EditorManager
DSS ExpressionboxLib
DSS FormCategoriesEditor
DSS PropertySheetLib
Client Connection: DSS ClientConnection (Session Tracing, Data Source Tracing, Data Source Enumerator Tracing).
DSS CommonDialogsLib
DSS PromptsLib
Custom Group Editor: DSS CommonDialogsLib, DSS CommonEditorControlsLib, DSS Components, DSS DateLib, DSS EditorContainer, DSS EditorManager, DSS EditorSupportLib, DSS ExpressionboxLib, DSS FilterLib, DSS FTRContainerLib, DSS ObjectsSelectorLib, DSS PromptEditorsLib, DSS PromptsLib (all components perform Function Level Tracing; DSS Components also performs Explorer and Component Tracing).
Data Transmitters and Transformers: DSS DataTransmitter, DSS MhtTransformer, DSS MIME, DSS SMTPSender, DSS Network (Function Level Tracing).
Fact Creation Wizard: DSS FactCreationWizard, DSS ProgressIndicator (Function Level Tracing).
DSS ColumnEditor, DSS CommonDialogsLib, DSS ExtensionEditor, DSS FactEditor, DSS Components, DSS DateLib, DSS EditorContainer, DSS EditorManager, DSS EditorSupportLib, DSS ExpressionboxLib, DSS FilterLib, DSS FTRContainerLib, DSS ObjectsSelectorLib, DSS PromptEditorsLib, DSS PromptsLib (DSS Components also performs Explorer and Component Tracing).
Hierarchy Editor: DSS CommonDialogsLib, DSS EditorContainer, DSS EditorManager, DSS HierarchyEditor (Function Level Tracing).
HTML Document Editor: DSS CommonDialogsLib, DSS Components, DSS DocumentEditor, DSS EditorContainer, DSS EditorManager (all components perform Function Level Tracing; DSS Components also performs Explorer and Component Tracing).
Metadata SQL: DSS MD4Server, DSS MDServer (Object Tracing, Access Tracing, SQL Tracing, Content Source Tracing).
DSS Components, DSS DimtyEditorLib, DSS EditorContainer, DSS EditorManager, DSS ExpressionboxLib, DSS MeasureEditorLib, DSS PromptsLib, DSS PropertiesControlsLib (Function Level Tracing; DSS Components also performs Explorer and Component Tracing).
Partition Editor: DSS CommonDialogsLib, DSS Components, DSS DataSliceEditor, DSS EditorContainer, DSS EditorManager, DSS FilterLib, DSS PartitionEditor, DSS PrintCore, DSS ProgressIndicator (all components perform Function Level Tracing; DSS Components also performs Explorer and Component Tracing).
Project Creation: DSS AttributeCreationWizard, DSS FactCreationWizard, DSS ProgressIndicator, DSS ProjectCreationLib, DSS WHCatalog (Function Level Tracing).
Project Duplication: DSS AsynchLib, DSS ProgressIndicator, DSS ProjectUpgradeLib, DSS SchemaManipulation (Function Level Tracing).
Project Upgrade: DSS AsynchLib, DSS ProgressIndicator, DSS ProjectUpgradeLib, DSS SchemaManipulation (Function Level Tracing).
Prompt Editor: DSS CommonDialogsLib, DSS CommonEditorControlsLib, DSS Components, DSS EditorContainer, DSS EditorManager, DSS EditorSupportLib, DSS PromptEditorsLib, DSS PromptStyles, DSS SearchEditorLib (all components perform Function Level Tracing; DSS Components also performs Explorer and Component Tracing).
Report Editor: DSS CommonDialogsLib, DSS CommonEditorControlsLib, DSS Components, DSS DateLib, DSS EditorContainer, DSS EditorManager, DSS EditorSupportLib, DSS ExportLib, DSS ExpressionboxLib, DSS FilterLib, DSS FTRContainerLib, DSS GraphLib, DSS GridLib, DSS ObjectsSelectorLib, DSS PageByLib, DSS PrintGraphInterface, DSS PrintGridInterface, DSS PromptEditorsLib, DSS PromptsLib, DSS PropertySheetLib, DSS RepDrillingLib, DSS RepFormatsLib, DSS RepFormsLib, DSS ReportControl, DSS ReportDataOptionsLib, DSS ReportSortsLib, DSS ReportSubtotalLib (all components perform Function Level Tracing; DSS Components also performs Explorer and Component Tracing).
DSS DatabaseInstanceWizard
DSS DBConnectionConfiguration
DSS DBRoleConfiguration
DSS DiagnosticsConfiguration
DSS EVentsEditor
DSS PriorityMapEditor
DSS ProjectConfiguration
DSS SecurityRoleEditor
DSS SecurityRoleViewer
DSS ServerConfiguration
DSS UserEditor
DSS VLDBEditor
Table Editor: DSS CommonDialogsLib, DSS EditorContainer, DSS EditorManager, DSS TableEditor (Function Level Tracing).
DSS CommonDialogsLib
DSS Components
DSS GraphLib
DSS GridLib
DSS PageByLib
DSS PrintGraphInterface
DSS PrintGridInterface
DSS PromptsLib
DSS PropertySheetLib
DSS RepDrillingLib
DSS RepFormatsLib
DSS RepFormsLib
DSS ReportControl
DSS ReportDataOptionsLib
DSS ReportSortsLib
DSS ReportSubtotalLib
DSS CommonDialogsLib
DSS TransformationEditor
Warehouse Catalog Browser: DSS CommonDialogsLib, DSS DatabaseInstanceWizard, DSS DBRoleConfiguration, DSS SchemaManipulation, DSS WHCatalog (Function Level Tracing).
Each log file has a specified maximum size. When a MicroStrategy log file
reaches its maximum size, the file is renamed with a .bak extension, and a
new log file is created using the same file name. For example, if the
DSSErrors.log file reaches its maximum size, it is renamed
DSSErrors.bak, and a new DSSErrors.log file is created.
You can create new log files and change the maximum size of log files in the
Log Destination Editor.
2. From the Select Log Destination drop-down list, select the log file.
3. In the Max File Size (KB) field, type the new maximum size of the log
file, in kilobytes.
3. In the File Name field, type the name of the file. The .log extension is
automatically appended to this file name.
4. In the Max File Size (KB) field, type the maximum size of the new log
file, in kilobytes.
These log files are plain text files and can be viewed with any text editor.
The MicroStrategy Web server error log files are in the MstrWeb/WEB-
INF/log/ directory. These log files can be viewed from the Web
Administrator page, by clicking View Error log on the left side of the page.
For more information about viewing log files in MicroStrategy Web, see the
Web Administrator Help (from the Web Administrator page, click Help).
Non-error messages in the log files share a common format. Each entry has the following parts:
l MODULE NAME: Name of the MicroStrategy component that performed the action.
Error messages in the log files have a similar format, but also include the error message and the error code.
Sample Log File
The following sample is a simple log file that was generated from
MicroStrategy Web (ASP.NET) after running the report called Length of
Employment in the MicroStrategy Tutorial project. The bulleted line before
each entry explains what the log entry is recording.
• Intelligence Server loads the report definition object named Length of Employment from
the metadata.
• Intelligence Server checks to see whether the report exists in the report cache.
• Intelligence Server checks for prompts and finds none in the report.
More detail is logged for report execution if the report is run from Developer.
Each SSD records information under the same process ID and thread ID.
This information includes the server and project configuration settings,
memory usage, schedule requests, user sessions, executing jobs and
processing unit states, and so on. The SSD information is broken into 14
sections, summarized below.
This section precedes the actual SSD and provides information on what
triggered the SSD, such as memory depletion or an unknown exception
error.
Section 3: Server Definition Basic (Castor Server Configuration 'Project') Information
l Number of projects
WorkingSet File Directory and Max RAM for WorkingSet Cache values are
not listed in an SSD.
This section includes basic information related to the state and configuration
of projects. This shows settings that are defined in the Project Configuration
Editor, such as:
l Project name
l Cache settings
l Governor settings
l DBRole used
l DBConnection settings
l Memory throttling
l Idle timeouts
l XML governors
The callstack dump provides information on the functions being used at the
time the SSD was written. Similarly, the lockstack provides a list of active
locks. The Module info dump provides a list of files that are loaded into
memory by Intelligence Server, and their location in memory.
This section contains the memory profile of the Intelligence Server process
and machine. If any of these values are near their limit, memory may be a
cause of the problem.
Section 8: Project State Summary
l Reports
l Documents
This section provides information on the size of various user inboxes and information related to the WorkingSet.
Section 12: Jobs Status Snapshot
This section provides a snapshot of the jobs that were executing at the time
of the SSD. This information may be useful to see what the load on
Intelligence Server was, as well as what was executing at the time of the
error. If the error is due to a specific report, the information here can help
you reproduce it.
This section provides information about the states of the threads in each
processing unit in Intelligence Server. It also provides information on the
number of threads per Processing Unit and to what priority they are
assigned.
The Automated Crash Reporting and Diagnostics Tool does not transmit any
personally identifiable information.
DSSErrors.log File
Minidump File
A minidump file records the thread stack memory, running context, and
statistics for a crashed process, including:
l A list of the executable and shared libraries that were loaded in the
process.
l A list of threads present in the process. For each thread, the minidump
includes the state of the processor registers, and the contents of the
threads' stack memory. These data are uninterpreted byte streams.
l Processor and operating system versions, the reason for the dump, and so
on.
dump_path: /backtrace/var/coronerd//microstrategy/iserver/_
objects/0000/tx.0EC4.dmp outfile: -
Classifier: abort
Crash info:
Application: MSTRSvr
Crashed YES
reason: SIGABRT
address: 0x3e800007c69
"attributes": {
"license_key": "1234567890abcdefghijklmnopqrstuvwxyz",
"object_ids": "(OID,PID,UID)=
(DB8D5B064BBE3C24F541DAA81A507FDC,A279D4564F7E225A5FEF6C9164251029,54F3D26011D
2896560009A8E67019608)",
"production_env": 0,
"reason": "crash",
"server_sid": "CD880B5A4240CFDEF86251902C3E37DA",
"sys_name": "Windows NT",
"sys_version": "6.2.9200",
"total_cpu": "4",
"total_ram": "25769197568",
"upload_status_DSSErrors.log": "uploaded",
"attachment_DSSErrors.log": "DSSErrors.log",
"cpu.count": 2,
"uname.sysname": "Linux",
"uname.version": "3.10.0-957.el7.x86_64",
"uname.machine": "amd64",
"Product": "Intelligence Server",
"hostname": "CentOS-DEBUG-10-244-20-234",
"customer_dsi": "0123",
"purpose": "ABA",
"test_id": "T634",
"test_type": "acceptance",
"version": "11.2.0000.35320",
"upload_file_minidump": "322f119b-8951-4381-870a889e-84830514.dmp",
"error.message": "SIGABRT",
"fault.address": "0x3e800007c69",
"application": "MSTRSvr",
"process.age": 1570209499
},
Memory Regions:
[0] 0x400000 - 0x421000 r-xp
[1] 0x620000 - 0x622000 rw-p
...
...
...
thread-63 tid=32352
[0] /usr/lib64/libpthread-2.17.so!__pthread_cond_wait + 0xc5
[1] /iserver-
install/BIN/Linux/lib/libM8Synch4.so!MSynch::InprocessRecursiveMutex::SmartLoc
k::WaitUntilSpuriouslyWokenUp(pthread_cond_t&) const
[/var/lib/jenkins/Projects/microstrategy/Tech/Server/Common/Synch/Synch/Privat
eSource/../ProtectedSource/InprocessRecursiveMutex.h : 198 + 0x12]
[2] /iserver-
install/BIN/Linux/lib/libM8Synch4.so!MSynch::ManualEvent::WaitForever() const
[/var/lib/jenkins/Projects/microstrategy/Tech/Server/Common/Synch/Synch/Privat
eSource/../ProtectedSource/ManualEvent.h : 180 + 0xf]
[3] /iserver-
install/BIN/Linux/lib/libM8Synch4.so!MSynch::EventImpl::WaitForever() const
[/var/lib/jenkins/Projects/microstrategy/Tech/Server/Common/Synch/Synch/Privat
eSource/EventImpl.cpp : 87 + 0x10]
[4] /iserver-install/BIN/Linux/lib/libMJThread.so!MSIThread::Run()
[/var/lib/jenkins/Projects/microstrategy/Tech/Server/Kernel/SourceCode/MSIThre
ad/MSIThread.cpp : 603 + 0x5]
[5] /iserver-
install/BIN/Linux/lib/libM8Synch4.so!MSynch::RunnableProxyImpl::Run()
[/var/lib/jenkins/Projects/microstrategy/Tech/Server/Common/Synch/Synch/Privat
eSource/../../Defines/RunnableProxyImpl.h : 93 + 0x5]
[6] /iserver-
install/BIN/Linux/lib/libM8Synch4.so!MSynch::ThreadImpl::ThreadFunction(void*)
[/var/lib/jenkins/Projects/microstrategy/Tech/Server/Common/Synch/Synch/Privat
eSource/ThreadImpl.cpp : 162 + 0x5]
[7] /usr/lib64/libpthread-2.17.so!start_thread + 0xc5
[8] /usr/lib64/libc-2.17.so!__clone + 0x6d
Crash info:
"attributes": {
"server_sid": "E5A0BB539945ACC7C8D4C586D267C9F6",
"production_env": "false",
"license_key": "1234567890abcdefghijklmnopqrstuvwxyz",
"total_cpu": "4",
"total_ram": "16775131136",
"sys_version": "4.18.0-305.el8.x86_64 #1 SMP Thu Apr 29 08:54:30 EDT 2021",
"sys_name": "Linux",
"hostname": "CentOS-DEBUG-10-244-20-234",
"customer_dsi": "0123",
"purpose": "ABA",
"reason": "server_stop",
"version": "11.3.0560.01287",
}
l Linux: /opt/mstr/MicroStrategy/IntelligenceServer/crash_
report.ini
[Config]
ServerURL=https://submit.backtrace.io/microstrategy/35d272ef647f8ec00f3560749e
6457bb7a131fed4418f9b48adad8e113e0dca8/minidump
ServerProdURL=https://submit.backtrace.io/microstrategy/35d272ef647f8ec00f3560
749e6457bb7a131fed4418f9b48adad8e113e0dca8/minidump
dump_path=crash_dumps
enable=true
diagnostics=true
native_dump=true
keep_dump_file=true
[CrashAttachments]
DSSErrors.log=true
cubehealthchecker.log=true
How to manually upload the minidump file to MicroStrategy
Windows Example
Linux/MacOS Example
{"response":"ok","_rxid":"68000000-cde8-6f03-0000-
000000000000"}
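The upload commands themselves are environment specific and are not reproduced above. As a rough, hypothetical sketch for Linux or macOS (assuming the minidump is posted over HTTPS to the ServerURL value from crash_report.ini; the file name and token shown are placeholders), the upload could look like:
curl -v --data-binary @322f119b-8951-4381-870a889e-84830514.dmp "https://submit.backtrace.io/microstrategy/<token>/minidump"
A successful upload returns a JSON acknowledgment like the one shown above.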
For instance, you may want to ensure that the changes involved in moving
your project from a development environment into production do not alter
any of your reports. Integrity Manager can compare reports in the
development and the production projects, and highlight any differences. This
can assist you in tracking down discrepancies between the two projects.
For reports you can test and compare the SQL, grid data, graph, Excel, or
PDF output. For documents you can test and compare the Excel or PDF
output, or test whether the documents execute properly. If you choose not to
test and compare the Excel or PDF output, no output is generated for the
documents. Integrity Manager still reports whether the documents executed
successfully and how long it took them to execute.
l To execute an integrity test on a project, you must have the Use Integrity
Manager privilege for that project.
l To test the Excel export of a report or document, you must have Microsoft
Excel installed on the machine running Integrity Manager.
Enterprise Manager
MicroStrategy Enterprise Manager helps you analyze Intelligence Server
statistics. Enterprise Manager provides a prebuilt MicroStrategy project with
more than a hundred reports and dashboards covering all aspects of
Intelligence Server operation. You can also use Enterprise Manager's
prebuilt facts and attributes to create your own reports so you can have
immediate access to the performance and system usage information.
For steps on setting up Enterprise Manager and using the reports in it, see the Enterprise Manager Help.
The specifications of the machines that you use to run Intelligence Server,
how you tune those machines, and how they are used depend on the number
of users, number of concurrently active users, their usage patterns, and so
on. MicroStrategy provides up-to-date recommendations for these areas on
the MicroStrategy Knowledge Base.
As a high-level overview of tuning the system, you should first define your
system requirements, and then configure the system's design using those
requirements. The following topics lay the foundation for the specific tuning
guidelines that make up the rest of this section.
These scenarios share common requirements that can help you define your
own expectations for the system, such as the following:
l You may require that the system be able to handle a certain number of
concurrent users logged in, or a certain number of active users running
reports and otherwise interacting with the system.
l You may require that users have access to certain features, such as
scheduling a report for later execution, or sending a report to someone
else via email, or that your users will be able to access their reports online
through MicroStrategy Web.
The limits that the system encounters may be Intelligence Server machine
capacity, the data warehouse's throughput capacity, or the network's
capacity.
The diagram below illustrates these factors that influence the system's
capacity.
UNIX and Linux systems allow processes and applications to run in a virtual
environment. Intelligence Server Universal installs on UNIX and Linux
systems with the required environment variables set to ensure that the
server's jobs are processed correctly. However, you can tune these system
settings to fit your system requirements and improve performance. For more
information, see the Planning Your Installation section of the Installation
and Configuration Help.
You must have the Configure Governing privilege for the project or project
source.
You must have Configuration permissions for the server object. In addition, to
access the Project Configuration Editor you must have Write permission for the
project object. For more information about server object permissions, see
Permissions for Server Governing and Configuration, page 95.
Client-Server Communications: Number of network threads: Enter the number of network threads. Default value: 5.
Update pass-through credentials when a successful login occurs: Select this checkbox to update the user's database or LDAP credentials on a successful MicroStrategy login. Default value: Checked.
Use Public/Private Key to Sign/Verify Token: Check this checkbox to use a public or private key to sign or verify a token. This requires the setup of a public or private key. Default value: Unchecked.
Purge Change Journal: In the Time field, enter the time.
Statistics - General
Statistics - Purge
Select From/To dates: Select the date range within which you want the purge operation to be performed. Default value: Today minus one year / Today.
Session Recovery and Deferred Inbox storage directory: Specifies where the session information is written to disk. Default value: .\Inbox\SERVER_DEFINITION_NAME\
Enable catalog cache: The Catalog cache is a cache for the catalog for the data warehouse database. Default value: Checked. Maximum use of catalog cache: 25.
Projects - General
Clustering - General
LDAP - Server
LDAP - Platform
LDAP - Filters
User search filter: (&(objectclass=LDAP_USER_OBJECT_CLASS)(LDAP_LOGIN_ATTR=#LDAP_LOGIN#)) Default value: Empty.
Where:
LDAP_USER_OBJECT_CLASS indicates the object class of the LDAP users. For example, you can enter (&(objectclass=person)(cn=#LDAP_LOGIN#)).
(&(objectclass=person)(uniqueID=#LDAP_LOGIN#))
(&(objectclass=LDAP_GROUP_OBJECT_CLASS)(LDAP_MEMBER_LOGIN_ATTR=#LDAP_LOGIN#))
(&(objectclass=LDAP_GROUP_OBJECT_CLASS)(LDAP_MEMBER_DN_ATTR=#LDAP_DN#))
(&(objectclass=LDAP_GROUP_OBJECT_CLASS)(gidNumber=#LDAP_GIDNUMBER#))
LDAP_GROUP_OBJECT_CLASS indicates the object class of the LDAP groups. For example, you can enter (&(objectclass=groupOfNames)(member=#LDAP_DN#)).
LDAP_MEMBER_[LOGIN or DN]_ATTR indicates which LDAP attribute of an LDAP group is used to store LDAP logins/DNs of the LDAP users. For example, you can enter (&(objectclass=groupOfNames)(member=#LDAP_DN#)).
LDAP - Schedules
(&(objectclass=LDAP_USER_OBJECT_CLASS)(LDAP_LOGIN_ATTR=SEARCH_STRING))
LDAP_USER_OBJECT_CLASS indicates the object class of the LDAP users. For example, you can enter (&(objectclass=person)(cn=h*)).
LDAP_LOGIN_ATTR indicates which LDAP attribute to use to store LDAP logins. For example, you can enter (&(objectclass=person)(cn=h*)).
(&(objectclass=LDAP_USER_OBJECT_CLASS)(LDAP_LOGIN_ATTR=SEARCH_STRING))
LDAP_USER_OBJECT_CLASS indicates the object class of the LDAP users. For example, you can enter (&(objectclass=person)(cn=h*)).
LDAP_LOGIN_ATTR indicates which LDAP attribute to use to store LDAP logins. For example, you can enter (&(objectclass=person)(cn=h*)).
SEARCH_STRING indicates the search criteria for your user search filter. You must match the correct LDAP attribute for your search filter. For example, you can search for all users with an LDAP user login that begins with the letter h by entering (&(objectclass=person)(cn=h*)).
Note: Depending on your LDAP server vendor and your tree structure created within LDAP, you may need to try different attributes within the search filter syntax above. For example:
(&(objectclass=person)(uniqueID=SEARCH_STRING))
where uniqueID is the LDAP attribute your company uses for authentication.
LDAP_GROUP_OBJECT_CLASS indicates the object class of the LDAP groups. For example, you can enter (&(objectclass=groupOfNames)(cn=h*)).
LDAP_GROUP_ATTR indicates which LDAP attribute of an LDAP group is searched for to retrieve a list of groups. For example, you can enter (&(objectclass=groupOfNames)(cn=h*)).
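Before entering a group or user filter in MicroStrategy, it can be useful to test it directly against the LDAP server. A minimal sketch using the standard OpenLDAP ldapsearch client (the host, bind DN, and search base are placeholders for your environment):
ldapsearch -H ldap://ldap.example.com:389 -D "cn=admin,dc=example,dc=com" -W -b "dc=example,dc=com" "(&(objectclass=groupOfNames)(cn=h*))"
If the filter returns the expected entries here, the same filter syntax can then be entered in the corresponding MicroStrategy setting.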
as well as in MicroStrategy. To support this option, the LDAP Server must be configured as the Microsoft Active Directory Server domain controller, which stores the Windows system login information. See Identifying Users: Authentication for more information on Windows authentication.
Use default LDAP attribute ('mail'): Specify whether to use the default LDAP email attribute mail (the default selection) or another LDAP attribute. Default value: Selected.
Device (Address Properties - Generic): If you choose to import email addresses, the imported email address becomes the default email address. This overwrites the existing default email address, if one exists.
User logon fails if LDAP attribute value is not read from the LDAP server: Select the User login fails if LDAP attribute value is not read from the LDAP server checkbox to prevent LDAP users from logging into the MicroStrategy system if they do not have all the attributes that have been imported into the system. Default value: Unselected.
Synch user at logon: Select this checkbox to synch users when they log in. Default value: Unchecked.
History settings: Maximum number of messages per user: Controls how many messages can exist in a user's History List. Default value: 10,000.
History Directory: The location where History List messages are saved. Default value: .\Inbox\SERVER_DEFINITION_NAME\
Database Instance: Select the database instance. Default value: <None>.
External central storage directory for Database-based History List: Specify where file-based History List messages are stored if you are using a hybrid History List repository. Default value: Empty.
Import groups: If this option is selected, all SAP roles that the SAP user is a member of are imported as groups in MicroStrategy. Default value: Unchecked.
Default project drill map: Select a default project drill map or click Clear to remove the default drill map from the field. Default value: Empty.
Enable Web personalized drill paths: When this is selected, Web users can see only personalized drill paths rather than all drill paths. Personalized drill paths are based on each object's access control list (ACL), specified in the Security category of the Properties dialog box. If you set up ACLs, all drill paths are still displayed in Web until you enable Web personalized drill paths. Default value: Unchecked.
Sort drilling options in ascending alphabetical order: When this is enabled, all drilling options are automatically sorted alphabetically in the display when a user right-clicks a drillable object. Sorting occurs within a hierarchy and between hierarchies, in ascending alphabetical order. Note: Sorting is by drill type, then by set name, then by path (attribute) name. However, for most custom drill paths, the drill type is "drilltounit" and the set name is generally empty, so the most likely default sorting order is ascending order of path (attribute) name. Default value: Unchecked.
Edit: From the Edit drop-down list, select whether you want to add a header or a footer to PDFs created when a report is exported from this project. Default value: Empty.
Edit: You can define static text that will appear on all reports within a project. This is particularly useful for adding text such as "Confidential," "Proprietary," your company's name, and so on. The text appears on every report that is exported from the project. The text can appear as a header or as a footer. Default value: Empty.
Do not merge or duplicate headers when exporting to Excel/Word: Select the Do not merge or duplicate headers when exporting to Excel/Word checkbox to repeat the table headers when exporting a report or a document to an Excel sheet or a Word document, as per MicroStrategy 8.1.x. Default value: Empty.
Export to Flash file format: The Export to Flash using this file format setting allows you to select the Flash file format for documents and dashboards. You can choose to export all the Flash files in a project in either MHT or PDF format. Default value: PDF.
Please type the text to display in the window title: The Window title is displayed after the name of the object (report, document, metric, and so on) on the title bar of each interface for Developer users. The window title allows a user to confirm which project definition they are working with. Type the text to display in this field. Default value: Empty.
Report Details Properties: Click to specify report details properties. See Project Definition - Documents and Reports - Report Details Properties - General for more information.
Watermark: Click to specify the default watermark for documents and reports. See Project Definition - Documents and Reports - Watermark for more information.
Allow documents to overwrite this watermark: Select this checkbox if you want individual documents to be able to overwrite the watermark. Default value: Checked.
Specify the web server that will be used to replace the WEBSERVER macro in documents: If a document containing the WEBSERVER macro is executed from MicroStrategy Web, the macro is replaced with the web server used to execute the document. If the document is executed from Developer, the WEBSERVER macro is replaced with the web server specified in the Specify the web server that will be used to replace WEBSERVER macro in documents field.
Specify the web server that will be used in link to history list for email subscriptions and notifications of history list subscriptions: If the document is executed through a subscription, you can use this field to specify which web server to use in the link to History List messages in email subscriptions and notifications. Default value: Empty.
Flash documents: Enable links in exported Flash documents (.mht files): Select this option to enable links in stand-alone Flash documents. Default value: Unchecked.
Mobile documents: Enable smart client: Select this option to enable smart client for mobile documents. Default value: Unchecked.
Additional Options: New line between filter types: Select this to add a line after each sub-expression, to help differentiate between the various filters. Default value: Checked.
Qualification Conditions: Include attribute form names in qualification conditions: Select this checkbox to display attribute form names (such as DESC or ID). Default value: Checked.
Project-Level VLDB settings: Click to configure the analytical engine settings. See Details for All VLDB Properties for more information.
Populate Mobile ID system prompt for non-mobile users: Enter a value to populate as a mobile ID system prompt for non-mobile users. Default value: Unchecked.
Select the Primary Database Instance for the Project: Select the primary database instance for the project from the Select the Primary Database Instance for the Project drop-down list. Default value: Empty.
VLDB Properties: Click to configure VLDB properties. See Details for All VLDB Properties for more information.
Concurrent interactive project session per user: Limits the number of concurrent interactive project sessions for a given user account. When the limit is reached, users cannot access new project sessions. Default value: 20.
Cache Update: Limits the number of cache updates that a user can process at a time. A value of -1 indicates no limit. Default value: -1.
Personal View: Limits the number of personal views that can be created by URL sharing. A value of -1 indicates no limit. Default value: -1.
Never expire the caches: Caches are kept until the events occur that result in the cache no longer being valid. For example, for a daily report, the cache may need to be deleted when the data warehouse is loaded. For weekly reports, you may want to delete the cache and recreate it at the end of each week. In production systems, cache invalidation should be driven by events such as Warehouse Load or End Of Week, not short-term time-based occurrences such as 24 hours. Default value: Checked.
Purge Object Cache: Click Purge Now to delete all object caches. Default value: Unclicked.
Purge element caches: Click Purge Now to delete all element caches. Default value: Unclicked.
Re-run history list and mobile subscriptions against the warehouse: Select this checkbox to create caches or update existing caches when a report or document is executed and that report/document is subscribed to the History List folder or a mobile device. Default value: Unchecked.
Re-run file, email, print, or FTP subscriptions against the warehouse: Select this checkbox to create caches or update existing caches when a report or document is executed and that report/document is subscribed to a file, email, or print device. Default value: Unchecked.
Keep document available for manipulation for History List Subscriptions only: Select this checkbox to retain a document or report for later manipulation that was delivered to the History List folder. Default value: Checked.
Dynamic Sourcing: Make Intelligent Cubes available for Dynamic Sourcing by default: Select this checkbox to enable dynamic sourcing for all Intelligent Cubes in a project. To disable dynamic sourcing as the default behavior for all Intelligent Cubes in a project, clear this checkbox. Default value: Unchecked.
Allow Dynamic Sourcing even if outer join properties are not set: Select this checkbox to make Intelligent Cubes available for dynamic sourcing even if some outer join properties are not set. However, this may cause incorrect data to be shown in reports that use dynamic sourcing. Default value: Unchecked.
Statistics - General
Report job tables/columns accessed: Data warehouse tables and columns accessed by each report.
Statistics - Purge
Select dates: From: The beginning date of the date range for which to purge statistics. Default value: Today minus one year.
Select dates: To: The end date of the date range for which to purge statistics. Default value: Today.
Purge timeout (seconds): The number of seconds to wait for the purge process to finish. If the process does not respond by the end of this time, a timeout for the process occurs, and the system does not continue to take up system resources trying to start the process. Default value: 10.
Purge Now: Starts the purge process. Default value: Unclicked.
Select a security role from the following list: Use this drop-down list to view existing security roles and to assign a security role to a group or to individual users. Default value: Empty.
Selected members: This box displays any user or group that has the selected security role assigned to them. You can assign security roles by using the right arrow to move users and groups from the Available members box on the left to the Selected members box on the right. Default value: Empty.
Fact creation: Click Fact Options to open the Fact Creation options. Default value: Empty.
the reports when the metric value cannot be calculated at the desired level
No data returned: This is the message that will be displayed when the report execution has no data as a result. Default value: Empty.
Page by: Retain page-by selections when you save a report: Select this checkbox if you want to retain page-by selections when saving a report in this project. Default value: Checked.
Language - Metadata
Language - Data
User Language Preferences Manager: Click Modify to specify the metadata and data language for this project by individual user. Default value: Empty.
Metadata language preference for all users in this project: Select the metadata language to be used in this project. Default value: Default.
Enable email notification to administrator for failed email delivery:
Report or Document name: Name of the subscribed report or document. Default value: Checked.
Project name: Project containing the report or document. Default value: Checked.
Delivery method: Email, file, FTP, print, History List, Cache, or Mobile. Default value: Checked.
Subscription name: The name of the subscription. Default value: Checked.
Send notification to this administrator email address when delivery fails: Enter the email address of a system administrator to receive a notification email for the failed delivery. Default value: Empty.
Append the following footer: Select this checkbox and type the text that you want to add as a footer in the email that is sent to email subscription recipients. Default value: Unchecked.
Enable email notification for file delivery: Select this checkbox to send a notification email to the recipient when the subscribed report or document is delivered to the file location. If this checkbox is cleared, all other options in this category are disabled. Default value: Checked.
Send notification to recipient when delivery fails: Select this checkbox to send a notification email to the recipient when the subscribed report or document fails to be delivered on schedule. Default value: Checked.
Recipient name: The MicroStrategy user or contact that subscribed to the delivery. Default value: Checked.
Project name: Project containing the report or document. Default value: Checked.
Delivery method: The delivery method of email, file, FTP, print, History List, Cache, or Mobile. Default value: Checked.
Schedule: The schedule associated with the subscription. Default value: Checked.
Date: Date of the delivery. Default value: Checked.
Time: Time of the delivery. Default value: Checked.
File location: Location of the file. Default value: Checked.
Link to file: Hyperlink to the file. Default value: Checked.
Append the following text: To include a message with each cache delivery notification, select this checkbox and type the message in the field. Default value: Checked.
Send notification to this administrator email address when delivery fails: Enter the email address of a system administrator to receive a notification email for the failed cache delivery. Default value: Empty.
Subscription name: The name of the subscription. Default value: Checked.
Send notification to this administrator email address when delivery fails: Type the email address of a system administrator to receive a notification email for the failed cache delivery. Default value: Empty.
Subscription name: The name of the subscription. Default value: Checked.
Send notification to this administrator email address when delivery fails: Enter the email address of a system administrator to receive a notification email for the failed cache delivery. Default value: Empty.
Report or Document name: Name of the subscribed report or document. Default value: Checked.
Subscription name: The name of the subscription. Default value: Checked.
Send notification to this administrator email address when delivery fails: Type the email address of a system administrator to receive a notification email for the failed cache delivery. Default value: Empty.
Subscription name: The name of the subscription. Default value: Checked.
Send notification to this administrator email address when delivery fails: Type the email address of a system administrator to receive a notification email for the failed cache delivery. Default value: Empty.
Enable real time updates for mobile delivery: Select this checkbox to enable updated report and document data to be automatically sent to Mobile users that are subscribed to the report or document. Default value: Checked.
Enable email notification to administrator for failed cache creation:
Report or Document name: Name of the subscribed report or document. Default value: Checked.
Project name: Project containing the report or document. Default value: Checked.
Delivery method: Delivery method of email, file, FTP, print, History List, Cache, or Mobile. Default value: Checked.
Subscription name: The name of the subscription. Default value: Checked.
Send notification to this administrator email address when delivery fails: Type the email address of a system administrator to receive a notification email for the failed cache delivery. Default value: Empty.
l Assign a high priority to more time-sensitive jobs, and a low priority to jobs
that may use a great deal of system resources, as described in Prioritize
Jobs, page 1086.
l Ensure that report and document designers are aware of the features that
can place an exceptionally heavy load on the system. These features are
listed in detail in Design Reports, page 1104.
The table below details the environment variable settings you can use to adjust automatic memory tuning.
Enabling middle level or high level memory tuning can potentially increase
the memory footprint of Intelligence Server. However, Intelligence Server
has the ability to release the cached memory in Smartheap when
Intelligence Server is about to hit Memory Contract Manager denial.
For detail about how Intelligence Server releases the cached memory in
Smartheap, please refer to the knowledge base in MicroStrategy
Community.
Choices that you must make when designing your system architecture
include:
l How the data warehouse is configured (see How the Data Warehouse can
Affect Performance, page 1027)
l The physical location of machines relative to each other and the amount of
bandwidth between them (see How the Network can Affect Performance,
page 1028)
Platform Considerations
The size and speed of the machines hosting your data warehouse and the
database platform (RDBMS) running your data warehouse both affect the
system's performance. A list of supported RDBMSs can be found in the
Readme. You should have an idea of the amount of data and the number of
users that your system serves, and research which RDBMS can handle that
type of load.
l What kind of lookup, relate, and fact tables will you need?
For more information about data warehouse design and data modeling, see
the Advanced Reporting Help and Project Design Help.
2. TCP/IP or TLS/SSL: XML requests are sent to Intelligence Server. XML report results are incrementally fetched from Intelligence Server.
3. TCP/IP or TLS/SSL: Requests are sent to Intelligence Server. (No incremental fetch is used.)
l Place the Web server machines close to the Intelligence Server machines.
l Place Intelligence Server close to both the data warehouse and the metadata repository.
l If you have a clustered environment with a shared cache file server, place
the shared cache file server close to the Intelligence Server machines.
The ability of the network to quickly transport data between the components
of the system greatly affects its performance. For large result sets, the
highest load or the most traffic typically occurs between the data warehouse
and the Intelligence Servers (indicated by C in the diagram below). The load
between Intelligence Server and Web server is somewhat less (B), followed
by the least load between the Web server and the Web browser (A).
After noting where the highest load is on your network, you can adjust your
network bandwidth or change the placement of system components to
improve the network's performance.
You can tell whether your network configuration has a negative effect on
your system's performance by monitoring how much of your network's
capacity is being used. Use the Windows Performance Monitor for the object
Network Interface, and watch the counter Total bytes/sec as a percent
of your network's bandwidth. If it is consistently greater than 60 percent (for
example), it may indicate that the network is negatively affecting the
system's performance. You may want to use a figure different than 60
percent for your system.
To calculate the network capacity utilization percent, multiply the Total bytes/sec counter value by 8 to convert it to bits per second (1 byte = 8 bits), and then divide the result by your network's total capacity in bits per second.
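For example, on a 100 Mbps (100,000,000 bits per second) network where Total bytes/sec averages 1,500,000, utilization is (1,500,000 * 8) / 100,000,000 = 0.12, or about 12 percent.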
save money when building a business intelligence system, you may not have all the resources that you would like.
You must make certain choices about how to maximize the use of your
system's resources. Because Intelligence Server is the main component of
the MicroStrategy system, it is important that the machines running it have
sufficient resources for your needs. These resources include:
the system's capacity. In Windows, you can monitor the processor usage
with the Windows Performance Monitor.
If you upgrade a machine's CPU, make sure you have the appropriate
license to run Intelligence Server on the faster CPU. For example, if you
upgrade the processor on the Intelligence Server machine from a 2 GHz to a
2.5 GHz processor, you should obtain a new license key from MicroStrategy.
For detailed information about CPU licensing, see CPU Licenses, page 726.
Physical Disk
If the physical disk is used too much on a machine hosting Intelligence
Server, it can indicate a bottleneck in the system's performance. To monitor
physical disk usage in Windows, use the Windows Performance Monitor
counters for the object Physical Disk and the counter % Disk Time. If the
counter is greater than 80 percent on average, it may indicate that the
machine does not have enough memory. This is because when the
machine's physical RAM is full, the operating system starts swapping
memory in and out of the page file on disk. This is not as efficient as using
RAM. Therefore, Intelligence Server's performance may suffer.
By monitoring the disk utilization, you can see if the machine is consistently
swapping at a high level. Defragmenting the physical disk may help lessen
the amount of swapping. If that does not sufficiently lessen the utilization,
consider increasing the amount of physical RAM in the machine. For
information on how Intelligence Server uses memory, see Memory, page
1035.
Another performance counter that you can use to gauge the disk's utilization
is the Current disk queue length, which indicates how many requests are
waiting at a time. MicroStrategy recommends using the % Disk Time and
Current Disk Queue Length counters to monitor the disk utilization.
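As an illustrative sketch, both counters can also be sampled from a script with the Windows typeperf utility; the counter paths below are the standard English names, and the sampling interval and count are arbitrary choices for your environment:

```python
import subprocess

# Sample the two recommended counters five times, once per second.
# typeperf ships with Windows; -si is the sample interval, -sc the sample count.
counters = [
    r"\PhysicalDisk(_Total)\% Disk Time",
    r"\PhysicalDisk(_Total)\Current Disk Queue Length",
]
result = subprocess.run(
    ["typeperf", *counters, "-si", "1", "-sc", "5"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)  # CSV output: timestamp, % Disk Time, queue length
```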
Memory
If the machine hosting Intelligence Server has too little memory, it may run
slowly, or even shut down during memory-intensive operations. You can use
the Windows Performance Monitor to monitor the available memory, and you
can govern Intelligence Server's memory use with the Memory Contract
Manager.
Virtual memory is the amount of physical memory (RAM) plus the Disk Page
file (swap file). It is shared by all processes running on the machine,
including the operating system.
When a machine runs out of virtual memory, processes on the machine are
no longer able to process instructions and eventually the operating system
may shut down. More virtual memory can be obtained by making sure that as
few programs or services as possible are executing on the machine, or by
increasing the amount of physical memory or the size of the page file.
Increasing the amount of virtual memory, and therefore the available private
bytes, by increasing the page file size may have adverse effects on
Intelligence Server performance because of increased swapping.
Private bytes are the bytes of virtual memory that are allocated to a process.
Private bytes are so named because they cannot be shared with other
processes: when a process such as Intelligence Server needs memory, it
allocates an amount of virtual memory for its own use. The private bytes
used by a process can be measured with the Private Bytes counter in the
Windows Performance Monitor.
The governing settings built into Intelligence Server control its demand for
private bytes by limiting the number and scale of operations which it may
perform simultaneously. In most production environments, depletion of
virtual memory through private bytes is not an issue with Intelligence Server.
When Intelligence Server starts up, it uses memory in the following ways:
l It initializes all internal components and loads the static DLLs necessary
for operation. This consumes 25 MB of private bytes and 110 MB of virtual
bytes. You cannot control this memory usage.
l It loads all server definition settings and all configuration objects. This
consumes an additional 10 MB of private bytes and an additional 40 MB of
virtual bytes. This brings the total memory consumption at this point to 35
MB of private bytes and 150 MB of virtual bytes. You cannot control this
memory usage.
l It loads the project schema (needed by the SQL engine component) into
memory. The number and size of projects greatly impacts the amount of
memory used. This consumes an amount of private bytes equal to three
times the schema size and an amount of virtual bytes equal to four times
the schema size. For example, with a schema size of 5 MB, the private
bytes consumption would increase by 15 MB (3 * 5 MB). The virtual bytes
consumption would increase by 20 MB (4 * 5 MB). You can control this
memory usage by limiting the number of projects that load at startup time
(see the sketch after this list).
l Caches: result (report and document) caches, object caches, and element
caches created after Intelligence Server has been started. The maximum
amount of memory that Intelligence Server uses for result caches is
configured at the project level. For more information about caches, see
Chapter 10, Improving Response Time: Caching.
l Intelligent Cubes: any Intelligent Cubes that have been loaded after
Intelligence Server has been started. The maximum amount of memory
used for Intelligent Cubes is configured at the project level. For details,
see Chapter 11, Managing Intelligent Cubes.
l SQL generation
l XML generation
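The following sketch estimates startup memory from the figures listed above, assuming the fixed 25 MB/110 MB and 10 MB/40 MB amounts and the 3x/4x schema multipliers apply to your version:

```python
def startup_memory_estimate(schema_sizes_mb):
    """Estimate Intelligence Server startup memory (MB) from project schema sizes.

    Uses the figures above: 25 MB private / 110 MB virtual for internal
    components, plus 10 MB / 40 MB for server definition and configuration
    objects, plus 3x / 4x the schema size for each project loaded at startup.
    """
    private_mb = 25 + 10 + sum(3 * s for s in schema_sizes_mb)
    virtual_mb = 110 + 40 + sum(4 * s for s in schema_sizes_mb)
    return private_mb, virtual_mb

# Two projects with 5 MB and 8 MB schemas:
print(startup_memory_estimate([5, 8]))  # (74, 202)
```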
The memory load of the requests governed by MCM depends on the amount
of data that is returned from the data warehouse. Therefore, this memory
load cannot be predicted.
graph of the report's results would be based on the same data and, thus,
would be allowed. Therefore, MCM is not involved in graphing requests. If
the report was not returned because it exceeded memory limits, the graphing
request would never be issued.
The Enable single memory allocation governing option lets you specify
how much memory can be reserved for a single Intelligence Server operation
at a time. When this option is enabled, each memory request is compared to
the Maximum single allocation size (MBytes) setting. If the request
exceeds this limit, the request is denied. For example, if the allocation limit
is set to 100 MB and a request is made for 120 MB, the request is denied,
but a request for 90 MB is allowed.
If the Intelligence Server machine has additional software running on it, you
may want to set aside some memory for those processes to use. To reserve
this memory, you can specify the Minimum reserved memory in terms of
either the number of MB or the percent of total system memory. In this case,
the total available memory is calculated as the initial size of the page file
plus the RAM. It is possible that a machine has more virtual memory than
MCM knows about if the maximum page file size is greater than the initial
size.
The Memory request idle time is the longest time MCM remains in memory
request idle mode. If the memory usage has not fallen below the low water
mark by the end of the Memory request idle time, MCM restarts
Intelligence Server. Setting the idle time to -1 causes Intelligence Server to
remain idle until the memory usage falls below the low water mark.
MCM does not submit memory allocations to the memory subsystem (such
as a memory manager) on behalf of a task. Rather, it keeps a record of how
much memory is available and how much memory has been contracted out to
the tasks.
l It is smaller than the high water mark, or the low water mark if Intelligence
Server is in memory request idle mode. These water marks are derived
from the Intelligence Server memory usage and the Maximum use of
virtual address space and Minimum reserved memory settings. For
detailed explanations of the memory water marks, see Memory Water
Marks, page 1043.
The high water mark (HWM) is the highest value that the sum of private
bytes and outstanding memory contracts can reach before triggering
memory request idle mode. The low water mark (LWM) is the value that
Intelligence Server's private byte usage must drop to before MCM exits
memory request idle mode. MCM recalculates the high and low water marks
after every 10 MB of memory requests. The 10 MB value is a built-in
benchmark and cannot be changed.
Two possible values are calculated for the high water mark: one based on
virtual memory, and one based on virtual bytes. For an explanation of the
different types of memory, such as virtual bytes and private bytes, see
Memory, page 1035.
l The high water mark for virtual memory (HWM1 in the diagram above) is
calculated as (Intelligence Server private bytes +
available system memory). It is recalculated for each potential
memory depletion.
l The high water mark for virtual bytes (HWM2 in the diagram above) is
calculated as (Intelligence Server private bytes). It is
calculated the first time the virtual byte usage exceeds the amount
specified in the Maximum use of virtual address space or Minimum
Reserved Memory settings. Because MCM ensures that Intelligence
Server private byte usage cannot increase beyond the initial calculation, it
is not recalculated until after Intelligence Server returns from the memory
request idle state.
The high water mark used by MCM is the lower of these two values. This
accounts for the scenario in which, after the virtual bytes HWM is calculated,
Intelligence Server releases memory but other processes consume more
available memory. This can cause a later calculation of the virtual memory
HWM to be lower than the virtual bytes HWM.
Once the high and low water marks have been established, MCM checks to
see if single memory allocation governing is enabled. If it is, and the request
is for an amount of memory larger than the Maximum single allocation
size setting, the request is denied.
l In memory request idle mode, the maximum request size is based on the
low water mark. The formula is [LWM - (1.05 * (Intelligence Server
Private Bytes) + Outstanding Contracts)].
For normal Intelligence Server operation, if the request is larger than the
maximum request size, MCM denies the request. It then enters memory
request idle mode.
If MCM is already in memory request idle mode and the request is larger
than the maximum request size, MCM denies the request. It then checks
whether the memory request idle time has been exceeded, and if so, it
restarts Intelligence Server. For a detailed explanation of memory request
idle mode, see Memory Request Idle Mode, page 1046.
If the request is smaller than the maximum request size, MCM performs a
final check to account for potential fragmentation of virtual address space.
MCM checks whether its record of the largest free block of memory has been
updated in the last 100 requests, and if not, updates the record with the size
of the current largest free block. It then compares the request against the
largest free block. If the request is more than 80 percent of the largest free
block, the request is denied. Otherwise, the request is granted.
After granting a request, if MCM has been in memory request idle mode, it
returns to normal operation.
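The grant-and-deny flow above can be summarized in a simplified sketch. The function and field names here are illustrative assumptions, not the actual MCM implementation, and the normal-mode maximum request size is modeled as the high water mark minus private bytes and outstanding contracts:

```python
def mcm_decide(request_mb, state):
    """Illustrative MCM grant/deny logic; `state` holds the values described
    above (all sizes in MB)."""
    # 1. Single memory allocation governing, if enabled.
    if state["single_allocation_enabled"] and request_mb > state["max_single_allocation"]:
        return "denied"

    # 2. Compare against the maximum request size. In memory request idle mode
    #    this is LWM - (1.05 * private bytes + outstanding contracts); in normal
    #    operation it is modeled here from the high water mark.
    if state["idle_mode"]:
        max_request = state["lwm"] - (1.05 * state["private_bytes"]
                                      + state["outstanding_contracts"])
    else:
        max_request = state["hwm"] - (state["private_bytes"]
                                      + state["outstanding_contracts"])
    if request_mb > max_request:
        state["idle_mode"] = True
        return "denied"

    # 3. Fragmentation check: deny anything over 80% of the largest free block.
    if request_mb > 0.8 * state["largest_free_block"]:
        return "denied"

    state["idle_mode"] = False  # a granted request ends memory request idle mode
    state["outstanding_contracts"] += request_mb
    return "granted"

state = {
    "single_allocation_enabled": True, "max_single_allocation": 100,
    "idle_mode": False, "hwm": 6_000, "lwm": 5_700,
    "private_bytes": 4_000, "outstanding_contracts": 1_500,
    "largest_free_block": 800,
}
print(mcm_decide(90, state))   # granted
print(mcm_decide(120, state))  # denied by the single-allocation check
```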
When MCM first denies a request, it enters memory request idle mode. In
this mode, MCM denies all requests that would keep Intelligence Server's
private byte usage above the low water mark. MCM remains in memory
request idle mode until one of the following situations occurs:
l Intelligence Server's memory usage drops below the low water mark. In
this case, MCM exits memory request idle mode and resumes normal
operation.
l MCM has been in memory request idle mode for longer than the Memory
request idle time. In this case, MCM restarts Intelligence Server. This
frees up the memory that had been allocated to Intelligence Server tasks,
and avoids memory depletion.
The Memory request idle time limit is not enforced via an internal clock or
scheduler. Instead, after every denied request MCM checks how much time
has passed since the memory request idle mode was triggered. If this time is
more than the memory request idle time limit, Intelligence Server restarts.
Once request B has been denied, Intelligence Server enters the memory
request idle mode. In this mode of operation, it denies all requests that
would push the total memory used above the low water mark.
In the example above, request C falls above the low water mark. Because
Intelligence Server is in memory request idle mode, this request is denied
unless Intelligence Server releases memory from elsewhere, such as other
completed contracts.
Request D is below the low water mark, so it is granted. Once it has been
granted, Intelligence Server switches out of request idle mode and resumes
normal operation.
If Intelligence Server continues receiving requests for memory above the low
water mark before the Memory request idle time is exceeded, MCM shuts
down and restarts Intelligence Server.
In this example, Intelligence Server has increased its private byte usage to
the point that existing contracts are pushed above the high water mark.
Once request A has been denied, Intelligence Server enters the memory
request idle mode. In this mode of operation, all requests that would push
the total memory used above the low water mark are denied.
The low water mark is 95 percent of the high water mark. In this scenario,
the high water mark is the amount of Intelligence Server private bytes at the
time when the memory depletion was first detected. Once the virtual byte
high water mark has been set, it is not recalculated. Thus, for Intelligence
Server to exit memory request idle mode, it must release some of the private
bytes.
Although the virtual bytes high water mark is not recalculated, the virtual
memory high water mark is recalculated after each request. MCM calculates
the low water mark based on the lower of the virtual memory high water
mark and the virtual bytes high water mark. This accounts for the scenario
in which, after the virtual bytes high water mark is calculated, Intelligence
Server releases memory but other processes consume more available
memory. This can cause a later calculation of the virtual memory high water
mark to be lower than the virtual bytes high water mark.
Intelligence Server remains in memory request idle mode until the memory
usage looks like it does at the time of request B. The Intelligence Server
private byte usage has dropped to the point where a request can be made
that is below the low water mark. This request is granted, and MCM exits
memory request idle mode.
For more information about server messages and working set, see
Governing User Resources
When system memory is low, such as when the available total system
memory is under pressure (less than 20% of the machine/container's
physical memory), the Intelligence Server will automatically start unloading
cubes up to 10% of the total physical memory, using the following steps:
1. Release the indexes of cubes that have not been used in the past 2 days.
Administrators can customize this interval via a registry setting. Cube
index governing is enabled by default, and administrators can disable it.
2. If you Enable Cube Governing and the system is still under pressure,
the Intelligence server will start unloading cubes based on the Least
Recently Used (LRU) algorithm. This setting is disabled by default.
If the system does not recover from low memory and enters Memory
Depletion status, the Intelligence Server will follow the same steps above to
unload a larger number of cubes. For more information, see Governing
Intelligence Server Memory Use with Memory Contract Manager.
The unload process will be skipped if cube memory usage is less than 30%
of the total physical memory. Certified cubes will be skipped.
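A rough sketch of the unloading thresholds described above (the 20 percent, 10 percent, and 30 percent figures come from this section; the function and its structure are illustrative):

```python
def plan_cube_unloading(free_mem_mb, total_physical_mb, cube_mem_mb,
                        cube_governing_enabled):
    """Decide whether the cube unloading steps above would be triggered."""
    if free_mem_mb >= 0.20 * total_physical_mb:
        return "system not under pressure; no unloading"
    if cube_mem_mb < 0.30 * total_physical_mb:
        return "cube memory below 30% of physical memory; unload skipped"
    plan = ["release indexes of cubes not used in the past 2 days"]
    if cube_governing_enabled:
        plan.append("unload least recently used, non-certified cubes "
                    "(up to 10% of physical memory)")
    return plan

print(plan_cube_unloading(free_mem_mb=4_000, total_physical_mb=32_000,
                          cube_mem_mb=12_000, cube_governing_enabled=True))
```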
1. On the upper right of any page, click your user name, and then select
Preferences from the drop-down list.
How to Enable or Disable Cube Unloading
Available settings:
l Windows:
l Linux:
If the variable is set correctly, you can see the following entry in the
DSSErrors.log file:
When the Intelligence Server exceeds half of the Memory Request Idle Time
setting, the cube unloading will be triggered. Messages such as the
following will appear in the DSSErrors.log file:
All un-certified cubes for each project will then be unloaded one by one to
release memory. Messages such as the following will appear in the
DSSErrors.log file:
If enough memory is released before reaching the full Memory Request Idle
Time limit, Intelligence Server exits the Memory Contract Manager denial
state. If not, Intelligence Server is still shut down by Memory Contract
Manager as usual. Even if Intelligence Server avoids the shutdown, the
unloading process is performed for all cubes once it is triggered.
l The latest active server message of each session will not be swapped.
This feature is enabled by default. You can Disable Working Set Memory
Governing via the MicroStrategy REST API, if necessary.
3. Click Try Out and modify the request body by providing your username
and password.
4. Click Execute.
c. Set the proper X-MSTR-AuthToken from step 5. You can also get
this by inspecting the browser network XHR requests.
d. Click Execute.
9. Set the proper X-MSTR-AuthToken from step 5. You can get this by
inspecting the browser network XHR requests.
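As a hedged illustration of the token retrieval mentioned in these steps, the following sketch logs in through the standard REST authentication endpoint and reads the X-MSTR-AuthToken response header. The Library URL and credentials are placeholders, and the endpoint for the Working Set Memory Governing call itself is not shown here:

```python
import requests

BASE_URL = "https://your-env.example.com/MicroStrategyLibrary/api"  # placeholder

# Standard authentication: POST /auth/login returns the token in the
# X-MSTR-AuthToken response header (loginMode 1 = standard authentication).
session = requests.Session()
resp = session.post(f"{BASE_URL}/auth/login", json={
    "username": "administrator",   # placeholder credentials
    "password": "your_password",
    "loginMode": 1,
})
resp.raise_for_status()
auth_token = resp.headers["X-MSTR-AuthToken"]

# Pass the token on subsequent calls, as in steps c and 9 above.
headers = {"X-MSTR-AuthToken": auth_token}
print("Authenticated; token acquired.")
```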
When is it Logged?
l The Intelligence server enters MCM denial state for the first time. There
are two cases of this:
1. The Intelligence server has never entered MCM denial state and this
is the first time doing so.
2. The Intelligence server has entered MCM denial state, but recovers
and then works as expected. After some time, it enters MCM denial
state again.
However, if the Intelligence server is already in the MCM denial state, a
memory usage breakdown is not logged for the subsequent, continuous request
rejections.
l The Intelligence server shuts down due to being in MCM denial state for an
extended time.
If the Intelligence server stalls for too long due to MCM denial, it shuts
down. For this shutdown, a memory usage breakdown is logged as well.
The Intelligence server does two things when outputting memory usage
breakdown information in DSSErrors.log.
l Collect system memory usage counters and format them in a string. Key
system memory counters are collected and divided into two parts:
l Collect object cache related memory usage counters and format them in a
string. Key object cache related memory counters are collected from the
Performance Monitor. These counters include:
l Total Size Of Physical Memory Used For Memory Mapped Files (MB)
The following table maps the item names in the memory usage breakdown
information to the names in the MicroStrategy Diagnostics and
Performance Logging Tool.
l Cube Caches In Memory (MB) corresponds to Total Size (in MB) of Cubes Loaded in Memory.
l MMF Virtual Memory Size (MB) corresponds to Total Memory Mapped Files Size (MB).
All values in the MCM memory breakdown information are rounded down to the
nearest integer. For example, a total system physical memory of 15.999 GB
appears as 15 GB in the memory breakdown information.
When the Intelligence server shuts down due to being in MCM denial state
for an extended time
This setting is useful to prevent the system from servicing a Web request if
memory is depleted. If the condition is met, Intelligence Server denies all
requests from a MicroStrategy Web product or a client built with the
MicroStrategy Web API.
l How the concurrent users and user sessions on your system use system
resources just by logging in to the system (see Governing Concurrent
Users, page 1063)
l How memory and CPU are used by active users when they execute jobs,
run reports, and make requests, and how you can govern those requests
(see Governing User Resources, page 1066)
l How user profiles can determine what users are able to do when they are
logged in to the system, and how you can govern those profiles (see
Governing User Profiles, page 1069)
With the User Connection Monitor, you can track the users who are
connected to the system. For details about how to use this system monitor,
see Monitoring Users' Connections to Projects, page 87.
To help control the load that user sessions can put on the system, you can
limit the number of concurrent user sessions allowed for each project and for
Intelligence Server. Also, both Developer and MicroStrategy Web have
session timeouts so that when users forget to log out, the system logs them
out and their sessions do not unnecessarily use up Intelligence Server
resources.
For example, a user logs in, runs a report, then leaves for lunch without
logging out of the system. If Intelligence Server is serving the maximum
number of user sessions and another user attempts to log in to the system,
that user is not allowed to log in. You can set a time limit for the total
duration of a user session, and you can limit how long a session remains
open if it is inactive or not being used. In this case, if you set the inactive
time limit to 15 minutes, the person who left for lunch has their session
ended by Intelligence Server. After that, another user can log in.
Intelligence Server does not end a user session until all the jobs submitted
by that user have completed or timed out. This includes reports that are
waiting for autoprompt answers. For example, if a MicroStrategy Web user
runs a report with an autoprompt and, instead of answering the prompt,
clicks the browser's Back button, an open job is created. If the user then
closes their browser or logs out without canceling the job, the user session
remains open until the open job "Waiting for Autoprompt" times out.
These user session limits are discussed below as they relate to software
features and products.
Monitor, the connections made to the project display the project name in the
Project column. If you sort the list of connections by the Project column, you
can see the total number of user sessions for each project.
You can limit the number of sessions that are allowed for each project. When
the maximum number of user sessions for a project is reached, users cannot
log in to the system. An exception is made for the system administrator, who
can log in to disconnect current users by means of the User Connection
Monitor or increase this governing setting.
To specify this setting, in the Project Configuration Editor for the project,
select the Governing Rules: Default: User sessions category and type the
number in the User sessions per project field.
You can also limit the number of concurrent sessions per user. This can be
useful if one user account, such as "Guest," is used for multiple connections.
To specify this setting, in the Project Configuration Editor for the project,
select the Governing Rules: Default: User sessions category and type the
number in the Concurrent interactive project sessions per user field.
Like all requests, user resources are also governed by the Memory Contract
Manager settings. For more information about Memory Contract Manager,
see Governing Intelligence Server Memory Use with Memory Contract
Manager, page 1039.
History List
The History List is an in-memory message list that references reports that a
user has executed or scheduled. The results are stored as History or
Matching-History caches on Intelligence Server.
The History List can consume much of the system's resources. You can
govern the resources used by old History List messages in the following
ways:
l You can delete messages from the History List with a scheduled
administrative task. For more information and instructions on scheduling
For more information about the History List, including details on History List
governing settings, see Saving Report Results: History List, page 1240.
Working Set
When a user runs a report from MicroStrategy Web or MicroStrategy Library,
the results from the report are added to the working set for that user's
session and stored in memory on Intelligence Server. The working set is a
collection of messages that reference in-memory report instances. A
message is added to the working set when a user executes a report or
retrieves a message from the History List. The purpose of the working set is
to:
Each message in the working set can store two versions of the report
instance in memory: the original version and the result version. The
original version of the report instance is created the first time the report is
executed and is held in memory the entire time a message is part of the
working set. The result version of the report instance is added to the working
set only after the user manipulates the report. Each report manipulation
adds what is called a delta XML to the report message. On each successive
manipulation, a new delta XML is applied to the result version. When the
user clicks the browser's Back button, previous delta XMLs are applied to
the original report instance up to the state that the user is requesting. For
example, if a user has made four manipulations, the report has four delta
XMLs; when the user clicks the Back button, the three previous XMLs are
applied to the original version.
Governing History List and Working Set Memory Use in MicroStrategy Web
You can control the amount of the memory that is used by the History List
and Working set in these ways:
l Limit the number of reports that a user can keep available for manipulation
in a MicroStrategy Web product. This number is defined in the
MicroStrategy Web products' interface in Project defaults: History List
settings. You must select the Manually option for adding messages to the
History List, then specify the number in the field labeled If manually, how
many of the most recently run reports and documents do you want to
keep available for manipulation? The default is 10 and the minimum is
1. The higher the number, the more memory the reports may consume.
l Limit the maximum amount of RAM that all users can use for the working
set. When the limit is reached and new report instances are created, the
least recently used report instance is swapped to disk. To set this, in the
Intelligence Server Configuration Editor, under the Governing Rules:
Default: Working Set category, type the limit in the Maximum RAM for
Working Set cache (MB) field.
l If you set this limit to more memory than the operating system can make
available, Intelligence Server uses a value of 100 MB.
l If you set this limit too low and you do not have enough hard disk space
to handle the amount of disk swapping, reports may fail to execute in
peak usage periods because the reports cannot write to memory or to
disk.
If a user session has an open job, the user session remains open and that
job's report instance is removed from the Working set when the job has
finished or timed out. In this way, jobs can continue executing even after
the user has logged out. This may cause excessive memory usage on
Intelligence Server because the session's working set is held in memory
until the session is closed. For instructions on how to set the timeout
period for jobs, see Limit the Maximum Report Execution Time, page
1077.
affect the system's performance. For example, when users schedule report
executions, this creates user sessions on Intelligence Server, thus placing a
load on it even when the users are not actively logged in.
Subscription-Related Privileges
Allowing users to subscribe to reports to be run later can affect system
performance. You can limit the use of subscriptions by using the Web
Scheduled Reports and Schedule Request privileges.
l To limit the use of pivoting, use the Web Pivot Report and Pivot Report
privileges.
l To limit the use of page-by, use the Web Switch Page-by Elements
privilege.
l To limit the use of sorting, use the Web Sort and Modify Sorting privilege.
Exporting Privileges
Exporting reports can consume large amounts of memory, especially when
reports are exported to Excel with formatting. For more information on how
to limit this memory usage, see Limit the Number of XML Cells, page 1099.
The privileges related to exporting reports are found in the Common
privilege group, and are as follows:
l Export to Excel
l Export to Flash
l Export to HTML
l Export to PDF
l Export to Text
To restrict users from exporting any reports from MicroStrategy Web, use
the Web Export privilege in the Web Reporter privilege group.
The OLAP Services privileges are marked with a * in the list of all privileges
(see the List of Privileges section). For more details about how OLAP
Services uses system resources, see Intelligent Cubes, page 1107.
Governing Requests
Each user session can execute multiple concurrent jobs or requests. This
happens when users run documents that submit multiple child reports at a
time or when they send a report to the History List, then execute another
while the first one is still executing. Users can also log in to the system
multiple times and run reports simultaneously. Again, this may use up a
great deal of the available system resources.
To control the number of jobs that can be running at the same time, you can
set limits on the requests that can be executed. You can limit the requests
per user and per project. You can also choose to exclude reports submitted
as part of a Report Services document from the job limits (see Exclude
Document Datasets from the Job Limits, page 1073).
l The total number of jobs (Limit the Total Number of Jobs, page 1074)
l The number of jobs per project (Limit the Number of Jobs Per Project,
page 1074)
l The number of jobs per user account and per user session (Limit the
Number of Jobs Per User Session and Per User Account, page 1075)
l The number of executing reports or data marts per user account (not
counting element requests, metadata requests, and report manipulations)
(Limit the Number of Executing Jobs Per User and Project, page 1076)
l The amount of time reports can execute (Limit the Maximum Report
Execution Time, page 1077)
l A report's SQL (per pass) including both its size and the time it executes
(Limit a Report's SQL Per Pass, page 1078)
reports embedded in it, Intelligence Server would only count two jobs, the
document and the prompt, towards the job limits described below.
To exclude document dataset jobs from the job limits, in the Intelligence
Server Configuration Editor, select the Governing Rules: Default: General
category, and select the For Intelligence Server job and history list
governing, exclude reports embedded in Report Services documents
from the counts check box. This selection applies to the project-level job
limits as well as to the server-level limits.
To set this limit, in the Intelligence Server Configuration Editor, select the
Governing Rules: Default: General category, and specify the value in the
Maximum number of jobs field. You can also specify a maximum number of
interactive jobs (jobs executed by a direct user request) and scheduled jobs
(jobs executed by a scheduled request). A value of -1 indicates that there is
no limit on the number of jobs that can be executed.
In a clustered system, these settings limit the number of concurrent jobs per
project on each node of the cluster.
To specify this job limit setting, in the Project Configuration Editor for the
project, select the Governing Rules: Default: Jobs category, and specify
the number of concurrent jobs that you want to allow for the project in each
Jobs per project field. You can also specify a maximum number of
interactive jobs (jobs executed by a direct user request) and scheduled jobs
(jobs executed by a scheduled request). A value of -1 indicates that the
number of jobs that can be executed has no limit.
Limit the Number of Jobs Per User Session and Per User
Account
If your users' job requests place a heavy burden on the system, you can limit
the number of open jobs within Intelligence Server, including element
requests, autoprompts, and reports for a user.
l To help control the number of jobs that can run in a project and thus
reduce their impact on system resources, you can limit the number of
concurrent jobs that a user can execute in a user session. For example, if
the Jobs per user session limit is set to four and a user has one session
open for the project, that user can only execute four jobs at a time.
However, the user can bypass this limit by logging in to the project
multiple times. (To prevent this, see the next setting, Jobs per user
account limit.)
To specify this setting, in the Project Configuration Editor for the project,
select the Governing Rules: Jobs category, and type the number in the
Jobs per user session field. A value of -1 indicates that the number of
jobs that can be executed has no limit.
l You can set a limit on the number of concurrent jobs that a user can
execute for each project regardless of the number of user sessions that
user has at the time. For example, if the user has two user sessions and
the Jobs per user session limit is set to four, the user can run eight jobs.
But if this Jobs per user account limit is set to five, that user can execute
only five jobs, regardless of the number of times the user logs in to the
system. Therefore, this limit can prevent users from circumventing the
Jobs per user session limit by logging in multiple times.
To specify this setting, in the Project Configuration Editor for the project,
select the Governing Rules: Jobs category, and type the number of jobs
per user account that you want to allow in the Jobs per user account
field. A value of -1 indicates that the number of jobs that can be executed
has no limit.
These two limits count the number of report, element, and autoprompt job
requests that are executing or waiting to execute. Jobs that have finished,
cached jobs, or jobs that returned in error are not counted toward these
limits. If either limit is reached, any jobs the user submits do not execute and
the user sees an error message.
This limit is called Executing jobs per user. If the limit is reached for the
project, new report requests are placed in the Intelligence Server queue
until other jobs finish. They are then processed in the order in which they
were placed in the queue, which is controlled by the priority map (see
Prioritize Jobs, page 1086).
To specify this limit setting, in the Project Configuration Editor for the
project, select the Governing Rules: Default: Jobs category, and type the
number of concurrent report jobs per user you want to allow in the
Executing jobs per user field. A value of -1 indicates that the number of
jobs that can be executed has no limit.
To set this limit, in the Project Configuration Editor, select the Governing
Rules: Default: Result Sets category, and specify the number of seconds
in the Intelligence Server Elapsed Time (sec) fields. You can set different
limits for ad-hoc reports and scheduled reports.
This limit applies to most operations that are entailed in a job from the time it
is submitted to the time the results are returned to the user. If the job
exceeds the limit, the user sees an error message and cannot view the
report.
The figure below illustrates how job tasks make up the entire report
execution time. In this instance, the time limit includes the time waiting for
the user to complete report prompts. Each step is explained in the table
below.
1 - Waiting for Autoprompt: Resolving prompts
2* - Waiting (in queue): Element request is waiting in job queue for execution
4 - Waiting for Autoprompt: Waiting for user to make prompt selections
*Steps 2 and 3 are for an element request. They are executed as separate
jobs. During steps 2 and 3, the original report job has the status "Waiting for
Autoprompt."
The following tasks are not shown in the example above because they
consume very little time. However, they also count toward the report
execution time.
For more information about the job processing steps, see Processing Jobs,
page 55.
You can also limit the amount of memory that Intelligence Server uses
during report SQL generation. This limit is set for all reports generated on
the server. To set this limit, in the Project Configuration Editor, open the
Governing Rules: Default: Result Sets category, and specify the Memory
consumption during SQL generation. A value of -1 indicates no limit.
To specify this setting, edit the VLDB properties for the database instance or
for a report, expand Governing settings, then select the SQL Time Out
(Per Pass) option.
To specify this, edit the VLDB properties for the database instance, expand
Governing settings, then select the Maximum SQL Size option.
When the log files reach the size limit they will automatically roll over to a
backup file.
l Linux:
[InstallPath]/IntelligenceServer/KafkaConsumer/LogConsumer.properties
3. Click Save.
3. Follow the command line prompts to enter the Kafka consumer settings.
This section discusses the different ways you have of managing job
execution. These include:
You must determine the number of threads that strikes a good balance
between quickly serving each user request while not overloading the
system. The overall goal is to prioritize jobs and provide enough threads so
that jobs that must be processed immediately are processed immediately,
and the remainder of jobs are processed as timely as possible. If your
system has hundreds of concurrent users submitting requests, you must
Once you have the number of threads calculated, you can then set job
priorities and control how many threads are dedicated to serving jobs
meeting certain criteria.
high, medium, and low connections. The sum of these numbers is the total
number of concurrent connection threads allowed between Intelligence
Server and the data warehouse. These settings apply to all projects that use
the selected database instance.
You should have at least one low-priority connection available, because low
priority is the default job priority, and low-priority jobs can use only low-
priority database connection threads. Medium-priority connection threads
are reserved for medium- and high-priority jobs, and high-priority
connection threads are reserved for high-priority jobs only. For more
information about job priority, including instructions on how to set job
priority, see Prioritize Jobs, page 1086.
If you set all connections to zero, jobs are not submitted to the data
warehouse. This may be a useful way for you to test whether scheduled
reports are processed by Intelligence Server properly. Jobs wait in the
queue and are not submitted to the data warehouse until you increase the
connection number, at which point they are then submitted to the data
warehouse. Once the testing is over, you can delete those jobs so they are
never submitted to the data warehouse.
To set these limits, edit the database instance, then modify the database
connection (at the bottom of the Database Instances dialog box), and on the
Database Connections dialog box, select the Advanced tab. A value of 0 or
-1 indicates no limit.
When a user runs a report that executes for a long time on the data
warehouse, the user can cancel the job execution. This may be due to an
error in the report's design, especially if it is in a project in a development
environment, or the user may simply not want to wait any longer. If the
cancel is not successful after 30 seconds, Intelligence Server deletes that
job's database connection thread. The Maximum cancel attempt time
(sec) field controls how long you want Intelligence Server to wait in addition
to the 30 seconds before deleting the thread.
This is the maximum amount of time that a single pass of SQL can execute
on the data warehouse. When the SQL statement or fetch operation begins,
a timer starts counting. If the Maximum query execution time (sec) limit is
reached before the SQL operation is concluded, Intelligence Server cancels
the operation.
This setting is very similar to the SQL time out (per pass) VLDB setting
(see Limit a Report's SQL Per Pass, page 1078). That VLDB setting
overrides the Maximum query execution time (sec) setting. This setting is
made on the database connection and can be used to govern the maximum
query execution time across all projects that use that connection. The VLDB
setting can override this setting for a specific report.
This is the maximum amount of time that Intelligence Server waits while
attempting to connect to the data warehouse. When the connection is
initiated, a timer starts counting. If the Maximum connection attempt time
(sec) limit is reached before the connection is successful, the connection is
canceled and an error message is displayed.
To set these limits, edit the database instance, then modify the database
connection (at the bottom of the Database Instances dialog box), and on the
Database Connections dialog box, select the Advanced tab. For these
settings, a value of -1 indicates no limit, and a value of 0 indicates that the
connection is not cached and is deleted immediately when execution is
complete.
Connection Lifetim e
The Connection lifetime (sec) limit is the maximum amount of time that a
database connection thread remains cached. The Connection lifetime
should be shorter than the data warehouse RDBMS connection time limit.
Otherwise the RDBMS may delete the connection in the middle of a job.
l If the database connection has a status of Cached (it is idle, but available)
when the limit is reached, the connection is deleted.
The Connection idle timeout (sec) limit is the amount of time that an
inactive connection thread remains cached in Intelligence Server until it is
terminated. When a database connection finishes a job and no job is waiting
to use it, the connection becomes cached. If the connection remains cached
for longer than this timeout limit, the database connection thread is then
deleted. This prevents connections from tying up data warehouse and
Intelligence Server resources if they are not needed.
Prioritize Jobs
Job priority defines the order in which jobs are processed. Jobs are usually
executed as first-come, first-served. However, your system probably has
certain jobs that need to be processed before other jobs.
Job priority does not affect the amount of resources a job gets once it is
submitted to the data warehouse. Rather, it determines whether certain jobs
are submitted to the data warehouse before other jobs in the queue.
You can set jobs to be high, medium, or low priority, by one or more of the
following variables:
l Request type: Report requests and element requests can have different
priority (Prioritizing Jobs by Request Type, page 1088).
l User group: Jobs submitted by users in the groups you select are
processed according to the priority that you specify (Prioritizing Jobs by
User Group, page 1089).
l Cost: Jobs with a higher resource cost are processed according to the
priority that you specify (Prioritizing Jobs by Report Cost, page 1089). Job
cost is an arbitrary value you can assign to a report that represents the
resources used to process that job.
These variables allow you to create sophisticated rules for which job
requests are processed first. For example, you could specify that any
element requests are high priority, any requests from your test project are
low priority, and any requests from users in the Developers group are
medium priority.
from the specified application use the specified priority. For example, you
may want report designers to be able to quickly test their reports, so you
may specify that all jobs that are submitted from Developer are processed at
a high priority.
The set of cost groupings must cover all values from 0 to 999. You can then
assign a priority level to each priority group. For example, you can set heavy
reports to low priority, because they are likely to take a long time to process,
and set light reports to high priority, because they do not place much strain
on the system resources.
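For example, one possible grouping is sketched below; the ranges and priority assignments are assumptions, and you define your own groupings in the interface:

```python
def priority_for_cost(cost: int) -> str:
    """Map a report cost (0-999) to a job priority under a sample grouping."""
    if not 0 <= cost <= 999:
        raise ValueError("report cost must be between 0 and 999")
    if cost <= 333:       # "light" reports
        return "high"
    if cost <= 666:       # "medium" reports
        return "medium"
    return "low"          # "heavy" reports, e.g. a cost of 900

print(priority_for_cost(900))  # low
```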
Once you determine the cost groupings, you can set the report cost value on
individual reports. For example, you notice that a report requires
significantly more processing time than most other reports. You can assign it
a report cost of 900 (heavy). In this sample configuration, the report has a
low priority. For factors that may help you determine the cost of a report, see
Results Processing, page 1090.
You set the cost of a report in the report's Properties dialog box, in the
Priority category. You must have system administrator privileges to set the
cost of a report.
3. In the Report Cost field, type the cost of the report. Higher numbers
indicate a report that uses a great deal of system resources. Lower
numbers indicate a less resource-intensive report.
4. Click OK.
Results Processing
When Intelligence Server processes results that are returned from the data
warehouse, several factors determine how much of the machine's resources
are used. These factors include:
l The size of the report (see Limiting the Maximum Report Size, page 1091)
l Whether the report is an Intelligent Cube (see Limiting the Size and
Number of Intelligent Cubes, page 1095)
l Whether the report is imported from an external data source (see Limiting
the Memory Used During Data Fetching, page 1096)
The row size depends on the data types of the attributes and metrics on the
report. Dates are the largest data type. Text strings, such as descriptions
and names, are next in size, unless the description is unusually long, in
which case they may be larger than dates. Numbers, such as IDs, totals, and
metric values, are the smallest.
The easiest way to estimate the amount of memory that a report uses is to
view the size of the cache files using the Cache Monitor in Developer. The
Cache Monitor shows the size of the report results in binary format, which
from testing has proven to be 30 to 50 percent of the actual size of the report
instance in memory. For instructions on how to use the Cache Monitor to
view the size of a cache, see Monitoring Result Caches, page 1217.
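A quick sketch of that estimate, assuming the 30 to 50 percent ratio observed in testing holds for your reports:

```python
def estimate_report_memory_mb(cache_file_mb: float) -> tuple[float, float]:
    """Estimate the in-memory report instance size from the binary cache size,
    using the observation that the cache is roughly 30-50% of the in-memory size."""
    return cache_file_mb / 0.50, cache_file_mb / 0.30

low, high = estimate_report_memory_mb(20)   # a 20 MB cache file
print(f"Report instance likely uses {low:.0f}-{high:.0f} MB in memory")  # 40-67 MB
```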
Intelligence Server allows you to govern the size of a report or request in the
following ways:
Like all requests, large report instances are also governed by the Memory
Contract Manager settings. For more information about Memory Contract
Manager, see Governing Intelligence Server Memory Use with Memory
Contract Manager, page 1039.
Reports with a large number of result rows can take up a great deal of
memory at run time. For example, your data warehouse may contain daily
sales data for thousands of items over several years. If a user attempts to
build a report that lists the revenue from every item for every day in the data
warehouse, the report may use all available Intelligence Server memory.
To set the maximum number of result rows for all reports, data marts, and
Intelligent Cubes in a project, in the Project Configuration Editor, expand the
Governing Rules: Default: Result Sets category, and type the maximum
number in the appropriate Final Result Rows field. You can set different
limits for standard reports, Intelligent Cubes, and data marts.
You can also set the result row limit for a specific report in that report's
VLDB properties. The VLDB properties limit for a report overrides the project
limit. For example, if you set the project limit at 10,000 rows, but set the limit
to 20,000 rows for a specific report that usually returns more than 10,000
rows, users are able to see that report without any errors.
1. In Developer, right-click the report to set the limit for and select Edit.
3. Expand the Governing settings, then select Results Set Row Limit.
4. Make sure the Use default inherited value check box is cleared.
Another way that you can limit the size of a request is to limit the number of
element rows returned at a time. Element rows are returned when a user
accesses a report prompt, and when using the Data Explorer feature in
Developer.
Element rows are incrementally fetched, that is, returned in small batches,
from the data warehouse to Intelligence Server. The size of the increment
depends on the maximum number of element rows specified in the client.
Intelligence Server incrementally fetches four times that number of rows for
each element request.
For more information about element requests, such as how they are created,
how incremental fetch works, and the caches that store the results, see
Element Caches, page 1261.
MicroStrategy recommends that you set the element row limit to be larger
than the maximum number of attribute element rows that you expect users to
browse. For example, if the Product table in the data warehouse has 10,000
rows that users want to browse and the Order table has 200,000 rows that
you do not expect users to browse, you should set this limit to 11,000.
Intelligence Server incrementally fetches the element rows. If the element
rows limit is reached, the user sees an error message and cannot view the
prompt or the data.
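A small sketch tying these settings together; the function and parameter names are illustrative, and the error behavior is simplified:

```python
def element_fetch_plan(client_max_rows: int, project_row_limit: int,
                       table_rows: int):
    """Illustrate incremental element fetching against the project row limit."""
    increment = 4 * client_max_rows      # server fetches 4x the client setting
    if table_rows > project_row_limit:
        return "error: element browsing would exceed the project row limit"
    batches = -(-table_rows // increment)  # ceiling division
    return f"{batches} incremental fetches of up to {increment} rows each"

# Client displays 50 elements at a time; project limit 11,000; Product has 10,000 rows.
print(element_fetch_plan(50, 11_000, 10_000))  # 50 fetches of up to 200 rows each
```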
To set the maximum number of element rows returned for all element
requests in a project in Developer, in the Project Configuration Editor for
that project, expand the Governing Rules: Default: Result Sets category
and type the number in the All element browsing result rows field.
5. Click OK.
To specify this limit for all reports in a project, in the Project Configuration
Editor, select the Governing Rules: Default: Result Sets category and
type the number in the All intermediate result rows box.
You can also set the intermediate row limit for a specific report in that
report's VLDB properties. The VLDB properties limit for the report overrides
the project limit. For example, if you set the project limit at 10,000 rows but
set the limit to 20,000 rows for a specific report that usually returns more
than 10,000 rows, users are able to see that report without any errors.
1. In Developer, right-click the report to set the limit for and select Edit.
4. Make sure the Use default inherited value check box is cleared.
To specify these settings, in the Project Configuration Editor for the project,
select the Cubes: General category and type the new values in the
You can govern the amount of memory used for an individual data fetch in
the Project Configuration Editor. Select the Governing Rules: Default:
Result Sets category, and type the new value in the Memory consumption
during data fetching (MB) field. The default value is -1, indicating no limit.
You can set limits in two areas to control how much information is sent at a
time. The lower of these two settings determines the maximum size of
results that Intelligence Server delivers at a time:
l How many XML cells in a result set can be delivered simultaneously (see
Limit the Number of XML Cells, page 1099)
l The maximum size of a report that can be exported (see Limit Export
Sizes, page 1100 and Limit the Memory Consumption for File Generation,
page 1101)
l The number of XML drill paths in a report (see Limit the Total Number of
XML Drill Paths, page 1102)
Like all requests, displayed and exported reports are also governed by the
Memory Contract Manager settings. For more information about Memory
Contract Manager, see Governing Intelligence Server Memory Use with
Memory Contract Manager, page 1039.
3. Select Project defaults, and then select the Grid display category.
5. Click OK.
If the user sets the number of rows and columns too high, the number
of XML cells limit that is set in Intelligence Server (see Limit the
Number of XML Cells, page 1099) governs the size of the result set.
5. Click OK.
For example, if the XML limit is set at 10,000 and a report has 100,000
metric cells, the report is split into 10 pages. The user clicks the page
number to view the corresponding page.
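For instance, a sketch of the paging arithmetic from that example:

```python
import math

def xml_result_pages(metric_cells: int, xml_cell_limit: int) -> int:
    """Number of pages a result is split into, given the Maximum number of
    XML cells limit."""
    return math.ceil(metric_cells / xml_cell_limit)

print(xml_result_pages(100_000, 10_000))  # 10 pages, as in the example above
```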
how large the batches are. Depending on this XML limit, Intelligence Server
behaves differently:
l If the limit is smaller, it takes a longer time to generate the XML because it
is generated in small batches, which use less memory and system
resources.
l If the limit is larger, it takes a shorter time to generate the XML because it
is generated in fewer, but larger, batches, which use more memory and
system resources.
To set the XML limit, in the Intelligence Server Configuration Editor, select
the Governing Rules: Default: File Generation category, then specify the
Maximum number of XML cells. You must restart Intelligence Server for
the new limit to take effect.
3. Select Project defaults, and then select the Export Reports category.
5. Click OK.
The more formatting an exported report has, the more memory it consumes.
When exporting large reports, the best options are plain text or CSV file
formats because formatting information is not included with the report data.
In contrast, exporting reports as Excel with formatting uses a significant
amount of memory because the exported Excel file contains both the report
data and all the formatting data. For more information about exporting
reports, see Client-Specific Job Processing, page 72.
Because Excel export uses significantly more memory than other export
formats, you can limit the size of reports exported to Excel from Developer
as well as from Web. The default memory consumption limit is 100 MB.
To set the maximum memory consumption limits for exporting reports from
Web, in the Intelligence Server Configuration Editor, select the Governing
Rules: Default: File Generation category, and specify the Maximum
memory consumption for the XML, PDF, Excel, and HTML files.
4. On the Memory tab, in the Export to Excel section, select Use custom
value. In the Maximum RAM Usage (MB) field, specify the maximum
memory consumption.
5. Click OK.
For more information about customizing drill maps, see the Advanced
Reporting Help.
To set this limit, in the Intelligence Server Configuration Editor, select the
Governing Rules: Default: File Generation category, then specify the
Maximum number of XML drill paths. You must restart Intelligence Server
for the new limit to take effect.
The following sections cover the settings you can configure to improve the
performance of your in-memory datasets:
l Increase the maximum size of the datasets that users can import. If users
need to import large datasets into a project, increase the limit on the size
of the dataset that they can import. For steps to increase this limit, see
Governing Intelligent Cube Memory Usage.
l Enable parallel queries for the reports in your project, so that Intelligence
Server can execute database queries in parallel and retrieve more data
from your database. For steps to enable parallel queries, and to define the
maximum number of parallel queries that can be run for every report, see
the Optimizing Queries section.
Design Reports
In addition to the fact that large reports can exert a heavy toll on system
performance, a report's design can also affect it. Some features consume
more of the system's capacity than others when they are used.
Some report design features that can use a great deal of system resources
include:
Analytic Complexity
Calculations that cannot be done with SQL in the data warehouse are
performed by the Analytical Engine in Intelligence Server. These may result
in significant memory use during report execution. Some analytic
calculations (such as AvgDev) require the entire column of the fact table as
input to the calculation. The amount of memory used depends on the type of
calculation and the size of the report that is used. Make sure your report
designers are aware of the potential effects of these calculations.
Subtotals
The amount of memory required to calculate and store subtotals can be
significant. In some cases, the size of the subtotals can surpass the size of
the report result itself.
The size of the subtotals depends on the subtotaling option chosen, along
with the order and the number of unique attributes. The easiest way to
determine the number of subtotals being calculated is to examine the
number of result rows added with the different options selected in the
Advanced Subtotals Options dialog box. To access this dialog box, view the
report in Developer, then point to Data, then Subtotals, and then choose
Advanced. For more detailed information about the different subtotal
options, see the Reports section in the Advanced Reporting Help.
Subtotals can use a great deal of memory if you select the All Subtotals
option in the Pages drop-down list. This option calculates all possible
subtotal calculations at runtime and stores the results in the report instance.
MicroStrategy recommends that you encourage users and report designers
to use less taxing options for calculating subtotals across pages, such as
Selected Subtotals and Grand Total.
Page-By Feature
If designers or users create reports that use the page-by feature, they may
use significant system resources. This is because the entire report is held in
memory even though the user is seeing only a portion of it at a time. To
lessen the potential effect of using page-by with large reports, consider
splitting those reports into multiple reports and eliminating the use of page-
by. For more information about page-by, see the Advanced Reporting Help.
Prompt Complexity
Each attribute element or hierarchy prompt requires an element request to
be executed by Intelligence Server. The number of prompts used and the
number of elements returned from the prompts determine how much load is
placed on Intelligence Server. Report designers should take this into
account when designing prompted reports.
Server, less load is placed on the data warehouse and on the Intelligence
Server machine. For information about document caching, including
instructions, see Result Caches, page 1203.
Intelligent Cubes
With OLAP Services features, your report designers can create Intelligent
Cube reports. These reports allow data to be returned from the data
warehouse, stored in Intelligence Server memory, and then shared among
multiple reports.
You can also restrict the number and size of Intelligent Cubes that can be
loaded at once. For instructions, see Results Processing, page 1090.
These governors are arranged by where in the interface you can find them.
l HTML Generation: Maximum memory consumption for …: The maximum amount of memory (in megabytes) that Intelligence Server … (see Limit the Memory Consumption for …)
l Enable single memory allocation governing: A check box that enables the Maximum single allocation size governor. (see Governing Intelligence Server Memory Use with Memory Contract Manager)
l Maximum single allocation size (MBytes): Prevents Intelligence Server from granting a request that would exceed this limit. (see Governing Intelligence Server Memory Use with Memory Contract Manager)
l Minimum reserved memory (MBytes or %): The amount of system memory, in either MB or a percent, that must be reserved for processes external to Intelligence Server. (see Governing Intelligence Server Memory Use with Memory Contract Manager)
l Maximum use of virtual address space (%): The maximum percent of the process' virtual address space that Intelligence Server can use before entering memory request idle mode. (see Governing Intelligence Server Memory Use with Memory Contract Manager)
l Session Recovery and Deferred Inbox storage: Specifies where the session information is written to disk. The default is .\TmpPool. (see Governing User Resources)
l Enable Web User Session Recovery on Logout: If selected, allows Web users to recover their sessions. (see Governing User Resources)
l Maximum number of elements to display: The maximum number of attribute elements that can be retrieved from the data warehouse at one time. (see Limiting the Number of Elements Displayed and Cached at a Time)
l Executing jobs per user: The maximum number of concurrent jobs a single user account can have executing in the project. If this condition is met, additional jobs are placed in the queue until executing jobs finish. (see Limit the Number of Executing Jobs Per User and Project)
l Concurrent interactive project sessions per user: The maximum number of concurrent sessions per user. (see Governing Concurrent Users)
l Maximum History List subscriptions per user: The maximum number of reports or documents to which a user can be subscribed for delivery to the History List. (see Managing Subscriptions)
l Maximum Cache Update subscriptions per user: The maximum number of reports or documents to which a user can be subscribed for updating caches. (see Managing Subscriptions)
l Maximum Mobile subscriptions per user: The maximum number of reports or documents to which a user can be subscribed for delivery to a Mobile device (MicroStrategy Mobile only). (see Managing Subscriptions)
l Maximum Personal View subscriptions per user: The maximum number of personal views that can be created by URL sharing. A value of -1 indicates no limit. By default, this is set to -1. (see Managing Subscriptions)
l Formatted Documents - Maximum RAM usage (MBytes): The maximum amount of memory reserved for the creation and storage of document caches. This setting should be configured to be at least the size of the largest cache file, or that report will not be cached. (see Configuring Result Cache Settings)
l … caches: … but the setting will not be enforced if set below the default value of 100000.
l Maximum RAM for report cache index (%): This setting determines what percentage of the amount of memory specified in the Maximum RAM usage limits can be used for result cache lookup tables.
l Never expire caches: Determines whether caches automatically expire. (see Configuring Result Cache Settings)
l Do not Apply Automatic Expiration Logic for reports containing dynamic dates: Select this check box for report caches with dynamic dates to expire in the same way as other report caches. (see Configuring Result Cache Settings)
l Server - Maximum RAM usage (MBytes): The amount of memory that Intelligence Server allocates for object caching. (see Summary Table of Object Caching Settings)
l Client - Maximum RAM usage (MBytes): The amount of memory that Developer allocates for object caching. (see Summary Table of Object Caching Settings)
l Client - Maximum RAM usage (MBytes): The amount of memory that Developer allocates for element caching. (see Summary Table of Element Cache Settings)
l Do not create or update matching caches: Prevents subscriptions from creating or updating caches by default. (see Managing Scheduled Administration Tasks)
l Keep document available for manipulation for History List subscriptions only: Retains a document or report that was delivered to the History List so that it remains available for later manipulation. (see Managing Scheduled Administration Tasks)
l Maximum RAM Usage (MBytes): The maximum amount of memory used on Intelligence Server by Intelligent Cubes for this project. (see Defining Memory Limits for Intelligent Cubes)
l … due to indexes …
l Cube growth check frequency (in mins): Defines, in minutes, how often the Intelligent Cube's size is checked and, if necessary, how often the least-used indexes are dropped.
Database Connection
This set of governors can be set by modifying a project source's database
instance, and then modifying either the number of connections on the Job
Prioritization tab or the Database connection. For more details on each
governor, see the page references in the table below.
ODBC Settings
l Number of database connection threads: The total number of High, Medium, and Low database connections that are allowed at a time between Intelligence Server and the data warehouse (set on the database instance's Job Prioritization tab). (see Manage Database Connection Threads)
l Maximum cancel attempt time (sec): The maximum amount of time that the Query Engine waits for a successful attempt to cancel a query. (see Manage Database Connection Threads)
l Maximum query execution time (sec): The maximum amount of time that a single pass of SQL may execute on the data warehouse. (see Manage Database Connection Threads)
l Maximum connection attempt time (sec): The maximum amount of time that Intelligence Server waits to connect to the data warehouse. (see Manage Database Connection Threads)
VLDB Settings
These settings can be changed in the VLDB Properties dialog box for either
reports or the database instance. For information about accessing these
properties, see the page reference for each property in the table below. For
complete details about all VLDB properties, see SQL Generation and Data
Processing: VLDB Properties.
l SQL time out (per pass): The amount of time, in seconds, that any SQL pass can execute on the data warehouse. This can be set at the database instance and report levels. (see Limit a Report's SQL Per Pass)
l Maximum SQL size: The maximum size (in bytes) that the SQL statement can be. This can be set at the database instance level. (see Limit a Report's SQL Per Pass)
For more information, refer to the Narrowcast Server Getting Started Guide.
Personal report execution (PRE) executes a separate report for each set of
users with unique personalization. Users can have reports executed under
the context of the corresponding Intelligence Server user if desired. Using
this option, security profiles defined in Developer are maintained. However, if
the system contains many users who all have unique personalization, this
option can place a large load on Intelligence Server.
Personal page execution (PPE) executes one multi-page report for all users
in a segment and then uses this single report to provide personalized
content (pages) for different users. All users have their reports executed
under the context of the same Intelligence Server user, so individual
security profiles are not maintained. However, the load on Intelligence
Server may be significantly lower than for PRE in some cases.
For more detailed information about these options, refer to the Narrowcast
Server Application Designer Guide, specifically the section on Page
Personalization and Dynamic Subscriptions.
l Timing of Narrowcast Server jobs: You can schedule reports to run at off-
peak hours when Intelligence Server's load from MicroStrategy Web
products and Developer users is lowest.
CLUSTER MULTIPLE MICROSTRATEGY SERVERS
Overview of Clustering
A cluster is a group of two or more servers connected to each other in such a
way that they behave like a single server. Each machine in the cluster is
called a node. Because each machine in the cluster runs the same services
as other machines in the cluster, any machine can stand in for any other
machine in the cluster. This becomes important when one machine goes
down or must be taken out of service for a time. The remaining machines in
the cluster can seamlessly take over the work of the downed machine,
providing users with uninterrupted access to services and data.
l You can cluster Intelligence Servers using the built-in Clustering feature.
A Clustering license allows you to cluster up to eight Intelligence Server
machines. For instructions on how to cluster Intelligence Servers, see
Cluster Intelligence Servers, page 1146.
Benefits of Clustering
Clustering Intelligence Servers provides the following benefits:
Failover Support
Failover support ensures that a business intelligence system remains
available for use if an application or hardware failure occurs. Clustering
provides failover support in two ways:
l Load redistribution: When a node fails, the work for which it is responsible
is directed to another node or set of nodes.
node. Users must log in again to be authenticated on the new node. The
user is prompted to resubmit job requests.
Load Balancing
Load balancing is a strategy aimed at achieving even distribution of user
sessions across Intelligence Servers, so that no single machine is
overwhelmed. This strategy is especially valuable when it is difficult to
predict the number of requests a server will receive. MicroStrategy achieves
four-tier load balancing by incorporating load balancers into the
MicroStrategy Web products.
Distributing projects across nodes also provides project failover support. For
example, one server is hosting project A and another server is hosting
projects B and C. If the first server fails, the other server can host all three
projects to ensure project availability.
Work Fencing
User fences and workload fences allow you to reserve nodes of a cluster for
either users or a project's subscriptions. For more information, see Reserve
Nodes with Work Fences, page 1165.
The node of the cluster that performs all job executions is the node that the
client application, such as Developer, connects to. This is also the node that
can be monitored by an administrator using the monitoring tools.
1. MicroStrategy Web users log into a project and request reports from
their Web browsers.
4. The Intelligence Server nodes receive the requests and process them.
In addition, the nodes communicate with each other to maintain
metadata synchronization and cache accessibility across nodes.
l Result (report and document) caches and Intelligent Cubes: When a query
is submitted by a user, if an Intelligent Cube or a cached report or
document is not available locally, the server will retrieve the cache (if it
exists) from another node in the cluster. For an introduction to report and
document caching, see Result Caches, page 1203. For an introduction to
Intelligent Cubes, see Chapter 11, Managing Intelligent Cubes.
l History Lists: Each user's History List, which is held in memory by each
node in the cluster, contains direct references to the relevant cache files.
Accessing a report through the History List bypasses many of the report
l Result caches and Intelligent Cubes (for details, see Sharing Result
Caches and Intelligent Cubes in a Cluster, page 1138)
l History Lists (for details, see Synchronizing History Lists, page 1142)
To view clustered cache information, such as cache hit counts, use the
Cache Monitor.
Result cache settings are configured per project, and different projects may
use different methods of result cache storage. Different projects may also
use different locations for their cache repositories. However, History List
settings are configured per project source. Therefore, different projects
cannot use different locations for their History List backups.
For result caches and History Lists, you must configure either multiple local
caches or a centralized cache for your cluster. The following sections
describe the caches that are affected by clustering and present the
procedures to configure caches across cluster nodes.
Synchronizing Metadata
Metadata synchronization refers to the process of synchronizing object
caches across all nodes in the cluster.
The node that processed the change automatically notifies all other nodes in
the cluster that the object has changed. The other nodes then delete the old
object cache from memory. The next request for that object that is
processed by another node in the cluster is executed against the metadata,
creating a new object cache on that node.
In addition to server object caches, client object caches are also invalidated
when a change occurs. When a user requests a changed object, the invalid
client cache is not used and the request is processed against the server
object cache. If the server object cache has not been refreshed with the
changed object, the request is executed against the metadata.
Intelligent Cube and result cache sharing among nodes can be configured in
one of the following ways:
l Local caching: Each node hosts its own cache file directory and
Intelligent Cube directory. These directories need to be shared so that
other nodes can access them. For more information, see Local Caching,
page 1140.
If you are using local caching, the cache directory must be shared as
"ClusterCaches" and the Intelligent Cube directory must be shared as
"ClusterCube". These are the share names Intelligence Server looks for
on other nodes to retrieve caches and Intelligent Cubes.
l Centralized caching: All nodes have the cache file directory and
Intelligent Cube directory set to the same network locations, \\<machine
name>\<shared cache folder name> and \\<machine
name>\<shared Intelligent Cube folder name>. For more
information, see Centralized Caching, page 1141.
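For example, on a Linux node the local cache and Intelligent Cube folders could be exposed under the required share names through Samba. The following is only a sketch: the folder paths are hypothetical and the Samba service name varies by distribution.
# Sketch only: expose the local cache and cube folders under the exact share
# names that Intelligence Server looks for on other nodes (paths are examples).
cat >> /etc/samba/smb.conf <<'EOF'
[ClusterCaches]
   path = /opt/mstr/IntelligenceServer/Caches
   read only = no

[ClusterCube]
   path = /opt/mstr/IntelligenceServer/Cube
   read only = no
EOF
systemctl restart smb      # on some distributions the service is smbd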
The following table summarizes the pros and cons of the result cache
configurations:
Local caching pros: Allows faster read and write operations for cache files
created by the local server.
Local caching cons: The local cache files may be temporarily unavailable if an
Intelligence Server is taken off the network.
For steps to configure cache files with either method, see Configure Caches
in a Cluster, page 1147.
Local Caching
In this cache configuration, each node maintains its own local Intelligent
Cubes and local cache files and, thus, maintains its own cache index file.
Each node's caches are accessible by other nodes in the cluster through the
cache index file. This is illustrated in the diagram below.
Centralized Caching
In this cache configuration, all nodes in the cluster use one shared,
centralized location for Intelligent Cubes and one shared, centralized cache
file location. These can be stored on one of the Intelligence Server machines
or on a separate machine dedicated to serving the caches. The Intelligent
Cubes, History List messages, and result caches for all the Intelligence
Server machines in the cluster are written to the same location. In this
option, only one cache index file is maintained. This is illustrated in the
diagram below.
If you are using a database-based History List, History List messages and
their associated caches are stored in the database and automatically
synchronized across all nodes in the cluster.
If you are using a file-based History List, the Intelligence Server Inbox folder
contains the collection of History List messages for all users, which appear
in the History folder in Developer. Inbox synchronization refers to the
process of synchronizing History Lists across all nodes in the cluster, so that
all nodes contain the same History List messages. Inbox synchronization
enables users to view the same set of personal History List messages,
regardless of the cluster node to which they are connected.
MicroStrategy Prerequisites
l You must have purchased an Intelligence Server license that allows
clustering. To determine the license information, use the License Manager
tool and verify that the Clustering feature is available for Intelligence
Server. For more information on using License Manager, see Chapter 5,
Manage Your Licenses.
l The user account under which the Intelligence Server service is running
must have full control of cache and History List folders on all nodes.
Otherwise, Intelligence Server will not be able to create and access cache
and History List files.
l You must have access to the Cluster view of the System Administration
monitor in Developer. Therefore, you must have the Administration
privilege to create a cluster. For details about the Cluster view of the
System Administration monitor, see Manage Your Clustered System, page
1168.
l The computers that will be clustered must have the same intra-cluster
communication settings. To configure these settings, on each Intelligence
Server machine, in Developer, right-click the project source and select
Configure MicroStrategy Intelligence Server. The Intelligence Server
Configuration Editor opens. Under the Server definition category, select
General.
Server Prerequisites
l The machines to be clustered must be running the same version of the
same operating system.
l The required data source names (DSNs) must be created and configured
for Intelligence Server on each machine. MicroStrategy strongly
recommends that you configure both servers to use the same metadata
database, warehouse, port number, and server definition.
l All nodes must join the cluster before you make any changes to any
governing settings, such as in the Intelligence Server Configuration Editor.
l The service user's Regional Options settings must be the same as the
clustered system's Regional Options settings.
ln -s OLDNAME NEWNAME
where OLDNAME is the existing target directory and NEWNAME is the symbolic link that points to it.
Most operations (open, read, write) on the soft link automatically de-
reference it and operate on its target (OLDNAME). Some operations (for
example, removing) work on the link itself (NEWNAME).
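For example, assuming a hypothetical cache directory and link name, the command and its effect might look like the following:
mkdir -p /UNIX2
ln -s /opt/mstr/IntelligenceServer/Caches /UNIX2/Caches   # OLDNAME is the target, NEWNAME is the link
ls -ld /UNIX2/Caches   # shows the link and the target it points to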
l Confirm that each server machine works properly, and then shut down
each machine.
3. Join nodes.
l Local caching: Each node hosts its own cache file directory and
Intelligent Cube directory. These directories need to be shared so that
other nodes can access them. For more information, see Synchronizing
Cached Information Across Nodes in a Cluster, page 1137.
l Centralized caching: All nodes have the cache file directory and
Intelligent Cube directory set to the same network locations. For more
information, see Synchronizing Cached Information Across Nodes in a
Cluster, page 1137. MicroStrategy recommends this method because it is
simpler to configure and maintain.
.\Caches\ServerDefinition
This tells the other clustered nodes to search for caches in the
following path on all machines in the cluster:
4. Click OK.
8. Click OK.
.\Cube\ServerDefinition
This tells the other clustered nodes to search for Intelligent Cubes in the
following path on all machines in the cluster:
12. On each machine in the cluster, open Windows Explorer and navigate
to the cache file folder. The default location is:
14. Select the Shared as option and in Share name, delete the existing
text and enter ClusterCube.
16. Restart the server. If the other cluster servers are running during the
configuration, restart them as well.
or
4. Click OK.
or
7. On the machine that is storing the centralized cache, create the file
folder that will be used as the shared folder. The file folder name must
be identical to the name you specified earlier in Cache file directory.
This is shown as the Shared Folder Name above.
8. Restart the server. If the other cluster servers are running during the
configuration, restart them as well.
If you are using a file-based history list, you can set up history lists to use
multiple local disk backups on each node in the cluster, using a procedure
similar to the procedure above, Configure Cache Sharing Using Multiple
Local Cache Files, page 1148. The history list messages are stored in the
History folder. To locate this folder, in the Intelligence Server Configuration
Editor, expand History settings and select General.
/<machine_name>/ClusterCaches
/<machine_name>/ClusterInBox
You can choose to use either procedure below, depending on whether you
want to use centralized or local caching. For a detailed description and
diagrams of cache synchronization setup, see Synchronizing Cached
Information Across Nodes in a Cluster, page 1137.
l The Linux machines are called UNIX1 and UNIX2. Note that UNIX1 and
UNIX2 are the hostnames, not the IP addresses.
mkdir /UNIX2
mount UNIX2:/<MSTR_HOME_PATH>/IntelligenceServer /UNIX2
mkdir /UNIX1
mount UNIX1:/<MSTR_HOME_PATH>/IntelligenceServer /UNIX1
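If these cross-mounts need to survive a reboot, equivalent entries can also be added to /etc/fstab on each machine. The lines below are only an illustration that reuses the same placeholder path:
# On UNIX1 (/etc/fstab):
UNIX2:/<MSTR_HOME_PATH>/IntelligenceServer  /UNIX2  nfs  defaults  0 0
# On UNIX2 (/etc/fstab):
UNIX1:/<MSTR_HOME_PATH>/IntelligenceServer  /UNIX1  nfs  defaults  0 0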
10. Select Intelligent Cubes > General > Intelligent Cube File directory.
11. Set the path for the cube cache file directory to ./ClusterCube.
12. Disconnect from the project source and restart both Intelligence
servers.
This procedure assumes that the Linux machines are called UNIX1 and
UNIX2.
1. Create the folders for caches on the shared device called UNIX3 as
described in Prerequisites for Clustering Intelligence Servers, page
1143
mkdir /sandbox
mkdir /sandbox
mount UNIX3:/sandbox /sandbox
mkdir /sandbox
mount UNIX3:/sandbox /sandbox
//<SharedLocation>/<InboxFolder>
For caches stored on Linux machines using Samba, set the path to
\\<machine name>\<shared folder name>.
10. Select Intelligent Cubes > General > Intelligent Cube File directory.
For caches stored on Linux machines using Samba, set the path to
\\<machine name>\<shared folder name>.
12. Disconnect from the project source and restart both Intelligence
servers.
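Before pointing the cache and Intelligent Cube directories at a \\<machine name>\<shared folder name> path, you can confirm from a node that the Samba shares are visible. This is only an illustrative check with a hypothetical host name:
smbclient -L //UNIX1 -N    # -N suppresses the password prompt; lists the shares exported by UNIX1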
If you are not using user affinity clustering, MicroStrategy recommends that
you set the cache backup frequency to 0 (zero) to ensure that history list
messages are synchronized correctly between nodes. For more information
about this setting, see Configuring Result Cache Settings, page 1228.
5. Click OK.
The domain user running the remote Intelligence Servers must have full read
and write access to this shared location.
4. Click OK.
6. On the Sharing tab, select the Shared as option. In the Share Name
box, delete the existing text and type ClusterWSRM.
This folder must be shared with the name "ClusterWSRM". This name
is used by Intelligence Server to look for Session Recovery messages
on other nodes.
7. Click OK.
or
4. Click OK.
The domain user running the remote Intelligence Servers must have full read
and write access to this shared location.
Shared network locations should be set up and mounted to the local file system
on each Intelligence Server before configuring for centralized storage.
mkdir /sandbox/WSRMshare
//<machine_name>/sandbox/WSRMshare
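As a rough sketch of preparing such a shared location with NFS (the host name, path, and export options are examples only; restrict client access appropriately in a real deployment):
# On the machine that hosts the centralized storage (for example, UNIX3):
mkdir -p /sandbox/WSRMshare
echo "/sandbox/WSRMshare *(rw,sync)" >> /etc/exports
exportfs -ra
# On each Intelligence Server node, mount it at the same local path:
mkdir -p /sandbox/WSRMshare
mount UNIX3:/sandbox/WSRMshare /sandbox/WSRMshare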
4. Click OK.
7. Click OK.
mkdir /UNIX2
3. Mount the folders from UNIX2 on the UNIX1 machine using the
following command:
mount UNIX2:/Build/BIN/SunOS/UNIX2/ClusterInBox
mkdir /UNIX1
3. Mount the folders from UNIX2 on the UNIX1 machine using the
following command:
mount UNIX2:/Build/BIN/SunOS/UNIX1/ClusterInBox
4. Type the name of the machine running Intelligence Server to which you
are adding this node, or click ... to browse for and select it.
5. Click OK.
3. Use the Cache Manager and view the report details to make sure the
cache is created.
4. Connect to a different node and run the same report. Verify that the
report used the cache created by the first node.
7. Without logging out that user, log on to a different node with the same
user name.
8. Verify that the History List contains the report added in the first node.
You can also perform the same cache and History List tests described above
in Verify from Developer, page 1162.
To distribute projects across the cluster, you manually assign the projects to
specific nodes in the cluster. Once a project has been assigned to a node, it
is available for use.
If you do not assign a project to a node, the project remains unloaded and
users cannot use it. You must then manually load the project for it to be
available. To manually load a project, right-click the project in the Project
Monitor and select Load.
If you are using single instance session logging in Enterprise Manager with
clustered Intelligence Servers, the single instance session logging project
must be loaded onto all the clustered Intelligence Servers. Failure to load
this project on all servers at startup results in a loss of session statistics for
any Intelligence Server onto which the project is not loaded at startup. For
more information, see MicroStrategy Community Knowledge Base article
KB14591. For detailed information about session logging in Enterprise
Manager, see the Enterprise Manager Help .
2. One column is displayed for each node in the cluster that is detected at
the time the Intelligence Server Configuration Editor opens. Select the
corresponding check box to configure the system to load a project on a
node. A selected box at the intersection of a project row and a node
column signifies that the project is to be loaded at startup on that node.
If no check boxes are selected for a project, the project is not loaded on
any node at startup. Likewise, if no check boxes are selected for a
node, no projects are loaded on that node at startup.
or
If the All Servers checkbox is selected for a project, all nodes in the
cluster load this project at startup. All individual node check boxes are
also selected automatically. When you add a new node to the cluster,
any projects set to load on All Servers automatically load on the new
node.
If you select the check boxes for a project to be loaded on every node but
do not select the All Servers check box, the system loads the
project on the selected nodes. When a new node is added to the
cluster, this project is not automatically loaded on that new node.
3. Select Show selected projects only to display only those projects that
have been assigned to be loaded on a node. For display purposes it
filters out projects that are not loaded on any node in the cluster.
5. Click OK.
If you do not see the projects you want to load displayed in the Intelligence
Server Configuration Editor, you must configure Intelligence Server to use a
server definition that points to the metadata containing the project. Use the
MicroStrategy Configuration Wizard to configure this. For details, see the
Installation and Configuration Help.
It is possible that not all projects in the metadata are registered and listed in
the server definition when the Intelligence Server Configuration Editor
opens. This can occur if a project is created or duplicated in a two-tier
(direct connection) project source that points to the same metadata as that
being used by Intelligence Server while it is running. Creating, duplicating,
or deleting a project in two-tier while a server is started against the same
metadata is not recommended.
For example, a user fence could be configured for users who require more
processing power or high availability. Conversely, a workload fence could
be configured to limit the resources for lower priority subscriptions.
Typically, the majority of the nodes in a cluster will not be part of a fence,
making them available for general use. All configured fences are defined in a
single list ordered by precedence. When a request is received, the ordered
list of all fences and their configurations are assessed to determine if the
request matches any fence configuration. A request will be processed by the
first fence found with an available node in the ordered list where the request
matches the fence criteria.
When all nodes in the cluster are part of the fence list, the request will be
sent to a node in the last fence in the ordered list.
l Nodes 1, 2, 3, and 4 are not defined in a fence, meaning that they are
available to process requests that do not meet the criteria of either fence.
Configure Fences
Using Command Manager, you can create, modify, list, and delete fences
without restarting the clustered Intelligence Servers. For more information
about Command Manager, see Chapter 15, Automating Administrative Tasks
with Command Manager.
l You have properly configured an Intelligence Server cluster, and all nodes in
the cluster use the same server definition.
Configure Fences
After your fences have been configured, you will need to enable
MicroStrategy Web to use user fences. The setting is off by default.
3. Click Save.
3. Restart Library.
For detailed information about the effects of the various idle states on a
project, see Setting the Status of a Project, page 48.
3. To see a list of all the projects on a node, click the + sign next to that
node.
You can perform an action on multiple servers or projects at the same time.
To do this, select several projects (CTRL+click), then right-click and select
one of the options.
1. In the Cluster view, right-click the project whose status you want to
change, point to Administer project on node, and select
Idle/Resume.
2. Select the options for the idle mode that you want to set the project to:
l Request Idle (Request Idle): all executing and queued jobs finish
executing, and any newly submitted jobs are rejected.
l Execution Idle (Execution Idle for All Jobs): all executing, queued,
and newly submitted jobs are placed in the queue, to be executed
when the project resumes.
l Full Idle (Request Idle and Execution Idle for All jobs): all
executing and queued jobs are canceled, and any newly submitted
jobs are rejected.
l Partial Idle (Request Idle and Execution Idle for Warehouse jobs):
all executing and queued jobs that do not submit SQL against the
data warehouse are canceled, and any newly submitted jobs are
rejected. Any executing and queued jobs that do not require SQL to
be executed against the data warehouse are executed.
To resume the project from a previously idled state, clear the Request
Idle and Execution Idle check boxes.
3. Click OK.
In the Cluster view, right-click the project whose status you want to change,
point to Administer project on node, and select Load or Unload.
Failover and latency take effect only when a server fails. If a server is
manually shut down, its projects are not automatically transferred to another
server, and are not automatically transferred back to that server when it
restarts.
You can determine several settings that control the time delay, or latency
period, in the following instances:
l After a machine fails, but before its projects are loaded onto a different
machine
l After the failed machine is recovered, but before its original projects are
reloaded
When deciding on these latency period settings, consider how long it takes
an average project in your environment to load on a machine. If your
projects are large, they may take some time to load, which presents a strain
on your system resources. With this consideration in mind, use the following
information to decide on a latency period.
Latency takes effect only when a server fails. If a server is manually shut
down, its projects are not automatically transferred to another machine.
l Setting a higher latency period prevents projects on the failed server from
being loaded onto other servers quickly. This can be a good idea if your
projects are large and you trust that your failed server will recover quickly.
A high latency period provides the failed server more time to come back
online before its projects need to be loaded on another server.
l Setting a lower latency period causes projects from the failed machine to
be loaded relatively quickly onto another server. This is good if it is crucial
that your projects are available to users at all times.
l If you enter -1, the failover process is disabled and projects are not
transferred to another node if there is a machine failure.
Resource Availability
If a node is rendered unavailable because of a forceful shutdown, its cache
resources are still valid to other nodes in the cluster and are accessed if
they are available. If they are not available, new caches are created on other
nodes.
Developer
MicroStrategy Web
If a cluster node shuts down while MicroStrategy Web users are connected,
those jobs return an error message by default. The error message offers the
option to resubmit the job, in which case MicroStrategy Web automatically
reconnects the user to another node.
If multiple nodes in the cluster are restarted at the same time, they may not
all correctly rejoin the cluster. To prevent this, separate the restart times by
several minutes.
The nodes that are still in the cluster but not available are listed in the
Cluster Monitor with a status of Stopped.
l You can manage the caches on a node only if that node is active and
joined to the cluster and if the project containing the caches is loaded on
that node.
l The Cache Monitor's hit count number on a machine reflects only the
number of cache hits that machine initiated on any cache in the cluster. If
a different machine in the cluster hits a cache on the local machine, that
hit is not counted in the local machine's hit count. For more information
about the Cache Monitor, see Monitoring Result Caches, page 1217.
For example, ServerA and ServerB are clustered, and the cluster is
configured to use local caching (see Synchronizing Cached Information
Across Nodes in a Cluster, page 1137). A report is executed on ServerA,
creating a cache there. When the report is executed on ServerB, it hits the
report cache on ServerA. The cache monitor on ServerA does not record
this cache hit, because ServerA's cache monitor displays activity initiated
by ServerA only.
5. Click OK.
After installation, you can see the following services are automatically
started:
Afterwards you will see Kafka log files created in the Kafka installation
folder:
Once you have completed the upgrade process, you need to enable
MicroStrategy Messaging Services. If not, the Intelligence Server continues
to write to the original log.
You can specify more Kafka Producer configuration settings in this command
following the same format.
5. Click Apply.
l Apache Kafka
l Apache Zookeeper
Configure Zookeeper
clientPort=2181
dataDir=C:\\Program Files (x86)\\MicroStrategy\\Messaging
Services\\tmp\\zookeeper
maxClientCnxns=0
initLimit=5
syncLimit=2
server.1=10.27.20.16:2888:3888
server.2=10.27.20.60:2888:3888
server.3=10.15.208.236:2888:3888
4. Create a text file named myid containing the identifying value from the
server parameter name in the zookeeper.properties file.
Configure Kafka
3. Modify the broker.id value to a unique integer from other Kafka servers
(the default value is 0), such as for node 10.27.20.60 we use number 2.
After installation, you can see the following services are automatically
started:
l Apache Kafka (/opt/mstr/MicroStrategy/install/MessagingServices/Kafka/kafka_2.11-0.10.1.0)
l Apache ZooKeeper (/opt/mstr/MicroStrategy/install/MessagingServices/Kafka/kafka_2.11-0.10.1.0)
Afterwards you will see Kafka log files created in the Kafka installation
folder:
/opt/mstr/MicroStrategy/install/MessagingServices/Kafka/tmp/kafka-logs
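As a quick, informal check that the services are running and that the broker is writing logs to this folder, you could run commands similar to the following:
ps -ef | grep -Ei 'kafka|zookeeper' | grep -v grep
ls -l /opt/mstr/MicroStrategy/install/MessagingServices/Kafka/kafka_2.11-0.10.1.0/tmp/kafka-logs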
Once you have completed the upgrade process, you need to enable
MicroStrategy Messaging Services. If not, the Intelligence Server
continues to write to the original log.
5. Click Apply.
/opt/mstr/MicroStrategy/install/MessagingServices/Kafka/kafka_2.11-0.10.1.0
l Apache Kafka
l Apache Zookeeper
Configure Zookeeper
1. Browse to /opt/mstr/MicroStrategy/install/MicroStrategy/MessagingServices/Kafka/kafka_2.11-0.9.0.1/config.
maxClientCnxns=0
initLimit=5
syncLimit=2
server.1=10.27.20.16:2888:3888
server.2=10.27.20.60:2888:3888
server.3=10.15.208.236:2888:3888
3. Go to /opt/mstr/MicroStrategy/install/MicroStrategy/MessagingServices/Kafka/kafka_2.11-0.9.0.1/tmp/zookeeper.
4. Create a file named myid containing the identifying value from the
server parameter name in the zookeeper.properties file.
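For example, on the node that corresponds to server.2 in zookeeper.properties, the myid file could be created as follows. This is a sketch only; use the value that matches this node's server.N entry:
cd /opt/mstr/MicroStrategy/install/MicroStrategy/MessagingServices/Kafka/kafka_2.11-0.9.0.1/tmp/zookeeper
echo 2 > myid
cat myid    # should print 2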
Configure Kafka
1. Browse to /opt/mstr/MicroStrategy/install/MicroStrategy/MessagingServices/Kafka/kafka_2.11-0.9.0.1/config.
#############################
# The id of the broker. This must be set to a unique integer for each
broker.
broker.id=2
/etc/init.d/kafka-zookeeper {start|stop|status}
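Putting these steps together, the broker ID could be set and the services restarted as shown below. This is a sketch that assumes the standard Kafka file name server.properties in the config folder listed above:
cd /opt/mstr/MicroStrategy/install/MicroStrategy/MessagingServices/Kafka/kafka_2.11-0.9.0.1/config
sed -i 's/^broker\.id=.*/broker.id=2/' server.properties   # use a unique integer on each node
/etc/init.d/kafka-zookeeper stop
/etc/init.d/kafka-zookeeper start
/etc/init.d/kafka-zookeeper status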
Preview features are early versions of features and are not intended for use in a
production environment, because their core behavior remains subject to change
between preview and GA. By choosing to expose preview features, you can
access and use them as you would any other functionality. The official
versions of preview features are included in subsequent releases.
You can use this new functionality for the following scenarios:
Windows
2. In Computer\HKEY_LOCAL_
MACHINE\SOFTWARE\WOW6432Node\MicroStrategy\DSS Server
key, add a new DeploymentFeatureFlags key, if it does not already
exist.
Linux
Windows
2. In Computer\HKEY_LOCAL_
MACHINE\SOFTWARE\WOW6432Node\MicroStrategy\DSS Server
key, add a new DeploymentFeatureFlags key, if it does not already
exist.
Linux
3. Delete "CA/AdvancedCubeAvailability"="true".
Additional Notes
This functionality only takes effect when the Intelligence server node(s) that
will stop are in a cluster, meaning one of the following conditions must be
satisfied:
l The node is added to the cluster startup list. For more information on
monitoring clusters, see Server Clustering.
o You can check the Cluster Startup column to confirm if the node is in
the cluster startup list.
The cube file directory should be correctly configured and accessible by all
Intelligence Server nodes in a cluster. For more information, see Configure
Caches in a Cluster.
Stopped nodes will appear in the clustering monitor with a Stopped status.
Duplicate cubes may display in the cube monitor when you republish a cube
that was originally published by a stopped node. In these cases, the new
cube will be used while the old cube will remain visible until the original node
starts. This is a limitation of the current cube architecture.
If the machine selected is part of a cluster, the entire cluster appears on the
Administration page and is labeled as a single cluster. Once MicroStrategy
Web is connected to a cluster, all nodes reference the same project. Load
balancing directs new Web connections to the least loaded node, as
measured by user connections. Once connected to a node, the Web user
runs all MicroStrategy activity on the same node.
If nodes are manually removed from the cluster, projects are treated as
separate in MicroStrategy Web, and the node connected to depends on
which project is selected. However, all projects are still accessing the same
metadata.
Node Failure
MicroStrategy Web users can be automatically connected to another node
when a node fails. To implement automatic load redistribution for these
users, on the Web Administrator page, under Web Server select Security,
and in the Login area select Allow Automatic Login if Session is Lost.
IMPROVING RESPONSE TIME: CACHING
l Result caches: Report and document results that have already been
calculated and processed, that are stored on the Intelligence Server
machine so they can be retrieved more quickly than re-executing the
request against the data warehouse. For more information on these, see
Result Caches, page 1203.
l The History List is a way of saving report results on a per-user basis. For
more information, see Saving Report Results: History List, page 1240.
You specify settings for all cache types except History List under Caching in
the Project Configuration Editor. History List settings are specified in the
Intelligence Server Configuration Editor.
Result, element, and object caches are created and stored for individual
projects; they are not shared across projects. History Lists are created and
stored for individual users.
To make changes to cache settings, you must have the Administer Caches
privilege. In addition, changes to cache settings do not take effect until you
stop and restart Intelligence Server.
For Library Web and the Library Mobile App on Android, the page cache is in
HTML5 format, and one page corresponds to one page cache. For the
Library Mobile App on iOS, the page cache is in Flash format, and one page
corresponds to one page cache.
l No manipulations
A dashboard shortcut page cache is generated if the dashboard contains any
manipulations other than the ones mentioned above.
A bookmark page cache is generated if the page caches are generated for a
bookmark.
On-the-fly:
l Changing a chapter-level filter panel generates on-the-fly page caches for all pages under the same chapter.
l If you click the Reset button from the dashboard title bar or from the
dashboard cover in Library, a page cache will be generated on-the-fly.
When you close a dashboard and return to Library, the Intelligence Server
will generate page caches for the current page and several pages before
and after the current page, if there are no valid page caches. By default, at
most 10 page caches will be generated.
When you log out of Library or the user session times out, the Intelligence
Server will generate page caches for the dashboards that are active in the
server message. For each dashboard, the Intelligence Server will generate
page caches for the last viewed page and several pages before and after the
last viewed page, if there are no valid page caches. By default, at most 10
page caches will be generated.
Server message refers to the last several dashboards that were run from
Library before logging out or the session timing out. The number of server
messages is defined per user session and is restricted by the working set
limit. The working set limit can be configured via MicroStrategy Web
Preferences.
l Case 1: The base dashboard is published to the specified User, but the User hasn't logged in to Library yet. Cache type generated: Base dashboard page caches.
l Case 2: The base dashboard is published to the specified User, and the User only switches pages or resets the dashboard. Cache type generated: Base dashboard page caches.
l Case 3: The base dashboard is published to the specified User, and the User changes the base dashboard. Cache type generated: Dashboard shortcut page caches.
l Case 4: The base dashboard is published to the specified User Group, and the User Group contains User1 and User2. After the base dashboard is published to the User Group, User1 has logged in to their Library, but User2 hasn't logged in to their Library. The cache generation for User1 follows Case #3, and the cache generation for User2 follows Case #2. Cache type generated: for User1, Base dashboard page caches or Dashboard shortcut page caches, depending on the changes made; for User2, Base dashboard page caches.
For MicroStrategy versions prior to 11.0, the cache manager maintains only one
LRU (least recently used) queue for the caches. If the cache pool becomes full,
the least recently used cache is swapped out to free memory.
The following are the predefined priorities for caches generated under
different circumstances:
l Page cache generated on the fly that has manipulations saved as a cache key: Low priority
There is a soft limit of 20% of the cache pool for low-priority caches. This is
to avoid low-priority caches not being generated if there are too many high-
priority caches filling up the cache pool.
When a new cache is going to be generated, if the cache pool is not full, the
cache can be generated successfully. If the cache pool is full, the cache-
swapping logic is triggered. If the low-priority caches already occupy more
than 20% of the cache pool, then they will be deleted until the total low-
priority cache size is equal to or below the limit. If the new cache still needs
more memory, the high-priority caches will be swapped out to free up more
memory, until the new cache can be generated.
Result Caches
A result cache is a cache of an executed report or document that is stored on
Intelligence Server. Result caches are either report caches or document
caches.
Report caches can be created or used for a project only if the Enable report
server caching check box is selected in the Project Configuration Editor
under the Caching: Result Caches: Creation category.
Document caches can be created or used for a project only if the Enable
Document Output Caching in Selected Formats check box is selected in
the Project Configuration Editor under the Caching: Result Caches:
Creation category, and one or more formats are selected.
By default, result caching is enabled at the project level. It can also be set
per report and per document. For example, you can disable caching at the
project level, and enable caching only for specific, frequently used reports.
For more information, see Configuring Result Cache Settings, page 1228.
Caching does not apply to a drill report request because the report is
constructed on the fly.
When a user runs a report (or, from MicroStrategy Web, a document), a job
is submitted to Intelligence Server for processing. If a cache for that request
is not found on the server, a query is submitted to the data warehouse for
processing, and then the results of the report are cached. The next time
someone runs the report or document, the results are returned immediately
without having to wait for the database to process the query.
You can easily check whether an individual report hit a cache by viewing the
report in SQL View. The image below shows the SQL View of a
MicroStrategy Tutorial report, Sales by Region. The fifth line of the SQL
View of this report shows "Cache Used: Yes."
Intelligence Server does not enforce the following three cache count limit
settings if their values are set below the default value of 100000 (100K):
When the registry setting for the free disk space limit is configured to 0,
Intelligence Server falls back to using the cache count limit settings as the
maximum number of document, report, or cube cache entries.
Monitor the disk and stop generating caches for documents, reports, or cubes if the free disk space is less than a specific value
This setting can be configured through the registry. If the registry setting is
0, MicroStrategy falls back to using the three cache count limit settings.
If a value larger than 20 is already set for a certified dashboard, the value is
kept.
At least 20 page caches are generated for any shortcuts or bookmarks for a
certified dashboard.
For example, suppose this setting was customized on an installation and the
customized value is set to X.
l The drive that holds the result caches should always have at least 10% of
its capacity available.
l Be aware of the various ways in which you can tune the caching properties
to improve your system's performance. For a list of these properties, and
an explanation of each, see Configuring Result Cache Settings, page
1228.
Matching Caches
Matching caches are the results of reports and documents that are retained
for later use by the same requests. In general, Matching caches are the type
of result cache used most often by Intelligence Server.
History Caches
History caches are report results saved for future reference in the History
List by a specific user. When a report is executed, an option is available to
the user to send the report to the History List. Selecting this option creates a
History cache to hold the results of that report and a message in the user's
History List pointing to that History cache. The user can later reuse that
report result set by accessing the corresponding message in the History List.
For more information about History Lists, see Saving Report Results: History
List, page 1240.
Matching-History Caches
A Matching-History cache is a Matching cache that is referenced by at least
one History List message. It is a single cache composed of a Matching cache
and a History cache. Properties associated with the Matching caches and
History caches discussed above correspond to the two parts of the
Matching-History caches.
XML Caches
An XML cache is a report cache in XML format that is used for personalized
drill paths. It is created when a report is executed from MicroStrategy Web,
and is available for reuse in Web. It is possible for an XML cache to be
created at the same time as its corresponding Matching cache. XML caches
are automatically removed when the associated report or History cache is
removed.
To disable XML caching, select the Enable Web personalized drill paths
option in the Project definition: Drilling category in the Project
Configuration Editor. Note that this may adversely affect Web performance.
For more information about XML caching, see Controlling Access to Objects:
Permissions, page 89.
Report caches are stored on the disk in a binary file format. Each report
cache has two parts:
Intelligence Server creates two types of index files to identify and locate
report caches:
Document caches are stored on the disk in a binary file format. Each
document cache has two parts:
Intelligence Server creates two types of index files to identify and locate
document caches:
l User ID: To match caches by the global unique identifier (GUID) of the
user requesting the cache, in the Caching: Result Caches: Creation
category in the Project Configuration Editor, select the Create caches per
user check box.
l The Export Option (All or Current Page) and Locale of the document
must match the cache.
l The selector and group-by options used in the document must match those
used in the cache.
l In Excel, the document and cache must both be either enabled or disabled
for use in MicroStrategy Office.
For more information, see the MicroStrategy for Office page in the
Readme and the MicroStrategy for Office Help.
l Reports are heavily prompted, and the answer selections to the prompts
are different each time the reports are run.
l Few users share the same security filters when accessing the reports.
If you disable result caching for a project, you can set exceptions by
enabling caching for specific reports or documents. For more information,
see Configuring Result Cache Settings, page 1228.
4. To disable document caching but not report caching, leave the Enable
report server caching check box selected and clear the Enable
document output caching in selected formats check box.
5. Click OK.
You can also use the Diagnostics Configuration Tool for diagnostic tracing of
result caches (see Diagnostics and Performance Logging Tool, page 1220),
and Command Manager to automatically update information about result
caches (see Command Manager, page 1221).
A cache's hit count is the number of times the cache is used. When a report
is executed (which creates a job) and the results of that report are retrieved
from a cache instead of from the data warehouse, Intelligence Server
increments the cache's hit count. This can happen when a user runs a report
or when the report is run on a schedule for the user. This does not include
the case of a user retrieving a report from the History List (which does not
create a job). Even if that report is cached, it does not increase its hit count.
3. Select the project for which you want to view the caches and click OK.
5. To view additional details about all caches, from the View menu select
Details.
8. Select the project for which you want to view caches and click OK.
9. To display History and XML caches in the Report Cache Monitor, right-
click in the Cache Monitor and select Filter. Select Show caches for
History List messages or Show XML caches and click OK.
You can perform any of the following options after you select one or more
caches and right-click:
l Load from disk: Loads into memory a cache that was previously unloaded
to disk
l Unload to disk: Removes the cache from memory and stores it on disk
For detailed information about these actions, see Managing Result Caches,
page 1221.
Cache Statuses
A result cache's status is displayed in the Report Cache Monitor using one
or more of the following letters:
l E: The cache has been invalidated because its lifetime has elapsed. For information about expired caches, see Managing Result Caches, page 1221.
l D: The cache has been updated in Intelligence Server memory since the last time it was saved to disk.
l F: The cache has been unloaded, and exists as a file on disk instead of in Intelligence Server memory. For information about loading and unloading caches, see Managing Result Caches, page 1221.
Cache Types
Result caches can be of the following types:
l Matching-History: The cache is valid and available for use, and is also referenced in at least one History List message.
l XML (Web only): The cache exists as an XML file and is referenced by the Matching cache. When the corresponding Matching cache is deleted, the XML cache is deleted.
For more information about each type of cache, see Types of Result Caches,
page 1209.
Command Manager
You can also use the following Command Manager scripts to monitor result
caches:
For more information about Command Manager, see Chapter 15, Automating
Administrative Tasks with Command Manager, or the Command Manager
Help (from within Command Manager, press F1).
Typically, reports and documents that are frequently used best qualify for
scheduling. Reports and documents that are not frequently used do not
necessarily need to be scheduled because the resource cost associated with
creating a cache on a schedule might not be worth it. For more information
on scheduling a result cache update, see Scheduling Reports and
Documents: Subscriptions, page 1333.
If a report cache is unloaded to disk and a user requests that report, the
report is then loaded back into memory automatically. You can also manually
load a report cache from the disk into memory.
Caches are saved to disk according to the Backup frequency setting (see
Configuring Result Cache Settings, page 1228). Caches are always saved to
disk regardless of whether they are loaded or unloaded; unloading or
loading a cache affects only the cache's status in Intelligence Server
memory.
l When the data warehouse changes, the existing caches are no longer
valid because the data may be out of date. In this case, future
report/document requests should no longer use the caches.
l When the cache for any of the datasets for a document becomes
invalidated or deleted, the document cache is automatically invalidated.
Caches need to be invalidated when new data is loaded from the data
warehouse so that the outdated cache is not used to fulfill a request. You
can invalidate all caches that rely on a specific table in the data warehouse.
For example, you could invalidate all report/document caches that use the
Sales_Trans table in your data warehouse.
You can update the data warehouse load routine to invoke a MicroStrategy
Command Manager script to invalidate the appropriate caches. This script is
at C:\Program Files (x86)\MicroStrategy\Command
Manager\Outlines\Cache_Outlines\Invalidate_Report_Cache_
Outline. For more information about Command Manager, see Chapter 15,
Automating Administrative Tasks with Command Manager.
To invoke Command Manager from the database server, use one of the
following commands:
l DB2: ! cmdmgr
l Teradata: os cmdmgr
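For example, a warehouse load routine could call the Command Manager command line after editing and saving the outline as a script file. The connection parameters and file names below are placeholders, and this assumes the Command Manager executable is available on the system path:
cmdmgr -n "MyProjectSource" -u Administrator -p "MyPassword" \
       -f invalidate_sales_trans_caches.scp -o invalidate_caches.log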
From the Cache Monitor, you can manually invalidate one or more caches.
3. Select the project for which you want to invalidate a cache and click
OK.
In all cases, cache deletion occurs based on the Cache lookup cleanup
frequency setting. For more information about this setting, see Configuring
Result Cache Settings, page 1228.
You can manually delete caches via the Cache Monitor and Command
Manager, or schedule deletions via the Administration Tasks Scheduling, in
the same way that you manually invalidate caches. For details, see
Invalidating Result Caches, page 1223.
Purging deletes all result caches in a project, including caches that are still
referenced by the History List. Therefore, purge caches only when you are
sure that you no longer need to maintain any of the caches in the project,
and otherwise delete individual caches.
Even after purging caches, reports and documents may continue to display
cached data. This can occur because results may be cached at the object
and element levels, in addition to at the report/document level. To ensure
that a re-executed report or document displays the most recent data, purge
all three caches. For instructions on purging element and object caches, see
Deleting All Element Caches, page 1274 and Deleting Object Caches, page
1283.
Changes to any of the caching settings are in effect only after Intelligence
Server restarts.
You can also configure these settings using the Command Manager script,
Alter_Server_Config_Outline.otl, located at C:\Program Files
(x86)\MicroStrategy\Command Manager\Outlines\Cache_
Outlines.
You can specify the cache backup frequency in the Backup frequency
(minutes) box under the Server Definition: Advanced subcategory in the
Intelligence Server Configuration Editor.
Backing up caches from memory to disk more frequently than necessary can
drain resources.
This setting also defines when Intelligent Cubes are saved to secondary
storage, as described in Storing Intelligent Cubes in Secondary Storage,
page 1313.
The default value for this setting is 0 (zero), which means that the cleanup
takes place only at server shutdown. You can change this value to suit your
needs, but make sure that it does not negatively affect your system
performance. MicroStrategy recommends cleaning the cache lookup at least
daily, but not more frequently than every half hour.
You can also configure these settings using Command Manager scripts
located at C:\Program Files (x86)\MicroStrategy\Command
Manager\Outlines\Cache_Outlines.
Result caches can be created or used for a project only if the Enable report
server caching check box is selected in the Project Configuration Editor in
the Caching: Result Caches: Creation category.
If this option is disabled, all the other options in the Result Caches: Creation
and Result Caches: Maintenance categories are grayed out, except for
Purge Now. By default, report server caching is enabled. For more
information on when report caching is used, see Result Caches, page 1203.
Document caches can be created or used for a project only if the Enable
document output caching in selected formats check box is selected in
the Project Configuration Editor in the Caching: Result Caches: Creation
category. Document caches are created for documents that are executed in
the selected output formats. You can select all or any of the following: PDF,
Excel, HTML, and XML/Flash/HTML5.
report datasets do not provide significant benefits; therefore you may want
to disable this setting.
To disable this setting, clear its check box in the Project Configuration Editor
under the Caching: Result Caches: Creation category.
If you Enable caching for prompted reports and documents (see above),
you can also Record prompt answers for cache monitoring. This causes
all prompt answers to be listed in the Cache Monitor when browsing the
result caches. You can then invalidate specific caches based on prompt
answers, either from the Cache Monitor or with a custom Command Manager
script.
This option is disabled by default. To enable it, select its check box in the
Project Configuration Editor under the Caching: Result Caches: Creation
category.
This option is enabled by default. To disable it, clear its check box in the
Project Configuration Editor under the Caching: Result Caches: Creation
category.
This option is enabled by default. To disable it, clear its check box in the
Project Configuration Editor under the Caching: Result Caches: Creation
category.
If the Create caches per user setting is enabled, different users cannot
share the same result cache. Enable this setting only in situations where
security issues (such as database-level Security Views) require users to
have their own cache files. For more information, see Cache Matching
Algorithm, page 1213.
This option is disabled by default. To enable it, select its check box in the
Project Configuration Editor under the Caching: Result Caches: Creation
category.
This option is disabled by default. To enable it, select its check box in the
Project Configuration Editor under the Caching: Result Caches: Creation
category.
For more information, see Cache Matching Algorithm, page 1213.
This option is disabled by default. To enable it, select its check box in the
Project Configuration Editor under the Caching: Result Caches: Creation
category.
The Cache file directory, in the Project Configuration Editor under the
Caching: Result Caches: Storage category, specifies where all the cache-
related files are stored. By default these files are stored in the Intelligence
Server installation directory, in the \Caches\<Server definition
name> subfolder.
l Local caching: Each node hosts its own cache file directory that needs to
be shared as "ClusterCache" so that other nodes can access it.
ClusterCache is the share name Intelligence Server looks for on other
nodes to retrieve caches.
l Centralized caching: All nodes have the cache file directory set to the
same network location, \\<machine name>\<shared directory
name>. For example, \\My_File_Server\My_Cache_Directory.
If the shared directory is mounted locally (for example, on Linux or macOS), use
the mounted path, such as afp://my_file_server/my_inbox_directory mounted
as /Volumes/my_network_mount.
Make sure this cache directory is writable from the network account under
which Intelligence Server is running. Each Intelligence Server creates its
own subdirectory.
The Cache encryption level on disk drop-down list controls the strength of
the encryption on result caches. Encrypting caches increases security, but
may slow down the system.
By default the caches that are saved to disk are not encrypted. You can
change the encryption level in the Project Configuration Editor under the
Caching: Result Caches: Storage category.
If the machine experiences problems because of high memory use, you may
want to reduce the Maximum RAM usage for the result caches. You need to
find a good balance between allowing sufficient memory for report caches
and freeing up memory for other uses on the machine. The default value is
250 megabytes for reports and datasets, and 256 megabytes for formatted
documents. The maximum value for each of these is 65536 megabytes, or 64
gigabytes.
MicroStrategy recommends that you initially set this value to 10% of the
system RAM if it is a dedicated Intelligence Server machine, that is, if no
other processes are running on it. This setting depends on the following
factors:
This setting should be at least as large as the largest report in the project
that you want to cache. If the amount of RAM available is not large enough
for the largest report cache, that cache will not be used and the report will
always execute against the warehouse. For example, if the largest report
you want to be cached in memory is 20 MB, the maximum RAM usage
needs to be at least 20 MB.
You should monitor the system's performance when you change the
Maximum RAM usage setting. In general, it should not be more than 30% of
the machine's total memory.
For more information about when report caches are moved in and out of
memory, see Location of Result Caches, page 1211.
l The number of users and the number of History List messages they keep.
If the Intelligence Server memory that has been allocated for caches
becomes full, it must swap caches from memory to disk. The RAM swap
multiplier setting, in the Project Configuration Editor under the Caching:
Result Caches: Storage category, controls how much memory is swapped
to disk, relative to the size of the cache being swapped into memory. For
example, if the RAM swap multiplier setting is 2 and the requested cache is
80 kilobytes, 160 kilobytes are swapped from memory to disk.
If the cache memory is full and several concurrent reports are trying to swap
caches in from disk, the swap attempts can fail, causing those reports to
re-execute. This counteracts any gain in efficiency due to caching. In this
case, increasing the RAM swap multiplier setting provides additional free
memory into which those caches can be swapped.
The default value for this parameter is 100%, and the values can range from
10% to 100%.
You can change this setting in the Project Configuration Editor under the
Caching: Result Caches: Storage category.
For large projects, loading caches on startup can take a long time, so you
have the option of loading caches on demand only. However, if caches are
not loaded in advance, there is a small additional delay in response time
when they are first hit. Decide which option best fits your user and system
requirements.
The Never expire caches setting, in the Project Configuration Editor under
the Caching: Result Caches: Maintenance category, causes caches to
never automatically expire. MicroStrategy recommends selecting this check
box, instead of using time-based result cache expiration. For more
information, see Managing Result Caches, page 1221.
All caches that have existed for longer than the Cache duration (Hours) are
automatically expired. This duration is set to 24 hours by default. You can
change the duration in the Project Configuration Editor under the Caching:
Result Caches: Maintenance category.
By default, caches for reports based on filters that use dynamic dates
always expire at midnight of the last day in the dynamic date filter. This
behavior occurs even if the Cache Duration (see above) is set to zero.
For example, a report has a filter based on the dynamic date "Today." If this
report is executed on Monday, the cache for this report expires at midnight
on Monday. This is because a user who executes the report on Tuesday
expects to see data from Tuesday, not the cached data from Monday. For
more information on dynamic date filters, see the Filters section in the
Advanced Reporting Help.
When you create a subscription, you can force the report or document to re-
execute against the warehouse even if a cache is present. You can also
prevent the subscription from creating a new cache.
To change the default behavior for new subscriptions, use the following
check boxes in the Project Configuration Editor, in the Caching: Subscription
Execution category.
l To cause new History List and Mobile subscriptions to execute against the
warehouse by default, select the Re-run History List and Mobile
subscriptions against the warehouse check box.
l To cause new email, file, and print subscriptions to execute against the
warehouse by default, select the Re-run file, email, and print
subscriptions against the warehouse check box.
For a document, you can choose which formats, such as HTML or PDF, are
cached. You can also choose to create a new cache for every page-by,
incremental fetch block, and selector setting.
To use the project-level setting for caching, select the Use default project-
level behavior option. This indicates that the caching settings configured at
the project level in the Project Configuration Editor apply to this specific
report or document as well.
l Keep shortcuts to previously run reports, like the Favorites list when
browsing the Internet.
The History List is displayed at the user level, but is maintained at the
project source level. The History List folder contains messages for all the
projects in which the user is working. The number of messages in this folder
is controlled by the setting Maximum number of messages per user. For
example, if you set this number at 40, and you have 10 messages for Project
A and 15 for Project B, you can have no more than 15 for Project C. When
the maximum number is reached, the oldest message in the current project
is purged automatically to leave room for the new one.
If the current project has no messages but the message limit has been
reached in other projects in the project source, the user may be unable to
run any reports in the current project. In this case the user must log in to
one of the other projects and delete messages from the History list in that
project.
The data contained in these History List messages is stored in the History
List repository, which can be located on Intelligence Server, or in the
database. For more information about the differences between these storage
options, see Configuring History List Data Storage, page 1245.
A History List message provides a snapshot of data at the time the message
is created. Using a different report filter on a History List message does not
cause the message to return different data. To view a report in the History
List with a different report filter, you must re-execute the report.
Each report that is sent to the History List creates a single History List
message. Each document creates a History List message for that document,
plus a message for each dataset report in the document.
You can send report results to the History List manually or automatically.
This operation creates two jobs, one for executing the report (against
the data warehouse) and another for sending the report to History List.
If caching is enabled, the second job remains in the waiting list for the
first job to finish; if caching is not enabled, the second job runs against
the data warehouse again. Therefore, to avoid wasting resources,
MicroStrategy recommends that if caching is not enabled, users not
send the report to History List in the middle of a report execution.
l From Web: While the report is being executed, click Add to History
List on the wait page.
This operation creates only one job because the first one is modified for
the Send to History List request.
l From Web: After the report is executed, select Add to History List from
the Home menu.
Two jobs are created for Developer, and only one is created for Web.
l From MicroStrategy Web: On the reports page, under the name of the
report that you want to send to History List, select Subscriptions, and
then click Add History List subscription on the My Subscriptions
page. Choose a schedule for the report execution. A History List
message is generated automatically whenever the report is executed
based on the schedule.
To use the History List Monitor Filter to filter your History List messages,
right-click the History List folder and select Filter. After you have specified
the filter parameters, click OK. The History List Monitor Filter closes, and
your History List messages will be filtered accordingly.
To use the History List Monitor Filter to purge items from your History List
folder, right-click the History List folder and select Purge. After you have
specified the filter parameters, click Purge. The History List Monitor Filter
closes, and the History List Messages that match the criteria defined in the
History List Monitor Filter are deleted.
For more details about the History List Monitor Filter, click Help.
Multiple messages can point to the same History cache. In this case, the
History cache is deleted after all messages pointing to it have been deleted.
caches, see Types of Result Caches, page 1209. For more information about
storing History List data, see Configuring History List Data Storage, page
1245.
You can use the History List messages to retrieve report results, even when
report caching is disabled.
The History List repository is the location where all History List data is
stored.
There are several different ways that the History List repository can be
configured to store data for the History List. It can be stored in a database,
or in a file on the Intelligence Server machine. Alternately, you can use a
hybrid approach that stores the message information in a database for
improved search results and scalability, and the message results in a file for
performance reasons.
If you are using a database-based History List repository, the caches that
are associated with a History List message are also stored in the History List
database.
You can also configure Intelligence Server to use a hybrid History List
repository. In this configuration the History List message information is
stored in a database, and the cached results are stored in a file. This
approach preserves the scalability of the database-based History List, while
maintaining the improved performance of the file-based History List.
l Once Intelligence Server has been configured to store the History List
cached data in the database, this setting will apply to the entire server
definition.
The storage location for the History List data (the History List repository) must
have been created in the database. For information about creating the History
List repository in the database, see the Installation and Configuration Help.
If you are using a hybrid History List repository, the storage location for the
History List results must have been created and shared on the Intelligence
Server machine. For information about how to configure this location, see
Configuring Intelligence Server to Use a File-Based History List Repository,
page 1248.
Once Intelligence Server has been configured to store the History List
cached data in the database, this setting will apply to the entire server
definition.
5. Click Yes.
You can browse to the file location by clicking the ... (browse) button.
To Confirm that the History List Repository has been Configured Correctly
3. On the left, expand History Settings and select General. If you have
configured Intelligence Server properly, the following message is
displayed in the Repository Type area of the Intelligence Server
Configuration Editor:
l Local caching: Each node hosts its own cache file directory that needs to
be shared as "ClusterCache" so that other nodes can access it.
l Centralized caching: All nodes have the cache file directory set to the
same network location, \\<machine name>\<shared directory
name>. For example, \\My_File_Server\My_Inbox_Directory.
Make sure that the network directory is writable from the network account
under which Intelligence Server is running. Each Intelligence Server
creates its own subdirectory.
For steps to configure Intelligence Server to store cached History List data
in a file-based repository, see the procedure below.
4. Select File based, and type the file location in the History directory
field.
You can browse to the file location by clicking the ... (browse) button.
5. Click OK.
l Message Creation Time: The time the message was created, in the
currently selected time zone.
l Details: More information about the report, including total number of rows,
total number of columns, server name, report path, message ID, report ID,
status, message created, message last updated, start time, finish time,
owner, report description, template, report filter, view filter, template
details, prompt details, and SQL statements.
Each time a user submits a report that contains a prompt, the dialog
requires that they answer the prompt. As a result, multiple listings of the
same report may occur. The differences among these reports can be found
by checking the timestamp and the data contents.
l Folder name: Name of the folder where the original report is saved
l Last update time: The time when the original report was last updated
l Message text: The status message for the History List message
You can see more details of any message by right-clicking it and selecting
Quick View. This opens a new window with the following information:
contain raw data, whether a cache was used, the job ID, and the SQL
produced.
3. Clear the check box for The new scheduled report will overwrite
older versions of itself.
2. Click Next.
l Choose the project that contains the object that you want to archive.
4. Click Next.
l Browse to the report or document that you want to archive. You can
select multiple reports or documents by holding the Ctrl key while
clicking them.
l Click Next when all of the reports or documents that you want to
archive have been added.
6. Select a user group to receive the message for the archived report or
document:
l Browse to the user group that you want to send the archived report to.
You can select multiple reports or documents by holding the Ctrl key
while clicking them.
l Click Next when all of the user groups that you want to receive the
archived report or document have been added.
All members in the user group receive the History List message.
8. Clear the The new scheduled report will overwrite older versions
of itself check box, and click Next.
9. Click Finish.
An administrator can control the size of the History List and thus control
resource usage through the following settings:
l The maximum size of the History List is governed at the project level. Each
user can have a maximum number of History List messages, set by an
administrator. For more details, including instructions, see Controlling the
Maximum Size of the History List, page 1255.
l You can also delete History List messages according to a schedule. For
more details, including instructions, see Scheduling History List Message
Deletion, page 1257.
l If you are using a database-based History List, you can reduce the size of
the database by disabling the History List backup caches. For more
details, including instructions, see Backing up History Caches to the
History List Database, page 1259.
If you are using a database-based History List repository and you have the
proper permissions, you have access to the History List Messages Monitor.
This powerful tool allows you to view and manage History List messages for
all users. For more information, see Monitoring History List Messages, page
1260.
The administrator can also specify whether to create separate messages for
each dataset report that is included in a Report Services document or to
create only a message for the document itself, and whether to create
messages for documents that have been exported in other formats, such as
Excel or PDF. Not creating these additional History List messages can
improve History List performance, at the cost of excluding some data from
the History List. By default, all reports and documents create History List
messages.
1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.
3. Expand the Project Definition category and select the History list
subcategory.
7. In the Maximum Inbox message size (MB) field, type the maximum
message size, in megabytes, for inboxes. Type -1 for no limit.
8. Click OK.
this setting at user logout and deleted if found to be older than the
established lifetime.
When a message is deleted for this reason, any associated History caches
are also deleted. For more information about History caches, see Types of
Result Caches, page 1209.
The default value is -1, which means that messages can stay in the system
indefinitely until the user manually deletes them.
5. Click OK.
The Delete History List messages feature can also be used for one-time
maintenance by using a non-recurring schedule.
l Read
l Unread
l All
9. Click OK.
The Clean History List database feature can also be used for one-time
maintenance by using a non-recurring schedule.
5. Click OK.
If you are concerned about the size of the database used for a database-
based History List, you can disable the use of the database as a long-term
backup for History caches.
5. Click OK.
To use the History List Messages Monitor, your History List repository must
be stored in a database. For more information about configuring the History
List repository, see Configuring History List Data Storage, page 1245.
Element Caches
When a user runs a prompted report containing an attribute element prompt
or a hierarchy prompt, an element request is created. (Additional ways to
create an element request are listed below.) An element request is actually a
SQL statement that is submitted to the data warehouse. Once the element
request is completed, the prompt can be resolved and sent back to the user.
Element caching, set by default, allows for this element to be stored in
memory so it can be retrieved rapidly for subsequent element requests
without triggering new SQL statements against the data warehouse.
For example, if ten users run a report with a prompt to select a region from a
list, when the first user runs the report, a SQL statement executes and
retrieves the region elements from the data warehouse to store in an
element cache. The next nine users see the list of elements return much
faster than the first user because the results are retrieved from the element
cache in memory. If element caching is not enabled, when the next nine
users run the report, nine additional SQL statements will be submitted to the
data warehouse, which puts unnecessary load on the data warehouse.
Element caches are the most-recently used lookup table elements that are
stored in memory on the Intelligence Server or Developer machines so they
can be retrieved more quickly. They are created when users:
l Limiting the Amount of Memory Available for Element Caches, page 1270
When a Developer user triggers an element request, the cache within the
Developer machine's memory is checked first. If it is not there, the
Intelligence Server memory is checked. If it is not there, the results are
retrieved from the data warehouse. Each option is successively slower than
the previous one, for example, the response time could be 1 second for
Developer, 2 seconds for Intelligence Server, and 20 seconds for the data
warehouse.
l Attribute ID
l Attribute version ID
l Element ID
l Search criteria
l Database connection (if the project is configured to check for the cache
key)
l Database login (if the project is configured to check for the cache key)
l Security filter (if the project and attributes are configured to use the cache
key)
In situations where the data warehouse is loaded more than once a day, it
may be desirable to disable element caching.
to 0 (zero).
In the Project Source Manager, select the Memory tab and set the Maximum
RAM usage (KBytes) to 0 (zero).
You might want to perform this operation if you always want to use the
caches on Intelligence Server. This is because when element caches are
purged, only the ones on Intelligence Server are eliminated automatically
while the ones in Developer remain intact. Caches are generally purged
because there are frequent changes in the data warehouse that make the
caches invalid.
2. On the Display tab, clear the Enable element caching check box.
The incremental retrieval limit is four times the incremental fetch size. For
example, if your MicroStrategy Web product is configured to retrieve 50
elements at a time, 200 elements along with the distinct count value are
placed in the element cache. The user must click the next option four times
to introduce another SELECT pass, which retrieves another 200 records in
this example. Because the SELECT COUNT DISTINCT value was cached, it is
not issued again when the next SELECT statement runs.
To optimize the incremental element caching feature (if you have large
element fetch limits or small element cache pool sizes), Intelligence Server
uses only 10 percent of the element cache on any single cache request. For
example, if 200 elements use 20 percent of the cache pool, Intelligence
Server caches only 100 elements, which is 10 percent of the available
memory for element caches.
The number of elements retrieved per element cache can be set for
Developer users at the project level, MicroStrategy Web product users, a
hierarchy, or an attribute. Each is discussed below.
1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.
5. Type the limit for the Maximum number of attribute elements per
block setting in the Incremental Fetch subcategory.
1. Open the Hierarchy editor, right-click the attribute and select Element
Display from the shortcut menu, and then select Limit.
3. In the Element Display category, select the Limit option and type a
number in the box.
The element display limit set for hierarchies and attributes may further
limit the number of elements set in the project properties or Web
preferences. For example, if you set 1,000 for the project, 500 for the
attribute, and 100 for the hierarchy, Intelligence Server retrieves only
100 elements.
To make this more efficient, you can set a VLDB option to control how the
total rows are calculated. The default is to use the SELECT COUNT
DISTINCT. The other option is to have Intelligence Server loop through the
table after the initial SELECT pass, eventually getting to the end of the table
and determining the total number of records. You must decide whether to
have the database or Intelligence Server determine the number of element
records. MicroStrategy recommends that you use Intelligence Server if your
data warehouse is heavily used, or if the SELECT COUNT DISTINCT query
itself adds minutes to the element browsing time.
Either option uses significantly less memory than what is used without
incremental element fetching enabled. Using the count distinct option,
Intelligence Server retrieves four times the incremental element size. Using
the Intelligence Server option retrieves four times the incremental element
size, plus additional resources needed to loop through the table. Compare
this to returning the complete result table (which may be as large as 100,000
elements) and you will see that the memory use is much less.
l To have the data warehouse calculate the count, select Use Count
(Attribute@ID) to calculate total element number (will use count
distinct if necessary) - Default.
Caching Algorithm
The cache behaves as though it contains a collection of blocks of elements.
Each cached element is counted as one object and each cached block of
elements is also counted as an object. As a result, a block of four elements
is counted as five objects: one object for each element and a fifth object for
the block. However, if the same element occurs on several blocks it is
counted only once. This is because the element cache shares elements
between blocks.
The cache uses the "least recently used" algorithm on blocks of elements.
That is, when the cache is full, it discards the blocks of elements that have
been in the cache for the longest time without any requests for the blocks.
Individual elements, which are shared between blocks, are discarded when
all the blocks that contain the elements have been discarded. Finding the
blocks to discard is a relatively expensive operation. Hence, the cache
discards one quarter of its contents each time it reaches the maximum
number of allowed objects.
You can configure the memory setting for both the project and the client
machine in the Cache: Element subcategory in the Project Configuration
Editor. You should consider these factors before configuring it:
l The number of attributes that users browse elements on, for example, in
element prompts, hierarchy prompts, and so on
l Time and cost associated with running element requests on the data
warehouse
For example, if the element request for cities runs quickly (say in 2
seconds), it may not have to exist in the element cache.
1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.
l If you set it to -1, Intelligence Server uses the default value of 1 MB.
The new settings take effect only after Intelligence Server is restarted.
1. In the Project Source Manager, click the Caching tab and within the
Element Cache group of controls, select the Use custom value option.
If you select the Use project default option, the amount of RAM will
be the same as specified in the Client section in the Project
Configuration Editor described above.
2. Specify the RAM (in megabytes) in the Client section in the Maximum
RAM usage (MBytes) field.
This functionality can be enabled for a project and limits the element cache
sharing to only those users with the same security filter. This can also be set
for attributes. That is, if you do not limit attribute elements with security
filters for a project, you can enable it for certain attributes. For example, if
you have Item information in the data warehouse available to external
suppliers, you could limit the attributes in the Product hierarchy with a
security filter. This is done by editing each attribute. This way, suppliers can
see their products, but not other suppliers' products. Element caches not
related to the Product hierarchy, such as Time and Geography, are still
shared among users.
1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.
You must update the schema before changes to this setting take effect
(from the Schema menu, select Update Schema).
1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.
4. Select the Create element caches per connection map check box.
The new setting takes effect only after the project is reloaded or after
Intelligence Server is restarted.
Users may connect to the data warehouse using their linked warehouse
logins, as described below.
same database login are able to share the element caches. Before you
enable this feature, you must configure two items.
If both of these properties are not set, the users will use their connection
maps to connect to the database.
1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.
4. Select the Create element caches per passthrough login check box.
The new setting takes effect only after the project is reloaded or after
Intelligence Server is restarted.
If you are using a clustered Intelligence Server setup, to purge the element
cache for a project, you must purge the cache from each node of the cluster
individually.
Even after purging element caches, reports and documents may continue to
display cached data. This can occur because results may be cached at the
report/document and object levels in addition to at the element level. To
ensure that a re-executed report or document displays the most recent data,
purge all three caches. For instructions on purging result and object caches,
see Managing Result Caches, page 1221 and Deleting Object Caches, page
1283.
1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.
l Maximum number of elements to display: see Limiting the Number of
Elements Displayed and Cached at a Time, page 1265
l Attribute element number count method: see Limiting the Number of
Elements Displayed and Cached at a Time, page 1265
l Element cache - Max RAM usage (MBytes), Project: see Limiting the
Amount of Memory Available for Project Element Caches, page 1270
l Element cache - Max RAM usage (MBytes), Developer: see Limiting the
Amount of Memory Available for Developer Element Caches, page 1270
l Apply security filter to element browsing: see Limiting Which Attribute
Elements a User can See, page 1272
l Purge element caches: see Deleting All Element Caches, page 1274
Object Caches
When you or any users browse an object definition (attribute, metric, and so
on), you create what is called an object cache. An object cache is a recently
used object definition stored in memory on Developer and Intelligence
Server. You browse an object definition when you open the editor for that
object. You can create object caches for applications.
For example, when a user opens the Report Editor for a report, the collection
of attributes, metrics, and other user objects displayed in the Report Editor
compose the report's definition. If no object cache for the report exists in
memory on Developer or Intelligence Server, the object request is sent to
the metadata for processing.
The report object definition retrieved from the metadata and displayed to the
user in the Report Editor is deposited into an object cache in memory on
Intelligence Server and also on the Developer of the user who submitted the
request. As with element caching, any time the object definition can be
returned from memory in either the Developer or Intelligence Server
machine, it is faster than retrieving it from the metadata database.
So when a Developer user triggers an object request, the cache within the
Developer machine's memory is checked first. If it is not there, the
Intelligence Server memory is checked. If the cache is not there either, the
results are retrieved from the metadata database. Each option is
successively slower than the previous. If a MicroStrategy Web product user
triggers an object request, only the Intelligence Server cache is checked
before getting the results from the metadata database.
l Object ID
l Object version ID
l Project ID
l Environment-level
l Project-level
For the Project-level setting, the Server: Maximum RAM Usage value can
range from a minimum of 100 MB to a maximum of 64 GB; the default value
is 1024 MB.
For a project that has a large schema object, project loading is slow if the
maximum object cache memory setting is not large enough. This issue is
recorded in the DSSErrors.log file. See KB13390 for more information.
Set the Server Environment-Level Maximum RAM Usage for the Object
Cache in Workstation
3. In the left pane, click All Settings and search for cache.
To Set the Server Project-Level Maximum RAM Usage for the Object Cache
in Workstation
3. Right-click the project you want to apply the settings to and choose
Properties.
4. In the left pane, click All Settings and search for cache.
5. Update the Maximum memory usage for Server Object Cache (MB).
1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.
4. Specify the RAM (in megabytes) in the Server section in the Maximum
RAM usage (MBytes) box.
5. Specify the RAM (in megabytes) in the Client section in the Maximum
RAM usage (MBytes) box.
The new settings take effect only after Intelligence Server is restarted.
On the Developer client machine, you maintain object caching by using the
Client: Maximum RAM usage (MBytes) setting in the Caching: Auxiliary
Caches (Objects) subcategory in the Project Configuration Editor. This is a
Developer client-specific setting.
To Set the Client Maximum RAM Available for Object Caches for a
Developer Machine
1. In the Project Source Manager, click the Caching tab and in the Object
Cache group of controls, select the Use custom value option.
If you select the Use project default option, the amount of RAM is the
same as specified in the Client section in the Project Configuration
Editor described above.
Even after purging object caches, reports and documents may continue to
display cached data. This can occur because results may be cached at the
report/document and element levels, in addition to at the object level. To
ensure that a re-executed report or document displays the most recent data,
purge all three caches. For instructions on purging result and element
caches, see Managing Result Caches, page 1221 and Deleting All Element
Caches, page 1274.
1. In Developer, log into a project. You must log in with a user account
that has administrative privileges.
Configuration objects are cached at the server level. You can choose to
delete these object caches as well.
You cannot automatically schedule the purging of server object caches from
within Developer. However, you can compose a Command Manager script to
purge server object caches and schedule that script to execute at certain
times. For a description of this process, see MicroStrategy Tech Note
TN12270. For more information about Command Manager, see Chapter 15,
Automating Administrative Tasks with Command Manager.
l Object cache - Max RAM usage (MBytes), Developer: see Limiting the
Amount of Memory Available for Developer Object Caches, page 1277
The log counters will be saved in the performance monitor log file:
DSSPerformanceMonitor<IServer Process ID>.csv
MANAGING INTELLIGENT CUBES
You can return data from your data warehouse and save it to Intelligence
Server memory, rather than directly displaying the results in a report. This
data can then be shared as a single in-memory copy, among many different
reports created by multiple users. The reports created from the shared sets
of data are executed against the in-memory copy, also known as an
Intelligent Cube, rather than having to be executed against a data
warehouse.
Once an Intelligent Cube has been published, you can manage it from the
Intelligent Cube Monitor. You can view details about your Intelligent Cubes
such as last update time, hit count, memory size, and so on.
You can view the following information in the Intelligent Cube Monitor:
l Last Update Time: The time when the Intelligent Cube was last updated
against the data warehouse.
l Last Update Job: The job number that most recently updated the
Intelligent Cube against the data warehouse. You can use the Job Monitor
to view information on a given job.
l Creation Time: The time when the Intelligent Cube was first published to
Intelligence Server.
l Hit Count: The number of times the Intelligent Cube has been used by
reports since it was last loaded into Intelligence Server's memory. You can
reset the Hit Count to zero by unloading the Intelligent Cube from
Intelligence Server's memory.
l Historic Hit Count: The total number of times the Intelligent Cube has
been used by reports. You can reset the Historic Hit Count to zero by
deleting the Intelligent Cube's cache, and then republishing the Intelligent
Cube.
l File Name: The file location where the Intelligent Cube is saved to the
machine's secondary storage.
l Cube Instance ID: The ID for the current published version of the
Intelligent Cube.
l Data Language: The language used for the Intelligent Cube. This is
helpful if the Intelligent Cube is used in an internationalized environment
that supports multiple languages.
l Total number of rows: The number of rows of data that the Intelligent
Cube contains. To view this field, the Intelligent Cube must be published
at least once.
You can also view Intelligent Cube information for a specific Intelligent
Cube, by double-clicking that Intelligent Cube in the Intelligent Cube
Monitor. This opens a Quick View of the Intelligent Cube information and
usage statistics.
Save to disk (available when the Intelligent Cube status is Loaded): If you
have defined the backup frequency as zero minutes, Intelligent Cubes are
automatically saved to secondary storage, as described in Storing Intelligent
Cubes in Secondary Storage, page 1313.
Additional statuses such as Processing and Load Pending are also used by
the Intelligent Cube Monitor. These statuses denote that certain tasks are
currently being completed.
Additionally, if you have defined the backup frequency as greater than zero
minutes (as described in Storing Intelligent Cubes in Secondary Storage,
page 1313), the following additional statuses can be encountered:
In both scenarios listed above, the data and monitoring information saved in
secondary storage for an Intelligent Cube is updated based on the backup
frequency. You can also manually save an Intelligent Cube to secondary
storage using the Save to disk action listed in the table above, or by using
the steps described in Storing Intelligent Cubes in Secondary Storage, page
1313.
Using the Intelligent Cube Monitor you can load an Intelligent Cube into
Intelligence Server memory, or unload it to secondary storage, such as a
disk drive.
If an Intelligent Cube is large, there may be some delay in displaying report results while the
Intelligent Cube is being loaded into memory. For more suggestions on how
to manage Intelligence Server's memory usage, see Chapter 8, Tune Your
System for the Best Performance.
The steps below show you how to define whether publishing Intelligent
Cubes loads them into Intelligence Server memory. You can enable this
setting at the project level, or for individual Intelligent Cubes.
4. You can select or clear the Load Intelligent Cubes into Intelligence
Server memory upon publication check box:
5. Click OK.
6. For any changes to take effect, you must restart Intelligence Server.
For clustered environments, separate the restart times for each
Intelligence Server by a few minutes.
2. In the Folder List, browse to the folder that contains the Intelligent
Cube you want to configure.
l Select this check box to load the Intelligent Cube into Intelligence
Server memory when the Intelligent Cube is published. Intelligent
Cubes must be loaded into Intelligence Server memory to allow
reports to access and analyze their data.
8. Click OK.
An Intelligent Cube memory limit defines the maximum amount of RAM of the
Intelligence Server machine that can be used to store loaded Intelligent
Cubes. This data is allocated separately from the memory used for other
Intelligence Server processes.
For example, you define a memory limit on Intelligent Cubes to be 512 MB.
You have 300 MB of Intelligent Cube data loaded into Intelligence Server
memory, and normal processing of other Intelligence Server tasks uses 100
MB of memory. In this scenario, Intelligence Server uses 400 MB of the RAM
available on the Intelligence Server machine. This scenario demonstrates
that to determine a memory limit for Intelligent Cubes, you must consider the
following factors:
l The Maximum RAM usage (Mbytes) memory limit can be defined per
project. If you have multiple projects that are hosted from the same
Intelligence Server, each project may store Intelligent Cube data up to its
memory limit.
l For example, you have three projects and you set their Maximum RAM
usage (Mbytes) limits to 1 GB, 1 GB, and 2 GB. This means that 4 GB of
Intelligent Cube data could be stored in RAM on the Intelligence Server
machine if all projects reach their memory limits.
l The size of the Intelligent Cubes that are being published and loaded into
memory. The process of publishing an Intelligent Cube can require
memory resources in the area of two to four times greater than the
Intelligent Cube's size. This can affect performance of your Intelligence
Server and the ability to publish the Intelligent Cube. For information on
how to plan for these memory requirements, see the next section.
l To help reduce Intelligent Cube memory size, review the best practices
described in Best Practices for Reducing Intelligent Cube Memory Size,
page 1299.
You can help keep the Intelligent Cube publishing process within RAM alone
by defining memory limits for Intelligent Cubes that reflect your Intelligence
Server host's available RAM, and by scheduling the publishing of Intelligent
Cubes for times when RAM usage is low. For information on scheduling
Intelligent Cube publishing, see the In-memory Analytics Help.
To determine memory limits for Intelligent Cubes, you should review the
considerations listed in Determining Memory Limits for Intelligent Cubes,
page 1297. You must also account for the potential peak in memory usage
when publishing an Intelligent Cube, which can be two to four times the size
of an Intelligent Cube.
Once the Intelligent Cube is published, only the 1 GB for the Intelligent Cube
(plus some space for indexing information) remains in use in RAM; the
remaining 0.6 GB of RAM and 0.9 GB of swap space used during the
publishing of the Intelligent Cube are returned to the system.
While the Intelligent Cube can be published successfully, using the swap
space could affect the performance of the Intelligence Server machine.
Once the Intelligent Cube is published, only the 0.5 GB for the Intelligent
Cube (plus some space for indexing information) remains in use in RAM; the
remaining RAM used during the publishing of the Intelligent Cube is returned
to the system.
Be aware that as more Intelligent Cube data is stored in RAM, less RAM is
available to process publishing an Intelligent Cube. This along with the peak
memory usage of publishing an Intelligent Cube and the hardware resources
of your Intelligence Server host machine should all be considered when
defining memory limits for Intelligent Cube storage per project.
l You can use the amount of data required for all Intelligent Cubes to limit
the amount of Intelligent Cube data that is marked as "Loaded" at one time
for a project. The default is 256 megabytes.
The total size of the loaded Intelligent Cubes for a project is calculated
and compared to the limit you have defined. If an attempt to load an
Intelligent Cube is made that will exceed this limit, one or more Intelligent
Cubes will be offloaded from the Intelligence Server memory before the
new Intelligent Cube is loaded into memory.
When Memory Mapped Files for Intelligent Cubes are enabled, only the
portion of the cube in memory will be governed by this counter (excluding
the portion of the cube on disk).
l You can use the number of Intelligent Cubes to limit the number of
Intelligent Cubes stored in Intelligence Server memory at one time for a
project. The default is 1000 Intelligent Cubes.
The total number of Intelligent Cubes for a project that are stored in
Intelligence Server memory is compared to the limit you define. If an
attempt to load an Intelligent Cube is made that will exceed the numerical
limit, an Intelligent Cube is removed from Intelligence Server memory
before the new Intelligent Cube is loaded into memory.
l Starting in MicroStrategy ONE (June 2024), you can use the amount of
data required for Intelligent Cubes to limit the amount of Intelligent Cube
data stored in the Intelligence Server memory at one time for all projects
when the MicroStrategy products are deployed in container environments.
The default is 50% of the host machine memory, if there is no limit set on
the container.
When Memory Mapped Files for Intelligent Cubes are enabled, only the
portion of the cube in memory will be governed by this counter (excluding
the portion of the cube on disk).
5. Click OK.
Starting in MicroStrategy ONE (June 2024), you can use the following steps
to define limits on Intelligence Server memory usage by Intelligent Cubes in
container deployments. The Maximum memory consumption for Intelligent
Cubes (%) setting specifies the maximum percentage of system RAM that
Intelligent Cubes are allowed to consume.
The default value is 50% of the container memory or 50% of the host
machine memory, if there is no limit set on the container.
6. Click OK.
5. Click OK.
l Maximum file size (MB): Defines the maximum size for files that
users can upload and import data from. The default value is 30 MB.
The minimum value is 1 MB, and the maximum value is 9999999 MB.
l Maximum quota per user (MB): Defines the maximum size of all
data import cubes for each individual user. This quota includes the
file size of all data import cubes, regardless of whether they are
published to memory or on disk. You can set the maximum size quota
by entering the following values:
l -1: Unlimited - No limit is placed on the size of data import cubes for
each user.
5. Click OK.
In this example, without memory mapped files (MMF), only one cube cache
can be loaded in RAM at any given time. With MMF, multiple cube caches can
be loaded in RAM.
Requirements
To use MMF, more disk space than system memory is required. The available
disk space is checked at server startup and logged in DSSErrors.log. If MMF
is enabled, disk space is also checked on subsequent publishings, with
additional logging only if the amount of available disk space drops too low.
On Linux, the number of file descriptors (the nofile limit) should be set
to at least 65535, as mentioned in Recommended System Settings for Linux.
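For example, a minimal sketch of how this limit might be raised in /etc/security/limits.conf for the account that runs Intelligence Server. The mstr user name is an assumption; substitute the account used in your environment:

# Raise the open-file descriptor limit for the Intelligence Server account
mstr    soft    nofile    65535
mstr    hard    nofile    65535

After logging in again as that account, you can confirm the effective limit with the ulimit -n command.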
You can hover over the longer options, such as Apply best strategy to
maximize performance with given resources and Turn-off the
capability without exceptions, to view their full text.
Apply best strategy to maximize performance with given resources: Memory
mapped files are only created after fetching data and only for cubes smaller
than approximately 1 GB.
Troubleshooting
When using memory mapped files (MMFs) on systems with large numbers of
loaded cubes, particularly partitioned cubes, it is possible to exceed the OS-
configurable limit of open files per process. If this limit is reached through
usage of this feature on a Linux Intelligence Server, errors are logged.
Please check the available disk space and refer to the previous
Requirements section.
The act of loading an Intelligent Cube can require memory resources in the
area of two times greater than the size of an Intelligent Cube. This can
affect performance of your Intelligence Server as well as the ability to load
the Intelligent Cube. For information on how to plan for these memory
requirements, see Governing Intelligent Cube Memory Usage, page 1297.
5. Click OK.
4. In the Intelligent Cube file directory area, click ... (the Browse
button).
5. Browse to the folder location to store Intelligent Cubes, and then click
OK.
6. Click OK.
You can also define when Intelligent Cubes are automatically saved to
secondary storage, as described in Defining when Intelligent Cubes are
Automatically Saved to Secondary Storage, page 1314 below.
4. In the Backup frequency (minutes) field, type the interval (in minutes)
between when Intelligent Cubes are automatically saved to secondary
storage. A value of 0 means that Intelligent Cubes are backed up
immediately after they are created or updated.
Be aware that this option also controls the frequency at which cache
and History List messages are backed up to disk, as described in
Configuring Result Cache Settings, page 1228.
5. Click OK.
For detailed information about connection mapping, see the Installation and
Configuration Help.
If you do not use connection mapping, leave this check box cleared.
5. Click OK.
l Establish reasonable limits on how many scheduled jobs are allowed. For
details on this setting, see Limit the Total Number of Jobs, page 1074.
l If you need to create multiple similar subscriptions, you can create them
all at once with the Subscription Wizard. For example, you can subscribe
users to several reports at the same time.
l If you need to temporarily disable a schedule, you can set its start date for
some time in the future. The schedule does not trigger any deliveries until
its scheduled start date.
l If many subscriptions are listed in the Subscription Manager, you can filter
the list of subscriptions so that you see the relevant subscriptions.
l When selecting reports to be subscribed to, make sure all the reports with
prompts that require an answer actually have a default answer. If a report
has a prompt that requires an answer but has no default answer, the
subscription cannot run the report successfully because the prompt cannot
be resolved, and the subscription is automatically invalidated and removed
from the system.
l Enable caching.
Exercise caution when changing settings from the default. For details
on each setting, see the appropriate section of this manual.
l Limit the number of scheduled jobs per project and per Intelligence
Server.
l Enable caching.
l If you are using Distribution Services, see Best Practices for Using
Distribution Services, page 1351.
Time-Triggered Schedules
With a time-triggered schedule, you define a date and time at which the
scheduled task is to be run. For example, you can execute a task every
Sunday night at midnight. Time-triggered schedules are useful to allow
large, resource-intensive tasks to run at off-peak times, such as overnight or
over a weekend.
Event-Triggered Schedules
An event-triggered schedule causes tasks to occur when an event occurs.
For example, an event may be triggered when the database finishes loading.
When an event is triggered, all tasks tied to that event through an event-
triggered schedule begin processing. For more information about events,
including how to create them, see About Events and Event-Triggered
Schedules, page 1326.
Creating Schedules
To create schedules, you must have the privileges Create Configuration
Object and Create and Edit Schedules and Events. In addition, you need to
have Write access to the Schedule folder. For information about privileges
and permissions, see Controlling Access to Application Functionality, page
88.
To Create a Schedule
3. From the File menu, point to New, and then select Schedule.
5. When you reach the Summary page of the Wizard, review your choices
and click Finish.
You can also create a schedule with the Create Schedule script for
Command Manager. For detailed syntax, see the Create Schedule script
outline in Command Manager Help.
Managing Schedules
You can add, remove, or modify schedules through the Schedule Manager.
You can modify the events that trigger event-triggered schedules through
the Event Manager. For instructions on using the Event Manager, see About
Events and Event-Triggered Schedules, page 1326.
You can also specify that certain schedules can execute subscriptions
relating only to certain projects. For instructions, see Restricting Schedules,
page 1324.
l To find all subscriptions that use one of the schedules, right-click the
schedule and select Search for dependent subscriptions.
Restricting Schedules
You may want to restrict some schedules so that they can be used only by
subscriptions in specific projects. For example, your On Sales Database
Load schedule may not be relevant to your Human Resources project. You
can configure the Human Resources project so that the On Sales Database
Load schedule is not listed as an option for subscriptions in that project.
You may also want to restrict schedules so that they cannot be used to
subscribe to certain reports. For example, your very large All Worldwide
Sales Data document should not be subscribed to using the Every Morning
schedule. You can configure the All Worldwide Sales Data document so that
the Every Morning schedule is not listed as an option for subscriptions to
that document.
3. In the left column, click Project Defaults, and then click Schedule.
5. The left column lists schedules that users are not allowed to subscribe
to. The right column lists schedules that users are allowed to subscribe
to.
When you first select this option, no schedules are allowed. All
schedules are listed by default in the left column, and the right column
is empty.
7. When you are finished selecting the schedules that users are allowed to
subscribe to in this project, click Save.
6. The left column lists schedules that users are not allowed to subscribe
to. The right column lists schedules that users are allowed to subscribe
to.
When you first select this option, no schedules are allowed. All
schedules are listed by default in the left column, and the right column
is empty.
8. When you are finished selecting the schedules that users are allowed to
subscribe to in this project, click OK.
Once Intelligence Server has been notified that the event has taken place,
Intelligence Server performs the tasks associated with the corresponding
schedule.
Creating Events
You can create events in Developer using the Event Manager.
You can create events with the following Command Manager script:
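Assuming the standard Command Manager CREATE EVENT syntax, the script would be
similar to the following (the event name and description are placeholders):

CREATE EVENT "OnDBLoad" DESCRIPTION "Signals that the nightly database load has completed";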
Triggering Events
MicroStrategy Command Manager can trigger events from the Windows
command line. By executing Command Manager scripts, external systems
can trigger events and cause the associated tasks to be run. For more
information about Command Manager, see Chapter 15, Automating
Administrative Tasks with Command Manager.
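As an illustration, an external system might invoke Command Manager in
command-line mode to run a script that triggers an event. The project source
name, credentials, and file paths below are placeholders; verify the flags
against your Command Manager version:

cmdmgr -n "MyProjectSource" -u Administrator -p MyPassword -f C:\Scripts\TriggerDBLoad.scp -o C:\Scripts\TriggerDBLoad.log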
At the end of the database load routine, you include a statement to add a
line to a database table, DB_LOAD_COMPLETE, that indicates that the
database load is complete. You then create a database trigger that checks
to see when the DB_LOAD_COMPLETE table is updated, and then executes
a Command Manager script. That script contains the following line:
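Assuming the standard Command Manager TRIGGER EVENT syntax, that line would
be similar to:

TRIGGER EVENT "OnDBLoad";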
When the script is executed, the OnDBLoad event is triggered, and the
schedule is executed.
You can also use the MicroStrategy SDK to develop an application that
triggers an event. You can then cause the database trigger to execute this
application.
You can manually trigger events using the Event Manager. This is primarily
useful in a testing environment. In a production system, it may not be
practical for the administrator to be present to trigger event-based
schedules.
3. Choose a task from the action list. For descriptions of the tasks, see the
table below.
6. Click OK.
The table below lists the tasks that can be scheduled for a project. Some of
the tasks can also be scheduled at the project source level, affecting all
projects in that project source.
l Delete caches: Delete all report caches for the project. For more
information, see Managing Result Caches, page 1221. Typically the Invalidate
Caches task is sufficient to clear the report caches.
l Clean History List database: Delete orphaned entries and ownerless inbox
messages from the History List database. For more information, see Managing
History Lists, page 1254.
l Delete History List messages (project or project source): Delete all
History List messages for the project or project source. For more
information, see Managing History Lists, page 1254. This maintenance request
can be large. Schedule the History List deletions for times when Intelligence
Server is not busy, such as overnight.
l Purge element caches: Delete the element caches for a project. For more
information, see Deleting All Element Caches, page 1274.
l Deactivate Intelligent Cubes: Unpublish an Intelligent Cube from
Intelligence Server. For more information, see Chapter 11, Managing
Intelligent Cubes.
l Delete Intelligent Cube: Delete an Intelligent Cube from the server. For
more information, see Chapter 11, Managing Intelligent Cubes.
l Update Intelligent Cubes: Update a published Intelligent Cube. For more
information, see Chapter 11, Managing Intelligent Cubes.
l Idle project: Cause the project to stop accepting certain types of
requests. For more information, see Setting the Status of a Project, page 48.
l Load project: Bring the project back into normal operation from an unloaded
state. For more information, see Setting the Status of a Project, page 48.
l Resume project: Bring the project back into normal operation from an idle
state. For more information, see Setting the Status of a Project, page 48.
l Unload project: Take a project offline to users and remove the project from
Intelligence Server memory. For more information, see Setting the Status of a
Project, page 48.
l Batch LDAP import (project source only): Import LDAP users into the
MicroStrategy system. For more information, see Manage LDAP Authentication,
page 189.
l Delete unused managed objects (project or project source): Remove the
unused managed objects created for Freeform SQL, Query Builder, and MDX cube
reports. For more information, see Delete Unused Schema Objects: Managed
Objects, page 822.
l Deliver APNS Push Notification: Deliver a push notification for a Newsstand
subscription to a mobile device. For more information, see the MicroStrategy
Mobile Design and Administration Guide.
Users are not notified when a task they have scheduled is deleted.
You can see which nodes are running which projects using the Cluster view
of the System Administration monitor. For details on using the Cluster view
of the System Administration monitor, see Manage Your Clustered System,
page 1168.
l You can create caches for frequently accessed reports and documents,
which provides fast response times to users without generating additional
load on the database system.
Types of Subscriptions
You can create the following types of subscriptions for a report or document:
l Cache update subscriptions refresh the cache for the specified report or
document. For example, your system contains a set of standard weekly
and monthly reports. These reports should be kept in cache because they
are frequently accessed. Certain tables in the database are refreshed
weekly, and other tables are refreshed monthly. Whenever these tables
are updated, the appropriate caches should be refreshed.
l History List subscriptions create a History List message for the specified
report or document. Users can then retrieve the report or document from
the History List message in their History List folder. For detailed
information about the History List, see Saving Report Results: History List,
page 1240.
Email, file, print, and FTP subscriptions are available if you have purchased
a Distribution Services license. For information on purchasing Distribution
Services, contact your MicroStrategy account representative.
l Format: HTML, PDF, Excel, ZIP file, plain text, CSV, bulk export, or .mstr
(dashboard) file
Before you can use Distribution Services to deliver reports and documents,
you must create the appropriate devices, transmitters, and contacts. For
detailed information on these objects and instructions on setting up your
Distribution Services system, see Configuring and Administering Distribution
Services, page 1351.
Creating Subscriptions
This section provides detailed instructions for subscribing to a report or
document.
l You can create multiple cache, History List, Intelligent Cube, or Mobile
subscriptions at one time for a user or user group using the Subscription
Wizard in Developer (see To Create Multiple Subscriptions at One Time in
Developer, page 1338).
l If you have purchased a license for Command Manager, you can use
Command Manager scripts to create and manage your schedules and
subscriptions. For instructions on creating these scripts with Command
Manager, see Chapter 15, Automating Administrative Tasks with
Command Manager, or see the Command Manager Help. (From within
Command Manager, select Help.)
To create any subscriptions, you must have the Schedule Request privilege.
To create an alert-based subscription, you must also have the Web Create
Alert privilege (under the Web Analyst privilege group).
To subscribe other users to a report or document, you must have the Web
Subscribe Others privilege (under the Web Professional group). In addition, to
subscribe others in Developer, you must have the Administer Subscriptions,
Configure Subscription Settings, and Monitor Subscriptions privileges (under
the Administration group).
To subscribe a dynamic address list to a report or document, you must have the
Subscribe Dynamic Address List privilege. For information about dynamic
address lists, see Using a Report to Specify the Recipients of a Subscription,
page 1340.
Only History List, cache, Intelligent Cube, and Mobile subscriptions can be
created in Developer.
2. From the File menu, point to Schedule Delivery To, and select the
type of subscription to create. For a list of the types of subscriptions,
see Types of Subscriptions, page 1334.
5. Click OK.
Only History List, cache, Intelligent Cube, and Mobile subscriptions can be
created in Developer.
2. Step through the Wizard, specifying a schedule and type for the
subscriptions, and the reports and documents that are subscribed to.
3. Click Finish.
This icon becomes visible when you point to the name of the report or
document.
5. To add additional users to the subscription, click To. Select the users
or groups and click OK.
6. Click OK.
The prompt resolution table lists, for each prompt, whether an answer is
required, whether a default or personal answer is present, and the resulting
subscription behavior.
To create a dynamic recipient list, you first create a special source report
that contains all the necessary information about the recipients of the
subscription. You then use the source report to define the dynamic list in
MicroStrategy Web. The new dynamic recipient list appears in the list of
Available Recipients when defining a new subscription to a standard report
or document. When the subscription is executed, only the addresses
returned by the source report are included in the delivery.
The information in the source report includes email addresses, user IDs, and
chosen devices to which to deliver standard MicroStrategy reports and
documents. Each address in the source report must be linked to a
MicroStrategy user. Any security filters and access control lists (ACLs) that
are applied to the address's linked user are also applied to any reports and
documents that are sent to the address.
The procedure below describes how to create a source report that provides
the physical addresses, linked MicroStrategy user IDs, and device type
information necessary to create a dynamic recipient list. For steps to create
a dynamic recipient list using this source report, see the MicroStrategy Web
Help.
To create a dynamic recipient list, you must have the Create Dynamic Address
List privilege.
To subscribe a dynamic address list to a report or document, you must have the
Subscribe Dynamic Address List privilege.
The data type for the user ID and device columns must be VARCHAR, not
CHAR.
2. Save the report with a name and description that makes the report's
purpose as a source report for a dynamic recipient list clear.
3. You can now use this source report to create a new dynamic recipient
list in MicroStrategy Web. For steps to create a dynamic recipient list
using this source report, see the MicroStrategy Web Help.
You can also use macros to personalize the delivery location and backup
delivery location for a file device. For details, including a list of the macros
available for file devices, see Creating and Managing Devices, page 1363.
The following table lists the macros that can be used in email and file
subscriptions, and the fields in which they can be used:
l {&Date}: Date the subscription is sent. Fields: Subject, File Name.
l {&Time}: Time the subscription is sent. Fields: Subject, File Name.
l {&RecipientName}: Name of the recipient. Fields: Subject, File Name.
l {&PromptNumber&} (where Number is the number of the prompt): Name of a
prompt in the subscribed report/document. Fields: all fields.
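For example, combining these macros in a file subscription's File Name field
produces a uniquely named file for each delivery (the report name here is
illustrative):

WeeklySales_{&RecipientName}_{&Date}_{&Time}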
You can split up, or burst, a report or document into multiple files. When the
subscription is executed, a separate file is created for each element of each
attribute selected for bursting. Each file has a portion of data according to
the attributes used to group data in the report (page-by axis) or document
(group-by axis).
For example, you may have a report with information for all regions. You
could place Region in the page-by axis and burst the file subscription into
the separate regions. This creates one report file for each region.
As a second example, if you choose to burst your report using the Region
and Category attributes, a separate file is created for each combination of
Region and Category, such as Central and Books as a report, Central and
Electronics as another, and so on.
When creating the subscription for PDF, Excel, plain text, and CSV file
formats, you can use macros to ensure that each file has a unique name. For
example, if you choose to burst your document using the Region and
Category attributes, you can provide {[Region]@[DESC]},
{[Category]@[DESC]} as the file name. When the subscription is
executed, each file name begins with the names of the attribute elements
used to generate the file, such as Central, Books or Central, Electronics.
3. From the Available Attributes list, select the attributes to use to break
up the data, then click the right arrow to move those attributes to the
Selected Attributes list.
5. In the File Name field, type a name for the burst files. You can use
macros to ensure that each file has a unique name.
6. Click OK.
For example, if your report has Manager in the page-by axis, you may burst
the report into subfolders using the Manager's last name. In this case, you
provide macro text {[Manager]@[Last Name]} as the bursting subfolder
name.
For example, a report with books data is in the Books subfolder in the
manager's subfolder named Abram-Crisby.
In the example above, the Reports\FileDev1 path was defined as part of the
file device used for the subscription. The file name has the date and time
appended to the report name because the file device definition has the
Append timestamp to file name check box selected.
3. From the Available Attributes list, select any attribute to use to create
the subfolders, then click the right arrow to move the attribute to the
Selected Attributes list. The Sub-folder field displays below or to the
right of the File Name field.
5. In the File Name field, type a name for the files to be created. You can
use macros to ensure that each file has a unique name.
6. In the Sub-folder field (the one below or to the right of the File Name
field), type a macro to dynamically create the subfolders.
7. Click OK.
Managing Subscriptions
This section contains the following information:
The steps in the table take you to the main interface to complete the task.
For detailed steps, click Help once you are in the main interface.
For example, to deliver a report or document to the History List, a system
cache, or a mobile device, from the File menu, select Schedule delivery to,
then select History List, Update cache, or Mobile.
Administering Subscriptions
You can create, remove, or modify subscriptions through the Subscription
Manager.
You can set the maximum number of subscriptions of each type that each
user can have for each project. This can prevent excessive load on the
system when subscriptions are executed. By default, there is no limit to the
number of subscriptions. You set these limits in the Project Configuration
Editor, in the Governing Rules: Default: Subscriptions category.
When you create a subscription, you can force the report or document to re-
execute against the warehouse even if a cache is present, by selecting the
Re-run against the warehouse check box in the Subscription Wizard. You
can also prevent the subscription from creating a new cache by selecting the
Do not create or update matching caches check box.
You can change the default values for these check boxes in the Project
Configuration Editor, in the Caching: Subscription Execution category.
This section explains the Distribution Services functionality and steps to set
it up in your MicroStrategy system.
For details about statistics logging for email, file, print, and FTP deliveries,
see Statistics on Subscriptions and Deliveries, page 2952.
l For best results, follow the steps listed in High-Level Checklist to Set Up a
Report Delivery System, page 1353.
l PDF, plain text, and CSV file formats generally offer the fastest delivery
performance. Performance can vary, depending on items including your
hardware, operating system, network connectivity, and so on.
l The performance of the print delivery method depends on the speed of the
printer.
l Enable the zipping feature for the subscription so that files are smaller.
l Use bulk export instead of the CSV file format. Details on bulk exporting
are in the Reports section of the Advanced Reporting Help.
If you are processing many subscriptions, consider using the bulk export
feature. Details on bulk exporting are in the Reports section of the
Advanced Reporting Help.
l When creating contacts, make sure that each contact has at least one
address for each delivery type. Otherwise, the contact does not appear in
the list of contacts when a subscription is created for a delivery type for
which the contact has no address. For example, if a contact does not have
an email address, that contact does not appear in the list of contacts when
an email subscription is being created.
l When selecting reports to be subscribed to, make sure none of the reports
have prompts that require an answer and have no default answer. If a
report has a prompt that requires an answer but has no default answer, the
subscription cannot run the report successfully, and the subscription is
automatically removed from the system.
l The maximum file size of dashboard (.mstr) files that can be sent through
Distribution Services is defined by the MicroStrategy (.mstr) file size
(MB) setting. To access the setting, in MicroStrategy Developer right-click
on the project and select Project Configuration… Then, in the Project
Configuration dialog box, choose Project Definition > Governing Rules
> Default > Result sets. The maximum .mstr file size is 2047 MB.
Understand your users' requirements for subscribing to reports and where they
want them delivered.
l For best practices for working with transmitters, see Creating and
Managing Transmitters, page 1355.
l For best practices for working with devices, see Creating and
Managing Devices, page 1363.
l You can easily test an email transmitter by using the Save to File check
box on the Email Transmitter Editor's Message Output tab.
2. In the Transmitter List area on the right, right-click the transmitter that
you want to view or change settings for.
3. Select Edit.
5. Click OK.
Creating a Transmitter
In Developer, you can create the following types of transmitters:
Once an email transmitter is created, you can create email devices that are
based on that transmitter. When you create a device, the transmitter
appears in the list of existing transmitters in the Select Device Type dialog
box. The settings you specified above for the email transmitter apply to all
email devices that will be based on the transmitter.
2. Right-click in the Transmitter List area on the right, select New, and
select Transmitter.
5. Click OK.
Once a file transmitter is created, you can create file devices that are based
on this transmitter. When you create a device, the transmitter appears in the
list of existing transmitters in the Select Device Type dialog box. The
settings you specified above for the file transmitter apply to all file devices
that will be based on the transmitter.
2. Right-click in the Transmitter List area on the right, select New, then
select Transmitter.
5. Click OK.
Once a print transmitter is created, you can create print devices that are
based on the transmitter. When you create a device, the transmitter appears
in the list of existing transmitters in the Select Device Type dialog box. The
settings you specified above for the print transmitter apply to all print
devices that are based on the transmitter.
2. Right-click in the Transmitter List area on the right, select New, and
select Transmitter.
5. Click OK.
Once an FTP transmitter is created, you can create FTP devices that are
based on the transmitter. When you create a device, the transmitter appears
in the list of existing transmitters in the Select Device Type dialog box. The
settings you specified above for the FTP transmitter apply to all FTP devices
that will be based on the transmitter.
2. Right-click in the Transmitter List area on the right, select New, then
select Transmitter.
5. Click OK.
2. Right-click in the Transmitter List area on the right, select New, and
then Transmitter.
5. Click OK.
2. Right-click in the Transmitter List area on the right, select New, and
then Transmitter.
5. Click OK.
Deleting a Transmitter
You can delete a transmitter if you no longer need to use it.
You cannot delete a transmitter if devices depend on the transmitter. You must
first delete any devices that depend on the transmitter.
To Delete a Transmitter
2. In the Transmitter List area on the right, right-click the transmitter that
you want to delete.
4. Click Yes.
For example, if you want to send reports via email, and your recipients use
an email client such as Microsoft Outlook, you can create a Microsoft
Outlook email device that has settings appropriate for working with Outlook.
If you need to send reports to a file location on a computer on your network,
you can create a file device specifying the network file location. If you want
to send reports to a printer on your network, you can create a printer device
specifying the network printer location and printer properties.
You must specify the network file location and file properties for the file
device to deliver the file to. For steps to create a file device, see
Creating a File Device.
You create new devices when you need a specific combination of properties
and settings for a device to deliver files. You can create a new device in two
ways. You can either create a completely new device and enter all the
supporting information for the device manually, or you can duplicate an
existing device and edit the supporting information so it suits your new
device. You create and configure devices using the Device Editor.
Devices can be created in a direct connection (two-tier) mode, but print and
file locations for those devices are not validated by the system. Print and
file locations for devices created when in server connection mode (three-
tier) are automatically validated by MicroStrategy.
l For file delivery locations, use the Device Editor's File: General tab and
File: Advanced Properties tab.
l For printer locations, use the Device Editor's Print: General tab and
Print: Advanced Properties tab.
l For FTP locations, use the Device Editor's FTP: General tab.
l Test a delivery using each device to make sure that the device settings are
still effective and any system changes that have occurred do not require
changes to any device settings.
l If you have a new email client that you want to use with Distribution
Services functionality, create a new email device and apply settings
specific to your new email application. To create a new device quickly, use
the Duplicate option and then change the device settings so they suit your
new email application.
l If you rename a device or change any settings of a device, test the device
to make sure that the changes allow the device to deliver reports or
documents successfully for users.
2. In the Device List area on the right, right-click the device that you want
to view or change settings for, and select Edit.
4. Click OK.
To rename a device, right-click the device and select Rename. Type a new
name, and then press Enter. When you rename a device, the contacts and
subscriptions using the device are updated automatically.
You create a new device when you need a specific combination of properties
and settings for a file device to deliver files.
You must specify the file properties and the network file location for the file
device to deliver files to. You can include properties for the delivered files
such as having the system set the file to Read-only, label it as Archive, and
so on.
A quick way to create a new file device is to duplicate an existing device and
then edit its settings to meet the needs for this new device. This is a time-
saving method if a similar device already exists, or you want to duplicate the
default MicroStrategy file device. To duplicate a device, right-click the
device that you want to duplicate and select Duplicate.
2. Right-click in the Device List area on the right, select New, and then
Device.
5. Click OK.
Once the file device is created, it appears in the list of existing file devices
when you create an address (in this case, a path to a file storage location
such as a folder) for a MicroStrategy user or a contact. You select a file
device and assign it to the address you are creating. When a user
subscribes to a report to be delivered to this address, the report is delivered
to the file delivery location specified in that address, using the delivery
settings specified in the associated file device.
When a new file device is created, the following default values are applied.
They can be accessed from the Device Editor: File window:
The ACL of a file is largely determined by the parent folder (and recursively
to the root drive) which is determined before delivery. The administrator is
responsible for setting the ACL of the parent folder to meet specific security
needs.
General Tab
l File Location: <MicroStrategy Installation
Path>\FileSubscription
You can dynamically specify the File Location and Backup File Location
in a file device using macros. For example, if you specify the File Location
as C:\Reports\{&RecipientName}\, all subscriptions using that file
device are delivered to subfolders of C:\Reports\. Subscribed reports or
documents for each recipient are delivered to a subfolder with that
recipient's name, such as C:\Reports\Jane Smith\ or
C:\Reports\Hiro Protagonist\.
The table below lists the macros that can be used in the File Location and
Backup File Location fields in a file device:
l {&RecipientListAddress}: File path that a dynamic recipient list
subscription is delivered to.
This process uses Sharity software to resolve the Windows file location as a
mount on the UNIX machine. Intelligence Server can then treat the Windows
file location as though it were a UNIX file location.
You must have a license for MicroStrategy Distribution Services before you can
use file subscriptions.
2. Create a new file device or edit an existing file device (see Creating a
File Device, page 1366).
4. In the User Name field, type the Windows network login that is used to
access the Windows file location for mounting on the Intelligence
Server.
5. In the Password field, type the password for that user name.
6. In the Mount Root field, type the location on the Intelligence Server
machine where the mount is stored. Make sure this is a properly formed
UNIX path, using forward slashes / to separate directories. For
example:
/bin/Sharity/Mount1
7. Click OK.
You can specify various MIME options for the emails sent by an email
device, such as the type of encoding for the emails, the type of attachments
the emails can support, and so on.
2. Right-click in any open space in the Device List area on the right, select
New, and then Device.
5. Click OK.
The selected printer must be added to the list of printers on the machine on
which Intelligence Server is running.
2. Right-click in the Device List area on the right, select New, and then
Device.
5. Click OK.
Once a print device is created, it appears in the list of existing print devices
when you create an address (in this case, a path to the printer) for a
MicroStrategy user or a contact. You select a print device and assign it to
the address you are creating. When a user subscribes to a report to be sent
to this address, the report is sent to the printer specified in that address,
using the delivery settings specified in the associated print device. For
details on creating an address for a user or on creating a contact and adding
addresses to the contact, click Help.
2. Right-click in the Device List area on the right, select New, and then
Device.
5. Click OK.
Once the FTP device is created, it appears in the list of existing FTP
devices. When you create an address for a MicroStrategy user or a contact,
you can select an FTP device and assign it to the address you are creating.
When a user subscribes to a report to be delivered to this address, the
report is delivered to the FTP location specified in that address, using the
delivery settings specified in the associated FTP device.
2. Right-click in the Device List area on the right, select New, and then
Device.
3. Select Mobile Client iPhone or Mobile Client iPad and click OK.
Deleting a Device
You can delete a device if it is no longer needed.
Update the contacts and subscriptions that are using the device by replacing
the device with a different one. To do this, check whether the device you want
to delete is used by any existing addresses:
To Delete a Device
2. In the Device List area on the right, right-click the device you want to
delete.
3. Select Delete.
If you are upgrading from 11.2.x to 11.3, you must upgrade the metadata. If
the mobile device already exists in the metadata, the server name and port
number are changed dynamically to adapt to the new API. No manual work is
necessary.
You can also update the default value for the server name and port number
by editing the mobile device through Developer, using the procedure below.
To Modify the iPhone/iPad Device
4. Click OK.
After upgrading to the new API, the Apple feedback server is no longer used,
because the APNS server sends per-notification feedback to Intelligence
Server. Developer hides the APNS feedback service.
The default push notification server address is now updated for the new API.
ADMINISTERING MICROSTRATEGY WEB AND MOBILE
The privileges available in each edition are listed in the List of Privileges
section. You can also print a report of all privileges assigned to each user
based on license type; to do this, see Audit Your System for the Proper
Licenses, page 734.
All MicroStrategy Web users that are licensed for MicroStrategy Report
Services may view and interact with a document in Flash Mode. Certain
interactions in Flash Mode have additional licensing requirements:
The MicroStrategy security model enables you to set up user groups that can
have subgroups within them, thus creating a hierarchy. The following applies
to the creation of user subgroups:
You need project administration privileges to view and modify user group
definitions.
See your license agreement as you determine how each user is assigned to
a given privilege set. MicroStrategy Web products provide three Web
editions (Professional, Analyst, Reporter), defined by the privilege set
assigned to each.
Assigning privileges outside those designated for each edition changes the
user's edition. For example, if you assign to a user in a Web Reporter group
a privilege available only to a Web Analyst, MicroStrategy considers the
user to be a Web Analyst user.
Within any edition, privileges can be removed for specific users or user
groups. For more information about security and privileges, see Chapter 2,
Setting Up User Security.
License Manager enables you to perform a self-audit of your user base and,
therefore, helps you understand how your licenses are being used. For more
information, see Audit and Update Licenses, page 728.
Any changes you make to the project defaults become the default settings
for the current project or for all Web projects if you select the Apply to all
projects on the current MicroStrategy Intelligence Server (server
name) option from the drop-down list.
The project defaults include user preference options, which each user can
override, and other project default settings accessible only to the
administrator.
For information on the History List settings, see Saving Report Results:
History List, page 1240.
l When the administrator who is setting the Project defaults clicks Load
Default Values, the original values shipped with the MicroStrategy Web
products are loaded on the page.
l When users who are setting User preferences click Load Default Values,
the project default values that the administrator set on the Project defaults
pages are loaded.
The settings are not saved until you click Apply. If you select Apply to all
projects on the current Intelligence Server (server name) from the drop-
down menu, the settings are applied to all projects, not just the one you are
currently configuring.
You can then set the defaults for several categories, including the following:
l General
l Folder Browsing
l Grid display
l Graph display
l History List
l Export Reports
l Drill mode
l Prompts
l Report Services
l Office
l Color Palette
l Email Addresses
l File Locations
l Printer Locations
l FTP Locations
Each category has its own page and includes related settings that are
accessible only to users with the Web Administration privilege. For details
on each setting, see the MicroStrategy Web Help for the Web Administrator.
Using Firewalls
A firewall enforces an access control policy between two systems. A firewall
blocks certain network traffic while permitting other network traffic. Though
the actual means by which this is accomplished vary widely, firewalls can be
implemented in hardware, in software, or in a combination of both.
In many environments, and for a variety of reasons, you may want to put a
firewall between your Web servers and the Intelligence Server or cluster.
This does not pose any problems for the MicroStrategy system, but there are
some things you need to know to ensure that the system functions as
expected. Another common place for a firewall is between the Web clients
and the Web or Mobile server. The following diagram shows how a
MicroStrategy system might look with firewalls in both of these locations:
Regardless of how you choose to implement your firewalls, you must make
sure that the clients can communicate with MicroStrategy Web and Mobile
Servers, that MicroStrategy Web and Mobile can communicate with
Intelligence Server, and vice versa. To do this, certain communication ports
must be open on the server machines and the firewalls must allow Web
server and Intelligence Server communications to go through on those ports.
Most firewalls have some way to specify this. Consult the documentation
that came with your firewall solution for details.
You can change this port number. To learn how, see the steps in To Change
the Port through which MicroStrategy Web and Intelligence Server
Communicate, page 1388.
If you are using the Listener Service, you must make sure port 30172 is
allowed to send and receive TCP/IP and UDP requests through the firewall.
You cannot change this port number.
l MicroStrategy Library
l MicroStrategy Mobile
If you are using clusters, you must make sure that all machines in the Web
server cluster can communicate with all machines in the Intelligence Server
cluster.
3. On the Intelligence Server Options tab, type the port number you want
to use in the Port Number box. Save your changes.
7. On the Connection tab, enter the new port number and click OK.
You must update this port number for all project sources in your
system that connect to this Intelligence Server.
To Change the Port Number for MicroStrategy Web
4. In the Port box, type the port number you want to use. This port number
must match the port number you set for Intelligence Server. An entry of
0 means use port 34952 (the default).
5. Click Save.
Using Cookies
A cookie is a piece of information that is sent to your Web browser—along
with an HTML page—when you access a Web site or page. When a cookie
arrives, your browser saves this information to a file on your hard drive.
When you return to the site or page, some of the stored information is sent
back to the Web server, along with your new request. This information is
usually used to remember details about what a user did on a particular site
or page for the purpose of providing a more personal experience for the
user. For example, you have probably visited a site such as Amazon.com
and found that the site recognizes you. It may know that you have been
there before, when you last visited, and maybe even what you were looking
at the last time you visited.
MicroStrategy Web products use cookies for a wide variety of things. In fact,
they use them for so many things that the application cannot work without
them. Cookies are used to hold information about user sessions,
preferences, available projects, language settings, window sizes, and so on.
For a complete and detailed reference of all cookies used in MicroStrategy
Web and MicroStrategy Web Universal, see the MicroStrategy Web Cookies
section.
Using Encryption
Encryption is the translation of data into a sort of secret code for security
purposes. The most common use of encryption is for information that is sent
across a network so that a malicious user cannot gain anything from
intercepting a network communication. Sometimes information stored in or
written to a file is encrypted. The SSL technology described earlier is one
example of an encryption technology.
2. At the top of the page or in the column on the left, click Security to see
the security settings.
4. Click Save. Now all communication between the Web server and
Intelligence Server is encrypted.
For example, with Microsoft IIS, by default only the "Internet guest user"
needs access to the virtual directory. This is the account under which all file
access occurs for Web applications. In this case, the Internet guest user
needs the following privileges to the virtual directory: read, write, read and
execute, list folder contents, and modify.
However, only the administrator of the Web server should have these
privileges to the Admin folder in which the Web Administrator pages are
located. When secured in this way, if users attempt to access the
Administrator page, the application prompts them for the machine's
administrator login ID and password.
In addition to the file-level security for the virtual directory and its contents,
the Internet guest user also needs full control privileges to the Log folder in
the MicroStrategy Common Files, located by default in C:\Program
Files (x86)\Common Files\MicroStrategy. This ensures that any
application errors that occur while a user is logged in can be written to the
log files.
The file-level security described above is all taken care of for you when you
install the ASP.NET version of MicroStrategy Web using Microsoft IIS.
These details are just provided for your information.
If you are using the J2EE version of MicroStrategy Web you may be using a
different Web server, but most Web servers have similar security
requirements. Consult the documentation for your particular Web server for
information about file-level security requirements.
For more detailed information about this, see the Installation and
Configuration Help.
..\MicroStrategy\Narrowcast Server\Subscription
Engine\build\server\
Modify the file contents so the corresponding two lines are as follows:
TransactionEngineLocation=machine_name:\\Subscription
Engine\\build\\server
TransactionEngineLocation=MACHINE_NAME:/Subscription Engine/build/server
2. Share the folder where the Subscription Engine is installed for either
the local Administrators group or for the account under which the
Subscription Administrator service account runs. This folder must be
shared as Subscription Engine.
You should ensure that the password for this account does not expire.
If you are using MicroStrategy 2021 Update 2 or a later version, the legacy
MicroStrategy Office add-in cannot be installed from Web.
For more information, see the MicroStrategy for Office page in the Readme
and the MicroStrategy for Office Help.
From the MicroStrategy Web Administrator page, you can designate the
installation directory path to MicroStrategy Office, and also determine
whether a link to Office installation information appears in the MicroStrategy
Web interface.
You must install and deploy MicroStrategy Web Services to allow the
installation of MicroStrategy Office from MicroStrategy Web. For information
about deploying MicroStrategy Web Services, see the MicroStrategy for Office
Help.
2. Click Connect.
http://localhost/MicroStrategyWS/office/Lang_
1033/officeinstall.htm
8. Click Save.
Below is the section of the web.config file that contains the time-out
setting (line 5):
1 <sessionState mode="InProc"
2 stateConnectionString="tcpip=127.0.0.1:42424"
3 sqlConnectionString="data source=127.0.0.1;user id=sa;password="
4 cookieless="false"
5 timeout="20"
6 />
This setting does not affect Web Universal because it does not use .NET
architecture.
This setting does not automatically reconnect the .NET session object.
Example row from the session time-out table: 20 minutes, 30 minutes, 45
minutes, Yes, user is automatically logged back in.
l You can modify certain settings in the MicroStrategy Web server machine
or application for best performance. Details for MicroStrategy Web and
Web Universal follow:
l Tune Microsoft's Internet Information Services (IIS). For details, see the
MicroStrategy Tech Notes TN11275 and TN7449.
l Increase the server machine's Java Virtual Machine heap size. For
information on doing this, see MicroStrategy Tech Note TN6446.
See the documentation for your particular Web application server for
additional tuning information. In general, these are the things you can do:
COMBINING ADMINISTRATIVE TASKS WITH SYSTEM MANAGER
Creating a Workflow
You use System Manager to create a workflow visually, by dragging and
dropping processes and linking them together. This allows you to see the
step-by-step process that leads the workflow from one process to the next.
This visual approach to creating a workflow can help you to notice
opportunities to troubleshoot and error check processes as part of a
workflow.
The steps provided below show you how to create a workflow using System
Manager. Additional details on the various components that constitute a
System Manager workflow are provided after these steps.
It can be beneficial to determine the purpose of your workflow and plan the
general logical order of the workflow before using System Manager.
2. Right-click the process and select Rename. Type a new name for the
process.
3. Select the process, and then select Properties in the pane on the right
side. Provide all the required information for the process. For details on
the properties required for each process, see Defining Processes, page
1447.
4. While providing the information for a process, you can review the exit
codes for a process. On the Properties pane, scroll down to the bottom
and click Show Description, as shown in the image below.
Exit code -4242424 is a general exit code that is shared among all
processes. This exit code indicates that either the user canceled the
workflow manually, or the reason for the process error cannot be
determined.
Each workflow has one set of parameters, which can be defined on the
Parameters pane on the right side. The parameters can be used to provide
values for a process when the workflow is executed. Using parameters can
also let you provide this information in a secure fashion. For more
information on how to include parameters in a workflow, including importing
parameters from a file, see Using Parameters for Processes, page 1536.
Once you have all the processes required for a workflow, you can begin
to define the logical order of the workflow by creating connectors
between all the processes. Each process in a workflow needs to connect
to another process in the workflow, otherwise the workflow could end
prematurely. You can also define a process as an entry process of a
workflow, create decisions to direct the logical order of a workflow, and
add comments to provide further information and explanation to a
workflow.
While defining the logical order of a workflow, you may find that additional
processes are required. Processes can be added at any time while creating
a workflow.
1. From the Connectors and processes pane, select from the following
types of connectors:
l Failure: The red arrow, in the middle, is the failure connector. If the
current process is completed with an exit code that is defined as a
failure status, the process that the failure connector points to is the
next process that is attempted. If you use a failure connector from a
process, it is recommended that you also provide a success
connector.
With a connector type selected, click the process to start from and drag
to the process to proceed to next in the workflow. A connector is drawn
between the two processes. If you use the success and failure
connectors for a process, you must do this for each connector.
2. From the Connectors and processes pane, select the Decision icon,
and then click in the workflow area. A decision process is created in the
workflow, as shown in the image below.
Create as many decisions as you need for your workflow. Each decision
should use a success and a failure connector to other processes in the
workflow.
For information on how you can use the iterative retrieval process to
perform related tasks one by one in a workflow, see Processing Related
Tasks One by One, page 1422.
For information on how you can use the split execution and merge execution
processes to handle the parallel processing of tasks in a workflow, see
page 1426.
Once a workflow execution is split into multiple paths, each task is
performed independently of the other tasks. However, while the tasks
are done independently, all the tasks may need to be completed before
performing other tasks later in the workflow. For example, you can
create a DSN and start Intelligence Server as separate tasks at the
same time, but you may need both of those tasks to be fully complete
before starting another task that requires the DSN to be available and
Intelligence Server to be operational. To support this workflow, you can
use the merge execution process to combine multiple paths back into
one workflow path. For example, the merge execution process shown
below combines the three tasks performed in parallel back into one
execution after the three tasks are completed.
1. Create an exit process, which ends the workflow and can explain how
the workflow ended. From the Connectors and processes pane, select
the Exit Workflow icon, and then click in the workflow area. An exit
process is created in the workflow, as shown in the image below.
2. With the exit process selected, from the Properties pane, you can
choose to have the exit process return the exit code from the previous
process or return a customized exit code. For more information on how
to use exit processes to end a workflow, see Using Exit Processes to
End a Workflow, page 1421.
To Validate a Workflow
l If the workflow is not valid, click Details to review the reasons why
the workflow is not valid. Click OK and make any required changes to
the workflow. Once all changes are made, validate the workflow
again.
3. Click Save.
To Deploy a Workflow
The steps below show you how to deploy a workflow from within System
Manager. For information on deploying a workflow from the command line or
as a silent process, see Deploying a Workflow, page 1545.
2. In the Log file path field, type the path of a log file, or use the folder
(browse) icon to browse to a log file. All results of deploying a workflow
are saved to the file that you select.
3. Click OK.
5. From the Starting process drop-down list, select the process to act as
the first process in the workflow. You can select only a process that has
been enabled as an entry process for the workflow.
If you need to end the workflow prematurely, from the Workflow menu,
select Terminate Execution. A dialog box is displayed asking you to
verify your choice to terminate the execution of the workflow. Click Yes
to terminate the workflow. If some processes in the workflow have
already been completed, those processes are not rolled back.
Without a failure connector, the workflow may unexpectedly end with the
current process.
l Failure: The red arrow, in the middle, is the failure connector. If a process
is completed with an exit code that is defined as a failure status, the
process that the failure connector points to is the next process that is
attempted. If you use a failure connector from a process, it is
recommended that you also provide a success connector. Without a
success connector, the workflow may unexpectedly end with the current
process.
This example also includes a few continue connectors. For example, the
Start Intelligence Server process uses a continue connector to lead to a
decision process. The decision process is then used to determine the exit
code of the previous process. For examples of how decisions can be used to
define the logical order of a workflow, see Using Decisions to Determine the
Next Step in a Workflow, page 1415.
l Some steps in a workflow may not work as entry processes. For example,
a decision process that relies on the exit code of the previous process
should not be enabled as an entry process. This is because the decision
process could not retrieve the required exit code. Without the ability to
retrieve an exit code, the decision process would not be able to perform a
comparison, and the workflow would appear to be unresponsive.
2. Select to use a parameter or an exit code as the first item for the
comparison:
l Previous process exit code: Select this option to use the exit code
of the previous process in the comparison. Using the exit code of a
process allows you to determine in greater detail why a process was
successful or unsuccessful. This allows you to take more specific
action to troubleshoot potential problems in a workflow.
For example, for a Command Manager process, an exit code of six
indicates that the script has a syntax error. For this exit code, a
decision process could lead to an exit process, so the workflow could
be ended and the Command Manager script could be manually
reviewed for syntax errors.
3. From the Comparison operator drop-down list, select the operator for
the comparison.
If you do not need to use the exit code from the previous process later
in the workflow, you can leave the Previous process exit code drop-
down list blank.
l Exists: Select this option to check only if the file or directory exists.
The decision process returns as true if the file or directory can be
found.
l Exists and not empty: Select this option to check if the file or
directory exists, and if the file or directory is empty. For files, this
check verifies that some information is in the file. For directories, this
check verifies whether any other files or folders are in the directory.
The decision process returns as true if the file or directory exists, and
the file or directory has some type of content available.
A later process could then overwrite the original exit code, which would
prevent you from comparing the original exit code in the later decision
processes.
If the script was a success, the first decision process allows the workflow to
continue. If the script fails, a second decision process is started. This
second decision process (labeled as Failed to connect to Intelligence
Server?) uses the value previously stored in the Decision parameter to
determine if the exit code is equal to four. With an exit code equal to four,
this decision process can attempt to start Intelligence Server and then
attempt to run the Command Manager script again. If this second decision
process fails, which means the exit code is not equal to four, a third decision
process (labeled as Script syntax error?) is started.
This third decision process again uses the value that was stored in the
Decision parameter by the first decision process to determine if the exit
code is equal to six. With an exit code equal to six, this decision process can
send an email to someone to review the Command Manager script for
syntax errors, and it can attach the script to the email. Once the email is
sent, the workflow is exited. If this final decision process fails, that means
the Command Manager script failed for another reason. In this case, the
workflow is exited for additional troubleshooting.
To add an exit process to your workflow, from the Connectors and processes
pane, select the Exit Workflow icon, and then click in the workflow area. An
exit process is created in the workflow, as shown in the image below.
With the process selected, from the Properties pane, you can define what
type of exit code is provided when the exit code is reached:
l Use previous process exit code: Select this option to return the exit
code of the process that was completed just before the exit process. If you
use this option you can use the same exit process from multiple processes
in the workflow, and the exit code returned provides information on
whatever process led to the exit process. For example, the steps of a
workflow shown in the image below show two processes leading to the
same exit process.
When the workflow completes, the same exit process returns the exit code
of either the decision process that determines whether Intelligence Server
can be started, or the process that completes a Command Manager script.
l Use customized exit code: Select this option to define your own exit
code for the exit process by typing in the available field. This allows you to
create exit codes customized to your needs. You can use only numeric
values for the customized exit code.
If you use this option, you may want to use multiple exit processes in a
workflow. You can then define each exit process with a unique exit code.
This can explain what path the workflow took and how it ended. This can
be helpful because workflows can have multiple possible paths including a
successful path where all processes were completed and unsuccessful
paths where the workflow had to be ended prematurely.
Every workflow should include at least one exit process. Ensuring that
processes either lead to another process or to an exit process provides a
consistent expectation for the results of a workflow.
For example, you have multiple projects that require object updates on an
intermittent schedule. At the start of each week, any updates that are
required are included in a separate update package for each project, and all
update package files are stored in a folder. The number of update packages
required for a week varies depending on requirements of the various
projects. By using the iterative retrieval process, the folder that stores the
weekly update packages can be analyzed to determine how many update
packages need to be applied for the week. The workflow shown below then
retrieves these update packages from the folder one by one, applying the
update package, emailing the project administrator, and using the iterative
retrieval process to retrieve the next update package.
With the process selected, from the Properties pane, you can define how the
iterative retrieval process retrieves information to be processed as part of a
System Manager workflow:
l Files in Directory: Select this option to retrieve files from a folder. When
retrieving files from a folder, be aware that each time a file is retrieved, it
is stored in the same parameter and thus provided to the same process in
the System Manager workflow. This means that the System Manager
process that uses these files must be able to process all files in a folder. In
the example update package scenario, the folder must contain only update
packages. If, for example, a text file was stored in the folder, retrieving
this text file and passing it to the import package process would cause an
error in the workflow.
Click the folder icon to browse to and select a folder, or type the full path
in the Directory Name field. You must also determine how the files are
retrieved, using the following options:
l File Names Only: Select this option to retrieve only the name of the file,
including the file extension. If you clear this check box, the full file path
to the file is retrieved, which is commonly required if you need the
location of the file for other processes in the System Manager workflow.
l All Files: Select this option to retrieve files from only the top-level
folder.
l Content of File: Select this option to retrieve the contents of a file. Click
the folder icon to browse to and select a file, or type the full path in the
File Name field. You must also determine if a separator is used to
segment the content within the file, using the following option:
If you clear this check box, the entire contents of the file is returned in a
single retrieval.
following option:
If you clear this check box, the entire contents of the parameter is returned
in a single retrieval.
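The two retrieval behaviors described for file contents can be pictured with a
short Python sketch; it only illustrates the separator logic described above,
and the file name and separator values are hypothetical.

file_name = "values.txt"  # hypothetical input file
separator = ","           # hypothetical separator
use_separator = True      # corresponds to selecting the separator option

with open(file_name) as f:
    content = f.read()

if use_separator:
    # Each segment between separators is retrieved one at a time.
    retrieved_items = content.split(separator)
else:
    # The entire contents of the file is returned in a single retrieval.
    retrieved_items = [content]

for item in retrieved_items:
    print(item)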
l Ensure that the tasks do not depend on each other. Workflows are often
linear processes that require that one task is completed before starting
another task. For example, you cannot run certain Command Manager
scripts until Intelligence Server is started. This means a task to start
Intelligence Server should not be done in parallel with other tasks that
require Intelligence Server to be operational.
l Split execution processes can use only the continue connector (see Using
Connectors to Create the Logical Order of a Workflow, page 1412) to link
to new tasks to perform in parallel. You must also use two or more
continue connectors, as a split execution is meant to split a workflow into
at least two paths to perform in parallel.
For example, the merge execution process shown below combines the three
tasks performed in parallel back into one execution after the three tasks are
completed.
For each merge execution process, you must supply a time out value. This
time out value is the amount of time, in seconds, that is allowed to complete
all the parallel tasks that are connected to the merge execution process. The
time starts to count down once the first task connected to a merge execution
process is completed. How the remaining tasks connected to the merge
execution are processed depends on the connectors used to continue from
the merge execution process:
It is recommended that you use the success and failure connectors to exit
the merge process:
To avoid these types of performance issues, you can limit the number of
tasks that can be processed at the same time for a workflow. This ensures
that even if a workflow requests a certain number of tasks to be processed at
the same time, only the specified limit is allowed to run at a time.
The default value for the limit is the greater of either the number of CPUs for
the system or 2. Although the number of CPUs for the system is a
reasonable default, be aware of the following:
l Systems can process more tasks simultaneously than the number of CPUs
available.
l Systems can have multiple CPUs, but this does not necessarily mean all
the CPUs are available to the user who is deploying a workflow. For
example, consider a Linux machine with eight CPUs available. In this
scenario, the Maximum Threads default value is 8. However, the user
account that is being used to deploy the workflow may be allowed to use
only one CPU for the Linux machine. When determining the maximum
number of tasks to run simultaneously in System Manager workflows, you
should understand details about system resource configuration.
As a workflow is deployed, any tasks over the set limit are put into a queue.
For example, if a split execution process attempts to start five tasks, but the
Maximum Threads option is set at three, two of the tasks are immediately
put in the queue. Once a task is completed, the next task in the queue can
begin processing.
Suppose again that a split execution process starts five tasks while the
Maximum Threads option is set at three, so two of the tasks (Task E and
Task F) are immediately put in the queue. Assume then that one of the
three tasks being processed (Task A) comes to completion, and it links to
another task in the workflow (Task B). Rather than immediately starting
to process Task B, the workflow must first process the tasks that were
already included in the queue (Task E and Task F). This puts Task B
behind the two existing tasks already in the queue.
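The queuing behavior can be illustrated with a small Python sketch that limits
a thread pool to three workers; the task names and the one-second duration are
placeholders, and this is only an analogy for the Maximum Threads limit, not
System Manager code.

import time
from concurrent.futures import ThreadPoolExecutor

def run_task(name):
    # Placeholder for a workflow task that takes some time to complete.
    time.sleep(1)
    return name + " finished"

tasks = ["Task A", "Task B", "Task C", "Task E", "Task F"]

# With max_workers=3, the first three tasks start immediately and the last
# two wait in a queue until a running task completes.
with ThreadPoolExecutor(max_workers=3) as pool:
    for result in pool.map(run_task, tasks):
        print(result)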
3. Click OK.
You can then type the information for the comment. You can also resize the
comment and move it to the required location in a workflow.
You can use comments to explain the workflow's design. For example,
you can use comments to explain the paths of a decision process, as shown
in the image below.
The same information in the comment is included in the description for the
Command Manager script process. However, providing the information in a
comment allows this information to be displayed directly in the workflow
area.
Validating a Workflow
Validating a workflow is an important step in creating a workflow. Although
validating a workflow does not guarantee that every process will be
completed successfully when deploying a workflow, it helps to limit the
possibility for errors during the deployment.
While you are creating a workflow, you can use System Manager to validate
the workflow. This validation process performs the following checks on the
workflow:
l The workflow contains at least one entry process. This is required so that
the workflow has at least one process to use as the first step in the
workflow.
l All processes have values for all required properties. For example, if you
are creating a DSN, you must supply a name for the DSN, the machine that
stores the data source, the port number, and other required values for the
data source type.
The validation checks only that values exist for all required properties, not
whether the values are valid for the process.
l Each process has either one continue connector or one success connector
and one failure connector leading from it. This ensures that each process
continues on to another step in the workflow regardless of whether the
process is successful or unsuccessful. For more information on correctly
supplying connectors for a workflow, see Using Connectors to Create the
Logical Order of a Workflow, page 1412.
l The workflow has at least one exit process. Exit processes verify that a
workflow deployment has completed. For more information on how you can
use exit processes in a workflow, see Using Exit Processes to End a
Workflow, page 1421.
l Step through the logical order of the workflow and double-check that all
the possible paths make sense with the purpose of the workflow. You can
also use this as an opportunity to check for parts of the workflow that could
become cyclical. For example, in the workflow shown in the image below,
a potential cyclical path is highlighted with purple, dashed arrows.
Although this cyclical path would let the workflow attempt to start
Intelligence Server multiple times, if Intelligence Server cannot be started
successfully, the workflow could continue to execute until it was manually
ended. An alternative would be to modify the logical order of the workflow
to attempt to start Intelligence Server a second time, but end the workflow
if the second attempt also fails. This new path is shown in the image
below.
By including a counter parameter in the workflow, you can keep track of how
many times a loop in a workflow is repeated. After a certain number of
attempts, the loop can be exited even if the required configuration was not
completed successfully.
For example, the workflow shown below uses a loop to attempt to start
Intelligence Server, multiple times if necessary, before performing a
Command Manager script that requires Intelligence Server to be operational.
With the workflow shown above, if Intelligence Server starts successfully the
first time, the Command Manager script is executed next and the loop is not
needed. However, if starting Intelligence Server is not successful, the first
thing that occurs is that the Update Loop Counter configuration updates a
parameter for the workflow. A parameter named Loop is included in the
workflow with the initial value of zero, and the Update Loop Counter
configuration updates this parameter with the following statement:
${Loop} + 1
Using this statement, the Loop parameter is increased by one each time the
Update Loop Counter configuration is executed. Once the Loop parameter
has been increased, a decision process is used to check the value of the
Loop parameter. If the Loop parameter is less than three, the workflow
attempts to start Intelligence Server again; once the Loop parameter reaches
three, the workflow ends instead.
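The loop-counter logic can be summarized in a short Python sketch; the
function that attempts to start Intelligence Server is a hypothetical
placeholder, and the sketch only mirrors the decision logic described above.

def try_start_intelligence_server():
    # Hypothetical placeholder: returns True if the server started.
    return False

loop = 0  # the Loop parameter starts at zero
started = False

while not started and loop < 3:
    loop += 1  # equivalent of the ${Loop} + 1 update statement
    started = try_start_intelligence_server()

if started:
    print("Continue the workflow: run the Command Manager script.")
else:
    print("Exit the workflow after three failed attempts.")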
To use split and merge executions in a workflow that uses logical loops, see
page 1426.

Once a workflow execution is split into multiple paths, each task is
performed independently of the other tasks. However, while the tasks are
done independently, all the tasks may need to be completed before
performing other tasks later in the workflow. For example, you can create a
DSN and start Intelligence Server as separate tasks at the same time, but
you may need both of those tasks to be fully complete before starting
another task that requires the DSN to be available and Intelligence Server
to be operational. To support this workflow, you can use the merge
execution process to combine multiple paths back into one workflow path.
For example, the merge execution process shown below combines the three
tasks performed in parallel back into one execution after the three tasks are
completed.
From the System Manager home page, you can access the template
workflows in the Templates section. To choose from the full list of template
workflows, click the More Templates folder.
Once the workflow is open in System Manager, you can select each process
in the workflow to review the task that it performs for the workflow. You can
also modify the properties of each process so that the workflow can be used
to configure and administer your environment. For information on the
properties available for each type of process available using System
Manager, see Defining Processes, page 1447.
This template performs the following tasks:
l Creates a new project source, which allows access to the new metadata.
l Creates a new project for the MicroStrategy Suite and connects it to the
new database instance.
This template performs the following tasks:
l Creates a copy of the new web archive (.war) file to deploy the new
version of MicroStrategy Web.
l Restarts the web application server, which extracts the contents of the
.war file.
l Copies the MicroStrategy Web customization files into the newly deployed
environment.
l Stops and then restarts the web application server, which deploys the new
MicroStrategy Web environment, including any customizations.
Before using this template, be sure the following prerequisites are met:
l Access to the .war file for the version of MicroStrategy Web to upgrade to.
l A file used to start the web application server. By default, the template
expects an Apache Tomcat web application server. You can swap in a file
that starts your web application server.
Before using this template, be sure the following prerequisites are met:
l Access to the metadata, and a SQL statement that can be used to create a
copy of the metadata. By default, the template expects the metadata to be
stored in a Microsoft SQL Server database. You can change the supplied
SQL script to reflect the SQL syntax required for the database
management system that you use to store your metadata.
l A response file used to upgrade the metadata. This response file can be
created using MicroStrategy Configuration Wizard, as described in the
Installation and Configuration Help.
l A test file that defines how to perform the automated test of reports and
documents for the metadata. This file can be created using Integrity
Manager, as described in Creating an Integrity Test, page 1580.
Before using this template, be sure the following prerequisites are met:
l A file that defines how the duplicate projects are to be merged. This file is
created using the Project Merge Wizard. For steps on how to create this
configuration file, see Merge Projects with the Project Merge Wizard, page
811.
l A test file that defines how to perform the automated test of reports and
documents for the project. This file can be created using Integrity
Manager, as described in Creating an Integrity Test, page 1580.
l Searches through a Command Manager script file used to join the cloud-
based environment to an Intelligence Server cluster. The Intelligence
Server machine name is modified to match the machine name for the
cloud-based environment.
Before using this template, be sure the following prerequisites are met:
l A response file used to create a new project source. This response file can
be created using MicroStrategy Configuration Wizard, as described in the
Installation and Configuration Help.
l Creates a parameter that determines how many times the loop in the
workflow has been completed. This parameter is used to choose the
correct update packages and to exit the loop in the workflow at the proper
time.
l Checks for all required update package files and undo package files.
l Creates an undo package to roll back changes that were made to a project
using an update package.
l Completes the undo package to roll back changes for the project, and then
completes a new update package to update the objects for the project.
l Continues to loop through the workflow to do the same type of updates for
other projects, or ends the workflow after updating four projects with these
changes.
Before using this template, be sure the following prerequisites are met:
l Undo package files that define how to roll back the changes made by an
update package for a project. This file is created using MicroStrategy
Object Manager. For steps on how to create this undo package, see Copy
Objects in a Batch: Update Packages, page 786.
l Update package files that define how a project is to be updated. This file is
created using MicroStrategy Object Manager. For steps on how to create
this update package, see Copy Objects in a Batch: Update Packages,
page 786.
l Command Manager script files that are used to create and administer the
undo package files. These script files can be created using Command
Manager, as described in Creating and Executing Scripts, page 1556.
Before using this template, be sure the following prerequisites are met:
l A text file that includes the information required to publish the Intelligent
Cubes. Each line of the file must include two columns. The first column
provides the Intelligent Cube name, and the second column provides the
full path to the Command Manager script files used to publish the
Intelligent Cube.
l Uses a split execution process to start two threads for the workflow to
perform parallel processing.
Before using this template, be sure the following prerequisites are met:
l A text file that includes a list of people to notify about the availability of the
newly created update package.
Defining Processes
The tasks that are completed as part of a System Manager workflow are
determined by the processes that you include. System Manager provides a
set of MicroStrategy and non-MicroStrategy processes to include in a
workflow. These processes can be categorized as follows:
Metadata, History List, and statistics repositories can be part of the same
process or included in their own separate processes in a System Manager
workflow. Including them as one process allows you to do all these
configurations in a single process. However, including them in separate
processes allows you to find and fix errors specific to each separate type of
repository configuration and perform each configuration at different stages
of the workflow.
Managing Projects
A MicroStrategy business intelligence application consists of many objects
within projects. These objects are ultimately used to create reports and
documents that display data to the end user. As in other software systems,
these objects should be developed and tested before they can be used in a
production system. Once in production, projects need to be managed to
account for new requirements and previously unforeseen circumstances.
This process is referred to as the project life cycle.
With System Manager, you can include these project management tasks in a
workflow. This lets you create, manage, and update your projects silently,
which can be done during off-peak hours and system down times. In
performing project maintenance in this way, users of the MicroStrategy
system are less affected by project maintenance.
l Project Merge XML File: The file that defines how the duplicate projects
are to be merged. This file is created using the Project Merge Wizard. For
steps on how to create this configuration file, see Merge Projects with the
Project Merge Wizard, page 811.
For the password fields listed below, you can use the button to the right of
the password fields to determine whether the password characters are
shown or asterisks are displayed instead.
l Update the schema of the destination project at the end: If this check
box is selected, the system updates the schema of the destination project
after the merge is completed. This update is required when you make any
changes to schema objects such as facts, attributes, or hierarchies.
l Forcefully take over locks if any of the sessions are locked: If this
check box is selected, the system takes ownership of any metadata locks
that exist on the source or destination projects. If this check box is cleared
and sessions are locked, the project merge cannot be completed.
Duplicating Projects
You can duplicate projects as part of a System Manager workflow. If you
want to copy objects between two projects, MicroStrategy recommends that
the projects have related schemas. This means that one must have originally
been a duplicate of the other, or both must have been duplicates of a third
project.
l Base Project Password: The password for the source project's project
source. You can use the button to the right of this password field to
determine whether the password characters are shown or asterisks are
displayed instead.
l Update Target Metadata: If this check box is selected, the system forces
a metadata update of the destination metadata if it is older than the source
metadata. The duplication is not executed unless the destination metadata
is the same version as or more recent than the source metadata.
resolution rules. It allows you to save the objects you want to copy in an
update package and import that package into destination projects later.
l Project Source Name: The name of the project source that contains the
project to update objects in using the update package.
l Password: The password for the user name that you provided to log in to
the project source. You can use the button to the right of the Password
field to determine whether the password characters are shown or asterisks
are displayed instead.
l Package file: The update package file that defines how a project is to be
updated. This file is created using MicroStrategy Object Manager. For
steps to create this update package, see Copy Objects in a Batch: Update
Packages, page 786.
If you are importing a package that is stored on a machine other than the
Intelligence Server machine, ensure that the package can be accessed by
the Intelligence Server machine.
l If the update package is a project update package, select this check box
and type the name of the project to update objects in using the update
package.
l Use logging: If this check box is selected, the system logs the update
package process. Click the folder icon to browse to and select the file to
save the update package results to. If this check box is cleared, no log is
created.
l Forcefully acquire locks: If this check box is selected, the system takes
ownership of any locks that exist. If this check box is cleared and sessions
are locked, the update package cannot be completed.
l Package XML File: The .xml file that contains the definition to create a
package file. You can use Object Manager to create this .xml file, as
described in Copy Objects in a Batch: Update Packages, page 786.
l Source Project Source Password: The password for the user account
you used to create the package .xml file. This authentication information is
used to log in to the project source. You can use the button to the right of
the password field to determine whether the password characters are shown
or asterisks are displayed instead.
l Source Metadata Password: The password for the user account you used
to create the package .xml file. This authentication information is used to
log in to the project metadata. You can use the button to the right of the
password field to determine whether the password characters are shown
or asterisks are displayed instead.
l You can determine the machine whose services to administer by using one
of the following options:
l Local machine: This option performs the start, stop, or restart action for
the service of the machine used to deploy the workflow.
l Remote machine: This option lets you specify the machine that hosts
the service to perform the start, stop, or restart action for. You must
provide the information listed below:
l Machine Name: The name of the machine that hosts the service.
l Password: The password for the user name that you provided to
administer the service. You can use the button to the right of the
Password field to determine whether the password characters are
shown or asterisks are displayed instead.
l Service Type: Determines the service to start, stop, or restart. From this
drop-down list, you can select one of the following MicroStrategy services:
o MicroStrategy Intelligence Server: The main service for your
MicroStrategy reporting environment. It provides the authentication,
clustering, governing, and other administrative management
requirements for your MicroStrategy reporting environment.
o MicroStrategy Listener: Also known as Test Listener. A ping utility that
allows you to check the availability of an Intelligence Server on your
network, whether a DSN can connect to a database, and whether a
project source name can connect to a project source. From any machine
that has the Test Listener installed and operational, you can get
information about other MicroStrategy services available on the network
without having to actually go to each machine.
o MicroStrategy Enterprise Manager Data Loader: The service for
Enterprise Manager that retrieves data for the projects for which
statistics are being logged. This data is then loaded into the Enterprise
Manager lookup tables for further Enterprise Manager reporting and
analysis.
o MicroStrategy Distribution Manager: The service for Narrowcast
Server that distributes subscription processing across available
Execution Engines.
o MicroStrategy Execution Engine: The service for Narrowcast Server
that gathers, formats, and delivers the content to the devices for a
subscription.
o Notes: Information to describe this process as part of the workflow.
You can determine the machine for which to retrieve the service status by
using one of the following options:
l Local machine: Retrieves the status for the service of the machine used
to deploy the workflow.
l Remote machine: Lets you specify the machine that hosts the service
whose status to retrieve. If you select this option, you must type the name
of the machine that hosts the service.
l Service Type: Determines the service whose status to retrieve. From this
drop-down list, you can select one of the following MicroStrategy services:
l Password: The password for the user name that you provided to
connect to the project source. Use the button to the right of the
Password field to determine whether the password characters are
shown or asterisks are displayed instead.
If the password contains one or more quotation marks ("), you must replace
them with two quotation marks ("") and enclose the entire password in
quotes. For example, if your password is 1"2"3'4'5, you must enter the
password as "1""2""3'4'5".
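For illustration only, the quoting rule above can be expressed as a small
Python helper; this is a sketch of the rule, not a MicroStrategy utility.

def quote_command_manager_password(password):
    # Double any quotation marks and wrap the whole password in quotes
    # when it contains at least one quotation mark.
    if '"' in password:
        return '"' + password.replace('"', '""') + '"'
    return password

# Matches the example above: 1"2"3'4'5 becomes "1""2""3'4'5".
print(quote_command_manager_password('1"2"3\'4\'5'))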
l Script File (.scp): Browse to and select the Command Manager script
file that defines all the tasks to be completed.
l Log Output To Default Location: Logs all results to the default folder.
l Log Output To Specified File: Logs all results to the log file specified.
You can browse to and select a log file.
l Split Output Into Three Specified Files: Logs all results of execution
to three separate log files that you choose:
l DSN: The data source name that points to the database that stores the
Narrowcast Server repository. If the DSN requires specific permissions,
select the Authentication for DSN check box to provide a valid user name
and password.
l MTC Configuration File: The test file that defines how to perform the
automated test of reports and documents. This file is created using
Integrity Manager. For steps on how to create this test file, see Creating
an Integrity Test, page 1580.
l Base Project Password: The password for the user specified in the test
file to log in to the base project. This is not required for a baseline-versus-
baseline integrity test.
l Target Project Password: The password for the user specified in the test
file to log in to the destination project. This is not required for a single-
project or baseline-versus-baseline integrity test. You can use the button
to the right of this password field to determine whether the password
characters are shown or asterisks are displayed instead. Refer to
Specifying Passwords for Multiple User Accounts and Special Characters,
page 1470 below for information on providing multiple passwords or
passwords that use special characters for an Integrity Manager test.
You can use the following parameters to provide alternative test information
and details when running an Integrity Manager test as part of a workflow. All
parameters are optional, and if you clear the check box for a parameter
listed below, any required information for that parameter is provided by the
Integrity Manager test file instead:
l Output Directory: The directory for any results. Click the folder icon to
browse to and select an output directory.
l Log File: Click the folder icon to browse to and select a log file directory.
l Base Baseline File: Click the folder icon to browse to and select a
baseline file for the base project.
l Target Baseline File: Click the folder icon to browse to and select a
baseline file for the target project.
l Base Server Name: The name of the machine that is running the
Intelligence Server that hosts the base project for the test.
l Base Server Port: The port that Intelligence Server is using. The default
port is 34952.
l Target Server Name: The name of the machine that is running the
Intelligence Server that hosts the target project for the test.
l Target Server Port: The port that Intelligence Server is using. The default
port is 34952.
l Base Project Name: The name of the base project for the test.
l Login(s) for Base Project: The login accounts required to run any reports
or documents in the base project for the test. For multiple logins, enclose
all logins in double quotes ("") and separate each login with a comma (,).
l Target Project Name: The name of the target project for the test.
l Login(s) for Target Project: The login accounts required to run any
reports or documents in the target project for the test. For multiple logins,
enclose all logins in double quotes ("") and separate each login with a
comma (,).
l Test Folder GUID: The GUID of the test folder. If this option is used, the
reports and documents specified in the Integrity Manager test file are
ignored. Instead, Integrity Manager executes all reports and documents in
the specified folder.
To use multiple user accounts for testing, the passwords associated with
each user account must also be provided. If your Integrity Manager test
includes multiple user accounts, use the following rules to provide any
required passwords for the base project and target project:
l You must include a password for each user account defined in the Integrity
Manager test configuration file. However, if all user accounts use a blank
password, you can leave the base project and target project password
fields blank to indicate that a blank password is used for each user
account.
l The passwords must be listed in the order that user accounts are defined
in the Integrity Manager test. Use Integrity Manager to review the test file
as required to determine the proper order.
An Integrity Manager test can include user accounts that include special
characters in their passwords. Use the following rules to denote special
characters in passwords for the base project and target project:
l If a password includes a single quote (') or comma (,), you must enclose
the entire password in single quotes. For example, for the password
sec,ret, you must type this password as 'sec,ret'.
l To denote a single quote (') in a password, use two single quotes. For
example, for the password sec'ret, you must type this password as
'sec''ret'.
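For illustration only, these rules can be expressed as a small Python helper;
this is a sketch of the rules above, not a MicroStrategy utility.

def quote_integrity_manager_password(password):
    # Double any single quotes, then wrap the password in single quotes
    # if it contains a single quote or a comma.
    needs_quoting = "'" in password or "," in password
    escaped = password.replace("'", "''")
    return "'" + escaped + "'" if needs_quoting else escaped

print(quote_integrity_manager_password("sec,ret"))  # prints 'sec,ret'
print(quote_integrity_manager_password("sec'ret"))  # prints 'sec''ret'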
System Manager allows you to create DSNs for the following types of
databases:
DB2 UDB
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the DB2 UDB process to your workflow. The following
information is required to create a DSN for DB2 UDB when running against
DB2:
l Data Source Name: A name to identify the DB2 UDB data source
configuration in MicroStrategy. For example, Finance or DB2-Serv1 can
serve to identify the connection.
l IP Address: The IP address or name of the machine that runs the DB2
UDB server.
l TCP Port: The DB2 UDB server listener's port number. In most cases, the
default port number is 50000, but you should check with your database
administrator for the correct number.
l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created, and the DSN is not updated.
l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
box is selected, you must provide the following connection information:
l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are shown
or asterisks are displayed instead.
DB2 for i
l Data Source Name: A name to identify the DB2 for i data source
configuration in MicroStrategy. For example, Finance or DB2fori-1 can
serve to identify the connection.
l IP Address: The IP Address of the machine where the catalog tables are
stored. This can be either a numeric address, such as 123.456.789.98,
or a host name. If you use a host name, it must be in the HOSTS file of the
machine or a DNS server.
l Location: The DB2 location name, which is defined during the local DB2
installation.
l Isolation Level: The method by which locks are acquired and released by
the system.
l Package Owner: The package's AuthID if you want to specify a fixed user
to create and modify the packages on the database. The AuthID must have
authority to execute all the SQL in the package.
l TCP Port: The DB2 DRDA listener process's port number on the server
host machine provided by your database administrator. The default port
number is usually 446.
l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created, and the DSN is not updated.
l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are shown
or asterisks are displayed instead.
DB2 z/OS
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the DB2 z/OS process to your workflow. The following
information is required to create a DSN for DB2 z/OS:
l Data Source Name: A name to identify the DB2 z/OS data source
configuration in MicroStrategy. For example, Finance or DB2UDBz/OS-1
can serve to identify the connection.
l IP Address: The IP Address of the machine where the catalog tables are
stored. This can be either a numeric address such as 123.456.789.98,
or a host name. If you use a host name, it must be in the HOSTS file of the
machine or a DNS server.
l Location: The DB2 z/OS location name, which is defined during the local
DB2 z/OS installation. To determine the DB2 location, you can run the
command DISPLAY DDF.
l Package Owner: The package's AuthID if you want to specify a fixed user
to create and modify the packages on the database. The AuthID must have
authority to execute all the SQL in the package.
l TCP Port: The DB2 DRDA listener process's port number on the server
host machine provided by your database administrator. The default port
number is usually 446.
l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.
l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
box is selected, you must provide the following connection information:
l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are shown
or asterisks are displayed instead.
Greenplum
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Greenplum process to your workflow. The
following information is required to create a DSN for Greenplum:
l Port Number: The port number for the connection. The default port
number for Greenplum is usually 5432. Check with your database
administrator for the correct number.
l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.
l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are shown
or asterisks are displayed instead.
Hive
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Hive process to your workflow. The following
information is required to create a DSN for Apache Hive:
l Data Source Name: A name to identify the Apache Hive data source
configuration in MicroStrategy. For example, Finance or ApacheHive-1
can serve to identify the connection.
l Host Name: The name or IP address of the machine on which the Apache
Hive data source resides. The system administrator or database
administrator assigns the host name.
l Port Number: The port number for the connection. The default port
number for Apache Hive is usually 10000. Check with your database
administrator for the correct number.
l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.
l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed.
Informix
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Informix process to your workflow. The following
information is required to create a DSN for Informix Wire Protocol:
l Server Name: The client connection string designating the server and
database to be accessed.
l Host Name: The name of the machine on which the Informix server
resides. The system administrator or database administrator assigns the
host name.
l Port Number: The Informix server listener's port number. The default port
number for Informix is commonly 1526.
l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.
l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
box is selected, you must provide the following connection information:
l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are shown
or asterisks are displayed instead.
Informix XPS
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Informix XPS (Windows Only) process to your
workflow. The following information is required to create a DSN for Informix
XPS:
l Server Name: The client connection string designating the server and
database to be accessed.
l Host Name: The name of the machine on which the Informix server
resides. The system administrator or database administrator assigns the
host name.
l Service Name: The service name, as it exists on the host machine. The
system administrator assigns the service name.
l Protocol Type: The protocol used to communicate with the server. Select
the appropriate protocol from this drop-down list.
l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.
l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
box is selected, you must provide the following connection information:
l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are shown
or asterisks are displayed instead.
Microsoft SQL Server
l Data Source Name: A name to identify the Microsoft SQL Server data
source configuration in MicroStrategy. For example, Personnel or
SQLServer-1 can serve to identify the connection.
l Windows: Select this option if you are configuring the Microsoft SQL
Server driver on Windows:
l Server Name: The name of a SQL Server on your network, in the format
ServerName_or_IPAddress,PortNumber. For example, if your
network supports named servers, you can specify an address such as
SQLServer-1,1433. You can also specify the IP address, such as
123.45.678.998,1433. To connect to a named instance, include the
instance name, for example 123.45.678.998\Instance1,1433 or
SQLServer-1\Instance1,1433.
If you use Windows NT authentication with SQL Server, you must enter the
Windows NT account user name and password in Service Manager. For
background information on Service Manager, see Running Intelligence
Server as an Application or a Service, page 30.
l Server Name: The name of a SQL Server on your network. For example,
if your network supports named servers, you can specify an address
such as SQLServer-1. You can also specify the IP address such as
123.45.678.998. Contact your system administrator for the server
name or IP address.
The following are examples of providing the server name for your SQL Server
database:
SQLServer-1\Instance1
123.45.678.998\Instance1
l Port Number: The port number for the connection. The default port
number for SQL Server is usually 1433. Check with your database
administrator for the correct number.
l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.
l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
box is selected, you must provide the following connection information:
l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are shown
or asterisks are displayed instead.
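As an illustration of what a connection test against a server string in this
format might look like outside of System Manager, the following Python sketch
assumes the pyodbc package and a Microsoft SQL Server ODBC driver are
installed; the driver name, server, instance, database, and credentials are
all hypothetical.

import pyodbc

connection_string = (
    "DRIVER={ODBC Driver 17 for SQL Server};"   # assumed driver name
    "SERVER=SQLServer-1\\Instance1,1433;"        # ServerName\Instance,Port format
    "DATABASE=Personnel;"
    "UID=dbuser;"
    "PWD=secret"
)

# Attempt a simple query to confirm that the connection details are valid.
with pyodbc.connect(connection_string, timeout=5) as connection:
    cursor = connection.cursor()
    cursor.execute("SELECT 1")
    print("Connection test succeeded:", cursor.fetchone()[0])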
Microsoft Access
The MicroStrategy ODBC Driver for SequeLink allows you to access
Microsoft Access databases stored on a Windows machine from an
Intelligence Server hosted on a UNIX or Linux machine.
l Data Source Name: A name to identify the Microsoft Access data
source configuration in MicroStrategy. For example, Personnel or
MicrosoftAccess-1 can serve to identify the connection.
l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.
Oracle
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Oracle process to your workflow. The following
information is required to create a DSN for Oracle Wire Protocol:
l Host Name: The name of the Oracle server to be accessed. This can
be a server name such as Oracle-1 or an IP address such as
123.456.789.98.
l SID: The Oracle System Identifier for the instance of Oracle running
on the server. The default SID is usually ORCL.
l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.
l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
box is selected, you must provide the following connection information:
l Password: The password for the user name you provided to connect to
the database. You can use the button to the right of the Password field
to determine whether the password characters are shown or asterisks
are displayed instead.
PostgreSQL
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the PostgreSQL process to your workflow. The
following information is required to create a DSN for PostgreSQL:
l Port Number: The port number for the connection. The default port
number for PostgreSQL is usually 5432. Check with your database
administrator for the correct number.
l Default User ID: The name of a valid user for the PostgreSQL database.
l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.
l Password: The password for the default user name that you provided.
You can use the button to the right of the Password field to determine
whether the password characters are shown or asterisks are displayed
instead.
Salesforce
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Salesforce process to your workflow. The
following information is required to create a DSN for Salesforce:
l Host Name: The host name to connect to Salesforce.com. You can keep
the default value of login.salesforce.com.
l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.
l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
box is selected, you must supply the following information to test the
connection:
l Password: The password for the Salesforce.com user account that was
supplied. The password syntax is PasswordSecuritytoken, where
Password is the password for the user account and Securitytoken is
the additional security token required to access Salesforce.com. Do not
use any spaces or other characters to separate the password and
security token.
Sybase ASE
To perform this configuration, in System Manager, from the Connectors and
processes pane, add the Sybase ASE process to your workflow. The
following information is required to create a DSN for Sybase ASE:
l Data Source Name: A name to identify the Sybase ASE data source
configuration in MicroStrategy. For example, Finance or SybaseASE-1 can
serve to identify the connection.
l Enable Unicode support (UTF8): Select this check box if the database
supports UNICODE.
l Overwrite: If this check box is selected, the system updates a DSN with
the same name with the information provided below. If this check box is
cleared and a DSN with the same name exists on the system, no DSN is
created and the DSN is not updated.
l Test Connection: If this check box is selected, the system tests the DSN
information provided to determine if a successful connection can be made.
If this check box is cleared, no connection test is performed. If this check
box is selected, you must provide the following connection information:
l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are shown
or asterisks are displayed instead.
By separating tasks into multiple workflows, you can then re-use these
workflows as components of other larger workflows. For example, starting
Intelligence Server and troubleshooting this service may be required for
multiple workflows that you create. You can include the steps to start and
troubleshoot Intelligence Server into a separate workflow, and then use this
workflow in all the workflows that require these steps.
l Workflow File: Click the folder icon to browse to and select a System
Manager workflow file. This is the workflow that is included as a process in
the current workflow.
l Starting Process: Select this check box to specify the first process to
attempt for the workflow. Type the name of the process, including the
proper case, in the field below. Ensure that the process is enabled as an
entry process for the workflow. For steps to enable a process as an entry
process, see Using Entry Processes to Determine the First Step in a
Workflow, page 1414.
l Use a Parameter File: Select this check box to specify a parameters file
to provide values for the parameters of the workflow. Click the folder icon
to browse to and select a parameters file for the workflow. For information
on using parameters in a workflow, see Using Parameters for Processes,
page 1536. You can also specify parameter values using the Use Console
Parameters option described below.
l Use a Customized Log File: Select this check box to specify a log file to
save all results of the workflow to. Click the folder icon to browse to and
select a log file. This lets you separate the results of each workflow into
individual log files. If you clear this check box, the results of the workflow
are included in the log file for the main workflow.
l Display Output on the Console: Select this check box to output all
results to the System Manager console. If this check box is cleared, the
results of any actions taken as part of this System Manager workflow are
not displayed on the console and instead only provided in any specified
log files.
l Personalize Success Exit Code(s): Select this check box to specify the
exit codes that indicate successful execution of the underlying workflow.
Type the exit codes in the text box, separating multiple codes with a
comma. Valid exit codes must be an integer. The success exit codes you
specify here map to a new exit code of 0, which is passed on to the
larger workflow to indicate that this workflow executed successfully.
l Personalize Failure Exit Code(s): Select this check box to specify the
exit codes that indicate failed execution of the underlying workflow. Type
the exit codes in the text box, separating multiple codes with a comma.
Valid exit codes must be an integer. The failure exit codes you specify
here map to a new exit code of -1, which is passed on to the larger
workflow to indicate that this workflow failed.
If you do not use the Personalize Exit Code(s) options, or if you configure
them incorrectly, one of the following exit codes will be passed on to the
larger workflow:
l -2: Indicates that the input format of the specified exit codes is incorrect,
for example, if you use an exit code that is not an integer, or if you
separate multiple codes with anything other than a comma.
l -3: Indicates that there is at least one conflict in the personalized exit
codes. For example, if you use exit code 4 in both the Success Exit Code
(s) list and the Failure Exit Code(s) list.
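The mapping can be summarized in a short Python sketch. It only illustrates
the rules listed above; how codes that appear in neither list are handled is
an assumption in this sketch.

def map_subworkflow_exit_code(raw_code, success_codes, failure_codes):
    if set(success_codes) & set(failure_codes):
        return -3  # at least one code is personalized as both success and failure
    if raw_code in success_codes:
        return 0   # reported to the larger workflow as success
    if raw_code in failure_codes:
        return -1  # reported to the larger workflow as failure
    return raw_code  # assumption: unmapped codes pass through unchanged

print(map_subworkflow_exit_code(4, success_codes=[4, 8], failure_codes=[9]))   # 0
print(map_subworkflow_exit_code(9, success_codes=[4, 8], failure_codes=[9]))   # -1
print(map_subworkflow_exit_code(4, success_codes=[4], failure_codes=[4]))      # -3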
l Home Path: The path that acts as the home directory for the
MicroStrategy installation. This path includes MicroStrategy
configuration files that can be modified after a successful installation.
l Common Path: The path that contains important files. The types of files
included in this path vary depending on your operating system, but they
can include files such as log files, SQL files, WAR files, JAR files,
libraries, and more.
You can also execute any process that uses system or third-party tools. This
lets you perform custom processes that can be executed from the system's
command line.
l Action: Select either Encrypt or Decrypt from the drop-down list. Encrypt
algorithmically encodes plain text into a non-readable form. Decrypt
deciphers the encrypted text back to its original plain text form.
The Decrypt action only works on text that was encrypted using the
Encrypt action. Also, files encoded using the Encrypt action must be
decrypted using the Decrypt action. Other encryption/decryption
programs will not work.
l Password: Select the check box and type the required password if a
specific password is required to perform this process. If this option is not
selected, the default password specified by System Manager is used.
l Text: Select this option and type the text to be encrypted or decrypted in
the text box. This is useful for encrypting or decrypting a small amount of
text.
l File: Select this option and click the folder icon to select the file to encrypt
or decrypt. This option is useful if you have a large amount of text to
encrypt or decrypt.
l Output File: Click the folder icon to select the file in which to store the
encrypted or decrypted results.
l Overwrite: Select this check box to overwrite the output file if it already
exists.
l Source File or Directory: The location of the file or folder to copy. If the
path to a file is provided, only that file is copied. If the path to a folder is
provided, the folder along with all the files within it are copied. Click the
folder icon to browse to and select a file or folder.
You can also use wildcard characters (* and ?) to select files or folders to
copy. For example, you can use the syntax *.txt to copy all files with the
extension .txt in a folder. For additional examples of how you can use
these wildcard characters, see Using Wildcard Characters in Processes,
page 1543.
l If you are copying a file, you can provide a path to a specific folder
location and file name to store the new copy.
l If the location you provide does not exist, a new directory is created with
the name of the destination and all source files are copied to the
directory. Click the folder icon to browse to and select a file or folder.
l Parent Directory: The location in which to create the file or folder. Click
the folder icon to browse to and select a folder.
l File or Directory Name: The name for the new file or folder:
l For files, type any file name and extension to create an empty file of that
file type. Be aware that this process does not validate whether the file
type is valid.
l For folders, type the folder name. Along with creating a single folder at
the parent directory location, you can create a series of subfolders by
using backslashes (\). For example, if the parent location is C:\, you can
create the following folders:
l The Directory: The location of the top-level folder in which to count the
number of files. Click the folder icon to browse to and select a folder.
l File Filter: Select this option to apply a single filter to the files that are to
be included in the count of files in a folder. You can then type the filter,
including wildcard characters such as an asterisk (*) to represent multiple
characters, and a question mark (?) to represent a single character. For
example, if you type *.exe, only files that end with the .exe extension are
included in the count. If you type test?.exe, files such as test1.exe,
test2.exe, test3.exe, and testA.exe are included in the count. If
you clear this check box, all files in a folder are included in the final count.
l Among All Files: Select this option to count files only in the top-level
folder.
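A short Python sketch of the counting behavior, assuming the standard os and
fnmatch modules; the folder path and filter value are hypothetical.

import fnmatch
import os

directory = "installers"     # hypothetical top-level folder
file_filter = "test?.exe"    # ? matches a single character, * matches many

# Count only the files directly in the top-level folder, as with Among All Files.
names = [name for name in os.listdir(directory)
         if os.path.isfile(os.path.join(directory, name))]
matching = fnmatch.filter(names, file_filter)

print(len(matching), "matching files out of", len(names))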
l File or Directory: The location of the file or folder to delete. If the path to
a file is provided, only that file is deleted. If the path to a folder is
provided, the folder and all the files in it are deleted. Click the folder icon
to browse to and select a file or folder.
You can also use wildcard characters (* and ?) to select files or folders for
deletion. For example, you can use the syntax *.txt to delete all files
with the extension .txt in a folder. For additional examples of how you can
use these wildcard characters, see Using Wildcard Characters in
Processes, page 1543.
l Source File or Directory: The location of the file or folder to move. If the
path to a file is provided, only that file is moved. If the path to a folder is
provided, the folder along with all the files and folders within it are moved.
Click the folder icon to browse to and select a file or folder.
You can also use wildcard characters (* and ?) to select files or folders to
move. For example, you can use the syntax *.txt to move all files with
the extension .txt in a folder. For additional examples of how you can use
these wildcard characters, see Using Wildcard Characters in Processes,
page 1543
l If you are moving a file, you can provide a path to a specific folder
location and file name to store the file.
l If the location you provide does not exist, a new directory is created with
the name of the destination and all source files will be copied to this
directory. Click the folder icon to browse to and select a file or folder.
l Source File: The location of the file to search for content to replace. Click
the folder icon to browse to and select a file.
l Destination File: The location and name of the file that is created with all
content replacements. You can create a new file to retain a copy of the
original file, or select the same file as the source file to overwrite the
existing file. To overwrite the existing file, you must also select the option
Overwrite Destination File If It Already Exists described below. Click the
folder icon to browse to and select a file.
l Match Case: If this check box is selected, the system replaces keywords
and phrases if the content and the case of the content matches. If this
check box is cleared, keywords and phrases are replaced if the content
matches, regardless of the case.
l Keyword: The keyword or phrase to search for in the file. The search finds
and replaces all instances of the keyword or phrase in the file. You must
type the keyword or phrase exactly; wildcard characters cannot be used.
To replace multiple lines in the file, use $\n$ to indicate a line break; a
conceptual sketch of this multi-line replacement appears after this list of
options.
For example, if you have an XML file that includes multiple instances of
the same address, and the person or company with that address has
recently moved to another city, you can find and replace all instances of
the customer address. If the XML for the address is:
<address1>123 Main Street</address1>$\n$<city>Vienna</city>$\n$<state>Virginia</state>$\n$<zip>22180</zip>
l Use This Additional Keyword / Value Pair: If this check box is selected,
the system includes a find and replace action to search for and replace a
given keyword or phrase. Each of these check boxes includes a single,
additional find and replace action. For each find and replace action that
you include, you must provide the following information:
l Keyword: The keyword or phrase to search for in the file. The search
finds and replaces all instances of the keyword or phrase within the file.
You must type the keyword or phrase exactly; wildcard characters
cannot be used. If you want to replace multiple lines within the file, you
can use $\n$ to indicate a line break.
l Value: The content used to replace the keyword or phrase. If you want to
replace a keyword with multiple lines, you can use $\n$ to indicate a
line break.
l New Name of File or Directory: The new name for the file or folder.
l Zip File: The location of the compressed file to extract, which can use
either zip or gzip format. Click the folder icon to browse to and select a
file.
l Output Directory: The location where the files in the compressed file are
extracted. Click the folder icon to browse to and select a folder.
l Overwrite: Replaces any existing files in the output directory with the files
that are being extracted. If this check box is cleared and a file with the
same name exists in the output directory, the file is not updated.
l Source File or Directory: The location of the file or folders to include in the zip file. If
you select a folder, all of the contents of the folder are included in the zip file, which
includes the subfolders and their content. Click the folder icon to browse to and select
files and folders.
l You can also use wildcard characters (* and ?) to select files or folders to compress
into a zip file. For example, you can use the syntax *.txt to select all files with the
extension .txt in a folder for compression into a zip file. For additional examples of
how you can use these wildcard characters, see Using Wildcard Characters in
Processes, page 1543.
l Output File: The location and name of the final compressed zip file. Click
the folder icon to browse to and select an existing zip file.
l Append: If an existing zip file is found, the new files and folders are
added to the existing zip file.
However, if a folder already exists in the same location in the zip file, it
is ignored along with any contents of the folder. This means that if a
folder has new files, they are not included as part of appending files to
the existing zip file.
l FTP Server: The URL for the FTP or SFTP site. You must also define
whether the site allows anonymous access or requires a user name and
password:
l Port Number: The port number to access the FTP or SFTP site. By
default a value of 22 is expected. Select this check box and type the port
number for your FTP or SFTP site.
l Login: Defines the connection to the FTP or SFTP site as one that
requires a user name and password to log into the FTP or SFTP site.
You must provide the following information:
l User Name: The name of a valid user for the FTP or SFTP site.
l Password: The password for the user name that you provided to
connect to the FTP or SFTP site. You can use the button to the right of
the Password field to determine whether the password characters are
shown or asterisks are displayed instead.
l Use SFTP: Encrypts the entire download communication. You must have
a secure FTP site for this encryption to work successfully. If you clear
this check box, the communication is not encrypted.
If you have both an FTP and an SFTP site, you can choose to clear
this check box to use the FTP site, or select this check box to encrypt
the communication and use the SFTP site. However, if you only have
an FTP site or an SFTP site, your use of this option must reflect the
type of site you are using.
l Single File: Downloads a single file from the FTP or SFTP site. Type the
location of the file on the FTP or SFTP site to download.
l All Files: Downloads all the files directly within the folder selected.
Subfolders are not downloaded recursively if you select this option.
l All Files And Subfolders Recursively: Downloads all the files and
subfolders recursively, within the folder selected.
l Overwrite: If this check box is selected, the system replaces files with the
same name as the files or folders downloaded from the FTP or SFTP site.
If this check box is cleared and a file or folder with the same name exists
on the system, the file or folder is not downloaded from the FTP or SFTP
site.
l FTP Server: The URL for the FTP or SFTP site. You must also define
whether the site allows anonymous access or requires a user name and
password:
l Port Number: The port number to access the FTP or SFTP site. By
default a value of 22 is expected. Select this check box and type the port
number for your FTP or SFTP site.
l Login: Defines the connection to the FTP or SFTP site as one that
requires a user name and password to log into the FTP or SFTP site.
You must provide the following information:
l User Name: The name of a valid user for the FTP or SFTP site.
l Password: The password for the user name that you provided to
connect to the FTP or SFTP site. You can use the button to the right of
the Password field to determine whether the password characters are
shown or asterisks are displayed instead.
l Use SFTP: Encrypts the entire upload communication. You must have
a secure FTP site for this encryption to work successfully. If you clear
this check box, the communication is not encrypted.
If you have both an FTP and an SFTP site, you can choose to clear
this check box to use the FTP site, or select this check box to encrypt
the communication and use the SFTP site. However, if you only have
an FTP site or an SFTP site, your use of this option must reflect the
type of site you are using.
l Single File: Uploads a single file to the FTP or SFTP site. Click the
folder icon to browse to and select a file.
l Local Directory: The local folder to upload the files from. Click the
folder icon to browse to and select a folder.
l All Files: Uploads all the files directly within the folder selected.
Subfolders are not uploaded recursively if you select this option.
l All Files And Subfolders Recursively: Uploads all the files and
subfolders recursively, within the folder selected.
l Overwrite: If this check box is selected, the system replaces files with
the same name as the files or folders uploaded to the FTP or SFTP site.
If this check box is cleared and a file or folder with the same name exists
on the FTP or SFTP site, the file or folder is not uploaded.
l Specify a DSN: Defines the connection to the database through the use
of a DSN. You must provide the following information:
l Password: The password for the user name that you provided to
connect to the database. You can use the button to the right of the
Password field to determine whether the password characters are
shown or asterisks are displayed instead.
l Encoding: From this drop-down list, select the character encoding for
the data source you are connecting to:
l Non UTF-8: Select this option if the data source uses a character
encoding other than UTF-8, such as UTF-16 or UCS-2. This encoding
option is selected by default.
l UTF-8: Select this option if the data source uses UTF-8 character
encoding. For example, Teradata databases may require UTF-8
encoding.
l Save Execution Output Into a File: If this check box is selected, the
system saves all resulting output of executing the SQL statements to the
selected file. No output or data is included in the file for SQL statements
that do not return any output, such as create table or update table
statements. Click the folder icon to browse to and select a file, which can
either be a .txt or .csv file.
If this check box is cleared, the output of executing the SQL statements is
not saved to a file.
If you select this check box, the column header information is provided in
the SQL output along with the associated values. This can provide
additional context to the values.
l Output Parameters: As part of executing SQL, you can store any results
in parameters:
Sending an Email
You can send an email as part of a System Manager workflow. The email can
include the results of the workflow, which can provide verification of what
processes have been successfully completed.
l From: The email address of the sender. For an email sent from a System
Manager workflow, you must type the email address of the person who
deploys the workflow.
l To: The email addresses for the intended primary recipients of the email.
Use a comma to separate each email address.
l Cc: The email addresses of the secondary recipients who should receive a
copy of the email addressed to the primary recipients. Select the check
box to enter the email addresses. Use a comma to separate each email
address.
l Bcc: The email addresses of the recipients who should receive the email
while concealing their email address from the other recipients. Select the
check box to enter the email addresses. Use a comma to separate each
email address.
l Message Subject: The title of the email that is displayed in the subject
line. This can be used to give a brief description of the purpose behind
deploying the workflow. Select the check box to enter the message
subject.
l Message Body: The main content of the email. This can give additional
details on what was completed as part of the workflow and next steps for a
user or administrator to take. Select the check box to enter the message
content.
l Attach System Manager Log: If this check box is selected, the system
includes the System Manager log file as an attachment to the email. This
log file includes all the results of the workflow up to the time of the email
request. Any processes in the workflow that are completed after the email
request are not included in the log file. If this check box is cleared, the log
file is not attached to the email.
l Attach Any Other File: If this check box is selected, the system includes
a file as an attachment to the email. Click the folder icon to browse to and
select a file to include as an attachment. You can also use wildcard
characters if the folder or file name is not known when creating the
workflow (see Using Wildcard Characters in Processes, page 1543).
l If you need to send multiple files, you can do one of the following:
l Compress the required files into a single file such as a .zip file. You can
include compressing files into a single .zip file as part of a System
Manager workflow, using the process described in Compressing Files
into a Zip File, page 1503.
l Outgoing SMTP Server: If this check box is selected, the system lets you
define the outgoing SMTP server to use to send the email. If this check
box is cleared, a default SMTP server is used to send the email. If you
choose to specify an SMTP server, you must provide the following
information:
l You must select the type of port used for the SMTP server. Contact your
SMTP server administrator to determine the proper port type:
l Plain Text: Defines the connection to the SMTP server in plain text,
without using any security protocol. By default, this option is selected.
l User Name: The name of a user account that has the necessary rights to
send emails using the SMTP server.
l User Password: The password for the user name that you provided to
send emails using the SMTP server. You can use the button to the right
of the Password field to determine whether the password characters are
shown or asterisks are displayed instead.
l Waiting Time (sec): The number of seconds to remain on the current wait
process before proceeding to the next process in a workflow. Type a
numeric, integer value to represent the number of seconds to wait before
proceeding to the next process in a workflow.
You can add additional time to the waiting process using the following
options:
You must supply a valid numerical value for the seconds of the wait
process, regardless of whether you define the minutes and hours for the
wait process. You can type a value of zero (0) to define the wait process
as a length of time in only minutes and hours.
l Hours: Select this check box to determine the number of hours to remain
on the current wait process before proceeding to the next process in a
workflow. Type a numeric, integer value to represent the number of
hours to wait before proceeding to the next process in a workflow. This
time is added to any seconds or minutes also defined for the wait
process.
l File: Updates the parameter value with the entire contents of a file. If
you select this option, you must type the full path to the file in the New
Value field. You can use .txt or .csv files to update the value of a
parameter.
l Registry: Updates the parameter value with the value of a registry key.
If you select this option, you must type the full path to the registry key in
the New Value field.
l New Value: The new value to assign to the parameter. If you selected the
Resolve the value from check box listed above, you must type the full path
to the file or registry key.
${Loop} + 1
it increases the value of the Loop parameter by one each time the Update
Parameters configuration is processed in the workflow. This type of
parameter value update supports exiting loops in a workflow after a certain
number of attempts. For best practices on using the Update Parameters
process to support loops in workflows, see Supporting Loops in a
Workflow to Attempt Configurations Multiple Times, page 1435.
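For instance, a decision process elsewhere in the workflow could use a hypothetical condition such as ${Loop} < 5 so that a configuration is retried at most five times before the workflow continues along the decision's alternate path.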
l System property: The information about the system that is retrieved. You
can select from the following options:
l User Home Directory: The path that acts as the current user's home
directory, which can be used to store files if other paths are restricted for
security reasons.
l Hostname: The host name of the system, which can be used to connect
to the system.
l Java Virtual Machine (JVM) bit-size: The size allowed for the Java
Virtual Machine, which is also often referred to as the heap size. This
determines how much memory can be used to perform various Java
tasks. You can tune this value to improve the performance of your
machine.
l Local Machine Date: The date and time for the system. The time is
returned in the time zone of the system. If the time zone for the system
is changed, you must restart System Manager to return the new time
zone for the machine.
For each additional system property that you select, an additional System
property and Parameter pair is made available.
Creating an Image
You can create an Amazon Machine Image (AMI) from an Amazon EBS-
backed instance as part of the System Manager workflow. An Amazon
Machine Image is a template that contains the software configuration for
your server. While creating an image, ensure that the EBS-backed instance
is either running or stopped.
l Credential Properties File: The file that includes your secretkey and
accesskey for your account. Click the folder icon to browse to and select a
credential properties file.
l Set No Reboot: Select this check box to prevent Amazon EC2 from
shutting down the Amazon EBS-backed instance before creating the new
image. If you clear this check box, Amazon EC2 attempts to shut down the
EBS-backed instance before creating the new image and then restarts the
instance.
l none: To omit a mapping of the device from the AMI used to launch the
instance, specify none. For example: "/dev/sdc=none".
l snapshot-id:volume-size:delete-on-termination:volume-
type:iops, where all of these variables are optional. You can choose to
use any or all of them. Refer to your Amazon third-party documentation
for additional examples, updates, and information on these block device
variables.
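As an illustrative sketch only (the snapshot ID, volume size, and volume type below are hypothetical), a mapping in this format might look like:
"/dev/sdf=snap-1a2b3c4d:100:true:gp2"
This would create a 100 GB gp2 volume from the snapshot and delete the volume when the instance terminates.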
l New AMI ID: The newly created image ID for the Amazon Machine
Image (AMI).
l Credential Properties File: The file that includes your secretkey and
accesskey for your account. Click the folder icon to browse to and select a
credential properties file.
l AMI ID: The image ID for the Amazon Machine Image (AMI) to use for your
cloud-based environment. Type the image ID, which you can retrieve from
Amazon's cloud resources.
l Credential Properties File: The file that includes your secretkey and
accesskey for your account. Click the folder icon to browse to and select a
credential properties file.
l AMI ID: The image ID for the Amazon Machine Image (AMI) to use for your
cloud-based environment. Type the image ID, which you can retrieve from
Amazon's cloud resources.
l Instance Type: The image type for your cloud-based environment, which
determines the computing capacity of the cloud-based environment. Select
the appropriate instance type from the drop-down list.
l Key Pair Name: Select this check box to create the key pair name, which
acts as a password to access the cloud-based environment once it is
launched. If you clear this check box, this security method is not used with
the cloud-based environment.
l Name Tag: Select this check box to create a name to distinguish the
cloud-based environment. If you clear this check box, no name is provided
for the cloud-based environment.
l Security Group: Select this check box to create new security groups or
use existing security groups. Use a semicolon (;) to separate multiple
security groups. If you clear this check box, no security groups are used
for the cloud-based environment.
l Public DNS Name: The public Domain Name System (DNS) name of the
cloud-based environment, which is provided upon launching an instance.
Using the Amazon EC2 console, you can view the public DNS name for a
running instance.
l Private DNS Name: The private Domain Name System (DNS) name of
the cloud-based environment, which is provided upon launching an
instance. Using the Amazon EC2 console, you can view the private DNS
name for a running instance.
l Credential Properties File: The file that includes your secretkey and
accesskey for your account. Click the folder icon to browse to and select a
credential properties file.
l Action: The list of actions—that is, start, stop, or force stop—that can be
performed on your cloud-based environment. Select the appropriate action
from the drop-down list.
l Credential Properties File: The file that includes your secretkey and
accesskey for your account. Click the folder icon to browse to and select a
credential properties file.
Creating a vApp
You can create a new vApp as part of a System Manager workflow. A vApp is
a collection of one or more virtual machines that can be deployed as a
single, cloud-based environment.
If you are unsure of any of the option values required to create a vApp,
contact the vCloud administrator for the necessary information.
l User Name: The name of a user account that has the necessary rights to
work with and create vApps.
l Password: The password for the user name that you provided to create
the vApp. You can use the button to the right of the Password field to
determine whether the password characters are shown or asterisks are
displayed instead.
l Virtual Datacenter: The name of the virtual datacenter that allocates the
system resources for a vCloud environment.
l New vApp Name: The name that is used to identify the vApp.
l Add VM: Select this check box to also create a virtual machine for the
vApp. If you select this check box, you must provide the following
information to create a virtual machine:
l Catalog Name: The name of the catalog that stores the template that
you use to create the virtual machine.
l Template Name: The name of the template required to create the virtual
machine. A template defines the initial setup and configuration of a
virtual machine.
l Start the vApp: Determines if the virtual machine and its associated
vApp are powered on so that it can be used after the creation process is
completed. Select this check box to power on the virtual machine and its
associated vApp. If you do not select this option, you can use the
Manage VM process to power on the virtual machine at a later time (see
Starting, Stopping, and Restarting a Virtual Machine, page 1526).
If you are unsure about any of the option values required to manage a vApp,
contact the vCloud administrator for the necessary information.
l User Name: The name of a user account that has the necessary rights to
work with vApps.
l Password: The password for the user name that you provided to create
the vApp. You can use the button to the right of the Password field to
determine whether the password characters are shown or asterisks are
displayed instead.
l Start: Starts a vApp so that users can access and work with a vApp.
l Stop: Stops a vApp through a vCloud request, which makes the vApp
unavailable to users. This type of vCloud power off request can be
monitored by the vCloud system to determine the success or failure of
the action.
l Virtual Datacenter: The name of the virtual datacenter that allocates the
system resources for a vCloud environment.
A virtual machine must be powered on before users can access and work with
it. You may need to power off or shut down a virtual machine to perform
various administrative maintenance tasks on it.
If you are unsure about any of the option values required to manage a
virtual machine, contact the vCloud administrator for the necessary
information.
l User Name: The name of a user account that has the necessary rights to
work with vApps and virtual machines.
l Password: The password for the user name that you provided to create
the vApp. You can use the button to the right of the Password field to
determine whether the password characters are shown or asterisks are
displayed instead.
l Action: The type of action to perform on the virtual machine. You can
select one of the following actions:
l Power on: Starts a virtual machine so that users can access and work
with the virtual machine.
l Virtual Datacenter: The name of the virtual datacenter that allocates the
system resources for a vCloud environment.
l vApp Name: The name of the vApp that contains the virtual machine to
start, stop, or restart.
l VM Name: The name of the virtual machine within the vApp to start, stop,
or restart.
Duplicating a vApp
You can duplicate a vApp as part of a System Manager workflow. A vApp is a
collection of one or more virtual machines, which can be deployed as a
single cloud-based environment.
If you are unsure about any of the option values required to duplicate a
vApp, contact the vCloud administrator for the necessary information.
l User Name: The name of a user account that has the necessary rights to
work with and create vApps.
l Password: The password for the user name that you provided to create
the vApp. You can use the button to the right of the Password field to
determine whether the password characters are shown or asterisks are
displayed instead.
l Virtual Datacenter: The name of the virtual datacenter that allocates the
system resources for a vCloud environment.
l Destination vApp Name: The name for the duplicate copy of the vApp.
l Start the vApp: Determines if the duplicate copy of the vApp is powered
on so that it can be used after the duplication process is completed. Select
this check box to power on the vApp. If you do not select this option, you
can use the Manage vApp process to power on the vApp at a later time
(see Starting, Stopping, and Restarting a vApp, page 1525).
Deleting a vApp
You can delete a vApp as part of a System Manager workflow. A vApp is a
collection of one or more virtual machines, which can be deployed as a
single cloud-based environment.
If you are unsure about any of the option values required to delete a vApp,
contact the vCloud administrator for the necessary information.
l User Name: The name of a user account that has the necessary rights to
work with and delete vApps.
l Password: The password for the user name that you provided to create
the vApp. You can use the button to the right of the Password field to
determine whether the password characters are shown or asterisks are
displayed instead.
l Virtual Datacenter: The name of the virtual datacenter that allocates the
system resources for a vCloud environment.
If you are unsure of any of the option values required to delete a virtual
machine, contact the vCloud administrator for the necessary information.
l User Name: The name of a user account that has the necessary rights to
work with and delete virtual machines within vApps.
l Password: The password for the user name that you provided to create
the vApp. You can use the button to the right of the Password field to
determine whether the password characters are shown or asterisks are
displayed instead.
l Virtual Datacenter: The name of the virtual datacenter that allocates the
system resources for a vCloud environment and includes the vApp that
hosts the virtual machine to be deleted.
l vApp Name: The name of the vApp that hosts the virtual machine that is to
be deleted.
If you are unsure of any of the option values required to create a virtual
machine within a vApp, contact the vCloud administrator for the necessary
information.
l User Name: The name of a user account that has the necessary rights to
work with and create vApps.
l Password: The password for the user name that you provided to create
the vApp. You can use the button to the right of the Password field to
determine whether the password characters are shown or asterisks are
displayed instead.
l From vApp: This option duplicates a virtual machine that already exists
in the vApp:
l vApp Name: The name of the vApp that includes the virtual machine
to duplicate.
l Catalog Name: The name of the catalog that stores the template that
you use to create the virtual machine.
l vApp Name: The name of the vApp that will host the new virtual
machine.
l Configure New VM: These options determine additional details about the
new virtual machine:
l Full Name: The name for the virtual machine that is created.
l Computer Name: Select this check box to provide the host name of the
new virtual machine. If you clear this check box, the name that you
specified for Full Name is also used for this host name.
l Computer Name: The host name for the new virtual machine. Select a
parameter from the drop-down list to store the information in that
parameter.
Along with determining the success or failure of a process, an exit code can
also provide additional information on why the process was a success or a
failure.
While providing the information for a process, you can review the exit codes
for a process. On the Properties pane, scroll down to the bottom and click
Show Description, as shown in the image below.
The exit codes for a custom process are dependent on that custom process.
Refer to any documentation related to the custom process to determine
possible exit codes.
You can use these exit codes to determine the next step to take in a
workflow:
l Using the success and failure connectors lets you guide the workflow
based on whether the process was completed with a success or failure exit
code. For additional information on how connectors determine the logical
order of a workflow based on the exit code of the process they are coming
from, see Using Connectors to Create the Logical Order of a Workflow,
page 1412.
l Using a decision process, you can guide the workflow according to error
codes rather than just whether the process was considered successful or
unsuccessful. This can help to support additional troubleshooting and
error checking during a workflow. For examples of how decisions can be
used to guide a workflow on more than just the success or failure of a
process, see Using Decisions to Determine the Next Step in a Workflow,
page 1415.
The steps below show you how to create parameters for a workflow.
This procedure assumes you are creating new parameters for a workflow.
For information on importing parameters for a workflow, see Importing
Parameters into a Workflow, page 1538.
l Name: The name for the parameter. This is the name that is used to
identify the parameter in a process or decision within the workflow.
l Value: The value that is used in place of the parameter when the
workflow is executed. This works as the default value for the
parameter if no value for the parameter is given from the command
line when the workflow is executed. For information on the
precedence of providing values for parameters, see Providing
Parameter Values during Deployment of a Workflow, page 1542.
l Confidential: Select the check box to turn off any logging and
feedback information for parameter values that are updated by a
process in your workflow (defined as an output parameter of a
process). For example, if you save the result of a SQL execution to a
parameter, this result is hidden from any System Manager logs. If the
parameter value for a confidential parameter has to be shown in the
feedback console, it is displayed as asterisks instead of the actual
value. For information on the feedback console, see Using System
Manager to Test and Deploy a Workflow, page 1545.
You can import parameters into a workflow that have been saved as a
parameters response file. This lets you update the values for your workflow.
When parameters are imported into a workflow, any existing parameters are
updated with the values included in the parameters file. Parameters can only
be updated when importing a parameters file. This means that if a parameter
does not already exist in a workflow, it is not created when importing the
parameters file.
Additionally, if parameters are in the workflow that are not defined in the
parameters file, the value for the parameters is not updated during the
import process.
The workflow you are importing parameters for already has parameters defined
for it. Only these parameters can be updated by importing a parameters file.
4. Select the parameters file to import and click Open. You are returned to
System Manager and the parameters are updated accordingly. If the
changes are not what you expected, you can click Clear to undo all the
parameter updates.
You can export the parameters in a workflow to a file. This file can serve
various purposes:
l You can modify the parameter file and apply updates to the original
workflow.
l You can modify the parameter file and include it during execution to make
changes just before execution.
l You can modify the parameter file to include comments, which can provide
additional information on the parameters and their values. To include a
comment in a parameters file you can use the characters // or # to denote
a line in the parameters file as a comment. Any line that begins with either
// or # is ignored when using the parameters file with System Manager.
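As an illustrative sketch of such a commented parameters file (the parameter names, values, and one-name=value-per-line layout shown here are assumptions, not a definitive format):
// Connection settings for the nightly deployment
UserName=User1
# Supply the password on the command line instead of storing it here
Password=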
The steps below show you how to export the parameters of a workflow to a
file.
3. In the File name field, type a name for the parameters file.
4. Click Save.
Parameters can be included in any option that takes some type of text or
numeric data as input. For example, a Password field can take a parameter
that supplies a password to access the task or system resource for a
process. However, check boxes and any other options that do not accept
text or numeric data cannot use parameters.
${ParameterName}
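For example, if a workflow defines a hypothetical parameter named AdminPassword, a Password field could contain ${AdminPassword}, and the actual password is substituted when the workflow is deployed.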
The values for parameters can be provided in a few different ways. For
information on how parameter values can be provided and the precedence of
each option, see Providing Parameter Values during Deployment of a
Workflow, page 1542 below.
l When defining the parameters for the workflow. These values act as the
default value of the parameter.
l From the command line during execution of a workflow. This lets the user
executing the process provide sensitive information such as user
passwords on the command line rather than saving them in a workflow.
l You can also use the Update Parameters process (see Performing System
Processes, page 1493) to update the value of a parameter during the
deployment of a workflow.
l If the value for a parameter is provided from the command line during
execution, this value is used. Any values for the parameter provided in a
parameters file or default values provided in the workflow are ignored.
l If the value for a parameter is not provided from the command line during
execution, but a value for the parameter is provided in a parameters file,
the value from the parameters file is used. The default value provided in
the workflow is ignored.
l If the value for a parameter is not provided in a parameters file or from the
command line during execution, the default value provided when defining
a parameter in a workflow is used.
l Refer to folders or files that do not exist yet or do not have known names.
For example, a file or folder can be created as part of the same System
Manager workflow. If the full name of the file or folder is not known (for
example, the file name itself might include creation time information) you
can use wildcard characters to refer to the expected file or folder.
l Select multiple files for a single process, such as attaching multiple files to
an email. For example, rather than listing a single file, you can use wild
cards to select all .txt files in a folder.
l Compressing files into a zip file (see Performing System Processes, page
1493)
For the configurations of a System Manager process that can use wildcard
characters, the following characters are supported:
l The * (asterisk) character: You can use * to represent any number of
characters. Some examples of how you can use this wildcard character
include:
l *.txt
This syntax would search for and select all .txt files in a given folder.
l filename.*
This syntax would search for and select all files, regardless of file extension, with the
name filename.
l *.*
This syntax would search for and select all files that have a file extension in a given
folder.
l *
This syntax would search for and select all files and folders in a given folder.
l The ? (question mark) character: You can use ? to represent any single
character. Some examples of how you can use this wildcard character
include:
l filename?.ini
This syntax would search for and select all .ini files with the name filename and a
single character. For example, the syntax config?.ini would select files such as
config1.ini, configA.ini, and so on.
l filename.??
This syntax would search for and select all files with the name filename and any
two character file extension.
Deploying a Workflow
Once you create a workflow, you can deploy the workflow to attempt the
processes that are included in the workflow. System Manager provides the
following methods for deploying a workflow:
l Using System Manager to Test and Deploy a Workflow, page 1545: System
Manager's interface can be used to test and deploy a workflow.
Be aware that some processes are dependent on the machine that you use
to deploy the workflow. For example, if you include processes to create
DSNs, the DSNs are created on the machine that you use to deploy the
workflow.
The steps below show you how to deploy a workflow from within System
Manager.
You have created a workflow and saved it in a location that can be accessed
from the machine that you are deploying the workflow on. Steps to create a
workflow are provided in Creating a Workflow, page 1402.
You have installed any MicroStrategy products and components that are
required for the processes of a workflow. For the products and components
required for each process, see Defining Processes, page 1447.
If required, you have created a parameters file to provide values for the
parameters of the workflow and saved it in a location that can be accessed
from the machine that you are deploying the workflow on.
3. Browse to the workflow file, select the file, and then click Open. The
workflow is displayed within System Manager.
6. In the Log file path field, type the path of a log file or use the folder
(browse) icon to browse to a log file. All results of deploying a workflow
are saved to the file that you select.
8. Click OK.
9. From the Workflow menu, point to Execute Workflow, and then select
Run Configuration.
10. From the Starting process drop-down list, select the process to act as
the first process in the workflow. You can only select processes that
have been enabled as entry processes for the workflow.
11. In the Parameters area, type any parameters required to execute the
processes in the workflow, which can include user names, passwords,
and other values. To include multiple parameter and value pairs, you
must enclose each parameter in double quotes (" ") and separate
each parameter and value pair using a space. The following example
contains the syntax to provide values for the parameters UserName and
Password:
"UserName=User1" "Password=1234"
12. Click Run to begin the workflow. As the workflow is being executed the
results of each process are displayed in the Console pane. You can use
the Console pane to review additional details on the results of each
process and export these details. The results are also saved to the log
file that you specified earlier. If you marked any process parameters as
Confidential, the parameter value will either not be displayed in the
feedback console and logs, or it will be masked and displayed as
asterisks instead of the actual value.
If you need to end the workflow prematurely, from the Workflow menu,
select Terminate Execution. A dialog box is displayed asking you to
verify your choice to terminate the execution of the workflow. To
terminate the execution of the workflow, click Yes. If some processes
in the workflow have already been completed, those processes are not
rolled back.
Be aware that some processes are dependent on the machine that you use
to deploy the workflow. For example, if you include processes to create
DSNs, the DSNs are created on the machine that you use to deploy the
workflow.
To include multiple parameter and value pairs, you must enclose each
parameter in double quotes (" ") and separate each parameter and value
pair using a space. For example, -p "UserName=User1"
"Password=1234" is valid syntax to provide values for the parameters
UserName and Password.
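As a rough sketch of what such a command might look like (the executable name, the flag used to specify the workflow file, and the file paths are assumptions here; see Using the Command Line to Deploy a Workflow, page 1548, for the exact syntax):
MASysMgr -w "C:\workflows\NightlyDeploy.smw" -showoutput -p "UserName=User1" "Password=1234"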
The steps below show you how to deploy a workflow using the command line
version of System Manager.
You have created a workflow and saved it in a location that can be accessed
from the machine that you are deploying the workflow on. Steps to create a
workflow are provided in Creating a Workflow, page 1402.
You have installed any MicroStrategy products and components that are
required for the processes of the workflow. For the products and components
required for each process, see Defining Processes, page 1447.
If required, you have created a parameters file to provide values for the
parameters of the workflow and saved it in a location that can be accessed
from the machine that you are deploying the workflow on.
4. Once you have typed the full command, press Enter. The workflow is
started and results are saved to the log file, as well as displayed on the
screen if you included the parameter -showoutput.
l Ensure that the machine that is to be used for the deployment meets all
the prerequisites listed in Using the Command Line to Deploy a Workflow,
page 1548.
l Determine the syntax to deploy the workflow using the command line
version of System Manager. The required and optional parameters are
described in Using the Command Line to Deploy a Workflow, page 1548.
This syntax can then be used in one of the following ways:
l Log in to the machine to perform the deployment from, and use the steps
provided in To Deploy a Workflow Using the Command Line Version of
System Manager, page 1551 to deploy the workflow.
l Review the results of the deployment using the log file specified to verify
that the required processes were completed successfully.
AUTOMATING ADMINISTRATIVE TASKS WITH COMMAND MANAGER
The Command Manager script engine uses a unique syntax that is similar to
SQL and other scripting languages. For a complete guide to the commands
and statements used in Command Manager, see the Command Manager
Help.
Here are more examples of tasks you can perform using Command Manager:
l User management: Add, remove, or modify users or user groups; list user
profiles
l Security: Grant or revoke user privileges; create security filters and apply
them to users or groups; change security roles and user profiles; assign or
revoke ACL permissions; disconnect users or disable their accounts
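As a hedged sketch of what a user management statement might look like (the login, password, full name, and group below are hypothetical; see the Command Manager Help or the script outlines for the exact statement syntax):
CREATE USER "KHuang" PASSWORD "temp123" FULLNAME "Kim Huang" IN GROUP "Customers";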
For full access to all Command Manager functionality, a user must have all
privileges in the Common, Distribution Services, and Administration groups,
except for Bypass All Object Security Access Checks.
For more information about using Command Manager and for script syntax,
see Command Manager Help.
Script Outlines
The Command Manager script outlines help you insert script statements with
the correct syntax into your scripts. Outlines are preconstructed statements
with optional features and user-defined parameters clearly marked.
Outlines are grouped by the type of objects that they affect. The outlines
that are available to be inserted depend on whether the active Script window
is connected to a project source or a Narrowcast server. Only the outlines
that are relevant to the connected metadata source are available.
4. Navigate the Outline tree to locate the outline you want, and select it.
6. Click Cancel.
For example, you can create a procedure called NewUser that creates a user
and adds the user to groups. You can then call this procedure from another
Command Manager script, supplying the name of the user and the groups.
To use the procedure to create a user named KHuang and add the user to
the group Customers, use the following syntax:
EXECUTE PROCEDURE NewUser("KHuang", "Customers");
where NewUser is the name of the procedure, and KHuang and Customers
are the inputs to the procedure.
Procedures are available only for use with project sources. Procedures
cannot be used with Narrowcast Server statements.
Command Manager contains many sample procedures that you can view and
modify. These are stored in the following Command Manager directory:
\Outlines\Procedure_Outlines\Sample_Procedures\
For instructions on how to use procedures, see the Command Manager Help.
l execute runs any Command Manager command, but it does not return the
results.
For specific command syntax for the command line interface, see the
Command Manager Help.
If the project source name, the input file, or an output file contain a space in
the name or path, you must enclose the name in double quotes.
The following parameters define the metadata connection and the script to
execute:
l -n ProjectSourceName: Connect to a project source.
l -w ODBC_DSN: Connect to a Narrowcast Server through the specified ODBC DSN.
l [-p Password]: The password for the connection. If -p is omitted, Command
Manager assumes a null password.
l [-s SystemPrefix]: The Narrowcast Server system prefix. If -s is omitted,
Command Manager assumes a null system prefix.
l [-d Database]: The Narrowcast Server database.
l -f InputFile: The script file to execute. If this parameter is omitted, the
Command Manager GUI is launched.
The following parameters control the log files. By default, results, error
messages, and success messages are written to CmdMgrResults.log,
CmdMgrFail.log, and CmdMgrSuccess.log. You can omit one or more of these
parameters; for example, if you want to log only error messages, use only
the -of parameter.
l -of FailFile: The file to which error messages are written.
l -os SuccessFile: The file to which success messages are written.
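For example, assuming the command line executable is named cmdmgr and that a -u flag supplies the user name (check the full parameter table in the Command Manager Help; the project source name, script path, and log file below are hypothetical), an invocation might look like:
cmdmgr -n "MyProjectSource" -u "Administrator" -f "C:\scripts\create_users.scp" -of "C:\logs\errors.log"
This runs the script file against the named project source and writes only error messages to the specified log file.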
If you create a batch file to execute a Command Manager script from the
command line, the password for the project source or Narrowcast Server
login must be stored in plain text in the batch file. You can protect the
security of this information by encrypting the script and having it connect to
a project source or Narrowcast Server when it is executed, using the
CONNECT SERVER statement. You can then execute the script from a
connection-less session, which does not require a user name or password.
The user name and password are provided in the Command Manager script,
as part of the CONNECT SERVER statement. For detailed syntax
instructions for using the CONNECT SERVER statement, see the Command
Manager Help (from within the Command Manager graphical interface, press
F1).
When you encrypt a script, you specify a password for the script. This
password is required to open the script, either in the Command Manager
graphical interface, or using the LOADFILE command in the Command
Manager command line interface. Because a script must be opened before it
can be executed in the Command Manager graphical interface, the password
is required to execute the script from the graphical interface as well.
However, the password is not required to execute the script from the
command line or through the command line interface.
The password for an encrypted script cannot be blank, cannot contain any
spaces, and is case-sensitive.
l Critical errors occur when the main part of the instruction is not able to
complete. These errors interrupt script execution when the Stop script
execution on error option is enabled (GUI) or when the -stoponerror
flag is used (command line). If this option is enabled and a critical error
occurs, the script stops executing and any further instructions are ignored.
l Noncritical errors occur when the main part of the instruction is able to
complete. These errors never interrupt script execution.
An error message is written to the Messages tab of the Script window for all
execution errors, critical or noncritical. In addition, if logging is enabled in
the Options dialog box, the error message is written to the log file.
Timeout Errors
To avoid locking up the system indefinitely, Command Manager has a built-
in timeout limit of 20 minutes. If a statement has been executing for 20
minutes with no response from Intelligence Server, Command Manager
reports a request timeout error for that command and executes the next
instruction in the script. However, Command Manager does not attempt to
abort the command. In some cases, such as database-intensive tasks such
as purging the statistics database, the task may continue to execute even
after Command Manager reports a timeout error.
l identifiers, which are words that the user provides as parameters for the
script. For example, in the statement LIST MEMBERS FOR USER GROUP
"Managers"; the word Managers is an identifier. Identifiers must be
enclosed in double quotes. If an identifier itself contains double quotes,
use double quotes to enclose the identifier and put carets in front of the
interior double quotes:
l dates
l object GUIDs
When you start the command line interface, it is in console mode, with a
connection-less project source connection. The command prompt in console
mode displays the metadata source and user to which Command Manager is
connected.
To see a list of instructions for the command line interface, from the
command line interface type help and press Enter. A list of Command
Manager command line instructions and an explanation of their effects is
displayed.
VERIFYING REPORTS AND DOCUMENTS WITH INTEGRITY MANAGER
For instance, you may want to ensure that the changes involved in moving
your project from a development environment into production do not alter
any of your reports. Integrity Manager can compare reports in the
development and the production projects, and highlight any differences. This
can assist you in tracking down discrepancies between the two projects.
For reports you can test and compare the SQL, grid data, graph, Excel, or
PDF output. For documents you can test and compare the Excel or PDF
output, or test whether the documents execute properly. If you choose not to
test and compare the Excel or PDF output, no output is generated for the
documents. Integrity Manager still reports whether the documents executed
successfully and how long it took them to execute.
l To execute an integrity test on a project, you must have the Use Integrity
Manager privilege for that project.
l To test the Excel export of a report or document, you must have Microsoft
Excel installed on the machine running Integrity Manager.
The Integrity Manager Wizard walks you through the process of setting up
integrity tests. You specify what kind of integrity test to run, what reports or
documents to test, and the execution and output settings. Then you can
execute the test immediately, or save the test for later use and re-use. For
information on reusing tests, see Saving and Loading a Test, page 1582.
These tests can be useful if you have existing baselines from previous
tests that you want to compare. For example, your system is configured in
the recommended project life cycle of development > test > production (for
more information on this life cycle, see the Managing your projects section
in the System Administration Help). You have an existing baseline from a
single project test of the production project, and the results of a project
versus project test on the development and test projects. In this situation,
you can use a baseline versus baseline test to compare the production
project to the test project.
2. Perform a single project test against the second project, saving the
performance results.
l Wait until the performance test is complete before attempting to view the
results of the test in Integrity Manager. Otherwise the increased load on
the Integrity Manager machine may cause the recorded times to be
increased for reasons not related to Intelligence Server performance.
l If the Use Cache setting is selected on the Select Execution Settings page
of the Integrity Manager Wizard, make sure that a valid cache exists for
testing material. Otherwise the first execution cycle of each report takes
longer than the subsequent cycles, because it must generate the cache for
the other cycles to use. One way to ensure that a cache exists for each
object is to run a single-project integrity test of each object before you run
the performance test.
This setting only affects reports, and does not apply to documents.
run only one report or document at a time, and provides the most accurate
benchmark results for that Intelligence Server.
l The Cycles setting on the Select Processing Options page of the Integrity
Manager Wizard indicates how many times each report or document is
executed. A high value for this setting can dramatically increase the
execution time of your test, particularly if you are running many reports or
documents, or several large reports and documents.
l Use 64-bit Integrity Manager when the comparison data is large or if you
run into memory issues. By default, the 64-bit Integrity Manager
executable, MIntMgr_64.exe, is located under C:\Program Files
(x86)\MicroStrategy\Integrity Manager.
l Run large integrity tests during off-peak hours, so that the load on
Intelligence Server from the integrity test does not interfere with normal
operation. You can execute integrity tests from the command line using a
scheduler, such as the Windows AT scheduler. For information about
executing integrity tests from the command line, see Executing a Test from
the Command Line, page 1584.
l If you are having trouble comparing prompted reports, you can save static
versions of those reports in a "regression test" folder in each project, and
use those static reports for integrity tests.
l In a comparative integrity test, you must have the same OS version and
the same font installed on your machine to use the Graph view to compare
two PDF reports. Font rendering on a PDF is version and OS specific, so
differences may result in formatting issues, which can affect comparison
results.
l Alternately, you can execute the test using multiple MicroStrategy users,
as described in Executing a Test Under Multiple MicroStrategy User
Accounts, page 1594. Make sure that the users that you are comparing
have matching security filters. For example, if User1 is assigned security
filter FilterA in project Project1, make sure you compare the
reports with a user who is also assigned security filter FilterA in
project Project2.
l When you are comparing graph reports and noting the differences between
the graphs, adjust the Granularity slider so that the differences are
grouped in a way that is useful. For more information about how Integrity
Manager evaluates and groups differences in graph and PDF reports, see
Grouping Differences in Graph and PDF Reports, page 1601.
Heap size should not exceed the available memory on the machine from
which Integrity Manager is launched.
l Open Integrity Manager from the command line with the -Xmx flag and the
corresponding memory size, such as -Xmx12G for 12 GB or -
Xmx10240m for 10,240 MB. For example:
MIntMgrW_64.exe -Xmx12G
If you select any Intelligent Cube reports, make sure that the
Intelligent Cube the reports are based on has been published before
you perform the integrity test. Integrity Manager can test the SQL of
Intelligent Cubes even if they have not been published, but cannot test
Intelligent Cube reports based on an unpublished Intelligent Cube.
7. Select what types of analysis to perform. For reports, you can analyze
any or all of the grid data, underlying SQL, graph data, Excel export, or
PDF output. For documents you can analyze the Excel export or PDF
output.
Only reports that have been saved in Graph or Grid/Graph view can be
analyzed as graphs.
You can also select to record the execution time of each report and/or
document, to analyze the performance of Intelligence Server.
9. Click Save Test. Navigate to the desired directory, enter a file name,
and click OK.
10. To execute the test immediately, regardless of whether you saved the
settings, click Run. The Integrity Manager Wizard closes and Integrity
Manager begins to execute the selected reports and documents. As the
reports execute, the results of each report or document appear in the
Results Summary area of the Integrity Manager interface.
For security reasons, the passwords for the project logins (provided on the
Enter Base Project Information page and Enter Target Project Information
page) are not saved to the test file. You must re-enter these passwords
when you load the test.
1. Step through the Integrity Manager Wizard and answer its questions.
For detailed instructions, see Creating an Integrity Test, page 1580.
2. When you reach the Summary page of the Integrity Manager Wizard,
click Save Test.
3. Navigate to the desired folder and enter a file name to save the test as.
By default this file will have an extension of .mtc.
4. Click OK.
You can execute the test immediately by clicking Run. The Integrity
Manager Wizard closes and Integrity Manager begins to execute the
selected reports and documents. As they execute, their results appear in the
Results Summary area of the Integrity Manager interface.
2. Navigate to the file containing your test information and open it.
3. Step through the wizard and confirm the settings for the test.
5. When you reach the Summary page, review the information presented
there. When you are satisfied that the test settings shown are correct,
click Run. The Integrity Manager wizard closes and Integrity Manager
begins to execute the selected reports and documents. As they
execute, their results appear in the Results Summary area of the
Integrity Manager interface.
You can also re-run reports in a test that has just finished execution. For
example, a number of reports in an integrity test may fail because of an error
in a metric. You can correct the metric and then re-run those reports to
confirm that the reports now match. To re-run the reports, select them, and
then from the Run menu, select Refresh selected items.
After creating and saving a test (for instructions, see Saving and Loading a
Test, page 1582), call the Integrity Manager executable MIntMgr.exe with
the parameters listed in the table below. All parameters are optional except
the -f parameter, which specifies the integrity test file path and name.
The following parameters modify the execution of the test. They do not modify the .mtc
test file.
l -o OutputDirectory: The output directory. This directory must exist before the
test can be executed.
l -logfile LogfileName: The log file path and name.
l -tserver TargetServer: The target server name.
l -bproject BaseProject: The base project.
l -tproject TargetProject: The target project.
l -blogin BaseLogin: The login for the base project. For multiple logins, enclose
all logins in double quotes (") and separate each login with a comma (,), for
example -blogin "BaseLogin1, .., BaseLoginN".
l -tlogin TargetLogin: The login for the target project. For multiple logins,
enclose all logins in double quotes (") and separate each login with a comma (,),
for example -tlogin "TargetLogin1, .., TargetLoginN".
l -folderid FolderGUID: The GUID of the test folder. If this option is used, the
reports and documents specified in the integrity test file are ignored.
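For example, the following invocation (the file paths and project name are hypothetical) runs a saved test and overrides the output directory, log file, and target project:
MIntMgr.exe -f "C:\tests\DevVsProd.mtc" -o "C:\tests\output" -logfile "C:\tests\DevVsProd.log" -tproject "Production"
The -f parameter and the MIntMgr.exe executable are described above; the output directory must already exist before the test is executed.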
Password Syntax
l If multiple logins are used, a password must be specified for each login.
The entire list of passwords must be enclosed in double quotes (") and the
passwords must be separated by a comma (,).
l If multiple passwords are used and a user in the base project or target
project has an empty password, the position of that user's password in the
list of passwords is indicated by a space between commas.
For example, if the users for an integrity test are User1, User2, and User3,
and User2 has an empty password, the list of passwords is "password1,
,password3".
To view the error code, in the same command prompt window as the test
execution, type echo %ERRORLEVEL% and press Enter.
l 1: The test execution succeeded, but at least one report has a status
other than Matched.
l 4: The test execution failed. For more information about this error, see
the integrity test log for this test.
The test file is a plain-text XML file, and can be edited in a text editor, such
as Notepad. For an explanation of all the XML tags included in the test file,
see List of Tags in the Integrity Test File, page 1606.
3. For each Intelligence Server machine that you want to test against, add
a line to the file in the same format as the examples given in the file.
4. Save and close the hosts file. You can now execute integrity tests
against the Intelligence Servers specified in the file.
Integrity Manager can use any of the following methods to resolve prompts:
l Personal answer: Personal answers are default prompt answers that are
saved for individual MicroStrategy logins. Any prompts with personal
answers saved for the login using Integrity Manager can be resolved using
those personal answers.
l Default object answer: A prompted report can have two possible default
answers: a default answer saved with the prompt, and a default answer
saved with the report. These default answers can be used to resolve the
prompt. If both default answers exist, Integrity Manager uses the answer
saved with the report.
By default, Integrity Manager uses all of these options, in the order listed
above. You can disable some options or change the order of the options in
the Advanced Options dialog box in the Integrity Manager Wizard.
For example, you may want to never use your personal answers to answer
prompts, and use the user-defined answers instead of the default answers
for value prompts. You can configure the user-defined answers for value
prompts in the Select Prompt Settings page. Then, in the Advanced Options
dialog box, clear the Personal answer check box and move Integrity
Manager user-defined answer above Default object answer.
Optional Prompts
You control whether Integrity Manager answers optional prompts on the
Select Prompt Settings page of the Integrity Manager Wizard.
To change this default, in the Advanced Options dialog box, select the
Group personal prompt answers by their names option. When this option
is selected, Integrity Manager executes each report/document once for each
personal answer for each prompt in the report/document. If multiple prompts
in the report/document have personal answers with the same name, those
personal answers are used for each prompt in a single execution of the
report/document.
For personal prompt answers to be grouped, the answers must have the
exact same name. For example, if the base project contains a personal
prompt answer named AnswerA and the target project contains a personal
prompt answer named Answer_A, those prompt answers will not be
grouped together.
For example, consider a report with two prompts, Prompt1 and Prompt2. The
user executing the report has personal answers for each of these prompts.
The personal answers are named as follows:
Prompt     Answers
Prompt1    AnswerA, AnswerB
Prompt2    AnswerA, AnswerC, AnswerD
Integrity Manager executes this report four times, as shown in the table
below:

Execution    Prompt1                                  Prompt2
1            AnswerA                                  AnswerA
2            AnswerB                                  Next available prompt answer method
3            Next available prompt answer method      AnswerC
4            Next available prompt answer method      AnswerD
Since Prompt1 and Prompt2 both have a personal answer saved with the
name AnswerA, Integrity Manager groups those answers together in a single
execution. Only Prompt1 has an answer named AnswerB, so Integrity
Manager executes the report with AnswerB for Prompt1 and uses the next
available method for answering prompts to answer Prompt2. In the same
way, only Prompt2 has answers named AnswerC and AnswerD, so when
Integrity Manager executes the report using those answers for Prompt2 it
uses the next available prompt answer method for Prompt1.
Unanswered Prompts
If a prompt cannot be answered by Integrity Manager, the report execution
fails and the report's status changes to Not Supported. A detailed
description of the prompt that could not be answered can be found in the
Details tab of the Report Data area for that failed report. To view this
description, select the report in the Results summary area and then click the
Details tab.
l Level prompts that use the results of a search object to generate a list of
possible levels
1. Create an integrity test. Step through the Integrity Manager Wizard and
enter the information required on each page.
4. In the URL for Base connection and URL for Target Connection
fields, type the URL for the baseline and target projects' Web servers.
To test each URL, click the Test button. If it is correct, a browser
window opens at the main MicroStrategy Web page for that server.
5. Click OK.
2. To save the report with the correct prompt answers, click the report's
name in the dialog box.
3. Answer the prompts for the report and save it. Depending on your
choices in the Advanced Options dialog box, you may need to save the
report as a static, unprompted report.
For example, your MicroStrategy system may use security filters to restrict
access to data for different users. If you know the MicroStrategy login and
password for a user who has each security filter, you can run the integrity
test under each of these users to ensure that the security filters are working
as designed after an upgrade. You can also compare a set of reports from
the same project under two different users to ensure that the users are
seeing the same data.
On the Enable Multiple Logins page of the Integrity Manager Wizard, you
specify the authentication method, MicroStrategy login, and password for
each user.
When the test is executed, the reports are executed in the following order:
Note that the reports executed by Alice in the base project are compared
with the reports executed by Bob in the target project, and the reports
executed by Carol in the base project are compared with the reports
executed by Alice in the target project.
2. On the Welcome page, select the Enable Multiple Logins check box.
3. On the Enable Multiple Logins page, for each user, specify the
authentication mode, login, and password.
4. Make sure the users are in the order that you want the test to be
executed in. In addition, if you are creating a comparative integrity test,
make sure that the users whose results you want to compare are paired
up correctly in the tables.
1. For reports that use dynamic SQL, enclose the dynamic SQL in
identifying SQL comments. Enter the comments in the VLDB properties
Pre/Post statements.
5. In the Dynamic SQL Start field, type the text that matches the text you
entered in the VLDB properties to indicate the beginning of the dynamic
SQL. For this example, type /* BEGIN DYNAMIC SQL */
6. In the End field, type the text that matches the text you entered in the
VLDB properties to indicate the end of the dynamic SQL. For this
example, type /* END DYNAMIC SQL */
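With these markers in place, the session-specific portion of the report SQL appears between the comments, and Integrity Manager ignores differences inside the marked block when it compares SQL. A simplified, hypothetical excerpt might look like the following, where the temporary table name is generated differently on each execution:

/* BEGIN DYNAMIC SQL */
create table ZZTIS00H8MQ000 (REGION_ID INTEGER, TOT_SALES FLOAT)
/* END DYNAMIC SQL */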
In this case, you can use the SQL Replacement feature to replace TEST with
PREFIX in the base project, and PROD with PREFIX in the target project.
Now, when Integrity Manager compares the report SQL, it treats all
occurrences of TEST in the base and PROD in the target as PREFIX, so they
are not considered to be differences.
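For instance, with those two replacements defined, the following statements (the table and column names are illustrative only) are treated as identical during the comparison:

Base project SQL:    select REGION_ID, TOT_SALES from TEST_REGION_SALES
Target project SQL:  select REGION_ID, TOT_SALES from PROD_REGION_SALES
Compared as:         select REGION_ID, TOT_SALES from PREFIX_REGION_SALES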
The changes made by the SQL Replacement Table are not stored in the SQL
files for each report. Rather, Integrity Manager stores those changes in
memory when it executes the integrity test.
Access the SQL Replacement feature from the Advanced Options dialog
box, on the Select Processing Options page of the Integrity Manager wizard.
l Timed Out reports and documents did not finish executing in the time
specified in the Max Timeout field in the Select Execution Settings page.
These reports and documents have been canceled by Integrity Manager
and will not be executed again during this run of the test.
l Error indicates that an error has prevented this report or document from
executing correctly. To view the error, double-click the status. The report
details open in the Report Data area of Integrity Manager, below the
Results Summary area. The error message is listed in the Execution
Details section.
l Not Supported reports and documents contain one or more prompts for
which an answer could not be automatically generated. To see a
description of the errors, double-click the status. For details of how
Integrity Manager answers prompts, see Executing Prompted Reports with
Integrity Manager, page 1589.
l Matched indicates that the results from the two projects are identical for
the report or document. In a single-project integrity test, Matched
indicates that the reports and documents executed successfully.
In a comparative integrity test, both the base and the target report or
document are shown in the Report Data area. Any differences between the
base and target are highlighted in red, as follows:
l In the Data, SQL, or Excel view, the differences are printed in red. In Data
and Excel view, to highlight and bold the next or previous difference, click
the Next Difference or Previous Difference icon.
l In the Graph view, the current difference is circled in red. To circle the
next or previous difference, click the Next Difference or Previous
Difference icon. To change the way differences are grouped, use the
Granularity slider. For more information about differences in graph
reports, see Grouping Differences in Graph and PDF Reports, page 1601.
Viewing graphs in Overlap layout enables you to switch quickly between the
base and target graphs. This layout makes it easy to compare the
discrepancies between the two graphs.
l Users of Integrity Manager can view, add, and edit notes even if they do
not have the privileges to view, add, or edit notes in MicroStrategy Web or
Developer.
To make sure you are viewing the most recent version of the notes, click
Refresh. Integrity Manager contacts Intelligence Server and retrieves the
latest version of the notes attached to the report or document.
To add a note, enter the new note and click Submit. To edit the notes, click
Edit, make changes to the listed notes, and click Submit.
When Integrity Manager compares two graph or PDF reports, it saves the
graphs as .png or .pdf files. It then performs a pixel-by-pixel comparison of
the two images. If any pixels are different in the base and target graph, the
graph or PDF is considered Not Matched.
In the image below, the title for the graph has been changed between the
baseline and the target. In the base graph, the title is in normal font; in the
target, it is in italic font.
The white space between the words is the same in both the base and target
reports. When the granularity is set to a low level, this unchanged space
causes Integrity Manager to treat each word as a separate difference, as
seen below:
If the granularity is set to a higher level, the space between the words is no
longer sufficient to cause Integrity Manager to treat each word as a separate
difference. The differences in the title are all grouped together, as seen
below:
Integrity Manager also creates a separate folder within the output folder for
the report or document results from each project. These folders are named
after the Intelligence Server machines on which the projects are kept.
l For the baseline server, _0 is appended to the machine name to create the
name of the folder.
For example, the image below is taken from a machine that executes a
project-versus-project integrity test at nine AM on the first Monday of each
month. The baseline project is on a machine named ARCHIMEDES, and the
Each results folder contains a number of files containing the results of each
report that is tested. These files are named <ID>_<GUID>.<ext>, where
<ID> is the number indicating the order in which the report was executed,
<GUID> is the report object GUID, and <ext> is an extension based on the
type of file. The report results are saved in the following files:
l Grid data is saved in CSV format, in the file <ID>_<GUID>.csv, but only
if you select the Save CSV files check box in the Advanced Options dialog
box.
l Excel data is saved in XLS format, in the file <ID>_<GUID>.xls, but only
if you select the Save XLS files check box in the Advanced Options dialog
box.
l Only report results for formats requested in the Select Processing Options
page during test setup are generated.
l SQL, graph, and PDF data are always saved if they are generated. Grid
and Excel data are only saved if you choose to save those results during
test creation. Notes are always saved.
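For example, if the fifth report executed in a test has the hypothetical GUID 34D1AC5C42F0D6E8A3BF2C9D01234567, and CSV files were requested during test creation, its grid results would be saved as 5_34D1AC5C42F0D6E8A3BF2C9D01234567.csv in each project's results folder.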
Each results folder also contains a file called baseline.xml that provides
a summary of the tested reports. This file is used to provide a baseline
summary for baseline-versus-project and baseline-versus-baseline integrity
tests.
If needed, you can edit the integrity test file with any XML editor or text
editor, such as Notepad. The table below lists all the XML tags in an integrity
test file, with an explanation of each tag.
Except in a single project integrity test, this section is repeated for both the base
connection and the target connection.
Authentication_Mode: 1: Standard; 2: Windows; 16: LDAP; 32: Database.
Objects to be tested
This section must be repeated for each object included in the integrity test.
Type: Object type. 3: Report; 8: Folder; 18: Shortcut; 55: Document. If Type is set to 3: 778: Transaction; 14081: Document.
Prompt settings
textAnswerIsNull: true: Custom answers are not provided for text prompts.
dateAnswerIsNull: true: Custom answers are not provided for date prompts.
PromptAnswerSource: 1: Personal answer; -1: Personal answer (disabled).
Execution Settings
Output_Directory: Full path to the location where the integrity test results are saved.
Processing options
false: Disabled.
false: Disabled.
false: Disabled.
false: Disabled.
true: Enabled.
false: Disabled.
false: Disabled.
false: Disabled.
false: Disabled.
false: Disabled.
2: Target only.
false: Disabled.
For all Excel processing options, if the option is left blank, the setting for that option is
imported from the user's MicroStrategy Web export preferences, as per the Use
Default option in the Integrity Manager Wizard.
false: Disabled.
excelVersion: 1: Excel 2000; 2: Excel XP/2003.
For all PDF processing options, if the option is left blank or not listed in the MTC file,
that option is processed using the default setting in Intelligence Server's PDF
generation options.
0: Use ScalePercentage.
Orientation: Page orientation. 0: Portrait; 1: Landscape.
1: Report details.
0: Letter (8.5"x11")
1: Legal (8.5"x14")
2: Executive (7.25"x10.5")
PaperType
3: Folio (8.5"x13")
4: A3 (11.69"x16.54")
5: A4 (8.27"x11.69")
6: A5 (5.83"x8.27")
1: Embed fonts.
VLDB properties can provide support for unique configurations and optimize
performance in special reporting and analysis scenarios. You can use the
VLDB Properties Editor to alter the syntax or behavior of a SQL statement
and take advantage of unique, database-specific optimizations. You can
also alter how the Analytical Engine processes data in certain situations,
such as subtotals with consolidations and sorting null values.
Each VLDB property has two or more VLDB settings which are the different
options available for a VLDB property. For example, the Metric Join Type
VLDB property has two VLDB settings, Inner Join and Outer Join.
l Flexibility: VLDB properties are available at multiple levels so that the SQL
generated for one report, for example, can be manipulated separately from
the SQL generated for another, similar report. For a diagram, see Order of
Precedence, page 1624.
Modifying any VLDB property should be performed with caution only after
understanding the effects of the VLDB settings you want to apply. A given
VLDB setting can support or optimize one system setup, but the same
setting can cause performance issues or errors for other systems. Use this
manual to learn about the VLDB properties before modifying any default
settings.
VLDB properties also help you configure and optimize your system. You can
use MicroStrategy for different types of data analysis on a variety of data
warehouse implementations. VLDB properties offer different configurations
to support or optimize your reporting and analysis requirements in the best
way.
For example, you may find that enabling the Set Operator Optimization
VLDB property provides a significant performance gain by utilizing set
operators such as EXCEPT and INTERSECT in your SQL queries. On the
other hand, the property can also be disabled, since not all DBMS types
support these operators. VLDB properties offer you a
choice in configuring your system.
Order of Precedence
VLDB properties can be set at multiple levels, providing flexibility in the way
you can configure your reporting environment. For example, you can choose
to apply a setting to an entire database instance or only to a single report
associated with that database instance.
The following diagram shows how VLDB properties that are set for one level
take precedence over those set for another.
The arrows depict the override authority of the levels, with the report level
having the greatest authority. For example, if a VLDB property is set one
way for a report and the same property is set differently for the database
instance, the report setting takes precedence.
Properties set at the report level override properties at every other level.
Properties set at the template level override those set at the metric level, the
database instance level, and the DBMS level, and so on.
When you access the VLDB Properties Editor for a database instance, you
see the most complete set of the VLDB properties. However, not all
properties are available at the database instance level. The rest of the
access methods have a limited number of properties available depending on
which properties are supported for the selected object/level.
The table below describes every way to access the VLDB Properties Editor:
To set VLDB properties at this level    Open the VLDB Properties Editor this way

Attribute    In the Attribute Editor, on the Tools menu, select VLDB Properties.

Report (or Intelligent Cube)    In the Report Editor or Report Viewer, on the Data menu, select VLDB Properties. This is also the location in which you can access the VLDB Properties Editor for Intelligent Cubes.

Template    In the Template Editor, on the Data menu, select VLDB Properties.
VLDB properties exist at the filter level and the function level, but they are
not accessible through the VLDB Properties Editor.
All VLDB properties at the DBMS level are used for initialization and
debugging only. You cannot modify a VLDB property at the DBMS level.
l VLDB Settings list: Shows the list of folders into which the VLDB
properties are grouped. Expand a folder to see the individual properties.
The settings listed depend on the level at which the VLDB Properties
Editor was accessed (see the table above). For example, if you access the
VLDB Properties Editor from the project level, you only see Analytical
Engine properties.
l Options and Parameters box: Where you set or change the parameters
that affect the SQL syntax.
l SQL preview box: (Only appears for VLDB properties that directly impact
the SQL statement.) Shows a sample SQL statement and how it changes
when you edit a property.
When you change a property from its default, a check mark appears on the
folder in which the property is located and on the property itself.
l Display the physical setting names alongside the names that appear in the
interface. The physical setting names can be useful when you are working
with MicroStrategy Technical Support to troubleshoot the effect of a VLDB
property.
l Display descriptions of the values for each setting. This displays the full
description of the option chosen for a VLDB property.
l Hide all settings that are currently set to default values. This can be useful
if you want to see only those properties and their settings which have been
changed from the default.
The steps below show you how to create a VLDB settings report. A common
scenario for creating a VLDB settings report is to create a list of default
VLDB settings for the database or other data source you are connecting to,
which is described in Default VLDB Settings for Specific Data Sources, page
1925.
1. Open the VLDB Properties Editor to display the VLDB properties for the
level at which you want to work. (For information on accessing the
VLDB Properties Editor, see Opening the VLDB Properties Editor, page
1625.)
4. You can choose to have the report display or hide the information
described above, by selecting the appropriate check boxes.
5. You can copy the content in the report using the Ctrl+C keys on your
keyboard. Then paste the information into a text editor or word
processing program (such as Microsoft Word) using the Ctrl+V keys.
Modifying any VLDB property should be performed with caution only after
understanding the effects of the VLDB settings that you want to apply. A
given VLDB setting can support or optimize one system setup, but the same
setting can cause performance issues or errors for other systems. Use this
manual to learn about the VLDB properties before modifying any default
settings.
1. Open the VLDB Properties Editor to display the VLDB properties for the
level at which you want to work. (For information on object levels, see
Order of Precedence, page 1624.)
2. Modify the VLDB property you want to change. For use cases,
examples, sample code, and other information on every VLDB property,
see Details for All VLDB Properties, page 1636.
3. If necessary, you can ensure that a property is set to the default. At the
bottom of the Options and Parameters area for that property (on the
right), select the Use default inherited value check box. Next to this
check box name, information appears about what level the setting is
inheriting its default from.
5. You must also save in the object or editor window through which you
accessed the VLDB Properties Editor. For example, if you accessed the
VLDB properties by opening the Metric Editor and then opening the
VLDB Properties Editor, after you click Save and Close in the VLDB
Properties Editor, you must also click Save and Close in the Metric
Editor to save your changes to VLDB properties.
1. Open the VLDB Properties Editor to display the VLDB properties for the
level at which you want to work. (For information on object levels, see
Order of Precedence, page 1624.)
3. Modify the VLDB property you want to change. For use cases,
examples, sample code, and other information on every VLDB property,
see Details for All VLDB Properties, page 1636.
4. If necessary, you can ensure that a property is set to the default. At the
bottom of the Options and Parameters area for that property (on the
right), select the Use default inherited value check box. Next to this
check box name, information appears about what level the setting is
inheriting its default from.
6. You must also save in the object or editor window through which you
accessed the VLDB Properties Editor. For example, if you accessed the
VLDB properties by opening the Metric Editor and then opening the
VLDB Properties Editor, after you click Save and Close in the VLDB
Properties Editor, you must also click Save and Close in the Metric
Editor to save your changes to VLDB properties.
If you perform this procedure, any changes you may have made to any or all
VLDB properties displayed in the chosen view of the VLDB Properties Editor
will be lost. For details on which VLDB properties are displayed depending
on how you access the VLDB Properties Editor, see Details for All VLDB
Properties, page 1636.
1. Use either or both of the following methods to see your system's VLDB
properties that are not set to default. You should know which VLDB
properties you will be affecting when you return properties to their
default settings:
l Generate a report listing VLDB properties that are not set to the
default settings. For steps, see Creating a VLDB Settings Report,
page 1627, and select the check box named Do not show settings
with Default values.
2. Open the VLDB Properties Editor to display the VLDB properties that
you want to set to their original defaults. (For information on object
levels, see Order of Precedence, page 1624.)
3. In the VLDB Properties Editor, you can identify any VLDB properties
that have had their default settings changed, because they are
identified with a check mark. The folder in which the property is stored
has a check mark on it (as shown on the Joins folder in the example
image below), and the property name itself has a check mark on it (as
shown on the gear icon in front of the Cartesian Join Warning property
name in the second image below).
4. From the Tools menu, select Set all values to default. See the
warning above if you are unsure about whether to set properties to the
default.
5. In the confirmation window that appears, click Yes. All VLDB properties
that are displayed in the VLDB Properties Editor are returned to their
default settings.
6. Click Save and Close to save your changes and close the VLDB
Properties Editor.
7. You must also save in the object or editor window through which you
accessed the VLDB Properties Editor. For example, if you accessed the
VLDB properties by opening the Metric Editor and then opening the
VLDB Properties Editor, after you click Save and Close in the VLDB
Properties Editor, you must also click Save and Close in the Metric
Editor to save your changes to VLDB properties.
l It loads updated properties for existing database types that are still
supported.
5. Click Load.
6. Use the arrows to add any required database types by moving them
from the Available database types list to the Existing database
types list.
7. Click OK twice.
For descriptions and examples of all VLDB properties and to see what
properties can be modified, see Details for All VLDB Properties, page 1636.
When you enable this setting, be aware of the following requirements and
options:
This VLDB property must be set at the project level for the calculation to be
performed correctly.
The setting takes effect when the project is initialized, so after this setting is
changed you must reload the project or restart Intelligence Server.
After you enable this setting, you must enable subtotals at either the
consolidation level or the report level. If you enable subtotals at the
consolidation level, subtotals are available for all reports in which the
consolidation is used. (Consolidation Editor > Elements menu > Subtotals >
Enabled.) If you enable subtotals at the report level, subtotals for
consolidations can be enabled on a report-by-report basis. (Report Editor >
Report Data Options > Subtotals > Yes. If Default is selected, the Analytical
Engine reverts to the Enabled/Disabled property as set on the consolidation
object itself.)
Change this property from the default only when all Developer clients have
upgraded to MicroStrategy version 7.5.x.
Project only
With the first setting selected, "Evaluate subtotals over consolidation elements
and their corresponding attribute elements," the report appears as follows:
The Total value is calculated for more elements than are displayed in the Super
Regions column. The Analytical Engine is including the following elements in
the calculation: East + (Northeast + Mid-Atlantic + Southeast) + Central +
(Central + South) + West + (Northwest + Southwest).
The Total value is now calculated for only the Super Regions consolidation
elements. The Analytical Engine is including only the following elements in the
calculation: East + Central + West.
Apply Filter Options for queries against in-memory datasets determines how
many times the view filter is applied, which can affect the final view of data.
You create a Yearly Cost derived metric that uses the following definition:
Sum(Cost){!Year%}
The level definition of {!Year%} defines the derived metric to ignore filtering
related to Year and to perform no grouping related to Year (for explanation and
examples of defining the level for metrics, see the Advanced Reporting Help).
This means that this derived metric displays the total cost for all years, as
shown in the report below:
You can also further filter this report using a view filter. For example, a view
filter is applied to this report, which restricts the results to only 2014, as shown
below:
By default, only Cost for 2014 is displayed, but Yearly Cost remains the same
since it has been defined to ignore filtering and grouping related to Year. This
is supported by the default option Apply view filter to passes touching fact
tables and last join pass of the Apply Filter Options for queries against in-
memory datasets VLDB property.
If analysts of this report are meant to be more aware of the cost data that goes
into the total of Yearly Cost, you can modify the Apply Filter Options for queries
against in-memory datasets VLDB property to use the option Apply view filter
only to passes touching fact tables. This displays the other elements of Year,
as shown in the report below:
You have the following options for the Apply Filter Options for queries
against in-memory datasets VLDB property:
l Apply view filter only to passes touching fact tables: This option
applies the view filter to only SQL passes that touch fact tables, but not to
the last pass that combines the data. As shown in the example above, this
can include additional information on the final display by removing the
view filter from the final display of the report.
l Apply view filter to passes touching fact tables and last join pass
(default): This option applies the view filter to SQL passes that touch fact
tables as well as the last pass that combines the data. As shown in the
example above, this applies the view filter to the final display of the report
to ensure that the data meets the restrictions defined by the view filter.
Multiple filter qualifications that are based on attributes are used to define
a custom group element. For example, you can include one filter
qualification that filters data for only the year 2011, and another filter
qualification that filters data for the Northeast region. This would include
both the attributes Year and Region for the custom group element. Steps
to create filter qualifications for custom group elements are provided in the
Advanced Reporting Help.
A joint element list is used to define the custom group element. A joint
element list is a filter that allows you to join attribute elements and then
filter on that attribute result set. In other words, you can select specific
element combinations, such as quarter and category. Steps to create a
joint element list are provided in the Advanced Reporting Help.
l The individual attribute elements must be displayed for each custom group
element. For steps to display the individual attribute elements for a custom
group element, see the Advanced Reporting Help.
For custom groups that meet the criteria listed above, the Custom Group
Display for Joint Elements VLDB property provides the following formatting
options:
The attribute elements for both Region and Category are displayed for
each custom group element.
l Display element names from only the first attribute in the joint
element: Displays only one attribute element for the attributes that are
included in the filter qualifications for the custom group element. An
attribute element from the attribute that is first in terms of alphabetical
order is displayed for the custom group. For example, the attributes
Region and Category are used in separate filter qualifications, which are
then used to create a custom group element. When this custom group is
included in a report, the Category attribute element is displayed for the
custom group elements, as shown in the report below.
Only the attribute elements for the Category attribute are displayed. The
attribute elements for Region are not displayed because Category is first
in terms of alphabetical order.
Project only
Project only
Evaluation Ordering
Evaluation Ordering is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.
See the Advanced Reporting Help for examples of how you can modify the
evaluation order of objects in a project.
l Trim trailing spaces: Attribute elements that include trailing spaces are
not returned as separate attribute elements when filtering on the attribute.
Instead, any trailing spaces are ignored. For example, an attribute has two
attribute elements, one with the description information "South" and the
other with the description information "South " which has an extra trailing
space at the end. By selecting this option, only a single South attribute
element is returned when filtering on the attribute. Since trailing spaces
are commonly viewed as an error in the data, it is recommended that you
use this default Trim trailing spaces option to ignore any trailing spaces.
Project only
For example, a report includes the attributes Year, Month, Category, and
Subcategory. The Year and Month attributes are from the same hierarchy
and Month is the lowest-level attribute from that hierarchy on the report.
Similarly, the Category and Subcategory attributes are from the same
hierarchy and Subcategory is the lowest-level attribute from that hierarchy
on the report. When selecting this option for the Metric Level
Determination VLDB property, the level of the report is defined as Month
and Subcategory. It can be defined in this way because these are the
lowest-level attributes from the hierarchies that are present on the report.
This level can then be used with metrics to determine the level at which
their data must be reported. If the physical schema of your project
matches the expected logical schema, correct metric data is displayed and
the resources required to determine the report level are optimized.
Consider the example used to describe the previous option. If the physical
schema of your project matches the expected logical schema, then
including only the lowest-level attributes displays correct metric data.
However, differences between your physical schema and expected logical
schema can cause unexpected data to be displayed if only the lowest level
attributes are used to define the level of the report.
The default option is for aggregation calculations to ignore nulls and for
scalar calculations to treat null values as zero. Any projects that existed
prior to upgrading metadata to MicroStrategy ONE retain their original
VLDB property settings. See the Advanced Reporting Help for more
information on this setting.
Changes made to this VLDB setting can cause differences to appear in your
data output. Metrics using count or average, metrics with dynamic
aggregation set to count or average, as well as thresholds based on such
metrics could be impacted by altered calculation behavior.
Regardless of the property setting, a text field that contains a dataset object
(such as an attribute or a metric) will display the object name instead of
values. For example, a text field displays {Region} instead of North, South,
and so on.
For an example that uses multiple datasets in a single Grid/Graph, see the
Document Creation Help.
Example
Year    Quarter    Quarterly Dollar Sales    Yearly Dollar Sales
If Subtotal Dimensionality Aware is Set to FALSE
The quarterly subtotal is calculated as 600, that is, a total of the Quarterly
Dollar Sales values. The yearly subtotal is calculated as 2400, the total of
the Yearly Dollar Sales values. This is how MicroStrategy 7.1 calculates the
subtotal.
If Subtotal Dimensionality Aware is Set to TRUE
The quarterly subtotal is still 600. Intelligence Server is aware of the level of
the Yearly Dollar Sales metric, so rather than adding the column values, it
correctly calculates the Yearly Dollar Sales total as 600.
Aggregate Table Validation: Defines whether dynamic sourcing is enabled or disabled for reports that use aggregate tables. Possible values: Aggregate tables contain the same data as corresponding detail tables and the aggregation function is SUM (default); Aggregate tables contain either less data or more data than their corresponding detail tables and/or the aggregation function is not SUM.

Attribute Validation: Defines whether dynamic sourcing is enabled or disabled for attributes. Possible values: Attribute columns in fact tables and lookup tables do not contain NULLs and all attribute elements in fact tables are also present in lookup tables (default); an alternative value that disables dynamic sourcing for attributes.

Metric Validation: Defines whether dynamic sourcing is enabled or disabled for metrics. Possible values: Enable dynamic sourcing for metric (default); Disable dynamic sourcing for metric.

String Comparison Behavior: Defines whether dynamic sourcing is enabled or disabled for attributes that are used in filter qualifications. Possible values: Use case insensitive string comparison with dynamic sourcing (default); Do not allow any string comparison with dynamic sourcing.
Reports that use aggregate tables are available for dynamic sourcing by
default, but there are some data modeling conventions that should be
considered when using dynamic sourcing.
You can enable and disable dynamic sourcing for aggregate tables by
modifying the Aggregate Table Validation VLDB property. This VLDB
property has the following options:
l Aggregate tables contain either less data or more data than their
corresponding detail tables and/or the aggregation function is not
SUM: This option disables dynamic sourcing for aggregate tables. This
setting should be used if your aggregate tables are not modeled to support
dynamic sourcing. The use of an aggregation function other than Sum or
the mismatch of data in your aggregate tables with the rest of your data
warehouse can cause incorrect data to be returned to reports from
Intelligent Cubes through dynamic sourcing.
You can disable dynamic sourcing individually for reports that use aggregate
tables or you can disable dynamic sourcing for all reports that use aggregate
tables within a project. While the definition of the VLDB property at the
project level defines a default for all reports in the project, any modifications
at the report level take precedence over the project level definition. For
information on defining a project-wide dynamic sourcing strategy, see the In-
memory Analytics Help.
Attribute Validation
Attribute Validation is an advanced VLDB property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.
Attributes are available for dynamic sourcing by default, but there are some
data modeling conventions that should be considered when using dynamic
sourcing.
Two scenarios can cause attributes that use inner joins to return incorrect
data when dynamic sourcing is used:
l All attribute elements in fact tables are not also present in lookup tables.
You can enable and disable dynamic sourcing for attributes by modifying the
Attribute Validation VLDB property. This VLDB property has the following
options:
You can disable dynamic sourcing for attributes individually or you can
disable dynamic sourcing for all attributes within a project. While the
definition of the VLDB property at the project level defines a default for all
attributes in the project, any modifications at the attribute level take
precedence over the project level definition.
The Intelligent Cube parse log helps determine which reports use dynamic
sourcing to connect to an Intelligent Cube, as well as why some reports
cannot use dynamic sourcing to connect to an Intelligent Cube. By default,
the Intelligent Cube parse log can only be viewed using the MicroStrategy
Diagnostics and Performance Logging tool. You can also allow this log to be
viewed in the SQL View of an Intelligent Cube.
l Disable Cube Parse Log in SQL View (default): This option allows the
Intelligent Cube parse log to only be viewed using the MicroStrategy
Diagnostics and Performance Logging tool.
l Enable Cube Parse Log in SQL View: Select this option to allow the
Intelligent Cube parse log to be viewed in the SQL View of an Intelligent
Cube. This information can help determine which reports use dynamic
sourcing to connect to the Intelligent Cube.
You can enable dynamic sourcing for reports by modifying the Enable
Dynamic Sourcing for Report VLDB property. This VLDB property has the
following options:
You can enable dynamic sourcing for reports individually or you can enable
dynamic sourcing for all reports within a project. While the definition of the
VLDB property at the project level defines a default for all reports in the
project, any modifications at the report level take precedence over the
project level definition. For information on defining a project-wide dynamic
sourcing strategy, see the In-memory Analytics Help.
The extended mismatch log helps determine why a metric prevents the use
of dynamic sourcing. This information is listed for every metric that
prevents the use of dynamic sourcing. By default, the extended mismatch
log can only be viewed using
the MicroStrategy Diagnostics and Performance Logging tool. You can also
allow this log to be viewed in the SQL View of a report.
The extended mismatch log can increase in size quickly and thus is best
suited for troubleshooting purposes.
l Enable Extended Mismatch Log in SQL View: Select this option to allow
the extended mismatch log to be viewed in the SQL View of a report. This
information can help determine why a report that can use dynamic
sourcing cannot connect to a specific Intelligent Cube.
The mismatch log helps determine why a report that can use dynamic
sourcing cannot connect to a specific Intelligent Cube. By default, the
mismatch log can only be viewed using the MicroStrategy Diagnostics and
Performance Logging tool. You can also allow this log to be viewed in the
SQL View of a report.
l Disable Mismatch Log in SQL View (default): This option allows the
mismatch log to only be viewed using the MicroStrategy Diagnostics and
Performance Logging tool.
l Enable Mismatch Log in SQL View: Select this option to allow the
mismatch log to be viewed in the SQL View of a report. This information
can help determine why a report that can use dynamic sourcing cannot
connect to a specific Intelligent Cube.
The report parse log helps determine whether the report can use dynamic
sourcing to connect to an Intelligent Cube. By default, the report parse log
can only be viewed using the MicroStrategy Diagnostics and Performance
Logging tool. You can also allow this log to be viewed in the SQL View of a
report.
l Disable Report Parse Log in SQL View (default): This option allows the
report parse log to only be viewed using the MicroStrategy Diagnostics
and Performance Logging tool.
l Enable Report Parse Log in SQL View: Select this option to allow the
report parse log to be viewed in the SQL View of a report. This information
can help determine whether the report can use dynamic sourcing to
connect to an Intelligent Cube.
Metric Validation
Metric Validation is an advanced VLDB property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.
Metrics are available for dynamic sourcing by default, but there are some
data modeling conventions that should be considered when using dynamic
sourcing.
If the fact table that stores data for metrics includes NULL values for metric
data, this can cause metrics that use inner joins to return incorrect data
when dynamic sourcing is used. This scenario is uncommon.
You can enable and disable dynamic sourcing for metrics by modifying the
Metric Validation VLDB property. This VLDB property has the following
options:
You can disable dynamic sourcing for metrics individually or you can disable
dynamic sourcing for all metrics within a project. While the definition of the
VLDB property at the project level defines a default for all metrics in the
project, any modifications at the metric level take precedence over the
project level definition. For information on defining a project-wide dynamic
sourcing strategy, see the In-memory Analytics Help.
To ensure that dynamic sourcing can return the correct results for attributes,
you must also verify that filtering on attributes achieves the same results
when executed against your database versus an Intelligent Cube.
The results returned from a filter on attributes can potentially return different
results when executing against the database versus using dynamic sourcing
to execute against an Intelligent Cube. This can occur if your database is
case-sensitive and you create filter qualifications that qualify on the text
data of attribute forms.
Consider a filter qualification that filters on customers that have a last name
beginning with the letter h. If your database is case-sensitive and uses
uppercase letters for the first letter in a name, a filter qualification using a
lowercase h is likely to return no data. However, this same filter qualification
on the same data stored in an Intelligent Cube returns all customers that
have a last name beginning with the letter h.
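As a simplified sketch, the qualification might translate to SQL such as the following (the table and column names are hypothetical):

select a11.CUSTOMER_ID
from LU_CUSTOMER a11
where a11.CUST_LAST_NAME like 'h%'

Against a case-sensitive database that stores last names with an initial capital letter, this statement returns no rows, while the same qualification evaluated against the Intelligent Cube returns every customer whose last name begins with the letter h, regardless of case.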
You can configure this dynamic sourcing behavior for attributes by modifying
the String Comparison Behavior VLDB property. This VLDB property has the
following options:
This is a good option if your database does not enforce case sensitivity. In
this scenario, dynamic sourcing returns the same results that would be
returned by the filter qualification if the report was executed against the
database.
You can modify this VLDB property for attributes individually or you can
modify it for all attributes within a project. While the definition of the VLDB
property at the project level defines a default for all attributes in the project,
any modifications at the attribute level take precedence over the project
level definition. For information on defining a project-wide dynamic sourcing
strategy, see the In-memory Analytics Help.
The GUID of attributes in profit and loss hierarchy (separated by ':') that has
dummy rows to be removed VLDB property lets you identify attributes that
include empty elements, which can then be ignored when exporting to
Microsoft Excel or to a PDF file. This is useful when creating financial line
item attributes as part of supporting a financial reporting solution in
MicroStrategy. For a detailed explanation of how to support financial
reporting in MicroStrategy, along with using this VLDB property to identify
attributes that include empty elements, refer to the Project Design Help.
To identify attributes that include empty elements, type the ID value for each
attribute in the text field for this VLDB property. To determine the ID value
for an attribute object, navigate to an attribute in Developer, right-click the
attribute, and then select Properties. Details about the attribute, including
the ID value, are displayed.
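For example, to identify two such attributes, you would enter their ID values separated by a colon, similar to the following (these GUIDs are placeholders only):

1A2B3C4D5E6F40789ABCDEF012345678:2B3C4D5E6F70418A9BCDEF0123456789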
Project only
The Ignore Empty Result for Freeform SQL VLDB property provides the
flexibility to display or hide warnings when a Freeform SQL statement
returns an empty result.
l Do not turn off warnings for Freeform SQL statements with empty
results, such as updates (default): This option allows warnings to be
displayed when a Freeform SQL statement causes a Freeform SQL report
to return an empty result. This is a good option if you use Freeform SQL to
return and display data with Freeform SQL reports.
l Turn off warnings for Freeform SQL statements with empty results,
such as updates: Select this option to hide all warnings when a Freeform
SQL statement causes a Freeform SQL report to return an empty result.
This is a good option if you commonly use Freeform SQL to execute
various SQL statements that are not expected to return any report results.
This prevents users from seeing a warning every time a SQL statement is
executed using Freeform SQL.
However, be aware that if you also use Freeform SQL to return and display
data with Freeform SQL reports, no warnings are displayed if the report
returns a single empty result.
l Turn off warnings for Freeform SQL statements that return multiple
result sets with an empty first result set and return second result
set, such as stored procedures: Select this option to hide all warnings
when a Freeform SQL report returns an initial empty result, followed by
additional results that include information. Stored procedures can
sometimes have this type of behavior as they can include statements that
do not return any results (such as update statements or create table
statements), followed by statements to return information from the
updated tables. This prevents users from seeing a warning when these
types of stored procedures are executed using Freeform SQL.
If you select this option and a Freeform SQL report returns only a single
empty result, then a warning is still displayed.
The XQuery Success Code VLDB property lets you validate Transaction
Services reports that use XQuery. MicroStrategy Transaction Services and
XQuery allow you to access and update information available in third-party
web services data sources. The steps to create a Transaction Services
report using XQuery are provided in the Advanced Reporting Help.
When Transaction Services and XQuery are used to update data for third-
party web services, sending the data to be updated is considered as a
successful transaction. By default, any errors that occur for the third-party
web service during a transaction are not returned to MicroStrategy.
To check for errors, you can include logic in your XQuery syntax to
determine if the transaction successfully updated the data within the third-
party web service. Just after the XQuery table declaration, you can include
the following syntax:
<ErrorCode>{Error_Code}</ErrorCode>
<ErrorMessage>{Error_Message}</ErrorMessage>
If Error_Code returns any value other than the XQuery Success Code, the
content for the Error_Message is returned. This lets you validate each
transaction that is sent to the third-party web service.
Maximum SQL/MDX Size: Maximum size of SQL string accepted by ODBC driver (bytes). Possible values: User-defined. Default: 65536.
SQL Time-Out: Governing (governing setting)
Autocommit
The Autocommit VLDB property determines whether a commit statement is
automatically issued after each SQL statement for a database connection.
You have the following options:
Multiple SQL statements are required for various reporting and analysis
features in MicroStrategy. When multiple SQL statements are used, each
can be viewed as a separate transaction. If your database is being
updated by a separate transaction, ETL process, or other update, this
can cause data inconsistency with each SQL statement, since each SQL
statement is returned as a separate transaction. Disabling automatic
commit statements includes all SQL statements as a single transaction,
which can be used in conjunction with other database techniques to
ensure data consistency when reporting and analyzing a database that is
being updated. For example, if reporting on an Oracle database you can
use this in conjunction with defining the isolation level of the SQL
statements.
Be aware that if you disable automatic commit statements for each SQL
statement, these transaction control commands must be included for the
report. If you are using Freeform SQL or creating your own SQL
statement for use in MicroStrategy, these can be included directly in
those SQL statements. For reports that use SQL that is automatically
generated by MicroStrategy, you can use the Pre/Post Statement VLDB
properties (see Customizing SQL Statements: Pre/Post Statements, page
1768) to provide the required transaction control commands.
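For example, assuming an Oracle warehouse with automatic commit statements disabled, transaction control could be supplied through the Pre/Post Statement VLDB properties along these lines (the statements shown are illustrative; consult your database documentation for the appropriate commands and isolation level):

Report Pre Statement:  SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
Report Post Statement: COMMIT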
The table below explains the possible values and their behavior:
Value Behavior
Report only
The table below explains the possible values and their behavior:
Value Behavior
Number     The maximum SQL pass size (in bytes) is limited to the specified number.
Default    By selecting the Use default inherited value check box, the value is set to the default for the database type used for the related database instance. The default size varies depending on the database type.
Increasing the maximum to a large value can cause the report to fail in the
ODBC driver. This is dependent on the database type you are using.
If the report result set exceeds the limit specified in the Result Set Row
Limit, the report execution is terminated.
This property overrides the Number of report result rows setting in the
Project Configuration Editor: Governing Rules category.
When the report contains a custom group, this property is applied to each
element in the group. Therefore, the final result set displayed could be
larger than the predefined setting. For example, if you set the Result Set
Row Limit to 1,000, it means you want only 1,000 rows to be returned. Now
apply this setting to each element in the custom group. If the group has
three elements and each uses the maximum specified in the setting (1,000),
the final report returns 3,000 rows.
The table below explains the possible values and their behavior:
Value Behavior
Report only
The table below explains the possible values and their behavior:
Value Behavior
0          This governing setting does not impose a time limit on SQL pass execution.
Number     The maximum amount of time (in seconds) a SQL pass can execute is limited to the specified number.
IN INDEXSPACE
CLUSTERED
Intermediate Table Index: Determines whether and when to create an index for the intermediate table. Possible values: Don't create an index; Create partitioning key (typically applicable to MPP systems); Create partitioning key and secondary index on intermediate table; Create only secondary index on intermediate table. Default: Don't create an index.

Secondary Index Type: Defines what type of index is created for temporary table column indexing. Possible values: Create Composite Index for Temporary Table Column Indexing; Create Individual Indexes for Temporary Table Column Indexing. Default: Create Composite Index for Temporary Table Column Indexing.
The Allow Index on Metric property determines whether or not to use fact or
metric columns in index creation. You can see better performance in
different environments, especially in Teradata, when you add the fact or
metric column in the index. Usually, the indexes are created on attribute
columns; but with this setting, the fact or metric columns are added as well.
All fact or metric columns are added.
Example
This example is the same as the example above except that the last line of
code should be replaced with the following:
Index Prefix
This property allows you to define the prefix to add to the beginning of the
CREATE INDEX statement when automatically creating indexes for
intermediate SQL passes.
For example, the index prefix you define appears in the CREATE INDEX
statement as shown below:
l Teradata:
Example
The Index Post String setting allows you to add a custom string to the end of
the CREATE INDEX statement.
The table below explains the possible values and their behavior:
Value Behavior
Number The maximum number of attribute ID columns to use with the wildcard
You can define attribute weights in the Project Configuration Editor. Select
the Report definition: SQL generation category, and in the Attribute
weights section, click Modify.
The table below explains the possible values and their behavior:
Value Behavior
Number The maximum number of attribute ID columns that are placed in the index
The Primary Index Type property determines the pattern for creating primary
keys and indexes. In the VLDB Properties Editor, select an option to view
example SQL statements used by various databases for the selected option.
The examples also display whether the option is applicable for a given
database type. If you select an option that is not applicable for the database
type that you use, then the other option is used automatically. While this
ensures that the primary index type is correct for your database, you should
select an option that is listed as applicable for the database that you use.
Some databases such as DB2 UDB support both primary index type options.
Use the example SQL statements and your third-party database
documentation to determine the best option for your environment.
l Create index after inserting into table (default): This option creates the
index after inserting data into a table, which is a good option to support
most database and indexing strategies.
l Create index before inserting into table: This option creates the index
before inserting data into a table, which can improve performance for
some environments, including Sybase IQ. The type of index created can
also help to improve performance in these types of environments, and can
be configured with the Secondary Index Type VLDB property (see
Secondary Index Order, page 1683).
Attribute to Join When Key From Neither Side can be Supported by the Other Side: Controls whether tables are joined only on the common keys or on all common columns for each table. Possible values: Join common key on both sides; Join common attributes (reduced) on both sides. Default: Join common key on both sides.
Execute
Cancel execution
Do not do downward outer join for database that support full outer join
Do not do downward outer join for database that support full outer join, and order temp tables in last pass by level
Reverse FROM clause order as generated by the engine
Join Type: Type of column join. Possible values: Join 89; Join 92; and Cross Join and SQL 92 Outer Join. Default: Join 89.
Lookup Table Join Order: Determines how lookup tables are loaded for join operations. Possible values: Partially based on attribute level (behavior prior to version 8.0.1); Fully based on attribute level (lookup tables for lower level attributes are joined before those for higher level attributes). Default: Partially based on attribute level (behavior prior to version 8.0.1).
on nested
aggregation when all
formulas have the
same level
Do perform downward
functions. outer join on nested
aggregation when all
formulas can
downward outer join
to a common lower
level
Preserving
Data Using
Outer Joins
Preserve common
elements of final pass
result table and
lookup/relationship
table
Preserve common
elements of lookup
and final pass result
table
This VLDB property becomes obsolete when you change your Data Engine
version to 2021 or above. See KB484738 for more information.
This VLDB property determines how MicroStrategy joins tables with common
columns. The options for this property are:
l Join common key on both sides (default): Joins on tables only use
columns that are in each table, and are also keys for each table.
l Join common attributes (reduced) on both sides: Joins on tables use all of
the common columns for attributes shared by the tables, even when those
columns are not keys in both tables (see the sketch after these examples).
For example:

l You have two different tables named Table1 and Table2. Both tables
share three ID columns for Year, Month, and Date along with other columns
of data. Table1 uses Year, Month, and Date as keys while Table2 uses
only Year and Month as keys. Since the ID column for Date is not a key
for Table2, you must set this option to include Date in the join
along with Year and Month.
l You have a table named Table1 that includes the columns for the
attributes Quarter, Month of Year, and Month. Since Month is a child of
Quarter and Month of Year, its ID column is used as the key for Table1.
There is also a temporary table named TempTable that includes the
columns for the attributes Quarter, Month of Year, and Year, using all
three ID columns as keys of the table. It is not possible to join Table1
and TempTable unless you set this option because they do not share
any common keys. If you set this option, Table1 and TempTable can join
on the common attributes Quarter and Month of Year.
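The sketch below is illustrative only; Table1, Table2, and the non-key
columns SALES and INVENTORY are assumptions for this example, not SQL Engine
output.

/* Join common key on both sides: only Year and Month, the keys shared by both tables, participate */
select a11.YEAR_ID, a11.MONTH_ID, a11.SALES, a12.INVENTORY
from TABLE1 a11
join TABLE2 a12
  on (a11.YEAR_ID = a12.YEAR_ID and
      a11.MONTH_ID = a12.MONTH_ID);

/* Join common attributes (reduced) on both sides: the shared Date column joins as well */
select a11.YEAR_ID, a11.MONTH_ID, a11.DATE_ID, a11.SALES, a12.INVENTORY
from TABLE1 a11
join TABLE2 a12
  on (a11.YEAR_ID = a12.YEAR_ID and
      a11.MONTH_ID = a12.MONTH_ID and
      a11.DATE_ID = a12.DATE_ID);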
Caution must be taken when changing this setting since the results can be
different depending on the types of metrics on the report.
Example
on (pa1.MARKET_NBR = a11.MARKET_NBR)
This property allows the MicroStrategy SQL Engine to use a new algorithm
for evaluating whether or not a Cartesian join is necessary. The new
algorithm can sometimes avoid a Cartesian join when the old algorithm
cannot. For backward compatibility, the default is the old algorithm. If you
see Cartesian joins that appear to be avoidable, use this property to
determine whether the engine's new algorithm avoids the Cartesian join.
Examples
select a12.ATTR1_ID ATTR1_ID,
max(a12.ATTR1_DESC) ATTR1_DESC,
a13.ATTR2_ID ATTR2_ID,
max(a13.ATTR2_DESC) ATTR2_DESC,
count(a11.FACT_ID) METRIC
from FACTTABLE a11
cross join LU_TABLE1 a12
join LU_TABLE2 a13
on (a11.ATTR3_ID = a13.ATTR3_ID and
a12.ATTR1_ID = a13.ATTR1_CD)
group by a12.ATTR1_ID,
a13.ATTR2_ID
l Some Cartesian joins may not be a direct table-to-table join. If one join
"Cartesian joins" to another join, and one of the joins contains a
warehouse table (not an intermediate table), then the execution is either
canceled or allowed depending on the option selected (see below). For
example, if (TT_A join TT_B) Cartesian join (TT_C join WH_D) the
following occurs based on the following settings:
Traditionally, the outer join flag is ignored, because M2 (at Region level) is
higher than the report level of Store. It is difficult to preserve all of the stores
for a metric at the Region level. However, you can preserve rows for a metric
at a higher level than the report. Since M2 is at the region level, it is
impossible to preserve all regions for M2 because the report only shows
Store. To do that, a downward join pass is needed to find all stores that
belong to the region in M2, so that a union is formed among all these stores
with the stores in M1.
When performing a downward join, another issue arises. Even though all the
stores that belong to the region in M2 can be found, these stores may not be
those from which M2 is calculated. If a report filters on a subset of stores,
then M2 (if it is a filtered metric) is calculated only from those stores, and
aggregated to regions. When a downward join is done, either all the stores
that belong to the regions in M2 are included or only those stores that
belong to the regions in M2 and in the report filter. Hence, this property has
three options.
Example
Using the above example and applying a filter for Atlanta and Charlotte, the
default Do not preserve all the rows for metrics higher than template
level option returns the following results. Note that Charlotte does not
appear because it has no sales data in the fact table; the outer join flag on
metrics higher than template level is ignored.
Using Preserve all the rows for metrics higher than template level
without report filter returns the results shown below. Now Charlotte
appears because the outer join is used, and it has an inventory, but
Washington appears as well because it is in the Region, and the filter is not
applied.
Charlotte 300
Washington 300
Using Preserve all the rows for metrics higher than template level with
report filter produces the following results. Washington is filtered out but
Charlotte still appears because of the outer join.
Charlotte 300
For backward compatibility, the default is to ignore the outer join flag for
metrics higher than template level. This is the SQL Engine behavior for
MicroStrategy 6.x or lower, as well as for MicroStrategy 7.0 and 7.1.
The DSS Star Join property specifies whether a partial star join is performed
or not. A partial star join means the lookup table of a column is joined if and
only if a column is in the SELECT clause or involved in a qualification in the
WHERE clause of the SQL. In certain databases, for example, RedBrick and
Teradata, partial star joins can improve SQL performance if certain types of
indexes are maintained in the data warehouse. Notice that the lookup table
joined in a partial star join is not necessarily the same as the lookup table
defined in the attribute form editor. Any table that acts as a lookup table
rather than a fact table in the SQL and contains the column is considered a
feasible lookup table.
Examples
a14.STORE_DESC
Examples
Move MQ Table in normal FROM clause order to the last (for RedBrick)
This setting is added primarily for RedBrick users. The default order of table
joins is as follows:
The Full Outer Join Support property specifies whether the database
platform supports full outer join syntax:
l Support: Full outer joins are attempted when required by your report or
dashboard actions. By selecting this option, the Join Type VLDB property
is assumed to be Join 92 and any other setting in Join Type is ignored.
Additionally, the COALESCE function can be included in the SQL query.
Since full outer joins can require a lot of database and Intelligence
Server resources, and full outer joins are not supported for all databases,
it is recommended to enable support for individual reports first. If your
results are returned successfully and full outer joins are used often for
your report or dashboard environment, you can consider enabling support
for the entire database. However, enabling full outer join support for
specific reports is recommended if full outer joins are only used for a
small to moderate amount of reporting needs. Creating a template with
full outer join support enabled can save report developers time when
requiring full outer joins.
Examples
Join Type
The Join Type property determines which ANSI join syntax pattern to use.
Some databases, such as Oracle, do not support the ANSI 92 standard yet.
Some databases, such as DB2, support both Join 89 and Join 92. Other
databases, such as some versions of Teradata, have a mix of the join
standards and therefore need their own setting.
MicroStrategy uses different defaults for the join type based on the database
you are using. This is to support the most common scenarios for your database.
If the Full Outer Join Support VLDB property (see Join Type, page 1702) is
set to Support, this property is ignored and the Join 92 standard is used.
Examples
Join 89 (default)
a22.DEPARTMENT_NBR DEPARTMENT_NBR,
a21.CUR_TRN_DT CUR_TRN_DT
from LOOKUP_DAY a21,
LOOKUP_DEPARTMENT a22,
LOOKUP_STORE a23
Join 92
select a21.MARKET_NBR MARKET_NBR,
max(a24.MARKET_DESC) MARKET_DESC,
sum((a22.COST_AMT * a23.TOT_SLS_DLR)) SUMTSC
from ZZOL00 a21
left outer join COST_STORE_DEP a22
on (a21.DEPARTMENT_NBR = a22.DEPARTMENT_NBR and
a21.CUR_TRN_DT = a22.CUR_TRN_DT and
a21.STORE_NBR = a22.STORE_NBR)
left outer join STORE_DEPARTMENT a23
on (a21.STORE_NBR = a23.STORE_NBR and
a21.DEPARTMENT_NBR = a23.DEPARTMENT_NBR and
a21.CUR_TRN_DT = a23.CUR_TRN_DT),
LOOKUP_MARKET a24
where a21.MARKET_NBR = a24.MARKET_NBR
group by a21.MARKET_NBR
This property determines how lookup tables are loaded for being joined. The
setting options are:

l Partially based on attribute level (behavior prior to version 8.0.1)

l Fully based on attribute level. Lookup tables for lower level attributes
are joined before those for higher level attributes.
If you select the first option, lookup tables are loaded for join in alphabetic
order.
If you select the second option, lookup tables are loaded for join based on
attribute levels, and joining is performed on the lowest level attribute first.
The Max Tables in Join property works together with the Max Tables in Join
Warning property. It specifies the maximum number of tables in a join. If the
maximum number of tables in a join (specified by the Max Tables In Join
property) is exceeded, then the Max Tables in Join Warning property
decides the course of action.
The table below explains the possible values and their behavior:
l Number: The maximum number of tables in a join is set to the number specified
The Max Tables in Join Warning property works in conjunction with the Max
Tables in Join property. If the maximum number of tables in a join (specified
by the Max Tables in Join property) is exceeded, then this property controls
the action taken. The options are to either continue or cancel the execution.
The Nested Aggregation Outer Join VLDB property provides options to control
the outer join behavior for metrics that use nested aggregation.
Store Table
1 East
2 Central
3 South
6 North
Fact Table
1 2002 1000
2 2002 2000
3 2002 5000
1 2003 4000
2 2003 6000
3 2003 7000
4 2003 3000
5 2003 1500
The Fact table has data for Store IDs 4 and 5, but the Store table does not
have any entry for these two stores. On the other hand, notice that the North
Store does not have any entries in the Fact table. This data is used to show
examples of how the next two properties work.
The following Preserve All Final Pass Result Elements VLDB property
settings determine how to outer join the final result, as well as the lookup
and relationship tables:
l If you choose the Preserve all final result pass elements option, the
SQL Engine generates an outer join, and your report contains all of the
elements that are in the final result set. When this setting is turned ON,
outer joins are generated for any joins from the fact table to the lookup
table, as well as to any relationship tables. This is because it is hard to
distinguish which table is used as a lookup table and which table is used
as a relationship table, the two roles one table often plays. For example,
LOOKUP_DAY serves as both a lookup table for the Day attribute, as well
as a relationship table for Day and Month.
This setting should not be used in standard data warehouses, where the
lookup tables are properly maintained and all elements in the fact table
have entries in the respective lookup table. It should be used only when a
certain attribute in the fact table contains more (unique) attribute
elements than its corresponding lookup table. For example, in the
example above, the Fact Table contains sales for five different stores, but
the Store Table contains only four stores. This should not happen in a
standard data warehouse because the lookup table, by definition, should
contain all the attribute elements. However, this could happen if the fact
tables are updated more often than the lookup tables.
l If you choose the Preserve all elements of final pass result table with
respect to lookup table but not relationship table option, the SQL
Engine generates an inner join on all passes except the final pass; on the
final pass it generates an outer join.
l If you choose the Do not listen to per report level setting, preserve
elements of final pass according to the setting at attribute level. If
this choice is selected at attribute level, it will be treated as preserve
common elements (that is, choice 1) option at the database instance,
report, or template level, then the setting for this VLDB property is used at
the attribute level. This option should not be selected at the attribute
level itself; if you do select it at the attribute level, the VLDB property
is treated as the Preserve common elements of final pass result table and
lookup table option.
This setting is useful if you have only a few attributes that require different
join types. For example, if among the attributes in a report only one needs
to preserve elements from the final pass table, you can set the VLDB
property to Preserve all final pass result elements setting for that one
attribute. You can then set the report to the Do not listen setting for the
VLDB property. When the report is run, only the attribute set differently
causes an outer join in SQL. All other attribute lookup tables will be joined
using an equal join, which leads to better SQL performance.
Examples
The first two examples below are based on the Preserve All Final Pass
Result Elements, page 1708 example above. The third example, for the
Preserve all elements of final pass result table with respect to lookup
table but not relationship table option, is a separate example designed to
reflect the increased complexity of that option's behavior.
The "Preserve common elements of final pass result table and lookup table"
option returns the following results using the SQL below.
East 5000
Central 8000
South 12000
The "Preserve all final result pass elements" option returns the following
results using the SQL below. Notice that the data for Store_IDs 4 and 5 are
now shown.
East 5000
Central 8000
South 12000
3000
1500
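A simplified sketch of the outer join that the Preserve all final result pass
elements option produces is shown below; FACT_TABLE, STORE_TABLE, and the
column names are assumed from the sample data above and are not the exact SQL
Engine output.

select a11.STORE_ID STORE_ID,
       max(a12.STORE_DESC) STORE_DESC,
       sum(a11.SALES) TOT_SALES
from FACT_TABLE a11
     left outer join STORE_TABLE a12
       on (a11.STORE_ID = a12.STORE_ID)
group by a11.STORE_ID

Because the fact table drives the outer join, Store IDs 4 and 5 are kept even
though they have no description in the Store table.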
Example: Preserve all elements of final pass result table with respect to lookup
table but not to relationship table
A report has Country, Metric 1, and Metric 2 on the template. The following
fact tables exist for each metric:
CALLCENTER_ID   Fact 1
1 1000
2 2000
1 1000
2 2000
3 1000
4 1000
EMPLOYEE_ID Fact 2
1 5000
2 6000
1 5000
2 6000
3 5000
4 5000
5 1000
The SQL Engine performs three passes. In the first pass, the SQL Engine
calculates metric 1. The SQL Engine inner joins the "Fact Table (Metric 1)"
table above with the call center lookup table "LU_CALL_CTR" below:
CALLCENTER_ID COUNTRY_ID
1 1
2 1
3 2
COUNTRY_ID Metric 1
1 6000
2 1000
In the second pass, metric 2 is calculated. The SQL Engine inner joins the
"Fact Table (Metric 2)" table above with the employee lookup table "LU_
EMPLOYEE" below:
EMPLOYEE_ID COUNTRY_ID
1 1
2 2
3 2
COUNTRY_ID Metric 2
1 10000
2 17000
In the third pass, the SQL Engine uses the following country lookup table,
"LU_COUNTRY":
COUNTRY_ID COUNTRY_DESC
1 United States
3 Europe
The SQL Engine left outer joins the METRIC1_TEMPTABLE above and the
LU_COUNTRY table. The SQL Engine then left outer joins the METRIC2_
TEMPTABLE above and the LU_COUNTRY table. Finally, the SQL Engine
inner joins the results of the third pass to produce the final results.
The "Preserve all elements of final pass result table with respect to lookup
table but not to relationship table" option returns the following results using
the SQL below.
COUNTRY_ID   COUNTRY_DESC   Metric 1   Metric 2
2 1000 17000
The Preserve All Lookup Table Elements VLDB property is used to show all
attribute elements that exist in the lookup table, even though there is no
corresponding fact in the result set. For example, your report contains Store
and Sum(Sales), and it is possible that a store does not have any sales at
all. However, you want to show all the store names in the final report, even
those stores that do not have sales. To do that, you must not rely on the
stores in the sales fact table. Instead, you must make sure that all the stores
from the lookup table are included in the final report. The SQL Engine needs
to do a left outer join from the lookup table to the fact table.
It is possible that there are multiple attributes on the template. To keep all
the attribute elements, Analytical Engine needs to do a Cartesian Join
between involved attributes' lookup tables before doing a left outer join to
the fact table.
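A minimal sketch of this pattern for a report with Store, Month, and
Sum(Sales) is shown below; LU_STORE, LU_MONTH, SALES_FACT, and the column
names are illustrative, and the fact table is assumed to already be at the
Store and Month level.

create table TT1 as
select a11.STORE_ID STORE_ID,
       a12.MONTH_ID MONTH_ID
from LU_STORE a11
     cross join LU_MONTH a12;

select pa1.STORE_ID STORE_ID,
       pa1.MONTH_ID MONTH_ID,
       sum(a11.SALES) TOT_SALES
from TT1 pa1
     left outer join SALES_FACT a11
       on (pa1.STORE_ID = a11.STORE_ID and
           pa1.MONTH_ID = a11.MONTH_ID)
group by pa1.STORE_ID,
         pa1.MONTH_ID;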
In MicroStrategy 7.1, this property was known as Final Pass Result Table
Outer Join to Lookup Table.
The Analytical Engine does a normal (equal) join to the lookup table.
Sometimes the fact table level is not the same as the report or template
level. For example, a report contains Store, Month, Sum(Sales) metric, but
the fact table is at the level of Store, Day, and Item. There are two ways to
keep all the store and month elements:
l Do a left outer join first to keep all attribute elements at the Store, Day,
and Item level, then aggregate to the Store and Month level.
This option is for the first approach. In the example given previously, it
makes two SQL passes:
The advantage of this approach is that you can do a left outer join and
aggregation in the same pass (pass 2). The disadvantage is that because
you do a Cartesian join with the lookup tables at a much lower level (pass 1),
the result of the Cartesian joined table (TT1) can be very large.
This option corresponds to the second approach described above. Still using
the same example, it makes three SQL passes:
This approach needs one more pass than the previous option, but the cross
join table (TT2) is usually smaller.
This option is similar to Option 3. The only difference is that the report filter
is applied in the final pass (Pass 3). For example, a report contains Store,
Month, and Sum(Sales) with a filter of Year = 2002. You want to display
every store in every month in 2002, regardless of whether there are sales.
However, you do not want to show any months from other years (only the 12
months in year 2002). Option 4 resolves this issue.
means that every attribute participates with the Preserve All Lookup Tables
Elements property. You still need to turn on this property to make it take
effect, which can be done using either this dialog box or the VLDB dialog
box.
Example
The Preserve common elements of lookup and final pass result table
option simply generates a direct join between the fact table and the lookup
table. The results and SQL are as follows.
East 5000
Central 8000
South 12000
The "Preserve lookup table elements joined to final pass result table based
on fact keys" option creates a temp table that is a Cartesian join of all lookup
table key columns. Then the fact table is outer joined to the temp table. This
preserves all lookup table elements. The results and SQL are as below:
East 5000
Central 8000
South 12000
North
The "Preserve lookup table elements joined to final pass result table based
on template attributes without filter" option preserves the lookup table
elements by left outer joining to the final pass of SQL and only joins on
attributes that are on the template. For this example and the next, the filter
of "Store not equal to Central" is added. The results and SQL are as follows:
East 5000
Central
South 12000
North
The "Preserve lookup table elements joined to final pass result table based
on template attributes with filter" option is the newest option and is the same
as above, but you get the filter in the final pass. The results and SQL are as
follows:
East 5000
South 12000
North
In the table below, the default values for each VLDB property are the general
defaults that can be applied most broadly for the set of certified MDX cube
sources. Certain VLDB properties use different default settings depending
on which MDX cube source you are using. To determine all default VLDB
property settings for the MDX cube source you are reporting on, follow the
steps provided in Default VLDB Settings for Specific Data Sources, page
1925.
l MDX Non Empty Optimization: Determines how null values from an MDX cube
source are ignored using the non-empty keyword when attributes from different
hierarchies (dimensions) are included on the same MDX cube report. Possible
values: No non-empty optimization; Non-empty optimization, use default
measure; Non-empty optimization, use first measure on template; Non-empty
optimization, use all measures on template. Default: No non-empty
optimization.

l MDX TopCount Support: Determines whether TopCount is used in place of Rank
and Order to support certain MicroStrategy features such as metric filter
qualifications. Possible values: Do not use TopCount in the place of Rank and
Order; Use TopCount instead of Rank and Order. Default: Use TopCount instead
of Rank and Order.

l Name of Measure Dimension: Defines the name of the measures dimension in an
MDX cube source. Possible values: User-defined. Default: [Measures].

l MDX Query Result Rows: Determines the maximum number of rows that each MDX
query can return against MSAS and Essbase for a hierarchical attribute.
Possible values: User-defined. Default: -1.
See the MDX Cube Reporting Help for information on supporting MDX cube
source date data in MicroStrategy.
review attribute information. If this type of MDX cube report accesses data
that is partitioned within the MDX cube source, the report can require
additional resources and impact the performance of the report. To avoid this
performance issue, the MDX Add Fake Measure VLDB property provides the
following options:
The MDX Add Non Empty VLDB property determines how null values are
returned to MicroStrategy from an MDX cube source and displayed on MDX
cube reports. To determine whether null data should be displayed on MDX
cube reports, when attributes from different hierarchies (dimensions) are
included on the same MDX cube report, see MDX Non Empty Optimization,
page 1732.
l Do not add the non-empty keyword in the MDX select clause: When
this option is selected, data is returned from rows that contain data and
rows that have null metric values (similar to an outer join in SQL). The null
values are displayed on the MDX cube report.
l Add the non-empty keyword in the MDX select clause only if there
are metrics on the report (default): When this option is selected, and
metrics are included on an MDX cube report, data is not returned from the
MDX cube source when the default metric in the MDX cube source has null
data. Any data not returned is not included on MDX cube reports (similar to
an inner join in SQL). If no metrics are present on an MDX cube report,
then all values for the attributes are returned and displayed on the MDX
cube report.
l Always add the non-empty keyword in the MDX select clause: When
this option is selected, data is not returned from the MDX cube source
when a metric on the MDX cube report has null data. Any data not returned
is not included on MDX cube reports (similar to an inner join in SQL).
See the MDX Cube Reporting Help for more information on MDX sources.
Inheriting value formats from your MDX cube source also enables you to
apply multiple value formats to a single MicroStrategy metric.
l MDX metric values are formatted per column (default): If you select this
option, MDX cube source formatting is not inherited. You can only apply a
single format to all metric values on an MDX cube report.
l MDX metric values are formatted per cell: If you select this option, MDX
cube source formatting is inherited. Metric value formats are determined
by the formatting that is available in the MDX cube source, and metric
values can have different formats.
For examples of using these options and steps to configure your MDX cube
sources properly, see the MDX Cube Reporting Help.
This VLDB property determines how null values are identified if you use the
MDX Non Empty Optimization VLDB property (see MDX has Measure Values
in Other Hierarchies, page 1729) to ignore null values coming from MDX
cube sources.
If you define the MDX Non Empty Optimization VLDB property as No non-
empty optimization, then this VLDB property has no effect on how null
values are ignored. If you use any other option for the MDX Non Empty
Optimization VLDB property, you can choose from the following settings:
l Only include the affected hierarchy in the "has measure values" set
definition: Only a single hierarchy on the MDX cube report is considered
when identifying and ignoring null values. This requires fewer resources to
determine the null values, but some values can be mistakenly identified as
null values in scenarios such as using calculated members in an MDX
cube source.
This VLDB property is useful only for MDX cube reports that access an
Oracle Hyperion Essbase MDX cube source. To help illustrate the
functionality of the property, consider an unbalanced hierarchy with the
levels Products, Department, Category, SubCategory, Item, and SubItem.
The image below shows how this hierarchy is populated on a report in
MicroStrategy.
on a grid from the top of the hierarchy down. If this setting is selected for
the example scenario described above, the report is populated as shown
in the image below.
Setting this VLDB property to Add the generation number property for a
ragged hierarchy from Essbase can cause incorrect formatting.
The MDX Measure Values to Treat as Null VLDB property allows you to
specify what measure values are defined as NULL values, which can help to
support how your SAP environment handles non-calculated measures. The
default value to treat as NULL is X. This supports defining non-calculated
measures as NULL values for SAP 7.4 environments.
The MDX Non Empty Optimization VLDB property determines how null
values from an MDX cube source are ignored using the non-empty keyword
when attributes from different hierarchies (dimensions) are included on the
same MDX cube report.
Electronics $2,500,000
2008
Movies $500,000
Music
By selecting this option, the following data would be returned on the MDX
cube report:
Movies $500,000
The Music row does not appear because all the metrics have null values.
If you use this option, you can also control whether null values from MDX
cube sources are ignored using the VLDB property MDX Has Measure
Values In Other Hierarchies (See MDX Non Empty Optimization, page
1732).
This VLDB property defines how the name of the measure dimension is
determined for an MDX cube source. You can choose from the following
settings:
l Do not remember the name of the measure dimension: The MDX cube
source is not analyzed to determine the name of the measure dimension.
Since most MDX cube sources use [Measures] as the measure
dimension name and MicroStrategy recognizes this default name, this
option is recommended for most MDX cube sources.
l Remember the name of the measure dimension: The MDX cube source
is analyzed to determine the name of the measure dimension. The name
returned is then used later when querying the MDX cube source. This
option can be used when an MDX cube source does not use [Measures]
as the measure dimension name, which is the default used for most MDX
cube sources. Essbase is the MDX cube source that most commonly uses
a measure dimension name other than [Measures].
l Read the name of the measure dimension from the "Name of Measure
Dimension" VLDB setting: The measure dimension name defined using
the Name of Measure Dimension VLDB property (see MDX Remember
Measure Dimension Name, page 1734) is used as the measure dimension
name. You can use this option if the MDX cube source does not use [Measures] as the measure dimension name.
l Do not use TopCount in the place of Rank and Order: The functions
Rank and Order are always used instead of TopCount. This option
supports backwards compatibility.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.
This VLDB property supports a unique scenario when analyzing MDX cube
reports. An example of this scenario is provided below.
You have an MDX cube report that includes a low level attribute on the
report, along with some metrics. You create a filter on the attribute's ID
form, where the ID is between two ID values. You also include a filter on a
metric. Below is an example of such an MDX cube report definition:
When you run the report, you receive an error that alerts you that an
unexpected level was found in the result. This is because the filter on the
attribute's ID form can include other levels due to the structure of ID values
in some MDX cube sources. When these other levels are included, the
metric filter cannot be evaluated correctly by default.
You can support this type of report by modifying the MDX Verify Limit Filter
Literal Level. This VLDB property has the following options:
This VLDB property defines the name of the measures dimension in an MDX
cube source. The default name for the measures dimension is [Measures].
If your MDX cube source uses a different name for the measures dimension,
you must modify this VLDB property to match the name used in your MDX
cube source. Requiring this change is most common when connecting to
Essbase MDX cube sources, which do not always use [Measures] as the
measure dimension name.
Identifying the name of the measure dimension is also configured using the
MDX Remember Measure Dimension Name VLDB property, as described in
Name of Measure Dimension, page 1738.
This VLDB property allows users to set the maximum number of rows that
each MDX query can return against MSAS and Essbase for a hierarchical
attribute. The default value is -1, which means there’s no limit. Any positive
integer value can be set as the maximum number of rows. Any values other
than positive integers are treated as default, meaning they have no limit.
When the limit is exceeded, an error message appears.
The Metrics VLDB properties summarized below are described in more detail in
the sections that follow.

l Join Across Datasets: Determines how values for metrics are calculated when
unrelated common attributes from different datasets of a dashboard or
document are included with metrics.

l Separate COUNT DISTINCT: Indicates how to handle COUNT (and other
aggregation functions) when DISTINCT is present in the SQL. Possible values:
One pass; Multiple count distinct, but count expression must be the same;
Multiple count distinct, but only one count distinct per pass; No count
distinct, use select distinct and count(*) instead. Default: No count
distinct, use select distinct and count(*) instead.

l Smart Metric Transformation: Determines the evaluation order to support
variance and variance percentage transformations on smart metric or compound
metric results. Possible values: False; True. Default: False.
l Use Temp Table as set in the Fallback Table Type setting: When this
option is set, the table creation type follows the option selected in the
VLDB property Fallback Table Type. The SQL Engine reads the Fallback
Table Type VLDB setting and determines whether to create the
intermediate table as a true temporary table or a permanent table.
In most cases, the default Fallback Table Type VLDB setting is Temporary
table. However, for a few databases, like UDB for 390, this option is set to
Permanent table. These databases have their Intermediate Table Type
defaulting to True Temporary Table, so you set their Fallback Table Type
to Permanent. If you see permanent table creation and you want the
absolute non-aggregation metric to use a True Temporary table, set the
Fallback Table Type to Temporary table on the report as well.
l Use subquery (default): With this setting, the engine performs the non-
aggregation calculation with a subquery.
Examples
Use Sub-query
where ((c11.MARKET_NBR)
in (select s21.MARKET_NBR
from DSSADMIN.LOOKUP_STORE s21
where s21.STORE_NBR in (3, 2, 1)))
select a11.CLASS_NBR CLASS_NBR,
a12.CLASS_DESC CLASS_DESC,
sum(a11.TOT_SLS_QTY) WJXBFS1
from DSSADMIN.MARKET_CLASS a11,
TPZZOP00 pa1,
DSSADMIN.LOOKUP_CLASS a12
where a11.MARKET_NBR = pa1.WJXBFS1 and
a11.CLASS_NBR = a12.CLASS_NBR
and ((a11.MARKET_NBR)
in (select s21.MARKET_NBR
from DSSADMIN.LOOKUP_STORE s21
where s21.STORE_NBR in (3, 2, 1)))
group by a11.CLASS_NBR,
a12.CLASS_DESC
Examples
in (select min(a11.CUR_TRN_DT)
from HARI_LOOKUP_DAY a11
group by a11.YEAR_ID))
group by a12.YEAR_ID
create table #ZZTIS00H5J7MQ000(
YEAR_ID DECIMAL(10, 0))
[Placeholder for an analytical SQL]
select a12.YEAR_ID YEAR_ID,
max(a13.YEAR_DESC) YEAR_DESC,
sum(a11.TOT_SLS_QTY) TSQDIMYEARNA
from HARI_REGION_DIVISION a11
join HARI_LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
join #ZZTIS00H5J7MQ000 pa1
on (a12.YEAR_ID = pa1.YEAR_ID)
join HARI_LOOKUP_YEAR a13
on (a12.YEAR_ID = a13.YEAR_ID)
where ((a11.CUR_TRN_DT)
in (select min(a15.CUR_TRN_DT)
from #ZZTIS00H5J7MQ000 pa1
join HARI_LOOKUP_DAY a15
on (pa1.YEAR_ID = a15.YEAR_ID)
group by pa1.YEAR_ID))
group by a12.YEAR_ID
[The rest of the INSERT statements have been omitted from display].
Examples
COUNT(column) Support
COUNT(column) Support is an advanced property that is hidden by default.
For information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.
Examples
Use COUNT(column)
Use COUNT(*)
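As a rough illustration only (ORDER_FACT and CUSTOMER_ID are hypothetical
names, not the exact SQL the engine generates), the two forms differ in how
NULL values are counted:

/* COUNT(column): NULL values of CUSTOMER_ID are not counted */
select count(a11.CUSTOMER_ID) WJXBFS1
from ORDER_FACT a11;

/* COUNT(*): every row is counted, regardless of NULLs */
select count(*) WJXBFS1
from ORDER_FACT a11;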
Default to Metric Name allows you to choose whether you want to use the
metric name or a MicroStrategy-generated name as the column alias. When
metric names are used, only the first 20 standard characters are used. If you
have different metrics whose names begin with the same 20 characters, the
resulting column aliases are identical and hard to differentiate. The Default
to Metric Name option does not work for some international customers.
If you choose to use the metric name and the metric name begins with a
number, the letter M is attached to the beginning of the name during SQL generation.
If you select the option Use the metric name as the default metric
column alias, you should also set the maximum metric alias size. See
Default to Metric Name, page 1749 below for information on setting this
option.
Examples
Do not use the metric name as the default metric column alias (default)
The Join Across Datasets VLDB property determines how values for metrics
are calculated when unrelated attributes from different datasets of a
dashboard or document are included with metrics. For example, consider a
dashboard with two separate datasets that include the following data:
Notice that one dataset includes the Region attribute, however the other
dataset only includes Category. The Region attribute is also not directly
related to the Category attribute, but it is included with Category in one of
the two datasets.
The data for Sales is displayed as $260 for both Regions, which is the total
sales of all regions. In most scenarios, this sales data should instead reflect
the data for each region. This can be achieved by allowing data to be joined
for the unrelated attributes Category and Region, which then displays the
following data:
Now the data for Sales displays $185 for North (a combination of the sales
for Books and Electronics, which were both for the North region) and $85 for
South (sales for Movies, which was for the South region).
Max Metric Alias Size allows you to set the maximum size of the metric alias
string. This is useful for databases that only accept a limited number of
characters for column names.
Set the Max Metric Alias Size VLDB property for the following gateways:
l Db2: 128
l PostgreSQL: 63
l Oracle 12c: 30
l Oracle 12cR2, Oracle 18c, Oracle 19c, Oracle 21c: 128
l Redshift: 127
l Teradata 15.x, Teradata 16.x, Teradata 17: 128
l Snowflake: 256
You should set the maximum metric alias size to fewer characters than your
database's limit. This is because, in certain instances, such as when two
column names are identical, the SQL engine adds one or more characters to
one of the column names during processing to be able to differentiate
between the names. Identical column names can develop when column
names are truncated.
For example, if your database rejects any column name that is more than 30
characters and you set this VLDB property to limit the maximum metric alias
size to 30 characters, the example presented by the following metric names
still causes your database to reject the names during SQL processing:
The system limits the names to 30 characters based on the VLDB option you
set in this example, which means that the metric aliases for both columns is
as follows:
The SQL engine adds a 1 to one of the names because the truncated
versions of both metric names are identical. That name is then 31 characters
long and so the database rejects it.
Therefore, in this example you should use this feature to set the maximum
metric alias size to fewer than 30 (perhaps 25), to allow room for the SQL
engine to add one or two characters during processing in case the first 25
characters of any of your metric names are the same.
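As a purely hypothetical illustration (the metric names, table, and columns
below are invented for this sketch), two aliases that are identical after
truncation to 30 characters force the SQL Engine to append a digit, producing
a 31-character alias that a 30-character limit rejects:

select sum(a11.TOT_SLS_DLR) TOTAL_SALES_AMOUNT_CURRENT_QTR,
       sum(a11.TOT_DISC_DLR) TOTAL_SALES_AMOUNT_CURRENT_QTR1
from STORE_FACT a11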
l At the metric level, it can be set in either the VLDB Properties Editor or
from the Metric Editor's Tools menu, and choosing Metric Join Type. The
setting is applied in all the reports that include this metric.
l At the report level, it can be set from the Report Editor's Data menu, by
pointing to Report Data Options, and choosing Metric Join Type. This
setting overrides the setting at the metric level and is applied only for the
currently selected report.
There is a related but separate property called Formula Join Type that can
also be set at the metric level. This property is used to determine how to
combine the result set together within this metric. This normally happens
when a metric formula contains multiple facts that cause the Analytical
Engine to use multiple fact tables. As a result, sometimes it needs to
calculate different components of one metric in different intermediate tables
and then combine them. This property can only be set in the Metric Editor
from the Tools menu, by pointing to Advanced Settings, and then choosing
Formula Join Type.
Both Metric Join Type and Formula Join Type are used in the Analytical
Engine to join multiple intermediate tables in the final pass. The actual logic
is also affected by another VLDB property, Full Outer Join Support. When
this property is set to YES, it means the corresponding database supports
full outer join (92 syntax). In this case, the joining of multiple intermediate
tables makes use of outer join syntax directly (left outer join, right outer join,
or full outer join, depending on the setting on each metric/table). However, if
the Full Outer Join Support is NO, then the left outer join is used to simulate
a full outer join. This can be done with a union of the IDs of the multiple
intermediate tables that need to do an outer join and then using the union
table to left outer join to all intermediate tables, so this approach generates
more passes. This approach was also used by MicroStrategy 6.x and earlier.
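A minimal sketch of this simulation, assuming two intermediate metric tables
TT_M1 and TT_M2 keyed by STORE_NBR (all table and column names here are
illustrative):

create table TT_IDS as
select pa1.STORE_NBR STORE_NBR from TT_M1 pa1
union
select pa2.STORE_NBR STORE_NBR from TT_M2 pa2;

select pa0.STORE_NBR STORE_NBR,
       pa1.METRIC1 METRIC1,
       pa2.METRIC2 METRIC2
from TT_IDS pa0
     left outer join TT_M1 pa1
       on (pa0.STORE_NBR = pa1.STORE_NBR)
     left outer join TT_M2 pa2
       on (pa0.STORE_NBR = pa2.STORE_NBR);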
Also note that when the metric level is higher than the template level, the
Metric Join Type property is normally ignored, unless you enable another
property, Downward Outer Join Option. For detailed information, see
Relating Column Data with SQL: Joins, page 1684.
even larger fact table. If you are short on temporary table space or insert
much data from the fact table into the temporary table, it may be better to
use the fact table multiple times rather than create temporary tables. Your
choice for this property depends on your data and report definitions.
Examples
The following example first creates a fairly large temporary table, but then
never touches the fact table again.
The following example does not create the large temporary table but must
query the fact table twice.
Null Check
The Null Check VLDB property indicates how to handle arithmetic operations
with NULL values. If Null Check is enabled, the NULL2ZERO function is
added, which changes NULL to 0 in any arithmetic calculation (+,-,*,/).
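For illustration only (STORE_FACT and its columns are hypothetical, and the
function actually emitted depends on the database), enabling Null Check wraps
the operands of an arithmetic calculation roughly as follows:

select a11.STORE_NBR STORE_NBR,
       (NULL2ZERO(sum(a11.TOT_SLS_DLR)) - NULL2ZERO(sum(a11.TOT_RTN_DLR))) NET_SALES
from STORE_FACT a11
group by a11.STORE_NBR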
Database instance
Due to the evaluation order used for smart metrics, compound metrics, and
transformations, creating transformation metrics to display the variance or
variance percentage of a smart metric or compound metric can return
unexpected results in some scenarios.
For example, the report sample shown below includes quarterly profit
margins. Transformation metrics are included to display the last quarter's
profit margin (Last Quarter's (Profit Margin) and the variance of
the profit margin and last quarter's profit margin ((Profit Margin -
(Last Quarter's (Profit Margin))).
You can modify the evaluation order to return correct variance results by
defining the Smart Metric Transformation VLDB property as True. After
making this change, the report displays the following results.
The Smart Metric Transformation VLDB property has the following options:
l False (default): Select this option for backwards compatibility with existing
transformation metrics based on smart metrics or compound metrics.
the report when dynamic aggregation is also used. You can define the
level of calculation for a metric's dynamic aggregation function by creating
a subtotal, and then defining the level of calculation for that subtotal.
When selecting this option, only the grouping option for a subtotal is used
to calculate metric data. For detailed examples and information on
creating subtotals, refer to the Advanced Reporting Help.
l Use both the grouping and filtering property of a level metric for
dynamic aggregation: The dimensionality, or level, of the metric is used
to define how the metric data is calculated on the report when dynamic
aggregation is also used. When selecting this option, both the grouping
and filtering options for a level metric are used to calculate metric data.
For detailed examples and information on defining the dimensionality of a
metric, refer to the documentation on level metrics provided in the
Advanced Reporting Help.
l Use both the grouping and filtering property of a level subtotal for
dynamic aggregation: The dimensionality, or level, of the metric's
dynamic aggregation function is used to define how the metric data is
calculated on the report when dynamic aggregation is also used. You can
define the level of calculation for a metric's dynamic aggregation function
by creating a subtotal, and then defining the level of calculation for that
subtotal. When selecting this option, both the grouping and filtering
options for a subtotal are used to calculate metric data. For detailed
examples and information on creating subtotals, refer to the Advanced
Reporting Help.
Example
Consider a metric that performs a simple sum of cost data by using the
following metric definition:
Sum(Cost) {~+}
This metric is named Cost, and the syntax {~+} indicates that it calculates
data at the level of the report it is included on. Another metric is created with
the following metric definition:
Sum(Cost) {~+}
This metric also uses a subtotal for its dynamic aggregation function that
uses the following definition:
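Expressed in the same level syntax used for the metrics above, a subtotal at
the report level, Year, and Category takes roughly this form (an illustrative
reconstruction, not the exact definition):

Sum(Cost) {~+, Year+, Category+}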
Notice that the function for this subtotal includes additional level information
to perform the calculation based on the report level, Year, and Category. As
shown in the image below, this subtotal function, named Sum
(Year,Category) is applied as the metric's dynamic aggregation function.
This metric is named Cost (subtotal dimensionality). This metric along with
the simple Cost metric is displayed on the report shown below, which also
contains the attributes Year, Region, and Category.
Notice that the values for these two metrics are the same. This is because
no dynamic aggregation is being performed, and the Subtotal Dimensionality
Use VLDB property is also using the default option of Use dimensionality
from metric for dynamic aggregation. With this default behavior still applied,
the attribute Year can be removed from the grid of the report to trigger
dynamic aggregation, as shown in the report below.
The metric values are still the same because both metrics are using the level
of the metric. If the Subtotal Dimensionality Use VLDB property for the
report is modified to use the option Use dimensionality from subtotal for
dynamic aggregation, this affects the report results as shown in the report
below.
The Cost (subtotal dimensionality) metric now applies the level defined in
the subtotal function that is used as the metric's dynamic aggregation
function. This displays the same Cost value for all categories in the
Northeast region because the data is being returned as the total for all years
and categories combined.
Transformable AggMetric
The Transformable AggMetric VLDB property allows you to define what
metrics should be used to perform transformations on compound metrics
that use nested aggregation.
For example, you create two metrics. The first metric, referred to as Metric1,
uses an expression of Sum(Fact) {~+, Attribute+}, where Fact is a
fact in your project and Attribute is an attribute in your project used to
define the level of Metric1. The second metric, referred to as Metric2, uses
an expression of Avg(Metric1){~+}. Since both metrics use aggregation
functions, Metric2 uses nested aggregation.
Including Metric2 on a report can return incorrect results for the following
scenario:
Metric only
Example
You have a report with Week, Sales, and Last Year Sales on the template,
filtered by Month. The default behavior is to calculate the Last Year Sales
with the following SQL. Notice that the date transformation is done for Month
and Week.
The new behavior applies transformation only to the highest common child
when it is applicable to multiple attributes. The SQL is shown in the following
syntax. Notice that the date transformation is done only at the Day level,
because Day is the highest common child of Week and Month. So the days
are transformed, and then you filter for the correct Month, and then Group by
Week.
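As a simplified sketch only (the tables DAY_FACT and LU_DAY, the
transformation column LY_DAY_DATE, and the filter value are illustrative
assumptions), the new behavior applies the transformation at the Day level
and then groups by Week:

select a12.WEEK_ID WEEK_ID,
       sum(a11.TOT_SLS_DLR) LAST_YEAR_SALES
from DAY_FACT a11
     join LU_DAY a12
       on (a11.DAY_DATE = a12.LY_DAY_DATE)
where a12.MONTH_ID = 202401
group by a12.WEEK_ID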
Zero Check
The Zero Check VLDB property indicates how to handle division by zero. If
zero checking is enabled, the ZERO2NULL function is added, which changes
0 to NULL in the denominator of any division calculation.
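For illustration only (STORE_FACT and its columns are hypothetical, and the
function actually emitted depends on the database), enabling Zero Check wraps
the denominator roughly as follows:

select a11.STORE_NBR STORE_NBR,
       (sum(a11.TOT_SLS_DLR) / ZERO2NULL(sum(a11.TOT_SLS_QTY))) AVG_PRICE
from STORE_FACT a11
group by a11.STORE_NBR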
The Pre/Post Statements VLDB properties are summarized below. Additional
details about each property are provided in the sections that follow.

l Data Mart SQL to be Executed After Data Mart Creation: SQL statements
included after the CREATE statement used to create the data mart. Possible
values: User-defined. Default: NULL.

l Data Mart SQL to be Executed Before Inserting Data: SQL statements included
before the INSERT statement used to insert data into the data mart. Possible
values: User-defined. Default: NULL.

l Data Mart SQL to be Executed Prior to Data Mart Creation: SQL statements
included before the CREATE statement used to create the data mart. Possible
values: User-defined. Default: NULL.

l Drop Database Connection: Defines whether the database connection is
dropped after user-defined SQL is executed on the database. Possible values:
Drop database connection after running user-defined SQL; Do not drop database
connection after running user-defined SQL. Default: Drop database connection
after running user-defined SQL.

l Element Browsing Post Statement: SQL statements issued after element
browsing requests. Possible values: User-defined. Default: NULL.

l Element Browsing Pre Statement: SQL statements issued before element
browsing requests. Possible values: User-defined. Default: NULL.
You can insert the following syntax into strings to populate dynamic
information by the SQL Engine:
l ??? inserts the table name (can be used in Data Mart Insert/Pre/Post
statements, Insert Pre/Post, and Table Post statements).
l !a inserts column names for attributes only (can be used in Table Pre/Post
and Insert Pre/Mid statements).
l !f inserts the report path (can be used in all Pre/Post statements except
Element Browsing). An example is: \MicroStrategy
Tutorial\Public Objects\Reports\MicroStrategy Platform
Capabilities\Ad hoc Reporting\Sorting\Yearly Sales
l !r inserts the report GUID, the unique identifier for the report object that is
also available in the Enterprise Manager application (can be used in all
Pre/Post statements).
l !p inserts the project name with spaces omitted (can be used in all
Pre/Post statements).
l !z inserts the project GUID, the unique identifier for the project (can be
used in all Pre/Post statements).
l !s inserts the user session GUID, the unique identifier for the user's
session that is also available in the Enterprise Manager application (can
be used in all Pre/Post statements).
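For example, a Report Pre Statement that logs each execution into a
hypothetical auditing table (MSTR_SQL_AUDIT and its columns are assumptions,
not part of MicroStrategy) could combine several of these placeholders:

insert into MSTR_SQL_AUDIT (PROJECT_NAME, REPORT_GUID, SESSION_GUID, REPORT_PATH)
values ('!p', '!r', '!s', '!f')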
The table below shows the location of some of the most important
VLDB/DSS settings in a Structured Query Language (SQL) query structure.
If the properties in the table are set, the values replace the corresponding
tag in the query:
Query Structure
<1>
<2>
CREATE <3> TABLE <4> <5><table name> <6>
(<fields' definition>)
<7>
<8>
<9>(COMMIT)
<10>
INSERT INTO <5><table name><11>
SELECT <12> <fields list>
FROM <tables list>
WHERE <joins and filter>
<13>(COMMIT)
<14>
<15>
<16>
CREATE <17> INDEX <index name> ON
<fields list>
<18>
SELECT <12> <fields list>
FROM <tables list>
WHERE <joins and filter>
<19>
<20>
DROP TABLE TABLENAME
<21>
<22>
The Commit after Final Drop property (<21>) is sent to the warehouse even
if the SQL View for the report does not show it.
Example
A2.COL2,
A3.COL3
from TABLE4 A1,
TABLE5 A2,
TABLE6 A3
where A1.COL1 = A2.COL1 and A2.COL4=A3.COL5
Including SQL statements prior to element browsing requests can allow you
to define the priority of element browsing requests to be higher or lower than
the priority for report requests. You can also include any other SQL
statements required to better support element browsing requests. You can
include multiple statements to be executed, separated by a semicolon (;).
The SQL Engine then executes the statements separately.
Examples

You can include multiple statements to be executed, each separated
with a ";". The SQL Engine then breaks it into individual statements using ";"
as the separator and executes the statements separately.
Example
a11.CLASS_NBR,
a11.STORE_NBR
/* ZZTIS00H601PO000 Insert PostStatement1 */
select pa1.ITEM_NBR ITEM_NBR,
pa1.CLASS_NBR CLASS_NBR,
max(a11.ITEM_DESC) ITEM_DESC,
max(a11.CLASS_DESC) CLASS_DESC,
pa1.STORE_NBR STORE_NBR,
max(a12.STORE_DESC) STORE_DESC,
sum(pa1.TOTALSALES) TOTALSALES
from ZZTIS00H601PO000 pa1
join HARI_LOOKUP_ITEM a11
on (pa1.CLASS_NBR = a11.CLASS_NBR and
pa1.ITEM_NBR = a11.ITEM_NBR)
join HARI_LOOKUP_STORE a12
on (pa1.STORE_NBR = a12.STORE_NBR)
group by pa1.ITEM_NBR,
pa1.CLASS_NBR,
pa1.STORE_NBR
Examples
into ZZTIS00H60BPO000
from HARI_STORE_ITEM_93 a11
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a11.STORE_NBR
insert into ZZTIS00H60BPO000
select a11.ITEM_NBR ITEM_NBR,
a11.CLASS_NBR CLASS_NBR,
a11.STORE_NBR STORE_NBR,
1 XKYCGT,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_ITEM_94 a11
group by a11.ITEM_NBR,
a11.CLASS_NBR,
a11.STORE_NBR
select pa1.ITEM_NBR ITEM_NBR,
pa1.CLASS_NBR CLASS_NBR,
max(a11.ITEM_DESC) ITEM_DESC,
max(a11.CLASS_DESC) CLASS_DESC,
pa1.STORE_NBR STORE_NBR,
max(a12.STORE_DESC) STORE_DESC,
sum(pa1.TOTALSALES) TOTALSALES
from ZZTIS00H60BPO000 pa1
join HARI_LOOKUP_ITEM a11
on (pa1.CLASS_NBR = a11.CLASS_NBR and
pa1.ITEM_NBR = a11.ITEM_NBR)
join HARI_LOOKUP_STORE a12
on (pa1.STORE_NBR = a12.STORE_NBR)
group by pa1.ITEM_NBR,
pa1.CLASS_NBR,
If you do not modify the Element Browsing Post Statement VLDB property,
the statements defined in this Report Post Statement VLDB property are
also used for element browsing requests. For example, an element browsing
request occurs when a user expands an attribute to view its attribute elements.

Example
If you do not modify the Element Browsing Pre Statement VLDB property,
the statements defined in this Report Pre Statement VLDB property are also
used for element browsing requests. For example, an element browsing
request occurs when a user expands an attribute to view its attribute
elements. To define statements that apply only to element browsing
requests, see Element Browsing Pre Statement.
Example
The example below shows an instance of how pre and post statements at
both the report level and database instance level are applied and executed
against multiple sources.
For examples of the syntax required for these statements, see the Report
Pre Statement and Report Post Statement sections.
Example

Example
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_DEPARTMENT a11
join ZZTIS00H63RMQ000 pa1
on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR)
join HARI_LOOKUP_DEPARTMENT a12
on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR)
join HARI_LOOKUP_STORE a13
on (a11.STORE_NBR = a13.STORE_NBR)
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
Optimizing Queries
The table below summarizes the Query Optimizations VLDB properties.
Additional details about each property, including examples where
necessary, are provided in the sections following the table.
Possible
Property Description Default Value
Values
(default) Final
pass CAN do
aggregation and
Determines whether the join lookup Final pass CAN do
Engine calculates an tables in one
Additional Final aggregation and
aggregation function and a pass
Pass Option join lookup tables
join in a single pass or in
One additional in one pass
separate passes in the SQL.
final pass only to
join lookup
tables
Possible
Property Description Default Value
Values
passes touching
warehouse tables
and last join
pass, if it does a
downward join
from the temp
table level to the
template level
Apply filter to
passes touching
warehouse tables
and last join pass
Use Count
Use Count (Attribute@ID) to
(Attribute@ID) to calculate total
calculate total element number
element number (uses count
Controls how the total (uses count
Attribute Element distinct if
number of rows are distinct if
Number Count necessary)
calculated for incremental necessary)
Method
fetch.
Use ODBC For Tandem
cursor to databases,
Do not select
distinct elements
Determines how distinct for each partition Do not select
Count Distinct
counts of values are retrieved distinct elements
with Partitions Select distinct
from partitioned tables. for each partition
elements for
each partition
Custom Group Helps optimize custom group Treat banding as Treat banding as
Possible
Property Description Default Value
Values
normal
calculation
banding when using the
Count Banding method. You Use standard
can choose to use the case statement
Banding Count standard method that uses syntax
normal calculation
Method the Analytical Engine or
Insert band
database-specific syntax, or
range to
you can choose to use case
database and
statements or temp tables.
join with metric
value
Treat banding as
Helps optimize custom group normal
banding when using the calculation
Points Banding method. You
Use standard
Custom Group can choose to use the
case statement Treat banding as
Banding Points standard method that uses
syntax normal calculation
Method the Analytical Engine or
database-specific syntax, or Insert band range
you can choose to use case to database and
statements or temp tables. join with metric
value
Treat banding as
normal
Helps optimize custom group
calculation
banding when using the Size
Banding method. You can Use standard
Custom Group choose to use the standard case statement
Treat banding as
Banding Size method that uses the syntax
normal calculation
Method Analytical Engine or
Insert band
database-specific syntax, or
range to
you can choose to use case
database and
statements or temp tables.
join with metric
value
Possible
Property Description Default Value
Values
Do not normalize
Intelligent Cube
data
Normalize
Intelligent Cube
data in
Intelligence
Server
Normalize
Intelligent Cube
data in database
using
Intermediate
Table Type
Normalize
Data Population Defines if and how Intelligent Normalize Intelligent Cube
for Intelligent Cube data is normalized to Intelligent Cube data in Intelligence
Cubes save memory resources. data in database Server
using Fallback
Type
Normalize
Intelligent Cube
data basing on
dimensions with
attribute lookup
filtering
Normalize
Intelligent Cube
data basing on
dimensions with
no attribute
lookup filtering
Data Population Defines if and how report Do not normalize Do not normalize
Possible
Property Description Default Value
Values
report data
Normalize report
data in
Intelligence
Server
Normalize report
data in database
using
Intermediate
data is normalized to save
for Reports Table Type report data
memory resources.
Normalize report
data in database
using Fallback
Table Type
Normalize report
data basing on
dimensions with
attribute lookup
filtering
Sort attribute
elements based
on the attribute
ID form for each
Default Sort Determines whether the sort attribute Sort attribute
Behavior for order of attribute elements on elements based on
Attribute reports considers special sort Sort attribute the attribute ID
Elements in order formatting defined for elements based form for each
Reports attributes. on the defined attribute
'Report Sort'
setting of all
attribute forms
for each attribute
Engine Attribute Role Options
Description: Enable or disable the Analytical Engine's ability to treat attributes defined on the same column with the same expression as attribute roles.
Possible values: Enable Engine Attribute Role feature; Disable Engine Attribute Role feature.
Default value: Disable Engine Attribute Role feature.
Use constant in prequery

Multiple Data Source Support
Description: Defines which technique to use to support multiple data sources in a project.
Possible values: Use MultiSource Option to access multiple data sources; Use database gateway support to access multiple data sources.
Default value: Use MultiSource Option to access multiple data sources.
OLAP Function Support
Description: Defines whether OLAP functions support backwards compatibility or reflect enhancements to OLAP function logic.
Possible values: Preserve backwards compatibility with 8.1.x and earlier; Recommended with 9.0 and later.
Default value: Preserve backwards compatibility with 8.1.x and earlier.
Parallel Query Execution
Description: Determines whether MicroStrategy attempts to execute multiple queries in parallel to return report results faster and publish Intelligent Cubes.
Possible values: Disable parallel query execution; Enable parallel query execution for multiple data source reports only; Enable parallel query execution for all reports that support it.
Default value: Disable parallel query execution.
Parallel Query Execution Improvement Estimate in SQL View
Description: Includes in the SQL view an estimate of the processing time that would be saved if parallel query execution was used to run multiple queries in parallel.
Possible values: enable or do not enable the parallel query execution improvement estimate in SQL view.
Rank Method if DB Ranking Not Used
Description: Determines how ranking is performed.
Possible values: Use ODBC ranking (MSTR 6 method); Analytical engine performs rank.
Default value: Use ODBC ranking (MSTR 6 method).
Remove Aggregation Method
Description: Determines whether to keep or remove aggregations in SQL queries executed from MicroStrategy.
Possible values: Remove aggregation according to key of FROM clause; Remove aggregation according to key of fact tables (old behavior).
Default value: Remove aggregation according to key of FROM clause.
Remove Group by Option
Description: Determines whether Group By and aggregations are used for attributes with the same primary key.
Possible values: Remove aggregation and Group By when Select level is identical to From level; Remove aggregation and Group By when Select level contains all attribute(s) in From level.
Default value: Remove aggregation and Group By when Select level is identical to From level.
Remove Repeated Tables for Outer Joins
Description: Determines whether an optimization for outer join processing is enabled or disabled.
Possible values: Disable optimization to remove repeated tables in full outer join and left outer join passes; Enable optimization to remove repeated tables in full outer join and left outer join passes.
Default value: Enable optimization to remove repeated tables in full outer join and left outer join passes.
Set Operator Optimization
Description: Allows you to use set operators in subqueries to combine multiple filter qualifications. Set operators are only supported by certain database platforms and with certain sub query types.
Possible values: Disable Set Operator Optimization; Enable Set Operator Optimization (if supported).
Default value: Disable Set Operator Optimization.
SQL Global Optimization
Description: Determines the level by which SQL queries in reports are optimized.
Possible values: Level 0: No optimization; Level 1: Remove Unused and Duplicate Passes; Level 2: Level 1 + Merge Passes with Different SELECT; Level 3: Level 2 + Merge Passes, which only hit DB Tables, with different WHERE; Level 4: Level 2 + Merge All Passes with Different WHERE.
Default value: Level 4: Level 2 + Merge All Passes with Different WHERE.
Sub Query Type
Description: Allows you to determine the type of subquery used in engine-generated SQL.
Possible values:
l WHERE EXISTS (SELECT * ...)
l WHERE EXISTS (SELECT col1, col2...)
l WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT * ...) for multiple columns IN
l WHERE (COL1, COL2...) IN (SELECT s1.COL1, s1.COL2...)
l Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery
l WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT col1, col2 ...) for multiple columns IN
l Use Temporary Table, falling back to IN (SELECT COL) for correlated subquery
Default value: Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery.
Unrelated Filter Options
Description: Determines whether to keep or remove the unrelated filter.
Possible values include: Remove unrelated filter; Keep unrelated filter; Keep unrelated filter and put condition from unrelated attributes in one subquery group.
The Additional Final Pass Option determines whether the Engine calculates
an aggregation function and a join in a single pass or in separate passes in
the SQL.
It is recommended that you use this property on reports. You must update
the metadata to see the property populated in the metadata.
Example
The following SQL example was created using SQL Server metadata and
warehouse.
From the above warehouse structure, define the following schema objects:
l Salary = Avg(Salary_Dept){~+}
1 Pass0
2 select a12.Mgr_Id Mgr_Id,
3 a11.Dept_Id Dept_Id,
4 sum(a11.Salary) WJXBFS1
5 into #ZZTUW0200LXMD000
6 from dbo.Emp_Dept_Salary a11
7 join dbo.Emp_Mgr a12
8 on (a11.Emp_Id = a12.Emp_Id)
9 group by a12.Mgr_Id,
10 a11.Dept_Id
11 Pass1
12 select pa1.Mgr_Id Mgr_Id,
13 max(a11.Mgr_Desc) Mgr_Desc,
14 avg(pa1.WJXBFS1) WJXBFS1
15 from #ZZTUW0200LXMD000 pa1
16 join dbo.Emp_Mgr a11
17 on (pa1.Mgr_Id = a11.Mgr_Id)
18 group by pa1.Mgr_Id
19 Pass2
20 drop table #ZZTUW0200LXMD000
The problem in the SQL pass above, in lines 14-17, is that the join condition
and the aggregation function are in a single pass. The SQL joins the
ZZTUW0200LXMD000 table to the Emp_Mgr table on column Mgr_ID, but
Mgr_ID is not the primary key of the Emp_Mgr table. Therefore, there
are many rows in the Emp_Mgr table with the same Mgr_ID. This results
in a repeated data problem.
If the aggregation and the join are not performed against the same table in
the same pass, this problem does not occur.
To resolve this problem, select the option One additional final pass only
to join lookup tables in the VLDB Properties Editor. With this option
selected, the report, when executed, generates the following SQL:
1 Pass0
2 select a12.Mgr_Id Mgr_Id,
3 a11.Dept_Id Dept_Id,
4 sum(a11.Salary) WJXBFS1
5 into #ZZTUW01006IMD000
6 from dbo.Emp_Dept_Salary a11
In this SQL, lines 12-13 and 21-23 show that the Engine calculates the
aggregation function, which is the Average function, in a separate pass and
performs the join operation in another pass.
l Apply filter to passes touching warehouse tables and last join pass,
if it does a downward join from the temporary table level to the
template level: The filter is applied in the final pass if it is a downward
join. For example, you have Store, Region Sales, and Region Cost on the
report, with the filter "store=1." The intermediate passes calculate the total
sales and cost for Region 1 (to which Store 1 belongs). In the final pass, a
downward join is done from the Region level to the Store level, using the
relationship table LOOKUP_STORE. If the "store = 1" filter in this pass is
not applied, stores that belong to Region 1 are included on the report.
However, you usually expect to see only Store 1 when you use the filter
"store=1." So, in this situation, you should choose this option to make sure
the filter is applied in the final pass.
l Apply filter to passes touching warehouse tables and last join pass:
The filter in the final pass is always applied, even though it is not a
downward join. This option should be used for special types of data
modeling. For example, you have Region, Store Sales, and Store Cost on
the report, with the filter "Year=2002." This looks like a normal report and
the final pass joins from Store to Region level. But the schema is
abnormal: certain stores do not always belong to the same region, perhaps
due to rezoning. For example, Store 1 belongs to Region 1 in 2002, and
belongs to Region 2 in 2003. To solve this problem, put an additional
column Year in LOOKUP_STORE so that you have the following data.
Store Region Year
1     1      2002
1     2      2003
...
Apply the filter Year=2002 to your report. This filter must be applied in the
final pass to find the correct store-region relationship, even though the
final pass is a normal join instead of a downward join.
Interaction with Other VLDB Properties
Two other VLDB properties, Downward Outer Join Option and Preserve All
Lookup Table Elements, have an option to apply the filter. If you choose
those options, then the filter is applied accordingly, regardless of what the
value of Apply Filter Option is.
l Use ODBC cursor to calculate the total element number: This setting
causes Intelligence Server to determine the total number of rows by
looping through the table after the initial SELECT pass.
For Tandem databases, the default is Use ODBC Cursor to calculate the
total element number.
The Custom Group Banding Count Method helps optimize custom group
banding when using the Count Banding method. You have the following
options:
l Use standard case statement syntax: Select this option to utilize case
statements within your database to perform the custom group banding.
l Insert band range to database and join with metric value: Select this
option to use temporary tables to perform the custom group banding.
Examples
end) as DA57
from ZZMQ02 pa3
select sum(a11.TOT_DOLLAR_SALES) as WJXBFS1
from CUSTOMER_SLS a11, ZZOP03 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
select a12.DA57 AS DA57, sum(a11.TOT_DOLLAR_SALES)
as WJXBFS1
from CUSTOMER_SLS a11, ZZOP03 a12
where a11.CUSTOMER_ID = a12.CUSTOMER_ID
group by a12.DA57
drop table ZZMD00
drop table ZZMD01
drop table ZZMQ02
drop table ZZOP03
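The passes above are only partially reproduced. As a rough sketch of the alternative Use standard case statement syntax option (the table, columns, and band boundaries below are hypothetical and not taken from this example), a banding pass built with a CASE statement might look like the following:

select a11.CUSTOMER_ID CUSTOMER_ID,
       case
         when sum(a11.TOT_DOLLAR_SALES) between 0 and 4999 then 1
         when sum(a11.TOT_DOLLAR_SALES) between 5000 and 9999 then 2
         else 3
       end DA57
from CUSTOMER_SLS a11
group by a11.CUSTOMER_ID

With this approach the band assignment is computed inline by the database, so no temporary band-range table has to be created, populated, and dropped.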
The Custom Group Banding Points Method helps optimize custom group
banding when using the Points Banding method. You can choose to use the
standard method that uses the Analytical Engine or database-specific
syntax, or you can choose to use case statements or temp tables.
Examples
The Custom Group Banding Size Method helps optimize custom group
banding when using the Size Banding method. You can choose to use the
standard method that uses the Analytical Engine or database-specific
syntax, or you can choose to use case statements or temp tables.
Examples
You can avoid this duplication of data by normalizing the Intelligent Cube
data. In this scenario, the South region description information would only
be stored once even though the region contains five stores. While this saves
memory resources, the act of normalization requires some processing time.
This VLDB property provides the following options to determine if and how
Intelligent Cube data is normalized:
This is a good option if you publish your Intelligent Cubes at times when
Intelligence Server use is low. Normalization can then be performed
without affecting your user community. You can use schedules to support
this strategy. For information on using schedules to publish Intelligent
Cubes, see the In-memory Analytics Help .
l The other options available for Intelligent Cube normalization all perform
the normalization within the database. Therefore, these are all good
options if Intelligent Cubes are published when Intelligence Server is in
use by the user community, or any time when the memory resources of
Intelligence Server must be conserved.
If you used this option in 9.0.0 and have upgraded to the most recent
version of MicroStrategy, it is recommended that you use a different
Intelligent Cube normalization technique. If the user account for the data
warehouse has permissions to create tables, switch to the option
Normalize Intelligent Cube data in the database. This option is
described below. If the user account does not have permissions to
create tables, switch to the option Normalize Intelligent Cube data in
Intelligence Server.
To use this option, the user account for the database must have
permissions to create tables.
To use this option, the user account for the database must have
permissions to create tables.
To use this option, the user account for the database must have
permissions to create tables. Additionally, using this option can return
different results than the other Intelligent Cube normalization
techniques. For information on these differences, see Data Differences
when Normalizing Intelligent Cube Data Using Direct Loading, page
1818 below.
The option Direct loading of dimensional data and filtered fact data can
return different results than the other Intelligent Cube normalization
techniques in certain scenarios. Some of these scenarios and the effect that
they have on using direct loading for Intelligent Cube normalization are
described below:
l There are extra rows of data in fact tables that are not available in the
attribute lookup table. In this case the VLDB property Preserve all final
pass result elements (see Relating Column Data with SQL: Joins, page
1684) determines how to process the data. The only difference between
direct loading and the other normalization options is that the option
Preserve all final pass result elements and the option Preserve all
elements of final pass result table with respect to lookup table but not
relationship table both preserve the extra rows by adding them to the
lookup table.
l There are extra rows of data in the attribute lookup tables that are not
available in the fact tables. With direct loading, these extra rows are
included. For other normalization techniques, the VLDB property Preserve
all lookup table elements (see Relating Column Data with SQL: Joins,
page 1684) determines whether or not to include these rows.
When a report is executed, the description information for the attributes (all
data mapped to non-ID attribute forms) included on the report is repeated for
every row. For example, a report includes the attributes Region and Store,
with each region having one or more stores. Without performing
normalization, the description information for the Region attribute would be
repeated for every store. If the South region included five stores, then the
information for South would be repeated five times.
You can avoid this duplication of data by normalizing the report data. In this
scenario, the South region description information would only be stored
once even though the region contains five stores. While this saves memory
resources, the act of normalization requires some processing time. This
VLDB property provides the following options to determine if and how report
data is normalized:
l The other options available for report data normalization all perform the
normalization within the database. Therefore, these are all good options if
the memory resources of Intelligence Server must be conserved.
If you used this option in 9.0.0 and have upgraded to the most recent
version of MicroStrategy, it is recommended that you use a different
report data normalization technique. If the user account for the data
warehouse has permissions to create tables, switch to the option
Normalize report data in the database. This option is described
below. If the user account does not have permissions to create tables,
switch to the option Normalize report data in Intelligence Server.
To use this option, the user account for the database must have
permissions to create tables.
To use this option, the user account for the database must have
permissions to create tables.
Dimensionality Model
Dimensionality Model is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.
l Use relational model (default): For all projects, Use relational model is
the default value. With the Use relational model setting, all the
dimensionality (level) resolution is based on the relationship between
attributes.
l Use dimensional model: The Use dimensional model setting is for cases
where attribute relationship dimensionality (level) resolution is different
from dimension-based resolution. There are very few cases when the
setting needs to be changed to Use dimensional model. The following
situations may require the Use dimensional model setting:
l Metric Conditionality: You have a report with the Year attribute and the
"Top 3 Stores Dollar Sales" metric on the template and the filters Store,
Region, and Year. Therefore, the metric has a metric conditionality of
"Top 3 Stores."
If you change the default of the Remove related report filter element
option in advanced conditionality, the Use dimensional model setting
does not make a difference in the report. For more information
regarding this advanced setting, see the Metrics chapter in the
Advanced Reporting Help.
l Analysis level calculation: For the next situation, consider the following
split hierarchy model.
Market and State are both parents of Store. A report has the attributes
Market and State and a Dollar Sales metric with report level
dimensionality. In MicroStrategy 7.x and later, with the Use relational
model setting, the report level (metric dimensionality level) is Market
and State. To choose the best fact table to use to produce this report,
the Analytical Engine considers both of these attributes. With the Use
dimensional model setting in MicroStrategy 7.x and later, Store is
used as the metric dimensionality level and for determining the best
fact table to use. This is because Store is the highest common
descendent between the two attributes.
The Engine Attribute Role Options property allows you to share an actual
physical table to define multiple schema objects. There are two approaches
for this feature:
l The first approach is a procedure called table aliasing, where you can
define multiple logical tables in the schema that point to the same physical
table, and then define different attributes and facts on these logical tables.
Table aliasing provides you with a little more control and is best when
upgrading or when you have a complex schema. Table aliasing is
described in detail in the Project Design Help.
l The second approach is called Engine Attribute Role. With this approach,
rather than defining multiple logical tables, you only need to define
multiple attributes and facts on the same table. The MicroStrategy Engine
automatically detects "multiple roles" of certain attributes and splits the
table into multiple tables internally. There is a limit on the number of
tables into which a table can split. This limit is known as the Attribute Role
limit. This limit is hard coded to 128 tables. If you are a new MicroStrategy
user starting with 7i or later, it is suggested that you use the automatic
detection (Engine Attribute Role) option.
l If two attributes are defined on the same column from the same table, have
the same expression, and are not related, it is implied that they are playing
different roles and must be in different tables after the split.
l If two attributes are related to each other, they must stay in the same table
after the split.
Given the diversity of data modeling in projects, the above algorithm cannot
be guaranteed to split tables correctly in all situations. Thus, this property is
added in the VLDB properties to turn the Engine Attribute Role on or off.
When the feature is turned off, the table splitting procedure is bypassed.
Fact table FT1 contains the columns "Order_Day," "Ship_Day," and "Fact_
1." Lookup table LU_DAY has columns "Day," "Month," and "Year."
Attributes "Ship Day" and "Order Day" are defined on different columns in
FT1, but they share the same column ("Day") on LU_DAY. Also the attributes
"Ship Month" and "Order Month" share the same column "month" in LU_DAY.
The "Ship Year" and "Order Year" attributes are the same as well. During the
schema loading, the Analytical Engine detects the duplicated definitions of
attributes on column "Day," "Month," and "Year." It automatically splits LU_
DAY into two internal tables, LU_DAY(1) and LU_DAY(2), both having the
same physical table name LU_DAY. As a result, the attributes "Ship Day,"
"Ship Month," and "Ship Year" are defined on LU_DAY(1) and "Order Day,"
"Order Month," and "Order Year" are defined on LU_DAY(2). Such table
splitting allows you to display Fact_1 that is ordered last year and shipped
this year.
select a1.fact_1
from FT1 a1 join LU_DAY a2 on (a1.order_day=a2.day)
join LU_DAY a3 on (a1.ship_day = a3.day)
where a2.year = 2002 and
a3.year = 2003
Note that LU_DAY appears twice in the SQL, playing different "roles." Also,
note that in this example, the Analytical Engine does not split table FT1
because "Ship Day" and "Order Day" are defined on different columns.
Fact table FT1 contains columns "day" and "fact_1." "Ship Day" and "Order
Day" are defined on column "day." The Analytical Engine detects that these
two attributes are defined on the same column and therefore splits FT1 into
FT1(1) and FT1(2), with FT1(1) containing "Ship Day" and "Fact 1", and
FT1(2) containing "Order Day" and "Fact 1." If you put "Ship Day" and "Order
Day" on the template, as well as a metric calculating "Fact 1," the Analytical
Engine cannot find such a fact. Although externally, FT1 contains all the
necessary attributes and facts, internally, "Fact 1" only exists on either "Ship
Day" or "Order Day," but not both. In this case, to make the report work
(although still incorrectly), you should turn OFF the Engine Attribute Role
feature.
l If this property is turned ON, and you use this feature incorrectly, the
most common error message from the Analytical Engine is
l This feature is turned OFF by default starting from 7i Beta 2. Before that,
this feature was turned OFF for upgraded projects and turned ON by
default for new projects. So for some 7i beta users, if you create a new
metadata using the Beta1 version of 7i, this feature may be turned on in
your metadata.
l While updating the schema, if the Engine Attribute Role feature is ON,
and if the Attribute Role limit is exceeded, you may get an error message
from the Engine. You get this error because there is a limit on the number
of tables into which a given table can be split internally. In this case, you
should turn the Engine Attribute Role feature OFF and use table aliasing
instead.
The Maximum Parallel Queries Per Report property determines how many
queries can be executed in parallel as part of parallel query execution
support. By default, a maximum of two queries can be executed in parallel,
and you can increase this number to perform additional queries in parallel.
For data that is integrated into MicroStrategy using Data Import, the default
maximum number of queries that can be executed in parallel is five. When
determining this maximum, consider the following:
l When multiple queries are executed in parallel, this means that the actual
processing of the multiple queries is performed in parallel on the
database. If a database is required to do too many tasks at the same time
this can cause the response time of the database to slow down, and thus
degrade the overall performance. You should take into account the
databases used to retrieve data and their available resources when
deciding how to restrict parallel query execution.
Project only
There are multiple ways to generate a SELECT statement that checks for the
data, but the performance of the query can differ depending on the platform.
The default value for this property is: "select count(*) …" for all database
platforms, except UDB, which uses "select distinct 1…"
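As a hedged illustration of the two forms (the table name below is hypothetical), such a check query might look like either of the following:

-- default form used for most platforms
select count(*)
from LU_REGION

-- form used for UDB
select distinct 1
from LU_REGION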
You can specify a secondary database instance for a table, which is used
to support database gateways. For example, in your environment you
might have a gateway between two databases such as an Oracle database
and a DB2 database. One of them is the primary database and the other is
the secondary database. The primary database receives all SQL requests
and passes them to the correct database.
Any object using a data source other than the primary data source is
considered as having multiple data sources. Therefore, the Execute Report
that uses multiple data sources privilege is required. This rule also
applies to scenarios when the object uses data only from a non-primary data
source.
This behavior does not correctly use multiple passes for nested or sibling
metrics that use OLAP functions. It also does not correctly apply attributes
in the SortBy and BreakBy parameters.
Disabling parallel query execution by default allows you to first verify that
your reports and Intelligent Cubes are executing correctly prior to any
parallel query optimization. If you enable parallel query execution and
errors are encountered or data is not being returned as expected,
disabling parallel query execution can help to troubleshoot the report or
Intelligent Cube.
For reports and Intelligent Cubes that do not use MultiSource Option or
database gateway support to access multiple data sources, all queries are
processed sequentially.
l Enable parallel query execution for all reports that support it:
MicroStrategy attempts to execute multiple queries in parallel for all
MicroStrategy reports and Intelligent Cubes. This option is automatically
used for data that you integrate into MicroStrategy using Data Import.
How Parallel Query Execution is Supported
l The creation of tables to store intermediate results, which are then used
later in the same query.
If your report or Intelligent Cube uses any of the features listed above, it
may be a good candidate for using parallel query execution. Additionally,
using parallel query execution can be a good option for Intelligent Cubes
that are published during off-peak hours when the system is not in heavy use
by the reporting community. Using parallel query execution to publish these
Intelligent Cubes can speed up the publication process, while not affecting
the reporting community for your system.
There are some scenarios where parallel query execution cannot be used.
These are described below:
Parallel query execution is disabled by default to allow you to first verify that
your reports and Intelligent Cubes are executing correctly prior to any
parallel query optimization. If you enable parallel query execution and errors
are encountered or data is not being returned as expected, disabling parallel
query execution can help to troubleshoot the report or Intelligent Cube.
When multiple queries are performed in parallel, the actual processing of the
multiple queries is performed in parallel on the database. If a database is
required to do too many tasks at the same time this can cause the response
time of the database to slow down, and thus degrade the overall
performance. You should take into account the databases used to retrieve
data and their available resources when deciding whether to enable parallel
query execution.
Disabling parallel query execution can be a good option for reports and
Intelligent Cubes that are not used often or ones that do not have strict
performance requirements. If you can disable parallel query execution for
these reports and Intelligent Cubes that do not have a great need for
enhanced performance, that can save database resources to handle other
potentially more important requests.
Additionally, you can limit the number of queries that can be executed in
parallel for a given report or Intelligent Cube. This can allow you to enable
parallel query execution, but restrict how much processing can be done in
parallel on the database. To define the number of passes of SQL that can be
executed in parallel, see Maximum Parallel Queries Per Report, page 1830.
Be aware that this estimate does not factor in the capabilities of the
database you are using, which can have an effect on the performance of
parallel query execution since the database is what processes the multiple
passes in parallel. Additionally, this estimate assumes that all queries that
can be done in parallel are in fact performed in parallel. If parallel query
execution is enabled, the number of queries that can be performed in
parallel is controlled by the Maximum Parallel Queries Per Report VLDB
property (see Maximum Parallel Queries Per Report, page 1830).
Project only
2. If the database supports the Rank function, then the ranking is done in
the database.
3. If neither of the above criteria is met, then the Rank Method property
setting is used.
For example, the report shown in the image below was created in the
MicroStrategy Tutorial project.
To create this report, data must be joined from the tables LU_MONTH, LU_
CUST_CITY, and CITY_MNTH_SLS. Since the attribute lookup tables
combine to have a level of Customer City and Month, and the CITY_MNTH_
SLS table has a level of Customer City and Month, normally this VLDB
property would have no effect on the SQL. However, for the purposes of
this example the LU_MONTH table was modified to include an extra
attribute named Example, and it is not related to the Month attribute.
The SQL statement above uses DISTINCT in the SELECT clause to return
the Month data. However, since there is an additional attribute on the LU_
MONTH table, the correct SQL to use includes aggregations on the data
rather than using DISTINCT. Therefore, if you use this Remove
aggregation according to key of FROM clause option for the VLDB
property, the following SQL is created:
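The exact SQL depends on the report definition. As an illustrative sketch only (the column names below are assumptions, while the table names follow the example above), aggregation-based SQL of this kind has roughly the following shape:

select a12.MONTH_ID MONTH_ID,
       max(a12.MONTH_DESC) MONTH_DESC,
       sum(a11.TOT_DOLLAR_SALES) WJXBFS1
from CITY_MNTH_SLS a11
join LU_MONTH a12
  on (a11.MONTH_ID = a12.MONTH_ID)
group by a12.MONTH_ID

Here the extra, unrelated attribute on LU_MONTH is handled by aggregating the description form with max() and grouping on the key, rather than relying on SELECT DISTINCT.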
The Remove Repeated Tables For Outer Joins property determines whether
an optimization for outer join processing is enabled or disabled. You have
the following options:
However, if you sort or rank report results and some of the values used
for the sort or rank are identical, you may encounter different sort or rank
orders depending on whether you disable or enable this optimization. To
preserve current sorting or ranking orders on identical values, you may
want to disable this optimization.
l Relationship qualifications
l Set operators can only be used to combine the filter qualifications listed
above if they have the same output level. For example, a relationship
qualification with an output level set to Year and Region cannot be
combined with another relationship qualification with an output level of
Year.
l Metric qualifications at the same level are combined into one set
qualification before being applied to the final result pass. This is more
efficient than using a set operator. Consult MicroStrategy Tech Note
TN13536 for more details.
l For more information on filters and filter qualifications, see the Advanced
Filters section of the MicroStrategy Advanced Reporting Guide.
Along with the restrictions described above, SQL set operators also depend
on the subquery type and the database platform. For more information on
sub query type, see Set Operator Optimization, page 1843. Set Operator
Optimization can be used with the following sub query types:
l Use Temporary Table, falling back to IN (SELECT COL) for correlated sub
query
If either of the two sub query types that use fallback actions perform a
fallback, Set Operator Optimization is not applied.
Support for set operators varies by database platform. For example, Oracle supports several set operators (using MINUS in place of EXCEPT), while Tandem does not support any set operators.
If you enable Set Operator Optimization for a database platform that does
not support operators such as EXCEPT and INTERSECT, the Set Operator
Optimization property is ignored.
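As a minimal sketch of the kind of SQL this optimization enables (the tables and thresholds below are hypothetical), two filter qualifications with the same output level could be combined with a set operator such as INTERSECT:

-- customers with more than 1000 in sales
select a11.CUSTOMER_ID
from CUSTOMER_SLS a11
group by a11.CUSTOMER_ID
having sum(a11.TOT_DOLLAR_SALES) > 1000
intersect
-- customers with less than 100 in returns
select a12.CUSTOMER_ID
from CUSTOMER_RTRN a12
group by a12.CUSTOMER_ID
having sum(a12.TOT_RETURN_DOLLARS) < 100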
The Set Operator Optimization property provides you with the following
options:
The default option for this VLDB property has changed in 9.0.0. For
information on this change, see SQL Global Optimization, page 1846.
You can set the following SQL Global Optimization options to determine the
extent to which SQL queries are optimized:
l Level 4: Level 2 + Merge All Passes with Different WHERE: This is the
default level. Level 2 optimization takes place as described above, and all
SQL passes with different WHERE clauses are consolidated when it is
appropriate to do so. While Level 3 only consolidates SQL statements that
access database tables, this option also considers SQL statements that
access temporary tables, derived tables, and common table expressions.
l Level 5: Level 2 + Merge All Passes, which hit the same warehouse
fact tables: Level 2 optimization takes place as described above, and
when multiple passes hit the same fact table, a compiled table is created
from the lookup tables of the multiple passes. This compiled table hits the
warehouse fact table only once.
The SQL optimization available with Level 3 or Level 4 can be applied for
SQL passes that use the functions Plus (+), Minus (-), Times (*), Divide (/),
Unary minus (U-), Sum, Count, Avg (average), Min, and Max. To ensure that
valid SQL is returned, if the SQL passes that are generated use any other
functions, the SQL passes are not combined.
Example: Redundant SQL Pass
This example demonstrates how some SQL passes are redundant and
therefore removed when the Level 1 or Level 2 SQL Global Optimization
option is selected.
l Year attribute
l Region attribute
l SQL Pass 1: Retrieves the set of categories that satisfy the metric
qualification
l SQL Pass 2: Final pass that selects the related report data, but does not
use the results of the first SQL pass:
If you select either the Level 1: Remove Unused and Duplicate Passes or
Level 2: Level 1 + Merge Passes with different SELECT option, only one
SQL pass—the second SQL pass described above—is generated because it
is sufficient to satisfy the query on its own. By selecting either option, you
reduce the number of SQL passes from two to one, which can potentially
decrease query time.
Sometimes, two or more passes contain SQL that can be consolidated into a
single SQL pass, as shown in the example below. In such cases, you can
select the Level 2: Level 1 + Merge Passes with different SELECT option
to combine multiple passes from different SELECT statements.
l Region attribute
l SQL Pass 3: Final pass that calculates Metric 3 = Metric 1/Metric 2 and
displays the result:
Because SQL passes 1 and 2 contain almost exactly the same code, they
can be consolidated into one SQL pass. Notice the italicized SQL in Pass 1
and Pass 2. These are the only unique characteristics of each pass;
therefore, Pass 1 and 2 can be combined into just one pass. Pass 3 remains
as it is.
You can achieve this type of optimization by selecting the Level 2: Level 1
+ Merge Passes with different SELECT option. The SQL that results from
this level of SQL optimization is as follows:
Pass 1:
Pass 2:
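As a hedged sketch (the table and metric columns below are hypothetical), two passes that differ only in their SELECT lists merge into a single pass of roughly this shape:

-- one pass now computes both metrics from the same table and joins
select a11.REGION_ID REGION_ID,
       sum(a11.TOT_DOLLAR_SALES) WJXBFS1,
       sum(a11.TOT_COST) WJXBFS2
from REGION_SLS a11
group by a11.REGION_ID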
Sometimes, two or more passes contain SQL with different where clauses
that can be consolidated into a single SQL pass, as shown in the example
below. In such cases, you can select the Level 3: Level 2 + Merge Passes,
which only hit DB Tables, with different WHERE option or the Level 4:
Level 2 + Merge All Passes with Different WHERE option to combine
multiple passes with different WHERE clauses.
l Quarter attribute
l Metric 1 = Web Sales (Calculates sales for the web call center)
l Metric 2 = Non-Web Sales (Calculates sales for all non-web call centers)
Pass 1:
Pass 2:
Pass 3:
Pass 4:
Pass 5:
Pass 2 calculates the Web Sales and Pass 4 calculates all non-Web Sales.
Because SQL passes 2 and 4 contain almost exactly the same SQL, they can
be consolidated into one SQL pass. Notice the highlighted SQL in Pass 2
and Pass 4. These are the only unique characteristics of each pass;
therefore, Pass 2 and 4 can be combined into just one pass.
You can achieve this type of optimization by selecting the Level 3: Level 2
+ Merge Passes, which only hit DB Tables, with different WHERE option
or the Level 4: Level 2 + Merge All Passes with Different WHERE option.
The SQL that results from this level of SQL optimization is as follows:
Pass 1:
Pass 2:
AS WJXBFS2,
max(iif(a11.[CALL_CTR_ID] not in (18), 1, 0))
AS GODWFLAG2_1
from [DAY_CTR_SLS] a11,
[LU_DAY] a12
where a11.[DAY_DATE] = a12.[DAY_DATE]
and (a11.[CALL_CTR_ID] in (18)
or a11.[CALL_CTR_ID] not in (18))
group by a12.[QUARTER_ID]
Pass 3:
When projects are upgraded to 9.0.x, if you have defined this VLDB property
to use the default setting, this new default is applied. This change improves
performance for the majority of reporting scenarios. However, the new
default can cause certain reports to become unresponsive or fail with time-
out errors. For example, reports that contain custom groups or a large
number of conditional metrics may encounter performance issues with this
new default.
To resolve this issue for a report, after completing an upgrade, modify the
SQL Global Optimization VLDB property for the report to use the option
Level 2: Level 1 + Merge Passes with different SELECT.
The Sub Query Type property tells the Analytical Engine what type of syntax
to use when generating a subquery. A subquery is a secondary SELECT
statement in the WHERE clause of the primary SQL statement.
The Sub Query Type property is database specific, due to the fact that
different databases have different syntax support for subqueries. Some
databases can have improved query building and performance depending on
the subquery type used. For example, it is more efficient to use a subquery
that only selects the needed columns rather than selecting every column.
Subqueries can also be more efficient by using the IN clause rather than
using the EXISTS function.
To review example SQL syntax for each VLDB setting for Sub Query Type, see
WHERE EXISTS (SELECT *…), page 1857.
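As a hedged illustration of the difference (the tables and columns below are hypothetical), the same qualification can be expressed either with EXISTS or with IN:

-- EXISTS form: correlated subquery selecting every column
select a11.STORE_ID STORE_ID,
       sum(a11.TOT_DOLLAR_SALES) WJXBFS1
from STORE_SLS a11
where exists
  (select *
   from STORE_PROMO s1
   where s1.STORE_ID = a11.STORE_ID)
group by a11.STORE_ID

-- IN form: subquery selecting only the needed column
select a11.STORE_ID STORE_ID,
       sum(a11.TOT_DOLLAR_SALES) WJXBFS1
from STORE_SLS a11
where a11.STORE_ID in
  (select s1.STORE_ID
   from STORE_PROMO s1)
group by a11.STORE_ID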
Database Default
Microsoft Access Use Temporary Table, falling back to EXISTS (SELECT *...) for
2000/2002/2003 correlated subquery
Microsoft Excel Use Temporary Table, falling back to EXISTS (SELECT *...) for
2000/2003 correlated subquery
Notice that some options have a fallback action. In some scenarios, the
selected option does not work, so the SQL Engine must fall back to an
approach that always works. The typical scenario for falling back is when
multiple columns are needed in the IN list but the database does not
support them, or when a correlated subquery is required.
For a further discussion of the Sub Query Type VLDB property, refer to
MicroStrategy Tech Note TN13870.
WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT * ...) for
multiple columns IN
Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated
subquery (default)
WHERE COL1 IN (SELECT s1.COL1...) falling back to EXISTS (SELECT col1, col2
...) for multiple columns IN
Use Temporary Table, falling back to IN (SELECT COL) for correlated subquery
l No attributes on the report grid or the Report Objects of the report are
related to the transformation's member attribute. For example, if a
transformation is defined on the attribute Year of the Time hierarchy, no
attributes in the Time hierarchy can be included on the report grid or
Report Objects.
l The filter of the report does contain attributes that are related to the
transformation's member attribute. For example, if a transformation is
The SQL statements shown below display a SQL statement before (Statement
1) and after (Statement 2) applying the transformation optimization.
Statement 1
WJXBFS1
from ORDER_DETAIL a11
join LU_DAY a12
on (a11.ORDER_DATE = a12.DAY_DATE - 1 YEAR)
join LU_ITEM a13
on (a11.ITEM_ID = a13.ITEM_ID)
join LU_SUBCATEG a14
on (a13.SUBCAT_ID = a14.SUBCAT_ID)
join LU_CATEGORY a15
on (a14.CATEGORY_ID = a15.CATEGORY_ID)
where a12.DAY_DATE = '08/31/2021'
group by a14.CATEGORY_ID
Statement 2
MicroStrategy contains the logic to ignore filter qualifications that are not
related to the template attributes, to avoid unnecessary Cartesian joins.
However, in some cases a relationship is created that should not be ignored.
The Unrelated Filter Options property determines whether to remove or keep
unrelated filter qualifications that are included in the report's filter or through
the use of joint element lists. This VLDB property has the following options:
For example, you have a report with a filter on the Country attribute, and the
Year attribute is on the report template. This example assumes that no
relationship between Country and Year is defined in the schema. In this
case, the filter is removed regardless of this VLDB property setting. This is
because the filter qualification does not include any attributes that could
be related to the attributes on the report.
l Report filters:
l Keep unrelated filter and put condition from unrelated attributes in one
subquery group: The filter qualifications on Country are included on the
report and in the report SQL, as shown below:
For the example explained above, the metric includes the Region attribute
(through the use of Region@ID) and the report filter includes the Category
attribute. Since the Category attribute is unrelated to the Region attribute, it
l Use the 8.1.x behavior (default): Select this option to use the behavior in
MicroStrategy 8.1.x. In the example described above, this returns the
following SQL statement, which has been abbreviated for clarity:
While the unrelated filter qualification is kept in the first pass of SQL, it
is removed from the second pass of SQL. This means that the filtering
on Category is applied to the inner aggregation that returns a
summation of revenue for the Northeast region only. However, the
filtering on category is not used in the final summation.
l Use the 9.0.x behavior: Select this option to use the behavior in
MicroStrategy 9.0.x. In the example described above, this returns the
following SQL statement, which has been abbreviated for clarity:
By using the 9.0.x behavior, the unrelated filter qualification is kept in both
SQL passes. This means that the filtering on category is applied to the
inner aggregation that returns a summation of revenue for the Northeast
region only. The filtering on category is also used in the final summation.
If Use lookup table is selected, but there is no lookup table in the FROM
clause for the column being qualified on, the Analytical Engine does not add
the lookup table to the FROM clause. To make sure that a qualification is
done on a lookup table column, the DSS Star Join property should be set to
use Partial star join.
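As a rough sketch of a partial star join (the tables and columns below are hypothetical), the lookup table is added to the FROM clause so that the qualification is applied to the lookup column rather than to a fact table column:

select a11.STORE_ID STORE_ID,
       sum(a11.TOT_DOLLAR_SALES) WJXBFS1
from STORE_SLS a11
join LU_STORE a12
  on (a11.STORE_ID = a12.STORE_ID)   -- lookup joined so the filter hits the lookup column
where a12.STORE_NAME = 'Store 1'
group by a11.STORE_ID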
Attribute Form Selection Option for Intermediate Passes
Description: Allows you to choose whether to select attribute forms that are on the template in the intermediate pass (if available).
Possible values: Select ID form only; Select ID and other forms if they are on template and available in existing join tree.
Default value: Select ID form only.
Attribute Selection Option for Intermediate Pass
Description: Allows you to choose whether to select additional attributes (usually parent attributes) needed on the template, as the join tree and their child attributes have already been selected by the Attribute Form Selection option for intermediate passes.
Possible values: (Default) Select only the attributes needed; Select other attributes in current join tree if they are on template.
Default value: Select only the attributes needed.
Constant Column Mode
Description: Allows you to choose whether to use a GROUP BY and how the GROUP BY should be constructed when working with a column that is a constant.
Possible values: Pure select, no group by; Use max, no group by; Group by column (expression); Group by alias; Group by position.
Default value: Pure select, no group by.
Custom Group Interaction with the Report Filter
Description: Allows you to define how a report filter interacts with a custom group.
Possible values: No interaction - static custom group; Apply report filter to custom group; Apply report filter to custom group, but ignore related elements from the report filter.
Default value: No interaction - static custom group.
Data Mart Column Order
Description: Allows you to determine the order in which data mart columns are created.
Possible values: Columns created in order based on attribute weight; Columns created in order in which they appear on the template.
Default value: Columns created in order based on attribute weight.
Decimal Separator
Description: Use to change the decimal separator in SQL statements from a decimal point to a comma, for international database users.
Possible values: Use "." as decimal separator (ANSI standard); Use "," as decimal separator.
Default value: Use "." as decimal separator (ANSI standard).
Disable Prefix in WH Partition Table
Description: Allows you to choose whether or not to use the prefix in partition queries. The prefix is always used with pre-queries.
Possible values: (Default) Use prefix in both warehouse partition pre-query and partition query; Use prefix in warehouse partition prequery but not in partition query.
Default value: Use prefix in both warehouse partition pre-query and partition query.
GROUP BY ID Attribute
Description: Determines how to group by a selected ID column when an expression is performed on the ID expression.
Possible values: Group by expression; Group by alias; Group by column; Group by position.
Default value: Group by expression.
Merge Same Metric Expression Option
Description: Determines how to handle metrics that have the same definition.
Possible values: Merge same metric expression; Do not merge same metric expression.
Default value: Merge same metric expression.
Select Statement Post String
Description: Defines the custom SQL string to be appended to the final SELECT statement.
Possible values: User-defined.
Default value: NULL.
A report template contains the attributes Region and Store, and metrics M1 and
M2. M1 uses the fact table FT1, which contains Store_ID, Store_Desc, Region_
ID, Region_Desc, and f1. M2 uses the fact table FT2, which contains Store_ID,
Store_Desc, Region_ID, Region_Desc, and F2. With the normal SQL Engine
algorithm, the intermediate pass that calculates M1 selects Store_ID and F1,
the intermediate pass that calculates M2 selects Store_ID and F2. Then the
final pass joins these two intermediate tables together. But that is not enough.
Since Region is on the template, it should join upward to the region level and
find the Region_Desc form. This can be done by joining either FT1 or FT2 in
the final pass. So with the original algorithm, either FT1 or FT2 is being
accessed twice. If these tables are big, and they usually are, the performance
can be very slow. On the other hand, if Store_ID, Store_Desc, Region_ID, and
For this reason, the following two properties are available in MicroStrategy:
l These two properties work independently. One does not influence the
other.
l Each property has two values. The default behavior is the original
algorithm.
l The SQL Engine does not join additional tables to select more attributes
or forms. So for intermediate passes, the number of tables to be joined
is the same as when the property is disabled.
The Bulk Insert String property appends the string provided in front of the
INSERT statement. For Teradata, this property is set to ";" to increase query
performance. The string is appended only for the INSERT INTO SELECT
statements and not the INSERT INTO VALUES statement that is generated
by the Analytical Engine. Since the string is appended for the INSERT INTO
SELECT statement, this property takes effect only during explicit,
permanent, or temporary table creation.
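As a minimal sketch (the table names below are hypothetical), the ";" configured for Teradata is simply prepended to the generated INSERT INTO ... SELECT statement:

-- the leading ';' is the Bulk Insert String supplied for Teradata
;insert into ZZMD00
select a11.REGION_ID, sum(a11.TOT_DOLLAR_SALES)
from REGION_SLS a11
group by a11.REGION_ID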
GROUP BY alias
GROUP BY position
In this scenario, the report filter is evaluated after the custom group. If the
same customer that has a total of $7,500 only had $2,500 in 2007, then the
report would only display $2,500 for that customer. However, the customer
would still be in the $5,000 to $10,000 in revenue range because the custom
group did not account for the report filter.
You can define report filter and custom group interaction to avoid this
scenario. This VLDB property has the following options:
l Apply report filter to custom group, but ignore related elements from
the report filter: Report filter qualifications that do not qualify on attribute
elements that are used to define the custom group elements are applied to
custom groups. These filter qualifications are used to determine the values
for each custom group element. For example, a report filter that qualifies
on the Customer attribute is not applied to a custom group that also uses
the Customer attribute to define its custom group elements.
For information on custom groups and defining these options for a custom
group, see the Advanced Reporting Help.
Database instance
l Only ODBC: Standard methods are used to retrieve data. This option must
be used in all cases, except for connections that are expected to make use
of the Teradata Parallel Transporter API.
l Allow Native API: Third-party native APIs can be used to retrieve data.
MicroStrategy supports the use of the Teradata Parallel Transporter API.
Enabling Teradata Parallel Transporter can improve performance when
retrieving large amounts of data from Teradata, typically 1 Gigabyte and
larger, which can occur most commonly in MicroStrategy when publishing
Intelligent Cubes.
You can also select this VLDB property option for the database instance
for Teradata connections that are not created through the use of Data
Import.
For this VLDB property to take effect, you must define the Data Retrieval
Mode VLDB property (see Data Retrieval Parameters, page 1879) as Allow
Native API. You can then define the required parameters to retrieve data
using the third-party, native API. For example, you can enable Teradata
Parallel Transporter by defining the following parameters:
When providing the parameters and their values, each parameter must be of
the form:
ParameterName=ParameterValue
TD_TDP_ID=123.45.67.89;TD_MAX_SESSIONS=3;TD_MIN_SESSIONS=1;TD_
MAX_INSTANCES=3
You can also define this VLDB property for the database instance for
Teradata connections that are not created through the use of Data Import.
Date Format
The Date Format VLDB property specifies the format of the date string literal
in the SQL statements when date-related qualifications are present in the
report.
Default: yyyy-mm-dd
Oracle: dd-mmm-yy
Teradata: yyyy/mm/dd
Date Pattern
Date Pattern is an advanced VLDB property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.
The Date Pattern VLDB property is used to add or alter a syntax pattern for
handling date columns.
Tandem (d'#0')
Decimal Separator
The Decimal Separator VLDB property specifies whether a "." or "," is used
as a decimal separator. This property is used for non-English databases that
use commas as the decimal separator.
Use the Default Attribute Weight property to determine how attribute weights
should be treated, for those attributes that are not in the attribute weights
list.
You can access the attribute weights list from the Project Configuration
Editor. In the Project Configuration Editor, expand Report Definition and
select SQL generation. From the Attribute weights section, click Modify to
open the attribute weights list.
The attribute weights list allows you to change the order of attributes used in
the SELECT clause of a query. For example, suppose the Region attribute is
placed higher on the attribute weights list than the Customer State attribute.
When the SQL for a report containing both attributes is generated, Region is
referenced in the SQL before Customer State. However, suppose another
attribute, Quarter, also appears on the report template but is not included in
the attribute weights list.
In this case, you can select either of the following options within the Default
Attribute Weight property to determine whether Quarter is considered
highest or lowest on the attribute weights list:
l Lowest: When you select this option, those attributes not in the attribute
weights list are treated as the lightest weight. Using the example above,
with this setting selected, Quarter is considered to have a lighter attribute
weight than the other two attributes. Therefore, it is referenced after
Region and Customer State in the SELECT statement.
l Highest (default): When you select this option, those attributes not in the
attribute weights list are treated as the highest weight. Using the example
above, with this setting selected, Quarter is considered to have a higher
attribute weight than the other two attributes. Therefore, it is referenced
before Region and Customer State in the SELECT statement.
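Continuing the example above, the following is a rough sketch only (the table aliases and column names are hypothetical) of how the Highest setting orders the generated SELECT list, with Quarter referenced first:

-- Highest (default): Quarter appears before Region and Customer State
select a12.QUARTER_ID QUARTER_ID,
       a11.REGION_ID REGION_ID,
       a11.CUST_STATE_ID CUST_STATE_ID,
       sum(a11.TOT_DOLLAR_SALES) WJXBFS1
from STATE_QTR_SLS a11
join LU_QUARTER a12
  on (a11.QUARTER_ID = a12.QUARTER_ID)
group by a12.QUARTER_ID,
         a11.REGION_ID,
         a11.CUST_STATE_ID

-- With Lowest selected, the same statement would reference Quarter after Region and Customer State.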
For those projects that need their own prefix in the PBT, the MicroStrategy
6.x approach (using the DDBSOURCE column) no longer works due to
architectural changes. The solution is to store the prefix along with the PBT
name in the column PBTNAME of the partition mapping table. So instead of
storing PBT1, PBT2, and so on, you can put in DB1.PBT1, DB2.PBT2, and
so on. This effectively adds a different prefix to different PBTs by treating
the entire string as the partition base table name.
The solution above works in most cases but does not work if the PMT needs
its own prefix. For example, if the PMT has the prefix "DB0.", the prequery
works fine. However, in the partition query, this prefix is added to what is
stored in the PBTNAME column, so it gets DB0.DB1.PBT1, DB0.DB1.PBT2,
and so on. This is not what you want to happen. This new VLDB property is
used to disable the prefix in the WH partition table. When this property is
turned on, the partition query no longer shares the prefix from the PMT.
Instead, the PBTNAME column (DB1.PBT1, DB2.PBT2, and so on) is used
as the full PBT name.
Even when this property is turned ON, the partition prequery still applies a
prefix, if there is one.
l No DISTINCT, no GROUP BY
l Use GROUP BY
If you are using a Vertica database that includes correlated subqueries, to support the
use of the Use GROUP By option listed above, you must also define the Sub Query
Type VLDB property (see Optimizing Queries, page 1791) to use either of the following
options:
Use Temporary Table, falling back to EXISTS (SELECT *...) for correlated subquery
Use Temporary Table, falling back to IN (SELECT COL) for correlated subquery
Upon selecting an option, a sample SQL statement shows the effect that
each option has.
The SQL Engine ignores the option selected for this property in the following
situations:
l If there is COUNT (DISTINCT …) and the database does not support this
functionality, a SELECT DISTINCT pass of SQL is used, which is followed
by a COUNT(*) pass of SQL.
l If the database does not allow DISTINCT or GROUP BY for certain column
data types, DISTINCT and GROUP BY are not used.
l If the select level is the same as the table key level and the table's true
key property is selected, DISTINCT is not used.
When none of the above conditions are met, the option selected for this
property determines how DISTINCT and GROUP BY are used in the SQL
statement.
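As a hedged sketch of the SQL shapes this property controls (the lookup table below is hypothetical), the same attribute element request could be generated in any of these forms:

-- no DISTINCT, no GROUP BY
select a11.REGION_ID
from LU_CALL_CTR a11

-- SELECT DISTINCT
select distinct a11.REGION_ID
from LU_CALL_CTR a11

-- GROUP BY
select a11.REGION_ID
from LU_CALL_CTR a11
group by a11.REGION_ID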
GROUP BY ID Attribute
The GROUP BY ID Attribute is an advanced property that is hidden by
default. For information on how to display this property, see Viewing and
Changing Advanced VLDB Properties, page 1630.
The code fragment following each description replaces the section named
group by ID in the following sample SQL statement.
a22.MARKET_NBR * 10
MARKET_ID
a22.MARKET_NBR
max(a15.YEAR_DESC) YEAR_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from MARKET_CLASS a11
join LOOKUP_DAY a12
on (a11.CUR_TRN_DT = a12.CUR_TRN_DT)
join LOOKUP_CLASS a13
on (a11.CLASS_NBR = a13.CLASS_NBR)
join LOOKUP_MARKET a14
on (a11.MARKET_NBR = a14.MARKET_NBR)
join LOOKUP_YEAR a15
on (a12.YEAR_ID = a15.YEAR_ID)
group by a11.MARKET_NBR, a11.CLASS_NBR,
a12.YEAR_ID
Use Group by
statement, enter six # characters in the code. You can get any desired
string with the right number of # characters. Using the # character is the
same as using the ; character.
With this VLDB property you can determine whether long integers are
mapped to a BigInt data type when MicroStrategy creates tables in the
database. A data mart is an example of a MicroStrategy feature that requires
MicroStrategy to create tables in a database.
When long integers from databases are integrated into MicroStrategy, the
Big Decimal data type is used to define the data in MicroStrategy. Long
integers can be of various database data types such as Number, Decimal,
and BigInt.
In the case of BigInt, when data that uses the BigInt data type is integrated
into MicroStrategy as a Big Decimal, this can cause a data type mismatch
when MicroStrategy creates a table in the database. MicroStrategy does not
use the BigInt data type by default when creating tables. This can cause a
data type mismatch between the originating database table that contained
the BigInt and the database table created by MicroStrategy.
You can use the following VLDB settings to support BigInt data types:
l Do not use BigInt (default): Long integers are not mapped as BigInt data
types when MicroStrategy creates tables in the database. This is the
default behavior.
If you use BigInt data types, this can cause a data type mismatch between
the originating database table that contained the BigInt and the database
table created by MicroStrategy.
This setting is a good option if you can ensure that your BigInt data uses
no more than 18 digits. The maximum number of digits that a BigInt can
use is 19. With this option, if your database contains BigInt data that uses
all 19 digits, it is not mapped as a BigInt data type when MicroStrategy
creates a table in the database.
However, using this setting requires you to manually modify the column
data type mapped to your BigInt data. You can achieve this by creating a
column alias for the column of data in the Attribute Editor or Fact Editor in
MicroStrategy. The column alias must have a data type of Big Decimal, a
precision of 18, and a scale of zero. For steps to create a column alias to
modify a column data type, see the Project Design Help.
However, this option can cause an overflow error if you have long integers
that use exactly 19 digits and whose values are greater than the maximum
allowed for a BigInt (9,223,372,036,854,775,807).
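As a hedged illustration of the mismatch described above, assume a warehouse table whose key column is a BigInt; with the default option, a table that MicroStrategy creates (for a data mart, for example) would define that column with a decimal type instead. The table and column names below are hypothetical.
-- Original warehouse table: the key column uses the BigInt data type.
create table ORDER_FACT (
ORDER_ID bigint,
TOT_DOLLAR_SALES decimal(18, 2)
)
-- Table created by MicroStrategy with Do not use BigInt (default): the long
-- integer is written as a decimal column, so the data type no longer matches
-- the BIGINT column in the originating table.
create table DM_ORDER_SALES (
ORDER_ID decimal(18, 0),
TOT_DOLLAR_SALES decimal(18, 2)
)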
The Max Digits in Constant property controls the number of significant digits
that get inserted into columns during Analytical Engine inserts. This is only
applicable to real numbers and not to integers.
Database-specific settings:
l SQL Server: 28
l Teradata: 18
To include a post string only on the final SELECT statement you should use
the Select Statement Post String VLDB property, which is described in
Select Post String, page 1893.
This can be helpful if you use common table expressions with an IBM DB2
database. These common table expressions do not support certain custom
SQL strings. This VLDB property allows you to apply the custom SQL string
to only the final SELECT statement which does not use a common table
expression.
The SQL statement shown below displays an example of where the Select
Post String and Select Statement Post String VLDB properties include their
SQL statements.
with gopa1 as
(select a12.REGION_ID REGION_ID
from CITY_CTR_SLS a11
join LU_CALL_CTR a12
on (a11.CALL_CTR_ID = a12.CALL_CTR_ID)
group by a12.REGION_ID
having sum(a11.TOT_UNIT_SALES) = 7.0
/* select post string */)select
a11.REGION_ID REGION_ID,
a14.REGION_NAME REGION_NAME0,
sum(a11.TOT_DOLLAR_SALES) Revenue
from STATE_SUBCATEG_REGION_SLS a11
join gopa1 pa12
on (a11.REGION_ID = pa12.REGION_ID)
join LU_SUBCATEG a13
on (a11.SUBCAT_ID = a13.SUBCAT_ID)
join LU_REGION a14
on (a11.REGION_ID = a14.REGION_ID)
where a13.CATEGORY_ID in (2)
group by a11.REGION_ID,
a14.REGION_NAME/* select post string */
/* select statement post string */
SQL Hint
The SQL Hint property is used for the Oracle SQL Hint pattern. This string is
placed after the SELECT word in the Select statement. This property can be
used to insert any SQL string that makes sense after the SELECT in a Select
statement, but it is provided specifically for Oracle SQL Hints.
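For example, an Oracle hint string defined with this property appears immediately after the SELECT keyword, similar to the following sketch; the hint, table, and column names are illustrative only.
select /*+ FULL(a11) */
a11.STORE_NBR STORE_NBR,
sum(a11.TOT_SLS_DLR) TOTALSALES
from STORE_DEPARTMENT a11
group by a11.STORE_NBR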
Example
Sybase IQ: hh:nn:ss:lll
Timestamp Format
The Timestamp Format property allows you to determine the format of the
timestamp literal accepted in the WHERE clause. This is a database-specific
property; some examples are shown in the table below.
Example
l DB2: yyyy-mm-dd-hh.nn.ss
l SQL Server
l Teradata
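For example, with the DB2 format shown above, a timestamp literal generated in a WHERE clause would look similar to the following sketch; the table and column names are illustrative only.
select a11.ORDER_ID ORDER_ID
from ORDER_FACT a11
where a11.ORDER_TS = '2024-09-15-10.30.00'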
The Use Column Type Hint for Parameterized Query VLDB property
determines whether the WCHAR data type is used when applicable to return
data accurately while using parameterized queries. This VLDB property has
the following options:
l Enable ODBC Column Type Binding Hint for "WCHAR" and "CHAR":
This option should be used only if you have enabled parameterized
queries in MicroStrategy for your database and data is not being correctly
displayed on reports. This can include viewing question marks in place of
other valid characters. This can occur for Netezza databases.
By selecting this option, the WCHAR data type is used when applicable
so that the data is returned correctly while using parameterized queries.
Character Column Option and National Character Column Option: Defines how to
support multiple character sets used in Teradata. Options: User-defined; NULL.
Commit Level: Sets when to issue a COMMIT statement after creating an
intermediate table. Options: No Commit (default); Post DDL; Post DML; Post DDL
and DML.
Do not apply hexadecimal character transformation to quoted strings
Intermediate Table Type: Determines the type of intermediate (temp) table to
create. Options: Permanent table (default); Derived table; Common table
expression; True temporary table; Temporary view.
Table Descriptor
Table Option
Table Prefix
Table Space
Alias Pattern
Alias Pattern is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.
The Alias Pattern property allows you to alter the pattern for aliasing column
names. Most databases do not need this pattern, because their column
aliases follow the column name with only a space between them. However,
Microsoft Access needs an AS between the column name and the given
column alias. This pattern is automatically set for Microsoft Access users.
This property is provided for customers using the Generic DBMS object
because some databases may need the AS or another pattern for column
aliasing.
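As a sketch of the difference, the same select-list item would be aliased as follows with the default pattern and with the AS pattern used for Microsoft Access; the table and column names are illustrative only.
-- Default pattern: the alias follows the expression, separated by a space.
select sum(a11.TOT_SLS_DLR) TOTALSALES
from STORE_DEPARTMENT a11
-- Pattern with AS, as required by Microsoft Access:
select sum(a11.TOT_SLS_DLR) AS TOTALSALES
from STORE_DEPARTMENT a11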
Attribute ID Constraint
This property is available at the attribute level. You can access this property
by opening the Attribute Editor, selecting the Tools menu, then choosing
VLDB Properties.
When creating intermediate tables in the explicit mode, you can specify the
NOT NULL/NULL constraint during the table creation phase. This takes
effect only when permanent or temporary tables are created in the explicit
table creation mode. Furthermore, it applies only to the attribute columns in
the intermediate tables.
Example
NOT NULL
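A minimal sketch of an explicitly created intermediate table with the NOT NULL constraint applied to its attribute ID columns follows; the table and column names are hypothetical.
create table ZZTIS00EXAMPLE (
DEPARTMENT_NBR integer NOT NULL,
STORE_NBR integer NOT NULL,
TOTALSALES float
)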
MicroStrategy uses two sets of data types to support multiple character sets.
The Char and VarChar data types are used to support a character set. The
NChar and NVarChar data types are used to support a different character
set than the one supported by Char and VarChar. The NChar and NVarChar
data types are commonly used to support the Unicode character set while
Char and VarChar data types are used to support another character set.
You can support the character sets in your Teradata database using these
VLDB properties:
l The Character Column Option VLDB property defines the character set
used for columns that use the MicroStrategy Char or VarChar data types.
If left empty, these data types use the default character set for the
Teradata database user.
If you use the Unicode character set and it is not the default character set
for the Teradata database user, you should define NChar and NVarChar
data types to use the Unicode character set.
For example, your Teradata database uses the Latin and Unicode character
sets, and the default character set for your Teradata database is Latin. In
this scenario you should leave Character Column Option empty so that it
uses the default of Latin. You should also define National Character Column
as CHARACTER SET UNICODE so that NChar and NVarChar data types
support the Unicode data for your Teradata database.
To extend this example, assume that your Teradata database uses the Latin
and Unicode character sets, but the default character set for your Teradata
database is Unicode. In this scenario you should leave National Character
Column Option empty so that it uses the default of Unicode. You should also
define Character Column as CHARACTER SET LATIN so that Char and
VarChar data types support the Latin data for your Teradata database.
The Character Column Option and National Character Column Option VLDB
properties can also support the scenario where two character sets are used,
and Unicode is not one of these character sets. For this scenario, you can
use these two VLDB properties to define which MicroStrategy data types
support the character sets of your Teradata database.
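As a hedged sketch, if Character Column Option is left empty and National Character Column Option is defined as CHARACTER SET UNICODE, a table created by MicroStrategy on Teradata could define its columns as follows; the table and column names are illustrative only.
create table ZZMD00EXAMPLE (
ITEM_NBR integer,
ITEM_DESC varchar(255), -- Char/VarChar column: database user's default character set (Latin)
ITEM_DESC_U varchar(255) character set unicode -- NVarChar column: explicitly Unicode
)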
Column Pattern
Column Pattern is an advanced property that is hidden by default. For
information on how to display this property, see Viewing and Changing
Advanced VLDB Properties, page 1630.
The Column Pattern property allows you to alter the pattern for column
names. Most databases do not need this pattern altered. However, if you are
using a case-sensitive database and need to add double quotes around the
column name, this property allows you to do that.
Example
The standard column pattern is #0.#1. If double quotes are needed, the
pattern changes to:
"#0.#1"
Commit Level
The Commit Level property is used to issue COMMIT statements after the
Data Definition Language (DDL) and Data Manipulation Language (DML)
statements. When this property is used in conjunction with the INSERT MID
Statement, INSERT PRE Statement, or TABLE POST Statement VLDB
properties, the COMMIT is issued before any of the custom SQL passes
specified in the statements are executed. The only DDL statement issued
after the COMMIT is issued is the explicit CREATE TABLE statement.
Commit is issued after DROP TABLE statements even though it is a DDL
statement.
The only DML statement issued after the COMMIT is issued is the INSERT
INTO TABLE statement. If the property is set to Post DML, the COMMIT is
not issued after an individual INSERT INTO VALUES statement; instead, it is
issued after all the INSERT INTO VALUES statements are executed.
The Post DDL COMMIT only shows up if the Intermediate Table Type VLDB
property is set to Permanent tables or Temporary tables and the Table
Creation Type VLDB property is set to Explicit mode.
The Post DML COMMIT only shows up if the Intermediate Table Type VLDB
property is set to Permanent tables, Temporary tables, or Views.
Not all database platforms support COMMIT statements, and some need special
statements to be executed first, so use this property only in projects whose
warehouse tables are in databases that support it.
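As a hedged sketch, with explicit table creation and the Post DDL option, the COMMIT is issued after the explicit CREATE TABLE statement and before the rows are inserted; the table names are illustrative only.
create table ZZTIS00EXAMPLE (
DEPARTMENT_NBR integer,
STORE_NBR integer
)
commit
insert into ZZTIS00EXAMPLE
select a11.DEPARTMENT_NBR,
a11.STORE_NBR
from HARI_STORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR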
Examples
No Commit (default)
HARI_LOOKUP_STORE a13
where a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR and
a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR and
a11.STORE_NBR = a13.STORE_NBR
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
No Commit (default)
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_DEPARTMENT a11,
ZZTIS00H8LCMQ000 pa1,
HARI_LOOKUP_DEPARTMENT a12,
HARI_LOOKUP_STORE a13
where a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR and
a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR and
a11.STORE_NBR = a13.STORE_NBR
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HARI_STORE_DEPARTMENT a11,
ZZTIS00H8LTMQ000 pa1,
HARI_LOOKUP_DEPARTMENT a12,
HARI_LOOKUP_STORE a13
where a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR and
a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR and
a11.STORE_NBR = a13.STORE_NBR
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
This setting is recommended for databases that are used to support fully
functioning MicroStrategy projects.
This option can also be used along with the MultiSource Option feature,
which allows you to access multiple databases in one MicroStrategy
project. You can define your secondary database instances to disallow
CREATE and INSERT statements so that all information is only inserted
into the primary database instance. For information on the MultiSource
Option feature, see the Project Design Help.
You can also use this option to avoid the creation of temporary tables on
databases for various performance or security purposes.
This option does not control the SQL that can be created and executed
against a database using Freeform SQL and Query Builder reports.
To prevent the creation of permanent or temporary tables, you can set the
Fallback Table Type VLDB property to Fail report. This causes reports that
rely on the fallback table type to fail, so it should be used only when it is
necessary to prevent the creation of permanent or temporary tables.
Examples
Intermediate Table Type
This property can have a major impact on the performance of the report.
Permanent tables are usually less optimal. Derived tables, common table
expressions, and true temporary tables usually perform well, but they do not
work in all cases and for all databases. The default setting is permanent
tables, because it works for all databases in all situations. However, based
on your database type, this setting is automatically changed to what is
generally the most optimal option for that platform, although other options
could prove to be more optimal on a report-by-report basis. You can access
the VLDB Properties Editor for the database instance for your database (see
Opening the VLDB Properties Editor, page 1625), and then select the Use
default inherited value check box to determine the default option for your
database.
To help support the use of common table expressions and derived tables,
you can also use the Maximum SQL Passes Before FallBack and Maximum
Tables in FROM Clause Before FallBack VLDB properties. These properties
(described in Maximum SQL Passes Before FallBack, page 1919 and Maximum Tables
in FROM Clause Before FallBack, page 1920) allow you to define when a report
is too complex to use common table expressions or derived tables and must
instead use a fallback table type.
Examples
Derived Table
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HSTORE_DEPARTMENT a11
join (select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HSTORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
) pa1
on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR)
join HLOOKUP_DEPARTMENT a12
on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR)
join HLOOKUP_STORE a13
on (a11.STORE_NBR = a13.STORE_NBR)
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
Common Table Expression
with pa1 as
(select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
a11.STORE_NBR STORE_NBR
from HSTORE_DEPARTMENT a11
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
having sum(a11.TOT_SLS_DLR) > 100000
)
select a11.DEPARTMENT_NBR DEPARTMENT_NBR,
max(a12.DEPARTMENT_DESC) DEPARTMENT_DESC,
a11.STORE_NBR STORE_NBR,
max(a13.STORE_DESC) STORE_DESC,
sum(a11.TOT_SLS_DLR) TOTALSALES
from HSTORE_DEPARTMENT a11
join pa1
on (a11.DEPARTMENT_NBR = pa1.DEPARTMENT_NBR and
a11.STORE_NBR = pa1.STORE_NBR)
join HLOOKUP_DEPARTMENT a12
on (a11.DEPARTMENT_NBR = a12.DEPARTMENT_NBR)
join HLOOKUP_STORE a13
on (a11.STORE_NBR = a13.STORE_NBR)
group by a11.DEPARTMENT_NBR,
a11.STORE_NBR
Temporary Table
Views
Maximum SQL Passes Before FallBack
Using common table expressions or derived tables can often provide good
performance for reports. However, some production environments have
shown better performance when using temporary tables for reports that
require multi-pass SQL.
To support the use of the best table type for each type of report, you can use
the Maximum SQL Passes Before FallBack VLDB property to define how
many passes are allowed for a report that uses intermediate tables. If a
report uses more passes than are defined in this VLDB property, the table
type defined in the Fallback Table Type VLDB property (see Fallback Table
Type, page 1913) is used rather than the table type defined in the
Intermediate Table Type VLDB property (see Intermediate Table Type, page
1915).
For example, you define the Intermediate Table Type VLDB property to use
derived tables for the entire database instance. This default is then used for
all reports within that database instance. You also define the Fallback Table
Type VLDB property to use temporary tables as the fallback table type. For
your production environment, you define the Maximum SQL Passes Before
FallBack VLDB property to use the fallback table type for all reports that use
more than five passes.
A report is executed. The report requires six passes of SQL to return the
required report results. Usually this type of report would use derived tables,
as defined by the Intermediate Table Type VLDB property. However, since it
uses more passes than the limit defined in the Maximum SQL Passes Before
FallBack VLDB property, it must use the fallback table type. Since the
Fallback Table Type VLDB property is defined as temporary tables, the
report uses temporary tables to perform the multi-pass SQL and return the
report results.
Maximum Tables in FROM Clause Before FallBack
Using common table expressions or derived tables can often provide good
performance for reports. However, some production environments have
shown better performance when using temporary tables for reports that
require joining a large amount of database tables.
To support the use of the best table type for each type of report, you can use
the Maximum Tables in FROM Clause Before FallBack VLDB property (see
Fallback Table Type, page 1913) to define how many tables are allowed in a
From clause for a report that uses intermediate tables. If a report uses more
tables in a From clause than are defined in this VLDB property, the table
type defined in the Fallback Table Type VLDB property is used rather than
the table type defined in the Intermediate Table Type VLDB property (see
Intermediate Table Type, page 1915).
For example, you define the Intermediate Table Type VLDB property to use
derived tables for the entire database instance. This default is then used for
all reports within that database instance. You also define the Fallback Table
Type VLDB property to use temporary tables as the fallback table type. For
your production environment, you define the Maximum Tables in FROM
Clause Before FallBack VLDB property to use the fallback table type for all
reports that use more than seven tables in a From clause.
A report is executed. The report requires a SQL statement that includes nine
tables in the From clause. Usually this type of report would use derived
tables, as defined by the Intermediate Table Type VLDB property. However,
since it uses more tables in the From clause than the limit defined in the
Maximum Tables in FROM Clause Before FallBack VLDB property, it must use the
fallback table type. Since the Fallback Table Type VLDB property is defined as
temporary tables, the report uses temporary tables to perform the SQL and
return the report results.
l Permanent Table: When the queries for a report or Intelligent Cube are
performed in parallel, any intermediate tables are created as permanent
tables.
If you select this option and derived tables cannot be created for your
database, permanent tables are created instead.
Quoting Behavior
The Quoting Behavior property controls whether a project uses unified quoting
for all identifiers. You must upgrade your metadata to 2020 and set the Data
Engine version to 12 to enable this feature. Upgrading the metadata enables
Unified Quoting in all projects in the metadata; all supported DBMSs have the
correct quoting patterns, and all database instances set to supported DBMSs
inherit those patterns. For more information, see Unified Quoting Behavior for
Warehouse Identifiers.
Project
Exam ple
You have the query select col name from t1. The column name is col name, but
the database interprets the query as "get the column named col and alias it
as name." When Quoting Behavior is enabled, the query is changed to
select "col name" from "t1", which identifies the correct column.
Exam ples
Implicit table
where a21.STORE_NBR = 1
group by a21.STORE_NBR
For platforms like Teradata and DB2 UDB 6.x and 7.x versions, the Primary
Index or the Partition Key SQL syntax is placed between the Table Space and
Create Post String VLDB properties.
You must upgrade your metadata to 2020 and set the Data Engine version to 12
to enable this feature. Upgrading the metadata enables Unified Quoting in all
projects in the metadata; all supported DBMSs have the correct quoting
patterns, and all database instances set to supported DBMSs inherit those
patterns. For more information, see Unified Quoting Behavior for Warehouse
Identifiers.
Supported DBMS
These include databases, data sources, and MDX cube sources from third-
party vendors such as IBM DB2, Oracle, Informix, SAP, Sybase, Microsoft,
Netezza, Teradata, and so on. For certification information on these data
sources, refer to the Readme.
You can determine the default options for each VLDB property for a
database by performing the steps below. This provides an accurate list of
default VLDB properties for your third-party data source for the version of
MicroStrategy that you are using.
Ensure that you have fully upgraded your MicroStrategy environment and the
available database types, as described in Upgrading the VLDB Options for a
Particular Database Type, page 1634.
3. From the File menu, point to New, and select Database Instance.
4. In the Database instance name field, type a descriptive name for the
database instance.
6. Click OK to exit the Database Instances Editor and save the database
instance.
7. Right-click the new database instance that you created and select
VLDB Properties.
10. Select the Show descriptions of setting values check box. This
displays the descriptive information of each default VLDB property
setting in the VLDB settings report.
11. The VLDB settings report now displays all the default settings for the
data source. You can copy the content in the report using the Ctrl+C
keys on your keyboard, then paste the information into a text editor or
word processing program (such as Microsoft Word) using the Ctrl+V
keys.
13. You can then either delete the database instance that you created
earlier, or modify it to connect to your data source.
CREATING A MULTILINGUAL ENVIRONMENT: INTERNATIONALIZATION
Translating your data and metadata allows your users to view their reports in
a variety of languages. It also allows report designers and others to display
report and document editors and other objects editors in various languages.
And because all translation information can be stored in the same project,
project maintenance is easier and more efficient for administrators.
The image below shows which parts of a report are translated using data
internationalization and which parts of a report are translated using
metadata internationalization:
About Internationalization
For a fully internationalized environment, both metadata internationalization
and data internationalization are required. However, you can
internationalize only your metadata, or only your data, based on your needs.
Both are described below.
This section also describes translating the user interface and how
internationalization affects report/document caching.
For example, a report could be displayed as A Tale of Two Cities to the
English user and Un Conte de Deux Villes to the French user.
l For steps to select the interface language in Developer, see Selecting the
Interface Language Preference, page 1965.
Different caches are created for different DI languages, but not for different
MDI languages. When a user whose MDI language and DI language are
French runs a report, a cache is created containing French data and using
the report's French name. When a second user whose MDI language and DI
language are German runs the same report, a new cache is created with
German data and using the report's German name. If a third user whose MDI
language is French and DI language is German runs the same report, the
second user's cache is hit. Two users with the same DI language preference
use the same cache, regardless of MDI preferences.
fonts ensure that all characters can be displayed correctly when a report
or document is displayed in a double-byte language.
Not all Unicode fonts can display double-byte languages, for example,
Lucida Sans Unicode does not display double-byte languages.
l If you have old projects with metadata objects that have been previously
translated, it is recommended that you merge your translated strings from
your old metadata into the newly upgraded metadata using MicroStrategy
Project Merge. For steps, see Translating Already Translated Pre-9.x
Projects, page 1950.
l If you are using or plan to use MicroStrategy Intelligent Cubes, and you
plan to implement data internationalization, it is recommended that you
use a SQL-based DI model. The SQL-based DI model is described in
Providing Data Internationalization, page 1951. Because a single
Intelligent Cube cannot connect to more than one data warehouse, using a
connection-based DI model requires a separate Intelligent Cube to be
created for each language, which can be resource-intensive. Details on
this cost-benefit analysis as well as background information on Intelligent
Cubes are in the In-memory Analytics Help.
This step must be performed before you update your project's metadata
definitions.
This step must be completed during your installation or upgrade to the latest
version of Developer. For steps to install, see the Installation and
Configuration Help. For steps to perform a general MicroStrategy upgrade,
see the Upgrade Help.
2. Log into the project. You are prompted to update your project. Click
Yes.
If you prefer to provide your own translations (for example if you will be
customizing folder names), you do not need to perform this procedure.
4. Click Update.
You can also provide a user with the Use Repository Translation Wizard
privilege. This allows a user to perform the necessary steps to translate or
modify translations of strings in all languages, without giving the user the
ability to modify an object in any other way. To change a privilege, open the
user in the User Editor and select Project Access on the left, then expand
the Object Manager set of privileges on the right and select the Use
Repository Translation Wizard check box.
You can modify these default privileges for a specific user role or a specific
language object.
1. In the Folder List on the left, within the appropriate project source,
expand Administration.
3. All language objects are listed on the right. To change ACL permissions
for a language object, right-click the object and select Properties.
Gather a list of languages used by filters and prompts in the project. These
languages should be enabled for the project; otherwise, a report containing a
filter or prompt in a language not enabled for the project will not be able to
execute successfully.
The languages displayed in bold blue are those languages that the
metadata objects have been enabled to support. This list is displayed
as a starting point for the set of languages you can choose to enable for
supporting data internationalization.
5. Select the check boxes for the languages that you want to enable for
this project.
6. Click OK.
7. Select one of the languages on the right side to be the default language
for this project. The default language is used by the system to maintain
object name uniqueness.
This may have been set when the project was first created. If so, it will
not be available to be selected here.
If you are enabling a language for a project that has been upgraded
from 8.x or earlier, the default metadata language must be the
language in which the project was originally created (the 8.x Developer
8. Click OK.
Any translations for the disabled language are not removed from the
metadata with these steps. Retaining the translations in the metadata allows
you to enable the language again later, and the translations will still exist.
To remove translations in the disabled language from the metadata, objects
that contain these terms must be modified individually and saved.
4. On the right side, under Selected Languages, clear the check box for
the language that you want to disable for the project, and click OK.
The rest of this section describes the method to translate bulk object strings,
using a separate translation database, with the Repository Translation
Wizard.
All of the procedures in this section assume that your projects have been
prepared for internationalization. Preparation steps are in Preparing a
Project to Support Internationalization, page 1934.
4. Import the newly translated object strings back into the metadata
repository (see Importing Translated Strings from the Translation
Database to the Metadata, page 1949)
You cannot extract strings from the project's default metadata language.
l OBJECTID: This is the ID of the object from which the string is extracted.
l Japanese: 1041
l Korean: 1042
l Swedish: 1053
This string is used only as a reference during the translation process. For
example, if the translator is comfortable with the German language, you
can set German as the translation reference language. The
REFTRANSLATION column will then contain all the extracted strings in
the German language, for the translator to use as a reference when they
are translating extracted strings.
l STATUS: You can use this column to enter flags in the table to control
which strings are imported back into the metadata. A flag is a character
you type, for example, a letter, a number, or a special character (as long
as it is allowed by your database). When you use the wizard to import the
strings back into the metadata, you can identify this character for the
system to use during the import process, to determine which strings to
import.
For example, if a translator has finished only some translations, you may
want to import only the completed ones. Or if a reviewer has completed the
language review for only some of the translations, you may wish to import
only those strings that were reviewed. You can flag the strings that were
completed and are ready to be imported.
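For example, a translator could flag the finished rows directly in the translation database with a statement similar to the following sketch; the translation table name and the flag value Y are hypothetical, and the STATUS column is the one described above.
-- Mark the extracted strings as ready to import; in practice the WHERE
-- clause would be restricted to the rows the translator has completed.
update MSTR_TRANSLATIONS
set STATUS = 'Y'
where STATUS is null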
l 0: This means that the object has not been modified between extraction
and import.
l 1: This means that the object has been modified between extraction and
import.
l 2: This means that the object that is being imported is no longer present
in the metadata.
l LASTMODIFIED: This is the date and time when the strings were
extracted.
the extraction process does not extract the LU_YEAR string because
there is no reason to translate a lookup table's name. To determine
whether an object's name can be translated, right-click the object, select
Properties, and look for the International option on the left. If this option
is missing, the object is not supported for translation.
To confirm that your translations have successfully been imported back into
the metadata, navigate to one of the translated objects in Developer, right-
click, and select Properties. On the left, select International, then click
Translate. The table shows all translations currently in the metadata for this
object.
3. To import strings from the translation database back into the metadata,
select the Import Translations option from the Metadata Repository
page in the wizard.
After the strings are imported back into the project, any objects that were
modified while the translation process was being performed are automatically
marked with a 1. These translations should be checked for
correctness, since the modification may have included changing the object's
name or description.
When you are finished with the string translation process, you can proceed
with data internationalization if you plan to provide translated report data to
your users. For background information and steps, see Providing Data
Internationalization, page 1951. You can also set user language preferences
for translated metadata objects and data in Enabling or Disabling Languages
in the Project to Support DI, page 1958.
All of the procedures in this section assume that you have completed any
final import of translations to your pre-9.x project using the old Repository
Translation Tool, and that your projects have been prepared for
internationalization. Preparation steps are in Preparing a Project to Support
Internationalization, page 1934.
3. Merge the translated projects into the master project using the Project
Merge Wizard. Do not merge any translations.
4. You now have a single master project that contains all objects that were
present in both the original master project and in the translated project.
5. Extract all objects from the master project using the MicroStrategy
Repository Translation Wizard (see Extracting Metadata Object Strings
for Translation, page 1944).
7. Import all translations back into the master project (see Importing
Translated Strings from the Translation Database to the Metadata,
page 1949).
All of the procedures in this section assume that your projects have been
prepared for internationalization. Preparation steps are in Preparing a
Project to Support Internationalization, page 1934.
You must connect MicroStrategy to your storage system for translated data.
To do this, you must identify which type of storage system you are using.
Translated data for a given project is stored in one of two ways:
l In columns and tables within the same data warehouse as your source
(untranslated) data (see SQL-Based DI Model, page 1953)
SQL-Based DI Model
If all of your translations are stored in the same data warehouse as the
source (untranslated) data, this is a SQL-based DI model. This model
assumes that your translation storage is set up for column-level data
translation (CLDT) and/or table-level data translation (TLDT), with
standardized naming conventions.
This model is called SQL-based because SQL queries are used to directly
access data in a single warehouse for all languages. You can provide
translated DESC (description) forms for attributes with this DI model.
If you are using a SQL-based DI model, you must specify the column pattern
or table pattern for each language. The pattern depends upon the table and
column names that contain translated data in your warehouse.
MicroStrategy supports a wide range of string patterns. The string pattern is
not limited to suffixes only. However, using prefixes or other non-suffix
naming conventions requires you to use some functions so that the system
can recognize the location of translated data. These functions are included
in the steps to connect the system to your database.
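As a hedged sketch of the two naming conventions, assume a lookup table LU_PRODUCT whose description column is translated into Spanish; the table, column, and suffix names are illustrative only.
-- Column-level data translation (CLDT): the translated DESC column is in
-- the same table and is identified by a language suffix.
create table LU_PRODUCT (
PRODUCT_ID integer,
PRODUCT_DESC varchar(255),
PRODUCT_DESC_SP varchar(255)
)
-- Table-level data translation (TLDT): each language has its own copy of
-- the lookup table, identified by a table suffix.
create table LU_PRODUCT_SP (
PRODUCT_ID integer,
PRODUCT_DESC varchar(255)
)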
Connection-Based DI Model
If the translated data is stored in different data warehouses for each
language, MicroStrategy retrieves the translations using a database
Choosing a DI Model
You must evaluate your physical data storage for both your source
(untranslated) language and any translated languages, and decide which
data internationalization model is appropriate for your environment.
Translation Storage Location: One data warehouse for each language
Data Internationalization Model: Connection-based
Translation Access Method: Different database connection for each language
If you are creating a new data warehouse and plan to implement DI, and you
also use Intelligent Cubes, it is recommended that you use a SQL-based DI
model, with different tables and/or columns for each language. Because a
single Intelligent Cube cannot connect to more than one data warehouse,
using a connection-based DI model requires a separate Intelligent Cube to
be created for each language. This is very resource-intensive. For
information about Intelligent Cubes in general and details on designing
Intelligent Cubes for an internationalized environment, see the
MicroStrategy In-memory Analytics Help.
Your table suffixes for languages should be consistent and unified across
the entire warehouse. For example, if you have Spanish translations in your
warehouse, the suffix should be _SP for all tables that include Spanish
translations, and not _SP, _ES, _EP, and so on.
For detailed steps to connect the system to your translation database, see
the Project Design Help, Enabling data internationalization through SQL
queries section. The Project Design Help includes details to select your
table or column naming pattern, as well as functions to use if your naming
pattern does not use suffixes.
If you are changing from one DI model to another, you must reload the
project after completing the steps above. Settings from the old DI model are
preserved, in case you need to change back.
The database connection that you use for each data warehouse must be
configured in MicroStrategy before you can provide translated data to
MicroStrategy users.
The procedure in the Project Design Guide assumes that you will enable the
connection-based DI model. If you decide to enable the SQL-based model, you
can still perform the steps to enable the connection-based model, but the
language-specific connection maps you create in the procedure will not be
active.
You must have the Configure Connection Map privilege, at either the user level
or the project level.
For detailed steps to connect the system to more than one data warehouse,
see the Project Design Help, Enabling data internationalization through
connection mappings section.
If you are changing from one DI model to another, you must reload the
project after completing the steps in the Project Design Help. Settings from
the old DI model are preserved, in case you need to change back.
If the project designer has not already done so, you must define attribute
forms in the project so that they can be displayed in multiple languages.
Detailed information and steps to define attribute forms to support multiple
languages are in the Project Design Help, Supporting data
internationalization for attribute elements section.
You can also add a custom language to the list of languages available to be
enabled for data internationalization. For steps to add a custom language to
the project, see Adding or Removing a Language in the System, page 1989.
5. Select the DI model that you are using. For details, see Storing
Translated Data: Data Internationalization Models, page 1952.
6. Click Add.
7. Languages displayed in bold blue are those languages that have been
enabled for the project to support translated metadata objects, if any.
This list is displayed as a starting point for the set of languages you can
choose to enable for supporting data internationalization.
8. Select the check box next to any language or languages that you want
to enable for this project.
9. Click OK.
10. In the Default column, select one language to be the default language
for data internationalization in the project. This selection does not have
any impact on the project or how languages are supported for data
internationalization. Unlike the MDI default language, this DI default
language can be changed at any time.
11. For each language you have enabled, define the column/table naming
pattern or the connection-mapped warehouse, depending on which DI
model you are using (for information on DI models and on naming
patterns, see Storing Translated Data: Data Internationalization
Models, page 1952):
l Some languages may have the same suffix - for example, English US
and English UK. You can also specify a NULL suffix.
13. Disconnect and reconnect to the project source so that your changes
take effect. To do this, right-click the project source, select Disconnect
from Project Source, then repeat this and select Connect to Project
Source.
If you disable all languages for data internationalization (DI), the system
treats DI as disabled. Likewise, if you do not have a default language set for
DI, the system treats DI as disabled.
4. On the right side, under Selected Languages, clear the check box for
the language that you want to disable for the project.
5. Click OK.
l Language disabling will only affect MDX cubes and regular reports
and documents if an attribute form description in the disabled
language exists in the cube or report. If this is true, the cube, report,
or document cannot be published or used. The cube, report, or
document designer must remove attribute forms in the disabled
language before the cube/report/document can be used again.
These language preferences are for metadata languages only. All data
internationalization languages fall back to the project's default language if
a preferred language is not available.
The following sections show you how to select language preferences based
on various priority levels within the system, starting with a section that
explains the priority levels:
l Report data: Determine the language that will be displayed for report
results that come from your data warehouse, such as attribute element
names. For steps to set this preference, see Configuring Metadata Object
and Report Data Language Preferences, page 1967.
4. From the Interface Language drop-down list, select the language that
you want to use as the interface default language
6. Select OK.
Language preferences can be set at six different levels, from highest priority
to lowest. The language that is set at the highest level is the language that is
always displayed, if it is available. If that language does not exist or is not
available in the metadata or the data warehouse, the next highest level
language preference is used.
The following table describes each level, from highest priority to lowest
priority, and points to information on how to set the language preference at
each level.
Language Preference Level (highest to lowest priority): Project-All Users level
Description: The language preference for all users in a specific project.
Setting Location for End Users: Not applicable.
Setting Location for Administrators: In the Project Configuration Editor,
expand Languages and select User Preferences. See Selecting the All Users in
Project Level Language Preference, page 1974.
For example, a user has their User-Project Level preference for Project A set
to English, and their User-All Projects Level preference set to French. If
the user logs in to Project A and runs a report, the language displayed will
be English. If the user logs in to Project B, which does not have a User-
Project Level preference specified, and runs a report, the project will be
displayed in French. This is because there is no User-Project Level
preference for Project B, so the system automatically uses the next, lower
language preference level (User-All Projects) to determine the language to
display.
2. Right-click the project that you want to set the language preference for
and select Project Configuration.
6. Select the users from the list on the left side of the User Language
Preferences Manager that you want to change the User-Project level
language preference for, and click > to add them to the list on the right.
You can narrow the list of users displayed on the left by doing one of
the following:
l To search for users in a specific user group, select the group from the
drop-down menu that is under the Choose a project to define user
language preferences drop-down menu.
l To search for users containing a certain text string, type the text
string in the Find field, and click the icon.
This returns a list of users matching the text string you typed.
Previous strings you have typed into the Find field can be accessed
again by expanding the Find drop-down menu.
7. On the right side, select the user(s) that you want to change the User-
Project level preferred language for, and do the following:
8. Click OK.
Once the user language preferences have been saved, users can no
longer be removed from the Selected list.
9. Click OK.
10. Disconnect and reconnect to the project source so that your changes
take effect. To do this, right-click the project source, select Disconnect
from Project Source, then repeat this and select Connect to Project
Source.
If the User-Project language preference is specified for the user, the user
will see the User-All Projects language only if the User-Project language is
not available. To see the hierarchy of language preference priorities, see
the table in Configuring Metadata Object and Report Data Language
Preferences, page 1967.
2. In the Folder List on the left, within the appropriate project source,
expand Administration, expand User Manager, and navigate to the
user that you want to set the language preference for.
4. On the left side of the User Editor, expand the International category
and select Language.
6. Click OK.
The All Users In Project level language preference determines the language
that will be displayed for all users that connect to a project, unless a higher
priority language is specified for the user. Use the steps below to set this
preference.
2. In the Folder List on the left, select the project. From the
Administration menu, select Projects, then Project Configuration.
l From the Data language preference for all users in this project
drop-down menu, select the language that you want to be displayed
for report results in this project.
5. Click OK.
preference has been specified. This is the same as the interface preference.
4. Select one of the following from the Language for metadata and
warehouse data if user and project level preferences are set to
default drop-down menu.
5. Select the language that you want to use as the default Developer
interface language from the Interface Language drop-down menu.
6. Click OK.
This preference determines the language that is used on all objects on the
local machine. MicroStrategy Web uses the language that is specified in the
user's web browser if a language is not specified at a level higher than this
one.
This language preference specifies the default language for the project. This
language preference has the lowest priority in determining the language
display. Use the steps below to set this preference.
The project default language may have been set when the project was first
created and cannot be changed after that point. The following steps assume the
project default language has not yet been selected.
2. Select the project that you want to set the default preferred language
for.
l To specify the default data language for the project, select Data from
the Language category. Then select Default for the desired
language.
5. Select OK.
Some objects may not have their object default language preference set, for
example, if objects are merged from an older, non-internationalized
MicroStrategy system into an upgraded, fully internationalized environment.
In this case, for those objects that do not have a default language, the
system automatically assigns them the project's default language.
When duplicating a project, objects in the source that are set to take the
project default language will take whatever the destination project's default
language is.
1. Log in to the project source that contains the object as a user with
administrative privileges.
l You can set the default language for multiple objects by holding the
Ctrl key while selecting multiple objects.
4. From the Select the default language for the object drop-down
menu, select the default language for the object(s).
5. Click OK.
The following list describes the translation or language display that you want
to achieve, and where to enable it:
l Fonts that support all languages: Few fonts support all languages. One that
does is Arial Unicode MS, which is licensed from Microsoft.
l PDFs, portable PDFs, bookmarks in PDFs, and language display in a PDF: Embed
fonts when you are designing the document; this ensures that the fonts
selected by the document designer are used to display and print the PDF, even
on machines that do not have those fonts installed.
l Default language for all users in a project: Right-click a project, select
Project Configuration > Language > User Preferences.
l Caches in an internationalized environment: See Caching and
Internationalization, page 1932.
l Repository Translation Wizard list of available languages: Enable languages
the project supports for metadata objects (see Enabling Metadata Languages for
an Existing Project, page 1939).
These maintenance processes and tools are described below. This section
also covers security and specialized translator user roles.
l List resolved languages, which are the languages that are displayed to
users from among the list of possible preferences.
For these and all the other scripts you can use in Command Manager, open
Command Manager and click Help.
Specifically, the database that stores the metadata must be set with a code
page that supports the languages that are intended to be used in your
MicroStrategy project.
Variant languages (also called custom languages) can also be added. For
example, you can create a new language called Accounting, based on the
English language, for all users in your Accounting department. The language
contains its own work-specific terminology.
You must have the Browse permission for the language object's ACL (access
control list).
4. Click Add.
5. Click New.
6. Click OK.
l System folders: The Public Objects folder and the Schema Objects
folder
8. Click Yes. You can also perform this update later, using the Project
Configuration Editor, and selecting Upgrade in the Project Definition
category.
After adding a new language, if you use translator roles, be sure to create a
new user group for translators of the new language (see Creating Translator
Roles, page 1996).
This procedure provides high-level steps for adding a new language to the
display of languages in MicroStrategy Web. After the new language is
added, Web users can select this language for displaying various aspects of
Web in the new language. For details and best practices to customize your
MicroStrategy Web files, see the MicroStrategy Developer Library (MSDL),
which is part of the MicroStrategy SDK.
3. Create resource files for the new language, for generic descriptors,
based on existing resource files. For example:
l DashboardDatesBundle_13313.xml
l DossierViewerBundle_13313.xml
2. For metadata languages, any translations for the disabled language are
not removed from the metadata with these steps. To remove
translations:
l For the entire metadata: Duplicate the project after the language has
been removed, and do not include the translated strings in the
duplicated project.
3. For objects that had the disabled language as their default language,
the following scenarios occur. The scenarios assume the project
defaults to English, and the French language is disabled for the project:
l If the object's default language is French and the object contains only
French translations, then, after French is disabled from the project,
the French translation will be displayed but will be treated by the
system as if it were English. The object's default language
automatically changes to English.
For both scenarios above: If you later re-enable French for the
project, the object's default language automatically changes back to
French as long as no changes were made and saved for the object
while the object had English as its default language. If changes were
made and saved to the object while it had English as its default
language, and you want to return the object's default language back
to French, you can do so manually: right-click the object, select
Properties, select Internationalization on the left, and choose a new
default language.
You also use the language object's ACLs in combination with MicroStrategy
user privileges to create a translator or linguist role. This type of role allows
a user to translate terms for an object in a given language, but keeps that
user from making changes to the object's translations in other languages or
making changes to the object's name and description in the object's default
language.
For example, you can create two groups of users and provide Group 1 with
browse and use access to the English language object and the French
language object, and provide Group 2 with browse and use access to
Spanish only. In this scenario, users in Group 2 can only choose Spanish as
their language preference, and can only access Spanish data from your
warehouse. If an object which is otherwise available to Group 2 users does
not have a Spanish translation, Group 2 users will be able to access that
object in the project's default language (which may be English, French, or
any other language.)
1. In Developer, from the Folder List on the left, within the appropriate
project source, go to Administration > Configuration Managers.
2. Select Languages.
You can modify this approach to customize your language object security as
it fits your specific needs. Suggestions are provided after the steps, to
modify the translator role setup for specific situations.
l Reference language: Any language other than the source language which
the translator needs to translate from
l Target language: Any language other than the source language which the
translator needs to translate to
l Grant each user the Use Developer privilege, in the Analyst privilege
group.
l Grant each user the Use Translation Editor privilege, in the Common
privilege group.
l Grant each user the Use Translation Editor Bypass privilege, in the
Developer privilege group.
This privilege allows the user to use the Translation Editor to change
an object's name and/or description for a given language, and does
not require the user to have Write access to the object whose
name/description is being changed (the system bypasses the normal
check for Write access).
1. Grant the View permission on the ACL (access control list) for a
language object to the user account that is allowed to translate objects
into that language. This permission should be granted to the target
language. The View permission allows a user to:
l Translate object names and descriptions in the language the user has
View permission for.
To grant the View permission for a language object, use the following
substeps:
5. Click the field in the Permissions column next to the newly added
user and select View.
4. On the right, click Add to add the appropriate user account to the
security for this language. Navigate to the appropriate translator
user, select the user, and click Custom.
5. Click the field in the Permissions column next to the newly added
user and select Browse and Read.
2. Repeat these substeps for any other languages in the list of language
objects that you want this user to be able to see.
l The Use Developer privilege, in the Analyst privilege group.
l The Browse and Read permissions on the language object that the translator
will be translating into, and on a reference language object.
l The Use permission on the language object that the translator will be
translating into.
Be sure you do not grant the Use permission on any language object that
represents a language you do not want the translator to be able to make
changes to.
l Allow the translator user to see an object's name and definition in the
source language and in any other language that the object uses, as well
as the translator's target language. To do this, grant the translator user
the Browse and Read permissions for each language object listed in
Administration > Configuration Managers > Languages. The Browse
and Read permissions allow the user to see translations in the
Translation Editor but not edit the translation strings.
l Grant the user privileges to access the object within the various
Developer object editors. These privileges allow the user to execute the
object so that it opens within its appropriate editor, thus displaying
additional detail about the object. Access can allow context such as
seeing a string as it appears within a dashboard; a metric's
expression/formula; an attribute's forms and the data warehouse tables
that the data comes from; and so on. For example, in the User Editor,
grant the translator the Execute Document and Use Report Editor
privileges from the Analyst privilege group. Also grant Use Custom
Group Editor, Use Metric Editor, Use Filter Editor, and so on, from the
Developer privilege group.
For example, if you grant a translator Browse, Read, and Use permissions
for the French language object, Browse and Read permissions for the
object's default language, and Deny All for all other languages, the
translator will only see the French translations column and the default
language column in the Translations Editor in Developer.
However, be aware that this limits the translator to only being able to use
the object's default language as their reference language. If the translator
can benefit from seeing context in other languages, it is not recommended
to Deny All for other languages.
If this privilege is assigned, be aware that the user will be able to export
strings and import translations for those strings in all languages that the
project supports. This is true no matter what other language restrictions
are applied.
LIST OF PRIVILEGES
A privilege with the note "Server level only" can be granted only at the
project source level. It cannot be granted for a specific project.
Privileges vary between releases. For the most up to date privileges, see
the dashboard in Privileges by License Type.
Client - Reporter
l Web export
l Web sort
l Web user
Client - Web
l Use Desktop
l Use office
l Web modify the list of Report Objects (use object browser -- all objects)
Client - Application
l Use Application
Client - Mobile
Client - Architect
l Alias objects
l Configure toolbars
l Execute Document
l Format graph
l Import function
l Modify sorting
l Pivot Report
l Send to Email
l Use Developer
l View SQL
Server - Reporter
l Export to Excel
l Export to Flash
l Export to HTML
l Export to PDF
l Export to text
l Schedule request
l Use analytics
l View notes
Server - Intelligence
l Add notes
l Administer Caches
l Administer Cubes
l Administer jobs
l Administer Subscriptions
l Configure caches
l Configure governing
l Configure statistics
l Duplicate project
l Edit notes
l Monitor caches
l Monitor Cubes
l Monitor Jobs
l Monitor project
l Monitor subscriptions
l Publish Content
l Use Workstation
l Web administration
Server - Analytics
l Access data (files) from Local, URL, DropBox, Google Drive, Sample
Files, Clipboard, Push API
Server - Collaboration
Server - Distribution
l Subscribe to Email
l Subscribe to file
l Subscribe to FTP
l Subscribe to print
Server - Transaction
l Execute Transaction
Client - Reporter
l Web export
l Web sort
l Web user
Client - Web
l Use Desktop
l Use office
l Web modify the list of Report Objects (use object browser -- all objects)
Client - Mobile
Client - Architect
l Alias objects
l Configure toolbars
l Execute Document
l Format graph
l Import function
l Modify sorting
l Pivot Report
l Send to Email
l Use Developer
l View SQL
Server - Reporter
l Export to Excel
l Export to Flash
l Export to HTML
l Export to PDF
l Export to text
l Schedule request
l Use analytics
l View notes
Server - Intelligence
l Add notes
l Administer Caches
l Administer Cubes
l Administer jobs
l Administer Subscriptions
l Configure caches
l Configure governing
l Configure statistics
l Duplicate project
l Edit notes
l Monitor caches
l Monitor Cubes
l Monitor Jobs
l Monitor project
l Monitor subscriptions
l Publish Content
l Use Workstation
l Web administration
Server - Analytics
l Access data (files) from Local, URL, DropBox, Google Drive, Sample
Files, Clipboard, Push API
Server - Collaboration
Server - Distribution
l Subscribe to Email
l Subscribe to file
l Subscribe to FTP
l Subscribe to print
Server - Transaction
l Execute Transaction
l Administer Caches
l Administer Cubes
l Administer Jobs
l Administer Subscriptions
l Administer Caches
l Administer Jobs
l Monitor Caches
l Monitor Cubes
l Monitor Jobs
l Monitor Project
l Monitor Subscriptions
l Configure Caches
l Configure Governing
l Configure Statistics
l Web Administration
l All users are members of the Everyone group and inherit all privileges
granted to that group.
l LDAP Public/Guest
l LDAP Users
l Public/Guest
l Warehouse Users
l API
l Architect
l Collaboration Server
l Distribution Server
l Mobile
l Reporter
l System Monitor
l System Administrators
l User Administrators
l Transaction Server
l Web
l MicroStrategy Architect
Developer Privileges
These privileges correspond to the report design functionality available in
Developer. The predefined Developer group is assigned these privileges by
default. The Developer group also inherits all the privileges assigned to the
Analyst group. License Manager counts any user who has any of these
privileges as a Developer user.
* Save derived elements: Save stand-alone derived elements, separate from the report.
*** Use bulk export editor: Use the Bulk Export Editor to define a bulk export report.
**** Define transaction report: Define a Transaction Services report using the Freeform SQL editor.
Define Freeform SQL report: Define a new report using Freeform SQL, and see the Freeform SQL icon in the Create Report dialog box.
Define MDX cube report: Define a new report that accesses an MDX cube.
Modify the list of report objects (use Object Browser): Add objects to a report which are not currently displayed in the Report Objects window. This determines whether the user is a report designer or a report creator. A report designer is a user who can build new reports based on any object in the project. A report creator can work only within the parameters of a predesigned report that has been set up by a report designer. This privilege is required to edit the report filter and the report limit. For more information on these features, see the Advanced Reporting Help.
Use Metric Editor: Use the Metric Editor. Among other tasks, this privilege allows the user to import DMX (Data Mining Services) predictive metrics.
Use Translation Editor bypass: Use the Translation Editor. Users with this privilege can translate an object without having Write access to the object.
Privileges marked with * are included only if you have OLAP Services installed as part of
Intelligence Server.
Privileges marked with ** are included only if you have Report Services installed.
Privileges marked with *** are included only if you have Distribution Services installed.
Privileges marked with **** are included only if you have Transaction Services installed.
Analyst
l All privileges in the Web Reporter privilege group (see Web Reporter
privileges).
l All privileges in the Common Privileges privilege group, except for Create
Schema Objects and Edit Notes.
l All privileges in the Web Analyst privilege group (see Web Analyst
privileges).
Some of these privileges are also inherited from the groups that the Web
Analyst group is a member of.
Some of these privileges are also inherited from the groups that the Web
Professional group is a member of.
l Administer Caches
l Administer Cluster
l Administer Cubes
l Administer Jobs
l Administer Subscriptions
l Fire Events
l Administer Caches
l Administer Cluster
l Administer Jobs
l Monitor Caches
l Monitor Cluster
l Monitor Cubes
l Monitor Jobs
l Monitor Projects
l Monitor Subscriptions
l Configure Caches
l Configure Governing
l Configure Statistics
l Web Administration
l Grant/Revoke Privilege
l Enable User
l Grant/Revoke Privilege
licenses. Every license type comes with a unique set of privileges, and
system administrators are responsible for assigning these privileges based
on security roles, user groups, and individual users. Some licenses and
their associated privileges are sold in bundled product packages.
License Bundles
The following is a list of modern license bundles available to MicroStrategy
Cloud users:
l Cloud Reporter User: Allows users to view, execute, and interact with
dashboards, reports, and documents via MicroStrategy in a web browser.
Users also receive distributed reporting.
Reference the dashboard below to see the license types and privilege set
that comes with each license bundle.
l Client - Hyper: A Chrome browser extension that can embed analytics into
any website or web application. The HyperIntelligence client automatically
detects predefined keywords on a webpage or web application and
surfaces contextual insights from enterprise data sources using cards.
l Client - Architect: License that provides the ability to create the project
schema and build a centralized data model to deliver a single version of
the truth.
The privileges in any license type you have do not rely on additional licenses
to function properly. However, it is possible to inherit privileges from other
license types. The Client - Reporter and Client - Web licenses are linked
together in a hierarchy that allows users to inherit specific privilege sets.
The hierarchy is set up such that the Client - Reporter license is a subset of
the Client - Web license.
This means that if you have a Client - Web license, in addition to the
privilege set that comes with that license, you will automatically inherit the
privileges that come with the Client - Reporter license.
However, this hierarchy does not work in reverse, so if you have the Client -
Reporter license, you will not inherit the Client - Web privilege set. Keep in
mind that you can still use each of the Client product license types
individually, regardless of whether or not they are a part of a hierarchy.
Reference the dashboard below to see the privilege set that comes with
each license type. License types that contain a subset have already been
set up to include the privileges from their subset license.
l Server - Identity: Provides the organization with the ability to create,
configure, distribute, and manage digital identities (Badge) for users.
Similar to the Client product license types, the Server - Intelligence and
Server - Reporter license are organized into a hierarchy that allows users to
inherit certain privileges. In this hierarchy, the Server - Reporter license is a
subset of the Server - Intelligence license.
This means that if you have the Server - Intelligence license, in addition to
that license's privilege set, you will have access to the privilege set available
in the Server - Reporter license. However, this does not prevent you from
using the privilege set of either license individually.
Add-Ons
Server product licenses also include add-on licenses that contain their own
privilege sets. Each of these license types can be obtained separately and
added on top of either the Server - Intelligence or Server - Reporter
licenses. The only restriction is that certain add-ons can only be added to
specific license types:
Starting July 2024, this license is included in the AI Power User bundle.
Starting July 2024, this license is included in the AI Power User and AI
Consumer User bundles.
Starting July 2024, this license is included in the AI Power User and AI
Consumer User bundles.
Once an add-on has been obtained you will have access to its privilege set,
as well as the privilege set of the license type you combined it with.
Reference the dashboard below to see the privilege sets that come with
each license type. If you are looking at a license type combined with an add-
on, you must select both to see the full list of available privileges.
Once your enterprise has purchased one or more of the license types
available, you will also get access to License Manager. This product
manages the license types your enterprise has by auditing them to keep
track of which ones are in use, and which ones are available.
In compliance example
Now let's say that there are three employees in the enterprise that have
been using these licenses to access the following privileges:
The following licenses are included in an AI or Cloud bundle but are not
reflected in the dashboard:
l Drivers - OLAP
MULTI-TENANT ENVIRONMENTS: OBJECT NAME PERSONALIZATION
For example, you have an attribute stored in the metadata repository, with a
base name of Inventory Date. This metadata object will appear on reports
accessed by users in Organization A and Organization B. You can use object
name personalization to configure MicroStrategy to automatically display the
object to Organization A with the name Date In Inventory, and display the
same object to Organization B with the name Date First Purchased.
You can also provide a user with the Use Repository Translation Wizard
privilege, within the Object Manager set of privileges. This allows a user to
perform the necessary steps to rename strings in bulk, for all tenants,
without giving the user the ability to modify an object in any other way. To
change a privilege, open the user in the User Editor and select Project
Access on the left.
You can modify these default privileges for a specific user role or a specific
tenant language.
1. In the Folder List on the left, within the appropriate project source,
expand Administration.
3. All tenant languages are listed on the right. To change ACL permissions
for a tenant language, right-click the object and select Properties.
4. Select Security on the left. For details on each ACL and what access it
allows, click Help.
system folder, security role names, user group names, and so on. Software
strings stored in the metadata include embedded text strings (embedded in
an object's definition), such as prompt instructions, aliased names (which
can be used in attributes, metrics, and custom groups), consolidation
element names, custom group element names, graph titles, and threshold
text.
1. Add tenant languages to the system, for each of your tenants. For
steps, see Adding a New Tenant Language to the System, page 2055.
2. Enable tenant languages for your project's metadata objects. For steps,
see Enabling and Disabling Tenant Languages, page 2056.
You must have the Browse permission for the language object's ACL (access
control list).
4. Click Add.
5. Click New.
6. Click OK.
After adding a new tenant language, enable the tenant language for the
project. For steps, see Enabling and Disabling Tenant Languages, page
2056.
Gather a list of tenant languages used by filters and prompts in the project.
These tenant languages should be enabled for the project; otherwise, a report
containing a filter or prompt in a tenant language that is not enabled for the
project will not be able to execute successfully.
4. Click Add to see a list of available tenant languages. The list includes
languages that have been added to the system.
5. Select the check boxes for the tenant languages that you want to
enable for this project.
6. Click OK.
7. Select one of the tenant languages on the right side to be the default
tenant language for this project. The default tenant language is used by
the system to maintain object name uniqueness.
This may have been set when the project was first created. If so, it will
not be available to be selected here.
8. Click OK.
Any object names for the disabled tenant language are not removed from the
metadata with these steps. Retaining the object names in the metadata
allows you to enable the tenant language again later, and the object names
will still exist. To remove object names in the disabled tenant language from
the metadata, objects must be modified individually and saved.
4. On the right side, under Selected Languages, clear the check box for
the tenant language that you want to disable for the project, and click
OK.
to rename your metadata objects. Steps to access this tool are below.
The rest of this section describes the method to rename object strings in
bulk, using a separate database, with the Repository Translation Wizard.
All of the procedures in this section assume that your projects have been
prepared for object renaming. Preparation steps are in Granting User
Access to Rename Objects and View Tenant Languages, page 2053.
1. Add and enable tenant languages for the metadata repository (see
Adding a New Tenant Language to the System, page 2055 and Enabling
and Disabling Tenant Languages, page 2056)
4. Import the newly renamed object strings back into the metadata
repository (see Importing Renamed Strings from the Database to the
Metadata, page 2064)
You cannot extract strings from the project's default metadata language.
l OBJECTID: This is the ID of the object from which the string is extracted.
l REFTRANSLATION: This string is used only as a reference during the
translation process. For example, if the translator is comfortable with the
German language, you can set German as the translation reference
language. The REFTRANSLATION column will then contain all the
extracted strings in the German language.
l STATUS: You can use this column to enter flags in the table to control
which strings are imported back into the metadata. A flag is a character
you type, for example, a letter, a number, or a special character (as long
as it is allowed by your database). When you use the wizard to import the
strings back into the metadata, you can identify this character for the
system to use during the import process, to determine which strings to
import.
For example, if only some objects have been renamed, you may want to
import only the completed ones. Or you may wish to import only those
strings that were reviewed. You can flag the strings that were completed
and are ready to be imported.
l 0: This means that the object has not been modified between extraction
and import.
l 1: This means that the object has been modified between extraction and
import.
l 2: This means that the object that is being imported is no longer present
in the metadata.
l LASTMODIFIED: This is the date and time when the strings were
extracted.
Once the extraction process is complete, the strings in the database need to
be renamed in the extraction table described above.
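The renaming itself can be done with any SQL tool against the extraction table. The following is a minimal illustrative sketch only; the table name TRANS_STAGING and the flag character C are assumptions you supply when running the wizard, and the TRANSLATION column name is assumed to correspond to the columns described above.

    -- Rename one string and flag the row as completed so that only
    -- flagged rows are imported back into the metadata.
    UPDATE TRANS_STAGING                        -- hypothetical extraction table name
    SET    TRANSLATION = 'Date In Inventory',   -- new tenant-specific name
           STATUS      = 'C'                    -- flag character you chose
    WHERE  OBJECTID    = '<object GUID>'        -- ID of the object being renamed
      AND  TRANSLATION = 'Inventory Date';      -- current base name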
To confirm that your new object names have successfully been imported
back into the metadata, navigate to one of the renamed objects in
Developer, right-click, and select Properties. On the left, select
International, then click Translate. The table shows all names currently in
the metadata for this object.
3. To import strings from the database back into the metadata, select the
Import Translations option from the Metadata Repository page in the
wizard.
After the strings are imported back into the project, any objects that were
modified while the renaming process was being performed are automatically
marked with a 1. These object names should be checked for correctness.
The following sections show you how to select language preferences based
on various priority levels within the system, starting with a section that
explains the priority levels:
l Report data: Determine the language that will be displayed for report
results that come from your data warehouse, such as attribute element
names. For steps to set this preference, see Configuring Metadata Object
and Report Data Language Preferences, page 2068.
4. From the Interface Language drop-down list, select the language that
you want to use as the interface default language.
5. Select OK.
Language preferences can be set at six different levels, from highest priority
to lowest. The language that is set at the highest level is the language that is
always displayed, if it is available. If that language does not exist or is not
available in the metadata or the data warehouse, the next highest level
language preference is used.
The following table describes each level, from highest priority to lowest
priority, and points to information on how to set the language preference at
each level.
Project-All Users level: The language preference for all users in a specific
project. Setting location for end users: Not applicable. Setting location for
administrators: In the Project Configuration Editor, expand Languages and
select User Preferences. See Selecting the All Users in Project Level
Language Preference, page 2075.
l Data warehouse: When a name for data in the data warehouse is missing
in the preferred language (the column or table is present in the data
warehouse but is empty), the report returns no data.
2. Right-click the project that you want to set the language preference for
and select Project Configuration.
6. Select the users from the list on the left side of the User Language
Preferences Manager that you want to change the User-Project level
language preference for, and click > to add them to the list on the right.
You can narrow the list of users displayed on the left by doing one of
the following:
l To search for users in a specific user group, select the group from the
drop-down menu that is under the Choose a project to define user
language preferences drop-down menu.
l To search for users containing a certain text string, type the text
string in the Find field, and click the Filter icon:
This returns a list of users matching the text string you typed.
Previous strings you have typed into the Find field can be accessed
again by expanding the Find drop-down menu.
7. On the right side, select the user(s) that you want to change the User-
Project level preferred language for, and do the following:
8. Click OK.
Once the user language preferences have been saved, users can no
longer be removed from the Selected list.
9. Click OK.
10. Disconnect and reconnect to the project source so that your changes
take effect. To do this, right-click the project source, select Disconnect
from Project Source, then repeat this and select Connect to Project
Source.
If the User-Project language preference is specified for the user, the user
will see the User-All Projects language only if the User-Project language is
not available. To see the hierarchy of language preference priorities, see
the table in Configuring Metadata Object and Report Data Language
Preferences, page 2068.
2. In the Folder List on the left, within the appropriate project source,
expand Administration, expand User Manager, and navigate to the
user that you want to set the language preference for.
4. On the left side of the User Editor, expand the International category
and select Language.
6. Click OK.
The All Users In Project level language preference determines the language
that will be displayed for all users that connect to a project, unless a higher
priority language is specified for the user. Use the steps below to set this
preference.
2. In the Folder List on the left, select the project. From the
Administration menu, select Projects, then Project Configuration.
4. Do the following:
l From the Data language preference for all users in this project
drop-down menu, select the language that you want to be displayed
for report results in this project.
5. Click OK.
4. Select one of the following from the Language for metadata and
warehouse data if user and project level preferences are set to
default drop-down menu.
5. Select the language that you want to use as the default Developer
interface language from the Interface Language drop-down menu.
6. Click OK.
This preference determines the language that is used on all objects on the
local machine. MicroStrategy Web uses the language that is specified in the
user's web browser if a language is not specified at a level higher than this
one.
This language preference specifies the default language for the project. This
language preference has the lowest priority in determining the language
display. Use the steps below to set this preference.
cannot be changed after that point. The following steps assume the project
default language has not yet been selected.
2. Select the project for which you want to set the default preferred
language.
l To specify the default data language for the project, select Data from
the Language category. Then select Default for the desired
language.
5. Click OK.
Some objects may not have their object default language preference set, for
example, if objects are merged from an older MicroStrategy system that was
not set up for multi-tenancy into an upgraded system that is set up for multi-
tenancy. In this case, for those objects that do not have a default language,
the system automatically assigns them the project's default language.
This is not true for newly created objects within an established multi-
tenancy environment. Newly created objects are automatically assigned the
creator's metadata language preference. For details on the metadata
language, see Configuring Metadata Object and Report Data Language
Preferences, page 2068.
When duplicating a project, objects in the source that are set to take the
project default language will take whatever the destination project's default
language is.
1. Log in to the project source that contains the object as a user with
administrative privileges.
l You can set the default language for multiple objects by holding the
Ctrl key while selecting multiple objects.
4. From the Select the default language for the object drop-down
menu, select the default language for the object(s).
5. Click OK.
new tenant language, see Adding a New Tenant Language to the System,
page 2055.
1. Disable the tenant language from all projects in which it was enabled.
To disable a metadata language from a project, see Enabling and
Disabling Tenant Languages, page 2056.
2. For metadata languages, any names for the disabled language are not
removed from the metadata with these steps. To remove names:
l For individual objects: Objects that contain names for the disabled
tenant language must be modified and saved. You can use the Search
dialog box from the Tools menu in Developer to locate objects that
have names for a given tenant. In the dialog box, on the International
tab, click Help for details on setting up a search for these objects.
l For the entire metadata: Duplicate the project after the tenant
language has been removed, and do not include the renamed strings
in the duplicated project.
3. For objects that had the disabled language as their default language,
the following scenarios occur. The scenarios assume the project
l If the object's default language is Tenant B's language, and the object
has names for both Tenant A and Tenant B, then, after Tenant B's
language is disabled from the project, the object will only display
Tenant A's names. The object's default language automatically
changes to Tenant A's language.
l If the object's default language is Tenant B's language and the object
contains only Tenant B's names, then, after Tenant B's language is
disabled from the project, Tenant B's names will be displayed but will
be treated by the system as if they belong to Tenant A's language.
The object's default language automatically changes to Tenant A's
language.
For both scenarios above: If you later re-enable Tenant B's language
for the project, the object's default language automatically changes
back to Tenant B's language as long as no changes were made and
saved for the object while the object had Tenant A's language as its
default language. If changes were made and saved to the object while
it used Tenant A's language as its default language, and you want to
return the object's default language back to Tenant B's language, you
can do so manually: right-click the object, select Properties, select
Internationalization on the left, and choose a new default language.
INTELLIGENCE SERVER STATISTICS DATA DICTIONARY
This section lists the staging tables in the statistics repository to which
Intelligence Server logs statistics. The detailed information includes the
table name, its function, the table to which the data is moved in the
Enterprise Manager repository, and the table's columns. For each column,
the description is followed by its datatype in the supported statistics
databases (SQL Server, Oracle, DB2, Teradata, Sybase, and MySQL); where
all databases use the same datatype it is listed once, and databases that use
a different datatype are called out by name. A bold column name indicates a
primary key, and (I) indicates that the column is used in an index.
STG_CT_DEVICE_STATS
Records statistics related to the mobile client and the mobile device. This
table is used when the Mobile Clients option is selected in the Statistics
category of the Project Configuration Editor and the mobile client is
configured to log statistics. The data load process moves this table's
information to the CT_DEVICE_STATS table, which has the same columns
and datatypes.
DAY_ID: Day the action was started. (DATE; TIMESTAMP in Oracle)
HOUR_ID: Hour the action was started. (TINYINT; NUMBER(3) in Oracle; SMALLINT in DB2; BYTEINT in Teradata)
MINUTE_ID: Minute the action was started. (SMALLINT; NUMBER(5) in Oracle)
SERVERID: GUID of the Intelligence Server processing the request. (CHAR(32) in all databases)
SERVERMACHINE: Name of the Intelligence Server processing the request. (VARCHAR(255); VARCHAR2(255) in Oracle)
DEVICEINSTID: Unique installation ID of the mobile app. (CHAR(40) in all databases)
DEVICETYPE: Type of device the app is installed on, such as iPad, Droid, or iPhone. (VARCHAR(40); VARCHAR2(40) in Oracle)
OS: Operating system of the device the app is installed on, such as iOS or Android. (VARCHAR(40); VARCHAR2(40) in Oracle)
OSVER: Version of the operating system, such as 5.2.1. (VARCHAR(40); VARCHAR2(40) in Oracle)
APPVER: Version of the MicroStrategy app. (VARCHAR(40); VARCHAR2(40) in Oracle)
STATECOUNTER: An integer value that increments whenever the device information, such as DEVICETYPE, OS, OSVER, or APPVER, changes. (SMALLINT; NUMBER(5) in Oracle)
STATECHANGETIME: Date and time when STATECOUNTER is incremented. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
RECORDTIME: Timestamp of when the record was written to the database, according to database system time. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
STG_CT_EXEC_STATS
Records statistics related to execution of reports/documents in a mobile
app. This table is used when the Mobile Clients option is selected in the
Statistics category of the Project Configuration Editor and the mobile client
is configured to log statistics. The data load process moves this table's
information to the CT_EXEC_STATS table, which has the same columns and
datatypes.
DAY_ID: Day the action was started. (DATE; TIMESTAMP in Oracle)
HOUR_ID: Hour the action was started. (TINYINT; NUMBER(3) in Oracle; SMALLINT in DB2; BYTEINT in Teradata)
MINUTE_ID: Minute the action was started. (SMALLINT; NUMBER(5) in Oracle)
DEVICEINSTID (I): Unique installation ID of the mobile app. (CHAR(40) in all databases)
STATECOUNTER (I): An integer value that increments when the device information, such as DEVICETYPE, OS, OSVER, or APPVER (in STG_CT_DEVICE_STATS), changes. (SMALLINT; NUMBER(5) in Oracle)
USERID: GUID of the user making the request. (CHAR(32) in all databases)
SESSIONID: GUID of the session that executed the request. This should be the same as the SESSIONID for this request in STG_IS_REPORT_STATS. (CHAR(32) in all databases)
CTSESSIONID: GUID of the MicroStrategy Mobile client session ID. A new client session ID is generated every time a user logs in to the mobile app. (INTEGER; NUMBER(10) in Oracle)
MESSAGEID: ID corresponding to the JOBID (in STG_IS_REPORT_STATS) of the message generated by the execution. (CHAR(32) in all databases)
ACTIONID: Similar to JOBID but generated by the client and cannot be NULL. The JOBID may be NULL if the user is offline during execution. (SMALLINT; NUMBER(5) in Oracle)
SERVERID: GUID of the Intelligence Server processing the request. (CHAR(32) in all databases)
SERVERMACHINE: Name of the machine hosting the Intelligence Server processing the request. (VARCHAR(255); VARCHAR2(255) in Oracle)
REPORTID: GUID of the report used in the request. (CHAR(32) in all databases)
DOCUMENTID: GUID of the document used in the request. (CHAR(32) in all databases)
MSERVERMACHINE: Name of the load balancing machine. (VARCHAR(255); VARCHAR2(255) in Oracle)
CTREQUESTTIME: Time when the user submits a request to the mobile app. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
CTRECEIVEDTIME: Time when the mobile app begins receiving data from MicroStrategy Mobile Server. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
CTREQRECTIME: Difference between CTRequestTime and CTReceivedTime, in milliseconds. (INTEGER; NUMBER(10) in Oracle)
CTRENDERSTARTTIME: Time when the mobile app begins rendering. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
CTRENDERFINISHTIME: Time when the mobile app finishes rendering. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
CTRENDERTIME: Difference between CTRenderStartTime and CTRenderFinishTime, in milliseconds. (INTEGER; NUMBER(10) in Oracle)
EXECUTIONTYPE: Type of report/document execution: 1: User execution; 2: Pre-cached execution; 3: Application recovery execution; 4: Subscription cache pre-loading execution; 5: Transaction subsequent action execution; 6: Report queue execution; 7: Report queue recall execution; 8: Back button execution. (TINYINT; NUMBER(3) in Oracle; SMALLINT in DB2; BYTEINT in Teradata)
CACHEIND: Whether a cache was hit during the execution, and if so, what type of cache hit occurred: 0: No cache hit; 1: Intelligence Server cache hit; 2: Device cache hit; 6: Application memory cache hit. (TINYINT; NUMBER(3) in Oracle; SMALLINT in DB2; BYTEINT in Teradata)
PROMPTIND: Whether the report or document is prompted: 0: Not prompted; 1: Prompted. (BIT in SQL Server and Sybase; NUMBER(1) in Oracle; SMALLINT in DB2; BYTEINT in Teradata; TINYINT(1) in MySQL)
CTDATATYPE: Whether the job is for a report or a document: 3: Report; 55: Document. (TINYINT; NUMBER(3) in Oracle; SMALLINT in DB2; BYTEINT in Teradata)
CTNETWORKTYPE: The type of network used: 3G, WiFi, LTE, or 4G. (VARCHAR(40); VARCHAR2(40) in Oracle)
CTBANDWIDTH: Estimated network bandwidth, in kbps. (INTEGER; NUMBER(10) in Oracle)
VIEWFINISHTIME: Time at which the user either clicks on another report/document or navigates away from the mobile app. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
VIEWTIME: Difference between CTRenderFinishTime and ViewFinishTime, in milliseconds. (INTEGER; NUMBER(10) in Oracle)
MANIPULATIONS: An integer value that increases with every manipulation the user makes after the report/document is rendered, excluding those that require fetching more data from Intelligence Server and/or result in another report/document execution. (SMALLINT; NUMBER(5) in Oracle)
CTAVGMANIPRENDERTIME: Average rendering time for each manipulation. (INTEGER; NUMBER(10) in Oracle)
REPOSITORYID: GUID of the metadata repository. (CHAR(32) in all databases)
CTLATITUDE: Latitude of the user. (FLOAT; DOUBLE in DB2)
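As an illustration of how these mobile statistics tables relate, the sketch below is not taken from the product documentation; it joins the execution table to the device table on DEVICEINSTID and STATECOUNTER to summarize client rendering time by device type, and exact syntax may vary slightly by database platform.

    -- Average client rendering time (in milliseconds) per device type and day.
    SELECT d.DEVICETYPE,
           e.DAY_ID,
           AVG(e.CTRENDERTIME) AS AVG_RENDER_MS
    FROM   STG_CT_EXEC_STATS e
           JOIN STG_CT_DEVICE_STATS d
             ON  d.DEVICEINSTID = e.DEVICEINSTID
             AND d.STATECOUNTER = e.STATECOUNTER
    GROUP BY d.DEVICETYPE, e.DAY_ID;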
STG_CT_MANIP_STATS
Records statistics related to manipulation of reports/documents in a mobile
app. This table is used when the Mobile Clients and Mobile Clients
Manipulations options are selected in the Statistics category of the Project
Configuration Editor and the mobile client is configured to log statistics. The
data load process moves this table's information to the CT_MANIP_STATS
table, which has the same columns and datatypes.
DAY_ID: Day the action was started. (DATE; TIMESTAMP in Oracle)
HOUR_ID: Hour the action was started. (TINYINT; NUMBER(3) in Oracle; SMALLINT in DB2; BYTEINT in Teradata)
MINUTE_ID: Minute the action was started. (SMALLINT; NUMBER(5) in Oracle)
DEVICEINSTID (I): Unique installation ID of the mobile app. (CHAR(40) in all databases)
STATECOUNTER (I): An integer value that increments when the device information, such as DEVICETYPE, OS, OSVER, or APPVER (in STG_CT_DEVICE_STATS), changes. (INTEGER in SQL Server; NUMBER(5) in Oracle; SMALLINT in DB2, Teradata, Sybase, and MySQL)
USERID: GUID of the user making the request. (CHAR(32) in all databases)
SESSIONID: GUID of the session that executed the request. This should be the same as the SESSIONID for this request in STG_IS_REPORT_STATS. (CHAR(32) in all databases)
CTSESSIONID: GUID of the MicroStrategy Mobile client session ID. A new client session ID is generated every time a user logs in to the mobile app. (CHAR(32) in SQL Server; NUMBER(10) in Oracle; INTEGER in DB2, Teradata, Sybase, and MySQL)
ACTIONID: Similar to JOBID but generated by the client and cannot be NULL. The JOBID may be NULL if the user is offline during execution. (SMALLINT; NUMBER(5) in Oracle)
SERVERID: GUID of the Intelligence Server processing the request. (CHAR(32) in all databases)
SERVERMACHINE: Name of the machine hosting the Intelligence Server processing the request. (VARCHAR(255); VARCHAR2(255) in Oracle)
REPORTID: GUID of the report used in the request. (CHAR(32) in all databases)
DOCUMENTID: GUID of the document used in the request. (CHAR(32) in all databases)
MANIPSEQUENCEID: The order in which the manipulations were made. For each manipulation, the mobile client returns a row, and the value in this column increments for each row. (SMALLINT; NUMBER(5) in Oracle)
MANIPTYPEID: Type of manipulation: 0: Unknown; 1: Selector; 2: Panel Selector; 3: Action Selector; 4: Change Layout; 5: Change View; 6: Sort; 7: Page By. (SMALLINT; NUMBER(5) in Oracle)
MANIPNAME: Name of the item that was manipulated. For example, if a selector was clicked, this is the name of the selector. (VARCHAR(255); VARCHAR2(255) in Oracle)
MANIPVALUE: Value of the item that was manipulated. For example, if a panel selector was clicked, this is the name of the selected panel. (VARCHAR(2000); VARCHAR2(2000) in Oracle)
MANIPVALUESEQ: If the value for MANIPVALUE is too long to fit in a single row, this manipulation is spread over multiple rows, and this value is incremented. (SMALLINT; NUMBER(5) in Oracle)
CTMANIPSTARTTIME: Time when the user submits the manipulation. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
CTMANIPFINISHTIME: Time when the mobile app finishes processing the manipulation and forwards it for rendering. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
CTMANIPTIME: Difference between CTMANIPSTARTTIME and CTMANIPFINISHTIME, in milliseconds. (FLOAT; DOUBLE in DB2)
REPOSITORYID: GUID of the metadata repository. (CHAR(32) in all databases)
DETAIL1: A flexible column to capture different states of manipulation. (VARCHAR(2000); VARCHAR2(2000) in Oracle)
DETAIL2: A flexible column to capture different states of manipulation. (VARCHAR(2000); VARCHAR2(2000) in Oracle)
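For example, a query such as the following sketch (illustrative only; adjust it for your database platform) summarizes how long each type of manipulation takes on average.

    -- Count and average processing time (in milliseconds) per manipulation type.
    SELECT MANIPTYPEID,
           COUNT(*)         AS MANIPULATION_COUNT,
           AVG(CTMANIPTIME) AS AVG_MANIP_MS
    FROM   STG_CT_MANIP_STATS
    GROUP BY MANIPTYPEID;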
STG_IS_CACHE_HIT_STATS
Tracks job executions that hit the report cache. This table is used when the
Basic Statistics option is selected in the Statistics category of the Project
Configuration Editor. The data load process moves this table's information
to the IS_CACHE_HIT_STATS table, which has the same columns and
datatypes.
DAY_ID: Day the job execution hit the report cache. (DATE; TIMESTAMP in Oracle)
HOUR_ID: Hour the job execution hit the report cache. (TINYINT; NUMBER(3) in Oracle; SMALLINT in DB2; BYTEINT in Teradata)
MINUTE_ID: Minute the job execution hit the report cache. (SMALLINT; NUMBER(5) in Oracle)
CACHEINDEX (I): A sequential number for this table. (INTEGER; NUMBER(10) in Oracle)
CACHESESSIONID (I): GUID of the user session. (CHAR(32) in all databases)
SERVERID: GUID of the server definition. (CHAR(32) in all databases)
CACHEHITTIME (I): Timestamp when this cache is hit. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
CACHEHITTYPE (I): Type of cache hit: 0: Report cache hit; 1 or 2: Document cache hit. (TINYINT; NUMBER(3) in Oracle; SMALLINT in DB2; BYTEINT in Teradata)
CACHECREATORJOBID (I): Job ID that created the cache. (INTEGER; NUMBER(10) in Oracle)
CREATORSESSIONID (I): GUID for the session in which the cache was created. (CHAR(32) in all databases)
JOBID (I): Job ID for a partial cache hit, or the document parent job ID if the cache hit originated from a document child report. (INTEGER; NUMBER(10) in Oracle)
STARTTIME: Timestamp of when the job started. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
RECORDTIME (I): Timestamp of when the record was written to the database, according to database system time. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
SERVERMACHINE: (Server machine name:port number) pair. (VARCHAR(255); VARCHAR2(255) in Oracle)
PROJECTID (I): GUID of the project. (CHAR(32) in all databases)
REPOSITORYID: GUID of the metadata repository. (CHAR(32) in all databases)
The table below lists combinations of CACHEHITTYPE and JOBID that can
occur in the STG_IS_CACHE_HIT_STATS table and what those
combinations mean.
Cache hit type 0, with a real job ID in JOBID: For a normal report, a partial cache hit.
Cache hit type 2, with a child job ID in JOBID: For a child report from a document, a partial cache hit; the child report has a job.
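As a hedged example of reading this table, the sketch below counts cache hits per day by cache hit type; it is illustrative only and not taken from the product documentation.

    -- Daily cache hit counts, split by cache hit type
    -- (0 = report cache hit, 1 or 2 = document cache hit).
    SELECT DAY_ID,
           CACHEHITTYPE,
           COUNT(*) AS CACHE_HITS
    FROM   STG_IS_CACHE_HIT_STATS
    GROUP BY DAY_ID, CACHEHITTYPE;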
STG_IS_CUBE_REP_STATS
Records statistics related to Intelligent Cube manipulations. This table is not
populated unless at least one of the Advanced Statistics Collection
Options are selected in the Statistics category of the Project Configuration
Editor. The data load process moves this table's information to the IS_
CUBE_REP_STATS table, which has the same columns and datatypes.
DAY_ID (I): Day the action was started. (DATE; TIMESTAMP in Oracle)
HOUR_ID: Hour the action was started. (TINYINT; NUMBER(3) in Oracle; SMALLINT in DB2; BYTEINT in Teradata)
MINUTE_ID: Minute the action was started. (SMALLINT; NUMBER(5) in Oracle)
SESSIONID: GUID of the session that executed the action on the Intelligent Cube. (CHAR(32) in all databases)
JOBID: Job ID for the action on the Intelligent Cube. (INTEGER; NUMBER(10) in Oracle)
PROJECTID: GUID of the project. (CHAR(32) in all databases)
STARTTIME: Timestamp of when the action started. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
FINISHTIME: Timestamp of when the action finished. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
CUBEREPORTGUID: GUID of the Intelligent Cube report that was executed. (CHAR(32) in all databases)
CUBEINSTANCEID: GUID of the Intelligent Cube instance in memory. (CHAR(32) in all databases)
CUBEACTIONID: Type of action against the Intelligent Cube: 0: Reserved for MicroStrategy use; 1: Cube Publish; 2: Cube View Hit; 3: Cube Dynamic Source Hit; 4: Cube Append; 5: Cube Update; 6: Cube Delete; 7: Cube Destroy. (INTEGER; NUMBER(10) in Oracle)
REPORTGUID: If a report hit the Intelligent Cube, the GUID of that report. (CHAR(32) in all databases)
CUBEKBSIZE: If the Intelligent Cube is published or refreshed, the size of the Intelligent Cube in KB. (INTEGER; NUMBER(10) in Oracle)
CUBEROWSIZE: If the Intelligent Cube is published or refreshed, the number of rows in the Intelligent Cube. (INTEGER; NUMBER(10) in Oracle)
SERVERMACHINE: Name of the Intelligence Server processing the request. (VARCHAR(255); VARCHAR2(255) in Oracle)
REPOSITORYID: GUID of the metadata repository. (CHAR(32) in all databases)
RECORDTIME: Timestamp of when the record was written to the database, according to database system time. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
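For example, the following illustrative sketch (not from the product documentation) lists the size and row count recorded each time an Intelligent Cube was published.

    -- Size and row count for each Intelligent Cube publish action.
    SELECT CUBEREPORTGUID,
           STARTTIME,
           CUBEKBSIZE,
           CUBEROWSIZE
    FROM   STG_IS_CUBE_REP_STATS
    WHERE  CUBEACTIONID = 1;   -- 1 = Cube Publish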
STG_IS_DOC_STEP_STATS
Tracks each step in the document execution process. This table is used
when the Document Job Steps option is selected in the Statistics category
of the Project Configuration Editor. The data load process moves this table's
information to the IS_DOC_STEP_STATS table, which has the same
columns and datatypes.
DAY_ID: Day the document was requested for execution. (DATE; TIMESTAMP in Oracle)
HOUR_ID: Hour the document was requested for execution. (TINYINT; NUMBER(3) in Oracle; SMALLINT in DB2; BYTEINT in Teradata)
MINUTE_ID: Minute the document was requested for execution. (SMALLINT; NUMBER(5) in Oracle)
JOBID: GUID of the document job. (INTEGER; NUMBER(10) in Oracle)
STEPSEQUENCE: Sequence number for a job's steps. (TINYINT; NUMBER(3) in Oracle; SMALLINT in DB2; BYTEINT in Teradata)
SESSIONID: GUID of the user session. (CHAR(32) in all databases)
SERVERID: GUID of the server definition. (CHAR(32) in all databases)
STEPTYPE: Type of step. For a description, see Report and Document Steps, page 2185. 1: Metadata object request step; 2: Close job; 3: SQL generation; 4: SQL execution; 5: Analytical Engine server task; 6: Resolution server task; 7: Report net server task; 8: Element request step; 9: Get report instance; 10: Error message send task; 11: Output message send task; 12: Find report cache task; 13: Document execution step; 14: Document send step; 15: Update report cache task; 16: Request execute step; 17: Data mart execute step; 18: Document data preparation; 19: Document formatting; 20: Document manipulation; 21: Apply view context; 22: Export engine; 23: Find Intelligent Cube task; 24: Update Intelligent Cube task; 25: Post-processing task; 26: Delivery task; 27: Persist result task; 28: Document dataset execution task. (TINYINT; NUMBER(3) in Oracle; SMALLINT in DB2; BYTEINT in Teradata)
STARTTIME: Timestamp of the step's start time. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
FINISHTIME: Timestamp of the step's finish time. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
QUEUETIME: Time duration, in milliseconds, between the last step finish and the next step start. (INTEGER; NUMBER(10) in Oracle)
CPUTIME: CPU time, in milliseconds, used during this step. (INTEGER; NUMBER(10) in Oracle)
STEPDURATION: FINISHTIME minus STARTTIME, in milliseconds. (INTEGER; NUMBER(10) in Oracle)
RECORDTIME: Timestamp of when the record was written to the database, according to database system time. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
SERVERMACHINE: (Server machine name:port number) pair. (VARCHAR(255); VARCHAR2(255) in Oracle)
PROJECTID: GUID of the project. (CHAR(32) in all databases)
REPOSITORYID: GUID of the metadata repository. (CHAR(32) in all databases)
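As an illustration of how the step-level columns are typically used, the sketch below (illustrative only) breaks down where a single document job spent its time; replace the JOBID value with the job of interest.

    -- Elapsed, queue, and CPU time (in milliseconds) per step type for one document job.
    SELECT STEPTYPE,
           SUM(STEPDURATION) AS TOTAL_STEP_MS,
           SUM(QUEUETIME)    AS TOTAL_QUEUE_MS,
           SUM(CPUTIME)      AS TOTAL_CPU_MS
    FROM   STG_IS_DOC_STEP_STATS
    WHERE  JOBID = 12345      -- placeholder job ID
    GROUP BY STEPTYPE;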
STG_IS_DOCUMENT_STATS
Tracks document executions that the Intelligence Server processes. This
table is used when the Basic Statistics option is selected in the Statistics
category of the Project Configuration Editor. The data load process moves
this table's information to the IS_DOCUMENT_STATS table, which has the
same columns and datatypes.
DAY_ID: Day the document was requested for execution. (DATE; TIMESTAMP in Oracle)
HOUR_ID: Hour the document was requested for execution. (TINYINT; NUMBER(3) in Oracle; SMALLINT in DB2; BYTEINT in Teradata)
MINUTE_ID: Minute the document was requested for execution. (SMALLINT; NUMBER(5) in Oracle)
SESSIONID (I): GUID of the user session. (CHAR(32) in all databases)
SERVERID: GUID of the Intelligence Server's server definition at the time of the request. (CHAR(32) in all databases)
SERVERMACHINE: Server machine name or IP address. (VARCHAR(255); VARCHAR2(255) in Oracle)
PROJECTID: GUID of the project. (CHAR(32) in all databases)
DOCUMENTID (I): GUID of the document. (CHAR(32) in all databases)
REQUESTRECTIME: The timestamp at which the request is received. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
REQUESTQUEUETIME: Total queue time of all steps in this request. (INTEGER; NUMBER(10) in Oracle)
STARTTIME: Time duration between request receive time and when the document job was created. An offset of the RequestRecTime. (INTEGER; NUMBER(10) in Oracle)
FINISHTIME: Time duration between request receive time and when the document job's last step was finished. An offset of the RequestRecTime. (INTEGER; NUMBER(10) in Oracle)
EXECERRORCODE: Execution error code. If there is no error, the value is 0. (INTEGER; NUMBER(10) in Oracle)
REPORTCOUNT: Number of reports included in the document. (SMALLINT; NUMBER(5) in Oracle)
CANCELINDICATOR: Was the document job canceled? (BIT in SQL Server and Sybase; NUMBER(1) in Oracle; SMALLINT in DB2; BYTEINT in Teradata; TINYINT(1) in MySQL)
PROMPTINDICATOR: Number of prompts in the report. (SMALLINT; NUMBER(5) in Oracle)
CACHEDINDICATOR: Was the document cached? (BIT in SQL Server and Sybase; NUMBER(1) in Oracle; SMALLINT in DB2; BYTEINT in Teradata; TINYINT(1) in MySQL)
RECORDTIME (I): Timestamp of when the record was written to the database, according to database system time. (DATETIME; TIMESTAMP in Oracle, DB2, and Teradata)
CPUTIME: CPU time, in milliseconds, used for document execution. (INTEGER; NUMBER(10) in Oracle)
STEPCOUNT: Total number of steps involved in execution (not just unique steps). (TINYINT; NUMBER(3) in Oracle; SMALLINT in DB2; BYTEINT in Teradata)
EXECDURATION: Duration of execution, in milliseconds. (INTEGER; NUMBER(10) in Oracle)
ERRORMESSAGE: Error message displayed to the user when an error is encountered. (VARCHAR(4000); VARCHAR2(4000) in Oracle)
EXECACTIONS: Intelligence Server-related actions that need to take place during document execution. (INTEGER in all databases)
EXECFLAGS: Intelligence Server-related processes needed to refine the document execution. (INTEGER in all databases)
PROMPTANSTIME: Total time, in milliseconds, the user spent answering prompts on the document. (INTEGER; NUMBER(10) in Oracle)
EXPORTINDC: 1 if the document was exported, otherwise 0. (TINYINT; NUMBER(3) in Oracle; SMALLINT in DB2; BYTEINT in Teradata)
CACHECREATORJOBID: If the job hit a cache, the job ID of the job that created the cache used by the current job. (INTEGER; NUMBER(10) in Oracle)
CACHECREATORSESSONID: If the job hit a cache, the GUID for the session in which the cache was created. (CHAR(32) in all databases)
REPOSITORYID: GUID of the metadata repository. (CHAR(32) in all databases)
MESSAGEID: For MicroStrategy use. (CHAR(32) in all databases)
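For example, a sketch such as the following (illustrative only; indicator-column handling may need a cast on some platforms) summarizes execution counts, average duration, and how often a cache was used, per document.

    -- Executions, average duration (ms), and cached executions per document.
    SELECT DOCUMENTID,
           COUNT(*)          AS EXECUTIONS,
           AVG(EXECDURATION) AS AVG_EXEC_MS,
           SUM(CASE WHEN CACHEDINDICATOR = 1 THEN 1 ELSE 0 END) AS CACHED_EXECUTIONS
    FROM   STG_IS_DOCUMENT_STATS
    GROUP BY DOCUMENTID;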
STG_IS_INBOX_ACT_STATS
Records statistics related to History List manipulations. This table is used
when the Inbox Messages option is selected in the Statistics category of
the Project Configuration Editor. The data load process moves this table's
information to the IS_INBOX_ACT_STATS table, which has the same
columns and datatypes.
Day the
manipul
TIMEST
DAY_ID (I) ation DATE DATE DATE DATE DATE
AMP
was
started.
Hour the
manipula TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
HOUR_ID
tion was T R(3) NT T T T
started.
Minute
the
manipul SMALL NUMBE SMALLI SMALLI SMALL SMALL
MINUTE_ID
ation INT R(5) NT NT INT INT
was
started.
GUID of
the
session
CHAR CHAR CHAR CHAR CHAR CHAR
SESSIONID (I) that
(32) (32) (32) (32) (32) (32)
started
the
History
List
manipula
tion.
GUID of
the
server
definitio
n of the
CHAR CHAR CHAR CHAR CHAR CHAR
SERVERID Intellige
(32) (32) (32) (32) (32) (32)
nce
Server
being
manipul
ated.
Name
and port
number
of the
Intelligen
ce
VARCH VARCH VARCH VARCH
SERVERMAC Server VARCH VARCH
AR AR2 AR AR
HINE machine AR(255) AR(255)
(255) (255) (255) (255)
where
the
manipula
tion is
taking
place.
GUID of
CHAR CHAR CHAR CHAR CHAR CHAR
PROJECTID the
(32) (32) (32) (32) (32) (32)
project
where
the
History
List
message
is
mapped.
Type of
manipula
tion:
0:
Reserve
d for
MicroStr
ategy
use
1: Add:
Add
INBOXACTIO TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
message
N T R(3) NT T T T
to
History
List
2:
Remove:
Remove
message
from
History
List
3:
Rename:
Rename
message
4:
Execute:
Execute
contents
of
message
5:
Change
Status:
Change
message
status
from
Ready to
Read
6:
Request
ed:
Retrieve
message
contents
7: Batch
Remove:
Intelligen
ce
Server
bulk
operatio
n, such
as cache
expiratio
n
ID of the
user
doing CHAR CHAR CHAR CHAR CHAR CHAR
USERID
the (32) (32) (32) (32) (32) (32)
manipul
ation.
ID of the
user that
created CHAR CHAR CHAR CHAR CHAR CHAR
OWNERID
the (32) (32) (32) (32) (32) (32)
messag
e.
GUID of
the
History
List CHAR CHAR CHAR CHAR CHAR CHAR
MESSAGEID
message (32) (32) (32) (32) (32) (32)
being
acted
on.
Name of
the
VARCH VARCH VARCH VARCH
MESSAGETIT report or VARCH VARCH
AR AR2 AR AR
LE documen AR(255) AR(255)
(255) (255) (255) (255)
t
referenc
ed in the
History
List
messag
e.
User-
defined
name of
the
History
List
messag
e. Blank VARC VARCH VARC VARC
MESSAGEDIS VARCH VARCH
unless HAR AR2 HAR HAR
PNAME AR(255) AR(255)
the user (255) (255) (255) (255)
has
renamed
the
History
List
messag
e.
Date and
time
when the
CREATIONTI History DATET TIMEST TIMEST TIMEST DATET DATET
ME List IME AMP AMP AMP IME IME
message
was
created.
Time when the manipulation started. (DATETIME / TIMESTAMP / TIMESTAMP / TIMESTAMP / DATETIME / DATETIME)
Report
job ID for
the
History
List
Message
Content
Request.
REPORTJOBI INTEG NUMBE INTEGE INTEGE INTEG INTEG
Blank if
D (I) ER R(10) R R ER ER
no job
was
executed
or if a
documen
t was
execute
d.
Docume
nt job ID
for the
History
DOCUMENTJ List INTEG NUMBE INTEGE INTEGE INTEG INTEG
OBID (I) Message ER R(10) R R ER ER
Content
Request.
Blank if
no job
was
executed
or if a
report
was
execute
d.
ID of the
subscript
ion that
SUBSCRIPTI CHAR CHAR CHAR CHAR CHAR CHAR
invoked
ONID (32) (32) (32) (32) (32) (32)
the
manipula
tion.
If the
manipul
ation is a
batch
deletion
of
History
List VARC VARCH VARCH VARCH VARC VARC
ACTIONCOM
message HAR AR2 AR AR HAR HAR
MENT
s, this (4000) (4000) (4000) (4000) (4000) (4000)
field
contains
the
condition
or SQL
stateme
nt used
to delete
the
message
s.
If there
is an
error,
this field
holds the
error
messag
e.
GUID of
the
REPOSITORY CHAR CHAR CHAR CHAR CHAR CHAR
metadata
ID (32) (32) (32) (32) (32) (32)
repositor
y.
Timesta
mp of
when the
record
was
written
RECORDTIM to the DATET TIMEST TIMEST TIMEST DATET DATET
E databas IME AMP AMP AMP IME IME
e,
accordin
g to
databas
e system
time.
STG_IS_MESSAGE_STATS
Records statistics related to sending messages through Distribution
Services. This table is used when the Basic statistics option is selected in
the Statistics category of the Project Configuration Editor. The data load
process moves this table's information to the IS_MESSAGE_STATS table,
which has the same columns and datatypes.
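For example, a sketch like the following (generic SQL; the column names are those documented below, and syntax may differ by platform) counts Distribution Services messages per day by delivery status:

    SELECT DAY_ID,
           DELIVERYSTATUS,
           COUNT(*) AS MESSAGE_COUNT
    FROM STG_IS_MESSAGE_STATS
    GROUP BY DAY_ID, DELIVERYSTATUS
    ORDER BY DAY_ID, DELIVERYSTATUS;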
Column: Description (SQL Server / Oracle / DB2 / Teradata / Sybase / MySQL datatypes)
Day the
job was
TIMES
DAY_ID requested DATE DATE DATE DATE DATE
TAMP
for
execution.
Hour the
job was
TINYI NUMBE SMALLI BYTEIN TINYI TINYI
HOUR_ID requested
NT R(3) NT T NT NT
for
execution.
Minute the
job was
SMAL NUMB SMALL SMALL SMAL SMAL
MINUTE_ID requested
LINT ER(5) INT INT LINT LINT
for
execution.
Message
GUID used
INTEG NUMBE INTEG INTEG INTEG INTEG
MESSAGEINDEX to identify
ER R(10) ER ER ER ER
a
message.
GUID of
the user
session
CHAR CHAR CHAR CHAR CHAR CHAR
SESSIONID created to
(32) (32) (32) (32) (32) (32)
generate
the
message.
History List
message
ID. If there
is no
History List
message
associated
HISTORYLISTM with the CHAR CHAR CHAR CHAR CHAR CHAR
ESSAGEID subscriptio (32) (32) (32) (32) (32) (32)
n, this
value is
00000000
00000000
00000000
00000000.
Job ID of
report/doc
ument
SCHEDULEJOBI INTEG NUMB INTEG INTEG INTEG INTEG
executed
D ER ER(10) ER ER ER ER
to run the
subscriptio
n instance.
If no job is
created,
this value
is -1. If a
fresh job A
is created
and it hits
the cache
of an old
job B,
SCHEDUL
EJOBID
takes the
value of
the fresh
job A.
Type of
subscribed
object:
TINYI NUMBE SMALLI BYTEIN TINYI TINYI
DATATYPE
3: Report NT R(3) NT T NT NT
55:
Document
GUID of
RECIPIENTCON the CHAR CHAR CHAR CHAR CHAR CHAR
TACTID message (32) (32) (32) (32) (32) (32)
recipient.
n:
1: Email
2: File
4: Printer
8: Custom
16: History
List
32: Client
40: Cache
update
128:
Mobile
100: Last
one
255: All
Subscripti
on
instance CHAR CHAR CHAR CHAR CHAR CHAR
SUBSINSTID
GUID used (32) (32) (32) (32) (32) (32)
to send the
message.
Schedule
GUID. If
CHAR CHAR CHAR CHAR CHAR CHAR
SCHEDULEID there is no
(32) (32) (32) (32) (32) (32)
schedule
associated
with the
subscriptio
n, this
value is -1.
Name of
VARC VARCH VARCH VARCH VARC VARC
the
SUBINSTNAME HAR AR2 AR AR HAR HAR
subscriptio
(255) (255) (255) (255) (255) (255)
n.
GUID of
CHAR CHAR CHAR CHAR CHAR CHAR
DATAID the data
(32) (32) (32) (32) (32) (32)
content.
The
contact
type for
this
TINYI NUMB SMALL BYTEI TINYI TINYI
CONTACTTYPE subscriptio
NT ER(3) INT NT NT NT
n
instance's
RecipientI
D.
Recipient's
group ID
for group
messages
RECIPIENTGRO CHAR CHAR CHAR CHAR CHAR CHAR
sent to a
UPID (32) (32) (32) (32) (32) (32)
Contact
Collection
or a User
Group.
Name of
the contact
VARC VARCH VARCH VARCH VARC VARC
RECIPIENTCON who
HAR AR2 AR AR HAR HAR
TACTNAME received
(255) (255) (255) (255) (255) (255)
the
message.
Whether
the
address
that the
message
was sent to
is the
ISDEFAULTADD NUMBE SMALLI BYTEIN TINYI
default BIT BIT
RESS R(1) NT T NT(1)
address of
a
MicroStrat
egy user:
0: No
1: Yes
GUID of
the
address
CHAR CHAR CHAR CHAR CHAR CHAR
ADDRESSID the
(32) (32) (32) (32) (32) (32)
message
was sent
to.
Whether a
notification
ISNOTIFICATIO was sent: NUMB SMALL BYTEI TINYI
BIT BIT
NMESSAGE ER(1) INT NT NT(1)
0: No
1: Yes
Address ID
VARC VARCH VARCH VARCH VARC VARC
NOTIFICATIONA the
HAR AR2 AR AR HAR HAR
DDR notification
(255) (255) (255) (255) (255) (255)
is sent to.
Server
definition
GUID
CHAR CHAR CHAR CHAR CHAR CHAR
SERVERID under
(32) (32) (32) (32) (32) (32)
which the
subscriptio
n ran.
Server
machine
name or IP
address VARC VARCH VARCH VARCH VARC VARC
SERVERMACHI
under HAR AR2 AR AR HAR HAR
NE
which the (255) (255) (255) (255) (255) (255)
report or
document
job ran.
Project
GUID
under
CHAR CHAR CHAR CHAR CHAR CHAR
PROJECTID which the
(32) (32) (32) (32) (32) (32)
data
content
resides.
Time at
which the
EXECSTARTTIM DATE TIMES TIMES TIMES DATE DATE
message
E TIME TAMP TAMP TAMP TIME TIME
creation
started.
Time at
which the
EXECFINISH DATE TIMES TIMES TIMES DATE DATE
message
TIME TIME TAMP TAMP TAMP TIME TIME
delivery
finished.
Status of
DELIVERYSTAT the INTEG NUMBE INTEG INTEG INTEG INTEG
US message ER R(10) ER ER ER ER
delivery.
Email
address
VARC VARCH VARCH VARCH VARC VARC
PHYSICALADDR the
HAR AR2 AR AR HAR HAR
ESS message
(255) (255) (255) (255) (255) (255)
was sent
to.
Timestamp
of when
the record DATE TIMES TIMES TIMES DATE DATE
RECORDTIME
was TIME TAMP TAMP TAMP TIME TIME
written to
the table.
GUID of
the CHAR CHAR CHAR CHAR CHAR CHAR
REPOSITORYID
metadata (32) (32) (32) (32) (32) (32)
repository.
STG_IS_PERF_MON_STATS
Records statistics related to notification, diagnostics, and performance
counters logged by Intelligence Server. This table is used when the
performance counters in the Diagnostics and Performance Monitoring Tool
are configured to record statistics information. The data load process moves
this table's information to the IS_PERF_MON_STATS table, which has the
same columns and datatypes.
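As an illustration, the following generic SQL sketch (column names as documented below; adjust the syntax for your platform) summarizes recorded counter values by category and counter name:

    SELECT COUNTER_CAT,
           COUNTER_NAME,
           AVG(COUNTER_VALUE) AS AVG_VALUE,
           MAX(COUNTER_VALUE) AS MAX_VALUE
    FROM STG_IS_PERF_MON_STATS
    GROUP BY COUNTER_CAT, COUNTER_NAME
    ORDER BY COUNTER_CAT, COUNTER_NAME;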
Column: Description (SQL Server / Oracle / DB2 / Teradata / Sybase / MySQL datatypes)
Day the
DAY_ID TIMEST
performa DATE DATE DATE DATE DATE
(I) AMP
nce
counter
was
recorded.
Hour the
performan
TINYIN NUMBER SMALLI TINYIN TINYIN
HOUR_ID ce counter BYTEINT
T (3) NT T T
was
recorded.
Minute
the
performa
MINUTE_ SMALLI NUMBER SMALLI SMALLI SMALLI SMALLI
nce
ID NT (5) NT NT NT NT
counter
was
recorded.
The
server
machine
SERVER_ that logs VARCH VARCHA VARCHA VARCHA VARCH VARCH
MACHINE the AR(255) R2(255) R(255) R(255) AR(255) AR(255)
notificatio
n
message.
The
category
of the VARCH VARCH VARCH
COUNTE VARCHA VARCH VARCH
counter, AR AR AR
R_CAT R2(255) AR(255) AR(255)
such as (255) (255) (255)
Memory,
MicroStra
tegy
Server
Jobs, or
MicroStra
tegy
Server
Users.
COUNTE
For
R_ VARCH VARCHA VARCHA VARCHA VARCH VARCH
MicroStrat
INSTANC AR(255) R2(255) R(255) R(255) AR(255) AR(255)
egy use.
E
Name of
the VARCH VARCH VARCH
COUNTE VARCHA VARCH VARCH
performa AR AR AR
R_NAME R2(255) AR(255) AR(255)
nce (255) (255) (255)
counter.
Timestam
p of when
the event
EVENT_ DATETI TIMESTA TIMEST TIMEST DATETI DATETI
occurred
TIME ME MP AMP AMP ME ME
in
Intelligen
ce Server.
COUNTE Counter
FLOAT FLOAT DOUBLE FLOAT FLOAT FLOAT
R_VALUE value.
Counter
CTR_ TINYIN NUMBER SMALLI TINYIN TINYIN
value BYTEINT
VAL_TYP T (3) NT T T
type.
the
ID (32) (32) (32) (32) (32) (32)
project.
Timestam
p of when
the record
was
written to
RECORD the DATETI TIMESTA TIMEST TIMEST DATETI DATETI
TIME database, ME MP AMP AMP ME ME
according
to
database
system
time.
STG_IS_PR_ANS_STATS
Records statistics related to prompts and prompt answers. This table is used
when the Prompts option is selected in the Statistics category of the Project
Configuration Editor. The data load process moves this table's information
to the IS_PR_ANS_STATS table, which has the same columns and
datatypes.
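For example, to review the prompt answers captured for a particular user session, a query along these lines can be used. This is a sketch in generic SQL; the session GUID is a placeholder you supply, and the column names are documented below.

    SELECT JOBID,
           PR_ORDER_ID,
           PR_TITLE,
           PR_ANSWERS
    FROM STG_IS_PR_ANS_STATS
    WHERE SESSIONID = '<session GUID>'   -- placeholder: supply the 32-character session GUID
    ORDER BY JOBID, PR_ORDER_ID;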
prompt
was
AMP
answere
d.
Hour the
prompt
TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
HOUR_ID was
T R(3) NT T T T
answere
d.
Minute
the
prompt SMALL NUMBE SMALLI SMALLI SMALL SMALL
MINUTE_ID
was INT R(5) NT NT INT INT
answere
d.
Job ID
assigned INTEG NUMBE INTEGE INTEGE INTEG INTEG
JOBID
by the ER R(10) R R ER ER
server.
GUID of
CHAR CHAR CHAR CHAR CHAR CHAR
SESSIONID the user
(32) (32) (32) (32) (32) (32)
session.
Order in
which
prompts
PR_ORDER_ were SMALLI NUMBE SMALLI SMALLI SMALLI SMALLI
ID answere NT R(5) NT NT NT NT
d. Prompt
order is
set in
Develope
r's
Prompt
Ordering
dialog
box.
Sequenc
e ID. For
ANS_SEQ_ SMALL NUMBE SMALLI SMALLI SMALL SMALL
MicroStr
ID INT R(5) NT NT INT INT
ategy
use.
The COM
object
type of
the object
that the
prompt
resides
in:
PR_LOC_ TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
TYPE • 1: T R(3) NT T T T
Filter
• 2:
Templ
ate
• 12:
Attribu
te
ID of the
CHAR CHAR CHAR CHAR CHAR CHAR
PR_LOC_ID object
(32) (32) (32) (32) (32) (32)
that the
prompt
resides
in.
Object
name of
the object VARCH VARCH VARCH VARCH
PR_LOC_ VARCH VARCH
that the AR AR2 AR AR
DESC AR(255) AR(255)
prompt (255) (255) (255) (255)
resides
in.
GUID of
CHAR CHAR CHAR CHAR CHAR CHAR
PR_GUID the
(32) (32) (32) (32) (32) (32)
prompt.
Prompt
title. This
cannot
be NULL.
This is
the text
VARCH VARCH VARCH VARCH
that is VARCH VARCH
PR_TITLE AR AR2 AR AR
displayed AR(255) AR(255)
(255) (255) (255) (255)
in
Develope
r's
Prompt
Ordering
dialog
box,
under
Title.
Type of
prompt.
For
example,
PR_ANS_ TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
element,
TYPE T R(3) NT T T T
expressio
n, object,
or
numeric.
PR_ANSWERS: Prompt answers. (VARCHAR(4000) / VARCHAR2(4000 CHAR) / VARCHAR(4000) / VARCHAR(4000) / VARCHAR(4000) / VARCHAR(4000))
Y: If a
prompt
answer is
required.
IS_
N: If a CHAR CHAR CHAR CHAR CHAR CHAR
REQUIRED
prompt
answer is
not
required.
server
definition.
GUID of
CHAR CHAR CHAR CHAR CHAR CHAR
PROJECTID the
(32) (32) (32) (32) (32) (32)
project.
The
Intelligen
ce Server VARCH VARCH VARCH VARCH
SERVERMA VARCH VARCH
machine AR AR2 AR AR
CHINE AR(255) AR(255)
name and (255) (255) (255) (255)
IP
address.
Timesta
mp of the DATET TIMEST TIMEST TIMEST DATET DATET
STARTTIME
job start IME AMP AMP AMP IME IME
time.
Timestam
p of when
the
record
was
written to
RECORDTI DATETI TIMEST TIMEST TIMEST DATETI DATETI
the
ME ME AMP AMP AMP ME ME
database,
according
to
database
system
time.
the
metadata
RYID (32) (32) (32) (32) (32) (32)
repositor
y.
STG_IS_PROJ_SESS_STATS
Records statistics related to project sessions. This table is used when the
Basic Statistics option is selected in the Statistics category of the Project
Configuration Editor. The data load process moves this table's information
to the IS_PROJ_SESS_STATS table, which has the same columns and
datatypes.
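For example, the following sketch (generic SQL; column names as documented below) counts closed project sessions per day and user:

    SELECT DAY_ID,
           USERID,
           COUNT(*) AS PROJECT_SESSIONS
    FROM STG_IS_PROJ_SESS_STATS
    WHERE DISCONNECTTIME IS NOT NULL   -- only sessions that have already been closed
    GROUP BY DAY_ID, USERID
    ORDER BY DAY_ID, USERID;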
Day the
project
TIMEST
DAY_ID session DATE DATE DATE DATE DATE
AMP
was
started.
Hour the
project
TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
HOUR_ID session
T R(3) NT T T T
was
started.
project
session
INT R(5) NT NT INT INT
was
started.
Session
object
GUID. This
is the
same
session ID
used in
STG_IS_
SESSION_
STATS.
If you
close
and CHAR CHAR CHAR CHAR CHAR CHAR
SESSIONID reope (32) (32) (32) (32) (32) (32)
n the
proje
ct
conn
ectio
n
witho
ut
loggi
ng
out
from
Intelli
genc
e
Serve
r, the
sessi
on ID
is
reuse
d.
Server
CHAR CHAR CHAR CHAR CHAR CHAR
SERVERID definition
(32) (32) (32) (32) (32) (32)
GUID.
The
Intelligenc
e Server VARCH VARCH VARCH VARCH
SERVERMA VARCH VARCH
machine AR AR2 AR AR
CHINE AR(255) AR(255)
name and (255) (255) (255) (255)
IP
address.
GUID of
the user CHAR CHAR CHAR CHAR CHAR CHAR
USERID
performing (32) (32) (32) (32) (32) (32)
the action.
Timestam
p of when
CONNECTT DATET TIMEST TIMEST TIMEST DATET DATET
the
IME IME AMP AMP AMP IME IME
session
was
opened.
Timestamp
of when
DISCONNE the DATET TIMEST TIMEST TIMEST DATET DATET
CTTIME (I) session IME AMP AMP AMP IME IME
was
closed.
Timestam
p of when
the record
RECORDTI was DATET TIMEST TIMEST TIMEST DATET DATET
ME (I) written to IME AMP AMP AMP IME IME
the
statistics
database.
GUID of
REPOSITO the CHAR CHAR CHAR CHAR CHAR CHAR
RYID metadata (32) (32) (32) (32) (32) (32)
repository.
STG_IS_REP_COL_STATS
Tracks the column-table combinations used in the SQL during report
executions. This table is used when the Report job tables/columns
accessed option is selected in the Statistics category of the Project
Configuration Editor. The data load process moves this table's information
to the IS_REP_COL_STATS table, which has the same columns and
datatypes.
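Because this table records how often each column/table/clause combination is used, it lends itself to warehouse tuning queries such as the following sketch (generic SQL; column names as documented below):

    SELECT COLUMNNAME,
           SQLCLAUSETYPEID,
           SUM(COUNTER) AS TOTAL_REFERENCES
    FROM STG_IS_REP_COL_STATS
    GROUP BY COLUMNNAME, SQLCLAUSETYPEID
    ORDER BY TOTAL_REFERENCES DESC;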
Day the
report was
TIMEST
DAY_ID requested DATE DATE DATE DATE DATE
AMP
for
execution.
Hour the
report was
TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
HOUR_ID requested
T R(3) NT T T T
for
execution.
Minute the
report was
SMALL NUMBE SMALLI SMALLI SMALL SMALL
MINUTE_ID requested
INT R(5) NT NT INT INT
for
execution.
SESSIONID: GUID of the user session. (CHAR(32) / CHAR(32) / CHAR(32) / CHAR(32) / CHAR(32) / CHAR(32))
GUID of the
CHAR CHAR CHAR CHAR CHAR CHAR
SERVERID server
(32) (32) (32) (32) (32) (32)
definition.
GUID of the
CHAR CHAR CHAR CHAR CHAR CHAR
TABLEID database
(32) (32) (32) (32) (32) (32)
tables used.
GUID of the
CHAR CHAR CHAR CHAR CHAR CHAR
COLUMNID columns
(32) (32) (32) (32) (32) (32)
used.
Description
VARC VARCH VARC VARC
COLUMNNA of the VARCH VARCH
HAR AR2 HAR HAR
ME column AR(255) AR(255)
(255) (255) (255) (255)
used.
The SQL
clause in
SQLCLAUSE TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
which the
TYPEID T R(3) NT T T T
column is
being used.
The number
of times a
specific
column/tabl
e/clause
INTEG NUMBE INTEGE INTEGE INTEG INTEG
COUNTER type
ER R(10) R R ER ER
combination
occurs
within a
report
execution.
Timestamp
DATET TIMEST TIMEST TIMEST DATET DATET
STARTTIME of the job
IME AMP AMP AMP IME IME
start time.
Timestamp
of when the
record was
RECORDTI written to DATET TIMEST TIMEST TIMEST DATET DATET
ME the IME AMP AMP AMP IME IME
database,
according to
database
system time.
(Server
machine VARC VARCH VARC VARCH
SERVERMA VARCH VARCH
name:port HAR AR2 HAR AR
CHINE AR(255) AR(255)
number) (255) (255) (255) (255)
pair.
GUID of the
REPOSITO CHAR CHAR CHAR CHAR CHAR CHAR
metadata
RYID (32) (32) (32) (32) (32) (32)
repository.
STG_IS_REP_SEC_STATS
Tracks executions that used security filters. This table is used when the
Basic Statistics option is selected in the Statistics category of the Project
Configuration Editor. The data load process moves this table's information
to the IS_REP_SEC_STATS table, which has the same columns and
datatypes.
Day the
TIMEST
DAY_ID job was DATE DATE DATE DATE DATE
AMP
request
ed for
executio
n.
Hour the
job was
requeste TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
HOUR_ID
d for T R(3) NT T T T
executio
n.
Minute
the job
was
SMALL NUMBE SMALLI SMALLI SMALL SMALL
MINUTE_ID request
INT R(5) NT NT INT INT
ed for
executio
n.
GUID of
CHAR CHAR CHAR CHAR CHAR CHAR
SESSIONID (I) the user
(32) (32) (32) (32) (32) (32)
session.
Sequenc
e
number
of the
SECURITYFIL SMALL NUMBE SMALLI SMALLI SMALL SMALL
security
TERSEQ INT R(5) NT NT INT INT
filter,
when
multiple
security
filters
are
used.
Server
CHAR CHAR CHAR CHAR CHAR CHAR
SERVERID definitio
(32) (32) (32) (32) (32) (32)
n GUID.
Security
SECURITYFIL CHAR CHAR CHAR CHAR CHAR CHAR
filter
TERID (I) (32) (32) (32) (32) (32) (32)
GUID.
Timesta
mp of
DATET TIMEST TIMEST TIMEST DATET DATET
STARTTIME when
IME AMP AMP AMP IME IME
the job
started.
Timesta
mp of
when
the
record
was
written
DATET TIMEST TIMEST TIMEST DATET DATET
RECORDTIME to the
IME AMP AMP AMP IME IME
databas
e,
accordin
g to
databas
e system
time.
machine
name:p
HAR AR2 HAR HAR
HINE ort AR(255) AR(255)
(255) (255) (255) (255)
number)
pair.
GUID of
CHAR CHAR CHAR CHAR CHAR CHAR
PROJECTID the
(32) (32) (32) (32) (32) (32)
project.
GUID of
the
REPOSITORY metadat CHAR CHAR CHAR CHAR CHAR CHAR
ID a (32) (32) (32) (32) (32) (32)
reposito
ry.
STG_IS_REP_SQL_STATS
Enables access to the SQL for a report execution. This table is used when
the Report Job SQL option is selected in the Statistics category of the
Project Configuration Editor. The data load process moves this table's
information to the IS_REP_SQL_STATS table, which has the same columns
and datatypes.
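For example, to find the longest-running SQL passes, a sketch like the following can be used (generic SQL; column names as documented below, with EXECTIME in milliseconds):

    SELECT SESSIONID,
           SQLPASSSEQUENCE,
           SQLPASSTYPE,
           EXECTIME,
           SQLSTATEMENT
    FROM STG_IS_REP_SQL_STATS
    ORDER BY EXECTIME DESC;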
SQL
pass
AMP
was
started.
Hour the
SQL
TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
HOUR_ID pass
T R(3) NT T T T
was
started.
Minute
the SQL
SMALL NUMBE SMALLI SMALLI SMALL SMALL
MINUTE_ID pass
INT R(5) NT NT INT INT
was
started.
Sequen
ce
SQLPASSSEQU number SMALL NUMBE SMALLI SMALLI SMALL SMALL
ENCE of the INT R(5) NT NT INT INT
SQL
pass.
GUID of
CHAR CHAR CHAR CHAR CHAR CHAR
SESSIONID the user
(32) (32) (32) (32) (32) (32)
session.
GUID of
the CHAR CHAR CHAR CHAR CHAR CHAR
SERVERID
server (32) (32) (32) (32) (32) (32)
definitio
n.
Start
timesta
DATET TIMEST TIMEST TIMEST DATET DATET
STARTTIME mp of
IME AMP AMP AMP IME IME
the SQL
pass.
Finish
timesta
DATET TIMEST TIMEST TIMEST DATET DATET
FINISHTIME mp of
IME AMP AMP AMP IME IME
the SQL
pass.
Executi
on time,
in
millisec INTEG NUMBE INTEGE INTEGE INTEG INTEG
EXECTIME
onds, ER R(10) R R ER ER
for the
SQL
pass.
SQL
VARC VARCH VARCH VARCH VARC VARC
SQLSTATEMEN used in
HAR AR2 AR AR HAR HAR
T the
(4000) (4000) (4000) (4000) (4000) (4000)
pass.
SQLPASSTYPE: Type of SQL pass. (TINYINT / NUMBER(3) / SMALLINT / BYTEINT / TINYINT / TINYINT)
0: SQL unknown
1: SQL select
2: SQL insert
3: SQL create
4: Analytical
5: Select into
6: Insert into values
7: Homogeneous partition query
8: Heterogeneous partition query
9: Metadata partition pre-query
10: Metadata partition list pre-query
11: Empty
12: Create index
13: Metric qual. break by
14: Metric qual. threshold
15: Metric qual.
16: User-defined
17: Homogeneous partition loop
18: Homogeneous partition one tbl
19: Heterogeneous partition loop
20: Heterogeneous partition one tbl
21: Insert fixed values into
22: Datamart from Analytical Engine
23: Clean up temp resources
24: Return elm number
25: Incremental elem browsing
26: MDX query
27: SAP BI
28: Intelligent Cube instruc
29: Heterogeneous data access
30: Excel file data import
31: Text file data import
32: Database table import
33: SQL data import
Number
of
TOTALTABLEA tables SMALL NUMBE SMALLI SMALLI SMALL SMALL
CCESSED hit by INT R(5) NT NT INT INT
the SQL
pass.
Error
messag
e VARC VARCH VARCH VARCH VARC VARCH
DBERRORMES
returned HAR AR2 AR AR HAR AR
SAGE
from (4000) (4000) (4000) (4000) (4000) (4000)
databas
e.
Timesta
mp of
when DATET TIMEST TIMEST TIMEST DATET DATET
RECORDTIME
the IME AMP AMP AMP IME IME
record
was
written
to the
databas
e,
accordi
ng to
databas
e
system
time.
(Server
machine
VARC VARCH VARC VARCH
SERVERMACHI name:p VARCH VARCH
HAR AR2 HAR AR
NE ort AR(255) AR(255)
(255) (255) (255) (255)
number)
pair.
GUID of
CHAR CHAR CHAR CHAR CHAR CHAR
PROJECTID the
(32) (32) (32) (32) (32) (32)
project.
GUID of
the
physical
CHAR CHAR CHAR CHAR CHAR CHAR
DBINSTANCEID databas
(32) (32) (32) (32) (32) (32)
e
instanc
e.
GUID of
DBCONNECTIO CHAR CHAR CHAR CHAR CHAR CHAR
the
NID (32) (32) (32) (32) (32) (32)
databas
e
connect
ion.
GUID of
the CHAR CHAR CHAR CHAR CHAR CHAR
DBLOGINID
databas (32) (32) (32) (32) (32) (32)
e login.
Sequen
ce
number
SQLSTATEMENT TINYI NUMBE SMALLI BYTEIN TINYI TINYIN
of the
SEQ NT R(3) NT T NT T
SQL
stateme
nt.
GUID of
the
metadat CHAR CHAR CHAR CHAR CHAR CHAR
REPOSITORYID
a (32) (32) (32) (32) (32) (32)
reposito
ry.
STG_IS_REP_STEP_STATS
Tracks each step in the report execution process. This table is used when
the Report Job Steps option is selected in the Statistics category of the
Project Configuration Editor. The data load process moves this table's
information to the IS_REP_STEP_STATS table, which has the same
columns and datatypes.
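For example, the following sketch (generic SQL; column names as documented below, durations in milliseconds) totals step duration and CPU time by step type:

    SELECT STEPTYPE,
           COUNT(*) AS STEP_COUNT,
           SUM(STEPDURATION) AS TOTAL_DURATION_MS,
           SUM(CPUTIME) AS TOTAL_CPU_MS
    FROM STG_IS_REP_STEP_STATS
    GROUP BY STEPTYPE
    ORDER BY TOTAL_DURATION_MS DESC;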
Column: Description (SQL Server / Oracle / DB2 / Teradata / Sybase / MySQL datatypes)
Day the
report
was
TIMEST
DAY_ID requeste DATE DATE DATE DATE DATE
AMP
d for
executio
n.
Hour the
report
was
TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
HOUR_ID requeste
T R(3) NT T T T
d for
executio
n.
Minute
the
report
was SMALL NUMBE SMALLI SMALLI SMALL SMALL
MINUTE_ID
requeste INT R(5) NT NT INT INT
d for
executio
n.
Sequenc
e
STEPSEQUE number TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
NCE for a T R(3) NT T T T
job's
steps.
GUID of
CHAR CHAR CHAR CHAR CHAR CHAR
SESSIONID the user
(32) (32) (32) (32) (32) (32)
session.
GUID of
the
Intellige
nce CHAR CHAR CHAR CHAR CHAR CHAR
SERVERID
Server (32) (32) (32) (32) (32) (32)
processi
ng the
request.
Type of
step. For
a
descripti
on, see
Report
and
Docume
nt Steps,
page TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
STEPTYPE 2185. T R(3) NT T T T
1:
Metadata
object
request
step
2: Close
job
3: SQL
generati
on
4: SQL
executio
n
5:
Analytica
l Engine
server
task
6:
Resoluti
on server
task
7: Report
net
server
task
8:
Element
request
step
9: Get
report
instance
10: Error
message
send
task
11:
Output
message
send
task
12: Find
report
cache
task
13:
Docume
nt
executio
n step
14:
Docume
nt send
step
15:
Update
report
cache
task
16:
Request
execute
step
17: Data
mart
execute
step
18:
Docume
nt data
preparati
on
19:
Docume
nt
formattin
g
20:
Docume
nt
manipula
tion
21: Apply
view
context
22:
Export
engine
23: Find
Intelligen
t Cube
task
24:
Update
Intelligen
t Cube
task
25: Post-
processi
ng task
26:
Delivery
task
27:
Persist
result
task
28:
Docume
nt
dataset
executio
n task
Timesta
mp of
the DATET TIMEST TIMEST TIMEST DATET DATET
STARTTIME
step's IME AMP AMP AMP IME IME
start
time.
Timesta
mp of the
DATETI TIMEST TIMEST TIMEST DATETI DATETI
FINISHTIME step's
ME AMP AMP AMP ME ME
finish
time.
duration
between
last step
finish
and the ER R(10) R R ER ER
next step
start, in
milliseco
nds.
CPU time
used
during
INTEG NUMBE INTEGE INTEGE INTEG INTEG
CPUTIME this step,
ER R(10) R R ER ER
in
milliseco
nds.
FINISHT
IME -
STEPDURA STARTT INTEG NUMBE INTEGE INTEGE INTEG INTEG
TION IME, in ER R(10) R R ER ER
milliseco
nds
Timesta
mp of
when the
record
RECORDTI DATETI TIMEST TIMEST TIMEST DATETI DATETI
was
ME ME AMP AMP AMP ME ME
logged in
the
databas
e,
accordin
g to
database
system
time
(Server
machine
VARCH VARCH VARCH VARCH
SERVERMA name:po VARCH VARCH
AR AR2 AR AR
CHINE rt AR(255) AR(255)
(255) (255) (255) (255)
number)
pair.
GUID of
CHAR CHAR CHAR CHAR CHAR CHAR
PROJECTID the
(32) (32) (32) (32) (32) (32)
project.
GUID of
the
REPOSITO metadat CHAR CHAR CHAR CHAR CHAR CHAR
RYID a (32) (32) (32) (32) (32) (32)
repositor
y.
1: MD Object Request - Requesting an object definition from the project metadata.
2: Close Job - Closing a job and removing it from the list of pending jobs.
3: SQL Generation - Generating SQL that is required to retrieve data, based on schema.
4: SQL Execution - Executing SQL that was generated for the report.
5: Analytical Engine - Applying analytical processing to the data retrieved from the data source.
6: Resolution Server - Loading the definition of an object.
7: Report Net Server - Transmitting the results of a report.
8: Element Request - Browsing attribute elements.
9: Get Report Instance - Retrieving a report instance from the metadata.
10: Error Message Send - Sending an error message.
11: Output Message Send - Sending a message other than an error message.
12: Find Report Cache - Searching or waiting for a report cache.
13: Document Execution - Executing a document.
14: Document Send - Transmitting a document.
15: Update Report Cache - Updating the report cache.
16: Request Execute - Requesting the execution of a report.
18: Document Data Preparation - Constructing a document structure using data from the document's datasets.
19: Document Formatting - Exporting a document to the requested format.
20: Document Manipulation - Applying a user's changes to a document.
22: Export Engine - Exporting a document or report to PDF, plain text, Excel spreadsheet, or XML.
23: Find Intelligent Cube - Locating the cube instance from the Intelligent Cube Manager, when a subset report, or a standard report that uses dynamic caching, is executed.
24: Update Intelligent Cube - Updating the cube instance from the Intelligent Cube Manager, when republishing or refreshing a cube.
25: Post-processing - Reserved for MicroStrategy use.
28: Document Dataset Execution - Waiting for child dataset reports in a document to execute.
STG_IS_REPORT_STATS
Tracks job-level statistics information about every report that Intelligence
Server executes to completion. This table is used when the Basic Statistics
option is selected in the Statistics category of the Project Configuration
Editor. The data load process moves this table's information to the IS_
REPORT_STATS table, which has the same columns and datatypes.
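For example, a sketch like the following (generic SQL; column names as documented below) gives a daily overview of report job volume, average execution time, and how many jobs created a cache:

    SELECT DAY_ID,
           COUNT(*) AS REPORT_JOBS,
           AVG(EXECDURATION) AS AVG_EXEC_MS,
           SUM(CASE WHEN CACHECREATEINDIC = 1 THEN 1 ELSE 0 END) AS CACHES_CREATED
    FROM STG_IS_REPORT_STATS
    GROUP BY DAY_ID
    ORDER BY DAY_ID;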
Day the
report was
TIMEST
DAY_ID requested DATE DATE DATE DATE DATE
AMP
for
execution.
Hour the
report was
TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
HOUR_ID requested
T R(3) NT T T T
for
execution.
Minute the
report was
SMAL NUMBE SMALLI SMALLI SMAL SMAL
MINUTE_ID requested
LINT R(5) NT NT LINT LINT
for
execution.
GUID of
CHAR CHAR CHAR CHAR CHAR CHAR
SESSIONID (I) the user
(32) (32) (32) (32) (32) (32)
session.
GUID of
the
Intelligenc
e Server's
CHAR CHAR CHAR CHAR CHAR CHAR
SERVERID server
(32) (32) (32) (32) (32) (32)
definition
that made
the
request.
Server
machine
name, or
IP address VARC VARCH VARCH VARCH VARC VARC
SERVERMAC
if the HAR AR2 AR AR HAR HAR
HINE
machine (255) (255) (255) (255) (255) (255)
name is
not
available.
For an ad
hoc report,
the
Template
ID is
created on
the fly and
there is no
correspond
ing object
with this
GUID in the
object
lookup
table.
1 if an
embedded
filter was
EMBEDDEDFI SMALL NUMBE SMALLI SMALLI SMALL SMALL
used in the
LTER INT R(5) NT NT INT INT
report,
otherwise
0.
GUID of
the
template.
CHAR CHAR CHAR CHAR CHAR CHAR
TEMPLATEID
For an ad (32) (32) (32) (32) (32) (32)
hoc report,
the
Template
ID is
created on
the fly, and
there is no
correspon
ding object
with this
GUID in
the object
lookup
table.
1 if an
embedded
template
EMBEDDEDT was used SMALL NUMBE SMALLI SMALLI SMALL SMALL
EMPLATE in the INT R(5) NT NT INT INT
report,
otherwise
0.
Job ID of
the parent
document
job, if the
current job
PARENTJOBI is a INTEG NUMBE INTEG INTEG INTEG INTEG
D (I) document ER R(10) ER ER ER ER
job's child.
-1 if the
current job
is not a
document
job's child.
GUID for
the
DBINSTANCEI CHAR CHAR CHAR CHAR CHAR CHAR
physical
D (32) (32) (32) (32) (32) (32)
database
instance.
Database
user ID for
the CHAR CHAR CHAR CHAR CHAR CHAR
DBUSERID
physical (32) (32) (32) (32) (32) (32)
database
instance.
1 if this job
is a
PARENTINDIC document NUMBE SMALLI BYTEIN TINYIN
BIT BIT
ATOR job's child, R(1) NT T T(1)
otherwise
0.
Timestamp
REQUESTRE when the DATE TIMEST TIMEST TIMEST DATE DATE
CTIME request is TIME AMP AMP AMP TIME TIME
received.
Total
queue time
REQUESTQU INTEG NUMBE INTEGE INTEGE INTEG INTEG
of all steps
EUETIME ER R(10) R R ER ER
in this
request.
passed
before the
first step
started.
IME ER R(10) ER ER ER ER
An offset
of the
RequestRe
cTime.
Time
passed
when the
last step is
EXECFINISHT finished. INTEG NUMBE INTEGE INTEGE INTEG INTEG
IME ER R(10) R R ER ER
An offset of
the
RequestRe
cTime.
Number of
SMAL NUMBE SMALLI SMALLI SMAL INTEG
SQLPASSES SQL
LINT R(5) NT NT LINT ER
passes.
Job error
JOBERRORC code. If no INTEG NUMBE INTEGE INTEGE INTEG INTEG
ODE error, the ER R(10) R R ER ER
value is 0.
1 if the job
was
CANCELINDI NUMBE SMALLI BYTEIN TINYI
canceled, BIT BIT
CATOR R(1) NT T NT(1)
otherwise
0.
1 if the
report was
ad hoc,
otherwise
0. This
includes
any
executed
job that is
not saved
in the
project as
ADHOCINDIC a report NUMBE SMALLI BYTEIN TINYIN
BIT BIT
ATOR (for R(1) NT T T(1)
example:
drilling
results,
attribute
element
prompts,
creating
and
running a
report
before
saving it).
Number of
PROMPTINDI SMAL NUMBE SMALLI SMALLI SMAL SMAL
prompts in
CATOR LINT R(5) NT NT LINT LINT
the report.
1 if the
DATAMARTIN NUMBE SMALLI BYTEIN TINYIN
report BIT BIT
DICATOR R(1) NT T T(1)
created a
data mart,
otherwise
0.
1 if the
report was
a result of
ELEMENTLOA NUMBE SMALLI BYTEIN TINYI
an element BIT BIT
DINDIC R(1) NT T NT(1)
browse,
otherwise
0.
1 if the
report was
DRILLINDICA the result NUMBE SMALLI BYTEIN TINYIN
BIT BIT
TOR of a drill, R(1) NT T T(1)
otherwise
0.
1 if the
report was
SCHEDULEIN run from a NUMBE SMALLI BYTEIN TINYI
BIT BIT
DICATOR schedule, R(1) NT T NT(1)
otherwise
0.
1 if the
report
CACHECREAT created a NUMBE SMALLI BYTEIN TINYIN
BIT BIT
EINDIC cache, R(1) NT T T(1)
otherwise
0.
step
priority.
User-
SMALL NUMBE SMALLI SMALLI SMALL SMALL
JOBCOST supplied
INT R(5) NT NT INT INT
report cost.
Number of
FINALRESUL INTEG NUMBE INTEG INTEG INTEG INTEG
rows in the
TSIZE ER R(10) ER ER ER ER
report.
Timestamp
of when the
record was
logged in
the
RECORDTIME DATET TIMEST TIMEST TIMEST DATET DATET
database,
(I) IME AMP AMP AMP IME IME
according
to the
database
system
time.
The error
message
displayed
VARC VARCH VARCH VARCH VARC VARC
ERRORMESS to the user
HAR AR2 AR AR HAR HAR
AGE when an
(4000) (4000) (4000) (4000) (4000) (4000)
error is
encounter
ed.
For
DRILLTEMPLA CHAR CHAR CHAR CHAR CHAR CHAR
MicroStrat
TEUNIT (32) (32) (32) (32) (32) (32)
egy use.
GUID of
the object
that was
drilled
from.
For
MicroStrat
egy use.
CHAR CHAR CHAR CHAR CHAR CHAR
NEWOBJECT GUID of
(32) (32) (32) (32) (32) (32)
the object
that was
drilled to.
For
MicroStrat
egy use.
Enumerati
TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
DRILLTYPE on of the
T R(3) NT T T T
type of
drilling
action
performed.
Total
number of
unique
tables
TOTALTABLE SMAL NUMBE SMALLI SMALLI SMAL INTEG
accessed
ACCESS LINT R(5) NT NT LINT ER
by the
report
during
execution.
Length in
characters
of the SQL
statement.
For
multiple
INTEG NUMBE INTEGE INTEGE INTEG INTEG
SQLLENGTH passes,
ER R(10) R R ER ER
this value
is the sum
of SQL
statement
lengths of
each pass.
Duration of
the report
EXECDURATI execution, INTEG NUMBE INTEG INTEG INTEG INTEG
ON in ER R(10) ER ER ER ER
millisecon
ds.
CPU time
used for
report
INTEG NUMBE INTEGE INTEGE INTEG INTEG
CPUTIME execution,
ER R(10) R R ER ER
in
millisecond
s.
Total
number of
TINYI NUMBE SMALLI BYTEIN TINYI TINYI
STEPCOUNT steps
NT R(3) NT T NT NT
involved in
execution
(not just
unique
steps).
Intelligenc
e Server-
related
actions
EXECACTION that need INTEG NUMBE INTEGE INTEGE INTEG INTEG
S to take ER R(10) R R ER ER
place
during
report
execution.
Intelligenc
e Server-
related
processes INTEG NUMBE INTEG INTEG INTEG INTEG
EXECFLAGS
needed to ER R(10) ER ER ER ER
refine the
report
execution.
1 if a
database
error
DBERRORIND occurred NUMBE SMALLI BYTEIN TINYIN
BIT BIT
IC during R(1) NT T T
execution,
otherwise
0.
in
millisecon
ds, the
user spent
answering
prompts on
the report.
GUID of
the
Intelligent
CUBEINSTAN Cube used CHAR CHAR CHAR CHAR CHAR CHAR
CEID in a Cube (32) (32) (32) (32) (32) (32)
Publish or
Cube Hit
job.
Size, in
KB, of the
Intelligent
Cube in INTEG NUMBE INTEG INTEG INTEG INTEG
CUBESIZE
memory for ER R(10) ER ER ER ER
a Cube
Publish
job.
1 if any
SQL was
executed
SQLEXECINDI NUMBE SMALLI BYTEIN TINYIN
against the BIT BIT
C R(1) NT T T
database,
otherwise
0.
1 if the
report was
EXPORTINDI TINYI NUMBE SMALLI BYTEIN TINYI TINYI
exported,
C NT R(3) NT T NT NT
otherwise
0.
GUID of
REPOSITORYI the CHAR CHAR CHAR CHAR CHAR CHAR
D metadata (32) (32) (32) (32) (32) (32)
repository.
STG_IS_SCHEDULE_STATS
Tracks which reports have been run as the result of a subscription. This
table is used when the Subscriptions option is selected in the Statistics
category of the Project Configuration Editor. The data load process moves
this table's information to the IS_SCHEDULE_STATS table, which has the
same columns and datatypes.
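For example, the following sketch (generic SQL; column names as documented below) shows, per triggering object, how many subscription jobs ran and how many of them hit a cache:

    SELECT TRIGGERID,
           COUNT(*) AS SUBSCRIPTION_JOBS,
           SUM(CASE WHEN HITCACHE = 1 THEN 1 ELSE 0 END) AS CACHE_HITS
    FROM STG_IS_SCHEDULE_STATS
    GROUP BY TRIGGERID
    ORDER BY SUBSCRIPTION_JOBS DESC;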
Day the
job was TIMEST
DAY_ID (I) DATE DATE DATE DATE DATE
requeste AMP
d for
executio
n.
Hour the
job was
requeste TINYIN NUMBE SMALLI TINYIN TINYIN
HOUR_ID (I) BYTEINT
d for T R(3) NT T T
executio
n.
Minute
the job
was
MINUTE_ID SMALL NUMBE SMALLI SMALLI SMALL SMALL
requeste
(I) INT R(5) NT NT INT INT
d for
executio
n.
GUID of
SESSIONID CHAR CHAR CHAR CHAR CHAR CHAR
the user
(I) (32) (32) (32) (32) (32) (32)
session.
GUID for
server CHAR CHAR CHAR CHAR CHAR CHAR
SERVERID
definitio (32) (32) (32) (32) (32) (32)
n.
GUID of
the
TRIGGERID CHAR CHAR CHAR CHAR CHAR CHAR
object
(I) (32) (32) (32) (32) (32) (32)
that
triggered
the
subscrip
tion.
SCHEDULETYPE (I): Type of schedule: 0 if it is a report, 1 if it is a document. (TINYINT / NUMBER(3) / SMALLINT / BYTEINT / TINYINT / TINYINT)
0 if the
job does
not hit
TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
HITCACHE the
T R(3) NT T T T
cache, 1
if it
does.
Timesta
mp of the
DATETI TIMEST TIMEST TIMEST DATETI DATETI
STARTTIME schedule
ME AMP AMP AMP ME ME
start
time.
Timesta
mp of
when the
RECORDTI record DATET TIMEST TIMEST TIMEST DATET DATET
ME (I) was IME AMP AMP AMP IME IME
logged
in the
databas
e,
accordin
g to
databas
e system
time.
(Server
machine
VARCH VARCH VARCH
SERVERMA name:po VARCHA VARCHA VARCHA
AR AR AR
CHINE rt R2(255) R(255) R(255)
(255) (255) (255)
number)
pair.
GUID of
PROJECTID CHAR CHAR CHAR CHAR CHAR CHAR
the
(I) (32) (32) (32) (32) (32) (32)
project.
GUID of
the
REPOSITO CHAR CHAR CHAR CHAR CHAR CHAR
metadata
RYID (32) (32) (32) (32) (32) (32)
repositor
y.
STG_IS_SESSION_STATS
Logs every Intelligence Server user session. This table is used when the
Basic Statistics option is selected in the Statistics category of the Project
Configuration Editor. The data load process moves this table's information
to the IS_SESSION_STATS table, which has the same columns and
datatypes.
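For example, to get a quick view of daily login activity, a sketch such as the following can be run against this table (generic SQL; column names as documented below):

    SELECT DAY_ID,
           COUNT(*) AS SESSIONS,
           COUNT(DISTINCT CLIENTMACHINE) AS DISTINCT_CLIENT_MACHINES
    FROM STG_IS_SESSION_STATS
    GROUP BY DAY_ID
    ORDER BY DAY_ID;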
Column: Description (SQL Server / Oracle / DB2 / Teradata / Sybase / MySQL datatypes)
Day the
session TIMEST
DAY_ID DATE DATE DATE DATE DATE
was AMP
started.
Hour the
session TINYIN NUMBE SMALLI BYTEIN TINYIN TINYIN
HOUR_ID
was T R(3) NT T T T
started.
Minute
the
SMALL NUMBE SMALLI SMALLI SMALL SMALL
MINUTE_ID session
INT R(5) NT NT INT INT
was
started.
GUID of
SESSIONID CHAR CHAR CHAR CHAR CHAR CHAR
the user
(I) (32) (32) (32) (32) (32) (32)
session.
Server
CHAR CHAR CHAR CHAR CHAR CHAR
SERVERID definition
(32) (32) (32) (32) (32) (32)
GUID.
(Server
machine VARCH VARCH VARCH VARCH
SERVERMA VARCH VARCH
name:por AR AR2 AR AR
CHINE AR(255) AR(255)
t number) (255) (255) (255) (255)
pair.
Client
machine
name, or
IP
VARCH VARCH VARCH VARCH
CLIENTMAC address if VARCH VARCH
AR AR2 AR AR
HINE the AR(255) AR(255)
(255) (255) (255) (255)
machine
name is
not
available.
Source
from
which the
session
originate
d:
0:
Unknown
2:
Intelligen
ce Server
Administr
ator
3: Web
Administr
ator
4:
Intelligen
ce Server
5:
Project
Upgrade
6: Web
7:
Schedule
r
8:
Custom
Applicati
on
9:
Narrowc
ast
Server
10:
Object
Manager
11:
ODBO
Provider
12:
ODBO
Cube
Designer
13:
Comman
d
Manager
14:
Enterpris
e
Manager
15:
Comman
d Line
Interface
16:
Project
Builder
17:
Configur
ation
Wizard
18: MD
Scan
19:
Cache
Utility
20: Fire
Event
21:
MicroStr
ategy
Java
admin
clients
22:
MicroStr
ategy
Web
Services
23:
MicroStr
ategy
Office
24:
MicroStr
ategy
Tools
25:
Portal
Server
26:
Integrity
Manager
27:
Metadata
Update
28:
Reserved
for
MicroStr
ategy
use
29:
Schedule
r for
Mobile
30:
Reposito
ry
Translati
on
Wizard
31:
Health
Center
32: Cube
Advisor
33:
Operatio
ns
Manager
34:
Desktop
35:
Library
36:
Library
iOS
37:
Workstati
on
39:
Library
Android
Timestam
p of when
the
record
was
logged in
RECORDTIM DATETI TIMEST TIMEST TIMEST DATETI DATETI
the
E (I) ME AMP AMP AMP ME ME
database,
according
to
database
system
time.
Web
server
machine
from VARCH VARCH VARCH VARCH
WEBMACHI VARCH VARCH
which a AR AR2 AR AR
NE AR(255) AR(255)
web (255) (255) (255) (255)
session
originate
s.
Timestam
p of when
CONNECTTI DATETI TIMEST TIMEST TIMEST DATETI DATETI
the
ME (I) ME AMP AMP AMP ME ME
session
is
opened.
Timesta
mp of
DISCONNEC DATET TIMEST TIMEST TIMEST DATET DATET
when the
TTIME IME AMP AMP AMP IME IME
session
is closed.
GUID of
the
REPOSITOR CHAR CHAR CHAR CHAR CHAR CHAR
metadata
YID (32) (32) (32) (32) (32) (32)
repositor
y.
STG_MSI_STATS_PROP
For MicroStrategy use. Provides information about the statistics database
properties. Intelligence Server uses this table to initialize statistics logging.
Column Name: Description
PROP_NAME: Property name, such as statistics database version, upgrade script, and so on.
PROP_VAL: Property value, such as statistics database version number, time that an upgrade script was run, and so on.
ENTERPRISE MANAGER DATA DICTIONARY
l For information about configuring Enterprise Manager, using it to help tune the MicroStrategy system, and setting up project documentation so it is available to networked users, see the Enterprise Manager Help.
Temporary tables are created and used by the data loading process when
data is migrated from the statistics tables to the Enterprise Manager
warehouse. These temporary tables are the following:
l IS_REP_SQL_TMP
l IS_REP_STEP_TMP
l IS_SESSION_TMP1
l IS_PROJECT_FACT_1_TMP
l EM_IS_LAST_UPD_1
l EM_IS_LAST_UPD_2
Fact Tables
l CT_EXEC_FACT
l CT_MANIP_FACT
l IS_CONFIG_PARAM_FACT
l IS_CUBE_ACTION_FACT
l IS_DOC_FACT
l IS_DOC_STEP_FACT
l IS_INBOX_ACT_FACT
l IS_MESSAGE_FACT
l IS_PERF_MON_FACT
l IS_PR_ANS_FACT
l IS_PROJECT_FACT_1
l IS_REP_COL_FACT
l IS_REP_FACT
l IS_REP_SEC_FACT
l IS_REP_SQL_FACT
l IS_REP_STEP_FACT
l IS_SESSION_FACT
l IS_SESSION_MONITOR
CT_EXEC_FACT
Contains information about MicroStrategy Mobile devices and
report/document executions and manipulations. Created as a view based on
columns in the source tables listed below.
Source Tables
l CT_DEVICE_STATS: Statistics table containing information about the
mobile client and the mobile device
CT_DEVICE_INST_ID: Unique installation ID of the mobile app.
CT_STATE_CHANGE_TS: Date and time when STATECOUNTER is incremented.
CT_OS: Operating system the app is installed on, such as iOS or Android.
IS_SESSION_ID: GUID of the session that executed the request. This should be the same as the SESSIONID for this request in IS_REP_FACT.
EM_APP_SRV_MACHINE: Name and port number of the Intelligence Server machine where the mobile document execution is taking place.
EM_MOB_SRV_MACHINE: Name and port number of the Mobile Server machine where the mobile document execution is taking place.
CT_REQ_TS: Time when the user submits a request to the mobile app.
CT_REQ_REC_TM_MS: Difference in milliseconds between CT_REQ_TS and CT_REC_TS.
CT_RENDER_ST_TS: Time when the mobile app begins rendering.
CT_RENDER_FN_TS: Time when the mobile app finishes rendering.
1: User execution
2: Pre-cached execution
Whether a cache was hit during the execution, and if so, what type
of cache hit occurred:
0: No cache hit
CT_CACHE_HIT_
IND_ID 1: Intelligence Server cache hit
55: Document
CT_NETWORK_TYPE: 3G, WiFi, LTE, 4G
CT_BANDWIDTH_KBPS: Estimated network bandwidth, in kbps.
CT_AVG_MANIP_RENDER_TM_MS: Average rendering time for each manipulation.
EM_RECORD_TS: Date and time when this information was written to the statistics database.
CT_REQ_RECEIVED_FLAG: Whether the manipulation request was received.
CT_REQ_RENDERED_FLAG: Whether the manipulation was completed.
CT_REQ_HAS_DEVICE_FLAG: Whether the manipulation request was made by a mobile app.
IS_PROJ_NAME: Name of the project used for the request or if it is a deleted project.
EM_USER_NAME: Name of the user who made the request or if it is a deleted user.
CT_MANIP_FACT
Contains information about MicroStrategy Mobile devices and
report/document manipulations. Created as a view based on columns in the
source tables listed below.
Source Tables
l CT_MANIP_STATS: Statistics table containing information about the
report or document manipulations
CT_DEVICE_INST_ID: Unique installation ID of the mobile app.
EM_APP_SRV_MACHINE: Name and port number of the Intelligence Server machine where the manipulation is taking place.
IS_MANIP_SEQ_ID: The order in which the manipulations were made in a session. For each manipulation, the mobile client returns a row, and the value in this column increments for each row.
Type of manipulation:
0: Unknown
1: Selector
2: Panel Selector
4: Change Layout
5: Change View
6: Sort
7: Page By
IS_MANIP_VALUE_SEQ: If the value for IS_MANIP_VALUE is too long to fit in one row, this manipulation is spread over multiple rows, and this value is incremented.
CT_MANIP_FN_TS: Time when the mobile app finished processing the manipulation and forwarded it for rendering.
EM_RECORD_TS: Date and time when this information was written to the statistics database.
IS_CONFIG_PARAM_FACT
Contains information about Intelligence Server and project configuration
settings.
PARAM_ID
IS_CONFIG_
Value of the configuration setting.
PARAM_VALUE
IS_CUBE_ACTION_FACT
Contains information about Intelligent Cube manipulations. Created as a
view based on columns in the source tables listed below.
Source Tables
l EM_MD: Lookup table for metadata
Date and time when this information was written to the statistics
EM_RECORD_TS
database.
GUID of the session that started the action against the Intelligent
IS_SESSION_ID
Cube.
IS_CUBE_REP_ID Integer ID of the Intelligent Cube report that was published, if any
1: Cube Publish
4: Cube Append
5: Cube Update
6: Cube Delete
7: Cube Destroy
IS_REP_ID Integer ID of the report that hit the Intelligent Cube, if any.
IS_DOC_FACT
Contains information on the execution of a document job.
Primary key:
l DAY_ID2
l IS_SESSION_ID
l IS_DOC_JOB_SES_ID
l IS_DOC_JOB_ID
l IS_DOC_CACHE_IDX
Source Tables
l IS_DOCUMENT_STATS: Statistics table containing information about
document executions
IS_DOC_JOB_SES_ID: GUID of the session that created the cache if a cache was hit in this execution; otherwise, current session (default behavior).
IS_CUBE_EXEC_ST_TS: Date and time when cube execution was started by Intelligence Server.
IS_CUBE_EXEC_FN_TS: Date and time when cube execution was finished by Intelligence Server.
IS_DOC_EXEC_ST_TS: Timestamp of the execution start.
IS_DOC_EXEC_TM_MS: Execution duration in milliseconds.
IS_DOC_NBR_REPORTS: Number of reports contained in the document job execution.
IS_DOC_NBR_PU_STPS: Number of steps processed in the document job execution.
IS_DOC_NBR_PROMPTS: Number of prompts in the document job execution.
IS_DOC_STEP_FACT
Contains information on each processing step of a document execution.
Created as a view based on columns in the source tables listed below.
Source Tables
l IS_DOC_STEP_STATS: Statistics table containing information about
processing steps of document execution
IS_DOC_JOB_SES_ID: GUID of the session that created the cache if a cache was hit in this execution; otherwise, current session (default behavior).
IS_DOC_STEP_SEQ_ID: Integer ID of the document job execution step.
IS_DOC_STEP_TYP_ID: Integer ID of the document job execution step type.
IS_DOC_EXEC_ST_TS: Timestamp of the execution start.
IS_DOC_EXEC_FN_TS: Timestamp of the execution finish.
IS_DOC_CPU_TM_MS: CPU duration in milliseconds.
IS_DOC_EXEC_TM_MS: Execution duration in milliseconds.
IS_INBOX_ACT_FACT
Contains information about History List manipulations. Created as a view
based on columns in the source tables listed below.
Source Tables
l IS_INBOX_ACT_STATS: Statistics table containing information about
History List manipulations
IS_SESSION_ID: GUID of the session that started the History List manipulation.
EM_APP_SRV_MACHINE: Name and port number of the Intelligence Server machine where the manipulation is taking place.
IS_PROJ_ID: GUID of the project where the History List message is mapped.
Type of manipulation:
IS_HL_MESSAGE_ID: GUID of the History List message being acted on.
IS_HL_MESSAGE_DISP: User-defined name of the History List message. Blank unless the user has renamed the History List message.
IS_CREATION_TS: Date and time when the History List message was created.
IS_REP_JOB_ID: Report job ID for the History List Message Content Request. Blank if no job was executed or if a document was executed.
IS_SUBSCRIPTION_ID: ID of the subscription that invoked the manipulation.
EM_RECORD_TS: Date and time when this information was written to the statistics database.
IS_MESSAGE_FACT
Records all messages sent through Distribution Services.
Source Table
l IS_MESSAGE_STATS: Statistics table containing information about sent
messages
3: Report
55: Document
Type of delivery:
1: Email
2: File
4: Printer
8: Custom
20: Client
40: Cache
128: Mobile
1: Contact
2: Contact group
IS_CONTACT_TYPE_ID
4: MicroStrategy user
5: Count
IS_RCPT_CONTACT_
Name of the contact recipient.
NAME
IS_PERF_MON_FACT
Contains information about job performance.
Source Table
l IS_PERF_MON_STATS: Statistics table containing information about job
performance
EM_APP_SRV_MACHINE: The name of the Intelligence Server machine logging the statistics.
IS_COUNTER_INSTANCE: MicroStrategy use.
IS_COUNTER_NAME: The name of the performance counter.
IS_COUNTER_VALUE: The value of the performance counter.
IS_PR_ANS_FACT
Contains information about prompt answers. Created as a view based on
columns in the source tables listed below.
Source Tables
l EM_MD: Lookup table for metadata
PR_LOC_TYPE: COM object type of the object that the prompt resides in.
PR_LOC_DESC: Object name of the object that the prompt resides in.
EM_APP_SRV_MACHINE: The Intelligence Server machine name and IP address.
IS_REPOSITORY_ID: Integer ID of the metadata repository.
IS_PROJECT_FACT_1
Represents the number of logins to a project in a day by user session and
project.
Source Tables
l IS_PROJ_SESSION_STATS: Statistics table containing information on
session activity by project
EM_APP_SRV_MACHINE: The name of the Intelligence Server machine logging the statistics.
IS_DISCONNECT_TS: Timestamp of the end of the session (logout). NULL if the session is still open at the time of Enterprise Manager data load.
IS_SESSION_TM_SEC: Duration within the hour, in seconds, of the session.
IS_REPOSITORY_ID: Integer ID of the metadata repository.
IS_REP_COL_FACT
Used to analyze which data warehouse tables and columns are accessed by
MicroStrategy report jobs, by which SQL clause they are accessed
(SELECT, FROM, and so on), and how frequently they are accessed. This
fact table is at the level of a Report Job rather than at the level of each SQL
pass executed to satisfy a report job request. The information available in
this table can be useful for database tuning. Created as a view based on
columns in the source tables listed below.
Source Tables
l IS_REP_COL_STATS: Statistics table containing information about
column-table combinations used in the SQL during report executions
IS_COL_NAME: Name of the column in the database table that was used.
SQL_CLAUSE_TYPE_ID: Integer ID of the type of SQL clause (SELECT, FROM, WHERE, and so on).
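For database tuning, a sketch like the following (generic SQL; the two columns shown are documented above, and any date or project filters you add depend on the other columns in your fact table) ranks the most frequently referenced warehouse columns by SQL clause:

    SELECT IS_COL_NAME,
           SQL_CLAUSE_TYPE_ID,
           COUNT(*) AS OCCURRENCES
    FROM IS_REP_COL_FACT
    GROUP BY IS_COL_NAME, SQL_CLAUSE_TYPE_ID
    ORDER BY OCCURRENCES DESC;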
IS_REP_FACT
Contains information about report job executions.
Primary key:
l DAY_ID2
l IS_SESSION_ID
l IS_REP_JOB_SES_ID
l IS_REP_JOB_ID
l IS_DOC_JOB_ID
l IS_REP_CACHE_IDX
Source Tables
l IS_CACHE_HIT_STATS: Statistics table containing information about job
executions that hit a cache
GUID of the session that created the cache if a cache was hit in
IS_REP_JOB_SES_ID
this execution; otherwise, current session (default behavior).
Integer ID of the cache hit index; similar to Job ID but only for
IS_REP_CACHE_IDX
cache hits. -1 if no cache hit.
IS_CACHE_CREATE_
Indicates whether a cache was created.
ID
ID
Otherwise, -1.
IS_REP_EXEC_ST_
Timestamp of the execution start.
TS
IS_REP_EXEC_FN_
Timestamp of the execution finish.
TS
IS_REP_EXEC_TM_
Execution duration in milliseconds.
MS
IS_REP_ELAPS_TM_ Difference between start time and finish time; includes time for
MS prompt responses.
IS_REP_NBR_SQL_
Number of SQL passes.
PAS
IS_REP_RESULT_
Number of rows in the result set.
SIZE
IS_REP_SQL_
Not yet available. Number of characters.
LENGTH
IS_REP_NBR_
Not yet available. Number of tables.
TABLES
IS_REP_NBR_PU_
Number of steps processed in the execution.
STPS
IS_REP_NBR_
Number of prompts in the report execution.
PROMPTS
IS_DB_ERROR_IND_
Indicates whether the database returned an error.
ID
IS_ELEM_LOAD_ID Indicates whether the job was the result of an element load.
IS_SEC_FILT_IND_ID Indicates whether the job had a security filter associated with it.
DRILLFROM_OT_ID Integer ID for the object type of the object that is drilled from.
DRILLTO_OT_ID Integer ID for the object type of the object that is drilled to.
Integer flag indicating the type of drill performed (for example, drill
DRILLTYPE
to template, drill to attribute, and so on).
IS_CACHE_JOB_ID Integer ID of the job that created the cache on Intelligence Server.
IS_REP_PMT_ANS_TS: Date and time when the prompt was answered.
IS_SQL_EXEC_IND_ID: Integer ID indicating whether this job generated SQL and hit the database.
IS_CUBE_INST_ID GUID of the Intelligent Cube object (if job hits it).
IS_CUBE_SIZE Size of the Intelligent Cube the job hits (if applicable).
IS_REP_PR_ANS_ Time in milliseconds of how long the user took to answer the
TM_MS prompt.
IS_REP_SEC_FACT
Contains information about security filters applied to report jobs. Created as
a view based on columns in the source tables listed below.
Source Tables
l IS_REP_FACT: Contains information about report job executions
IS_REP_JOB_SES_ GUID of the session that created the cache if a cache was hit in this
ID execution; otherwise, current session (default behavior).
IS_REP_SEC_FILT_
Integer ID of the security filter.
ID
IS_REPOSITORY_
Integer ID of the metadata repository.
ID
IS_REP_SQL_FACT
Contains the SQL that is executed on the warehouse by report job
executions. Created as a view based on columns in the source tables listed
below.
Source Tables
l IS_REP_FACT: Contains information about report job executions
IS_REP_JOB_SES_
GUID of the current session object.
ID
IS_REP_EXEC_ST_
Timestamp of the execution start.
TS
IS_REP_EXEC_FN_
Timestamp of the execution finish.
TS
IS_REP_EXEC_TM_
Execution duration in milliseconds.
MS
IS_REP_SQL_
SQL statement.
STATEM
IS_REP_SQL_
Length of SQL statement.
LENGTH
IS_REP_NBR_
Number of tables accessed by SQL statement.
TABLES
IS_REP_DB_ERR_
Error returned from the database; NULL if no error.
MSG
IS_REP_STEP_FACT
Contains information about the processing steps through which the report
execution passes. Created as a view based on columns in the source tables
listed below.
Source Tables
l IS_REP_STEP_STATS: Statistics table containing information about
report job processing steps
IS_REP_JOB_SES_
GUID of the current session object.
ID
IS_REP_STEP_
Integer ID of the sequence of the step.
SEQ_ID
IS_REP_STEP_
Integer ID of the type of step.
TYP_ID
IS_REP_EXEC_ST_
Timestamp of the execution start.
TS
IS_REP_EXEC_FN_
Timestamp of the execution finish.
TS
IS_REP_CPU_TM_
CPU duration in milliseconds.
MS
IS_REP_EXEC_TM_
Execution duration in milliseconds.
MS
IS_SESSION_FACT
Enables session concurrency analysis. Keeps data on each session for each
hour of connectivity.
0: Unknown
1: MicroStrategy Developer
6: MicroStrategy Web
7: MicroStrategy Scheduler
8: Custom application
IS_SESSION_MONITOR
For MicroStrategy use. A view table that provides an overview of recent
session activity.
Lookup Tables
0: Unknown
1: User
2: Pre-cached
3: Application recovery
CT_EXEC_TYPE
4: Subscription cache pre-loading
6: Report queue
8: Back button
0: Unknown
1: Selector
CT_MANIP_TYPE
2: Panel Selector
3: Action Selector
4: Change Layout
5: Change View
6: Sort
7: Page By
8: Information Window
9: Annotations
DT_MONTH_OF_
Lookup table for Months of the Year in the Date hierarchy.
YR
DT_QUARTER_
Lookup table for the Quarters of the Year in the Date hierarchy.
OF_YR
DT_WEEKDAY Lookup table for the Days of the Week in the Date hierarchy.
DT_
Lookup table for Weeks of the Year in the Date hierarchy.
WEEKOFYEAR
EM_APP_SRV_
Lookup table for Intelligence Server machines used in statistics.
MACHINE
EM_CLIENT_
Lookup table for Client Machines used in the statistics.
MACHINE
SOURCE Server.
EM_DB_USER Lookup table for the database users used in the statistics.
0: Ready
1: Executing
2: Waiting
3: Completed
4: Error
EM_JOB_
STATUS 5: Cancelled
(Deprecated) 6: Stopped
EM_
Provides information about the projects being monitored and when the
MONITORED_
first and last data loads occurred.
PROJECTS
EM_USR_GP Provides descriptive information about the user groups being tracked.
EM_WEB_SRV_
Lookup table for the Web Server Machines used in the statistics.
MACHINE
IS_CACHE_
Lookup table for the Cache Creation indicator.
CREATION_IND
0: Reserved
IS_CACHE_HIT_
1: Server cache or no cache hit
TYPE
2: Device cache
IS_CANCELLED_IND: Lookup table for the Canceled indicator.
IS_CHILD_JOB_IND: Lookup table for the Child Job indicator.
IS_CONTACT_TYPE: Lookup table for the type of contact delivered to through Distribution Services.
IS_CUBE_HIT_IND: Lookup table for the Cube Hit indicator.
IS_DATA_TYPE
3: Report
55: Document
IS_DATAMART_IND: Lookup table for the Data Mart indicator.
IS_DB_ERROR_IND: Lookup table for the Database Error indicator.
IS_DELIVERY_STATUS_IND: Lookup table for the Delivery Status indicator.
IS_DELIVERY_TYPE: Lookup table for the Distribution Services delivery type.
IS_DOC_STEP_TYPE: Lookup table for the step types in document execution. For a list and explanation of values, see Lookup Tables, page 2253.
IS_DOCTYPE_IND: Indicator lookup table for document or dashboard type. Types include:
-1: Unknown
0: HTML document
IS_ELEM_LOAD_IND: Lookup table for the Element Load indicator.
IS_HIER_DRILL_IND: Lookup table for the Drillable Hierarchy indicator.
IS_JOB_PRIORITY_TYPE: Lookup table for the Job Priority type.
IS_PRIORITY_MAP: Indicator lookup table for priority maps.
IS_REP_SQL_PASS_TYPE: Lookup table for the SQL pass types of report execution.
IS_REP_STEP_TYPE: Lookup table for the step types of report execution. For a list and explanation of values, see Lookup Tables, page 2253.
0: Reserved
1: Base Report
0: Reserved: Ad hoc reports. May include other reports that are not
persisted in the metadata at the point of execution.
IS_SCHEDULE_IND: Lookup table for the Schedule indicator.
IS_SEC_FILT_IND: Lookup table for the Security Filter indicator.
IS_SQL_CLAUSE_TYPE: Lookup table for SQL clause types; used to determine which SQL clause (SELECT, WHERE, GROUP BY, and so on) a particular column was used in during a report execution.
1: Select: Column was used in the SELECT clause but was not aggregated, nor does it appear in a GROUP BY clause. For example, the a11.Report column in "select a11.Report from LU_REPORT a11".
2: Select Group By: Column was used in the GROUP BY clause. For example, the a11.Report column in "select a11.Report, sum(a11.Profit) from LU_REPORT a11 group by a11.Report".
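The two values above can be illustrated with the guide's own LU_REPORT example. This is a sketch only; the LU_REPORT table and the a11 alias are illustrative and are not part of the statistics schema:

    -- Clause type 1 (Select): a11.Report is selected but neither aggregated nor grouped.
    select a11.Report
    from LU_REPORT a11;

    -- Clause type 2 (Select Group By): a11.Report appears in the GROUP BY clause.
    select a11.Report, sum(a11.Profit)
    from LU_REPORT a11
    group by a11.Report;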
Transformation Tables
DT_MONTH_YTD Transformation table to calculate the Year to Date values for Month.
TM_HOUR_DTH Transformation table to calculate the Hour to Day values for Hour.
Not all steps are applicable to all types of reports. For example, if you are
not using Intelligent Cubes, those steps are skipped.
1: MD Object Request. The Object Server component in Intelligence Server requests the objects necessary for the report.
3: SQL Generation. The SQL Engine generates the SQL to be executed against the data warehouse.
4: SQL Execution. The Query Engine submits the generated SQL to the data warehouse, and receives the result.
5: Analytical Engine. The Analytical Engine applies additional processing steps to the data retrieved from the warehouse.
6: Resolution Server. The Resolution Server uses the report definition to retrieve objects from the Object Server.
7: Report Net Server. The Report Net Server processes report requests and sends them to the Report Server.
8: Element Request. The Resolution Server works with the Object Server and Element Server to resolve prompts for report requests.
9: Get Report Instance. Intelligence Server receives the report instance from the Report Server.
10: Error Message Send. If an error occurs, Intelligence Server sends a message to the user, and logs the error.
11: Output Message Send. When the report finishes executing, the output data is sent to the client.
13: Document Execution. Intelligence Server executes the datasets needed for the document.
14: Document Send. Once a document is executed, Intelligence Server sends the output to the client (such as MicroStrategy Developer or Web).
15: Update Report Cache. Once a report is executed, the Report Server writes the data to the report cache.
16: Request Execute. The client (such as MicroStrategy Developer or Web) requests the execution of a report or document.
18: Document Data Preparation. Intelligence Server prepares the document data, performing tasks such as dataset joins, where applicable.
19: Document Formatting. Intelligence Server combines the data for the document with the structure, and formats the output.
20: Document Manipulation. Intelligence Server applies the user's manipulations to a document.
22: Export Engine. The Export Engine formats a report or document for export as a PDF, Excel workbook, or XML.
23: Find Intelligent Cube. The SQL Engine matches a view report, or a report that uses dynamic sourcing, with the corresponding Intelligent Cube.
24: Update Intelligent Cube. The Query Engine runs the SQL required to refresh the data in the Intelligent Cube.
25: Post-processing Task. Reserved for MicroStrategy use.
28: Document Dataset Execution. The document is waiting for its dataset report jobs to finish executing.
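As a rough sketch of how these step types are typically analyzed (it assumes direct SQL access to the statistics warehouse and a descriptive column on the IS_REP_STEP_TYPE lookup table; the STEP_TYPE_DESC column name and the join key on the lookup side are assumptions, not documented above):

    SELECT f.IS_REP_STEP_TYP_ID,
           t.STEP_TYPE_DESC,                                 -- assumed descriptor column on the lookup
           AVG(f.IS_REP_EXEC_TM_MS) AS avg_exec_ms,
           AVG(f.IS_REP_CPU_TM_MS)  AS avg_cpu_ms
    FROM IS_REP_STEP_FACT f
    JOIN IS_REP_STEP_TYPE t
      ON t.IS_REP_STEP_TYP_ID = f.IS_REP_STEP_TYP_ID         -- join key assumed to match the fact column
    GROUP BY f.IS_REP_STEP_TYP_ID, t.STEP_TYPE_DESC
    ORDER BY avg_exec_ms DESC;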
Relationship Tables
IS_USR_GP_USR_GP Relationship table between User Group and User Group (Parent).
EM_IS_LAST_UPDATE: Provides the Data Loading process with a working window that identifies the period during which data should be moved into production area tables.
EM_ITEM: Defines all items in each component of the MicroStrategy product line being monitored. When a new item is added to a component, it can be entered in this table for monitoring, without any change to the migration code. This table also specifies the item's object type according to server and the abbreviation used in the lookup table name.
EM_ITEM_PROPS: Identifies properties being tracked on a given item for a given component. Examples: Attribute Number of Parents, Hierarchy Drill Enabled.
EM_LOG: Stores logging information for Enterprise Manager data loads. The logging option is enabled from the Enterprise Manager console, Tools menu, Options selection.
EM_RELATE_ITEM: Contains a list of many-to-many relationship tables and the MicroStrategy items they relate.
EM_SQL: Provides the SQL necessary to insert, update, and delete a row from the lookup item table once the necessary information from the component API is available. If the SQL must be changed, make the change in this table (no changes in the code are necessary). This table also provides the SQL used to transform the logged statistics into the lookup tables.
Cache Creation
Indicates whether an execution has created a cache.
Indicator
Configuration Object
Indicates whether a configuration object exists.
Exists Status
Configuration
Lists all configuration parameter types.
Parameter Value Type
Delivery Status
Indicates whether a delivery was successful.
Indicator
Lists the object from which a user drilled when a new report was
Drill from Object
run because of a drilling action.
Lists the object to which a user drilled when a new report was run
Drill to Object
because of a drilling action.
Execution Type Indicates how the content was requested, such as User
Indicator Execution, Pre-Cached, Application Recovery, and so on.
Job ErrorCode Lists all the possible errors that can be returned during job
executions.
Object Creation Date Indicates the date on which an object was created.
Object Creation
Indicates the week of the year in which an object was created.
Week of year
Object Modification
Indicates the date on which an object was last modified.
Date
Object Modification Indicates the week of the year in which an object was last
Week of year modified.
Prompt Answer Indicates whether a prompt answer was required for the job
Required execution.
Report Job SQL Pass Lists the types of SQL passes that the Intelligence Server
Type generates.
Report Job Step Type Lists all possible steps of report job execution.
Report Type Indicates the type of a report, such as XDA, relational, and so on.
Report/Document
Indicates whether the execution was a report or a document.
Indicator
Security Filter Indicator Indicates whether a security filter was used in the job execution.
SQL Clause Type Lists the various SQL clause types used by the SQL Engine.
SQL Execution
Indicates whether SQL was executed in the job execution.
Indicator
Attribute
Function
name
Configuration Object
Lists the owners of configuration objects.
Owner
Intelligence Server
Lists all Intelligence Server definitions.
Definition
User Group (Parent) Lists all user groups that are parents of other user groups.
Attribute
Function
name
Calendar
Lists every calendar week, beginning with 2000-01-01, as an integer.
Week
Lists all the minutes in an hour. For example, if the hour specified is 10
Minute AM - 11 AM, lists minutes as 10.30 AM - 10.31 AM, 10.32 AM - 10.33 AM,
and so on.
Quarter of
Lists all quarters of the year.
Year
Lists all weeks in all years, beginning in 2000. Weeks in 2000 are
Week of Year represented as a number ranging from 200001 to 200053, weeks in 2001
are represented as a number ranging from 200101 to 200153, and so on.
DP Average Execution Duration Metric of the average duration of all document job
per Job (hh:mm:ss) executions.
DP Average Queue Duration per Metric of the average duration of all document job
Job (hh:mm:ss) executions waiting in the queue.
DP Average Queue Duration per Metric of the average duration, in seconds, of all
Job (secs) document job executions waiting in the queue.
DP Execution Duration
Metric of the duration of a document job's execution.
(hh:mm:ss)
DP Number of Jobs (IS_DOC_ Metric of the number of document jobs that were
FACT) executed.
DP Number of Jobs with Error Metric of the number of document jobs that failed.
DP Percentage of Jobs with Metric of the percentage of document jobs that hit a
Cache Hit cache.
DP Percentage of Jobs with Error Metric of the percentage of document jobs that failed.
Document Job Step Type Indicates the type of step for a document job.
DP Average Execution Duration Metric of the average duration of all document job
per Job (hh:mm:ss) executions.
DP Average Queue Duration per Metric of the average duration of all document job
Job (hh:mm:ss) executions waiting in the queue.
DP Average Queue Duration per Metric of the average duration, in seconds, of all
Job (secs) document job executions waiting in the queue.
DP Execution Duration
Metric of the duration of a document job's execution.
(hh:mm:ss)
Displays the timestamp of the end of the data load process for
Data Load Finish Time
the projects that are being monitored.
Data Load Project Lists all projects that are being monitored.
Lists the timestamp of the start of the data load process for the
Data Load Start Time
projects that are being monitored.
Attribute or metric
Function
name
HL Days Since Last Metric of the number of days since the last request was made
Action: Request for the contents of a message.
HL Last Action Date: Any Metric of the date and time of the last action performed on a
Action message such as read, deleted, marked as read, and so on.
HL Last Action Date: Metric of the date and time of the last request made for the
Request contents of a message.
HL Number of Actions with Metric of the number of actions on a message that resulted in
Errors an error.
HL Number of Document Metric of the number of document jobs that result with
Jobs messages.
HL Number of Messages
Metric of the number of messages that resulted in an error.
with Errors
HL Number of Report Metric of the number of report jobs that result from messages.
Jobs
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine message.
Report Job Indicates the job ID of the report included in the message.
User Indicates the user who manipulated the History List message.
Intelligence Server
Indicates the Intelligence Server processing the request.
Machine
Attribute or metric
Function
name
Intelligent Cube Action Metric of the duration, in seconds, for an action that was
Duration (secs) performed on the Intelligent Cube.
Intelligent Cube Action Indicates the type of action taken on the Intelligent Cube such
Type as cube publish, cube view hit, and so on.
Number of Intelligent
Metric of how many times an Intelligent Cube was published.
Cube Publishes
Number of Intelligent
Metric of how many times an Intelligent Cube was refreshed.
Cube Refreshes
Number of Intelligent
Metric of how many times an Intelligent Cube was republished.
Cube Republishes
Number of Jobs with Metric of how many job executions used an Intelligent Cube.
Attribute or metric
Function
name
Counter Instance Indicates the instance ID of the counter, for MicroStrategy use.
Performance Monitor Indicates the name of the performance counter and its value
Counter type.
Attribute or metric
Function
name
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine job.
Prompt Answer Indicates the answers for the prompt in various instances.
Prompt Answer Required Indicates whether an answer to the prompt was required.
Prompt Instance Answer Indicates the answer of an instance of a prompt in a report job.
Indicates the type of the object in which the prompt is stored, such as
Prompt Location Type
filter, template, attribute, and so on.
Indicates the title of the prompt (the title the user sees when
Prompt Title
presented during job execution).
Report Job Indicates the report job that used the prompt.
RP Number of Jobs
Metric of how many report jobs had a specified prompt answer
Containing Prompt
value.
Answer Value
RP Number of Jobs with Metric of how many report jobs had a prompt that was not
Unanswered Prompts answered.
Attribute or metric
Function
name
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine report.
RP Average Elapsed
Metric of the average difference between start time and finish
Duration per Job
time (including time for prompt responses) of all report job
(hh:mm:ss) (IS_REP_
executions.
FACT)
RP Average Elapsed Metric of the average difference between start time and finish
Duration per Job (secs) time (including time for prompt responses) of all report job
(IS_REP_FACT) executions.
RP Average Execution
Duration per Job Metric of the average duration of all report job executions.
(hh:mm:ss) (IS_REP_ Includes time in queue and execution for a report job.
FACT)
RP Average Execution Metric of the average duration, in seconds, of all report job
Duration per Job (secs) executions. Includes time in queue and execution for a report
(IS_REP_FACT) job.
RP Average Prompt
Metric of the average time users take to answer the set of
Answer Time per Job
prompts in all report jobs.
(hh:mm:ss)
RP Average Prompt
Metric of the average time, in seconds, users take to answer
Answer Time per Job
the set of prompts in all report jobs.
(secs)
RP Average Queue
Metric of the average time report jobs waited in the
Duration per Job
Intelligence Server's queue before the report job was
(hh:mm:ss) (IS_REP_
executed.
FACT)
RP Average Queue Metric of the average time, in seconds, report jobs waited in
Duration per Job (secs) the Intelligence Server's queue before the report job was
(IS_REP_FACT) executed.
RP Number of Ad Hoc Metric of how many report jobs resulted from an ad hoc report
Jobs creation.
RP Number of Cancelled
Metric of how many job executions were canceled.
Jobs
RP Number of Drill Jobs Metric of how many job executions resulted from a drill action.
RP Number of Jobs hitting Metric of how many report jobs were executed against the
Database database.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not
Cache Creation result in creating a server cache.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not hit a
Cache Hit server cache.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not
Element Loading result from loading additional attribute elements.
RP Number of Jobs with Metric of how many report jobs were executed that resulted in
Cache Creation a server cache being created.
RP Number of Jobs with Metric of how many report jobs were executed that hit a server
Cache Hit cache.
RP Number of Jobs with Metric of how many report jobs were executed that resulted in
Datamart Creation a data mart being created.
RP Number of Jobs with Metric of how many report jobs failed because of a database
DB Error error.
RP Number of Jobs with Metric of how many report jobs were executed that resulted
Element Loading from loading additional attribute elements.
RP Number of Jobs with Metric of how many report job executions used an Intelligent
Intelligent Cube Hit Cube.
RP Number of Jobs with Metric of how many report job executions used a security
Security Filter filter.
RP number of Narrowcast Metric of how many report job executions were run through
Server jobs MicroStrategy Narrowcast Server.
RP Number of Prompted
Metric of how many report job executions included a prompt.
Jobs
RP Number of Report
Metric of how many report jobs executed as a result of a
Jobs from Document
document execution.
Execution
RP Number of Result Metric of how many result rows were returned from a report
Rows execution.
RP Number of Scheduled
Metric of how many report jobs were scheduled.
Jobs
RP Prompt Answer Metric of how long users take to answer the set of prompts
Duration (hh:mm:ss) in report jobs.
RP Prompt Answer Metric of how long, in seconds, users take to answer the
Duration (secs) set of prompts in report jobs.
RP Queue Duration Metric of how long a report job waited in the Intelligence
(hh:mm:ss) Server's queue before the report job was executed.
Attribute or metric
Function
name
Security Filter Indicates the security filter used in the report execution.
SQL Execution Indicator Indicates that SQL was executed during report execution.
Attribute or metric
Function
name
Hour Indicates the hour in which the report job was executed.
Minute Indicates the minute in which the report job was started.
Attribute or metric
Function
name
Indicates the SQL statement that was executed during the SQL
Report Job SQL Pass
pass.
RP Last Execution Finish Metric of the finish timestamp when the report job was last
Timestamp executed.
RP Last Execution Start Metric of the start timestamp when the report job was last
Timestamp executed.
RP Number of DB Tables Metric of how many database tables were accessed in a report
Accessed job execution.
RP SQL Size Metric of how large, in bytes, the SQL was for a report job.
Attribute or metric
Function
name
Hour Indicates the hour in which the report job was executed.
Minute Indicates the minute in which the report job was started.
Report Job Step Indicates the sequence number in the series of execution
Sequence steps a report job passes through in the Intelligence Server.
Indicates the type of step for a report job. Examples are SQL
Report Job Step Type generation, SQL execution, Analytical Engine, Resolution
Server, element request, update Intelligent Cube, and so on.
RP Average CPU
Execution Duration per Metric of the average duration, in milliseconds, a report job
Job (msecs) (IS_REP_ execution takes in the Intelligence Server CPU.
STEP_FACT)
RP Average Queue Metric of the average time report jobs waited in the
Duration per Job (secs) Intelligence Server's queue before the report job was
(IS_REP_STEP_FACT) executed.
RP Elapsed Duration Metric of the difference between start time and finish time of
(hh:mm:ss) report job executions. Includes time for prompt responses.
RP Execution Duration Metric of the difference between start time and finish time of
(hh:mm:ss) report job executions. Includes database execution time.
RP Last Execution Finish Metric of the finish timestamp when the report job was last
Timestamp executed.
RP Last Execution Start Metric of the start timestamp when the report job was last
Timestamp executed.
RP Query Engine
Metric of how long the Query Engine took to execute SQL for a
Duration (hh:mm:ss) (IS_
report job.
REP_STEP_FACT)
RP Queue Duration Metric of how long a report job waited in the Intelligence
(hh:mm:ss) Server's queue before the report job was executed.
RP Queue Duration (secs) Metric of how long, in seconds, a report job waited in the
Attribute or metric
Function
name
Day Indicates the day on which the table column was accessed.
Hour Indicates the hour on which the table column was accessed.
Minute Indicates the minute on which the table column was accessed.
Lists all physical tables in the data warehouse that are set up to be
DB Table
monitored by Enterprise Manager.
Intelligence Server
Lists the cluster of Intelligence Servers.
Cluster
Attribute or metric
Function
name
Connection Duration
Metric of the time a connection to an Intelligence Server lasts.
(hh:mm:ss)
Cache Creation
Indicates whether an execution has created a cache.
Indicator
Configuration Object
Indicates whether a configuration object exists.
Exists Status
Configuration
Lists all configuration parameter types.
Parameter Value Type
Delivery Status
Indicates whether a delivery was successful.
Indicator
Lists the object from which a user drilled when a new report was
Drill from Object
run because of a drilling action.
Drill to Object Lists the object to which a user drilled when a new report was run
because of a drilling action.
Execution Type Indicates how the content was requested, such as User
Indicator Execution, Pre-Cached, Application Recovery, and so on.
Lists all the possible errors that can be returned during job
Job ErrorCode
executions.
Object Creation Date Indicates the date on which an object was created.
Object Creation
Indicates the week of the year in which an object was created.
Week of year
Object Modification
Indicates the date on which an object was last modified.
Date
Object Modification Indicates the week of the year in which an object was last
Week of year modified.
Prompt Answer Indicates whether a prompt answer was required for the job
Required execution.
Report Job SQL Pass Lists the types of SQL passes that the Intelligence Server
Type generates.
Report Job Step Type Lists all possible steps of report job execution.
Report Type Indicates the type of a report, such as XDA, relational, and so on.
Report/Document
Indicates whether the execution was a report or a document.
Indicator
Security Filter Indicator Indicates whether a security filter was used in the job execution.
SQL Clause Type Lists the various SQL clause types used by the SQL Engine.
SQL Execution
Indicates whether SQL was executed in the job execution.
Indicator
Attribute
Function
name
Configuration Object
Lists the owners of configuration objects.
Owner
Intelligence Server
Lists all Intelligence Server definitions.
Definition
User Group (Parent) Lists all user groups that are parents of other user groups.
Attribute
Function
name
Calendar
Lists every calendar week, beginning with 2000-01-01, as an integer.
Week
Lists all the minutes in an hour. For example, if the hour specified is 10
Minute AM - 11 AM, lists minutes as 10.30 AM - 10.31 AM, 10.32 AM - 10.33 AM,
and so on.
Quarter of
Lists all quarters of the year.
Year
Lists all weeks in all years, beginning in 2000. Weeks in 2000 are
Week of Year represented as a number ranging from 200001 to 200053, weeks in 2001
are represented as a number ranging from 200101 to 200153, and so on.
DP Average Execution Duration Metric of the average duration of all document job
per Job (hh:mm:ss) executions.
DP Average Queue Duration per Metric of the average duration of all document job
Job (hh:mm:ss) executions waiting in the queue.
DP Average Queue Duration per Metric of the average duration, in seconds, of all
Job (secs) document job executions waiting in the queue.
DP Execution Duration
Metric of the duration of a document job's execution.
(hh:mm:ss)
DP Number of Jobs (IS_DOC_ Metric of the number of document jobs that were
FACT) executed.
DP Number of Jobs with Error Metric of the number of document jobs that failed.
DP Percentage of Jobs with Metric of the percentage of document jobs that hit a
Cache Hit cache.
DP Percentage of Jobs with Error Metric of the percentage of document jobs that failed.
Document Job Step Type Indicates the type of step for a document job.
DP Average Execution Duration Metric of the average duration of all document job
per Job (hh:mm:ss) executions.
DP Average Queue Duration per Metric of the average duration of all document job
Job (hh:mm:ss) executions waiting in the queue.
DP Average Queue Duration per Metric of the average duration, in seconds, of all
Job (secs) document job executions waiting in the queue.
DP Execution Duration
Metric of the duration of a document job's execution.
(hh:mm:ss)
Displays the timestamp of the end of the data load process for
Data Load Finish Time
the projects that are being monitored.
Data Load Project Lists all projects that are being monitored.
Lists the timestamp of the start of the data load process for the
Data Load Start Time
projects that are being monitored.
Attribute or metric
Function
name
HL Days Since Last Metric of the number of days since the last request was made
Action: Request for the contents of a message.
HL Last Action Date: Any Metric of the date and time of the last action performed on a
Action message such as read, deleted, marked as read, and so on.
HL Last Action Date: Metric of the date and time of the last request made for the
Request contents of a message.
HL Number of Actions with Metric of the number of actions on a message that resulted in
Errors an error.
HL Number of Document Metric of the number of document jobs that result with
Jobs messages.
HL Number of Messages
Metric of the number of messages that resulted in an error.
with Errors
HL Number of Report
Metric of the number of report jobs that result from messages.
Jobs
Inbox Action Type Indicates the type of manipulation that was performed on a
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine message.
Report Job Indicates the job ID of the report included in the message.
User Indicates the user who manipulated the History List message.
Intelligence Server
Indicates the Intelligence Server processing the request.
Machine
Attribute or metric
Function
name
Intelligent Cube Action Metric of the duration, in seconds, for an action that was
Duration (secs) performed on the Intelligent Cube.
Intelligent Cube Action Indicates the type of action taken on the Intelligent Cube such
Type as cube publish, cube view hit, and so on.
Number of Intelligent
Metric of how many times an Intelligent Cube was published.
Cube Publishes
Number of Intelligent
Metric of how many times an Intelligent Cube was refreshed.
Cube Refreshes
Number of Intelligent
Metric of how many times an Intelligent Cube was republished.
Cube Republishes
Counter Instance Indicates the instance ID of the counter, for MicroStrategy use.
Performance Monitor Indicates the name of the performance counter and its value
Counter type.
Attribute or metric
Function
name
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine job.
Prompt Answer Indicates the answers for the prompt in various instances.
Prompt Answer Required Indicates whether an answer to the prompt was required.
Prompt Instance Answer Indicates the answer of an instance of a prompt in a report job.
Indicates the type of the object in which the prompt is stored, such as
Prompt Location Type
filter, template, attribute, and so on.
Indicates the title of the prompt (the title the user sees when
Prompt Title
presented during job execution).
Report Job Indicates the report job that used the prompt.
RP Number of Jobs
Metric of how many report jobs had a specified prompt answer
Containing Prompt
value.
Answer Value
RP Number of Jobs with Metric of how many report jobs had a prompt that was not
Unanswered Prompts answered.
Attribute or metric
Function
name
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine report.
RP Average Elapsed
Metric of the average difference between start time and finish
Duration per Job
time (including time for prompt responses) of all report job
(hh:mm:ss) (IS_REP_
executions.
FACT)
RP Average Elapsed Metric of the average difference between start time and finish
Duration per Job (secs) time (including time for prompt responses) of all report job
(IS_REP_FACT) executions.
RP Average Execution
Duration per Job Metric of the average duration of all report job executions.
(hh:mm:ss) (IS_REP_ Includes time in queue and execution for a report job.
FACT)
RP Average Execution Metric of the average duration, in seconds, of all report job
Duration per Job (secs) executions. Includes time in queue and execution for a report
(IS_REP_FACT) job.
RP Average Prompt
Metric of the average time users take to answer the set of
Answer Time per Job
prompts in all report jobs.
(hh:mm:ss)
RP Average Prompt
Metric of the average time, in seconds, users take to answer
Answer Time per Job
the set of prompts in all report jobs.
(secs)
RP Average Queue
Metric of the average time report jobs waited in the
Duration per Job
Intelligence Server's queue before the report job was
(hh:mm:ss) (IS_REP_
executed.
FACT)
RP Average Queue Metric of the average time, in seconds, report jobs waited in
Duration per Job (secs) the Intelligence Server's queue before the report job was
(IS_REP_FACT) executed.
RP Number of Ad Hoc Metric of how many report jobs resulted from an ad hoc report
Jobs creation.
RP Number of Cancelled
Metric of how many job executions were canceled.
Jobs
RP Number of Drill Jobs Metric of how many job executions resulted from a drill action.
RP Number of Jobs hitting Metric of how many report jobs were executed against the
Database database.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not
Cache Creation result in creating a server cache.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not hit a
Cache Hit server cache.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not
Element Loading result from loading additional attribute elements.
RP Number of Jobs with Metric of how many report jobs were executed that resulted in
Cache Creation a server cache being created.
RP Number of Jobs with Metric of how many report jobs were executed that hit a server
Cache Hit cache.
RP Number of Jobs with Metric of how many report jobs were executed that resulted in
Datamart Creation a data mart being created.
RP Number of Jobs with Metric of how many report jobs failed because of a database
DB Error error.
RP Number of Jobs with Metric of how many report jobs were executed that resulted
Element Loading from loading additional attribute elements.
RP Number of Jobs with Metric of how many report job executions used an Intelligent
Intelligent Cube Hit Cube.
RP Number of Jobs with Metric of how many report job executions used a security
Security Filter filter.
RP number of Narrowcast Metric of how many report job executions were run through
Server jobs MicroStrategy Narrowcast Server.
RP Number of Prompted
Metric of how many report job executions included a prompt.
Jobs
RP Number of Report
Metric of how many report jobs executed as a result of a
Jobs from Document
document execution.
Execution
RP Number of Result Metric of how many result rows were returned from a report
Rows execution.
RP Number of Scheduled
Metric of how many report jobs were scheduled.
Jobs
RP Prompt Answer Metric of how long users take to answer the set of prompts
Duration (hh:mm:ss) in report jobs.
RP Prompt Answer Metric of how long, in seconds, users take to answer the
Duration (secs) set of prompts in report jobs.
RP Queue Duration Metric of how long a report job waited in the Intelligence
(hh:mm:ss) Server's queue before the report job was executed.
Security Filter Indicates the security filter used in the report execution.
SQL Execution Indicator Indicates that SQL was executed during report execution.
Attribute or metric
Function
name
Hour Indicates the hour in which the report job was executed.
Minute Indicates the minute in which the report job was started.
Indicates the SQL statement that was executed during the SQL
Report Job SQL Pass
pass.
RP Last Execution Finish Metric of the finish timestamp when the report job was last
Timestamp executed.
RP Last Execution Start Metric of the start timestamp when the report job was last
Timestamp executed.
RP Number of DB Tables Metric of how many database tables were accessed in a report
Accessed job execution.
RP SQL Size Metric of how large, in bytes, the SQL was for a report job.
Attribute or metric
Function
name
Hour Indicates the hour in which the report job was executed.
Minute Indicates the minute in which the report job was started.
Report Job Step Indicates the sequence number in the series of execution
Sequence steps a report job passes through in the Intelligence Server.
Indicates the type of step for a report job. Examples are SQL
Report Job Step Type generation, SQL execution, Analytical Engine, Resolution
Server, element request, update Intelligent Cube, and so on.
RP Average CPU
Execution Duration per Metric of the average duration, in milliseconds, a report job
Job (msecs) (IS_REP_ execution takes in the Intelligence Server CPU.
STEP_FACT)
RP Average Elapsed Metric of the average difference between start time and finish
Duration per Job (secs) time of report job executions. Includes time for prompt
(IS_REP_STEP_FACT) responses.
RP Average Queue Metric of the average time report jobs waited in the
Duration per Job (secs) Intelligence Server's queue before the report job was
(IS_REP_STEP_FACT) executed.
RP Elapsed Duration Metric of the difference between start time and finish time of
(hh:mm:ss) report job executions. Includes time for prompt responses.
RP Execution Duration Metric of the difference between start time and finish time of
(hh:mm:ss) report job executions. Includes database execution time.
RP Last Execution Finish Metric of the finish timestamp when the report job was last
Timestamp executed.
RP Last Execution Start Metric of the start timestamp when the report job was last
Timestamp executed.
RP Number of Jobs (IS_ Metric of how many report jobs were executed.
REP_STEP_FACT)
RP Query Engine
Metric of how long the Query Engine took to execute SQL for a
Duration (hh:mm:ss) (IS_
report job.
REP_STEP_FACT)
RP Queue Duration Metric of how long a report job waited in the Intelligence
(hh:mm:ss) Server's queue before the report job was executed.
Attribute or metric
Function
name
Day Indicates the day on which the table column was accessed.
Hour Indicates the hour on which the table column was accessed.
Minute Indicates the minute on which the table column was accessed.
Lists all physical tables in the data warehouse that are set up to be
DB Table
monitored by Enterprise Manager.
Intelligence Server
Lists the cluster of Intelligence Servers.
Cluster
Attribute or metric
Function
name
Connection Duration
Metric of the time a connection to an Intelligence Server lasts.
(hh:mm:ss)
Attribute or metric
Function
name
Cache Creation
Indicates whether an execution has created a cache.
Indicator
Configuration Object
Indicates whether a configuration object exists.
Exists Status
Configuration
Lists all configuration parameter types.
Parameter Value Type
Delivery Status
Indicates whether a delivery was successful.
Indicator
Lists the object from which a user drilled when a new report was
Drill from Object
run because of a drilling action.
Lists the object to which a user drilled when a new report was run
Drill to Object
because of a drilling action.
Execution Type Indicates how the content was requested, such as User
Indicator Execution, Pre-Cached, Application Recovery, and so on.
Lists all the possible errors that can be returned during job
Job ErrorCode
executions.
Object Creation Date Indicates the date on which an object was created.
Object Creation
Indicates the week of the year in which an object was created.
Week of year
Object Modification
Indicates the date on which an object was last modified.
Date
Object Modification Indicates the week of the year in which an object was last
Week of year modified.
Prompt Answer Indicates whether a prompt answer was required for the job
Required execution.
Report Job SQL Pass Lists the types of SQL passes that the Intelligence Server
Type generates.
(Deprecated)
Report Job Step Type Lists all possible steps of report job execution.
Report Type Indicates the type of a report, such as XDA, relational, and so on.
Report/Document
Indicates whether the execution was a report or a document.
Indicator
Security Filter Indicator Indicates whether a security filter was used in the job execution.
SQL Clause Type Lists the various SQL clause types used by the SQL Engine.
SQL Execution
Indicates whether SQL was executed in the job execution.
Indicator
Attribute
Function
name
Configuration Object
Lists the owners of configuration objects.
Owner
Intelligence Server
Lists all Intelligence Server definitions.
Definition
User Group (Parent) Lists all user groups that are parents of other user groups.
Attribute
Function
name
Calendar
Lists every calendar week, beginning with 2000-01-01, as an integer.
Week
Lists all the minutes in an hour. For example, if the hour specified is 10
Minute AM - 11 AM, lists minutes as 10.30 AM - 10.31 AM, 10.32 AM - 10.33 AM,
and so on.
Quarter of
Lists all quarters of the year.
Year
Lists all weeks in all years, beginning in 2000. Weeks in 2000 are
Week of Year represented as a number ranging from 200001 to 200053, weeks in 2001
are represented as a number ranging from 200101 to 200153, and so on.
DP Average Execution Duration Metric of the average duration of all document job
per Job (hh:mm:ss) executions.
DP Average Queue Duration per Metric of the average duration of all document job
Job (hh:mm:ss) executions waiting in the queue.
DP Average Queue Duration per Metric of the average duration, in seconds, of all
Job (secs) document job executions waiting in the queue.
DP Execution Duration
Metric of the duration of a document job's execution.
(hh:mm:ss)
DP Number of Jobs (IS_DOC_ Metric of the number of document jobs that were
FACT) executed.
DP Number of Jobs with Error Metric of the number of document jobs that failed.
DP Percentage of Jobs with Metric of the percentage of document jobs that hit a
Cache Hit cache.
DP Percentage of Jobs with Error Metric of the percentage of document jobs that failed.
Document Job Step Type Indicates the type of step for a document job.
DP Average Execution Duration Metric of the average duration of all document job
per Job (hh:mm:ss) executions.
DP Average Queue Duration per Metric of the average duration of all document job
Job (hh:mm:ss) executions waiting in the queue.
DP Average Queue Duration per Metric of the average duration, in seconds, of all
Job (secs) document job executions waiting in the queue.
DP Execution Duration
Metric of the duration of a document job's execution.
(hh:mm:ss)
Data Load Finish Time Displays the timestamp of the end of the data load process for
the projects that are being monitored.
Data Load Project Lists all projects that are being monitored.
Lists the timestamp of the start of the data load process for the
Data Load Start Time
projects that are being monitored.
Attribute or metric
Function
name
HL Days Since Last Metric of the number of days since the last request was made
Action: Request for the contents of a message.
HL Last Action Date: Any Metric of the date and time of the last action performed on a
Action message such as read, deleted, marked as read, and so on.
HL Last Action Date: Metric of the date and time of the last request made for the
Request contents of a message.
HL Number of Actions with Metric of the number of actions on a message that resulted in
Errors an error.
HL Number of Document Metric of the number of document jobs that result with
Jobs messages.
HL Number of Messages
Metric of the number of messages that resulted in an error.
with Errors
HL Number of Report
Metric of the number of report jobs that result from messages.
Jobs
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine message.
Report Job Indicates the job ID of the report included in the message.
User Indicates the user who manipulated the History List message.
Intelligence Server
Indicates the Intelligence Server processing the request.
Machine
Attribute or metric
Function
name
Intelligent Cube Action Metric of the duration, in seconds, for an action that was
Duration (secs) performed on the Intelligent Cube.
Intelligent Cube Action Indicates the type of action taken on the Intelligent Cube such
Type as cube publish, cube view hit, and so on.
Number of Intelligent Metric of how many times an Intelligent Cube was published.
Cube Publishes
Number of Intelligent
Metric of how many times an Intelligent Cube was refreshed.
Cube Refreshes
Number of Intelligent
Metric of how many times an Intelligent Cube was republished.
Cube Republishes
Counter Instance Indicates the instance ID of the counter, for MicroStrategy use.
Performance Monitor Indicates the name of the performance counter and its value
Counter type.
Attribute or metric
Function
name
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine job.
Prompt Answer Indicates the answers for the prompt in various instances.
Prompt Answer Required Indicates whether an answer to the prompt was required.
Prompt Instance Answer Indicates the answer of an instance of a prompt in a report job.
Indicates the type of the object in which the prompt is stored, such as
Prompt Location Type
filter, template, attribute, and so on.
Indicates the title of the prompt (the title the user sees when
Prompt Title
presented during job execution).
Report Job Indicates the report job that used the prompt.
RP Number of Jobs
Metric of how many report jobs had a specified prompt answer
Containing Prompt
value.
Answer Value
RP Number of Jobs with Metric of how many report jobs had a prompt that was not
Unanswered Prompts answered.
Attribute or metric
Function
name
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine report.
RP Average Elapsed Metric of the average difference between start time and finish
Duration per Job time (including time for prompt responses) of all report job
(hh:mm:ss) (IS_REP_
executions.
FACT)
RP Average Elapsed Metric of the average difference between start time and finish
Duration per Job (secs) time (including time for prompt responses) of all report job
(IS_REP_FACT) executions.
RP Average Execution
Duration per Job Metric of the average duration of all report job executions.
(hh:mm:ss) (IS_REP_ Includes time in queue and execution for a report job.
FACT)
RP Average Execution Metric of the average duration, in seconds, of all report job
Duration per Job (secs) executions. Includes time in queue and execution for a report
(IS_REP_FACT) job.
RP Average Prompt
Metric of the average time users take to answer the set of
Answer Time per Job
prompts in all report jobs.
(hh:mm:ss)
RP Average Prompt
Metric of the average time, in seconds, users take to answer
Answer Time per Job
the set of prompts in all report jobs.
(secs)
RP Average Queue
Metric of the average time report jobs waited in the
Duration per Job
Intelligence Server's queue before the report job was
(hh:mm:ss) (IS_REP_
executed.
FACT)
RP Average Queue Metric of the average time, in seconds, report jobs waited in
Duration per Job (secs) the Intelligence Server's queue before the report job was
(IS_REP_FACT) executed.
RP Elapsed Duration Metric of the difference, in seconds, between start time and
(secs) finish time of a report job. Includes time for prompt responses,
RP Number of Ad Hoc Metric of how many report jobs resulted from an ad hoc report
Jobs creation.
RP Number of Cancelled
Metric of how many job executions were canceled.
Jobs
RP Number of Drill Jobs Metric of how many job executions resulted from a drill action.
RP Number of Jobs hitting Metric of how many report jobs were executed against the
Database database.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not
Cache Creation result in creating a server cache.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not hit a
Cache Hit server cache.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not
Element Loading result from loading additional attribute elements.
RP Number of Jobs with Metric of how many report jobs were executed that resulted in
Cache Creation a server cache being created.
RP Number of Jobs with Metric of how many report jobs were executed that hit a server
Cache Hit cache.
RP Number of Jobs with Metric of how many report jobs were executed that resulted in
Datamart Creation a data mart being created.
RP Number of Jobs with Metric of how many report jobs failed because of a database
DB Error error.
RP Number of Jobs with Metric of how many report jobs were executed that resulted
Element Loading from loading additional attribute elements.
RP Number of Jobs with Metric of how many report job executions used an Intelligent
Intelligent Cube Hit Cube.
RP Number of Jobs with Metric of how many report job executions used a security
Security Filter filter.
RP number of Narrowcast Metric of how many report job executions were run through
Server jobs MicroStrategy Narrowcast Server.
RP Number of Prompted
Metric of how many report job executions included a prompt.
Jobs
RP Number of Report
Metric of how many report jobs executed as a result of a
Jobs from Document
document execution.
Execution
RP Number of Result Metric of how many result rows were returned from a report
Rows execution.
RP Number of Scheduled
Metric of how many report jobs were scheduled.
Jobs
RP Prompt Answer Metric of how long users take to answer the set of prompts
Duration (hh:mm:ss) in report jobs.
RP Prompt Answer Metric of how long, in seconds, users take to answer the
Duration (secs) set of prompts in report jobs.
RP Queue Duration Metric of how long a report job waited in the Intelligence
(hh:mm:ss) Server's queue before the report job was executed.
Security Filter Indicates the security filter used in the report execution.
SQL Execution Indicator Indicates that SQL was executed during report execution.
Attribute or metric
Function
name
Hour Indicates the hour in which the report job was executed.
Minute Indicates the minute in which the report job was started.
Indicates the SQL statement that was executed during the SQL
Report Job SQL Pass
pass.
RP Last Execution Finish Metric of the finish timestamp when the report job was last
Timestamp executed.
RP Last Execution Start Metric of the start timestamp when the report job was last
Timestamp executed.
RP Number of DB Tables Metric of how many database tables were accessed in a report
Accessed job execution.
RP SQL Size Metric of how large, in bytes, the SQL was for a report job.
Attribute or metric
Function
name
Hour Indicates the hour in which the report job was executed.
Minute Indicates the minute in which the report job was started.
Report Job Step Indicates the sequence number in the series of execution
Sequence steps a report job passes through in the Intelligence Server.
Indicates the type of step for a report job. Examples are SQL
Report Job Step Type generation, SQL execution, Analytical Engine, Resolution
Server, element request, update Intelligent Cube, and so on.
RP Average CPU
Execution Duration per Metric of the average duration, in milliseconds, a report job
Job (msecs) (IS_REP_ execution takes in the Intelligence Server CPU.
STEP_FACT)
RP Average Queue Metric of the average time report jobs waited in the
Duration per Job (secs) Intelligence Server's queue before the report job was
(IS_REP_STEP_FACT) executed.
RP Elapsed Duration Metric of the difference between start time and finish time of
(hh:mm:ss) report job executions. Includes time for prompt responses.
RP Execution Duration Metric of the difference between start time and finish time of
(hh:mm:ss) report job executions. Includes database execution time.
RP Last Execution Finish Metric of the finish timestamp when the report job was last
Timestamp executed.
RP Last Execution Start Metric of the start timestamp when the report job was last
Timestamp executed.
RP Query Engine
Metric of how long the Query Engine took to execute SQL for a
Duration (hh:mm:ss) (IS_
report job.
REP_STEP_FACT)
RP Queue Duration Metric of how long a report job waited in the Intelligence
(hh:mm:ss) Server's queue before the report job was executed.
Attribute or metric
Function
name
Day Indicates the day on which the table column was accessed.
Hour Indicates the hour on which the table column was accessed.
Minute Indicates the minute on which the table column was accessed.
Lists all physical tables in the data warehouse that are set up to be
DB Table
monitored by Enterprise Manager.
Intelligence Server
Lists the cluster of Intelligence Servers.
Cluster
Attribute or metric
Function
name
Connection Duration
Metric of the time a connection to an Intelligence Server lasts.
(hh:mm:ss)
Cache Creation
Indicates whether an execution has created a cache.
Indicator
Configuration Object
Indicates whether a configuration object exists.
Exists Status
Configuration
Lists all configuration parameter types.
Parameter Value Type
Delivery Status
Indicates whether a delivery was successful.
Indicator
Lists the object from which a user drilled when a new report was
Drill from Object
run because of a drilling action.
Drill to Object Lists the object to which a user drilled when a new report was run
because of a drilling action.
Execution Type Indicates how the content was requested, such as User
Indicator Execution, Pre-Cached, Application Recovery, and so on.
Lists all the possible errors that can be returned during job
Job ErrorCode
executions.
Object Creation Date Indicates the date on which an object was created.
Object Creation
Indicates the week of the year in which an object was created.
Week of year
Object Modification
Indicates the date on which an object was last modified.
Date
Object Modification Indicates the week of the year in which an object was last
Week of year modified.
Prompt Answer Indicates whether a prompt answer was required for the job
Required execution.
Report Job SQL Pass Lists the types of SQL passes that the Intelligence Server
Type generates.
Report Job Step Type Lists all possible steps of report job execution.
Report Type Indicates the type of a report, such as XDA, relational, and so on.
Report/Document
Indicates whether the execution was a report or a document.
Indicator
Security Filter Indicator Indicates whether a security filter was used in the job execution.
SQL Clause Type Lists the various SQL clause types used by the SQL Engine.
SQL Execution
Indicates whether SQL was executed in the job execution.
Indicator
Attribute
Function
name
Attribute
Function
name
Configuration Object
Lists the owners of configuration objects.
Owner
Intelligence Server
Lists all Intelligence Server definitions.
Definition
User Group (Parent) Lists all user groups that are parents of other user groups.
Attribute
Function
name
Calendar
Lists every calendar week, beginning with 2000-01-01, as an integer.
Week
Lists all the minutes in an hour. For example, if the hour specified is 10
Minute AM - 11 AM, lists minutes as 10.30 AM - 10.31 AM, 10.32 AM - 10.33 AM,
and so on.
Quarter of
Lists all quarters of the year.
Year
Lists all weeks in all years, beginning in 2000. Weeks in 2000 are
Week of Year represented as a number ranging from 200001 to 200053, weeks in 2001
are represented as a number ranging from 200101 to 200153, and so on.
DP Average Execution Duration Metric of the average duration of all document job
per Job (hh:mm:ss) executions.
DP Average Queue Duration per Metric of the average duration of all document job
Job (hh:mm:ss) executions waiting in the queue.
DP Average Queue Duration per Metric of the average duration, in seconds, of all
Job (secs) document job executions waiting in the queue.
DP Execution Duration
Metric of the duration of a document job's execution.
(hh:mm:ss)
DP Number of Jobs (IS_DOC_ Metric of the number of document jobs that were
FACT) executed.
DP Number of Jobs with Error Metric of the number of document jobs that failed.
DP Percentage of Jobs with Metric of the percentage of document jobs that hit a
Cache Hit cache.
DP Percentage of Jobs with Error Metric of the percentage of document jobs that failed.
Document Job Step Type Indicates the type of step for a document job.
DP Average Execution Duration Metric of the average duration of all document job
per Job (hh:mm:ss) executions.
DP Average Queue Duration per Metric of the average duration of all document job
Job (hh:mm:ss) executions waiting in the queue.
DP Average Queue Duration per Metric of the average duration, in seconds, of all
Job (secs) document job executions waiting in the queue.
DP Execution Duration
Metric of the duration of a document job's execution.
(hh:mm:ss)
Displays the timestamp of the end of the data load process for
Data Load Finish Time
the projects that are being monitored.
Data Load Project Lists all projects that are being monitored.
Lists the timestamp of the start of the data load process for the
Data Load Start Time
projects that are being monitored.
Attribute or metric
Function
name
HL Days Since Last Metric of the number of days since the last request was made
Action: Request for the contents of a message.
HL Last Action Date: Any Metric of the date and time of the last action performed on a
Action message such as read, deleted, marked as read, and so on.
HL Last Action Date: Metric of the date and time of the last request made for the
Request contents of a message.
HL Number of Actions with Metric of the number of actions on a message that resulted in
Errors an error.
HL Number of Document Metric of the number of document jobs that result with
Jobs messages.
HL Number of Messages
Metric of the number of messages that resulted in an error.
with Errors
HL Number of Report
Metric of the number of report jobs that result from messages.
Jobs
Inbox Action Type Indicates the type of manipulation that was performed on a
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine message.
Report Job Indicates the job ID of the report included in the message.
User Indicates the user who manipulated the History List message.
Intelligence Server
Indicates the Intelligence Server processing the request.
Machine
Attribute or metric
Function
name
Intelligent Cube Action Metric of the duration, in seconds, for an action that was
Duration (secs) performed on the Intelligent Cube.
Intelligent Cube Action Indicates the type of action taken on the Intelligent Cube such
Type as cube publish, cube view hit, and so on.
Number of Intelligent
Metric of how many times an Intelligent Cube was published.
Cube Publishes
Number of Intelligent
Metric of how many times an Intelligent Cube was refreshed.
Cube Refreshes
Number of Intelligent
Metric of how many times an Intelligent Cube was republished.
Cube Republishes
Counter Instance Indicates the instance ID of the counter, for MicroStrategy use.
Performance Monitor Indicates the name of the performance counter and its value
Counter type.
Attribute or metric
Function
name
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine job.
Prompt Answer Indicates the answers for the prompt in various instances.
Prompt Answer Required Indicates whether an answer to the prompt was required.
Prompt Instance Answer Indicates the answer of an instance of a prompt in a report job.
Indicates the type of the object in which the prompt is stored, such as
Prompt Location Type
filter, template, attribute, and so on.
Indicates the title of the prompt (the title the user sees when
Prompt Title
presented during job execution).
Report Job Indicates the report job that used the prompt.
RP Number of Jobs
Metric of how many report jobs had a specified prompt answer
Containing Prompt
value.
Answer Value
RP Number of Jobs with Metric of how many report jobs had a prompt that was not
Unanswered Prompts answered.
Attribute or metric
Function
name
Intelligence Server Machine Indicates the Intelligence Server machine that executed the report.
RP Average Elapsed
Metric of the average difference between start time and finish
Duration per Job
time (including time for prompt responses) of all report job
(hh:mm:ss) (IS_REP_
executions.
FACT)
RP Average Elapsed Metric of the average difference between start time and finish
Duration per Job (secs) time (including time for prompt responses) of all report job
(IS_REP_FACT) executions.
RP Average Execution
Duration per Job Metric of the average duration of all report job executions.
(hh:mm:ss) (IS_REP_ Includes time in queue and execution for a report job.
FACT)
RP Average Execution Metric of the average duration, in seconds, of all report job
Duration per Job (secs) executions. Includes time in queue and execution for a report
(IS_REP_FACT) job.
RP Average Prompt
Metric of the average time users take to answer the set of
Answer Time per Job
prompts in all report jobs.
(hh:mm:ss)
RP Average Prompt
Metric of the average time, in seconds, users take to answer
Answer Time per Job
the set of prompts in all report jobs.
(secs)
RP Average Queue
Metric of the average time report jobs waited in the
Duration per Job
Intelligence Server's queue before the report job was
(hh:mm:ss) (IS_REP_
executed.
FACT)
RP Average Queue Metric of the average time, in seconds, report jobs waited in
Duration per Job (secs) the Intelligence Server's queue before the report job was
(IS_REP_FACT) executed.
RP Number of Ad Hoc Metric of how many report jobs resulted from an ad hoc report
Jobs creation.
RP Number of Cancelled
Metric of how many job executions were canceled.
Jobs
RP Number of Drill Jobs Metric of how many job executions resulted from a drill action.
RP Number of Jobs hitting Metric of how many report jobs were executed against the
Database database.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not
Cache Creation result in creating a server cache.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not hit a
Cache Hit server cache.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not
Element Loading result from loading additional attribute elements.
RP Number of Jobs with Metric of how many report jobs were executed that resulted in
Cache Creation a server cache being created.
RP Number of Jobs with Metric of how many report jobs were executed that hit a server
Cache Hit cache.
RP Number of Jobs with Metric of how many report jobs were executed that resulted in
Datamart Creation a data mart being created.
RP Number of Jobs with Metric of how many report jobs failed because of a database
DB Error error.
RP Number of Jobs with Metric of how many report jobs were executed that resulted
Element Loading from loading additional attribute elements.
RP Number of Jobs with Metric of how many report job executions used an Intelligent
Intelligent Cube Hit Cube.
RP Number of Jobs with Metric of how many report job executions used a security
Security Filter filter.
RP number of Narrowcast Metric of how many report job executions were run through
Server jobs MicroStrategy Narrowcast Server.
RP Number of Prompted
Metric of how many report job executions included a prompt.
Jobs
RP Number of Report
Metric of how many report jobs executed as a result of a
Jobs from Document
document execution.
Execution
RP Number of Result Metric of how many result rows were returned from a report
Rows execution.
RP Number of Scheduled
Metric of how many report jobs were scheduled.
Jobs
RP Prompt Answer Metric of how long users take to answer the set of prompts
Duration (hh:mm:ss) in report jobs.
RP Prompt Answer Metric of how long, in seconds, users take to answer the
Duration (secs) set of prompts in report jobs.
RP Queue Duration Metric of how long a report job waited in the Intelligence
(hh:mm:ss) Server's queue before the report job was executed.
Security Filter Indicates the security filter used in the report execution.
SQL Execution Indicator Indicates that SQL was executed during report execution.
Attribute or metric
Function
name
Hour Indicates the hour in which the report job was executed.
Minute Indicates the minute in which the report job was started.
Indicates the SQL statement that was executed during the SQL
Report Job SQL Pass
pass.
RP Last Execution Finish Metric of the finish timestamp when the report job was last
Timestamp executed.
RP Last Execution Start Metric of the start timestamp when the report job was last
Timestamp executed.
RP Number of DB Tables Metric of how many database tables were accessed in a report
Accessed job execution.
RP SQL Size Metric of how large, in bytes, the SQL was for a report job.
Attribute or metric
Function
name
Hour Indicates the hour in which the report job was executed.
Minute Indicates the minute in which the report job was started.
Report Job Step Indicates the sequence number in the series of execution
Sequence steps a report job passes through in the Intelligence Server.
Indicates the type of step for a report job. Examples are SQL
Report Job Step Type generation, SQL execution, Analytical Engine, Resolution
Server, element request, update Intelligent Cube, and so on.
RP Average CPU
Execution Duration per Metric of the average duration, in milliseconds, a report job
Job (msecs) (IS_REP_ execution takes in the Intelligence Server CPU.
STEP_FACT)
RP Average Elapsed Duration per Job (secs) (IS_REP_STEP_FACT) Metric of the average difference between start time and finish time of report job executions. Includes time for prompt responses.
RP Average Queue Metric of the average time report jobs waited in the
Duration per Job (secs) Intelligence Server's queue before the report job was
(IS_REP_STEP_FACT) executed.
RP Elapsed Duration Metric of the difference between start time and finish time of
(hh:mm:ss) report job executions. Includes time for prompt responses.
RP Execution Duration Metric of the difference between start time and finish time of
(hh:mm:ss) report job executions. Includes database execution time.
RP Last Execution Finish Metric of the finish timestamp when the report job was last
Timestamp executed.
RP Last Execution Start Metric of the start timestamp when the report job was last
Timestamp executed.
RP Number of Jobs (IS_REP_STEP_FACT) Metric of how many report jobs were executed.
RP Query Engine
Metric of how long the Query Engine took to execute SQL for a
Duration (hh:mm:ss) (IS_
report job.
REP_STEP_FACT)
RP Queue Duration Metric of how long a report job waited in the Intelligence
(hh:mm:ss) Server's queue before the report job was executed.
Attribute or metric
Function
name
Day Indicates the day on which the table column was accessed.
Hour Indicates the hour on which the table column was accessed.
Minute Indicates the minute on which the table column was accessed.
Lists all physical tables in the data warehouse that are set up to be
DB Table
monitored by Enterprise Manager.
Intelligence Server
Lists the cluster of Intelligence Servers.
Cluster
Attribute or metric
Function
name
Connection Duration
Metric of the time a connection to an Intelligence Server lasts.
(hh:mm:ss)
Attribute or metric
Function
name
Cache Creation
Indicates whether an execution has created a cache.
Indicator
Configuration Object
Indicates whether a configuration object exists.
Exists Status
Configuration
Lists all configuration parameter types.
Parameter Value Type
Delivery Status
Indicates whether a delivery was successful.
Indicator
Lists the object from which a user drilled when a new report was
Drill from Object
run because of a drilling action.
Lists the object to which a user drilled when a new report was run
Drill to Object
because of a drilling action.
Execution Type Indicates how the content was requested, such as User
Indicator Execution, Pre-Cached, Application Recovery, and so on.
Lists all the possible errors that can be returned during job
Job ErrorCode
executions.
Object Creation Date Indicates the date on which an object was created.
Object Creation
Indicates the week of the year in which an object was created.
Week of year
Object Modification
Indicates the date on which an object was last modified.
Date
Object Modification Indicates the week of the year in which an object was last
Week of year modified.
Prompt Answer Indicates whether a prompt answer was required for the job
Required execution.
Report Job SQL Pass Lists the types of SQL passes that the Intelligence Server
Type generates.
(Deprecated)
Report Job Step Type Lists all possible steps of report job execution.
Report Type Indicates the type of a report, such as XDA, relational, and so on.
Report/Document
Indicates whether the execution was a report or a document.
Indicator
Security Filter Indicator Indicates whether a security filter was used in the job execution.
SQL Clause Type Lists the various SQL clause types used by the SQL Engine.
SQL Execution
Indicates whether SQL was executed in the job execution.
Indicator
Attribute
Function
name
Configuration Object
Lists the owners of configuration objects.
Owner
Intelligence Server
Lists all Intelligence Server definitions.
Definition
User Group (Parent) Lists all user groups that are parents of other user groups.
Attribute
Function
name
Calendar
Lists every calendar week, beginning with 2000-01-01, as an integer.
Week
Lists all the minutes in an hour. For example, if the hour specified is 10
Minute AM - 11 AM, lists minutes as 10.30 AM - 10.31 AM, 10.32 AM - 10.33 AM,
and so on.
Quarter of
Lists all quarters of the year.
Year
Lists all weeks in all years, beginning in 2000. Weeks in 2000 are
Week of Year represented as a number ranging from 200001 to 200053, weeks in 2001
are represented as a number ranging from 200101 to 200153, and so on.
DP Average Execution Duration Metric of the average duration of all document job
per Job (hh:mm:ss) executions.
DP Average Queue Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions waiting in the queue.
DP Average Queue Duration per Metric of the average duration, in seconds, of all
Job (secs) document job executions waiting in the queue.
DP Execution Duration
Metric of the duration of a document job's execution.
(hh:mm:ss)
DP Number of Jobs (IS_DOC_ Metric of the number of document jobs that were
FACT) executed.
DP Number of Jobs with Error Metric of the number of document jobs that failed.
DP Percentage of Jobs with Metric of the percentage of document jobs that hit a
Cache Hit cache.
DP Percentage of Jobs with Error Metric of the percentage of document jobs that failed.
Document Job Step Type Indicates the type of step for a document job.
DP Average Execution Duration Metric of the average duration of all document job
per Job (hh:mm:ss) executions.
DP Average Queue Duration per Metric of the average duration of all document job
Job (hh:mm:ss) executions waiting in the queue.
DP Average Queue Duration per Metric of the average duration, in seconds, of all
Job (secs) document job executions waiting in the queue.
DP Execution Duration
Metric of the duration of a document job's execution.
(hh:mm:ss)
Data Load Finish Time Displays the timestamp of the end of the data load process for the projects that are being monitored.
Data Load Project Lists all projects that are being monitored.
Lists the timestamp of the start of the data load process for the
Data Load Start Time
projects that are being monitored.
Attribute or metric
Function
name
HL Days Since Last Metric of the number of days since the last request was made
Action: Request for the contents of a message.
HL Last Action Date: Any Metric of the date and time of the last action performed on a
Action message such as read, deleted, marked as read, and so on.
HL Last Action Date: Metric of the date and time of the last request made for the
Request contents of a message.
HL Number of Actions with Metric of the number of actions on a message that resulted in
Errors an error.
HL Number of Document Metric of the number of document jobs that result with
Jobs messages.
HL Number of Messages
Metric of the number of messages that resulted in an error.
with Errors
HL Number of Report
Metric of the number of report jobs that result from messages.
Jobs
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine message.
Report Job Indicates the job ID of the report included in the message.
User Indicates the user who manipulated the History List message.
Intelligence Server
Indicates the Intelligence Server processing the request.
Machine
Attribute or metric
Function
name
Intelligent Cube Action Metric of the duration, in seconds, for an action that was
Duration (secs) performed on the Intelligent Cube.
Intelligent Cube Action Indicates the type of action taken on the Intelligent Cube such
Type as cube publish, cube view hit, and so on.
Number of Intelligent Cube Publishes Metric of how many times an Intelligent Cube was published.
Number of Intelligent
Metric of how many times an Intelligent Cube was refreshed.
Cube Refreshes
Number of Intelligent
Metric of how many times an Intelligent Cube was republished.
Cube Republishes
Counter Instance Indicates the instance ID of the counter, for MicroStrategy use.
Performance Monitor Indicates the name of the performance counter and its value
Counter type.
Attribute or metric
Function
name
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine job.
Prompt Answer Indicates the answers for the prompt in various instances.
Prompt Answer Required Indicates whether an answer to the prompt was required.
Prompt Instance Answer Indicates the answer of an instance of a prompt in a report job.
Indicates the type of the object in which the prompt is stored, such as
Prompt Location Type
filter, template, attribute, and so on.
Indicates the title of the prompt (the title the user sees when
Prompt Title
presented during job execution).
Report Job Indicates the report job that used the prompt.
RP Number of Jobs
Metric of how many report jobs had a specified prompt answer
Containing Prompt
value.
Answer Value
RP Number of Jobs with Metric of how many report jobs had a prompt that was not
Unanswered Prompts answered.
Attribute or metric
Function
name
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine report.
RP Average Elapsed Duration per Job (hh:mm:ss) (IS_REP_FACT) Metric of the average difference between start time and finish time (including time for prompt responses) of all report job executions.
RP Average Elapsed Metric of the average difference between start time and finish
Duration per Job (secs) time (including time for prompt responses) of all report job
(IS_REP_FACT) executions.
RP Average Execution
Duration per Job Metric of the average duration of all report job executions.
(hh:mm:ss) (IS_REP_ Includes time in queue and execution for a report job.
FACT)
RP Average Execution Metric of the average duration, in seconds, of all report job
Duration per Job (secs) executions. Includes time in queue and execution for a report
(IS_REP_FACT) job.
RP Average Prompt
Metric of the average time users take to answer the set of
Answer Time per Job
prompts in all report jobs.
(hh:mm:ss)
RP Average Prompt
Metric of the average time, in seconds, users take to answer
Answer Time per Job
the set of prompts in all report jobs.
(secs)
RP Average Queue
Metric of the average time report jobs waited in the
Duration per Job
Intelligence Server's queue before the report job was
(hh:mm:ss) (IS_REP_
executed.
FACT)
RP Average Queue Metric of the average time, in seconds, report jobs waited in
Duration per Job (secs) the Intelligence Server's queue before the report job was
(IS_REP_FACT) executed.
RP Elapsed Duration (secs) Metric of the difference, in seconds, between start time and finish time of a report job. Includes time for prompt responses.
RP Number of Ad Hoc Metric of how many report jobs resulted from an ad hoc report
Jobs creation.
RP Number of Cancelled
Metric of how many job executions were canceled.
Jobs
RP Number of Drill Jobs Metric of how many job executions resulted from a drill action.
RP Number of Jobs hitting Metric of how many report jobs were executed against the
Database database.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not
Cache Creation result in creating a server cache.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not hit a
Cache Hit server cache.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not
Element Loading result from loading additional attribute elements.
RP Number of Jobs with Metric of how many report jobs were executed that resulted in
Cache Creation a server cache being created.
RP Number of Jobs with Metric of how many report jobs were executed that hit a server
Cache Hit cache.
RP Number of Jobs with Metric of how many report jobs were executed that resulted in
Datamart Creation a data mart being created.
RP Number of Jobs with DB Error Metric of how many report jobs failed because of a database error.
RP Number of Jobs with Metric of how many report jobs were executed that resulted
Element Loading from loading additional attribute elements.
RP Number of Jobs with Metric of how many report job executions used an Intelligent
Intelligent Cube Hit Cube.
RP Number of Jobs with Metric of how many report job executions used a security
Security Filter filter.
RP number of Narrowcast Metric of how many report job executions were run through
Server jobs MicroStrategy Narrowcast Server.
RP Number of Prompted
Metric of how many report job executions included a prompt.
Jobs
RP Number of Report
Metric of how many report jobs executed as a result of a
Jobs from Document
document execution.
Execution
RP Number of Result Metric of how many result rows were returned from a report
Rows execution.
RP Number of Scheduled
Metric of how many report jobs were scheduled.
Jobs
RP Prompt Answer Metric of how long users take to answer the set of prompts
Duration (hh:mm:ss) in report jobs.
RP Prompt Answer Metric of how long, in seconds, users take to answer the
Duration (secs) set of prompts in report jobs.
RP Queue Duration Metric of how long a report job waited in the Intelligence
(hh:mm:ss) Server's queue before the report job was executed.
Security Filter Indicates the security filter used in the report execution.
SQL Execution Indicator Indicates that SQL was executed during report execution.
Attribute or metric
Function
name
Hour Indicates the hour in which the report job was executed.
Minute Indicates the minute in which the report job was started.
Indicates the SQL statement that was executed during the SQL
Report Job SQL Pass
pass.
RP Last Execution Finish Metric of the finish timestamp when the report job was last
Timestamp executed.
RP Last Execution Start Metric of the start timestamp when the report job was last
Timestamp executed.
RP Number of DB Tables Metric of how many database tables were accessed in a report
Accessed job execution.
RP SQL Size Metric of how large, in bytes, the SQL was for a report job.
Attribute or metric
Function
name
Hour Indicates the hour in which the report job was executed.
Minute Indicates the minute in which the report job was started.
Report Job Step Indicates the sequence number in the series of execution
Sequence steps a report job passes through in the Intelligence Server.
Indicates the type of step for a report job. Examples are SQL
Report Job Step Type generation, SQL execution, Analytical Engine, Resolution
Server, element request, update Intelligent Cube, and so on.
RP Average CPU
Execution Duration per Metric of the average duration, in milliseconds, a report job
Job (msecs) (IS_REP_ execution takes in the Intelligence Server CPU.
STEP_FACT)
RP Average Queue Metric of the average time report jobs waited in the
Duration per Job (secs) Intelligence Server's queue before the report job was
(IS_REP_STEP_FACT) executed.
RP Elapsed Duration Metric of the difference between start time and finish time of
(hh:mm:ss) report job executions. Includes time for prompt responses.
RP Execution Duration Metric of the difference between start time and finish time of
(hh:mm:ss) report job executions. Includes database execution time.
RP Last Execution Finish Metric of the finish timestamp when the report job was last
Timestamp executed.
RP Last Execution Start Metric of the start timestamp when the report job was last
Timestamp executed.
RP Query Engine
Metric of how long the Query Engine took to execute SQL for a
Duration (hh:mm:ss) (IS_
report job.
REP_STEP_FACT)
RP Queue Duration Metric of how long a report job waited in the Intelligence
(hh:mm:ss) Server's queue before the report job was executed.
Attribute or metric
Function
name
Day Indicates the day on which the table column was accessed.
Hour Indicates the hour on which the table column was accessed.
Minute Indicates the minute on which the table column was accessed.
Lists all physical tables in the data warehouse that are set up to be
DB Table
monitored by Enterprise Manager.
Intelligence Server
Lists the cluster of Intelligence Servers.
Cluster
Attribute or metric
Function
name
Connection Duration
Metric of the time a connection to an Intelligence Server lasts.
(hh:mm:ss)
Cache Creation
Indicates whether an execution has created a cache.
Indicator
Configuration Object
Indicates whether a configuration object exists.
Exists Status
Configuration
Lists all configuration parameter types.
Parameter Value Type
Delivery Status
Indicates whether a delivery was successful.
Indicator
Lists the object from which a user drilled when a new report was
Drill from Object
run because of a drilling action.
Drill to Object Lists the object to which a user drilled when a new report was run because of a drilling action.
Execution Type Indicates how the content was requested, such as User
Indicator Execution, Pre-Cached, Application Recovery, and so on.
Lists all the possible errors that can be returned during job
Job ErrorCode
executions.
Object Creation Date Indicates the date on which an object was created.
Object Creation
Indicates the week of the year in which an object was created.
Week of year
Object Modification
Indicates the date on which an object was last modified.
Date
Object Modification Indicates the week of the year in which an object was last
Week of year modified.
Prompt Answer Indicates whether a prompt answer was required for the job
Required execution.
Report Job SQL Pass Lists the types of SQL passes that the Intelligence Server
Type generates.
Report Job Step Type Lists all possible steps of report job execution.
Report Type Indicates the type of a report, such as XDA, relational, and so on.
Report/Document
Indicates whether the execution was a report or a document.
Indicator
Security Filter Indicator Indicates whether a security filter was used in the job execution.
SQL Clause Type Lists the various SQL clause types used by the SQL Engine.
SQL Execution
Indicates whether SQL was executed in the job execution.
Indicator
Attribute
Function
name
Configuration Object
Lists the owners of configuration objects.
Owner
Intelligence Server
Lists all Intelligence Server definitions.
Definition
User Group (Parent) Lists all user groups that are parents of other user groups.
Attribute
Function
name
Calendar
Lists every calendar week, beginning with 2000-01-01, as an integer.
Week
Lists all the minutes in an hour. For example, if the hour specified is 10
Minute AM - 11 AM, lists minutes as 10.30 AM - 10.31 AM, 10.32 AM - 10.33 AM,
and so on.
Quarter of
Lists all quarters of the year.
Year
Lists all weeks in all years, beginning in 2000. Weeks in 2000 are
Week of Year represented as a number ranging from 200001 to 200053, weeks in 2001
are represented as a number ranging from 200101 to 200153, and so on.
DP Average Execution Duration Metric of the average duration of all document job
per Job (hh:mm:ss) executions.
DP Average Queue Duration per Metric of the average duration of all document job
Job (hh:mm:ss) executions waiting in the queue.
DP Average Queue Duration per Metric of the average duration, in seconds, of all
Job (secs) document job executions waiting in the queue.
DP Execution Duration
Metric of the duration of a document job's execution.
(hh:mm:ss)
DP Number of Jobs (IS_DOC_ Metric of the number of document jobs that were
FACT) executed.
DP Number of Jobs with Error Metric of the number of document jobs that failed.
DP Percentage of Jobs with Metric of the percentage of document jobs that hit a
Cache Hit cache.
DP Percentage of Jobs with Error Metric of the percentage of document jobs that failed.
Document Job Step Type Indicates the type of step for a document job.
DP Average Execution Duration Metric of the average duration of all document job
per Job (hh:mm:ss) executions.
DP Average Queue Duration per Metric of the average duration of all document job
Job (hh:mm:ss) executions waiting in the queue.
DP Average Queue Duration per Metric of the average duration, in seconds, of all
Job (secs) document job executions waiting in the queue.
DP Execution Duration
Metric of the duration of a document job's execution.
(hh:mm:ss)
Displays the timestamp of the end of the data load process for
Data Load Finish Time
the projects that are being monitored.
Data Load Project Lists all projects that are being monitored.
Lists the timestamp of the start of the data load process for the
Data Load Start Time
projects that are being monitored.
Attribute or metric
Function
name
HL Days Since Last Metric of the number of days since the last request was made
Action: Request for the contents of a message.
HL Last Action Date: Any Metric of the date and time of the last action performed on a
Action message such as read, deleted, marked as read, and so on.
HL Last Action Date: Metric of the date and time of the last request made for the
Request contents of a message.
HL Number of Actions with Metric of the number of actions on a message that resulted in
Errors an error.
HL Number of Document Metric of the number of document jobs that result with
Jobs messages.
HL Number of Messages
Metric of the number of messages that resulted in an error.
with Errors
HL Number of Report
Metric of the number of report jobs that result from messages.
Jobs
Inbox Action Type Indicates the type of manipulation that was performed on a History List message.
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine message.
Report Job Indicates the job ID of the report included in the message.
User Indicates the user who manipulated the History List message.
Intelligence Server
Indicates the Intelligence Server processing the request.
Machine
Attribute or metric
Function
name
Intelligent Cube Action Metric of the duration, in seconds, for an action that was
Duration (secs) performed on the Intelligent Cube.
Intelligent Cube Action Indicates the type of action taken on the Intelligent Cube such
Type as cube publish, cube view hit, and so on.
Number of Intelligent
Metric of how many times an Intelligent Cube was published.
Cube Publishes
Number of Intelligent
Metric of how many times an Intelligent Cube was refreshed.
Cube Refreshes
Number of Intelligent
Metric of how many times an Intelligent Cube was republished.
Cube Republishes
Counter Instance Indicates the instance ID of the counter, for MicroStrategy use.
Performance Monitor Indicates the name of the performance counter and its value
Counter type.
Attribute or metric
Function
name
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine job.
Prompt Answer Indicates the answers for the prompt in various instances.
Prompt Answer Required Indicates whether an answer to the prompt was required.
Prompt Instance Answer Indicates the answer of an instance of a prompt in a report job.
Indicates the type of the object in which the prompt is stored, such as
Prompt Location Type
filter, template, attribute, and so on.
Indicates the title of the prompt (the title the user sees when
Prompt Title
presented during job execution).
Report Job Indicates the report job that used the prompt.
RP Number of Jobs
Metric of how many report jobs had a specified prompt answer
Containing Prompt
value.
Answer Value
RP Number of Jobs with Metric of how many report jobs had a prompt that was not
Unanswered Prompts answered.
Attribute or metric
Function
name
Intelligence Server Machine Indicates the Intelligence Server machine that executed the report.
RP Average Elapsed
Metric of the average difference between start time and finish
Duration per Job
time (including time for prompt responses) of all report job
(hh:mm:ss) (IS_REP_
executions.
FACT)
RP Average Elapsed Metric of the average difference between start time and finish
Duration per Job (secs) time (including time for prompt responses) of all report job
(IS_REP_FACT) executions.
RP Average Execution
Duration per Job Metric of the average duration of all report job executions.
(hh:mm:ss) (IS_REP_ Includes time in queue and execution for a report job.
FACT)
RP Average Execution Metric of the average duration, in seconds, of all report job
Duration per Job (secs) executions. Includes time in queue and execution for a report
(IS_REP_FACT) job.
RP Average Prompt
Metric of the average time users take to answer the set of
Answer Time per Job
prompts in all report jobs.
(hh:mm:ss)
RP Average Prompt
Metric of the average time, in seconds, users take to answer
Answer Time per Job
the set of prompts in all report jobs.
(secs)
RP Average Queue
Metric of the average time report jobs waited in the
Duration per Job
Intelligence Server's queue before the report job was
(hh:mm:ss) (IS_REP_
executed.
FACT)
RP Average Queue Metric of the average time, in seconds, report jobs waited in
Duration per Job (secs) the Intelligence Server's queue before the report job was
(IS_REP_FACT) executed.
RP Number of Ad Hoc Metric of how many report jobs resulted from an ad hoc report
Jobs creation.
RP Number of Cancelled
Metric of how many job executions were canceled.
Jobs
RP Number of Drill Jobs Metric of how many job executions resulted from a drill action.
RP Number of Jobs hitting Metric of how many report jobs were executed against the
Database database.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not
Cache Creation result in creating a server cache.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not hit a
Cache Hit server cache.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not
Element Loading result from loading additional attribute elements.
RP Number of Jobs with Metric of how many report jobs were executed that resulted in
Cache Creation a server cache being created.
RP Number of Jobs with Metric of how many report jobs were executed that hit a server
Cache Hit cache.
RP Number of Jobs with Metric of how many report jobs were executed that resulted in
Datamart Creation a data mart being created.
RP Number of Jobs with Metric of how many report jobs failed because of a database
DB Error error.
RP Number of Jobs with Metric of how many report jobs were executed that resulted
Element Loading from loading additional attribute elements.
RP Number of Jobs with Metric of how many report job executions used an Intelligent
Intelligent Cube Hit Cube.
RP Number of Jobs with Metric of how many report job executions used a security
Security Filter filter.
RP number of Narrowcast Metric of how many report job executions were run through
Server jobs MicroStrategy Narrowcast Server.
RP Number of Prompted
Metric of how many report job executions included a prompt.
Jobs
RP Number of Report
Metric of how many report jobs executed as a result of a
Jobs from Document
document execution.
Execution
RP Number of Result Metric of how many result rows were returned from a report
Rows execution.
RP Number of Scheduled
Metric of how many report jobs were scheduled.
Jobs
RP Prompt Answer Metric of how long users take to answer the set of prompts
Duration (hh:mm:ss) in report jobs.
RP Prompt Answer Metric of how long, in seconds, users take to answer the
Duration (secs) set of prompts in report jobs.
RP Queue Duration Metric of how long a report job waited in the Intelligence
(hh:mm:ss) Server's queue before the report job was executed.
Security Filter Indicates the security filter used in the report execution.
SQL Execution Indicator Indicates that SQL was executed during report execution.
Attribute or metric
Function
name
Hour Indicates the hour in which the report job was executed.
Minute Indicates the minute in which the report job was started.
Indicates the SQL statement that was executed during the SQL
Report Job SQL Pass
pass.
RP Last Execution Finish Metric of the finish timestamp when the report job was last
Timestamp executed.
RP Last Execution Start Metric of the start timestamp when the report job was last
Timestamp executed.
RP Number of DB Tables Metric of how many database tables were accessed in a report
Accessed job execution.
RP SQL Size Metric of how large, in bytes, the SQL was for a report job.
Attribute or metric
Function
name
Hour Indicates the hour in which the report job was executed.
Minute Indicates the minute in which the report job was started.
Report Job Step Indicates the sequence number in the series of execution
Sequence steps a report job passes through in the Intelligence Server.
Indicates the type of step for a report job. Examples are SQL
Report Job Step Type generation, SQL execution, Analytical Engine, Resolution
Server, element request, update Intelligent Cube, and so on.
RP Average CPU
Execution Duration per Metric of the average duration, in milliseconds, a report job
Job (msecs) (IS_REP_ execution takes in the Intelligence Server CPU.
STEP_FACT)
RP Average Elapsed Duration per Job (secs) (IS_REP_STEP_FACT) Metric of the average difference between start time and finish time of report job executions. Includes time for prompt responses.
RP Average Queue Metric of the average time report jobs waited in the
Duration per Job (secs) Intelligence Server's queue before the report job was
(IS_REP_STEP_FACT) executed.
RP Elapsed Duration Metric of the difference between start time and finish time of
(hh:mm:ss) report job executions. Includes time for prompt responses.
RP Execution Duration Metric of the difference between start time and finish time of
(hh:mm:ss) report job executions. Includes database execution time.
RP Last Execution Finish Metric of the finish timestamp when the report job was last
Timestamp executed.
RP Last Execution Start Metric of the start timestamp when the report job was last
Timestamp executed.
RP Number of Jobs (IS_REP_STEP_FACT) Metric of how many report jobs were executed.
RP Query Engine
Metric of how long the Query Engine took to execute SQL for a
Duration (hh:mm:ss) (IS_
report job.
REP_STEP_FACT)
RP Queue Duration Metric of how long a report job waited in the Intelligence
(hh:mm:ss) Server's queue before the report job was executed.
Attribute or metric
Function
name
Day Indicates the day on which the table column was accessed.
Hour Indicates the hour on which the table column was accessed.
Minute Indicates the minute on which the table column was accessed.
Lists all physical tables in the data warehouse that are set up to be
DB Table
monitored by Enterprise Manager.
Intelligence Server
Lists the cluster of Intelligence Servers.
Cluster
Attribute or metric
Function
name
Connection Duration
Metric of the time a connection to an Intelligence Server lasts.
(hh:mm:ss)
Attribute or metric
Function
name
Cache Creation
Indicates whether an execution has created a cache.
Indicator
Configuration Object
Indicates whether a configuration object exists.
Exists Status
Configuration
Lists all configuration parameter types.
Parameter Value Type
Delivery Status
Indicates whether a delivery was successful.
Indicator
Lists the object from which a user drilled when a new report was
Drill from Object
run because of a drilling action.
Lists the object to which a user drilled when a new report was run
Drill to Object
because of a drilling action.
Execution Type Indicates how the content was requested, such as User
Indicator Execution, Pre-Cached, Application Recovery, and so on.
Lists all the possible errors that can be returned during job
Job ErrorCode
executions.
Object Creation Date Indicates the date on which an object was created.
Object Creation
Indicates the week of the year in which an object was created.
Week of year
Object Modification
Indicates the date on which an object was last modified.
Date
Object Modification Indicates the week of the year in which an object was last
Week of year modified.
Prompt Answer Indicates whether a prompt answer was required for the job
Required execution.
Report Job SQL Pass Lists the types of SQL passes that the Intelligence Server
Type generates.
(Deprecated)
Report Job Step Type Lists all possible steps of report job execution.
Report Type Indicates the type of a report, such as XDA, relational, and so on.
Report/Document
Indicates whether the execution was a report or a document.
Indicator
Security Filter Indicator Indicates whether a security filter was used in the job execution.
SQL Clause Type Lists the various SQL clause types used by the SQL Engine.
SQL Execution
Indicates whether SQL was executed in the job execution.
Indicator
Attribute
Function
name
Configuration Object
Lists the owners of configuration objects.
Owner
Intelligence Server
Lists all Intelligence Server definitions.
Definition
User Group (Parent) Lists all user groups that are parents of other user groups.
Attribute
Function
name
Calendar
Lists every calendar week, beginning with 2000-01-01, as an integer.
Week
Lists all the minutes in an hour. For example, if the hour specified is 10
Minute AM - 11 AM, lists minutes as 10.30 AM - 10.31 AM, 10.32 AM - 10.33 AM,
and so on.
Quarter of
Lists all quarters of the year.
Year
Lists all weeks in all years, beginning in 2000. Weeks in 2000 are
Week of Year represented as a number ranging from 200001 to 200053, weeks in 2001
are represented as a number ranging from 200101 to 200153, and so on.
DP Average Execution Duration Metric of the average duration of all document job
per Job (hh:mm:ss) executions.
DP Average Queue Duration per Job (hh:mm:ss) Metric of the average duration of all document job executions waiting in the queue.
DP Average Queue Duration per Metric of the average duration, in seconds, of all
Job (secs) document job executions waiting in the queue.
DP Execution Duration
Metric of the duration of a document job's execution.
(hh:mm:ss)
DP Number of Jobs (IS_DOC_ Metric of the number of document jobs that were
FACT) executed.
DP Number of Jobs with Error Metric of the number of document jobs that failed.
DP Percentage of Jobs with Metric of the percentage of document jobs that hit a
Cache Hit cache.
DP Percentage of Jobs with Error Metric of the percentage of document jobs that failed.
Document Job Step Type Indicates the type of step for a document job.
DP Average Execution Duration Metric of the average duration of all document job
per Job (hh:mm:ss) executions.
DP Average Queue Duration per Metric of the average duration of all document job
Job (hh:mm:ss) executions waiting in the queue.
DP Average Queue Duration per Metric of the average duration, in seconds, of all
Job (secs) document job executions waiting in the queue.
DP Execution Duration
Metric of the duration of a document job's execution.
(hh:mm:ss)
Data Load Finish Time Displays the timestamp of the end of the data load process for the projects that are being monitored.
Data Load Project Lists all projects that are being monitored.
Lists the timestamp of the start of the data load process for the
Data Load Start Time
projects that are being monitored.
Attribute or metric
Function
name
HL Days Since Last Metric of the number of days since the last request was made
Action: Request for the contents of a message.
HL Last Action Date: Any Metric of the date and time of the last action performed on a
Action message such as read, deleted, marked as read, and so on.
HL Last Action Date: Metric of the date and time of the last request made for the
Request contents of a message.
HL Number of Actions with Metric of the number of actions on a message that resulted in
Errors an error.
HL Number of Document Metric of the number of document jobs that result with
Jobs messages.
HL Number of Messages
Metric of the number of messages that resulted in an error.
with Errors
HL Number of Report
Metric of the number of report jobs that result from messages.
Jobs
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine message.
Report Job Indicates the job ID of the report included in the message.
User Indicates the user who manipulated the History List message.
Intelligence Server
Indicates the Intelligence Server processing the request.
Machine
Attribute or metric
Function
name
Intelligent Cube Action Metric of the duration, in seconds, for an action that was
Duration (secs) performed on the Intelligent Cube.
Intelligent Cube Action Indicates the type of action taken on the Intelligent Cube such
Type as cube publish, cube view hit, and so on.
Number of Intelligent Cube Publishes Metric of how many times an Intelligent Cube was published.
Number of Intelligent
Metric of how many times an Intelligent Cube was refreshed.
Cube Refreshes
Number of Intelligent
Metric of how many times an Intelligent Cube was republished.
Cube Republishes
Counter Instance Indicates the instance ID of the counter, for MicroStrategy use.
Performance Monitor Indicates the name of the performance counter and its value
Counter type.
Attribute or metric
Function
name
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine job.
Prompt Answer Indicates the answers for the prompt in various instances.
Prompt Answer Required Indicates whether an answer to the prompt was required.
Prompt Instance Answer Indicates the answer of an instance of a prompt in a report job.
Indicates the type of the object in which the prompt is stored, such as
Prompt Location Type
filter, template, attribute, and so on.
Indicates the title of the prompt (the title the user sees when
Prompt Title
presented during job execution).
Report Job Indicates the report job that used the prompt.
RP Number of Jobs
Metric of how many report jobs had a specified prompt answer
Containing Prompt
value.
Answer Value
RP Number of Jobs with Metric of how many report jobs had a prompt that was not
Unanswered Prompts answered.
Attribute or metric
Function
name
Intelligence Server Indicates the Intelligence Server machine that executed the
Machine report.
RP Average Elapsed Duration per Job (hh:mm:ss) (IS_REP_FACT) Metric of the average difference between start time and finish time (including time for prompt responses) of all report job executions.
RP Average Elapsed Metric of the average difference between start time and finish
Duration per Job (secs) time (including time for prompt responses) of all report job
(IS_REP_FACT) executions.
RP Average Execution
Duration per Job Metric of the average duration of all report job executions.
(hh:mm:ss) (IS_REP_ Includes time in queue and execution for a report job.
FACT)
RP Average Execution Metric of the average duration, in seconds, of all report job
Duration per Job (secs) executions. Includes time in queue and execution for a report
(IS_REP_FACT) job.
RP Average Prompt
Metric of the average time users take to answer the set of
Answer Time per Job
prompts in all report jobs.
(hh:mm:ss)
RP Average Prompt
Metric of the average time, in seconds, users take to answer
Answer Time per Job
the set of prompts in all report jobs.
(secs)
RP Average Queue
Metric of the average time report jobs waited in the
Duration per Job
Intelligence Server's queue before the report job was
(hh:mm:ss) (IS_REP_
executed.
FACT)
RP Average Queue Metric of the average time, in seconds, report jobs waited in
Duration per Job (secs) the Intelligence Server's queue before the report job was
(IS_REP_FACT) executed.
RP Elapsed Duration (secs) Metric of the difference, in seconds, between start time and finish time of a report job. Includes time for prompt responses.
RP Number of Ad Hoc Metric of how many report jobs resulted from an ad hoc report
Jobs creation.
RP Number of Cancelled
Metric of how many job executions were canceled.
Jobs
RP Number of Drill Jobs Metric of how many job executions resulted from a drill action.
RP Number of Jobs hitting Metric of how many report jobs were executed against the
Database database.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not
Cache Creation result in creating a server cache.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not hit a
Cache Hit server cache.
RP Number of Jobs w/o Metric of how many report jobs were executed that did not
Element Loading result from loading additional attribute elements.
RP Number of Jobs with Metric of how many report jobs were executed that resulted in
Cache Creation a server cache being created.
RP Number of Jobs with Metric of how many report jobs were executed that hit a server
Cache Hit cache.
RP Number of Jobs with Metric of how many report jobs were executed that resulted in
Datamart Creation a data mart being created.
RP Number of Jobs with DB Error Metric of how many report jobs failed because of a database error.
RP Number of Jobs with Metric of how many report jobs were executed that resulted
Element Loading from loading additional attribute elements.
RP Number of Jobs with Metric of how many report job executions used an Intelligent
Intelligent Cube Hit Cube.
RP Number of Jobs with Metric of how many report job executions used a security
Security Filter filter.
RP number of Narrowcast Metric of how many report job executions were run through
Server jobs MicroStrategy Narrowcast Server.
RP Number of Prompted
Metric of how many report job executions included a prompt.
Jobs
RP Number of Report
Metric of how many report jobs executed as a result of a
Jobs from Document
document execution.
Execution
RP Number of Result Metric of how many result rows were returned from a report
Rows execution.
RP Number of Scheduled
Metric of how many report jobs were scheduled.
Jobs
RP Prompt Answer Metric of how long users take to answer the set of prompts
Duration (hh:mm:ss) in report jobs.
RP Prompt Answer Metric of how long, in seconds, users take to answer the
Duration (secs) set of prompts in report jobs.
RP Queue Duration Metric of how long a report job waited in the Intelligence
(hh:mm:ss) Server's queue before the report job was executed.
Security Filter Indicates the security filter used in the report execution.
SQL Execution Indicator Indicates that SQL was executed during report execution.
Attribute or metric
Function
name
Hour Indicates the hour in which the report job was executed.
Minute Indicates the minute in which the report job was started.
Indicates the SQL statement that was executed during the SQL
Report Job SQL Pass
pass.
RP Last Execution Finish Metric of the finish timestamp when the report job was last
Timestamp executed.
RP Last Execution Start Metric of the start timestamp when the report job was last
Timestamp executed.
RP Number of DB Tables Metric of how many database tables were accessed in a report
Accessed job execution.
RP SQL Size Metric of how large, in bytes, the SQL was for a report job.
Attribute or metric
Function
name
Hour Indicates the hour in which the report job was executed.
Minute Indicates the minute in which the report job was started.
Report Job Step Indicates the sequence number in the series of execution
Sequence steps a report job passes through in the Intelligence Server.
Indicates the type of step for a report job. Examples are SQL
Report Job Step Type generation, SQL execution, Analytical Engine, Resolution
Server, element request, update Intelligent Cube, and so on.
RP Average CPU
Execution Duration per Metric of the average duration, in milliseconds, a report job
Job (msecs) (IS_REP_ execution takes in the Intelligence Server CPU.
STEP_FACT)
RP Average Queue Metric of the average time report jobs waited in the
Duration per Job (secs) Intelligence Server's queue before the report job was
(IS_REP_STEP_FACT) executed.
RP Elapsed Duration Metric of the difference between start time and finish time of
(hh:mm:ss) report job executions. Includes time for prompt responses.
RP Execution Duration Metric of the difference between start time and finish time of
(hh:mm:ss) report job executions. Includes database execution time.
RP Last Execution Finish Metric of the finish timestamp when the report job was last
Timestamp executed.
RP Last Execution Start Metric of the start timestamp when the report job was last
Timestamp executed.
RP Query Engine
Metric of how long the Query Engine took to execute SQL for a
Duration (hh:mm:ss) (IS_
report job.
REP_STEP_FACT)
RP Queue Duration Metric of how long a report job waited in the Intelligence
(hh:mm:ss) Server's queue before the report job was executed.
Attribute or metric
Function
name
Day Indicates the day on which the table column was accessed.
Hour Indicates the hour on which the table column was accessed.
Minute Indicates the minute on which the table column was accessed.
Lists all physical tables in the data warehouse that are set up to be
DB Table
monitored by Enterprise Manager.
Intelligence Server
Lists the cluster of Intelligence Servers.
Cluster
Attribute or metric
Function
name
Connection Duration
Metric of the time a connection to an Intelligence Server lasts.
(hh:mm:ss)
Cache Creation
Indicates whether an execution has created a cache.
Indicator
Configuration Object
Indicates whether a configuration object exists.
Exists Status
Configuration
Lists all configuration parameter types.
Parameter Value Type
Delivery Status
Indicates whether a delivery was successful.
Indicator
Lists the object from which a user drilled when a new report was
Drill from Object
run because of a drilling action.
Drill to Object Lists the object to which a user drilled when a new report was run because of a drilling action.
Execution Type Indicates how the content was requested, such as User
Indicator Execution, Pre-Cached, Application Recovery, and so on.
Lists all the possible errors that can be returned during job
Job ErrorCode
executions.
Object Creation Date Indicates the date on which an object was created.
Object Creation
Indicates the week of the year in which an object was created.
Week of year
Object Modification
Indicates the date on which an object was last modified.
Date
Object Modification Indicates the week of the year in which an object was last
Week of year mod