Oracle Fusion
Transactional Business
Intelligence and BI Cloud
Connector Performance
Recommendations
An Oracle White Paper, 3rd Edition
January 5, 2021 | Version 3.0
Copyright © 2021, Oracle and/or its affiliates
Confidential – Public
2 WHITE PAPER | Oracle Fusion Transactional Business Intelligence and BI Cloud Connector Performance Recommendations | Version 3.0
Copyright © 2021, Oracle and/or its affiliates | Confidential – Public
PURPOSE STATEMENT
This document covers performance topics and best practices for Oracle Transactional Business Intelligence (OTBI) and Business
Intelligence Cloud Connector (BICC) for Fusion Applications Release 20A and higher. Most of the recommendations are generic to
OTBI and BICC contents as well as the Oracle Business Intelligence Enterprise Edition (OBIEE) technology stack. Release-specific
topics will refer to exact version numbers.
Note: The document is intended for BI system integrators, BI report developers and administrators. It covers advanced performance
tuning techniques in OTBI, BICC, OBIEE and the Oracle RDBMS. All recommendations must be carefully verified in a test environment
before being applied to a production instance.
DISCLAIMER
This document in any form, software or printed matter, contains proprietary information that is the exclusive property of Oracle.
Your access to and use of this confidential material is subject to the terms and conditions of your Oracle software license and service
agreement, which has been executed and with which you agree to comply. This document and information contained herein may not
be disclosed, copied, reproduced or distributed to anyone outside Oracle without prior written consent of Oracle. This document is
not part of your license agreement nor can it be incorporated into any contractual agreement with Oracle or its subsidiaries or
affiliates.
This document is for informational purposes only and is intended solely to assist you in planning for the implementation and upgrade
of the product features described. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon
in making purchasing decisions. The development, release, and timing of any features or functionality described in this document
remains at the sole discretion of Oracle.
Due to the nature of the product architecture, it may not be possible to safely include all features described in this document without
risking significant destabilization of the code.
TABLE OF CONTENTS
Purpose Statement 2
Disclaimer 2
Introduction 5
Logical SQL Concepts 5
Understanding OTBI Query Execution 5
OTBI and BICC Performance Monitoring and Diagnostics 6
OBIEE Usage Tracking for OTBI and BICC Monitoring 6
Query Log Analysis 6
Guidelines for Analyzing Reports Performance 10
Best Practices for OTBI Dashboards and Reports Design 11
Dashboard Design Recommendations 11
Dashboard Design: Best Practices 11
Dashboard Prompts Recommendations 11
Choice of Logical Entities in Prompts 11
Use Default Prompt Values 12
Logical Aggregate Functions in Prompt Filters 13
Use ‘Calendar’ Inputs in Date Prompts 13
Use Text Field in Prompts 14
Use of Lookup Tables vs. Dimensions in Prompts 17
“OTBI HCM Prompts” Subject Area for Manager Prompts 17
Report Design Recommendations 17
Use Restrictive Filters to Constrain Result Set 17
Use Indexed Logical Attributes in Logical Joins and Filters 18
Eliminate Duplicate and Redundant Filters in Logical SQLs 18
Avoid Fetching Very Large Volumes from Database to OBIEE 18
Limit the Number of Logical Columns and ORDER BY Attributes in Reports 20
Review LOBs Usage in OTBI Reports 21
Reduce Multiple Logical Unions in OTBI Reports 22
Cross Subject Area Reports Considerations 22
Cross Fact Reports Recommendations 22
Use of OTBI Reports as Infolets in FA Embedded Pages 23
Considerations for Using Custom Extensible Attributes and Flex Fields in Logical Filters 24
Considerations for Using Hierarchical Attributes in Logical Filters 24
Employ DESCRIPTOR_IDOF Function in Logical Filters 24
Generated PSQL(s) with a Large Number In-List Values or BIND Variables Optimization 26
CONSTANT_OPTIMIZATION_LEVEL and CONSTANT_CASE_OPTIMIZATION_LEVEL Request Variables 27
Use REPORT_SUM/REPORT_COUNT Instead of REPORT_AGGREGATE Logical Functions 27
Use COUNT Instead of COUNT(DISTINCT) 29
EVALUATE vs. EVALUATE_ANALYTIC Considerations 30
Use of Logical Joins in OTBI Reports 30
Use of OBIEE CACHE Variables for Diagnostics in OTBI Reports 30
Use of OBIS_REFRESH_CACHE in OTBI Reports 30
Use of Database Hints to Optimize Generated Queries Plans 30
Use of MATERIALIZE Hint in Cross Subject Area and Cross Reports 31
Functional Design Patterns Affecting OTBI Reports Performance 32
"Time"."Fiscal Period" Performance Considerations in Financial Reports 32
Required Use of Logical UPPER() Function in Logical Filters 33
"Payroll - Payroll Run Results Real Time" Subject Area Recommendations 33
“Payroll - Payroll Balances Real Time” Subject Area Recommendations 34
Data Security Predicates Impact on OTBI Reports Performance 35
Security Predicate in OTBI: Performance Recommendations 35
Security Materialization in OTBI Reports 36
OTBI Export Limiters and Recommendations 36
ResultRowLimit 36
DefaultRowsDisplayedInDownloadCSV 36
DefaultRowsDisplayedInDownload 37
DefaultRowsDisplayedInDelivery 37
Oracle BI Cloud Connector Performance Considerations 37
BICC OTBI Metadata Independent Mode 37
BICC Performance Recommendations 37
TEMP and UNDO Tablespace Sizing 37
BICC Jobs Design Optimization for Better Performance 38
OTBI Performance Related Errors: Recommendations and Workarounds 39
Conclusion 41
INTRODUCTION
With Oracle Transactional Business Intelligence embedded analytics, role-based dashboards, and on-the-fly ad-hoc query
capabilities, business users have very powerful means for accessing, interpreting and analyzing real-time data in Oracle Fusion
Applications (FA). End users can put together sophisticated reports with various custom attributes, filters, join conditions and
analytic functions in just a few mouse clicks. It is very important to know how to design such dashboards, prompts, reports and ad-hoc
queries in OTBI, ensuring maximum performance and scalability, and minimum impact on Fusion Middleware and database
resources.
The complexity of generated OTBI content can come from:
• Custom dashboards and reports design
• Data Security Predicates (DSP)
• Functional requirements to report contents
• Custom Subject Areas
• Customizations and extensibility attributes
• Use of cross Subject Areas in a single report
• A combination of the factors above
Oracle Business Intelligence Cloud Connector (BICC) provides the functionality to extract data from Fusion Applications and load it
into Oracle Universal Content Management (UCM) Server or cloud storage in CSV format. It supports initial and incremental data
extracts, as well as primary key extracts for tracking deletes.
The document will cover each topic in more detail, with examples and recommendations on how to improve design and achieve better
performance. Some examples will refer to Advanced Logical SQL options, as well as generated physical query patterns and database
query plans, so the target audience is expected to know SQL basics and be able to read and understand sample execution plans.
Logical SQL Concepts
OBIEE uses its own internal language to describe reports, prompts and initialization blocks in the form of Logical SQL (LSQL). The
Logical SQL text uniquely identifies a report's business logic:
SET VARIABLE QUERY_SRC_CD='Report', SAW_SRC_PATH='/shared/Custom/Sample Report/Manager
Name Query', PREFERRED_CURRENCY='Local Currency';
SELECT "Compensation Manager"."Manager Name" saw_0
FROM "Compensation - Workforce Compensation Real Time"
WHERE "Compensation Manager"."Manager Name" = 'Joe Doe'
ORDER BY saw_0
• “SET VARIABLE” defines the report type (QUERY_SRC_CD), its stored catalog path (SAW_SRC_PATH) and other attributes.
It also includes any OBIEE hints and directives, defined as query PREFIX values in OBIEE Answers -> Advanced Tab ->
Prefix field. Some of the directives are discussed in this document.
• SELECT specifies all queried logical attributes. There may be more than one SELECT clause in a Logical SQL.
• FROM specifies the queried Subject Area. It may be used as a placeholder, while the SELECT clause pulls in attributes from a
completely different subject area. Refer to the discussion on Cross Subject Area usage and recommendations.
• The WHERE clause includes logical joins, filters and subqueries.
• ORDER BY is automatically generated by OBIEE and includes all SELECT attributes by default.
SELECT and WHERE clauses may contain OBIEE internal functions, logical UNION, MINUS or INTERSECT and other SQL
operands. This document covers a number of techniques that involve LSQL, so it is important to read and understand the basic LSQL
concepts.
Understanding OTBI Query Execution
OTBI Metadata Repository (RPD) logical model delivers facts and dimensions on top of Fusion Application Development Framework
(ADF) Business model across multiple Subject Areas (SA) in OBIEE Presentation layer. When a business user designs a report as a
logical star, i.e. joins a logical fact to one or more logical dimensions, OBIEE converts the designed report into a logical SQL (LSQL),
and then generates one or more corresponding physical SQLs (PSQL), which then get executed in a database. OTBI logical facts and
dimensions get expanded into complex joins using ADF View Objects (VO) defined in OTBI RPD Physical layer. BI Server generates
XML queries to BI Broker to translate these FA ADF VOs into SQLs that ultimately join FA database tables and views, producing
the PSQL(s) that get executed in the Fusion Apps OLTP database.
OTBI AND BICC PERFORMANCE MONITORING AND DIAGNOSTICS
This section provides the recommendations and guidelines for OTBI self-service monitoring and performance optimizations.
OBIEE Usage Tracking for OTBI and BICC Monitoring
OTBI has delivered OBIEE Usage Tracking (UT) in the database since FA release 19B. Every single logical SQL, whether it is a prompt,
report, ad-hoc query or BICC extract, is recorded into the OBIEE Usage Tracking tables in each FA POD. UT captures all runtime
metrics, such as timestamps, timing, row counts, logical and physical SQL text, and errors, that can be analyzed for performance or
usage anomalies, with potential issues or trends detected proactively in the FA environments.
Starting from FA release 20D, OTBI delivers two subject areas that can be used for creating reports to monitor OTBI usage and
performance:
• OTBI Usage Real Time: monitors OTBI usage, including user, analysis and dashboard, and subject area usage trends.
• OTBI Performance Real Time: monitors usage trends and OTBI analysis execution time, execution errors, and database
physical SQL execution statistics.
Refer to Oracle Customer Connect for more details and examples of OTBI Usage Tracking reports.
The example below shows a sample histogram report, combining reports and subject areas. From a quick glance at the number
of errors, total runs, and the runtime histogram, one can prioritize the reports for internal analysis.
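A runtime histogram like the one described above can also be sketched outside of OTBI. The Python sketch below is illustrative only: the report names, timings and the 5s/30s bucket boundaries are made-up assumptions, not part of the delivered Usage Tracking subject areas.

```python
from collections import Counter

# Illustrative only: bucket report runtimes (in seconds) into a coarse
# histogram, mimicking the Usage Tracking histogram report described
# above. Report names, timings and bucket boundaries are made up.
runs = {
    "Daily Alert - Supplier Registration": [1.2, 3.8, 25.1, 31.0],
    "Strategic Buyer Review": [0.4, 0.7, 0.9],
}

def histogram(times, bins=(5, 30)):
    """Count runs per bucket: '<5s', '5-30s', '>=30s'."""
    labels = ("<5s", "5-30s", ">=30s")
    counts = Counter()
    for t in times:
        # index = number of bucket boundaries the runtime meets or exceeds
        counts[labels[sum(t >= b for b in bins)]] += 1
    return dict(counts)

# Reports with many total runs and slow buckets get prioritized first.
for report, times in runs.items():
    print(report, "total runs:", len(times), histogram(times))
```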
• Usage Tracking can be used to extract the logical SQL text that uniquely defines each report, and to review the business logic,
check for valid filters, joins across subject areas, etc. The LSQL (and not the PSQLs) should always be the starting point, as it
provides the important metadata to BI Server to generate the PSQLs. Alternatively, you can locate and extract the LSQL in the
OBIEE nqquery.log(s) if they are still available on the POD.
• Review all runtime metrics, such as “Total time in BI Server”, “Logical Compilation Time” and each PSQL runtime, compute the
end-to-end runtime from timestamps, and check row counts for each PSQL and the LSQL. Both Usage Tracking and nqquery.log
capture these summaries. Refer to the detailed metric descriptions in the next section.
• Since BICC issues LSQLs for its data extracts, the same Usage Tracking can be used for monitoring BICC jobs too. All BICC
LSQLs are executed by a special system user, FUSION_APPS_OBIA_BIEE_APPID, that can be used for filtering extracts.
Note that “Total time in BI Server” may not account for the data fetching time. You can use start and end timestamps to
calculate more accurate extract run times in your monitoring reports.
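The timestamp arithmetic is straightforward. A minimal Python sketch, using the ISO 8601 timestamp format that nqquery.log events carry (the sample values are taken from the log excerpt later in this section):

```python
from datetime import datetime

# First and last event timestamps for one requestid, in the ISO 8601
# format used by nqquery.log (values from the sample log in this paper).
start_ts = "2019-01-30T00:30:36.273+00:00"
end_ts = "2019-01-30T00:31:00.590+00:00"

def end_to_end_seconds(start: str, end: str) -> float:
    """End-to-end runtime, including the fetch time that
    'Total time in BI Server' may not account for."""
    return (datetime.fromisoformat(end)
            - datetime.fromisoformat(start)).total_seconds()

print(end_to_end_seconds(start_ts, end_ts))  # 24.317
```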
Query Log Analysis
OBIEE generates detailed diagnostic events with the default second trace level [TRACE:2] and records them into its nqquery.log. This
trace level is sufficient for most diagnostic activities in FA. Some analyses may require the higher trace level 7, which can be set via the
“SET VARIABLE LOGLEVEL=7;” prefix. You can set it in OBIEE Answers -> Advanced tab -> Prefix section, when you design or optimize
your report.
Important! Make sure you remove LOGLEVEL=7 from your report prefix after you complete your performance analysis.
The logs are retained on each POD for the primary and high availability (HA) nodes for up to 10 days. If you have the Administrator role,
you can retrieve the same log from OBIEE Answers by navigating to the ‘Administration’ link -> ‘Manage Sessions’ -> Your session link. Below is
the summary example for a single report, extracted from nqquery.log with the detailed explanation of the important metrics and
events:
[2019-01-30T00:30:36.273+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:256:3] [tid: efa0700] [messageid: USER-0] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] ############################################## [[
-------------------- SQL Request, logical request hash:
d9cd7a2f
SET VARIABLE QUERY_SRC_CD='Report',SAW_DASHBOARD='/shared/Custom/S2S Reports/S2SR Team/AGENTS/ApOps/Daily/Supplier
Registration/Daily Alert - Supplier Registration',SAW_DASHBOARD_PG='Strategic Buyer Review',SAW_SRC_PATH='/shared/Custom/S2S
Reports/S2SR Team/AGENTS/ApOps/Daily/Supplier Registration/Strategic Buyer Review',PREFERRED_CURRENCY='Document Currency';SELECT
0 s_0,
"Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Area Question"."Responder Type" s_1,
"Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Detail"."Qualification Evaluation Date"
s_2,
"Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Detail"."Qualification Finalization Date"
s_3,
"Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Detail"."Qualification Outcome" s_4,
"Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Evaluated By"."Email Address" s_5,
"Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Owner"."Buyer Email Address" s_6,
"Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Owner"."Buyer Name" s_7,
"Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Response Detail"."Acceptance Date" s_8,
"Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Response Detail"."Response Date" s_9,
"Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Supplier Profile"."Supplier Name" s_10,
"Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Supplier Profile"."Supplier Number" s_11,
DESCRIPTOR_IDOF("Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Area Question"."Responder
Type") s_12
FROM "Supplier Qualification - Qualifications and Assessments Real Time"
WHERE
(("Qualification Response Detail"."Acceptance Date" IS NOT NULL) AND ("Qualification Detail"."Qualification Outcome" IS NULL)
AND (DESCRIPTOR_IDOF("Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Area
Question"."Responder Type") = 'INTERNAL'))
ORDER BY 1, 7 ASC NULLS LAST, 8 ASC NULLS LAST, 5 ASC NULLS LAST, 10 ASC NULLS LAST, 3 ASC NULLS LAST, 4 ASC NULLS LAST, 6 ASC
NULLS LAST, 9 ASC NULLS LAST, 12 ASC NULLS LAST, 11 ASC NULLS LAST, 2 ASC NULLS LAST, 13 ASC NULLS LAST
FETCH FIRST 500001 ROWS ONLY
]]
[2019-01-30T00:30:36.273+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:256:3] [tid: efa0700] [messageid: USER-23] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] -------------------- General Query Info: [[
Repository: Star, Subject Area: Core, Presentation: Supplier Qualification - Qualifications and Assessments Real Time
]]
[2019-01-30T00:30:36.312+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:256:5] [tid: efa0700] [messageid: USER-18] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] -------------------- Sending query to database named
oracle.apps.fscm.model.analytics.applicationModule.FscmTopModelAM_FscmTopModelAMLocal (id: <<1158231996>> SQLBypass Gateway),
connection pool named Connection Pool, logical request hash d9cd7a2f, physical request hash ed5d81bd: [[
<?xml version="1.0" encoding="UTF-8" ?>
<ADFQuery mode="SQLBypass" queryid="1547096721-2023840527" locale="en">
<Parameters>
<Parameter><Name><![CDATA[AOL_LANGUAGE]]></Name><Value><![CDATA[en]]></Value></Parameter>
<Parameter><Name><![CDATA[OTBI_CDS_ENABLED]]></Name><Value><![CDATA[false]]></Value></Parameter>
</Parameters>
<Projection>
<Attribute><Name><![CDATA[QualificationQualEvaluationDate]]></Name><ViewObject><![CDATA[FscmTopModelAM.PrcPoqPublicVi
ewAM.QualificationResponsesPVO]]></ViewObject></Attribute>
<Attribute><Name><![CDATA[QualificationQualCompletedDate]]></Name><ViewObject><![CDATA[FscmTopModelAM.PrcPoqPublicVie
wAM.QualificationResponsesPVO]]></ViewObject></Attribute>
...
...
...
</ViewCriteriaRow>
</ViewCriteria>
</DetailFilter>
</ADFQuery>
]]
[2019-01-30T00:30:36.525+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:256:5] [tid: efa0700] [messageid: USER-18] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] -------------------- Sending query to database named
oracle.apps.fscm.model.analytics.applicationModule.FscmTopModelAM_FscmTopModelAMLocal (id: <<1158232468>> SQLBypass Gateway),
connection pool named Connection Pool, logical request hash d9cd7a2f, physical request hash 174c60f0: [[
<?xml version="1.0" encoding="UTF-8" ?>
<ADFQuery mode="SQLBypass" queryid="1782285574-110497155" locale="en">
<Parameters>
<Parameter><Name><![CDATA[AOL_LANGUAGE]]></Name><Value><![CDATA[en]]></Value></Parameter>
<Parameter><Name><![CDATA[OTBI_CDS_ENABLED]]></Name><Value><![CDATA[false]]></Value></Parameter>
</Parameters>
<Projection>
<Attribute><Name><![CDATA[Meaning]]></Name><ViewObject><![CDATA[FscmTopModelAM.AnalyticsServiceAM.LookupValuesTLPVO]]
></ViewObject></Attribute>
<Attribute><Name><![CDATA[LookupCode]]></Name><ViewObject><![CDATA[FscmTopModelAM.AnalyticsServiceAM.LookupValuesTLPV
O]]></ViewObject></Attribute>
...
...
...
</ViewCriteriaRow>
</ViewCriteria>
</DetailFilter>
</ADFQuery>
]]
[2019-01-30T00:30:36.620+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:256:5] [tid: efa0700] [messageid: USER-18] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] -------------------- Sending query to database named FSCM_OLTP (id: <<1158231989>>),
connection pool named Connection Pool, logical request hash d9cd7a2f, physical request hash d91d1d65: [[
WITH
SAWITH0 AS (select T2521292.C277165028 as c3,
T2521292.C440229984 as c4,
T2521292.C104108913 as c5,
T2521292.C168651775 as c6,
T2521292.C398378115 as c7,
T2521292.C527336469 as c8,
T2521292.C402205937 as c9,
T2521292.C248158144 as c10,
T2521292.C203825585 as c11,
T2521292.C323161287 as c12,
T2521292.C464440102 as c13,
T2521292.C410075615 as c15
from
(SELECT V228191905.QUAL_EVALUATION_DATE AS C277165028,
...
...
...
from
SAWITH3 D901)
select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4, D1.c5 as c5, D1.c6 as c6, D1.c7 as c7, D1.c8 as c8, D1.c9 as c9,
D1.c10 as c10, D1.c11 as c11, D1.c12 as c12, D1.c13 as c13 from ( select distinct 0 as c1,
D1.c2 as c2,
D1.c3 as c3,
D1.c4 as c4,
D1.c5 as c5,
D1.c6 as c6,
D1.c7 as c7,
D1.c8 as c8,
D1.c9 as c9,
D1.c10 as c10,
D1.c11 as c11,
D1.c12 as c12,
D1.c13 as c13
from
SAWITH4 D1
order by c7, c8, c5, c10, c3, c4, c6, c9, c12, c11, c2, c13 ) D1 where rownum <= 500001
]]
[2019-01-30T00:30:36.621+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:256:5:3] [tid: 16620700] [messageid: USER-18] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] -------------------- Sending query to database named FSCM_OLTP (id: <<1158231989>>
pre query 0), connection pool named Connection Pool, logical request hash d9cd7a2f, physical request hash e9e71988: [[
BEGIN fnd_session_mgmt.attach_session('80A2F84E0AAE4340E05302313D0A7036'); END;
]]
[2019-01-30T00:31:00.586+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:256:5:3] [tid: 16620700] [messageid: USER-18] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] -------------------- Sending query to database named FSCM_OLTP (id: <<1158231989>>
post query 0), connection pool named Connection Pool, logical request hash d9cd7a2f, physical request hash d90563e6: [[
BEGIN fnd_session_mgmt.detach_session; END;
]]
[2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-34] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] -------------------- Query Status: Successful Completion
[2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-26] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] -------------------- Rows 2, bytes 65544 retrieved from database query id:
<<1158231996>> SQLBypass Gateway
[2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-28] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] -------------------- Physical query response time 0.210 (seconds), id <<1158231996>>
SQLBypass Gateway
[2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-26] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] -------------------- Rows 1, bytes 32772 retrieved from database query id:
<<1158232468>> SQLBypass Gateway
[2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-28] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] -------------------- Physical query response time 0.080 (seconds), id <<1158232468>>
SQLBypass Gateway
[2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-26] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] -------------------- Rows 46, bytes 570032 retrieved from database query id:
<<1158231989>>
[2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-28] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] -------------------- Physical query response time 23.964 (seconds), id
<<1158231989>>
[2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-29] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] -------------------- Physical Query Summary Stats: Number of physical queries 3,
Cumulative time 24.254, DB-connect time 0.002 (seconds)
[2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-24] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] -------------------- Rows returned to Client 46
[2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid:
005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-33] [requestid: cec70034] [sessionid:
cec70000] [username: sampleuser@oracle.com] -------------------- Logical Query Summary Stats: Elapsed time 24.322, Total time in
BI Server 24.321, Response time 24.322, Compilation time 0.352 (seconds), Logical hash d9cd7a2f
The important fields and metrics to check for are explained below:
• [requestid] uniquely identifies all events for a single report execution in nqquery.log.
• [username] captures the user who issued the report.
• <Logical request hash: value> uniquely identifies the report’s signature, which is its logical SQL text. It is stamped at the
beginning and the end of the report log.
• Start and end timestamps can be used to calculate the total time, which includes the data fetch time. This calculation is very
important for reports producing large row counts, or for BICC extracts extracting and uploading the data. OBIEE will keep its
cursor open and wait for its consumer, BICC for example, until it has fetched all the records. This ‘fetch’ time may not be
reflected in ‘Total Time in BI Server’.
• SAW_DASHBOARD and SAW_DASHBOARD_PG define the catalog path for the stored dashboard and the dashboard page
name hosting a report.
• SAW_SRC_PATH is the catalog path for the customer’s report.
• The Logical SQL starts with the <SET VARIABLE> clause, which includes OBIEE directives and variables defined in the Advanced
Prefix section. Refer to more details on VARIABLEs in the sections below. The logical SQL text defines the report business logic for
the defined report.
• BI Server generates two types of physical queries (PSQLs), and prints the complete XML or SQL text:
− XML ADFQuery type, which connects to the WLS connection pool; each ADFQuery is associated with its unique ID
<<…>> SQLBypass Gateway.
− Database SQL type, which connects to the Oracle database, also associated with its unique ID <<…>>. Note it does not
have SQLBypass Gateway; it is executed against the (CRM/HCM/FSCM/GRC)_OLTP database.
− BI Server issues pre- and post- SQLs to connect to the FA database, attaching and then detaching via the FND API. Note
that the detach call is logged with a much later timestamp, though it comes right after the attach call in the log.
• BI Server prints several summary lines at the end of the report execution:
− Each <Physical query response time (seconds)> is the OBIEE-reported time for a single query, issued to the WLS
connection pool to read the ADF SQL query block, or to the FA DB to retrieve the results from the database. You can identify
the query by its OBIEE SQL ID. Note that the query response time does not reflect the consumer wait time to fetch the
data.
− Each Physical query <Rows> retrieved from the WLS or %_OLTP (DB) connection pools.
− <bytes retrieved> is the OBIEE-estimated amount of fetched data, based on the maximum row size. This metric helps to
measure the volume impact from heavy reports or BICC extracts data fetching.
• <Physical Query Summary Stats> shows:
− <Number of physical queries> is the total number of XML ADFQueries and DB SQLs; in the example, the total of 3 reflects
2 ADFQueries and one Oracle DB SQL.
− <Cumulative time> is the sum of all PSQL query runtimes. Note that the queries run in parallel, so the sum would not
be representative of the total time spent in WLS or the Oracle DB.
− <DB-connect time> is the time to establish the DB connection, typically a tiny fraction of the total time.
− <Rows returned to Client> is the number of rows produced by the report.
• <Logical Query Summary Stats> shows:
− <Elapsed time> is the lifetime of the OBIEE cursor. It is not a representative metric for report runtime.
− <Total Time in BI Server> is the best runtime metric to use for measuring report runtime. It includes all runtimes
from all XML queries and DB SQLs, as well as the Logical SQL compilation time. Remember that it does not include the
data fetching waits from clients such as BICC, so you need to rely on timestamps to compute the total time for
such heavy jobs.
− <Compilation Time> is another important metric. It includes BI Server time on XML ADFQueries and post-processing
time in BI Server, when it joins multiple PSQL result sets or applies non-pushed functions to the data.
This metric should be watched very closely in an FA OBIEE environment.
• The end timestamp completes with the same unique Logical hash value <…>.
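Because the summary lines above follow stable text patterns, they can be scraped for self-service monitoring. Below is a minimal sketch that assumes the line formats shown in the sample log; real nqquery.log lines carry additional bracketed fields and may wrap, so a production parser would need to handle those.

```python
import re

# Sample summary lines in the format shown in the nqquery.log excerpt
# above (real log lines carry extra bracketed fields and may wrap).
log = """\
Physical query response time 0.210 (seconds), id <<1158231996>> SQLBypass Gateway
Physical query response time 0.080 (seconds), id <<1158232468>> SQLBypass Gateway
Physical query response time 23.964 (seconds), id <<1158231989>>
Rows returned to Client 46
Logical Query Summary Stats: Elapsed time 24.322, Total time in BI Server 24.321, Response time 24.322, Compilation time 0.352 (seconds), Logical hash d9cd7a2f
"""

# (query id, runtime in seconds) for every PSQL reported in the log
phys = [(m.group(2), float(m.group(1))) for m in re.finditer(
    r"Physical query response time ([\d.]+) \(seconds\), id <<(\d+)>>", log)]
total = float(re.search(r"Total time in BI Server ([\d.]+)", log).group(1))
rows = int(re.search(r"Rows returned to Client (\d+)", log).group(1))

# The slowest PSQL is the one to analyze first in the database.
slowest_id, slowest_time = max(phys, key=lambda p: p[1])
print(slowest_id, slowest_time, total, rows)  # 1158231989 23.964 24.321 46
```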
Guidelines for Analyzing Reports Performance
When analyzing query log for reports performance, use the following guidelines:
1. Review Logical SQL for <SET VARIABLE> clause to check for any non-standard variables and directives. They are defined in
Advanced tab in Prefix section.
Important! The VARIABLE values could have significant impact on SQL generation logic. They should not be applied unless
you understand the impact and would like to change the report generation to improve report’s performance. This document
provides the guidance and explanation of the critical variables, that may be applicable to OTBI reports.
2. The Logical SQL (LSQL) text is a very important source for analysis: the use of logical attributes and logical functions, the lack or presence of comprehensive filters in a report, implicit use of cross Subject Areas, etc. While you cannot rewrite the generated Physical SQL in the database, you can use LSQL to change the generation logic and improve your report performance. In some cases you may consider using the Advanced tab option to manually rewrite the logical SQL text.
Important! Always start by inspecting your logical SQL text. Do not rush to optimize your PSQL text or tune its query execution plan in the database. Logical SQL should be carefully scrutinized before moving on to PSQL analysis.
3. Review the physical SQL(s) generated by OBIEE. Note that OBIEE may generate more than one physical SQL for your report, so use the query log or Usage Tracking stats to identify the most expensive PSQL for your review. Remember that you cannot manually rewrite the physical SQL text, but you can use the database query explain plan to ensure the DB optimizer picks the correct join order, pushes predicates, applies filtering, etc. In some cases, you may want to influence the SQL query plan by applying database hints to your report. This option is covered in detail in this document.
BEST PRACTICES FOR OTBI DASHBOARDS AND REPORTS DESIGN
OTBI Analytics is extensively integrated into Fusion Applications. The end users can execute the analytic reports directly embedded
into Fusion UI pages, run custom dashboards with many OTBI reports, use dashboard prompts for BI Publisher queries, execute
integration reports using Web Services APIs, etc. The power users can run ad-hoc queries directly from OBIEE Answers or use BI
Cloud Connector (BICC) to extract the data from FA VOs, available from BICC Admin Console’s enabled offerings. This chapter will
discuss high level recommendations for using OTBI in Fusion.
Dashboard Design Recommendations
This section covers the best practices and guidelines for designing OTBI Dashboards. Proper dashboard design helps minimize performance workload and improve the end user experience in Fusion Applications environments.
Dashboard Design: Best Practices
4. Design your landing dashboard page very carefully, as it will be used the most and should not incur performance overhead from any heavy reports.
5. OBIEE Dashboards allow creating multiple pages. When designing a multi-page dashboard, do not put too many pages in one dashboard. Typically, dashboards consolidate functional report contents into five pages or fewer. All pages should be visible and clearly labeled, so end users can pick the right page without unnecessarily clicking through every single page (and triggering report executions).
6. Define a reasonable number of reports per dashboard page, and avoid scrolling in your pages. Typically, six reports or fewer make good use of the dashboard space while presenting the relevant information. OBIEE executes the reports in parallel, so a single dashboard page with many reports can generate a significant workload.
Important! If you put a large number of reports on the same dashboard, OBIEE makes multiple parallel calls to BI Broker to translate ADF VOs into SQL queries. Such high concurrent load manifests in longer XML queries to generate ADF VO SQL definitions, adding to the reports’ total runtime, causing higher resource usage on the WLS servers that host BI Broker, and impacting regular UI flows in FA.
7. Carefully design dashboard reports, finding the optimal balance between the complexity of a single report and the number of reports on a dashboard page.
8. Consolidate reports with similar report logic on the same dashboard page into as few items as possible.
9. Consider placing reports with drastically different runtimes on separate dashboard pages.
10. Do the due diligence to group logically related reports together on the same dashboard page to improve usability and minimize unnecessary clicks through other dashboards and pages, as every single page click triggers more report executions.
11. DO NOT include long-running reports on commonly used dashboard pages, and do not embed them into FA UI pages. Instead, expose such reports as links off the dashboards or UI pages to limit their execution to interested parties only. You may consider using BI Delivers to schedule and deliver such reports to the targeted users instead of running them online.
12. Design your dashboard to be as interactive as possible, using column selectors, drill downs and guided navigation.
13. Avoid using filters that are based on the output of other reports on a dashboard.
14. Use sub-totals and grand totals in reports wisely. Each total value results in an additional level of aggregation and may have an impact on report performance.
Dashboard Prompts Recommendations
OTBI Dashboard Prompts that select a List of Values (LOV) often produce their own LSQLs, which generate PSQLs to render the desired values. Poorly designed prompts can result in very heavy, slow-performing PSQLs that produce long lists of values, consume significant resources on both the BI and database tiers, and cause frustration for end users. Review the following guidelines for dashboard prompt design.
Choice of Logical Entities in Prompts
Design dashboard prompts to generate efficient logical queries. Explore alternative options to pull prompt values from lighter logical entities such as dimensions and lookups instead of facts. Additionally, you can explore alternative subject areas for your prompts.
Example 1. To create a prompt with a list of direct and indirect employees in your reporting hierarchy, consider selecting the names from Resource Hierarchy and applying a filter by login. That results in a more optimal LSQL with a single logical dimension:
SELECT
0 s_0,
"Sales - CRM Resource"."Resource Hierarchy"."Current Base Resource Name" s_1
FROM "Sales - CRM Resource"
WHERE
("Resource Hierarchy"."User Organization Hierarchy Based Login"
= VALUEOF(NQ_SESSION."USER_PARTY_ID"))
The less efficient option to query data from two separate dimensions would add an implicit logical fact to the generated query and
take much longer to extract the values for the same prompt:
SELECT
"Sales - CRM Resource"."Employee"."Employee Name" s_1
FROM "Sales - CRM Resource"
WHERE
("Resource Hierarchy"."User Organization Hierarchy Based Login"
= VALUEOF(NQ_SESSION."USER_PARTY_ID"))
Example 2. Rather than using the “Sales – CRM Asset” SA to query all employees for a specific sales country, the implementer could retrieve the same list of values much faster from “Sales – CRM Asset Contact”:
SELECT
0 s_0,
"Sales - CRM Asset Contact"."Contact"."Full Name" s_1
FROM "Sales - CRM Asset Contact"
ORDER BY 1, 2 ASC NULLS LAST
FETCH FIRST 75001 ROWS ONLY
Vs. less optimal LSQL for the same prompt:
SELECT
0 s_0,
"Sales - CRM Asset"."Contact"."Full Name" s_1
FROM "Sales - CRM Asset"
ORDER BY 1, 2 ASC NULLS LAST
FETCH FIRST 75001 ROWS ONLY
Example 3. Avoid querying large HCM transactional tables that fetch all employee records when retrieving "Supervisor Full Name" in your HCM prompts. The following LSQL restricts the data to fetch only department managers and delivers better performance:
SELECT
"Department"."Supervisor Full Name",
"Department"."Department Name"
FROM "Workforce Management - Worker Assignment Real Time"
FETCH FIRST 65001 ROWS ONLY
While the original query would hit a large HCM transaction table, fetching all employee data for all Department Managers and delivering worse performance:
SELECT
"Department"."Supervisor Full Name"
FROM "Workforce Management - Worker Assignment Event Real Time"
ORDER BY 1
FETCH FIRST 65001 ROWS ONLY
Use Default Prompt Values
Always define default values for your prompts.
Pick as the default the value most commonly selected by end users.
Use a dummy value if you cannot identify the most common one. The reports will then produce no data until the end users choose the right values.
Consider customizing the ‘No Results’ message in the report(s) Analysis Properties to tell users to choose the right prompt values.
For example:
Logical Aggregate Functions in Prompt Filters
Avoid using aggregate functions in logical filters, as they may not be pushed to physical tables. For example, the use of
MAX("Activity"."Actual Start Date") would not be pushed to the physical table ZMM_ACTY_ACTIVITIES below:
SELECT "Customer"."Customer Unique Name" saw_0 FROM "Sales - CRM Sales Activity"
WHERE MAX("Activity"."Actual Start Date") BETWEEN timestamp '2017-01-01 0:00:00' AND
timestamp '2017-01-24 00:00:00' ORDER BY saw_0 FETCH FIRST 65001 ROWS ONLY
The generated PSQL shows two WITH factored subqueries, with MAX(d1.c3) pushed into the second WITH query block and the
MAX filter value applied in the HAVING clause, instead of filtering the records directly from the ZMM_ACTY_ACTIVITIES table.
WITH sawith0 AS (
SELECT ...
FROM zmm_acty_activities activitypeo
... ) t1931714
),sawith1 AS (
SELECT
MAX(d1.c3) AS c2,
d1.c1 AS c3
FROM sawith0 d1
GROUP BY d1.c1
HAVING MAX(d1.c3) BETWEEN TO_DATE('2017-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS') AND
TO_DATE('2017-01-24 00:00:00','YYYY-MM-DD HH24:MI:SS')
)...
WHERE ROWNUM <= 65001
If you have a functional requirement to employ such aggregated filters, then consider adding non-aggregated filters as well to
constrain the result set in your query.
Note: the same recommendation applies to prompts as well as regular reports with aggregate logical filters.
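Building on the MAX filter example above, one possible sketch of adding a non-aggregated filter is shown below; the extra date predicate is illustrative, and its exact value depends on your functional requirement:

```sql
SELECT "Customer"."Customer Unique Name" saw_0
FROM "Sales - CRM Sales Activity"
WHERE MAX("Activity"."Actual Start Date") BETWEEN timestamp '2017-01-01 00:00:00'
      AND timestamp '2017-01-24 00:00:00'
  -- The non-aggregated filter below can be pushed into the WITH query block,
  -- pruning rows from ZMM_ACTY_ACTIVITIES before the HAVING clause is applied:
  AND "Activity"."Actual Start Date" >= timestamp '2016-07-01 00:00:00'
ORDER BY saw_0 FETCH FIRST 65001 ROWS ONLY
```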
Use ‘Calendar’ Inputs in Date Prompts
Consider using a ‘Calendar’ input instead of generating a ‘Choice List’.
The ‘Choice List’ results in a more expensive LSQL that queries the database:
SET VARIABLE QUERY_SRC_CD='ValuePrompt';SELECT "Person"."Person Date Of Birth" saw_0 FROM
"Workforce Management - Person Real Time" ORDER BY saw_0 FETCH FIRST 65001 ROWS ONLY
compared to the use of the more efficient ‘Calendar’ input below:
Use Text Field in Prompts
Consider using a ‘Text Field’ user input when you build prompts on a complex logical model, such as secured logical hierarchies. If
you implement such prompts via a ‘Choice List’, you may end up with a very expensive hierarchical PSQL without any filters, running
for a long time and not scaling for multiple business users. ‘Text Field’ prompts should comply with security requirements and have
input validation before OBIEE runs the prompt LSQL and corresponding PSQL. You can ensure such validation by implementing a
hidden second prompt. Refer to the example below for implementing a ‘Text Field’ prompt based on the Compensation Manager Hierarchy.
For example, a ‘Choice List’ prompt using the Compensation Manager Hierarchy results in the following logical SQL without any
constraining logical filters:
SELECT "Compensation Manager"."Manager Name" saw_0,
DESCRIPTOR_IDOF("Compensation - Workforce Compensation Real Time"."Compensation
Manager"."Manager Name") saw_1 FROM "Compensation - Workforce Compensation Real Time"
ORDER BY saw_0 FETCH FIRST 65001 ROWS ONLY
Such a query can take a long time to run, especially with multiple levels in the hierarchy. The same prompt can be designed to use a
Text Field input and a second, hidden prompt that validates the input using the security enforced for this hierarchy in OTBI:
1. Create a ‘Text Input’ Prompt for Compensation Manager Prompt
− Prompt for: Presentation Variable (p_open_txt)
− Label: Compensation Manager
− User Input: Text Field
− Variable Data Type: Default (Text)
− Default selection: SQL Results
− Enter SQL Statement to generate the list of values:
SELECT "Compensation Manager"."Manager Name"
FROM "Compensation - Workforce Compensation Real Time"
WHERE "Worker"."Person ID" = VALUEOF(NQ_SESSION.PERSON_ID_HCM) FETCH FIRST 1 ROWS ONLY
Refer to the screenshot below:
2. Create Hidden ‘Choice List’ Prompt
Compensation Manager Prompt (Hidden) validates the value entered into the ‘Text Field’ prompt example above. The validated
display value is passed into another presentation variable, p_val_var. Even though it is defined as a ‘Choice List’, it will have a
single value, because the logical SQL has an added filter in it.
− Prompt for Column: "Compensation Manager"."Manager Name"
− Label: Manager Name
− Choice List Values: SQL Results
− Enter SQL Statement to generate the list of values:
SELECT "Compensation Manager"."Manager Name" FROM "Compensation - Workforce Compensation
Real Time" Where "Compensation Manager"."Manager Name" = '@{p_open_txt}{" "}' FETCH FIRST
1 ROWS ONLY
− Default selection: SQL Results
− Enter SQL Statement to generate the list of values:
SELECT "Compensation Manager"."Manager Name" FROM "Compensation - Workforce Compensation
Real Time" WHERE "Compensation Manager"."Manager Name" = '@{p_open_txt}{" "}' FETCH
FIRST 1 ROWS ONLY
Set a variable: Presentation Variable (p_val_var)
Refer to the screenshot below:
3. Update the Prompted Filter for an affected report to depend on the variable p_val_var:
To prevent rendering incorrect results when blank values, space(s) or ‘%’ are entered in the ‘Text Field’ prompt, add "AND
@{p_open_txt}['@']{' '} is prompted" to the report criteria, as in the sample report below:
Criteria Filter Details:
"Compensation Manager"."Manager Name" IN (@{p_val_var}['@']{' '})
AND @{p_open_txt}['@']{' '}
Refer to the screenshot below:
And the generated logical SQLs include the logical filter for Manager Name:
SET VARIABLE QUERY_SRC_CD='DashboardPrompt'; SELECT "Compensation Manager"."Manager Name"
FROM "Compensation - Workforce Compensation Real Time"
Where "Compensation Manager"."Manager Name" = 'Joe Doe' FETCH FIRST 1 ROWS ONLY
SET VARIABLE QUERY_SRC_CD='DisplayValueMap',PREFERRED_CURRENCY='Local Currency';SELECT
DESCRIPTOR_IDOF("Compensation Manager"."Manager Name") saw_0,
"Compensation Manager"."Manager Name" saw_1,
DESCRIPTOR_IDOF("Compensation - Workforce Compensation Real Time"."Compensation
Manager"."Manager Name") saw_2
FROM "Compensation - Workforce Compensation Real Time"
WHERE "Compensation Manager"."Manager Name" = 'Joe Doe' ORDER BY saw_0, saw_1
Use of Lookup Tables vs. Dimensions in Prompts
OTBI prompts may deliver better performance if forced to use lookup tables over dimensions. OBIEE allows ‘SET VARIABLE
OBIS_VALUE_PROMPT_LOOKUP_DIRECT_ACCESS=1;’ to influence OBIEE’s choice towards using the lookup table (value 1). This option
applies only to prompt-related LSQLs (QUERY_SRC_CD = ValuePrompt or DisplayValueMap, as seen in nqquery.log) with
columns traced to a single lookup table.
Important! Make sure you verify functional equivalency and carefully benchmark the use of the variable in your prompt in a test
environment before you enable it in production.
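A minimal sketch of the prefix usage, assuming a prompt column that traces to a single lookup table (the subject area and column here are illustrative, not taken from this document):

```sql
-- Value 1 directs OBIEE to source the prompt values from the lookup table
SET VARIABLE OBIS_VALUE_PROMPT_LOOKUP_DIRECT_ACCESS=1;
SELECT "Worker"."Assignment Status" saw_0
FROM "Workforce Management - Worker Assignment Real Time"
ORDER BY saw_0 FETCH FIRST 65001 ROWS ONLY
```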
“OTBI HCM Prompts” Subject Area for Manager Prompts
When you build prompts to pull ‘Manager’ values in your HCM OTBI contents, the “OTBI HCM Prompts” subject area should be your
first choice before exploring other options. This subject area has been designed to simplify HCM ‘Manager’ prompt design and deliver
improved performance.
The example below uses a prompt query to retrieve Manager Name:
SELECT
"Manager"."Name"
FROM
"Workforce Management - Worker Assignment Event Real Time"
ORDER BY 1
Instead, you can use "OTBI HCM Prompts" to select Manager Name from "Assignment Manager List" unsecured dimension:
SELECT
"Assignment Manager List Unsecured"."Manager Name",
descriptor_idof("Assignment Manager List Unsecured"."Manager Name")
FROM
"OTBI HCM Prompts"
FETCH FIRST 65001 ROWS ONLY
Report Design Recommendations
OBIEE offers maximum ease and flexibility in creating various reports with complex measures and computations. The OTBI logical
model is optimized to deliver fast performance for customer reports and ad-hoc queries in Answers. There may be cases, however, when
poorly designed reports with unnecessarily sophisticated logic generate very heavy database SQLs and deliver sub-optimal
performance. Review the guidelines and recommendations below for ensuring fast performance, and keep them in mind when you
build your reports in OTBI.
Use Restrictive Filters to Constrain Result Set
1. When you create an OBIEE report, always pick effective logical filters that produce the desired numbers and prevent
generating very high row counts. End users may not even be aware that their reports fetch large volumes, and may start drill
downs or switch to other reports after reviewing only the first set of records.
Important! Do not create open-ended reports with weak filters or no filters in place. If needed, use default values, which can
be passed as filter values to such reports.
2. To prevent reports with runaway row counts, OTBI applies a default limit fetching the first 75,000 rows (the value varies by FA
POD shape), thereby yielding better performance. This limit is less effective than logical filters, since it is pushed down to the
database by appending ROWNUM <= 75001 to the WHERE clause. As a result, it is applied as the very last step in the SQL
execution plan, whereas filter predicates are employed in much earlier steps of the query plan.
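The difference can be sketched in the shape of the generated PSQL (pseudo-SQL with illustrative table names, not an actual OTBI query):

```sql
WITH sawith0 AS (
  SELECT ...
  FROM   fact_table f
  JOIN   dim_table d ON d.dim_id = f.dim_id
  WHERE  d.attribute = :filter_value   -- logical filter: applied early, prunes rows
)
SELECT * FROM sawith0
WHERE ROWNUM <= 75001                  -- row limit: applied as the very last step
```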
3. Another common pattern comes from row explosion, when the database optimizer picks a SQL execution plan that joins the
participating tables with one of its steps producing a row explosion, even though the final SQL result remains small. Such
intermediate row explosion directly affects SQL performance, causing heavy database I/O. It is not simple to detect and
diagnose such impact, as OBIEE does not report database logical reads; such analysis is possible only by DBAs with direct
database access generating the SQL execution plan. Applying effective filters can directly influence the database
optimizer’s choice of SQL execution plan, allowing it to correctly estimate join cardinalities and pick a more efficient execution
path without row explosion. More techniques and options for influencing the query execution plan in OTBI reports are
discussed in this document.
4. You may also consider other technologies, such as BI Cloud Connector (BICC) or HCM Extract in HCM Cloud, to extract much
larger volumes instead of pulling the same numbers via OTBI.
5. Avoid using functions on a filter column, unless there is a corresponding function-based index for the mapped column in the database.
6. Always choose ID columns for your logical filters, or use DESCRIPTOR_IDOF on a filter, so that OBIEE picks the appropriate
ID column in the physical query. Refer to the section discussing the use of DESCRIPTOR_IDOF for more details.
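A hedged sketch of the difference (the column and values are illustrative): filtering on the display value targets the descriptive column, while DESCRIPTOR_IDOF lets OBIEE substitute the underlying, typically indexed, ID column:

```sql
-- Filter on the display value: the PSQL filters the descriptive column
WHERE "Activity"."Activity Status" = 'Open'

-- Filter via DESCRIPTOR_IDOF: the PSQL filters the mapped ID column instead
WHERE DESCRIPTOR_IDOF("Activity"."Activity Status") = 'OPEN'
```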
7. Note that there still may be valid functional reasons for fetching larger row counts in OTBI reports and exporting them into other
formats.
Use Indexed Logical Attributes in Logical Joins and Filters
It is important to use logical attributes in the WHERE clause that are mapped to database columns with supporting
indexes in place. While you can use virtually any presentation column for logical joins or filters, do the due
diligence to find the indexed columns when designing your reports.
Some presentation attributes may transform into complex calculations, or map to database functions and other expressions,
coming from the OTBI logical model or ADF VO transient attributes. Unless you confirm the availability of supporting
function-based indexes that match the identified expression or function, avoid using those in your logical WHERE clause.
You can use the ‘Administration’ link -> ‘Manage Sessions’ -> ‘Your test report’ link in OBIEE Answers to check the
generated physical SQL and identify the mapped physical columns and their expressions.
Periodically review the FA “Data Lineage” guides on Oracle Customer Connect, posted after every FA release, to understand the
logical business model, trace presentation attributes to physical tables and their columns, and check for existing indexes to
ensure fast OTBI report performance.
Eliminate Duplicate and Redundant Filters in Logical SQLs
1. The use of duplicate logical filters can cause additional performance impact, resulting in unnecessary joins to physical tables
and sub-optimal execution plans in the database. For example:
AND (TIMESTAMPDIFF(SQL_TSI_DAY, MAX("Activity"."Actual End Date"), NOW()) > 90)
AND ("Activity"."Actual End Date" < VALUEOF("CURRENT_DAY")))
The second logical filter, "Activity"."Actual End Date" < VALUEOF("CURRENT_DAY"), results in an extra join to the
calendar table. This redundant logic can be safely removed from the report.
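After removing the redundant predicate, only the first filter from the example above remains:

```sql
AND (TIMESTAMPDIFF(SQL_TSI_DAY, MAX("Activity"."Actual End Date"), NOW()) > 90)
```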
2. Filters must be applied only once when you are using reports as filters. A logical filter should be removed from all ‘filter’ reports if
the same filter is already defined in the main ‘sub-query’ report. By eliminating such redundancy, you will end up with a less
complicated and better performing generated PSQL.
Avoid Fetching Very Large Volumes from Database to OBIEE
There are several types of reports which can trigger fetching larger amounts of data from the database to OBIEE for subsequent data
manipulation on the OBIEE tier before delivering the final results:
Reports containing logical functions which cannot be pushed into the database. For example, the use of LENGTH() or RAND() in
the logical SQL definition may result in BI Server generating multiple sub-requests with corresponding PSQLs, executing the
PSQLs in the database, fetching ALL records to the OBIEE tier and ‘stitching’ the results there. Consider using CHAR_LENGTH()
instead of LENGTH() to avoid breaking a physical SQL into multiple queries.
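A minimal sketch of the substitution (the column is illustrative, not taken from this document):

```sql
-- May be post-processed on the OBIEE tier on pre-12c releases:
WHERE LENGTH("Activity"."Description") > 100

-- Pushed down into the generated PSQL:
WHERE CHAR_LENGTH("Activity"."Description") > 100
```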
Note: starting with version 12c, OBIEE changed the LENGTH() logic. It now pushes the logic into the generated SQL query in the
database and no longer triggers generating multiple PSQLs.
Reports with cross facts implicitly joined via Full Outer Join (i.e. BI Server joins logical sub-requests and picks a full outer
join in its execution plan) may benefit from the PERF_PREFER_INTERNAL_STITCH_JOIN feature (turned on in the RPD),
which forces BI Server to break a report into multiple sub-requests, generate multiple PSQLs and fetch ALL
records for ‘stitching’ on the OBIEE tier. To avoid the overhead from fetching too much data to BI Server, make sure you use
effective filters in such reports.
Reports containing CLOBs that are poorly constrained may produce significant load on the system from fetching large volumes.
Certain report logic may trigger implicit post-processing on the OBIEE tier, which requires fetching ALL rows from the database to
OBIEE. The best indicator for such a scenario is a logical SQL with ‘FETCH FIRST X ROWS’ (default 75,000) whose single
physical SQL lacks a ‘WHERE ROWNUM <= X’ predicate. For example, a logical SQL has:
SELECT
0 s_0,
...
IFNULL("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted Amount
Debit",0)
-IFNULL("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted
Amount Credit",0) s_43,
REPORT_AGGREGATE(IFNULL("General Ledger - Journals Real Time"."- Lines"."Journal Line
Accounted Amount Debit",0)
-IFNULL("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted
Amount Credit",0) BY ... s_44,
REPORT_SUM("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted
Amount Credit" BY ... s_45,
REPORT_SUM("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted
Amount Debit" BY ... s_46
FROM "General Ledger - Journals Real Time"
WHERE
...
FETCH FIRST 75001 ROWS ONLY
However, the generated PSQL did not have WHERE ROWNUM <= 75001:
WITH sawith0 AS (
SELECT
...
FROM
sawith1 d1
ORDER BY ...;
As a result, the report was failing with the error:
[nQSError: 43113] Message returned from OBIS.
[nQSError: 43119] Query Failed: [nQSError: 46168] RawFile::checkSoftLimit failed because
a temporary file exceeds the limit (10240 MB) specified by the NQ_LIMIT_WRITE_FILESIZE
environment variable.
The analysis of the report execution plan in the log (requires loglevel=7) shows:
ifnull(sum_SQL99(D1.c43 by ...) as c45 [for database 0:0,0],
sum_SQL99(D1.c42 by ...) as c46 [for database 0:0,0],
sum_SQL99(D1.c43 by ...) as c47 [for database 0:0,0]
Child Nodes (RqJoinSpec): <<...>> [for database 0:0,0]
...
Child Nodes (RqJoinSpec): ...
...
RqList <<...>> [for database 3023:1418642:FSCM_OLTP,57]
sum(D1.c32 by ...) as c1 [for database 3023:1418642,57],
sum(D1.c34 by ...) as c2 [for database 3023:1418642,57],
Per the OBIEE logical query execution plan, REPORT_AGGREGATE was transformed into SUM_SQL99 and executed in OBIEE. In this
example, replacing REPORT_AGGREGATE with REPORT_SUM results in all three functions being pushed into the database:
RqList <<...>> [for database 3023:1418642:FSCM_OLTP,57]
sum(D1.c46) over (partition by D1.c17, ...) as c46,
sum(D1.c47) over (partition by D1.c17, ...) as c47,
sum(D1.c48) over (partition by D1.c17, ...) as c48
And the ROWNUM filter now appears in the generated PSQL:
WITH sawith0 AS (
SELECT
...
FROM
SAWITH3 D1 ) D1 where rownum <= 75001
As a result of fetching too much data to BI Server, you may run into the TEMP file size limit (default 10 GB) on the OBIEE tier and get
the nQSError:46168 error, as shown in the example above:
[nQSError: 46168] RawFile::checkSoftLimit failed because a temporary file exceeds the
limit (10240 MB) specified by the NQ_LIMIT_WRITE_FILESIZE environment variable.
NQ_LIMIT_WRITE_FILESIZE is an important limiter, defining the upper bound for TEMP space size on the OBIEE tier, enforced in
Fusion Applications. A larger value may be justified only for customers who execute Cloud Extractor and require higher
OBIEE TEMP usage. The same error message observed during OTBI report execution is an indicator of too much data being pushed to
OBIEE, and of the need to review the report design and logic.
Limit the Number of Logical Columns and ORDER BY Attributes in Reports
If you design a report with multiple logical columns, make sure you review its generated logical SQL and carefully inspect its logical
ORDER BY clause. Typically, the generated ORDER BY list includes all applicable columns; with many of them included in the
report, as in the example below that produced 116 ORDER BY attributes, the generated SQL’s execution plan will
have a very expensive SORT operation across multiple columns. If your report produces a very large row count, this very last SORT
operation can cause additional unnecessary overhead.
Carefully inspect the generated ORDER BY clause, and use the ‘Advanced’ tab to edit the LSQL, pruning the ORDER BY clause to the
list of logical columns that address the report’s functional requirements.
Consolidate logical attributes from as few subject areas as possible to mitigate or eliminate the complexity of using
cross SAs.
DO NOT load your report with a large number of logical columns, as they may get converted into fairly complex physical
expressions, causing poor report performance.
Remove logical columns excluded in ‘Edit View’, unless they are included in your report for a reason.
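As a sketch, the ‘Advanced’ tab edit might prune a generated ORDER BY clause to just the columns users actually need sorted (the column positions here are illustrative):

```sql
-- Generated: ORDER BY 1, 20 ASC NULLS LAST, 87 ASC NULLS LAST, ... (116 attributes)
-- Pruned to the functionally required sort columns:
ORDER BY 2 ASC NULLS LAST, 3 ASC NULLS LAST
FETCH FIRST 75001 ROWS ONLY
```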
The example below shows such a badly designed report, with both a large number of columns and a heavy ORDER BY clause:
SELECT
0 s_0,
"Environment Health and Safety - Incidents Real Time"."Incident Event Additional
Details"."Incident Event Created By" s_1,
"Environment Health and Safety - Incidents Real Time"."Incident Event Additional
Details"."Incident Event Creation Date" s_2,
"Environment Health and Safety - Incidents Real Time"."Incident Event Additional
Details"."Incident Event Last Update Date" s_3,
"Environment Health and Safety - Incidents Real Time"."Incident Event Additional
Details"."Incident Event Last Update Login" s_4,
...
...
SORTKEY("Environment Health and Safety - Incidents Real Time"."Injury or
Illness"."Activity Description") s_114,
SORTKEY("Environment Health and Safety - Incidents Real Time"."Vehicle
Incident"."Third Party Details") s_115
FROM "Environment Health and Safety - Incidents Real Time"
WHERE
((DESCRIPTOR_IDOF("Environment Health and Safety - Incidents Real Time"."Incident Event
Details"."Incident Event Type") = '...') AND
(DESCRIPTOR_IDOF("Environment Health and Safety - Incidents Real Time"."Near Miss"."Type
of Near Miss") = '...') AND
("Incident Event Details"."Incident Event Completed Date" = date '2017-07-14'))
ORDER BY 1, 20 ASC NULLS LAST, 87 ASC NULLS LAST, 19 ASC NULLS LAST, 18 ASC
NULLS LAST, 86 ASC NULLS LAST, 16 ASC NULLS LAST, 15 ASC NULLS LAST, 17 ASC
NULLS LAST, 114 ASC NULLS LAST, 13 ASC NULLS LAST, 85 ASC NULLS LAST, 12 ASC
NULLS LAST, 84 ASC NULLS LAST, 113 ASC NULLS LAST, 11 ASC NULLS LAST, 83 ASC
NULLS LAST, 10 ASC NULLS LAST, 82 ASC NULLS LAST, 7 ASC NULLS LAST, 8 ASC
NULLS LAST, 81 ASC NULLS LAST, 14 ASC NULLS LAST, 9 ASC NULLS LAST, 76 ASC
NULLS LAST, 109 ASC NULLS LAST, 77 ASC NULLS LAST, 110 ASC NULLS LAST, 116
ASC NULLS LAST, 78 ASC NULLS LAST, 79 ASC NULLS LAST, 111 ASC NULLS LAST, 80
ASC NULLS LAST, 112 ASC NULLS LAST, 75 ASC NULLS LAST, 108 ASC NULLS LAST, 70
ASC NULLS LAST, 105 ASC NULLS LAST, 72 ASC NULLS LAST, 107 ASC NULLS LAST, 71
ASC NULLS LAST, 106 ASC NULLS LAST, 73 ASC NULLS LAST, 74 ASC NULLS LAST, 69
ASC NULLS LAST, 104 ASC NULLS LAST, 68 ASC NULLS LAST, 103 ASC NULLS LAST, 67
ASC NULLS LAST, 102 ASC NULLS LAST, 115 ASC NULLS LAST, 21 ASC NULLS LAST, 88
ASC NULLS LAST, 25 ASC NULLS LAST, 27 ASC NULLS LAST, 22 ASC NULLS LAST, 89
ASC NULLS LAST, 28 ASC NULLS LAST, 23 ASC NULLS LAST, 24 ASC NULLS LAST, 31
ASC NULLS LAST, 33 ASC NULLS LAST, 91 ASC NULLS LAST, 32 ASC NULLS LAST, 90
ASC NULLS LAST, 34 ASC NULLS LAST, 92 ASC NULLS LAST, 35 ASC NULLS LAST, 26
ASC NULLS LAST, 36 ASC NULLS LAST, 93 ASC NULLS LAST, 37 ASC NULLS LAST, 94
ASC NULLS LAST, 38 ASC NULLS LAST, 39 ASC NULLS LAST, 40 ASC NULLS LAST, 41
ASC NULLS LAST, 42 ASC NULLS LAST, 43 ASC NULLS LAST, 44 ASC NULLS LAST, 45
ASC NULLS LAST, 46 ASC NULLS LAST, 47 ASC NULLS LAST, 48 ASC NULLS LAST, 49
ASC NULLS LAST, 50 ASC NULLS LAST, 51 ASC NULLS LAST, 95 ASC NULLS LAST, 52
ASC NULLS LAST, 55 ASC NULLS LAST, 53 ASC NULLS LAST, 54 ASC NULLS LAST, 56
ASC NULLS LAST, 57 ASC NULLS LAST, 96 ASC NULLS LAST, 58 ASC NULLS LAST, 29
ASC NULLS LAST, 59 ASC NULLS LAST, 60 ASC NULLS LAST, 61 ASC NULLS LAST, 97
ASC NULLS LAST, 62 ASC NULLS LAST, 98 ASC NULLS LAST, 30 ASC NULLS LAST, 63
ASC NULLS LAST, 65 ASC NULLS LAST, 100 ASC NULLS LAST, 64 ASC NULLS LAST, 99
ASC NULLS LAST, 66 ASC NULLS LAST, 101 ASC NULLS LAST, 2 ASC NULLS LAST, 3
ASC NULLS LAST, 5 ASC NULLS LAST, 6 ASC NULLS LAST, 4 ASC NULLS LAST
FETCH FIRST 75001 ROWS ONLY
Even though the report included effective logical filters, it failed during physical SQL parsing, running into the 1 GB PGA heap size limit
enforced in the database:
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error
has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query
Failed: [nQSError: 17001] Oracle Error code: 10260, message: ORA-10260: limit size
(1048576) of the PGA heap set by event 10261 exceeded at OCI call OCIStmtExecute.
[nQSError: 17010] SQL statement preparation failed. (HY000)
Review LOBs Usage in OTBI Reports
Some OTBI Presentation Subject Areas have attributes mapped to physical database columns of Large Object (LOB) data types, character (CLOB) or binary (BLOB). Review the following guidelines for CLOB usage in your reports:
FA does not initialize the OBIS_MAX_FIELD_SIZE variable, so by default OBIEE applies a 32,768-byte limit to the maximum size of any field, including CLOBs. If you attempt to fetch a field larger than 32 KB, it will come back blank in OTBI.
Reports with CLOB attributes in the SELECT clause may run up to 70% longer compared to non-CLOB reports. Expect slower performance from CLOB reports fetching larger row counts.
OBIEE does not support logical MINUS for reports with CLOB attributes.
Use of CLOBs may cause failures with logical UNIONs, DISTINCT, GROUP BY and ORDER BY clauses.
Exclude CLOB attributes from filters and join predicates.
When you design a report containing CLOBs, make sure you enforce effective filters, including the use of their default values.
Important! Do not use non-pushed logical functions with CLOBs, as such combination could result in many more rows
fetched to OBIEE for the additional processing.
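The filter guidelines above can be illustrated with a minimal hedged sketch; the subject area and attribute names below are hypothetical:

```sql
-- Hypothetical LSQL sketch: the CLOB attribute appears only in SELECT,
-- while the filters use non-CLOB columns with restrictive values.
SELECT "Some Subject Area"."Header"."Document Number"       s_1,
       "Some Subject Area"."Header"."Long Description CLOB" s_2
FROM   "Some Subject Area"
WHERE  ("Some Subject Area"."Header"."Status" = 'OPEN')          -- non-CLOB filter
  AND  ("Some Subject Area"."Time"."Fiscal Year Number" = 2021)  -- restrictive filter
FETCH FIRST 1000 ROWS ONLY
```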
The CLOB misuse cases above would result in Oracle Database error:
ORA-00932: inconsistent datatypes: expected - got CLOB
Reduce Multiple Logical Unions in OTBI Reports
OBIEE provides very flexible logic to combine results using union, intersection, and minus (or difference) operations. Users can produce sophisticated analyses with logical UNION or UNION ALL by pulling logical attributes from the same or multiple subject areas. However, such flexibility comes at a cost: multiple logical UNIONs further complicate the generated physical SQL and significantly increase query parsing time and overall runtime. Follow the guidelines below for designing UNION-based reports:
Limit the number of logical UNIONs to 5 or less
If possible, combine logical UNIONs into a single logical SELECT for the same subject area
Avoid joining many cross subject areas via logical UNIONs
Apply restrictive filters inside each logical UNION.
You may try to use 'SET VARIABLE OBIS_DBFEATURES_IS_UNION_SUPPORTED=0;' in the report prefix to break an LSQL with logical UNION and UNION ALL into multiple PSQLs.
Important! Make sure you carefully benchmark use of prefix variable OBIS_DBFEATURES_IS_UNION_SUPPORTED=0
with your logical UNION reports before you implement it in your production environment.
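The prefix goes into the Advanced SQL tab ahead of the logical SELECT. A minimal hedged sketch (the subject area and attribute names are hypothetical, and the behavior must be benchmarked as noted above):

```sql
-- Hypothetical sketch: the request variable in the LSQL prefix instructs
-- BI Server to issue each UNION branch as its own physical SQL.
SET VARIABLE OBIS_DBFEATURES_IS_UNION_SUPPORTED=0;
SELECT "Subject Area1"."Dim"."Attribute1" s_1 FROM "Subject Area1"
UNION ALL
SELECT "Subject Area1"."Dim"."Attribute2" s_1 FROM "Subject Area1"
```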
Cross Subject Area Reports Considerations
Cross Subject Area (SA) Reports include logical columns from more than one Subject Area (SA), referenced in SELECT, FROM or
WHERE logical clauses:
Explicit Cross SA pattern: reports reference multiple SAs explicitly in FROM clause
Implicit Cross SA: reports use a single SA in FROM clause and more referenced SAs in fully qualified attributes in SELECT
or WHERE clauses.
The most common implicit Cross SA pattern is shown below:
-- Select attributes from two SAs
SELECT “Subject Area1”.”attribute1” s_1,
“Subject Area2”.”attribute1” s_2,
…
FROM “Subject Area1” WHERE …;
-- Select attributes from one SA, but applying filters from the second SA:
SELECT “Subject Area1”.”attribute1” s_1,
“Subject Area1”.”attribute1” s_2,
…
FROM “Subject Area1” WHERE “Subject Area2”.”attribute1”=…;
Important! Refer to Oracle White Paper “Guidelines for creating Cross Subject Area Transactional BI Reports in Fusion”( Doc ID
1567672.1) for the detailed explanations and guidelines for using Cross SA in your reports.
This section covers additional recommendations for addressing performance related topics for using Cross SAs in OTBI reports.
1. Limit the use of Cross Subject Areas to as few as possible in your reports.
a. Every additional SA adds complexity to the generated PSQL and results in significant overhead in both SQL structure and security predicates, and, as a result, in very complex execution plans.
b. There have been a number of cases when users unintentionally introduced 5-10 subject areas in a single report. Avoid introducing more than three SAs in a single report, if you can.
c. Inspect the logical attributes used from multiple SAs and verify whether you can extract the same or functionally equivalent logical columns from the primary SA instead.
2. Cross SA reports form the largest category of queries failing with "ORA-10260: limit size (1048576) of the PGA heap set by event 10261 exceeded". Such errors cannot be worked around with configuration changes and require rewriting the logical SQL to address the report's functional requirements.
Cross Fact Reports Recommendations
Cross Fact or Cross Star reports reference more than one logical fact from a single subject area in OTBI reports. The two examples below show the difference between cross subject area and cross fact queries:
the first example shows a report with implicit cross subject areas
the second example shows an LSQL with cross facts, using two facts from the same subject area (Sales - CRM Pipeline).
TWO CROSS SAS REPORT (typically produces LEFT OUTER JOIN in sub-requests):
SELECT "Workforce Management - Worker Assignment Real Time"."Worker"."Assignment Number" s_1,
"Workforce Management - Worker Assignment Real Time"."Worker"."Employee First Name" s_2,
"Workforce Management - Employment Contract Real Time"."Employment Contract Details"."Type" s_3
FROM "Workforce Management - Worker Assignment Real Time"
WHERE (DESCRIPTOR_IDOF("Workforce Management - Worker Assignment Real Time"."Worker"."Assignment Status Type") IN ('ACTIVE', 'SUSPENDED'))
ORDER BY 1
FETCH FIRST 75001 ROWS ONLY
TWO CROSS FACTS REPORT (typically produces FULL OUTER JOIN in sub-requests):
SELECT "Sales - CRM Pipeline"."Employee"."Employee Name" s_1,
"Sales - CRM Pipeline"."Pipeline Detail Facts"."Closed Opportunity Line Revenue" s_2,
"Sales - CRM Pipeline"."Resource Quota Facts"."Resource Quota" s_3
FROM "Sales - CRM Pipeline"
WHERE ("Time"."Enterprise Quarter" = VALUEOF("CURRENT_ENTERPRISE_QUARTER"))
ORDER BY 1
FETCH FIRST 75001 ROWS ONLY
OTBI logical model design uses several categories of dimensions:
common OTBI dimensions, shared across multiple pillars such as Financials, HCM, etc.
common Pillar dimensions that apply to all subject areas within each pillar
common Fact dimensions that apply to two or more logical facts within each subject area
local Fact dimensions that are specific to a logical fact in the subject area.
Review the recommendations below for designing and optimizing your cross fact OTBI reports:
Select at least one metric from each fact to ensure report consistency. You may select the metrics and then exclude them in Edit View so they do not show in your report.
Select at least one common dimension shared by the participating facts; otherwise OBIEE would generate a Cartesian product and create a wrong result set.
Select common fact dimension attributes that OBIEE will use to join the cross facts. If you do not need them in your report output, you can select these attributes in the report but exclude them in Edit View.
Carefully design filters to apply them to all facts in cross fact reports. Remember that a filter on a local fact dimension
applies to one logical fact only. With no additional filters to the other fact(s), the report would result in slower performance.
Consider using fewer logical facts in a single report. Adding more facts into a single report complicates the report design, generates more complex SQL with security predicates applied to ALL facts, and affects the report performance.
Check for any possibly redundant SYS_OP_MAP_NONNULL in generated PSQLs. Typically, cross fact queries produce a Full Outer Join (FOJ) between Fact VOs. Such a join may result in equi-joining NULLs, which produces SYS_OP_MAP_NONNULL in the PSQL. SYS_OP_MAP_NONNULL generation can be eliminated by unchecking the 'nullable' property in the RPD for a joining logical column, if it references a NOT NULL column in the database.
Cross fact reports with implicit Full Outer Joins may benefit from using 'SET VARIABLE OBIS_DBFEATURES_IS_PERF_PREFER_INTERNAL_STITCH_JOIN=1;' in the report prefix, breaking a single PSQL into multiple database queries.
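A hedged sketch of the stitch-join prefix, reusing the cross fact "Sales - CRM Pipeline" attributes referenced earlier in this section (verify the behavior in a test environment first):

```sql
-- Hypothetical sketch: the request variable asks BI Server to run each fact's
-- sub-request as a separate PSQL and stitch the result sets on the OBIEE tier,
-- instead of one large FULL OUTER JOIN in the database.
SET VARIABLE OBIS_DBFEATURES_IS_PERF_PREFER_INTERNAL_STITCH_JOIN=1;
SELECT "Sales - CRM Pipeline"."Employee"."Employee Name" s_1,
       "Sales - CRM Pipeline"."Pipeline Detail Facts"."Closed Opportunity Line Revenue" s_2,
       "Sales - CRM Pipeline"."Resource Quota Facts"."Resource Quota" s_3
FROM   "Sales - CRM Pipeline"
```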
Use of OTBI Reports as Infolets in FA Embedded Pages
Reports designed in OTBI can be embedded as infolets into FA product pages. Infolets draw users' quick attention to a metric or attribute, which they can further analyze using more detailed analytic reports. However, such infolets could become a source of performance overhead in your FA environment if designed inefficiently. Typically, OTBI infolets see much higher usage than OTBI reports and dashboards, which are used by a targeted audience. The UI users may or may not be interested in the output of such OTBI reports, yet they trigger their execution every time they navigate through such UI pages. Review the following recommendations for using OTBI infolets in FA:
Review OTBI Usage Tracking reports for the scale and usage of Infolets in your environment.
Make sure you design very compact reports, using effective date filters, performing optimal aggregations on a small resultset.
Mind the use of functional contents that add complex (or custom) data security predicates to OTBI infolets.
Avoid using Cross Subject Area reports or any other complex design patterns offered by OTBI for infolets. Stick to a single Subject Area only.
Avoid using filters based on other reports via IN or SET operators.
Ensure restrictive, index-supported filters to avoid aggregating fact measures over a wide data set.
Avoid using a detailed report as a filter for an infolet to maintain functional consistency. Instead, create two separate reports with shared filters using 'saved filters'. Such an approach reduces the complexity of the infolet report while maintaining the desired consistency, and reduces development overhead for changes, as you modify a single filter in one place to update both reports.
Avoid consolidating all infolets on a single page. Instead have multiple tabs and keep the non-critical infolets on the second
page. This will improve user experience and also ensure that all infolet queries are not fired in parallel, thereby reducing the
concurrent load on the system.
Avoid disabling the OBIPS cache by setting "SET VARIABLE OBIS_REFRESH_CACHE=1;" in your infolets. The variable treats entries in the OBIPS cache as stale and reseeds them every time you navigate to the page with infolets. Given the high usage frequency of OTBI infolet reports, every single page navigation would then skip the OBIPS cache and cause more performance overhead in your environment. Refer below to the recommended variable usage.
Considerations for Using Custom Extensible Attributes and Flex Fields in Logical Filters
OTBI supports the use of custom flexible attributes in its logical model for extending the functionality and flexibility in BI
Transactional reporting. Review the following considerations when employing such custom attributes in your reports:
1. Extensible attributes are NOT indexed in the Fusion database. If you design a logical report and apply a filter using such an extensible attribute, the database optimizer will pick a full table scan access path in the absence of any other filters or join predicates.
2. If you define extensible attributes inside logical View Objects, which get generated as correlated sub-queries in PSQL, and choose to use them as logical filters, you may run into the database error "ORA-01652: unable to extend temp segment by 128 in tablespace FUSION_TEMP". When the optimizer generates an execution plan for a PSQL with such a correlated sub-query, it first joins all the required tables, retrieving all the values for the extensible attribute, and only then applies filters to it. Even if you use an extensible attribute mapped to an indexed physical column, its index would be of very little use. To speed up reports with such a design, consider applying additional filters, which help reduce the volumes of the interim joins.
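The last recommendation can be sketched as below; the subject area and attribute names are hypothetical. The flexfield filter is kept, while additional restrictive filters shrink the interim join volumes before the flexfield predicate is evaluated:

```sql
-- Hypothetical sketch: "Project Custom Flag" stands for a custom extensible
-- attribute (not indexed); the Ledger and Fiscal Year filters reduce the
-- joined row set before the flexfield filter is applied.
SELECT "Some Subject Area"."Details"."Project Custom Flag" s_1,
       "Some Subject Area"."Facts"."Amount"                s_2
FROM   "Some Subject Area"
WHERE  ("Some Subject Area"."Details"."Project Custom Flag" = 'Y')
  AND  ("Some Subject Area"."Ledger"."Ledger Name" = 'US Primary Ledger')
  AND  ("Some Subject Area"."Time"."Fiscal Year Number" = 2020)
```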
Considerations for Using Hierarchical Attributes in Logical Filters
Logical filters on hierarchical attributes can add significant complexity and performance overhead to your reports. You may work around hierarchical columns in the WHERE clause by picking other columns from the same logical object(s). For example, a report referencing the "Hierarchy.level1 Department" attribute in the WHERE clause may be replaced with "Department Name" without affecting the functional design. The Department hierarchy provides the structure of a department tree and is usually traversed from the top using the Tree name. The use of "Department Name" eliminates the generated SQL complexity and significantly improves the report's performance.
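The substitution above can be sketched as follows (the subject area is hypothetical; the attribute names mirror the example in the text):

```sql
-- Hypothetical sketch. Instead of filtering on a hierarchy level attribute:
--   WHERE "Department"."Hierarchy.level1 Department" = 'Operations'
-- filter on the flat department name, which avoids traversing the tree:
SELECT "Some HCM Subject Area"."Worker"."Employee Name" s_1
FROM   "Some HCM Subject Area"
WHERE  ("Some HCM Subject Area"."Department"."Department Name" = 'Operations')
```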
Employ DESCRIPTOR_IDOF Function in Logical Filters
OBIEE's double-column functionality in the RPD allows you to associate two logical columns in the logical layer:
A descriptor or display column, which contains the actual display values. For example, “Customer Country” shows Country
Names.
A descriptor ID or code column, which contains the code values, uniquely identifying display values, consistent across users
or locales. For example, “Country Code” maps to the Country IDs.
The double column feature allows you to:
Define language independent filters
Change display values without rewriting the actual analyses
Handle the queries, which use LOB data types
Utilize indexes on ID columns and get improved performance for your queries.
Carefully review Logical filters in your report’s logical SQL as well as the corresponding Physical filters in its generated physical SQL
and verify whether you can replace a filter on a "description" column with the filter on its direct "code" / "ID" column. The
“description” columns may not be indexed, while their corresponding ID columns typically have indexes defined. The switch from
“description” to ID columns will reduce complexity of the physical SQL and utilize available indexes on CODE/ID columns.
Note: DESCRIPTOR_IDOF can be used with 'Equal' and 'IN' filters, but not with 'LIKE' comparisons.
For example, in the RPD "Sales - CRM Pipeline"."Customer"."Country Code" is defined as the Descriptor ID Column for "Sales - CRM Pipeline"."Customer"."Customer Country".
The use of DESCRIPTOR_IDOF helps to address additional performance cases:
1. The DESCRIPTOR_IDOF function produces a physical SQL whose execution plan applies the filter value to the (most likely indexed) ID column of a dimension table earlier in the plan execution steps, compared to a less optimal plan with the filter on a lookup table. For example, the modified LSQL:
SELECT
0 s_0,
"Sales - CRM Pipeline"."Customer"."Account Active Flag" s_1,
"Sales - CRM Pipeline"."Customer"."Corporate Account Name" s_2,
"Sales - CRM Pipeline"."Customer"."Customer City" s_3,
"Sales - CRM Pipeline"."Customer"."Customer Country" s_4,
...
FROM "Sales - CRM Pipeline"
WHERE (("Customer"."Customer Name" LIKE 'A%')
AND ((DESCRIPTOR_IDOF("Customer"."Customer Country") = 'CA')
OR (DESCRIPTOR_IDOF("Customer"."Customer Country") = 'MX')))
ORDER BY 1,
6 ASC NULLS LAST
FETCH FIRST 75001 ROWS ONLY ;
Such logic will replace “NVL(D2.c1 , D1.c14) IN ('Canada', 'Mexico'))” with “D1.c14 IN ('CA', 'MX')” predicate and produce
the physical SQL pattern below:
. . .
FROM SAWITH2 D1
LEFT OUTER JOIN OBICOMMON0 D2
ON D1.c14 = D2.c2
WHERE --( NVL(D2.c1 , D1.c14) IN ('Canada', 'Mexico'))
D1.c14 IN ('CA', 'MX')
2. DESCRIPTOR_IDOF may affect the number of generated physical SQLs for a report's LSQL.
If you design a report with a logical filter containing a lookup, it may cause OBIEE to generate more than one physical SQL and affect the report performance. If you need to apply such filters on a lookup, use the Descriptor ID Column in the filter.
For example, the logical query example below has the following logical filters:
FROM "Workforce Goals - Goal Status Overview Real Time"
WHERE
(("Business Unit"."Status" = 'Active') AND
(YEAR("Performance Goals"."Start Date") = 2014) AND
(DESCRIPTOR_IDOF("Workforce Goals - Goal Status Overview Real Time"."Worker"."Assignment
Status")=1)
AND ("Worker"."Person Type" ='Employee'))
BI Server will generate two physical SQLs, triggered by ("Business Unit"."Status" = 'Active'), instead of one:
Rows 1, bytes 128 retrieved from database query id: <<9201501>>
Physical query response time 0.108 (seconds), id <<9201501>>
Rows 359525, bytes 2395073904 retrieved from database query id: <<9200121>>
Physical query response time 224.454 (seconds), id <<9200121>>
Physical Query Summary Stats: Number of physical queries 12, Cumulative time 237.783, DB-
connect time 0.001 (seconds)
Rows returned to Client 359411
Logical Query Summary Stats: Elapsed time 279.306, Response time 260.537, Compilation
time 14.639 (seconds)
Since OBIEE didn’t ship all the logic to the database, it had to wait for both PSQL results before merging them on its tier.
You can rewrite the logical query to use DESCRIPTOR_IDOF and end up with all the logic shipped to database in a single
PSQL:
FROM "Workforce Goals - Goal Status Overview Real Time"
WHERE
((DESCRIPTOR_IDOF("Business Unit"."Status") = 'ACTIVE') AND
(YEAR("Performance Goals"."Start Date") = 2014) AND
(DESCRIPTOR_IDOF("Workforce Goals - Goal Status Overview Real Time"."Worker"."Assignment
Status")=1)
AND ("Worker"."Person Type" ='Employee'))
Optimize Generated PSQL(s) with a Large Number of In-List Values or BIND Variables
There are two common cases that result in generating PSQLs with a large number of in-list values or BIND variables and that may lead to slower performance or report failures:
1. OBIEE may generate two or more physical SQLs, with one of them listing internal BIND variables that it uses to pass values from another PSQL. This may be the result of using lookups and employing BIND variables to pass values between the lookup(s) and the main generated query:
…
T2510675.C36361321 as c2
from
(SELECT V483283710.MEANING AS C61309499, V483283710.LOOKUP_CODE AS C36361321,
V483283710.LOOKUP_TYPE AS C319008066 FROM HCM_LOOKUPS V483283710 WHERE ( (
(V483283710.LOOKUP_TYPE = 'HRG_PERF_GOAL_CATEGORY' ) )
)) T2510675
where ( T2510675.C36361321 in (:PARAM1, :PARAM2, :PARAM3, :PARAM4, :PARAM5, :PARAM6,
:PARAM7, :PARAM8, :PARAM9, :PARAM10, :PARAM11, :PARAM12, :PARAM13,:PARAM14, :PARAM15,
:PARAM16, :PARAM17, :PARAM18, :PARAM19, :PARAM20) )
order by c2
Depending on the number of queried logical attributes in the LSQL SELECT, this list could be very long and cause performance overhead, or result in the error:
[nQSError: 46168] …temporary file exceeds the limit (10240 MB) specified by the
NQ_LIMIT_WRITE_FILESIZE environment variable.
Review your logical SQL for the presence of any non-pushed functions, such as LENGTH or RAND, or non-default OBIEE directives passed into the 'SET VARIABLE' clause, and, if found, consider removing them from the report's LSQL.
Additionally, review the business logic to reduce the number of lookups used in the report.
2. OBIEE may generate several PSQLs, with one returning a very large number of actual values that get passed into another PSQL:
…AND ( NOT ( (V222235898.PARTY_NAME = 'Joe Doe' ) ) ) )) T2906423
where ( T2906423.C47764773 in (
3000000001.0, 3000000002.0, 3000000003.0, 300000004.0, 300000004.0, 300000005.0,
300000006.0, 300000007.0, 300000008.0, 30000009.0, 300000010.0, 300000011.0, 300000012.0,
300000013.0, 300000014.0, 300000015.0, 300000016.0, 300000017.0, 300000018.0,
300000019.0, ...
Oracle has an internal limit on the number of in-list values. Besides causing very poor SQL performance, such large lists can cause the following error:
[nQSError: 42029] Subquery contains too many values for the IN predicate. (HY000)
Such behavior comes from the use of a lookup in the WHERE clause of the logical SQL. To work around it, you can either use DESCRIPTOR_IDOF (only for equi-joins) or include the lookup attribute in the LSQL SELECT clause. Alternatively, you may replace the lookup filter with any available ID or Code attributes from the queried facts or dimensions.
CONSTANT_OPTIMIZATION_LEVEL and CONSTANT_CASE_OPTIMIZATION_LEVEL Request
Variables
If your report's logical SQL has computations using logical constants or a logical CASE, consider initializing the session variables CONSTANT_OPTIMIZATION_LEVEL=1 and/or CONSTANT_CASE_OPTIMIZATION_LEVEL=1 via the 'SET VARIABLE' prefix in the Advanced SQL tab to enable expression optimization. It may improve the performance of logical reports with CASE WHEN filters.
The variable values are:
0 - No constant expression optimization. This is the default value.
1 - BI Server performs internal constant expression evaluation early during query processing, converting all constants into actual literal values. Any error during constant evaluation results in no constant being rewritten.
2 - This value makes the constant optimization aggressive. When faced with an error during a constant evaluation, BI Server skips the failing constant expression and rewrites the others.
CONSTANT_OPTIMIZATION_LEVEL helps in such cases as in the examples below:
1+2*(5 + valueof(NQ_SESSION.int_var))
upper('abc' || valueof(NQ_SESSION.varchar_var)) || left('def', 2)
'ABC'||' '||'DEF'
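A hedged sketch of the prefix in the Advanced SQL tab (the subject area and expression below are hypothetical):

```sql
-- Hypothetical sketch: both optimization variables are set in the LSQL prefix,
-- so constant expressions like 'ABC'||' '||'DEF' can be folded into literal
-- values early during query processing.
SET VARIABLE CONSTANT_OPTIMIZATION_LEVEL=1, CONSTANT_CASE_OPTIMIZATION_LEVEL=1;
SELECT "Subject Area1"."Dim"."Attribute1" s_1,
       'ABC'||' '||'DEF' s_2
FROM   "Subject Area1"
```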
Use REPORT_SUM/REPORT_COUNT Instead of REPORT_AGGREGATE Logical Functions
If a REPORT_AGGREGATE performs a calculation such as SUM or COUNT, you can safely replace it with REPORT_SUM or REPORT_COUNT (or the other corresponding function) to ensure better performance of your reports. REPORT_AGGREGATE may force BI Server to generate multiple logical sub-requests and create more than one PSQL, with all the complexity of fetching large volumes of data and performing logical joins on the OBIEE tier.
Note: when you build PIVOT reports using aggregate functions, BI Server will not push them into the database but will perform them on its tier in memory. With more columns and rows in your PIVOT reports, such aggregations take longer. Review and, if possible, reduce the 'BY' clause in your AGGREGATE functions to simplify the aggregation.
Important! DO NOT use an empty 'BY' clause, which may get generated in your report, since OBIEE would include all queried attributes in the aggregate clause in physical SQL. The use of an empty 'BY' clause may result in generating more than one physical SQL.
In the example below the original LSQL contains a mix of REPORT_SUM and REPORT_AGGREGATE:
SELECT ...
REPORT_AGGREGATE(IFNULL("General Ledger - Journals Real Time"."-Lines"."Journal Line
Accounted Amount Debit",0)-IFNULL("General Ledger - Journals Real Time"."-
Lines"."Journal Line Accounted Amount Credit",0) BY SUBSTRING("General Ledger - Journals
Real Time"."-Account"."Concatenated Segments" FROM 16 FOR 4),SUBSTRING("General Ledger -
Journals Real Time"."-Account"."Concatenated Segments" FROM 21 FOR 6),SUBSTRING("General
Ledger - Journals Real Time"."-Account"."Concatenated Segments" FROM 28 FOR
5),SUBSTRING("General Ledger - Journals Real Time"."-Account"."Concatenated Segments"
FROM 34 FOR 5),SUBSTRING("General Ledger - Journals Real Time"."-Account"."Concatenated
Segments" FROM 40 FOR 5),SUBSTRING("General Ledger - Journals Real Time"."-
Account"."Concatenated Segments" FROM 46 FOR 2),SUBSTRING("General Ledger - Journals Real
Time"."-Account"."Concatenated Segments" FROM 49 FOR 5), SUBSTRING("General Ledger -
Journals Real Time"."-Account"."Concatenated Segments" FROM 4 FOR 5),SUBSTRING("General
Ledger - Journals Real Time"."-Account"."Concatenated Segments" FROM 10 FOR
5),SUBSTRING("General Ledger - Journals Real Time"."-Account"."Concatenated Segments"
FROM 1 FOR 2)... s44,
REPORT_SUM("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted Amount
Credit" BY SUBSTRING("General Ledger - Journals Real Time"."- Account"."Concatenated
Segments"
FROM 1 FOR 52),"General Ledger - Journals Real Time"."Approval Status"."Approval Status
Description","General Ledger - Journals Real Time"."- Line Details"."Line Effective
Date","General Ledger - Journals Real Time"."- Line Details"."Line Period Name","General
Ledger - Journals Real Time"."- Line Details"."Line Number","General Ledger - Journals
Real Time"."- Line Details"."Line Description","General Ledger - Journals Real Time"."-
Line Details"."Line Type","General Ledger - Journals Real Time"."- Line Details"."Line
Currency Code","General Ledger - Journals Real Time"."Time"."Fiscal Year Number","General
Ledger - Journals Real Time"."- Journal Category"."User Journal Category Name","General
Ledger - Journals Real Time"."- Journal Category"."Description","General Ledger -
Journals Real Time"."- Journal Source"."User Journal Source Name","General Ledger -
Journals Real Time"."- Journal Source"."Source Journal Source Description","General
Ledger - Journals Real Time"."- Ledger Set".
"Ledger Set Name","General Ledger - Journals Real Time"."- Ledger"."Ledger
Name","General Ledger - Journals Real Time"."- Ledger"."Chart Of Account","General Ledger
- Journals Real Time"."- Header Details"."Header Balance Type Flag","General Ledger -
Journals Real Time"."- Header Details"."Encumbrance Type","General Ledger - Journals Real
Time"."- Header Details"."Header Close Acct Seq Value","General Ledger - Journals Real
Time"."- Header Details"."Header Description","General Ledger - Journals Real Time"."-
Header Details"."Journal Legal Entity","General Ledger - Journals Real Time"."- Header
Details"."Header Doc Sequence Name","General Ledger - Journals Real Time"."- Header
Details"."Header Local Doc Sequence Value","General Ledger - Journals Real Time"."-
Header Details"."Header Status","General Ledger - Journals Real Time"."- Header
Details"."Header Doc Sequence Value","General Ledger - Journals Real Time"."- Header
Details"."Header posting Account Seq Value", 0,
"General Ledger - Journals Real Time"."Posting Status"."Posting Status Description")
s_45,
REPORT_SUM("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted
Amount Debit" BY SUBSTRING("General Ledger - Journals Real Time"."-
Account"."Concatenated Segments"
FROM 1 FOR 52), "General Ledger - Journals Real Time"."Approval Status"."Approval Status
Description",... s_46,
REPORT_AGGREGATE(IFNULL("General Ledger - Journals Real Time"."- Lines"."Journal Line
Accounted Amount Debit",0) - IFNULL("General Ledger - Journals Real Time"."-
Lines"."Journal Line Accounted Amount Credit",0) BY SUBSTRING("General Ledger - Journals
Real Time"."- Account"."Concatenated Segments"
FROM 1 FOR 52),"General Ledger - Journals Real Time"."Approval Status"."Approval Status
Description",... s_44
...
The generated report failed with an OBIEE TEMP space error, as it resulted in two logical sub-requests producing two physical SQLs, fetching all the rows from each PSQL to the OBIEE tier and consuming all its TEMP space:
Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error
has occurred. [nQSError: 43113] Message returned from OBIS.
[nQSError: 43119] Query Failed: [nQSError: 46168] RawFile::checkSoftLimit failed because
a temporary file exceeds the limit (10240 MB) specified by the NQ_LIMIT_WRITE_FILESIZE
environment variable. (HY000)
The LSQL portion above has two overly complex logical blocks:
1. A heavy REPORT_SUM with a large number of 'BY' logical attributes. Avoid building such complicated computations and use a reasonable number of 'BY' logical columns. The other aggregations in the example above contained similarly lengthy 'BY' clauses and were suppressed with '…'.
2. A number of SUBSTRING functions defined in s_44, which can easily be collapsed into the single SUBSTRING below:
SUBSTRING("General Ledger - Journals Real Time"."- Account"."Concatenated Segments" FROM
1 FOR 52)
Remember that every small optimization to your logical SQL can be a big saver when generating physical SQL(s), so eliminate as many redundant and bulky logical constructs as you can to boost your reports' performance.
The logical report contains REPORT_SUMs and one REPORT_AGGREGATE, which triggered generating two physical SQLs, shipping the join logic to OBIEE (database 0:0,0):
...sum_SQL99(D1.c43 by [ D1.c17, D1.c16, D1.c19, D1.c18, D1.c21, D1.c26, D1.c23, D1.c7,
D1.c13, D1.c12, D1.c27, D1.c28, D1.c25, D1.c11, D1.c10, D1.c6, D1.c14, D1.c29, D1.c30,
D1.c20, D1.c22, D1.c24, D1.c9, D1.c31, D1.c4, D1.c15, D1.c32, D1.c38, D1.c33, D1.c34,
D1.c35, D1.c36, D1.c37, D1.c39, D1.c40, D1.c41] at_distinct [ D1.c17, D1.c16, D1.c19,
D1.c18, D1.c21, D1.c26, D1.c23, D1.c7, D1.c13, D1.c12, D1.c27, D1.c28, D1.c25, D1.c11,
D1.c10, D1.c6, D1.c14, D1.c29, D1.c30, D1.c20, D1.c22, D1.c24, D1.c9, D1.c31, D1.c4,
D1.c15, D1.c32, D1.c38, D1.c33, D1.c34, D1.c35, D1.c36, D1.c37, D1.c39, D1.c40, D1.c41,
D1.c53, D1.c54, D1.c2, D1.c3, D1.c55, D1.c56, D1.c57] ) as c47 [for database 0:0,0]
Child Nodes (RqJoinSpec): <<309990039>> [for database 0:0,0]
RqList <<278868707>> [for database 3023:1418642:FSCM_OLTP,57]
sum(D1.c32 by [ D1.c14, D1.c13, D1.c16, D1.c15, cast(D1.c33
as CHARACTER ( 30 )) || cast(D1.c23 as CHARACTER ( 30 )) , D1.c29, D1.c26, D1.c27,
D1.c30, D1.c31, D1.c28] ) as c1 [for database 3023:1418642,57],
Note that OBIEE rewrote REPORT_AGGREGATE into a SUM_SQL99 aggregation, so it can safely be replaced with REPORT_SUM, and the updated LSQL will produce a single PSQL.
Use COUNT Instead of COUNT(DISTINCT)
Consider switching from COUNT(DISTINCT) to COUNT() if the latter satisfies your functional requirements, as COUNT(DISTINCT) is a more expensive database operation, especially on a very large result set. Internal benchmarks performing COUNT vs. COUNT(DISTINCT) on a 420M-row sample table showed 3-4x better performance using COUNT.
Note: if you have multiple Logical Table Sources (LTS) defined for the same logical column using the common COUNT(DISTINCT) rule, then you have to update the aggregation rule for each LTS.
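Where exact distinct counts are not required, the switch can be sketched as below (the subject area and attribute names are hypothetical; verify that plain counts meet your functional needs first):

```sql
-- Hypothetical sketch. The more expensive form:
--   COUNT(DISTINCT "Some Subject Area"."Facts"."Order Number")
-- may be replaced, when duplicates are not expected at the report grain, with:
SELECT "Some Subject Area"."Customer"."Customer Name"       s_1,
       COUNT("Some Subject Area"."Facts"."Order Number")    s_2
FROM   "Some Subject Area"
```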
EVALUATE vs. EVALUATE_ANALYTIC Considerations
EVALUATE and EVALUATE_ANALYTIC functions provide similar functionality, but they have to be applied in different functional
contexts. EVALUATE is used for scalar functions that take input values and return an output value for a single row, while
EVALUATE_ANALYTIC takes a row set, i.e. one or more rows, and returns a result for each row in the set. The EVALUATE_ANALYTIC
function results in generating SQL analytic functions (also known as window functions), so it may produce more sophisticated
physical SQLs.
Note, when you choose the EVALUATE function, use AGGREGATE instead of REPORT_AGGREGATE for the aggregation rule.
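A minimal sketch of the two functions, assuming the standard OBIEE EVALUATE syntax (the database functions and logical columns below are illustrative):

```sql
-- EVALUATE: scalar database function, one output value per input row
EVALUATE('TO_CHAR(%1, ''YYYY-MM'')' AS CHAR, "Time"."Fiscal Period End Date")

-- EVALUATE_ANALYTIC: generates a SQL analytic (window) function in the PSQL
EVALUATE_ANALYTIC('RANK() OVER (ORDER BY %1 DESC)' AS INTEGER, "Facts"."Revenue")
```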
Use of Logical Joins in OTBI Reports
OBIEE supports different types of logical table joins: INNER JOIN, FULL OUTER JOIN, LEFT OUTER JOIN, RIGHT OUTER JOIN,
CROSS/CARTESIAN JOIN. It may push such joins to the database, or generate multiple PSQLs for its logical sub-requests and then
perform a STITCH JOIN of the fetched data sets on the OBIEE tier. This document discussed earlier the use cases and examples with
OBIS_DBFEATURES_IS_PERF_PREFER_INTERNAL_STITCH_JOIN, which enforces STITCH JOIN for cross-fact reports.
When you rewrite the report logic and apply logical joins, use, if you can, the more effective INNER JOIN, LEFT OUTER JOIN or RIGHT
OUTER JOIN instead of FULL OUTER JOIN.
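For instance, a hedged sketch of such a rewrite (the sub-request aliases and columns are illustrative):

```sql
-- FULL OUTER JOIN forces OBIEE to preserve unmatched rows from both sub-requests
SELECT A."Period", A."Actual", B."Budget"
FROM report_a A FULL OUTER JOIN report_b B ON A."Period" = B."Period"

-- If unmatched budget-only periods are not functionally required,
-- a LEFT OUTER JOIN is cheaper for both database and stitch-join processing
SELECT A."Period", A."Actual", B."Budget"
FROM report_a A LEFT OUTER JOIN report_b B ON A."Period" = B."Period"
```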
Use of OBIEE CACHE Variables for Diagnostics in OTBI Reports
OBIEE effectively uses several caching options to speed up processing of similar or identical reports, such as its own execution plan
cache (to avoid reparsing), the subrequest cache (to reduce or eliminate processing of the same sub-requests), etc.
When developing reports and measuring their performance, you may consider disabling the following CACHE variables and turning
on LOGLEVEL=7 to generate more detailed tracing, by defining them in the report's Prefix field:
SET VARIABLE DISABLE_CACHE_HIT=1, DISABLE_CACHE_SEED=1, DISABLE_SUBREQUEST_CACHE=1,
DISABLE_PLAN_CACHE_HIT=1, DISABLE_PLAN_CACHE_SEED=1, LOGLEVEL=7; SELECT …
Important! Make sure you remove the variables from report’s prefix before you deploy your reports in a production environment.
Use of OBIS_REFRESH_CACHE in OTBI Reports
OBIEE in FA environments relies on the BI Presentation Server (OBIPS) cache for caching the results of every single execution, and
keeps using the cached results while the OBIEE cursors stay open. When users navigate back and forth to the same report, OBIEE
retrieves the data from the seeded cache instead of executing the same query(ies) again and again. There may be a few isolated cases,
however, when you may want to turn off the OBIPS cache, either in selected reports or for debugging purposes. You can do that by using
'SET VARIABLE OBIS_REFRESH_CACHE=1;' as a prefix in your logical SQL report.
Important! Avoid using this variable and disabling OBIPS cache for heavy and frequently used reports. Such actions would result in
overall performance impact in your environment.
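For example, a sketch of such a debugging prefix (the subject area and column below are illustrative):

```sql
SET VARIABLE OBIS_REFRESH_CACHE=1;
SELECT 0 s_0, "Worker"."Person Number" s_1
FROM "Workforce Management - Worker Assignment Real Time"
```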
Use of Database Hints to Optimize Generated Queries Plans
While OTBI PSQLs cannot be manually tweaked, you have an option to pass database hints into your report to be included in the
generated PSQLs. OBIEE introduced support for Oracle database hints at the generated PSQL level, using the request variable
OBIS_ORACLEDB_HINTS_FOR_TOP_SELECT.
Important! The database hint options should be considered only after performing a comprehensive analysis of the generated
SQLHC, the query statistics and the execution plan. Make sure you perform careful and extensive benchmarking for various filters and
security predicates (different users with different access lists) before implementing the hints in production reports.
The hints, defined via the request variable, will get passed into the TOP SELECT of the generated physical SQL. Such an option can be
used to rectify an execution plan, if the database optimizer produces a sub-optimal query plan for the generated SQL query. If a logical
SQL produces more than one physical SQL, then the defined hint will get stamped into EVERY SINGLE generated physical SQL for
that report. The variable can be used for passing both optimizer parameter hints using OPT_PARAM as well as regular hints.
Some examples are shown in the table below:
ORACLE DATABASE HINT: /*+ NO_INDEX(D1.T231459.V53960257.Revenue MOO_REVN_N8) */
OBIS_ORACLEDB_HINTS_FOR_TOP_SELECT: 'NO_INDEX(D1.T231459.V53960257.Revenue MOO_REVN_N8)'
EXPLANATION: Applies the NO_INDEX hint for MOO_REVN_N8 inside the query block 'Revenue'.

ORACLE DATABASE HINT: /*+ opt_param('_complex_view_merging','false') */
OBIS_ORACLEDB_HINTS_FOR_TOP_SELECT: 'OPT_PARAM(''_complex_view_merging'',''false'')'
EXPLANATION: Enforces the equivalent of 'alter session set "_complex_view_merging"=false;' via the OPT_PARAM SQL hint.
Refer to the example below showing how the hint shows up in the logical and physical SQLs:
-------------------- SQL Request, logical request hash: 4dce486e
set variable LOGLEVEL=4, DISABLE_CACHE_HIT=1, DISABLE_PLAN_CACHE_HIT=1,
OBIS_ORACLEDB_HINTS_FOR_TOP_SELECT= 'OPT_PARAM(''_complex_view_merging'',''false'')'
: select employee.employeeid from snowflakesales; /* QUERY_SRC_CD='rawSQL' */
[2016-05-12T22:16:24.722+00:00] [OracleBIServerComponent] [TRACE:4] [] [] [ecid:
8f6e386743211276:-2e70ffee:1547d0cc722:-8000-0000000000124131,0:1:16:5] [SI-Name: ] [IDD-
GUID: ] [IDD-Name: ] [tid: bc40700] [messageid: USER-18] [requestid: 96420003]
[sessionid: 96420000] [username: Administrator] -------------------- Sending query to
database named SQLDB (id: <<2279739>>), connection pool named SQLDB Connections, logical
request hash 4dce486e, physical request hash b67e28b7: [[
select /*+ opt_param('_complex_view_merging','false') */ T91132.EmployeeID as c1 from
Employees T91132
The query hints can be applied to the affected reports on the Advanced tab, in the Prefix field, via SET VARIABLE
OBIS_ORACLEDB_HINTS_FOR_TOP_SELECT=…. ;.
Note, make sure you put a semicolon at the end of the SET VARIABLE clause, right before the SELECT clause.
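Putting the pieces together, a hypothetical prefix passing an OPT_PARAM hint (the report body below is illustrative):

```sql
SET VARIABLE OBIS_ORACLEDB_HINTS_FOR_TOP_SELECT='OPT_PARAM(''_complex_view_merging'',''false'')';
SELECT 0 s_0, "Time"."Fiscal Period" s_1
FROM "General Ledger - Journals Real Time"
```

Note the single semicolon separating the SET VARIABLE clause from the SELECT.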
Use of MATERIALIZE Hint in Cross Subject Area and Cross Reports
OTBI physical SQLs always run with the "_with_subquery"='INLINE' context in the database. Such context helps to improve the
performance of generated WITH queries for the overwhelming majority of OTBI reports and minimizes TEMP tablespace usage in the
database. There may be several categories of reports that can benefit from materializing targeted WITH sub-queries. OBIEE
introduced a separate variable, AddMaterializeHintToSubquery=1, that you can use to force Oracle to override 'INLINE' and
materialize WITH query sub-blocks in OTBI reports. To initialize the query, use "SET VARIABLE
AddMaterializeHintToSubquery=1;" as a prefix in your report's logical SQL. The variable will insert the MATERIALIZE hint into the
generated factored WITH subquery blocks in the PSQLs:
WITH
OBISUBWITH0 AS (select /*+ MATERIALIZE */ D1.c2 as c1
from
(select count(distinct D1.c3) as c1,
Some types of OTBI reports that may benefit from the forced MATERIALIZE hint are:
Cross SA reports of type: "SELECT … FROM SA1 WHERE ATTRIBUTE1 IN (SELECT … FROM SA2 …)"
More complex reports with nested IN subqueries, such as: "SELECT … FROM SA1 WHERE ATTRIBUTE1 IN (SELECT …
FROM SA2 WHERE ATTRIBUTE2 IN (SELECT … FROM SA3 …))"
Complex reports with more IN subqueries, such as: "SELECT … FROM SA1 WHERE ATTRIBUTE1 IN (SELECT … FROM
SA2) OR ATTRIBUTE2 IN (SELECT … FROM SA3 …)"
Cross reports, i.e. report A using a filter based on the result from report B.
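A sketch of the first pattern with the variable applied (the subject areas and attributes below are made up):

```sql
SET VARIABLE AddMaterializeHintToSubquery=1;
SELECT 0 s_0, "Customer"."Customer Name" s_1
FROM "Sales - CRM Opportunities and Products Real Time"
WHERE "Customer"."Customer Name" IN
(SELECT "Customer"."Customer Name" FROM "Marketing - CRM Leads Real Time")
```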
Functional Design Patterns Affecting OTBI Reports Performance
The previous chapters have provided detailed recommendations for designing performant OTBI reports. Many of them point to
the importance of proper functional report design, joining attributes and picking the most effective functional filters. This chapter
covers a few such examples with functional recommendations. You are advised to periodically review Oracle publications covering
functional topics specific to OTBI subject areas.
"Time"."Fiscal Period" Performance Considerations in Financial Reports
When you create reports that include "Time"."Fiscal Period" column, OBIEE appends a hidden SORTKEY("Time"."Fiscal Period") into
the LSQL. The SORTKEY function is required for correct sorting of the Fiscal Period data in the chronological order. This logic has
been put in place for such cases as prompts, when "Time"."Fiscal Period" ordering is a critical functional requirement. It is justified
there, since prompts typically select from logical dimensions or lookups.
When you create a report that queries a logical fact and exposes the "Time"."Fiscal Period" column in either the SELECT or WHERE clause,
OBIEE includes the Time dimension, adding an extra join to GL_CALENDARS and the calculated formula from the SORTKEY()
function into its PSQL. Instead, you can explore attributes from your logical fact that may carry the equivalent functional data and
eliminate the unnecessary complexity from the report and generated PSQLs.
Refer to the example below, which replaces "Time"."Fiscal Period" with "- Header Details"."Period Name", with the generated LSQL and PSQL
for each variant.
"TIME"."FISCAL PERIOD" "- HEADER DETAILS"."PERIOD NAME"
SET VARIABLE PREFERRED_CURRENCY='User Preferred
Currency 1';SELECT
0 s_0,
"General Ledger - Journals Real
Time"."Time"."Fiscal Period" s_1,
SORTKEY("General Ledger - Journals Real
Time"."Time"."Fiscal Period") s_2
FROM "General Ledger - Journals Real Time"
ORDER BY 1, 3 ASC NULLS LAST, 2 ASC NULLS LAST
FETCH FIRST 75001 ROWS ONLY
SET VARIABLE PREFERRED_CURRENCY='User
Preferred Currency 1';SELECT
0 s_0,
"General Ledger - Journals Real Time"."-
Header Details"."Period Name" s_1
FROM "General Ledger - Journals Real Time"
ORDER BY 1, 2 ASC NULLS LAST
FETCH FIRST 75001 ROWS ONLY
WITH SAWITH0 AS
(SELECT T288012.C251991545 AS c1,
T288012.C510997710 AS c3,
T288012.C422791590 AS c4,
T288012.C355931595 AS c5,
T288012.C42913078 AS c6
FROM
(SELECT V212661565.FISCAL_PERIOD_NAME AS
C251991545,
V212661565.FISCAL_YEAR_NUMBER AS C510997710,
GlCalendars.CALENDAR_ID AS C422791590,
V212661565.FISCAL_QUARTER_NUMBER AS C355931595,
V212661565.FISCAL_PERIOD_NUMBER AS C42913078,
V212661565.FISCAL_PERIOD_SET_ID AS
PKA_FiscalPeriodSetId0,
V212661565.FISCAL_PERIOD_SET_NAME AS
PKA_FiscalPeriodSetName0,
V212661565.FISCAL_PERIOD_TYPE AS
PKA_FiscalPeriodType0,
GlCalendars.PERIOD_SET_ID AS
PKA_GlCalendarsPeriodSetId0,
GlCalendars.PERIOD_TYPE_ID AS
WITH SAWITH0 AS
(SELECT T3339838.C150150587 AS c1
FROM
(SELECT V169022212.PERIOD_NAME1 AS
C150150587,
V169022212.JE_HEADER_ID1 AS
PKA_JrnlHdrJeHeaderId0
FROM
(SELECT JrnlLine.JE_HEADER_ID,
JrnlLine.JE_LINE_NUM,
JrnlHdr.JE_HEADER_ID AS JE_HEADER_ID1,
JrnlHdr.PERIOD_NAME AS PERIOD_NAME1
FROM GL_JE_LINES JrnlLine,
GL_JE_HEADERS JrnlHdr
WHERE (JrnlLine.JE_HEADER_ID =
JrnlHdr.JE_HEADER_ID)
AND ((1 =1))
) V169022212
) T3339838
)
SELECT D1.c1 AS c1,
D1.c2 AS c2
33 WHITE PAPER | Oracle Fusion Transactional Business Intelligence and BI Cloud Connector Performance Recommendations | Version 3.0
Copyright © 2021, Oracle and/or its affiliates | Confidential – Public
PKA_GlCalendarsPeriodTypeId0
FROM GL_FISCAL_PERIOD_V V212661565,
GL_CALENDARS GlCalendars
WHERE V212661565.FISCAL_PERIOD_SET_NAME =
GlCalendars.PERIOD_SET_NAME
AND V212661565.FISCAL_PERIOD_TYPE =
GlCalendars.PERIOD_TYPE
) T288012
),
SAWITH1 AS
(SELECT D1.c1 AS c1,
D1.c3 * 10000 + D1.c4 * 100000000 + D1.c5 * 100 +
D1.c6 AS c2
FROM SAWITH0 D1
)
SELECT D1.c1 AS c1,
D1.c2 AS c2,
D1.c3 AS c3
FROM
( SELECT DISTINCT 0 AS c1,
D1.c1 AS c2,
D1.c2 AS c3
FROM SAWITH1 D1
ORDER BY c3
) D1
WHERE rownum <= 75001
FROM
( SELECT DISTINCT 0 AS c1, D1.c1 AS c2 FROM
SAWITH0 D1 ORDER BY c2
) D1
WHERE rownum <= 75001
Required Use of Logical UPPER() Function in Logical Filters
End users who create OTBI reports with the common “Worker” dimension, from subject areas such as "Workforce Management -
Worker Assignment Real Time", are advised to include a filter on the “Worker”.”Person Number” attribute. However, the attribute is
mapped to the PERSON_NUMBER column, which is indexed via a function-based index (FBI) on UPPER(PERSON_NUMBER).
To ensure the filter picks up the FBI, include the logical UPPER() function in the logical query to improve your report performance,
for example:
AND (UPPER("Worker"."Person Number" ) = '1111111')
More cases where the use of UPPER() is required in reports to take advantage of existing database indexes:
UPPER("Assigned To Person Details"."Line Manager Number") = 'A123'
UPPER("Worker"."Assignment Number") = 'A123'
UPPER("Sales - CRM Opportunities and Products Real Time"."Product"."Product Name") = 'A123'
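The reason UPPER() is required can be sketched on the database side (the index name below is illustrative, not the actual Fusion index):

```sql
-- A function-based index stores UPPER(PERSON_NUMBER), not the raw column
CREATE INDEX PER_PEOPLE_FBI ON PER_ALL_PEOPLE_F (UPPER(PERSON_NUMBER));

-- Matches the FBI and allows an index range scan:
--   WHERE UPPER(PERSON_NUMBER) = '1111111'
-- Cannot use the FBI and may fall back to a full table scan:
--   WHERE PERSON_NUMBER = '1111111'
```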
"Payroll - Payroll Run Results Real Time" Subject Area Recommendations
"Payroll - Payroll Run Results Real Time" logical facts are mapped to very large volume Payroll tables such as
PAY_RUN_RESULT_VALUES
PAY_RUN_RESULTS
PAY_PAYROLL_REL_ACTIONS
These transactional tables have very few indexes to support transactional flows. When you implement reports querying logical
Payroll Facts, make sure you employ the filters to utilize these indexes and avoid very expensive full table scans or row explosions in
the execution plans. Explore the following filters in your Payroll Run Result reports:
Use the combination of "Element"."Element Name" and "Payroll Run Result Details"."Date Earned"
Use the combination of "Location"."Set Name", "Payroll Period"."End Date" and "Location"."Worker Location Name"
Add any of Date filters: "Payroll Period"."End Date", "Payroll Run Result Details"."Effective Date", "Payroll Period"."Default
Pay Date"
Use "Business Unit"."Business Unit Name" with "Payroll Period"."Period Number"
Check if you can replace "Payroll Period"."Period Number" with "Payroll Period"."Period Name"
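As an example, a hedged sketch combining the first recommended filter pair (the element name and dates are made up):

```sql
SELECT ...
FROM "Payroll - Payroll Run Results Real Time"
WHERE "Element"."Element Name" = 'Regular Salary'
AND "Payroll Run Result Details"."Date Earned"
BETWEEN date '2020-01-01' AND date '2020-01-31'
```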
“Payroll - Payroll Balances Real Time” Subject Area Recommendations
The “Payroll - Payroll Balances Real Time” subject area in Oracle Fusion Transactional BI (OTBI) introduced the logical model for
creating OBIEE end user reports against the Fusion HCM Payroll Balances tables. Payroll Balances tables store very large data volumes, so
it is important to know how to design performant OTBI ad-hoc queries and stored reports and avoid unnecessary overhead on the
transactional objects in Fusion Apps.
For performance reasons, the large-volume Payroll Balances tables have only a few indexes, required for payroll
processing. So, it's important to design OTBI reports to utilize the available indexes and avoid very expensive full table scans
on the main table PAY_RUN_BALANCES.
The logical model enforces the join logic for PAY_PAYROLL_ACTIONS and PAY_RUN_BALANCES using the
PAY_PAYROLL_REL_ACTIONS table. End users are not recommended to use advanced Logical SQL options, as they
could produce sub-optimal execution plans with an ineffective access path for the PAY_RUN_BALANCES table.
OTBI queries have an enforced limit of fetching (and exporting) a maximum of 75,000 rows per query (the value varies by environment
shape). So, it's important to know which filters to apply to obtain the complete untrimmed resultset and avoid fetching a
much larger resultset from the database.
The next section covers the recommendations for applying logical filters to OTBI Payroll Balances Subject Area.
OTBI Payroll Balances Filter Recommendations
There are several categories of reports, grouped by the applied filters and their combinations.
"Payroll Actions"."Effective Date" = date '<YYYY-MM-DD>'
The application of the "Payroll Actions"."Effective Date" filter alone is not recommended. If applied alone, using 'equal to', 'less than' or
'greater than' a specific date, it can result in a very large resultset exceeding the 75K limit, especially for very large payroll customers.
Besides, the single Effective Date filter would result in the optimizer picking an inefficient plan with a PAY_RUN_BALANCES full table
scan. To avoid the expensive full table scan, use the second filter "Balance Value Details"."Effective Date" mirroring the date value, i.e.
use:
"Balance Value Details"."Effective Date" = date '<YYYY-MM-DD>' and "Payroll
Actions"."Effective Date" = date '<YYYY-MM-DD>'
Such a filter application will constrain the volumes in the large PAY_RUN_BALANCES table, resulting in the use of its index on
EFFECTIVE_DATE instead of a heavy full table scan.
Plan example with PAY_RUN_BALANCES Full Table Scan fetching 106M rows w/o filter "Balance Value Details"."Effective Date":
--------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation                    | Name                        | Starts | E-Rows | E-Bytes | Cost (%CPU)| E-Time   | A-Rows | A-Time      | Buffers | Reads |
--------------------------------------------------------------------------------------------------------------------------------------------------------------
...
|  41| NESTED LOOPS                 |                             |      1 |  26250 |    844K |   524  (0)| 00:00:07 |  18309 | 00:00:00.03 |     250 |     0 |
|* 42| TABLE ACCESS BY INDEX ROWID  | PAY_PAYROLL_ACTIONS         |      1 |     70 |    1330 |    37  (0)| 00:00:01 |     64 | 00:00:00.01 |      51 |     0 |
|* 43| INDEX SKIP SCAN              | PAY_PAYROLL_ACTIONS_N2      |      1 |     70 |         |    11  (0)| 00:00:01 |     64 | 00:00:00.01 |       8 |     0 |
|* 44| INDEX RANGE SCAN             | PAY_PAYROLL_REL_ACTIONS_N50 |     64 |    375 |         |     3  (0)| 00:00:01 |  18309 | 00:00:00.03 |     199 |     0 |
|* 45| TABLE ACCESS BY INDEX ROWID  | PAY_PAYROLL_REL_ACTIONS     |  18309 |    375 |    5250 |    21  (0)| 00:00:01 |  18309 | 00:00:00.14 |    1155 |   343 |
|  46| JOIN FILTER USE              | :BF0003                     |      1 |   106M |   3663M |  375K  (1)| 01:15:11 |   106M | 00:00:31.88 |   1367K | 1367K |
|* 47| TABLE ACCESS STORAGE FULL    | PAY_RUN_BALANCES            |      1 |   106M |   3663M |  375K  (1)| 01:15:11 |   106M | 00:00:17.15 |   1367K | 1367K |
Plan example with PAY_RUN_BALANCES Index Skip Scan limiting the number of fetched rows to 1.3M with the filter "Balance
Value Details"."Effective Date":
-------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation                    | Name                     | Starts | E-Rows | E-Bytes | Cost (%CPU)| E-Time   | A-Rows | A-Time      | Buffers |
-------------------------------------------------------------------------------------------------------------------------------------------------
...
|* 27| TABLE ACCESS STORAGE FULL    | PER_ALL_ASSIGNMENTS_M    |      1 |   3311 |    184K |  1371  (1)| 00:00:17 |   5104 | 00:00:00.17 |    101K |
|* 28| HASH JOIN                    |                          |      1 |  1473K |     94M | 22976  (1)| 00:04:36 |  1393K | 00:00:01.93 |   21948 |
|* 29| TABLE ACCESS STORAGE FULL    | PAY_PAY_RELATIONSHIPS_DN |      1 |   6684 |    150K |    68  (0)| 00:00:01 |   6684 | 00:00:00.01 |     247 |
|* 30| TABLE ACCESS BY INDEX ROWID  | PAY_RUN_BALANCES         |      1 |  1473K |     61M | 22904  (1)| 00:04:35 |  1393K | 00:00:01.29 |   21701 |
|* 31| INDEX SKIP SCAN              | PAY_RUN_BALANCES_N1      |      1 |  1473K |         |  3922  (1)| 00:00:48 |  1393K | 00:00:00.36 |    3700 |
When using the "Payroll Actions"."Effective Date" filter with 'less than', 'greater than' or 'between' date values, you may not
only run into the 75K resultset limit, but also incur significant workload on the storage to fetch much larger volumes and end up with
slow report performance. Make sure you analyze your Payroll data shape and use date ranges with caution. You are strongly
encouraged to add more logical filters together with Effective Date to ensure better performance.
"Worker"."Employee Display Name"=<Employee Name>
The combination of "Worker"."Employee Display Name" and "Payroll Actions"."Effective Date" filters should produce an efficient
plan and fast performance for a Payroll Balances report without the need for "Balance Value Details"."Effective Date", though adding the
"Balance Value Details"."Effective Date" filter will ensure the data in the main table PAY_RUN_BALANCES is constrained
before joining to the other Payroll tables.
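A sketch of such a filter combination (the name and date values below are made up):

```sql
SELECT ...
FROM "Payroll - Payroll Balances Real Time"
WHERE "Worker"."Employee Display Name" = 'John Smith'
AND "Payroll Actions"."Effective Date" = date '2020-12-31'
AND "Balance Value Details"."Effective Date" = date '2020-12-31'
```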
Data Security Predicates Impact on OTBI Reports Performance
OTBI enforces maximum security enabled for each user and applies all enabled data security predicates (DSP) in the generated PSQLs
to secure the output data in OTBI reports. The enabled security predicates are embedded into the generated PSQLs for every single
secure ADF logical object, used in the reports. Depending on the complexity and the volume of security roles, their generated security
predicates in OTBI PSQLs often become the primary source of the SQL complexity, heavy SQL parsing, suboptimal execution plans
and poor query performance.
Security Predicate in OTBI: Performance Recommendations
Review the following common guidelines to mitigate the impact from security overhead in OTBI reports:
1. Regularly audit the enabled security roles in your Fusion Applications environment and reduce their count as much as
possible.
2. Optimize each security predicate sub-query and ensure effective indexes in place for each sub-query.
3. Use consistent SQL patterns for your security roles, which will allow the optimizer to apply UNION ALL logic to them.
4. Design your DSPs to use the same unique key of the leading table across all sub-queries, so that the optimizer chooses UNION ALL
for all predicates. For example:
SUB-OPTIMAL DESIGN:
(SELECT OpportunitiesPEO.OPTY_ID,
. . .
FROM MOO_OPTY OpportunitiesPEO
WHERE ((OpportunitiesPEO.opty_id IN
(SELECT opty_id FROM moo_revn_partner
))
OR (OpportunitiesPEO.opty_id IN
(SELECT DISTINCT myopres.opty_id
FROM Moo_opty_resources myopres
WHERE myopres.resource_id IN (SELECT
HZ_SESSION_UTIL.GET_USER_PARTYID FROM dual)
AND myopres.access_level_code IN ('100', '200',
'300')
))
OR (
HZ_SESSION_UTIL.validate_party_bu(HZ_SESSION_UTIL.
get_user_partyid(),OpportunitiesPEO.bu_org_id) =
'VALID' )
) V346884149,
OPTIMIZED DSP:
(SELECT OpportunitiesPEO.OPTY_ID,
. . .
FROM MOO_OPTY OpportunitiesPEO
WHERE ((OpportunitiesPEO.opty_id IN
(SELECT opty_id FROM moo_revn_partner
))
OR (OpportunitiesPEO.opty_id IN
(SELECT DISTINCT myopres.opty_id
FROM Moo_opty_resources myopres
WHERE myopres.resource_id IN (SELECT
HZ_SESSION_UTIL.GET_USER_PARTYID FROM dual)
AND myopres.access_level_code IN ('100', '200',
'300')
))
OR (OpportunitiesPEO.opty_id IN
(SELECT /*DISTINCT*/ Opt.opty_id
FROM
MOO_OPTY Opt
where (
HZ_SESSION_UTIL.validate_party_bu(HZ_SESSION_UTI
L.get_user_partyid(),opt.bu_org_id) = 'VALID' )
))) ) V346884149,
5. Avoid mixing EXISTS and IN for the same secure object and choose either all EXISTS or all IN clauses.
6. Avoid using complex database views and joins to such views in your DSPs.
7. DO NOT use FND_SESSIONS table in your security sub-queries, as it could be very large in size.
8. Avoid using hints in security predicates as they typically skew the plans for the generated OTBI physical SQLs.
9. Review generated DSP security clauses in OTBI PSQLs for any security predicates based on hierarchies. Unoptimized
hierarchy based DSPs could result in significant performance impact on OTBI reports.
10. Make sure that security predicates do not cause CARTESIAN MERGE JOIN, as it could cause ORA-01652 (out of TEMP space)
error.
11. Avoid using datatype VARCHAR2_TABLE attributes and SPLIT_INTO_TABLE() function in DSPs.
12. Avoid building complex OTBI reports, joining multiple secure facts in a single query, as every single secure entity comes with
all associated security predicates and results in the additional query complexity.
13. Avoid large numbers of distinct data roles with their own unique security predicates. This can lead to heavy SQL parsing
when there are large numbers of concurrent users, even if individual users have a small number of data roles.
14. Consider using a single Global Data Role rather than multiple versions of the same role to eliminate redundancy in DSP
clauses generation in OTBI PSQLs.
15. Review the option to reduce the number of data roles in HCM OTBI reports by using HCM security profiles that generate
security predicates that secure access to data, using the context of the logged in user. Such technique can help to reduce SQL
parsing overhead, compared with approaches that use large numbers of distinct data roles with static SQL predicates.
16. Check for unintended security policies, pulled into the generated report PSQLs. For example, if you create a custom copy of a
role with data security policies already attached, and then create a data role on top of the copied role(s), such logic would
result in pulling the unintended DSPs into the generated OTBI PSQLs and cause performance overhead.
17. Design your reports with restrictive functional filters. Do not rely solely on data security predicates to filter the report’s data.
18. Avoid constructing reports using logical filter sub-queries on another subject area, included with the sole purpose of 'securing'
your data without the SA's functional usage in the report. Do your due diligence and create proper roles and DSPs per your
functional requirements. Refer to the example of such sub-optimal design below, where the report contains an 'IN' clause doing DSP
filtering only. Such a query results in an unnecessarily complex PSQL and leads to poor report performance.
SELECT ...
FROM "Workforce Performance - Performance Rating Real Time"
WHERE ...
AND ("Worker"."Person ID" IN
(SELECT "Worker"."Person ID" saw_0
FROM "Workforce Profiles - Person Profile Real Time"....)
INTERSECT
SELECT "Worker"."Person ID" saw_0 FROM "Workforce Management - Person Real Time")
Security Materialization in OTBI Reports
Fusion Applications Release 13 delivered limited support for a new feature to handle DSP SQL clause performance overhead
by implementing the option to materialize security predicates and pull in materialized DSP IDs instead of the SQL query blocks. It
reduces the number of database objects participating in security predicate joins to a single materialized table, queried via an indexed
attribute. As a result, OBIEE eliminates DSP complexity and produces a simple PSQL with efficient, indexed ID column predicates.
It ensures consistent report performance across all users. This feature is available to selected HCM subject areas only. You have
configurable options to define the frequency of materialized ID cache table seeding and purging, by session or user.
Refer to OTBI Security Materialization documentation for more details how to enable and configure the feature in your environment.
OTBI EXPORT LIMITERS AND RECOMMENDATIONS
OBIEE offers many options to present data in both interactive and scheduled reports, allowing end users to choose various
download formats, such as CSV, PDF and XLS.
Important! The BI guardrails have been put in place to constrain the processed volumes and contain the performance impact from
processing large query data in production environments, i.e. to help keep the overall system from being overloaded.
The enforced guardrails vary by customer environment shape. You can review the published FA sizing documents for more details about
shape configurations. This section covers the parameters that may cause a higher performance impact.
ResultRowLimit
This document already discussed the impact of fetching too much data from the database and the enforced ResultRowLimit of 75,000
rows (with higher values for larger shapes). The report's runtime is affected by the size of the fetched data, coming from both the
number of selected attributes and the row counts. Note that the limiter does not apply to each individual PSQL if OBIEE generates more
than one database query. That's why it's very important to employ restrictive filters to reduce the processed data, as ResultRowLimit
is applied by OBIEE as the very last step to render the final report result.
DefaultRowsDisplayedInDownloadCSV
The DefaultRowsDisplayedInDownloadCSV value has been aligned with ResultRowLimit to allow end users to download the complete
resultset in CSV format. CSV download benchmarks showed the fastest performance and the best scalability by volume. You are advised to
use the CSV format over XLS for downloading the generated resultset.
DefaultRowsDisplayedInDownload
DefaultRowsDisplayedInDownload controls the export limit for both PDF and XLS formats. It is set to a lower value than
ResultRowLimit because OBIEE takes more resources to export data into these formats. Customers are recommended to use the CSV format
for exporting reports with high row counts.
DefaultRowsDisplayedInDelivery
DefaultRowsDisplayedInDelivery limits the row counts fetched into online reports and via scheduled email delivery. Since it controls
the row counts for online reports, it is set to a more conservative value than the CSV, PDF and XLS download limiters.
ORACLE BI CLOUD CONNECTOR PERFORMANCE CONSIDERATIONS
Oracle BI Cloud Connector (BICC) uses the OBIEE technology stack for extracting Fusion View Objects (VOs) for data integration
purposes. BICC sends a logical SQL for every VO extract to OBIEE, which then processes the request, generating a physical database
SQL, running it in the FA database and fetching the data. BICC transactions can be traced both in the Usage Tracking tables and in the
OBIEE query logs. They are logged for the internal user FUSION_APPS_OBIA_BIEE_APPID. BICC queries have very few limiters, and since they
process very large volume extracts, they can generate a significant performance impact on FA production environments.
BICC OTBI Metadata Independent Mode
BICC switched to a default OTBI metadata independent mode that no longer uses the OTBI RPD metadata repository for VO extracts. BI
VOs that are used in OTBI queries may not deliver the desired extract performance due to more complex logic addressing
transactional reporting requirements, which does not scale for extracts. With the new default mode, you have the option to extract a wider
range of VOs with simplified logic that scales for extracting very large volumes in FA environments.
BICC Performance Recommendations
Review the following recommendations to ensure the best performance for your BICC extracts in FA environments:
1. Carefully construct the list of VOs for your extracts, limiting it to the required objects only. Review your extracts for any
complex VOs that result in joining multiple tables, and check whether simplified VO versions are available instead.
2. Audit the list of extract attributes for every single VO and keep to the bare minimum of extract columns needed to address your
data integration business requirements.
Important! DO NOT use the default ALL columns for your VO extracts, unless you really need them ALL. A larger number of
extract columns has a direct impact on your BICC performance, so make sure you choose the required attributes only.
3. BICC has a default extract timeout of 10 hours per VO extract. Some large-volume VOs may require more than 10 hours
to process initial volumes. You can override the default value to accommodate your initial extract completion in BIACM
by navigating to ‘Manage Offerings and Data Stores’ -> ‘Actions’ -> ‘Extract preference’ under ‘Job setting’ -> Timeout in
Hours: 10 Hours (default).
4. Plan to run your initial BICC extract jobs during off-business hours. Some initial extracts may require larger TEMP and UNDO
tablespace space, so running them during less busy times, such as weekends, minimizes the chance of running out of space.
5. BICC has been enhanced to improve data fetching by introducing three configurable parameters in ‘Manage Offerings and
Data Stores’ -> ‘Actions’ -> ‘Extract preference’. Consider setting the values to reduce the load on OBIEE for data fetching:
• DB fetch buffer size (MB): 3MB
• DB fetch row count: 1000
• BI JDBC fetch row count: 1000
Make sure you benchmark your extracts in your test environment before changing them in production.
TEMP and UNDO Tablespace Sizing
BICC extract queries process very large volumes. Some of the VOs produce query execution plans that result in significant TEMP
tablespace usage. Additionally, BICC SQLs take longer to run in FA environments, causing longer undo retention and requiring a
larger UNDO tablespace as well.
Make sure you run the internal BICC benchmarks in your TEST instance; if you run into TEMP or UNDO tablespace space issues,
reach out to Oracle Support to arrange sizing of these two tablespaces to accommodate your BICC extract jobs.
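Where database-level diagnostics are available (in Fusion Cloud environments this typically means via Oracle Support), TEMP consumption by running extract sessions can be checked with a query along these lines. This is a sketch against standard Oracle dynamic performance views, not a BICC-specific tool, and requires DBA privileges:

```
-- Sketch only: standard Oracle dynamic views; requires DBA-level access
SELECT s.sid,
       s.username,
       u.tablespace,
       ROUND(u.blocks * p.value / 1024 / 1024) AS temp_mb
  FROM v$tempseg_usage u
  JOIN v$session s
    ON s.saddr = u.session_addr
 CROSS JOIN (SELECT value FROM v$parameter WHERE name = 'db_block_size') p
 ORDER BY temp_mb DESC;
```

Sessions at the top of this list during a benchmark run indicate the VO extracts driving TEMP growth.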
BICC Jobs Design Optimization for Better Performance
BICC is configured to run in a distributed FA environment on primary and high availability (HA) nodes, working in ‘active’-‘active’
mode and balancing the load. It also introduces functionality to define priority groups and priority numbers within a
job. Understanding job configuration and priority management is essential to achieve the most optimal extract orchestration and
better performance.
1. The lowest level of granularity for load balancing in BICC is the JOB level. If you configure extracts for all VOs in
a single job, BICC runs that JOB on a single BICC node.
2. BICC limits the number of parallel processing threads for VO extracts to 5 at the node level.
3. While BICC balances load at the JOB level, the OBIEE cluster balances load at the logical SQL level. Since each VO
extract is a logical SQL request in OBIEE, OBIEE load balances VO extracts (logical SQLs) from active BICC JOBs across all
available OBIEE cluster nodes.
4. The BICC concurrent thread count applies at the BICC node level and limits parallel VO extracts running on a single BICC
node across all JOBs. OBIEE does not have any limits for concurrent requests (logical SQLs).
To better understand BICC and OBIEE load balancing, consider the following example:
• You created a single JOB with 10 VO extracts, and the default BICC thread count is 5.
• BICC will start the JOB on a single node, spawning the first 5 VO extracts.
• OBIEE will distribute the 5 VO extracts between its two cluster nodes, 3 VOs going to node1 and 2 VOs to node2.
• BICC will maintain the maximum concurrency of 5, spawning more extracts as soon as OBIEE completes any of its jobs
on node1 or node2.
• BICC will keep spawning requests, maintaining a concurrency of 5, until it finishes all extracts.
5. Decouple heavy VO extracts into separate jobs. Including them in common jobs could result in them running late
in the cycle and extending the extract window. In one observed example, SubledgerJournalDistributionPVO ran alone at
the end of the extract window.
6. Consider phasing heavy extracts in time to reduce the load on the database.
7. You can use Group Number and Group Item Priority values to manage the order of VO executions within a single JOB. To
set the values for Group Number and Group Item Priority, connect to the BICC Console as Admin, navigate to ‘Manage Jobs’ ->
click the desired job name link -> ‘Edit Group’ button to open the group editing screen.
8. If you would like a certain group of VOs from a single job to execute first, set the Group Number for those VOs to a lower
value. BICC prioritizes job executions from lower to higher Group Numbers.
9. You can use Group Item Priority to set the order of VO extracts within a group sharing the same ‘Group Number’ in a single
job. BICC prioritizes executions starting with the lowest Group Item Priority and moves up through the list.
10. If you schedule multiple jobs to run at the same time, BICC will load balance the jobs by group numbers between the primary
and HA nodes, generate the list to execute on each BICC node (accounting for group numbers and group item priorities),
create a common VO extract list, and execute the list in that order. It will not change the lists or move VO extracts to a less
loaded BICC node.
11. If you configure both Data and Primary Key (PK) extracts, create two separate jobs, one for the data extract and the other
for the PKs. If you keep them in a single job, BICC will do the data extracts first and pause the PK extracts until the very
last data extract completes.
12. Carefully design your schedule and validate in Dev/Test before deploying in Production environment.
Practical scheduling, smart job design, use of group numbers and group item priorities, and decoupling data from PKs into separate
jobs should help you ensure the most efficient extract execution and load balancing in a clustered BICC and OBIEE environment.
OTBI PERFORMANCE RELATED ERRORS: RECOMMENDATIONS AND
WORKAROUNDS
This chapter covers the most common performance-related errors. The published workarounds and solutions may not address all
patterns, as the errors may manifest in other cases not mentioned here. Each case requires careful benchmarking before applying a
recommendation in a production pod.
The overwhelming majority of performance issues come from extremely complex report logic and, as a result, heavy generated
physical SQL. As discussed in the topics above, the complexity may come from data security predicates, use of cross-SAs, poorly
constructed logic, ineffective use of hierarchies, complex on-the-fly aggregations, inefficient filters applied too late in the execution
plans, etc.
Error message: [nQSError: 60009] The user request exceeded the maximum query governing execution time
Root cause: The report exceeded the query runtime limit of 10 minutes, enforced for non-admin users.
Solution / workaround: Query governance is an important limiter that enforces the maximum allowed runtime to stop heavy runaway
queries in Fusion Applications. Such reports require careful analysis for sub-optimal performance.

Error message: [nQSError: 17012] Bulk fetch failed. ORA-01013: user requested cancel of current operation
Root cause: The user cancelled the report execution, most probably because it ran too long.
Solution / workaround: Requires a close review of nqquery.log to understand the diagnostic statistics, the report design and the
applied security predicates.

Error message: [nQSError: 17010] SQL statement preparation failed. ORA-10260: limit size(2097152) of the PGA heap set by event
10261 exceeded
Root cause: The most common cases are:
1. The optimizer runs into the 2 GB heap size limit, failing to parse such complex SQL text.
2. The query execution plan results in very heavy hash joins and runs into the 2 GB heap size limit during SQL execution.
Solution / workaround: The PGA heap size limit of 2 GB is an important limiter, enforced in Fusion Applications to prevent very
complex and heavy SQLs from consuming shared database memory areas. If you cannot simplify the generated report logic, you may
have to follow the recommendations in this document to break the generated PSQL into multiple statements.

Error message: [nQSError: 17001] Oracle Error code: 1652, message: ORA-01652: unable to extend temp segment by 128 in
tablespace FUSION_TEMP
Root cause: The most common cases are:
1. The generated PSQL results in inefficient execution plans, with intermediate steps producing too high row counts.
2. The logical report uses ineffective filters, or the filters get applied too late in the generated query plans.
Solution / workaround: The report requires a logical SQL review, a careful audit of logical filters, and a review of the generated
execution plan for the PSQL to identify any intermediate steps possibly causing row explosion during SQL execution.

Error message: [nQSError: 46168] RawFile::checkSoftLimit failed because a temporary file exceeds the limit (10240 MB) specified by
the NQ_LIMIT_WRITE_FILESIZE environment variable.
Root cause: This error in OTBI reports indicates that the LSQL produces more than one PSQL and OBIEE attempted to fetch too
large a result set to OBIEE for ‘stitching’ on the BI tier.
Solution / workaround: The logical report requires careful analysis of the aggregates and logical filters, as in the example in the
section ‘Avoid Fetching Very Large Volumes from Database to OBIEE’. It may be caused by cross-SA reports, use of physical lookups
in report design, etc. Consider rewriting the LSQL to avoid non-pushed functions such as RAND() or LENGTH(). Use
DESCRIPTOR_IDOF for filter conditions involving lookups.
If you run into this error for cross-fact reports having an implicit LOJ in OBIEE execution plans, try to test your report with the
prefix:
SET VARIABLE OBIS_DBFEATURES_LEFT_OUTER_JOIN_SUPPORTED=1;

Error message: [nQSError: 17001] Oracle Error code: 1722, message: ORA-01722: invalid number at OCI call OCIStmtFetch.
Root cause: The error is related to an implicit data conversion.
Solution / workaround: Identify the problematic join condition and add explicit conversion functions such as TO_CHAR(). You may
also try to influence the optimizer by using database hints to change the order of the joins. Refer to the section discussing the use of
database hints.

Error message: [nQSError: 17001] Oracle Error code: 32036, message: ORA-32036: unsupported case for inlining of query name in
WITH clause
Root cause: OTBI enforces "_with_subquery"='inline' to optimize the performance of generated physical SQL. However, if you define
a custom data type in security predicates and use that security in OTBI, the affected reports fail with this error. An example of a
custom data type in a DSP causing the error is:
(OpportunitiesPEO.bu_org_id IN
( SELECT DISTINCT COLUMN_VALUE AS BU_ID
FROM THE
(SELECT CAST(SPLIT_INTO_TABLE(HZ_SESSION_UTIL.GET_USER_BUSINESS_UNITS) AS VARCHAR2_TABLE)
FROM DUAL)
Solution / workaround: Review and simplify the implemented security predicates. The provided example can be rewritten as below,
or you can test with the logical prefix workaround:
((HZ_SESSION_UTIL.validate_party_bu(HZ_SESSION_UTIL.get_user_partyid(), OpportunitiesPEO.bu_org_id) = 'VALID'))
Or:
SET VARIABLE OBIS_ORACLEDB_HINTS_FOR_TOP_SELECT
='OPT_PARAM(''_complex_view_merging'',''false'') OPT_PARAM(''_optimizer_distinct_agg_transform'',''false'')';

Error message: [nQSError: 42029] Subquery contains too many values for the IN predicate.
Root cause: Cross-SA logical SQLs use an IN join clause between SAs.
Solution / workaround: Try the workaround:
SET VARIABLE PERF_ENABLE_ASYMMETRIC_COND_OPTIMIZATION=1;

Error message: "Exceeded configured maximum number of allowed input records. Error Codes: EKMT3FK5:OI2DL65"
Root cause: The error manifests when exporting or opening large reports in OTBI.
Solution / workaround: Consider applying more restrictive filters to reduce the volumes, or break the report into several reports
producing smaller volumes.
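The SET VARIABLE workarounds above are applied as a prefix to the report's logical SQL (for example, in the Advanced tab of an analysis). A minimal hypothetical sketch, with a made-up subject area and column names shown only to illustrate the prefix placement:

```
-- Hypothetical subject area and columns, for illustration only
SET VARIABLE PERF_ENABLE_ASYMMETRIC_COND_OPTIMIZATION=1;
SELECT "Sample Subject Area"."Dimension"."Attribute" s_1,
       "Sample Subject Area"."Fact"."Measure"        s_2
FROM "Sample Subject Area"
ORDER BY s_1
```

The prefix ends with a semicolon and precedes the SELECT; it affects only the request it is attached to, which makes it suitable for benchmarking a workaround on a single report before any wider change.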
CONCLUSION
This document consolidates best practices and recommendations for developing and optimizing
performance of Oracle Transactional Business Intelligence for Fusion Applications Version 20A or higher.
The list of areas for performance improvement is not complete. The document will be updated with more
findings and revised recommendations, so make sure you always use the latest version. If you observe any
performance issues with your OTBI reports, carefully benchmark any recommendations or solutions
discussed in this article or other sources before implementing the changes in your production
environment.
CONNECT WITH US
Call +1.800.ORACLE1 or visit oracle.com.
Outside North America, find your local office at oracle.com/contact.
blogs.oracle.com facebook.com/oracle twitter.com/oracle
Copyright © 2021, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This
document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of
merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by
this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC
International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open
Group. 0120
Oracle Fusion Transactional Business Intelligence and BI Cloud Connector Performance Recommendations
April 2021
Authors: Pavel Buynitsky, Oksana Stepaneeva
Contributing Authors: Amar Batham, Wasimraja Abdulmajeeth
mubashirkhan45461
 
Safety Innovation in Mt. Vernon A Westchester County Model for New Rochelle a...
Safety Innovation in Mt. Vernon A Westchester County Model for New Rochelle a...Safety Innovation in Mt. Vernon A Westchester County Model for New Rochelle a...
Safety Innovation in Mt. Vernon A Westchester County Model for New Rochelle a...
James Francis Paradigm Asset Management
 
Just-In-Timeasdfffffffghhhhhhhhhhj Systems.ppt
Just-In-Timeasdfffffffghhhhhhhhhhj Systems.pptJust-In-Timeasdfffffffghhhhhhhhhhj Systems.ppt
Just-In-Timeasdfffffffghhhhhhhhhhj Systems.ppt
ssuser5f8f49
 
Thingyan is now a global treasure! See how people around the world are search...
Thingyan is now a global treasure! See how people around the world are search...Thingyan is now a global treasure! See how people around the world are search...
Thingyan is now a global treasure! See how people around the world are search...
Pixellion
 
Molecular methods diagnostic and monitoring of infection - Repaired.pptx
Molecular methods diagnostic and monitoring of infection  -  Repaired.pptxMolecular methods diagnostic and monitoring of infection  -  Repaired.pptx
Molecular methods diagnostic and monitoring of infection - Repaired.pptx
7tzn7x5kky
 
Perencanaan Pengendalian-Proyek-Konstruksi-MS-PROJECT.pptx
Perencanaan Pengendalian-Proyek-Konstruksi-MS-PROJECT.pptxPerencanaan Pengendalian-Proyek-Konstruksi-MS-PROJECT.pptx
Perencanaan Pengendalian-Proyek-Konstruksi-MS-PROJECT.pptx
PareaRusan
 
Conic Sectionfaggavahabaayhahahahahs.pptx
Conic Sectionfaggavahabaayhahahahahs.pptxConic Sectionfaggavahabaayhahahahahs.pptx
Conic Sectionfaggavahabaayhahahahahs.pptx
taiwanesechetan
 
computer organization and assembly language.docx
computer organization and assembly language.docxcomputer organization and assembly language.docx
computer organization and assembly language.docx
alisoftwareengineer1
 
Ppt. Nikhil.pptxnshwuudgcudisisshvehsjks
Ppt. Nikhil.pptxnshwuudgcudisisshvehsjksPpt. Nikhil.pptxnshwuudgcudisisshvehsjks
Ppt. Nikhil.pptxnshwuudgcudisisshvehsjks
panchariyasahil
 
AI Competitor Analysis: How to Monitor and Outperform Your Competitors
AI Competitor Analysis: How to Monitor and Outperform Your CompetitorsAI Competitor Analysis: How to Monitor and Outperform Your Competitors
AI Competitor Analysis: How to Monitor and Outperform Your Competitors
Contify
 
Data Science Courses in India iim skills
Data Science Courses in India iim skillsData Science Courses in India iim skills
Data Science Courses in India iim skills
dharnathakur29
 
LLM finetuning for multiple choice google bert
LLM finetuning for multiple choice google bertLLM finetuning for multiple choice google bert
LLM finetuning for multiple choice google bert
ChadapornK
 
Calories_Prediction_using_Linear_Regression.pptx
Calories_Prediction_using_Linear_Regression.pptxCalories_Prediction_using_Linear_Regression.pptx
Calories_Prediction_using_Linear_Regression.pptx
TijiLMAHESHWARI
 
Principles of information security Chapter 5.ppt
Principles of information security Chapter 5.pptPrinciples of information security Chapter 5.ppt
Principles of information security Chapter 5.ppt
EstherBaguma
 
Digilocker under workingProcess Flow.pptx
Digilocker  under workingProcess Flow.pptxDigilocker  under workingProcess Flow.pptx
Digilocker under workingProcess Flow.pptx
satnamsadguru491
 
Cleaned_Lecture 6666666_Simulation_I.pdf
Cleaned_Lecture 6666666_Simulation_I.pdfCleaned_Lecture 6666666_Simulation_I.pdf
Cleaned_Lecture 6666666_Simulation_I.pdf
alcinialbob1234
 
chapter 4 Variability statistical research .pptx
chapter 4 Variability statistical research .pptxchapter 4 Variability statistical research .pptx
chapter 4 Variability statistical research .pptx
justinebandajbn
 
Ch3MCT24.pptx measure of central tendency
Ch3MCT24.pptx measure of central tendencyCh3MCT24.pptx measure of central tendency
Ch3MCT24.pptx measure of central tendency
ayeleasefa2
 
Ad

Otbi and bicc_psr_technote_v3_final_document

PURPOSE STATEMENT
This document covers performance topics and best practices for Oracle Transactional Business Intelligence (OTBI) and Business Intelligence Cloud Connector (BICC) for Fusion Applications Release 20A and higher. Most of the recommendations are generic to OTBI and BICC content as well as the Oracle Business Intelligence Enterprise Edition (OBIEE) technology stack. Release-specific topics will refer to exact version numbers.
Note: The document is intended for BI system integrators, BI report developers and administrators. It covers advanced performance tuning techniques in OTBI, BICC, OBIEE and Oracle RDBMS. All recommendations must be carefully verified in a test environment before being applied to a production instance.
DISCLAIMER
This document in any form, software or printed matter, contains proprietary information that is the exclusive property of Oracle. Your access to and use of this confidential material is subject to the terms and conditions of your Oracle software license and service agreement, which has been executed and with which you agree to comply. This document and the information contained herein may not be disclosed, copied, reproduced or distributed to anyone outside Oracle without prior written consent of Oracle. This document is not part of your license agreement nor can it be incorporated into any contractual agreement with Oracle or its subsidiaries or affiliates.
This document is for informational purposes only and is intended solely to assist you in planning for the implementation and upgrade of the product features described. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described in this document remains at the sole discretion of Oracle. Due to the nature of the product architecture, it may not be possible to safely include all features described in this document without risking significant destabilization of the code.
TABLE OF CONTENTS
Purpose Statement 2
Disclaimer 2
Introduction 5
Logical SQL Concepts 5
Understanding OTBI Query Execution 5
OTBI and BICC Performance Monitoring and Diagnostics 6
OBIEE Usage Tracking for OTBI and BICC Monitoring 6
Query Log Analysis 6
Guidelines for Analyzing Reports Performance 10
Best Practices for OTBI Dashboards and Reports Design 11
Dashboard Design Recommendations 11
Dashboard Design: Best Practices 11
Dashboard Prompts Recommendations 11
Choice of Logical Entities in Prompts 11
Use Default Prompt Values 12
Logical Aggregate Functions in Prompt Filters 13
Use ‘Calendar’ Inputs in Date Prompts 13
Use Text Field in Prompts 14
Use of Lookup Tables vs. Dimensions in Prompts 17
“OTBI HCM Prompts” Subject Area for Manager Prompts 17
Report Design Recommendations 17
Use Restrictive Filters to Constrain Result Set 17
Use Indexed Logical Attributes in Logical Joins and Filters 18
Eliminate Duplicate and Redundant Filters in Logical SQLs 18
Avoid Fetching Very Large Volumes from Database to OBIEE 18
Limit the Number of Logical Columns and ORDER BY Attributes in Reports 20
Review LOBs Usage in OTBI Reports 21
Reduce Multiple Logical Unions in OTBI Reports 22
Cross Subject Area Reports Considerations 22
Cross Fact Reports Recommendations 22
Use of OTBI Reports as Infolets in FA Embedded Pages 23
Considerations for Using Custom Extensible Attributes and Flex Fields in Logical Filters 24
Considerations for Using Hierarchical Attributes in Logical Filters 24
Employ DESCRIPTOR_IDOF Function in Logical Filters 24
Generated PSQL(s) with a Large Number In-List Values or BIND Variables Optimization 26
CONSTANT_OPTIMIZATION_LEVEL and CONSTANT_CASE_OPTIMIZATION_LEVEL Request Variables 27
Use REPORT_SUM/REPORT_COUNT Instead of REPORT_AGGREGATE Logical Functions 27
Use COUNT Instead of COUNT(DISTINCT) 29
EVALUATE vs. EVALUATE_ANALYTIC Considerations 30
Use of Logical Joins in OTBI Reports 30
Use of OBIEE CACHE Variables for Diagnostics in OTBI Reports 30
Use of OBIS_REFRESH_CACHE in OTBI Reports 30
Use of Database Hints to Optimize Generated Queries Plans 30
Use of MATERIALIZE Hint in Cross Subject Area and Cross Reports 31
Functional Design Patterns Affecting OTBI Reports Performance 32
"Time"."Fiscal Period" Performance Considerations in Financial Reports 32
Required Use of Logical UPPER() Function in Logical Filters 33
"Payroll - Payroll Run Results Real Time" Subject Area Recommendations 33
“Payroll - Payroll Balances Real Time” Subject Area Recommendations 34
Data Security Predicates Impact on OTBI Reports Performance 35
Security Predicate in OTBI: Performance Recommendations 35
Security Materialization in OTBI Reports 36
OTBI Export Limiters and Recommendations 36
ResultRowLimit 36
DefaultRowsDisplayedInDownloadCSV 36
DefaultRowsDisplayedInDownload 37
DefaultRowsDisplayedInDelivery 37
Oracle BI Cloud Connector Performance Considerations 37
BICC OTBI Metadata Independent Mode 37
BICC Performance Recommendations 37
TEMP and UNDO Tablespace Sizing 37
BICC Jobs Design Optimization for Better Performance 38
OTBI Performance Related Errors: Recommendations and Workarounds 39
Conclusion 41
INTRODUCTION
With Oracle Transactional Business Intelligence embedded analytics, role-based dashboards, and on-the-fly ad-hoc query capabilities, business users have very powerful means for accessing, interpreting and analyzing real-time data in Oracle Fusion Applications (FA). End users can put together sophisticated reports with various custom attributes, filters, join conditions and analytic functions in just a few mouse clicks. It is very important to know how to design such dashboards, prompts, reports and ad-hoc queries in OTBI to ensure maximum performance and scalability, and minimum impact on Fusion Middleware and database resources. The complexity of generated OTBI content can come from:
− Custom dashboards and reports design
− Data Security Predicates (DSP)
− Functional requirements to report contents
− Custom Subject Areas
− Customizations and extensibility attributes
− Use of cross Subject Areas in a single report
− A combination of the factors above
Oracle Business Intelligence Cloud Connector (BICC) provides the functionality to extract data from Fusion Applications and load it into Oracle Universal Content Management (UCM) Server or cloud storage in CSV format. It supports initial and incremental data extracts, as well as primary key extracts for tracking deletes. The document will cover each topic in more detail, with examples and recommendations on how to improve design and achieve better performance. Some examples refer to advanced Logical SQL options, as well as generated physical query patterns and database query plans, so the targeted audience is expected to know SQL basics and be able to read and understand sample execution plans. 
Logical SQL Concepts
OBIEE uses its own internal language to describe reports, prompts and initialization blocks in the form of Logical SQL (LSQL). The Logical SQL text uniquely identifies a report's business logic:
SET VARIABLE QUERY_SRC_CD='Report', SAW_SRC_PATH='/shared/Custom/Sample Report/Manager Name Query', PREFERRED_CURRENCY='Local Currency';
SELECT "Compensation Manager"."Manager Name" saw_0
FROM "Compensation - Workforce Compensation Real Time"
WHERE "Compensation Manager"."Manager Name" = 'Joe Doe'
ORDER BY saw_0
− SET VARIABLE defines the report type (QUERY_SRC_CD), its stored catalog path (SAW_SRC_PATH) and other attributes. It also includes any OBIEE hints and directives, defined as query PREFIX values in OBIEE Answers -> Advanced Tab -> Prefix field. Some of the directives are discussed in this document.
− SELECT specifies all queried logical attributes. There may be more than one SELECT clause in a Logical SQL.
− FROM specifies the queried Subject Area. It may be used as a placeholder, while the SELECT clause pulls in attributes from a completely different subject area. Refer to the discussion on cross Subject Area usage and recommendations.
− WHERE includes logical joins, filters and subqueries.
− ORDER BY is automatically generated by OBIEE and includes all SELECT attributes by default.
SELECT and WHERE clauses may contain OBIEE internal functions, logical UNION, MINUS or INTERSECT, and other SQL operands. This document covers a number of techniques that involve LSQL, so it is important to read and understand the basic LSQL concepts.
Understanding OTBI Query Execution
The OTBI Metadata Repository (RPD) logical model delivers facts and dimensions on top of the Fusion Application Development Framework (ADF) business model across multiple Subject Areas (SA) in the OBIEE Presentation layer. When a business user designs a report as a logical star, i.e. 
joins a logical fact to one or more logical dimensions, OBIEE converts the designed report into a logical SQL (LSQL), and then generates one or more corresponding physical SQLs (PSQL), which then get executed in a database. OTBI logical facts and dimensions get expanded into complex joins using ADF View Objects (VO) defined in OTBI RPD Physical layer. BI Server generates
XML queries to BI Broker to translate these FA ADF VOs into SQLs that ultimately join FA database tables and views, producing the PSQL(s) that get executed in the Fusion Apps OLTP database.
OTBI AND BICC PERFORMANCE MONITORING AND DIAGNOSTICS
This section provides recommendations and guidelines for OTBI self-service monitoring and performance optimization.
OBIEE Usage Tracking for OTBI and BICC Monitoring
OTBI has delivered OBIEE Usage Tracking (UT) in the database starting from FA 19B. Every single logical SQL, whether it is a prompt, report, ad-hoc query or BICC extract, is recorded into the OBIEE Usage Tracking tables in each FA POD. UT captures all runtime metrics such as timestamps, timings, row counts, logical and physical SQL text, and errors, which can be analyzed for performance or usage anomalies, with potential issues or trends detected proactively in the FA environments. Starting from FA release 20D, OTBI delivers two subject areas that can be used for creating reports to monitor OTBI usage and performance:
• OTBI Usage Real Time: monitors OTBI usage, including user, analysis and dashboard, and subject area usage trends.
• OTBI Performance Real Time: monitors usage trends and OTBI analysis execution time, execution errors, and database physical SQL execution statistics.
Refer to Oracle Customer Connect for more details and examples of OTBI Usage Tracking reports. The example below shows a sample histogram report, combining reports and subject areas. 
At a quick glance, looking at the number of errors, total runs, and the runtime histogram, one can prioritize the reports for internal analysis:
− Usage Tracking can be used to extract the logical SQL text, which uniquely defines each report, and to review the business logic, check for valid filters, joins across subject areas, etc. The LSQL (and not the PSQLs) should always be the starting point, as it provides the important metadata to BI Server for generating PSQLs. Alternatively, you can locate and extract the LSQL in the OBIEE nqquery.log(s) if they are still available on the POD.
− Review all runtime metrics, such as "Total time in BI Server" and "Logical Compilation Time", along with each PSQL runtime; compute end-to-end runtime from timestamps, and check row counts for each PSQL and the LSQL. Both Usage Tracking and nqquery.log capture these summaries. Refer to the detailed metric descriptions in the next section.
− Since BICC issues LSQLs for its data extracts, the same Usage Tracking can be used for monitoring BICC jobs too. All BICC LSQLs are executed by a special system user, FUSION_APPS_OBIA_BIEE_APPID, which can be used for filtering extracts. Note that "Total time in BI Server" may not account for the data fetching time. You can use start and end timestamps to calculate more accurate extract run times in your monitoring reports.
Query Log Analysis
OBIEE generates detailed diagnostic events at the default second trace level [TRACE:2] and records them into its nqquery.log. This trace level is sufficient for most diagnostic activities in FA. Some analyses may require the higher trace level 7, which can be set via the "SET VARIABLE LOGLEVEL=7;" prefix. You can set it in OBIEE Answers -> Advanced tab -> Prefix section when you design or optimize your report. Important! Make sure you remove LOGLEVEL=7 from your report prefix after you complete your performance analysis. The logs are retained on each POD for the primary and high availability (HA) nodes for up to 10 days. 
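As noted above, "Total time in BI Server" may not account for client data fetching, so the first and last log timestamps of a request give a more accurate end-to-end runtime for heavy extracts. Below is a minimal Python sketch of that calculation, assuming the timestamp format used in nqquery.log entries; the helper names are illustrative, not part of any Oracle tooling.

```python
from datetime import datetime

# nqquery.log event timestamps look like: 2019-01-30T00:30:36.273+00:00
TS_FORMAT = "%Y-%m-%dT%H:%M:%S.%f%z"

def end_to_end_seconds(start_raw: str, end_raw: str) -> float:
    """Runtime between the first and last log events of one request
    (same [requestid]); unlike 'Total time in BI Server', this also
    covers the client's data fetch time."""
    start = datetime.strptime(start_raw, TS_FORMAT)
    end = datetime.strptime(end_raw, TS_FORMAT)
    return (end - start).total_seconds()

# Timestamps taken from the sample log excerpt in this document
elapsed = end_to_end_seconds("2019-01-30T00:30:36.273+00:00",
                             "2019-01-30T00:31:00.590+00:00")
```

For the sample request shown later in this section, this yields roughly 24.3 seconds, which matches the reported "Total time in BI Server" because the result set was small; for large BICC extracts the timestamp-based figure can be substantially larger.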
If you have the Administrator role, you can retrieve the same log from OBIEE Answers by navigating to the 'Administration' link -> 'Manage Sessions' -> your session link. Below is
  • 7. 7 WHITE PAPER | Oracle Fusion Transactional Business Intelligence and BI Cloud Connector Performance Recommendations | Version 3.0 Copyright © 2021, Oracle and/or its affiliates | Confidential – Public the summary example for a single report, extracted from nqquery.log with the detailed explanation of the important metrics and events: [2019-01-30T00:30:36.273+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:256:3] [tid: efa0700] [messageid: USER-0] [requestid: cec70034] [sessionid: cec70000] [username: [email protected]] ############################################## [[ -------------------- SQL Request, logical request hash: d9cd7a2f SET VARIABLE QUERY_SRC_CD='Report',SAW_DASHBOARD='/shared/Custom/S2S Reports/S2SR Team/AGENTS/ApOps/Daily/Supplier Registration/Daily Alert - Supplier Registration',SAW_DASHBOARD_PG='Strategic Buyer Review',SAW_SRC_PATH='/shared/Custom/S2S Reports/S2SR Team/AGENTS/ApOps/Daily/Supplier Registration/Strategic Buyer Review',PREFERRED_CURRENCY='Document Currency';SELECT 0 s_0, "Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Area Question"."Responder Type" s_1, "Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Detail"."Qualification Evaluation Date" s_2, "Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Detail"."Qualification Finalization Date" s_3, "Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Detail"."Qualification Outcome" s_4, "Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Evaluated By"."Email Address" s_5, "Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Owner"."Buyer Email Address" s_6, "Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Owner"."Buyer Name" s_7, "Supplier Qualification - Qualifications and Assessments Real 
Time"."Qualification Response Detail"."Acceptance Date" s_8, "Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Response Detail"."Response Date" s_9, "Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Supplier Profile"."Supplier Name" s_10, "Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Supplier Profile"."Supplier Number" s_11, DESCRIPTOR_IDOF("Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Area Question"."Responder Type") s_12 FROM "Supplier Qualification - Qualifications and Assessments Real Time" WHERE (("Qualification Response Detail"."Acceptance Date" IS NOT NULL) AND ("Qualification Detail"."Qualification Outcome" IS NULL) AND (DESCRIPTOR_IDOF("Supplier Qualification - Qualifications and Assessments Real Time"."Qualification Area Question"."Responder Type") = 'INTERNAL')) ORDER BY 1, 7 ASC NULLS LAST, 8 ASC NULLS LAST, 5 ASC NULLS LAST, 10 ASC NULLS LAST, 3 ASC NULLS LAST, 4 ASC NULLS LAST, 6 ASC NULLS LAST, 9 ASC NULLS LAST, 12 ASC NULLS LAST, 11 ASC NULLS LAST, 2 ASC NULLS LAST, 13 ASC NULLS LAST FETCH FIRST 500001 ROWS ONLY ]] [2019-01-30T00:30:36.273+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:256:3] [tid: efa0700] [messageid: USER-23] [requestid: cec70034] [sessionid: cec70000] [username: [email protected]] -------------------- General Query Info: [[ Repository: Star, Subject Area: Core, Presentation: Supplier Qualification - Qualifications and Assessments Real Time ]] [2019-01-30T00:30:36.312+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:256:5] [tid: efa0700] [messageid: USER-18] [requestid: cec70034] [sessionid: cec70000] [username: [email protected]] -------------------- Sending query to database named oracle.apps.fscm.model.analytics.applicationModule.FscmTopModelAM_FscmTopModelAMLocal (id: <<1158231996>> 
SQLBypass Gateway), connection pool named Connection Pool, logical request hash d9cd7a2f, physical request hash ed5d81bd: [[ <?xml version="1.0" encoding="UTF-8" ?> <ADFQuery mode="SQLBypass" queryid="1547096721-2023840527" locale="en"> <Parameters> <Parameter><Name><![CDATA[AOL_LANGUAGE]]></Name><Value><![CDATA[en]]></Value></Parameter> <Parameter><Name><![CDATA[OTBI_CDS_ENABLED]]></Name><Value><![CDATA[false]]></Value></Parameter> </Parameters> <Projection> <Attribute><Name><![CDATA[QualificationQualEvaluationDate]]></Name><ViewObject><![CDATA[FscmTopModelAM.PrcPoqPublicVi ewAM.QualificationResponsesPVO]]></ViewObject></Attribute> <Attribute><Name><![CDATA[QualificationQualCompletedDate]]></Name><ViewObject><![CDATA[FscmTopModelAM.PrcPoqPublicVie wAM.QualificationResponsesPVO]]></ViewObject></Attribute> ... ... ... </ViewCriteriaRow> </ViewCriteria> </DetailFilter> </ADFQuery> ]] [2019-01-30T00:30:36.525+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:256:5] [tid: efa0700] [messageid: USER-18] [requestid: cec70034] [sessionid: cec70000] [username: [email protected]] -------------------- Sending query to database named oracle.apps.fscm.model.analytics.applicationModule.FscmTopModelAM_FscmTopModelAMLocal (id: <<1158232468>> SQLBypass Gateway), connection pool named Connection Pool, logical request hash d9cd7a2f, physical request hash 174c60f0: [[ <?xml version="1.0" encoding="UTF-8" ?> <ADFQuery mode="SQLBypass" queryid="1782285574-110497155" locale="en"> <Parameters> <Parameter><Name><![CDATA[AOL_LANGUAGE]]></Name><Value><![CDATA[en]]></Value></Parameter> <Parameter><Name><![CDATA[OTBI_CDS_ENABLED]]></Name><Value><![CDATA[false]]></Value></Parameter> </Parameters> <Projection> <Attribute><Name><![CDATA[Meaning]]></Name><ViewObject><![CDATA[FscmTopModelAM.AnalyticsServiceAM.LookupValuesTLPVO]] ></ViewObject></Attribute> 
<Attribute><Name><![CDATA[LookupCode]]></Name><ViewObject><![CDATA[FscmTopModelAM.AnalyticsServiceAM.LookupValuesTLPV O]]></ViewObject></Attribute>
  • 8. 8 WHITE PAPER | Oracle Fusion Transactional Business Intelligence and BI Cloud Connector Performance Recommendations | Version 3.0 Copyright © 2021, Oracle and/or its affiliates | Confidential – Public ... ... ... </ViewCriteriaRow> </ViewCriteria> </DetailFilter> </ADFQuery> ]] [2019-01-30T00:30:36.620+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:256:5] [tid: efa0700] [messageid: USER-18] [requestid: cec70034] [sessionid: cec70000] [username: [email protected]] -------------------- Sending query to database named FSCM_OLTP (id: <<1158231989>>), connection pool named Connection Pool, logical request hash d9cd7a2f, physical request hash d91d1d65: [[ WITH SAWITH0 AS (select T2521292.C277165028 as c3, T2521292.C440229984 as c4, T2521292.C104108913 as c5, T2521292.C168651775 as c6, T2521292.C398378115 as c7, T2521292.C527336469 as c8, T2521292.C402205937 as c9, T2521292.C248158144 as c10, T2521292.C203825585 as c11, T2521292.C323161287 as c12, T2521292.C464440102 as c13, T2521292.C410075615 as c15 from (SELECT V228191905.QUAL_EVALUATION_DATE AS C277165028, ... ... ... 
from SAWITH3 D901) select D1.c1 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4, D1.c5 as c5, D1.c6 as c6, D1.c7 as c7, D1.c8 as c8, D1.c9 as c9, D1.c10 as c10, D1.c11 as c11, D1.c12 as c12, D1.c13 as c13 from ( select distinct 0 as c1, D1.c2 as c2, D1.c3 as c3, D1.c4 as c4, D1.c5 as c5, D1.c6 as c6, D1.c7 as c7, D1.c8 as c8, D1.c9 as c9, D1.c10 as c10, D1.c11 as c11, D1.c12 as c12, D1.c13 as c13 from SAWITH4 D1 order by c7, c8, c5, c10, c3, c4, c6, c9, c12, c11, c2, c13 ) D1 where rownum <= 500001 ]] [2019-01-30T00:30:36.621+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:256:5:3] [tid: 16620700] [messageid: USER-18] [requestid: cec70034] [sessionid: cec70000] [username: [email protected]] -------------------- Sending query to database named FSCM_OLTP (id: <<1158231989>> pre query 0), connection pool named Connection Pool, logical request hash d9cd7a2f, physical request hash e9e71988: [[ BEGIN fnd_session_mgmt.attach_session('80A2F84E0AAE4340E05302313D0A7036'); END; ]] [2019-01-30T00:31:00.586+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:256:5:3] [tid: 16620700] [messageid: USER-18] [requestid: cec70034] [sessionid: cec70000] [username: [email protected]] -------------------- Sending query to database named FSCM_OLTP (id: <<1158231989>> post query 0), connection pool named Connection Pool, logical request hash d9cd7a2f, physical request hash d90563e6: [[ BEGIN fnd_session_mgmt.detach_session; END; ]] [2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-34] [requestid: cec70034] [sessionid: cec70000] [username: [email protected]] -------------------- Query Status: Successful Completion [2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-26] 
[requestid: cec70034] [sessionid: cec70000] [username: [email protected]] -------------------- Rows 2, bytes 65544 retrieved from database query id: <<1158231996>> SQLBypass Gateway [2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-28] [requestid: cec70034] [sessionid: cec70000] [username: [email protected]] -------------------- Physical query response time 0.210 (seconds), id <<1158231996>> SQLBypass Gateway [2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-26] [requestid: cec70034] [sessionid: cec70000] [username: [email protected]] -------------------- Rows 1, bytes 32772 retrieved from database query id: <<1158232468>> SQLBypass Gateway [2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-28] [requestid: cec70034] [sessionid: cec70000] [username: [email protected]] -------------------- Physical query response time 0.080 (seconds), id <<1158232468>> SQLBypass Gateway [2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-26] [requestid: cec70034] [sessionid: cec70000] [username: [email protected]] -------------------- Rows 46, bytes 570032 retrieved from database query id: <<1158231989>> [2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-28] [requestid: cec70034] [sessionid:
  • 9. 9 WHITE PAPER | Oracle Fusion Transactional Business Intelligence and BI Cloud Connector Performance Recommendations | Version 3.0 Copyright © 2021, Oracle and/or its affiliates | Confidential – Public cec70000] [username: [email protected]] -------------------- Physical query response time 23.964 (seconds), id <<1158231989>> [2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-29] [requestid: cec70034] [sessionid: cec70000] [username: [email protected]] -------------------- Physical Query Summary Stats: Number of physical queries 3, Cumulative time 24.254, DB-connect time 0.002 (seconds) [2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-24] [requestid: cec70034] [sessionid: cec70000] [username: [email protected]] -------------------- Rows returned to Client 46 [2019-01-30T00:31:00.590+00:00] [OracleBIServerComponent] [TRACE:2] [] [] [ecid: 005WA82e9RH66Ut_odl3iY000EXm000_na,0:1:3:2:1:257] [tid: efa0700] [messageid: USER-33] [requestid: cec70034] [sessionid: cec70000] [username: [email protected]] -------------------- Logical Query Summary Stats: Elapsed time 24.322, Total time in BI Server 24.321, Response time 24.322, Compilation time 0.352 (seconds), Logical hash d9cd7a2f Highlighted are the important fields and metrics to check for. Note the more critical fields are printed in bold below: [requestid] uniquely identifies all events for a single report execution in nqquery.log [username] captures the user that issued the report <Logical request hash: value> uniquely identifies the report’s signature, which is its logical SQL text. It’s stamped at the beginning and the end of the report log. Start and end timestamps can be used to calculate the total time, that includes the data fetch time. 
This calculation is especially important for reports that produce large row counts, or for BICC extracts extracting and uploading the data. OBIEE keeps its cursor open and waits for its consumer (BICC, for example) until all the records have been fetched. This ‘fetch’ time may not be reflected in ‘Total Time in BI Server’.
SAW_DASHBOARD and SAW_DASHBOARD_PG define the catalog path for the stored dashboard and the dashboard page name hosting a report.
SAW_SRC_PATH is the catalog path for the customer’s report.
Logical SQL starts with a <SET VARIABLE> clause, which includes OBIEE directives and variables defined in the Advanced tab’s Prefix section. Refer to more details on VARIABLEs in the sections below. The logical SQL text defines the report business logic.
BI Server generates two types of physical queries (PSQLs) and prints the complete XML or SQL text:
− XML ADFQuery type, which connects to the WLS connection pool; each ADFQuery is associated with its unique ID <<…>> SQLBypass Gateway.
− Database SQL type, which connects to the Oracle database and is also associated with its unique ID <<…>>. Note it does not have SQLBypass Gateway; it is executed against the (CRM/HCM/FSCM/GRC)_OLTP database.
− BI Server issues pre- and post-SQLs to connect to the FA database, attaching and then detaching via the FND API. Note that the detach call is logged with a much later timestamp, though it comes right after the attach call in the log.
BI Server prints several summary lines at the end of the report execution:
− Each <Physical query response time (seconds)> is the OBIEE-reported time for a single query, issued to the WLS connection pool to read the ADF SQL query block, or to the FA DB to retrieve the results from the database. You can identify the query by its OBIEE SQL ID. Note that the query response time does not reflect the consumer wait time to fetch the data.
− Each Physical query <Rows> count shows the rows retrieved from the WLS or %_OLTP (DB) connection pools.
− <bytes retrieved> is the OBIEE-estimated amount of fetched data, based on maximum row size.
This metric helps to measure the volume impact from heavy reports or BICC extract data fetching.
<Physical Query Summary Stats> shows:
− <Number of physical queries> is the total number of XML ADFQueries and DB SQLs; in the example, the total of 3 reflects 2 ADFQueries and one Oracle DB SQL.
− <Cumulative time> is the sum of all PSQL runtimes. Note that the queries run in parallel, so the sum is not representative of the total time spent in WLS or the Oracle DB.
− <DB-connect time> is the time to establish the DB connection, typically a tiny fraction of the total time.
− <Rows returned to Client> is the number of rows produced by the report.
<Logical Query Summary Stats> shows:
− <Elapsed time> is the lifetime of the OBIEE cursor. It is not a representative metric for report runtime.
− <Total Time in BI Server> is the best metric for measuring report runtime. It includes all runtimes from all XML queries and DB SQLs as well as Logical SQL compilation time. Remember that it does not include the data fetching waits from clients such as BICC, so you need to rely on timestamps to compute the total time for such heavy jobs.
− <Compilation Time> is another important metric. It includes BI Server time on XML ADFQueries, as well as post-processing time in BI Server when it joins multiple PSQL result sets or applies non-pushed functions to the data. This metric should be watched very closely in an FA OBIEE environment.
The End timestamp completes with the same unique Logical hash value <…>.
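The metrics above can be pulled out of nqquery.log programmatically. Below is a minimal Python sketch (not an Oracle utility) that extracts physical query response times, ‘Total time in BI Server’, and the wall-clock duration between the first and last timestamps for a request. The log records follow the format of the excerpt above; the first timestamp in the sample is assumed for illustration so that the wall-clock value matches the Elapsed time of 24.322 seconds.

```python
import re
from datetime import datetime

# Sample records in the nqquery.log format shown above; the "..." elides the
# ecid/tid/messageid fields, and the first timestamp is assumed for illustration.
LOG = """\
[2019-01-30T00:30:36.268+00:00] ... [requestid: cec70034] ... Physical query response time 0.210 (seconds), id <<1158231996>>
[2019-01-30T00:31:00.590+00:00] ... [requestid: cec70034] ... Physical query response time 23.964 (seconds), id <<1158231989>>
[2019-01-30T00:31:00.590+00:00] ... [requestid: cec70034] ... Logical Query Summary Stats: Elapsed time 24.322, Total time in BI Server 24.321, Response time 24.322, Compilation time 0.352 (seconds)
"""

TS = re.compile(r'^\[(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3})')
PHYS = re.compile(r'Physical query response time ([\d.]+) \(seconds\), id <<(\d+)>>')
TOTAL = re.compile(r'Total time in BI Server ([\d.]+)')

def summarize(log_text):
    """Return (slowest PSQL id, its runtime, BI Server total, wall-clock seconds)."""
    stamps, phys, total = [], {}, None
    for line in log_text.splitlines():
        m = TS.match(line)
        if m:
            stamps.append(datetime.strptime(m.group(1), '%Y-%m-%dT%H:%M:%S.%f'))
        m = PHYS.search(line)
        if m:
            phys[m.group(2)] = float(m.group(1))
        m = TOTAL.search(line)
        if m:
            total = float(m.group(1))
    worst_id = max(phys, key=phys.get)
    # Wall clock between first and last record; includes client fetch waits
    # that 'Total time in BI Server' does not capture.
    wall = (max(stamps) - min(stamps)).total_seconds()
    return worst_id, phys[worst_id], total, wall
```

On the sample log this flags query <<1158231989>> (23.964 s) as the dominant PSQL, matching the manual analysis above.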
Guidelines for Analyzing Reports Performance
When analyzing the query log for report performance, use the following guidelines:
1. Review the Logical SQL <SET VARIABLE> clause to check for any non-standard variables and directives. They are defined in the Prefix section of the Advanced tab. Important! The VARIABLE values can have a significant impact on the SQL generation logic. They should not be applied unless you understand the impact and want to change the report generation to improve the report’s performance. This document provides guidance on and explanation of the critical variables that may be applicable to OTBI reports.
2. The Logical SQL (LSQL) text is a very important source for analysis: the use of logical attributes, logical functions, the lack or presence of comprehensive filters in a report, the implicit use of cross Subject Areas, etc. While you cannot rewrite the generated Physical SQL in the database, you can use LSQL to change the generation logic and improve your report performance. In some cases you may consider using the Advanced tab option and manually rewriting the logical SQL text. Important! Always start by inspecting your logical SQL text. Do not rush to optimize your PSQL text or tune its query execution plan in the database. Logical SQL should be carefully scrutinized before moving on to PSQL analysis.
3. Review the physical SQL(s) generated by OBIEE. Note that OBIEE may generate more than one physical SQL for your report, so use the query log or Usage Tracking stats to identify the most expensive PSQL for your review. Remember that you cannot manually rewrite the physical SQL text, but you can use the database query explain plan to ensure the DB optimizer picks the correct join order, pushes predicates, applies filtering, etc.
In some cases, you may want to influence the SQL query plan by applying database hints to your report. This option is covered in detail later in this document.
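The <Rows> and <bytes retrieved> records described earlier can also be aggregated per request to spot reports or BICC extracts that fetch large volumes. A minimal Python sketch (an assumption of this document, not an Oracle tool), again assuming log lines in the format of the earlier nqquery.log excerpt:

```python
import re
from collections import defaultdict

# Hypothetical excerpt lines in the nqquery.log format shown earlier;
# only the fields needed for this analysis are kept, "..." elides the rest.
LINES = [
    "[requestid: cec70034] ... Rows 2, bytes 65544 retrieved from database query id: <<1158231996>>",
    "[requestid: cec70034] ... Rows 1, bytes 32772 retrieved from database query id: <<1158232468>>",
    "[requestid: cec70034] ... Rows 46, bytes 570032 retrieved from database query id: <<1158231989>>",
]

ROW = re.compile(r'\[requestid: (\w+)\].*Rows (\d+), bytes (\d+) retrieved')

def fetch_volume(lines):
    """Sum rows and OBIEE-estimated bytes fetched per requestid."""
    totals = defaultdict(lambda: [0, 0])  # requestid -> [rows, bytes]
    for line in lines:
        m = ROW.search(line)
        if m:
            rid, rows, nbytes = m.group(1), int(m.group(2)), int(m.group(3))
            totals[rid][0] += rows
            totals[rid][1] += nbytes
    return dict(totals)
```

Requests whose byte totals grow toward the OBIEE TEMP limits discussed later in this document are good candidates for a filter and design review.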
BEST PRACTICES FOR OTBI DASHBOARDS AND REPORTS DESIGN
OTBI Analytics is extensively integrated into Fusion Applications. End users can execute analytic reports directly embedded into Fusion UI pages, run custom dashboards with many OTBI reports, use dashboard prompts for BI Publisher queries, execute integration reports using Web Services APIs, etc. Power users can run ad-hoc queries directly from OBIEE Answers or use BI Cloud Connector (BICC) to extract the data from FA VOs, available from the BICC Admin Console’s enabled offerings. This chapter discusses high-level recommendations for using OTBI in Fusion.
Dashboard Design Recommendations
This chapter covers the best practices and guidelines for designing OTBI Dashboards. Proper dashboard design helps to minimize performance workload and improve the end user experience in Fusion Applications environments.
Dashboard Design: Best Practices
4. Design your landing dashboard page very carefully, as it will be used the most and should not generate performance overhead from any heavy reports.
5. OBIEE Dashboards allow creating multiple pages. When designing a multi-page dashboard, do not put too many pages in one dashboard. Typically, dashboards consolidate functional report contents in five pages or fewer. All pages should be visible and clearly labeled, so end users can pick the right page without unnecessarily clicking through every single page (and triggering report executions).
6. Define a reasonable number of reports per dashboard page. Avoid scrolling in your pages. Typically, six reports or fewer utilize the dashboard space well while presenting the relevant information. OBIEE executes the reports in parallel, so a single dashboard page with many reports can generate a significant workload. Important!
If you put a large number of reports on the same dashboard page, OBIEE makes multiple parallel calls to BI Broker to translate ADF VOs into SQL queries. Such high concurrent load manifests in longer XML queries to generate ADF VO SQL definitions, adding to the reports’ total runtime, causing higher resource usage on the WLS servers that host BI Broker, and impacting regular UI flows in FA.
7. Carefully design dashboard reports; find the optimal balance between the complexity of a single report and the number of reports on a dashboard page.
8. Consolidate reports with similar report logic on the same dashboard page into as few items as possible.
9. Consider placing reports with drastically different runtimes on separate dashboard pages.
10. Do the due diligence to group related reports together on the same dashboard page to improve usability and minimize unnecessary clicks through other dashboards and pages, as every single page click triggers more report executions.
11. DO NOT include long running reports on commonly used dashboard pages, and do not embed them into FA UI pages. Instead, expose such reports as links off the dashboards or UI pages to limit their execution to interested parties only. You may consider using BI Delivers for scheduling and delivering such reports to the targeted users instead of running them online.
12. Design your dashboard to be as interactive as possible, using column selectors, drill downs and guided navigation.
13. Avoid using filters that are based on the output of other reports on a dashboard.
14. Use sub-totals and grand totals in reports wisely. Each total value results in an additional level of aggregation and may have an impact on report performance.
Dashboard Prompts Recommendations
OTBI Dashboard Prompts, which select a List of Values (LOV), often produce their own LSQLs that generate PSQLs to render the desired values.
Poorly designed prompts can result in very heavy, slowly performing PSQLs, producing long lists of values, consuming significant resources on both the BI and database tiers, and causing frustration to end users. Review the following guidelines for dashboard prompt design.
Choice of Logical Entities in Prompts
Design dashboard prompts to generate efficient logical queries. Explore alternative options to pull prompt values from less heavy logical entities, such as dimensions and lookups, instead of facts. Additionally, you can explore alternative subject areas for your prompts.
Example 1. To create a prompt with a list of direct and indirect employees in your reporting hierarchy, consider selecting the names from Resource Hierarchy and applying a filter by login. That results in a more optimal LSQL with a single logical dimension: SELECT
0 s_0, "Sales - CRM Resource"."Resource Hierarchy"."Current Base Resource Name" s_1 FROM "Sales - CRM Resource" WHERE ("Resource Hierarchy"."User Organization Hierarchy Based Login" = VALUEOF(NQ_SESSION."USER_PARTY_ID"))
The less efficient option of querying data from two separate dimensions adds an implicit logical fact to the generated query and takes much longer to extract the values for the same prompt:
SELECT "Sales - CRM Resource"."Employee"."Employee Name" s_1 FROM "Sales - CRM Resource" WHERE ("Resource Hierarchy"."User Organization Hierarchy Based Login" = VALUEOF(NQ_SESSION."USER_PARTY_ID"))
Example 2. Rather than using the "Sales – CRM Asset" SA for querying all employees for a specific sales country, the implementer can retrieve the same list of values much faster from "Sales – CRM Asset Contact":
SELECT 0 s_0, "Sales - CRM Asset Contact"."Contact"."Full Name" s_1 FROM "Sales - CRM Asset Contact" ORDER BY 1, 2 ASC NULLS LAST FETCH FIRST 75001 ROWS ONLY
vs. the less optimal LSQL for the same prompt:
SELECT 0 s_0, "Sales - CRM Asset"."Contact"."Full Name" s_1 FROM "Sales - CRM Asset" ORDER BY 1, 2 ASC NULLS LAST FETCH FIRST 75001 ROWS ONLY
Example 3. Avoid querying large HCM transactional tables that fetch all employee records when retrieving "Supervisor Full Name" in your HCM prompts.
The following LSQL restricts the data to fetch only department managers and delivers better performance:
SELECT "Department"."Supervisor Full Name", "Department"."Department Name" FROM "Workforce Management - Worker Assignment Real Time" FETCH FIRST 65001 ROWS ONLY
The original query, in contrast, hits a large HCM transaction table, fetching all employee data for all Department Managers, and delivers worse performance:
SELECT "Department"."Supervisor Full Name" FROM "Workforce Management - Worker Assignment Event Real Time" ORDER BY 1 FETCH FIRST 65001 ROWS ONLY
Use Default Prompt Values
Always define default values for your prompts.
Pick as the default the value most commonly selected by end users. Use a dummy value if you cannot identify the most common one; the reports then produce no data until the end users choose the right values. Consider customizing the ‘No Results’ message in the report(s) Analysis Properties to tell users to choose the right prompt values.
Logical Aggregate Functions in Prompt Filters
Avoid using aggregate functions in logical filters, as they may not be pushed to physical tables. For example, the use of MAX("Activity"."Actual Start Date") would not be pushed to the physical table ZMM_ACTY_ACTIVITIES below:
SELECT "Customer"."Customer Unique Name" saw_0 FROM "Sales - CRM Sales Activity" WHERE MAX("Activity"."Actual Start Date") BETWEEN timestamp '2017-01-01 0:00:00' AND timestamp '2017-01-24 00:00:00' ORDER BY saw_0 FETCH FIRST 65001 ROWS ONLY
The generated PSQL shows two WITH factored subqueries, with MAX(d1.c3) pushed into the second WITH query block, and the MAX filter value applied in the HAVING clause instead of filtering the records directly from the ZMM_ACTY_ACTIVITIES table.
WITH sawith0 AS ( SELECT ... FROM zmm_acty_activities activitypeo ... ) t1931714 ),sawith1 AS ( SELECT MAX(d1.c3) AS c2, d1.c1 AS c3 FROM sawith0 d1 GROUP BY d1.c1 HAVING MAX(d1.c3) BETWEEN TO_DATE('2017-01-01 00:00:00','YYYY-MM-DD HH24:MI:SS') AND TO_DATE('2017-01-24 00:00:00','YYYY-MM-DD HH24:MI:SS') )... WHERE ROWNUM <= 65001
If you have a functional requirement to use such aggregated filters, consider adding non-aggregated filters as well to constrain the result set in your query. Note that the same recommendation applies to prompts as well as regular reports with aggregate logical filters.
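The reason the non-aggregated filter helps can be shown with a toy example outside OTBI. The sketch below uses Python’s sqlite3 with an invented table to contrast an aggregate-only filter (which becomes a HAVING clause, evaluated only after every group has been built from all rows) with an added plain column filter (which becomes a WHERE predicate and prunes rows before grouping). As noted above, verify functional equivalence before adding such a filter: the aggregate is then computed over the filtered rows only.

```python
import sqlite3

# Toy illustration (not OTBI): table and column names are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE activities (customer TEXT, start_date TEXT)")
con.executemany("INSERT INTO activities VALUES (?, ?)", [
    ("Acme", "2017-01-10"),
    ("Acme", "2016-12-01"),
    ("Globex", "2016-06-15"),
])

# Aggregate filter only: every group is aggregated over all rows, then filtered.
having_only = con.execute("""
    SELECT customer, MAX(start_date) FROM activities
    GROUP BY customer
    HAVING MAX(start_date) BETWEEN '2017-01-01' AND '2017-01-24'
""").fetchall()

# Adding a non-aggregated WHERE predicate prunes rows before grouping,
# which is the constraining effect recommended above.
where_and_having = con.execute("""
    SELECT customer, MAX(start_date) FROM activities
    WHERE start_date >= '2017-01-01'
    GROUP BY customer
    HAVING MAX(start_date) BETWEEN '2017-01-01' AND '2017-01-24'
""").fetchall()
```

Both queries return the same single row here, but the second one lets the optimizer discard non-qualifying rows up front instead of aggregating the whole table.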
Use ‘Calendar’ Inputs in Date Prompts
Consider using ‘Calendar’ input instead of generating a ‘Choice List’. The ‘Choice List’ results in a more expensive LSQL querying the database:
SET VARIABLE QUERY_SRC_CD='ValuePrompt';SELECT "Person"."Person Date Of Birth" saw_0 FROM "Workforce Management - Person Real Time" ORDER BY saw_0 FETCH FIRST 65001 ROWS ONLY
compared to the more efficient ‘Calendar’ input.
Use Text Field in Prompts
Consider using ‘Text Field’ in User Input when you build prompts using a complex logical model, such as secure logical hierarchies. If you implement such prompts via ‘Choice List’, you may end up with a very expensive hierarchical PSQL without any filters, running for a long time and not scaling for multiple business users. ‘Text Field’ prompts should comply with security requirements and have input validation before OBIEE runs the prompt LSQL and the corresponding PSQL. You can ensure such validation by implementing a hidden second prompt. Refer to the example below for implementing a ‘Text Field’ prompt based on the Compensation Manager Hierarchy.
For example, a ‘Choice List’ prompt using the Compensation Manager Hierarchy results in the following logical SQL without any constraining logical filters:
SELECT "Compensation Manager"."Manager Name" saw_0, DESCRIPTOR_IDOF("Compensation - Workforce Compensation Real Time"."Compensation Manager"."Manager Name") saw_1 FROM "Compensation - Workforce Compensation Real Time" ORDER BY saw_0 FETCH FIRST 65001 ROWS ONLY
Such a query can take a long time to run, especially with multiple levels of the hierarchy. The same prompt can be designed to use a Text Field input and a second, hidden prompt that validates the input using the security enforced for this hierarchy in OTBI: 1.
Create a ‘Text Input’ Prompt for Compensation Manager Prompt − Prompt for: Presentation Variable (p_open_txt) − Label: Compensation Manager − User Input: Text Field − Variable Data Type: Default (Text) − Default selection: SQL Results − Enter SQL Statement to generate the list of values: SELECT "Compensation Manager"."Manager Name" FROM "Compensation - Workforce Compensation Real Time" WHERE "Worker"."Person ID" = VALUEOF(NQ_SESSION.PERSON_ID_HCM) FETCH FIRST 1 ROWS ONLY Refer to the screenshot below:
2. Create a Hidden ‘Choice List’ Prompt
The Compensation Manager Prompt (Hidden) validates the value entered into the ‘Text Field’ prompt in the example above. The validated Display Value is passed into another presentation variable, p_val_var. Even though it is defined as a ‘Choice List’, it will have a single value, because the logical SQL has an added filter in it.
− Prompt for Column: "Compensation Manager"."Manager Name"
− Label: Manager Name
− Choice List Values: SQL Results
− Enter SQL Statement to generate the list of values: SELECT "Compensation Manager"."Manager Name" FROM "Compensation - Workforce Compensation Real Time" Where "Compensation Manager"."Manager Name" = '@{p_open_txt}{" "}' FETCH FIRST 1 ROWS ONLY
− Default selection: SQL Results
− Enter SQL Statement to generate the list of values: SELECT "Compensation Manager"."Manager Name" FROM "Compensation - Workforce Compensation Real Time" WHERE "Compensation Manager"."Manager Name" = '@{p_open_txt}{" "}' FETCH FIRST 1 ROWS ONLY
− Set a variable: Presentation Variable (p_val_var)
Refer to the screenshot below:
3. Update the Prompted Filter for the affected report to depend on the variable p_val_var.
To prevent rendering incorrect results from entering blank values, space(s) or ‘%’ in the ‘Text Field’ prompt, add "AND @{p_open_txt}['@']{' '} is prompted" to the report criteria, as in the sample report below:
Criteria Filter Details: "Compensation Manager"."Manager Name" IN (@{p_val_var}['@']{' '}) AND @{p_open_txt}['@']{' '}
Refer to the screenshot below:
The generated logical SQLs include the logical filter for Manager Name:
SET VARIABLE QUERY_SRC_CD='DashboardPrompt'; SELECT "Compensation Manager"."Manager Name" FROM "Compensation - Workforce Compensation Real Time" Where "Compensation Manager"."Manager Name" = 'Joe Doe' FETCH FIRST 1 ROWS ONLY
SET VARIABLE QUERY_SRC_CD='DisplayValueMap',PREFERRED_CURRENCY='Local Currency';SELECT DESCRIPTOR_IDOF("Compensation Manager"."Manager Name") saw_0, "Compensation Manager"."Manager Name" saw_1, DESCRIPTOR_IDOF("Compensation - Workforce Compensation Real Time"."Compensation Manager"."Manager Name") saw_2 FROM "Compensation - Workforce Compensation Real Time" WHERE "Compensation Manager"."Manager Name" = 'Joe Doe' ORDER BY saw_0, saw_1
Use of Lookup Tables vs. Dimensions in Prompts
OTBI prompts may deliver better performance if forced to use lookup tables over dimensions. OBIEE allows ‘SET VARIABLE OBIS_VALUE_PROMPT_LOOKUP_DIRECT_ACCESS=1;’ to influence OBIEE’s choice of using a lookup table (value 1). This option applies to prompt-related LSQLs only (QUERY_SRC_CD = ValuePrompt or DisplayValueMap, which can be seen in nqquery.log) with columns traced to a single lookup table. Important! Make sure you verify functional equivalency and carefully benchmark the use of the variable in your prompt in a test environment before you enable it in production.
“OTBI HCM Prompts” Subject Area for Manager Prompts
When you build prompts to pull ‘Manager’ values in your HCM OTBI contents, “OTBI HCM Prompts” should be your first choice before exploring other options. This subject area has been designed to simplify HCM ‘Manager’ prompt design and deliver improved performance.
The example below uses a prompt query to retrieve Manager Name:
SELECT "Manager"."Name" FROM "Workforce Management - Worker Assignment Event Real Time" ORDER BY 1
Instead, you can use "OTBI HCM Prompts" to select Manager Name from the "Assignment Manager List" unsecured dimension:
SELECT "Assignment Manager List Unsecured"."Manager Name", descriptor_idof("Assignment Manager List Unsecured"."Manager Name") FROM "OTBI HCM Prompts" FETCH FIRST 65001 ROWS ONLY
Report Design Recommendations
OBIEE offers maximum ease and flexibility in creating various reports with complex measures and computations. The OTBI logical model is optimized to deliver fast performance for customer reports and ad-hoc queries in Answers. There may be cases, however, when poorly designed reports with unnecessarily sophisticated logic generate very heavy database SQLs and deliver sub-optimal performance. Review the guidelines and recommendations below for ensuring fast performance, and keep them in mind when you build your reports in OTBI.
Use Restrictive Filters to Constrain Result Set
1. When you create an OBIEE report, always pick effective logical filters that produce the desired numbers and prevent generating very high row counts. The end users may not even be aware that their reports fetch large volumes, and may start drill downs or switch to other reports after reviewing the first set of records.
Important! Do not create open ended reports with weak filters or no filters in place. If needed, use default values, which can be passed as filter values to such reports.
2. To prevent reports with runaway row counts, OTBI uses a default limit to fetch the first 75,000 rows (the value varies by FA POD shape), thereby yielding better performance. This limit is less effective than logical filters, since it is pushed down to the database by appending ROWNUM <= 75001 to the WHERE clause. As a result, it is applied as the very last step in the SQL execution plan, whereas filter predicates are employed in much earlier steps in the query plan.
3. Another common pattern comes from row explosion, when the database optimizer picks a SQL execution plan that joins the participating tables with one of its steps producing a row explosion, even though the final SQL result remains small. Such an intermediate row explosion directly affects SQL performance, causing heavy database I/O. It is not simple to detect and diagnose such impact, as OBIEE does not report database logical reads; such analysis is possible only by generating the SQL execution plan via DBAs with direct database access. Applying effective filters can directly influence the database optimizer’s choice of SQL execution plan, allowing it to correctly estimate join cardinalities and pick a more efficient execution path without row explosion. This document discusses more techniques and options for influencing the query execution plan in OTBI reports.
4. You may also consider other technologies, such as BI Cloud Connector (BICC) or HCM Extract in HCM Cloud, to extract much larger volumes instead of pulling the same numbers via OTBI.
5.
Avoid using function(s) on a filter unless there is a corresponding function-based index for the mapped column in the database.
6. Always choose ID columns for your logical filters, or use DESCRIPTOR_IDOF on a filter, so that OBIEE picks the appropriate ID column in the physical query. Refer to the section discussing the use of DESCRIPTOR_IDOF for more details.
7. Note that there still may be valid functional reasons for fetching larger row counts in OTBI reports and exporting them into other formats.
Use Indexed Logical Attributes in Logical Joins and Filters
It is important to identify and use logical attributes in the WHERE clause that are mapped to database columns with supporting indexes in place. While you can use virtually every presentation column for logical joins or filters, do the due diligence to find the indexed columns when designing your reports. Some presentation attributes may transform into complex calculations, or map to database functions and other expressions, coming from the OTBI logical model or ADF VO transient attributes. Unless you confirm the availability of supporting function-based indexes that match the identified expression or function, avoid using those in your logical WHERE clause. You can use the ‘Administration’ link -> ‘Manage Sessions’ -> ‘Your test report’ link in OBIEE Answers to check the generated physical SQL and identify the mapped physical columns and their expressions. Periodically review the FA “Data Lineage” guides on Oracle Customer Connect, posted after every FA release, to understand the logical business model, trace presentation attributes to physical tables and their columns, and check for existing indexes to ensure fast OTBI report performance.
Eliminate Duplicate and Redundant Filters in Logical SQLs
1. The use of duplicate logical filters can cause additional performance impact, resulting in unnecessary joins to physical tables and generating sub-optimal execution plans in the database.
For example:
AND (TIMESTAMPDIFF(SQL_TSI_DAY, MAX("Activity"."Actual End Date"), NOW()) > 90) AND ("Activity"."Actual End Date" < VALUEOF("CURRENT_DAY")))
The second logical filter, "Activity"."Actual End Date" < VALUEOF("CURRENT_DAY"), results in an extra join to the calendar table. This redundant logic can be safely removed from the report.
2. Filters must be applied once when you are using reports as filters. A logical filter should be removed from all ‘filter’ reports if the same filter is already defined in the main ‘sub-query’ report. By eliminating such redundancy, you end up with a less complicated and better performing generated PSQL.
Avoid Fetching Very Large Volumes from Database to OBIEE
Several types of reports can trigger fetching large amounts of data from the database to OBIEE for subsequent data manipulation on the OBIEE tier before the final results are produced:
Reports containing logical functions which cannot be pushed into the database. For example, the use of LENGTH() or RAND() in a logical SQL definition may result in BI Server generating multiple sub-requests with corresponding PSQLs, executing the PSQLs in the database, fetching ALL records to the OBIEE tier and ‘stitching’ the results there. Consider using CHAR_LENGTH() instead of LENGTH() to avoid breaking a physical SQL into multiple queries.
Note: starting from version 12c, OBIEE changed the LENGTH() logic. It now pushes the logic into the generated SQL query in the database and does not trigger generating multiple PSQLs.
Reports with cross facts, implicitly joined via Full Outer Join (i.e. BI Server joins logical sub-requests and picks full outer join in its execution plan), may benefit from the PERF_PREFER_INTERNAL_STITCH_JOIN feature (turned on in the RPD), which forces BI Server to break a report into multiple sub-requests, generate multiple PSQLs and fetch ALL records for ‘stitching’ on the OBIEE tier. To avoid the overhead from fetching too much data to BI Server, make sure you use effective filters in such reports.
Reports containing CLOBs, if poorly constrained, may produce significant load on the system from fetching large volumes.
Certain report logic may trigger implicit post-processing on the OBIEE tier, which requires fetching ALL rows from the database to OBIEE. The best indicator of such a scenario is a logical SQL with ‘FETCH FIRST X rows’ (default 75,000) having a single physical SQL without ‘where rownum < X’. For example, a logical SQL has:
SELECT 0 s_0, ... IFNULL("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted Amount Debit",0) -IFNULL("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted Amount Credit",0) s_43, REPORT_AGGREGATE(IFNULL("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted Amount Debit",0) -IFNULL("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted Amount Credit",0) BY ... s_44, REPORT_SUM("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted Amount Credit" BY ...
s_45, REPORT_SUM("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted Amount Debit" BY ... s_46 FROM "General Ledger - Journals Real Time" WHERE ... FETCH FIRST 75001 ROWS ONLY
However, the generated PSQL did not have WHERE ROWNUM <= 75001:
WITH sawith0 AS ( SELECT ... FROM sawith1 d1 ORDER BY ...;
As a result, the report failed with the error:
[nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 46168] RawFile::checkSoftLimit failed because a temporary file exceeds the limit (10240 MB) specified by the NQ_LIMIT_WRITE_FILESIZE environment variable.
The analysis of the report execution plan in the log (requires loglevel=7) shows:
ifnull(sum_SQL99(D1.c43 by ...) as c45 [for database 0:0,0], sum_SQL99(D1.c42 by ...) as c46 [for database 0:0,0], sum_SQL99(D1.c43 by ...) as c47 [for database 0:0,0] Child Nodes (RqJoinSpec): <<...>> [for database 0:0,0] ... Child Nodes (RqJoinSpec): ... ... RqList <<...>> [for database 3023:1418642:FSCM_OLTP,57] sum(D1.c32 by ...) as c1 [for database 3023:1418642,57], sum(D1.c34 by ...) as c2 [for database 3023:1418642,57],
Per the OBIEE logical query execution plan, REPORT_AGGREGATE got transformed into SUM_SQL99 and executed in OBIEE. In this example, the replacement of REPORT_AGGREGATE with REPORT_SUM results in all three functions being pushed into the database:
RqList <<...>> [for database 3023:1418642:FSCM_OLTP,57] sum(D1.c46) over (partition by D1.c17, ...) as c46, sum(D1.c47) over (partition by D1.c17, ...) as c47, sum(D1.c48) over (partition by D1.c17, ...) as c48
And the use of the ROWNUM filter in the generated PSQL:
WITH sawith0 AS ( SELECT ... FROM SAWITH3 D1 ) D1 where rownum <= 75001
As a result of fetching too much data to BI Server, you may run into the TEMP file size limit (default 10 GB) on the OBIEE tier and get the nQSError: 46168 error, as shown in the example above:
[nQSError: 46168] RawFile::checkSoftLimit failed because a temporary file exceeds the limit (10240 MB) specified by the NQ_LIMIT_WRITE_FILESIZE environment variable.
NQ_LIMIT_WRITE_FILESIZE is an important limiter, defining the upper value for TEMP space size on the OBIEE tier, enforced in Fusion Applications. A larger value may be justified only for those customers who execute Cloud Extractor and require higher OBIEE TEMP usage. The same error message observed during OTBI report execution is an indicator of too much data pushed to OBIEE, and of the need to review the report design and logic.
Limit the Number of Logical Columns and ORDER BY Attributes in Reports
If you design a report with multiple logical columns, make sure you review its generated logical SQL and carefully inspect its logical ORDER BY clause. Typically, the generated ORDER BY list includes all applicable columns, and with many columns included in the report, the list can grow very large; the example below produced 116 ORDER BY attributes.
As a result, the generated SQL's execution plan contains a very expensive SORT operation across multiple columns. If your report produces a very large row count, this final SORT operation adds unnecessary overhead. Carefully inspect the generated ORDER BY clause, and use the 'Advanced' tab to edit the LSQL, pruning the ORDER BY clause to the list of logical columns that address the report's functional requirements. Consolidate logical attributes from as few subject areas as possible to mitigate or eliminate the complexity from the use of cross SAs. DO NOT load your report with a large number of logical columns, as they may get converted into fairly complex physical expressions, causing poor report performance. Remove the logical columns excluded in 'Edit View', unless they are included in your report for a reason. The example below shows such a bad design, with both a large number of columns and a heavy ORDER BY clause:

SELECT 0 s_0,
"Environment Health and Safety - Incidents Real Time"."Incident Event Additional Details"."Incident Event Created By" s_1,
"Environment Health and Safety - Incidents Real Time"."Incident Event Additional Details"."Incident Event Creation Date" s_2,
"Environment Health and Safety - Incidents Real Time"."Incident Event Additional Details"."Incident Event Last Update Date" s_3,
"Environment Health and Safety - Incidents Real Time"."Incident Event Additional Details"."Incident Event Last Update Login" s_4,
... ...
SORTKEY("Environment Health and Safety - Incidents Real Time"."Injury or Illness"."Activity Description") s_114,
SORTKEY("Environment Health and Safety - Incidents Real Time"."Vehicle Incident"."Third Party Details") s_115
FROM "Environment Health and Safety - Incidents Real Time"
WHERE ((DESCRIPTOR_IDOF("Environment Health and Safety - Incidents Real Time"."Incident Event Details"."Incident Event Type") = '...')
AND (DESCRIPTOR_IDOF("Environment Health and Safety - Incidents Real Time"."Near Miss"."Type of Near Miss") = '...')
AND ("Incident Event Details"."Incident Event Completed Date" = date '2017-07-14'))
ORDER BY 1, 20 ASC NULLS LAST, 87 ASC NULLS LAST, 19 ASC NULLS LAST, 18 ASC NULLS LAST, 86 ASC NULLS LAST, 16 ASC NULLS LAST, 15 ASC NULLS LAST, 17 ASC NULLS LAST, 114 ASC NULLS LAST, 13 ASC NULLS LAST, 85 ASC NULLS LAST, 12 ASC NULLS LAST, 84 ASC NULLS LAST, 113 ASC NULLS LAST, 11 ASC NULLS LAST, 83 ASC NULLS LAST, 10 ASC NULLS LAST, 82 ASC NULLS LAST, 7 ASC NULLS LAST, 8 ASC NULLS LAST, 81 ASC NULLS LAST, 14 ASC NULLS LAST, 9 ASC NULLS LAST, 76 ASC NULLS LAST, 109 ASC NULLS LAST, 77 ASC NULLS LAST, 110 ASC NULLS LAST, 116 ASC NULLS LAST, 78 ASC NULLS LAST, 79 ASC NULLS LAST, 111 ASC NULLS LAST, 80 ASC NULLS LAST, 112 ASC NULLS LAST, 75 ASC NULLS LAST, 108 ASC NULLS LAST, 70 ASC NULLS LAST, 105 ASC NULLS LAST, 72 ASC NULLS LAST, 107 ASC NULLS LAST, 71 ASC NULLS LAST, 106 ASC NULLS LAST, 73 ASC NULLS LAST, 74 ASC NULLS LAST, 69 ASC NULLS LAST, 104 ASC NULLS LAST, 68 ASC NULLS LAST, 103 ASC NULLS LAST, 67 ASC NULLS LAST, 102 ASC NULLS LAST, 115 ASC NULLS LAST, 21 ASC NULLS LAST, 88 ASC NULLS LAST, 25 ASC NULLS LAST, 27 ASC NULLS LAST, 22 ASC NULLS LAST, 89 ASC NULLS LAST, 28 ASC NULLS LAST, 23 ASC NULLS LAST, 24 ASC NULLS LAST, 31 ASC NULLS LAST, 33 ASC NULLS LAST, 91 ASC NULLS LAST, 32 ASC NULLS LAST, 90 ASC NULLS LAST, 34 ASC NULLS LAST, 92 ASC NULLS LAST, 35 ASC NULLS LAST, 26 ASC NULLS LAST, 36 ASC NULLS LAST, 93 ASC NULLS LAST, 37 ASC NULLS LAST, 94 ASC NULLS LAST,
38 ASC NULLS LAST, 39 ASC NULLS LAST, 40 ASC NULLS LAST, 41 ASC NULLS LAST, 42 ASC NULLS LAST, 43 ASC NULLS LAST, 44 ASC NULLS LAST, 45 ASC NULLS LAST, 46 ASC NULLS LAST, 47 ASC NULLS LAST, 48 ASC NULLS LAST, 49 ASC NULLS LAST, 50 ASC NULLS LAST, 51 ASC NULLS LAST, 95 ASC NULLS LAST, 52 ASC NULLS LAST, 55 ASC NULLS LAST, 53 ASC NULLS LAST, 54 ASC NULLS LAST, 56 ASC NULLS LAST, 57 ASC NULLS LAST, 96 ASC NULLS LAST, 58 ASC NULLS LAST, 29 ASC NULLS LAST, 59 ASC NULLS LAST, 60 ASC NULLS LAST, 61 ASC NULLS LAST, 97 ASC NULLS LAST, 62 ASC NULLS LAST, 98 ASC NULLS LAST, 30 ASC NULLS LAST, 63 ASC NULLS LAST, 65 ASC NULLS LAST, 100 ASC NULLS LAST, 64 ASC NULLS LAST, 99 ASC NULLS LAST, 66 ASC NULLS LAST, 101 ASC NULLS LAST, 2 ASC NULLS LAST, 3 ASC NULLS LAST, 5 ASC NULLS LAST, 6 ASC NULLS LAST, 4 ASC NULLS LAST
FETCH FIRST 75001 ROWS ONLY

Even though the report included effective logical filters, it failed during physical SQL parsing, running into the 1 GB PGA heap size limit enforced in the database:

State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred.
[nQSError: 43113] Message returned from OBIS.
[nQSError: 43119] Query Failed:
[nQSError: 17001] Oracle Error code: 10260, message: ORA-10260: limit size (1048576) of the PGA heap set by event 10261 exceeded at OCI call OCIStmtExecute.
[nQSError: 17010] SQL statement preparation failed. (HY000)

Review LOBs Usage in OTBI Reports

Some OTBI Presentation Subject Areas have attributes mapped to physical database columns of the Large Object (LOB) data type, character (CLOB) or binary (BLOB). Review the following guidelines for CLOB usage in your reports:

FA does not initialize the OBIS_MAX_FIELD_SIZE variable, so by default OBIEE applies a 32,768-byte limit to the maximum size of any field, including CLOBs. If you attempt to fetch a field larger than 32 KB, it comes back blank in OTBI.
Reports with CLOB attributes in the SELECT clause may run up to 70% longer than comparable non-CLOB reports. Expect slower performance from CLOB reports that fetch larger row counts.
OBIEE does not support logical MINUS for reports with CLOB attributes. The use of CLOBs may cause failures with logical UNIONs, DISTINCT, GROUP BY and ORDER BY clauses.
Exclude CLOB attributes from filters and join predicates. When you design a report containing CLOBs, make sure you enforce effective filters, including the use of their default values.
Important! Do not use non-pushed logical functions with CLOBs, as such a combination could result in many more rows fetched to OBIEE for additional processing.
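As an illustration of the guidelines above, a minimal sketch of a CLOB-safe report follows. The subject area, attribute names and filter values here are hypothetical, not from a shipped OTBI model:

```sql
-- Hypothetical sketch: the CLOB attribute appears ONLY in the SELECT list,
-- while all filtering is done on indexed, non-CLOB columns.
SELECT
  "Some Subject Area"."Details"."Record Number"      s_1,
  "Some Subject Area"."Details"."Long Description"   s_2  -- CLOB: SELECT only
FROM "Some Subject Area"
WHERE ("Some Subject Area"."Details"."Creation Date" >= date '2021-01-01')
-- No filter, join predicate, DISTINCT, GROUP BY or ORDER BY on the CLOB column
FETCH FIRST 1000 ROWS ONLY
```

Keeping the CLOB column out of filters, joins and set operations avoids the ORA-00932 failures described in the next section, and the restrictive date filter keeps the fetched row count small.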
The CLOB misuse cases above would result in an Oracle Database error:

ORA-00932: inconsistent datatypes: expected - got CLOB

Reduce Multiple Logical Unions in OTBI Reports

OBIEE provides very flexible logic to combine results using union, intersection, and minus (or difference) operations. Users can produce sophisticated analyses using logical UNION or UNION ALL by pulling logical attributes from the same or multiple subject areas. However, such flexibility comes at a cost: multiple logical UNIONs further complicate the generated physical SQL and significantly increase the query parsing time and overall runtime. Follow the guidelines below for designing UNION based reports:

Limit the number of logical UNIONs to 5 or fewer.
If possible, combine logical UNIONs into a single logical SELECT for the same subject area.
Avoid joining many cross subject areas via logical UNIONs.
Apply restrictive filters inside each logical UNION.

You may try to use 'SET VARIABLE OBIS_DBFEATURES_IS_UNION_SUPPORTED=0;' in the report prefix to break an LSQL with logical UNION and UNION ALL into multiple PSQLs.

Important! Make sure you carefully benchmark the use of the prefix variable OBIS_DBFEATURES_IS_UNION_SUPPORTED=0 with your logical UNION reports before you implement it in your production environment.

Cross Subject Area Reports Considerations

Cross Subject Area (SA) reports include logical columns from more than one Subject Area, referenced in the SELECT, FROM or WHERE logical clauses:

Explicit Cross SA pattern: reports reference multiple SAs explicitly in the FROM clause.
Implicit Cross SA pattern: reports use a single SA in the FROM clause and reference additional SAs via fully qualified attributes in the SELECT or WHERE clauses.
The most common implicit Cross SA patterns are shown below:

-- Select attributes from two SAs
SELECT "Subject Area1"."attribute1" s_1,
       "Subject Area2"."attribute1" s_2, ...
FROM "Subject Area1"
WHERE ...;

-- Select attributes from one SA, but apply a filter from a second SA
SELECT "Subject Area1"."attribute1" s_1,
       "Subject Area1"."attribute1" s_2, ...
FROM "Subject Area1"
WHERE "Subject Area2"."attribute1" = ...;

Important! Refer to the Oracle White Paper "Guidelines for creating Cross Subject Area Transactional BI Reports in Fusion" (Doc ID 1567672.1) for detailed explanations and guidelines for using Cross SAs in your reports. This section covers additional recommendations for addressing performance related topics for using Cross SAs in OTBI reports.

1. Limit the use of Cross Subject Areas to as few as possible in your reports.
a. Every single SA adds complexity to the generated PSQL and results in significant overhead in both SQL structure and security predicates, and, as a result, very complex execution plans.
b. There have been a number of cases when users unintentionally introduced 5-10 subject areas in a single report. Avoid introducing more than three SAs in a single report, if you can.
c. Inspect the logical attributes used from multiple SAs and verify whether you can extract the same or functionally equivalent logical columns from the primary SA instead.
2. Cross SA reports form the largest category of queries failing with "ORA-10260: limit size (1048576) of the PGA heap". Such an error cannot be worked around; it requires reworking the logical SQL to address the report's functional requirements.

Cross Fact Reports Recommendations
Cross Fact, or Cross Star, reports reference more than one logical fact from a single subject area in OTBI reports. The two examples below show the difference between cross subject area and cross fact queries:

the first example is a case with implicit cross subject areas
the second example is an LSQL with cross facts, using two facts from the same subject area (Sales - CRM Pipeline).

TWO CROSS SAS REPORT (typically produces LEFT OUTER JOIN in sub-requests):

SELECT "Workforce Management - Worker Assignment Real Time"."Worker"."Assignment Number" s_1,
"Workforce Management - Worker Assignment Real Time"."Worker"."Employee First Name" s_2,
"Workforce Management - Employment Contract Real Time"."Employment Contract Details"."Type" s_3
FROM "Workforce Management - Worker Assignment Real Time"
WHERE (DESCRIPTOR_IDOF("Workforce Management - Worker Assignment Real Time"."Worker"."Assignment Status Type") IN ('ACTIVE', 'SUSPENDED'))
ORDER BY 1
FETCH FIRST 75001 ROWS ONLY

TWO CROSS FACTS REPORT (typically produces FULL OUTER JOIN in sub-requests):

SELECT "Sales - CRM Pipeline"."Employee"."Employee Name" s_1,
"Sales - CRM Pipeline"."Pipeline Detail Facts"."Closed Opportunity Line Revenue" s_2,
"Sales - CRM Pipeline"."Resource Quota Facts"."Resource Quota" s_3
FROM "Sales - CRM Pipeline"
WHERE ("Time"."Enterprise Quarter" = VALUEOF("CURRENT_ENTERPRISE_QUARTER"))
ORDER BY 1
FETCH FIRST 75001 ROWS ONLY

OTBI logical model design uses several categories of dimensions:

common OTBI dimensions, shared across multiple pillars such as Financials, HCM, etc.
common Pillar dimensions that apply to all subject areas within each pillar
common Fact dimensions that apply to two or more logical facts within each subject area
local Fact dimensions that are specific to a single logical fact in the subject area.

Review the recommendations below for designing and optimizing your cross fact OTBI reports:

Select at least one metric from each fact to ensure report consistency. You may select them and then exclude them in Edit View so they do not show in your report.
Select at least one common dimension shared by the participating facts; otherwise OBIEE would generate a cartesian product and create a wrong resultset.
Select the common fact dimension attributes that OBIEE will use to join the cross facts. If you do not need them in your report, you can select these attributes and then exclude them in Edit View.
Carefully design filters so they apply to all facts in cross fact reports. Remember that a filter on a local fact dimension applies to one logical fact only. With no additional filters on the other fact(s), the report would result in slower performance.
Consider using fewer logical facts in a single report. Adding more facts complicates the report design, generates more complex SQL with security predicates applying to ALL facts, and affects the report performance.
Check for any possibly redundant SYS_OP_MAP_NONNULL in generated PSQLs. Typically, cross fact queries produce a Full Outer Join (FOJ) between Fact VOs. Such a join may result in equi-joining NULLs, which produces SYS_OP_MAP_NONNULL in the PSQL. SYS_OP_MAP_NONNULL generation can be eliminated by unchecking the 'nullable' property in the RPD for a joining logical column, if it references a NOT NULL column in the database.
Cross fact reports with implicit Full Outer Joins may benefit from using 'SET VARIABLE OBIS_DBFEATURES_IS_PERF_PREFER_INTERNAL_STITCH_JOIN=1; SELECT ...', breaking a single PSQL into multiple database queries.
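The stitch-join prefix mentioned above is entered in the Advanced tab, ahead of the report's logical SQL. A minimal sketch follows, reusing the Sales - CRM Pipeline cross fact example from this section; treat it as an illustration to benchmark, not a guaranteed improvement:

```sql
-- Sketch: request the internal stitch join for a cross fact report, so OBIEE
-- issues separate database queries per fact instead of one FULL OUTER JOIN PSQL.
SET VARIABLE OBIS_DBFEATURES_IS_PERF_PREFER_INTERNAL_STITCH_JOIN=1;
SELECT
  "Sales - CRM Pipeline"."Employee"."Employee Name"                               s_1,
  "Sales - CRM Pipeline"."Pipeline Detail Facts"."Closed Opportunity Line Revenue" s_2,
  "Sales - CRM Pipeline"."Resource Quota Facts"."Resource Quota"                  s_3
FROM "Sales - CRM Pipeline"
WHERE ("Time"."Enterprise Quarter" = VALUEOF("CURRENT_ENTERPRISE_QUARTER"))
ORDER BY 1
FETCH FIRST 75001 ROWS ONLY
```

As with the UNION prefix variable, compare the runtimes with and without the prefix in a test environment before adopting it, since splitting one PSQL into several can help or hurt depending on the data volumes each fact returns.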
Use of OTBI Reports as Infolets in FA Embedded Pages

Reports designed in OTBI can be embedded as infolets into FA product pages. Infolets draw users' quick attention to a metric or attribute, which they can then analyze further using more detailed analytic reports. However, such infolets could become a source of performance overhead in your FA environment if designed inefficiently. Typically, OTBI infolets see much higher usage than OTBI reports and dashboards, which are used by a targeted audience. UI users may or may not be interested in the output of such OTBI reports, yet they trigger their execution every time they navigate through the corresponding UI pages. Review the following recommendations for using OTBI infolets in FA:

Review OTBI Usage Tracking reports for the scale and usage of infolets in your environment.
Make sure you design very compact reports that use effective date filters and perform optimal aggregations on a small resultset.
Mind the use of functional contents that add complex (or custom) data security predicates to OTBI infolets.
Avoid using Cross Subject Area reports, or any other complex design pattern offered by OTBI, for infolets. Stick to a single Subject Area only.
Avoid using filters based on other reports, via IN or SET operators.
Ensure restrictive and index-supported filters, to avoid aggregation of fact measures over a wide data set.
Avoid using a detailed report as a filter for an infolet to maintain functional consistency. Instead, create two separate reports with shared filters using 'saved filters'. Such an approach reduces the complexity of the infolet report while maintaining the desired consistency, and it reduces development overhead for any changes, as you modify a single filter in one place to update both reports.
Avoid consolidating all infolets on a single page. Instead, use multiple tabs and keep the non-critical infolets on the second page. This improves user experience and also ensures that all infolet queries are not fired in parallel, thereby reducing the concurrent load on the system.
Avoid disabling OBIPS cache via setting 'SET VARIABLE OBIS_REFRESH_CACHE=1;' in your infolets. The variable marks entries in the OBIPS cache as stale and reseeds them every time you navigate to the page with infolets. With the high usage frequency of OTBI infolet reports, every single page navigation would skip the OBIPS cache and cause more performance overhead in your environment. Refer below to the recommended variable usage.
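A compact infolet query following the recommendations above might look like the sketch below. The subject area, attributes and the repository variable are all hypothetical placeholders, not names from a shipped OTBI model:

```sql
-- Hypothetical compact infolet report: single subject area, one measure,
-- a restrictive period filter, and a tiny resultset.
SELECT
  "Some Subject Area"."Time"."Fiscal Period" s_1,
  "Some Subject Area"."Facts"."Amount"       s_2
FROM "Some Subject Area"
WHERE ("Some Subject Area"."Time"."Fiscal Period" = VALUEOF("CURRENT_FISCAL_PERIOD"))
FETCH FIRST 10 ROWS ONLY
```

The key properties are a single subject area, a filter that a supporting index can serve, and a row count small enough that the infolet renders quickly on every page navigation.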
Considerations for Using Custom Extensible Attributes and Flex Fields in Logical Filters

OTBI supports the use of custom flexible attributes in its logical model, extending the functionality and flexibility of BI transactional reporting. Review the following considerations when employing such custom attributes in your reports:

1. Extensible attributes are NOT indexed in the Fusion database. If you design a logical report and apply a filter using such an extensible attribute, the database optimizer picks a full table scan access path in the absence of any other filters or join predicates.
2. If you define extensible attributes inside logical View Objects, which get generated as correlated sub-queries in the PSQL, and choose to use them as logical filters, you may run into the database error "ORA-01652: unable to extend temp segment by 128 in tablespace FUSION_TEMP". When the optimizer generates an execution plan for a PSQL with such a correlated sub-query, it first joins all the required tables, retrieving all the values for the extensible attribute, and only then applies the filters to it. Even if you use an extensible attribute mapped to an indexed physical column, its index would be of very little use. To speed up reports with such a design, consider applying additional filters that help to reduce the volumes of the interim joins.

Considerations for Using Hierarchical Attributes in Logical Filters

Logical filters on hierarchical attributes can add significant complexity and performance overhead to your reports. You may work around hierarchical columns in the WHERE clause by picking other columns from the same logical object(s). For example, a report referencing the "Hierarchy.level1 Department" attribute in the WHERE clause may be rewritten to use "Department Name" without affecting the functional design. The Department hierarchy provides the structure of a department tree and is usually traversed from the top using the Tree name.
The use of "Department Name" eliminates the generated SQL complexity and significantly improves the report's performance.

Employ DESCRIPTOR_IDOF Function in Logical Filters

OBIEE's double-column functionality in the RPD allows you to associate two logical columns in the logical layer:

A descriptor or display column, which contains the actual display values. For example, "Customer Country" shows country names.
A descriptor ID or code column, which contains the code values uniquely identifying the display values, consistent across users and locales. For example, "Country Code" maps to the country IDs.

The double column feature allows you to:

Define language independent filters
Change display values without rewriting the actual analyses
Handle queries that use LOB data types
Utilize indexes on ID columns and get improved performance for your queries.

Carefully review the logical filters in your report's logical SQL, as well as the corresponding physical filters in its generated physical SQL, and verify whether you can replace a filter on a "description" column with a filter on its direct "code" / "ID" column. The "description" columns may not be indexed, while their corresponding ID columns typically have indexes defined. The switch from "description" to ID columns reduces the complexity of the physical SQL and utilizes the available indexes on CODE/ID columns.

Note, DESCRIPTOR_IDOF can be used with 'Equal' and 'IN' filters, but not with 'LIKE' comparisons.
Refer to the example below, showing Descriptor_IDOF in the RPD for "Sales - CRM Pipeline"."Customer"."Country Code", defined as the Descriptor ID Column for "Sales - CRM Pipeline"."Customer"."Customer Country":

The use of DESCRIPTOR_IDOF helps to address additional performance cases:

1. The DESCRIPTOR_IDOF function produces a physical SQL whose execution plan applies the filter value to the (most likely indexed) ID column of a dimension table early in the plan execution steps, compared to the less optimal plan with the filter on a lookup table. For example, the modified LSQL:

SELECT 0 s_0,
"Sales - CRM Pipeline"."Customer"."Account Active Flag" s_1,
"Sales - CRM Pipeline"."Customer"."Corporate Account Name" s_2,
"Sales - CRM Pipeline"."Customer"."Customer City" s_3,
"Sales - CRM Pipeline"."Customer"."Customer Country" s_4,
...
FROM "Sales - CRM Pipeline"
WHERE (("Customer"."Customer Name" LIKE 'A%') AND ((DESCRIPTOR_IDOF("Customer"."Customer Country") = 'CA') OR (DESCRIPTOR_IDOF("Customer"."Customer Country") = 'MX')))
ORDER BY 1, 6 ASC NULLS LAST
FETCH FIRST 75001 ROWS ONLY ;

Such logic replaces the "NVL(D2.c1 , D1.c14) IN ('Canada', 'Mexico')" predicate with "D1.c14 IN ('CA', 'MX')" and produces the physical SQL pattern below:
. . .
FROM SAWITH2 D1
LEFT OUTER JOIN OBICOMMON0 D2 ON D1.c14 = D2.c2
WHERE
--( NVL(D2.c1 , D1.c14) IN ('Canada', 'Mexico'))
D1.c14 IN ('CA', 'MX')

2. DESCRIPTOR_IDOF may affect the number of physical SQLs generated for a report's LSQL. If you design a report with a logical filter containing a lookup, it may cause OBIEE to generate more than one physical SQL and affect the report performance. If you need to apply such a filter on a lookup, use the Descriptor ID Column in the filter. For example, the logical query below has the following logical filters:

FROM "Workforce Goals - Goal Status Overview Real Time"
WHERE (("Business Unit"."Status" = 'Active')
AND (YEAR("Performance Goals"."Start Date") = 2014)
AND (DESCRIPTOR_IDOF("Workforce Goals - Goal Status Overview Real Time"."Worker"."Assignment Status") = 1)
AND ("Worker"."Person Type" = 'Employee'))

BI Server will generate two physical SQLs, triggered by ("Business Unit"."Status" = 'Active'), instead of one:

Rows 1, bytes 128 retrieved from database query id: <<9201501>>
Physical query response time 0.108 (seconds), id <<9201501>>
Rows 359525, bytes 2395073904 retrieved from database query id: <<9200121>>
Physical query response time 224.454 (seconds), id <<9200121>>
Physical Query Summary Stats: Number of physical queries 12, Cumulative time 237.783, DB-connect time 0.001 (seconds)
Rows returned to Client 359411
Logical Query Summary Stats: Elapsed time 279.306, Response time 260.537, Compilation time 14.639 (seconds)

Since OBIEE did not ship all the logic to the database, it had to wait for both PSQL results before merging them on its tier.
You can rewrite the logical query to use DESCRIPTOR_IDOF and end up with all the logic shipped to the database in a single PSQL:

FROM "Workforce Goals - Goal Status Overview Real Time"
WHERE ((DESCRIPTOR_IDOF("Business Unit"."Status") = 'ACTIVE')
AND (YEAR("Performance Goals"."Start Date") = 2014)
AND (DESCRIPTOR_IDOF("Workforce Goals - Goal Status Overview Real Time"."Worker"."Assignment Status") = 1)
AND ("Worker"."Person Type" = 'Employee'))

Generated PSQL(s) with a Large Number of In-List Values or BIND Variables Optimization

There are two common cases that result in generating PSQLs with a large number of in-list values or BIND variables, which may lead to slower performance or report failures:

1. OBIEE may generate two or more physical SQLs, with one of them listing internal BIND variables that it uses to pass the values from another PSQL. This may be the result of using lookups, where OBIEE employs BIND variables to pass values between the lookup(s) and the main generated query:

...
T2510675.C36361321 as c2
from (SELECT V483283710.MEANING AS C61309499, V483283710.LOOKUP_CODE AS C36361321, V483283710.LOOKUP_TYPE AS C319008066
FROM HCM_LOOKUPS V483283710
WHERE ( ( (V483283710.LOOKUP_TYPE = 'HRG_PERF_GOAL_CATEGORY' ) ) )) T2510675
where ( T2510675.C36361321 in (:PARAM1, :PARAM2, :PARAM3, :PARAM4, :PARAM5, :PARAM6, :PARAM7, :PARAM8, :PARAM9, :PARAM10, :PARAM11, :PARAM12, :PARAM13, :PARAM14, :PARAM15, :PARAM16, :PARAM17, :PARAM18, :PARAM19, :PARAM20) )
order by c2

Depending on the number of queried logical attributes in the LSQL SELECT, this list could be very long and cause performance overhead, or result in the error:

[nQSError: 46168] ...temporary file exceeds the limit (10240 MB) specified by the NQ_LIMIT_WRITE_FILESIZE environment variable.

Review your logical SQL for the presence of any non-pushed functions, such as LENGTH or RAND, or non-default OBIEE directives passed into the 'SET VARIABLE' clause, and, if found, consider removing them from the report's LSQL. Additionally, review the business logic to reduce the number of lookups used in the report.

2. OBIEE may generate several PSQLs, with one returning a very large number of actual values that get passed into another PSQL:

...AND ( NOT ( (V222235898.PARTY_NAME = 'Joe Doe' ) ) ) )) T2906423
where ( T2906423.C47764773 in ( 3000000001.0, 3000000002.0, 3000000003.0, 300000004.0, 300000004.0, 300000005.0, 300000006.0, 300000007.0, 300000008.0, 30000009.0, 300000010.0, 300000011.0, 300000012.0, 300000013.0, 300000014.0, 300000015.0, 300000016.0, 300000017.0, 300000018.0, 300000019.0, ...

Oracle has an internal limit on the number of in-list values. Besides, such large lists cause very poor SQL performance. It can cause the following error:

[nQSError: 42029] Subquery contains too many values for the IN predicate. (HY000)

Such behavior comes from the use of a lookup in the WHERE clause of the logical SQL. To work around it, you can either use DESCRIPTOR_IDOF (only for equi-joins) or include the lookup attribute in the LSQL SELECT clause. Alternatively, you may replace the lookup filter with any available ID or Code attributes from the queried facts or dimensions.
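The workarounds above can be sketched against a hypothetical lookup-based filter; the subject area, attribute names and lookup code here are illustrative only:

```sql
-- Problematic pattern: the lookup display value appears only in WHERE,
-- forcing OBIEE to resolve the lookup in a separate PSQL and pass the
-- resulting values as a long in-list:
--   WHERE ("Worker"."Person Type" = 'Employee')

-- Workaround 1: filter through the code column with DESCRIPTOR_IDOF
-- (equi-join filters only; 'EMP' is a hypothetical lookup code):
SELECT "Some Subject Area"."Worker"."Person Name" s_1
FROM "Some Subject Area"
WHERE (DESCRIPTOR_IDOF("Some Subject Area"."Worker"."Person Type") = 'EMP')

-- Workaround 2: also include the lookup attribute in the SELECT list,
-- so the lookup join is resolved inside a single generated PSQL:
SELECT "Some Subject Area"."Worker"."Person Name" s_1,
       "Some Subject Area"."Worker"."Person Type" s_2
FROM "Some Subject Area"
WHERE ("Some Subject Area"."Worker"."Person Type" = 'Employee')
```

Verify the generated PSQL count in the query log after either change; the goal is a single physical SQL with the filter applied to an indexed code column.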
CONSTANT_OPTIMIZATION_LEVEL and CONSTANT_CASE_OPTIMIZATION_LEVEL Request Variables

If your report's logical SQL contains computations using logical constants or logical CASE expressions, consider initializing the session variables CONSTANT_OPTIMIZATION_LEVEL=1 and/or CONSTANT_CASE_OPTIMIZATION_LEVEL=1 via the 'SET VARIABLE' prefix in the Advanced SQL tab to enable expression optimization. It may improve the performance of logical reports with CASE WHEN filters. The variable values are:

0 - No constant expression optimization. This is the default value.
1 - BI Server performs internal constant expression evaluation early during query processing, and the constants are all converted into actual literal values. Any error during constant evaluation results in no constant being rewritten.
2 - This value makes the constant optimization aggressive. When faced with an error during a constant evaluation, BI Server skips the failing constant expression and rewrites the others.

CONSTANT_OPTIMIZATION_LEVEL helps in such cases as the examples below:

1+2*(5 + valueof(NQ_SESSION.int_var))
upper('abc' || valueof(NQ_SESSION.varchar_var)) || left('def', 2)
'ABC'||' '||'DEF'

Use REPORT_SUM/REPORT_COUNT Instead of REPORT_AGGREGATE Logical Functions

If REPORT_AGGREGATE performs such a calculation as SUM, COUNT, etc., you can safely replace it with REPORT_SUM or REPORT_COUNT (or the other corresponding function) to ensure better performance of your reports. REPORT_AGGREGATE may force BI Server to generate multiple logical sub-requests and create more than one PSQL, with all the complexity of fetching large volumes of data and performing logical joins on the OBIEE tier.
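The replacement is a one-for-one edit of the logical expression. A minimal sketch with illustrative attribute names:

```sql
-- Before: REPORT_AGGREGATE may split the request into multiple logical
-- sub-requests and more than one PSQL
REPORT_AGGREGATE("SA"."Facts"."Amount" BY "SA"."Dim"."Name")

-- After: REPORT_SUM performs the same summation and lets BI Server
-- push the aggregation into a single database query
REPORT_SUM("SA"."Facts"."Amount" BY "SA"."Dim"."Name")
```

After the change, confirm in the query log (loglevel=7) that the execution plan no longer contains SUM_SQL99 nodes for database 0:0,0, i.e. the aggregation is shipped to the database rather than performed on the OBIEE tier.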
Note, when you build PIVOT reports with the use of aggregate functions, BI Server will not push them into the database, but performs them on its tier in memory. With more columns and rows in your PIVOT reports, such aggregations could take a longer time. Review and, if possible, reduce the 'BY' clause in your AGGREGATE functions to simplify the aggregation.

Important! DO NOT use an empty 'BY' clause, which may get generated in your report, since OBIEE would include all queried attributes in the aggregate clause in the physical SQL. The use of an empty 'BY' clause may also result in generating more than one physical SQL.

In the example below, the original LSQL contains a mix of REPORT_SUM and REPORT_AGGREGATE:

SELECT ...
REPORT_AGGREGATE(IFNULL("General Ledger - Journals Real Time"."-Lines"."Journal Line Accounted Amount Debit",0)-IFNULL("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted Amount Credit",0) BY SUBSTRING("General Ledger - Journals Real Time"."-Account"."Concatenated Segments" FROM 16 FOR 4),SUBSTRING("General Ledger - Journals Real Time"."-Account"."Concatenated Segments" FROM 21 FOR 6),SUBSTRING("General Ledger - Journals Real Time"."-Account"."Concatenated Segments" FROM 28 FOR 5),SUBSTRING("General Ledger - Journals Real Time"."-Account"."Concatenated Segments" FROM 34 FOR 5),SUBSTRING("General Ledger - Journals Real Time"."-Account"."Concatenated Segments" FROM 40 FOR 5),SUBSTRING("General Ledger - Journals Real Time"."- Account"."Concatenated Segments" FROM 46 FOR 2),SUBSTRING("General Ledger - Journals Real Time"."-Account"."Concatenated Segments" FROM 49 FOR 5), SUBSTRING("General Ledger - Journals Real Time"."-Account"."Concatenated Segments" FROM 4 FOR 5),SUBSTRING("General Ledger - Journals Real Time"."-Account"."Concatenated Segments" FROM 10
FOR 5),SUBSTRING("General Ledger - Journals Real Time"."-Account"."Concatenated Segments" FROM 1 FOR 2)... s44, REPORT_SUM("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted Amount Credit" BY SUBSTRING("General Ledger - Journals Real Time"."- Account"."Concatenated Segments" FROM 1 FOR 52),"General Ledger - Journals Real Time"."Approval Status"."Approval Status Description","General Ledger - Journals Real Time"."- Line Details"."Line Effective Date","General Ledger - Journals Real Time"."- Line Details"."Line Period Name","General Ledger - Journals Real Time"."- Line Details"."Line Number","General Ledger - Journals Real Time"."- Line Details"."Line Description","General Ledger - Journals Real Time"."- Line Details"."Line Type","General Ledger - Journals Real Time"."- Line Details"."Line Currency Code","General Ledger - Journals Real Time"."Time"."Fiscal Year Number","General Ledger - Journals Real Time"."- Journal Category"."User Journal Category Name","General Ledger - Journals Real Time"."- Journal Category"."Description","General Ledger - Journals Real Time"."- Journal Source"."User Journal Source Name","General Ledger - Journals Real Time"."- Journal Source"."Source Journal Source Description","General Ledger - Journals Real Time"."- Ledger Set". 
"Ledger Set Name","General Ledger - Journals Real Time"."- Ledger"."Ledger Name","General Ledger - Journals Real Time"."- Ledger"."Chart Of Account","General Ledger - Journals Real Time"."- Header Details"."Header Balance Type Flag","General Ledger - Journals Real Time"."- Header Details"."Encumbrance Type","General Ledger - Journals Real Time"."- Header Details"."Header Close Acct Seq Value","General Ledger - Journals Real Time"."- Header Details"."Header Description","General Ledger - Journals Real Time"."- Header Details"."Journal Legal Entity","General Ledger - Journals Real Time"."- Header Details"."Header Doc Sequence Name","General Ledger - Journals Real Time"."- Header Details"."Header Local Doc Sequence Value","General Ledger - Journals Real Time"."- Header Details"."Header Status","General Ledger - Journals Real Time"."- Header Details"."Header Doc Sequence Value","General Ledger - Journals Real Time"."- Header Details"."Header posting Account Seq Value", 0, "General Ledger - Journals Real Time"."Posting Status"."Posting Status Description") s_45, REPORT_SUM("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted Amount Debit" BY SUBSTRING("General Ledger - Journals Real Time"."- Account"."Concatenated Segments" FROM 1 FOR 52), "General Ledger - Journals Real Time"."Approval Status"."Approval Status Description",... s_46, REPORT_AGGREGATE(IFNULL("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted Amount Debit",0) - IFNULL("General Ledger - Journals Real Time"."- Lines"."Journal Line Accounted Amount Credit",0) BY SUBSTRING("General Ledger - Journals Real Time"."- Account"."Concatenated Segments" FROM 1 FOR 52),"General Ledger - Journals Real Time"."Approval Status"."Approval Status Description",... s_44
WHITE PAPER | Oracle Fusion Transactional Business Intelligence and BI Cloud Connector Performance Recommendations | Version 3.0 | Copyright © 2021, Oracle and/or its affiliates | Confidential – Public

The generated report was failing with an OBIEE TEMP space error, as it resulted in two logical sub-requests, producing two physical SQLs, fetching all the rows from each PSQL to the OBIEE tier and consuming all of its TEMP space: Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 46168] RawFile::checkSoftLimit failed because a temporary file exceeds the limit (10240 MB) specified by the NQ_LIMIT_WRITE_FILESIZE environment variable. (HY000)

The LSQL portion above has two overly complex logical blocks:

1. A heavy REPORT_SUM with a large number of 'BY' logical attributes. Avoid building such complicated computations and use a reasonable number of 'BY' logical columns. The other aggregations in the example above also contained similarly lengthy 'BY' clauses and were suppressed with '…'.

2. A number of SUBSTRING functions defined as s_44, which can easily be collapsed into the single SUBSTRING below: SUBSTRING("General Ledger - Journals Real Time"."- Account"."Concatenated Segments" FROM 1 FOR 52)

Remember that every small optimization to your logical SQL can be a big saver when generating physical SQL(s), so eliminate as many redundant and bulky logical constructs as you can to boost your reports' performance.
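The SUBSTRING consolidation described above can be sketched as follows. This is a hedged illustration only: the prefix lengths and the aliases are examples, not taken from the actual report.

```sql
-- BEFORE (sub-optimal): overlapping SUBSTRINGs of the same logical column,
-- repeated again inside every REPORT_SUM ... BY clause (aliases illustrative)
SELECT
  SUBSTRING("General Ledger - Journals Real Time"."- Account"."Concatenated Segments" FROM 1 FOR 2)  s_43,
  SUBSTRING("General Ledger - Journals Real Time"."- Account"."Concatenated Segments" FROM 1 FOR 5)  s_44,
  SUBSTRING("General Ledger - Journals Real Time"."- Account"."Concatenated Segments" FROM 1 FOR 52) s_45
FROM "General Ledger - Journals Real Time"

-- AFTER (optimized): a single SUBSTRING over the longest required prefix,
-- referenced consistently in the select list and the BY clauses
SELECT
  SUBSTRING("General Ledger - Journals Real Time"."- Account"."Concatenated Segments" FROM 1 FOR 52) s_44
FROM "General Ledger - Journals Real Time"
```

Shorter prefixes, when genuinely needed, can be derived from the single longest SUBSTRING rather than recomputed per column.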
The logical report contains REPORT_SUMs and one REPORT_AGGREGATE, which triggered generating two physical SQLs, shipping the join logic to OBIEE (database 0:0,0): ...sum_SQL99(D1.c43 by [ D1.c17, D1.c16, D1.c19, D1.c18, D1.c21, D1.c26, D1.c23, D1.c7, D1.c13, D1.c12, D1.c27, D1.c28, D1.c25, D1.c11, D1.c10, D1.c6, D1.c14, D1.c29, D1.c30, D1.c20, D1.c22, D1.c24, D1.c9, D1.c31, D1.c4, D1.c15, D1.c32, D1.c38, D1.c33, D1.c34, D1.c35, D1.c36, D1.c37, D1.c39, D1.c40, D1.c41] at_distinct [ D1.c17, D1.c16, D1.c19, D1.c18, D1.c21, D1.c26, D1.c23, D1.c7, D1.c13, D1.c12, D1.c27, D1.c28, D1.c25, D1.c11, D1.c10, D1.c6, D1.c14, D1.c29, D1.c30, D1.c20, D1.c22, D1.c24, D1.c9, D1.c31, D1.c4, D1.c15, D1.c32, D1.c38, D1.c33, D1.c34, D1.c35, D1.c36, D1.c37, D1.c39, D1.c40, D1.c41, D1.c53, D1.c54, D1.c2, D1.c3, D1.c55, D1.c56, D1.c57] ) as c47 [for database 0:0,0] Child Nodes (RqJoinSpec): <<309990039>> [for database 0:0,0] RqList <<278868707>> [for database 3023:1418642:FSCM_OLTP,57] sum(D1.c32 by [ D1.c14, D1.c13, D1.c16, D1.c15, cast(D1.c33 as CHARACTER ( 30 )) || cast(D1.c23 as CHARACTER ( 30 )) , D1.c29, D1.c26, D1.c27, D1.c30, D1.c31, D1.c28] ) as c1 [for database 3023:1418642,57],

Note that OBIEE rewrote REPORT_AGGREGATE into the SUM_SQL99 aggregation. So, it can safely be replaced with REPORT_SUM, and the updated LSQL will produce a single PSQL.

Use COUNT Instead of COUNT(DISTINCT)

Consider switching from COUNT(DISTINCT) to COUNT(), if the latter satisfies your functional requirements, as COUNT(DISTINCT) is a more expensive database operation, especially on a very large result set. Internal benchmarks comparing COUNT vs. COUNT(DISTINCT) on a 420M-row sample table showed 3-4x better performance using COUNT. Note that if you have multiple Logical Table Sources (LTS) defined for the same logical column using the common COUNT(DISTINCT) rule, then you have to update the aggregation rule for each LTS.
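For illustration, the switch amounts to changing the aggregation in the logical SQL, as in the hedged sketch below. The subject area and column names are examples only, not a prescribed report.

```sql
-- More expensive: COUNT(DISTINCT ...) forces a distinct aggregation
-- over a potentially very large result set
SELECT "Department"."Department Name", COUNT(DISTINCT "Worker"."Person Number")
FROM "Workforce Management - Worker Assignment Real Time"

-- Cheaper alternative, if plain row counting satisfies the functional requirement
SELECT "Department"."Department Name", COUNT("Worker"."Person Number")
FROM "Workforce Management - Worker Assignment Real Time"
```

The two queries return the same counts only when the counted column has no duplicates in the result set, so verify the functional equivalence before switching.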
EVALUATE vs. EVALUATE_ANALYTIC Considerations

The EVALUATE and EVALUATE_ANALYTIC functions provide similar functionality, but they have to be applied in different functional contexts. EVALUATE is used for scalar functions that take input values and return an output value for a single row, while EVALUATE_ANALYTIC takes a row set, i.e. one or more rows, and returns a result for each row in the set. EVALUATE_ANALYTIC results in generating SQL analytic functions (also known as window functions), so it may produce more sophisticated physical SQLs. Note: when you choose the EVALUATE function, use AGGREGATE instead of REPORT_AGGREGATE for the aggregation rule.

Use of Logical Joins in OTBI Reports

OBIEE supports different types of logical table joins: INNER JOIN, FULL OUTER JOIN, LEFT OUTER JOIN, RIGHT OUTER JOIN, and CROSS/CARTESIAN JOIN. It may push such joins to the database or generate multiple PSQLs for its logical sub-requests and then perform a STITCH JOIN of the fetched data sets on the OBIEE tier. This document discussed the use cases and examples with OBIS_DBFEATURES_IS_PERF_PREFER_INTERNAL_STITCH_JOIN, which enforces STITCH JOIN for cross-fact reports. When you rewrite the report logic and apply logical joins, use the more effective INNER JOIN, LEFT OUTER JOIN or RIGHT OUTER JOIN instead of FULL OUTER JOIN where you can.

Use of OBIEE CACHE Variables for Diagnostics in OTBI Reports

OBIEE effectively uses several caching options to speed up processing of similar or identical reports, such as its own execution plan cache (to avoid reparsing), sub-request cache (to reduce or eliminate reprocessing of identical sub-requests), etc.
When developing reports and measuring their performance, you may consider disabling the following CACHE variables and turning on LOGLEVEL=7 to generate more detailed tracing by defining them in the prefix field: SET VARIABLE DISABLE_CACHE_HIT=1, DISABLE_CACHE_SEED=1, DISABLE_SUBREQUEST_CACHE=1, DISABLE_PLAN_CACHE_HIT=1, DISABLE_PLAN_CACHE_SEED=1, LOGLEVEL=7; SELECT … Important! Make sure you remove the variables from the report's prefix before you deploy your reports in a production environment.

Use of OBIS_REFRESH_CACHE in OTBI Reports

OBIEE in FA environments relies on the BI Presentation Server (OBIPS) cache for caching the results of every single execution and keeps using the cached results while the OBIEE cursors stay open. When users navigate back and forth to the same report, OBIEE retrieves the data from the seeded cache instead of executing the same query(ies) again and again. There may be a few isolated cases, however, when you may want to turn off the OBIPS cache, either in selected reports or for debugging purposes. You can do that by using 'SET VARIABLE OBIS_REFRESH_CACHE=1;' as a prefix in your logical SQL report. Important! Avoid using this variable and disabling the OBIPS cache for heavy and frequently used reports. Such actions would result in an overall performance impact in your environment.

Use of Database Hints to Optimize Generated Query Plans

While OTBI PSQLs cannot be manually tweaked, you have an option to pass database hints into your report to be included in the generated PSQLs. OBIEE introduced support for Oracle database hints at the generated PSQL level, using the request variable OBIS_ORACLEDB_HINTS_FOR_TOP_SELECT.
Important! The database hint options should be considered only after performing a comprehensive analysis of the generated SQLHC, the query statistics and the execution plan. Perform careful and extensive benchmarking for various filters and security predicates (different users with different access lists) before implementing the hints in production reports. The hints, defined via the request variable, will get passed into the TOP SELECT of the generated physical SQL. Such an option can be used to rectify an execution plan, if the database optimizer produces a sub-optimal query plan for the generated SQL query. If a logical SQL produces more than one physical SQL, then the defined hint will get stamped into EVERY SINGLE generated physical SQL for that report. The variable can be used for passing both optimizer-based hints using OPT_PARAM as well as regular hints.
Some examples are shown below:

ORACLE DATABASE HINT: /*+ NO_INDEX(D1.T231459.V53960257.Revenue MOO_REVN_N8) */
OBIS_ORACLEDB_HINTS_FOR_TOP_SELECT: 'NO_INDEX(D1.T231459.V53960257.Revenue MOO_REVN_N8)'
EXPLANATION: Applies the NO_INDEX hint for MOO_REVN_N8 inside the query block 'Revenue'

ORACLE DATABASE HINT: /*+ opt_param('_complex_view_merging','false') */
OBIS_ORACLEDB_HINTS_FOR_TOP_SELECT: 'OPT_PARAM(''_complex_view_merging'',''false'')'
EXPLANATION: The session setting 'alter session set "_complex_view_merging"=false;' can be enforced via an OPT_PARAM SQL hint

Refer to the example below showing how the hint appears in the logical and physical SQLs:

-------------------- SQL Request, logical request hash: 4dce486e
set variable LOGLEVEL=4, DISABLE_CACHE_HIT=1, DISABLE_PLAN_CACHE_HIT=1, OBIS_ORACLEDB_HINTS_FOR_TOP_SELECT= 'OPT_PARAM(''_complex_view_merging'',''false'')' : select employee.employeeid from snowflakesales; /* QUERY_SRC_CD='rawSQL' */
[2016-05-12T22:16:24.722+00:00] [OracleBIServerComponent] [TRACE:4] [] [] [ecid: 8f6e386743211276:-2e70ffee:1547d0cc722:-8000-0000000000124131,0:1:16:5] [SI-Name: ] [IDD-GUID: ] [IDD-Name: ] [tid: bc40700] [messageid: USER-18] [requestid: 96420003] [sessionid: 96420000] [username: Administrator]
-------------------- Sending query to database named SQLDB (id: <<2279739>>), connection pool named SQLDB Connections, logical request hash 4dce486e, physical request hash b67e28b7: [[
select /*+ opt_param('_complex_view_merging','false') */ T91132.EmployeeID as c1 from Employees T91132

The query hints can be applied to the affected reports on the Advanced tab in the Prefix field via SET VARIABLE OBIS_ORACLEDB_HINTS_FOR_TOP_SELECT=…. ;. Note: make sure you put a semicolon at the end of the SET VARIABLE clause, right before the SELECT clause.

Use of MATERIALIZE Hint in Cross Subject Area and Cross Reports

OTBI physical SQLs always run with the "_with_subquery"='INLINE' context in the database.
Such context helps to improve the performance of the generated WITH queries for the overwhelming majority of OTBI reports and minimizes TEMP tablespace usage in the database. There are, however, several categories of reports that can benefit from materializing targeted WITH sub-queries. OBIEE introduced a separate variable, AddMaterializeHintToSubquery=1, that you can use to force Oracle to override 'INLINE' and materialize WITH query sub-blocks in OTBI reports. To initialize the query, use "SET VARIABLE AddMaterializeHintToSubquery=1;" as a prefix in your report's logical SQL. The variable will insert the MATERIALIZE hint into the generated factored WITH subquery blocks in the PSQLs: WITH OBISUBWITH0 AS (select /*+ MATERIALIZE */ D1.c2 as c1 from (select count(distinct D1.c3) as c1, Some types of OTBI reports that may benefit from the forced MATERIALIZE hint are: Cross-SA reports of type: "SELECT … FROM SA1 WHERE ATTRIBUTE1 IN (SELECT … FROM SA2 …)
More complex reports with nested IN subqueries, such as "SELECT … FROM SA1 WHERE ATTRIBUTE1 IN (SELECT … FROM SA2 WHERE ATTRIBUTE2 IN (SELECT … FROM SA3 …)) Complex reports with multiple IN subqueries, such as "SELECT … FROM SA1 WHERE ATTRIBUTE1 IN (SELECT … FROM SA2) OR ATTRIBUTE2 IN (SELECT … FROM SA3 …) Cross reports, i.e. report A using a filter based on the result from report B.

Functional Design Patterns Affecting OTBI Reports Performance

The previous chapters have provided detailed recommendations for designing performant OTBI reports. Many of them point to the importance of proper functional report design, joining attributes and picking the most effective functional filters. This chapter covers a few such examples with functional recommendations. You are advised to periodically review Oracle publications covering functional topics specific to OTBI subject areas.

"Time"."Fiscal Period" Performance Considerations in Financial Reports

When you create reports that include the "Time"."Fiscal Period" column, OBIEE appends a hidden SORTKEY("Time"."Fiscal Period") into the LSQL. The SORTKEY function is required for correct sorting of the Fiscal Period data in chronological order. This logic has been put in place for such cases as prompts, where "Time"."Fiscal Period" ordering is a critical functional requirement. It is justified there, since prompts typically select from logical dimensions or lookups. When you create a report that queries a logical fact and expose the "Time"."Fiscal Period" column in either the SELECT or WHERE clause, OBIEE will include the Time dimension, adding an extra join to GL_CALENDARS and the calculated formula from the SORTKEY() function into its PSQL.
Instead, you can explore attributes from your logical fact that may carry the equivalent functional data and eliminate the unnecessary complexity from the report and the generated PSQLs. Refer to the example below, replacing "Time"."Fiscal Period" with "- Header Details"."Period Name", and the generated LSQLs and PSQLs for both variants.

"TIME"."FISCAL PERIOD" LSQL: SET VARIABLE PREFERRED_CURRENCY='User Preferred Currency 1';SELECT 0 s_0, "General Ledger - Journals Real Time"."Time"."Fiscal Period" s_1, SORTKEY("General Ledger - Journals Real Time"."Time"."Fiscal Period") s_2 FROM "General Ledger - Journals Real Time" ORDER BY 1, 3 ASC NULLS LAST, 2 ASC NULLS LAST FETCH FIRST 75001 ROWS ONLY

"TIME"."FISCAL PERIOD" PSQL: WITH SAWITH0 AS (SELECT T288012.C251991545 AS c1, T288012.C510997710 AS c3, T288012.C422791590 AS c4, T288012.C355931595 AS c5, T288012.C42913078 AS c6 FROM (SELECT V212661565.FISCAL_PERIOD_NAME AS C251991545, V212661565.FISCAL_YEAR_NUMBER AS C510997710, GlCalendars.CALENDAR_ID AS C422791590, V212661565.FISCAL_QUARTER_NUMBER AS C355931595, V212661565.FISCAL_PERIOD_NUMBER AS C42913078, V212661565.FISCAL_PERIOD_SET_ID AS PKA_FiscalPeriodSetId0, V212661565.FISCAL_PERIOD_SET_NAME AS PKA_FiscalPeriodSetName0, V212661565.FISCAL_PERIOD_TYPE AS PKA_FiscalPeriodType0, GlCalendars.PERIOD_SET_ID AS PKA_GlCalendarsPeriodSetId0, GlCalendars.PERIOD_TYPE_ID AS PKA_GlCalendarsPeriodTypeId0 FROM GL_FISCAL_PERIOD_V V212661565, GL_CALENDARS GlCalendars WHERE V212661565.FISCAL_PERIOD_SET_NAME = GlCalendars.PERIOD_SET_NAME AND V212661565.FISCAL_PERIOD_TYPE = GlCalendars.PERIOD_TYPE ) T288012 ), SAWITH1 AS (SELECT D1.c1 AS c1, D1.c3 * 10000 + D1.c4 * 100000000 + D1.c5 * 100 + D1.c6 AS c2 FROM SAWITH0 D1 ) SELECT D1.c1 AS c1, D1.c2 AS c2, D1.c3 AS c3 FROM ( SELECT DISTINCT 0 AS c1, D1.c1 AS c2, D1.c2 AS c3 FROM SAWITH1 D1 ORDER BY c3 ) D1 WHERE rownum <= 75001

"- HEADER DETAILS"."PERIOD NAME" LSQL: SET VARIABLE PREFERRED_CURRENCY='User Preferred Currency 1';SELECT 0 s_0, "General Ledger - Journals Real Time"."- Header Details"."Period Name" s_1 FROM "General Ledger - Journals Real Time" ORDER BY 1, 2 ASC NULLS LAST FETCH FIRST 75001 ROWS ONLY

"- HEADER DETAILS"."PERIOD NAME" PSQL: WITH SAWITH0 AS (SELECT T3339838.C150150587 AS c1 FROM (SELECT V169022212.PERIOD_NAME1 AS C150150587, V169022212.JE_HEADER_ID1 AS PKA_JrnlHdrJeHeaderId0 FROM (SELECT JrnlLine.JE_HEADER_ID, JrnlLine.JE_LINE_NUM, JrnlHdr.JE_HEADER_ID AS JE_HEADER_ID1, JrnlHdr.PERIOD_NAME AS PERIOD_NAME1 FROM GL_JE_LINES JrnlLine, GL_JE_HEADERS JrnlHdr WHERE (JrnlLine.JE_HEADER_ID = JrnlHdr.JE_HEADER_ID) AND ((1 =1)) ) V169022212 ) T3339838 ) SELECT D1.c1 AS c1, D1.c2 AS c2 FROM ( SELECT DISTINCT 0 AS c1, D1.c1 AS c2 FROM SAWITH0 D1 ORDER BY c2 ) D1 WHERE rownum <= 75001

Required Use of the Logical UPPER() Function in Logical Filters

End users who create OTBI reports with the common "Worker" dimension, from subject areas such as "Workforce Management - Worker Assignment Real Time", are advised to include a filter on the "Worker"."Person Number" attribute. However, the attribute is mapped to the PERSON_NUMBER column, indexed via a function-based index (FBI) using UPPER(PERSON_NUMBER).
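For context, a function-based index only matches predicates written with the same expression as the index. The sketch below is illustrative only: the index name and exact definition are assumptions, not the actual Fusion Applications index.

```sql
-- Hypothetical FBI of the same shape as the one described above.
-- A predicate WHERE person_number = '1111111' cannot use this index;
-- WHERE UPPER(person_number) = '1111111' can.
CREATE INDEX per_people_f_upper_n1
  ON per_all_people_f (UPPER(person_number));
```

This is why the logical filter must wrap the column in UPPER(): OBIEE translates the logical function into the matching physical expression, allowing the optimizer to pick the FBI.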
To ensure the filter picks up the FBI, include the logical UPPER function in the logical query to improve your report performance, for example: AND (UPPER("Worker"."Person Number") = '1111111') More cases where the required use of UPPER() in reports takes advantage of existing indexes in the database are: UPPER("Assigned To Person Details"."Line Manager Number") = 'A123' UPPER("Worker"."Assignment Number") = 'A123' UPPER("Sales - CRM Opportunities and Products Real Time"."Product"."Product Name") = 'A123'

"Payroll - Payroll Run Results Real Time" Subject Area Recommendations

"Payroll - Payroll Run Results Real Time" logical facts are mapped to very large volume Payroll tables such as PAY_RUN_RESULT_VALUES, PAY_RUN_RESULTS and PAY_PAYROLL_REL_ACTIONS. These transactional tables have very few indexes to support transactional flows. When you implement reports querying the logical Payroll facts, make sure you employ filters that utilize these indexes and avoid very expensive full table scans or row explosions in the execution plans. Explore the following filters in your Payroll Run Result reports: Use the combination of "Element"."Element Name" and "Payroll Run Result Details"."Date Earned". Use the combination of "Location"."Set Name", "Payroll Period"."End Date" and "Location"."Worker Location Name". Add any of the date filters: "Payroll Period"."End Date", "Payroll Run Result Details"."Effective Date", "Payroll Period"."Default Pay Date". Use "Business Unit"."Business Unit Name" with "Payroll Period"."Period Number". Check if you can replace "Payroll Period"."Period Number" with "Payroll Period"."Period Name".
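Applying the filter guidance above, a Payroll Run Results report filter could be sketched as below. The measure column name and the filter values are illustrative assumptions, not taken from the subject area documentation.

```sql
SELECT
  "Element"."Element Name",
  "Payroll Run Result Details"."Result Value"        -- illustrative measure name
FROM "Payroll - Payroll Run Results Real Time"
WHERE "Element"."Element Name" = 'Regular Salary'    -- illustrative value
  AND "Payroll Run Result Details"."Date Earned"
      BETWEEN date '2020-01-01' AND date '2020-01-31'
```

The Element Name filter constrains the fact to a small subset before the date range is applied, which is what allows the optimizer to use the few available indexes instead of scanning the full tables.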
"Payroll - Payroll Balances Real Time" Subject Area Recommendations

The "Payroll - Payroll Balances Real Time" subject area in Oracle Fusion Transactional BI (OTBI) introduced the logical model for creating OBIEE end-user reports against the Fusion HCM Payroll Balances tables. Payroll Balances tables store very large data volumes, so it is important to know how to design performant OTBI ad-hoc queries and stored reports and avoid unnecessary overhead on the transactional objects in Fusion Apps. For performance reasons, the large-volume Payroll Balances tables have only a few indexes, required for payroll processing. So, it is important to design OTBI reports that utilize the available indexes and avoid very expensive full table scans on the main table PAY_RUN_BALANCES. The logical model enforces the join logic for PAY_PAYROLL_ACTIONS and PAY_RUN_BALANCES using the PAY_PAYROLL_REL_ACTIONS table. End users are advised not to use the advanced Logical SQL option, as it could produce sub-optimal execution plans with an ineffective access path for the PAY_RUN_BALANCES table. OTBI queries have an enforced limit of fetching (and exporting) at most 75,000 rows per query (the value varies by environment shape). So, it is important to know which filters to apply to obtain a complete, untrimmed resultset and avoid fetching a much larger resultset from the database. The next section covers the recommendations for applying logical filters to the OTBI Payroll Balances subject area.

OTBI Payroll Balances Filter Recommendations

There are several categories of reports, grouped by the applied filters and their combinations. "Payroll Actions"."Effective Date" = date '<YYYY-MM-DD>' The application of the "Payroll Actions"."Effective Date" filter alone is not recommended.
If applied alone, using 'equal to', 'less than' or 'greater than' a specific date, it can result in a very large resultset, exceeding the 75K limit, especially for very large payroll customers. Besides, the single Effective Date filter would result in the optimizer picking an inefficient plan with a PAY_RUN_BALANCES full table scan. To avoid the expensive full table scan, use the second filter "Balance Value Details"."Effective Date" mirroring the date value, i.e. use: "Balance Value Details"."Effective Date" = date '<YYYY-MM-DD>' and "Payroll Actions"."Effective Date" = date '<YYYY-MM-DD>' Such a filter application constrains the volumes in the large PAY_RUN_BALANCES table, resulting in the use of its index on EFFECTIVE_DATE instead of a heavy full table scan.

Plan example with a PAY_RUN_BALANCES full table scan fetching 106M rows without the filter "Balance Value Details"."Effective Date":

| Id | Operation | Name | Starts | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers | Reads | Writes | OMem | 1Mem | Used-Mem | Used-Tmp|
...
| 41 | NESTED LOOPS | | 1 | 26250 | 844K| | 524 (0)| 00:00:07 | 18309 |00:00:00.03 | 250 | 0 | 0 | | | | |
|* 42 | TABLE ACCESS BY INDEX ROWID | PAY_PAYROLL_ACTIONS | 1 | 70 | 1330 | | 37 (0)| 00:00:01 | 64 |00:00:00.01 | 51 | 0 | 0 | | | | |
|* 43 | INDEX SKIP SCAN | PAY_PAYROLL_ACTIONS_N2 | 1 | 70 | | | 11 (0)| 00:00:01 | 64 |00:00:00.01 | 8 | 0 | 0 | 1025K| 1025K| | |
|* 44 | INDEX RANGE SCAN | PAY_PAYROLL_REL_ACTIONS_N50 | 64 | 375 | | | 3 (0)| 00:00:01 | 18309 |00:00:00.03 | 199 | 0 | 0 | 1025K| 1025K| | |
|* 45 | TABLE ACCESS BY INDEX ROWID | PAY_PAYROLL_REL_ACTIONS | 18309 | 375 | 5250 | | 21 (0)| 00:00:01 | 18309 |00:00:00.14 | 1155 | 343 | 0 | | | | |
| 46 | JOIN FILTER USE | :BF0003 | 1 | 106M| 3663M| | 375K (1)| 01:15:11 | 106M|00:00:31.88 | 1367K| 1367K| 0 | | | | |
|* 47 | TABLE ACCESS STORAGE FULL | PAY_RUN_BALANCES | 1 | 106M| 3663M| | 375K (1)| 01:15:11 | 106M|00:00:17.15 | 1367K| 1367K| 0 | 1025K| 1025K| 14M (0)| |

Plan example with a PAY_RUN_BALANCES index skip scan, limiting the number of fetched rows to 1.3M with the filter "Balance Value Details"."Effective Date":

| Id | Operation | Name | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time | A-Rows | A-Time | Buffers | Reads | Writes | OMem | 1Mem | Used-Mem | Used-Tmp|
...
|* 27 | TABLE ACCESS STORAGE FULL | PER_ALL_ASSIGNMENTS_M | 1 | 3311 | 184K| 1371 (1)| 00:00:17 | 5104 |00:00:00.17 | 101K| 0 | 0 | 1025K| 1025K| | |
|* 28 | HASH JOIN | | 1 | 1473K| 94M| 22976 (1)| 00:04:36 | 1393K|00:00:01.93 | 21948 | 0 | 0 | 2926K| 2926K| 1837K (0)| |
|* 29 | TABLE ACCESS STORAGE FULL | PAY_PAY_RELATIONSHIPS_DN | 1 | 6684 | 150K| 68 (0)| 00:00:01 | 6684 |00:00:00.01 | 247 | 0 | 0 | 1025K| 1025K| | |
|* 30 | TABLE ACCESS BY INDEX ROWID| PAY_RUN_BALANCES | 1 | 1473K| 61M| 22904 (1)| 00:04:35 | 1393K|00:00:01.29 | 21701 | 0 | 0 | | | | |
|* 31 | INDEX SKIP SCAN | PAY_RUN_BALANCES_N1 | 1 | 1473K| | 3922 (1)| 00:00:48 | 1393K|00:00:00.36 | 3700 | 0 | 0 | 1025K| 1025K| | |

When choosing to use the "Payroll Actions"."Effective Date" filter with 'less than', 'greater than' or 'between' date values, you may not only run into the 75K resultset limit, but also incur a significant workload on the storage to fetch much larger volumes and end up with
slow report performance. Make sure you analyze your Payroll data shape and use date ranges with caution. You are strongly encouraged to add more logical filters together with Effective Date to ensure better performance.

"Worker"."Employee Display Name" = <Employee Name>

The combination of the "Worker"."Employee Display Name" and "Payroll Actions"."Effective Date" filters should produce an efficient plan and fast performance for a Payroll Balances report without the need for "Balance Value Details"."Effective Date", though using the "Balance Value Details"."Effective Date" filter will ensure constraining the data in the main table PAY_RUN_BALANCES before joining to the other Payroll tables.

Data Security Predicates Impact on OTBI Reports Performance

OTBI enforces the maximum security enabled for each user and applies all enabled data security predicates (DSP) in the generated PSQLs to secure the output data in OTBI reports. The enabled security predicates are embedded into the generated PSQLs for every single secure ADF logical object used in the reports. Depending on the complexity and volume of the security roles, their generated security predicates in OTBI PSQLs often become the primary source of SQL complexity, heavy SQL parsing, suboptimal execution plans and poor query performance.

Security Predicates in OTBI: Performance Recommendations

Review the following common guidelines to mitigate the impact of security overhead in OTBI reports: 1. Regularly audit the enabled security roles in your Fusion Applications environment and reduce their count as much as possible. 2. Optimize each security predicate sub-query and ensure effective indexes are in place for each sub-query. 3.
Use consistent SQL patterns for your security roles, which will allow the optimizer to apply UNION ALL logic to them. 4. Design your DSPs to use the same unique key of the leading table across all sub-queries, so that the optimizer chooses UNION ALL for all predicates. For example:

SUB-OPTIMAL DESIGN: (SELECT OpportunitiesPEO.OPTY_ID, . . . FROM MOO_OPTY OpportunitiesPEO WHERE ((OpportunitiesPEO.opty_id IN (SELECT opty_id FROM moo_revn_partner)) OR (OpportunitiesPEO.opty_id IN (SELECT DISTINCT myopres.opty_id FROM Moo_opty_resources myopres WHERE myopres.resource_id IN (SELECT HZ_SESSION_UTIL.GET_USER_PARTYID FROM dual) AND myopres.access_level_code IN ('100', '200', '300'))) OR (HZ_SESSION_UTIL.validate_party_bu(HZ_SESSION_UTIL.get_user_partyid(), OpportunitiesPEO.bu_org_id) = 'VALID'))) V346884149,

OPTIMIZED DSP: (SELECT OpportunitiesPEO.OPTY_ID, . . . FROM MOO_OPTY OpportunitiesPEO WHERE ((OpportunitiesPEO.opty_id IN (SELECT opty_id FROM moo_revn_partner)) OR (OpportunitiesPEO.opty_id IN (SELECT DISTINCT myopres.opty_id FROM Moo_opty_resources myopres WHERE myopres.resource_id IN (SELECT HZ_SESSION_UTIL.GET_USER_PARTYID FROM dual) AND myopres.access_level_code IN ('100', '200', '300'))) OR (OpportunitiesPEO.opty_id IN (SELECT /*DISTINCT*/ Opt.opty_id FROM MOO_OPTY Opt WHERE (HZ_SESSION_UTIL.validate_party_bu(HZ_SESSION_UTIL.get_user_partyid(), opt.bu_org_id) = 'VALID')))) ) V346884149,

5. Avoid mixing EXISTS and IN for the same secure object; choose either all EXISTS or all IN clauses. 6. Avoid using complex database views and joins to such views in your DSPs. 7. DO NOT use the FND_SESSIONS table in your security sub-queries, as it could be very large in size. 8. Avoid using hints in security predicates, as they typically skew the plans for the generated OTBI physical SQLs. 9. Review the generated DSP security clauses in OTBI PSQLs for any security predicates based on hierarchies.
Unoptimized hierarchy-based DSPs could result in a significant performance impact on OTBI reports.
10. Make sure that security predicates do not cause a CARTESIAN MERGE JOIN, as it could cause an ORA-01652 (out of TEMP space) error. 11. Avoid using VARCHAR2_TABLE datatype attributes and the SPLIT_INTO_TABLE() function in DSPs. 12. Avoid building complex OTBI reports joining multiple secure facts in a single query, as every single secure entity comes with all its associated security predicates and adds to the query complexity. 13. Avoid large numbers of distinct data roles with their own unique security predicates. This can lead to heavy SQL parsing when there are large numbers of concurrent users, even if individual users have a small number of data roles. 14. Consider using a single Global Data Role rather than multiple versions of the same role to eliminate redundancy in the DSP clauses generated in OTBI PSQLs. 15. Review the option to reduce the number of data roles in HCM OTBI reports by using HCM security profiles, which generate security predicates that secure access to data using the context of the logged-in user. Such a technique can help reduce SQL parsing overhead, compared with approaches that use large numbers of distinct data roles with static SQL predicates. 16. Check for unintended security policies pulled into the generated report PSQLs. For example, if you create a custom copy of a role with data security policies already attached, and then create a data role on top of the copied role(s), such logic would pull the unintended DSPs into the generated OTBI PSQLs and cause performance overhead. 17. Design your reports with restrictive functional filters. Do not rely solely on data security predicates to filter the report's data. 18.
Avoid constructing reports using logical filter sub-queries on another subject area included for the sole purpose of 'securing' your data without any functional usage of that SA in the report. Do your due diligence and create proper roles and DSPs per your functional requirements. Refer to the example of such a sub-optimal design below, where the report contains an 'IN' clause doing DSP filtering only. Such a query would result in an unnecessarily complex PSQL and lead to poor report performance. SELECT ... FROM "Workforce Performance - Performance Rating Real Time" WHERE ... AND ("Worker"."Person ID" IN (SELECT "Worker"."Person ID" saw_0 FROM "Workforce Profiles - Person Profile Real Time"....) INTERSECT SELECT "Worker"."Person ID" saw_0 FROM "Workforce Management - Person Real Time")

Security Materialization in OTBI Reports

Fusion Applications Release 13 delivered limited support for a new feature that handles the performance overhead of DSP SQL clauses by materializing security predicates and pulling in materialized DSP IDs instead of the SQL query blocks. It reduces the number of database objects participating in security predicate joins to a single materialized table, queried via an indexed attribute. As a result, OBIEE eliminates the DSP complexity and produces a simple PSQL with efficient, indexed ID column predicates. It ensures consistent report performance across all users. This feature is available to selected HCM subject areas only. You have configurable options to define the frequency of seeding and purging the materialized ID cache tables by session or user. Refer to the OTBI Security Materialization documentation for more details on how to enable and configure the feature in your environment.

OTBI EXPORT LIMITERS AND RECOMMENDATIONS

OBIEE offers many options to present the data in both interactive and scheduled reports, allowing the end users to choose various data download formats, such as CSV, PDF and XLS. Important!
The BI guardrails have been put in place to constrain the processed volumes and contain the performance impact of processing large query data in production environments, i.e. to help prevent the system from being overloaded. The enforced guardrails vary by customer environment shape. You can review the published FA sizing documents for more details about shape configurations. This section covers the details of the parameters that may cause a higher performance impact.

ResultRowLimit

This document already discussed the impact of fetching too much data from the database and the enforced ResultRowLimit set to 75,000 rows (with higher values for larger shapes). The report's runtime will be affected by the size of the fetched data, coming from both the number of selected attributes and the row counts. Note that the limiter does not apply to each individual PSQL if OBIEE generates more than one database query. That is why it is very important to employ restrictive filters to reduce the processed data, as ResultRowLimit is applied by OBIEE as the very last step of rendering the final report result.

DefaultRowsDisplayedInDownloadCSV
The DefaultRowsDisplayedInDownloadCSV value has been aligned with ResultRowLimit to allow end users to download the complete resultset in CSV format. CSV download benchmarks showed the fastest performance and the best scalability by volume. You are advised to use the CSV format over XLS for downloading the generated resultset.

DefaultRowsDisplayedInDownload

DefaultRowsDisplayedInDownload controls the export limit for both PDF and XLS formats. It is set to a lower value than ResultRowLimit because OBIEE requires more resources to export data into these formats. Customers are recommended to use the CSV format for exporting reports with high row counts.

DefaultRowsDisplayedInDelivery

DefaultRowsDisplayedInDelivery limits the row counts fetched into online reports and via scheduled email delivery. Since it controls the row counts for online reports, it is set to a more conservative value than the CSV, PDF and XLS download limiters.

ORACLE BI CLOUD CONNECTOR PERFORMANCE CONSIDERATIONS

Oracle BI Cloud Connector (BICC) uses the OBIEE technology stack to extract Fusion View Objects (VOs) for data integration purposes. BICC sends a logical SQL for every VO extract to OBIEE, which then processes the request, generates a physical database SQL that runs in the FA DB, and fetches the data. BICC transactions can be traced both in Usage Tracking tables and in OBIEE query logs. They are logged under the internal user FUSION_APPS_OBIA_BIEE_APPID. BICC queries have very few limiters, and since they process very large volume extracts, they can generate significant performance impact on FA production environments.

BICC OTBI Metadata Independent Mode

BICC switched to the default OTBI metadata independent mode, which no longer uses the OTBI RPD metadata repository for VO extracts.
BI VOs that are used in OTBI queries may not deliver the desired extract performance due to their more complex logic, which addresses transactional reporting requirements but does not scale for extracts. With the new default mode, you have the option to extract a wider range of VOs with simplified logic that scales for extracting very large volumes in FA environments.

BICC Performance Recommendations

Review the following recommendations to ensure the best performance for your BICC extracts in FA environments:

1. Carefully construct the list of VOs for your extracts, limiting it to only the required objects. Review your extracts for any complex VOs that result in joining multiple tables, and check whether simplified VO versions are available instead.

2. Audit the list of extract attributes for every single VO and select the bare minimum of extract columns needed to address your data integration business requirements. Important! DO NOT use the default ALL columns for your VO extracts unless you really need them ALL. A larger number of extract columns has a direct impact on BICC performance, so make sure you choose only the required attributes.

3. BICC has a default extract timeout of 10 hours per VO extract. Some large volume VOs may require more than 10 hours to process initial volumes. You can override the default value to accommodate your initial extract completion in BIACM by navigating to 'Manage Offerings and Data Stores' -> 'Actions' -> 'Extract preference' -> under 'Job setting' -> Timeout in Hours: 10 Hours (default).

4. Plan to run your initial BICC extract jobs during off business hours. Some initial extracts may require larger TEMP and UNDO tablespace space; running them during less busy times such as weekends minimizes the chance of running out of space.

5. BICC has been enhanced to improve data fetching by introducing three configurable parameters in 'Manage Offerings and Data Stores' -> 'Actions' -> 'Extract preference'.
Consider setting these values to reduce the load on OBIEE for data fetching:

• DB fetch buffer size (MB): 3
• DB fetch row count: 1000
• BI JDBC fetch row count: 1000

Make sure you benchmark your extracts in your test environment before changing these values in production.

TEMP and UNDO Tablespace Sizing

BICC extract queries process very large volumes. Some of the VOs produce query execution plans that result in significant TEMP tablespace usage. Additionally, BICC SQLs take a longer time to run in FA environments, causing longer undo retention and requiring a larger UNDO tablespace as well.
Make sure you run the internal BICC benchmarks in your TEST instance, and if you observe TEMP or UNDO tablespace space issues, reach out to Oracle Support to arrange for sizing these two tablespaces to accommodate your BICC job extracts.

BICC Jobs Design Optimization for Better Performance

BICC is configured to run in a distributed FA environment on primary and high availability (HA) nodes, working in 'active'-'active' mode and balancing the load. Additionally, it introduced the functionality to define priority groups and priority numbers within a job. Understanding job configuration and priority management is essential to achieve the most optimal extract orchestration and better performance.

1. The lowest granular level for balancing the load in BICC is the JOB level. If you configure extracts for all VOs in a single job, BICC runs the JOB on a single BICC node.

2. BICC limits the number of parallel processing threads for VO extracts to 5 at a node level.

3. While BICC does load balancing at the JOB level, the OBIEE cluster balances the load at the logical SQL level. Since each VO extract is a logical SQL request in OBIEE, OBIEE load balances VO extracts (i.e. logical SQLs) from active BICC JOBs across all available OBIEE cluster nodes.

4. The BICC number of concurrent threads applies at the BICC node level and limits parallel VO extracts running on a single BICC node across all JOBs. OBIEE does not have any limits for concurrent requests (logical SQLs).

To better understand BICC and OBIEE load balancing, consider the following example:

• You created a single JOB with 10 VO extracts and the default BICC threads set to 5.
• BICC will start the JOB on a single node, spawning the first 5 VO extracts.
• OBIEE will distribute the 5 VO extracts between its two cluster nodes, e.g. 3 VOs going to node1 and 2 VOs to node2.
• BICC will maintain the maximum concurrency of 5, spawning more extracts as soon as OBIEE completes any of its requests on node1 or node2.
• BICC will keep spawning more requests, maintaining the concurrency of 5, until it finishes all extracts.

5. Decouple heavy VO extracts into separate jobs. Including them in common jobs could result in them running late in the cycle and extending the extract window. In the example below, SubledgerJournalDistributionPVO was run alone at the end of the extract window:

6. Consider phasing heavy extracts in time to reduce the load on the database.

7. You can use Group Number and Group Item Priority values to manage the order of VO executions within a single JOB. To set the values for Group Number and Group Item Priority, connect to the BICC Console as Admin, navigate to 'Manage Jobs' -> click the desired job name link -> 'Edit Group' button to get to the screen as in the example below:
8. If you would like a certain group of VOs from a single job to execute first, set the Group Number for these VOs to a lower value. BICC prioritizes job executions for Group Numbers from lower to higher values.

9. You can use Group Item Priority to set the order of VO extracts within a group with the same 'Group Number' in a single job. BICC prioritizes executions starting with the lower Group Item Priority and moves up through the list.

10. If you schedule multiple jobs to run at the same time, BICC will load balance the jobs by group numbers between primary and HA nodes, generate the list to execute on each BICC node accounting for group numbers and group item priorities, create a common VO extract list, and execute the list in that order. It will not change the lists or move VO extracts to a less loaded BICC node.

11. If you configure both Data and Primary Key (PK) extracts, create two separate jobs, one for the data extract and the other for the PKs. If you keep them in a single job, BICC will first do the data extract and pause PK extracts until the very last data extract completes.

12. Carefully design your schedule and validate it in Dev/Test before deploying to the Production environment. Practical scheduling, smart job design, use of group numbers and group item priorities, and decoupling data from PKs into separate jobs should help you ensure the most efficient extract execution as well as load balancing in the BICC and OBIEE clustered environment.

OTBI PERFORMANCE RELATED ERRORS: RECOMMENDATIONS AND WORKAROUNDS

This chapter covers the most common errors related to performance. The published workarounds and solutions may not address all patterns, as the errors may manifest in other cases not mentioned here.
Each case requires careful benchmarking before applying a recommendation in a production pod.
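As a starting point for such analysis, long-running requests can often be identified from the OBIEE usage tracking data mentioned earlier. The sketch below assumes usage tracking is enabled and writes to the standard S_NQ_ACCT table; exact column names may vary by release, so treat this as illustrative only:

```sql
-- Illustrative only: list the slowest logged requests. BICC VO extracts are
-- recorded under the internal BICC user; drop the filter to see all users.
SELECT user_name,
       start_ts,
       total_time_sec,
       row_count,
       SUBSTR(query_text, 1, 200) AS lsql_snippet
FROM   s_nq_acct
WHERE  user_name = 'FUSION_APPS_OBIA_BIEE_APPID'
ORDER  BY total_time_sec DESC
FETCH FIRST 20 ROWS ONLY
```

The same query, ordered by ROW_COUNT instead, helps spot reports that fetch excessive volumes and are likely to hit the limiters described above.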
The overwhelming majority of performance issues come from extremely complex report logic and, as a result, heavy generated physical SQL. As discussed in the topics above, the complexity may come from data security predicates, use of cross-SAs, poorly constructed logic, ineffective use of hierarchies, complex on-the-fly aggregations, inefficient filters applied too late in the execution plans, etc.

ERROR MESSAGE: [nQSError: 60009] The user request exceeded the maximum query governing execution time
ROOT CAUSE: The user report exceeded the query runtime limit of 10 minutes, enforced for non-admin users.
SOLUTION / WORKAROUND: Query governance is an important indicator, enforcing the maximum allowed limit to stop heavy runaway queries in Fusion Applications. Such reports require careful analysis for sub-optimal performance.

ERROR MESSAGE: [nQSError: 17012] Bulk fetch failed. ORA-01013: user requested cancel of current operation
ROOT CAUSE: The user cancelled the report execution, most probably because it ran too long.
SOLUTION / WORKAROUND: Requires close review of nqquery.log to understand the diagnostic statistics, and a review of the report design and applied security predicates.

ERROR MESSAGE: [nQSError: 17010] SQL statement preparation failed. ORA-10260: limit size(2097152) of the PGA heap set by event 10261 exceeded
ROOT CAUSE: The most common cases are:
1. The optimizer runs into the 2Gb heap size limit, failing to parse a very complex SQL text.
2. The query execution plan results in very heavy hash joins, running into the 2Gb heap size limit during SQL execution.
SOLUTION / WORKAROUND: The PGA heap size limit of 2Gb is an important limiter, enforced in Fusion Applications to prevent very complex and heavy SQLs from consuming shared database memory areas.
If you cannot simplify the generated report logic, you may have to follow the recommendations in this document to break the generated PSQL into multiple queries.

ERROR MESSAGE: [nQSError: 17001] Oracle Error code: 1652, message: ORA-01652: unable to extend temp segment by 128 in tablespace FUSION_TEMP
ROOT CAUSE: The most common cases are:
1. The generated PSQL results in inefficient execution plans, with intermediate steps producing too high row counts.
2. The logical report uses ineffective filters, or the filters get applied too late in the generated query plans.
SOLUTION / WORKAROUND: The report requires a logical SQL review, a careful audit of logical filters, and a review of the generated execution plan for the PSQL to identify any intermediate steps possibly causing row explosion during SQL execution.

ERROR MESSAGE: [nQSError: 46168] RawFile::checkSoftLimit failed because a temporary file exceeds the limit (10240 MB) specified by the NQ_LIMIT_WRITE_FILESIZE environment variable.
ROOT CAUSE: This error in OTBI reports indicates that the LSQL produces more than one PSQL and OBIEE attempted to fetch too large a resultset to OBIEE for 'stitching' on the BI tier.
SOLUTION / WORKAROUND: The logical report requires careful analysis of the aggregates and logical filters, as in the example in the section 'Avoid Fetching Very Large Volumes from Database to OBIEE'. It may be caused by cross-SA reports, use of physical lookups in report design, etc. Consider rewriting the LSQL to avoid the use of non-pushed functions such as RAND() or LENGTH(). Use DESCRIPTOR_IDOF for filter conditions involving lookups. If you run into this error for cross-fact reports having an implicit LOJ in OBIEE execution plans, try to test your report with the prefix: SET VARIABLE OBIS_DBFEATURES_LEFT_OUTER_JOIN_SUPPORTED=1;

ERROR MESSAGE: [nQSError: 17001] Oracle Error code: 1722, message: ORA-01722: invalid number at OCI call OCIStmtFetch.
ROOT CAUSE: The error is related to an implicit data conversion.
SOLUTION / WORKAROUND: Identify the problematic join condition and add explicit conversion functions such as TO_CHAR().
You may also try to influence the optimizer by using database hints to change the order of the joins. Refer to the section discussing the use of database hints.
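As an illustration of the explicit-conversion fix for ORA-01722 (the table and column names below are hypothetical), the idea is to force a character-to-character comparison so Oracle never applies an implicit TO_NUMBER to a column that may contain non-numeric values:

```sql
-- Hypothetical schema: t1.numeric_id is NUMBER, t2.char_code is VARCHAR2.
-- A direct join t1.numeric_id = t2.char_code makes Oracle implicitly apply
-- TO_NUMBER(t2.char_code), raising ORA-01722 on any non-numeric row.
SELECT t1.numeric_id, t2.description
FROM   table_a t1
JOIN   table_b t2
  ON   TO_CHAR(t1.numeric_id) = t2.char_code  -- explicit: compare as strings
```

Note that moving the conversion function to the numeric side, as above, also preserves any index usability on the VARCHAR2 column, whereas wrapping the character column in TO_NUMBER() would not remove the risk of the error.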
ERROR MESSAGE: [nQSError: 17001] Oracle Error code: 32036, message: ORA-32036: unsupported case for inlining of query name in WITH clause
ROOT CAUSE: OTBI enforces "_with_subquery"='inline' to optimize the performance of the generated physical SQL. However, if you define a custom data type in security predicates and use that security in OTBI, the affected reports will fail with this error. An example of a custom data type in a DSP causing the error is:
(OpportunitiesPEO.bu_org_id IN
  (SELECT DISTINCT COLUMN_VALUE AS BU_ID
   FROM THE (SELECT CAST(SPLIT_INTO_TABLE(HZ_SESSION_UTIL.GET_USER_BUSINESS_UNITS)
                    AS VARCHAR2_TABLE)
             FROM DUAL)))
SOLUTION / WORKAROUND: Review and simplify the implemented security predicates. The provided example can be rewritten as below, or try testing with the logical prefix workaround:
((HZ_SESSION_UTIL.validate_party_bu(HZ_SESSION_UTIL.get_user_partyid(), OpportunitiesPEO.bu_org_id) = 'VALID'))
Or:
SET VARIABLE OBIS_ORACLEDB_HINTS_FOR_TOP_SELECT='OPT_PARAM(''_complex_view_merging'',''false'') OPT_PARAM(''_optimizer_distinct_agg_transform'',''false'')';

ERROR MESSAGE: [nQSError: 42029] Subquery contains too many values for the IN predicate.
ROOT CAUSE: Cross-SA logical SQLs use an IN join clause between SAs.
SOLUTION / WORKAROUND: Try the workaround: SET VARIABLE PERF_ENABLE_ASYMMETRIC_COND_OPTIMIZATION=1;

ERROR MESSAGE: "Exceeded configured maximum number of allowed input records. Error Codes: EKMT3FK5:OI2DL65"
ROOT CAUSE: The error manifests when exporting or opening large reports in OTBI.
SOLUTION / WORKAROUND: Consider applying more restrictive filters to reduce the volumes, or break the report into several reports producing smaller volumes.

CONCLUSION

This document consolidates the best practices and recommendations for developing and optimizing performance of Oracle Transactional Business Intelligence for Fusion Applications Release 20A and higher.
This list of areas for performance improvement is not exhaustive. The document will be updated with more findings, revisions and recommendations, so make sure you always use the latest version. If you observe any performance issues with your OTBI reports, carefully benchmark any recommendations or solutions discussed in this article or other sources before implementing the changes in the production environment.
CONNECT WITH US

Call +1.800.ORACLE1 or visit oracle.com. Outside North America, find your local office at oracle.com/contact.

blogs.oracle.com | facebook.com/oracle | twitter.com/oracle

Copyright © 2021, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group. 0120

Oracle Fusion Transactional Business Intelligence and BI Cloud Connector Performance Recommendations
April 2021
Authors: Pavel Buynitsky, Oksana Stepaneeva
Contributing Authors: Amar Batham, Wasimraja Abdulmajeeth