Oracle Retail Analytics and Planning
Implementation Guide
Release 24.1.201.0
F96366-04
June 2024
Oracle Retail Analytics and Planning Implementation Guide, Release 24.1.201.0
F96366-04
This software and related documentation are provided under a license agreement containing restrictions on use and
disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or
allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit,
perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation
of this software, unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find
any errors, please report them to us in writing.
If this is software, software documentation, data (as defined in the Federal Acquisition Regulation), or related
documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then
the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software, any
programs embedded, installed, or activated on delivered hardware, and modifications of such programs) and Oracle
computer documentation or other Oracle data delivered to or accessed by U.S. Government end users are "commercial
computer software," "commercial computer software documentation," or "limited rights data" pursuant to the applicable
Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, reproduction,
duplication, release, display, disclosure, modification, preparation of derivative works, and/or adaptation of i) Oracle
programs (including any operating system, integrated software, any programs embedded, installed, or activated on
delivered hardware, and modifications of such programs), ii) Oracle computer documentation and/or iii) other Oracle
data, is subject to the rights and limitations specified in the license contained in the applicable contract. The terms
governing the U.S. Government's use of Oracle cloud services are defined by the applicable contract for such services.
No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications. It is not
developed or intended for use in any inherently dangerous applications, including applications that may create a risk of
personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all
appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its
affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.
Oracle®, Java, MySQL, and NetSuite are registered trademarks of Oracle and/or its affiliates. Other names may be
trademarks of their respective owners.
Intel and Intel Inside are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used
under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Epyc, and the AMD logo
are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open
Group.
This software or hardware and documentation may provide access to or information about content, products, and
services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all
warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an
applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss,
costs, or damages incurred due to your access to or use of third-party content, products, or services, except as set forth
in an applicable agreement between you and Oracle.
Contents
Send Us Your Comments
Preface
1 Introduction
Overview 1-1
Architecture 1-1
Getting Started 1-2
Example #3: Full dimension load 3-6
Example #4: Sales Data Load 3-7
Example #5: Multi-File Fact Data Load 3-7
Uploading ZIP Packages 3-7
Preparing to Load Data 3-8
Calendar and Partition Setup 3-9
Loading Data from Files 3-11
Initialize Dimensions 3-12
Loading Dimensions into RI 3-12
Hierarchy Deactivation 3-14
Loading Dimensions to Other Applications 3-14
Load History Data 3-15
Automated History Loads 3-17
Sales History Load 3-17
Inventory Position History Load 3-20
Reloading Inventory Data 3-22
Price History Load 3-23
Reloading Price Data 3-24
Purchase Order Loads 3-24
Other History Loads 3-25
Modifying Staged Data 3-26
Reloading Dimensions 3-26
Seed Positional Facts 3-27
Run Nightly Batches 3-29
Sending Data to AI Foundation 3-30
Sending Data to Planning 3-33
Process Overview 3-33
Usage Examples 3-38
Customized Planning Integrations 3-39
Generating Forecasts for MFP 3-41
Generating Forecasts for Inventory Planning Optimization Cloud Service-Demand Forecasting 3-43
Implementation Flow Example 3-43
Generating Forecasts for AP 3-47
Loading Plans to RI 3-47
Loading Forecasts to RI 3-48
Loading Aggregate History Data 3-49
Migrate Data Between Environments 3-52
Merchandising Foundation Cloud Service Data Mapping 4-2
Batch Schedule Definitions 4-2
Ad Hoc Processes 4-4
Batch Dependency Setup (Gen 2 Architecture) 4-6
Batch Link Setup (Gen 2 Architecture) 4-8
Module Setup in Retail Home (Gen 2 Architecture) 4-8
Batch Job Setup (Gen 2 Architecture) 4-9
Batch Job Setup (Gen 1 Architecture) 4-11
Batch Setup for RMS On-Premise 4-12
RDE Job Configuration 4-13
Using RDE for Calendar Setup (Gen 2 Architecture) 4-16
Using RDE for Dimension Loads (Gen 2 Architecture) 4-17
Using RDE for Initial Seeding (Gen 2 Architecture) 4-18
Using RDE for Initial Seeding (Gen 1 Architecture) 4-19
5 Batch Orchestration
Overview 5-1
Initial Batch Setup 5-3
Common Modules 5-4
RI Modules 5-6
AI Foundation Modules 5-6
Maintenance Cycles 5-8
Batch Setup Example 5-8
Adjustments in POM 5-11
Managing Multiple Data Sources 5-11
Adjustments 5-12
Costs 5-12
Deal Income 5-13
Intercompany Margin 5-13
Inventory Position 5-14
Inventory Reclass 5-14
Markdowns 5-15
Prices 5-15
Purchase Orders 5-16
Receipts 5-16
Returns to Vendor 5-17
Sales 5-17
Sales Pack 5-18
Sales Wholesale 5-18
Transfers 5-19
Configure POM Integrations 5-19
Schedule the Batches 5-20
Batch Flow Details 5-21
Planning Applications Job Details 5-21
Reprocessing Nightly Batch Files 5-22
Transformations in Planning 6-37
7 Implementation Tools
Retail Home 7-1
Process Orchestration and Monitoring (POM) 7-3
POM and Customer Modules Management 7-3
Control & Tactical Center 7-5
Data Visualizer 7-6
File Transfer Services 7-10
Required Parameters 7-11
Base URL 7-12
Tenant 7-12
OCI IAM URL 7-12
OCI IAM Scope 7-12
Client ID and Secret 7-13
Common HTTP Headers 7-16
Retrieving Identity Access Client Token 7-17
FTS API Specification 7-17
FTS Script Usage 7-20
Upload Files 7-20
Download Files 7-20
Download Archives 7-20
BI Publisher 7-21
Configuring Burst Reports for Object Storage 7-21
Delivering Scheduled Reports through Object Storage 7-21
Downloading Reports from Object Storage 7-22
Application Express (APEX) 7-22
Database Access Levels 7-24
Postman 7-24
Re-Using Product Identifiers 8-11
Organization File 8-11
Organization Alternates 8-14
Calendar File 8-14
Exchange Rates File 8-16
Attributes Files 8-17
Fact Files 8-19
Fact Data Key Columns 8-19
Fact Data Incremental Logic 8-21
Multi-Threading and Parallelism 8-22
Sales Data Requirements 8-22
Sales Pack Data 8-26
Inventory Data Requirements 8-26
Price Data Requirements 8-29
Receipts Data Requirements 8-31
Transfer Data Requirements 8-33
Adjustment Data Requirements 8-34
RTV Data Requirements 8-36
Markdown Data Requirements 8-37
Purchase Order Data Requirements 8-39
Other Fact File Considerations 8-42
Positional Data Handling 8-42
System Parameters File 8-43
9 Extensibility
AI Foundation Extensibility 9-1
Custom Hooks for IW Extensions 9-2
Planning Applications Extensibility 9-3
Supported Application Configuration Customization 9-3
Rules for Customizing Hierarchy 9-4
Rules for Adding Measures 9-4
Rules for Adding Custom Rules 9-5
Rules for Workbooks and Worksheets Extensibility 9-5
Rules for Adding Custom Real-time Alerts into Existing Workbooks 9-6
Adding a Custom Solution 9-7
Adding Custom Styles 9-7
Validating the Customized Configuration 9-7
Taskflow Extensibility 9-8
Customizing the Batch Process 9-9
Custom Batch Control Validation 9-11
Dashboard Extensibility 9-11
IPOCS-Demand Forecasting Dashboard Extensibility 9-12
Dashboard Intersection 9-13
Process to Customize the Dashboard 9-14
Applying Changes to the Cloud Environment 9-15
Customizing the MFP/AP Dashboard 9-15
RAP Integration Interface Extensibility 9-16
Application Specific Batch Control Information 9-19
Batch Control Samples 9-21
Batch Control Samples 9-25
Batch Control Samples 9-28
Programmatic Extensibility of RPASCE Through Innovation Workbench 9-29
Architectural Overview 9-29
Innovation Workbench from an RPASCE Context 9-30
Innovation Workbench from a RAP Context 9-30
RPASCE Configuration Tools Changes 9-31
Measure Properties 9-31
Rules and Expressions 9-32
Integration Configuration 9-33
RPASCE Special Expression - execplsql 9-33
Arguments 9-33
Examples 9-34
Limitations 9-42
Validations and Common Error Messages 9-42
RPASCE Batch Control File Changes 9-43
RPASCE Deployment 9-44
Uploading Custom PL/SQL Packages 9-44
RPASCE Helper Functions and API for IW 9-44
PL/SQL Best Practices 9-47
Abbreviations and Acronyms 9-48
Input Data Extensibility 9-49
Additional Source for Product Attributes 9-49
Additional Source for Foundation Data 9-49
Additional Source for Data Security 9-50
Additional Sources for Measures 9-51
Custom Sales Type 9-51
Custom Fact Measures 9-52
Additional Custom Fact Data 9-52
Extensibility Example – Product Hierarchy 9-52
Input File Changes 9-53
AI Foundation Setup 9-53
Planning Data Store Setup 9-55
In-Season Forecast Setup 9-57
F Accessibility
ADF-Based Applications F-1
Configuring Application for Screen Reader Mode F-2
Setting Accessibility to Default F-3
JET-Based Applications F-4
OAS-Based Applications F-5
RPASCE Configuration Tools F-5
Report Authoring Guidelines F-5
Color Usage in Tables and Graphs F-6
Text and Label Usage F-6
Layout and Canvas Usage F-7
Send Us Your Comments
Oracle Retail Analytics and Planning Implementation Guide
Oracle welcomes customers' comments and suggestions on the quality and usefulness of this
document.
Your feedback is important, and helps us to best meet your needs as a user of our products.
For example:
• Are the implementation steps correct and complete?
• Did you understand the context of the procedures?
• Did you find any errors in the information?
• Does the structure of the information help you with your tasks?
• Do you need different information or graphics? If so, where, and in what format?
• Are the examples correct? Do you need more examples?
If you find any errors or have any other suggestions for improvement, then please tell us your
name, the name of the company who has licensed our products, the title and part number of
the documentation and the chapter, section, and page number (if available).
Note:
Before sending us your comments, you might like to check that you have the latest
version of the document and whether any concerns have already been addressed. To do this,
access the Online Documentation available on the Oracle Technology Network Web
site. It contains the most current Documentation Library plus all documents revised or
released recently.
Preface
This Implementation Guide provides critical information about the processing and operating
details of the Retail Analytics and Planning platform, including the following:
• System configuration settings
• Technical architecture
• Functional integration dataflow across the enterprise
• Batch processing
Audience
This guide is for:
• Systems administration and operations personnel
• System analysts
• Integrators and implementers
• Business analysts who need information about Analytics and Planning processes and
interfaces
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility
Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc
Customer Support
To contact Oracle Customer Support, access My Oracle Support at the following URL:
https://support.oracle.com
When contacting Customer Support, please provide the following:
• Product version and program/module name
• Functional and technical description of the problem (include business impact)
• Detailed step-by-step instructions to re-create
• Exact error message received
• Screen shots of each step you take
1
Introduction
Overview
The Oracle Retail Analytics and Planning platform is the common and extensible cloud
architecture for analytics and planning solutions.
The platform supports Oracle Retail applications across each of the major analytical categories,
including:
• Descriptive and diagnostic with merchandise, customer, and consumer insights.
• Predictive with demand forecasting and customer and location clustering.
• Prescriptive with assortment, pricing, and inventory optimization.
The platform also supports Oracle Retail merchandise and inventory planning solutions. These
solutions support business responsiveness through a highly interactive user experience and
drive the best outcomes with the application of advanced analytics and artificial intelligence
(AI).
As a common platform, it provides a centralized data repository, lean integration APIs, and an
efficient portfolio of delivery technologies. The data repository reflects a comprehensive data
model of retail planning, operations, and execution processes. The integration APIs support
right-time interactions: a lean set of bulk, on-demand, and near real-time mechanisms. The
delivery technologies represent a portfolio of connected tools to build and extend composite
solutions using fit-for-purpose analytical, application, and integration tools.
Architecture
The architecture used for Oracle Retail Analytics and Planning provides a centralized
repository and integration path for all common foundational and analytical data. The
centralized repository ensures that all solutions reference consistent data definitions across
transformations and aggregations. The centralized integrations simplify implementation and
operational support. This centralization can also be complemented by data and integrations for
customer-specific extensions to any analytics or planning solution implemented on the
platform. Coordination of analytical processes and data movement within the platform is
managed by Oracle Retail Process Orchestration & Monitoring using a common schedule.
The diagram below depicts the high-level platform architecture.
Getting Started
Each implementation of a Retail Analytics and Planning solution involves one or more modules
across Insights, AI Foundation, and Planning. It often includes multiple years of historical data
sourced from multiple Oracle and non-Oracle systems, some of which will also need to be
integrated with the platform on an ongoing basis. For many of the modules, you will also want
data from the platform to be sent to other downstream applications and processes. For these
reasons, every implementation is unique to your business requirements and requires careful
planning and a deep understanding of all platform components.
Regardless of the modules being implemented, the outline in the table below can be followed
and adapted for your project, and later chapters of the document will elaborate on many of
these topics. More detailed checklists and project planning tools for some modules are also
available in My Oracle Support.
Based on all of the topics listed above, here is an example of some major milestones and key
activities that might be followed for a project that includes Merchandise Financial Planning or
IPOCS-Demand Forecasting:
1. Oracle sends you a welcome email containing the required credentials to access your Retail
Analytics and Planning applications - RI, AI Foundation, MFP, IPO, AP, POM, RH, DV,
Apex, Innovation Workbench. You will receive a combined email pointing you to Retail
Home for the individual application links.
2. Verify that the modules purchased by the customer are enabled in Retail Home during
deployment. Review the steps in the RAP Implementation Guide on managing customer
modules.
3. Verify that you can access POM and that the batch schedules for your subscribed
applications are available.
4. Prepare the scripts to access object storage and test your connection to the File Transfer
Services (FTS).
5. Apply initial configurations to all your applications per the documentation for each solution.
6. Upload the files required by RAP for foundation and historical data to object storage.
7. Run the first set of ad hoc jobs to load data into RI’s interfaces, following the RAP
Implementation Guide and RAP Operations Guides as needed.
8. Run the next set of ad hoc jobs to publish data to AI Foundation and Planning (PDS),
following the RAP Implementation Guide and RAP Operations Guides as needed.
9. Repeat the ad hoc data load process iteratively until all history is loaded, and perform any
needed seeding steps per the RAP Implementation Guide to establish full snapshots of
positional data.
10. Run the PDS domain build activity to build your Planning domain (this can be done as
soon as you have any data moved to PDS; it does not require completing the history load).
11. Create user groups in MFP/IPO/AP and configure access in OCI IAM for business users.
12. Configure forecast settings in AI Foundation for generating forecasts and validate forecast
execution works as expected.
13. Upload files to object storage for your complete nightly batch runs and initiate the full
batches for RAP.
2
Setup and Configuration
The Setup and Configuration chapter provides parameters and steps for setting up a new
Retail Analytics and Planning cloud environment. While the platform comprises many
application modules (some of which you may not use), there are certain common processes
and settings that are shared across all of them. It is critical to check and update these core
settings before moving on to later implementation steps, as they will define many system-wide
behaviors that could be difficult to change once you've started loading data into the platform.
Configuration Overview
A high-level outline of the setup process is provided below to describe the activities to be
performed in this chapter.
• Learn the configuration tools: The Retail Analytics and Planning has many tools available to support an implementation, such as Retail Home, POM, and APEX. Knowing how to use these tools is an important first step in the process. Review the Implementation Tools chapter for details.
• Verify Object Storage connectivity: Generate access tokens for interacting with Object Storage and test the connection, as it is required for all file movement into and out of the Oracle cloud. Review the Implementation Tools chapter for details.
• Configure the system calendar: Update the parameters that define the type and characteristics of your business calendar, such as the start and end dates RAP will use to define calendar generation.
• Configure the system languages: Update the master list of supported languages that need to be present in addition to your primary language, such as the need for seeing data in both English and French.
• Configure history retention policies: Certain data tables in Retail Insights that are leveraged by other applications on the platform have a history retention period after which some data may be erased.
• Configure application-specific settings: All applications in the Retail Analytics and Planning have their own settings which must be reviewed before starting an implementation of those modules.
Platform Configurations
This section provides a list of initial setup and configuration steps to be taken as soon as you
are ready to start a new implementation of the Retail Analytics and Planning and have the
cloud environments provisioned and generally available.
Several configuration tables in the RAP database should be reviewed before processing any
data. A list of these tables is below, along with an explanation of their primary uses. The way to
apply changes to these tables is through the Control & Tactical Center, as described in the
section on Control & Tactical Center. The sections following this one provide the detailed
configuration settings for each table listed below.
• C_ODI_PARAM (C_ODI_PARAM_VW): Table used to configure all Oracle Data Integrator (ODI) batch programs as well as many Retail Insights and AI Foundation load properties. C_ODI_PARAM_VW is the name of the table shown in Control Center.
• W_LANGUAGES_G: Table used to define all languages that need to be supported in the database for translatable data (primarily for Retail Insights and AI Foundation Cloud Services).
• C_MODULE_ARTIFACT: Table used for database table partitioning setup. Defines which functional areas within the data warehouse will be populated (and thus require partitioning).
• C_MODULE_EXACT_TABLE: Table used to configure partition strategies for certain tables in the data warehouse, including the Plan fact used for loading plans and budgets to RI/AI Foundation.
• C_MODULE_DBSTATS: Table used to configure the ad hoc manual stats collection program COLLECT_STATS_JOB.
• C_HIST_LOAD_STATUS: Table used to configure historical data loads, configure certain ad hoc batch processes, and monitor the results of those jobs after each run.
• C_HIST_FILES_LOAD_STATUS: Table used to track multiple zip files that are uploaded with sequence numbers in order to process them automatically through ad hoc flows.
• C_SOURCE_CDC: Table used to configure and monitor both historical and ongoing integration to Planning applications through the Retail Insights data warehouse.
• C_DML_AUDIT_LOG: Audit table used to track updates to the C_ODI_PARAM table by users from APEX or Control Center.
C_ODI_PARAM Initialization
The first table requiring updates is C_ODI_PARAM because your system calendar is populated
using the ODI programs. This table is displayed as C_ODI_PARAM_VW on the Manage System
Configurations screen in the Control & Tactical Center. The following settings must be
updated prior to using the platform. These settings are required even if your project only
includes Planning implementations. Changes to these settings are tracked using the audit table
C_DML_AUDIT_LOG.
The following key decisions must be made during this initial configuration phase and the proper
flags updated in C_ODI_PARAM:
• Item Number Re-Use – If you expect the same item numbers to be re-used over time to
represent new items, then you must update RI_ITEM_REUSE_IND to Y and
RI_ITEM_REUSE_AFTER_DAYS to a value >=1. Even if you are not sure how item re-use will
occur, it’s better to enable these initially and change them later as needed.
• Tax Handling – Both for historical and ongoing data, you must decide how tax will be
handled in fact data (will tax amounts be included or excluded in retail values, what kind of
tax calculations may be applied when extracting history data, and so on). You may or may
not need any configurations updated depending on your RDE usage.
• Full vs Incremental Positional Loads – In nightly batches, the core positional fact loads
(Purchase Orders and Inventory Positions) support two methods of loading data: full
snapshots and incremental updates. You must decide which of these methods you will use
and set INV_FULL_LOAD_IND and PO_FULL_LOAD_IND accordingly. Incremental updates are
preferred, as they result in lower data volumes and faster nightly batch performance; but
not all source systems support incremental extracts.
If you are using RDE to integrate with Merchandising, pay special attention to the global tax
and WAC configurations, as these control complex calculations that will change how your data
comes into RAP. These options should not be changed once you enable the integrations
because of the impact to the daily data. For example, a large European retailer with presence
in multiple VAT countries may want the following options:
• RA_INV_WAC_IND = N - This will dynamically calculate inventory cost using all three
Merchandising cost methods instead of just using WAC
• RA_INV_TAX_IND = Y - This will enable the removal of tax amounts from retail values so
inventory and PO reporting is VAT-exclusive
• RA_SLS_TAX_IND = Y - This will enable the removal of tax amounts from retail values so
sales reporting is VAT-exclusive
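The current values of these parameters can be reviewed from APEX before committing to a direction. The query below is only an illustrative sketch: the PARAM_NAME and PARAM_VALUE column names are assumptions, and any changes should still be applied through the Control & Tactical Center rather than by direct SQL.
-- Review the current tax and WAC integration settings (column names are assumed; verify in your environment)
SELECT PARAM_NAME, PARAM_VALUE
  FROM C_ODI_PARAM
 WHERE PARAM_NAME IN ('RA_INV_WAC_IND', 'RA_INV_TAX_IND', 'RA_SLS_TAX_IND');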
Retail Insights contains many additional configurations in the C_ODI_PARAM table that are not
necessary for platform initialization, but may be needed for your project. This includes
Merchandise Financial Planning and IPOCS-Demand Forecasting configurations for specifying
custom planning levels to be used in the integration between MFP/IPO and RI (when RI will be
used for reporting). The default parameters align with MFP/IPO default plan outputs, but if you
are customizing them to use a different base intersection, then you must also update those
values in C_ODI_PARAM. Refer to the Retail Insights Implementation Guide for complete details
on Planning Configurations.
W_LANGUAGES_G Initialization
The W_LANGUAGES_G table controls all the languages supported in the translatable database
data. This applies to areas such as product names, location names, attribute values, season/
phase descriptions, and other text-based descriptors. Additional languages are used mainly by
Retail Insights, which supports displaying data in multiple languages in reporting and analytics.
You must delete from this table any languages that will not be used, because every
language code in this table will have records generated for it in some interfaces, creating
unnecessary data that can impact system performance. Starting with version 23.1.201.0, new
environments will only be created with the ‘US’ language code in place; but if you are on an
earlier version then you must manually delete all other entries that will not be used.
For example, product names will automatically have database records initialized for every
supported language in this configuration table, even if the data you are providing does not
contain any of those languages. This creates significant amounts of data in your product
descriptions table, which may not serve any real purpose for your implementation. If you are
only using a single primary language, then you can safely delete all but one row from
W_LANGUAGES_G. The default row to preserve is the one with a language code of US which is
used for American English.
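As an illustration only, the cleanup for a single-language implementation could be expressed as the statement below. The LANGUAGE_CODE column name is an assumption; confirm the actual column and apply the change through the Control & Tactical Center or APEX as appropriate for your environment.
-- Remove every language row except American English (LANGUAGE_CODE is an assumed column name)
DELETE FROM W_LANGUAGES_G
 WHERE LANGUAGE_CODE <> 'US';
COMMIT;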
C_MODULE_ARTIFACT Initialization
The C_MODULE_ARTIFACT table is used by the database to configure table partitioning. Many
tables in the platform are partitioned based on the business calendar (usually by calendar date
or fiscal week) and this partitioning must be performed immediately after the business calendar
is loaded. You should perform this step regardless of which application modules you are
implementing, because all foundation data passes through this architecture.
Before running partitioning procedures, you must validate this table has all rows set to
ACTIVE_FLG=Y and PARTITION_FLG=Y with the exception of W_RTL_PLANFC* tables (PLANFC
module) and SLSPRFC module, which should not be partitioned at this time and must have
flag values of N instead.
You also must choose whether you are planning to load the Planning facts (such as
W_RTL_PLAN1_PROD1_LC1_T1_FS) for plan/budget data in RI or AI Foundation. If you are not
using the table right away, you should also disable the PLAN modules, like PLAN1. You can
revisit this setup later to perform additional partitioning as needed.
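A quick review query such as the following sketch (using the column names referenced in this section) lists any modules currently excluded from partitioning, so you can confirm that only PLANFC, SLSPRFC, and any intentionally disabled PLAN modules appear:
-- List modules not currently flagged for partitioning
SELECT MODULE_CODE, ACTIVE_FLG, PARTITION_FLG
  FROM C_MODULE_ARTIFACT
 WHERE ACTIVE_FLG <> 'Y'
    OR PARTITION_FLG <> 'Y';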
C_MODULE_EXACT_TABLE Initialization
The C_MODULE_EXACT_TABLE table is used for defining flexible partitioning strategies on certain
tables. Most data in this table can be left as-is, but you must update this table if you plan to
load Planning or Budget information into the W_RTL_PLAN1_PROD1_LC1_T1_FS interface. The
partition level must align with the data level of your plan (day or week). To configure the plan
partitions, you must update the table C_MODULE_EXACT_TABLE where MODULE_CODE = PLAN1.
Modify the columns PARTITION_COLUMN_TYPE and PARTITION_INTERVAL to be one of the
following values:
• If your input data will be at Day level, set both columns to DY
• If your input data will be at Week level, set both columns to WK
You must then enable the partitioning process in C_MODULE_ARTIFACT by locating the row for
MODULE_CODE=PLAN1 and setting ACTIVE_FLG=Y and PARTITION_FLG=Y. If your plan data will
extend into the future, you must also change PARTITION_FUTURE_PERIOD to the number of
future months that need partitions built (for example, use a value of 6M to partition 6 months
into the future).
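For example, a week-level plan configuration would be equivalent to the updates below. This is a sketch of the target values only; changes are normally applied from the Manage System Configurations screen rather than by direct SQL.
-- Configure PLAN1 partitioning at week level
UPDATE C_MODULE_EXACT_TABLE
   SET PARTITION_COLUMN_TYPE = 'WK',
       PARTITION_INTERVAL = 'WK'
 WHERE MODULE_CODE = 'PLAN1';

-- Enable the partitioning process for the PLAN1 module
UPDATE C_MODULE_ARTIFACT
   SET ACTIVE_FLG = 'Y',
       PARTITION_FLG = 'Y'
 WHERE MODULE_CODE = 'PLAN1';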
C_HIST_LOAD_STATUS
The C_HIST_LOAD_STATUS table is used to track the progress of historical loads of data,
primarily inventory position and pricing facts. You should edit the following fields on this table
based on your implementation needs:
• HIST_LOAD_LAST_DATE – Specifies the planned final date for the end of your historical loads
(for example, the end of the 2-year period you plan to load into RAP). The history load
programs will assume that you are providing each week of inventory in sequence from
earliest to latest and process the data in that order.
• ENABLED_IND – Turns on or off a specific table load for historical data. Most of the tables in
these processes are only required for Retail Insights, and the rest can be disabled to
improve performance. Set to a value of N to disable a table load.
• MAX_COMPLETED_DATE – The load programs use this to keep track of the last loaded week
of data. It does not allow you to reload this week or any prior week, so if you are trying to
start over again after purging some history, you must also reset this field.
• HIST_LOAD_STATUS – The load programs use this to track the status of each step in the
load process. If your program gets stuck on invalid records, change this field back to
INPROGRESS before re-running the job. If you are restarting a load after erasing history data,
then you need to clear this field of any values.
If you are implementing Retail Insights, then enable all INV and PRICE modules in the table
(set ENABLED_IND to Y). If you are only implementing AI Foundation or Planning application
modules, then the following history tables should be enabled; all others should be disabled (set
ENABLED_IND to N).
• W_RTL_PRICE_IT_LC_DY_F
• W_RTL_PRICE_IT_LC_DY_HIST_TMP
• W_RTL_INV_IT_LC_DY_F
• W_RTL_INV_IT_LC_WK_A
• W_RTL_INV_IT_LC_DY_HIST_TMP
After enabling your desired history load tables, update the value of HIST_LOAD_LAST_DATE on
all rows you enabled. Set the date equal to the final date of history to be loaded. This can be
changed later if you need to set the date further out into the future.
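For example, an AI Foundation or Planning-only project loading history through early January 2024 might apply settings equivalent to the sketch below. The TARGET_TABLE_NAME column name and the end date are assumptions for illustration; make the actual updates from the Control & Tactical Center.
-- Disable all history load tables, then re-enable only those needed for AIF/Planning
UPDATE C_HIST_LOAD_STATUS SET ENABLED_IND = 'N';

UPDATE C_HIST_LOAD_STATUS
   SET ENABLED_IND = 'Y',
       HIST_LOAD_LAST_DATE = TO_DATE('2024-01-06', 'YYYY-MM-DD')  -- illustrative final week of history
 WHERE TARGET_TABLE_NAME IN (  -- assumed column name; verify in your environment
       'W_RTL_PRICE_IT_LC_DY_F',
       'W_RTL_PRICE_IT_LC_DY_HIST_TMP',
       'W_RTL_INV_IT_LC_DY_F',
       'W_RTL_INV_IT_LC_WK_A',
       'W_RTL_INV_IT_LC_DY_HIST_TMP');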
As you load data files for one or more weeks of history per run, the value of
MAX_COMPLETED_DATE and HIST_LOAD_STATUS automatically update to reflect the progress you
have made. If you need to restart the process (for example, you have loaded test data and
need to start over with production data) these two columns must first be cleared of all data
from the Control Center before beginning the history load again.
C_SOURCE_CDC
The C_SOURCE_CDC table is used for changed data capture (CDC) parameters for the
integrations between the Retail Insights data warehouse and the Planning application
schemas. In general, this table is updated automatically as batches are run. However, it is
important to know when you may need to modify these values.
For most interfaces, the table will initially have no records. The first time an integration batch
program runs, it will take all the data from the source table and move it to the export table. It
will then create a C_SOURCE_CDC record for the target table name, with a value for
LAST_MIN_DATE and LAST_MAX_DATE matching the timeframe extracted. On the next run, it will
look at LAST_MAX_DATE as the new minimum extract date and pull data greater than that date
from the source table. If you are performing history loads for tables, such as Sales
Transactions, you may need to change these dates if you have to re-send data to Planning for
past periods.
Specifically for positional data (at this time only Inventory Position), the usage is not quite the
same. Positional data will always send the current end-of-week values to Planning; it does not
look at historical weeks as part of the normal batch process. A separate historical inventory
integration program is provided in an ad hoc process, which will allow you to send a range of
weeks where LAST_MIN_DATE is the start of the history you wish to send, and LAST_MAX_DATE is
the final date of history before normal batches take it forward. It is common to load inventory
from end to end in isolation as it is a data-intensive and time-consuming process to gather,
load, and validate inventory positions for multiple years of history.
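If you do need to re-send past periods to Planning, the adjustment is conceptually a reset of the extract window, as in the sketch below. The TARGET_TABLE_NAME column name and the filter are assumptions for illustration; review the actual row for your interface in the Control & Tactical Center before changing anything.
-- Move the high-water mark back so the next export re-sends data after this date
UPDATE C_SOURCE_CDC
   SET LAST_MAX_DATE = TO_DATE('2022-01-01', 'YYYY-MM-DD')  -- illustrative date
 WHERE TARGET_TABLE_NAME LIKE '%SLS%';                      -- assumed column name and filter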
W_GLOBAL_CURR_G
The W_GLOBAL_CURR_G table is used by Retail Insights to support up to three additional
currencies in reporting and aggregation (fields beyond the first three are not used at this time). RI pre-
populates global currency fields in all aggregation tables based on the specified currency
codes. The desired codes are added to one row in this table and must align with the Exchange
Rates data provided separately. This table is available from the Control & Tactical Center and
is not a required configuration for any project unless you wish to report on additional currencies
in Retail Insights.
Example data to be inserted to this table:
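The original example rows are not reproduced here. As a rough sketch, a configuration using USD, EUR, and GBP as the three global currencies could look like the following; the GLOBAL1/2/3_CURR_CODE column names are assumptions based on the standard data warehouse layout, so confirm them in the Control & Tactical Center before making changes.
-- Illustrative only: define three global reporting currencies on the single configuration row
UPDATE W_GLOBAL_CURR_G
   SET GLOBAL1_CURR_CODE = 'USD',
       GLOBAL2_CURR_CODE = 'EUR',
       GLOBAL3_CURR_CODE = 'GBP';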
Application Configurations
In addition to the platform configurations defined above, each application on the platform has
its own system and runtime options that need to be reviewed and updated. The information
below will guide you to the appropriate content for each application’s configuration options.
Retail Insights
Retail Insights has a significant number of configurations, primarily in the C_ODI_PARAM_VW table
in the Control Center, which controls batch processes and reporting behaviors throughout the
application. If you are implementing Retail Insights as part of your project, review the “Setup
and Configuration” chapter of the Retail Insights Implementation Guide.
If you are implementing any planning application (Merchandise Financial Planning, IPOCS-
Demand Forecasting, or Assortment Planning), then you are required to configure and use the
Forecasting module in the AI Foundation application interface. This requires initial
configurations to select forecast parameters, as well as post-data load configurations to select
forecast data levels and perform testing of the chosen algorithm. For basic information about
Forecasting and what the AI Foundation application functionality can support, refer to the
“Manage Forecast Configurations” section in the AI Foundation User Guide.
To configure the forecast process for Planning, use the Manage System Configurations
screen in the Control Center to review and modify the configurations in RSE_CONFIG. These
values can be set up now, but you cannot complete the rest of the forecasting process until
your foundation data has been loaded into AI Foundation.
Planning Platform
Planning Applications such as MFP (Merchandise Financial Planning) can be set up using the
Planning Platform (RPASCE). It allows customers to use a Standard GA template version or
configurable planning solution versions. Refer to the Planning application-specific
Implementation Guides for more details about these options.
3
Data Loads and Initial Batch Processing
This chapter describes the common data requirements for implementing any of the Retail
Analytics and Planning modules, where to get additional information for optional or application-
specific data interfaces, and how to load an initial dataset into the cloud environments and
distribute it across your desired applications.
Data Requirements
Preparing data for one or more Retail Analytics and Planning modules can consume a
significant amount of project time, so it is crucial to identify the minimum data requirements for
the platform first, followed by additional requirements that are specific to your implementation
plan. Data requirements that are called out for the platform are typically shared across all
modules, meaning you only need to provide the inputs once to leverage them everywhere. This
is the case for foundational data elements, such as your product and location
hierarchies. Foundation data must be provided for any module of the platform to be
implemented. Foundation data is provided using different sources depending on your current
software landscape, including the on-premise Oracle Retail Merchandising System (RMS) or
3rd-party applications.
Retail Insights is used as the foundational data warehouse that collects and coordinates data
on the platform. You do not need to purchase Retail Insights Cloud Service to leverage the
data warehouse for storage and integration; it is included as part of any RAP solution.
Regardless of which RAP solutions you are implementing, the integration flows shown above
are used.
Application-specific data requirements are in addition to the shared foundation data, and may
only be used by one particular module of the platform. These application data requirements
may have different formats and data structures from the core platform-level dataset, so pay
close attention to those additional interface specifications. References and links are provided
later in this chapter to guide you to the relevant materials for application-specific inputs and
data files.
If you are using RMS as your primary data source, then you may not need to produce some or
all of these foundation files, as they will be created by other Oracle Retail processes for you.
However, it is often the case that historical data requires a different set of foundation files from
your future post-implementation needs. If you are loading manually-generated history files, or
you are not using an Oracle Retail data source for foundation data, then review the rest of this
section for details.
Note:
Every application included with Retail Analytics and Planning has additional data
needs beyond this foundation data. But this common set of files can be used to
initialize the system before moving on to those specific requirements.
The first table defines the minimum dimensional data. A dimension is a collection of descriptive
elements, attributes, or hierarchical structures that provide context to your business data.
Dimensions tell the platform what your business looks like and how it operates. This is not the
entire list of possible dimension files, just the main ones needed to use the platform. Refer to
Legacy Foundation File Reference for a complete list of available platform foundation files,
along with a cross-reference to the legacy interfaces they most closely align with. A complete
interface specification document is also available in My Oracle Support to assist you in
planning your application-specific interface needs.
The other set of foundation files is referred to as facts. Fact data covers all of the actual
events, transactions, and activities occurring throughout the day in your business. Each
module in the platform has specific fact data needs, but the most common ones are listed
below. At a minimum, you should expect to provide Sales, Inventory, and Receipts data for
use in most platform modules. The intersection of all data (meaning which dimensional values
are used) is at a common level of item/location/date. Additional identifiers may be needed on
some files; for example, the sales data should be at the transaction level, the inventory file has
a clearance indicator, and the adjustments file has type codes and reason codes.
Details on which application modules make use of specific files (or columns within a file) can
be found in the Interfaces Guide on My Oracle Support. Make sure you have a full
understanding of the data needs for each application you are implementing before moving on
to later steps in the process. If it is your first time creating these files, read Data File
Generation, for important information about key file structures and business rules that must be
followed for each foundation file.
Other supported file packages, such as output files and optional input files, are detailed in each
module’s implementation guides. Except for Planning-specific integrations and customizations
(which support additional integration paths and formats), it is expected that all files will be
communicated to the platform using one of the filenames above.
7. After history loads are complete, all positional tables, such as Inventory Position, need to
be seeded with a full snapshot of source data before they can be loaded using regular
nightly batches. This seeding process is used to create a starting position in the database
which can be incremented by daily delta extracts. These full-snapshot files can be included
in the first nightly batch you run, if you want to avoid manually loading each seed file
through one-off executions.
8. When all history and seeding loads are completed and downstream systems are also
populated with that data, nightly batches can be started.
Before you begin this process, it is best to prepare your working environment by identifying the
tools and connections needed for all your Oracle cloud services that will allow you to interact
with the platform, as detailed in Implementation Tools and Data File Generation.
Prerequisites for loading files and running POM processes include:
Users must also have the necessary permissions in Oracle Cloud Infrastructure Identity and
Access Management (OCI IAM) to perform all the implementation tasks. Before you begin,
ensure that your user has at least the following groups (and their _PREPROD equivalents if using
a stage/dev environment):
1. Upload the calendar file CALENDAR.csv (and associated context file) through Object
Storage (packaged using the RAP_DATA_HIST.zip file).
2. Execute the HIST_ZIP_FILE_LOAD_ADHOC process. Example Postman message body:
{
"cycleName":"Adhoc",
"flowName":"Adhoc",
"processName":"HIST_ZIP_FILE_LOAD_ADHOC"
}
3. Verify that the jobs in the ZIP file load process completed successfully using the POM
Monitoring screen. Download logs for the tasks as needed for review.
4. Execute the CALENDAR_LOAD_ADHOC process. This transforms the data and moves it into all
internal data warehouse tables. It also performs table partitioning based on your input date
range.
Sample Postman message body:
{
"cycleName":"Adhoc",
"flowName":"Adhoc",
"processName":"CALENDAR_LOAD_ADHOC",
"requestParameters":"jobParams.CREATE_PARTITION_PRESETUP_JOB=2018-12-30,job
Params.ETL_BUSINESS_DATE_JOB=2021-02-06"
}
Note:
If any job having STG in the name fails during the run, then review the POM logs
and it should provide the name of an external LOG or BAD table with more
information. These error tables can be accessed from APEX using a support
utility. Refer to the AI Foundation Operations Guide section on “External Table
Load Logs” for the utility syntax and examples.
You can monitor the partitioning process while it’s running by querying the RI_LOG_MSG table
from APEX. This table captures the detailed partitioning steps being performed by the script in
real time (whereas POM logs are only refreshed at the end of execution). If the process fails in
POM after exactly 4 hours, this is just a POM process timeout and it may still be running in the
background, so you can check for new inserts to the RI_LOG_MSG table.
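One simple way to confirm that partitioning is still progressing is to re-run a row count from APEX and check that it keeps increasing, for example:
-- Re-run periodically; a growing count indicates the partitioning script is still active
SELECT COUNT(*) FROM RI_LOG_MSG;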
The partitioning process will take some time (~5 hours per 100k partitions) to complete if you
are loading multiple years of history, as this may require 100,000+ partitions to be created
across the data model. This process must be completed successfully before continuing with
the data load process. Contact Oracle Support if there are any questions or concerns.
Partitioning can be performed after some data has been loaded; however, it will take
significantly longer to execute, as it has to move all of the loaded data into the proper
partitions.
You can also estimate the number of partitions needed based on the details below:
• RAP needs to partition around 120 week-level tables if all functional areas are enabled, so
take the number of weeks in your history time window multiplied by this number of tables.
• RAP needs to partition around 160 day-level tables if all functional areas are enabled, so
take the number of days in your history time window multiplied by this number of tables.
For a 3-year history window, this results in: 120*52*3 + 160*365*3 = 193,920 partitions. If you
wish to confirm your final counts before proceeding to the next dataload steps, you can
execute these queries from APEX:
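The exact queries are not reproduced here; the sketch below uses the standard Oracle ALL_TAB_PARTITIONS dictionary view, and the table-name patterns for week-level and day-level tables are assumptions to adjust for your environment.
-- Approximate partition counts for week-level and day-level tables (name patterns are assumptions)
SELECT COUNT(*) AS wk_partitions FROM ALL_TAB_PARTITIONS WHERE TABLE_NAME LIKE 'W_RTL%WK%';
SELECT COUNT(*) AS dy_partitions FROM ALL_TAB_PARTITIONS WHERE TABLE_NAME LIKE 'W_RTL%DY%';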
The queries should return a count roughly equal to your expected totals (it will not be exact, as
the data model will add/remove tables over time and some tables come with pre-built partitions
or default MAXVALUE partitions).
• Initialize Dimensions: Initialize dimensional data (products, locations, and so on) to provide a starting point for historical records to join with. Separate initial load processes are provided for this task.
• Load History Data: Run history loads in one or multiple cycles depending on the data volume, starting from the earliest date in history and loading forward to today.
• Reloading Dimensions: Reload dimensional data as needed throughout the process to maintain correct key values for all fact data. Dimensional files can be provided in the same package with history files and ad hoc processes run in sequence when loading.
• Seed Positional Facts: Seed initial values for positional facts using full snapshots of all active item/locations in the source system. This must be loaded for the date prior to the start of nightly batches to avoid gaps in ongoing data.
• Run Nightly Batches: Nightly batches must be started from the business date after the initial seeding was performed.
Completing these steps will load all of your data into the Retail Insights data model, which is
required for all implementations. From there, proceed with moving data downstream to other
applications as needed, such as AI Foundation modules and Merchandise Financial Planning.
Note:
All steps are presented sequentially, but some can be executed in parallel. For example, you
may load dimensions into RI, then on to AI Foundation and Planning applications
before loading any historical fact data. While historical fact data is loaded, other
activities can occur in Planning such as the domain build and configuration updates.
Initialize Dimensions
Loading Dimensions into RI
You cannot load any fact data into the platform until the related dimensions have been
processed and verified. The processes in this section are provided to initialize the core
dimensions needed to begin fact data loads and verify file formats and data completeness.
Some dimensions which are not used in history loads are not part of the initialization process,
as they are expected to come in the nightly batches at a later time.
For the complete list of dimension files and their file specifications, refer to the AI Foundation
Interfaces Guide on My Oracle Support. The steps below assume you have enabled or
disabled the appropriate dimension loaders in POM per your requirements. The process flow
examples also assume CSV file usage, different programs are available for legacy DAT files.
The AI Foundation Operations Guide provides a list of all the job and process flows used by
foundation data files, so you can identify the jobs required for your files and disable unused
programs in POM.
When you are using RDE jobs to source dimension data from RMFCS and you are not
providing any flat files like PRODUCT.csv, it is necessary to disable all file-based loaders in the
RI_DIM_INITIAL_ADHOC process flow from POM. Any job name starting with the following text
can be disabled, because RDE jobs will bypass these steps and insert directly to staging
tables:
• COPY_SI_
• STG_SI_
• SI_
• STAGING_SI_
1. Disable any dimension jobs you are not using from Batch Administration, referring to the
process flows for DAT and CSV files in the AIF Operations Guide as needed. If you are not
sure if you need to disable a job, it’s best to leave it enabled initially. Restart the POM
schedule in Batch Monitoring to apply the changes.
2. Provide your dimension files and context files through File Transfer Services (packaged
using the RAP_DATA_HIST.zip file). All files should be included in a single zip file upload. If
you are using data from Merchandising, this is where you should run the RDE ADHOC
processes such as RDE_EXTRACT_DIM_INITIAL_ADHOC.
3. Execute the HIST_ZIP_FILE_LOAD_ADHOC process if you need to unpack a new ZIP file.
4. Execute the RI_DIM_INITIAL_ADHOC process to stage, transform, and load your dimension
data from the files. The ETL date on the command should be at a minimum one day before
the start of your history load timeframe, but 3-6 months before is ideal. It is best to give
yourself a few months of space for reprocessing dimension loads on different dates prior to
the start of history. Date format is YYYY-MM-DD; any other format will not be processed. After
running the process, you can verify the dates are correct in the W_RTL_CURR_MCAL_G table.
If the business date was not set correctly, your data may not load properly.
Sample Postman message body:
{
"cycleName":"Adhoc",
"flowName":"Adhoc",
"processName":"RI_DIM_INITIAL_ADHOC",
"requestParameters":"jobParams.ETL_BUSINESS_DATE_JOB=2017-12-31"
}
Note:
If any job having STG in the name fails during the run, then review the POM logs
and it should provide the name of an external LOG or BAD table with more
information. These error tables can be accessed from APEX using a support
utility. Refer to the AI Foundation Operations Guide section on “External Table
Load Logs” for the utility syntax and examples.
If this is your first dimension load, you will want to validate the core dimensions such as
product and location hierarchies using APEX. Refer to Sample Validation SQLs for sample
queries you can use for this.
If any jobs fail during this load process, you may need to alter one or more dimension data
files, re-send them in a new zip file upload, and re-execute the programs. Only after all core
dimension files have been loaded (CALENDAR, PRODUCT, ORGANIZATION, and EXCH_RATE) can you
proceed to history loads for fact data. Make sure to query the RI_DIM_VALIDATION_V view for
any warnings/errors after the run. Refer to the AI Foundation Operations Guide for more details
on the validation messages that may occur. This view primarily uses the table
C_DIM_VALIDATE_RESULT, which can be separately queried instead of the view to see all the
columns available on it.
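For example, a quick check from APEX after the ad hoc run could be:
-- Review dimension validation warnings and errors
SELECT * FROM RI_DIM_VALIDATION_V;

-- Or query the underlying table to see every available column
SELECT * FROM C_DIM_VALIDATE_RESULT;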
If you need to reload the same file multiple times due to errors, you must Restart the Schedule
in POM and then run the ad hoc process C_LOAD_DATES_CLEANUP_ADHOC before repeating these
steps. This will remove any load statuses from the prior run and give you a clean start on the
next execution.
Note:
Starting with version 23.1.101.0, the product and organization file loaders have been
redesigned specifically for the initial ad hoc loads. In prior versions, you must not
reload multiple product or organization files for the same ETL business date, as it
treats any changes as a reclassification and can cause data issues while loading
history. In version 23.x, the dimensions are handled as “Type 1” slowly changing
dimensions, meaning the programs do not look for reclasses and instead perform
simple merge logic to apply the latest hierarchy data to the existing records, even if
levels have changed.
As a best practice, you should disable all POM jobs in the RI_DIM_INITIAL_ADHOC process
except the ones you are providing new files for. For example, if you are loading the PRODUCT,
ORGANIZATION, and EXCH_RATE files as your dimension data for AI Foundation, then you could
just execute the set of jobs for those files and disable the others. Refer to the AI Foundation
Operations Guide for a list of the POM jobs involved in loading each foundation file, if you wish
to disable jobs you do not plan to use to streamline the load process.
Hierarchy Deactivation
Beginning in version 23, foundation dimension ad hoc loads have been changed to use Type 1
slowly-changing dimension (SCD) behavior, which means that the system will no longer create
new records every time a parent/child relationship changes. Instead, it will perform a simple
merge on top of existing data to maintain as-is hierarchy definitions. The foundation data
model holds hierarchy records separately from product data, so it is also necessary to perform
maintenance on hierarchies to maintain a single active set of records that should be
propagated downstream to other RAP applications. This maintenance is performed using the
program W_PROD_CAT_DH_CLOSE_JOB in the RI_DIM_INITIAL_ADHOC process. The program will
detect unused hierarchy nodes which have no children after the latest data has been loaded
into W_PROD_CAT_DH and it will close them (set to CURRENT_FLG=N). This is required because, in
the data model, each hierarchy level is stored as a separate record, even if that level is not
being used by any products on other tables. Without the cleanup activity, unused hierarchy
levels would accumulate in W_PROD_CAT_DH and be available in AI Foundation, which is
generally not desired.
There are some scenarios where you may want to disable this program. For example, if you
know the hierarchy is going to change significantly over a period of time and you don’t want
levels to be closed and re-created every time a new file is loaded, you must disable
W_PROD_CAT_DH_CLOSE_JOB. You can re-enable it later and it will close any unused levels that
remain after all your changes are processed. Also be aware that the program is part of the
nightly batch process too, so once you switch from historical to nightly loads, this job will be
enabled and will close unused hierarchy levels unless you intentionally disable it. This job must
be disabled if you are using RDE programs to load Merchandising data.
Loading Dimensions to Other Applications
After the core dimensions are loaded into RI, it is recommended to send them onward to AI Foundation and the Planning applications (as applicable). This allows for parallel data validation and domain build activities to occur while
you continue loading data. Review sections Sending Data to AI Foundation and Sending Data
to Planning for details on the POM jobs you may execute for this.
The main benefits of this order of execution are:
1. Validating the hierarchy structure from the AI Foundation interface provides an early view
for the customer to see some application screens with their data.
2. Planning apps can perform the domain build activity without waiting for history file loads to
complete, and can start to do other planning implementation activities in parallel to the
history loads.
3. Data can be made available for custom development or validations in Innovation
Workbench.
Do not start history loads for facts until you are confident all dimensions are working
throughout your solutions. Once you begin loading facts, it becomes much harder to reload
dimension data without impacts to other areas. For example, historical fact data already loaded
will not be automatically re-associated with hierarchy changes loaded later in the process.
The following tables can be used to monitor and troubleshoot historical data loads:
• C_HIST_LOAD_STATUS – Tracks the progress of historical ad hoc load programs for inventory and pricing facts. This table will tell you which Retail Insights tables are being populated with historical data, the most recent status of the job executions, and the most recently completed period of historical data for each table. Use APEX or Data Visualizer to query this table after historical data load runs to ensure the programs are completing successfully and processing the expected historical time periods.
• C_HIST_FILES_LOAD_STATUS – Tracks the progress of zip file processing when loading multiple files in sequence using scheduled standalone process flows.
• C_LOAD_DATES – Check this table for detailed statuses of historical load jobs. This is the only place that tracks this information at the individual ETL thread level. For example, it is possible for a historical load using 8 threads to successfully complete 7 threads but fail on one thread due to data issues. The job itself may just return as Failed in POM, so knowing which thread failed will help identify the records that may need correcting and which thread should be reprocessed.
• W_ETL_REJECTED_RECORDS – Summary table capturing rejected fact record counts that do not get processed into their target tables in Retail Insights. Use this to identify other tables with specific rejected data to analyze. Does not apply to dimensions, which do not have rejected record support at this time.
• E$_W_RTL_SLS_TRX_IT_LC_DY_TMP – Example of a rejected record detail table for Sales Transactions. All rejected record tables start with the E$_ prefix. These tables are created at the moment the first rejection occurs for a load program. W_ETL_REJECTED_RECORDS will tell you which tables contain rejected data for a load. These tables may not initially be granted to APEX for you to read from. To grant access, run the RABE_GRANT_ACCESS_TO_IW_ADHOC_PROCESS ad hoc process in the AIF APPS schedule in POM. This will allow you to select from these error tables to review rejection details.
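For example, a quick progress check from APEX or Data Visualizer could look like the following query, which selects the C_HIST_LOAD_STATUS columns referenced elsewhere in this chapter (use SELECT * to see all tracking columns):

SELECT hist_load_status, max_completed_date, hist_load_last_date
FROM c_hist_load_status;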
When loading data from flat files for the first time, it is common to have bad records that cannot
be processed by the RAP load procedures, such as when the identifiers on the record are not
present in the associated dimension tables. The foundation data loads leverage rejected
record tables to capture all such data so you can see what was dropped by a specific data load
and needs to be corrected and reloaded. These tables do not exist until rejected records occur
during program execution, and are not initially granted to APEX unless you have run
RABE_GRANT_ACCESS_TO_IW_ADHOC_PROCESS. Periodically monitor these tables for rejected data
which may require reloading.
The overall sequence of files to load will depend on your specific data sources and conversion
activities, but the recommendation is listed below as a guideline.
1. Sales – Sales transaction data is usually first to be loaded, as the data is critical to running
most applications and needs the least amount of conversion.
2. Inventory Receipts – If you need receipt dates for downstream usage, such as in
Lifecycle Pricing Optimization, then you need to load receipt transactions in parallel with
Inventory Positions. For each file of receipts loaded, also load the associated inventory
positions afterwards.
3. Inventory Position – The main stock-on-hand positions file is loaded next. This history
load also calculates and stores data using the receipts file, so INVENTORY.csv and
RECEIPT.csv must be loaded at the same time, for the same periods.
4. Pricing – The price history file is loaded after sales and inventory are complete because
many applications need only the first two datasets for processing. Price history may also be
the largest volume of data, so it is helpful to begin working in your other applications in
parallel with loading price data.
5. All other facts – There is no specific order to load any of the other facts like transfers,
adjustments, markdowns, costs, and so on. They can be loaded based on your
downstream application needs and the availability of the data files.
For your first time implementing this history load process, you may also leverage the reference
paper and scripts in My Oracle Support (Doc ID 2539848.1) titled AI Foundation Historical Data
Load Monitoring. This document will guide you through one way you can monitor the progress
of history loads, gather statistics on commonly used tables, and verify that data is moving from
the input tables to the target tables in the database.
Note:
Many parts of AI Foundation require transactional data for sales, so loading
aggregate data should not be done unless you have no better alternative.
If you are not loading sales history for Retail Insights specifically, then there are many
aggregation programs that can be disabled in the POM standalone process. Most aggregation
programs (jobs ending in _A_JOB) populate additional tables used only in BI reporting. The
following list of jobs must be enabled in the HIST_SALES_LOAD_ADHOC process to support AIF
and Planning data needs, but all others can be disabled for non-RI projects:
• VARIABLE_REFRESH_JOB
• ETL_REFRESH_JOB
• W_EMPLOYEE_D_JOB
• SEED_EMPLOYEE_D_JOB
• W_PARTY_PER_D_JOB
• SEED_PARTY_PER_D_JOB
• W_RTL_CO_HEAD_D_JOB
• W_RTL_CO_LINE_D_JOB
• SEED_CO_HEAD_D_JOB
• SEED_CO_LINE_D_JOB
• W_RTL_SLS_TRX_IT_LC_DY_F_JOB
• RA_ERROR_COLLECTION_JOB
• RI_GRANT_ACCESS_JOB
• RI_CREATE_SYNONYM_JOB
• ANAYLZE_TEMP_TABLES_JOB
• W_RTL_SLS_IT_LC_DY_TMP_JOB
• W_RTL_SLS_IT_LC_WK_A_JOB
• W_RTL_PROMO_D_TL_JOB
• SEED_PROMO_D_TL_JOB
• W_PROMO_D_RTL_TMP_JOB
• W_RTL_SLSPR_TRX_IT_LC_DY_F_JOB
• W_RTL_SLSPK_IT_LC_DY_F_JOB
• W_RTL_SLSPK_IT_LC_WK_A_JOB
• REFRESH_RADM_JOB
The other process used, HIST_STG_CSV_SALES_LOAD_ADHOC, can be run with all jobs enabled,
as it is only responsible for staging the files in the database. Make sure to check the enabled
jobs in both processes before continuing.
After confirming the list of enabled sales jobs, perform the following steps:
1. Create the file SALES.csv containing one or more days of sales data along with a CTX file
defining the columns which are populated. Optionally include the SALES_PACK.csv file as
well.
2. Upload the history files to Object Storage using the RAP_DATA_HIST.zip file.
3. Execute the HIST_ZIP_FILE_LOAD_ADHOC process.
4. Execute the HIST_STG_CSV_SALES_LOAD_ADHOC process to stage the data in the database.
Validate your data before proceeding. Refer to Sample Validation SQLs for sample queries
you can use for this.
5. Execute the HIST_SALES_LOAD_ADHOC batch processes to load the data. If no data is
available for certain dimensions used by sales, then the load process can seed the
dimension from the history file automatically. Enable seeding for all of the dimensions
according to the initial configuration guidelines; providing the data in other files is optional.
Several supplemental dimensions are involved in this load process, which may or may not
be provided depending on the data requirements. For example, sales history data has
promotion identifiers, which would require data on the promotion dimension.
Sample Postman message bodies:
{
"cycleName":"Adhoc",
"flowName":"Adhoc",
"processName":"HIST_STG_CSV_SALES_LOAD_ADHOC"
}
{
"cycleName":"Adhoc",
"flowName":"Adhoc",
"processName":"HIST_SALES_LOAD_ADHOC"
}
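For the validation in step 4, a simple row-count query against the staged data can confirm that the expected days of sales were loaded. This is only a sketch; the staging table and column names used here (W_RTL_SLS_TRX_IT_LC_DY_FS and DAY_DT) are assumptions based on the naming conventions in this chapter and may differ in your environment:

-- Count staged sales rows per business day (assumed table and column names)
SELECT day_dt, COUNT(*) AS row_count
FROM w_rtl_sls_trx_it_lc_dy_fs
GROUP BY day_dt
ORDER BY day_dt;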
Note:
If any job having STG in the name fails during the run, then review the POM logs
and it should provide the name of an external LOG or BAD table with more
information. These error tables can be accessed from APEX using a support
utility. Refer to the AI Foundation Operations Guide section on “External Table
Load Logs” for the utility syntax and examples.
After the load is complete, you should check for rejected records, as this will not cause the job
to fail but it will mean not all data was loaded successfully. Query the table
W_ETL_REJECTED_RECORDS from IW to see a summary of rejections. If you cannot immediately
identify the root cause (for example, missing products or locations causing the data load to skip
the records), there is a utility job W_RTL_REJECT_DIMENSION_TMP_JOB that allows you to analyze the
rejections for common reject reasons. Refer to the AIF Operations Guide for details on
configuring and running the job for the first time if you have not used it before.
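For example, a first review of rejections from IW might use queries like the ones below. The E$ table shown is the sales example given earlier in this chapter; the actual detail table names will vary by load program:

-- Summary of rejected record counts by target table
SELECT * FROM w_etl_rejected_records;

-- Drill into a specific rejected record detail table listed in the summary
SELECT * FROM e$_w_rtl_sls_trx_it_lc_dy_tmp;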
This process can be repeated as many times as needed to load all history files for the sales
transaction data. If you are sending data to multiple RAP applications, do not wait until all data
files are processed to start using those applications. Instead, load a month or two of data files
and process them into all apps to verify the flows before continuing.
Note:
Data cannot be reloaded for the same records multiple times, as sales data is treated
as additive. If data needs correction, you must post only the delta records (for
example, send -5 to reduce a value by 5 units) or erase the table and restart the load
process using RI_SUPPORT_UTIL procedures in APEX. Raise a Service Request with
Oracle if neither of these options resolve your issue.
Once you have performed the load and validated the data one time, you may wish to automate
the remaining file loads. A standalone process flow RI_FLOW_ADHOC is available in POM that
can run the sales history load multiple times using your specified start times. Follow the steps
below to leverage this process:
1. Upload multiple ZIP files, each containing one SALES.csv, naming them RAP_DATA_HIST.zip,
RAP_DATA_HIST.zip.1, RAP_DATA_HIST.zip.2, and so on, incrementing the index at the end of
the zip file name. Track the status of the files in the
C_HIST_FILES_LOAD_STATUS table once they are uploaded and at least one execution of
the HIST_ZIP_FILE_UNLOAD_JOB process has been run.
2. In the POM batch administration screen, ensure all of the jobs in the RI_FLOW_ADHOC are
enabled, matching your initial ad hoc run. Schedule the standalone flows from Scheduler
Administration to occur at various intervals throughout the day. Space out the runs based
on how long it took to process your first file.
3. Monitor the load progress from the Batch Monitoring screen to see the results from each
run cycle.
Although it is supported, it is not advisable to load history data after nightly batches have
started. It would be difficult to erase or correct historical data after it is loaded without affecting
your nightly batch data as well. For this reason it is best to validate the history data thoroughly
in a non-production environment before loading it to the production system.
The following steps describe the process for loading inventory history:
1. If you need inventory to keep track of First/Last Receipt Dates for use in Lifecycle Pricing
Optimization or Forecasting (SLC), then you must first load a RECEIPT.csv file for the same
historical period as your inventory file. Because receipt dates are used in forecasting, this
may also be required for your Inventory Planning Optimization loads if you plan to use SLC
forecasting. You must also set RI_INVAGE_REQ_IND to Y. Receipts are loaded using the
process HIST_CSV_INVRECEIPTS_LOAD_ADHOC. Receipts may be provided at day or week
level depending on your history needs.
2. Create the file INVENTORY.csv containing one or more weeks of inventory snapshots in
chronological order along with your CTX file to define the columns that are populated. The
DAY_DT value on every record must be an end-of-week date (Saturday by default).
3. Upload the history file and its context file to Object Storage using the RAP_DATA_HIST.zip
file.
4. Update column HIST_LOAD_LAST_DATE on the table C_HIST_LOAD_STATUS to be the date
matching the last day of your overall history load (this will be later than the dates in the current
file). This can be done from the Control & Tactical Center. If you are loading history after
your nightly batches were already started, then you must set this date to be the last week-
ending date before your first daily/weekly batch. No other date value can be used in this
case.
5. Execute the HIST_ZIP_FILE_LOAD_ADHOC process.
6. If you are providing RECEIPT.csv for tracking receipt dates in history, run
HIST_CSV_INVRECEIPTS_LOAD_ADHOC at this time.
7. Execute the HIST_STG_CSV_INV_LOAD_ADHOC process to stage your data into the database.
Validate your data before proceeding. Refer to Sample Validation SQLs for sample queries
you can use for this.
8. Execute the HIST_INV_LOAD_ADHOC batch process to load the file data. The process loops
over the file one week at a time until all weeks are loaded. It updates the
C_HIST_LOAD_STATUS table with the progress, which you can monitor from APEX or DV.
Sample Postman message bodies:
{
"cycleName":"Adhoc",
"flowName":"Adhoc",
"processName":"HIST_STG_CSV_INV_LOAD_ADHOC"
}
{
"cycleName":"Adhoc",
"flowName":"Adhoc",
"processName":"HIST_INV_LOAD_ADHOC"
}
This process can be repeated as many times as needed to load all history files for the
inventory position. Remember that inventory cannot be loaded out of order, and you cannot go
back in time to reload files after you have processed them (for the same item/loc intersections).
If you load a set of inventory files and then find issues during validation, erase the tables in the
database and restart the load with corrected files.
If you finish the entire history load and need to test downstream systems (like Inventory
Planning Optimization) then you must populate the table W_RTL_INV_IT_LC_G first (the history
load skips this table). There is a separate standalone job HIST_LOAD_INVENTORY_GENERAL_JOB
in the process HIST_INV_GENERAL_LOAD_ADHOC that you may execute to copy the final week of
inventory from the fact table to this table.
If your inventory history has invalid data, you may get rejected records and the batch process
will fail with a message that rejects exist in the data. If this occurs, you cannot proceed until
you resolve your input data, because rejections on positional data MUST be resolved for one
date before moving onto the next. If you move onto the next date without reprocessing any
rejected data, that data is lost and cannot be loaded at a later time without starting over. When
this occurs:
1. The inventory history load will automatically populate the table
W_RTL_REJECT_DIMENSION_TMP with a list of invalid dimensions it has identified. If you are
running any other jobs besides the history load, you can also run the process
W_RTL_REJECT_DIMENSION_TMP_ADHOC to populate that table manually. You have the choice
to fix the data and reload new files, or to proceed with the current file.
2. After reviewing the rejected records, run REJECT_DATA_CLEANUP_ADHOC, which will erase the
E$ table and move all rejected dimensions into a skip list. You must pass in the module
code you want to clean up data for as a parameter on the POM job (in this case the
module code is INV). The skip list is loaded to the table C_DISCARD_DIMM. Skipped
identifiers will be ignored for the current file load, and then reset for the start of the next
run.
Example Postman message body:
{
"cycleName": "Adhoc",
"flowName":"Adhoc",
"processName":"REJECT_DATA_CLEANUP_ADHOC",
"requestParameters":"jobParams.REJECT_DATA_CLEANUP_JOB=INV"
}
3. If you want to fix your files instead of continuing the current load, stop here and reload your
dimensions and/or fact data following the normal process flows.
4. If you are resuming with the current file with the intent to skip all data in C_DISCARD_DIMM,
restart the failed POM job now. The skipped records are permanently lost and cannot be
reloaded unless you erase your inventory data and start loading files from the beginning.
Log a Service Request with Oracle Support for assistance with any of the above steps if you
are having difficulties with loading inventory history or dealing with rejected records.
Make sure you check C_HIST_LOAD_STATUS before starting a new inventory load as you may
want to enable different tables or change the HIST_LOAD_LAST_DATE values to align with your
new data. Verify that the MAX_COMPLETED_DATE and HIST_LOAD_STATUS columns are null for all
rows you will be reprocessing.
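A minimal check of this kind from APEX might be:

-- Rows still showing a completed date or status have not been reset for reprocessing
SELECT *
FROM c_hist_load_status
WHERE max_completed_date IS NOT NULL
   OR hist_load_status IS NOT NULL;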
If rejected records occur during the price history load, the same REJECT_DATA_CLEANUP_ADHOC
process is used with the PRICE module code. Example Postman message body:
{
"cycleName": "Adhoc",
"flowName":"Adhoc",
"processName":"REJECT_DATA_CLEANUP_ADHOC",
"requestParameters":"jobParams.REJECT_DATA_CLEANUP_JOB=PRICE"
}
If you will not be loading a complete history file, or you want to skip history and run price
seeding or nightly cycles, then you must be aware of the nightly job behavior as it relates to
history. The nightly batch job W_RTL_PRICE_IT_LC_DY_F_JOB has a validation rule that looks for
history data and fails the batch if it is not found. It is looking specifically in the table
W_RTL_PRICE_IT_LC_G, because this table must be populated when the price history job runs
for the last date (the value in HIST_LOAD_LAST_DATE). The reason for this validation is that the
system calculates many additional fields like LAST_MKDN_DT that need to include your historical
price activity. If you run the nightly batch jobs without populating W_RTL_PRICE_IT_LC_G, all
calculated fields will start as null or having only the current night’s price data. If you are not
planning to load complete price history up to the value on HIST_LOAD_LAST_DATE, then you may
disable this validation rule from C_ODI_PARAM_VW in the Manage System Configurations screen.
Look for the RI_CHK_PRICE_G_EMPTY_IND parameter and update the value to N. This will allow
W_RTL_PRICE_IT_LC_DY_F_JOB to complete even if no history has been loaded and the
W_RTL_PRICE_IT_LC_G table is empty.
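The change itself should be made in the Manage System Configurations screen, but you can confirm the current setting from APEX with a query similar to the one below (the PARAM_NAME and PARAM_VALUE column names are assumptions about the C_ODI_PARAM_VW view):

SELECT param_name, param_value
FROM c_odi_param_vw
WHERE param_name = 'RI_CHK_PRICE_G_EMPTY_IND';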
Sample Postman message body for seeding Purchase Order on-order data:
{
"cycleName":"Adhoc",
"flowName":"Adhoc",
"processName":"SEED_CSV_W_RTL_PO_ONORD_IT_LC_DY_F_PROCESS_ADHOC"
}
Purchase Order data functions similarly to inventory in that it is a positional fact interface and
cannot be loaded or reloaded multiple times for the same dates. If you load some data for a
given business date and need to change or erase it, you must truncate the tables using
support utilities and then load the new data. You also cannot load the data out of order. Once
you load data for one date, you may only load new files for future dates after that point. The
data warehouse target tables for this data are W_RTL_PO_DETAILS_D (for the ORDER_HEAD.csv file)
and W_RTL_PO_ONORD_IT_LC_DY_F (for the ORDER_DETAIL.csv file).
Prior to starting nightly loads of PO data, you must also choose your configuration option for
the parameter PO_FULL_LOAD_IND in the C_ODI_PARAM_VW configuration table in Manage System
Configurations. By default, this parameter is set to N, which means that the nightly interface
expects to get all delta/incremental updates to your existing PO data each night. This delta
includes zero balance records for when a PO is received in the source and moves to zero units
on order. If tracking and sending deltas is not possible, you may change this parameter to Y,
which indicates that your nightly file will be a full snapshot of open order records instead. The
system will automatically zero out any purchase order lines that are not included in your nightly
file, which allows you to extract only the non-zero lines in your source data.
You must also choose the configuration option for the parameter PDS_EXPORT_DAILY_ONORD,
which determines whether the EOW_DATE used in the export data is allowed to contain non-end-
of-week (EOW) dates, or if the system must convert it to a week-ending date in all cases.
When set to a value of Y, it means daily dates are allowed in the EOW_DATE field on the export, if
there is a daily date in the OTB_EOW_DATE column of ORDER_HEAD.csv. When set to a value of N,
it means the system automatically converts the input dates from ORDER_HEAD.csv to be week-
ending dates in all cases. For a base implementation of MFP with no customizations, you may
want this setting to be N to force the export dates to be EOW dates even if the input file has
non-EOW dates from the source. If you are altering RPAS to have a different base calendar
intersection, then you may want to change this to Y instead to allow daily dates.
• HIST_SALES_WF_LOAD_ADHOC
All of these interfaces deal with transactional data (not positional) so you may use them at any
time to load history files in each area.
Note:
These processes are intended to support history data for downstream applications
such as AI Foundation and Planning, so the tables populated by each process by
default should satisfy the data needs of those applications. Jobs not needed by those
apps are not included in these processes.
Some data files used by AIF and Planning applications do not have a history load process,
because the data is only used from the current business date forwards. For Purchase Order
data (ORDER_DETAIL.csv), refer to the section below on Seed Positional Facts if you need to
load the file before starting your nightly batch processing. For other areas like transfers/
allocations used by Inventory Planning Optimization, those jobs are only included in the nightly
batch schedule and do not require any history to be loaded.
Directly updating the staging table data can be useful for quickly debugging load failures and
correcting minor issues. For example, you are attempting to load PRODUCT.csv for the first time
and you discover some required fields are missing data for some rows. You may directly
update the W_PRODUCT_DTS table to put values in those fields and rerun the POM job, allowing
you to progress with your data load and find any additional issues before generating a new file.
Similarly, you may have loaded an inventory receipts file, but discovered after staging the file
that data was written to the wrong column (INVRC_QTY contains the AMT values and vice versa).
You can update the fields and continue to load it to the target tables to verify it, and then
correct your source data from the next run forwards only.
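As an illustration only, a staging table patch might look like the following. BRAND_NAME is a hypothetical column used for this example; the column you correct will depend on your file and the error reported:

-- Hypothetical example: backfill a required field that arrived empty, then rerun the POM job
UPDATE w_product_dts
SET brand_name = 'UNKNOWN'
WHERE brand_name IS NULL;

COMMIT;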
These privileges extend only to staging tables, such as table names ending in FTS, DTS, FS, or
DS. You cannot modify internal tables holding the final fact or dimension data. You cannot
modify configuration tables as they must be updated from the Control & Tactical Center. The
privileges do not apply to objects in the RDX or PDS database schemas.
Reloading Dimensions
It is common to reload dimensions at various points throughout the history load, or even in-
sync with every history batch run. Ensure that your core dimensions, such as the product and
location hierarchies, are up-to-date and aligned with the historical data being processed. To
reload dimensions, you may follow the same process as described in the Initial Dimension
Load steps, ensuring that the current business load date in the system is on or before the date
in history when the dimensions will be required. For example, if you are loading history files in
a monthly cadence, ensure that new product and location data required for the next month has
been loaded no later than the first day of that month, so it is effective for all dates in the history
data files.
It is also very important to understand that history load procedures are unable to handle
reclassifications that have occurred in source systems when you are loading history files. For
example, if you are using current dimension files from the source system to process historical
data, and the customer has reclassified products so they are no longer correct for the historical
time periods, then your next history load may place sales or inventory under the new
classifications, not the ones that were relevant in history. For this reason, reclassifications
should be avoided if at all possible during history load activities, unless you can maintain
historical dimension snapshots that will accurately reflect historical data needs.
Sample Postman message body for setting the data warehouse ETL business date using the
LOAD_CURRENT_BUSINESS_DATE_ADHOC process:
{
"cycleName":"Adhoc",
"flowName":"Adhoc",
"processName":"LOAD_CURRENT_BUSINESS_DATE_ADHOC",
"requestParameters":"jobParams.ETL_BUSINESS_DATE_JOB=2017-12-31"
}
4. Execute the ad hoc seeding batch processes depending on which files have been
provided. Sample Postman messages:
{
"cycleName":"Adhoc",
"flowName":"Adhoc",
"processName":"SEED_CSV_W_RTL_PRICE_IT_LC_DY_F_PROCESS_ADHOC"
}
{
"cycleName": "Adhoc",
"flowName":"Adhoc",
"processName":"SEED_CSV_W_RTL_NCOST_IT_LC_DY_F_PROCESS_ADHOC"
}
{
"cycleName":"Adhoc",
"flowName":"Adhoc",
"processName":"SEED_CSV_W_RTL_BCOST_IT_LC_DY_F_PROCESS_ADHOC"
}
{
"cycleName":"Adhoc",
"flowName":"Adhoc",
"processName":"SEED_CSV_W_RTL_INV_IT_LC_DY_F_PROCESS_ADHOC"
}
{
"cycleName":"Adhoc",
"flowName":"Adhoc",
"processName":"SEED_CSV_W_RTL_INVU_IT_LC_DY_F_PROCESS_ADHOC"
}
{
"cycleName":"Adhoc",
"flowName":"Adhoc",
"processName":"SEED_CSV_W_RTL_PO_ONORD_IT_LC_DY_F_PROCESS_ADHOC"
}
Once all initial seeding is complete and data has been validated, you are ready to perform a
regular batch run. Provide the data files expected for a full batch, such as RAP_DATA.zip or
RI_RMS_DATA.zip for foundation data, RI_MFP_DATA.zip for externally-sourced planning data
(for RI reporting and AI Foundation forecasting), and any AI Foundation Cloud Services files
using the ORASE_WEEKLY.zip files. If you are sourcing daily data from RMFCS then you need to
ensure that the RDE batch flow is configured to run nightly along with the RAP batch schedule.
Batch dependencies between RDE and RI should be checked and enabled, if they are not
already turned on.
From this point on, the nightly batch takes care of advancing the business date and loading all
files, assuming that you want the first load of nightly data to occur the day after seeding. The
following diagram summarizes a potential set of dates and activities using the history and
seeding steps described in this chapter:
Note:
The sequential nature of this flow of events must be followed for positional facts (for
example, inventory) but not for transactional facts (such as sales). Transactional data
supports posting for dates other than what the current system date is, so you can
choose to load sales history at any point in this process.
Note:
At this time, incremental product and location loads are supported when using RDE
for integration or when using legacy DAT files. CSV files should be provided as full
snapshots.
As part of nightly batch uploads, also ensure that the parameter file RA_SRC_CURR_PARAM_G.dat
is included in each ZIP package, and that it is being automatically updated with the current
business date for that set of files. This file is used for business date validation so incorrect files
are not processed. This file will help Oracle Support identify the current business date of a
particular set of files if they need to intervene in the batch run or retrieve files from the archives
for past dates. Refer to the System Parameters File section for file format details.
In summary, here are the main steps that must be completed to move from history loads to
nightly batches:
1. All files must be bundled into a supported ZIP package like RAP_DATA.zip for the nightly
uploads, and this process should be automated to occur every night.
2. Include the system parameter file RA_SRC_CURR_PARAM_G.dat in each nightly upload ZIP
and automate the setting of the vdate parameter in that file (not applicable if RDE jobs are
used).
3. Sync POM schedules with the Customer Module configuration using the Sync with MDF
button in the Batch Administration screen, restart the POM schedules to reflect the
changes, and then review the enabled/disabled jobs to ensure the necessary data will be
processed in the batch.
4. Move the data warehouse ETL business date up to the date one day before the current
nightly load (using LOAD_CURRENT_BUSINESS_DATE_ADHOC). The nightly load takes care of
advancing the date from this point forward.
5. Close and re-open the batch schedules in POM as needed to align the POM business date
with the date used in the data (all POM schedules should be open for the current business
date before running the nightly batch).
6. Schedule the start time from the Scheduler Administration screen > RI schedule >
Nightly tab. Enable it and set a start time. Restart your schedule again to pick up the new
start time.
Because AI Foundation Cloud Services ad hoc procedures have been exposed using only one
job in POM, they are not triggered like AIF DATA procedures. AI Foundation programs accept a
number of single-character codes representing different steps in the data loading process.
These codes can be provided directly in POM by editing the Parameters of the job in the Batch
Monitoring screen, then executing the job through the user interface.
For example, this string of parameters will move all dimension data from the data warehouse to
AI Foundation:
Additional parameters are available when moving periods of historical data, such as inventory
and sales:
A typical workflow for moving core foundation data into AI Foundation is:
1. Load the core foundation files (like Calendar, Product, and Organization) using the AIF
DATA schedule jobs.
2. Use the RSE_MASTER_ADHOC_PROCESS to move those same datasets to AI Foundation Apps,
providing specific flag values to only run the needed steps. The standard set of first-time
flags are -pldg, which loads the core hierarchies required by all the applications.
3. Load some of your history files for Sales using the AIF DATA jobs to validate the inputs. For
example, load 1-3 months of sales for all products or a year of sales for only one
department.
4. Load the same range of sales to AI Foundation using the sales steps in the master process
with optional from/to date parameters. The flags to use for this are -xwa, which loads the
transaction table RSE_SLS_TXN as well as all sales aggregates which are shared across AIF.
5. Repeat the previous two steps until all sales data is loaded into both RI and AI Foundation.
Performing the process iteratively provides you early opportunities to find issues in the
data before you’ve loaded everything, but it is not required. You can load all the data into
AI Foundation at one time.
Follow the same general flow for the other application-specific, ad hoc flows into the AI
Foundation modules. For a complete list of parameters in each program, refer to the AI
Foundation Operations Guide.
When loading hierarchies, it is possible to have data issues on the first run due to missing or
incomplete records from the customer. You may get an error like the following:
The INDEX create statement tells you the name of the table and the columns that were
attempting to be indexed. Querying that table is what is required to see what is duplicated or
invalid in the source data. Because these tables are created dynamically when the job is run,
you will need to first grant access to it using a procedure like below in IW:
The value passed into the procedure should be everything after the TMP$ in the table name (not
the index name). The procedure also supports two other optional parameters to be used
instead of or in addition to the prefix:
• p_suffix – Provide the ending suffix of a temporary table, usually a number like 00001101,
to grant access to all tables with that suffix
• p_purge_flg - Purge flag (Y/N) which indicates to drop temporary tables for a given run
Once this is executed for a given prefix, a query like this can retrieve the data causing the
failure:
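As a hypothetical example, if the failed index was defined on two columns of a TMP$ table, a duplicate check such as the following would expose the offending rows. The table and column names below are placeholders; substitute the names from the CREATE INDEX statement in the error message:

-- Placeholder table/column names taken from the failed index definition
SELECT node_id, level_num, COUNT(*) AS dup_count
FROM tmp$rse_prod_hier_00001101
GROUP BY node_id, level_num
HAVING COUNT(*) > 1;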
For location hierarchy data the source of the issue will most commonly come from the
W_INT_ORG_DH table. For product hierarchy, it could be W_PROD_CAT_DH. These are the hierarchy
tables populated in the data warehouse by your foundation data loads.
Process Overview
Table 3-7 Extracts for Planning
Note:
While the AIF DATA job exports the entire calendar, PDS will only import 5 years around the
RPAS_TODAY date (current year +/- 2 years).
The PDS jobs are linked with several ad hoc processes in POM, providing you with the ability
to extract specific datasets on-demand as you progress with history and initial data loads. The
table below summarizes the ad hoc processes, which can be called using the standard
methods such as cURL or Postman.
Most of the PDS fact jobs leverage the configuration table C_SOURCE_CDC to track the data that
has been extracted in each run. On the first run of an incremental job in
LOAD_PDS_FACT_PROCESS_ADHOC, the job extracts all available data in a single run. From that
point forwards, the extract incrementally loads only the data that has been added or modified
since the last extract, based on W_UPDATE_DT columns in the source tables. There are two
exceptions to this incremental process: Inventory and On Order interfaces. The normal
incremental jobs for these two interfaces will always extract the latest day’s data only, because
they are positional facts that send the full snapshot of current positions to PDS each time they
run.
To move inventory history prior to the current day, you must use the initial inventory extract to
PDS (LOAD_PDS_FACT_INITIAL_PROCESS_ADHOC). It requires manually entering the start/end
dates to extract, so you must update C_SOURCE_CDC from the Control & Tactical Center for the
inventory table record. LAST_MIN_DATE is the start of the history you wish to send, and
LAST_MAX_DATE is the final date of history. For example, if you loaded one year of inventory, you
might set LAST_MIN_DATE to 04-JUN-22 and LAST_MAX_DATE to 10-JUN-23. Make sure that the
timestamps on the values entered are 00:00:00 when saved to the database, otherwise the
comparison between these values and your input data may not align.
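A quick way to confirm the timestamps is a query such as the one below; both time components should show 00:00:00 for the row you edited:

SELECT last_min_date,
       last_max_date,
       TO_CHAR(last_min_date, 'HH24:MI:SS') AS min_time,
       TO_CHAR(last_max_date, 'HH24:MI:SS') AS max_time
FROM c_source_cdc;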
For all other jobs, the extract dates are written automatically to C_SOURCE_CDC alongside the
extract table name after each execution and can be overwritten as needed when doing multiple
loads or re-running for the same time period. If you run the same process more than once, use
C_LOAD_DATES_CLEANUP_ADHOC to reset the run statuses before the next run, then edit
C_SOURCE_CDC to change the minimum and maximum dates that you are pulling data for.
The following configuration tables are involved in PDS extracts:
• C_SOURCE_CDC – Configuration and tracking table that shows the interfaces supported for data warehouse to Planning integration and the currently processed date ranges.
• C_LOAD_DATES – Tracks the execution of jobs and the most recent run status of each job. Prevents running the same job repeatedly if it was already successful, unless you first erase the records from this table.
• RAP_INTF_CONFIG – Configuration and tracking table for integration ETL programs between all RAP modules. Contains their most recent status, run ID, and data retention policy.
• RAP_INTF_RUN_STATUS – RAP integration run history and statuses.
• RAP_LOG_MSG – RAP integration logging table; the specific contents will vary depending on the program and logging level.
After the planning data has been extracted from the data warehouse to PDS staging tables
(W_PDS_*), the Planning applications use the same programs to extract both full and
incremental data for each interface. You can run the dimension and fact loads for planning from
the Online Administration (OAT) tasks, or use the RPASCE schedule in POM. Refer to the
relevant implementation guides for MFP, AP, or IPO for details on these processes.
Usage Examples
The following examples show how to leverage the PDS extract processes to move data from
the data warehouse tables to the PDS staging tables, where the data can be picked up by the
Planning applications.
2. Open the C_SOURCE_CDC table in Manage System Configurations and locate the row for
W_RTL_INV_IT_LC_WK_A. Edit the values in the LAST_MIN_DATE and LAST_MAX_DATE columns
to fully encompass your range of historical dates in the inventory history.
3. Enable the job W_PDS_INV_IT_LC_WK_A_INITIAL_JOB in the ad hoc process
LOAD_PDS_FACT_INITIAL_PROCESS_ADHOC and execute the process.
4. If the job fails with error code “ORA-01403: no data found,” it generally means that the
dates in C_SOURCE_CDC are not set or do not align with your historical data. Update the
dates and re-run the job.
5. Verify that data has been moved successfully to the target table W_PDS_INV_IT_LC_WK_A.
To review the interface configuration details for this interface, the following query can be
executed:
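For example, a minimal query of this kind, using the table and column names referenced in this section, is:

SELECT publish_app_code, customer_publishable_flg
FROM rap_intf_cfg
WHERE intf_name = 'RDF_FCST_PARM_CAL_EXP';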
The PUBLISH_APP_CODE and CUSTOMER_PUBLISHABLE_FLG values are important to note from the
result of this query. First, to publish data to an interface, the publishable flag must be Y for any
interface that IW will be populating. If there is a need to publish data for an interface not
configured for custom extensions, an SR is required requesting the following update, which will
allow publishing data to that interface:
UPDATE rap_intf_cfg
SET customer_publishable_flg = 'Y'
WHERE intf_name = 'RDF_FCST_PARM_CAL_EXP';

COMMIT;
Once the above has been done, the implementer can write code in IW that will publish data to
this interface, so that the subscribing application modules can consume it. A sample of how to
do this follows:
DECLARE
  -- The following is used to capture the ID for a published interface dataset
  v_run_id NUMBER;
  -- The following is used to allow isolated writing to the interface
  v_partition_name VARCHAR2(30);
BEGIN
  rap_intf_support_util.prep_intf_run('PDS', 'RDF_FCST_PARM_CAL_EXP',
    v_run_id, v_partition_name);

  -- NOTE: the .... is to be replaced with the actual list of columns and an
  -- appropriate SELECT statement to provide the values to insert into the table.
  -- Of importance, the run_id column must be populated with the value that was
  -- returned into the v_run_id variable above.

  -- After the data has been populated into the above table, it can be made
  -- available for retrieval by consuming application modules by the following.
  -- Note, the 'PDS' value shown below is fixed according to what was obtained
  -- from the query of RAP_INTF_CFG.
  rap_intf_support_util.publish_data('PDS', v_run_id,
    'RDF_FCST_PARM_CAL_EXP', 'Ready');
END;
/
Once the above steps have been completed, the data will be ready to be consumed by the
application modules that use this interface.
Note:
If an attempt is made to call any of the routines inside RAP_INTF_SUPPORT_UTIL for an
interface where the CUSTOMER_PUBLISHABLE_FLG is not set to Y, then an error will be
provided indicating that "No Interfaces for [ <values they provided to the procedure> ]
are allowed/available to be published from custom code." If this occurs, follow the
instructions described above to submit an SR asking for permission to publish data to
that interface.
Note:
If the run type is active, you will only be able to view the parameters. To edit the
parameters, the run type must be inactive.
To activate the run type and enable the auto-approve, select a run type in the table and click
the corresponding buttons above the table. Lastly, to map the run type to MFP, go to the Map
train stop and click the + icon to create a new mapping.
When configuring forecasts for the MFP base implementation, the following list of forecast runs
may be required, and you will want to configure and test each run type following the general
workflow above. Additional runs can be added to satisfy your MFP implementation
requirements.
Note:
The “Channel” level in MFP is often referred to as “Area” level in RI and AI
Foundation, so be sure to select the correct levels which align to your hierarchy.
Implementation Flow Example
1. Integrate your foundation data (core dimensions and fact history) using either RMFCS
direct loads or object storage file uploads and run the load programs following the steps
described earlier in this chapter.
2. Move foundation data to the Data Exchange (RDX) using the
LOAD_PDS_DIMENSION_PROCESS_ADHOC and LOAD_PDS_FACT_PROCESS_ADHOC processes.
3. AI Foundation and Forecast Setup Flow
b. Set up run types and execute test runs in the Forecasting module of AI Foundation,
then approve and map those runs to IPOCS-Demand Forecasting. Set up Flex Groups
in AIF to be used with the forecasts in IPOCS-Demand Forecasting.
c. Export AIF setup data for IPOCS-Demand Forecasting to the Data Exchange (RDX)
using the jobs below (MFP and AP do not require most of these jobs; instead, you
would simply run RSE_FCST_EXPORT_ADHOC_PROCESS jobs for MFP/AP exports):
• RSE_FCST_RUN_TYPE_CONF_EXPORT_ADHOC_PROCESS
• RSE_PROMO_OFFER_EXPORT_ADHOC_PROCESS
• RSE_FCST_EXPORT_ADHOC_PROCESS (enabling the IPOCS-Demand Forecasting job
only)
4. IPOCS-Demand Forecasting Setup Flow
a. Import the updated IPOCS-Demand Forecasting parameters to AIF using the jobs:
• RSE_RDX_FCST_PARAM_ADHOC_PROCESS
• RSE_FCST_RDX_NEW_ITEM_ENABLE_ADHOC_PROCESS
• RSE_LIKE_RDX_RSE_ADHOC_PROCESS
• PMO_EVENT_IND_RDF_ADHOC_PROCESS
b. Return to the AIF Forecasting module and generate new forecasts using the IPOCS-
Demand Forecasting parameters. Create new runs under the same run type as before,
generate the forecast(s), approve the demand parameters, and click Approve Base
Demand and Forecast. Ensure you activate the run type from the Manage Forecast
Configurations screen and enable auto-approve (if starting nightly runs).
c. Export the forecasts using the RSE_FCST_EXPORT_ADHOC_PROCESS (you can directly run
the IPOCS-Demand Forecasting job RSE_RDF_FCST_EXPORT_ADHOC_JOB),
RSE_FCST_RUN_TYPE_CONF_EXPORT_ADHOC_PROCESS, and
RSE_PROMO_OFFER_SALES_EXPORT_ADHOC_PROCESS.
d. Import the forecasts to IPOCS-Demand Forecasting. Also re-run any of the previous
IPOCS-Demand Forecasting steps if any other data has changed since the last run.
6. Forecast Approval Flow
Generating Forecasts for AP
Note:
Although AP has an in-season plan, it still leverages Auto ES as the base forecasting
method.
Bayesian (which includes plan data in the forecast) is set up as the estimation method for the
run. This is also why Store Sales is set as the data source for all runs, because all runs have
the ability to include plan data based on the estimation methods used (in addition to store
sales).
Loading Plans to RI
If you are implementing a Planning module and need to generate plan-influenced forecasts
(such as the AP in-season forecast) then you will need to first integrate your plans into RI
(which acts as the singular data warehouse for this data both when the plans come from
Oracle Retail solutions and when they come from outside of Oracle).
If you are pushing the plans from MFP or AP directly, then you will enable a set of POM jobs to
copy plan outputs directly to RI tables. You may also use an ad hoc process to move the plan
data on-demand during the implementation.
In all of the jobs above, the target table name is in the job name (such as
W_RTL_PLAN2_PROD2_LC2_T2_FS as the target for the PLAN2 extract). Once the data is moved to
the staging layer in RI, the source-agnostic fact load jobs will import the data
(W_RTL_PLAN2_PROD2_LC2_T2_FS is loaded to W_RTL_PLAN2_PROD2_LC2_T2_F, and so on). The
fact tables in RI are then available for AI Foundation jobs to import them as needed for
forecasting usage.
The plan tables in RI have configurable levels based on your Planning implementation. The
default levels are aligned to the standard outputs of MFP and AP if you do not customize or
extend them. If you have modified the planning solution to operate at different levels, then you
must also reconfigure the RI interfaces to match. This includes the use of flexible alternate
hierarchy levels (for example, the PRODUCT_ALT.csv interface) which require custom
configuration changes to the PLAN interfaces before you can bring that data back to RI and AIF.
These configurations are in C_ODI_PARAM_VW, accessed from the Control & Tactical Center in AI
Foundation. For complete details on the plan configurations in RI, refer to the Retail Insights
Implementation Guide.
Loading Forecasts to RI
If you are implementing IPOCS-Demand Forecasting and RI, then you may want to integrate
your approved forecast with RI for reporting. The same RI interfaces are used both for IPOCS-
Demand Forecasting forecasts and external, non-Oracle forecasts.
If you are pushing the forecast from IPOCS-Demand Forecasting, use a set of POM jobs to
copy forecast outputs directly to RI tables. You may also use an ad hoc process to move the
forecast data on-demand during the implementation. The process names for the two interfaces
are LOAD_PLANFC1_DATA_ADHOC and LOAD_PLANFC2_DATA_ADHOC. These processes should be
scheduled to run automatically outside of batch, as this is the preferred way to integrate
forecast data (because the volume can be large and you do not want it to impact the nightly
batch cycle times).
You must also map the desired forecast run type to the RI application code from the AI
Foundation user interface. Once this is done, the integration jobs can detect data to extract.
Only one run type can be mapped to RI at this time; use a SKU/store/week run type, unless
you have reconfigured the RI interfaces for some other intersection.
In all of the jobs above, the target table name is in the job name (such as
W_RTL_PLANFC_PROD1_LC1_T1_FS as the target for the PLANFC1 extract). Once the data is moved
to the staging layer in RI, the source-agnostic fact load jobs will import the data
(W_RTL_PLANFC_PROD1_LC1_T1_FS is loaded to W_RTL_PLANFC_PROD1_LC1_T1_F, and so on).
The forecast tables in RI have configurable levels based on your implementation. The default
levels are aligned to the standard outputs of AI Foundation and IPOCS-Demand Forecasting
(item/location/week) if you do not customize or extend them. If you have modified the IPOCS-
Demand Forecasting solution to operate at different levels, then you must also reconfigure the
RI interfaces to match. These configurations are in C_ODI_PARAM_VW, accessed from the
Control & Tactical Center in AI Foundation. For complete details on the forecast configurations
in RI, refer to the Retail Insights Implementation Guide.
Loading Aggregate History Data
You must configure the data intersections for these tables before you can use them, as each
table can only have one intersection defined. The parameters are in the C_ODI_PARAM_VW table
in the Control & Tactical Center Manage System Configurations screen. The parameters for
each interface are listed below.
In the current release, the ATTR and SUPP parameters should remain as ALL; other options are
not supported when integrating the data throughout the platform. You can configure the PROD,
ORG, and CAL levels for each interface to match the intersection of data being loaded there.
Valid parameter values for each type are listed below.
Before using the interfaces, you must also partition them using either day- or week-level
partitioning (depending on the data intersections specified above). Partitioning is controlled
using two tables accessible from the Control & Tactical Center: C_MODULE_ARTIFACT and
C_MODULE_EXACT_TABLE.
In C_MODULE_ARTIFACT, locate the rows where the module code starts with FACT (such as
FACT1) and set them to both ACTIVE_FLG=Y and PARTITION_FLG=Y.
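The flags should be edited through the Control & Tactical Center, but a read-only check such as the following confirms the result (the MODULE_CODE column name is an assumption based on the wording above):

SELECT module_code, active_flg, partition_flg
FROM c_module_artifact
WHERE module_code LIKE 'FACT%';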
until the data is loaded, you will need to instruct the system on how to use your aggregate
data.
The measure metadata will be stored in the AIF table RSE_MD_CDA. This table is loaded
programmatically using an ad hoc job in the RSP schedule named
RSE_AGGREGATE_METADATA_LOAD_ADHOC_JOB. The program will detect the columns with data and
add entries for each measure with a generic name assigned. Once the program is complete,
you can modify the UI display name to be something meaningful to end-users from the Control
& Tactical Center.
The measures themselves will first be loaded into RSE_PR_LC_CAL_CDA, which is the staging
area in AIF to prepare the measures for the applications. After the metadata is configured, you
may run another ad hoc job in the RSP schedule named
RSE_AGGREGATE_ACTUALS_LOAD_ADHOC_JOB. This will populate the columns in
RSE_PR_LC_CAL_CDA based on their metadata.
Lastly, you must map the measure data into the application tables that require access to
aggregate facts. This is performed using the configuration table RSE_MD_FACT_COLUMN_MAP,
which is accessible for inserts and updates in the Control & Tactical Center. Possible
configuration options supported by the AIF applications will be listed in their respective
implementation guides, but a sample set of values is provided below for a sales and inventory
measure mapping, which are the most common use cases:
Separate POM jobs are included in the RSP schedule to move the data from the CDA tables to
their final target tables. The jobs will come in pairs and have job names ending in
AGGR_MEAS_SETUP_ADHOC_JOB followed by AGGR_MEAS_PROCESS_ADHOC_JOB. For example, to
load the sales table in the sample mapping, use
RSE_SLS_PR_LC_WK_AGGR_MEAS_SETUP_ADHOC_JOB and
RSE_SLS_PR_LC_WK_AGGR_MEAS_PROCESS_ADHOC_JOB. For additional details on the individual AIF
application usage of these mappings and jobs, refer to the AIF Implementation Guide.
If you need in-season forecasts, then you must plan to configure MFP or AP plan exports to RI
as part of your planning implementation. You must populate the same columns on the plan
exports that you are using on the FACT1-4 interfaces for actuals. When doing in-season
forecasts with aggregated data, it expects the same column in a PLAN and FACT table at the
same intersection so that it can load the associated plan measure for the actuals and do a
plan-influenced forecast run. For example, if you are populating the SLS_QTY column on the
FACT1 interface, then you must also send an SLS_QTY value on the PLAN1 interface or else it
won’t be used in the plan-influenced forecast.
Migrate Data Between Environments
When requesting the activity, you may specify if you need to migrate the entire database (both
data and structures) or only the data. The first time doing this process must be a full migration
of data and structures to synchronize the source and target environments. It is currently
recommended to begin your implementation in the Production environment to avoid needing a
lift-and-shift in order to go live. Around the time of your go-live date, you can request a lift-and-
shift be done into your non-production environments to synchronize the data for future use.
Note:
The product versions between the source and target must be aligned. It is the project
team’s responsibility to plan for appropriate upgrades and lift-and-shift activities such
that this will be true.
4
Integration with Merchandising
This chapter describes the various integrations between Retail Merchandising Foundation
Cloud Services (RMFCS) and the Retail Analytics and Planning platform. RMFCS can be used
as the primary source of foundation data for RAP, and pre-built integrations and batch
programs exist to move data between the cloud applications. You may also use an on-premise
installation of the Retail Merchandising System (RMS), in which case you must establish the
integration to the RAP cloud following the guidance in this chapter.
Architecture Overview
In prior releases, the integration between RMFCS and RI/AIF used a tool named the Retail
Data Extractor (RDE) to generate data files for RI/AIF to consume. These programs have been
fully integrated into the RAP batch flow and directly insert the data from RMFCS into RAP. The
integration uses an instance of Oracle Golden Gate to copy RMFCS tables to the local
database, where the Data Extractor jobs can source all required data and transform it for use
in RAP. If you are familiar with the prior RDE architecture, then you need to be aware of the
following major changes:
1. RDE_DM01 database objects are now located in the RADM01 schema. RDE_RMS01 database
objects are now in the RABE01USER schema. The RABE01USER now has access to extract
records from RMFCS through the Golden Gate replicated schema.
2. The C_ODI_PARAM configuration tables have been merged and all RDE configurations are
accessed from the Control & Tactical Center.
3. File-based integration has been removed. All data is moved directly between database
source and target tables with no option to produce flat files.
4. RDE’s batch schedule has been merged with RI’s schedule in POM. Jobs have been
renamed and assigned modules such that it is easy to identify and disable/enable RDE
jobs as needed.
5. Customer Engagement (CE) integration jobs have been included in RDE for when CE is
set up to replicate data to RAP. File-based integration is no longer required.
6. All jobs relating to file extraction, ZIP file creation, or data cleanup in RMS have been
removed.
Because RDE jobs are now a part of the RAP nightly batch cycle, they have been assigned
modules in the Customer Modules framework (accessed using Retail Home) and can be
enabled or disabled in bulk depending on your use-cases.
• RDE_RMS – These are the RDE components in relation to RMS
• RDE_CE – These are the RDE components in relation to CE
If you are not integrating data from RMFCS or CE then you will need to disable these modules
to prevent them from running in your nightly batch cycles. RI and AIF jobs are programmed to
start automatically after the RDE jobs complete, but if the RDE jobs are disabled then the
dependencies will be ignored.
Ad Hoc Processes
There are several standalone ad hoc processes available for executing the RDE programs
outside of a normal batch cycle. These processes can be used to integrate dimension or fact
data to RAP during initial implementation, or simply to run the extracts and validate the outputs
without actually loading them into the platform. The table below summarizes these processes
and their usage.
Batch Dependency Setup (Gen 2 Architecture)
automatically, but you will need to manually enable/disable the RMFCS interschedule
dependencies based on your needs.
You should start with all dependencies enabled, and only disable them if you are trying to run
the batch cycle out of sync from the RMFCS batch. The inter-schedule dependencies fall into
two categories: discrete jobs that perform some check on RMFCS data, and POM
dependencies that cross-reference another RMFCS batch program. The first category of jobs
check the availability of data from the RMFCS signaling table called RMS_RDE_BATCH_STATUS.
The RDE jobs that check the signaling table in RMFCS are:
• RDE_INTERSCHED_CHECK_RESAEXTRACT_PROCESS / RDE_INTERSCHED_CHECK_RESAEXTRACT_JOB
- Checks the completion of the RESA_EXTRACT job in RMFCS
• RDE_INTERSCHED_CHECK_INVSNAPSHOT_PROCESS / RDE_INTERSCHED_CHECK_INVSNAPSHOT_JOB
- Checks the completion of the INVENTORY_SNAPSHOT job that signifies that the
ITEM_LOC_SOH_EOD table in RMFCS is now available for the RDE extract
• RDE_INTERSCHED_CHECK_STAGETRANDATA_PROCESS /
RDE_INTERSCHED_CHECK_STAGETRANDATA_JOB - Checks the completion of the
STAGE_TRAN_DATA job that signifies whether the IF_TRAN_DATA table in RMFCS is now
available for the RDE extract
If the RDE jobs run in parallel with the RMFCS batch, then all these jobs must be enabled. If
you are running RDE jobs outside the RMFCS batch, then these jobs must be disabled during
those runs. The jobs will wait indefinitely for a signal from the RMFCS batch, which they will
never receive if you are running RDE jobs independently.
The second category of dependencies is found on the RDE jobs themselves when you click
on a job to view its details in POM or click the Interschedule Dependencies link in the Batch
Monitoring UI. These jobs are listed below, along with the RMFCS jobs they depend on. You
must verify these are enabled before trying to run RDE batches (unless the associated RMFCS
job is disabled, in which case the RDE dependency can be turned off as well). If any of these
are disabled, you will need to use Batch Administration to enable them by locating each job
and clicking into the details to enable all dependencies for it.
• RDE_SETUP_INCRMNTL_DEALACT_PROCESS / RDE_SETUP_INCRMNTL_DEALACT_JOB – This RDE
job waits for the following MFCS jobs to complete:
– RPM_PRICE_EVENT_EXECUTION_PROCESS / RPM_PRICE_EVENT_EXECUTION_JOB
• RDE_EXTRACT_DIM_P5_REPLDAYSDE_PROCESS / RDE_EXTRACT_DIM_P5_REPLDAYSDE_JOB – This
RDE job waits for the following MFCS jobs to complete:
– REPLENISHMENT_PROCESS / RPLEXT_JOB
• RDE_EXTRACT_DIM_P3_PRDITMATTRSDE_PROCESS / RDE_EXTRACT_DIM_P3_PRDITMATTRSDE_JOB
– This RDE job waits for the following MFCS jobs to complete:
– REPLENISHMENT_PROCESS / RPLEXT_JOB
• CSTISLDSDE_PROCESS / CSTISLDSDE_JOB – This RDE job waits for the following MFCS jobs
to complete:
– ALLOCBT_PROCESS / ALLOCBT_JOB
– BATCH_RFMCURRCONV_PROCESS / BATCH_RFMCURRCONV_JOB
– COSTCOMPUPD_ELCEXPRG_PROCESS / ELCEXCPRG_JOB
– EDIDLCON_PROCESS / EDIDLCON_JOB
– EXPORT_STG_PURGE_PROCESS / EXPORT_STG_PURGE_JOB
– EDIUPAVL_PROCESS / EDIUPAVL_JOB
– LIKESTOREBATCH_PROCESS / LIKESTOREBATCH_JOB
– POSCDNLD_PROCESS / POSCDNLD_POST_JOB
– REPLINDBATCH_PROCESS / REPLINDBATCH_JOB
– SALESPROCESS_PROCESS / SALESUPLOADARCH_JOB
– STKVAR_PROCESS / STKVAR_JOB
• RDEBATCH_INITIAL_START_PROCESS / RDEBATCH_INITIAL_START_MILEMARKER_JOB – This
RDE job waits for the RMFCS job STOP_RIB_ADAPTOR_INV_PROCESS /
STOP_RIB_ADAPTOR_INV_JOB to complete.
If you cannot see any dependencies in the POM UI, then your POM system options may have
them disabled. Make sure to check the System Configuration for AIF DATA and ensure the
dependency options are set to Enabled.
Batch Job Setup (Gen 2 Architecture)
– Disable the ZIP_FILES modules unless you have a need for one of them for non-Merchandising data.
• (For RI customers) Under the RI > RCI section, enable or disable the module depending
on your plans to use Retail Insights customer-related functionality. Disable the RI > RCI >
CONTROLFILES and SICONTROLFILES modules, because RDE jobs replace this
functionality. The BATCH modules can be left enabled if you are unsure whether these
modules are needed.
• (For RI customers) Under the RI > RMI section, enable or disable the module depending
on your plans to use Retail Insights merchandising-related functionality. Disable the RI >
RMI > CONTROLFILES and SICONTROLFILES modules, because RDE jobs replace this
functionality. The BATCH modules can be left enabled if you are unsure whether these
modules are needed.
Once all modules are configured, go back into the POM Batch Administration UI and perform a
Sync with MDF action on the AIF DATA schedule. This is a one-time activity to streamline the
POM schedule setup, after which you will want to perform a review of the POM schedule and
refine the enabled/disabled jobs further to cover any specific file or data requirements.
Even if you are not syncing with MDF at this time, you must still perform the Retail Home setup
because the CONTROLFILES and SICONTROLFILES modules are used implicitly by AIF
DATA schedule programs to know which data files to expect in the batch runs. Any
misconfiguration could lead to the program DAT_FILE_VALIDATE_JOB failing or running for
several hours while waiting for data files you didn’t provide. Similarly, if you misconfigure the
ZIP_FILES modules, then you will encounter errors/delays in the ZIP_FILE_WAIT_JOB as it
looks for the expected ZIP files.
– RDE_CE_CUSTOMER
– RDE_CE_CUSTSEG
– RDE_CE_LOYALTY
• If you opt to integrate the ORCE customer data (meaning the ORCE jobs, those with RDE_EXTRACT_CE_* names, are enabled), the following AIF DATA jobs should be disabled because the customer data will be populated directly without an input file:
– W_RTL_CUST_DEDUP_DS_COPY_JOB
– W_RTL_CUST_DEDUP_DS_STG_JOB
– W_RTL_CUST_LYL_AWD_TRX_DY_FS_COPY_JOB
– W_RTL_CUST_LYL_AWD_TRX_DY_FS_STG_JOB
– W_RTL_CUST_LYL_TRX_LC_DY_FS_STG_JOB
– W_RTL_CUST_LYL_TRX_LC_DY_FS_COPY_JOB
– W_RTL_CUST_LYL_ACCT_DS_COPY_JOB
– W_RTL_CUST_LYL_PROG_DS_COPY_JOB
– W_RTL_CUSTSEG_DS_COPY_JOB
– W_RTL_CUSTSEG_DS_STG_JOB
– W_RTL_CUSTSEG_DS_ORASE_JOB
– W_RTL_CUST_CUSTSEG_DS_COPY_JOB
– W_RTL_CUST_CUSTSEG_DS_STG_JOB
– W_RTL_CUST_CUSTSEG_DS_ORASE_JOB
– W_RTL_CUSTSEG_ATTR_DS_COPY_JOB
– W_RTL_CUSTSEG_ATTR_DS_STG_JOB
– W_RTL_CUST_HOUSEHOLD_DS_COPY_JOB
– W_RTL_CUST_HOUSEHOLD_DS_STG_JOB
– W_RTL_CUST_ADDRESS_DS_COPY_JOB
– W_RTL_CUST_ADDRESS_DS_STG_JOB
– W_PARTY_PER_DS_COPY_JOB
– W_PARTY_PER_DS_STG_JOB
– W_RTL_PARTY_PER_ATTR_DS_COPY_JOB
– W_RTL_PARTY_PER_ATTR_DS_STG_JOB
– W_HOUSEHOLD_DS_COPY_JOB
– W_HOUSEHOLD_DS_STG_JOB
• Disable most of the AIF DATA copy jobs (those with *_COPY_JOB) except the ones needed for non-RMFCS sources. These jobs pick up flat files from object storage, which is not needed because data is loaded directly into the staging tables and no flat files are expected to arrive for processing. Most of these jobs are under the following modules:
– RI_DAT_STAGE
– RSP_DAT_STAGE
• Disable most of the AIF DATA stage jobs (those with *_STG_JOB) except the ones needed for non-RMFCS sources. These jobs should be disabled because they read from flat files, which are not available when using this integration. Most of these jobs are under the following modules:
– RI_DAT_STAGE
– RSP_DAT_STAGE
• Disable the RAP Simplified Interface jobs (those with SI_*, COPY_SI_*, and STG_SI_* at
the start of the name) as RMFCS will be the source of data to feed RI. Most of these jobs
are under the modules with the patterns below:
– RI_SI*
– RSP_SI*
• Disable the AIF DATA program W_PROD_CAT_DH_CLOSE_JOB, which is used to close unused
hierarchy levels when non-Merchandising incremental hierarchy loads are used in RAP. It
must not run with Merchandising, because the product hierarchy data is already being
managed by RDE extracts.
• Disable the AIF DATA programs ETL_REFRESH_JOB and BATCH_START_NOTIFICATION_JOB
(specifically the versions belonging to process SIL_INITIAL_PROCESS) because these are
redundant with jobs included in the RDE schedule.
• If you are not providing any flat file uploads and using only RMFCS data, you may disable
the jobs in CONTROL_FILE_VALIDATION_PROCESS, which will prevent any data files from
being processed (and potentially overwriting the RMFCS data).
• Disable the job named TRUNCATE_STAGE_TABLES_JOB, which is used only for data file loads
and cannot be run when RDE jobs are used for direct integration. A similar job named
RDE_TRUNCATE_STAGE_TABLES_JOB should remain enabled as this does apply to RDE job
execution.
Batch Setup for RMS On-Premise
for the Retail Analytics and Planning cloud services are made available in this release,
replacing the current SFTP process
3. Check that the FTS configuration file ra_objstore.cfg is available in RDE's $MMHOME/etc
directory. The FTS configuration file contains the following variable set-up used for the
Object Storage:
• RA_FTS_OBJSTORE_IND – This will be set to Y so that FTS will be enabled
• RA_FTS_OBJSTORE_URL – This is the Base URL
• RA_FTS_OBJSTORE_ENVNAMESPACE – This is the Tenant
• RA_FTS_OBJSTORE_IDCS_URL – This is the IDCS URL appended with /oauth2/v1/token
at the end
• RA_FTS_OBJSTORE_IDCS_CLIENTID – This is the Client ID
• RA_FTS_OBJSTORE_IDCS_CLIENTSECRET – This is the Client ID Secret
• RA_FTS_OBJSTORE_IDCS_SCOPE – This is the IDCS Scope
• RI_OBJSTORE_UPLOAD_PREFIX – This is the Storage Prefix and is set to ris/incoming, pointing to the correct Object Storage directory for RI input files
Refer to the File Transfer Services section of this document for instructions on how to get
the values for each of the variables above.
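For reference, a minimal sketch of what ra_objstore.cfg might contain is shown below. The simple name=value layout and all of the values are placeholders for illustration only; use the file delivered with your RDE installation and substitute the values obtained through the File Transfer Services section of this document.

RA_FTS_OBJSTORE_IND=Y
RA_FTS_OBJSTORE_URL=https://<object-storage-base-url>
RA_FTS_OBJSTORE_ENVNAMESPACE=<tenant-namespace>
RA_FTS_OBJSTORE_IDCS_URL=https://<idcs-host>/oauth2/v1/token
RA_FTS_OBJSTORE_IDCS_CLIENTID=<client-id>
RA_FTS_OBJSTORE_IDCS_CLIENTSECRET=<client-secret>
RA_FTS_OBJSTORE_IDCS_SCOPE=<idcs-scope>
RI_OBJSTORE_UPLOAD_PREFIX=ris/incoming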
4. Enable the File Transfer Service (FTS) in RDE by setting the RA_FTS_OBJSTORE_IND to Y in
the FTS Configuration file ra_objstore.cfg found in RDE’s $MMHOME/etc directory. This
must be enabled so that the RDE nightly zip file job (RTLRDEZIP_PROCESS / RTLRDEZIP_JOB)
and all existing ad hoc zip file jobs (RTLUASDE_INITIAL_DIMMENSION_LOAD_ADHOC /
RTLRDEZIP_HIST_JOB, RTLRDEZIP_HIST_PROCESS_ADHOC / RTLRDEZIP_HIST_JOB,
INVRTVFACT_ADHOC / ADHOCINVRTVSDE_JOB, SEEDPOSITIONALFACT_ADHOC / SEEDRDEZIP_JOB)
will automatically upload files to the Object Storage through FTS for RI to pick up and
download for further processing.
5. Once these changes are applied, it will no longer be possible to upload to SFTP; you will
be sending the ZIP files only to Object Storage as specified in the install properties and
configuration changes.
RDE Job Configuration
The associated purchase order purge setting, ORDER_HISTORY_MONTHS, must also be more than a month out.
The flags that impact the incremental/full load behavior for the AIF DATA jobs linked to RDE
jobs are also provided below and should be configured at the same time.
Using RDE for Calendar Setup (Gen 2 Architecture)
Using RDE for Dimension Loads (Gen 2 Architecture)
If the validator job continues to fail on rule CAL_R2 after updating the Merchandising calendar, but you know that your first calendar year does not meet the requirements and you want to bypass the error for now, then you will need to update the table C_DIM_RULE_LIST from the Control & Tactical Center. Change the error type for this rule to W so that it only throws a warning in the batch program instead of failing. For example, if your Merchandising calendar starts in 2010 and you are not going to load any data until 2020 in RAP, then having an invalid first year of the calendar is not going to block you from other data load activities.
• Once you have disabled all flat file jobs, restart the POM schedule as needed and then run
the RI_DIM_INITIAL_ADHOC process. This moves all Merchandising data from the RAP
staging tables into the data warehouse internal tables. Once all jobs are complete, the
remaining steps to move data to AIF and PDS are the same as documented in the Data
Loads and Initial Batch Processing chapter.
Using RDE for Initial Seeding (Gen 2 Architecture)
Using RDE for Initial Seeding (Gen 1 Architecture)
1. Set up a full RDE batch (by enabling batch links/dependencies to the RMFCS schedule)
and let it run nightly to get the full set of RDE files for dimensions and facts.
2. The file will be pushed automatically to RAP FTS. Download the RI_RMS_DATA.zip file from
FTS; do not load it into RAP yet.
3. Run the process SEEDPOSITIONALFACT_ADHOC, which will extract full snapshots of all
positional data, zip them, and push them to the RAP FTS location.
4. Download the RIHIST_RMS_DATA.zip file from FTS and copy the full snapshots of positional
facts into the RI_RMS_DATA.zip file generated by the RDE nightly process (replacing the
incremental files that were extracted).
5. Upload the modified RDE nightly ZIP file to RAP FTS at the ris/incoming location (same
as you would for all nightly batches going forward). Upload any additional ZIP files you
need for the nightly batches, such as ORASE_WEEKLY.zip or RAP_DATA.zip, if you want
these other files loaded in the same batch.
6. Advance the ETL business date in AIF to one day before the current batch, if it’s not
already set to that date, using the ad hoc process LOAD_CURRENT_BUSINESS_DATE_ADHOC.
Review any configurations in C_ODI_PARAM and RSE_CONFIG tables which may have been
altered for your historical loads but need updates for nightly batch data. For example, you
may want to update RI_INVAGE_REQ_IND in C_ODI_PARAM if you need calculations of first/
last receipt dates and inventory age from the RMFCS data.
7. Schedule a run of the full AIF nightly batch. Ensure your AIF POM schedule dates for the nightly batch run are aligned with the completed run of RMFCS/RDE, because, from this point forward, the batch schedules will need to remain in sync.
Your transactional facts, such as sales and receipts, should already have history loaded up to
this first run of nightly batches, because the next RDE nightly batch will only extract data for
the current vdate in RMFCS (for example, it will use the contents of the IF_TRAN_DATA daily
transaction table for most fact updates besides sales, which come from Sales Audit directly).
Once this first AIF batch completes using the full snapshot of positional data, you may prepare
for regular nightly batches which will use the incremental extracts from RDE.
The calendar validator job rule CAL_R2 may cause the batch to fail if this is the first time using Merchandising calendar data directly. This is because the default system calendar in Merchandising does not follow the RAP recommendation that the first year of the calendar be a complete fiscal year. If this happens, verify that the first year of the
Merchandising calendar exists much earlier than any actual data in RAP (for example, the
Merchandising calendar starts in 2010 but RAP data only exists from year 2020 onwards). If
this is confirmed, you may change the validation rule to be a warning instead of an error.
Update the table C_DIM_RULE_LIST from the Control & Tactical Center. Change the error type
for this rule to W so that it only throws a warning in the batch program instead of failing, and
then restart the failed validator job as needed.
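As a reference, a minimal sketch of this change is shown below. The column names RULE_NAME and ERROR_TYPE are illustrative assumptions, not confirmed names; verify the actual columns of C_DIM_RULE_LIST before applying the update as SQL, or simply edit the row directly from the Control & Tactical Center screen.

-- Downgrade the CAL_R2 rule from an error to a warning (assumed column names).
UPDATE c_dim_rule_list
SET    error_type = 'W'        -- assumed column holding the E/W error type
WHERE  rule_name  = 'CAL_R2';  -- assumed column holding the rule identifier
COMMIT;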
5
Batch Orchestration
This chapter describes the tools, processes, and implementation considerations for configuring
and maintaining the batch schedules used by the Retail Analytics and Planning. This includes
nightly, weekly, and ad hoc batch cycles added in the Process Orchestration and Monitoring
(POM) tool for each of the RAP applications.
Overview
All applications on the Retail Analytics and Planning have either a nightly or weekly batch
schedule. Periodic batches allow the applications to move large amounts of data during off-
peak hours. They can perform long-running calculations and analytical processes that cannot
be completed while users are in the system, and close out the prior business day in
preparation for the next one.
To ensure consistency across the platform, all batch schedules have some level of
interdependencies established, where jobs of one application require processes from another
schedule to complete successfully before they can begin. The flow diagram below provides a
high-level view of schedule dependencies and process flows across RAP modules.
The frequency of batches will vary by application. However, the core data flow through the
platform must execute nightly. This includes data extraction from RMS by way of RDE (if used)
and data loads into Retail Insights and AI Foundation.
Downstream applications from Retail Insights, such as Merchandise Financial Planning, may only execute the bulk of their jobs on a weekly basis. This does not mean the schedule itself can be run just once a week (as the MFP batch was run in previous versions); those end-of-week processes now rely on the consumption and transformation of data that happens in the nightly batches.
For example, Retail Insights consumes sales and inventory data on a daily basis. However, the
exports to Planning (and subsequent imports in those applications) are only run at the end of
the week, and are cumulative for all the days of data up to that point in the week.
For this reason, assume that most of the data flow and processing that is happening within the
platform will happen every day and plan your file uploads and integrations with non-Oracle
systems accordingly.
While much of the batch process has been automated and pre-configured in POM, there are still several activities that need to be performed which are specific to each implementation. Table 5-1 summarizes these activities and the reasons for doing them. Additional details are provided in subsequent sections of this document.
Activity – Description
Initial Batch Setup – By default, most batch processes are enabled for all of the applications. It is the implementer’s responsibility to disable batches that will not be used by leveraging the Customer Modules Management screen in Retail Home.
Configure POM Integrations – The POM application supports external integration methods including external dependencies and process callbacks. Customers that leverage non-Oracle schedulers or batch processing tools may want to integrate POM with their existing processes.
Schedule the Batches – Schedules in POM must be given a start time to run automatically. Once started, you have a fixed window of time to provide all the necessary file uploads, after which time the batch will fail due to missing data files.
Batch Flow Details – It is possible to export the batch schedules from POM to review process/job mappings, job dependencies, inter-schedule dependencies, and other details. This can be very useful when deciding which processes to enable in a flow, or when debugging failures at specific steps in the process and how they impact downstream processing.
Initial Batch Setup
The modules specific to Retail Insights are not required for the platform as a whole and can be disabled if you do not plan to implement RI.
After you make changes to the modules, make sure to synchronize your batch schedule in
POM following the steps in Implementation Tools. If you are unsure whether a module should
be enabled or not, you can initially leave it enabled and then disable jobs individually from
POM as needed.
Common Modules
The following table lists the modules that are used across all platform implementations. These
modules process core foundation data, generate important internal datasets, and move data
downstream for other applications to use. Verify in your environment that these are visible in
Customer Modules Management and are enabled.
If you are implementing some or all of AI Foundation or Retail Insights, then there are some
additional modules to review. These modules may or may not be required, as they are based
on which interface files you plan to load as part of the nightly batch process.
After setting up the common modules and syncing with POM, ensure that certain critical batch
processes in the AIF DATA schedule (which is used by all of RAP) are enabled in Batch
Monitoring. This can be used as a check to validate the POM sync occurred:
• RESET_ETL_THREAD_VAL_STG_JOB
• TRUNCATE_STAGE_TABLES_JOB (unless you are using RDE with RMFCS v22+, then
this must be disabled instead)
• DELETE_STATS_JOB
• RI_UPDATE_TENANT_JOB
Some of these jobs begin in a disabled state in POM (depending on the product version) so the
POM sync should ensure they are enabled. If they are not enabled after the POM sync, be
sure to enable them before attempting any batch runs.
Additionally, there are certain jobs that must remain disabled unless advised to enable them by
Oracle. Make sure the following jobs are disabled in the AIF DATA schedule after syncing with
POM:
• OBIEE_CACHE_CLEAR_JOB
• ODI_LOG_EXTRACTOR_JOB
• ODI_LOG_LOADER_JOB
• If you are not providing a file named RA_SRC_CURR_PARAM_G.dat in all ZIP uploads, disable
BATCH_VALIDATION_JOB, RA_SRC_CURR_PARAM_G_COPY_JOB, and
RA_SRC_CURR_PARAM_G_STG_JOB
• If you are using stock ledger in RI, only one of the following can be used and the other
must be disabled: W_RTL_STCKLDGR_SC_LC_MH_F_GREG_JOB,
W_RTL_STCKLDGR_SC_LC_MH_F_JOB
RI Modules
The following table lists modules within the Retail Insights offer codes which may be used if
you are implementing any part of Retail Insights. Disable the RI module entirely if you are not implementing Retail Insights at this time, because all your batch configurations should be covered by the RAP common modules.
AI Foundation Modules
The following table lists modules within the AI Foundation applications which may be used by
one or more other RAP applications, in addition to the common modules from the prior section.
This primarily covers forecasting and integration needs for Planning application usage. It is
important to note that the underlying process for generating forecasts leverages jobs from the
Lifecycle Pricing Optimization (LPO) application, so you will see references to that product
throughout the POM batch flows and in Retail Home modules.
To initialize data for the forecasting program, use the ad hoc POM process
RSE_MASTER_ADHOC_JOB described in Sending Data to AI Foundation. After the platform is
initialized, you may use the Forecast Configuration user interface to set up and run your initial
forecasts. For complete details on the requirements and implementation process for
forecasting, refer to the Retail AI Foundation Cloud Services Implementation Guide.
For reference, the set of batch programs that should be enabled for forecasting are listed
below (not including foundation data loads common to all of AI Foundation). Enable these by
syncing POM with MDF modules, though it is best to validate that the expected programs are
enabled after the sync.
Note:
Jobs with PMO in the name are also used for Lifecycle Pricing Optimization and are
shared with the forecasting module.
Job Name
PMO_ACTIVITY_LOAD_START_JOB
PMO_ACTIVITY_STG_JOB
PMO_ACTIVITY_LOAD_JOB
PMO_ACTIVITY_LOAD_END_JOB
PMO_CREATE_BATCH_RUN_JOB
PMO_RUN_EXEC_SETUP_JOB
PMO_RUN_EXEC_START_JOB
PMO_RUN_EXEC_PROCESS_JOB
PMO_RUN_EXEC_END_JOB
RSE_CREATE_FCST_BATCH_RUN_JOB
RSE_FCST_BATCH_PROCESS_JOB
RSE_FCST_BATCH_RUN_END_JOB
RSE_CREATE_FCST_BATCH_RUN_ADHOC_JOB
RSE_FCST_BATCH_PROCESS_ADHOC_JOB
There are also two processes involved in forecast exports to MFP, one as part of weekly batch
and the other as an ad hoc job which you can run during implementation.
Job Name
RSE_MFP_FCST_EXPORT_JOB
RSE_MFP_FCST_EXPORT_ADHOC_JOB
Maintenance Cycles
The AIF DATA schedule also includes a standalone maintenance batch flow called
RI_MAINTENANCE_ADHOC. You need to enable and schedule this process to run nightly,
sometime prior to the main nightly batch.
1. From Batch Administration, go into the AIF DATA schedule Standalone tab and locate the
RI_MAINTENANCE_ADHOC flow. If the jobs are disabled, then enable them now.
2. Go into Schedule Administration and add a new schedule for the flow, enable it, and
schedule it to run once every day. It is recommended to start it several hours prior to the
actual nightly batch; for example, it can run starting at 8 pm local time if your nightly batch
starts at 2 am the following morning. These processes should not impact normal user
activity in the applications and are safe to run even if users are still logged in.
3. From Batch Monitoring, restart the AIF DATA schedule to apply the changes.
The maintenance cycle is responsible for creating table partitions, repairing unusable indexes,
purging old log files, and purging certain records relating to deleted items and locations. All of
these activities are necessary to ensure the data warehouse operates efficiently over time as
data continues to accumulate in the system. These jobs are kept outside of the nightly batch
flow because there is a chance they will need several hours to run in some instances, such as
when a large number of new partitions need to be created at the start of a fiscal quarter or a
particularly large number of log files have to be purged.
These files will be bundled into a single ZIP file named RAP_DATA.zip. To configure the
Customer Modules for this batch implementation, perform the following steps:
1. At the top level, you may disable the RI module entirely (if it’s visible), because you are not
implementing that solution
2. Enable and expand the RAP module. Within the RAP_COMMON sub-module, enable only the
following components and disable the rest:
• RAP>RAP_COMMON>ZIP_FILES: RAP_DATA_ZIP
• RAP>RAP_COMMON>SICONTROLFILES: RAP_SI_DIM_CALENDAR,
RAP_SI_DIM_EXCHANGE_RATES, RAP_SI_DIM_ONORDER,
RAP_SI_DIM_ORGANIZATION, RAP_SI_DIM_PRODUCT,
RAP_SI_FACT_ADJUSTMENT, RAP_SI_FACT_INVENTORY,
RAP_SI_FACT_MARKDOWN, RAP_SI_FACT_ORDER_DETAIL,
RAP_SI_FACT_RECEIPT, RAP_SI_FACT_RTV, RAP_SI_FACT_SALES,
RAP_SI_FACT_TRANSFER
• RAP>RAP_COMMON>SIBATCH: RAP_SI_INVADJ, RAP_SI_INVPOS,
RAP_SI_INVRECEIPT, RAP_SI_INVRTV, RAP_SI_INVTRANSFER,
RAP_SI_MARKDOWN, RAP_SI_ONORDER, RAP_SI_PO, RAP_SI_REQUIRED,
RAP_SI_SALES
• RAP>RAP_COMMON>RDXBATCH: RAP_RDX_INVADJ, RAP_RDX_INVPOS,
RAP_RDX_INVECEIPT, RAP_RDX_INVRTV, RAP_RDX_INVTRANSFER,
RAP_RDX_MARKDOWN, RAP_RDX_PO, RAP_RDX_REQUIRED,
RAP_RDX_SALES
• RAP>RAP_COMMON>BATCH: RAP_INVADJ, RAP_INVPOS, RAP_INVRECEIPT,
RAP_INVRTV, RAP_INVTRANSFER, RAP_MARKDOWN, RAP_PO,
RAP_REQUIRED, RAP_SALES
3. To use AIF for forecasting, you will also need certain parts of the AI Foundation modules
(named RSP in Retail Home). Enable the RSP root module, expand it, and enable the
FCST module. All options under FCST should also be enabled. Disable all other RSP
modules here.
4. If you are integrating with Merchandising using the RDE batch jobs, then you are likely not
providing flat files for most things, and the setup process will be different. Refer to
Integration with Merchandising to understand the batch setup process in more detail.
Once all changes are made, make sure to Save the updates using the button below the table.
Only after saving the changes will they be available to sync with POM.
5. Go to the Batch Monitoring screen. If you have a schedule already open for a past
business date, click Close Schedule. If you are already on the current business date then
just click Restart Schedule and skip the next step.
6. Change the POM business date to the date you will be loading nightly batch data for (for
example if your data files will have data for 2/25/2023 then that should also be the
business date). Open a new schedule using the provided button.
7. Enable and schedule the batches to run from Schedule Administration. The inter-schedule
dependencies will ensure downstream jobs are not run until the necessary dependencies
are complete (for example, RSP jobs will wait for RI loads to complete).
Note:
Updating the schedule times also requires a Restart Schedule action to be
performed afterwards.
8. If you have not already enabled the schedule for the RI_MAINTENANCE_ADHOC flow, make
sure to do that now.
9. Make sure the business date in the data warehouse is one day prior to the first day of data
you plan to load through the nightly batch. If necessary, use the standalone POM process
LOAD_CURRENT_BUSINESS_DATE_ADHOC to advance the business date. For example, if your
first nightly batch is loading data for 2023-04-30 then you must ensure the data warehouse
is currently on business date 2023-04-29. The nightly batch will automatically advance the
date starting with 2023-04-30.
10. If you chose not to provide the RA_SRC_CURR_PARAM_G.dat file for ensuring proper business
date validation, then go back to Batch Administration in the AIF DATA nightly schedule and
disable BATCH_VALIDATION_JOB, RA_SRC_CURR_PARAM_G_COPY_JOB, and
RA_SRC_CURR_PARAM_G_STG_JOB.
The next set of steps may or may not be needed depending on your business calendar
configuration:
1. From the Batch Administration Nightly schedules, locate the column for Days of the Week.
By default, weekly jobs may run on either Saturday or Sunday (varies by process). You
must align the weekly jobs to your week-ending dates for anything involved in integration
or calculations, such as RDX and PDS batch jobs.
2. For example, if you wish to set your week-ending date as Saturday for all jobs involved in
RI > Planning integrations, filter the AIF DATA Schedule job names using each of the
following codes: PDS, RDX
3. For each set of PDS or RDX jobs, look at the Days of the Week value. Anything that is not
set to run Saturday should be modified by editing the job record and changing the day to
Saturday. You might also want some jobs to run daily, in which case you select all days of
the week here.
4. Repeat this process in the RPASCE schedule, moving any jobs to the desired Day of the
Week value.
5. When all changes are done, be sure to Restart Schedule from the Batch Monitoring
screen.
Lastly, ensure that the following jobs are disabled in the AIF DATA schedule after all other
setup is done, as they should not be used in the current version:
• OBIEE_CACHE_CLEAR_JOB
• ODI_LOG_EXTRACTOR_JOB
• ODI_LOG_LOADER_JOB
You are now ready to begin running your nightly batches. As soon as you run the first batch
cycle, you must continue to run AIF DATA and AIF APPS batch cycles every night in sequence.
You must not skip any days, because some jobs have internal processing based on the day of
week that they execute and skipping days will prevent proper operation of these jobs. If you
are not providing daily fact data, then you must still run the daily batches with a full set of
dimension files, such as products and locations; you can never run a batch without dimension
files present and populated with data. The recommended approach is for the source system
providing data files to always package and upload a nightly ZIP file every day even when no
data is changing. The uploaded ZIP should contain the dimension data plus empty fact files
where daily data is not being generated.
Adjustments in POM
While the bulk of your batch setup should be done in Retail Home, it may be necessary to fine-
tune your schedule in POM after the initial configuration is complete. You may need to disable
specific jobs in the nightly schedules (usually at Oracle’s recommendation) or reconfigure the
ad hoc processes to use different programs. The general steps to perform this activity are:
1. From Retail Home, click the link to navigate to POM or go to the POM URL directly if
known. Log in as a batch administrator user.
2. Navigate to the Batch Administration screen.
3. Select the desired application tile, and then select the schedule type from the nightly,
recurring, or standalone options.
4. Search for specific job names, and then use the Enabled option to turn the program on or
off.
5. From the Batch Monitoring screen, click Restart Schedule to apply the changes.
Note:
If you sync with MDF again in the future, it may re-enable jobs that you turned off
inside a module that is turned on in Retail Home. For that reason, the module
configuration is typically used only during implementation, then POM is used once
you are live in production.
Managing Multiple Data Sources
Each numbered entry point into the data flow is mutually exclusive; you cannot run multiple data flows at the same time, or duplicated or invalid data may be loaded. For example, if you are entering the diagram from point 1, you would want to ensure all AIF DATA jobs along that path are enabled, and also disable the jobs for unused entry points (2 and 4). The entry points in every diagram are explained in more detail below.
Adjustments
This is the data flow diagram for inventory adjustment transactions. Enable only the jobs for
your chosen load method and disable the jobs for all other entry points.
Costs
This is the data flow diagram for base cost and net costs. Enable only the jobs for your chosen
load method and disable the jobs for all other entry points.
Deal Income
This is the data flow diagram for deal income transactions. Enable only the jobs for your
chosen load method and disable the jobs for all other entry points.
Intercompany Margin
This is the data flow diagram for intercompany margin transactions. Enable only the jobs for
your chosen load method and disable the jobs for all other entry points.
Inventory Position
This is the data flow diagram for inventory positions. Enable only the jobs for your chosen load
method and disable the jobs for all other entry points.
Inventory Reclass
This is the data flow diagram for inventory reclass transactions. Enable only the jobs for your
chosen load method and disable the jobs for all other entry points.
Markdowns
This is the data flow diagram for markdown transactions. Enable only the jobs for your chosen
load method and disable the jobs for all other entry points.
Prices
This is the data flow diagram for daily price updates. Enable only the jobs for your chosen load
method and disable the jobs for all other entry points.
Purchase Orders
This is the data flow diagram for purchase order updates. Enable only the jobs for your chosen
load method and disable the jobs for all other entry points.
Receipts
This is the data flow diagram for inventory receipt transactions. Enable only the jobs for your
chosen load method and disable the jobs for all other entry points.
Returns to Vendor
This is the data flow diagram for inventory returns to vendor. Enable only the jobs for your
chosen load method and disable the jobs for all other entry points.
Sales
This is the data flow diagram for sales transactions. Enable only the jobs for your chosen load
method and disable the jobs for all other entry points.
Sales Pack
This is the data flow diagram for sales transactions at component level that are spread down
from pack item sales. Enable only the jobs for your chosen load method and disable the jobs
for all other entry points.
Sales Wholesale
This is the data flow diagram for wholesale and franchise sales transactions. Enable only the
jobs for your chosen load method and disable the jobs for all other entry points.
Transfers
This is the data flow diagram for inventory transfer transactions. Enable only the jobs for your
chosen load method and disable the jobs for all other entry points.
Configure POM Integrations
Activity – Reference
Trigger RAP batches from an external program – POM Implementation Guide > Integration > Invoking Cycles in POM
Trigger external processes based on RAP batch statuses – POM Implementation Guide > Integration > External Status Update
Add external dependencies into the RAP batch to pause execution at specific points – POM Implementation Guide > Integration > External Dependency
Batch Flow Details
For more details about the tabs in the resulting XLS file, refer to the POM User Guide, “Export/
Import Schedule Configuration”.
Reprocessing Nightly Batch Files
the application. The POM schedule for Planning internally calls an OAT task controlled by the
batch control entries. A standard set of nightly and weekly jobs for Planning are defined to
schedule them in POM. You also have the option to disable or enable the jobs either directly
through POM, or by controlling the entries in batch control files. Refer to the Planning
application-specific Implementation Guides for details about the list of jobs and how jobs can
be controlled by making changes to batch control files.
Once a new file has been placed, you will still need to re-run the jobs to import that file. Or, if a
COPY job for that file is what failed, re-run that job and the batch will resume from there.
6
Data Processing and Transformations
The Retail Analytics and Planning is a data-driven set of applications performing many
transformations and calculations as part of normal operations. Review this chapter to learn
about the most common types of data transformation activities occurring within the platform
that may impact the data you send into the platform and the results you see as an end user.
Table Structures
If you are currently loading data into the data warehouse using AIF DATA batch jobs and need to access database tables for debugging or validation purposes, there are naming and format conventions used on each aggregate table. A base intersection table name is built from standard abbreviations for the fact area and the levels of the intersection. Using this notation, you may interpret the table W_RTL_SLS_IT_LC_WK_A as “Sales aggregate table at the item/location/week intersection”.
Key Columns
Most fact tables in the data warehouse use the same key column structure, which consists of
two types of internal identifiers. The first identifier is referred to as a WID value. The WID on a
fact table is a foreign key reference to a dimension table’s ROW_WID column. For example, a
PROD_WID column in a sales table is referring to the ROW_WID on W_PRODUCT_D (the product
dimension table). Joining the WIDs on a fact and a dimension will allow you to look up user-
facing descriptors for the dimensions, such as the product number.
The second identifier is known as SCD1_WID and refers to slowly changing dimensions, which is
a common data warehousing concept. The IDs on the SCD1_WID columns are carried forward
through reclassifications and other dimensional changes, allowing you to locate a single
product throughout history, even if it has numerous records in the parent dimension table. For
example, joining PROD_SCD1_WID from a sales table with SCD1_WID on W_PRODUCT_D will return all instances of that product’s data throughout history, even if the product has several different ROW_WID entries due to reclassifications, which insert new records into the dimension for the same item.
The other core structure to understand is Date WIDs (key column DT_WID). These also join with
the ROW_WID of the parent dimension (W_MCAL_DAY_D usually), but the format of the WID allows
you to extract the date value directly if needed, without table joins. The standard DT_WID value
used is a combination of 1 + date in YYYYMMDD + 000. For example, 120210815000 is the
DT_WID value for “August 15, 2021”.
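To make the join pattern concrete, here is a minimal sketch of the lookups described above, written as ad hoc SQL (for example, from APEX). The sales aggregate W_RTL_SLS_IT_LC_WK_A is used purely as an example, and PROD_NUM stands in for the product number descriptor on the dimension; substitute whichever fact table you are validating and confirm which WID columns it actually carries before running the queries.

-- Current-version lookup: the WID on the fact joins to ROW_WID on the dimension.
SELECT p.prod_num, f.*
FROM   w_rtl_sls_it_lc_wk_a f
JOIN   w_product_d p
       ON p.row_wid = f.prod_wid;

-- History lookup: SCD1_WID returns every version of the product across reclassifications.
SELECT p.prod_num, p.row_wid, f.*
FROM   w_rtl_sls_it_lc_wk_a f
JOIN   w_product_d p
       ON p.scd1_wid = f.prod_scd1_wid;

-- DT_WID follows the pattern 1 || YYYYMMDD || 000, so the date can be derived without a join.
SELECT TO_DATE(SUBSTR(TO_CHAR(120210815000), 2, 8), 'YYYYMMDD') AS business_date
FROM   dual;  -- returns August 15, 2021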
Transformations from Data Warehouse to Planning
Transformation – Explanation
Currency Conversion – As part of the nightly batch, AIF DATA jobs will use exchange rate information to convert all incoming data from the source currency to the primary business currency. All data sent to downstream applications is in the primary currency. The data model maintains separate columns for both local and primary currency amounts for RI and AIF usage.
Tax Handling – The data model includes non-US taxes, such as VAT, in the sales retail amounts based on the indicators set up in the source system (such as Sales Audit) and in the data extraction jobs (RDE). When sending the sales data to Planning and AI Foundation, the default sales values may include VAT and only specific VAT-exclusive fields will remove it. You may optionally remove VAT from all data using configuration changes.
Transaction Date Usage – All fact data coming into the system includes a transaction date on the record. AIF DATA jobs aggregate from day to week level using transaction dates and do not alter or re-assign any records to dates different from what is provided. Transaction data in the past will be added to its historical week in the aggregates, no matter how far back it is dated.
Pack Item Handling – Downstream applications are currently only interested in the component item level, so AIF DATA will not send any fact data for pack items to other applications. Pack item sales must be spread to the component item level and loaded into the Sales Pack interface if this data is required for AI Foundation or Planning. All inventory, purchase order, and transaction data must be loaded at the component item level only.
Stockholding Locations – Inventory data for Planning is only exported for stockholding locations. A store indicated as a non-stockholding location on the location dimension will not be included in outbound inventory data. Physical warehouses which are not stockholding (because you use virtual warehouses) will also not be included.
Warehouse Types – Planning solutions assume that virtual warehouses are used as the stockholding locations for the business, and physical warehouses will be non-stockholding. For this reason, virtual warehouses are used to integrate data from the data warehouse to Planning, and no data is sent for the physical warehouses (except to indicate on each virtual WH the ID and name of the associated physical WH). If you don’t use virtual WHs, you can mark your physical WHs as virtual for the purposes of integration.
Future On Order – Planning applications require a forward-looking view of purchase orders based on the OTB EOW Date. The data warehouse accepts the actual purchase order details on the interfaces but will then transform the on-order amounts to be future-dated using the provided OTB EOW Dates. Orders which are past the OTB date will be included in the first EOW date; they will never be in the past.
Include On Order – Purchase Order data is limited by the Include On Order Flag on the Order Head interface. A value of N will not be included in the calculations for Planning.
Orderable Items – Purchase Order data is limited by the Orderable Flag on the Product interface. A value of N will not be included in the calculations for Planning.
Sellable Items – Regular non-pack items must be flagged as sellable to be interfaced to Planning, because non-sellable item data is not wanted in PDS at this time. This does not apply to pack items, which may be sellable or non-sellable because non-sellable pack items are often used for replenishment in IPO.
Inventory Adjustment Types – The system accepts 3 types of inventory adjustments using the codes 22, 23, and 41. For Planning, only the first two codes are exported. Code 22 relates to Shrink and code 23 relates to Non-Shrink.
Inventory Receipt Types – The system accepts 3 types of inventory receipts using the codes 20, 44~T, and 44~A. For Planning, all codes are sent but the 44s are summed together. Code 20 relates to purchase order receipts. Code 44 relates to Transfer receipts and Allocation receipts. Only code 20 is used by MFP in the GA solution.
Inventory Transfer Types – The system accepts 3 types of transfers using the codes N, B, and I (normal, book, and intercompany). All three types are sent to Planning along with the type codes.
Data Mappings
When you are generating input files to RAP, you may also want to know which columns are
being moved to the output and how that data translates from what you see in the file to what
you see in Planning applications. The list of mappings below describes how the data in the
foundation data warehouse is exported to PDS.
Note:
Conversions and filters listed in the prior section of this chapter apply to all of this
data (for example, data may be stored in local currency in RI but is always converted
to the primary currency for export).
Product Mapping
The item dimension and product hierarchy data is loaded mainly from the PRODUCT.csv file or
from RMFCS. The primary data warehouse table for item data is W_PRODUCT_D while the
hierarchy comes from W_PROD_CAT_DH, but several temporary tables are used to pre-calculate
the values before export. The mapping below is used by the interface program to move data
from the data warehouse to RDX. The temporary table W_RTL_ITEM_PARENT_TMP is generated
using data from W_PROD_CAT_DH, W_PRODUCT_ATTR_D, W_PRODUCT_D_TL, W_RTL_IT_SUPPLIER_D,
and W_DOMAIN_MEMBER_LKP_TL. The export filters out non-pack items that have SELLABLE_FLG=N
on the interface file or from Merchandising.
There is a configuration that alters the behavior of the ITEM_DESC, ITEM_PARENT_DIFF_DESC,
and ITEM_PARENT_DESC fields. You may optionally update the C_ODI_PARAM_VW parameter
PDS_PROD_INCLUDE_ITEM_ID to Y. When you do, the item ID will be concatenated into the
description field on W_PDS_PRODUCT_D for all 3 levels of item.
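A minimal sketch of that change is shown below, assuming the parameter is stored in PARAM_NAME / PARAM_VALUE columns of C_ODI_PARAM_VW; verify the view definition in your environment (or make the change from the Control & Tactical Center) before running it as SQL.

-- Concatenate the item ID into the PDS item description fields (assumed column names).
UPDATE c_odi_param_vw
SET    param_value = 'Y'
WHERE  param_name  = 'PDS_PROD_INCLUDE_ITEM_ID';
COMMIT;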
Organization Mapping
The location dimension and organization hierarchy data is loaded mainly from the
ORGANIZATION.csv file or from RMFCS. The primary data warehouse table for location data is
W_INT_ORG_D while the hierarchy comes from W_INT_ORG_DH, but several other tables are used
to pre-calculate the values before export. The mapping below is used by the interface program
to move data from the data warehouse to RDX. W_DOMAIN_MEMBER_LKP_TL is the holding table
for translatable description strings. W_INT_ORG_ATTR_D is for location attributes. Other tables
ending in TL are for lookup strings for specific entities like store names. The mappings are separated by store and warehouse where different logic is used based on the location type. Only virtual warehouses are exported here; physical warehouse records are excluded from the export.
Calendar Mapping
The calendar hierarchy data is loaded from the CALENDAR.csv file or from RMFCS. The
calendar must be a fiscal calendar (such as 4-4-5 or 4-5-4). The primary data warehouse table
having day-level data is W_MCAL_DAY_D. AIF DATA jobs automatically generate the calendar
using the start/end dates for the fiscal periods in the input file. AIF DATA jobs also generate an
internal Gregorian calendar at the same time the fiscal calendar is loaded, and this data is
exported alongside the fiscal calendar for extensions and customizations.
Brand Mapping
The brand data is loaded from the PRODUCT.csv file or from RMFCS. The product data load
programs will insert the brand information into the additional tables used below (as long as
these tables are enabled during foundation loads).
Supplier Mapping
The supplier data is loaded from the PRODUCT.csv file or from RMFCS. The product data load
programs will insert the supplier information into the additional tables used below (as long as
these tables are enabled during foundation loads).
All of the tables include an ATTR_ID column that is automatically generated from either the
CFAS attribute group name or the user-supplied group names, stripped of unsupported
characters for ID columns. They also have an ATTR_VALUE column containing the attribute
values from the source table, which is a combination of all the values in all columns having the
same data type (CHAR, NUM, DATE). ATTR_VALUE for string types is a trimmed version with
unsupported characters removed. Also specific to string types, there is an ATTR_VALUE_DESC
column which has the unmodified string value from the source table.
To leverage location string attributes for non-CFAS data, you need to load the location group
definitions to the common translation lookup table in the data warehouse,
W_DOMAIN_MEMBER_LKP_TL. From APEX, you may insert the group names to the staging table
W_DOMAIN_MEMBER_DS_TL and then run the associated POM job W_DOMAIN_MEMBER_LKP_TL_JOB
to populate the target table. You need to populate the columns with specific values as
described below:
Column – Usage
DOMAIN_CODE – Use RTL_ORG_ATTR for columns in W_INT_ORG_ATTR_D and RTL_ORG_FLEX for columns in W_ORGANIZATION_FLEX_D
DOMAIN_MEMBER_CODE – Specify the exact column in the source table having this attribute in it, such as FLEX1_CHAR_VALUE
DOMAIN_MEMBER_NAME – Specify the name of the attribute group, such as Climate
LANGUAGE_CODE – Specify the primary language code used for all translated lookup data, such as US
SRC_LANGUAGE_CODE – Specify the primary language code used for all translated lookup data, such as US
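For example, a minimal sketch of staging one location attribute group from APEX is shown below, using the column values described above. The group name Climate and the column FLEX1_CHAR_VALUE are illustrative, and the staging table may carry additional audit or identifier columns in your environment, so check its definition before inserting.

-- Stage one attribute group name for a location flex attribute column (illustrative values).
INSERT INTO w_domain_member_ds_tl
  (domain_code, domain_member_code, domain_member_name, language_code, src_language_code)
VALUES
  ('RTL_ORG_FLEX', 'FLEX1_CHAR_VALUE', 'Climate', 'US', 'US');
COMMIT;

-- Then run the POM job W_DOMAIN_MEMBER_LKP_TL_JOB to populate W_DOMAIN_MEMBER_LKP_TL.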
Sales Mapping
Data for sales is loaded from the SALES.csv file or from RMFCS (Sales Audit). The primary
data warehouse table is the week-level aggregate generated by the historical and daily load
processes. All data mappings in this area are split out by retail type. Any measure having reg/pro/clr in the name is filtered on that retail type code as part of the export. When
you provide input data to RAP, you specify the retail type code as R, P, or C, and those values
are used here to determine the output. A custom 4th option (using type code O for Other) is also
allowed, as long as you extend the W_XACT_TYPE_D dimension in the data warehouse to have
the extra type code. Other sales are only included in the Total Sales measures in the PDS
export. The data only includes non-pack item sales, as it expects pack sales to be spread to
their component level when used.
On Order Mapping
Data is loaded from the ORDER_HEAD.csv and ORDER_DETAIL.csv files or from RMFCS.
Purchase order data is transformed from the raw order line details into a forward-looking total
on-order amount based on the OTB end-of-week date on the order. The calendar date on the
export is further altered based on the parameter PDS_EXPORT_DAILY_ONORD in C_ODI_PARAM_VW
to either allow or prevent non-week-ending dates. Data is also filtered to remove orders not
flagged as Include On Order.
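If you need to confirm how the on-order export is currently configured, a simple check such as the one below can be run as ad hoc SQL, assuming the parameter is exposed through PARAM_NAME / PARAM_VALUE columns of C_ODI_PARAM_VW (otherwise, review it from the Control & Tactical Center).

-- Review the current on-order export behavior (assumed column names).
SELECT param_name, param_value
FROM   c_odi_param_vw
WHERE  param_name = 'PDS_EXPORT_DAILY_ONORD';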
Markdown Mapping
Data is loaded from the MARKDOWN.csv file or from RMFCS. The primary data warehouse table
is the week-level aggregate generated by the historical load process.
Wholesale/Franchise Mapping
Data is loaded from the SALES_WF.csv file or from RMFCS. The primary data warehouse table
is the week-level aggregate generated by the historical load process.
Transformations in Planning
Planning applications allow the loading of fact data at the load intersection level (such as Item and Location) but use the data within the application at an aggregated level (called the base intersection). In MFP, though all facts are loaded at the item level, it only needs data to plan at
the Subclass level. The data will be aggregated from item level to subclass level for all the
configured metrics to be directly used by the application. During re-classifications (such as
when one item is moved from one subclass to another subclass), after the new hierarchy
details are imported into MFP it also triggers re-classification of all fact data. Re-aggregation of
fact data then happens only for shared facts having different load and base intersections.
In Planning applications, fact data is grouped as dynamic fact groups based mainly on the
base intersection and interface details, as defined in the Data Interface of the Application
Configuration. RI and AI Foundation use a relational data model, whereas Planning
applications internally use a hierarchical data model. Data from RAP, stored using the relational
data model, needs to be transformed to be loaded into Planning applications. A similar
approach is necessary for data coming out of planning applications to AI Foundation or
external sources. These data transformations happen as part of the interfaces defined in
interface.cfg (Interfaces Configuration File), which is a mapping of dimensions and
measures from Planning applications to external system table columns. Refer to the Planning application-specific Implementation Guides for more details on how these interface mappings are configured.
7
Implementation Tools
Review the sections below to learn about the tools and common components used within the
Retail Analytics and Planning. Many of these tools are used both for initial implementation and
for ongoing maintenance activities, so implementers should be prepared to transfer knowledge
of these tools to the customer before completing the project.
Retail Home
One of the first places you will go in a new RAP environment is Retail Home. It serves both as
the customer portal for Oracle Retail cloud applications and as a centralized place for certain
common configurations, such as Customer Module Management. Module management allows
implementers to quickly configure the complex batch schedules and interdependencies of RAP
applications using a simplified module-based layout. Optional batch programs, such as those
used for Retail Insights or AI Foundation applications, can be turned off from this tool and it
synchronizes with the batch scheduler to ensure all related programs are disabled
automatically.
For more general information about Retail Home and the other features it provides, review the
Retail Home Administration Guide.
Because Customer Modules are a necessary part of configuring and using a RAP
environment, see the steps below for how to access this feature.
1. To access Retail Home, access the URL sent to your cloud administrator on first
provisioning a new environment. It should look similar to the URL format below.
https://{service}.retail.{region}.ocs.oraclecloud.com/{solution-customer-
env}/retailhome
3. You may enable or disable various modules, depending on your implementation plans. For
example, if you are not implementing any Retail Insights modules, then the sections for
“RCI” and “RMI” can be deactivated.
Note:
Other components within the RI parent module may still be necessary. Detailed module requirements are described in Batch Orchestration.
In addition to Customer Modules, you may also use Retail Home’s Resource Bundle
Customization (RBC) feature to change translatable strings in the applications to custom
values. Use the steps below to verify this feature is available:
1. Navigate to Settings → Application Administration → Application Navigator Setup.
2. Confirm that a row already exists for each application in the platform, including Retail
Insights, Retail AI Foundation Cloud Services, and Merchandise Financial Planning.
3. On Retail Insights, select the row and click Edit.
a. If not enabled, change the Platform Service toggle to an enabled state.
b. Check all of the boxes that appear.
c. Enter a valid platform service URL.
If your platform services URL is blank and you do not know the URL, log a Service
Request to receive it from Oracle.
4. Repeat the steps above for the AI Foundation and MFP modules, if necessary.
5. Navigate to Settings → Resource Bundles → Resource Text Strings once the navigator
and platform service setup is validated.
6. Set the following values in the dropdown menus:
a. Application: Retail Insights
Process Orchestration and Monitoring (POM)
6. Click on the tile named AIF DATA <Release_#> to view the RI and data warehouse batch
jobs, which should be loaded into the table below the tiles.
7. Click the Sync with MDF button (above the table) and then click the OK button in the
Warning message popup. Once clicked, the Platform Services calls are initiated between
Retail Home and POM to sync the module status.
8. While the modules are synchronizing, you will see a message: 'Some features are disabled
while a schedule is being synced'. Do not attempt to modify the schedule or make other
changes while the sync is in progress.
9. Once the sync is complete, a JSON file with the batch schedule summary is downloaded.
This file contains the current and previous status of an application and module in MDF and
POM after sync. For example:
{"scheduleName":"RI","synced":true,"enabledModules":
[{"state":"MATCHED_MODULE","mdfStatus":"ENABLED","prevMdfStatusInPom":"E
7-4
Chapter 7
Control & Tactical Center
NABLED","prevStatusInPom":"ENABLED","publishToPom":true,"applicationName":
"RI","moduleName":"RMI_SI_ONORDER","matchedModule":true},…
10. Click the Nightly or the Standalone tab above the table and enter a filter for the Module
column (based on the modules that were activated or deactivated) and press Enter. The
jobs will be enabled or disabled based on the setup in Customer Modules Management.
11. Navigate to Tasks → Batch Monitoring. Click on the same application tile as before. If the
batch jobs are not listed, change the Business Date option to the 'Last Schedule Date'
shown on the tile.
12. Once the date is changed, the batch jobs are loaded in the table. Click the Restart
Schedule button so that module changes are reflected in the new schedule. Click OK on
the confirmation pop-up. After a few seconds, a 'Restarted' message is displayed.
13. In the same screen, filter the Job column (for example, 'W_HOUSEHOLD') to check the
status of jobs. The status is either 'Loaded' or 'Disabled' based on the configuration in the
Customer Modules Management screen in Retail Home.
Note:
A specific module in Retail Home may appear under several applications, and jobs
within a module may be used by multiple processes in POM. The rule for
synchronizing modules is: if a given POM job is enabled in at least one module, it will
be enabled in POM (even if it is disabled in some other modules). Only when a job is
not needed by any module will it be disabled.
Control & Tactical Center
Here are the steps for accessing and using the Strategy & Policy Management screens within the Control & Tactical Center:
1. To access the system configurations, start from the Retail Home URL sent to your cloud
administrator on first provisioning a new environment. It should look similar to the URL
format below.
https://{service}.retail.{region}.ocs.oraclecloud.com/{solution-customer-
env}/retailhome
2. Using the Retail Home application menu, locate the link for the Retail AI Foundation Cloud
Services. Alternatively, you can directly navigate to the application using a URL similar to
the format below.
https://{service}.retail.{region}.ocs.oraclecloud.com/{solution-customer-
env}/orase/faces/Home
3. In the task menu, navigate to Control & Tactical Center → Strategy & Policy
Management. A new window opens.
Note:
Make sure your user has the ADMINISTRATOR_JOB role in OCI IAM before logging
into the system.
Data Visualizer
Retail Analytics and Planning implementations largely involve processing large volumes of
data through several application modules, so it is important to know how to access the
database to review settings, monitor load progress, and validate data tables. Database access
is provided through the Oracle Data Visualization (DV) tool, which is included with all Retail
Analytics and Planning environments. The URL to access the DV application will be similar to
the below URL:
https://{analytics-service-region}/{tenant-id}/dv/?pageid=home
Note:
The best way to write ad hoc SQL against the database is through APEX. However,
Data Visualizer can be used to create reusable datasets based on SQL that can be
built into reports for longer term usage.
The RAP database comprises several areas for the individual application modules, but the
majority of objects from RI and AIF are exposed in DV as a connection to the RAFEDM01
database user. This user has read-only access to the majority of database objects which are
involved in RI and AI Foundation implementations, as well as the tables involved in publishing
data to the Planning modules. Follow the steps below to verify access to this database
connection:
1. Log in to the DV application with a user that includes the DVContentAuthor group in OCI
IAM (group names vary by cloud service; they will be prefixed with the tenant ID).
2. Expand the navigation panel using the Navigator icon in the upper left corner.
3. Click Data and, once the screen loads, click Connections. Confirm that you have a
connection already available for RAFEDM01-Connection (Retail Analytics Front End Data
Mart).
4. Click the connection. The Add Data Set screen will load using the selected connection. A
list of database users is displayed in the left panel.
If any errors are displayed or a password is requested, contact Oracle Support for
assistance.
8. If you are performing a one-time query that does not need to be repeated or reused, you
can stop at this point. You can also add a Manual SQL query using the SQL object at the
top of the left panel, to write simple queries on the database. However, if you want to
create a reusable dataset, or expose the data for multiple users, proceed to the next steps.
9. Click the Save icon in the upper right corner of the screen and provide a name for the new
dataset:
10. Click the table name (C_ODI_PARAM) at the bottom of the screen to modify the dataset
further for formatting and custom fields (if desired).
11. You can format the dataset on this screen for use in DV projects. You may rename the
columns, change the datatype between Measure and Attribute, create new columns based
on calculated values, and extract values from existing columns (such as getting the month
from a date). Refer to Oracle Analytics documentation on Dataset creation for full details.
When finished, click Create Workbook in the upper right corner to open a new workbook
with it.
Once you have verified database connectivity, you may continue on to creating more datasets
and workbooks as needed. Datasets will be saved for your user and can be reused at later
dates without having to re-query the database. Saved datasets can be accessed using the
Data screen from the Navigator panel.
File Transfer Services
To interact with File Transfer Services (FTS) you must use the REST APIs provided. The table below lists the API
endpoints for different file operations.
The {baseUrl} is the URL for your RAP service that is supplied to you when your service is
provisioned, and can be located from Retail Home as it is also the platform service URL. Refer
to the Required Parameters section for additional parameters you will need to make FTS
requests.
Required Parameters
To leverage File Transfer Services, several pieces of information are required. This information
is used in API calls and also inserted into automated scripts, such as the test script provided
later in this document.
The below parameters are required for uploading data files to object storage.
BASE_URL="https://__YOUR_TENANT_BASE_URL__"
TENANT="__YOUR-TENANT_ID__"
IDCS_URL="https://_YOUR__IDCS__URL__/oauth2/v1/token"
IDCS_CLIENTID="__YOUR_CLIENT_APPID__"
IDCS_CLIENTSECRET="__YOUR_CLIENT_SECRET___"
IDCS_SCOPE="rgbu:rsp:psraf-__YOUR_SCOPE__"
Base URL
The substring before the first ‘/’ in the application URL is the base URL.
Tenant
The string after the base URL and before the start of the application path is the tenant.
Example URL: https://rap.retail.eu-frankfurt-1.ocs.oraclecloud.com/rgbu-rap-
hmcd-stg1-rsp/orase/faces/Home
In the scope string, <ENV> is replaced with one of the codes (PRD, STG) and <ENVINDEX> is set to 1, unless
you have multiple staging environments, in which case the index can be 2 or greater. For RI and AIF,
use the rsp code; for other applications, the code is rpas.
To determine this information, look at the URL for your environment, such as:
https://rap.retail.eu-frankfurt-1.ocs.oraclecloud.com/rgbu-rap-hmcd-stg1-rsp/
orase/faces/Home
In the tenant string, you can see the code stg1. This can be added to your scope string
(ensuring it is in uppercase characters only).
IDCS_SCOPE = rgbu:rsp:psraf-STG1
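The derivation of the scope string can also be scripted. The following is a minimal shell sketch, assuming the tenant string format shown above; the variable names are illustrative only and are not part of the delivered sample scripts.
TENANT="rgbu-rap-hmcd-stg1-rsp"
# Extract the environment code (for example, stg1) from the tenant string
ENV_CODE=$(echo "$TENANT" | cut -d'-' -f4)
# Build the scope string in uppercase characters, using the rsp code for RI and AIF
IDCS_SCOPE="rgbu:rsp:psraf-$(echo "$ENV_CODE" | tr '[:lower:]' '[:upper:]')"
echo "$IDCS_SCOPE"   # rgbu:rsp:psraf-STG1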
1. Navigate to the Manage OAuth Clients screen from the Settings menu, under Application
Administration.
2. Click the plus (+) icon to create a new OAuth 2.0 client.
3. Enter the requested details in the window. The application name must be unique to the
connection you are establishing; it cannot be re-used to generate multiple client ID/secret
pairs and cannot contain spaces. The scope should be the string previously established in
OCI IAM Scope. The description is any value you wish to enter to describe the application
name being used.
4. Click OK to submit the form and display a new popup with the client ID and secret for the
specified Application Name. Do NOT close the window until you have captured the
information and verified it matches what is shown on screen. Once you close the window,
you cannot recover the information and you will need to create a new application.
MFP Example
To determine IDCS_SCOPE, refer to the tenant string portion (for example, rgbu-rap-cust-stg1-
mfpscs) of your cloud service URL (for example, https://rap.retail.us-
ashburn-1.ocs.oraclecloud.com/rgbu-rap-cust-stg1-mfpscs/rpasceui/)
Based on the tenant string (for example, rgbu-rap-cust-stg1-mfpscs), the environment index
is stg1 and the application is mfpscs. For this combination, the IDCS scope will look like the
configuration below (ensuring it is in uppercase characters only):
IDCS_SCOPE = rgbu:rpas:psraf-MFPSCS-STG1
Create the OAuth Client in Retail Home with the following parameters:
• App Name: MFP_STG1
• Description: FTS for MFP on STG1
• Scope 1: rgbu:rpas:psraf-MFPSCS-STG1
This generates an OAuth Client with details like this:
• Oauth client:
• App Name: MFP_STG1
• Client Id: MFP_STG1_APPID
• Client Secret: 6aae7818-309b-4e7a-874e-f26356a675b1
You will need to capture the Client ID and Client Secret, and then set the FTS script variables as follows:
BASE_URL="https://rap.retail.eu-frankfurt-1.ocs.oraclecloud.com"
TENANT="rgbu-rap-hmcd-stg1-mfpscs"
IDCS_URL="https://oci-iam-a4cbf187f29d4f41bc03fffb657d5513.identity.oraclecloud.com/oauth2/v1/token"
IDCS_CLIENTID="MFP_STG1_APPID"
IDCS_CLIENTSECRET="6aae7818-309b-4e7a-874e-f26356a675b1"
IDCS_SCOPE="rgbu:rpas:psraf-MFPSCS-STG1"
IPO Example
To determine the IDCS_SCOPE, refer to the tenant string portion (for example, rgbu-rap-cust-
stg1-ipocs) of your cloud service URL (for example, https://rap.retail.us-
ashburn-1.ocs.oraclecloud.com/rgbu-rap-cust-stg1-ipocs/rpasceui/)
Based on the tenant string (for example, rgbu-rap-cust-stg1-ipocs), the environment index
is stg1 and the application is ipocs. For this combination, the IDCS scope will look like the
configuration below (ensuring it is in uppercase characters only):
IDCS_SCOPE = rgbu:rpas:psraf-IPOCS-STG1
Create the OAuth Client in Retail Home with the following parameters:
• App Name: IPOCS_STG1
• Description: FTS for IPOCS on STG1
• Scope 1: rgbu:rpas:psraf-IPOCS-STG1
This generates an OAuth Client with details like this:
• Oauth client:
• App Name: IPOCS_STG1
• Client Id: IPOCS_STG1_APPID
• Client Secret: 6aae7818-309b-4e7a-874e-f26356a675b1
You will need to capture the Client ID and Client Secret, and then set the FTS script variables as follows:
BASE_URL="https://rap.retail.eu-frankfurt-1.ocs.oraclecloud.com"
TENANT="rgbu-rap-hmcd-stg1-ipocs"
IDCS_URL="https://oci-iam-a4cbf187f29d4f41bc03fffb657d5513.identity.oraclecloud.com/oauth2/v1/token"
IDCS_CLIENTID="IPOCS_STG1_APPID"
IDCS_CLIENTSECRET="6aae7818-309b-4e7a-874e-f26356a675b1"
IDCS_SCOPE="rgbu:rpas:psraf-IPOCS-STG1"
AP Example
To determine the IDCS_SCOPE, refer to the tenant string portion (for example, rgbu-rap-cust-
stg1-apcs) of your cloud service URL (for example, https://rap.retail.us-
ashburn-1.ocs.oraclecloud.com/rgbu-rap-cust-stg1-apcs/rpasceui/)
Based on the tenant string (for example, rgbu-rap-cust-stg1-apcs), the environment index is
stg1 and the application is apcs. For this combination, the IDCS scope will look like the
configuration below (ensuring it is in uppercase characters only):
IDCS_SCOPE = rgbu:rpas:psraf-APCS-STG1
Create the OAuth Client in Retail Home with the following parameters:
• App Name: AP_STG1
• Description: FTS for AP on STG1
• Scope 1: rgbu:rpas:psraf-APCS-STG1
This generates an OAuth Client with details like this:
• Oauth client:
• App Name: AP_STG1
• Client Id: AP_STG1_APPID
• Client Secret: 6aae7818-309b-4e7a-874e-f26356a675b1
You will need to capture the Client ID and Client Secret, and then set the FTS script variables as follows:
BASE_URL="https://rap.retail.eu-frankfurt-1.ocs.oraclecloud.com"
TENANT="rgbu-rap-hmcd-stg1-apcs"
IDCS_URL="https://oci-iam-a4cbf187f29d4f41bc03fffb657d5513.identity.oraclecloud.com/oauth2/v1/token"
IDCS_CLIENTID="AP_STG1_APPID"
IDCS_CLIENTSECRET="6aae7818-309b-4e7a-874e-f26356a675b1"
IDCS_SCOPE="rgbu:rpas:psraf-AP-STG1"
Common Headers
Content-Type: application/json
Accept: application/json
Accept-Language: en
Authorization: Bearer {ClientToken}
The {ClientToken} is the access token returned by OCI IAM after requesting client
credentials. The token must be refreshed periodically to avoid authentication errors.
The OAuth access token is requested from the OCI IAM token endpoint (the IDCS_URL parameter) using the following headers and form data:
Headers
Content-Type: application/x-www-form-urlencoded
Accept: application/json
Authorization: Basic {ociAuth}
Data (URLEncoded)
grant_type=client_credentials
scope=rgbu:rpas:psraf-{environment}
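For reference, the token request can be issued with cURL as in the following sketch, which assumes the variable names from the Required Parameters section and a standard OAuth 2.0 client credentials flow; the exact paths used by the delivered sample scripts are shown in the appendix.
# Request an OAuth access token from OCI IAM, passing the client ID and secret as Basic authentication
curl -s -X POST "$IDCS_URL" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -u "$IDCS_CLIENTID:$IDCS_CLIENTSECRET" \
  --data-urlencode "grant_type=client_credentials" \
  --data-urlencode "scope=$IDCS_SCOPE"
# The access_token value in the JSON response is used as {ClientToken} in subsequent FTS calls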
Note:
The baseUrl in these examples is not the same as the BASE_URL variable passed into
cURL commands. The baseUrl for the API itself is the hostname and tenant, plus the
service implementation path (for example, RetailAppsPlatformServices). The
sample scripts provided in the appendix show the full path used by the API calls.
Ping Returns the status of the service, and provides an external health-
check.
Method GET
Endpoint {baseUrl}/services/private/FTSWrapper/ping
Parameters Common headers
Request None
Response { appStatus:200 }
The appStatus code follows HTTP return code standards.
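For example, a health check against the Ping endpoint might look like the following cURL sketch, assuming the example service path from the Note above and an ACCESS_TOKEN variable holding the OAuth access token:
# Call the FTS ping endpoint with the common headers
curl -s "$BASE_URL/$TENANT/RetailAppsPlatformServices/services/private/FTSWrapper/ping" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer $ACCESS_TOKEN"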
List Prefixes Returns a list of the known storage prefixes. These are analogous to
directories, and are restricted to predefined choices per service.
Method GET
Endpoint {baseUrl}/services/private/FTSWrapper/listprefixes
List Files Returns a list of the files within a given storage prefix.
Method GET
Endpoint {baseUrl}/services/private/FTSWrapper/listfiles
Parameters Common headers
Request Query parameters (…/listfiles?{parameterName}) that can be
appended to the URL to filter the request:
prefix – the storage prefix to use
contains – files that contain the specified substring
scanStatus – file status returned by malware/antivirus scan
limit – control the number of results in a page
offset – page number
sort – the sort order key
Response A JSON resultSet containing array of files. For each file, there is
metadata including: name, size, created and modified dates, scan status
and date, scan output message.
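A similar cURL sketch for listing files under a storage prefix, using the same assumed variables and the query parameters described above, might be:
# List files under ris/incoming whose names contain a given substring
curl -s -G "$BASE_URL/$TENANT/RetailAppsPlatformServices/services/private/FTSWrapper/listfiles" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  --data-urlencode "prefix=ris/incoming" \
  --data-urlencode "contains=RI_RMS_DATA"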
Move Files Moves one or more files between storage prefixes, while additionally
allowing the file name to be modified.
Method GET
Endpoint {baseUrl}/services/private/FTSWrapper/movefiles
Parameters Common headers
Request An array of files containing the current and new storage prefixes and
file names, as shown below.
{"listOfFiles": [
{"currentPath":
{ "storagePrefix": "string",
"fileName": "string"},
"newPath": {
"storagePrefix": "string",
"fileName": "string"
}
}
]
}
Delete Files Deletes one or more files from a given storage prefix. The request is a JSON
array containing the storage prefix and file name of each file to be deleted, as shown below.
{"listOfFiles":
[
{
"storagePrefix": "string",
"fileName": "string"
}
]
}
Response A JSON array of each file deletion attempted and the result.
Request Upload PAR Request PAR for uploading one or more files
Method POST
Endpoint {baseUrl}/services/private/FTSWrapper/upload
Parameters Common headers
Request A JSON array of files to be uploaded. One or more pairs of
storagePrefix and filename elements can be specified within the
array.
{ "listOfFiles":
[
{
"storagePrefix": "string",
"fileName": "string"
}
]
}
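As a sketch, requesting an upload PAR with cURL could look like the following, assuming the same variables as the earlier examples. The response contains a pre-authenticated request (PAR) URL; the PAR_URL variable below is hypothetical and must be set from that response before the file itself is uploaded.
# Request an upload PAR for a file destined for the ris/incoming prefix
curl -s -X POST "$BASE_URL/$TENANT/RetailAppsPlatformServices/services/private/FTSWrapper/upload" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -d '{"listOfFiles":[{"storagePrefix":"ris/incoming","fileName":"RI_RMS_DATA.zip"}]}'
# Upload the file to the PAR URL returned in the response
curl -s --upload-file RI_RMS_DATA.zip "$PAR_URL"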
Request Download PAR Request PAR for downloading one or more files. One or more pairs of
storagePrefix and fileName elements can be specified within the array, as shown below.
{ "listOfFiles":
[
{
"storagePrefix": "string",
"fileName": "string"
}
]
}
Upload Files
For RAP input files (excluding direct-to-RPASCE input files) the input files must be uploaded to
the object storage with a prefix of ris/incoming.
• Prefix: ris/incoming
• File Name: RI_RMS_DATA.zip
• Command:
sh file_transfer.sh uploadfiles ris/incoming RI_RMS_DATA.zip
Download Files
For RAP output files (excluding direct-from-RPASCE output files) the files must be downloaded
from the object storage with a prefix of ris/outgoing.
• Prefix: ris/outgoing
• File Name: cis_custseg_exp.csv
• Command:
sh file_transfer.sh downloadfiles ris/outgoing cis_custseg_exp.csv
Download Archives
For RAP files that are automatically archived as part of the batch process, you have the ability
to download these files for a limited number of days before they are erased (based on the file
retention policy in your OCI region). Archive files are added to sub-folders so the steps are
different from a standard download.
1. POM job logs will show the path to the archive.
For example: incoming-10072022-163233/RAP_DATA.zip
2. Create the directory in your local server matching the archive name.
For example: mkdir incoming-10072022-163233
3. Use the downloadfiles command with the ris/archive prefix and the file path as the sub-
folder and filename together.
For example: sh file_transfer.sh downloadfiles ris/archive
incoming-10072022-163233/RAP_DATA.zip
BI Publisher
The BI Publisher component of Oracle Analytics is available for reporting and export file
generation where the data needs to be written into a specific template or layout and must be
delivered to other sources such as email or Object Storage (OS). SFTP is no longer available,
so if you have previously used SFTP as the report delivery method, you will now use OS. For example, a bursting query that delivers report output to Object Storage looks like the following:
select 0 KEY,
'<template_name>' TEMPLATE,
'RTF' TEMPLATE_FORMAT,
'en-US' LOCALE,
'PDF' OUTPUT_FORMAT,
'OBJECTSTORAGE' DEL_CHANNEL,
'<output_name>' OUTPUT_NAME,
'OS' PARAMETER1,
'<prefix>' PARAMETER2,
'<file_name>' PARAMETER3
FROM "Retail Insights As-Is"
• Server – The server is preconfigured as OS for any tenant. OS must always be selected.
• Prefix – The prefix under the object storage bucket where the file will be uploaded.
• File Name – The file name with which the scheduled report output will be delivered to the
object storage.
The way you set up these inputs is the same as when using the bursting option. For additional
details on how to set up reports delivery through object storage, refer to Set Output Options in
Oracle Cloud Visualizing Data and Building Reports in Oracle Analytics Cloud.
{
"disUrl": "https://rgbu.gbua.ocs.oc-test.com",
"name": "Productlist",
"dateExpires": "2024-01-31",
"objectName": "ris/incoming/Productlist.pdf"
}
When making the request, you must also add a Content-Type header with a value of
application/json or you will not receive a response. If the requested file is found, then you
will get a response like the following:
{
"status": "Success",
"id": "B0ji5Vir/nDfUQVaFHNhYVjgHoRPO8ZnFjTUPcQIyXcEtY8HUoqeJNsdyFzqreqv:ris/
incoming/Productlist.pdf",
"url": "https://objectstorage.us-phoenix-1.oraclecloud.com/p/
X6BRpziLKQ3xoRcSToi68L31NHxm2rhTc2lbrTpvmWm9vIpCVWNiC63tYTCWgxYW/n/
oraclegbudevcorp/b/
cds_gbua_cndevcorp_rgbu_rgbu_tpn24ouxsgftuy5qh4te_RIRSP_STG5009_1/o/ris/
incoming/Productlist.pdf"
}
The URL can be passed into any method you would normally use to download a file, such as a
wget command.
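For example, a minimal command-line sketch for retrieving the file, substituting the url value returned in the response above, is:
# Download the report output using the pre-authenticated URL from the response
wget -O Productlist.pdf "<url value from the response>"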
Application Express (APEX)
You can access APEX from a task menu link in the AI Foundation Cloud Services
interface, or by navigating directly to the ORDS endpoint like below:
https://{base URL}/{solution-customer-env}/ords
For example:
https://ocacs.ocs.oc-test.com/nrfy45ka2su3imnq6s/ords/
For first time setup of the administrator user account for APEX, refer to the RAP Administration
Guide.
After you are logged into APEX, click the SQL Workshop icon or access the SQL Workshop
menu to enter the SQL Commands screen. This screen is where you will enter SQL to query
the RAP database objects in the RI and AI Foundation schemas.
To see the list of available objects to query, access the Object Browser. All RI and AI
Foundation objects are added as synonyms, so select that menu option from the panel on the
left.
If you do not see any RI tables in the synonym list, then you may need to run a set of ad hoc
jobs in POM to expose them. Run the following two programs from the AI Foundation
schedule’s standalone job list:
• RADM_GRANT_ACCESS_TO_IW_ADHOC_JOB
• RABE_GRANT_ACCESS_TO_IW_ADHOC_JOB
Once the jobs execute successfully, start a new session of APEX and navigate back to the list
of Synonyms in the Object Browser screen to confirm the table list is updated.
A staging table is generally any table that receives data from an outside source, which can
include flat files, direct integration between two Oracle solutions, or web services. For example,
W_PRODUCT_DS is the staging table in the RADM01 schema that receives product information.
Internal tables include the target tables where staged data will be moved to for the applications
to read from during normal operations, temporary (TMP) tables, and configuration tables. There
are a few exceptions made for application objects that must not be altered, but all tables that
contain data you should need to interact with will be accessible in non-prod environments.
W_PRODUCT_D is an example of a target table for the W_PRODUCT_DS staging table.
Postman
For automated API calls, Postman is the preferred way to interact with POM in order to call ad
hoc processes and perform data load activities. The steps below explain how to configure
Postman for first-time use with POM.
Note:
POM versions earlier than v21 allow Basic Authentication, while v21+ requires OAuth
2.0 as detailed below.
1. As a pre-requisite, retrieve the Client ID and Client Secret from Retail Home’s ‘Create
IDCS OAuth 2.0 Client’. Refer to the Retail Home Administration Guide for complete
details on retrieving the Client ID and Client Secret info if you have not done it before.
2. In the Postman application, click New->HTTP Request.
3. Set Request Type as POST and set the Request URL (Example for RI schedule):
https://<Region-LB>/<POM-Subnamespace>/ProcessServices/services/private/
executionEngine/schedules/RI/execution
For example:
https://home.retail.us-region-1.ocs.oc-test.com/rgbu-common-rap-prod-pom/
ProcessServices/services/private/executionEngine/schedules/RI/execution
4. Perform the following steps before sending the POST request to retrieve your
authentication token:
a. Authorization Tab:
Type: OAuth 2.0
Add authorization data to: Request Headers
5. The Access Token is displayed in the MANAGE ACCESS TOKENS pop-up window. Click
the Use Token button.
8. Click the Body tab to enter the JSON for running POM batches.
9. Select the raw radio button and JSON file type.
10. Enter the JSON body and click the Send button in the Body tab.
b. If the status is not 200 but is instead 401 (authorization error), 500 (incorrect body
content), 404 (server down), or similar, then perform error resolution as needed.
Example request and response for the HIST_ZIP_FILE_LOAD_ADHOC process:
{
"cycleName" : "Nightly",
"flowName" : "Nightly",
"requestType" : "POM Scheduler"
}
{
"cycleName" : "Nightly",
"flowName" : "Nightly",
"requestType" : "POM Scheduler",
"processName" : "LOAD_AGGREGATION_BCOST_IT_DY_A_PROCESS"
}
{
"cycleName" : "Adhoc",
"flowName" : "Adhoc",
"requestType" : "POM Scheduler",
"processName" : "HIST_INVRTV_LOAD_ADHOC",
"requestParameters" : "jobParams.HIST_LOAD_INVRTV_DAY_JOB=2020-09-09
7-28
Chapter 7
Postman
2021-09-09"
}
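Outside of Postman, the same POM execution call can be issued from the command line. The following cURL sketch assumes a valid OAuth access token in an ACCESS_TOKEN variable and the schedule URL format shown in step 3; the process name is one of the ad hoc processes described above.
# Invoke an ad hoc process on the RI schedule through the POM execution engine
curl -s -X POST "https://<Region-LB>/<POM-Subnamespace>/ProcessServices/services/private/executionEngine/schedules/RI/execution" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -d '{"cycleName":"Adhoc","flowName":"Adhoc","requestType":"POM Scheduler","processName":"HIST_ZIP_FILE_LOAD_ADHOC"}'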
8
Data File Generation
When you are implementing the Retail Analytics and Planning without using an Oracle
merchandising system for foundation data, or you are providing history data from non-Oracle
sources, you will need to create several data files following the platform specifications. This
chapter will provide guidance on the data file formats, structures, business rules, and other
considerations that must be accounted for when generating the data.
Important:
Do not begin data file creation for RAP until you have reviewed this chapter and have
an understanding of the key data structures used throughout the platform.
For complete column-level definitions of the interfaces, including datatype and length
requirements, refer to the RI and AI Foundation Interfaces Guide in My Oracle Support. From
the same document, you may also download Data Samples for all of the files covered in this
chapter.
File Types and Data Format
Context Files
Before creating and processing a data file on the platform, choose the fields that will be
populated and instruct the platform to only look for data in those columns. This configuration is
handled through the use of Context (CTX) Files that are uploaded alongside each base data
file. For example, the context file for PRODUCT.csv will be PRODUCT.csv.ctx (appending
the .ctx file descriptor to the end of the base filename).
Within each context file you must provide a single column containing:
• One or more parameters defining the behavior of the file load and the format of the file.
• The list of fields contained in the source file, in the order in which they appear in the file
specification:
– #TABLE#<Staging Table Name>#
– #DELIMITER#<Input Value>#
– #DATEFORMAT#<Input Value>#
– #REJECTLIMIT#<Input Value>#
– #RECORDDELIMITER#<Input Value>#
– #IGNOREBLANKLINES#<Input Value>#
– #SKIPHEADERS#<Input Value>#
– #TRIMSPACES#<Input Value>#
– #TRUNCATECOL#<Input Value>#
– #COLUMNLIST#<Input Value>#
<COL1>
<COL2>
<COL3>
The following is an example context file for the CALENDAR.csv data file:
File Contents:
#TABLE#W_MCAL_PERIOD_DTS#
#DELIMITER#,#
#DATEFORMAT#YYYY-MM-DD#
#REJECTLIMIT#1#
#RECORDDELIMITER#\n#
#IGNOREBLANKLINES#false#
#SKIPHEADERS#1#
#TRIMSPACES#rtrim#
#TRUNCATECOL#false#
#COLUMNLIST#
MCAL_CAL_ID
MCAL_PERIOD_TYPE
MCAL_PERIOD_NAME
MCAL_PERIOD
MCAL_PERIOD_ST_DT
MCAL_PERIOD_END_DT
MCAL_QTR
MCAL_YEAR
MCAL_QTR_START_DT
MCAL_QTR_END_DT
MCAL_YEAR_START_DT
MCAL_YEAR_END_DT
The file must be UNIX formatted and have an end-of-line character on every line, including the
last one. As shown above, the final EOL may appear as a new line in a text editor. The
#TABLE# field is required: it indicates the name of the database staging table updated by the
file. The COLUMNLIST tag is also required: it determines the columns the customer uses in
their .dat or .csv file. The column list must match the order of fields in the file from left to right,
which must also align with the published file specifications. Include the list of columns after the
#COLUMNLIST# tag. Most of the other parameters are optional and their rows can be excluded
from the context file; however, excluding them sets those values to system defaults that may
not align with your format.
Note:
Both RI and AI Foundation can use these context files to determine the format of
incoming data.
The server maintains a copy of all the context files used, so you do not need to send a context
file every time. If no context files are found, the platform uses the last known
configuration.
For additional format options, the available values are taken from the DBMS_CLOUD package
options in ADW.
If you want to retrieve the latest copy of the context files, the RI_ZIP_UPLOAD_CTX_JOB job in
process RI_ZIP_UPLOAD_CTX_ADHOC can be run from the AIF DATA standalone schedule in
POM. This job will extract all the context files from the custom_ext_table_config directory,
package them in a zip file, and upload that file to Object Storage. The zip file is named
RAP_CTX.zip, and will use ris/outgoing as the prefix for File Transfer Services (FTS) to
access it.
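For example, using the file_transfer.sh sample script referenced elsewhere in this document, the generated archive could then be retrieved with a command like the following (assuming the default file name):
sh file_transfer.sh downloadfiles ris/outgoing RAP_CTX.zip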
In addition to being able to obtain copies of the files, there is also a database table named
C_DIS_ADW_EXT_TABLE_CONFIG that holds the context file information that was last uploaded to
the database. Except for the COLUMN_LIST and FORMAT_OPTIONS columns, the data in the other
columns on the table is editable using the Control & Tactical Center screen in AI Foundation,
so you can provide override values. The table does not have any initial data; it is populated
when a CTX file is processed by RI_UPDATE_TENANT_JOB. When a CTX file is provided and data
is also present in the table, the priority is to use the CTX file. If a CTX file was not provided in
the current batch run, then the data on this table will be used. After the batch run, this table will
reflect the most recently used CTX file configurations.
A change in format data in the table will trigger an update to ADW only if the values are
different from what was last sent. This is done by comparing the entries in the FORMAT_OPTIONS
column. Modifying the COLUMN_LIST in this table will not trigger a request to ADW to update the
options. COLUMN_LIST is not editable through the Control & Tactical Center screen, as it only
serves as a view to show the contents of the last payload sent to ADW. Sending the updates
through a CTX file is the preferred method for modifying the column list. If no CTX files are
provided, the RI_UPDATE_TENANT_JOB will end immediately instead of pushing the same
configurations to ADW again. If you notice slow performance on this job then you can stop
providing CTX files when they are not changing, and the job will finish within 10-20 seconds.
There is an OVERRIDE_REJECTLIMIT_TO_DT column on the table that will determine whether a
REJECTLIMIT value other than zero is used. If this date column is null or is already past the
current date, then the REJECTLIMIT will be reset to 0 and will trigger an update to ADW. The
REJECTLIMIT value provided in the table will be used until the date specified in this column.
Retail Insights
Retail Insights has a large number of legacy interfaces that do not follow the shared platform
data formats. These interfaces are populated with files named after their target database table
with a file extension of .dat, such as W_PRODUCT_DS.dat. All files ending with a .dat extension
are pipe-delimited files by default (using the | symbol as the column separator) but can be
changed using CTX file options. These files also have a Unix line-ending character by default,
although the line-ending character can be configured to be a different value, if needed. These
files may be created by a legacy Merchandising (RMS) extract process or may be produced
through existing integrations to an older version of RI or AI Foundation.
This file format is used when integrating with legacy solutions such as the Retail
Merchandising System (RMS) through the Retail Data Extractor (RDE) on v19 or earlier
versions.
Example data from the file W_RTL_PLAN1_PROD1_LC1_T1_FS.dat:
70|-1|13|-1|2019-05-04;00:00:00|RETAIL|0|1118.82|1|70~13~2019-05-04;00:00:00~0
70|-1|13|-1|2019-05-11;00:00:00|RETAIL|0|476.09|1|70~13~2019-05-11;00:00:00~0
70|-1|13|-1|2019-05-18;00:00:00|RETAIL|0|296.62|1|70~13~2019-05-18;00:00:00~0
Planning Platform
Planning solutions using PDS (Planning Data Schema), such as Merchandise Financial
Planning, have two main types of files:
• Hierarchy/Dimension Files – Foundation data for the hierarchies/dimensions.
• Measure/Fact Files – Factual data specific to loadable metrics/measures.
When loading directly to Planning applications, both types of files should only be in CSV format
and they should contain headers. Headers contain the details of the dimension names for
Hierarchy/Dimension Files and the fact names for Measure/Fact Files.
Hierarchy/Dimension files use the naming convention <Hierarchy Name>.hdr.csv.dat, while
measure files can use any meaningful fact-grouping name with an allowed extension such
as .ovr, .rpl, or .inc.
Dimension Files
A dimension is a collection of descriptive elements, attributes, or hierarchical structures that
provide context to your business data. Dimensions tell the platform what your business looks
like and how it operates. They describe the factual data (such as sales transactions) and
provide means for aggregation and summarization throughout the platform. Dimensions follow
a strict set of business rules and formatting requirements that must be followed when
generating the files.
There are certain common rules that apply across all of the dimension files and must be
followed without exception. Failure to adhere to these rules may result in failed data loads or
incorrectly structured datasets in the platform.
• All dimension files must be provided as full snapshots of the source data at all times,
unless you change the configuration of a specific dimension to be IS_INCREMENTAL=Y
where incremental loads are supported. Incremental dimension loading should only be
done once nightly/weekly batch processing has started. Initial/history dimension loads
should always be full snapshots.
• Hierarchy levels must follow a strict tree structure, where each parent has a 1-to-N
relationship with the children elements below them. You cannot have the same child level
identifier repeat across more than one parent level, with the exception of Class/Subclass
levels (which may repeat on the ID columns but must be unique on the UID columns). For
example, Department 12 can only exist under Division 1, it cannot also exist under Division
2.
• Hierarchy files (product, organization, calendar) must have a value in all non-null fields for
all rows and must fill in all the required hierarchy levels without exception. For example,
even if your non-Oracle product data only has 4 hierarchy levels, you must provide the
complete 7-level product hierarchy to the platform. Fill in the upper levels of the hierarchy
with values to make up for the differences, such as having the division and group levels
both be a single, hard-coded value.
• Any time you are providing a key identifier of an entity (such as a supplier ID, channel ID,
brand ID, and so on) you should fill in the values on all rows of the data file, using a
dummy value for rows that don’t have that entity. For example, for items that don’t have a
brand, you can assign them to a generic “No Brand” value to support filtering and reporting
on these records throughout the platform. You may find it easier to identify the “No Brand”
group of products when working with CDTs in the AI Foundation Cloud Services or when
creating dashboards in RI, compared to leaving the values empty in the file.
• Any hierarchy-level ID (department ID, region ID, and so on) or dimensional ID value
(brand name, supplier ID, channel ID, store format ID, and so on) intended for Planning
applications must not have spaces or special characters on any field, or it will be rejected
by the PDS load. ID columns to be used in planning should use a combination of numbers,
letters, and underscores only.
• Any change to hierarchy levels after the first dimension is loaded will be treated as a
reclassification and will have certain internal processes and data changes triggered as a
result. If possible, avoid loading hierarchy changes to levels above Item/Location during
the historical load process. If you need to load new hierarchies during the history loads,
make sure to advance the business date in the data warehouse using the specified jobs
and date parameters, do NOT load altered hierarchies on top of the same business date
as previous loads.
• All fields designated as flags (having FLG or FLAG in the field name) must have a Y or N
value. Filters and analytics within the system will generally assume Y/N is used and not
function properly if other values (like 0/1) are provided.
• Retail Insights requires that all hierarchy identifiers above item/location level MUST be
numerical. The reporting layer is designed around having numerical identifiers in
hierarchies and no data will show in reports if that is not followed. If you are not
implementing Retail Insights, then alphanumeric hierarchy IDs could be used, though it is
not preferred.
Product File
The product file is named PRODUCT.csv, and it contains most of the identifying information
about the merchandise you sell and the services you provide. The file structure follows certain
rules based on the Retail Merchandising Foundation Cloud Services (RMFCS) data model, as
that is the paradigm for retail foundation data that we are following across all RAP foundation
files.
The columns below are the minimum required data elements, but the file supports many more
optional fields, as listed in the Interfaces Guide. Optional fields tend to be used as reporting
attributes in RI and are nullable descriptive fields. Optional fields designated for use in an AI
Foundation or Planning module are also nullable, but should generally be populated
with non-null values to provide more complete data to those modules.
The product hierarchy fields use generic level names to support non-traditional hierarchy
structures (for example, your first hierarchy level may not be called Subclass, but you are still
loading it into the same position in the file). Other file columns such as LVL1 to LVL3 exist in the
interface but are not yet used in any module of the platform.
Note:
Multi-level items are not always required and depend on your use-cases. For
example, the lowest level (ITEM_LEVEL=3) for sub-transaction items is only used in
Retail Insights for reporting on UPC or barcode level attribute values. Most
implementations will only have ITEM_LEVEL=1 and ITEM_LEVEL=2 records. If you are a
non-fashion retailer you may only have a single item level (for SKUs) and the other
levels could be ignored. The reason for having different records for each item level is
to allow for different attributes at each level, which can be very important in Retail
Insights analytics. You may also need to provide multiple item levels for optimizing or
planning data at a Style or Style/Color level in the non-RI modules. When providing
multiple item level records, note that the item IDs must be unique across all levels
and records.
Example data for the PRODUCT.csv file columns above, including all 3 supported item levels
(style, SKU, and UPC):
ITEM,ITEM_PARENT,ITEM_GRANDPARENT,ITEM_LEVEL,TRAN_LEVEL,PACK_FLG,DIFF_AGGREGAT
E,LVL4_PRODCAT_ID,LVL4_PRODCAT_UID,LVL5_PRODCAT_ID,LVL5_PRODCAT_UID,LVL6_PRODC
AT_ID,LVL7_PRODCAT_ID,LVL8_PRODCAT_ID,TOP_LVL_PRODCAT_ID,ITEM_DESC,LVL4_PRODCA
T_DESC,LVL5_PRODCAT_DESC,LVL6_PRODCAT_DESC,LVL7_PRODCAT_DESC,LVL8_PRODCAT_DESC
,TOP_LVL_PRODCAT_DESC,INVENTORIED_FLG,SELLABLE_FLG
190085210200,-1,-1,1,2,N,,8,9001,3,910,3,2,1,1,2IN1 SHORTS,Shorts,Active
Apparel,Women's Activewear,Activewear,Apparel,Retailer Ltd,Y,Y
190085205725,190085210200,-1,2,2,N,BLK,8,9001,3,910,3,2,1,1,2IN1
SHORTS:BLACK:LARGE,Shorts,Active Apparel,Women's
Activewear,Activewear,Apparel,Retailer Ltd,Y,Y
190085205923,190085210200,-1,2,2,N,DG,8,9001,3,910,3,2,1,1,2IN1 SHORTS:DARK
GREY:LARGE,Shorts,Active Apparel,Women's
Activewear,Activewear,Apparel,Retailer Ltd,Y,Y
1190085205725,190085205725,190085210200,3,2,N,,8,9001,3,910,3,2,1,1,2IN1
SHORTS:BLACK:LARGE:BC,Shorts,Active Apparel,Women's
Activewear,Activewear,Apparel,Retailer Ltd,Y,Y
1190085205923,190085205923,190085210200,3,2,N,,8,9001,3,910,3,2,1,1,2IN1
SHORTS:DARK GREY:LARGE:BC,Shorts,Active Apparel,Women's
Activewear,Activewear,Apparel,Retailer Ltd,Y,Y
This example and the field descriptions covered in this section all follow the standard
Merchandising Foundation (RMFCS) structure for product data, and it is strongly
recommended that you use this format for RAP. If you are a legacy Planning customer or have
specific needs for extended hierarchies, the preferred approach is to convert your non-RMS
hierarchy structure to a standard RMS-like foundation format. This conversion involves:
• Provide only the SKUs and Styles as separate item records (dropping the style/color level
from the hierarchy). The Style will be the ITEM_PARENT value on the SKU records and
ITEM_GRANDPARENT will always be -1.
• Populate the field DIFF_AGGREGATE at the SKU level with the differentiator previously used
in the style/color level. For example, a legacy style/color item ID of S1000358:BLUE will
instead create S1000358 as the ITEM for the style-level record and the ITEM_PARENT in the
SKU record. The value BLUE is written in the DIFF_AGGREGATE field in the SKU-level record
(DIFF_AGGREGATE can be set to -1 or left null on style level records).
• When constructing the extended hierarchies in Planning and AI Foundation, the styles and
diff aggregate values are concatenated together to dynamically create the style/color level
of the hierarchy where needed.
Following this approach for your product hierarchy ensures you are aligned with the majority of
Oracle Retail applications and will be able to take up additional retail applications in the future
without restructuring your product data again.
For other fields not shown here, they are optional from a data load perspective but may be
used by one or more applications on the platform, so it is best to consider all fields on the
interface and populate as much data as you can. For example, supplier information is a
requirement for Inventory Planning Optimization, and brand information is often used in
Clustering or Demand Transference. Also note that some fields come in pairs and must be
provided together or not at all. This includes:
• Brand name and description
• Supplier ID and description
Description fields can be set to the same value as the identifier if no other value is known or
used, but you must include both fields with non-null values when you want to provide the data.
Product Alternates
You may also use the file PRODUCT_ALT.csv to load additional attributes and hierarchy levels
specifically for use in Planning applications. The file data is always at item level and may have
up to 30 flexible fields for data. These columns exist in the PRODUCT.csv file if you are a non-
RMFCS customer so this separate file would be redundant. If you are using RMFCS, then this
file provides a way to send extra data to Planning that does not exist in RMFCS.
When using flex fields as alternate hierarchy levels, there are some rules you will need to
follow:
• All hierarchies added this way must have an ID and Description pair as two separate
columns
• The ID column for an alternate hierarchy must ONLY contain numbers; no other characters
are permitted
Numerical ID fields are required for integration purposes. When a plan is generated in MFP or
AP using an alternate hierarchy, and you wish to send that plan data to AIF for in-season
forecasting, the alternate hierarchy ID used must be a number for the integration to work. If
your alternate hierarchy level will not be used as the base intersection of a plan, then it does
not need to be limited to numerical IDs (although it is still recommended to do so). This
requirement is the same for all hierarchy levels when Retail Insights is used, as RI can only
accept numerical hierarchy IDs for all levels (for both base levels and alternates).
For example, you might populate FLEX1_CHAR_VALUE with numerical IDs for an alternate level
named “Subsegment”. You will put the descriptions into FLEX2_CHAR_VALUE. These values can
be mapped into PDS by altering the interface.cfg file, and the values may be used to define
plans or targets in MFP. When you export your plans for AIF, they are written into integration
tables such as MFP_PLAN1_EXP using the numerical identifiers from FLEX1_CHAR_VALUE as the
plan level. This is further integrated to RI tables like W_RTL_PLAN1_PROD1_LC1_T1_FS (columns
ORG_DH_NUM and PROD_DH_NUM for location/product IDs respectively). This is where numerical
IDs become required for these interfaces to function; they will not load the data if the IDs are
non-numerical. Once loaded into W_RTL_PLAN1_PROD1_LC1_T1_F and similar tables, AIF reads
the plan data to feed in-season forecast generation.
Loading the data into data warehouse tables at a flex field level requires additional
configuration. Refer to the RI Implementation Guide for details. AIF also requires additional
setup to use alternate hierarchies. Refer to the section “Building Alternate Hierarchy in AIF” in
the AIF Implementation Guide for details.
Organization File
The organization file will contain most of the identifying information about the locations where
you sell or store merchandise, including physical locations (such as a brick & mortar store) and
virtual locations (such as a web store or virtual warehouse entity). The file structure follows
certain rules based on the Retail Merchandising Foundation Cloud Services (RMFCS) data
model, as that is the paradigm for retail foundation data that we are following across all RAP
foundation files. The columns below are the minimum required data elements, but the file
supports many more optional fields, as listed in the Interfaces Guide.
The organization hierarchy fields use generic level names to support non-traditional hierarchy
levels (for example, your first hierarchy level may not be called District, but you are still loading
it into the same position in the file which is used for Districts). Other levels, such as 1 to 9, have
columns in the interface but are not yet used in any module of the platform.
Warehouses get special handling both in the input interface load and throughout the RAP
applications. Warehouses are not considered a part of the organization hierarchy structure.
While you are required to put some value in the hierarchy level fields for warehouses (because
the columns are not nullable) those values are not currently used. Instead, the values will be
discarded and the warehouses are loaded with no parent levels in the data warehouse tables.
You should provide a unique reserved value like 1 or 9999 on all hierarchy level numbers
between location and company for warehouses, just to ensure the data is loaded without
violating any multi-parentage rules. When exporting the warehouse locations to Planning
applications, each warehouse ID is assigned its own name and number for each parent level,
prefixed with WH to make the level IDs distinct from any store hierarchy level. The warehouses
must then be mapped to channels from the MFP user interface before you can use their data.
Example data for the ORGANIZATION.csv file columns above as well as some optional fields
available on the interface:
ORG_NUM,ORG_TYPE_CODE,CURR_CODE,STATE_PROV_NAME,COUNTRY_REGION_NAME,ORG_HIER10
_NUM,ORG_HIER11_NUM,ORG_HIER12_NUM,ORG_HIER13_NUM,ORG_TOP_NUM,ORG_DESC,ORG_SEC
ONDARY_DESC,ORG_HIER10_DESC,ORG_HIER11_DESC,ORG_HIER12_DESC,ORG_HIER13_DESC,OR
G_TOP_DESC,CHANNEL_ID,CHANNEL_NAME,PHYS_WH_ID,STOCKHOLDING_FLG,STORE_FORMAT_DE
SC,STORE_FORMAT_ID,STORE_TYPE,TRANSFER_ZONE_ID,TRANSFER_ZONE_DESC,VIRTUAL_WH_F
LG,STORE_CLASS_TYPE,STORE_CLASS_DESC,WH_DELIVERY_POLICY,WH_REPL_IND,DUNS_NUMBE
R,STORE_REMODEL_DT,STORE_CLOSE_DT,INBOUND_HANDLING_DAYS,FLEX1_CHAR_VALUE,FLEX2
_CHAR_VALUE,FLEX3_CHAR_VALUE,FLEX4_CHAR_VALUE,FLEX5_CHAR_VALUE,FLEX6_CHAR_VALU
E,FLEX7_CHAR_VALUE,FLEX8_CHAR_VALUE,FLEX9_CHAR_VALUE,FLEX10_CHAR_VALUE
1000,S,USD,North Carolina,United
States,1070,170,1,1,1,Charlotte,Charlotte,North Carolina,Mid-Atlantic,Brick &
Mortar,US,Retailer Ltd,1,North America,,Y,Store,1,C,101,Zone
101,N,1,A,,,,,,,WH-1,Warehouse - US,1,Store Pick Up / Take
With,3,Comp,6,Mixed Humid,1,Very Large
1001,S,USD,Georgia,United States,1023,400,1,1,1,Atlanta,Atlanta,Georgia,South
Atlantic,Brick & Mortar,US,Retailer Ltd,1,North America,,Y,Kiosk,2,C,101,Zone
101,N,6,F,,,,,,,WH-1,Warehouse - US,2,Deliver/Install at
Customer ,3,Comp,7,Hot Humid,3,Medium
1002,S,USD,Texas,United States,1104,230,1,1,1,Dallas,Dallas,Texas,Gulf
States,Brick & Mortar,US,Retailer Ltd,1,North America,,Y,Store,1,C,101,Zone
101,N,6,F,,,,,,,WH-1,Warehouse - US,3,Home Delivery,3,Comp,4,Hot Dry,3,Medium
It is important that your organization hierarchy follow the standard rules laid out at the
beginning of this chapter. All IDs must be unique (within their level) and IDs can never be re-
used under multiple parents. All IDs must be numbers if you are using Retail Insights. The
entire 6-level structure must be filled out, even if your source system doesn’t have that many
levels.
Note:
You may duplicate a higher level down to lower levels if you need to fill it out to meet
the data requirements.
Also note that some optional fields come in pairs and must be provided together or not at all.
This includes:
• Banner ID and description
• Channel ID and description
• Store format ID and description
Description fields can be set to the same value as the identifier if no other value is known or
used, but you must include both fields with non-null values when you provide the data.
Organization Alternates
You may also use the file ORGANIZATION_ALT.csv to load additional attributes and hierarchy
levels specifically for use in Planning applications. The file data is always at location level and
may have up to 30 flexible fields for data. These columns exist on the ORGANIZATION.csv file if
you are a non-RMFCS customer, so this separate file would be redundant. If you are using
RMFCS, then this file provides a way to send extra data to Planning that does not exist in
RMFCS.
When using flex fields as alternate hierarchy levels, there are some rules you will need to
follow:
• All hierarchies added this way must have an ID and Description pair as two separate
columns
• The ID column for an alternate hierarchy must ONLY contain numbers, no other characters
are permitted
Numerical ID fields are required for integration purposes. When a plan is generated in MFP or
AP using an alternate hierarchy, and you wish to send that plan data to AIF for in-season
forecasting, the alternate hierarchy ID used must be a number for the integration to work. If
your alternate hierarchy level will not be used as the base intersection of a plan, then it does
not need to be limited to numerical IDs (although it is still recommended to do so). This
requirement is the same for all hierarchy levels when Retail Insights is used, as RI can only
accept numerical hierarchy IDs for all levels (both base levels and alternates).
For example, you might populate FLEX1_CHAR_VALUE with numerical IDs for an alternate level
named “Subsegment”. You will put the descriptions into FLEX2_CHAR_VALUE. These values can
be mapped into PDS by altering the interface.cfg file, and the values can be used to define
plans or targets in MFP. When you export your plans for AIF, they are written into integration
tables such as MFP_PLAN1_EXP using the numerical identifiers from FLEX1_CHAR_VALUE as the
plan level. This is further integrated to RI tables like W_RTL_PLAN1_PROD1_LC1_T1_FS (columns
ORG_DH_NUM and PROD_DH_NUM for location/product IDs respectively). This is where numerical
IDs become required for these interfaces to function; they will not load the data if the IDs are
non-numerical. Once loaded into W_RTL_PLAN1_PROD1_LC1_T1_F and similar tables, AIF reads
the plan data to feed in-season forecast generation.
Loading the data into data warehouse tables at a flex field level requires additional
configuration. Refer to the RI Implementation Guide for details. AIF also requires additional
setup to use alternate hierarchies. Refer to the section “Building Alternate Hierarchy in AIF” in
the AIF Implementation Guide for details.
Calendar File
The calendar file contains your primary business or fiscal calendar, defined at the fiscal-period
level of detail. The most common fiscal calendar used is a 4-5-4 National Retail Federation
(NRF) calendar or a variation of it with different year-ending dates. This calendar defines the
financial, analytical, or planning periods used by the business. It must contain some form of
fiscal calendar, but if you are a business that operates solely on the Gregorian calendar, a
default calendar file can be generated by an ad hoc batch program to initialize the system.
However, if you are implementing a planning solution, you must use the Fiscal Calendar as
your primary calendar, and only this calendar will be integrated from the data warehouse to
Planning.
The hard-coded calendar ID is used to align with several internal tables that are designed to
support multiple calendars but currently have only one in place, and that calendar uses the
provided value of MCAL_CAL_ID above.
The fiscal calendar should have, at a minimum, a 5-year range (2 years in the past, the current
fiscal year, and 2 years forward from that) but is usually much longer so that you do not need
to update the file often. Most implementations should start with a 10-15 year fiscal calendar
length. The calendar should start at least 1 full year before the planned beginning of your
history files and extend at least 1 year beyond your expected business needs in all RAP
modules.
Example data for the CALENDAR.csv file columns above:
MCAL_CAL_ID,MCAL_PERIOD_TYPE,MCAL_PERIOD_NAME,MCAL_PERIOD,MCAL_PERIOD_ST_DT,MC
AL_PERIOD_END_DT,MCAL_QTR,MCAL_YEAR,MCAL_QTR_START_DT,MCAL_QTR_END_DT,MCAL_YEA
R_START_DT,MCAL_YEAR_END_DT
Retail Calendar~41,4,Period01,1,20070204,20070303,1,2007,20070204,20070505,20070204,20080202
Retail Calendar~41,5,Period02,2,20070304,20070407,1,2007,20070204,20070505,20070204,20080202
Retail Calendar~41,4,Period03,3,20070408,20070505,1,2007,20070204,20070505,20070204,20080202
Retail Calendar~41,4,Period04,4,20070506,20070602,2,2007,20070506,20070804,20070204,20080202
Scenario 1 - No Conversion
For this use-case, all data is in the desired currency before sending it to Oracle. You do not
want the platform to convert your data from source currency to primary currency. All fact
records must have LOC_CURR_CODE = DOC_CURR_CODE. For example, set both values to USD for
sales in the U.S. and both values to CAD for sales in Canada that you pre-converted.
EXCH_RATE.csv data is not required or used for records having the same currency code on both
columns.
Exchange rates should be provided using the standard international rates (for example USD >
CAD may be 1.38) but the fact load will perform lookups in reverse. Fact conversions are
applied as a division process. For example, “transaction amount / exchange rate” is the
formula to convert from document currency to primary currency; so when converting from CAD
> USD the system will look up the value for USD > CAD and divide by that number to get the
final value.
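For example, assuming a USD > CAD rate of 1.38, a transaction amount of 138.00 CAD is converted to the primary currency as 138.00 / 1.38 = 100.00 USD. Example data for the EXCH_RATE.csv file columns: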
START_DT,END_DT,EXCHANGE_RATE,FROM_CURRENCY_CODE,TO_CURRENCY_CODE
20180514,21000101,0.8640055,CAD,USD
20180514,21000101,0.1233959,CNY,USD
The exchange rates data must also satisfy the following criteria if you are loading data for use
in Retail Insights reporting:
1. Rates must be provided in both directions for every combination of currencies that can
occur in your dataset (for example, USD > CAD and CAD > USD).
2. Dates must provide complete coverage of your entire timeframe in the dataset, both for
historical and current data. The current effective records for all rates can use 2100-01-01
as the end date. Dates cannot overlap; only a single rate can be effective per day.
3. Rates should not change more often than absolutely necessary based on the business
requirements. If you are implementing RI with positional data, a rate change triggers a
complete recalculation of the stock on hand cost/retail amounts for the entire business
across all pre-calculated aggregate tables. When RI is not used for financial reporting you
might only change the rates once each fiscal year, to maintain a single constant currency
for analytical purposes.
Attributes Files
Product attributes are provided on two files: one file for the attribute-to-product mappings and
another for attribute descriptions and codes. These files should be provided together to fully
describe all the attributes being loaded into the system. The attribute descriptors file must be a
full snapshot of all attribute types and values at all times. The product attribute mapping file
should start as a full snapshot but can move to incremental (delta) load methods once nightly
batches begin, if you can extract the information as deltas only.
Product attributes are a major component of the RI and AI Foundation modules and drive
many analytical processes but are not required for some planning modules like MFP.
ATTR_VALUE_ID,ATTR_VALUE_DESC,ATTR_GROUP_ID,ATTR_GROUP_DESC,ATTR_TYPE_CODE
13,No_Sugar_IN13,45008,UDA_ING_2018.01.16.01.00,FF
14,Zero_Carbs_IN14,45008,UDA_ING_2018.01.16.01.00,FF
3,Distressed,80008,Wash,LV
STEEL,Steel,METAL,Metals,DIFF
CHOC,Chocolate,FLAVOR,Flavor,FLAVOR
GRAY_43214,Gray,COLOR,Color,COLOR
32X32_9957,32X32,SIZE,Size,SIZE
ITEM,ATTR_ID,ATTR_GRP_TYPE,ATTR_GRP_ID,DIFF_GRP_ID,DIFF_GRP_DESC
91203747,13,ITEMUDA,45008,,
91203747,3,ITEMUDA,80008,,
190496585706,STEEL,ITEMDIFF,METAL,,
86323133004,GRAY_43214,COLOR,COLOR,,
190085302141,CHOC,PRODUCT_ATTRIBUTES,FLAVOR,,
345873291,32X32_9957,PRODUCT_ATTRIBUTES,SIZE,S13,Pant Sizes
Fact Files
Nearly all fact files share a common intersection of an item, location, and date as specified
above. Such files are expected to come into the platform on a nightly basis and contain that
day’s transactions or business activity.
Most fact data also supports having currency amounts in their source currency, which is then
automatically converted to your primary operating currency during the load process. There are
several currency code and exchange rate columns on such interfaces, which should be
populated if you need this functionality. The most important ones are shown in the list above,
and other optional columns for global currencies can be found in the Interfaces Guide. When
you provide these fields, they must all be provided on every row of data; you cannot leave out
any of the values or the data will not load properly.
Here are sample records for commonly used historical load files having a small set of fields
populated. These fields are sufficient to see results in RI reporting and move the data to AI
Foundation or MFP but may not satisfy all the functional requirements of those applications.
Review the Interfaces Guide for complete details on required/optional columns on these
interfaces.
SALES.csv:
ITEM,ORG_NUM,DAY_DT,MIN_NUM,RTL_TYPE_CODE,SLS_TRX_ID,PROMO_ID,PROMO_COMP_ID,CASHIER_ID,REGISTER_ID,SALES_PERSON_ID,CUSTOMER_NUM,SLS_QTY,SLS_AMT_LCL,SLS_PROFIT_AMT_LCL,RET_QTY,RET_AMT_LCL,RET_PROFIT_AMT_LCL,TRAN_TYPE,LOC_CURR_CODE,DOC_CURR_CODE
1235842,1029,20210228,0,R,202102281029,-1,-1,96,19,65,-1,173,1730,605.5,0,0,0,SALE,USD,USD
1235842,1029,20210307,0,R,202103071029,-1,-1,12,19,55,-1,167,1670,584.5,0,0,0,SALE,USD,USD
1235842,1029,20210314,0,R,202103141029,-1,-1,30,18,20,-1,181,1810,633.5,0,0,0,SALE,USD,USD
INVENTORY.csv:
ITEM,ORG_NUM,DAY_DT,CLEARANCE_FLG,INV_SOH_QTY,INV_SOH_COST_AMT_LCL,INV_SOH_RTL_AMT_LCL,INV_UNIT_RTL_AMT_LCL,INV_AVG_COST_AMT_LCL,INV_UNIT_COST_AMT_LCL,PURCH_TYPE_CODE,DOC_CURR_CODE,LOC_CURR_CODE
72939751,1001,20200208,N,0,0,0,104.63,0,48.52,0,USD,USD
73137693,1001,20200208,N,0,0,0,104.63,0,48.52,0,USD,USD
75539075,1001,20200208,N,0,0,0,101.73,0,47.44,0,USD,USD
PRICE.csv:
ITEM,ORG_NUM,DAY_DT,PRICE_CHANGE_TRAN_TYPE,SELLING_UOM,STANDARD_UNIT_RTL_AMT_LCL,SELLING_UNIT_RTL_AMT_LCL,BASE_COST_AMT_LCL,LOC_CURR_CODE,DOC_CURR_CODE
89833651,1004,20200208,0,EA,93.11,93.11,53.56,USD,USD
90710567,1004,20200208,0,EA,90.41,90.41,50.74,USD,USD
90846443,1004,20200208,0,EA,79.87,79.87,44.57,USD,USD
The columns you provide in the sales file will vary greatly depending on your application needs
(for example, you may not need the sales profit columns if you do not care about Sales Cost or
Margin measures). The most commonly used columns are listed below with additional usage
notes.
As an example, assume you have SKU 1090, which is a white T-shirt. This item is sold
individually to customers, but it is also included in a pack of three shirts. The 3-pack is sold
using a separate SKU, 3451. You must provide the data for this scenario as follows:
• When SKU 1090 sells to a customer, you will have a transaction for 1 unit on SALES.csv
• When SKU 3451 sells to a customer, you will have a transaction for 1 unit on SALES.csv,
plus a record for SKU 1090 for 3 units on SALES_PACK.csv (representing the 3 units inside
the pack that sold).
When this data is loaded into other applications like MFP, you will see a total of 4 units of sales
for SKU 1090, because we will sum together the sales from both interfaces. The pack-level
sale of SKU 3451 is not exported to Planning applications because that would result in double-
counting at an aggregate level, but it can be used for other purposes such as Retail Insights
reports.
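The arithmetic in this example can be restated as a short illustrative Python sketch (this is not how the platform performs the aggregation, and the pack exclusion is hard-coded here, whereas in practice it is driven by the PROD_PACK relationships):

from collections import defaultdict

sales_rows = [("1090", 1), ("3451", 1)]   # SALES.csv: single shirt plus the 3-pack SKU
sales_pack_rows = [("1090", 3)]           # SALES_PACK.csv: component units inside the pack
pack_skus = {"3451"}                      # pack-level sales are excluded from Planning exports

units_for_planning = defaultdict(int)
for item, qty in sales_rows:
    if item not in pack_skus:             # avoids double-counting at aggregate levels
        units_for_planning[item] += qty
for item, qty in sales_pack_rows:
    units_for_planning[item] += qty

print(dict(units_for_planning))           # {'1090': 4}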
When you are providing SALES_PACK.csv you must also provide the pack item/component item
relationships using a dimension file PROD_PACK.csv. Refer to the RAP Interfaces Guide for the
full interface specifications of both of these files.
For historical loads, this results in the following flow of data across all your files:
1. Generate the first month of week-ending inventory balances in INVENTORY.csv for all active
item/locations in each week of data. Load using the historical inventory load ad hoc
process. Make sure you load Receipts data in parallel with inventory data if you need to
capture historical first/last receipt dates against the stock positions (for IPO or LPO usage).
2. Repeat the monthly file generation process, including sets of week-ending balances in
chronological order. Remember that you cannot load inventory data out of order: once a
given intersection (item/loc/week) is loaded, you cannot go back and reload or modify it
without deleting it first. Make sure all the requirements listed in the table above are
satisfied for every week of data. Depending on your data volumes, you can include more
than one month in a single file upload.
3. Load every week of inventory snapshots through to the end of your historical period. If
there will be a gap of time before starting nightly batches, plan to load an additional history
file at a later date to catch up. Make sure you continue loading Receipts data in parallel
with inventory data if first/last receipt date calculations are needed.
4. When you are ready to cutover to batches, you must also re-seed the positions of all item/
locations that need to have an inventory record on Day 1 of nightly batch execution (same
as for all positional facts in RI). This is needed to fill in any gaps where currently active
item/locations are not present in the historical files but need to have an inventory record
added on day 1. Use the Seeding Adhoc process for Inventory to do this step, or include a
full inventory snapshot file in your first nightly batch run to set all active positions.
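As a hedged illustration of this "one month of week-ending balances at a time, in chronological order" pattern, the Python sketch below only generates the ordered list of week-ending dates and the monthly batches they fall into; the week-ending day and the date range are assumptions, and each batch would still need to be filled with full snapshots per the INVENTORY.csv specification.

from datetime import date, timedelta

def week_ending_dates(start, end, week_ends_on=5):   # 5 = Saturday; adjust to your fiscal calendar
    d = start
    while d.weekday() != week_ends_on:
        d += timedelta(days=1)
    while d <= end:
        yield d
        d += timedelta(weeks=1)

def monthly_batches(start, end):
    batches = {}
    for eow in week_ending_dates(start, end):
        batches.setdefault((eow.year, eow.month), []).append(eow)
    return batches                                    # insertion order = chronological order

for (year, month), weeks in monthly_batches(date(2021, 1, 1), date(2021, 3, 31)).items():
    # Each batch becomes one historical INVENTORY.csv upload of full week-ending snapshots.
    print(f"{year}-{month:02d}:", [d.isoformat() for d in weeks])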
The columns you provide on the inventory file will vary depending on your application needs
(for example, you may not need the in-transit or on-order columns if you are only providing
data for IPOCS-Demand Forecasting). The most commonly used columns are listed below with
additional usage notes.
For historical loads, this results in the following flow of data across all your files:
1. Generate an initial position PRICE.csv that has all type=0 records for the item/locations you
want to specify a starting price for. Load this as the very first file using the historical load ad
hoc process.
2. Generate your first month of price change records. This will have a mixture of all the price
change types. New item/location records may come in with type=0 and records already
established can get updates using any of the other type codes. Only send records when a
price or cost value changes; do not send every item/location on every date. You also must
not send more than one change per item/location/date.
3. Repeat the monthly file generation (or more than one month if your data volume for price
changes is low) and load process until all price history has been loaded for the historical
timeframe.
4. When you are ready for the cutover to batches, you must also re-seed the positions of all
item/locations that need a price record on Day 1 of nightly batch execution (same as for all
positional facts in RI). This is needed to fill in any gaps where currently active item/
locations are not present in the historical files, but need a price record added on day 1.
Use the Seeding Ad Hoc process for Pricing to do this step, not the historical load.
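A hedged Python sketch of the "send only changes, at most one per item/location/date" rule is shown below. It reuses column names from the PRICE.csv sample earlier in this chapter, but the change-detection logic itself is only an illustration of how you might filter your own extract, not product behavior.

last_sent = {}   # most recent price/cost values already written per item/location

def rows_to_send(daily_rows):
    # Yield a row only when its price or cost differs from the last value sent.
    for row in daily_rows:
        key = (row["ITEM"], row["ORG_NUM"])
        values = (row["SELLING_UNIT_RTL_AMT_LCL"], row["BASE_COST_AMT_LCL"])
        if last_sent.get(key) != values:
            last_sent[key] = values
            yield row

sample = [
    {"ITEM": "89833651", "ORG_NUM": "1004", "DAY_DT": "20200208",
     "SELLING_UNIT_RTL_AMT_LCL": 93.11, "BASE_COST_AMT_LCL": 53.56},
    {"ITEM": "89833651", "ORG_NUM": "1004", "DAY_DT": "20200209",
     "SELLING_UNIT_RTL_AMT_LCL": 93.11, "BASE_COST_AMT_LCL": 53.56},   # unchanged, not sent
    {"ITEM": "89833651", "ORG_NUM": "1004", "DAY_DT": "20200215",
     "SELLING_UNIT_RTL_AMT_LCL": 79.99, "BASE_COST_AMT_LCL": 53.56},   # price change, sent
]
print([r["DAY_DT"] for r in rows_to_send(sample)])   # ['20200208', '20200215']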
In most cases, you will be providing the same set of price columns for any application. These
columns are listed below with additional usage notes.
Calculations for IPO/LPO are done up front during each load of inventory position and receipt
files.
Receipt Types: The receipts are provided using a type code, with 3 specific codes supported:
• 20 – This code is for purchase order receipts, which are usually shipments from a supplier into a warehouse (but can be into stores).
• 44~A – These are allocation transfer receipts resulting from allocations issued to move warehouse inventory down to stores. The receipt occurs for the store location on the day it receives the shipment.
• 44~T – These are generic non-allocation transfer receipts between any two locations.
The MFP GA solution only uses type 20 transactions, but the rest of the RAP solutions use all types.
Receipts vs. Transfers: Transfer receipts are not the same thing as transfers (TRANSFER.csv), and both datasets provide useful information. Transfer receipts are specific to the receiving location only and occur at the time the units arrive. Transfers are linked to both the shipping and receiving locations, and they should be sent at the time the transfer is initiated. The MFP GA solution receives transfers from the TRANSFER.csv file only, but the other solutions will want both RECEIPT.csv and TRANSFER.csv files to have the transfer-related data.
Unplanned Receipts: It is possible for a location to receive inventory it did not ask for (for example, there is no associated PO or allocation linked to those units). Such receipts should still appear as a type 44~T receipt transaction, so long as those units of inventory do get pulled into the location's stock on hand.
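As a small illustration of the type codes above, the Python sketch below tags receipt rows and keeps only the rows the MFP GA solution would use; the row structure is an assumption for the example and is not the RECEIPT.csv layout.

RECEIPT_TYPES = {
    "20":   "purchase order receipt",
    "44~A": "allocation transfer receipt",
    "44~T": "non-allocation transfer receipt",
}

receipts = [
    {"item": "1235842", "loc": "1029", "type": "20",   "qty": 40},
    {"item": "1235842", "loc": "1029", "type": "44~A", "qty": 12},
    {"item": "1235842", "loc": "1029", "type": "44~T", "qty": 5},
]

# MFP GA only consumes purchase order receipts; the other RAP solutions use all types.
mfp_rows = [r for r in receipts if r["type"] == "20"]
print("rows used by MFP GA:", len(mfp_rows), "of", len(receipts))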
In most cases, you will be providing the same set of receipt columns for any application. These
columns are listed below with additional usage notes.
Transfer Types: Transfers are provided using a type code, with 3 specific codes supported:
• N – Normal transfers are physical movement of inventory between two locations that impacts the stock on hand.
• B – Book transfers are financial movement of inventory in the system of record that doesn't result in any physical movement, but still impacts the stock on hand.
• I – Intercompany transfers involve inventory moved into or out of another location that is part of a different legal entity, and therefore the transfer is treated like a purchase transaction in the source system.
Most transfers are categorized as Normal (N) by default. All transfer types are sent to Planning but would be loaded into separate measures as needed based on the type. Because transfers and receipts are separate measures used for different purposes, there is no overlap despite having similar information in both files.
Transfer In vs. Transfer Out: The transfers file has two sets of measures for the unit/cost/retail value into the location and out of the location. Typically these values contain the same data, but since they are aggregated and displayed separately in the target systems, they are also separate on the input so you have full control over what goes into each measure. For example, a transfer in of 5 units to location 102 would also have a transfer out of 5 units leaving location 56 (on the same record).
In most cases, you will be providing the same set of transfer columns for any application.
These columns are listed below with additional usage notes.
Adjustment Types: The adjustments are provided using a type code, with 3 specific codes supported:
• 22 – These adjustments are your standard changes to inventory for wastage, spoilage, losses, and so on. In Planning they are categorized as Shrink adjustments.
• 23 – These adjustments are for specific changes that impact the Cost of Goods Sold but are not an unplanned shrink event, such as charitable donations. In Planning they are categorized as Non-Shrink adjustments.
• 41 – These adjustments are targeted to reporting needs specifically and are the result of a stock count activity where the inventory levels were already adjusted in the store's inventory as part of the count, but you want the adjustment captured anyway to report against it.
Only types 22 and 23 go to Planning applications. Type 41 is used within RI for reporting.
Reason Codes: Reason codes are used to identify the specific type of adjustment that occurred for that item, location, and date. If you are loading data for Planning apps, then they are not required because Planning apps do not look at reason codes. They are only used for RI reporting. There are no required codes; it will depend on the data in your source system. The codes should be numerical, and there is a Description field that must also be provided for the display name.
Positive and Negative Values: Adjustments should be positive by default. A positive adjustment on the input file means a decrease in the stock on hand at the location. A negative adjustment means an increase to stock on hand (basically you have adjusted the units back into the location's inventory, which is less common). When the data is sent to MFP, the default planning import will invert the signs for the positive adjustments to become subtractions to inventory.
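The sign convention is easy to invert by mistake, so here is a minimal illustrative Python restatement of it (not the product's logic): a positive adjustment quantity on the input file reduces stock on hand, and the default Planning import flips the sign.

def apply_adjustment_to_soh(soh_qty, adj_qty):
    # Positive adjustment on the input file = decrease to stock on hand.
    return soh_qty - adj_qty

def planning_import_value(adj_qty):
    # Default MFP import inverts the sign so positive adjustments become subtractions.
    return -adj_qty

print(apply_adjustment_to_soh(100, 4))    # 96: four units adjusted out of inventory
print(apply_adjustment_to_soh(100, -2))   # 102: units adjusted back into inventory
print(planning_import_value(4))           # -4 as received by MFP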
In most cases, you will be providing the same set of adjustment columns for any application.
These columns are listed below with additional usage notes.
Supplier IDs, Reason Codes, Status Codes: All of the reason code, supplier number, and status code fields in an RTV record are optional and used only for RI reporting purposes, because planning applications do not report at those levels. If you are not specifying these values, leave the columns out of the file entirely, and a default value of -1 will be assigned to the record in those columns.
Positive and Negative Values: RTV transactions should always be positive values. Only send negative values to reverse a previously-sent transaction in order to zero it out from the database.
In most cases, you will be providing the same set of RTV columns for any application. These
columns are listed below with additional usage notes.
Markdown Amounts: Markdown amounts are only the change in total value of inventory, not the total value itself. Permanent and clearance price changes result in markdown amounts derived like this:
Markdown Retail = (SOH * Old Retail) – (SOH * New Retail)
Markdown Retail = (150 * 15) – (150 * 12) = $450
Promotional price changes do not need the total markdown amount calculation, and instead send a promotion markdown amount at the time of any sale:
Promotional Markdown Retail = (Units Sold * Old Retail) – (Units Sold * New Retail)
Promotional Markdown Retail = (5 * 17) – (5 * 15) = $10
Markdown amounts will generally be positive values when the price was decreased, and the target systems will know when to add or subtract the markdown amounts where needed.
Markdown Types: The markdowns are provided using a type code, with 3 specific codes supported:
• R – Regular permanent price changes that are not considered a clearance price
• C – Clearance markdowns, which are permanent and intended to be used at end-of-life for the item
• P – Promotional markdowns, which are temporary price changes or discounts that are limited to a period of time
Markup Handling: When a regular price is increased or a clearance price is set back to regular price, you can send a separate transaction with positive Markup values populated in the record. You do not need to send negative values to reverse a markdown; the target systems can use the markup measures to do that. A similar rule applies to the markdown/markup cancel measures.
Inventory Usage for PDS Measures: Markdown data is joined with inventory data when you are exporting it to Planning applications, specifically to calculate two markdown measures (reg-promo and clearance-promo markdown amounts). The markdown export uses the clearance flag from the inventory history to determine the measure rollups. If there is no inventory record for a given item/loc/week intersection, the markdown data will default into the reg-promo markdown measure.
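The two formulas above can be restated as a short illustrative Python sketch that reproduces the $450 permanent/clearance example and the $10 promotional example:

def markdown_retail(soh_units, old_retail, new_retail):
    # Permanent or clearance markdown: change in total inventory value.
    return (soh_units * old_retail) - (soh_units * new_retail)

def promo_markdown_retail(units_sold, old_retail, new_retail):
    # Promotional markdown: taken only on the units actually sold.
    return (units_sold * old_retail) - (units_sold * new_retail)

print(markdown_retail(150, 15, 12))       # 450 -> the $450 markdown example
print(promo_markdown_retail(5, 17, 15))   # 10  -> the $10 promotional example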
In most cases, you will be providing the same set of markdown columns for any application.
These columns are listed below with additional usage notes.
Daily Data Requirements: It is expected that the PO header and detail files start as full daily snapshots of all active or recently closed orders. The detail data is maintained positionally so that, if no update is received, we will continue to carry forward the last known value. Once daily batches have started, you can transition the PO details file only to an incremental update file (the header file must always be a complete snapshot). When sending data incrementally, you must include all order updates for a given date, both for open and closed orders. If an order changed at the header level (such as closing or cancelling the order), you should send all the detail lines in that order even if some didn't change. This includes when order lines are fully received and move to 0 units remaining; these changes must be sent to RAP. If you are unable to satisfy these incremental data requirements, you may change the parameter PO_FULL_LOAD_IND to Y to instead provide full snapshots of only non-zero order lines, and the system will zero out the rest of the orders automatically.
Historical Data and Past Dates: The Order interfaces do not support loading historical data or data with past dates on the detailed order-line records. Every time you load orders, it is for the current set of data for a single business date. The DAY_DT value on the detail file should be the same on all rows and be set to the business date the data is for. You also cannot reload the same date multiple times; the detail table follows the rules for positional facts as described in the Positional Data Handling section below.
Order Status: The header file for POs has a variety of attributes, but one of the most important is the status, which should be either A (active) or C (closed). Active orders are used in the PO calculations. When sending daily PO files, you must include both active and closed order updates, because we need to know an order has been completed so it can stop being included in calculations.
OTB EOW Date: The OTB end-of-week date is used for the Planning aggregations to create a forward-looking view of expected receipts from POs. Open order quantities are aggregated to the OTB week before being exported to Planning. If the OTB week has elapsed, the order quantities are included in next week's OTB roll-up regardless of how far in the past the date is, because the earliest that PO can come in as receipts is in the next business week.
Include On Order Indicator: There is a required flag on the order header file to tell the system whether an order should be included in calculations for Planning or not. When the flag is set to Y, that order's details will be used for the aggregated on-order values. If set to N, the order details will not be used (but will still be present in the database for other purposes like RI reporting and inventory optimization).
The ORDER_HEAD.csv and ORDER_DETAIL.csv files both have a minimum set of required fields to
make the integrations within RAP function, so those will be listed out below with additional
usage notes. The two files are tightly coupled and it’s expected that you send both at the same
time; you will never send only one of them.
It’s also necessary to understand the lifecycle of a purchase order and how that should be
reflected in the data files over time. RAP will require data to be sent for each step in the order
process as outlined below.
1. When the order is approved, the ORDER_HEAD file should contain a row for the order with
status=A and the ORDER_DETAIL file should contain the items on the order with non-zero
quantities for the on-order amounts.
2. As the lines of the order are received, ORDER_HEAD should continue to have the row for the
order with every update, and ORDER_DETAIL should be sent with the order lines that require
changes from the last known value. If you have the ability to detect which order lines
changed, you only need to send those. RAP will remember and carry forward any order
lines that were not updated. If you can’t detect the changes to order lines, just send all
lines in the order every time.
3. If any lines are cancelled from the order, you must send that update as a set of zero values
on the PO_ONORD_* columns in ORDER_DETAIL to zero out the cancelled lines in RAP.
Similarly, if the entire order is canceled or closed before being fully received, you must
send all lines of the order with zero values on the PO_ONORD_* columns in ORDER_DETAIL
and also update ORDER_HEAD to have a status of C. If this is not possible in your source
system, you must configure the parameter PO_FULL_LOAD_IND to a value of Y in Manage
System Configurations, then you will be allowed to send full loads of only non-zero order
lines and the system will zero out the rest.
4. As order lines start to be received normally, send the new order quantities for each
change, including when a line is fully received and moves to 0 units on order. When an
order becomes fully received we need all rows of data in RAP to move to 0 for that order’s
values, so that we stop including it in future on-order rollups. If PO_FULL_LOAD_IND=Y then
we don’t need zero balance updates from you, just stop sending the order details when it
reaches zero and we will zero it out automatically.
5. When an order is finally fully-received and closed, send one final update where
ORDER_HEAD shows the status as C and the ORDER_DETAIL data is moved to 0 units on order
for any lines not updated yet.
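To make the lifecycle concrete, here is a hedged Python illustration of how one order might progress across successive daily files and how RAP carries forward unchanged lines. The status flag and the single on-order quantity are simplified placeholders for the real ORDER_HEAD status and the PO_ONORD_* columns on ORDER_DETAIL.

daily_files = [
    # Day 1: order approved, both lines open.
    {"status": "A", "lines": {("SKU1", "LOC1"): 100, ("SKU2", "LOC1"): 50}},
    # Day 5: 60 units of SKU1 received; only the changed line needs to be sent.
    {"status": "A", "lines": {("SKU1", "LOC1"): 40}},
    # Day 9: SKU2 line cancelled; it must be explicitly zeroed out.
    {"status": "A", "lines": {("SKU2", "LOC1"): 0}},
    # Day 12: remainder received and order closed; all remaining lines move to zero.
    {"status": "C", "lines": {("SKU1", "LOC1"): 0}},
]

carried_forward = {}                        # positional view held in RAP
for day in daily_files:
    carried_forward.update(day["lines"])    # unchanged lines keep their last known value
    print(day["status"], carried_forward)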
Depending on your source system, it can be difficult to detect all of these changes to the
purchase orders over time and send only incremental updates. In such cases, you may always
post all orders that are active or have been closed within recent history, and RAP will merge
the data into the system on top of the existing order records. The main requirement that must
still be accounted for is the cancelling or removal of order lines from an order, which must be
tracked and sent to RAP even if your source system deletes the data (unless
PO_FULL_LOAD_IND=Y).
Data Must be Sequential: Positional data must be loaded in the order of the calendar date on which it occurs and cannot be loaded out of order. For example, when loading history data for inventory, you must provide each week of inventory one after the other, starting from Week 1, 2, 3, and so on.
Data Cannot be Back Posted: Positional data cannot be posted to any date prior to the current load date or business date of the system. If your current load date is Week 52 2021, you cannot post records back to Week 50: those past positions are unable to be changed. Any corrections that need to be loaded must be effective from the current date forward.
Data Must be Seeded: Because positional data must maintain the current position of all data elements in the fact (even those that are inactive or not changing), it is required to initialize or "seed" positional facts with a starting value for every possible combination of identifiers. This happens at two times:
1. The first date in your history files must be full snapshots of all item/locations that need a value, including zero balances for things like inventory.
2. Special seed programs are provided to load initial full snapshots of data after history is finished, to prepare you for nightly batch runs. After seeding, you are allowed to provide incremental datasets (posting only the positions that change, not the full daily or weekly snapshot). Incremental loads are one of the main benefits of using positional data, as they greatly reduce your nightly batch runtime.
Throughout the initial data load process, there will be additional steps called out any time a
positional load must be performed, to ensure you accurately capture both historical and initial
seed data before starting nightly batch runs.
At a minimum, if you are not using MFCS, provide the following two parameters:
VDATE|20220101
PRIME_CURRENCY_CODE|USD
For anyone that will be using MFCS now or at any time in the future, you instead should
provide the full set of parameters that MFCS would eventually be generating for you, like so:
PRIME_CURRENCY_CODE|USD
CONSOLIDATION_CODE|C
VAT_IND|Y
STKLDGR_VAT_INCL_RETL_IND|Y
MULTI_CURRENCY_IND|Y
CLASS_LEVEL_VAT_IND|Y
DOMAIN_LEVEL|D
CALENDAR_454_IND|4
VDATE|20230506
NEXT_VDATE|20230507
LAST_EOM_DATE|20240131
CURR_BOM_DATE|20240201
MAX_BACKPOST_DAYS|10
PRIME_EXCHNG_RATE|1
PRIMARY_LANG|EN
DEFAULT_TAX_TYPE|GTAX
INVOICE_LAST_POST_DATE|20170101
The VDATE parameter is the current business date that all your other files were generated for,
in YYYYMMDD format. The date should match the values on your fact data, such as the DAY_DT
columns in sales, inventory, and so on. This format is not configurable and should be provided
as shown. The PRIME_CURRENCY_CODE parameter is used by the system to set default
currencies on fact files when you do not provide them yourself or if there are null currency
codes on a row.
Assuming you will be using RDE jobs to extract data from MFCS later on, the other parameters
can be provided as shown above or with any other values. The first time you run the RDE job
ETLREFRESHGENSDE_JOB, it will extract all the parameter values from MFCS and directly update
the RA_SRC_CURR_PARAM_G table records. The update from MFCS assumes that
RA_SRC_CURR_PARAM_G already has rows for all of the above parameters, which is why it is
important to initialize the data as shown if you are loading data from flat files.
9
Extensibility
The Retail Analytics and Planning (RAP) suite of applications can be extended and customized
to fit the needs of your implementation.
Custom applications, services and interfaces can be developed for AI Foundation using the
Innovation Workbench module. Innovation Workbench is also the first choice for programmatic
extensibility within RAP applications and provides access to data from both PDS and AIF.
Planning application configurations can be extended using the native RPASCE platform
functionality, and further extended using Innovation Workbench.
Retail Insights can be extended with custom datasets brought into the application using Data
Visualizer. This chapter will provide an overview of the RAP extensibility capabilities with links
and references to find more information.
Note:
Before continuing with this section, please read the application-specific
implementation/user guides.
AI Foundation Extensibility
The Innovation Workbench (IW), as a part of the AI Foundation module, consists primarily of
Application Express (APEX) and Data Studio. These tools provide significant extensibility
features for custom analytical applications, advanced data science processes, third-party
integrations, and much more. Some examples of IW capabilities for AI Foundation include:
• Custom database schema with full read/write access allows you to store data, run queries,
perform custom calculations, and debug integrations across the RAP platform
• Use advanced Oracle database features like Oracle Data Mining (ODM) and other
machine-learning models
• Use Notebooks in Data Studio to create custom Python scripts for analytics, data mining,
or machine learning
• Notebooks and APEX jobs can be scheduled to run automatically to refresh data and
calculations
• Create RESTful API services both to request data from IW out to other systems and to
consume non-Oracle data into the platform
• Build flat file integrations into and out of IW for large data movements and custom dataset
extensions
• Build custom monitoring and utilities to manage integrations and science models with
business IT processes
More details on Innovation Workbench features and examples of custom extensions can be
found in the AI Foundation Implementation Guide chapter on Innovation Workbench.
2. Back in AI Foundation, use the Manage System Configuration screen in the Control Center
to modify the table RI_CUSTOM_JOB_CFG and edit values for the following columns:
a. PACKAGE_NAME: Enter the name of the package that was created in IW.
b. PROCEDURE_NAME: Enter the name of the procedure in your package that was created in
IW.
c. PROCEDURE_DESCR: Enter a description, if desired.
d. RUN_TIME_LIMIT: The run time limit is 900 seconds by default. It can be changed to a
different value if needed. If the custom process runs for longer than the value indicated
in RUN_TIME_LIMIT when running as part of the batch process, the custom process
will be stopped and the batch will move on to the next job/process.
e. CONNECTION_TYPE: Valid values are LOW and MEDIUM. This value should almost always
be LOW unless the job is supposed to run a process that would need multiple threads.
HIGH is not a valid value. If HIGH is entered, it will switch to LOW by default when the
job runs.
f. ENABLE_FLG: Set this value to Y to indicate that this job should be executed as part of
the batch process.
3. The POM jobs should be enabled in the Nightly batch once configured. Alternatively, you
may use the ad hoc process RI_IW_CUSTOM_ADHOC to run the jobs outside of the batch.
Because these jobs are added as part of the nightly batch, they do not allow extended
execution times (>900 seconds) by default. If you are building an extension that requires long-
running jobs, those should be scheduled using the DBMS_SCHEDULER package from within IW
itself so that you don’t cause batch delays.
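As an illustration only (the package, procedure, and description values below are hypothetical and would be replaced by the objects you actually created in IW), a completed RI_CUSTOM_JOB_CFG entry might look like this:

PACKAGE_NAME    = C_CUSTOM_PKG         (hypothetical IW package name)
PROCEDURE_NAME  = REFRESH_CUSTOM_AGG   (hypothetical procedure in that package)
PROCEDURE_DESCR = Refresh custom aggregates used for store reporting
RUN_TIME_LIMIT  = 900
CONNECTION_TYPE = LOW
ENABLE_FLG      = Y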
Note:
These customizations must be made through RPASCE Configuration Tools.
• Solution
• Measures
Custom worksheets may only be added into existing workbook tabs for plug-in generated
solutions.
Publishing Measures
The published GA measures can be divided into the following categories:
• Read only—can only be used on the right-hand side of the expression
• Writable—can be used on both the left-hand side and right-hand side of the expression
• RuleGroupOnlyWritable—a specific measure that can be read/written in the specified rule group
• Loadable—measures that can be loaded using OAT and can be present in the custom load batch control file
• WorkbookMeasureOverride—measures whose properties can be overridden in the associated workbook
• ReadableExecutionSet—list of GA batch control execution set names that can be called from within a custom batch control execution file
The list of published measures will change based upon configuration. Therefore, the list is
dynamically generated at each configuration regeneration.
The contents of the list are saved in a file named: publishedMeasures.properties.
The file is located under [config]/plugins. Before writing custom rules, regenerate your
application configuration and then open the file to search for published application measures.
ReadOnly|PreSeaProf|Seasonal Profile
ReadOnly|activefcstitem01|Active Forecast Items
ReadOnly|activefcstitem07|Active Forecast Items
• Apart from the Custom Solution, custom workbooks can also be added to the extensible
GA solutions.
For example:
Note:
Options can only be removed; new options cannot be added.
Note:
If a GA measure has not been enabled as Elapsed Lock Override, the following
steps can achieve the same behavior:
1. Make sure the GA measure is writable.
2. Register a custom measure and load it from the GA measure.
3. Set the custom measure as Elapsed Lock Override.
4. Edit the custom measure in the workbook.
5. Commit the custom measure back into the GA measure.
Note:
These steps must be performed using RPASCE Configuration Tools. Copying,
pasting or direct editing of XML files is unsupported.
1. To add custom real-time alert into existing workbooks, all measures related to the custom
real-time alert need to be added to the workbook.
2. Create a style for the custom real-time alert in the configuration.
3. Create a custom real-time alert in a workbook using the measures and style created from
the previous steps.
4. If a real-time alert defined in custom solution will be used in a GA workbook, the real-time
alert measure should be imported as an external measure in the corresponding GA
solution.
5. Ensure that rule group consistency is maintained while adding any custom rules that
might be needed to calculate an alert measure.
The application plug-in will preserve a custom real-time alert during regeneration.
Note:
The bold line shows where the details of the validation failure are in the log. (In the
actual log, this line is not bold.)
Taskflow Extensibility
The application taskflow is extensible: the implementer can add custom taskflow components
such as activities, tasks, steps, tabs, and worksheets. Any custom taskflow component added
to a GA taskflow component will be retained after plug-in automation. As part of extensibility,
applications provide a mechanism wherein the implementer can hide certain components of
the GA configuration and taskflow by editing a property file. The property file is a simple text
file named extend_app.properties and is located inside the plug-in directory of the
configuration. A sample file is included in the plug-ins directory of the GA configuration for
reference.
For example, <App>\plug-ins\extend_app.properties
Stage|Component|Action|Value
Each line consists of four fields separated by the | character. The value field can contain a
comma-separated list of values. Note that the value field should specify the fully qualified name
of the taskflow component. Refer to the sample file. Any line that begins with a # character is
considered a comment line and is ignored.
The names of the Taskflow entities can be found in the taskflow.xml file located in the
configuration directory.
The various GA configuration components that can be hidden are listed in the following table:
Activity: Hides the specified taskflow activity. The value field is the taskflow activity name.
Task: Hides the specified taskflow task. The value field is the taskflow task name.
Step: Hides the specified taskflow step. The value field is the taskflow step name.
Tab: Hides the specified taskflow tab. The value field is the taskflow tab name.
Worksheet: Hides the specified worksheet. The value field is the worksheet name.
Realtime Alert: Hides the specified real-time alert. The value field is the real-time alert name.
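For illustration only, lines hiding one task and one worksheet might look like the following; the stage keyword must be copied from the sample file and the fully qualified component names from your own taskflow.xml, so the values below are placeholders rather than working entries.

# Illustrative placeholders - take the real stage keyword from the sample file
# and the fully qualified names from taskflow.xml.
<stage>|Task|Hide|Activity1.Task2
<stage>|Worksheet|Hide|Activity1.Task2.Step1.Tab1.Worksheet1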
• For ease of maintenance, all custom batch set names or step names should be prefixed
with c_
Examples
The following is an example of custom batch_exec_list.txt, batch_calc_list.txt,
batch_loadmeas_list.txt, and batch_exportmeas_list.txt.
In this example, the following modifications were added to the batch_weekly process:
Note:
The batch control validation is called automatically during domain build or patch. It is
also called when the batch control files are uploaded using the Upload Batch Control
files from OAT.
Dashboard Extensibility
Currently, IPOCS-Demand Forecasting supports Dashboard Extensibility by allowing the
Dashboard Settings configuration file to be customized. The other planning applications, such
as MFP and AP, support customizing the dashboard, but these are not extensible (please refer
to Customizing the MFP/AP Dashboard).
In Figure 9-2, the Overview Metric profile is selected, and the Total Sales tile is highlighted with
two sub-measures: Promo Sales and Markdown Sales.
Note:
The Exception profiles consist of Exception Tiles, and the Metric Profile consists of
metric tiles of the type Comparison Tile. Currently, IPOCS-Demand Forecasting does
not support the Variance Metric tile.
Dashboard Intersection
The IPOCS-Demand Forecasting GA Dashboard workbook is built at the Sub-class, District
level which is controlled by the Dashboard Intersection specified in the IPOCS-Demand
Forecasting plug-in. Refer to the "IPOCS-Demand Forecasting / IPOCS-Lifecycle Allocation
and Replenishment Configuration" section in the Oracle® Retail Inventory Planning
Optimization Cloud Service-Demand Forecasting/ Inventory Planning Optimization Cloud
Service-Lifecycle Allocation and Replenishment Implementation Guide. The Dashboard
intersection also defines the level to which we can drill down the Product and Location filters in
the Dashboard.
Note:
The Deployment Tool is a utility within the Configuration Tools. Refer to the section,
Deployment Tool – Dashboard Settings Resource in the Oracle Retail Predictive
Application Server Cloud Edition (RPASCE) Configuration Tools User Guide.
The IPOCS-Demand Forecasting GA Dashboard Settings configuration file is found within the
configuration: RDF\plugins\dashboardSettings.json
Note:
Do not remove the GA measures or worksheet from the Dashboard workbook
template in the configuration.
2. Download the application dashboard JSON file (dashboardSettings.json) from the Starter
kit or directly from the customer-provisioned environment by running the Online
Administration Tools task Patch Application Task -> Manage JSON Files -> Retrieve
JSON files to Object Storage. This will download the JSON file into the Object Storage
location at outgoing/dashboardSettings.json.
3. Open the downloaded dashboard JSON file using the RPASCE Configuration Tools ->
Utilities -> Deployment Tool and select the Open option under
dashboardSettings.json.
4. It should open the dashboard JSON file in edit mode. The customer can then edit the
dashboard to add the newly added measures into their required profiles. They can also
add new profiles or change profiles but can only use the measures available in the
dashboard workbook. For more information on working with the JSON file using RPASCE
Configuration Tools, see the Oracle Retail Predictive Application Server Cloud Edition
Configuration Tools User Guide.
5. Once the JSON file is updated, it can be uploaded into the MFP environment by uploading
the file to the Object Storage location as incoming/config/dashboardSettings.json, and
running the Online Administration Tool task Patch Application Task -> Manage JSON
Files > Update JSON files from Object Storage. Successful completion of the task will
copy the file to the required location under the application domain.
6. After uploading, rebuild the dashboard to view the updated dashboard.
7. The entire process can be validated in the Virtual machine before trying to upload the
completed JSON file into the customer environment.
Note:
The permissible and restricted interface customization is published in the file
publishedMeasures.properties located in the [config]/plugins directory.
Note:
• Only Interface Filters published and not restricted in the property file can be edited.
Follow this process to update the interface.cfg file:
hook_calc_attb_CF_: This hook is executed right after the GA attributes exception navifin_CF_ is calculated and before the approval business rule groups are calculated. If any custom calculated attributes have been set up to be used in approval by the implementor, this is the place to insert custom attribute calculations. _CF_ needs to be replaced by a level number.
hook_frcst_adjust_CF_: This hook is provided to add custom forecast adjustment calculations. This hook is before the business rule group related calculation, approval, and navigation logic. _CF_ needs to be replaced by a level number.
hook_frcst_alert_CF_: This hook is provided to merge the user-specified parameters associated with the approval business rule group before running exceptions. After merging the user-specified parameters, the custom approval exceptions and exception metric should be executed. _CF_ needs to be replaced by a level number.
hook_frcst_approval_CF_: This hook is provided to perform any post-processing to the approval forecast after the GA approval step. _CF_ needs to be replaced by a level number.
hook_navi_attb_CF_: This hook is provided so that the implementor can calculate the custom calculated attributes used in the navigation business rule groups. _CF_ needs to be replaced by a level number.
hook_populate_aprvrulg_eligiblemask_CF_: This hook is for populating the rulgeligmask_CF measure using custom logic. This measure is the eligible mask at sku/store/rulegroup. It can be populated with custom logic to calculate eligible items for approval business rule groups. _CF_ needs to be replaced by a level number.
hook_post_export: This hook is after export.
hook_post_forecast: This hook is between forecast and export.
hook_post_preprocess: This hook is after the preprocessing phase and before generating the forecasts.
hook_pre_forecast: This hook is after New Item calculation and before the forecast generation step.
hook_pre_post_data_load: This hook is between the GA measure load and the post_data_load rule group run.
hook_IPO_COM_DATA_IMP_OBS_D, hook_IPO_COM_DATA_IMP_OBS_W, hook_IPO_COM_DATA_IMP_RDX_D, hook_IPO_COM_DATA_IMP_RDX_W: These hooks are for the calling steps using any import of common data interfaces.
hook_IPO_COM_HIER_IMP_OBS_D, hook_IPO_COM_HIER_IMP_OBS_W, hook_IPO_COM_HIER_IMP_RDX_D, hook_IPO_COM_HIER_IMP_RDX_W: These hooks are for the calling steps using any import of common hierarchies.
hook_IPO_HIER_IMP_OBS_D, hook_IPO_HIER_IMP_OBS_W, hook_IPO_HIER_IMP_RDX_D, hook_IPO_HIER_IMP_RDX_W: These hooks are for the calling steps using any import of application-specific hierarchies.
hook_IPO_INIT_EXP_OBS_D, hook_IPO_INIT_EXP_OBS_W, hook_IPO_INIT_EXP_RDX_D, hook_IPO_INIT_EXP_RDX_W: These hooks are for calling steps for initial batch exports.
hook_IPO_POST_BATCH_D, hook_IPO_POST_BATCH_W: These hooks are for calling steps after the batch has run.
hook_IPO_POST_DATA_IMP_OBS_D, hook_IPO_POST_DATA_IMP_OBS_W, hook_IPO_POST_DATA_IMP_RDX_D, hook_IPO_POST_DATA_IMP_RDX_W: These hooks are for the calling steps using any import of application-specific data interfaces after the calc steps.
hook_IPO_POST_EXP_OBS_D, hook_IPO_POST_EXP_OBS_W, hook_IPO_POST_EXP_RDX_D, hook_IPO_POST_EXP_RDX_W: These hooks are for the calling steps using any exports after the batch aggregations.
hook_IPO_PRE_BATCH_D, hook_IPO_PRE_BATCH_W: These hooks are for calling steps prior to the batch being run.
hook_IPO_PRE_DATA_IMP_OBS_D, hook_IPO_PRE_DATA_IMP_OBS_W, hook_IPO_PRE_DATA_IMP_RDX_D, hook_IPO_PRE_DATA_IMP_RDX_W: These hooks are for the calling steps using any import of application-specific data interfaces.
hook_IPO_PRE_EXP_OBS_D, hook_IPO_PRE_EXP_OBS_W, hook_IPO_PRE_EXP_RDX_D, hook_IPO_PRE_EXP_RDX_W: These hooks are for calling steps prior to exports.
hook_IPO_WB_BUILD_D, hook_IPO_WB_BUILD_W: These hooks are for the calling steps specific to workbook refresh or build.
batch_exec_list.txt
# custom export
hook_post_export | measexport | c_export_promoeffects
c_calc_cust_alerts | calc |c_custalert1
c_calc_cust_alerts | calc |c_custalert2
batch_calc_list.txt
#outlier calculation
c_outlier_calc | G | GROUP | c_HBICalcTodayIdx
c_outlier_calc | G | GROUP | c_dataprocess
c_outlier_calc | G | GROUP | c_calc_outlier
batch_loadmeas_list.txt
batch_exportmeas_list.txt
The following sections describe Batch Control details that are specific to MFP.
The following table describes the Custom Hooks available in the batch process if the customer
is scheduling jobs directly through the OAT.
Table 9-5 Custom Hooks in the Batch Process to Directly Run from OAT
hook_postbuild: This hook is added at the end of the postbuild batch, which runs after the initial domain build.
hook_postpatch: This hook is added at the end of the service patch process, which runs after the service patch.
hook_batch_daily_pre: This hook is added before the daily batch process.
hook_batch_daily_post: This hook is added at the end of the daily batch process before the dashboard build.
hook_batch_weekly_pre: This hook is added before the weekly batch process.
hook_batch_weekly_post: This hook is added at the end of the weekly batch process before the workbook refresh and segment build.
If the customer is using the JOS/POM flow schedule to schedule jobs in MFP, then the
following hooks can be used. The MFP JOS/POM job flow uses the same set names as the
hooks shown in the following table (without the hook_ prefix) and in turn calls each of the
corresponding hooks. The customer can therefore easily customize their MFP batch flow by
simply changing the hooks or adding additional steps to the existing, pre-configured hooks.
The naming convention followed is:
• _RDX is used for any integration step using RDX.
• _OBS is used for any steps using Object Storage.
• _D is for jobs that run daily.
• _W is for jobs that run weekly.
Table 9-6 Custom Hooks in the Batch Process if JOS/POM is Used to Schedule the Flow
hook_MFP_PRE_EXP_RDX_D: This hook is for the calling steps using the Daily Export Interfaces to RDX as soon as the batch starts.
hook_MFP_PRE_EXP_OBS_D: This hook is for the calling steps using the Daily Export Interfaces to Object Storage as soon as the batch starts.
hook_MFP_PRE_EXP_RDX_W: This hook is for calling steps using the Weekly Export Interfaces to RDX as soon as the batch starts.
hook_MFP_PRE_EXP_OBS_W: This hook is for the calling steps using the Weekly Export Interfaces to Object Storage as soon as the batch starts.
hook_MFP_COM_HIER_IMP_RDX_D: This hook is for the calling steps using any Daily Import of common hierarchies from RDX.
hook_MFP_COM_HIER_IMP_OBS_D: This hook is for the calling steps using any Daily Import of common hierarchies from Object Storage.
hook_MFP_COM_HIER_IMP_RDX_W: This hook is for the calling steps using any Weekly Import of common hierarchies from RDX.
hook_MFP_COM_HIER_IMP_OBS_W: This hook is for the calling steps using any Weekly Import of common hierarchies from Object Storage.
hook_MFP_COM_DATA_IMP_RDX_D: This hook is for the calling steps using any Daily Import of common data interfaces from RDX.
hook_MFP_COM_DATA_IMP_OBS_D: This hook is for the calling steps using any Daily Import of common data interfaces from Object Storage.
hook_MFP_COM_DATA_IMP_RDX_W: This hook is for the calling steps using any Weekly Import of common data interfaces from RDX.
hook_MFP_COM_DATA_IMP_OBS_W: This hook is for the calling steps using any Weekly Import of common data interfaces from Object Storage.
hook_MFP_HIER_IMP_RDX_D: This hook is for the calling steps using any Daily Import of application-specific hierarchies from RDX.
hook_MFP_HIER_IMP_OBS_D: This hook is for the calling steps using any Daily Import of application-specific hierarchies from Object Storage.
hook_MFP_HIER_IMP_RDX_W: This hook is for the calling steps using any Weekly Import of application-specific hierarchies from RDX.
hook_MFP_HIER_IMP_OBS_W: This hook is for the calling steps using any Weekly Import of application-specific hierarchies from Object Storage.
hook_MFP_PRE_DATA_IMP_RDX_D: This hook is for the calling steps using any Daily Import of application-specific data interfaces from RDX.
hook_MFP_PRE_DATA_IMP_OBS_D: This hook is for the calling steps using any Daily Import of application-specific data interfaces from Object Storage.
hook_MFP_PRE_DATA_IMP_RDX_W: This hook is for the calling steps using any Weekly Import of application-specific data interfaces from RDX.
hook_MFP_PRE_DATA_IMP_OBS_W: This hook is for the calling steps using any Weekly Import of application-specific data interfaces from Object Storage.
hook_MFP_BATCH_AGG_D: This hook is for the calling steps doing any regular daily batch aggregation after hierarchy and data loads.
hook_MFP_BATCH_AGG_W: This hook is for the calling steps doing any regular weekly batch aggregation after hierarchy and data loads.
hook_MFP_POST_DATA_IMP_RDX_D: This hook is for the calling steps using any Daily Import of application-specific data interfaces from RDX after the calc steps.
hook_MFP_POST_DATA_IMP_OBS_D: This hook is for the calling steps using any Daily Import of application-specific data interfaces from Object Storage after the calc steps.
hook_MFP_POST_DATA_IMP_RDX_W: This hook is for the calling steps using any Weekly Import of application-specific data interfaces from RDX after the calc steps.
hook_MFP_POST_DATA_IMP_OBS_W: This hook is for the calling steps using any Weekly Import of application-specific data interfaces from Object Storage after the calc steps.
hook_MFP_POST_EXP_RDX_D: This hook is for the calling steps using any Daily Exports to RDX after the batch aggs.
hook_MFP_POST_EXP_OBS_D: This hook is for the calling steps using any Daily Exports to Object Storage after the batch aggs.
hook_MFP_POST_EXP_RDX_W: This hook is for the calling steps using any Weekly Exports to RDX after the batch aggs.
hook_MFP_POST_EXP_OBS_W: This hook is for the calling steps using any Weekly Exports to Object Storage after the batch aggs.
hook_MFP_WB_BUILD_D: This hook is for the calling steps specific to workbook refresh or build in the daily cycle.
hook_MFP_WB_BUILD_W: This hook is for the calling steps specific to workbook refresh or build in the weekly cycle.
batch_exec_list.txt
# Run Batch calc and new custom exports after end of weekly batch
hook_batch_weekly_post |calc |c_calc_vndr
hook_batch_weekly_post |exportmeasure |c_exp_vndr
batch_calc_list.txt
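For reference, a batch_calc_list.txt entry defining the c_calc_vndr set called above could follow the same format as the earlier batch_calc_list.txt example; the rule group name here is hypothetical.

# calc set invoked from hook_batch_weekly_post (rule group name is illustrative)
c_calc_vndr | G | GROUP | c_vndr_calc_rules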
batch_loadmeas.txt
batch_exportmeas.txt
The following sections describe Batch Control details that are specific to AP.
The following table describes the Custom Hooks available in the batch process.
hook_postbuild_pre: This hook is added at the beginning of the postbuild batch, which runs after the initial domain build.
hook_postbuild_post: This hook is added at the end of the postbuild batch, which runs after the initial domain build.
hook_postpatch: This hook is added at the end of the service patch process, which runs after the service patch.
hook_batch_daily_pre: This hook is added before the daily batch process.
hook_batch_daily_post: This hook is added at the end of the daily batch process before the dashboard build.
hook_batch_weekly_pre: This hook is added before the weekly batch process.
hook_batch_weekly_post: This hook is added at the end of the weekly batch process before the workbook refresh and segment build.
If the customer is using the JOS/POM flow schedule to schedule jobs in AP, then the following
hooks can be used. The AP JOS/POM job flow uses the same set names as the hooks shown in
the following table (without the hook_ prefix) and in turn calls each of the corresponding
hooks. The customer can therefore easily customize their AP batch flow by simply changing
the hooks or adding additional steps to the existing, pre-configured hooks.
The naming convention followed is:
• _RDX is used for any integration step using RDX.
• _OBS is used for any steps using Object Storage.
• _D is for jobs that run daily.
• _W is for jobs that run weekly.
hook_AP_PRE_EXP_RDX_D: This hook is for the calling steps using the Daily Export Interfaces to RDX as soon as the batch starts.
hook_AP_PRE_EXP_OBS_D: This hook is for the calling steps using the Daily Export Interfaces to Object Storage as soon as the batch starts.
hook_AP_PRE_EXP_RDX_W: This hook is for calling steps using the Weekly Export Interfaces to RDX as soon as the batch starts.
hook_AP_PRE_EXP_OBS_W: This hook is for the calling steps using the Weekly Export Interfaces to Object Storage as soon as the batch starts.
hook_AP_COM_HIER_IMP_RDX_D: This hook is for the calling steps using any Daily Import of common hierarchies from RDX.
hook_AP_COM_HIER_IMP_OBS_D: This hook is for the calling steps using any Daily Import of common hierarchies from Object Storage.
hook_AP_COM_HIER_IMP_RDX_W: This hook is for the calling steps using any Weekly Import of common hierarchies from RDX.
hook_AP_COM_HIER_IMP_OBS_W: This hook is for the calling steps using any Weekly Import of common hierarchies from Object Storage.
hook_AP_COM_DATA_IMP_RDX_D: This hook is for the calling steps using any Daily Import of common data interfaces from RDX.
hook_AP_COM_DATA_IMP_OBS_D: This hook is for the calling steps using any Daily Import of common data interfaces from Object Storage.
hook_AP_COM_DATA_IMP_RDX_W: This hook is for the calling steps using any Weekly Import of common data interfaces from RDX.
hook_AP_COM_DATA_IMP_OBS_W: This hook is for the calling steps using any Weekly Import of common data interfaces from Object Storage.
hook_AP_HIER_IMP_RDX_D: This hook is for the calling steps using any Daily Import of application-specific hierarchies from RDX.
hook_AP_HIER_IMP_OBS_D: This hook is for the calling steps using any Daily Import of application-specific hierarchies from Object Storage.
hook_AP_HIER_IMP_RDX_W: This hook is for the calling steps using any Weekly Import of application-specific hierarchies from RDX.
hook_AP_HIER_IMP_OBS_W: This hook is for the calling steps using any Weekly Import of application-specific hierarchies from Object Storage.
hook_AP_PRE_DATA_IMP_RDX_D: This hook is for the calling steps using any Daily Import of application-specific data interfaces from RDX.
hook_AP_PRE_DATA_IMP_OBS_D: This hook is for the calling steps using any Daily Import of application-specific data interfaces from Object Storage.
hook_AP_PRE_DATA_IMP_RDX_W: This hook is for the calling steps using any Weekly Import of application-specific data interfaces from RDX.
hook_AP_PRE_DATA_IMP_OBS_W: This hook is for the calling steps using any Weekly Import of application-specific data interfaces from Object Storage.
hook_AP_BATCH_AGG_D: This hook is for the calling steps doing any regular daily batch aggregation after hierarchy and data loads.
hook_AP_BATCH_AGG_W: This hook is for the calling steps doing any regular weekly batch aggregation after hierarchy and data loads.
hook_AP_POST_DATA_IMP_RDX_D: This hook is for the calling steps using any Daily Import of application-specific data interfaces from RDX after the calc steps.
hook_AP_POST_DATA_IMP_OBS_D: This hook is for the calling steps using any Daily Import of application-specific data interfaces from Object Storage after the calc steps.
hook_AP_POST_DATA_IMP_RDX_W: This hook is for the calling steps using any Weekly Import of application-specific data interfaces from RDX after the calc steps.
hook_AP_POST_DATA_IMP_OBS_W: This hook is for the calling steps using any Weekly Import of application-specific data interfaces from Object Storage after the calc steps.
hook_AP_POST_EXP_RDX_D: This hook is for the calling steps using any Daily Exports to RDX after the batch aggs.
hook_AP_POST_EXP_OBS_D: This hook is for the calling steps using any Daily Exports to Object Storage after the batch aggs.
hook_AP_POST_EXP_RDX_W: This hook is for the calling steps using any Weekly Exports to RDX after the batch aggs.
hook_AP_POST_EXP_OBS_W: This hook is for the calling steps using any Weekly Exports to Object Storage after the batch aggs.
hook_AP_WB_BUILD_D: This hook is for the calling steps specific to workbook refresh or build in the daily cycle.
hook_AP_WB_BUILD_W: This hook is for the calling steps specific to workbook refresh or build in the weekly cycle.
batch_exec_list.txt
# Run Batch calc and new custom exports after end of weekly batch
batch_calc_list.txt
batch_loadmeas.txt
batch_exportmeas.txt
Architectural Overview
The figures in this section describe how the IW schema fits into the PDS and RAP contexts
respectively.
Measure Properties
If the customer-provided PL/SQL functions and procedures require write-access to any
RPASCE measures, then they must be marked as "Customer-Managed" in the application
configuration.
In the ConfigTools Workbench, a new column, Customer Managed, has been added to the Measure Definition table. The column defaults to empty, which means false.
To mark a measure as customer-managed, change the value to true.
The "customer-managed" measures must have a database field specified, otherwise an error
will be thrown.
Note:
The "customer-managed" measures cannot be used in cycle groups and the left-
hand side of special expressions because these measures need to be in the same
fact group. Making part of these measures as customer-managed measures/facts will
split this fact group because customer-managed measures are assigned to a
separate fact group.
Example
In the example below, a rule containing execplsql is added to rule group cust6.
One requirement is that a CMF rule group must contain only execplsql rules; it is not possible to mix other kinds of rules with execplsql rules. There can be many execplsql rules in the same rule group. Also, make sure to keep only one expression in each rule.
Integration Configuration
The Integration Configuration Tool will have a new column Customer-Managed for the
Integration Map table. The integration configuration is generated internally and is only shown
here for information purposes.
<integration_map>
<entry>
<fact>ADDVChWhMapT</fact>
<domain>mfpcs</domain>
<measure>ADDVChWhMapT</measure>
<outbound>N</outbound>
<customer-managed>Y</customer-managed>
</entry>
</integration_map>
Example
drdvsrcti<-execplsql("RP_CUSTOM_PKG","sum",drdvsrctt, adhdlcratet,
add2locopnd)
In this example, the LHS measure drdvsrcti is a scalar integer measure. It will be set to the integer value returned by the function named sum in the customer-uploaded package RP_CUSTOM_PKG.
Arguments
LHS
The LHS measure must be a scalar integer measure. It will be set to the integer value returned
by the customer-uploaded PL/SQL function or procedure. The integer value is meant to be a
return code indicating the result of the procedure or function execution. In case of exceptions,
RPASCE will set the LHS measure to a value of -1 to indicate an error. If there are any
exceptions or failures, then the logs will provide further information regarding the reason for the
failure.
RHS
• First argument:
The type of the first argument is string. It can either be a string constant or a scalar string
measure. The first argument is the name of the customer-uploaded package. For more
details regarding uploading custom packages, refer to the section Uploading Custom PL/SQL
Packages.
• Second Argument:
The type of the second argument is string. It can either be a string constant or a scalar string measure. The second argument is the name of a function or procedure within the custom package specified as the first argument of execplsql. This function or procedure will be executed by the execplsql special expression when it is evaluated.
If a function is being specified, make sure the return type is declared as NUMBER in the PL/SQL function declaration.
If a procedure is being specified, make sure there is exactly one OUT parameter of type NUMBER in the PL/SQL procedure declaration.
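As a hedged illustration of these requirements only (the names and parameters below are hypothetical and not part of the product), declarations compatible with execplsql might look like the following:
-- Hypothetical sketch: names and parameters are illustrative only.
-- A function used with execplsql must return NUMBER.
FUNCTION my_custom_function (
  in_level IN VARCHAR2,
  in_flag  IN CHAR,
  in_qty   IN NUMBER
) RETURN NUMBER;
-- A procedure used with execplsql must declare exactly one OUT parameter of type NUMBER.
-- The position of that parameter here is illustrative.
PROCEDURE my_custom_procedure (
  in_level    IN  VARCHAR2,
  in_flag     IN  CHAR,
  in_qty      IN  NUMBER,
  out_retcode OUT NUMBER
);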
Examples
Consider the PL/SQL function SUM present in the package RP_CUSTOM_PKG. To execute the SUM
function in the RPASCE application batch, first upload RP_CUSTOM_PKG as described in the
section Uploading Custom PL/SQL Packages. The PL/SQL function SUM is declared as below
in the package RP_CUSTOM_PKG.
Here is a sample definition of the SUM function that adds two measures and writes the result to a third measure. Note that the measure lhsMeas is an IN argument even though the function SUM updates it. The measure lhsMeas must be marked as a customer-managed measure as described in the Measure Properties subsection of the RPASCE Configuration Tools Changes section.
FUNCTION sum (
lhsmeas IN VARCHAR2,
rhsmeas1 IN VARCHAR2,
rhsmeas2 IN VARCHAR2
) RETURN NUMBER IS
-- EXPR 1: lhsMeas = rhsMeas2 + rhsMeas1
WHERE
fact_name = lhsmeas;
SELECT
fact_group
INTO rhs1factgroup
FROM
rp_g_fact_info_md
WHERE
fact_name = rhsmeas1;
SELECT
fact_group
INTO rhs2factgroup
FROM
rp_g_fact_info_md
WHERE
fact_name = rhsmeas2;
lhsfacttable := 'rp_g_'
|| lhsfactgroup
|| '_ft';
rhs1facttable := 'rp_g_'
|| rhs1factgroup
|| '_ft';
rhs2facttable := 'rp_g_'
|| rhs2factgroup
|| '_ft';
na_ut_lhsmeas := ( na_ut_rhsmeas2 + na_ut_rhsmeas1 );
dept_id,
stor_id,
'
|| rhsmeas1
|| '
FROM
'
|| rhs1facttable
|| '
) rhsft01
FULL OUTER JOIN (
SELECT
partition_id,
dept_id,
stor_id,
'
|| rhsmeas2
|| '
FROM
'
|| rhs2facttable
|| '
) rhsft02 ON rhsft01.partition_id = rhsft02.partition_id
AND rhsft01.dept_id = rhsft02.dept_id
AND rhsft01.stor_id = rhsft02.stor_id
)
rhs_final ON ( lhs.partition_id = rhs_final.partition_id
AND lhs.dept_id = rhs_final.dept_id
AND lhs.stor_id = rhs_final.stor_id )
WHEN MATCHED THEN UPDATE
SET lhs.'
|| lhsmeas
|| '= nullif(rhs_final.'
|| lhsmeas
|| ', '
|| na_ut_lhsmeas
|| ') DELETE
WHERE
rhs_final.'
|| lhsmeas
|| ' = '
|| na_ut_lhsmeas
|| '
WHEN NOT MATCHED THEN
INSERT (
lhs.partition_id,
lhs.dept_id,
lhs.stor_id,
lhs.'
|| lhsmeas
|| ' )
VALUES
( rhs_final.partition_id,
rhs_final.dept_id,
rhs_final.stor_id,
nullif(rhs_final.'
|| lhsmeas
|| ', '
|| na_ut_lhsmeas
|| ') )
WHERE
rhs_final.'
|| lhsmeas
|| ' != '
|| na_ut_lhsmeas;
dbms_output.put_line(stmt);
EXECUTE IMMEDIATE stmt;
COMMIT;
RETURN 0;
END sum;
Now, to execute this SUM function from the application batch, add the rule below to the application configuration as described in the Rules and Expressions subsection of the RPASCE Configuration Tools Changes section. Add the rule group containing the rule to the batch control files as described in the section RPASCE Batch Control File Changes. Then patch the application with the updated configuration and batch control files.
drdvsrcti<-execplsql("RP_CUSTOM_PKG","sum",drdvsrctt, adhdlcratet,
add2locopnd)
Here, all three measures are placeholder scalar string measures that point to the actual measures being summed.
In this example, the input scalar measures are mapped as follows:
• drdvsrctt: lpwpsellthrmn - dept_stor - customer-managed (LHS measure)
Label: Wp Sell Thru R % Min Threshold
• adhdlcratet: lpwprtnmn - dept_stor (RHS1)
Label: Wp Returns R % Min Threshold
• add2locopnd: lpwprtnmx - dept_stor (RHS2)
Label: Wp Returns R % Max Threshold
Alternatively, the rule could have been configured as below. However, that would mean it is not possible to change the input measures as part of the batch; a patch would be needed to update the input measures passed to the SUM procedure.
Note:
The measures are in quotes as they are passed to PL/SQL as string constants. If the
quotes are missing, then RPASCE will throw an error indicating that it is not possible
to invoke execplsql using non-scalar measures.
drdvsrcti<-execplsql("RP_CUSTOM_PKG","sum",'lpwpsellthrmn' ,
'lpwprtnmn' ,'lpwprtnmx' )
Execute this rule group through batch and build a measure analysis workbook with the involved measures; you can then verify that the SUM evaluated correctly.
The following examples demonstrate how the execplsql special expression can invoke PL/SQL with a variable number and type of input arguments.
intscalar01<-execplsql("RP_CUSTOM_PKG","custom_procedure1","dvsn", true, 1,
1)
intscalar01<-execplsql("RP_CUSTOM_PKG","custom_procedure2","dvsn",
1123.5813, 23, -1)
intscalar01<-execplsql("RP_CUSTOM_PKG","custom_function1","dvsn", true, 1)
intscalar01<-execplsql("RP_CUSTOM_PKG","custom_function2","dvsn", 1)
intscalar01<-execplsql("RP_CUSTOM_PKG","custom_function2",strscalar1,
intscalar02)
intscalar01<-execplsql("RP_CUSTOM_PKG","custom_procedure3","dvsn",
datescalar2, 1, 1)
The PL/SQL counterparts are defined, through very simple demonstration code, in the example
custom package below.
rp_custom_pkg.pkb
function SUM
(lhsMeas IN VARCHAR2,
rhsMeas1 IN VARCHAR2,
rhsMeas2 IN VARCHAR2) return number
is
-- EXPR 1: lhsMeas = rhsMeas2 + rhsMeas1
rhs1FactGroup varchar2(4000);
rhs2FactGroup varchar2(4000);
lhsFactTable varchar2(4000);
rhs1FactTable varchar2(4000);
rhs2FactTable varchar2(4000);
stmt varchar2(8000);
BEGIN
rp_g_common_pkg.clear_facts(varchar2_table(lhsMeas));
select fact_group into lhsFactGroup from RP_G_FACT_INFO_MD where
FACT_NAME = lhsMeas;
select fact_group into rhs1FactGroup from RP_G_FACT_INFO_MD where
FACT_NAME = rhsMeas1;
select fact_group into rhs2FactGroup from RP_G_FACT_INFO_MD where
FACT_NAME = rhsMeas2;
lhsFactTable := 'rp_g_' || lhsFactGroup || '_ft';
rhs1FactTable := 'rp_g_' || rhs1FactGroup || '_ft';
rhs2FactTable := 'rp_g_' || rhs2FactGroup || '_ft';
na_ut_lhsMeas := ( na_ut_rhsMeas2 + na_ut_rhsMeas1 );
-- UPDATE rp_g_fact_info_md
-- SET
-- table_na =
-- CASE lower(fact_name)
-- WHEN 'b' THEN
-- to_char(na_ut_lhsMeas)
-- END
-- WHERE
-- lower(fact_name) IN ( lhsMeas );
stor_id,
' || rhsMeas2 || '
FROM
' || rhs2FactTable || '
) rhsft02 ON rhsft01.partition_id = rhsft02.partition_id
AND rhsft01.dept_id = rhsft02.dept_id
AND rhsft01.stor_id = rhsft02.stor_id
)
rhs_final ON ( lhs.partition_id = rhs_final.partition_id
AND lhs.dept_id = rhs_final.dept_id
AND lhs.stor_id = rhs_final.stor_id )
WHEN MATCHED THEN UPDATE
SET lhs.' || lhsMeas || '= nullif(rhs_final.' || lhsMeas || ', ' ||
na_ut_lhsMeas || ') DELETE
WHERE
rhs_final.' || lhsMeas || ' = ' || na_ut_lhsMeas || '
WHEN NOT MATCHED THEN
INSERT (
lhs.partition_id,
lhs.dept_id,
lhs.stor_id,
lhs.' || lhsMeas || ' )
VALUES
( rhs_final.partition_id,
rhs_final.dept_id,
rhs_final.stor_id,
nullif(rhs_final.' || lhsMeas || ', ' || na_ut_lhsMeas || ') )
WHERE
rhs_final.' || lhsMeas || ' != ' || na_ut_lhsMeas ;
DBMS_OUTPUT.PUT_LINE (stmt);
commit;
return 0;
END SUM;
end RP_CUSTOM_PKG;
rp_custom_pkg.pks
Limitations
Boolean arguments must be recast as character types; the PL/SQL function or procedure should declare them as a CHAR type. RPASCE sets the CHAR to T for true and F for false. On the expression side, booleans are handled the same way they are handled in any other RPASCE expression: pass a scalar boolean measure or a boolean constant (true or false) to the execplsql special expression.
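As a hedged sketch of this behavior (the procedure name and parameters are illustrative, written as the procedure might appear inside the custom package body):
-- Hypothetical sketch: a boolean passed to execplsql arrives in PL/SQL as CHAR.
PROCEDURE my_flag_demo (
  in_flag     IN  CHAR,    -- RPASCE passes 'T' for true and 'F' for false
  out_retcode OUT NUMBER
) IS
BEGIN
  IF in_flag = 'T' THEN
    out_retcode := 1;      -- the boolean argument was true
  ELSE
    out_retcode := 0;      -- the boolean argument was false
  END IF;
END my_flag_demo;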
File: batch_calc_list.txt
In this example, the calc set name is iw_sum, which is of type group, meaning it executes a rule group. The third item is the rule group name, cust7. Rule group cust7 has the CMF property set and contains execplsql rules.
File: batch_oat_list.txt
Here the Batch Control Group Name is calc and the batch set name is iw_sum, meaning it looks in the file batch_calc_list.txt for an entry named iw_sum, which was added in the step above. The third item is the label, which appears in the UI drop-down list when the user runs the batch calc group OAT task.
File: batch_exec_list.txt
Here iw_all is the Batch Set Name, the batch task type is calc, and the parameter is iw_sum. When iw_all is invoked, it looks for an entry named iw_sum in batch_calc_list.txt; see the first step above for that entry.
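Because the original screenshots of these entries are not reproduced here, the lines below are only a hedged sketch of what the three entries described above might look like, assuming the pipe-delimited format used by the RPASCE batch control files (the label text is illustrative):
# batch_calc_list.txt entry: calc set name | type | rule group
iw_sum | group | cust7
# batch_oat_list.txt entry: batch control group | batch set name | label
calc | iw_sum | Run IW Sum Calculation
# batch_exec_list.txt entry: batch set name | task type | parameter
iw_all | calc | iw_sum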
The iw_all can be made part of a daily batch as shown in the example below.
File: batch_exec_list.txt
RPASCE Deployment
The customer-managed PL/SQL functions and procedures are uploaded to the IW schema.
For more information on uploading the custom packages, refer to the section Uploading
Custom PL/SQL Packages.
During evaluation of the execplsql special expression, RPASCE switches to the IW schema user, to limit the scope of writable data access, and then executes the function or procedure. During application deploy and patch, RPASCE grants the necessary privileges to the IW schema user. These grants ensure that the IW schema user can read all the fact tables and metadata tables in the PDS through synonyms, while write access is only provided to fact tables for the measures marked as customer-managed in the application configuration.
If the configuration is modified so that additional measures are marked as customer-managed, or existing customer-managed measures are made non-customer-managed, the application patch operation updates the privileges accordingly.
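As a small hedged illustration of this read access, a query such as the following could be run from the IW schema; it uses only the metadata table already shown in the SUM example above.
-- The PDS metadata is readable from IW through synonyms; this lists a few
-- facts and the fact groups they belong to (read-only access).
SELECT fact_name, fact_group
  FROM rp_g_fact_info_md
 WHERE ROWNUM <= 10;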
The following anonymous blocks illustrate calls to functions in the rp_g_rpas_helper_pkg package.
-- Look up the internal fact name backing the measure drtynslsu.
declare
l_fact varchar2(30);
begin
l_fact := rp_g_rpas_helper_pkg.get_fact_name('drtynslsu');
end;
-- Look up the fact name, then get the NA (default) value of that fact.
declare
l_fact varchar2(30);
l_na_value number;
begin
l_fact := rp_g_rpas_helper_pkg.get_fact_name('drtynslsu');
l_na_value := rp_g_rpas_helper_pkg.get_na_value(l_fact);
end;
-- Look up the fact name, then get the logical space of that fact.
declare
l_fact varchar2(30);
l_log_space number;
begin
l_fact := rp_g_rpas_helper_pkg.get_fact_name('drtynslsu');
l_log_space := rp_g_rpas_helper_pkg.get_logical_space(l_fact);
end;
-- Look up the fact name, then get the fact group it belongs to.
declare
l_fact varchar2(30);
l_group varchar2(30);
begin
l_fact := rp_g_rpas_helper_pkg.get_fact_name('drtynslsu');
l_group := rp_g_rpas_helper_pkg.get_fact_group_name(l_fact);
end;
-- Look up the fact name, then get the name of the fact table that stores it.
declare
l_fact varchar2(30);
l_table varchar2(30);
begin
l_fact := rp_g_rpas_helper_pkg.get_fact_name('drtynslsu');
l_table := rp_g_rpas_helper_pkg.get_table_name(l_fact);
end;
-- Get the number of partitions in the PDS.
declare
l_parts number;
begin
l_parts := rp_g_rpas_helper_pkg.get_number_of_partitions;
end;
-- Get the partition level of the PDS.
declare
l_part_level varchar2(30);
begin
l_part_level := rp_g_rpas_helper_pkg.get_partition_level;
end;
-- Clear the fact corresponding to the measure drtynslsu.
declare
l_result boolean;
begin
l_result := rp_g_rpas_helper_pkg.clear_fact('drtynslsu');
end;
-- Look up the fact name, then get the base intersection of that fact.
declare
l_fact varchar2(30);
l_intx varchar2(30);
begin
l_fact := rp_g_rpas_helper_pkg.get_fact_name('drtynslsu');
l_intx := rp_g_rpas_helper_pkg.get_base_intx(l_fact);
end;
-- Convert the base intersection of the fact into an array of dimension levels.
declare
l_fact varchar2(30);
l_intx varchar2(30);
l_array dim_level_array;
begin
l_fact := rp_g_rpas_helper_pkg.get_fact_name('drtynslsu');
l_intx := rp_g_rpas_helper_pkg.get_base_intx(l_fact);
l_array := rp_g_rpas_helper_pkg.intx_to_level(l_intx);
end;
-- List the parent levels of the level styl.
declare
l_array level_array;
begin
l_array := rp_g_rpas_helper_pkg.get_parent_levels('styl');
end;
-- List the child levels of the level styl.
declare
l_array level_array;
begin
l_array := rp_g_rpas_helper_pkg.get_child_levels('styl');
end;
-- Check whether styl is a higher level than dept.
declare
l_val boolean;
begin
l_val := rp_g_rpas_helper_pkg.is_higher_level('styl', 'dept');
end;
-- Check whether dept is a lower level than styl.
declare
l_val boolean;
begin
l_val := rp_g_rpas_helper_pkg.is_lower_level('dept', 'styl');
end;
Input Data Extensibility
Secondary Data Source | AIF DATA Jobs to Enable | AIF DATA Jobs to Disable
PROMO_DETAIL.csv | SI_W_RTL_PROMO_IT_LC_DS_MERGE_JOB, COPY_SI_PROMO_DETAIL_JOB, STG_SI_PROMO_DETAIL_JOB | SI_W_RTL_PROMO_IT_LC_DS_JOB
SHIPMENT_HEAD.csv | SI_W_RTL_SHIP_DETAILS_DS_MERGE_JOB, COPY_SI_SHIPMENT_HEAD_JOB, STG_SI_SHIPMENT_HEAD_JOB | SI_W_RTL_SHIP_DETAILS_DS_JOB
SHIPMENT_DETAIL.csv | SI_W_RTL_SHIP_IT_LC_DY_FS_MERGE_JOB, COPY_SI_SHIPMENT_DETAIL_JOB, STG_SI_SHIPMENT_DETAIL_JOB | SI_W_RTL_SHIP_IT_LC_DY_FS_JOB
SALES.csv | SI_W_RTL_SLS_TRX_IT_LC_DY_FS_MERGE_JOB, COPY_SI_SALES_JOB, STG_SI_SALES_JOB | SI_W_RTL_SLS_TRX_IT_LC_DY_FS_JOB
SALES_PACK.csv | SI_W_RTL_SLSPK_IT_LC_DY_FS_MERGE_JOB, COPY_SI_SALES_PACK_JOB, STG_SI_SALES_PACK_JOB | SI_W_RTL_SLSPK_IT_LC_DY_FS_JOB
INVENTORY.csv | SI_W_RTL_INV_IT_LC_DY_FS_MERGE_JOB, COPY_SI_INVENTORY_JOB, STG_SI_INVENTORY_JOB | SI_W_RTL_INV_IT_LC_DY_FS_JOB
The way the data is merged depends on the interface. For PROMO_DETAIL.csv, the Pricing CS
data always takes priority, and the CSV file data is inserted where it does not match an existing
record. This is because the promotion header interface (W_RTL_PROMO_DS / D tables and
PROMOTION.csv file) from Pricing CS already follows that logic and this detail-level table needs
to match it. You should aim to ensure that there is no overlap between the Pricing CS data and
the external CSV file data.
Fact data uses configurable merge logic. For shipment data, it is a configuration option
whether you want MFCS data or the CSV files to get first priority when merging. Update the
C_ODI_PARAM_VW table from the Control Center for parameter SHIP_SI_MERGE_PRIORITY. When
set to MFCS, the CSV file data will only be inserted if there is no matching record. When set to
EXT (or any other value), the external file data will overwrite any matching records from MFCS
and insert for all other records. Sales and inventory use the same logic based on the value in
parameters SALES_SI_MERGE_PRIORITY and INV_SI_MERGE_PRIORITY.
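As a hedged illustration, the current values of these parameters could be reviewed with a query like the one below; the PARAM_NAME and PARAM_VALUE column names are assumptions about C_ODI_PARAM_VW and are not confirmed by this guide.
-- Illustrative review query; column names are assumed
SELECT param_name, param_value
  FROM c_odi_param_vw
 WHERE param_name IN ('SHIP_SI_MERGE_PRIORITY',
                      'SALES_SI_MERGE_PRIORITY',
                      'INV_SI_MERGE_PRIORITY');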
1. Create programs or REST APIs that insert data into the following staging tables from IW:
RAF_FILTER_GROUP_MERCH_STG
RAF_FILTER_GROUP_ORG_STG
RAF_SEC_USER_STG
RAF_SEC_GROUP_STG
RAF_SEC_USER_GROUP_STG
2. From POM, enable and run the ad hoc process RAF_SEC_FILTER_LOAD_ADHOC in the AIF DATA schedule, which contains just one job, named RAF_SEC_FILTER_LOAD_JOB. This job truncates the target data warehouse tables (such as RAF_SEC_USER) and then inserts the contents of the staging tables into the target tables.
3. If you are moving the data downstream to LPO or other AIF applications, run the
associated data security load jobs for those applications.
The relationship between the internal RAF tables is shown in the diagram below.
Note:
The same set of RAF_* tables exist in multiple database schemas, so you must be
careful when querying and loading them. When you want to query the tables
populated by the AIF DATA job, you must specify RADM01 as the owner of the table
(such as select * from RADM01.RAF_SEC_USER). When you want to query the tables
owned by AIF APPS, you must specify RASE01 as the owner.
available for this table load). Once that is done, you may include the 4th type code on records
in SALES.csv. The additional sales type will be exported to PDS in two different ways:
1. The MFP sales interface (W_PDS_SLS_IT_LC_WK_A) has a set of fields for Total Sales, which
will be inclusive of Other sales. This allows you to have the default measures for Reg, Pro,
and Clr sales and custom non-GA measures for Total Sales (which will not be equal to
R+P+C sales). You could use total sales measures minus the other types to arrive at
values specifically for Other sales or any other combination of retail types.
2. The IPOCS-Demand Forecasting sales interface (W_PDS_GRS_SLS_IT_LC_WK_A) will
maintain the separate rows for other sales on the output since that interface has the retail
type code on it directly. You may define custom measures to load the Other sales into
IPOCS-Demand Forecasting.
Extensibility Example – Product Hierarchy
below. For this example, we assume a new product level named “Sub-Category” will be added.
This level will be placed between the Department and Class levels within the main hierarchy in
AIF and RPAS applications.
ITEM,FLEX1_CHAR_VALUE,FLEX2_CHAR_VALUE
30018,100101,WOMEN'S CLOTHING
30019,100101,WOMEN'S CLOTHING
51963371,100103,WOMEN'S INSPIRATION
1101247,100104,WOMEN'S FAST FASHION
Once you have generated this data for all items in the hierarchy, then you will load it into the
platform following the Initialize Dimensions process in Data Loads and Initial Batch Processing.
The following jobs in the RI_DIM_INITIAL_ADHOC process are used to load this file:
• COPY_SI_PRODUCT_ALT_JOB
• STG_SI_PRODUCT_ALT_JOB
• SI_W_PRODUCT_FLEX_DS_JOB
• W_PRODUCT_FLEX_D_JOB
You should already have loaded a PRODUCT.csv file at this stage, or you should load it at the
same time as the PRODUCT_ALT.csv file, so that the full product hierarchy is available in the
data warehouse. Once loaded, the data for the alternate levels will be available in the
W_PRODUCT_FLEX_D table for review. At this stage, the data is only available in the data
warehouse table; it has not been configured for use in any other solution.
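For example, a hedged review query in the style of the Appendix E validation scripts could be used; it assumes the FLEX column names on W_PRODUCT_FLEX_D mirror the columns in the PRODUCT_ALT.csv file.
-- Illustrative review of the loaded alternate-level data; column names are assumed
SELECT flex1_char_value, flex2_char_value, COUNT(*)
  FROM w_product_flex_d
 GROUP BY flex1_char_value, flex2_char_value;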
AI Foundation Setup
To see the additional hierarchy level in AI Foundation applications, you must create an
alternate product hierarchy that includes both the new level and all other levels from your
product hierarchy that you wish to use.
The first step in defining the alternate product hierarchy in AIF is setting up the configuration
tables RSE_ALT_HIER_TYPE_STG and RSE_ALT_HIER_LEVEL_STG. These tables are updated from
the Manage System Configurations screen in the Control & Tactical Center. For this example,
the data you create may look like the following:
The configurations specified in this example show how to refer to the default hierarchies (which
are loaded through the staging table W_PRODUCT_DTS) and the alternate hierarchies (loaded
through the table W_PRODUCT_ALT_DTS). When referring to a default hierarchy level, you should
use the parameters shown here for all the SRC fields. You can modify the HIER_LEVEL_ID to
change the placement of the levels within the structure; however the standard hierarchy rules
must still pass after reorganizing them (for example, you cannot place DEPT below CLS because
then the same child node may have multiple parent nodes).
After your configuration is finalized, you may generate the alternate hierarchy in AIF using
RSE_MASTER_ADHOC_JOB with the -X flag. This will only load the alternate hierarchy; it assumes
you have also loaded the main hierarchy using the -p flag, or you are loading both of them
together using -pX. For nightly batch job details, refer to the AI Foundation Implementation
Guide, section “Building Alternate Hierarchy in AIF”.
It is also necessary to update RSE_CONFIG options to use the new hierarchy. For example, to use the hierarchy in LPO, change the PMO_PROD_HIER_TYPE parameter to the ID for the new hierarchy. You can find the ID for the hierarchy in table RSE_HIER_TYPE, column ID, which is viewable in Manage System Configurations. Custom hierarchies have ALT_FLG=Y in their rows of the table.
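For example, a hedged lookup of the new hierarchy's identifier might look like the query below; the ID and ALT_FLG columns are named in the text above, and any other usage is an assumption.
-- Illustrative query to find the custom hierarchy ID for PMO_PROD_HIER_TYPE
SELECT id
  FROM rse_hier_type
 WHERE alt_flg = 'Y';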
If you will use the alternate hierarchy in forecast generation for Planning, then the rest of the
data aggregation and forecasting processes are the same, whether you are using the standard
product hierarchy or the alternate one. You will follow all steps outlined in the AI Foundation
Implementation Guide sections for “Forecast Configuration for MFP and AP” and “Forecast
Configuration for IPO-DF and AIF” as needed. A summary of those steps is:
1. Set up the configuration to use your alternate hierarchy
2. Create your run types and select your desired intersections, which can include the new
alternate hierarchy levels as the forecast level
3. Perform aggregation, estimation, and forecasting processes following the usual steps in
the AIF guides
4. Run the ad hoc jobs from POM to export the forecast results to Planning, such as
RSE_MFP_FCST_EXPORT_ADHOC_JOB
If you generate a forecast using the custom level, then the export to PDS will appear for that
level description as defined in RSE_ALT_HIER_LEVEL_STG.DESCR. In this example, you may
generate a forecast at the SUBCAT / AREA / Fiscal Week levels for use in MFP. These are the
level names that will appear in the forecast export and must be configured for use in MFP.
Once it reaches the staging table in the RDX schema, the same can be interfaced to PDS
hierarchies by making changes to interface.cfg. Follow the steps below for integrating the
new dimension into PDS for the Product Hierarchy, which includes changes to interface.cfg
for importing the dimension and to export and import AIF data at the new dimension level.
• Update the configuration for either GA (template activated) or non-GA (template de-
activated) to include the new dimension in the hierarchy structure. In the example below,
say ‘Sub-Category’ was added as dimension ‘scat’ between Class and Department.
• Update the interface.cfg to interface the newly added dimension from the corresponding
mapped column from RDX.
In the example below, entries for HDM50 and HDL50 were added to map the dimension position and label for the new dimension from the RDX staging table. If you are using the GA template, or if you are not using a template but are starting from the GA configuration, use numbers starting from 50 for new dimensions. If it is a fully custom configuration, you may use any numbering.
W_PDS_PRODUCT_D:PDS:HDM01:SKU:ITEM:
W_PDS_PRODUCT_D:PDS:HDM04:SCLS:SUBCLASS_ID:
W_PDS_PRODUCT_D:PDS:HDM05:CLSS:CLASS_ID:
W_PDS_PRODUCT_D:PDS:HDM06:DEPT:DEPT:
W_PDS_PRODUCT_D:PDS:HDM07:PGRP:GROUP_NO:
W_PDS_PRODUCT_D:PDS:HDM08:DVSN:DIVISION:
W_PDS_PRODUCT_D:PDS:HDM09:CMPP:COMPANY:
W_PDS_PRODUCT_D:PDS:HDM50:SCAT:FLEX1_CHAR_VALUE:
W_PDS_PRODUCT_D:PDS:HDL01::ITEM_DESC:
W_PDS_PRODUCT_D:PDS:HDL04::SUB_NAME:
W_PDS_PRODUCT_D:PDS:HDL05::CLASS_NAME:
W_PDS_PRODUCT_D:PDS:HDL06::DEPT_NAME:
W_PDS_PRODUCT_D:PDS:HDL07::GROUP_NAME:
W_PDS_PRODUCT_D:PDS:HDL08::DIV_NAME:
W_PDS_PRODUCT_D:PDS:HDL09::CO_NAME:
W_PDS_PRODUCT_D:PDS:HDL50::FLEX2_CHAR_VALUE:
Note:
If you are using the GA template with extensibility, you also need to add custom_add as the last column for the newly added entries.
W_PDS_PRODUCT_D:PDS:HDM50:SCAT:FLEX1_CHAR_VALUE:custom_add
…
W_PDS_PRODUCT_D:PDS:HDL50::FLEX2_CHAR_VALUE:custom_add
• To export plans generated at the new level to AIF for use in forecast generation, create the plans at the new level and export the plans defined at that level to AIF. Assuming the intersection of the plans is the new dimension level, ensure the product dimension (DIM02 in the example below, which is mapped to PROD_KEY) is set to SCAT to identify the product intersection of the data in PDS as Sub-Category. For AIF to understand the product level as Sub-Category, set the PROD_LEVEL value to SUBCAT, as defined in the AIF alternate hierarchy setup.
MFP_PLAN1_EXP:MPOP:DIM01:WEEK:CLND_KEY:
MFP_PLAN1_EXP:MPOP:DIM02:SCAT:PROD_KEY:
MFP_PLAN1_EXP:MPOP:DIM03:CHNC:LOC_KEY:
MFP_PLAN1_EXP:MPOP:DATA::CLND_LEVEL:WEEK
MFP_PLAN1_EXP:MPOP:DATA::PROD_LEVEL:SUBCAT
MFP_PLAN1_EXP:MPOP:DATA::LOC_LEVEL:AREA
…
MFP_PLAN1_EXP:MPOP:DATA:MFP_MPOPLDOWD:CAL_DATE:
MFP_PLAN1_EXP:MPOP:DATA:MFP_MPOPSLSU:SLS_QTY:
MFP_PLAN1_EXP:MPOP:DATA:MFP_MPOPSLSR:SLS_RTL_AMT:
Note:
Some export tables to AIF may not have PROD_LEVEL or PROD_HIER_LEVEL defined. If they are not present, then that specific interface table is only meant for pre-defined product levels and you cannot change it.
• If AIF is generating the forecast at the new 'SUBCAT' level and exporting the forecast data, then the forecast can be pulled into MFP using the following updates to the forecast interface. Assuming the new forecast measures are defined at the Sub-Category level instead of the existing Subclass level in GA, the changes needed are shown below. Update the product dimension to SCAT to specify the intersection for the imported measures as identified by PDS, and also set the filter criteria for the imported data in PROD_HIER_LEVEL to SUBCAT as identified by the AIF hierarchy setup.
RSE_FCST_DEMAND_EXP:MPP:DIM01:WEEK:FCST_DATE_FROM:
RSE_FCST_DEMAND_EXP:MPP:DIM02:SCAT:PROD_EXT_KEY:
RSE_FCST_DEMAND_EXP:MPP:DIM03:CHNC:LOC_EXT_KEY:
RSE_FCST_DEMAND_EXP:MPP:DATA:MFP_MPWPDMDP1U:REG_PR_SLS_QTY:
RSE_FCST_DEMAND_EXP:MPP:DATA:MFP_MPWPDMDP1R:REG_PR_SLS_AMT:
…
RSE_FCST_DEMAND_EXP:MPP:FILTER::CAL_HIER_LEVEL:Fiscal Week
RSE_FCST_DEMAND_EXP:MPP:FILTER::PROD_HIER_LEVEL:SUBCAT
RSE_FCST_DEMAND_EXP:MPP:FILTER::LOC_HIER_LEVEL:CHANNEL
RSE_FCST_DEMAND_EXP:MPP:FILTER::CUSTSEG_EXT_KEY:
RSE_FCST_DEMAND_EXP:MPP:FILTER::FCST_TYPE:NPI
Note:
Some import tables from AIF may not have PROD_LEVEL or PROD_HIER_LEVEL
defined. If they are not present, then that specific interface table is only meant for
pre-defined product levels and you cannot change it.
To configure these interfaces, use the parameters on C_ODI_PARAM_VW in the Manage System
Configurations screen. Our plan data at the levels of SUBCAT / AREA / WEEK will need this set of
parameters:
The product level of FLEX1 correlates with the column in the W_PRODUCT_ALT_DTS table that you
used to load the alternate hierarchy level in the very beginning of the process, and matches the
field mapped during AIF alternate hierarchy setup.
To integrate the data from MFP to the data warehouse, the jobs in the AIF DATA schedule in
POM that are used are:
• W_RTL_PLAN1_PROD1_LC1_T1_FS_SDE_JOB
• W_RTL_PLAN1_PROD1_LC1_T1_F_JOB
These jobs are included in the AIF DATA nightly schedule and can also be found in the ad hoc
process LOAD_PLANNING1_DATA_ADHOC. This process populates the table
W_RTL_PLAN1_PROD1_LC1_T1_F, which can then be loaded to the AIF forecasting module using
the AIF APPS schedule job RSE_FCST_SALES_PLAN_LOAD_JOB. This job populates the table
RSE_FCST_SALES_PLAN_DTL which is used in generating plan-influenced forecasts.
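A hedged sanity check after these jobs complete might simply confirm that rows arrived in the target tables named above (schema prefixes, if required in your environment, are omitted here):
-- Illustrative row-count checks for the plan integration
SELECT COUNT(*) FROM w_rtl_plan1_prod1_lc1_t1_f;
SELECT COUNT(*) FROM rse_fcst_sales_plan_dtl;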
A
Legacy Foundation File Reference
The following table provides a cross-reference for legacy application input files and the Retail
Analytics and Planning files that replace them. This list covers foundation data flows which
span multiple applications, such as MFP and RI. Other foundation files exist which do not
replace multiple application files; those are specified in the Interfaces Guide in My Oracle
Support.
File Group | File Type | Legacy Planning Files | Legacy RI/AI Foundation Files | RAP Files
Product | Dimension | prod.csv.dat | W_PRODUCT_DS.dat, W_PRODUCT_DS_TL.dat, W_PROD_CAT_DHS.dat, W_DOMAIN_MEMBER_DS_TL.dat, W_RTL_PRODUCT_BRAND_DS.dat, W_RTL_PRODUCT_BRAND_DS_TL.dat, W_RTL_IT_SUPPLIER_DS.dat, W_PARTY_ORG_DS.dat, W_RTL_PRODUCT_IMAGE_DS.dat, W_PRODUCT_ATTR_DS.dat, W_RTL_ITEM_GRP1_DS.dat | PRODUCT.csv
Organization | Dimension | loc.csv.dat, stor_metrics.csv.ovr | W_INT_ORG_DS.dat, W_INT_ORG_DS_TL.dat, W_INT_ORG_DHS.dat, W_DOMAIN_MEMBER_DS_TL.dat (for RTL_ORG), W_RTL_CHANNEL_DS.dat, W_INT_ORG_ATTR_DS.dat | ORGANIZATION.csv
Calendar | Dimension | clnd.csv.dat | W_MCAL_PERIOD_DS.dat | CALENDAR.csv
Exchange Rates | Dimension | curh.csv.dat, curr.csv.ovr | W_EXCH_RATE_GS.dat | EXCH_RATE.csv
Attributes | Dimension | patr.csv.dat, patv.csv.ovr | W_RTL_PRODUCT_ATTR_DS.dat, W_RTL_PRODUCT_ATTR_DS_TL.dat, W_DOMAIN_MEMBER_DS_TL.dat (for Diffs), W_RTL_PRODUCT_COLOR_DS.dat | ATTR.csv
Diff Groups | Dimension | sizh.hdr.csv.dat | W_RTL_DIFF_GRP1_DS.dat, W_RTL_DIFF_GRP1_DS_TL.dat | DIFF_GROUP.csv
Product Attribute Assignments | Dimension | prdatt.csv.ovr | W_RTL_ITEM_GRP1_DS.dat | PROD_ATTR.csv
Sales | Fact | rsal.csv.ovr, psal.csv.ovr, csal.csv.ovr, nsls.csv.ovr, rtn.csv.ovr | W_RTL_SLS_TRX_IT_LC_DY_FS.dat, W_RTL_SLSPK_IT_LC_DY_FS.dat | SALES.csv, SALES_PACK.csv
Inventory | Fact | eop.csv.ovr, eopx.csv.ovr, wsal.csv.ovr | W_RTL_INV_IT_LC_DY_FS.dat | INVENTORY.csv
Markdown | Fact | mkd.csv.ovr | W_RTL_MKDN_IT_LC_DY_FS.dat | MARKDOWN.csv
On Order | Fact | oo.csv.ovr | W_RTL_PO_DETAILS_DS.dat, W_RTL_PO_ONDORD_IT_LC_DY_FS.dat | ORDER_HEAD.csv, ORDER_DETAIL.csv
PO Receipts | Fact | rcpt.csv.ovr | W_RTL_INVRC_IT_LC_DY_FS.dat | RECEIPT.csv
Transfers | Fact | tranx.csv.ovr | W_RTL_INVTSF_IT_LC_DY_FS.dat | TRANSFER.csv
Adjustments | Fact | tran.csv.ovr | W_RTL_INVADJ_IT_LC_DY_FS.dat, W_REASON_DS.dat | ADJUSTMENT.csv, REASON.csv
RTVs | Fact | tran.csv.ovr | W_RTL_INVRTV_IT_LC_DY_FS.dat | RTV.csv
Costs | Fact | slsprc.csv.ovr | W_RTL_BCOST_IT_LC_DY_FS.dat, W_RTL_NCOST_IT_LC_DY_FS.dat | COST.csv
Prices | Fact | slsprc.csv.ovr | W_RTL_PRICE_IT_LC_DY_FS.dat | PRICE.csv
W/F Sales and Fees | Fact | tran.csv.ovr | W_RTL_SLSWF_IT_LC_DY_FS.dat | SALES_WF.csv
Vendor Funds (TC 6/7) | Fact | tran.csv.ovr | W_RTL_DEALINC_IT_LC_DY_FS.dat | DEAL_INCOME.csv
Reclass In/Out (TC 34/36) | Fact | tran.csv.ovr | W_RTL_INVRECLASS_IT_LC_DY_FS.dat | INV_RECLASS.csv
Intercompany Margin (TC 39) | Fact | tran.csv.ovr | W_RTL_ICM_IT_LC_DY_FS.dat | IC_MARGIN.csv
B
Context File Table Reference
The following table maps CSV data files to internal tables for the purpose of creating Context
Files. The first parameter on the Context file is a TABLE property containing the table name into
which the CSV data will be loaded. For legacy context file usage, the name of the context file
itself should match the internal table name.
C
Sample Public File Transfer Script for Planning
Apps
This appendix provides an example of how file transfers can be implemented through a shell script. It requires bash, curl, and jq.
#!/bin/bash
BASE_URL="https://__YOUR_TENANT_BASE_URL__"
TENANT="__YOUR-TENANT_ID__"
IDCS_URL="https://_YOUR__IDCS__URL__/oauth2/v1/token"
IDCS_CLIENTID="__YOUR_CLIENT_APPID__"
IDCS_CLIENTSECRET="__YOUR_CLIENT_SECRET___"
IDCS_SCOPE="rgbu:rpas:psraf-__YOUR_SCOPE__"
### FINISHED
clientToken() {
curl -sX POST "${IDCS_URL}" \
--header "Authorization: Basic ${IDCS_AUTH}" \
--header "Content-Type: application/x-www-form-urlencoded" \
--data-urlencode "grant_type=client_credentials" \
--data-urlencode "scope=${IDCS_SCOPE}" | jq -r .access_token
}
ping() {
echo "Pinging"
curl -sfX GET "${BASE_URL}/${TENANT}/RetailAppsReSTServices/services/private/FTSWrapper/ping" \
--header 'Accept: application/json' \
--header 'Accept-Language: en' \
--header "Authorization: Bearer ${CLIENT_TOKEN}" | jq
}
listPrefixes() {
echo "Listing storage prefixes"
curl -sfX GET "${BASE_URL}/${TENANT}/RetailAppsReSTServices/services/private/FTSWrapper/listprefixes" \
--header 'Accept: application/json' \
--header 'Accept-Language: en' \
--header "Authorization: Bearer ${CLIENT_TOKEN}" | jq
}
listFiles() {
echo "Listing files for ${1}"
curl -sfX GET "${BASE_URL}/${TENANT}/RetailAppsReSTServices/services/private/FTSWrapper/listfiles?prefix=${1}" \
--header 'Accept: application/json' \
--header 'Accept-Language: en' \
--header "Authorization: Bearer ${CLIENT_TOKEN}" | jq
}
deleteFiles() {
echo "Deleting files"
json=$(fileCollection $@)
curl --show-error -sfX DELETE "${BASE_URL}/${TENANT}/RetailAppsReSTServices/services/private/FTSWrapper/delete" \
--header 'content-type: application/json' \
--header 'Accept: application/json' \
--header 'Accept-Language: en' \
--header "Authorization: Bearer ${CLIENT_TOKEN}" \
-d "${json}" | jq
}
fileMover() {
movement="${1}"
shift
json=$(fileCollection $@)
requestPAR "${movement}" "${json}"
}
fileCollection() {
local json="{ \"listOfFiles\": [ __FILES__ ] }"
sp="${1}"
shift
echo "${json/__FILES__/${list}}"
}
requestPAR() {
use="${1}"
echo "Requesting PARs for ${use}"
pars="$(curl --show-error -sfX POST "${BASE_URL}/${TENANT}/RetailAppsReSTServices/services/private/FTSWrapper/${use}" \
--header 'content-type: application/json' \
--header 'Accept: application/json' \
--header 'Accept-Language: en' \
--header "Authorization: Bearer ${CLIENT_TOKEN}" \
-d "${2}")"
#Entry point
IDCS_AUTH=$(echo -n ${IDCS_CLIENTID}:${IDCS_CLIENTSECRET} | base64 -w0)
CLIENT_TOKEN=$(clientToken)
case "${1}" in
ping)
ping
;;
listprefixes)
shift
listPrefixes
;;
listfiles)
shift
listFiles ${@}
;;
deletefiles)
shift
deleteFiles ${@}
;;
uploadfiles)
shift
fileMover upload ${@}
;;
downloadfiles)
shift
fileMover download ${@}
;;
*)
echo "Usage: $0"
echo " ping : test service
C-3
Appendix C
functionality"
echo " listprefixes : list registered
prefixes"
echo " listfiles [prefix] : list files within
a prefix"
echo " deletefiles [prefix] [file1] [file2] ... : delete files with
this prefix"
echo " uploadfiles [prefix] [file1] [file2] ... : upload files with
this prefix"
echo " downloadfiles [prefix] [file1] [file2] ... : download files
with this prefix"
echo
exit 0
;;
esac
D
Sample Public File Transfer Script for RI and
AIF
#!/bin/bash
BASE_URL="https://__YOUR_TENANT_BASE_URL__"
TENANT="__YOUR-TENANT_ID__"
IDCS_URL="https://_YOUR__IDCS__URL__/oauth2/v1/token"
IDCS_CLIENTID="__YOUR_CLIENT_APPID__"
IDCS_CLIENTSECRET="__YOUR_CLIENT_SECRET___"
IDCS_SCOPE="rgbu:rsp:psraf-__YOUR_SCOPE__"
### FINISHED
clientToken() {
curl -sX POST "${IDCS_URL}" \
--header "Authorization: Basic ${IDCS_AUTH}" \
--header "Content-Type: application/x-www-form-urlencoded" \
--data-urlencode "grant_type=client_credentials" \
--data-urlencode "scope=${IDCS_SCOPE}" | jq -r .access_token
}
ping() {
echo "Pinging"
curl -sfX GET "${BASE_URL}/${TENANT}/RIRetailAppsPlatformServices/services/private/FTSWrapper/ping" \
--header 'Accept: application/json' \
--header 'Accept-Language: en' \
--header "Authorization: Bearer ${CLIENT_TOKEN}" | jq
}
listPrefixes() {
echo "Listing storage prefixes"
curl -sfX GET "${BASE_URL}/${TENANT}/RIRetailAppsPlatformServices/services/private/FTSWrapper/listprefixes" \
--header 'Accept: application/json' \
--header 'Accept-Language: en' \
--header "Authorization: Bearer ${CLIENT_TOKEN}" | jq
}
listFiles() {
echo "Listing files for ${1}"
curl -sfX GET "${BASE_URL}/${TENANT}/RIRetailAppsPlatformServices/services/private/FTSWrapper/listfiles?prefix=${1}" \
--header 'Accept: application/json' \
--header 'Accept-Language: en' \
--header "Authorization: Bearer ${CLIENT_TOKEN}" | jq
}
deleteFiles() {
echo "Deleting files"
json=$(fileCollection $@)
curl --show-error -sfX DELETE "${BASE_URL}/${TENANT}/RIRetailAppsPlatformServices/services/private/FTSWrapper/delete" \
--header 'content-type: application/json' \
--header 'Accept: application/json' \
--header 'Accept-Language: en' \
--header "Authorization: Bearer ${CLIENT_TOKEN}" \
-d "${json}" | jq
}
fileMover() {
movement="${1}"
shift
json=$(fileCollection $@)
requestPAR "${movement}" "${json}"
}
fileCollection() {
local json="{ \"listOfFiles\": [ __FILES__ ] }"
sp="${1}"
shift
echo "${json/__FILES__/${list}}"
}
requestPAR() {
use="${1}"
echo "Requesting PARs for ${use}"
pars="$(curl --show-error -sfX POST "${BASE_URL}/${TENANT}/RIRetailAppsPlatformServices/services/private/FTSWrapper/${use}" \
--header 'content-type: application/json' \
--header 'Accept: application/json' \
--header 'Accept-Language: en' \
--header "Authorization: Bearer ${CLIENT_TOKEN}" \
-d "${2}")"
#Entry point
IDCS_AUTH=$(echo -n ${IDCS_CLIENTID}:${IDCS_CLIENTSECRET} | base64 -w0)
CLIENT_TOKEN=$(clientToken)
case "${1}" in
ping)
ping
;;
listprefixes)
shift
listPrefixes
;;
listfiles)
shift
listFiles ${@}
;;
deletefiles)
shift
deleteFiles ${@}
;;
uploadfiles)
shift
fileMover upload ${@}
;;
downloadfiles)
shift
fileMover download ${@}
;;
*)
echo "Usage: $0"
echo " ping : test service
functionality"
echo " listprefixes : list registered
prefixes"
E
Sample Validation SQLs
This set of sample SQL commands provides scripts to run using APEX that can help validate your initial dimension and fact loads, especially if this is the first time you are loading the data and its quality is unknown. Do not load data into the platform without performing some level of validation on it first; validating up front greatly reduces the time spent reworking and reloading data.
------------------------------------------------
-- Checks for CALENDAR.csv file load
------------------------------------------------
-- Verify initial calendar data before staging it further, row counts should match data file
SELECT * FROM W_MCAL_PERIOD_DTS
-- Check total counts, all counts should be same. This can indirectly check for nulls in required columns.
SELECT
count(*),count(MCAL_CAL_ID),count(MCAL_PERIOD_TYPE),count(MCAL_PERIOD_NAME),
count(MCAL_PERIOD),count(MCAL_PERIOD_ST_DT),count(MCAL_PERIOD_END_DT),count(MCAL_QTR),
count(MCAL_YEAR),count(MCAL_QTR_START_DT),count(MCAL_QTR_END_DT),count(MCAL_YEAR_START_DT),
count(MCAL_YEAR_END_DT) FROM W_MCAL_PERIOD_DTS
-- Checking duplicate rows, if any. This should not return any rows.
SELECT MCAL_YEAR,MCAL_PERIOD_NAME,count(*) FROM W_MCAL_PERIOD_DTS GROUP BY
MCAL_YEAR,MCAL_PERIOD_NAME having count(MCAL_PERIOD_NAME) > 1
------------------------------------------------
-- Checks for PRODUCT.csv file load
------------------------------------------------
-- Check total count, all counts should be same. This can indirectly check for nulls in required columns.
SELECT
count(*),count(item),count(distinct(item)),count(item_level),count(tran_level),
count(LVL4_PRODCAT_ID),count(LVL4_PRODCAT_UID),count(LVL5_PRODCAT_ID),count(LVL5_PRODCAT_UID),
count(LVL6_PRODCAT_ID),count(LVL7_PRODCAT_ID),count(LVL8_PRODCAT_ID),count(TOP_PRODCAT_ID),
count(ITEM_DESC),count(LVL4_PRODCAT_DESC),count(LVL5_PRODCAT_DESC),count(LVL6_PRODCAT_DESC),
count(LVL7_PRODCAT_DESC),count(LVL8_PRODCAT_DESC),count(TOP_PRODCAT_DESC)
FROM W_PRODUCT_DTS
-- Check individual counts to make sure it aligns with your source data
SELECT
count(*),count(ITEM_PARENT),count(distinct(ITEM_PARENT)),count(ITEM_GRANDPARENT),count(distinct(ITEM_GRANDPARENT)) FROM W_PRODUCT_DTS WHERE ITEM_LEVEL = 1
SELECT
count(*),count(ITEM_PARENT),count(distinct(ITEM_PARENT)),count(ITEM_GRANDPARENT),count(distinct(ITEM_GRANDPARENT)) FROM W_PRODUCT_DTS WHERE ITEM_LEVEL = 2
SELECT
count(*),count(ITEM_PARENT),count(distinct(ITEM_PARENT)),count(ITEM_GRANDPARENT),count(distinct(ITEM_GRANDPARENT)) FROM W_PRODUCT_DTS WHERE ITEM_LEVEL = 3
-- Checking duplicate rows, if any. This should not return any rows.
SELECT item,count(1) FROM W_PRODUCT_DTS GROUP BY item having count(1) > 1
-- Check item_level, should not have NULL, should have values only 1,2 or 3. Make sure Count makes sense
SELECT item_level, count(*) FROM W_PRODUCT_DTS GROUP BY item_level ORDER BY 1
-- Check tran_level, should not have NULL, should have only one value for our purpose. Make sure Count makes sense
SELECT tran_level, count(*) FROM W_PRODUCT_DTS GROUP BY tran_level ORDER BY 1
-- Expect records for "MCAT" which is the product hierarchy labels code
SELECT DOMAIN_CODE, DOMAIN_TYPE_CODE,LANGUAGE_CODE,
SRC_LANGUAGE_CODE,count(1)
FROM W_DOMAIN_MEMBER_DS_TL GROUP BY DOMAIN_CODE,
DOMAIN_TYPE_CODE,LANGUAGE_CODE, SRC_LANGUAGE_CODE
-- Check for MCAT records for hierarchy labels, should align with hierarchy level counts
select domain_code,count(*) from W_DOMAIN_MEMBER_LKP_TL group by domain_code
------------------------------------------------
-- Checks for ORGANIZATION.csv file load
------------------------------------------------
-- Verify initial location data before staging it further, row counts should match data file
SELECT * FROM W_INT_ORG_DTS
-- Check total count, all counts should be same. This can indirectly check for nulls in required columns.
SELECT
count(*),count(ORG_NUM),count(distinct(ORG_NUM)),count(ORG_TYPE_CODE),count(CURR_CODE),
count(ORG_HIER10_NUM),count(ORG_HIER11_NUM),count(ORG_HIER12_NUM),count(ORG_HIER13_NUM),
count(ORG_TOP_NUM),count(ORG_DESC),count(ORG_HIER10_DESC),count(ORG_HIER11_DESC),
count(ORG_HIER12_DESC),count(ORG_HIER13_DESC),count(ORG_TOP_DESC) FROM W_INT_ORG_DTS
-- Checking duplicate rows, if any. This should not return any rows.
SELECT ORG_NUM,count(1) FROM W_INT_ORG_DTS GROUP BY ORG_NUM having count(1) >
1
-- After DTS to DS job executed, check following tables for expected data
SELECT /*+ OPT_PARAM('_optimizer_answering_query_using_stats' 'FALSE') */
count(*) FROM W_INT_ORG_DS
SELECT /*+ OPT_PARAM('_optimizer_answering_query_using_stats' 'FALSE') */
count(*) FROM W_INT_ORG_DS_TL
SELECT /*+ OPT_PARAM('_optimizer_answering_query_using_stats' 'FALSE') */
count(*) FROM W_INT_ORG_DHS
SELECT /*+ OPT_PARAM('_optimizer_answering_query_using_stats' 'FALSE') */
count(*) FROM W_DOMAIN_MEMBER_DS_TL
-- Expect records for "RTL_ORG" which is the location hierarchy labels code
SELECT DOMAIN_CODE, DOMAIN_TYPE_CODE,LANGUAGE_CODE,
SRC_LANGUAGE_CODE,count(1)
FROM W_DOMAIN_MEMBER_DS_TL GROUP BY DOMAIN_CODE,
DOMAIN_TYPE_CODE,LANGUAGE_CODE, SRC_LANGUAGE_CODE
SELECT '1',
'AREA org_hier12_num' LEVEL_DESC,
location_a.org_hier12_num C_LEVEL,
NULL P1_LEVEL,
NULL P2_LEVEL,
NULL P3_LEVEL,
location_a.org_hier13_num P4_LEVEL,
location_a.org_top_num P5_LEVEL
FROM w_int_org_dhs location_a, w_int_org_dhs location_b
where location_a.level_name = 'AREA'
and location_b.level_name = location_a.level_name
and location_a.org_hier12_num = location_b.org_hier12_num
and (location_a.org_hier13_num <> location_b.org_hier13_num
or location_a.org_top_num <> location_b.org_top_num)
UNION ALL
SELECT '1',
'CHAIN org_hier13_num' LEVEL_DESC,
location_a.org_hier13_num C_LEVEL,
NULL P1_LEVEL,
NULL P2_LEVEL,
NULL P3_LEVEL,
NULL P4_LEVEL,
location_a.org_top_num P5_LEVEL
FROM w_int_org_dhs location_a, w_int_org_dhs location_b
where location_a.level_name = 'CHAIN'
and location_b.level_name = location_a.level_name
and location_a.org_hier13_num = location_b.org_hier13_num
and location_a.org_top_num <> location_b.org_top_num;
------------------------------------------------
-- Checks on EXCH_RATE.csv file load
------------------------------------------------
select * from w_exch_rate_dts
------------------------------------------------
-- Checks on ATTR.csv and PROD_ATTR.csv file load
------------------------------------------------
select * from w_attr_dts
------------------------------------------------
-- Check on W_DOMAIN_MEMBER_LKP_TL issues while loading dimensions
------------------------------------------------
--- DOMAIN MEMBER DUPLICATE RECORD ERROR ---
SELECT DOMAIN_CODE,DOMAIN_TYPE_CODE,DOMAIN_MEMBER_CODE,count(1) FROM
W_DOMAIN_MEMBER_DS_TL GROUP BY
DOMAIN_CODE,DOMAIN_TYPE_CODE,DOMAIN_MEMBER_CODE having count(1) > 1
------------------------------------------------
-- Checks on SALES.csv file
------------------------------------------------
-- Verify initial sales data before staging it further, check all columns are populated with expected values (i.e. CTX was properly formed)
select * from W_RTL_SLS_TRX_IT_LC_DY_FTS
-- Should match the record count from last loaded SALES.csv file
select /*+ OPT_PARAM('_optimizer_answering_query_using_stats' 'FALSE') */
count(*) from W_RTL_SLS_TRX_IT_LC_DY_FTS
------------------------------------------------
-- Checks on INVENTORY.csv file
------------------------------------------------
-- Verify initial inventory data before staging it further, check all columns are populated with expected values (i.e. CTX was properly formed)
select * from W_RTL_INV_IT_LC_DY_FTS
-- Should match the record count from last loaded INVENTORY.csv file
select /*+ OPT_PARAM('_optimizer_answering_query_using_stats' 'FALSE') */
count(*) from W_RTL_INV_IT_LC_DY_FTS
F
Accessibility
This section documents support for accessibility in the Retail Analytics and Planning solutions.
It describes the support for accessibility and assistive technologies within the underlying
technology used by the solutions. Additionally, it covers any accessibility support and
considerations built into the application beyond the capabilities of the underlying platform.
ADF-Based Applications
The central user interface for the AI Foundation Cloud Services is built using ADF Faces.
Application Development Framework (ADF) Faces user-interface components have built-in
accessibility support for visually and physically impaired users. User agents such as a web
browser rendering to nonvisual media such as a screen reader can read component text
descriptions to provide useful information to impaired users.
ADF Faces provides two levels of application accessibility support:
• Default: By default, ADF Faces generates components that have rich user interface
interaction, and are also accessible through the keyboard.
Note:
In the default mode, screen readers cannot access all ADF Faces components. If a visually impaired user is using a screen reader, it is recommended to use the Screen Reader mode.
• Screen Reader: ADF Faces generates components that are optimized for use with screen
readers. The Screen Reader mode facilitates the display for visually impaired users, but
will degrade the display for sighted users (without visual impairment).
Additional fine-grained accessibility levels as described below are also supported:
• High-contrast: ADF Faces can generate high-contrast–friendly visual content. High-
contrast mode is intended to make ADF Faces applications compatible with operating
systems or browsers that have high-contrast features enabled. For example, ADF Faces
changes its use of background images and background colors in high-contrast mode to
prevent the loss of visual information.
Note:
ADF Faces’ high-contrast mode is more beneficial if used in conjunction with
your browser's or operating system's high-contrast mode. Also, some users
might find it beneficial to use large-font mode along with high-contrast mode.
Note:
If you are not using large-font mode or browser-zoom capabilities, you should
disable large-font mode. Also, some users might find it beneficial to use high-
contrast mode along with the large-font mode.
AIF provides the ability to switch between the above accessibility support levels in the application, so that users can choose their desired type of accessibility support, if required. It exposes a user preferences screen in which the user can specify the desired accessibility preferences or mode and then operate in that mode.
JET-Based Applications
Some components of the AI Foundation solutions (such as Profile Science and Inventory
Planning Optimization) and the interface for the Planning solutions are built using Oracle
JavaScript Extension Toolkit (JET).
Oracle JET components have built-in accessibility support that conforms to the Web Content
Accessibility Guidelines version 2.0 at the AA level (WCAG 2.0 AA), developed by the World
Wide Web Consortium (W3C).
Note:
Because browsers support accessibility somewhat differently, the user experience tends to differ across web browsers.
OAS-Based Applications
Retail Insights uses the Oracle Analytics Server as its user interface, and benefits from all the
native accessibility features added to that platform. For details on the accessibility features in
OAS, refer to the Accessibility Features and Tips chapter in the Oracle® Analytics Visualizing
Data in Oracle Analytics Server guide.
Report Authoring Guidelines
Reports and visualizations can be authored with accessibility in mind without sacrificing any features or functionality. Some general guidelines for creating accessible content are provided below.