
Retail Intelligence

General concepts
April 20, 2022
Contents

1. Introduction ........................................................................................................... 6

2. Logins and security .............................................................................................. 7


2.1. BI Architect .............................................................................................................................................. 7
2.2. BI OLAP, BI ETL and BI Reports ......................................................................................................... 8

3. BI Architect metadata ..........................................................................................10


3.1. BI Architect source types ..................................................................................................................... 10
3.2. Subject of BI entities ............................................................................................................................ 10
3.3. BI entities ............................................................................................................................................... 11
3.3.1. Dimensions ......................................................................................................................................... 11
3.3.2. Facts .................................................................................................................................. 18
3.4. General options..................................................................................................................................... 19
3.4.1. Enabling/disabling the loading of multilingual data ....................................................................... 19
3.4.2. Enabling/disabling the loading of BLOB objects ........................................................................... 19
3.4.3. Enabling/disabling data loading without an update ...................................................................... 20
3.4.4. Enabling/disabling the loading of treasury sales ........................................................................... 20
3.4.5. Enabling/disabling the loading of document comment lines ........................................................ 20
3.4.6. Enabling/disabling the automatic merging of products from external sources.......................... 21
3.5. Enabling or initializing an entity (global or incremental initialization) ............................................ 21
3.5.1. Enabling subjects .............................................................................................................................. 23
3.5.2. Initializing subjects............................................................................................................................. 23
3.5.3. Enabling entities ................................................................................................................................ 23
3.5.4. Initializing entities .............................................................................................................................. 24
3.5.5. Sending all entities to BI ................................................................................................................... 24

4. System reports .....................................................................................................25


4.1. Reports in the System/Monitoring folder ........................................................................................... 25
4.1.1. Reports in the System/Monitoring/Advanced folder ..................................................................... 25
4.1.2. Reports in the System/Monitoring/Advanced/BI Architect data check folder ............................ 26
4.1.3. Reports in the System/Monitoring/Advanced/Consolidation folder ............................................ 27
4.1.4. Reports in the System/Monitoring/Advanced/Database maintenance folder ............................ 28
4.2. Reports in the System/General settings folder ................................................................................. 28
4.2.1. Reports in the System/General settings/Architect model folder ................................................. 28
4.2.2. Reports in the System/General settings/Dashboard setup folder............................................... 28
4.2.3. Reports in the System/General settings/Dashboard setup/Security access folder .................. 29
4.3. Reports in the System/Functional settings folder............................................................................. 29
4.3.1. Reports in the System/Functional settings/Analysis periods folder ............................................ 29
4.3.2. Reports in the System/Functional settings/Comparable seasons folder ................................... 30
4.3.3. Reports in the System/Functional settings/Comparable stores folder ....................................... 30
4.3.4. Reports in the System/Functional settings/CRM folder ............................................................... 30
4.3.5. Reports in the System/Functional settings/Currencies folder ..................................................... 30
4.3.6. Reports in the System/Functional settings/Geographical folder ................................................. 30

4.4. Reports in the System/Consolidation settings folder ....................................................................... 30

5. BI Calendars .........................................................................................................32

6. Output currency ...................................................................................................38


6.1. Configuring the output currency ......................................................................................................... 39
6.2. Entering/displaying output currency conversion rates ..................................................................... 42
6.3. Output currency conversion rates by day.......................................................................................... 45
6.4. Using output currency conversion rates ............................................................................................ 45
6.4.1. Constant conversion rates for Y/Y-1 comparison ......................................................................... 46
6.4.2. Conversion rule for cost prices and sales/purchase prices to date ............................................ 48

7. Analysis periods ..................................................................................................49


7.1. Definition of analysis period types ...................................................................................................... 49
7.1.1. Comparatives and hierarchies for period types ............................................................................. 62
7.2. Management of analysis periods........................................................................................................ 69
7.3. Use of analysis periods ........................................................................................................................ 74

8. Comparable stores...............................................................................................77
8.1. Configuring comparable stores ........................................................................................................... 78
8.1.1. User fields for sales opening and closing dates ............................................................................ 85
8.2. Store calendar ....................................................................................................................................... 87
8.2.1. Generating events ............................................................................................................................. 92
8.2.2. Entering sales opening and closing dates ..................................................................................... 95
8.3. Managing comparable measures ....................................................................................................... 95

9. Comparable seasons and collections ............................................................. 100

10. CRM ...................................................................................................................... 104


10.1. Managing customer data quality .................................................................................................... 104
10.2. Managing age ranges ...................................................................................................................... 106

11. Default company for cost price searches ....................................................... 111

12. Store traffic terminals to be excluded ............................................................. 113

13. Importing system data to BI Architect ............................................................ 117


13.1. Importing system data via the consolidation module ................................................................... 117
13.2. Importing system data directly ........................................................................................................ 117
13.2.1. Description of system data to be imported................................................................................. 118
13.2.1.1. Importing the conversion rates to the output currency .......................................................... 118
13.2.1.2. Importing the geographical coordinates (GPS) of entities .................................................... 120
13.2.1.3. Importing store calendars .......................................................................................................... 125
13.2.1.4. Importing store comparability .................................................................................................... 128
13.2.1.5. Importing BI translations ............................................................................................................ 134

14. Integrity rules for entities .................................................................................. 139

15. Filtering and purging data ................................................................................ 144
15.1. Defining filters ................................................................................................................................... 144

16. Status of filters ................................................................................................... 148


16.1. Purging filtered data ......................................................................................................................... 148

17. Data partitions .................................................................................................... 150

18. Entities archived or awaiting creation ............................................................ 151


18.1. Dimensions archived or awaiting creation .................................................................................... 151
18.2. Archiving facts ................................................................................................................................... 153

19. Generating geographical coordinates (GPS) ................................................. 155

20. Fact extraction views......................................................................................... 163


20.1. Fields linked to managing comparable stores .............................................................................. 165
20.2. Fields linked to currencies ............................................................................................................... 166
20.3. Cost prices and margins .................................................................................................................. 166

21. Jobs for loading data marts.............................................................................. 168


21.1. "BI ARCHITECT DATA MART load" standard load job .............................................................. 169
21.2. BI ARCHITECT DATA MART fast load job ................................................................................... 169

22. Configuring jobs for loading data marts in SaaS .......................................... 172
22.1. Changing the run frequency of the standard data mart load job................................................ 172
22.2. Changing the run frequency of the fast data mart load job ......................................................... 173

23. SCD (slowly changing dimensions) Type 2.................................................... 175


23.1. Enabling/disabling SCD Type 2 ...................................................................................................... 175
23.2. Historical table structure for SCD Type 2 attributes .................................................................... 179
23.3. Functions and views for SCD Type 2 attributes ........................................................................... 181
23.4. OLAP methods for managing SCD Type 2 attributes .................................................................. 186

24. Reading BI Architect differential data ............................................................. 192


24.1. Reading differential data in batches............................................................................................... 192
24.2. Enabling change tracking in SQL Server ...................................................................................... 198

25. Loading Y2 data while the source database is active ................................... 201

26. Initializing the identification of a BI Architect source ................................... 203

27. Assigning the Control Server right to an account ........................................ 204

28. Making the standard BI Architect database operational .............................. 205

29. Disabling BI triggers when upgrading Y2 ....................................................... 206

30. Forcing the end of the data source loading process .................................... 208

31. Forcing the end of the Qlik data loading process ......................................... 209

32. Configuring the firewall to allow access to the BI Architect server ........... 210

33. SQL agent: Sending emails on job completion ............................................. 211

34. SQL Agent: Modifying the account in charge of running a job step .......... 218

35. Migrate a BI Foundation environment to new servers .................................. 220

36. Running customized SSIS packages .............................................................. 229


36.1. Configuring the user BI Architect data mart update ..................................................................... 229

37. Stopping/starting an SQL instance/service using a command line ............ 233

38. Appendix ............................................................................................................. 234


38.1. Installing the 64-bit Microsoft database access driver ................................................................ 234
38.2. Exporting linked documents missing from an external import .................................................... 235
38.3. Running customized SSIS packages............................................................................................. 237
38.3.1. Configuring the user BI Architect data mart update .................................................................. 238
38.4. Accessing non-Cegid external data directly .................................................................................. 241
38.4.1. Accessing a secondary database on the standard BI instance .............................................. 241
38.4.2. Accessing a database via a linked server on the standard BI instance ................................. 242
38.4.3. Accessing a database using ad hoc remote queries ................................................................ 247
38.4.4. Presentation of the Transact-SQL function, OPENROWSET ................................................. 247
38.4.4.1. Rights for using the OPENROWSET function in ad hoc remote queries ........................... 247
38.4.5. Accessing an MS Access database: ........................................................................................... 251
38.4.6. Accessing a database via an AS cube project data source .................................................... 251
38.4.7. Accessing a secondary database on the non-standard BI instance in the AS cube project ............ 252
38.4.8. Accessing an MS Access database via an AS cube project data source ............................. 252
38.4.9. Accessing an MS Access database via an AS cube project named query ........................... 253
38.4.10. Accessing an MS Access database via a linked server and an AS cube project named query ............ 254
38.4.11. Accessing an Excel file ............................................................................................................... 256
38.4.12. Accessing an Excel file via an AS cube project named query .............................................. 256
38.4.13. Accessing text files ...................................................................................................................... 257
38.4.14. Accessing Colombus databases ............................................................................................... 258
38.4.15. Accessing Colombus databases via an AS cube project data source ................................. 259
38.4.16. Accessing Colombus databases via an AS cube project named query............................... 259
38.4.17. Accessing Colombus databases via a linked server and an AS cube project named query ............ 260

1. INTRODUCTION
This document describes different concepts in the Retail Intelligence solution and its different modules.

Retail Intelligence comprises the following modules:

• BI Architect (BI Foundation): Data mart

• BI OLAP (BI Foundation): OLAP cube

• BI ETL (BI Foundation): Connectors

• BI Reports (BI Foundation): Reporting Services portal

• BI Dashboards (BI Dashboards): Dashboards portal

Some of the information in this document may not be relevant to your context if you have not deployed the
corresponding module.

Specific documents on the administration of each of the modules are available.

The resources in the solution (e.g. files, projects or documents) are stored in two folders: Custom BINEXT and vt BINEXT. These folders are usually found on the main BI server hosting the BI Architect module.
To find out more, see the customer installation file. Each folder contains a subfolder for each BI module with
the corresponding resources.

The Custom BINEXT folder contains customer-specific customized resources, e.g. customized cube and
reports. Once this folder has been installed, it will not be affected when BI is upgraded. Customers can
therefore store their personal resources in this folder.

The vt BINEXT folder contains standard resources provided by Cegid, e.g. standard cube and reports. This
folder will be affected each time BI is upgraded. Customers should therefore not store any personal resources
in this folder.

It is vital that you perform a daily backup of these two folders and their contents, especially the Custom
BINEXT folder containing customer-specific projects. The customer is in charge of performing this backup. If
this backup is not performed, customers risk losing their projects if the server or disk crashes. This means that
the customer-specific cube and dashboards would be lost.

To find out more about the backup of different databases, see the DB Administration, RS Administration, AS
Administration and Dashboards Administration documents.

2. LOGINS AND SECURITY
This chapter contains a list of logins created and managed by the application. Windows logins are not mandatory. As
such, they may differ depending on the configurations you set up.

2.1. BI Architect

Login or Windows group

BUILTIN\administrators (the Windows server local administrator)
Server role: All except for sysadmin
Description: Perform administrative tasks on the BI Architect database server, such as backups, or run stored procedures shipped with the product for administrative purposes. To find out more, see the DB Administration document.

SERVER\VtSql
Server role: The user is sa.
Description: Local administrator for coordinating SQL Server services used by Cegid. Reserved for Cegid.

DOMAIN\BINextServices
Description: Present only if BI OLAP, BI Reports and/or BI EIS are installed. This user enjoys the same rights as members belonging to BUILTIN\administrators. This centralizes SQL Server management using a single login.

SQL Server logins

sa
Server role: sysadmin
Description: The user is sysadmin. Reserved for Cegid.

vtAdministrator
Server role: sysadmin
Description: Creator and owner of the Data Warehouse database. Reserved for Cegid.

vtAllReader
Server role: (none)
Description: Read access for third-party applications such as Excel or Qlik.
Password: TIMELESS

vtAnalysisServices
Server role: (none)
Description: Read access for BI OLAP Analysis Services projects.
Password: MSSQLAS2005VT

vtColombusLoad
Server role: (none)
Description: Reserved for Colombus for loading the BI Architect database. Reserved for Cegid.

vtCNextLoad
Server role: (none)
Description: Reserved for .Next and CBR for loading the BI Architect database. Reserved for Cegid.

vtFrontTool
Server role: DbCreator
Description: User created for a front-end tool (e.g. EIS) for creating a repository database on the server.
Password: 2x2cut1v2 1nf0rm@t10n

vtReportingServices
Server role: (none)
Description: Dedicated read access to Reporting Services for reports that retrieve data directly from the relational database (unused by default).
Password: MSSQLRS2005VT

vtDbAdmin
Server roles: DbCreator, DiskAdmin, ProcessAdmin, ServerAdmin, SetupAdmin, BulkAdmin
Description: This user is created to enable customers to perform administrative tasks such as backups and to run stored procedures shipped with the product for administrative purposes. To find out more, see the DB Administration document. This user has the required rights to create the customer database, create objects such as tables or views in the standard database, and access data in the VtNextDW database in read mode. This user can therefore add external data to the Data Warehouse.
Password: MSSQLDB2005VT
Logins in bold are the most frequently used ones for accessing the BI instance.
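If you want to check which of these logins actually exist on the BI Architect instance and which server roles they hold, you can query the SQL Server catalog views directly. This is a minimal sketch; it assumes nothing beyond the documented login names and the standard sys.server_principals and sys.server_role_members views.

-- List the documented BI logins and any server roles they are members of.
-- Run on the BI Architect instance with a login allowed to read server metadata.
SELECT  p.name                    AS login_name,
        p.type_desc               AS login_type,
        ISNULL(r.name, '(none)')  AS server_role
FROM    sys.server_principals AS p
LEFT JOIN sys.server_role_members AS m ON m.member_principal_id = p.principal_id
LEFT JOIN sys.server_principals   AS r ON r.principal_id        = m.role_principal_id
WHERE   p.name IN ('sa', 'vtAdministrator', 'vtAllReader', 'vtAnalysisServices',
                   'vtColombusLoad', 'vtCNextLoad', 'vtFrontTool',
                   'vtReportingServices', 'vtDbAdmin')
ORDER BY p.name, server_role;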

2.2. BI OLAP, BI ETL and BI Reports

Login or Windows group

DOMAIN\BINextServices
Description: User created by Cegid during the installation of the SQL Server default instance, i.e. BI OLAP, BI ETL, BI Reports and BI EIS. This user enjoys administrator rights on all SQL servers of the default instance and on the BI Architect server. Please refer to the customer installation file for the password.

DOMAIN\BINextReader
Description: User created for read access to cube data in BI Reports. This user is used by default by Reporting Services reports to read the cube and can also be used for other purposes. Please refer to the customer installation file for the password.

SQL Server login

sa
Description: This is the sa user of the default instance. Please refer to the customer installation file for the password.

3. BI ARCHITECT METADATA
This chapter describes standard metadata in the solution.

A certain number of system dashboards are used to configure entities managed by BI Architect. This chapter will
describe some of these configurations. It does not include the configuration for Colombus or multi-database
consolidation as these are explained in other documents.

3.1. BI Architect source types

The table below presents a list of standard source applications for BI Architect and their IDs. You should take note of the following:

• Data loading in Colombus is managed directly in Colombus.

• All .Next M entities are enabled by default except for the loading of multilingual data which is
optional.

• Y2 and external sources are managed directly in BI Architect.

• Orli is managed by the consolidation module as an external source and communication is performed using file exchanges. The configuration of the entities to be sent to BI is performed in Orli.

Source ID

BI Architect 0

Colombus 1

.Next 2

Interop 3

.Next: current stock 4

Y2 5

External or Orli 6

3.2. Subject of BI entities

The table below presents a list of subjects and their IDs. A subject is a logical business sector that groups a certain number of entities together.

Subject ID

Accounting ACCOUNTING

CRM CRM

Production MANUFACTURING

Purchases PURCHASE

Sales SALES

Inventory STOCK

3.3. BI entities

The table below presents a list of standard entities and their IDs. An entity groups one or more tables together and usually corresponds to business units in source applications.

3.3.1. Dimensions
ID Entity
ADDRESS_KIND Type of address

ACCOUNTING_ALLOCATION Breakdown of accounting data

ADVERTISING_KIND Type of advertising

ACTIVITY_AREA Business domain

ADDITIONAL_COST Additional cost

ADDRESS_ATTRIBUTES Distributing center

AFTER_SALES_SERVICE_CATEGORY After-Sales Service file category

AFTER_SALES_SERVICE_DISPUTE_REASON Reason for the dispute in the After-Sales Service file

AFTER_SALES_SERVICE_REPAIR_STATUS Repair status in the After-Sales Services file

AIRLINE_COMPANY Airline company

AIRPLANE_CABIN_CLASS Airplane cabin class

AIRPORT Airport

APE APE

APPLICATION_USER Application user

BANK Bank

BIN_LOCATION Location of the BIN

BLOB BLOB

BLOB_KIND Type of BLOB

BRAND Brand

BUSINESS_SUPERVISION Business supervision

CALENDAR_EVENT Calendar event

CARRIAGE Shipping

CARRIER Carrier

CASH_REGISTER Cash register

CASH_REGISTER_STATUS Status of the cash register

CASH_REGISTER_TRANSACTION_KIND Type of cash register transaction

CITY City

COMMERCIAL_CONDITION Sales terms and conditions

COMMERCIAL_EVENT_KIND Type of sales event

COMMERCIAL_EVENT_STATUS Status of the sales event

COMMERCIAL_MATERIAL Sales material

COMMERCIAL_MATERIAL_KIND Type of sales material

COMMERCIAL_SECRETARY Sales secretary

COMMERCIAL_ZONE Sales area

COMMERCIALIZATION_END_REASON Reason for discontinued sale

COMMUNICATION_CHANNEL_CHECK Verification of the means of communication

COMMUNICATION_CHANNEL_CONFIDENTIALITY Confidentiality of the means of communication

COMPANY Company

CONTROLLING_ACCOUNT Controlling account

COST_PRICE_KIND Cost price type

COST_PRICE_CALCULATION_TYPE Cost price calculation type

COST_PRICE_PROFILE Cost price profile

COUNTRY Country

COUNTRY_CATEGORY Country category

CURRENCY Currency

CURRENCY_RATE_KIND Type of conversion rate

CUSTOMER Customer

CUSTOMER_ATTRIBUTES Customer classification

CUSTOMER_CATEGORY Customer category

CUSTOMER_CLASS Customer class

CUSTOMER_CLASSIFICATION_LEVEL Customer classification level

CUSTOMER_DELIVERY_GROUP Customer delivery group

CUSTOMER_DELIVERY_RETURN_REASON Reason for the customer delivery return

CUSTOMER_FAMILY Customer family

CUSTOMER_FIDELITY_ACTION_KIND Type of customer loyalty action

CUSTOMER_FIDELITY_CAMPAIGN Customer loyalty campaign

CUSTOMER_FIDELITY_PLAN Customer loyalty plan

CUSTOMER_FIDELITY_PLAN_KIND Type of customer loyalty plan

CUSTOMER_ORDER_CANCEL_REASON Reason for the customer order cancellation

CUSTOMER_ORDER_KIND Type of customer order

CUSTOMER_ORDER_NATURE Nature of customer order

CUSTOMER_ORIGIN Customer origin

CUSTOMER_PRICE_CATEGORY Customer price category

CUSTOMER_SALES_KIND Type of customer sales

DATABASE_CONFIGURATION Database configuration

DATABASE_PACKAGE Database package

DEADLINE_PAYMENT Payment deadline

DEADLINE_PAYMENT_RULE Payment deadline rule

DELIVERY_TERM Delivery terms and conditions

DEPARTMENT Department

DEPARTMENT_KIND Type of department

DIMENSION_GROUP Dimension group

DISCOUNT_ORIGIN Origin of the discount

DISTRIBUTION_CHANNEL Distribution channel

DISTRICT District

DOCUMENT_STATUS Document status

EMPLOYEE_CHECK_KIND Type of control run on employee schedules

EMPLOYEE_PLANNING_CHECK Control run on employee schedules

EMPLOYEE_PLANNING_CHECK_STATUS Status of the control run on employee schedules

EMPLOYEE_PLANNING_GAP_REASON Reason for a discrepancy in employee schedules

EMPLOYEE_PLANNING_RANGE_KIND Type of range for employee schedules

ENTITY_STATUS Entity status

EVENT_KIND Type of event

FIDELITY_CARD_ACTIVATION_KIND Type of loyalty card activation

FIDELITY_CARD_CATEGORY Loyalty card category

FINANCIAL_PRODUCT_KIND Type of financial product

FINANCING_PLAN Financing plan

GENERAL_ATTRIBUTES User field

GENRE Genre

GEOGRAPHICAL_AREA Geographical area

GUARANTEED_REASON Reason for guarantee

INCOTERM_CITY Incoterm city

INFORMATION_ORIGIN_KIND Information origin type

INFORMATION_WAY Means of information

LANGUAGE Language

LINE_DISCOUNT_REASON Reason for the line discount

LOT Batch

MANUFACTURING_CHANNEL Manufacturing channel

MANUFACTURING_CHANNEL_HUB Manufacturing channel hub

MANUFACTURING_CHANNEL_KIND Type of manufacturing channel

MANUFACTURING_CHANNEL_NATURE Nature of manufacturing channel

MANUFACTURING_END_REASON Reason for discontinued production

MANUFACTURING_KIND Manufacturing type

MANUFACTURING_PHASE Manufacturing phase

MEASURE_UNIT Unit of measurement

METHOD_PAYMENT Payment method

METHOD_PAYMENT_KIND Type of payment method

NATIONALITY Nationality

ORIGIN_ZONE Region of origin

PACKAGING Packaging

PERSON_KIND Type of person

PRODUCT Product

PRODUCT_ACTIVITY Product activity

PRODUCT_ATTRIBUTES Product classification

PRODUCT_CLASS Product class

PRODUCT_CLASSIFICATION_LEVEL Product classification level

PRODUCT_COLLECTION Product collection

PRODUCT_COMMERCIAL_FAMILY Product sales category

PRODUCT_DIMENSION Product dimension

PRODUCT_DIMENSION_ELEMENT Product dimension element

PRODUCT_DIMENSION_GRID Product dimension grid

PRODUCT_FAMILY_HISTORY Product family history

PRODUCT_FORM Product form

PRODUCT_GROUP Product group

PRODUCT_KIND Type of product

PRODUCT_LEVEL Product level

PRODUCT_LINE Product line

PRODUCT_MANAGEMENT_MODE Product management mode

PRODUCT_NATURE Nature of product

PRODUCT_NETWORK Product network

PRODUCT_SERIAL_NUMBER Product serial number

PRODUCT_TECHNICAL_FAMILY Product technical category

PRODUCTION_DIVISION Production division

PURCHASE_ANOMALY Purchase anomaly

PURCHASE_BEHAVIOUR Purchasing behavior

PURCHASE_KIND Purchase type

PURCHASE_PRICE_KIND Purchase price type

PURCHASE_RETURN_REASON Reason for the purchase credit note

PURCHASE_TAX_AUTHORITY Purchase invoicing system

QUOTATION_AGREEMENT_REASON Quotation approval reason

QUOTATION_REFUSAL_REASON Quotation rejection reason

REGIONAL_SETTING Regional setting

RELATIVE_LINK Relationship

REMOVAL_MERCHANDISE_KIND Type of merchandise collection

RESPONSIBILITY_CENTER Responsibility center

SALES_AREA Sales area

SALES_CANCEL_REASON Sales cancellation reason

SALES_DIVISION Sales division

SALES_LINE_RETURN_REASON Reason for the sales line return

SALES_PERSON Salesperson

SALES_PERSON_CATEGORY Salesperson category

SALES_RETURN_REASON Reason for the credit note

SALES_TAX_AUTHORITY Sales invoicing system

SCHEDULE_EVENT Scheduled event

SEASON Season

SENDING_MERCHANDISE_KIND Type of goods delivery

SHIPPING_METHOD Shipping method

SHIPPING_STATUS Shipping status

SITE Site

SITE_KIND Type of site

SPECIAL_SALES_CATEGORY Special sales category

SPECIAL_SALES_PROGRAM Special sales program

STATE State

STOCK_EXCHANGE Stock exchange

STOCK_IMAGE_KIND Inventory snapshot type

STOCK_QUALITY Stock quality

STOCK_QUALITY_REASON Stock quality reason

STOCK_ROOM Stock room

STOCK_ROOM_KIND Type of storage location

STOCK_TRANSACTION_CATEGORY Stock transaction category

STOCK_TRANSACTION_REASON Stock transaction reason

SUPPLIER Supplier

SUPPLIER_ATTRIBUTES Supplier classification

SUPPLIER_CLASS Supplier class

SUPPLIER_CLASSIFICATION_LEVEL Supplier classification level

SUPPLIER_ORDER_KIND Type of supplier order

SUPPLIER_PRICE_CATEGORY Supplier price category

SUPPLIER_RECEIPT_RETURN_REASON Reason for the supplier receipt return

SUPPLY_METHOD Supply method

TAX Tax

TAX_CATEGORY Tax category

TAX_EXCEPTION Tax exception

TAX_LOCATION_KIND Type of tax location

TAX_RATE Tax rate

TAX_TEMPLATE Tax template

TECHNICAL_MATERIAL Technical material

TIME_EVENT_KIND Type of calendar event

TITLE Title

TRACKING_STATUS_ORDER Order tracking status

TRANSACTION_CANCELED_STATUS Transaction cancellation status

TRANSACTION_CATEGORY Transaction category

TRANSFER_RETURN_REASON Reason for the transfer return

USER_FIELD_DEFINED_VALUE Possible value for user field

WEATHER Weather

WORKSHOP Workshop

3.3.2. Facts
ID Entity
AFTER_SALES_SERVICE After-Sales Service file

BASE_PURCHASE_AND_SALES_PRICE Base purchase and selling price

CASH_REGISTER_TRANSACTION Cash register transaction

COMMERCIAL_EVENT Sales event

COST_PRICE_COMPANY Company cost price

COST_PRICE_COMPANY_HISTORY Company cost price history

COST_PRICE_STOCK_ROOM Storage location cost price

COST_PRICE_STOCK_ROOM_HISTORY Storage location cost price history

COST_TRANSACTION Cost transaction

CURRENT_STOCK Current stock

CUSTOMER_DELIVERY Customer delivery

CUSTOMER_FIDELITY Customer loyalty

CUSTOMER_ORDER Customer order

INVENTORY Inventory

LOADING_FORM Loading form

LINKED_TRANSACTION Linked transaction

OBJECT_TRANSACTION Object transaction

OBJECT_PAYMENT_TRANSACTION Object payment transaction

PURCHASE Purchase

PRICE Purchase and selling price

PRODUCT_LIST Product list

SALES Sales

SALES_PERSON_PRESENCE Salesperson present in store

SALES_TAX_REFUND Sales tax refund

SITE_OBJECTIVE Site objective

SPECIAL_SALES_LOAN Special sales loan

STOCK_HISTORY Stock transaction

STOCK_IMAGE Inventory snapshot

STOCK_ROOM_EVENT Store event

STOCK_SNAPSHOT Inventory snapshot 2

SUPPLIER_ORDER Supplier order

SUPPLIER_RECEIPT Supplier receipt

TRANSFER_DELIVERY Transfer delivery

TRANSFER_ORDER Transfer order

TRANSFER_RECEIPT Transfer receipt

USER_FIELD_TRANSACTION User field transaction

3.4. General options

3.4.1. Enabling/disabling the loading of multilingual data


You should run the system dashboard called BI Configuration setup found in the System/General settings folder on the
Reporting Services portal.

This option indicates if the multilingual versions of dimensions are to be loaded for Y2 sources.

Note: Initialization of the dimensions is automatically programmed whenever this option is enabled or disabled.

3.4.2. Enabling/disabling the loading of BLOB objects


You should run the system dashboard called BI Configuration setup found in the System/General settings folder on the
Reporting Services portal.

This option indicates if BLOB objects for products are to be loaded for Y2 or Colombus sources.

Note: If this option is enabled or disabled for Colombus, the product dimension must be initialized in Colombus. If the source is Y2, product initialization is automatically programmed whenever this option is enabled or disabled.

Colombus does not manage images in BLOB format in the database. They are managed like files on a hard disk. To
retrieve images in BLOB format in BI Architect, you must import them to the data mart database. If image files are on an
external disk as they usually are, the data mart loading process will not have adequate rights to read them. You must
therefore assign these rights. The SQL Server service of VCSNEXT (the BI Architect instance) is the process that loads the BLOBs.

To import Colombus images in BLOB format to BI Architect, you must:

• Create a Windows account with the same name and password as the service account of the BI Architect instance on the server that manages the disk containing the images to be imported. This is generally the vtSql account. Alternatively, you can modify the BI Architect service account to use a domain account recognized by both servers (the BI server and the server managing the images). In this case, the domain account must be a local administrator of the BI server.

• Assign this account read access rights to the folder containing Colombus images.

• Modify the path of the product images configured in Colombus so that it uses UNC notation. Avoid using a drive mapped on the network; instead, use the full server name and path, e.g. \\ServerName\Folders.

3.4.3. Enabling/disabling data loading without an update


You should run the system dashboard called BI Configuration setup found in the System/General settings folder on the
Reporting Services portal.

This option indicates if, for Y2 sources, the data should be loaded without running an update in BI Architect. This option
will generally be enabled to export data from Y2 to a remote BI Architect database. See the BI Architect consolidation
module document.

If this option is disabled when the data has already been updated in the BI Architect database, the data present in the
database is not deleted.

Note: Global initialization is automatically programmed only if this option is enabled.

3.4.4. Enabling/disabling the loading of treasury sales


You should run the system dashboard called BI Configuration setup found in the System/General settings folder on the
Reporting Services portal.

This option is used to indicate, for Y2 sources, if treasury sales are to be loaded (compatibility type TRE in Y2, see CBR
documentation). A Boolean in the tables in BI shows if the sale is treasury or not (IsTreasury).

Note: Initialization of sales is automatically programmed whenever this option is enabled or disabled.

Sales of this type are excluded by default from the cube (from version 6.90 upwards; if the installed cube version is earlier, these sales are loaded by default) and from the Qlik model (set a script parameter in the dashboard configuration to request the loading of treasury sales).

Note: Cubes deployed before version 6.90 do not exclude treasury sales. If this option is enabled in that case, the quantities and amounts of treasury sales will be included in the analyses, so you may have to plan modifications to any cube deployed before version 6.90. The same applies to any customer-specific Qlik applications deployed before version 6.90 and to any other customer-specific queries that read sales data.

3.4.5. Enabling/disabling the loading of document comment lines


You should run the system dashboard called BI Configuration setup found in the System/General settings folder on the
Reporting Services portal.

This option is used to indicate, for Y2 sources, if comment lines (COM lines in Y2, see CBR documentation) in documents
(orders, deliveries, sales, etc.) are to be loaded. A Boolean in the tables in BI shows if the line is a comment or not
(IsComment).

Comments in Y2 can contain values in the quantities and amounts.

Note: An initialization of the documents concerned is automatically programmed whenever this option is enabled or disabled.
By default, lines of this type are automatically excluded from the cube (from version 6.90 onwards) and the Qlik model.

Note: Cubes deployed before version 6.90 do not exclude comment lines. If this option is enabled in that case, the quantities and amounts in comments will be included in the analyses, so you will have to plan modifications to any cube deployed before version 6.90. The same applies to any customer-specific Qlik applications deployed before version 6.90 and to any other customer-specific queries that read document data.
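For customer-specific queries that read sales or document data directly from the relational database, the two Booleans described in sections 3.4.4 and 3.4.5 (IsTreasury and IsComment) are what you filter on when these options are enabled. The sketch below is only an illustration: the fact table and measure column names are placeholder assumptions, while the two flag columns are the ones documented above.

-- Illustrative only: dbo.SALES, Quantity and Amount are placeholder names;
-- IsTreasury and IsComment are the Boolean flags described above.
SELECT  SUM(Quantity) AS TotalQuantity,
        SUM(Amount)   AS TotalAmount
FROM    dbo.SALES
WHERE   IsTreasury = 0      -- exclude treasury sales (compatibility type TRE in Y2)
  AND   IsComment  = 0;     -- exclude document comment lines (COM lines in Y2)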

3.4.6. Enabling/disabling the automatic merging of products from external sources


You should run the system dashboard called BI Configuration setup found in the System/General settings folder on the
Reporting Services portal.

This option is used to automatically run the merging procedure for duplicate products after the data mart is loaded by
the standard SSIS package. To find out more, see Stored procedure for merging duplicate products in the BI Architect
Database Consolidation document.

3.5. Enabling or initializing an entity (global or incremental initialization)

You can enable and/or initialize entities using stored procedures or the system dashboard called Configure BI Entities found in the System/General settings folder on the Reporting Services portal. To find out more, see System reports.

Enabling means that you declare the entity in BI so that it can be loaded. This means that it is being implemented for
the first time.

Initializing means that you send the whole entity regardless of whether or not it is being implemented for the first time.
If you enable an entity, its initialization must be scheduled.

Entities can be initialized in two ways: global or incremental initialization. Dimension entities cannot be initialized incrementally; they can only be initialized globally. Only facts can be initialized incrementally, and not all of them support it. When configuring the initialization, you can request incremental initialization for an entity even if it does not support it; in this case, the system will automatically schedule a global initialization instead.

Whichever method used (global or incremental), initialization will also delete members that no longer exist in the source
and that are still present in BI. This scenario usually does not occur because it is managed in the standard
communication process. However, in the event of a BI or source database downtime, you may be required to restore
the database using a previous version. In this case, some deletions might be missing after the restore. Run a global or
incremental initialization to resolve this problem.

Global initialization - principle


Global initialization sends or resends all members of an entity from the source system to BI in a single processing.
Initialization is therefore generally fast since it is performed in a single processing. However, large data volumes place
heavy demands on server resources and as such, the loading processing time may exceed the period reserved for the
processing.

Incremental initialization - principle
An incremental initialization sends or resends all members of an entity in several processes, so the initialization is spread over a period of time. Contrary to global initialization, this method updates the full database in several processes and can therefore take a long time. When incremental initialization is requested, the user must enter a number of months corresponding to the level of the process. This number of months is used by the system to extract the data level by level. For example, suppose you enable incremental initialization for sales data with a 24-month level (a minimal sketch of this mechanism follows this explanation). The algorithm is as follows:

• The system will calculate the least recent date and the most recent date of sales in the source
system.

• When data is loaded to the BI Architect data mart for the first time, the system will only load 24
months of sales data, starting with the most recent date. The priority is therefore the most
recent sales data.

• When data is next loaded to the BI Architect data mart, the system will load the next 24 months,
starting from the last date minus 24 months. The same algorithm will be applied for the next
data loads. As such, if you enable incremental initialization for sales data per 24-month level and
if there are five years of historical data in the source database, you will need to run SSIS
packages at least three times to load the data mart. If the packages are scheduled to run daily,
then the initialization of all sales data will take three days.

Although incremental initialization requires a longer period of time than global initialization, it
has the advantage of not overloading the system as the data volumes to be loaded can be
configured and made smaller depending on the user’s choice. This means that the incremental
initialization of an important data volume is less likely to have an adverse effect on daily
processing. As such, you can run incremental initialization at the same time as standard daily
processes if the number of months requested per level is not too high.

Note: Incremental initialization has the following constraint: this method can only load data between 01/01/1800 and
12/31/9999. If there is a member earlier than 01/01/1800 (possible in Oracle), the member cannot be loaded using
incremental initialization.

Despite this constraint, we recommend using this method over global initialization.
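To make the level mechanism more concrete, here is a minimal Transact-SQL sketch of the algorithm described above. It is not the actual implementation (the real logic lives in the standard load process and SSIS packages), and the source table and date column names are assumptions; it only shows how a 24-month level walks backwards from the most recent data.

-- Sketch of incremental initialization with a 24-month level (illustrative names only).
DECLARE @LevelMonths int = 24;
DECLARE @MostRecent date, @LeastRecent date, @From date, @To date;

SELECT @MostRecent  = MAX(SaleDate),     -- most recent sale in the source
       @LeastRecent = MIN(SaleDate)      -- least recent sale in the source
FROM   SourceDb.dbo.SALES;

SET @To = @MostRecent;                   -- the first load starts with the most recent data
WHILE @To >= @LeastRecent
BEGIN
    SET @From = DATEADD(MONTH, -@LevelMonths, @To);
    -- One data mart load processes one level, i.e. the window (@From, @To]:
    -- INSERT INTO DataMart.dbo.SALES (...)
    -- SELECT ... FROM SourceDb.dbo.SALES WHERE SaleDate > @From AND SaleDate <= @To;
    SET @To = @From;                     -- the next scheduled load continues further back
END;

In production, each window is processed by a separate scheduled load rather than in a single loop; with five years of history and a 24-month level, this gives the three loads mentioned in the example above.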

Note the following on incremental initialization:

• With the BI Configuration setup system report, you can set the default number of months for
levels: “User” corresponds to initializations programmed by the users, “System” to those
programmed by the system. Consolidation corresponds to sources that can be consolidated at
import/export.

• As with global initialization, entities deleted in the source database and not deleted in the BI
database (for example, because of downtime) will be deleted in BI.

• Once an incremental initialization is running, you can modify the level (the number of months)
in the standard report called Configure BI Entities in the System/General settings folder. The
new level will be effective when the data mart is next loaded.

• If you schedule global initialization for an entity while an incremental initialization is running,
the global initialization will overwrite the incremental initialization and load all data and vice
versa.

• If the number of records to be loaded is low (i.e. less than 100,000), the system will load several
levels in a single processing, i.e. as many levels as possible as long as the 100,000 threshold is
not reached. If the number of records in the entity is less than 100,000, the initialization will be
performed in a single processing like in a full initialization. If there are isolated members of an
entity dating from the distant past, they will be quickly loaded because the system will skip
levels in cases like this.

• The standard report called BI entities status in the System/Monitoring folder enables you to
monitor the level of the incremental initialization.

3.5.1. Enabling subjects


To enable a subject, which is equivalent to enabling all of the entities associated with the subject, you should run the
system dashboard called Configure BI Entities found in the System/General settings folder on the Reporting Services
portal.

Note: Certain optional entities cannot be enabled using a subject but must be enabled directly. You should run the
stored procedure for displaying the status of entities and see the optional entities. To find out more, see Displaying the
loading status of entities.

To disable the facts for a subject, which is equivalent to disabling all fact entities associated with the subject, you should
run the system dashboard called Configure BI Entities found in the System/General settings folder on the Reporting
Services portal.

3.5.2. Initializing subjects


To initialize a subject, i.e. send all of the entities associated with the subject, you must run the system dashboard
called Configure BI Entities found in the System/General settings folder on the Reporting Services portal.

3.5.3. Enabling entities


To enable an entity, you must run the system dashboard called Configure BI Entities found in the System/General
settings folder on the Reporting Services portal.

When you enable a fact, this automatically enables all dimensions linked to this fact.

To disable a fact, you must run the system dashboard called Configure BI Entities found in the System/General
settings folder on the Reporting Services portal.

3.5.4. Initializing entities
To initialize an entity, you must run the system dashboard called Configure BI Entities found in the System/General
settings folder on the Reporting Services portal.

3.5.5. Sending all entities to BI


If you need to send all of the data from a source to BI (e.g. when restoring a backup of the BI Architect relational
database), which is equivalent to sending all of the enabled entities of a source to BI, you must run the system
dashboard called Configure BI Entities found in the System/General settings folder on the Reporting Services portal.

4. SYSTEM REPORTS
The BI solution Reporting Services portal provides a set of standard system reports used to configure and monitor the
administration of the solution. As these are standard reports, they are updated with each new BI upgrade.

System reports are found in the System folders on the portal.

4.1. Reports in the System/Monitoring folder

• Average daily loads: Average daily standard data mart load time

• BI Configuration: Used to display the general BI configuration

• BI Entities history load: Standard load history of data marts

• BI Entities status: Status of the last standard data mart load

• BI jobs process history: History of SQL jobs run in the BI solution. This report requires a login and
password. There are two possibilities:

• The jobs are present in the SQL agent of the default instance (the most frequent
scenario). You must enter the login and password of the SQL job owner. This is usually
BINextServices prefixed by the domain.

• The jobs are present in the SQL agent of the standard BI Architect instance (generally
VCSNEXT). You must enter the login and password of vtDbAdmin. This is the case when
BI runs in SaaS mode.

• Dashboard process history: History of dashboard processes

• Execute BI jobs process: Used to run SQL jobs in the BI solution (the same jobs can also be started directly from the SQL Agent, as sketched after this list). This report requires a login and password. There are two possibilities:

• The jobs are present in the SQL agent of the default instance (the most frequent
scenario). You must enter the login and password of the SQL job owner. This is usually
BINextServices prefixed by the domain.

• The jobs are present in the SQL agent of the standard BI Architect instance (generally
VCSNEXT). You must enter the login and password of vtDbAdmin. This is the case when
BI runs in SaaS mode.

• Fast load process history: Status of data mart fast loads
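The SQL jobs listed by these reports can also be started directly from the SQL Agent when your login has the required rights on msdb. This is a minimal sketch using the standard data mart load job name quoted in chapter 21; adapt the job name to your environment.

-- Start the standard data mart load job from the SQL Agent (requires rights on msdb).
EXEC msdb.dbo.sp_start_job @job_name = N'BI ARCHITECT DATA MART load';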

4.1.1. Reports in the System/Monitoring/Advanced folder

• Import-export history: History of imports and exports performed using the BI Architect database
consolidation module. This report can also be used to search for a string of characters in the
archive history of imported files.

• Communication data maintenance: Maintenance report to retrieve or send communication files
between two remote environments.

• Rejected batches: Used to manage imported and rejected batches. To find out more, see the BI
Architect Database Consolidation document.

• Reports execution history: History of reports run on the portal. This report requires a login and
password. There are two possibilities:

• The Reporting Services portal is present in the default instance (the most frequent
scenario). You must enter the login and password of the SQL job owner. This is usually
BINextServices prefixed by the domain.

• The Reporting Services portal is present in the standard BI Architect instance (generally
VCSNEXT). You must enter the login and password of vtDbAdmin. This is the case when
BI runs in SaaS mode.

• BI ARCHITECT errors history: History of errors in the BI Architect relational database when loading
data or manipulating certain objects in the database, regardless of the source.

• BI ARCHITECT log history: Used to display the log for system configuration modifications in the BI
Architect relational database

• BI load rejects history: History of rejected BI data loads.

• SSIS Error history: History of errors in standard SSIS packages while loading data.

4.1.2. Reports in the System/Monitoring/Advanced/BI Architect data check folder


This folder contains several subfolders that contain reports used to visualize certain data present in the BI Architect
database. These reports do not serve a functional purpose. They have been designed to run checks in the relational
database without the need to run SQL queries. The BI Architect database is the main database for BI data. To check if
data has been integrated into the solution, it needs to be checked in the Architect relational database. Since OLAP cubes
and dashboards are customer specific, they can display data that does not necessarily correspond to the data imported
into the BI database. These reports are used to check the consistency of data depending on the initial sources. Note:
checked data is not exhaustive.

There are three folders:

• Retail and wholesale: contains the reports for checking customer transactions
o Customer orders: for checking customer orders
o Customer deliveries: for checking customer deliveries
o Customer sales: for checking customer sales

• Stock: contains the reports for checking stock


o Current stock: for checking current stock
o Current stock quality: for checking current stock by quality added
o Stock transactions history: for checking the history of stock transactions

• Supply chain: contains the reports for checking supplier transactions
o Supplier orders: for checking supplier orders
o Supplier receipts: for checking supplier receipts
o Supplier purchases: for checking supplier invoices

Note:

• If the data volume is high and no other selection criterion is entered, extraction can take a long
time as the queries load the data at the finest level.

• For alphanumeric selection criteria, you can use Transact-SQL wildcards such as % to replace a character string and _ (underscore) to replace a single character (see the example after this list).

• All the reports provide for two optional breaks (Dimension level 1 and Dimension level 2). If the
Total value is selected (the default value), no break is displayed and the report displays only a
single grand total line.

If a breakdown for dimension 1 is requested, a subtotal is displayed by default (this option is active by default but can be modified).

Note: If a break detailed at a fine level is requested with a high data volume and with no
restrictive selection criteria, the number of pages generated by the report can be huge and the
process might return an error. For example, if you ask to display all sales tickets without
selection criteria, this report type might potentially generate a million pages.

• The measures to display are optional. By default, only the most typical ones are selected.

• All the customer reports contain an additional mandatory selection criterion concerning the sales
activity: Trade or Retail.

• The data source is also a mandatory selection criterion in all the reports.

• If no selection criterion has been entered, the system loads all the data present in the database
with the exception of notice documents that have no functional values and other specific
documents (such as treasury documents for sales). By default, the reports only extract the
relevant functional data. However, you can tell the system to extract all documents, even those
with no relevant functional value by selecting the Extract all documents box (not checked by
default).

• By default, comment lines in documents (see Enable/disable loading of document comment lines)
are not displayed, but an option allows you to include these. Note: Comment lines can contain
quantities and amounts. If the option in the report is enabled, the values in comments will be
included in the totals.
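As an illustration of the Transact-SQL wildcards accepted by the alphanumeric selection criteria, here is how % and _ behave in a LIKE filter. The table and column names are placeholders, not actual BI Architect object names.

-- % matches any string of characters, _ (underscore) matches exactly one character.
SELECT ProductCode, ProductLabel
FROM   dbo.PRODUCT
WHERE  ProductCode LIKE 'SHIRT%'        -- every code starting with SHIRT
   OR  ProductCode LIKE 'SH_RT-2022';   -- matches SHIRT-2022, SHORT-2022, ...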

4.1.3. Reports in the System/Monitoring/Advanced/Consolidation folder


• Merge mapping check: Used to check if products mapped using barcodes can be merged in a
multi-database consolidation context. To find out more, see the BI Architect Database
Consolidation document.

• Merge mapping log: Used to display the log for the merging procedure that is being run. It can
also be used to interrupt the merging procedure in a multi-database consolidation context. To
find out more, see the BI Architect Database Consolidation document.

4.1.4. Reports in the System/Monitoring/Advanced/Database maintenance folder


• Index maintenance setup: Configuration of the BI Architect relational database table index
maintenance; see the DB Administration document.

• Index maintenance: Displays the fragmentation status of the BI Architect relational database
table index, as well as the log of the most recent maintenance operations on the indexes. The
report also allows you to manually initiate maintenance operations; see the DB Administration
document.

• BI Configuration setup: General configuration of the solution and sources.

• BI Entities filters: Definition of filters in BI Architect. To find out more, see Filtering and purging
data.

• BI Entities integrity rules: Used to manage integrity rules for BI Architect fields. To find out more,
see Configuring integrity rules for entities.

• BI Languages: Management of application languages in the solution.

• Dynamic entities properties: Used to display the dynamic properties of certain entities to
facilitate manipulation.

• Configure BI Entities: Configuration of BI entities

• Configure log tables: Configuration of logs for BI Architect tables. To find out more, see Reading
differential data in BI Architect.

• Configure SCD attributes: Configuration of SCD Type 2 attributes in the BI Architect relational
database.

4.2.1. Reports in the System/General settings/Architect model folder


• Architect data model: Customization of the BI Architect data model. To find out more, see the
chapter called Importing non-Cegid data - BI SaaS/BI On-premises in the BI Architect Database
Consolidation document.

4.2.2. Reports in the System/General settings/Dashboard setup folder


• Dashboard data model: Customization of the Qlik data models. To find out more, see the
Dashboards Administration document.

• Dashboard data partitions: Configuration of dashboard data partitions.

• Dashboard dimensions and facts: Configuration of facts and dimensions loaded by default in Qlik
applications. To find out more, see the Dashboards Administration document.

• Dashboard master item model: Customization of the Qlik master item model. To find out more,
see the Dashboards Administration document.

• Dashboard properties: Configuration of the properties of Qlik applications. To find out more, see
the Dashboards Administration document.

• Dashboard server and applications: Configuration of the dashboards server, applications and
load tasks

• Dashboard sheets: Declaration of dashboard sheets to manage secure access to the sheets. To
find out more, see the Dashboards Administration document.

4.2.3. Reports in the System/General settings/Dashboard setup/Security access folder


• Dashboard roles: Definition of roles for managing access rights to data in Qlik applications. To
find out more, see the Dashboards Administration document.

• Dashboard security items: Definition of security items for managing access rights to data in Qlik
applications. To find out more, see the Dashboards Administration document.

• Dashboard security items roles: Definition of the values of security items in roles for managing
access rights to data in Qlik applications. To find out more, see the Dashboards Administration
document.

• Dashboard security policies: Definition of security policies. To find out more, see the Dashboards
Administration document.

• Dashboard user categories: Definition of user categories for managing access rights to data in
Qlik applications. To find out more, see the Dashboards Administration document.

• Dashboard users: Definition of users for managing access rights to data in Qlik applications. To
find out more, see the Dashboards Administration document.

• Dashboard users roles: Definition of user roles for managing access rights to data in Qlik
applications. To find out more, see the Dashboards Administration document.

• Dashboard users roles values: View of user roles for managing access rights to data in Qlik
applications. To find out more, see the Dashboards Administration document.

4.3.1. Reports in the System/Functional settings/Analysis periods folder


• Analysis periods types: Configuration of the types of analysis periods. To find out more, see
Analysis periods.

• Analysis periods: Management of analysis periods. To find out more, see Analysis periods.

4.3.2. Reports in the System/Functional settings/Comparable seasons folder
• Comparable seasons: Configuration of the comparability properties of seasons. To find out more,
see Managing comparable seasons and collections.

• Comparable collections: Configuration of the comparability properties of collections. To find out more, see Managing comparable seasons and collections.

4.3.3. Reports in the System/Functional settings/Comparable stores folder


• Comparable stores setup: Configuration of the comparability properties of stores. To find out
more, see Configuring comparable stores.

• Stores calendar: Management of store calendars. To find out more, see Store calendar.

4.3.4. Reports in the System/Functional settings/CRM folder


• Customer data quality setup: Configuration of the fields you want to use in the analysis of
customer data quality. To find out more, see Managing customer data quality.

• Age range setup: Configuration of age ranges. To find out more, see Managing age ranges.

4.3.5. Reports in the System/Functional settings/Currencies folder


• Output currency setup: Configuration of the output currency across the BI solution for amounts
and/or prices. See Output currency.

• Output currency rates: Used to enter and/or display conversion rates to the output currency. See
Output currency.

• Output currency rates per day: Used to display the conversion rates to the output currency per
day. See Output currency.

4.3.6. Reports in the System/Functional settings/Geographical folder


• Geocoding: View of the generated and non-generated geographical coordinates of entities with
the reasons for the non-generation. To find out more, see Generating geographical coordinates
(GPS).

• BI Database communication setup: Configuration of databases in a BI Architect database consolidation context. To find out more, see the BI Architect Database Consolidation document.

• BI Entity communication export setup: Configuration of database exports in a BI Architect database consolidation context. To find out more, see the BI Architect Database Consolidation document.

• BI Entity communication import setup: Configuration of database imports in a BI Architect database consolidation context. To find out more, see the BI Architect Database Consolidation document.

• BI Entity communication mapping setup: Configuration of the mapping of entities to be imported
to the current database in a BI Architect database consolidation context. To find out more, see
the BI Architect Database Consolidation document.

5. BI CALENDARS
Retail Intelligence provides the option to manage different calendars (two currently): a standard calendar and a fiscal
calendar (an accounting calendar corresponding to your company’s financial year). In both calendars, you can define
types of week and various other properties.

The fiscal calendar differs from the standard calendar with regard to the following properties:

• The first month of the year can be any month and not just January

• The year does not have to contain 12 months

• The first day of the year can be different from the first day of the first month of the year (this is not permitted in France but is possible in other countries).

Retail Intelligence does not manage all properties of fiscal calendars. Only the first month of the fiscal year can be
defined in the solution. The duration of the fiscal year must be 12 months and the first day of the fiscal year must be the
first day of the first month of the fiscal year.

By default, the Retail Intelligence functional repositories (OLAP cube and Dashboard) and the dashboards and reports
shipped with the solution use the standard calendar and the First week of at least 4 days week type with the first day of
the week as Monday. The fiscal calendar can, however, be used instead (see below).

The calendar properties and the calendars used by the repositories can be modified via a system report:

• Log on to the Retail Intelligence Reporting Services portal.

• Select the System/General settings folder.

• Click the BI Configuration setup report. The following screen will appear:

• The calendar properties are displayed. To modify them, click the Edit properties link in the same
column.

• The different properties are as follows:

o Real calendar week managed by system: If the value of this option is True, the system
will manage the type of week for the actual calendar. The objective of this option is to ask
the system to choose the type of week with the smallest difference in days in order to
compare years Y and Y-1. The modification in the type of week is performed automatically
using the system date of the day each time the data marts are loaded. The automatic
modification is generally performed in the last week of the year.

The system selects the type of week based on the following rule:

1. Based on the order defined for the types of week, retrieve the first type of week
with the smallest difference in days between years Y and Y-1 AND whose number
of weeks in Y does not exceed 52. See below.

2. Based on the order defined for the types of week, retrieve the first type of week
with the smallest difference in days between years Y and Y-1 AND with the
smallest difference in days compared with the current type of week.

3. Keep the current type of week if no type of week has a smaller difference.

Note:

• This option is enabled by default.

• This option eliminates week 53.

• On the other hand, for certain years, this option may keep week 52 for
two weeks. In this case, week 52 will be different from one week to
another when the type of week is automatically modified.

o Fiscal calendar week managed by system: If the value of this option is True, the system
will manage the type of week for the fiscal calendar. See above for the rules applicable to
this option.

o Real calendar week kind: Type of calculation for the first week of the year for the actual
calendar. You can modify this property only if Real calendar week managed by system is
enabled.

The weeks in the standard year will be numbered by this type.

The possible values are as follows:

▪ First week of four days (default): The first week of the year is the one that contains
at least four days in the first month of the year

▪ Starts on January 1: The first week of the year is the one containing the first day
of the first month of the year

▪ First full week: The first week of the year is the first full week in the first month
of the year, running, for example, from Monday to Sunday, if Monday is the first
day of the week.

o Real calendar first week day: First day of the week, from Monday to Sunday, in the
standard calendar. This property is combined with Real calendar week kind when
calculating the numbering of calendar weeks. By default, the first day of the week is
Monday.

o Fiscal calendar week kind: Type of calculation for the first week of the year for the fiscal
calendar. You can modify this property only if Fiscal calendar week managed by system
is enabled.

The weeks in the fiscal year will be numbered by this type.

See above for the possible values.

o Fiscal calendar first week day: First day of the week, from Monday to Sunday, in the fiscal
calendar. This property is combined with Fiscal calendar week kind when calculating the
numbering of fiscal weeks. By default, the first day of the week is Monday.

o Fiscal calendar year label kind: This property is used to define the label of the year in the
fiscal calendar. You must choose a label with one or two years. We recommend a label
with two years.

o Fiscal calendar first month of year: This property is used to define the first month of the
year in the fiscal calendar (January by default).

o Qlik global calendar: Default calendar used in the Qlik model and dashboards (standard
calendar by default).

o Cube global calendar: Default calendar used in the cube and business reports (standard
calendar by default).

o Qlik calendar for weeks: Default calendar used in the Qlik model and dashboards to calculate the weeks (standard calendar by default). The weeks can indeed use a calendar that is separate from the global calendar used by the model.

o Cube calendar for weeks: Default calendar used in the cube and reports to calculate the weeks (standard calendar by default). The weeks can indeed use a calendar that is separate from the global calendar used by the cube.

• Once you have specified all of the properties, select Yes in the Confirm update configuration field
to validate the new configuration. Click View report. The following screen will appear. Note:
Calculating calendars takes around one minute.

The Order of week type link enables you to define the order of the types of week in the algorithm used by the system
for choosing the type of week. See Real calendar week managed by system above.

Note:

• Modifications (week type and default calendar in the repository) are automatically taken into
account in the OLAP cube (and in the reports by extension) and in the dashboard if the following
conditions are met:

o The OLAP cube installed is version 7.01 or later and no special modification has broken
the link with the calendars' dynamic properties. In this case, the new properties will be
automatically taken into account.

If the version is earlier or a special modification has broken the link with the calendars’
dynamic properties, you must manually modify the OLAP cube to take the new settings
into account. We recommend using the vtCommonDataSchema.vtGetAllCalendar()
function to extract the calendars. See below for more information.

This rule also applies to any reports in the cube.

o The dashboards installed are version 7.01 or later and no special modification has broken
the link with the calendars' dynamic properties. In this case, the new properties will be
automatically taken into account.

If the version is earlier or a special modification has broken the link with the calendars’
dynamic properties, you must either upgrade the dashboards to the new version if the
dashboards are standard or manually modify the dashboards to take the new properties
into account. We recommend using the vtCommonDataSchema.vtGetAllCalendar()
function to extract the calendars. See below for more information.

• By default, the OLAP cube contains three calendars in the Time dimension: the standard calendar,
the fiscal calendar and the default calendar in the cube repository (see the cube dictionary). The
default hierarchies in the Time dimension correspond to the default calendar in the repository.
All functional reports defined with a time-based selection which is not a range of dates will use
the default calendar of the cube repository, i.e. fiscal or calendar depending on the selected
option.

• By default, the Qlik model contains only the default calendar in the Qlik repository. All dashboards
are by default based on the calendar in the repository defined in the properties if the selections
or comparisons in the report relate to different date periods.

• Note: To ensure the calendars are recalculated after the properties are modified in the OLAP cube
and in Qlik, you must run a weekly cube process job and load the Qlik dashboard daily. To find out
more, see ‘Qlik’ partition loading method.

• Regarding the fiscal calendar:

o The months of the year are sorted according to the order defined by the first month of
the year. The labels, however, do not change.

o The weeks can be separated from the calendar in the repository (generally, the weeks are
always calendar weeks).

o The start date of the entire cube history starts on January 1 by default even if the default
calendar is the fiscal calendar.

o The start date of the entire history of the Qlik model starts on January 1 if the default
calendar is the standard calendar or the first day of the first month of the year if the
default calendar is the fiscal calendar.

o Since the fiscal calendar usually spans two calendar years, we recommend a fiscal year
label that indicates both years, e.g. Year Y-Y+1 or Year Y:Y+1. In a fiscal calendar,
the reference year is the first year Y. The month of January in the fiscal year and all the
months that follow before the first month of the fiscal year correspond to the calendar
year Y+1. If the label defined is Year N, the month of January in the fiscal year 2016 will
be displayed as January 2016 whereas it is in reality the month of January in the standard
year 2017. It would therefore be clearer to display January 2016-17 rather than January
2016 to avoid any ambiguity.

• The calendars are stored in the vtCommonDataSchema.vtTimeDate BI Architect table and are
automatically calculated once the configuration is validated.

• In addition to the table, BI Architect provides a number of calendar functions (e.g. several
functions are provided to calculate the weeks).

The vtCommonDataSchema.vtGetAllCalendar() “table” function returns all calendars stored in the vtCommonDataSchema.vtTimeDate table. The function returns three calendars:

o The standard calendar

o The fiscal calendar

o The default calendar configured for the requested repository (see below)

This function takes as a parameter the business repository for which the default calendar is to be loaded. The possible values of this parameter are:
o vtCommonDataSchema.vtBIRepositoryQlik(): Qlik repository

o vtCommonDataSchema.vtBIRepositoryOLAPCube(): repository of the OLAP cube

Example of use:

SELECT *
FROM vtCommonDataSchema.vtGetAllCalendar(vtCommonDataSchema.vtBIRepositoryQlik())
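
Similarly, to return the calendars with the default calendar configured for the OLAP cube repository, pass the other parameter function mentioned above:

SELECT *
FROM vtCommonDataSchema.vtGetAllCalendar(vtCommonDataSchema.vtBIRepositoryOLAPCube())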

6. OUTPUT CURRENCY
Except in specific cases, amount and price fields in the BI Architect relational database are stored in two currencies: the consolidation currency and the initial currency of the value.

• Consolidation currency

The fields are converted into the main currency of the source database, named the consolidation currency (e.g. this is the currency of the folder for the Y2 source). If there is more than one data source, there can be more than one consolidation currency.

For example, the AmountInvoicedExceptionOfTax field in the vtFactsProductCustomerSales customer sales table contains the tax-excl. amount converted into the main currency of the source database. The value is converted into the consolidation currency of the source database using the rate in force at the time of the sales transaction. Note: The method used to search for the conversion rate may change depending on the source.

• Initial currency of the value

In addition, the tables also contain the non-converted fields expressed in their initial currency.
These fields are all suffixed by the keyword NotConverted.

For example, the AmountInvoicedExceptionOfTaxNotConverted field in the vtFactsProductCustomerSales table contains the tax-excl. amount of the sales line before conversion, expressed in the sales currency (CustomerSalesTransactionCurrencyKey field in the vtCustomerSalesTransaction table for customer sales). The two families of fields are illustrated in the query sketch below.
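
A minimal query sketch showing the two families of fields side by side (the table and column names are the ones mentioned above):

SELECT
    FactsProductCustomerSales.AmountInvoicedExceptionOfTax,             -- amount converted into the consolidation currency
    FactsProductCustomerSales.AmountInvoicedExceptionOfTaxNotConverted, -- amount in its initial, non-converted currency
    CustomerSalesTransaction.CustomerSalesTransactionCurrencyKey        -- initial currency of the sales transaction
FROM vtCustomerSalesDataSchema.vtFactsProductCustomerSales AS FactsProductCustomerSales
INNER JOIN vtCustomerSalesDataSchema.vtCustomerSalesTransaction AS CustomerSalesTransaction
    ON CustomerSalesTransaction.CustomerSalesTransactionKey = FactsProductCustomerSales.CustomerSalesTransactionKey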

It is also possible to have the values of amounts and prices in a third currency named the output currency.

In the BI repositories and dashboards (OLAP cube and Qlik), amounts and prices are displayed by default in the
consolidation currency of the source. If the output currency is enabled, all amounts and prices are converted and
displayed in the output currency.

The output currency offers several benefits:

• It means you can have a single currency in the case where several sources populate the BI
database and the consolidation currencies in the sources are all different.

• It allows the user to manage average conversion rates over long periods in order to ‘level out’
fluctuating daily rates.

• It provides a safeguard against conversion rates that are often entered incorrectly in source
applications and/or not maintained up to date which then distort the converted values (entry
errors are not generally retroactively corrected in sources).

• It is used to manage constant currency rates for Y/Y-1 comparisons (see below Constant
conversion rates for Y/Y-1 comparisons).

Enabling the output currency and managing its properties are performed via a system report:

• Log on to the Retail Intelligence Reporting Services portal.

• Select the System/Functional settings/Currencies folder.

• Click the Output currency setup report. The following screen will appear:

• This screen allows you to view and configure the properties of the output currency.

To modify properties, click the Edit properties link. If required, enter the user login and
password.

The different properties are as follows:

• Output currency is active: Boolean that specifies whether or not the output currency is active.

If this option is active, the OLAP cube and the Qlik model automatically convert the values into
the output currency. If the conversion rate to date has not been configured, the value is
displayed in the consolidation currency.

Note: The OLAP cube automatically takes this option into account if the version used to install
the cube is 7.01 or later. If the version used to install the cube is earlier than this version, this
option is not taken into account so you will have to modify the customized cube to manage the
output currency. This also applies to dashboards if they have been customized and are customer
specific.

• Output currency: To select the output currency. All other currencies should have a conversion
rate to this currency.

• Conversion is managed from sources: Boolean that tells the system whether conversion rates are
entered in the BI database or retrieved from a source. If the rates are retrieved from a source,
they must have been entered in the source. In this case, we recommend managing in the source
a specific stock exchange, a specific rate type and/or a specific conversion ID depending on the
source. In the interests of performance, we recommend against using the default PAR stock
exchange and the default rate type NOR.

• Stock exchange: Stock exchange for retrieving conversion rates from the source (only if the value
of Conversion is managed from sources is True). Avoid using the PAR stock exchange.

• Rate kind: Rate type for retrieving conversion rates from the source (only if the value of
Conversion is managed from sources is True). Avoid using the NOR rate type.

• Currency conversion ID: Conversion identifier for retrieving conversion rates from the source
(only when the source is Orli or external and if the value of Conversion is managed from sources
is True).

Once you have specified the properties, select Yes in the Confirm update configuration field to validate the new
configuration. Click View report. The following screen will appear:

Note:

• If the conversion rates are not loaded from the sources, they need to be entered in BI Architect
(see below Enter/display output currency conversion rates).

• If you modify any of the properties that affect the rates after activation and configuration of the
output currency, you will need to recalculate the BI repositories:

o Run the job for loading the data mart and the global calculation of the OLAP cube. This
job is usually run on a weekly basis on Sundays.

o Force the loading of all Qlik model partitions and run the loading of Qlik applications. To
find out more, see ‘Qlik’ partition loading method.

• If you disable the output currency, the values will be again displayed in the consolidation currency.
If rates have been entered, they will still be in the memory and so you will be able to enable or
disable this option at any time.

• BI Architect provides a set of views used in the OLAP cube and in the Qlik model for easily
retrieving values, whether or not they are converted into the output currency. See Fact extraction
views.

When the output currency is enabled, you can enter conversion rates (if not already done) or display them if they have
been loaded from sources. Conversion rates into the output currency are managed via a system report:

• Log on to the Retail Intelligence Reporting Services portal.

• Select the System/Functional settings/Currencies folder.

• Click the Output currency rates report. The following screen will appear:

• This screen is used to enter/view conversion rates into the output currency. If conversion rates
are loaded from sources, the links used to modify them are not displayed.

Rates are managed by period. You only need to enter the effective start date. The effective end
date is calculated by the system.

To add a conversion rate, click the Add a conversion rate link. If required, enter the user login
and password.

This window is used to enter/view conversion rates into the output currency by effective start
date. You can enter several rates at a time without having to leave the window.

• Currency code/name (wildcards): The currency to convert. You can enter the currency code or
name or use Transact-SQL wildcard characters to enter part of a code or name: "%" to replace a
string or "_" to replace a character.

If a single currency is found using the code or name entered, it appears in the Currency to
convert list. If several currencies match, they are all shown in the list and you must select just
one.

• Currency to convert: Name and code of the currency to convert (see previous field).

• Conversion rate: Conversion rate to the output currency. It is a decimal field. The decimal
separator must be the same as the one configured in the Windows Control Panel (generally “,” in
France and “.” in English-speaking countries).

The conversion rate must be greater than 0 but no greater than 999999.9999999999, and
contain up to 10 decimal places.

• Effect start date: The effective start date of the conversion rate. The end date is calculated by the
system using the effective start date of the next conversion rate for the same currency less one
day. If there are no future rates configured for this currency, the end date is calculated as
06/06/2079.

The user is in charge of deciding the effective rate periods. For an annual rate, enter January 1
as the effective start date of each year. For a seasonal rate, enter the season start date, etc.

Dates must fall between 01/01/1900 and 06/06/2079.


Once you have specified the properties, select Yes in the Confirm creation field and click View report to add the new
rate. You can repeat this operation as many times as you need without leaving the window.

Note:

• To modify a rate, click the Edit link.

• To delete a rate, click the Delete link (you also need to confirm).

• You can delete all selected rates by clicking the Delete all selected conversion rates link (you also need to confirm).

• You can also import conversion rates. See Import conversion rates to the output currency. –
vtCurrencyConversionHistory table.

• Rates by date range are stored in the vtCurrencyConversionHistory BI table which contains
specific identifiers for the stock exchange, the rate type and the conversion ID. The system
manages the table by automatically blocking users from modifying information concerning the
output currency loaded from source databases.

• Additionally, the conversion rates to the output currency are stored in a standardized system
table (vtOutputCurrencyConversionHistory) which shows the rates by day and not by date range.
The purpose of this table is to facilitate queries by providing the conversion rates to the output
currency by day. The table is automatically populated if the output currency is enabled (the table
is empty if disabled). See Output currency conversion rates by day below.

• The output currency rates are retroactive if they are modified. The converted values are not
stored in the database but calculated as and when needed.

• To facilitate queries, a set of views is provided. To find out more, see Fact extraction views. If the
conversion rate does not exist for a given day, the views return the value converted to the
consolidation currency.

• The modified rates will then be taken into account in the BI repositories (OLAP cube and Qlik) the
next time the repositories are processed. Note: If you retroactively modify a rate at a date earlier
than the daily partitions in the repositories, you might need to launch the global calculation of the
OLAP cube job. This job is usually run on a weekly basis on Sundays.

In addition to the output currency conversion rates by date range, BI Architect provides the
vtOutputCurrencyConversionHistory table which contains the conversion rates by day. This table is automatically
populated by the system using the currency conversion date range (if the output currency is enabled, otherwise this
table is empty).

You can view the rates in this table:

• Log on to the Retail Intelligence Reporting Services portal.

• Select the System/Functional settings/Currencies folder.

• Click the Output currency rates per day report

• Select the currencies and dates of the conversion rates then display the report
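
You can also query the table directly for ad-hoc checks. A minimal sketch (the table and column names are the ones described above; the date literal is only an illustration):

SELECT
    OutputCurrencyConversionHistoryCurrencyKeyToConvert, -- key of the currency to convert
    OutputCurrencyConversionHistoryTimeDateSys,          -- day of the rate
    OutputCurrencyConversionHistoryRate                  -- conversion rate to the output currency for that day
FROM vtCommonDataSchema.vtOutputCurrencyConversionHistory
WHERE OutputCurrencyConversionHistoryTimeDateSys >= '20220101'
ORDER BY OutputCurrencyConversionHistoryTimeDateSys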

The solution provides a set of views that extract facts and manage conversions into the output currency. One of the
features of the views is to calculate the amounts and prices in the three configured currencies. It also shows the various
conversion rates used. To find out more, see Fact extraction views.

You must use these views to display the values in the output currency. If you have to manage this yourself, however, the
vtOutputCurrencyConversionHistory table contains the conversion rates in the output currency by initial currency and
by day. Here is an example of a conversion of a retail sales tax-excl. amount:

SELECT
COALESCE(
OutputCurrency.OutputCurrencyConversionHistoryRate *
FactsProductCustomerSales.AmountInvoicedExceptionOfTaxNotConverted,
FactsProductCustomerSales.AmountInvoicedExceptionOfTax
) AS AmountInvoicedExceptionOfTax

FROM vtCustomerSalesDataSchema.vtFactsProductCustomerSales AS
FactsProductCustomerSales
INNER JOIN vtCustomerSalesDataSchema.vtCustomerSalesTransaction AS
CustomerSalesTransaction ON CustomerSalesTransaction.CustomerSalesTransactionKey
= FactsProductCustomerSales.CustomerSalesTransactionKey

LEFT OUTER JOIN vtCommonDataSchema.vtOutputCurrencyConversionHistory AS OutputCurrency
ON OutputCurrency.OutputCurrencyConversionHistoryCurrencyKeyToConvert =
CustomerSalesTransaction.CustomerSalesTransactionCurrencyKey
AND OutputCurrency.OutputCurrencyConversionHistoryTimeDateSys =
FactsProductCustomerSales.TimeDateSys

WHERE FactsProductCustomerSales.SalesChannelIdSys = 1 -- Retail

When the value of the conversion rate is null (either does not exist or the output currency is disabled), the tax-excl.
amount is loaded in the consolidation currency. The same rule applies in the views provided.

6.4.1. Constant conversion rates for Y/Y-1 comparison


If you want to use a constant conversion rate for a comparison between year Y and Y-1, simply specify the same conversion rate for period Y and period Y-1. If for whatever reason this is not possible, calculate the conversions by searching for the correct rates for each period analyzed. BI Architect provides the vtSiteComparativeTimeDate table which is
populated by comparable store management (see Comparable stores). For a given date, this table contains the
corresponding dates in years Y-1, Y-2, Y+1 and Y+2 for day/month/year and week period. This table, therefore, provides
a simple way to load the rates in force in year Y-1 when you load year Y and/or the rates in force in year Y+1 when you
load year Y. Here is an example of a query for loading the rates based on this table from wholesale sales lines (example
from the sales view). These rates can then be used to convert the different amounts needed, depending on the context:

SELECT

OutputCurrencyDayCalendarYearN_1.OutputCurrencyConversionHistoryRate AS
OutputCurrencyRateDayCalendarYearN_1,

OutputCurrencyDayWeekYearN_1.OutputCurrencyConversionHistoryRate AS
OutputCurrencyRateDayWeekYearN_1,

OutputCurrencyDayCalendarYearN_P1.OutputCurrencyConversionHistoryRate AS
OutputCurrencyRateDayCalendarYearN_P1,

OutputCurrencyDayWeekYearN_P1.OutputCurrencyConversionHistoryRate AS
OutputCurrencyRateDayWeekYearN_P1

FROM vtCustomerSalesDataSchema.vtCustomerSalesProductView AS CustomerSalesProduct

LEFT OUTER JOIN vtCommonDataSchema.vtSiteComparativeTimeDate AS SiteComparativeTimeDate

ON SiteComparativeTimeDate.SiteComparativeTimeDateSalesChannelIdSys =
CustomerSalesProduct.SalesChannelIdSys

AND SiteComparativeTimeDate.SiteComparativeTimeDateTimeDateSys =
CustomerSalesProduct.TimeDateSys

LEFT OUTER JOIN vtCommonDataSchema.vtOutputCurrencyConversionHistory AS OutputCurrencyDayCalendarYearN_1

ON OutputCurrencyDayCalendarYearN_1.OutputCurrencyConversionHistoryCurrencyKeyToConvert =
CustomerSalesProduct.CustomerSalesTransactionCurrencyKey

AND OutputCurrencyDayCalendarYearN_1.OutputCurrencyConversionHistoryTimeDateSys =
SiteComparativeTimeDate.SiteComparativeTimeDateDayCalendarYearN_1

LEFT OUTER JOIN vtCommonDataSchema.vtOutputCurrencyConversionHistory AS OutputCurrencyDayWeekYearN_1

ON OutputCurrencyDayWeekYearN_1.OutputCurrencyConversionHistoryCurrencyKeyToConvert =
CustomerSalesProduct.CustomerSalesTransactionCurrencyKey

AND OutputCurrencyDayWeekYearN_1.OutputCurrencyConversionHistoryTimeDateSys =
SiteComparativeTimeDate.SiteComparativeTimeDateDayWeekYearN_1

LEFT OUTER JOIN vtCommonDataSchema.vtOutputCurrencyConversionHistory AS OutputCurrencyDayCalendarYearN_P1

ON OutputCurrencyDayCalendarYearN_P1.OutputCurrencyConversionHistoryCurrencyKeyToConvert =
CustomerSalesProduct.CustomerSalesTransactionCurrencyKey

AND OutputCurrencyDayCalendarYearN_P1.OutputCurrencyConversionHistoryTimeDateSys =
SiteComparativeTimeDate.SiteComparativeTimeDateDayCalendarYearN_P1

LEFT OUTER JOIN vtCommonDataSchema.vtOutputCurrencyConversionHistory AS OutputCurrencyDayWeekYearN_P1

ON OutputCurrencyDayWeekYearN_P1.OutputCurrencyConversionHistoryCurrencyKeyToConvert =
CustomerSalesProduct.CustomerSalesTransactionCurrencyKey

AND OutputCurrencyDayWeekYearN_P1.OutputCurrencyConversionHistoryTimeDateSys =
SiteComparativeTimeDate.SiteComparativeTimeDateDayWeekYearN_P1

WHERE CustomerSalesProduct.SalesChannelIdSys = 1 --Retail

6.4.2. Conversion rule for cost prices and sales/purchase prices to date
The following rule applies when converting cost prices and sales/purchase prices to date into the output currency:

• If searching for a price by date, such as the sales date to calculate the cost and so the margin (the
search will look for the price closest to this date), the price is converted using the rate at the date
entered in the search (so at the sales date in this example).

• If the price is displayed by a defined period (the effective start and end dates for the price), the
following rule is applied to load the conversion rate: if the defined price period includes the
current system date, the rate on the current system date is used. If not, the rate in force at the
end date of the defined price period is used (if the end date is later than the maximum date of
06/06/2079, you need to search for the rate at this maximum date).

The different views provided apply these rules when converting the cost prices and
sales/purchase prices to date.
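
As an illustration only, here is a minimal sketch of the second rule (a price defined by an effective period). The price table and its columns (Price, PriceNotConverted, PriceCurrencyKey, PriceStartDate, PriceEndDate) are hypothetical placeholders for whichever price entity is being converted; the conversion table and its columns are the ones described above:

SELECT
    COALESCE(
        OutputCurrency.OutputCurrencyConversionHistoryRate * Price.PriceNotConverted, -- converted into the output currency
        Price.Price                                                                   -- fallback: value in the consolidation currency
    ) AS PriceInOutputCurrency
FROM dbo.Price AS Price -- hypothetical price entity defined by an effective period
LEFT OUTER JOIN vtCommonDataSchema.vtOutputCurrencyConversionHistory AS OutputCurrency
    ON OutputCurrency.OutputCurrencyConversionHistoryCurrencyKeyToConvert = Price.PriceCurrencyKey
    -- Rate at the current system date if the price period covers it,
    -- otherwise rate at the end date of the price period (capped at the maximum date 06/06/2079)
    AND OutputCurrency.OutputCurrencyConversionHistoryTimeDateSys =
        CASE
            WHEN CAST(GETDATE() AS date) BETWEEN Price.PriceStartDate AND Price.PriceEndDate
                THEN CAST(GETDATE() AS date)
            WHEN Price.PriceEndDate > '20790606' THEN CAST('20790606' AS date)
            ELSE Price.PriceEndDate
        END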

7. ANALYSIS PERIODS
BI Architect provides tools that enable you to manage user-defined analysis periods in addition to the standard calendars in the solution. Analysis periods are designed to integrate custom period definitions that are not managed by the standard calendar, such as time-based seasons, sale periods, etc.

Although analysis periods can also manage calendars with standard periods such as weeks, 445 periods, months, or
quarters, as a general rule, you do not use them to manage these standard periods as they are already managed by
standard calendars in the solution. We recommend that you create analysis periods only if the standard calendar in the
BI solution does not support the specific period definitions you require.

You can use analysis periods to manage new custom time analysis axes, N/N-1 comparatives (up to N-3) and time
hierarchies (up to four levels).

You can manage two categories of analysis periods:

• Automatically generated periods: Users define the relevant properties that instruct the system to
generate periods automatically without any user intervention.

• Manually specified periods: Users define the periods and manage period consistency manually.

Comparatives and hierarchies are also managed based on these two categories. They can be automatically generated or
manually specified.

Note: By default, the system provides two types of ready-to-use analysis periods:

• Retail Season: Automatically generated analysis periods for managing retail seasons.

• Sales Mainland France: Analysis periods for managing sale periods for the whole of France. The
periods must be specified. By default, the system provides a history of sale periods.

These two types of analysis periods can be modified by users based on their requirements.

The examples below illustrate the definition of the following types of periods:

• Automatically generated time-based seasons for purchases. Seasons change every six months.

• Sale periods for the UK. The periods must be manually specified.

• A week-based calendar with the three hierarchy levels below: Week, 445 periods, year. Note: This calendar and its hierarchy already exist in the standard calendar. It is therefore not useful to manage it using analysis periods. It is only used here as an illustration of hierarchies.

A period type groups together analysis periods defined using common properties. There is no limit to the number of
analysis period types that the system can manage. However, we recommend that you create only what you require to
avoid overloading the solution.

• Log on to the Retail Intelligence Reporting Services portal.

• Select the System/Functional settings/Analysis periods folder.

• Run the Analysis periods types report.

• You can filter the types of analysis periods to be displayed using the name or ID as a criterion. You
can also use Transact SQL wildcards.

• To create a new type, click Add type.

• To create a new type of analysis period, you must specify the following:
o id: Unique ID of the type of period. It must not contain any spaces or special characters.
This ID is important because it is used in Dashboard formulas to identify the type of
period. Its value should therefore be clear, accurate and short. In our example, it is
SeasonPurchase.

o Name: Name of the period type. In our example, it is Season supply chain.

o Generation method:

▪ Automatic: Periods are automatically generated by the system based on different properties.

▪ Entry: Periods are manually specified by users.

In our example, Automatic is selected.

o Week type: Type of week for calculating the date of the first week of the year. This
property can be used to generate periods automatically, link N/N-X comparatives and
hierarchy levels.

In our example, we will keep the default value.

o First weekday: First day of the week for calculating the start of the week. This property
can be used to generate periods automatically, link N/N-X comparatives and hierarchy
levels.

In our example, we will keep the default value.

o Start date: The earliest start date for generating periods automatically. The first period
can start before this date depending on the properties specified for the generation of
periods. See below. This property must be blank for the Entry generation method. It is
mandatory for the Automatic generation method.

In our example, we will enter 01/06/2016. The period will therefore start on June 1,
2016.

o End date: The end date for generating periods automatically. This property must be blank
for the Entry generation method. It is optional for the Automatic generation method. If
you specify a date, the automatic generation of periods will stop no later than this date.
It can stop after this date depending on the properties specified for the generation of
periods. See below. If no end date is specified, then periods will continue being
automatically generated. This property is usually left blank.

In our example, we will not specify a date.

The other properties are related to the automatic generation. They can however be
specified for the Entry generation method to be used as default values if you want to
generate periods occasionally.

o Generation period: Sub-period used for generating periods automatically.

In our example, specify Month.

The system will generate periods based on this criterion. The possible values are as
follows:

▪ (None): No sub-period, only if periods are manually specified.

▪ Day: The value of each period generated will be at least one day.

▪ Week: The value of each period generated will be at least one week (7 days).

▪ Month: The value of each period generated will be at least one month.

▪ 445 Period: The value of each period generated will be at least one 445 period,
i.e. period of four or five weeks.

▪ Year: The value of each period generated will be at least one year.

▪ Year Week: The value of each period generated will be at least one week. The
year week is the year calculated according to the date of the first day of the first
week of the year and the date of the last day of the last week of the year. It is
based on the Week type and First weekday properties.

o Generation synchronize periods: In our example, we will keep the True value.

If the value of Generation synchronize periods is True, the system will synchronize
periods so that they are complete based on the Generation period property regardless
of the global start and end dates of the period type.

Here are a few examples:


▪ If the value of Generation period is Month, periods will always start on the first
day of the month and will always end on the last day of the month, i.e. 28 or 29,
30 or 31 depending on the month, regardless of the global start and end dates of
the period type. The first period can therefore start before the global start date
and the last period can end after the global end date if this date is specified.

▪ If the value of Generation period is Week, periods will always start with the first
day of the week (First weekday property) and will always end with the last day of
the week. For example, if Monday is the first day of the week, each period will
start with a Monday and end with a Sunday, regardless of the global start and end
dates of the period type. Week numbers will depend on the Week type property.

▪ If the value of Generation period is Year, periods will always start on the first day
of the year (Jan 1) and will always end on the last day of the year (Dec 31),
regardless of the global start and end dates of the period type.

▪ If the value of Generation period is Year Week, periods will always start on the
first day of the year calculated according to the first week of the year (First Week
type and First weekday properties) and will always end on the last day of the year
calculated according to the last week of the year, regardless of the global start
and end dates of the period type.

If the value of Generation synchronize periods is False, the system will not synchronize
periods so that they are complete based on the Generation period property. The first
and last periods may therefore be incomplete.

Here are a few examples:

▪ If the value of Generation period is Month, periods will start based on the global
start date of the period type. For example, if the global start date of the period
type is 05/02/2020, each period will start on the 5th of each month and end on
the 4th of the next month. Warning: If the first day of the global start date is the
1st, that is similar to synchronizing months.

▪ If the value of Generation period is Week, periods will start based on the global
start date of the period type. For example, if the global start date of the period
type is 05/02/2020, the first period will start on the 5th and each subsequent
period will be incremented by at least one week.

▪ If the value of Generation period is Year, periods will start based on the global
start date of the period type. For example, if the start date of the period type is
05/02/2020, each period will start on the 5th of the month and end on the 4th of
the same month the following year.

o Generation include first day: Can only be changed if the value of Generation synchronize
periods is False. If the value is False, the first day of the period will not be included when
calculating the next period.

▪ If the value of Generation period is Month, periods will start based on the global
start date of the period type. For example if the start date of the period type is
05/02/2020, the first period will start on the 5th of the month and end on the 5th
of the following month instead of the 4th. The second period will then start on
the 6th instead of the 5th. The third period will start on the 7th and so on. If a
period starts on the 31st and the next month only has 30 days, the following
period will end on the 30th.

o Generation periods by slice: Number of sub-periods to be generated by period. By default, the value is 1. In our example, specify 6 to generate periods lasting six months each.

Here are a few examples:


▪ If the value of Generation period is Month and if you enter 2, each period will
last two months.

▪ If the value of Generation period is Days and if you enter 10, each period will last
ten days.

▪ If the value of Generation period is Week and if you enter 2, each period will last
two weeks.

o Generation future years: Number of future years to generate. In our example, specify 2.

This property indicates the number of future years to generate for periods. By default,
the system generates periods until the end of the current year, i.e. the value of
Generation future years is 0 by default. If you enter 2, the system will generate periods
until the end of the current year + 2 years. For example, if today is November 6, 2020,
the system will generate periods until December 31, 2022 (rounded off based on the
sub-period to generate and the Generation synchronize periods property).

As soon as the year changes, the system will automatically check if new periods should be generated in order to always keep a margin of End of current year + Generation future years. Automatic generation is run when BI data from production databases is loaded. It is transparent to users.

If you specify a global end date for the period type, automatic generation of periods will
automatically stop once this date is reached or exceeded.

o Generation language name: Language used to generate period labels. There are two
possible languages, i.e. English US and French. In our example, we will keep the default
value.

This property is used to generate labels automatically in the language specified. To find
out more, see the Generation name and Generation short name properties below.

o Generation name help: You do not enter a value in this field. It simply contains a list with
all the keywords you can use to generate period names. See below.

o Generation name and Generation short name: Values of the names and short names of
periods to be generated.

These properties are not mandatory. If Generation short name has a null value, the period short name will default to the first 16 characters of the period name. If Generation name has a null value, the system will generate the following value for each period name: Period from dd/mm/yyyy to dd/mm/yyyy.

The period name must be unique for a given period type. If this is not the case, the
system will add a number to make it unique. The short name does not have to be
unique but we recommend that you make it unique.

In our example, enter #MonthSeasonName# #Year# for Generation name and #MonthSeasonName# #Year# for Generation short name.

You can enter literal values or use keywords to build period labels. Keywords must be
surrounded by #.

For example, if you enter:


▪ Week #Week#, #Year#: The name will be Week 5, 2020 if the period start date is
in week 5 of year 2020. #Week# will be replaced with the week of the period start
date and #Year# will be replaced with the year of the period start date.

▪ #Day# #MonthName# #Year#: The name will be November 6, 2020 if the period
start date is 06/11/2020. #Day# will be replaced with the day of the month of the
period start date. #MonthName# will be replaced with the name of the month
(in English or French depending on the language specified) of the period start
date. #Year# will be replaced with the year of the period start date.

Glossary of keywords:

▪ #StartDay# or #Day#: Day of the month of the period start date. Example: 01/06/2020 => 1

▪ #StartDay0# or #Day0#: Day of the month of the period start date preceded by a zero. Example: 01/06/2020 => 01

▪ #EndDay#: Day of the month of the period end date. Example: 30/06/2020 => 30

▪ #EndDay0#: Day of the month of the period end date preceded by a zero. Example: 05/06/2020 => 05

▪ #StartWeekdayName# or #WeekdayName#: Name of the weekday of the period start date based on the language specified. Example: 05/06/2020 => Friday or Vendredi

▪ #EndWeekdayName#: Name of the weekday of the period end date based on the language specified. Example: 06/06/2020 => Saturday or Samedi

▪ #StartWeekdayShortName# or #WeekdayShortName#: Short name of the weekday of the period start date based on the language specified. Example: 05/06/2020 => Fri. or Ven.

▪ #EndWeekdayShortName#: Short name of the weekday of the period end date based on the language specified. Example: 06/06/2020 => Sat. or Sam.

▪ #StartWeek# or #Week#: Week number of the period start date. Example: 04/01/2020 => 1

▪ #StartWeek0# or #Week0#: Week number of the period start date preceded by a zero. Example: 04/01/2020 => 01

▪ #EndWeek#: Week number of the period end date. Example: 04/01/2020 => 1

▪ #EndWeek0#: Week number of the period end date preceded by a zero. Example: 04/01/2020 => 01

▪ #Start445Period# or #445Period#: 445 period of the period start date. Example: 10/07/2020 => 7

▪ #Start445Period0# or #445Period0#: 445 period of the period start date preceded by a zero. Example: 10/07/2020 => 07

▪ #End445Period#: 445 period of the period end date. Example: 10/07/2020 => 7

▪ #End445Period0#: 445 period of the period end date preceded by a zero. Example: 10/07/2020 => 07

▪ #StartMonth# or #Month#: Month of the period start date. Example: 03/05/2020 => 5

▪ #StartMonth0# or #Month0#: Month of the period start date preceded by a zero. Example: 03/05/2020 => 05

▪ #EndMonth#: Month of the period end date. Example: 03/05/2020 => 5

▪ #EndMonth0#: Month of the period end date preceded by a zero. Example: 03/05/2020 => 05

▪ #StartMonthName# or #MonthName#: Month name of the period start date based on the language specified. Example: 05/11/2020 => November or Novembre

▪ #EndMonthName#: Month name of the period end date based on the language specified. Example: 05/11/2020 => November or Novembre

▪ #StartMonthShortName# or #MonthShortName#: Month short name of the period start date based on the language specified. Example: 05/11/2020 => Nov. or Nov.

▪ #EndMonthShortName#: Month short name of the period end date based on the language specified. Example: 05/11/2020 => Nov. or Nov.

▪ #StartYear# or #Year#: Year of the period start date. Example: 05/11/2020 => 2020

▪ #EndYear#: Year of the period end date. Example: 05/11/2021 => 2021

▪ #StartYearShort# or #YearShort#: Short year of the period start date. Example: 05/11/2021 => 21

▪ #EndYearShort#: Short year of the period end date. Example: 05/11/2022 => 22

▪ #StartYearWeek# or #YearWeek#: Year week of the period start date. Example: 05/11/2020 => 2020

▪ #EndYearWeek#: Year week of the period end date. Example: 05/11/2021 => 2021

▪ #StartYearWeekShort# or #YearWeekShort#: Short year week of the period start date. Example: 05/11/2021 => 21

▪ #EndYearWeekShort#: Short year week of the period end date. Example: 05/11/2022 => 22

▪ #StartTrimester# or #Trimester#: Quarter of the period start date. Example: 03/05/2020 => 2

▪ #StartTrimester0# or #Trimester0#: Quarter of the period start date preceded by a zero. Example: 03/05/2020 => 02

▪ #EndTrimester#: Quarter of the period end date. Example: 03/05/2020 => 2

▪ #EndTrimester0#: Quarter of the period end date preceded by a zero. Example: 03/05/2020 => 02

▪ #StartSemester# or #Semester#: Semester of the period start date. Example: 03/05/2020 => 1

▪ #StartSemester0# or #Semester0#: Semester of the period start date preceded by a zero. Example: 03/05/2020 => 01

▪ #EndSemester#: Semester of the period end date. Example: 03/05/2020 => 1

▪ #EndSemester0#: Semester of the period end date preceded by a zero. Example: 03/05/2020 => 01

▪ #StartMonthSeasonName# or #MonthSeasonName#: Season name depending on the month of the period start date based on the language specified. The season is calculated based on months and not the precise date. The pivot months of the seasons are implicitly associated with the start of the season.

• December, January, February: Winter

• March, April, May: Spring

• June, July, August: Summer

• September, October, November: Autumn

Example: 05/03/2020 => Spring or Printemps

▪ #EndMonthSeasonName#: Season name depending on the month of the period end date based on the language specified. Example: 05/11/2020 => Autumn or Automne

▪ #StartMonthSeasonShortName# or #MonthSeasonShortName#: Season short name depending on the month of the period start date based on the language specified. Example: 05/12/2020 => Win. or Hiv.

▪ #EndMonthSeasonShortName#: Season short name depending on the month of the period end date based on the language specified. Example: 05/06/2020 => Sum. or Eté.

Once you have specified the period type properties, select Yes in the Confirm create field and click View report to
create the period type. Periods will then be automatically generated based on the properties.

Each time BI data is loaded by the standard SSIS packages, the system will check if new periods should be added,
depending on the system date, the number of future years specified and if the specified global end date of the period
type is reached.
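
As an illustration of the expected result for the Season supply chain example above (start date 01/06/2016, monthly sub-periods grouped by six, synchronized periods, #MonthSeasonName# #Year# as the name), the first generated periods should look like this:

Summer 2016: from 01/06/2016 to 30/11/2016
Winter 2016: from 01/12/2016 to 31/05/2017
Summer 2017: from 01/06/2017 to 30/11/2017

and so on, up to the end of the current year plus the number of future years specified.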

For our examples, create the following period types:

• Type of period - UK sales


o Id: SalesUK
o Name: Sales mainland UK
o Generation method: Entry

Keep the default values for the other properties.

• Type of period - Year week for the week-based calendar


o Id: YearWeek

o Name: Year week
o Generation method: Automatic
o Start date: 01/01/2017
o Generation period: Year week
o Generation name: Year #YearWeek#
o Generation short name: #YearWeek#

Keep the default values for the other properties.

• Type of period - 445 Period for the week-based calendar


o Id: 445Period
o Name: Period 445
o Generation method: Automatic
o Start date: 01/01/2017
o Generation period: Period 445
o Generation name: Period #445Period#, #YearWeek#
o Generation short name: #445Period#

Keep the default values for the other properties.

• Type of period - Week for the week-based calendar


o Id: Week
o Name: Week
o Generation method: Automatic
o Start date: 01/01/2017
o Generation period: Week
o Generation name: Week #Week#, #YearWeek#
o Generation short name: #Week#

Keep the default values for the other properties.

You can modify or delete the types of analysis periods if required. You can also change generation method from
Automatic to Entry and vice versa. In the first case, all automatically generated periods will be kept. In the second case,
you must delete all manually specified periods before switching to automatic.

• To modify a period type, click Update.

• To delete a period type, click Delete.

Note: You cannot change certain properties of automatically generated periods and you cannot
delete a period type if one or more periods are linked to another period in a comparative or
hierarchy, and if this link was manually defined. See below. If you want to delete it, you must
first delete the link. The system will display a warning to users.

If the link was automatically defined, then there are no modification or deletion constraints.

7.1.1. Comparatives and hierarchies for period types


One of the objectives of analysis periods is to perform comparative analyses and manage hierarchies.

Comparatives are used to link periods in order to calculate differences for an indicator between two periods. For example, you may want to compare the summer season of 2020 with the summer season of 2019.

Hierarchies also rely on links between periods, this time to build hierarchy levels. For example, if you want to create a year/week hierarchy, you must link weeks to years.

Links between periods can be defined automatically or manually. For automatic links, the system will manage links
between periods based on certain properties. For manual links, users must define links manually. As a general rule, we
recommend that you define links automatically as defining them manually is a highly time-consuming task if you want to
modify generation properties and delete periods.

You can define up to three comparative levels and three hierarchy levels. There will therefore be a total of four
hierarchy levels including the base level.

Comparative analyses generally compare data between value N and value N-1. N is the current year and N-1 is the
current year minus one. However, the system enables you to configure a period shift if required. This means that you
can configure N-1 as the current year minus five or even the current year plus five. Comparative analyses can also be
performed using other criteria such as the order of periods. For the sake of clarity, we recommend that you follow the
rationale below:

• N-1 should logically be the current period -1. If the analysis compares years, then it is the current
year minus one.

• N-2 should logically be the current period -2. If the analysis compares years, then it is the current
year minus two.

etc.

Linking periods to build hierarchy levels is generally done without any period shift. However, if required, you can configure a period shift just like for comparatives.

Although it is not mandatory, we recommend that you define comparatives and hierarchy levels according to the
rationale below:

• For comparatives, specify N-2 if N-1 is specified. Specify N-3 if N-2 is specified.

• For hierarchy levels, specify level 2 if level 1 is specified. Specify level 3 if level 2 is specified.

Comparatives appear in yellow columns. Hierarchies appear in blue columns. By default, only columns for defined
comparatives and hierarchy levels will appear. If you defined only one comparative or one hierarchy level, or none at all,
only the first column N-1 will appear for comparatives and only Hierarchy Level 1 will appear for hierarchy levels.

• If required:
o To show or hide comparative N-2, click the plus + sign in the Comparative N-1 column.

o To show or hide comparative N-3, click the plus + sign in the Comparative N-2 column.

• If required:
o To show or hide level 2 of the hierarchy, click the plus + sign in the Hierarchy Level 1
column.

o To show or hide level 3 of the hierarchy, click the plus + sign in the Hierarchy Level 2
column.

• To define comparative N-1 for the Purchase season in our example, click Update to the right of
the period type in the Comparative N-1 column.

• To enable a comparative, you must specify the following (the same procedure applies for
hierarchy levels):
o Method: Used to define the link between periods. In our example, select Period
according year/start date.

The possible methods are described in further detail below:


▪ (None): The comparative or hierarchy level is not enabled.
▪ Entry: Links between periods are manually defined by users.

▪ Previous period according to order: Links between periods are automatically generated according to the sequence number of the previous period. The period with sequence number N will be linked to the period with sequence number N-1. To find out more, see Management of analysis periods.

▪ Next period according to order: Links between periods are automatically generated according to the sequence number of the next period. The period with sequence number N will be linked to the period with sequence number N+1. To find out more, see Management of analysis periods.

▪ Period according year/start date: Links between periods are automatically generated according to the year of the period start date. This method requires you to define other properties. To find out more, see below.

▪ Period according year/end date: Links between periods are automatically generated according to the year of the period end date. This method requires you to define other properties. To find out more, see below.

▪ Period according year/start-end date: Links between periods are automatically generated according to the year of the period start and end dates. This method requires you to define other properties. To find out more, see below.

o Linked analysis period: Type of period to be used for the link. As a general rule, you
must use the same type of period and therefore, you should keep the (Same type)
default value. You would usually select another period type for hierarchies. In our
example, we will keep the default value.

If you use another period type for links defined with a Period according year... value,
they must have the same generation properties. See below. This is because these
methods are based on sub-period relative values. For example, if you compare the
weeks from one year to another, the system will compare year N with year N-1
according to the week number. For results to be consistent, the type of week and/or
the first day of the week must be identical for the two types of periods. Users must
ensure the correct types are used in comparatives.

o Year shift: Number of years in the period shift configured for Period according year...
comparison methods. The value can be negative, positive or equal to zero. This property
enables you to define the number of years to deduct or to add to calculate the sub-
period used as the link. Generally, this value must be negative for comparatives. -1 is
usually used for comparative N-1, -2 for comparative N-2, etc. To find out more, see
below. In our example, specify -1.

o Year sub-period: Sub-period of the year used to generate links for Period according year… comparison methods. In our example, specify Day/month.

The different methods possible are as follows:


▪ (None): No sub-period is used (only for methods with no sub-periods).

▪ Week: Periods are linked using the Year shift property and the week number of
the year. This sub-period is usually used to link weeks.

▪ Day/Week: Periods are linked using the Year shift property, the week number
of the year and the day of the week (Monday, Tuesday, etc.). This sub-period is
usually used to link days of the week.

▪ Month: Periods are linked using the Year shift property and the month of the
year. This sub-period is usually used to link months of the year.

▪ Day/Month: Periods are linked using the Year shift property and the
day/month. This sub-period is usually used to link miscellaneous non-standard
periods.

▪ Trimester: Periods are linked using the Year shift property and the quarter of
the year. This sub-period is usually used to link quarters.

▪ Semester: Periods are linked using the Year shift property and the semester of
the year. This sub-period is usually used to link semesters.

▪ 445 Period: Periods are linked using the Year shift property and the number of
the 445 period of the year. This sub-period is usually used to link 445 periods.

▪ Year: Periods are linked using the Year shift property. This sub-period is usually
used to link years.

▪ Year week: Periods are linked using the Year shift property. This sub-period is
usually used to link year weeks.

• Force delete all existing links: If you change the link method from Entry to an automatic method or to (None) and if one or more links are defined, the system will not authorize any modification to the method as long as a manual link exists. In this case, you should delete all existing links manually or specify True for this option to delete all links automatically. Warning: Deletion of links is immediate and permanent. To find out more, see Management of analysis periods.

• Confirm update: Select Yes to modify the link.

Repeat the procedure to add comparative N-2 for the SeasonPurchase period type with the following properties:

• Method: Period according year/start date


• Linked analysis period type: (Same type)
• Year shift: -2
• Year sub-period: Day/month

For our examples, create the following comparatives and hierarchies:

• Comparatives for the Sales mainland UK period type.


o Comparative N-1:
▪ Method: Period according year/start date
▪ Linked analysis period type: (Same type)
▪ Year shift: -1
▪ Year sub-period: Day/month

o Comparative N-2:
▪ Method: Period according year/start date
▪ Linked analysis period type: (Same type)
▪ Year shift: -2
▪ Year sub-period: Day/month

• Hierarchy for the Week period type.


o Level 1:
▪ Method: Period according year/start date
▪ Linked analysis period type: Period 445
▪ Year shift: 0
▪ Year sub-period: Period 445

o Level 2:
▪ Method: Period according year/start date
▪ Linked analysis period type: Year week
▪ Year shift: 0
▪ Year sub-period: Year week

About links: We recommend that you use automatic links and avoid specifying links manually. The rules used to manage
links are identical for both comparatives and hierarchies. There are two categories of methods for managing automatic
links:

• Methods using the sequence number: These are required when there is no time-based link between the sub-periods of a year. See below. Sequence numbers usually require you to classify periods manually, so these methods are generally used for manually entered periods. To find out more, see Management of analysis periods.

• Methods with time-based links between the sub-periods of a year: These are the methods
whose name starts with Period according year... Generally, you should generate links using one
of these methods. In addition to this method, you must specify the following properties:
o The period shift in the number of years for the comparative, i.e. Year shift property.

o The sub-period of the year used for the link, i.e. Year sub-period property.

The link will be based on the configured year shift, the sub-period, and the period start and end dates.

As a reminder, there are two types of years. Standard years start on January 1 and end on
December 31. Year weeks start on the first day of the first week of the year and end on the last
day of the last week of the year. In the system, standard years are called Year. Year weeks are
called Year week. Year weeks are used to link the following sub-periods: Weeks, 445 Period,
Year week. Links for all the other sub-periods are generated based on the standard year.

Links can be based on the period start date, period end date or both. Generally, the start date
or the start-end date is used to link periods. If the start-end date method is used, the duration
of the periods to be linked must be strictly identical.

For comparatives, periods must generally have a unique link. N can only have one possible
value to be linked with N-1. If this is not the case, this means that the link between periods is
not logical. In this case, the system will take the first period found for the link. You can use the
period management report to display multiple links. To find out more, see Management of
analysis periods.

For comparatives, the system requires you to link from N to N-X. For the sake of clarity, avoid
defining links in the reverse order.

For hierarchies, there are generally multiple links. For example, the year has N weeks in a
year/week hierarchy. For hierarchies, links must be defined from the lowest level to the
highest level. For example, in a year/week hierarchy, you must link the week to the year, and
not the reverse, so that there is only one possible value. If this is not the case, the hierarchy
will be incorrect.

Based on the definition of links and the generation methods for periods, you may have a
period with multiple links at a given time and a unique link subsequently, or vice versa. If links
are automatically generated and if you want to generate them again, you can simply change
one of the generation properties like Year shift or Year sub period and then specify the initial
value again. Any modification made to one of the generation properties will trigger the
generation of automatic links for all linked periods. Just like for hierarchies, you can link
asymmetrical periods. For example, you can compare one week with one month, or one day
with one year, etc. Users are in charge of configuring properties correctly based on their
requirements.

Here are a few examples:


o Period according year/start date method, Year shift -1, Day/month year sub-period: Periods are linked if the period N-1 start date has the same day and month as the period N start date, one year earlier. For example, if period N starts on 22/11/2020, it will be linked to period N-1, the start date for which is 22/11/2019. For specific cases such as leap years, links for the day/month sub-period will be between 29/02 and 28/02 if possible.

o Period according year/start-end date method, Year shift -2, Month year sub-period: Periods are linked if the start and end months of period N-2 are equal to the start and end months of period N, two years earlier. For example, if the start date and end date of period N are respectively 01/11/2020 and 28/02/2021, then it will be linked to period N-2 whose start date falls between 01/11/2018 and 30/11/2018 AND whose end date falls between 01/02/2019 and 28/02/2019.
o Period according year/start date method, Year shift -1, Week year sub-period: Periods are linked if the period N-1 start date falls in the same week number (within the year week) as the period N start date, one year week earlier. For example, if the period N start date is 23/11/2020, corresponding to week 48, it will be linked to period N-1 whose start date falls in week 48 of year week 2019.
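To make the year-shift arithmetic of the first example concrete, here is a minimal Transact-SQL sketch. It is an illustration only, not the product's internal implementation, and the variable names are hypothetical.

DECLARE @PeriodNStartDate date = '20201122';  -- 22/11/2020
DECLARE @YearShift int = -1;                  -- comparative N-1

-- Day/month sub-period: the linked period N-1 must start on the same day and month,
-- shifted by @YearShift years.
SELECT DATEADD(year, @YearShift, @PeriodNStartDate) AS ExpectedStartDate_N_1;  -- 2019-11-22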

Once you have created the period types, periods, comparatives and hierarchies will automatically be generated.
Alternatively, you can specify them manually based on the generation method for the period type and the link method
for comparatives and hierarchies.

To view, modify or delete the generated or manually specified analysis periods as well as comparatives and hierarchies,
you must use the system report called Analysis periods.

• Log on to the Retail Intelligence Reporting Services portal.

• Select the System/Functional settings/Analysis periods folder.

• Run the Analysis periods report.

• You must select a type of period for displaying analysis periods (Analysis period type setting). The
Analysis period type filter (wildcards) setting is used to restrict the list of period types. The other
settings in the report are as follows:
o Period name (wildcards): Used to filter period names. You can use Transact-SQL wildcards.

o Period short name (wildcards): Used to filter period short names. You can use Transact-SQL wildcards.

o Period start date from and To: Used to filter the start date of periods.

o Period end date from and To: Used to filter the end date of periods.

o Sort by and Descending sort: Sort order of the periods to be displayed. By default, periods
appear in reverse order of their start dates (period date). The other values possible are
as follows:

▪ Period order
▪ Period name
▪ Period short name

Note: If you want to change the order of periods for Entry period types, you must display periods using the Period order sort order.

o Show comparisons: If the value is True, this will display the existing comparative columns.
The default value is True.

o Show hierarchies: If the value is True, this will display the existing hierarchy columns. The
default value is True.

o Show duplicates links: If the value is True, this will display the number of links, if there
are more than one, in the comparative or hierarchy columns. In this case, the linked
period will appear in red with an exclamation mark. The default value is False.

In addition to periods and links with other periods for comparatives and hierarchies, the report also enables you to
modify the general properties of period types, comparatives and hierarchies using the Update links as shown below.
This means that you do not need to run the report for types of analysis periods to modify the general properties of
periods.

Automatically or manually linked periods appear in the comparative and hierarchy columns. You can also specify them
by clicking the Update link if the link is Entry. If this is not the case, then the link will be disabled.

The report is used to view periods that are automatically generated. You can also create, modify or delete period types
whose generation method is Entry.

Select the "Sales mainland UK" period type. The generation method of this type of period is Entry and therefore the
links for managing periods are enabled. Click Add period.

• Specify the following to create a period:


o Name: Period name. This field is mandatory and the name must be unique for the period
type. In our example, enter Winter 2018.

o Short name: Period short name. This field is optional and the short name can be unique
or not unique for the period type. See below. In our example, leave this field blank.

o Start date: Start date of the period. This date cannot be included in the start and end
dates of other periods for a given period type. In our example, enter 26/12/2018.

o End date: End date of the period. This date cannot be included in the start and end dates
of other periods for a given period type. In our example, enter 31/01/2019.

o Check unique short names: If the value is True (default value), the system will check
whether another period with the same short name exists for the same period type. If this
is the case, the creation will be rejected. However, you can select False if required, to
force the creation. In our example, we will keep the default value.

o Confirm create: Select Yes to validate the creation and click View report to create the
period.

In the same screen, create the following periods:


o Winter 2019: 26/12/2019 to 31/01/2020
o Winter 2020: 26/12/2020 to 31/01/2021

Note: The periods specified do not need to follow each other. Once you have created the period, automatic links for comparatives and hierarchies will be generated. Entry links, however, must be defined manually.

When creating a period, the sequence number of the period will be by default the last sequence
number of the period type incremented by one. This is irrespective of whether the period was
automatically or manually defined. If the period is inserted, its sequence number will be +1 or -1
depending on the sort order of the current period where it was inserted. The sequence numbers
of Entry periods can be subsequently modified. See below.

Once you have created periods, you can modify and delete them. You can also insert periods, change the sequence
numbers and occasionally generate a group of periods.

• The Delete periods link is used to delete a series of manually defined periods based on different
criteria. Note: You cannot delete a period if it is linked to another period (comparative or
hierarchy) with a manually defined link. The deletion of the periods will be rejected as long as the
link exists.

Tip: To delete all the links for a comparative or hierarchy, you can modify the general properties
of the comparative or hierarchy and select the (None) value for the link method. Validate the
modification. You can now delete the links and specify the initial values again for the method
and properties. If the initial method was Entry, you must force the deletion of links before
validating the modification.

• The Generate periods link is used to generate a series of Entry periods occasionally using
automatic generation properties.

• The Update link is used to modify the corresponding period.

• The Delete link is used to delete the corresponding period. Unlike the deletion of a series, if the
period is linked to another Entry period, the deletion will not be rejected and the link will be
deleted at the same time as the period.

• The Insert link is used to create a period while taking into account the sequence number of the
corresponding period. The link is enabled only when you sort periods by sequence number (Period
order).

o If the sort order is ascending, the inserted period will take the current value and all
sequence numbers greater than or equal to this sequence number will be incremented
by one and moved down.

o If the sort order is descending, the inserted period will take the current value and all
sequence numbers less than or equal to this sequence number will be decremented by
one and moved down.

• The Up and Down links are used to rearrange periods based on their sequence number. You can
only rearrange periods if they are sorted by sequence number (Period order).

• The Update link in comparative and hierarchy columns is used to modify links between periods.
This link is enabled only for comparatives and hierarchies whose link is Entry. If this is the case,
you can specify a period to link in the same period type or in another period type.

Analysis periods are stored in the following BI Architect tables:

• vtCommonDataSchema.vtAnalysisPeriodType: This contains the types of analysis periods.

• vtCommonDataSchema.vtAnalysisPeriodTypeLink: This contains the links of analysis period types, i.e. comparative or hierarchy.

• vtCommonDataSchema.vtAnalysisPeriod: This contains the analysis periods.

• vtCommonDataSchema.vtAnalysisPeriodLink: This contains the links of analysis periods, i.e. comparative or hierarchy.

In order to facilitate the running of queries on periods, the solution provides a view called
vtCommonDataSchema.vtAnalysisPeriodView that enables you to query analysis periods with any period type.
Furthermore, the view provides a link to the standard BI calendar table called vtCommonDataSchema.vtTimeDate.
With this view, periods are therefore normalized and always displayed with a breakdown by day. The view also returns
additional calculated fields such as the order of periods by period type, sequence number of the date according to each
period, etc.

Warning: This view restricts periods based on the vtCommonDataSchema.vtTimeDate table. If a period exceeds or is
earlier than the BI calendar, it will be partially or completely excluded from the view.

Below is an example of a query with the standard view:

SELECT
    -- BI date calendar from vtTimeDate
    AnalysisPeriod.AnalysisPeriodDayDate AS AnalysisPeriodDate,
    -- Period day number calculated according to period start/end date
    AnalysisPeriod.AnalysisPeriodDayNumber AS AnalysisPeriodDayNumber,
    -- Period order generated or entered
    AnalysisPeriod.AnalysisPeriodOrder AS AnalysisPeriodOrder,
    -- Period order calculated according to start date
    AnalysisPeriod.AnalysisPeriodOrderStartDate AS AnalysisPeriodOrderStartDate,
    AnalysisPeriod.AnalysisPeriodTypeIdApp AS AnalysisPeriodTypeIdApp,
    AnalysisPeriod.AnalysisPeriodTypeName AS AnalysisPeriodTypeName,
    AnalysisPeriod.AnalysisPeriodName AS AnalysisPeriodName,
    AnalysisPeriod.AnalysisPeriodStartDate AS AnalysisPeriodStartDate,
    AnalysisPeriod.AnalysisPeriodEndDate AS AnalysisPeriodEndDate,
    -- Hierarchy levels from 1 to 3
    AnalysisPeriod.HierarchyAnalysisPeriodName_L1 AS AnalysisPeriodNameLevel1,
    AnalysisPeriod.HierarchyAnalysisPeriodName_L2 AS AnalysisPeriodNameLevel2,
    AnalysisPeriod.HierarchyAnalysisPeriodName_L3 AS AnalysisPeriodNameLevel3,
    AnalysisPeriod.HierarchyAnalysisPeriodId_L1 AS AnalysisPeriodIdLevel1,
    AnalysisPeriod.HierarchyAnalysisPeriodId_L2 AS AnalysisPeriodIdLevel2,
    AnalysisPeriod.HierarchyAnalysisPeriodId_L3 AS AnalysisPeriodIdLevel3,
    AnalysisPeriod.HierarchyAnalysisPeriodOrderStartDate_L1 AS AnalysisPeriodOrderStartDateL1,
    AnalysisPeriod.HierarchyAnalysisPeriodOrderStartDate_L2 AS AnalysisPeriodOrderStartDateL2,
    AnalysisPeriod.HierarchyAnalysisPeriodOrderStartDate_L3 AS AnalysisPeriodOrderStartDateL3,
    -- Comparative N-1
    AnalysisPeriod.ComparabilityAnalysisPeriodTypeIdApp_N_1 AS AnalysisPeriodTypeIdApp_N_1,
    AnalysisPeriod.ComparabilityAnalysisPeriodOrder_N_1 AS AnalysisPeriodOrder_N_1,
    AnalysisPeriod.ComparabilityAnalysisPeriodTypeName_N_1 AS AnalysisPeriodTypeName_N_1,
    AnalysisPeriod.ComparabilityAnalysisPeriodName_N_1 AS AnalysisPeriodName_N_1,
    AnalysisPeriod.ComparabilityAnalysisPeriodId_N_1 AS AnalysisPeriodId_N_1,
    AnalysisPeriod.ComparabilityAnalysisPeriodStartDate_N_1 AS AnalysisPeriodStartDate_N_1,
    AnalysisPeriod.ComparabilityAnalysisPeriodEndDate_N_1 AS AnalysisPeriodEndDate_N_1,
    AnalysisPeriod.ComparabilityAnalysisPeriodOrderStartDate_N_1 AS AnalysisPeriodOrderStartDate_N_1,
    -- Comparative N-2
    AnalysisPeriod.ComparabilityAnalysisPeriodTypeIdApp_N_2 AS AnalysisPeriodTypeIdApp_N_2,
    AnalysisPeriod.ComparabilityAnalysisPeriodOrder_N_2 AS AnalysisPeriodOrder_N_2,
    AnalysisPeriod.ComparabilityAnalysisPeriodTypeName_N_2 AS AnalysisPeriodTypeName_N_2,
    AnalysisPeriod.ComparabilityAnalysisPeriodName_N_2 AS AnalysisPeriodName_N_2,
    AnalysisPeriod.ComparabilityAnalysisPeriodId_N_2 AS AnalysisPeriodId_N_2,
    AnalysisPeriod.ComparabilityAnalysisPeriodStartDate_N_2 AS AnalysisPeriodStartDate_N_2,
    AnalysisPeriod.ComparabilityAnalysisPeriodEndDate_N_2 AS AnalysisPeriodEndDate_N_2,
    AnalysisPeriod.ComparabilityAnalysisPeriodOrderStartDate_N_2 AS AnalysisPeriodOrderStartDate_N_2,
    -- Comparative N-3
    AnalysisPeriod.ComparabilityAnalysisPeriodTypeIdApp_N_3 AS AnalysisPeriodTypeIdApp_N_3,
    AnalysisPeriod.ComparabilityAnalysisPeriodOrder_N_3 AS AnalysisPeriodOrder_N_3,
    AnalysisPeriod.ComparabilityAnalysisPeriodTypeName_N_3 AS AnalysisPeriodTypeName_N_3,
    AnalysisPeriod.ComparabilityAnalysisPeriodName_N_3 AS AnalysisPeriodName_N_3,
    AnalysisPeriod.ComparabilityAnalysisPeriodId_N_3 AS AnalysisPeriodId_N_3,
    AnalysisPeriod.ComparabilityAnalysisPeriodStartDate_N_3 AS AnalysisPeriodStartDate_N_3,
    AnalysisPeriod.ComparabilityAnalysisPeriodEndDate_N_3 AS AnalysisPeriodEndDate_N_3,
    AnalysisPeriod.ComparabilityAnalysisPeriodOrderStartDate_N_3 AS AnalysisPeriodOrderStartDate_N_3
FROM vtCommonDataSchema.vtAnalysisPeriodView AS AnalysisPeriod

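As a hypothetical follow-up to the query above, the view can be restricted to a single period type and calendar day. This sketch assumes the period type Id entered when creating the type (SalesUK in our examples) is the value exposed by the AnalysisPeriodTypeIdApp column; the date value is purely illustrative.

SELECT
    AnalysisPeriod.AnalysisPeriodName AS AnalysisPeriodName,
    AnalysisPeriod.ComparabilityAnalysisPeriodName_N_1 AS AnalysisPeriodName_N_1,
    AnalysisPeriod.ComparabilityAnalysisPeriodName_N_2 AS AnalysisPeriodName_N_2
FROM vtCommonDataSchema.vtAnalysisPeriodView AS AnalysisPeriod
WHERE AnalysisPeriod.AnalysisPeriodTypeIdApp = 'SalesUK'
  AND AnalysisPeriod.AnalysisPeriodDayDate = '20201226';  -- 26/12/2020, first day of the Winter 2020 example period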
Note:

• The Qlik dashboard model and the OLAP cube use this view for loading analysis periods.

• The Qlik Retail dashboard contains standard charts for the Retail Season analysis period.

• The system provides functionalities for generating Qlik formulas for analysis periods. To find out
more, see Create master items using a template in the Dashboards Administration document.

• From version 7.70 onwards, the OLAP cube contains examples of calculated measures for Retail
sales for all analysis periods and custom measures specific to the Retail Season analysis period.
To find out more, see the Excel folder for cube objects.

8. COMPARABLE STORES
BI Architect provides tools to help you manage and compare values from different periods, such as Y, Y-1 and Y-2, for a
given store. You define the concept of comparability by store and by day in the vtCommonDataSchema.vtSite table.

You can define the comparability of the retail business activity and/or trade business activity.

You define the comparability of stores in two BI Architect tables:

• Comparable store calendar: vtCommonDataSchema.vtSiteComparativeCalendar

This indicates whether data for a store can be compared for a given day or period. This table can
be managed by the system based on the comparability method configured. It can also be
imported from and managed by an external system. To find out how to manipulate this table,
see Managing comparable measures.

• Store calendar: vtCommonDataSchema.vtSiteCalendarEvent

This manages calendars by store. The calendar indicates whether there is a comparable store
closing or opening for a given day. The calendar is used to determine if data for the store can be
compared for a given period based on the comparability method configured. You can specify the
calendar in BI Architect or import it.

Store calendars are not linked to the comparability method and can be used freely.

Furthermore, BI Architect enables you to use the concepts of store opening and closing dates to
optimize analysis.

These tables are subsequently used in the OLAP cube and Qlik model to build measures whose data can
be compared for Y and Y-1. Y and Y-2 data can also be compared.

The comparability of stores can be managed in two ways:

• Comparability is managed by BI Architect using the comparability method you configured and by
integrating store calendars. If this is the case, you must define store calendars or import them to
BI Architect.

• Comparability of stores is managed by an external system and data is imported to BI Architect.

By default, the comparability of retail sales is enabled and handled by BI Architect. The comparability of trade sales is
not enabled by default.

All parameters are therefore configured for use in Retail Intelligence. The default configuration of comparability is as
follows:

• Comparability method: Comparable days constant scope

• Sales opening date: Date of first sale

• Sales closing date: entered by user

• Type of week of the year synchronized with the global type of week in the BI calendar. By default,
it is the First week of four days with Monday as the first day of the week.

• The OLAP cube has been configured to provide comparable measures for Y and Y-1.

• Retail dashboards have been configured to provide comparable measures for Y and Y-1.

These default parameters can be modified by users. See Configuring comparable stores below.

Note: Comparable stores are managed by the standard SSIS package for loading Retail Intelligence data. If the data
source is Colombus and no job is running the package, you must set up this process.

You can modify the configuration of comparable stores at any time. You can do this in Retail Intelligence in the
Comparable stores setup system report found in the System/Functional settings/Comparable stores folder. See
System reports.

• Log on to the Retail Intelligence Reporting Services portal.

• Select the System/Functional settings/Comparable stores folder.

• Click the Comparable stores setup report and select the type of business activity. In our example,
retail sales is selected. Click View report. The following screen will appear:

• This screen enables you to view the different properties for comparable stores. You can also
define user fields to update the sales opening and closing dates for stores. See below.

To modify properties, click the Edit properties link.

The different properties are as follows:

• Comparative method: Comparability method or algorithm used by BI Architect. The different methods are as follows:

o Comparable days constant scope: Default method used by the system. This method uses
the store opening and closing dates and its calendar to determine if a day is comparable
for a given store. The aim is to use the same number of comparable days in Y and Y-1 or
Y and Y-2 when running calculations for a given period.

Days are considered comparable for Y and Y-1 if all of the conditions below are met:

▪ The store has a sales opening date.

▪ The day to be compared for Y and Y-1 is greater than or equal to the sales opening
date.

▪ The day to be compared for Y and Y-1 is less than or equal to the sales closing
date.

▪ The store is not defined as closed or non-comparable in the calendar for the same
day in Y and Y-1. See Store calendar below. The store is therefore open for
business on the same day of the same month in Y and Y-1. It is open for business
on the same day of the week for the same week in Y and Y-1.

If all of these conditions are met, the day is considered to be comparable for the store.

The conditions are similar for comparisons for Y and Y-2.

Note: Days are considered comparable for Y and Y-1 based on two different levels:
Based on the day of the year (same day and same month of the year) OR based on the
day of the week (same day of the week and same week number within the year). As
such, for a given day, the store can compare Y and Y-1 data in two different ways:

• Data can be compared based on the day of the year, i.e. same day and same
month of the year.

• Data can be compared based on the day of the week, i.e. same day of the
week and same week number within the year.

A given day may therefore be comparable or non-comparable based on the period analyzed (week of the year or month of the year).

Example:

On December 9, 2015 and December 9, 2014, store ABC was open as usual. On
December 10, 2014, store ABC was closed for renovation work.

December 9, 2015 was a Wednesday in week 50 in 2015.

December 9, 2014 was a Tuesday in week 50 in 2014.

You can compare Wednesday, December 9, 2015 with Tuesday, December 9, 2014 based on the day of the year because the store was open on both days.

However, you cannot compare the Wednesday of week 50 in 2015 (which also
corresponds to December 9, 2015) with the Wednesday of week 50 in 2014 based on
the day of the week because on Wednesday, December 10, 2014, the store was closed
for renovation work.

This means that you can compare data for store ABC on December 9, 2015 based on the
day of the year but not based on the day of the week of the year. Depending on the
analyses performed (for the week, month or year), comparable turnover may or may not
be null.

o Comparable days constant scope and global periods: This method uses the same
algorithm as the Comparable days constant scope method to check whether or not a day
is comparable. In addition, the method determines whether or not the entire week, entire
month and entire year are comparable.

To do this, the system uses additional parameters corresponding to the minimum number of comparable days that are taken into account for each period to assess:
• Minimum number of comparable days for the week

• Minimum number of comparable days for the month

• Minimum number of comparable days for the year

If the minimum number of comparable days determined using the Comparable days
constant scope algorithm reaches the threshold defined, the period will be considered
comparable.

Example:

The minimum number of comparable days for the month is 20.

The month of January had 19 comparable days in 2015 and 25 comparable days in 2014.

The month of February had 20 comparable days in 2015 and 23 comparable days in
2014.

• The month of January is considered non-comparable for 2015 and 2014 because there were only 19 comparable days in 2015 and the minimum threshold is 20 days.

• The month of February is considered comparable for 2015 and 2014 because the
minimum threshold of 20 days was reached in 2015 and 2014.

o Import: If the Import method is configured, BI Architect will not calculate the
comparability of stores. It must be provided using an external file. To find out more, see
Importing store comparability - vtSiteComparativeCalendar table.

o None: If the None method is configured, the system will not calculate the comparability
of stores.

• Minimum comparative days for week period: Minimum threshold of comparable days for
determining whether or not the entire week is comparable. The minimum is 1 day and the
maximum is 7 days. This property is applicable only if the comparability method is Comparable
days constant scope and global periods. If it is not, the property will be ignored.

• Minimum comparative days for month period: Minimum threshold of comparable days for
determining whether or not the entire month is comparable. The minimum is 1 day and the
maximum is 28 days. This property is applicable only if the comparability method is Comparable
days constant scope and global periods. If it is not, the property will be ignored.

• Minimum comparative days for year period: Minimum threshold of comparable days for
determining whether or not the entire year is comparable. The minimum is 1 day and the
maximum is 365 days. This property is applicable only if the comparability method is Comparable
days constant scope and global periods. If it is not, the property will be ignored.

Note: The calendar year is different from the year of the week. The calendar year corresponds
to the year from January 1 up to December 31. The year of the week corresponds to the year
from week 1 up to week 52 or week 53 depending on certain years. The year of the week may
start in calendar year Y-1 or it may end in calendar year Y+1.

• Sales opening update method: Method for updating the sales opening date for stores. Note: The
system considers that sales transactions are possible for this date because dates are inclusive. The
opening date is therefore comparable.

The possible methods are as follows:

o Date of first sale: This is the default value. The opening date is calculated using the date
of the first sales transaction observed in BI Architect. This date is reassessed each time
Retail Intelligence is updated regardless of the data source. Retroactive sales transactions
are taken into account.

Note: For a sales transaction to be taken into account, there must be a receipt with a
sales line (returns are included). Sales receipts without any sales lines are ignored.

The date of the first sale is calculated when the standard SSIS package for loading the
data mart is run.

o From sources: The date is specified by the source application of the store. This can be
used only if it is a non-Cegid external data source.

o Entered: The date is specified by the user in the store calendar. See Store calendar.

o From user fields: The date is retrieved from user fields. See below.

Note: If you change the method for updating the date, for example, from Date of first sale to
Entered, the existing dates specified are kept, except if the new method selected is From user
fields. If this is the case, dates are immediately retrieved from user fields. This means that all
existing values specified will immediately be lost.

• Sales closing update method: Method for updating the sales closing date for stores. Note: The
system considers that sales transactions are possible for this date because dates are inclusive. The
closing date is therefore comparable.

The possible methods are as follows:

o From sources: The date is specified by the source application of the store. This can be
used only if it is a non-Cegid external data source.

o Entered: The date is specified by the user in the store calendar. See Store calendar.

o From user fields: The date is retrieved from user fields. See below.

Note: If you change the method for updating the date, for example, from From sources to
Entered, the existing dates specified are kept, except if the new method selected is From user
fields. If this is the case, dates are immediately retrieved from user fields. This means that all
existing values specified will immediately be lost.

• Week properties managed by main configuration: If the value of this option is True, the
properties of the week, i.e. type and first day of the week, will be synchronized with the
properties of the week in the global actual calendar in the solution. This means that these
properties can only be modified in the global configuration of the BI calendar. Likewise, if the
types of week are managed by the system, the automatic modification will also be performed
for comparable stores. To find out more, see BI Calendars.

This option is enabled by default.

• Week kind: Type of calculation for the first week of the year for the comparability calendar. You cannot modify this property manually if Week properties managed by main configuration is enabled.

The numbering of the weeks in the year is based on the selected type.

The possible values are as follows:

o First week of four days (default): The first week of the year with at least four days.

o Starts on January 1: The first week of the year is the one containing January 1.

o First full week: The first week of the year is the first full week running, for example, from
Monday to Sunday, if Monday is the first day of the week.

• First week day: First day of the week (Monday to Sunday). This setting is combined with Week
kind to calculate week numbers. You cannot modify this property manually if Week properties
managed by main configuration is enabled.
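As an illustration only: when the first day of the week is Monday, the default First week of four days rule corresponds to ISO 8601 week numbering, which Transact-SQL exposes as the ISO_WEEK date part. This is a sketch for checking week numbers, not the product's internal calculation.

SELECT DATEPART(ISO_WEEK, '20201228') AS WeekNumber;  -- Monday 28/12/2020 => week 53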

• Force if comparative days don’t exist: Boolean instructing the system on the behavior to adopt
if no other year contains a comparable day. For example, February 29 in Y does not exist in Y-1.
If the value is True, the system will consider that February 29 is comparable for Y even though Y-
1 does not contain this date. Values will be integrated for Y. If the value is False, the system will
consider that February 29 is non-comparable for Y.

• Force if comparative weeks don’t exist: Boolean instructing the system on the behavior to
adopt if no other year contains a comparable week of the year. For example, week 53 in Y does
not exist in Y-1. If the value is True, the system will consider that week 53 is comparable for Y
even though Y-1 does not contain this week. Values will be integrated for Y. If the value is False,
the system will consider that week 53 is non-comparable for Y.

• Once you have specified the properties, select Yes in the Confirm update configuration field to
validate the new configuration of comparable stores. Click View report. The following screen will
appear:

Note:

• If you modify one of the properties that affect the calculation of comparability, the properties in
the vtCommonDataSchema.vtSiteComparativeCalendar table will not be immediately updated.
The status of each store affected will be Waiting to calculate as long as the calculation has not
been run. See Store calendar.

To refresh comparability properties, you should run the standard SSIS package for loading Retail
Intelligence data again, even if Colombus is the only data source for BI. The modification of
comparability properties can also affect the OLAP cube and Dashboard applications.

As such, in order to ensure that modifications to the configuration of comparable stores or store
calendars (see below) are integrated and recalculated in all BI applications, you should proceed
as described below:

o Run the job for loading the data mart and the global calculation of the OLAP cube. This
job is usually run on a weekly basis on Sundays.

o Force the loading of Dashboard applications and partitions.

• By default, the system will calculate the comparability calendar for the next 2 years, i.e. end of
the current year + 2 years. Days in the future will be considered comparable if sales transactions
exist in the future.

• Once calculated by the system, the vtCommonDataSchema.vtSiteComparativeCalendar table
will store all fields used for calculating comparable measures. See Managing comparable
measures.

• The OLAP cube contains a measure group called Comparable store calendar. This indicates
whether a store is comparable for a given day in Y and Y-1. By default, Y-2 and global periods are
not taken into account in the cube.

8.1.1. User fields for sales opening and closing dates


If the method for updating sales opening and closing dates is From user fields, you must specify the user field associated
with the date field.

The User fields for updating sales opening and closing dates by source section in the comparable store configuration
report enables you to specify the user fields containing the dates to be updated.

User fields must be defined for each active data source if there are multiple databases to be consolidated in BI Architect,
e.g. CBR with Colombus. Stores belonging to the data source configured for the user field will be updated using the date
in the user field.

To add a user field, click Add user fields for source. If required, enter the user login and password. The following screen
will appear.

Specify the following parameters in this window:

• Active source only: Select True if you want to specify only active sources.

• Source: Select the data source of the stores to be updated by the user fields.

• Sales opening user field: Select the user field for the sales opening date if the method for updating
this date is From user fields. If it is not, select (None).

Note: The type of user field can be a store (storage location) or site. If the type of user field is a
site, the system will retrieve the value from the user field of the main site of the store.

• Sales opening user field format: Select the format for entering the opening date in the user field.
The system will perform a conversion based on the format specified. This means that if the format
is not correct, the date will be null.

• Sales closing user field: Select the user field for the sales closing date if the method for updating
this date is From user fields. If it is not, select (None).

Note: The type of user field can be a store (storage location) or site. If the type of user field is a
site, the system will retrieve the value from the user field of the main site of the store.

• Sales closing user field format: Select the format for entering the closing date in the user field.
The system will perform a conversion based on the format specified. This means that if the format
is not correct, the date will be null.
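As an illustration of the kind of conversion described for the two user field format settings above, here is a hedged Transact-SQL sketch. The variable name, value and style are hypothetical; the point is that a value that does not match the expected format yields a null date, as stated above.

DECLARE @SalesOpeningUserField nvarchar(20) = '26/12/2018';

-- Style 103 corresponds to dd/mm/yyyy; TRY_CONVERT returns NULL if the text does not match.
SELECT TRY_CONVERT(date, @SalesOpeningUserField, 103) AS SalesOpeningDate;  -- 2018-12-26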

Once you have specified the settings, click the Confirm add user fields field and check the confirmation box. Click View
report to confirm the addition.

Return to the configuration window. The following screen will appear.

The user fields added for the source will appear. You can modify or delete them by clicking the relevant link in the
report.

Note:

• If you specify user fields even though the method for updating opening and closing dates is not
From user fields, there are no consequences.

• Once the user field is validated, all stores in the source will automatically be updated using the
date retrieved from the user field. The existing values will be replaced. The status of affected
stores for comparability properties will therefore be Waiting to calculate. You should run the
calculation of comparable stores again. See above.

• Once you have defined the user field, the system will automatically update the relevant date for
the store each time the value in the user field changes in BI Architect.

• If several sources are active, you should configure all of them. Note: The source of a store can
change.

8.2. Store calendar

The store calendar is used to define exceptional opening and closing dates for stores. Store calendars are indispensable for managing comparable stores if the comparability method selected is Comparable days constant scope or Comparable days constant scope and global periods.

They can also be used irrespective of the comparability of stores.


For these comparability methods, you must indicate the exceptional opening and closing dates of stores. When you
specify that a store is closed or non-comparable for a given day or period, these days or periods will be excluded when
calculating comparability.

Generally, you should specify:

• Exceptional closing dates, e.g. for repairs or renovations, bad weather, etc.

• Exceptional opening dates that must be excluded when calculating comparability, such as an
exceptional opening for business on Christmas day in a specific year.

• Certain recurrent closing dates such as Christmas are usually specified so that they are excluded
when calculating comparability.

• Users can choose whether to specify recurrent weekly closing dates such as Sunday. Note: If a
store is closed every Sunday, its weekly Sunday turnover analyzed will always be zero every year.
This means that it will not be comparable because every Sunday's turnover from one week to the
next will be zero every year. You are therefore not required to specify its closing for calculating
comparability for the week.

If all Sundays are excluded, Saturdays or Mondays in Y-1 are also usually excluded. The number
of days excluded per month and per year may increase significantly as there are at least four
Sundays in one month. If you exclude weekly recurrent closing dates, the number of comparable
days per month and per year will be the same for Y and Y-1, i.e. a constant scope. However, this
also means that you will exclude a significant number of comparable days per month and per
year from your analysis.

The calendar can be imported, specified or generated in Retail Intelligence. To import the calendar, see Importing store
comparability - vtSiteComparativeCalendar table.

To specify or generate the calendar, use the Stores calendar system report found in the System/Functional
settings/Comparable stores folder.

• Log on to the Retail Intelligence Reporting Services portal.

• Select the System/Functional settings/Comparable stores folder.

• Click the Stores calendar system report. The following screen will appear:

This screen enables you to display the calendar by specifying the following selection criteria:

• Commercial activity: Retail or trade.

• Calendar style: Style of the calendar. The default value, Store by month/week displays the
calendar using the style in MS Outlook. There is one page per store and a maximum of five weeks,
overlapping two months. Weeks appear in rows while the days of the week appear in columns.
Days, months and years appear in cells.

• Country/store (site): You can select multiple elements from the country/store hierarchy.

• Calendar event: Type of specific event to filter. Generally, the default is used.

• Calendar from: Opening date of the calendar to be displayed.

• Calendar to: Closing date of the calendar to be displayed. Warning: The system will produce a Cartesian product of the selected stores and all of the days from the opening up to the closing dates. If the period exceeds two years and the number of stores is considerable (exceeding 1,000), the report may take more than one minute to display.

• Comment (wildcards): Specific comment to filter. Generally, the default is used. You can use Transact-SQL wildcards such as % and _.

• Closed or not comparable only: If the value is True, only days or stores whose type is closed or
non-comparable will appear. Generally, the default is used.

• Display if comparative N-1: If the value is True, this indicates if the day of the year and the day of
the week are comparable for Y/Y-1 and Y-1/Y. Global periods that can be compared (weeks,
months and years) are also displayed if the comparability method is Comparable days constant
scope and global periods or Import.

• Display if comparative N-2: If the value is True, this indicates if the day of the year and the day of
the week are comparable for Y/Y-2 and Y-2/Y. Global periods that can be compared (weeks,
months and years) are also displayed if the comparability method is Comparable days constant
scope and global periods or Import.

• Display comments: If the value is True, this displays comments.

• Display color if not comparative N-2: By default, the system displays the current date in light
yellow if the day is non-comparable for Y/Y-1 or Y-1/Y based on the comparability method. If the
value is True, the color extends to Y/Y-2 or Y-2/Y. See below for the meanings of the colors
displayed.

• Display sales session time: If the value is True, this displays the opening and closing times of sales.
This data is taken from the opening and closing times of the store registers.

Click View report. The following screen will appear:

If you select the Store by month/week style, BI Architect will attempt to complete weeks from Monday to Sunday even
if the days do not fall within the selection criteria. Week numbers are calculated using the type of week configured for
comparable stores. To find out more, see Configuring comparable stores.

Color key:
• If the cell background color is light gray, this indicates days prior to the opening date of the store or after the closing
date of the store. It can also indicate days that have been excluded from selection criteria other than the date for the
week displayed.

• If the cell background color is white, this indicates days between the opening and closing dates of the store if they have
been specified. These dates are inclusive for sales transactions.

• If the cell background color of the current date is light yellow as shown in the example, this indicates that the day is
non-comparable for Y/Y-1 OR Y-1/Y based on the day of the year OR the day of the week. If the comparability method
includes global periods OR is an import, the background will also appear in light yellow if one of the periods (week,
month or year) is non-comparable for Y/Y-1 OR Y-1/Y. To compare data for Y and Y-2, you should select True for Display
color if not comparative N-2.

• If an event is associated with the day, its name will appear in a light green background. See the screen above.

• If the day is closed or non-comparable for the store, its name will also appear in a light green background. See the
screen above.

In this table, you can:

• Enter or delete events, e.g. exceptional store opening or closing dates.

• Generate or delete events in a series.

Page 91/261
• Specify the sales opening and closing dates if the method for updating dates is Entered.

8.2.1. Generating events


You can create, modify or delete an event in the calendar in two ways. You can generate events in a series for a given
period or you can enter it for a specific date.

To generate an event for a given period, click Generate event. The following screen will appear.

This screen displays the parameters for generating events.

• Country/store (site): You can select multiple elements from the country/store hierarchy to
generate the event.

• Calendar event: Type of event to associate. If you want to indicate an exceptional opening date,
we recommend that you select Exceptional opening.

• Store closed or not comparable: Select True if the store is closed or non-comparable. Days whose
value is True for a store will not be comparable for Y/Y-1 and Y/Y-2. If you select False, the event
will not affect comparability and will be used for information purposes only.

• Comment: Comment associated with the event.

• Start date: Start date of the period associated with the event.

• End date: End date of the period associated with the event.

• Day of month: Day of the month associated with the event in the selected period. By default, all
days are selected.

• Month: Month of the year associated with the event in the selected period. By default, all months
are selected.

• Day of week: Day of the week associated with the event in the selected period. By default, all
days are selected.

• Week: Weeks of the year associated with the event in the selected period. By default, all weeks
are selected.

• Force update if events exist: If the value is True and an event already exists for the store and for
the same day in the selected period, it will be replaced with the new event. If the value is False,
it will not be replaced.

Note:

• If you want to generate a period that includes all dates between the start and end dates, you
should select all days of the month for Day of month, all months for Month, all days of the week
for Day of week and all weeks of the year for Week.

• If you want to generate a recurrent event within the selected period, you should use the Day of
month, Month, Day of week and Week parameters. See the examples below.

o Christmas day: You should select 25 for Day of month and December for Month. Next,
select all days of the week for Day of week and all weeks of the year for Week.

o Sundays: You should select Sunday for Day of week. Next, select all days of the month
for Day of month, all months for Month and all weeks of the year for Week.

If you want the recurrent event to affect the entire calendar of the selected stores, you should
enter the maximum end date of the calendar, i.e. December 31, 2040. The start date should
generally be the minimum business opening date of the selected stores.

Note: Only the selected stores will be affected by this event. New stores created at a later time
will not be affected. Recurrent events should therefore be defined for all new stores created.

If created events do not fall within the sales opening and closing dates, they will not have an
impact.

Once you have specified the parameters for generating the event, click View report. The
following screen will appear:

This screen enables you to check the event to be generated, its period, its recurrence and the
stores affected prior to the update.

If the configuration is correct, click Validate the generation of events. If required, enter the user
login and password. The following screen will appear.

Select "Yes" to confirm and click View report. The update will be performed. If the period is long
and the number of stores selected is considerable, the generation may take a few minutes.

Once events have been generated, the following screen will display the number of days and
stores that were updated using the selected criteria in the calendar.

Return to the calendar to view the generated events.

Note:
• When you create a closing event for a store on a given day, comparability properties are
not updated immediately. You must rerun the standard SSIS package for loading BI
Architect data in order to refresh the comparability of stores. To find out more, see
Configuring comparable stores.

As long as the calculation has not been run, the status of each store affected will
be Waiting to calculate in the calendar. When you display comparability properties, the
same status will appear.

• If you want to modify or delete an event in a given day for a single store, you should click
Edit or Delete in the calendar. See above.

• If you want to delete a series of an event defined for a period for one or more stores, you
should click Delete events. The parameters and concept related to event deletion are
similar to those for event generation.

• By default, the OLAP cube contains a measure group called Store calendar that displays
the calendar indicating the closing or non-comparability of a store. It also contains the
Type of calendar event dimension.

8.2.2. Entering sales opening and closing dates


If the method for updating sales opening and closing dates is Entered, you can specify and modify these dates in the
store calendar.

In this case, a link enabling you to modify the date will appear next to it. See the screen above.

Note: Modifying the opening or closing date of an event affects comparability. You must rerun the standard SSIS
package for loading BI Architect data in order to refresh the comparability of stores.

Comparability properties are stored in the vtCommonDataSchema.vtSiteComparativeCalendar table. This table indicates whether the day is comparable for each business activity (trade or retail) for a given store (site) and for a given day.

If the comparability method includes global periods or is Import, comparability properties for global periods (week,
month and year) will also be specified. If this is not the case, they will be equal to zero.
Comparability properties are SQL Boolean values and can be either 0 or 1.

• 0 => Non-comparable

• 1 => Comparable

Warning: If the record for a business activity, store or day does not exist in the table, the day will be considered non-
comparable for the store.

If the comparability method is managed by BI Architect, the table contains comparability properties between the sales
opening and closing dates for a given store.

If the store does not have an opening date, the table will not contain any record for this store. It will therefore not be
comparable.

If the store does not have a closing date, the table will contain comparability days up to the next two years, i.e. end of
the current year + 2 years.

To view comparability for a measure in a fact table, you should define a join with the
vtCommonDataSchema.vtSiteComparativeCalendar table and use the fields from the fact table as shown in our
examples of queries.

• Store (site): Generally SiteKey.

• Date: Generally, TimeDateSys.

• Business activity: Usually SalesChannelIdSys.

Unique key and index in the vtCommonDataSchema.vtSiteComparativeCalendar table:


• SiteComparativeCalendarSalesChannelIdSys: Value 1 for retail sales and 2 for trade sales. Mandatory.

• SiteComparativeCalendarSiteKey: Mandatory link with the vtCommonDataSchema.vtSite site table.

• SiteComparativeCalendarTimeDateSys: Mandatory comparability date, also used to link with vtCommonDataSchema.vtTimeDate.

List of fields in vtCommonDataSchema.vtSiteComparativeCalendar for comparability based on the day of the year or the day of the week. These fields are always specified, regardless of the comparability method and whether they are imported:
• SiteComparativeCalendarIsDayCalendarYearN_N_1: Boolean (0 or 1) indicating whether the day is comparable for Y and Y-1 for the
date in SiteComparativeCalendarTimeDateSys based on the days of the months of the year.

• SiteComparativeCalendarIsDayCalendarYearN_1_N: Boolean (0 or 1) indicating whether the day is comparable for Y-1 and Y for the
date in SiteComparativeCalendarTimeDateSys based on the days of the months of the year.

• SiteComparativeCalendarIsDayWeekYearN_N_1: Boolean (0 or 1) indicating whether the day is comparable for Y and Y-1 for the date
in SiteComparativeCalendarTimeDateSys based on the days of the week of the year.

• SiteComparativeCalendarIsDayWeekYearN_1_N: Boolean (0 or 1) indicating whether the day is comparable for Y-1 and Y for the date
in SiteComparativeCalendarTimeDateSys based on the days of the week of the year.

• SiteComparativeCalendarIsDayCalendarYearN_N_2: Boolean (0 or 1) indicating whether the day is comparable for Y and Y-2 for the
date in SiteComparativeCalendarTimeDateSys based on the days of the months of the year.
• SiteComparativeCalendarIsDayCalendarYearN_2_N: Boolean (0 or 1) indicating whether the day is comparable for Y-2 and Y for the
date in SiteComparativeCalendarTimeDateSys based on the days of the months of the year.

• SiteComparativeCalendarIsDayWeekYearN_N_2: Boolean (0 or 1) indicating whether the day is comparable for Y and Y-2 for the date
in SiteComparativeCalendarTimeDateSys based on the days of the week of the year.

• SiteComparativeCalendarIsDayWeekYearN_2_N: Boolean (0 or 1) indicating whether the day is comparable for Y-2 and Y for the date
in SiteComparativeCalendarTimeDateSys based on the days of the week of the year.

List of fields for comparability based on global periods that can be the week, month or year. They are specified if the
comparability method is Comparable days constant scope and global periods or Imported:
• SiteComparativeCalendarIsGlobalPeriodWeekYearN_N_1: Boolean (0 or 1) indicating whether the week is comparable for Y and Y-1
for the date in SiteComparativeCalendarTimeDateSys.

• SiteComparativeCalendarIsGlobalPeriodWeekYearN_1_N: Boolean (0 or 1) indicating whether the week is comparable for Y-1 and Y
for the date in SiteComparativeCalendarTimeDateSys.

• SiteComparativeCalendarIsGlobalPeriodMonthYearN_N_1: Boolean (0 or 1) indicating whether the month is comparable for Y and Y-1 for the date in SiteComparativeCalendarTimeDateSys.

• SiteComparativeCalendarIsGlobalPeriodMonthYearN_1_N: Boolean (0 or 1) indicating whether the month is comparable for Y-1 and
Y for the date in SiteComparativeCalendarTimeDateSys.

• SiteComparativeCalendarIsGlobalPeriodYearYearN_N_1: Boolean (0 or 1) indicating whether the year is comparable for Y and Y-1 for
the date in SiteComparativeCalendarTimeDateSys.

• SiteComparativeCalendarIsGlobalPeriodYearYearN_1_N: Boolean (0 or 1) indicating whether the year is comparable for Y-1 and Y for
the date in SiteComparativeCalendarTimeDateSys.

• SiteComparativeCalendarIsGlobalPeriodWeekYearN_N_2: Boolean (0 or 1) indicating whether the week is comparable for Y and Y-2
for the date in SiteComparativeCalendarTimeDateSys.

• SiteComparativeCalendarIsGlobalPeriodWeekYearN_2_N: Boolean (0 or 1) indicating whether the week is comparable for Y-2 and Y
for the date in SiteComparativeCalendarTimeDateSys.

• SiteComparativeCalendarIsGlobalPeriodMonthYearN_N_2: Boolean (0 or 1) indicating whether the month is comparable for Y and Y-2 for the date in SiteComparativeCalendarTimeDateSys.

• SiteComparativeCalendarIsGlobalPeriodMonthYearN_2_N: Boolean (0 or 1) indicating whether the month is comparable for Y-2 and
Y for the date in SiteComparativeCalendarTimeDateSys.

• SiteComparativeCalendarIsGlobalPeriodYearYearN_N_2: Boolean (0 or 1) indicating whether the year is comparable for Y and Y-2 for
the date in SiteComparativeCalendarTimeDateSys.

• SiteComparativeCalendarIsGlobalPeriodYearYearN_2_N: Boolean (0 or 1) indicating whether the year is comparable for Y-2 and Y for
the date in SiteComparativeCalendarTimeDateSys.

To load comparable measures, you should multiply the indicator by the Boolean values. Warning: You should use different measures for comparing weeks and months. See the examples below.

Below is an example of an SQL query that loads tax-inclusive and tax-exclusive turnover for comparing retail sales by week/year and by month/year between Y and Y-1. Certain views already retrieve comparable values (see Fact extraction views). Our example is provided for illustration only; we strongly recommend that you use the views.

SELECT

COALESCE(FactsProductCustomerSales.SiteKey,StockRoom.StockRoomSiteKey) AS SiteKey,

FactsProductCustomerSales.ProductKey,

FactsProductCustomerSales.TimeDateSys,

AmountInvoicedExceptionOfTax * SiteComparativeCalendar.SiteComparativeCalendarIsDayCalendarYearN_N_1 AS
AmountInvoicedExceptionOfTaxComparableDayYearN_N_1,

AmountInvoicedExceptionOfTax * SiteComparativeCalendar.SiteComparativeCalendarIsDayCalendarYearN_1_N AS
AmountInvoicedExceptionOfTaxComparableDayYearN_1_N,

AmountInvoicedExceptionOfTax * SiteComparativeCalendar.SiteComparativeCalendarIsDayWeekYearN_N_1 AS
AmountInvoicedExceptionOfTaxComparableDayWeekN_N_1,

AmountInvoicedExceptionOfTax * SiteComparativeCalendar.SiteComparativeCalendarIsDayWeekYearN_1_N AS
AmountInvoicedExceptionOfTaxComparableDayWeekN_1_N,

AmountInvoicedIncludingTax * SiteComparativeCalendar.SiteComparativeCalendarIsDayCalendarYearN_N_1 AS
AmountInvoicedIncludingTaxComparableDayYearN_N_1,

AmountInvoicedIncludingTax * SiteComparativeCalendar.SiteComparativeCalendarIsDayCalendarYearN_1_N AS
AmountInvoicedIncludingTaxComparableDayYearN_1_N,

AmountInvoicedIncludingTax * SiteComparativeCalendar.SiteComparativeCalendarIsDayWeekYearN_N_1 AS AmountInvoicedIncludingTaxComparableDayWeekN_N_1,

AmountInvoicedIncludingTax * SiteComparativeCalendar.SiteComparativeCalendarIsDayWeekYearN_1_N AS AmountInvoicedIncludingTaxComparableDayWeekN_1_N

FROM FactsProductCustomerSales

LEFT OUTER JOIN StockRoom

ON StockRoom.StockRoomKey = FactsProductCustomerSales.StockRoomKey

LEFT OUTER JOIN SiteComparativeCalendar

ON SiteComparativeCalendar.SiteComparativeCalendarSalesChannelIdSys = FactsProductCustomerSales.SalesChannelIdSys

AND SiteComparativeCalendar.SiteComparativeCalendarSiteKey = COALESCE(FactsProductCustomerSales.SiteKey,StockRoom.StockRoomSiteKey)

AND SiteComparativeCalendar.SiteComparativeCalendarTimeDateSys = FactsProductCustomerSales.TimeDateSys

WHERE FactsProductCustomerSales.SalesChannelIdSys = 1 -- Retail

In our example:

• Fields ending with DayYearN… must be compared with the year, month or day (date) axis.

• Fields ending with DayWeekN… must be compared with the week of the year, week or day of the
week axis.

• Fields ending with N_1_N (N-1/N) must be used to calculate the values of year Y-1.

If you want to compare Y and Y-2 data, you should add measures to the SELECT query and multiply them by the Y-2 comparability properties, as shown in the sketch below.
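
For illustration, and following the same pattern as the query above, Y-2 comparable measures could look like the additional SELECT lines below (shown for the tax-exclusive amount only; the aliases are illustrative):

AmountInvoicedExceptionOfTax * SiteComparativeCalendar.SiteComparativeCalendarIsDayCalendarYearN_N_2 AS AmountInvoicedExceptionOfTaxComparableDayYearN_N_2,

AmountInvoicedExceptionOfTax * SiteComparativeCalendar.SiteComparativeCalendarIsDayCalendarYearN_2_N AS AmountInvoicedExceptionOfTaxComparableDayYearN_2_N,

AmountInvoicedExceptionOfTax * SiteComparativeCalendar.SiteComparativeCalendarIsDayWeekYearN_N_2 AS AmountInvoicedExceptionOfTaxComparableDayWeekN_N_2,

AmountInvoicedExceptionOfTax * SiteComparativeCalendar.SiteComparativeCalendarIsDayWeekYearN_2_N AS AmountInvoicedExceptionOfTaxComparableDayWeekN_2_N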

If the comparability method is Comparable days constant scope and global periods, you can also use the comparability properties of global periods in the same way. In this case, you should define as many measures as there are periods to be analyzed (week, month or year); see the illustrative lines below.
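
As an illustration, week-level and month-level comparable measures could then be derived from the global period properties in the same query (the aliases are illustrative):

AmountInvoicedExceptionOfTax * SiteComparativeCalendar.SiteComparativeCalendarIsGlobalPeriodWeekYearN_N_1 AS AmountInvoicedExceptionOfTaxComparableGlobalWeekN_N_1,

AmountInvoicedExceptionOfTax * SiteComparativeCalendar.SiteComparativeCalendarIsGlobalPeriodMonthYearN_N_1 AS AmountInvoicedExceptionOfTaxComparableGlobalMonthN_N_1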

Note: The cube is shipped with a default standard configuration that provides comparable measures for the main retail
sales measures such as quantity, tax excl. turnover, tax incl. turnover, margin, store traffic, etc., but only with a
breakdown by day.

The Qlik model is configured to provide all comparable measures associated with stores such as sales, store traffic, etc.
but only with a breakdown by day.

To find out more, see the dictionary of cube fields and the dashboard guide.

9. COMPARABLE SEASONS AND COLLECTIONS
BI Architect provides tools to help you define seasons and collections that can be compared. Seasons and collections
come from the sources that load BI Architect, namely Orli and Y2. The BI solution is only used to order these items so that N/N-1 comparisons can be performed.

BI Architect also provides the possibility of managing analysis periods independently of sources. These analysis periods
enable you to manage time-based concepts such as seasons and can be used with any entity, even if the source does
not support seasons. To find out more, see Analysis periods.

Seasons and collections are compared using the same method. The section below provides an illustration of how
seasons are configured.

You can compare seasons by specifying the following elements for each season:

• The SeasonOrderComparison alphanumeric field to define the order in which seasons are sorted relative to other seasons for their comparison and display in a dashboard.

• The SeasonIsNonComparableWithPrevious Boolean field to indicate whether or not the season can be compared with the previous one using this sort order.

These fields are stored in the table named vtSeason for seasons or vtProductCollection for collections and can be
defined using system reports.
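
As an illustration, the query below pairs each season with the previous one in the comparison order and blanks out pairs flagged as non-comparable. It is a minimal sketch: SeasonOrderComparison and SeasonIsNonComparableWithPrevious are the fields described above, the bare Season table name follows the convention of the other query examples in this document, and the SeasonName label column is an assumption to be checked against your schema.

SELECT
SeasonOrdered.SeasonOrderComparison,
SeasonOrdered.SeasonName,
CASE WHEN COALESCE(SeasonOrdered.SeasonIsNonComparableWithPrevious, 0) = 1
THEN NULL -- season flagged as non-comparable with the previous one
ELSE SeasonOrdered.PreviousSeasonName
END AS ComparablePreviousSeasonName
FROM (
SELECT
Season.SeasonOrderComparison,
Season.SeasonName, -- assumed label column
Season.SeasonIsNonComparableWithPrevious,
LAG(Season.SeasonName) OVER (ORDER BY Season.SeasonOrderComparison) AS PreviousSeasonName
FROM Season
) AS SeasonOrdered
ORDER BY SeasonOrdered.SeasonOrderComparison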

You can modify the properties of comparable seasons at any time. You can do this in Retail Intelligence in the
Comparable seasons system report found in the System/Functional settings/Comparable seasons folder. See System
reports.

• Log on to the Retail Intelligence Reporting Services portal.

• Select the System/Functional settings/Comparable seasons folder

• Click the Comparable seasons system report. The following screen will appear:

• This screen enables you to view and modify the different comparability properties for each
season. Note: The report will be sorted based on the order of comparison.

To modify properties, click the Edit link. If required, enter the user login and password.

The different properties are as follows:

• Order comparison: User-defined alphanumeric field for sorting seasons based on their order of
comparison. This is based on the principle that season Y is compared with season Y+1. For
example, if you want to compare Summer 2014 with Summer 2015, which is also comparable with
Summer 2016, you must enter values that will enable the system to sort seasons by specifying the
three seasons in consecutive order: S2014 for Summer 2014, S2015 for Summer 2015 and S2016
for Summer 2016.

• Is non comparable with previous season: Boolean to indicate whether or not the season can be
compared with the previous one. Based on our example above, the first winter season (Winter
2014) will contain the value W2014. This season could be compared with Summer 2016 because
it would be consecutive to it in the sort order for seasons. If you do not want the system to
perform this comparison, you should enter True for this property for Winter 2014.

Note: This Boolean only indicates whether or not the season can be compared. It will not hide
the season.

• Once you have specified the properties, you should select "Yes" in the Confirm update
configuration field to validate the new configuration of the season. Click View report. The
following screen will appear:

Note:

• Once the fields have been specified, the values are read in the different dashboards based on the specified properties.

• Because the order of comparison is defined by users, sort values can be different based on
customer requirements.

• If the sort order is not specified, the system will use the code of the season by default for the
order of comparison.

10. CRM

BI Architect provides tools for analyzing the quality of customer data for CRM. The quality of data can be analyzed in the
CRM dashboard. See the documentation on dashboards.

You can indicate the information to be analyzed by the system using the following controls:

• Check whether a field value is specified or unspecified.

• Manage the history of modified fields.

By default, only certain fields can be flagged to be checked for specified or unspecified values. Optional controls enable
you to check fields as well as to trace any modification to the fields.

You can use a system report to modify these options in BI Architect:

• Log on to the Retail Intelligence Reporting Services portal.

• Select the System/Functional settings/CRM folder.

• Click the Customer data quality setup report. The following screen will appear:

• This report indicates the customer record fields to be checked by the system for specified or
unspecified values and the fields to be traced in the log. To modify the options for a field, click the
Edit link to the right of the field. If required, enter the user login and password.

• Specify the following options:


o Check empty field: If the value is True, the field will be checked for specified or
unspecified values in CRM dashboards.

o Log update field: If the value is True, each modification of the field will be stored and the
history displayed in CRM dashboards.

o Format: Used to specify the storage format in the log for date fields if the log storage
option is enabled. For all other cases, keep the (None) value.

• Click Confirm update field to validate the modification. The following screen will appear.

Note:

• By default, the standard CRM dashboard provides tables for analyzing the quality of data. See the
documentation on dashboards.

• The fields traced in logs are stored in the BI Architect log table. To find out more, see Reading
deleted differential data.

Note: Certain fields are keys loaded in the customer record. For example, the title is a setting
related to the customer record. If a log is enabled for this field, only its modification in the
customer record will be traced. If the label of the title itself is modified, this will not be considered
a modification in the customer record.

BI Architect is used to define the customer age ranges to be analyzed.

By default, age ranges are already created. You can add an age range or modify the existing ones.

You can use a system report to modify these age ranges in BI Architect:

• Log on to the Retail Intelligence Reporting Services portal.

• Select the System/Functional settings/CRM folder.

• Click the Age range setup report. The following screen will appear:

• To create a new age range, click Add age range. If required, enter the user login and password.
The following screen will appear.

• You should specify the following elements:
o Minimum threshold: This is the minimum threshold of the age range. An age is associated
with the age range as follows:

The customer's age is greater than or equal to the minimum age range threshold and
less than the minimum threshold of the next age range.

You cannot add an existing age range. The system runs a control to check this.

o Auto generate name: If the value is True (by default), this instructs the system to generate
the name of the age range automatically based on the previous and next age ranges.

If you want a specific name, you must set the value to False and enter the name in the
following field. If this is the case, then users must manage the integrity of age ranges
themselves. We recommend that you keep the default value, True.

o Age range name: This enables you to enter the name of the age range if the value of Auto
generate name is False. See above.

• Click Confirm add range to validate the creation. The following screen will appear.

• To modify an age range, click the Edit link to the right of the age range. If required, enter the user
login and password. The following screen will appear.

• You should specify the following elements:


o Minimum threshold: See above on creating an age range.

o Auto generate name: See above on creating an age range.

o Age range name: See above on creating an age range.

• Click Confirm update range to validate the modification. The following screen will appear.

• To delete an age range, click the Delete link to the right of the age range. If required, enter the
user login and password. The following screen will appear.

• Confirm the deletion of the age range.

11. DEFAULT COMPANY FOR COST PRICE SEARCHES
The different BI repositories (OLAP cube and Qlik) and the fact extraction views (see Fact extraction views) provide
information linked to cost prices managed by company. This can be the unit cost price and/or valuations such as costs
for calculating margins. You can tell the system which company to use to search for the cost prices. If the company is
not specified or does not exist, the system will load the cost prices using the nearest current information.

The company associated with the loaded cost price is determined by exception, in the following order (a conceptual sketch is shown after the list):

1. The default company for cost price searches in BI Architect if specified (see below)

2. The company associated with the transaction if specified

3. The company linked to the storage location associated with the transaction if specified

4. The company linked to the site associated with the transaction if specified
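
Conceptually, this resolution can be thought of as a COALESCE over the four levels. The sketch below is an illustration only: the standard views and the cube already implement this rule; @DefaultCompanyKey stands for the default company configured in BI Architect, and the CompanyKey, StockRoomCompanyKey and SiteCompanyKey column names are assumptions that should be checked against your schema.

SELECT

COALESCE(
@DefaultCompanyKey, -- 1. Default company for cost price searches (BI Architect configuration)
FactsProductCustomerSales.CompanyKey, -- 2. Company of the transaction (assumed column name)
StockRoom.StockRoomCompanyKey, -- 3. Company of the storage location (assumed column name)
Site.SiteCompanyKey -- 4. Company of the site (assumed column name)
) AS CostPriceCompanyKey

FROM FactsProductCustomerSales

LEFT OUTER JOIN StockRoom ON StockRoom.StockRoomKey = FactsProductCustomerSales.StockRoomKey

LEFT OUTER JOIN Site ON Site.SiteKey = COALESCE(FactsProductCustomerSales.SiteKey, StockRoom.StockRoomSiteKey)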

The default company for loading cost prices is configured in a system report:

• Log on to the Retail Intelligence Reporting Services portal.

• Select the System/General settings folder.

• Click the BI Configuration setup report. The following screen will appear:

• The Other options field is used to indicate the default company for cost price searches. To change
it, click the Edit link in the same line. If required, enter the user login and password.

• Enter the application identifier of the default search company (check the null box if you do not
wish to specify a company). The application identifier corresponds to the value in the
CompanyIdApp field in the vtCompany table.
In a multi-database consolidation context, if the company ID contains a prefix with the database
code, the ID entered must also have a prefix with the database code.

Note: The system does not block data entry if the company specified does not exist in the BI
database.

Once you have entered the company, you should select "Yes" in the Confirm update field and click View report to
confirm the configured company.

Note:

• This modification is automatically taken into account in both the OLAP cube and the Qlik model
(managed in the views of facts).

Note, however, that if the version of the OLAP cube is earlier than 7.01 during setup, the
modification is not automatically taken into account. In this case, you must modify the OLAP
cube manually for it to integrate this dynamic parameter. The version of the Qlik model must
also be 7.01 or later.

• To recalculate the cost prices in the OLAP cube and the Qlik model after making the change, you
should run the weekly cube process job and load the Qlik model.

12. STORE TRAFFIC TERMINALS TO BE EXCLUDED
Retail Intelligence provides several dashboards for analyzing store traffic and transformation rates. This data is stored in
the vtCustomerSalesDataSchema.vtFactsStockRoomEvent table in BI Architect.

This table also contains data on store traffic, i.e. the number of customers entering and exiting the store. Some stores
may have installed terminals to analyze foot traffic in front of a store display. These terminals should be excluded from
analyses on store traffic and transformation rates.

Terminals that are not defined as customer entries/exits are automatically excluded from the OLAP cube and,
consequently, from reports as well as dashboards.

You can define the terminals to be excluded in a system report as follows:

• Log on to the Retail Intelligence Reporting Services portal.

• Select the System/General settings folder.

• Click the BI Configuration setup report. The following screen will appear:

• The terminals excluded from analyses on store traffic appear in the Store traffic terminals to
exclude (other than customer entry/exit terminals) field. To modify them, click the Edit link in
the field. If required, enter the user login and password.

• Specify the terminals you want to exclude in the Attendance terminal id list of not input/output kind list.

Warning: If you do not want to exclude any terminal, you should select (None).

• Once you have specified the terminals, you should select "Yes" in the Confirm update
configuration field to validate the new configuration. Click View report. The following screen will
appear:

Note:

• This functionality is automatically integrated in the OLAP cube and, consequently, in reports as
well as dashboards.

Note, however, that if the version of the OLAP cube and dashboards is earlier than 6.50 at the time of setup, this functionality will not be automatically taken into account. In this case, you must modify the OLAP cube and the Dashboards model manually. Depending on whether the dashboards are standard or specific, you may also be required to update the dashboard version in order to integrate this dynamic parameter.

• Once you have modified the terminals to be excluded, you should recalculate store traffic in the
OLAP cube by running a weekly cube process job and by loading dashboards.

• Store traffic terminals are stored in the time fields in the vtCustomerSalesDataSchema.vtFactsStockRoomEvent table. You must therefore run a query on this table to exclude terminals. Below is an example of a query that extracts customer entries and exits while excluding non-entry/exit terminals:

SELECT

FactsStockRoomEvent.TimeDateSys,

FactsStockRoomEvent.StockRoomKey,

COALESCE (FactsStockRoomEvent.SiteKey, StockRoom.StockRoomSiteKey) AS StockRoomSiteKey,

CASE WHEN COALESCE(FactsStockRoomEvent.CountCustomerEntered,0) = 0 THEN
CountCustomerEnteredEntry ELSE CountCustomerEntered END AS CountCustomerEntered,

CASE WHEN COALESCE(FactsStockRoomEvent.CountCustomerLeft,0) = 0 THEN
CountCustomerLeftEntry ELSE CountCustomerLeft END AS CountCustomerLeft

FROM FactsStockRoomEvent

LEFT OUTER JOIN StockRoom ON FactsStockRoomEvent.StockRoomKey = StockRoom.StockRoomKey

LEFT OUTER JOIN AttendanceTerminalId ON AttendanceTerminalId.AttendanceTerminalId =
FactsStockRoomEvent.AttendanceTerminalId

WHERE FactsStockRoomEvent.SalesChannelIdSys = 1

AND FactsStockRoomEvent.EventIdSys = 1

AND COALESCE(AttendanceTerminalId.AttendanceTerminalIsNotInputOutputKind,0) = 0

AND COALESCE (FactsStockRoomEvent.IsAnnounce, 0) = 0

AND (COALESCE(FactsStockRoomEvent.CountCustomerLeft,0) <> 0
OR COALESCE(FactsStockRoomEvent.CountCustomerEntered,0) <> 0
OR COALESCE(FactsStockRoomEvent.CountCustomerLeftEntry,0) <> 0
OR COALESCE(FactsStockRoomEvent.CountCustomerEnteredEntry,0) <> 0)

13. IMPORTING SYSTEM DATA TO BI ARCHITECT
BI Architect can be used to import system data.

You can import system data in two ways:

• Import data to BI Architect via the consolidation module. This is optional for the on-premises
version but mandatory for the SaaS version.

• Import data directly to BI Architect without using the consolidation module. This applies only to
the on-premises version; it is not an option in the SaaS version.

The first method (import via the consolidation module) is described in the chapter Importing non-Cegid data - BI SaaS and/or BI On-premises in the BI Architect Database Consolidation document. Refer to that document to import system data via the consolidation module, then refer to Description of system data to be imported below, which describes the data to be imported for each entity.

When imported directly, system data is integrated when standard SSIS packages for loading the data marts are run. To
import this data, two folders must already exist and be configured in the BI database. The recommended rules for these
folders are:

• The folders must be in the data drive of the BI Architect server, usually D: on the server hosting
the BI Architect databases.

• The folders must be in a root folder with the same name as the BI Architect database (usually
vtNextDW); all the folders are in this root folder.

• Do not insert spaces or use special characters in the names of the folders.

In our example, the BI Architect database is called vtNextDW and the data drive on the BI Architect server is D:. Proceed
as described below to define the system folder.

• Connect to the BI Architect server using the local administrator account on the server.

• Create the root folder, vtNextDW, on the D: drive if it does not already exist.

• If required, create the following subfolders in D:\vtNextDW.

o D:\vtNextDW\DataArchiveLoad: Folder containing the imported file archives.

o D:\vtNextDW\SystemDataLoad: Folder containing the files to be imported.

• Once you have created the folders and subfolders, you must declare them in BI Architect. Open
the BI configuration setup system report found in the System/General settings folder.

• Click Modify communication settings.

• On this screen, fill in the following fields if they are empty or have been entered incorrectly:

o Data archive path: Enter the name of the folder containing the imported file archives,
D:\vtNextDW\DataArchiveLoad.

o Files system load path: Enter the name of the folder containing the system files to be
imported, D:\vtNextDW\SystemDataLoad.

• Click View report to validate the entry and then close the system reports window.

You can also use this screen to configure database consolidations and standard imports from external data sources. To
find out more, see the BI Architect Database consolidation document.

Once you have created and declared the folders in BI Architect, place the files to be imported in the relevant folder
D:\vtNextDW\SystemDataLoad. See Description of system data to be imported for all the system data that can be
imported.

13.2.1. Description of system data to be imported

13.2.1.1. Importing the conversion rates to the output currency


The conversion rates to the output currency are stored in the vtCommonDataSchema.vtCurrencyConversionHistory
table (this table also contains the conversion rates for the other currencies) and in the
vtOutputCurrencyConversionHistory table (this table contains only the conversion rates to the output currency).

These tables can also be completed using a system report. To find out more, see Output currency.

Before importing conversion rates, you must first define the folder containing the files to be imported. To find out more,
see Importing system data to BI Architect.

Conversion rates can be imported in a text file encoded in ANSI or in Unicode encoded in UTF-16LE.

If data is in text format, the data file must be called vtOutputCurrencyConversionHistory_ImportCharData.dat.

If data is in Unicode, the data file must be called vtOutputCurrencyConversionHistory_ImportWidecharData.dat.

The data file to be imported must be placed in the system folder you configured (normally
D:\vtNextDW\SystemDataLoad).

The default column and row separators are as follows:

• Column separator: horizontal tab (ASCII code 9 or \t)


• Row separator: carriage return and line feed (ASCII code 13 and 10 or \r\n)

File columns are displayed in the following order:

Field | Type | Maximum format | Properties | Description

1 CurrencyIdAppToConvert SQLNVARCHAR 64 characters Mandatory Identifier of the currency to convert into the output currency.

This field must contain the value of the CurrencyIdApp field in the vtCurrency table.

Mandatory field. If the field is unspecified or if the ID does not exist in the database, the relevant rows will be ignored and a warning will be generated.

In a multi-database consolidation context, if the currency ID contains a prefix with the database code, the IDs in the file must also have a prefix with the database code.

2 ConversionHistoryDateSys SQLDATETIME YYYY-MM-DD 00:00:00.000 Mandatory Effective start date of the conversion rate. The system automatically calculates the effective date of the rate (see Entering/Displaying output currency conversion rates).

If the time displayed for the date is not zero, the system will force
its value to zero.

The date is mandatory.

3 OutputCurrencyIdApp SQLNVARCHAR 64 characters Mandatory Identifier of the output currency.

This field must contain the value of the CurrencyIdApp field in the vtCurrency table for the output currency.

This field is mandatory. If it is not specified or does not correspond to the output currency (see Configuring the output currency), the lines concerned will be ignored and a warning will appear. The purpose of this field is purely to check the integrity of the import in relation to the output currency.

In a multi-database consolidation context, if the currency ID contains a prefix with the database code, the IDs in the file must also have a prefix with the database code.

4 ConversionRate SQLDECIMAL 10:10 Mandatory Conversion rate of the CurrencyIdAppToConvert currency to the output currency OutputCurrencyIdApp.

This field is mandatory. Its value must be greater than 0 and equal to or less than 999999.9999999999. If not, the lines concerned will be ignored and a warning will appear.

5 IsDeleted SQLBIT 1 or 0 Optional If the value is 1, the corresponding value will be deleted in BI Architect.

You are not required to provide a format file. Only the .dat data file must be provided. If you want to ignore certain
columns, modify their order or modify column or row separators, you must provide the format file describing the data
file. In this case, the format file must have the same name as the data file. Its name should end with Format instead of
Data. Its extension must be XML if it is an XML file, or FMT if it is an FMT file.

You perform the import via a standard Transact-SQL BULK load configured with standard data import properties. To find
out more about this topic or about format files, see the relevant Microsoft documentation.
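
For reference, a load using the default separators would resemble the following Transact-SQL sketch. This is an illustration only: the standard SSIS packages perform the load for you, and the staging table name used here is a placeholder, not an actual system object.

BULK INSERT dbo.StagingOutputCurrencyConversionHistory -- placeholder target table
FROM 'D:\vtNextDW\SystemDataLoad\vtOutputCurrencyConversionHistory_ImportCharData.dat'
WITH (
FIELDTERMINATOR = '\t', -- horizontal tab (ASCII code 9), the default column separator
ROWTERMINATOR = '\r\n', -- carriage return and line feed (ASCII codes 13 and 10), the default row separator
CODEPAGE = 'ACP' -- ANSI file; use the WidecharData file name and DATAFILETYPE = 'widechar' for UTF-16LE
)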

Once the .dat data file has been placed in the D:\vtNextDW\SystemDataLoad folder containing the files to be imported,
the file will be integrated when standard SSIS packages are next run. Note:

• If there are errors in the format, the file will remain in the folder containing the files to be
imported. If there are no errors, it will be archived in the configured archives folder.

• If there are integrity issues, the system will ignore the relevant rows or it will force the value
depending on the issue, and a warning will be generated.

• You can consult errors and warnings in the standard system reports for monitoring the loading of
data marts. To find out more, see System reports.

After importing the conversion rates to the output currency, they can be used in BI Architect. See Output currency.

13.2.1.2. Importing the geographical coordinates (GPS) of entities


GPS coordinates are stored in the vtCommonDataSchema.vtGeocode table in BI Architect.

Note: This table can also be generated by BI Architect. To find out more, see Generating geographical coordinates
(GPS).

Before importing GPS coordinates, you must first define the folder containing the files to be imported. To find out more,
see Importing system data to BI Architect.

GPS coordinates can be imported in a text file encoded in ANSI or in Unicode encoded in UTF-16LE.

If data is in text format, the data file must be called vtGeocode_ImportCharData.dat.

If data is in Unicode, the data file must be called vtGeocode_ImportWidecharData.dat.

The data file to be imported must be placed in the system folder you configured (normally
D:\vtNextDW\SystemDataLoad).

The default column and row separators are as follows:

• Column separator: horizontal tab (ASCII code 9 or \t)


• Row separator: carriage return and line feed (ASCII code 13 and 10 or \r\n)

File columns are displayed in the following order:

Field | Type | Maximum format | Properties | Description

1 OriginEntityKindIdSys SQLINT N Optional Type of entity to import. Possible values (numeric) are as
follows:

• 0: No specific entity. This type can be used for importing any GPS coordinate for an undefined type.

• 1: Site (vtSite table)

• 2: Storage location (vtStockRoom table)

• 3: Supplier (vtSupplier table)

• 4: Customer (vtCustomer table)

• 5: Country (vtCountry table)

• 6: City (vtAddressCityPart table)

• 7: Region or state (vtAddressStatePart table)

• 8: Customer delivery addresses (vtCustomer table)

If the value of this field is not specified, the system will force the
value to 0.

If the value specified is not in this list, the relevant rows will be
ignored.

2 OriginEntityIdApp SQLNVARCHAR 400 characters Mandatory ID of the entity associated with the GPS coordinates. For
example, if it is a site, this field must contain the value of the
SiteIdApp field from the vtSite table.

For entities whose type is 6 or 7 (city or region), this field should usually contain:

• For the city: A concatenation of the zip code, city name and country name. Each value is separated by a space.

• For the region: A concatenation of the region name and country name. Each value is separated by a space.

This field is mandatory. If it is not specified, the relevant rows will be ignored.

Even if the specified ID does not exist in the database, the system will integrate the GPS coordinates without any error.

Warning: This field will load the GeocodeLocationValue field in the vtGeocode table concatenated with the OriginEntityKindIdSys field. The two fields are separated by a vertical bar (ASCII code 124) in the GeocodeLocationValue field as shown:

OriginEntityKindIdSys + | + OriginEntityIdApp

Below is an example of a query with a join using the vtSite table.

In a multi-database consolidation context, if the entity ID contains a prefix with the database code, the IDs in the file must also have a prefix with the database code.


3 Latitude SQLDECIMAL 18:20 Optional Latitude of the GPS coordinate.

If unspecified, the value will be 0.

4 Longitude SQLDECIMAL 18:20 Optional Longitude of the GPS coordinate.

If unspecified, the value will be 0.

5 IsDeleted SQLBIT 1 or 0 Optional If the value is 1, the corresponding value will be deleted in BI
Architect.

You are not required to provide a format file. Only the .dat data file must be provided. If you want to ignore certain
columns, modify their order or modify column or row separators, you must provide the format file describing the data
file. In this case, the format file must have the same name as the data file. Its name should end with Format instead of
Data. Its extension must be XML if it is an XML file, or FMT if it is an FMT file.

You perform the import via a standard Transact-SQL BULK load configured with standard data import properties. To find
out more about this topic or about format files, see the relevant Microsoft documentation.

Once the .dat data file has been placed in the D:\vtNextDW\SystemDataLoad folder containing the files to be imported,
the file will be integrated when standard SSIS packages are next run. Note:

• If there are errors in the format, the file will remain in the folder containing the files to be
imported. If there are no errors, it will be archived in the configured archives folder.

• If there are integrity issues (e.g. an unrecognized entity type), the system will ignore the relevant
rows and a warning will be generated.

• You can consult errors and warnings in the standard system reports for monitoring the loading of
data marts. To find out more, see System reports.

Once you have imported geographical coordinates, you can use them in BI Architect. Below is an example of a query
that displays the geographical coordinates of sites using imported data:

SELECT
Site.SiteKey AS SiteKey,
Site.SiteCode AS SiteCode,
Site.SiteName AS SiteName,
Geocode.GeocodeLatitude AS SiteLatitude,
Geocode.GeocodeLongitude AS SiteLongitude

FROM Site

LEFT OUTER JOIN Geocode


ON Geocode.GeocodeOriginAPIKindIdSys = 1 -- Entity value
AND Geocode.GeocodeLocationValue = '1|' + Site.SiteIdApp

You can see that the join in the GPS location value is made by concatenating the entity type (1 for the site, 2 for the
storage location, etc.) and the entity ID separated by a vertical bar (ASCII code 124).

The join can also be made using the GeocodeOriginAPIKindIdSys field (type 1) that indicates that the GPS coordinate
comes from the entity and not directly from the address (type 0 for searching via the service). To find out more, see
Generating geographical coordinates (GPS).

Note:

• Geographical coordinates can also be generated by BI Architect. If this is the case, the vtGeocode
table is used and queried in a different way. To find out more, see Generating geographical
coordinates (GPS). The two methods are not incompatible and can be combined when managed
by exception. See below.

• A system report displays imported GPS coordinates. To find out more, see Reports in the
System/Functional settings/Geographical folder. This report enables users to force the values by
overwriting existing values with imported ones. Only already existing values can be modified.
Manually entered values will also be overwritten by imported values.

• The standard Dashboard application uses this table to manage maps with Qlik. The application
loads geographical coordinates managed by exception in the following order:

1. If the geographical coordinates (latitude and longitude) are not null when the value of GeocodeOriginAPIKindIdSys is 1 (coordinates defined for an entity), they will be retrieved first.

2. If the geographical coordinates are null when the value of GeocodeOriginAPIKindIdSys is 1, the system will then load the geographical coordinates for the value of GeocodeOriginAPIKindIdSys at 0 (coordinates defined for a location generated and originating from a Google service).

A set of views is also provided to enable you to query an entity's geographical coordinates managed by exception in the following order: the coordinates defined for the entity (type 1) are returned first; if they do not exist, the geographical coordinates of the location are returned. The views also enable you to integrate other rules, such as rules that manage delivery addresses by exception.

Example of view names:

• Sites: vtCommonDataSchema.vtGeocodeSiteView
• Suppliers: vtCommonDataSchema.vtGeocodeSupplierView
• Customers: vtCommonDataSchema.vtGeocodeCustomerView
• Customer deliveries: vtCommonDataSchema.vtGeocodeCustomerDeliveryView
• etc.
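
For example, assuming the standard views are present in your database, site coordinates can be read directly from the site view instead of rebuilding the exception logic by hand:

SELECT TOP (100) * -- returns the geographical coordinates of sites, managed by exception as described above
FROM vtCommonDataSchema.vtGeocodeSiteView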

13.2.1.3. Importing store calendars


Store calendars are stored in the vtCommonDataSchema.vtSiteCalendarEvent table in BI Architect.

Note: This table can also be completed using a system report. To find out more, see Store calendar.

Before importing store calendars, you must first define the folder containing the files to be imported. To find out more,
see Importing system data to BI Architect.

Store calendars can be imported in a text file encoded in ANSI or in Unicode encoded in UTF-16LE.

If data is in text format, the data file must be called vtSiteCalendarEvent_ImportCharData.dat.

If data is in Unicode, the data file must be called vtSiteCalendarEvent_ImportWidecharData.dat.

The data file to be imported must be placed in the system folder you configured (normally
D:\vtNextDW\SystemDataLoad).

The default column and row separators are as follows:

• Column separator: horizontal tab (ASCII code 9 or \t)


• Row separator: carriage return and line feed (ASCII code 13 and 10 or \r\n)

File columns are displayed in the following order:

Field | Type | Maximum format | Properties | Description

1 SalesChannelIdSys SQLINT N Mandatory Business activity. Possible values (numeric) are as follows:

• 1: Retail store sales


• 2: Trade

If the value specified is not in this list, the relevant rows will be
ignored and a warning will be generated.

2 SiteIdApp SQLNVARCHAR 64 characters Mandatory Store (site) ID associated with the calendar.

This field must contain the value of the SiteIdApp field in the vtSite
table.

Mandatory field. If the field is unspecified or if the ID does not exist in the database, the relevant rows will be ignored and a warning will be generated.

In a multi-database consolidation context, if the store ID contains a prefix with the database code, the IDs in the file must also have a prefix with the database code.

3 CalendarDate SQLDATETIME YYYY-MM-DD 00:00:00.000 Mandatory Date of the event.

If the time displayed for the date is not zero, the system will force its value to zero.

The date is mandatory and must be included in the BI calendar. If this is not the case, the system will ignore the relevant rows and a warning will be generated. Null dates that are rejected will be displayed with the date 01/01/1800.

Note: The minimum and maximum limits of the BI calendar are automatically extended by the system whenever required.

4 SiteIsClosedForSales SQLBIT 1 or 0 Optional If the value is 1, the store will be shown as closed or non-comparable
for the day. The closing of the store will have an impact on store
comparability. To find out more, see Managing comparable stores.

If unspecified, the value will be 0.

5 CalendarEventKindIdSys SQLINT N Optional Type of event. Possible values (numeric) are as follows:
• 1: Holiday
• 2: Renovation work
• 3: Weekend
• 4: Public holidays
• 5: Weather
• 6: Miscellaneous
• 7: Illness
• 8: Strike
• 9: Stock-taking

If the value specified is not in this list, the value will be forced to null
and a warning will be generated.

6 Comment SQLNVARCHAR 4000 characters Optional User-defined comment. Warning: Do not add separators used in
the file format.

7 IsDeleted SQLBIT 1 or 0 Optional If the value is 1, the corresponding value will be deleted in BI
Architect.

You are not required to provide a format file. Only the .dat data file must be provided. If you want to ignore certain
columns, modify their order or modify column or row separators, you must provide the format file describing the data
file. In this case, the format file must have the same name as the data file. Its name should end with Format instead of
Data. Its extension must be XML if it is an XML file, or FMT if it is an FMT file.

You perform the import via a standard Transact-SQL BULK load configured with standard data import properties. To find
out more about this topic or about format files, see the relevant Microsoft documentation.

Once the .dat data file has been placed in the D:\vtNextDW\SystemDataLoad folder containing the files to be imported,
the file will be integrated when standard SSIS packages are next run. Note:

• If there are errors in the format, the file will remain in the folder containing the files to be
imported. If there are no errors, it will be archived in the configured archives folder.

• If there are integrity issues, the system will ignore the relevant rows or it will force the value
depending on the issue, and a warning will be generated.

• You can consult errors and warnings in the standard system reports for monitoring the loading of
data marts. To find out more, see System reports.

Once you have imported store calendars, you can use them in BI Architect for comparing stores. To find out more, see
Managing comparable stores.

13.2.1.4. Importing store comparability


Store comparability is stored in the vtCommonDataSchema.vtSiteComparativeCalendar table in BI Architect.

Warning: You can import this table only if the comparability method defined is Import. If this is not the case, the import
will be rejected. To find out more, see Configuring comparable stores.

Before importing comparable calendars, you must first define the folder containing the files to be imported. To find out
more, see Importing system data to BI Architect.

Store comparability data can be imported in a text file encoded in ANSI or in Unicode encoded in UTF-16LE.

If data is in text format, the data file must be called vtSiteComparativeCalendar_ImportCharData.dat.

If data is in Unicode, the data file must be called vtSiteComparativeCalendar_ImportWidecharData.dat.

The data file to be imported must be placed in the system folder you configured (normally
D:\vtNextDW\SystemDataLoad).

The default column and row separators are as follows:

• Column separator: horizontal tab (ASCII code 9 or \t)


• Row separator: carriage return and line feed (ASCII code 13 and 10 or \r\n)

File columns are displayed in the following order:

Field | Type | Maximum format | Properties | Description

1 SalesChannelIdSys SQLINT N Mandatory Business activity. Possible values (numeric) are as follows:

• 1: Retail store sales

• 2: Trade

If the value specified is not in this list, the relevant rows will be ignored and a warning will be generated.

2 SiteIdApp SQLNVARCHAR 64 characters Mandatory Store (site) ID associated with the calendar.

This field must contain the value of the SiteIdApp field in the vtSite table.

Mandatory field. If the field is unspecified or if the ID does not exist in the database, the relevant rows will be ignored and a warning will be generated.

In a multi-database consolidation context, if the store ID contains a prefix with the database code, the IDs in the file must also have a prefix with the database code.

3 CalendarDate SQLDATETIME YYYY-MM-DD 00:00:00.000 Mandatory Date of the comparison.

If the time displayed for the date is not zero, the system will force its value to zero.

The date is mandatory and must be included in the BI calendar. If this is not the case, the system will ignore the relevant rows and a warning will be generated. Null dates that are rejected will be displayed with the date 01/01/1800.

Note: The minimum and maximum limits of the BI calendar are automatically extended by the system whenever required.

4 IsDayCalendarYearN_N_1 SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for
this calendar day in Y and Y-1. To find out more, see
Managing comparable stores.

If unspecified, the value will be 0.

5 IsDayWeekYearN_N_1 SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for
this day of the week in Y and Y-1. To find out more, see
Managing comparable stores.

If unspecified, the value will be 0.

6 IsDayCalendarYearN_N_2 SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for
this calendar day in Y and Y-2. To find out more, see
Managing comparable stores.

If unspecified, the value will be 0.


7 IsDayWeekYearN_N_2 SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for
this day of the week in Y and Y-2. To find out more, see
Managing comparable stores.

If unspecified, the value will be 0.

8 IsDayCalendarYearN_1_N SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for
this calendar day in Y-1 and Y. To find out more, see
Managing comparable stores.

If unspecified, the value will be 0.

9 IsDayWeekYearN_1_N SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for
this day of the week in Y-1 and Y. To find out more, see
Managing comparable stores.

If unspecified, the value will be 0.

10 IsDayCalendarYearN_2_N SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for
this calendar day in Y-2 and Y. To find out more, see
Managing comparable stores.

If unspecified, the value will be 0.

11 IsDayWeekYearN_2_N SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for
this day of the week in Y-2 and Y. To find out more, see
Managing comparable stores.

If unspecified, the value will be 0.

12 IsGlobalPeriodWeekYearN_N_1 SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for the entire week in Y and Y-1. To find out more, see Managing comparable stores.

If unspecified, the value will be 0.

13 IsGlobalPeriodMonthYearN_N_1 SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for the entire month in Y and Y-1. To find out more, see Managing comparable stores.

If unspecified, the value will be 0.


14 IsGlobalPeriodYearYearN_N_1 SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for
the entire year in Y and Y-1. To find out more, see
Managing comparable stores.

If unspecified, the value will be 0.

15 IsGlobalPeriodWeekYearN_N_2 SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for the entire week in Y and Y-2. To find out more, see Managing comparable stores.

If unspecified, the value will be 0.

16 IsGlobalPeriodMonthYearN_N_2 SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for the entire month in Y and Y-2. To find out more, see Managing comparable stores.

If unspecified, the value will be 0.

17 IsGlobalPeriodYearYearN_N_2 SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for
the entire year in Y and Y-2. To find out more, see
Managing comparable stores.

If unspecified, the value will be 0.

18 IsGlobalPeriodWeekYearN_1_N SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for the entire week in Y-1 and Y. To find out more, see Managing comparable stores.

If unspecified, the value will be 0.

19 IsGlobalPeriodMonthYearN_1_N SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for the entire month in Y-1 and Y. To find out more, see Managing comparable stores.

If unspecified, the value will be 0.

20 IsGlobalPeriodYearYearN_1_N SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for
the entire year in Y-1 and Y. To find out more, see
Managing comparable stores.

If unspecified, the value will be 0.


21 IsGlobalPeriodWeekYearN_2_N SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for the entire week in Y-2 and Y. To find out more, see Managing comparable stores.

If unspecified, the value will be 0.

22 IsGlobalPeriodMonthYearN_2_N SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for the entire month in Y-2 and Y. To find out more, see Managing comparable stores.

If unspecified, the value will be 0.

23 IsGlobalPeriodYearYearN_2_N SQLBIT 1 or 0 Optional If the value is 1, data for the store can be compared for
the entire year in Y-2 and Y. To find out more, see
Managing comparable stores.

If unspecified, the value will be 0.

24 IsDeleted SQLBIT 1 or 0 Optional If the value is 1, the corresponding value will be deleted
in BI Architect.

You are not required to provide a format file. Only the .dat data file must be provided. If you want to ignore certain
columns, modify their order or modify column or row separators, you must provide the format file describing the data
file. In this case, the format file must have the same name as the data file. Its name should end with Format instead of
Data. Its extension must be XML if it is an XML file, or FMT if it is an FMT file.

You perform the import via a standard Transact-SQL BULK load configured with standard data import properties. To find
out more about this topic or about format files, see the relevant Microsoft documentation.

Once the .dat data file has been placed in the D:\vtNextDW\SystemDataLoad folder containing the files to be imported,
the file will be integrated when standard SSIS packages are next run. Note:

• If there are errors in the format, the file will remain in the folder containing the files to be
imported. If there are no errors, it will be archived in the configured archives folder.

• If there are integrity issues, the system will ignore the relevant rows or it will force the value
depending on the issue, and a warning will be generated.

• You can consult errors and warnings in the standard system reports for monitoring the loading of
data marts. To find out more, see System reports.

Once you have imported comparable calendars, you can use them in BI Architect for comparing stores. To find out
more, see Managing comparable stores.

13.2.1.5. Importing BI translations
BI translations are translations of the labels used in screens. This does not apply to data from sources where multilingual
versions are managed in the databases. BI translations are primarily used in QlikSense dashboards and are stored in the
following tables:

• vtBIItemToTranslate: Contains the items to be translated (ID and value to be translated)

• vtBITranslation: Contains the translations of the items to be translated, in the languages managed by the system

There are two categories of items to be translated:

• Standard system items: The import allows you to customize system translations.

• Custom items: Items are created by the user. Custom items can be created/modified/deleted.

For these two categories, translations can only be imported in the languages managed by the system; see below.

Translations can also be completed using a system report that makes it possible to customize the model for dashboard
master items; see Customize translations in the Customization of dashboards section of the Dashboards
Administration document.

This document also explains how to generate a file with the values to be translated and the existing translations in line
with the standard default format expected by the system; see Exporting items to be translated and translations in the
Customization of dashboards section of the Dashboards Administration document.

Note: If the model for dashboard master items is opened in development mode, the imported translations will be
rejected with an error.

Before importing translations, you must first indicate the folder that will contain the files to be imported. See Importing
system data to BI Architect.

Translations can be imported in text format (encoded in ANSI) or in Unicode format (encoded in UTF-16LE). We strongly
recommend that you use Unicode format, which is required if you want to import a language such as Simplified Chinese.

If data is in text format, the data file must be called vtBITranslation_ImportCharData.dat.

If data is in Unicode format, the data file must be called vtBITranslation_ImportWidecharData.dat.

The data file to be imported must be placed in the system folder you configured (normally
D:\vtNextDW\SystemDataLoad).

The default column and row separators are as follows:

• Column separator: horizontal tab (ASCII code 9 or \t)


• Row separator: carriage return and line feed (ASCII code 13 and 10 or \r\n)

File columns are displayed in the following order:

1 ValueToTranslate (SQLNVARCHAR, N characters - the maximum for SQL Server, Optional): Value of the item to be translated. This value is optional in combination with the ItemToTranslateIdSys field (see field 2 below). If the ItemToTranslateIdSys field is populated, the system uses it to find the item to be translated; otherwise it uses the ValueToTranslate field to identify the item. At least one of the two fields must therefore be completed, otherwise the row will be rejected on import.

The expected value of ValueToTranslate depends on whether the item is system or custom:

• If the item is system, the value must be in English and must exist in the vtBIItemToTranslate table, otherwise the row will be rejected.

• If the item is custom, the value to be translated can be defined in any language. However, we recommend defining all custom values to be translated in the same language and, if possible, in English like the system values. This is for consistency purposes but is not mandatory.

We recommend completing this field for custom values to be translated, especially as it is mandatory if the custom value does not exist and must be created. For system values, we recommend using the ItemToTranslateIdSys field only.

A custom value that does not exist will be created by the system, which will assign it an ID. This field is therefore used to populate the vtBIItemToTranslate table with custom items to be translated. Note: If you would like to modify the value to be translated for an existing custom item, you must complete the ItemToTranslateIdSys field.

If you import a custom value to be translated that already exists with the same system value, the custom value will still be created if it does not already exist. This means that the same value to be translated can appear in the BI database several times.

Note: The same value to be translated is repeated in several rows, one per translation language. For a custom item, if even a single character of the value to be translated is different, it will be considered a different item to be translated. This item will then be created or updated depending on whether the ID has been entered and whether it exists.

To view the existing values to be translated and potentially generate a default file, you must use the definition report for the dashboard master item model; see Exporting items to be translated and translations in the Customization of dashboards section of the Dashboards Administration document.

2 ItemToTranslateIdSys (SQLNVARCHAR, 64 characters, Optional): System ID of the item to be translated. This value is optional in combination with the ValueToTranslate field (see field 1 above). If the ItemToTranslateIdSys field is populated, the system uses it to find the item to be translated; otherwise it uses the ValueToTranslate field. At least one of the two fields must therefore be completed, otherwise the row will be rejected on import.

ItemToTranslateIdSys contains a system-generated ID. If you want to create new custom values, this field must be empty because if the item has not been created yet, its value is unknown. In this context, the ValueToTranslate field must be completed; we recommend completing only the ValueToTranslate field for custom items to be translated.

If this field is completed, it must exist in the vtBIItemToTranslate table, otherwise the row will be rejected.

We recommend completing this field for system items to be translated only, and leaving the ValueToTranslate field empty. For custom values, the opposite is advised; use only the ValueToTranslate field and leave the ItemToTranslateIdSys field empty (unless you want to modify the values of the items to be translated).

To view the existing values to be translated and potentially generate a default file, you must use the definition report for the dashboard master item model; see Exporting items to be translated and translations in the Customization of dashboards section of the Dashboards Administration document.

3 IsItemToTranslateCustom (SQLBIT, 0 or 1, Mandatory): Boolean that indicates whether the item to be translated is custom (1) or system (0). This field is required; if it is left blank, its value will be set to 0 (system) by default.

Note: If the item is declared as system but is actually custom in the BI database (or vice versa), the row will be rejected.

4 BILanguageLCIDIdSys (SQLINT, N, Mandatory): Number of the translation language (see field 5 below). Languages are defined by the system and the possible values are:

• 1033: English (U.S.)
• 1036: French
• 1040: Italian
• 2052: Simplified Chinese
• 2070: Portuguese
• 3082: Spanish

This field is mandatory. If it is not completed or the data entered is incorrect, the row will be rejected.

5 TranslatedValue (SQLNVARCHAR, N characters - the maximum for SQL Server, Optional): Translation of the item to be translated in the language of the BILanguageLCIDIdSys field.

If the value is empty:

• If the item to be translated is system: the custom translation is deleted for the language (the system translation will therefore be used again).

• If the item to be translated is custom: the translation is deleted for the language.

You can also specify that a translation is to be deleted with the IsDeletedTranslatedValue field; see field 6 below.

6 IsDeletedTranslatedValue (SQLBIT, 0 or 1, Optional): Boolean that indicates that the translation is to be deleted; it is equivalent to clearing the TranslatedValue field.

7 IsDeletedItemToTranslate (SQLBIT, 0 or 1, Optional): Boolean that indicates whether the item to be translated and all the existing translations related to this item in the database are to be deleted. This is independent of the BILanguageLCIDIdSys field present in the file; all the translations and the item itself will be deleted. This applies to custom items only: if 1 is specified for a system item, the row will be rejected.

You are not required to provide a format file. Only the .dat data file must be provided. If you want to ignore certain
columns, modify their order or modify column or row separators, you must provide the format file describing the data
file. In this case, the format file must have the same name as the data file, except that its name ends with Format instead of Data. Its extension must be XML if it is an XML file, or FMT if it is an FMT file.
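
For illustration, here is a minimal sketch of two rows of a vtBITranslation_ImportCharData.dat file using the default column order and separators. The columns are separated by a horizontal tab, shown here as <TAB> for readability; the values, translations and the system ID are hypothetical. The first row creates (or updates) a custom item with its French translation; the second row customizes the French translation of an existing system item identified by its system ID.

Total sales<TAB><TAB>1<TAB>1036<TAB>Ventes totales<TAB>0<TAB>0
<TAB>0A1B2C3D-EXAMPLE-ID<TAB>0<TAB>1036<TAB>Chiffre d'affaires<TAB>0<TAB>0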

You perform the import via a standard Transact-SQL BULK load configured with standard data import properties. To find
out more about this topic or about format files, see the relevant Microsoft documentation.

Once the .dat data file has been placed in the D:\vtNextDW\SystemDataLoad folder containing the files to be imported,
the file will be integrated when standard SSIS packages are next run.

Note:

• If there are errors, the file will remain in the folder containing the files to be imported. If there
are no errors, it will be archived in the configured archives folder.

• If there are integrity issues, the system will ignore the relevant rows or it will force the value
depending on the issue, and a warning will be generated.

• You can consult errors and warnings in the standard system reports for monitoring the loading of
data marts. To find out more, see System reports.

14. INTEGRITY RULES FOR ENTITIES
You can configure integrity rules for entities loaded to BI Architect. This configuration can only be performed for Y2 and
Orli sources and non-Cegid external sources.

Controls are run to check the integrity constraints of links between two entities. By default and except in specific cases,
if an entity linked to another entity does not exist in the source database (e.g. a product associated with a sale does not
exist), BI Architect will not generate any error when loading the sale. There are two possible outcomes:

• BI Architect will force the value of the link. Usually, the null value will be used. In our example of
a sale associated with a non-existing product, the sale will be associated with a null product.

• BI Architect generates non-existing members with the Awaiting creation status before creating
facts. These are specific cases where the field to be checked is part of the unique granularity of
the key and is mandatory, such as the current product inventory. To find out more, see
Dimensions archived or awaiting creation.

You can modify the behavior of BI Architect by configuring integrity rules for entities.

By default, standard data loads with Y2 do not have these controls enabled. You must enable them yourself. For Orli and
external sources, all rules are enabled by default with the warning rule. See below.

Note:

• Certain dimension entities can be automatically generated using facts. In this case, integrity
controls will be ignored for the relevant dimensions in these fact tables. To find out more, see
Dimensions archived or awaiting creation.

• These rules are generally used for external sources. For Y2 sources, as integrity rules are already
managed in the Y2 source database, database integrity is maintained.

• By default, the system will automatically generate an error for entity A if an error occurs when
loading an entity linked with entity A. For example, if there is an error when loading products,
there will automatically be an error for sales. This behavior is modified if integrity constraints for
the linked entity are configured to generate errors. If this is the case, then if there is an error for
the product entity, there will be no error for the sales entity. The sales entity will run an integrity
control on non-existing products and will only generate an error if products do not exist.

Warning: When performing an initialization using an external data source, we recommend that
you send all dimensions before sending facts. This is because if a dimension (such as product) is
rejected during import, the import of facts related to this dimension may generate several
thousands of integrity error messages during initialization.

• The configured integrity rules are run only if the update is performed in BI Architect. If you enable
the loading of the data mart without any update in BI Architect, integrity rules will not be run.

• Even if you configure integrity constraints with warnings, i.e. without generating an error for the
entity, the system can nevertheless generate a general error message to warn users. To find out
more, see below. The advantage of the general error message is that the loading of the entity is
not in error and is therefore not blocked. BI Architect can therefore continue being loaded. This
message is optional. You can configure it in the miscellaneous options of the source configuration
in the report called BI configuration setup found in the System/General settings folder in system
reports. By default, the general error message option is enabled for all sources from the multi-
database consolidation.

You configure integrity rules in the system dashboard called BI Entities integrity rules found in the System/General
settings folder. In this report, you must first select the data source and the entity to configure as shown below.

The configuration of integrity rules is managed by exception. It can be performed for the entity and for each entity field
proposed by the system. Fields will inherit the property defined for the entity. Click the Edit link to configure the entity.
The following window will appear:

In this window, you can configure the following:

• Integrity reject rule: Rule to be applied when integrity issues for the entity occur. The possible
values are as follows:

o (None): Integrity controls are not enabled for this entity or for its fields. In this case, the
system will apply the default rule that ignores integrity issues, e.g. by forcing the value to
null, and it will update the entity.

o Update with warning: Integrity controls are enabled for this entity. The system will check
the integrity of each entity field proposed by the system. If field integrity is maintained,
no message is generated. If field integrity is lost, then BI Architect will generate a warning
with a trace in the log. It will apply the default rule to ignore integrity issues, e.g. by forcing
the value to null, and it will update the entity. It can therefore continue its update without
any errors.

o Update with error: Integrity controls are enabled for this entity. The system will check
the integrity of each entity field proposed by the system. If field integrity is maintained,
no message is generated. If field integrity is lost, then BI Architect will generate an error
message and the whole entity will be in error. Reminder: If an entity is in error, it can
generate a sequence of errors in other linked entities. This may interrupt data loading for
the whole group of entities. In certain cases, a partial update can be performed, e.g. if the
error generated is due to sales payments, sales can be updated without payments. In this
case, the entity remains in error as long as the problem is not solved in the source
database. We therefore recommend that you enable this option only in specific cases,
e.g. for external source import.

• Apply rule at field level if not defined (if rule active): Boolean specifying whether the entity's rule
has an impact on all of its fields. If the value is False, integrity must be defined field by field for
this option. If the value is True, it may be inherited. To find out how to manage this algorithm by
exception, see the configuration of entity fields below.

You can also define integrity rules field by field for the entity. To do this, click the Edit link of the field you would like to
modify. The following window will appear:

In this window, you can configure the following:

• Is exception: Boolean specifying whether or not the integrity rule of the field will override the one
for the entity. If the value is True, integrity must be defined for the field (exception). If the value
is False, integrity may be inherited from the entity. To find out more, see the algorithm for
applying this rule below.

• Integrity reject rule: Rule applicable in the event of integrity issues for the field. See the
configuration of entities for the list of possible values and their meaning.

The algorithm for managing field integrity controls by exception is as follows:

• The entity of the field must have a value different from (None) in the Integrity reject rule field. If
the value is (None), no rule will be applied, regardless of whether or not this field was defined
with an exception.

• The rule is then applied based on the following algorithm:

o If the value of the Is exception property in the field is True, the system will apply the value
defined for the field.

o If the value of the Is exception property in the field is False, the system will apply the
value defined for the entity with two possible outcomes:

▪ If the value of the Apply rule at field level if not defined (if rule active) property
in the entity is True, the system will apply the rule defined for the entity to this
field.

▪ If the value of the Apply rule at field level if not defined (if rule active) property
in the entity is False, the system will not apply any integrity rule to this field.
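
The inheritance algorithm above can be summarized with the following illustrative Transact-SQL sketch. The variable names simply mirror the properties described above; they are not actual system objects or column names.

-- Illustrative only: how the effective integrity rule of a field is resolved
DECLARE @EntityIntegrityRejectRule NVARCHAR(50) = N'Update with warning'  -- (None), Update with warning or Update with error
DECLARE @EntityApplyRuleAtFieldLevel BIT = 1                               -- "Apply rule at field level if not defined (if rule active)"
DECLARE @FieldIsException BIT = 0                                          -- "Is exception" defined on the field
DECLARE @FieldIntegrityRejectRule NVARCHAR(50) = N'Update with error'      -- field-level rule, used only if the field is an exception

SELECT CASE
           WHEN @EntityIntegrityRejectRule = N'(None)' THEN N'No integrity control applied'
           WHEN @FieldIsException = 1                  THEN @FieldIntegrityRejectRule
           WHEN @EntityApplyRuleAtFieldLevel = 1       THEN @EntityIntegrityRejectRule
           ELSE N'No integrity control applied'
       END AS EffectiveFieldRule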
Note on the configuration of fields:
• All fields defined with exceptions appear highlighted in light yellow in the report with the word
(exception).

• If an exception was defined for a field and the value of the Integrity reject rule property in the
entity is (None), the integrity rule for the field and for all entity fields is disabled and as such, will
not be applied.

• Each field must have one or two types of control (usually one):

o Must exist: This means that there is a link to another entity and the control checks the
existence of the field in the source table (the most frequent scenario).

o Mandatory: This means that the field cannot be null. It is mandatory.

Users cannot modify the types of controls.

15. FILTERING AND PURGING DATA
You can filter Data Warehouse data. As filters are integrated retroactively, this can also be used to purge the data
already sent to the Data Warehouse.

Only Data Warehouse facts, not dimensions, can be filtered and purged.

Filters are used to exclude the data to be loaded or present in the BI solution.

Data that has already been sent and must now be excluded is not deleted directly. It is purged using a standard SSIS package that you must run. To find out
more, see Purging filtered data.

This section describes how you can manage filters for CBR, Orli and external sources other than Colombus. Data from
Colombus can also be filtered but the filters must be defined directly in Colombus. To find out more, see the document
on loading the Data Warehouse using Colombus.

You should run the system dashboard called BI Entities filters found in the System/General settings folder on the
Reporting Services portal. This dashboard also displays the current filters and their status.

General information on defining filters


• Select the source and the entity or filter type you want to view, create or modify from the list.
Note:

o Only standard active sources or inactive consolidated database sources are displayed. In
this case, you must have declared, at the very least, the consolidated database, the
database linked to the main database and the source type of the linked database.

o Entities that have source and target stores, e.g. the transfer chain, will display two rows
in the list of entities. For example, transfer notes will have one row called Transfer
delivery - Store filter for filtering stores that initiate the transfer and another row called
Transfer delivery - Recipient store filter for filtering stores that receive the transfer.

• Once you have made your selection, click View report. Based on the selection, if filters exist, they
will appear in a table.

o To add a new filter (except for Colombus), click Add Filter.

o To modify a filter (except for Colombus), click the Edit link for the relevant row.

o To delete a filter (except for Colombus), click the Delete link for the relevant row.

Entering the store codes to be filtered


• Filters are defined on store codes. The data associated with these stores will be filtered or deleted.
Warning: Store codes are not the IdApp. In a multi-database consolidation context with mapped
stores, the original code of the source store is filtered. If there is historical data with a mapping
for the store to be filtered, you must also filter the mapped target code.

• You can enter wildcards in store codes, e.g. % for a set of characters or _ for a single character. If you
enter %, all stores will be included. To see other possible characters, see the Transact-SQL
documentation. Warning: Wildcards are interpreted only if the wildcard option was set to True when the filter
was added.

• You can enter a non-existing store code.

• You cannot enter two rows using the same store code. However, you can enter two rows that cover
the same store code if wildcards are used. For example, you can define a filter that excludes
all stores (%) for one period and the M1 store code for another period. In this case, store M1
will be filtered twice for different periods. Filters are cumulative.

• Once you have validated the filter, you will not be able to modify the store code or the wildcard
option. If you need to modify the store code or the wildcard option, you must delete the filter and
create it again. See below.

Defining dates for excluding data:


• Data falling within the specified dates is filtered or deleted. Except in specific cases, the date is the transaction
entry date (TimeDateSys in BI Architect) associated with the filtered entity. For current stock,
dates are ignored.

• Dates can have null values. In this case, all data will be included.

o M1 store filter from 01/01/2012 to 12/31/2012: Data from 01/01/2012 to 12/31/2012


(inclusive) for store M1 will be filtered.

o Store filter % up to 12/31/2011 (start date not specified): Data earlier than 12/31/2011
(inclusive) for all stores will be filtered.

o Store filter M% from 01/01/2010 (end date not specified): Data later than 01/01/2010
(inclusive) for all stores starting with M will be filtered.

• Once you have defined the filter, you can modify dates but their modification will lead to the
consequences below. If the status is Awaiting activation, there are no consequences. To find out
more, see Status of filters.

o If the period for excluding data has been extended, there are no specific consequences
except that the purge process must be scheduled accordingly. To find out more, see
Purging filtered data. If the filter status is active, it will change to Activation in progress
so that data can be purged. To find out more, see Status of filters.

o If the period for excluding data has been reduced, an initialization will automatically be
scheduled for the entity so that the filtered elements can be sent to BI. The processing
time required for the initialization will depend on the data volume being filtered and on
the entity concerned. Warning: If the source is a linked consolidated database with file
exchanges, initialization is not automatically performed. In this case, if you want to send
filtered data, you must schedule initialization directly in the data source to send data to
the BI Architect consolidated database.

Deleting a filter row


You can delete a filter row. If you delete a filter row with a status other than Awaiting activation, an initialization will
automatically be scheduled for the entity so that the filtered elements can be sent to BI Architect. This does not apply if the
source is a linked consolidated database with file exchanges; in that case, you must schedule the initialization directly in the data source.

Characteristics specific to filters


• In a consolidation context where a cross-reference mapping such as Mapping IdApp is applied,
target store codes are tested and filtered. As a result, if store codes are mapped, the filter should
apply to the target store codes in the cross-reference mapping. However, depending on the
configuration of the mapping, non-mapped stores will be integrated with the initial store codes.
As such, you may also be required to filter the initial store code.

• When a filter is applied to a transaction entity with a header and rows (like a sales transaction), it
will filter all new transactions without any associated rows, e.g. no sales or payment row.
However, existing transactions without any associated rows will not be purged. To find out more,
see Purging filtered data.

• The Inventory transaction entity with Store filter is filtered using EffectStockRoomKey and
EffectTimeDateSys. It is not filtered using the StockRoomKey and TimeDateSys fields.

• If you want to filter all stores using %, only facts that have an associated store will be filtered. If
facts are associated with a null store, they cannot be deleted.
• Certain transactions such as supplier orders can have several different stores. In this case, only
facts for stores excluded from the transaction will be deleted. The transaction itself will only be
deleted when all stores for the transaction have been excluded.

• Applying filters does not improve loading time when the entity in question is being initialized. This
is because filters are applied after the data has been loaded from the source and not while the
source data is being read.

16. STATUS OF FILTERS
The status of filters determines the actions that can be performed and indicates whether a filter is active. The different
statuses are as follows:

• Awaiting activation: The filter has just been created but it is not active. It can be deleted or
modified without any effect on other elements. The filter will automatically become active and
its status will change to Activation in progress when source data is next loaded to BI Architect or
when data is next purged.

• Activation in progress: The filter is active but the data purge has not been run or is not completed.
BI Architect can still contain filtered data but no new filtered data will be integrated in the data
mart. If you delete or modify the filter, an initialization may be required for the entity in question.
The data purge must be run even if no data exists in the data mart. You do not need to run the
data purge immediately. It can be run at a later time depending on requirements. However, the
filtered data already present in BI Architect will only be purged when the purge is run.

• Active: The filter is active and the data purge has been run. To find out more, see Purging filtered
data. There is no more data from filtered entities from the source in the Data Warehouse.

Filtered data that is present in BI Architect is purged by a standard SSIS package shipped with standard BI Architect
connectors.

The standard SSIS package is called BI ARCHITECT DATA MART delete entity filter.

Cegid also provides an SQL job that runs this package called BI Architect data mart delete entities filtered. By default,
this job is not scheduled to run recurrently because this depends on customer requirements. The Other options section
in the system report called BI configuration setup is used to enable the automatic running of this package at the end of
the standard data mart loading. Warning: This processing may consume a significant amount of resources. You should
only enable the automatic running if necessary.

If you want the status of filters to be up to date, then you must run this processing once a filter is created or modified.
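
If you want to run this purge on demand, one straightforward option is to start the corresponding SQL Agent job from the BI Architect SQL Server instance, as in the sketch below. This assumes the default job name shipped by Cegid and that your login is allowed to start SQL Agent jobs.

-- Starts the SQL Agent job that runs the purge package for filtered data
EXECUTE msdb.dbo.sp_start_job @job_name = N'BI Architect data mart delete entities filtered'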

Note:

• Only transactions with one or more fact rows will be purged retroactively. For example, one sales
or payment row for a sales transaction. If the transaction does not have any fact row in BI
Architect, it will not be purged.

• Archived facts are never purged. To find out more, see Archiving facts.

• If you want to filter all stores using %, only facts that have an associated store will be filtered. If
facts are associated with a null store, they cannot be deleted.

• Certain transactions such as supplier orders can have several different stores. In this case, only
facts for stores excluded from the transaction will be deleted. The transaction itself will only be
deleted when all stores for the transaction have been excluded.

• The data purge can be run while data is being loaded. However, we strongly recommend that you
do not run it during data loading periods to avoid risks of conflict.

• If no data is purged, the job is completed very quickly. If there is data to be purged, the duration
of the job will depend on the data volume which can range from a few seconds to several hours.
If you need to purge a large amount of data in the BI Architect database (e.g. several tens of
millions of records), the processing can easily last several hours. You should therefore schedule it
to run during the weekend, for example.

• If the processing is unexpectedly aborted for whatever reason, you just need to run the job again.
There are no adverse effects, apart from the fact that not all data was purged.

• The system dashboard that manages filters can also be used to monitor the running of this process
and display error messages if any.

17. DATA PARTITIONS
To improve the processing and loading performance of BI repositories, the solution manages partitions in both the OLAP
cube and Qlik model. The aim of partitioning is to divide data into separate parts, each corresponding to a different
period of the data’s history. The shorter the history, the faster the partition can be processed. Depending on your
particular requirements and constraints, partitions must be configured to be loaded as fast as possible and you must
ensure that data is refreshed as frequently as possible.

Using partitions, data in a repository can be refreshed at intervals of less than a day, or even every hour, including for large databases.

Partitions in the OLAP cube are currently managed in the cube project. To modify/create new partitions in the cube, you
must modify the associated project. This section does not describe OLAP cube partitions.

The partitions in the Qlik model are dynamic and configurable from a BI Architect system report.

The configuration of Qlik partitions is described in the Dashboards Administration document.

18. ENTITIES ARCHIVED OR AWAITING CREATION
Most entities can be archived or have members that are awaiting creation. They are managed differently depending on
whether they are facts or dimensions.

When dimension members are deleted in a source, they are not automatically deleted in BI Architect. In this case, the
dimension member is archived. Its status will become ARCHIVED (EntityStatusIdApp field associated with the
vtEntityStatus table).

BI Architect can also automatically create non-existing dimension members in the source whose status will be
WAITINGCREATION (EntityStatusIdApp field in the vtEntityStatus table). This generation is done in the following
circumstances:

• In a multi-database consolidation context, an entity is imported with a mapping import and the
target member of the mapping does not exist in the BI database. In this case, the target member
will automatically be generated in the BI database. This applies to all dimensions that can be
mapped. To find out more, see the BI Architect Database consolidation document.

• A member does not exist in the source and a fact was sent with this member to BI Architect. In
this case, if the dimension is part of the unique granularity of the fact key and if the key does not
accept null values, the member will automatically be generated in the BI database. To ensure
optimal performance, the automatic generation of non-existing dimension members in sources
has been implemented only in facts with this constraint. Integrity rules apply for the other facts.
To find out more, see Integrity rules for entities.

The facts concerned are as follows:


o Outstanding stock
o Stock transactions
o Cost price
o Product base price (sale and purchase)
o Inventory closing
o Inventory snapshots
o Inventory

The dimensions concerned are as follows:


o Products
o Storage locations
o Customer price lists
o Supplier price lists
o Companies
o Stock quality

This situation is abnormal: the source has an integrity issue between the relevant facts and dimension
members in its database. The objective of BI is to provide a workaround solution to this integrity issue by integrating
facts with a uniqueness constraint for these dimensions while ensuring the integrity of keys in the BI database.
Members are therefore created in the BI database while awaiting their creation in the source.
Members will be created with null values in the BI database except for the following information:

• IdApp: IdApp of the source


• Member status: Awaiting creation (WAITINGCREATION in the EntityStatusIdApp field of the
vtEntityStatus table)
• Member code: IdApp of the source
• Member name: System creation from origin information, waiting update from source

For certain entities like products, other information can be automatically loaded.

Members awaiting creation cannot be archived as long as they are not active. However, they will be taken into account
as archived members in the archive deletion process. See below. Members awaiting creation can therefore be
automatically created by the system and deleted immediately afterwards if authorized by the constraints of the
database.
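
As an illustration, the following sketch shows one way to list the members of a dimension that are archived or awaiting creation. The vtEntityStatus table, the EntityStatusIdApp codes and the EntityStatusKey link are taken from this section; the product table and column names used below (vtCommonDataSchema.vtProduct, ProductCode, ProductName) are assumptions for the example only and should be adapted to the entity you want to check.

-- Illustrative sketch: list archived members and members awaiting creation for a dimension
SELECT
    Product.ProductCode,                    -- assumed column name
    Product.ProductName,                    -- assumed column name
    EntityStatus.EntityStatusIdApp          -- ARCHIVED or WAITINGCREATION
FROM vtCommonDataSchema.vtProduct AS Product                       -- assumed dimension table
    INNER JOIN vtCommonDataSchema.vtEntityStatus AS EntityStatus   -- assumed schema for vtEntityStatus
        ON EntityStatus.EntityStatusKey = Product.EntityStatusKey
WHERE EntityStatus.EntityStatusIdApp IN (N'ARCHIVED', N'WAITINGCREATION')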

Regardless of whether a dimension member is archived or awaiting creation, you can delete it. This applies only to
certain dimensions. To delete it, you must run the standard SSIS package called BI Architect data mart delete archive.
The SSIS package can be run manually or recurrently based on customer requirements. The SSIS package is deployed on
the SQL server containing the packages for loading data to BI Architect. An SQL job with the same name that runs this
package is also deployed on the BI Architect server.

Warning: This processing may consume a significant amount of resources. You should only enable recurrent processing
if necessary. We recommend that you run the package manually on demand.
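
To run it on demand, you can start the corresponding SQL job in the same way as the purge job for filtered data described earlier, assuming the default job name and sufficient SQL Agent rights:

-- Starts the SQL Agent job that runs the archive deletion package
EXECUTE msdb.dbo.sp_start_job @job_name = N'BI Architect data mart delete archive'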

Note:

• Not all dimensions that are archived or awaiting creation will be processed by the package. Here
is a comprehensive list of the dimensions processed by the package and whose members can be
permanently deleted:
a) BLOBs
b) Products
c) Packaging
d) Customers
e) Suppliers
f) Customer price lists
g) Supplier price lists
h) Salespeople
i) Companies
j) Storage locations
k) Sites
l) Cash registers
m) Loyalty cards
n) Loyalty plans
o) Loyalty campaigns

• Some dimension-related information (such as the cost price of products or inventory) is
automatically deleted if the associated member is deleted and the source of the information is
the same as the source of the entity, or if the source is system. However, other facts
(such as an existing sale) can block the deletion of the member; see below.

• If the entity member is mapped with another source or target member, the mapping will be
deleted if it was generated by the system. If the mapping was manually defined by users, it will
not be deleted.

• Integrity constraints in BI Architect may prevent deletion, e.g. if you try to delete a product that
has existing sales or stock on another source (stock on the same source is automatically deleted).
In this case, the product cannot be deleted and the process generates a warning. As a reminder,
logs are stored in the vtLoadRejectLog table and they can be displayed in the system report called
BI load rejects history found in \System\Monitoring\Advanced.

• When the processing is initially run, the processing time may be long because the system
generates indexes for entity keys in order to optimize the performance of integrity controls.

• In a Y2 multi-database consolidation context, where two BI Architect databases communicate via


file exchanges, you must proceed as described below:

o Run the standard processing for loading data in the source database.

o Ensure that pending exports have been integrated in the target database after the
processing.

o Run the processing for deleting archived entities in the source database.

o Run the processing for deleting archived entities in the target database.

When a fact (e.g. a sale) is deleted in the source database, it will automatically be deleted in BI Architect. The fact is not
archived by default. To archive facts, the source database must send an archive request to BI Architect. This is
automatically the case with Y2 when documents are archived in the Y2 HPIECE table. When facts are archived in BI
Architect, these facts, including their main and secondary tables, will not be deleted or updated. The status of facts
archived in BI Architect will become ARCHIVEDSOURCE (EntityStatusKey field associated with the vtEntityStatus table).

To stop archiving facts, you should send them to BI Architect with all of their secondary data from the source database
without an archive request. Below is a comprehensive list of facts that can be archived in BI Architect:

a) Customer orders
b) Customer deliveries
c) Customer sales
d) Supplier orders
e) Supplier receipts
f) Supplier invoices
g) Transfer requests
h) Transfer deliveries
i) Transfer receipts
j) Stock movements (inventory transactions)
k) Sales events (not managed in Y2)

When communicating with CBR, archives are automatically managed between Y2 and BI Architect.
When a document is archived in the Y2 HPIECE table, it will automatically be archived in BI Architect.
When a stock movement is archived in the Y2 HLIGNE table, it will automatically be archived in BI
Architect.

Note:

• Only facts present in BI Architect can be archived. If a request to archive a fact is sent to BI but
the fact is not present in BI, the fact will not be archived in BI. In Y2, facts can only be archived
after a period of two years. As such, they are usually already present in BI.

• If BI is enabled after facts are archived in Y2, then the Y2 archived facts will never be in BI.

• Unlike dimensions, you cannot delete archived facts directly in BI Architect. To delete archived
facts in BI Architect, you must resend them to BI Architect without an archive request and with a
deletion request. This is not a standard process in Y2.

• If filters are active in BI Architect and if these filters concern archived facts, the archived facts will
not be deleted. To find out more, see Filtering and purging data.

Warning: If these filters were active before facts were sent to BI Architect and archived, then
the facts will not be present in BI Architect. In this case, they will not be archived and they will
be lost forever.

• In a Y2 multi-database consolidation context, before you archive facts in the Y2 database, which
consolidates and communicates data to the remote BI Architect, you must ensure that the facts
to be archived have already been sent to the remote BI. If this is not the case, the facts will never
be sent to the remote BI.

19. GENERATING GEOGRAPHICAL COORDINATES (GPS)
BI Architect provides tools for managing the GPS coordinates of locations or entities in order to use maps. The
vtGeocode table contains the GPS coordinates associated with a location or entity. This table can be loaded in two
ways:

• Geographical coordinates can be generated using the stored procedure called


vtSystemProceduresSchema.vtGeocode_spGenerate

• Geographical coordinates can be imported. To find out more, see Importing the geographical
coordinates (GPS) of entities - vtGeocode table.

This chapter will describe the generation of geographical coordinates. To find out more about importing the table, see
the relevant chapter.

You can call the stored procedure for generating geographical coordinates from any application. It runs HTTP queries
requesting APIs to retrieve the GPS coordinates of a location. Internet access is required on the BI Architect server.
Note:

• To date, the system supports Google, MapQuest and Bing APIs.

• At present, the MapQuest API is free of charge while Google and Bing APIs are fee-paying. You
must provide a key for the API. You can obtain this key from the website of the provider, i.e.
Google, MapQuest or Bing.

• The use of APIs is generally limited. If the limit is exceeded, the stored procedure will
automatically stop loading data. In this case, you must run the stored procedure again the next
day to continue loading data. By default, the stored procedure does not reload values that have
already been loaded. See the @OnlyIfNoCoordinateOrStatusNotCorrect parameter below. If the
number of coordinates to be loaded for an entity exceeds the limits, you will need several days
or weeks to load all coordinates for a large entity. If this is the case, we recommend that you
import the GPS coordinates instead of generating them.

You can pass the limits as a parameter in the procedure. However, they must not exceed the
quotas authorized by the API. See below. Please consult the API provider's website to find out
more about quotas.

• HTTPS queries take a long time to run. Their use is restricted by the provider. You must therefore
not request the systematic loading of GPS coordinates that have already been loaded. To find out
more, see the @OnlyIfNoCoordinateOrStatusNotCorrect parameter below.

• As a general rule, the main addresses of entities are used when retrieving GPS coordinates. The
address will be truncated to 445 characters. This is because in SQL Server indexes, location values
can only contain 445 characters.

• BI Dashboards can automatically load the GPS coordinates of entities. Except in specific cases,
you are not required to do so if BI Dashboards is installed with this option. See the documentation
on setting up dashboards.

• Because the use of HTTPS queries is limited, we recommend that you import GPS coordinates
instead of generating them. To find out more, see Importing the geographical coordinates (GPS)
of entities - vtGeocode table.

The stored procedure is called by entity type. You must use the vtDbAdmin account to run it as shown below.

EXECUTE vtSystemProceduresSchema.vtGeocode_spGenerate
@OriginEntityKindIdSys = 1
,@SalesChannelIdSys = NULL
,@OnlyIfNoCoordinateOrStatusNotCorrect = 1
,@WithDisplayGenerate = 1
,@IsWorkAPI = 0
,@ServiceKindIdSys = 2 -- MapQuest
,@URLAPI = NULL
,@APIKey = 'YourKey'
,@MaxQueryPerSecond = NULL
,@MaxQuery = NULL
,@CredentialsDomain = 'DomainName'
,@CredentialsLogin = 'LoginDomain'
,@CredentialsPassword = 'PasswordLoginDomain'
,@ProxyServer = 'ProxyServerName'
,@ProxyServerPort = 80
,@ProxyIsBypassOnLocal = 1

The different settings are as follows:

• @OriginEntityKindIdSys (mandatory, integer): Type of entity for which coordinates must be


generated. The possible values are as follows:
▪ 1: Load coordinates for sites (main address in the vtSite table)

▪ 2: Load coordinates for storage locations (main address in the vtStockRoom table)

▪ 3: Load coordinates for suppliers (main address in the vtSupplier table)

▪ 4: Load coordinates for customers (main address in the vtCustomer table)

▪ 5: Load coordinates for countries (name in the vtCountry table)

▪ 6: Load coordinates for cities (name in the vtAddressCityPart table)

▪ 7: Load coordinates for regions or states (name in the vtAddressStatePart table)

▪ 8: Load coordinates for customer delivery addresses (delivery address in the vtCustomer table)

• @SalesChannelIdSys (optional, by default null, integer): Only for customers. If the value is 1, it loads
coordinates for retail customers. If the value is 2, it loads coordinates for trade customers. If the value
is null, it loads coordinates for all customers.

• @OnlyIfNoCoordinateOrStatusNotCorrect (mandatory, Boolean): This parameter indicates whether


coordinates should be loaded only if they do not exist or if the API returned an error for the last loading
for the entity. Warning: As HTTP queries take a long time to run, you must not request the systematic
loading of GPS coordinates. Except in specific cases, the value of this parameter should be 1. If you
change the value to 0 and if values were entered in the GPS coordinates report, the manually entered
values will be overwritten by the generated values.

• @WithDisplayGenerate (mandatory, Boolean): This parameter indicates whether the system should
display the progress of the loading. If the value is 1, the loaded values are displayed in SQL Server
Management Studio in the Messages tab. If the stored procedure is called using an external process,
the value must be 0.

• @IsWorkAPI (optional by default 0, Boolean): Do not use.

• @URLAPI (optional by default NULL, NVARCHAR): Do not use.

• @ServiceKindIdSys (optional by default 2, integer): API service to use. The possible values are as follows:

• 0 Google
• 2 MapQuest
• 3 Bing (only works if the version of SQL Server is 2012 or later)

Note:

• Services require a key. See below.

• To use the Bing service, the version of SQL Server must be 2012 or later.

• At present, the MapQuest API is free of charge while Google and Bing APIs are fee-paying.
Although MapQuest is the default service, we recommend that you use Google because of its
accuracy. Bing is less accurate than Google but more so than MapQuest. The Google search
algorithm is more effective than those used by Bing and MapQuest. If the address is not
sufficiently precise, or if it is incorrectly phrased or incomplete, MapQuest will return several
possible values or will not be able to locate the address whereas Google is capable of locating

the address in most cases. In this case, the GPS coordinates will not be retrieved by MapQuest
and the status of the address will be AMBIGUOUS_RESULT_MAPQUEST. To solve the
problem, you should correct or complete the address.

• @APIKey (mandatory, NVARCHAR): Authentication key assigned by the provider. This key is
mandatory. Log on to the provider's website to obtain a key.

• @MaxQueryPerSecond (optional by default 20, integer): Maximum number of HTTPS queries per
second. This depends on the quota of the service. By default, enter NULL to let the system decide.

• @MaxQuery (optional by default 10000, integer): Maximum number of HTTPS queries for calling a
procedure. This depends on the quota of the service. By default, enter NULL to let the system decide.

• @CredentialsDomain (optional, by default null, NVARCHAR): Domain of the Windows account used
to run the HTTPS query. If the value is not specified, the service account of the SQL Server instance,
VCSNEXT, will be used. The account running the HTTPS query must have adequate rights to access the
Internet on the customer network especially if a proxy is set up. To find out more, see the
@ProxyServer parameter below.

• @CredentialsLogin (optional, by default null, NVARCHAR): Login of the Windows account used to run
the HTTPS query. If the value is not specified, the service account of the SQL Server instance, VCSNEXT,
will be used. The account running the HTTPS query must have adequate rights to access the Internet
on the customer network especially if a proxy is set up. To find out more, see the @ProxyServer
parameter below.

• @CredentialsPassword (optional, by default null, NVARCHAR): Password of the Windows account


used to run the HTTP query. If the value is not specified, the service account of the SQL Server instance,
VCSNEXT, will be used. The account running the HTTPS query must have adequate rights to access the
Internet on the customer network especially if a proxy is set up.

• @ProxyServer (optional by default NULL, NVARCHAR): Proxy server to contact. You should specify a
URI, e.g. http://ProxyServerIP, or the name of the proxy server. You should specify this property when
the default proxy is not accessible.

• @ProxyServerPort (optional by default NULL, integer): Number of the proxy server port to contact.
See the property above. If it is 0, ignore the port.

• @ProxyIsBypassOnLocal (optional by default NULL, BIT): Boolean for ignoring the proxy server if it is a
local URL on the network. You generally specify 1.

Once the stored procedure is run for an entity, you can query the vtGeocode table to retrieve the GPS coordinates of a
location as shown. Our example shows site addresses whose GPS coordinates have been loaded using an API service.

SELECT
Site.SiteKey AS SiteKey,
Site.SiteCode AS SiteCode,
Site.SiteName AS SiteName,
Geocode.GeocodeHTTPStatusReturn AS SiteStatus,
Geocode.GeocodeHTTPErrorReturn AS SiteError,
Geocode.GeocodeLocationValue AS SiteAddress,
Geocode.GeocodeLatitude AS SiteLatitude,
Geocode.GeocodeLongitude AS SiteLongitude

FROM vtCommonDataSchema.vtSite AS Site


INNER JOIN vtCommonDataSchema.vtAddress AS Address
ON Address.AddressKey = Site.SiteMainAddressKey
INNER JOIN vtCommonDataSchema.vtGeocode AS Geocode
ON Geocode.GeocodeOriginAPIKindIdSys = 0 -- Service API
AND Geocode.GeocodeLocationValue =
SUBSTRING(Address.AddressCompleteIdentification,1,445)

Warning: The join for the value of the GPS location is made with the site address truncated to 445 characters. See
above. The join must also be made using the GeocodeOriginAPIKindIdSys field with a value of 0 because a given
location may have different GPS coordinates depending on the API used. The table is also used to store GPS coordinates
of entities separately from addresses. In this case, a query must be run using GeocodeOriginAPIKindIdSys with a value
of 1. If GPS coordinates are imported, the query will be different.

Note:

• Geographical coordinates can also be imported. A set of views is also provided to enable you to
query an entity's geographical coordinates managed by exception. To find out more, see Importing
the geographical coordinates (GPS) of entities - vtGeocode table. The two methods are not
incompatible and can be combined when managed by exception.

• A system report displays generated and non-generated GPS coordinates with the corresponding
reasons. To find out more, see Reports in the System/Functional settings/Geographical folder. This
report enables users to force the values by overwriting existing values with generated ones. Only
already existing values can be modified. Manually entered values will also be overwritten by
generated values if you force the regeneration of coordinates. To find out more, see the
@OnlyIfNoCoordinateOrStatusNotCorrect parameter above.
BI Architect provides a low-level stored procedure used to retrieve the GPS coordinates of any location independently of
the entities managed in BI Architect. This stored procedure is called
vtSystemProceduresSchema.vtToolsGetGeocodeCoordinates_spCall. The section above on APIs and their limits also
applies to this stored procedure.

Example of use:

DECLARE @_Status AS NVARCHAR(512)


DECLARE @_Latitude AS DECIMAL(38,20)=0
DECLARE @_Longitude AS DECIMAL(38,20)=0
DECLARE @_ErrorMessage AS NVARCHAR(4000)

EXECUTE vtSystemProceduresSchema.vtToolsGetGeocodeCoordinates_spCall
@ServiceKindIdSys = 2,
@UrlAPI = NULL,
@GeocodeKindIdSys = 1,
@APIKey = 'YourKey',
@CredentialsDomain = N'DOMAIN',
@CredentialsLogin = N'LOGIN',
@CredentialsPassword = N'PASSWORD',
@ProxyServer = 'ProxyServerName',
@ProxyServerPort = 80,
@ProxyIsBypassOnLocal = 1,
@CompleteAddress = N'25 Rue d''astorg, 75008 PARIS, FRANCE',
@Status = @_Status OUTPUT,
@Latitude = @_Latitude OUTPUT,
@Longitude = @_Longitude OUTPUT,
@ErrorMessage = @_ErrorMessage OUTPUT

SELECT @_Status,@_ErrorMessage,@_Latitude,@_Longitude

The different settings are as follows:

• @ServiceKindIdSys (optional by default 2, integer): Service to use. See above.

• @URLAPI (optional by default NULL, NVARCHAR): Do not use.

• @GeocodeKindIdSys (mandatory, integer): Used to indicate the type to be found to the system. The
possible values are as follows:
• 1 Address
• 2 City
• 3 Region or state
• 4 Country

• @APIKey (mandatory, NVARCHAR): Authentication key assigned by the provider. This key is
mandatory. Log on to the provider's website to obtain a key.

• @CredentialsDomain (mandatory, NVARCHAR): Domain of the Windows account used to run the
HTTP query. See above. If the value is null, the service account of the SQL Server instance, VCSNEXT,
will be used.

• @CredentialsLogin (mandatory, NVARCHAR): Login of the Windows account used to run the HTTP
query. See above. If the value is null, the service account of the SQL Server instance, VCSNEXT, will be
used.

• @CredentialsPassword (mandatory, NVARCHAR): Password of the Windows account used to run the
HTTP query. See above. If the value is null, the service account of the SQL Server instance, VCSNEXT,
will be used.

• @ProxyServer (optional by default NULL, NVARCHAR): Proxy server to contact. You should specify a
URI, e.g. http://ProxyServerIP, or the name of the proxy server. You should specify this property when
the default proxy is not accessible.

• @ProxyServerPort (optional by default NULL, integer): Number of the proxy server port to contact.
See the property above. If it is 0, ignore the port.

• @ProxyIsBypassOnLocal (optional by default NULL, BIT): Boolean for ignoring the proxy server if it is a
local URL on the network. You generally specify 1.

• @CompleteAddress (mandatory, NVARCHAR): Location whose GPS coordinates are required.

• @Status (mandatory, NVARCHAR, output parameter): Status returned by the API. To find out more
about the different statuses, please refer to the API provider website.

• @Latitude (mandatory, DECIMAL(38,20), output parameter): Latitude returned by the API if the
return status is correct. If not, 0.

• @Longitude (mandatory, DECIMAL(38,20), output parameter): Longitude returned by the API if the
return status is correct. If not, 0.

• @ErrorMessage (mandatory, NVARCHAR, output parameter): Error message returned by the stored
procedure if an error occurs. For example, if the account passed as a parameter does not have
adequate rights to access the Internet, the stored procedure will return an error.

20. FACT EXTRACTION VIEWS
To hide the complexity of certain management rules and facilitate querying the relational database, BI Architect
provides a set of views for extracting facts from the solution. These views implement most standard management rules
in BI linked to facts, namely:

• Managing output currency

• Comparability

• Loading the cost price

• Managing by exception certain analysis axes such as companies or sites

etc.

Views extract all records from the facts table without restriction. If necessary, filters can be set when using a view.

Only the main fact tables are associated with views; the coverage is not exhaustive. Each view is associated with at
least one fact table. A view returns at least all of the fields in the fact table. If the entity is a transaction (e.g. sales), the
view also returns all of the fields in the document (table with the Transaction suffix).

For example, the vtCustomerSalesDataSchema.vtCustomerSalesProductView view extracts data from the vtCustomerSalesDataSchema.vtFactsProductCustomerSales customer sales fact table and the vtCustomerSalesDataSchema.vtCustomerSalesTransaction customer sales document table.

The fields returned by the view have the same names as in the initial fact table (and in the initial document table for transactions), unless the field is not present in the tables mentioned. Views are used to extract facts
from BI repositories (OLAP cube and Qlik model).
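
As an illustration, the sketch below queries one of these views to aggregate a sales measure per day. The view name comes from the list that follows; the TimeDateSys and QuantityInvoiced column names are taken from descriptions elsewhere in this document and should be checked against the actual view definition before use.

-- Illustrative sketch: daily invoiced quantity read through a fact extraction view
SELECT
    CAST(Sales.TimeDateSys AS DATE) AS SalesDate,          -- transaction entry date (assumed to be exposed by the view)
    SUM(Sales.QuantityInvoiced)     AS TotalQuantityInvoiced
FROM vtCustomerSalesDataSchema.vtCustomerSalesProductView AS Sales
WHERE Sales.TimeDateSys >= '20240101'                       -- optional filter; the views themselves return all records
GROUP BY CAST(Sales.TimeDateSys AS DATE)
ORDER BY SalesDate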

The main views available are (non-exhaustive list):

• vtCustomerSalesDataSchema.vtCustomerOrderProductView: processes trade and retail customer product orders (vtFactsProductCustomerOrder and vtCustomerOrderTransaction)

• vtCustomerSalesDataSchema.vtCustomerDeliveryProductView: processes trade and retail customer product deliveries (vtFactsProductCustomerDelivery and vtCustomerDeliveryTransaction)

• vtCustomerSalesDataSchema.vtCustomerSalesProductView: processes trade and retail customer product sales (vtFactsProductCustomerSales and vtCustomerSalesTransaction)

• vtCustomerSalesDataSchema.vtCustomerSalesPaymentView: processes trade and retail customer sales payments (vtFactsPaymentCustomerSales and vtCustomerSalesTransaction)
• vtCustomerSalesDataSchema.vtCustomerSalesLineDiscountView: processes the details of trade and retail customer product discounts (vtFactsProductCustomerSalesLineDiscountDetailed and vtCustomerSalesTransaction)

• vtCustomerSalesDataSchema.vtStockRoomEventView: processes store events (vtFactsStockRoomEvent)

• vtCustomerSalesDataSchema.vtCashRegisterEventView: processes register events (vtFactsCashRegisterEvent)

• vtCustomerSalesDataSchema.vtSiteObjectiveView: processes store objectives (vtFactsSiteObjective)

• vtCustomerSalesDataSchema.vtCustomerFidelityView: processes store loyalty transactions (vtFactsCustomerFidelity)

• vtCustomerSalesDataSchema.vtSalesPersonPresenceView: processes salesperson store schedules (vtFactsSalesPersonPresence)

• vtPurchaseDataSchema.vtSupplierOrderProductView: processes supplier product orders (vtFactsProductSupplierOrder and vtSupplierOrderTransaction)

• vtPurchaseDataSchema.vtSupplierReceiptProductView: processes supplier product receipts (vtFactsProductSupplierReceipt and vtSupplierReceiptTransaction)

• vtPurchaseDataSchema.vtSupplierPurchaseProductView: processes supplier product invoices (vtFactsProductSupplierPurchase and vtSupplierPurchaseTransaction)

• vtStockDataSchema.vtTransferOrderProductView: processes product transfer requests (vtFactsProductTransferOrder and vtTransferOrderTransaction)

• vtStockDataSchema.vtTransferDeliveryProductView: processes product transfer deliveries (vtFactsProductTransferDelivery and vtTransferDeliveryTransaction)

• vtStockDataSchema.vtTransferReceiptProductView: processes product transfer receipts (vtFactsProductTransferReceipt and vtTransferReceiptTransaction)

• vtStockDataSchema.vtCurrentStockProductView: processes current product stock (vtFactsProductCurrentStock)

• vtStockDataSchema.vtStockHistoryProductView: processes product stock inputs/outputs (vtFactsProductStockHistory)

• vtStockDataSchema.vtProductStockTransactionView: processes product stock transactions (vtFactsProductStockTransaction)
• vtStockDataSchema.vtProductInventoryView: processes product inventories (vtFactsProductInventory and vtInventoryTransaction)

• vtStockDataSchema.vtSalesPriceView: processes product sales prices (vtFactsSalesPrice)

• vtStockDataSchema.vtPurchasePriceView: processes product purchase prices (vtFactsPurchasePrice)
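For instance, the sketch below extracts sales facts through the customer sales view and applies a simple filter (the date range used here is an arbitrary example; any field returned by the view can be filtered in the same way):

SELECT *
FROM vtCustomerSalesDataSchema.vtCustomerSalesProductView
WHERE TimeDateSys >= '20220101'
  AND TimeDateSys < '20220201'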

If the entity can be impacted by the management of comparable stores, such as sales (see Comparable stores), the view returns five instances of each measure field. For example, for the view processing sales, the sold quantity is returned in five columns:

• QuantityInvoiced: Returns the “normal” sold quantity

• QuantityInvoicedComparableDayYearN_N_1: Returns the sold quantity in year Y compared with year Y-1 for day/month/year calendar periods

• QuantityInvoicedComparableDayYearN_1_N: Returns the sold quantity in year Y-1 compared with year Y for day/month/year calendar periods

• QuantityInvoicedComparableDayWeekYearN_N_1: Returns the sold quantity in year Y compared with year Y-1 for week calendar periods

• QuantityInvoicedComparableDayWeekYearN_1_N: Returns the sold quantity in year Y-1 compared with year Y for week calendar periods

In addition, the view also returns comparability Booleans:

• SiteComparativeCalendarIsDayCalendarYearN_N_1: Boolean that indicates if the sales line at the TimeDateSys date of the sale in year Y is comparable with year Y-1 for day/month/year calendar periods

• SiteComparativeCalendarIsDayCalendarYearN_1_N: Boolean that indicates if the sales line at the TimeDateSys date of the sale in year Y-1 is comparable with year Y for day/month/year calendar periods

• SiteComparativeCalendarIsDayWeekYearN_N_1: Boolean that indicates if the sales line at the TimeDateSys date of the sale in year Y is comparable with year Y-1 for week calendar periods

• SiteComparativeCalendarIsDayWeekYearN_1_N: Boolean that indicates if the sales line at the TimeDateSys date of the sale in year Y-1 is comparable with year Y for week calendar periods

All views linked to managing comparable stores always return several instances of the fields with the same suffixes as
described above.

Views processing amounts and prices return these fields in the following three currencies:

• Consolidation currency

• Initial currency

• Output currency

For example, for the view processing sales, the tax-excl. sold amount is provided in three columns:

• AmountInvoicedExceptionOfTax: returns the tax-excl. sales amount in the output currency or the consolidation currency of the source, depending on whether the manage output currency option is enabled or not. Note: If the option is enabled and the conversion rate does not yet exist, the view returns the amount in the consolidation currency. To find out more, see the section entitled Output currency.

• AmountInvoicedExceptionOfTaxNotConverted: returns the tax-excl. sales amount in the initial currency of the sale.

• AmountInvoicedExceptionOfTaxConsolidatedCurrency: returns the tax-excl. sales amount in the consolidation currency of the source.

In addition, the view also returns the conversion rate to date from the initial currency to the output currency (the OutputCurrencyRate field).

If the view manages comparability (see Fields linked to managing comparable stores), each field is also returned for comparability. This means that each amount field is returned in at least 15 instances per view.

All views processing amounts and prices return fields in more than one instance with the same suffixes as described above (except for fields containing non-converted values).

Views linked to fact tables used to calculate valuations with cost prices (such as margins for sales or stock valuations) return valuations at the cost price based on the search rule for the cost price, which is configurable and described in the section Default company for cost price searches.

For example, the view processing sales returns the following elements:

• AmountCostToDate: returns the cost to date valued at the company cost price to date based on
the standard search rule.

Since this is an amount field, it will also be returned in multiple instances for comparability (see
Fields linked to managing comparable stores).

Since this is an amount field, it will also be returned in multiple instances for currencies (see
Fields linked to currencies). Note: however, the view does not return this field converted into
the initial currency.

• CostPriceHistory: returns the company cost price to date based on the standard search rule.

Since this is a price field, it will also be returned in multiple instances for currencies (see Fields
linked to currencies). Note: However, the view does not return this field converted into the
initial currency.

In addition, the view also returns the search company key of the cost price (generally the CostPriceCompanyKey field).

All views generating valuations at cost prices return the fields as described above.
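To illustrate how these different instances fit together, the sketch below selects a few of the column instances described in this chapter from the customer sales view (the exact list of available columns may vary depending on the source configuration; the date filter is an arbitrary example):

SELECT
    TimeDateSys,
    QuantityInvoiced,                                 -- "normal" sold quantity
    QuantityInvoicedComparableDayYearN_N_1,           -- year Y compared with year Y-1, day/month/year calendar
    SiteComparativeCalendarIsDayCalendarYearN_N_1,    -- comparability Boolean for the same periods
    AmountInvoicedExceptionOfTax,                     -- output (or consolidation) currency
    AmountInvoicedExceptionOfTaxNotConverted,         -- initial currency of the sale
    AmountInvoicedExceptionOfTaxConsolidatedCurrency, -- consolidation currency of the source
    OutputCurrencyRate,                               -- conversion rate to date
    AmountCostToDate,                                 -- valuation at the company cost price to date
    CostPriceHistory,                                 -- company cost price to date
    CostPriceCompanyKey                               -- search company key of the cost price
FROM vtCustomerSalesDataSchema.vtCustomerSalesProductView
WHERE TimeDateSys >= '20220101'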

21. JOBS FOR LOADING DATA MARTS
There are two BI jobs for loading data marts (jobs for loading the data marts include reloading the dashboards as an
option):

• The standard load job: BI ARCHITECT DATA MART load

On-premises: These are usually CustomNext BI OLAP - Dimensions And Cube and CustomNext
BI OLAP - Dimensions And Daily Partitions jobs which combine data mart loading with cube
processes and are used instead of the BI ARCHITECT DATA MART load job.

The standard job is mandatory; it performs all the mandatory and optional standard processes
in the solution.

• The fast load job: BI ARCHITECT DATA MART fast load

The fast job is optional and it does not have a scheduler by default. It loads some, but not all,
information very quickly. To find out more, see below. The objective is to use it in conjunction
with the standard load job.

The two jobs (standard and fast) can therefore be used in conjunction. However, they cannot be run at the same time. If this happens, a warning such as the one below would be generated for the two jobs (the text in italics is context-dependent):
Warning: New process "FAST LOAD (BI ARCHITECT DATA MART fast load) - 335D832F-06CB-41B4-B6F9-CEC1BF8A028F" has tried to start
while the job "BI ARCHITECT DATA MART load" is still running, unable to execute the new process while the blocking job is still running, the
new process stopped

In addition, the job that tries to start while the other job is running would generate an error such as the following (the text in italics is context-dependent):

CEGID ERROR REPORT-------------------------------


Error 50000, Severity 16, State 1, Procedure vtSystemProceduresSchema.vtDataWarehouseConfiguration_spUpdateSPID,
Line 129
Warning: New process "FAST LOAD (BI ARCHITECT DATA MART fast load) - 335D832F-06CB-41B4-B6F9-CEC1BF8A028F"
has tried to start while the job "BI ARCHITECT DATA MART load" is still running, unable to execute the new process while the
blocking job is still running, the new process stopped after 5 minutes

The standard load job is the main BI job: it performs all the mandatory and optional BI processes. This primarily includes, but is not limited to:

• Loading data from different sources (Y2, Orli or custom)

• BI exports if they are enabled

• Reloading the dashboard model and dashboard partitions (QVD generation)

• Reloading dashboard business applications (all dashboards) if they are enabled

• Any complementary standard and optional ancillary process, such as managing comparable stores, merging mappings, etc.

The standard job potentially performs a lot of tasks, so the run time may be significantly longer or shorter depending on
the data volume to be processed and the optional tasks it has to complete. In general, this job is run once a day (early in
the morning). However, the frequency at which it is run can be increased depending on the circumstances and the run
times observed. Outside of the initialization phases, the standard BI ARCHITECT DATA MART load job lasts between 15
and 45 minutes on average. In an on-premises setup, if the CustomNext BI OLAP - Dimensions and Daily partitions job
is used on a daily basis, the average time is between one and two hours. Even though you are authorized to increase the
frequency, this should only be done when strictly necessary. Running the job too frequently could adversely affect the
performance of BI servers and must be avoided.

Execution of the standard job (BI ARCHITECT DATA MART load or CustomNext BI OLAP - Dimensions And Daily
Partitions depending on the context), is mandatory and must be performed at least once a day.

Note: For a SaaS-based setup, the run frequency is subject to quotas. To find out more, see Configuring jobs for loading
data marts in SaaS.

In addition to the standard data mart load job, a BI ARCHITECT DATA MART fast load job is provided. Unlike the
standard job which must be run, the use of this job is optional.

The fast load job performs a minimal set of tasks, which are (this list is exhaustive):

• Loading modified data on the current date from the Y2 source only (the fast load only processes
documents with the current date). Note that not all entities are supported. However, the
following is a comprehensive list of the entities that are:

o Retail and trade sales (note: links and many-to-many values with linked documents such
as orders and/or deliveries, etc. are not processed).

o Store events (store traffic, weather, etc.)

o Register operations

o Salesperson schedule

• Updating comparative periods.


• Reloading the Store Performance dashboard if it is active (other business dashboards, the dashboard model and the dashboard partitions [QVD generation] are not reloaded).

Any other optional process, entity or dashboard that is not listed above is not supported by the data mart fast load job.

As it only performs a minimum number of tasks and only loads data from the day in question for certain entities, this job
is very quick to run. On average, it takes between 5 and 15 minutes, depending on the circumstances.

The purpose of this job is to have a high reloading frequency for sales data from Y2. Therefore, the frequency at which it
is run can be increased depending on the circumstances and the run times observed.

Note: For a SaaS-based setup, the run frequency is subject to quotas. To find out more, see Configuring jobs for loading
data marts in SaaS.

Note:

• The standard data mart load job must continue to be run at least once a day and at least once
before the fast load job. Because the fast load job does not perform all tasks and does not load
all entities, the rest of the entities and tasks must therefore be performed at least once a day with
the standard load job.

In general, we recommend setting up:


o One standard load job to take place early in the morning.

o Multiple fast load jobs after the first standard load job of the day. Fast loads may be run regularly at a specific time, such as every two hours during a specific time period for the day.

o One additional standard load job at the end of the day, after the regular fast loads.

Note: As the two jobs cannot run at the same time, separate time slots must be planned by
scheduling some leeway such as:

o Running the standard job once a day at 05:00.

o Running the fast job regularly at two-hour intervals between 07:00 and 22:00.

o Running the standard job once a day at 23:00.

• The fast load job only loads the standard Y2 source; other standard sources, such as Orli or custom
sources, are not included in the fast load job. Note: Only Y2 direct sources are supported. If the Y2
source is connected via file exchange such as for BI Hybrid, it will not be included in the fast load
job.

• The fast load job does not perform the optional entity integrity control, even if it is enabled. If you
load a sale but the sales products do not yet exist in the BI, the sale is created in the BI with these
products at NULL. This would happen if the products have not yet been processed by the standard
load job. The sale will be updated with the correct products at a later stage, i.e. as soon as the
next standard load job is run. Therefore, it is important to understand that entities processed by

the fast load job may be incomplete, which is why they are queued and are then reprocessed and
finalized during the next standard load.

• The fast load job does not perform BI upgrades that are supported by the standard load job; if a
BI update is pending, the fast load job stops with a warning.

• The fast load job does not perform scheduled initializations of the current entities that are
supported by the standard load job; the fast load job ignores initializations.

• The fast load job does not reload the dashboard model, the dashboard partitions (QVD generation) or the business dashboards, apart from the Store Performance dashboard. The partitions from the dashboard model (QVD generation) must therefore be loaded at least once a day and at least once before the fast load. In general, this is done with the standard data mart load job (see above).

• The fast load job stops with a warning in the following cases:

o A standard load job (or any other BI job) is running.

o In a Cegid Cloud setup, if the loading of the fast load dashboards (application and Store Performance task) is not active after the data marts have been loaded.

o If the standard load job has not been run at least once in full before the first fast load.

o If the partition process has not been run at least once in full before the first fast load.

o If an upgrade to a new BI version is pending and the standard load job has not yet been
run.

• A system report allows you to track fast loads. To do this, the Fast load process history report can
be found in the \System\Monitoring folder. In addition, if fast loads are performed, the BI Entities
status and BI Entities history load system reports display the number of fast loads performed for
the day/period with a link to the detailed report.

22. CONFIGURING JOBS FOR LOADING DATA MARTS IN SAAS
This chapter only applies to BI solutions deployed in the Cegid Cloud.

Standard load and fast load jobs for loading BI data marts can be configured in SaaS. To find out more, see Jobs for
loading data marts. To modify job configuration, use the BI jobs process setup system report found in the
\System\Monitoring\Advanced folder.

The report allows you to carry out the following configurations:

• Management of schedulers (creation/modification/deletion); in particular, this can be used to increase the frequency at which data marts and dashboards are loaded on a daily basis.

• Configuration of automatic error recovery at the job step.

Increasing the run frequency is subject to certain constraints and these depend on the load type: standard or fast.

The following constraints concern the standard BI ARCHITECT DATA MART load job. By default, a maximum of three daily standard loads can be
configured. To exceed this threshold, the system checks the authorized quota based on the following items calculated
for each day of the week:

• Standard loads recorded over a one-year period (if one year of history is not available, there must
be at least ten standard loads for the day D). Initializations are taken into account while loads with
an error are ignored.

• The median value of the full standard load time recorded in the history for each day of the week.

Based on these items, the system checks whether the authorized quota of 1.5 hours of processing (i.e. 5400 seconds)
per day of the week per tenant (customer) is exceeded according to the following values:

• If the median duration of the BI processes for the day D of the week is greater than or equal to
1350 seconds or if there are fewer than ten loads for that particular day over a full one-year
period, a maximum of three processes are allowed for the day D.

• If the median duration of the BI processes for the day D of the week is less than 1350 seconds: a
maximum of four processes are allowed for the day D.

• If the median duration of the BI processes for the day D of the week is less than 1080 seconds: a
maximum of five processes are allowed for the day D.

• If the median duration of the BI processes for the day D of the week is less than 900 seconds: a
maximum of six processes are allowed for the day D.
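For example, if the median full standard load time recorded for Mondays is 1,000 seconds, up to five standard loads can be scheduled on Mondays (5 × 1,000 = 5,000 seconds, which stays within the 5,400-second quota); with a median of 1,200 seconds, the limit falls back to four.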

Therefore, a maximum of six daily loads in SaaS are allowed for the standard data mart load job.

Note: If the maximum number is exceeded with a margin of +1 for the day (i.e. a maximum of seven standard loads for a
day), loading stops with a warning. In this case, the only way to run the job is to start it manually via the job execution
system report (the test is only performed if the job is run automatically).

Both the standard and fast load jobs are normally used in conjunction with one another; see Jobs for loading data marts.
The maximum possible number of reloads in SaaS for sales data and the Store Performance dashboard is therefore the
sum of both of these jobs, i.e. between 15 and 18 reloads for a day. For other entities and/or dashboards, the maximum
number of reloads depends on the frequency of the standard load job, i.e. no more than 3 to 6.

To calculate the number of current processes per day, the system takes into account the following items in the
schedulers:

• Inactive schedulers are ignored and appear with a light gray background in the report.

• Schedulers with one-time processes are ignored.

• Schedulers outside the processing period are included.

• Certain types of schedulers (not managed by the BI solution) are counted as a daily process even if they are not run on a daily basis. These are:
o Monthly frequency schedulers, which are recorded for each day
o Inactive processor schedulers, which are recorded for each day
o Schedulers that run a task every X days, which are recorded for each day

Please note that these types of schedules are not supported by the BI solution and therefore
cannot be configured via the report.

Note:

• In general, one or more schedulers will be configured with a regular, hourly run frequency during
a specific time period for the fast load job.

• If the quota is exceeded for one day of the week, no additional load jobs can be added for that
day.

• If at least two daily loads are enabled for a day of the week, the week/day/hour partition model
is automatically set up for Sense dashboards and it becomes mandatory in SaaS.

• The report does not support all of the scheduler types managed by the SQL Server. If the schedule
type is not supported, the scheduler appears in red in the report and cannot be modified.
However, it can be deleted.

The following constraints concern the BI ARCHITECT DATA MART fast load job. By default, this job does not contain any scheduler.

There are no constraints in relation to time credit for the data mart fast load, but the number of SaaS runs is limited; a
maximum of 12 fast loads can be configured per day.
Note: If the maximum number is exceeded with a margin of +1 for the day (i.e. a maximum of 13 fast loads for a day),
loading stops with a warning. In this case, the only way to run the job is to start it manually via the job execution system
report (the test is only performed if the job is run automatically).

Both the standard and fast load jobs are normally used in conjunction with one another; see Jobs for loading data marts.
The maximum possible number of reloads in SaaS for sales data and the Store Performance dashboard is therefore the
sum of both of these jobs, i.e. between 15 and 18 reloads for a day. For other entities and/or dashboards, the maximum
number of reloads depends on the frequency of the standard load job, i.e. no more than 3 to 6.

Note:

• In general, one or more schedulers will be configured with a regular, hourly run frequency during
a specific time period for the fast load job.

• If the quota is exceeded for one day of the week, no additional load jobs can be added for that
day.

• If at least two daily loads are enabled for a day of the week, the week/day/hour partition model
is automatically set up for Sense dashboards and it becomes mandatory in SaaS.

• The report does not support all of the scheduler types managed by the SQL Server. If the schedule
type is not supported, the scheduler appears in red in the report and cannot be modified.
However, it can be deleted.

• If fast loading of the dashboards (application and Store Performance task) is not active after the
data marts have been loaded, the job stops automatically with a warning. See BI ARCHITECT
DATA MART fast load job.

23. SCD (SLOWLY CHANGING DIMENSIONS) TYPE 2
This process is demanding in terms of resources when loading source data to BI Architect. Its implementation therefore
requires careful thought.

The objective of this concept is to enable the system to track the historical values of certain dimension attributes that
can change over time. For example, in certain distribution activities, a product collection can change over time. The
system must allow end users to see the relevant collection for a given product at a specific time. This implementation is
usually reserved for dimensions. However, BI Architect contains fact tables that can store "current" stock, base purchase
and selling prices that are not tracked historically. SCD Type 2 can also be applied to certain fact tables.

By default, BI Architect uses SCD Type 1 for dimension attributes or facts. This means that all table column values in the
BI Architect relational database or all dimension attribute members in the OLAP cube will systematically be overwritten
when an attribute value is modified in the production database. If you want to track historical data, you should apply
SCD Type 2 to each attribute.

Applying SCD Type 2 to attributes is therefore optional in BI Architect.

You can use stored procedures or a standard report for SCD implementation. To find out more, see System reports.

The chapter below describes the stored procedures that can be used to implement SCD Type 2 for attributes instead of
the standard report, and other functions related to cube management.

Warning: If the attribute belongs to a dimension, we recommend that you first initialize the dimension in BI Architect
before enabling SCD Type 2. To find out why, see Historical table structure for SCD Type 2 attributes.

BI Architect provides two stored procedures to enable or disable an SCD Type 2 attribute. You can also use the
Configure SCD attributes system report contained in the System/General settings folder.

To run the stored procedures, you must:

• Open SQL Server Management Studio.

• Log on to the VCSNEXT instance hosting the BI Architect relational database using the vtDbAdmin
account.

• Open the query editor for the BI Architect relational database (usually vtNextDW).

In the query editor:

To enable SCD Type 2 for an attribute, you must use the vtSystemProceduresSchema.vtSCDAttribute_spSetType2 stored procedure, which receives two input parameters:

• The name of the table

• The name of the column to be tracked

Example:

EXECUTE vtSystemProceduresSchema.vtSCDAttribute_spSetType2 'vtProduct','ProductProductCollectionKey'

When you enable SCD Type 2 for an attribute, SQL Server Management Studio will confirm that it is enabled and display
the list of enabled attributes in the Messages tab. For example:

SCD Type 2 activated for the attribute ProductProductCollectionKey, table vtProduct

List of activated attributes with SCD type 2:

Table: Product (VtProduct), Column: Product collection (ProductProductCollectionKey) is type 2

If a column or table does not exist or is not available for SCD Type 2 implementation, SQL Server Management Studio
will display the following message:

Attribute ProductProductCollectionKey for table vtProduct doesn't exist, SCD Type 2 is not activated

To disable SCD Type 2 for an attribute, you should use the vtSystemProceduresSchema.vtSCDAttribute_spDropType2
stored procedure that receives two input parameters:

• The name of the table

• The name of the column to be tracked

Example:

EXECUTE vtSystemProceduresSchema.vtSCDAttribute_spDropType2 'vtProduct','ProductProductCollectionKey'

When you disable SCD Type 2 for an attribute, SQL Server Management Studio will confirm that it is disabled and display
the list of remaining enabled attributes in the Messages tab. For example:

SCD Type 2 deactivated for the attribute ProductProductCollectionKey, table vtProduct

List of activated attributes with SCD type 2:

Table: Product (VtProduct), Column: Product theme (ProductProductThemeKey) is type 2

If a column or table does not exist or is not available for SCD Type 2 implementation, SQL Server Management Studio
will display the following message:

Attribute ProductProductCollectionKey for table vtProduct doesn't exist, SCD Type 2 is not deactivated

Note:
For certain tables, especially fact tables, all attributes are enabled or disabled at the same time. In this case, the name of
the attribute should contain the value '*'.
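For example, to enable SCD Type 2 for all supported columns of the current product stock fact table (listed below), you would run:

EXECUTE vtSystemProceduresSchema.vtSCDAttribute_spSetType2 'vtFactsProductCurrentStock','*'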

You can enable and disable the option at any time. Modifications are traced in the BI Architect system log.

SCD Type 2 implementation is not dynamic. Candidate attributes are limited in number and are selected by Cegid. Each
new dimension or attribute where SCD Type 2 is applied requires some modification in BI Architect. Below is a list of
attributes available:

'vtProduct' (product table):

• 'ProductProductCollectionKey' (product collection column)

• 'ProductProductThemeKey' (product theme column)

'vtUserFieldOfProduct' (product user field table, a secondary table of the vtProduct table):

• '*' (all columns)

'vtStockRoom' (store table):

• 'StockRoomArea' (store surface area column)

'vtFactsPackagingPurchasePrice' (fact table containing packaging base purchase prices and base price exceptions by
store):

• '*' (all purchase price and non-converted purchase price columns)

'vtFactsPackagingSalesPrice' (fact table containing packaging base selling prices and base price exceptions by store):

• '*' (all selling price and non-converted selling price columns)

'vtFactsProductPurchasePrice' (fact table containing product base purchase prices and base price exceptions by store):

• '*' (all purchase price and non-converted purchase price columns and base price exceptions by
store)

'vtFactsProductSalesPrice' (fact table containing product base selling prices and base price exceptions by store):

• '*' (all selling price and non-converted selling price columns)

vtFactsProductCurrentStock (fact table containing current stock for products):

• '*' (all columns)

Warning: If the data source is CBR, Orli or an external source, current stock is not tracked historically by BI Architect.
CBR already has historical stock tracking that you can load to BI Architect using stock snapshots in the data mart.

vtFactsPackagingCurrentStock (fact table containing current stock for packaging):

• '*' (all columns)

When you apply SCD Type 2 to an attribute, any modification made to its value is tracked in the BI Architect database.
To find out more, see Historical table structure for SCD Type 2 attributes.

In addition, BI Architect provides two other stored procedures for checking whether or not attributes are enabled.

vtSystemProceduresSchema.vtSCDAttribute_spListType:

This displays the list of attributes where you can apply SCD Type 2 and the value of the SCD type. It can receive an
integer as a parameter that indicates the SCD type (1 or 2). If the parameter is ignored, all attributes are displayed.
Example:

EXECUTE vtSystemProceduresSchema.vtSCDAttribute_spListType

vtSystemProceduresSchema.vtSCDAttribute_spPrintListType:

It has the same function as the previous stored procedure except that the list is displayed in the Messages tab in SQL
Server Management Studio. Example:

EXECUTE vtSystemProceduresSchema.vtSCDAttribute_spPrintListType

Because BI Architect can store fact tables linked to dimensions with different structures and non-time-based current
fact tables with differential updates, Cegid has come up with a method of facilitating SCD Type 2 implementation that is
different from the existing standard methods. Time-based attribute values are stored in separate tables from the initial
table.

Each BI Architect table with one or more columns where SCD Type 2 can be applied now has an additional table with the
same name followed by the AttributeHistory suffix, e.g. vtProductAttributeHistory for the vtProduct table. The
AttributeHistory table enables you to track modifications made to each column of the initial table where SCD Type 2 is
applied.

The objective of the table is to store and make the following data available:

• The internal key of the modified record. For example, in the vtProduct table, this is the product
internal key, therefore the field name with the ProductKey suffix.

• The validity start date of the record with no specific hour. A validity start date that is 01/01/1929
indicates that the record is valid from its creation. The field name is followed by the RowStartDate
suffix.

• The validity end date of the record with no specific time (mandatory). The field name is followed
by the RowEndDate suffix.

The table contains all SCD Type 2 attributes, each with a Boolean indicating whether the attribute is tracked historically from a given date. For each attribute or column, two fields are available:

• A field containing the time-based value, e.g. the field with the ProductCollectionKey suffix for
tracking modifications in the product collection column in the vtProduct table. This field is only
specified if the option is enabled for the attribute. Complex data such as product classes are
stored in pivot tables. Historical attributes are stored in a dedicated table and not in a single
column.

• A field with the Is prefix and Updated suffix, e.g. IsProductCollectionUpdated for the product
collection. This Boolean field indicates whether or not the column is tracked historically from this
date.

Note:
For fact tables (e.g. current stock, purchase and selling prices), all attributes are systematically tracked. Booleans do not
exist in the historical data table. The table only stores the history of modified values. The current value is stored directly
in the initial table. For example, the current product collection is stored in the vtProduct table and not in the
vtProductAttributeHistory table. For SCD Type 2 attributes, each modification will be stored in the AttributeHistory
table using the system date sent by the production database and not the BI Architect server date. This is because there
may be a significant difference between the time the modification is made and its integration in BI.

Warning: Dates are stored without the time. The time stored is always midnight. This means that if two modifications to
a given value are made in one day, the first will be stored. The second will be considered to be a data entry error and
will overwrite the current value in the initial table. If the system date in the production database is brought backwards
and if BI Architect attempts to update a modification earlier than the last one available for a given record, an error
message will be generated by the system.

When a new record is created for a dimension or in a fact table, its history is managed as follows:

The first record in the history is automatically created. Attribute values are created with null values. If the options are
enabled after creating the records in the dimension table or fact table, you will not be able to see the date on which the
records had a null value. This is important for current stock facts for example. In this case, the first attribute value will
be considered to be the initial value of the field when the option was enabled, regardless of the date prior to the first
modification. This is why the values of these fields are only relevant once SCD Type 2 has been enabled for an attribute.

We also recommend that you enable SCD Type 2 for a dimension attribute once you have initialized the dimension in BI
Architect. If this is not the case, all time-based attributes will be stored with a null value for the period prior to the
sending of the dimensions to BI Architect.

Rules for retrieving SCD Type 2 attributes


You want to retrieve the current collection of the product whose product key is 7.

You load ProductProductCollectionKey from the vtProduct table with the following record:

• ProductKey = 7

You want to retrieve the collection on 04/16/2007:

You search for the following record in the vtProductAttributeHistory table:

• ProductAttributeHistoryProductKey = 7
• And ProductAttributeHistoryRowStartDate <= 4/16/2007
• And ProductAttributeHistoryRowEndDate >= 4/16/2007
• And IsProductCollectionUpdated = 1

If the record exists, you load ProductAttributeHistoryProductCollectionKey. If not, you load ProductProductCollectionKey from the vtProduct table. This corresponds to the current product collection by default. Warning: A null value is considered
valid as long as the record exists and Boolean value IsProductCollectionUpdated = 1.
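Expressed as a T-SQL query, the rule above might look like the following sketch (the date literal and variable names are illustrative):

DECLARE @ProductKey BIGINT = 7,
        @AtDate DATE = '20070416'

SELECT
    CASE WHEN h.ProductAttributeHistoryProductKey IS NOT NULL
         THEN h.ProductAttributeHistoryProductCollectionKey  -- historical value (a NULL here is considered valid)
         ELSE p.ProductProductCollectionKey                  -- no historical record: current value by default
    END AS ProductCollectionKey
FROM vtProduct AS p
LEFT OUTER JOIN vtProductAttributeHistory AS h
    ON h.ProductAttributeHistoryProductKey = p.ProductKey
    AND h.ProductAttributeHistoryRowStartDate <= @AtDate
    AND h.ProductAttributeHistoryRowEndDate >= @AtDate
    AND h.IsProductCollectionUpdated = 1
WHERE p.ProductKey = @ProductKey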

This is an example of an algorithm. BI Architect provides all of the objects required for manipulating the
AttributeHistory tables easily. To find out more, see Functions and views for SCD Type 2 attributes.

To facilitate SCD Type 2 implementation, BI Architect provides a number of tools you can use to manipulate SCD Type 2
columns and tables.

There is a view available for each table with at least one column where SCD Type 2 can be applied. This is generally the
case except for certain fact tables such as those for base purchase and selling prices where there is no view. These
objects are usually manipulated using functions. The objective of this view is to summarize the current column values
for each record in the initial table as well as the time-based column values from the historical table.

The view has the same name as the initial table with the HistoryView suffix, e.g. vtProductHistoryView for the
vtProduct table. This view provides a summary of the vtProduct and vtProductAttributeHistory tables. In addition to all
of the columns in the initial table, the view also returns the following columns:

• A virtual key, ProductVirtualKey for products. This integer key corresponds to the actual product
key for current records and to the calculated virtual product key for time-based records. Warning:
This is a BIGINT key.

• An actual key, ProductActualKey for products. This integer key corresponds to the actual product
key.

• A Boolean indicates whether or not it is a current record, ProductIsCurrent for products.

• A date field containing the start validity date of the record, ProductRowStartDate for products.

• A date field containing the end validity date of the record, ProductRowEndDate for products.

All SCD Type 2 columns are returned. Depending on whether or not the value of the IsCurrent column is True, there may
be time-based values in addition to date fields indicating the validity periods of the columns.

Views from dimensions have additional columns. For each SCD Type 2 column, there is a value corresponding to the
time-based value (in current records, this column contains the current value) and a column containing the current value
(in current records, this column contains the same value as the time-based value). In our example, the view will return
[ProductProductCollectionKey] which corresponds to the time-based product collection and
[ProductProductCollectionKeyCurrent] which corresponds to the current product collection. If it is a current record, i.e.
ProductIsCurrent = 1, these two fields will be identical.

Below is an example of a query that displays product information, the time-based and current product collections and
themes:

SELECT
    [ProductVirtualKey],
    [ProductActualKey],
    [ProductIsCurrent],
    [ProductRowStartDate],
    [ProductRowEndDate],
    [ProductIdApp],
    [ProductCode],
    [ProductName],
    [ProductSkuCode],
    [ProductReference],
    [ProductAlias],
    [ProductProductCollectionKey],
    [ProductProductCollectionKeyCurrent],
    [ProductProductThemeKey],
    [ProductProductThemeKeyCurrent],
    [ProductBrandKey]
FROM [ProductHistoryView]

The available views are as follows:

• vtProductHistoryView (synonym ProductHistoryView, for the vtProduct table)

• vtStockRoomHistoryView (synonym StockRoomHistoryView, for the vtStockroom table)
• vtFactsProductCurrentStockHistoryView (synonym FactsProductCurrentStockHistoryView, for the vtFactsProductCurrentStock table)

• vtFactsPackagingCurrentStockHistoryView (synonym FactsPackagingCurrentStockHistoryView, for the vtFactsPackagingCurrentStock table)

In addition to these views, BI Architect provides one function for each table, except for certain fact tables such as prices.
The objective of this function is to return the virtual key of a table record with the same algorithm used by the view. The
function receives two parameters, the internal key of the corresponding record (actual key) and the date for the key. If
there is a null value for the date, the current actual key is returned. Below is an example of how this is used in the
vtProduct table:

SELECT
    vt.GetProductHistoryKey(ProductKey, TimeDateSys)
FROM FactsProductCustomerSales

The available functions are as follows:

• vtCommonDataSchema.vtGetProductHistoryKey (synonym vt.GetProductHistoryKey, for the vtProduct table)

• vtCommonDataSchema.vtGetStockRoomHistoryKey (synonym vt.GetStockRoomHistoryKey, for the vtStockRoom table)

Warning: These functions return BIGINT integers.

Lastly, for each SCD Type 2 column (except for certain fact tables), you can use a function that returns the time-based
value. The function receives two parameters, the internal key of the corresponding record (actual key) and the date for
the key. For facts, the number of keys corresponds to the analysis level of the table. If there is a null value for the date,
the current actual key is returned.

Below is an example of how this is used in the ProductProductCollectionKey column of the vtProduct table:

SELECT
    vt.GetProductCollectionHistoryKey(ProductKey, TimeDateSys)
FROM FactsProductCustomerSales

The available functions are as follows:


Product dimension:
• vtCommonDataSchema.vtGetProductCollectionHistoryKey (synonym
vt.GetProductCollectionHistoryKey, for the ProductProductCollectionKey column of the
vtProduct table)

• vtCommonDataSchema.vtGetProductThemeHistoryKey (synonym
vt.GetProductThemeHistoryKey, for the ProductProductThemeKey column of the vtProduct
table)

• vtCommonDataSchema.vtGetMainProductClassificationHistoryKey (synonym
vt.GetMainProductClassificationHistoryKey, for the ProductMainProductClassificationKey
column of the vtProduct table)

• Product classes have a specific structure and are managed using the
vt.GetClassificationClassesOfProducthistory table function. To find out more, see the following
KB article
http://equipe.vcstimeless.local/sites/datawarehouse/Lists/BI%20Architect%20Business%20rules/
DispForm.aspx?ID=53.

• Product user fields have a specific structure and are managed using the
vt.GetUserFieldOfProducthistory table function. To find out more, see the following KB article
http://equipe.vcstimeless.local/sites/datawarehouse/Lists/BI%20Architect%20Business%20rules/
DispForm.aspx?ID=55

Store dimension:
• vtCommonDataSchema.vtGetStockRoomAreaHistory (synonym vt.GetStockRoomAreaHistory,
for the StockRoomArea column of the vtStockRoom table)

• vtCommonDataSchema.vtGetMainStockRoomClassificationHistoryKey (synonym
vt.GetMainStockRoomClassificationHistoryKey, for the
StockRoomMainStockRoomClassificationKey column of the vtStockRoom table)

• Store classes have a specific structure and are managed using the
vt.GetClassificationClassesOfProductHistory table function. To find out more, see the following
KB article
http://equipe.vcstimeless.local/sites/datawarehouse/Lists/BI%20Architect%20Business%20rules/
DispForm.aspx?ID=53.

Product current stock facts:


• vtStockDataSchema.vtGetProductCurrentQuantityStockHistory (synonym vt.GetProductCurrentQuantityStockHistory, for the QuantityStock column of the vtFactsProductCurrentStock table). Warning: This function receives two keys as parameters, ProductKey and StockRoomKey, in addition to the date.
• vtStockDataSchema.vtGetProductCurrentQuantityStockPackagedHistory (synonym vt.GetProductCurrentQuantityStockPackagedHistory, for the QuantityStockPackaged column of the vtFactsProductCurrentStock table). Warning: This function receives two keys as parameters, ProductKey and StockRoomKey, in addition to the date.

• vtStockDataSchema.vtGetProductCurrentQuantityStockMinimumHistory (synonym vt.GetProductCurrentQuantityStockMinimumHistory, for the QuantityStockMinimum column of the vtFactsProductCurrentStock table). Warning: This function receives two keys as parameters, ProductKey and StockRoomKey, in addition to the date.

• vtStockDataSchema.vtGetProductCurrentQuantityStockMaximumHistory (synonym vt.GetProductCurrentQuantityStockMaximumHistory, for the QuantityStockMaximum column of the vtFactsProductCurrentStock table). Warning: This function receives two keys as parameters, ProductKey and StockRoomKey, in addition to the date.

Packaging current stock facts:


• vtStockDataSchema.vtGetPackagingCurrentQuantityStockHistory (synonym vt.GetPackagingCurrentQuantityStockHistory, for the QuantityStock column of the vtFactsPackagingCurrentStock table). Warning: This function receives two keys as parameters, PackagingKey and StockRoomKey, in addition to the date.

• vtStockDataSchema.vtGetPackagingCurrentQuantityStockPackagedHistory (synonym vt.GetPackagingCurrentQuantityStockPackagedHistory, for the QuantityStockPackaged column of the vtFactsPackagingCurrentStock table). Warning: This function receives two keys as parameters, PackagingKey and StockRoomKey, in addition to the date.

• vtStockDataSchema.vtGetPackagingCurrentQuantityStockMinimumHistory (synonym vt.GetPackagingCurrentQuantityStockMinimumHistory, for the QuantityStockMinimum column of the vtFactsPackagingCurrentStock table). Warning: This function receives two keys as parameters, PackagingKey and StockRoomKey, in addition to the date.

• vtStockDataSchema.vtGetPackagingCurrentQuantityStockMaximumHistory (synonym vt.GetPackagingCurrentQuantityStockMaximumHistory, for the QuantityStockMaximum column of the vtFactsPackagingCurrentStock table). Warning: This function receives two keys as parameters, PackagingKey and StockRoomKey, in addition to the date.

Base purchase and selling price facts:


Because base purchase and selling prices are managed by exception, only functions can be used. The available functions
are as follows:

• vt.PackagingPurchasePrice_BaseHistoryPriceConvertedByPriceCategoryIdApp

• vt.PackagingSalesPrice_BaseHistoryPriceConvertedByPriceCategoryIdApp

• vt.ProductPurchasePrice_BaseHistoryPriceConvertedByPriceCategoryIdApp

• vt.ProductSalesPrice_BaseHistoryPriceConvertedByPriceCategoryIdApp

Warning: The type returned by these functions depends on the type of the columns.

The main purpose of these views and functions is their use in the OLAP cube so users can retrieve time-based attribute
values. To find out more, see OLAP methods for managing SCD Type 2 attributes.

SCD Type 2 attributes can be implemented in the OLAP cube in different ways. The section below will describe two
methods that involve dimensions.

Note: Other methods can be used as the two methods presented here are not exclusive.

Warning: Regardless of the method, if you implement SCD Type 2 attributes in a cube that has semi-additive measures,
it will not work correctly if you run a query that contains an SCD Type 2 attribute. Reminder: Semi-additive measures
include time-based stock calculated by default in the cube and time-based stock valuations calculated using time-based
stock.

Example:

• Collection C1 is associated with product P1, P2 on 03/15/2010

• Collection C1 is associated with product P1, P3 on 03/20/2010

• In this scenario, the aggregation of attributes is used to calculate historical stock. If you want to
retrieve historical stock for collection C1 on 3/15/2010, P2 movements will not be taken into
account from 3/20/2010 because the collection is no longer associated with this product.
Moreover, product P3 would be partially taken into account even though it was not associated
with collection C1 on 3/15. In actual fact, you should extract the list of products associated with
the collection at the requested date and calculate the historical stock associated with these
products by drilling down to the lowest analysis level without taking aggregations into account.
The current system would lose its relevance.

In this case, you should do one of the following:

• Prohibit users from querying semi-additive measures using a scope that checks if SCD members
are different from [ALL].

• Set up a system to manage snapshots of these measures such as snapshots of historical stock.

• Have several attributes for different time-based values in the SCD Type 2 dimension. This solution
is not dynamic.

• Include the SCD Type 2 attribute in the query only for information purposes, without any filter or
subtotal.

• Model the attribute as a dimension (see method 1 below). You link the attribute to a single
measure group paired with the initial dimension with a single Boolean measure that defines the
attribute as one of the dimension members. This solution enables you to use the attribute with
filters but you cannot aggregate values of other measures for this attribute.

Method 1: Creating one dimension per SCD Type 2 attribute in the OLAP cube
We recommend this method which is quite simple to set up and maintain. The objective is to declare as many
dimensions as SCD Type 2 attributes. You then link the dimensions with the time-based values returned by the relevant
functions in measure groups where you want to aggregate values.

Our example is based on the Collection attribute of the Product dimension. You should proceed as described below:

• Create a named query in the DSV that returns all collections.

• Model the Collection dimension with at least an attribute with an internal key of the collection
from BI Architect.

• Use the vt.GetProductCollectionHistoryKey function in all named queries of fact tables that must
be joined to the dimension in order to retrieve the collection applicable during the transaction.
For current data, the date should be null.

• Join the dimension and fact tables in the cube dimension usage.

Repeat this procedure as many times as required for all attributes you want to declare as dimensions. The only
disadvantage of this solution is that you may have dimension attributes that are duplicates of dimensions. For example,
besides the Product collection attribute, there is also the Collection dimension. However, this is not important, except
for semi-additive measures, as you can delete redundant attributes.

Below is an example of a query run on the retail sales fact table that loads the historical collection and joins it to the
Collection dimension:

SELECT
    sales.FactsProductCustomerSalesKey,
    sales.StockRoomKey,
    sales.ProductKey,
    sales.TimeDateSys,
    Vt.GetProductCollectionHistoryKey(sales.ProductKey, sales.TimeDateSys) AS ProductCollectionKey,
    sales.QuantityInvoiced,
    sales.AmountInvoicedExceptionOfTax,
    sales.AmountInvoicedIncludingTax,
    sales.QuantityInvoiced * Vt.CostPriceCompany_HistoryConverted(sales.StockRoomKey, sales.ProductKey, sales.TimeDateSys) AS 'Amount cost to date',
    StockRoom.StockRoomCompanyKey,
    StockRoom.StockRoomSiteKey
FROM FactsProductCustomerSales AS sales
INNER JOIN StockRoom ON sales.StockRoomKey = StockRoom.StockRoomKey
WHERE sales.SalesChannelIdSys = 1

Below is an example of a query run on the current stock fact table that loads the current collection and joins it to the
Collection dimension:

SELECT
    Stock.StockKey,
    Stock.QuantityStock,
    Stock.StockRoomKey,
    Stock.ProductKey,
    Vt.GetProductCollectionHistoryKey(Stock.ProductKey, NULL) AS ProductCollectionKey,
    Stock.QuantityStock * Vt.CostPriceCompany_CurrentConverted(Stock.StockRoomKey, Stock.ProductKey) AS 'Amount current cost price stock'
FROM FactsProductCurrentStock AS Stock
INNER JOIN StockRoom ON Stock.StockRoomKey = StockRoom.StockRoomKey
WHERE Stock.QuantityStock <> 0

Method 2: Overloading the Product dimension in the OLAP cube


Although this method is presented here, we do not recommend it because it is difficult to set up and maintain. This
method requires the use of a view that returns virtual keys of dimension keys with attributes, e.g. Product dimension. If
you implement this method, you can no longer query the Product dimension directly or indirectly. You can only do so
using the view which will return virtual keys. See below. For example, if a measure group has non-additive measures
such as prices where products are cross-joined with time, you must replace the query and use a view instead.

Apart from this, this method has the advantage of presenting a global solution. The objective is to replace the initial
dimension with a dimension that summarizes current and time-based values. The new dimension will contain records
with actual keys corresponding to current values and records with virtual keys corresponding to time-based values. All
fact tables linked to the initial dimension must also use the appropriate function to return the virtual key for linking to
the dimension. This method allocates IDs from the production application to several identical members even though they should be unique. For example, in the Product dimension, a given SKU code may be allocated to several different members.

Our example will use the Product dimension. You should proceed as described below.

• In the OLAP cube, the named query of the Product dimension is exclusively based on the ProductHistoryView view. All columns from the Product table must be from this view. The named query of the Product dimension DSV must use this view as a master table. Joins with secondary tables will use the actual product key (ProductActualKey) instead of the virtual key (ProductVirtualKey). The Product dimension key, however, uses the virtual product key (ProductVirtualKey).

• All named queries of fact tables in the DSV to be joined to the Product dimension must now use the vt.GetProductHistoryKey function, which returns the virtual product key. The ProductKey field is no longer used for joins with the dimension in the cube dimension usage.

• Although this is not mandatory, for consistency reasons, we recommend that you also use this function with fact tables (e.g. outstanding data) by specifying a null value for the transaction date.

• If the named query of the fact table must retrieve an actual product value in the database such as the cost price, you should use the actual product key instead of the vt.GetProductHistoryKey function.

Below is an example of a named query for the Product and Stock dimensions:

SELECT
    Product.ProductVirtualKey,
    Product.ProductName,
    Product.ProductSkuCode,
    Product.ProductReference,
    Product.ProductProductCollectionKey,
    ProductCollection.ProductCollectionCode,
    ProductCollection.ProductCollectionName,
    ClassOfProduct.MainClassOfProductClass01Code,
    ClassOfProduct.MainClassOfProductClass01Key,
    ClassOfProduct.MainClassOfProductClass01Name
FROM ProductHistoryView AS Product
-- Join added so that the ProductCollection columns above resolve; the synonym and key names are assumed to follow the standard naming convention.
LEFT OUTER JOIN ProductCollection
    ON ProductCollection.ProductCollectionKey = Product.ProductProductCollectionKey
LEFT OUTER JOIN Vt.GetClassificationClassesOfProduct('1', 1, 2, 3, 4,
    5, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
    Vt.GetEnglishId(), Vt.GetFrenchId(), Vt.GetSpanishId(), Vt.GetItalianId(), Vt.GetPortugueseId()) AS ClassOfProduct
    ON ClassOfProduct.ProductKey = Product.ProductActualKey

SELECT
    StockRoom.StockRoomVirtualKey,
    StockRoom.StockRoomCode,
    StockRoom.StockRoomName,
    StockRoom.StockRoomCompanyKey,
    StockRoom.StockRoomSiteKey,
    StockRoom.StockRoomArea,
    ClassOfStockroom.MainClassOfStockroomClass01Key,
    ClassOfStockroom.MainClassOfStockroomClass01Code,
    ClassOfStockroom.MainClassOfStockroomClass01Name
FROM StockRoomHistoryView AS StockRoom
LEFT OUTER JOIN Vt.GetClassificationClassesOfStockroom('STO', 1, 2, 3, 4,
    5, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
    Vt.GetEnglishId(), Vt.GetFrenchId(), Vt.GetSpanishId(), Vt.GetItalianId(), Vt.GetPortugueseId()) AS ClassOfStockroom
    ON ClassOfStockroom.StockroomKey = StockRoom.StockRoomActualKey

Below is an example of a named query for the retail sales fact table to be joined with the Product and Stock dimensions:

SELECT
    Sales.SalesKey,
    Vt.GetStockRoomHistoryKey(Sales.StockRoomKey, Sales.TimeDateSys) AS StockRoomVirtualKey,
    Vt.GetProductHistoryKey(Sales.ProductKey, Sales.TimeDateSys) AS ProductVirtualKey,
    Sales.TimeDateSys,
    Sales.QuantityInvoiced,
    Sales.AmountInvoicedExceptionOfTax,
    Sales.AmountInvoicedIncludingTax,
    Sales.QuantityInvoiced * Vt.CostPriceCompany_HistoryConverted(Sales.StockRoomKey, Sales.ProductKey, Sales.TimeDateSys) AS 'Amount cost to date',
    StockRoom.StockRoomCompanyKey,
    StockRoom.StockRoomSiteKey
FROM FactsProductCustomerSales AS Sales
INNER JOIN StockRoom ON Sales.StockRoomKey = StockRoom.StockRoomKey
WHERE Sales.SalesChannelIdSys = 1

Below is an example of a named query for the current stock fact table to be joined with the Product and Stock
dimensions:

SELECT
    Stock.StockKey,
    Stock.QuantityStock,
    Vt.GetStockRoomHistoryKey(Stock.StockRoomKey, NULL) AS StockRoomVirtualKey,
    Vt.GetProductHistoryKey(Stock.ProductKey, NULL) AS ProductVirtualKey,
    Stock.QuantityStock * Vt.CostPriceCompany_CurrentConverted(Stock.StockRoomKey, Stock.ProductKey) AS 'Amount current cost price stock'
FROM FactsProductCurrentStock AS Stock
INNER JOIN StockRoom ON Stock.StockRoomKey = StockRoom.StockRoomKey
WHERE Stock.QuantityStock <> 0

24. READING BI ARCHITECT DIFFERENTIAL DATA
In certain configurations, the BI Architect database is used as an intermediate data mart to load other data consolidation data marts. In this context, you must be able to read differential data from the BI Architect database, based on data updated at intervals.

You can use different techniques for reading differential data. You can trace CUD operations (CREATE, UPDATE, DELETE)
for a table, you can read differential data in batches, or you can combine both techniques.

The least costly technique (as regards updates) is to read differential data in batches. We recommend that you do this
by default based on the available processing time. To find out more, see Reading differential data in batches.

If required, you can trace all operations in the tables by enabling the SQL Server Change Tracking option. To find out
more, see Enabling change tracking in SQL Server.

The SQL Server Change Tracking option is costly. We recommend that you do not enable it on a regular basis. The most
suitable intermediate solution would be to combine the two techniques depending on tables, data volume, scheduled
processing time and available resources.

All BI Architect tables (except for system tables) have the SystemLastUpdate field that contains the date and time (to
the nearest millisecond) of the last update made to table records. You can therefore use this field to read differential
table data. This field must be stored for each table imported into the target database.
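Below is a minimal sketch of such a batch read. The table is the sales example used later in this chapter, its schema is omitted, and the watermark value is an example that would normally be read from the target database rather than hard-coded:

-- Read only the rows changed since the last import, based on the stored SystemLastUpdate watermark.
DECLARE @LastImportedUpdate DATETIME2(3) = '2022-04-19 02:00:00.000'

SELECT *
FROM vtCustomerSalesTransaction
WHERE SystemLastUpdate > @LastImportedUpdate
ORDER BY SystemLastUpdate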

By default, this field is not found in any index. If you want to read differential data in large tables, to optimize
performance, Cegid offers you the possibility of creating or deleting an index for this field in each table where it is
found.

To create or delete an index for this field in a table, proceed as described below.

• Log on to the VCSNEXT instance on the BI Architect server using the vtDbAdmin account.

• Connect to the database you want, generally vtNextDW.

• Run the [VtAdminToolsSchema].[vtAddSystemLastUpdateIndex] stored procedure, passing the table name as a parameter. It creates a single-column index on the SystemLastUpdate field (sorted in ascending order). The index will be called TableName_AK_SystemLastUpdate. The example below shows a call:

EXECUTE [VtAdminToolsSchema].[vtAddSystemLastUpdateIndex]
@TableName = 'vtCustomerSalesTransaction'

When you run this stored procedure, it will create the vtCustomerSalesTransaction_AK_SystemLastUpdate index that
will only contain the SystemLastUpdate field sorted in ascending order. A message will warn you if the table does not
exist, if it does not contain the SystemLastUpdate field, or if it already has an index for this field.

If you want to delete an existing index for the field, you must run the
[VtAdminToolsSchema].[vtDropSystemLastUpdateIndex] stored procedure as shown.

EXECUTE [VtAdminToolsSchema].[vtDropSystemLastUpdateIndex]
@TableName = 'vtCustomerSalesTransaction'

A message will warn you if the table does not exist, if it does not contain the SystemLastUpdate field, or if it does not
have any index for this field.

If you want to create an index for this field in all BI Architect tables, you should specify an asterisk (*) instead of the
table name in the parameter in the [VtAdminToolsSchema].[vtAddSystemLastUpdateIndex] stored procedure.

Likewise, if you want to delete all indexes for this field in all BI Architect tables, you should specify an asterisk (*) instead
of the table name in the parameter in the [VtAdminToolsSchema].[vtDropSystemLastUpdateIndex] stored procedure.
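For example, the two calls below create the index in every table, and then drop it again from every table where it exists:

EXECUTE [VtAdminToolsSchema].[vtAddSystemLastUpdateIndex]
@TableName = '*'

EXECUTE [VtAdminToolsSchema].[vtDropSystemLastUpdateIndex]
@TableName = '*'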

To ensure overall performance and minimize data volume, we recommend that you avoid adding an index in all tables
unless it is absolutely necessary to do so.

The retrieval technique based on the last update date does not take into consideration entity records that may be
permanently deleted. If entity records are likely to be deleted in the BI data source, you may need to trace the deletion
of these records in order to identify them in the target database. To do this, you can use the vtCUDTableLog table in BI
Architect.

This table contains, on an optional basis, logs on the creation, update and deletion of table records as well as the update
of BI Architect table fields. These logs are optional. They are loaded using triggers that you must enable in BI Architect.

Users can enable the logs for the deletion, creation and update of entity records. The log for the update of table fields is
reserved for the system and, except in specific cases, cannot be enabled by users.

You can use a system report to enable the log for the deletion of a table in BI Architect:

• Log on to the Retail Intelligence Reporting Services portal.

• Select the System/General settings folder

• Click the Configure log tables report. The following screen will appear:

• If the table already exists in the list, you can click the Edit link to the right of the table. To add a
new deletion log, click the Add log to table link. If required, enter the user login and password.

• Specify the name of the table to be traced (without the schema).

• Select True for the Log delete option.

• Click Confirm add log table to validate the creation. The following screen will appear.

Note:

• Except where necessary, we recommend that you avoid logging large tables. You usually associate
one log to the main entity table only. If a sale is deleted, for example, all of the records for that
sale will be deleted in all tables linked to the entity. This is why one log associated with the main
table is sufficient for logging deleted records. The same applies to products, customers, etc.

To identify the main entity table and the tables associated with the entity, you should run a
query on metadata in the BI Architect database.

The query below enables you to identify all main entity tables sorted by entity:

SELECT
    BIEntity.BIEntityName AS 'BI Entity',
    TableOfBIEntity.TableOfBIEntityTableName AS 'Main table of BI Entity'
FROM TableOfBIEntity
INNER JOIN BIEntity ON BIEntity.BIEntityIdSys = TableOfBIEntity.TableOfBIEntityBIEntityIdSys
WHERE TableOfBIEntity.TableOfBIEntityIsMain = 1
ORDER BY BIEntity.BIEntityName

The query below extracts all tables by entity together with the main entity table:

SELECT
    BIEntity.BIEntityName AS 'BI Entity',
    (SELECT TOP(1) MainTableOfBIEntity.TableOfBIEntityTableName
     FROM TableOfBIEntity AS MainTableOfBIEntity
     WHERE MainTableOfBIEntity.TableOfBIEntityBIEntityIdSys = TableOfBIEntity.TableOfBIEntityBIEntityIdSys
       AND MainTableOfBIEntity.TableOfBIEntityIsMain = 1) AS 'Main table of BI Entity',
    TableOfBIEntity.TableOfBIEntityTableName AS 'Table of BI Entity'
FROM TableOfBIEntity
INNER JOIN BIEntity ON BIEntity.BIEntityIdSys = TableOfBIEntity.TableOfBIEntityBIEntityIdSys
WHERE COALESCE(TableOfBIEntity.TableOfBIEntityIsMain, 0) = 0
ORDER BY BIEntity.BIEntityName

• As mentioned earlier, only deletion logs can be enabled by users. Except in specific cases, the
other logs are reserved for the system. If you need to log all operations for a table, we recommend
that you use the SQL Server Change Tracking option. To find out more, see Enabling change
tracking in SQL Server.

• If you want to disable the log for a table or enable it again after it was disabled, you should return
to the main screen and click Edit.

• You can filter the tables displayed using the Table name (wildcards) filter in the report. This filter
accepts SQL Server wildcard characters.

Once you have enabled the deletion log, all deletions will be logged in the vtCUDTableLog table. This table contains the
following fields:

• CUDTableLogKey: Internal counter incremented by 1 with each log. This counter is never reset to
zero. If you browse through the table using the order of the counter, you can see all operations
logged in the correct order.

• CUDTableLogSystemDate: System date on which the log was created.

• CUDTableLogOperationDate: Software date from the data source of the last entity update
(DateOfLastChange suffix). If the entity does not contain this type of date, the value of the field
will be identical to the CUDTableLogSystemDate. In a deletion context, this date is not required.

• CUDTableUserLogin: Login of the SQL account user who performed the deletion.

• CUDTableUserName: Name of the SQL account user who performed the deletion.

• CUDTableLogCUDTableKey: ID key of the table associated with the log. Used to perform a join
with the vtCUDTable table containing the names of tables associated with a log.

• CUDTableLogOperationKind: Type of operation:

o C: For a creation (Insert SQL)

o U: For an update (Update SQL)

o D: For a deletion (Delete SQL)

• CUDTableLogRecordKey: If the table associated with the log contains a unique primary key
(generally the case in BI Architect), the field will contain the value of the key. Otherwise, it will
contain a null value. Note: If the primary key is a composite key made up of several columns, this
field will contain the value of the key of the main entity table if it is the main entity table, a table
directly linked to the main entity table, or the first key of the index.

• CUDTableLogRecordId: The value of this field depends on the type of table associated with the
log:

o If the table contains a unique IdApp index (this is the case for most main entity tables),
this field will contain the IdApp field value.

o If the table does not contain the IdApp field but is a table associated with the main entity
table that contains an IdApp field, this field will contain the IdApp field value from the
main entity table. Note: This rule applies if the table is directly linked to the main entity
table. If this is the case, the IdApp field is also stored in the CUDTableLogEntityIdApp
field. See below.

o If no IdApp field is detected by the system, this field will contain a concatenation of all
primary keys from the table associated with the log. Each primary key will be separated
by the pipe symbol (|).

• CUDTableLogEntityIdApp: This contains the IdApp field value from the main entity table if the
field exists and if it is the main entity table or a table directly linked to the main entity table.
Otherwise, the value of this field will be null.

• CUDTableLogCUDFieldKey: This contains the key for field update logs. Except in specific cases,
this log cannot be enabled directly by users. You can, however, enable it indirectly. To find out
more, see Managing customer data quality. This field is used to perform a join with the vtCUDField
table containing the names of fields associated with a log.

• CUDTableLogFieldOldValue: For field update logs, this contains the previous value stored as an
alphanumeric value.

• CUDTableLogFieldNewValue: For field update logs, this contains the new value stored as an
alphanumeric value.

• CUDTableLogFieldOldKey: For field update logs, this contains the previous value stored as an
integer if this is the type for the key.

• CUDTableLogFieldNewKey: For field update logs, this contains the new value stored as an integer
if this is the type for the key.

The vtCUDTableLog table contains several indexes from which you can extract information.

If you want to trace table deletions, you should avoid browsing through the entire log table. To do this, you can, for
example, use a table in the target database that will store the name of the logged BI Architect table and the key of the
last deletion log of that table. This value is found in CUDTableLogKey in the vtCUDTableLog table. Subsequently, you
simply need to run a query to read all table deletion logs whose value is greater than the key. In our example based on
the main sales entity table, vtCustomerSalesTransaction, LastCudTableLogKey is stored and retrieved in the target
database:

SELECT *
FROM CUDTableLog
INNER JOIN CUDTable ON CUDTable.CUDTableKey = CUDTableLog.CUDTableLogCUDTableKey
WHERE CUDTable.CUDTableName = 'vtCustomerSalesTransaction'
  AND CUDTableLog.CUDTableLogOperationKind = 'D' -- D for delete
  AND CUDTableLog.CUDTableLogKey > LastCudTableLogKey
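Below is a minimal sketch of the target-side bookkeeping described above. The watermark table, its columns and the example value are hypothetical and are not part of BI Architect:

-- Hypothetical watermark table in the target database: one row per logged BI Architect table,
-- storing the key of the last deletion log already processed.
CREATE TABLE dbo.BIDeletionLogWatermark
(
    CUDTableName NVARCHAR(128) NOT NULL PRIMARY KEY,
    LastCudTableLogKey BIGINT NOT NULL
)

-- After each differential run, store the highest CUDTableLogKey that was processed.
DECLARE @MaxProcessedKey BIGINT = 123456 -- example value returned by the last read

UPDATE dbo.BIDeletionLogWatermark
SET LastCudTableLogKey = @MaxProcessedKey
WHERE CUDTableName = 'vtCustomerSalesTransaction'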

SQL Server provides the Change Tracking option with its database engine. This is used to enable logs for tracing CUD
operations (CREATE, UPDATE, DELETE) in database table rows. There is also an option available for tracing operations to
fields called Change Data Capture, but its use is not authorized in the solution.

The Change Tracking option enables you to see all operations performed on a table and to read the data affected by
these operations.

To enable the Change Tracking option, run the relevant stored procedures using the vtDbAdmin login.

Because the Change Tracking option affects performance when upgrading BI Architect, we recommend that you read
differential data in batches by default and that you avoid enabling this option unless strictly necessary.

You must first enable the option in the database as shown.

EXECUTE VtAdminToolsSchema.vtDatabaseChangeTracking_SpEnable
@ChangeRetentionDays = 20,
@IsAutoCleanUp = 0

The @ChangeRetentionDays parameter defines the number of days information will be stored in the log in the
event of an automatic cleanup.

If the value of the @IsAutoCleanUp parameter is 1, the automatic cleanup is performed by SQL Server. If the value is
0, the cleanup must be performed manually using the system stored procedure,
sys.sp_flush_CT_internal_table_on_demand.

The stored procedure exists from SQL Server 2012 Service Pack 4 onwards and for SQL Server 2016 from Service Pack 1 onwards. To find out more, see https://support.microsoft.com/en-us/help/3173157/adds-a-stored-procedure-for-the-manual-cleanup-of-the-change-tracking.

Once the option is enabled in the database, it must be enabled for each of the relevant tables as shown.

EXECUTE VtAdminToolsSchema.vtTableChangeTracking_SpEnable
@TableName = 'vtProduct',
@IsTrackColumnsUpdated = 0

In our example, Change Tracking is enabled for the vtProduct table. The @IsTrackColumnsUpdated option is used
to store information on the modified columns if the value of the parameter is 1. We recommend that you avoid enabling
this option unless strictly necessary.

To read the affected rows, use the following command. Our example is based on the vtProduct table.

SELECT
*
FROM CHANGETABLE(CHANGES vtCommonDataSchema.vtProduct, 0) AS TableChange

If there is a creation or modification, you should add a join to the query for the vtProduct table using the table’s primary
keys, i.e. ProductKey, in order to retrieve the values of table fields. Deletions are also traced.
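A hedged sketch of such a join for the vtProduct table (SYS_CHANGE_OPERATION is a standard column returned by CHANGETABLE; deleted rows simply have no matching row in vtProduct):

SELECT
    TableChange.ProductKey,
    TableChange.SYS_CHANGE_OPERATION, -- I, U or D
    Product.*
FROM CHANGETABLE(CHANGES vtCommonDataSchema.vtProduct, 0) AS TableChange
LEFT OUTER JOIN vtCommonDataSchema.vtProduct AS Product
    ON Product.ProductKey = TableChange.ProductKey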

To disable the option for a table, run the stored procedure below.

EXECUTE VtAdminToolsSchema.vtTableChangeTracking_SpDisable
@TableName = 'vtProduct'

In our example, Change Tracking is disabled for the vtProduct table.

To disable the option for the entire database, run the stored procedure below.
EXECUTE VtAdminToolsSchema.vtDatabaseChangeTracking_SpDisable

Change tracking must first be disabled for all change-tracked tables before it can be disabled for the database.

To find out more about Change Tracking, see the relevant Microsoft documentation: https://docs.microsoft.com/en-us/sql/relational-databases/track-changes/about-change-tracking-sql-server?view=sql-server-ver15.

25. LOADING Y2 DATA WHILE THE SOURCE DATABASE IS ACTIVE
This chapter only examines source databases hosted in SQL Server.

If you want to run SSIS packages to load Y2 and/or .Next data while these application databases are active, i.e. during
database updates, you must enable an option in the source database.

Standard SSIS packages run queries to load data when the READ COMMITTED option is enabled. This is the default
option for reading data in SQL Server. This option issues shared locks that may generate conflicts. Conflicting access may
occur while Retail Intelligence records are being read if Y2 and/or .Next update the same records and vice versa. Retail
Intelligence only runs SELECT queries on the respective tables in both applications. However, the SELECT query can be
blocked if the table being read is also being updated by the production database.

To avoid conflict, you can enable the READ_COMMITTED_SNAPSHOT option in the SQL SERVER source database. To
find out whether or not the option is enabled, you should run the following query and replace DatabaseName with the
actual name of the database:

SELECT

name,

is_read_committed_snapshot_on

FROM sys.databases

WHERE name = 'DatabaseName'

To enable the option, you should run the following query in SQL Server and replace DatabaseName with the actual
name of the database:

USE [master]

GO

ALTER DATABASE [DatabaseName] SET READ_COMMITTED_SNAPSHOT ON

GO

To find out more about this option, please refer to the relevant Microsoft documentation. You can also see Cube
process and transaction isolation levels in the AS Administration document.

You can also restrict locks in addition to the READ_COMMITTED_SNAPSHOT option by reducing the table escalation
level. To find out more about the option called SET (LOCK_ESCALATION=DISABLE), see the relevant SQL Server
documentation.
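A sketch of this command, using a hypothetical table name (the actual tables to adjust depend on the source database):

-- Disable lock escalation for a table that is read by Retail Intelligence while it is being
-- updated by the production application (table name is an example only).
ALTER TABLE dbo.ExampleTableName SET (LOCK_ESCALATION = DISABLE)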

26. INITIALIZING THE IDENTIFICATION OF A BI ARCHITECT SOURCE
Note: The procedure described in this chapter must be run with caution and only if you are fully informed of all its implications.

BI Architect manages identification linked to the server/source database for each data source. If the source is a Y2
database, BI Architect manages a unique internal identifier for recognizing the source database.

If you replace one Y2 database with another Y2 database, or if you replace one BI database with another BI database,
the next loading of BI data from Y2 will be blocked with the following error message:

BI ARCHITECT internal ID from Y2 XXXXXXXXX is different from BI ARCHITECT internal ID YYYYYYYYYYYY registered in
BI ARCHITECT for source "Source name": BI ARCHITECT database is not the same database registered in Y2, unable to
load data (source identification "ZZZZZZZZZZ").

The purpose of this control is to verify the identification of sources each time BI is populated in order to
avoid configuration errors, for example merging two different source environments in the same BI
Architect database. If this error message appears, integrity will be lost between the BI database and its
source or several Y2 databases will be cumulated in a single BI database.

This control will generate a warning and block this type of configuration error. However, it is possible to
force and unblock loading if necessary by resetting the identifiers.

Please note: The administrator performing this operation must be aware of the problem, e.g. the cause of the problem,
if the source database must be unblocked to populate BI. The administrator must ensure the integrity of the BI database
after it is unblocked.

To reset the identification of a source, use the BI Configuration setup system report found in the \System\General
settings folder. Go to the source table at the bottom of the report and click on the "+" sign to the left of the source
name to display the identification; there is a link on the same row to force the identification.

Please note: If the Y2 database is replaced by a different Y2 database (e.g. a backup is restored), then an initialization to
reload all the data from the source database will be automatically programmed.

27. ASSIGNING THE CONTROL SERVER RIGHT TO AN ACCOUNT
In certain cases, you may be required to assign the Control Server role to an account. For example, some backup tools
require the account used to have the same rights as sysadmin.

Cegid provides a procedure for assigning this right. However, you should avoid doing this unless absolutely necessary.
This role gives users greater rights.

Only Windows logins can have this right. SQL logins are not authorized.

To assign the right to a user account, we have provided a standard external application, BIArchitectSetLogin.exe found
in \vt BINEXT\BI ARCHITECT\vt Admin. You should specify the following settings:

• Name of the server\instance hosting the database.


• Name of the database.
• Standard password for the vtDbAdmin account. Warning: Regardless of the account being
configured, you should always use the vtDbAdmin password.
• User account to which you want to assign the right.
• The SET or DROP keyword depending on whether you want to assign the right or remove the
right from the user account.

Our example is based on a local server and the vtNextDw database. The user account where the role must be assigned
is LoginName. To run this application, you must open a Command Prompt window, go to the vt Admin folder
mentioned earlier and enter the following type of command:

BIArchitectSetLogin.exe LOCALHOST\VCSNEXT vtNextDw PasswordOfVtDbAdmin LoginName SET

If the application runs correctly, it must display the following elements:

• Trying connection to database XXXX…


• Executing procedure, wait…
• Login XXXXXX is set to CONTROL SERVER
• Procedure ended

To remove the right, you should change the SET keyword to DROP as follows:

BIArchitectSetLogin.exe LOCALHOST\VCSNEXT vtNextDw PasswordOfVtDbAdmin LoginName DROP

Exceptionally, you can assign the sysadmin role to logins if the customer has a DBA. To do this, the vtAdministrator SID
in the SQL instance must be referenced by Cegid.

If this is the case, then the sysadmin role can be assigned with the SET_SYSADMIN keyword as shown.
BIArchitectSetLogin.exe LOCALHOST\VCSNEXT vtNextDw PasswordOfVtDbAdmin LoginName SET_SYSADMIN

28. MAKING THE STANDARD BI ARCHITECT DATABASE OPERATIONAL
Once you have restored the standard BI Architect database or once you have detached and reattached the standard BI
Architect database, you must run a standard external application to ensure the database is operational. To find out
more, see Ensure an operational database in the DB Administration document.

29. DISABLING BI TRIGGERS WHEN UPGRADING Y2
Please note: The procedure described below must be performed with due care and only during a Y2 major version
upgrade or when reloading the Y2 database using a dump or load. This should only be done if the CBR database is large
(exceeding 50 GB). If this is not the case, you must not apply this procedure.

To communicate differentially with Y2, BI Architect implements triggers in certain Y2 tables which are run during CUD
operations (CREATE (INSERT), UPDATE, DELETE). During a major CBR version upgrade, certain large Y2 tables may be
modified by one of the operations above and this may run the triggers associated with the tables. If all of the rows in
large tables are affected, this may significantly increase the time required for the Y2 upgrade.

In Oracle, you can perform mass data imports using Data Pump Import (impdp) or Original Import (imp) and disable the
triggers. We recommend the use of impdp. Triggers are not run if the target tables do not exist. If they exist, then the
existing triggers will be run. To avoid this problem, you should ensure that the target tables do not exist when you
import data.

If you use the impdp command, you can also disable triggers using the TABLE_EXISTS_ACTION=REPLACE option. Warning: You should use this only when replacing data. To find out more, see the following Oracle article: http://docs.oracle.com/cd/E11882_01/appdev.112/e17126/triggers.htm#i1007097.

In any case, you can ask BI Architect to ignore the disabling of triggers when performing the following
operations.

To avoid slowing down the Y2 version upgrade or any dump or load operation, the database administrator in charge of
the Y2 database must proceed as described below:

• Prior to the Y2 upgrade, disable all triggers in the Y2 database. The triggers reserved for BI start
with ZBI_.

• Upgrade the Y2 database once the triggers have been disabled.

• Once the upgrade is complete, enable all the triggers again. Warning: The triggers must be
enabled immediately after the Y2 upgrade and before the database is made available online for
users. If this is not the case, data will not be sent to BI.

• Once the triggers have been enabled again, you must run the following stored procedure in BI
Architect.

o Open SQL Server Management Studio on the server hosting BI Architect.

o Log on to the VCSNEXT instance on the BI Architect server using the vtDbAdmin account.

o Select the relevant BI database, generally vtNextDW, and click New Query.

o Run the following stored procedure by specifying the data source to reinitialize. Our
example is based on a standard Y2 source database.

EXECUTE [VtAdminToolsSchema].[vtInitializeSourceCUDObject]
@SourceIdSys = 5

Note: In a multi-database consolidation context, Y2 external sources with an OLE DB direct link have a dynamic ID. If you want to initialize an external source and see its ID, you should run a SELECT * query on the vtCommonDataSchema.vtSource table to return the ID, as shown below.
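The query simply lists the registered sources together with their IDs; the source ID to pass as @SourceIdSys can then be read from the result:

SELECT *
FROM vtCommonDataSchema.vtSource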

30. FORCING THE END OF THE DATA SOURCE LOADING PROCESS
If the data loading process using SSIS packages or Colombus is unexpectedly aborted, certain properties will no longer
be modifiable in the configuration tool of the solution.

In addition, you cannot run the data loading process for data from Colombus, Y2 and/or an external source at the same
time to the same BI Architect database. This is because the data loading processes are mutually exclusive. You must
ensure that one data loading process (Colombus agent or SSIS package) is completed before you run another. If a data
loading process is blocking another and it is not actually running although its status indicates that it is in progress, this
means that it was unexpectedly aborted before completion. If this is the case, you should run this data loading process
again and wait until it is completed before running the other data loading process. The BI entities status system report
displays the status of the different sources and their data loading process.

If you want to unblock the situation without running the unexpectedly aborted data loading process again, you can
force it to end. To do this, you should run a standard stored procedure that will free up the process currently in
progress.

Warning: The procedure described below must be performed with all due care as you must ensure that the data loading
process is really not running.

Log on to the VCSNEXT instance on the BI Architect server using the vtDbAdmin account. Connect to the BI Architect
database and run the following stored procedure:

EXECUTE [vtSystemProceduresSchema].[vtSource_spForceEndLoading]
@SourceIdSys = SourceNumber

SourceNumber depends on the source for which you would like to end the data loading process. If it is Colombus,
specify 1. If it is Y2 or another external source, you must run the BI configuration system report to see the list of active
sources with the number of the source in brackets.

31. FORCING THE END OF THE QLIK DATA LOADING PROCESS
If the Qlik data loading process is unexpectedly aborted, certain properties will no longer be modifiable in the
configuration tool of the solution.

If you want to unblock the situation without running the unexpectedly aborted Qlik data loading process again, you can
force it to end. To do this, you should run a standard stored procedure that will free up the process currently in
progress.

Warning: The procedure described below must be performed with all due care as you must ensure that the Qlik data
loading process is not running.

Log on to the standard instance on the BI Architect server using the vtDbAdmin account. Connect to the BI Architect
database and run the following stored procedure:

EXECUTE [vtSystemProceduresSchema].[vtBIRepository_spForceEndLoading]
@BIRepositoryIdSys = RepositoryNumber

RepositoryNumber depends on the data repository:

• 1: QlikView

• 3: QlikSense

32. CONFIGURING THE FIREWALL TO ALLOW ACCESS TO THE BI ARCHITECT SERVER
By default, we recommend that you disable the firewall on the BI Architect server. If, for security reasons, you need to
enable the firewall, then you must ensure that BI Architect server ports are open to allow access to the database engine
in the VCSNEXT instance or default instance and to the cube from another server.

To configure the firewall and ports to allow access to an SQL Server instance, please refer to the Microsoft documentation: https://msdn.microsoft.com/fr-fr/library/ms175043(v=sql.90).aspx.

Warning: If this is the case, to access the server protected by the firewall, you should indicate the port as described
below.

• To access the VCSNEXT instance: ServerName\VCSNEXT,PortNumber

• To access the OLAP cube: ServerName:PortNumber if the cube is not in the default instance
ServerName:PortNumber/Instance

To find out if you can access the instance and the database you want, you can use a UDL file.

• Create a file called Check.UDL on the Windows Desktop.

• Double-click to open the file properties and specify the following:

o Provider tab: Select the SQL Native Client or Microsoft OLE DB Provider for SQL
Server driver to test the connection to the SQL Server database. Select the Microsoft OLE
DB Provider for Analysis Services X.XX driver to test the connection to the OLAP cube.

o Connection tab: Specify the server/instance, the account and database whose connection
you want to test.

o Click Test Connection.

33. SQL AGENT: SENDING EMAILS ON JOB COMPLETION
This chapter explains how to configure SQL agent jobs so that emails are sent automatically if jobs are completed with
errors or successfully.

Our example shows how to send an email to an administrator if an error occurs. We will assume that there is an SMTP
server.

The first step consists of enabling the email system for the SQL Server instance hosting the SQL agent jobs. This is
generally the BI Architect default instance. The screenshots below are taken from SQL Server 2008 R2.

• Log on to the default instance using an account with the sysadmin role, e.g. BINextServices.

• Expand the Management folder. Right-click Database Mail and select Configure Database Mail
from the contextual menu. The following window will appear.

Click Next.

Enter a name similar to SQL Agent in the Profile name field. Click Add. The following window
will appear.

You must specify at least the following in this window:


o Account name: You can enter any name you want, e.g. SQL Agent.
o E-mail address: Enter the email account that will send messages for this profile. This
account must exist on your SMTP server.
o Server name: Enter the name of your SMTP server.
o SMTP Authentication: Specify the authentication method by selecting Windows
authentication or Anonymous authentication. Click OK. The following window will
appear.

Click Next.

Tick the Public box for the profile in the Public Profiles tab and click Next.

Click Next.

Click Finish and then click Close.

You can run a test by right-clicking Database Mail and selecting Send Test E-mail from the
contextual menu. Enter the recipient email address and send the test email. If the configuration
is correct, the recipient will receive an email. If not, you should check the configuration.
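If you prefer scripting to the wizard, the same profile and account can also be created with the msdb Database Mail procedures. The sketch below is indicative only; the sender address and SMTP server name are hypothetical and must be adapted to your environment:

EXECUTE msdb.dbo.sysmail_add_profile_sp
    @profile_name = N'SQL Agent'

EXECUTE msdb.dbo.sysmail_add_account_sp
    @account_name = N'SQL Agent',
    @email_address = N'sqlagent@yourdomain.example', -- hypothetical sender address
    @mailserver_name = N'smtp.yourdomain.example' -- hypothetical SMTP server

EXECUTE msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = N'SQL Agent',
    @account_name = N'SQL Agent',
    @sequence_number = 1

-- Make the profile public, as done in the Public Profiles tab of the wizard.
EXECUTE msdb.dbo.sysmail_add_principalprofile_sp
    @profile_name = N'SQL Agent',
    @principal_name = N'public',
    @is_default = 1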

The second step consists of creating an operator. This is the user who will receive the emails. To
do this, expand the SQL Server Agent folder. Right-click Operators and select New Operator
from the contextual menu. The following window will appear.
In this window, you must specify at least the name, e.g. SQL Agent Administrator, and the
recipient email address. Click OK.

The third step consists of enabling the email system in the SQL agent properties. To do this,
right-click the SQL Agent component of the default instance and select Properties from the
contextual menu. The following window will appear.

In this window:

• Tick the Enable mail profile box.

• Select Database Mail from the Mail system drop-down list.

• Select the mail profile you just created, e.g. SQL Agent, from the Mail profile drop-down
list.

Click OK and restart the SQL agent service.

Once you have restarted the SQL agent service, you can use the email system to send emails
automatically once jobs are completed. Our example shows how to send an automatic email if
an error occurs in the daily cube process.

Right-click the SQL job of the daily cube process and select Properties from the contextual
menu. In the Job Properties window, select Notifications in the left pane. The following window
will appear.

Tick the E-mail box. Select the operator you just created from the drop-down list, e.g. SQL Agent
Administrator, and keep the When the job fails option. Click OK.

Each time the job fails, the operator will be notified by email.

Repeat this procedure for all jobs where you want emails to be sent automatically.

Warning: By default, jobs that contain several steps are configured to proceed to the next step
even if an error occurs. In this case (if the last step was not in error), the operator will receive an
email stating that the job was successful even if one of the previous steps was in error. If you
configure the sending of an automatic email to the operator, you must also modify the
configuration of the job steps so that the job stops if there is an error.
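The operator and the failure notification can also be scripted instead of using the GUI; a hedged sketch, where the operator email address and the job name are examples to adapt:

USE msdb

-- Create the operator that will receive the notification emails.
EXECUTE dbo.sp_add_operator
    @name = N'SQL Agent Administrator',
    @email_address = N'administrator@yourdomain.example' -- hypothetical address

-- Notify the operator by email when the job fails (notify level 2 = on failure).
EXECUTE dbo.sp_update_job
    @job_name = N'Daily cube process', -- example job name
    @notify_level_email = 2,
    @notify_email_operator_name = N'SQL Agent Administrator'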

34. SQL AGENT: MODIFYING THE ACCOUNT IN CHARGE OF RUNNING A JOB STEP
By default, a job step is run using the SQL Agent service account. This account can be modified in the Run as field of the
job step if specific rights are required to run a task.

The first step consists of creating your credentials in the Security folder of the SQL instance containing the agent. To do
so, follow the procedure below. The screenshots below are taken from SQL Server 2008 R2.

• Log on to the default instance using an account with the sysadmin role, e.g. BINextServices.

• Select Security in the left pane and click Credentials. Right-click and add a new credential. The
following window will appear.

• We recommend that you enter a name for the credential identical to the one for the identity.

• Enter a password and validate.

Once you have created the credential, you must create a proxy. To do so, expand the SQL Server Agent folder in the left
pane and select Proxies. Select the type of proxy you want to create depending on the type of task or job step. Right-click the type of proxy and select New Proxy from the contextual menu. The following window will appear.

• Enter a descriptive name for the proxy so that you can identify it easily.
• Enter the credential you just created.
• Click OK.
• Select the subsystems (job tasks associated with the SQL agent) for which the proxy must be active.

Once you have created the proxy, you can enter it in the Run as field when running the job step. The job step will then
be run using the proxy account.
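For reference, the credential and the proxy can also be created in T-SQL; a hedged sketch with hypothetical account, password and proxy names (the subsystem shown is CmdExec):

-- Create the credential in the SQL instance (use the same name as the identity, as recommended above).
CREATE CREDENTIAL [DOMAIN\ProxyUser]
    WITH IDENTITY = N'DOMAIN\ProxyUser',
    SECRET = N'StrongPasswordHere' -- hypothetical password

-- Create the SQL Agent proxy based on that credential and allow it for CmdExec job steps.
EXECUTE msdb.dbo.sp_add_proxy
    @proxy_name = N'BI CmdExec proxy',
    @credential_name = N'DOMAIN\ProxyUser'

EXECUTE msdb.dbo.sp_grant_proxy_to_subsystem
    @proxy_name = N'BI CmdExec proxy',
    @subsystem_name = N'CmdExec'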

35. MIGRATE A BI FOUNDATION ENVIRONMENT TO NEW SERVERS
This chapter describes the different steps you should perform to migrate BI Foundation servers to new servers.

The migration applies to the following BI Foundation modules:

• BI Architect: Data mart of the solution

• BI OLAP: Analysis Services cube

• BI Reports: System and business Reporting Services portal

The QlikSense dashboard server is not included.

The chapter describes the migration of the three components. You may not want to migrate one of the components,
e.g. because it is not present, or because it should not be migrated. In this case, you can ignore the relevant section.

Step 1: Set up the new servers


• Set up the new servers in line with the target architecture:
o The number of servers must respect the recommended architecture. To find out more,
see the architecture and sizing document.

o The servers must have adequate resources. To find out more, see the architecture and
sizing document.

o The servers must respect the prerequisites, i.e. OS, Framework, etc. To find out more, see
the SQL Server installation document.

Step 2: Install SQL Server instances on the new servers


• Install SQL Server instances on the new servers based on the architecture defined.
o See the document on the customer's target architecture.

o Refer to the SQL Server installation document shipped with the Retail Intelligence
solution for configuring the instances.

Step 3: Install BI resources on the new server
• Install BI resources on the new BI Architect server. To do this, refer to the BI Consulting
configuration document.

The objective is to follow the BI Consulting configuration document very closely, by installing
resources and by creating an empty BI database in the standard instance. The only task you do
not need to perform is the creation of SQL jobs because they will subsequently be recovered
from the old server.

Step 4: Disable data loading and updates on the old servers


Warning: This step is fundamental because it ensures that the new environment is synchronized correctly with loading
sources.

• On the old BI Architect server, you must disable the SQL Agent that runs jobs in the solution. To
do this, proceed as described below.

• Log on to the data mart server using an admin account.

• Run SQL Server XXXX Configuration Manager by selecting Microsoft SQL Server XXXX in the
Windows menu.

• Select SQL Server Services in the left pane. In the right pane, click SQL Server Agent for the
relevant instance. Its Start Mode is Automatic. This is generally the default instance,
MSSQLSERVER.

• Right-click SQL Server Agent and select Properties from the contextual menu.

• Select the Service tab. In the Start Mode field, select Disabled and click OK.

• You will return to the previous window. Right-click SQL Server Agent again and select Stop to stop
the service. SQL jobs will no longer run.

• As a precaution, ensure that the agent is really stopped by checking its status in SQL Server
Management Studio.

• If you also need to migrate an old Report server and if it is on a server other than BI Architect, you
should repeat the procedure to disable the SQL Server Agent for the Report server.

Step 5: Perform backups on the old servers


You must perform at least the following backups, not including any custom objects to be migrated:

• The BI Architect database, generally vtNextDW

• The production and test OLAP cube, generally CustomNext and CustomNextTest

• The Reporting Services system databases, generally ReportServer and ReportServerTempDB

• Encrypted keys from the Reporting Services system databases as described below:

• Open the Reporting Services Configuration Manager on the data mart server. Run Report Server
Configuration Manager in the Windows menu.

• Log on to the Report server.

• Select Encryption Keys in the left pane.

• Click Backup in the right pane.

• Specify the following backup file, BI REPORTS.snk and save it in the \Custom BINEXT\BI
REPORTS folder. If this file exists, you should overwrite it. Enter the following password,
Cegid.2012. Validate and click OK to generate the file.

To find out more, see the RS Administration document.
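Below is a hedged T-SQL sketch of the relational database backups listed at the start of this step. The file paths are examples only; the OLAP cubes are backed up from Analysis Services and the encryption key from Reporting Services Configuration Manager, as described above:

-- Run on the old servers, on the instances hosting each database.
BACKUP DATABASE [vtNextDW] TO DISK = N'D:\Backup\vtNextDW.bak' WITH INIT
BACKUP DATABASE [ReportServer] TO DISK = N'D:\Backup\ReportServer.bak' WITH INIT
BACKUP DATABASE [ReportServerTempDB] TO DISK = N'D:\Backup\ReportServerTempDB.bak' WITH INIT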

Step 6: Transfer resources and databases


• Recover the folder called \Custom BINEXT from the old server and use it to replace the
corresponding folder on the new server. You should first delete the folder on the new server.

• Recover the folder called \Database name found in the data drive on the old server and use it to
replace the corresponding folder on the new server. You should delete the folder on the new
server first.

Warning: Database name depends on the name of the database. If the database name is
vtNextDW, the folder will be called \vtNextDW.

• On the new server, log on using the vtDbAdmin account and delete the empty BI Architect
database, generally vtNextDW that was created earlier.

• Restore the backup of the BI Architect database, generally vtNextDW in the BI Architect standard
instance on the new server using the vtDbAdmin account.

• Run the BIArchitectDatabaseReady.exe utility to ensure the BI database is operational. To find out more, see Ensure an operational database in the DB Administration document.

• Restore the backups of OLAP cubes by overwriting the empty cubes deployed.

o You must check that the cube data sources point to the new server.

• Log on to the Database Engine instance on the new Report server. This is generally the default
instance. You should log on using the Reporting Services portal administrator account, generally
BINextServices.

• Expand Databases and delete the ReportServer and ReportServerTempDB databases. You must
ensure that you are on the correct server and instance. The aim is to delete the empty Reporting
Services databases on the new Report server.

• Restore the backups of the Reporting Services databases, i.e. ReportServer and
ReportServerTempDB on the instance of the new Report server.

• Once you have restored both databases, run Report Server Configuration Manager in the
Windows menu on the new Report server.

• Enter the name of the machine if it is not specified. Click Connect.

• Select Database in the left pane.

• Click Change Database.

• Select Choose an existing report server database and click Next.

• Specify the following:
o Server name = LOCALHOST (the instance depends on the context)
o Authentication type: This depends on the context, generally Current User – Integrated
Security
o Test the connection. If it runs correctly, click Next.

• In the Report Server Database drop-down list, select ReportServer. Be careful to avoid any errors.

• Click Next.

• This depends on the context. You usually select Service credentials. Click Next.

• Click Next. The Progress and Finish window will appear.

• Once processing is completed, click Finish to return to the main configuration window.

• Select Encryption Keys in the left pane.

• Click Restore in the right pane.

• Locate the file called BI REPORTS.SNK you recovered from the old server and restore the
encrypted keys on the new Report server.

• Enter the password of the file and click OK to restore the keys.

• Once you have restored the keys, click Exit to close the configuration tool.

• Run the Reporting Services portal on the new Report server. Modify the data sources to ensure
that they point to the new servers.
The new Reporting Services portal is now configured for reports. However, you must also update system reports,
especially if the BI version on the new server is later than the one on the old server. To do this, run the BI wizard for
version upgrades and update the system reports.

Step 7: Transfer SQL jobs


• In SQL Server Management Studio on the old server, generate the SQL scripts for creating SQL jobs. To do this, right-click the job and select Script Job as > CREATE To > New Query Editor Window.

• Copy the scripts to the new server.

o Check if the script refers to the old server. If it does, you must update the names of the
servers. Warning: Depending on the context, certain jobs may also require you to create
other objects first, such as execution proxies.

Run the creation scripts on the default instance of the new server.

Step 8: Perform checks


• Check that the drives and root paths where folders \vt BINEXT, \Custom BINEXT and \Database
Name are located are identical on both the old and new servers.

If the drives and root paths are different, you must:


o Update the paths in the BI system reports below:
▪ BI Configuration setup: Update communication parameters section.

▪ BI Database communication setup: Only if external databases using file exchange are defined.

o Run the BI configuration wizard again to check and correct the paths if required.

o Check and correct the steps in SQL jobs, especially if they refer to one of the updated
paths or to the old servers.

• Check that all custom projects are present in the folder called \Custom BINEXT on the new server.

Step 9: Disable access to the old servers


As a precaution, stop all SQL services on the old servers. Disable all services to ensure they will no longer run.

Step 10: Run tests


You have completed the migration. You must now run at least the following tests on the new servers:

• Run the different SQL jobs to check that everything is correct.

• Check that reports run correctly.

36. RUNNING CUSTOMIZED SSIS PACKAGES
You can run custom SSIS packages linked to standard SSIS packages. You can run these packages once data has been
loaded from different ERPs by enabling options. By default, data loading is disabled and must be enabled using the
Retail Intelligence configuration wizard. To find out more, see Configuring BI Foundation.

User packages are stored in the SSIS project, User BI ARCHITECT DATA MART PACKAGE found in the \Custom
BINEXT\BI ETL\Projects folder. This project and its packages can be modified for or by customers but the names of the
main packages listed below must not be changed. The four main packages that can call other packages if required and
that can be modified are as follows:

• User BI ARCHITECT DATA MART load for ALL: This is the main user package called after data is
loaded from all ERPs. This package is called by the BI ARCHITECT DATA MART LOAD package. You
can enable this option in the XML file with the same name.

• User BI ARCHITECT DATA MART load for CBR: This is the main user package called after data is
loaded from CBR. This package is called by the CBR DATA MART LOAD package. You can enable
this option in the XML file with the same name.

• User BI ARCHITECT DATA MART load for Next: This is the main user package called after data is
loaded from .Next. This package is called by the CNext DATA MART LOAD package. You can enable
this option in the XML file with the same name.

If you do not want to use these packages, you should avoid any future conflict by adding the following prefix to their
name, _User BI Architect..., e.g. _User BI Architect name example.

You can use a single XML file to manage all packages. This file, called User BI ARCHITECT DATA MART update, is found
in the Custom BINEXT\BI ETL\Package configuration folder. As this file is associated with user packages, it can also be
modified for or by users. The configuration file is associated with the environment variable
VTBI_UserBIARCHITECTDATAMARTUPDATE_CONFIG that you must configure on the workstation running the SSIS
packages.

Warning: If you enable these options, you must deploy optional packages using the SSIS deployment wizard once they
have been modified and compiled. The package to be deployed is called User BI ARCHITECT DATA MART PACKAGE. You
should deploy it from the BIN folder of the User BI ARCHITECT DATA MART load project, found in Custom BINEXT\BI
ETL\Projects.

The User BI Architect data mart update XML configuration file is used to manage all customer-specific user packages for
updating the BI Architect data marts.

The environment variable associated with the XML file is usually already configured on the BI Architect server using the
Retail Intelligence configuration wizard. To find out more, see Configuring BI Foundation. If required, you can configure
it manually as described below.

• Log on to the BI Architect data server.


• Right-click the Computer icon on your Desktop or select it from the Windows Start menu.

• Select Properties from the contextual menu.

• Select the Advanced system settings tab. The following window will appear.

• Click the Environment Variables button. The following window will appear.

• Check whether or not the system variable called
VTBI_UserBIARCHITECTDATAMARTUPDATE_CONFIG exists. If it does not exist, click the New
button in the System variables groupbox. The following window will appear.

• Specify the following values:


o Variable name: VTBI_UserBIARCHITECTDATAMARTUPDATE_CONFIG
o Variable value: \\SERVERFOLDER\Custom BINEXT\BI ETL\Package configuration\User BI
Architect data mart update.dtsConfig

\\SERVERFOLDER\ corresponds to the root folder where the Custom BINEXT folder is
located. If the server where the folder is located is not the one hosting SSIS, then the server
hosting SSIS must be able to access the server where the folder is located.

• Once you have specified the values for the system variable, you must specify the settings for
running it in the configuration file of the User BI ARCHITECT DATA MART
update.dtsConfig package located in the \\SERVERFOLDER\Custom BINEXT\BI ETL\Package
configuration\ folder.

Below is the list of default properties that can be modified in the XML file. Mandatory values are indicated. You can add
customer-specific properties in the XML file.

• Name of the server hosting the BI Architect database
  Property: \Package.Connections[User DATA MART BI ARCHITECT].Properties[ServerName]
  Mandatory: Yes
  Comment: Default value is ServerName\VCSNEXT. The VCSNEXT instance is mandatory. The name of the machine should be specified if required. Warning: You cannot enter localhost as the name of the server.

• BI Architect database name
  Property: \Package.Connections[User DATA MART BI ARCHITECT].Properties[InitialCatalog]
  Mandatory: Yes
  Comment: Default value is vtNextDW.

• SQL login for the BI Architect database
  Property: \Package.Connections[DATA MART BI ARCHITECT].Properties[UserName]
  Mandatory: Yes
  Comment: Default value is vtDbAdmin. This user must not be modified. This login must be used for loading and updating data in BI Architect.

• Password for logging on to the BI Architect database
  Property: \Package.Connections[DATA MART BI ARCHITECT].Properties[Password]
  Mandatory: Yes
  Comment: Default password for vtDbAdmin (see BI Administration). This password must not be modified.

• Root path of the vt BINEXT environment
  Property: \Package.Variables[User::vtBINEXTRootPath].Properties[Value]
  Mandatory: Yes
  Comment: Default value is C:. You should specify only the root where the vt BINEXT folder is located. You should not end with a slash, \.

• Root path of the Custom BINEXT environment
  Property: \Package.Variables[User::CustomBINEXTRootPath].Properties[Value]
  Mandatory: Yes
  Comment: Default value is D:. You should specify only the root where the Custom BINEXT folder is located. You should not end with a slash, \.

37. STOPPING/STARTING AN SQL INSTANCE/SERVICE USING A COMMAND LINE
To stop/start a SQL instance using a command line, use the net start and net stop commands to start/stop a service.

For example, if you would like to stop and start the VCSNEXT instance before loading the data marts, proceed as
described below:

• Create the StopAndStartVCSNextInstance.cmd command file in the D:\Custom BINEXT\BI


ARCHITECT\Scripts folder (in our scenario, the Custom BINEXT folder is located on the D: drive). The file contains the following rows:
• Net stop "SQL Server (VCSNEXT)"
• Timeout 10
• Net start "SQL Server (VCSNEXT)"
• Timeout 15

• In the SQL job, insert a step before loading that calls the command file. It must be an Operating system (CmdExec) step.

• The command line will be:

Call "D\Custom BINEXT\BI ARCHITECT\Scripts\StopAndStartVCSNextInstance.cmd"

Note:

• It may take a long time to start the service. We therefore recommend that you create a
separate job to stop the services and which runs with a wide margin before loading the data
marts.

• "SQL Server (VCSNEXT)" depends on the name of the service. This means that you must first check
the name of the service to be stopped/restarted in the SQL server configuration tool.

• If the SQL Server Agent is active, it must first be stopped before stopping the SQL instance, and
then restarted afterwards.

38. APPENDIX
The functionalities described in this chapter are obsolete because they are managed differently in more recent versions.
As such, they should not be used in the BI solution.

Our aim in continuing to describe them in this chapter is to store a trace of the configurations defined for customers, in
order to ensure backward compatibility.

Below is a link to the forum:


http://social.msdn.microsoft.com/Forums/en-US/sqldataaccess/thread/33436d82-085c-43e4-b991-a2d0d701c8fc

Below is an extract from the forum:


There is now a 64-bit driver available, you can download it here:

Microsoft Access Database Engine 2010 Redistributable


http://www.microsoft.com/downloads/en/details.aspx?FamilyID=c06b8369-60dd-4b64-a44b-84b371ede16d

This will register a driver which is listed under Server Objects -> Linked Servers -> Providers with the name
"Microsoft.ACE.OLEDB.12.0" which you must use as the Provider string.

Connection string for 64-bit OLEDB Provider:


For CSV / Text files, Add "Text" to the Extended Properties of the OLEDB connection string.
Important: With the new 12.0 driver and text files the schema.ini file is compulsory in the directory of the csv/text file,
otherwise you will receive a "Could not find installable ISAM" error.
schema.ini documentation can be found here:
http://msdn.microsoft.com/en-us/library/ms709353(VS.85).aspx

If you are connecting to Microsoft Office Excel data, add “Excel 14.0” to the Extended Properties of the OLEDB
connection string.

When performing an external import, integrity controls are optional. Because of this, sometimes you may encounter
empty data in BI even though this data should be specified. BI Architect provides a tool that you can use to export linked
documents missing from the database in a text file.

The \vt BINEXT\BI ARCHITECT\vt Scripts folder contains the ExportMissingLinkedDocuments.cmd command file. This
command file receives the following parameters:

• BI Architect SQL Server instance

• BI Architect data mart database name

• vtDbAdmin account password

To export the missing linked documents, you should:

• Log on to the BI Architect server.

• Create a folder called C:\TEMP on the BI Architect server if it does not exist.

• Open a Command Prompt window and enter the following command line:

o ExportMissingLinkedDocuments.cmd LOCALHOST\VCSNEXT vtNextDWEUN PasswordOfVtDbAdmin

The command file will generate the following files in the C:\TEMP folder on the BI Architect
server:

o Missing customer deliveries linked to invoices: MissingCustomerDeliveryLinkedToSales.txt

o Missing customer orders linked to deliveries: MissingCustomerOrderLinkedToDelivery.txt

o Missing customer orders linked to invoices: MissingCustomerOrderLinkedToSales.txt

o Missing supplier orders linked to receipts: MissingSupplierOrderLinkedToReceipt.txt

o Missing supplier receipts linked to invoices: MissingSupplierReceiptLinkedToPurchase.txt

o Missing supplier orders linked to invoices: MissingSupplierOrderLinkedToPurchase.txt

These text files contain a header row and information on the missing data. Each column is separated by a tab. We
recommend that you open these files using Excel as this will import them when loading. Use the default settings for
importing Excel files when opening, i.e. tab-delimited and presence of a header row.

Note: Even if no document is missing, the file will still contain a header row with the names of columns.

You can run customized SSIS packages using standard SSIS packages.

To import non-Cegid data, we recommend that you use the consolidation module. To find out more, see the chapter
called Importing non-Cegid data – BI SaaS/BI On-premises in the BI ARCHITECT Database Consolidation document.

If you want to develop SSIS packages, see the configuration below.

You can run the customized packages once data has been loaded from different data sources by enabling options. By
default, data loading is disabled and must be enabled using the Retail Intelligence configuration wizard. To find out
more, see Configuring BI Foundation.

User packages are stored in the SSIS project, User BI ARCHITECT DATA MART PACKAGE found in the \Custom
BINEXT\BI ETL\Projects folder. This project and its packages can be modified for or by customers but the names of the
main packages listed below must not be changed. The main packages that can call other packages if required and
that can be modified are as follows:

• User BI ARCHITECT DATA MART load for ALL: This is the main user package called after data is
loaded from all ERPs. This package is called by the BI ARCHITECT DATA MART LOAD package. You
can enable this option in the XML file with the same name.

• User BI ARCHITECT DATA MART load for CBR: This is the main user package called after data is
loaded from CBR. This package is called by the CBR DATA MART LOAD package. You can enable
this option in the XML file with the same name.

• User BI ARCHITECT DATA MART load for Next: This is the main user package called after data is
loaded from .Next. This package is called by the CNext DATA MART LOAD package. You can enable
this option in the XML file with the same name.

These packages are usually sufficient to meet customer requirements. You can however create other packages to be
called by the user packages if required. To avoid any risk of conflict, you must add the prefix _User BI ARCHITECT to
their name, e.g. _User BI ARCHITECT name example.

You can use a single XML file to manage all packages. This file, called User BI ARCHITECT DATA MART update, is found
in the Custom BINEXT\BI ETL\Package configuration folder. As this file is associated with user packages, it can also be
modified for or by users. The configuration file is associated with the environment variable
VTBI_UserBIARCHITECTDATAMARTUPDATE_CONFIG that you must configure on the workstation running the SSIS
packages.

Warning: If you enable these options, you must deploy optional packages using the SSIS deployment wizard once they
have been modified and compiled. The package to be deployed is called User BI ARCHITECT DATA MART PACKAGE. You
should deploy it from the BIN folder of the User BI ARCHITECT DATA MART load project, found in Custom BINEXT\BI
ETL\Projects.

38.3.1. Configuring the user BI Architect data mart update
The User BI Architect data mart update XML configuration file is used to manage all customer-specific user packages for
updating the BI Architect data marts.

The environment variable associated with the XML file is usually already configured on the BI Architect server using the
Retail Intelligence configuration wizard. To find out more, see Configuring BI Foundation. If required, you can configure
it manually as described below.

• Log on to the BI Architect data server.

• Right-click the Computer icon on your Desktop or select it from the Windows Start menu.

• Select Properties from the contextual menu.

• Select the Advanced system settings tab. The following window will appear.

• Click the Environment Variables button. The following window will appear.

• Check whether or not the system variable called
VTBI_UserBIARCHITECTDATAMARTUPDATE_CONFIG exists. If it does not exist, click the New
button in the System variables group box. The following window will appear.

• Specify the following values:


o Variable name: VTBI_UserBIARCHITECTDATAMARTUPDATE_CONFIG
o Variable value: \\SERVERFOLDER\Custom BINEXT\BI ETL\Package configuration\User BI
Architect data mart update.dtsConfig

\\SERVERFOLDER\ corresponds to the root folder where the Custom BINEXT folder is
located. If the server where the folder is located is not the one hosting SSIS, then the server
hosting SSIS must be able to access the server where the folder is located.

• Once you have specified the values for the system variable, you must specify the settings for
running it in the configuration file of the User BI ARCHITECT DATA MART
update.dtsConfig package located in the \\SERVERFOLDER\Custom BINEXT\BI ETL\Package
configuration\ folder.

Below is the list of default properties that can be modified in the XML file. Mandatory values are indicated. You can add
customer-specific properties in the XML file.

• Name of the server hosting the BI Architect database
o Property: \Package.Connections[User DATA MART BI ARCHITECT].Properties[ServerName]
o Mandatory: Yes
o Comment: Default value is ServerName\VCSNEXT. The VCSNEXT instance is mandatory. The name of the machine should be specified if required. Warning: You cannot enter localhost as the name of the server.

• BI Architect database name
o Property: \Package.Connections[User DATA MART BI ARCHITECT].Properties[InitialCatalog]
o Mandatory: Yes
o Comment: Default value is vtNextDW.

• SQL login for the BI Architect database
o Property: \Package.Connections[DATA MART BI ARCHITECT].Properties[UserName]
o Mandatory: Yes
o Comment: Default value is vtDbAdmin. This user must not be modified. This login must be used for loading and updating data in BI Architect.

• Password for logging on to the BI Architect database
o Property: \Package.Connections[DATA MART BI ARCHITECT].Properties[Password]
o Mandatory: Yes
o Comment: Default password for vtDbAdmin (see BI Administration). This password must not be modified.

• Root path of the vt BINEXT environment
o Property: \Package.Variables[User::vtBINEXTRootPath].Properties[Value]
o Mandatory: Yes
o Comment: Default value is C:. You should specify only the root where the vt BINEXT folder is located. You should not end with a slash, \.

• Root path of the Custom BINEXT environment
o Property: \Package.Variables[User::CustomBINEXTRootPath].Properties[Value]
o Mandatory: Yes
o Comment: Default value is D:. You should specify only the root where the Custom BINEXT folder is located. You should not end with a slash, \.

To ensure consistency, simplicity and performance, we recommend that you avoid accessing non-Cegid external data
directly from the solution. If you need to use non-Cegid data in the solution, we recommend that you import it to BI
Architect. To find out more, see the chapter called Importing non-Cegid data - BI SaaS/BI On-premises in the BI
Architect Database Consolidation document.

The sections below describe some techniques for accessing external data directly from the BI solution.

38.4.1. Accessing a secondary database on the standard BI instance


If you want to create an SQL Server database on the BI instance, usually VCSNEXT, proceed as described below.

Log on to the BI instance using the SQL login, vtDbAdmin. VtDbAdmin has read access to the BI Architect database and
has adequate rights to create a new database.

If you want, for practical reasons, to give one of the BI Retail standard user accounts read access to the database, you
can open SQL Server Management Studio using the vtDbAdmin account to create a user in the database. In our
example, the vtAnalysisServices user is the one who usually runs cube processes and to whom this type of right is
generally given. In Object Explorer, expand the Databases folder. Right-click the Security folder and select New and
then click User. The following window will appear.

Specify the following in the window:

• The user name (UserAllReader in our example)


• The login (vtAnalysisServices in our example)
• The role(s) (db_datareader is sufficient for read access, do not assign the db_denydatareader
role which will remove this right)
• Click OK.

You can then run queries in multiple databases using one of the following syntaxes:
DatabaseName.OwnerName.ObjectName or DatabaseName.SchemaName.ObjectName

For example:
UserDB..vtFactsWeatherInfo or vtNextDW.vtCommonDataSchema.vtProduct

Note: Based on different performance tests, there are no significant differences when querying multiple joined
databases, as long as the databases are hosted on the same instance.

If you use this solution, you do not need to create a new data source in the cube project because the main data source
used to access the BI Architect database also allows access to all other databases hosted on the same instance. As such,
you can also join multiple databases in the named query of the DSV that will access BI Architect.
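
Below is a minimal sketch of such a cross-database query, assuming a hypothetical secondary database called UserDB hosted on the same VCSNEXT instance and a hypothetical ProductSys join column; the actual database, table and column names will depend on your context.

-- Hedged example: join a table from a secondary database with the BI Architect
-- product table hosted on the same instance. UserDB, vtFactsWeatherInfo and the
-- ProductSys join column are assumptions for illustration only.
SELECT TOP (100)
    P.*,   -- columns from the BI Architect product table
    W.*    -- columns from the secondary database
FROM vtNextDW.vtCommonDataSchema.vtProduct AS P
    INNER JOIN UserDB.dbo.vtFactsWeatherInfo AS W
        ON W.ProductSys = P.ProductSys;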

If you need to create stored procedures to run consolidations, and these stored procedures are triggered by an SQL
command run by an SQL Agent job on another instance, you must create a linked server for that instance to access the
stored procedure (see the sketch after the list below). This scenario occurs for a cube process job run on the default
instance and used to run stored procedures on the BI instance. When you create a linked server, you must specify the following options:

• Authorize RPC
• Increase the timeout
• Map the user running the job with a user with run and write access in the database, e.g.
vtDbAdmin
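
Below is a minimal sketch of what such a call might look like once the linked server exists, assuming a hypothetical consolidation stored procedure; the procedure name and database are placeholders to adapt to your context.

-- Hedged example: an SQL Agent job step on the default instance calls a
-- consolidation stored procedure hosted on the BI instance through the linked
-- server (requires the RPC Out option). vtUserConsolidation is a hypothetical name.
EXEC [SERVERNAME\VCSNEXT].[vtNextDW].[dbo].[vtUserConsolidation];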

38.4.2. Accessing a database via a linked server on the standard BI instance


This solution involves queries run on multiple servers. This type of query does not ensure optimal performance. We
therefore recommend that you avoid using linked servers.

Otherwise, you must declare the remote server in SQL Server Management Studio. To do this, you should use the
system procedures provided by SQL Server and log on using the vtDbAdmin account, which has adequate rights. Note
that the SQL Server Management Studio graphical interface requires the sysadmin role, even though vtDbAdmin has
adequate rights for creating linked servers using the system procedures.

Below is an example of how to add a linked server to a standard BI instance in SQL Server using the vtDbAdmin account:

USE [master]
GO
EXEC master.dbo.sp_addlinkedserver @server = N'SERVERNAME\VCSNEXT', @srvproduct=N'SQL Server'
GO
EXEC master.dbo.sp_serveroption @server=N'SERVERNAME\VCSNEXT', @optname=N'collation compatible', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'SERVERNAME\VCSNEXT', @optname=N'data access', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'SERVERNAME\VCSNEXT', @optname=N'dist', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'SERVERNAME\VCSNEXT', @optname=N'pub', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'SERVERNAME\VCSNEXT', @optname=N'rpc', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'SERVERNAME\VCSNEXT', @optname=N'rpc out', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'SERVERNAME\VCSNEXT', @optname=N'sub', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'SERVERNAME\VCSNEXT', @optname=N'connect timeout', @optvalue=N'0'
GO
EXEC master.dbo.sp_serveroption @server=N'SERVERNAME\VCSNEXT', @optname=N'collation name', @optvalue=null
GO
EXEC master.dbo.sp_serveroption @server=N'SERVERNAME\VCSNEXT', @optname=N'lazy schema validation', @optvalue=N'false'
GO
EXEC master.dbo.sp_serveroption @server=N'SERVERNAME\VCSNEXT', @optname=N'query timeout', @optvalue=N'0'
GO
EXEC master.dbo.sp_serveroption @server=N'SERVERNAME\VCSNEXT', @optname=N'use remote collation', @optvalue=N'true'
GO
EXEC master.dbo.sp_serveroption @server=N'SERVERNAME\VCSNEXT', @optname=N'remote proc transaction promotion', @optvalue=N'true'
GO
USE [master]
GO
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname = N'SERVERNAME\VCSNEXT',
@locallogin = NULL, @useself = N'False', @rmtuser = N'vtDbAdmin', @rmtpassword = N'XXXXXXXXXXXX'
GO

Save the server creation script in the \Custom BINEXT\BI Architect\vt Scripts folder so that you can run it again without
redefining the configuration.

Subsequently, you should define your query. Our example shows a multi-server query. The databases in this context are
prefixed with the server name which is SRV-PAR-SDBI\VCSNEXT in our example.

SELECT
    TimePeriod.TimeDateCalendarYearName as 'Year',
    sum(Sales.[AmountInvoicedIncludingTax]) as 'Turnover',
    count(*) as 'Count sales'
FROM [SRV-PAR-SDBI\VCSNEXT].[VtNextDW].[vtCustomerSalesDataSchema].[vtFactsProductCustomerSales] as Sales
    inner join [WKS-MRI\VCSNEXT].[vtNextDW].[vtCommonDataSchema].[vtTimeDate] as TimePeriod
        on TimePeriod.TimeDateSys = Sales.TimeDateSys
group by TimePeriod.TimeDateCalendarYearName
order by TimePeriod.TimeDateCalendarYearName

Note:

• You can declare different objects on the linked server such as the OLAP cube which will be queried
using the relational model.

• If you create a linked server with a provider, you should pass the product name as a parameter
with an empty value as shown:

EXEC master.dbo.sp_addlinkedserver @server = N'SERVERNAME\VCSNEXT', @srvproduct=N'', @provider=N'SQLOLEDB'

• To delete the linked server, you must run the following procedure:

USE [master]
GO
EXEC master.dbo.sp_dropserver @server=N'SERVERNAME\VCSNEXT',
@droplogins='droplogins'
GO

• To find out more about the different options, please refer to the relevant Microsoft
documentation. Below is a tip on how to generate the script in SQL Server Management Studio:

o Log on to the default instance (not the standard BI instance) in the solution using an
account with the sysadmin role, e.g. BINextServices.

o Expand Server Objects in the default instance, right-click the Linked Servers folder and
click New Linked Server.

o Tick SQL Server.

o In the Linked server field, enter the name of the machine followed by the name of the
instance as shown.

o Select Security in the left pane.

o Select the option called Be made using this security context.

o Enter the user name, vtDbAdmin and the password.

o Select Server Options in the left pane.

• Specify the same options as shown.

• Right-click the Script button and select Script Action to New Query Window. Warning: Do not
validate the screen to avoid creating the linked server in the default instance.

• Click OK to generate the script and run it in the VCSNEXT instance.

• If you want to modify an option for a provider, repeat the same procedure to generate the script in
the default instance.

38.4.3. Accessing a database using ad hoc remote queries


The database server can access external data sources in two ways: using a standard linked server, or by running ad hoc
remote queries, also known as ad hoc distributed queries, using the Transact-SQL function OPENROWSET. As a general
rule, we recommend that you do not use the linked server method as it is less flexible, especially for non-standard data
sources, and a CAST is required for certain databases or subqueries. We recommend the use of the OPENROWSET
function for the data sources described in this chapter.

38.4.4. Presentation of the Transact-SQL function, OPENROWSET


Microsoft provides the OPENROWSET function in Transact-SQL. This method is an alternative to accessing tables in a
linked server and is a one-time, ad hoc method of connecting and accessing remote data by using OLE DB.

To find out more about how to use it with external databases, text files, etc., see http://msdn2.microsoft.com/fr-fr/library/ms190312.aspx.
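
As a simple illustration, the hedged query below uses OPENROWSET with the SQL Server Native Client OLE DB provider to read a few rows from the BI Architect product table; it assumes that the Ad Hoc Distributed Queries option described below is enabled, and the server name, provider name and password are placeholders to adapt to your context.

-- Hedged example: ad hoc remote query against the BI Architect database.
-- SERVERNAME and the password are placeholders; the provider name may differ
-- depending on the SQL Server version installed.
SELECT a.*
FROM OPENROWSET(
    'SQLNCLI',
    'Server=SERVERNAME\VCSNEXT;UID=vtDbAdmin;PWD=XXXXXXXXXXXX;',
    'SELECT TOP (10) * FROM vtNextDW.vtCommonDataSchema.vtProduct'
) AS a;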

38.4.4.1. Rights for using the OPENROWSET function in ad hoc remote queries
To be able to use the OPENROWSET function in an ad hoc remote query, you must enable two options on the standard
BI instance.

If option 1 is not enabled, the following error message will appear:

“OLE DB Error: OLE DB or ODBC Error: SQL Server blocked access to STATEMENT 'OpenRowset/OpenDatasource' of
component 'Ad Hoc Distributed Queries' because this component is turned off as part of the security configuration
for this server. A system administrator can enable the use of 'Ad Hoc Distributed Queries' by using sp_configure. For
more information about enabling 'Ad Hoc Distributed Queries', see "Surface Area Configuration" in SQL Server Books
Online. ; 42000”.

If option 2 is not enabled, the following error message will appear:

“OLE DB Error: OLE DB or ODBC Error: Ad hoc access to OLE DB provider 'Microsoft.Jet.OLEDB.4.0' has been denied.
You must access this provider through a linked server.; 42000”.

Option 1: Enable the use of the OPENROWSET function for remote queries. Warning: From BI version 3.2 onwards, this
option is enabled by default. You will no longer need to enable it if your BI version is equal to or later than 3.2.

If you are required to enable the option, proceed as described below.

• Select SQL Server Management Studio in the Microsoft SQL Server XXXX menu.

• Once SQL Server Management Studio is displayed, log on to the standard BI instance using the
vtDbAdmin account as shown. The screenshots below are taken from SQL Server 2005.

• The following window will appear.

Right-click the instance and select New Query from the contextual menu. Enter the following
SQL script in the right pane:

EXEC sp_configure 'Show Advanced Options', 1
GO
RECONFIGURE
GO
EXEC sp_configure 'Ad Hoc Distributed Queries', 1
GO
RECONFIGURE
GO

Run the query to enable the option.
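
If you want to check whether the option is already enabled before running the script above, you can run the query below on the standard BI instance; a value_in_use of 1 means that Ad Hoc Distributed Queries is enabled.

-- Check the current state of the 'Ad Hoc Distributed Queries' server option.
SELECT name, value_in_use
FROM sys.configurations
WHERE name = N'Ad Hoc Distributed Queries';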

Option 2: Ensure that the Disallow adhoc access option is disabled. This option is enabled by default even if it is not
selected. When you select the option, it is stored in the registry. However, even if it is not in the registry, it is enabled
unless the user running the query has the sysadmin role. Warning: If a linked server is installed with the relevant
provider, you can also access this option in the properties of the linked server window.

You can enable this option in the properties of the provider in SQL Server Management Studio. Display the Object
Explorer for the VCSNEXT instance and expand the Server Objects folder. Expand Linked Servers and select Providers.
Double-click the relevant provider to display the Provider Options window. You cannot disable this option in this
window. As mentioned above, this option is enabled by default. If you untick the option, it will be deleted from the
registry but will still be enabled. This appears to be an SQL Server bug. The only way to disable this option is to edit the
registry directly. To find out more, see the following article: http://support.microsoft.com/kb/327489.

To disable this option, proceed as described below.

• Open the Registry Editor by running regedit. To do this, open the Windows Start menu and select
Run as shown.

• Once the Registry Editor is displayed, locate the following key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL

Display the values of the SQL Server instances as shown.

• Take note of the value associated with the standard BI instance, generally VCSNEXT. In our
example, it is MSSQL.5.

• Once you know the value of the standard BI instance (MSSQL.5), expand the corresponding folder.
It is usually located below the Instance Names folder. Display the Providers folder and select
Microsoft.Jet.OLEDB.4.0. Warning: The name of the driver will depend on the one being used. As
it is Microsoft Jet in our example, we will therefore select the following key:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.5\Providers\
Microsoft.Jet.OLEDB.4.0

• Check that the DisallowAdHocAccess registry value exists and that its value is 0 as shown.

o If the registry value does not exist, you should create it as follows:

▪ Right-click in the right pane.

▪ Select New and click DWORD Value.

▪ Right-click the generated key and select Rename from the contextual menu. The
name of the registry value must be DisallowAdHocAccess.

o If the registry value is 1, modify the value as follows:

▪ Right-click the registry value.

▪ Select Modify.

▪ Enter 0 in the Value data field.

▪ Click OK.

• Close the Registry Editor.

• Restart the machine, or restart the SQL and AS services. The option is automatically taken into
account by AS but not by SQL Server.

Once you have enabled both options, you can use OPENROWSET in two ways: indirectly via a data source in the AS
project, or directly via a named query in the AS project.

The section below describes a few of the different data sources that use OPENROWSET.

38.4.5. Accessing an MS Access database


The example below is based on a 32-bit SQL Server version. If you use a 64-bit version, you must ensure that the driver
is also 64-bit. This is normally available only from MS Office 2010 onwards. Warning: In this case, the driver name and
Excel version will also change. To find out more, see Appendix: Installing the 64-bit Microsoft database access driver.

To access an MS Access database, you must use Native OLE DB\Microsoft Jet 4.0 OLE DB Provider.

You can access a database with this provider in two ways: using a linked server, or via an ad hoc remote query using the
Transact-SQL function, OPENROWSET. To find out more, see above.

38.4.6. Accessing a database via an AS cube project data source


There are two possibilities:

• The database is hosted on the standard BI instance. It is therefore an SQL Server database.

• The database is hosted on another instance and/or server. It can be a database other than SQL
Server.

Warning: You can retrieve data from another instance or server and associate it with a dimension
present in BI Architect in order to add attributes if it is an SQL Server database. To do this, you must:

• Create the SQL account, vtAnalysisServices, if this has not yet been done, on the source server
or instance using the same BI Architect password. This account is the one used for accessing
the BI Architect data mart.

• Assign the account adequate rights to the source database so that data can be queried and
retrieved.

• In the OLAP project, create a data source that points to the database to be loaded using the
vtAnalysisServices account, which is the one used to access BI Architect.

• In the Data Source View of the OLAP project, create a named query to load the dimension's
secondary data using the new data source and add a unique field to the SELECT query to
establish the link with the BI Architect dimension table (see the sketch after this list). This field
is usually equivalent to the IdApp present in BI Architect.

• Modify the named query of the dimension and add the same field to the SELECT query to
establish the link (usually the IdApp).

• Establish the link between the two named queries in DSV using the common field.

• Add attributes to the dimension from the new named query extracted from the external
database.
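
Below is a minimal sketch of the named query loading the secondary data, assuming a hypothetical external database called ExternalDB with a ProductExtraInfo table keyed by IdApp; the real database, table and column names will depend on your context.

-- Hedged example of a named query for the DSV: external attributes exposed
-- together with the IdApp field used to link to the BI Architect dimension.
-- ExternalDB, ProductExtraInfo and ExtraAttribute are assumptions for illustration.
SELECT
    E.IdApp,
    E.ExtraAttribute
FROM ExternalDB.dbo.ProductExtraInfo AS E;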

38.4.7. Accessing a secondary database on the non-standard BI instance in the AS cube project
In this case, the database is not necessarily an SQL Server database. To access this database, you must create a
dedicated data source for the connection. In this case, joins with the BI Architect tables are defined in the DSV and not
in the named queries. If you want to define them directly in the named queries, there are three ways to do so: the
database can be hosted on the same instance, you can set up a linked server, or you can use the OPENROWSET
function. See the solutions below.

If joins are not defined in the named query, then outer joins (if they exist) are defined in the properties of Analysis
Services objects, for example the relationships with dimensions in the Dimension Usage properties, and error
management in the measure groups of these objects. To find out more, see the relevant Analysis Services documentation.

If possible, use the native provider of the database in the data source, e.g. for SQL Server: Native OLE DB\SQL native
client. Consult the database administrator for access rights management.

Note:
If several data sources are used in an Analysis Services project, Analysis Services will apply the concept of the primary
data source. The primary data source must include the VtAnalysisServices user account with its initial rights because it
is used as a reference when Analysis Services objects are processed. When you process an Analysis Services object from
an external data source, the rights of the primary data source are used for the process, independently of the rights
specific to other data sources. Otherwise, there could be errors specific to certain providers.

You must therefore create the SQL account, vtAnalysisServices, using the same password on the target server hosting
the database. This account will have read access to the target database.
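
Below is a minimal sketch of how this account could be created on the target server, assuming a hypothetical database name and a password placeholder; replace both with your own values.

-- Hedged example: create the vtAnalysisServices login on the target server and
-- grant it read access. TargetDatabase and the password are placeholders.
USE [master];
GO
CREATE LOGIN vtAnalysisServices WITH PASSWORD = N'SameBIArchitectPassword';
GO
USE [TargetDatabase];
GO
CREATE USER vtAnalysisServices FOR LOGIN vtAnalysisServices;
GO
EXEC sp_addrolemember N'db_datareader', N'vtAnalysisServices';
GO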

38.4.8. Accessing an MS Access database via an AS cube project data source


This solution indirectly uses the OPENROWSET function. You should first create a data source in the AS project by
selecting the provider and then specify the setting as shown.

▪ The database file name will depend on your context.

▪ The Admin user name and the absence of a password show that it is by default a non-protected
MS Access database. If the database is protected, you should contact the database administrator.

Once you have created the data source, you can use the database in the DSV (Data Source View) just like any other SQL
Server database, with one exception: you cannot define named queries with this provider using this data source. Error
messages will appear when processing objects. If you need to define named queries, you can only do so with the
primary data source of the AS project.

Apart from this constraint, this is the best solution because it centralizes connections to the database using a single data
source. This eliminates the need to multiply connections and connection settings.

38.4.9. Accessing an MS Access database via an AS cube project named query


You can use the primary data source to access the SQL Server database provided in the AS project in order to define
named queries against the MS Access database using the Transact-SQL function, OPENROWSET. The disadvantage of
this solution is that it multiplies connections and connection settings in each query run against the database. The
previous solution, which uses a dedicated data source, is more optimized. The database file, user names and passwords
shown in our example will differ, depending on your context. To find out more about using this function, refer to the
relevant Microsoft documentation.

Below is an example of a named query that returns a table of objectives from an MS Access database:

SELECT OBJ
FROM
OPENROWSET
(
N'Microsoft.Jet.OLEDB.4.0',
N'C:\temp\db1.mdb';'admin';'',[Table_Objectives]
) as Objective

Below is an example of a named query that returns a table of objectives from an MS Access database with a join to the
vtStockRoom table:

SELECT vtStockRoom.StockRoomKey, Objective.OBJ
from vtStockRoom left outer join
OPENROWSET
(
N'Microsoft.Jet.OLEDB.4.0',
N'C:\temp\db1.mdb';'admin';'',[Table_Objectives]
) as Objective
on vtStockRoom.StockRoomKey = Objective.[Stock room]

In certain cases, error messages may appear for some data types because of a data conversion error. If this is the case,
you should use a CAST for each column as shown.

CAST (ColumnName as DataType) as MyColumnName

Similarly, you may need to define a subquery to access the query returned by OPENROWSET if Analysis Services loses
the link with the columns.

SELECT OBJ from (SELECT OBJ
FROM
OPENROWSET
(
N'Microsoft.Jet.OLEDB.4.0',
N'C:\temp\db1.mdb';'admin';'',[Table_Objectives]
) as Objective) as FactsObjectives

38.4.10. Accessing an MS Access database via a linked server and an AS cube project named
query
We do not recommend this solution because it requires the systematic use of CAST on data.

You can however use the primary data source to access the SQL Server database provided in the AS project in order to
define named queries in the MS Access database using multi-server queries.

Below is an example of how to configure the MS Access linked server using SQL Server Management Studio.

You need to map users.

Once the server is declared, you can run multi-server queries but you must use CAST in the AS project even though this
is not mandatory in SQL Server Management Studio. Below is an example of a query that loads the table of objectives in
the MS Access database.

select
cast([Stock room] as nvarchar(5)) as Stockroom,
cast(OBJ as decimal(15,2)) as Objective
from [ACCESS_SOURCE]...[Table_objectives]

38.4.11. Accessing an Excel file


The example below is based on a 32-bit SQL Server version. If you use a 64-bit version, you must ensure that the driver
is also 64-bit. This is normally available only from MS Office 2010 onwards. Warning: In this case, the driver name and
Excel version will also change. To find out more, see Installing the 64-bit Microsoft database access driver.

We were unable to use Excel as a data source via a linked server or via a data source. In theory, this is supposed to be
possible. This is why we recommend that you use the Transact-SQL function, OPENROWSET to access Excel files directly
from named queries in the AS project using Microsoft.Jet.OLEDB.4.0. To find out more, see Presentation of the
Transact-SQL function, OPENROWSET.

38.4.12. Accessing an Excel file via an AS cube project named query


You can use the primary data source to access the SQL Server database provided in the AS project in order to define
named queries for an Excel file using the Transact-SQL function, OPENROWSET.

Below is an example of a named query that returns a table of objectives from the Module tab of an Excel file. You must
use a subquery and CAST for returned columns if you want to avoid generating errors.

SELECT
    MyKey, MyName, obj
FROM (SELECT
    CAST([NameColumn1] AS nvarchar(50)) AS MyKey,
    CAST([NameColumn2] AS nvarchar(50)) AS MyName,
    CAST([NameColumn3] AS decimal(15,2)) AS obj
FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
    'Excel 8.0;Database=C:\Personal_Data\My documents\modules.xls;HDR=YES',
    [Module$]) AS Objective) AS FactsObjectives

38.4.13. Accessing text files
We recommend that you use the OPENROWSET function in named queries in the DSV if you want to access text files.
Unlike with certain providers such as Microsoft Jet above, you are not required to enable any particular options to use
OPENROWSET on a text file. To find out more, see above.

OPENROWSET handles bulk operations using the built-in BULK provider used to read file data and return it in a table.
The BULK provider handles the import of the text file and is therefore very effective.

You can use the vtAnalysisServices account configured in the project's data source as it has adequate rights to execute
BULK commands. The text files however must be visible to the server.

Below is an example of how to import a text file with the tab column separator.

C:\temp\values.txt
1 Data Item 1
2 Data Item 2
3 Data Item 3

You must define a format file for the BULK provider (in standard or XML format). To find out more, see
http://msdn2.microsoft.com/fr-fr/library/ms178129.aspx.

You create a standard format file that describes the two columns as character strings. You can use data types other
than SQLCHAR, but it is simpler to manage everything as SQLCHAR and perform conversions later, as shown in the
example.

C:\temp\values.fmt
9.0
2
1 SQLCHAR 0 10 "\t" 1 ID SQL_Latin1_General_Cp437_BIN
2 SQLCHAR 0 40 "\r\n" 2 Description SQL_Latin1_General_Cp437_BIN

You create a named query in the DSV of the AS project and link it to the DBCUSTOMNEXT data source as shown.

SELECT CAST(a.ID as INT) as ID, a.Description FROM OPENROWSET(
BULK 'c:\temp\values.txt'
, FORMATFILE = 'c:\temp\values.fmt'
) AS a

The CAST instruction in the ID field is used to convert the key to an integer and allow links with other columns of this
type found in BI Architect tables. You can also create a logical primary key in the DSV to improve performance.

38.4.14. Accessing Colombus databases
Direct access to production databases goes against the traditional architecture of a Data Warehouse made up of data
marts. The main disadvantages are:

• Slower production server performance

• Slower BI server performance

• Slower performance for the entire system

This solution therefore requires careful thought prior to implementation and should be used only if no other solution
is possible. In general, you access a production database only to retrieve a low volume of data that is not present in BI,
e.g. certain settings specific to production.

Access to the Colombus databases depends on the storage of these databases. The different types of databases are:

• Progress

• SQL Server

• Oracle

If the Colombus production databases are Oracle or SQL Server, the native database providers should be used in the
Analysis Services DSV. If this is not the case, the production database is Progress and the ODBC driver is used to access
the database. In our example, we will be using Progress version 9.1E.

To access a Progress database, you must first install and configure the ODBC driver shipped with Progress on the
production database server and on the BI server accessing the database as an ODBC client. To find out more about
installing and configuring the ODBC driver, see the relevant Colombus documentation. In SQL Server, you must
select Microsoft OLE DB Provider for ODBC Drivers.

Warning: A SELECT clause issues shared locks by default. To avoid lock conflicts and improve performance, you should
specify the READ UNCOMMITTED isolation level in the ODBC driver configuration. Once you have specified the isolation
level, you must restart the system and Colombus database servers.

You can access a database with this provider in two ways: using a linked server, or via an ad hoc remote query using the
Transact-SQL function, OPENROWSET. To find out more, see above.

The ways in which you can access the Colombus production databases are described below.

38.4.15. Accessing Colombus databases via an AS cube project data source


Generally Microsoft OLE DB Provider for ODBC Drivers is not available in the list of data source providers. As such, you
cannot create a data source that points to the Progress database in Analysis Services.

38.4.16. Accessing Colombus databases via an AS cube project named query


Warning: This method requires you to use CAST systematically on data in Analysis Services even though this is not
mandatory in SQL Server Management Studio.

Below is an example of a named query that returns a list of product references from the Colombus database:

SELECT
Article.[ART-c-Ref]
FROM OPENROWSET('MSDASQL', 'DSN_Colombus_Prod';'ODBC';'ODBC',
'select * from pub.article') AS Article;

or

SELECT
Article.[ART-c-Ref]
FROM OPENROWSET('MSDASQL', 'DSN_Colombus_Prod';'ODBC';'ODBC',
pub.article) AS Article;

• MSDASQL: This corresponds to Microsoft OLE DB Provider for ODBC Drivers.

• DSN_Colombus_Prod: The name of the ODBC connection defined on the client workstation
accessing the Colombus database. To find out more, see the Colombus document on installing the
ODBC driver.

• 'ODBC';'ODBC': This corresponds to the ODBC login and password defined in the ODBC driver
configuration. To find out more, see the Colombus document on installing the ODBC driver.

• PUB: This is the default name of the database area. It may differ depending on your context. The
name is defined in the .st configuration file of the database area.

38.4.17. Accessing Colombus databases via a linked server and an AS cube project named query
You can use the primary data source to access the SQL Server database provided in the AS project in order to define
named queries in the Colombus production databases using multi-server queries. See above for the creation of the
linked server.

Below is an example of how to configure the Progress linked server (colombus.db) using SQL Server Management
Studio.

The ODBC login and password are the ones defined during the installation and configuration of the ODBC driver. To find
out more, see the Colombus document on installing the ODBC driver.

Once the server is declared, you can run multi-server queries. Below is an example of a query that loads product
references from the ITEM table in the Colombus database.

SELECT
article.[ART-c-Ref]
FROM colombus..pub.article AS article

See the previous section on the PUB object.
