Talend Open Studio for Data Quality
User Guide
5.2.1
Copyleft
This documentation is provided under the terms of the Creative Commons Public License (CCPL).
For more information about what you can and cannot do with this documentation in accordance with the CCPL,
please read: http://creativecommons.org/licenses/by-nc-sa/2.0/
Notices
All brands, product names, company names, trademarks and service marks are the properties of their respective
owners.
Table of Contents

Preface
  1. General information
    1.1. Purpose
    1.2. Audience
    1.3. Typographical conventions
  2. Feedback and Support
Chapter 1. Overview
  1.1. Why profiling data
  1.2. About Talend data quality
    1.2.1. What is Talend data quality
    1.2.2. Core features
Chapter 2. Getting started with Talend data quality
  2.1. Working principles of data quality
  2.2. Launching the studio
  2.3. Important features and configuration options
    2.3.1. Defining the maximum memory size threshold
    2.3.2. Setting preferences of analysis editors and analysis results
    2.3.3. Displaying and hiding the help content
    2.3.4. Displaying the error log view and managing log files
    2.3.5. Opening new editors
  2.4. Icons appended on analyses names in the DQ Repository
  2.5. Multi-perspective approach
    2.5.1. Switching between different perspectives
    2.5.2. Saving the configuration of a perspective
Chapter 3. Before you begin profiling data
  3.1. Creating connections to different data sources
    3.1.1. Connecting to a database
    3.1.2. Connecting to a file
    3.1.3. Connecting to an MDM server
  3.2. Managing connections to data sources
    3.2.1. Managing database connections
    3.2.2. Managing MDM connections
    3.2.3. Managing file connections
  3.3. Catalogs and schemas in database systems
Chapter 4. Profiling database content
  4.1. Managing database content analyses
    4.1.1. Creating a database content analysis
    4.1.2. Creating a database content analysis in shortcut procedure
    4.1.3. Creating a catalog analysis
    4.1.4. Creating a schema analysis
  4.2. Displaying a table key and index in the analyzed database
  4.3. Tracking data changes in source databases
    4.3.1. Comparing tree-view metadata structures with database structures
    4.3.2. Synchronizing the connection structure with the database structure
Chapter 5. Column analyses
  5.1. Steps to analyze a column
  5.2. Data mining types
    5.2.1. Nominal
    5.2.2. Interval
    5.2.3. Unstructured text
    5.2.4. Other
  5.3. Analyzing columns in a database
    5.3.1. Defining the columns to be analyzed and setting indicators
    5.3.2. Finalizing the column analysis before execution
    5.3.3. Using the Java or the SQL engine
    5.3.4. Accessing the detailed view of the database column analysis
    5.3.5. Viewing and exporting analyzed data
    5.3.6. Using regular expressions and SQL patterns in a column analysis
    5.3.7. Saving the queries executed on indicators
    5.3.8. Creating table and columns analyses in shortcut procedures
  5.4. Analyzing master data on an MDM server
    5.4.1. Defining the business entities to be analyzed and setting indicators
    5.4.2. Accessing the detailed view of the master data analysis
    5.4.3. Analyzing master data in shortcut procedures
  5.5. Analyzing data in a file
    5.5.1. Analyzing columns in a delimited file
    5.5.2. Analyzing columns in an Excel file
Chapter 6. Table analyses
  6.1. Steps to analyze a table
  6.2. Analyzing tables in databases
    6.2.1. Creating a simple table analysis: the analysis of a set of columns
    6.2.2. Creating a table analysis with SQL business rules
    6.2.3. Detecting anomalies in the table columns: column functional dependency analysis
    6.2.4. Creating a column analysis from a simple table analysis
  6.3. Analyzing tables in delimited files
    6.3.1. Creating a column set analysis on a delimited file using patterns
    6.3.2. Creating a column analysis from the analysis of a set of columns
  6.4. Analyzing tables on MDM servers
    6.4.1. Creating a column set analysis on an MDM server
    6.4.2. Creating a column analysis from the column set analysis
Chapter 7. Redundancy analysis
  7.1. What are redundancy analyses
  7.2. Comparing identical columns in different tables
  7.3. Matching primary and foreign keys
Chapter 8. Correlation analyses
  8.1. What are column correlation analyses
  8.2. Numerical correlation analysis
    8.2.1. Creating numerical correlation analysis
    8.2.2. Accessing the detailed view of the analysis results
  8.3. Time correlation analysis
    8.3.1. Creating time correlation analysis
    8.3.2. Accessing the detailed view of the analysis results
  8.4. Nominal correlation analysis
    8.4.1. Creating nominal correlation analysis
    8.4.2. Accessing the detailed view of the analysis results
Chapter 9. Extended functionality: patterns and indicators
  9.1. Patterns
    9.1.1. Pattern types
    9.1.2. Managing User-Defined Functions in databases
    9.1.3. Adding regular expressions and SQL patterns to column analyses
    9.1.4. Managing regular expressions and SQL patterns
  9.2. Indicators
    9.2.1. Indicator types
    9.2.2. Managing system indicators
    9.2.3. Managing user-defined indicators
    9.2.4. Indicator parameters
Chapter 10. Other important management procedures
  10.1. Creating and storing SQL queries
  10.2. Importing data profiling items or projects
  10.3. Exporting data profiling items
  10.4. Migrating a group of connections
  10.5. Upgrading projects items from older versions
Chapter 11. Managing existing analyses
  11.1. Procedures for all types of analyses
    11.1.1. Opening an analysis
    11.1.2. Executing an analysis
    11.1.3. Duplicating an analysis
    11.1.4. Adding a task to an analysis
    11.1.5. Deleting or restoring an analysis
  11.2. Managing tasks
    11.2.1. Adding a task to a column in a database connection
    11.2.2. Adding a task to an item in a specific analysis
    11.2.3. Adding a task to an indicator in a column analysis
    11.2.4. Displaying the task list
    11.2.5. Filtering the task list
    11.2.6. Deleting a completed task
Appendix A. The studio management GUI
  A.1. Main window
  A.2. Menu bar
  A.3. Toolbar
  A.4. Tree view
  A.5. Detailed View
  A.6. The Profiling perspective of the studio
  A.7. Tab panel of the analysis editors
  A.8. Selecting a task from the studio management GUI
Appendix B. Data Explorer management GUI
  B.1. Main window of the data explorer
  B.2. Menu bar of the data explorer
  B.3. Toolbar of the data explorer
  B.4. Connections view
  B.5. SQL History view
  B.6. SQL editor view
  B.7. Database Structure view
  B.8. Database Detail view
Appendix C. Regular expressions on SQL Server
  C.1. Main concept
  C.2. How to create a regular expression function on SQL Server
    C.2.1. How to create a project in Visual Studio
    C.2.2. How to deploy the regular expression function to the SQL server
    C.2.3. How to set up the studio
  C.3. How to test the created function via the SQL Server editor

Preface
1. General information
1.1. Purpose
This User Guide explains how to manage Talend Open Studio for Data Quality functions in a normal
operational context.
Information presented in this document applies to release 5.2.1 of Talend Open Studio for Data
Quality.
1.2. Audience
This guide is for business users, database administrators and data analysts in charge of checking the
quality of data and collecting statistics and information about that data.
The layout of GUI screens provided in this document may vary slightly from your actual GUI.
1.3. Typographical conventions
This guide uses the following typographical conventions:
text in bold: window and wizard buttons and fields, keyboard keys, menus and menu options.
The note icon indicates an item that provides additional information about an important point. It is
also used to add comments related to a table or a figure.
The warning icon indicates a message that gives information about execution requirements or
recommendations. It is also used to refer to situations or information the end user needs to be
aware of or pay special attention to.
2. Feedback and Support
For feedback and support, you can visit the Talend forum: http://talendforge.org/forum
If data is of a poor quality, or managed in structures that cannot be integrated to meet the needs of the enterprise,
business processes and decision-making suffer.
Compared to manual analysis techniques, data profiling technology improves the enterprise's ability to meet the
challenge of managing data quality and to address the data quality challenges faced during data migration and
data integration projects.
The studio provides the following tools:
data profiler: for more information about the data profiler, see appendix The studio management GUI.
data explorer: for more information about the data explorer, see appendix Data Explorer management GUI.
pattern manager: for more information about the pattern manager, see section Patterns and indicators.
metadata manager: for more information about the metadata manager, see section Metadata repository.
For more information, see section Connecting to a database and chapter Table analyses.
Indicators are the results achieved through the implementation of different patterns. They can represent the results
of data matching and of various other data-related operations. The Profiling perspective of the studio lists two types
of indicators: system indicators, a list of predefined indicators, and user-defined indicators, a list of indicators defined
by the user.
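For illustration, the kind of statistics behind the simplest system indicators can be sketched in plain SQL. In this
hedged example, the CUSTOMER table and EMAIL column are hypothetical names:

SELECT
    COUNT(*)                AS row_count,      -- total number of rows
    COUNT(*) - COUNT(EMAIL) AS null_count,     -- rows where EMAIL is null
    COUNT(DISTINCT EMAIL)   AS distinct_count  -- number of distinct values
FROM CUSTOMER;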
This chapter explains the typical sequence of profiling data using the studio, along with other important
related subjects.
Before starting data profiling management procedures, you need to be familiar with the Graphical User Interface
(GUI). For more information, see appendix The studio management GUI.
A typical sequence of profiling data using the studio involves the following steps:
1. Connecting to a data source, which can be a database, a Master Data Management (MDM) server, or a delimited
or Excel file, in order to be able to access the tables and columns on which you want to define and
execute analyses. For more information, see chapter Before you begin profiling data.
2. Defining any of the available data quality analyses including database content analysis, column analysis, table
analysis, redundancy analysis, correlation analysis, etc. These analyses will carry out data profiling processes
that will define the content, structure and quality of highly complex data structures. The analysis results will
be displayed graphically next to each of the analysis editors, or in more detail in the Analysis Results view.
While you can use all analysis types to profile data in databases, you can only use Column Analysis and Column
Set Analysis to profile data in a delimited or Excel file and to profile master data on MDM servers.
1. Unzip the Talend studio zip file and, in the folder, double-click the executable file corresponding to your
operating system.
The studio zip archive contains binaries for several platforms including Mac OS X and Linux/Unix.
2. In the [License] window that is displayed, read and accept the terms of the license agreement to proceed
to the next step.
3. If required, follow the instructions provided to join the Talend community, or click Register later to open a
welcome window.
From this window, you can have access to the perspectives of other applications integrated within the studio. For
more information, see section Multi-perspective approach.
You can now start to profile your data by creating your own analyses or importing already created ones.
For more information about creating new analyses, see section Working principles of data quality.
For more information about importing analyses and data quality items created in other studios, see section
Importing data profiling items or projects and section Upgrading projects items from older versions.
Why set a memory limit when running such analyses? If you use a column analysis or a column
set analysis to profile very large data sets or data with many problems, you may run out of memory and end up
with a Java heap error. By defining a maximum memory size threshold for these analyses, the studio stops
the analysis execution when that limit is reached and provides you with the analysis results that were
measured on the data analyzed up to that point.
1. On the menu bar, select Window > Preferences to display the [Preferences] window.
3. In the Memory area, select the Enable analysis thread memory control check box.
4. Move the slider to the right to define the memory limit at which the analysis execution will be stopped.
The execution of any column analysis or column set analysis will be stopped if it exceeds the allocated memory
size. The analysis results given in the Studio will cover the data analyzed before the interruption of the analysis
execution.
1. On the menu bar, select Window > Preferences to display the [Preferences] window.
3. In the Folding area, select the check box(es) corresponding to the display mode you want to set for the
different sections in all the editors.
4. In the Analysis results folding area, select the check boxes corresponding to the display mode you want to
set for the statistic results in the Analysis Results view of the analysis editor.
5. In the Graphics area, select the Hide graphics in analysis results page option if you do not want to show
the graphical results of the executed analyses in the analysis editor. This optimizes system performance
when many graphics would otherwise have to be generated.
6. In the Analyzed Items Per Page field, set the number of analyzed items you want to group on each page.
7. In the Business Rules Per Page field, set the number of business rules you want to group on each page.
You can always click the Restore Defaults button in the [Preferences] window to bring back the default values.
8. Click Apply and then OK to validate the changes and close the [Preferences] window.
While carrying out different analyses, all corresponding editors open with the display mode you set in the
[Preferences] window.
You can also have access to a help panel that is attached to all wizards used in the studio to create the different
types of analyses or to set thresholds on indicators.
If you close the cheat sheet panel in the Profiling perspective of the studio, it remains closed whenever you
switch back to this perspective until you open it again manually.
1. Select Help > Cheat Sheets from the menu bar. The [Cheat Sheet Selection] dialog box opens.
You can also press the Alt+H shortcut keys to open the Help menu and then select Cheat Sheets.
3. Select the cheat sheet you want to open in the studio and then click OK to close the dialog box.
The selected cheat sheet opens in the studio main window. Use the local toolbar icons to manage the display
of the cheat sheets.
To display the help panel in any of the wizards used in the Studio, do the following:
1. Select Window > Preferences > Talend > Profiling > Web Browser.
2. Clear the Block browser help check box and then click OK to close the dialog box.
All the wizards in the studio will display with the help panel.
To display the error log view in the Studio, select Window > Show View and then choose the error log view
from the list.
The filter field at the top of the view enables you to do dynamic filtering, i.e. as you type your text in the field, the
list will show only the logs that match the filter.
You can use icons on the view toolbar to carry out different management options including exporting and
importing the error log files.
Each error log in the list is preceded by an icon that indicates the severity of the log: an error icon for errors,
a warning icon for warnings, and an information icon for information messages.
4. Double-click any of the error log files to open the [Event Detail] dialog box.
5. If required, click the copy icon in the [Event Detail] dialog box to copy the event detail to the clipboard and
then paste it anywhere you like.
Prerequisite(s): An analysis editor or an SQL query editor is open in the Profiling perspective of the studio.
1. In the open analysis or SQL editor, right-click the editor title tab.
A new analysis or SQL editor opens on the same analysis metadata and parameters or on the same SQL query.
The new editor will be an exact duplicate of the initial one.
To open an empty new SQL editor from the Data Explorer perspective, do the following:
1. In the Connections view of the Data Explorer perspective, right-click any connection in the list.
To open an empty SQL editor from the Profiling perspective of the studio, see the procedure outlined in section
Creating and storing SQL queries.
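Once an SQL editor is open, any statement supported by the connected database can be typed and run. As a
minimal example, assuming a hypothetical CUSTOMER table on a MySQL connection (LIMIT is MySQL syntax):

SELECT *
FROM CUSTOMER
LIMIT 10;  -- preview the first ten rows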
The number of analyses created in the studio is indicated next to the Analyses folder in the DQ Repository tree
view.
This analysis list gives you an idea about problems in one or more of your analyses before you even open
them.
If an analysis fails to run, a small red-cross icon is appended to it. If an analysis runs correctly but has violated
thresholds, a warning icon is appended to it.
Several other perspectives that extend the studio functionalities are also available within the studio.
3. Click OK.
The current perspective is saved as a new perspective under the new name.
You can open this perspective any time by selecting it from the [Open Perspective] dialog box. For further
information, see section Switching between different perspectives.
This chapter explains how to set up connections to different data sources in order to be able to profile data
in these sources. It also describes how to manage such connections.
Before starting data profiling management procedures, you need to be familiar with the studio Graphical User
Interface (GUI). For more information, see appendix The studio management GUI.
These connections to different databases are reflected by different tree levels and different icons in the DQ
Repository tree view because the logical and physical structure of data differs from one relational database to
another. The highest level structure Catalog followed by Schema and finally by Table is not applicable to
all database types.
1. In the DQ Repository tree view, expand Metadata, right-click DB Connections and select Create DB
Connection.
2. In the Name field, enter a name for this new database connection.
3. If required, set other connection metadata (purpose, description and author name) in the corresponding fields
and click Next to proceed to the next step.
4. In the DB Type field and from the drop-down list, select the type of database to which you want to connect.
For example, MySQL.
If you select to connect to a database that is not supported in the studio (using the ODBC or JDBC methods), it
is recommended to use the Java engine to execute the column analyses created on the selected database. For more
information on column analyses, see section Defining the columns to be analyzed and setting indicators, and for more
information on the Java engine, see section Using the Java or the SQL engine.
5. In the DB Version field, select the version of the database to which you are creating the connection.
6. Enter your login, password, server and port information in their corresponding fields.
7. In the Database field, enter the name of the database you are connecting to. If you need to connect to all of the
catalogs within one connection, and the database allows it, leave this field empty.
A folder for the created database connection is displayed under DB Connection in the DQ Repository tree view.
The connection editor opens with the defined metadata in the studio.
Click Connection information to show the connection parameters for the relevant database.
Click the Check button to check the status of your current connection.
Click the Edit... button to open the connection wizard and modify any of the connection information.
For information on how to set up a connection to a file, see section Connecting to a file. For information on how
to set up a connection to an MDM server, see section Connecting to an MDM server.
In the connection wizard, you must select from the Distribution list the platform that hosts Hive. You must also
set the Hive version and mode. For further information, check http://hadoop.apache.org/.
Please note that one analysis type and a few indicators and functions are still not supported for Hive; see the table
below for more detail:
2. Right-click FileDelimited connections and then select Create File Delimited Connection to open the [New
Delimited File] wizard.
3. Follow the steps defined in the wizard to create a connection to a delimited file. For further information, see
the Talend Open Studio for Data Integration User Guide.
You can then create a column analysis and drop the columns to analyze from the delimited file metadata
in the DQ Repository tree view to the open analysis editor. For further information, see section Analyzing
columns in a delimited file.
For information on how to set up a connection to a database, see section Connecting to a database. For further
information on how to set up a connection to an MDM server, see section Connecting to an MDM server.
1. On the task bar of your desktop, click the Start button and then select Control Panel to open the
corresponding page.
4. In the User DSN view, click Add... to open a dialog box where you can select the ODBC driver, Microsoft
Excel in this example, for the data source (database) to which you want to connect.
5. Click Finish to proceed to the step where you can define the Data Source.
6. In the Data Source Name field, enter a name for the Data Source, and then click the Select Workbook... button
to proceed to the step where you link this Data Source to the Excel file you want to profile.
7. In the open dialog box, browse to the Excel file to which you want to link your Data Source.
To be able to set an ODBC connection to the Data Source without problems, make sure that the Excel files you want
to profile are put in a folder, that is, they are not located in the root directory of your system.
8. Select the Excel file and then click OK to close the open dialog boxes. The Data Source you create is listed
in the User Data Sources list.
You can then create a column analysis and drop the columns to analyze from the Excel file metadata in the DQ
Repository tree view to the open analysis editor. For further information, see section Analyzing columns in an
Excel file.
For information on how to set up a connection to a database, see section Connecting to a database. For further
information on how to set up a connection to an MDM server, see section Connecting to an MDM server.
Prerequisite(s): The MDM server to which you want to connect is up and running.
1. In the DQ Repository tree view, expand Metadata, right-click MDM Connections and then select Create
MDM Connection.
2. In the Name field, enter a name for this new MDM connection.
Spaces are not allowed in the connection name.
3. If required, set a purpose and a description for the connection in the corresponding fields. The Status field is a
customized field that can be defined. For more information, see the Talend Open Studio for Data Integration
User Guide.
1. Enter your login and password to the MDM server in their corresponding fields.
Make sure that the role that has been assigned to you in the MDM Studio gives you enough rights to access the MDM
server via your studio. For further information, see the Talend Open Studio for MDM Administrator Guide.
2. Set the connection parameters to the MDM server in the Server and Port fields.
3. Click the Check button to verify if your connection is successful. A confirmation message is displayed.
4. Click OK to close the message and then Next to proceed to the next step.
5. From the Version list, select the master data Version on the MDM server to which you want to connect.
6. From the Data-Model list, select the data model against which master data is validated.
7. From the Data-Container list, select the data container that holds the master data you want to access.
A folder for the created MDM connection is displayed under the MDM Connections folder under the Metadata
node in the DQ Repository tree view, and the analysis editor opens with the defined metadata.
The display of the connection editor depends on the parameters you set in the [Preferences] window. For more information,
see section Setting preferences of analysis editors and analysis results.
Click Connection information to show the connection parameters for the relevant MDM server.
Click the Check button to check the status of your current connection.
Click the Edit... button to open the connection wizard where you can edit the connection parameters.
For information on how to set up a connection to a database, see section Connecting to a database. For further
information on how to set up a connection to a file, see section Connecting to a file.
Prerequisite(s): A database connection is created in the Profiling perspective of the studio. For further
information, see section Connecting to a database.
2. Right-click the database connection and select Open in the contextual menu.
4. Click the Edit... button in the Connection information view to open the [Database Connection] wizard.
5. Go through the steps in the wizard and modify the database connection settings as required.
A dialog box opens prompting you to reload the updated database connection.
7. Select the reload option if you want to reload the new database structure for the updated database connection.
If you select the don't reload option, you will still be able to execute the analyses using the connection even after
you update it.
If the database connection is used by profiling analyses in the Studio, another dialog box is displayed listing
all the analyses that use the database connection. It alerts you that if you reload the new database structure,
all the analyses using the connection will become unusable, although they will still be listed in the DQ
Repository tree view.
8. Click OK to accept reloading the database structure or Cancel to cancel the operation and close the dialog
box.
9. Click OK to close the messages and reload the structure of the new connection.
You can filter your database connections to list only the databases that match the filter you set. This option is very
helpful when the number of databases in a specific connection is very large.
Prerequisite(s): A database connection is already created in the Profiling perspective of the studio. For further
information, see section Connecting to a database.
2. Right-click the database connection you want to filter and select Package Filter to open the corresponding
dialog box.
3. In the Package Filter field, enter the complete name of the database you want to view and then click Finish
to close the dialog box.
Only the database that matches the filter you set is listed under the database connection in the DQ Repository
tree view.
1. In the [Package Filter] dialog box, delete the text from the Package Filter field.
All databases are listed under the selected database connection in the DQ Repository tree view.
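If you want to verify outside the studio which database name a filter value corresponds to, and assuming a MySQL
connection, you can query the information schema directly; the database name below is hypothetical:

SELECT SCHEMA_NAME
FROM information_schema.SCHEMATA
WHERE SCHEMA_NAME = 'my_database';  -- the complete name entered in the Package Filter field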
Prerequisite(s): A database connection is created in the Profiling perspective of the studio. For further
information, see section Connecting to a database.
2. Right-click the connection you want to duplicate and select Duplicate from the contextual menu.
The duplicated database connection shows under the connection list in the DQ Repository tree view as a copy of
the original connection. You can now open the duplicated connection and modify its metadata as needed.
Prerequisite(s): A database connection is created in the Profiling perspective of the studio. For further
information, see section Connecting to a database.
2. Right-click the connection to which you want to add a task, and then select Add task... from the contextual
menu.
The [Properties] dialog box opens showing the metadata of the selected connection.
3. In the Description field, enter a short description for the task you want to attach to the selected connection.
4. On the Priority list, select the priority level and then click OK to close the dialog box.
You can follow the same steps in the above procedure to add a task to a catalog, a table or a column in the connection. For
further information, see section Adding a task to a column in a database connection.
For more information on how to access the task list, see section Displaying the task list.
This option is very helpful when the number of tables in the database to which the studio is connecting is very
large. If so, a message is displayed prompting you to set a table filter on the database connection in order to list only
the defined tables in the DQ Repository tree view.
Prerequisite(s): A database connection is already created in the Profiling perspective of the studio. For further
information, see section Connecting to a database.
2. Expand the database connection in which you want to filter tables/views and right-click the desired catalog/
schema.
3. Select Table/View Filter from the list to display the corresponding dialog box.
4. Set a table and a view filter in the corresponding fields and click Finish to close the dialog box.
Only tables/views that match the filter you set are listed in the DQ Repository tree view.
Prerequisite(s): A database connection is created in the studio. For further information, see section Connecting
to a database.
2. Right-click the database connection you want to delete and select Delete in the contextual menu.
You will always be able to run any analysis that uses the connection moved to the recycle bin. However, an alert
message will be displayed next to the connection name in the analysis editor.
1. Right-click the database connection in the Recycle Bin and choose Delete from the contextual menu.
If the connection is not used by any analysis in the current Studio, a [Delete forever] dialog box is displayed.
2. Click Yes to confirm the operation and close the dialog box.
If the connection is used by one or more analyses in the current Studio, a dialog box is displayed to list all
the analyses that use this database connection.
3. Either:
Click OK to close the dialog box without deleting the database connection from the recycle bin.
Or, select the Force to delete all the dependencies check box and then click OK to delete the database
connection from the Recycle Bin and to delete all the dependent analyses from the Data Profiling node.
You can also delete permanently the database connection by emptying the recycle bin. To empty the Recycle Bin,
do the following:
If the connection is not used by any analysis in the current Studio, a confirmation dialog box is displayed.
If the connection is used by one or more analyses in the current Studio, a dialog box is displayed to list all
the analyses that use this database connection.
3. Click OK to close the dialog box without removing the connection from the recycle bin.
2. Right-click the MDM connection and select Open from the contextual menu.
4. Click the Edit... button in the Connection information view to open the connection wizard again.
5. Go through the steps in the wizard and modify the MDM connection information as required, and then click
Finish to validate the modifications and close the wizard.
Prerequisite(s): At least one MDM connection is created in the Profiling perspective of the studio. For further
information, see section Connecting to an MDM server.
2. Right-click the connection you want to duplicate and select Duplicate... from the contextual menu.
The duplicated MDM connection shows under the connection list in the DQ Repository tree view as a copy of
the original connection. You can now open the duplicated connection and modify its metadata as needed.
Prerequisite(s): An MDM connection is created in the Profiling perspective of the studio. For further information,
see section Connecting to an MDM server.
2. Right-click the connection to which you want to add a task, and then select Add task... from the contextual
menu.
The [Properties] dialog box opens showing the metadata of the selected connection.
3. In the Description field, enter a short description for the task you want to attach to the selected connection.
4. On the Priority list, select the priority level and then click OK to close the dialog box. The created task is
added to the Tasks list.
You can follow the same steps in the above procedure to add a task to an entity or column in the connection.
For more information on how to access the task list, see section Displaying the task list.
2. Right-click the MDM connection you want to delete and select Delete from the contextual menu.
You will always be able to run any analysis that uses the connection moved to the recycle bin. However, an alert
message will be displayed next to the connection name in the analysis editor.
1. Right-click it in the Recycle Bin and choose Delete from the contextual menu.
If the connection is not used by any analysis in the current Studio, a [Delete forever] dialog box is displayed.
If the connection is used by one or more analyses in the current Studio, a dialog box is displayed to list all
the analyses that use this MDM connection.
3. Either:
Click OK to close the dialog box without deleting the MDM connection from the recycle bin.
Select the Force to delete all the dependencies check box and then click OK to delete the connection
from the Recycle Bin and to delete all the dependent analyses from the Data Profiling node.
You can also delete permanently the MDM connection by emptying the recycle bin. To empty the Recycle Bin,
do the following:
If the connection is not used by any analysis in the current Studio, a confirmation dialog box is displayed.
If the connection is used by one or more analyses in the current Studio, a dialog box is displayed to list all
the analyses that use the MDM connection.
3. Click OK to close the dialog box without removing the connection from the recycle bin.
The procedures to manage file connections are the same as those for managing MDM connections. For further
information, see section Managing MDM connections.
The structure of a database in terms of catalogs and schemas differs from one system to another: for example,
MySQL exposes only catalogs, Oracle and DB2 expose only schemas, while Microsoft SQL Server uses both
catalogs and schemas.
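As a rough illustration, the fully qualified name of a table reflects which of these levels the database supports;
all object names below are hypothetical:

SELECT * FROM my_catalog.my_schema.my_table;  -- catalog and schema (e.g. SQL Server)
SELECT * FROM my_schema.my_table;             -- schema only (e.g. Oracle or DB2)
SELECT * FROM my_database.my_table;           -- catalog only (e.g. MySQL)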
Before starting data profiling management procedures, you need to be familiar with the studio Graphical User
Interface (GUI). For more information, see appendix The studio management GUI.
You can also analyze one specific catalog or schema in a database, if this entity is used in the physical structure
of the database.
Prerequisite(s): At least, one database connection is set in the Profiling perspective of the studio. For further
information, see section Connecting to a database.
To create a database content analysis, you must first define the relevant analysis and then select the database
connection you want to analyze.
3. Expand the Connection Analysis node, click Database Structure Overview and then click the Next button.
5. Set the analysis metadata (purpose, description and author name) in the corresponding fields and click Next.
1. Expand DB Connections and select a database connection to analyze, if more than one exists.
3. Set filters on tables and/or views in their corresponding fields according to your needs using the SQL
language.
By default, the analysis will include all tables and views in the database.
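The exact filter syntax depends on the database. As a hedged illustration for a MySQL connection, the following
standalone query previews which tables a LIKE filter such as CUST% would retain; the database name and naming
convention are hypothetical:

SELECT TABLE_NAME
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'my_database'  -- hypothetical catalog name
  AND TABLE_NAME LIKE 'CUST%';      -- keep only tables whose names start with CUST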
A folder for the newly created analysis is listed under the Analyses folder in the DQ Repository tree view,
and the connection editor opens with the defined metadata.
The display of the connection editor depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
In the Number of connections per analysis field, set the number of concurrent connections allowed per
analysis to the selected database connection.
You can set this number according to the database's available resources, that is, the number of concurrent
connections each database can support.
Select the Reload databases check box if you want to reload all databases in your connection on the server
when you run the overview analysis.
When you try to reload a database, a message will prompt you for confirmation as any change in the
database structure may affect existing analyses.
6. Click Analysis Summary to show all the parameters of the current analysis along with the current analysis
execution status.
7. Click the save icon on top of the editor and then press F6 to execute the current analysis. A message opens
to confirm that the operation is in progress.
8. Click Statistical information to show analytical information about the content of the relevant database.
9. Click a catalog or a schema in the Statistical information view to list all tables included in the selected
catalog or schema along with a summary of their content: number of rows, keys and indexes.
The selected catalog or schema is highlighted in blue. Catalogs or schemas highlighted in red indicate potential
problems in data.
10. Click any column header in the analytical table to sort the data listed in catalogs or schemas alphabetically.
You can sort any other column in the result table in the same way.
You can create catalog, schema or table analysis directly from the open connection analysis if you right-click the desired
catalog, schema or table and select Overview analysis or Table analysis.
Prerequisite(s): At least, one database connection is set in the Profiling perspective of the studio. For further
information, see section Connecting to a database.
1. Right-click the database for which you want to create a content analysis.
This way, you do not have to specify in the new analysis wizard either the type of analysis you want to carry out
or the DB connection to analyze. Otherwise, all other procedural steps are exactly the same as in section Creating
a database content analysis.
Prerequisite(s): At least one database connection has been created to connect to a database that uses the catalog
entity.
3. Expand the Catalog Analysis node and then click Catalog Structure Overview.
You can directly go to this step in the analysis creation wizard if you right-click the catalog to analyze in Metadata>DB
Connections and select Overview analysis.
6. Set the analysis metadata (purpose, description and author name) in the corresponding fields and click Next.
2. Click Next.
3. Set filters on tables and/or views in their corresponding fields according to your needs using the SQL
language.
By default, the analysis will include all tables and views in the catalog.
A folder for the newly created analysis is listed under Analysis in the DQ Repository tree view, and the
analysis editor opens with the defined metadata.
The display of the analysis editor depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
In the Number of connections per analysis field, set the number of concurrent connections allowed per
analysis to the selected database connection.
You can set this number according to the database's available resources, that is, the number of concurrent
connections each database can support.
6. Click the save icon on top of the editor and then press F6 to execute the current analysis.
7. Click Statistical information to show analytical information about the content of the relevant catalog.
8. If required, click the catalog in the analytical table to open a result list that details all tables included in the
selected catalog with a summary of their content.
The selected catalog is highlighted in blue. Catalogs highlighted in red indicate potential problems in data.
9. If required, click any column header in the result table to sort the listed data alphabetically.
Prerequisite(s): At least one database connection has been created to connect to a database that uses the schema
entity, for example the DB2 database. For further information, see section Connecting to a database.
3. Expand the Schema Analysis node and then click Schema Structure Overview.
You can directly get to this step in the analysis creation wizard if you right-click the schema to analyze in Metadata
>DB connections and select Overview analysis.
6. If required, set the analysis metadata (purpose, description and author name) in the corresponding fields and
click Next to proceed to the next step.
2. Click Next.
3. Set filters on tables and/or views in their corresponding fields according to your needs using the SQL
language.
By default, the analysis will include all tables and views in the schema.
A folder for the newly created analysis is listed under Analysis in the DQ Repository tree view, and the
analysis editor opens with the defined metadata.
The display of the analysis editor depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
In the Number of connections per analysis field, set the number of concurrent connections allowed per
analysis to the selected database connection.
You can set this number according to the database's available resources, that is, the number of concurrent
connections each database can support.
6. Click the save icon on top of the editor and then press F6 to execute the current analysis.
7. Click Statistical information to show analytical information about the content of the relevant schema.
8. Click the schema in the analytical table to open a result list that details all tables included in the selected
schema with a summary of their content.
The selected schema is highlighted in blue. Schemas highlighted in red indicate potential problems in data.
9. Click any column header in the result table to sort the listed data alphabetically.
Prerequisite(s): At least one database content analysis has been created and executed in the studio.
To display the details of the key and index of a given table in the analyzed database, do the following:
1. In the Statistical information view, click a catalog or a schema. All the tables included in the selected catalog
or schema are listed along with a summary of their content: number of rows, keys and indexes.
2. In the table list, right-click the table key and select View keys.
The Database Structure and the Database Detail views display the structure of the analyzed database and
information about the primary key of the selected table.
If one or both views do not show, select Window > Show View > Database Structure or Window > Show View
> Database Detail.
3. In the table list, right-click the table index and select View indexes.
The Database Structure and the Database Detail views display the structure of the analyzed database and
information about the index of the selected table.
4. If required, click any of the tabs in the Database Detail view to display the relevant metadata about the
selected table.
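To cross-check these results outside the studio, and assuming a MySQL connection, the primary key and indexes
of a table can also be read from the information schema; the database and table names below are hypothetical:

SELECT INDEX_NAME, SEQ_IN_INDEX, COLUMN_NAME, NON_UNIQUE
FROM information_schema.STATISTICS
WHERE TABLE_SCHEMA = 'my_database'
  AND TABLE_NAME = 'CUSTOMER'
ORDER BY INDEX_NAME, SEQ_IN_INDEX;  -- the index named PRIMARY is the primary key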
From the studio, you can compare the connection structure displayed in the DQ Repository tree view with the
database structure itself to locate possible differences. Then you can synchronize the connection structure in the
tree view with the actual database structure.
Comparing and synchronizing a database connection with a database structure may take a long time. Do not do it unless
you are sure that inconsistencies exist.
The studio takes a connection structure in the DQ Repository tree view and compares it with the actual database,
locating all structure differences and displaying them in the Compare view.
You can then, if necessary, synchronize the connection structure in the tree view with the database structure. For
more information, see section Synchronizing the connection structure with the database structure.
You can perform the structure comparison at three different levels: the whole database connection, the table lists,
or the column lists.
2. Right-click the DB connection for which you want to compare the metadata structure with the database
structure and select Database Compare.
3. If required, click the Cancel button on the message to stop the operation.
A compare view opens displaying the differences between your connection structure and the actual database
structure.
The colors in the Compare view indicate the following:
green: a deleted item.
blue: an updated item.
red: an added item.
If you select an item in the top half of the view, the color markers in the bottom half of the view become
thicker to highlight the selected item. If you select any database from the Distant Structure list in the bottom
half of the view, the corresponding description will be highlighted in the top half of the view.
4. If required, right-click a specific catalog in this view to display a contextual menu where you can select
Compare the list of tables or Compare the list of views. This will display respectively the table list or
the view list of the selected catalog. For further information about comparing table lists, see section How
to compare table lists
If you select a specific catalog in this list and press the T or V keys on your keyboard, you can display respectively the
table or view lists of the selected catalog.
2. Browse through the entities in your database connection to reach the Table folder you want to compare with
that of the database.
The Compare view opens displaying any differences between the table lists in the tree view and the actual
database.
The colors in the Compare view indicate the following:
green: a deleted item.
blue: an updated item.
red: an added item.
If you select an item in the top half of the view, the color markers in the bottom half of the view become
thicker to highlight the selected item. If you select any database from the Distant Structure list in the bottom
half of the view, the corresponding description will be highlighted in the top half of the view.
4. If required, right-click a specific table in the Compare view to display a contextual menu. Select Compare
the list of columns to display the columns list of the selected table. For further information, see section How
to compare column lists
If you select a specific table in the Compare list and press the C key on your keyboard, you can display the column list
of the selected table.
2. Browse through the entities in your database connection to reach the Columns folder you want to compare
with that of the database.
The Compare view opens displaying any differences between the column list in the tree view and the
database.
The colors in the Compare view indicate the following:
green: a deleted item.
blue: an updated item.
red: an added item.
If you select an item in the top half of the view, the color markers in the bottom half of the view become
thicker to highlight the selected item. If you select any database from the Distant Structure list in the bottom
half of the view, the corresponding description will be highlighted in the top half of the view.
2. Right-click the DB connection you want to synchronize with the database and select Reload database list.
A message will prompt you for confirmation as any change in the database structure may affect the analyses
listed in the Studio.
The selected database connection is updated with the new catalogs and schemas, if any.
2. Browse through the entities in your database connection to reach the Table folder you want to synchronize
with the database.
A message will prompt you for confirmation as any change in the database structure may affect existing
analyses.
The selected table list is updated with the new tables in the database, if any.
2. Browse through the entities in your database connection to reach the Columns folder you want to synchronize
with the database.
A message will prompt you for confirmation as any change in the database structure may affect existing
analyses.
The selected column list is updated with the new columns in the database, if any.
Before starting data profiling management procedures, you need to be familiar with the studio Graphical User
Interface (GUI). For more information, see appendix The studio management GUI.
You can also profile master data available on a Master Data Management (MDM) server. For further information
about master data and master data management, see the Talend Open Studio for MDM Administrator Guide.
The sequence of profiling data in one or multiple columns involves the following steps:
1. Connecting to the data source being a database, a file or an MDM server. For further information, see chapter
Before you begin profiling data.
2. Defining one or more columns on which to carry out data profiling processes that will define the content,
structure and quality of the data included in the column(s).
3. Setting predefined system indicators or indicators defined by the user on the column(s) that need to be
analyzed or monitored. These indicators represent the results achieved through the implementation of
different patterns.
4. Adding to the column analyses the patterns against which you can define the content, structure and quality
of the data.
For further information, see section How to add a regular expression or an SQL pattern to a column analysis.
The section Analyzing columns in a database explains in detail the procedures to analyze the content of one or
multiple columns in a database.
The section Analyzing master data on an MDM server explains in detail the procedures to analyze master data
on an MDM server.
The section Analyzing data in a file explains in detail the procedures to analyze columns in delimited or Excel files.
These data mining types help the studio to choose the appropriate metrics for the associated column since not all
indicators (or metrics) can be computed on all data types.
Available data mining types are: Nominal, Interval, Unstructured Text and Other. The sections below describe
these data mining types.
5.2.1. Nominal
Nominal data is categorical data whose values/observations can be assigned a code in the form of a number, where
the numbers are simply labels. You can count nominal data, but you cannot order or measure it.
In the studio, the mining type of textual data is set to nominal. For example, a column called WEATHER with the
values: sun, cloud and rain is nominal.
A column called POSTAL_CODE that has the values 52200 and 75014 is nominal as well, in spite of
the numerical values. Such data is of nominal type because it identifies a postal code in France; computing
mathematical quantities such as an average on such data makes no sense. In such a case, you should set the data
mining type of the column to Nominal, because there is currently no way for the studio to automatically guess the
correct type of data.
The same is true for primary or foreign-key data. Keys are most of the time represented by numerical data, but
their data mining type is Nominal.
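To make this concrete, here is a small SQL sketch; the customer table, the postal_code column and the MySQL-style cast are hypothetical and used only for illustration:

  -- The query runs once the text is cast to a number, but the result
  -- is not a postal code and carries no meaning:
  SELECT AVG(CAST(postal_code AS UNSIGNED)) AS avg_code
  FROM customer;
  -- (52200 + 75014) / 2 = 63607, which identifies no place at all.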
5.2.2. Interval
This data mining type is used for numerical data and time data. Averages can be computed on this kind of data.
In databases, sometimes numerical quantities are stored in textual fields.
In the studio, it is possible to declare the data mining type of a textual column (e.g. a column of type VARCHAR)
as Interval. In that case, the data should be treated as numerical data and summary statistics should be available.
5.2.3. Unstructured text
This data mining type is dedicated to unstructured textual data. For example, the data mining type of a column
called COMMENT that contains commentary text cannot be Nominal, since the text in it is unstructured. Still, you
could be interested in seeing the duplicate values of such a column, hence the need for this new data mining type.
5.2.4. Other
This is another new data mining type introduced in the studio. It designates the data that the studio does not
yet know how to handle.
When you use the Java engine to run a column analysis, you can view the analyzed data according to parameters
you set yourself. For more information, see section Using the Java or the SQL engine.
When you use the Java engine to run a column analysis on big sets or on data with many problems, it is advisable to define
a maximum memory size threshold to execute the analysis as you may end up with a Java heap error. For more information,
see section Defining the maximum memory size threshold.
You can also analyze a set of columns. This type of analysis provides statistics on the values across all the data
set (full records). For more information, see section Analyzing tables in databases.
For more information, see section How to define the columns to be analyzed.
2. Setting predefined system indicators or indicators defined by the user for the column(s).
For more information, see section How to set indicators for the column(s) to be analyzed. For more
information on indicator types and indicator management, see section Indicators.
3. Adding the patterns against which to define the content, structure and quality of the data.
For more information, see section Using regular expressions and SQL patterns in a column analysis. For
more information on pattern types and management, see section Patterns.
The following sections provide a detailed description on each of the preceding steps.
Prerequisite(s): At least one database connection is set in the Profiling perspective in the studio. For further
information, see section Connecting to a database.
3. Expand the Column Analysis node and then click Column Analysis.
5. In the Name field, enter a name for the current column analysis.
Spaces are not accepted when typing the analysis name in this field.
6. Set column analysis metadata (purpose, description and author name) in the corresponding fields and click
Next to proceed to the next step.
For the DB2 database, if double quotation marks exist in the column names of a table, those columns cannot be
retrieved correctly. It is therefore recommended not to use double quotation marks in column names in a
DB2 database table.
A file for the newly created column analysis is listed under the Analysis node in the DQ Repository tree
view, and the analysis editor opens with the defined analysis metadata.
The display of the analysis editor depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
You can drag the columns to be analyzed directly from the DQ Repository tree view to the Analyzed Columns list
in the analysis editor.
3. Click the Select columns to analyze link to open a dialog box and select the columns you want to analyze.
You can filter the table or column lists by typing letters or complete words in the Table filter or Column
filter fields respectively. The lists will show only the tables/columns that correspond to the text you type in.
If one of the columns you want to analyze is a primary or a foreign key, its data mining type will automatically become
Nominal when you list it in the Analyzed Columns view. For more information on data mining types, see section
Data mining types.
4. If required, change your database connection by selecting another connection from the Connection box. This
box lists all the connections created in the Studio with the corresponding database names.
If the columns listed in the Analyzed Columns view do not exist in the new database connection you want
to set, you will receive a warning message that enables you to continue or cancel the operation.
If you select to connect to a database that is not supported in the studio (using the ODBC or JDBC methods), it
is recommended to use the Java engine to execute the column analyses created on the selected database. For more
information on the Java engine, see section Using the Java or the SQL engine.
You can right-click any of the listed columns in the Analyzed Columns view and select Show in DQ
Repository view to locate it in the database connection in the DQ Repository tree view.
Prerequisite(s): A column analysis is open in the analysis editor in the Profiling perspective of the studio. For
more information, see section How to define the columns to be analyzed.
1. In the analysis editor, click Analyzed Columns to open the analyzed columns view.
If you right-click any of the listed columns in the Analyzed Columns view and select Show in DQ Repository view,
the selected column will be automatically located under the corresponding connection in the tree view.
2. Click Select indicators for each column to open the [Indicator Selection] dialog box.
In this dialog box, you can change column positions by dropping them with the cursor.
3. If you are analyzing a very large number of columns, place the cursor in the top/bottom right corner of the
[Indicator Selection] dialog box to access the columns to the very right.
Similarly, place the cursor in the top/bottom left corner of the [Indicator Selection] dialog box to access
the columns to the very left.
4. Click in the cells to set indicator parameters for the analyzed column(s) as needed and then click OK.
Indicators are accordingly attached to the analyzed columns in the Analyzed Columns view.
If you attach the Data Pattern Frequency Table to a date column in your analysis, you can generate a date regular
expression from the analysis results. For more information, see section How to generate a regular expression from
the Date Pattern Frequency Table.
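For illustration only, below is a sketch of the kind of expression such a generation could produce; the actual expression depends on the date formats found in the analyzed column, and the table and column names are hypothetical:

  -- Count the rows whose dates match the pattern yyyy-MM-dd, e.g. 2012-11-05
  -- (MySQL-style REGEXP syntax):
  SELECT COUNT(*)
  FROM customer
  WHERE birth_date REGEXP '^[0-9]{4}-[0-9]{2}-[0-9]{2}$';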
Prerequisite(s): A column analysis is open in the analysis editor in the Profiling perspective of the studio. For
more information, see section How to define the columns to be analyzed.
For more information about setting indicators, see section How to set system indicators.
1. In the analysis editor, click Analyzed Columns to open the analyzed columns view.
2. Click the option icon next to the defined indicator to open the dialog box where you can set options for
the given indicator.
For example, if you want to flag if there are null values in the column you analyze, you can set 0 in the Upper
threshold field for the Null Count indicator.
Indicators settings dialog boxes differ according to the parameters specific for each indicator. For more information
about different indicator parameters, see section Indicator parameters.
Prerequisite(s):
A column analysis is open in the analysis editor in the Profiling perspective of the studio. For more information,
see section How to define the columns to be analyzed.
A user-defined indicator is created in the Profiling perspective of the studio. For more information, see section
How to create SQL user-defined indicators.
1. In the analysis editor, click Analyzed Columns to open the analyzed columns view.
2. Either:
1. In the Analyzed Columns view, click the icon next to the column name for which you want to
define a user-defined indicator.
2. Select the user-defined indicators you want to use on the column and then click OK to close the dialog
box.
Or:
2. From the User Defined Indicator folder, drop the user-defined indicator(s) you want to use to analyze
the column content onto the column name(s) in the Analyzed Columns view.
Prerequisite(s):
The column analysis is open in the analysis editor in the Profiling perspective of the studio. For more
information, see section How to define the columns to be analyzed.
You have set system or predefined indicators for the column analysis. For more information, see section How
to set indicators for the column(s) to be analyzed.
To finalize the column analysis defined in the above sections, do the following:
1. In the analysis editor, click Data Filter to open the corresponding view and filter data through SQL
WHERE clauses, if required. For example, a clause such as age > 18 would restrict the analysis to customers older than 18.
In the Number of connections per analysis field, set the number of concurrent connections allowed per
analysis to the selected database connection.
You can set this number according to the database available resources, that is the number of concurrent
connections each database can support.
From the Execution engine list, select the engine, Java or SQL, you want to use to execute the analysis.
If you select the Java engine and then select the Allow drill down check box in the Analysis parameters
view, you can store locally the analyzed data and thus access it in the Analysis Results > Data view. You
can use the Max number of rows kept per indicator field to decide the number of the data rows you want
to make accessible. For more information on viewing analyzed data, see section Using the Java or the SQL
engine.
When you select the Java engine, the system looks for Java regular expressions first; if none is found,
it looks for SQL regular expressions. For more information on the Java and the SQL engines, see section
Using the Java or the SQL engine.
If you select to connect to a database that is not supported in the studio (using the ODBC or JDBC methods), it
is recommended to use the Java engine to execute the column analyses created on the selected database. For more
information on the Java engine, see section Using the Java or the SQL engine.
3. Click the save icon on the toolbar of the analysis editor and then press F6 to execute the column analysis.
A group of graphics is displayed in the Graphics panel to the right of the analysis editor, each corresponding to
the group of the indicators set for each analyzed column.
Below are the graphics representing the Frequency Statistics and Simple Statistics for the email column analyzed
in the above procedure.
Below are the graphics representing the order of magnitude and the Benford's law frequency statistics for the
total_sales column analyzed in the above procedure.
For further information about the Benford's law frequency statistics usually used as an indicator of accounting and
expenses fraud in lists or tables, see section Benford's law frequency indicator.
For information on how to access a detailed view of the results of the analysis, see section How to access the
detailed view of the analysis results.
If you execute this analysis using the SQL engine, you can view the executed query for each of the attached
indicators if you right-click an indicator and then select the View executed query option from the list. However,
when you use the Java engine, SQL queries will not be accessible and thus clicking this option will open a warning
message.
For more information on the Java and the SQL engines, see section Using the Java or the SQL engine.
When you use the SQL engine, an SQL query is generated for each indicator used in the column analysis. Using
this engine guarantees better system performance. You can also access valid/invalid data in the data explorer;
for more information, see section Viewing and exporting analyzed data.
When you use the Java engine, only one query is generated for all indicators used in the column analysis, and
you can set parameters to decide whether to access the analyzed data and how many data rows to show per
indicator. This helps to avoid memory limitation issues, since it is impossible to store all analyzed data.
With the Java engine you do not need different query templates specific to each database. However, system
performance is significantly reduced in comparison with the SQL engine.
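To give an idea of what the SQL engine generates, below is a sketch of the kind of query it could produce for three common indicators; the customer table and email column are hypothetical, and the actual query templates shipped with the studio are database-specific:

  SELECT COUNT(*) FROM customer;                      -- Row Count
  SELECT COUNT(*) FROM customer WHERE email IS NULL;  -- Null Count
  SELECT COUNT(DISTINCT email) FROM customer;         -- Distinct Count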
To set the parameters to access analyzed data when using the Java engine, do the following:
1. In the Analysis Parameter view of the column analysis editor, select Java from the Execution engine list.
2. Select the Allow drill down check box to store locally the data that will be analyzed by the current analysis.
3. In the Max number of rows kept per indicator field enter the number of the data rows you want to make
accessible.
You can now run your analysis and then have access to the analyzed data according to the set parameters. For
more information, see section Viewing and exporting analyzed data.
To access a more detailed view of the analysis results of the procedures outlined in section Defining the columns to
be analyzed and setting indicators and section Finalizing the column analysis before execution, do the following:
1. Click the Analysis Results tab at the bottom of the analysis editor to open the corresponding view.
2. Click the Analysis Result tab in the view and then the name of the analyzed column for which you want
to open the detailed results.
The display of the Analysis Results view depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
The detailed analysis results view shows the generated graphics for the analyzed columns accompanied with
tables that detail the statistic results.
Below are the tables that accompany the Frequency and Simple Statistics graphics in the Analysis Results
view for the analyzed email column.
In the Simple Statistics table, if an indicator value is displayed in red, this means that a threshold has been set
on the indicator in the column analysis editor and that this threshold has been violated. For further information
about data thresholds, see section How to set options for system indicators.
Below are the tables and the graphics representing the order of magnitude and the Benford's law frequency
statistics in the Analysis Results view for the analyzed total_sales column.
For further information about the Benford's law frequency statistics usually used as an indicator of accounting
and expenses fraud in lists or tables, see section Benford's law frequency indicator.
3. Right-click any data row in the result tables and select View rows to access a view of the analyzed data.
For more information, see section Viewing and exporting analyzed data.
After running your analysis using the Java engine, you can use the analysis results to access a view of the actual
data.
After running your analysis using the SQL engine, you can use the analysis results to open the Data Explorer
perspective and access a view of the actual data.
1. At the bottom of the analysis editor, click the Analysis Results tab to open a detailed view of the analysis
results.
2. Right-click a data row in the statistic results of the analyzed columns and select an option as the following:
Option Operation
View rows open a view on a list of all data rows in the analyzed column.
For the Duplicate Count indicator, the View rows option will list all the rows
that are duplicated. So if the duplicate count is 12 for example, this option will
list 24 rows.
View values open a view on a list of the actual data values of the analyzed column.
Options other than the above listed ones are available when using regular expressions and SQL patterns in a column analysis.
For further information, see section Using regular expressions and SQL patterns in a column analysis and section How to
view the data analyzed against patterns.
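As an illustration of why the View rows option returns twice the duplicate count, below is a sketch of a query that lists duplicated rows; the customer table and email column are hypothetical, and the query actually generated by the studio may differ:

  -- Every row whose value occurs more than once is returned, so a
  -- duplicate count of 12 (12 values occurring twice each) lists 24 rows.
  SELECT c.*
  FROM customer c
  JOIN (SELECT email
        FROM customer
        GROUP BY email
        HAVING COUNT(*) > 1) d ON c.email = d.email;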
When using the SQL engine, the view opens in the Data Explorer perspective listing the rows or the values of
the analyzed data according to the limits set in the data explorer.
This explorer view also gives some basic information about the analysis itself. Such information is of great
help when working with multiple analyses at the same time.
The data explorer does not support connections that have an empty user name, such as Single sign-on of MS SQL Server.
If you analyze data using such a connection and try to view data rows and values in the Data Explorer perspective, a
warning message prompts you to set your connection credentials to the SQL Server.
When using the Java engine, the view opens in the studio listing the number of the analyzed data rows you set
in the Analysis parameters view of the analysis editor. For more information, see section Using the Java or the
SQL engine.
From this view, you can export the analyzed data into a csv file. To do that:
1. Click the icon in the upper left corner of the view.
2. Click the Choose... button and browse to where you want to store the csv file and give it a name.
A csv file is created in the specified place holding all the analyzed data rows listed in the view.
For more information on regular expressions and SQL patterns, see section Patterns and indicators and chapter
Table analyses.
If the database you are using does not support regular expressions or if the query template is not defined in the studio, you
first need to declare the user-defined function and define the query template before you can add any of the specified
patterns to the column analysis. For more information, see section Managing User-Defined Functions in databases.
1. Follow the steps outlined in section How to define the columns to be analyzed to create a column analysis.
2. In the open analysis editor, click Analyzed Columns to open the analyzed columns view.
If you right-click any of the listed columns in the Analyzed Columns view and select Show in DQ Repository view,
the selected column will be automatically located under the corresponding connection in the tree view.
3. Click the icon next to the column name to which you want to add a regular expression or an SQL pattern,
the email column in this example.
4. Expand Patterns and browse to the regular expression or/and the SQL patterns you want to add to the column
analysis.
5. Select the check box(es) of the expression(s) or pattern(s) you want to add to the selected column.
The added regular expression(s) or SQL pattern(s) are displayed under the analyzed column in the Analyzed
Column list.
You can add a regular expression or an SQL pattern to a column simply by a drag and drop operation from the DQ
Repository tree view onto the analyzed column.
7. Click the save icon on the toolbar of the analysis editor and then press F6 to execute the column analysis.
A group of graphics is displayed in the Graphics panel to the right of the analysis editor. These graphics
show the results of the column analysis including those for pattern matching.
2. Right-click the pattern you want to edit and select Edit pattern from the contextual menu.
3. In the pattern editor, click Pattern Definition to edit the pattern definition, or change the selected database,
or add other patterns specific to available databases using the [+] button.
If the regular expression is simple enough to be used in all databases, select Default in the list.
When you edit a pattern through the analysis editor, you modify the pattern listed in the DQ Repository tree view. Make
sure that your modifications are suitable for all other analyses that may be using the pattern modified.
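As a sketch of why a pattern can carry one definition per database, the same logical check translates differently depending on the database; the email expression below is simplified and hypothetical:

  -- MySQL definition of a simple email pattern:
  SELECT COUNT(*) FROM customer
  WHERE email REGEXP '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+[.][A-Za-z]{2,}$';
  -- Oracle would express the same check with REGEXP_LIKE(email, '...'),
  -- and databases without regular expression support need a user-defined
  -- function, which is why per-database definitions exist.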
When you use the Java engine to run the analysis, the view of the actual data opens in the studio, while if you use the
SQL engine to execute the analysis, the view of the actual data opens in the Data Explorer perspective.
Prerequisite(s): A column analysis that uses patterns has been created and executed.
To view the actual data in the column analyzed against a specific pattern, do the following:
1. Follow the steps outlined in section How to define the columns to be analyzed and section How to add a
regular expression or an SQL pattern to a column analysis to create a column analysis that uses a pattern.
3. In the analysis editor, click the Analysis Results tab at the bottom of the editor to open the corresponding
view.
The display of the Analysis Results view depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
The generated graphic for the pattern matching is displayed accompanied with a table that details the matching
results.
5. Right-click the pattern line in the Pattern Matching table and select:
Option To...
View valid/invalid values open a view of all valid/invalid values measured against the pattern used on the selected
column
View valid/invalid rows open a view of all valid/invalid rows measured against the pattern used on the selected
column
When using the SQL engine, the view opens in the Data Explorer perspective listing valid/invalid rows or values
of the analyzed data according to the limits set in the data explorer.
This explorer view also gives some basic information about the analysis itself. Such information is of great
help when working with multiple analyses at the same time.
The data explorer does not support connections that have an empty user name, such as Single sign-on of MS SQL Server.
If you analyze data using such a connection and try to view data rows and values in the Data Explorer perspective, a
warning message prompts you to set your connection credentials to the SQL Server.
When using the Java engine, the view opens in the Profiling perspective of the studio listing the number of valid/
invalid data according to the row limit you set in the Analysis parameters view of the analysis editor. For more
information, see section Using the Java or the SQL engine.
You can save the executed query and list it under the Libraries > Source Files folders in the DQ Repository tree view if you
click the save icon on the SQL editor toolbar. For more information, see section Saving the queries executed on indicators.
For more information about the data explorer Graphical User Interface, see appendix Data Explorer management
GUI.
To save any of the queries executed on an indicator set in a column analysis, do the following:
1. In the column analysis editor, right-click any of the used indicators to open a contextual menu.
2. Select View executed query to open the data explorer on the query executed on the selected indicator.
The data explorer does not support connections that have an empty user name, such as Single sign-on of MS SQL Server.
If you analyze data using such a connection and try to view the executed queries in the Data Explorer perspective,
a warning message prompts you to set your connection credentials to the SQL Server.
3. Click the save icon on the editor toolbar to open the [Select folder] dialog box.
4. Select the Source Files folder or any sub-folder under it and enter in the Name field a name for the open query.
Make sure that the name you give to the open query always ends with .sql. Otherwise, you will not be able to
save the query.
The selected query is saved under the selected folder in the DQ Repository tree view.
However, the options you have to create column analyses if you start from the table name are different from those
you have if you start from the column name.
To create a column analysis directly from the relevant table name in the DB Connection, do the following:
2. Browse to the table that holds the column(s) you want to analyze and right-click it.
Item To...
Table analysis analyze the selected table using SQL business rules.
For more information, see chapter Table analyses.
Column analysis analyze all the columns included in the selected table using the Simple Statistics
indicators.
For more information on the Simple Statistics indicators, see section Simple statistics.
Pattern frequency analysis analyze all the columns included in the selected table using the Pattern Frequency
Statistics along with the Row Count and the Null Count indicators.
For more information on the Pattern Frequency Statistics, see section Pattern frequency
statistics.
The above steps replace the procedures outlined in section Defining the columns to be analyzed and setting
indicators. You can now proceed by following the steps outlined in section Finalizing the column analysis before
execution.
To create a column analysis directly from the column name in the DB Connection, do the following:
Item To...
Analyze create an analysis for the selected column; you must later set the indicators
you want to use to analyze the selected column.
For more information on setting indicators, see section How to set indicators for the
column(s) to be analyzed. For more information on accomplishing the column analysis,
see section Finalizing the column analysis before execution.
Analyze correlation perform column correlation analyses between nominal and interval columns or nominal
and date columns in database tables.
For more information, see chapter Column correlation analyses.
Pattern frequency analysis analyze the selected column using the Pattern Frequency Statistics along with the Row
Count and the Null Count indicators.
For more information on the Pattern Frequency Statistics, see section Pattern frequency
statistics.
The above steps replace one or both of the procedures outlined in section Defining the columns to be analyzed
and setting indicators. You can now proceed by following the same steps outlined in section Finalizing the column
analysis before execution.
You can also analyze a set of columns; for more information, see section Analyzing tables on MDM servers.
For more information, see section How to define the columns to be analyzed.
For more information, see section How to set indicators for the column(s) to be analyzed. For more
information on indicator types and indicator management, see section Indicators.
You can also use Java user-defined indicators when analyzing master data on the condition that a Java user-defined indicator
is already created. For further information, see section How to define Java user-defined indicators.
The following sections provide a detailed description of each of the preceding steps.
Prerequisite(s): At least one MDM connection is set in the Profiling perspective in the studio. For further
information, see section Connecting to an MDM server.
5. In the Name field, enter a name for the current column analysis.
Spaces are not accepted when typing the analysis name in this field.
6. If required, set the analysis metadata (purpose, description and author name) in the corresponding fields and
click Next to proceed to the next step.
1. Expand MDM connections and browse through the data containers on the MDM server to reach the business
entity (column) holding the data you want to analyze.
2. Select the columns to analyze and then click Finish to close the wizard.
A file for the newly created analysis is displayed under the Analysis node in the DQ Repository tree view,
and the analysis editor opens with the defined analysis metadata.
The display of the connection editor depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
3. Click the Analyzed Column tab to open the corresponding view, if not already open.
The Connection field has the connection name to the MDM server that holds the items you want to analyze
and these items (columns) are already listed in the column list.
4. If required, click the Select columns to analyze link to open a dialog box where you can modify your column
selection. You can filter the table or column lists by typing the desired text in the Table filter or Column
filter fields respectively. The lists will show only the tables/columns that correspond to the text you type in.
5. Click the business entity name to display all its records in the right-hand panel of the [Column Selection]
dialog box.
6. In the list to the right, select the check boxes of the column(s) you want to analyze and click OK to proceed
to the next step.
The selected records are displayed in the Analyzed Column view of the analysis editor.
You can drag the records to be analyzed directly from the DQ Repository tree view to the column analysis editor.
7. If required, use the delete, move up or move down buttons to manage the analyzed columns.
The data mining type is set to Other by default. For more information on data mining types in the studio, see section
Data mining types.
If you right-click any of the listed records in the Analyzed Columns view and select Show in DQ Repository view,
the selected record will be automatically located under the corresponding MDM connection in the tree view.
You can also use Java user-defined indicators when analyzing master data on the condition that a Java user-defined indicator
is already created. For further information, see section How to define Java user-defined indicators.
Prerequisite(s): An analysis of a business entity is open in the analysis editor in the studio. For more information,
see section How to define the columns to be analyzed.
1. In the analysis editor, click Analyzed Columns to open the analyzed columns view.
2. Click Select indicators for each column to open the [Indicator Selection] dialog box.
In this dialog box, you can change column positions by dropping them with the cursor.
3. If you are analyzing a very large number of columns, place the cursor in the top/bottom right corner of the
[Indicator Selection] dialog box to access the columns to the very right.
Similarly, place the cursor in the top/bottom left corner of the [Indicator Selection] dialog box to access
the columns to the very left.
4. Click in the simple statistics cell to set these indicators for the MDM records and then click OK to proceed
to the next step.
The selected indicators are attached to the analyzed records in the Analyzed Columns view.
1. In the analysis editor, click Analyzed Columns to open the analyzed columns view.
2. Click the option icon next to the defined indicator to open the dialog box where you can set options for
the given indicator.
Running the analysis shows whether these thresholds are violated by appending a warning icon to such a result
and displaying the result itself in red. For further information, see section How to access the detailed view of the analysis
results.
Indicators settings dialog boxes differ according to the parameters specific for each indicator. For more information
about different indicator parameters, see section Indicator parameters.
5. In the analysis editor, click the Data Filter tab to display the corresponding view and filter master data
through XQuery clauses, if required.
6. In the analysis editor, click the Analysis Parameters tab to display the corresponding view and select the
engine you want to use to run the analysis. For more information on available engines, see section Using
the Java or the SQL engine.
7. Click the save icon on the toolbar of the analysis editor and then press F6 to execute the analysis.
The Graphics panel to the right of the analysis editor displays a group of graphic(s), each corresponding to one
of the analyzed records.
To view the different graphics associated with all analyzed records, you may need to navigate through the different pages
in the Graphics panel using the toolbar on the upper-right corner.
1. Click the Analysis Results tab at the bottom of the analysis editor to open the corresponding view.
2. Click Analysis Results and then the name of the analyzed column for which you want to display the detailed
results.
The display of the Analysis Results view depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
The detailed analysis results view shows the generated graphics for the analyzed columns accompanied with
tables that detail the statistic results.
Below are the tables that accompany the Simple Statistics graphics in the Analysis Results view for the analyzed
records in the procedure outlined in section Defining the columns to be analyzed and setting indicators.
For further information, see section Creating table and columns analyses in shortcut procedures.
From the studio, you can also analyze a set of columns, for more information, see section Analyzing tables in
delimited files.
For more information, see section How to define the columns to be analyzed.
For more information, see section How to set indicators for the column(s) to be analyzed. For more
information on indicator types and indicator management, see section Indicators.
3. Setting patterns for the defined columns. For more information, see section Patterns.
You can also use Java user-defined indicators when analyzing columns in a delimited file on the condition that a Java user-
defined indicator is already created. For further information, see section How to define Java user-defined indicators.
The following sections provide a detailed description of each of the preceding steps.
Prerequisite(s): At least one connection to a delimited file is set in the Profiling perspective of the studio. For
further information, see section How to connect to a delimited file.
You can directly get to this step in the analysis creation wizard if you right-click the column to analyze in Metadata
> FileDelimited and select Column Analysis > Analyze. For further information, see section Creating table and
columns analyses in shortcut procedures.
5. In the Name field, enter a name for the current column analysis.
6. If required, set the analysis metadata (purpose, description and author name) in the corresponding fields and
click Next to proceed to the next step.
1. Expand FileDelimited and then browse to the columns you want to analyze.
2. Select these columns and then click Finish to close the wizard.
A file for the newly created analysis is displayed under the Analyses node in the DQ Repository tree view,
and the analysis editor opens with the defined analysis metadata.
The display of the connection editor depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
You can also drop the columns to analyze directly from the DQ Repository tree view to the analysis editor.
The Connection field shows the selected connection and the columns you want to analyze are already listed
in the column list.
4. If required, click the Select columns to analyze link to open a dialog box where you can modify your column
selection.
In this example, you want to analyze the id, firstname and age columns from the selected connection.
5. If required, use the delete, move up or move down buttons to manage the analyzed columns.
If you right-click any of the listed columns in the Analyzed Columns table and select Show in DQ Repository view,
the selected column will be automatically located under the corresponding delimited file connection in the tree view.
You can also use Java user-defined indicators when analyzing columns in a delimited file on the condition that a Java user-
defined indicator is already created. For further information, see section How to define Java user-defined indicators.
Prerequisite(s): An analysis of a delimited file is open in the analysis editor in the Profiling perspective of the
studio. For more information, see section How to define the columns to be analyzed.
1. Follow the procedure outlined in section How to define the columns to be analyzed.
2. In the analysis editor, click Analyzed Columns to open the analyzed columns view.
3. Click Select indicators for each column to open the [Indicator Selection] dialog box.
In this dialog box, you can change column positions by dropping them with the cursor.
4. If you are analyzing a very large number of columns, place the cursor in the top/bottom right corner of the
[Indicator Selection] dialog box to access the columns to the very right.
Similarly, place the cursor in the top/bottom left corner of the [Indicator Selection] dialog box to access
the columns to the very left.
5. Click in the cells to set indicator parameters for the columns to be analyzed and then click OK to proceed
to the next step.
In this example, you want to set the Simple Statistics indicators on all columns, the Text Statistics indicators
on the firstname column and the Soundex Frequency Table on the firstname column as well.
The selected indicators are attached to the analyzed columns in the Analyzed Columns view.
To set options for system indicators used on the columns to be analyzed, do the following:
1. Follow the procedures outlined in section How to define the columns to be analyzed and section How to set
indicators for the column(s) to be analyzed.
2. In the analysis editor, click Analyzed Columns to open the analyzed columns view.
3. In the Analyzed Columns list, click the option icon next to the indicator to open the dialog box where
you can set options for the given indicator.
Indicators settings dialog boxes differ according to the parameters specific for each indicator. For more information
about different indicator parameters, see section Indicator parameters.
Prerequisite(s): An analysis of a delimited file is open in the analysis editor in the Profiling perspective of the
studio. For more information, see section How to define the columns to be analyzed, section How to set indicators
for the column(s) to be analyzed and section How to set options for system indicators.
1. Define the regular expression you want to add to the analyzed column. For further information on creating
regular expressions, see section How to create a new regular expression or SQL pattern.
In this example, the regular expression checks for all words that start with an uppercase letter; an expression
such as ^([A-Z][a-z]*\s?)+$ could express this check.
2. Add the regular expression to the analyzed column in the open analysis editor, the firstname column in this
example. For further information, see section How to add a regular expression or an SQL pattern to a column
analysis.
3. Click the save icon on the toolbar of the analysis editor and then press F6 to execute the analysis.
If the format of the file you are using has problems, you will get an error message indicating which row causes
the problem.
The Graphics panel to the right of the analysis editor displays a group of graphic(s), each corresponding to
one of the analyzed columns.
4. If you analyze more than one column, navigate through the different pages in the Graphics panel using the
toolbar on the upper-right corner in order to view the different graphics associated with all analyzed columns.
Below is a sample of the graphical results of one of the analyzed columns: firstname.
To view the detailed results of the analyzed columns, see section How to access the detailed view of the analysis
results.
1. Click the Analysis Results tab at the bottom of the analysis editor to open the corresponding view.
2. Click Analysis Result and then the name of the analyzed column for which you want to display the detailed
results.
The display of the Analysis Results view depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
The detailed analysis results view shows the generated graphics for the analyzed columns accompanied with tables
that detail the statistic results.
Below are the tables that accompany the statistics graphics in the Analysis Results view for the analyzed firstname
column in the procedure outlined in section Analyzing columns in a delimited file.
1. At the bottom of the analysis editor, click the Analysis Results tab to open a detailed view of the analysis
results.
2. Right-click a data row in the statistic results of any of the analyzed columns and select an option as the
following:
Option Operation
View rows open a view on a list of all data rows in the analyzed column.
For the Duplicate Count indicator, the View rows option will list all the rows
that are duplicated. So if the duplicate count is 12 for example, this option will
list 24 rows.
View values open a view on a list of the actual data values of the analyzed column.
Option Operation
View valid/invalid rows open a view on a list of all valid/invalid rows measured against a pattern.
Option Operation
View valid/invalid values open a view on a list of all valid/invalid values measured against a pattern.
From this view, you can export the analyzed data into a csv file. To do that:
1. Click the icon in the top left corner of the view.
2. Click the Choose... button and browse to where you want to store the csv file and give it a name.
A csv file is created in the specified place holding all the analyzed data rows listed in the view.
For further information, see section Creating table and columns analyses in shortcut procedures.
Profiling Excel files is done via ODBC for the time being. In later releases, you will be able to analyze Excel files directly
as you do with delimited files.
Prerequisite(s): At least one connection to an excel file is set in the Profiling perspective of the studio. For further
information, see section How to connect to an Excel file.
1. In the DQ Repository tree view, expand Metadata, and then right-click DB connections.
3. If required, fill in a purpose and a description for the connection, and then click Next to proceed to the next
step.
5. In the DataSource field, enter the exact name of the Data Source you created in the previous procedure.
6. Click the Check button to display a confirmation message about the status of the connection.
7. If your connection is successful, click OK to close the message, and then click Finish to close the wizard.
8. The connection is listed under DB connections in the DQ Repository tree view and the connection editor
opens in the Studio.
If you have difficulty retrieving the columns from the Excel file, give the worksheet in the Excel file the same name as
the table. To do that, select the whole table in the Excel file, press Ctrl + F3 and modify the name.
You can now create a column analysis in the Profiling perspective of the studio to profile the columns in the
Excel file.
The procedures to analyze columns in an Excel file are exactly the same as those for analyzing columns in a
delimited file. For further information on analyzing columns in an Excel file, see section Analyzing columns in a
delimited file, section How to access the detailed view of the analysis results and section Analyzing master data
in shortcut procedures.
Make sure to select the Java engine in the Analysis Parameter view in the analysis editor before executing the analysis of
the Excel columns; otherwise you will get an error message when running the analysis.
It describes how to set up SQL business rules based on WHERE clauses and add them as indicators to database
table analyses.
Before starting data profiling management procedures, you need to be familiar with the studio Graphical User
Interface (GUI). For more information, see appendix The studio management GUI.
The sequence of profiling data in one or multiple tables may involve the following steps:
1. Defining one or more tables on which to carry out data profiling processes that will define the content,
structure and quality of the data included in the table(s).
2. Creating SQL business rules based on WHERE clauses and adding them as indicators to table analyses.
3. Creating column functional dependencies analyses to detect anomalies in the column dependencies of the
defined table(s) through defining columns as either determinant or dependent.
The section Analyzing tables in databases explains in detail the different options to analyze a table.
Using the studio, you can better explore the quality of data in a database table through either:
Creating a simple table analysis through analyzing all columns in the table using patterns. For more information,
see section Creating a simple table analysis: the analysis of a set of columns.
Adding data quality rules as indicators to table analysis. For more information, see section Creating a table
analysis with SQL business rules.
Detecting anomalies in column dependencies. For more information, see section Detecting anomalies in the
table columns: column functional dependency analysis.
The sections below explain in detail all types of analysis that can be executed against tables.
The analysis of a set of columns focuses on a column set (full records) and not on separate columns as it is the case
with the column analysis. The statistics presented in the analysis results (row count, distinct count, unique count
and duplicate count) are measured against the values across all the data set and thus do not analyze the values
separately within each column.
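As an illustration of what measuring against the full records means, below is a sketch of how these statistics could be expressed in SQL for a hypothetical set of columns (fname, lname, email) in a customer table; the studio generates its own queries, so this is only a conceptual equivalent:

  SELECT COUNT(*) AS row_count FROM customer;

  -- Distinct count: the number of distinct full records.
  SELECT COUNT(*) AS distinct_count
  FROM (SELECT DISTINCT fname, lname, email FROM customer) t;

  -- Unique count: full records that occur exactly once.
  SELECT COUNT(*) AS unique_count
  FROM (SELECT fname, lname, email FROM customer
        GROUP BY fname, lname, email
        HAVING COUNT(*) = 1) t;

  -- Duplicate count: distinct full records that occur more than once.
  SELECT COUNT(*) AS duplicate_count
  FROM (SELECT fname, lname, email FROM customer
        GROUP BY fname, lname, email
        HAVING COUNT(*) > 1) t;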
With the Java engine, you may also apply patterns on each column and the result of the analysis will give the
number of records matching all the selected patterns together. For further information, see section How to add
patterns to the analyzed columns.
When you use the Java engine to run a column set analysis on big sets or on data with many problems, it is advisable to define
a maximum memory size threshold to execute the analysis as you may end up with a Java heap error. For more information,
see section Defining the maximum memory size threshold.
With this analysis, you can use patterns to validate the full records against all patterns and have a single-bar result
chart that shows the number of the rows that match all the patterns.
Prerequisite(s): At least one database connection is set in the Profiling perspective of the studio. For further
information, see section Connecting to a database.
3. Expand the Table Analysis node and then click Column Set Analysis.
Spaces are not accepted when typing the analysis name in this field.
6. Set column analysis metadata (purpose, description and author name) in the corresponding fields and then
click Next.
1. Expand DB connections.
2. In the desired database, browse to the columns you want to analyze, select them and then click Finish to
close this [New analysis] wizard.
A folder for the newly created analysis is listed under Analysis in the DQ Repository tree view, and the
analysis editor opens with the defined analysis metadata.
The display of the analysis editor depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
3. Click the Analyzed Columns tab to open the corresponding view. Click the Select columns to analyze link
to open a dialog box where you can modify your table or column selection.
If you select to connect to a database that is not supported in the studio (using the ODBC or JDBC methods), it
is recommended to use the Java engine to execute the column analyses created on the selected database. For more
information on the Java engine, see section Using the Java or the SQL engine.
4. Either:
expand the DB Connections folder and browse through the catalog/schemas to reach the table holding the
columns you want to analyze, or,
filter the table or column lists by typing the desired text in the Table filter or Column filter fields
respectively. The lists will show only the tables/columns that correspond to the text you type in.
As this analysis retrieves as many rows as the number of distinct rows in order to compute the statistics, it is advised
to avoid selecting a primary key column.
In this example, you want to analyze a set of six columns in the customer table: account number
(account_num), education (education), email (email), first name (fname), last name (lname) and gender
(gender). You want to identify the number of rows, the number of distinct and unique values and the number
of duplicates.
5. Click the table name to list all its columns in the right-hand panel of the [Column Selection] dialog box.
6. In the column list, select the check boxes of the column(s) you want to analyze and click OK.
Select the check boxes of all the columns if you want to get simple statistics on the whole table.
The selected columns are displayed in the Analyzed Columns view of the analysis editor.
7. If required, select to connect to a different database by selecting a different connection from the Connection
box. This box lists all the connections created in the Studio with the corresponding database names.
If the columns listed in the Analyzed Columns view do not exist in the new database connection you want to set, you
will receive a warning message that enables you to continue or cancel the operation.
8. If required, right-click any of the listed columns in the Analyzed Columns view and select Show in DQ
Repository view. The selected column is automatically located under the corresponding connection in the
tree view.
9. Use the delete, move up or move down buttons to manage the analyzed columns when necessary.
You can add patterns to one or more of the analyzed columns to validate the full record (all columns) against all
the patterns, and not to validate each column against a specific pattern as it is the case with the column analysis.
The results chart is a single bar chart for the totality of the used patterns. This chart shows the number of the rows
that match all the patterns.
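Conceptually, the All Match indicator counts the rows for which every analyzed column matches its pattern, as in the following sketch; the customer table and the simplified patterns are hypothetical, and with the Java engine the matching is actually performed in memory rather than through SQL:

  SELECT COUNT(*) AS all_match
  FROM customer
  WHERE fname REGEXP '^[A-Z][a-z]+$'                 -- first name pattern
    AND email REGEXP '^[^@ ]+@[^@ ]+[.][a-z]{2,}$';  -- email pattern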
Before being able to use a specific pattern with a set of columns analysis, you must manually set the pattern definition
for Java in the pattern settings, if it does not already exist. Otherwise, a warning message opens prompting you to set the
definition of the Java regular expression.
Prerequisite(s): An analysis of a set of columns is open in the analysis editor in the Profiling perspective of the
studio. For more information, see section How to define the set of columns to be analyzed.
1. Click the icon next to each of the columns you want to validate against a specific pattern.
You can drop the regular expression directly from the Patterns folder in the DQ Repository tree view onto the
column name in the column analysis editor.
If no Java expression exists for the pattern you want to add, a warning message opens prompting you to add the
pattern definition for Java. Click Yes to open the pattern editor and add the Java regular expression, then proceed
to add the pattern to the analyzed columns.
In this example, you want to add a corresponding pattern to each of the analyzed columns to validate data in
these columns against the selected patterns. The result chart will show the percentage of matching/non-matching
values, that is, the values that do or do not match all the used patterns.
2. In the [Pattern Selector] dialog box, expand Patterns and browse to the regular expression you want to add
to the selected column.
3. Select the check box(es) of the expression(s) you want to add to the selected column.
4. Click OK.
The added regular expression(s) are displayed under the analyzed column(s) in the Analyzed Columns list,
and the All Match indicator is displayed in the Indicators list in the Indicators view.
What is left before executing this set of columns analysis is to define indicators, data filter and analysis parameters.
Prerequisite(s): A column set analysis has already been defined in the Profiling perspective of the studio. For
further information, see section How to define the set of columns to be analyzed and section How to add patterns
to the analyzed columns.
The indicators representing the simple statistics are attached to this type of analysis by default. For further information
about the indicators for simple statistics, see section Simple statistics.
2. Click the option icon to open a dialog box where you can set options for each indicator according to
your needs.
3. Click Data Filter in the analysis editor to open its view and filter data through SQL WHERE clauses
according to your needs.
In the Number of connections per analysis field, set the number of concurrent connections allowed per
analysis to the selected database connection.
You can set this number according to the database available resources, that is the number of concurrent
connections each database can support.
From the Execution engine list, select the engine, Java or SQL, you want to use to execute the analysis.
If you select the Java engine and then select the Allow drill down check box in the Analysis parameters
view, you can store locally the analyzed data and thus access it in the Analysis Results > Data view. You
can use the Max number of rows kept per indicator field to decide the number of the data rows you want
to make accessible.
For further information, see section How to access the detailed result view.
If you select the SQL engine, select the Store data check box if you want to store locally the list of all
analyzed rows and thus access it in the Analysis Results > Data view. For further information, see section
How to access the detailed result view.
If the data you are analyzing is very big, it is advisable to leave this check box unchecked in order to have only the
analysis results without storing analyzed data at the end of the analysis computation.
5. Click the save icon on top of the analysis editor and then press F6 to execute the analysis.
The graphical result of the set of columns analysis is displayed in the Graphics panel to the right of the
analysis editor.
This graphical result provides the simple statistics on the full records of the analyzed column set and not on
the values within each column separately.
When you use patterns to match the content of the set of columns, another graphic is displayed to illustrate
the match and non-match results against the totality of the used patterns.
Prerequisite(s): An analysis of a set of columns is open in the analysis editor in the Profiling perspective of the
studio. For more information, see section How to define the set of columns to be analyzed and section How to add
patterns to the analyzed columns.
1. Click the Analysis Results tab at the bottom of the analysis editor.
The corresponding view is displayed. Here you can read the analysis results in a table that accompanies the
Simple Statistics and All Match graphics.
The display of the Analysis Results view depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
2. To have a view of the actual analyzed data, click Data in the Analysis Results view.
In order to have the analyzed data stored in this view, you must select the Store data check box in the Analysis
Parameter view. For further information, see section How to finalize and execute the analysis of a set of columns.
You can filter analyzed data according to any of the used patterns. For further information, see section How to filter data
against patterns.
After analyzing a set of columns against a group of patterns and having the results of the rows that match or do
not match all the patterns, you can filter the valid/invalid data according to the used patterns.
Prerequisite(s): An analysis of a set of columns is open in the analysis editor in the Profiling perspective of the
studio. For more information, see section How to define the set of columns to be analyzed and section How to add
patterns to the analyzed columns.
To filter data resulted from the analysis of a set of columns, do the following:
1. In the analysis editor, click the Analysis Results tab at the bottom of the editor to open the detailed result view.
The display of the Analysis Results view depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
This table lists the actual analyzed data in the analyzed columns.
A dialog box is displayed listing all the patterns used in the column set analysis.
4. Select the check box(es) of the pattern(s) according to which you want to filter the data, and then select a
display option according to your needs.
5. Select All data to show all analyzed data, or matches to show only the data that matches the pattern, or non-
matches to show the data that does not match the selected pattern(s).
In this example, data is filtered against the Email Address pattern, and only the data that does not match is
displayed.
All email addresses that do not match the selected pattern appear in red. Any data row that has a missing
value appears with a red background.
Prerequisite(s): A simple table analysis is defined in the analysis editor in the Profiling perspective of the studio.
To create a column analysis on one or more columns defined in a simple table analysis, do the following:
2. In the Analyzed Columns view, right-click the column(s) you want to create a column analysis on.
3. Select Column analysis from the contextual menu. The [New Analysis] wizard opens.
4. In the Name field, enter a name for the new column analysis and then click Next to proceed to the next step.
The analysis editor opens with the defined metadata and a folder for the newly created analysis is listed under
the Analyses folder in the DQ Repository tree view.
5. Follow the steps outlined in section Analyzing columns in a database to continue creating the column analysis.
When you use the Java engine to run a column set analysis on large data sets or on data with many problems, it is advisable
to define a maximum memory size threshold for executing the analysis, as you may otherwise end up with a Java heap error.
For more information, see section Defining the maximum memory size threshold.
For an example of a table analysis with a simple business rule, see section How to create a table analysis with a
simple SQL business rule. For an example of a table analysis with a business rule that has a join condition, see
section How to create a table analysis with an SQL business rule with a join condition.
1. In the DQ Repository tree view, expand Libraries > Rules.
2. Right-click SQL.
3. From the contextual menu, select New Business Rule to open the [New Business Rule] wizard.
Consider as an example that you want to create a business rule to match the age of all customers listed in the
age column of a defined table. You want to filter all the age records to identify those that fulfill the specified
criterion.
4. In the Name field, enter a name for this new SQL business rule.
Spaces are not accepted when typing in the business rule name in this field.
5. Set other metadata (purpose, description and author name) in the corresponding fields and then click Next.
6. In the Where clause field, enter the WHERE clause to be used in the analysis.
In this example, the WHERE clause is used to match the records where customer age is greater than 18.
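As a minimal sketch, assuming the analyzed table has an age column as in this example, the content entered in
the Where clause field could be as simple as:

    age > 18

Only the condition itself is entered here; the studio builds the full query around this clause when the rule is
used in a table analysis.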
7. Click Finish to close the wizard.
A sub-folder for this new SQL business rule is displayed under the Rules folder in the DQ Repository tree
view. The SQL business rule editor opens with the defined metadata.
In the SQL business rule editor, you can modify the WHERE clause or add a new one directly in the Data quality
rule view.
This will act as an indicator to measure the importance of the SQL business rule.
This step is not obligatory. You can decide to create a business rule without a join condition and use it with only
the WHERE clause in the table analysis.
For an example of a table analysis with a simple business rule, see section How to create a table analysis with a
simple SQL business rule. For an example of a table analysis with a business rule that has a join condition, see
section How to create a table analysis with an SQL business rule with a join condition.
1. In the SQL business rule editor, click Join Condition to open the corresponding view.
2. Click the [+] button to add a row in the Join Condition table.
3. Expand the Metadata folder in the DQ Repository tree view, and then browse to the columns in the tables
on which you want to create the join condition.
This join condition will define the relationship between a table A and a table B using a comparison operator
on a specific column in both tables. In this example, the join condition will compare the "name" value in the
Person and Person_Ref tables that have a common column called name.
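As an illustrative sketch, assuming the equality operator is selected as the comparison condition in step 5 below,
the join condition defined in this example corresponds to the following SQL condition:

    PERSON.name = PERSON_REF.name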
You must be careful when defining the join clause. To get results that are easy to understand, it is advisable to make
sure that the joined tables do not have duplicate values. For further information, see section How to create a table
analysis with an SQL business rule with a join condition.
4. Drop the columns from the DQ Repository tree view to the Join Condition table.
A dialog box is displayed prompting you to select where to place the column: in TableA or in TableB.
5. Select a comparison condition operator between the two columns in the tables and save your modifications.
In the analysis editor, you can now drop this newly created SQL business rule onto a table that has an "age"
column. When you run the analysis, the join to the second column is done automatically.
The table to which to add the business rule must contain at least one of the columns used in the SQL business rule.
For more information about using SQL business rules as indicators on a table analysis, see section Creating a
table analysis with SQL business rules.
1. In the DQ Repository tree view, expand Libraries > Rules > SQL.
2. Right-click the SQL business rule you want to open and select Open from the contextual menu.
The SQL business rule editor opens displaying the rule metadata.
3. Modify the rule metadata, the WHERE clause or the join condition as required.
4. Click the save icon on top of the editor to save your modifications.
Prerequisite(s):
At least one SQL business rule has been created in the Profiling perspective of the studio. For further
information about creating SQL business rules, see section How to create an SQL business rule.
At least one database connection is set in the Profiling perspective of the studio. For further information, see
section Connecting to a database.
In this example, you want to add the SQL business rule created in section How to create an SQL business rule to
a top_custom table that contains an age column. This SQL business rule will match the customer ages to define
those who are older than 18.
3. Expand the Table Analysis node and then select Business Rule Analysis.
Spaces are not accepted when typing in the analysis name in this field.
6. Set the analysis metadata (purpose, description and author name) in the corresponding fields and then click
Next.
You can directly select the data quality rule you want to add to the current analysis by clicking the Next button in the
[New Analysis] wizard, or you can do that at a later stage in the Analyzed Tables view as shown in the following steps.
A folder for the newly created table analysis is listed under the Analyses folder in the DQ Repository tree
view, and the analysis editor opens with the defined metadata.
3. Click the Analyzed Tables tab to open the Analyzed Tables view.
4. If required, click Select tables to analyze to open the [Table Selection] dialog box and modify the selection
and/or select new table(s).
You can filter the table or column lists by typing the desired text in the Table filter or Column filter fields
respectively. The lists will show only the tables/columns that correspond to the text you type in.
6. Select the check box next to the table name and click OK.
You can connect to a different database by selecting another connection from the Connection box. This box lists
all the connections created in the Studio with the corresponding database names. If the tables listed in the Analyzed
Tables view do not exist in the new database connection you want to set, you will receive a warning message that
enables you to continue or cancel the operation.
If you right-click any of the listed columns in the Analyzed Columns view and select Show in DQ
Repository view, the selected column is automatically located under the corresponding connection in the
tree view.
2. Expand the Rules folder and select the check box(es) of the predefined SQL business rule(s) you want to
use on the corresponding table(s).
3. Click OK.
The selected business rule is listed below the table name in the Analyzed Tables view.
You can also drag the business rule directly from the DQ Repository tree view to the table in the analysis editor.
4. If required, right-click the business rule and select View executed query.
5. Click Data Filter in the analysis editor to open the view where you can set a filter on the data of the analyzed
table(s).
6. Click the save icon on top of the analysis editor and then press F6 to execute the analysis.
An information pop-up opens to confirm that the operation is in progress. The table analysis results are
displayed in the Graphics panel to the right.
7. Click Analysis Results at the bottom of the analysis editor to switch to the detail result view.
All age records in the selected table are evaluated against the defined SQL business rule. The analysis results
show two bar charts: the first is a row count indicator that shows the number of rows in the analyzed table, and
the second is a match and non-match indicator that indicates in red the age records from the "analyzed result
set" that do not match the criterion (age below 18).
8. Right-click the business rule results in the second table, or right-click the result bar in the chart itself and
select:
Option To...
View valid rows access a list in the SQL editor of all valid rows measured against the pattern used on the
selected table
View invalid rows access a list in the SQL editor of all invalid rows measured against the pattern used on
the selected table
Analyze duplicates generate a ready-to-use analysis that analyzes duplicates in the table, if any, and gives
the row and duplicate counts. For further information, see section How to generate an
analysis on the join results to analyze duplicates.
For further information about the Analysis Results view, see section How to access the detailed view of the
analysis results.
You can carry out a table analysis in a direct and more simplified way. For further information, see section
How to create a table analysis with an SQL business rule in a shortcut procedure.
Depending on the analyzed data and the join clause itself, several different results of the join are possible, for
example #match + #no match > #row count, #match + #no match < #row count or #match + #no match = #row
count.
The example below explains in detail the case where the data set in the join result is bigger than the row count
(#match + #no match > #row count) which indicates duplicates in the processed data.
Prerequisite(s):
At least one SQL business rule has been created in the Profiling perspective of the studio. For further
information about creating SQL business rules, see section How to create an SQL business rule.
At least one database connection is set in the Profiling perspective of the studio. For further information, see
section Connecting to a database.
In this example, you want to add the SQL business rule created in section How to create an SQL business rule to
a Person table that contains the age and name columns. This SQL business rule will match the customer ages to
define those who are older than 18. The business rule also has a join condition that compares the "name" value
between the Person table and another table called Person_Ref through analyzing a common column called name.
Below is a capture of the result of the join condition between these two tables:
The result set may contain duplicate rows, as is the case here, which makes the analysis results a bit harder to
understand. The analysis does not evaluate the rows of the table against the business rule; it runs on the result
set returned by the business rule. See the end of the section for a detailed explanation of the analysis results.
1. Define the table analysis and select the table you want to analyze as outlined in section How to create a table
analysis with a simple SQL business rule.
2. Add the business rule with the join condition to the selected table by clicking the icon next to the table name.
This business rule has a join condition that compares the "name" value between two different tables through
analyzing a common column. For further information about SQL business rules, see section How to create
an SQL business rule.
3. Click the save icon on top of the analysis editor and then press F6 to execute the analysis.
An information pop-up opens to confirm that the operation is in progress. The table analysis results are
displayed in the Graphics panel to the right.
All age records in the selected table are evaluated against the defined SQL business rule. The analysis results
show two bar charts: the first is a row count indicator that shows the number of rows in the analyzed table, and
the second is a match and non-match indicator that indicates in red the age records from the "analyzed result
set" that do not match the criterion (age below 18).
To better understand the Business Rule Statistics bar chart in the analysis results, do the following:
1. In the analysis editor, right-click the business rule and select View executed query.
2. Modify the query in the top part of the editor to read as follows:

    SELECT * FROM `person_joins`.`PERSON` PERSON JOIN `person_joins`.`PERSON_REF` PERSON_REF ON (PERSON.`name`=PERSON_REF.`name`)

This will list the result data set of the join condition in the editor.
3. In the top left corner of the editor, click the icon to execute the query.
The query result, that is the analyzed result set, is listed in the bottom part of the editor.
4. Click the Analysis Results tab at the bottom of the analysis editor to open a detail view of the analysis results.
The analyzed result set may contain more or fewer rows than the analyzed table. In this example, the number
of match and non-match records (5 + 2 = 7) exceeds the number of analyzed records (6) because the join of
the two tables generates more rows than expected.
Here 5 rows (71.43%) match the business rule and 2 rows do not match. Because the join generates duplicate
rows, this result does not mean that 5 rows of the analyzed table match the business rule. It only means that 5
rows among the 7 rows of the result set match the business rule. Actually, some rows of the analyzed tables
may not even be analyzed against the business rule; this happens when the join excludes these rows. For this
reason, it is advisable to check for duplicates on the columns used in the join of the business rule, in order to
make sure that the join does not remove or add rows in the analyzed result set. Otherwise the interpretation
of the result is more complex.
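As a hedged sketch of such a check, using the example tables above, the following query returns every name
value that occurs more than once in the PERSON table and that the join would therefore multiply:

    SELECT name, COUNT(*) AS occurrences
    FROM `person_joins`.`PERSON`
    GROUP BY name
    HAVING COUNT(*) > 1

Running the same query against the PERSON_REF table completes the check.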
For further information on the result detail view, see section How to access the detailed view of the analysis
results.
In the Analysis Results view, if the number of match and non-match records exceeds the number of analyzed records,
you can generate a ready-to-use analysis that will analyze the duplicates in the selected table. For further information,
see section How to generate an analysis on the join results to analyze duplicates.
To access a more detailed view of a table analysis that uses an SQL business rule, do the following:
1. Click the Analysis Results tab at the bottom of the analysis editor to open the corresponding view.
The display of the Analysis Results view depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
The detailed analysis results view shows the two bar charts that indicate the number of the analyzed rows in
the selected table and the percentages of the rows that match and do not match the SQL business rule. The bar
charts are accompanied by tables that detail the statistical results.
If a join condition is used in the SQL business rule, the number of the rows of the join (#match + #no match) can
be different from the number of the analyzed rows (row count). For further information, see section How to create a
table analysis with an SQL business rule with a join condition.
2. Right-click the Row Count row in the first table and select View rows.
The SQL editor opens in the Studio to display a list of the analyzed rows.
3. Right-click the business rule results in the second table, or right-click the result bar in the chart itself and
select:
Option To...
View valid rows access a list in the SQL editor of all valid rows measured against the pattern used on the
selected table
View invalid rows access a list in the SQL editor of all invalid rows measured against the pattern used on
the selected table
Analyze duplicates generate a ready-to-use analysis that analyzes duplicates in the table and gives the row
and duplicate counts. For further information, see section How to generate an analysis
on the join results to analyze duplicates.
4. In the SQL editor, click the save icon on the toolbar to save the executed query on the SQL business rule and
list it under the Libraries > Source Files folder in the DQ Repository tree view.
For further information, see section Saving the queries executed on indicators.
You can generate a ready-to-use analysis to analyze these duplicate records. The results of this analysis help you
to better understand why there are more records in the join results than in the table.
Prerequisite(s): A table analysis with an SQL business rule, that has a join condition, is defined and executed in
the Profiling perspective of the studio. The join results must show that there are duplicates in the table. For further
information, see section How to create a table analysis with an SQL business rule with a join condition.
To generate an analysis that analyzes the duplicate records in a table, do the following:
1. After creating and executing an analysis on a table that has duplicate records as outlined in section How to
create a table analysis with an SQL business rule with a join condition, click the Analysis Results tab at
the bottom of the analysis editor.
2. Right-click the join results in the second table and select Analyze duplicates.
The [Column Selection] dialog box opens with the analyzed tables selected by default.
3. Modify the selection in the dialog box if needed and then click OK.
Two column analyses are generated and listed under the Analyses folder in the DQ Repository tree view
and the analysis editor opens in the Studio on the settings of the generated analysis.
4. Click the save icon on top of the analysis editor and then press F6 to execute the generated analysis.
The analysis results show two bars, one representing the row count of the data records in the analyzed column
and the other representing the duplicate count.
5. Click Analysis Results at the bottom of the analysis editor to access the detail result view.
6. Right-click the row count or duplicate count results in the table, or right-click the result bar in the chart itself
and select:
Option To...
View rows open a view on a list of all data rows or duplicate rows in the analyzed column.
View values open a view on a list of the duplicate data values of the analyzed column.
Prerequisite(s):
At least one SQL business rule is created in the Profiling perspective of the studio.
At least one database connection is set in the Profiling perspective of the studio.
For more information about creating SQL business rules, see section How to create an SQL business rule.
To create a table analysis with an SQL business rule in a shortcut procedure, do the following:
1. In the DQ Repository tree view, expand Metadata > DB Connections, and then browse to the table you
want to analyze.
2. Right-click the table name and select Table analysis from the list.
3. Enter the metadata for the new analysis in the corresponding fields and then click Next to proceed to the
next step.
Spaces are not accepted when typing in the table analysis name in the Name field.
4. Expand Rules > SQL and then select the check box(es) of the predefined SQL business rule(s) you want to
use on the corresponding table(s).
5. Click OK.
The table name along with the selected business rule are listed in the Analyzed Tables view.
6. If required, click Data Filter in the analysis editor to open the view where you can set a filter on the data
of the analyzed table(s).
7. Click the save icon on top of the analysis editor and then press F6 to execute the analysis.
An information pop-up opens to confirm that the operation is in progress. The table analysis results are
displayed in the Graphics panel to the right.
This type of analysis detects to what extent a value in a determinant column functionally determines another value
in a dependant column.
This can help you identify problems in your data, such as values that are not valid. For example, if you analyze
the dependency between a column that contains United States Zip Codes and a column that contains states in the
United States, the same Zip Code should always have the same state. Running the functional dependency analysis
on these two columns will show if there are any violations of this dependency.
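As a hedged sketch of what such a violation looks like in SQL (customer_address, zip_code and state are
hypothetical names; the analysis computes this for you), a Zip Code associated with more than one state breaks
the dependency:

    SELECT zip_code, COUNT(DISTINCT state) AS state_count
    FROM customer_address
    GROUP BY zip_code
    HAVING COUNT(DISTINCT state) > 1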
Prerequisite(s): At least one database connection is set in the Profiling perspective of the studio. For further
information, see section Connecting to a database.
Spaces are not accepted when typing in the analysis name in this field.
6. Set the analysis metadata (purpose, description and author name) in the corresponding fields, and then click
Next.
1. Expand DB connections, and then browse to the columns you want to analyze, select them and then click
Finish to close the [New Analysis] wizard.
A folder for the newly created functional dependency analysis is listed under Analysis in the DQ Repository
tree view, and the analysis editor opens with the defined metadata.
The display of the analysis editor depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
2. Click the Analyzed Column Set tab to open the corresponding view.
3. Click Determinant columns: Select columns from set A to open the [Column Selection] dialog box.
Here you can select the first set of columns against which you want to analyze the values in the dependant
columns. You can also drag the columns directly from the DQ Repository tree view to the left column panel.
In this example, you want to evaluate the records present in the city column and those present in the
state_province column against each other, to see if the state names match the listed city names and vice versa.
4. In the [Column Selection] dialog box, expand DB Connections and browse to the column(s) you want to
define as determinant columns.
You can filter the table or column lists by typing the desired text in the Table filter or Column filter fields respectively.
The lists will show only the tables/columns that correspond to the text you type in.
5. Select the check box(es) next to the column(s) you want to analyze and click OK to proceed to the next step.
The selected column(s) are displayed in the Left Columns panel of the Analyzed Columns Set view. In this
example, we select the city column as the determinant column.
6. Do the same to select the dependant column(s) or drag it/them from the DQ Repository tree view to the
Right Columns panel. In this example, we select the state_province column as the dependant column. This
relation will show if the state names match the listed city names.
If you right-click any of the listed columns in the Analyzed Columns view and select Show in DQ
Repository view, the selected column is automatically located under the corresponding connection in the
tree view.
7. Click the Reverse columns tab to automatically reverse the defined columns and thus evaluate the reverse
relation, that is, which city names match the listed state names.
You can connect to a different database by selecting another connection from the Connection box. This
box lists all the connections created in the Studio with the corresponding database names. If the columns listed in the
Analyzed Columns Set view do not exist in the new database connection you want to set, you will receive a warning
message that enables you to continue or cancel the operation.
8. Click the save icon on top of the editor, and then press F6 to execute the current analysis.
A progress information pop-up opens to confirm that the operation is in progress. The results of column
functional dependency analysis are displayed in the Analysis Results view.
The display of the Analysis Results view depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
This functional dependency analysis evaluated the records present in the city column and those present in the
state_province column against each other to see if the city names match the listed state names and vice
versa. The returned results, in the %Match column, indicate the functional dependency strength for each
determinant column. The records that do not match are indicated in red.
The #Match column in the result table lists the numbers of the distinct determinant values in each of the
analyzed columns. The #row column in the analysis results lists the actual relations between the determinant
attribute and the dependant attribute. In this example, #Match in the first row of the result table represents the
number of distinct cities, and #row represents the number of distinct pairs (city, state_province). Since these
two numbers are not equal, the functional dependency relationship here is only partial, and the ratio of
the numbers (%Match) measures the actual dependency strength. When these numbers are equal, you have
a "strict" functional dependency relationship, i.e. each city appears with one and only one state.
The presence of null values in either of the two analyzed columns will lessen the dependency strength. The system
does not ignore null values, but rather counts them as values that violate the functional dependency.
9. In the Analysis Results view, right-click any of the dependency lines and select:
Option To...
View valid/invalid rows access a list in the SQL editor of all valid/invalid rows measured according to the
functional dependencies analysis
View valid/invalid values access a list in the SQL editor of all valid/invalid values measured according to the
functional dependencies analysis
View detailed valid/invalid values access a detailed list in the SQL editor of all valid/invalid values measured
according to the functional dependencies analysis
From the SQL editor, you can save the executed query and list it under the Libraries > Source Files folders in the
DQ Repository tree view if you click the save icon on the editor toolbar. For more information, see section Saving
the queries executed on indicators.
Creating a column analysis from a simple table analysis
Prerequisite(s): A simple table analysis is defined in the analysis editor in the Profiling perspective of the studio.
To create a column analysis on one or more columns defined in a simple table analysis, do the following:
2. In the Analyzed Columns view, right-click the column(s) you want to create a column analysis on.
3. Select Column analysis from the contextual menu. The [New Analysis] wizard opens.
4. In the Name field, enter a name for the new column analysis and then click Finish.
The analysis editor opens with the defined metadata and a folder for the newly created analysis is listed under
the Analyses folder in the DQ Repository tree view.
5. Follow the steps outlined in section Analyzing columns in a database to continue creating the column analysis.
You can then execute the created analysis using the Java engine.
It is also possible to add patterns to this type of analysis and have a single-bar result chart that shows the number
of the rows that match all the patterns.
When carrying out this type of analysis, the set of columns to be analyzed must not include a primary key column.
3. Expand the Table Analysis folder and click Column Set Analysis.
Spaces are not accepted when typing in the analysis name in this field.
6. If required, set column analysis metadata (purpose, description and author name) in the corresponding fields
and click Next to proceed to the next step.
7. Expand the FileDelimited connection and browse to the set of columns you want to analyze.
8. Select the columns to be analyzed, and then click Finish to close this [New analysis] wizard.
The analysis editor opens with the defined analysis metadata, and a folder for the newly created analysis is
displayed under Analysis in the DQ Repository tree view.
The display of the analysis editor depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
9. If required, select another connection from the Connection box in the Analyzed Columns view. This box
lists all the connections created in the Studio with the corresponding database names.
By default, the delimited file connection you have selected in the previous step is displayed in the Connection
box.
10. If required, click the Select columns to analyze link to open a dialog box where you can modify your column
selection.
You can filter the table or column lists by typing the desired text in the Table filter or Column filter fields
respectively. The lists will show only the tables/columns that correspond to the text you type in.
11. In the column list, select the check boxes of the column(s) you want to analyze and click OK to proceed
to the next step.
In this example, you want to analyze a set of six columns in the delimited file: account number (account_num),
education (education), email (email), first name (fname), last name (lname) and gender (gender). You
want to identify the number of rows, the number of distinct and unique values and the number of duplicates.
12. If required, use the delete, move up or move down buttons to manage the analyzed columns.
If you right-click any of the listed columns in the Analyzed Columns view and select Show in DQ Repository view, the
selected column will be automatically located under the corresponding connection in the tree view.
Before being able to use a specific pattern with a set of columns analysis, you must manually set the pattern definition
for Java in the pattern settings, if it does not already exist. Otherwise, a warning message will be displayed prompting
you to set the definition of the Java regular expression.
Prerequisite(s): An analysis of a set of columns is open in the analysis editor in the studio. For more information,
see section How to define the set of columns to be analyzed.
1. Click the icon next to each of the columns you want to validate against a specific pattern.
You can drop the regular expression directly from the Patterns folder in the DQ Repository tree view to the
column name in the column analysis editor.
If no Java expression exists for the pattern you want to add, a warning message is displayed prompting you to add the
pattern definition for Java. Click Yes to open the pattern editor and add the Java regular expression, then proceed
to add the pattern to the analyzed columns.
In this example, you want to add a corresponding pattern to each of the analyzed columns to validate data in
these columns against the selected patterns. The result chart will show the percentage of the matching/non-
matching values, that is, the values that respect the totality of the used patterns.
2. In the [Pattern Selector] dialog box, expand Patterns and browse to the regular expression you want to add
to the selected column.
3. Select the check box(es) of the expression(s) you want to add to the selected column.
The added regular expression(s) display(s) under the analyzed column(s) in the Analyzed Columns view
and the All Match indicator is displayed in the Indicators list in the Indicators view.
Prerequisite(s): A column set analysis is defined in the Profiling perspective of the studio. For further information,
see section How to define the set of columns to be analyzed in a delimited file and section How to add patterns
to the analyzed columns in the delimited file.
The indicators representing the simple statistics are by default attached to this type of analysis. For further information
about the indicators for simple statistics, see section Simple statistics.
2. If required, click the option icon to open a dialog box where you can set options for each indicator. For
more information about indicators management, see section Indicators.
3. If required, click Data Filter in the analysis editor to display its view and filter data through SQL WHERE
clauses.
4. In the Analysis Parameters view, select the Allow drill down check box to store locally the data that will
be analyzed by the current analysis.
5. In the Max number of rows kept per indicator field, enter the number of the data rows you want to make
accessible.
The Allow drill down check box is selected by default, and the maximum analyzed data rows to be shown per indicator
is set to 50.
6. Click the save icon on top of the analysis editor and then press F6 to execute the analysis.
The Graphics panel to the right of the analysis editor displays the graphical result corresponding to the
Simple Statistics indicators used to analyze the defined set of columns.
When you use patterns to match the content of the columns to be analyzed, another graphic is displayed to
illustrate the match results against the totality of the used patterns.
6.3.1.4. How to access the detailed result view for the delimited file analysis
The procedure to access the detailed results for the delimited file analysis is the same as that for the database
analysis. For further information, see section How to access the detailed result view.
Prerequisite(s): A simple table analysis is defined in the analysis editor in the Profiling perspective of the studio.
To create a column analysis on one or more columns defined in the set of columns analysis, do the following:
2. In the Analyzed Columns view, right-click the column(s) you want to create a column analysis on.
3. Select Column analysis from the contextual menu. The [New Analysis] wizard opens.
4. In the Name field, enter a name for the new column analysis and then click Next to proceed to the next step.
The analysis editor opens with the defined metadata and a folder for the newly created analysis is displayed
under the Analyses folder in the DQ Repository tree view.
5. Follow the steps outlined in section Analyzing columns in a delimited file to continue creating the column
analysis on a delimited file.
You can then execute the created analysis using the Java engine.
3. Expand the Table Analysis folder and click Column Set Analysis.
Spaces are not accepted when typing in the analysis name in this field.
6. If required, set column analysis metadata (purpose, description and author name) in the corresponding fields
and click Next to proceed to the next step.
7. Expand MDM connections and browse to the set of attributes you want to analyze.
8. Select the attributes to be analyzed, and then click Finish to close this [New analysis] wizard.
The analysis editor opens with the defined analysis metadata, and a folder for the newly created analysis is
displayed under Analysis in the DQ Repository tree view.
The display of the analysis editor depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
9. If required, select another connection from the Connection box in the Analyzed Columns view. This box
lists all the connections created in the Studio with the corresponding database names.
By default, the connection you have selected in the previous step is displayed in the Connection box.
10. If required, click the Select columns to analyze link to open a dialog box where you can modify your column
selection.
When carrying out this type of analysis, the set of columns to be analyzed must not include a primary key column.
11. In the column list, select the check boxes of the attributes you want to analyze and click OK to proceed to
the next step.
12. If required, use the delete, move up or move down buttons to manage the analyzed columns.
Prerequisite(s): A column set analysis has been defined in the Profiling perspective of the studio. For further
information, see section How to define the set of columns to be analyzed on the MDM server.
The indicators representing the simple statistics are by default attached to this type of analysis. For further information
about the indicators for simple statistics, see section Simple statistics.
2. If required, click the option icon to open a dialog box where you can set options for each indicator. For
more information about indicators management, see section Indicators.
3. In the Analysis Parameters view, select the Allow drill down check box to store locally the data that will
be analyzed by the current analysis.
4. In the Max number of rows kept per indicator field, enter the number of the data rows you want to make
accessible.
The Allow drill down check box is selected by default, and the maximum analyzed data rows to be shown per indicator
is set to 50.
5. Click the save icon on top of the analysis editor and then press F6 to execute the analysis.
The Graphics panel to the right of the analysis editor displays the graphical result corresponding to the
Simple Statistics indicators used to analyze the defined set of columns.
Prerequisite(s): A column set analysis has been defined in the Profiling perspective of the studio. For further
information, see section How to define the set of columns to be analyzed on the MDM server.
To create a column analysis on one or more columns defined in the column set analysis, do the following:
2. In the Analyzed Columns view, right-click the column(s) you want to create a column analysis on.
3. Select Column analysis from the contextual menu. The [New Analysis] wizard opens.
4. In the Name field, enter a name for the new column analysis and then click Next to proceed to the next step.
The analysis editor opens with the defined metadata and a folder for the newly created analysis is displayed
under the Analyses folder in the DQ Repository tree view.
5. Follow the steps outlined in section Analyzing master data on an MDM server to continue creating the column
analysis.
Before starting data profiling management procedures, you need to be familiar with the studio Graphical User
Interface (GUI). For more information, see appendix The studio management GUI.
Comparing the content of identical columns in different tables.
Matching foreign keys in one table to primary keys in the other table and vice versa.
The sections below provide detailed information about these two types of redundancy analyses.
The number of the analyses created in the Profiling perspective of the studio is indicated next to the Analyses folder in
the DQ Repository tree view.
Prerequisite(s): At least one database connection is set in the Profiling perspective of the studio. For further
information, see section Connecting to a database.
3. Expand the Redundancy Analysis node and then select Column Content Comparison.
4. Click Next.
6. Set the analysis metadata (purpose, description and author name) in the corresponding fields and then click
Next.
1. Expand DB connections and in the desired database, browse to the columns you want to analyze, select them
and then click Finish to close the wizard.
A file for the newly created analysis is listed under the Analysis folder in the DQ Repository tree view. The
analysis editor opens with the defined analysis metadata.
The display of the analysis editor depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
2. Click Analyzed Column Sets to open the view where you can set the columns or modify your selection.
In this example, you want to compare identical columns in the account and account_back tables.
3. From the Connection box, select the database connection relevant to the database to which you want to
connect.
This box lists all the connections created in the Studio with the corresponding database names.
4. Click Select columns for the A set to open the [Column Selection] dialog box.
5. Expand DB Connections and then browse through the catalogs/schemas to reach the table holding the
columns you want to analyze.
You can filter the table or column lists by typing the desired text in the Table filter or Column filter fields
respectively. The lists will show only the tables/columns that correspond to the text you type in.
6. Click the table name to list all its columns in the right-hand panel of the [Column Selection] dialog box.
7. In the list to the right, select the check boxes of the column(s) you want to analyze and click OK to proceed
to the next step.
You can drag the columns to be analyzed directly from the DQ Repository tree view to the editor.
If you right-click any of the listed columns in the Analyzed Columns view and select Show in DQ Repository view,
the selected column will be automatically located under the corresponding connection in the tree view.
8. Click Select Columns from the B set and follow the same steps to select the second set of columns or drag
it to the right column panel.
9. Select the Compute only number of A rows not in B check box if you want to match the data from the A
set against the data from the B set and not vice versa. A sketch of what this option computes is given after
this procedure.
10. Click Data Filter in the analysis editor to open the view where you can set a filter on each of the column sets.
11. Click the save icon on top of the editor and then press F6 to execute the column comparison analysis.
12. Read the confirmation message and click OK if you want to continue the operation.
In this example, 72.73% of the data present in the columns in the account table could be matched with the
same data in the columns in the account_back table.
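As a hedged sketch of what the Compute only number of A rows not in B option of step 9 computes (account_id
is a hypothetical name standing for the compared columns), the count corresponds to a query of the following
form:

    SELECT COUNT(*)
    FROM account A
    WHERE NOT EXISTS
        (SELECT 1 FROM account_back B WHERE B.account_id = A.account_id)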
Through this view, you can also access the actual analyzed data via the Data Explorer.
To access the analyzed data rows, right-click any of the lines in the table and select:
Option To...
View match rows access a list of all rows that could be matched in the two identical column sets
View not match rows access a list of all rows that could not be matched in the two identical column sets
View rows access a list of all rows in the two identical column sets
The data explorer does not support connections which have an empty user name, such as Single sign-on of MS SQL
Server. If you analyze data using such a connection and you try to view data rows in the Data Explorer perspective, a
warning message prompts you to set your connection credentials to the SQL Server.
The figure below illustrates the data explorer list of all rows that could be matched in the two sets, eight in this
example.
From the SQL editor, you can save the executed query and list it under the Libraries > Source Files folders in the DQ
Repository tree view if you click the save icon on the editor toolbar. For more information, see section Saving the queries
executed on indicators.
The figure below illustrates the data explorer list of all rows that could not be matched in the two sets, three in
this example.
For more information about the data explorer Graphical User Interface, see appendix Data Explorer management
GUI.
Prerequisite(s): At least one database connection is set in the Profiling perspective of the studio. For further
information, see section Connecting to a database.
3. Expand the Redundancy Analysis folder and select Column Content Comparison.
4. Click Next.
Spaces are not accepted when typing in the analysis name in this field.
6. Set the analysis metadata (purpose, description and author name) in the corresponding fields and then click
Finish.
A file for the newly created analysis is displayed under the Analysis folder in the DQ Repository tree view.
The analysis editor opens with the defined analysis metadata.
In this example, you want to match the foreign keys in the customer_id column of the sales_fact_1998 table
with the primary keys in the customer_id column of the customer table, and vice versa. This will explore the
relationship between the two tables and show, for example, whether every customer placed an order in 1998.
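As a hedged sketch, the two directions of this key-matching analysis correspond to queries of the following form;
the analysis runs such checks for you, and the queries are given only to clarify what is being counted:

    -- foreign keys in sales_fact_1998 without a matching primary key in customer
    SELECT COUNT(*)
    FROM sales_fact_1998 S
    WHERE NOT EXISTS
        (SELECT 1 FROM customer C WHERE C.customer_id = S.customer_id)

    -- primary keys in customer without a matching foreign key in sales_fact_1998
    SELECT COUNT(*)
    FROM customer C
    WHERE NOT EXISTS
        (SELECT 1 FROM sales_fact_1998 S WHERE S.customer_id = C.customer_id)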
2. From the Connection box, select the database connection relevant to the database to which you want to
connect. This box lists all the connections created in the Studio with the corresponding database names.
3. Click Select columns for the A set to open the [Column Selection] dialog box.
If you want to check the validity of the foreign keys, select the column holding the foreign keys for the A set and the
column holding the primary keys for the B set.
4. Expand the DB Connections folder and browse through the catalogs/schemas to reach the table holding the
column you want to match. In this example, the column to be analyzed is customer_id that holds the foreign
keys.
You can filter the table or column lists by typing the desired text in the Table filter or Column filter fields
respectively. The lists will show only the tables/columns that correspond to the text you type in.
5. Click the table name to display all its columns in the right-hand panel of the [Column Selection] dialog box.
6. In the list to the right, select the check box of the column holding the foreign keys and then click OK to
proceed to the next step.
You can drag the columns to be analyzed directly from the DQ Repository tree view to the editor.
If you right-click any of the listed columns in the Analyzed Columns view and select Show in DQ Repository view,
the selected column will be automatically located under the corresponding connection in the tree view.
7. Click Select Columns from the B set and follow the same steps to select the column holding the primary
keys or drag it from the DQ Repository to the right column panel.
If you select the Compute only number of rows not in B check box, you will look for any missing primary keys
in the column in the B set.
8. Click Data Filter in the analysis editor to display the view where you can set a filter on each of the analyzed
columns.
9. Click the save icon on top of the editor, and then press F6 to execute this key-matching analysis. A
confirmation message is displayed.
10. Read the confirmation message and click OK if you want to continue the operation.
The execution of this type of analysis may take some time. Wait until the Analysis Results view opens automatically
showing the analysis results.
In this example, every foreign key in the sales_fact_1998 table is identified with a primary key in the customer
table. However, 98.22% of the primary keys in the customer table could not be identified with foreign keys
in the sales_fact_1998 table. These primary keys are for the customers who did not order anything in 1998.
Through this view, you can also access the actual analyzed data via the data explorer.
To access the analyzed data rows, right-click any of the lines in the table and select:
Option To...
View match rows access a list of all rows that could be matched in the two identical column sets
View not match rows access a list of all rows that could not be matched in the two identical column sets
View rows access a list of all rows in the two identical column sets
The data explorer does not support connections which have an empty user name, such as Single sign-on of MS SQL
Server. If you analyze data using such a connection and you try to view data rows in the Data Explorer perspective, a
warning message prompts you to set your connection credentials to the SQL Server.
The figure below illustrates the data explorer list of all analyzed rows in the two columns.
From the SQL editor, you can save the executed query and list it under the Libraries > Source Files folders in the DQ
Repository tree view if you click the save icon on the editor toolbar. For more information, see section Saving the queries
executed on indicators.
For more information about the data explorer Graphical User Interface, see appendix Data Explorer management
GUI.
Before starting data profiling management procedures, you need to be familiar with the studio Graphical User
Interface (GUI). For more information, see appendix The studio management GUI.
It is very important to make the distinction between column correlation analyses and all other types of data quality
analyses. Column correlation analyses are usually used to explore relationships and correlations in data and not
to provide statistics about the quality of data.
Several types of column correlation analysis are possible. For more information, see section Creating numerical
correlation analysis, section Creating time correlation analysis and section Creating nominal correlation analysis.
For more information about the use of data mining types in the studio, see section Data mining types.
The number of the analyses created in the studio is indicated next to the Analyses folder in the DQ Repository tree view.
A bubble chart is created for each selected numeric column. In a bubble chart, each bubble represents a distinct
record of the nominal column. For example, a nominal column called outlook with 3 distinct nominal instances:
sunny (11 records), rainy (16 records) and overcast (4 records) will generate a bubble chart with 3 bubbles.
The second column in this example is the temperature column where temperature is in degrees Celsius. The
analysis in this example will show the correlation between the outlook and the temperature columns and will give
the result in a bubble chart. The vertical axis represents the average of the numeric column and the horizontal
axis represents the number of records of each nominal instance. The average temperature would be 23.273 for the
"sunny" instances, 7.5 for the "rainy" instances and 18.5 for the "overcast" instances.
The two things to pay attention to in such a chart are the position of the bubble and its size.
Usually, outlier bubbles must be further investigated. The closer the bubble is to the left axis, the less confident
we are in the average of the numeric column. For example, the overcast nominal instance here has only 4 records,
hence the bubble is near the left axis. We cannot be confident in an average computed from only 4 records. When
looking for data quality issues, these bubbles could indicate problematic values.
The bubbles near the top of the chart and those near the bottom of the chart may suggest data quality issues too:
an average temperature that is too high or too low could indicate badly measured temperatures.
The size of the bubble represents the number of null numeric values: the more null values there are in the interval
column, the bigger the bubble.
When several nominal columns are selected, the order of the columns plays a crucial role in this analysis. A series
of bubbles (with one color) is displayed for the average temperature and the weather. Another series of bubbles is
displayed for the average temperature and each record of any other nominal column.
Prerequisite(s): At least one database connection is set in the Profiling perspective of the studio. For further
information, see section Connecting to a database.
3. Expand the Column Correlation Analysis node and select Numerical Correlation Analysis.
4. Click Next.
6. Set the analysis metadata (purpose, description and author name) in the corresponding fields and then click
Next.
A folder for the newly created analysis is listed under Analysis in the DQ Repository tree view, and the
analysis editor opens with the defined analysis metadata.
The display of the analysis editor depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
3. From the Connection box, select the database to which you want to connect. This box lists all the connections
created in the Studio with the corresponding database names.
You can change your database connection by selecting another connection from the Connection box. If the columns
listed in the Analyzed Columns view do not exist in the new database connection you want to set, you will receive
a warning message that enables you to continue or cancel the operation.
4. Click Select columns to analyze to open the [Column Selection] dialog box.
5. Expand DB Connections and browse the catalogs/schemas in your database connection to reach the table
that holds the column(s) you want to analyze.
You can filter the table or column lists by typing the desired text in the Table filter or Column filter fields
respectively. The lists will show only the tables/columns that correspond to the text you type in.
6. Click the table name to list all its columns in the right-hand panel of the [Column Selection] dialog box.
7. In the column list, select the check boxes of the column(s) you want to analyze and click OK to proceed
to the next step.
In this example, you want to compute the average age of the personnel of different enterprises located in
different states. The columns to be analyzed are therefore AGE, COMPANY and STATE.
The selected columns are displayed in the Analyzed Column view of the analysis editor.
You can drag the columns to be analyzed directly from the corresponding database connection in the DQ Repository
tree view into the Analyzed Columns area.
If you right-click any of the listed columns in the Analyzed Columns view and select Show in DQ Repository view,
the selected column will be automatically located under the corresponding connection in the tree view.
8. Click Indicators in the analysis editor and then click the option icon to open a dialog box where you
can set thresholds for each indicator.
The indicators representing the simple statistics are by default attached to this type of analysis.
9. Click Data Filter in the analysis editor to open the view where you can set a filter on the data of the analyzed
columns.
10. Click the save icon on top of the editor and then press F6 to execute the current analysis.
The graphical result is displayed in the Graphics panel to the right of the editor.
The data plotted in the bubble chart have different colors with the legend pointing out which color refers
to which data.
place the pointer on any of the bubbles to see the correlated data values at that position, or right-click a bubble and select:
Option To...
Show in full screen open the generated graphic in a full screen
View rows access a list of all analyzed rows in the selected position
The figure below illustrates an example of the SQL editor listing the correlated data values at the selected position.
From the SQL editor, you can save the executed query and list it under the Libraries > Source Files folders in the DQ
Repository tree view if you click the save icon on the editor toolbar. For more information, see section Saving the queries
executed on indicators.
For more information on the bubble chart, see the section below.
To access a more detailed view of the analysis results of the procedure outlined in section Creating numerical
correlation analysis, do the following:
1. Click the Analysis Results tab at the bottom of the analysis editor to open the corresponding view.
2. Click Analysis Result to see more details of the analysis results in three different views: Graphics,
Simple Statistics and Data.
The display of the Analysis Results view depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
3. Click Graphics, Simple Statistics or Data to show the generated graphic, the number of the analyzed records
or the actual analyzed data respectively.
In the Graphics view, the data plotted in the bubble chart have different colors with the legend pointing out which
color refers to which data.
The closer the bubble is to the left axis, the less confident we are in the average of the numeric column. For the
selected bubble in the above example, the company name is missing and there are only two data records, hence
the bubble is near the left axis. We cannot be confident about an age average computed from only two records.
When looking for data quality issues, these bubbles could indicate problematic values.
The bubbles near the top of the chart and those near the bottom of the chart may suggest data quality issues too:
an age average that is too high or too low in the above example.
clear the check box of the value(s) you want to hide in the bubble chart,
place the pointer on any of the bubbles to see the correlated data values at that position, or right-click a bubble and select:
Option To...
Show in full screen open the generated graphic in a full screen
View rows access a list of all analyzed rows in the selected column
The Simple Statistics view shows the number of the analyzed records falling in certain categories, including the
number of rows, the number of distinct and unique values and the number of duplicates.
You can sort the data listed in the result table by simply clicking any column header in the table.
Prerequisite(s): At least one database connection is set in the Profiling perspective of the studio. For further
information, see section Connecting to a database.
3. Expand the Column Correlation Analysis folder and select Time Correlation Analysis.
4. Click Next.
6. Set the analysis metadata (purpose, description and author name) in the corresponding fields and then click
Next.
1. Expand DB connections and in the desired database, browse to the columns you want to analyze, select them
and then click Finish.
A folder for the newly created analysis is displayed under Analysis in the DQ Repository tree view, and the
time analysis editor opens with the defined analysis metadata.
The display of the analysis editor depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
3. From the Connection box, select the database to which you want to connect. This box lists all the connections
created in the Studio with the corresponding database names.
You can change your database connection by selecting another connection from the Connection box. If the columns
listed in the Analyzed Columns view do not exist in the new database connection you want to set, you will receive
a warning message that enables you to continue or cancel the operation.
4. Click Select columns to analyze to open the [Column Selection] dialog box and select the columns, or drag
them directly from the DQ Repository tree view into the Analyzed Columns view.
If you right-click any of the listed columns in the Analyzed Columns view and select Show in DQ Repository view,
the selected column will be automatically located under the corresponding connection in the tree view.
5. If required, click Indicators in the analysis editor to display the indicators used in the current time correlation
analysis.
6. Click Data Filter in the analysis editor to display the view where you can set a filter on the analyzed column
set.
7. Click the save icon on top of the editor and press F6 to execute the time correlation analysis.
This Gantt chart displays a range showing the minimal and maximal birth dates for each country listed in the
selected nominal column. It also highlights the range bars that contain null values for birth dates.
For example, in the above chart, the minimal birth date for Mexico is 1910 and the maximal is 2000. And of all
the data records where the country is Mexico, 41 records have null value as birth date.
In the chart, you can place the pointer on any of the range bars to display the correlated data values at that position,
or put the pointer on a specific birth date and drag it to another birth date to change the chart and show the minimal
and maximal birth dates related only to your selection. You can also right-click the graphic to use the following options:
Option To...
Show in full screen open the generated graphic in a full screen
View rows access a list of all analyzed rows in the selected nominal column
The below figure illustrates an example of the SQL editor listing the correlated data values at the selected range bar.
From the SQL editor, you can save the executed query and list it under the Libraries > Source Files folders in the DQ
Repository tree view if you click the save icon on the editor toolbar. For more information, see section Saving the queries
executed on indicators.
For more information on the Gantt chart, see the below section.
To access a more detailed view of the analysis results of the procedure outlined in section Creating time correlation
analysis, do the following:
1. Click the Analysis Results tab at the bottom of the analysis editor to open the corresponding view.
2. Click Analysis Result to display the more detailed analysis results in the three different views: Graphics,
Simple Statistics and Data.
The display of the Analysis Results view depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
3. Click Graphics, Simple Statistics or Data to show the generated graphic, the number of the analyzed records
or the actual analyzed data respectively.
In the Graphics view, you can clear the check box of the value(s) you want to hide in the chart.
You can also display a specific birth date range by placing the pointer at the start value you want to show and
dragging it to the end value you want to show.
Place the pointer on any of the range bars to display the correlated data values at that position, or right-click
the graphic to use the following options:
Option To...
Show in full screen open the generated graphic in a full screen
View rows access a list of all analyzed rows in the selected column
The Simple Statistics view shows the number of the analyzed records falling in certain categories, including the
number of rows, the number of distinct and unique values and the number of duplicates.
You can sort the data listed in the result table by simply clicking any column header in the table.
In the chart, each column will be represented by a node that has a given color. The correlations between the
nominal values are represented by lines: the thicker the line is, the weaker the association is, so thicker lines can
identify problems or correlations that need special attention. However, you can always invert the edge weight,
that is, give larger edge thickness to higher correlations, by selecting the Inverse Edge Weight check box below
the nominal correlation chart.
The correlations in the chart are always pairwise correlations: they show associations between pairs of columns.
Prerequisite(s): At least one database connection is set in the Profiling perspective of the studio. For further
information, see section Connecting to a database.
4. Click Next.
6. Set the analysis metadata (purpose, description and author name) in the corresponding fields and then click
Next.
1. Expand DB connections and in the desired database, browse to the columns you want to analyze, select them
and then click Finish to close the wizard.
A folder for the newly created analysis is displayed under Analysis in the DQ Repository tree view, and the
analysis editor opens with the defined analysis metadata.
The display of the analysis editor depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
3. From the Connection box, select the database to which you want to connect.
This box lists all the connections created in the Studio with the corresponding database names.
You can change your database connection by selecting another connection from the Connection box. If the columns
listed in the Analyzed Columns view do not exist in the new database connection you want to set, you will receive
a warning message that enables you to continue or cancel the operation.
4. Click Select columns to analyze to open the [Column Selection] dialog box and select as many nominal
columns as you want, or drag them directly from the DQ Repository tree view.
If you select too many columns, the analysis result chart will be very difficult to read.
If you right-click any of the listed columns in the Analyzed Columns view and select Show in DQ Repository view,
the selected column will be automatically located under the corresponding connection in the tree view.
5. If required, click Indicators in the analysis editor to display the indicators used in the current nominal
correlation analysis.
6. Click Data Filter in the analysis editor to display the view where you can set a filter on the data of the
analyzed columns.
7. Click the save icon on top of the editor and then press F6 to execute the nominal correlation analysis. The
graphical result is displayed in the Graphics panel to the right of the editor.
In the above chart, each value in the country and marital-status columns is represented by a node that has a
given color. Nominal correlation analysis is carried out to see the relationship between the number of married
or single people and the country they live in. Correlations are represented by lines.
To better view the graphical result of the nominal correlation analysis, right-click the graphic in the Graphics panel
and select Show in full screen. For more information on the chart, see the below section.
To access a more detailed view of the analysis results of the procedure outlined in section Creating nominal
correlation analysis, do the following:
1. Click the Analysis Results tab at the bottom of the analysis editor to open the corresponding view.
2. Click Analysis Result to display the more detailed analysis results in three different views: Graphics,
Simple Statistics and Data.
The display of the Analysis Results view depends on the parameters you set in the [Preferences] window. For more
information, see section Setting preferences of analysis editors and analysis results.
3. Click Graphics, Simple Statistics or Data to show the generated graphic, the number of the analyzed records
or the actual data respectively.
The Graphics view shows the generated graphic for the analyzed columns.
In the above chart, each value in the country and marital-status columns is represented by a node that has a given
color. Nominal correlation analysis is carried out to see the relationship between the number of married or single
people and the country they live in. Correlations are represented by lines; the thicker the line is, the higher the
association is, provided that the Inverse Edge Weight check box is selected.
The buttons below the chart help you manage the chart display. The following table describes these buttons and
their usage:
Button Description
Filter Edge Weight Move the slider to the right to filter out edges with small weight and visualize the more
important edges.
plus and minus Click the [+] or [-] buttons to respectively zoom in and zoom out the chart size.
Reset Click to put the chart back to its initial state.
Inverse Edge Weight By default, the thicker the line is, the weaker the correlation is. Select this check box to
invert the current edge weight, that is, give larger edge thickness to higher correlations.
Picking Select this check box to be able to pick any node and drag it anywhere in the chart.
Save Layout Click this button to save the chart layout.
Restore Layout Click this button to restore the chart to its previously saved layout.
The Simple Statistics view shows the number of the analyzed records falling in certain categories, including the
number of rows, the number of distinct and unique values and the number of duplicates.
You can sort the data listed in the result table by simply clicking any column header in the table.
Before starting data profiling management procedures, you need to be familiar with the studio Graphical User
Interface (GUI). For more information, see appendix The studio management GUI.
9.1. Patterns
Patterns are sets of strings against which you can match the content of the columns to be analyzed.
Regular expressions (regex) are predefined patterns that you can use to search and manipulate text in the databases
to which you connect. You can also create your own regular expressions and use them to analyze columns.
SQL patterns are personalized patterns used in SQL queries. These patterns usually contain the percent
sign (%). For more information on SQL wildcards, see http://www.w3schools.com/SQL/sql_wildcards.asp.
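To make the behavior concrete, here is a minimal sketch of how such a pattern acts in an SQL query; the customers table and email column are hypothetical names used only for illustration:

-- The % wildcard matches any sequence of characters, so this SQL
-- pattern flags simple email-like values:
SELECT email,
       CASE WHEN email LIKE '%@%.%' THEN 'matches' ELSE 'no match' END AS result
FROM customers;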
You can use any of the above two pattern types either with column analyses or with the analyses of a set of columns
(simple table analyses). These pattern-based analyses illustrate the frequencies of various data patterns found in
the values of the analyzed columns. For more information, see section Analyzing columns in a database and section
How to create an analysis of a set of columns using patterns.
From the studio, you can generate graphs to represent the results of analyses using patterns. You can also view
tables in the Analysis Results view that describe the generated graphs in words. From those graphs and analysis
results you can easily determine the percentage of invalid values based on the listed patterns. For more information,
see section Tab panel of the analysis editors.
Management processes for SQL patterns and regular expressions, including those for Java, are the same. For more
information, see section Managing regular expressions and SQL patterns.
Some databases do not support regular expressions. To work with such databases, some configuration is necessary before
being able to use regular expressions. For more information, see section Managing User-Defined Functions in databases.
A different case is when the regular expression function is built into the database but the query template of the
regular expression indicator is not defined. The sections below explain how to:
extend the functionality of certain database servers to support the regular expression function. For more
information, see section How to declare a User-Defined Function in a specific database.
define the query template for a database that supports the regular expression function. For more information,
see section How to define a query template for a specific database.
Either:
1. Install the relevant regular expressions libraries on the database. For an example of creating a regular
expression function on a database, see appendix Regular expressions on SQL Server.
2. Create a query template for the database in the studio. For more information, see section How to define a
query template for a specific database.
Or:
Execute the column analysis using the Java engine. In this case, the system will use the Java regular
expressions to analyze the specified column(s) and not SQL regular expressions. For more information on
the Java engine, see section Using the Java or the SQL engine.
You can also set a database-specific regular expression if the expression is not simple enough to be used with all databases.
The below example shows how to define a query template specific to a given database (Ingres in this example).
Appendix Regular expressions on SQL Server gives a detailed example on how to create a user-defined regular
expression function on an SQL server.
3. Double-click Regular Expression Matching, or right-click it and select Open from the contextual menu.
The corresponding view is displayed to show the indicator metadata and its definition.
You need now to add to the list of databases the database for which you want to define a query template. This
query template will compute the regular expression matching.
4. Click the [+] button at the bottom of the Indicator Definition view to add a field for the new template.
5. In the new field, click the arrow and select the database for which you want to define the template. In this
example, select Ingres.
8. Paste the indicator definition (template) in the Expression box and then modify the text after WHEN in order
to adapt the template to the selected database. In this example, replace the text after WHEN with WHEN REGEX.
9. Click OK to proceed to the next step. The new template is displayed in the field.
10. Click the save icon on top of the editor to save your changes.
You have finalized creating the query template specific for the Ingres database. You can now start analyzing the
columns in this database against regular expressions.
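For illustration, a regex-matching query template generally returns both the number of matching rows and the total row count, in line with the matching indicators described later in this chapter. The sketch below is generic, not the studio's actual template: the angle-bracket placeholders are hypothetical stand-ins, and the matching function varies by database.

-- Count the rows matching a regular expression together with the total row count:
SELECT COUNT(CASE WHEN REGEXP_LIKE(<column>, '<regular expression>') THEN 1 END) AS matching_count,
       COUNT(*) AS total_count
FROM <table>;

-- REGEXP_LIKE is Oracle syntax; other databases use operators such as
-- REGEXP (MySQL) or a user-defined function (SQL Server).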
If the regular expression you want to use to analyze data on this server is simple enough to be used with all
databases, you can start your column analyses immediately. If not, you must edit the definition of the regular
expression to work with this specific database, Ingres in this example.
For more information on how to set the database-specific regular expression definition, see section How to edit a
regular expression or an SQL pattern and section How to duplicate a regular expression or an SQL pattern.
3. Double-click Regular Expression Matching, or right-click it and select Open from the contextual menu.
The corresponding view is displayed to show the indicator metadata and its definition.
4. Click the button next to the database for which you want to edit the query template.
5. In the Expression area, edit the regular expression template as required and then click OK to close the dialog
box and proceed to the next step.
3. Double-click Regular Expression Matching, or right-click it and select Open from the contextual menu.
The corresponding view is displayed to show the indicator metadata and its definition.
4. Click the button next to the database for which you want to delete the query template.
The selected query template is deleted from the list in the Indicator definition view.
You can also edit the regular expression or SQL pattern parameters after attaching it to a column analysis. For
more information, see section How to edit a pattern in the column analysis.
After the execution of the column analysis that uses a specific expression or pattern, you can:
access a list of all valid/invalid data in the analyzed column. For more information, see section How to view
the data analyzed against patterns.
The sections below explain in detail each of the management options for regular expressions and SQL patterns.
Management processes for both types of patterns are exactly the same.
Management processes for regular expressions and SQL patterns are the same. The procedure below with all the included
screen captures reflects the steps to create a regular expression. You can follow the same steps to create an SQL pattern.
1. In the DQ Repository tree view, expand Libraries > Patterns, and then right-click Regex.
2. From the contextual menu, select New regular pattern to open the corresponding wizard.
When you open the wizard, a help panel automatically opens with the wizard. This help panel guides you through the
steps of creating new regular patterns.
3. In the Name field, enter a name for this new regular expression.
4. If required, set other metadata (purpose, description and author name) in the corresponding fields and click
Next to proceed to the next step.
5. In the Regular expression field, enter the syntax of the regular expression to be created. The regular
expression must be surrounded by single quotes, for example '^[A-Z][a-z]+$'.
6. From the Language Selection list, select the language (a specific database or Java).
A sub-folder for this new regular expression is listed under the Regex folder in the DQ Repository tree view,
and the pattern editor opens with the defined metadata and the defined regular expression.
8. In the Pattern Definition view, click the [+] button and add as many regular expressions as necessary in
the new pattern.
You can define the regular expressions specific to any of the available databases or specific to Java.
If the regular expression is simple enough to be used in all databases, select Default from the list.
Sub-folders labeled with the specified database types or Java are listed below the name of the new pattern
under the Patterns folder in the DQ Repository tree view.
Once the pattern is created, you can drop it directly onto a database column in the open analysis editor.
10. If required, click the pattern name to display its detail in the Detail View in the Studio.
In the pattern editor, you can click Test next to the regular expression to test the regular pattern definition. For more
information, see section How to test a regular expression in the Pattern Test View. Also, from the [Pattern Test View],
you can create a new pattern based on the regular expression you are testing. For further information, see section
How to create a new pattern from the Pattern Test View.
Prerequisite(s): At least one database connection is set in the Profiling perspective of the studio.
1. Follow the steps outlined in section How to create a new regular expression or SQL pattern to create a new
regular expression.
2. In the open pattern editor, click Pattern Definition to open the relevant view.
3. Click the Test button next to the definition against which you want to test a character sequence to proceed
to the next step.
The test view is displayed in the Studio showing the selected regular expression.
4. In the Test Area, enter the character sequence you want to check against the regular expression.
5. From the DB Connection list, select the database in which you want to use the regular expression.
If you select to test a regular expression in Java, the Java option will be selected by default and the DB Connections
option and list will be unavailable in the test view.
6. Click Test.
An icon is displayed in the upper left corner of the view to indicate if the character sequence matches or does
not match the selected pattern definition.
7. If required, modify the regular expression according to your needs and then click Save to save your
modifications.
You can create/modify patterns directly from the Pattern Test View via the Create Pattern button. For further information,
see section How to create a new pattern from the Pattern Test View.
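Conceptually, testing against a database amounts to evaluating the pattern on a literal character sequence in that database's own regular expression dialect. A rough sketch of the idea in MySQL syntax (this is not the studio's actual test query):

-- Returns 1 if the character sequence matches the pattern, 0 otherwise:
SELECT 'Brayan' REGEXP '^[A-Z][a-z]+$' AS matches;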
9.1.4.3. How to create a new pattern from the Pattern Test View
You can create your own customized patterns from the [Pattern Test View].
The advantage of creating a pattern from this view is that you can create your customized pattern based on an
already tested regular expression. All you need to do is to customize the expression definition according to your
needs and save it to create a new pattern.
To create a new pattern based on a predefined or a newly created regular expression, do the following:
1. In the DQ Repository tree view, expand Libraries > Patterns > Regex and double-click the pattern you
want to use to create your customized pattern to open the pattern editor.
2. Click Test next to the definition you want to use as a base to create the new pattern.
The [Pattern Test View] is opened on the definition of the selected regular expression.
3. If required, test the regular expression through entering text in the Test Area. For further information, see
section How to test a regular expression in the Pattern Test View.
5. In the Name field, enter a name for this new regular expression.
6. If required, set other metadata (purpose, description and author name) in the corresponding fields and click
Next to proceed to the next step.
The definition of the initial regular expression is already listed in the Regular expression field.
7. Customize the syntax of the initial regular expression according to your needs. The regular expression
definition must be surrounded by single quotes.
8. From the Language Selection list, select the database in which you want to use the new regular expression.
A sub-folder for the new pattern is listed under the Regex folder in the same location as the initial regular pattern.
The pattern editor opens on the pattern metadata and pattern definition.
Once the new pattern is created, you can drop it onto a column in the open analysis editor.
Prerequisite(s): In the Profiling perspective of the studio, a column analysis is created on a date column using
the Date Pattern Frequency Table indicator.
To be able to use the Date Pattern Frequency Table indicator on date columns, you must set the execution engine to Java in
the Analysis Parameter view of the column analysis editor. For more information on execution engines, see section Using
the Java or the SQL engine.
For more information on how to create a column analysis, see section Analyzing columns in a database.
To generate a regular expression from the results of a column analysis, do the following:
1. In the DQ Repository tree view, right-click the column analysis that uses the date indicator on a date column.
2. Select Open from the contextual menu to open the corresponding analysis editor.
3. Press F6 to execute the analysis and display the analysis results in the Graphics panel to the right of the
Studio.
4. At the bottom of the editor, click the Analysis Results tab to display a more detailed result view.
In this example, 100.00% of the date values follow the pattern yyyy MM dd and 39.41% follow the pattern
yyyy dd MM.
5. Right-click the date value for which you want to generate a regular expression and select Generate Regular
Pattern from the contextual menu.
The pattern editor opens with the defined metadata and the generated pattern definition.
The new regular expression is listed under Pattern > Regex in the DQ Repository tree view. You can drag
it onto any date column in the analysis editor.
8. If required, click the Test button to test a character sequence against this date regular expression as outlined
in the following section.
2. Browse through the regular expression or SQL pattern lists to reach the expression or pattern you want to
open/edit.
3. Right-click its name and select Open from the contextual menu.
The pattern editor opens displaying the regular expression or SQL pattern settings.
4. Modify the pattern metadata, if required, and then click Pattern Definition to display the relevant view. In
this view, you can: edit pattern definition, change the selected database and add other patterns specific to
available databases through the [+] button.
5. If the regular expression or SQL pattern is simple enough to be used in all databases, select Default in the list.
6. Click the save icon on top of the editor to save your changes.
You can test regular expressions before you start using them against data in the specified database. For more information, see
section How to test a regular expression in the Pattern Test View.
When you edit a regular expression or an SQL pattern, make sure that your modifications are suitable for all the analyses
that may be using this regular expression or SQL pattern.
Management processes for regular expressions and SQL patterns are the same. The procedure below with all the included
screen captures reflects the steps to export regular expressions. You can follow the same steps to export SQL patterns.
1. In the DQ Repository tree view, expand Libraries > Patterns, and then right-click Regex.
4. Click Select All to select all listed regular expressions or select the check boxes of the regular expressions
you want to export to the csv file.
All exported regular expressions are saved in the defined csv file.
1. In the DQ Repository tree view, expand Libraries > Patterns, and then browse to the regular expression
family you want to export.
3. Click Select All to select all the check boxes of the regular expressions or select the check boxes of the regular
expressions you want to export to the csv file.
All exported regular expressions are saved in the defined csv file.
You can export regular expressions or SQL patterns from your current version of studio to Talend Exchange
where you can share them with other users.
Management processes for regular expressions and SQL patterns are the same. The procedure below with all the included
screen captures reflects the steps to export regular expressions to Talend Exchange. You can follow the same steps to export
SQL patterns to Talend Exchange.
4. Click Select All to select all the regular expressions in the list or select the check boxes of the regular
expressions you want to export to the specified folder.
A distinct csv file is created for each exported regular expression. Each csv file is compressed as a zip.
All these zip files are saved in the defined folder. You now need to upload them to Talend Exchange at
http://www.talendforge.org/exchange/top/help_guest.php. For information about how to upload a file to Talend
Exchange, see Talend Open Studio for Data Integration User Guide.
1. In the DQ Repository tree view, expand Libraries > Patterns, and then browse to the regular expression
you want to export.
2. Right-click it and then select Export for Talend Exchange from the contextual menu.
3. Click Select All to select all the regular expressions in the list, or select the check boxes of the regular
expressions or SQL patterns you want to export to the folder.
A distinct csv file is created for each exported regular expression or SQL pattern. Each csv file is compressed as
a zip. All these zip files are saved in the defined folder.
Management processes for regular expressions and SQL patterns are the same. The procedure below with all the included
screen captures reflects the steps to import regular expressions. You can follow the same steps to import SQL patterns.
Option To...
skip existing patterns import only the regular expressions that do not exist in the corresponding lists in the DQ
Repository tree view. A warning message is displayed if the imported patterns already
exist under the Patterns folder.
rename new patterns with suffix identify each of the imported regular expressions with a suffix. All regular expressions
will be imported even if they already exist under the Patterns folder.
All imported regular expressions are listed under the Regex folder in the DQ Repository tree view.
A warning icon next to the name of the imported regular expression or SQL pattern in the tree view indicates that it is
not correct. You must open the expression or the pattern and try to figure out what is wrong. Usually, problems come from
missing quotes. Check your regular expressions and SQL patterns and ensure that they are encapsulated in single quotes.
Management processes for regular expressions and SQL patterns are the same. The procedure below with all the
included screen captures reflects the steps to import regular expressions from Talend Exchange. You can import
SQL patterns following the same steps.
2. Under Exchange, expand Regex and right-click the name of the pattern you want to import.
If more than one version of the selected regular expression is available on Talend Exchange, a dialog box
is displayed listing the versions that are compatible with your current Studio version. You will have access
only to those compatible versions.
The imported regular expression is listed under the Patterns > Regex folders in the DQ Repository tree view.
2. Browse through the regular expression/SQL pattern lists to reach the expression/pattern you want to duplicate.
3. Right-click its name and select Duplicate... from the contextual menu.
The duplicated regular expression/SQL pattern is displayed under the Regex/SQL folder in the DQ Repository
tree view.
You can now double-click the duplicated pattern to modify its metadata as needed.
You can test new regular expressions before you start using them against data in the specified database. For more information,
see section How to test a regular expression in the Pattern Test View.
Prerequisite(s): A column analysis is open in the analysis editor in the Profiling perspective of the studio.
To delete a regular expression or an SQL pattern from the analyzed column, do the following:
2. Right-click the regular expression/SQL pattern you want to delete and select Remove Elements from the
contextual menu.
The selected regular expression/SQL pattern disappears from the Analyzed Column list.
How to delete and restore a regular expression or an SQL pattern from the DQ Repository
To delete a regular expression or an SQL pattern from the DQ Repository tree view, do the following:
2. Browse to the regular expression or SQL pattern you want to remove from the list.
3. Right-click the expression or pattern and select Delete from the contextual menu.
The deleted regular expression or SQL pattern is moved to the Recycle Bin. To delete it permanently, do the following:
1. Right-click it in the Recycle Bin and choose Delete from the contextual menu.
If it is not used by any analysis in the current Studio, a [Delete forever] dialog box is displayed.
2. Click Yes to confirm the operation and close the dialog box.
If it is used by one or more analyses in the current Studio, a dialog box is displayed to list all the analyses
that use the pattern.
3. Either:
Click OK to close the dialog box without deleting the pattern from the recycle bin.
Select the Force to delete all the dependencies check box and then click OK to delete the pattern from
the recycle bin and to delete all the dependent analyses from the Data Profiling node.
You can also delete the pattern permanently by emptying the recycle bin. To empty the Recycle Bin, do the
following:
If the pattern is not used by any analysis in the current Studio, a confirmation dialog box is displayed.
If the pattern is used by one or more analyses in the current Studio, a dialog box is displayed to list all the
analyses that use the pattern.
3. Click OK to close the dialog box without removing the pattern from the recycle bin.
9.2. Indicators
Indicators represent the results achieved through the implementation of different patterns that are used to define
the content, structure and quality of your data.
Indicators can also represent the results of highly complex analyses related not only to data matching, but also
to various other data-related operations.
User-defined indicators, as their name indicates, are indicators created by the user. You can use them through a
simple drag-and-drop operation from the User Defined Indicators folder in the tree view. User-defined indicators
are used only with column analyses. For more information on how to set user-defined indicators for columns, see
section How to set user-defined indicators.
System indicators are predefined indicators grouped under different categories in the System Indicators folder in
the DQ Repository tree view. Each category of the system indicators is used with a corresponding analysis type.
You cannot create a system indicator or drag it directly from the DQ Repository tree view to an analysis. However,
you can open and modify the parameters of a system indicator to adapt it to a specific database, for example. For
further information, see section How to edit a system indicator.
Several management options including editing, duplicating, importing and exporting are possible for both types
of indicators. For more information, see section Managing user-defined indicators and section Managing system
indicators.
The below sections describe the system indicators used on column analyses. These system indicators can range
from simple or advanced statistics to text strings analysis, including summary data and statistical distributions
of records.
You can see under the System Indicators folder in the DQ Repository tree view system indicators other than the indicators
in the below sections. Those different system indicators are used on the other analysis types, for example redundancy,
correlation and overview analyses.
Blank Count: counts the number of blank rows. A blank is non-null textual data that contains only white
space. Note that Oracle does not distinguish between the empty string and the null value.
Duplicate Count: counts the number of values appearing more than once. You have the relation: Duplicate count
+ Unique count = Distinct count. For example, a,a,a,a,b,b,c,d,e => 9 values, 5 distinct values, 3 unique values,
2 duplicate values.
Unique Count: counts the number of distinct values with only one occurrence. It is necessarily less than or equal
to the distinct count.
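As an illustration, these simple statistics can be reproduced with standard SQL; the table t and column col below are hypothetical:

-- Row and distinct counts:
SELECT COUNT(*) AS row_count,
       COUNT(DISTINCT col) AS distinct_count
FROM t;

-- Unique and duplicate counts, derived from per-value occurrence counts:
SELECT SUM(CASE WHEN c = 1 THEN 1 ELSE 0 END) AS unique_count,
       SUM(CASE WHEN c > 1 THEN 1 ELSE 0 END) AS duplicate_count
FROM (SELECT col, COUNT(*) AS c FROM t GROUP BY col) AS per_value;

-- For the values a,a,a,a,b,b,c,d,e this yields 9 rows, 5 distinct values,
-- 3 unique values and 2 duplicate values, matching the example above.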
Other text indicators are available to count each of the above indicators with null values, with blank values or
with null and blank values.
Null values will be counted as data of 0 length, i.e. the minimal length of null values is 0. This means that the
Minimal Length With Null and the Maximal Length With Null will compute the minimal/maximal length of
a text field including null values.
Blank values will be counted as data of 0 length, i.e. the minimal length of blank values is 0. This means that the
Minimal Length With Blank and the Maximal Length With Blank will compute the minimal/maximal length
of a text field including blank values.
The below table gives an example of computing the length of a few textual fields in a column using all the different
types of text statistic indicators.
Data Current length With blank values With null values With blank and null values
Brayan 6 6 6 6
Ava 3 3 3 3
" " (a blank value) 1 0 1 0
"" (an empty value) 0 0 0 0
null - - 0 0
Minimal, Maximal and Average lengths
Minimal length 0 0 0 0
Maximal length 6 6 6 6
Average length 9/4 = 2.25 8/4 = 2 9/5 = 1.8 8/5 = 1.6
Median: computes the value separating the higher half of a sample, a population, or a probability distribution
from the lower half.
Inter quartile range: computes the difference between the third and first quartiles.
Range: computes the difference between the highest and lowest records.
Mode: computes the most probable value. For numerical data or continuous data, you can set bins in the
parameters of this indicator. It is different from the average and the median. It is good for addressing
categorical attributes.
Frequency table: computes the number of most frequent values for each distinct record (see the SQL sketch after
this list of indicators). Other frequency table indicators are available to aggregate data with respect to date,
week, month, quarter, year and "bin".
These date-based frequency table statistics are applied only on columns that hold "date" data.
Low frequency table: computes the number of less frequent records for each distinct record. Other low frequency
table indicators are available for each of the following values: date, week, month, quarter, year and
bin where bin is the aggregation of numerical data by intervals.
Pattern frequency table: computes the number of most frequent records for each distinct pattern.
Pattern low frequency table: computes the number of less frequent records for each distinct pattern.
Date pattern frequency table: retrieves the date patterns from date or text columns. It works only with the Java
engine.
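As a rough SQL illustration of what the frequency table indicators compute (the table t and column col are hypothetical):

-- Frequency table: occurrence count per distinct value, most frequent
-- first; the mode corresponds to the first row of this result.
SELECT col, COUNT(*) AS occurrence_count
FROM t
GROUP BY col
ORDER BY occurrence_count DESC;

-- A low frequency table is the same aggregation ordered ascending.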
Soundex indicators index records by sound. This way, records with the same pronunciation (only English pronunciation) are
encoded to the same representation so that they can be matched despite minor differences in spelling.
Soundex frequency table: computes the number of most frequent distinct records relative to the total number
of records having the same pronunciation.
Soundex low frequency table: computes the number of less frequent distinct records relative to the total number
of records having the same pronunciation.
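Most databases expose a SOUNDEX function that produces this phonetic encoding. For example (standard SOUNDEX behavior):

-- Both spellings are encoded as 'S530', so the two records can be
-- matched despite the spelling difference:
SELECT SOUNDEX('Smith') AS smith_code,
       SOUNDEX('Smyth') AS smyth_code;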
Possible phone number count: computes the supposed valid phone numbers.
Valid region code number count: computes phone numbers with valid region code.
Invalid region code count: computes phone numbers with invalid region code.
Well formed national phone number count: computes well formatted national phone numbers.
Well formed international phone number count: computes the international phone numbers that respect the
international phone format (phone numbers that start with the country code).
Well formed E164 phone number count: computes the international phone numbers that respect the international
phone format (a maximum of fifteen digits written with a + prefix).
Format Frequency Pie: shows the results of the phone number count in a pie chart divided into sectors.
Benford's law states that in lists and tables the digit 1 tends to occur as a leading digit about 30% of the time.
Larger digits occur as the leading digits with lower frequency, for example the digit 2 about 17%, the digit 3
about 12% and so on. Valid, unaltered data will follow this expected frequency. A simple comparison of the first-
digit frequency distribution of the data you analyze with the expected distribution according to Benford's law
should therefore reveal any anomalous results.
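Formally, Benford's law gives the expected frequency of each leading digit d as

P(d) = \log_{10}\left(1 + \frac{1}{d}\right), \quad d \in \{1, \dots, 9\},

which yields approximately 30.1% for d = 1, 17.6% for d = 2, 12.5% for d = 3, and so on down to about 4.6% for d = 9.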
For example, let's assume an employee has committed fraud by creating and sending payments to a fictitious
vendor. Since the amounts of these fictitious payments are made up rather than occurring naturally, the leading
digit distribution of all fictitious and valid transactions (mixed together) will no longer follow Benford's law.
Furthermore, assume many of these fraudulent payments have 2 as the leading digit, such as 29, 232 or 2,187. By
using the Benford Law indicator to analyze such data, you should see the amounts that have the leading digit 2
occur more frequently than the usual occurrence pattern of 17%.
When using the Benford Law Frequency indicator, it is advised to:
make sure that the numerical data you analyze do not start with 0 as Benford's law expects the leading digit to range only
from 1 to 9. This can be verified by using the number > Integer values pattern on the column you analyze.
check the order of magnitude of the data either by selecting the min and max value indicators or by using the Order of
Magnitude indicator you can import from Talend Exchange. This is because Benford's law tends to be most accurate
when values are distributed across multiple orders of magnitude. For further information about importing indicators from
Talend Exchange, see section How to import user-defined indicators from Talend Exchange.
In the result chart of the Benford Law Frequency indicator, digits 1 through 9 are represented by bars and the
height of the bar is the percentage of the first-digit frequency distribution of the analyzed data. The dots represent
the expected first-digit frequency distribution according to Benford's law.
Below is an example of the results of an analysis after using the Benford Law Frequency indicator and the Order
of Magnitude user-defined indicator on a total_sales column.
The first chart shows that the analyzed data varies over 6 orders of magnitude, that is there are 6 digits between
the minimal value and maximal value of the numerical column.
The second chart shows that the actual distribution of the data (height of bars) does not follow Benford's law
(dot values). The differences between the frequency distribution of the sales figures and the expected distribution
according to Benford's law are very large. For example, the usual occurrence pattern for sales figures that start with
1 is 30%, but such figures represent only 11% of the analyzed data. Some fraud could be suspected here: sales
figures may have been modified by someone, or some data may be missing.
Below is another example of the result chart of a column analysis after using the Benford Law Frequency
indicator.
The red bar labeled invalid means that this percentage of the analyzed data does not start with a digit, and the 0
bar represents the percentage of data that starts with 0. Neither case is expected when analyzing columns using
the Benford Law Frequency indicator, which is why both are represented in red.
For further information about analyzing columns, see section Analyzing columns in a database.
1. In the DQ Repository tree view, expand Libraries > Indicators, and then browse through the indicator lists
to reach the indicator you want to modify.
2. Right-click the indicator name and select Open from the contextual menu.
3. Modify the indicator metadata, if required, and then click Indicator Definition.
In this view, you can edit the indicator definition, change the selected database and add other indicators
specific to available databases using the [+] button at the bottom of the editor.
4. Click the save icon on top of the editor to save your changes.
If the indicator is simple enough to be used in all databases, select Default in the database list.
When you edit an indicator, you modify the indicator listed in the DQ Repository tree view. Make sure that your modifications
are suitable for all analyses that may be using the modified indicator.
2. Browse through the indicator lists to reach the indicator you want to duplicate, right-click its name and select
Duplicate... from the contextual menu.
The duplicated indicator is displayed under the System folder in the DQ Repository tree view.
You can now open the duplicated indicator to modify its metadata and definition as needed. For more information
on editing system indicators, see section How to edit a system indicator.
The management options available for user-defined indicators include: create, export and import, edit and
duplicate. For detailed information, see the following sections.
Management processes for user-defined indicators are the same as those for system indicators.
4. In the Name field, enter a name for the indicator you want to create.
If required, set other metadata (purpose, description and author name) in the corresponding fields and click
Next to proceed to the next step.
5. From the Language Selection list, select the database that will support the created indicator.
6. In the SQL Template field, enter the SQL template statement corresponding to the indicator you want to
create and then click Finish to close the wizard and proceed to the next step.
The indicator editor opens displaying the metadata of the user-defined indicator.
2. If required, change the selected database or click the Edit... button to the right of the view to edit the indicator
definition.
3. If required, click the [+] button and add other indicators specific to available databases.
5. Click Indicator Category to display the corresponding view. In this view, you can select from the list a
category for the created indicator. The selected category will determine the type of chart that will represent
the results of the executed analysis that uses the created indicator.
6. From the Indicator Category list, select a category for the created indicator.
The created indicator is listed under the User Defined Indicators folder in the DQ Repository tree view.
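As an illustration of what the SQL Template field can hold, here is a minimal sketch for a match-category indicator, which must return the record matching count and the record total count. The angle-bracket placeholders are hypothetical stand-ins, not the studio's actual template variables:

SELECT COUNT(CASE WHEN <column> LIKE '<SQL pattern>' THEN 1 END) AS matching_count,
       COUNT(*) AS total_count
FROM <table>;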
You can also import a ready-to-use Java user-defined indicator from the Exchange folder in the DQ Repository tree view.
This Java user-defined indicator connects to the mail server and checks if the email exists. For further information on
importing indicators from Talend Exchange, see section How to import user-defined indicators from Talend Exchange.
The two sections below detail the procedures to create Java user-defined indicators.
4. In the Name field, enter a name for the Java indicator you want to create.
5. If required, set other metadata (purpose, description and author name) in the corresponding fields and click
Next to proceed to the next step.
6. From the Language Selection list, select Java and then click Finish to open the indicator settings.
The indicator editor opens displaying the metadata of the Java indicator.
1. In the editor, click Indicator Definition to display the corresponding view. Java is selected by default.
2. Click the browse button to the right of the view and browse to the Java archive holding the Java classes. For
more information on creating a Java archive, see section How to create a Java archive for the user-defined
indicator.
Make sure that the class name includes the package path. If this string parameter is not correctly specified, an error
message will display when you try to save the Java user-defined indicator.
5. From the Indicator Category list, select a category for the created Java indicator.
The selected category will determine the type of chart that will represent the results of the executed analysis
that uses the created Java indicator.
In this table, you can set the default parameters for this new Java indicator. These default parameters are
stored in a Map object.
7. Click the [+] button at the bottom of the table to add as many lines as needed and define the parameter key
and value.
8. Click in the line and define the parameter key and the parameter value.
You can edit these default parameters or even add new parameters any time you use the indicator in a column analysis.
To do this, click the indicator option icon in the analysis editor to open a dialog box where you can edit the default
parameters according to your needs or add new parameters.
The created indicator is listed under the User Defined Indicators folder in the DQ Repository tree view.
Before creating a Java archive for the user defined indicator, you must define, in Eclipse, the target platform
against which the workspace plug-ins will be compiled and tested.
2. Expand Plug-in Development and select Target Platform then click Add... to open a view where you can
create the target definition.
3. Select the Nothing: Start with an empty target definition option and then click Next to proceed to the
next step.
4. In the Name field, enter a name for the new target definition and then click the Add... button to proceed
to the next step.
5. Select Installation from the Add Content list and then click Next to proceed to the next step.
6. Use the Browse... button to set the path of the installation directory and then click Next to proceed to the
next step.
To create a Java archive for the user defined indicator, do the following:
In this Java project, you can find four Java classes that correspond to the four indicator categories listed in
the Indicator Category view in the indicator editor.
Each one of these Java classes extends the UserDefIndicatorImpl indicator. The figure below illustrates
an example using the MyAvgLength Java class.
2. Modify the code of the methods that follow each @Override according to your needs.
3. If required, use the methods provided for this purpose in your code to retrieve the indicator parameters.
The Java archive is now ready to be attached to any Java indicator you want to create from the Profiling
perspective of the studio.
You can also export user-defined indicators to folders or archive files. For further information, see section
Exporting data profiling items.
You can only export user-defined indicators based on SQL templates. It is not possible to export Java user-defined indicators.
You can export user-defined indicators and store them locally in a csv file.
Prerequisite(s): At least one user-defined indicator is created in the Profiling perspective of the studio.
1. In the DQ Repository tree view, expand Libraries > Indicators and then right-click User Defined
Indicators.
The [Export Indicators] wizard opens with the check boxes of all indicators selected by default.
4. If required, clear the check boxes of the indicators you do not want to export to the csv file.
All exported user-defined indicators are saved in the defined csv file.
You can export user-defined indicators from your current version of studio to Talend Exchange where you can
share them with other users.
Prerequisite(s): At least one user-defined indicator is created in the Profiling perspective of the studio.
2. Right-click the User Defined Indicator folder and select Export for Talend Exchange.
4. If required, clear the check boxes of the indicators you do not want to export to the specified folder.
A distinct csv file is created for each exported indicator. Each csv file is compressed as a zip. All these zip files
are saved in the defined folder. You now need to upload them to Talend Exchange at http://www.talendforge.org/
exchange/top/help_guest.php.
You can also import user-defined indicators from folders or archive files. For further information, see section
Importing data profiling items or projects.
You can import indicators stored locally in a csv file to use them on your column analyses.
Prerequisite(s): You have already selected the Profiling perspective of the studio. The csv file is stored locally.
Option To...
skip existing indicators import only the indicators that do not exist in the corresponding lists in the DQ Repository
tree view. A warning message is displayed if the imported indicators already exist under
the Indicators folder.
rename new indicators with suffix identify each of the imported indicators with a suffix. All indicators will be imported even
if they already exist under the Indicators folder.
All imported indicators are listed under the User Defined Indicators folder in the DQ Repository tree view.
A warning icon next to the name of the imported user-defined indicator in the tree view indicates that it is not correct.
You must open the indicator and try to figure out what is wrong.
You can import user-defined indicators created by other users and stored in Talend Exchange into your current
version of studio and use them, as needed, on your column analyses.
The indicators you can import from Talend Exchange include for example:
Order of Magnitude: It computes the number of digits between the minimal value and maximal value of a
numerical column.
Email validation via mail server: This Java user-defined indicator connects to the mail server and checks if
the email exists.
Prerequisite(s): You have already selected the Profiling perspective of the studio. Your network is up and
running.
If you have connection problems, you will not be able to access any of the regular expressions or SQL patterns under the
Exchange node in the DQ Repository tree view.
2. Under Exchange, expand indicator and right-click the name of the indicator you want to import, a Java user-
defined indicator in this example.
If more than one version of the selected indicator is available on Talend Exchange, a dialog box is displayed
listing the versions that are compatible with your current Studio version. You will have access only to those
compatible versions.
The user-defined indicator is imported from Talend Exchange and listed under the User Defined Indicators
folder in the DQ Repository tree view. You can now use this indicator on a column analysis to check emails by
sending an SMTP request to the mail server.
You can also use the Studio to create an SQL user-defined indicator or a Java user-defined indicator from scratch.
For further information, see section How to create SQL user-defined indicators and section How to define Java
user-defined indicators respectively.
Prerequisite(s): At least one user-defined indicator is created in the Profiling perspective of the studio.
1. In the DQ Repository tree view, expand Libraries > Indicators, and then browse through the indicator lists
to reach the indicator you want to modify the definition of.
2. Right-click the indicator name and select Open from the contextual menu.
3. Modify the indicator metadata, if required, and then click Indicator Definition to display the relevant view.
In this view, you can: edit indicator definition, change the selected database and add other indicators specific
to available databases using the [+] button.
If the indicator is simple enough to be used in all databases, select Default in the list.
User Defined Match (by-default category) Uses user-defined indicators to evaluate the number of the data records
that match a regular expression or an SQL pattern. The analysis results show the record matching count and the
record total count.
User Defined Frequency Uses user-defined indicators to evaluate, for each distinct data record, the frequency of
records that match a regular expression or an SQL pattern. The analysis results show the distinct count giving
a label and a label-related count.
User Defined Real Value Uses user-defined indicators which return a real value to evaluate any real function of the
data.
User Defined Count Uses user-defined indicators that return a row count.
6. Click the save icon on top of the editor to save your changes.
When you edit an indicator, you modify the indicator listed in the DQ Repository tree view. Make sure that your modifications
are suitable for all analyses that may be using the modified indicator.
Prerequisite(s): At least one user-defined indicator has been defined in the Profiling perspective of the studio.
2. Browse through the user-defined indicator lists to reach the indicator you want to duplicate, right-click its
name and select Duplicate... from the contextual menu.
The duplicated indicator is displayed under the User Defined Indicators folder in the DQ Repository tree view.
You can now open the duplicated indicator to modify its metadata and definition as needed. For more information
on editing user-defined indicators, see section How to edit a user-defined indicator.
The parameters you can set for the indicators used in an analysis include: Bins Designer, Blank Options, Data
Thresholds, Indicator Thresholds, Java Options, Phone number, Text Parameters and Text Length.
For the Blank Options, note that in Oracle, empty strings and null strings are the same objects. Therefore,
you must select or clear both check boxes in order to get consistent results.
Before starting data profiling management procedures, you need to be familiar with the studio Graphical User
Interface (GUI). For more information, see appendix The studio management GUI.
2. Right-click Source Files and select Create SQL File from the contextual menu. The [Create SQL File]
dialog box is displayed.
3. In the Name field, enter a name for the SQL query you want to create and then click Finish to proceed to
the next step.
If the Connections view is not open, use the combination Window > Show View > Data Explorer > Connections
to open it.
5. From the Choose Connection list, select the database you want to run the query on.
6. On the SQL Editor toolbar, click the run icon to execute the query on the defined base table(s).
Data rows are retrieved from the defined base table(s) and displayed in the editor.
A file for the new SQL query is listed under Source Files in the DQ Repository tree view.
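For instance, a query you might save in such an SQL file, using the hypothetical customers table from the earlier examples:

-- Retrieve the records behind a suspicious analysis result:
SELECT *
FROM customers
WHERE country = 'Mexico' AND birth_date IS NULL;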
Option To...
Open open the selected Query file
Duplicate create a copy of the selected Query file
Rename SQL File open a dialog box where you can edit the name of the query file
Open in Data Explorer open in the data explorer the SQL editor on the selected query file
Delete delete the query file
The deleted item will go to the Recycle Bin in the DQ Repository tree view.
You can restore or delete such an item via a right-click on the item. You can also
empty the recycle bin via a right-click on it.
When you open a query file in the SQL Editor, make sure to select the database connection from the Choose Connection
list before executing the query. Otherwise the run icon on the editor toolbar will be unavailable.
When you create or modify a query in a query file in the SQL Editor and try to close the editor, you will be prompted to
save the modifications. The modifications will not be taken into account unless you click the save icon on the editor toolbar.
You cannot import an item without all its dependencies. When you try to import an analysis, for example, all its
dependencies, such as the metadata connection and the patterns and indicators used in this analysis, will be selected
by default and imported with the analysis.
You cannot import into your current Studio data profiling items created in versions older than 4.0.0. To use such items
in your current Studio, you must carry out an upgrade operation. For further information, see section Upgrading project
items from older versions.
Prerequisite(s): You have access to another studio version in which data profiling items have been created.
1. In the Profiling perspective, click the import icon on the toolbar.
2. Select the root directory or the archive file option according to whether the data profiling items you want to
import are in the workspace file within the Studio directory or are already exported into a zip file.
If you select the root directory option, click Browse and set the path to the project folder containing the
items to be imported within the workspace file of the Studio directory.
All items and their dependencies that do not exist in your current Studio are selected by default in the
dialog box.
If you select the archive file option, click Browse and set the path to the archive file that holds the data
profiling items you want to import.
All items and their dependencies that do not exist in your current Studio are selected by default in the
dialog box.
3. Select the Overwrite existing items check box if some error and warning messages are listed in the Error
and Warning area.
This means that items with the same names already exist in the current Studio.
For release 5.2.0 and above, system indicators that were modified in another Studio version do not overwrite the system
indicators in the current Studio when imported. Modifications from older versions are merged into the system indicators
of the current Studio. This enables you to use these indicators in your analyses in the current Studio without any problem.
4. Select or clear the check boxes of the data profiling items you want or do not want to import according to
your needs.
All dependencies for the selected item are selected by default. When you clear the check box of an item, the
check boxes of the dependencies of this item are automatically cleared as well. Also, an error message will
display on top of the dialog box if you clear the check box of any of the dependencies of the selected item.
The imported items display under the corresponding folders in the DQ Repository tree view.
If you import SQL Server (2005 or 2008) connections into your current Studio, a warning icon is docked on the
connection names in the DB connections folder. This indicates that the driver path for these connections is empty.
You must open the connection wizard and redefine the connection manually to set the path to a JDBC driver you can
download from the Microsoft download center.
For further information on editing a database connection, see section How to open or edit a database connection.
You can also set the path to a JDBC driver for a group of database connections simultaneously in order not to define
them one by one. For further information, see section Migrating a group of connections.
You can also import local project folders from the login window of your studio. For further information, see
section Launching the studio.
Prerequisite(s): At least one data profiling item has been created in the studio.
1. In the Profiling perspective, click the export icon on the toolbar.
2. Select the root directory or archive file option and then click Browse... and browse to the file/archive where
you want to export the data profiling items.
3. Select the check boxes of the data profiling items you want to export or use the Select All or Deselect All tabs.
When you select an analysis check box, all analysis dependencies including the metadata connection and any patterns
or indicators used in this analysis are selected by default. Otherwise, if you have an error message on top of the dialog
box that indicates any missing dependencies, click the Include dependencies tab to automatically select the check
boxes of all items necessary to the selected data profiling analysis.
4. If required, select the Show only selected elements check box to have in the export list only the selected
data profiling elements.
A progress bar is displayed to indicate the progress of the export operation and the data profiling items are
exported in the defined place.
Some of the migrated JDBC connections may have a warning icon docked on their names in the DB connections
folder in the DQ Repository tree view. This indicates that the driver path for these connections is empty after
migration.
Setting the driver path manually for each of the connections could be tedious, especially if you have imported a big
number of them. The studio enables you to set the driver path once for all of them. You may download such a driver
from the Microsoft download center, for example.
Prerequisite(s): You have already migrated your database connections from an older version of the studio as
outlined in section Importing data profiling items or projects.
1. In the menu bar, select Window > Preferences to display the [Preferences] window.
2. In the search field, type jdbc and then select JDBC Driver Setting to open the corresponding view.
3. Set the JDBC parameters in the corresponding fields, and then click Apply to connections....
A dialog box is displayed to list all the JDBC connections that do not have the required JDBC driver after
migration.
4. Select the check boxes of the connections for which you want to apply the driver settings and then click OK.
To migrate data profiling items (analyses, database connections, patterns and indicators, etc.) created in versions
older than 4.0.0, do the following:
1. From the folder of the old version studio, copy the workspace file and paste it in the folder of your current
Studio. Accept to replace the current workspace file with the old file.
2. Launch your current Studio.
The upgrade operation is completed once the Studio is completely launched, and you should have access to all
your data profiling items.
Regarding system indicators during migration, please pay attention to the following:
When you upgrade the repository items to version 4.2 from a prior version, the migration process overwrites any changes
you made to the system indicators.
When you upgrade the repository items from version 4.2 to version 5.0, you do not lose any changes you made to the
system indicators.
Before starting data profiling management procedures, you need to be familiar with the studio Graphical User
Interface (GUI). For more information, see appendix The studio management GUI.
From the contextual menu of the selected analysis, you can open, execute, duplicate or delete this analysis. You
can also add a task to the selected analysis.
2. Either double-click the analysis you want to open, or right-click it and select Open from the contextual menu.
3. If required, click Refresh the graphics to the right of the editor to display the results of the analysis.
4. If required, click the Analysis results button at the bottom of the editor to open a more detailed view of the
analysis results.
2. Right-click the analysis you want to execute and select Run from the contextual menu.
You can execute several analyses simultaneously: select them, right-click the selection and then click Run.
Prerequisite(s): At least one analysis has been created in the Profiling perspective of the studio.
2. Right-click the analysis you want to duplicate and select Duplicate... from the contextual menu.
The duplicated analysis shows in the analysis list in the DQ Repository tree view. You can now open the
duplicated analysis and modify its metadata as needed.
2. Right-click the analysis you want to delete and select Delete from the contextual menu.
You can also delete the analysis permanently by emptying the recycle bin. To empty the Recycle Bin, do the
following:
You can add tasks to different items: in the DQ Repository tree view, on connections, catalogs, schemas, tables,
columns and created analyses, or directly in the current analysis editor, on columns or on patterns and indicators
set on columns.
For example, you can add a general task to any item in a database connection via the Metadata node in the DQ
Repository tree view. You can add a more specific task to the same item defined in the context of an analysis
through the Analyses node. And finally, you can add a task to a column in an analysis context (also to a pattern
or an indicator set on this column) directly in the current analysis editor.
The procedure to add a task to any of these items is exactly the same. Adding tasks to such items will list these
tasks in the Tasks list accessible through the Window > Show view... combination. Later, you can open the editor
corresponding to the relevant item by double-clicking the appropriate task in the Tasks list.
For examples on how to add a task to different items, see the sections below.
2. Navigate to the column you want to add a task to, account_id in this example.
3. Right-click the account_id and select Add task... from the contextual menu.
The [Properties] dialog box opens showing the metadata of the selected column.
4. In the Description field, enter a short description for the task you want to carry out on the selected item.
5. In the Priority list, select the priority level and then click OK to close the dialog box.
The created task is added to the Tasks list. For more information on how to access the task list, see section
Displaying the task list.
double-click a task to open the editor where this task has been set.
select the task check box once the task is completed in order to be able to delete it.
filter the task view according to your needs using the options in a menu accessible through the drop-down
arrow on the top-right corner of the Tasks view. For further information about filtering the task list, see section
Filtering the task list.
Prerequisite(s): The analysis has been created in the Profiling perspective of the studio.
2. Expand an analysis and navigate to the item you want to add a task to, the account_id column in this example.
3. Right-click account_id and select Add task... from the contextual menu.
4. Follow the steps outlined in section Adding a task to a column in a database connection to add a task to
account_id in the selected analysis.
For more information on how to access the task list, see section Displaying the task list.
Prerequisite(s):
A column analysis is open in the analysis editor in the Profiling perspective of the studio.
1. In the open analysis editor, click Analyzed columns to open the relevant view.
2. In the Analyzed Columns list, right-click the indicator name and select Add task... from the contextual menu.
The [Properties] dialog box opens showing the metadata of the selected indicator.
3. In the Description field, enter a short description for the task you want to attach to the selected indicator.
4. On the Priority list, select the priority level and then click OK to close the dialog box. The created task is
added to the Tasks list.
For more information on how to access the task list, see section Displaying the task list.
Prerequisite(s): At least one task is added to an item in the Profiling perspective of the studio.
1. On the menu bar of Talend Open Studio for Data Quality, select Window > Show View....
The Tasks view opens in the Profiling perspective of the studio listing the added task(s).
4. If required, double-click any task in the Tasks list to open the editor corresponding to the item to which the
task is attached.
You can create different filters for the content of the task list. For further information, see section Filtering the task list.
You can create filters to decide what to list in the task view.
Prerequisite(s): At least one task is added to an item in the Profiling perspective of the studio.
1. Follow the steps outlined in section Displaying the task list to open the task list.
2. Click the drop-down arrow in the top right corner of the view, and then select Configure contents....
The [Configure contents...] dialog box is displayed showing the default configuration.
3. Click New to open a dialog box and then enter a name for the new filter.
5. Set the different options for the new filter as follows:
From the Scope list, select a filter scope option, and then click Select... to open a dialog box where you
can select a working set for your filter.
Select whether you want to display completed or not completed tasks or both of them.
Select to display tasks according to their priority or according to the text they have.
Finally, select the check boxes of the task types you want to list.
The task list shows only the tasks that conform to the new filter options.
Prerequisite(s): At least one task is added to an item in the Profiling perspective of the studio.
1. Follow the steps outlined in section Displaying the task list to access the Tasks list.
2. Select the check boxes next to each of the tasks and right-click anywhere in the list.
3. From the contextual menu, select Delete Completed Tasks. A confirmation message is displayed to validate
the operation.
All tasks marked as completed are deleted from the Tasks list.
the menu bar,
the toolbar,
the DQ Repository tree view,
a detailed view,
the workspace.
The figure below illustrates the main window and its possible views.
The following sections give detailed information about each of the above views.
Other...: Opens a dialog box where you can select any of the available perspectives.
Show View...: Opens the [Show View] dialog box which enables you to display different views in the studio.
Preferences: Opens the [Preferences] window which enables you to set your preferences.
Reset Perspective...: Resets the current perspective to its default view after confirmation.
Welcome (Help menu): Opens a welcoming page which has links to the user documentation and Talend practical sites.
Help Contents (Help menu): Opens the Eclipse help system documentation.
About Talend studio (Help menu): Displays information about the studio, such as the version in use.
A.3. Toolbar
The toolbar contains icons that provide you with quick access to the commonly used operations you can perform
from the studio main window.
When expanding the Data profiling folder in the tree view, you display the created analyses (either executed or
not executed yet).
When expanding the Libraries folder in the tree view list, you display the list of the pre-defined patterns and SQL
patterns. Imported patterns and patterns created by you will also show under the Patterns folder.
Under Libraries as well, you have all created SQL business rules and all imported patterns from Talend
Exchange.
When expanding the Metadata folder in the tree view list, you display the list of all created DB connections.
You can use the local toolbar icons to manage the display of the DQ Repository tree view.
The figure below shows an example of the detailed view of the selected DB connection.
You can use the local toolbar icons to manage the display of Detail View.
When you open a column analysis, a pattern or a DB connection through the tree view area, the relevant editor
opens in the studio workspace.
You can use the local toolbar icons to manage the display of the workspace.
At the bottom of the analysis editor, you can find two tabs:
Analysis Settings,
Analysis Results.
The Analysis Settings tab lists the settings for the current analysis in the current editor.
a summary of the executed analysis in the Analysis Summary view, which specifies the connection, the
database and the table names for the current analysis,
the results of the executed analysis, graphics and tables, in the Analysis Results view.
click the arrow located next to a column name to display the types of analyses done on that column,
select a type of analysis to display the corresponding generated graphics and tables.
a toolbar icon, or
a right-click list, or
shortcut keys.
Example 1: To show a view in the Talend Open Studio for Data Quality main window, use the Window >
Show View... menu-submenu combination.
Example 2: To execute an analysis, right-click the analysis you want to execute and select Run from the
contextual menu.
This appendix introduces the Graphical User Interfaces (GUI) of the data explorer which is based on the SQL
Explorer for which you can find documentation at http://www.sqlexplorer.org/.
menu bar,
toolbar,
Connections view,
The figure below illustrates an example of the data explorer main window and its components.
The following sections give detailed information about each of the above components.
Table A.1, Management menus, of Appendix A describes the menus and menu items available to you.
Table A.2, Management toolbar, of Appendix A describes the toolbar icons and their functions.
You can use the local toolbar icons to manage the display of the Connections view.
The view shows the statement, the date and time when the statement was last executed, which connection was
used and how many times the statement has been executed. The SQL statements can be filtered, sorted, removed
and opened in or appended to the [SQL Editor].
You can use the local toolbar icons to manage the display of SQL History View.
Session/Catalog/Schema switching
The lower part of the [SQL Editor] view, the Messages area, displays detailed information about your data
exploring actions. When you execute a query in the SQL query editor, the Messages area displays the query results.
You can save all the queries you execute in the data explorer under Libraries > Source Files in the DQ Repository tree
view in the studio.
When you select a node in the Database Structure view, the corresponding detail is shown in the Database Detail
view. For more information, see section Database Detail view. If the detailed view is not active, double-clicking
the node will bring the detail view to the front.
When you select a database node in the Database Structure view, the Database Detail view will show you the
connection information as shown in the figure below.
When you select a specific table in the database connection in the Database Structure view, the Database Detail
view shows you detail information about the selected table including Exported Keys and Imported Keys.
The Imported Keys column shows how the table references other tables based on primary and foreign key
declarations.
The Exported Keys column shows how other tables reference the selected table based on primary and foreign
key declarations.
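To illustrate, consider a hypothetical pair of tables where orders references customers. The foreign key below shows up as an imported key of orders and as an exported key of customers:
-- 'customers' and 'orders' are placeholder names:
CREATE TABLE customers (
    customer_id INT PRIMARY KEY,
    name        VARCHAR(50)
);
CREATE TABLE orders (
    order_id    INT PRIMARY KEY,
    customer_id INT,
    FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
);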
For example, the following databases natively support regular expressions: MySQL, PostgreSQL, Oracle 10g,
Ingres, etc., while Microsoft SQL Server does not.
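Where regular expressions are supported natively, a pattern-matching condition can be written directly in SQL, although the syntax varies by database. The sketches below use customers and email as hypothetical names:
-- MySQL: native REGEXP operator
SELECT email FROM customers
WHERE email REGEXP '^[a-z0-9._-]+@[a-z0-9.-]+$';
-- Oracle 10g: REGEXP_LIKE function
SELECT email FROM customers
WHERE REGEXP_LIKE(email, '^[a-z0-9._-]+@[a-z0-9.-]+$');
SQL Server offers no such native operator, which is why the sections below create one as a user-defined function.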
After you create the regular expression function, you should use the studio to declare that function in a specific
database before being able to use regular expressions on analyzed columns.
For more information on how to declare a regular expression function in the studio, see section How to define a
query template for a specific database and section How to declare a User-Defined Function in a specific database.
To create a regular expression function in SQL Server, follow the steps outlined in the sections below.
1. On the menu bar, select File > New > Project to open the [New Project] window.
2. In the Project types tree view, expand Visual C# and select Database.
3. In the Templates area to the right, select SQL Server Project and then enter a name in the Name field for
the project you want to create, UDF function in this example.
5. From the Available References list, select the database in which you want to create the project and then
click OK to close the dialog box.
If the database you want to create the project in is not listed, you can add it to the Available References list through
the Add New Reference tab.
The project is created and listed in the Solution Explorer panel to the right of the Visual Studio main window.
1. In the project list in the Solution Explorer panel, expand the node of the project you created and right-click
the Test Scripts node.
3. From the Templates list, select Class and then in the Name field, enter a name for the user-defined function
you want to add to the project, RegExMatch in this example.
The added function is listed under the created project node in the Solution Explorer panel to the right.
4. Click Add to validate your changes and close the dialog box.
5. In the code space to the left, enter the instructions corresponding to the regular expression function you
already added to the created project.
Below is the code for the regular expression function we use in this example.
using System;
using Microsoft.SqlServer.Server;
using System.Text.RegularExpressions;
public partial class RegExBase
{
    [SqlFunction(IsDeterministic = true, IsPrecise = true)]
    public static int RegExMatch(string matchString, string pattern)
    {
        // Trim trailing whitespace from the pattern and the value,
        // then return 1 on a match and 0 otherwise.
        Regex r1 = new Regex(pattern.TrimEnd(null));
        if (r1.Match(matchString.TrimEnd(null)).Success == true)
        {
            return 1;
        }
        else
        {
            return 0;
        }
    }
};
6. Press Ctrl+S to save your changes and then on the menu bar, click Build and in the contextual menu select
the corresponding item to build the project you created, Build UDF function in this example.
The lower pane of the window displays a message to confirm whether the build operation was successful.
7. On the menu bar, click Build and in the contextual menu select the corresponding item to deploy the project
you created, Deploy UDF function in this example.
The lower pane of the window displays a message to confirm whether the deploy operation was successful.
If required:
1. launch SQL Server and check if the created function exists in the function list,
2. check that the function works well. For more information, see section How to test the created function via
the SQL Server editor.
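Once the function is deployed, a quick smoke test in the SQL Server editor might look like the following; the sample values and pattern are arbitrary:
-- Should return 1 (the value matches the pattern):
SELECT dbo.RegExMatch('abc123', '^[a-z]+[0-9]+$');
-- Should return 0 (no digits in the value):
SELECT dbo.RegExMatch('abcdef', '^[a-z]+[0-9]+$');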
3. Double-click Regular Expression Matching, or right-click it and select Open from the contextual menu.
The corresponding view displays the indicator metadata and its definition.
You now need to add the database for which you want to define a query template to the list of databases. This
query template will compute the regular expression matching.
4. Click the [+] button at the bottom of the Indicator Definition view to add a field for the new template.
5. In the new field, click the arrow and select the database for which you want to define the template, Microsoft
SQL Server.
8. Paste the indicator definition (template) in the Expression box and then modify the text after WHEN in order
to adapt the template to the selected database.
9. Click OK to proceed to the next step. The new template is displayed in the field.
10. Click the save icon on top of the editor to save your changes.
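As an illustration, the template pasted in step 8 typically wraps the deployed function in a matching-count query along the following lines. This is a sketch only; the actual Talend template uses its own placeholder syntax, represented here by <column>, <pattern> and <table>:
-- Matching count and total count computed through the deployed function:
SELECT COUNT(CASE WHEN dbo.RegExMatch(<column>, '<pattern>') = 1 THEN 1 END),
       COUNT(*)
FROM <table>;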
For more detailed information on how to declare a regular expression function in the studio, see section How to
define a query template for a specific database and section How to declare a User-Defined Function in a specific
database.
To test the function, you can create a table whose CHECK constraints call it, for example:
CREATE TABLE [talend].[dbo].[Contacts] (
FirstName nvarchar(30),
LastName nvarchar(30),
EmailAddress nvarchar(30) CHECK
(dbo.RegExMatch(EmailAddress,
'[a-zA-Z0-9_\-]+@([a-zA-Z0-9_\-]+\.)+(com|org|edu|nz)')=1),
USPhoneNo nvarchar(30) CHECK
(dbo.RegExMatch(USPhoneNo,
'\([1-9][0-9][0-9]\) [0-9][0-9][0-9]\-[0-9][0-9][0-9][0-9]')=1))
To search for the rows that match the expression, use the following code:
SELECT [FirstName]
, [LastName]
, [EmailAddress]
, [USPhoneNo]
FROM [talend].[dbo].[Contacts]
where [talend].[dbo].RegExMatch([EmailAddress],
'[a-zA-Z0-9_\-]+@([a-zA-Z0-9_\-]+\.)+(com|org|edu|nz|au)')
= 1
To search for the rows that do not match the expression, use the following code:
SELECT [FirstName]
, [LastName]
, [EmailAddress]
, [USPhoneNo]
FROM [talend].[dbo].[Contacts]
where [talend].[dbo].RegExMatch([EmailAddress],
'[a-zA-Z0-9_\-]+@([a-zA-Z0-9_\-]+\.)+(com|org|edu|nz|au)')
= 0