SAP HANA Predictive Analysis Library PAL en PDF
3 PAL Functions
3.1 Clustering Algorithms
  Affinity Propagation
  Agglomerate Hierarchical Clustering
  Anomaly Detection
  Cluster Assignment
  DBSCAN
  Gaussian Mixture Model (GMM)
  K-Means
  K-Medians
  K-Medoids
  LDA Estimation and Inference
  Self-Organizing Maps
  Slight Silhouette
  Incremental Clustering on SAP HANA Smart Data Streaming
3.2 Classification Algorithms
  Area Under Curve (AUC)
  Back Propagation Neural Network
  C4.5 Decision Tree
  CART Decision Tree
  CHAID Decision Tree
  Confusion Matrix
  KNN
  Logistic Regression (with Elastic Net Regularization)
  Multi-Class Logistic Regression
6 Important Disclaimer for Features in SAP HANA Platform, Options and Capabilities
This reference describes the Predictive Analysis Library (PAL) delivered with SAP HANA. This application
function library (AFL) defines functions that can be called from within SAP HANA SQLScript procedures to
perform analytic algorithms.
SAP HANA’s SQLScript is an extension of SQL. It includes enhanced control-flow capabilities and lets
developers define complex application logic inside database procedures. However, it is difficult to describe
predictive analysis logic with procedures.
For example, an application may need to perform cluster analysis on a huge customer table with 1T records. It is impossible to implement the analysis in a procedure using the simple classic K-means algorithm, let alone the more complicated algorithms of the data-mining field. Transferring large tables to the application server to perform the K-means calculation is also costly.
The Predictive Analysis Library (PAL) defines functions that can be called from within SQLScript procedures to
perform analytic algorithms. This release of PAL includes classic and universal predictive analysis algorithms in
nine data-mining categories:
● Clustering
● Classification
● Regression
● Association
● Time Series
● Preprocessing
● Statistics
● Social Network Analysis
● Miscellaneous
The algorithms in PAL were carefully selected based on the following criteria:
PAL also provides several incremental machine learning algorithms that learn and update a model on the fly, so
that predictions are based on a dynamic model.
For detailed information, see the section on machine learning with streaming in the SAP HANA Smart Data
Streaming: Developer Guide.
This section covers the information you need to know to start working with the SAP HANA Predictive Analysis
Library.
2.1 Prerequisites
Note
The revision number of the AFL must match the revision number of SAP HANA. See SAP Note 1898497
for details.
● Enable the Script Server in the SAP HANA instance. See SAP Note 1650957 for further information.
Related Information
You can dramatically increase performance by executing complex computations in the database instead of at the application server level. SAP HANA provides several techniques to move application logic into the database, and one of the most important is the use of application functions. Application functions are like database procedures: they are written in C++ and called from outside to perform data-intensive and complex operations.
Functions for a particular topic are grouped into an application function library (AFL), such as the Predictive
Analysis Library (PAL) and the Business Function Library (BFL). Currently, PAL and BFL are delivered in one
archive (that is, one SAR file with the name AFL<version_string>.SAR). The AFL archive is not part of the
HANA appliance, and must be installed separately by the administrator.
This section provides detailed security information which can help administrators and architects answer some common questions.
Role Assignment
For each AFL area, there are two roles. You must be assigned one of the roles to execute the functions in the
library. The roles for the PAL library are automatically created when the Application Function Library (AFL) is
installed. The role names are:
AFL__SYS_AFL_AFLPAL_EXECUTE
AFL__SYS_AFL_AFLPAL_EXECUTE_WITH_GRANT_OPTION
Note
There are two underscores between AFL and SYS.
To generate or drop PAL procedures, you also need the following role, which is created when SAP HANA is
installed:
AFLPM_CREATOR_ERASER_EXECUTE
Note
Once the above roles are automatically created, they cannot be dropped. In other words, even when an area with all its objects is dropped and re-created during system startup, users keep the roles originally granted to them.
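For example, a user administrator can grant the execution role with a plain GRANT statement (a minimal sketch; the user name DEVUSER is hypothetical):

GRANT AFL__SYS_AFL_AFLPAL_EXECUTE TO DEVUSER;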
To confirm that the PAL functions were installed successfully, you can check the following three public views:
● sys.afl_areas
● sys.afl_packages
● sys.afl_functions
These views are granted to the PUBLIC role and can be accessed by anyone.
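For example, the following query (a minimal sketch; the AREA_NAME filter value AFLPAL is an assumption based on the role names above) lists the registered PAL functions:

SELECT * FROM SYS.AFL_FUNCTIONS WHERE AREA_NAME = 'AFLPAL';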
● From within SQLScript code, generate a procedure that wraps the PAL function.
● Call the procedure, for example, from an SQLScript procedure.
Any user granted the AFLPM_CREATOR_ERASER_EXECUTE role can generate an AFLLANG procedure for a specific PAL function. The signature table passed to the generator has the following structure:
(
    POSITION int,
    SCHEMA_NAME nvarchar(256),
    TYPE_NAME nvarchar(256),
    PARAMETER_TYPE varchar(7)
)
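A generation call then follows this pattern (a sketch; the argument order — area name, function name, target schema, procedure name, signature table — is assumed, so check SAP Note 2046767 and your revision's reference for the exact syntax):

CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL', '<function_name>', '<schema_name>', '<procedure_name>', <signature_table_name>);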
Note
1. SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE runs in invoker mode, which means that the invoker must be allowed to read the signature table.
2. The records in the signature table must follow this order: first input table types, next parameter table
type, and then output table types.
An implicit decimal-to-double conversion is performed on the SQL layer when a generated procedure is called with tables that include decimal columns. Note that PAL data precision will NOT be higher than double.
8. The AFLLANG procedure generator described in this step was introduced in SAP HANA SPS 09. For backward compatibility information, see SAP Note 2046767.
After generating a PAL procedure, any user that has the AFL__SYS_AFL_AFLPAL_EXECUTE or
AFL__SYS_AFL_AFLPAL_EXECUTE_WITH_GRANT_OPTION role can call the procedure, using the syntax below.
CALL <schema_name>.<procedure_name>(
    <data_input_table> {,…},
    <parameter_table>,
    <output_table> {,…}) WITH OVERVIEW;
Note
1. The input, parameter, and output tables must be created before calling the procedure.
2. Some PAL algorithms have more than one input table or more than one output table.
3. To call the PAL procedure generated in Step 1, you need the AFL__SYS_AFL_AFLPAL_EXECUTE or
AFL__SYS_AFL_AFLPAL_EXECUTE_WITH_GRANT_OPTION role.
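For example, a call to a generated K-means procedure might look as follows (a minimal sketch; the schema, procedure, and table names are all hypothetical):

CALL DM_PAL.PAL_KMEANS_PROC(PAL_KMEANS_DATA_TBL, "#PAL_CONTROL_TBL", PAL_KMEANS_ASSIGN_TBL, PAL_KMEANS_CENTERS_TBL) WITH OVERVIEW;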
Related Information
PAL functions use parameter tables to transfer parameter values. Each PAL function has its own parameter
table. To avoid a conflict of table names when several users call PAL functions at the same time, the parameter
table must be created as a local temporary column table, so that each parameter table has its own unique
scope per session.
Each row contains only one parameter value, either integer, double or string.
The following table is an example of a parameter table with three parameters. The first parameter,
THREAD_NUMBER, is an integer parameter. Thus, in the THREAD_NUMBER row, you should fill the parameter
value in the intArgs column, and leave the doubleArgs and stringArgs columns blank.
Name             intArgs    doubleArgs    stringArgs
THREAD_NUMBER    1          (null)        (null)
SUPPORT          (null)     0.2           (null)
VAR_NAME         (null)     (null)        hello
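A minimal sketch of creating and filling such a parameter table, using the three parameters above (the table name #PAL_CONTROL_TBL and the column sizes are illustrative; only the one-value-per-row convention comes from this guide):

CREATE LOCAL TEMPORARY COLUMN TABLE "#PAL_CONTROL_TBL" (
    "NAME" VARCHAR(100),
    "INTARGS" INTEGER,
    "DOUBLEARGS" DOUBLE,
    "STRINGARGS" VARCHAR(100)
);
-- each row carries exactly one value; the other argument columns stay NULL
INSERT INTO "#PAL_CONTROL_TBL" VALUES ('THREAD_NUMBER', 1, NULL, NULL);
INSERT INTO "#PAL_CONTROL_TBL" VALUES ('SUPPORT', NULL, 0.2, NULL);
INSERT INTO "#PAL_CONTROL_TBL" VALUES ('VAR_NAME', NULL, NULL, 'hello');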
Exceptions thrown by a PAL function can be caught by the exception handler in a SQLScript procedure with
AFL error code 423.
The following shows an example of handling the exceptions thrown by an ARIMA function. In the example, you generate the "DM_PAL".PAL_ARIMAXTRAIN_PROC procedure to call an ARIMA function whose input data table is empty. You then create a SQLScript procedure "DM_PAL".PAL_ARIMAX_NON_EXCEPTION_PROC which calls the "DM_PAL".PAL_ARIMAXTRAIN_PROC procedure. When you call the "DM_PAL".PAL_ARIMAX_NON_EXCEPTION_PROC procedure, the exception handler in the procedure catches the errors thrown by the function.
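A minimal sketch of such a wrapper procedure (the data, parameter, and model table names are hypothetical; only the handler pattern and AFL error code 423 come from this guide):

CREATE PROCEDURE "DM_PAL".PAL_ARIMAX_NON_EXCEPTION_PROC() AS
BEGIN
    -- catch AFL errors (SQL error code 423) raised by the PAL function
    DECLARE EXIT HANDLER FOR SQL_ERROR_CODE 423
        SELECT ::SQL_ERROR_CODE AS "ERROR_CODE",
               ::SQL_ERROR_MESSAGE AS "ERROR_MESSAGE" FROM DUMMY;
    -- the generated procedure raises error 423 because its input table is empty
    CALL "DM_PAL".PAL_ARIMAXTRAIN_PROC(PAL_ARIMAX_DATA_TBL, PAL_CONTROL_TBL, PAL_ARIMAX_MODEL_TBL);
END;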
Expected Result
The SAP HANA Application Function Modeler (AFM) in SAP HANA Studio supports functions from PAL in
flowgraph models. With the AFM, you can easily add PAL function nodes to your flowgraph, specify their parameters and input/output table types, and generate the procedure, all without writing any SQLScript code.
You can also execute the procedure to get the output result of the function, and save the auto-generated
SQLScript code for future use.
1. Create a new flowgraph or open an existing flowgraph in the Project Explorer view.
Note
For details on how to create a flowgraph, see "Creating a Flowgraph" in SAP HANA Developer Guide for
SAP HANA Studio
2. Specify the target schema by selecting the flowgraph container and editing Target Schema in the
Properties view.
3. Add the input(s) for the flowgraph by doing the following:
1. Right-click the input anchor region on the left side of the flowgraph container and choose Add Input.
2. Edit the table types of the input by editing the signature in the Properties view.
Note
You can also drag a table from the catalog in the Systems view to the input anchor region of the
flowgraph container.
4. Specify the output table types of the function by selecting the output anchor and editing the signature
in the Properties view.
5. (Optional) You can add more PAL nodes to the flowgraph if needed and connect them by holding the Connect button at the source anchor and dragging a connection to the destination anchor.
6. Connect the input(s) in the input anchor region of the flowgraph container to the required input anchor(s)
of the PAL function node.
7. For the output tables whose results you want to see after procedure execution, add them to the output anchor region on the right side of the flowgraph container. To do that, move your mouse cursor over the output anchor of the function node, hold the Connect button, and drag a connection to the output anchor region.
8. Save the flowgraph by choosing File > Save in the SAP HANA Studio main menu.
9. Activate the flowgraph by right-clicking the flowgraph in the Project Explorer view and choosing Team > Activate.
A new procedure is generated in the target schema which is specified in Step 2.
Note
To activate the flowgraph, the database user _SYS_REPO needs SELECT object privileges for objects
that are used as data sources.
10. Select the black downward triangle next to the Execute button in the top right corner of the AFM.
A context menu appears. It shows the options Execute in SQL Editor and Open in SQL Editor as well as the
option Execute and Explore for every output of the flowgraph. In addition, the context menu shows the
option Edit Input Bindings.
11. (Optional) If the flowgraph has input tables without fixed content, choose the option Edit Input Bindings.
A wizard appears that allows you to bind all inputs of the flowgraph to data sources in the catalog.
Note
If you do not bind the inputs, AFM will automatically open this wizard when executing the procedure.
12. Choose one of the options Execute in SQL Editor, Open in SQL Editor, or Execute and Explore for one of the
outputs of the flowgraph.
The behavior of the AFM depends on the execution mode.
○ Open in SQL Editor: Opens a SQL console containing the SQL code to execute the runtime object.
○ Execute in SQL Editor: Opens a SQL console containing the SQL code to execute the runtime object
and runs this SQL code.
○ Execute and Explore: Executes the runtime object and opens the Data Explorer view for the chosen
output of the flowgraph.
13. Close the flowgraph by choosing File > Close in the SAP HANA Studio main menu.
For more information on how to use the AFM, see the "Transforming Data Using SAP HANA Application Function Modeler" section in SAP HANA Developer Guide for SAP HANA Studio.
The following function names are among those available in the Predictive Analysis Library:
● VALIDATEKMEANS
● NBCPREDICT
● RANDOMFORESTSCORING
● SVMPREDICT
● FORECASTWITHEXPR
● FORECASTWITHLR
● FORECASTWITHPOLYNOMIALR
● LITEAPRIORIRULE
● APRIORIRULE2
● ARIMAFORECAST
● ARIMAXFORECAST
● DISTRFITCENSORED
This section describes the clustering algorithms that are provided by the Predictive Analysis Library.
Affinity Propagation (AP) is a relatively new clustering algorithm introduced by Brendan J. Frey and Delbert Dueck. The authors describe affinity propagation as follows:
“An algorithm that identifies exemplars among data points and forms clusters of data points around these exemplars. It operates by simultaneously considering all data points as potential exemplars and exchanging messages between data points until a good set of exemplars and clusters emerges.”
One of the most significant advantages of affinity propagation is that the number of clusters does not need to be predetermined.
Prerequisites
AP
Procedure Generation
Procedure Calling
The input, seed, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Data        1st column   Integer, bigint, varchar, or nvarchar   ID   This must be the first column.
Seed Data   1st column   Integer, bigint, varchar, or nvarchar   ID   This must be the first column.
Parameter Table
Mandatory Parameters
● 1: Manhattan
● 2: Euclidean
● 3: Minkowski
● 4: Chebyshev
● 5: Standardized Euclidean
● 6: Cosine
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
MINKOW_P   Integer   3   The power of the Minkowski method.   Only valid when DISTANCE_METHOD is 3.
Output Table
Example
Assume that:
Expected Result
Hierarchical clustering is a widely used clustering method which can find natural groups within a set of data.
The idea is to group the data into a hierarchy or a binary tree of the subgroups. A hierarchical clustering can be
either agglomerate or divisive, depending on the method of hierarchical decomposition.
The implementation in PAL follows the agglomerate approach, which merges the clusters with a bottom-up strategy. Initially, each data point is considered a cluster of its own. The algorithm then iteratively merges two clusters
based on the dissimilarity measure in a greedy manner and forms a larger cluster. Therefore, the input data
must be numeric and a measure of dissimilarity between sets of data is required, which is achieved by using the
following two parameters:
An advantage of hierarchical clustering is that it does not require the number of clusters to be specified as input, and the hierarchical structure can also be used for data summarization and visualization.
The agglomerate hierarchical clustering functions in PAL support eight kinds of distance metrics and seven kinds of linkage criteria.
If the input data has category attributes, you must set the DISTANCE_FUNC parameter to Gower Distance to
support calculating the distance matrix. Gower Distance is calculated in the following way.
Suppose that the items Xi and Xj have K attributes. The Gower distance between Xi and Xj is:

D(Xi, Xj) = ( Σk Wk × Sijk ) / ( Σk Wk )

For a continuous attribute k, Sijk = |Xik ‒ Xjk| / Rk, where Rk is the range of values for the kth attribute. For a categorical attribute k, Sijk = 0 if Xik = Xjk, and Sijk = 1 otherwise. The weight Wk is set by the user and defaults to 1.
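As a quick check of the formula: with K = 2 attributes, unit weights, attribute 1 continuous with range R1 = 10, and attribute 2 categorical, the distance between Xi = (3, A) and Xj = (7, B) is ((|3 ‒ 7| / 10) + 1) / 2 = 0.7.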
Prerequisites
● The first column of the input data is an ID column and the other columns are of integer, double, varchar, or
nvarchar data type.
● The input data does not contain null values. The algorithm will issue errors when encountering null values.
Procedure Generation
Procedure Calling
The input, parameter, combine process, and result tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Jaccard Distance = (M01 + M10) / (M11 + M01 + M10)

● 1: Nearest Neighbor (single linkage)
● 2: Furthest Neighbor (complete linkage)
● 3: Group Average (UPGMA)
● 4: Weighted Average (WPGMA)
● 5: Centroid Clustering
● 6: Median Clustering
● 7: Ward Method

● 0: does nothing
● 1: Z-score standardization
● 2: transforms to new range: -1 to 1
● 3: transforms to new range: 0 to 1
Default: 1
Output Tables
2nd column   Varchar, nvarchar, or integer   One of the clusters to be combined in one combine stage, named as its row number in the input data table. After the combining, the new cluster is named as the smaller one.   The type must be the same as the ID type in the input table.
3rd column   Varchar, nvarchar, or integer   The other cluster to be combined in the same combine stage, named as its row number in the input data table.   The type must be the same as the ID type in the input table.
Result   1st column   Varchar, nvarchar, or integer   ID of the input data.   The type must be the same as the type of the ID in the input table.
Example
Assume that:
Expected Result
COMBINEPROCESS_TBL:
Anomaly detection is used to find the existing data objects that do not comply with the general behavior or
model of the data. Such data objects, which are grossly different from or inconsistent with the remaining set of
data, are called anomalies or outliers. Sometimes anomalies are also referred to as discordant observations,
exceptions, aberrations, surprises, peculiarities or contaminants in different application domains.
Anomalies in data can translate to significant (and often critical) actionable information in a wide variety of
application domains. For example, an anomalous traffic pattern in a computer network could mean that a
hacked computer is sending out sensitive data to an unauthorized destination. An anomalous MRI image may indicate the presence of malignant tumors. Anomalies in credit card transaction data could indicate credit card or identity theft, and anomalous readings from a spacecraft sensor could signify a fault in some component of the spacecraft.
● The input data contains an ID column and the other columns are of integer or double data type.
● The input data does not contain null values. The algorithm will issue errors when encountering null values.
ANOMALYDETECTION
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Data   1st column   Integer, bigint, varchar, or nvarchar   ID   It must be the first column.
Parameter Table
Mandatory Parameters
None.
Optional Parameters
● 1: Manhattan distance
● 2: Euclidean distance
● 3: Minkowski distance

● 1: First k observations
● 2: Random with replacement
● 3: Random without replacement
● 4: Patent of selecting the init center (US 6,882,998 B1)

● 0: No
● 1: Yes. For each point X(x1,x2,…,xn), the normalized value will be X'(x1/S,x2/S,...,xn/S), where S = |x1|+|x2|+...+|xn|.
● 2: For each column C, get the min and max value of C, and then C[i] = (C[i]-min)/(max-min).
Output Tables
Note
The statistics and centers output tables were introduced in SAP HANA SPS 09. The version with only the outliers output table is also supported.
Example
Assume that:
Expected Result
PAL_AD_RESULT_TBL:
PAL_AD_STATISTIC_TBL:
PAL_AD_CENTERS_TBL:
Cluster assignment is used to assign data to the clusters that were previously generated by some clustering
methods such as K-means, DBSCAN (Density-Based Spatial Clustering of Applications with Noise), and SOM
(Self-Organizing Maps).
This algorithm requires that the corresponding clustering procedures save the cluster information, or cluster model, which also includes the control parameters for consistency. It assumes that new data comes from a distribution similar to that of the previous data, and it does not update the cluster information.
For clusters generated by K-means, distances between new data and cluster centers are calculated, and then
the new data is assigned to the cluster with the smallest distance.
For clusters generated by DBSCAN, all core objects are stored. For each piece of new data, the algorithm tries
to find a core object in some formed cluster whose distance is less than the value of the RADIUS parameter. If
such a core object is found, the new data is then assigned to the corresponding cluster, otherwise it is assigned
to cluster -1, indicating that it is noise. It is possible that a piece of data can belong to more than one cluster,
which can be further divided into the following two cases:
● If the number of such core objects is less than the MINPTS parameter value, meaning that the new data is a border object, the new data is assigned to the cluster whose core object has the smallest distance to the new data.
● If the number of such core objects is not less than MINPTS, which means the new data is itself a core object, it is assigned to cluster -1, indicating that it belongs to more than one cluster. In this case, re-running the DBSCAN function is highly recommended.
For clusters generated by SOM, similar to K-means, the distances between the new data and the weight vectors are calculated, and the new data is then assigned to the cluster with the smallest distance.
Prerequisites
CLUSTERASSIGNMENT
This function directly assigns data to clusters based on the previously saved cluster model, without running the full clustering procedure. It currently supports the K-means, DBSCAN, and SOM clustering methods.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Data            1st column   Integer, bigint, varchar, or nvarchar   ID   This must be the first column.
Cluster Model   1st column   Integer   Cluster model ID   This must be the first column.
                2nd column   CLOB, varchar, or nvarchar   Cluster model saved as JSON string   The table must be a column table. The minimum length of each unit (row) is 5000.
Parameter Table
Mandatory Parameters
None.
Optional Parameters
None.
Output Table
Example
Assume that:
For K-means:
Expected Result
PAL_CLUSTER_ASSIGNED_TBL:
Expected Result
PAL_CLUSTER_ASSIGNED_TBL:
For SOM:
Expected Result
PAL_CLUSTER_ASSIGNED_TBL:
Related Information
3.1.5 DBSCAN
DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a density-based data clustering
algorithm. It finds a number of clusters starting from the estimated density distribution of corresponding
nodes.
DBSCAN requires two parameters: scan radius (eps) and the minimum number of points required to form a
cluster (minPts). The algorithm starts with an arbitrary starting point that has not been visited. This point's
eps-neighborhood is retrieved, and if the number of points it contains is equal to or greater than minPts, a
cluster is started. Otherwise, the point is labeled as noise. These two parameters are very important and are usually determined by the user.
PAL provides a method to determine these two parameters automatically. You can either specify the parameters yourself or let the system determine them for you.
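A minimal sketch of the corresponding parameter-table rows (reusing the parameter-table pattern from earlier; the AUTO_PARAM name is an assumption, while MINPTS and RADIUS are the names this guide uses for the two parameters):

-- let PAL determine both parameters automatically (assumed parameter name)
INSERT INTO "#PAL_CONTROL_TBL" VALUES ('AUTO_PARAM', NULL, NULL, 'true');
-- or specify them yourself:
-- INSERT INTO "#PAL_CONTROL_TBL" VALUES ('MINPTS', 5, NULL, NULL);
-- INSERT INTO "#PAL_CONTROL_TBL" VALUES ('RADIUS', NULL, 0.5, NULL);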
Prerequisites
DBSCAN
Procedure Generation
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Data   1st column   Integer, bigint, varchar, or nvarchar   ID   This must be the first column.
Parameter Table
Mandatory Parameters
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 1: Manhattan
● 2: Euclidean
● 3: Minkowski
● 4: Chebyshev
● 5: Standardized Euclidean
● 6: Cosine
Output Tables
Result                     1st column   Integer, bigint, varchar, or nvarchar   ID   This must be the first column.
Cluster Model (optional)   1st column   Integer   Cluster model ID   This must be the first column.
                           2nd column   CLOB, varchar, or nvarchar   Cluster model saved as JSON string   The table must be a column table. The minimum length of each unit (row) is 5000.
Example
Assume that:
Expected Result
PAL_DBSCAN_MODEL_TBL:
GMM is a Gaussian mixture model in which each component has its own weight, mean, and covariance matrix.
Weight means the importance of a Gaussian distribution in the GMM, and mean and covariance matrix are the
basic parameters of a Gaussian distribution, as shown in the following formula:
The expectation maximization (EM) algorithm is used to infer all of the unknown parameters of the GMM. The algorithm performs two steps: the expectation step and the maximization step.
The expectation step calculates the contribution of training sample i to the Gaussian k:
The maximization step calculates the parameters weight, mean, and covariance matrix:
GMM can be used in image segmentation, clustering, and so on. It gives the probability of a sample belonging
to each Gaussian component.
Prerequisite
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Data   1st column   Integer, bigint, varchar, or nvarchar   ID   This must be the first column.
Parameter Table
Mandatory Parameter
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: specifies the output format 0
● 1: specifies the output format 1

Initialize Parameter   1st column   Integer or string   ID   This must be the first column. When INIT_MODE is set to 1 in the parameter table, each row represents a seed, which is the sequence number of the data in the data table (starting from 0). There can be more than one seed. For example, setting the seed data to 1,2,10 means selecting the 1st, 2nd, and 10th data in the input data table as the seeds.
Output Tables
Output format 0:

Cluster Result   1st column   Integer, bigint, varchar, or nvarchar   ID   This must be the first column. Each data point is assigned to the Gaussian distribution component with the highest probability.

Output format 1:

Cluster Result   1st column   Integer, bigint, varchar, or nvarchar   ID   This must be the first column.
Models           1st column   Varchar or nvarchar   GMM model. Each row stores a Gaussian distribution model with the weight, mean, and covariance matrix in JSON format.   The number of rows depends on the number of components in the GMM.
Example
Example 1
Expected Results
PAL_GMM_RESULTSMODEL_TBL:
Expected Results
PAL_GMM_RESULTS_TBL:
3.1.7 K-Means
In predictive analysis, k-means clustering is a method of cluster analysis. The k-means algorithm partitions n
observations or records into k clusters in which each observation belongs to the cluster with the nearest center.
In marketing and customer relationship management areas, this algorithm uses customer data to track
customer behavior and create strategic business initiatives. Organizations can thus divide their customers into
segments based on variants such as demography, customer behavior, customer profitability, measure of risk,
and lifetime value of a customer or retention probability.
Clustering works to group records together according to an algorithm or mathematical formula that attempts
to find centroids, or centers, around which similar records gravitate. The most common algorithm uses an
iterative refinement technique. It is also referred to as Lloyd's algorithm:
Given an initial set of k means m1, ..., mk, the algorithm proceeds by alternating between two steps:
● Assignment step: assigns each observation to the cluster with the closest mean.
● Update step: calculates the new means to be the center of the observations in the cluster.
The k-means implementation in PAL supports multithreading, data normalization, different distance level measurements, and cluster quality measurement (Silhouette). The implementation does not support categorical data, but this can be managed through data transformation. The first K and random K starting methods are supported.
If an attribute is of category type, it will be converted to a binary vector and then be used as a numerical
attribute. For example, in the below table, "Gender" is of category type.
T1 31 10,000 Female
T2 27 8,000 Male
Because "Gender" has two distinct values, it will be converted into a binary vector with two dimensions:
T1 31 10,000 0 1
T2 27 8,000 1 0
The means of categorical attributes will not be output. Instead, the means will be replaced by the modes, similar to the K-Modes algorithm. Take the below center for example:
Prerequisites
● The input data contains an ID column and the other columns are of integer or double data type.
● The input data does not contain null values. The algorithm will issue errors when encountering null values.
KMEANS
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Data 1st column Integer, bigint, varchar, ID This must be the first
or nvarchar column.
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 1: Manhattan distance
● 2: Euclidean distance
● 3: Minkowski distance
● 4: Chebyshev distance

MINKOWSKI_POWER   Double   3.0   When you use the Minkowski distance, this parameter controls the value of power.   Only valid when DISTANCE_LEVEL is 3.

● 1: First k observations
● 2: Random with replacement
● 3: Random without replacement
● 4: Patent of selecting the init center (US 6,882,998 B1)

● 0: No
● 1: Yes. For each point X(x1,x2,...,xn), the normalized value will be X'(x1/S,x2/S,...,xn/S), where S = |x1|+|x2|+...+|xn|.
● 2: For each column C, get the min and max value of C, and then C[i] = (C[i]-min)/(max-min).
Output Tables
Result              1st column   Integer, bigint, varchar, or nvarchar   ID   This must be the first column.
Center Points       1st column   Integer   Cluster center ID   This must be the first column.

Or

Result              1st column   Integer, bigint, varchar, or nvarchar   ID   This must be the first column.
Center Points       1st column   Integer   Cluster center ID   This must be the first column.
Center Statistics   1st column   Integer   Cluster center ID   This must be the first column.
Statistics          1st column   Varchar or nvarchar   Statistic name   This must be the first column.
Cluster Model       1st column   Integer   Cluster model ID   This must be the first column.
                    2nd column   CLOB, varchar, or nvarchar   Cluster model saved as JSON string   The table must be a column table. The minimum length of each unit (row) is 5000.
Example
Assume that:
Expected Result
PAL_KMEANS_ASSIGNED_TBL:
PAL_KMEANS_SIL_CENTERS_TBL:
PAL_KMEANS_STATISTIC_TBL:
PAL_KMEANS_MODEL_TBL:
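Since the example data itself is not reproduced here, the following end-to-end sketch shows the typical generation-and-call pattern for KMEANS. The schema, type, table, and procedure names, the column layouts, and the GROUP_NUMBER parameter name are illustrative assumptions:

SET SCHEMA DM_PAL;

CREATE TYPE PAL_KMEANS_DATA_T AS TABLE ("ID" INTEGER, "V000" DOUBLE, "V001" DOUBLE);
CREATE TYPE PAL_CONTROL_T AS TABLE ("NAME" VARCHAR(100), "INTARGS" INTEGER, "DOUBLEARGS" DOUBLE, "STRINGARGS" VARCHAR(100));
CREATE TYPE PAL_KMEANS_ASSIGN_T AS TABLE ("ID" INTEGER, "CENTER_ID" INTEGER, "DISTANCE" DOUBLE);
CREATE TYPE PAL_KMEANS_CENTERS_T AS TABLE ("CENTER_ID" INTEGER, "V000" DOUBLE, "V001" DOUBLE);

-- signature table: input types first, then the parameter table type, then output types
CREATE COLUMN TABLE PAL_KMEANS_PDATA_TBL ("POSITION" INTEGER, "SCHEMA_NAME" NVARCHAR(256), "TYPE_NAME" NVARCHAR(256), "PARAMETER_TYPE" VARCHAR(7));
INSERT INTO PAL_KMEANS_PDATA_TBL VALUES (1, 'DM_PAL', 'PAL_KMEANS_DATA_T', 'IN');
INSERT INTO PAL_KMEANS_PDATA_TBL VALUES (2, 'DM_PAL', 'PAL_CONTROL_T', 'IN');
INSERT INTO PAL_KMEANS_PDATA_TBL VALUES (3, 'DM_PAL', 'PAL_KMEANS_ASSIGN_T', 'OUT');
INSERT INTO PAL_KMEANS_PDATA_TBL VALUES (4, 'DM_PAL', 'PAL_KMEANS_CENTERS_T', 'OUT');

CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL', 'KMEANS', 'DM_PAL', 'PAL_KMEANS_PROC', PAL_KMEANS_PDATA_TBL);

-- create the runtime tables, set the parameters, and call the generated procedure
CREATE COLUMN TABLE PAL_KMEANS_DATA_TBL LIKE PAL_KMEANS_DATA_T;
CREATE COLUMN TABLE PAL_KMEANS_ASSIGN_TBL LIKE PAL_KMEANS_ASSIGN_T;
CREATE COLUMN TABLE PAL_KMEANS_CENTERS_TBL LIKE PAL_KMEANS_CENTERS_T;
CREATE LOCAL TEMPORARY COLUMN TABLE "#PAL_CONTROL_TBL" ("NAME" VARCHAR(100), "INTARGS" INTEGER, "DOUBLEARGS" DOUBLE, "STRINGARGS" VARCHAR(100));
INSERT INTO "#PAL_CONTROL_TBL" VALUES ('GROUP_NUMBER', 4, NULL, NULL);   -- assumed name of the k parameter
INSERT INTO "#PAL_CONTROL_TBL" VALUES ('THREAD_NUMBER', 2, NULL, NULL);
-- ... INSERT the data points into PAL_KMEANS_DATA_TBL here ...
CALL DM_PAL.PAL_KMEANS_PROC(PAL_KMEANS_DATA_TBL, "#PAL_CONTROL_TBL", PAL_KMEANS_ASSIGN_TBL, PAL_KMEANS_CENTERS_TBL) WITH OVERVIEW;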
VALIDATEKMEANS
This is a quality measurement function for k-means clustering. The current version of VALIDATEKMEANS does
not support category attributes. You can use the CONV2BINARYVECTOR algorithm to convert category
attributes into binary vectors, and then use them as continuous attributes.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Parameter Table
Mandatory Parameter
Optional Parameter
The following parameter is optional. If it is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
PAL_SILHOUETTE_RESULT_TBL:
3.1.8 K-Medians
K-medians is a clustering algorithm similar to K-means. K-medians and K-means both partition n observations into K clusters according to their nearest cluster center. In contrast to K-means, when calculating cluster centers, K-medians uses the median of each feature instead of the mean.
Given an initial set of K cluster centers: m1, ..., mk, the algorithm proceeds by alternating between the following
two steps and repeats until the assignments no longer change.
● Assignment step: assigns each observation to the cluster with the closest center.
● Update step: calculates the new median of each feature of each cluster to be the new center of that cluster.
The K-medians implementation in PAL supports multithreading, data normalization, different distance level measurements, and cluster quality measurement (Silhouette). The implementation does not support categorical data, but this can be managed through data transformation. Because the median method cannot be applied to categorical data, the K-medians implementation uses the most frequent value instead. The first K and random K starting methods are supported.
T1 31 10,000 Female
T2 27 8,000 Male
Because "Gender" has two distinct values, it will be converted into a binary vector with two dimensions:
T1 31 10,000 0 1
T2 27 8,000 1 0
Where γ is the weight given to the transposed categorical attributes to lessen the impact of the 0/1 attributes on the clustering.
Prerequisites
● The input data contains an ID column and the other columns are of integer, varchar, nvarchar, or double
data type.
● The input data does not contain null values. The algorithm will issue errors when encountering null values.
KMEDIANS
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Data   1st column   Integer, bigint, varchar, or nvarchar   ID   This must be the first column.
Parameter Table
Mandatory Parameters
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 1: Manhattan distance
● 2: Euclidean distance
● 3: Minkowski distance
● 4: Chebyshev distance

MINKOWSKI_POWER   Double   3.0   When you use the Minkowski distance, this parameter controls the value of power.   Only valid when DISTANCE_LEVEL is 3.

● 1: First k observations
● 2: Random with replacement
● 3: Random without replacement
● 4: Patent of selecting the initial center (US 6,882,998 B1)

● 0: No
● 1: Yes. For each point X(x1,x2,...,xn), the normalized value will be X'(x1/S,x2/S,...,xn/S), where S = |x1|+|x2|+...+|xn|.
● 2: For each column C, get the min and max value of C, and then C[i] = (C[i]-min)/(max-min).
Output Tables
Example
Assume that:
Expected Results
PAL_KMEDIANS_ASSIGN_TBL:
3.1.9 K-Medoids
This is a clustering algorithm related to the K-means algorithm. Both k-medoids and k-means partition n observations into k clusters in which each observation is assigned to the cluster with the closest center. In contrast to the k-means algorithm, k-medoids clustering does not calculate means; instead, it chooses medoids as the new cluster centers.
A medoid is defined as the center of a cluster, whose average dissimilarity to all the objects in the cluster is
minimal. Compared with the K-means algorithm, K-medoids is more robust to noise and outliers.
Given an initial set of K medoids: m1, ..., mk, the algorithm proceeds by alternating between two steps as below:
● Assignment step: assigns each observation to the cluster with the closest center.
● Update step: calculates the new medoid to be the center of the observations for each cluster.
In PAL, the K-medoids algorithm supports multithreading, data normalization, different distance level measurements, and cluster quality measurement (Silhouette). It does not support categorical data, but this can be managed through data transformation. The first K and random K starting methods are supported.
If an attribute is of category type, it will be converted to a binary vector and then be used as a numerical
attribute. For example, in the below table, "Gender" is of category type.
T1 31 10,000 Female
T2 27 8,000 Male
Because "Gender" has two distinct values, it will be converted into a binary vector with two dimensions:
T1 31 10,000 0 1
T2 27 8,000 1 0
Where γ is the weight given to the transposed categorical attributes to lessen the impact of the 0/1 attributes on the clustering. You can then use the traditional method to update the medoid of every cluster.
Prerequisites
● The input data contains an ID column and the other columns are of integer, varchar, nvarchar, or double
data type.
● The input data does not contain null values. The algorithm will issue errors when encountering null values.
KMEDOIDS
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Data   1st column   Integer, bigint, varchar, or nvarchar   ID   This must be the first column.
Parameter Table
Mandatory Parameters
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 1: Manhattan distance
● 2: Euclidean distance
● 3: Minkowski distance
● 4: Chebyshev distance

MINKOWSKI_POWER   Double   3.0   When you use the Minkowski distance, this parameter controls the value of power.   Only valid when DISTANCE_LEVEL is 3.

● 1: First k observations
● 2: Random with replacement
● 3: Random without replacement
● 4: Patent of selecting the init center (US 6,882,998 B1)

● 0: No
● 1: Yes. For each point X(x1,x2,...,xn), the normalized value will be X'(x1/S,x2/S,...,xn/S), where S = |x1|+|x2|+...+|xn|.
● 2: For each column C, get the min and max value of C, and then C[i] = (C[i]-min)/(max-min).
Output Tables
Example
Assume that:
Expected Result
PAL_KMEDOIDS_ASSIGN_TBL:
PAL_KMEDOIDS_CENTERS_TBL:
Latent Dirichlet allocation (LDA) is a generative model in which each item of a collection is modeled as a
distribution over an underlying set of groups (topics). In the context of text modeling, it posits that each document in the text corpus is composed of several topics with different probabilities and that each word belongs to certain topics with different probabilities. In PAL, the parameter inference is done via Gibbs sampling.
LDAESTIMATE
Procedure Generation
Procedure Calling
Signature
Input Table
Parameter Table
Mandatory Parameter
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: Uniform
● 1: Initialization by
sampling
Output Tables
Word Topic Assignment (optional)   1st column   Integer, varchar, or nvarchar   Document ID (must be the same as in the input data table)
Example
Assume that:
Expected Results
DICTIONARY_TBL:
GENERALINFO_TBL:
LDAINFERENCE
This function infers the topic assignment for new documents based on the previous LDA estimation results.
Procedure Generation
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameter
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: Uniform
● 1: Initialization by
sampling
DELIMIT   Varchar   Space   Specifies the delimiters used to separate the words in the document. This value takes precedence over the corresponding one in the general information table.   Only valid for the first alternative input table schema.
Output Tables
Word Topic Assignment (optional)   1st column   Integer, varchar, or nvarchar   Document ID (must be the same as in the input data table)
Example
Assume that:
DOCTOPICDIST_TBL:
Self-organizing feature maps (SOMs) are one of the most popular neural network methods for cluster analysis.
They are sometimes referred to as Kohonen self-organizing feature maps, after their creator, Teuvo Kohonen,
or as topologically ordered maps. SOMs aim to represent all points in a high-dimensional source space by
points in a low-dimensional (usually 2-D or 3-D) target space, such that the distance and proximity
relationships are preserved as much as possible. This makes SOMs useful for visualizing low-dimensional views
of high-dimensional data, akin to multidimensional scaling.
SOMs can also be viewed as a constrained version of k-means clustering, in which the cluster centers tend to lie in a low-dimensional manifold in the feature or attribute space. The learning process mainly includes three steps:
An important variant is batch SOM, which updates the weight vectors only at the end of every learning epoch. It requires the whole set of training data to be present, and it is independent of the order of the input vectors.
The SOM approach has many applications such as visualization, web document clustering, and speech recognition.
Prerequisites
● The first column of the input data is an ID column and the other columns are of integer or double data type.
● The input data does not contain null values. The algorithm will issue errors when encountering null values.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Data   1st column   Integer, bigint, varchar, or nvarchar   ID   This must be the first column.
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
MAX_ITERATION   Integer   1000 plus 500 times the number of neurons in the lattice   Maximum number of iterations. Note that the training might not converge if this value is too small, for example, less than 1000.
● 0: No
● 1: Transform to new
range (0.0, 1.0)
● 2: Z-score normalization
● 1: Gaussian
● 2: Bubble/Flat
● 1: Exponential
● 2: Linear
● 1: Rectangle
● 2: Hexagon
● 0: Classical SOM
● 1: Batch SOM
Note
The SOM algorithm has been updated since SAP HANA SPS 12, so the algorithm results may differ from the results in previous versions. The default value (1: Gaussian) for the KERNEL_FUNCTION parameter is not available in SAP HANA SPS 11 or lower, which only supports the bubble neighborhood function.
Output Tables
SOM Map                    1st column   Integer   Unit cell ID   This must be the first column.
SOM Assign                 1st column   Integer, varchar, or nvarchar   ID of original tuples   This must be the first column.
Cluster Model (optional)   1st column   Integer   Cluster model ID   This must be the first column.
                           2nd column   CLOB, varchar, or nvarchar   Cluster model saved as JSON string   The table must be a column table. The minimum length of each unit (row) is 5000.
The SOM Assign output table can also use the following format:
● 0: Not adjacent
● 1: Adjacent
Example
Assume that:
Expected Result
PAL_SOM_MAP_TBL:
PAL_SOM_MODEL_TBL:
Expected Result
PAL_SOM_MAP_TBL:
PAL_SOM_MODEL_TBL:
The complexity of Silhouette is O(N²), where N represents the number of records. When N is very large, Silhouette is costly to compute.
For efficiency, PAL provides a lite version of Silhouette called Slight Silhouette. Suppose you have N records. For every record i, the following is defined:
It is clear that ‒1 ≤ S ≤ 1, where ‒1 indicates a poor clustering result and 1 stands for a good one.
For attributes of category type, you can pre-process the input data using the method described in K-means.
Prerequisites
SLIGHTSILHOUETTE
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Data   Last column   Integer, bigint, varchar, or nvarchar   Class label   This must be the last column.
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 1: Manhattan distance
● 2: Euclidean distance
● 3: Minkowski distance
● 4: Chebyshev distance

MINKOWSKI_POWER   Double   3   When you use the Minkowski distance, this parameter controls the value of power.   Only valid when DISTANCE_LEVEL is 3.

● 0: No
● 1: Yes. For each point X(x1,x2,...,xn), the normalized value will be X'(x1/S,x2/S,...,xn/S), where S = |x1|+|x2|+...+|xn|.
● 2: For each column C, get the min and max value of C, and then C[i] = (C[i]-min)/(max-min).
Output Table
Example
Assume that:
Expected Result
For information on incremental clustering on SAP HANA Smart Data Streaming, see the example of DenStream
clustering in the SAP HANA Smart Data Streaming: Developer Guide.
Note
SAP HANA Smart Data Streaming is only supported on Intel-based hardware platforms.
This section describes the classification algorithms that are provided by the Predictive Analysis Library.
Area under curve (AUC) is a traditional method to evaluate the performance of classification algorithms. Basically, it evaluates binary classifiers, but it can easily be extended to the multiple-class condition.
In an area under curve algorithm, the curve is the receiver operating characteristic (ROC) curve, which is obtained by plotting the true positive rate (TPR) against the false positive rate (FPR) at several thresholds:

TPR = TP / (TP + FN)
FPR = FP / (FP + TN)

where TP, FN, FP, and TN denote the numbers of true positives, false negatives, false positives, and true negatives, respectively.
After plotting the ROC curve, you can calculate the area under the curve by using numerical integration algorithms such as Simpson's rule. The value of AUC ranges from 0.5 to 1. If the AUC equals 1, the classifier is expected to have perfect performance.
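As a quick numeric check of the formulas above: at a threshold where TP = 8, FN = 2, FP = 1, and TN = 9, the plotted ROC point is TPR = 8 / (8 + 2) = 0.8 and FPR = 1 / (1 + 9) = 0.1.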
Prerequisite
AUC
Procedure Generation
Or
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table(s)
2nd column   Integer, varchar, or nvarchar   All of the possible predictive labels   The type must be the same as the second column type of the previous table.
3rd column   Double   The probability of belonging to each possible label   For each sample, the sum of the probabilities of belonging to each possible label should be 1.
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Tables
Example
Assume that:
Example 1
Expected Result
PAL_AUC_TBL:
PAL_ROC_TBL:
Example 2
Expected Result
PAL_AUC_TBL:
PAL_ROC_TBL:
A neural network is a computational model inspired by biological nervous systems. The functionality of a neural network is determined by its network structure and the connection weights between neurons. The back propagation neural network (BPNN) is one of the most popular types, named after its training method, back propagation.
Single Neuron
A neural network consists of three parts: an input layer, hidden layer(s), and an output layer. Each layer owns several neurons, and there are connections between layers. Signals are fed into the input layer and transmitted through connections and layers. Finally, the output layer emits the transformed signals.
Steps
Training
● Batch training: weight updating is based on the error of the entire package of training patterns. Thus, in one round the weights are updated once.
● Stochastic training: weight updating is based on the error of a single training pattern. Thus, in one round the weights are updated for each pattern.
If an attribute is of category type, it will be converted to a binary vector and then be used as numerical
attributes. For example, in the below table, "Gender" is of category type.
T1 31 10,000 Female
T2 27 8,000 Male
Because "Gender" has two distinct values, it will be converted into a binary vector with two dimensions:
T1 31 10,000 0 1
T2 27 8,000 1 0
Prerequisites
● The data are of integer, varchar, nvarchar, or double data type and do not contain null values. Otherwise the algorithm will issue errors.
● If it is for classification, the last column is considered the label column and is of integer, varchar, or nvarchar type.
● If it is for regression, you should specify how many of the last columns are considered target values; they are of integer or double type.
● The data are of integer, varchar, nvarchar, or double data type and do not contain null values. Otherwise the algorithm will issue errors.
● The first column is an ID column and should be of integer type.
● The column order and column number of the predicted data are the same as the order and number used in model training.
CREATEBPNN
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: Classification
● 1: Regression

● 0: Batch
● 1: Stochastic

● 0: None
● 1: Z-transform
● 2: Scalar

● 0: All zeros
● 1: Normal distribution
● 2: Uniform distribution in range (0, 1)
Output Tables
● TANH = 1
● LINEAR = 2
● SIGMOID_ASYMMETRIC = 3
● SIGMOID_SYMMETRIC = 4
● GAUSSIAN_ASYMMETRIC = 5
● GAUSSIAN_SYMMETRIC = 6
● ELLIOT_ASYMMETRIC = 7
● ELLIOT_SYMMETRIC = 8
● SIN_ASYMMETRIC = 9
● SIN_SYMMETRIC = 10
● COS_ASYMMETRIC = 11
● COS_SYMMETRIC = 12
Examples
Assume that:
Classification example:
Expected Results
Note: Your result may look slightly different from the following results.
PAL_CLASSIFICATION_NN_MODEL_TBL:
Regression example:
Expected Results
Note: Your result may look slightly different from the following results.
PAL_TRAIN_NN_RESULT_TBL:
PAL_REGRESSION_NN_MODEL_TBL:
PREDICTWITHBPNN
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
The model input table must store the trained neural network model.
Signature
Input Tables
Parameter Table
Mandatory Parameters
None.
Optional Parameter
The following parameter is optional. If it is not specified, PAL will use its default value.
Output Table
Examples
Assume that:
Classification example:
Expected Results
Note: Your result may look slightly different from the following results.
PAL_PREDICT_NN_RESULT_TBL:
Regression example:
Expected Result
Note: Your result may look slightly different from the following result.
PAL_PREDICT_NN_RESULT_TBL:
A decision tree is used as a classifier for determining an appropriate action or decision among a predetermined set of actions for a given case. A decision tree helps you effectively identify the factors to consider and how each factor has historically been associated with different outcomes of the decision. A decision tree uses a tree-like structure of conditions and their possible consequences. Each node of a decision tree can be a leaf node or a decision node.
As a classification algorithm, C4.5 builds decision trees from a set of training data, using the concept of
information entropy. The training data is a set of already classified samples. At each node of the tree, C4.5
chooses the attribute of the data that most effectively splits it into subsets enriched in one class or the other. Its
criterion is the normalized information gain (difference in entropy) that results from choosing an attribute for
splitting the data. The attribute with the highest normalized information gain is chosen to make the decision.
The C4.5 algorithm then proceeds recursively until meeting some stopping criteria such as the minimum
number of cases in a leaf node.
The C4.5 decision tree functions implemented in PAL support both discrete and continuous values. In the PAL implementation, the REP (Reduced Error Pruning) algorithm is used as the pruning method.
Prerequisites
● The column order and column number of the predicted data are the same as the order and number used in tree model building.
● The last column of the training data is used as a predicted field and is of discrete type. The predicted data
set has an ID column.
● The table used to store the tree model is a column table.
● The target column of training data must not have null values, and other columns should have at least one
valid value (not null).
Note
C4.5 decision tree treats null as a special value.
CREATEDTWITHC45
This function creates a decision tree from the input training data.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Training/Historical Data   Columns   Varchar, nvarchar, integer, or double   Table used to build the predictive tree model   Discrete value: integer, varchar, or nvarchar
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● String or integer: categorical
● Double: continuous
Output Tables
Example
Assume that:
Expected Result
PAL_C45_TREEMODEL_TBL:
PAL_C45_PMMLMODEL_TBL:
Classification and regression tree (CART) is used for classification or regression and only supports binary split.
CART is a recursive partitioning method similar to the C4.5 decision tree. It uses the GINI index or TWOING for classification, and least square error for regression. In PAL, CART only supports the GINI split strategy. The surrogate split method is used to support missing values when creating the tree model.
Prerequisites
● The target column of training data must not have null values, and other columns should have at least one
valid value (not null).
● The table used to store the tree model is a column table.
CART
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Last column   Varchar, nvarchar, double, or integer   Dependent field   Null values are not allowed.
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 103: GINI

● String or integer: categorical
● Double: continuous

If IS_OUTPUT_RULES and PMML_EXPORT are both set to 0, the algorithm will output the JSON tree model.
Output Tables
Example
Assume that:
Expected Result:
PAL_CART_TREEMODEL_TBL:
PAL_CART_STATISTIC_TBL:
Prerequisites
● The target column of the training data must not have null values, and other columns should have at least
one valid value (not null).
● The table used to store the tree model is a column table.
Note
CHAID treats null values as special values.
CREATEDTWITHCHAID
This function creates a decision tree from the input training data. It can be used for classification.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Last column   Varchar, nvarchar, or integer   Dependent field   Null values are not allowed.
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● String or integer: categorical
● Double: continuous

● 0: MDLPC
● 1: Equal Frequency

<column name>   Integer   10   If the column is continuous and DISCRETIZATION_TYPE is set to 1, you can use this parameter to specify the number of bins.   Only valid when DISCRETIZATION_TYPE is 1.
Output Tables
2nd column   Varchar or nvarchar   Tree model saved as a JSON string   The minimum length of each unit (row) is 5000.
2nd column   Varchar or nvarchar   Tree model in PMML format   The minimum length of each unit (row) is 5000.
Example
Assume that:
Expected Result
PAL_CHAID_PMML_TBL:
The confusion matrix is a traditional method to evaluate the performance of classification algorithms, including multi-class cases.
The following is an example confusion matrix of a 3-class classification problem. The rows show the original
labels and the columns show the predicted labels. For example, the number of class 0 samples classified as
class 0 is a; the number of class 0 samples classified as class 2 is c.
(predicted) | Class 0 | Class 1 | Class 2
Class 0 | a | b | c
Class 1 | d | e | f
Class 2 | g | h | i
From the confusion matrix, you can compute the precision, recall, and F1-score for each class. In the above example, the precision and recall of class 0 are:
precision = a / (a + d + g)
recall = a / (a + b + c)
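The same counts can be produced directly with SQL from a table of labeled predictions. The following is a minimal sketch; the table PREDICTIONS and its column names are illustrative and not part of the PAL signature:
SELECT "ORIGINAL_LABEL", "PREDICTED_LABEL", COUNT(*) AS "COUNT"
FROM PREDICTIONS
GROUP BY "ORIGINAL_LABEL", "PREDICTED_LABEL"
ORDER BY "ORIGINAL_LABEL", "PREDICTED_LABEL";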
CONFUSIONMATRIX
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Data | 1st column | Integer, bigint, varchar, or nvarchar | ID | This must be the first column.
2nd column | Integer, varchar, or nvarchar | Original label | The data type of the 2nd and 3rd columns must be the same.
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Tables
Example
Assume that:
Expected Results
PAL_CM_CONFUSIONMATRIX_TBL:
PAL_CM_CLASSIFICATIONREPORT_TBL:
3.2.7 KNN
K-Nearest Neighbor (KNN) is a memory-based classification method with no explicit training phase. In the testing phase, given a query sample x, its top K nearest samples are found in the training set first, and then the label of x is assigned the most frequent label of the K nearest neighbors. In this release of PAL, the attributes of each sample should be real numbers. To speed up the search, a KD-tree search method is provided.
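The KNN procedure is generated through the same AFL wrapper mechanism used throughout this chapter. A minimal sketch, assuming a signature table PAL_KNN_PDATA_TBL defined per the Signature section below (the procedure and table names are illustrative):
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_DROP('DM_PAL', 'PAL_KNN_PROC');
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL', 'KNN', 'DM_PAL', 'PAL_KNN_PROC', PAL_KNN_PDATA_TBL);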
Prerequisites
● The first column of the training data and input data is an ID column. The second column of the training
data is of class type. The class type column is of integer type. Other data columns are of integer or double
type.
● The input data does not contain null values.
KNN
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: Majority voting
● 1: Distance-weighted voting
Output Table
Example
Assume that:
Expected Result
This function can only handle binary-class classification problems. For multi-class classification problems, refer to Multi-Class Logistic Regression.
Considering a training data set with n samples and m explanatory variables, the logistic regression model is given by:
P(y = 1 | x) = 1 / (1 + exp(−(θ0 + θ1x1 + … + θmxm)))
Assuming that there are only two class labels, {0,1}, you then get:
P(y = 0 | x) = 1 − P(y = 1 | x)
Here θ0, θ1, …, θm can be obtained through the Maximum Likelihood Estimation (MLE) method. With elastic net regularization, a penalty term λ × Pα(θ) is added to the negative log-likelihood, where
Pα(θ) = (1 − α)/2 × Σj θj² + α × Σj |θj|
and λ ≥ 0. If α = 0, we have the ridge regularization; if α = 1, we have the LASSO regularization.
Function FORECASTWITHLOGISTICR is used to predict the labels for the testing data.
Prerequisites
LOGISTICREGRESSION
Procedure Generation
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
CLASS_MAP0 | Varchar | Specifies the dependent variable value that will be mapped to 0. | Only valid when the column type of the dependent variable is varchar or nvarchar.
CLASS_MAP1 | Varchar | Specifies the dependent variable value that will be mapped to 1. | Only valid when the column type of the dependent variable is varchar or nvarchar.
By default, a varchar or nvarchar type column holds discrete independent variables, and an integer or double type column holds continuous independent variables.
Optional Parameters
STEP_SIZE | Double | No default value | Step size for line searching. | Only valid when METHOD is 1.
ENET_LAMBDA | Double | No default value | Penalized weight. The value should be equal to or greater than 0. | Only valid when METHOD is 2.
ENET_ALPHA | Double | 1.0 | The elastic net mixing parameter (0: ridge penalty; 1: LASSO penalty). The value range is between 0 and 1 inclusive. | Only valid when METHOD is 2.
SELECTED_FEATURES | Varchar | No default value | A string to specify the features that will be processed. The pattern is "X1,…,Xn", where Xi is the corresponding column name in the data table. If this parameter is not specified, all the features will be processed. | Only used when you need to indicate the needed features.
DEPENDENT_VARIABLE | Varchar | No default value | Column name in the data table used as the dependent variable. | Only used when you need to indicate the dependent variable.
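For example, an elastic-net regularized run (METHOD set to 2) could be configured through the parameter table along the following lines. This is a sketch only; the control table #PAL_CONTROL_TBL is assumed to be created as in the regression examples later in this reference, and the λ and α values are illustrative:
INSERT INTO #PAL_CONTROL_TBL VALUES ('METHOD', 2, NULL, NULL);
INSERT INTO #PAL_CONTROL_TBL VALUES ('ENET_LAMBDA', NULL, 0.01, NULL);
INSERT INTO #PAL_CONTROL_TBL VALUES ('ENET_ALPHA', NULL, 0.5, NULL);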
Output Table
● A0: intercept
● A1: beta coefficient for X1
● A2: beta coefficient for X2
● …
Example
Assume that:
Expected Result
PAL_LOGISTICR_RESULTS_TBL:
PAL_LOGISTICR_STAT_TBL:
Expected Results
PAL_ENET_LOGISTICR_RESULTS_TBL:
PAL_ENET_LOGISTICR_STAT_TBL:
PAL_ENET_LOGISTICR_PMMLMODEL_TBL:
FORECASTWITHLOGISTICR
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
By default, a varchar or nvarchar type column holds discrete independent variables, and an integer or double column holds continuous independent variables.
CLASS_MAP0 | Varchar | The same value as LOGISTICREGRESSION's parameter. | Only valid when the column type of the dependent variable is varchar or nvarchar.
CLASS_MAP1 | Varchar | The same value as LOGISTICREGRESSION's parameter. | Only valid when the column type of the dependent variable is varchar or nvarchar.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
Related Information
In many business scenarios we want to train a classifier with more than two classes. Multi-class logistic regression (also referred to as multinomial logistic regression) extends the binary logistic regression algorithm (two classes) to multi-class cases.
The inputs and outputs of multi-class logistic regression are similar to those of logistic regression. In the
training phase, the inputs are features and labels of the samples in the training set, and the outputs are some
vectors. In the testing phase, the inputs are features of the samples in the testing set and the output of the
training phase, and the outputs are the labels of the samples in the testing set.
Algorithms
Training Phase
Let P be the number of features and K the number of classes. The output of the training phase is a weight matrix W* ∈ R^((P+1)×K), where w*p,k (p ≤ P) is the weight of the p-th feature for the k-th class, and w*P+1,k is the constant (intercept) for the k-th class. W* is obtained by solving an optimization problem that maximizes the (regularized) likelihood of the training data.
Testing Phase
In the testing set, let N be the number of samples and X ∈ R^(N×P) be the features, where xi,p is the p-th feature of the i-th sample. Let y be the unknown labels, where yi is the label of the i-th sample, and let c be the prediction confidences, where ci is the confidence (likelihood) of the i-th sample's predicted label.
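The two phases map onto the two PAL functions described next: LRMCTR for training and LRMCTE for testing, both generated through the AFL wrapper. A minimal sketch, assuming signature tables defined per the respective Signature sections (the procedure and signature-table names here are illustrative):
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL', 'LRMCTR', 'DM_PAL', 'PAL_LRMC_TRAIN_PROC', PAL_LRMC_TRAIN_PDATA_TBL);
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL', 'LRMCTE', 'DM_PAL', 'PAL_LRMC_TEST_PROC', PAL_LRMC_TEST_PDATA_TBL);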
LRMCTR
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
SELECTED_FEATURES | Varchar | No default value | A string to specify the features that will be processed. The pattern is "X1,…,Xn", where Xi is the corresponding column name in the data table. If this parameter is not specified, all the features will be processed. | Only used when you need to indicate the needed features.
DEPENDENT_VARIABLE | Varchar | No default value | Column name in the data table used as the dependent variable. | Only used when you need to indicate the dependent variable.
By default, a varchar or nvarchar type column holds discrete independent variables, and an integer or double type column holds continuous independent variables.
Output Tables
Example
Assume that:
PAL_LRMC_MODEL_TBL:
PAL_LRMC_PMML_TBL:
LRMCTE
Procedure Generation
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Non-PMML format:
PMML format:
Parameter Table
Mandatory Parameters
None.
Optional Parameter
The following parameter is optional. If it is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
Expected Result
Naive Bayes is a classification algorithm based on Bayes theorem. It estimates the class-conditional probability
by assuming that the attributes are conditionally independent of one another.
Given the class label y and a dependent feature vector x1 through xn, the conditional independence assumption can be formally stated as follows:
P(x1, …, xn | y) = P(x1 | y) × P(x2 | y) × … × P(xn | y)
Since P(x1, …, xn) is constant given the input, we can use the following classification rule: predict the class y that maximizes P(y) × P(x1 | y) × … × P(xn | y).
We can use Maximum a posteriori (MAP) estimation to estimate P(y) and P(xi|y). The former is then the
relative frequency of class y in the training set.
The different Naive Bayes classifiers differ mainly by the assumptions they make regarding the distribution
of P(xi|y).
For continuous attributes, the attribute data are fitted to a Gaussian distribution to obtain P(xi|y).
For discrete attributes, the count ratio is used as P(xi|y). However, if a category did not occur in the training set, P(xi|y) becomes 0, while the actual probability is merely small instead of 0. This would introduce errors into the prediction. To handle this issue, PAL introduces Laplace smoothing. P(xi|y) is then estimated as:
P(xi | y) = (Ni + α) / (N + α × d)
where Ni is the number of class y training samples with category xi for the attribute, N is the total number of class y samples, and d is the number of categories of the attribute.
This is a type of shrinkage estimator, as the resulting estimate is between the empirical estimate Ni / N and the uniform probability 1/d. α > 0 is the smoothing parameter, also called the Laplace control value in the following discussion.
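For example, suppose a discrete attribute has d = 3 categories and class y contains N = 10 training samples with category counts (0, 2, 8). Without smoothing, the first category would get P(xi|y) = 0/10 = 0; with α = 1, it gets (0 + 1) / (10 + 1 × 3) = 1/13 ≈ 0.077, which lies strictly between the empirical estimate 0 and the uniform probability 1/3.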
Despite its simplicity, Naive Bayes works quite well in areas like document classification and spam filtering, and
it only requires a small amount of training data to estimate the parameters necessary for classification.
The Naive Bayes algorithm in PAL includes two functions: NBCTRAIN for generating the training model, and NBCPREDICT for making predictions based on the training model.
● The input data can be of any data type, but the last column cannot be of double type.
● The input data does not contain null values.
NBCTRAIN
This function reads the input data and generates a training model with the Naive Bayes algorithm.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Training/Historical Data | Other columns | Varchar, nvarchar, integer, or double | Attribute columns | Discrete value: integer, varchar, or nvarchar
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: Disables Laplace smoothing
● Positive value: Enables Laplace smoothing for discrete values
Output Table
Example
Assume that:
Expected Result
NBCPREDICT
This function uses the training model generated by NBCTRAIN to make predictive analysis.
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Parameter Table
Mandatory Parameters
None.
Optional Parameters
● 0: Disables Laplace smoothing.
● Positive value: Enables Laplace smoothing for discrete values.
Output Table
Assume that:
Expected Result
Parameter selection and model evaluation (PSME) is used to enable cross validation and parameter selection
for some PAL functions.
To avoid overfitting, it is common practice to use cross validation to evaluate model performance and perform model selection for the optimal parameters of the model. This algorithm is an envelope for different classification algorithms, providing automatic parameter selection and model evaluation facilities during the training phase. Logistic regression, naive Bayes, support vector machine (SVM), and random forest are supported.
PSME
This function performs parameter selection and model evaluation for classification algorithms including
logistic regression, naive Bayes, and support vector machine (SVM).
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
● LOGISTICREGRESSION: Logistic Regression
● NBCTRAIN: Naive Bayes
● SVMTRAIN: Support Vector Machine
● RANDOMFOREST: Random Forest
Parameter Table
Parameters supported by the wrapped functions are also supported. In addition, there are some general parameters and parameters specific to the wrapped functions.
General Parameters
● 0: Disabled
● 1: Enabled
● 'ACCURACY': Accuracy is used as the measure
● 'F1_SCORE': The F1 score is used as the measure
Output Tables
Function Specific Output Table(s) | Function specific | Function specific | Function specific
Note
The serialized confusion matrix is output in the evaluation results table in JSON format, as shown below. The "Count" array is filled by row. It may be split into more rows if the length exceeds 5000.
{
ConfusionMatrix:{
ActualCategories:[],
Count:[],
PredictedCategories:[]
}
}
For example,
{"ConfusionMatrix":{"ActualCategories":
["setosa","versicolor","virginica"],"Count":
[50,0,0,0,47,3,0,4,46],"PredictedCategories":
["setosa","versicolor","virginica"]}} represents
setosa 50 0 0
versicolor 0 47 3
virginica 0 4 46
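From this matrix the per-class measures follow directly. For versicolor, precision = 47 / (0 + 47 + 4) ≈ 0.92 (47 of the 51 samples predicted as versicolor are correct), and recall = 47 / (0 + 47 + 3) = 0.94 (47 of the 50 actual versicolor samples are found).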
Examples
Assume that:
Expected Results
CV_LR_EVALUATION_TBL:
Expected Results
EVALUATION_RESULT_TBL:
Example 3: SVM
Expected Results
EVALUATION_RESULT_TBL:
SELECTED_PARAMETER_TBL:
Expected Results
PAL_EVAL_TBL:
PAL_PARASEL_TBL:
Prerequisites
PREDICTWITHDT
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
The algorithm uses both bagging and random feature selection techniques. Each new training set is drawn with
replacement from the original training set, and then a tree is grown on the new training set using random
feature selection.
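The training procedure is generated through the usual AFL wrapper. A minimal sketch, assuming a signature table PAL_RF_PDATA_TBL defined per the Signature section below (the procedure and table names are illustrative):
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_DROP('DM_PAL', 'PAL_RF_TRAIN_PROC');
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL', 'RANDOMFORESTTRAIN', 'DM_PAL', 'PAL_RF_TRAIN_PROC', PAL_RF_PDATA_TBL);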
Prerequisite
The target column of the training data must not have null values, and other columns should have at least one
valid value (not null).
RANDOMFORESTTRAIN
Procedure Generation
Procedure Calling
Signature
Input Table
Last column | Varchar, nvarchar, double, or integer | Dependent field. The varchar type is used for classification, and the double type is for regression. | Null values are not allowed.
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
CONTINUOUS_COL | Integer | Detected from input data | Indicates which columns are continuous attributes. The default behavior is:
Output Tables
or
2nd column | Double | Out-of-bag error rate or Mean Squared Error (for regression) | Out-of-bag error rate or mean squared error for the random forest up to the indexed tree
Example
Assume that:
Expected Result
PAL_RF_MODEL_TBL:
PAL_RF_VAR_IMP_TBL:
PAL_RF_ERR_RATE_TBL:
PAL_RF_CONFUSION_TBL:
RANDOMFORESTSCORING
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Scoring Data | 1st column | Integer, varchar, or nvarchar | ID | This must be the first column.
Parameter Table
Mandatory Parameters
None.
Optional Parameters
None.
Output Table
Example
Assume that:
Expected Result
PAL_RF_SCORING_RESULT_TBL:
Support Vector Machines (SVMs) refer to a family of supervised learning models using the concept of support vectors. Compared with many other supervised learning models, SVMs have the advantage that the models they produce can be either linear or non-linear, where the latter is realized by a technique called the kernel trick.
Like most supervised models, SVMs have a training phase and a testing phase. In the training phase, a function f(x) → y, where f(∙) is a (possibly non-linear) function mapping a sample onto a TARGET, is learned. The training set consists of pairs denoted by {xi, yi}, where xi denotes a sample represented by several attributes and yi denotes a TARGET (supervised information). In the testing phase, the learned f(∙) is used to map a sample with unknown TARGET onto its predicted TARGET.
In the current implementation in PAL, SVMs can be used for the following three tasks:
Because non-linearity is realized by the kernel trick, the kernel type and its parameters must be specified in addition to the data sets.
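For example, a non-linear SVM with the RBF kernel could be requested through the parameter table along these lines. This is a sketch only; KERNEL_TYPE and RBF_GAMMA are described under Optional Parameters below, the gamma value is illustrative, and #PAL_CONTROL_TBL is assumed to be created as in the regression examples later in this reference:
INSERT INTO #PAL_CONTROL_TBL VALUES ('KERNEL_TYPE', 2, NULL, NULL);
INSERT INTO #PAL_CONTROL_TBL VALUES ('RBF_GAMMA', NULL, 0.005, NULL);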
Prerequisite
SVMTRAIN
This function reads the input data and generates a training model.
Procedure Generation
Procedure Calling
The training procedure name is the same as specified in the procedure generation.
The input, parameter, and output tables must be of the types specified in the signature table.
Input Table
Data | 1st column | Integer, bigint, varchar, or nvarchar | ID | This must be the first column.
Parameter Table
Mandatory Parameter
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: LINEAR KERNEL
● 1: POLY KERNEL
● 2: RBF KERNEL
● 3: SIGMOID KERNEL
Value range: ≥ 1
RBF_GAMMA | Double | 0.005 | Coefficient for the RBF KERNEL type. | Only valid when KERNEL_TYPE is 2.
COEF_LIN | Double | 0.0 | Coefficient for the POLY/SIGMOID KERNEL type. | Only valid when KERNEL_TYPE is 1 or 3.
COEF_CONST | Double | 0.0 | Coefficient for the POLY/SIGMOID KERNEL type. | Only valid when KERNEL_TYPE is 1 or 3.
REGRESSION_EPS | Double | 0.1 | Epsilon width of tube for regression. | Only valid when TYPE is 2.
Output Tables
Example
Assume that:
Expected Result
Expected Result
Expected Result
This function uses the training model generated by SVMTRAIN to make predictive analysis.
Procedure Generation
Procedure Calling
The predicting procedure name is the same as specified in the procedure generation.
Signature
Input Table
Data | 1st column | Integer, bigint, varchar, or nvarchar | ID | This must be the first column.
Parameter Table
Mandatory Parameters
None.
Optional Parameter
The following parameter is optional. If it is not specified, PAL will use its default value.
Output Table
Examples
Assume that:
Expected Result
Expected Result
Expected Result
Related Information
For information on incremental classification on SAP HANA Smart Data Streaming, see the example of
Hoeffding tree training and scoring in the SAP HANA Smart Data Streaming: Developer Guide.
Note
SAP HANA Smart Data Streaming is only supported on Intel-based hardware platforms.
This section describes the regression algorithms that are provided by the Predictive Analysis Library.
Geometric regression is an approach used to model the relationship between a scalar variable y and a variable
denoted X. In geometric regression, data is modeled using geometric functions, and unknown model
parameters are estimated from the data. Such models are called geometric models.
In PAL, the implementation of geometric regression transforms the model into a linear regression and solves it:
y = β0 × x^β1
Taking the natural logarithm of both sides gives ln(y) = ln(β0) + β1 × ln(x). Let y' = ln(y) and x' = ln(x); then y' and x' have a linear relationship and can be solved with the linear regression method.
The implementation also supports calculating the F value and R² to determine statistical significance.
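For example, for the model y = 2 × x^3, taking logarithms gives ln(y) = ln(2) + 3 × ln(x); a linear fit of y' against x' therefore returns slope β1 = 3 and intercept ln(β0) = ln(2), from which β0 = 2 is recovered by exponentiation.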
Prerequisites
GEOREGRESSION
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: Doolittle decomposition (LU)
● 2: Singular value decomposition (SVD)
Output Tables
● A0: intercept
● A1: beta coefficient for X1
Example
Assume that:
Expected Result
PAL_GR_RESULTS_TBL:
PAL_GR_FITTED_TBL:
PAL_GR_PMMLMODEL_TBL:
FORECASTWITHGEOR
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
PAL_FGR_FITTED_TBL:
In PAL, the implementation of natural logarithmic regression transforms the model into a linear regression and solves it:
y = β1 × ln(x) + β0
Let x' = ln(x); then y = β0 + β1 × x', so y and x' have a linear relationship and can be solved with the linear regression method.
The implementation also supports calculating the F value and R^2 to determine statistical significance.
Prerequisites
LNREGRESSION
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: Doolittle decomposition (LU)
● 2: Singular value decomposition (SVD)
Output Tables
● A0: intercept
● A1: beta coefficient for X1
● A2: beta coefficient for X2
● …
Example
Expected Result
PAL_NLR_FITTED_TBL:
PAL_NLR_SIGNIFICANCE_TBL:
PAL_NLR_PMMLMODEL_TBL:
FORECASTWITHLNR
This function performs prediction with the natural logarithmic regression result.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Table
Example
Assume that:
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_DROP('DM_PAL','PAL_FORECAST_LNR_PROC');
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL','FORECASTWITHLNR','DM_PAL','PAL_FORECAST_LNR_PROC',PAL_FNLR_PDATA_TBL);
DROP TABLE #PAL_CONTROL_TBL;
CREATE LOCAL TEMPORARY COLUMN TABLE #PAL_CONTROL_TBL ("NAME" VARCHAR(100), "INTARGS" INT, "DOUBLEARGS" DOUBLE, "STRINGARGS" VARCHAR(100));
INSERT INTO #PAL_CONTROL_TBL VALUES ('THREAD_NUMBER',8,null,null);
DROP TABLE PAL_FNLR_PREDICTDATA_TBL;
CREATE COLUMN TABLE PAL_FNLR_PREDICTDATA_TBL ( "ID" INT,"X1" DOUBLE);
INSERT INTO PAL_FNLR_PREDICTDATA_TBL VALUES (0,1);
INSERT INTO PAL_FNLR_PREDICTDATA_TBL VALUES (1,2);
INSERT INTO PAL_FNLR_PREDICTDATA_TBL VALUES (2,3);
INSERT INTO PAL_FNLR_PREDICTDATA_TBL VALUES (3,4);
INSERT INTO PAL_FNLR_PREDICTDATA_TBL VALUES (4,5);
INSERT INTO PAL_FNLR_PREDICTDATA_TBL VALUES (5,6);
INSERT INTO PAL_FNLR_PREDICTDATA_TBL VALUES (6,7);
DROP TABLE PAL_FNLR_COEFFICIENT_TBL;
CREATE COLUMN TABLE PAL_FNLR_COEFFICIENT_TBL ("ID" INT,"Ai" DOUBLE);
INSERT INTO PAL_FNLR_COEFFICIENT_TBL VALUES (0,14.86160299);
INSERT INTO PAL_FNLR_COEFFICIENT_TBL VALUES (1,98.29359746);
DROP TABLE PAL_FNLR_FITTED_TBL;
CREATE COLUMN TABLE PAL_FNLR_FITTED_TBL ("ID" INT,"Fitted" DOUBLE);
CALL DM_PAL.PAL_FORECAST_LNR_PROC(PAL_FNLR_PREDICTDATA_TBL, PAL_FNLR_COEFFICIENT_TBL, "#PAL_CONTROL_TBL", PAL_FNLR_FITTED_TBL) WITH OVERVIEW;
SELECT * FROM PAL_FNLR_FITTED_TBL;
Expected Result
Exponential regression is an approach to modeling the relationship between a scalar variable y and one or more
variables denoted X. In exponential regression, data is modeled using exponential functions, and unknown
model parameters are estimated from the data. Such models are called exponential models.
In PAL, the implementation of exponential regression transforms the model into a linear regression and solves it:
y = β0 × exp(β1 × x1 + β2 × x2 + … + βn × xn)
Taking the natural logarithm of both sides gives ln(y) = ln(β0) + β1x1 + … + βnxn. Let y' = ln(y); then y' and x1…xn have a linear relationship and can be solved using the linear regression method.
The implementation also supports calculating the F value and R^2 to determine statistical significance.
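For example, for y = 2 × exp(0.5 × x1), taking logarithms gives ln(y) = ln(2) + 0.5 × x1; the linear fit returns slope β1 = 0.5 and intercept ln(2), and β0 = 2 is recovered by exponentiation.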
Prerequisites
EXPREGRESSION
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: Doolittle decomposition (LU)
● 2: Singular value decomposition (SVD)
Output Tables
Example
Assume that:
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_DROP('DM_PAL','PAL_EXPR_PROC');
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL','EXPREGRESSION','DM_PAL','PAL_EXPR_PROC',PAL_ER_PDATA_TBL);
DROP TABLE #PAL_CONTROL_TBL;
CREATE LOCAL TEMPORARY COLUMN TABLE #PAL_CONTROL_TBL ("NAME" VARCHAR(100), "INTARGS" INT, "DOUBLEARGS" DOUBLE, "STRINGARGS" VARCHAR(100));
INSERT INTO #PAL_CONTROL_TBL VALUES ('THREAD_NUMBER', 8, NULL, NULL);
INSERT INTO #PAL_CONTROL_TBL VALUES ('PMML_EXPORT', 2, NULL, NULL);
DROP TABLE PAL_ER_DATA_TBL;
CREATE COLUMN TABLE PAL_ER_DATA_TBL ("ID" INT, "Y" DOUBLE, "X1" DOUBLE, "X2" DOUBLE);
INSERT INTO PAL_ER_DATA_TBL VALUES (0,0.5,0.13,0.33);
INSERT INTO PAL_ER_DATA_TBL VALUES (1,0.15,0.14,0.34);
INSERT INTO PAL_ER_DATA_TBL VALUES (2,0.25,0.15,0.36);
INSERT INTO PAL_ER_DATA_TBL VALUES (3,0.35,0.16,0.35);
INSERT INTO PAL_ER_DATA_TBL VALUES (4,0.45,0.17,0.37);
INSERT INTO PAL_ER_DATA_TBL VALUES (5,0.55,0.18,0.38);
INSERT INTO PAL_ER_DATA_TBL VALUES (6,0.65,0.19,0.39);
INSERT INTO PAL_ER_DATA_TBL VALUES (7,0.75,0.19,0.31);
INSERT INTO PAL_ER_DATA_TBL VALUES (8,0.85,0.11,0.32);
INSERT INTO PAL_ER_DATA_TBL VALUES (9,0.95,0.12,0.33);
DROP TABLE PAL_ER_RESULTS_TBL;
CREATE COLUMN TABLE PAL_ER_RESULTS_TBL LIKE PAL_ER_RESULT_T;
DROP TABLE PAL_ER_FITTED_TBL;
CREATE COLUMN TABLE PAL_ER_FITTED_TBL LIKE PAL_ER_FITTED_T;
DROP TABLE PAL_ER_SIGNIFICANCE_TBL;
CREATE COLUMN TABLE PAL_ER_SIGNIFICANCE_TBL LIKE PAL_ER_SIGNIFICANCE_T;
DROP TABLE PAL_ER_PMMLMODEL_TBL;
CREATE COLUMN TABLE PAL_ER_PMMLMODEL_TBL LIKE PAL_ER_PMMLMODEL_T;
CALL DM_PAL.PAL_EXPR_PROC(PAL_ER_DATA_TBL, "#PAL_CONTROL_TBL", PAL_ER_RESULTS_TBL, PAL_ER_FITTED_TBL, PAL_ER_SIGNIFICANCE_TBL, PAL_ER_PMMLMODEL_TBL) WITH OVERVIEW;
SELECT * FROM PAL_ER_RESULTS_TBL;
SELECT * FROM PAL_ER_FITTED_TBL;
SELECT * FROM PAL_ER_SIGNIFICANCE_TBL;
SELECT * FROM PAL_ER_PMMLMODEL_TBL;
Expected Result
PAL_ER_RESULTS_TBL:
PAL_ER_SIGNIFICANCE_TBL:
PAL_ER_PMMLMODEL_TBL:
FORECASTWITHEXPR
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
PAL_FER_FITTED_TBL:
Linear regression is an approach to modeling the linear relationship between a variable y, usually referred to as the dependent variable, and one or more variables, usually referred to as independent variables, denoted as a predictor vector x. In linear regression, data are modeled using linear functions, and unknown model parameters are estimated from the data. Such models are called linear models.
Assume we have m observation pairs (xi, yi). Then we obtain an overdetermined linear system Xβ = Y, where X is an m×(n+1) matrix, β is an (n+1)×1 vector, and Y is an m×1 vector, with m > n+1. Since equality is usually not exactly satisfiable when m > n+1, the least squares solution minimizes the squared Euclidean norm of the residual, ‖Xβ − Y‖².
Elastic net regularization for multiple linear regression seeks to find the β that minimizes:
(1/2m) × ‖Xβ − Y‖² + λ × Pα(β)
where Pα(β) = (1 − α)/2 × ‖β‖₂² + α × ‖β‖₁ and λ ≥ 0. If α = 0, we have the ridge regularization; if α = 1, we have the LASSO regularization.
The implementation also supports calculating F and R^2 to determine statistical significance.
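For example, an elastic-net fit (ALG set to 4, per the Optional Parameters below) could be configured through the parameter table as follows. This is a sketch only; the λ and α values are illustrative, and #PAL_CONTROL_TBL is created as in the example later in this section:
INSERT INTO #PAL_CONTROL_TBL VALUES ('ALG', 4, NULL, NULL);
INSERT INTO #PAL_CONTROL_TBL VALUES ('ENET_LAMBDA', NULL, 0.01, NULL);
INSERT INTO #PAL_CONTROL_TBL VALUES ('ENET_ALPHA', NULL, 0.5, NULL);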
Prerequisites
LRREGRESSION
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
ALPHA_TO_ENTER | Double | 0.05 | P-value for forward selection. | Only valid when VARIABLE_SELECTION is 1.
ALPHA_TO_REMOVE | Double | 0.1 | P-value for backward selection. | Only valid when VARIABLE_SELECTION is 2.
ENET_LAMBDA | Double | No default value | Penalized weight. The value should be equal to or greater than 0. | Only valid when ALG is 4.
ENET_ALPHA | Double | 1.0 | The elastic net mixing parameter (0: ridge penalty; 1: LASSO penalty). The value range is between 0 and 1 inclusive. | Only valid when ALG is 4.
● 0: No
● 1: Yes
● 0: No
● 1: Yes
Output Tables
Example
Assume that:
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_DROP('DM_PAL','PAL_LR_PROC');
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL','LRREGRESSION','DM_PAL','PAL_LR_PROC',PAL_MLR_PDATA_TBL);
DROP TABLE #PAL_CONTROL_TBL;
CREATE LOCAL TEMPORARY COLUMN TABLE #PAL_CONTROL_TBL ("NAME" VARCHAR(100), "INTARGS" INT, "DOUBLEARGS" DOUBLE, "STRINGARGS" VARCHAR(100));
INSERT INTO #PAL_CONTROL_TBL VALUES ('THREAD_NUMBER',8,null,null);
INSERT INTO #PAL_CONTROL_TBL VALUES ('PMML_EXPORT',0,null,null);
DROP TABLE PAL_MLR_DATA_TBL;
CREATE COLUMN TABLE PAL_MLR_DATA_TBL ("ID" INT, "Y" DOUBLE, "V1" DOUBLE, "V2" DOUBLE);
INSERT INTO PAL_MLR_DATA_TBL VALUES (0,0.5,0.13,0.33);
INSERT INTO PAL_MLR_DATA_TBL VALUES (1,0.15,0.14,0.34);
INSERT INTO PAL_MLR_DATA_TBL VALUES (2,0.25,0.15,0.36);
INSERT INTO PAL_MLR_DATA_TBL VALUES (3,0.35,0.16,0.35);
INSERT INTO PAL_MLR_DATA_TBL VALUES (4,0.45,0.17,0.37);
INSERT INTO PAL_MLR_DATA_TBL VALUES (5,0.55,0.18,0.38);
INSERT INTO PAL_MLR_DATA_TBL VALUES (6,0.65,0.19,0.39);
INSERT INTO PAL_MLR_DATA_TBL VALUES (7,0.75,0.19,0.31);
INSERT INTO PAL_MLR_DATA_TBL VALUES (8,0.85,0.11,0.32);
INSERT INTO PAL_MLR_DATA_TBL VALUES (9,0.95,0.12,0.33);
DROP TABLE PAL_MLR_RESULTS_TBL;
CREATE COLUMN TABLE PAL_MLR_RESULTS_TBL ("Coefficient" VARCHAR(50), "CoefficientValue" DOUBLE);
DROP TABLE PAL_MLR_FITTED_TBL;
CREATE COLUMN TABLE PAL_MLR_FITTED_TBL ("ID" INT, "Fitted" DOUBLE);
DROP TABLE PAL_MLR_SIGNIFICANCE_TBL;
CREATE COLUMN TABLE PAL_MLR_SIGNIFICANCE_TBL ("NAME" varchar(50),"VALUE" DOUBLE);
DROP TABLE PAL_MLR_PMMLMODEL_TBL;
CREATE COLUMN TABLE PAL_MLR_PMMLMODEL_TBL ("ID" INT, "PMMLMODEL" VARCHAR(5000));
CALL DM_PAL.PAL_LR_PROC(PAL_MLR_DATA_TBL, "#PAL_CONTROL_TBL", PAL_MLR_RESULTS_TBL, PAL_MLR_FITTED_TBL, PAL_MLR_SIGNIFICANCE_TBL, PAL_MLR_PMMLMODEL_TBL) WITH OVERVIEW;
SELECT * FROM PAL_MLR_RESULTS_TBL;
SELECT * FROM PAL_MLR_FITTED_TBL;
SELECT * FROM PAL_MLR_SIGNIFICANCE_TBL;
Expected Results
PAL_MLR_FITTED_TBL:
PAL_MLR_SIGNIFICANCE_TBL:
Example 2: Fitting multiple linear regression model with elastic net penalties
Expected Results
PAL_ENET_MLR_RESULTS_TBL:
PAL_ENET_MLR_SIGNIFICANCE_TBL:
FORECASTWITHLR
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
PAL_FMLR_FITTED_TBL:
In PAL, the implementation of polynomial regression transforms the model into a linear regression and solves it:
y = β0 + β1 × x + β2 × x² + … + βn × xⁿ
Let x1 = x, x2 = x², …, xn = xⁿ; then y = β0 + β1 × x1 + β2 × x2 + … + βn × xn. So y and x1…xn have a linear relationship and can be solved using the linear regression method.
The implementation also supports calculating the F value and R² to determine statistical significance.
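For example, a cubic fit y = β0 + β1x + β2x² + β3x³ becomes an ordinary three-variable linear regression on (x1, x2, x3) = (x, x², x³); a data point with x = 2 is simply expanded to the row (2, 4, 8) before the linear solver runs.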
Prerequisites
POLYNOMIALREGRESSION
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Parameter Table
Mandatory Parameter
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: Doolittle decomposition (LU)
● 2: Singular value decomposition (SVD)
Output Tables
Example
Assume that:
Expected Result
PAL_PR_RESULTS_TBL:
PAL_PR_FITTED_TBL:
PAL_PR_SIGNIFICANCE_TBL:
FORECASTWITHPOLYNOMIALR
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Table
Example
Assume that:
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_DROP('DM_PAL','PAL_FORECAST_POLYNOMIALR_PROC');
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL','FORECASTWITHPOLYNOMIALR','DM_PAL','PAL_FORECAST_POLYNOMIALR_PROC',PAL_FPR_PDATA_TBL);
DROP TABLE #PAL_CONTROL_TBL;
Expected Result
PAL_FPR_FITTED_TBL:
This section describes the association algorithms that are provided by the Predictive Analysis Library.
3.4.1 Apriori
Apriori is a classic predictive analysis algorithm for finding association rules used in association analysis.
Association analysis uncovers the hidden patterns, correlations, or causal structures among a set of items or
objects. For example, association analysis enables you to understand what products and services customers
tend to purchase at the same time. By analyzing the purchasing trends of your customers with association
analysis, you can predict their future behavior.
Apriori is designed to operate on databases containing transactions. As is common in association rule mining,
given a set of items, the algorithm attempts to find subsets which are common to at least a minimum number
of the item sets. Apriori uses a “bottom up” approach, where frequent subsets are extended one item at a time,
a step known as candidate generation, and groups of candidates are tested against the data. The algorithm
terminates when no further successful extensions are found. Apriori uses breadth-first search and a tree structure to count candidate item sets efficiently.
The Apriori function in PAL uses vertical data format to store the transaction data in memory. The function can
take varchar/nvarchar or integer transaction ID and item ID as input. It supports the output of confidence,
support, and lift value, but does not limit the number of output rules. However, you can use SQL script to select
the number of output rules, for example:
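A minimal sketch of such a selection, assuming the result table naming used in the example below (the CONFIDENCE column name is illustrative):
SELECT TOP 10 * FROM PAL_APRIORI_RESULT_TBL ORDER BY "CONFIDENCE" DESC;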
Prerequisites
APRIORIRULE
This function reads input transaction data and generates association rules by the Apriori algorithm.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Input Table
Parameter Table
Mandatory Parameters
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
INSERT INTO PAL_CONTROL_TBL VALUES ('RHS_RESTRICT', NULL, NULL, 'i1');
INSERT INTO PAL_CONTROL_TBL VALUES ('RHS_RESTRICT', NULL, NULL, 'i2');
INSERT INTO PAL_CONTROL_TBL VALUES ('LHS_IS_COMPLEMENTARY_RHS', 1, NULL, NULL);
Output Tables
Example
Assume that:
Expected Result:
PAL_APRIORI_RESULT_TBL:
APRIORIRULE2
This function has the same logic as APRIORIRULE, but it splits the result into three tables.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
INSERT INTO PAL_CONTROL_TBL VALUES ('RHS_RESTRICT', NULL, NULL, 'i1');
INSERT INTO PAL_CONTROL_TBL VALUES ('RHS_RESTRICT', NULL, NULL, 'i2');
INSERT INTO PAL_CONTROL_TBL VALUES ('LHS_IS_COMPLEMENTARY_RHS', 1, NULL, NULL);
Output Tables
Example
Assume that:
PAL_APRIORI_RULES_TBL:
PAL_APRIORI_ANTE_ITEMS_TBL:
LITEAPRIORIRULE
This is a lightweight association rule mining implementation of the Apriori algorithm. It only calculates two large item sets.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Tables
Example
Assume that:
Expected Result
PAL_LITEAPRIORI_RESULT_TBL:
3.4.2 FP-Growth
FP-Growth is an algorithm to find frequent patterns from transactions without generating a candidate itemset.
In PAL, the FP-Growth algorithm is extended to find association rules in three steps:
Prerequisites
This function reads input transaction data and generates association rules by the FP-Growth algorithm.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
INSERT INTO PAL_CONTROL_TBL VALUES ('RHS_RESTRICT', NULL, NULL, 'i1');
INSERT INTO PAL_CONTROL_TBL VALUES ('RHS_RESTRICT', NULL, NULL, 'i2');
INSERT INTO PAL_CONTROL_TBL VALUES ('LHS_IS_COMPLEMENTARY_RHS', 1, NULL, NULL);
Output Table
Example
Assume that:
Expected Result
FP-Growth with relational output uses the same algorithm as FP-Growth. The only difference is the output
format. The relational output version of FP-Growth separates the result into three tables.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
INSERT INTO PAL_CONTROL_TBL VALUES ('RHS_RESTRICT', NULL, NULL, 'i1');
INSERT INTO PAL_CONTROL_TBL VALUES ('RHS_RESTRICT', NULL, NULL, 'i2');
INSERT INTO PAL_CONTROL_TBL VALUES ('LHS_IS_COMPLEMENTARY_RHS', 1, NULL, NULL);
Output Tables
Example
Assume that:
Expected Results
PAL_FPGROWTH_PRERULE_TBL:
PAL_FPGROWTH_POSTRULE_TBL:
K-optimal rule discovery (KORD) follows the idea of generating association rules with respect to a well-defined
measure, instead of first finding all frequent itemsets and then generating all possible rules. The algorithm only
calculates the top-k rules according to that measure. The size of the right hand side (RHS) of those rules is
restricted to one. Furthermore, the KORD implementation generates only non-redundant rules.
The algorithm's search strategy is based on the so-called OPUS search. While the search space of all possible
LHSs is traversed in a depth-first manner, the information about all qualified RHSs of the rules for a given LHS
is propagated further to the deeper search levels. KORD does not build a real tree search structure; instead it
traverses the LHSs in a specific order, which allows the pruning of the search space by simply not visiting those
itemsets subsequently. In this way it is possible to use pruning rules which restrict the possible LHSs and RHSs
at different rule generation stages.
Prerequisites
KORD
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Transaction Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: Leverage
● 1: Lift
Default: T
Output Tables
Example
Assume that:
Expected Result
PAL_KORD_RULES_TBL
PAL_KORD_ANTE_ITEMS_TBL
Financial market data or economic data usually comes with time stamps. Predicting future values, such as tomorrow's stock value, is of great interest in many business scenarios. A quantity observed over time is called a time series, and predicting future values based on an existing time series is also known as forecasting. In this release of PAL, three smoothing-based time series models are implemented. These models can be used to smooth the existing time series and to forecast. In the time series algorithms, let xt be the observed value for the t-th time period, and T be the total number of time periods.
3.5.1 ARIMA
The auto regressive integrated moving average (ARIMA) algorithm is famous in econometrics, statistics and
time series analysis. An ARIMA model can be written as ARIMA (p, d, q), where p refers to the auto regressive
order, d refers to integrated order, and q refers to the moving average order. This algorithm helps you
understand the time series data better and predict future data in the series.
The auto regressive integrated moving average with intervention (ARIMAX) algorithm is an extension of ARIMA. Compared with ARIMA, an ARIMAX model not only captures the internal relationship with former data, but also takes external factors into consideration.
ARIMATRAIN
ARIMA Model
An ARIMA model is a generalization of an auto regressive moving average (ARMA) model. The integrated part is mainly applied to induce stationarity when the data show evidence of non-stationarity.
Φ(B)(1 − B)^d (Yt − c) = Θ(B)εt, t ∈ Z (for d > 0)
Φ(B)(Yt − c) = Θ(B)εt, t ∈ Z (for d = 0)
εt ~ i.i.d. N(0, σ²)
Where B is the lag operator (backward shift operator), c is the mean of the series data,
Φ(B) = 1 − φ1B − φ2B² − … − φpB^p, p ≥ 0, and
Θ(B) = 1 + θ1B + θ2B² + … + θqB^q, q ≥ 0
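For example, an ARIMA(1, 1, 1) model expands to (1 − φ1B)(1 − B)(Yt − c) = (1 + θ1B)εt; that is, the once-differenced series Yt − Yt−1 follows an ARMA(1, 1) process.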
ARIMAX Model
An ARIMAX model is a generalization of an ARMAX model. The integrated part is mainly applied to induce stationarity when the data show evidence of non-stationarity.
Φ(B)(1 − B)^d (Yt − c) = H^T (1 − B)^d Xt + Θ(B)εt, t ∈ Z (for d > 0)
Φ(B)Yt = H^T Xt + Θ(B)εt, t ∈ Z (for d = 0)
εt ~ i.i.d. N(0, σ²)
Where B is the lag operator (backward shift operator), Xt is a covariate vector at time t, and H is its coefficient vector.
In PAL, the ARIMATRAIN algorithm first converts the original non-stationary time series data to a new
stationary time series data by the integrated step, and then ARIMA fits the stationary time series data to an
ARMA model, and ARIMAX fits the stationary time series data to an ARMAX model.
PAL provides two parameter estimation methods: conditional sum of squares (CSS or conditional maximum
likelihood estimation) and maximum likelihood estimation (MLE).
SARIMA Model
Where
CSS Estimation
Yt = φ0 + φ1Yt−1 + φ2Yt−2 + … + φpYt−p + εt + θ1εt−1 + θ2εt−2 + … + θqεt−q
Let r = max(p, q) and let yi, i = 1, 2, …, N denote the observed series data of length N. Conditioning on the first r observations and setting the initial errors to zero, the one-step distribution is
Yr+1 = φ0 + φ1Yr + φ2Yr−1 + … + φpYr−p+1 + εr+1 + θ1εr + θ2εr−1 + … + θqεr−q+1
Yr+1 ~ N((φ0 + φ1Yr + φ2Yr−1 + … + φpYr−p+1), σ²)
and the conditional log likelihood is
L(φ, θ, σ²) = log f(YN, YN−1, …, Yr+1 | yr, …, y1, εr = εr−1 = … = εr−q+1 = 0, φ, θ, σ²)
MLE Estimation
Kalman filtering is applied to calculate MLE. An ARMA (p, q) model can be expressed as a Kalman state space
model.
Where
Let dt denote the first element of Pt, therefore the log likelihood is
L(φ, θ, σ²) = log f(YN, YN−1, …, Y1 | yN, …, y1, φ, θ, σ²)
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Input Table
(For ARIMAX only) Other columns | Integer or double | External (intervention) data
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Examples
Assume that:
Example 1: ARIMA
Expected Result
Example 2: ARIMAX
Expected Result
Expected Result
An ARMA(p, q) model can be transformed into an MA(∞) model. More generally, an ARIMA(p, d, q) model can also be changed into an MA(∞) model.
Yt = Ψ(B)εt = εt + ψ1εt−1 + ψ2εt−2 + …
At t=N+l
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameter
The following parameter is optional. If it is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
The forecast procedure of this algorithm is similar to that of ARIMAFORECAST. However, this algorithm requires one more table to store the future external data. The mean of the ARIMAX forecast is the sum of the external contribution and the ARIMA forecast.
The variance of the ARIMAX forecast error is equal to that of the ARIMA forecast. Refer to ARIMAFORECAST for more information.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameter
The following parameter is optional. If it is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
Related Information
This function automatically identifies the orders of an ARIMA model, that is, (p, d, q)(P, D, Q)m, where m is the seasonal period, according to some information criterion such as AICC, AIC, or BIC. If order selection succeeds, the function gives the optimal model as in the ARIMATRAIN function.
Subsequently, you can use functions such as ARIMAFORECAST and ARIMAXFORECAST, which are described in the ARIMA topic, to make forecasts.
Prerequisite
AUTOARIMA
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● Negative: Automatically identify seasonality by means of an auto-correlation scheme.
● 0 or 1: Non-seasonal.
● Others: Seasonal period.
SEASONALITY_CRITERION | Double | 0.5 | The criterion of the auto-correlation coefficient for accepting seasonality, in the range of (0, 1). The larger it is, the less probable it is that a time series is regarded as seasonal. | Valid only when SEASONAL_PERIOD is negative. Refer to Seasonality Test for more information.
● Negative: Automatically identifies the first-differencing order with the KPSS test.
● Others: Uses the specified value as the first-differencing order.
● Negative: Automatically identifies the seasonal-differencing order with the Canova-Hansen test.
● Others: Uses the specified value as the seasonal-differencing order.
CH_SIGNIFICANCE_LEVEL | Double | 0.05 | The significance level for the Canova-Hansen test. Supported values are 0.01, 0.025, 0.05, 0.1, and 0.2. The smaller it is, the more probable it is that a time series is considered seasonal-stationary, that is, the less probable it is that it needs seasonal-differencing. | Valid only when SEASONAL_D is negative.
● 0: AICC
● 1: AIC
● 2: BIC
● 0: Exhaustive. Traverses all models specified by MAX_P, MAX_Q, MAX_SEASONAL_P, MAX_SEASONAL_Q, and MAX_ORDER.
● 1: Stepwise. Changes one or two orders of (p, q, P, Q) by 1 from the current optimal model each time, until no better model is found. This is more time-efficient but possibly less accuracy-effective.
● 0: No guess. Besides the user-defined model, uses states (2, 2)(1, 1)m, (1, 0)(1, 0)m, and (0, 1)(0, 1)m as starting states.
● 1: Guesses starting states taking advantage of ACF/PACF.
(MAX_SEASONAL_Q+1)
Note
In practice, the ARMA model (0,0) (0,0)m, that is, white noise, is not actually calculated.
Examples
Assume that:
Expected Result
Expected Result
Expected Result
For non-adaptive Brown exponential smoothing, let St and Tt be the singly smoothed value and doubly smoothed value for the (t+1)-th time period, respectively. Let at and bt be the intercept and the slope. The procedure is as follows:
1. Initialization:
S0 = x0
T0 = x0
a0 = 2S0 − T0
b0 = (α / (1 − α)) × (S0 − T0)
2. Calculation:
St = αxt + (1 − α)St−1
Tt = αSt + (1 − α)Tt−1
at = 2St − Tt
bt = (α / (1 − α)) × (St − Tt)
Ft+1 = at + bt
For adaptive Brown exponential smoothing, the parameter α must be updated for every forecast. The following rules must be satisfied:
1. Initialization:
S0 = x0
T0 = x0
a0 = 2S0 − T0
b0 = (α1 / (1 − α1)) × (S0 − T0)
F1 = a0 + b0
A0 = M0 = 0
α1 = α2 = α3 = δ = 0.2
2. Calculation:
Et = xt − Ft
At = δEt + (1 − δ)At−1
Mt = δ|Et| + (1 − δ)Mt−1
St = αtxt + (1 − αt)St−1
Tt = αtSt + (1 − αt)Tt−1
at = 2St − Tt
bt = (αt / (1 − αt)) × (St − Tt)
Ft+1 = at + bt
Where α, δ ∈ (0,1) are two user-specified parameters. The model can be viewed as two coupled single exponential smoothing models, and thus the forecast can be made by the following equation:
FT+m = aT + mbT
Note
F0 is not defined because there is no estimation for the time slot 0. According to the definition, you can get
F1 = a0 + b0 and so on.
Prerequisites
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
DELTA | Double | 0.2 | Weighting value for At and Mt. | Only valid when ADAPTIVE_METHOD is 1.
Output Table
Example
Assume that:
Expected Result
PAL_BROWNSMOOTH_STATISTIC_TBL:
Related Information
Croston's method is a forecast strategy for products with intermittent demand. It consists of two steps. First, separate exponential smoothing estimates are made of the average size of a demand. Second, the average interval between demands is calculated. This is then used in a form of the constant model to predict the future demand.
Initialization
The system checks the first time bucket of the historical values. If it finds a non-zero value, that value is set as Z's initial value and X is set to 1. Otherwise, Z is set to 1 and X to 2.
If 1st value ≠ 0
Z(0) = V(1), X(0) = 1
If 1st value = 0
Z(0) = 1, X(0) = 2
The forecast is made using a modified constant model. The forecast parameters Z and X are determined as follows:
If V(t) = 0
q = q + 1
Else
Z(t) = Z(t−1) + α[V(t) − Z(t−1)]
X(t) = X(t−1) + α[q − X(t−1)]
Endif
In the last iteration, the parameters Z(f) and X(f) will be delivered for the forecast.
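The constant forecast is then the ratio Z(f)/X(f), that is, the average demand size divided by the average interval between demands. For example (illustrative values), final estimates Z(f) = 1.4 and X(f) = 2 yield a forecast of 1.4 / 2 = 0.7 units per period.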
Prerequisites
CROSTON
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Tables
Example
Assume that:
Expected Results
PAL_CROSTON_RESULT_TBL:
PAL_CROSTON_STATISTICS_TBL:
Related Information
Measures are used to check the accuracy of the forecast made by PAL algorithms. They are calculated based
on the difference between the historical values and the forecasted values of the fitted model. The measures
supported in PAL are MPE, MSE, RMSE, ET, MAD, MASE, WMAPE, SMAPE, and MAPE.
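For reference, two of these measures follow the standard definitions: MSE = (1/n) × Σ (xi − fi)² and MAPE = (1/n) × Σ |(xi − fi) / xi|, where xi is the i-th historical value, fi the corresponding forecast (fitted) value, and n the number of compared points; the other measures are defined analogously on the same differences.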
Prerequisite
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameter
Optional Parameters
None.
Output Table
Example
Assume that:
CALL DM_PAL.PAL_FORECASTACCURACYMEASURES_PROC(PAL_FORECASTACCURACYMEASURES_DATA_TBL, PAL_CONTROL_TBL, PAL_FORECASTACCURACYMEASURES_RESULT_TBL) WITH OVERVIEW;
SELECT * FROM PAL_FORECASTACCURACYMEASURES_RESULT_TBL;
Expected Result
Forecast smoothing is used to calculate optimal parameters of a set of smoothing functions in PAL, including
Single Exponential Smoothing, Double Exponential Smoothing, and Triple Exponential Smoothing.
This function also outputs the forecasting results based on these optimal parameters. The optimization is computed by exploring the parameter space, which includes all possible parameter combinations. The quality assessment is done by comparing historic and forecast values. In PAL, MSE (mean squared error) or MAPE (mean absolute percentage error) is used to evaluate the quality of the parameters.
The parameter optimization is based on global and local search algorithms. The global search algorithm used
in this function is simulated annealing, whereas the local search algorithm is Nelder Mead. Those algorithms
allow for efficient search processes.
To evaluate the flexibility of the function, a train-and-test scheme is carried out. In other words, the time series may be partitioned: the former part is used to train the parameters, whereas the latter part is used for testing.
Di = (1 + i × α²) × E
Where α is the weight of smoothing and E is the calculated MSE or MAPE value.
Prerequisites
This function is used to calculate optimal parameters and output forecast results.
Procedure Generation
Procedure Calling
The procedure name is the same as the name specified in the procedure generation.
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameter
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
BETA | Double | 0.1 | Weight for the trend component. Value range: 0 < β < 1 | Only valid when FORECAST_MODEL_NAME is DESM or TESM.
GAMMA | Double | 0.1 | Weight for the seasonal component. Value range: 0 < γ < 1 | Only valid when FORECAST_MODEL_NAME is TESM.
● 0: The value of CYCLE is set by the user.
● 1: The optimal value of CYCLE will be computed automatically.
For TESM:
Note
Cycle determines the seasonality within the time series data by considering the seasonal factor of data point t−CYCLE+1 in the forecast calculation of data point t+1. Additionally, the TESM algorithm takes an entire CYCLE as the base to calculate the first forecast value, which is for data point CYCLE+1. The value range for CYCLE is 2 ≤ CYCLE ≤ (total number of data points)/2.
For example, consider one year of weekly data (52 data points) as the input time series. The value for CYCLE should then be within the range 2 ≤ CYCLE ≤ 26. If CYCLE is 4, the first forecast value is for data point 5 (e.g. week 201205) and considers the seasonal factor of data point 1 (e.g. week 201201); the second forecast value, for data point 6 (e.g. week 201206), takes into account the seasonal factor of data point 2 (e.g. week 201202), and so on. If CYCLE is 2, the first forecast value is for data point 3 (e.g. week 201203) and considers the seasonal factor of data point 1 (e.g. week 201201); the second forecast value, for data point 4 (e.g. week 201204), takes into account the seasonal factor of data point 2 (e.g. week 201202), and so on.
Output Tables
Example
Assume that:
Expected Result
PAL_FORECASTSINGLESMOOTHING_RESULT_TBL:
Expected Results
Expected Results
PAL_OUTPARAMETER_TBL:
Expected Result
PAL_FORECASTMODELSELECTION_RESULT_TBL:
For Train-and-Test:
Expected Result
PAL_FORECASTMODELSELECTION_RESULT_TBL:
Related Information
Linear regression with damped trend and seasonal adjust is an approach for forecasting when a time series presents a trend. In PAL, it provides a damped smoothing parameter for smoothing the forecasted values. This dampening parameter avoids over-forecasting caused by an indefinitely increasing or decreasing trend. In addition, if the time series presents seasonality, you can deal with it by providing the length of the periods in order to adjust the forecasting results; the function also helps you detect the seasonality and determine the periods.
Note
Occasionally, the average, the linear forecast, or the seasonal index may be calculated as 0, which would give rise to division by zero in the subsequent calculation. To address this, a tiny value (1.0e-6, for example) is adopted as the divisor instead of 0. In the Result output table, an indicator named "HandleZero" shows whether this substitution took place (1) or not (0).
Prerequisites
● No missing or null data in the inputs. The algorithm will issue errors when encountering null values.
● The data is numeric, not categorical.
LRWITHSEASONALADJUST
This is the function for linear regression with damped trend and seasonal adjust.
Procedure Generation
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: Affects all.
● 1: Affects the future only.
● 0: Non-seasonality.
● 1: Seasonality exists and the user inputs the value of periods.
● 2: Automatically detects seasonality.
PERIODS | Integer | No default value | Length of the periods. | Only valid when SEASONALITY is 1.
SEASONAL_HANDLE_METHOD | Integer | 0 | Method used for calculating the index value in the periods (0: average method; 1: fitting linear regression). | Only valid when SEASONALITY is 2.
Output Tables
Example
Assume that:
Expected Result
PAL_FORECASTSLR_FORECAST_TBL:
Related Information
The Single Exponential Smoothing model is suitable for modeling a time series without trend and seasonality. In the model, the smoothed value is the weighted sum of the previous smoothed value and the previous observed value.
PAL provides two single exponential smoothing algorithms: single exponential smoothing and adaptive-response-rate single exponential smoothing. The adaptive-response-rate single exponential smoothing algorithm may have an advantage over single exponential smoothing in that it allows the value of alpha to be modified.
For single exponential smoothing, let St be the smoothed value for the t-th time period. Mathematically:
S1 = x0
St = αxt−1 + (1 − α)St−1
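For example, with α = 0.3 and observations x0 = 10, x1 = 12: S1 = x0 = 10 and S2 = 0.3 × 12 + 0.7 × 10 = 10.6, so the one-step forecast after observing x1 is 10.6.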
For adaptive-response-rate single exponential smoothing, let St be the smoothed value for the t-th time period.
Initialize for adaptive-response-rate single exponential smoothing as follows:
S1 = x0
α1 = α2 = α3 = δ = 0.2
A0 = M0 = 0
Et = Xt − St
At = δEt + (1 − δ)At-1
Mt = δ|Et| + (1 − δ)Mt-1
αt+1 = |At / Mt|
It is worth noting that when t ≥ T+2, the smoothed value St, that is, the forecast value, is always ST+1 (xt−1 is
not available, so St−1 is used instead).
PAL calculates the prediction interval to give an idea of the likely variation. Assume that the forecast data is
normally distributed with mean St and variance σ2. Let Ut be the upper bound of the prediction interval for St
and Lt be the lower bound. They are calculated as follows:
Ut = St + zσ
Lt = St - zσ
Here z is the one-tailed value of a standard normal distribution. It is derived from the input parameters
PREDICTION_CONFIDENCE_1 and PREDICTION_CONFIDENCE_2.
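The following sketch shows the procedure generation and parameter setup for SINGLESMOOTH, following the AFLLANG wrapper pattern used in the SQL examples later in this reference. The column layouts of the data and result types and the parameter name ALPHA are illustrative assumptions; see the signature tables for the exact definitions. The optional statistics output table is omitted.
DROP TYPE PAL_SINGLESMOOTH_DATA_T;
CREATE TYPE PAL_SINGLESMOOTH_DATA_T AS TABLE(ID INTEGER, RAWDATA DOUBLE);        -- assumed layout
DROP TYPE PAL_SINGLESMOOTH_RESULT_T;
CREATE TYPE PAL_SINGLESMOOTH_RESULT_T AS TABLE(TIMESTAMP INTEGER, VALUE DOUBLE);  -- assumed layout
DROP TYPE PAL_CONTROL_T;
CREATE TYPE PAL_CONTROL_T AS TABLE(NAME VARCHAR(50), INTARGS INTEGER, DOUBLEARGS DOUBLE, STRINGARGS VARCHAR(100));
DROP TABLE PDATA_TBL;
CREATE COLUMN TABLE PDATA_TBL("POSITION" INT, "SCHEMA_NAME" NVARCHAR(256), "TYPE_NAME" NVARCHAR(256), "PARAMETER_TYPE" VARCHAR(7));
INSERT INTO PDATA_TBL VALUES (1, 'DM_PAL', 'PAL_SINGLESMOOTH_DATA_T', 'IN');
INSERT INTO PDATA_TBL VALUES (2, 'DM_PAL', 'PAL_CONTROL_T', 'IN');
INSERT INTO PDATA_TBL VALUES (3, 'DM_PAL', 'PAL_SINGLESMOOTH_RESULT_T', 'OUT');
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_DROP('DM_PAL', 'PAL_SINGLESMOOTH_PROC');
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL', 'SINGLESMOOTH', 'DM_PAL', 'PAL_SINGLESMOOTH_PROC', PDATA_TBL);
-- Smoothing constant alpha (assumed parameter name):
CREATE LOCAL TEMPORARY COLUMN TABLE #PAL_CONTROL_TBL (NAME VARCHAR(50), INTARGS INTEGER, DOUBLEARGS DOUBLE, STRINGARGS VARCHAR(100));
INSERT INTO #PAL_CONTROL_TBL VALUES ('ALPHA', NULL, 0.1, NULL);
-- Input series and result table, then the call itself:
DROP TABLE PAL_SINGLESMOOTH_DATA_TBL;
CREATE COLUMN TABLE PAL_SINGLESMOOTH_DATA_TBL LIKE PAL_SINGLESMOOTH_DATA_T;
INSERT INTO PAL_SINGLESMOOTH_DATA_TBL VALUES (1, 200.0);
INSERT INTO PAL_SINGLESMOOTH_DATA_TBL VALUES (2, 210.0);
INSERT INTO PAL_SINGLESMOOTH_DATA_TBL VALUES (3, 205.0);
DROP TABLE PAL_SINGLESMOOTH_RESULT_TBL;
CREATE COLUMN TABLE PAL_SINGLESMOOTH_RESULT_TBL LIKE PAL_SINGLESMOOTH_RESULT_T;
CALL DM_PAL.PAL_SINGLESMOOTH_PROC(PAL_SINGLESMOOTH_DATA_TBL, #PAL_CONTROL_TBL, PAL_SINGLESMOOTH_RESULT_TBL) WITH OVERVIEW;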
Prerequisites
SINGLESMOOTH
Procedure Generation
Procedure Calling
Note
The statistics output table is optional.
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
DELTA (Double, default 0.2): Value of the weight for At and Mt. Only valid when ADAPTIVE_METHOD is 1.
Output Table
Example
Assume that:
Expected Result
PAL_SINGLESMOOTH_RESULT_TBL:
Related Information
The double exponential smoothing model is suitable for modeling a time series with a trend but without
seasonality. In the model there are two kinds of smoothed quantities: the smoothed signal and the smoothed
trend.
PAL provides two methods of double exponential smoothing: Holt's linear exponential smoothing and additive
damped trend Holt's linear exponential smoothing. Holt's linear exponential smoothing projects a constant
trend indefinitely into the future. Empirical evidence shows that Holt's linear method tends to over-forecast;
a parameter that damps the trend can improve the situation.
For Holt's linear exponential smoothing, let St and bt be the smoothed value and smoothed trend for the
(t+1)-th time period, respectively. The following rules are satisfied:
S0 = x0
b0 = x1 – x0
St = αxt + (1−α)(St-1 + bt-1)
bt = β(St − St-1) + (1−β)bt-1
Where α, β∈(0,1) are two user-specified parameters. The model can be understood as two coupled single
exponential smoothing models, and the forecast can be made by the following equation:
FT+m = ST + mbT
For additive damped trend Holt's linear exponential smoothing, let St and bt be the smoothed value and
smoothed trend for the (t+1)-th time period, respectively. The following rules are satisfied:
S0 = x0
b0 = x1 – x0
St = αxt + (1−α)(St-1 + Φbt-1)
bt = β(St − St-1) + (1−β)Φbt-1
Where α, β, Φ∈(0,1) are three user-specified parameters. The forecast can be made by the following equation:
FT+m = ST + (Φ + Φ^2 + ... + Φ^m)bT
PAL calculates the prediction interval to give an idea of the likely variation. Assume that the forecast data is
normally distributed with mean St and variance σ2. Let Ut be the upper bound of the prediction interval for St
and Lt be the lower bound. They are calculated as follows:
Ut = St + zσ
Lt = St - zσ
Here z is the one-tailed value of a standard normal distribution. It is derived from the input parameters
PREDICTION_CONFIDENCE_1 and PREDICTION_CONFIDENCE_2.
Note
The algorithm is backward compatible. You can still work in the SAP HANA SPS 11 or older versions where
the prediction interval feature is not available. In that case, only point forecasts are calculated.
Prerequisites
DOUBLESMOOTH
Procedure Generation
Procedure Calling
Note
The statistics output table is optional.
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
PAL_DOUBLESMOOTH_STATISTIC_TBL:
Related Information
Triple exponential smoothing is used to handle the time series data containing a seasonal component. This
method is based on three smoothing equations: stationary component, trend, and seasonal. Both seasonal and
trend can be additive or multiplicative. PAL supports multiplicative triple exponential smoothing and additive
triple exponential smoothing. For additive triple exponential smoothing, an additive damped method is also
supported.
The forecast for additive triple exponential smoothing is:
Ft+m = St + m × Bt + Ct−L+1+((m−1) mod L)
The additive damped method of additive triple exponential smoothing is given by the following formula, where:
X Observation
S Smoothed observation
C Seasonal index
Note
α, β, and γ are the constants that must be estimated in such a way that the MSE of the error is minimized.
PAL uses two methods for initialization. The first method initializes the seasonal indices as follows.
To initialize the seasonal indices Ci for i = 0,1,...,L−1 for multiplicative triple exponential smoothing:
ci = xi / SL-1, 0 ≤ i ≤ L-1
To initialize the seasonal indices Ci for i = 0,1,...,L−1 for additive triple exponential smoothing:
ci = xi − SL-1, 0 ≤ i ≤ L-1
Note
SL−1 is the average value of x over the first cycle (L data points) of your data.
1. Get the trend component by using moving averages over the first two CYCLEs of observations.
2. Compute the seasonal component by removing the trend component from the observations: for the
additive model, subtract the trend component from the observations; for the multiplicative model, divide
the observations by the trend component.
3. Initialize the start values of Ct with the seasonal component calculated in Step 2. Initialize the start values
of St and Bt with a simple linear regression on the trend component: St is initialized with the intercept and
Bt with the slope.
PAL also calculates the prediction interval, in the same way as for single and double exponential smoothing:
Ut = St + zσ
Lt = St - zσ
Here z is the one-tailed value of a standard normal distribution. It is derived from the input parameters
PREDICTION_CONFIDENCE_1 and PREDICTION_CONFIDENCE_2.
Note
The algorithm is backward compatible. You can still work in the SAP HANA SPS 11 or older versions where
the prediction interval feature is not available. In that case, only point forecasts are calculated.
Prerequisites
TRIPLESMOOTH
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Note
CYCLE determines the seasonality within the time series data: the seasonal factor of data point t−CYCLE+1 is
considered in the forecast calculation of data point t+1. Additionally, the TESM algorithm takes an entire
CYCLE as the base to calculate the first forecast value, which is for data point CYCLE+1. The value of CYCLE
should be within the range 2 ≤ CYCLE ≤ (total number of data points)/2.
For example, suppose one year of weekly data (52 data points) is the input time series. The value of CYCLE
should then be within 2 ≤ CYCLE ≤ 26. If CYCLE is 4, we get the first forecast value for data point 5 (e.g. week
201205), which considers the seasonal factor of data point 1 (e.g. week 201201). The second forecast value,
for data point 6 (e.g. week 201206), considers the seasonal factor of data point 2 (e.g. week 201202), etc. If
CYCLE is 2, then we get the first forecast value for data point 3 (e.g. week 201203), which considers the
seasonal factor of data point 1 (e.g. week 201201). The second forecast value, for data point 4 (e.g. week
201204), considers the seasonal factor of data point 2 (e.g. week 201202), etc.
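The weekly-data example above with CYCLE = 4 corresponds to this control-table entry (a minimal sketch; the remaining smoothing parameters keep their defaults):
INSERT INTO #PAL_CONTROL_TBL VALUES ('CYCLE', 4, NULL, NULL); -- length of one seasonal cycle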
Example
Assume that:
Expected Result
PAL_TRIPLESMOOTH_STATISTICS_TBL:
Related Information
This algorithm is used to test whether a time series has a seasonality or not. If it does, the corresponding
additive or multiplicative seasonality model is identified, and the de-seasonalized series (both trend and
seasonality are eliminated) is given.
1. Additive: xt = mt + st + yt
2. Multiplicative: xt = mt × st × yt
Where mt, st, and yt are trend, seasonality, and random components, respectively. They satisfy the properties:
Where d is the length of the seasonality cycle, that is, the period. It is believed that the additive model is useful
when the seasonal variation is relatively constant over time, whereas the multiplicative model is useful when
the seasonal variation increases over time.
Autocorrelation is employed to identify the seasonality. The autocorrelation coefficient at lag h is given by:
rh = ch / c0
where ch is the sample autocovariance at lag h and c0 is the sample variance. The resulting rh has a value in
the range of −1 to 1, and a larger value indicates more relevance.
For an n-element time series, the probable seasonality cycle ranges from 2 to n/2. The main procedure to
determine the seasonality, therefore, is to calculate the autocorrelation coefficients of all possible lags
(d=2,3,...,n/2) for both the additive and the multiplicative model. A user-specified threshold for the
coefficient, for example 0.2, indicates that a tested seasonality is considered only if its autocorrelation is
larger than the threshold. If no lag satisfies this requirement, the time series is regarded as having no
seasonality. Otherwise, the lag having the largest autocorrelation is the optimal seasonality.
1. Estimate the trend mt (t = q+1, ..., n−q). A moving average is applied to estimate the trend.
2. De-trend the time series. For an additive model, this is done by subtracting the trend estimates from the
series; for a multiplicative decomposition, likewise, it is done by dividing the series by the trend values.
Two sets (additive and multiplicative) of autocorrelation coefficients are then calculated from the de-
trended series:
xt − mt (additive decomposition), or
xt / mt (multiplicative decomposition).
Note that during the calculation of the moving average, the element at t is determined by x(t−q), ...,
x(t+q), where d=2q or d=2q+1 and d is the seasonality cycle. As a result, the trend and random series are
valid only within the time range between q and n−q. Should there be no seasonality, the random series is
exactly the input time series.
Prerequisites
● No null data in the inputs. The time periods should be unique and equally sampled.
● The length of the time series must be at least 1.
● The data type of time periods is integer. The data type of the time series is integer or double.
SEASONALITYTEST
This function identifies the seasonality and calculates de-seasonalized series (random) of a time series.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameter is optional. If it is not specified, PAL will use its default value.
Output Tables
Example
Assume that:
Expected Result
PAL_TSSEASONALITY_SEASONALITY_TBL:
PAL_TSSEASONALITY_RANDOM_TBL:
This algorithm is used to identify whether a time series has an upward or downward trend, and to calculate
the de-trended time series.
Two methods are provided for identifying the trend: difference-sign test and rank test.
Difference-Sign Test
The difference-sign test counts the number S of times that x(t)−x(t−1) is positive, t=2,3,...,n. For an IID
(Independent and Identically Distributed) series, the theoretical expectation of S is
µS=(n−1)/2
For large n, S approximately follows a Gaussian distribution with variance σS2=(n+1)/12. Hence, a large
positive or negative value of S−µS indicates the presence of an increasing (or decreasing) trend. The data is
considered to have a trend if |S−µS|>σS; otherwise no trend exists. Nevertheless, the difference-sign test
must be used with great caution: in the case of a large proportion of tied data, for example, it may report a
downward trend while in fact there is no trend.
Rank Test
The second solution is the rank test, usually known as the Mann-Kendall (MK) test (Mann 1945, Kendall
1975, Gilbert 1987). It tests whether to reject the null hypothesis (H0) that no trend exists and accept the
alternative hypothesis (Hα) that a trend exists, where
α: the tolerance probability of falsely concluding that a trend exists when there is none, 0<α<0.5.
The MK test may report different trends for an identical time series, given different values of α. For a very
small α, for example 0.05, the MK test is expected to achieve a quite satisfactory estimation of the trend. At
the same time, the MK test requires that the length of the time series be at least 4. If the length of the data
set is only 3, a linear regression strategy is applied instead.
The resulting trend indicator has three possible numeric values: 1 indicating upward trend, −1 indicating
downward trend, and 0 for no trend.
Should there be a trend, a de-trended time series is produced with the first differencing approach:
w(t)=x(t)−x(t−1)
where x(t) is the input time series and w(t) is the de-trended time series. The length of the de-trended
series is exactly one less than the input's (the first period is lost). On the other hand, the output series is
just the input series if no trend is identified. Note that the resulting time series is sorted by time periods.
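The first-differencing step can be reproduced directly in SQL with a window function; a minimal sketch assuming a hypothetical input table TS_TBL(PERIOD INTEGER, VALUE DOUBLE):
SELECT PERIOD,
       VALUE - LAG(VALUE) OVER (ORDER BY PERIOD) AS DETRENDED -- w(t) = x(t) - x(t-1); NULL for the first period
  FROM TS_TBL
 ORDER BY PERIOD;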
TRENDTEST
This function identifies the trend of a time series and calculates the de-trended series.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Tables
● 1: Upward trend
● -1: Downward trend
● 0: No trend
De-trended Series
  1st column: Integer. Time periods, sorted in monotonically increasing order.
Example
Assume that:
Expected Result
PAL_TSTREND_TREND_TBL:
PAL_TSTREND_DETRENDED_TBL:
This algorithm is used to identify whether a time series is a white noise series. If the raw time series is white
noise, the algorithm returns the value 1; if not, the value 0 is returned.
PAL uses the Ljung-Box test to test for autocorrelation at different lags. The Ljung-Box statistic is defined as
follows:
Q = n(n+2) Σ (k=1..h) ρk^2 / (n−k)
where n is the sample size, ρk is the sample autocorrelation at lag k, and h is the number of lags being
tested. Under the null hypothesis of white noise, Q approximately follows a chi-squared distribution with h
degrees of freedom.
Prerequisites
WHITENOISETEST
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
LAG (Integer, default: half of the sample size): Specifies the lag autocorrelation coefficient that the statistic
will be based on. It corresponds to the degrees of freedom of the chi-squared distribution.
Output Table
Example
Expected Result
PAL_WHITENOISETEST_RESULT_TBL:
The records in a business database are usually not directly ready for predictive analysis, for the following
reasons:
● Some data comes in large amounts, which may exceed the capacity of an algorithm.
● Some data contains noisy observations, which may hurt the accuracy of an algorithm.
● Some attributes are badly scaled, which can make an algorithm unstable.
To address the above challenges, PAL provides several convenient algorithms for data preprocessing.
3.6.1 Binning
Binning data is a common requirement prior to running certain predictive algorithms. It generally reduces the
complexity of the model, for example, the model in a decision tree.
Binning methods replace a value by a "bin number" defined by all elements of its neighborhood, that is, the bin
it belongs to. The ordered values are distributed into a number of bins. Because binning methods consult the
neighborhood of values, they perform local smoothing.
Note
Binning can only be used on a table with only one attribute.
Binning Methods
Smoothing Methods
● Smoothing by bin means: each value within a bin is replaced by the average of all the values belonging to
the same bin.
● Smoothing by bin medians: each value in a bin is replaced by the median of all the values belonging to the
same bin.
● Smoothing by bin boundaries: the minimum and maximum values in a given bin are identified as the bin
boundaries. Each value in the bin is then replaced by its closest boundary value.
Note
When a value is equally close to both bin boundaries, it is replaced by the front (lower) boundary value.
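As an illustration of smoothing by bin means, the following SQL sketch assigns values to four equal-width bins and replaces each value by its bin average. The table name DATA_TBL(ID INTEGER, VAL DOUBLE) and the bin count are assumptions for illustration; the PAL BINNING function should be used for the real computation.
SELECT ID, VAL,
       AVG(VAL) OVER (PARTITION BY BIN_ID) AS SMOOTHED_VAL -- smoothing by bin means
  FROM (SELECT ID, VAL,
               -- equal-width bin index 0..3 (assumes VAL is not constant)
               LEAST(FLOOR((VAL - MIN(VAL) OVER ()) * 4
                           / (MAX(VAL) OVER () - MIN(VAL) OVER ())), 3) AS BIN_ID
          FROM DATA_TBL) AS B;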
Prerequisites
BINNING
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Examples: 1 S.D.; 2 S.D.; 3 S.D.
Output Table
Result
  1st column: Integer, bigint, varchar, or nvarchar. ID. This must be the first column.
Binning Model (optional)
  1st column: Integer. Binning model ID. This must be the first column.
  2nd column: CLOB, varchar, or nvarchar. Binning model saved as a JSON string. The table must be a column table. The minimum length of each unit (row) is 5000.
Example
Assume that:
Expected Result
PAL_BINNING_RESULT_TBL:
PAL_BINNING_MODEL_TBL:
Binning assignment is used to assign data to the bins previously generated by the Binning algorithm. It
therefore accepts a binning model as input.
It is assumed that the binning model generated in the previous binning stage includes the bin starts (S) and
ends (E) of all bins, so new data (d) can be assigned directly to the bin i satisfying Si ≤ d < Ei (for the last
bin, the relation is less than or equal to).
New data may lie too far away from all bins. The IQR technique is adopted to judge whether a data point is an
outlier: if new data is lower than both Q1 – 1.5*IQR and the first bin start, or higher than both Q3 + 1.5*IQR
and the last bin end, it is regarded as an outlier and assigned to a virtual bin of index -1 without smoothing.
Note that for the case of the Equal Number Per Bin strategy, the new data assignment may violate its original
binning properties.
Prerequisites
BINNINGASSIGNMENT
This function directly assigns data to bins based on the previous binning model, without running the full
binning procedure again.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Data 1st column Integer, bigint, varchar, ID This must be the first
or nvarchar column.
Binning Model 1st column Integer Binning model ID This must be the first
column.
2nd column CLOB, varchar, or Binning model saved The table must be a
nvarchar as JSON string column table. The min
imum length of each
unit (row) is 5000.
Parameter Table
Mandatory Parameters
None.
Optional Parameters
None.
Output Table
Assume that:
Expected Result
PAL_BINNING_ASSIGNED_TBL:
Related Information
This function converts a categorical column into a binary vector with numerical columns.
Assume that you have a Gender attribute which has two distinct values: Female and Male. You can convert it
into:
Female 1 0
Male 0 1
Female 1 0
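By way of comparison, the same conversion can be written by hand with CASE expressions; a minimal sketch assuming a hypothetical table T(ID INTEGER, GENDER NVARCHAR(10)):
SELECT ID,
       CASE WHEN GENDER = 'Female' THEN 1 ELSE 0 END AS GENDER_1, -- one indicator column per distinct value
       CASE WHEN GENDER = 'Male' THEN 1 ELSE 0 END AS GENDER_2
  FROM T;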
Prerequisites
● The input data must contain an ID column, and the ID column must be the first column.
● The other columns of the input table must be of the integer, varchar, or nvarchar type.
● The input data does not contain any null value.
CONV2BINARYVECTOR
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameter
Optional Parameter
The following parameter is optional. If it is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
Given a series of numeric data, the inter-quartile range (IQR) is the difference between the third quartile (Q3)
and the first quartile (Q1) of the data.
IQR = Q3 – Q1
The p-th percentile of a numeric vector is a number that is greater than or equal to p% of all the values of
the vector.
The IQR test is a method to detect the outliers of a series of numeric data. The algorithm performs the
following tasks:
1. Calculates Q1, Q3, and IQR = Q3 − Q1.
2. Marks a value as an outlier if it lies outside the range [Q1 − 1.5 × IQR, Q3 + 1.5 × IQR].
Prerequisites
IQRTEST
This function performs the inter-quartile range test and outputs the test results.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameter
The following parameter is optional. If it is not specified, PAL will use its default value.
Output Tables
Example
Assume that:
Expected Result
PAL_IQR_TBL:
PAL_IQR_RESULTS_TBL:
The algorithm randomly partitions an input dataset into three disjoint subsets called the training, testing,
and validation sets. The proportion of each subset is defined by parameters. Note that the union of these
three subsets might not be the complete initial dataset.
For stratified partition, the dataset needs to have at least one categorical attribute (for example, of type
varchar). The initial dataset is first subdivided according to the distinct categorical values of this attribute.
Each mutually exclusive subset is then randomly split to obtain the training, testing, and validation subsets.
This ensures that all "categorical values", or "strata", are present in the sampled subsets.
Prerequisites
PARTITION
This function reads the input data and generates training, testing, and validation data with the partition
algorithm.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Other columns: Varchar, nvarchar, integer, or double. Data columns. The column used for stratification must
be categorical (integer, varchar, or nvarchar).
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: Random partition
● Not 0: Stratified partition
STRATIFIED_COLUMN (Varchar, no default value): Indicates which column is used for stratification. Valid only
when PARTITION_METHOD is set to a non-zero value (stratified partition).
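For example, a stratified partition on a hypothetical REGION column could be requested as follows (a minimal sketch; the parameters controlling the subset proportions are configured separately and are not shown here):
INSERT INTO #PAL_CONTROL_TBL VALUES ('PARTITION_METHOD', 1, NULL, NULL);     -- not 0: stratified partition
INSERT INTO #PAL_CONTROL_TBL VALUES ('STRATIFIED_COLUMN', NULL, NULL, 'REGION');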
Output Table
Example
Assume that:
Posterior scaling is used to scale data based on a scaling model previously generated by the scaling range
procedure.
It is assumed that the new data comes from a similar distribution and does not update the scaling model.
POSTERIORSCALING
This function directly scales data based on the previous scaling model, without running the scaling range
procedure once more.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Data
  1st column: Integer, bigint, varchar, or nvarchar. ID. This must be the first column.
Scaling Model
  1st column: Integer. Scaling model ID. This must be the first column.
  2nd column: CLOB, varchar, or nvarchar. Scaling model saved as a JSON string. The table must be a column table. The minimum length of each unit (row) is 5000.
Parameter Table
Mandatory Parameters
None.
Optional Parameters
None.
Output Table
Example
Assume that:
Expected Result
PAL_NEW_SCALING_TBL:
Related Information
Principal component analysis (PCA) aims at reducing the dimensionality of multivariate data while accounting
for as much of the variation in the original data set as possible. This technique is especially useful when the
variables within the data set are highly correlated.
Principal components seek to transform the original variables into a new set of variables that are
uncorrelated and ordered so that the first few retain most of the variation present in all of the original
variables.
The signs of the columns of the loadings matrix are arbitrary and may differ between different
implementations of PCA.
Note that if one variable has a constant value across all data items, the variables cannot be scaled.
Prerequisites
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: No
● 1: Yes
● 0: No
● 1: Yes
Output Tables
Example
Assume that:
Expected Result
PAL_PCA_LOADINGS_TBL:
PAL_PCA_LOADINGS_INFO_TBL:
PAL_PCA_SCORES_TBL:
PCAPROJECTION
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Scaling Information
  1st column: Integer. Variable ID.
  2nd column: Double. Mean value of each variable.
The number of rows in this table should equal the number of variables in the input data.
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: No
● 1: Yes
Projection
  1st column: Integer. Data item ID.
  Other columns: Double. Transformed data items.
Make sure the output table has enough columns to hold the output data. The column size of the output data
depends on the MAX_COMPONENTS parameter.
Example
Assume that:
Expected Result
PAL_PCAPROJ_SCORES_TBL:
In PAL, this function supports four different distributions. The probability density functions of different
distributions are defined as follows:
● Uniform
● Normal
● Weibull
● Gamma
Procedure Generation
Position 1: <schema_name>, <Distribution parameter INPUT table type>, IN
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
● Uniform
  ○ Min (default: 0) in (−∞,+∞)
  ○ Max (default: 1) in (−∞,+∞)
  (Min < Max)
● Normal
  ○ Mean (default: 0) in (−∞,+∞)
  ○ Variance (default: 1) in (0,+∞)
  ○ SD (default: 1) in (0,+∞)
  Variance and SD cannot be used together. Choose one of them.
● Weibull
  ○ Shape (default: 1) in (0,+∞)
  ○ Scale (default: 1) in (0,+∞)
● Gamma
  ○ Shape (default: 1) in (0,+∞)
  ○ Scale (default: 1) in (0,+∞)
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
3.6.9 Sampling
In business scenarios the number of records in the database is usually quite large, and it is common to use a
small portion of the records as representatives, so that a rough impression of the dataset can be given by
analyzing sampling.
● First_N
● Middle_N
● Last_N
● Every_Nth
● SimpleRandom_WithReplacement
● SimpleRandom_WithoutReplacement
● Systematic
● Stratified
SAMPLING
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
● 0: First_N
● 1: Middle_N
● 2: Last_N
● 3: Every_Nth
● 4: SimpleRandom_WithReplacement
● 5: SimpleRandom_WithoutReplacement
● 6: Systematic
● 7: Stratified_WithReplacement
● 8: Stratified_WithoutReplacement
INTERVAL (Integer): The interval between two samples. Only required when SAMPLING_METHOD is 3. If this
parameter is not specified, the SAMPLING_SIZE parameter will be used.
COLUMN_CHOOSE (Integer): The column that is used to do the stratified sampling. Only required when
SAMPLING_METHOD is 7 or 8.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Table
Result: Columns of integer, double, varchar, or nvarchar type. The output table has the same structure as
defined in the input table.
Examples
Assume that:
Example 1
Expected Result
Example 2
In real-world scenarios the collected continuous attributes are usually distributed within different ranges. It
is a common practice to have the data well scaled, so that data mining algorithms like neural networks,
nearest neighbor classification, and clustering can give more reliable results.
This release of PAL provides three scaling range methods, described below. In the following, Xip and Yip are
the original value and transformed value of the i-th record and p-th attribute, respectively.
1. Min-Max Normalization
Each transformed value is within the range [new_minA, new_maxA], where new_minA and new_maxA are
user-specified parameters. Supposing that minA and maxA are the minimum and maximum values of
attribute A, we get the following calculation formula (see also the SQL sketch after this list):
Yip = (Xip ‒ minA) × (new_maxA - new_minA) / (maxA - minA) + new_minA
2. Z-Score Normalization (or zero-mean normalization)
PAL uses three z-score methods.
○ Mean-Standard Deviation
The transformed values have mean 0 and standard deviation 1. The transformation is made as follows:
where μp and σp are the mean and standard deviation of the original values of the p-th attribute.
○ Mean-Mean Absolute Deviation
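A minimal SQL sketch of min-max normalization to the range [0, 1], assuming a hypothetical table DATA_TBL(ID INTEGER, A DOUBLE) whose column A is not constant:
SELECT ID,
       (A - MIN(A) OVER ()) / (MAX(A) OVER () - MIN(A) OVER ()) AS A_SCALED -- new_min = 0, new_max = 1
  FROM DATA_TBL;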
Prerequisites
SCALINGRANGE
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
● 0: Min-max normalization
● 1: Z-score normalization
● 2: Decimal scaling normalization
NEW_MAX (Double or integer): The new maximum value of the min-max normalization method. Only valid
when SCALING_METHOD is 0.
NEW_MIN (Double or integer): The new minimum value of the min-max normalization method. Only valid
when SCALING_METHOD is 0.
Optional Parameter
The following parameter is optional. If it is not specified, PAL will use its default value.
Output Table
Result
  1st column: Integer, bigint, varchar, or nvarchar. ID. This must be the first column.
Scaling Model (optional)
  1st column: Integer. Scaling model ID. This must be the first column.
  2nd column: CLOB, varchar, or nvarchar. Scaling model saved as a JSON string. The table must be a column table. The minimum length of each unit (row) is 5000.
Example
Assume that:
Expected Result
PAL_SCALING_MODEL_TBL:
This function is used to replace missing values with statistical values. Currently, three methods are
provided. The missing values of a specific attribute are replaced by one of the following values of the
attribute (column):
● Median: the numerical value separating the higher half of the values from the lower half.
Note
The Mode can only be used for categorical attributes, whereas the Mean and Median can only be used for
continuous attributes.
SUBSTITUTE_MISSING_VALUES
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameter
● 100: Mode
● 200: Mean
● 201: Median
● 101: Use a specific string value to replace missing values
● 202: Use a specific integer value to replace missing values
● 203: Use a specific double value to replace missing values
INSERT INTO #PAL_CONTROL_TBL VALUES ('V0', <METHOD>, NULL, '<NEW VALUE>');
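For instance, to replace the missing values of a column V0 with the specific string value 'unknown' (method 101):
INSERT INTO #PAL_CONTROL_TBL VALUES ('V0', 101, NULL, 'unknown');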
Optional Parameter
The following parameter is optional. If it is not specified, PAL will use its default value.
Output Tables
Example
Assume that:
Expected Result
PAL_MISSING_VALUES_RESULT_TBL:
The variance test is a method to identify the outliers of n numeric data points {xi}, 1 ≤ i ≤ n, using the mean
μ and the standard deviation σ of the data.
Prerequisites
VARIANCETEST
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameter
Optional Parameter
The following parameter is optional. If it is not specified, PAL will use its default value.
Example
Assume that:
Expected Result
PAL_VT_RESULT_TBL:
This section describes the statistics functions that are provided by the Predictive Analysis Library.
The chi-squared test for goodness of fit tells whether or not an observed distribution differs from an
expected distribution. The test statistic is:
X2 = Σ (Oi − Ei)^2 / Ei
where Oi is an observed frequency and Ei is an expected frequency. The chi-squared value X2 is used to
calculate a p-value by comparing it to a chi-squared distribution. The degrees of freedom is set to n−p, where
p is the reduction in degrees of freedom.
● The input data has three columns. The first column is an ID column of integer, varchar, or nvarchar type;
the second column is the observed data of integer or double type; the third column is the expected
frequency.
● The input data does not contain null values. The algorithm will issue errors when encountering null values.
CHISQTESTFIT
This function performs Pearson's chi-squared test for goodness of fit according to the user's input.
Procedure Generation
Procedure Calling
The input, result, and statvalue tables must be of the types specified in the signature table.
Signature
Input Table
Data
  1st column: Integer, bigint, varchar, or nvarchar. ID. This must be the first column.
Output Tables
Example
Assume that:
Expected Result
PAL_CHISQTESTFIT_RESULT_TBL:
PAL_CHISQTESTFIT_STATVALUE_TBL:
The chi-squared test for independence tells whether observations of two variables are independent of each
other. The test statistic is:
X2 = Σi Σj (Oi,j − Ei,j)^2 / Ei,j
where Oi,j is an observed frequency and Ei,j is an expected frequency. X2 is then used to calculate a p-value
by comparing it to a chi-squared distribution. The degrees of freedom is set to (r-1)*(c-1).
● The input data contains an ID column in the first column, of integer, varchar, or nvarchar type; the other
columns are of integer or double data type.
● The input data does not contain null values. The algorithm will issue errors when encountering null values.
CHISQTESTIND
This function performs Pearson's chi-squared test for independence according to the user's input.
Procedure Generation
Procedure Calling
The input, parameter, result, and statvalue tables must be of the types specified in the signature table.
Signature
Input Table
Data
  1st column: Integer, bigint, varchar, or nvarchar. ID. This must be the first column.
Parameter Table
Mandatory Parameters
None.
Optional Parameter
Output Tables
Example
Assume that:
Expected Result
PAL_CHISQTESTIND_EXPECTEDRESULT_TBL:
PAL_CHISQTESTIND_STATVALUE_TBL:
This algorithm evaluates the probability of a variable x from the cumulative distribution function (CDF) or
complementary cumulative distribution function (CCDF) for a given probability distribution.
CDF
CDF describes the lower tail probability of a probability distribution. It is the probability that a random variable
X with a given probability distribution takes a value less than or equal to x. The CDF F(x) of a real-valued
random variable X is given by:
F(x)=P[X≤x]
CCDF
CCDF describes the upper tail probability of a probability distribution. It is the probability that a random
variable X with a given probability distribution takes a value greater than x. The CCDF of a real-valued
random variable X is given by:
P[X>x] = 1 − F(x)
Prerequisites
DISTRPROB
This function calculates the value of CDF or CCDF (depending on the parameter given by user) for a given
distribution.
Procedure Generation
Position 2: <schema_name>, <Distribution parameter INPUT table type>, IN
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Distribution Parameter
  1st column: Varchar or nvarchar. Names of the distribution parameters (for example, "Max", "Mean"). See the Distribution Parameters Definition table for details.
  2nd column: Varchar or nvarchar. Values of the distribution parameters (for example, "1.0", "0.0").
Parameter Table
The following parameter is optional. If it is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
In PAL, you can choose one probability distribution type from a supporting list (Normal, Gamma, Weibull, and
Uniform) and then PAL will calculate the optimized parameters of this distribution which fit the observed
variable best.
PAL supports two distribution fitting interfaces: DISTRFIT and DISTRFITCENSORED. DISTRFIT fits un-censored
data, while DISTRFITCENSORED fits censored data.
Maximum likelihood and median rank are two estimation methods for finding the optimized parameters. In
PAL, the maximum likelihood method supports all distribution types in the supporting list for un-censored data
and supports Weibull distribution fitting for a mixture of left, right, and interval censored data. The median
rank method supports only Weibull distribution fitting.
Prerequisites
DISTRFIT
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameter
● Normal
● Gamma
● Weibull
● Uniform
Optional Parameter
The following parameter is optional. If it is not specified, PAL will use its default value.
Note
For Gamma or Weibull distribution, you can specify the start values of parameters for optimization. If the
start values are not specified, the algorithm will calculate them automatically.
Output Tables
Example
Expected Result
PAL_DISTRFIT_ESTIMATION_TBL:
PAL_DISTRFIT_STATISTICS_TBL:
DISTRFITCENSORED
This is a Weibull distribution fitting function with censored data. This release of PAL only supports the
maximum likelihood estimation method on a mixture of left, right, and interval censored data and the median
rank estimation method on right censored data.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameter
● Weibull
Optional Parameter
The following parameter is optional. If it is not specified, PAL will use its default value.
● 0: Maximum likelihood
● 1: Median rank (only for Weibull distribution)
Output Tables
Example
Assume that:
Expected Result
PAL_DISTRFITCENSORED_ESTIMATION_TBL:
PAL_DISTRFITCENSORED_STATISTICS_TBL:
Note: The median rank estimation method only supports the Weibull distribution, and this example does not
produce statistics output.
Expected Result
PAL_DISTRFITCENSORED_ESTIMATION_TBL:
PAL_DISTRFITCENSORED_STATISTICS_TBL:
Grubbs' test is used to detect outliers from a given univariate data set Y={Y1,Y2,...,Yn}. The algorithm
assumes that Y comes from a Gaussian distribution. The test statistic is the largest absolute deviation from
the sample mean in units of the sample standard deviation:
G = max |Yi − mean(Y)| / s
where s denotes the sample standard deviation.
Given the significance level α, the suspect value is declared an outlier if G exceeds a critical value derived
from the quantile of the t-distribution with n−2 degrees of freedom at significance level α/(2n).
The above is called the two-sided test. There is another version, called the one-sided test, for the minimum
value or the maximum value. Suppose Ymax is an outlier from Grubbs' test; you can then calculate the statistic
value U for it.
PAL also supports the repeated version of the two-sided test, which removes the identified outlier and
repeats the test on the remaining data.
Prerequisites
GRUBBSTEST
This function performs Grubbs’ test for identifying outliers from input data.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Data
  1st column: Integer, bigint, varchar, or nvarchar. ID. This must be the first column.
Parameter Table
Mandatory Parameter
None.
Optional Parameter
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
PAL_GRUBBS_OUTLIERS_TBL:
PAL_GRUBBS_STATISTICS_TBL:
The Kaplan-Meier estimator is a non-parametric statistic used to estimate the survival function from lifetime
data. It is often used to measure the time-to-death of patients after treatment or time-to-failure of machine
parts.
Sometimes subjects refuse to remain in the study, some subjects may not experience the event before the
end of the study, or you lose touch with them midway through the study. These situations are labeled as
censored observations.
The Kaplan-Meier estimator of the survival function is:
S(t) = Π (ti ≤ t) (1 − di/ni)
where ni is the number of subjects at risk and di is the number of subjects who fail, both at time ti.
The Kaplan-Meier estimator can be regarded as a point estimate of the survival function S(t) at any time
t. We can construct 95% confidence intervals around each of these estimates. To compute the confidence
intervals, Greenwood's formula gives an asymptotic estimate of the variance of S(t) for large groups.
However, the endpoints of Greenwood's confidence interval can be negative or greater than one. Here we use
another confidence interval based on the large-sample normal distribution of log(−log(S(t))), which yields:
(exp(−exp(ci_lower)), exp(−exp(ci_upper)))
For i=1, 2, ..., g and j=1, 2, ..., k, where g is the number of groups and k is the number of distinct failure
times:
oij = observed number of failures in the ith group at the jth ordered failure time
eij = expected number of failures in the ith group at the jth ordered failure time
Then, the log-rank statistic is given by a matrix product formula, which has approximately a chi-squared
distribution with g−1 degrees of freedom under the null hypothesis that all g groups have a common survival
function.
Prerequisite
This function estimates the probability of surviving past time t (where t is an event time) using the
Kaplan-Meier estimator, and compares several groups of survival functions using the log-rank test. The
function performs the log-rank test if the lifetime data comes from multiple groups; otherwise it skips the
test.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Tables
Example
Assume that:
Expected Result
PAL_KMSURV_LOGRANK_STAT1_TBL:
PAL_KMSURV_LOGRANK_STAT2_TBL:
Covariance Matrix
The covariance between two data samples (random variables) x and y is:
cov(x,y) = (1/(n−1)) Σ (xi − mean(x))(yi − mean(y))
Supposing that each column represents a data sample (random variable), the covariance matrix Σ is defined
as the matrix of covariances between any two random variables:
Σij = cov(Xi,Xj), where X=[X1,X2,...,Xn].
The Pearson's correlation coefficient between two data samples (random variables) X and Y is the covariance
of X and Y divided by the product of the standard deviation of X and the standard deviation of Y:
r(X,Y) = cov(X,Y) / (σX σY), where X=[X1,X2,...,Xn].
Prerequisites
MULTIVARSTAT
This function reads input data and calculates the basic multivariate statistics values for each column, including
covariance matrix and Pearson’s correlation coefficient matrix.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
Optional Parameter
The following parameter is optional. If it is not specified, PAL will use its default value.
● 0: Covariance matrix
● 1: Pearson’s correlation
coefficient matrix
Output Table
Example
Assume that:
Expected Result
This algorithm evaluates the inverse of the cumulative distribution function (CDF) or the inverse
of the complementary cumulative distribution function (CCDF) for a given probability p and probability
distribution, where:
F(x) = P[X≤x]
P[X>x] = 1 − F(x)
Prerequisites
DISTRQUANTILE
Procedure Generation
Position 2: <schema_name>, <Distribution parameter INPUT table type>, IN
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Distribution Parameter
  1st column: Varchar or nvarchar. Names of the distribution parameters (for example, "Max", "Mean"). See the Distribution Parameters Definition table for details.
  2nd column: Varchar or nvarchar. Values of the distribution parameters (for example, "1.0", "0.0").
Note
The names and values of the distribution parameters are not case sensitive.
Parameter Table
The following parameter is optional. If it is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
This function calculates several basic univariate statistics including mean, median, variance, standard
deviation, skewness and kurtosis. The function treats each column as one dataset and calculates the statistics
respectively.
Mean
The mean is the arithmetic average of the dataset:
mean = (X1 + X2 + ... + Xn) / n
where Xi is the i-th element of the dataset and n is the size of the dataset.
Median
The median is defined as the numerical value separating the higher half of a dataset from the lower half. If there
is an even number of observations, the median is defined to be the mean of the two middle elements.
Use the median to divide the elements of the dataset into two halves. Do not include the median in either half.
The lower quartile value is the median of the lower half of the data. The upper quartile value is the median of
the upper half of the data.
Variance (population)
Variance (sample)
Skewness
where x' = x - mean(x) and n is the size of the dataset. There are three definitions of skewness:
Definition 1
Definition 2
Definition 3
Kurtosis
Kurtosis is a measure of the peakedness or flatness of a distribution compared to a normal distribution,
where x' = x - mean(x) and n is the number of elements. There are three definitions of kurtosis:
Definition 1
Definition 2
Definition 3
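Several of these statistics are also available as built-in SQL aggregate functions, which can serve as a quick cross-check of the UNIVARSTAT output; a minimal sketch assuming a hypothetical table T with a numeric column X:
SELECT AVG(X) AS MEAN,
       MEDIAN(X) AS MEDIAN,
       VAR(X) AS VARIANCE_SAMPLE, -- sample variance (divides by n-1)
       STDDEV(X) AS STDDEV_SAMPLE
  FROM T;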
Prerequisites
UNIVARSTAT
This function reads input data and calculates the basic univariate statistics values for each column, including
mean, median, variance, standard deviation, skewness, and kurtosis.
Procedure Generation
Procedure Calling
The input, parameter, and result tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameters
None.
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
● 0: Definition 1
● 1: Definition 2
● 2: Definition 3
● 0: Definition 1
● 1: Definition 2
● 2: Definition 3
● 0: Sample dataset
● 1: Population dataset
Output Table
Example
Assume that:
Expected Result
This function is used to test the equality of two random variances using the F-test. The null hypothesis is
that two independent normal variances are equal. The observed sums of selected squares are examined to see
whether their ratio is significantly incompatible with this null hypothesis.
Let x1, x2, ..., xn and y1, y2, ..., ym be independent and identically distributed samples from two
populations.
The F value is then used to calculate a p-value by comparing it to an F-distribution. The degrees of freedom
are set to (n-1) and (m-1).
Prerequisites
● The input data has two tables and each table has only one column with the type of integer or double.
● The input data does not contain null value. The algorithm will issue errors when encountering null values.
VAREQUALTEST
This function tests the equality of variances between two input data.
Procedure Generation
The input1, input2, parameter, and statvalue tables must be of the types specified in the signature table.
Signature
Input Tables
Parameter Table
Mandatory Parameters
None.
Optional Parameter
The following parameter is optional. If it is not specified, PAL will use its default value.
● 0: Two sides
● 1: Less
● 2: Greater
Output Table
Example
Assume that:
Expected Result
This section describes the algorithms provided by the PAL that are mainly used for social network analysis.
Predicting missing links is a common task in social network analysis. The Link Prediction algorithm in PAL
provides four methods to compute the distance of any two nodes using existing links in a social network, and
makes predictions on the missing links based on these distances.
Let x and y be two nodes in a social network, and Γ(x) be the set containing the neighbor nodes of x. The
four methods to compute the distance of x and y are briefly described as follows.
Common Neighbors
The quantity is computed as the number of common neighbors of x and y: |Γ(x) ∩ Γ(y)|.
Jaccard's Coefficient
The quantity is the number of common neighbors normalized by the size of the union of the two
neighborhoods: |Γ(x) ∩ Γ(y)| / |Γ(x) ∪ Γ(y)|.
Adamic/Adar
The quantity is computed as the sum of the inverse log degree over all the common neighbors:
Σ (z ∈ Γ(x) ∩ Γ(y)) 1 / log|Γ(z)|
Katzβ
The quantity is computed as a weighted sum of the number of paths of length l connecting x and y:
Σ (l=1..∞) β^l × |paths(l)(x,y)|
where β is the user-specified parameter, and |paths(l)(x,y)| is the number of paths with length l
which start from node x and end at node y.
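As a point of comparison, the Common Neighbors quantity can be computed with a plain self-join; a minimal sketch assuming a hypothetical undirected edge table EDGES(NODE1 INTEGER, NODE2 INTEGER) that stores each edge in both directions:
SELECT E1.NODE1 AS X, E2.NODE1 AS Y,
       COUNT(*) AS COMMON_NEIGHBORS
  FROM EDGES E1
  JOIN EDGES E2 ON E1.NODE2 = E2.NODE2 -- shared neighbor
               AND E1.NODE1 < E2.NODE1 -- count each unordered pair once
 GROUP BY E1.NODE1, E2.NODE1;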
LINKPREDICTION
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameter
● 1: Common Neighbors
● 2: Jaccard's Coefficient
● 3: Adamic/Adar
● 4: Katz
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
BETA (Double, default 0.005): Parameter for the Katz method. Only valid when METHOD is 4.
Output Table
Example
Assume that:
Expected Result
This section describes the ABC Analysis and Weighted Score Table algorithms that are provided by the
Predictive Analysis Library.
This algorithm is used to classify objects (such as customers, employees, or products) based on a particular
measure (such as revenue or profit). It suggests that the inventories of an organization are not of equal
value and thus can be grouped into three categories (A, B, and C) by their estimated importance. "A" items
are very important for an organization. "B" items are of medium importance, that is, less important than "A"
items and more important than "C" items. "C" items are of the least importance. An example classification is:
● "A" items: 20% of the items (customers) account for 70% of the revenue.
● "B" items: 30% of the items (customers) account for 20% of the revenue.
● "C" items: 50% of the items (customers) account for 10% of the revenue.
Prerequisites
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Table
Parameter Table
Mandatory Parameter
Optional Parameters
The following parameters are optional. If a parameter is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
PAL_ABC_RESULT_TBL:
A weighted score table is a method of evaluating alternatives when the importance of each criterion differs. In a
weighted score table, each alternative is given a score for each criterion. These scores are then weighted by the
importance of each criterion. All of an alternative's weighted scores are then added together to calculate its
total weighted score. The alternative with the highest total score should be the best alternative.
You can use weighted score tables to make predictions about future customer behavior. You first create a
model based on historical data in the data mining application, and then apply the model to new data to make
the prediction. The prediction, that is, the output of the model, is called a score. You can create a single score
for your customers by taking into account different dimensions.
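The underlying calculation is a plain weighted sum; a minimal sketch assuming hypothetical criterion scores S1, S2, and S3 with weights 0.5, 0.3, and 0.2 in a table ALTERNATIVES(ID INTEGER, S1 DOUBLE, S2 DOUBLE, S3 DOUBLE):
SELECT ID,
       0.5 * S1 + 0.3 * S2 + 0.2 * S3 AS TOTAL_SCORE -- weighted sum of criterion scores
  FROM ALTERNATIVES
 ORDER BY TOTAL_SCORE DESC; -- highest total score = best alternative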
WEIGHTEDTABLE
This function performs the weighted table calculation. It is similar to the Volume Driver function in the
Business Function Library (BFL). Volume Driver calculates only one column, whereas WEIGHTEDTABLE
calculates multiple columns at the same time.
Procedure Generation
Procedure Calling
The input, parameter, and output tables must be of the types specified in the signature table.
Signature
Input Tables
Target/Input Data
  Columns: Varchar, nvarchar, integer, or double. Specifies which columns will be used to calculate the scores. Discrete values: integer, varchar, nvarchar, double. An ID column is mandatory; its data type should be integer.
Map Function
  Columns: Varchar, nvarchar, integer, or double. Creates the map function. Every attribute (except ID) in the Input Data table maps to two columns in the Map Function table: the Key column and the Value column. The Value column must be of double type.
Parameter Table
Mandatory Parameters
None.
Optional Parameter
The following parameter is optional. If it is not specified, PAL will use its default value.
Output Table
Example
Assume that:
Expected Result
PAL_WT_RESULT_TBL:
This section provides end-to-end scenarios of predictive analysis with PAL algorithms.
We wish to predict segmentation/clustering of new customers for a supermarket. First use the K-means
function in PAL to perform segmentation/clustering for existing customers in the supermarket. The output can
then be used as the training data for the C4.5 Decision Tree function to predict new customers’ segmentation/
clustering.
Technology Background
● K-means clustering is a method of cluster analysis whereby the algorithm partitions N observations or
records into K clusters, in which each observation belongs to the cluster with the nearest center. It is one of
the most commonly used clustering algorithms.
● Decision trees are powerful and popular tools for classification and prediction. Decision tree learning, used
in statistics, data mining, and machine learning, uses a decision tree as a predictive model which maps
observations about an item to conclusions about the item's target value.
Implementation Steps
Assume that:
Step 1
Input customer data and use the K-means function to partition the data set into K clusters. In this example,
nine rows of data will be input. K equals 3, which means the customers will be partitioned into three levels.
Step 2
Use the above output as the training data of C4.5 Decision Tree. The C4.5 Decision Tree function will generate a
tree model which maps the observations about an item to the conclusions about the item's target value.
Step 3
Use the above tree model to map each new customer to the corresponding level he or she belongs to.
We wish to do an analysis of the cash flow of an investment required to create a new product. Projected
estimates are given for the product revenue, product costs, overheads, and capital investment for each year of
the analysis, from which the cash flow can be calculated. For capital investment appraisal the cash flows are
summed for each year and discounted for future values, in other words the net present value of the cash flow is
derived as a single value measuring the benefit of the investment.
Monte Carlo Simulation is used in our example to estimate the net present value (NPV) of the investment. The
equations used in the simulation are:
Cash flow for a year = product revenue − product costs − overheads − capital investment
Supposing the simulation covers time periods of k years and the discount rate is r, the net present value of
the investment is defined as:
NPV = Σ (i=1..k) CashFlow_i / (1 + r)^i
Technology Background
Monte Carlo Simulation is a computational algorithm that repeatedly generates random samples to compute
numerical results based on a formula or model in order to obtain the unknown probability distribution of an
event or outcome.
In PAL, the Random Distribution Sampling, Distribution Fitting, and Cumulative Distribution algorithms may be
used for Monte Carlo Simulation.
Implementation Steps
Assume that:
Step 1
Input the given estimates (single point deterministic values) for product revenue, product costs, overheads,
and capital investment. In this example, the time periods are 5 (from year 1 to year 5).
● Product Revenue:
Normal distribution and the mean and standard deviation are listed in the following table.
● Product Costs:
Normal distribution and the mean and standard deviation are listed in the following table.
Product Revenue
Product Costs
Overheads
Capital Investment
Run the Random Distribution Sampling algorithm for each variable and generate 1,000 sample sets. The
number of sample sets is a choice for the analysis: the larger the value, the smoother the output distribution
and the closer it will be to a normal distribution.
Step 2
Calculate the net present value of the investment for each sampling, using the NPV equation given above; a SQL sketch follows.
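Expressed in SQL, the per-sample NPV is a discounted sum over the years; a minimal sketch assuming the sample sets from Step 1 are collected in a hypothetical table CASHFLOW_TBL(RUN_ID INTEGER, YEARNUM INTEGER, CASHFLOW DOUBLE) and a discount rate r = 0.05:
SELECT RUN_ID,
       SUM(CASHFLOW / POWER(1.05, YEARNUM)) AS NPV -- sum of CashFlow_i / (1 + r)^i
  FROM CASHFLOW_TBL
 GROUP BY RUN_ID;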
Step 3
Plot the distribution of the net present value of the investment and run Distribution Fitting to fit a normal
distribution to the NPV of the investment. (The Central Limit Theorem states that the output distribution will
be a normal distribution.)
---------------------------------------
------- distribution fit process ---
---------------------------------------
DROP TYPE PAL_DISTRFIT_DATA_T;
CREATE TYPE PAL_DISTRFIT_DATA_T AS TABLE(NPVALUE DOUBLE);
DROP TYPE PAL_DISTRFIT_ESTIMATION_T;
CREATE TYPE PAL_DISTRFIT_ESTIMATION_T AS TABLE(NAME VARCHAR(50),VAL VARCHAR(50));
DROP TYPE PAL_DISTRFIT_STATISTICS_T;
CREATE TYPE PAL_DISTRFIT_STATISTICS_T AS TABLE(NAME VARCHAR(50),VAL DOUBLE);
DROP TYPE PAL_CONTROL_T;
CREATE TYPE PAL_CONTROL_T AS TABLE(NAME VARCHAR (50),INTARGS INTEGER,DOUBLEARGS
DOUBLE,STRINGARGS VARCHAR (100));
DROP TABLE PDATA_TBL;
CREATE COLUMN TABLE PDATA_TBL("POSITION" INT, "SCHEMA_NAME" NVARCHAR(256),
"TYPE_NAME" NVARCHAR(256), "PARAMETER_TYPE" VARCHAR(7));
INSERT INTO PDATA_TBL VALUES (1, 'DM_PAL','PAL_DISTRFIT_DATA_T', 'IN');
INSERT INTO PDATA_TBL VALUES (2, 'DM_PAL','PAL_CONTROL_T', 'IN');
INSERT INTO PDATA_TBL VALUES (3, 'DM_PAL','PAL_DISTRFIT_ESTIMATION_T', 'OUT');
INSERT INTO PDATA_TBL VALUES (4, 'DM_PAL','PAL_DISTRFIT_STATISTICS_T', 'OUT');
call SYS.AFLLANG_WRAPPER_PROCEDURE_DROP('DM_PAL','PAL_DISTRFIT');
call SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL', 'DISTRFIT',
'DM_PAL','PAL_DISTRFIT', PDATA_TBL);
DROP TABLE PAL_CONTROL_TBL;
Step 4
According to the fitted model, run the Cumulative Distribution function to obtain the probability of having an
NPV of investment smaller than or equal to a given NPV of the investment.
---------------------------------------
----distribution probability process --
---------------------------------------
DROP TYPE PAL_DISTRPROB_DATA_T;
CREATE TYPE PAL_DISTRPROB_DATA_T AS TABLE(DATACOL DOUBLE);
DROP TYPE PAL_DISTRPROB_DISTRPARAM_T;
CREATE TYPE PAL_DISTRPROB_DISTRPARAM_T AS TABLE(NAME VARCHAR(50),VALUEE
VARCHAR(50));
DROP TYPE PAL_DISTRPROB_RESULT_T;
CREATE TYPE PAL_DISTRPROB_RESULT_T AS TABLE(INPUTDATA DOUBLE,PROBABILITY DOUBLE);
DROP TYPE PAL_CONTROL_T;
CREATE TYPE PAL_CONTROL_T AS TABLE(NAME VARCHAR (50),INTARGS INTEGER,DOUBLEARGS
DOUBLE,STRINGARGS VARCHAR (100));
DROP TABLE PAL_DISTRPROB_PDATA_TBL;
CREATE COLUMN TABLE PAL_DISTRPROB_PDATA_TBL("POSITION" INT, "SCHEMA_NAME"
NVARCHAR(256), "TYPE_NAME" NVARCHAR(256), "PARAMETER_TYPE" VARCHAR(7));
INSERT INTO PAL_DISTRPROB_PDATA_TBL VALUES (1, 'DM_PAL','PAL_DISTRPROB_DATA_T',
'IN');
INSERT INTO PAL_DISTRPROB_PDATA_TBL VALUES (2, 'DM_PAL',
'PAL_DISTRPROB_DISTRPARAM_T', 'IN');
INSERT INTO PAL_DISTRPROB_PDATA_TBL VALUES (3, 'DM_PAL','PAL_CONTROL_T', 'IN');
INSERT INTO PAL_DISTRPROB_PDATA_TBL VALUES (4,
'DM_PAL','PAL_DISTRPROB_RESULT_T', 'OUT');
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_DROP('DM_PAL','PAL_DISTRPROB_PROC');
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL', 'DISTRPROB',
'DM_PAL','PAL_DISTRPROB_PROC', PAL_DISTRPROB_PDATA_TBL);
DROP TABLE PAL_DISTRPROB_DATA_TBL;
CREATE TABLE PAL_DISTRPROB_DATA_TBL LIKE PAL_DISTRPROB_DATA_T;
INSERT INTO PAL_DISTRPROB_DATA_TBL VALUES (7000);
INSERT INTO PAL_DISTRPROB_DATA_TBL VALUES (8000);
INSERT INTO PAL_DISTRPROB_DATA_TBL VALUES (9000);
INSERT INTO PAL_DISTRPROB_DATA_TBL VALUES (10000);
INSERT INTO PAL_DISTRPROB_DATA_TBL VALUES (11000);
DROP TABLE PAL_DISTRPROB_DISTRPARAM_TBL;
CREATE TABLE PAL_DISTRPROB_DISTRPARAM_TBL LIKE PAL_DISTRPROB_DISTRPARAM_T;
INSERT INTO PAL_DISTRPROB_DISTRPARAM_TBL VALUES ('DISTRIBUTIONNAME', 'Normal');
INSERT INTO PAL_DISTRPROB_DISTRPARAM_TBL VALUES ('MEAN', '100');
INSERT INTO PAL_DISTRPROB_DISTRPARAM_TBL VALUES ('VARIANCE', '1');
-- in practice, use the MEAN and VARIANCE fitted in Step 3
DROP TABLE PAL_CONTROL_TBL;
CREATE COLUMN TABLE PAL_CONTROL_TBL LIKE PAL_CONTROL_T;
DROP TABLE PAL_DISTRPROB_RESULT_TBL;
CREATE COLUMN TABLE PAL_DISTRPROB_RESULT_TBL LIKE PAL_DISTRPROB_RESULT_T;
CALL DM_PAL.PAL_DISTRPROB_PROC(PAL_DISTRPROB_DATA_TBL,
PAL_DISTRPROB_DISTRPARAM_TBL, PAL_CONTROL_TBL, PAL_DISTRPROB_RESULT_TBL) WITH
OVERVIEW;
SELECT * FROM PAL_DISTRPROB_RESULT_TBL;
Survival Analysis
In clinical trials or community trials, the effect of an intervention is assessed by measuring the number of
subjects who have survived or are saved after that intervention over a period of time. We wish to measure the
survival probability of Dukes’C colorectal cancer patients after treatment and evaluate statistically whether the
patients who accept treatment can survive longer than those who are only controlled conservatively.
Technology Background
The Kaplan-Meier estimate is one of the simplest ways to measure the fraction of subjects still living a certain
amount of time after treatment. The time from a defined starting point to the occurrence of a given event, for
example death, is called the survival time.
This scenario describes a clinical trial of 49 patients for the treatment of Dukes’ C colorectal cancer. The
following data shows the survival time in months of 49 patients with Dukes’ C colorectal cancer who were
randomly assigned to either linoleic acid or control treatment.
Linoleic acid (n = 25) 1+, 5+, 6, 6, 9+, 10, 10, 10+, 12, 12, 12, 12, 12+, 13+, 15+, 16+,
20+, 24, 24+, 27+, 32, 34+, 36+, 36+, 44+
Control (n = 24) 3+, 6, 6, 6, 6, 8, 8, 12, 12, 12+, 15+, 16+, 18+, 18+, 20, 22+, 24,
28+, 28+, 28+, 30, 30+, 33+, 42
The + sign indicates censored data. Consider the linoleic acid group. Until 6 months after treatment there are
no deaths; the effect of the censoring is to remove the censored subjects from the “at risk” group. By 6
months, two subjects (1+ and 5+) have been censored, so the number alive just before 6 months is 23. There
are two deaths at 6 months. Thus, the proportion surviving at 6 months is 21/23 = 0.913, which is also the
cumulative proportion surviving at that point.
We now reduce the number alive (“at risk”) by two. The censored event at 9 months further reduces the “at
risk” set to 20. At 10 months there are two deaths, so the proportion surviving is 18/20 = 0.90, and the
cumulative proportion surviving is 0.913 * 0.90 = 0.8217.
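For illustration, the same product-limit computation can be written directly in SQL. The following is a minimal
sketch (plain SQL, not a PAL call) for the linoleic acid group; the KM_EVENTS table and its layout are
assumptions made for this example.
-- Kaplan-Meier estimate for the linoleic acid group (n = 25), plain SQL sketch
DROP TABLE KM_EVENTS;
CREATE COLUMN TABLE KM_EVENTS ("TIME" INTEGER, DEATHS INTEGER, CENSORED INTEGER);
INSERT INTO KM_EVENTS VALUES ( 1, 0, 1);
INSERT INTO KM_EVENTS VALUES ( 5, 0, 1);
INSERT INTO KM_EVENTS VALUES ( 6, 2, 0);
INSERT INTO KM_EVENTS VALUES ( 9, 0, 1);
INSERT INTO KM_EVENTS VALUES (10, 2, 1);
INSERT INTO KM_EVENTS VALUES (12, 4, 1);
INSERT INTO KM_EVENTS VALUES (13, 0, 1);
INSERT INTO KM_EVENTS VALUES (15, 0, 1);
INSERT INTO KM_EVENTS VALUES (16, 0, 1);
INSERT INTO KM_EVENTS VALUES (20, 0, 1);
INSERT INTO KM_EVENTS VALUES (24, 1, 1);
INSERT INTO KM_EVENTS VALUES (27, 0, 1);
INSERT INTO KM_EVENTS VALUES (32, 1, 0);
INSERT INTO KM_EVENTS VALUES (34, 0, 1);
INSERT INTO KM_EVENTS VALUES (36, 0, 2);
INSERT INTO KM_EVENTS VALUES (44, 0, 1);
-- number at risk just before each time, then S(t) as the running product
-- of (1 - deaths/at_risk), computed via EXP(SUM(LN(...)))
SELECT "TIME", AT_RISK, DEATHS,
       EXP(SUM(LN(1 - DEATHS / TO_DOUBLE(AT_RISK))) OVER (ORDER BY "TIME"))
         AS SURVIVAL
FROM (
    SELECT "TIME", DEATHS,
           25 - COALESCE(SUM(DEATHS + CENSORED) OVER (ORDER BY "TIME"
                ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0) AS AT_RISK
    FROM KM_EVENTS
)
WHERE DEATHS > 0;
-- returns 0.913 at t = 6 and 0.8217 at t = 10, matching the walkthrough above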
To compare the survival estimates produced from two groups, we use the log-rank test. It is a hypothesis test
to compare the survival distributions of two groups (some of the observations may be censored) and is used
to test the null hypothesis that there is no difference between the populations (treatment group and control
group) in the probability of an event (here a death) at any time point. The method is nonparametric in that it
makes no assumptions about the distributions of the survival estimates. The analysis is based on the times of
events (here deaths): for each such time we calculate the observed number of deaths in each group and the
number expected if there were in reality no difference between the groups. The log-rank test is widely used in
clinical trials to establish the efficacy of a new treatment in comparison with a control treatment when the
measurement is the time to event (such as the time from initial treatment to death).
Because the log-rank test is purely a test of significance, it cannot provide an estimate of the size of the
difference between the groups.
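For reference, the standard textbook form of this computation is as follows (this describes the general
method, not necessarily PAL's exact implementation). At each distinct death time j, let n_{1j} and n_{2j} be
the numbers at risk in the two groups, n_j = n_{1j} + n_{2j}, and d_j the total number of deaths at that time.
The expected number of deaths in group 1 and the simple form of the test statistic are:

$$ e_{1j} = d_j \frac{n_{1j}}{n_j}, \qquad \chi^2 = \sum_{i=1}^{2} \frac{(O_i - E_i)^2}{E_i} $$

where O_i and E_i are the observed and expected numbers of deaths in group i, summed over all death times.
Under the null hypothesis, the statistic approximately follows a chi-squared distribution with one degree of
freedom.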
Implementation Step
Assume that:
Technology Background
The Weibull distribution is often used for reliability and survival analysis. It is defined by three parameters:
shape, scale, and location. Scale works as a key to magnify or shrink the curve. Shape is the crucial factor in
defining how the curve looks, as described below:
● Shape = 1: The failure rate is constant over time, indicating random failure.
● Shape < 1: The failure rate decreases over time.
● Shape > 1: The failure rate increases over time.
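For reference, the standard three-parameter form of the Weibull cumulative distribution function, with shape
k, scale λ, and location μ (the parameterization PAL reports should be checked against the Distribution
Fitting reference), is:

$$ F(x) = 1 - \exp\left(-\left(\frac{x - \mu}{\lambda}\right)^{k}\right), \qquad x \ge \mu $$

With location μ = 0 and shape k = 1, this reduces to the exponential distribution, whose constant hazard rate
corresponds to the random-failure case above.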
For the same raw data as in the Kaplan-Meier option above, also shown below:
Linoleic acid (n = 25) 1+, 5+, 6, 6, 9+, 10, 10, 10+, 12, 12, 12, 12, 12+, 13+, 15+, 16+,
20+, 24, 24+, 27+, 32, 34+, 36+, 36+, 44+
Control (n = 24) 3+, 6, 6, 6, 6, 8, 8, 12, 12, 12+, 15+, 16+, 18+, 18+, 20, 22+, 24,
28+, 28+, 28+, 30, 30+, 33+, 42
The DISTRFITCENSORED function is used to fit the Weibull distribution on the censored data. For the two types
of treatment, linoleic acid and control, two separate calls of DISTRFITCENSORED are performed to get two
Weibull distributions.
Implementation Steps
Assume that:
Step 1
Get the Weibull distribution and statistics from the linoleic acid treatment data:
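A minimal sketch of such a call is shown below, following the AFLLANG wrapper pattern of the Monte Carlo
scenario above. The two-column interval layout for censored observations (right boundary NULL for a '+'
entry) and the DISTRIBUTIONNAME control value are assumptions to be verified against the censored
Distribution Fitting reference; the estimation and statistics table types are reused from the Monte Carlo
scenario.
---------------------------------------
-- censored distribution fit (sketch) -
---------------------------------------
DROP TYPE PAL_DISTRFITCENSORED_DATA_T;
CREATE TYPE PAL_DISTRFITCENSORED_DATA_T AS TABLE(LEFTVALUE DOUBLE, RIGHTVALUE DOUBLE);
DROP TABLE PAL_FITCENSORED_PDATA_TBL;
CREATE COLUMN TABLE PAL_FITCENSORED_PDATA_TBL("POSITION" INT, "SCHEMA_NAME"
NVARCHAR(256), "TYPE_NAME" NVARCHAR(256), "PARAMETER_TYPE" VARCHAR(7));
INSERT INTO PAL_FITCENSORED_PDATA_TBL VALUES (1, 'DM_PAL','PAL_DISTRFITCENSORED_DATA_T', 'IN');
INSERT INTO PAL_FITCENSORED_PDATA_TBL VALUES (2, 'DM_PAL','PAL_CONTROL_T', 'IN');
INSERT INTO PAL_FITCENSORED_PDATA_TBL VALUES (3, 'DM_PAL','PAL_DISTRFIT_ESTIMATION_T', 'OUT');
INSERT INTO PAL_FITCENSORED_PDATA_TBL VALUES (4, 'DM_PAL','PAL_DISTRFIT_STATISTICS_T', 'OUT');
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_DROP('DM_PAL','PAL_DISTRFITCENSORED');
CALL SYS.AFLLANG_WRAPPER_PROCEDURE_CREATE('AFLPAL', 'DISTRFITCENSORED',
'DM_PAL','PAL_DISTRFITCENSORED', PAL_FITCENSORED_PDATA_TBL);
DROP TABLE PAL_FITCENSORED_DATA_TBL;
CREATE COLUMN TABLE PAL_FITCENSORED_DATA_TBL LIKE PAL_DISTRFITCENSORED_DATA_T;
-- death times as exact observations (LEFTVALUE = RIGHTVALUE); censored '+'
-- times as right-open intervals (RIGHTVALUE = NULL)
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (1, NULL);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (5, NULL);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (6, 6);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (6, 6);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (9, NULL);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (10, 10);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (10, 10);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (10, NULL);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (12, 12);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (12, 12);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (12, 12);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (12, 12);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (12, NULL);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (13, NULL);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (15, NULL);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (16, NULL);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (20, NULL);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (24, 24);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (24, NULL);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (27, NULL);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (32, 32);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (34, NULL);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (36, NULL);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (36, NULL);
INSERT INTO PAL_FITCENSORED_DATA_TBL VALUES (44, NULL);
DROP TABLE PAL_CONTROL_TBL;
CREATE COLUMN TABLE PAL_CONTROL_TBL LIKE PAL_CONTROL_T;
INSERT INTO PAL_CONTROL_TBL VALUES ('DISTRIBUTIONNAME', NULL, NULL, 'WEIBULL');
DROP TABLE PAL_FITCENSORED_ESTIMATION_TBL;
CREATE COLUMN TABLE PAL_FITCENSORED_ESTIMATION_TBL LIKE PAL_DISTRFIT_ESTIMATION_T;
DROP TABLE PAL_FITCENSORED_STATISTICS_TBL;
CREATE COLUMN TABLE PAL_FITCENSORED_STATISTICS_TBL LIKE PAL_DISTRFIT_STATISTICS_T;
CALL DM_PAL.PAL_DISTRFITCENSORED(PAL_FITCENSORED_DATA_TBL, PAL_CONTROL_TBL,
PAL_FITCENSORED_ESTIMATION_TBL, PAL_FITCENSORED_STATISTICS_TBL) WITH OVERVIEW;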
Step 2
Get the Weibull distribution and statistics from the control treatment data:
The results show that the shape values for both treatments are greater than 1, indicating the failure rate
increases over time.
Step 3
Get the CDF (cumulative distribution function) of the Weibull distribution for the linoleic acid treatment data:
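A minimal sketch, reusing the PAL_DISTRPROB_PROC wrapper and the PAL_CONTROL_TBL created in the
Monte Carlo scenario above. The SHAPE and SCALE parameter names and the fitted values shown are
illustrative assumptions; in practice the values come from the estimation output of Step 1.
-- CDF of the fitted Weibull for the linoleic acid group (values illustrative)
DROP TABLE PAL_DISTRPROB_DATA_TBL;
CREATE COLUMN TABLE PAL_DISTRPROB_DATA_TBL LIKE PAL_DISTRPROB_DATA_T;
INSERT INTO PAL_DISTRPROB_DATA_TBL VALUES (12);  -- P(survival time <= 12 months)
INSERT INTO PAL_DISTRPROB_DATA_TBL VALUES (24);
INSERT INTO PAL_DISTRPROB_DATA_TBL VALUES (36);
DROP TABLE PAL_DISTRPROB_DISTRPARAM_TBL;
CREATE COLUMN TABLE PAL_DISTRPROB_DISTRPARAM_TBL LIKE PAL_DISTRPROB_DISTRPARAM_T;
INSERT INTO PAL_DISTRPROB_DISTRPARAM_TBL VALUES ('DISTRIBUTIONNAME', 'Weibull');
INSERT INTO PAL_DISTRPROB_DISTRPARAM_TBL VALUES ('SHAPE', '1.3');  -- from Step 1
INSERT INTO PAL_DISTRPROB_DISTRPARAM_TBL VALUES ('SCALE', '40');   -- from Step 1
DROP TABLE PAL_DISTRPROB_RESULT_TBL;
CREATE COLUMN TABLE PAL_DISTRPROB_RESULT_TBL LIKE PAL_DISTRPROB_RESULT_T;
CALL DM_PAL.PAL_DISTRPROB_PROC(PAL_DISTRPROB_DATA_TBL,
PAL_DISTRPROB_DISTRPARAM_TBL, PAL_CONTROL_TBL, PAL_DISTRPROB_RESULT_TBL) WITH
OVERVIEW;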
Step 4
Get the CDF (cumulative distribution function) of the Weibull distribution for the control treatment data:
Related Information
● Create an SQL view for the input table if the table structure does not meet what is specified in this guide.
● Avoid null values in the input data. You can replace the null values with the default values via an SQL
statement (SQL view or SQL update) because PAL functions cannot infer the default values.
● Create the parameter table as a local temporary table to avoid table name conflicts.
● If you do not use PMML export, you do not need to create a PMML output table to store the result. Just set
the PMML_EXPORT parameter to 0 and pass ? or null to the function (see the sketch after this list).
● When using the KMEANS function, different INIT_TYPE and NORMALIZATION settings may produce
different results. You may need to try a few combinations of these two parameters to get the best result.
● When using the APRIORIRULE function, in some circumstances the rules set can be huge. To avoid an
extra long runtime, you can set the MAXITEMLENGTH parameter to a smaller number, such as 2 or 3.
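For example, using the four-column parameter table layout shown throughout this guide (the commented
KMEANS call and its table names are illustrative only; the full signature is documented in the K-Means
section):
-- disable PMML export in the parameter table
INSERT INTO PAL_CONTROL_TBL VALUES ('PMML_EXPORT', 0, NULL, NULL);
-- then pass NULL (or ?) in place of the PMML output table, e.g.:
-- CALL DM_PAL.PAL_KMEANS(PAL_KMEANS_DATA_TBL, PAL_CONTROL_TBL,
--                        PAL_KMEANS_RESULTS_TBL, NULL) WITH OVERVIEW;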
Important Disclaimer for Features in SAP HANA Platform, Options and Capabilities
SAP HANA server software and tools can be used for several SAP HANA platform and options scenarios as well
as the respective capabilities used in these scenarios. The availability of these is based on the available SAP
HANA licenses and the SAP HANA landscape, including the type and version of the back-end systems the SAP
HANA administration and development tools are connected to. There are several types of licenses available for
SAP HANA. Depending on your SAP HANA installation license type, some of the features and tools described in
the SAP HANA platform documentation may only be available in the SAP HANA options and capabilities, which
may be released independently of an SAP HANA Platform Support Package Stack (SPS). Although various
features included in SAP HANA options and capabilities are cited in the SAP HANA platform documentation,
each SAP HANA edition governs the options and capabilities available. Based on this, customers do not
necessarily have the right to use features included in SAP HANA options and capabilities. For customers to
whom these license restrictions apply, the use of features included in SAP HANA options and capabilities in a
production system requires purchasing the corresponding software license(s) from SAP. The documentation
for the SAP HANA options is available in SAP Help Portal. If you have additional questions about what your
particular license provides, or wish to discuss licensing features available in SAP HANA options, please contact
your SAP account team representative.
Coding Samples
Any software coding and/or code lines / strings ("Code") included in this documentation are only examples and are not intended to be used in a productive system
environment. The Code is only intended to better explain and visualize the syntax and phrasing rules of certain coding. SAP does not warrant the correctness and
completeness of the Code given herein, and SAP shall not be liable for errors or damages caused by the usage of the Code, unless damages were caused by SAP
intentionally or by SAP's gross negligence.
Gender-Neutral Language
As far as possible, SAP documentation is gender neutral. Depending on the context, the reader is addressed directly with "you", or a gender-neutral noun (such as
"sales person" or "working days") is used. If when referring to members of both sexes, however, the third-person singular cannot be avoided or a gender-neutral noun
does not exist, SAP reserves the right to use the masculine form of the noun and pronoun. This is to ensure that the documentation remains comprehensible.
Internet Hyperlinks
The SAP documentation may contain hyperlinks to the Internet. These hyperlinks are intended to serve as a hint about where to find related information. SAP does not
warrant the availability and correctness of this related information or the ability of this information to serve a particular purpose. SAP shall not be liable for any
damages caused by the use of related information unless damages have been caused by SAP's gross negligence or willful misconduct. All links are categorized for
transparency (see: https://help.sap.com/viewer/disclaimer).