UNIT 2 Data Preprocessing
Outline
Introduction
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Why Data Preprocessing?
Data Understanding: Relevance
Data Understanding: Quantity
Number of targets
Rule of thumb: >100 for each class
if very unbalanced, use stratified sampling
Multi-Dimensional Measure of Data Quality
Major Tasks in Data Preprocessing
Data cleaning
Fill in missing values, smooth noisy data, identify or remove outliers, and
resolve inconsistencies
Data integration
Integration of multiple databases, data cubes, or files
Data transformation
Normalization and aggregation
Data reduction
Obtains a reduced representation that is smaller in volume but produces the same or
similar analytical results
Data discretization
Part of data reduction but with particular importance, especially for
numerical data
Forms of data preprocessing
Outline
Introduction
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Data Cleaning
Data Cleaning: Acquisition
Data Cleaning: Example
Clean data (one record, comma-delimited):
0000000001,199706,1979.833,8014,5722,,#000310,…
followed by a long run of mostly-zero coded fields ending …,0300,0300.00
Data Cleaning: Metadata
Field types:
binary, nominal (categorical), ordinal, numeric, …
For nominal fields: tables translating codes to full
descriptions
Field role:
input : inputs for modeling
target : output
id/auxiliary : keep, but not use for modeling
ignore : don’t use for modeling
weight : instance weight
…
Field descriptions
Data Cleaning: Reformatting
Missing Data
How to Handle Missing Data?
Ignore the tuple: usually done when the class label is missing (assuming the
task is classification); not effective when the percentage of missing values
per attribute varies considerably
Fill in the missing value manually: tedious + infeasible?
Use a global constant to fill in the missing value: e.g., “unknown”, a
new class?!
Imputation: use the attribute mean to fill in the missing value, or use the
attribute mean for all samples belonging to the same class: smarter
(see the sketch below)
Use the most probable value to fill in the missing value: inference-based,
such as a Bayesian formula or decision tree
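A minimal sketch of the constant, mean, and class-conditional mean strategies, assuming pandas and a hypothetical data set with an income attribute:

```python
import pandas as pd

# Hypothetical data set with missing values in the 'income' attribute
df = pd.DataFrame({
    "class":  ["A", "A", "B", "B", "B"],
    "income": [50.0, None, 30.0, None, 40.0],
})

# Global constant: fill with a sentinel value (here -1 stands in for "unknown")
global_fill = df["income"].fillna(-1)

# Attribute mean over all samples
mean_fill = df["income"].fillna(df["income"].mean())

# Attribute mean per class: the "smarter" imputation
class_mean_fill = df.groupby("class")["income"].transform(
    lambda s: s.fillna(s.mean())
)
print(class_mean_fill.tolist())  # [50.0, 50.0, 30.0, 35.0, 40.0]
```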
Data Cleaning:
Unified Date Format
We want to transform all dates to the same format internally
Some systems accept dates in many formats
e.g. “Sep 24, 2003”, 9/24/03, 24.09.03, etc.
Unified Date Format Options
Problem:
values are non-obvious
don’t help intuition and knowledge discovery
KSP Date Format
KSP Date = YYYY + (days_starting_Jan_1 − 0.5) / (365 + 1_if_leap_year)
Preserves intervals (almost)
The year and quarter are obvious
Sep 24, 2003 is 2003 + (267 − 0.5)/365 = 2003.7301 (rounded to 4 digits)
Consistent with date starting at noon
Can be extended to include time
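A small sketch of this conversion using Python's standard library (the function name ksp_date is ours, not from the source):

```python
from datetime import date
import calendar

def ksp_date(d: date) -> float:
    """KSP Date = YYYY + (days_starting_Jan_1 - 0.5) / days_in_year."""
    day_of_year = d.timetuple().tm_yday          # Sep 24, 2003 -> 267
    days_in_year = 366 if calendar.isleap(d.year) else 365
    return round(d.year + (day_of_year - 0.5) / days_in_year, 4)

print(ksp_date(date(2003, 9, 24)))  # 2003.7301
```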
Y2K issues: 2 digit Year
Conversion: Nominal to Numeric
Conversion: Binary to Numeric
Binary fields
E.g. Gender=M, F
Convert to Field_0_1 with 0, 1 values
e.g. Gender = M → Gender_0_1 = 0
Gender = F → Gender_0_1 = 1
Conversion: Ordered to Numeric
B+ → 3.3
B → 3.0
Conversion: Nominal, Few Values
Multi-valued, unordered attributes with a small number of values (rule of thumb: < 20)
e.g. Color=Red, Orange, Yellow, …, Violet
for each value v create a binary “flag” variable C_v, which is 1
if Color = v and 0 otherwise (see the sketch below)
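A brief sketch of creating the flag variables with pandas (the Color values are the ones from the example):

```python
import pandas as pd

colors = pd.Series(["Red", "Orange", "Yellow", "Red", "Violet"])

# One binary flag C_v per value v: 1 if Color == v, else 0
flags = pd.get_dummies(colors, prefix="C").astype(int)
print(flags)
#    C_Orange  C_Red  C_Violet  C_Yellow
# 0         0      1         0         0
# ...
```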
Conversion: Nominal, Many Values
Examples:
US State Code (50 values)
Profession Code (7,000 values, but only a few frequent)
Noisy Data
Noise: random error or variance in a measured variable
Incorrect attribute values may be due to
faulty data collection instruments
data entry problems
data transmission problems
technology limitations
inconsistency in naming conventions
Other data problems which require data cleaning
duplicate records
incomplete data
inconsistent data
How to Handle Noisy Data?
Binning method:
first sort data and partition into (equi-depth) bins
then one can smooth by bin means, smooth by bin median, smooth
by bin boundaries, etc.
Clustering
detect and remove outliers
Combined computer and human inspection
detect suspicious values and check by human
Regression
smooth by fitting the data into regression functions
Simple Discretization Methods: Binning
Binning Methods for Data Smoothing
* Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
* Partition into (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
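A short sketch reproducing this example with equi-depth bins (plain Python, no extra libraries):

```python
prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]  # already sorted

depth = 4
bins = [prices[i:i + depth] for i in range(0, len(prices), depth)]

# Smoothing by bin means (rounded, as in the example)
by_means = [[round(sum(b) / len(b))] * len(b) for b in bins]

# Smoothing by bin boundaries: replace each value by the closer boundary
by_bounds = [
    [b[0] if v - b[0] <= b[-1] - v else b[-1] for v in b]
    for b in bins
]
print(by_means)   # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(by_bounds)  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```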
Cluster Analysis
Regression
[Figure: data smoothed by fitting the regression line y = x + 1; an observed point (X1, Y1) maps to (X1, Y1′) on the line]
Outline
Introduction
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Data Integration
Data integration:
combines data from multiple sources into a coherent store
Schema integration
integrate metadata from different sources
Entity identification problem: identify real world entities from
multiple data sources, e.g., A.cust-id ≡ B.cust-#
Detecting and resolving data value conflicts
for the same real world entity, attribute values from different sources
are different
possible reasons: different representations, different scales, e.g., metric
vs. British units
Handling Redundant Data in
Data Integration
Redundant data occur often when integrating multiple databases
The same attribute may have different names in different databases
One attribute may be a “derived” attribute in another table, e.g.,
annual revenue
Redundant data may be detected by correlation analysis (see the
sketch below)
Careful integration of the data from multiple sources may
help reduce/avoid redundancies and inconsistencies and
improve mining speed and quality
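A minimal sketch of flagging a likely "derived" attribute via its correlation coefficient, using NumPy and synthetic data (the revenue attributes and the 0.95 threshold are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
monthly = rng.uniform(1_000, 5_000, size=100)    # hypothetical monthly revenue
annual = 12 * monthly + rng.normal(0, 50, 100)   # "derived" annual revenue

# A correlation coefficient near +/-1 flags a likely redundant attribute
r = np.corrcoef(monthly, annual)[0, 1]
if abs(r) > 0.95:
    print(f"attributes look redundant (r = {r:.3f})")
```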
Data Transformation
Data Transformation:
Normalization
min-max normalization
v' = ((v − min_A) / (max_A − min_A)) × (new_max_A − new_min_A) + new_min_A
z-score normalization
v' = (v − mean_A) / stand_dev_A
normalization by decimal scaling
v' = v / 10^j, where j is the smallest integer such that Max(|v'|) < 1
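A short sketch of all three normalizations in NumPy (the sample values and the [0, 1] target range are illustrative):

```python
import numpy as np

v = np.array([200.0, 300.0, 400.0, 600.0, 1000.0])

# Min-max normalization to [new_min, new_max] = [0, 1]
new_min, new_max = 0.0, 1.0
minmax = (v - v.min()) / (v.max() - v.min()) * (new_max - new_min) + new_min

# z-score normalization
zscore = (v - v.mean()) / v.std()

# Decimal scaling: divide by 10**j, smallest j with max(|v'|) < 1
j = int(np.ceil(np.log10(np.abs(v).max() + 1)))
decimal = v / 10**j

print(minmax)   # [0.    0.125 0.25  0.5   1.   ]
print(decimal)  # [0.02 0.03 0.04 0.06 0.1 ]
```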
Unbalanced Target Distribution
Handling Unbalanced Data
Building Balanced Train Sets
[Diagram: targets (Y) and a sample of non-targets (N) drawn from the raw data form a balanced set, which is split into a balanced train set and a balanced test set; the remaining raw data is held out]
Learning with Unbalanced Data
Outline
Introduction
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Data Reduction Strategies
A warehouse may store terabytes of data: complex data
analysis/mining may take a very long time to run on the
complete data set
Data reduction
Obtains a reduced representation of the data set that is much smaller
in volume but yet produces the same (or almost the same) analytical
results
Data reduction strategies
Data cube aggregation
Dimensionality reduction
Numerosity reduction
Discretization and concept hierarchy generation
Data Cube Aggregation
The lowest level of a data cube
the aggregated data for an individual entity of interest
e.g., a customer in a phone calling data warehouse.
Multiple levels of aggregation in data cubes
Further reduce the size of data to deal with
Reference appropriate levels
Use the smallest representation which is enough to solve the task
Queries regarding aggregated information should be answered
using the data cube when possible (see the sketch below)
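A minimal sketch of this roll-up with pandas, aggregating hypothetical quarterly sales to the yearly level:

```python
import pandas as pd

# Hypothetical sales facts at the lowest (quarterly) level
sales = pd.DataFrame({
    "year":    [2022, 2022, 2022, 2022, 2023, 2023, 2023, 2023],
    "quarter": [1, 2, 3, 4, 1, 2, 3, 4],
    "amount":  [224, 408, 350, 586, 262, 380, 341, 512],
})

# Aggregate to the yearly level: a smaller representation that still
# answers queries about annual totals
annual = sales.groupby("year", as_index=False)["amount"].sum()
print(annual)
#    year  amount
# 0  2022    1568
# 1  2023    1495
```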
Dimensionality Reduction
Example of Decision Tree Induction
[Figure: decision tree induced on the initial attribute set; only attributes tested in the tree, such as A1 and A6, are kept in the reduced attribute set]
Data Compression
String compression
There are extensive theories and well-tuned algorithms
Typically lossless
But only limited manipulation is possible without expansion
Audio/video compression
Typically lossy compression, with progressive refinement
Sometimes small fragments of signal can be reconstructed without
reconstructing the whole
Time sequences are not audio: typically short and varying slowly with time
Data Compression
[Figure: lossless compression maps the original data to a compressed form and back exactly; lossy compression yields only an approximation of the original data]
Wavelet Transforms
[Figure: Haar-2 and Daubechies-4 wavelet functions]
Discrete wavelet transform (DWT): linear signal processing
Compressed approximation: store only a small fraction of the
strongest of the wavelet coefficients
Similar to discrete Fourier transform (DFT), but better lossy
compression, localized in space
Method:
Length, L, must be an integer power of 2 (pad with 0s when necessary)
Each transform has 2 functions: smoothing, difference
Applies to pairs of data, resulting in two sets of data of length L/2
Applies the two functions recursively until the desired length is reached (see the sketch below)
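A compact sketch of this recipe for the Haar transform in NumPy (pairwise mean as the smoothing function, half-difference as the difference function; the function name is ours):

```python
import numpy as np

def haar_dwt(x):
    """Haar transform: pairwise smoothing (mean) and difference,
    applied recursively to the smooth part."""
    x = np.asarray(x, dtype=float)
    n = 1 << (len(x) - 1).bit_length()        # next power of 2
    x = np.pad(x, (0, n - len(x)))            # pad with 0s when necessary
    coeffs = []
    while len(x) > 1:
        smooth = (x[0::2] + x[1::2]) / 2      # smoothing function
        detail = (x[0::2] - x[1::2]) / 2      # difference function
        coeffs.append(detail)                 # details of length L/2
        x = smooth                            # recurse on the smooth half
    return x[0], coeffs[::-1]                 # overall average + details

avg, details = haar_dwt([2, 2, 0, 2, 3, 5, 4, 4])
print(avg)  # 2.75
```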
Principal Component Analysis
Given N data vectors in k dimensions, find c ≤ k
orthogonal vectors that can best be used to represent the data
The original data set is reduced to one consisting of N data vectors
on c principal components (reduced dimensions)
Each data vector is a linear combination of the c principal
component vectors
Works for numeric data only
Used when the number of dimensions is large
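A minimal PCA sketch via SVD in NumPy, illustrating the reduction from k to c dimensions (the function name and data are illustrative):

```python
import numpy as np

def pca_reduce(X, c):
    """Project N x k data onto the c strongest principal components."""
    Xc = X - X.mean(axis=0)                  # center each dimension
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:c].T                     # N x c reduced representation

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))                # hypothetical numeric data
print(pca_reduce(X, 2).shape)                # (100, 2)
```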
Principal Component Analysis
[Figure: 2-D data in original axes X1, X2; principal components Y1 (direction of greatest variance) and Y2 define the new axes]
Numerosity Reduction
Parametric methods
Assume the data fits some model, estimate model parameters,
store only the parameters, and discard the data (except possible
outliers)
Log-linear models: obtain the value at a point in m-D space as a
product over appropriate marginal subspaces
Non-parametric methods
Do not assume models
Major families: histograms, clustering, sampling
Regression and Log-Linear Models
Linear regression: Data are modeled to fit a straight line
Often uses the least-squares method to fit the line
Regression Analysis and Log-Linear Models
Linear regression: Y = α + β X
Two parameters, α and β, specify the line; they are estimated from
the data at hand by applying the least-squares criterion to the known
values (X1, Y1), (X2, Y2), … (see the sketch below)
Multiple regression: Y = b0 + b1 X1 + b2 X2.
Many nonlinear functions can be transformed into the above.
Log-linear models:
The multi-way table of joint probabilities is approximated by a
product of lower-order tables.
Probability: p(a, b, c, d) = α_ab β_ac χ_ad δ_bcd
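A short sketch of the least-squares estimates for α and β in NumPy (the sample points are synthetic, chosen to lie near Y = 2 + 3X):

```python
import numpy as np

# Hypothetical (X, Y) observations roughly following Y = 2 + 3X
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([5.1, 7.9, 11.2, 13.8, 17.1])

# Least-squares estimates of beta (slope) and alpha (intercept)
beta = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean())**2).sum()
alpha = y.mean() - beta * x.mean()
print(round(alpha, 2), round(beta, 2))  # about 2.0 and 3.0
```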
Histograms
A popular data reduction technique
Divide data into buckets and store the average (or sum) for each bucket
Can be constructed optimally in one dimension using dynamic programming
Related to quantization problems (see the sketch below)
[Figure: equi-width histogram of prices between 10,000 and 90,000; bucket counts range from 0 to 40]
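A brief sketch of an equi-width histogram as a reduced representation, storing only bucket edges, counts, and sums (synthetic prices and 5 buckets are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
prices = rng.uniform(0, 100_000, size=1_000)   # hypothetical price data

# Equi-width histogram: keep only bucket edges with per-bucket counts/sums
counts, edges = np.histogram(prices, bins=5, range=(0, 100_000))
sums, _ = np.histogram(prices, bins=5, range=(0, 100_000), weights=prices)

for lo, hi, n, s in zip(edges[:-1], edges[1:], counts, sums):
    print(f"[{lo:>8.0f}, {hi:>8.0f}): count={n}, avg={s / n:,.0f}")
```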
Clustering
Partition data set into clusters, and one can store cluster
representation only
Can be very effective if data is clustered but not if data is
“smeared”
Can have hierarchical clustering and be stored in multi-dimensional index tree structures
There are many choices of clustering definitions and
clustering algorithms, further detailed in Chapter 8
Sampling
Allow a mining algorithm to run in complexity that is
potentially sub-linear to the size of the data
Choose a representative subset of the data
Simple random sampling may have very poor performance in the
presence of skew
Develop adaptive sampling methods
Stratified sampling:
Approximate the percentage of each class (or subpopulation of
interest) in the overall database
Used in conjunction with skewed data (see the sketch below)
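A minimal sketch of stratified sampling with pandas, drawing 10% from each class so the skewed class percentages are preserved (the column names are hypothetical):

```python
import pandas as pd

# Hypothetical skewed data: 90% class "N", 10% class "Y"
df = pd.DataFrame({"cls": ["N"] * 900 + ["Y"] * 100, "x": range(1000)})

# Stratified 10% sample: each class keeps its overall percentage
sample = df.groupby("cls", group_keys=False).sample(frac=0.10, random_state=0)
print(sample["cls"].value_counts())  # N: 90, Y: 10
```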
Sampling
[Figure: raw data reduced by SRSWOR (simple random sampling without replacement) and by SRSWR (simple random sampling with replacement)]
Hierarchical Reduction
Use multi-resolution structure with different degrees of
reduction
Hierarchical clustering is often performed but tends to define
partitions of data sets rather than “clusters”
Parametric methods are usually not amenable to hierarchical
representation
Hierarchical aggregation
An index tree hierarchically divides a data set into partitions by value
range of some attributes
Each partition can be considered as a bucket
Thus an index tree with aggregates stored at each node is a
hierarchical histogram
Outline
Introduction
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Discretization
Discretization and Concept Hierarchy
Discretization
reduce the number of values for a given continuous attribute by
dividing the range of the attribute into intervals. Interval labels
can then be used to replace actual data values.
Concept hierarchies
reduce the data by collecting and replacing low-level concepts
(such as numeric values for the attribute age) by higher-level
concepts (such as young, middle-aged, or senior); see the sketch below
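A small sketch of this replacement for the age example, using pandas interval labels (the cut points 30 and 55 are illustrative assumptions):

```python
import pandas as pd

ages = pd.Series([23, 35, 47, 52, 61, 19, 70])

# Replace numeric values by interval (concept) labels
labels = pd.cut(ages, bins=[0, 30, 55, 120],
                labels=["young", "middle-aged", "senior"])
print(labels.tolist())
# ['young', 'middle-aged', 'middle-aged', 'middle-aged',
#  'senior', 'young', 'senior']
```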
Discretization and concept hierarchy
generation for numeric data
Binning (see sections before)
Entropy-based discretization
Entropy-Based Discretization
Segmentation by natural partitioning
3-4-5 rule can be used to segment numeric data into
relatively uniform, “natural” intervals.
* If an interval covers 3, 6, 7 or 9 distinct values at the most
significant digit, partition the range into 3 equi-width
intervals
* If it covers 2, 4, or 8 distinct values at the most significant
digit, partition the range into 4 intervals
* If it covers 1, 5, or 10 distinct values at the most significant
digit, partition the range into 5 intervals (a sketch follows)
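A rough sketch of one step of the 3-4-5 rule (the function name is ours; the input low/high values follow the classic profit example with 5th/95th percentiles of -$159,876 and $1,838,761, and are purely illustrative):

```python
import math

def three_four_five(low, high):
    """One step of the 3-4-5 rule: partition [low, high] into
    3, 4, or 5 'natural' equi-width intervals."""
    msd = 10 ** int(math.floor(math.log10(high - low)))  # most-significant-digit unit
    lo = math.floor(low / msd) * msd                     # round range outward
    hi = math.ceil(high / msd) * msd
    distinct = round((hi - lo) / msd)                    # distinct values at msd
    if distinct in (3, 6, 7, 9):
        parts = 3
    elif distinct in (2, 4, 8):
        parts = 4
    else:                                                # 1, 5, or 10
        parts = 5
    width = (hi - lo) / parts
    return [(lo + i * width, lo + (i + 1) * width) for i in range(parts)]

print(three_four_five(-159_876, 1_838_761))
# msd = 1,000,000 -> rounded range (-1,000,000, 2,000,000), 3 intervals
```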
Example of 3-4-5 rule
[Figure: worked example; in Step 3 the rounded range (-$1,000,000 to $2,000,000) is partitioned into three equi-width intervals, and in Step 4 the partitions are adjusted to the actual data range (-$400,000 to $5,000,000)]
Concept hierarchy generation for
categorical data
Specification of a partial ordering of attributes explicitly at
the schema level by users or experts
Specification of a portion of a hierarchy by explicit data
grouping
Specification of a set of attributes, but not of their partial
ordering
Specification of only a partial set of attributes
Specification of a set of attributes
A concept hierarchy can be automatically generated based on
the number of distinct values per attribute in the given
attribute set: the attribute with the most distinct values
is placed at the lowest level of the hierarchy (see the sketch below).
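A minimal sketch of this heuristic with pandas, counting distinct values in a hypothetical location attribute set:

```python
import pandas as pd

# Hypothetical location attributes
df = pd.DataFrame({
    "country": ["US", "US", "US", "US", "CA"],
    "state":   ["NY", "NY", "CA", "TX", "ON"],
    "city":    ["NYC", "Buffalo", "LA", "Austin", "Toronto"],
})

# Most distinct values -> lowest level of the hierarchy
order = df.nunique().sort_values(ascending=False)
print(list(order.index))  # ['city', 'state', 'country'] (low to high level)
```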
Outline
Introduction
Data cleaning
Data integration and transformation
Data reduction
Discretization and concept hierarchy generation
Summary
Summary