Measure
Introduction
An old saying goes: "One accurate measure is worth a thousand expert opinions."
The Measure phase allows you to understand the present condition of the process before you attempt to identify improvements.
Objective
Deliverables from the Measure Phase:
- Selecting & Setting Performance Standards for Y
- Data to be collected (Xs & Ys)
- Operational Definitions
- Data Collection Plan
- Measurement System Analysis
- Baseline data (Current Performance)
MSME Development Institute Chennai
Use of FMEA
Why Use FMEA
- To identify and prioritize parts of the product or process that need further improvement
- To ensure the quality, reliability and safety of products and services
- To document and track actions taken to reduce risk
- To develop action plans to avoid risks/defects

When to Use FMEA
- When a service, product or process is created, improved, or redesigned
- When existing products, services, or processes are used in new ways or in new environments
- In the Measure phase of DMAIC, to identify potential failures in current process steps
- In the Improve phase of DMAIC, to expose potential problems in the solution
FMEA Terminology
Failure Mode: the manner in which a process step can fail.
Cause: the reason why the Failure Mode occurs.
Effect: the impact on the customer if the Failure Mode is not prevented or corrected.
The Failure Mode can be thought of as the in-process defect, whereas the Effect is the impact of the defect on the customer; the customer can be downstream or the ultimate customer.
FMEA Terminology
Severity (SEV):
How significant is the impact of the Effect to the customer
(internal or external)?
Occurrence (OCC):
How likely is the Cause of the Failure Mode to occur?
Detection (DET):
How likely will the current system detect the Cause or Failure
Mode if it occurs?
Risk Priority Number (RPN):
RPN = Severity x Occurrence x Detection
Process : Manufacturing /
Assembling Effect
Hazardous with
9
warning
8Very High
7High
6Moderate
5Low
4Very Low
3Minor
2Very Minor
1None
Occurrence
10
9
8
7
Likelihood
Either
Or
Cpk
Very High:
1 in 10 or less
<0.55
Persistent Failures
1 in 20 - 50
0.55 to 0.78
1 in 50 - 100
0.78 to 0.86
1 in 100 - 200
0.86 to 0.94
1 in 200 - 500
0.94 to 1.00
1 in 500 - 1000
1.00 to 1.10
1 in 2,000 10,000
1.20 to 1.30
1 in 10,000 100,000
1.30 to 1.67
1 in 100,000 or
more
>=1.67
High:
Frequent Failures
6
Moderate:
5
Occasional
Failures
Low: Relatively
Few Failures
2
1
Remote : Failure is
unlikely
Detection
10
Controls may
6
detect
Controls may
detect
Controls have a
4 good chance to
detect
Controls have a
3 good chance to
detect
2
Controls almost
certain to detect
Controls almost
1 certain to detect
10
Building an FMEA
Steps for Constructing an FMEA
1. Identify all major process steps
2. Within the team, brainstorm and group possible failure modes for each step
3. List one or more potential effects for each failure mode
4. Give a severity rating for each effect
5. Give an occurrence rating for each failure cause
6. Give a detection rating for each failure mode
7. Compute the risk priority number (RPN) for each effect
8. Use the RPNs to prioritize high-risk failure modes
9. Plan to reduce or eliminate the risk associated with high-priority failure modes
10. Execute risk mitigation plans
11. Recalculate RPN
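Steps 7 and 8 can be sketched directly in code; the failure modes and ratings below are hypothetical worksheet entries, not data from any real FMEA.

```python
# Hypothetical FMEA worksheet: (failure mode, SEV, OCC, DET),
# each rated on the 1-10 scales described above.
failure_modes = [
    ("Wrong part picked", 7, 5, 4),
    ("Missing weld",      9, 2, 3),
    ("Label misprint",    4, 6, 2),
]

# Step 7: RPN = Severity x Occurrence x Detection (range 1-1000)
scored = [(name, sev * occ * det) for name, sev, occ, det in failure_modes]

# Step 8: prioritize highest-risk failure modes first
for name, rpn in sorted(scored, key=lambda t: t[1], reverse=True):
    print(f"{name}: RPN = {rpn}")
```

After mitigation (steps 9-10), the ratings are re-assessed and the RPNs recomputed the same way (step 11).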
Performance Standards for the Ys are recorded with: Data type, Unit of Measure, Operational Definition, LSL, USL, Target.
Identifying the Xs
- Process Maps
- Cause & Effect Diagram
- Cause & Effect Matrices

Cause & Effect (Fishbone) Diagram: the six Ms feed into the Effect Y (CTQ):
Man, Machine, Material, Method, Measurement, Mother Nature
[Figure: Pareto analysis of CTQs (Lead time, Cost, Repeats) stratified by factor — Location 177, Customer 87, Application 75, Technique 69, …]
Data types
Discrete data: Gender, Pass/Fail, Invoice errors, Defectives
Continuous data: Cycle time, Length, Weight, Voltage
Sampling
Why Sampling:
- Reduce cost
- Reduce time & effort
- Reduce loss (if destructive testing)
Sampling
Important aspects of Sampling:
- Collect new data (avoid historical data)
- Collect enough data
- The sample should represent the entire population
- Use stratification effectively in sampling
- Sampling should be random and not predictable
- Sampling should be intelligent, not convenient
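A minimal sketch of stratified random sampling, combining the random and stratification points above. The population of service-call records, the `region` stratum, and the 10% sampling fraction are all hypothetical.

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

# Hypothetical population: 1,000 service calls tagged by region (the stratum)
population = [{"id": i, "region": random.choice(["North", "South", "East", "West"])}
              for i in range(1000)]

def stratified_sample(records, stratum_key, fraction):
    """Draw randomly *within* each stratum so every group is represented,
    rather than grabbing whichever records are convenient."""
    groups = {}
    for r in records:
        groups.setdefault(r[stratum_key], []).append(r)
    sample = []
    for members in groups.values():
        k = max(1, round(len(members) * fraction))
        sample.extend(random.sample(members, k))  # random, not predictable
    return sample

sample = stratified_sample(population, "region", 0.10)
print(len(sample))  # roughly 100 records, spread across all four regions
```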
Data Analysis
Numerical Analysis: to understand location & spread
Graphical Analysis: to understand location, spread & shape
Spread

             Mean   Standard Deviation   Variance
Population   μ      σ                    σ²
Sample       x̄      s                    s²
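These location and spread statistics can be computed with Python's standard library; the cycle-time values below are made up for illustration.

```python
import statistics

# Hypothetical sample of cycle times (minutes)
data = [12.1, 11.8, 12.4, 12.0, 11.9, 12.6, 12.2]

x_bar = statistics.mean(data)      # sample mean (x-bar)
s = statistics.stdev(data)         # sample standard deviation s (n-1 divisor)
s2 = statistics.variance(data)     # sample variance s^2

# Population versions use the n divisor (sigma, sigma^2)
sigma = statistics.pstdev(data)
sigma2 = statistics.pvariance(data)

print(round(x_bar, 3), round(s, 3), round(s2, 3))
```

Note the sample statistics divide by n−1, so s is always slightly larger than the population σ computed from the same values.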
MSA
- Repeatability
- Reproducibility
- Accuracy (Bias)
- Stability
- Linearity
- Discrimination
Gage Repeatability
Gage Repeatability is the variation in measurements obtained when one operator uses the same gage to measure identical characteristics of the same part.

Example:
If one person repeatedly measures the temperature of boiling water with the same thermometer (gage), do we get the same reading each time? If not, we have a repeatability problem.
Gage Reproducibility
Gage Reproducibility is the variation in the average of measurements made by different operators using the same gage when measuring identical characteristics of the same part.

Example:
If two people measure the temperature using the same thermometer, do we get the same results? If not, we have a reproducibility problem.
Gage Accuracy
Gage Accuracy is the difference between the observed average of measurements and the true average. The true average is best determined by measuring with the most accurate device available or by comparing with a reference standard (e.g. NIST, NPL).

Example:
Given the standard temperature of boiling water, 100°C, does the gage show a reading of exactly 100°C? If not, we have an accuracy problem.
Gage Stability
Gage Stability refers to the difference in the average of at least two sets of measurements obtained with the same gage on the same parts taken at different times.

Example:
Does the thermometer show the same reading when used after a long period of time? If not, we have a stability problem.
Gage Linearity
Gage Linearity is the difference in the accuracy values through the expected operating range of the gage.

Example:
Is the thermometer as accurate at measuring the temperature of ice at 0°C as at measuring the temperature of boiling water at 100°C? If not, we have a linearity problem.
Gage Discrimination
To meet the requirements of the measurement system, the gage should be precise and accurate. Select a gage with:
- A least count one level lower than the required tolerance of the performance standard
- Variation in the gage less than the expected variation in the process

Example:
If we are measuring the length of a bar with a tolerance of 1 cm, the gage should have a least count of 1 mm.
MSA Tolerance

Gage R&R Contribution   Decision   Action
< 10%                   Good       Acceptable
10 - 30%                Marginal   Conditionally acceptable
> 30%                   Problem    Not acceptable
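The acceptance rule in the Gage R&R table can be sketched as a small function; the function name and the wording of the middle band are illustrative, not from any standard library.

```python
def gage_rr_decision(contribution_pct):
    """Classify a Gage R&R % contribution per the thresholds above."""
    if contribution_pct < 10:
        return "Acceptable"
    elif contribution_pct <= 30:
        return "Conditionally acceptable"
    return "Not acceptable"

print(gage_rr_decision(8))    # Acceptable
print(gage_rr_decision(22))   # Conditionally acceptable
print(gage_rr_decision(45))   # Not acceptable
```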
Operational Definition
Data Collection Plan
Data Collection Forms
Data Collection Process
Operational Definition
- Removes ambiguity among team members
- Ensures consistent data collection
Data Collection Plan (example): Y = Service Call time
- Operational Definition: Time difference between the Customer call and Call resolution
- Data Source / Location: CRM
- Sample Size: 200
- Who will collect the data: Customer Service Executive
- When will data be collected: 1st week of Jan, 2nd week of Feb, 3rd week of Mar
- How will data be collected: Call Analysis report (CRM)
- X data to be collected: Location, Customer Type, Technique, Application, Engineer, response time, cycle time, repeats
Identifying big Xs
- Histogram
- Control Chart
- Scatter diagrams
Sigma Level
Capability Indices
Cost of Poor Quality (CoPQ)
Yield
Sigma Level
DPMO = (Defects x 1,000,000) / (Units x Opportunities for error)
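The DPMO formula, plus the conventional conversion to a short-term sigma level (the 1.5σ shift is the usual Six Sigma convention). The audit numbers below are hypothetical.

```python
from statistics import NormalDist

def dpmo(defects, units, opportunities):
    """Defects per million opportunities, per the formula above."""
    return defects * 1_000_000 / (units * opportunities)

def sigma_level(dpmo_value):
    """Short-term process sigma: long-term Z plus the 1.5-sigma shift."""
    long_term_z = NormalDist().inv_cdf(1 - dpmo_value / 1_000_000)
    return long_term_z + 1.5

d = dpmo(defects=27, units=1000, opportunities=4)  # hypothetical audit data
print(round(d))                  # 6750 defects per million opportunities
print(round(sigma_level(d), 2))  # just under 4 sigma (ST)
```

Checking against the conversion table: 3.4 DPMO gives 6.0σ and 66,800 DPMO gives 3.0σ.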
Long Term Yield   Process Sigma (ST)   Defects per 1,000,000   Defects per 1,000
99.99966%         6.0                  3.4                     0.0034
99.9995%          5.9                  5                       0.005
99.9992%          5.8                  8                       0.008
99.999%           5.7                  10                      0.01
99.998%           5.6                  20                      0.02
99.997%           5.5                  30                      0.03
99.996%           5.4                  40                      0.04
99.993%           5.3                  70                      0.07
99.99%            5.2                  100                     0.1
99.985%           5.1                  150                     0.15
99.977%           5.0                  230                     0.23
99.967%           4.9                  330                     0.33
99.952%           4.8                  480                     0.48
99.932%           4.7                  680                     0.68
99.904%           4.6                  960                     0.96
99.865%           4.5                  1350                    1.35
99.814%           4.4                  1860                    1.86
99.745%           4.3                  2550                    2.55
99.654%           4.2                  3460                    3.46
99.534%           4.1                  4660                    4.66
99.379%           4.0                  6210                    6.21
99.181%           3.9                  8190                    8.19
98.93%            3.8                  10700                   10.7
98.61%            3.7                  13900                   13.9
98.22%            3.6                  17800                   17.8
97.73%            3.5                  22700                   22.7
97.13%            3.4                  28700                   28.7
96.41%            3.3                  35900                   35.9
95.54%            3.2                  44600                   44.6
94.52%            3.1                  54800                   54.8
93.32%            3.0                  66800                   66.8
91.92%            2.9                  80800                   80.8
90.32%            2.8                  96800                   96.8
88.5%             2.7                  115000                  115
86.5%             2.6                  135000                  135
84.2%             2.5                  158000                  158
81.6%             2.4                  184000                  184
78.8%             2.3                  212000                  212
75.8%             2.2                  242000                  242
69.2%             2.0                  308000                  308
65.6%             1.9                  344000                  344
61.8%             1.8                  382000                  382
58.0%             1.7                  420000                  420
54.0%             1.6                  460000                  460
50.0%             1.5                  500000                  500
46.0%             1.4                  540000                  540
43.0%             1.3                  570000                  570
39.0%             1.2                  610000                  610
35.0%             1.1                  650000                  650
31.0%             1.0                  690000                  690
Capability Indices
Cp = (USL − LSL) / 6σ

Interpretation of Cp:
- Cp < 1.0: Poor Capability
- 1.0 ≤ Cp < 1.33: Marginal Capability
- Cp ≥ 1.33: Good Capability
- Cp = 2.0: 6σ Capability
Capability Indices
Cpk = min(USL − x̄, x̄ − LSL) / 3σ
Cp & Cpk
- Cp indicates the inherent capability of the process. High Cp (>1) means good process control & repeatability.
- Cpk indicates the process capability with respect to tolerance. High Cpk (>1.5) means a Six Sigma process, with process limits comfortably within tolerance limits.
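Both indices can be sketched from the formulas above; the sample data and specification limits below are made up, and σ is estimated with the sample standard deviation.

```python
import statistics

def cp(usl, lsl, sigma):
    """Cp = (USL - LSL) / 6*sigma: inherent capability, ignores centering."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Cpk = min(USL - mean, mean - LSL) / 3*sigma: capability vs tolerance."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

data = [10.1, 9.9, 10.0, 10.2, 9.8, 10.0, 10.1, 9.9]  # hypothetical sample
mean = statistics.mean(data)
sigma = statistics.stdev(data)

usl, lsl = 10.5, 9.5
print(round(cp(usl, lsl, sigma), 2))   # 1.27
print(round(cpk(usl, lsl, mean, sigma), 2))
```

Because this sample is exactly centered between the limits, Cp equals Cpk; any off-center shift would pull Cpk below Cp.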
Cp & Cpk
If
Cp=1, SW ______ PW
If
Cp>1, SW ______ PW
If
47
Yield
- First Pass Yield (FPY)
- Rolled Throughput Yield (RTY)
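Rolled Throughput Yield multiplies the first-pass yields of every process step end-to-end; the three step yields below are hypothetical.

```python
from math import prod

# First Pass Yield per step: fraction of units passing without rework.
step_yields = [0.98, 0.95, 0.99]   # hypothetical three-step process

# Rolled Throughput Yield: probability a unit passes every step defect-free
rty = prod(step_yields)
print(round(rty, 4))   # 0.9217
```

Even when each step looks healthy on its own, RTY shows how yield erodes across the whole process.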
Costs
- Internal Failures
- External Failures
Deliverables from the Measure Phase:
- Selecting & Setting Performance Standards for Y
- Data to be collected (Xs & Ys)
- Operational Definitions
- Data Collection Plan
- Measurement System Analysis
- Baseline data (Current Performance)
Thank You