Varsigma Greenbeltbookv8 210913 134817
VarSigma is an accredited training provider of Exemplar Global (previously RABQSA), a member of the ASQ family.
Define ........................................................................................................................40
Define Overview .................................................................................... 40
Measure ....................................................................................................................57
Measure Phase Overview ...................................................................... 57
Improve ...................................................................................................................119
Improve Phase Overview..................................................................... 119
Control .....................................................................................................................134
Control Phase Overview ...................................................................... 134
Step 13- Implement Control System for Critical X’s ........................... 135
Appendix..................................................................................................................146
Acronyms ............................................................................................. 146
Elementary Statistics
Basics of Six Sigma Project Management
DMAIC Roadmap, tools and techniques
Six Sigma Glossary
Practice data files
Reference Study links
VarSigma’s Lean Six Sigma Green Belt workshop primarily focuses on the application of Six Sigma tools and techniques
in different business situations. During the workshop, we will explore multiple case studies across industries, and
also complete a practice project. Therefore, it is imperative for Green Belt participants to thoroughly study the
preparatory module before attending the workshop. You may list your questions and share them with the
facilitator on the first day.
VarSigma has invested over 3,000 hours of research in developing the Lean Six Sigma Green Belt workshop and continues
to invest 480-640 hours per year in content research and development exclusively for the Lean Six Sigma Green Belt
workshop.
We encourage you to help us improve the book. If you spot an error, would like to suggest changes, or want to
share specific case studies or articles, please e-mail us at [email protected]
Regards,
1
What is Six Sigma?
History and Evolution of Lean Six Sigma
Defining Six Sigma
Six Sigma has been labelled a metric, a methodology, a management system, and now a philosophy. Green Belts,
Black Belts, Master Black Belts, Champions and Sponsors have been trained on Six Sigma as a metric and a
methodology; however, very few have experienced or been exposed to Six Sigma as an overall management system
and a way of life. Reviewing the metric and the methodology will help create a context for beginning to understand
Six Sigma as a management system.
Six Sigma is a vehicle for strategic change and an organizational approach to performance excellence. It is important
for business operations because it can be used both to increase top-line growth and to reduce bottom-line costs.
Six Sigma can be used to enable:
Transformational change by applying it across the board for large-scale fundamental changes
throughout the organization to change processes, cultures, and achieve breakthrough results.
Transactional change by applying tools and methodologies to reduce variation and defects and
dramatically improve business results.
Six Sigma as a methodology provides businesses with the tools to improve the capability of their business
processes. For Six Sigma, a process is the basic unit for improvement. A process could be a product or a service
process that a company provides to outside customers, or it could be an internal process within the company,
such as a billing or production process. In Six Sigma, the purpose of process improvement is to increase
performance and decrease performance variation. This increase in performance and decrease in performance
variation will lead to defect reduction and to improvements in profits, employee morale, and product quality, and
eventually to business excellence.
General Electric: “It is not a secret society, a slogan or a cliché. Six Sigma is a highly disciplined process that helps
focus on developing and delivering near-perfect products and services. Six Sigma has changed our DNA – it is
now the way we work.”
Honeywell: “Six Sigma refers to our overall strategy to improve growth and productivity as well as a Quality
measure. As a strategy, Six Sigma is a way for us to achieve performance breakthroughs. It applies to every
function in our company and not just to the factory floor.”
The tools used in Six Sigma are not new. Six Sigma is based on tools that have been around for centuries. For
example, Six Sigma relies heavily on the normal curve, which was introduced by Abraham de Moivre in 1736 and
later popularized by Carl Friedrich Gauss in 1818.
2
Understanding Lean
Why do we want to lose excess fat?
Defining Lean
Lean operation principles are derived from the Lean manufacturing practices developed by Toyota. The key focus
of Lean is to identify and eliminate wasteful actions that do not add value to customers in any business process.
Therefore, Lean operating principles can greatly improve the efficiency and speed of all processes.
The strategy part of Lean looks at balancing multiple value streams (i.e., typically, a family of products or services)
and integrating the work done in operations and in the rest of the organisation (be it a factory, a hospital, a
software development company) with the customer in mind. The concept is simple. "Lean" describes any process
developed to a goal of near 100% value added with very few wasteful steps or interruptions to the workflow.
That includes physical things like products, and less tangible information like orders, requests for information,
quotes, etc.
Lean is typically driven by a need for quicker customer response times, the proliferation of product and service
offerings, a need for faster cycle times, and a need to eliminate waste in all its forms. The Lean approach
challenges everything and accepts nothing as unchangeable. It strives continuously to eliminate waste from all
processes, a fundamental principle totally in alignment with the goals of the Six Sigma Management System.
These methods are especially effective in overcoming cultural barriers where the impossible is often merely the
untried.
Lean, like any other major business strategy, is best if driven from the top, linked into the organisation's
performance measurement systems, and used as a competitive differentiator. This is what we would like to do;
however, reality sometimes differs. In most instances, the Champions driving this approach should look for pilot
areas in the organisation to test the concept and see if a business case for Lean can be built over time. One
cannot just flip a switch and get the whole organisation doing this anyway, so starting small and building from
there can be a valuable approach.
An account closing process typically takes a long time because a flood of transactions takes place
at the end of the period: journal entries, special analysis, allocations, report preparation, etc. The
excessive transactions cause accounting departments to be somewhat chaotic places at the end of the
period. Adopting Lean in this world is different from a factory, but the goal is still stable amounts of
work, flexible operations (within defined parameters), and pull from the customers of the process.
Imagine a purchase order going through a process. Ninety-nine percent of its processing life is going to
be "wait" time. It may also have re-work problems as people try to get the information right, and in
terms of workload balance, some purchase orders are more difficult to do than others. This is not so
different from what goes on in the factory. Many of these problems can be individually addressed using
Kaizen and Lean Teams.
Multiple re-inputs of information into Excel spreadsheets, Access databases, requirements generators,
or the different languages used inside a business for the same physical product. Purchasing has a
different numbering scheme than engineering, which has a different numbering scheme than
accounting. We may have to keep a matrix up-to-date that maps these relationships. Don’t you think it
is chaos?
Sort – Clearly distinguish needed items from unneeded items and eliminate the latter.
Straighten /Stabilize/ Set in Order – Keep needed items in the correct place to allow for easy and
immediate retrieval.
Shine/Sweep – Keep the work area clean and inspect it regularly so that problems are easy to spot.
Standardize – Develop standardized work processes to support the first three steps.
Sustain – Put processes in place to ensure that the first four steps are rigorously followed.
Figure 1: 5S
Waiting/Idle Time/Search time (look for items, wait for elements or instructions to be delivered)
Correction (defects/re-work & scrap - doing the same job more than once)
The benefits of Kaizen include reduced direct and indirect labour requirements, reduced space requirements, increased
flexibility, increased quality, increased responsiveness, and increased employee enthusiasm. Figure 2 shows a
Kaizen team in action discussing improvements.
Warning (let the user know that there is a potential problem – like door ajar warning in a car)
Shutdown (close down the process so it does not cause damage – like denying access at an ATM if the
password is entered incorrectly three times in a row)
Auto-correction (automatically change the process if there is a problem – like turn on windshield wipers
in case of rain in some advanced cars)
W O R M P I T
Waiting | Over-Production | Rework | Motion | Over-processing | Inventory | Transportation
The underutilization of talent and skills is sometimes called the 8th waste in Lean.
Waiting is non-productive time due to lack of material, people, or equipment. This can be due to slow or broken
machines, material not arriving on time, etc. Waste of Waiting is the cost of an idle resource. Examples are:
Over-Production refers to producing more than the next step needs or more than the customer buys. Waste of
Over-production relates to the excessive accumulation of work-in-process (WIP) or finished goods inventory. It
may be the worst form of waste because it contributes to all the others. Examples are:
Over-ordering materials
Rework or Correction or defects are as obvious as they sound. Waste of Correction includes the waste of handling
and fixing mistakes. This is common in both manufacturing and transactional settings. Examples are:
Extra steps
Over-Processing refers to tasks, activities, and materials that don’t add value. It can be caused by poor product or
process design, as well as by not understanding what the customer wants. Waste of Over-processing relates
to processing anything that may not add value in the eyes of the customer. Examples are:
Sign-offs
Reports that contain more information than the customer wants or needs
Communications, reports, emails, contracts, etc., that contain more than the necessary points (concise is better)
Duplication of effort/reports
Inventory is the liability of materials that are bought, invested in and not immediately sold or used. Waste of
Inventory is identical to over-production except that it refers to the waste of acquiring raw material before the
exact moment that it is needed. Examples are:
Transportation is the unnecessary movement of material and information. Steps in a process should be located
close to each other so movement is minimized. Examples are:
3
Statistics for Lean Six Sigma
Elementary Statistics for Business
The field of statistics deals with the collection, presentation, analysis, and use of data to make decisions, solve
problems, and design products and processes. Statistical techniques can be a powerful aid in designing new
products and systems, improving existing designs, and designing, developing, and improving processes.
Statistical methods are used to help us describe and understand variability. By variability, we mean that
successive observations of a system or phenomenon do not produce exactly the same result. We all encounter
variability in our everyday lives, and statistical thinking can give us a useful way to incorporate this variability
into our decision-making processes. For example, consider the gasoline mileage performance of your car. Do you
always get exactly the same mileage performance on every tank of fuel? Of course not—in fact, sometimes the
mileage performance varies considerably. This observed variability in gasoline mileage depends on many factors,
such as the type of driving that has occurred most recently (city versus highway), the changes in condition of the
vehicle over time (which could include factors such as tire inflation, engine compression, or valve wear), the
brand and/or octane number of the gasoline used, or possibly even the weather conditions that have been
recently experienced. These factors represent potential sources of variability in the system. Statistics gives us a
framework for describing this variability and for learning about which potential sources of variability are the
most important or which have the greatest impact on the gasoline mileage performance.
Descriptive statistics focus on the collection, analysis, presentation, and description of a set of data. For
example, the United States Census Bureau collects data every 10 years (and has done so since 1790) concerning
many characteristics of residents of the United States. Another example of descriptive statistics is the employee
benefits used by the employees of an organisation in fiscal year 2005. These benefits might include healthcare
costs, dental costs, sick leave, and the specific healthcare provider chosen by the employee.
Inferential statistics focus on making decisions about a large set of data, called the population, from a subset of
the data, called the sample. The invention of the computer eased the computational burden of statistical
methods and opened up access to these methods to a wide audience. Today, the preferred approach is to use
statistical software such as Minitab to perform the computations involved in using various statistical methods.
A population, also called a universe, is the entire group of units, items, services, people, etc., under investigation for a
fixed period of time and a fixed location.
A sample is the portion of a population that is selected to gather information to provide a basis for action on the
population. Rather than taking a complete census of the whole population, statistical sampling procedures focus on
collecting a small portion of the larger population. For example, 50 accounts receivable drawn from a list, or frame, of
10,000 accounts receivable constitute a sample. The resulting sample provides information that can be used to
estimate characteristics of the entire frame.
There are two kinds of samples: non-probability samples and probability samples.
In a non-probability sample, items or individuals are chosen without the benefit of a frame. Because non-probability
samples choose units without the benefit of a frame, there is an unknown probability of selection (and in some cases,
participants have self-selected). For a non-probability sample, the theory of statistical inference should not be applied
to the sample data. For example, many companies conduct surveys by giving visitors to their web site the opportunity
to complete survey forms and submit them electronically. The response to these surveys can provide large amounts of
data, but because the sample consists of self-selected web users, there is no frame. Non-probability samples are
selected for convenience (convenience sample) based on the opinion of an expert (judgment sample) or on a desired
proportional representation of certain classes of items, units, or people in the sample (quota sample). Non-probability
samples are all subject to an unknown degree of bias. Bias is caused by the absence of a frame and the ensuing classes
of items or people that may be systematically denied representation in the sample (the gap).
Non-probability samples have the potential advantages of convenience, speed, and lower cost. However, they have
two major disadvantages: potential selection bias and the ensuing lack of generalizability of the results. These
disadvantages offset the advantages of non-probability samples. Therefore, you should only use non-probability sampling
methods when you want to develop rough approximations at low cost or when small-scale initial or pilot studies will
be followed by more rigorous investigations.
You should use probability sampling whenever possible, because valid statistical inferences can be made from a
probability sample. In a probability sample, the items or individuals are chosen from a frame, and hence, the individual
units in the population have a known probability of selection from the frame.
The four types of probability samples most commonly used are simple random, stratified, systematic, and cluster.
These sampling methods vary from one another in their cost, accuracy, and complexity.
In a simple random sample, every sample of a fixed size has the same chance of selection as every other sample of
that size. Simple random sampling is the most elementary random sampling technique. It forms the basis for the other
random sampling techniques. With simple random sampling, n represents the sample size, and N represents the frame
size, not the population size. Every item or person in the frame is numbered from 1 to N. The chance of selecting any
particular member of the frame on the first draw is 1/N. You use random numbers to select items from the frame to
eliminate bias and hold uncertainty within known limits.
Two important points to remember are that different samples of size n will yield different sample statistics, and
different methods of measurement will yield different sample statistics. Random samples, however, do not have bias
on average, and the sampling error can be held to known limits by increasing the sample size. These are the advantages
of probability sampling over non-probability sampling.
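The frame-and-draw idea can be sketched with Python's standard random module. The 10,000-item frame and sample of 50 echo the accounts-receivable example earlier in the text; the fixed seed is only so the sketch is reproducible.

```python
import random

# Frame: 10,000 pre-numbered accounts receivable (items numbered 1 to N).
N = 10_000
frame = range(1, N + 1)

random.seed(42)  # fixed seed so the sketch is reproducible
sample = random.sample(frame, 50)  # every size-50 subset is equally likely

# Each member of the frame had the same 1/N chance on the first draw.
print(len(sample), len(set(sample)))  # 50 distinct items
```

random.sample draws without replacement, which matches the idea that a simple random sample contains 50 distinct accounts from the frame.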
Stratified Sample
In a stratified sample, the N items in the frame are divided into sub populations or strata, according to some common
characteristic. A simple random sample is selected within each of the strata, and you combine results from separate
simple random samples. Stratified sampling can decrease the overall sample size, and, consequently, lower the cost of
a sample. A stratified sample will have a smaller sample size than a simple random sample if the items are similar within
a stratum (called homogeneity) and the strata are different from each other (called heterogeneity). As an example of
stratified sampling, suppose that a company has workers located at several facilities in a geographical area. The workers
within each location are similar to each other with respect to the characteristic being studied, but the workers at the
different locations are different from each other with respect to the characteristic being studied. Rather than take a
simple random sample of all workers, it is cost efficient to sample workers by location, and then combine the results
into a single estimate of a characteristic being studied.
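The workers-by-location example can be sketched as follows. The facility names and 10% sampling fraction are hypothetical; the key point is that a separate simple random sample is drawn inside each stratum and the results are combined.

```python
import random

# Hypothetical strata: workers grouped by facility location.
strata = {
    "plant_a": [f"A{i}" for i in range(200)],
    "plant_b": [f"B{i}" for i in range(100)],
    "plant_c": [f"C{i}" for i in range(50)],
}

random.seed(1)
sample = []
for location, workers in strata.items():
    k = round(len(workers) * 0.10)  # simple random sample within each stratum
    sample.extend(random.sample(workers, k))

print(len(sample))  # 20 + 10 + 5 = 35
```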
Systematic Sample
In a systematic sample, the N individuals or items in the frame are placed into k groups by dividing the size of the frame
N by the desired sample size n. To select a systematic sample, you choose the first individual or item at random from
the k individuals or items in the first group in the frame. You select the rest of the sample by taking every kth individual
or item thereafter from the entire frame.
If the frame consists of a listing of pre-numbered checks, sales receipts, or invoices, or a preset number of consecutive
items coming off an assembly line, a systematic sample is faster and easier to select than a simple random sample. This
method is often used in industry, where an item is selected for testing from a production line (say, every fifteen
minutes) to ensure that machines and equipment are working to specification. This technique could also be used when
questioning people in a sample survey. A market researcher might select every 10th person who enters a particular
store, after selecting a person at random as a starting point; or interview occupants of every 5th house in a street, after
selecting a house at random as a starting point.
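The selection mechanics described above can be sketched in Python. The frame of 100 pre-numbered invoices is hypothetical; note how the only random choice is the starting point within the first group of k items.

```python
import random

# Hypothetical frame: 100 pre-numbered invoices; desired sample size n = 10.
frame = list(range(1, 101))
n = 10
k = len(frame) // n          # the frame splits into groups of size k = 10

random.seed(7)
start = random.randrange(k)  # random start within the first group
sample = frame[start::k]     # then every kth item through the whole frame

print(len(sample))  # 10 items, exactly k apart
```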
A shortcoming of a systematic sample occurs if the frame has a pattern. For example, if homes are being assessed, and
every fifth home is a corner house, and the random number selected is 5, then the entire sample will consist of corner
houses. Corner houses are known to have higher assessed values than other houses. Consequently, the average
assessed value of the homes in the sample will be inflated due to the corner house phenomenon.
In a cluster sample, you divide the N individuals or items in the frame into many clusters. Clusters are naturally occurring
subdivisions of a frame, such as counties, election districts, city blocks, apartment buildings, factories, or families. You
take a random sampling of clusters and study all individuals or items in each selected cluster. This is called single-stage
cluster sampling.
Cluster sampling methods are more cost effective than simple random sampling methods if the population is spread
over a wide geographic region. Cluster samples are very useful in reducing travel time. However, cluster sampling
methods tend to be less efficient than either simple random sampling methods or stratified sampling methods. In
addition, cluster sampling methods are useful in cutting cost of developing a frame because first, a frame is made of
the clusters, and second, a frame is made only of the individual units in the selected clusters. Cluster sampling often
requires a larger overall sample size to produce results as precise as those from more efficient procedures.
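Single-stage cluster sampling can be sketched as below. The 20 city blocks of 8 households each are hypothetical; the defining step is that whole clusters are drawn at random and then every unit inside each selected cluster is studied.

```python
import random

# Hypothetical frame: 20 city blocks (clusters) of 8 households each.
clusters = {b: [f"block{b}-house{h}" for h in range(8)] for b in range(20)}

random.seed(3)
chosen = random.sample(sorted(clusters), 4)  # draw 4 whole clusters at random

# Single-stage cluster sampling: study every unit in each selected cluster.
sample = [unit for b in chosen for unit in clusters[b]]
print(len(sample))  # 4 clusters x 8 households = 32
```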
Types of Variables
Variable: In statistics, a variable has two defining characteristics: it is an attribute that describes a person,
place, thing, or event, and its value can vary from one entity to another.
For example, a person's hair colour is a potential variable, which could have the value of "blonde" for one
person and "brunette" for another.
Data could be classified into two types: attribute data and measurement data.
Attribute Data
o Attribute data (also referred to as categorical or count data) occurs when a variable is either classified into
categories or used to count occurrences of a phenomenon.
o Attribute data places an item or person into one of two or more categories. For example, gender has only
two categories.
o In other cases, there are many possible categories into which a variable can be classified. For example, there
could be many reasons for a defective product or service.
o Regardless of the number of categories, the data consists of the number or frequency of items in a particular
category, whether it is number of voters in a sample who prefer a particular candidate in an election or the
number of occurrences of each reason for a defective product or service.
o Count data consists of the number of occurrences of a phenomenon in an item or person. Examples of count
data are the number of blemishes in a yard of fabric or number of cars entering a highway at a certain
location during a specific time period.
o The colour of a ball (e.g., red, green, blue) or the breed of a dog (e.g., collie, shepherd, terrier) would be
examples of qualitative or categorical variables.
Measurement Data
o Measurement data (also referred to as continuous or variables data) results from a measurement taken
on an item or person. Any value can theoretically occur, limited only by the precision of the measuring
process.
o For example, height, weight, temperature, and cycle time are examples of measurement data.
o For example, suppose the fire department mandates that all fire fighters must weigh between 150 and 250
pounds. The weight of a fire fighter would be an example of a continuous variable, since a fire fighter's
weight could take on any value between 150 and 250 pounds.
o Examples of continuous data in nonmanufacturing processes include:
Cycle time needed to complete a task
Revenue per square foot of retail floor space
Costs per transaction
From a process point of view, continuous data are always preferred over discrete data, because they are more
efficient (fewer data points are needed to make statistically valid decisions) and they allow the degree of variability
in the output to be quantified. For example, it is much more valuable to know how long it actually took to resolve
a customer complaint than simply noting whether it was late or not.
There are four scales of measurement: nominal, ordinal, interval, and ratio. Attribute data classified into
categories is nominal scale data—for example, conforming versus nonconforming, on versus off, male versus
female. No ranking of the data is implied. Nominal scale data is the weakest form of measurement. An ordinal
scale is used for data that can be ranked, but cannot be measured—for example, ranking attitudes on a 1 to 5
scale, where 1 = very dissatisfied, 2 = dissatisfied, 3 = neutral, 4 = satisfied, and 5 = very satisfied. Ordinal scale
data involves a stronger form of measurement than attribute data. However, differences between categories
cannot be measured.
Measurement data can be classified into interval- and ratio-scaled data. In an interval scale, differences between
measurements are a meaningful amount, but there is no true zero point. In a ratio scale, not only are differences
between measurements a meaningful amount, but there is a true zero point. Temperature in degrees Fahrenheit
or Celsius is interval scaled because the difference between 30 and 32 degrees is the same as the difference
between 38 and 40 degrees, but there is no true zero point (0° F is not the same as 0° C). Weight and time are
ratio-scaled variables that have a true zero point; zero pounds are the same as zero grams, which are the same
as zero stones. Twenty minutes is twice as long as ten minutes, and ten minutes is twice as long as five minutes.
We use measures of central tendency (Mean, Median) to determine the location and measures of dispersion
(Standard Deviation) to determine the spread. When we compute these measures from a sample, they are
statistics and if we compute these measures from a population, they are parameters. (To distinguish sample
statistics and population parameters, Roman letters are used for sample statistics, and Greek letters are used
for population parameters.)
Central Tendency: The tendency of data to cluster around some value. Central tendency is usually
expressed by a measure of location such as the mean, median, or mode.
The mean of a sample of numerical observations is the sum of the observations divided by the number of
observations. It is the simple arithmetic average of the numbers in the sample. If the sample members are
denoted by x1, x2, ... , xn where n is the number of observations in the sample or the sample size, then the sample
mean is usually denoted by x̄ and pronounced "x-bar". The population mean is usually denoted by the Greek letter μ (mu).
The arithmetic mean (also called the mean or average) is the most commonly used measure of central tendency.
You calculate the arithmetic mean by summing the numerical values of the variable, and then you divide this
sum by the number of values.
For a sample containing a set of n values, X1, X2, ..., Xn, the arithmetic mean of a sample (given by the symbol x̄,
called X-bar) is written as:
x̄ = (Sum of the Values) / (Number of the Values)

x̄ = (X1 + X2 + ... + Xn) / n, i.e., the sum Σ Xi over i = 1 to n, divided by n
To illustrate the computation of the sample mean, consider the following example related to your personal life:
the time it takes to get ready to go to work in the morning. Many people wonder why it seems to take longer
than they anticipate getting ready to leave for work, but very few people have actually measured the time it
takes them to get ready in the morning. Suppose you operationally define the time to get ready as the time in
minutes (rounded to the nearest minute) from when you get out of bed to when you leave your home. You
decide to measure these data for a period of 10 consecutive working days, with the following results:
Day      1   2   3   4   5   6   7   8   9   10
Minutes  39  29  43  52  39  44  40  31  44  35
To compute the mean (average) time, first compute the sum of all the data values, 39 + 29 + 43 + 52 + 39 + 44 +
40 + 31 + 44 + 35, which is 396. Then, take this sum of 396 and divide by 10, the number of data values. The
result, 39.6, is the mean time to get ready. Although the mean time to get ready is 39.6 minutes, not one
individual day in the sample actually had that value. In addition, the calculation of the mean is based on all the
values in the set of data. No other commonly used measure of central tendency possesses this characteristic.
Because its computation is based on every value, the mean is greatly affected by any extreme value or values.
When there are extreme values, the mean presents a distorted representation of the data. Thus, the mean is
not the best measure of central tendency to use for describing or summarizing a set of data that has extreme
values.
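The arithmetic above can be reproduced in a few lines of Python, using the ten data values from the example:

```python
# The ten "time to get ready" values from the example, in minutes.
times = [39, 29, 43, 52, 39, 44, 40, 31, 44, 35]

total = sum(times)         # sum of all data values: 396
mean = total / len(times)  # 396 / 10

print(total, mean)  # 396 39.6
```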
Median
It is the point in the middle of the ordered sample. Half the sample values exceed it and half do not. It is used,
not surprisingly, to measure where the center of the sample lies, and hence where the center of the population
from which the sample was drawn might lie. The median of a set of data is that value that divides the data into
two equal halves. When the number of observations is even, say 2n, it is customary to define the median as the
average of the nth and (n + 1)st rank-ordered values.
The median is the value that splits a ranked set of data into two equal parts. If there are no ties, half the values
will be smaller than the median, and half will be larger. The median is not affected by any extreme values in a
set of data. Whenever an extreme value is present, the median is preferred instead of the mean in describing
the central tendency of a set of data.
To calculate the median from a set of data, you must first rank the data values from smallest to largest. Then,
the median is computed, as described next.
We can use Equation to compute the median by following one of two rules:
Rule 1: If there are an odd number of values in the data set, the median is the middle ranked value.
Rule 2: If there is an even number of values in the data set, then the median is the average of the two values in
the middle of the data set.
Median = (n + 1)/2 ranked value
17
Values  29  31  35  39  39  40  43  44  44  52
Ranks   1   2   3   4   5   6   7   8   9   10

Median = 39.5
Using rule 2 for the even-sized sample of 10 days, the median corresponds to the (10 + 1)/2 = 5.5 ranked value,
halfway between the fifth-ranked value and the sixth ranked value. Because the fifth-ranked value is 39 and the
sixth ranked value is 40, the median is the average of 39 and 40, or 39.5. The median of 39.5 means that for half
of the days, the time to get ready is less than or equal to 39.5 minutes, and for half of the days, the time to get
ready is greater than or equal to 39.5 minutes.
Mode: The mode of a sample is that observed value that occurs most frequently.
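Both the median rule and the mode can be checked with Python's statistics module and a frequency count, again using the get-ready times. Note that this particular sample happens to be bimodal: 39 and 44 each occur twice.

```python
from collections import Counter
from statistics import median

times = [39, 29, 43, 52, 39, 44, 40, 31, 44, 35]

# Rule 2: even-sized sample, so the median averages the 5th and 6th ranked values.
print(median(times))  # 39.5

# Mode: the most frequently observed value(s); this sample is bimodal.
counts = Counter(times)
top = max(counts.values())
modes = sorted(v for v, c in counts.items() if c == top)
print(modes)  # [39, 44]
```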
Range
The range is the simplest measure of variation in a set of data. The range is equal to the largest value minus the
smallest value. The smallest sample value is called the minimum of the sample, and the largest sample value is
called the maximum. The distance between the sample minimum and maximum is called the range of the
sample. The range clearly is a measure of the spread of sample values. As such, it is a fairly blunt instrument, for
it takes no cognizance of where or how the values between the minimum and maximum might be located.
Range = largest value – smallest value
Using the data pertaining to the time to get ready in the morning
Range = largest value – smallest value
Range = 52 – 29 = 23 minutes
This means that the largest difference between any two days in the time to get ready in the morning is 23
minutes.
Inter-quartile Range
Quartiles divide the sample into four equal parts. The lower quartile has 25% of the sample values below it and
75% above. The upper quartile has 25% of the sample values above it and 75% below. The middle quartile is, of
course, the median. The middle half of the sample lies between the upper and lower quartile. The distance
between the upper and lower quartile is called the inter-quartile range. Like the range, the inter-quartile range
is a measure of the spread of the sample. It measures variability or dispersion.
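The quartiles and inter-quartile range can be sketched with Python's `statistics.quantiles`. Note that several quartile conventions exist; the default "exclusive" method below is one common choice, and other conventions can give slightly different quartile values:

```python
import statistics

times = [29, 31, 35, 39, 39, 40, 43, 44, 44, 52]

# quantiles with n=4 returns [Q1, Q2 (the median), Q3]
q1, q2, q3 = statistics.quantiles(times, n=4)
iqr = q3 - q1  # inter-quartile range: spread of the middle half of the sample
print(q1, q2, q3, iqr)
```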
A simple measure around the mean might just take the difference between each value and the mean, and then
sum these differences. However, if you did that, you would find that because the mean is the balance point in a
set of data, for every set of data, these differences would sum to zero. One measure of variation that would
differ from data set to data set would square the difference between each value and the mean and then sum
these squared differences. In statistics, this quantity is called a sum of squares (or SS). This sum of squares is then
divided by the number of observations minus 1 (for sample data) to get the sample variance. The square root of
the sample variance (s²) is the sample standard deviation (s). This statistic is the most widely used measure of
variation.
COMPUTING s2 AND s
To compute s2, the sample variance, do the following:
1. Compute the difference between each value and the mean.
2. Square each difference.
3. Add the squared differences.
4. Divide this total by n − 1.
To compute s, the sample standard deviation, take the square root of the variance.
Table 5 illustrates the computation of the variance and standard deviation using the steps for the time to get
ready in the morning data. You can see that the sum of the differences between the individual values and the
mean is equal to zero.
Table 5
You calculate the sample variance s² by dividing the sum of the squared differences computed in step 3 (412.4)
by the sample size (10) minus 1:
s² = Σ (Xᵢ − X̄)² / (n − 1), summed over i = 1 to n

Where
X̄ = Sample Mean
n = Sample Size
Xᵢ = ith Value of the Variable X
Σ (Xᵢ − X̄)² = summation of all the squared differences between the X values and X̄

For the time-to-get-ready data:
s² = 412.4 / (10 − 1) = 45.82
s = √45.82 = 6.77
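The four computation steps can be checked with Python's `statistics` module, which also divides by n − 1 for sample data:

```python
import statistics

times = [29, 31, 35, 39, 39, 40, 43, 44, 44, 52]
mean = statistics.mean(times)                 # 39.6

# Step 1-3: sum of squared differences from the mean (the SS)
ss = sum((x - mean) ** 2 for x in times)      # 412.4

# Step 4: divide by n - 1, then take the square root for s
s2 = ss / (len(times) - 1)                    # sample variance
s = s2 ** 0.5                                 # sample standard deviation

print(round(s2, 2), round(s, 2))  # 45.82 6.77
```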
Figure 4: Histogram of Transaction Time (x-axis: Transaction Time, y-axis: Frequency)
Worksheet: Transaction.mtw
Scatter Plot
A graph of a set of data pairs (x, y) used to determine whether there is a statistical relationship between the
variables x and y. In this scatter plot, we explore the correlation between Weight Gained (Y) and Calories
Consumed (X)
Figure 5: Scatter plot of Weight Gained (y) versus Calories Consumed (x)
Figure 6: Box plot of Transaction Time
Worksheet: Transaction.mtw
Did you know: Box plot was introduced by John Tukey in 1977
Binomial distribution
The binomial distribution describes the possible number of times that a particular event will occur in a sequence of
observations. The event is coded binary; it may or may not occur. The binomial distribution is used when a
researcher is interested in the occurrence of an event, not in its magnitude. For instance, in a clinical trial, a
patient may survive or die. The researcher studies the number of survivors, and not how long the patient survives
after treatment. The binomial distribution is specified by the number of observations, n, and the probability of
occurrence, which is denoted by p.
Other situations in which binomial distributions arise are quality control, public opinion surveys, medical
research, and insurance problems
The following conditions have to be met for using a binomial distribution:
• The number of trials is fixed
• Each trial is independent
• Each trial has one of two outcomes: event or non-event
• The probability of an event is the same for each trial
Suppose a process produces 2% defective items. You are interested in knowing how likely it is to get 3 or more
defective items in a random sample of 25 items selected from the process. The number of defective items (X)
follows a binomial distribution with n = 25 and p = 0.02.
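This probability can be computed directly from the binomial formula P(X = k) = C(n, k) pᵏ (1 − p)ⁿ⁻ᵏ. A minimal Python sketch using only the standard library:

```python
from math import comb

n, p = 25, 0.02  # sample size and defect probability from the example

def binom_pmf(k, n, p):
    """P(X = k) for a binomial(n, p) random variable."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# P(X >= 3) = 1 - P(X <= 2)
p_3_or_more = 1 - sum(binom_pmf(k, n, p) for k in range(3))
print(round(p_3_or_more, 4))  # ≈ 0.0132
```

So only about 1.3% of such samples would contain 3 or more defective items.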
Figure 7
The Poisson Distribution was developed by the French mathematician Simeon Denis Poisson in 1837. The Poisson
distribution is used to model the number of events occurring within a given time interval. The Poisson
distribution arises when you count a number of events across time or over an area. You should think about the
Poisson distribution for any situation that involves counting events. Some examples are:
• the number of Emergency Department visits by an infant during the first year of life,
• the number of white blood cells found in a cubic centimetre of blood.
Sometimes, you will see the count represented as a rate, such as the number of deaths per year due to horse
kicks, or the number of defects per square yard.
Four Assumptions
Information about how the data was generated can help Green Belt/Black Belt decide whether the Poisson
distribution fits. The Poisson distribution is based on four assumptions. We will use the term "interval" to refer
to either a time interval or an area, depending on the context of the problem.
• The probability of observing a single event over a small interval is approximately proportional to the size of that interval.
• The probability of two events occurring in the same narrow interval is negligible.
• The probability of an event within a certain interval does not change over different intervals.
• The probability of an event in one interval is independent of the probability of an event in any other non-overlapping interval.
The Poisson distribution is similar to the binomial distribution because they both model counts of events.
However, the Poisson distribution models a finite observation space with any integer number of events greater
than or equal to zero. The binomial distribution models a fixed number of discrete trials from 0 to n events.
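As a sketch of the Poisson probability mass function P(X = k) = e^(−λ) λᵏ / k!, the example below assumes a hypothetical mean rate of 2 defects per square yard (the rate is an assumption for illustration, not taken from the text):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) events in an interval with mean rate lam."""
    return exp(-lam) * lam**k / factorial(k)

# Hypothetical example: defects occur at a mean rate of 2 per square yard.
# Probability of seeing at most 1 defect in a square yard:
lam = 2
p_at_most_1 = poisson_pmf(0, lam) + poisson_pmf(1, lam)
print(round(p_at_most_1, 3))  # ≈ 0.406
```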
The most widely used model for the distribution of a random variable is a normal distribution. Whenever a
random experiment is replicated, the random variable that equals the average (or total) result over the replicates
tends to have a normal distribution as the number of replicates becomes large. De Moivre presented this
fundamental result, known as the central limit theorem, in 1733. Unfortunately, his work was lost for some time,
and Gauss independently developed a normal distribution nearly 100 years later. Although De Moivre was later
credited with the derivation, a normal distribution is also referred to as a Gaussian distribution.
A Normal distribution is an important statistical data distribution pattern occurring in many natural phenomena,
such as height, blood pressure, lengths of objects produced by machines, etc. Certain data, when graphed as a
histogram (data on the horizontal axis, amount of data on the vertical axis), creates a bell-shaped curve known
as a normal curve, or normal distribution.
Normal distribution is symmetrical with a single central peak at the mean (average) of the data. The shape of the
curve is described as bell-shaped with the graph falling off evenly on either side of the mean. Fifty percent of the
distribution lies to the left of the mean and fifty percent lies to the right of the mean. The spread of a normal
distribution is controlled by the standard deviation. The smaller the standard deviation, the more concentrated
the data. The mean and the median are the same in a normal distribution. In a standard normal distribution, the
mean is 0 and the standard deviation is 1.
For example, the heights of all adult males residing in the state of Punjab are approximately normally distributed.
Therefore, the heights of most men will be close to the mean height of 69 inches. A similar number of men will
be just taller and just shorter than 69 inches. Only a few will be much taller or much shorter.
The mean (μ) and the standard deviation (σ) are the two parameters that define the normal distribution. The
mean is the peak or center of the bell-shaped curve. The standard deviation determines the spread in the data.
Approximately 68.27% of observations are within +/- 1 standard deviation of the mean; 95.45% are within +/- 2
standard deviations of the mean; and 99.73% are within +/- 3 standard deviations of the mean.
For the height of men in Punjab, the mean height is 69 inches and the standard deviation is 2.5 inches. For a
continuous distribution, like the normal curve, the area under the probability density function (PDF) gives the
probability of occurrence of an event.
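These areas can be verified with Python's `statistics.NormalDist`, using the Punjab heights parameters from the text (μ = 69 inches, σ = 2.5 inches):

```python
from statistics import NormalDist

heights = NormalDist(mu=69, sigma=2.5)  # heights of men in Punjab (inches)

# Fraction of men within +/- 1 standard deviation of the mean
within_1sd = heights.cdf(69 + 2.5) - heights.cdf(69 - 2.5)

# Probability of being taller than 74 inches (2 sigma above the mean)
taller_than_74 = 1 - heights.cdf(74)

print(round(within_1sd, 4))      # ≈ 0.6827
print(round(taller_than_74, 4))  # ≈ 0.0228
```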
Figure 8
Figure 9
Figure 10
Mean:____________________________________________________
Median:___________________________________________________
Standard Deviation:_________________________________________
Range: ___________________________________________________
Inter-Quartile Range:________________________________________
Graph>Box Plot
4
Managing Six Sigma Projects
Managing Projects
Six Sigma is different from other quality or process improvement methodologies: ‘IT DEMANDS RESULTS’. These
results are delivered by PROJECTS that are tightly linked to customer demands and enterprise strategy. The
efficacy of Six Sigma projects is greatly improved by combining project management and business process
improvement practices.
What is a Project?
A project is a temporary endeavor undertaken to create a unique product, service, or result. The temporary
nature of a project indicates a definite beginning and end. The end is reached when the project's objectives have
been achieved or when the project is terminated because its objectives will not or cannot be met, or when the
need for the project no longer exists. Temporary doesn't necessarily mean short in duration. Temporary doesn't
generally apply to the product, service, or result created by the project; most projects are undertaken to create
a lasting outcome.
The logical process flow
A logical process flow as explained by Thomas Pyzdek is as follows:
1. Define the project’s goals and deliverables.
a. If these are not related to the organisation’s strategic goals and objectives, stop. The project is
not a Six Sigma project. This does not necessarily mean that it isn’t a “good” project or that the
project shouldn’t be done. There are many worthwhile and important projects that are not Six
Sigma projects.
2. Define the current process and analyze the measurement systems.
3. Measure the current process and analyze the data using exploratory and descriptive statistical methods.
a. If the current process meets the goals of the project, establish control systems and stop, else
4. Audit the current process and correct any deficiencies found.
a. If the corrected process meets the goals of the project, establish control systems and stop, else
5. Perform a process capability study using SPC.
6. Identify and correct special causes of variation.
a. If the controlled process meets the goals of the project, establish control systems and stop, else
7. Optimize the current process by applying statistically designed experiments.
a. If the optimized process meets the goals of the project, establish control systems and stop, else
8. Employ breakthrough strategy to develop and implement an entirely new process that meets the
project’s goals.
9. Establish control and continuous improvement systems and stop.
Project charters (sometimes called project scope statements) should be prepared for each project and
subproject. The Project Charter includes the project justification, the major deliverables, and the project
objectives. It forms the basis of future project decisions, including the decision of when the project or subproject
is complete. The Project Charter is used to communicate with stakeholders and to allow scope management as
the project moves forward. The Project Charter is a written document issued by the Project Sponsor. The Project
Charter gives the project team authority to use organisational resources for project activities.
Before launching a significant effort to solve a business problem, be sure that it is the correct problem and not
just a symptom. Is the “defect” the Green Belt/Black Belt is trying to eliminate something the customer cares about
or even notices? Is the design requirement really essential, or can engineering relax the requirement? Is the
performance metric really a key business driver, or is it arbitrary? Conduct a project validation analysis and
discuss it with the stakeholders.
Project Metrics
At this point the Green Belt/Black Belt knows who the project’s customers are and what they expect in the way
of project deliverables. Now the Green Belt/Black Belt must determine precisely how progress toward achieving
the project’s goals will be measured.
Preliminary estimates of benefits were made previously during the initial planning. However, the data obtained
by the team will allow the initial estimates to be made more precisely at this time. Whenever possible,
“characteristics” should be expressed in the language of management: dollars. One needn’t strive for to-the-
penny accuracy; a rough figure is usually sufficient. It is recommended that the finance and accounting
department develop dollar estimates; however, in any case it is important that the estimates at least be accepted
(in writing) by the accounting and finance department as reasonable. This number can be used to compute a
return on investment (ROI) for the project.
Six Sigma projects have a significant impact on people while they are being conducted. It is important that the
perspectives of all interested parties be periodically monitored to ensure that the project is meeting their expectations
and not becoming too disruptive. The Green Belt/Black Belt should develop a means for obtaining this information,
analyzing it, and taking action when the results indicate a need. Data collection should be formal and documented.
Relying on “gut feeling” is not enough.
Means of monitoring include, but are not limited to, personal interviews, focus groups, surveys, meetings, and comment cards.
Green Belt/Black Belt may also choose to use Stakeholder analysis and Force Field analysis to proactively assess change
management challenges that lie ahead.
The creation of work breakdown structures involves a process for defining the final and intermediate products of a
project and their interrelationships. Defining project activities is complex. It is accomplished by performing a series of
decompositions, followed by a series of aggregations. For example, a software project to develop an SPC software
application would disaggregate the customer requirements into very specific engineering requirements. The customer
requirement that the product create x̄ charts would be decomposed into engineering requirements such as
subroutines for computing subgroup means and ranges, plotting data points, drawing lines, etc. Re-aggregation would
involve, for example, linking the various modules to produce an x̄ chart and display it on the screen.
The project deliverables expected by the project’s sponsors were initially defined in the Project Charter. For most Six
Sigma projects, major project deliverables are so complex as to be unmanageable. Unless they are broken into
components, it isn’t possible to obtain accurate cost and duration estimates for each deliverable. WBS creation is the
process of identifying manageable components or sub-products for each major deliverable.
Project schedules are developed to ensure that all activities are completed, reintegrated, and tested on or before
the project due date. The output of the scheduling activity is a time chart (schedule) showing the start and finish
times for each activity as well as its relationship to other activities in the project and responsibility for completing
the activity. The schedule must identify activities that are critical in the sense that they must be completed on
time to keep the project on schedule.
The information obtained in preparing the schedule can be used to improve it. Activities that the analysis
indicates to be critical are prime candidates for improvement. Pareto analysis can be used to identify those
critical elements that are most likely to lead to significant improvement in overall project completion time. Cost
data can be used to supplement the time data and the combined time/cost information can be analysed using
Pareto analysis.
What is the latest completion date that allows the project to meet its objective?
What are the penalties for missing this date? Things to consider are lost market share, contract penalties,
fines, lost revenues, etc.
Activity Definition
Once the WBS is complete, it can be used to prepare a list of the activities (tasks) necessary to complete the
project. Activities don’t simply complete themselves. The resources, time, and personnel necessary to complete
the activities must be determined. A common problem to guard against is scope creep. As activities are
developed, be certain that they do not go beyond the project’s original scope. Equally common is the problem
of scope drift. In these cases, the project focus gradually moves away from its original Charter. Since the activities
are the project, this is a good place to carefully review the scope statement in the Project Charter to ensure that
the project remains focused on its goals and objectives.
Activity Dependencies
Some project activities depend on others: sometimes a given activity may not begin until another activity is complete.
To sequence activities so they happen at the right time, Green Belt/Black Belt must link dependent activities and specify
the type of dependency. The linkage is determined by the nature of the dependency. Activities are linked by defining
the dependency between their finish and start dates
In addition to knowing the dependencies, to schedule the project Green Belt/Black Belt also need estimates of
how long each activity might take. This information will be used by senior management to schedule projects for
the enterprise and by the project manager to assign resources, to determine when intervention is required, and
for various other purposes.
Resources Needed to Complete the Project - All resources should be identified, approved, and procured. Green Belt/Black
Belt should know who is to be on your team and what equipment and materials Green Belt/Black Belt are
acquiring to achieve project goals. In today’s business climate, it’s rare for people to be assigned to one project
from start to finish with no additional responsibilities outside the scope of a single project. Sharing resources
with other areas of the organisation or among several projects requires careful resource management to ensure
that the resource will be available to your project when it is needed.
A Gantt chart shows the relationships among the project tasks, along with time estimates. The horizontal axis of
a Gantt chart shows units of time (days, weeks, months, etc.). The vertical axis shows the activities to be
completed. Bars show the estimated start time and duration of the various activities.
A project network diagram shows both the project logic and the project’s critical path activities, i.e., those
activities that, if not completed on schedule, will cause the project to miss its due date. Although useful, Gantt
charts and their derivatives provide limited project schedule analysis capabilities. The successful management
of large-scale projects requires more rigorous planning, scheduling, and coordinating of numerous, interrelated
activities. To aid in these tasks, formal procedures based on the use of networks and network techniques were
developed beginning in the late 1950s. The most prominent of these procedures have been PERT (Program
Evaluation and Review Technique) and CPM (Critical Path Method).
Evaluate the effect of diverting resources from the project or redirecting additional resources to the project.
The planning phase involves breaking the project into distinct activities. The time estimates for these activities are then
determined and a network (or arrow) diagram is constructed, with each activity being represented by an arrow.
The ultimate objective of the scheduling phase is to construct a time chart showing the start and finish times for each
activity as well as its relationship to other activities in the project. The schedule must identify activities that are critical
in the sense that they must be completed on time to keep the project on schedule.
It is vital not to merely accept the schedule as given. The information obtained in preparing the schedule can be used
to improve it. Activities that the analysis indicates to be critical are candidates for improvement. Pareto analysis can be
used to identify those critical elements that are most likely to lead to significant improvement in overall project
completion time. Cost data can be used to supplement the time data. The combined time/cost information can be
analyzed using Pareto analysis.
The final phase in CPM project management is project control. This includes the use of the network diagram and time
chart for making periodic progress assessments. CPM network diagrams can be created by a computer program or
constructed manually.
The Critical Path Method (CPM) calculates the longest path in a project so that the project manager can focus on the
activities that are on the critical path and get them completed on time.
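The forward-pass idea behind CPM can be sketched in a few lines of Python. The four activities, durations, and dependencies below are hypothetical, not taken from a real project:

```python
# Minimal CPM sketch: earliest finish times via a forward pass over a small
# hypothetical activity network (durations in days).
durations = {"A": 3, "B": 2, "C": 4, "D": 2}
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

earliest_finish = {}
for act in ["A", "B", "C", "D"]:  # activities listed in topological order
    # An activity can start only after all its predecessors have finished
    start = max((earliest_finish[p] for p in predecessors[act]), default=0)
    earliest_finish[act] = start + durations[act]

# The project duration equals the length of the longest (critical) path
project_duration = earliest_finish["D"]
print(project_duration)  # 9  (critical path A -> C -> D)
```

A full CPM implementation would add a backward pass to compute slack for each activity; activities with zero slack lie on the critical path.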
Storm:
• Teams start to become disillusioned. Why are we here? Is the goal achievable?
• Identify resistors, counselling to reduce resistance.
• Help people with the new roles & responsibilities.
• Have a different person take meeting minutes, lead team meetings, etc.
Stakeholder Analysis
Stakeholder Analysis is a technique that identifies individuals or groups affected by and capable of influencing
the change process. Assessing Stakeholders and their issues and viewpoints is necessary to identify the range of
interests that need to be taken into consideration in planning change, and to develop the vision and change
process in a way that generates the greatest support. The following parameters are used to develop the
segmentation of the stakeholders:
• The plan should outline the perceptions and positions of each Stakeholder group, including means of involving them in the change process and securing their commitment
• Define how the Green Belt/Black Belt intends to leverage the positive attitudes of enthusiastic stakeholders and those who “own” resources supportive of change
• State how the Green Belt/Black Belt plans to minimize risks, including the negative impact of those who will oppose the change
• Clearly communicate change actions, their benefits and desired Stakeholder roles during the change process
This plan should be updated regularly and should continue throughout the life of the project.
Stakeholders believe Six Sigma produces feelings of inadequacy or stupidity on statistical and process knowledge.
Strategy: Focus on high level concepts to build competencies. Then add more statistical theorems as the knowledge base broadens.

Stakeholders see Six Sigma as a loss of power and control.
Strategy: Address issues of “perceived loss” straight on. Look for Champions to build consensus for Six Sigma and its impact on change.

Organisational Resistance: Stakeholders experience issues of pride, ego and loss of ownership of change initiatives.
Strategy: Look for ways to reduce resistance by engaging stakeholders in the process.

Individual Resistance: Stakeholders experience fear and emotional paralysis as a result of high stress.
Strategy: Decrease the fear by increased involvement, information and education.
5
Define
Define Overview
A Lean Six Sigma project starts out as a practical problem that is adversely impacting the business and ultimately
ends up as a practical solution that improves business performance. Projects state performance in quantifiable
terms that define expectations related to desired levels of performance and timeline.
The primary purpose of the define phase is to ensure the team is focusing on the right metric. The Define phase
seeks to answer the question, "What is important?" That is, what is important for the business? The team should
work on something that will impact the Big Y's - the key metrics of the business. If Six Sigma implementation is
not driven from the top, a Green/Black Belt may not see the big picture and the selection of a project may not
address something of criticality to the organisation.
The objective is to identify and/or validate the improvement opportunity, develop the business processes, define
Critical Customer Requirements, and prepare an effective project team. The deliverables from the Define phase
include:
Table 7
VOC, VOB and COPQ data help us develop project themes which can help us understand the Big Y (output).
Voice of the Customer (VOC) is obtained from the downstream customer, the direct recipient of the
process/service. This can be internal (Process Partner) to the company or an external customer.
When obtained from an internal Process Partner, it tends to be very specific, but might need to be
validated with information from the ultimate external customer (as external requirements flow
backwards through the broader process steps)
When obtained from an ultimate external customer, the needs must often be translated into something
meaningful for the process/service developer
“Voice of the Customer” (VOC) is the expression of customer needs and desires
May be specific – “I need delivery on Tuesday”
May be ambiguous – “Deliver faster”
It can be compared to internal data (“Voice of the Process”) to assess our current process performance or process
capability
Voice of the Business (VOB) is often best obtained from the Process Owner
Tends to be very specific. Example: Lead time of 2 hours, labour efficiency of 85% of standard.
These are usually in reference to the health of the organization
Examples: Process Cost Efficiency, Repair Costs, etc. (Derived primarily from the business - VOB)
Quality

Cost
• Process Cost Efficiency, Price to Consumer (Initial Plus Life Cycle), Repair Costs, Purchase Price, Financing Terms, Depreciation, Residual Value (Derived Primarily from the Business – VOB)

Speed
• Lead Times, Delivery Times, Turnaround Times, Setup Times, Cycle Times, Delays (Derived equally from the Customer or the Business – VOC/VOB)

• Ethical Business Conduct, Environmental Impact, Business Risk Management, Regulatory and Legal Compliance
Once the team has the pertinent Voice of customer, Voice of the business and Cost of poor quality data, the
team needs to translate this information into Critical Customer Requirements (CCR's). A CCR is a specific
characteristic of the product or service desired by and important to the customer. The CCR should be measurable
with a target and an allowable range. The team would take all the data, identify the key common customer
issues, and then define the associated CCR's. The Critical Customer Requirements (CCR's) often are at too high a
level to be directly useful to the team. So the next step is to take the applicable CCR's and translate them into
Critical to Quality (CTQ) measures. A CTQ is a measure on the output of the process. It is a measure that is
important to meeting the defined CCR's.
Table 9
Of course there are multiple alternative methods to drill down to an appropriate CTQ. Project team may also
choose to drill down to the Project CTQ from the Big Y.
A few examples of CTQs:
• Warranty Cost
• % errors in Patient Registration Form
• Average Handling Time
• Claims Processing Cycle time
• Waiting time
• Idle time
• Defects %
• Defective %
• Daily Sales Outstanding
• % Incomplete Applications
_________________________________________
_________________________________________
_________________________________________
_________________________________________
_________________________________________
_________________________________________
_________________________________________
Ideally, Green Belts and Black Belts are expected to work on projects and are not directly responsible for
generation or selection of Six Sigma projects. Six Sigma projects are selected by senior management on certain
criteria. These criteria include linkage between the proposed project and company strategy, expected timeline
for project completion, expected revenue from the projects, whether data exists to work on the problem, and
whether the root cause or solution is already known. Table 10 shows a typical project selection template used
by management to pick projects. The projects that have the highest net score are the ones that get picked for
execution.
Table 10: Example Project Selection Template

Number  Sponsor     Project Description                             Costs  Benefits  Timeline  Strategy  Risk  Score
1       Job Smith   Reduce Inventory levels for 123 series product  0      1,00,000  6 months  High      Low   4.4
2       Bob Bright  Improve efficiency for machine 456-2333         5000   2,00,000  6 months  Medium    Low   5.0
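A net score like the ones in Table 10 is typically a weighted sum of criterion ratings. The sketch below is illustrative only: the weights and the 1-5 ratings are assumptions for demonstration, not the book's actual scoring formula:

```python
# Illustrative weighted scoring for project selection. The criterion weights
# and the 1-5 ratings below are hypothetical, not the template's real values.
weights = {"benefits": 0.4, "strategy": 0.3, "timeline": 0.2, "risk": 0.1}

projects = {
    "Reduce inventory":   {"benefits": 3, "strategy": 5, "timeline": 4, "risk": 5},
    "Improve efficiency": {"benefits": 5, "strategy": 3, "timeline": 4, "risk": 5},
}

# Net score per project: sum of weight * rating over all criteria
scores = {
    name: sum(weights[c] * rating for c, rating in ratings.items())
    for name, ratings in projects.items()
}
best = max(scores, key=scores.get)  # highest net score gets picked
print(scores, best)
```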
Effort
Can the project be completed in 3 to 6 months? A Good Project Must Be Manageable. Prolonged
projects risk loss of interest and start building frustration within the team and all the way around. The
team also runs the risk of disintegrating.
Project Scope
• What are the boundaries of the initiative (start and end points of the process or parts of a system)?
• What authority do we have? What is not within scope?
Opportunity Statement
The opportunity statement describes the “why” of undertaking the improvement initiative. It should address the
following questions:
Goal Statement
The goal statement should be most closely linked with the Opportunity statement. The goal statement defines
the objective of the project to address the specific pain area, and is SMART (Specific, Measurable, Achievable,
Relevant and Time-bound). The goal statement addresses:
What is the improvement team seeking to accomplish?
How will the improvement team’s success be measured?
What specific parameters will be measured? These must be related to the Critical to Cost, Quality, and/or
Delivery (Collectively called the CTQ’s).
What are the tangible results deliverables (e.g., reduce cost, cycle time, etc.)?
What is the timetable for delivery of results?
Project Scope
The project scope defines the boundaries of the business opportunity. One of the Six Sigma tools that can be
used to identify/control project scope is called the In-Scope/Out-of Scope Tool. Project Scope defines:
What are the boundaries, the starting and ending steps of a process, of the initiative?
Green Belt
Is the Team Leader for a Project within own functional area
Selects other members of his project team
Defines the goal of project with Champion & team members
Defines the roles and responsibilities for each team member
Identifies training requirements for team along with Black Belt
Helps make the Financial Score Card along with his CFO
Black Belt
Leads project that are cross-functional in nature (across functional areas)
Ends role as project leader at the close of the turnover meeting
Trains others in Six Sigma methodologies & concepts
Sits along with the Business Unit Head and helps project selection
Provides application assistance & facilitates team discussions
Helps review projects with Business Unit Head
Informs Business Unit Head of project status for corrective action
Team Member:
A Team Member is chosen for a special skill or competence
Team Members help design the new process
Team Members drive the project to completion
Deployment Champion:
Responsible for the overall Six Sigma program within the company
Reviews projects periodically
Adds value in project reviews since he is hands-on in the business
Clears road blocks for the team
Has the overall responsibility for the project closure
Project Plan
The project plan shows the timeline for the various activities required for the project. Some of the tools that can
be used to create the project timeline are the Network Diagram and the Gantt chart. We may also identify the
activities on the critical path and set milestones for tollgate reviews to ensure timely completion of the
project. *Please refer to Chapter 4 to learn more about Project Management practices
Project Charter sections are largely interrelated: as the scope increases, the timetable and the deliverables also
expand. Whether initiated by management or proposed by operational personnel, many projects initially have
too broad a scope. As the project cycle time increases, the tangible cost of the project deployment, such as cost
due to labour and material usage, will increase. The intangible costs of the project will also increase: frustration
due to lack of progress, diversion of manpower away from other activities, and delay in realization of project
benefits, to name just a few. When the project cycle time exceeds 6 months or so, these intangible costs may
result in the loss of critical team members, causing additional delays in the project completion. These “world
peace” projects, with laudable but unrealistic goals, generally serve to frustrate teams and undermine the
credibility of the Six Sigma program.
[Example SIPOC: Employee Setup and Contractor Setup processes, with Employees and Contractors as suppliers,
and the Project Manager and Team as customers of the Active Employee/Contractor Data and Records.]
A SIPOC helps the Six Sigma team and those working on the process to agree on the boundaries of what they will
be working on. It provides a structured way to discuss the process and reach consensus on what it involves
before rushing off and drawing process maps.
1. Define the Outputs of the process. These are the tangible things that the process produces (e.g. a
report, or letter).
2. Define the Customers of the process. These are the people who receive the Outputs. Every Output
should have a Customer.
3. Define the Inputs to the process. These are the things that trigger the process. They will often be
outputs of other processes.
4. Define the Suppliers of the process. These are the people or organisations that provide the Inputs.
Every Input should have a Supplier. In some “end-to-end” processes, the supplier and the customer may be the same
person.
5. Define the sub-processes that make up the process. These are the activities that are carried out to
convert the inputs into outputs.
2. Primary and Secondary critical to quality metrics are identified from voice of customer, voice of business and
cost of poor quality. They relate to the legs of the balanced scorecard.
3. A project charter is created which consists of Background of the project, CTQ and Goals, Business case, Team
members, Project Scope and Project Plan.
b) The Business Case should be clearly documented, as it creates urgency and helps top management
realize the need for the project
c) Project Scope – in-scope and out-of-scope elements can be listed, or we may use a SIPOC to scope the
project. This reduces the risk of scope creep.
e) Project Charter is a living document and can/may be revised. Revisions should be signed off by the
project sponsor
4. SIPOC (Supplier, Input, Process, Output, Customer) helps us identify the inputs and their suppliers, output and
their customers and the high level process map. It can also be used to scope and define boundaries of a project.
What is the problem statement – detailing (what) the problem is, (when) the problem was first seen, (where)
it was seen, and what the (magnitude or extent) of the problem is. Is the problem measured in terms of Quality,
Cycle Time, Cost Efficiency, or net expected financial benefits? We must ensure there are no assumptions about
causes and solutions.
Does a goal statement exist that defines the results expected to be achieved by the process, with reasonable
and measurable targets? Is the goal developed for the “what” in the problem statement, thus measured in
terms of Quality, Cycle Time or Cost Efficiency?
Does a financial business case exist, explaining the potential impact (i.e. measured in dollars) of the project on
the organisation budgets, Net Operating Results, etc.?
Is the project scope reasonable? Have constraints and key assumptions been identified?
Who is on the team? Are they the right resources and has their required time commitment to the project been
confirmed by your Sponsor and team?
What is the high level Project plan? What are the key milestones (i.e. dates of tollgate reviews for DMAIC
projects)?
Who are the customers for this process? What are their requirements? Are they measurable? How were the
requirements determined?
Who are the key stakeholders? How will they be involved in the project? How will progress be communicated
to them? Do they agree to the project?
What kinds of barriers / obstacles will need assistance to be removed? Has the risk mitigation plan to deal with
the identified risks been developed?
6
Measure
Measure Phase Overview
The primary purpose of the Measure phase is to answer the question, "How are we doing?" In other words, the
team must baseline the current state of each Critical to Quality or Critical to Process measure. Many times in a
project, the critical measure identified by the team is not an already reported measure. Although the team may
not think the measure is meeting the goal, they need to collect data to verify the current performance level.
The objective is to identify and/or validate the improvement opportunity, develop the business processes, define
critical customer requirements, and prepare an effective project team. The three steps that enable us to do so
are:
Operational Definition
An Operational Definition is a precise definition of the specific Y (output measure) to be measured. The data
collected using this definition will be used to baseline the performance. The purpose of this definition is to
provide a single, agreed upon meaning for each specific Y (output). This helps in ensuring reliability and
consistency during the measurement process. Although the concept is simple, the task of creating a definition
should not be underestimated. A clear concise operational definition will ensure reliable data collection and
reduction in measurement error.
Table 13
Attribute measurement systems are the class of measurement systems where the measurement value is one
of a finite number of categories. The most common of these is a go/no-go gage which has only two possible
results. Other attribute systems, for example visual standards, may result in five to seven classifications, such
as excellent, good, fair, poor, very poor.
Attribute Agreement Analysis: Some measurement systems categorize items by their attributes to separate
“good” items from “bad” ones, sort samples into “blue,” “green,” and “cyan” groups, and assign invoices to
“engineering,” “production,” or “sales” departments. These types of measurement systems are called attribute
measurement systems because they determine or measure one or more attributes of the item being inspected.
The question is, how repeatably and reliably can one of these systems determine the specific attribute we are
looking for? For example, how repeatably and reliably does your attribute measurement system detect “bad”
disk drives from among all the “good” ones being completed in production?
To quantify how well an attribute measurement system is working, Attribute Agreement Analysis is performed.
Attribute Repeatability & Reproducibility studies can be done using statistical software packages which provide
graphical output and other summary information; however, they are often done by hand due to the
straightforward nature of the calculations.
2. Create a “master” standard that designates each of the test samples into its true attribute category.
3. Select two or three typical inspectors and have them review the sample items just as they normally
would in the measurement system, but in random order. Record their attribute assessment for each
item.
4. Place the test samples in a new random order, and have the inspectors repeat their attribute
assessments. (Don’t reveal the new order to the inspectors!). Record the repeated measurements.
5. For each inspector, go through the test sample items and calculate the percentage of items where their
first and second measurements agree. This percentage is the repeatability of that inspector.
6. Going through each of the sample items of the study, calculate the percentage of times where all of the
inspectors’ attribute assessments agree for the first and second measurements for each sample. This
percentage is the reproducibility of the measurement system.
7. You can also calculate the percentage of the time all the inspectors’ attribute assessments agree with each
other and with the “master” standard created in Step 2. This percentage is referred to as the
Accuracy (effectiveness) of the measurement system.
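The percentage calculations in Steps 5 to 7 can be sketched in a few lines of Python. The inspector names and ratings below are hypothetical; only the agreement arithmetic follows the procedure above:

```python
# Hypothetical attribute agreement study: 2 inspectors rate 5 items twice.
master = ["good", "bad", "good", "good", "bad"]   # true categories (Step 2)
trial1 = {"Amber":    ["good", "bad", "good", "bad",  "bad"],
          "Benjamin": ["good", "bad", "good", "good", "bad"]}
trial2 = {"Amber":    ["good", "bad", "good", "bad",  "bad"],
          "Benjamin": ["good", "bad", "good", "good", "bad"]}
n = len(master)

# Step 5 - Repeatability: % of items where an inspector's two trials agree.
repeatability = {name: 100 * sum(a == b for a, b in zip(trial1[name], trial2[name])) / n
                 for name in trial1}

# Step 6 - Reproducibility: % of items where ALL ratings (both trials,
# all inspectors) agree with each other.
all_ratings = [{trial1[name][i] for name in trial1} |
               {trial2[name][i] for name in trial2} for i in range(n)]
reproducibility = 100 * sum(len(r) == 1 for r in all_ratings) / n

# Step 7 - Team Accuracy: % of items where everyone also agrees with the master.
accuracy = 100 * sum(r == {master[i]} for i, r in enumerate(all_ratings)) / n
```

In this made-up data both inspectors are perfectly repeatable, but they disagree with each other (and the master) on one of the five items, so reproducibility and team accuracy are both 80%.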
Industry Standard
Data Collection | Team Accuracy | Measurement System Status | Remarks
Reproducibility:_____________________________________
________________________________________________
________________________________________________
Accuracy
Appraiser Accuracy: -
________________________________________________
________________________________________________
________________________________________________
Team Accuracy:- -
________________________________________________
________________________________________________
________________________________________________
Repeatability
Individual Accuracy
Reproducibility
Team Accuracy
Repeatability
Individual Accuracy
Reproducibility
Team Accuracy
Q. The service team at Fantard Shuttered Bank checks application forms to verify whether all documents are
complete. There have been recent complaints about sending incomplete application forms to the credit check
team. The Service manager wanted to assess the reliability of the current inspection activity and therefore got
25 application forms checked by 4 service team executives. The service manager sent the same application forms
again to the 4 service team executives for inspection. The 4 service team executives weren’t aware that they
were inspecting the same forms. Is this Measurement system acceptable? Worksheet: Shuttered Bank.mtw
Repeatability
Individual Accuracy
Reproducibility
Team Accuracy
The figure above shows an example data collection plan for a project focused on reducing cycle time.
One major determinant of the duration of a Six Sigma project is the time required to collect the data. The time
required is dependent on how frequently the data are available in the process and the ease of collection. The
team needs to be involved in the data collection to make sure it is done right and that any anomalies or problems
are recorded and understood. This will make the analysis of the data in the Analyse phase easier.
Sigma Level
Sigma Level is a very commonly used metric for evaluating the performance of a process. Sigma level can be
computed for continuous data, discrete data, and yield; it can therefore be computed for most Y measures. A
sigma level corresponds to a certain Defects Per Million Opportunities (DPMO). The higher the Sigma Level (Z),
the lower the DPMO, which simply means a lower defect rate.
Table 15
Example: VarSigma has specified to the Hotel that the temperature in the training room should be 21 ± 3
degrees. The average room temperature is 22 and the standard deviation is 2. What is the Sigma level?
Example: 500 mobiles were inspected. We inspected 5 characteristics in each mobile. We observed 224 defects.
What is the Sigma level?
Example: 1000 projectors were inspected. We observed 116 defects. What is the Sigma level?
Example: 1000 shirts were inspected. 31 shirts were defective. What is the Sigma level?
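The examples above can be computed with Python's standard library. Note that adding a 1.5-sigma shift to convert the long-term Z into Zst is the common Six Sigma convention, shown here as an assumption rather than a universal rule:

```python
from statistics import NormalDist

# Discrete data (mobiles example): 500 units, 5 opportunities each, 224 defects.
defects, units, ofe = 224, 500, 5
dpo = defects / (units * ofe)         # defects per opportunity = 0.0896
dpmo = dpo * 1_000_000                # = 89,600 DPMO
z_lt = NormalDist().inv_cdf(1 - dpo)  # long-term Z from the normal distribution
z_st = z_lt + 1.5                     # conventional 1.5-sigma shift gives Zst

# Continuous data (training-room example): spec 21 +/- 3, mean 22, s = 2.
usl, lsl, mean, s = 24, 18, 22, 2
z = min((usl - mean) / s, (mean - lsl) / s)  # distance to the nearer spec limit
```

For the continuous case the sigma level is driven by the nearer specification limit, here the USL, giving Z = 1.0.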
Term Expansion
OFE Opportunities for error
TOFE Total Opportunities for error
DPO Defects per opportunity
DPU Defects per unit
DPMO Defects per million opportunities
RTY Rolled throughput Yield
FTY or FPY First time Yield or First Pass Yield
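To illustrate how the terms in the table relate, here is a short sketch using the projector numbers from the examples. The Poisson approximation FTY ≈ e^(-DPU) and the three step yields used for RTY are assumptions for illustration only:

```python
import math

# Projectors example: 1000 units inspected, 116 defects observed.
units, defects = 1000, 116
dpu = defects / units        # Defects Per Unit = 0.116

# If defects are randomly distributed (Poisson assumption), the probability
# that a unit has zero defects approximates the First Time Yield:
fty = math.exp(-dpu)         # about 0.89

# Rolled Throughput Yield: the product of the first-pass yields of every
# step in a multi-step process (step yields below are hypothetical).
step_yields = [0.98, 0.95, 0.99]
rty = math.prod(step_yields)
```

RTY is always at or below the lowest individual step yield, which is why it is a stricter measure of end-to-end process performance than any single FTY.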
Exercise 1: A shirt manufacturer wants to assess the performance of a shirt cutting machine. They have audited
100 shirts of 16" neck size. The accepted specification is 16 ± 0.5. Please compute the Zst value. The
customer expects a minimum of 95% of the shirt collars to be within specifications.
Worksheet: Neck Measurement.mtw
Exercise 2: A KPO (Knowledge Process Outsourcing) processes reimbursement receipts for their customers. The
time to process should be no more than 30 minutes and the customer expects at least 90% of invoices should
be processed within 30 minutes. What is the probability of processing the reimbursement bills within 30
minutes? Please compute the Zst value.
Worksheet: Reimbursement bill.mtw
Exercise 3: The maximum time allowed to complete a platelet count test is 90 minutes. The lab analyst
randomly selected a sample and recorded the time taken to carry out each test. Please compute the current
Sigma Level (Zst).
Worksheet: Platelet count.mtw
Exercise 4: The circumference of a cricket ball should be between 224 mm and 229 mm. Dookaburra collected a
sample of 100 cricket balls and recorded the circumference of each. Please compute Zst.
Worksheet: Cricket Balls.mtw
Exercise 5: Laptops are being inspected to check whether the following components are functional: RAM,
Motherboard, Screen, Hard disk, Bluetooth, DVD-Drive, and Memory Slot. 500 laptops are inspected and 119
defects are observed. What is Zst?
Exercise 6: The circuit board of projectors is inspected. The OFE is unknown. 1000 projectors are inspected and
43 defects are observed. What is Zst?
Exercise 7: Phone calls are recorded at a call centre and later evaluated. A sample of 200 calls are heard by the
quality representatives and evaluated based on a Call Quality Checklist. There are 14 Opportunities for error in
each call. 284 defects were observed in the sample of 200 calls. What is Zst?
Process capability is the ability of the process to meet the requirements set for that process. Capability indices
determine process capability with respect to variation and proximity to target. Capability indices are used for
continuous data and are unitless statistics or metrics.
Cp
It is the potential capability, indicating how well a process could perform if it were centred on target. This is
not necessarily its actual performance, because it does not consider the location of the process, only the
spread. It does not take into account the closeness of the estimated process mean to the specification
limits.
It compares the width of a two-sided specification to the effective short-term width of the process.
The specification width is the distance between the USL and the LSL.
To determine the process width, Six Sigma practitioners defined the effective limits of any process as
being three standard deviations from the average. In a normally distributed population, 99.73% of
the variation is within ± 3s of the process average, and this is therefore considered the process width.
Cp only compares the width of the specification and the process, and therefore any change in the process
average is not accounted for in Cp.
Example: VarSigma has specified to the Hotel that the temperature in the training room should be 21 ± 3
degrees. The average room temperature is 22 and the standard deviation is 0.5. What is Cp?
It is the resultant process capability: it considers not only variation but also the location of the data,
by accounting for the closeness of the mean to the target and specification limits.
It compares width of the specification and width of the process while also accounting for any change in
process average (location of central tendency)
Rule of thumb: Cpk > 1.33 indicates that the process or characteristic is capable in the short term.
Values less than this may mean the variation is too wide compared to the specification, the process average
(location) is away from the target, or a combination of both.
Inputs: USL = 24, LSL = 18, Target = 21, Mean = 22, sst = 0.5
Cpu = (USL - Mean)/(3 × sst) = (24 - 22)/(3 × 0.5) = 1.33
Cpl = (Mean - LSL)/(3 × sst) = (22 - 18)/(3 × 0.5) = 2.67
Cpk = min(Cpu, Cpl) = min(1.33, 2.67) = 1.33
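The worked example above can be reproduced as a short sketch (same inputs, same formulas):

```python
# Inputs from the worked example: spec 21 +/- 3, mean 22, short-term s = 0.5.
usl, lsl, mean, s_st = 24, 18, 22, 0.5

cp = (usl - lsl) / (6 * s_st)    # potential capability: spec width / process width = 2.0
cpu = (usl - mean) / (3 * s_st)  # capability against the upper spec limit = 1.33
cpl = (mean - lsl) / (3 * s_st)  # capability against the lower spec limit = 2.67
cpk = min(cpu, cpl)              # actual capability reflects location = 1.33
```

Note that Cpk can never exceed Cp; the gap between them reflects how far the mean sits from the centre of the specification.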
USL | LSL | Target | Mean | sst
24 | 18 | 21 | 23 | 0.5
24 | 18 | 21 | 22 | 0.5
24 | 18 | 21 | 21 | 0.5
24 | 18 | 21 | 20 | 0.5
24 | 18 | 21 | 19 | 0.5
Questions to answer
Inferences
If Cpk = Cp, Process mean _____ Target
If Cp=1, Process Width _____ Specification Width
Cp: ____
Cpk: ____
Action: Move Mean closer to target / Reduce Variation
Exercise 2: An extrusion die is used to produce aluminium rods. The diameter of the rods is a critical quality
characteristic. Specifications on the rods are 0.5035 ± 0.0010 inch. The process average is 0.5031 and the
estimated standard deviation is 0.0003. Please compute Cp and Cpk.
Cp: ____
Cpk: ____
Action: Move Mean closer to target / Reduce Variation
Has the team identified the specific input (x), process (x), and output (y) measures needing to be
collected for both effectiveness and efficiency categories (i.e., Quality, Speed, and Cost Efficiency
measures)?
Has the team developed clear, unambiguous operational definitions for each measurement and tested
them with others to ensure clarity and consistent interpretation?
Has a clear, reasonable choice been made between gathering new data or taking advantage of existing
data already collected by the organization?
Has an appropriate sample size and sampling frequency been established to ensure valid representation
of the process we are measuring?
Has the measurement system been checked for repeatability and reproducibility, potentially including
training of data collectors?
Has the team developed and tested data collection forms or check sheets which are easy to use and
provide consistent, complete data?
Has baseline performance and process capability been established? How large is the gap between
current performance and the customer (or project) requirements?
7
Analyze
Analyze Phase Overview
In the Analyze phase, the question to be answered is "What is wrong?" In other words, in this phase the team
determines the root causes of the problems of the process. The team identified the problems of the process in
the Define and Measure phases of the project. The key deliverable from this phase is validated root causes. The
team will use a variety of tools and statistical techniques to find these root causes. The team must choose the
right tools, as there is no fixed set of tools for a given situation. The tools chosen
are based on the type of data that were collected and what the team is trying to determine from the data. The
team usually would use a combination of graphical and numerical tools. The graphical tools are important to
understand the data characteristics and to ensure that the statistical analyses are meaningful (e.g., not
influenced by outliers). The numerical (or statistical) analyses ensure that any differences identified in the graphs
are truly significant and not just a function of natural variation.
The team should have already identified the process outputs (the CTQ's and CTP's, or the little Y's) in the Define
phase. The X's are process and input variables that affect the CTQ's and CTP's. The first consideration in the
Measure phase is to identify the x data to collect while baselining these y variables. The y data are needed to
establish a baseline of the performance of the process. The x data are collected concurrently with the Y's so
that the relationships between the X's and Y's can be studied in the Analyze phase. The little Y's, in turn, impact
the Big Y's, the key metrics of the business. If Six Sigma is not driven from the top, a Green/Black Belt may not
see the big picture and the selection of a project may not address something critical to the organization.
The objective is to identify and/or validate the improvement opportunity, develop the business processes, define
critical customer requirements, and prepare an effective project team. The three steps that enable us to do so
are:
The first step in analyzing the process is developing detailed process maps. The initial AS-IS process maps should
always be created by cross-functional team members and must reflect the actual process rather than an ideal or
desired state.
Start with a high level map: It can be very useful to start with a high level process map of say five to ten
steps. Six Sigma team would have developed a High Level process map (SIPOC) in Define Phase. This
helps to establish the scope of the process, identify significant issues and frame the more detailed map.
The Six Sigma team may choose a Top-Down Flow Chart, which describes the activities of the process in
hierarchical order, or a Functional Deployment Flow Chart, which shows the different functions that are
responsible for each step in the process flow chart.
Map the process like a flowchart, detailing each activity, arrow, and decision. For each arrow, box, and
diamond, list its function and the time spent (in minutes, hours, days).
[Example Functional Deployment Flow Chart. Roles responsible: Clerk, Supervisor, Materials Management,
Scheduler. Steps: Log-in Order, Prioritize Order, Review for Specifications, Materials Explosion, Schedule
Fabrication, Inspection, Distribution.]
Time: One of the most effective lean tools for understanding waste in a process map is Value-Add/Non-Value-Add
analysis. It assists in analysing:
Time-per-event (reducing cycle time)
Process repeats (preventing rework)
Duplication of effort (identifying and eliminating duplicated tasks)
Unnecessary tasks (eliminating tasks that are in the process for no apparent reason)
Identifying and segregating Value-added and non-value-added tasks
W________________________________________________
O_________________________________________________
R__________________________________________________
M_________________________________________________
P_________________________________________________
I__________________________________________________
T__________________________________________________
Step 1: Map the process like a flowchart detailing each activity, arrow and decision.
Step 2: For each arrow, box, and diamond, list its function and the time spent (in minutes, hours, days)
Step 3: Now become the customer. Step into their shoes and ask the following questions:
Step 4: If the answer to any of these questions is “yes”, then the step may be non-value-added.
o If so, can we remove it from the process? Much of the idle, non-value-adding time in a process
lies in the arrows: orders sit in in-boxes or computers waiting to be processed, calls wait in queues.
Step 5: How can activities and delays be eliminated, simplified, combined, or reorganized to provide a
faster, more streamlined process?
o Investigate hand-off points: how can you eliminate delays and prevent lost, changed, or
misinterpreted information or work products at these points? If there are simple, elegant, or
obvious ways to improve the process now, revise the flowchart to reflect those changes.
Amber and Benjamin have opened a hot dog stand at their local park. They offer a hot dog with choice of
fresh fruit and beverage to walk up customers between 10 AM and 2 PM. Customers put on their own
condiments. Customers say their hot dogs are good, but the wait is a little long.
After two weeks, they have a brisk and growing business. Benjamin and Amber notice they are barely
keeping up with the customer demand, and making a little money after buying their supplies at the end of
each day. They would like to improve their process to meet growing customer demand. They collected the
following data for their business processes and need help analyzing it.
Risk assessment is the determination of the quantitative and qualitative value of risk related to a concrete
situation and a recognized threat (also called a hazard).
Identification of risks
Failure Modes and Effects Analysis is a structured method to identify failure modes and determine the severity of
the failure, the cause of the failure, the frequency of the failure, the current controls in place, and the efficiency of
those controls. This enables us to evaluate current risks in the process and thereafter develop an action plan to mitigate them.
Failure Mode and Effects Analysis is a structured and systematic process to identify potential design and process failures
before they have a chance to occur with the ultimate objective of eliminating these failures or at least minimizing their
occurrence or severity. FMEA helps in
Identifying the areas and ways in which a process or system can fail (failure mode)
Identifying and prioritizing the actions that should be taken to reduce those risks
Often, catastrophes are the result of more than one failure mechanism which occur in tandem.
Each individual failure mechanism, if it had occurred by itself, would probably have resulted in a far less “deadly
outcome”.
Effects are usually events that occur downstream that affect internal or external customers.
Root causes are the most basic causes within the process owner’s control. Causes, and root causes, are in the
background; they are an input resulting in an effect.
Failures are what transform a cause to an effect; they are often unobservable.
Identifying and evaluating potential product and process related failure modes, and the effects of the potential
failures on the process and the customers
Identifying process variables on which to focus process controls for occurrence reduction or increased
detection of the failure conditions
Enabling the establishment of a priority system for preventive/corrective actions and controls
During the development of a PFMEA, a team may identify design opportunities which if implemented, would either
eliminate or reduce the occurrence of a failure mode. Such information should be provided to Process Improvement/
Design expert for consideration and possible implementation.
A PFMEA is developed and maintained by a multi-disciplinary (or cross-functional) team, typically led by the
Green Belt, Black Belt, or Process Owner.
Developing PFMEA
The PFMEA begins by developing a list of what the process is expected to do and not do. This is also known as
the process intent.
A flow chart of the general process should be developed. It should identify the product/process characteristics
associated with each operation.
The process flow diagram is a primary input for development of PFMEA. It helps establish the scope of the
analysis.
o The initial flow diagram is generally considered a high level process map
o A more detailed process map is needed for identification of potential failure modes
The PFMEA should be consistent with the information in the process flow diagram
Requirement(s) should be identified for each process/ function. Requirements are the outputs of each
operation/step and related to the requirements of the product/service. The Requirement provides a
description of what should be achieved at each operation/step. The Requirements provide the team with a
basis to identify potential failure modes
o Performance indicators (Cp, Cpk, Sigma level, FTY, RTY) data of similar process
Once we have most of the required information, we can fill out the Process FMEA form
Process FMEA Form
Process or Product Name: ____   Prepared by: ____   Page ____ of ____
Columns: Process Step/Function (a1) | Requirement (a2) | Potential Failure Mode (b) | Potential Effect(s) of
Failure (c) | SEV (d) | Potential Cause(s)/Mechanism(s) of Failure (f) | OCC (g) | Current Controls: Prevention (h) |
Current Controls: Detection (h) | DET (i) | RPN (j) | Recommended Action(s) (k) | Responsibility and Completion
Date (l) | Actions Taken (m) | Revised SEV/OCC/DET/RPN (n)
‘a1’- Process Step and Function: Process Step/Function can be separated into two columns or combined into
a single column. Process steps may be listed in the process step/function column, and requirement listed in
the requirements column
o Enter the identification of the process step or operation being analyzed, based on the numbering
process and terminology. Process numbering scheme, sequencing, and terminology used should be
consistent with those used in the process flow diagram to ensure traceability and relationships to
other documents.
o List the process function that corresponds to each process step or operation being analyzed. The
process function describes the purpose or intent of the operation.
‘a2’- Requirements: are the inputs to the process specified to meet the design intent and other customer
requirements.
o List the requirements for each process function of the process step or operation being analyzed.
o If there are multiple requirements with respect to a given function, each should be aligned on the
form with the respective associated failure modes in order to facilitate the analysis
o List the potential failure mode(s) for the particular operation in terms of the process requirement(s).
o Potential failure modes should be described in technical terms, not as a symptom noticeable by the
customer
‘c’- Potential effect(s) of failure are defined as the effects of the failure mode as perceived by the customer(s).
o The effects of the failure should be described in terms of what the customer might notice or
experience, remembering that the customer may be an internal customer, as well as the End User.
o For the End User, the effect should be stated in terms of product or system. If the customer is the next
operation or subsequent operation (s), the effects should be stated in terms of process/operation
performance.
A few questions we can ask to determine the potential effect of the failure
o Does the potential failure Mode physically prevent the downstream processing or cause potential
harm to equipment or operators?
o What would happen if an effect was detected prior to reaching the End User?
‘d’- Severity is the value associated with the most serious effect for a given failure. The team should agree on
evaluation criteria and a ranking system and apply them consistently, even if modified for individual process
analysis. It is not recommended to modify criteria for ranking values 9 and 10. Failure modes with a rank of 1
should not be analyzed further.
‘e’- Classification column may be used to highlight high priority failure modes or causes that may require
additional engineering assessment. This column may also be used to classify any product or process
characteristics (e.g., critical, key, major, significant) for components, subsystems, or systems that may require
additional process controls.
‘f’- Potential Cause(s) of Failure Mode is defined as an indication of how the failure could occur, and is described
in terms of something that can be corrected or controlled. A potential cause of failure may be an
indication of a design or process weakness, the consequence of which is the failure.
o Identify and document every potential cause for each failure mode. The cause should be detailed as
concisely and completely as possible.
o There may be one or more causes that can result in the failure mode being analyzed. This results in
multiple lines for each cause in the table or form. Only specific errors or malfunctions should be listed.
o The occurrence ranking number is a relative ranking within the scope of FMEA and may not reflect
the actual likelihood of occurrence.
o If statistical data are available from a similar process, the data should be used to determine the
occurrence ranking. In other cases, subjective assessment may be used to estimate the ranking.
‘h’-Current process controls are descriptions of the controls that either prevent, to the extent possible, the
cause of failure from occurring, or detect the failure mode or cause of failure should it occur. There are two
types of Process Controls to consider
o Prevention: Eliminate (prevent) the cause of the failure or the failure mode from occurring, or reduce
its rate of occurrence
o Detection: Identify (detect) the cause of failure or the failure mode, leading to the development of
associated corrective action(s) or counter-measures
o The preferred approach is to first use prevention controls, where possible. The initial occurrence rankings
will be affected by the prevention controls, provided they are integrated as part of the process. The
initial detection rankings will be based on process controls that either detect the cause of failure or
detect the failure mode
‘i’- Detection is the rank associated with the best detection control listed in the detection controls column.
Detection is a relative ranking within the scope of the individual FMEA. In order to achieve a lower ranking,
the planned detection control generally has to be improved.
o When more than one control is identified, it is recommended that the detection ranking of each
control be included as part of the description. Record the lowest ranking in the detection column
o Assume the failure has occurred and then assess the capabilities of all the “Current Process Controls”
to detect the failure
o Do not automatically presume that the detection ranking is low because the occurrence is low, but do
assess the ability of the process controls to detect low frequency failure modes and prevent them
from going further in the process.
o Random quality checks are unlikely to detect the existence of an isolated problem and should not
influence the detection ranking.
‘j’ - RPN stands for Risk Priority Number, which is the product of Severity, Occurrence and Detection
o Within the scope of the individual FMEA, this value can range between 1 and 1000
o The initial focus of the team should be oriented towards failure modes with highest severity ranks.
When the severity is 9 or 10, it is imperative that the team ensure that the risk is addressed through
existing design controls or recommended actions.
o For failure modes with severities of 8 or below, the team should consider causes having the highest
occurrence or detection rankings. It is the team’s responsibility to look at the information, decide upon
an approach, and determine how best to prioritize the risk reduction efforts that best serve their
organization and customers.
‘k’- Recommended Action(s) – In general, prevention actions (i.e., reducing the occurrence) are preferable to
detection actions. The intent of any recommended action is to reduce rankings in the following order: severity,
occurrence, and detection. Example approaches to reduce each are explained below:
o To reduce Severity(S) Ranking: Only a design or process revision can bring about a reduction in the
severity ranking.
o To reduce Occurrence (O) Ranking: To reduce occurrence, process and design revisions may be
required. A reduction in the occurrence ranking can be effected by removing or controlling one or
more of the causes of the failure mode through a product or process design revision.
o To reduce Detection (D) Ranking: The preferred method is the use of error/mistake proofing. A
redesign of the detection methodology may result in a reduction of the detection ranking. In some
cases, a design change in the process step may be required to increase the likelihood of detection (i.e.
reduce the detection ranking)
‘l’ - Responsibility and Target Completion Date – Enter the name of the individual and organization responsible
for completing each recommended action, including the target completion date.
‘m’- Action(s) Taken and Completion Date – After the action has been implemented, enter a brief description
of the action taken and the actual completion date.
‘n’- Severity, Occurrence, Detection and RPN – After the preventive/corrective action has been completed,
determine and record the resulting severity, occurrence, and detection rankings. Calculate and record the
resulting RPN.
The degree of occurrence is measured on a scale of 1 to 10, where 10 signifies the highest probability of occurrence.
Probable Failure Rate            Likelihood   Rank
50 per thousand (1 in 20)        High           9
20 per thousand (1 in 50)        High           8
10 per thousand (1 in 100)       High           7
2 per thousand (1 in 500)        Moderate       6
0.5 per thousand (1 in 2,000)    Moderate       5
0.1 per thousand (1 in 10,000)   Moderate       4

Detection is ranked on a similar 1-to-10 scale, where 10 signifies the lowest likelihood of detection:

Rank   Likelihood of Detection   Criteria
10     Almost Impossible         No detection opportunity; no process control; cannot detect or is not analyzed
9      Very Remote               Not likely to detect at any stage (e.g., random audits); failure mode and/or error (cause) is not easily detected
1      Almost Certain            Problem prevention: the error (cause) is prevented from being made
Key Points
_________________________________________
_________________________________________
_________________________________________
_________________________________________
_________________________________________
VarSigma’s Lean Six Sigma Green Belt Book of Knowledge
Case Study- Invoice Processing
FMEA Case Narrative by Subject Matter Expert
During the invoice analysis process, an incorrect customer address results in the invoice not reaching the
customer. It is considered an inoperable error and is therefore ranked 8 on the severity scale. It may be
caused by the customer providing incorrect data, which happens in about 1 in 50,000 cases. It can also be
a data entry error in the Customer Account Creation process, which happens in about 1 in 100 cases.
There aren't any checks available for data entry. We do confirm the address with the customer before the
customer account is created, and there aren't any errors after confirming with the customer.
During invoice analysis, an incorrect tax amount and tax code in the invoice are common failure modes.
The customer doesn’t pay the invoice amount until the invoice is corrected, so this is ranked 8 on the
severity scale. It can happen due to an incorrect tax code in the Order Management system or the Account
Receivable system, or because tax code changes weren't updated in the Account Receivable system. It isn't
possible to validate an incorrect tax code entry, or tax code changes not updated, in the Account
Receivable system; however, an incorrect tax code in the Order Management system is automatically
validated there, and this detects 97.5%+ of errors. An incorrect tax code occurs in about 1 in 10,000 cases,
an incorrect tax code in the Account Receivable system in about 1 in 400 cases, and tax code changes not
updated in about 1.25% of cases.
During invoice follow-up, it is observed that analysts aren’t able to understand customer queries, and
therefore payments are delayed (Severity 7) due to delayed resolution of queries. This may happen due
to analyst communication skills, which occurs in approximately 1 in 200 cases, or because the analyst is
not aware of the customer's open issues, which occurs in about 1 in 150,000 cases. There isn’t a control
mechanism for analyst communication skills; however, call logs are mandatory for every customer
contact, and call logs are available for 97.5%+ of calls.
Key Points
_________________________________________
_________________________________________
_________________________________________
_________________________________________
Copyright © VarSigma.com Dr. Shantanu Kumar
Qualitative Screening
Most project executions require a cross-functional team effort because different creative ideas at different levels
of management are needed to understand the potential causes or factors. These ideas are better generated
through brainstorming sessions.
Brainstorming
Brainstorming is used at the initial steps and during the Analyze phase of a project to identify potential factors
that affect the output. It is a group discussion session that consists of encouraging a voluntary generation of a
large volume of creative, new, and not necessarily traditional ideas by all the participants. It is very beneficial
because it helps prevent narrowing the scope of the issue being addressed to the limited vision of a small
dominant group of managers. Since the participants come from different disciplines, the ideas that they bring
forth are very unlikely to be uniform in structure. They can be organized for the purpose of finding the root
causes of a problem and suggesting remedies. If the brainstorming session is unstructured, the participants can
give any idea that comes to their minds, but this might lead the session to stray from its objectives.
Asking "Why?" may be a favourite technique of your three year old child in driving you crazy, but it could teach
you a valuable Six Sigma quality lesson. The 5 Whys is a technique used in the Analyze phase of the Six Sigma
DMAIC methodology. By repeatedly asking the question "Why" (five is a good rule of thumb),Green Belt/Black
Belt can peels away the layers of symptoms which can lead to the root cause of a problem. Although this
technique is called "5 Whys," you may find that you will need to ask the question fewer or more times than five
before you find the issue related to a problem. The benefits of 5 Why’s is that it is a simple tool that can be
completed without statistical analysis. Table 16 shows an illustration of the 5 Why’s analysis. Based on this
analysis, we may decide to take out the non-value added signature for the director.
Table 16: 5 Why's Example
If the ideas generated by the participants to the brainstorming session are few (less than 15), it is easy to clarify,
combine them, determine the most important suggestions, and make a decision. However, when the suggestions
are too many, it becomes difficult to even establish a relationship between them. An affinity diagram or KJ
method (named after its author, Kawakita Jiro) is used to diffuse confusion after a brainstorming session by
organizing the multiple ideas generated during the session. It is a simple and cost-effective method that consists
of categorizing a large amount of ideas, data, or suggestions into logical groupings according to their natural
relatedness. When a group of knowledgeable people discuss a subject with which they are all familiar, the ideas
they generate should necessarily have affinities. To organize the ideas, perform the following:
a. The first step in building the diagram is to sort the suggestions into groups based on their relatedness
and a consensus from the members.
b. Determine an appropriate header for the listings of the different categories.
c. An affinity must exist between the items on the same list and if some ideas need to be on several lists,
let them be.
d. After all the ideas have been organized, several lists that contain closely related ideas should appear.
Listing the ideas according to their affinities makes it much easier to assign deliverables to members of
the project team according to their abilities.
Cause-and-Effect Analysis
The cause-and-effect (C&E) diagram—also known as a fishbone (because of its shape) or Ishikawa diagram
(named after Kaoru Ishikawa, its creator)—is used to visualize the relationship between an outcome and its
different causes. There is very often more than one cause to an effect in business; the C&E diagram is an
analytical tool that provides a visual and systematic way of linking different causes (input) to an effect (output).
It shows the relationship between an effect and its first, second and third order causes.
It can be used to identify the root causes of a problem. The building of the diagram is based on the sequence of
events. “Sub-causes” are classified according to how they generate “sub-effects,” and those “sub-effects”
become the causes of the outcome being addressed.
The first step in constructing a fishbone diagram is to define clearly the effect being analyzed.
The second step consists of gathering all the data about the key process input variables (KPIV), the
potential causes (in the case of a problem), or requirements (in the case of the design of a production
process) that can affect the outcome.
The third step consists of categorizing the causes or requirements according to their level of importance
or areas of pertinence. The most frequently used categories are:
o Manpower, machine, method, measurement, mother-nature, and materials for manufacturing
o Equipment, policy, procedure, plant, and people for services
Subcategories are also classified accordingly; for instance, different types of machines and computers can be
classified as subcategories of equipment.
The last step is the actual drawing of the diagram. The diagram is immensely helpful for drawing a mind map
of the potential causes.
The fishbone diagram does help visually identify the potential root causes of an outcome. Further statistical
analysis is needed to determine which factors contribute the most to creating the effect.
[Pareto chart: complaint frequency by category with a cumulative-percent line; the category labels were
garbled in extraction. The recoverable values are:]
Frequency  164   90    45    34    22    19    17
Percent   41.9  23.0  11.5   8.7   5.6   4.9   4.3
Cum %     41.9  65.0  76.5  85.2  90.8  95.7 100.0
Figure 18
Worksheets: Complaints.mtw
Human Resources – A human resources manager wants to know on which day of the week the greatest number
of resumes is received
Information Technology – The Green Belt team needs to investigate which departments are using the most
LAN storage
Sales – A salesperson wants to review last quarter’s sales figures by product line
Accounting – The accounting manager wants to review late payments by customer segment
Administration – The administration head wants to review total complaints by type of service.
Exercise: Following a recent promotional campaign, Kitibank's monthly credit card applications increased from 4,500
to 8,000. They would usually have 11% incomplete applications; however, the percentage has now increased to 19%.
The process manager listed all the sections in the form and the frequency with which each section was left incomplete.
Carry out an appropriate analysis to identify the vital few sections which, when fixed, will reduce the percentage of
incomplete applications.
Worksheet: CC_Applications.mtw
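The "vital few" logic behind a Pareto analysis can be sketched in a few lines of Python. The book's exercises use Minitab worksheets; the section names and counts below are invented for illustration only:

```python
# Pareto analysis sketch: find the "vital few" categories that together
# account for roughly 80% of occurrences. Names/counts are hypothetical.
counts = {"Income proof": 164, "Signature": 90, "PAN details": 45,
          "Address": 34, "Phone": 22, "Other": 19}

total = sum(counts.values())
cum = 0.0
vital_few = []
# Walk categories from most to least frequent, accumulating percent.
for name, freq in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    cum += 100.0 * freq / total
    vital_few.append((name, freq, round(cum, 1)))
    if cum >= 80.0:          # stop once the 80% threshold is crossed
        break

for name, freq, cum_pct in vital_few:
    print(name, freq, cum_pct)
```

In Minitab the same result comes from Stat > Quality Tools > Pareto Chart; the sketch just makes the cumulative-percent arithmetic explicit.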
Solutions
To conduct a hypothesis test, the first step is to state the business question involving a comparison. For instance,
the team may wonder if there is a difference in variability seen in thickness due to two different material types.
Once the business question is posed, the next step is to convert the business language or question into statistical
language or hypothesis statements. Two hypothesis statements are written.
The first statement is the null hypothesis, H0. This is a statement of what is to be disproved.
The second statement is the alternative hypothesis, Ha. This is a statement of what is to be proved. Between the
two statements, 100% of all possibilities are covered. The hypothesis will be focused on a parameter of the
population such as the mean, standard deviation, variance, proportion, or median.
The type of hypothesis test that could be conducted is based on the data type (discrete or continuous) of the y
data. For instance, if the data are continuous, the analysts may want to conduct tests on the mean, median, or
variance. If the data are discrete, the analysts may want to conduct a test on proportions.
Hypothesis Testing
A statistical hypothesis is an assumption about a population parameter. This assumption may or may not be
true. The best way to determine whether a statistical hypothesis is true would be to examine the entire
population. Since that is often impractical, researchers typically examine a random sample from the population.
If sample data are not consistent with the statistical hypothesis, the hypothesis is rejected.
Hypothesis Tests
Statisticians follow a formal process to determine whether to reject a null hypothesis, based on sample data.
This process, called hypothesis testing, consists of four steps.
o State the hypotheses: This involves stating the null and alternative hypotheses. The hypotheses are
stated in such a way that they are mutually exclusive. That is, if one is true, the other must be false.
o Formulate an analysis plan: The analysis plan describes how to use sample data to evaluate the null
hypothesis. The evaluation often focuses around a single test statistic.
o Analyze sample data: Find the value of the test statistic (mean score, proportion, t-score, z-score, etc.)
described in the analysis plan.
o Interpret results: Apply the decision rule described in the analysis plan. If the value of the test statistic is
unlikely, based on the null hypothesis, reject the null hypothesis.
Decision Errors
There is always a possibility of errors in the decisions we make. Two types of errors can result from a hypothesis
test. They are:
o Type I error. A Type I error occurs when the researcher rejects a null hypothesis when it is true.
The probability of committing a Type I error is called the significance level. This probability is
also called alpha, and is often denoted by α.
o Type II error. A Type II error occurs when the researcher fails to reject a null hypothesis that is
false. The probability of committing a Type II error is called Beta, and is often denoted by β. The
probability of not committing a Type II error is called the Power of the test.
o State the null hypothesis, H0, and the alternative hypothesis, Ha.
o Choose the level of significance (α) and the sample size, n.
o Determine the appropriate test statistic and sampling distribution.
o Collect the data and compute the sample value of the appropriate test statistic.
o Calculate the p-value based on the test statistic and compare the p-value to α.
o Make the statistical decision: if the p-value is greater than or equal to α, we fail to reject the null
hypothesis; if the p-value is less than α, we reject the null hypothesis.
o Express the statistical decision in the context of the problem.
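As a rough illustration of these steps, here is a stdlib-only Python sketch using a two-sample z-test on simulated data. The worksheets in this book use Minitab; every value below is invented, and a z-test is used simply because the standard normal distribution is available in the standard library:

```python
# Sketch of the formal hypothesis-testing steps (two-sample z-test,
# simulated data; in practice you would load your worksheet data).
import random
from statistics import NormalDist, mean, variance

# Step 1: state hypotheses -- H0: mu_A = mu_B, Ha: mu_A != mu_B
alpha = 0.05                                      # Step 2: significance level

random.seed(42)
a = [random.gauss(100, 10) for _ in range(60)]    # e.g., control group
b = [random.gauss(115, 10) for _ in range(60)]    # e.g., promotion group

# Steps 3-4: compute the test statistic (difference in means / standard error)
se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
z = (mean(a) - mean(b)) / se

# Step 5: two-sided p-value from the standard normal distribution
p = 2 * (1 - NormalDist().cdf(abs(z)))

# Step 6: statistical decision
decision = "reject H0" if p < alpha else "fail to reject H0"
print(f"z = {z:.2f}, p = {p:.4f} -> {decision}")
```

With small samples or unknown variances, Minitab's 2-sample t test (used later in this section) is the appropriate tool; the sketch only demonstrates the p-value-versus-α decision rule.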
Case Studies
Q. An analyst at a department store wants to evaluate a recent credit card promotion. To
this end, 500 cardholders were randomly selected. Half received an ad promoting a reduced
interest rate on purchases made over the next three months, and half received a standard
seasonal advertisement. Did the advertisement promoting reduced interest rate increase
purchases? Worksheet: Promotion.mtw
Solution:
Y (Response) ____________________________________________________________
X (Factor) _______________________________________________________________
Test___________________________________________________________________
Copyright © VarSigma.com Dr. Shantanu Kumar
Normality test
2-sample t test
X (Factor) _______________________________________________________________
Test___________________________________________________________________
Normality test
Bartlett test
X (Factor) _______________________________________________________________
Test___________________________________________________________________
X (Factor) _______________________________________________________________
Test__________________________________________________________________
X (Factor) _______________________________________________________________
Test___________________________________________________________________
X (Factor) _______________________________________________________________
Test___________________________________________________________________
Exercise: TeleCall uses 4 centres around the globe to process customer order forms. They audit a certain
percentage of the customer order forms. Any error in an order form renders it defective, and it has to be
reworked before processing. The manager wants to check whether the defective percentage varies by centre.
Worksheet: CustomerOrderForm.mtw
The k samples are all assumed to come from populations with the same variance
• The chi-square test of independence tests the independence between categorical variables. For example, a manager of three
customer support call centers wants to know if a successful resolution of a customer's problem
(yes or no) depends on which branch receives the call. The manager tallies the successful and
unsuccessful resolutions for each branch in a table, and performs a chi-square test of
independence on the data. In this case, the chi-square statistic quantifies how the observed
distribution of counts varies from the distribution you would expect if no relationship exists
between call center and a successful resolution.
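The chi-square computation behind this test can be sketched directly. This is a stdlib-only Python illustration; the branch counts are invented, and the cutoff 5.991 is the standard chi-square critical value for df = 2 at α = 0.05:

```python
# Sketch of a chi-square test of independence.
# Hypothetical counts: resolutions (yes/no) by call-center branch.
observed = {
    "Branch A": {"resolved": 70, "unresolved": 30},
    "Branch B": {"resolved": 60, "unresolved": 40},
    "Branch C": {"resolved": 45, "unresolved": 55},
}

rows = list(observed)
cols = ["resolved", "unresolved"]
row_tot = {r: sum(observed[r].values()) for r in rows}
col_tot = {c: sum(observed[r][c] for r in rows) for c in cols}
total = sum(row_tot.values())

# chi2 = sum over cells of (O - E)^2 / E, with E = row_total * col_total / total
chi2 = sum(
    (observed[r][c] - row_tot[r] * col_tot[c] / total) ** 2
    / (row_tot[r] * col_tot[c] / total)
    for r in rows for c in cols
)

df = (len(rows) - 1) * (len(cols) - 1)   # (3 - 1) * (2 - 1) = 2
critical = 5.991                          # chi-square cutoff, df = 2, alpha = 0.05
print(f"chi2 = {chi2:.2f}, df = {df}: "
      + ("dependent" if chi2 > critical else "no evidence of dependence"))
```

In Minitab the same analysis is Stat > Tables > Chi-Square Test for Association, which also reports the exact p-value.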
Administrative – A financial analyst wants to predict the cash needed to support growth and increases
in capacity.
Market/Customer Research – The marketing department wants to determine how to predict a
customer’s buying decision from demographics and product characteristics.
Hospitality – Food and Beverage manager wants to see if there is a relationship between room service
delays and order size.
Customer Service - GB is trying to reduce call length for potential clients calling for a good faith estimate
on a mortgage loan. GB thinks that there is a relationship between broker experience and call length.
Hospitality - The Green Belt suspects that the customers have to wait too long on days when there are
many deliveries to make at Six Sigma Pizza.
In the scenarios mentioned here, there is something common: we want to explore the relationship between
an output and an input, or between two variables. Correlation and regression help us explore the statistical
relationship between two continuous variables.
Correlation
Correlation is a measure of the relation between two or more continuous variables. The Pearson correlation
coefficient is a statistic that measures the linear relationship between the x and y. The symbol used is r. The
correlation value ranges from -1 to 1. The closer the value is to 1 in magnitude, the stronger the relationship
between the two.
A value of zero, or close to zero, indicates no linear relationship between the x and y.
A positive value indicates that as x increases, y increases.
A negative value indicates that as x increases, y decreases.
The Pearson correlation is a measure of a linear relationship, so scatter plots are used to depict the relationship
visually. The scatter plot may show other relationships.
Correlation Analysis measures the degree of linear relationship between two variables
o Range of correlation coefficient : -1 to +1
• Perfect positive relationship +1
• No Linear relationship 0
• Perfect negative relationship -1
o If the absolute value of the correlation coefficient is greater than 0.85, then we say there is a good
relationship.
• Example: r = 0.9, r = -0.95, r = 1.0, r = -0.9 describe good relationship
• Example: r = 0.0, r = -0.3, r = +0.24 describe poor relationship.
Correlation values of -1 or 1 imply an exact linear relationship. However, the real value of correlation is in
quantifying less than perfect relationships. Finding that two variables are correlated often informs a
regression analysis which attempts to further describe this type of relationship.
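The computation of r itself can be sketched in a few lines. This is stdlib-only Python; the calories/weight pairs are invented for illustration (the book's exercise uses the Calories Consumed.mtw worksheet):

```python
# Sketch: Pearson correlation coefficient r for two continuous variables.
# Data are hypothetical (calories consumed vs. weight gained).
from math import sqrt

x = [1500, 1800, 2100, 2500, 2800, 3200]   # calories consumed
y = [150, 200, 320, 400, 480, 610]         # weight gained

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# r = covariance term / product of the spread terms, bounded in [-1, +1]
cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
r = cov / sqrt(sum((xi - mx) ** 2 for xi in x)
               * sum((yi - my) ** 2 for yi in y))
print(f"r = {r:.3f}")   # near +1: strong positive linear relationship
```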
[Scatter plot: weight gained on the y-axis; axis ticks removed in extraction]
Figure 19
Regression
A natural extension of correlation is regression. Regression is the technique of determining a mathematical equation
that relates the x's to the y. Regression analysis is also used with historical data - data where the business already
collects the y and associated x's. The regression equation can be used as a prediction model for making process
improvements.
Simple linear regression is a statistical method used to fit a line for one x and one y. The formula of the line is
y = b0 + b1x, where b0 is the intercept term and b1 is the slope associated with x. These terms are called the
coefficients of the model. The regression model describes a response variable y as a function of an input factor
x. The larger the b1 term, the more y changes for a given change in x.
Types of Variables
Input Variable (X’s): These are also called predictor variables or independent variables. It is best if the
variables are continuous
Output Variable (Y’s): These are also called response variables or dependent variables (what we’re trying
to predict). It is best if the variables are continuous.
R-squared, also known as the coefficient of determination: the coefficient of determination (r2) is the ratio of the
regression sum of squares (SSR) to the total sum of squares (SST). It represents the percentage of variation in the
output (dependent variable) explained by the input variable(s), i.e., the percentage of response variable variation
that is explained by its relationship with one or more predictor variables.
Prediction and Confidence Interval: These are types of confidence intervals used for predictions in regression
and other linear models.
Prediction Interval: It represents a range that a single new observation is likely to fall given specified
settings of the predictors.
Confidence interval of the prediction: It represents a range that the mean response is likely to fall
given specified settings of the predictors.
The prediction interval is always wider than the corresponding confidence interval because of the added
uncertainty involved in predicting a single response versus the mean response.
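The least-squares fit and R-squared described above can be sketched as follows. This is stdlib-only Python; x and y are invented illustration data, loosely echoing the ad-spend exercise below:

```python
# Sketch: simple linear regression y = b0 + b1*x by least squares,
# plus R-squared. Data are hypothetical (e.g., ad spend vs. sales).
x = [10, 20, 30, 40, 50]
y = [120, 210, 290, 405, 500]

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Slope and intercept from the least-squares normal equations
b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
      / sum((xi - mx) ** 2 for xi in x))
b0 = my - b1 * mx

# R-squared = 1 - SSE/SST: % of variation in y explained by x
pred = [b0 + b1 * xi for xi in x]
ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
ss_tot = sum((yi - my) ** 2 for yi in y)
r_sq = 1 - ss_res / ss_tot

print(f"y = {b0:.1f} + {b1:.2f}x, R-sq = {r_sq:.3f}")
print("prediction at x = 45:", b0 + b1 * 45)
```

Minitab's Stat > Regression > Regression produces the same coefficients along with the prediction and confidence intervals discussed above.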
Exercise: Is there a statistical relationship between calories consumed and weight gained? If 3300 calories are
consumed, how much weight will be gained? Worksheet: Calories Consumed.mtw
Graph>Scatter Plot
Stat>Regression>Regression
Exercise: The Regional Sales Manager of a cosmetics company wants to investigate the relationship between
total advertisement expenditure and total sales. The Regional Sales Manager also wants to predict total
sales based on total advertisement expenditure. Analyze the data to find the following: R-sq, the regression
equation, and, if Adspend is 45000, Sales = ________? Worksheet: Adspend_Sales.mtw
5. Project leader must check whether statistically validated factors (Critical Xs) have sufficient capability to
realize improvement in Y (output measure)
Has the team conducted a value-added and cycle time analysis, identifying areas where time and resources
are devoted to tasks not critical to the customer?
Has the team examined the process and identified potential bottlenecks, disconnects, and redundancies that
could contribute to the problem statement?
Has the team analyzed data about the process and its performance to help stratify the problem, understand
reasons for variation in the process, and generate hypotheses as to the root causes of the current process
performance?
Has an evaluation been done to determine whether the problem can be solved without a fundamental ‘white
paper’ recreation of the process? Has the decision been confirmed with the Project Sponsor?
Has the team investigated and validated (or revalidated) the root cause hypothesis generated earlier, to gain
confidence that the “vital few” root causes have been uncovered?
Does the team understand why the problem (the Quality, Cycle Time, or Cost Efficiency issue identified in
the Problem Statement) is being seen?
Has the team been able to identify any additional ‘Quick Wins’?
Have learnings to date required modification of the Project Charter? If so, have these changes been
approved by the Project Sponsor and the Key Stakeholders?
Have any new risks to project success been identified, added to the Risk Mitigation Plan, and a mitigation
strategy put in place?
8
Improve
Improve Phase Overview
In the Improve phase, the team has validated the causes of the problems in the process and is ready to generate
a list of solutions for consideration. They will answer the question "What needs to be done?" As the team moves
into this phase, the emphasis goes from analytical to creative. To create a major difference in the outputs, a new
way to handle the inputs must be considered. When the team has decided on a solution to present to
management, the team must also consider the cost/benefit analysis of the solutions as well as the best way to
sell their ideas to others in the business. The deliverables from the Improve phase are:
Proposed solution(s)
Cost/benefit analysis
Presentation to management
Pilot plan
Ideation Techniques
Brainstorming (Round-Robin)
Most project executions require a cross-functional team effort because different creative ideas at
different levels of management are needed in the definition and the shaping of a project. These ideas
are better generated through brainstorming sessions. Brainstorming is a tool used at the initial steps or
during the Analyze phase of a project. It is a group discussion session that consists of encouraging a
voluntary generation of a large volume of creative, new, and not necessarily traditional ideas by all the
participants. It is very beneficial because it helps prevent narrowing the scope of the issue being
addressed to the limited vision of a small dominant group of managers. Since the participants come from
different disciplines, the ideas that they bring forth are very unlikely to be uniform in structure. They can
be organized for the purpose of finding root causes of a problem and suggest palliatives.
Dr. Edward de Bono developed a technique for helping teams stay focused on creative problem solving
by avoiding negativity and group arguments. Dr. de Bono introduced the Six Thinking Hats, which represent
different thought processes of team members, and discussed how we can harness these thoughts
to generate creative ideas. These hats are:
The White Hat thinking requires team members to consider only the data and information at hand.
With white hat thinking, participants put aside proposals, arguments and individual opinions and review
only what information is available or required.
The Red Hat gives team members the opportunity to present their feelings or intuition about the subject
without explanation or need for justification. The red hat helps teams to surface conflict and air feelings
openly without fear of retribution. Use of this hat encourages risk-taking and right-brain thinking.
The Black Hat thinking calls for caution and critical judgment. Using this hat helps teams avoid
“groupthink” and proposing unrealistic solutions. This hat should be used with caution so that creativity
is not stifled.
The Blue Hat is used for control of the brainstorming process. The blue hat helps teams evaluate the
thinking style and determine if it is appropriate. This hat allows members to ask for summaries and helps
the team progress when it appears to be off track. It is useful for “thinking about thinking.”
The Green Hat makes time and space available for creative thinking. When in use, the team is
encouraged to use divergent thinking and explore alternative ideas or options.
Structured Probing methods are extremely helpful in lateral thinking and problem solving approaches.
Process Improvement experts also consider challenging an idea, or disproving an idea, to be an initiation
point for creative ideas. Most scientists and innovators like to probe to understand the existence,
validity, and feasibility of an idea; this helps in improving and optimizing the idea, and may also trigger
a new idea.
Benchmarking
Benchmarking is a popular method for developing requirements and setting goals. In more conventional terms,
benchmarking can be defined as measuring your performance against that of best-in-class companies, determining
how the best-in-class achieve those performance levels, and using the information as the basis for your own
company’s targets, strategies, and implementation.
Benchmarking involves research into the best practices at the industry, firm, or process level. Benchmarking goes
beyond a determination of the “industry standard”; it breaks the firm’s activities down to process operations and
looks for the best-in-class for a particular operation. For example, to achieve improvement in their parts
distribution process, Xerox Corporation studied the retailer L.L. Bean.
Benchmarking must have a structured methodology to ensure successful completion of thorough and accurate
investigations. However, it must be flexible to incorporate new and innovative ways of assembling difficult-to-
obtain information. It is a discovery process and a learning experience. It forces the organization to take an external
view, to look beyond itself. The essence of benchmarking is the acquisition of information.
Benchmarking is based on learning from others, rather than developing new and improved approaches. Since the
process being studied is there for all to see, benchmarking cannot give a firm a sustained competitive advantage.
Although helpful, benchmarking should never be the primary strategy for improvement. Competitive analysis is an
approach to goal setting used by many firms. This approach is essentially benchmarking confined to one's own
industry. Although common, competitive analysis virtually guarantees second-rate quality because the firm will
always be following its competition. If the entire industry employs the approach, it will lead to stagnation for
the entire industry, setting it up for eventual replacement by outside innovators.
Develop a list of the selection criteria. For evaluating product designs, list VOC requirements; for evaluating
improvement proposals, list customer requirements or organizational improvement goals.
Develop a list of all potential improvement solutions or all product designs to be rated.
Select one potential improvement or product design as the baseline - all other proposals are compared to
the baseline.
o For product designs, the baseline is usually either the current design or a preferred new design.
o For improvement proposals, the baseline is usually the improvement suggested by the team or an
improvement that has strong management support.
o The current solution in place may also be used as a baseline
Enter the baseline proposal in the space provided.
Enter the selection criteria along the left side of the matrix and the alternative product or improvement
proposals across the top of the matrix.
Apply a weighting factor to all the selection criteria. These weights might not be the same for all projects, as
they can reflect localized improvement needs or changes in customer requirements. A 1-to-9 scale or 1-to-
5 scale can be used for weighting the importance of the selection criteria, using 1 for the least important
criteria and 5 or 9 for the most important criteria.
Based on team input, score how well the baseline proposal matches each of the selection criteria. Use a 1-
to-9 or 1-to-5 scale, with 5 or 9 for a very strong match to the criterion and 1 for a very poor match. The
moderator may also define a reversed scale, with 1 for the strongest match.
For each alternative proposal, the team should determine whether the alternative is Better, the Same, or
Worse than the baseline, relative to each of the selection criteria:
o Better results in a +1 score
o Same results in a 0 score
o Worse results in a -1 score.
Multiply the scores by the criteria weights and add them together to obtain the weighted score.
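The weighted-score arithmetic in the last step can be sketched in a few lines of Python (the criteria, weights, and ratings below are hypothetical, purely for illustration):

```python
# Weighted scoring of one alternative against the baseline.
# Rating per criterion: +1 = Better than baseline, 0 = Same, -1 = Worse.
weights = {"cost": 9, "ease of use": 5, "speed": 3}   # importance weights (1-to-9 scale)
ratings = {"cost": 1, "ease of use": 0, "speed": -1}  # one alternative vs. the baseline

weighted_score = sum(weights[c] * ratings[c] for c in weights)
better_count = sum(1 for r in ratings.values() if r == 1)
worse_count = sum(1 for r in ratings.values() if r == -1)

print(weighted_score, better_count, worse_count)  # 6 1 1
```

A higher weighted score, more Better ratings, and fewer Worse ratings all favour the alternative over the baseline.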
An example score summary from such a matrix (one column per proposed solution):
Sum of Same:      4  1  2  1
Sum of Positives: 0  0  0  1
Sum of Negatives: 0  3  2  2
Each Proposed Solution is rated on how well it addresses a selection criterion compared with the Baseline Solution.
For each Proposed Solution, select whether it is Better than the Baseline Solution, the Same as the Baseline Solution
(default), or Worse than the Baseline Solution.
Solution Scores
Based on your choice of Better, Same or Worse for each Proposed Solution (relative to the Baseline Solution), three
scores are calculated:
Weighted Score: Each Better rating receives a raw score of 1, each same rating receives a raw score of 0, and
each Worse rating receives a raw score of -1. The raw scores are multiplied by the Importance of Selection
Criteria, and the sum of the raw scores times the Importance is the Weighted Score. A higher Weighted Score
is better.
Worse/Weighted Negative Score: Tracks the number of times that a Proposed Solution is rated worse than
the Baseline Solution. The lower the Worse Score, the better a proposed solution is relative to the Baseline
Solution.
Better/Weighted Positive Score: Tracks the number of times that a Proposed Solution is rated better than
the Baseline Solution. The higher the Better Score, the better a Proposed Solution is relative to the Baseline
Solution.
By design, brainstorming generates a long list of ideas. However, also by design, many are not realistic or feasible.
The Multi-voting activity allows a group to narrow their list of options into a manageable size for sincere
consideration or study. It may not help the group make a single decision but can help the group narrow a long list of
ideas into a manageable number that can be discussed and explored. It allows all members of the group to be
involved in the process and ultimately saves the group a lot of time by allowing them to focus energy on the ideas
with the greatest potential.
Brainstorm a list of options: Conduct the brainstorming activity to generate a list of ideas or options.
Review the list from the Brainstorming activity: Once Green Belt/Black Belt have completed the list, clarify
ideas, merge similar ideas, and make sure everyone understands the options. Note: at this time the group is
not to discuss the merits of any idea, just clarify and make sure everyone understands the meaning of each
option.
Participants vote for the ideas that are worthy of further discussion: Each participant may vote for as many
ideas as they wish. Voting may be by show of hands or physically going to the list and marking their choices
or placing a dot by their choices. If they so desire, participants may vote for every item.
Identify items for next round of voting: Count the votes for each item. Any item receiving votes from half
the people voting is identified for the next round of voting. For example, if there are 12 people voting, any
item receiving at least six votes is included in the next round. Signify the items for the next vote by circling
or marking them with a symbol, e.g., all items with a star next to them will be voted on in the next round.
Vote again. Participants vote again, however this time they may only cast votes for half the items remaining
on the list. In other words, if there are 20 items from the last round that are being voted on, a participant
may only vote for ten items.
Repeat steps 4 and 5. Participants continue voting and narrowing the options as outlined in steps 4 and 5
until there is an appropriate number of ideas for the group to analyze as part of the decision-making or
problem-solving process. Generally, groups need three to five options for further analysis.
Discuss remaining ideas. At this time the group engages in discussing the pros and cons of the remaining ideas.
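The threshold rule in step 4, where an item survives a round only if at least half the voters choose it, can be sketched as a small function (the vote counts are made up):

```python
# One round of multi-voting: keep items that received votes from
# at least half of the voters.
def multivote_round(votes, num_voters):
    """votes: dict mapping item -> number of votes received."""
    threshold = num_voters / 2
    return [item for item, v in votes.items() if v >= threshold]

votes = {"idea A": 8, "idea B": 3, "idea C": 6, "idea D": 11}
print(multivote_round(votes, 12))  # ['idea A', 'idea C', 'idea D']
```

With 12 voters the threshold is six votes, matching the worked example in the text: ideas A, C, and D move to the next round.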
Nominal Group Technique (NGT) is most often used after a brainstorming session to help organize and prioritize ideas.
NGT is a good method to use to gain group consensus, for example, when various people (program staff,
stakeholders, community residents, etc.) are involved in constructing a logic model and the list of outputs for a
specific component is too long and therefore has to be prioritized. In this case, the questions to consider would be:
“Which of the outputs listed are most important for achieving our goal and are easier to measure? Which of our
outputs are less important for achieving our goal and are more difficult for us to measure?”
Generating Ideas: The moderator presents the question or problem to the group in written form and reads
the question to the group. The moderator directs everyone to write ideas in brief phrases or statements and
to work silently and independently. Each person silently generates ideas and writes them down.
Recording Ideas: Group members engage in a round-robin feedback session to concisely record each idea
(without debate at this point). The moderator writes an idea from a group member on a flip chart that is
visible to the entire group, and proceeds to ask for another idea from the next group member, and so on.
There is no need to repeat ideas; however, if group members believe that an idea provides a different
emphasis or variation, it may be included. Do not proceed until all members' ideas have been documented.
Discussing Ideas: Each recorded idea is then discussed to determine clarity and importance. For each idea,
the moderator asks, “Are there any questions or comments group members would like to make about the
item?” This step provides an opportunity for members to express their understanding of the logic and the
relative importance of the item. The creator of the idea need not feel obliged to clarify or explain the item;
any member of the group can play that role.
Voting on Ideas: Individuals vote privately to prioritize the ideas. The votes are tallied to identify the ideas
that are rated highest by the group as a whole. The moderator establishes what criteria are used to prioritize
the ideas. To start, each group member selects the five most important items from the group list and writes
one idea on each index card. Next, each member ranks the five ideas selected, with the most important
receiving a rank of 5, and the least important receiving a rank of 1 (Green Belt/Black Belt may reverse the
ranking, i.e., rank 1 can be the best and rank 5 the worst).
After members rank their responses in order of priority, the moderator creates a tally sheet on the flip chart
with numbers down the left-hand side of the chart, which correspond to the ideas from the round-robin.
The moderator collects all the cards from the participants and asks one group member to read the idea
number and number of points allocated to each one, while the moderator records and then adds the scores
on the tally sheet. The ideas that are the most highly rated by the group are the most favoured group actions
or ideas in response to the question posed by the moderator.
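The tallying step can be sketched in Python (the ballots below are invented; each idea number refers to an item from the round-robin list, and each member awards 5 points to their top idea down to 1 point for their fifth):

```python
from collections import Counter

# One dict per member: idea number -> rank points awarded (5 = most important).
ballots = [
    {1: 5, 3: 4, 4: 3, 2: 2, 7: 1},
    {3: 5, 1: 4, 7: 3, 5: 2, 4: 1},
    {1: 5, 7: 4, 3: 3, 6: 2, 2: 1},
]

# Sum the points each idea received across all ballots.
tally = Counter()
for ballot in ballots:
    tally.update(ballot)

# Ideas ordered by total points; the highest totals are the group's priorities.
print(tally.most_common(3))  # [(1, 14), (3, 12), (7, 8)]
```

This reproduces the flip-chart tally: idea 1 leads with 14 points, followed by ideas 3 and 7.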
The Delphi Technique is a method of relying on a panel of experts who give their responses anonymously, using a
secret ballot process. After each round, a facilitator provides a summary of the experts' opinions along with the
reasons for their decisions. Participants are encouraged to revise their answers in light of the replies from other
experts. The process is stopped once pre-defined criteria, such as a set number of rounds, are met. The advantage
of this technique is that team members who are boisterous or overbearing will not have much of an impact on
swaying the decisions of other team members.
The Delphi technique, mainly developed by Dalkey and Helmer (1963) at the Rand Corporation in the 1950s, is a
widely used and accepted method for achieving convergence of opinion concerning real-world knowledge solicited
from experts within certain topic areas.
The Delphi technique is a widely used and accepted method for gathering data from respondents within their domain
of expertise. The technique is designed as a group communication process which aims to achieve a convergence of
opinion on a specific real-world issue. The Delphi process has been used in various fields of study such as program
planning, needs assessment, policy determination, and resource utilization to develop a full range of alternatives,
explore or expose underlying assumptions, as well as correlate judgments on a topic spanning a wide range of
disciplines. The Delphi technique is well suited as a method for consensus-building by using a series of questionnaires
delivered using multiple iterations to collect data from a panel of selected subjects. Subject selection, time frames
for conducting and completing a study, the possibility of low response rates, and unintentionally guiding feedback
from the respondent group are areas which should be considered when designing and implementing a Delphi study.
The Delphi technique is applied in program planning, needs assessment, policy determination, resource
utilization, marketing and sales, and multiple other business decision areas.
Organizations often customize these steps to meet their requirements; given time constraints, a large number of
iterations may not be possible.
The statistically designed experiment usually involves varying two or more variables simultaneously and obtaining
multiple measurements under the same experimental conditions. The advantage of the statistical approach is
threefold:
Interactions can be detected and measured. Failure to detect interactions is a major flaw in the OFAT
approach.
Each value does the work of several values. A properly designed experiment allows Green Belt/Black Belt to
use the same observation to estimate several different effects. This translates directly to cost savings when
using the statistical approach.
Experimental error is quantified and used to determine the confidence the experimenter has in his
conclusions.
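To make the idea of interactions concrete, here is a minimal 2x2 full-factorial sketch (the factor labels and yield values are invented; a real design would also replicate runs to estimate experimental error):

```python
# A 2x2 full factorial: two factors at coded levels -1/+1,
# with hypothetical yields for each treatment combination.
yields = {(-1, -1): 20.0, (1, -1): 30.0, (-1, 1): 25.0, (1, 1): 45.0}

# Main effect of a factor: average yield at its high level
# minus average yield at its low level. Note that every
# observation contributes to BOTH main effects -- this is how
# "each value does the work of several values".
def main_effect(factor_index):
    hi = [y for lv, y in yields.items() if lv[factor_index] == 1]
    lo = [y for lv, y in yields.items() if lv[factor_index] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Interaction AB: half the difference between the effect of A at
# high B and the effect of A at low B. Nonzero => the factors interact,
# which a one-factor-at-a-time (OFAT) study could never reveal.
effect_A_at_hiB = yields[(1, 1)] - yields[(-1, 1)]
effect_A_at_loB = yields[(1, -1)] - yields[(-1, -1)]
interaction_AB = (effect_A_at_hiB - effect_A_at_loB) / 2

print(main_effect(0), main_effect(1), interaction_AB)  # 15.0 10.0 5.0
```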
Much of the early work on the design of experiments involved agricultural studies. The language of experimental
design still reflects these origins. The experimental area was literally a piece of ground. A block was a smaller piece
of ground with fairly uniform properties. A plot was smaller still and it served as the basic unit of the design. As the
plot was planted, fertilized and harvested, it could be split simply by drawing a line. A treatment was actually a
treatment, such as the application of fertilizer. Unfortunately for the Six Sigma analyst, these terms are still part of
the language of experiments.
Experimental area can be thought of as the scope of the planned experiment. For us, a block can be a group of results
from a particular operator, or from a particular machine, or on a particular day—any planned natural grouping which
should serve to make results from one block more alike than results from different blocks. For us, a treatment is the
factor being investigated (material, environmental condition, etc.) in a single factor experiment. In factorial
experiments (where several variables are being investigated at the same time) we speak of a treatment combination
and we mean the prescribed levels of the factors to be applied to an experimental unit. For us, a yield is a measured
result and, happily enough, in chemistry it will sometimes be a yield.
The objective is to determine the optimum combination of inputs for desired output considering constraints.
Simulation
Simulation is a means of experimenting with a detailed model of a real system to determine how the system will
respond to changes in its structure, environment, or underlying assumptions. A system is defined as a combination
of elements that interact to accomplish a specific objective. A group of machines performing related manufacturing
operations would constitute a system. These machines may be considered, as a group, an element in a larger
production system. The production system may be an element in a larger system involving design, delivery, etc.
Simulations allow the system or process designer to solve problems. To the extent that the computer model behaves
as the real world system it models, the simulation can help answer important questions. Care should be taken to
prevent the model from becoming the focus of attention. If important questions can be answered more easily
without the model, then the model should not be used. The modeller must specify the scope of the model and the
level of detail to include in the model. Only those factors which have a significant impact on the model’s ability to
serve its stated purpose should be included. The level of detail must be consistent with the purpose. The idea is to
create, as economically as possible, a replica of the real world system that can provide answers to important
questions. This is usually possible at a reasonable level of detail. Well-designed simulations provide data on a wide
variety of systems metrics, such as throughput, resource utilization, queue times, and production requirements.
While useful in modelling and understanding existing systems, they are even better suited to evaluate proposed
process changes. In essence, simulation is a tool for rapidly generating and evaluating ideas for process
improvement. By applying this technology to the creativity process, Six Sigma improvements can be greatly
accelerated.
Software can then be used to simulate the process a number of times and calculate the performance of CTQ at the
end of the series.
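A minimal Monte Carlo sketch of this idea, assuming a hypothetical three-step process with made-up, normally distributed step times (a real study would fit these distributions to process data):

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Hypothetical step-time distributions (minutes); replace with fitted
# distributions from your own process data.
def simulate_once():
    prep = random.gauss(10, 1)
    work = random.gauss(25, 3)
    review = random.gauss(5, 0.5)
    return prep + work + review

# Run the process model many times and evaluate the CTQ at the end
# of the series: total cycle time within 45 minutes.
runs = [simulate_once() for _ in range(10_000)]
mean_time = sum(runs) / len(runs)
within_spec = sum(1 for t in runs if t <= 45) / len(runs)

print(f"Mean cycle time: {mean_time:.1f} min, P(meet CTQ): {within_spec:.1%}")
```

Because the model is cheap to re-run, proposed process changes (e.g., a shorter review step) can be evaluated simply by editing the distributions and re-simulating.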
What narrowing and screening techniques were used to further develop and qualify potential solutions?
Do the proposed solutions address all of the identified root causes, or at least the most critical?
Were the solutions verified with the Project Sponsor and Stakeholders? Has an approval been received to implement?
Was a pilot run to test the solution? What was learned? What modifications were made?
Has the team seen evidence that the root causes of the initial problems have been addressed during the pilot? What
are the expected benefits?
Has the team considered potential problems and unintended consequences (FMEA) of the solution and developed
preventive and contingency actions to address them?
Has the proposed solution been documented, including process participants, job descriptions, and if applicable, their
estimated time commitment to support the process?
Has the team been able to identify any additional ‘Quick Wins’?
Have learnings to date required modification of the Project Charter? If so, have these changes been approved by the
Project Sponsor and the Key Stakeholders?
Have any new risks to project success been identified and added to the Risk Mitigation Plan?
9
Control
Control Phase Overview
In the Control phase, the emphasis is on maintaining the gains achieved. The question the team is trying to answer
is, "How can we guarantee performance?". In the Improve phase, the team had a successful pilot and also got an
opportunity to tweak the solution. They used this information to plan the solution implementation and carried out
full scale implementation. It is time now to ensure that, when they finish the project, the success that they have
observed will be sustained. This involves transferring the responsibility to the process owner.
Control Plan
Process Documentation
Communication Plan
Data Collection Plan
Audit/Inspection Plan
Mistake Proofing System
Risk Mitigation System
Response/Reaction Plan
Statistical Process Control
Source Inspection: Inspection carried out at the source, or as close to the source of the defect as possible.
Mistakes detected close to the source can be reworked or corrected before the unit is passed on.
Informative Inspection: Inspection carried out to investigate the cause of any defect found, so action can be
taken. It only provides information on a defect after it has occurred.
Judgment Inspection: Inspection carried out to separate good units from bad units once processing has
occurred. It does not decrease the defect rate.
In any process, regardless of how well-designed or carefully maintained it is, a certain amount of inherent or natural
variability will always exist. This natural variability or “background noise” is the cumulative effect of many small,
essentially unavoidable causes. In the framework of statistical quality control, this natural variability is often called
a “stable system of chance causes (common causes).” A process that is operating with only chance causes of variation
present is said to be in statistical control. In other words, the chance causes (common causes) are an inherent part
of the process.
In a process, other kinds of variability may occasionally be present. This variability in key quality characteristics can
arise from sources like improperly adjusted machines, operator errors, defective raw materials, untrained resources,
system outages, etc. Such variability is generally large when compared to the background noise, and usually stands
out. We refer to these sources of variability that are not part of the chance cause pattern as assignable causes or
special causes. A process that is operating in the presence of assignable causes is said to be out of control. Production
processes will often operate in the in-control state, producing acceptable product for relatively long periods of time.
Occasionally, however, assignable causes will occur, seemingly at random, resulting in a “shift” to an out-of-control
state where a large proportion of the process output does not conform to requirements. A major objective of
statistical process control is to quickly detect the occurrence of assignable causes or process shifts so that
investigation of the process and corrective action may be undertaken before many nonconforming units are
manufactured. The control chart is an online process-monitoring technique widely used for this purpose.
Figure 22: Individuals control chart of the detergent fill data, plotted for observations 1 through 25, with center line X̄ = 6.015 and LCL = 5.601. Worksheet: Detergent.mtw
A typical control chart is shown in Figure 22, which is a graphical display of a quality characteristic that has been
measured or computed from a sample versus the sample number or time. Often, the samples are selected at periodic
intervals such as every hour. The chart contains a center-line (CL) that represents the average value of the quality
characteristic corresponding to the in-control state. (That is, only chance causes are present.) Two other horizontal
lines, called the upper control limit (UCL) and the lower control limit (LCL) are also shown on the chart.
There is a close connection between control charts and hypothesis testing. Essentially, the control chart is a test of
the hypothesis that the process is in a state of statistical control. A point plotting within the control limits is
equivalent to failing to reject the hypothesis of statistical control, and a point plotting outside the control limits is
equivalent to rejecting the hypothesis of statistical control.
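For an individuals chart like Figure 22, the center line and control limits are commonly computed from the average moving range. A minimal sketch (the data values below are made up, not the Detergent.mtw data):

```python
# Individuals (I) chart limits from the average moving range.
data = [6.1, 5.9, 6.0, 6.2, 5.8, 6.1, 6.0, 5.9, 6.3, 6.0]

xbar = sum(data) / len(data)                               # center line (CL)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# 2.66 = 3 / d2, with d2 = 1.128 for moving-range subgroups of size 2.
ucl = xbar + 2.66 * mr_bar
lcl = xbar - 2.66 * mr_bar

# Points outside the limits reject the hypothesis of statistical control.
out_of_control = [x for x in data if x > ucl or x < lcl]
print(f"CL={xbar:.3f}, UCL={ucl:.3f}, LCL={lcl:.3f}, signals={out_of_control}")
```

With this sample every point falls inside the limits, i.e., we fail to reject the hypothesis that the process is in statistical control.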
The most important use of a control chart is to improve the process. We have found that, generally:
Most processes do not operate in a state of statistical control.
Consequently, the routine and attentive use of control charts will identify assignable causes. If these causes
can be eliminated from the process, variability will be reduced and the process will be improved.
The control chart will only detect assignable causes. Management, operator, and engineering action will
usually be necessary to eliminate the assignable cause. An action plan for responding to control chart signals
is vital.
In identifying and eliminating assignable causes, it is important to find the underlying root cause of the problem and
to attack it. A cosmetic solution will not result in any real, long-term process improvement. Developing an effective
system for corrective action is an essential component of an effective SPC implementation.
Type of Data
Counts (Defects): fixed sample size → c chart; variable sample size → u chart
Classifications (Defectives): fixed sample size → np chart; variable sample size → p chart
Continuous: subgroup size of 1 → I-MR chart; subgroup size 2-8 → Xbar-R chart; subgroup size >8 → Xbar-S chart
Minitab: Stat>Control Charts>Attributes Charts>c
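This chart-selection logic can be sketched as a small decision function (illustrative only, not a Minitab feature; the chart names follow the standard convention):

```python
# Decision sketch for picking a control chart.
def select_chart(data_type, sample_size_constant=None, subgroup_size=None):
    if data_type == "defects":        # counts of defects per unit or area
        return "c chart" if sample_size_constant else "u chart"
    if data_type == "defectives":     # classification: unit works / doesn't work
        return "np chart" if sample_size_constant else "p chart"
    if data_type == "continuous":     # measured data
        if subgroup_size == 1:
            return "I-MR chart"
        return "Xbar-R chart" if subgroup_size <= 8 else "Xbar-S chart"
    raise ValueError("unknown data type")

# The practice questions below map onto this logic, e.g.:
print(select_chart("defectives", sample_size_constant=True))  # np chart (projector lamps)
print(select_chart("continuous", subgroup_size=12))           # Xbar-S chart (fill heights)
```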
Q. A manufacturer of projector lamps wants to assess whether or not its process is in control. Samples of lamps were
taken every hour for three shifts (24 samples of 300 lamps) and tested to see whether or not they work. Defectives
are lamps that do not work. Is the process in control? Worksheet: Projector.mtw
Q. A quality team inspects transactions against a checklist. There are seven opportunities for error in each transaction.
100 transactions are inspected every day, and the number of errors is recorded.
Is the process in control? Worksheet: Checklist.mtw
Q. An accounts department started an improvement project to try to reduce the number of internal purchase forms
that its users completed incorrectly. The number of incomplete forms is recorded every day. Is the process in
control? Worksheet: Purchase Form.mtw
Q. A winegrower wants to assess the stability of the bottle-filling process. The fill heights of 12 bottles at the same
temperature are inspected every week. Is the process in control? Worksheet: Fill Height.mtw
Q. Credit card application processing time is recorded for 1 credit card every day to determine whether the process
is in control. Is the process in control? Worksheet: Credit Card.mtw
The team should consider standardization and replication opportunities to significantly increase the impact on the
sigma performance of processes, so that results far exceed those anticipated from the pilot and solution implementation.
As the implementation expands to other areas, four implementation approaches can be combined or used
independently. The appropriate approach will depend on the resources available, the culture of the organization and
the requirements for a fast implementation. The four approaches are:
A sequenced approach: once the solution is fully implemented in one process or location, implementation
begins at a second location.
A parallel approach: the solution is implemented at two or more locations or processes simultaneously.
A phased approach: once a pre-determined milestone is achieved at one location, the implementation at a
second location begins.
A flat approach: implementation is done at all target locations, companywide.
The results of the improvement and the financial benefits generally need to be monitored for one year.
The project leader should prepare a Project Closure Document. A Project Closure Document displays the results of
the project, control activities, the status of incomplete tasks, and the approval of key stakeholders (for example, the
process owner, finance, quality systems, and environmental) to confirm that the project is complete. This document is often
used as the formal hand-off of the project to the process owner. It provides a formal place to record final project
results, document key stakeholder approvals and a record of the status of the improved process at the completion
of the project. This record may be the basis for ongoing auditing.
A Six Sigma project does not really "end" at the conclusion of the Control phase. There should be opportunities to
extend the success of this project team in other areas. The team and champion may share the knowledge gained
with others, replicate the solution in other processes, and develop standards for other processes based on what they
learned from their solution implementation. The team may continue to examine the process to look for opportunities
for continuous process improvement.
Has the team prepared all the essential documentation for the improved process, including revised/new Standard
Operating Procedures (SOPs), a training plan, and a process control system?
Have the right measures been selected, and documented as part of the Process Control System, to monitor
performance of the process and the continued effectiveness of the solution? Has the metrics briefing plan/schedule
been documented? Who owns the measures? Has the Process Owner’s job description been updated to reflect the
new responsibilities? What happens if minimum performance is not achieved?
Has the solution been effectively implemented? Has the team compiled results data confirming that the solution has
achieved the goals defined in the Project Charter?
Has the Financial Benefit Summary been completed? Has the Resource Manager reviewed it?
Has the process been transitioned to the Process Owner, to take over responsibility for managing continuing
operations? Do they concur with the control plan?
Has the team forwarded other issues/opportunities, which were not able to be addressed, to senior management?
Have the hard work and successful efforts of the team been celebrated?
Acronyms
AAA Attribute Agreement Analysis
AIAG Automotive Industry Action Group
ANOVA Analysis of Variance
CCR Critical Customer Requirement
cdf Cumulative distribution function
COPQ Cost of Poor Quality
Cp, Cpk Process Capability Indices (Short Term)
CTQ Critical to Quality
DFSS Design for Six Sigma
DMADV Define, Measure, Analyze, Design, Validate/Verify
DMAIC Define, Measure, Analyze, Improve, Control
DOE Design of Experiments
DPMO Defects per Million Opportunities
DPU Defects per Unit
FMEA Failure Modes & Effects Analysis
HACCP Hazard Analysis and Critical control points
IDOV Identify, Design, Optimize, Validate/Verify
Kaizen Continuous Improvement
KPI Key Performance Indicator
LCL Lower Control Limit
Muda Waste
PDCA Plan Do Check Act
Histogram: Graph>Histogram
Hypothesis Test – Means: Stat>Basic Statistics>Paired t; Stat>ANOVA>One-Way ANOVA
Hypothesis Test – Variation: Stat>Basic Statistics>Test for 2 Variances; Stat>ANOVA>Test for Equal Variances
Hypothesis Test – Medians: Stat>Nonparametrics>1-Sample Wilcoxon; Stat>Nonparametrics>Kruskal-Wallis; Stat>Nonparametrics>Mood's Median Test
Chi-Square Test: Stat>Tables>Cross Tabulation and Chi-Square; Stat>Tables>Chi-Square Test
One-Way ANOVA: Stat>ANOVA>One-Way
Regression
The company CFO reviewed data on pending customer payments and found that, over the last 12 months, they
were not receiving invoice payments within 60 days. To avoid a cash crunch, the company also had to borrow
$9 million at a 9% annual interest rate to fund operating expenses.
The CFO wasn't convinced with the performance of the collections team in the finance department and therefore
initiated a Six Sigma project to reduce the delay in receiving payments.
The CFO identified two potential project leaders (Project Leader 'S' and 'B') and apprised them of the project.
The project leaders wanted to identify a particular measure which relates to the characteristic that satisfies the
requirement (CTQ).
Some initial data analysis by the CFO showed that payment for 20% of the invoices wasn't received within 60
days. Project Leader 'B' therefore felt that invoices for which payment is not received within 60 days should be
considered late payments (defectives), with a goal of reducing the defective % from 20% to 5%. Project
Leader 'S' believed that the average time taken to get an invoice paid should be reduced, maintaining an upper
limit of 60 days.
If either of these goals is achieved, the company will not need the loan to fund its operating expenditure. They
also budgeted around $30K for training, $50K for CRM changes, and additional project team cost as mentioned
below. The project leader would like to calculate the business benefits for a tenure of one year.
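A rough sketch of the one-year benefit arithmetic (only figures stated in the case are used; the project team cost is kept out because it lives in the resource-requirement sheet):

```python
# Rough one-year benefit sketch for the collections project.
loan_principal = 9_000_000   # borrowed to cover the cash crunch
interest_rate = 0.09         # annual interest rate on the loan

# Interest avoided if timely payments remove the need for the loan.
interest_saved = round(loan_principal * interest_rate)

training_cost = 30_000       # budgeted for training
crm_cost = 50_000            # budgeted for CRM changes
# The project team cost comes from the "Define" sheet (Human Resource
# Requirement) and is intentionally not reproduced here.

net_benefit_before_team_cost = interest_saved - training_cost - crm_cost
print(net_benefit_before_team_cost)  # 730000
```

So the benefit before team cost is roughly $730K; the final figure for the charter would subtract the projected team cost from the sheet below.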
Please refer to the excel sheet "Define" >> Human Resource Requirement. This is the projected resource
requirement for the project.
Project Leader
Project Sponsor
Team Members
Business Process:
Business Case
Business Scope
Metrics Goal
CTQ
Benefit to External
Customers:
Define
Measure
Analyze
Improve
Control
Team Buy in
Sponsor approval
Project Leader S/B wants to clearly define the CTQs before carrying out Measurement System Analysis
CTQ Performance Characteristics
CTQ Data Type Operational Definition LSL USL Target
Project Leaders want to collect data for ‘Y’ to baseline performance. Please help them document data collection plan
Data Collection Plan
Operational definition of Y
Data source
Data period
Support required
Late Payment %
The project team brainstormed to identify the factors affecting the time to receive payments. An excerpt from the
brainstorming sessions is available in a worksheet. Please use the excerpt to plot an Ishikawa Diagram (C&E Diagram)
and identify the potential Xs (inputs) for which we should collect data to validate whether they critically affect the
time taken to close an invoice. Please refer to the excel sheet "Analyze" >> Ishikawa
Critical X validation
Project Leader 'B' also wants to find out whether the potential X's are critical or not. He wants to ascertain whether
these X's really affect the % of Late Payments (defectives), so he collected data for all the potential X's. Please refer to
PP1&2_Analyze_Hypothesis Test_Defectives.MTW
Project Leader 'B' has also collected the types of causes found in each defective invoice and collated them in an excel
sheet. What are the vital few causes that should be reduced to reduce the number of defectives? Please refer to
PP1&2_Analyze_Defectives.MTW
Project Leader ‘S’ wanted to analyze the current process to understand the critical causes of failures. The team brainstormed all possible failure modes, their effects, and potential causes, and also collected data on occurrence and detection. Please carry out a suitable analysis to evaluate the risks in the current process. Please refer to the Excel sheet “Analyze” >> FMEA.
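The analysis the exercise points to is an FMEA, where each failure mode gets a Risk Priority Number: RPN = Severity × Occurrence × Detection, each rated 1–10. The sketch below scores and ranks a few failure modes; the modes and ratings are illustrative placeholders, not the team’s actual FMEA.

```python
# FMEA risk scoring sketch: RPN = Severity x Occurrence x Detection (1-10 each).
# Failure modes and ratings below are hypothetical.

failure_modes = [
    # (failure mode,                 severity, occurrence, detection)
    ("Invoice sent to wrong address", 8, 6, 7),
    ("Tax code applied incorrectly",  7, 5, 4),
    ("Duplicate invoice raised",      5, 2, 3),
]

scored = [(name, s * o * d) for name, s, o, d in failure_modes]
scored.sort(key=lambda x: x[1], reverse=True)   # highest risk first

for name, rpn in scored:
    print(f"RPN {rpn:4d}  {name}")
```

Failure modes at the top of the ranked list are the ones whose causes warrant mitigation first.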
Solution Generation
The project leaders have identified alternative solutions along with the pros and cons of each. Please help the project leaders evaluate these alternatives and choose an optimum solution. Please refer to the Excel sheet “Improve” >> “Solutions”.
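One common way to evaluate alternatives is a weighted-criteria (Pugh-style) matrix: score each solution against each criterion with +1/0/−1 relative to a baseline, weight the criteria, and sum. The criteria, weights, solution names, and scores below are hypothetical, not the contents of the “Solutions” sheet.

```python
# Pugh-style solution selection sketch. Every name and number here is an
# illustrative assumption, not the project leaders' actual evaluation.

criteria_weights = {"cost": 3, "ease of implementation": 2, "impact on Y": 5}

alternatives = {
    "Automate tax-code lookup": {"cost": -1, "ease of implementation": 0, "impact on Y": 1},
    "Manual double-check step": {"cost": 1, "ease of implementation": 1, "impact on Y": -1},
}

def pugh_score(scores):
    """Weighted sum of +1/0/-1 ratings against the baseline (datum)."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

best = max(alternatives, key=lambda a: pugh_score(alternatives[a]))
for name, scores in alternatives.items():
    print(f"{name}: {pugh_score(scores):+d}")
print("selected:", best)
```

The weights force the team to state, before scoring, which criteria matter most; that is the matrix’s main discipline.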
The selected solutions are
______________________________________________________________________________________________
_________________________________________________________________________
_________________________________________________________________________
_________________________________________________________________________
_________________________________________________________________________
_________________________________________________________________________
Control Charts
The project leaders want to continue monitoring Y (the output) and two of the Xs that they could not completely mistake-proof; therefore it was decided to control chart them. Please use appropriate control charts and draw suitable inferences. Please refer to PP1&2_Control.MTW
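Since Late Payment % is a proportion defective, the appropriate chart for Y is a p-chart. The sketch below computes the 3-sigma limits and flags out-of-control points; the weekly counts and subgroup size are hypothetical, not the data in PP1&2_Control.MTW.

```python
# p-chart sketch for monitoring a proportion-defective Y (Late Payment %).
# Weekly late counts and the subgroup size n are illustrative assumptions.
import math

late = [12, 9, 15, 11, 8, 14, 10]      # late invoices per week (hypothetical)
n = 100                                 # invoices checked each week

p_bar = sum(late) / (len(late) * n)     # overall proportion late (center line)
sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)
ucl = p_bar + 3 * sigma_p
lcl = max(0.0, p_bar - 3 * sigma_p)     # proportions cannot go below zero

for week, count in enumerate(late, start=1):
    p = count / n
    flag = "OUT OF CONTROL" if (p > ucl or p < lcl) else "in control"
    print(f"week {week}: p={p:.2f} ({flag})")
print(f"p-bar={p_bar:.3f}, LCL={lcl:.3f}, UCL={ucl:.3f}")
```

Continuous Xs would instead go on an I-MR or Xbar-R chart; the choice of chart follows the data type, exactly as in the CTQ sheet.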
Y or X Inference
Y1:
Y2:
X1:
X2:
DEFINE: The CFO asked Mr. ‘S’ and Mr. ‘B’ to understand the background and identify the appropriate CTQs and goals.
Mr. S identified
CTQ: Goal:
Mr. B identified
CTQ: Goal:
If either of the goals is achieved, the minimum business benefit calculated over a tenure of one year is
$________________
Mr. S and Mr. B listed all inputs and their suppliers, the high-level process map, and the outputs and their customers.
Once the project charter was completed, it was signed off by the project sponsor. At the conclusion of the Define phase, Mr. S and Mr. B had identified the CTQ that relates to the output, created a project charter, and drawn a high-level process map.
MEASURE: To ensure the entire project team is absolutely clear about the CTQ definition, Mr. S and Mr. B documented it in the CTQ performance characteristics sheet.
Mr. S logically validates the reliability of the measurement system by checking the definition in the automated system; Mr. B, however, has to carry out an Attribute Agreement Analysis for the measurement system of his CTQ. In the Attribute Agreement Analysis, the team accuracy of the measurement system is _____ and therefore Mr. B considers the measurement system to be ____________
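As a sketch of the “team accuracy” figure, the snippet below takes team accuracy to be the fraction of items on which every appraiser matches the known standard; this interpretation, the appraisal data, and the 90% rule of thumb are illustrative assumptions, not Mr. B’s actual study.

```python
# Attribute Agreement Analysis sketch: team accuracy vs a known standard.
# The classifications below are hypothetical appraisal data.

standard    = ["late", "ontime", "late", "ontime", "late",
               "ontime", "late", "ontime", "late", "ontime"]
appraiser_a = ["late", "ontime", "late", "late", "late",
               "ontime", "late", "ontime", "late", "ontime"]
appraiser_b = ["late", "ontime", "late", "ontime", "late",
               "ontime", "ontime", "ontime", "late", "ontime"]

# An item counts only if every appraiser agrees with the standard.
agree = sum(
    1 for s, a, b in zip(standard, appraiser_a, appraiser_b)
    if a == s and b == s
)
accuracy = agree / len(standard)
print(f"team accuracy vs standard: {accuracy:.0%}")
# A common rule of thumb treats >= 90% as acceptable for attribute systems.
```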
Mr. S and Mr. B then collected data to evaluate the current performance of the identified CTQs. The sigma levels computed for their respective CTQs were:
Project Leader CTQ Zlt Zst
S
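The Zlt and Zst entries come from the observed defective rate: Zlt is the standard-normal quantile of the yield, and Zst conventionally adds a 1.5-sigma shift. A stdlib sketch with hypothetical defect counts (not the project’s data):

```python
# Sigma-level sketch: long-term Z from the defective rate, short-term Z via
# the conventional 1.5-sigma shift. Counts below are illustrative.
from statistics import NormalDist

defectives, total = 58, 1000            # hypothetical invoice data
p_defective = defectives / total

z_lt = NormalDist().inv_cdf(1 - p_defective)   # long-term sigma level (Zlt)
z_st = z_lt + 1.5                              # short-term (Zst), 1.5-shift convention

print(f"Zlt = {z_lt:.2f}, Zst = {z_st:.2f}")
```

Equivalently, 58 defectives per 1000 is 58,000 DPMO; either form yields the same Z values.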
ANALYZE: Mr. S and Mr. B now want to identify the critical Xs that are affecting the CTQs. The Six Sigma team, along with the SMEs, brainstormed to identify the potential Xs, grouped them into logical categories using an Affinity diagram, and then explored the cause-and-effect relationships using a Fishbone diagram.
The identified potential Xs are:
Tax Issue
Customer Region
Inexperienced Analyst
Product Type
Invoice Priority Type
Customer Type
IMPROVE: Mr. S and Mr. B have now identified the critical Xs that affect the output. They would like to generate alternative solutions to influence the identified critical Xs. They brainstormed and benchmarked to generate, evaluate, and select alternative solutions for ‘Tax Issues’ and ‘Customer Type’. However, to mistake-proof ‘Customer Address’, they brainstormed ideas and carried out the Nominal Group Technique. The selected solution is
__________________________________________________________________. They also generated alternative
methods to train the analysts and evaluated the alternatives using a Pugh Matrix analysis. The selected solution is
_______________________________________________________________
Mr. S and Mr. B checked whether the CTQs have really improved by comparing Sigma level before and after.
Project Leader CTQ Zst (Before) Zst (After)
S
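Beyond eyeballing the before/after Zst values, the improvement in a proportion-defective CTQ can be confirmed with a two-proportion z-test. The before/after counts below are illustrative assumptions, not the project’s data.

```python
# Two-proportion z-test sketch: did Late Payment % really drop after the
# improvement? All counts are hypothetical.
import math
from statistics import NormalDist

before_defects, before_n = 120, 1000
after_defects, after_n = 60, 1000

p1, p2 = before_defects / before_n, after_defects / after_n
p_pool = (before_defects + after_defects) / (before_n + after_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / before_n + 1 / after_n))
z = (p1 - p2) / se
p_value = 1 - NormalDist().cdf(z)      # one-sided: improvement means p1 > p2

print(f"z = {z:.2f}, one-sided p-value = {p_value:.6f}")
print("significant improvement" if p_value < 0.05 else "not significant")
```

A small p-value lets the team claim the sigma-level gain is real rather than sampling noise.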
CONTROL: Mr. S and Mr. B created a control plan to sustain the achieved improvement. They revised the standard operating procedures, process maps, and risk mitigation plans. They also set up a warning mechanism, using control charts, to detect the presence of special causes in Y and a couple of Xs. The control plan was reviewed by the process owner, signed off, and handed over to the process owner.
Y or X Control Chart Inference
Y1:
Y2:
X1:
X2:
You may reach us at [email protected] or www.varsigma.com