Six Sigma Black Belt Courseware
CHAPTER 1
1 Overview of Six Sigma
Introduction
Organizations exist for one purpose: to create value. An organization is considered
effective if it satisfies its customers and shareholders. If the organization is able to
add value with minimum resources, it is also efficient. The role of Six Sigma is to
assist management in producing the most value with the minimum amount of
resources, that is, in achieving efficiency. The organization does this by applying
scientific Six Sigma principles to its processes and products, using the DMAIC
(Define-Measure-Analyze-Improve-Control) approach to improve existing processes
or the DFSS (Design for Six Sigma) approach to design efficient products or
processes. A number of companies have found that upon embracing the Six Sigma
initiative, the business enterprise prospers.
So what is Six Sigma? Six Sigma is a well-known quality management technique for
controlling the defects arising in the various processes of an organization,
producing a yield of 99.9997%. It can be defined as a tool that measures the
deviation from the mean, or from the desired goal or target, and at the same time
effectively manages and eliminates those deviations. Six Sigma uses teams that are
allocated well-defined projects that have an effect on the business. It also provides
key team members with training in the advanced statistical tools and project
management skills necessary for the project.
The history of Six Sigma dates back to the 1920s, when Walter Shewhart developed
the control chart; Deming's 14 points and Ishikawa's 'seven quality tools' (control
chart, check sheet, run chart, histogram, scatter diagram, Pareto chart and
flowchart) followed later. It would be wrong to say that Robert Galvin "invented" Six
Sigma. Instead, he applied the tools developed by people like Shewhart, Deming and
Ishikawa. The contribution of all these scholars to the theories behind Six Sigma is
remarkable.
Carl Friedrich Gauss (1777-1855) introduced the concept of the normal curve. W.
Edwards Deming, the 'Godfather' of quality, brought about an immense change in
the approach towards quality in the early 1950s. However, Six Sigma was put into
practical use only when Motorola introduced it in the early 1980s as a method to
reduce its product failure level over the next five years. This was a challenge, and it
required prompt action and deep analysis. It was only in the mid 1990s that
Motorola revealed that it had adopted this unique method.
The value of Six Sigma can be ascertained from the fact that big names in the
corporate world are making regular use of it to improve efficiency. It is a data-
driven approach and it strives to improve the process at every step. It focuses on
reducing process variation and improving process control. The basic aim of this
approach is to reach a state that can be termed 'perfect'. Companies measure their
performance according to the sigma level of their business processes. Another
approach, called Lean Sigma (described in Chapter 9), focuses on driving out waste
and promotes work standardization and flow.
The earlier trend was to aim for three or four sigma performance levels. However,
with increasing competition the goal has risen, and companies now strive to achieve
the Six Sigma level. This means that the standard has risen from roughly 66,800
defects per million opportunities at three sigma, or 6,200 at four sigma, to just 3.4
defects per million opportunities. Six Sigma means six standard deviations between
the process mean and the nearest specification limit. The reason for this rise in the
bar is higher customer expectations: customers demand better quality and great
service, are increasingly aware, and hold a stronger position in the market.
The primary aim, in fact, of the Six Sigma approach is to keep customer
requirements in mind. So, the main focus of an organization is its customers or the
clients. They are the ones who influence the decision-making process in an industry
and for whom steps are taken to improve efficiency and performance. Six Sigma
helps to reduce costs and therefore makes it possible to make optimum use of the
company's resources.
To illustrate the above point, consider a business process outsourcing center where
the main focus, as in any other industry, is usually, the customers. Customer
satisfaction is their primary aim and at the end of the day, what they are looking at
is happy customers who would increase their business. Assume that Company X
manufactures computers. A customer orders a desktop and opts for a next day
delivery (after it is manufactured). If, for example, there is a 50% discount on the
order when the delivery is not on time, then every late delivery is a loss for the
company. If only 65% of the computers are delivered on time, the process is at
'level 2' sigma. If 92% are delivered on time, the process is at 'level 3' sigma. If the
company delivers 99.4% of the computers on time, then its performance is at
'level 4' sigma. However, if the company wants to achieve 'level 6' sigma, it will have
to deliver 99.9997% of the computers on time.
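The sigma levels quoted above follow directly from the normal distribution once the conventional 1.5-sigma shift is assumed. A minimal Python sketch, using only the standard library; the function name, the shift convention and the printed wording are illustrative assumptions, not part of the courseware:

```python
from statistics import NormalDist

def sigma_level(yield_fraction, shift=1.5):
    """Convert a process yield (e.g. the fraction of on-time deliveries)
    into a sigma level, assuming the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(yield_fraction) + shift

for on_time in (0.65, 0.92, 0.994, 0.999997):
    print(f"{on_time:.4%} on time  ->  roughly {sigma_level(on_time):.1f} sigma")
```

Running it reproduces the levels quoted above: 65% maps to roughly level 2, 92% to level 3, 99.4% to level 4 and 99.9997% to level 6.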
Therefore, it becomes very clear how defective the process is and where the
problem lies. It is a very reliable way to ascertain losses and loopholes in the
process. Six Sigma projects are executed by Six Sigma Green Belts and Six Sigma
Black Belts, and are overseen by Six Sigma Master Black Belts. Black Belts save
companies approximately $230,000 per project and can work on 4 to 6 projects a
year. It is a very disciplined approach and produces consistent results. Credibility is
key, because the top management is involved and excellent results can be assured.
Six Sigma is very critical to quality. An interesting fact is that it proves the notion
wrong that such a technique can work only for big companies. Actually, it can be
applied effectively to small businesses as well. It is a result-oriented approach
which produces excellent results with optimum utilization of resources. It saves
time, money and is one of the best ways to improve customer satisfaction.
The philosophical idea behind Six Sigma is that all systems are seen as processes
that involve inputs and outputs, and these can be measured, improved and
controlled. Six Sigma aims to control the outputs by influencing the inputs. It uses a
set of qualitative and statistical tools to steer process improvement.
2. Business Systems and Processes
What is a Process?
Business processes differ from organization to organization. Some processes
common to all organizations include drafting a plan, manufacturing, establishing
customer relations, communication, and providing great service to customers.
These processes can be further divided into sub-processes, such as design creation
and research and development. The processes can also be represented graphically
on a process map, which makes them easier to execute.
Six Sigma is gaining popularity by the day and being extensively used across all the
business sectors. Whether it is banking, health care, insurance, business process
outsourcing, telecommunications or even the military, the usage of Six Sigma has
become imperative as it sets certain standards and stands for quality. There are
numerous business systems and umpteen processes that have to be taken care of
in a particular organization. It is very important that defects be removed at the
lowest level, in every process, to reduce the overall defects. In order to produce a
result-oriented effort, it is important to identify the different business systems and
processes.
In the present day, the tasks in every organization are varied and the processes
quite complex. Therefore, to achieve better results, it is very important that people
from cross-functional departments come together and contribute their skills
towards achieving the process goal, instead of each department working under a
departmental authority. This calls for a rethinking of how tasks are assigned, and
multi-tasking needs to be recognized. This is possible only if each system and
sub-system is clearly defined and the role of each and every staff member
demarcated. This is where Six Sigma steps in. It reduces waste by making staff more
productive and minimizing errors. Everybody is assigned a specific role and is aware
of what needs to be done and how. A list of some of the business systems that
deploy Six Sigma is given below.
Health Care
Consider the healthcare sector. The face of this sector is changing for the better
and the tasks of the management are piling up. New discoveries are very frequent
here. Also with increased competition, the processes here have become quite
varied and complex. This means that errors have also increased manifold. If the
patients are not satisfied with the services they are receiving at the hospital, it
means that the management has failed in its purpose.
Take the case of a factory worker in India who meets with an accident in the middle
of the night and does not have the resources to be operated on at a private clinic.
The government hospital that he is admitted to closes its OPD (outpatient
department) at 10 PM. Here the OPD is a process, whereas healthcare taken as a
whole is the business system. It becomes the responsibility of the authorities
concerned to make sure that patients do not face this issue in the future.
It is time that the management starts providing better service to its patients: service
that is safer and produces greater patient satisfaction. The hospitals which
earlier made use of Plan-Do-Study-Act (discussed in the later part of this chapter) to
foster improvements are also adopting the Six Sigma model. In fact, many
companies in the healthcare sector have successfully implemented the Six Sigma
model. They are now focusing on an environment which does not believe in passing
the blame.
In a recent study at the North Carolina Baptist Hospital, a Six Sigma process
improvement team assigned the task of moving heart attack patients from the
emergency ward to the cardiac lab reduced the hospital's mean time for doing so by
41 minutes.
Business Process Outsourcing
The outsourcing sector is not a recent discovery. One of the earliest approaches
was for a company to outsource categories of work such as document copying, data
entry and scanning. However, with the passage of time, companies started
outsourcing an entire process to a vendor. The vendor would then, through a
contract, agree to buy all the assets of the company and hire the same company's
employees to carry out the process.
The current trend is to hire an Outsourcing Partner to carry out the company's
back-office programs. This approach helps maximize the outputs by providing
additional labor, equipment, technology and infrastructure. In fact, the BPO boom
is such that it has paved the way for many new innovations. It has made it possible
to outsource knowledge-based jobs. Knowledge Process Outsourcing is a recent
phenomenon that requires manpower adept in specialized knowledge.
Terms like "moving up the value chain" and "business transformation" are
synonymous with the BPO sector. They stand for cost efficiency and maximizing
profits. This sector keeps in mind three aspects: customer, process and employee.
Six Sigma is a highly disciplined approach which helps in delivering near "perfect"
products and services and, therefore, is appropriate for this sector as customer
satisfaction is the key in this sector.
There are some common steps needed to change the face of a BPO operation and
drive it towards success. Take the example of an inbound customer contact center.
Average Handling Time (AHT) is one of the basic metrics in such a process. Assume
that the target AHT in this organization is 8 minutes. If employees are spending
more time than this on their calls, the total number of calls each employee can take
decreases, because the longer the average handling time, the fewer calls can be
handled. Therefore, to improve efficiency and increase the number of calls, it is
important that the employees be well versed in the process and provide a
standardized solution for a particular issue.
So, the process that needs improvement in this case is the one that requires
reducing the AHT. The BPO should therefore make sure that the agents handling
the calls have appropriate knowledge about the products and keep their
conversations clear and to the point. Through Six Sigma, employees strive for
perfection and try to achieve 100% accuracy. At the end of the day, what is required
is a satisfied customer who will bring more business, which also means increased
profits for the company.
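As a rough illustration of why a lower AHT matters, the number of calls an agent can handle in a shift is simply the productive minutes available divided by the AHT. A small sketch with purely illustrative figures; the seven productive hours and the function name are assumptions:

```python
def calls_per_shift(aht_minutes, productive_minutes=7 * 60):
    """Approximate calls an agent can handle in a shift of ~7 productive hours."""
    return productive_minutes / aht_minutes

print(calls_per_shift(8))    # ~52 calls at the 8-minute target AHT
print(calls_per_shift(10))   # ~42 calls if AHT drifts up to 10 minutes
```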
Insurance
As stated earlier, Six Sigma applies to all kinds of organizations and processes and
insurance is one of them. Insurance companies, these days, offer insurance for
various purposes such as health, vehicles, fire and life. This is one sector which is
very time-consuming as it requires a lot of paper work, underwriting functions and
adjustments. With umpteen organizations providing insurance, the field has
become quite competitive and therefore a lot of efficiency is required to lead.
What is required in the insurance sector, as in any other, is the need to understand
client requirements (in this case policy owner requirement). Another requirement
of this sector is reliability so that the clients trust the insurance company. None of
the insurance companies will eliminate their underwriting function just to make the
customers happy. However, they can modify these functions to suit customers’
needs.
Military
It might seem quite strange to hear that the military world would require a
mechanism like Six Sigma. When you think of the military, the image that would
come to mind is that of men who are conditioned by a lot of rules and regulations.
However, it is true that a lot of business goes on in the army as well. The military
buys equipment and machines, and procures arms and ammunition. Besides, there
is routine work such as the payment of salaries. A lot of money is spent during
recruitment. The attrition rate is also rising, which translates into rising costs.
Although there is no continuous struggle to cut costs and maximize profits as in
other business processes, running the military is no less than running a business.
The Six Sigma method has helped in building a better work force and a methodical
organizational process in the military.
Six Sigma is a powerful tool that helps implement innovations which can transform
organizations for the better. The trained belts apply Six Sigma methods to various
processes such as manufacturing, repair, sale and maintenance. Six Sigma also
proves effective for sectors such as banking and education. The business processes
should therefore be tailor-made to suit the needs of the business systems. The
business systems, in turn, should make sure that they possess all the information
about each process and the improvement it requires. Both the business process
and the system depend on each other for success. Below is a diagram which shows
the relationship between a business system, a business process and its
sub-processes.
3. Process Inputs, Outputs and Feedback
The words ‘input’, ‘process’ and ‘output’ may sound like technical terms, but in
reality, these words are applied in day-to-day life. Take a very simple example, that
of preparing tea. The inputs that go into making tea are water, sugar, tea bags and
milk. The process is to boil water and put the ingredients in it. The output is tea.
This analogy can be used to understand business systems. Business systems are of
course more complex and sophisticated.
The tea-making process can be illustrated with the help of a simple diagram:
This process framework is too narrow in scope and gives only a broad outline of the
process. However, as stated earlier, business systems do not work on such simple
lines. They are vast processes and require expertise.
Inputs
Processes
Outputs
The dictionary defines output as the act or process of producing or, simply put,
production. The result of carrying out the process in a systematic and productive
manner is termed as delivering outputs. The outputs of a business system result
from the internal processes that go on in an organization. These outputs can be in
the form of goods and services or can be simply ideas, thoughts or some
information-based data. These outputs are the revenue-earning material for an
organization. Also, an output of one department may be the input of another.
Consider a company that manufactures pizzas. The first thing that needs to be
done is to knead the dough and prepare the pizza base. The next thing is to garnish
it with cheese, vegetables, oregano and basil; and then bake it in the oven. So, the
first stage of preparing the pizza base may seem like the output on the one hand.
However, it is also an input in the entire pizza making process. Similarly, the freshly
prepared pizza may be the output in the production unit (kitchen) and a form of
input in the sales department.
The business process is much more complex than it seems and needs to be
supervised by a controller who regulates the activities of the group. There is a lot of
active communication that goes on between the controller and the personnel
assigned to carry out the tasks. The loop-holes in the process need to be looked
into. Besides, there is the intricate task of procuring the inputs and selling the
outputs.
The Six Sigma tool makes sure that the inputs or the resources are optimally used.
This also means that these inputs are used in such a manner that maximum return
is guaranteed on the investment. The process should be such that it is least time-
consuming, is very efficient and the output should be the one that yields maximum
profit.
Feedback
Feedback is very vital for any organization. If the feedback is positive, it is an added
advantage. The process of converting input into a productive output requires a lot
of hard work and patience. So, it is very important to have a feedback and get to
know what the customers feel about the product or service that they are using. The
organization can learn from the feedback and also get to know about the loop
holes, if any. They can, then, improve upon it and hope to satisfy the customers.
On the other hand, feedback is important for the customers as they are able to
voice their concerns and give their feedback (positive or negative) about the
product or service they are using.
B. Agents of Change
Six Sigma is not a completely new method of changing an organization; in fact, Six
Sigma forces change in an organization to occur in a systematic manner. The
foremost aim of the management in traditional organizational setups is to develop
systems to create value for customers and shareholders. This task is an ongoing
process because management systems have to constantly strive to sustain their
competitiveness in the face of shifting customer trends and tastes. Then there is
always the threat of competitors who try to woo away customers by constant
innovation. The external factors like capital markets are always presenting new
ways to secure return on investments. For the business to survive in a fast moving
and dynamic environment, the organization needs to respond quickly and
effectively to change. This emphasizes the significance of and the need for change
in management systems.
At the project level, there are many factors that necessitate change. It is the project
itself that necessitates change. Although, every process of the project is planned
during the planning phase, there can be many fluctuations and variations
eventually in the project's scheme of things. Various factors, such as a change in the
project scope, alteration of the time schedules, variations in the proposed costs,
divergence in the design, pattern, quality or specifications of the deliverables, and
modifications in the technology, call for change.
The Six Sigma methodology embraces change by intrinsically integrating change
into the management system. Six Sigma does not try to change the management
system altogether, but it creates the infrastructure in such a way that change
becomes intrinsic to the everyday scheme of things. Six Sigma creates full-time or
part time positions like Green Belt, Black Belt or Master Black Belt to facilitate this
change. (This has been discussed in the subsequent sections.)
The functioning of Six Sigma demands that the organization constantly find new
ways of improving their current systems. It is more than just about operating the
system the routine way. New techniques are employed; new procedures are
implemented to suit shifting customer and shareholder needs. Statistical and
analytical techniques are applied at all levels and metrics to facilitate this change.
There has to be rigorous training to effect this change, and one of the basic things
that needs to change is communication. Leadership has to ascertain that
communication about incorporating Six Sigma is effective, devoid of loopholes; else
there will be stiff resistance to change.
2. Managing Change
A change initiative is typically framed like a project, with:
a problem statement
an objective
a baseline metric
the other related metrics for the project
One of the foremost responsibilities of the management is to look for the trends in
the macro environment so that the desired changes can be identified and new
programs can be easily initiated. Management has to explain the importance of the
change. The change plan must also include a communication plan and training
requirements to lessen the effect of the change in the team involved in the change.
It also has to calculate how the change will impact employee acceptance, reaction
and motivation, operating procedures, and technological requirements. The
management then has to see that the change imperative percolates down to all levels.
In addition, it has to review and monitor the change to check effectiveness, and
make adjustments where necessary. At the same time, management has to support
employees as they undergo the process of change.
Roles
The roles that various personnel play when a change is initiated are as follows (Hutton, 1994):
The official change agent, sometimes called the champion, is the person who is
officially designated to assist management to carry out the change process.
Sponsors are the senior leaders who are authorized to legitimize the change.
The advocate is the person who sees the need for a change.
The informal change agents are the personnel who assist in managing the change
initiative voluntarily.
1. The change agent imparts education and training to the personnel involved in
the change. Training means giving technical instruction and practical exercises that
help people learn how to perform a task. Education means enabling them to think,
or changing the way they think. Change agents also provide management with
alternative ideas for tackling the change.
2. Change agents hold important positions in the organization and are therefore
pivotal in bringing about quality improvements. They help coordinate the
development and implementation of the change.
3. A change agent helps the organization to organize review programs about its
strengths and weaknesses. A change is necessitated usually to eliminate the
weaknesses and focus on the strengths.
4. The change agent also makes sure that the top management is giving enough
time and commitment to the change effort, without which the change effort will fail
to take off. Change agents act as a 'coach' to the senior leaders; they advise and
coax the leadership into action and continuously remind them if the goals aren't
being met.
5. Change agents use projects as the instrument for change. Projects have to be
planned in such a way that they are aligned with the change initiative's goals. The
change agent must also be prepared to deal with resistance to change.
1. Enterprise Leadership
The most important responsibility of the top management is to assign the roles and
responsibilities to responsible personnel who will assist them in the deployment of
Six Sigma. Moreover, the task of linking the project to the organizational goal is
equally important. Six Sigma is supervised by the top management. They answer
the question as to who will be the leader of the project and are, therefore one of
the core elements. The Green Belts, Black Belts and, the most talented and
accomplished of all, the Six Sigma Master Black Belts assist the top management in
the deployment of a Six Sigma project. Although these positions sound like terms
from karate or martial arts, they are not in any way related to the sport. The terms
came to be associated with Six Sigma when a plant manager at Motorola reported
to Mikel Harry and his team that the Six Sigma tools were "kicking the heck out of
variation" as far as production was concerned.
Executive Leadership
An executive leader is typically the chief executive officer of a company, who
carries the most important responsibility in the management. He ranks highest on
the hierarchy ladder of the company or organization and holds a permanent
position in it. He is the one who represents the needs of the customers and
communicates them across the organization. He executes the policies of the
company and reports directly to the Board of Directors. Just as the CEO is the
senior-most manager in a company, the executive leader holds the senior-most
position in the Six Sigma hierarchy. He is also responsible for allocating duties to
the junior officers.
Project Champions
They are also senior-level officers and managers who synchronize Six Sigma
projects. The project champions look for prospective projects and oversee the
ongoing project. Project Champions lay out the initial draft of the project. They are
the ones who are responsible for managing the budget, eliminating the problems
and ensuring that the projects are completed on time. They are skilled as far as the
statistical concepts and tools are concerned. It is important to have one Six Sigma
champion per project. The Project Champions also make sure that they are in
constant touch with the Master Black Belts and hold regular meetings with them
about the project.
Master Black Belts
Master Black Belts are experts who have hands-on knowledge and who act as
mentors for Black and Green Belts. They are responsible for selecting, prioritizing
and implementing Six Sigma projects. They also revise the Six Sigma training
manuals. However, the main job of a Master Black Belt is to train the other
members involved in the Six Sigma project, which means that he should have
thorough knowledge of the project they are working on. Moreover, Master Black
Belts are often assigned to a particular management function, such as finance or
resource management.
They earn the prefix "Master" after having gained experience in the field by
training Green and Black Belts for a number of years. This means that they should
possess very strong communication and motivation skills and they should, at the
same time, have technical competence. They should be able to explain all the
statistical tools and concepts with ease and be ready to face challenges and
problems.
Black Belts
Black Belts form the technical support team in a Six Sigma project. They are highly
trained and have hands-on knowledge of the statistical tools and methods. They
should also possess presentation and analytical skills. They should have the ability
to take risks and at the same time be innovative enough to come up with
something new. They usually form a team of 4 to 6 people per project.
The role performed by a Black Belt is that of a coach. His main job is to train others
involved in the Six Sigma project by citing examples, holding seminars and
conducting workshops. Black Belts typically handle 3-4 projects per year and deliver
significant results. They impact an organization in the most significant manner by
saving large amounts per project. Black Belts hold a significant position in the cadre
of players in a Six Sigma project and can also assist the Master Black Belts, if need be.
Green Belts
Green Belts are employees who have received basic Six Sigma training and who
work on Six Sigma projects part time, under the guidance of Black Belts, alongside
their regular responsibilities. The time required for each level of position in Six
Sigma varies. However, what is required at each level is to choose capable
personnel from the organization who are ready for innovation. They should be
people who can communicate effectively and bring about significant changes in the
organization.
The Six Sigma leaders should be able to demarcate the advantages of strategic
planning and at the same time be able to determine the steps needed to complete
the ongoing project. They also oversee whether the duties are being performed in
the right fashion and they should also be able to choose the right people who will
be able to perform efficiently in a team. There should be enough flexibility in the
organization. It means that the team members should also be given enough
freedom to take independent decisions whenever required.
A successful leader will always work in accordance with the market needs. He
anticipates the needs of the customers and molds his business strategy with effect
to them. He is always prepared to take risks and lay the performance standards
according to the current trends.
DMAIC
Define Phase
This is the first phase of DMAIC. In this phase, the key factors, such as the Critical
to Quality (CTQ) variables and the problems present in the process as identified by
the customers, are defined. A process is an ordered sequence in which inputs are
transformed into an output. The process that needs to be amended is clearly
mapped using SIPOC, an acronym that stands for Supplier-Input-Process-
Output-Customer.
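A SIPOC map is often captured as a simple five-column table. The sketch below represents one as a plain Python dictionary, using the computer-delivery example from earlier in this chapter; all the entries are illustrative assumptions, not a prescribed template:

```python
# Illustrative SIPOC map for the next-day desktop-delivery example.
sipoc = {
    "Supplier": ["Component vendors", "Logistics partner"],
    "Input":    ["Customer order", "Parts", "Delivery schedule"],
    "Process":  ["Assemble desktop", "Test", "Pack", "Ship next day"],
    "Output":   ["Working desktop delivered on time"],
    "Customer": ["Desktop buyer"],
}

for element, items in sipoc.items():
    print(f"{element:<9}: {', '.join(items)}")
```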
The Voice of the Customer is critical to define the goals of the organization. There
are other issues to be taken care of as well and they include cycle time, cost and
defect reduction. The essence of Six Sigma is to solve problems that are impacting
business. The process of improvements starts immediately with the "Define" step.
When a Six Sigma project is launched, goals are chalked out to have an idea about
the degree of satisfaction among customers. These goals are further broken up into
secondary goals such as cycle time reduction, cost reduction, or defect reduction.
The Define Phase comprises baselining and benchmarking the processes that
need improvement. Goals and sub-goals are specified and the infrastructure to
accomplish them is established. An assessment of changes in the organization is
also taken into consideration.
Measure Phase
The second phase is the measure phase in which the reviewing of information and
collection of data takes place. This phase helps measure the performance of the
ongoing process. In this phase the data collected is quantified. The process is
measured in terms of defects per million opportunities. This is imperative for Six
Sigma because only if the measurement is correct will the results be good. The
important thing to be kept in mind while measuring is that there should be cost and
time savings.
The important requirement in this phase is that the measurement system should
be one that can be substantiated when required. It should be accurate and
orderly.
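Defects per million opportunities is a straightforward ratio, and a quick sketch makes the arithmetic explicit; the inspection figures below are invented purely for illustration:

```python
def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# e.g. 27 defects found across 1,500 units, each offering 5 defect opportunities
print(dpmo(27, 1500, 5))   # -> 3600.0 DPMO
```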
Analyze Phase
The Analyze phase is the one where the interrelation between the variables and the
impact they have on the quality is studied. This is also the phase where the root
cause of the defect is analyzed, new opportunities are searched for and the
difference between the current and the target performance is found out. The idea
behind this kind of analysis is to find out the inputs that directly affect the final
output. Also, it can help to answer several questions, like:
If an input X is changed, will it alter output Y?
If one input is changed, does it affect the other inputs?
The analysis helps to determine the blend of inputs that most affects the output. In
the Analyze phase, it becomes easy to determine the variables that affect the CTQ
factors, as the sketch below illustrates.
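One simple way to see which input drives the output is to look at the correlation between each candidate X and the measured Y. This is a minimal sketch with invented process data; correlation is only one of many Analyze-phase tools, and Python 3.10+ is assumed for statistics.correlation:

```python
from statistics import correlation  # available from Python 3.10

# Illustrative process data: two candidate inputs and the measured output Y.
oven_temp   = [210, 215, 220, 225, 230, 235]   # X1
prep_time   = [12, 15, 11, 14, 13, 12]         # X2
defect_rate = [5.1, 4.4, 3.8, 3.1, 2.6, 2.0]   # Y (defects per 100 units)

print(correlation(oven_temp, defect_rate))   # strongly negative: X1 drives Y
print(correlation(prep_time, defect_rate))   # near zero: X2 barely affects Y
```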
Improve Phase
The Improve phase comes after the Analyze phase in DMAIC. The personnel
working on the project select the method that would work best for the project,
keeping the organizational goals in mind. The root cause analysis is documented in
the Analyze phase; the Improve phase implements the conclusions drawn from it.
In this phase, an organization can improve, retain or negate the results of the root
cause analysis. In this phase (as in the Analyze phase), the Open-Narrow-Close
approach is used. The approach makes it easy to narrow down the options and
choose the best solution. However, the emphasis remains on choosing the result
which leads to maximum customer satisfaction. This is also the phase where an
innovative resolution is found out and it is executed on an experimental basis.
Control Phase
It is very important to maintain the standard that has been established. The Control
phase is the one in which the improvements that have taken place are sustained.
This is done by documenting the improvements and keeping a check on the new
process that has been created by mitigating the defects, so that the defects that
were earlier present in the process or product do not reappear in the new process
or product. A simple control-limit check of the kind sketched below is one way to
keep such a watch on the process.
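The following minimal sketch flags any new measurement that falls outside control limits computed from the improved process. The mean plus or minus three standard deviations rule is a simplification (classical SPC charts derive limits from the average moving range), and the cycle-time data is invented:

```python
from statistics import mean, stdev

def control_limits(samples):
    """Return (lower, centre, upper) limits as mean +/- 3 standard deviations."""
    centre = mean(samples)
    spread = 3 * stdev(samples)
    return centre - spread, centre, centre + spread

baseline = [41, 43, 40, 44, 42, 41, 45, 43]      # cycle times after improvement
lcl, cl, ucl = control_limits(baseline)

new_point = 51
if not lcl <= new_point <= ucl:
    print(f"{new_point} is outside ({lcl:.1f}, {ucl:.1f}) - investigate the process")
```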
There are different kinds of problem-solving tools in a Six Sigma project. DMAIC is
the most popular one. However, there are other tools which are more beneficial
than DMAIC for some organizations. Let us take a look at those and compare them
with each other and with DMAIC.
Although DMAIC and DMADV are two different Six Sigma methods, they have some
things in common. Both methods are used to reduce defects to 3.4 per million
opportunities. Both make use of facts and are carried out by Green Belts, Black
Belts and Master Black Belts. Also, the first three letters of their acronyms are the
same. It is important to note that both are learning techniques which help
organizations maximize profits and climb the growth chart. Despite the similarities,
there are differences between the two tools.
DMAIC: D - Define - To define the objectives of the project and the demands of the
customer.
DMADV: D - Define - To define the new project and determine the new demands of
the customer.
DMAIC is used when the existing products or policies are not up to the required
standards and the customer satisfaction goals are not met. On the other hand,
DMADV is used when new products and processes need to be introduced in the
organization and also to maximize profits. Another reason to implement this
methodology (DMADV) is when DMAIC has been introduced but is not producing
any results.
PDCA (Plan-Do-Check-Act)
This is the basic and very first model for Six Sigma. It is also referred to as the
"Shewhart Cycle" or "Deming Cycle" because the model was developed by Walter
Shewhart in the 1930s and later used by W. Edwards Deming in the 1950s; it is also
called the "Deming Wheel". This is an active model that never ends and consistently
strives to improve the process. It is a continuous circle in which one act leads to the
next with no end point; similarly, in PDCA there is no end to continuous
improvement. The PDCA cycle can be represented in the form of a diagram.
Plan
It is very important to lay down the goals of the project. A draft is prepared to list
the objectives that need to be accomplished in accordance with the policies of the
organization. Planning is a phase in which a design for an improvement is made
and at the same time old policies and procedures are revised for the better.
Do
This step simply means to put the plan into action. The plan of action needs to be
implemented to get results for the betterment of the organization. This is usually
done on an experimental basis to test the validity of the claim made by the plan.
After the plan is executed, its performance is measured through different
techniques.
Techniques applied - conflict resolution and on-the-job-training.
Check
This is an important phase because it leads to control over the new policy. It is very
significant to ascertain whether the process has undergone any improvements and
whether it is beneficial for the organization in the long run. All the improvements
are evaluated and their results discussed.
Act
In this stage, the organization can accept, adopt or reject the proposed policy. The
pros and cons of the new policy are measured on such factors as increase in profits
or sales, the time involved and the use of resources. If the organization thinks that
the policy is worth adopting then they can either continue with or modify it
according to the current need. If need be, they can scrap it all and adopt a different
plan altogether.
PDCA should be applied frequently, and it should always strive to improve the
process. It is largely based on a trial-and-error method, and the best approach is
usually found after a few unsuccessful attempts. It is also a good amalgam of the
Eastern Lean approach called Kaizen and the Western approach of measuring the
amount of success achieved.
This approach has certain disadvantages, as it consumes a lot of time, money and
resources, and it can be difficult to apply in practice. Dr. Deming also set out 14
points for management.
Apart from the above mentioned models, the Six Sigma approach also makes use
of other models such as SEA (Select-Experiment-Adapt) and SEL (Select-Experiment-
Learn). The difference between them and the PDCA model is that these two models
have positive loops in between which make the process much more forceful than
the PDCA model. The PDCA model is a circular system and it makes the process
very cumbersome. If there is a mistake, the whole process has to begin anew. A
new plan needs to be drafted and implemented, on an experimental basis. This
consumes a lot of time, money and resources. On the other hand, the SEA and the
SEL models are applied when there is a positive feedback from other agents, unlike
the PDCA model where the feedback from the other agents is not taken into
account.
Definition of Deployment
The literal meaning of the word deployment is to install or set up. Deployment is a
military term that means positioning troops. In Six Sigma, too, the term is used in
the context of placing the personnel who will carry out the Six Sigma project.
Deployment in Six Sigma is a very crucial step for an organization that adopts the
Six Sigma principles. This is because Six Sigma does not involve carrying out routine
tasks. Instead, it is about reducing or eliminating the defects and at the same time it
also strives to improve the process for the better. This means a deviation from the
regular activities that are being carried out in the organization and a sincere effort
to improve the process. Six Sigma is a concerted effort that needs equal
participation from all the members.
Six Sigma is a digression from the regular tasks and, therefore, it requires an effort
on the part of the top management to motivate the people working on the project.
Many people are afraid to adopt things that are new. It is the top management's
responsibility to clearly communicate the goals of the project and encourage the
personnel to adapt to the change easily. It should be clear to the personnel how
they and the organization are going to benefit from the project.
The Six Sigma plan has to be communicated clearly by the leadership so that the
vision is accepted by all the stakeholder groups- customers, employees,
shareholders and suppliers. Six Sigma means bringing in a cultural change in the
organization; so a well-defined communication method has to be in place.
Communication brings clarity and removes fear related to change; it should be such
that people are ready to accept the change. The management should be able to
remove all doubts and apprehensions that the personnel deployed for the project
might have regarding the project. Management should make clear that the
organization is serious about its commitment to the Six Sigma project.
A communication gap can lead to problems later on and, therefore, there should
be none. The roles and responsibilities of the personnel should be clearly
demarcated, and the tools and policies that need to be adopted should be clearly
communicated.
The responsibility and development of the communication lies with the Process
Owner and he is accountable to the Six Sigma executive council. He will need to
bring a team together to execute the plan. He will report to the overall sponsor of
the Six Sigma deployment authority. The communication plan will need to address
the needs for each stakeholder group related to the project.
Leadership's primary role is to instill a vision for Six Sigma; its role is to see that Six
Sigma is not only implemented but also incorporated into the business ethos of the
organization. Its main responsibility is to link Six Sigma goals and objectives to the
organizational goals, because the company has to keep long-term objectives in
mind.
However, any project is taken up keeping in mind the overall growth or long-term
profits of the company. It could also be for increased ROI (Return on Investment) or
to increase sales. The Six Sigma project should be implemented in such a manner
that the normal functioning of the organization is not affected. This necessitates the
creation of new positions in the form of change agents, and modifications to
departments and reward systems.
The executives must have total commitment to the implementation of Six Sigma
and accomplish the following:
Creation and Agreement of Strategic Business Objectives, that is to identify the key
business issues
Creation of Core, Key and Sub-Enabling Processes, that is to establish a Six Sigma
Leadership Team
Organization of support for the Six Sigma program in the organization
Decision on how new positions or change agents will be created; and the reporting
authorities of each, for e.g., identification of masters or process owners for each
key business issue
Decision of employing cross-functional teams
Definition of timelines for transitioning from a traditional enterprise to a Six Sigma
oriented enterprise
Decision on whether Six Sigma will be a centralized or a decentralized function
Creation and Validation of Measurement “Dashboards”
Decision of incorporating Six Sigma performances into the reward system
Decision on how much ROI is expected from the Six Sigma program
Decision on how much of financial, intellectual and infrastructural resources are to
be dedicated to the project
Continuous evaluation of the Six Sigma implementation and deployment process
and making the necessary changes
The people working on a Six Sigma project have to be selected very carefully. The
leaders of a Six Sigma initiative and champions in sport have a lot in common: both
require a great deal of restraint, and much depends on quickness of action. Both
follow the principle of "strike while the iron is hot". The success of a Six Sigma
project relies on its leaders. The management comprises the Six Sigma Champions
and the Executive Leader, and they are assisted by team leaders, namely the Green
Belts, Black Belts and Master Black Belts.
The people who are lowest in the organization's hierarchy are made familiar with
the basic tools. They are given enough freedom to take independent decisions
without consulting the top management whenever the need arises. This technique
becomes more successful as the Six Sigma deployment matures.
This means training the managers and helping them acquire the new skills required
to make the project a success. Often, people have a fear of the unknown and the
new. Therefore, the managers should be motivated to take up the new job; they
will gradually learn by experience and adapt to the change.
The people working on a Six Sigma project, for example the Black Belts, do not
carry out routine tasks. They are very different from regular engineers, quality
analysts and technicians. They are multifaceted people, drawn from cross-functional
departments, who step away from most routine activities to work full time on Six
Sigma projects. It is imperative that they are chosen with care, because they will be
responsible for the success of the Six Sigma project. They will be accountable not to
one but to many supervisors who are overseeing the Six Sigma project.
Rather than treating the Black Belt as a temporary special assignment, a permanent
position can be created for the post. Change agents are always required to enable
an organization to grow, so it is better if a permanent position is created for them.
Black Belts gain hands-on experience by working on several projects, and this
makes them adept at delivering great results. Another option is to rotate new and
fresh people into Six Sigma projects so that as many people as possible gain
exposure to the position of a Black Belt.
The first step is to initialize the Six Sigma project by drafting the plan. This is done
by outlining the goals and putting the infrastructure in place.
The next step is to divide the tasks among the personnel chosen for the task. They
are, then, trained and facilitated to complete the task conveniently.
The third step is to enforce the projects and improve the performance which helps
in generating profits.
The fourth step is to have a broader perspective of the endeavor that the
management has taken up. It means to involve the other organizational units in
the project.
Fifthly, it is very important to maintain the standards that have been created. This
is done through constant upgradation and research and development.
2. Risk Analysis
Risk analysis is an important part of a Six Sigma project, because Six Sigma is a tool
that involves taking risks.
Consider ten identical balls moving randomly inside a closed box. If each ball is
equally likely to be in either half of the box, independently of the others, then the
probability at any given point of time that all ten balls lie in the same half is
2 × (1/2)^10, or about 1 in 512. The formula for expected profit is EP = Profit ×
Probability. Risk assessment in real life is not this simple, because the probabilities
are usually not known and need to be ascertained.
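The figure above is easy to verify with a quick simulation, assuming each ball is equally likely to sit in either half of the box independently of the others; the payoff figure is invented purely to show the expected-profit formula:

```python
import random

def all_in_same_half(n_balls=10):
    """One random snapshot of the box: are all balls in the same half?"""
    left = [random.random() < 0.5 for _ in range(n_balls)]
    return all(left) or not any(left)

trials = 200_000
p_est = sum(all_in_same_half() for _ in range(trials)) / trials
print(p_est)                    # close to 2 * (1/2)**10 = 1/512 ~ 0.00195

profit_if_event = 10_000        # illustrative payoff
print(profit_if_event * p_est)  # expected profit EP = Profit * Probability
```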
To begin with, the process has to be studied in detail and the areas that are prone
to risk identified. A fishbone (cause-and-effect) diagram is useful for such a study:
it helps to trace the causes and effects of implementing a new policy or process.
Secondly, the risk is measured on three parameters: severity, frequency and
detectability. Severity and frequency are taken into consideration as both short-term
and long-term factors, whereas detectability is considered only as a short-term
factor. These risk factors are compared with the other factors that could increase
the potential for risk, and are combined into a score known as the Risk Priority
Number, or RPN.
The RPN helps to identify defects and to give precedence to them according to the
way they are present in the product or process. It is a qualitative approach that
helps to assess the severity of the defects present in the process. The RPN can range
anywhere between 1 and 1000, as the sketch below shows.
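A minimal sketch of how an RPN is computed in an FMEA-style assessment; the 1-10 rating convention for each factor is the usual one, and the scores below are invented:

```python
def rpn(severity, frequency, detectability):
    """Risk Priority Number: each factor rated 1-10, so the RPN runs from 1 to 1000."""
    for score in (severity, frequency, detectability):
        if not 1 <= score <= 10:
            raise ValueError("each factor is conventionally rated from 1 to 10")
    return severity * frequency * detectability

# e.g. a failure mode rated severity 8, frequency 4, detectability 6
print(rpn(8, 4, 6))   # -> 192; higher RPNs are addressed first
```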
An Alpha risk is often said to be more catastrophic in the short run, whereas a Beta
risk holds more prominence in the long run. It is, however, difficult to determine
which risk matters more for a particular organization; the decision about which risk
to guard against requires a detailed study that is carried out as the project proceeds.
Alpha and Beta risks are not simple opposites of each other, but they are related:
precautions taken to ward off one of them increase the risk of the other appearing.
Therefore, it becomes imperative to collect relevant data to strike the right balance
between the two, as the sketch below illustrates.
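In statistical terms the Alpha and Beta risks correspond to Type I and Type II errors, and the trade-off can be made concrete for a simple one-sided z-test of a mean shift. This is a minimal sketch with assumed figures; the effect size, sample size and the known-sigma test are all illustrative assumptions:

```python
from statistics import NormalDist

def beta_risk(alpha, effect_size, n):
    """Type II risk (beta) for a one-sided z-test detecting a standardized
    mean shift of `effect_size` with sample size `n`, at Type I risk `alpha`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    return NormalDist().cdf(z_alpha - effect_size * n ** 0.5)

# Tightening alpha raises beta for the same effect and sample size:
for a in (0.10, 0.05, 0.01):
    print(a, round(beta_risk(a, effect_size=0.5, n=20), 3))
```

Running it shows beta rising from roughly 0.17 to 0.54 as alpha is tightened from 0.10 to 0.01, which is exactly the trade-off described above.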
SWOT Analysis
SWOT is an important aspect of risk analysis done in the context of any commercial
activity. It is a strategy based analysis and stands for strength (S), weakness (W),
opportunity (O), and threat (T). The first two can be categorized as intrinsic factors.
However, the last two form the factors that impact an organization from the
outside. SWOT analysis is instrumental in coordinating an organization’s resources
to its best ability. Moreover, it keeps in mind the stiff competition in the market that
an organization has to face. It plays an important role in choosing and
implementing new policies and procedures.
The strengths and weaknesses are intrinsic to an organization. Strengths are the
strong points of an organization and which also might be exclusive to an
organization. On the other hand, the weaknesses are the weak factors of an
organization. They may include factors like high costs or poor quality. The
opportunities are the external chances that a company may get to prove its worth.
The threats are the ones that a company may face from newly framed laws or cut-
throat competition.
A major threat facing many organizations is not being able to change fast enough in
an environment where the pace of change is accelerating, fueled in no small way by
developments in the business.
Closed-Loop Assessment and Knowledge Management
Closed-loop assessment is an important tool that is useful for any organization and
at each level of the hierarchy. It is a system that helps to analyze and improve
current processes. As the name indicates, closed-loop assessment means to assess
and determine the gaps in the processes or systems of an organization. The lessons
gathered from previous experience are used to identify more such similar
opportunities. It is also done in order to ensure that the errors that were
committed in the past are not repeated.
The Black Belts and the Master Black Belts are assigned the painstaking task of
disseminating useful information. They measure and assess whether the assumptions they had
made were correct and whether the SWOT analysis bore any fruitful results. They
also have to measure the goals accomplished, the kind of problems encountered,
the opportunities received and the lessons learnt.
It should be noted that the knowledge should be disseminated to the right people
at the right time, keeping the costs in mind. Accountability should be fixed and
there should be constant communication amongst the employees about the
policies. Simply put, knowledge management is a method which is applied by
organizations to get the maximum benefit out of the employees possessing
technical know-how and other intellectual resources.
Closed loop assessment and knowledge management are significant tools that can
be manipulated in a productive manner for the upward growth of an organization.
They are useful methods for generating revenue and value through easily available
assets in the company, namely human resources and other knowledge based tools.
CHAPTER 2
2 Business Process Management
1. Process Elements
Coordination is imperative at all levels in all organizations. The right blend of all the
elements is important to achieve a near perfect product and this is what a Six
Sigma project strives for. The aim of any Six Sigma project is also to deliver defect-
free products. The correct combination of the process elements and a disciplined
approach to turn them into productive output is required. It is important to
consider the generic process elements that may affect a product.
It would be better if the combination of the process elements is known. This would
help determine the reasons as to what factors affect the product. Also, the Black
Belts and Master Black Belts should be able to make out if alterations done to one
element affect the other elements. The process elements that are resistant to
change or which are most likely to get affected by unforeseen changes or events
are also demarcated.
The key process elements comprise the customer, the process and the employee.
They are the base on which the reputation of any organization rests. Of the three,
customers occupy the most prominent place, as the profits of any organization
depend on them. It is true that the customer is king. Customer satisfaction depends
on the performance, quality, service and reliability of the brand. If the customer is
not satisfied, he will look for other options and this will, in turn, be a loss for the
organization.
Process is another important element. The process should be such that it produces
products of the best quality, because the quality of the products is directly related
to customer satisfaction. The process should be clear enough for the employees to
carry out, and at the same time should be designed keeping customer demands in
mind. The process should, therefore, be looked at from the customer's perspective
and not from the organization's.
The main aim, in fact, of any Six Sigma project is to reduce defects by minimizing
wastage. Wastage is minimized if the process is implemented by the employees in
the best possible manner, leaving no room for errors and, therefore, no wastage. If
this is done efficiently, it will in turn improve quality and also increase the number
of customers. The most important effect is increased profits, which is the ultimate
goal of any organization.
2. Owners and Stakeholders
The stakeholders form the core team in a market. They are the ones who expect to
receive maximum value in exchange for what they contribute. It is very important,
therefore, for all the stakeholders to cooperate with each other to receive the
maximum benefit. It becomes imperative for every organization to value all its
stakeholders equally.
The employees need to be treated as if they are assets of the company. If they are
not motivated from time to time, they tend to lose interest and ultimately their
performance drops. In addition, if possible, they should be given rewards and
recognition repeatedly to keep their spirits high and to lower the attrition rate. This
is because the employees are the ones who are the greatest assets for the
company and it is very important to keep them happy. However, the most
prominent stakeholders are the customers. Everything depends on their
satisfaction, as they are the ones who would bring the maximum profit for the
organization.
To achieve the best results, it is very important that the stakeholders participate in
all the important activities of the organization. This is important as the goals and
objectives of the organization should be in their interest as they are the ones who
would reap maximum benefit out of it.
Project Management acts as a tool, which enables a Project Manager or the Black
Belt to streamline the various processes involved in the project. It is also important
to identify the stakeholders and determine their expectations. They are the people
who are involved directly or indirectly in the project and have a share in the profits
gained in the project.
These projects differ from processes and operations, which are ongoing functional
work and which are permanent and repetitive in nature. The management of these
two systems is often different and requires different sets of skills and viewpoints;
hence the need for project management arises.
Besides, project management is replete with challenges. The first is to ensure that a
project is delivered within the defined time frame and approved budget. The
second is that all the resources (people, materials, space, quality, risks involved)
have been put to optimal use to meet the predefined objectives. Project
Management enables a project manager to successfully complete a project. The
two most important aspects of project management are time and cost, and a
project that is not guided by these two factors will have disastrous consequences:
the time limit will be missed and costs will overrun.
If the project manager or Black Belt wants his project to deliver the profits his
organization expects from it, he will have to build his project around the concepts
of project management - the art and science of doing things on time with the
efforts of others. Thus it is project management that equips the Black Belt with the
requisite tools to succeed in his project and fulfill the expectations of the
stakeholders.
4. Project Measures
It is important to establish key performance metrics to measure performance in
the organization and to determine whether the process is being implemented
correctly. To measure performance, it is also important to make sure that the
metrics chosen are appropriate. One of the important attributes of a good metric is
that it should be customer driven and should aim at providing maximum customer
satisfaction. It should also indicate past performance and show recent trends. This
is important as it helps compare performance and determine the loopholes in the
current process.
Another very important attribute of a metric is that it should be clear and easily understood by all the employees. In fact, the metrics should be developed by the people who are going to make use of them. It is also essential that the metrics be in line with the policies, procedures and values of the company.
5. Criteria - If the process elements mentioned above are the internal factors, then the environment, external restrictions and government policies are the external ones. They affect performance directly or indirectly. They are important because, if they prove to be major hindrances to a project, a policy can be modified or dropped accordingly.
6. Measurement means - This is a parameter that determines how the internal and external process elements will be applied so that their performance can be measured.
8. Metrics of specification - The actual metrics to be used are defined and their functionality is determined. This definition also covers how the data is collected, used and implemented.
There should be metrics that relate the goals of the organization to its actual position. Accountability should be fixed, and there should be people to supervise and advise; accountability should also be fixed for finances. At the same time, it is very important to keep a check on the sales figures: if quality is improving but sales are stagnant, it is a matter of concern for the organization. There should also be a metric to make sure that supply matches demand.
The main focus of any organization, however, remains the customers. It is very important to frame policies according to their needs, as they are the principal source of revenue. Happy customers ensure better business and profitability for the organization, so there should be customer satisfaction surveys or contact centers where the customers can voice their grievances. Data can be collected in the form of questionnaires capturing the customers' views. This can prove to be an important metric and be very effective in improving customer satisfaction.
There can be metrics on which cycle time reduction, quality, defects per unit, waste and productivity are measured. Six Sigma is an important method for reducing defects and helping to ensure efficient working in the organization. These metrics can be put in place with the help of Six Sigma leaders in the organization and can prove to be very beneficial.
It is important to have well designed dashboards that aid in interpreting the metrics, because they make the limits for a particular metric visible. If a metric is within its limits, the process does not require special attention. However, if the metric crosses a limit, it does require special attention.
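As a rough illustration of how a dashboard might flag a metric, the short Python sketch below compares hypothetical weekly readings against assumed limits; the limit values and readings are invented for illustration only.

# Minimal sketch: flag whether a dashboard metric needs special attention.
# The limits and readings below are illustrative figures, not real data.

def metric_status(value, lower_limit, upper_limit):
    """Return 'in control' if the reading lies inside its limits,
    otherwise 'needs special attention'."""
    if lower_limit <= value <= upper_limit:
        return "in control"
    return "needs special attention"

weekly_defect_rate = [1.8, 2.1, 2.4, 3.6]      # hypothetical readings (%)
for reading in weekly_defect_rate:
    print(reading, "->", metric_status(reading, lower_limit=1.0, upper_limit=3.0))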
Project Documentation
The project report would be incomplete if it does not contain the customer’s views.
It should also mention the name of the members working on the project and the
tasks assigned to them. The way they carry out their duties and the finances
involved in the project should be included. In addition, the names of the people
who are keeping a check on the project, how often they monitor the project, and
the decision making authorities should be clearly mentioned.
The customers are the main focus of any organization. It is very important to
streamline the demands of the customers. Six Sigma is a useful tool in assessing
customer satisfaction. Customer satisfaction not only rests in the quality of the
product but also in such services as the ordering procedure, delivery, mode of
payment and after sales. It is very important to make changes keeping the
customers’ demand in mind.
The data collected from customer satisfaction surveys is useful in determining customer demands and the performance of the product. It also helps the people working on the Six Sigma project, because they are able to categorize customers according to their needs. Customers do not share the same viewpoint about the product and its cost as the manufacturer or the service provider, so it becomes very important to shape policies according to the organization's goals as well as the customers' demands.
A customer views a product from the point of view of its usefulness, price and quality. A product or service that has no relevance for the customer is of no use to the organization. Customer satisfaction is the biggest accomplishment for any organization, and without it the Six Sigma project cannot achieve its desired end.
Some organizations categorize customers on the basis of their needs and the roles they play for the organization. One such category is the customers who put their money into the organization's upcoming projects; they are very important because they bring great value by investing a major chunk of the money. Customers can also be categorized as small traders and businessmen. There are also customers who may be valuable to the organization in the long term, and it is important to value these customers as well.
Customers should be looked after so well that they never feel the need to go to a competitor. It is very important to pay heed to each customer's needs; customers should be made to feel important and that their opinions are valued, and their problems should be resolved at the first attempt in the best possible manner. If customers know that they are valuable to the organization, they will certainly help improve business and increase sales.
Therefore, it is very important to identify the customer and his needs because it is
the customer who is and will remain the core element for an organization in the
long run.
Quality call monitoring is an important way to hear the voice of the customer. For instance, in a contact centre, monitoring an agent's call means that his or her performance is being measured; at the same time it helps to determine the pros and cons of certain processes, and customer satisfaction can be easily evaluated.
Kano Model
The Kano model is another technique for measuring customer satisfaction. It is a useful tool for evaluating and prioritizing customer requirements. Not all requirements are equally important to all customers, and the attributes of a product will be ranked differently by different customers in their hierarchy of needs. This model is therefore used to rank requirements according to the importance of each segment's needs, differentiating between must-have attributes and differentiating attributes.
Applying the Kano model in the workplace to learn customer requirements will change the Black Belt's viewpoint towards customer satisfaction. The team will be able to know which values and services the customer covets most, and how to plan operations in the Six Sigma program.
Exciters/Delighters: These are hidden attributes which delight the customer and lead to high levels of satisfaction if they are present, but do not cause any dissatisfaction if the product lacks them. These 'delighters' are the surprise elements in the product, and companies can use them to set their product apart from their competitors. In the course of time, as expectations rise, today's delighters become tomorrow's basics. For example, a car with an in-built television may be today's delighter but tomorrow's basic.
In the figure below, the entire basic attribute curve lies in the lower half of the
chart, indicative of neutrality even with improved execution, and dissatisfaction
with their absence. The exciters curve lies entirely in the upper part of the graph.
The more the exciters, the higher is the level of satisfaction. The performance
attributes are shown as a 45° line passing through the center of the graph.
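To make the classification concrete, the following Python sketch applies a simplified version of the Kano evaluation logic: each attribute is classified from two hypothetical survey answers, namely how the customer feels when the feature is present and when it is absent. The rule table is a simplification of the full Kano questionnaire, and the answer values are assumptions for illustration.

# Illustrative sketch of Kano classification based on two survey answers per
# attribute: feeling when the feature is present (functional) and when it is
# absent (dysfunctional). The rule table is a simplification.

def kano_class(functional, dysfunctional):
    """functional / dysfunctional take values 'like', 'neutral' or 'dislike'."""
    if functional == "like" and dysfunctional == "neutral":
        return "Exciter/Delighter"
    if functional == "like" and dysfunctional == "dislike":
        return "Performance attribute"
    if functional == "neutral" and dysfunctional == "dislike":
        return "Basic/Must-have"
    return "Indifferent"

print(kano_class("like", "neutral"))     # Exciter/Delighter
print(kano_class("neutral", "dislike"))  # Basic/Must-have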
Sample Survey
Definition
The organization might interact with its customers to assess their sensitivity to the company's product, measure the quality of service, or determine whether the current level of quality is at par with the company's identified goals. The organization might also want to judge why employee behavior or morale is changing, what the customers' buying experience is, or what the responses to a new product are.
Determining the Margin of Error: The desired margin of error and confidence
level has to be determined. (The concept of margin of error is elaborated below).
Selecting the sample size: The process team will decide on the sample that needs
to be selected from the population. A sample is a subset of a universal population,
like the number of night-time customers out of the total customers in the round-the-clock restaurant of a five-star chain.
The sample may be randomly collected, which ensures that it is unbiased and that each element or respondent has an equal chance of selection. The sample may also be a representative sample, which is an exact reflection of the larger population. To truly represent a population, the sampler and the analyzer of the data must take into consideration variables such as a diverse and changing population.
Selecting a Sampling Method: The next step is to select the sampling method (For
more information on Sampling Methods, see Chapter 5- Six Sigma, Measure) Sampling
methods are simple random sampling, stratified sampling, clustered sampling, and
systematic sampling etc.
Make the sampling plan: The subsequent step is to document the sampling plan;
this involves when and how to construct the survey.
Survey Construction
Open-ended questions- Here the respondents frame their own answers without
any limitations
Ranking Questions- The response choices are ranked according to some criterion,
like importance.
Fill-in-the blank questions
Yes/No questions
Likert’s scale- This response type is used to determine the strength of a response.
Likert stated that a scale of 1 to 5 is better than a range of 1 to 10 because people
tend to ignore larger ranges and hardly use the entire range of choices, and
instead opt for very low ranges, like 1,2 or very high ranges like 9 or 10
Semantic differentials - This response type measures the respondent's choice between two bipolar values, usually two contrasting adjectives, e.g. 'very good' and 'very bad'. The values that may lie between the two possible options are not stated.
2. After selecting the sample, the next thing to be done is to design the sample. Sample design means determining how many persons or elements (respondents) are to be included in the survey to ensure the success of the survey.
3. The next step would be to develop the questionnaire. A questionnaire must truly
reflect the situation facing the company and be aimed to fulfill the goal of the
survey.
4. After that, the questionnaire is tested on a small sample, also known as a pilot
study. This is done to test the accuracy and clarity of questions.
7. The next obvious step is to collect the completed questionnaires, that is, the data.
The format of the questions should be in line with the focus of the survey. The questions should be relevant, concise and clear, and in a language the respondent understands.
To get unbiased answers, the question itself should be unbiased. The answer
choices should be clear and mutually exclusive so that it becomes easy to
understand and choose from.
Margin of Error:
When a survey is conducted, a sample is selected and the data gathered from the
survey is generalized for the larger population. Margin of error is a tool used to
determine how precise the collected data are, or how precisely the survey
measures the true feelings of the whole population.
For example, it is not logistically possible for an organization to measure the entire
population, say of customers, on the satisfaction level of using a particular product.
Rather, samples of customers are taken from the whole population of customers.
Margin of error is used to gauge how precisely this sample is judged.
A margin of error is calculated for one of three confidence levels - 99%, 95% or 90%. The most commonly used is the 95% confidence level. The larger the margin of error, the less confidence one can have that the survey result is close to the true value for the whole population.
For example, in a pre poll survey, the larger the margin of error, the lesser is the
level of confidence that the survey’s reported percentage will be close to the poll’s
true percentage.
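For a survey that estimates a proportion, the margin of error at a given confidence level is commonly approximated as z multiplied by the square root of p(1-p)/n. The Python sketch below applies this textbook formula to an assumed result of 60% satisfied customers in a sample of 400; the figures are illustrative.

# Rough sketch: margin of error for a survey proportion at the three
# confidence levels mentioned above. Figures are illustrative.
import math

Z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}   # standard normal critical values

def margin_of_error(p, n, confidence=0.95):
    """p = observed proportion, n = sample size."""
    return Z[confidence] * math.sqrt(p * (1 - p) / n)

for level in (0.90, 0.95, 0.99):
    moe = margin_of_error(p=0.60, n=400, confidence=level)
    print(f"{int(level * 100)}% confidence: +/- {moe:.1%}")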
Another tool for the Six Sigma team to know customer requirements or employee
information is through focus groups. A focus group is a selected group of
customers who are unfamiliar with each other, collected together to answer a set
group of questions. They are hand-picked because they have a number of common
characteristics that are relevant to the subject of study of the focus group. The
discussion is conducted several times to ascertain trends in product and service,
and in knowing customer requirements and perceptions.
The facilitator of the focus group creates an environment which permits different
perceptions and opinions, without threatening or pressurizing the participants. The
aim of the focus group is to reach a consensus about a particular area of interest by
analyzing these discussions.
Six Sigma focus groups are helpful during strategic planning process, trying new
ideas and customers, generating information during surveys, validating a CTQ tree
(which shall be described in the next step) etc.
Pros
1. Focus groups generate ideas, because a good facilitator has a knack for asking follow-up questions based on the participants' answers.
2. Focus groups also stimulate ideas in a greater number than when individual
interviews are conducted.
Cons
1. An inexperienced and untrained facilitator may not be able to analyze the results.
2. Bringing groups together under one physical location might be more costly than
what the company envisioned.
Critical-To-Quality Tree
Six Sigma is about looking for causes. The aim behind a Six Sigma process is to find
the reason behind a particular phenomenon. The team tries to find out what’s
“critical” to the success of the process chosen for improvement.
What’s Critical?
Depending on what is being analyzed, the word ‘critical’ could have diverse
connotations ranging from the satisfaction of the customer, to the quality and
dependability of the product. It could also be the cycle time of manufacture of the
product or cost of the final product or service.
The following table lists a number of “CTX”s, or the critical variables that influence a
product.
Most CT trees begin with the output of customer satisfaction at the top and others
follow. The steps to create a CTQ tree are:
1. Step one is to identify the customer. First the team has to decide whether the identified customers need to be segmented; segmentation is needed when different customers have different requirements. In the following example, the pizza delivery process is used. The customer ordering a pizza may be a high school graduate or an office executive. Here there is no need for segmentation because the requirements for getting a pizza delivered are almost the same across all ages.
2. Step two is to identify the customer’s need. The customer’s need is in level 1 of
the tree as shown in the picture. The high school graduate is in need of a pizza and
so calls up a pizza delivery outlet.
3. The next step is to identify the first set of requirements for this need. Two to
three measures need to be identified to run the process. In the example, the data
collected by the process leader indicated that the speed and the accuracy of
delivery, the quality of the pizza, the variety in the menu and add-ons in the menu
card were crucial requirements while ordering a pizza. Thus the first three branches
of the CTQ tree will be formed with these factors. These are in level 2 of the CTQ
tree.
4. The step that follows is to try to take each level 2 element in the CTQ tree to
another degree of specialty. In the example, the process leader found out that
while delivering the pizza on time, it was necessary that the correct variety be
delivered. He also found out that it was important for the customer that the pizza
be hot, taste good and look good. Similarly, data regarding the range of items in the
menu pointed out that the types or numbers of items, the add-on condiments were
important. All these factors are to be put in level 3 of the CTQ tree as shown in the
figure.
5. The final step is to validate the requirements with the customer. The CTQ tree is
created as a result of the project team’s brainstorming. The needs and
requirements need to be validated with the customer because in many cases, what
the team considers important may not be equally important to the customers. Customer validation can be done through focus groups, sample surveys, customer interviews, etc.
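One simple way to capture such a tree for later analysis is as a nested data structure. The Python sketch below encodes the pizza-delivery example from the steps above as a dictionary; the exact wording of the branches is an assumption based on the example.

# Hypothetical sketch of the pizza-delivery CTQ tree as a nested dictionary:
# level 1 is the need, level 2 the drivers, level 3 the measurable CTQs.
ctq_tree = {
    "Pizza delivered to my door (need)": {
        "Speed and accuracy of delivery": ["Delivered within promised time",
                                           "Correct variety delivered"],
        "Quality of the pizza":           ["Pizza is hot",
                                           "Tastes good", "Looks good"],
        "Range of items in the menu":     ["Number of menu items",
                                           "Add-on condiments available"],
    }
}

for need, drivers in ctq_tree.items():
    print(need)
    for driver, ctqs in drivers.items():
        print("  " + driver)
        for ctq in ctqs:
            print("    - " + ctq)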
Customer Complaints
A company might also communicate with its customers and employees through
case studies, field experiments and by the already available data. New technologies
like data mining and data warehousing are also used.
Analysis can be done in two ways: data analysis and process analysis.
Data analysis is the analysis of the data collected, used mainly when the feedback from the customer implies that the process needs to be more effective. The goal of data analysis is to take the data that was collected previously and scan it for clues that explain customer dissatisfaction. A careful look at the data makes the problems more visible to the team working on the process. The best way to analyze data is with the help of histograms and other charts and graphs.
Process analysis is the analysis of the process itself, through the process maps the team created in the Define phase, used mainly when the feedback from the customer implies that the process needs to be more efficient. For example, reducing the cycle time or completing a task on time requires process analysis.
The process team mostly uses a combination of the two to arrive at the root cause
behind customer dissatisfaction.
There are various tools to understand customer feedback. They are discussed
below:
1. Brainstorming
Brainstorming is a tool that is used for generating new ideas for solving a problem.
It is a creative method that helps in solving a problem by listing the number of
options that can be applied to solve the problem and then choosing the optimal
one. The brainstorming tool is used at all the levels of problem solving. This
technique is a strong tool to know what enhancements can be done in a given
solution or approach.
2. Charts and Graphs
Definition:
Charts and graphs provide the best way to analyze the measures of a process. They present data or information in the form of visual displays. These tools are used to see whether a certain characteristic is changing over time, getting better or worse. There are many types of charts and graphs; they are discussed below:
a. Pareto Chart
Pareto chart analysis is used to understand the most significant reasons for customer dissatisfaction. This helps enterprises identify which problems to tackle first to obtain the quickest improvement. A Pareto chart is a specialized vertical bar graph that displays the collected data in such a way that the points most important to the process under improvement stand out. It exhibits the comparative significance of all the data. It is used to focus on the largest improvement opportunity by emphasizing the "crucial few" problems as opposed to the many others.
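A Pareto chart can be produced with ordinary charting libraries. The sketch below uses Python and matplotlib to plot made-up complaint counts as descending bars with a cumulative-percentage line; the categories and counts are invented for illustration.

# Illustrative Pareto chart of complaint categories (made-up counts):
# bars sorted in descending order with a cumulative-percentage line.
import matplotlib.pyplot as plt

counts = {"Late delivery": 48, "Wrong item": 22, "Billing error": 14,
          "Rude service": 9, "Other": 7}
items = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)
labels = [k for k, _ in items]
values = [v for _, v in items]
total = sum(values)
cumulative = [sum(values[:i + 1]) / total * 100 for i in range(len(values))]

fig, ax = plt.subplots()
ax.bar(labels, values)                       # the descending bars
ax2 = ax.twinx()
ax2.plot(labels, cumulative, marker="o", color="red")   # cumulative % line
ax2.set_ylabel("Cumulative %")
ax.set_ylabel("Complaint count")
plt.tight_layout()
plt.show()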
b. Histogram
c. Scatter Diagram
The Scatter Diagram is a tool used for establishing the correlation between two sets of variables. It is used to depict the changes that occur in one set of variables when the values of the other set are changed. This diagram does not determine the exact relationship between the two variables; it only determines whether the two sets of variables are related to each other or not and, if they are related, how strong the relationship is.
d. Run Charts
A run chart, also known as a line graph, is a simple chart used to display process performance over time; it forms the basis of a control chart. Run charts are basically used for interpreting any trends that occur in the data.
Run charts are basically used for keeping a check on the process’ performance.
Run charts are useful in discovering the patterns that occur over time.
Run charts are easy to interpret; anyone can tell from the chart's behavior whether the process's performance is normal or abnormal.
e. Control Charts
Control charts are used to understand how performance changes over time. A control chart is a graphical tool for monitoring the variation that occurs within a process and for separating variation due to common causes from variation due to special causes. Control charts help to show trends in the average or in the variation, which further helps in the debugging process. A control chart consists of a run chart, a centerline, and statistically determined upper and lower control limits.
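As a minimal illustration, the Python sketch below computes a centerline and control limits at plus or minus three standard deviations for a set of made-up readings. Real control charts normally estimate the process sigma from subgroups or moving ranges, so this is a simplification.

# Minimal sketch of an individuals control chart: centerline at the mean and
# control limits at +/- 3 standard deviations. Sample data are illustrative.
import statistics

readings = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.1, 10.2]
center = statistics.mean(readings)
sigma = statistics.stdev(readings)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

print(f"Centerline={center:.2f}  UCL={ucl:.2f}  LCL={lcl:.2f}")
out_of_control = [x for x in readings if not lcl <= x <= ucl]
print("Points needing attention:", out_of_control or "none")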
f. Line Graphs
Line graphs are used to show the behavior of a process with changing time. The
behavior of the process is the specific characteristics of a process. Line graphs are
used to depict the changes in the process; whether the process is getting better or
worse, or remains the same. These graphs are perhaps the first step in defining a
problem to be solved. These graphs can also show cycle time, defects or cost over time.
g. Pie-Charts
A Pie chart is a simple circular chart that is cut into slices. Each slice represents the
frequencies of the collected data. The bigger the slice, the higher is the number or
percentage. These charts are best used to represent the discrete data.
h. Multi-Vari Charts
1. Tests of Significance
Six Sigma takes the help of statistical tools to test planned solutions and see whether they are appropriate for fixing the problem. One such statistical tool is the test of significance. When a result is statistically significant, it means it is unlikely to have occurred by chance. Significance tells us whether differences or relationships exist, but it does not tell us about the strength of the relationship, whether it is strong or weak, large or small; moreover, whether a result reaches significance depends heavily on the sample size.
2. T tests
T-tests are used to compare two averages. They may be used to compare a variety of averages, such as the effects of weight reduction techniques used by two groups of individuals, complications occurring in patients after two different types of operations, or accidents occurring at two different junctions.
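A hedged example of such a comparison, using the scipy library in Python: two made-up sets of weight-loss results are compared with an independent two-sample t-test.

# Comparing the averages of two groups with an independent two-sample t-test.
# The weight-loss figures (kg lost) are invented for illustration.
from scipy import stats

group_a = [3.1, 2.8, 4.0, 3.5, 2.9, 3.7]   # technique A
group_b = [2.0, 2.4, 1.8, 2.6, 2.1, 2.3]   # technique B

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference between the two averages is statistically significant.")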
3. Correlation Analysis
4. Regression Analysis
5. Chi-square Test
The chi-square test is a non-parametric test of statistical significance which is used for bivariate tabular analysis. It is in all probability the most commonly used non-parametric test. The chi-square test is quite flexible and can be used in a number of circumstances. Being a popular method of testing discrete data, it can also accommodate weaker and less precise data.
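A brief illustration using scipy in Python: a chi-square test of independence on a small, invented two-by-two table of complaint counts from two call centres.

# Chi-square test of independence on a small bivariate table (made-up counts).
from scipy.stats import chi2_contingency

table = [[30, 70],    # centre A: complaints, no complaints
         [45, 55]]    # centre B: complaints, no complaints

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")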
There are three models of components in ANOVA. They are fixed, random and mixed.
The analysis phase is very significant in closing the gaps in the process. The above mentioned tools are very helpful in analyzing customer data. The analysis phase helps in bringing about improvements in the process and achieving customer satisfaction.
Once the customer data is collected and analyzed, a lot of things come to light. The
customers are easily able to establish their demands and voice their concerns. This
information can be converted into critical-to-satisfaction requirements for the
business. The critical-to-quality (CTQ) tree can be used to convert customer needs
and priorities into measurable requirements for products and services. As
mentioned earlier, the customer’s word is important, and the voice of the customer
is a significant factor and must be borne in mind for maximum customer
satisfaction.
The Six Sigma model falls short when it comes to fulfilling every customer expectation, because an organization cannot pursue its own goals and satisfy all customer demands at the same time. Therefore, an organization tries to strike the right balance by innovating techniques while not putting customer interests at stake.
Quality Function Deployment (QFD) is a process guided by the Voice of the Customer (VOC). The VOC lays down the demands of the customers and helps to establish the quality metrics. These metrics are a graphical representation of the outputs of the planning process. The QFD metrics help to view the real picture: the organization is able to compare its goals against the stiff competition in the market. These metrics are created by each department separately, which makes them very accurate.
QFD is a very effective system because it keeps in mind the customer preferences
and then helps in designing and molding the product according to their needs. All
the personnel in the organization contribute equally to the designing and quality
control activity. QFD is, in fact, a written version of the planning and decision
making process. The QFD approach can be studied in four different phases:
Selection Phase
In this phase, the product or the area which needs improvement is chosen. The team from the relevant department is then selected, and the direction of the QFD study is defined.
Aspects Phase
The interdepartmental team looks at the product from different aspects such as the
level of demand, usability, cost and reliability.
Discovery Phase
In this phase, the team searches for areas that need to be amended as far as
improved technology, cost reduction and better usability is concerned.
Enforcement Phase
The team working on the product explains the way the manufacturing of the
improved product will be carried out.
Another benefit to be gained from applying a QFD process is that all the team
members are equally involved in the development of a product. This kind of
brainstorming helps to increase the utility of the product and make it user-friendly.
In addition, QFD accentuates the strength of Six Sigma by clearly outlining the significant customer satisfaction factors and bringing them in sync with the competition in the market.
The whats and the hows are two prime ingredients of a QFD diagram. This makes it
very helpful for a Six Sigma project. The Voice of the Customer can be heard clearly
and it can lead to effective decision-making. A QFD diagram is given below. (Thomas
Pyzdek, 1976)
7. Big Ys, Little Ys
Six Sigma sets its sight on delivering products and services with a zero-defect rate.
However, the main concern of a commercial organization always remains
maximum profit. So, even when the organization decides to start a new project, the
Six Sigma leaders need to define the project in numerical terms. These metrics are
very important as they help in determining the most suitable project for the
organization. These are the projects called Big Ys which the Six Sigma leaders will
execute. The Little Ys are the smaller units of the chosen project which are
implemented by the Green or the Black Belts under the aegis of the Six Sigma
leaders.
The Big Y is to be associated with the critical requirements of the customer. The Big
Y is used to create Little Ys which ultimately drive the Big Ys. For instance, in any
service industry, the overall customer satisfaction is the Big Y and some of the
elements of achieving customer satisfaction are quality check, delivery on time and
after sales service (Little Ys). Big Ys exist at all levels of an organization - the business, the operations or the process level. The Little Ys at the current level become the Big Ys at the subsequent level. The Big Ys are the
measures of the result, while the Little Ys evaluate the cause-and-effect
relationships between the different units of a project and the measures of the
process. It is very important to keep a check on the Little Ys to achieve a better Big
Y.
C. Business Results
Six Sigma is a specialized tool and makes use of a lot of metrics to evaluate the
performance of products and services. Also, there are a number of parameters on
which the quality of a product and the performance of the process is measured.
DPMO
DPMO ranks among the most important metrics of Six Sigma in evaluating
performance. In fact, the primary aim of a Six Sigma project is to achieve this
standard mark. DPMO stands for Defects per Million Opportunities and the goal of
a Six Sigma project is to attain less than 3.4 DPMO. A process that is able to achieve
this landmark is said to have accomplished Six Sigma. This approach calculates a
number of opportunities with defects rather than calculating a number of units
with defects.
The only disadvantage of DPMO is that 'opportunity' is a subjective term and is very difficult to define in simple terms. Take the example of a call centre agent who is trying to take down the email address of a customer for further correspondence. He may write 't' for 'p', miss a letter, misspell the address, or by mistake fail to add the @ symbol before the domain name. These opportunities should therefore be defined in simple terms so that customers and laymen are able to understand them.
PPM
This is synonymous with DPMO. It refers to the defects that happen in parts per
million opportunities.
DPU
It stands for Defects per Unit. There are some units that are or get defected while
manufacturing. DPU represents the number of defected units per total units. Let us
assume that 100 parts are manufactured in a day. So, if 5 parts are defected, it
means that the DPU is 0.05.
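The sketch below works through the DPU figure from the example above and, assuming 10 defect opportunities per unit (an illustrative assumption), the corresponding DPMO.

# Worked sketch of the DPU and DPMO calculations using the figures from the
# text (100 parts, 5 defective) plus an assumed 10 opportunities per unit.
defects = 5
units = 100
opportunities_per_unit = 10          # assumption for illustration

dpu = defects / units
dpmo = defects / (units * opportunities_per_unit) * 1_000_000

print(f"DPU  = {dpu:.2f}")           # 0.05, as in the example above
print(f"DPMO = {dpmo:,.0f}")         # 5,000 defects per million opportunities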
A unit here refers to the end product delivered to the customer. The Poisson
distribution, a model that represents the defects that are present in the product or
any other abstract or concrete deviation that occurs in the process is discussed in
the later chapters.
RTY
A product passes through a lot of stages during manufacturing. It may happen that
some of the products get defective and need to be reworked upon. The ones that
get defective and ultimately get rejected are accounted for. Many times however,
the ones that are reworked upon or manufactured again are not added. When the
rework is not added up, the yield is referred to as first time yield or process yield.
However, when the rework is added up, it is referred to as first pass yield or rolled
throughput yield. The latter is usually lower than the former.
where the y i values are the outputs at each step before rework.
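A small sketch of the rolled throughput yield calculation, with three illustrative step yields:

# Rolled throughput yield as the product of the first pass yields of each
# process step. The three step yields below are illustrative.
step_yields = [0.98, 0.95, 0.97]     # y1, y2, y3: first pass yield of each step

rty = 1.0
for y in step_yields:
    rty *= y

print(f"RTY = {rty:.3f}")            # about 0.903, lower than any single step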
COPQ
The costs that are incurred as a result of producing defective material are known as the cost of poor quality (COPQ). This includes the cost generated in trying to close the gap between the preferred and the actual product or service delivered, the cost of opportunities lost because resources were wasted, and the cost of overcoming the shortcomings. The cost of poor quality also includes the cost of labor and raw materials. The only costs that COPQ does not include are detection and prevention costs; all the other costs included in it are the ones that accumulate up to the point the product or the service is rejected.
2. Benchmarking
Definitions
"Benchmarking is a tool to help you improve your business processes. Any business
process can be benchmarked."
"Benchmarking is the process of identifying, understanding, and adapting
outstanding practices from organizations anywhere in the world to help your
organization improve its performance."
"Benchmarking is a highly respected practice in the business world. It is an activity
that looks outward to find best practice and high performance and then measures
actual business operations against those goals."
Benchmarking, in the past, was associated with studying one's own products and services against a competitor's. These days, however, it is synonymous with terms like "step-change", "breakthrough" and "rediscovery", and it is more about how things are done - about the best practice followed. It is not essential to adopt the norms followed by the best organization; rather, the stress is on following the best practice.
The best way to inculcate the benchmarking tools and practices in the organization
is to develop it into a process. Camp has listed some steps that need to be followed
to implement it. They are:
2. Analysis- Includes considering the loop-holes and visualizing the future level of
performance.
4. Action- Includes evolving a plan of action, executing it and then, evaluating the
performance.
5. Maturity- Includes becoming a pro in the new process. The new method is
merged in the ongoing process.
Pros
1. Improves performance.
Cons
1. It is not enough simply to imitate; one must imitate in the right manner, in the manner that best suits one's own organization.
Financial Benefits
Six Sigma is a very useful tool for an organization that is ready to take big risks and
increase the profit margins considerably. This kind of progress requires
considerable planning, patience and a positive outlook. Six Sigma helps an
organization to reap financial benefits. The goal of any Six Sigma project is to
reduce the defects to 3.4 per million opportunities. It means that it will lead to
unprecedented gains. If the profit margin is 40 per cent and an organization wants to increase it to 70 per cent, Six Sigma is what it should deploy. Effective Six Sigma helps an organization pocket big profits.
A Six Sigma initiative helps to increase profits and at the same time reduce costs.
The resources are optimally utilized and there is minimal wastage. A formal chart
must be made which lists out the financial benefits gained in terms of profits
gained, cost savings, cash flow and return on investment. These findings must be
chronicled for further use and the financial progress must be tracked.
NPV
NPV stands for Net Present Value of a particular investment. This is a metric that
helps visualize the future salaries and the profits that would be locked in with the
help of future investments. These values are then decreased by a discount rate to
have a ‘time value of money’. NPV is basically used to compare the financial benefits
that are reaped in through long term projects with the cash flow that is spread
through several years.
The formula to calculate NPV is:
NPV = sum of CFt / (1 + r)^t for t = 0 to n, where CFt is the net cash flow in period t and r is the discount rate.
ROI
ROI stands for Return on Investment. This is a key metric for assessing the financial position of an organization. It is calculated as:
ROI = (gain from investment - cost of investment) / cost of investment x 100%
ROI is useful but not the final word in calculating financial benefits. The important thing is to either achieve cost savings or earn big profits; the organization that is able to achieve both definitely has an edge over others.
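The short Python sketch below applies the NPV and ROI formulas above to a hypothetical project with a one-time cost and three years of savings; all figures and the 10% discount rate are assumptions for illustration.

# Illustrative NPV and ROI for a hypothetical Six Sigma project:
# an up-front cost followed by three years of savings, discounted at 10%.
cash_flows = [-100_000, 45_000, 50_000, 55_000]   # year 0 outlay, then savings
rate = 0.10

npv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

gain = sum(cash_flows[1:])
cost = -cash_flows[0]
roi = (gain - cost) / cost * 100

print(f"NPV = {npv:,.0f}")
print(f"ROI = {roi:.0f}%")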
CHAPTER 3
3 Project Management and Selecting Six Sigma Projects
The core of Six Sigma is to solve problems that are affecting business
performances. An organization employing the Six Sigma systems of managing their
affairs expects to derive benefits from them. These benefits could be lower costs,
enhanced productivity, rise in efficiency, or higher customer satisfaction by
reducing defects in their processes.
These are gained from a series of Six Sigma projects, big or small. They could be
projects within a department or across departments. At the business level, these
projects are the agents of action that carry out the business strategy and hand out
the results. They drive change in an organization.
Black Belts are the project leaders or managers and they act as coaches for the DMAIC project. Black Belts are full-time people dedicated to tackling critical change opportunities, and they are known for applying the right skills and tools with effortless grace to achieve the project's goals. Six Sigma teams fail to be effective without a hard-working Black Belt. Black Belts "baby-sit", inspire, and manage colleagues; they oversee projects that are complex in nature, that impact the business greatly, that provide the greatest returns to the business, and that satisfy customer requirements. They must have the highest levels of statistical expertise, be masters of advanced tools and of project management, and possess good administrative and leadership sense.
Project management spells out the techniques required to take a project towards
its desired end in the prescribed time limit and approved budget. Project
Management thus acts as a tool, which enables a Project Manager or the Black Belt
to streamline the various processes involved in the project. A project is carried out by a team of experts and is headed by the project leader, who is usually a Black Belt. The Black Belt, as project leader, makes use of complex project management and project tracking tools, which in turn help him accomplish his objective of producing a quality deliverable.
"Stakeholder" is a term often repeated in project management. Stakeholders are the people who are involved directly or indirectly in the project and have a share in the profits gained from it. Therefore, it is important to identify the stakeholders and determine their expectations. Stakeholders usually include the shareholders, the project manager, the customers and the project sponsor, who provides the funds.
These projects differ from processes and operations, which are ongoing functional work and are permanent and repetitive in nature. The management of these two systems often differs and requires different sets of skills and viewpoints. Hence the need for project management arises.
Besides, project management is replete with challenges. The first is to ensure that a project is delivered within the defined time frame and approved budget. The second is that all the resources (people, materials, space, quality, risks involved) are put to optimal use to meet the predefined objectives. Project management enables a project manager to complete a project successfully. The two most important aspects of project management are time and cost; a project not guided by these two factors faces serious consequences, as the time limit will be missed and costs will overrun.
If the project manager wants his project to deliver the profits his organization expects from it, he will have to build his project around the concepts of project management - the art and science of getting things done on time through the efforts of others.
Thus it is project management which equips the Black Belt with the requisite tools
to succeed in his project and fulfill the expectations of the stakeholders.
The project leader/ Black Belt captains the team and guides the team, plans the
strategies and risks to take, tracks the progress of the team members, monitors the
entire project, and inputs control measures to ensure the project does not waver
from its defined goals. He is primarily responsible for getting the team started; he is
the one responsible for translating all the plans into action. He is the one who will
get accolades for the success and he will be held accountable if the project fails to
deliver.
2. Project Characteristics
Every project has some characteristic features. They are listed below:
Example of a Project
A car company is going to initiate a project “Z” on December 1, 2006. The project is
required to be successfully completed by November 1, 2007. The project entails
developing a new technology with the help of which the car manufacturers will be
able to propel the cars by a mixture of two gases - hydrogen and oxygen. The
company would hand over the newly created technology to its manufacturing
division. The manufacturing division will use it to develop a prototype of a new car.
The project team would comprise ten engineers who will report to the Project Manager. The Project Manager would report directly to the company's CEO. Everything has been judiciously planned. It is a path-breaking project and therefore the company puts a high value on its success. The company's shareholders are keenly awaiting this novel technology, which will boost future car sales.
Under the DMAIC framework, the top management with some support from the
Black Belt or project manager needs to do the initial project planning. This involves
identifying major project tasks, estimating the time involved, and assigning the
responsibilities for each task within each of the project’s five phases.
Project Plan: After the Six Sigma project definition is complete, the next step is to
make the project plan. The project plan shows the ‘whys’ and ‘hows’ of a project. A
plan is a model for potential project work, and it allows potential flaws to be
identified so that they can be rectified in time. The plan demarcates every team
member’s role. At the same time it also emphasizes how the different parts are
linked to each other, and when the goals are accomplished and when to draw the
line.
Project Charter: One of the first tasks of the Six Sigma team is to develop
the Project Charter. The Project Charter is a one-page written document of a
project. It summarizes the key elements of a project- the reasons for doing the
project, the goal of the project, the project plan and scope, a review of the roles and
responsibilities, and the project deliverables.
The project charter is typically drafted by the Project Champion, and refined and
added to by the Black Belt and the team. It is generally seen that the charter is
revised and adapted a few times over as the problems become clearer and data
becomes available. It usually changes over the course of the DMAIC project.
Towards the completion of the Define phase, the charter should define the scope
the project wants to accomplish.
The project charter makes a great impact on project success as this document
encapsulates what the management wants done, and what the Black Belts and
Champions have collectively agreed to achieve. It is a kind of agreement among all
groups involved in the project. It becomes necessary to ascertain that the
management, project leader, team members and customers possess complete
understanding about the project elements to ensure project success. A clearly
written project charter helps to pass on the vital information to the team members,
and ensure they remain in the loop. A Black Belt Project can be adversely affected if
the project charter is not in place.
1. Charter/Plan Elements
Business Case: What impact will the project have on the business, or on external customers? What are the current costs of poor quality? What is the importance of the process? While creating the project charter for any project, first the business
case has to be identified. There has to be a link between why the project exists and
its impact on some strategic business objective.
Let us take the example of a customer care center of a telecom company. This
company plans a project to improve the call quality of its customer care executives
(CCEs) in all its processes. This move would increase the overall customer
satisfaction which will give the brand a visible edge amongst its competitors. This
would result in higher sales of that particular telecom brand. Thus the project-
improving quality- is impacting a strategic business objective, i.e., higher sales and
higher profits.
For example, the problem statement in a call center’s process could look like this-
The Customer Satisfaction Index has fallen by 10% this quarter as compared to the
last quarter. This has had a negative impact on the sales figure which has fallen by
14% vis a vis the same time period last year.
Project Title: This is how the project will be named in reports and best-practice
databases. For example, if the project or process is to improve call quality or
effectiveness, a possible title could be Enhancing Customer Delight or Call Center
Cycle Time.
Process Name: This will be the functions and outputs of the organization that the
project will be focusing on.
Project Scope: The scope of the project defines the boundaries within which the project team will be working. The scope of the project should be clear and achievable within four to six months; projects often fail because the scope is too large for the time agreed upon. To achieve this, each project team should build consensus on what the scope of their project will be. The start and end points of the process are given in the scope.
For example, the scope in the call center would be to achieve the desired customer
satisfaction index levels through increased monitoring of calls for the next three
months.
Project Leader (Black Belt): It is necessary to identify the person leading the
project or the process improvement project. It is done to make the management
aware of who’s the driving force behind the project. This person is responsible for
team coordination, assuring task completion etc. He also acts as a formal point of
contact with the project sponsor (the person who is financing the project).
Project Start Date: For documentation purposes, the project start date has to be finalized. This is the date on which the charter is defined.
Projected Project Conclusion Date: The anticipated end date of the project has to be given, because a project cannot continue indefinitely. This provides the team with adequate time to plan and finish the project given the business setting, project complexity, workload, holidays, and so on.
Goals and Expected Benefits of the Project: Once the scope has been created, the
project team has to formulate a set of attainable goals and objectives that are
achievable within a finite time frame. It should also anticipate the expected benefits
of the project. The idea is to set demanding but practical targets.
For example, the goal of the process in the above example could be to increase
customer satisfaction index. The total number of surveyed customers who rate the
customer service experience of a particular process should increase by the desired
percentage figure (as defined in the targets). The expected benefits could be-
Increased customer delight will lead to a higher sales figure.
Team Members, Their Roles, and Deliverables: The project team should include
meticulously chosen team members, and their roles and responsibilities should be
carefully defined. It should include people who have the expertise, who are the
most qualified to carry the chosen project to its completion; and those who are
strategically important to the process. Every project should have a team leader (as
mentioned above), either a Green Belt or a Black Belt. The project mentor and the
sponsor should also be mentioned.
Project Milestones: It is important that the project goals set by the team be
attained within the defined time frame (project start and estimated stop points).
The important milestones (phase of Six Sigma methodology-DMAIC) between these
dates have to be mentioned. A good project leader should ensure that the team
can achieve this by providing the team with the required project management
resources.
2. Planning Tools
There are a number of tools available to aid the project manager or Black Belt to
plan his project, such as Gantt chart, PERT Chart, Planning trees, QFD, Budget etc.
They are explained in detail below.
a. Gantt Chart
A Gantt chart is a bar chart that displays the tasks of a project, when each task is expected to start, and when each is scheduled to end. The horizontal bars are shaded with the project's progress to show which tasks have been completed or how much of a task has been completed. The left end of a horizontal bar represents the expected beginning of a task, and the right end represents the expected completion date of that task. The people assigned to each task can also be shown.
The Gantt chart is most commonly used in project management. This simple tool
helps in keeping track of the time frame of a project. It can be used when
communicating plans or status of a project. It helps in planning the proper
allocation of the resources needed for the completion of the project. Moreover it
can be used to plan the timetable and monitor tasks within a project.
A Gantt chart is simple and easy to construct. It is done in the following way:
1. Identify the tasks that need to be done to accomplish the project, and identify the
time frames. Also list the sequence of the tasks.
3. Write down each task and milestone of the project vertically down the left side of
the page. Draw a bar under the appropriate times on the timeline for activities that
occur over a period of time. The left end of the horizontal bar indicates the
beginning of the activity, and the right end of the bar indicates the expected
completion date of the task. Draw the outlines of the bars; you can fill them as the
activities take place to show their status.
4. Place a vertical marker (e.g. an arrow) to show your present position on the
timeline. This chart is similar to an arrow diagram; but the Gantt chart is easier to
construct, can be understood at a glance, and makes it easier to visualize progress.
In the Gantt chart, relationships (people who are assigned to a particular task) and
dependencies of tasks (on other tasks, resources or skill needed) are not shown.
These are shown more clearly in the arrow diagram. But these details can be shown
by drawing additional columns.
In the Chart, there are ten weeks denoted in the timeline. The Chart shows the
status of the project at Wednesday of the sixth week. The team has completed six
tasks, till forecasting and manpower planning. Technical configuration and
infrastructure setup is a long drawn out process, and slightly more than half of that
is estimated to be complete. Therefore, about two-thirds of that bar is shaded. The
task of recruitment also has finished by more than half and that part of the bar is
shaded. The team has not yet started training and quality monitoring setup; they
are behind schedule for these two tasks. Maybe they should reallocate their
resources in order to cover those tasks simultaneously.
A single view of the Gantt chart helps to monitor the progress of the project. In this example, it can be inferred from the chart that the project is running about ten days behind its stipulated schedule.
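For teams that prefer to script their charts, the sketch below draws a very simple Gantt-style chart with Python and matplotlib; the task names, start weeks and durations are invented and are not taken from the figure described above.

# Rough sketch of a Gantt chart drawn with matplotlib's broken_barh; tasks,
# start weeks and durations are illustrative only.
import matplotlib.pyplot as plt

tasks = [("Forecasting", 0, 2), ("Recruitment", 1, 4),
         ("Technical setup", 2, 5), ("Training", 6, 3)]

fig, ax = plt.subplots()
for i, (name, start, duration) in enumerate(tasks):
    ax.broken_barh([(start, duration)], (i * 10, 8))   # one bar per task
ax.set_yticks([i * 10 + 4 for i in range(len(tasks))])
ax.set_yticklabels([name for name, _, _ in tasks])
ax.set_xlabel("Week")
ax.invert_yaxis()                     # first task at the top
plt.tight_layout()
plt.show()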
b. PERT Chart
Large-scale projects are complex and need a lot of planning, scheduling, and
coordinating of numerous interrelated activities. Some activities can be performed
sequentially, while others can be performed parallel to other activities. To support
these tasks, methods based on the use of networks were developed in the 1950s,
namely PERT and CPM.
In the CPM (critical path method), the time estimates were understood to be
deterministic (having an outcome that can be predicted). On the other hand, they
were assumed probabilistic in PERT. Today, both the techniques have been
combined and the differences are mainly historical. A version of the PERT chart is
called an Activity Network Diagram.
1. Discuss all activities or tasks that are needed to complete the project.
2. Determine the sequence of the tasks. Before an activity or task begins, all
activities that should precede it should be completed. Ascertain which task is to be
carried out first. Identify which task can be carried out simultaneously with this
task. This can be placed to the left or right of the first task.
3. Identify the next task, and place it below the first task. See if there is any task to
be worked out concurrent to this. Concurrent tasks should be lined parallel to the
left or right.
4. Continue this process and construct a diagram. The tasks are represented with
arrows.
5. The beginning or end of the task is called an event. Draw circles for events,
between each two tasks. Therefore, events are nodes that separate tasks.
7. Identify task times or the time needed to complete each activity. Write the time
on each task’s arrow.
8. Determine the critical path. The longest time from the beginning to the end of
the project is called the critical path. This should be marked with a heavy line or
color. The project’s critical path includes those tasks that must be started or
completed on time to avoid delays in the completion of the project.
1. Work out the Earliest Start (ES) and Earliest Finish (EF) for each task. The earliest start is the earliest time an event can occur if the preceding activities are started as early as possible. Earliest Finish for each task = ES + time taken to complete the task.
2. Work out the latest time that each task can begin and end. These are known as Latest Start (LS) and Latest Finish (LF). The latest time is the latest an event can happen without delaying the project completion beyond its earliest time. To calculate this, work backwards, from the latest finish date to the latest start date:
Latest Finish (LF) = the smallest LS of all tasks immediately following this one
Latest Start (LS) = LF - time taken to complete this task
Draw a separate box for each task. Make a time box divided into four quadrants as
shown in figure below.
Slack time for an event is the difference between the latest times and earliest
times for a given event. The slack for an event indicates how much delay in the
happening of the event can be allowed without delaying project completion,
assuming everything else remains on schedule.
Total Slack = LS - ES = LF - EF
Therefore, the events that have slack times of zero are said to lie on the critical path of the project. Note that only activities having zero slack can lie on the critical path, and no others. A delay in an activity lying on the critical path delays the entire project. Moreover, once the critical path activities are traced, the project team has to find ways to shorten the path and ensure there is no slippage.
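The forward and backward passes described above are easy to script. The Python sketch below computes ES, EF, LS, LF and slack for a small hypothetical task network (tasks listed so that each task's predecessors appear before it); the durations and dependencies are made up.

# Minimal sketch of the forward/backward pass on a small hypothetical network.
tasks = {            # task: (duration, list of predecessors)
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (2, ["B", "C"]),
}

es, ef = {}, {}
for t in tasks:                                   # forward pass
    dur, preds = tasks[t]
    es[t] = max((ef[p] for p in preds), default=0)
    ef[t] = es[t] + dur

project_end = max(ef.values())
ls, lf = {}, {}
for t in reversed(list(tasks)):                   # backward pass
    dur, _ = tasks[t]
    successors = [s for s, (_, preds) in tasks.items() if t in preds]
    lf[t] = min((ls[s] for s in successors), default=project_end)
    ls[t] = lf[t] - dur

for t in tasks:
    slack = ls[t] - es[t]
    flag = " <- critical path" if slack == 0 else ""
    print(f"{t}: ES={es[t]} EF={ef[t]} LS={ls[t]} LF={lf[t]} slack={slack}{flag}")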
Task start and end dates, and the latest start and end dates that do not affect the project completion time.
c. Tree Diagram
A tree diagram is an important project planning tool. The tree diagram helps to
identify all aspects of a project, right down to the work package level. Sometimes
the tree diagram used in project planning is also called a Work Breakdown
Structure (WBS). It displays the structure of a project; showing how the broad
categories of the project can be broken down into smaller details. This diagram
shows the overall picture of a project’s steps, the logical flow of actions from the
identified goals. The tree diagram is also used to display existing projects in an easy
to understand diagram.
The tree diagram starts with one item that branches into two or more stems. Each of these branches into two or more, and so on. The main trunk is the generalized
goal, and the multiple branches are the finer levels of action. The tree diagram is a
generic tool that has wide applications apart from project planning.
1. Identify the statement of the goal or project plan, or whatever is being studied.
Write it at the top (this will make a vertical tree) or far left of the working surface
(this will make a horizontal tree).
2. Subdivide the goal into various sub categories. Ask a question that will lead you
to the next level of detail. For example, for a goal or work breakdown structure, the
team could ask “Which tasks must be done to meet this goal? What is required to
accomplish this?” Answers could be taken from the brainstorming sessions or
affinity diagrams. Write these items in a line below or to the right. Arrows have to
be used to show the links between the ideas. Ensure that these items will be
sufficient to answer the questions above. This is called a “necessary and sufficient
check”.
3. Each of the new idea statements now becomes a goal or problem statement. For
each of these ideas, ask questions again to unearth the next level of detail. Jot
down the next level of sub-headings and link them with the previous line of
statements with arrows. Do a “necessary and sufficient check” again for each of
these items.
4. Continue the process till all the fundamental components are covered.
Example: The following tree diagram is a project for increasing the productivity of
customer care executives in a BPO. The goal of the project is to reduce the average
call handling time, (the average time taken by each customer care executive to
handle customer calls) which will have a positive effect on productivity.
3. Project Documentation
The data should be presented in a manner that is clear to every person involved. It should be based on facts and reliable data, and presented with the help of spreadsheets, storyboards, phased reviews, and presentations to the executive team. This is done by the Black Belt or the team members. The data collected through these mechanisms has to be amalgamated and synthesized in a phased manner throughout the project, so that it is useful in the improvement and implementation process.
A check sheet is a structured form used to collect, organize, and analyze data. It is designed in such a way that all the necessary facts can be secured. Data representing the 'when's or 'how's of the ongoing project can be captured in a check sheet. In addition, the frequency or pattern of targeted events, or the problems or defects that might occur while the project is in progress, can be recorded in a check sheet.
A storyboard, as the name suggests, is a visual display of thoughts. It tells the story
of a plan or project. All the aspects of the project are made visible at once to
everybody involved. It is a representation of both the creative and analytical aspects of a project. It is highly useful when documenting or displaying project activities or results.
4. Charter Negotiation
The project charter, as discussed in the prior sections, is a statement of what the
project is about and the organization’s commitment of resources, schedule, and
cost to the project. It is common for the project charter to be negotiated and modified by the stakeholders and the project manager once it has been prepared. The Black Belt needs to identify the key stakeholders with whom he will be negotiating the project charter.
Essentials of Negotiation:
Negotiation, in general terms, is a process through which all parties want to reach a
mutual solution for things they own or control. The objective of negotiation is to
carve out a win-win situation for all negotiating parties by appreciating each other’s
needs, while keeping in mind the shared goals of both. All parties intend to benefit from the negotiation process. Charter negotiation includes a list of assumptions and a list of constraints on which a mutual understanding has to be reached.
There are various kinds of negotiation conducted during a project. These include
initial negotiations while finalizing the contract, change negotiations while drafting
the plan and design, and negotiations during project closure. Negotiations can be
on the project scope or project boundaries with the project sponsor or stakeholder.
It can also be on the project’s resources (time schedule, price terms, manpower,
and business conditions). Negotiations can also be carried out on a daily basis with
team members about their commitment towards the project. These are the key
elements in determining the final output of the project.
Negotiation is best done before the project has officially started. The focus should be on reasoning through the 'why's and 'how's, rather than on simply seeking affirmations.
Successful negotiation is about getting things done the best way without giving
away too much. The project's negotiator should know how much he is willing to give in to settle the matter, and what the other party intends to achieve at the negotiating table. This will help in zeroing in on the final agreement.
The project negotiator and the stakeholders, who could be the customers,
shareholders or team members, should keep themselves focused on the project’s
goals. The basis of negotiation is being clear about the goals.
The negotiation should be under control right from the start as the project
manager has to fit in several factors like financial constraints, the delivery
schedule, the list of tasks to be concluded and profits. This has to be ensured by
the project manager to avoid being on the wrong side of the agreement.
While negotiating, it is important not to cling to cost concessions while giving away something that is more significant to the business. Balancing money against things more valuable to the business pays rich dividends to the project.
The process of negotiation must be given the same importance as drafting and
execution of the project.
While negotiating the charter with the team, it is necessary to make a detailed
Work Breakdown Structure (WBS) to help communicate project details to the team.
Extracting commitment from the team is vital for the project’s successful
completion.
While entering into the negotiating process, the following things have to be kept in mind:
1. While negotiating the scope, the first thing to be kept in mind is to understand
the scope to be negotiated. What are the areas within the scope that can be
adjusted to the needs of the stakeholders? What are the boundaries within which
the negotiation can be carried out? The project negotiator has to fix these
parameters. Scope includes all the work required to complete the project.
2. The project negotiator should focus on the interests and the issues rather than on the people sitting at the negotiating table.
4. Justifying the scope and establishing the credibility of the project is the next step in the negotiating process.
5. Listening carefully and identifying the opposite party’s real interests and the
reasons underlying them is the next step. The project negotiator should unearth
the benefits his team will accrue from the negotiation.
6. It is imperative to keep track of how much has been conceded.
7. The project negotiator should politely say no if the need arises, without breaking
down the negotiations.
9. Lastly, it is important to secure approval of the project plan. The project negotiator has to ensure that the working relationships with the project stakeholders stay intact for future negotiations.
The process of deploying the Six Sigma initiative can be started by creating the
teams to work on the Six Sigma projects. Six Sigma teams are led by the Black Belt,
Green Belt, or a Champion. The teams are made up of a diverse mix of people, who contribute their personal skills and abilities to the project. This section discusses
what the belts can do to ensure team success.
After the business case is stated, the project statement is written, and the scope is
defined, teams have to be developed. The Black Belt has to prioritize actions, using
the DMAIC framework, and select the teams who will work on each phase of the
DMAIC project. He has a critical role to play in formulating teams because he is
ultimately responsible for the team’s success. His leadership skills come to the fore
when he has to work on team building and initiate team communications.
Selecting the right team members and allocating roles within the team is the
foremost task of the Black Belt. After the team has been formed, his role lies in
being not only a leader, but also that of a facilitator and a motivator. He should
understand team dynamics, and have conflict mediating skills.
The formation and development of an effective team is instrumental in achieving
sustainable success in any Six Sigma project. The Black Belt can make use of the
combined skills of the team to address customer needs, reduce variances in the
processes, and successfully deliver the project.
1. Team Initiation
Team initiation is identifying the team of individuals and skills needed to complete the project. The crucial factor is how to become a team: how to bring together a group of strangers with varied skills to meet the challenge of completing the project.
While launching a team, the team leader should ensure the needed skills exist in
the team. Detailed attention should be paid to the challenge of bringing the team
together for the first time, and for subsequent meetings. A lot depends on the team manager to help the team members get to know and trust each other. There should be clarity about the team's goals right from the start, and once the team is clear
about what to do, organizational procedures and best practices guide the team to
coordinate their activities.
The team should be small and at the same time, have the sufficient expertise and
necessary skills for the completion of the project. A core team typically consists of
six or fewer people. The team should be clear about the scope, goals and
boundaries of the project. The team must understand that its goals have to be in
sync with the organization’s goals. The team has to have the support from the
management. Every member has a role to play and responsibilities to perform.
Every member has to follow certain guidelines and rules while pursuing the
project.
The team leader has to extract a commitment towards the project from them, and
drive home the importance of the time schedule. He should know how to shift roles
and blend skills, set targets, fix assignments, and hold people accountable. Also,
team performance is driven by empowerment and positive group dynamics.
2. Team Selection
Team selection is an art and careful attention should be paid while selecting teams.
Only those people who have the appropriate skills needed for the completion of
the project should be taken. Members should be chosen for the project only
because they possess the technical skill, organizational skill or skill related to the
subject matter of the project. People who have some knowledge about the current
process can be taken into the team. Sometimes the team can include customers
and suppliers of the process.
Teams should consist of the number of people necessary to finish the project.
Smaller teams work faster and display results more quickly. Larger teams need
greater coordination and sometimes sub teams have to be created under them.
A team typically develops through a sequence of stages as it matures:
Forming: In this initial stage, the team members meet each other for the first time and get familiar with the courses of action. They explore the team goals and the project scope. Here, group interaction is hesitant, and members are cautious about how they will fit in. The decision-making process is controlled by the leader, who performs a significant role in steering the group forward.
Storming: The storming stage follows forming. Conflicts arise between members,
and between members and the team leader, in this stage. Members demand to
know their roles and responsibilities. They question the team leader’s authority as
far as group objectives are concerned. They tend to question procedures and
systems. Defying the attempts of the leader to move the group towards
independence is a common feature.
While managing conflicts, the leader should keep the following elements in mind:
6. He should persistently steer the group away from dependence on its leader, towards independence.
Norming: During the norming stage, the team's norms and procedures become accepted. The team takes responsibility for its goals, procedures, and conduct in this stage. The members accept that the DMAIC tools will help them in achieving their goals. Group norms are enforced and strengthened by the group itself. Respect for, and willingness to help, other team members arise in this stage.
Performing: If all goes according to plan, the team reaches this final stage.
Members realize their potential. They take pride in their group, their contribution to
the group, and their accomplishments. They are confident about giving assistance
to the improvement initiatives of the project.
Adjourning: This fifth phase involves completing the task and breaking up the
team.
Recognition: This phase involves identifying and thanking the team members for
their contribution to the task.
According to Thomas Pyzdek, author of The Six Sigma Handbook, members
of the team take on two basic roles: group task roles and group maintenance roles.
The development of these roles is important for the team building process; which is
the process by which the team learns to function as a unit rather than a collection
of individuals with no coherent goals.
Group task roles involve those functions related to facilitating and coordinating the
group’s effort to select, define, and find a solution to a problem. They include behaviors
shown in the table below:
Group maintenance roles are intended to bring group cohesiveness and group-
centered behavior. They include behaviors shown in the table below:
Counterproductive Group Roles: There are some "not always helpful" roles which may hinder the team building process; these are called counterproductive roles. The group leader has to identify such behavior and subtly provide feedback. Some of these roles are discussed below:
Management’s Role:
It is very important for the management to ensure that the group gets sufficient time to become effective. It has to ensure that the composition of the group is not altered by asking one or more members to leave the group, unless required as a critical exception. These conditions, if met, will help the group to progress through the four team stages.
At the same time, the group's composition should also not be altered by the addition of temporary members. However, this will require a great deal of discipline on the part of both the management and the group members.
1. The team should comprehend that they are a group of people with group goals.
2. There should be clarity about group goals and they should be relevant to the
group’s needs.
6. The team should function as an interacting, collective unit. Vital decisions should be arrived at through group consensus. This provides constructive controversy, equal power, involvement, and at the same time, realization of the potential of group members.
7. The group needs to be high on cohesion. Team members should like each other, and a high level of trust and acceptance should exist. They should be satisfied with their participation in the group, and there should be positive group dynamics: room for innovativeness, enough freedom to take independent decisions, and productive arguments.
Selecting a Facilitator:
Facilitators should be endowed with the qualities listed below: (Schuman 1996)
2. He should remain unbiased relating to issues and concerns within the group.
3. He should use methods and systems suited to the group's social and intellectual processes.
4. He should appreciate the desire of the group to learn from the problem-solving
process.
2. Is trusted by all group members as being fair and neutral, and has no personal interest in the result.
3. Assists them in understanding the problem solving techniques and helps them
polish their own decision making skills.
1. An unbiased external facilitator is useful when there is distrust among members, or doubts about bias in an internal facilitator.
6. When the situation is highly complex and unique, bringing in an expert can help
the group delve into the problem and resolve it.
Sometimes the problem under discussion lacks proper definition or is viewed from
a different perspective by different people. An unprejudiced person can offer his
suggestions and pitch in his analysis.
Task-related activities concern themselves with the reason behind team formation,
the team charter, and the team goals. They are listed below:
1. The facilitator should be selected before the team itself is formed. The facilitator
selects the team members, and designs the team charter. (The team charter is
discussed in the earlier part of this chapter)
2. He assists in developing team goals based on the charter, and acts as the
mediator between the team and management.
4. He has to ensure that sufficient records of the team's projects are kept, and see that the records reflect the current status of the project. He has to arrange for blue-collar support.
5. He has to plan meetings, invite the strategically important people, and ensure their attendance. He has to see that the meeting starts on time and the proceedings run smoothly.
He has to act as the channel of communication between team members and people outside the team, and gain the support and cooperation of non-team members.
Team performance and its effects can be tracked through measures such as:
productivity
quality
cycle time
grievances
medical usage
service
turnover
dismissals
counseling usage
employee attitudes
customer attitudes
customer complaints
There are some measures to weigh the performance of the teams and their processes. Project success and failure should be monitored. These can be measured in terms
like: leaders trained, projects started, projects dropped, projects completed,
projects rejected, improved productivity, number of teams, inactive teams,
improved service etc. (Aubrey and Felkins, 1998)
Individuals in a team could have excellent skills and phenomenal creativity, but they
might be unable to bring all these skills together because of lack of team
coordination, or lack of knowledge of the common tools. For a team to be
successful, it should master not only the quality improvement processes, but also
the team tools and team effectiveness.
Team tools facilitate the team to perform effectively and efficiently. Tools enable
them to achieve their goals and objectives, or arrive at a consensus regarding team
issues. A major factor in team dynamics is that no decision is taken till it wins the
tacit approval from all team members. Some of these tools are described below.
a. Affinity Diagrams
Affinity diagrams can be constructed using existing data like survey results,
drawings, letters, or data gathered from brainstorming. They can be used before
creating a storyboard or tree diagram (discussed in the subsequent sections of this
chapter). They can be used together with other techniques like cause and effect
diagrams and interrelationship diagraphs.
Write the ideas on small pieces of paper or sticky notes. Randomly paste these
notes on a working surface which is visible to everyone.
The team has to work in silence during this stage. Look for patterns in the ideas.
Look for ideas that seem to be correlated. Place them next to each other by
moving the sticky notes. Repeat the process until all ideas are grouped. There could be notes which do not fit into any category; it is alright to have such "stand-alones". You can also move a note someone else has moved before.
It is alright to talk now. The team can now review and assess these final groupings.
Any unsuspected patterns or reasons why the notes were shifted can be
discussed. Select a heading for each group that would capture the essence of the
group. The grouping of these ideas will assist the team in taking a decision, or
making a plan.
2. Brainstorm ideas. Have the team write down their ideas silently, for a set period of time.
3. List each idea by asking each of the members to state aloud their ideas. The
facilitator records it in the flipchart. Discussion is not allowed, and questions cannot
be asked at this stage. Ideas given need not be from the team member’s given list.
1. Discuss the top 50 ideas. Ideas can be removed from the list only if there is
approval by everybody. Discussions will elucidate meanings, spell out the analysis,
or express agreements or conflicts.
2. Rank ideas through multivoting. Pass out index cards to all members. The
following table can be used as a guide.
3. Let each member write down their choices from the list of the given ideas, one
choice per card.
4. Each member then has to rank their choices and write the rank on the cards.
6. The group analyses and discusses the results. Examine the number of times an
item was selected. Calculate the total of the ranks for each item.
The item(s) which got the highest score(s) (total of ranks) can help the team to use
these for further discussion or analysis. A decision is possible only if it is preceded
by group consensus on importance of the items that got the highest score(s).
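The tally described above (count how often each idea was selected and total its ranks) is easy to automate. The sketch below is a minimal Python illustration; the member names, ideas, and ranks are hypothetical, and it assumes that a higher rank number means a stronger preference.

```python
# Minimal sketch of tallying a multivote: each member ranks their chosen
# ideas (higher rank = more preferred here); names and votes are hypothetical.
from collections import defaultdict

votes = {
    "Anna":   {"Reduce hold time": 3, "Update FAQ": 2, "More training": 1},
    "Ben":    {"Update FAQ": 3, "Reduce hold time": 2, "New headsets": 1},
    "Chitra": {"Reduce hold time": 3, "More training": 2, "Update FAQ": 1},
}

totals = defaultdict(int)
counts = defaultdict(int)
for member_choices in votes.values():
    for idea, rank in member_choices.items():
        totals[idea] += rank   # sum of ranks for each idea
        counts[idea] += 1      # number of times the idea was selected

# Sort ideas by total rank, highest first, for further discussion.
for idea, score in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{idea}: total rank {score}, selected {counts[idea]} times")
```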
c. Multivoting
Multivoting, used to narrow down and rank a list of ideas, has been covered in the ranking steps described above.
d. Force Field Diagram
The Force Field diagram helps in understanding the opposing forces involved in a required change: the driving forces that push the change forward, and the restraining forces that block any improvement or change from taking place. The Force Field diagram can also be used to display the forces that would push to reverse or nullify the changes, and to create counterforces that would sustain these changes.
A change cannot occur if the opposing forces are equal. The ‘drivers’ have to be
stronger than the ‘restrainers’ for a change to occur. After doing a comprehensive
study of the forces, the team can design an action plan or take a decision to pave
the way for the desired change:
1. Write the change that is desired, or the problem that is to be remedied. Draw a
vertical line.
2. Brainstorm all the driving forces that push for the change, or that cause the problem to occur. Write them down on the left side of the line. Indicate the strength of each force by drawing a right-pointing arrow whose length represents that strength.
3. Similarly, on the right side of the line, write down all the possible restrainers, and
draw left-pointing arrows to indicate the strength of the restrainers.
4. To bring in a desired change, discuss all the ways in which the restrainers can be removed or weakened. To study the causes of a problem, discuss how to reduce the driving forces.
Example: The Service Levels (how quickly and how efficiently a customer care
executive responds to a customer’s call) of a telecom customer care company
(inbound environment) in a particular process are going down below the targeted
levels. The force field diagram can be used to analyze how the performance of the
customer care executive can be enhanced. The drivers and restraints are as follows:
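The figure listing the drivers and restrainers is not reproduced in this text, so the short Python sketch below uses hypothetical forces and strength scores (1 to 5) to show how a team might summarize a force field and compare the two sides numerically.

```python
# Minimal sketch of a force field summary for the service-level example.
# The forces and their strength scores (1-5) are hypothetical.
drivers = {
    "Incentives tied to service level": 4,
    "Real-time queue monitoring": 3,
    "Refresher training": 2,
}
restrainers = {
    "Sudden spikes in call volume": 5,
    "High agent attrition": 4,
    "Outdated knowledge base": 2,
}

def total(forces: dict) -> int:
    """Add up the strengths of a set of forces."""
    return sum(forces.values())

print("Driving force total:    ", total(drivers))
print("Restraining force total:", total(restrainers))
print("Net force toward change:", total(drivers) - total(restrainers))
```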
Conflict resolution is helping the team or opposing parties see the common goals
and strive towards achieving those common goals.
Conflict resolution is an activity which is managed by the team leader (Black Belt, Green Belt, etc.) and the facilitator. It is worth mentioning here that constructive conflict should be encouraged. The real reasons behind the apparent conflict should be found out and a consensus should be reached.
A primary rule in conflict resolution should be that no decision about the team's objective or goal is taken until it wins the nod from all participating members.
There are various tools to eliminate conflict. Some of them are discussed below:
a. Consensus Technique
This technique will take time to be effective, or will fail altogether, in certain
situations such as issues which involve deep-rooted differences or win-lose
confrontations. Arbitration from a higher authority will be required in this case.
Avoid bargaining for your own position; display sensitivity to others' opinions and consider them in further discussions on similar points.
Avoid win-lose confrontations; it is not necessary that a winner should arise in
every situation. In such moments, look for the next best solution.
Look for reasons behind the disagreement; and double check the intention behind
the agreement. See if participants arrive at the ‘yes’ for the same basic reasons, or
if they have some underlying motive.
Avoid sidestepping or reducing conflict by methods such as bargaining, coin
tossing, majority vote, trading out etc.
Avoid negativity and accept that the group does have the potential for reaching a
decision.
Look at controversy positively; this provides equal power and involvement, throws
up creative options, and at the same time, the potential of group members is
realized.
b. Brainstorming
Brainstorming is a tool for generating ideas in a short period of time. Apart from
using it when creative ideas have to be generated, it is used as a consensus tool to
involve participation from the entire group. Lists of ideas or solutions are chalked
out and then the final choice is made from the options that are available.
2. Discuss the topic of the problem or the conflict. It is very important to ensure that everyone involved in the session fully understands and supports the question for which new ideas are needed.
3. Allow some minutes of silence for all to think about the topic.
4. Let people speak their ideas, and note down every idea.
c. Multivoting
This tool has already been detailed in the earlier sections. See Team Effectiveness Tools.
d. Interest-Based Bargaining
The best solution to a problem often emerges from a variety of options and
perspectives. This is a characteristic feature of interest-based bargaining. Interest-
based bargaining is a method of negotiation that tries to meet the inherent needs
or interests of both parties. Unlike traditional bargaining, opposing parties are
allowed to convey what’s important about the issue under discussion, rather than
fighting out for a specific solution or arguing about a specific position. This makes it
possible to jointly create solutions that are acceptable to both sides. In this, neither
side has to give up their fundamental beliefs, and a win-win situation is created for
all.
The first step in interest-based (integrative) bargaining is to find out the interests. The next step involves finding out the 'why's behind them and what is obstructing those interests from being met. If the fundamental interests are known, a solution can be carved out.
6. Motivation Techniques
Motivation is the psychological process that gives behavior purpose and direction
(Kreitner, 1995).
To be successful, team managers or team leaders, should know what keeps a team
motivated to carry on their tasks within the context of the roles they play.
According to Maslow's theory of motivation, recognition feeds the need for self-esteem and a sense of belonging. Unless these needs, and ultimately the need for self-actualization, are met, pride in work and a feeling of accomplishment will not emerge.
7. Organizational Roadblocks
Although the traditional functional structure has advantages, it is not without inherent flaws. This structure often creates a resistance to change, which becomes an organizational roadblock. The modern business scenario demands that value for customers be created by drawing resources from various parts of the organization. An organization is known by its people, and everybody should be given a chance to contribute their skills and expertise to value creation. Therefore, organizations should change the traditional approach to work and cut across functional lines to focus on customer satisfaction. The Six Sigma way is a customer-driven methodology which enables cross-functional teams to come together and create value for customers.
The standard operating procedures (SOPs), in other words the formal rules of an organization, are in themselves a barrier to change. These rules are made to rectify past problems and often stay in existence long after the problem is over. Most of the time, the top leadership is reluctant to submit to a rule-changing process. The management has to display flexibility, and not succumb to the spiral of writing down too many rules for every procedure, if it wants to implement change in its system.
The detailed rules and procedures that define work are another roadblock. The requirements of different projects differ, and rigid procedures obstruct the necessary changes. Another barrier to change is the requirement to take permission from various departments, experts, boards, and other bodies.
Management has to do away with these limitations to ensure the smooth
functioning of projects. Lack of support from leadership, and group behaviors that disturb team dynamics when a change is required, are other barriers to change.
The most significant barrier to change is, perhaps, human nature. It is a part of human nature to resist anything that threatens our present status. At the individual level, barriers to change include:
Fear of treading into new uncharted territory like introducing new ways of
managing a process, or inducting a new process itself
Fear of making a mistake or fear that the new method will fail to produce the
desired results
Apart from internal barriers, there are external roadblocks to change. Government
bodies and private agencies necessitate organizations to follow a maze of rules and
regulations before they can embark on some new project.
The leadership must recognize these barriers to change and focus on removing
these barriers. It is their responsibility to remove these roadblocks towards
organizational improvement. The first step to achieve this can be done by asserting
a desire to reduce or remove the problem. The employees can be trained on the
use of problem solving tools, and a model in which the improvement will be
implemented can be designed. Communicating the solution to all levels and
recognition to all who helped implement the solution will help remove the barriers
to change.
1. Affinity Diagrams
This has been discussed in the earlier section. See Team Effectiveness Tools.
2. Tree Diagrams
The tree diagram is used to stratify ideas into subsequent levels of detail. The idea
becomes easier to understand or the problem easier to solve by the tree diagram.
The tree diagram starts with one item that branches into two or more stems. Each of these branches into two or more, and so on. The main trunk is the generalized goal, and the multiple branches are the finer levels of action.
The tree diagram is a generic tool that has wide applications; it can be used whenever a broad goal or issue needs to be broken down into progressively finer levels of detail. The procedure to draw a tree diagram has already been discussed in the earlier sections. See Planning Tools.
3. Interrelationship Diagraphs
1. The group has to define the particular issue or problem under discussion.
2. Write down all the factors or ideas on pieces of paper. These have to be pasted
on a large flip-chart or any working surface.
3. Link each factor to all others. Use an arrow, also known as “influence arrow”, to
link related factors.
4. Draw the “influence arrows” from the factors that influence to those which are
influenced.
5. If two factors influence each other, the arrow should be drawn to reflect the
stronger influence.
7. The elements with the most outgoing arrows will be root causes or drivers.
8. The ones with the most incoming arrows will be key outcomes or results.
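Steps 7 and 8 amount to counting arrows. The following minimal Python sketch shows that counting; the factors and influence arrows are hypothetical examples, not taken from the courseware.

```python
# Minimal sketch: count incoming and outgoing "influence arrows" to spot
# drivers (most outgoing) and key outcomes (most incoming).
# The factors and arrows below are hypothetical.
from collections import Counter

# Each tuple is (influencing factor, influenced factor).
arrows = [
    ("Unclear procedures", "Rework"),
    ("Unclear procedures", "Long cycle time"),
    ("Lack of training", "Rework"),
    ("Lack of training", "Customer complaints"),
    ("Rework", "Long cycle time"),
    ("Long cycle time", "Customer complaints"),
]

outgoing = Counter(src for src, _ in arrows)
incoming = Counter(dst for _, dst in arrows)

print("Likely drivers (most outgoing arrows):", outgoing.most_common())
print("Key outcomes (most incoming arrows):  ", incoming.most_common())
```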
4. Matrix Diagrams
A matrix diagram shows the relationships between two or more sets of elements. It can be used:
while trying to comprehend how groups of items relate to one another or affect one another
while comparing the efficiency and effectiveness of the options
when comparing cause-and-effect relationships
An L-shaped matrix relates two sets of elements to each other, and sometimes one set of elements to itself. The elements are compared by placing one set across the top row and the other down the first column.
A T-shaped matrix relates three sets of elements in such a way that sets X and Y are related to Z, but X and Y are not related to each other.
A Y-shaped matrix relates three sets of elements in such a way that each set is related to the other two sets, i.e. they are related in a circular manner. Suppose X is related to Y, and Y is related to Z; then Z is also related to X.
An X-shaped matrix relates four sets of elements, and each set is related to two other sets in a circular manner. Suppose W is related to X, X is related to Y, Y is related to Z, and Z is related to W, but W is not related to Y, and X is not related to Z.
A C-shaped matrix relates three sets of elements simultaneously, in a 3-dimensional manner. It is difficult to draw and is therefore rarely used.
Steps in generating a matrix diagram:
2. Choose the format for the matrix (the common formats are described above).
4. Think of what to tell, which relationship to state, with symbols on the matrix.
There are some commonly used symbols like ‘X’s and blanks, or check marks to
indicate ‘yes’ or ‘no’. There are more symbols which make the matrix more
understandable. These may show the strength of the relationship between two
items, or what role the item plays in the activity.
5. Compare the sets of elements, item by item. Place the appropriate symbol at the
intersection of the box of the paired items for each comparison.
6. Analyze the matrix for patterns. This information can be used for further analysis
or to resolve a problem.
The T-shaped matrix relates four product models P, Q, R, S (say tires) (group X), to
their manufacturing locations (group Y) and to their buyer groups (group Z). The
matrix can be viewed in different ways to pinpoint different relationships. For
example, Ford is a major buyer of Tire P, but it buys Tire S in small volumes. Tire Q
is produced in large volumes in the Rome unit, and in small volumes in the Paris
unit, and is bought in big volumes by BMW. Volkswagen is the only customer who
buys all the tire types.
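The same T-shaped matrix can be recorded as two simple tables, one relating models to locations and one relating models to buyers. In the minimal Python sketch below, only the relationships stated in the example (Ford and Tires P and S, Tire Q in Rome and Paris, BMW and Tire Q, Volkswagen buying all types) are taken from the text; the remaining cells and the L/S volume symbols are hypothetical.

```python
# Minimal sketch of the T-shaped matrix: tire models (X) related to
# manufacturing locations (Y) and to buyer groups (Z).
# Symbols: "L" = large volume, "S" = small volume, "" = no relationship.
models = ["P", "Q", "R", "S"]

made_in = {          # group Y: manufacturing locations
    "Rome":  {"P": "L", "Q": "L", "R": "",  "S": "S"},
    "Paris": {"P": "",  "Q": "S", "R": "L", "S": ""},
}
bought_by = {        # group Z: buyer groups
    "Ford":       {"P": "L", "Q": "",  "R": "",  "S": "S"},
    "BMW":        {"P": "",  "Q": "L", "R": "S", "S": ""},
    "Volkswagen": {"P": "S", "Q": "S", "R": "S", "S": "S"},
}

def print_matrix(title: str, rows: dict) -> None:
    """Print one arm of the T-matrix with models across the top."""
    print(title)
    print("          " + "  ".join(models))
    for name, cells in rows.items():
        print(f"{name:<10}" + "  ".join(cells[m] or "." for m in models))

print_matrix("Locations vs models:", made_in)
print_matrix("Buyers vs models:", bought_by)
```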
5. Prioritization Matrix
A prioritization matrix is useful when:
vital root causes have already been recognized and the most important ones have been identified
three or more issues have to be compared, especially when the decision is vital to the organization and some issues are subjective
the resources for the improvement effort are limited and only a vital few activities can be focused upon
2. Then rate each option according to the intensity of the correlation with the
criteria or according to how well it meets the criterion.
3. Finally, combine all ratings for a final ranking of the options numerically.
The main problems are listed along with the various options; each option's rating is then multiplied by the weight assigned to each criterion.
The prioritization matrix is like a grid, showing the various options along the top and the decision criteria down the left side. Weights are listed with the decision criteria. The score for each criterion is calculated by multiplying the rating by the weight. Once the scores have been summed for each option, the option with the highest total score is chosen as the best solution.
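The weighted scoring just described is straightforward to compute. The minimal Python sketch below shows the calculation; the criteria, weights, options, and ratings are hypothetical values chosen only to illustrate the arithmetic.

```python
# Minimal sketch of prioritization-matrix scoring: multiply each rating by
# the criterion weight and sum per option. All values are hypothetical.
weights = {"Customer impact": 0.5, "Cost to implement": 0.3, "Speed": 0.2}

ratings = {  # option -> rating (1-5) against each criterion
    "Add evening shift":   {"Customer impact": 4, "Cost to implement": 2, "Speed": 3},
    "Revise call scripts": {"Customer impact": 3, "Cost to implement": 5, "Speed": 5},
    "New ticketing tool":  {"Customer impact": 5, "Cost to implement": 1, "Speed": 2},
}

scores = {
    option: sum(weights[c] * r for c, r in option_ratings.items())
    for option, option_ratings in ratings.items()
}

# Rank the options by weighted score, highest first.
for option, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{option}: weighted score {score:.1f}")
```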
6. Process Decision Program Chart (PDPC)
In any kind of planning there might be many things that could go wrong. The process decision program chart, or PDPC, helps to foresee the problems that could be encountered in a plan under development. To avert and avoid such problems, countermeasures are developed beforehand. The PDPC helps to revise the plan with the intent to avoid the problem, or to be ready with a solution to counter the problem.
7. Activity Network Diagram
{also called network diagram, node diagram, CPM (Critical Path Method) chart}
(variation: PERT chart)
The activity network diagram is used when:
tasks within a complex project or process have to be scheduled and their progress has to be monitored
the project schedule is critical, with serious consequences if the project is not completed on time, or significant gains if it is completed early.
1. Discuss all activities or tasks that are needed to complete the project.
2. Determine the sequence of the tasks. Before an activity or task begins, all
activities that should precede it should be completed. Ascertain which task is to be
carried out first. Identify which task can be carried out simultaneously with this
task. This can be placed to the left or right of the first task.
3. Identify the next task, and place it below the first task. See if there is any task to
be worked out concurrent to this. Concurrent tasks should be lined parallel to the
left or right.
4. Continue this process and construct a diagram. The tasks are represented with
arrows. The beginning or end of the task is called an event. Draw circles for events,
between each two tasks. Therefore, events are nodes that separate tasks.
In figure 17, event 2 and the dummy between 2 and 3 have been added to separate
tasks P and Q.
In figure 18, R cannot start until both tasks P and Q are over, and a fourth task S cannot start before P is complete, but S does not have to wait for Q. A dummy can be inserted between the end of task P and the start of task R.
6. When the network is made, label events in sequence, with event numbers inside
the circles.
7. Identify task times or the time needed to complete each activity. Write the time
on each task’s arrow.
8. Determine the critical path. The longest time from the beginning to the end of
the project is called the critical path. This should be marked with a heavy line or
color. The project’s critical path includes those tasks that must be started or
completed on time to avoid delays in the completion of the project.
There are four time values for each event: its earliest time of start and earliest time
of finish; and its latest time of start and finish.
1. Work out the earliest start (ES) and earliest finish (EF) for each task.
The earliest time is the expected time an event will occur if the preceding activities are started as early as possible.
Earliest Start = the largest EF of all tasks immediately preceding this one
Earliest Finish for each task = ES + time taken to complete the task
2. Work out the latest time that each task can begin and conclude with. These are
known as Latest Start (LS) and Latest Finish (LF). The latest time is the projected
time an event can happen without disturbing the project completion beyond its
earliest time. To calculate this, work backwards, start from the latest finish date to
the latest start date.
Latest Finish = the smallest LS of all tasks immediately following this one
Latest Start = LF - time taken to complete this task
Draw a separate box for each task. Make a time box divided into four quadrants as
shown in figure below.
Slack time for an event is the difference between the latest times and earliest times
for a given event. The slack for an event indicates how much delay in the happening
of the event can be allowed without delaying project completion, assuming
everything else remains on schedule.
Therefore, the events that have slack times of zero are said to lie on the critical path of the project. Note that only activities with zero slack can lie on the critical path, and no others can. A delay in an activity lying on the critical path delays the entire project. Moreover, once the critical path activities are traced, the project team has to find ways to shorten the path and ensure there is no slippage.
CHAPTER 4
4 The Define Phase
Introduction
The first step in the DMAIC model is to define the project. It is understood in the Define phase that a number of problems that are affecting the business have been identified by the management (through the VOC tools discussed in the earlier chapter), and practical solutions have to be worked out for them. A challenge that management faces is to select these problems in such a way that the application of Six Sigma to them gives the maximum benefits. After the problems
are identified, the projects to work on have to be decided by the champions, belts,
and process owners. There can be many Six Sigma projects that run in parallel in
the organization, with champions, Black Belts and Green Belts working throughout
the organization. The implementation of the project is performed by these people.
The focus of a project is to resolve a problem that is affecting the core performance
areas of the business such as customer or employee satisfaction, costs, process
capability, output and rework, cycle time or responsiveness, defective services and
revenue. In the Six Sigma process, the problem first metamorphoses from a
practical problem to a statistical problem, then a statistical solution is found out
which is later transformed into a practical solution.
The project is defined by stating the project scope, using tools like Pareto charts,
high level process maps, work breakdown structures, and the 7M tools (see Chapter 3 for a description of the 7M tools).
If there are more than two Ys (output variables), it is likely that the project is too
large in scope. The most logical step would be to break down the project into two
or more projects. To understand the performance of Y, you have to have a better
understanding of the process steps that lead to Y. A high level process map has to
be used here to show the full scope of the process.
The following illustration shows the selection of a process output for improvement.
Two things should be kept in mind while selecting a process. One, it should
recognize those particular performance parameters which will help the company have a financial gain. Two, it should aim to affect customer satisfaction positively.
A process can be measured on criteria such as defects per million opportunities, cost savings, the capacity of the process, or the time taken for production of a unit. It is a cross-functional approach and is totally focused on the outcome.
A process map is an illustration of the flow of the process, showing the sequence of
tasks with the aid of flowcharting symbols. A process map may illustrate a small
part of the operation or the complete operation. It consists of a series of actions
which change some given inputs into the previously defined outputs.
Macro process maps increase the visibility of any process. This improves
communication. It is used before drawing a more detailed flowchart of a process.
Example: The following macro process map shows the main steps in taking a call
by a customer care executive in a BPO.
The problem area is that the AHT (Average Handling Time) of a customer care
executive (CCE) is more than the time limit specified by the head of operations,
which reflects on the profitability and effectiveness of the contact center. Problems
arise when the customer gets adamant, expresses dissatisfaction in the answer
provided by the CCE and insists on further information. This increases the handling
time. The customer may also start abusing the CCE and even disconnect the call.
This leads to problems. At times, the CCE provides alternative solutions or escalates
the call to the team leader or manager, which contributes to increase in handling
time.
Pareto Charts
A Pareto chart is a specialized vertical bar graph that exhibits data collected in such
a way that important points necessary for the process under improvement can be
demarcated. It exhibits the comparative significance of all the data. It is used to
focus on the largest improvement opportunity by emphasizing the "crucial few"
problems as opposed to the many others.
The Pareto chart is based on the Pareto principle. The Pareto principle has to be
understood before getting to know the Pareto chart. The Pareto principle was
proposed by management thinker Joseph M. Juran. It was named after the Italian
economist Vilfredo Pareto, who observed that 80% of the wealth in Italy was owned
by 20% of the people.
“80% of the business defects are caused by only 20% of the errors”
The Pareto chart is a bar graph used to graphically summarize and display the relative importance of the differences between groups of data. It is particularly useful for attribute (categorical) data.
The next step in preparing a Pareto chart is to calculate the cumulative percentages
of the data supplied above. The following table can be derived from the data given
above.
Finally, a line graph can be prepared to see what the main problems are. The following line graph is drawn from the preceding table data using MS Excel, with the cumulative percentage plotted against the complaints of the customers. The X axis shows the complaints of the customers and the Y axis the cumulative percentage.
All the problems that fall to the left of the 80% line are the few problems accounting
for most of the complaints. They are:
1. Not hot
2. Late delivery
3. No extras
4. Wrong Billing
5. Wrong Pizza
6. Lesser ingredient
7. No delivery in a particular area
These account for 80% of the problems encountered in the home delivery of the
pizza. If these are immediately taken care of, then 80% of the problems can be
solved.
The Work Breakdown Structure (WBS) is defined as a process for defining the final and intermediate products of a project and their relationships. (Ruskin and Estes, 1994)
While defining a project, it becomes necessary to break down the project tasks into several hierarchical components. The WBS shows the breakdown of the deliverables
and tasks that are necessary to accomplish a particular project. It records the scope
of the project; and it pinpoints all the aspects of a project or process, right till the
work package level. The WBS is a variation of the tree diagram and is constructed in
the same way as a tree diagram.
In Six Sigma, defining the problem alone is not enough; it is necessary to define the magnitude of the problem (the defect level) in measurable terms (e.g., cycle time, quality, cost). In other words, the process metrics (CTQ, CTC, CTS, etc.) have to be established. What are the operational definitions of these metrics? Will these
same metrics be used after the completion of the project to measure the success of
the project? These questions have to be addressed.
Six Sigma sets its sight on delivering products and services with a zero-defect rate.
However, the main concern of a commercial organization always remains
maximum profit. Therefore, a point to be kept in mind while selecting Six Sigma projects is that they should yield some financial benefit, either by reducing cost (by eliminating rework, scrap, inefficiencies, excess inventory, etc.), through growth in revenue, or by achieving some strategic goal. So the Six Sigma leaders need
to define the project in numerical terms or metrics. These metrics are very
important as they help in determining the most suitable Six Sigma project for the
organization.
Good process metrics share the following characteristics:
They are customer centered and focused on indicators that provide value to the customers, such as product quality, service dependability and timeliness of delivery, or are associated with internal work processes that address system cost reduction, waste reduction, coordination and teamwork, innovation, and customer satisfaction.
They measure performance across time, which shows trends rather than
snapshots.
They provide direct information at the level at which they are applied.
They are linked with the organization’s mission, strategies, and actions. They
contribute to organizational direction and control.
They are collaboratively developed by teams of people who provide, collect,
process, and use data.
1. CTx Parameters - CTC, CTQ, CTS
In Six Sigma, the process metrics can be categorized into three parts. These are
cost, quality and schedule. They are also referred to as “critical to” characteristics
(CTx) as they play a crucial role in the success of any enterprise. These
characteristics are used to decide which project to focus on: should the focus be on
quality, cost or schedule projects?
These metrics express the issues and provide the motivation for greater customer
satisfaction. They may also ascertain the products and services which are being
offered or will be offered by a business process.
A CTQ, or Critical to Quality characteristic, has a major influence on the suitability for use of the product produced by the process. Critical to Quality characteristics align improvement efforts with customer needs. In simpler words, a CTQ is what a customer expects of a product.
At the time of working out of the metrics, the following metrics should not be
included
Those metrics for which adequate or accurate data cannot be collected.
Those metrics that are complicated and could not be explained easily to others.
Those metrics which make the employees work not towards the best interest of
the company but towards fulfilling their targets.
The metrics used in Six Sigma are categorized as Big Ys and Little Ys.
Big Ys are the key results that a Six Sigma project aims to deliver. These are the projects which the Six Sigma leaders will execute. The Little Ys are the smaller units of the chosen project which are implemented by the Green or the Black Belts under the aegis of the Six Sigma leaders.
The Big Y is to be associated with the critical requirements of the customer. The Big Y is used to create Little Ys, which ultimately drive the Big Y.
For instance, in any service industry, the overall customer satisfaction is the Big Y
and some of the elements of achieving customer satisfaction are quality check,
delivery on time, and after-sales service (Little Ys). The Big Ys exist at all levels of an organization: the business, the operations, or the process level.
The little Ys at the current level become the Big Ys at the subsequent level. The Big
Ys are the measures of the result, while the Little Ys evaluate the cause-and-effect
relationships between the different units of a project and the measures of the
process. It is very important to keep a check on the Little Ys to achieve a better Big
Y.
C. Problem Statement
The baseline level is utilized when estimating the potential financial benefits while
targeting a level of improvement.
The problem statement states the planned issue the project team wants to
improve. It reflects a better understanding of the problem. It explicitly explains
what needs to improve, the need for the project at that particular time, where the
problem is occurring, what is the seriousness or extent of the damage of the
problem, and the financial implications of the problem. It explains the need of
implementing the Six Sigma effort.
The project team selected will work out a plan to break the large project into
smaller projects and pass them on to the various teams. The problem statement is
documented by the senior leaders of the Six Sigma team.
Key Elements of a Good Problem Statement
Recruiting time by Human Resource for customer care executives, team leaders,
and resolution specialists for the Alabama process of the Customer Care Division, is
behind the goal of 30 days 90% of the time. The average time to fill a request is 70
days in the human resource employee recruitment process over the last 10
months. This is adding costs of $150,000 per month in overtime labor, contract labor, and rework costs.
D. Project Financials
After the baseline performance is established in the problem statement, and the
goal for improvement is asserted, the financial benefit of achieving this goal can be
ascertained. This can be done by estimating the costs that will be incurred at the operating level after the Six Sigma effort is deployed, and comparing them against the present costs. The annual benefit is calculated from this difference.
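As a worked illustration of this calculation, the sketch below reuses the $150,000 per month cost from the problem statement example above; the assumed 60% reduction after improvement is a hypothetical figure, not stated in the courseware.

```python
# Minimal sketch of estimating the annual financial benefit of a project.
# The $150,000/month cost comes from the problem statement example above;
# the assumed 60% reduction after improvement is hypothetical.
current_monthly_cost = 150_000          # overtime, contract labor, rework
assumed_reduction = 0.60                # fraction of the cost eliminated

monthly_benefit = current_monthly_cost * assumed_reduction
annual_benefit = monthly_benefit * 12

print(f"Estimated monthly benefit: ${monthly_benefit:,.0f}")
print(f"Estimated annual benefit:  ${annual_benefit:,.0f}")
```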
Quality Costs
Quality is in itself not a cost. But quality is a force that raises profits through lower
costs. Therefore a cost of quality means any cost that is incurred because the
quality of the product or service produced is not perfect. Costs such as rework and
scrap, excess material or cost of reordering to replace defective products are
quality costs. Quality cost is also known as cost of poor quality.
1. Prevention Costs: These are costs that are incurred when undertaking activities
to prevent poor quality. For example, quality improvement team meetings, quality
planning, product review etc.
2. Appraisal costs: These are the costs associated with measuring, auditing, and
evaluating products or services to test adherence to quality standards or
performance requirements. For example, inspection tests for purchased stock, in
process and final inspection, product or process audits etc.
3. Failure Costs: These are the costs that are incurred from products and services
that do not adhere to the required standards or fail to meet customer needs. These
are classified into:
Poor quality, and the resulting high quality costs, bring down the revenue of the company through higher expenses and poor customer satisfaction. Therefore, the goal of any company should be to lower the quality costs to the lowest possible level.
According to Juran and Gryna (1998), the cost of failure declines as conformance quality levels improve towards perfection, while the cost of appraisal and prevention increases. The total of the failure costs and the appraisal and prevention costs determines the level of quality costs.
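The total cost of quality is simply the sum of the prevention, appraisal, and failure categories described above. The minimal Python sketch below shows that roll-up; the cost items and amounts are hypothetical.

```python
# Minimal sketch of totaling the cost of quality from the categories
# described above. The cost items and figures are hypothetical.
prevention = {"Quality planning": 20_000, "Improvement team meetings": 8_000}
appraisal  = {"Incoming inspection": 15_000, "Final inspection": 25_000}
failure    = {"Scrap": 40_000, "Rework": 30_000, "Warranty claims": 22_000}

categories = {"Prevention": prevention, "Appraisal": appraisal, "Failure": failure}

total_coq = 0
for name, items in categories.items():
    subtotal = sum(items.values())
    total_coq += subtotal
    print(f"{name} costs: ${subtotal:,}")

print(f"Total cost of quality: ${total_coq:,}")
```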
E. Project Scheduling
An organization may undertake many Six Sigma projects at a time, with limited
resources available to them. This makes it mandatory for the organization to
schedule the projects in order of importance to get the best possible results out of
them.
Type of schedule
The type of schedule chosen depends on the complexity of the project. A PERT chart is used for large and compound projects, where there are a number of interconnected tasks. The PERT chart, also known as an activity network diagram, represents the interdependencies and associations between tasks. This helps in planning.
For smaller projects, a Gantt chart may be used. A Gantt chart is a two- dimensional
tool (it shows the task with the time frame) and does not represent the relationship
between the tasks.
The completion of a key task in any project is called the milestone of the project. An
important deliverable of any phase can also be termed as a milestone.
Every project has its own unique milestones. Some could be approval of the
requirements, approval of the trial product, approval of the design, packaging of
the product for shipment, shipping of the product manufactured to the customer,
and so on.
Milestones appear at the end of a work breakdown structure. The success of any
task can be measured by the milestones achieved.
Estimating the task duration is a very important part of project scheduling. It plays a
vital role in estimation of costs in the later stages.
Task estimation is done to stabilize customer relations and uphold team morale. As task estimation is affected by staffing and costing activities, it is done continually during the planning process.
Setting a realistic time frame for any task is very important during project scheduling, because task durations are often underestimated. Underestimation leads to a rush to complete the task.
The task priorities should be clearly defined to avoid unnecessary conflicts during
the execution of the project.
The critical path of any project is the longest path taken from the start to the end of
the project. The project’s earliest possible completion time is calculated by working
out the critical path.
As the resources are limited, anyone scheduling the process will have to deal with
the risks involved. In any good schedule, a scheduler has to make allowances for
the forthcoming or expected risks. The following can be done to remedy these
risks:
In cases where major risks are involved, an extra work breakdown structure for handling the risk should be included. Some funds should be allocated to deal with the risks.
Extra time should be allocated to the tasks in which risks are inherent.
Thus project definition involves identifying the area of improvement, identifying the
processes that are creating the problem, and determining the current level of
performance, how much the problem is costing the company, and lastly what will
be the financial benefit of undertaking the project.
CHAPTER 5
5 The Measure Phase
In the Define step, the Y has been established, and the problem statement has
been made by the Black Belt. The subsequent step in the Six Sigma methodology is to identify the key measures that are required to evaluate success and meet critical customer requirements. This is done by determining the important few factors that determine the performance or behavior of the process. It is initiated by examining the process to reveal the key process steps and the key inputs for each step. These key inputs are then prioritized, and the potential effects of this prioritized list on the CTx characteristics of the process are studied. The potential ways in which the process could go wrong are estimated.
The objectives of the Measure step can be summarized in the following steps:
1. Tools
The following is a description of some of the tools which help in process analysis.
These graphical tools serve as a better way to gain perception about the internal
workings and external effects of a process.
a. Flowcharts
A flow chart is a diagrammatic representation of the nature and the flow of work in
a process. The elements that can be included are: sequence of actions, inputs and
outputs entering or leaving the process, decisions that must be made, people who
become involved in the process, time durations at each step, etc. Representing a process in a flow chart, also known as a flow diagram, has numerous benefits.
A flow chart uses symbols. Each symbol has a specific meaning. These symbols are
connected by arrows which indicate the flow of the process. The symbols are
described below:
Oval - indicates the start and end point of the process. They usually contain the
words START and STOP, or END.
Diamond - represents a decision point in a flow chart. It has two arrows coming out
of it, corresponding to yes and no or true or false.
Circle - represents a place marker. It is used when the flowchart continues on another line or page. The symbol is numbered and placed at the end of the line or page.
On the next line or page, this symbol is used with the same number so a reader of the chart can follow the path.
A process map is an illustration of the flow of the process, showing the sequence of
tasks with the aid of flowcharting symbols. Process maps give a picture of how work
flows through the company. A process map may illustrate a small part of the
operation or the complete operation. It consists of a series of actions which change
some given inputs into the previously defined outputs.
The current ("as-is") process map helps to identify the problems in the existing system and to improve it. The expected ("to-be") process map explains each step of the improved process in detail.
During the mapping session a list of actions is also created. This list defines in detail
the changes required to change the process from the current map to the expected
map.
A value added flow chart is a method to improve on the cycle times and eventually
productivity, by visually sorting out value-adding steps from non-value-adding steps
in a process. It is a very simple yet effective way. The steps are described below:
1. List all the steps involved in the particular process. To do this, draw a diagram
box for every step from start to the end.
2. Determine the time currently required for the completion of every step of the process. Add this time to each box. (See Figure 24.a)
3. Determine the total cycle time by adding the time taken by each step.
4. Some of the steps listed above do not add any value to the process. Such steps include inspecting, checking, revising, stocking, transporting, delivering, etc.
5. Shift such boxes (as explained above) to the right of the flow chart. (See Figure
24.b)
6. Determine the total non-value added time by adding the time taken by each non
value adding step.
7. Some of the steps listed above add value to the product. Such steps include assembling, painting, stamping, etc.
8. Shift such boxes (as explained above) to the left of the flow chart. (See Figure
24.b)
9. Determine the total value added time by adding the time taken by each value
adding step.
10. Construct a pie chart to display the percentage time taken by non-value adding
steps. (See Figure 24.c)
11. Using benchmarking and analysis, decide the target process configuration.
12. Pictorially represent the target process and calculate the total target cycle time
(See Figure 24.d)
13. Explore the non value adding steps and identify the procedures which could be
trimmed down or can be done away with to save time.
14. Explore the value adding steps and identify the procedures which could be
improved to reduce the cycle time.
15. Make a flow chart of the improved process. Keep looking for further opportunities for improvement in the process till the target is achieved. A minimal calculation of the value-added and non-value-added times is sketched below.
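The arithmetic behind steps 2 to 10 can be sketched in a few lines of Python. The step names, durations and value-added classifications below are hypothetical, purely for illustration:

# Minimal sketch: separating value-adding from non-value-adding time in a process.
# The steps, their durations (minutes) and their classification are hypothetical.
steps = [
    ("Assemble base", 12.0, True),      # (name, minutes, adds_value)
    ("Inspect base", 4.0, False),
    ("Paint housing", 8.0, True),
    ("Move to warehouse", 6.0, False),
    ("Stamp serial number", 2.0, True),
    ("Rework defects", 5.0, False),
]

total_cycle_time = sum(minutes for _, minutes, _ in steps)
value_added_time = sum(minutes for _, minutes, adds_value in steps if adds_value)
non_value_added_time = total_cycle_time - value_added_time

print(f"Total cycle time:     {total_cycle_time:.1f} min")
print(f"Value-added time:     {value_added_time:.1f} min")
print(f"Non-value-added time: {non_value_added_time:.1f} min "
      f"({100 * non_value_added_time / total_cycle_time:.0f}% of cycle time)")

The percentage printed at the end is the figure that would normally be shown in the pie chart of step 10.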
5.1 SIPOC, Box Whisker Plots, Cause and Effect Diagrams and Check Sheets
c. SIPOC
SIPOC is a tool that helps to define the specific portion of the overall business
process that is targeted for improvement. It is a method of applying mapping to
sub processes until arriving at that part of the process allocated for improvement. It
is often used at the start of the project, when the key elements of the project have
to be defined. It is also used when information is not clear about what the process
inputs and outputs are, and who the suppliers and customers are.
1. Set up a working surface that will enable the team to post additions to the SIPOC
diagram. This can be a flipchart, sticky notes posted to a wall or a transparency.
2. Identify the process. Create a macro process map; map it into four or five high
level steps.
3. Identify all outputs of the process. Attach them on the working surface.
4. Identify the customers who receive these outputs. Record them separately.
5. Identify the inputs that the process needs for it to function properly. Again attach
these separately.
6. Identify the input’s suppliers. Again record them separately.
7. Review the work to correct omissions, unclear phrases, duplications, etc.
8. Draw a complete SIPOC diagram.
d. Box and Whisker Plots
A box and whisker plot is a graph that summarizes the most important statistical
characteristics of a frequency distribution for easy understanding and comparison.
(Nancy R. Tague, 2005) A box and whisker plot, also known simply as a box plot,
looks like a box representing the central mass of the variation and has thin lines
which extend out on either side, called whiskers, representing the spread of the
distribution. The box plot is simple to construct but displays a good amount of
information; therefore it is a potent tool.
A box plot is used when analyzing the most important information about a batch of
data or when two or more sets of data have to be compared. It can also be used
when data on some other graph, like control charts, have to be summarized.
Steps in creating a box and whisker plot (Craig Gygi, Neil DeCarlo, Bruce Williams,
2005)
1. Rank the captured set of data measurements for the characteristic. Reorder the captured data from the least to the greatest values. Refer to the numbers as X1, …, Xn.
2. Find the median of the data. The median is the observation value in the ordered data where half the values are larger and half are smaller. If the number of observations (n) in the data is odd, the median is the ((n+1)/2)th value: Median = X(n+1)/2. If the number of observations (n) is even, the median is the average of the two middle values, i.e., the (n/2)th and the (n/2 + 1)th values.
3. Find the first quartile, Q1. The first quartile is the point in the rank-ordered sequence below which 25% of the observations fall.
4. Find the third quartile, Q3. This is the point in the rank-ordered sequence below which 75% of the observations fall.
7. Create a horizontal line, representing the scale of measure for the characteristic.
This scale could be minutes for time, number of defects in an inspected part,
centimeters for length etc.
8. Construct the box. Draw a box spanning the first quartile Q1 to the third quartile Q3. Draw a vertical line in the box corresponding to the calculated median value.
9. Draw the whiskers. Draw two horizontal lines, one stretching out from Q1 to the smallest observation Xmin, and another extending from Q3 to the biggest observation Xmax.
10. Repeat Steps 1 through 9 for each additional characteristic to be plotted and compared against the same horizontal scale. (A short computational sketch of these box plot statistics is given below.)
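The quartile and median calculations in steps 1 to 4 can be checked quickly in software. A minimal Python sketch using NumPy is shown below; the measurement values are hypothetical, and note that software packages may use slightly different quartile conventions:

import numpy as np

# Hypothetical measurements of a process characteristic (e.g., handling times in minutes)
data = np.array([3.1, 4.7, 2.8, 5.2, 4.1, 3.9, 6.3, 4.4, 3.5, 5.0])

x_min, x_max = data.min(), data.max()
q1, median, q3 = np.percentile(data, [25, 50, 75])

print(f"Min = {x_min}, Q1 = {q1}, Median = {median}, Q3 = {q3}, Max = {x_max}")
# The box spans Q1 to Q3 with a line at the median; the whiskers run
# from Q1 down to the minimum and from Q3 up to the maximum.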
Box and whisker plots are used to compare two or more distributions, for example views of a process before and after a change. To find out whether two or more variation distributions are the same or different, a box plot can be created for each. From Figure 26.a it can be seen that distribution Q has the lowest level, but it still overlaps the performance of distribution P, meaning that it may not be all that different. Distribution R, however, does not overlap with P and Q, has a much higher value, and also has a much broader spread to its variation.
e. Cause and Effect Diagrams
Any process improvement initiative entails fighting the causes of variation. There
can be a huge number of causes for variations to a given problem. Dr. Ishikawa is
credited for creating the cause and effect diagram, which is a tool that is used for
brainstorming possible causes of a problem in a graphical (tree structured) format.
It is also known as the Fishbone diagram and the Ishikawa diagram. This technique is called the fishbone diagram because it resembles the skeleton of a fish.
A fishbone diagram helps in getting to the root cause of the problem. It consists of
a fish head at one end of the diagram- which states the problem. Besides this fish
head, there is a fish spine and there are bones attached to the spine. The bones
attached to the spine state the reasons which are causing the problem.
Define the problem: Write down the exact problem in detail. It should be stated in a box, called the fish head. After stating the problem, draw a horizontal line across from the box; this is the fish spine.
Brainstorm the causes: Attach slanting lines, called the bones of the fish, to the fish spine. These bones state the causes because of which the problem occurred. Write down as many possible causes as could be involved. The major categories typically involved are manpower, machines, methods, materials, measurements and environment (the '6 Ms').
Further brainstorm the ‘brainstormed’ causes: Sketch smaller lines coming out of the larger bones to depict the possible causes within each category that may be affecting the problem. This helps break a complex problem down into smaller problems. Repeat this step until a cause can no longer be broken down into sub-causes.
Analyze the fishbone diagram: Finally, analyze the diagram and draw conclusions by identifying the most likely root causes.
Example: Suppose an MNC dealing in the home delivery of pizzas wants to find out the various causes that are leading to a fall in its customer base. It depicts the problem graphically, by putting the problem and the causes into a fishbone diagram.
The following is the fishbone diagram, tailored to the “pizza home delivery”
example:
In the fish head, the pizza problem has been defined. The main causes leading to the problem are listed along the fish bones. The causes are then further broken down into more specific sub-causes; for example, the cause “pizza not delivered on time” has been further categorized into sub-causes that explain why the pizza could not be delivered on time. The reason could be any one of these: traffic congestion, the scooter’s tire was punctured, or the delivery boy could not locate the address easily.
f. Check Sheets
A check sheet is the most common tool for collecting data. It is a
structured form consisting of a list of items for collecting and analyzing data. It
helps display the frequency of the data. It contains pre-recorded descriptions of
events that are likely to occur. A well thought out check sheet consists of questions
like: “Is everything done?” “How often does the problem occur?” “Have all
inspections been performed?”
Check sheets are tremendously useful for solving a problem and for process-
improvement. Data collected in a check sheet can be used as inputs for other tools
such as Pareto diagrams and histograms. They can be in the form of:
process check sheets where ranges of measurement values are written and actual
observations are marked
defect check sheets where defects are described and frequencies are recorded
defect location check sheets which are actual diagrams that show where the
problem occurs
cause and effect check sheets in which the problem area is shown by marking that
area in the cause and effect diagram
Each time the targeted event takes place, record it on the check sheet.
Example 1: The following table represents a defect check sheet in the delivery
process of a pizza manufacturing chain.
Example 2: The following figure shows a check sheet used by HR to collect data on
causes of increasing attrition rates in a BPO.
It is clear from the data collected that slow growth and high stress levels
contributed to high attrition levels in one month. This data can be used for further
analysis.
Stem and Leaf Plots
Stem and leaf plots are a quick way to examine the shape and spread of the data. It
is a graphical method of displaying data. It is a type of histogram that displays
individual data.
Example: The following data shows the weights of male football players in the
National Football League, 2005.
143, 145, 149, 158, 159, 164, 167, 168, 167, 178, 170, 172, 178, 174, 180, 185, 194,
193, 192, 200, 209, 205, 203, 204, 206, 218, 215, 225, 228, 229, 226
The following table shows a stem and leaf display of the data. To draw a stem and leaf display, first note that the data ranges from the 140s to the 220s, counting by tens. Write down a column of stems, starting with 14 (representing the weights in the 140s) and ending with 22 (representing the weights in the 220s). Draw a vertical line separating the stems from the leaves. The leaves represent the multiples of 1.
The next step is to enter all raw scores into the stem and leaf display. A weight of
145 would be recorded by placing a 5 against the stem 14; a weight of 205 would be
recorded by placing a 5 against the stem 20. Continue this process until all the data
is plotted. The resulting display is like the shape of a histogram plotted by the same
data.
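A stem and leaf display of this kind can also be generated programmatically. The short Python sketch below reproduces the stems-and-leaves layout for the football player weights listed above (tens digits as stems, units digits as leaves):

from collections import defaultdict

weights = [143, 145, 149, 158, 159, 164, 167, 168, 167, 178, 170, 172, 178, 174,
           180, 185, 194, 193, 192, 200, 209, 205, 203, 204, 206, 218, 215, 225,
           228, 229, 226]

stems = defaultdict(list)
for w in sorted(weights):
    stems[w // 10].append(w % 10)      # stem = tens, leaf = units digit

for stem in sorted(stems):
    leaves = "".join(str(leaf) for leaf in stems[stem])
    print(f"{stem:2d} | {leaves}")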
For example: (1) the mean of the three values 3, 5 and 8 is 5.33.
(2) The employee benefits, like travel allowances, sick leave, healthcare costs and so on, used by the employees of an organization in a given fiscal year.
(3) A study of customer call handling time in a BPO in a particular process in a given
month. Conclusions can be made about the average handling time of a sample of
selected customers. Questions like why processing time varies for every customer,
or are different processes facing the same problem are not addressed in an
enumerative study.
For example, children of ages 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 play certain
computer games. Analytical studies would infer that the average age of all children
who play that game is 11.
A population parameter is a numerical characteristic of the entire population, such as the population mean or standard deviation. Its exact value is usually unknown and must be estimated from sample statistics.
A sample statistic is a statistical property of a data set, such as the mean or standard deviation of the sample. The value of the statistic is known with certainty because it is calculated using all the items of the data set.
(1) An enumerative study is defined as a study in which action will be taken on the universe. “Universe” is defined as the entire group of people, items, or services under a particular study. Sampling a selected lot to determine the nature of the defects of the entire lot is a case of an enumerative study. Enumerative studies draw conclusions about the universe actually studied. The aim of this study is the estimation of parameters. This is the deductive approach, and it involves counting techniques for huge numbers of possible outcomes.
2. Sampling Distributions
Most Six Sigma projects involving enumerative studies deal with samples, and not
populations. Some common formulae that are of interest to Six Sigma are given
below.
1. The empirical distribution assigns the probability 1/n to each Xi in the sample. Thus the mean of the distribution is (X1 + X2 + … + Xn)/n, i.e., the sample average.
4. Another sampling statistic is the standard deviation of the average, also called the standard error (SE). This is given by the following formula: SE = σ/√n, where σ is the standard deviation of the population and n is the sample size.
It is evident from the above formula that the standard error of the mean is
inversely proportional to the square root of the sample size. This relationship is
shown in the graph below:
It is seen that averages of n=4 have a distribution half as variable as the population
from which the samples are drawn.
Irrespective of the shape of the distribution of the population or the universe, the distribution of the average values of samples drawn from that universe will tend toward a normal distribution as the sample size grows large; equivalently, the standardized sample average tends toward the standard normal distribution, i.e., with mean 0 and standard deviation 1. (Thomas Pyzdek, 1976)
In other words, the distribution of an average tends to be normal, even when the
distribution from which the average is calculated is definitely not normal. A remarkable
thing about this theorem is that no matter what the shape of the original
distribution is, the sampling distribution of the mean approaches a normal
distribution.
Furthermore, the average of sample averages will have the same average as the
universe, and the standard deviation of the averages will be equal to the standard
deviation of the universe divided by the square root of the sample size.
The central limit theorem has many practical implications. The Central Limit
Theorem provides the basis for many statistical process control tools, like quality
control charts, which are used widely in Six Sigma. By the Central Limit Theorem, the means of even small samples can be evaluated using the normal distribution, even when the individual measurements are not normally distributed.
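The Central Limit Theorem is easy to demonstrate by simulation. The Python sketch below draws repeated samples from a deliberately non-normal (exponential) population and shows that the averages cluster around the population mean with a spread close to σ/√n; the population parameters and sample size are chosen arbitrarily for illustration:

import numpy as np

rng = np.random.default_rng(1)

n = 25                                                   # sample size
# 5,000 samples of size n from an exponential population (mean 10, std 10)
samples = rng.exponential(scale=10.0, size=(5_000, n))
sample_means = samples.mean(axis=1)

print(f"Mean of the sample averages: {sample_means.mean():.2f} (population mean = 10)")
print(f"Std dev of the averages:     {sample_means.std():.2f} "
      f"(predicted sigma/sqrt(n) = {10 / np.sqrt(n):.2f})")
# A histogram of sample_means is close to a normal curve even though the
# individual exponential observations are strongly skewed.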
The statistical methods described in the preceding section are enumerative. In Six
Sigma applications of enumerative statistics, inferences about populations based
on data from samples are made. Statistical inference is concerned with decision
making. For example, sample means and standard deviations can be used to
foretell future performance like long term yields or possible failures.
The sample statistics discussed above: sample mean, sample standard deviation,
and sample variance are point estimators. These are single values used to
represent population parameters. An interval about the statistics that has a
predetermined probability of including the true population parameter can also be
found out. This interval is called the confidence interval or confidence limits.
Confidence intervals can be both one-sided and two sided.
For example, if the mean income in a sample is $6000, it may be desirable to know the interval in which the population mean income probably lies. This is expressed in terms of confidence limits.
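As an illustration of confidence limits, the following Python sketch computes a two-sided 95% confidence interval for a mean from a small sample using the t distribution; the income figures are hypothetical:

import numpy as np
from scipy import stats

incomes = np.array([5800, 6200, 5900, 6400, 6100, 5700, 6300, 6000, 5950, 6150])

n = len(incomes)
mean = incomes.mean()
se = incomes.std(ddof=1) / np.sqrt(n)          # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)          # critical value for 95%, two-sided

lower, upper = mean - t_crit * se, mean + t_crit * se
print(f"Sample mean = {mean:.0f}; 95% confidence interval = ({lower:.0f}, {upper:.0f})")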
Six Sigma uses analytic statistics most of the time, but sometimes enumerative
statistics prove useful. Analytic methods are used to locate the fundamental
process dynamics and to improve and control the processes involved in Six Sigma.
C. Collecting and Summarizing Data
The data collection plan is built while measuring the process. A process can be
improved by studying the information gathered from data collected from the actual
process. This data collected has to be accurate and relevant to the quality issue
being taken up under the Six Sigma project. Any data collection plan includes:
A brief overview of the project, along with the problem statement (stating why the
data has to be collected)
A list of questions, which should be answered by the data collected
Determining the data type which is suitable for the data the process is generating
Determining how many data points will be enough to show the change in the chart
A list of the measures to be taken, once the data has been collected
The name of the person who will be collecting the data and when
A good data collection plan facilitates the accurate and efficient collection of data.
After the data is collected, it must be determined what kind of data the particular process yields. Before measuring the data, it is necessary to know the type of data you are analyzing so that you can apply an appropriate tool to it.
The following section gives the definitions and classification of data. After studying
the data, it becomes essential to identify opportunities to transform the attribute
data to variable measures.
1. Types of Data
No two things are exactly alike; therefore there are inherent differences in the data.
Each characteristic under study is referred to as a variable. In Six Sigma, these are
known as CTQ or critical to quality characteristics for a process, product or service.
Attribute (Discrete) Data: Attribute data, also known as discrete data, can take on
only a finite number of points. Typically such data is counted in whole numbers.
Attribute data cannot be broken down into smaller units. For example, the number of family members cannot be 4.5; fractional values carry no additional meaning for such data. The number of defects in a sample is another example of discrete data.
Some other examples of attribute data are the number of defective units in a lot, the number of customer complaints received per month, and pass/fail inspection results.
Variable (Continuous) Data: Variable data, also known as continuous data, is data which can have any value on a continuous scale. Continuous data exists on an interval, or on several intervals. Variable data can take almost any numeric value and can be meaningfully divided into finer and finer increments, depending upon the precision of the measurement system.
For example: The height of a person on a ruler can be read as 1.2 meters, 1.05
meters or 1.35 meters.
The important distinction between attribute data and variable data is that variable
data can be meaningfully added or subtracted, while attribute data cannot be
meaningfully added or subtracted.
2. Scales of Measurement
The next step in data collection is to define and apply measurement scales to the
data collected.
The idea behind measurement is that improvement in a process can begin only when quality is measured or quantified. Essentially, a numerical assignment to a non-numerical element is called measurement. Measurements communicate certain information about the relationship between one element and other elements.
a. Nominal Scale: This is the simplest and weakest kind of measurement. It is a form of classification, showing only the presence or absence of an attribute. The data collected on a nominal scale is attribute data, for example, success/fail, accept/reject, correct/incorrect.
Nominal measurements can represent a membership or a designation like
(1=female, 2=male). The statistics used in nominal scale are percent, proportion,
chi-square tests etc.
b. Ordinal Scale: This scale has a natural order of the values. This scale can express
the degree of how much one item is more or less than another. But the space
between the values is not defined. For example, product popularity rankings can be
high, higher, and highest. Product attributes can be taste, or attractiveness. This
scale can be studied with mathematical operators like =, ≠, <, >.
Statistical techniques can be applied to ordinal data like rank order correlation.
Ordinal data is converted to nominal data and analyzed using Binomial or Poisson
models in quality improvement models like Six Sigma.
d. Ratio Scale: In this scale, measurements of an object in two different metrics are
related to one another by an invariant ratio. (Thomas Pyzdek, 1976). For example, if an
object’s mass was calculated in pounds (x) and kilograms (y), then x/y = 2.2 for all
values of x and y. This means that a transformation from one ratio measurement
scale to another is executed by a transformation of the form y = ax, a >0, e.g.,
pounds = 2.2 × kilograms. 0 has a meaning here, it means an absence of mass.
Another example is temperature measured in kelvin: there is no value possible below 0 K. Likewise, a weight of 0 lb means a meaningful absence of weight.
Statistical techniques can be applied to ratio measurements like correlations,
multiple regression, T-tests, and F-tests.
5.3 Methods for Collecting Data (application)
Data constitute the foundation for statistical analysis. Data from the process which
has to be analyzed can be collected by applying tools such as:
1. Check Sheets: These are the most common tool for collecting data. They permit
the user to collect data from a process in an easy and systematic manner. (See the
previous section)
2. Control Charts: These are graphs used to study how a process changes over
time. Through control charts, current data can be compared to historical control
limits and conclusions can be drawn on whether the process is in control, or
displays variation (out of control) due to special causes of variation. (To read more on
control charts, see: Chapter 8- Six Sigma, Control)
4. Survey: Sample Surveys are data collected from various groups of people to
gather information about their knowledge or opinion about a product or
process. (To read more on surveys, see: Chapter 2: Business Process Management)
Coding Data
Sometimes the data variables need not be coded. If you are using weight or age as a variable of interest, the age or weight itself can be used. Coding becomes necessary when the analysis cannot use the values as they are. For
example, when you have to code the group of responses “< 18 years” , “18 to 30
years ”, “> 30 years” etc., you can use <18 years = 1, 18 to 30 years = 2, and so on.
Therefore for each numeric variable to be analyzed, either actual values or coded
values are used.
Binary Coding
When the data is qualitative, or numerical observations are not available in the given data, attributes are used. To characterize such data, binary coding is sometimes used. If a certain characteristic or event that needs to be checked is present in the data, it is denoted by 1; if it is absent, it is denoted by 0. This can be shown as: X = 1 if the characteristic is present, X = 0 if it is absent.
For example, if the efficiency of workers is defined such that those who work for 8 hours are efficient, then X (efficiency) will be 1 if a worker works for 8 hours every day and 0 if he works for less than 8 hours a day.
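Coding of this kind is easy to do in software. The Python sketch below applies the binary efficiency coding and the grouped age coding described above; the names, hours and responses are hypothetical:

# Binary coding: 1 if the worker puts in 8 or more hours a day, 0 otherwise
hours_worked = {"Amit": 8.0, "Beth": 6.5, "Carlos": 8.5, "Dina": 7.0}
efficiency = {name: int(hours >= 8.0) for name, hours in hours_worked.items()}
print(efficiency)       # {'Amit': 1, 'Beth': 0, 'Carlos': 1, 'Dina': 0}

# Coding grouped responses such as "< 18 years", "18 to 30 years", "> 30 years"
age_codes = {"< 18 years": 1, "18 to 30 years": 2, "> 30 years": 3}
responses = ["18 to 30 years", "> 30 years", "< 18 years"]
print([age_codes[r] for r in responses])   # [2, 3, 1]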
It is necessary to select the sampling method according to how the sample data is
going to be used. There is no strict norm as to which sampling plan will be
employed for data collection and analysis; a decision has to be made on the basis
of experience and needs of the data collector. The following is a guideline on a few
sampling techniques. Every sampling method has been developed for some specific
purpose.
Simple Random Sampling
In simple random sampling, each element in the sample space has an equal chance
of getting selected in the sample. Hence the probability of any event can be
determined by listing all the possible units in the sample space.
Stratified Sampling
In stratified sampling, the population is divided into relatively homogeneous subgroups (strata) and a random sample is drawn from each stratum. The person using the sample data must be conscious of the presence of the stratified groups, and must document the report such that the interpretations are relevant only to the sample selected and may not represent the universal population.
Systematic Sampling
In this sampling technique, every nth element is selected from the sample space. The sampling interval, n, is calculated as n = population size / required sample size.
For example, if there are 2000 items in the sample space and the required sample size is 50, then 2000/50 = 40; hence every 40th item will be selected.
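Both simple random sampling and systematic sampling can be sketched in a few lines of Python using the standard library; the sample space of 2000 items and sample size of 50 mirror the example above:

import random

random.seed(7)
sample_space = list(range(1, 2001))        # 2,000 items
sample_size = 50

# Simple random sampling: every item has an equal chance of selection
simple_random_sample = random.sample(sample_space, sample_size)

# Systematic sampling: pick every nth item, n = population size / sample size
interval = len(sample_space) // sample_size           # 2000 / 50 = 40
start = random.randrange(interval)                    # random starting point
systematic_sample = sample_space[start::interval]

print(len(simple_random_sample), len(systematic_sample))   # 50 50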
Clustered Sampling
In clustered sampling, all the units are grouped into clusters and a number of
clusters are selected randomly to represent the total population. Then all units
within selected clusters are included in the sample. Ideally, the elements within each cluster are heterogeneous, so that each cluster is a small-scale representation of the total population, while the clusters themselves are broadly similar to one another.
The difference between cluster sampling and the stratified sampling is that in
cluster sampling, each cluster is treated as the sampling unit and hence analysis is
done on the number of clusters; whereas in stratified sampling, the analysis is done
on elements within strata.
5. Descriptive Statistics
Descriptive statistics are used to explain the properties of distributions of data from
samples. The following section describes the more frequently used descriptive
statistical measures.
The measures of central tendency are the various ways of describing a single
central value of the entire mass of data. The central value is called average. The two
main objectives of the study of averages are:
i. to get a single value that describes the characteristic of the entire group.
ii. to facilitate comparison.
Three averages: mean, median and mode are of interest to Six Sigma.
Mean: Arithmetic mean or simply mean is the value obtained by the sum of all data
values divided by the total number of data observations. It is the most widely used
measure of central tendency.
Population Mean: μ = ΣX / N, where N is the number of items in the population.
Sample Mean: x̄ = Σx / n, where n is the number of items in the sample.
Median: The median refers to the middle value in a distribution of a data set. One
half of the items in the data set have a value the size of the median value or
smaller, and one half has a value the size of the median value or larger. It splits the
data into two parts. It is to be noted that the median is the average of the middle
two values for an even set of data items.
Mode: The mode or modal value is that value in a series of data that occurs with
the highest frequency. It is possible for data sets to have more than one mode.
While this statement is pretty helpful in interpreting the mode, it cannot be applied
safely to any distribution because of the erratic nature of sampling. Rather, mode
should be thought as the value around which the data items are most closely
concentrated. It is the value which has the most frequency density in its immediate
neighborhood.
Measures of Dispersion
The measures of central tendency give one single figure that represents the entire
data set. But it becomes necessary to describe the variability or dispersion of
observations because average alone cannot give an adequate description of the
data set. Measures of dispersion help in describing the spread of dispersion. The
dispersion, (also known as scatter, spread or variation) measures the extent to
which the items vary from some central value.
Range: The range of a set of data values is the difference between the largest and smallest values.
R = Largest - Smallest
Variance, Standard Deviation: The variance is the sum of squared deviations from the mean divided by the number of observations (n − 1 is used in the denominator when the variance is estimated from a sample). The standard deviation is the square root of the variance.
The Coefficient of Variation (COV): This is equal to the standard deviation divided
by the mean and is expressed as a percentage.
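All of these descriptive measures are available in standard software. A minimal Python sketch using the standard statistics module is shown below; the data values are hypothetical:

import statistics

data = [12, 15, 11, 15, 14, 18, 15, 13, 16, 12]

mean = statistics.mean(data)
median = statistics.median(data)
mode = statistics.mode(data)
data_range = max(data) - min(data)
variance = statistics.pvariance(data)       # population variance
std_dev = statistics.pstdev(data)           # population standard deviation
cov = 100 * std_dev / mean                  # coefficient of variation, in percent

print(f"mean={mean}, median={median}, mode={mode}, range={data_range}")
print(f"variance={variance:.2f}, std dev={std_dev:.2f}, COV={cov:.1f}%")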
1. The normal distribution has a skewness of zero; zero signifies perfect symmetry.
2. Positive skewness signifies that the tail of the distribution is more extended on the side above the mean.
3. Negative skewness signifies that the tail of the distribution is more extended on the side below the mean.
Probability Distributions: Probability distributions are relative frequency distributions when the number of observations is made very large. These are the distributions of the probability of random variables. The random variables may be continuous or discrete in nature. For continuous random variables, a probability density function (p.d.f) is used and for discrete random variables, a probability mass function (p.m.f) is used.
Probability Density Function (p.d.f): The p.d.f describes the behavior of a random variable in the continuous case. For normally distributed data it forms a bell-shaped distribution.
When the random variables are normally distributed, the p.d.f is symmetric with mean = mode = median, meaning they are all at the same point. Mathematically, if f(x) is a continuous function of the random variable x that is never negative, i.e., f(x) ≥ 0, and the total area under f(x) equals 1, then f(x) is a p.d.f.
Cumulative Distribution Function (c.d.f): The c.d.f represents the area under the probability distribution function to the left of X. The c.d.f is used for both continuous and discrete data. It is denoted by F(x) = P(X ≤ x). For continuous data, F(x) is the integral of the p.d.f from −∞ to x; for discrete data, F(x) is the sum of the p.m.f over all values less than or equal to x.
6. Graphical Methods
Graphical methods of data analysis include box plots, stem and leaf plots, run
charts or trend charts, scatter diagrams, histograms, normal probability plots,
Weibull plots. Data constitute the foundation for statistical analysis. The best way to
analyze data and measure a process is with the help of charts, graphs, or pictures.
Charts and graphs are the most commonly used tools for displaying and analyzing
data as they offer a quick and easy way to visualize what the data characteristics
are. They show and compare changes and relationships.
2. Stem and Leaf Plots display the same variation information as histograms and are useful for smaller data sets (n < 200). (See topic: Process Analysis and Documentation in the previous section.)
3. Trend Charts/Run Charts
Trend charts (also known as run charts) are typically used to display different trends in data over time. A trend chart is actually a quality
improvement technique and is used to monitor processes. A goal line is also added
in the chart to define the target to be achieved. One of the main advantages this
chart offers is that it helps in discovering patterns that occur over a period of time.
Trend Charts should be used for introductory analysis of continuous data or data
arranged in a time order. A trend chart of continuous data should be drawn before
doing other analysis. Analysis of run charts is used to find out if the patterns in the
data have developed because of common causes or special causes of variation.
Answers to questions like “Was the process under statistical control for the
observed period” are provided by the run chart. If the answer is no, then there
must have been special causes of variation that affected the process. If the answer
is yes, then process capability analysis can be used to approximate the long term
performance of the process (See topic: Process Capability Analysis)
A run chart should not be used if more than 30% of the data values are the same. Also, run charts are less sensitive than SPC control charts: they cannot detect single points which are characteristically different from the others, and hence they may not detect special causes of variation in spite of their presence.
The various steps involved in creating a trend chart are:
Data gathering: The data should be collected over a period of time and it should
be gathered in a chronological manner. The data collection can start at any point
and end at any point.
Data organizing: The collected data is then integrated and is divided into two sets
of values, i.e., x and y. The values for ‘x-axis’ represent time, and the values for ‘y-
axis’ represent the measurements taken from the source of operation.
Preparing the chart: The y values versus the x values are plotted, using an
appropriate scale that will make the points on the graph visible. Next, vertical lines
for the x values are drawn to separate time intervals such as weeks. Horizontal
lines are drawn to show where trends in the process, or in the operation, occur or
will occur.
Interpreting the chart: After preparing the chart, the data is interpreted and
conclusions are drawn that will be beneficial to the process or operation.
Example: Suppose you are a new manager in a company and you are disturbed by the trend of certain employees coming in late. You have decided to monitor the employees’ punctuality over the next four weeks, noting down how late they are each day (on an average basis) and then constructing a trend chart.
Data Gathering: Cluster the data for each day over the next four weeks. Record
the data in an ordered manner as shown in the following:
Organizing Data: Determine what should be the values on x-axis and what should
be the values on y-axis. Assume day of the week on the x-axis and time on the y-
axis.
Preparing the chart: Plot the y values versus the x values on a graph sheet (on
paper) or using another computer tool like Excel or Minitab. Draw horizontal or
vertical lines on the graph where trends or deviations occur.
Interpreting Data: Conclusions can be drawn once the trend chart has been prepared. Results can then be interpreted by the analysts in the analysis phase. It is very clear from the chart above that employees usually take more time to reach the office on Mondays.
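A run chart of this kind can be drawn with any charting tool. The Python/matplotlib sketch below plots hypothetical "minutes late" data for 20 working days, with a goal line and the overall average added for reference:

import matplotlib.pyplot as plt

days = list(range(1, 21))
minutes_late = [14, 6, 5, 7, 4, 16, 8, 5, 6, 3, 15, 7, 6, 5, 4, 17, 9, 6, 5, 4]

plt.plot(days, minutes_late, marker="o")
plt.axhline(sum(minutes_late) / len(minutes_late), linestyle="--", label="average")
plt.axhline(5, linestyle=":", label="goal")
plt.xlabel("Working day")
plt.ylabel("Average minutes late")
plt.title("Run chart of employee lateness")
plt.legend()
plt.show()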
4. Histograms
The shapes of histograms vary depending on the choice of the size of the intervals.
The horizontal axis depicts the range and scale of observations involved. The
vertical axis shows the number of data points in various intervals, i.e., the
frequency of observations in the intervals. The values on the horizontal axis are
called the upper limits (intervals) of data points.
Uses of a Histogram
A histogram makes it easy to see the scattering of data (the dispersion and central
tendency) and thus it becomes clear where the variable occurs in a critical state. It
makes comparison of the distribution to process requirements easy.
Histograms are also used as a quality control tool, in the analysis of and in finding possible answers to quality control problems. But histograms should be drawn along with control charts or run charts because histograms do not reveal ‘out of control’ processes, as they do not show the time sequence of the data.
When data is obtained from different sources, the data can be stratified by plotting
different histograms.
A histogram is an efficient tool which can be used in the early phase of data analysis. For a better analysis, it is combined with the concept of the normal curve. A few questions are generally used to interpret the histogram, for example: Is the distribution centered near the target? Is the spread acceptable relative to the specifications? Are there outliers or multiple peaks?
Disadvantages of Histograms
Histograms are an important tool in the initial phase of data analysis due to the ease with which they can be created. But in statistical process control, a histogram does not give any clue about how the process was operating at the time of data collection.
In the example discussed previously about employees who come late, the
histogram can show how the data is dispersed (on a daily basis) for the duration of
a month:
5. Scatter Diagrams
6. Probability Plots
Probability plots are a graphical technique to check which distribution (e.g. normal,
Weibull etc.) a particular data set is following. This technique is used to verify the
collected data against any known distribution. A probability plot shows the
probability of a certain event occurring at different places within a given time
period. Each sample is selected in such a manner that each event within the sample
space has a known chance of being selected. While sampling for any event, every
observation from which the sample is drawn has a known probability of being
selected into the sample.
Probability plots give a better insight into the physical environment of a process.
With moderately small samples, probability plots produce reasonable results.
Probability plots show estimates of process yields.
When plotted on a graph, these events usually bunch around the mean, which
occurs in a Bell curve (See topic: Basic Process Capability). This theoretical
distribution of events allows the calculation of the probability of a certain event
occurring in the sample space.
A probability plot consists of a center line and two outer bands, one above the
center line and one below it. The nearer the data points are to the center or middle
line, the better it is thought to fit the distribution. If all the points lie within the two
outer bands then the data set is thought to be a good fit to the probability model
being used.
A straight line in a probability plot indicates that the data set follows that particular distribution, while a bend in the plot suggests that the data set comes from more than one distribution.
The positive aspect of a probability plot is that the data need not be divided into intervals; probability plots also work better for a smaller number of data points. On the other hand, a probability plot requires that the correct probability distribution be used.
Goodness of Fit test is a type of statistical test where the legitimacy of one
hypothesis is tested without the specification of an alternative hypothesis.
1. To define a test statistic (some function of data measuring the distance between
the hypothesis and data) and
2. To calculate the probability of obtaining data which have a still larger value of this
test statistic than the value observed, assuming the hypothesis is true.
The result obtained is known as the size of test or the confidence level.
Probabilities which are less than 1% show a poor fit. Probabilities which are very close to 100% indicate a fit which is too good to occur very frequently and may be a sign of error.
The most common tests for goodness-of-fit are chi square test, Kolmogorov test,
Cramer-Smirnov-Von-Mises test, runs etc.
The Pearsonian chi square test is used to test whether an observed distribution conforms to any other distribution. The method consists of organizing the observations into a frequency table with classes. The formula is χ² = Σ (O − E)² / E, where O is the observed frequency and E is the expected frequency in each class.
(For more details on Goodness of Fit Tests, refer to chapter 6: Black Belt, Analyze)
Distributions reveal a lot of information about the data being studied. They reveal
the way in which probabilities are connected with the data numbers under
observation. Plots of the distribution shape can tell how probabilities change over a
range of values.
Binomial Distribution
The binomial distribution is used in situations where there are just two mutually exclusive outcomes of a trial.
It shows the probability of getting ‘d’ successes in a sample of 'n' taken from an
'infinite' population where the probability of a success is 'p'.
The equation which gives the probability of getting x defectives in a sample of n units is known as the binomial probability distribution. The formula for the binomial distribution is:
P(x) = [n! / (x!(n − x)!)] p^x q^(n − x)
where,
q = 1 − p = probability of failure, and p = probability of success
The binomial distribution is best utilized when the sample size is less than 10% of the lot size. Binomial probabilities can also be calculated using Microsoft Excel.
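For instance, the probability of finding a given number of defectives in a sample can be computed directly from the formula; in the hypothetical Python sketch below a sample of 20 units is drawn from a process running at 5% defective:

from math import comb

n, p = 20, 0.05
q = 1 - p

for x in range(4):                     # probability of exactly 0, 1, 2 or 3 defectives
    prob = comb(n, x) * p**x * q**(n - x)
    print(f"P({x} defectives) = {prob:.4f}")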
Poisson Distribution
Juran (1988) recommends using the Poisson distribution with a minimum sample
size of 16, the population size should be at least 10 times the sample size and the
probability of occurrence ‘p’ on each trial should be less than 0.1.
The probability of observing x occurrences is given by P(x) = ((np)^x e^(−np)) / x!, where n is the sample size, p is the probability of occurrence on each trial, and e = 2.7182818.
Normal Distribution
Two parameters determine the normal distribution. The mean (μ) locates the
centre of the distribution and standard deviation (σ) measures the spread of the
distribution.
The probability density function (p.d.f) for a continuous normal distribution is:
f(x) = (1 / (σ√(2π))) e^(−(x − μ)² / (2σ²))
where,
π = 3.1416 and e = 2.71828
Almost all of the probability lies within ±3σ of the mean, and graphing a normal distribution produces a bell-shaped curve.
The area under the curve represents the proportion of the process output that falls within a range of values. These values, which can also be obtained from the normal distribution table, are approximately 68.3% of output within ±1σ of the mean, 95.4% within ±2σ, and 99.7% within ±3σ.
The test statistic used for the normal distribution is z = (x − μ) / σ. This statistic follows the standard normal distribution with mean zero and variance one.
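The proportions quoted above can be reproduced from the standard normal distribution; the short Python sketch below uses the error function from the math module to compute the probability within ±k sigma of the mean:

from math import erf, sqrt

def proportion_within(k):
    # Proportion of a normal distribution lying within +/- k standard deviations
    return erf(k / sqrt(2))

for k in (1, 2, 3):
    print(f"within +/-{k} sigma: {100 * proportion_within(k):.4f}%")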
Chi-square Test
The chi-square test is a non-parametric test, and probably the most commonly used one. It is quite a flexible test and can be used in a number of circumstances:
Goodness of fit
Test for homogeneity
Test of independence
Unlike parametric tests, the chi-square test does not require the data sample to be normally distributed; parametric tests, by contrast, assume that the variable is normally distributed in the population from which the particular sample is taken.
The chi-square probability density function for ν degrees of freedom is f(χ²) = (χ²)^(ν/2 − 1) e^(−χ²/2) / (2^(ν/2) Γ(ν/2)), where e = 2.71828 and Γ is the gamma function.
Student’s T-Test
T-tests are used to compare two averages. They may be used to compare a variety of averages, such as the effects of weight reduction techniques used by two groups of individuals, complications occurring in patients after two different types of operations, or accidents occurring at two different junctions.
The t-test may be used when sample size is small i.e., less than 30. A t-test may be
calculated if the means, the standard deviation and the number of data points are
known. If raw data is used then these measures should be calculated before
performing the t-test.
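A two-sample t-test of this kind is a one-liner in most statistical packages. The Python/SciPy sketch below compares hypothetical weight-loss results (in kg) for two groups using different techniques:

from scipy import stats

group_a = [2.1, 3.4, 2.8, 3.0, 2.5, 3.6, 2.9, 3.1]
group_b = [1.8, 2.2, 2.0, 2.6, 1.9, 2.4, 2.1, 2.3]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. below 0.05) suggests that the two averages differ by
# more than chance variation alone would explain.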
F Distributions
The F distribution is used to calculate the probability values in ANOVA. It is the distribution of the ratio of two estimates of variance, i.e., the ratio of two chi-square distributions, each divided by its respective degrees of freedom.
Bivariate Distribution
This distribution describes the behavior of two Gaussian variables x and y. The
Bivariate normal distribution has 5 parameters, namely the two means µ1 and µ2, the standard deviations σ1 and σ2, and the correlation between the two variables, ρ. This is a three dimensional distribution.
Exponential Distribution
The shape of the distribution is determined by just one parameter, lambda (λ). Lambda is equal to 1/µ, where µ is the mean of the distribution.
Lognormal Distribution
The p.d.f of the lognormal distribution is f(x) = (1 / (xσ√(2π))) e^(−(ln x − μ)² / (2σ²)) for x > 0, where μ and σ are the mean and standard deviation of the variable's logarithm respectively.
Weibull Distribution
The p.d.f of the Weibull distribution can be written f(t) = (β/θ) ((t − t0)/θ)^(β−1) e^(−((t − t0)/θ)^β) for t ≥ t0. β here is the shape parameter, θ is the characteristic life or scale parameter, and t0 is the location parameter.
A constant failure rate suggests that items are failing from random events. Wear
out is suggested by an increasing failure rate; some parts are more likely to fail with
the passage of time.
The main concept in statistical process control (SPC) is that every measurable
phenomenon is a statistical distribution. This means an observed set of data is
made up of a sample of the effects of unknown chance causes. After everything has
been done to eliminate special causes of variation, a certain amount of variability
showing the state of control will always remain.
The three basic concepts of a distribution are location, spread, and shape. The location is the central value of the distribution, such as the mean. The spread of the distribution is the amount by which the values differ from one another, measured by statistics such as the standard deviation and variance. The shape of a distribution refers to patterns such as peakedness and symmetry; a distribution can be bell shaped, rectangular shaped, etc.
F. Measurement Systems
1. Measurement Methods
To know the organization’s position in the market in future, the present system has
to be measured. The measurement of processes, people involved, strategies
applied, products generated and performance help the organization to follow and
appraise each stage of the production process.
There are several measurement methods and instruments like gauge blocks,
attribute screens, micrometers, calipers, optical comparators, tensile strength,
titration, etc. (See topic: Design of Experiments)
In Six Sigma methodology, decisions are guided by the analysis obtained from the
measurements recorded. An error in a measuring system may result in an error in
the judgment taken by the management. The function of Measurement System
Analysis is to check the efficiency of a measurement system by evaluating its
accuracy, precision and stability.
This is a first step that precedes any kind of decision making. It is conducted before
collecting data to make sure that the measurements which will be subsequently
collected will be done without any bias. It will also ensure that the measurements
collected will be reproducible by the system used and will be the same if repeated
by other operators. It is done so that a reasonable conclusion about the quality of
the measurement system can be made.
The errors of a measurement system can be categorized into two groups: accuracy
and precision.
1. Accuracy refers to the difference between the measurement and the actual value.
2. Precision refers to the variation observed when the same thing is measured by
the instrument repeatedly.
Linearity, stability and bias refer to the accuracy of the measurement system.
Repeatability and reproducibility refer to the precision of the measurement system.
Three characteristics should be evaluated to check the precision of a measurement
system; statistical control of the measurement system, increment of the
measurement, and standard deviation of the measurement system.
Bias
Bias is the difference between the observed average of the measurements and the reference (true) value.
Linearity
Linearity is the change in bias over the normal operating range of the instrument.
Stability
It is the change in bias over time. It is the variation in the measurements recorded
by an appraiser of the same parts over a period of time using the same method. A
system is said to be stable if the results of the measurement system are same at
varying points of time.
Statistical stability is determined by the use of control charts. Control charts are
prepared and evaluated to determine the stability of any measurement system. A
stable measurement system will show no ‘out of control’ signals.
Repeatability
Repeatability is the variation observed when the same appraiser measures the same part repeatedly using the same instrument and method.
The tools that are chosen are, as a rule, determined by the characteristics of the measurement system itself.
3. Metrology
Calibration
Calibration of equipment helps the organization using it to identify the defects. The
process of evaluation of any instrument of measurement of an unverified accuracy
to an instrument of acknowledged accuracy to identify the variation from the
required performance is called calibration. It is done in order to ensure the tolerance of the equipment throughout its use. Calibration serves the following purposes:
To ascertain that the measurements taken by the instrument will be consistent throughout.
To ensure that the measurements taken by the instrument will be accurate.
To ascertain the reliability of the instrument.
Uncertainty of Measurement
It is very important not to confuse error with uncertainty. The two may seem alike but are quite different. Error refers to the difference between the true value and the measured value. Uncertainty refers to the quantification of the doubt about the measurement outcome.
Several factors contribute to the uncertainty of a measurement. Instrumental errors due to ageing, wear and tear, interference and noise (for electrical instruments) may cause uncertainty in measurement.
Measurement process
The process of measurement may not be user friendly and may result in the
subject not cooperating.
Calibration uncertainty
These can also be called imported uncertainties as the calibration of the instrument
can add to the uncertainty.
Operator skill
The dexterity and assessment of the person recording the measurement also add
to the uncertainty of the measurement.
Sample selected
The sample selected from the population should be a good representation of the
process to be assessed.
Environment
Process Capability
Process capability, as the name suggests, is the study of capability or the ability of a
process in any organization. This study answers the very basic question which is, “Is
this process capable or good enough?” Process capability refers to the capability of
any process to produce a defect free product when the process is in a state of
statistical control.
Process capability study can also be used to appraise the ability of any process to
meet the needed specifications. To put it simply, it answers this question, “How
capable is the process in terms of producing output within the specification limit?”
There are two ways of calculating capability- one is based on measuring the
variability of the process (the inherent variation within a sample) directly.
The other is based on counting the number of defects produced by the process.
DPMO is a common term used for ‘defect rate’ in Six Sigma. This method requires
extensive efforts to collect and analyze the data. The goal of such a study is to make
a projection of the number of defects expected from the process in the long term
by using a sample of data items from a process.
Process capability analysis can be performed with both attribute data and
continuous data, if the process is in statistical control. (See topic: SPC) Process
capability analysis done on processes that are not under statistical control
produces erroneous figures of process capability, and therefore should never be
performed. For example, if the means of successive samples fluctuate greatly, or
they are clearly off the target specification, these problems should be rectified first.
a. Bringing the process in a state of statistical control for a given period of time by
use of control charts.
b. Comparing to what extent the long-term process performance complies with the
management or engineering specifications.
In Six Sigma, process capability studies are done at the beginning and at the end
of a study to check the degree of improvement in the attained state. It studies the
capability of a process in ideal conditions over a short time, e.g. 2 hours to 24 hrs.
Process engineers are responsible for the process capability studies. Knowledge of
the short-term stability and capability at the end is one of the benefits of such a study.
Process performance studies on the other hand tell us about the long term
performance of a process. They determine the long term stability and capability of
the process by using the total process variability in the standard capability
computations.
As any organization has limited resources, it is very important to choose the right
process for improvement. Limited resources here means limited funds and limited
manpower which can be devoted totally to the improvement process; so all the
processes cannot go through the improvement process simultaneously. The tools
for selecting the process which needs to undergo improvement at the earliest are
Pareto analysis and fishbone diagrams.
Once the process has been selected the next important step is to define the scope
of the process. A process is not only the manufacturing process but it involves
everything. It is a combination of machines, tools, methods, and the people
involved in the particular process. Every factor involved in the process has to be
identified at this step of the process capability study.
As this study aims at the improvement of the process, it disrupts the normal
functioning of the process. This kind of study also involves considerable
expenditure both in terms of manpower and material. Every requirement should be
met for such a kind of study. At this stage, it becomes imperative to use all the
techniques of project management like planning, scheduling, and reporting.
The measurement system should be first evaluated for its capability and validity.
The measurement system should be evaluated for tolerance, accuracy and
precision.
The control plan has a dual job. On one hand it isolates and controls as many
variables as possible for the study. On the other hand it provides a method to track
those variables which cannot be controlled.
The choice of the SPC method depends upon what attributes are used up to this
point in the study. One of the attribute charts is to be used if performance measure
is an attribute. Similarly variable charts are used for process performance
measures studied on a continuous scale.
Data collection and analysis should always be done by more than one person, as this ensures more accurate collection and analysis and helps to catch accidental errors during data collection or analysis. Control charts should be used here.
A special cause of variation can be a good cause or a bad one. The idea is to identify
it. It may take months to identify a special cause. For instance, inadequately trained
operators producing the variability cannot be considered as a special cause; rather
it is the inadequate training which is the special cause. Therefore, harmful special
causes are usually removed by removing the cause itself. On the other hand,
advantageous special causes are sometimes actually embedded in the routine
operating procedure.
After the process comes under a state of control, the process capability can be
computed. This can be done by the methods described in the latter part of this
chapter. Once the statistical figures of the process capability are worked out, they
are then compared with the managerial objective for the process.
After a stable state or a state of statistical control has been achieved, measures
should be taken to sustain the same and also to enhance it. What is more
important here is the atmosphere of the company which helps in the process of
sustaining the stable state and also continuously improving on it. SPC is one such
measure.
The voice of the process and the voice of the customer both have an effect on each other. This two-way relationship in Six Sigma is called capability. Capability implies how well the voice of the process, or the performance of the process, meets the customers’ expectations.
Statistically, process capability compares the result of an ‘in control’ process to the
specification limits by using capability indices. A measurement control system
ensures that process capability indices are measures of the capability of a process
to produce the final product or service, according to the customer’s specifications
or some other measurable characteristics. The output of a stable process is
compared with a specification to see how well the process meets the specification.
The capability indices draw attention to where improvement is required.
When you buy a pizza, as a customer, you have certain expectations. You want the taste to be the same every time you order it. If it was too crispy or too salty, you would immediately sense it and feel dissatisfied. The pizza company is aware of this and controls the amount of each ingredient and the oven temperature that go into making a pizza. The pizza manufacturer therefore sets specifications to make its pizzas consistent. The value that separates acceptable from unacceptable performance is called a specification.
Histograms and control charts together are used to express process capability.
Process capability is expressed using the following indices:
Cp is computed as the distance between the upper and lower specification limits (the specification width) divided by 6 times the standard deviation, or sigma, of the process: Cp = (USL − LSL) / 6σ. (These specification limits are not statistically calculated; they are set by the customer requirements and the process economics.) This index makes a direct comparison of the natural tolerance of the process to the engineering specifications.
USL and LSL: The difference between the USL and LSL defines the range of output which the process must meet; (USL − LSL) is also called the specification range.
6σ: This is called the “natural tolerance” of the process. The smaller 6σ is, the more consistent the process output.
If Cp < 1, the denominator (6σ) is greater than the numerator (USL − LSL): the process width is wider than the specification width. Thus the process is not capable of generating outputs which abide by the specifications, and it is generating a significant number of defects.
If Cp = 1, the process is just meeting the specifications but is still generating about 0.3% defects (when centered). The generally accepted minimum value for Cp is 1.33.
If Cp > 1, the process variation is less than the specification range, i.e., the natural tolerance of the process fits comfortably within the specification limits. The process is potentially capable if the process mean is centered on the specified engineering target; however, defects may still occur if the process is not centered on the target value. Generally, a larger value of Cp is preferred.
For a Six Sigma process, i.e, a process that generates 3.4 defects per million
opportunities, including a 1.5 sigma shift, the value of Cp would be 2.
Limitations of Cp:
It cannot be used unless there are both upper and lower specification limits.
It does not account for process centering, i.e., if the process average is not properly centered, the result will be misleading.
2. Cu is the difference between the process mean and the upper specification limit
divided by 3 sigma.
3. Cl is the difference between the process mean and the lower specification limit divided by 3 sigma.
4. Cpk, the smaller of the calculated Cu and Cl values, is called the adjusted short-term capability. In other words, Cpk is the difference between the process mean and the nearest specification limit divided by 3 times the standard deviation (sigma).
While calculating Cpk it is assumed that the process distribution is normal. If this
value is ≥ 1, then about 99.7 % of all the products of the process being studied will
be within the prescribed specification limit. If this value is less than 1, the process
variation is too wide compared to the specification (more non-conforming products
are being produced); so the process needs to be improved. For a Six Sigma
process, Cpk would be 2.
This index compares the width of the process with the width of the specification
and also accounts for any error in the placement of the central tendency. That is
why in recent times, Cp has been replaced by Cpk.
6. ZL measures process location relative to its standard deviation and the lower
requirement. The higher the ZL the better. For a Six Sigma process, ZL would be +6.
7. ZMIN is the smaller of ZU and ZL values. It is used in calculating CPK. For a Six Sigma
process, ZMIN would be +6.
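As a minimal illustration of these indices, the Python sketch below computes Cp, Cu, Cl, Cpk and the Z scores for a small set of hypothetical measurements against assumed specification limits. The data, the limits and the use of Python/NumPy are illustrative assumptions, not part of the courseware's own worked examples.

```python
import numpy as np

# Hypothetical measurements with customer specification limits.
data = np.array([448, 452, 451, 449, 450, 453, 447, 450, 451, 449], dtype=float)
LSL, USL = 440.0, 460.0            # lower / upper specification limits

mean = data.mean()
sigma = data.std(ddof=1)           # sample standard deviation

Cp  = (USL - LSL) / (6 * sigma)    # potential capability (ignores centering)
Cu  = (USL - mean) / (3 * sigma)   # capability against the upper limit
Cl  = (mean - LSL) / (3 * sigma)   # capability against the lower limit
Cpk = min(Cu, Cl)                  # adjusted short-term capability

Zu   = (USL - mean) / sigma        # sigma score to the upper limit
Zl   = (mean - LSL) / sigma        # sigma score to the lower limit
Zmin = min(Zu, Zl)                 # ZMIN; note Cpk = ZMIN / 3

print(f"Cp={Cp:.2f}  Cpk={Cpk:.2f}  Zmin={Zmin:.2f}")
```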
There are some other indices called process performance indices and as the name
suggests, they study the actual process performance over a period of time. Process
performance indicators help to look at how the total variation in the process
compares to the specification. These indices are Pp, Ppk, and Cpm. Here special
causes are included to calculate total variation. These are called long-term
capability indices. They are equally important because no process operates only for
the short-term.
According to the AIAG manuals, Pp and Ppk are based on statistically stable processes with normally distributed data. The formulas for computing Pp and Ppk are very similar to those for Cp and Cpk; the main difference lies in how the standard deviation is calculated.
The sample standard deviation, s, is calculated directly from all the individual data as
s = sqrt( Σ(xi − x̄)² / (n − 1) ), where n = sample size,
and the performance indices are then
Pp = (USL − LSL) / 6s and Ppk = min[ (USL − x̄) / 3s, (x̄ − LSL) / 3s ].
Cpm is the process capability measured against performance to a target: Cpm = (USL − LSL) / (6·sqrt(σ² + (μ − T)²)), where T is the target value. This index compares the width of the specification range with the spread of the process output, together with an error term reflecting how far the mean of the distribution is from the target.
When the center of the specification is the targeted value, Cp, Cpk, and Cpm will be equal. When the process is close to the center, it is often found that the Cpk and Cpm values are approximately equal. But if the mean is more than one standard deviation away from the target, then different views of the process capability will be given by the three indices.
The following describes the control chart method of analyzing process capability of
attributes data. (Thomas Pyzdek, 1976)
If the process capability does not level up to the management requirements, immediate
action should be taken to correct the process. It is to be noted that ‘problem solving’ may
not help but only result in tampering. Even if the capability meets requirement
specifications, ways of process improvement should be constantly discovered. The
control charts will provide verification of the improvement.
From the above description, it is evident that there are two capability values: the short-term Cpk and the long-term Ppk. They differ because any sample used to calculate performance is collected over a short period of time, while processes vary over time. Mikel Harry, the pioneer of Six Sigma, maintains that even the most consistent processes display a 1.5 sigma shift over the long term. In reality, the short-term capability may simply be the best achievable capability, and the actual long-term performance may not be as good as this short-term calculation suggests. In some cases the shift may be smaller than 1.5 sigma; in other cases, when the process is not under statistical control, it may be wider. One answer is to perform a number of Cpk calculations over time to determine the process's own shift.
The sigma score (Z) can be calculated from the mean (x̄) and the standard deviation σ. If the short-term standard deviation is known, the sigma score calculated is the short-term sigma score:
ZST = (SL − x̄) / σST
If the long-term standard deviation is known, the sigma score calculated is the long-term sigma score:
ZLT = (SL − x̄) / σLT
where SL is the specification limit.
The short term sigma score represents the best variation performance that can be
expected from the current process. But in the real world, a process varies over
time. The performance of a process is affected by shift, drift and trends. Six Sigma
permits the study of the effects of short term variation, while making realistic long
term projections, relative to the process’s specifications.
The following graph shows the short term variation of a process together with its
long term variation.
In the graph, the process stays within specifications in the short term. But in the
long run the process is affected by long term influences (like drift and shift) and the
process variation is expanded, causing it to generate defects beyond the
specification limit. The defects are shown in the shaded portion.
Six Sigma practitioners projected that mathematically shifting a process's short-term distribution toward its specification limit by a distance of 1.5 times its short-term standard deviation (σST) approximately reproduces the defect rate observed in the long term. This can be applied directly to the calculation of short-term and long-term sigma scores, as shown in the diagram given below.
The figure shows the short term distribution used to project the long term
performance.
But since the shifted distribution is approximately equal to the long-term distribution, the relationship can be written as ZLT ≈ ZST − 1.5.
Therefore, it is common practice for Six Sigma analysts to first calculate the short-term sigma score and then translate it into the long-term defect rate performance (ZLT). This ZLT is communicated in terms of defects per million opportunities (DPMO).
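The short sketch below shows this translation, assuming the 1.5 sigma shift described above and a hypothetical short-term sigma score of 4.5; the SciPy normal distribution is used purely for the tail-probability lookup.

```python
from scipy.stats import norm

# Hypothetical figure: a process with a short-term sigma score of 4.5
# against its nearest specification limit.
Z_ST = 4.5

# Apply the commonly assumed 1.5 sigma long-term shift.
Z_LT = Z_ST - 1.5

# Long-term defect rate expressed as defects per million opportunities.
DPMO = norm.sf(Z_LT) * 1_000_000   # sf() is the upper-tail probability

print(f"Z_ST={Z_ST}  Z_LT={Z_LT}  DPMO={DPMO:.0f}")
# Z_ST = 4.5 corresponds to roughly 1,350 DPMO in the long term.
```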
To this point, process capability has been described for normal data. What are the options for analyzing process capability when the data are not normal? Statisticians acknowledge that most studies, including Six Sigma studies, assume normality because it is convenient to do so whenever the errors introduced by that assumption are insignificant.
The outputs of many processes do not form normal distributions. Characteristics like height, weight and so on can never have an exactly normal distribution. Similarly, there are processes that involve life data and reliability studies, processes where financial information has to be tracked, and customer service processes that follow non-normal distributions. Other examples of non-normal characteristics are cycle time, average handling time (for calls), calls per hour, shrinkage, perpendicularity, straightness, and so on. However, non-normal data can often be transformed to resemble a normal distribution, and there are a number of statistical techniques for performing this transformation.
There are other options for calculating process capability indices for non-normal data. It is possible to calculate capability directly with non-normal data, but it is sometimes more helpful to create a more useful data set from the non-normal data. This can be done by:
1. Sub-group averaging: Averaging the sub-groups using control charts usually produces a normal distribution; this relies on the central limit theorem. Note that when the data are highly skewed, more samples are needed.
2. Segmenting the data through stratification, which often separates non-normal data into groups that are individually normal. This means dividing the data into subsets with similar characteristics.
3. Mathematically transforming the data and the specifications through Box-Cox transformations, logit transformations, etc. (a sketch of a Box-Cox transformation appears after this list).
5. Non-parametric distributions are used for tests when the data is not normal.
They are used for tests using medians rather than means. They are generally used
to compare data groups whose sample sizes are less than 100.(For more information
on non-parametric tests, see Chapter 6: Black Belt, Analyze) For larger samples, Central
Limit Theorem can be used.
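The following sketch illustrates option 3 above using a Box-Cox transformation. The call-handling-time data are randomly generated and purely hypothetical; the SciPy functions shown are one possible way to perform the transformation, and the same lambda must also be applied to the specification limits before any capability calculation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical skewed data, e.g. call-handling times in minutes.
handle_times = rng.lognormal(mean=1.0, sigma=0.5, size=200)

w0, p_before = stats.shapiro(handle_times)   # normality check before

# Box-Cox requires strictly positive data; it returns the transformed
# values together with the lambda that best normalizes them.
transformed, lam = stats.boxcox(handle_times)

w1, p_after = stats.shapiro(transformed)     # normality check after

print(f"lambda = {lam:.2f}")
print(f"Shapiro p-value before = {p_before:.4f}, after = {p_after:.4f}")
# Remember to transform the specification limits with the same lambda
# before computing capability on the transformed scale.
```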
Process Capability
The following flowchart shows the flow of a process capability calculation, starting with a set of continuous data, analyzing it for non-normality and then calculating capability for the non-normal data.
Process Capability with Subsets
When data from a process looks like a non-normal but known distribution, like a
Weibull distribution or log-normal distribution, defect rates or process capability
can be calculated using properties of the distribution given the parameters of the
distribution and the specification limits. The Cpk that would have the same
proportion nonconforming for a normal distribution can be determined afterwards.
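A minimal sketch of this idea follows, assuming (hypothetically) that cycle-time data follow a log-normal distribution with an upper specification limit only; the data are simulated and the SciPy distribution-fitting calls are one possible implementation, not the courseware's prescribed method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical cycle-time data assumed to follow a log-normal distribution.
cycle_time = rng.lognormal(mean=1.2, sigma=0.4, size=500)
USL = 8.0                                   # upper specification limit only

# Fit the assumed distribution and use its tail to get the proportion
# of output beyond the specification limit.
shape, loc, scale = stats.lognorm.fit(cycle_time, floc=0)
p_nonconforming = stats.lognorm.sf(USL, shape, loc=loc, scale=scale)

# Equivalent Cpk: the Cpk a normal process would need to give the same
# proportion nonconforming (Cpk = Z / 3).
z_equiv = stats.norm.isf(p_nonconforming)
cpk_equiv = z_equiv / 3

print(f"fraction beyond USL = {p_nonconforming:.4%}, equivalent Cpk = {cpk_equiv:.2f}")
```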
Another method is to transform the raw data into an approximately normal
distribution, and measure capability using this assumption and transformed
specification limits. A skewed distribution can be transformed into a normal
distribution using natural logarithms. Minitab software can be used to transform the raw data and the specification limits, and to calculate process capability on the transformed data.
When the above methods cannot be used, the best way to calculate capability is to
collect data on the defects themselves and summarize the results. This method
cannot infer about data beyond the available sample; so collecting much larger
data samples than is necessary for continuous data can help overcome this issue.
The following formula is used to calculate the RTY for an N-step process (or N-
featured product):
Rolled Throughput Yield = (1 − DPMO1/1,000,000) × (1 − DPMO2/1,000,000) × (1 − DPMO3/1,000,000) × … × (1 − DPMON/1,000,000)
where DPMOi is the defects per million opportunities for step i in the process. The sigma level equivalents for RTY values are given in the table at the end; these are the estimated "process" sigma levels.
To calculate the Normalized Yield and its Sigma Level, which act as a kind of per-step average, the following formula is used:
Normalized Yield = (RTY)^(1/N), where N is the number of process steps.
When calculating DPMO, you would not actually measure defects over a million opportunities; that would be a long drawn-out process. Instead, you estimate DPMO from DPO (defects per opportunity) in the following manner:
DPMO = DPO × 1,000,000, where DPO = (number of defects) / (number of units × number of opportunities per unit).
DPO is a measurement of capability. Any item that you work upon in Six Sigma is
called a unit. A unit may be a product that is manufactured discretely. It may be a
new design or an invoice of receipt or a loan application. An opportunity of any
product or process or service is a special characteristic that has the ability to turn
into a defect or success. Success or failure is known as the conformity to the
opportunity’s specification.
For example, consider a process with four individual process steps having the following DPMO levels: step 1 = 5,000, step 2 = 15,000, step 3 = 1,000 and step 4 = 50 (equivalent to step yields of 0.995, 0.985, 0.999 and 0.99995 respectively). Then:
Rolled Throughput Yield = 0.995 × 0.985 × 0.999 × 0.99995 = 0.979. This means that if production started with 1,000 units, only about 979 defect-free units would result. It should be remembered that RTY is never better than the worst yield of any single process step. As a process becomes more complex and the number of process steps grows, the RTY continues to erode.
The sigma level equivalent of this process RTY is 3.5. This is the estimated “process”
sigma level. This is calculated in the following manner:
Corresponding to the yield value of 0.979, ZST equals 4.62 (see table below), and the sigma score of the shifted distribution follows from it.
The sigma level equivalent of this process’s normalized yield is 4.1. This is the
estimated “organization” sigma level. (See table below).
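A minimal sketch of these calculations is shown below, assuming the step DPMO values listed above and the conventional normal-quantile conversion with the 1.5 sigma shift; it reproduces, approximately, the 3.5 "process" and 4.1 "organization" sigma levels quoted above.

```python
from scipy.stats import norm

# DPMO levels assumed for the four process steps in the example above.
dpmo = [5000, 15000, 1000, 50]

yields = [1 - d / 1_000_000 for d in dpmo]
rty = 1.0
for y in yields:
    rty *= y                                  # rolled throughput yield

n = len(dpmo)
norm_yield = rty ** (1 / n)                   # normalized (per-step) yield

# Sigma levels reported with the conventional 1.5 sigma shift added.
process_sigma = norm.ppf(rty) + 1.5
org_sigma = norm.ppf(norm_yield) + 1.5

print(f"RTY={rty:.3f}  process sigma ~ {process_sigma:.1f}")
print(f"normalized yield={norm_yield:.4f}  organization sigma ~ {org_sigma:.1f}")
```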
CHAPTER 6
6 The Analyze Phase
Objectives
The true reason why a problem could exist in the process is unearthed in the
Analyze phase.
Data analysis can be divided into two phases: the exploratory phase and the confirmatory phase. Before actually studying the problem and establishing a cause-and-effect theory, one must thoroughly examine the data for patterns, trends or gaps. This is called exploratory data analysis.
Exploratory Data Analysis (EDA) is an approach to data analysis that utilizes a variety of techniques (mostly graphical) to examine the data before formal modeling. Four themes underlie EDA:
1. Resistance
It refers to the insensitivity of a method to a small change in the data. If a small amount of the data is tainted, the method should not produce significantly different conclusions.
2. Residuals
Residuals are what remain after removing the effect of a model. For example, one might subtract the mean from each value, or look at deviations about a regression line.
3. Re-expression
Re-expression means transforming the data (for example, by taking logarithms or square roots) to a scale that makes the analysis simpler.
4. Visual Display
It helps the analyst examine the data graphically so as to reveal the regularities and abnormalities in the data.
There are a number of EDA methods and techniques, but two of them are used frequently in Six Sigma: stem-and-leaf plots and box plots. Most EDA graphics are simple enough to be drawn by hand.
1. Multi-Variate Studies
Multi-variate studies concern the visualization of relationships between key process input and output variables. They involve matching data visualization techniques with the types of data to which they are best suited, and matching the families of variation shown by multi-variate charts with examples.
Multi-Variate Charts
A multivariate chart is a control chart for variables data (see Chapter 8: Black Belt, Control for information on control charts). Multivariate charts are used to detect shifts in the mean, or in the association (covariance), between several related parameters.
The T² control chart, based upon Hotelling's T² statistic, is used to detect shifts in the process. This statistic is calculated for the process's principal components, which are linear combinations of the process variables. The principal components (PCs) are independent of one another, whereas the process variables may be correlated with one another; independence of components is necessary for the analysis. The PCs may be used to estimate the data and thereby provide a basis for an estimate of the prediction error. The number of PCs may never exceed the number of process variables and is often constrained to be fewer.
The Squared Prediction Error (SPE) chart may also be used to detect shifts. The SPE is based on the error between the raw data and a principal component model fitted to that data.
A Multivariate Analysis (MVA) may be valuable in SPC whenever there is more than
one process variable. This becomes more useful when the effect of multiple
parameters is dependent or there is a correlation between some parameters.
Sometimes the true source of variation may not be measurable.
An important point is that almost all processes are multivariate but analysis is
frequently not required because there are only a few independent controlled
variables. However, even when the variables become dependent, the use of a single
control chart for each variable increases the probability of randomly finding a
variable ‘out of control’; the more variables there are, the more likely it is that one
of those charts will contain an ‘out of control’ condition even when the process has
not shifted. Thus, the probability of taking a wrong decision (or probability of Type 1
error) is increased if each variable is controlled separately. So the control region for
two separately acting variables is a rectangle; an ellipse would be formed as the
control region for two jointly-acting parameters.
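As a minimal sketch of the multivariate idea, the following Python code computes the Hotelling T² statistic for each observation of two correlated (hypothetical) process variables; the simulated data, variable names and use of NumPy are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical data: two correlated process variables measured on 50 parts
# (e.g. bore diameter and depth), stored as a 50 x 2 array.
x = rng.multivariate_normal(mean=[10.0, 25.0],
                            cov=[[0.04, 0.03], [0.03, 0.09]], size=50)

xbar = x.mean(axis=0)                 # vector of sample means
S = np.cov(x, rowvar=False)           # sample covariance matrix
S_inv = np.linalg.inv(S)

# Hotelling T^2 for each observation: large values flag points that are
# unusual with respect to the joint (correlated) behaviour of the variables.
d = x - xbar
t2 = np.einsum('ij,jk,ik->i', d, S_inv, d)

print("largest T^2 values:", np.round(np.sort(t2)[-3:], 2))
```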
The use of regression analysis is very important in Six Sigma. Regression analysis helps the analyst study the cause and effect behind a problem, and it can be used at every stage of the problem-solving and planning process. Regression is the analysis of data aimed at discovering how one or more variables (called independent or predictor variables) affect other variables (called dependent or response variables). It describes the nature of the relationship between two or more variables.
For e.g., (1) you may be interested in studying the relationship between blood
pressure and age or between height and weight of a person. Here only two
variables are used. This is an example of Simple Linear Regression.
(2) The response of an experimental animal to some drug may depend on the size
of the dose and the age and weight of the animal. Here more than two variables are
used. So it is a case of Multiple Regression.
Rather than making inferences about differences among the means of populations, regression analysis makes inferences about how the mean of the response variable is related to the independent variables. These inferences are made through the parameters of the model.
For Example:
2. Estimating the amount of sales associated with levels of expenditure for various
types of advertising.
Regression Line
To describe the amount of change that normally takes place in variable Y for a unit change in X, a line has to be fitted to the points plotted in the scatter diagram. This is called the regression line, or linear regression. The regression line describes the average relationship between the two variables for the whole series; it is also called the line of average relationship.
Y = α + βX
When this equation describes the line marking the path of the points in a scatter
diagram, it is called regression equation. The line it describes is called the line of
regression of Y on X.
The values of α and β in the equation are constants, i.e. these values are fixed. The first constant, α, indicates the value of Y when X = 0; it is also called the Y-intercept.
The value β indicates the slope of the regression line and it gives us a measure of
change in Y for a unit change in X. It is also called regression coefficient of Y on X. If
you know the values of α and β, you can easily compute the value of Y for a given
value of X.
The values of α and β are calculated with the help of the following two normal equations:
ΣY = nα + βΣX and ΣXY = αΣX + βΣX²
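A minimal sketch of solving these normal equations follows, using hypothetical age and blood-pressure data (matching example (1) earlier); NumPy is used only for the arithmetic, and the polyfit call is a cross-check rather than part of the method described above.

```python
import numpy as np

# Hypothetical data: X = age, Y = systolic blood pressure.
X = np.array([25, 32, 40, 46, 53, 60, 65], dtype=float)
Y = np.array([118, 121, 126, 130, 134, 139, 142], dtype=float)

n = len(X)
# Solving the two normal equations
#   ΣY  = n·α + β·ΣX
#   ΣXY = α·ΣX + β·ΣX²
# gives the least-squares estimates of the intercept (alpha) and slope (beta).
beta = (n * (X * Y).sum() - X.sum() * Y.sum()) / (n * (X**2).sum() - X.sum() ** 2)
alpha = Y.mean() - beta * X.mean()

print(f"Y = {alpha:.2f} + {beta:.3f} X")
print("cross-check [slope, intercept]:", np.polyfit(X, Y, 1))
```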
Standard Error
If the scatter of the points about the regression line is less than the scatter of the observed values of Y about their mean, the regression equation is likely to be useful in estimating Y. The measure of the scatter of the points about the regression line is called the standard error of estimate of Y; it is computed from the squared differences Σ(Y − YC)², where
Y = observed value of Y
YC = estimated value of Y
Regression Model:
In the simple linear regression model two variables, X and Y are taken. The
following are the assumptions underlying the simple linear regression model:
Assumptions
a. The values of the independent variable X are fixed by the investigator, i.e. X is regarded as a non-random variable.
b. The variable X is measured without error i.e. the magnitude of the measurement
error in X is negligible.
c. For each value of X, there is a sub population of Y values. For the usual inferential
procedures of estimation and hypothesis testing to be valid, these sub populations
must be normally distributed.
e. The values of Y are statistically independent i.e. the values of Y chosen at one
value of X in no way depend on the values of Y chosen at another value of X.
The model is y = α + βx + e, where y is a typical value from one of the subpopulations of Y, α and β are called the population regression coefficients and e is the error term. Here e shows the amount by which y deviates from the mean of the subpopulation of Y values from which it is drawn. The e's for each subpopulation are normally distributed with a variance equal to the common variance of the subpopulations of Y values.
Scatter Diagram
A first step that is useful in studying the relationship between two variables is to
prepare a scatter diagram of the data. The points are plotted by assigning values of
independent variables X to the horizontal axis and values of the dependent variable
Y to the vertical axis. The pattern made by the points plotted on the scatter diagram
usually suggests the basic nature and strength of the relationship between two
variables. These impressions may suggest that the relationship between the two variables can be described by a straight line crossing the Y-axis below the origin and making approximately a 45-degree angle with the X-axis. It looks as if it would be simple to draw, freehand, through the data points a line that describes the relationship between X and Y. In fact, it is not likely that any freehand line drawn through the data will be the line that best describes the relationship, since freehand lines reflect the defects of vision or judgment of the person drawing the line.
Usage of Scatter Diagram: Scatter diagrams are used to study cause and effect
relationships in Six Sigma. The underlying assumption is that the independent
variables are causing a change in response variables. It answers questions like, “In
the production process, is output of machine A better than Output of machine B?”
etc.
The method usually employed for obtaining the desired line is known as the method of least squares, and the resulting line is called the least-squares line. The least-squares line does not necessarily pass through all of the observed points plotted on the scatter diagram.
The line that you have drawn through the points is best in this sense if:
The sum of the squared vertical deviations of the observed data points (y i) from
the least-squares line is smaller than the sum of the squared vertical deviations of
the data points from any other line.
Once the regression equation has been obtained it must be evaluated to determine whether it adequately describes the relationship between the two variables and whether it can be used effectively for prediction and estimation purposes. If the equation proves inadequate, possible reasons include: the relationship between X and Y is not linear; a curvilinear model provides a better fit to the data; or the sample data are not likely to yield equations that are useful for predicting Y when X is given.
1. Test Statistic (when σ²y/x is known): for testing hypotheses about β, the test statistic is
Z = (b − β0) / σb
where β0 is the hypothesized value of β and σb is the standard deviation of the slope estimate b. The hypothesized value of β does not have to be zero.
Decision Rule
Reject H0 if the calculated value is greater than the tabulated value, i.e. |Z| > Zα/2; the test is then significant. This means that there is a linear relationship between the dependent variable Y and the independent variable X, and it can be concluded that the slope of the true regression line is not zero.
2. Test Statistic (when σ²y/x is unknown): the test statistic is t = (b − β0) / sb, where sb is the estimated standard deviation of b. When H0 is true and the assumptions are met, the test statistic is distributed as Student's t with n − 2 degrees of freedom. The level of significance α = 0.05 or 0.01 is chosen. The tabulated value is tα/2, n−2.
Decision Rule
Reject H0 if the calculated value is greater than the tabulated value, i.e. |t| > tα/2, n−2; the test is then significant. This means that there is a linear relationship between the dependent variable Y and the independent variable X, and it can be concluded that the slope of the true regression line is not zero.
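A minimal sketch of this slope test follows, reusing the hypothetical age and blood-pressure data from the earlier example; SciPy's linregress is one convenient implementation, returning the slope, its standard error and the two-sided p-value for H0: slope = 0.

```python
import numpy as np
from scipy import stats

# Same hypothetical age / blood-pressure data as in the earlier sketch.
X = np.array([25, 32, 40, 46, 53, 60, 65], dtype=float)
Y = np.array([118, 121, 126, 130, 134, 139, 142], dtype=float)

res = stats.linregress(X, Y)
# res.pvalue is the two-sided p-value for H0: slope = 0, based on a
# t statistic with n - 2 degrees of freedom.
print(f"slope={res.slope:.3f}  stderr={res.stderr:.4f}  p-value={res.pvalue:.2e}")

alpha = 0.05
if res.pvalue < alpha:
    print("Reject H0: the slope of the true regression line is not zero.")
else:
    print("Fail to reject H0.")
```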
If the results of the evaluation of the sample regression equation indicate that there
is a relationship between two variables of interest, the regression equation can be
put to practical use. There are two ways in which the equation can be used. It can
be used to predict what value Y is likely to assume given a particular value of X.
When the normality assumption is met, a prediction interval for this predicted value
of Y may be constructed.
You can also use the regression equation to estimate the mean of the
subpopulation of Y values assumed to exist at any particular value of X. Again, if the
assumption of normally distributed populations holds, a confidence interval for this
parameter may be constructed.
If the assumptions are met, and when σ²y/x is unknown, the 100(1 − α) percent prediction interval for Y is given by
ŷ ± tα/2, n−2 · sy.x · sqrt( 1 + 1/n + (xp − x̄)² / Σ(xi − x̄)² )
where xp is the particular value of x at which you wish to obtain a prediction interval for Y, sy.x is the standard error of estimate, and the degrees of freedom used in selecting t are n − 2.
The 100(1 − α) percent confidence interval for μy/x, when σ²y/x is unknown, is given by
ŷ ± tα/2, n−2 · sy.x · sqrt( 1/n + (xp − x̄)² / Σ(xi − x̄)² )
b. The Multiple Least squares Linear Regression Model
Assumptions
b. The variable X is measured without error i.e. the magnitude of the measurement
error in X is negligible.
c. For each set of Xi , there is a sub population of Y values. For the usual inferential
procedures of estimation and hypothesis testing to be valid, these sub populations
must be normally distributed.
e. The values of Y are statistically independent i.e. the values of Y chosen at one set
of X in no way depend on the values of Y chosen at another set of X values.
The model is yj = β0 + β1x1j + β2x2j + … + βkxkj + ej, where yj is a typical value from one of the subpopulations of Y values, the βi are called the regression coefficients, x1j, x2j, …, xkj are, respectively, particular values of the independent variables X1, X2, …, Xk, and ej is a random variable with mean 0 and variance σ².
When the above equation contains one dependent and two independent variables, the model is written as y = β0 + β1x1 + β2x2 + e.
A plane in three-dimensional space may be fitted to the data points. When the
model contains more than two independent variables, it is described geometrically
as a Hyper-plane.
In this equation, β0 represents the point where the plane cuts the Y-axis; that is, it represents the Y-intercept of the plane. β1 measures the average change in Y for a unit change in x1 when x2 remains unchanged, and β2 measures the average change in Y for a unit change in x2 when x1 remains unchanged. For this reason β1 and β2 are referred to as partial regression coefficients.
Once the regression equation has been obtained it must be evaluated to determine
whether it adequately describes the relationship between two variables and
whether it can be used effectively for prediction and estimation purposes.
To determine whether the overall regression is significant, you have to evaluate the
strength of the linear relationship between dependent variables Y and the
independent variables Xi individually. That is, when you want to test the null
hypothesis, that is βi = 0 against the alternatives βi ≠ 0 (i=1,2,…….k), the validity of
the procedures rests on the assumptions stated earlier: that for each combination
of Xi values there is a normally distributed subpopulation of Y values with variance
σ 2.
To test the null hypothesis that βi is equal to some particular value, say βi0, the following t statistic may be computed:
t = (bi − βi0) / sbi
where the degrees of freedom are n − k − 1 and sbi is the standard deviation (standard error) of bi. When H0 is true and the assumptions are met, the test statistic is distributed as Student's t with n − k − 1 degrees of freedom. The level of significance α = 0.05 is chosen. The tabulated value is tα/2, n−k−1.
Reject H0 if the calculated value is greater than the tabulated value, i.e. |t| > tα/2, n−k−1. The test is then significant, which means there is a significant linear relationship between the dependent variable Y and the independent variables Xi.
6.2 Using the Multiple Regression Equation
The 100(1 − α)% prediction interval for a particular value of Y, given particular values of the Xi, is ŷ ± tα/2, n−k−1 · Sŷ, where Sŷ is the standard error of the prediction.
Correlation analysis deals with the association between two or more variables or it
is an attempt to determine the degree of relationship between two variables, when
the relationship is of a quantitative nature.
For example:
(1) to check the effect of increase in rainfall up to a point and the production of
rice.
(3) to check whether there exists some relationship between age of husband and
age of wife.
The use of correlation analysis is very important in Six Sigma. Correlation analysis
helps the analyst to study cause and effect of a problem. This can be used in every
stage of problem solving and planning process.
1. Most of the variables show some kind of relationship. For example, there is
relationship between price and supply, income and expenditure, etc. With the help
of correlation analysis you can measure in one figure the degree of relationship
existing between the variables.
2. Once you know that the variables are closely related, you can estimate the value
of one variable given the value of another with the help of regression analysis.
Correlation Assumption
The following assumptions must hold for inferences about the population to be
valid when sampling is from bivariate distributions.
When a correlation between two variables A and B is observed, the possible explanations are:
1. A causes B
2. B causes A
3. A and B influence each other continuously
4. A and B are both influenced by C, or
5. The correlation is due to chance
The above data show a perfect positive relationship between income and weight, i.e., as income increases, weight increases, and the rate of change between the two variables is the same.
c. Both the variables may be mutually influencing each other so that neither
can be designated as the cause and the other the effect . There maybe a high
degree of correlation between the variables but it is difficult to pinpoint as to which
is the cause and which is the effect. For e.g., as the price of commodity increases its
demand goes down and so price is the cause and demand is the effect. But it is also
possible that increased demand of a commodity is due to growth of the population.
Now, the cause is the increased demand, the effect is price.
Coefficient of Correlation
Covariance = Σxy / N, where x and y denote the deviations of the X and Y values from their respective means.
When r = - 1, it means that there is a perfect negative correlation.
1. Take the deviations of the X series from the mean of X and denote these deviations by x.
2. Square these deviations and obtain the total, i.e. Σx².
3. Take the deviations of the Y series from the mean of Y and denote these deviations by y.
4. Square these deviations and obtain the total, i.e. Σy².
5. Multiply the deviations of the X and Y series and obtain the total, i.e. Σxy.
The coefficient of correlation is then r = Σxy / sqrt( Σx² · Σy² ).
Since r is a pure number, shifting the origin and changing the scale of the series
does not affect the value of correlation coefficient.
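The short sketch below applies the deviation method just described to hypothetical advertising-spend and sales data, and cross-checks the result with SciPy's pearsonr; the data values are illustrative only.

```python
import numpy as np
from scipy import stats

# Hypothetical paired data, e.g. advertising spend (X) and sales (Y).
X = np.array([10, 12, 15, 18, 20, 25, 30], dtype=float)
Y = np.array([40, 44, 50, 55, 57, 66, 75], dtype=float)

x = X - X.mean()            # deviations of the X series from its mean
y = Y - Y.mean()            # deviations of the Y series from its mean

r = (x * y).sum() / np.sqrt((x**2).sum() * (y**2).sum())
print(f"r (deviation method) = {r:.3f}")

# Cross-check with SciPy's implementation (also gives a p-value).
r_scipy, p_value = stats.pearsonr(X, Y)
print(f"pearsonr: r = {r_scipy:.3f}, p = {p_value:.4f}")
```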
Each formal and informal procedure is complementary and both have a place in
residual analysis.
The residual models are also called Measurement Error Models. For simple Linear
Measurement Error model, the goal is to estimate a straight line fit between two
variables from bivariate data, both of which are measured with error.
One of the most important differences between residual models and ordinary
regression models is concerned with model identifiability. It is common to assume
that all random variables in the residual model are jointly normal. This means that
different sets of parameters can lead to the same joint distributions of x and y. In
this situation, it is impossible to estimate consistently the parameters from the
data.
B. Hypothesis Testing
For example: A hospital administrator may hypothesize that the average length of
stay of patients admitted to the hospitals is five days or a physician may
hypothesize that a certain drug will be effective in 90 % of cases for which it is used.
By means of hypothesis testing one determines whether or not such statements
are compatible with the available data.
a. Hypothesis Testing
1. Set up a hypothesis
The first thing in hypothesis testing is to set up a hypothesis about a population parameter. There are two statistical hypotheses involved in hypothesis testing.
For example, a psychologist who wishes to test whether or not a certain class of people has a mean I.Q. higher than 100 might establish the null and alternative hypotheses H0: μ ≤ 100 and HA: μ > 100.
2. Test statistic
The test statistic is some statistic that may be computed from the data of the sample. It can assume many possible values, and it serves as the decision maker, since the decision to reject or not to reject the null hypothesis depends on its magnitude. For example, a test statistic used for a large sample from a continuous normal distribution is
Z = (x̄ − μ0) / (σ / √n)
where μ0 is the hypothesized mean, σ the population standard deviation and n the sample size.
The distribution of the test statistic follows standard normal distribution if the null
hypothesis is true.
3. Decision Rule
The decision rule says to reject the null hypothesis if the value of the test statistic computed from the sample falls in the rejection region (the region of values that are unlikely to occur if the null hypothesis is true), and not to reject the null hypothesis if the computed value falls in the non-rejection region.
The decision as to which values go into the rejection region and which ones go into
the non-rejection region is made on the basis of the desired level of significance, α.
The term reflects the fact that hypothesis tests are sometimes called significance
tests, and the computed value of the test statistic that falls in the rejection region is
called ‘significant’.
A small value of α is selected in order to make the probability of rejecting a true null
hypothesis small. The more frequently encountered values of α are .01, .05 and
.10.
The following diagram illustrates the region in which one would accept or reject the
null hypothesis when it is being tested at 5 percent level significance and a two-tail
is employed. It may be noted that 2.5 percent of the area under the curve is located
in each tail.
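A minimal worked sketch of this procedure is given below, using the hospital administrator's hypothesis of a five-day mean stay from the earlier example; the sample mean, (assumed known) standard deviation and sample size are hypothetical figures chosen for illustration.

```python
import math
from scipy.stats import norm

# Hypothetical figures: hypothesized mean stay of 5 days.
mu0, sigma = 5.0, 1.8        # hypothesized mean and (assumed known) std dev
xbar, n = 5.4, 100           # sample mean and sample size

z = (xbar - mu0) / (sigma / math.sqrt(n))   # large-sample test statistic
z_crit = norm.ppf(0.975)                    # two-sided critical value, alpha = 0.05

print(f"z = {z:.2f}, critical value = +/-{z_crit:.2f}")
if abs(z) > z_crit:
    print("Reject H0: the mean length of stay differs from 5 days.")
else:
    print("Fail to reject H0.")
```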
Types of Errors
1. The hypothesis is true but your test rejects it. (Type I error)
2. The hypothesis is false but your test accepts it. (Type II error)
3. The hypothesis is true but your test accepts it. (Correct Decision)
4. The hypothesis is false but your test rejects it. (Correct Decision)
For example:
Assume that the difference between two population means is actually zero. If the
test of significance when applied to the sample means gives that the difference in
population means is significant, you make a Type I Error. On the other hand,
suppose there is a true difference between the two population means. Now if the
test of significance leads to the judgment “not significant”, you commit a Type II
Error.
The measure of how well the test is working is called the power of the test; it is the probability of rejecting the null hypothesis when it is in fact false.
c. Sample Size
The size of a sample is the number of sampling units selected from a population for investigation. A smaller but well selected sample may be superior to a large but badly selected one. The sample size should be neither too small nor too large; it should be 'optimum'. If a sample larger than necessary is used, resources are wasted; if the sample is smaller than required, the objectives of the analysis may not be achieved.
The formula used for determining the sample size, depending upon the availability of information, is n = (Zα/2 · σ / E)², where σ is the population standard deviation and E is the maximum allowable error of the estimate.
2. Point and Interval Estimation
For example: Suppose a survey is done to check the education level of the
students in a city during last two years. The data is collected but it is difficult to go
through the entire data to check this level. So a better way is to collect a sample of
the records and find average of the education level for the selected areas. From
this, an estimate of the mean education level of the students in the state can be
computed.
A point estimate is a single numerical value used to estimate the corresponding
unknown population parameter. The procedure in point estimation is to select a
random sample of n observations from a population and then from these
observations find the estimate of the statistic, used as the estimator of the
population. A point estimate is a single point on the real number scale; it gives a single value as the estimate of the parameter under investigation.
For example: On the basis of a sample study, if you estimate the average income
of the people living in a city as $300 it will be a point estimate i.e., this means that
the calculated value from the sample of the given population is used as a point
estimator of population mean.
An interval estimate is always specified by two values, a lower one and an upper one. Interval estimation refers to the estimation of a parameter by a random interval, called the confidence interval, whose end points are L and U with L < U. L is called the lower confidence limit and U the upper confidence limit. If θ is the population parameter, the confidence interval satisfies P(L ≤ θ ≤ U) = 1 − α.
For example: On the basis of a sample study, if the estimate of the average income of the people living in a city lies between $900 and $1,000, it is an interval estimate, i.e. the value of the population parameter lies in this interval.
Unbiasedness of an Estimator
For Example: The administrator of a large hospital is interested in the mean age of
the patients admitted to the hospital during a given year. From a sample of records
it is found that the mean age of the patients admitted that year is 56. If the value
expected in totality is again 56, this means that the estimator is unbiased. The value
of the population parameter is same as the expected value of the test statistic.
For example: In a survey on people from a city in Ohio, two samples are taken of
size 500 each and it was found that average beef eaters are 280 and 285 for the two
samples. Both are unbiased means the expected value is the same as the actual
value. It was found that variation due to sampling in the first sample is 0.2 and in
the second sample is 0.5. It is seen that the average taken from first sample shows
a smaller variation with totality than the second sample. So the first sample
estimator is relatively more efficient than the second.
Standard Error
The standard deviation of the sampling distribution is called the Standard Error. It
measures the sampling variability due to chance or random forces.
The standard error provides an idea of the unreliability of a sample. The greater the standard error, the greater the departure of actual frequencies from the expected ones, and hence the lower the reliability.
With the help of standard Error you can determine the limits within which the
parameter values are expected to lie.
Confidence Interval
A confidence interval is a range of values, calculated from the sample, within which the population parameter is expected to lie with a stated degree of confidence (for example, 95%).
Tolerance Interval
The tolerance interval estimates the range that should contain a certain percentage of the population, based on measurements from a sample. Because it is based on only a sample of the entire population, it is not a 100% interval, i.e. you cannot be 100% sure that the interval will contain the specified proportion. It follows that two different proportions are associated with a tolerance interval: the first is the degree of confidence, and the second is the percent coverage.
For example, you may be 95% confident that 80% of the population will fall within
the range specified by the tolerance interval.
The interval which is used to estimate what future values of the population will be, based on the present and past values of the sample taken, is called the prediction interval. Confidence and tolerance intervals estimate only the present population parameters, while prediction limits address future values. A minimum amount of background data is recommended in order to determine a standard deviation. To determine the prediction limits, the sample mean and standard deviation are computed from the background data of sample size n. Once you decide how many sampling periods there will be and how many samples will be collected per sampling period, you can determine the prediction interval by using the corresponding prediction-interval equation.
6.5 Test for Means, Variance and Proportions
Test whether the diets A and B differ significantly as regards their effect on increase
in weight at 5% level of significance.
Solution:
n1 = 5, n2 = 7
The calculated value is less than the tabulated value and hence the null hypothesis is accepted; the experiment provides no evidence against the hypothesis. Therefore, it is concluded that diets A and B do not differ significantly as regards their effect on increase in weight.
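The sketch below shows how such a two-sample comparison could be run; the weight-gain values are hypothetical (only the sample sizes n1 = 5 and n2 = 7 match the example), and SciPy's pooled t test is one possible implementation, not the courseware's own computation.

```python
import numpy as np
from scipy import stats

# Hypothetical weight gains (kg) under the two diets (n1 = 5, n2 = 7).
diet_A = np.array([5, 6, 8, 1, 12], dtype=float)
diet_B = np.array([2, 3, 6, 8, 10, 1, 2], dtype=float)

# Pooled two-sample t test (equal variances assumed).
t_stat, p_value = stats.ttest_ind(diet_A, diet_B, equal_var=True)
t_crit = stats.t.ppf(0.975, df=len(diet_A) + len(diet_B) - 2)

print(f"t = {t_stat:.2f}, critical value = +/-{t_crit:.2f}, p = {p_value:.3f}")
# |t| below the critical value -> no evidence that the diets differ.
```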
For a given data available for analysis consisting of random samples drawn from a
normally distributed population, the test hypothesis used for analysis of variance is
6.6 Hypothesis testing for the ratio of Two Population variances
H0 : p = p0 (null hypothesis)
H1 : p ≠ p0 (2-sided alternative)
where p0 is the hypothesized value of the population proportion. The test statistic is
Z = (p̂ − p0) / sqrt( p0(1 − p0) / n )
where p̂ is the sample proportion and n the sample size. When H0 is true and the assumptions are met, the test statistic follows the standard normal distribution with mean zero and variance one. The level of significance α = 0.05 is chosen. The tabulated value is Zα for a one-sided and Zα/2 for a two-sided alternative.
Decision Rule
Reject H0 if the calculated value is greater than the tabulated value, i.e. |Z| > Zα/2; the test is then significant. This means there is a significant difference between the sample proportion and the hypothesized proportion.
For example:
A survey on people affected by cancer in a city found that 18 out of 423 were
affected. You wish to know if you can conclude that fewer than 5 percent of people
in the sampled population are affected by cancer at 5 percent level of significance.
Solution:
From the data, of the 423 individuals surveyed, 18 possessed the characteristic of interest, so
p̂ = 18/423 = 0.0426.
H0 : p ≥ 0.05
HA : p < 0.05
The test statistic is Z = (0.0426 − 0.05) / sqrt(0.05 × 0.95 / 423) ≈ −0.70. Since the calculated value does not fall in the rejection region (−0.70 is not less than −1.645 at the 5 percent level), H0 cannot be rejected.
This means you cannot conclude that fewer than 5 percent of the people in the sampled population are affected by cancer; the population proportion may be 0.05 or more.
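The same calculation is sketched below in Python; the numbers come directly from the example above, and SciPy is used only to look up the normal critical value.

```python
import math
from scipy.stats import norm

x, n, p0 = 18, 423, 0.05          # observed cases, sample size, hypothesized proportion
p_hat = x / n                     # about 0.0426

z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
z_crit = norm.ppf(0.05)           # lower-tail critical value for H1: p < 0.05

print(f"p_hat = {p_hat:.4f}, z = {z:.2f}, critical value = {z_crit:.2f}")
if z < z_crit:
    print("Reject H0: fewer than 5% are affected.")
else:
    print("Fail to reject H0: the proportion may be 0.05 or more.")
```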
For example, to find out the effect of training on some employees or to find out the
efficacy of a coaching class, a hypothesis test based on this type of data is known as
a paired comparisons test.
Given two dependent (paired) random samples of size n from the same population having a normal distribution, the hypotheses are H0: μd = 0 against HA: μd ≠ 0, where μd is the mean of the paired differences, and the test statistic is t = d̄ / (sd / √n) with n − 1 degrees of freedom.
Decision Rule
Reject H0 if the calculated value is greater than the tabulated value, i.e. |t| > tn−1, α/2 for a two-sided alternative, or t > tn−1, α for a one-sided alternative; the test is then significant. This means that there is a difference between the actual mean and the estimated mean.
For example:
Twelve students were given intensive coaching and four tests were conducted in a month. The scores of tests 1 and 4 are given below. Do the scores from test 1 to test 4 show an improvement?
Since the calculated value is greater than the tabulated value, the test is to reject
the null hypothesis.
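A paired-comparison sketch of this kind of analysis follows. The test-score values are hypothetical stand-ins (the original table is not reproduced here), and SciPy's ttest_rel is one convenient implementation of the paired t test described above.

```python
import numpy as np
from scipy import stats

# Hypothetical scores of the 12 students on test 1 and test 4.
test1 = np.array([50, 42, 51, 26, 35, 42, 60, 41, 70, 55, 62, 38], dtype=float)
test4 = np.array([62, 40, 61, 35, 30, 52, 68, 51, 84, 63, 72, 50], dtype=float)

# Paired (dependent) t test on the differences.
t_stat, p_value = stats.ttest_rel(test4, test1)
print(f"t = {t_stat:.2f}, two-sided p = {p_value:.4f}")

# For the one-sided question "did the scores improve?", halve the p-value
# when the observed mean difference is in the improvement direction.
if t_stat > 0 and p_value / 2 < 0.05:
    print("Reject H0: the scores show a significant improvement.")
```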
Definition: The goodness-of-fit test enables one to ascertain how well the frequencies observed in a sample fit the frequencies expected from a theoretical distribution. When an ideal frequency curve is fitted to the data, the test shows how well this curve fits the observed facts.
X² = Σ (Oi − Ei)² / Ei, where i = 1, 2, 3, …, k, Oi is the observed frequency and Ei is the expected frequency of group i.
When H0 is true and the assumptions are met, the test statistic is distributed as chi-square (X²) with k − r degrees of freedom, where r is the number of quantities (such as estimated parameters and the total frequency) used in obtaining the expected frequencies. The level of significance α = 0.05 or 0.01 is chosen. The tabulated value is X²α/2 for the upper side and X²1−α/2 for the lower side in a two-sided alternative, and X²α for the upper side and X²1−α for the lower side in a one-sided alternative, each with k − r degrees of freedom.
Here, k = Number of groups for which observed and expected frequencies are
available.
For Example:
No. of errors: 0 1 2 3 4 5
No. of accounts: 6 13 13 8 4 3
On the basis of this information, can it be concluded that the errors are distributed
according to the Poisson process?
Solution:
To solve this, obtain the expected frequencies by fitting a Poisson distribution (with its mean estimated from the data) and test the goodness of fit by the X² test.
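The sketch below carries out exactly this calculation for the observed error counts given above; SciPy's poisson and chisquare functions are one possible implementation, and the lumping of the last cell into "5 or more" is an assumption made so the expected frequencies sum to the sample size.

```python
import numpy as np
from scipy import stats

values = np.arange(6)                        # number of errors: 0..5
observed = np.array([6, 13, 13, 8, 4, 3])    # number of accounts
n = observed.sum()                           # 47 accounts

# Estimate the Poisson mean from the data.
lam = (values * observed).sum() / n          # 94 / 47 = 2.0

# Expected frequencies; the last cell covers "5 or more" so the
# probabilities sum to one and the expected total equals n.
probs = stats.poisson.pmf(values[:-1], lam)
probs = np.append(probs, 1 - probs.sum())
expected = n * probs

# One parameter (lambda) was estimated, so ddof = 1:
# degrees of freedom = 6 - 1 - 1 = 4.
chi2, p_value = stats.chisquare(observed, f_exp=expected, ddof=1)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
# A large p-value means the Poisson model cannot be rejected.
```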
Test Hypothesis
ANOVA is defined as a technique in which the total variation present in the data is partitioned into two or more components, each having a specific source of variation. The analysis makes it possible to determine the contribution of each of these sources to the total variation. It is designed to test whether the means of more than two quantitative populations are equal. It consists of classifying and cross-classifying statistical results and helps in determining whether the given classifications are important in affecting the results.
Normality
Homogeneity
Independence of error
Whenever any of these assumptions is not met, the analysis of variance technique
cannot be employed to yield valid inferences.
The variance between samples measures the difference between the sample mean
of each group and the overall mean. It also measures the difference from one
group to another. The sum of squares between the samples is denoted by SSB. For
calculating variance between the samples, take the total of the square of the
deviations of the means of various samples from the grand average and divide this
total by the degree of freedom, k-1 , where k = no. of samples.
The total variance measures the overall variation in the sample mean.The total sum
of squares of variation is denoted by SST. The total variation is calculated by taking
the squared deviation of each item from the grand average and dividing this total
by the degree of freedom, n-1 where n = total number of observations.
5. Decision Rule
At a given level of significance, α = 0.05, and at k − 1 and n − k degrees of freedom, the value of F is read from the table. On comparing the values, if the calculated value is greater than the tabulated value, reject the null hypothesis. That means the test is significant, i.e. there is a significant difference between the sample means.
Applicability of ANOVA
Analysis of variance has wide applicability In Six Sigma in the analysis of the data
derived from experiments. It is used for two different purposes:
For example:
Use analysis of variance and determine whether the machines are significantly
different in their mean speed at 5% level of significance.
Solution:
Take the hypothesis that the population means are equal for three samples.
The calculated value of F is more than the tabulated value; the null hypothesis is
rejected at 5% level of significance. Hence the machines are significantly different in
their mean speed.
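The one-way ANOVA below sketches this kind of comparison; the machine-speed readings are hypothetical (the original data table is not reproduced here), and SciPy's f_oneway is one convenient implementation of the F test described above.

```python
from scipy import stats

# Hypothetical speeds (units/hour) recorded for three machines.
machine_1 = [68, 72, 75, 70, 71]
machine_2 = [77, 80, 78, 82, 79]
machine_3 = [60, 65, 63, 62, 64]

f_stat, p_value = stats.f_oneway(machine_1, machine_2, machine_3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Compare with the tabulated F at alpha = 0.05, with k-1 = 2 and n-k = 12
# degrees of freedom.
f_crit = stats.f.ppf(0.95, dfn=2, dfd=12)
print(f"critical F = {f_crit:.2f}")
# F above the critical value -> reject H0: the machine means differ.
```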
In a two way analysis of variance, the treatments constitute different levels affected
by more than one factor. For example, sales of car parts, in addition to being
affected by the point of sale display, might also be affected by the price charged,
the location of store and the number of competitive products. When two
independent factors have an effect on the dependent factor, analysis of variance
can be used to test for the effects of two factors simultaneously. Two sets of
hypothesis are tested with the same data at the same time.
Suppose there are k populations, each normally distributed with unknown parameters, and random samples X1, X2, X3, …, Xk are taken from these populations; the assumptions stated above are taken to hold.
The null hypothesis for this is that all population means are equal against the
alternative that the members of at least one pair are not equal. The hypothesis
follows:
The steps in carrying out the analysis are:
The variance between rows measures the difference between the sample mean of
each row and the overall mean. It also measures the difference from one row to
another. The sum of squares between the rows is denoted by SSR. For calculating
variance between the rows, take the total of the square of the deviations of the
means of various sample rows from the grand average and divide this total by the
degree of freedom, r-1 , where r= no. of rows.
The variance between columns measures the difference between the sample mean
of each column and the overall mean. It also measures the difference from one
column to another. The sum of squares between the columns is denoted by SSC.
For calculating variance between the columns, take the total of the square of the
deviations of the means of various sample columns from the grand average and
divide this total by the degree of freedom, c-1 , where c= no. of columns.
The total variance measures the overall variation in the sample mean.The total sum
of squares of variation is denoted by SST. The Total variation is calculated by taking
the squared deviation of each item from the grand average and divide this total by
degree of freedom, n-1 where n= total number of observations.
F (between columns) = MSC / MSE and F (between rows) = MSR / MSE, where MSC = SSC/(c − 1), MSR = SSR/(r − 1) and MSE = SSE/[(c − 1)(r − 1)] are the corresponding mean squares.
6. Decision Rule
At a given level of significance, α = 0.05, the value of F is read from the table at the appropriate numerator and denominator degrees of freedom. On comparing the values, if the calculated value is greater than the tabulated value, reject the null hypothesis. This means that the test is significant, or there is a significant difference between the sample means.
SSC = Sum of squares between columns
SSR = Sum of squares between rows
c = No. of columns, r = No. of rows
SSE = SST − (SSC + SSR)
For example:
Solution:
c = No. of columns = 4
r = No. of rows = 4
HA : The workers differ with respect to their mean productivity, or at least two differ with respect to their mean productivity
The calculated values of F are more than the tabulated values at the 5% level of significance, so the null hypotheses are rejected. Hence the mean production is not the same for the four machines, and the workers also differ with respect to their mean productivity.
7. Contingency Tables
In categorical data analysis, or when data are in the form of attributes, contingency tables are used to record and analyze the relationship between two or more variables. When each of two categorical variables has two levels, the table is called a 2×2 contingency table. Suppose there are two attributes A and B, where A shows the presence of one characteristic and B the presence of another. If α shows the absence of A and β shows the absence of B, the contingency table is as follows:
For example:
A survey amongst customers of a pizza chain was conducted to study the customer
satisfaction. The observations are as follows: there are two variables, Customer
Satisfaction (Happy or Not Happy) and Pizza Quality (Good or Bad). Observe the
values of both variables in a random sample of 200 customers. A contingency table
can be used to express the relationship between these two variables, as follows:
The totals of the rows and columns (for example, the total of the happy-customer column) are called marginal totals, and the figure in the bottom right-hand corner is called the grand total.
The table shows that the proportion of customers happy with good quality of pizza
is almost the same as the proportion of customers happy with bad quality of pizza.
However the two proportions are not identical, and there must be some difference
between these two variables. The statistical significance of difference between
these categorical variables is explained by using some tests that are Pearson's chi-
square test, a G-test or Fisher's exact test. These tests follow the assumption that
the entries in the contingency table must be random samples from the population
contemplated in the null hypothesis. If the proportion of individuals in the different columns varies with the rows (and, therefore, vice versa), the table is said to show contingency between the two variables. If there is no contingency, the two variables are said to be independent. Any number of rows and columns may be used in contingency tables, and there may also be more than two variables, although higher-order contingency tables are less commonly used. The relationship between ordinal variables, or between ordinal and categorical variables, may also be represented in contingency tables. The degree of association between the two variables can be represented by the phi coefficient, φ, defined by
φ = sqrt( χ² / N )
where, χ2 is derived from the Pearson test, and N is the grand total number of
observations. φ varies from 0 (corresponding to no association between the
variables) to 1 (complete association). φ reaches its maximum of 1 only for symmetrical tables; in asymmetrical tables (those where the number of rows and columns are not equal), it does not reach 1 even with complete association.
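The sketch below runs a chi-square test of independence and computes φ for a 2×2 table modeled on the pizza survey; the cell counts are hypothetical (the survey's actual counts are not given in the text), and SciPy's chi2_contingency is one convenient implementation of the Pearson test.

```python
import numpy as np
from scipy import stats

# Hypothetical counts for the 200 pizza customers: rows = pizza quality
# (good, bad), columns = customer satisfaction (happy, not happy).
table = np.array([[80, 40],
                  [45, 35]])

# correction=False gives the plain Pearson chi-square used in the phi formula.
chi2, p_value, dof, expected = stats.chi2_contingency(table, correction=False)
N = table.sum()
phi = np.sqrt(chi2 / N)          # degree of association for a 2x2 table

print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}, phi = {phi:.2f}")
# A small p-value would indicate contingency (dependence) between
# pizza quality and customer satisfaction.
```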
8. Non-Parametric Tests
Definition
Most statistical tests require the assumption that the population from which the samples are drawn is normally distributed, so that the mean and standard deviation derived from the samples can be used to estimate the corresponding population parameters. Parametric tests are based on these assumptions. Sometimes the data come from non-normal distributions, meaningful sample statistics cannot be calculated, and no assumptions can be made about population parameters such as the mean and the variance. Tests based on such data are classified as Non-Parametric Tests. Since these tests do not depend on the shape of the distribution, they are also called distribution-free tests.
These tests are useful for the data which are nominal and ordinal in nature i.e. for
categorical data.
1. Non-parametric tests are distribution free, i.e. they do not make statements about the population parameter values. The Chi-Square Goodness of Fit test and the Tests of Independence are examples of tests possessing this advantage.
2. Non-parametric tests may be used when the form of the sampled population is unknown.
3. These tests do not require lengthy and time consuming calculations. If significant
results are obtained, no further computation is necessary.
4. Non-parametric procedures may be applied when the data being analyzed consist merely of rankings or classifications. These tests are applicable to all types of qualitative data in rank form, i.e. nominal, ordinal and ratio scaling. (For more information on non-parametric tests, see Chapter 6: Black Belt, Analyze.) For larger samples, the Central Limit Theorem can be used.
A nonparametric procedure that may be used to test the null hypothesis that two
independent samples have been drawn from populations with equal medians is the
Mood’s Median Test.
Assumptions
1. The samples are selected independently and at random from their respective
populations.
3. Variables are continuous in nature and two samples do not have to be of equal
size.
The hypotheses are H0: MA = MB against HA: MA ≠ MB, where MA is the median of the first population and MB is the median of the second population.
Test Statistic
The test statistic used in this case is X², computed from a 2×2 contingency table:
X² = N(ad − bc)² / [ (a + b)(c + d)(a + c)(b + d) ]
where a, b, c and d are the observed cell frequencies and N = a + b + c + d. Since the degrees of freedom for X² tests are (r − 1)(c − 1), a 2×2 contingency table has (2 − 1)(2 − 1) = 1 degree of freedom. To build the table, combine the two samples, arrange the observations in increasing order and obtain the grand median; then determine, for each group, the number of observations falling above and below this median. The resulting frequencies are arranged in the 2×2 contingency table.
Decision Rule
Reject H0 if the computed X² exceeds the tabulated chi-square value with 1 degree of freedom at the chosen level of significance.
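A minimal sketch of the median test follows, using hypothetical cycle-time data from two work cells; SciPy's median_test performs exactly the pooling, 2×2 tabulation and chi-square comparison described above.

```python
from scipy import stats

# Hypothetical cycle times (minutes) from two independent work cells.
cell_A = [12, 15, 11, 18, 14, 16, 13, 17]
cell_B = [19, 22, 16, 21, 24, 18, 20, 23]

# median_test pools the samples, finds the grand median, builds the
# 2x2 table of counts above/below it and applies a chi-square test.
stat, p_value, grand_median, table = stats.median_test(cell_A, cell_B)

print(f"grand median = {grand_median}, chi-square = {stat:.2f}, p = {p_value:.3f}")
print(table)    # counts above / below the grand median for each sample
```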
Kruskal-Wallis Tests
Assumptions
Hypothesis
Test statistic
The observations from all k samples are combined and ranked from smallest to largest. After assigning ranks, the rank sum Ri of each sample is calculated. The test statistic is
H = [ 12 / ( N(N + 1) ) ] Σ (Ri² / ni) − 3(N + 1)
where n1, n2, …, nk are the numbers of observations in each of the k samples and
N = n1 + n2 + … + nk
is the number of observations in all samples combined. When H0 is true, H is distributed approximately as chi-square with k − 1 degrees of freedom.
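The sketch below applies the Kruskal-Wallis test to hypothetical wait-time data from three branches; SciPy's kruskal ranks the pooled data and returns the H statistic and the chi-square-based p-value described above.

```python
from scipy import stats

# Hypothetical customer-wait times (minutes) at three branches.
branch_1 = [5.1, 6.3, 4.8, 7.0, 5.5]
branch_2 = [8.2, 7.9, 9.1, 8.5, 7.4]
branch_3 = [6.0, 6.8, 5.9, 7.2, 6.4]

# kruskal ranks the pooled observations and computes the H statistic,
# which is compared to a chi-square with k - 1 degrees of freedom.
h_stat, p_value = stats.kruskal(branch_1, branch_2, branch_3)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
# A small p-value means at least one branch has a different median wait.
```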
The sign test used for comparing two population distributions ignores the actual magnitudes of the paired observations. A statistical test that makes use of more of the information inherent in the data, compensating for this loss by utilizing the relative magnitudes (ranks) of the observations, is the Mann-Whitney test. This test helps determine whether two samples have come from identical populations.
Assumptions
1. The two samples of size n and m are independent and randomly drawn from
their respective populations.
2. The variables available for analysis are continuous.
3. The measurement scale is at least ordinal.
4. If the populations differ at all, they differ only with respect to their medians.
Hypothesis
H0 : Two samples have come from identical populations and two populations have
equal medians.
If the two populations are symmetric, then within each population the mean and the median are the same.
Test Statistic
To calculate the test statistic, combine the two samples and rank them from smallest to largest. Tied observations are assigned a rank equal to the mean of the rank positions for which they are tied. The test statistic is
T = S − n(n + 1)/2
where n is the size of the first sample and S is the sum of the ranks assigned to the observations of the first sample.
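A minimal sketch of the test follows, using hypothetical defect counts per shift under an old and a new procedure; SciPy's mannwhitneyu reports the closely related U statistic and its p-value.

```python
from scipy import stats

# Hypothetical defect counts per shift under an old and a new procedure.
old_proc = [14, 18, 12, 20, 16, 15, 19]
new_proc = [9, 11, 13, 8, 10, 12, 7]

u_stat, p_value = stats.mannwhitneyu(old_proc, new_proc, alternative='two-sided')
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Reject H0: the two populations do not have equal medians.")
else:
    print("Fail to reject H0.")
```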
Levene’s Test
Levene’s test is used to test k samples for equal variances. Equality of variances across samples is known as homogeneity of variances. The test is used to verify the assumption that variances are equal across groups or samples. Levene’s test is an analog of Bartlett’s test, the parametric procedure used when the data come from normal or nearly normal distributions, and it is less sensitive to departures from normality. It applies an analysis-of-variance F test to the absolute deviations of the observations from their group means (or medians) across the k samples.
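A minimal Python sketch of both tests on invented data, assuming SciPy is available (the median-centered variant of Levene's test is often called the Brown-Forsythe modification):

from scipy.stats import levene, bartlett

# Hypothetical samples from three machines whose variances are being compared.
m1 = [2.1, 2.4, 2.2, 2.6, 2.3, 2.5]
m2 = [2.0, 3.1, 1.8, 3.3, 2.2, 2.9]
m3 = [2.3, 2.2, 2.4, 2.5, 2.1, 2.4]

# Levene's test on absolute deviations from the group medians (robust to non-normality).
w_stat, p_levene = levene(m1, m2, m3, center="median")

# Bartlett's test: the parametric analog, appropriate for near-normal data.
b_stat, p_bartlett = bartlett(m1, m2, m3)

print(f"Levene W = {w_stat:.3f}, p = {p_levene:.4f}")
print(f"Bartlett = {b_stat:.3f}, p = {p_bartlett:.4f}")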
Hypothesis
CHAPTER 7
7 The Improve Phase
The Improve phase comes next in line after the analyze phase in DMAIC. The goal
of Six Sigma is improvement. This is the point where improvements are worked on,
and systems are reorganized or reconstructed around these improvements. The
personnel working on the project select the solution that would work best for the
project keeping the organization goals in mind.
The Improve stage implements the conclusions drawn from the analysis. The data or process analysis is documented in the Analyze stage; in the Improve phase, the organization can act on, retain or reject the results of that analysis.
In this phase, the solutions are discussed and narrowed down to the best options
among them. The best solution is chosen. However, the emphasis remains on
choosing the result which leads to maximum customer satisfaction. This is also the
phase where an innovative solution is found and executed on an experimental basis. In Six Sigma, an experiment is designed before it is tried. Six Sigma offers powerful experimental tools to assist in improvement. Design of experiments (DOE) is the main experimental tool adopted by Six Sigma.
DOE is gaining popularity by the day and has been implemented across several sectors including chemicals, automobiles, software development and engineering. It is a powerful tool for minimizing waste: it reduces costs and increases process yields by reducing variation in the process.
1. Terminology
Earlier, DOE was performed mainly in agriculture, and the terminology used in it clearly reflects this. A designed experiment is one in which one or more variables, known as independent variables, are identified and varied according to a previously drafted plan. The data are collected and studied to find out the effect a variable, or a number of variables, has on the process. There are some other terms associated with DOE. They are:
Response Variable
Primary Variable
There are certain variables that an organization can control and that produce a result. They are known as primary variables. Primary variables can be either quantitative or qualitative.
Background Variable
The variables that cannot be taken as constant are known as background variables.
These variables can nullify the effect of primary variables if they are not taken into
consideration properly. The best way to keep these variables in control is
through blocking.
Blocking- Experimental designs split the observations into different blocks. The blocks are filled in sequence; however, the pattern within each block is distinct. Take the example of a cola company. Assume that they are testing an ingredient for a soft drink. They choose two ingredients (both substitutes for one another), named X and Y. There are four samples each of X and Y. Ideally all the samples would be tested at the same time; however, the mixing process requires that only two be used at a time. The mixing process then becomes a ‘blocking’ factor.
Interaction
This is a situation where the result of one variable depends on the other variable.
Error
Factor
Level
Replication
Replication is the process of carrying out part of the experiment, or the complete experiment, again. The advantage of replication is that it reduces the effect of random variation on the results. It also allows the analyst to estimate the experimental error. However, the experimental error can also be gauged without replicating the entire experiment: if there is variation when all the other variables are held constant, the cause is definitely something other than the variables being controlled by the analyst. Replication also helps to reduce bias.
Outer Array
The innovative aspect of Taguchi’s approach was that each experiment was replicated through an outer array. An outer array is an orthogonal array that deliberately imitates the sources of variation that a product might face in reality. Experiments are conducted in the presence of many noise factors to make the design robust. An outer array is used to minimize the effect obtained as a result of the combination of different noise factors.
Orthogonal Arrays
Objectives of DOE
The objective of carrying out the experiment must be in sync with the goals of the
organization. The main objective, however, remains the optimization of the product
and process designs.
1. Examining the effects of various factors like variables, parameters and inputs on
the performance.
2. Solving the manufacturing problems by carrying out investigative experiments.
3. Checking the effect of individual factors on the performance.
4. Checking the blend of factors that would be best suited to get better
performance.
5. Finding the factor that can be optimally utilized and at the same time give the
best performance.
6. Finding out the cost savings by carrying out a particular experiment.
7. Finding solutions that reduce the sensitivity of the process so that it becomes robust to noise factors.
8. Finding out ways to reduce variation in the performance.
DOE users are aware that factors and interactions with statistical significance
should be included in the mathematical model that predicts the response surface
of the process or product being investigated. Factors that do not show statistical significance should generally be omitted from the model. However, single-factor (main) effects that are not statistically significant on their own are sometimes retained because of hierarchy, i.e., because they are part of a statistically significant higher-order interaction.
Selecting Factors
It is very important to choose the factors before carrying out the experiment. This is
because they are the main ingredients of the process. Factors are categorized as controllable or uncontrollable. Uncontrollable factors, such as noise factors, cannot be held fixed; however, their influence can be managed using blocking and randomization.
Noise factors
According to Genichi Taguchi, loss function is the measure of quality. The quadratic
loss function established by him measures the level of customer dissatisfaction due
to the poor performance of the product. Poor or average performance of the
product and variation are major factors due to which the product fails to achieve
the desired target. There are certain uncontrolled factors which lead to variation in
the performance. They are known as noise factors. Therefore, it is very important to choose a design for the product or the process that is affected by the fewest noise factors, or that is insensitive to them.
The variables which affect the performance of the product from outside like
government policies and environment are known as external source of noise.
The internal variables or the internal settings of the product that affect its
performance are known as internal noise factors. Whenever a product deviates
from complying with its internal features, it becomes the internal source of noise
for the product.
Randomization
Measurement Methods
It is important to choose the measurement methods that would be best suited for
the experiment keeping the organizational goals in mind. This is the next step after
selecting the level of factors and responses.
Attribute Screens
Gauge Blocks
Gauge blocks are another measuring tool used in DOE. They are also known as Johansson gauges or slip gauges. They are lapped measuring standards used as references of length to check accuracy and to set up measuring instruments like micrometers, sine bars and dial indicators.
Calipers
Calipers are devices which help to measure linear dimensions which include
distance between two parallel sides or a diameter. Calipers are used for both
internal and external measurements. Calipers come in various shapes and forms.
Micrometers
Optical Comparators
These are devices which project a magnified image of a part profile onto a screen.
The part profile is then compared to a standard overlay or scale. Optical
comparators are used to measure complex parts. In addition to dimensional
readings, optical comparators are used to detect defects like burs, scratches and
incomplete features.
Tensile Strength
Tensile testing involves making a specimen of the test material. The specimen is gripped at the ends using suitable equipment in a tensile testing machine, and an axial force is applied slowly until the specimen fails. The shape of the specimen is important because the gripping forces can introduce stresses that make the specimen fail prematurely.
Titration
Types of Design
There are different kinds of models available for DOE. Experiments can be made to
suit the goals of a particular organization. A few of the more common types of
designs are discussed below:
Fixed-Effects Model
This is an experimental model where all the possible factors are taken into account
by the experimenter. For instance, if there are four factors affecting the product all
four will be taken into account.
Random-Effects Model
This is another experimental model, in which the factors or levels included are treated as a sample from a larger set, so that one sample can act as a substitute for another. For instance, if there are four possible factors, only two or three may be taken into account.
Mixed Model
This is a model which combines the features of both fixed-effects and random-
effects model.
Randomized Design
The findings in an experimental design are divided into particular blocks. These
blocks follow a particular pattern. However, the pattern within the blocks is
random. The idea of carrying out such an experiment is that a different treatment is applied to each member or component within a particular block.
Example
Latin-Square Design
This is a design where each member appears just once in a particular row or
column. This kind of a design is particularly useful when two non-homogeneous
members need to be tested together. The term Latin-square was first used in
agricultural experiments and the square was actually a square piece of land. The
term is now applied to other areas as well like where non-homogeneous elements
are used. For instance, in an industrial experiment the non-homogeneous elements
can be men, machines and materials. The Latin-square qualifies on two grounds
and they are:
The analyst who will head the experiment should clearly explain why the experiment is being carried out and how it will benefit the organization. The
experiment should be clearly documented and supported by all the members. Two
concepts are very important for an analyst and they are replication and
randomization.
Power in DOE is the probability that the test procedure will detect a real effect, that is, yield a statistically significant result when the null hypothesis is false. Power and sample size are interrelated. The sample size, the Type I (alpha) error, the actual size of the effect and the size of the experimental error are the determinants of statistical power. Power is not easy to calculate because the calculations are quite complex, so in practice it is often treated casually or not computed at all. Power also depends on the size of the experimental effect: if the null hypothesis is wrong by a substantial amount, power will be higher than when the null hypothesis is wrong by a small amount. Moreover, error in measurement inflates the experimental error and can reduce the power by a considerable amount.
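As a rough sketch of how power depends on sample size, the approximate power of a two-sided, two-sample z-test can be computed directly; the effect size, sigma and sample sizes below are hypothetical, and SciPy is assumed to be available.

from scipy.stats import norm

def two_sample_power(delta, sigma, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample z-test to detect a
    mean shift of `delta` given a common standard deviation `sigma`."""
    se = sigma * (2.0 / n_per_group) ** 0.5          # std error of the difference
    z_crit = norm.ppf(1 - alpha / 2)                 # two-sided critical value
    ncp = delta / se                                 # standardized shift
    return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

# Hypothetical numbers: detect a 0.5-unit shift with sigma = 1.0.
for n in (10, 20, 40):
    print(n, round(two_sample_power(delta=0.5, sigma=1.0, n_per_group=n), 3))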
Efficiency
Efficiency is one of the important principles of a design. The basic aim, in fact, of a
DOE is to find out the best possible method to minimize wastage and increase
efficiency. A DOE is carried out so that there can be product and process
optimization. So, DOE leads to cost savings, optimal utilization of resources and
ultimately efficiency.
Interaction
There is interaction between the factors when one factor influences the result of
another factor. It influences the optimum condition and the foreseen results. It is
imperative to decide the interactions before starting the experiment. Therefore, it is
an important principle of DOE. There are two types of interactions. They are:
1. Synergistic: the combined effect of two variables is greater than the sum of their individual effects, as with team effort.
2. Antagonistic: the combined effect of two variables is less than the sum of their individual effects.
Confounding
Confounding is a design where some of the treatment effects are calculated by the
same linear combination of the experimental observations as some of the effects of
blocking. If this is the case, both the treatment and the blocking effects are
confounded. Simply speaking, confounding means that the effects of two factors (or of a treatment and a block) cannot be separated from one another. In general terms, confounding implies that the value of a main effect estimate comes from the main effect itself as well as from higher-order interactions, which are assumed to be statistically insignificant.
One-Factor Experiments are the ones where several results obtained through
experimentation can be compared. This kind of an experiment allows the
experimenter to make a comparison of results obtained through experimentation
which are independent and, most probably, will give a different mean for each
group. The important thing to be taken into account is whether all the means are
equal or not.
One-Factor Experiments can be analyzed in several ways. The results can be plotted on an SPC chart that includes historical data from the standard process. The standard rules for detecting out-of-control conditions then apply, to check whether the process has been modified in some way. The experimenter needs to gather data from a number of sub-groups to arrive at a conclusion. However, he needs to keep in mind that a single sub-group could also fall outside the existing control limits.
Another substitute for the SPC chart is the F-test. It is used to compare the means
of alternate treatments. A cyclist, for instance, is preparing for a cycle race and
wants to gauge the time he would take to complete the race if he applies different
strategies. He compares the usual time he takes to complete the race to the two
times he would take if he applies two alternate strategies. He calculates the time
and records ten data points for each alternative.
The table above shows that the two alternate strategies the cyclist employs (B and C) help him cover the track more quickly. An F-test is performed to determine whether the difference is statistically significant or just a coincidence. For a given confidence level, the F-ratio provides a statistic that can be compared to a value from a probability distribution table to determine whether the treatment means are significantly different. There are various F-distribution tables; the most common ones are those for the two usual confidence levels, 95% and 99%. The following table shows the F-ratio calculation for the above-mentioned example of the cyclist:
The F-ratio of 3.61 is then compared to a value from the F-distribution table for the confidence level that has been chosen. The tabulated value of F at 2, 27 degrees of freedom is 3.35. Since the calculated value is greater than the tabulated value, the null hypothesis is rejected. This shows that the difference in means is too large to be attributed to random chance at the 5% level of significance.
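A minimal Python sketch of such an F-test is given below; the lap times are invented stand-ins rather than the data from the cyclist's table, so the resulting F-ratio will differ from 3.61. SciPy is assumed to be available.

from scipy.stats import f_oneway, f

# Hypothetical lap times (minutes) for the baseline and two alternative strategies.
baseline   = [42.1, 41.8, 43.0, 42.5, 41.9, 42.7, 42.2, 43.1, 42.4, 42.0]
strategy_b = [41.2, 40.9, 41.5, 41.0, 41.8, 40.7, 41.3, 41.1, 41.6, 41.4]
strategy_c = [41.0, 41.4, 40.8, 41.2, 41.7, 40.9, 41.1, 41.5, 41.3, 40.6]

f_stat, p_value = f_oneway(baseline, strategy_b, strategy_c)

# Tabulated F for alpha = 0.05 with (k-1, N-k) = (2, 27) degrees of freedom.
f_crit = f.ppf(0.95, dfn=2, dfd=27)
print(f"F = {f_stat:.2f}, critical F(2, 27) = {f_crit:.2f}, p = {p_value:.4f}")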
This is a design in which all combinations of the factor levels are tested. The number of runs is the number of levels raised to the power of the number of factors. For instance, if an experiment has 2 factors and each factor has 3 levels, then the number of runs would be 3² = 9. So, if an experiment has many factors, it is better to conduct screening experiments first to reduce the number of potential factors.
Full Factorial Experiments make use of the Yates method. It is important to arrange the data in a fixed (standard) order to use the Yates algorithm. Each factor is evaluated at both a high and a low level in the experimental design. The high levels are denoted by a ‘+’ sign and the low levels by a ‘−’ sign. A full factorial table may look like this:
The advantage of a full factorial experiment is that many factors can be studied at one time. The disadvantage is that the cost rises sharply as the number of factors increases.
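For illustration, enumerating a full factorial design is straightforward in Python; the factor names and levels below are hypothetical.

from itertools import product

# Hypothetical example: three factors, each at two levels (- = low, + = high).
factors = {"temperature": ["-", "+"], "pressure": ["-", "+"], "speed": ["-", "+"]}

# A full factorial design enumerates every combination: 2**3 = 8 runs.
runs = list(product(*factors.values()))
print(f"{len(runs)} runs")
for i, run in enumerate(runs, start=1):
    print(i, dict(zip(factors.keys(), run)))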
Two-level factorial experiments are used to examine the effect of various factors at
the same time. This kind of a design saves a lot of costs as its run-size is a fraction
of the total number of combinations of the factors’ levels. The designs are
positioned according to the power they possess to detect the effect of factors.
Aberration is the most common criterion for ranking the designs. If an experiment has k factors, the aberration criterion considers the numbers of aliased effects of order 1, 2, …, k.
The number of runs suggested for a two-level fractional factorial design is 2^(n−p), for a 2^(n−p) fractional factorial experiment. Extra runs are needed to design two-
level factorial experiments in blocks of size two to gauge all the effects which are
available. The accuracy in these experiments is different because different numbers
of observations are used for estimation in the analysis of the resulting data. Also, the trade-off between reducing the run size and possibly neglecting real effects is a matter of concern when the number of factors is large.
The advantage of using two-level or multi-level designs is that the overall error rate is much lower. If we performed the same tests with single-response methods, the error rate would go up because many individual tests would be performed at the same time. Sometimes the individual analyses prove to be of little use because there are many responses; this is where multi-level or two-level designs come in handy. Moreover, the interrelationships between the response variables can be exploited using two-level fractional factorial designs.
Introduction
This kind of a quality check involves examining and modifying the process,
predicting and rectifying the errors, reviewing and disposition of the product. On-
line quality control also includes follow-up on the defective products shipped to the
customer.
This kind of a quality check includes activities that involve cost savings and quality
improvement methods at the process and product designing stages. Off-line quality
check is a part of the product development cycle. There are three facets to this
control. They are listed below:
a. System Design
This is a stage which does not involve any statistical calculations. Rather, it is a stage where scientific and technological know-how is applied. It is used to produce an initial, prototype design. This design acts as the prototype for the process design characteristics, and the initial settings of the product are based on it. This is a stage where innovation is the key: new ideas and knowledge are brought together to determine the ways to efficiently produce the best product.
b. Parameter Design
c. Tolerance Design
This is a design activity to establish tolerances for the product or process so as to reduce manufacturing and lifetime costs. It is the final step, after parameter design, in specifying the product and process designs, and it identifies the tolerances on the factors. Because the factors affect variation, tolerances are tightened only after the parameter design stage, and only for those factors whose target quality values have not been achieved.
Performance Statistics
Performance statistics are established to measure the effect of noise factors on the
performance settings. Taguchi makes use of a lot of performance statistics which
are an indication of the variation in the performance. These statistics also help to
reduce the loss and raise the level of performance.
Signal to Noise Ratios
Signal factors, unlike the noise factors, are the ones which are under the control of
the user of the product. Sending a short message on a mobile is in the user’s hands
and therefore it makes it a signal factor. However, the delivery of the message is
not in the hands of the user. So, it is a noise factor because it is uncontrollable. The
best product is the one which responds to the user’s signals and is unaffected by
noise factors. Therefore the goal of the experiment should be to maximize the
signal-to-noise ratio.
If the experiment is not being performed with one particular goal in mind, then the signal-to-noise ratio should be based on the Mean Squared Deviation (MSD) of the repeated results. Quality can then be quantified in terms of a particular product’s response to noise factors and signal factors. The advantage of the MSD is that it is in line with Taguchi’s concept of quality.
Smaller-the-better:
In cases where you want to minimize the occurrence of some undesirable product characteristic, you would compute the following S/N ratio:
S/N = −10 log10 [ (1/n) Σ yi² ]
where y1, y2, …, yn are the observed values of the characteristic and n is the number of observations.
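A minimal sketch of this calculation, using invented response values and assuming NumPy is available:

import numpy as np

def sn_smaller_the_better(y):
    """Taguchi smaller-the-better S/N ratio: -10*log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical repeated responses for two factor settings (smaller is better).
setting_1 = [0.12, 0.15, 0.10, 0.14]
setting_2 = [0.25, 0.22, 0.28, 0.24]
print(round(sn_smaller_the_better(setting_1), 2))  # larger S/N = better
print(round(sn_smaller_the_better(setting_2), 2))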
Expected Loss
Producers tend to ignore the expected loss, which prevents markets from operating efficiently. However, Taguchi is of the view that a reduction in these losses
increases brand loyalty, sales and ultimately profits. The parameter design gets
more meaning when the quadratic loss function is modeled. The quadratic model is
advantageous because it helps achieve the target value faster and reduces the
variation in the process.
8. Mixture Experiments
A mixture experiment involves blending various components whose proportions sum to a constant value, and analyzing the resulting mixtures. For example, if you want to
optimize the effect of a fertilizer on a yield, consisting of 4 different brands of
fertilizers, then the sum of the proportions of all fertilizers in each mixture must be
100%. Thus, the task of optimizing mixtures commonly occurs in food-processing,
refining, or the manufacturing of chemicals. A number of designs have been
developed to address specifically the analysis and modeling of mixtures.
Triangular Coordinates
The mixture proportions are usually summarized using the triangular or ternary
graphs. If you have a mixture that contains 3 components A, B and C, they can be
summarized by a point in the triangular coordinate system defined by all the three
variables. The following are 6 different mixtures of the 3 components, A, B and C.
The sum for each mixture is 1.0, so the values for the components in each mixture
can be interpreted as proportions. If these data are graphed on a regular 3D scatter plot, the points form a triangle in the 3D space: all points where the component values sum to 1 lie on that triangle, and these constitute the valid mixtures. So, it is sufficient to plot the triangle to summarize the proportions for each mixture.
Response Surface Analysis or Method (RSM) is another important tool like design of
experiments as it is also used to measure the effects of the variation in the process
characteristics. The difference between the two is that RSM is a better tool to
measure non-linear process performance factors like process contours. However,
DOE is suitable to study only the high and low levels of performance variation. DOE
is more time consuming than RSM. An analysis, taking 8 to 10 factors in
consideration, would take one week to complete using RSM methodology but with
DOE it would take about two to three weeks.
The RSM methodology, in a complex engineering system, for instance, helps to assess the values of the design variables that optimize the performance settings subject to the constraints of the system. RSM is also used to obtain mathematical models that describe the functional relationships between the performance settings and the design variables.
RSM is also helpful because it minimizes the cost associated with computing failure probability. RSM is adopted to overcome the difficulty of evaluating failure probability as a function of the design variables: failure probability estimates are noisy and unfit for gradient-based optimization. RSM is also used for the optimization of tasks which are otherwise difficult to quantify.
The RSM approach also carries some drawbacks. It can be very costly when highly accurate and reliable estimates are required.
An important result from calculus that the gradient points in the direction of the
maximum increase of a function is used in the steepest ascent (descent) technique
to decide how the initial settings of the parameters should be changed to result in
an optimal value of the response variable. The direction of movement is made
proportional to the estimated sensitivity of the performance of each variable.
The parallel-tangent points are obtained from bitangents and inflection points of
occluding contours when the problem is not an n-dimensional elliptical surface.
Parallel tangent points are points on the occluding contour where the tangent is
parallel to a given bitangent or the tangent at an inflection point.
The experiments are first run as first-order designs with center points. If the curvature is not significant, the response in the experimental region can be treated as approximately planar and the path of steepest ascent can be found. If the curvature is large, additional testing is required so that a second-order design can be carried out.
The usually used classical experimental designs are Central Composite Design
(CCD) and Box-Behnken (BB). The commonly used space filling designs are: Random
Latin Hyper cubes, Orthogonal Array (OA), and Orthogonal Array-based Latin
hypercube.
In the classical design and analysis of experiments, sample points are placed at the boundaries of the design region, and data points may be replicated to account for random variation. In space-filling designs, by contrast, extra sample points are taken to fill the gaps and cover the design space. Space-filling designs present better “coverage” of the design space and are widely used. For more than 3 design variables, space-filling experimental designs give better results than classical experimental designs.
Box-Behnken Design
A Box-Behnken design requires only three levels, coded as −1, 0 and +1. This design was created by Box and Behnken by combining two-level factorial designs with incomplete block designs. The method creates designs with advantageous statistical properties and, most importantly, requires only a fraction of the experiments needed for a full three-level factorial. These designs offer limited blocking options, except for the three-factor version.
For two factors, a Central Composite Design (CCD) uses only 9 treatments: a factorial portion plus a star pattern with one treatment at the central position. It has a centered and symmetric form. It is a second-order design which is often used to optimize tablet formulations. CCD makes it possible to develop response surfaces which permit the ranking of each variable according to its significance for the calculated responses. Thus, it helps to predict the formulation composition that will produce a desired response.
Latin hypercube sampling was introduced by McKay, Beckman and Conover. A Latin
hypercube sample is represented as 3-tuple LH(m, n, s) where m, n, and s are the
number of sample points, the number of input variables, and the number of
symbols (integers from 1 to s) respectively. A general condition to satisfy is that m is
a multiple of s.
A feature exhibited by a Latin Hypercube sampling is that if you divide any input
variable into s equally spaced bins, each bin gets k=m/s data points. One more
significant feature of this sampling method is that it generally gives a lower variance
in function approximation.
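A minimal sketch of generating a Latin hypercube sample (here with s = m bins, so each bin of each variable receives exactly one point) might look like this; the sizes and random seed are arbitrary, and NumPy is assumed to be available.

import numpy as np

def latin_hypercube(m, n, seed=None):
    """Generate m sample points in n dimensions on [0, 1) such that each
    of the m equal-width bins of every variable contains exactly one point."""
    rng = np.random.default_rng(seed)
    samples = np.empty((m, n))
    for j in range(n):
        # One point per bin: a random permutation of the bins, jittered inside each bin.
        perm = rng.permutation(m)
        samples[:, j] = (perm + rng.random(m)) / m
    return samples

pts = latin_hypercube(m=10, n=3, seed=42)
# Check the defining property: each of the 10 bins of the first variable holds one point.
print(np.sort((pts * 10).astype(int), axis=0)[:, 0])  # prints 0..9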
Orthogonal Arrays
An orthogonal array is a set of tables used to determine the trial conditions and the number of experiments. Orthogonal arrays are represented in the form L-N(s^k), where N is the number of experimental runs, s is the number of levels and k is the number of factors; for example, an array with 2 levels and 5 factors. The arrays L-4, L-8, L-16, L-32 and L-64 are all two-level arrays.
Orthogonal Array based Latin Hypercube Sampling was introduced by Tang. The sample points obtained from this method are Latin hypercube samples; in addition, when each coordinate X_ji is mapped through the transformation ⌈s·X_ji⌉, the resulting set of points forms an orthogonal array. Such a sample has the following properties:
1. If you divide any input variable into s equally spaced bins, each bin gets exactly one sample point.
2. If you divide any two input variables into s×s bins, each bin contains exactly one sample point.
Rotatability
Importance
Both the traditional experimental designs and the Taguchi method are radical
methods. They help in the product and process optimization both in terms of levels
and design factors during the production process. They can prove to be very useful
in improving quality during the design stage. Although these methods have a lot of
utility, they are very costly in terms of money, time and manpower. Also, most of
the times, they hinder the manufacturing process. Therefore, they are not carried
out actively as a routine.
The idea behind conducting the experiment is to replace the stagnant working of a
process by an orderly scheme which causes minute changes in the controlled
variables. The result of these minute changes is analyzed and this helps steer the
process in the direction of improvement.
1. If X is taken as the current production level, it is set as the center of the scheme and is examined by designed experimentation.
2. Acceptable levels such as X − Δ and X + Δ, which are within the specification limits, should be chosen.
3. The quality of the process output at all three production levels (X, X − Δ and X + Δ) should be evaluated, and the level which produces the highest quality of product should be chosen.
If X is taken as the first factor at the current production level and Y as the second factor, the quality of the output must be evaluated at all the different combinations of X and Y, including the Δ offsets. The combination which produces the highest quality becomes the new center point.
The Three-Factor EVOP follows the same procedure as the Two-Factor but with
three factors.
CHAPTER 8
8 The Control Phase
Objectives
The objectives of the CONTROL phase are to:
The Six Sigma project is nearly complete. The goals have been met and the customer has accepted the deliverables. You must be thinking: now what? There is one thing to be careful about: you have to see that the process or project doesn’t backslide. Control is essential to ensure there is no backsliding. An organization has to ensure that the gains are permanent and that there is process stability. A solution is of little or no value if it isn’t sustained over a long period of time.
This is the phase where changes which were made in the X’s are maintained in
order to sustain or hold the improvements in the resulting Y’s.
The Control Phase is the last step to sustaining the improvement in the DMAIC
methodology. The Control phase is characterized by completing the project work
and handing over the improved process to the process owner. The control phase
gets special emphasis in Six Sigma as it helps to ensure that the solution stays
permanent and it provides additional data to make further improvements. It has
been recorded in previous experiences that hard earned results are very difficult to
sustain if the process is left to itself. There is inherent self control in a well designed
process unlike poor processes which require external control.
The main objective in any production process is to control and maintain the quality of the manufactured product or service so that it conforms to specified quality standards, ensuring that the proportion of defective items is not too large. This is known as process control. On the other hand, product control means controlling the quality of the product by inspecting it at crucial points, through sampling inspection plans.
The objective of the Control Phase is to ensure that the improved processes now
enable the key variables of the process to stay within the maximum acceptable
limits, by using tools like SPC.
The SPC expert collects information about the process and does a statistical
analysis on that information. He can then take necessary action to ensure that the
overall process stays in-control and to allow the product to meet the desired
specifications. He can recommend ways and means to reduce variations, optimize
the process, and perform a reliability test to see if the improvements work.
Statistical process control means planned collection and effective use of data for
studying causes of variations in quality, either as between processes, procedures,
materials, machines, etc., or over periods of time. This cause and effect analysis is
then fed back into the system with a view to continuous action on the process of
handling, manufacturing, packaging, transporting and delivery at end-use.
Chance Causes of Variation
These are variations which result from many minor causes that behave in a random manner. Chance causes of variation form a stable pattern of variation. There is no way in which they can be completely eliminated. One has to allow for a certain variation within this stable pattern, usually termed allowable variation. The range of such variation is known as the natural tolerance of the process.
Assignable Causes of Variation
These are variations that may be attributed to special, non-random causes. These are also termed preventable variation. These variations can creep in at any stage of the process, right from the arrival of the raw material to the final delivery of goods. Such variations can be the result of several factors such as a change in the raw material, a new operation, improper machine settings, broken parts, mechanical faults in a plant, etc. Assignable causes can be identified and eliminated, and they are to be discovered in the production process before the output becomes defective.
1. Without SPC, the basis for decisions regarding quality improvement is based on
intuitions. SPC provides a scientific background for decision regarding the product
improvement.
2. SPC helps in the detection and correction of many production troubles.
3. SPC brings about a substantial improvement in the product quality and reduction
of spoilage and rework.
4. SPC gives information about when to leave a process alone and when to take
action to correct troubles.
5. In the presence of good statistical control by the supplier, the previous lots
supply evidence on the present lots and it provides better quality assurance at
lower inspection cost.
6. If testing is destructive in nature, a process in control gives confidence in the
quality of untested product.
7. SPC reduces the waste of time and material to a certain extent by giving an early warning about the occurrence of defects.
2. Selection of Variable
Control charts are a tool in statistical process control, and selection of variable means choosing which quality characteristics to chart. The variable chosen for control with average (X-bar) and range (R) charts must be something that can be measured and expressed in numbers, such as a dimension, hardness, tensile strength, weight, etc.
The real basis of choice is always the possibility that costs can be reduced or
prevented. From the stand point of possibility of reducing production costs, the
candidate for a control chart is any quality characteristic that is causing rejection or
rework involving substantial costs. From the inspection and acceptance points of view, destructive testing always suggests an opportunity to use control charts to reduce costs. In general, if acceptance is on a sampling basis and the quality tested can be expressed as a measured variable, it is important to examine the possibility of reducing these costs by basing acceptance on control charts for variables. The best chances to save costs are often in places that would not be suggested either by an examination of the costs of spoilage and rework or of inspection costs.
While selecting variables for the initial application of the control chart technique, it
is important not only to choose those variables with opportunities for cost saving
but to meticulously select a type of saving that everyone, including those in a
supervisory capacity and the management, will readily accept as being a real saving.
The statistical tools applied in the process control are control charts (discussed in
the subsequent sections). The primary objectives of process control are (a) to keep
the manufacturing process in control so that the proportion of defective units is not
excessive and (b) to determine whether a state of control exists.
3. Rational Sub-Grouping
The first method of sub-grouping is to make each sub-group consist of products produced as nearly as possible at one time; the next sub-group consists of products produced as nearly as possible at a later time, and so forth. This method follows the rule for selection of rational sub-groups of permitting a minimum chance for variation within a sub-group and a maximum chance for variation from sub-group to sub-group. It gives the best estimate of the capability of the process that would be obtainable if assignable causes of variation could be eliminated, and it provides a more sensitive measure of shifts in the process average.
The other method of sub-grouping is to make each sub-group representative of the entire production over a given period of time; the next sub-group is intended to represent all the production of approximately the same quality of product in a later period, and so forth. This method is preferred when one of the purposes of the control chart is to influence decisions on acceptance of the product.
The highest and the lowest number in the sub-group must first be identified. With
large sub-groups it is better to mark the highest value with the letter H and the
lowest with the letter L. The Range is calculated by subtracting the lowest value
from the highest value i.e. R = (H-L).
4. Analysis of Control Charts
A control chart is a statistical tool principally used for the study and control of
repetitive processes. It is a graphical tool used for presenting data so as to directly
expose the frequency and extent of variations from the established standard goals.
Control charts are simple to construct and easy to interpret, and they tell at a glance whether or not the process is in control, i.e., whether the plotted statistic lies within the control limits.
A Six Sigma enterprise or any industry in general faces two kinds of problems:
a. To check whether the process is conforming to the standards.
b. To improve the level of standard and reduce variability consistent with cost
considerations.
Shewhart’s control charts provide an answer to both. They provide criteria for
detecting lack of statistical control. Control charts are the running records of the
performance of the process and, as such, they contain a vast store of information
on potential improvements.
1. A central line to indicate the desired standard or level of the process (CL)
2. An Upper Control Limit (UCL)
3. A Lower Control Limit (LCL)
In the control chart, the upper control limit (UCL) and the lower control limit (LCL)
are usually plotted as dotted lines and the central line (CL) is plotted as a dark line.
If t is the underlying statistic, then these values depend on the sampling distribution of t and are given by:
CL = E(t)
UCL = E(t) + 3 S.E.(t)
LCL = E(t) − 3 S.E.(t)
where S.E.(t) is the standard error of t.
From time to time a sample is taken and the data are plotted on the graph paper.
As long as the sample points fall within the upper and lower control limits there is
no cause for worry, as the variation between the sample points can be attributed to
chance causes. The problem occurs only when a sample point falls outside the
control limits. This is considered as a danger signal, which indicates that assignable
causes give rise to variations.
The statistical tools for data analysis in quality control of the manufactured
products are given by four techniques.
Variables are those quality characteristics of a product which are measurable and can be expressed in specific units of measurement, such as diameter, tensile strength, life, etc. Such variables are of a continuous type and follow the normal probability law. For quality control of such data, two types of control charts are used; technically these charts are known as the X-bar chart (control chart for averages) and the R chart (control chart for ranges).
The X- bar chart is used to show the quality averages of the samples drawn from a
given process and the range chart is used to show the variability of the quality
produced by a given process.
During the production process, some amount of variation is produced in the items. The control limits in the X-bar and R charts are so placed that they show the presence or absence of assignable causes of variation in the process.
The range chart is used to show the variability of the quality produced by a given process. The R chart is generally presented along with the X-bar chart. The general procedure for constructing the R chart is similar to that of the X-bar chart. The values required for constructing an R chart are the sample ranges, their average R-bar, and the control chart constants D3 and D4.
Construction Procedure for Control Charts for X-bar and R
Control charts are plotted on a rectangular co-ordinate axis. The vertical scale
represents the statistical measure of X-bar and R, and the horizontal scale
represents the sample number.
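For illustration, the X-bar and R chart limits can be computed from subgroup data using the standard chart constants; the data below are simulated, and the constants shown are the tabulated values for subgroups of size 5. NumPy is assumed to be available.

import numpy as np

# Control chart constants for subgroup size n = 5 (from standard tables).
A2, D3, D4 = 0.577, 0.0, 2.114

# Hypothetical data: 8 subgroups of 5 measurements each.
rng = np.random.default_rng(1)
subgroups = rng.normal(loc=10.0, scale=0.2, size=(8, 5))

xbars = subgroups.mean(axis=1)
ranges = subgroups.max(axis=1) - subgroups.min(axis=1)
xbar_bar, r_bar = xbars.mean(), ranges.mean()

# X-bar chart limits: CL +/- A2*R-bar ; R chart limits: D3*R-bar and D4*R-bar.
print(f"X-bar chart: CL={xbar_bar:.3f}, UCL={xbar_bar + A2 * r_bar:.3f}, LCL={xbar_bar - A2 * r_bar:.3f}")
print(f"R chart:     CL={r_bar:.3f}, UCL={D4 * r_bar:.3f}, LCL={D3 * r_bar:.3f}")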
2. Control Charts for Attributes
Attributes are those product characteristics which cannot be measured. Such
characteristics can only be identified by their presence or absence from the
product, for example, whether a bottle has cracks or not. Attributes may be judged
either by the proportion of units that are defective or by the number of defects per
unit. For quality control of such data, two control charts are used: the p chart (control chart for fraction defective) and the c chart (control chart for number of defects).
The p chart is used for attributes when the quality characteristic of the product is not amenable to measurement but can be identified by its presence or absence, or by classifying the product as defective or non-defective.
The c chart is used with advantage when the characteristic representing the quality of a product is a discrete count, for example the number of surface defects observed on a sheet of photographic film.
If sample size n is varying then the statistic used is u = c/n. If ni is the sample size
and c i is the total number of defects observed in the ith sample then,
ui = ci / ni ( i=1,2,……….k)
gives the average number of defects per unit for the ith sample.
The pattern of variation in the data follows a Poisson distribution, with equal mean and variance. If c is taken as a Poisson variate with parameter Λ, then, when standards are given,
CL = Λ, UCL = Λ + 3√Λ, LCL = Λ − 3√Λ
When standards are not given: Λ is not known.
Λ is estimated by the mean number of defects per unit. Thus, if ci is the number of defects observed on the ith inspected unit (i = 1, 2, …, k), the estimate of Λ is given by
Λ̂ = c̄ = (c1 + c2 + … + ck) / k
and the limits become CL = c̄, UCL = c̄ + 3√c̄, LCL = c̄ − 3√c̄.
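A minimal sketch of a c chart calculation with hypothetical defect counts, assuming NumPy is available:

import numpy as np

# Hypothetical defect counts on 12 inspected units (c chart, standards not given).
c = np.array([3, 5, 2, 4, 6, 3, 2, 5, 4, 3, 7, 4])

c_bar = c.mean()                 # estimate of the Poisson parameter
ucl = c_bar + 3 * np.sqrt(c_bar)
lcl = max(0.0, c_bar - 3 * np.sqrt(c_bar))
print(f"CL={c_bar:.2f}, UCL={ucl:.2f}, LCL={lcl:.2f}")
print("out-of-control points:", c[(c > ucl) | (c < lcl)])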
6. PRE - Control
PRE-control is a statistical technique that can be used with X-bar (Average) and R (Range) control charts. The technique helps operators control the process so that the proportion of defective items is reduced during production. It describes the process situations and causes of variation that could produce defects, and establishes control limits without using the normal calculations associated with upper and lower control limits.
PRE-control can draw control limits and conclusions for a small number of
subgroups while SPC requires 25 subgroups. PRE-control gives feedback about the
process from the start. With PRE-control limits, the process is centered between specified limits, that is, the control limits lie within the specification limits. PRE-control does not require any calculations or plotting. It divides the tolerance band into three parts to provide control information. The technique can be explained with the help of a symmetric normal distribution curve, which shows the variation within the spread of a production process that may produce an increase in defects.
Two PRE-control (PC) lines in the normal curve can be drawn to set the control
limits, each one quarter of the way inside of the specification limits. From the figure
given below, it is clear that 86% of the items should fall within the PC limit lines, leaving 14%, with 7% in each outer band between a PC line and the specification limit. In other words, roughly 1 part in 14 will fall outside a PRE-control line even when the process is behaving well. Thus the chance of two consecutive readings falling outside the same PRE-control line is (1/14) × (1/14), or 1/196. This is the foundation of PRE-control: only about one pair in every 200 would have two consecutive pieces fall in the same outer band by chance. Considering all 4 possible combinations of two consecutive pieces in the two outer bands, the chance is 4/196, or nearly 2%. In other words, when two consecutive pieces fall in the outer bands the operator gets a signal to adjust the process, and such a signal is a false alarm only about 2% of the time.
Advanced Statistical Process Control
Exact Method of Computing Control Limits for Short and Small Runs
The procedure, which applies to any situation where a small number of subgroups will be used to set up a control chart, or to short runs, consists of three stages:
Stage three assumes that there are no causes of variation between the runs. If
there are, the process may go out of control. This approach will lead to the use of
standard control charts tables when enough data are accumulated.
1. After the initial setup, run 3 to 10 pieces without adjusting the process.
2. Compute the average and the range of the sample.
3. Compute T = (average - target) / Range
Use absolute values. The target value is usually the specification midpoint or
nominal.
4. If T is less than the critical value T in the table, (given below) accept the setup.
Otherwise adjust the setup to bring it closer to the target. There is approximately 1
chance in 20 that an on-target process will fail this test.
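For illustration, the setup-approval statistic T can be computed as follows; the measurements and target are hypothetical, and the result would be compared against the critical value from the table.

# Hypothetical setup-approval check: 5 pieces after setup, target = 25.00 mm.
pieces = [25.03, 24.98, 25.05, 25.01, 24.99]
target = 25.00

average = sum(pieces) / len(pieces)
rng = max(pieces) - min(pieces)        # sample range
T = abs(average - target) / rng
print(f"T = {T:.3f}")  # accept the setup if T is below the tabulated critical value for n = 5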
In the SPC technique, as long as the variation in the statistic being plotted remains
within the control limits, the process can be left alone. If a plotted statistic exceeds
a control limit, the cause has to be ascertained. This approach works fine as long as
the process remains static. However, the means of many automated manufacturing processes often drift because of inherent process factors, a drift produced by common causes. In spite of this, there may be known ways of
intervening in the process to compensate for the drift. This drift can be studied
through the use of a common cause chart. This approach involves creating a chart
of the process mean. Here control limits are not used, but action limits are placed
on the chart. Action limits are computed based on costs rather than on statistical
theory and a prescribed action is taken to bring the process closer to the target
value.
These charts are called ‘common cause charts’ because the changing level of the
process is due to built-in process characteristics. The process mean is tracked by
using exponentially weighted moving averages (EWMA).
1. EWMA Charts use the actual process data to determine the predicted process
value for processes that may be drifting i.e. they can be used when processes have
inherent drift.
2. If the process has trend or cyclical components, the EWMA will reflect the effect
of these components.
3. EWMA Charts provide a forecast of where the next process measurement will be.
This allows feed-forward control.
4. EWMA Charts can be used to take preemptive action to prevent a process from
going too far from the target. EWMA models can be used to develop procedures for
dynamic process control.
5. If the process has inherent non-random components, an EWMA common cause control chart should be used.
The relationship between X-bar and EWMA charts helps in understanding the EWMA chart. X-bar charts give 100% of the weight to the current sample and 0% to past data; this corresponds to Λ = 1 on an EWMA chart, in which case the plotted points are all independent of one another. In contrast, the EWMA chart uses information from all previous data. The X-bar chart treats all the data points as coming from a process that does not change its central tendency.
While using an X-bar chart it is not essential that the sampling interval be kept constant; the process is assumed to behave as if it were static. However, the EWMA chart is designed to account for process drift and, therefore, the sampling interval should be kept constant while using EWMA charts. This is usually not a problem with automated manufacturing. Control limits can be placed on the EWMA chart when the situation demands it.
The three-sigma control limits for the EWMA chart are computed as
CL ± 3σ √[Λ / (2 − Λ)]
where CL is the target (or overall process average), σ is the process standard deviation and Λ is the EWMA weighting factor.
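A minimal Python sketch of an EWMA chart with these asymptotic limits, using simulated drifting data and an assumed weighting factor of 0.2 (NumPy assumed available):

import numpy as np

def ewma_chart(x, target, sigma, lam=0.2):
    """Compute EWMA statistics and asymptotic three-sigma limits.
    z_t = lam*x_t + (1 - lam)*z_{t-1}, starting from the target value."""
    z = np.empty(len(x))
    prev = target
    for t, xt in enumerate(x):
        prev = lam * xt + (1 - lam) * prev
        z[t] = prev
    half_width = 3 * sigma * np.sqrt(lam / (2 - lam))
    return z, target + half_width, target - half_width

# Hypothetical individual measurements from a slowly drifting process.
rng = np.random.default_rng(7)
data = 50 + np.cumsum(rng.normal(0.05, 0.3, size=30))  # small built-in drift
z, ucl, lcl = ewma_chart(data, target=50.0, sigma=0.3, lam=0.2)
print("first EWMA values:", np.round(z[:5], 2), " UCL:", round(ucl, 2), " LCL:", round(lcl, 2))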
CUSUM-charts are particularly used for detecting small shifts in the process.
CUSUM-charts have shown more efficiency in detecting small shifts in the
mean of a process. This chart helps in easy visual interpretation of the data.
When it is desired that shifts be detected in the mean that are 2-sigma or less,
these charts are used. This chart also detects process changes more rapidly
than the control chart stability rules. Therefore this chart can be chosen to
monitor for small process shifts (less than 1.5-sigma).
Tabular CUSUM
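As an illustration of the tabular form, the one-sided CUSUM statistics can be accumulated as follows; the allowance k = 0.5 and decision interval h = 4 (in sigma units) are common textbook defaults, and the data are simulated (NumPy assumed available).

import numpy as np

def tabular_cusum(x, target, sigma, k=0.5, h=4.0):
    """One-sided upper and lower CUSUM statistics.
    k is the allowance (in sigma units), h the decision interval (in sigma units)."""
    c_plus = c_minus = 0.0
    signals = []
    for t, xt in enumerate(x):
        c_plus = max(0.0, c_plus + (xt - target) / sigma - k)
        c_minus = max(0.0, c_minus + (target - xt) / sigma - k)
        if c_plus > h or c_minus > h:
            signals.append(t)
    return signals

# Hypothetical data: a 1-sigma upward shift after observation 15.
rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(10, 1, 15), rng.normal(11, 1, 15)])
print("signal at observations:", tabular_cusum(data, target=10.0, sigma=1.0))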
V-mask CUSUM
Since a CUSUM chart is used for variable data which plots the cumulative sum
of the deviations from a target, a V-mask is used as the control limits. Since
each plotted point on the CUSUM chart uses information from all prior
samples, it detects much smaller process shifts than a normal control chart
would. CUSUM charts are especially effective with a subgroup size of one. Run
tests should not be used since each plotted point is dependent on prior points
as they contain common data values.
CUSUM charts may also be preferred when the subgroups are of size n=1. In
this case, an alternative chart might be the individual X chart, in which case
you would need to estimate the distribution of the process in order to define
its expected boundaries with control limits.
As with other control charts, CUSUM charts are used to monitor processes
over time. The charts' x-axes are time based, so that the charts show a history
of the process. For this reason, you must have data that is time-ordered; i.e.,
data entered in the sequence from which it was generated.
8.3 Understand Appropriate Uses of Moving Averages Charts
The Moving Average chart monitors the process location over time, based on the
average of the current subgroup and one or more prior subgroups .The X-axis of
the charts is dependent on time, so that they can show a history of the process. For
this reason, time-ordered data is needed. If this is not possible, then trends or shifts
in the process may not be detected.
Moving Average - Range Charts may be used when the cell size is less than ten
subgroups. If the cell size is greater than ten, Moving Average - Sigma charts may
be used.
Moving Average - Sigma Charts are a set of control charts used for quantitative and
continuous data in measurement. It monitors the variation between the subgroups
over time. The Moving Average - Range charts also monitor the variation between
the subgroups over time.
The plotted points for a Moving Average - Sigma Chart, called a cell, include the
current subgroup and one or more prior subgroups. Each subgroup within a "cell"
may contain one or more observations, but must all be of the same size.
The control limits on the Moving Average chart are derived from the average
moving sigma, so if the Moving Sigma chart is out of control, the control limits on
the Moving Average chart are meaningless.
These charts are generally used for detecting small shifts in the process mean. They detect shifts of 0.5-sigma to 2-sigma much faster than Shewhart charts with the same sample size. They are, however, slower in detecting large shifts in the process mean.
Run tests in this case cannot be used because of the dependence of data points. These charts can also be used when the subgroups are of size n=1.
Another use of the Moving Average Charts is for processes with known intrinsic
cycles. Many accounting processes and chemical processes fit into this
categorization. Suppose you sample at set intervals and set the cell size equal to
the number of subgroups per cycle. As you drop the oldest sample in the cell, you
pick up the corresponding point in the next cycle. If the cyclical nature of the
process is upset, then the new points added will be substantially different, causing
out of control points.
The advantage of CUSUM, EWMA and Moving Average charts is that each
plotted point includes several observations; therefore the central limit
theorem can be used to say that the average of the points or the moving
average is normally distributed and the control limits are clearly defined.
The lean tools for control are described in detail in Chapter 9- Lean Enterprise.
Some of the lean tools are listed below:
1. Poka Yoke
Mistake proofing, also known as Poka Yoke, can be used in control planning to
make sure the problem is eliminated for good.
Poka Yoke is a mechanism that prevents a mistake from being made. It works by eliminating or greatly reducing the opportunity for an error, or by making the error so obvious at first glance that a defect reaching the customer becomes almost impossible. Poka Yoke creates actions that have the ability to eliminate mistakes, errors and defects in everyday processes and activities. In other words, it is used to prevent the causes that give rise to defects. Mistakes are not converted into defects if the errors are discovered and eradicated beforehand.
The Type-1 corrective action, usually believed to be the most effective form of
process control, is a type of control which when applied to a process eliminates the
possibility of an error condition from occurring.
The second most effective type of control is the Type -2 corrective action, also
known as the detection application method. This is a control that discovers when
an error occurs and stops the process flow or shuts down the equipment so that
the defect cannot move forward.
2. 5S
3. Visual Factory
4. Kaizen, Kanban
D. Project Closure
To ensure the permanence of the introduced changes in the Control phase and to
sustain the gains, it is necessary to institutionalize these solutions. The following is
a list of systems and structure changes to make these improvements an accepted
part of the organization.
Communication of Metrics:
It is critical that the change details and metrics be communicated in every step. The
value calculated from multiple measurements is known as a metric. This may
include tolerance, procedure or data sheet related to the change. It should be made
sure that appropriate quality checks, gauges and verification, and operator
feedback are in place. Changes in personnel training need to be in place. The new and better ways of doing things that result from the Six Sigma project need to be communicated to the personnel involved. All current employees need to be retrained, and new employees should receive proper instruction.
Compliance:
It should be made sure that all individuals on the project are in agreement with the
change. It is important to get everyone’s approval, before implementation lest
someone might challenge the change.
Policy Changes:
The corporate policies also need to be altered along with the results generated
from the project. It needs to be seen if some policies have become obsolete and if
new policies are needed.
To make sure the process or product conforms to requirements, the quality control
department exists in an organization. The quality control activity assures that the
documented changes will result in changes in the way the actual work is done. It
should also be ensured that there is an audit plan for regular surveillance of the
project’s gains.
Revision in budgets:
The Six Sigma project team should adjust the budgets in accordance with the improvements gained in the process. Budgets should be adjusted only to the extent that profitability and capital inflow are not adversely affected.
Many Six Sigma projects require engineering changes as part of fixing the problem. The project team should ensure that any engineering changes, for example in manufacturing or software, result in the actual changes being translated to the engineering drawings. Instructions should be handed out to scrap old drawings and instructions.
Six Sigma teams usually find new and improved ways of manufacturing a product. If
new manufacturing plans are not documented, they are likely to be lost. The project
teams should make new manufacturing plans at least for the processes that are
included in the project.
A project closure report is developed once the project is completed and all the
project deliverables have been delivered to the Process Owner or Business Owner.
The Black Belts being the project leaders are entrusted with the responsibility of
making a carefully detailed Project Closure Report to guarantee the project is
brought to a controlled end. The Project Closure Report template is an important
part of project closure. It is a final document produced for the product or process and is used by senior management and Black Belts to tie up the “loose ends”. It
contains the framework for communicating project closure information to the main
stakeholders of the Six Sigma project.
The end project report is to be made by the project leader/manager and it should
include the main findings, outcomes, and deliverables. It should be a fair
representation of the project’s degree of success. This project closure report
template should contain:
The Black Belt project leader/manager should hold a review of the Six Sigma project
to ensure the completeness of all the project deliverables. From this review it can be
deduced what worked well for the project and how to avoid repeating mistakes. It
should be attended by the process owner. The basic question raised should be
whether the project delivered the projected end product or service within the time
limit and the financial resources at its disposal.
CHAPTER 9
9 Lean Concepts
The word Lean was coined in the early 1990s by MIT researchers. However, Lean
Manufacturing dates back to the post-World War II era. Its concepts were
developed by Taiichi Ohno, a production executive with Toyota. Japanese
manufacturers were facing great difficulty in fulfilling the demands of the domestic
market, and the mass-production methods developed by Henry Ford, which were
economical only for long runs of identical products, did not suit that situation.
The conditions faced by the manufacturing industry today are similar to those
faced by Japan in the 1940s. Therefore, Lean methods have become a common
industry practice to enhance efficiency and improve customer satisfaction. The
Lean method helps to reduce waste, commonly known as muda, in an orderly
manner throughout the value stream. Lean methodology is a challenge to muda;
Lean focuses on value, which is the opposite of muda. Ohno identified the following
kinds of muda:
1. Defects
2. Overproduction
3. Transportation
4. Unnecessary processing
5. Inventory
6. Motion
7. Waiting
Shortcomings of TOC
Lean thinking is another way to improve processes. Lean thinking helps to increase
value and minimize waste. Although Lean Thinking is usually applied to the
production process to improve efficiency, it can be applied to all facets of the
organization. The advantages of applying the Lean methodology are that it leads to
shorter cycle times, cost savings and better quality.
1. Specify Value
Value is determined by the customers. It is about customer demands and what the
customers are able and willing to pay for. To find out customer preferences
regarding existing products and services, methods such as focus groups and
surveys are used. However, to determine customer preferences regarding new
products, the DFSS method (Chapter-10) is used. The voice of the customer (VOC)
is very important in determining the value of a product. The opposite of value is
waste, or muda.
The product will have value only if it fulfills customer demands. Value plays a major
role in helping to focus on the organization's goals and in the designing of
products. It helps in fixing the cost of a particular product and service. An
organization's job should be to minimize wastage and save costs in its various
business processes so that the price the customers are willing to pay leads to
maximum profit for the organization.
2. Value Stream
The value stream is the flow of all the processes, including the steps from design
development to launch and from order to delivery, of a specific product or service.
It includes both value-added and non-value-added activities. Waste is a non-value-added
activity. Although it is impossible to achieve 100% value-added processes,
Lean methodologies help make considerable improvements.
According to Lean Thinking, there should be a partnership between the buyer, the
seller and supply chain management to reduce wastage. The supplier or the seller
can be categorized according to need: as a non-significant supplier, a significant
supplier or a potential partner. This classification can help to solidify and improve
relations between the supplier and the customers, or between the supplier and the
organization.
After the key suppliers are categorized and the role they play for the organization is
determined, the next thing is to take steps to eliminate wastage. Tools such as
process activity mapping and quality filter mapping are used to identify and reduce
waste within the organization and also between the customer and the supplier.
3. Flow
There are two ways to observe the flow of work: logical and physical.
Flow is the step-by-step movement of tasks along the value stream with no
wastage or defects. Flow is a key factor in eliminating waste; waste is a hindrance
which stops the value chain from moving forward. A perfect value stream is one in
which nothing hampers the manufacturing process. All the steps from the design to
the launch of the product should be coordinated. This synchronization helps
to reduce wastage and improve efficiency.
Takt time is the pace at which products must be completed to meet customer
demand; it is calculated by dividing the net available work time by the customer
demand for that period. Work time does not include lunch or tea breaks or any
other process downtime. Takt time is used to create short-term work schedules.
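As a minimal sketch (the shift and demand figures below are hypothetical), takt time can be computed as the net available work time divided by the customer demand for the period:

# Minimal takt-time sketch (illustrative figures, not from the text).
# Takt time = net available work time / customer demand over that period.

shift_minutes = 8 * 60          # one 8-hour shift
breaks_minutes = 30 + 2 * 10    # lunch plus two tea breaks (excluded from work time)
planned_downtime_minutes = 20   # e.g. scheduled maintenance

net_available_minutes = shift_minutes - breaks_minutes - planned_downtime_minutes
daily_demand_units = 200        # hypothetical customer demand per shift

takt_time_minutes = net_available_minutes / daily_demand_units
print(f"Net available time: {net_available_minutes} min")
print(f"Takt time: {takt_time_minutes:.2f} min per unit")

If the line cannot complete one unit within this takt time, it cannot keep pace with demand without overtime or extra capacity.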
Spaghetti Charts
The current state of the physical work flow is plotted on spaghetti charts. A
spaghetti chart is a map of the path that a particular product takes while it travels
down the value stream. The product's path resembles a strand of spaghetti, hence
the name. There is usually a great difference between the distance traveled in the
current set-up and in a Lean set-up; the difference between the two distances is
muda.
4. Pull
According to this principle, the customer pulls products or services through the
value stream. Therefore, the manufacturer manufactures nothing unless a need
is expressed by the customer. Production does not get under way merely according
to forecasts or a pre-determined schedule; in short, nothing is manufactured unless
it is ordered by the customer.
If a company is applying a Lean methodology and the principle of pull, it will
require quickness of action and a great deal of flexibility. As a result, the cycle time
required to plan, design, manufacture and deliver the products and services also
becomes very short. The communication network along the value chain should be
very strong so that there is no wastage and only those goods are produced which
are required by the customers. The biggest advantage of the pull system is that
non-value-added tasks such as research, selection, designing and experimentation
can be minimized.
5. Perfection
Perfection is one of the most important principles of Lean Thinking, because
continuous improvement is required to sustain a process. The purpose of
sustaining a process is to eliminate the root causes of poor quality from the
manufacturing process. There are various methods for pursuing perfection, and
Lean Masters work towards improving it.
Lean Masters
Lean Masters are individuals from various disciplines with a common goal. They are
individual contributors who focus on the process to improve quality and
performance. Their work is to achieve efficient results, which may be for their own
organization or for their suppliers. The best way to achieve perfection with the
suppliers is through collaborative value engineering, supplier councils, supplier
associations, and value stream mapping between customers and suppliers.
Advantages
A non-value-added activity is one which neither adds value for the external customer
nor provides any competitive advantage to the organization. Activities such as
rework, inspection and control fail to meet the criteria for value-adding. One of the
main objectives of a Six Sigma project is to eliminate activities which do not add value.
Non-value-added activities add no value to the final output; they are activities
which the customer does not want to pay for. It is important to note that some non-value-added
activities are necessary and unavoidable. Such activities should either
be made part of value-added activities or eradicated in order to save costs and
get a better ROI.
There are eight kinds of wastes or non-value added activities which are identified in
Lean.
1. Overproduction
This is one of the most misleading wastes. Overproduction simply means that a
product is made earlier and faster than it is required. It leads to the accumulation of
unwanted stock. Overproduction happens when an organization wants to produce
products cheaply in bulk, or to cover up quality deficiencies, machinery breakdowns,
an unbalanced workload or a long process set-up. Overproduction also leads to the
unnecessary production of products which are not needed, and so there is wastage
of time, money, resources and personnel.
A Lean analysis helps to spot and eradicate the production of units which are no
longer in use or the ones which are obsolete in technology.
2. Inventory
3. Defects
Defects are a key waste which includes wastage in terms of men, machines, materials,
sorting and rework. Any product which requires scrapping, replacement or repair is
also included in the category of defective products. The reasons why products
develop defects can be many; the main ones include unskilled workers, ineffectual
control over the process, lack of maintenance and imperfect engineering
specifications.
Lean analysis helps to recognize defects in the manufacturing process and helps
eradicate the production of faulty units which cannot be sold or used.
4. Processing
Processing waste adds zero value to the product or service from the
customer's perspective. It comprises spare copies of paperwork and other
surplus processing for unforeseen problems which might occur in the future. Waste
also occurs in the form of accelerating a process to meet targets.
Lean methodology comes in handy for revealing unwanted steps or work elements
which add no value to the product.
5. Transportation
6. Waiting
Waiting means idle time. It comprises waiting for parts from upstream operations,
waiting for tools, and waiting for arrangements and directions from higher authority.
The time wasted in measuring and procuring information also makes up idle time and is
considered a waste. Idle time is time during which no value is added. In fact, idle
manpower/labor is a matter of greater concern than idle machinery.
7. Motion
Any movement of people or machinery that adds zero value to the product
is wastage in terms of motion. Examples of motion waste include time wasted
hunting for tools, extra product handling, arranging products, walking
and loading. The reasons for motion wastage include poor infrastructure,
incompetent labor, weak processing and frequent changes in schedules.
8. People
Cycle Time
Cycle time is the time needed to complete a particular task or process. Time is
money in business, and therefore cycle time is an important criterion for judging a
manufacturing process. The time taken from the customer placing the order to the
product being delivered is an example of cycle time. This process is made up of
many sub-processes, which include taking down the order, assembling, packaging,
and finally shipping the product.
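As a minimal sketch (the sub-process times below are hypothetical), the total cycle time can be computed by summing the sub-process times, and the share taken up by non-value-added steps shows how much reduction is available:

# Illustrative order-to-delivery cycle time, built from hypothetical sub-process times (hours).
# 'value_added' marks steps the customer would actually pay for.

sub_processes = [
    ("take order",        0.5,  True),
    ("wait in queue",     24.0, False),   # non-value-added waiting
    ("assemble",          6.0,  True),
    ("inspect / rework",  3.0,  False),
    ("package",           1.0,  True),
    ("ship",              48.0, True),
]

total_cycle_time = sum(hours for _name, hours, _va in sub_processes)
value_added_time = sum(hours for _name, hours, va in sub_processes if va)

print(f"Total cycle time: {total_cycle_time:.1f} h")
print(f"Value-added time: {value_added_time:.1f} h "
      f"({100 * value_added_time / total_cycle_time:.0f}% of the cycle)")
print(f"Potential reduction if non-value-added steps are eliminated: "
      f"{total_cycle_time - value_added_time:.1f} h")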
Cycle time reduction means recognizing more efficient and effective ways to carry
out the tasks; it means eradicating or minimizing the non-value-added activities
which are a source of wastage. Cycle time can be reduced during machine set-up,
inspection and experimentation. Cycle time reduction increases production
throughput drastically. It also decreases the amount of working capital required
and the operating expenses.
Cycle time reduction forms an important facet of the manufacturing process. The
reduction in cycle time proves fruitful both to the customers and to the organization.
Customers want the cycle time from order to delivery to be as short as possible,
and at the same time it is important for the organization because it saves time,
money and resources and helps generate profits more quickly. Cycle time
reduction increases customer satisfaction, and therefore many producers are
reforming their supply operations.
A supplier's performance is judged on four main factors: price, quality,
performance and on-time delivery. Customers have started evaluating
performance on two more criteria: on-time delivery with a short cycle time, and
response to customer feedback. Both criteria are, in fact, very critical and help
in retaining old customers and attracting new ones.
It is not just the manufacturing process which contributes to long cycle times; the
causes are both internal and external. The move away from the "push"
manufacturing model has made manufacturers adapt to changes which have led to
shorter cycle times. This has happened because, these days, manufacturers prefer
not to stock products; instead, products are tailor-made and produced only when
the customers demand them or according to sales forecasts. The following measures
are being taken by top management to reduce the cycle time.
1. Management of Demand
The manufacturers can use enhanced sales forecasting processes. At the same
time, it is important to keep the customer feedback in mind so that the production
takes place as and when the customer needs it.
3. Lean Manufacturing
There are two mechanisms in any given organization: process improvement and
process control. Control means sustaining the current, improved performance of the
process; as long as there are no indications of deviation in the performance of
the process, the standard operating procedures (SOPs) are followed. Improvement,
on the other hand, implies conducting experiments and altering the process to
produce better results. When the improvement is made, the SOPs
are altered and a new way of doing things is established.
According to Imai (1986) the job responsibilities regarding the improvement and
maintenance of a process are divided according to the level of position held by
personnel in the organization. The figure below represents how and where Kaizen
fits in an organizational hierarchy.
In the figure drawn above, there is a portion which goes beyond Kaizen. It is a point
of radical innovation. This is where Lean Thinking is related to Six Sigma. The figure
drawn below illustrates the point.
For achieving a reduction in the cycle time, a cross functional process map
needs to be developed. This means that a team of individuals from every
division is selected. They in turn map each step of the process of product
development from beginning to end. Two kinds of maps are developed: a map
of the current functioning of the process and a map of the expected (future)
process.
The first process map helps to identify the problems in the current system,
and to improve the current system. The expected process map explains each
step in detail.
During the mapping session a list of actions is also created. This list defines in
detail the changes required to change the process from the current map to
the expected map.
For more information on process mapping and cycle time reduction through flow
charts, refer to Chapter-5, Black Belt, Measure.
9.6 Lean Tools
Lean is, in fact, often considered just a set of tools. Kanban, Kaizen and
Poka-Yoke are the most popular Lean tools. The need for Lean tools grew out of
the problems of inefficiency and the need to standardize processes. Lean tools
enable smooth flow in the organization.
1. 5S
1. Sort
Sort means clearing the work area: a particular area of the workplace should
contain only the items that are required there. Items that are not required should
be sorted out and removed.
2. Set in Order
The items which are required should be put in a proper place so that they can be
easily accessed when the need arises.
3. Shine
Shine implies keeping the workplace clean and clear. Cleanliness includes
housekeeping efforts and keeping dirt away from the workplace. Cleanliness
not only improves appearance but also ensures safety while working.
4. Standardize
The clean-up and the storage methods should be standardized. The best practices
should be followed by everybody in the organization to set an example and to
standardize the efforts.
5. Sustain
The effort to keep the workplace organized should be sustained. 5S involves a change
in the old practices of the organization, and this culture change should be instilled
in every employee of the organization.
Advantages
This Lean tool motivates employees to improve their work environment and
ultimately helps in reducing wastage, time and in-process inventory. There is
optimum utilization of space and easy accessibility of the tools and materials used
during work. 5S is also a base for other Lean tools like TPM and just-in-time
production.
2. Visual Factory
Visual Factory is a tool which allows easy access to information for everybody to
see and understand. This information can be used for continuous improvement. If
knowledge about all the tools, parts, production systems and metrics is clearly
displayed, everybody in the organization can understand the status of the system at
a glance. The ready availability of information simplifies work and improves
manageability. The visual factory is a visual aid which helps to convey the what,
when, where, who, how and why of any workplace. Simply put, it helps to view the
current status of an organization.
3. Kanban
Kanban is a word taken from the Japanese language and implies “card-signal.”
Kanban/Pull systems help in the optimization of resources in an organization. They
depend on customer demands rather than on sales forecasts. There are no stocks
which lie in the store room waiting for the customer demands. Kanban is a signal
card which indicates that the system is ready to receive the input. It helps to
manage the flow of the material in the manufacturing system.
The concept of Kanban is many years old. The 'two bin system' was used in the UK
well before Japanese production tools became popular in the 1970s. The
Kanban system is very easy to understand and deploy in an organization. It is very
popular in industries where there is a stable demand and flow. Usually, however,
demand is lower and supply higher in the manufacturing industry, so Kanban cannot
be applied to the entire process. There may, however, be sub-processes to which it
can be applied.
It is a system in which the supply of raw materials and other components is ongoing:
workers have a continuous supply of what they need, when and where they need it.
The Kanban system has the following benefits:
1. It decreases stock and prevents products from becoming obsolete.
4. It increases output.
5. It saves costs.
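The text does not give a sizing rule, but a commonly used heuristic for the number of kanban cards is the demand during the replenishment lead time, plus a safety allowance, divided by the container size. The figures below are hypothetical:

# Common kanban-sizing heuristic (not from the text; all figures are hypothetical):
# number_of_kanbans = demand_rate * replenishment_lead_time * (1 + safety_factor) / container_size

import math

demand_per_day = 400          # units pulled by the downstream process per day
lead_time_days = 0.5          # time to replenish one container
safety_factor = 0.10          # 10% allowance for variability
container_size = 25           # units per container / kanban card

kanbans = math.ceil(demand_per_day * lead_time_days * (1 + safety_factor) / container_size)
print(f"Kanban cards required: {kanbans}")

Rounding up keeps a small buffer in the loop; lowering the safety factor or the lead time reduces the number of cards and hence the work-in-process inventory.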
4. Poka-Yoke
Poka-Yoke is a Japanese term which stands for mistake-proofing. The term was
coined by Shigeo Shingo. Poka-Yoke devices recognize flaws in a product and thwart
the manufacture of incorrect parts; a Poka-Yoke is a signal attached to a particular
trait of a product or a process, and it is the first step in mistake-proofing a system.
Poka-Yoke is important as it saves time and money for the organization. Defects
such as scrap and rework can be prevented in the first place with the help of
Poka-Yoke. Poka-Yoke puts limits on errors and helps in the accurate completion of
the project.
5. SMED
Single Minute Exchange of Dies (SMED) is a method adopted for quick changeovers.
The SMED tool is applicable to all kinds of industry, be it a small retail shop or a car
manufacturing company. SMED is also known as "Quick Die Change"; the change-over
is the time period between the completion of the last task and the beginning of the
next task. The time required to get the tools and raw materials and to complete the
paperwork is included in the changeover time.
The SMED tool is an important part of Lean Manufacturing as it helps to save
costs and avoid loop-holes. Decreasing setup time at capacity-constraining
resources is very significant because the throughput of the entire organization is at
stake at these nodes. The SMED process reduces setup time and increases
utilization, capability and output. Changeovers of machinery and rooms can be
done in a short span of time, and SMED serves as an important agent of change.
Benefits
4. Increase in output.
5. Increase in flexibility.
6. Standard Work
The norms that are established to standardize the work should be clearly
documented. Documentation ensures that the rules are being followed
and that the work is being carried out in a consistent manner, so as to ensure
efficiency and eradicate waste. The documents should be regularly revised for
continuous improvement, as this brings to the fore the areas that need
development.
Introduction
TPM originated from TQM. TQM evolved as a result of Dr. W. Edwards Deming's
influence on Japanese industry. The concepts of quality introduced by Dr. Deming
became very popular and a way of life for Japanese industries. He introduced
statistical procedures, and the quality management methods that emerged from
them came to be known as Total Quality Management, or TQM.
Origin
The original source of the concept of TPM is debated. According to some, it was
invented by American manufacturers about forty years ago; others say it was
invented by Nippondenso, a Japanese manufacturer of automotive electrical parts,
in the late 1960s. The concepts associated with TPM were derived and executed in
Japanese industry by Seiichi Nakajima, an officer with the Institute of Plant
Maintenance in Japan. The first widely attended TPM conference took place in the
United States in 1990.
The concept of TPM grew out of the earlier theory of productive maintenance, which
was no longer appropriate for the maintenance environment. According to the
theory of TPM, everybody from the workers to the top management is involved in
equipment maintenance. Everybody in the organization feels that it is his moral
duty to maintain the machinery. The operators of the machines examine, clean, oil
and adjust the machines themselves; they even perform simple calibrations on their
machines. Simply put, everybody in the organization is familiar with terms like zero
breakdowns, maximum productivity and zero defects.
TPM gives employees a lot of freedom and at the same time instills in them a sense
of responsibility. TPM is an effort that requires some time for effective
implementation. It is initially carried out in small teams and gradually spreads to
the entire organization.
Application
When the coordinator is convinced that the work force is able to comprehend the
TPM concepts, the action teams who would carry out the TPM program are formed.
The operators of the machine, maintenance personnel, supervisors and upper
management are included in a team. These are people who have a direct bearing
on the issue at hand. Each team member is held equally responsible for their
actions. The TPM coordinator heads the team until the concepts are practically put
to use and the team members become proficient with them. The teams often begin
by addressing small problems and move on to solve more complex ones.
The tasks of the action teams include indicating the problem areas, specifying a
course of action and implementing the corrective measures. In good TPM
programs, the team members pay a visit to the cooperating plants to study and
evaluate the work in progress using the TPM methodology. The comparative
process is a measurement technique and a significant aspect of the TPM
methodology.
Ford, Eastman Kodak, Allen Bradley and Harley Davidson are some of the big
names using the TPM methodology. These companies report a tremendous rise in
productivity from using the methodology, along with a great reduction in downtime,
a decrease in the stock of spare parts and an increase in the number of on-time
deliveries.
TPM is now common practice. The importance of TPM in some companies is
such that their success depends on it. It is suited to all kinds of industries, including
construction, transportation and many others. The most important
consideration for a TPM program remains full commitment from the entire work
force, because that is what produces a high ROI.
CHAPTER 10
10 Design for Six Sigma (DFSS)
DFSS is an acronym for Design for Six Sigma. DFSS does not have a universal
definition; instead, it is defined differently by different organizations. The DFSS for
every organization is tailor-made to suit its needs, which makes DFSS an approach
rather than a methodology. The main function of DFSS is the designing or re-designing
of a product or service from the ground up. The minimum sigma level expected of a
DFSS product is 4.5, but it can reach the six sigma level depending on the product.
The important factor in DFSS is knowledge about customer preferences. This is
because the product is being designed or redesigned from the ground up, and it is
important to know the needs of the customers before DFSS is executed. DFSS helps
in the implementation of Six Sigma in a particular product or service as early as
possible. It is a landmark in achieving customer satisfaction. DFSS helps an
organization gain a chunk of the market share, and at the same time it is an
approach for achieving a large ROI (Return on Investment).
There are fundamentals of a successful DFSS process. The most important among
them is constancy in critical processes. The core of a DFSS project lies in
forecasting, and forecasting is based on a thorough knowledge of process
capability.
1. Functional Requirements
The usual Six Sigma projects work on the DMAIC principle, whereas DFSS
projects work on the IDOV methodology. IDOV stands for Identify, Design,
Optimize and Verify. The Identify phase helps to recognize the functional
requirements for a new product or process. It also involves translating the
functional requirements into technical requirements and linking the goals of
the project to the organizational goals. The mathematical relationship that links
the inputs to the outputs is known as the "transfer function" or "prediction
equation". The information or data about the product can be obtained using a
process map or product drawings; in rare circumstances, the data can be obtained
using design of experiments.
Establishing the transfer function (the relationship between inputs and outputs)
helps to forecast the quality of a particular design and makes clear whether a
particular product will fulfill customer expectations or not.
The transfer function also provides knowledge about the effect of each input on
a particular output. Understanding these relationships helps to modify the design
so that it hits the goal, and to apply settings that reduce variability. This is known
as the concept of robust design.
Earlier, only performance targets and costs were known in advance. However,
with the help of the transfer function it has become easy to determine a
specific quality level, and progress towards the goal can be measured
throughout the process.
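As a minimal sketch, a transfer function can be established by fitting a prediction equation to observed input/output data. The linear form y = b0 + b1*x1 + b2*x2 and the data below are purely illustrative assumptions, not taken from the text:

# Minimal sketch of a transfer function (prediction equation) y = b0 + b1*x1 + b2*x2,
# fitted by least squares. The observations and the linear form are illustrative only.
import numpy as np

# Hypothetical observations: inputs x1, x2 and measured output y
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = np.array([10.0, 9.0, 11.0, 10.5, 9.5, 10.0])
y = np.array([5.1, 7.2, 9.8, 11.9, 13.8, 16.1])

X = np.column_stack([np.ones_like(x1), x1, x2])      # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2 = coef
print(f"Prediction equation: y = {b0:.2f} + {b1:.2f}*x1 + {b2:.2f}*x2")

# Use the transfer function to forecast the output at proposed design settings
y_pred = b0 + b1 * 3.5 + b2 * 10.0
print(f"Predicted output at x1=3.5, x2=10.0: {y_pred:.2f}")

Once the coefficients are known, the equation can be used to predict the output at any candidate design setting and to judge which inputs drive the variability that a robust design must suppress.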
10.2 Noise Strategies
External noise factors are variables which affect the performance of the product
from outside, such as government policies and the environment.
The internal variables or internal settings of the product that affect its
performance are known as internal noise factors. Whenever a product deviates
from complying with its internal specifications, this becomes an internal source of
noise for the product.
The designer has to be very particular about the design he chooses. The design
should be such that it has the least deviation from the perfect design. A design
which is immune to the noise factors is known as a minimum-sensitivity design or a
robust design. There is a methodical way to reduce a design's sensitivity to noise;
it is called parameter design.
Tolerance Design
This is a design activity to establish tolerances for the products or processes so as
to reduce manufacturing and lifetime costs. It is the next and final step, after
parameter design, in specifying the product and process designs and identifying
the tolerances among the factors. The factors affect variation, and they are modified
only after the parameter stage because only those factors are modified whose
target quality values have not been achieved.
There are three methods through which tolerances can be estimated: the Cut
and Paste method, the Control Chart method, and the Root-Square-Error method.
a. Cut and Paste
The Cut and Paste method is popular among the followers of Dr. E.M. Goldratt. A
project manager collects the overstated estimates of task duration (given by the
developers) and cuts them in half. The project is then modeled on the decreased
duration estimates. The tolerances of the constituents are estimated as a
percentage (usually 50%) of the deterministic duration estimates of their sequence
of tasks. The tolerance of the project is determined as a percentage of the
deterministic estimate of the longest sequence of tasks; this longest sequence is
referred to as the critical chain.
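A minimal sketch of this calculation is given below, using hypothetical padded estimates: each estimate is cut in half, the halved estimates along the critical chain are summed, and the project tolerance is taken as 50% of that sum.

# Sketch of the Cut and Paste (50%) method on a hypothetical critical chain.
# Padded developer estimates are cut in half; the project tolerance (buffer)
# is then taken as a percentage (usually 50%) of the halved chain duration.

padded_estimates_days = [10, 20, 15, 30, 25]       # developers' high-confidence estimates
halved = [d / 2 for d in padded_estimates_days]    # "cut" step: plan with 50% estimates

chain_duration = sum(halved)                       # deterministic estimate of the critical chain
project_tolerance = 0.5 * chain_duration           # "paste" step: 50% buffer at the end

print(f"Planned chain duration: {chain_duration:.1f} days")
print(f"Project tolerance (buffer): {project_tolerance:.1f} days")
print(f"Committed duration: {chain_duration + project_tolerance:.1f} days")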
The Cut and Paste method is a simple method that can be used by a layman.
However, its disadvantage is that it is a linear model of variation: it simply adds the
variation of the tasks in a sequence linearly. In reality, variation grows with the
square root of the number of tasks in a sequence. Therefore, the linear model
provided by the Cut and Paste method is mathematically inconsistent.
The biggest disadvantage of the Cut and Paste method is that the estimates
provided by the developers are simply reduced by the managers. This creates a gap
between the managers and the developers: the individuals who have a direct
bearing on the logistical performance of the product are alienated.
b. Control Chart
The normalized values of project duration are graphed on a control chart according
to this method. The normalizing values which are used are the planned (baseline)
duration estimates. For instance, a project which had a planned duration of 100
business days and an actual duration of 140 business days is represented as having
the normalized duration of 1.4 on a control chart. The difference between the
control limit and the mean of the normalized duration values is the basis for
computing the ensuing project tolerances.
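A minimal sketch of this calculation is given below. The historical planned and actual durations are hypothetical, and the usual three-sigma upper control limit is assumed:

# Sketch of the control-chart tolerance method on hypothetical historical projects.
# Each project's duration is normalized as actual / planned (e.g. 140/100 -> 1.4).
import statistics

planned = [100, 80, 120, 60, 90, 110]     # planned durations (business days)
actual = [140, 85, 150, 75, 96, 130]      # actual durations (business days)

normalized = [a / p for a, p in zip(actual, planned)]
mean = statistics.mean(normalized)
sigma = statistics.stdev(normalized)
ucl = mean + 3 * sigma                    # assuming the usual three-sigma upper control limit

tolerance_fraction = ucl - mean           # basis for the ensuing project tolerances
print(f"Mean normalized duration: {mean:.2f}, UCL: {ucl:.2f}")

new_planned = 100                         # a new project's planned duration
print(f"Tolerance for a {new_planned}-day project: "
      f"{tolerance_fraction * new_planned:.0f} business days")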
The advantage of the control chart method is that it encapsulates all the variation
displayed during the project duration. This helps to get a precise estimate of the
required tolerance value. The disadvantage of the control chart method is that the
tolerance calculations are complex. Almost every project in a development
organization exhibits a different degree of variation; these variations are erratic and
large, and this is what makes the control chart method inappropriate at times.
c. Root-Square-Error (RSE)
The component tolerance is computed as the square root of the sum of the
squares of the differences, for the tasks in each component sequence. The same
calculation also gives an estimate of the tolerance of the project with the difference
values being those which correspond to the tasks of the primary sequence in the
project.
The figure given below provides a sample calculation. The sum of the squared
differences is equal to 1098; the corresponding project tolerance, its square root,
is approximately 33 business days. The 33-day tolerance value gives a commitment
duration which matches the high confidence level for the whole project.
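A minimal sketch of the calculation is shown below; the task-level differences are hypothetical, and the text's sample figure of 1098 is reproduced at the end:

# Root-Square-Error (RSE) tolerance: square root of the sum of squared task differences.
# The task-level differences below are hypothetical.
import math

task_differences = [12, 9, 15, 7, 10, 14]          # hypothetical differences, in business days

sum_of_squares = sum(d ** 2 for d in task_differences)
tolerance = math.sqrt(sum_of_squares)
print(f"Sum of squares: {sum_of_squares}, RSE tolerance: {tolerance:.1f} days")

# Reproducing the sample calculation cited in the text:
print(f"sqrt(1098) = {math.sqrt(1098):.1f} business days")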
The project tolerances calculated using the RSE method are considered the
absolute minimum values. This is because the RSE method considers only
task-level variation, whereas the amount of variation in overall project duration is
significantly larger. Therefore, the values computed using the RSE method can be
unsuitably small. However, the biggest advantage of the RSE method is that it
involves the developers in the construction of the project models. The involvement
of the developers in the project increases trust, and they feel less alienated.
Tolerance and Process Capability
For information on Process Capability see Chapter-5, Six Sigma Improvement
Methodology and Tools--Measure
10.4 Failure Mode and Effects Analysis (FMEA)
The demand for high-quality and dependable products is very high. The increased
potential and functionality of products makes it very difficult for manufacturers to
fulfill the increasing demands of customers. Dependability was conventionally
achieved through massive experimentation and through such methods as
probabilistic reliability modeling. However, the problem with these tools was that
they could only be applied in the late stages of development. FMEA has the
advantage of being applicable in the early stages of development.
Failure Mode and Effects Analysis (FMEA) is a powerful engineering quality tool
which helps to recognize and counter potential factors that might lead to failure. It
is used in the early stages for all types of products and processes. The method is
very simple and can be used even by a layman. It is not easy to forecast each and
every failure mode, but the development team tries to devise as comprehensive a
list of possible failure modes as it can.
The early and continuous use of FMEA in the design process allows an engineer to
identify failures and produce dependable and safe products that leave the
customers satisfied. The information about the products or processes produced
with the help of FMEA also serves as a record for future use.
Types of FMEAs
There are different kinds of FMEAs. Some are used more than others. The kinds of
FMEAs are listed below.
Usage of FMEA
In the past, engineers have done a good job of assessing the functions
and the design of products and processes in the design phase.
However, they have not been able to achieve as much in terms of reliability and
quality. Engineers are usually concerned with designing a product or process
that is safe for the user. FMEA is a tool that enables engineers to produce safe,
dependable and customer-friendly products. FMEA serves several purposes for
the engineers.
1. It helps them to build the product and service requirements which reduce the
possibility of failure.
2. It helps in the assessment of the needs expressed by the customers and others
involved in the design of products and services, so that these needs do not turn into
potential failures.
3. It helps to point out the features of the design which might lead to failure and
remove them from the system so that they cause no harm.
5. It helps to trace and handle the possible risks that might develop in the design.
Tracing the risks helps in its documenting and is a major factor in the success or
failure of future projects.
6. It guarantees that if a failure occurs, it will not affect the customer of the product
or service.
Pros
There are many benefits associated with FMEA. Some of them are listed below.
1. The first step is to describe the product or the service and its function. It is very
important to have a thorough knowledge of the product or process. This knowledge
helps the engineers streamline the products and the processes which fall under the
intended function. This knowledge is essential as product failure leads to increase
in costs and dissatisfaction of the customers which would lead to aversion towards
the product.
2. The next step is the creation of a block diagram of the product or the process. A
block diagram depicts the major components or the steps involved in the process.
The steps or the components are linked through lines and their relationships are
shown. This helps to create a structure around which the FMEA can be formed.
Then a coding system is developed to identify the system elements. The block
diagram should always be included with the FMEA form.
3. The next step is to complete the headings on the FMEA form worksheet. These
headers can be modified according to use.
4. The fourth step is the listing of the components of the product or the steps
involved in the process in a logical manner.
5. The failure modes are identified, and the manner in which a part, a system, a
sub-system or a process could potentially fail is defined. The potential failure
modes could include corrosion, deformation, cracking or an electrical short.
6. A failure mode in one part could be the cause of a failure mode in another
part. Therefore, it is important to list the failures in technical terms. Failure modes
should be listed for the function of each part of the product or each step of the
process. Recording previous failures helps to detect failure modes in similar
products or processes in the future.
7. The next step is the description of the effects of the failure modes. The engineer
working on the project should be able to identify the failure mode and the effect it
would have. The effect is the impact of the failure of a function as seen from the
perspective of the customer. The failure as seen from both the internal and the
external customer's point of view should be kept in mind. A customer would
consider the product a failure if it causes an injury or harm to him in any way, if he
incurs a problem operating the product, or if its performance is not up to the mark.
A numerical rank should be established for the severity of the effect of the failure
mode. A common industry standard is for 1 to stand for no effect and 10 to
indicate a severe failure which affects system operation and threatens safety
without warning. The ranking is used to determine the severity of the failure
mode, to determine whether the failure would be a minor or a major one, and to
prioritize the failure modes.
8. This step involves the identification of the causes of each failure mode. A failure
cause is a weakness in the design which might result in a failure of the product or
process. The possible factors that might cause failure should be clearly documented.
The factors should be listed in technical terms and not as symptoms of a failure. The
potential causes could be contamination, improper alignment or improper
operating conditions.
11. This step helps to ascertain the probability of detection. Detection is the
evaluation of the probability that the Current Controls will determine the cause of
the failure mode and thus prevent it from reaching the customer.
12. The RPN or Risk Priority Number is evaluated. The RPN is a mathematical
product of numerical severity, probability and detection ratings. RPN is used to
prioritize items which require extra attention in terms of quality planning or action.
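A minimal sketch of the RPN calculation is given below; the failure modes and the 1-10 ratings are purely illustrative:

# RPN = Severity x Occurrence (probability) x Detection, each rated 1-10.
# The failure modes and ratings below are illustrative only.

failure_modes = [
    # (description, severity, occurrence, detection)
    ("cracked housing",     8, 3, 4),
    ("improper alignment",  5, 6, 2),
    ("electrical short",    9, 2, 7),
]

ranked = sorted(
    ((desc, s * o * d) for desc, s, o, d in failure_modes),
    key=lambda item: item[1],
    reverse=True,
)

for desc, rpn in ranked:
    print(f"{desc:20s} RPN = {rpn}")
# Items with the highest RPN get priority for quality planning or corrective action.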
13. The next step is to determine the steps which need to be taken to tackle
potential failures which have a high RPN. These steps include particular inspection
of products or processes, choice of different parts or materials, redesigning the
product or process to avoid failure modes.
14. Accountability and a target completion date are fixed for these actions.
Responsibility is made clear-cut, which facilitates tracking.
15. The actions which have been taken are recorded. After the steps have been taken,
the severity, probability, detection and RPN values are re-evaluated to determine
whether any further action is required.
16. The FMEA should be updated from time to time as the design or the process
changes or as new information comes up.
Failure Modes and Effects Analysis (FMEA) Form
It is important to identify needed design alterations in the early stages of design;
the later a change occurs, the greater the increase in cost. DFX is important because
it helps in the selection of the product idea in the Identify phase and in the
assessment and management of risk in the Design phase. Revealing design
alterations in the early stages of development leads to cost savings, improves
quality and reduces time to market.
DFX is a generic approach which is customized to develop specific DFX tools quickly
and continuously. The resulting DFX tools share a commonality and are easily
executed and coordinated. This DFX process is then viewed from the wider
perspective of Business Process Reengineering (BPR) to obtain a product which is
produced without any defects.
Design for Production is one of the foremost tools of DFX. It refers to the processes
which assess the performance of the production system. It answers questions such
as "How much time will it take to complete the order?" or "How much stock is
needed to keep the international supply chain running?" It is important to possess
knowledge about the design of the product, the needs of production and the
production system in order to answer these questions.
The DFX family is ever expanding and houses the family members listed below. The
X in DFX can stand for different variables.
This is a method of designing the parts of the product in a way that aids their
manufacturing. DFM enhances manufacturability and provides manufacturing cost
data for a product and its parts.
The importance of DFM was realized during World War II when the demand to build
better weapons in the shortest possible turnaround time was high. Also, there was
a shortage of resources. However, after the war, there was prosperity and a speedy
industrial growth. The functions of design and manufacturing were carried out by
isolated departments which resulted in an orderly development of products.
In the late 1950s and 1960s, organizations began to realize that the methodologies
that were currently being used were not suitable for fulfilling the need of automatic
and reprogrammable manufacturing. Gradually, organizations began to customize
their designs and processes and started carrying out independent research for the
same. The pressure from global competition and the desire to reduce lead time led
to the rediscovery of DFM. The personnel from both the design and manufacturing
departments were roped in to carry out the design projects. The manufacturing
engineers took active part from the early stages and advised about the potential
ways to improve manufacturability.
The need for DFA arose when attempts to increase the level of automated
assembly highlighted deficiencies in current product designs with respect to
automation capability. Design for Assembly helps in simplifying the form
of the product; it decreases the number of components and thus the total cost of
components. The design should be easy to assemble, so DFA reduces both
assembly and production costs. Although these days the X in DFX can also stand for
variables such as cost, disassembly and recyclability, ease of assembly remains the
variable that is constantly refined.
This is a more recent method which helps in the easy disassembly of products and
helps in quick and easy maintenance of products. This method is also helpful when
a product or component needs to be recycled.
This method helps to give precise estimates of the changeover potential of the
production machinery and the effect of the characteristics of the final design.
This is also known as the Design for Maintenance. It takes into account the way in
which the subassemblies can be exchanged as quickly and easily as possible. The
maintainability of a product depends on the chances of failure of a particular
component or subassembly. This is accomplished by enhancing the ease of
assembly and disassembly of these particular components. This also helps to justify
the additional cost incurred to increase their capability.
Other designs which are used are DFR—Design for Reliability, DFC—Design for Cost,
DFQ—Design for Quality, DFD—Design for Diagnosis, DFI—Design for
Inspection/Design for International and DFG—Design for Green.
Roles of DFX
1. Culture in an Organization
The execution of a DFX project requires an active interest from all the employees
in all the departments, and the departments should report on it to the Board of
Directors. An ideal DFX project requires a DFX champion who reports to the
Board of Directors. This is important, as simply placing high expectations on the
lower and middle management (who are busy performing their routine tasks) is
otherwise ineffective.
The champion's work is to press the departments to allocate time and
resources for the DFX project. The champion has to be highly influential and
should be able to convince the employees to get involved. He should be able
to disseminate knowledge about DFX, its goals and the benefits it would bring to
the organization. A DFX project requires assistance from all the departments. In
big organizations where DFX is already employed, the DFX engineers report directly
to the respective functional managers, such as the R and D, manufacturing or
engineering managers.
2. Concurrent Engineering
DFX should serve as an ongoing tool. It should be merged with the other goals of
the company, such as increasing customer satisfaction. DFX helps to reduce product
development time and improve the quality and reliability of the product, as well as
customer satisfaction. It also helps in the reduction of cycle time, which is a major
indicator of success.
The rules which are applied are not quantitative. Whether the rules are based on an
experienced employee's knowledge or on a formal checklist, a human is required to
interpret and apply them to each case, because each case is unique in its own way.
At the same time, it is not feasible to start every design from scratch, so some skill
is required on the part of the designer to infer and use the rules appropriately.
Advantages of DFX
The major chunk of a product's cost is determined at the design stage, before
production begins; however, the majority of the cost is actually incurred when
production begins, after the design is accepted. Therefore, it becomes important to
consider production and assembly problems at the product design stage to save
costs and increase productivity.
Introduction
TRIZ is a powerful tool that helps designers solve problems creatively. TRIZ and
Axiomatic Design developed independently, without influence from the many other
design strategies which evolved outside Russia. Both Nam Suh's Axiomatic Design
and TRIZ can be successfully applied to a manufacturing process.
Axiomatic Design
Professor Nam Suh's seminal text 'The Principles of Design', published in 1990,
describes the Axiomatic Design approach. This approach mainly deals with the
organization and use of means to determine whether a given design is 'good' or
'bad'. Axiomatic Design is built around two axioms.
According to the first (Independence) axiom, a 'good' design occurs when the
Functional Requirements of the design are independent of each other.
According to the second (Information) axiom, a 'good' design occurs when the
'information' content is at a minimum.
Both axioms are logical in nature. Suh describes the design process by drawing
parallels with a feedback control loop. A designer creates a design solution based
on the inputs available, the needs of the customers and the customers' preferred
output. The next step is to use the two axioms as logical tests on the proposed
solution and evaluate how appropriate it is as a design solution.
TRIZ
Altshuller realized during his research that principles found while evaluating patents
from one industry could be applied to other industries as well. TRIZ also grew out of
the transfer of principles from one area of work to another. TRIZ principles are
applied not only to fields associated with engineering but also to other technical
and non-technical disciplines.
TRIZ is interdisciplinary and has relations with ontology, logic, systems of science,
psychology, history of culture and others. TRIZ has now been employed by many
Fortune 500 companies to successfully solve complex technical issues. The TRIZ
methodology, these days, is not only limited to manufacturing. It has spread to
other areas like medicine, business management and computer programming.
One of the basic concepts of TRIZ is the contradiction between two elements. For
instance, if you want more clarity from the camera of a mobile phone, you need a
camera with higher pixel quality; if the pixel quality is improved, the price of the
camera increases. So, to get better quality, you have to pay a higher price.
Altshuller calls these Technical Contradictions. A designer who is inventing a
solution faces these contradictions; typically, instead of resolving them, he trades
one contradictory characteristic off against another and produces a new solution
without actually resolving the contradiction.
Nam Suh, in his book, cites the example of the problem faced by General Motors in
the early 1980s. The designers at GM faced a problem with wheel covers, which at
that time were held on by simple spring clips. If the spring force was too small, the
wheel covers fell off; if the spring force was too high, the vehicle owners had
difficulty whenever the wheel needed to be removed. GM designers put in a lot of
scientific research and focused on the needs of the customers to resolve this issue.
A series of customer trials using wheel covers with different spring forces were
conducted. The designers measured the level of satisfaction of customers with each
of the different cases. The results are shown in the figure drawn below. It was
discovered that 100% of the customers were happy from the viewpoint of the ease
of cover removal if the force needed to remove the cover was 30N or less. 100% of
the customers were satisfied that wheel covers would not fall off if the force was
35N or more.
So, the optimum spring retention force lay somewhere between 30 and
35 N. The designers also realized that mass production would lead to statistical
variation in the attainable spring force. The functional requirement for the wheel
cover in terms of retention force was, therefore, set at 34 ± 4N. This was the best
solution in non-TRIZ terms, as it dissatisfied the minimum number of customers.
The data available from GM showed that 34 ± 4N would dissatisfy 2 to 6% of the
customers. The Axiomatic approach was able to harmonize the design variables to
attain the required Functional Requirements.
As far as TRIZ is concerned, it would immediately recognize the wheel cover issue as
a design contradiction. The TRIZ approach is built around the premise of 'design
without compromise'; TRIZ strives to eradicate contradictions. TRIZ would
recognize this as a Physical Contradiction and would seek to separate the
contradictory requirements in time. So, according to TRIZ, the retention force
would be high when the car is moving and low when it is not. Identifying the
contradiction in this way helps designers find ways to eliminate it.