
Six Sigma Black Belt Courseware

CHAPTER 1
1 Overview of Six Sigma

A. Six Sigma and the Organization

Introduction

Organizations exist for one purpose: to create value. An organization is considered effective if it satisfies its customers and shareholders; if it can add that value with minimum resources, it is also efficient. The role of Six Sigma is to assist management in producing the most value with the least resources, that is, in achieving efficiency. The organization does this by applying scientific Six Sigma principles to its processes and products, using the DMAIC (Define-Measure-Analyze-Improve-Control) approach to improve existing processes or the DFSS (Design for Six Sigma) approach to design efficient new products or processes. A number of companies have found that the business prospers once they embrace the Six Sigma initiative.

Many companies have implemented this methodology effectively, among them General Electric, Allied Signal, Honeywell, Honda, Sony, Canon, Boeing Satellite Systems and American Express.

So what is Six Sigma? Six Sigma is a well-known quality management methodology for controlling the defects that arise in an organization's processes, producing a process yield of 99.9997% (about 3.4 defects per million opportunities). It can be defined as a tool used to measure deviation from the mean, or from the desired goal or target, and at the same time to effectively manage and eliminate those deviations. Six Sigma uses teams that are allocated well-defined projects that have a measurable effect on the business. It also provides key team members with training in the advanced statistical tools and project management skills necessary for the project.

It is used in many industries like banking, telecommunications, business process outsourcing, insurance, healthcare and many more.
1. Value of Six Sigma

The roots of Six Sigma date back to the 1920s and the decades that followed, when tools such as Deming's 14 points and Ishikawa's 'seven quality tools' (control chart, check sheet, run chart, histogram, scatter diagram, Pareto chart and flowchart) were developed. It would be wrong to say that Robert Galvin "invented" Six Sigma; rather, he applied the tools developed by people like Shewhart, Deming and Ishikawa. The contribution of all these scholars to the theory behind Six Sigma is remarkable.

Carl Friedrich Gauss (1777-1855) introduced the concept of the normal curve. W. Edwards Deming, the 'godfather' of quality, brought about an immense change in the approach towards quality in the early 1950s. However, Six Sigma was put into practical use only when Motorola introduced it in the early 1980s as a method to reduce product failure levels over the following five years. This was a challenge, and it required prompt action and deep analysis. It was only in the mid-1990s that Motorola revealed that it had adopted this unique method.

The value of Six Sigma can be gauged from the fact that big names in the corporate world use it regularly to improve efficiency. It is a data-driven approach that strives to improve the process at every step, focusing on reducing process variation and improving process control. The basic aim of this approach is to bring the process as close to 'perfect' as possible. Companies measure their performance according to the sigma level of their business processes. A related approach called Lean Sigma (described in Chapter 9) focuses on driving out waste and promotes work standardization and flow.

The earlier trend was to aim for three or four sigma performance levels. However, with increasing competition the goal has risen, and companies now strive to achieve the six sigma level. This increased level of competition means that the standard has tightened from roughly 66,800 defects per million opportunities at three sigma (or 6,200 at four sigma) to just 3.4 defects per million opportunities at six sigma. Six sigma corresponds to six standard deviations between the process mean and the nearest specification limit. The bar has been raised because of higher customer expectations: customers demand better quality and better service, and the increasingly aware customer now holds a strong position in the market.
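To make these defect rates concrete, the short Python sketch below (an illustration added here, not part of the original courseware) derives the per-million figures from the normal distribution, applying the conventional 1.5-sigma long-term shift used in standard sigma-level tables.

from statistics import NormalDist

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Defects per million opportunities for a given sigma level,
    assuming the conventional 1.5-sigma long-term shift."""
    long_term_yield = NormalDist().cdf(sigma_level - shift)
    return (1 - long_term_yield) * 1_000_000

for level in (3, 4, 5, 6):
    print(f"{level} sigma: about {dpmo(level):,.1f} defects per million opportunities")
# Prints roughly 66,807 / 6,210 / 233 / 3.4 respectively.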

The primary aim of the Six Sigma approach is, in fact, to keep customer requirements in mind. The main focus of an organization is therefore its customers or clients: they are the ones who influence decision-making in an industry and for whom steps are taken to improve efficiency and performance. Six Sigma helps to reduce costs, which in turn makes it possible to make optimum use of the company's resources.

To illustrate the above point, consider a business process outsourcing center where, as in any other industry, the main focus is the customer. Customer satisfaction is its primary aim; at the end of the day it wants happy customers who will bring in more business. Assume that Company X manufactures computers. A customer orders a desktop and opts for next-day delivery (after it is manufactured). If, for example, the order is discounted by 50% whenever delivery is late, every late delivery is a loss for the company. If only 65% of the computers are delivered on time, the process is at sigma level 2. If 92% are delivered on time, the process is at level 3. If the company delivers 99.4% of the computers on time, its performance is at level 4. However, to achieve level 6, it must deliver 99.9997% of computers on time.
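As an illustrative check of these figures (again a sketch added here, not from the courseware), the inverse normal distribution converts each on-time delivery rate back into its approximate sigma level:

from statistics import NormalDist

def sigma_level(yield_fraction: float, shift: float = 1.5) -> float:
    """Approximate sigma level for a given yield, with the usual 1.5-sigma shift."""
    return NormalDist().inv_cdf(yield_fraction) + shift

for on_time in (0.65, 0.92, 0.994, 0.999997):
    print(f"{on_time:.4%} delivered on time -> sigma level {sigma_level(on_time):.1f}")
# Gives roughly 1.9, 2.9, 4.0 and 6.0 -- i.e. levels 2, 3, 4 and 6.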

It therefore becomes very clear how defective the process is and where the problem lies. This is a very reliable way to ascertain losses and loopholes in the process. Six Sigma projects are executed by Six Sigma Green Belts and Six Sigma Black Belts and are overseen by Six Sigma Master Black Belts. Black Belts save companies approximately $230,000 per project and can work on 4 to 6 projects a year. It is a very disciplined approach and produces consistent results. Credibility is key: because top management is involved, excellent results can be assured.

Six Sigma is critical to quality. Interestingly, it disproves the notion that such a technique can work only for big companies; it can be applied effectively to small businesses as well. It is a result-oriented approach that produces excellent results with optimum utilization of resources. It saves time and money and is one of the best ways to improve customer satisfaction.

The philosophical idea behind Six Sigma is that all systems can be seen as processes with inputs and outputs, and these can be measured, improved and controlled. Six Sigma aims to control the outputs by influencing the inputs, and it uses a set of qualitative and statistical tools to steer process improvement.
2. Business Systems and Processes

What is a Process?

A process is a sequence of events that leads to an output. A business process is a coordinated effort that produces a defined result. It is a mixture of several parts, including raw materials, resources, manpower, roles and actions, brought together to accomplish a certain task. In other words, a process adds value to an input. A process is ongoing, and this is what leads to the production of goods and services.

Business processes differ from organization to organization. Some processes common to all organizations include drafting a plan, manufacturing, establishing customer relations, communication, and providing good service to customers. These processes can be further divided into sub-processes such as design creation and research and development. Processes can also be represented graphically on a process map, which makes them easier to execute.

A comprehensive understanding of the business processes, followed by improvement upon them, forms one of the most important aspects of a Six Sigma project. The approach followed by Six Sigma practitioners to improve processes is defined by DMAIC, which stands for Define, Measure, Analyze, Improve and Control. DMAIC is discussed in detail in subsequent chapters.

What is a Business System?

A business system is a wider term than a business process: several business processes make up a business system. When a process is implemented and its inputs are put in place, a business system develops. A business system is made up of several coordinated processes, and it makes sure that all the inputs are available and that no process suffers a scarcity of resources. The main aim of a business system is to continually develop its processes and outputs. So, one of the main responsibilities of a business system is the continuous collection and scrutiny of information about the business processes in order to improve the final output.

Six Sigma is gaining popularity by the day and is being used extensively across all business sectors. Whether it is banking, health care, insurance, business process outsourcing, telecommunications or even the military, the use of Six Sigma has become imperative because it sets standards and stands for quality. There are numerous business systems and umpteen processes to be taken care of in any particular organization. It is very important that defects be removed at the lowest level, in every process, to reduce the overall defect rate. To produce a result-oriented effort, it is important to identify the different business systems and processes.

In the present day, the tasks in every organization are varied and the processes quite complex. Therefore, to achieve better results, it is very important that people from cross-functional departments come together and contribute their skills towards achieving the process goal, instead of each department working only under its departmental authority. Roles have to be rethought when tasks are assigned, and multi-tasking needs to be recognized. This is possible only if each system and sub-system is clearly defined and the role of every staff member demarcated. This is where Six Sigma steps in: it reduces waste by making staff much more productive and minimizing errors. Everybody is assigned a specific role and is aware of what needs to be done and how. A list of some of the business sectors that deploy Six Sigma is given below.

Health Care
Consider the healthcare sector. The face of this sector is changing for the better and the tasks of the management are piling up. New discoveries are frequent, and with increased competition the processes have become quite varied and complex. This means that errors have also multiplied. If patients are not satisfied with the services they receive at a hospital, the management has failed in its purpose.

Take the case of a factory worker in India who meets with an accident in the middle of the night and does not have the resources to be operated on at a private clinic. The government hospital he is admitted to closes its out-patient department (OPD) at 10 PM. Here the OPD is a process, whereas healthcare taken as a whole is the business system. It becomes the responsibility of the authorities concerned to make sure that patients do not face this issue in the future.

It is time that hospital management started providing better service to its patients: service that is safer and produces higher patient satisfaction. Hospitals that earlier used Plan-Do-Study-Act (discussed later in this chapter) to foster improvements are also adopting the Six Sigma model, and many organizations in the healthcare sector have implemented it successfully. They are now focusing on an environment that does not believe in passing the blame.

In a study done at North Carolina Baptist Hospital, a Six Sigma process improvement team assigned the task of moving heart attack patients from the emergency ward to the cardiac lab reduced the hospital's mean transfer time by 41 minutes.

Business Process Outsourcing (BPO)

The outsourcing sector is not a recent discovery. One of the earliest approaches was for a company to outsource categories of work such as document copying, data entry and scanning. With the passage of time, however, companies started outsourcing entire processes to a vendor. The vendor would then, through a contract, agree to buy the relevant assets of the company and hire the same company's employees to carry out the process.

The current trend is to hire an outsourcing partner to carry out the company's back-office programs. This approach helps maximize output by providing additional labor, equipment, technology and infrastructure. In fact, the BPO boom has paved the way for many new innovations and has made it possible to outsource knowledge-based jobs: Knowledge Process Outsourcing is a recent phenomenon that requires manpower adept in specialized knowledge.

Terms like "moving up the value chain" and "business transformation" are synonymous with the BPO sector; they stand for cost efficiency and maximizing profits. This sector keeps three aspects in mind: the customer, the process and the employee. Six Sigma is a highly disciplined approach that helps deliver near-"perfect" products and services and is therefore appropriate for this sector, where customer satisfaction is key.

There are some common steps needed to change the face of a BPO operation and drive it towards success. Take the example of an inbound customer contact center, where Average Handling Time (AHT) is one of the basic metrics. Assume that the target AHT in this organization is 8 minutes. If employees spend more time than this on calls, the total number of calls each employee can take decreases, because the number of calls handled in a shift is inversely proportional to the average handling time. Therefore, to improve efficiency and increase the number of calls, it is important that employees be well versed in the process and provide a standardized solution for a particular issue.

So the process that needs improvement in this case is the one for reducing AHT. The BPO should make sure that the agents handling the calls have appropriate knowledge of the products and keep their conversations clear and to the point. Through Six Sigma, employees strive for perfection and try to achieve 100% accuracy. At the end of the day, what is required is a satisfied customer who will bring more business, and therefore more profit, to the company.
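The arithmetic behind this relationship is simple; the sketch below (hypothetical shift figures, added only for illustration) shows how call volume falls as AHT rises:

def calls_per_shift(productive_minutes: float, aht_minutes: float) -> float:
    """Calls an agent can handle in a shift; inversely proportional to AHT."""
    return productive_minutes / aht_minutes

# Assume, hypothetically, 420 minutes of productive talk time per shift.
print(calls_per_shift(420, 8))   # 52.5 calls at the 8-minute target AHT
print(calls_per_shift(420, 10))  # 42.0 calls if AHT drifts up to 10 minutes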

Insurance

As stated earlier, Six Sigma applies to all kinds of organizations and processes, and insurance is one of them. Insurance companies these days offer insurance for various purposes such as health, vehicles, fire and life. This is a very time-consuming sector, as it requires a lot of paperwork, underwriting functions and adjustments. With umpteen organizations providing insurance, the field has become quite competitive, and therefore a lot of efficiency is required to lead.

What is required in the insurance sector, as in any other, is the need to understand client requirements (in this case, policy owner requirements). Another requirement of this sector is reliability, so that clients trust the insurance company. No insurance company will eliminate its underwriting function just to make customers happy; however, it can modify these functions to suit customers' needs.

Therefore, to achieve a targeted number of satisfied customers while making optimum use of resources, it is important that the insurance company adopt a very systematic approach. Processes should be improved in line with the goals of the organization while remaining profitable and leading to better customer satisfaction. Six Sigma is one such approach: it helps to reduce costs without compromising on quality, and it helps minimize errors and reduce rework, which in turn reduces costs and maximizes profits.

Military

It might seem strange to hear that the military would require a mechanism like Six Sigma. When you think of the military, the image that comes to mind is of men conditioned by rules and regulations. However, a lot of business goes on in the armed forces as well. The military buys equipment and machines and procures arms and ammunition; besides this, there is routine work such as payroll, and a lot of money is spent on recruitment. The attrition rate is also rising, which translates into rising costs. Although there is no continuous struggle to cut costs and maximize profits as in other businesses, running the military is no less than running a business.

The Six Sigma method has helped in building a better work force and a methodical
organizational process in the military.

Six Sigma is a powerful tool that helps implement innovations which can transform organizations for the better. The trained belts apply Six Sigma methods to processes such as manufacturing, repair, sales and maintenance. Six Sigma also proves effective for sectors such as banking and education. Business processes should therefore be tailor-made to suit the needs of the business systems, and the business systems, in turn, should make sure that they possess all the information about each process and the improvement it requires. Business process and business system depend on each other for success. Below is a diagram showing the relationship between a business system, a business process and its sub-processes.
3. Process Inputs, Outputs and Feedback

The words 'input', 'process' and 'output' may sound like technical terms, but in reality they apply in day-to-day life. Take a very simple example: preparing tea. The inputs that go into making tea are water, sugar, tea bags and milk. The process is to boil the water and put the ingredients in it. The output is tea. This analogy can be used to understand business systems, which are of course more complex and sophisticated.

The tea-making example can be illustrated with the help of a simple input-process-output diagram:
This process framework is narrow in scope and gives only a broad outline of a process. However, as stated earlier, business systems do not work on such simple lines; they are vast processes and require expertise.

Inputs

The dictionary defines input as something that is put in, usually with the intent of sizable recovery in the form of output, or as a component of production (such as land, labor, or raw materials).

Inputs in a business system comprise raw materials, funds, equipment, information and research. Inputs can also be intangible, such as time, energy and ideas. Inputs are often procured through a supplier. They are the very foundation of business systems: without inputs, processes cannot be implemented and production cannot be carried out.

Processes

The dictionary meaning of process is a series of actions, changes, or functions that bring about a result. In an organization, a process can be described as the provision made to convert raw inputs into the productive outputs that make the business work. An organization takes into consideration all the physical, technical and mechanical factors needed to accomplish its goals. Ideas are executed as functions, and funds are invested in a way that yields productive returns and maximizes profits. It is the innovativeness of the personnel, the productivity of the machines and the technical information that are transformed into great products and services.

Outputs

The dictionary defines output as an act or process of producing or, simply put, production. The result of carrying out a process in a systematic and productive manner is the delivery of outputs. The outputs of a business system result from the internal processes that go on in an organization. These outputs can be in the form of goods and services, or simply ideas, thoughts or information-based data. These outputs are the revenue-earning material of an organization. Also, an output of one department may be the input of another. Consider a company that manufactures pizzas. The first thing to be done is to knead the dough and prepare the pizza base. The next is to garnish it with cheese, vegetables, oregano and basil, and then bake it in the oven. The first stage, preparing the pizza base, may look like an output on its own; however, it is also an input into the rest of the pizza-making process. Similarly, the freshly prepared pizza is the output of the production unit (the kitchen) and an input to the sales department.
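The way one stage's output becomes the next stage's input can be sketched as a simple pipeline (the stage names below are invented for illustration and are not part of the courseware):

from typing import Callable, List

def knead_dough(ingredients: str) -> str:
    return f"pizza base made from {ingredients}"

def garnish(base: str) -> str:
    return f"garnished {base}"

def bake(item: str) -> str:
    return f"baked {item}"

def run_process(stages: List[Callable[[str], str]], raw_input: str) -> str:
    """Feed the output of each stage in as the input of the next."""
    result = raw_input
    for stage in stages:
        result = stage(result)
    return result

print(run_process([knead_dough, garnish, bake], "flour, water and yeast"))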

The business process is much more complex than it seems and needs to be supervised by a controller who regulates the activities of the group. There is a lot of active communication between the controller and the personnel assigned to carry out the tasks. Loopholes in the process need to be looked into. Besides this, there is the intricate task of procuring the inputs and selling the outputs.

Another factor that requires a lot of consideration is customer feedback. Customer feedback is a great motivator and a very powerful way to ensure value for money. If a customer is not satisfied with the end product, he will switch to another product that guarantees quality, and this is the last thing any organization wants. The customer is king, and his needs have to be taken care of. The output can therefore be used as a thread by which to measure deviations from quality and improve performance.

The Six Sigma toolset makes sure that the inputs, or resources, are used optimally: they are used in such a manner that maximum return on the investment is guaranteed. The process should consume the least time possible and be very efficient, and the output should be the one that yields maximum profit.

Feedback

Feedback is vital for any organization, and positive feedback is an added advantage. The process of converting input into productive output requires a lot of hard work and patience, so it is very important to obtain feedback and learn what customers feel about the product or service they are using. The organization can learn from the feedback and discover loopholes, if any; it can then improve and hope to satisfy its customers.

On the other hand, feedback is also important for customers, as it lets them voice their concerns, positive or negative, about the product or service they are using.

1.1 Agents of Change

B. Agents of Change

1. Why is Change Important?

Businesses and processes are usually resistant to change; the first reaction to a change will be resistance. If the customer or end-user is not satisfied with the process, product or service, no process improvement initiative will succeed. Therefore, anticipating resistance to change and being prepared with tactics to deal with it is imperative to the success of any organization or enterprise.

Six Sigma is not a completely new method of changing an organization; rather, it forces change in an organization to occur in a systematic manner. The foremost aim of management in traditional organizational setups is to develop systems that create value for customers and shareholders. This task is ongoing, because management systems have to strive constantly to sustain their competitiveness in the face of shifting customer trends and tastes. There is also always the threat of competitors who try to woo customers away through constant innovation, and external factors such as capital markets are always presenting new ways to secure return on investments. For a business to survive in a fast-moving and dynamic environment, the organization needs to respond quickly and effectively to change. This emphasizes the significance of, and the need for, change in management systems.

At the project level, there are many factors that necessitate change; often it is the project itself. Although every process of the project is planned during the planning phase, there can be many fluctuations and variations in the project's scheme of things as it proceeds. Factors such as changes in the project scope, alterations in time schedules, variations in proposed costs, divergence in the design, pattern, quality or specifications of the deliverables, and modifications in technology all call for change.

In spite of this necessity for change, business enterprises are reluctant to incorporate change in their management systems. Most of them resist change until there are tell-tale signs that the business is not producing the expected results. They realize the urgency for change when stakeholder groups start complaining, or when market share starts showing a downward trend, which could indicate a fall in the competitiveness of the product or service. Other occasions that necessitate change are falling share prices or customer complaints reaching incredible proportions. Change such as integrating new technology into the organization improves quality and efficiency and provides a competitive advantage over others. Change can also be forced on the organization through internal policy changes, or through government rules and other external factors. Scores of projects fail because they lack proper change control management. Sometimes major losses are incurred before the change can be implemented; people lose their jobs, and sometimes their careers too.

The Six Sigma methodology adopts change by intrinsically integrating it into the management system. Six Sigma does not try to change the management system altogether, but creates the infrastructure in such a way that change becomes intrinsic to the everyday scheme of things. Six Sigma creates full-time or part-time positions such as Green Belt, Black Belt or Master Black Belt to facilitate this change. (These are discussed in subsequent sections.)

The functioning of Six Sigma demands that the organization constantly find new ways of improving its current systems; it is about more than just operating the system in the routine way. New techniques are employed and new procedures implemented to suit shifting customer and shareholder needs, and statistical and analytical techniques and metrics are applied at all levels to facilitate this change.

There has to be rigorous training to effect this change, and one of the basic things that needs to change is communication. Leadership has to ensure that communication about incorporating Six Sigma is effective and devoid of loopholes; otherwise there will be stiff resistance to change.

2. Managing Change

Organizational change management is the practice of developing change in an organization in a planned and phased manner. The main goal of change management is to facilitate collective gains for the organization's business and for all personnel involved in the change. When management reacts to factors external to the organization (macroeconomic factors), it is called 'reactive' change management. When management initiates the change in order to achieve an objective, or makes change a part of the regular routine, it is called 'proactive' change management. Six Sigma is a proactive method of initiating change: it makes change a part of the daily scheme of things, as discussed in the preceding section.

The first stage in managing change is planning the change by making:

 a problem statement
 an objective
 a baseline metric
 the other related metrics for the project

One of the foremost responsibilities of management is to look for trends in the macro environment so that the desired changes can be identified and new programs easily initiated. Management has to explain the importance of the change. The change plan must also include a communication plan and training requirements to lessen the effect of the change on the team involved. It also has to consider how the change will affect employee acceptance, reaction and motivation, operating procedures, and technological requirements. Management then has to see that the change imperative percolates down to all levels. In addition, it has to review and monitor the change to check its effectiveness and make adjustments where necessary. At the same time, management has to support employees as they undergo the process of change.

Roles

The various personnel play the following roles when a change is initiated (Hutton, 1994):

 The official change agent, sometimes called the champion, is the person who is
officially designated to assist management to carry out the change process.
 Sponsors are the senior leaders who are authorized to legitimize the change.
 The advocate is the person who sees the need for a change.
 The informal change agents are the personnel who assist in managing the change
initiative voluntarily.

The Goal of a Change Agent

(Thomas Pyzdek, 1976)

 Change the way people in the organization think.
 Change the norms (standards, models, patterns) of the organization.
 Change the organization’s systems or processes.

The Way Change Agents Work to Implement Change

1. The change agent imparts education and training to the personnel involved in the change. Training means giving technical instruction and practical exercises to help them know how to perform a task; education means enabling them to think, or changing the way they think. Change agents also provide management with alternative ideas for tackling the change.

2. Change agents hold important positions in the organization and are therefore pivotal in bringing about quality improvements. They help coordinate the development of the change and its implementation.

3. A change agent helps the organization organize review programs covering its strengths and weaknesses. A change is usually necessitated to eliminate the weaknesses and build on the strengths.

4. The change agent also makes sure that top management gives enough time and commitment to the change effort, without which the effort will fail to take off. Change agents act as 'coaches' to the senior leaders; they advise and coax the leadership into action and continuously remind them if goals are not being met.

5. Change agents use projects as the instrument for change. Projects have to be planned in such a way that they are aligned with the change initiative’s goals. The change agent can deal with resistance to change by the following methods:

 Create a sense of urgency for the change.
 Give prior information about the launch of the change to the people directly or indirectly involved in it.
 Make the process difficult to ignore by tying its success to personal training, timely reviews, or other important processes.
 Use appropriate communication techniques such as checklists, dashboards and one-on-one training.
 Do appropriate research to validate the change, citing examples.
 Implement the change sooner rather than later, because there is no ‘right time’ for change.
 Prepare contingency plans in case the change effort does not take off.

1.2 Six Sigma Implementation Process

C. Six Sigma Implementation Process

The structure of the Six Sigma functions consists of the following:

1. Enterprise Leadership

Leadership today requires strong analytical capabilities, intelligence, the ability to take risks and decisions, and the ability to motivate others and communicate effectively. A leader should believe in teamwork. Enterprise leadership is an important constituent of Six Sigma. It is important to list the tasks and how they are to be implemented; however, it is also important to know who will lead others in carrying out these tasks. This is an important step and requires a lot of planning, because the tasks need to be executed properly.

The most important responsibility of top management is to assign roles and responsibilities to the personnel who will assist them in the deployment of Six Sigma. The task of linking the project to the organizational goal is equally important. Six Sigma is supervised by top management, who decide who will lead the project and are therefore one of its core elements. The Green Belts, Black Belts and the most talented and accomplished practitioners, the Six Sigma Master Black Belts, assist top management in the deployment of a Six Sigma project. Although these titles sound like terms from karate or martial arts, they are not related to the sport in any way. The terminology became associated with Six Sigma when a plant manager at Motorola reported to Mikel Harry and his team that the Six Sigma tools were "kicking the heck out of variation" in production.

Executive Leadership

An executive leader is comparable to the chief executive officer of a company, who carries the most important responsibility in management and ranks highest on the hierarchy ladder of the organization. He holds a permanent position in the organization, represents the needs of the customers and communicates them upward, executes the policies of the company and reports directly to the Board of Directors. Just as a CEO is the senior manager in a company, the executive leader holds the senior-most position in the Six Sigma hierarchy. He is also responsible for allocating duties to the junior officers.

An executive leader is also sometimes referred to as the Process Owner or Business Process Executive. The executive leader has to apply the Six Sigma tools while keeping the goals of the company in mind; it is very important to keep a balance between the two. The leader should also motivate others by providing compensation in the form of incentives and by lending full support whenever others need it. Another of his responsibilities is to allocate the budget and to train the front-line workers who execute the process.

Project Champions

Project champions are also senior-level officers and managers who coordinate Six Sigma projects. They look for prospective projects and oversee ongoing ones, and they lay out the initial draft of each project. They are responsible for managing the budget, eliminating problems and ensuring that projects are completed on time, and they are skilled in the relevant statistical concepts and tools. It is important to have one Six Sigma champion per project. Project champions also make sure that they stay in constant touch with the Master Black Belts and hold regular meetings with them about the project.

Master Black Belts

Master Black Belts are experts with hands-on knowledge who act as mentors for Black and Green Belts. They are responsible for selecting, prioritizing and implementing Six Sigma projects, and they also revise the Six Sigma training manuals. However, the main job of a Master Black Belt is to train the other members involved in a Six Sigma project, which means he should have thorough knowledge of the project they are working on. Master Black Belts are often also assigned a particular management function, such as finance or resource management.

They earn the prefix "Master" after having gained experience in the field by training Green and Black Belts for a number of years. This means they should possess very strong communication and motivation skills along with technical competence. They should be able to explain all the statistical tools and concepts with ease and be ready to face challenges and problems.

Black Belts

Black Belts form the technical support team in a Six Sigma project. They are highly trained and have hands-on knowledge of the statistical tools and methods. They should also possess presentation and analytical skills, have the ability to take risks and, at the same time, be innovative enough to come up with something new. They usually form a team of 4 to 6 people per project.

The role performed by a Black Belt is that of a coach: his main job is to train others involved in the Six Sigma project by giving examples, holding seminars and conducting workshops. Black Belts handle 3 to 4 projects per year and deliver significant results; they affect an organization most significantly by saving huge amounts per project. Black Belts hold a significant position in the cadre of players in a Six Sigma project and can also assist the Master Black Belts if need be.

Green Belts

Green Belts form 5 to 10 percent of the organization. They consist of professional staff and facilitate group activities. They undertake about two projects per year, usually as regular staff who work part-time on such projects; however, they can also be team leaders for some projects. Moreover, companies these days look for employees who qualify at least for the position of a Green Belt.

The time required for each level of position in Six Sigma varies. However, what is required at each level is to choose capable personnel from the organization who are ready for innovation. They should be the ones who can communicate effectively and bring about significant changes in the organization.

The Six Sigma leaders should be able to see the advantages of strategic planning and, at the same time, determine the steps needed to complete the ongoing project. They also oversee whether duties are being performed in the right fashion, and they should be able to choose the right people who will perform efficiently in a team. There should be enough flexibility in the organization, meaning that team members should be given enough freedom to take independent decisions whenever required.

A successful leader always works in accordance with market needs. He anticipates the needs of the customers and molds his business strategy accordingly. He is always prepared to take risks and to set performance standards according to current trends.

2. Six Sigma Methodologies

DMAIC

From a practical viewpoint, it is essential to generate a master deployment plan as a road map for the entire Six Sigma implementation cycle. The master plan can be developed and divided into five phases: Define, Measure, Analyze, Improve and Control. This is known as the DMAIC model and is, in a sense, an improved version of the PDCA (Plan-Do-Check-Act) model.

The detailed steps for each phase are described as follows:

Define Phase

This is the first phase of DMAIC. In this phase, the key factors, such as the Critical to Quality (CTQ) variables and the problems present in the process as identified by the customers, are defined. A process is an ordered sequence in which input is transformed into output. The process that needs to be amended is clearly defined using SIPOC, an acronym that stands for Supplier-Input-Process-Output-Customer.

The Voice of the Customer is critical for defining the goals of the organization. There are other issues to be taken care of as well, including cycle time, cost and defect reduction. The essence of Six Sigma is to solve problems that are affecting the business, and the improvement process starts immediately with the Define step. When a Six Sigma project is launched, goals are chalked out to gauge the degree of satisfaction among customers. These goals are further broken up into secondary goals such as cycle time reduction, cost reduction, or defect reduction.

The Define phase comprises baselining and benchmarking the processes that need improvement. Goals and sub-goals are specified and the infrastructure to accomplish them is established. An assessment of changes in the organization is also taken into consideration.
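A SIPOC map produced in the Define phase can be captured in a simple structure like the one sketched below (a hypothetical example using the earlier desktop-delivery scenario; the entries are invented, not taken from the courseware):

from dataclasses import dataclass
from typing import List

@dataclass
class SIPOC:
    """A high-level SIPOC map for one process under study."""
    suppliers: List[str]
    inputs: List[str]
    process: List[str]   # major steps, kept deliberately high level
    outputs: List[str]
    customers: List[str]

delivery = SIPOC(
    suppliers=["component vendors", "assembly plant"],
    inputs=["customer order", "assembled desktop", "courier capacity"],
    process=["receive order", "assemble", "pack", "ship next day"],
    outputs=["delivered desktop", "delivery confirmation"],
    customers=["end customer"],
)
print(delivery.outputs)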

Measure Phase
The second phase is the Measure phase, in which information is reviewed and data are collected. This phase measures the performance of the ongoing process and quantifies the data collected; the process is typically measured in terms of defects per million opportunities. This is imperative for Six Sigma, because only if the measurement is correct will the results be good. An important consideration while measuring is that the effort should also produce cost and time savings.

The key requirement in this phase is a measurement system that can be substantiated when required; it should be accurate to the core and orderly.
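For example, the defects-per-million-opportunities figure used in this phase is computed as follows (a minimal sketch with invented sample numbers):

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """DPMO = observed defects / total opportunities x 1,000,000."""
    return defects / (units * opportunities_per_unit) * 1_000_000

# Hypothetical sample: 500 units inspected, 6 defect opportunities per unit, 21 defects found.
print(dpmo(defects=21, units=500, opportunities_per_unit=6))  # 7000.0 DPMO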

Analyze Phase

The Analyze phase is where the interrelations between the variables and the impact they have on quality are studied. This is also the phase in which the root cause of a defect is analyzed, new opportunities are sought, and the gap between current and target performance is found. The idea behind this kind of analysis is to find the inputs that directly affect the final output. It can also help answer several questions, such as:

 Which blend of inputs affects the output?
 If an input X is changed, will it alter the output Y?
 If one input is changed, does it affect the other inputs?

In the Analyze phase, it becomes easier to determine which variables affect the CTQ factors.
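As a first screening step of this kind, the relationship between an input X and the output Y can be checked with a simple correlation (a sketch with made-up data; real projects would follow up with regression or designed experiments):

from statistics import correlation

# Hypothetical paired observations: oven temperature (input X) vs. defect count (output Y).
temperature = [180, 185, 190, 195, 200, 205, 210]
defects     = [ 14,  12,  11,   9,   8,   6,   5]

r = correlation(temperature, defects)
print(f"correlation between X and Y: {r:.2f}")
# A strongly negative value here suggests this input is a driver of the output
# and is worth investigating further.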

Improve Phase

The Improve phase comes after the Analyze phase in DMAIC. The personnel working on the project select the method that will work best for the project, keeping the organization's goals in mind. The root cause analysis is chronicled in the Analyze phase; the Improve phase implements the conclusions drawn from that root cause analysis.

In this phase, an organization can improve upon, retain or reject the results of the root cause analysis. In this phase (as in the Analyze phase), the Open-Narrow-Close approach is used: options are opened up, narrowed down, and the best solution is chosen. The emphasis remains on choosing the result that leads to maximum customer satisfaction. This is also the phase in which an innovative resolution is found and executed on an experimental basis.

Control Phase

It is very important to maintain the standard that has been established. The Control phase is where the improvements that have taken place are sustained. This is done by documenting the improvements and keeping a check on the new process created by mitigating the defects, so that the defects that were present in the earlier process or product are absent from the new one.
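One common way of keeping that check is a control chart. The sketch below (invented data; it uses the overall sample standard deviation for simplicity, whereas textbook individuals charts estimate sigma from the moving range) computes three-sigma control limits for the improved process:

from statistics import mean, stdev

# Hypothetical daily measurements of the improved process output.
measurements = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 10.1, 9.7, 10.0, 10.2]

center = mean(measurements)
sigma = stdev(measurements)
ucl, lcl = center + 3 * sigma, center - 3 * sigma  # upper and lower control limits

print(f"center line {center:.2f}, UCL {ucl:.2f}, LCL {lcl:.2f}")
signals = [x for x in measurements if not lcl <= x <= ucl]
print("points signalling special-cause variation:", signals)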

There are different kinds of problem-solving frameworks in a Six Sigma project, of which DMAIC is the most popular. However, there are other frameworks that are more beneficial than DMAIC for some organizations. Let us look at these and compare them with each other and with DMAIC.

The Similarities between DMAIC and DMADV

Although DMAIC and DMADV are two different Six Sigma methods, they have some things in common. Both are used to reduce defects to 3.4 per million opportunities, both are fact-based, and both are carried out by Green Belts, Black Belts and Master Black Belts. They also share their first three letters. It is important to note that both are learning techniques that help maximize profits for the organization and help it climb the growth chart.

The Differences between DMAIC and DMADV

Despite the similarities, there are differences between the two tools.

DMAIC stands for:

D - Define - To define the objectives of the project and the demands of the
customer.

M - Measure - To measure and assess the current state of affairs.

A - Analyze - To determine the loopholes in the working of the organization.

I - Improve - To improve the process by finding ways to reduce the defects.

C - Control - To control and maintain the improved performance.

DMADV stands for:

D- Define - To define the new project and determine the new demands of the
customer.

M - Measure - To measure and assess the current situation of the organization.

A - Analyze - To analyze the pros and cons of the new project.

D - Design - To design a draft of the new plan.

V - Verify - To verify the design and determine whether it is appropriate to meet customer demands.

DMAIC is used when existing products or processes are not up to the required standards and customer satisfaction goals are not being met. DMADV, on the other hand, is used when new products and processes need to be introduced in the organization, and also to maximize profits. Another reason to implement DMADV is when DMAIC has been introduced but is not producing results.

1.3 PDCA- Plan Do Check Act

PDCA- Plan Do Check Act

This is the basic and very first model behind Six Sigma. It is also referred to as the "Shewhart Cycle" or "Deming Cycle", because the model was developed by Walter Shewhart in the 1930s and later popularized by W. Edwards Deming in the 1950s; it is sometimes also called the "Deming Wheel". It is an active model that never ends and consistently strives to improve the process. It works as a continuous loop in which one act leads to the next without end; similarly, in PDCA there is no end to continuous improvement. The PDCA cycle can be represented in the form of a diagram.

Plan

It is very important to lay down the goals of the project. A draft is prepared listing the objectives that need to be accomplished in accordance with the policies of the organization. Planning is the phase in which a design for an improvement is made and, at the same time, old policies and procedures are revised for the better.

Techniques applied - customer/supplier mapping, flowcharting, brainstorming, Pareto analysis, evaluation matrix, and cause-and-effect diagrams.
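As an example of one of these techniques, a small Pareto analysis can be sketched as follows (the defect categories and counts are invented for illustration):

from itertools import accumulate

# Hypothetical defect counts by cause, e.g. tallied from a check sheet.
defect_counts = {"late delivery": 52, "wrong item": 31, "damaged box": 12,
                 "billing error": 4, "other": 1}

ranked = sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True)
total = sum(defect_counts.values())
for (cause, count), cum in zip(ranked, accumulate(c for _, c in ranked)):
    print(f"{cause:15s} {count:3d}  cumulative {cum / total:6.1%}")
# The top one or two causes usually account for most of the defects (the 80/20 rule).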

Do

This step simply means putting the plan into action. The plan needs to be implemented to get results for the betterment of the organization. This is usually done on an experimental basis to test the validity of the claims made in the plan. After the plan is executed, its performance is measured using different techniques.

Techniques applied - conflict resolution and on-the-job training.

Check

This is an important phase because it leads to control over the new policy. It is very important to ascertain whether the process has undergone any improvement and whether it is beneficial for the organization in the long run. All the improvements are evaluated and their results discussed.

Techniques applied - graphical analysis and control charts.

Act

In this stage, the organization can accept, adapt or reject the proposed policy. The pros and cons of the new policy are measured against factors such as the increase in profits or sales, the time involved and the use of resources. If the organization thinks the policy is worth adopting, it can either continue with it or modify it according to current needs. If need be, it can scrap it altogether and adopt a different plan.

PDCA should be applied frequently and should always strive to improve the process. It is largely based on trial and error, and the best method is usually found after a few unsuccessful attempts. It is also a good amalgam of the Eastern Lean approach called Kaizen and the Western approach, which measures the amount of success achieved.

Techniques applied - process mapping and process standardization.

This approach has certain disadvantages, as it consumes a lot of time, money and resources, and it is difficult to apply in practice. Dr. Deming also set out 14 points for management, which are listed below:
Apart from the above-mentioned models, the Six Sigma approach also makes use of other models such as SEA (Select-Experiment-Adapt) and SEL (Select-Experiment-Learn). The difference between these and the PDCA model is that the two models have positive feedback loops in between, which make the process much more forceful than PDCA. The PDCA model is a circular system, which makes the process cumbersome: if there is a mistake, the whole process has to begin anew, and a new plan needs to be drafted and implemented on an experimental basis, consuming a lot of time, money and resources. The SEA and SEL models, on the other hand, are applied when there is positive feedback from other agents, unlike the PDCA model, where feedback from other agents is not taken into account.

3. Deployment of Six Sigma

Definition of Deployment

The literal meaning of the word deployment is to install or set up; it is a military term meaning to position troops. In Six Sigma, too, the term refers to the placement of the personnel who carry out the Six Sigma project. Deployment is a very crucial step for an organization that adopts the Six Sigma principles, because Six Sigma does not involve carrying out routine tasks. Instead, it is about reducing or eliminating defects while striving to improve the process for the better. This means a deviation from the regular activities being carried out in the organization and a sincere effort to improve the process. Six Sigma is a concerted effort that needs equal participation from all members.

Six Sigma is a departure from regular tasks and therefore requires an effort on the part of top management to motivate the people working on the project. Many people fear adopting things that are new. It is top management's responsibility to communicate the goals of the project clearly and to encourage the personnel to adapt easily to the change. It should be clear to the personnel how they and the organization are going to benefit from the project.

Communication for Six Sigma

The Six Sigma plan has to be communicated clearly by the leadership so that the vision is accepted by all stakeholder groups: customers, employees, shareholders and suppliers. Six Sigma means bringing a cultural change into the organization, so a well-defined communication method has to be in place. Communication brings clarity and removes fear related to change; it should be such that people are ready to accept the change. Management should be able to remove all doubts and apprehensions that the personnel deployed on the project might have, and should make clear that the organization is serious about its commitment to the Six Sigma project.

Any communication gap can lead to problems later on and should therefore be avoided. The roles and responsibilities of the personnel should be clearly demarcated, and the tools and policies to be adopted should be clearly communicated.

The responsibility for developing the communication lies with the Process Owner, who is accountable to the Six Sigma executive council. He will need to bring a team together to execute the plan, and he reports to the overall sponsor of the Six Sigma deployment. The communication plan will need to address the needs of each stakeholder group related to the project.

1.4 Organizational Goals and Objectives

D. Organizational Goals and Objectives

1. Linking Projects to Organizational Goals

Leadership's primary role is to instill a vision for Six Sigma and to see that Six Sigma is not only implemented but incorporated into the business ethos of the organization. Leadership's main responsibility is to link Six Sigma goals and objectives to the organizational goals, because the company has to keep long-term objectives in mind.

Any project is taken up with the overall growth or long-term profits of the company in mind; it could also be for increased ROI (Return on Investment) or to increase sales. The Six Sigma project should be implemented in such a manner that the normal functioning of the organization is not affected. This necessitates the creation of new positions in the form of change agents, and the modification of departments and reward systems.

Strategies Employed by Leadership in Six Sigma Programs

Only bright, knowledgeable and involved management can lead an organization to success in its quality efforts, such as Six Sigma projects. The steps elucidated below address what leadership must do to ensure that the Six Sigma program in the organization is successful.

The executives must have total commitment to the implementation of Six Sigma
and accomplish the following:

 Creation and Agreement of Strategic Business Objectives, that is to identify the key
business issues
 Creation of Core, Key and Sub-Enabling Processes, that is to establish a Six Sigma
Leadership Team
 Organization of support for the Six Sigma program in the organization
 Decision on how new positions or change agents will be created, and the reporting authorities of each (e.g., identification of masters or process owners for each key business issue)
 Decision of employing cross-functional teams
 Definition of timelines for transitioning from a traditional enterprise to a Six Sigma
oriented enterprise
 Decision on whether Six Sigma will be a centralized or a decentralized function
 Creation and Validation of Measurement “Dashboards”
 Decision of incorporating Six Sigma performances into the reward system
 Decision on how much ROI is expected from the Six Sigma program
 Decision on how much of financial, intellectual and infrastructural resources are to
be dedicated to the project
 Continuous evaluation of the Six Sigma implementation and deployment process
and making the necessary changes

Selecting the Belts

The people working on a Six Sigma project have to be selected very carefully. The leaders of Six Sigma and champions in sport have a lot in common: both require a lot of restraint, and much depends on quickness of action. Both follow the principle of "strike while the iron is hot". The success of a Six Sigma project relies on its leaders. The management comprises the Six Sigma Champions and the Executive Leader, and they are assisted by team leaders who are the Green Belts, Black Belts and Master Black Belts.

Below is a graphical representation of the hierarchical positions held by the personnel working on a Six Sigma project.

According to The Six Sigma Handbook by Thomas Pyzdek, effort is required on the part of top management to carefully select the belts and the people working on a Six Sigma project.

1. Train the Masses

The people who are lowest in the organization's hierarchy are made familiar with
the basic tools. They are given enough freedom to take independent decisions,
without consulting top management, whenever the need arises. This technique
becomes more successful as the Six Sigma deployment matures.

2. Train the Managers

This means training the managers and helping them acquire the new skills required
to make the project a success. Often, people fear the unknown and the new.
Therefore, the managers should be motivated to take up the new job; they will
gradually learn by experience and adapt to the change.

3. Use the Experts in Other Areas

The people working on a Six Sigma project, for example the Black Belts, do not
carry out routine tasks. They are very different from the regular engineers, quality
analysts and technicians. They are multifaceted people, drawn from cross-functional
departments, who largely set aside routine activities to work full time on Six Sigma
projects. It is imperative that they are chosen with care, because they will be
responsible for the success of the Six Sigma project. They will be accountable not
to one but to many supervisors who oversee the Six Sigma project.

4. Create Permanent Change Agent Positions

Instead of creating a temporary assignment for the Black Belt, a permanent position
can be created for the post. Change agents are always required to enable an
organization to grow, so it is better if a permanent position is created for them.
The Black Belts gain hands-on experience by working on several projects, and this
makes them adept at delivering great results. Another option is to rotate fresh
people into the Six Sigma project so that as many people as possible gain exposure
to the position of a Black Belt.

The Lifecycle of a Six Sigma Initiative


Six Sigma is not an easy task to implement and requires a lot of planning. Below is a
brief summary of the way a Six Sigma initiative is carried out.

 The first step is to initialize the Six Sigma effort by drafting the plan. This is done by outlining the goals and putting the infrastructure in place.
 The next step is to divide the tasks among the personnel chosen for them. These people are then trained and supported so that they can complete their tasks conveniently.
 The third step is to execute the projects and improve performance, which helps in generating profits.
 The fourth step is to take a broader perspective of the endeavor that the management has undertaken, that is, to involve the other organizational units in the project.
 Fifthly, it is very important to maintain the standards that have been created. This is done through constant upgrading, research and development.

2. Risk Analysis

Risk analysis is an important part of a Six Sigma project. Six Sigma is a tool that
involves taking risks.

Consider ten identical balls moving randomly inside a closed box, with each ball
equally likely to be found in either half of the box at any moment. The probability,
at any given point of time, of all ten balls lying in the same half of the box is
2 × (1/2)^10 = 1/512, or roughly 0.2%. The formula for expected profit is
EP = Profit × Probability. Risk assessment in real life is not this simple, because
the probabilities are usually not known and need to be ascertained.
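
As a minimal sketch, assuming each ball is independently equally likely to lie in either half and using a purely hypothetical profit figure, the short Python snippet below checks the probability by simulation and applies the expected-profit formula:

    import random

    # Probability that all ten balls end up in the same half of the box,
    # assuming each ball is independently equally likely to be in either half.
    analytic_p = 2 * (0.5 ** 10)          # = 1/512, roughly 0.2%

    # Quick Monte Carlo check of the same quantity.
    trials = 100_000
    hits = sum(
        1 for _ in range(trials)
        if len({random.choice("LR") for _ in range(10)}) == 1
    )
    print(f"analytic: {analytic_p:.4f}, simulated: {hits / trials:.4f}")

    # Expected profit for a risky decision: EP = Profit * Probability.
    # The profit figure below is purely hypothetical.
    profit_if_event_occurs = 50_000
    expected_profit = profit_if_event_occurs * analytic_p
    print(f"expected profit: {expected_profit:.2f}")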

A business system is the set of processes that constitute an organization. Whenever
a new development is considered, it is advisable to keep the systems approach in
mind. This means that improvements should be made in a way that links the project
goals to the organizational goals. When a system is operated at less than its best,
this is called sub-optimization. Modifications may optimize individual processes but
sub-optimize the system as a whole.

A pizza making company uses a particular kind of cheese as an ingredient in some of
its pizzas. The customers want that particular cheese in all the pizzas they consume.
The presence of that ingredient increases the cost both to the customer and to the
company. However, the company realizes that the presence of this flavor of cheese in
all the varieties of pizzas may increase sales because of its general likeability.
This is a risk the company is willing to take: although the ingredient may be
expensive, it could bring in good profits for the pizza making company through
increased sales.

Risk assessment involves doing a feasibility study of introducing a new process or
product, or exploring areas that could help convert a risk into an opportunity. It
takes into consideration both short-term and long-term goals of the organization. A
few steps need to be considered to mitigate the chances of risk.

To begin with, the process has to be studied in detail and the areas that are prone
to risk identified. A fishbone (cause-and-effect) diagram is useful for such a study,
as it helps trace the potential effects of implementing a new policy or process back
to their causes.

Secondly, the risk is measured on three parameters: severity, frequency (occurrence)
and detectability. Severity and frequency are considered both as short-term and
long-term factors, whereas detectability is considered only as a short-term factor.
These risk factors are compared with other factors that could increase the potential
for risk. Each factor is typically rated on a scale of 1 to 10, and the three ratings
are multiplied together to give the Risk Priority Number, or RPN.

The RPN helps to identify defects and to give precedence to them according to how
they are present in the product or process. It is a largely qualitative approach
that helps assess the severity of the defect present in the process. Because it is
the product of three 1-to-10 ratings, the RPN can range anywhere between 1 and 1000.
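
As an illustration, the short Python sketch below computes and ranks RPNs for a few purely hypothetical failure modes, assuming the common convention that severity, frequency (occurrence) and detectability are each rated on a 1-to-10 scale:

    # Illustrative RPN calculation for hypothetical failure modes.
    # Each factor is rated 1-10, so RPN = S * O * D ranges from 1 to 1000.
    failure_modes = [
        # (description, severity, occurrence, detectability)
        ("Wrong cheese blend used",      7, 4, 3),
        ("Oven temperature drifts high", 8, 2, 5),
        ("Order entered incorrectly",    5, 6, 2),
    ]

    ranked = sorted(
        ((desc, s * o * d) for desc, s, o, d in failure_modes),
        key=lambda item: item[1],
        reverse=True,
    )

    for desc, rpn in ranked:
        print(f"RPN {rpn:4d}  {desc}")

The highest RPN identifies the failure mode that should receive attention first.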

An Alpha risk (the risk of acting on a difference that is not really there, a Type I
error) is often said to be more damaging in the short run. On the other hand, Beta
risk (the risk of missing a difference that really exists, a Type II error) holds
more prominence in the long run. It is, however, difficult to determine which risk
matters more for a particular organization; the decision requires a detailed study
carried out as the project proceeds. Alpha and Beta risks are not simple reciprocals
of each other, but they are related: precautions taken to ward off one of them
increase the chance of the other appearing. Therefore, it becomes imperative to
collect relevant data to achieve the right balance between the two.
SWOT Analysis

SWOT is an important aspect of risk analysis done in the context of any commercial
activity. It is a strategy based analysis and stands for strength (S), weakness (W),
opportunity (O), and threat (T). The first two can be categorized as intrinsic factors.
However, the last two form the factors that impact an organization from the
outside. SWOT analysis is instrumental in coordinating an organization’s resources
to its best ability. Moreover, it keeps in mind the stiff competition in the market that
an organization has to face. It plays an important role in choosing and
implementing new policies and procedures.

SWOT analysis is mostly applied to a company's or an organization's way of making
sales, merchandising a product, making an innovative move for the business, assessing
a competitor's worth, or looking for a new supplier or a profitable opportunity to
invest. It is one method where the four factors, namely strengths, weaknesses,
opportunities and threats, are all taken into consideration, and this makes it easier
to take a decision and implement a new policy.

The strengths and weaknesses are intrinsic to an organization. Strengths are the
strong points of an organization, which may also be exclusive to it. The weaknesses,
on the other hand, are its weak factors; they may include things like high costs or
poor quality. The opportunities are the external chances a company may get to prove
its worth. The threats are the ones a company may face from newly framed laws or
cut-throat competition.

SWOT analysis may seem simple; however, it assumes significant proportions in a
business or financial organization. The tasks in an organization are complex and
varied, and it is a mistake to oversimplify them when carrying out a SWOT analysis.
SWOT is a very useful analytical tool, but its conclusions should not be treated as
fixed beliefs.

3. Closed-Loop Assessment/ Knowledge Management

A major threat facing many organizations is not being able to change fast enough in
an environment where the pace of change is accelerating, fueled in no small way by
developments in the business.

Closed-loop assessment is an important tool that is useful for any organization and
at each level of hierarchy. It is a system that helps to analyze and improve the
current processes. As the name indicates, closed-loop assessment means to assess
and determine the gaps in the processes or systems of an organization. The lessons
gathered from previous experience are used to identify more such similar
opportunities. It is also done in order to ensure that the errors that were
committed in the past are not repeated.

The management of products and the working of processes in an organization depend
on a lot of factors: past experiences, the resources used and the costs incurred.
Product management also depends on the feasibility study done on the product, that
is, finding the most effective and efficient way of manufacturing the product or
service. This is where closed-loop assessment comes in.

Closed-loop assessment helps to closely assess a product and the components used, to
examine gaps and rectify them. The personnel decide if a component needs to be
standardized or whether it can be reused. They also check whether the product
satisfies factors like customer satisfaction metrics, quality gradients, costs and
profits. The performance of the employees is also among the key factors. All of
these are taken into consideration so that the errors committed in the past do not
recur.

Knowledge management is another significant tool for an organization. It helps in
disseminating useful knowledge among the personnel working on the Six Sigma project,
or in the organization as a whole. Knowledge management and closed-loop assessment
are closely related. While it is important to determine the gaps in the systems for
smooth functioning, it is also important to disseminate knowledge among the employees
on how to bridge those gaps. This is not an easy task, because the knowledge needs to
travel down to all employees and they must look at it from the same perspective.
Everyone should have the same idea about the company's business purpose, values and
method of operation.

The Black Belts and the Master Black Belts are assigned the tedious task of diffusing
useful information. They measure and assess whether the assumptions they had
made were correct and whether the SWOT analysis bore any fruitful results. They
also have to measure the goals accomplished, the kind of problems encountered,
the opportunities received and the lessons learnt.

Knowledge management can lead to a series of innovations and help accumulate and
spread knowledge in a productive manner. However, it is important to note that all
the lessons have to be learnt keeping in mind the goal of customer satisfaction. New
ideas and innovations should be welcomed, and the employees should be ready to face
new challenges and explore new ways of conducting processes.

It should be noted that the knowledge should be disseminated to the right people
at the right time, keeping the costs in mind. Accountability should be fixed and
there should be constant communication amongst the employees about the
policies. Simply put, knowledge management is a method which is applied by
organizations to get the maximum benefit out of the employees possessing
technical know-how and other intellectual resources.

Closed-loop assessment and knowledge management are significant tools that can be
applied in a productive manner for the upward growth of an organization. They are
useful methods for generating revenue and value through assets that are readily
available in the company, namely human resources and other knowledge-based tools.

CHAPTER 2
2 Business Process Management

A. Process vs. Functional View

1. Process Elements

Coordination is imperative at all levels in all organizations. The right blend of all the
elements is important to achieve a near perfect product and this is what a Six
Sigma project strives for. The aim of any Six Sigma project is also to deliver defect-
free products. The correct combination of the process elements and a disciplined
approach to turn them into productive output is required. It is important to
consider the generic process elements that may affect a product.

It would be better if the combination of the process elements is known. This would
help determine the reasons as to what factors affect the product. Also, the Black
Belts and Master Black Belts should be able to make out if alterations done to one
element affect the other elements. The process elements that are resistant to
change or which are most likely to get affected by unforeseen changes or events
are also demarcated.

Simulation is a commonly used software tool in Six Sigma processes. Simulation can
prove to be a great asset as far as minimizing waste, controlling deviations and
quality control are concerned. The process of simulation is very effective because
it can be used to check the validity of a new process, or of changes made to current
processes. This, in turn, decreases the possibility of risk during actual production.
Stress is laid on measuring the process and its performance accurately, which helps
ensure the effectiveness of the new process or of changes made to the current
process. Simulation is also helpful because it helps enforce only those processes
which have been investigated and experimented with thoroughly.

The simulation software is also significant because it makes use of visual
descriptions. This makes it easier for everybody to understand and effectively
implement the process. The process elements like men, machines, finances, belts and
other resources are clearly represented through charts and diagrams, and the
relationships between them are demarcated. This makes it possible to comprehend the
process easily and consider its effectiveness in the organization. For instance, with
the help of simulation, the finance manager can easily find out whether the current
amount of finance would be sufficient for production in the near future. Reservations
for unforeseen changes can be made well in advance with the help of simulation. The
Black Belts and the Master Black Belts become thorough with the process, and this
knowledge empowers them to take correct decisions and make contingency plans.

The key process elements are the customer, the process and the employee. They are
the base on which the reputation of any organization thrives. Of the three, customers
occupy the most prominent place, as the profits of any organization depend on them.
It is true that the customer is king. Customer satisfaction depends on the
performance, quality, services and reliability of the brand. If the customer is not
satisfied, he will look for options, and this will in turn be a loss for the
organization.

Process is another important element. The process should be such that it produces
products of the best quality, because the quality of the products is directly related
to customer satisfaction. The process should be clear to the employees who enforce it
and, at the same time, should be designed keeping customer demands in mind. The
process should, therefore, be looked at from the customer's perspective and not from
the organization's.

The employee is another core element in an organization. It is the human resources
of any organization that make the process function. As far as Six Sigma is
concerned, it is driven only with the help of Green Belts, Black Belts and Master
Black Belts. A Six Sigma project requires the participants, and especially the
leaders, to be professional, committed and participative. They should, at the same
time, possess thorough knowledge of the process and of statistical tools.

The main aim of any Six Sigma project is, in fact, to reduce defects by minimizing
wastage. Wastage is minimized when the process is implemented by the employees in the
best possible manner, leaving no room for errors and, therefore, for wastage. If this
is done efficiently, it will in turn improve quality and also the number of
customers. The most important effect is to increase profits, which is the ultimate
goal of any organization.
2. Owners and Stakeholders

An organization can have several stakeholders, comprising people like stockholders,
customers, clients, suppliers, employees and their families. In an organization, all
the stakeholders hold relevance because they add value in some way or the other. The
stockholders put in the finances and expect to earn profits in return. The customers
pay money to get value for the services or products they have purchased. The
employees put in their ideas, services and labor to earn emotional and monetary
satisfaction. This kind of exchange is based on the assumption that all the
stakeholders are performing to the best of their ability.

The stakeholders form the core team in a market. They are the ones who assume
that they should receive the maximum value in the form of an exchange. It is very
important, therefore, for all the stakeholders to cooperate with each other to
receive the maximum benefit. It becomes imperative for every organization to
value all the stakeholders equally.

The employees need to be treated as if they are assets of the company. If they are
not motivated from time to time, they tend to lose interest and ultimately their
performance drops. In addition, if possible, they should be given rewards and
recognition repeatedly to keep their spirits high and to lower the attrition rate. This
is because the employees are the ones who are the greatest assets for the
company and it is very important to keep them happy. However, the most
prominent stakeholders are the customers. Everything depends on their
satisfaction, as they are the ones who would bring the maximum profit for the
organization.

To achieve the best results, it is very important that the stakeholders participate
in all the important activities of the organization. The goals and objectives of the
organization should be in their interest, since they are the ones who will reap the
maximum benefit from them.

3. Project Management and Benefits

Project Management is an organized way of managing a project. A project is a 'plan
or a proposal, or an undertaking requiring concerted effort'. Every project has a
beginning and an end, tasks to be carried out, and resources that will be required in
the course of the project. The goal of project management is to ensure the
deliverables of the projects undertaken, within the cost, defined scope, time
schedule and quality boundaries.
A deliverable is said to be a quality deliverable if it has been produced within the
resources the company has earmarked for it and satisfies the expectations of the
stakeholders. A project deliverable is, therefore, a measurable unit of project
output.

Project Management acts as a tool, which enables a Project Manager or the Black
Belt to streamline the various processes involved in the project. It is also important
to identify the stakeholders and determine their expectations. They are the people
who are involved directly or indirectly in the project and have a share in the profits
gained in the project.

Why is Project Management so Important?

Project Management has garnered a lot of importance. A project is undertaken to
create a unique product or service; it is a one-time activity. This could be design,
development, or any other one-off activity. The management of these projects requires
varying technical skill and expertise.

These projects differ from processes and operations, which are ongoing functional
work, permanent and repetitive in nature. The management of these two kinds of
systems is often different and requires different sets of skills and viewpoints.
Therefore the need for project management arises.

Besides, project management is replete with challenges. The first is to ensure that a
project is delivered within the defined time frame and approved budget. The second is
that all the resources (people, materials, space, quality, risks involved) are put to
optimal use to meet the predefined objectives. Project Management enables a project
manager to successfully complete a project. The two most important aspects of project
management are time and cost; if the project is not guided by these two factors, it
will face serious consequences, as the time limit will be missed and the costs will
overrun.

If the project manager or Black Belt wants his project to deliver the profits his
organization expects from it, he will have to construct his project around the
concepts of project management, the art and science of doing things on time with the
efforts of others. Thus it is project management that equips the Black Belt with the
requisite tools to succeed in his project and fulfill the expectations of the
stakeholders.

4. Project Measures
It is important to establish the key performance metrics to measure performance in
the organization and to determine whether the process is being implemented in the
correct manner. To measure performance, it is also important to make sure that the
metric chosen is the appropriate one. One of the important attributes of a good
metric is that it should be customer driven and should aim at providing the maximum
customer satisfaction. It should also indicate past performance and show recent
trends. This is important as it helps compare performance and determine the
loopholes in the current process.

Another very important attribute required of a metric is that it should be clear and
easily understood by all employees. In fact, metrics should be developed by the
people who are going to make use of them. It is also mandatory that the metrics be in
line with the policies, procedures and values of the company.

A performance measurement model consisting of eight steps was developed by Kenneth
H. Rose (Quality Progress, 1995). The steps are summarized below:

1. Category of performance- This helps to determine the categories of performance in
an organization. These depend on the mission statement and the values of the
organization.

2. Goals of the performance- This helps to determine the goals of the organization.
Performance should be in line with the goals of the organization, which can range
from earning profit to achieving maximum customer satisfaction.

3. Indicators of performance- This is the most crucial step in the model, as it
helps to determine the loopholes by measuring the progress of performance. This is
also the stage where the unimportant elements are sidelined and the emphasis is laid
on those which help improve performance.

4. Measure elements- Process elements like customer satisfaction, employee
performance and the efficiency of the process are measured here. Elements which
cannot be measured are set aside to save time and money.

5. Criteria- If the above-mentioned process elements are the internal factors, then
environment, restrictions and government policies are the external ones. They affect
performance directly or indirectly. They are important because, if they prove to be
major hindrances in a project, a policy can be modified or dropped accordingly.

6. Measurement means- This parameter determines the way in which the internal and
external process elements will be applied so that their performance can be measured.

7. Metrics of notions- This acts as a foundation for developing specific metrics and
validating the claim of a particular process.

8. Metrics of specification- The actual metrics that will be used are defined and
their functionality is determined. This definition also includes the way the data
are collected, used and implemented.

Performance should be measured against certain metrics; there should be criteria on
which the functioning of the organization is based. The obvious queries to be made
concern the expected profits and the benefits the organization would gain from a
certain project it plans to undertake. The current shortcomings of the organization
should also be highlighted, and the deficiency that would be cured by the risk
currently being undertaken should be taken into consideration. The do's and don'ts
need to be made clear to all involved in the project.

There should be metrics which compare the goals of the organization with its actual
position. Answerability should be fixed, and there should be people to supervise and
advise. Accountability should also be fixed for finances. At the same time, it is
very important to keep a check on the sales figures: if quality is getting better but
sales are stagnant, it is a matter of concern for the organization. There should also
be a metric to make sure that demand matches supply.

The main focus of any organization, however, remains the customers. It is very
important to frame policies according to their needs, as they are the principal
source of revenue. Happy customers ensure better business and profitability for the
organization. So there should be customer satisfaction surveys, or contact centers
where customers can voice their grievances. Data can be collected in the form of
questionnaires and the views of the customers taken. This can prove to be an
important metric and be very effective in improving customer satisfaction. There can
also be metrics on which cycle time reduction, quality, defects per unit, waste and
productivity are measured. Six Sigma is an important method to reduce defects and
help ensure efficient working in the organization. These metrics can be enforced with
the help of Six Sigma leaders in the organization and can prove to be very beneficial.

Balanced scorecards are very important for maintaining balance in the organization.
It is very important to consider one goal at a time and then evaluate all of them
equally. The goals established by the shareholders, customers and employees are then
transformed into metrics. The community in focus is also rotated every year: if in
the current year the prime focus is the customers' interest, the next year it may be
the shareholders, the employees or the owners. A particular community is also viewed
taking into consideration different factors, such as quality and costs, and even the
focus on these factors keeps changing every year.

It is important to have well designed dashboards to aid in interpreting the metrics,
because they help to determine the limits for a particular metric. If the metric is
within its limits, the process does not require special attention; however, if the
metric is crossing a limit, it requires special attention.

Project Documentation

It is very important to document the details of the project. This is necessary
because changes and new policies are made in the course of the project, and
documentation makes the task simpler. Project documentation involves gathering
information and making a note of all the processes in a Six Sigma program. The
documentation helps save time, helps control the wastage of resources, and acts as a
record for future reference. Moreover, time is saved as far as training personnel on
any new project is concerned. The documentation can be done in the form of a business
letter, memo or project report.

The project documentation should include sections covering the different stages like
planning, implementation, budgeting and time management. This is important because
the project report needs to be clear, as it might be reviewed by the stakeholders.
The documentation should be clear, concise and brief, and easily comprehensible to a
layman. All the people working on the Six Sigma project need to give their full
cooperation to the documentation as far as the tasks assigned to them are concerned.

The project report would be incomplete if it does not contain the customer’s views.
It should also mention the name of the members working on the project and the
tasks assigned to them. The way they carry out their duties and the finances
involved in the project should be included. In addition, the names of the people
who are keeping a check on the project, how often they monitor the project, and
the decision making authorities should be clearly mentioned.

2.1 Voice of the Customer

B. Voice of the Customer

1. Identifying the Customer

The customers are the main focus of any organization. It is very important to
streamline the demands of the customers. Six Sigma is a useful tool in assessing
customer satisfaction. Customer satisfaction not only rests in the quality of the
product but also in such services as the ordering procedure, delivery, mode of
payment and after sales. It is very important to make changes keeping the
customers’ demand in mind.

The data collected from customer satisfaction surveys are useful in determining
customer demands and the performance of the product. This also helps the people
working on the Six Sigma project, because they are able to categorize the customers
according to their needs. Customers do not share the same viewpoint about the product
and its cost as the manufacturer or the service provider, so it becomes very
important to mould policies according to the organization's goals as well as the
customers' demands.

A customer views a product from the point of view of its usefulness, price and
quality. A product or a service that has no relevance for the customer is not of any
use to the organization. Customer satisfaction is the biggest accomplishment for any
organization, and without it the Six Sigma project cannot achieve its desired end.

Some organizations categorize customers on the basis of their needs and the roles
they play in the organization. One such category is the customers who put their
money into upcoming projects of the organization; they are very important because
they bring great value by investing a major chunk of the money. Customers can also be
categorized as small traders and businessmen. There are also customers who might be
good for an organization in the long term, and it is important to value these
customers as well.

The customers should be looked after so well that they do not feel the need to go to
a competitor. It is very important to pay heed to each customer's needs; customers
should be made to feel that they are important and that their opinion is valued, and
their problems should be resolved at the first attempt in the best possible manner.
If customers know that they are valuable to the organization, they will certainly
help improve business and increase sales.

Therefore, it is very important to identify the customer and his needs because it is
the customer who is and will remain the core element for an organization in the
long run.

2. Collecting Customer Data

In today's competitive scenario, it has become imperative to keep the customers happy
and, therefore, retain them in the long term. It is not enough to take internal
decisions alone; it is very important to capture the customer's voice and hear his
concerns as well. Continuous interaction with the customer will help in building
better relationships with them.

Customer satisfaction is measured on such metrics as the quality of the product, how
many calls customers make after they buy the product, and the feedback they give. The
feedback that customers provide is an important metric and can be critical in
improving quality. Feedback can be gathered in the form of interviews, customer
satisfaction surveys, customer complaints and focus groups. Customer feedback is
essential because it helps in determining the loopholes and improving the performance
of the employees.

Quality Call Monitoring

Quality call monitoring is an important way to hear the voice of the customer. For
instance, in a contact centre, when an agent's call is monitored, his or her
performance is being measured; at the same time, monitoring helps to determine the
pros and cons of certain processes, and customer satisfaction can be easily
evaluated.

Kano Model
The Kano model is another quality measurement technique, used to measure customer
satisfaction. It is a useful tool for evaluating and prioritizing customer
requirements. Not all requirements are equally important to all customers, and the
attributes of a product will be ranked differently by different customers in their
need charts. This model is therefore used to rank requirements according to the
importance of each segment's needs, differentiating between must-have and
differentiating attributes.

Applying the Kano model in the workplace to learn customer requirements will change
the Black Belt's viewpoint on customer satisfaction. The team will be able to know
which values and services the customer covets the most, and how to plan operations in
the Six Sigma program.

Product attributes can be classified as:

Basic/Threshold attributes: Threshold attributes are those which the customer
normally assumes to be present in the product. Their absence will cause
dissatisfaction among customers. However, the customer will remain neutral even if
these attributes are provided in a better way. For example, refrigerators come with
freezers and door handles; a sleeker handle or a frost-free freezer will not cause
any more satisfaction in the customer.

Performance/Linear attributes: The presence of performance attributes is directly
proportional to customer satisfaction. There are high levels of satisfaction if their
performance is high, and dissatisfaction if their performance is low. An example is
the time spent waiting in line at the check-in counter of an airport terminal. This
attribute is represented as a linear and symmetric line on the graph. A high level of
execution of these linear attributes can add to product competitiveness.

Exciters/Delighters: These are hidden attributes which delight the customer and lead
to high levels of satisfaction if they are present, but do not cause any
dissatisfaction if the product lacks them. These 'delighters' are the surprise
elements in the product, and companies can use this attribute to set their product
apart from their competitors. In the course of time, as expectations rise, today's
delighters become tomorrow's basics. For example, a car with an inbuilt television
may be today's delighter, but can be a basic tomorrow.

In order to survive cut-throat competition, and to lead in the market, companies need
to be constantly innovative and to research the current level of quality expected by
customers in order to meet their expectations. A higher grade of execution of
performance attributes, and the inclusion of one or more delighters/exciters, will
provide stiff competition to similar players.

In the figure below, the entire basic attribute curve lies in the lower half of the
chart, indicative of neutrality even with improved execution, and dissatisfaction
with their absence. The exciters curve lies entirely in the upper part of the graph.
The more the exciters, the higher is the level of satisfaction. The performance
attributes are shown as a 45° line passing through the center of the graph.
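
For readers who wish to reproduce a chart of this kind, the Python sketch below draws illustrative curves only; the exponential and linear forms are arbitrary choices made to match the qualitative shapes described in the figure discussion above, not formulas prescribed by the Kano model.

    import numpy as np
    import matplotlib.pyplot as plt

    execution = np.linspace(-1, 1, 100)              # poor ... excellent execution

    # Illustrative shapes: basic stays at or below neutral, exciter at or above,
    # performance is the 45-degree line through the origin.
    basic = np.exp(-2) - np.exp(-2 * execution)
    performance = execution
    exciter = np.exp(2 * execution) - np.exp(-2)

    plt.plot(execution, basic, label="Basic / threshold")
    plt.plot(execution, performance, label="Performance / linear")
    plt.plot(execution, exciter, label="Exciter / delighter")
    plt.axhline(0, color="grey", linewidth=0.5)
    plt.axvline(0, color="grey", linewidth=0.5)
    plt.xlabel("Degree of execution")
    plt.ylabel("Customer satisfaction")
    plt.legend()
    plt.show()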

Sample Survey

Definition

Another voice of the customer tool, or method for gathering customer feedback, is the
sample survey.

A set of written questions that is sent to a group of selected customers to obtain
answers that will enable corporate decision making is called a sample survey. Data
are collected from a sample of a universal population to reach a conclusion about the
inherent features of the whole population.

It is an important tool for determining which requirements are most important to the
customer.

The organization might interact with its customers to assess customer sensitivity to
the company's product, measure the quality of service, or determine whether the
current level of quality is at par with the company's identified goals. The
organization might also want to judge why employee behavior or morale is changing,
what the customer's buying experience is, or what the responses to a new product are.

According to Thomas Pyzdek, 'Sample surveys are usually used to answer descriptive
questions (“How do things look?”) and normative questions (“How well do things
compare with our requirements?”).'

It is not humanly and logistically possible to count everything that happens in a
process. Sample surveys enable the process leader to collect the no-nonsense
responses of the customer, analyze them, and reach a conclusion from them. Sampling
saves time and money and gives reliable data with which to analyze a problem.

Determining what to measure: The first thing that needs to be decided in a sample
survey is what to measure, or the population to be studied. For example, a five-star
chain may want to measure the quality of food served in its 24-hour restaurant, or
the customer care division of a telecom brand may want to assess how the billing
system can be made less error-prone. After this, a sampling frame has to be set up.

Determining the Margin of Error: The desired margin of error and confidence
level has to be determined. (The concept of margin of error is elaborated below).

Selecting the sample size: The process team will decide on the sample that needs to
be selected from the population. A sample is a subset of a universal population, like
the night-time customers out of the total customers in the round-the-clock restaurant
of a five-star chain.

The samples may be randomly collected, which ensures that they are unbiased and that
each element or respondent has an equal chance of being selected. The sample may also
be a representative sample, which is a sample that exactly reflects a larger
population. To truly represent a population, the sampler and analyzer of data must
take into consideration variables like a diverse and changing population.

Selecting a Sampling Method: The next step is to select the sampling method (For
more information on Sampling Methods, see Chapter 5- Six Sigma, Measure) Sampling
methods are simple random sampling, stratified sampling, clustered sampling, and
systematic sampling etc.

Make the sampling plan: The subsequent step is to document the sampling plan;
this involves when and how to construct the survey.


Survey Construction

1. The first important step in constructing a successful survey is to develop the
measure of the survey. Measures of responses can be taken in the form of:

 Open-ended questions- Here the respondents frame their own answers without any limitations.
 Ranking Questions- The response choices are ranked according to some criterion, like importance.
 Fill-in-the-blank questions
 Yes/No questions
 Likert scale- This response type is used to determine the strength of a response. Likert stated that a scale of 1 to 5 is better than a range of 1 to 10, because people tend to ignore larger ranges and hardly use the entire range of choices, instead opting for very low values like 1 or 2 or very high values like 9 or 10.
 Semantic differentials- This response type measures the respondent's choice between two bipolar values. The values that may lie between the two possible options are not stated. The values are usually two contrasting adjectives, e.g., Very Good and Very Bad.

2. After selecting the measures, the next thing to be done is to design the sample.
Sample design determines how many persons or elements (respondents) are to be
included in the survey to ensure its success.

3. The next step would be to develop the questionnaire. A questionnaire must truly
reflect the situation facing the company and be aimed to fulfill the goal of the
survey.

4. After that, the questionnaire is tested on a small sample, also known as a pilot
study. This is done to test the accuracy and clarity of questions.

5. Now the final questionnaire will be produced.

6. This would be followed by preparation of mailing material and dispatching the same
to the sample population.

7. The next obvious step would be to collect the filled up questionnaires, also
known as data.

8. Data collected has to be collated and reduced to enable analysis.

9. The last step would be to analyze this data.

Tactics to develop questions:

Surveys should be designed with proper professional expertise or survey experience.
Question framers should meticulously study the respondent group and ensure that the
area under discussion is understood by the respondents, so that they give appropriate
responses.

The format of the questions should be in line with the focus of the survey. The
questions should be relevant, concise, and clear and in a language the respondent
understands.

To get unbiased answers, the question itself should be unbiased. The answer
choices should be clear and mutually exclusive so that it becomes easy to
understand and choose from.

The responses should also be quantified wherever possible.

Margin of Error:
When a survey is conducted, a sample is selected and the data gathered from the
survey is generalized for the larger population. Margin of error is a tool used to
determine how precise the collected data are, or how precisely the survey
measures the true feelings of the whole population.

For example, it is not logistically possible for an organization to measure the entire
population, say of customers, on the satisfaction level of using a particular product.
Rather, samples of customers are taken from the whole population of customers.
Margin of error is used to gauge how precisely this sample is judged.

A margin of error is usually calculated for one of three confidence levels: 99%, 95%
or 90%. The most commonly used is the 95% confidence level. For a given confidence
level, the larger the margin of error, the less precise the survey estimate is.

For example, in a pre poll survey, the larger the margin of error, the lesser is the
level of confidence that the survey’s reported percentage will be close to the poll’s
true percentage.

The formula for calculating the margin of error at the 95% confidence level is:

Margin of error = 1.96 × √( p(1 − p) / n )

where p = the estimate of the proportion of respondents answering a question a
particular way, and n = the number of respondents who answered the question.

The formula involves three basic ingredients:

1. The amount of variability in the sample, captured by p
2. The standard degree of precision of a 95% confidence interval, which gives the multiplier 1.96
3. The sample size, denoted by n

A small computational sketch of this formula is given below.
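
In the sketch, the 400 respondents and the 62% response rate are purely hypothetical figures chosen only to show the arithmetic:

    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        # 95% margin of error for a sample proportion (z = 1.96).
        return z * math.sqrt(p * (1 - p) / n)

    # Hypothetical survey: 400 respondents, 62% answered "satisfied".
    moe = margin_of_error(p=0.62, n=400)
    print(f"62% +/- {moe * 100:.1f} percentage points")   # roughly +/- 4.8 points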

2.2 Focus Groups and Critical-To-Quality Tree


Focus Groups

Another tool for the Six Sigma team to know customer requirements or employee
information is through focus groups. A focus group is a selected group of
customers who are unfamiliar with each other, collected together to answer a set
group of questions. They are hand-picked because they have a number of common
characteristics that are relevant to the subject of study of the focus group. The
discussion is conducted several times to ascertain trends in product and service,
and in knowing customer requirements and perceptions.

The facilitator of the focus group creates an environment which permits different
perceptions and opinions, without threatening or pressurizing the participants. The
aim of the focus group is to reach a consensus about a particular area of interest by
analyzing these discussions.

Six Sigma focus groups are helpful during the strategic planning process, when trying
out new ideas with customers, when generating information for surveys, and when
validating a CTQ tree (which shall be described in the next step).

Advantages of focus groups:

1. Focus groups generate ideas, because a good facilitator can follow up with
additional questions based on the participants' answers.

2. Focus groups also stimulate ideas in a greater number than when individual
interviews are conducted.

Disadvantages of focus groups:

1. An inexperienced and untrained facilitator may not be able to analyze the result.

2. Bringing groups together under one physical location might be more costly than
what the company envisioned.

3. Dominating personalities may influence the opinion under discussion.

4. There may be differences of opinion from group to group, making it difficult to
gather a consensus on the issue under discussion.

Critical-To-Quality Tree
Six Sigma is about looking for causes. The aim behind a Six Sigma process is to find
the reason behind a particular phenomenon. The team tries to find out what’s
“critical” to the success of the process chosen for improvement.

A Critical-to-Quality (CTQ) tree is another tool for finding out customer
requirements, or which critical-to-quality factors are being addressed. It helps the
team break the general needs of the customer down into more specific needs, and it is
useful for brainstorming and confirming the needs of the customer of the process
targeted for improvement. It also helps ensure that the voice of the customer is
captured directly, rather than being collected and reported against internal
standards.

What’s Critical?

Depending on what is being analyzed, the word ‘critical’ could have diverse
connotations ranging from the satisfaction of the customer, to the quality and
dependability of the product. It could also be the cycle time of manufacture of the
product or cost of the final product or service.

The following table lists a number of “CTX”s, or the critical variables that influence a
product.
Most CT trees begin with the output of customer satisfaction at the top and others
follow. The steps to create a CTQ tree are:

Steps in creating a Critical to Quality Tree:

1. Step one is to identify the customer. First the team has to decide whether the
identified customers need to be segmented. The need for segmentation arises when
different customers have different requirements. In the following example, a pizza
delivery process is used. The customer ordering a pizza may be a high school graduate
or an office executive; here there is no need for segmentation, because the
requirements for getting a pizza delivered are almost the same across all ages.

2. Step two is to identify the customer’s need. The customer’s need is in level 1 of
the tree as shown in the picture. The high school graduate is in need of a pizza and
so calls up a pizza delivery outlet.

3. The next step is to identify the first set of requirements for this need. Two to
three measures need to be identified to run the process. In the example, the data
collected by the process leader indicated that the speed and the accuracy of
delivery, the quality of the pizza, the variety in the menu and add-ons in the menu
card were crucial requirements while ordering a pizza. Thus the first three branches
of the CTQ tree will be formed with these factors. These are in level 2 of the CTQ
tree.

4. The step that follows is to take each level 2 element of the CTQ tree to another
degree of specificity. In the example, the process leader found that while delivering
the pizza on time, it was necessary that the correct variety be delivered. He also
found that it was important to the customer that the pizza be hot, taste good and
look good. Similarly, data regarding the range of items on the menu pointed out that
the types or numbers of items and the add-on condiments were important. All these
factors are placed in level 3 of the CTQ tree, as shown in the figure.

5. The final step is to validate the requirements with the customer. The CTQ tree is
created as a result of the project team’s brainstorming. The needs and
requirements need to be validated with the customer because in many cases, what
the team considers important may not be likewise with the customers. Customer
validation can be made through Focus groups, sample surveys, customer interviews
etc.
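
For illustration only, the pizza-delivery CTQ tree assembled in the steps above could be captured in a simple nested data structure; the Python sketch below merely restates the branches from the example:

    # Minimal nested-dictionary sketch of the pizza-delivery CTQ tree
    # (need -> level 2 drivers -> more specific level 3 requirements).
    ctq_tree = {
        "need": "Pizza delivered to the customer",
        "drivers": {
            "Speed and accuracy of delivery": [
                "Delivered on time",
                "Correct variety delivered",
            ],
            "Quality of the pizza": [
                "Pizza is hot",
                "Tastes good",
                "Looks good",
            ],
            "Variety in the menu": [
                "Number and types of items",
                "Add-on condiments",
            ],
        },
    }

    def print_tree(tree: dict) -> None:
        print(tree["need"])
        for driver, requirements in tree["drivers"].items():
            print(f"  {driver}")
            for req in requirements:
                print(f"    {req}")

    print_tree(ctq_tree)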

Customer One On One Interview

A customer one-on-one interview asks a customer a number of questions that will
validate the CTQ tree. The advantage here is that the interviewer can record
responses and follow up on answers to get more detail. The disadvantage of this
method is that it is expensive, and it requires experienced interviewers who can
spontaneously raise further questions that logically follow.

Customer Complaints

Customer complaints and suggestions provide feedback to the management, which may be
positive or disapproving. They provide opportunities for individual customers to have
their say. These methods are characterized by selection bias, so they seldom provide
statistically correct information.

A company might also communicate with its customers and employees through
case studies, field experiments and by the already available data. New technologies
like data mining and data warehousing are also used.

3. Analyzing Customer Data


Once the data gathering is complete, the next step is to analyze this customer
feedback using various statistical, graphical and qualitative tools. Analysis is a
significant step also because it helps to quantify the opportunity for the product,
which is important when implementing measures to rectify the errors found. Everybody
is equally responsible if the customer is not satisfied.

Analysis can be done in two ways. They are data analysis and process analysis.

Data analysis is analysis of the data collected, mainly if the feedback from the
customer implies that the process needs effectiveness. The goal of data analysis is
to take the data that was collected previously, and scan it for clues to explain
customer dissatisfaction. A careful look at the data would make the problems more
visible to the team working on the process. The best way to analyze data is with the
help of histograms and other charts and graphs.

Process analysis is analyzing the process itself through the process maps the team
created in the Define phase, mainly when the feedback from the customer implies that
the process needs efficiency. For example, reducing the cycle time or completing a
task on time requires process analysis.

The process team mostly uses a combination of the two to arrive at the root cause
behind customer dissatisfaction.

There are various tools to understand customer feedback. They are discussed
below:

A. Process and Data Analysis Tools

1. Brainstorming

Brainstorming is a tool that is used for generating new ideas for solving a problem.
It is a creative method that helps in solving a problem by listing the number of
options that can be applied to solve the problem and then choosing the optimal
one. The brainstorming tool is used at all the levels of problem solving. This
technique is a strong tool to know what enhancements can be done in a given
solution or approach.

2. Cause and Effect Diagram (fish-bone)

One of the best ways of reaching the root cause is the "fish bone diagram". A fish
bone diagram helps users visualize the various causes leading to the problem. Once
all the causes have been brainstormed, they are graphed and their sub-problems are
noted. The fishbone diagram is not applicable for problem-searching but is employed
for problem-solving by the team members. The advantage of using a fishbone diagram is
that it helps the team focus on why the problem occurs and not just on detecting the
problem; hence this tool is also very helpful in the Analyze phase.

3. Process Flow Analysis

Definition:

"A process is usually defined as a set of interrelated work activities characterized
by a set of specific inputs and value-added tasks that make up a procedure for a set
of specific outputs."

A process flow consists of a flowchart of a particular piece of work or process.
Analyzing this flowchart to investigate the process and identify the problems in it
is called process flow analysis. It is the Black Belts who focus on the processes and
recognize the poor processes that result in problems, high cost and low quality. They
are involved in the making of a breakthrough strategy to recognize functional
problems that are linked with operational issues.

4. Charts and Graphs

Charts and Graphs provide the best way to analyze measures of a process. They
present data or information in the form of visual displays. These tools are used to
see whether a certain characteristic is changing over time, getting better or worse.

There are many types of charts and graphs, they are discussed below:

a. Pareto Chart Analysis

Pareto Chart analysis is used to understand the most significant reasons for
customer dissatisfaction. This helps enterprises to identify which problems to tackle
first to obtain the quickest improvement. A Pareto chart is a specialized vertical bar
graph that exhibits data collected in such a way that important points necessary for
the process under improvement can be demarcated. It exhibits the comparative
significance of all the data. It is used to focus on the largest improvement
opportunity by emphasizing the "crucial few" problems as opposed to the many
others.
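
As an illustration, the Python sketch below builds a basic Pareto chart from purely hypothetical complaint counts, with a bar for each category and a cumulative-percentage line:

    import matplotlib.pyplot as plt

    # Hypothetical complaint counts, sorted so the largest category comes first.
    complaints = {"Late delivery": 120, "Wrong order": 45, "Cold food": 30,
                  "Billing error": 15, "Other": 10}
    labels, counts = zip(*sorted(complaints.items(), key=lambda kv: kv[1], reverse=True))
    total = sum(counts)
    cumulative = [sum(counts[: i + 1]) / total * 100 for i in range(len(counts))]

    fig, ax1 = plt.subplots()
    ax1.bar(labels, counts)
    ax1.set_ylabel("Number of complaints")

    ax2 = ax1.twinx()                      # cumulative-percentage line
    ax2.plot(labels, cumulative, marker="o", color="red")
    ax2.set_ylabel("Cumulative %")
    ax2.set_ylim(0, 110)

    plt.title("Pareto chart of complaint categories (hypothetical data)")
    plt.show()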

b. Histogram

Histograms are used as a graphic tool to display the distribution of continuous data
values, showing which values have occurred most frequently and least frequently. In a
histogram, the measured values (class intervals) are shown on the horizontal axis and
the frequency of each class is shown on the vertical axis. The bar lengths are
proportional to the relative frequencies of the data.

c. Scatter Diagram

The Scatter Diagram is a tool used for establishing a correlation between two sets
of variables. It is used to depict the changes that occur in one set of variables while
changing the values of the other set of variables. This diagram does not determine
the exact relationship between two variables, but only determines whether the two
set of variables are related to each other or not; and if they are related, then how
strong the relationship is.

d. Run Charts

A run chart, also known as a line graph, is a simple chart used to display process
performance over time; it is essentially a control chart without the statistically
determined control limits. Run charts are basically used for interpreting any trends
that occur in the data.

 Run charts are basically used for keeping a check on the process’ performance.
 Run charts are useful in discovering the patterns that occur over time.
 Run charts are easy to interpret; any one can guess from the chart’s behavior
whether the process’ performance is normal or abnormal.

e. Control Charts

Control charts are used to understand how performance changes over time. A control
chart is a graphical tool for monitoring the variation that occurs within a process
and for distinguishing routine (common-cause) variation from variation that signals a
problem. Control charts help to show trends in the average or in the variation, which
further helps in the debugging process. A control chart consists of a run chart, a
centerline, and upper and lower limits determined statistically.
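
The sketch below illustrates the idea with hypothetical cycle-time data. It estimates the limits simply as the mean plus or minus three sample standard deviations; a formal control chart would normally use control-chart constants (for example, based on the average moving range) to set the limits.

    import statistics

    # Hypothetical daily cycle-time measurements (minutes).
    data = [22, 25, 21, 24, 27, 23, 22, 26, 24, 25, 23, 22, 28, 24, 23]

    center = statistics.mean(data)
    sigma = statistics.stdev(data)
    ucl = center + 3 * sigma          # upper control limit (simplified estimate)
    lcl = center - 3 * sigma          # lower control limit (simplified estimate)

    print(f"CL = {center:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
    for i, x in enumerate(data, start=1):
        flag = "OUT OF CONTROL" if not (lcl <= x <= ucl) else ""
        print(f"point {i:2d}: {x}  {flag}")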

f. Line Graphs
Line graphs are used to show the behavior of a process with changing time, where the
behavior of the process means specific characteristics of the process. Line graphs
are used to depict changes in the process: whether the process is getting better,
getting worse, or staying the same. These graphs are perhaps the first step in
defining a problem to be solved. They can also show cycle time, defects or cost over
time.

g. Pie-Charts

A Pie chart is a simple circular chart that is cut into slices. Each slice represents the
frequencies of the collected data. The bigger the slice, the higher is the number or
percentage. These charts are best used to represent the discrete data.

h. Multi-Vari Charts

A Multi-Vari chart is used to display a pattern of variation. It is used to identify
the causes of variation, such as variation within a subgroup or between subgroups.

2.3 Tools for Statistical Analysis

B. Tools for Statistical Analysis

Statistical tools are used to determine whether customer attitudes or specific
performance measures have changed or not, and to see whether any such changes are
statistically significant.

1. Tests of Significance

Six Sigma takes the help of statistical tools to test planned solutions and see
whether they are appropriate for fixing the problem. One such statistical tool is the
test of significance. When a statistic is significant, it means that the observed
result is unlikely to be due to chance alone. Significance tells us that a difference
or a relationship exists, but it does not tell us about the strength of the
relationship, whether it is strong or weak, large or small; with a large enough
sample size, even a very weak relationship can turn out to be statistically
significant.

Steps in Statistical Significance Testing:

1. Form the research hypothesis.
2. Form the null hypothesis.
3. Identify a probability of error level (alpha).
4. Identify and calculate the test for statistical significance.
5. Interpret the results.

2. T tests

T- Tests are used to compare two averages. They may be used to compare a variety
of averages such as, effects of weight reduction techniques used by two groups of
individuals, complications occurring in patients after two different types of
operations or accidents occurring at two different junctions.
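A minimal sketch of such a comparison, assuming hypothetical weight-loss data for two groups and the SciPy library, follows the significance-testing steps listed earlier (null hypothesis of equal means, a 0.05 error level, calculating the test, and interpreting the result):

```python
from scipy import stats

# Hypothetical weight loss (kg) for two groups using different techniques
group_a = [4.1, 3.8, 5.0, 4.6, 3.9, 4.4, 4.8]
group_b = [3.2, 2.9, 3.7, 3.5, 3.0, 3.3, 3.6]

alpha = 0.05  # chosen probability of error level

# Null hypothesis: the two techniques produce the same average weight loss
t_stat, p_value = stats.ttest_ind(group_a, group_b)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject the null hypothesis: the two averages differ significantly.")
else:
    print("Fail to reject the null hypothesis: no significant difference found.")
```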

3. Correlation Analysis

Correlation is the relationship between two sets of variables. It is a technique used to describe the relationship between two quantitative, continuous variables. Correlation is said to be positive when the value of one variable increases as the other variable increases. On the other hand, when the value of one variable decreases as the other increases, the variables are said to be negatively correlated. If one variable has no effect on the other, there is no
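As an illustration, the sketch below (hypothetical data, SciPy assumed) computes the Pearson correlation coefficient, whose sign indicates a positive or negative correlation and whose magnitude indicates how strong the relationship is:

```python
from scipy import stats

# Hypothetical data: hours of agent training vs. customer satisfaction score
training_hours = [2, 4, 5, 7, 8, 10, 12]
satisfaction = [61, 64, 70, 72, 75, 81, 86]

r, p_value = stats.pearsonr(training_hours, satisfaction)

print(f"correlation coefficient r = {r:.3f} (p = {p_value:.4f})")
# r close to +1: strong positive correlation; close to -1: strong negative;
# close to 0: no linear relationship between the two variables
```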

4. Regression Analysis

To determine the relationship between variables, regression analysis is used. It is used to determine the effect of one variable upon another. For example, the effect of demand on supply, the effect of customer relations on sales, the effect of sale announcements on sales, or the effect of a trained management on customer satisfaction can be studied using regression analysis.
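A minimal sketch of a simple linear regression, again with hypothetical data and SciPy, fits a line describing the effect of one variable (sale announcements per month) on another (units sold):

```python
from scipy import stats

# Hypothetical data: number of sale announcements vs. units sold that month
announcements = [1, 2, 3, 4, 5, 6]
units_sold = [120, 150, 185, 210, 240, 260]

result = stats.linregress(announcements, units_sold)

print(f"units_sold ~ {result.intercept:.1f} + {result.slope:.1f} * announcements")
print(f"R-squared = {result.rvalue ** 2:.3f}")

# Illustrative prediction for a month with 8 announcements
print(f"predicted units for 8 announcements: {result.intercept + result.slope * 8:.0f}")
```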

5. Chi-square Test

The Chi-square test is a non-parametric test of statistical significance which is used for bivariate tabular analysis. It is in all probability the most commonly used non-parametric test. The Chi-square test is quite flexible and can be used in a number of circumstances. Being a method for testing discrete (categorical) data, chi-square can also work with weaker, less precise data.

The Chi-square test is used for three types of analysis:

 Goodness of Fit: determines whether the sample being used was drawn from the stated population distribution.
 Test for Homogeneity: based on the proposition that several populations are homogeneous in character with respect to some attribute.
 Test for Independence: tests the null hypothesis that two classification variables are independent of each other.
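A minimal sketch of the test for independence, using a hypothetical 2 x 2 contingency table and SciPy, tests the null hypothesis that the two classification variables are independent:

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: complaints received, by call center site
#            Site A  Site B
observed = [[30,     20],    # customers who complained
            [170,    180]]   # customers who did not complain

chi2, p_value, dof, expected = chi2_contingency(observed)

print(f"chi-square = {chi2:.3f}, degrees of freedom = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject independence: the complaint rate appears to depend on the site.")
else:
    print("No evidence that the complaint rate depends on the site.")
```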

6. Analysis of Variance (ANOVA)

ANOVA (Analysis of Variance) is a statistical technique which is used for analyzing experimental data. ANOVA is closely related to the t-test explained above. The main difference is that with the help of ANOVA, the difference between the means of two or more groups can be studied, whereas a t-test can measure the difference between the means of only two groups.

There are three models of components in ANOVA: fixed effects, random effects, and mixed effects.
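The sketch below, again with hypothetical data and SciPy, runs a one-way ANOVA comparing the mean cycle times of three groups, something a single t-test could only do two groups at a time:

```python
from scipy.stats import f_oneway

# Hypothetical cycle times (minutes) recorded on three shifts of the same process
shift_1 = [12.1, 11.8, 12.4, 12.0, 11.9]
shift_2 = [12.6, 12.9, 12.5, 12.8, 12.7]
shift_3 = [11.9, 12.0, 12.2, 11.7, 12.1]

f_stat, p_value = f_oneway(shift_1, shift_2, shift_3)

print(f"F = {f_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("At least one shift's mean cycle time differs significantly from the others.")
else:
    print("No significant difference between the shift means was found.")
```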

7. Design of Experiments (DOE)

Design of experiments is a quality improvement methodology that is used for investigating a process. Under design of experiments, the data which is the outcome of an experiment is analyzed. It also has to be ensured that appropriate and enough data are available for performing the experiment. This process of planning an experiment and analyzing its outcome is known as an Experimental Design.

The analysis phase is very significant in mending the loop-holes in the process. The
above mentioned tools are very helpful in analyzing customer data. The analysis
phase helps in bringing about improvements in the process and achieving
customer satisfaction.

4. Establishing Customer Requirements

Once the customer data is collected and analyzed, a lot of things come to light. The
customers are easily able to establish their demands and voice their concerns. This
information can be converted into critical-to-satisfaction requirements for the
business. The critical-to-quality (CTQ) tree can be used to convert customer needs
and priorities into measurable requirements for products and services. As
mentioned earlier, the customer’s word is important, and the voice of the customer
is a significant factor and must be borne in mind for maximum customer
satisfaction.

5. Goal Posts vs. Kano

The Kano model is a very vivid representation of customer needs and


requirements. It helps the organization to know the customer preferences and the
voice of the customer is heard easily. Although Six Sigma is a very effective tool, yet
it is not the final word on reducing the defects. This is because customer
expectations can be very high and they can focus on factors, which may seem
insignificant to the organization. If we look from the organization’s perspective,
profits and organizational goals are also the prime factors apart from customer
satisfaction.

The Six Sigma model falls short as far as fulfilling the customer expectations is
concerned. An organization cannot pursue its goals and satisfy all customer
demands at the same time. Therefore, an organization tries to reach the right blend
by innovating techniques and at the same not putting customer interests at stake.

6. Quality Function Deployment

Quality Function Deployment (QFD) is a process that is followed by the Voice of the
Customer (VOC). The VOC lays down the demands of the customers and helps to
establish the quality metrics. These metrics are a graphical representation of the
effects of the planning process. The QFD metrics help to view the real picture. The
organization is able to compare its goals to the stiff competition in the market.
These metrics are created by each department separately which makes it very
accurate.

QFD is a very effective system because it keeps in mind the customer preferences
and then helps in designing and molding the product according to their needs. All
the personnel in the organization contribute equally to the designing and quality
control activity. QFD is, in fact, a written version of the planning and decision
making process. The QFD approach can be studied in four different phases:
Selection Phase

In this phase, the product or the area which needs improvisation needs to be
chosen. The team belonging to a specific department is then selected and then the
direction of the QFD study is defined.

Aspects Phase

The interdepartmental team looks at the product from different aspects such as the
level of demand, usability, cost and reliability.

Discovery Phase

In this phase, the team searches for areas that need to be amended as far as
improved technology, cost reduction and better usability is concerned.

Enforcement Phase

The team working on the product explains the way the manufacturing of the
improved product will be carried out.

QFD is an integrated method to match the customer requirements to product specifications. The point where the customer requirements matrix (the 'whats') and the technical characteristics matrix (the 'hows') intersect represents the correlation between the two and is known as the 'requirement matrix'. When this matrix is extended by showing the correlations between the two matrices, the result is the 'house of quality'. This storehouse of matrices helps to establish the important Critical-to-Quality parameters which are a key to determining customer requirements. QFD is also very useful because it helps to determine the credibility of a product even before it is manufactured or launched.

Another benefit to be gained from applying a QFD process is that all the team
members are equally involved in the development of a product. This kind of
brainstorming helps to increase the utility of the product and make it user-friendly.
In addition, QFD accentuates the strength of Six Sigma by clearly outlining the
significant customer satisfaction factors and bringing it in sync to the competition in
the market.

The whats and the hows are two prime ingredients of a QFD diagram. This makes it
very helpful for a Six Sigma project. The Voice of the Customer can be heard clearly
and it can lead to effective decision-making. A QFD diagram is given below. (Thomas
Pyzdek, 1976)
7. Big Ys, Little Ys

Six Sigma sets its sight on delivering products and services with a zero-defect rate.
However, the main concern of a commercial organization always remains
maximum profit. So, even when the organization decides to start a new project, the
Six Sigma leaders need to define the project in numerical terms. These metrics are
very important as they help in determining the most suitable project for the
organization. These are the projects called Big Ys which the Six Sigma leaders will
execute. The Little Ys are the smaller units of the chosen project which are
implemented by the Green or the Black Belts under the aegis of the Six Sigma
leaders.
The Big Y is to be associated with the critical requirements of the customer. The Big
Y is used to create Little Ys which ultimately drive the Big Ys. For instance, in any
service industry, the overall customer satisfaction is the Big Y and some of the
elements of achieving customer satisfaction are quality check, delivery on time and
after sales service (Little Ys). The Big Ys are defined at all the levels of an organization: the business level, the operations level, or the process level. The little Ys at the
current level become the Big Ys at the subsequent level. The Big Ys are the
measures of the result, while the Little Ys evaluate the cause-and-effect
relationships between the different units of a project and the measures of the
process. It is very important to keep a check on the Little Ys to achieve a better Big
Y.

2.4 Business Results

C. Business Results

1. Process Performance Metrics

Six Sigma is a specialized tool and makes use of a lot of metrics to evaluate the
performance of products and services. Also, there are a number of parameters on
which the quality of a product and the performance of the process is measured.

DPMO

DPMO ranks among the most important metrics of Six Sigma in evaluating
performance. In fact, the primary aim of a Six Sigma project is to achieve this
standard mark. DPMO stands for Defects per Million Opportunities and the goal of
a Six Sigma project is to attain less than 3.4 DPMO. A process that is able to achieve
this landmark is said to have accomplished Six Sigma. This approach counts the number of defects relative to the number of defect opportunities rather than the number of defective units.

The only disadvantage that DPMO has is that opportunity is a subjective term and it is very difficult to define it in simple terms. Take the example of a call centre agent who is trying to take down the email address of a customer for further correspondence. It is possible that he takes down 't' for 'p', misses one letter, does not spell it correctly, or by mistake fails to add the @ symbol before the domain name. So, these opportunities should be defined in simple terms so that customers and laypeople are able to understand them.
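As a simple, hedged illustration of the arithmetic (the figures below are invented): DPMO = (number of defects / (number of units x opportunities per unit)) x 1,000,000.

```python
# Hypothetical example: 500 recorded email addresses, each offering 4 defect
# opportunities (wrong letter, missing letter, missing '@', wrong domain)
units = 500
opportunities_per_unit = 4
defects_found = 7

dpmo = defects_found / (units * opportunities_per_unit) * 1_000_000
print(f"DPMO = {dpmo:.0f}")  # 3500 DPMO, still far above the 3.4 Six Sigma target
```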

PPM

This is synonymous with DPMO. It refers to the defects that happen in parts per
million opportunities.

DPU

It stands for Defects per Unit. Some units become defective during manufacturing. DPU represents the number of defects found per total number of units produced. Let us assume that 100 parts are manufactured in a day. If 5 defects are found among them, the DPU is 0.05.

The DPU is calculated thus:

DPU = Number of Defects/ Total Number of Product Units

A unit here refers to the end product delivered to the customer. The Poisson
distribution, a model that represents the defects that are present in the product or
any other abstract or concrete deviation that occurs in the process is discussed in
the later chapters.

RTY

A product passes through a lot of stages during manufacturing. It may happen that some of the products become defective and need to be reworked. The ones that are defective and ultimately rejected are accounted for. Many times, however, the ones that are reworked or manufactured again are not counted. When the rework is not counted, the yield is referred to as the first time yield or process yield. However, when the rework is counted, so that only units passing each step correctly the first time are treated as good output, the yield is referred to as the first pass yield, and the product of the first pass yields of all the steps is the rolled throughput yield. The latter is usually lower than the former.

RTY = y1 × y2 × ... × yn

where the yi values are the yields (outputs) at each step before rework.
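A minimal sketch of the calculation, using hypothetical first-pass yields for a four-step process:

```python
import math

# Hypothetical first-pass yield of each manufacturing step
step_yields = [0.98, 0.95, 0.97, 0.99]

rty = math.prod(step_yields)  # rolled throughput yield = product of the step yields
print(f"Rolled throughput yield = {rty:.4f} ({rty * 100:.2f}%)")
# Even though every step is above 95%, only about 89% of units pass through
# all four steps without any rework or scrap.
```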

Cost of Poor Quality-COPQ

The costs that are incurred as a result of producing defective material are known as the cost of poor quality. This also includes the cost that is generated while trying to close the gap between the desired and the actual product or service delivered. This cost also includes the cost of opportunities that were lost because resources were wasted, and the cost of overcoming the shortcomings. Cost of poor quality also includes the cost incurred on labor and raw materials. The only costs that the COPQ does not include are the detection and prevention costs. All the other costs included in it are the ones which add up to the point the product or the service is rejected.

2. Benchmarking

Benchmarking means assessing one's performance in terms of the performance of the competitors. Once the performance is measured and an organization is able to determine its own standing, it tries to find out the factors that work for the organizations that are the best in their class. Then, the organization strives to achieve that position by implementing the same techniques that its competitors are using.

Definitions

"Benchmarking is a tool to help you improve your business processes. Any business
process can be benchmarked."
"Benchmarking is the process of identifying, understanding, and adapting
outstanding practices from organizations anywhere in the world to help your
organization improve its performance."
"Benchmarking is a highly respected practice in the business world. It is an activity
that looks outward to find best practice and high performance and then measures
actual business operations against those goals."

The benchmarking process was introduced by Robert C. Camp. Xerox Corporation wanted to improve its parts distribution process; it adopted the benchmarking process and Camp was one of the pioneers who carried it out. Camp credits Japanese manufacturers with originating the concept as a quality improvement tool to challenge the supremacy of their American counterparts. These days, because of the internet and widespread knowledge about the process, benchmarking has become a very frequently used tool in organizations all over the world.

Benchmarking, in the past, has been associated with studying one's own products and services in terms of the competitor's. However, these days it is synonymous with terms like "step-change", "breakthrough" and "rediscovery". Today, benchmarking is more about how things are done and about the best practice followed. It is not important to adopt the norms followed by the best organization; rather, the stress is on following the best practice.

The best way to inculcate the benchmarking tools and practices in the organization
is to develop it into a process. Camp has listed some steps that need to be followed
to implement it. They are:

1. Planning- Includes identifying what needs to be benchmarked and which companies to consider as a reference point.

2. Analysis- Includes considering the loop-holes and visualizing the future level of
performance.

3. Integration- Includes disseminating the results of benchmarking and establishing new goals to be accomplished by the organization.

4. Action- Includes evolving a plan of action, executing it and then, evaluating the
performance.

5. Maturity- Includes becoming a pro in the new process. The new method is
merged in the ongoing process.

Pros

1. Improves performance.

2. Helps establish goals that are tough yet achievable.

3. Helps to minimize wastage because no new experiments need to be done.


4. Helps share innovative ideas and the best practices between benchmarking
organizations.

Cons

1. It is not just important to imitate but to imitate in the right manner, in the
manner that best suits one’s organization.

2. The benchmarking tool requires a lot of supervision.

Benchmarking is an ongoing process because learning has no end and an organization that needs to grow will always be in need of innovative ideas to improve performance.

Financial Benefits

Six Sigma is a very useful tool for an organization that is ready to take big risks and
increase the profit margins considerably. This kind of progress requires
considerable planning, patience and a positive outlook. Six Sigma helps an
organization to reap financial benefits. The goal of any Six Sigma project is to reduce defects to 3.4 per million opportunities, which can lead to substantial gains. If the profit margin is 40 per cent and an organization wants to increase it to 70 per cent, Six Sigma is what it should deploy. Effective Six Sigma helps an organization pocket big profits.

A Six Sigma initiative helps to increase profits and at the same time reduce costs.
The resources are optimally utilized and there is minimal wastage. A formal chart
must be made which lists out the financial benefits gained in terms of profits
gained, cost savings, cash flow and return on investment. These findings must be
chronicled for further use and the financial progress must be tracked.

NPV

NPV stands for Net Present Value of a particular investment. This is a metric that helps visualize the future savings and profits that would be locked in with the help of the investment. These future values are then reduced by a discount rate to reflect the 'time value of money'. NPV is basically used to compare the financial benefits that are reaped through long term projects whose cash flows are spread over several years.

The formula to calculate NPV is:

NPV = CF0 + CF1/(1 + r) + CF2/(1 + r)^2 + ... + CFn/(1 + r)^n

where CFt is the net cash flow in period t (the initial investment is a negative cash flow at t = 0) and r is the discount rate.

ROI

ROI stands for Return on Investment. This is a key metric used to assess the financial position of an organization or the financial benefit of a project. It is calculated thus:

ROI = (Financial Gains - Investment Cost) / Investment Cost × 100%

ROI is useful, but it is not the final word in calculating financial benefits. The important thing is to either have cost savings or earn big profits. The organization that is able to achieve both definitely has an edge over others.
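The sketch below, with invented cash flows, applies the NPV and ROI formulas given above to a hypothetical Six Sigma project:

```python
# Hypothetical project: 100,000 invested now, savings realized over three years
discount_rate = 0.10
cash_flows = [-100_000, 45_000, 55_000, 60_000]  # year 0 (investment), years 1-3

# NPV: each cash flow discounted back to today's value
npv = sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))
print(f"NPV = {npv:,.0f}")

# ROI: undiscounted financial gain relative to the cost of the investment
investment = 100_000
total_gain = sum(cash_flows[1:])
roi = (total_gain - investment) / investment * 100
print(f"ROI = {roi:.1f}%")
```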

CHAPTER 3
3 Project Management and Selecting Six Sigma Projects

The core of Six Sigma is to solve problems that are affecting business
performances. An organization employing the Six Sigma systems of managing their
affairs expects to derive benefits from them. These benefits could be lower costs,
enhanced productivity, rise in efficiency, or higher customer satisfaction by
reducing defects in their processes.

These are gained from a series of Six Sigma projects, big or small. They could be
projects within a department or across departments. At the business level, these
projects are the agents of action that carry out the business strategy and hand out
the results. They drive change in an organization.

A. Selecting the Right Projects


The first step in any Six Sigma initiative is project initiation, and the center of a Six
Sigma undertaking is linking projects to the goal of the organization. Projects have
to be focused on the right goals. This responsibility lies with the top management
or senior leadership; who are the Executive Council or the Project Sponsor. They
identify the project, determine the main concern, and launch the project; with some
assistance from the belts. Once the project goes underway, their roles are reversed.
The role of a Black Belt or a Green Belt starts there.

Moreover, Six Sigma projects affect one or more of the stakeholder groups: customers, shareholders, or employees. Therefore, selecting the right projects should be given as much importance as executing them. Projects cost money, take time and disrupt the normal functioning of the organization. Therefore, only those projects in which success can be ensured should be taken up. Process improvement projects should be restricted to only those processes that are important to the organization.

B. Project Management and Benefits

Black Belts are the project leaders or managers and they act as a coach for the
DMAIC project. Black Belts are the full-time people dedicated to tackling critical
change opportunities. They are known to be adept in applying the right skills and
tools with effortless grace to achieve the project’s goals. Six Sigma teams fail to be
effective without a hardworking Black Belt. They "baby-sit", inspire, and manage colleagues; they have to oversee projects that are complex in nature, that impact the business greatly, provide the greatest returns to the business, and satisfy customer requirements. They have to have the highest levels of statistical expertise, and be masters at advanced tools and at project management. Besides, they have to possess good administrative and leadership sense.

Project Management is an organized way of managing a project. A project is a 'plan or a proposal, or an undertaking requiring concerted effort'. Every project has a beginning and an end, the tasks to be carried out, and resources that will be required in the course of the project. The goal of project management is to ensure the deliverables of the projects undertaken; within the cost, defined scope, time schedule, and quality boundaries.

Project management spells out the techniques required to take a project towards
its desired end in the prescribed time limit and approved budget. Project
Management thus acts as a tool, which enables a Project Manager or the Black Belt
to streamline the various processes involved in the project. A project is carried out
by a team of experts and is headed by the project leaders, who are usually the Black Belts. The Black Belt, as a project leader, makes use of complex project management and project tracking tools. This in turn helps him accomplish his objective of producing a quality deliverable.

A deliverable is said to be a quality deliverable if it has been produced within the resources the company has earmarked for it and satisfies the expectations of the stakeholders. A project deliverable is, therefore, a measurable unit of project output.

'Stakeholder' is a term often repeated in project management. Stakeholders are the people who are involved directly or indirectly in the project and have a share in the benefits gained from the project. Therefore it is important to identify the stakeholders and determine their expectations. Stakeholders usually include the shareholders, the project manager, customers, and the project sponsor who is providing the funds.

1. Importance of Project Management

Project Management has garnered a lot of importance. A project is undertaken to create a unique product or service; it is a one time activity. This could be design, development, or any other one-off activity. The management of these projects requires varying technical skill and expertise.

These projects differ from processes and operations; which are ongoing functional
work, and which are permanent and repetitive in nature. The management of these
two systems is often different and requires different sets of skills and viewpoints.
Therefore the need for project management arises.

Besides, project management is replete with challenges. The first is to ensure that a
project is delivered within the defined time frame and approved budget. The
second is that all the resources (people, materials, space, quality, risks involved)
have been put to optimal use to meet the predefined objectives. Project
Management enables a project manager to successfully complete a project. The two most important aspects of project management are time and cost; a project will have disastrous consequences if it is not guided by these two factors, as the time limit will be missed and the costs involved will escalate.
If the project manager wants his project to deliver the profits his organization expects from it, he will have to construct his project around the concepts of project management: the art and science of doing things on time with the efforts of others. Thus it is project management which equips the Black Belt with the requisite tools to succeed in his project and fulfill the expectations of the stakeholders.

The project leader/ Black Belt captains the team and guides the team, plans the
strategies and risks to take, tracks the progress of the team members, monitors the
entire project, and inputs control measures to ensure the project does not waver
from its defined goals. He is primarily responsible for getting the team started; he is
the one responsible for translating all the plans into action. He is the one who will
get accolades for the success and he will be held accountable if the project fails to
deliver.

2. Project Characteristics

Every project has some characteristic features. They are listed below:

 Every project has a life cycle and some predefined objectives.


 Every project is unique in nature.
 All projects carry risks with them.
 Every project is an amalgamation of different activities; all these activities have to be fulfilled to
make the project a successful one.
 A project is a collaborative activity, it involves integrated effort.
 Every project is a single entity and is the responsibility of the project manager.
 Every project grows through the following stages- initiation, planning, execution, control and
closure.

Example of a Project

A car company is going to initiate a project “Z” on December 1, 2006. The project is
required to be successfully completed by November 1, 2007. The project entails
developing a new technology with the help of which the car manufacturers will be
able to propel the cars by a mixture of two gases - hydrogen and oxygen. The
company would hand over the newly created technology to its manufacturing
division. The manufacturing division will use it to develop a prototype of a new car.

The project team would comprise ten engineers who will report to the Project Manager. The Project Manager would report directly to the company's CEO. Everything has been judiciously planned. It is a path-breaking project and therefore the company puts a high value on its success. The company's shareholders are keenly awaiting this novel technology, which is expected to boost future car sales.

C. Project Plan and Project Charter

Under the DMAIC framework, the top management with some support from the
Black Belt or project manager needs to do the initial project planning. This involves
identifying major project tasks, estimating the time involved, and assigning the
responsibilities for each task within each of the project’s five phases.

Project Plan: After the Six Sigma project definition is complete, the next step is to make the project plan. The project plan shows the 'whys' and 'hows' of a project. A plan is a model for potential project work, and it allows potential flaws to be identified so that they can be rectified in time. The plan demarcates every team member's role. At the same time it also emphasizes how the different parts are linked to each other, when the goals are to be accomplished, and when to draw the line.

According to Thomas Pyzdek (1976), a good project plan will include:

 the methodology of the project (DMAIC)
 a statement of the goal
 a cost/benefit analysis
 a feasibility analysis
 a listing of major steps to be taken
 a timetable for completion
 a description of the resources required

Project Charter: One of the first tasks of the Six Sigma team is to develop
the Project Charter. The Project Charter is a one-page written document of a
project. It summarizes the key elements of a project- the reasons for doing the
project, the goal of the project, the project plan and scope, a review of the roles and
responsibilities, and the project deliverables.

The project charter is typically drafted by the Project Champion, and refined and
added to by the Black Belt and the team. It is generally seen that the charter is
revised and adapted a few times over as the problems become clearer and data
becomes available. It usually changes over the course of the DMAIC project.
Towards the completion of the Define phase, the charter should define the scope
the project wants to accomplish.

The project charter makes a great impact on project success as this document
encapsulates what the management wants done, and what the Black Belts and
Champions have collectively agreed to achieve. It is a kind of agreement among all
groups involved in the project. It becomes necessary to ascertain that the
management, project leader, team members and customers possess complete
understanding about the project elements to ensure project success. A clearly
written project charter helps to pass on the vital information to the team members,
and ensure they remain in the loop. A Black Belt Project can be adversely affected if
the project charter is not in place.

1. Charter/Plan Elements

A project charter comprises of the following elements:

Business Case: What impact will the project have on the business or external customers? What are the current costs of poor quality? What is the importance of the process? While creating the project charter for any project, first the business case has to be identified. There has to be a link between why the project exists and its impact on some strategic business objective.

Let us take the example of a customer care center of a telecom company. This
company plans a project to improve the call quality of its customer care executives
(CCEs) in all its processes. This move would increase the overall customer
satisfaction which will give the brand a visible edge amongst its competitors. This
would result in higher sales of that particular telecom brand. Thus the project-
improving quality- is impacting a strategic business objective, i.e., higher sales and
higher profits.

Problem it is Addressing / Problem Statement: Once the business case is stated, the problem statement should be given. What is wrong with the process, product or service? The problem statement states the tactical issue the project team wants to improve. A good problem statement should describe the time period (how long the problem has existed), its impact on the business, and the deviation between the desired state and the current state. Moreover, the problem statement should be specific and measurable.

For example, the problem statement in a call center’s process could look like this-
The Customer Satisfaction Index has fallen by 10% this quarter as compared to the
last quarter. This has had a negative impact on the sales figure which has fallen by
14% vis a vis the same time period last year.

Project Title: This is how the project will be named in reports and best-practice databases. For example, if the project or process is to improve call quality or effectiveness, a possible title could be Enhancing Customer Delight or Call Center Cycle Time.

Process Name: This will be the functions and outputs of the organization that the
project will be focusing on.

Project Scope: The scope of the project defines the boundaries within which the project team will be working. The scope of the project should be apparent. It should be achievable within four to six months. Projects fail when the scope is too large for the time agreed upon. To achieve this, each project team should reach a consensus on what the project scope will be. The start and end points of the process are given in the scope.

For example, the scope in the call center would be to achieve the desired customer
satisfaction index levels through increased monitoring of calls for the next three
months.

Project Leader (Black Belt): It is necessary to identify the person leading the project or the process improvement effort. This is done to make the management aware of who is the driving force behind the project. This person is responsible for team coordination, assuring task completion, etc. He also acts as a formal point of contact with the project sponsor (the person who is financing the project).

Project Start Date: For smoothness in documentation purposes, the project start
date has to be finalized. This is the date the charter is defined.

Projected Project Conclusion Date: The anticipated end date of the project has to
be given because a project cannot continue indefinitely. This provides the team
with adequate time to plan and finish the project in the specified business setting,
project complexity, and work-load setting, holidays, and so on.

Process Measurements: The different measures that will determine the effectiveness of the project are the process measurements. All the necessary measurements should be listed, but they should be within the scope of the project. For example, will the cycle time be in days, weeks or months? Will the internal call quality audit scores be in percentage figures?

Goals and Expected Benefits of the Project: Once the scope has been created, the project team has to formulate a set of attainable goals and objectives that are achievable within a finite time frame. It should also anticipate the expected benefits of the project. The idea is to set demanding but practical targets.

For example, the goal of the process in the above example could be to increase
customer satisfaction index. The total number of surveyed customers who rate the
customer service experience of a particular process should increase by the desired
percentage figure (as defined in the targets). The expected benefits could be-
Increased customer delight will lead to a higher sales figure.

Team Members, Their Roles, and Deliverables: The project team should include
meticulously chosen team members, and their roles and responsibilities should be
carefully defined. It should include people who have the expertise, who are the
most qualified to carry the chosen project to its completion; and those who are
strategically important to the process. Every project should have a team leader (as
mentioned above), either a Green Belt or a Black Belt. The project mentor and the
sponsor should also be mentioned.

Project Milestones: It is important that the project goals set by the team be
attained within the defined time frame (project start and estimated stop points).
The important milestones (phase of Six Sigma methodology-DMAIC) between these
dates have to be mentioned. A good project leader should ensure that the team
can achieve this by providing the team with the required project management
resources.

The following figure is a template of a project charter. The example of customer satisfaction in a customer care center of a telecom brand is depicted in this project charter.
3.1 Planning Tools

2. Planning Tools

There are a number of tools available to aid the project manager or Black Belt to
plan his project, such as Gantt chart, PERT Chart, Planning trees, QFD, Budget etc.
They are explained in detail below.

a. Gantt Chart

A Gantt chart is a bar chart that displays the tasks of a project, when each task is expected to start, and when each is scheduled to end. The horizontal bars are shaded with the project's progress, to show which tasks have been completed or how much of a task has been completed. The left end of each horizontal bar represents the expected beginning of a task, and the right end of the bar represents the expected completion date of that task. People assigned to each task can also be shown.

The Gantt chart is most commonly used in project management. This simple tool
helps in keeping track of the time frame of a project. It can be used when
communicating plans or status of a project. It helps in planning the proper
allocation of the resources needed for the completion of the project. Moreover it
can be used to plan the timetable and monitor tasks within a project.

A Gantt chart is simple and easy to construct. It is done in the following way:

1. Identify the tasks that need to be done to accomplish the project, and identify the
time frames. Also list the sequence of the tasks.

2. Draw a horizontal timeline along the top of the page.

3. Write down each task and milestone of the project vertically down the left side of
the page. Draw a bar under the appropriate times on the timeline for activities that
occur over a period of time. The left end of the horizontal bar indicates the
beginning of the activity, and the right end of the bar indicates the expected
completion date of the task. Draw the outlines of the bars; you can fill them as the
activities take place to show their status.

4. Place a vertical marker (e.g. an arrow) to show your present position on the
timeline. This chart is similar to an arrow diagram; but the Gantt chart is easier to
construct, can be understood at a glance, and makes it easier to visualize progress.
In the Gantt chart, relationships (people who are assigned to a particular task) and
dependencies of tasks (on other tasks, resources or skill needed) are not shown.
These are shown more clearly in the arrow diagram. But these details can be shown
by drawing additional columns.

Example 1: A Gantt chart showing the acquisition and implementation of a new process in a BPO.

Note: The arrow denotes the current date.

In the Chart, there are ten weeks denoted in the timeline. The Chart shows the
status of the project at Wednesday of the sixth week. The team has completed six
tasks, till forecasting and manpower planning. Technical configuration and
infrastructure setup is a long drawn out process, and slightly more than half of that
is estimated to be complete. Therefore, about two-thirds of that bar is shaded. The
task of recruitment also has finished by more than half and that part of the bar is
shaded. The team has not yet started training and quality monitoring setup; they
are behind schedule for these two tasks. Maybe they should reallocate their
resources in order to cover those tasks simultaneously.

The Gantt chart can be drawn in another manner. It is shown below.


Example 2: A Gantt chart showing the project of construction of a commercial
building.

A single view of the Gantt chart helps to monitor the progress of the project. It can be inferred from the chart that the project is running ten days behind the stipulated time that was given to the allottees of the commercial building.

b. PERT (Program Evaluation and Review Technique) Chart

Large-scale projects are complex and need a lot of planning, scheduling, and
coordinating of numerous interrelated activities. Some activities can be performed
sequentially, while others can be performed parallel to other activities. To support
these tasks, methods based on the use of networks were developed in the 1950s,
namely PERT and CPM.

In the CPM (critical path method), the time estimates were understood to be
deterministic (having an outcome that can be predicted). On the other hand, they
were assumed probabilistic in PERT. Today, both the techniques have been
combined and the differences are mainly historical. A version of the PERT chart is
called an Activity Network Diagram.

The PERT chart is a pictorial representation of the chain of actions to be executed to complete a project. It is a representation of the project's best schedule, and displays the interdependencies between various tasks. It shows resource problems along with their solutions. It also shows something called the 'critical path' of the 'critical' tasks; those tasks that must be completed on time to maintain the project's deadline. Arrows are used to show the sequence of the tasks.

Steps in constructing PERT:

1. Discuss all activities or tasks that are needed to complete the project.

2. Determine the sequence of the tasks. Before an activity or task begins, all
activities that should precede it should be completed. Ascertain which task is to be
carried out first. Identify which task can be carried out simultaneously with this
task. This can be placed to the left or right of the first task.

3. Identify the next task, and place it below the first task. See if there is any task to
be worked out concurrent to this. Concurrent tasks should be lined parallel to the
left or right.

4. Continue this process and construct a diagram. The tasks are represented with
arrows.

5. The beginning or end of the task is called an event. Draw circles for events,
between each two tasks. Therefore, events are nodes that separate tasks.

6. Use dummies to indicate problem situations or extra events. A dummy is a dotted arrow used to demarcate tasks that would otherwise start and stop with the same events. It is also used to show logical sequence. Dummies are not real tasks.

7. Identify task times or the time needed to complete each activity. Write the time
on each task’s arrow.

8. Determine the critical path. The longest time from the beginning to the end of
the project is called the critical path. This should be marked with a heavy line or
color. The project’s critical path includes those tasks that must be started or
completed on time to avoid delays in the completion of the project.

Finding the Critical Path


There are four time values for each event: its earliest time of start and earliest time
of finish; and its latest time of start and finish.

1. Work out the Earliest Start (ES) for each task and its Earliest Finish (EF). The earliest time is the expected time an event will occur if the preceding activities are started as early as possible.

Earliest Finish for each task = ES + time taken to complete the task

2. Work out the latest time that each task can begin and conclude with. These are known as Latest Start (LS) and Latest Finish (LF). The latest time is the projected time an event can happen without disturbing the project completion beyond its earliest time. To calculate this, work backwards, from the latest finish date to the latest start date.

Latest Finish = the smallest LS of all tasks immediately following this one
Latest Start = LF - time taken to complete this task

Draw a separate box for each task. Make a time box divided into four quadrants as
shown in figure below.

Slack time for an event is the difference between the latest times and earliest
times for a given event. The slack for an event indicates how much delay in the
happening of the event can be allowed without delaying project completion,
assuming everything else remains on schedule.

Total Slack = LS - ES = LF - EF

Therefore, the events that have slack times of zero are said to lie on the critical path of the project. It is to be noted that only activities having zero slack can lie on the critical path, and no others can. A delay in an activity lying on the critical path leads to a delay in the entire project. Moreover, once the critical path activities are traced, the project team has to find ways to shorten it and ensure there is no slippage.
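A minimal sketch of the forward and backward passes on a small hypothetical task network (durations and dependencies are illustrative only; tasks are listed in dependency order so each task appears after its predecessors):

```python
# Hypothetical tasks: name -> (duration, list of predecessor tasks)
tasks = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (1, ["B", "C"]),
}

# Forward pass: Earliest Start (ES) and Earliest Finish (EF)
es, ef = {}, {}
for name, (duration, preds) in tasks.items():
    es[name] = max((ef[p] for p in preds), default=0)
    ef[name] = es[name] + duration

project_end = max(ef.values())

# Backward pass: Latest Finish (LF) and Latest Start (LS)
lf, ls = {}, {}
for name in reversed(list(tasks)):
    successors = [s for s, (_, preds) in tasks.items() if name in preds]
    lf[name] = min((ls[s] for s in successors), default=project_end)
    ls[name] = lf[name] - tasks[name][0]

# Slack = LS - ES = LF - EF; zero-slack tasks form the critical path
for name in tasks:
    slack = ls[name] - es[name]
    label = "critical" if slack == 0 else f"slack {slack}"
    print(f"{name}: ES={es[name]} EF={ef[name]} LS={ls[name]} LF={lf[name]} ({label})")
```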

Benefits of Using PERT


A PERT chart provides the following information in advance:

 Probable project completion time
 The probability of completing a project prior to the specified date
 Task start and end dates, including the latest start and end dates that do not affect the project completion time
3.2 Tree Diagram
c. Tree Diagram

A tree diagram is an important project planning tool. The tree diagram helps to
identify all aspects of a project, right down to the work package level. Sometimes
the tree diagram used in project planning is also called a Work Breakdown
Structure (WBS). It displays the structure of a project; showing how the broad
categories of the project can be broken down into smaller details. This diagram
shows the overall picture of a project’s steps, the logical flow of actions from the
identified goals. The tree diagram is also used to display existing projects in an easy
to understand diagram.

The tree diagram starts with one item that branches into two or more stems. Each of these branches into two or more, and so on. The main trunk is the generalized goal, and the multiple branches are the finer levels of action. The tree diagram is a generic tool that has wide applications apart from project planning.

It can be used to find out root causes of problems as part of process improvements, or to correct the existing plans or processes as part of the implementation plan.

Steps in Constructing a Tree Diagram:

1. Identify the statement of the goal or project plan, or whatever is being studied.
Write it at the top (this will make a vertical tree) or far left of the working surface
(this will make a horizontal tree).

2. Subdivide the goal into various sub categories. Ask a question that will lead you
to the next level of detail. For example, for a goal or work breakdown structure, the
team could ask “Which tasks must be done to meet this goal? What is required to
accomplish this?” Answers could be taken from the brainstorming sessions or
affinity diagrams. Write these items in a line below or to the right. Arrows have to
be used to show the links between the ideas. Ensure that these items will be
sufficient to answer the questions above. This is called a “necessary and sufficient
check”.

3. Each of the new idea statements now becomes a goal or problem statement. For each of these ideas, ask questions again to unearth the next level of detail. Jot down the next level of sub-headings and link them with the previous line of statements with arrows. Do a "necessary and sufficient check" again for each of these items.

4. Continue the process till all the fundamental components are covered.

5. Do a review of the whole diagram to check for sufficiency and completeness. Do a check for all the sub-headings. Are all of them necessary for the main objective? Will they help achieve the anticipated goals?

Example: The following tree diagram is a project for increasing the productivity of
customer care executives in a BPO. The goal of the project is to reduce the average
call handling time, (the average time taken by each customer care executive to
handle customer calls) which will have a positive effect on productivity.
3. Project Documentation

Documentation is a crucial component in the project planning process. It is a method to disseminate information and facts about the project to all the team members and other people involved. It involves chronicling each and every movement in the project, right from planning till its completion. Who is responsible
movement in the project, right from planning till its completion. Who is responsible
for the project? Why is the project important? What are the financial implications?
Who are the team members and what are their roles and responsibilities? What are
the goals, metrics, and deliverables of the project? What was lost before the project
was undertaken? What would be the planning tools? Moreover, data about the
project’s progress and activities at every step has to be documented.

The data should be presented in a manner that it is clear to every person involved.
It should be based on facts and reliable data and presented with the help of
spreadsheets, storyboards, phased reviews and presentations to the executive
team. It is done by the Black Belt or the team members. The data collected in this
manner through these mechanisms has to be amalgamated and synthesized in a
phased manner, throughout the project, so that they are useful in the improvement
and implementation process.

Tools: A place where checksheet data is collected and organized is known as a spreadsheet. It is a table of values arranged in rows and columns. The data will be much easier to use if the spreadsheet is well designed. Spreadsheets are useful when recording project activity.

A checksheet is a structured form used to collect, organize and analyze data. This
is designed in such a way that all the necessary facts can be secured. Data
representing the ‘when’s or ‘how’s of the ongoing project can be captured in a
checksheet. In addition, the frequency of pattern of targeted events, or the
problems or defects that might occur while the project is in progress, can be
recorded in a check sheet.

A storyboard, as the name suggests, is a visual display of thoughts. It tells the story
of a plan or project. All the aspects of the project are made visible at once to
everybody involved. It is a representation of both the creative and analytical aspects
to a project. It is highly useful when documenting or displaying project activities or
results.

4. Charter Negotiation
The project charter, as discussed in the prior sections, is a statement of what the
project is about and the organization’s commitment of resources, schedule, and
cost to the project. It is common that the project charter is negotiated and
modified, by the stakeholders and the project manager, once it has been prepared.

The Black Belt needs to identify the key stakeholders with whom he will be negotiating the project charter.

Essentials of Negotiation:

Negotiation, in general terms, is a process through which all parties want to reach a
mutual solution for things they own or control. The objective of negotiation is to
carve out a win-win situation for all negotiating parties by appreciating each other’s
needs, while keeping in mind the shared goals of both. All parties intend to be
benefited after the negotiation process. Charter negotiation includes a list of
assumptions and a list of constraints of which a mutual understanding has to be
sculpted out.

There are various kinds of negotiation conducted during a project. These include
initial negotiations while finalizing the contract, change negotiations while drafting
the plan and design, and negotiations during project closure. Negotiations can be
on the project scope or project boundaries with the project sponsor or stakeholder.
It can also be on the project’s resources (time schedule, price terms, manpower,
and business conditions). Negotiations can also be carried out on a daily basis with
team members about their commitment towards the project. These are the key
elements in determining the final output of the project.

A lot of things are included in charter negotiation:

 Negotiation is best done when the project has not officially started. The focus
should be on reasoning the ‘why’s and ‘how’s, rather than on giving the
affirmations.
 Successful negotiation is about getting things done the best way without giving
away too much. The project’s negotiator should know how much he is willing to
give in to settle the matter, and what the other party is intending to achieve in the
negotiating table. This will help in zeroing in on the final agreement.
 The project negotiator and the stakeholders, who could be the customers,
shareholders or team members, should keep themselves focused on the project’s
goals. The basis of negotiation is being clear about the goals.
 The negotiation should be under control right from the start as the project
manager has to fit in several factors like financial constraints, the delivery
schedule, the list of tasks to be concluded and profits. This has to be ensured by
the project manager to avoid being on the wrong side of the agreement.
 While negotiating it is important not to hold on to the costs, and give away
something that is significant to business. Balancing between money and things
more valuable to business pays rich dividends to the project.
 The process of negotiation must be given the same importance as drafting and
execution of the project.
 While negotiating the charter with the team, it is necessary to make a detailed
Work Breakdown Structure (WBS) to help communicate project details to the team.
Extracting commitment from the team is vital for the project’s successful
completion.

Understanding Charter Negotiations

While entering into the negotiating process, the following things have to be kept in mind:

1. While negotiating the scope, the first thing to be kept in mind is to understand
the scope to be negotiated. What are the areas within the scope that can be
adjusted to the needs of the stakeholders? What are the boundaries within which
the negotiation can be carried out? The project negotiator has to fix these
parameters. Scope includes all the work required to complete the project.

2. The project negotiator should focus on the interests and the issues rather than on the people who are sitting at the negotiating table.

3. Focus should be on interests of the group, and not positions.

4. Justifying the scope and establishing the credibility of the project is the next step in the negotiating process.

5. Listening carefully and identifying the opposite party’s real interests and the
reasons underlying them is the next step. The project negotiator should unearth
the benefits his team will accrue from the negotiation.
6. It is imperative to keep track of how much has been conceded.

7. The project negotiator should politely say no if the need arises, without breaking
down the negotiations.

8. Closure negotiation is a very important aspect of negotiation and special attention is required to complete it. The negotiator has to choose the best among a range of solutions. He has to select what's best for the project by being objective and realistic.

9. Lastly, it is important to secure approval about the project plan. The project
negotiator has to ensure that the working relationships with the project
stakeholders stay intact for future negotiations.

3.3 Six Sigma Team Leadership

D. Six Sigma Team Leadership

The process of deploying the Six Sigma initiative can be started by creating the
teams to work on the Six Sigma projects. Six Sigma teams are led by the Black Belt,
Green Belt, or a Champion. The teams are made up of a motley crowd, who
contribute their personal skills and faculties to the project. This section discusses
what the belts can do to ensure team success.

After the business case is stated, the project statement is written, and the scope is
defined, teams have to be developed. The Black Belt has to prioritize actions, using
the DMAIC framework, and select the teams who will work on each phase of the
DMAIC project. He has a critical role to play in formulating teams because he is
ultimately responsible for the team’s success. His leadership skills come to the fore
when he has to work on team building and initiate team communications.

Selecting the right team members and allocating roles within the team is the
foremost task of the Black Belt. After the team has been formed, his role lies in
being not only a leader, but also that of a facilitator and a motivator. He should
understand team dynamics, and have conflict mediating skills.
The formation and development of an effective team is instrumental in achieving
sustainable success in any Six Sigma project. The Black Belt can make use of the
combined skills of the team to address customer needs, reduce variances in the
processes, and successfully deliver the project.

1. Team Initiation

Team initiation is identifying the team of individuals and skills needed to complete
the project. The crucial factor is how to become a team- how to bring a group of
strangers with variegated skills to meet the challenge of completing the project.
While launching a team, the team leader should ensure the needed skills exist in
the team. Detailed attention should be paid to the challenge of bringing the team
together for the first time, and for additional meetings. A lot depends on the team
manager to help the team get to know each other, trust each other. There should
be clarity about the team’s goals right from the start, and once the team is clear
about what to do, organizational procedures and best practices guide the team to
coordinate their activities.

The team should be small and at the same time, have the sufficient expertise and
necessary skills for the completion of the project. A core team typically consists of
six or fewer people. The team should be clear about the scope, goals and
boundaries of the project. The team must understand that its goals have to be in
sync with the organization’s goals. The team has to have the support from the
management. Every member has a role to play and responsibilities to perform.
Every member has to follow certain guidelines and rules while pursuing the
project.

The team leader has to extract a commitment towards the project from them, and
drive home the importance of the time schedule. He should know how to shift roles
and blend skills, set targets, fix assignments, and hold people accountable. Also,
team performance is driven by empowerment and positive group dynamics.

2. Team Selection

Team selection is an art, and careful attention should be paid while selecting teams.
Only those people who have the skills needed for the completion of
the project should be chosen. Members should be selected for the project only
if they possess the technical, organizational, or subject-matter skills the
project requires. People who have some knowledge of the current
process can be taken into the team. Sometimes the team can include customers
and suppliers of the process.

Teams should consist of the number of people necessary to finish the project.
Smaller teams work faster and display results more quickly. Larger teams need
greater coordination and sometimes sub teams have to be created under them.

3. Stages in Team Development

According to Bruce W. Tuckman (1965), team development is described in five stages:

Forming: In this initial stage, the team members meet each other for the first time
and get familiar with the courses of action. They explore the team goals and the
project scope. Here group interaction is hesitant, members are cautious about how
they will fit in. The decision making process is controlled by the leader and he
performs a significant role in steering the group forward.

Storming: The storming stage follows forming. Conflicts arise between members,
and between members and the team leader, in this stage. Members demand to
know their roles and responsibilities. They question the team leader’s authority as
far as group objectives are concerned. They tend to question procedures and
systems. Defying the attempts of the leader to move the group towards
independence is a common feature.

While managing conflicts, the leader should keep the following elements in mind:

1. He should not try to enforce rules on members.

2. In the event of any dispute, it would be advantageous to gain a consensus and adopt a decision on the issue.

3. He should investigate the true reasons behind the conflict.

4. He should perform the role of a mediator between member groups.


5. He should not hesitate to confront wasteful and unproductive conduct.

6. He should persistently steer the group away from dependence on its leader and towards
independence.

Norming: During the norming stage, acceptance of norms and procedures
sets in. The team takes responsibility for its goals, procedures and conduct in this
stage. The members accept that the DMAIC tools will help them in achieving their
goals. Group norms are enforced and strengthened by the group itself. Respect for
and willingness to help other team members arise in this stage.

Performing: If all goes according to plan, the team reaches this final stage.
Members realize their potential. They take pride in their group, their contribution to
the group, and their accomplishments. They are confident about giving assistance
to the improvement initiatives of the project.

Adjourning: This fifth phase involves completing the task and breaking up the
team.

Recognition: This phase involves identifying and thanking the team members for
their contribution to the task.

3.4 Team Dynamics and Performance

E. Team Dynamics and Performance

1. Team Member Roles and Responsibilities

According to Thomas Pyzdek, author of The Six Sigma Handbook, members
of the team take on two basic roles: group task roles and group maintenance roles.
The development of these roles is important for the team building process, which is
the process by which the team learns to function as a unit rather than as a collection
of individuals with no coherent goals.

Group task roles involve those functions related to facilitating and coordinating the
group’s effort to select, define, and find a solution to a problem. They include behaviors
shown in the table below:

Group maintenance roles are intended to bring group cohesiveness and group-
centered behavior. They include behaviors shown in the table below:
Counterproductive Group Roles: There are some “not always helpful roles” which
may hinder the team building process; these are called counterproductive roles. The
group leader has to identify such behavior and subtly provide feedback. Some of these
roles are discussed below:

Management’s Role:
It is very important for the management to ensure that the group gets sufficient
time to become effective. It has to ensure that the composition of the group is not
altered by one or more members being asked to leave the group, unless
required as a critical exception. If these conditions are met, the group can
progress through the team development stages.

At the same time, the group's composition should also not be impacted by the addition of
temporary members. Maintaining this will require a great deal of discipline from
both the management and the group members.

2. Team Facilitation Techniques

Facilitation is the art of assisting a group in accomplishing its goals. Facilitation is
making things easy for the group to undertake problem-solving activities and to aid
in decision making. It also involves making possible the tasks needed
to run a successful meeting.

A facilitator takes up the role of a leader in group communication. He makes things
easy for the group by facilitating the activities and functions of the group. He not
only binds the group together as a cohesive unit, he ensures that group behavior is
productive. He coaches and mentors the group to overcome problematic behavior
like dominance, aggression, feuding, digressions, floundering, etc.

Principles of Team Leadership and Facilitation:

1. The team should comprehend that they are a group of people with group goals.

2. There should be clarity about group goals and they should be relevant to the
group’s needs.

3. There has to be accuracy and fluency in presenting ideas to the group.

4. There has to be equal participation from all group members.

5. Team members must be empowered with participation and leadership in equal
amounts. This ensures responsibility towards the team’s goals, and commitment to
implement the group’s decisions.

6. The team is an interacting and collective team. Vital decisions should be arrived
at through group consensus. This provides constructive controversy, equal power,
involvement, and at the same time, realization of potential of group members.

7. The group needs to be high on cohesion. Team members should like each other,
a high level of trust and acceptance should exist. They should be satisfied with their
participation in the group. They should emit positive vibes and there should be
positive group dynamics- room for innovativeness, enough freedom to take
independent decisions and productive arguments.

8. Problem-solving abilities should be high among team members. Problems should
be eliminated in such a way that they don’t resurface.

Selecting a Facilitator:

Facilitators should be endowed with the qualities listed below: (Schuman 1996)

1. He should possess the ability to visualize the complete problem-solving and
decision-making process.

2. He should remain unbiased relating to issues and concerns within the group.

3. He should use methods and systems that the group’s social and intellectual
process understands.

4. He should appreciate the desire of the group to learn from the problem-solving
process.

Facilitation shows the best results when the facilitator:

1. Views the problem-solving and decision-making process strategically and
thoroughly, and narrows it down to specific procedures that mirror the group’s
needs and tasks at hand.

2. Is trusted by all group members as being fair and neutral; and who has no
personal interest in the result.

3. Assists them in understanding the problem solving techniques and helps them
polish their own decision making skills.

Using an Outside Facilitator:

Using an outside facilitator involves extra costs. An outside facilitator should be
used only under the following circumstances: (Schuman, 1996)

1. An unbiased external facilitator is useful when there is distrust of, or doubts about
bias in, an internal facilitator.

2. When the members feel intimidated by the internal facilitator, an outside
facilitator eases the situation and brings out the cognitive abilities of members.

3. Rivalries between individuals and organizations can be alleviated by the presence
of an outside facilitator.

4. In a critical situation where an immediate decision is desired, the use of a facilitator
hastens the decision making process.

5. An outside facilitator helps bring down the cost of the meeting, which can otherwise
be a roadblock to communication.

6. When the situation is highly complex and unique, bringing in an expert can help
the group delve into the problem and resolve it.

Sometimes the problem under discussion lacks proper definition or is viewed from
a different perspective by different people. An unprejudiced person can offer his
suggestions and pitch in his analysis.

Facilitating the Group Task Process

According to Thomas Pyzdek, author of The Six Sigma Handbook, team
activities can be divided into two phases: task-related and maintenance-related.

Task-related activities concern themselves with the reason behind team formation,
the team charter, and the team goals. They are listed below:

1. The facilitator should be selected before the team itself is formed. The facilitator
selects the team members, and designs the team charter. (The team charter is
discussed in the earlier part of this chapter)

2. He assists in developing team goals based on the charter, and acts as the
mediator between the team and management.

3. He aids the team in creating a realistic schedule. (Project scheduling is discussed
in chapter 4)

4. He has to assure that sufficient records of the team’s projects are kept; and see
that records should contain the current status of the project. He has to arrange for
blue-collar support.

5. He has to plan meetings, invite the strategically important people, and ensure
their attendance. He has to see that the meeting starts on time and that the
proceedings run smoothly.

6. He has to act as the channel of communication between team members and people
outside the team, and gain the support and cooperation of non-team members.

Facilitating the Group Maintenance Process

Group maintenance tasks involve maintaining the camaraderie between group
members, and seeing that the group does not digress from its goals. The facilitator
has to observe unwarranted behavior, and diplomatically provide feedback to the
team.

3. Team Performance Evaluation

Team performance evaluation is measuring team progress in relation to team
goals. Before starting the evaluation process, some baselines or metrics for the
goals have to be established. This can be done through benchmarking and other
means. (These are discussed in chapter 1) It is important that records of the
performance are maintained.

Some measurable performance measures are (Thomas Pyzdek):

 productivity
 quality
 cycle time
 grievances
 medical usage
 service
 turnover
 dismissals
 counseling usage

Some aspects of team performance which are difficult to measure directly are (Thomas Pyzdek):

 employee attitudes
 customer attitudes
 customer complaints

There are also measures for weighing the performance of the teams' processes.
Project successes and failures should be monitored. These can be measured in terms
like: leaders trained, projects started, projects dropped, projects completed,
projects rejected, improved productivity, number of teams, inactive teams,
improved service etc. (Aubrey and Felkins, 1998)

3.5 Team Effectiveness Tools

4. Team Effectiveness Tools

Individuals in a team could have excellent skills and phenomenal creativity, but they
might be unable to bring all these skills together because of lack of team
coordination, or lack of knowledge of the common tools. For a team to be
successful, it should master not only the quality improvement processes, but also
the team tools and team effectiveness.

Team tools facilitate the team to perform effectively and efficiently. Tools enable
them to achieve their goals and objectives, or arrive at a consensus regarding team
issues. A major factor in team dynamics is that no decision is taken till it wins the
tacit approval from all team members. Some of these tools are described below.

a. Affinity Diagrams

The affinity diagram organizes various ideas into meaningful categories by
recognizing their natural relationships. It is used to reduce and refine the long,
complex, and raw data into a smaller number of dimensions and categories. This
method helps to bring out the team’s intuition and creativity levels. This tool is used
when group consensus is necessary. This technique was created by Japanese
anthropologist Jiro Kawakita in the 1960s.

Affinity diagrams can be constructed using existing data like survey results,
drawings, letters, or data gathered from brainstorming. They can be used before
creating a storyboard or tree diagram (discussed in the subsequent sections of this
chapter). They can be used together with other techniques like cause and effect
diagrams and interrelationship diagraphs.

The affinity diagram is created in the following manner:

 Write the ideas on small pieces of paper or sticky notes. Randomly paste these
notes on a working surface which is visible to everyone.
 The team has to work in silence during this stage. Look for patterns in the ideas.
Look for ideas that seem to be correlated. Place them next to each other by
moving the sticky note. Repeat the process until all ideas are grouped. There could
be notes which do not fit into any category. It’s alright to have such “stand-alones”. You
can also move a note someone else has moved before.
 It is alright to talk now. The team can now review and assess these final groupings.
Any unsuspected patterns or reasons why the notes were shifted can be
discussed. Select a heading for each group that would capture the essence of the
group. The grouping of these ideas will assist the team in taking a decision, or
making a plan.

b. Nominal Group Technique

The nominal group technique is a highly structured method of generating a list of
ideas to be acted upon. In other words, it is a technique for group brainstorming,
and it supports contributions from everyone. The Nominal Group Technique is used
when:

 the group has new members
 some members are not participating
 the group is unable to reach a consensus

The NGT is created in the following manner:

Part 1: A formalized brainstorm


1. Define the task under brainstorming; clarify the task until everyone understands
it.

2. Brainstorm ideas. Have the team write down their ideas silently, for a set amount of time.

3. List the ideas by asking each member to state their ideas aloud. The
facilitator records them on the flipchart. Discussion is not allowed, and questions cannot
be asked at this stage. Ideas stated need not be limited to the member’s written list.

Part 2: Arriving at the decision

1. Discuss the top 50 ideas. Ideas can be removed from the list only if there is
approval by everybody. Discussions will elucidate meanings, spell out the analysis,
or express agreements or conflicts.

2. Rank ideas through multivoting. Pass out index cards to all members. The
following table can be used as a guide.

3. Let each member write down their choices from the list of the given ideas, one
choice per card.

4. Each member then has to rank their choices and write the rank on the cards.

5. Record the group’s choices and ranks.

6. The group analyses and discusses the results. Examine the number of times an
item was selected. Calculate the total of the ranks for each item.

The item(s) which receive the highest score(s) (total of ranks) are taken up by the
team for further discussion or analysis. A decision is made only after the group
reaches consensus on the importance of the items with the highest score(s).
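The rank-tallying step can be sketched in a few lines of code. The example below is a minimal Python illustration with hypothetical ballots and idea names; it assumes the convention described above, where each member gives a higher number to a more important choice, and the group total of ranks decides the priority.

```python
from collections import defaultdict

# Hypothetical NGT ballots: each member ranks his or her chosen ideas,
# giving the highest number to the most important choice.
ballots = {
    "Member A": {"Reduce hold time": 5, "Update call scripts": 3, "More training": 4},
    "Member B": {"Reduce hold time": 4, "More training": 5, "New headsets": 2},
    "Member C": {"Update call scripts": 5, "Reduce hold time": 3, "More training": 4},
}

selection_counts = defaultdict(int)  # how many members selected each idea
rank_totals = defaultdict(int)       # total of ranks for each idea

for member, ranks in ballots.items():
    for idea, rank in ranks.items():
        selection_counts[idea] += 1
        rank_totals[idea] += rank

# Sort by total rank, highest first: this is the group's priority order.
for idea, total in sorted(rank_totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{idea}: total rank = {total}, selected by {selection_counts[idea]} member(s)")
```

The idea with the highest total is the candidate for further discussion; a decision is still taken only after the group agrees on its importance.
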
c. Multivoting

Multivoting enables selecting a few options or possibilities from a broad array of
choices. Multivoting is preferable to straight voting because it enables the team to
give priority to items that are broadly favored by the group, and not merely the item
that is the first choice of a few. It is used when a list has to be narrowed down, or
when something is to be decided by group consensus.

Multivoting is generally used as a follow-up to brainstorming. In the
brainstorming session, a large number of raw ideas are presented, which are
then organized into meaningful categories as in the affinity diagram.
These organized sets of ideas are further reduced, in order of importance,
including common problems and causes, into a final list. The list usually comprises
3-5 items. Each member is allowed to select the number of items he/she thinks
most important. Then, each member ranks each item according to how important
he/she perceives it to be. After this, the votes are tallied and the rankings are
summed. The item which receives the highest ranking from the group is
considered for further analysis.

d. Force Field Diagram

The Force Field diagram helps in understanding the opposing forces of a required
change: the driving forces that work towards the change, and the
restraining forces that block any improvement or change from taking place. The Force
Field diagram can be used to display the forces that would push to reverse or nullify
the changes, and to create counterforces that would sustain the changes.

A change cannot occur if the opposing forces are equal. The ‘drivers’ have to be
stronger than the ‘restrainers’ for a change to occur. After doing a comprehensive
study of the forces, the team can design an action plan or take a decision to pave
the way for the desired change, in order to:

a. Decrease the forces restraining progress
b. Increase the forces which lead to movement towards the desired goal
c. Implement a solution
d. Locate causes of a problem

The Force Field Diagram can be created in the following manner:

1. Write the change that is desired, or the problem that is to be remedied. Draw a
vertical line.

2. Brainstorm all possible causes that push the drivers, or cause the problem to
occur. Write them down on the left side of the line. Indicate the strength of that
force by drawing right pointing arrows whose length represents that strength.

3. Similarly, on the right side of the line, write down all the possible restrainers, and
draw left-pointing arrows to indicate the strength of the restrainers.

4. To bring in a desired change, discuss all the ways in which the restrainers can be
removed. To study the causes of a problem, discuss how to reduce the driving
forces.

Example: The Service Levels (how quickly and how efficiently a customer care
executive responds to a customer’s call) of a telecom customer care company
(inbound environment) in a particular process are going down below the targeted
levels. The force field diagram can be used to analyze how the performance of the
customer care executive can be enhanced. The drivers and restraints are as follows:

3.6 Negotiation and Conflict Resolution Techniques

5. Negotiation and Conflict Resolution Techniques

Conflict resolution is helping the team or opposing parties see the common goals
and strive towards achieving those common goals.

Conflict resolution is an activity which is managed by the team leader (Black Belt,
Green Belt, etc.) and the facilitator. It is worth mentioning here that constructive
conflict should be encouraged. The real reasons behind the apparent conflict
should be found out, and a consensus should be reached.

A primary rule in conflict resolution is that no decision about the team’s
objective or goal should be taken until it wins the approval of all participating
members.

There are various tools to eliminate conflict. Some of them are discussed below:

a. Consensus Technique

The consensus technique, or consensus building, is a decision making tool that is
essential to any team effort trying to arrive at a solution to the problem under
discussion. It is a conflict resolution technique used to settle complicated issues with
multi-dimensional aspects. This technique brings together the stakeholders’ views
to arrive at a shared conclusion. The success of this technique requires an unbiased
facilitator.

This technique will take time to be effective, or will fail altogether, in certain
situations such as issues which involve deep-rooted differences or win-lose
confrontations. Arbitration from a higher authority will be required in this case.

The various ways in which conflict can be minimized while building consensus are:

 Avoid bargaining for your own position; display sensitiveness for other’s opinions
and consider them in further discussions on similar points.
 Avoid win-lose confrontations; it is not necessary that a winner should arise in
every situation. In such moments, look for the next best solution.
 Look for reasons behind the disagreement; and double check the intention behind
the agreement. See if participants arrive at the ‘yes’ for the same basic reasons, or
if they have some underlying motive.
 Avoid sidestepping or reducing conflict by methods such as bargaining, coin
tossing, majority vote, trading out etc.
 Avoid negativity and accept that the group does have the potential for reaching a
decision.
 Look at controversy positively; this provides equal power and involvement, throws
up creative options, and at the same time, the potential of group members is
realized.

b. Brainstorming

Brainstorming is a tool for generating ideas in a short period of time. Apart from
using it when creative ideas have to be generated, it is used as a consensus tool to
involve participation from the entire group. Lists of ideas or solutions are chalked
out and then the final choice is made from the options that are available.

Steps in a brainstorming exercise:

1. Discuss the rules of brainstorming with the group:

 No judgment of others’ ideas is allowed
 All ideas are written down
 An idea can be anything; it can range from the weird to the wild.

2. Discuss the topic of the problem or the conflict. It is very important to see that
everyone involved in the session fully supports the query that needs to be
discussed for new ideas.

3. Allow some minutes of silence for all to think about the topic.

4. Let people speak their ideas, and note down every idea.

5. Repeat the process till it produces no more ideas.

6. Do away with duplicate and irrelevant ideas.

c. Multivoting

This tool has already been detailed in the earlier sections. See Team Effectiveness Tools.

d. Interest-Based Bargaining

The best solution to a problem often emerges from a variety of options and
perspectives. This is a characteristic feature of interest-based bargaining. Interest-
based bargaining is a method of negotiation that tries to meet the inherent needs
or interests of both parties. Unlike traditional bargaining, opposing parties are
allowed to convey what’s important about the issue under discussion, rather than
fighting out for a specific solution or arguing about a specific position. This makes it
possible to jointly create solutions that are acceptable to both sides. In this, neither
side has to give up their fundamental beliefs, and a win-win situation is created for
all.

This approach is also referred to as integrative, win-win, or mutual gains bargaining.
It is preferred over traditional positional bargaining because it produces better
results. Positional bargaining involves bargaining parties holding rigid viewpoints,
and results either in a compromise or in no result at all.

The first step in integrative bargaining is to find out the interests. The next step
involves finding out the ‘whys’ behind those interests, and what is obstructing their
fulfilment. Once the fundamental interests are known, a solution can be carved
out.

6. Motivation Techniques

There are many definitions for motivation.

Motivation is the psychological process that gives behavior purpose and direction
(Kreitner, 1995).

Motivation is a predisposition to behave in a purposive manner to achieve specific,
unmet needs (Buford, Bedeian, & Lindner, 1995).

Motivation, in short, can be defined as a psychological force; an inducement or an
incentive, which drives people or teams to work in a particular manner, or to
accomplish certain individual or organizational objectives. The act of motivating is
reinforcing the energies of individuals or groups to propel them to fulfill their duties.

The team or group has to stay motivated to sustain consistent performance and to be
continuously productive. Team motivation is an important prerequisite to effective
leadership and management. A motivated team is essential for the team effort to
be successful and for the team to survive.

To be successful, team managers or team leaders, should know what keeps a team
motivated to carry on their tasks within the context of the roles they play.

Some Motivation Techniques


1. Show appreciation for good work done by team members.
2. As a team leader, know the needs of each team member because different
people have different needs. You can do this by observing, asking questions, or
listening.
3. Understand that motivation is an ongoing process; it is necessary to sustain an
environment that constantly motivates.
4. Having face to face meetings with the team makes them feel they are listened to
and their needs taken care of.
5. Recognize delegation as a means of motivation because it allows team members
to assume stronger responsibility in their jobs.
6. Make team members realize they have impacted organizational performance by
communicating how team results have contributed to organizational results.
7. Provide feedback on performance and method of working which impacts team
participation.
8. Provide rewards and recognition on performance, such as payments and
promotion, appreciation and public commendation.

Some Facts about Team Recognition and Reward

Employee recognition is a type of employee motivation which an enterprise
undertakes to single out employees and thank them for the contribution they have
made to the company’s success. Employees will be motivated when they are
facilitated by management to produce a product or service of the highest quality.

According to Maslow’s theory of motivation, recognition feeds the need for self
esteem and sense of belonging. Unless these needs of self actualization are
realized, pride in work and feeling of accomplishment will not emerge.

It is imperative for the company to consistently analyze its recognition systems,
because they reflect the company’s working values. These values are the ones
which control employee behavior, and they might not necessarily be what
management advocates. For example, a company declaring customer satisfaction
as one of its values, but working only on increasing its customer base in various
geographical locations, may not in practice hold customer satisfaction as one of its
current values.

Recognition may be in the form of:

1. Public recognition: This is liked by most employees, and it conveys a message to
all employees about what is important to the organization.

2. Cash rewards: This is a method of employee motivation, but this form of
recognition is often assumed to be a part of the compensation system.

It is important to note that recognition should not be mistaken for compensation.
Recognition should be given for showing effort. If recognition is based only on goal
achievement, employees will tend to expect a cash award every time they achieve a
goal. Moreover, recognition should not create “winners” and “losers”.

7. Organizational Roadblocks

Most organizations have hierarchical structures; they are command-and-control
driven. The management culture in traditional organizations is such that work is
divided into separate and distinct units or departments with clearly demarcated job
responsibilities and authorities. One feature that is inherent in this structure is that
people designated to particular tasks are the exclusive authority in those tasks.
These are ‘functional specialists’ whose aim is to primarily focus on their own work
areas. These domains are zealously guarded and any infringement by outsiders is
not accepted.

Although this structure has advantages, it is not without inherent flaws. It often
creates a resistance to change, which becomes an organizational roadblock.
The modern business scenario demands that value for customers be created
by drawing resources from various parts of the organization. An organization is
known by its people, and everybody should be given a chance to contribute their
skills and expertise to value creation. Therefore, organizations should change the
traditional approach to work and cut across these lines to focus on customer
satisfaction. The Six Sigma way is a customer-driven methodology which enables
cross-functional teams to come together and create value for customers.

The standard operating procedures (SOPs), in other words the formal rules of an
organization, are in themselves a barrier to change. These rules are made to rectify past
problems and often stay in existence long after the problem is over. Most of the
time, the top leadership is reluctant to submit to a rule-changing process. The
management has to display flexibility and not succumb to the spiral of writing down
too many rules for every procedure if it wants to implement change in its system.
The detailed rules and procedures that define work are another roadblock. The
requirements of different projects are different, and these procedures obstruct
the needed changes. Another barrier to change is the requirement to take
permission from various departments, experts, boards and various other bodies.
Management has to do away with these limitations to ensure the smooth
functioning of projects. Lack of support from leadership, and group barriers
which disturb team dynamics when a change is required, are other barriers to
change.

The most significant barrier to change is, perhaps, human nature. It is a part of
human nature to resist anything that threatens our present status. At the individual
level, barriers to change include:

 Fear of treading into new uncharted territory like introducing new ways of
managing a process, or inducting a new process itself
 Fear of making a mistake or fear that the new method will fail to produce the
desired results

Apart from internal barriers, there are external roadblocks to change. Government
bodies and private agencies necessitate organizations to follow a maze of rules and
regulations before they can embark on some new project.

The leadership must recognize these barriers to change and focus on removing
them. It is their responsibility to remove these roadblocks to
organizational improvement. The first step is to assert
a desire to reduce or remove the problem. The employees can then be trained in the
use of problem-solving tools, and a model in which the improvement will be
implemented can be designed. Communicating the solution to all levels, and giving
recognition to all who helped implement it, will help remove the barriers
to change.

3.7 Management and Planning Tools

F. Management and Planning Tools

The management tools are focused on managing and planning quality
improvement techniques. These tools are often referred to as the 7M tools. These
are explained below.
1. Affinity Diagrams

This has been discussed in the earlier section. See Team Effectiveness Tools.

2. Tree Diagrams

(Also called hierarchy diagram, analytical tree, tree analysis)

The tree diagram is used to stratify ideas into subsequent levels of detail. The idea
becomes easier to understand, or the problem easier to solve, with the tree diagram.
The tree diagram starts with one item that branches into two or more stems. Each
of these branches into two or more, and so on. The main trunk is the generalized
goal, and the multiple branches are the finer levels of action.

The tree diagram is a generic tool that has wide applications. It can be used when

 broad options have to be narrowed down to specific details
 developing interrelated steps to achieve an objective
 finding out root causes of problems as part of process improvements
 correcting the existing plans or procedures
 developing events or actions that will impact a solution

The procedure to draw a tree diagram has already been discussed in the earlier
sections. See Planning Tools.

3. Interrelationship Diagraphs

Interrelationship Diagraphs are used to examine the relationships between complex
issues. The diagraph is drawn to illustrate the relationships between various factors, areas, or
processes. It can also be used as a means of organizing disjointed ideas (usually
generated from brainstorming). The analysis performed with an Interrelationship Diagraph aids
in making a distinction between elements which operate as the root causes and
those which are the outcomes of the root causes. These root causes can then be
used for further analysis in problem resolution.

Steps in generating an interrelationship diagraph:

1. The group has to define the particular issue or problem under discussion.

2. Write down all the factors or ideas on pieces of paper. These have to be pasted
on a large flip-chart or any working surface.

3. Link each factor to all others. Use an arrow, also known as “influence arrow”, to
link related factors.

4. Draw the “influence arrows” from the factors that influence to those which are
influenced.

5. If two factors influence each other, the arrow should be drawn to reflect the
stronger influence.

6. Count the arrows.

7. The elements with the most outgoing arrows will be root causes or drivers.

8. The ones with the most incoming arrows will be key outcomes or results.

Example: A pizza chain is involved in the home delivery of pizzas. An interrelationship
diagraph can be drawn, and the interrelationships between the various factors, like
the product (pizza) quality, sales, delivery time, quality of manpower etc., can be
found out.

From the interrelationship diagraph, it is clearly visible that the largest number of
arrows originate from incompetent staff. This is the root cause of the outcomes,
that is, low sales and eventually a fall in profits.
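The arrow-counting step of the diagraph can also be done programmatically. The following is a minimal Python sketch using hypothetical factors and influence arrows loosely based on the pizza example; the factor with the most outgoing arrows is flagged as a likely driver (root cause) and the one with the most incoming arrows as a likely key outcome.

```python
# Hypothetical influence arrows: each pair (a, b) means "a influences b".
arrows = [
    ("Incompetent staff", "Pizza quality"),
    ("Incompetent staff", "Delivery time"),
    ("Incompetent staff", "Order accuracy"),
    ("Pizza quality", "Sales"),
    ("Delivery time", "Sales"),
    ("Order accuracy", "Sales"),
    ("Sales", "Profits"),
]

factors = {f for pair in arrows for f in pair}
outgoing = {f: sum(1 for a, _ in arrows if a == f) for f in factors}
incoming = {f: sum(1 for _, b in arrows if b == f) for f in factors}

# Most outgoing arrows -> likely driver (root cause); most incoming -> likely key outcome.
driver = max(factors, key=lambda f: outgoing[f])
outcome = max(factors, key=lambda f: incoming[f])
print(f"Likely driver (root cause): {driver} ({outgoing[driver]} outgoing arrows)")
print(f"Likely key outcome: {outcome} ({incoming[outcome]} incoming arrows)")
```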

4. Matrix Diagrams

A Matrix diagram is an analysis tool that compares relationships between two,
three, or four sets of data. It gives information about the nature of the correlation
between the elements, such as its strength, the roles played by various individuals, or
measurements. It is a representation of elements in tabular form.

A Matrix diagram can be used:

 while trying to comprehend how groups of items relate to one another or affect
one another
 while comparing the efficiency and effectiveness of the options
 when comparing cause-and-effect relationships
 when designating responsibilities for tasks among a group of people

Different shapes of matrix diagrams

 An L-shaped matrix relates two sets of elements to each other, and sometimes one
set of elements to itself. The elements are compared by placing one set in the first
column and the other in the top row.
 A T-shaped matrix relates three sets of elements in such a way that sets X and Y are
related to Z, but X and Y are not related to each other.
 A Y-shaped matrix relates three sets of elements in such a way that each set is
related to the other two sets of elements, i.e. they are related in a circular manner.
Suppose X is related to Y, and Y is related to Z; then Z is also related to X.
 An X-shaped matrix relates four sets of elements, and each set is related to two
other sets of elements in a circular manner. Suppose W is related to X, X is related
to Y, Y is related to Z, and Z is related to W, but W is not related to Y, and X is not related
to Z.
 A C-shaped matrix relates three sets of elements simultaneously, in a 3-dimensional
manner. It is difficult to draw and therefore is rarely used.
Steps in generating a matrix diagram:

1. Define which set of elements have to be compared.

2. Choose the format for the matrix (the common formats are described above)

3. Make the grid; list the items as row/column headings.

4. Think of what to tell, which relationship to state, with symbols on the matrix.
There are some commonly used symbols like ‘X’s and blanks, or check marks to
indicate ‘yes’ or ‘no’. There are more symbols which make the matrix more
understandable. These may show the strength of the relationship between two
items, or what role the item plays in the activity.

5. Compare the sets of elements, item by item. Place the appropriate symbol at the
intersection of the box of the paired items for each comparison.

6. Analyze the matrix for patterns. This information can be used for further analysis
or to resolve a problem.

Example: L-shaped Matrix


The L-shaped matrix is the most basic and common matrix format. This matrix
relates a pizza chain’s objectives to its systems or procedures. For example,
effective advertising has a strong relation with increasing the financials of the
company. Regular checks on cooking procedures have a direct impact (strong
relationship) on food quality, and so on.

Example: T-shaped Matrix

The T-shaped matrix relates four product models P, Q, R, S (say tires) (group X), to
their manufacturing locations (group Y) and to their buyer groups (group Z). The
matrix can be viewed in different ways to pinpoint different relationships. For
example, Ford is a major buyer of Tire P, but it buys Tire S in small volumes. Tire Q
is produced in large volumes in the Rome unit, and in small volumes in the Paris
unit, and is bought in big volumes by BMW. Volkswagen is the only customer who
buys all the tire types.

Example: Y-shaped Matrix

The Y-shaped matrix shows the correlation between the customer requirements of a
pizza company, the departments involved, and the internal process metrics. The
matrix tells something about the requirements of the customer: on-time delivery.
The delivery department is primarily involved in delivering the pizzas. The two
metrics, order lead time and kitchen inventory, are most strongly related to on-time
delivery. It can also be seen that delivery has a weak relationship with order lead
time, and no relationship with kitchen inventory levels. Again, carrier performance
directly impacts delivery of pizzas. Therefore, it can be concluded that maybe the
requirement of on-time delivery needs to be reevaluated.

Example: X-shaped Matrix


The T-shaped matrix becomes an X-shaped matrix by adding another dimension to
the T-shaped matrix. In the example of the T-shaped matrix given above, a fourth
group of items, transport lines, are related to the production units they cater to and
the buyers who use them. The product types are related to the production units
and to the buyers, but not to the transport lines. For example, Quick Express and
Blue Lines are the major transporters based on volume. On the other hand, Trans
World and Green Dart seem to be the minor transporters. Mercedes is the only
major buyer of Type R tires. Volkswagen buys all the tire types.

3.8 Prioritization Matrix

5. Prioritization Matrix

“A prioritization matrix is an L-shaped matrix that makes pair-wise comparisons of a list
of options to a set of criteria in order to choose the best options” (Nancy R. Tague).
A prioritization matrix is prepared to logically narrow down the focus of the team to
a select few options. It is designed before exhaustive execution planning is done. It
is one of the most thorough and painstaking decision making tools.

Prioritization matrices can be used when:

 vital root causes are already recognized and the most important ones have been
identified
 when three or more issues have to be compared, especially when the decision is vital
to the organization and some issues are subjective
 the resources for the progress are limited and only a vital few activities must be
focused upon

Steps in generating a prioritization matrix:

1. First, decide the importance of each criterion, assign numerical weights.

2. Then rate each option according to the intensity of the correlation with the
criteria or according to how well it meets the criterion.

3. Finally, combine all ratings for a final ranking of the options numerically.

Example: The Sales Department of a well-known pizza making company found
certain problems when the company’s motivational survey was held. The company
decided to focus on the problematic areas using the Prioritization Matrix.

The main problems are listed along with the various options; the rating of each
option against each criterion is then multiplied by the weight assigned to that criterion.

The prioritization matrix is like a grid, showing the various options along the top and the
decision criteria on the left side, with a weight next to each criterion. The final score
is calculated by multiplying the ratings by the weights for each criterion. Once the
weighted ratings have been summed, the option with the highest score is chosen as
the best solution.
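The weighted-scoring arithmetic behind the matrix is simple to reproduce. The Python sketch below uses hypothetical criteria, weights, options and ratings (not taken from the survey example above) to show how the final scores are obtained.

```python
# Hypothetical criteria and their weights (higher weight = more important criterion).
weights = {"Impact on morale": 0.5, "Cost to implement": 0.2, "Speed of results": 0.3}

# Hypothetical ratings of each option against each criterion (1 = poor fit, 5 = excellent fit).
ratings = {
    "Revise incentive plan":  {"Impact on morale": 5, "Cost to implement": 2, "Speed of results": 3},
    "Flexible shift timings": {"Impact on morale": 4, "Cost to implement": 4, "Speed of results": 4},
    "Additional training":    {"Impact on morale": 3, "Cost to implement": 3, "Speed of results": 2},
}

# Final score for each option = sum over criteria of (rating x weight).
scores = {
    option: sum(option_ratings[c] * w for c, w in weights.items())
    for option, option_ratings in ratings.items()
}

# The option with the highest score is the candidate for implementation.
for option, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{option}: {score:.2f}")
```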

6. Process Decision Program Charts (PDPC)

In any kind of planning there may be many things that could go wrong. The
process decision program chart, or PDPC, helps to foresee the problems that could
be encountered in a plan under development. To avert and avoid such
problems, countermeasures are developed beforehand. The PDPC helps to revise the
plan with the intent of avoiding the problem, or of being ready with a solution to
counter the problem.

A process decision program chart should be prepared before implementing any
plan, especially when:

 the plan is sizeable and complex
 the proposed plan has to be finished on schedule
 there is a high cost of failure

Steps in generating a process decision program chart:

 Develop a high level tree diagram of the plan.


 In the final level, brainstorm the problems that could be encountered for every
task cited. Classify the criteria for identifying problems, i.e., those problems which
affect the scheduled completion date.
 Assess all the problems and cross out any that are impossible and those with trivial
outcomes. Classify those risks which would need a countermeasure. Illustrate
these problems as the next level in the tree diagram linked to the tasks cited.
 Now brainstorm the counter measures for the problems illustrated. Isolate those
countermeasures which would minimize the problem. The countermeasures may
be those that would bring changes to the plan or those that would provide a
solution if the problem occurred. Show these countermeasures as a fifth level with
irregular or jagged lines.
 Determine the feasibility of each countermeasure. It should be measured on such
criteria as price, time required, effort needed for execution, and efficacy etc. For
example, those countermeasures should be implemented which are cost effective,
like, amount of time a problem will cost or amount of time a countermeasure will
save, etc.
The Diagram given below represents a Process Decision Program chart:

7. Activity Network Diagrams/ Arrow Diagrams

{also called network diagram, node diagram, CPM (Critical Path Method) chart}
(variation: PERT chart)

The activity network diagram is a pictorial representation of the chain of actions to
be executed to complete a project. It is a representation of the
project’s best schedule, and displays the interdependencies between the various tasks.
It shows the resource problems and their solutions. It also shows something called
the ‘critical path’ of the ‘critical’ tasks: those tasks that must be completed on time
to maintain the project’s deadline. Arrows are used to show the sequence of the
tasks.

The activity network diagram is used when:

 tasks within a complex project or process have to be scheduled and their progress
has to be monitored
 the project schedule is critical, with serious consequences if the project is not
completed on time, or significant gains if it is completed early.

Steps in constructing an activity network diagram:

1. Discuss all activities or tasks that are needed to complete the project.

2. Determine the sequence of the tasks. Before an activity or task begins, all
activities that should precede it should be completed. Ascertain which task is to be
carried out first. Identify which task can be carried out simultaneously with this
task. This can be placed to the left or right of the first task.

3. Identify the next task, and place it below the first task. See if there is any task to
be worked out concurrent to this. Concurrent tasks should be lined parallel to the
left or right.

4. Continue this process and construct a diagram. The tasks are represented with
arrows. The beginning or end of the task is called an event. Draw circles for events,
between each two tasks. Therefore, events are nodes that separate tasks.

5. Use dummies to indicate problem situations or extra events. A dummy is a
dotted arrow used to demarcate tasks that would otherwise start and stop with the
same events. It is also used to show logical sequence. Dummies are not real tasks.

In figure 17, event 2 and the dummy between 2 and 3 have been added to separate
tasks P and Q.
In figure 18, R cannot start until both tasks P and Q are over, and a fourth task S
cannot start before P is complete, but S does not have to wait for Q. A dummy can
be inserted between the end of task P and the start of task R.

6. When the network is made, label events in sequence, with event numbers inside
the circles.

7. Identify task times or the time needed to complete each activity. Write the time
on each task’s arrow.

8. Determine the critical path. The longest time from the beginning to the end of
the project is called the critical path. This should be marked with a heavy line or
color. The project’s critical path includes those tasks that must be started or
completed on time to avoid delays in the completion of the project.

Finding the Critical Path:

There are four time values for each task: its earliest start and earliest finish, and its
latest start and latest finish.

1. Work out the earliest start (ES) and earliest finish (EF) for each task.
The earliest time is the expected time an event will occur if the preceding activities
are started as early as possible.
Earliest Start for a task = the largest EF of all tasks immediately preceding it (zero for the first task).
Earliest Finish for each task = ES + time taken to complete the task.

2. Work out the latest time that each task can begin and conclude with. These are
known as Latest Start (LS) and Latest Finish (LF). The latest time is the projected
time an event can happen without pushing the project completion beyond its
earliest time. To calculate this, work backwards, from the latest finish date to
the latest start date.
Latest Finish = the smallest LS of all tasks immediately following this one (for the final task, the project completion time)
Latest Start = LF - time taken to complete this task
Draw a separate box for each task. Make a time box divided into four quadrants as
shown in the figure below.

Slack time for an event is the difference between the latest times and earliest times
for a given event. The slack for an event indicates how much delay in the happening
of the event can be allowed without delaying project completion, assuming
everything else remains on schedule.

Therefore, the events that have slack times of zero are said to lie on the critical path
of the project. It is to be noted that only activities having zero slack can lie on
the critical path, and no others can. A delay in an activity lying on the critical path
leads to a delay in the entire project. Moreover, once the critical path activities
are traced, the project team has to find ways to shorten the path and ensure there is no
slippage.
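The forward and backward passes described above can be sketched in a few lines of code. The Python example below uses a small hypothetical network of four tasks; it assumes the tasks are listed in a valid precedence order and computes ES, EF, LS, LF and slack for each task, flagging the zero-slack tasks that form the critical path.

```python
# Hypothetical network: each task lists its duration and its immediate predecessors.
tasks = {
    "A": {"duration": 3, "predecessors": []},
    "B": {"duration": 2, "predecessors": ["A"]},
    "C": {"duration": 4, "predecessors": ["A"]},
    "D": {"duration": 2, "predecessors": ["B", "C"]},
}
order = list(tasks)  # assumed to already be in a valid (topological) order

# Forward pass: ES = largest EF of the predecessors (0 if none); EF = ES + duration.
ES, EF = {}, {}
for t in order:
    ES[t] = max((EF[p] for p in tasks[t]["predecessors"]), default=0)
    EF[t] = ES[t] + tasks[t]["duration"]
project_duration = max(EF.values())

# Backward pass: LF = smallest LS of the successors (project duration if none); LS = LF - duration.
LS, LF = {}, {}
for t in reversed(order):
    successors = [s for s in tasks if t in tasks[s]["predecessors"]]
    LF[t] = min((LS[s] for s in successors), default=project_duration)
    LS[t] = LF[t] - tasks[t]["duration"]

# Slack = LS - ES; tasks with zero slack lie on the critical path.
for t in order:
    slack = LS[t] - ES[t]
    flag = "  <-- on the critical path" if slack == 0 else ""
    print(f"{t}: ES={ES[t]} EF={EF[t]} LS={LS[t]} LF={LF[t]} slack={slack}{flag}")
print(f"Project duration: {project_duration}")
```

For this hypothetical data the critical path is A-C-D with a total duration of 9 time units, and task B has a slack of 2.
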

CHAPTER 4
4 The Define Phase

Introduction

When a project’s goal can be achieved by improving an existing product, process or
service, the DMAIC model is used. This chapter and the following chapters
elaborate DMAIC in detail.

A. Project Definition

The first step in the DMAIC model is to define the project. It is understood in the
Define phase that a number of problems affecting the business have been
identified by the management (through the VOC tools discussed in
the earlier chapter) and that practical solutions have to be worked out for them. A
challenge that management faces is to frame these problems in such a way that
applying Six Sigma to them gives the maximum benefit. After the problems
are identified, the projects to work on have to be decided by the champions, belts,
and process owners. Many Six Sigma projects can run in parallel in
the organization, with champions, Black Belts and Green Belts working throughout
the organization. The implementation of the project is performed by these people.

The focus of a project is to resolve a problem that is affecting the core performance
areas of the business such as customer or employee satisfaction, costs, process
capability, output and rework, cycle time or responsiveness, defective services and
revenue. In the Six Sigma process, the problem first metamorphoses from a
practical problem to a statistical problem, then a statistical solution is found out
which is later transformed into a practical solution.

The writing of the project charter, a document issued by senior management,
marks the beginning of the project definition. (See Chapter 3 for details on the project
charter.) The project charter specifies the project definition. The Define phase of the
DMAIC model begins here. How well the project is defined determines to a great
extent how successful the project is.

The project is defined by stating the project scope, using tools like Pareto charts,
high level process maps, work breakdown structures, and the 7M tools (See
Chapter 3 for a description of the 7M tools)

Steps in the project definition process


(Craig Gygi, Neil DeCarlo, Bruce Williams, 2005)

1. Determine the Y; that is, what specifically needs to be improved, or which
characteristics or outputs of the process need to be improved.
2. Identify the associated processes and their physical locations.
3. Determine the baseline performance for each Y chosen.
4. Identify the cost and impact of the problem.
5. Write the problem statement.
6. Identify candidates for the project team.
7. Obtain approvals and launch.

If there are more than two Ys (output variables), it is likely that the project is too
large in scope. The most logical step would be to break down the project into two
or more projects. To understand the performance of Y, you have to have a better
understanding of the process steps that lead to Y. A high level process map has to
be used here to show the full scope of the process.

The following illustration shows the selection of a process output for improvement.

Two things should be kept in mind while selecting a process. One, it should
recognize those particular performance parameters which will help the company
make a financial gain. Two, it should aim to affect customer satisfaction positively.

A process can be measured on criteria such as defects per million
opportunities (DPMO), cost savings, the capacity of the process, or the time taken to
produce a unit. It is a cross-functional approach and is totally focused on the outcome.
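As an illustration of the defects-per-million-opportunities metric mentioned above, the standard DPMO formula divides the number of defects by the total number of defect opportunities (units inspected times opportunities per unit) and scales to one million. The Python sketch below uses hypothetical inspection figures, and the sigma-level conversion applies the conventional 1.5-sigma shift.

```python
from statistics import NormalDist

# Hypothetical inspection data.
defects = 27
units = 1500
opportunities_per_unit = 4

# DPMO = defects / (units x opportunities per unit) x 1,000,000
dpmo = defects / (units * opportunities_per_unit) * 1_000_000
print(f"DPMO = {dpmo:.0f}")  # 27 / 6000 * 1,000,000 = 4500

# Approximate sigma level, using the conventional 1.5-sigma shift.
sigma_level = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5
print(f"Approximate sigma level = {sigma_level:.2f}")
```
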

High Level Process Map (Macro Process Map)

A process map is an illustration of the flow of the process, showing the sequence of
tasks with the aid of flowcharting symbols. A process map may illustrate a small
part of the operation or the complete operation. It consists of a series of actions
which change some given inputs into the previously defined outputs.

The steps in drawing a process map (Galloway, 1994):

1. Select a process to be mapped
2. Define the process
3. Map the primary process
4. Map alternative paths
5. Map inspection points
6. Use the map to improve the process

A high level or macro process map is an illustration of the flow of the process. A
macro process map is a flowchart that illustrates only the major steps or operations
of the process.

Macro process maps increase the visibility of any process. This improves
communication. It is used before drawing a more detailed flowchart of a process.

Example: The following macro process map shows the main steps in taking a call
by a customer care executive in a BPO.

The problem area is that the AHT (Average Handling Time) of a customer care
executive (CCE) is more than the time limit specified by the head of operations,
which reflects on the profitability and effectiveness of the contact center. Problems
arise when the customer gets adamant, expresses dissatisfaction with the answer
provided by the CCE, and insists on further information. This increases the handling
time. The customer may also start abusing the CCE and even disconnect the call.
This leads to problems. At times, the CCE provides alternative solutions or escalates
the call to the team leader or manager, which also contributes to an increase in handling
time.

Therefore, the Y or output needing improvement here is the AHT.

Pareto Charts

A Pareto chart is a specialized vertical bar graph that exhibits data collected in such
a way that important points necessary for the process under improvement can be
demarcated. It exhibits the comparative significance of all the data. It is used to
focus on the largest improvement opportunity by emphasizing the "crucial few"
problems as opposed to the many others.

The Pareto chart is based on the Pareto principle. The Pareto principle has to be
understood before getting to know the Pareto chart. The Pareto principle was
proposed by management thinker Joseph M. Juran. It was named after the Italian
economist Vilfredo Pareto, who observed that 80% of the wealth in Italy was owned
by 20% of the people.

This principle can be applied to work related to business:

“80% of the business defects are caused by only 20% of the errors”

“80% of your results are produced from 20% of your efforts”

“80% of the profit to a company is earned by 20% of the customers”

“80% of the complaints to a business are caused by 20% of the products or


services”

The Pareto chart is a bar graph and is used to graphically summarize and display
the relative importance of the differences between groups of data. It is useful for
non-numeric data.

This principle is applied to business operations because it is assumed that a large
percentage (80%) of the problems is caused by a small percentage (20%) of the
causes. So the Pareto chart helps by narrowing down the very few areas of concern
while analyzing the process.

Take the example of a multi-national company dealing in the home delivery of
pizzas that wants to identify the problem areas in delivering the pizzas. The data
collected is displayed in the following table:

The next step in preparing a Pareto chart is to calculate the cumulative percentages
of the data supplied above. The following table can be derived from the data given
above.
Finally a line graph can be prepared to see what the main problems are. The
following line graph is drawn from the preceding table data using Ms Excel and
plotted with the cumulative percentage against the complaints of the customers.
The X axis is plotted as complaints of the customers and the Y axis as the
cumulative percentage.

All the problems that fall to the left of the 80% line are the few problems accounting
for most of the complaints. They are:

1. Not hot
2. Late delivery
3. No extras
4. Wrong Billing
5. Wrong Pizza
6. Lesser ingredient
7. No delivery in a particular area

These account for 80% of the problems encountered in the home delivery of the
pizza. If these are immediately taken care of, then 80% of the problems can be
solved.

In this way, Pareto analysis helps in determining which problems to concentrate
your efforts on.
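The cumulative-percentage calculation behind a Pareto chart can also be reproduced directly. The Python sketch below uses hypothetical complaint counts (the actual figures from the table above are not reproduced here); categories are sorted by frequency and the running cumulative percentage shows where the 80% line falls.

```python
# Hypothetical complaint counts for the pizza-delivery example.
complaints = {
    "Not hot": 45,
    "Late delivery": 38,
    "No extras": 25,
    "Wrong billing": 20,
    "Wrong pizza": 16,
    "Lesser ingredient": 12,
    "No delivery in a particular area": 10,
    "Rude delivery staff": 6,
    "Damaged box": 4,
    "Other": 3,
}

total = sum(complaints.values())
cumulative = 0
print(f"{'Complaint':35}{'Count':>7}{'Cum %':>9}")
# Sort by count, largest first, and accumulate the percentage of the total.
for category, count in sorted(complaints.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{category:35}{count:>7}{100 * cumulative / total:>8.1f}%")
```

The categories whose cumulative percentage stays at or below roughly 80% are the "vital few" to attack first.
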

Work Breakdown Structure (WBS)

The Work Breakdown Structure (WBS) is defined as “a process for defining
the final and intermediate products of a project and their relationships.” (Ruskin and
Estes, 1994)

While defining a project, it becomes necessary to break down the project tasks into
several hierarchical components. The WBS shows the breakdown of the deliverables
and tasks that are necessary to accomplish a particular project. It records the scope
of the project, and it pinpoints all the aspects of a project or process, right down to the
work package level. The WBS is a variation of the tree diagram and is constructed in
the same way as a tree diagram.

The WBS can be represented by a tree diagram:


Note: See also Chapter 3, Figure 8: Tree Diagram for example of Work Breakdown
Structure

4.1 Goals and Metrics

B. Goals and Metrics

In Six Sigma, defining the problem alone is not enough; it is necessary to define the
magnitude of the problem (defect level) in measurable terms (for e.g., cycle time,
quality, cost, etc.). In other words, the process metrics (CTQ, CTC, CTS, etc.) have to
be established. What are the operational definitions of these metrics? Will the same
metrics be used after the completion of the project to measure its success? These
questions have to be addressed.

Six Sigma sets its sights on delivering products and services with a zero-defect rate.
However, the main concern of a commercial organization always remains
maximum profit. Therefore, a point to be kept in mind while selecting Six Sigma
projects is that they should yield some financial benefit, either by reducing cost
(by eliminating rework, scrap, inefficiencies, excess inventory etc.), through
growth in revenue, or by achieving some strategic goal. So the Six Sigma leaders need
to define the project in numerical terms or metrics. These metrics are very
important as they help in determining the most suitable Six Sigma project for the
organization.

Rose (1995) has listed a number of attributes of good metrics:

 They are customer centered and focused on indicators that provide value to the
customers, such as product quality, service dependability and timeliness of
delivery, or are associated with internal work processes that address system cost
reduction, waste reduction, coordination and teamwork, innovation, and customer
satisfaction.
 They measure performance across time, which shows trends rather than
snapshots.
 They provide direct information at the level at which they are applied.
 They are linked with the organization’s mission, strategies, and actions. They
contribute to organizational direction and control.
 They are collaboratively developed by teams of people who provide, collect,
process, and use data.
1. CTx Parameters - CTC, CTQ, CTS

In Six Sigma, the process metrics can be categorized into three parts. These are
cost, quality and schedule. They are also referred to as “critical to” characteristics
(CTx) as they play a crucial role in the success of any enterprise. These
characteristics are used to decide which project to focus on: should the focus be on
quality, cost or schedule projects?

These metrics express the issues and provide the motivation for greater customer
satisfaction. They may also ascertain the products and services which are being
offered or will be offered by a business process.

A CTC or Critical to Cost characteristic has a major influence on the cost of
producing a product or service. It involves increasing production capacity without
increasing resources, and is used to reduce production overhead and to decrease
warranty claims.

A CTS or Critical to Schedule characteristic has a major influence on the capacity to
deliver the product or the service being provided on time.

A CTQ or Critical to Quality characteristic has a major influence on the suitability for
use of the product produced by the process. A Critical to Quality characteristic aligns
improvement efforts with the customer needs. In simpler words, a CTQ is what the
customer expects of a product.

While working out the metrics, the following should not be included:

 Metrics for which adequate or accurate data cannot be collected.
 Metrics that are complicated and cannot be explained easily to others.
 Metrics which make the employees work not towards the best interest of the
company but towards fulfilling their own targets.

2. CTx Flow-down Model (Big Ys, Little Ys)

The metrics used in Six Sigma are considered as Big Ys and Little Ys.
Big Ys are the high-level results that a Six Sigma project aims to achieve. These are
the projects which the Six Sigma leaders will execute. The Little Ys are the smaller
units of the chosen project, which are implemented by the Green or Black Belts
under the aegis of the Six Sigma leaders.

The Big Y is to be associated with the critical requirements of the customer. The Big
Y is used to create Little Ys, which in turn drive the Big Y.

For instance, in any service industry, the overall customer satisfaction is the Big Y,
and some of the elements of achieving customer satisfaction are quality checks,
on-time delivery and after-sales service (Little Ys). The Big Ys cascade through all the
levels of an organization: the business, the operations or the process level.
The Little Ys at the current level become the Big Ys at the subsequent level. The Big
Ys are measures of the result, while the Little Ys evaluate the cause-and-effect
relationships between the different units of a project and measure the
process. It is very important to keep a check on the Little Ys to achieve a better Big
Y.

C. Problem Statement

The problem definition is also characterized by:

1. developing a problem statement,


2. showing the current performance level, called the baseline level, and
3. predicting the desired level, known as the expected level or goal.

The baseline level is utilized when estimating the potential financial benefits while
targeting a level of improvement.

The problem statement states the issue the project team intends to improve and
reflects a clear understanding of the problem. It explicitly explains what needs to
improve, why the project is needed at that particular time, where the problem is
occurring, how serious the problem is and the extent of its damage, and the
financial implications of the problem. It explains the need for implementing the
Six Sigma effort.

The project team selected will work out a plan to break the large project into
smaller projects and pass them on to the various teams. The problem statement is
documented by the senior leaders of the Six Sigma team.
Key Elements of a Good Problem Statement

1. How long the problem has existed.
2. The description of the problem and the metric used.
3. The process in which the problem is occurring, with its name and location.
4. The description of its effect on the business.
5. The current position (baseline) and the desired position of the problem. The
baseline measures key input variables, key process variables, and key output
variables.

The following is an example of a problem statement:

Recruiting time by Human Resources for customer care executives, team leaders,
and resolution specialists for the Alabama process of the Customer Care Division is
behind the goal of 30 days 90% of the time. The average time to fill a request has
been 70 days in the human resource employee recruitment process over the last 10
months. This is adding costs of $150,000 per month in overtime labor, contract
labor, and rework costs.

D. Project Financials

After the baseline performance is established in the problem statement, and the
goal for improvement is asserted, the financial benefit of achieving this goal can be
ascertained. This is done by estimating the costs that will be incurred at the
operating level once the Six Sigma effort is employed, and comparing them against
the present costs. The annual benefits are calculated from this difference.

Quality Costs

To ascertain the financial benefit, it is important to be familiar with the concept of
quality costs. Quality cost systems aid senior management in identifying which Six
Sigma projects to apply their efforts to, by selecting opportunities that yield the
greatest return on investment. According to Thomas Pyzdek (1976), the quality
equation states that quality lies in “doing the right things” and “not doing the wrong
things”. Doing the right things means including product or service features that
delight the customers. Not doing the wrong things means avoiding defects and other
behaviors that cause customer dissatisfaction. Quality costs address only the latter
aspect of quality.

Quality is in itself not a cost. But quality is a force that raises profits through lower
costs. Therefore a cost of quality means any cost that is incurred because the
quality of the product or service produced is not perfect. Costs such as rework and
scrap, excess material or cost of reordering to replace defective products are
quality costs. Quality cost is also known as cost of poor quality.

The quality costs are a total of the following costs:

1. Prevention Costs: These are costs that are incurred when undertaking activities
to prevent poor quality. For example, quality improvement team meetings, quality
planning, product review etc.

2. Appraisal costs: These are the costs associated with measuring, auditing, and
evaluating products or services to test adherence to quality standards or
performance requirements. For example, inspection tests for purchased stock, in
process and final inspection, product or process audits etc.

3. Failure Costs: These are the costs that are incurred from products and services
that do not adhere to the required standards or fail to meet customer needs. These
are classified into:

a. Internal Failure Costs which are rework, scrap, reinspection etc.


b. External failure costs which are warranty charges, costs of processing customer
complaints, product replacements etc.

Poor quality or high quality costs bring down the revenue of the company through
higher expenses and poor customer satisfaction. Therefore the goal of any
company should be to lower the quality costs to the lowest possible level.
According to Juran and Gryna (1998), the cost of failure declines as conformance
quality levels improve towards perfection, while the cost of appraisal and prevention
increases. The total of the failure costs and the appraisal and prevention costs
determines the level of quality costs.

Quality cost management should be included in the charter of senior level
management teams. The way to control quality costs is to find out the actual causes
behind quality lags, and then eliminate those causes. Quality cost management
helps the enterprise decide which corrective project or action to choose and which
problems to focus on in the Six Sigma project.

E. Project Scheduling

An organization may undertake many Six Sigma projects at a time, with limited
resources available to them. This makes it mandatory for the organization to
schedule the projects in order of importance to get the best possible results out of
them.

A project schedule is a graphical representation of the predicted tasks, goals,
dependencies, resource requirements, task duration, and deadlines. It is a detailed
schedule which contains information about the tasks to be performed, the people
responsible for the tasks, start and end dates for the tasks and the expected
completion time-span for the tasks. This helps to check the accuracy of the
planning process.

A project schedule should include the following:

Type of schedule

This is associated with the difficulty in implementation of the project. A PERT chart
is used for large and complex projects, where there are a number of
interconnected tasks. The PERT chart, also known as an activity network diagram,
represents the interdependencies and associations between tasks. This helps in
planning.

For smaller projects, a Gantt chart may be used. A Gantt chart is a two-dimensional
tool (it shows the tasks against a time frame) and does not represent the relationships
between the tasks.

The specific quantifiable milestones

The completion of a key task in any project is called the milestone of the project. An
important deliverable of any phase can also be termed as a milestone.

Every project has its own unique milestones. Some could be approval of the
requirements, approval of the trial product, approval of the design, packaging of
the product for shipment, shipping of the product manufactured to the customer,
and so on.

Milestones appear at the end of a work breakdown structure. The success of any
task can be measured by the milestones achieved.

Estimating the task duration

Estimating the task duration is a very important part of project scheduling. It plays a
vital role in estimation of costs in the later stages.

Task estimation is done to stabilize customer relations and uphold the team
morale. As task estimation is affected by staffing and costing activities, it is
continually done during the planning process.

Setting a realistic time frame for any task is very important during project
scheduling because task durations are frequently underestimated, and
underestimation leads to a rush to complete the task.

Setting a time frame is a complex process because it involves dealing with a
number of variables which crop up during the project implementation phase.
Variables like the current availability of the required staff, the degree of skill and
efficiency of the people involved, and misinterpretations and slip-ups during the
project should also be accounted for.

A method of setting a realistic time frame is to study a similar project done
previously by the estimator. Though each project is unique in its own way, historic
records help to approximately estimate both time and cost of the project
undertaken to a great extent.

In the absence of historic records, expert advice should be sought. In addition,
procuring task estimates from different sources can be helpful.

Defining the priorities

The task priorities should be clearly defined to avoid unnecessary conflicts during
the execution of the project.

Define the critical path

The critical path of any project is the longest path taken from the start to the end of
the project. The project’s earliest possible completion time is calculated by working
out the critical path.
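The calculation behind this can be illustrated with a few lines of code. The sketch below is a minimal Python example; the task names, durations and dependencies are hypothetical, and the earliest completion time is found by working forward through the predecessors of each task.

# Critical path sketch: tasks, durations (in days) and predecessors are hypothetical
tasks = {
    "A": (3, []),          # define requirements
    "B": (5, ["A"]),       # design
    "C": (2, ["A"]),       # order materials
    "D": (4, ["B", "C"]),  # build
    "E": (1, ["D"]),       # inspect and ship
}

earliest_finish = {}

def finish(task):
    # Earliest finish = task duration + latest earliest-finish among its predecessors
    if task not in earliest_finish:
        duration, predecessors = tasks[task]
        earliest_finish[task] = duration + max((finish(p) for p in predecessors), default=0)
    return earliest_finish[task]

project_duration = max(finish(t) for t in tasks)
print("Earliest possible completion time:", project_duration, "days")  # A-B-D-E: 3+5+4+1 = 13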

Documenting the assumptions

This is important in subsequent stages in project scheduling. Without proper
documentation, the changes which will occur in the schedule later will be
complicated and uncertain.
Identifying and analyzing risks

As the resources are limited, anyone scheduling the process will have to deal with
the risks involved. In any good schedule, the scheduler has to make allowances for
forthcoming or expected risks. The following can be done to remedy these risks:

 In cases where major risks are involved, an extra work breakdown structure for
the handling of the risk should be included, and some funds should be allocated to
deal with the risks.
 Extra time should be allocated to the tasks in which risks are inherent.

Reviewing the results

Project scheduling entails contributions by a number of people. After an initial draft
is ready, an appraisal should be done. It can be done by the team members who are
scheduled to do the work. They should be consulted regarding their level of
understanding of the work assigned and the time needed to complete the work.

Thus project definition involves identifying the area of improvement, identifying the
processes that are creating the problem, determining the current level of
performance and how much the problem is costing the company, and lastly
determining what the financial benefit of undertaking the project will be.

CHAPTER 5
5 The Measure Phase

A. Process Analysis and Documentation

In the Define step, the Y has been established, and the problem statement has
been made by the Black Belt. The subsequent step in the Six Sigma methodology is
to identify the key measures that are required to evaluate success and meet critical
customer requirements. This is possible by determining the important few factors
that determine the performance or behavior of the process. It is initiated by
examining the process to reveal the key process steps and the key inputs for each
step. These key inputs are then prioritized, and the potential effects of this
prioritized list on the CTx characteristics of the process are studied. The potential
ways in which the process could go wrong are estimated.

The objectives of the Measure step can be summarized in the following steps:

1. Identify key metrics that will be required to evaluate performance.


2. Make a data collection plan. See if data are available for these metrics. What is
the quality of these data? If data are not available, where should they be obtained
from?
3. After the data is found and validated, analyze the data.
a. Look for trends or patterns in the historical data when historical data is observed
for a certain period. Find out the causes of these patterns through run charts,
control charts etc.
b. Do things look better or worse, and why?
c. What is the historical central tendency? (mean, median, mode)
d. What is the historical variability? (interquartile range, standard deviation)
e. What is the historical shape or distribution? (histograms, box plots, stem and leaf
plots etc.)
f. Display the variation in the process. Observe and track relationships, if present,
between the variables. (To study continuous data, scatter plots and correlation
analysis can be used.)
4. Establish the current process capability, improvement and goal.

1. Tools

Process improvement is possible by analyzing the information gathered from the
data collected from the actual process. The Measure phase involves the use of
statistical and graphical tools to aid in analysis of the data.

The following is a description of some of the tools which help in process analysis.
These graphical tools serve as a good way to gain insight into the internal
workings and external effects of a process.

a. Flowcharts

A flow chart is a diagrammatic representation of the nature and the flow of work in
a process. The elements that can be included are: the sequence of actions, inputs and
outputs entering or leaving the process, decisions that must be made, people who
become involved in the process, time durations at each step, and so on. A flow chart,
also known as a flow diagram, has numerous benefits:

 A flow chart helps in explaining to people how the process works.
 A flow chart can help in the training of newly appointed employees according to the
standardized procedures of the organization.
 Problem areas are easy to identify because all the process steps are
diagrammatically represented. This also helps in simplifying and refining the process.

A flow chart uses symbols. Each symbol has a specific meaning. These symbols are
connected by arrows which indicate the flow of the process. The symbols are
described below:

Oval - indicates the start and end point of the process. They usually contain the
words START and STOP, or END.

Rectangular box - represents a process or an activity in the process.

Parallelogram - represents the input or the output of the process.

Diamond - represents a decision point in a flow chart. It has two arrows coming out
of it, corresponding to Yes and No, or True and False.

Circle - represents a place marker (connector). It is used when the flowchart has to
continue on another line or page. The symbol is numbered and placed at the end of
the line or the page.

On the next line or page, this symbol is used with the same number so that a reader
of the chart can follow the path.

Example: The following flowchart shows the acquisition and implementation of a
new process in a BPO.
b. Process Maps

A process map is an illustration of the flow of the process, showing the sequence of
tasks with the aid of flowcharting symbols. Process maps give a picture of how work
flows through the company. A process map may illustrate a small part of the
operation or the complete operation. It consists of a series of actions which change
some given inputs into the previously defined outputs.

The steps in drawing a process map (Galloway, 1994):

1. Select a process to be mapped


2. Define the process
3. Map the primary process
4. Map alternative paths
5. Map inspection points
6. Use the map to improve the process

Reducing Cycle Time through Process Mapping


For achieving a reduction in the cycle time, a cross-functional process map needs to
be developed. This means that a team of individuals from every division is selected,
and they in turn map each step of the product development process from beginning
to end. Two kinds of maps are developed: a map of the current functioning of the
process, and a map of the expected (target) process.

The first process map helps to identify the problems in the current system, and to
improve the current system. The expected process map explains each step of the
target process in detail.

During the mapping session a list of actions is also created. This list defines in detail
the changes required to change the process from the current map to the expected
map.

(For examples of process maps, see Chapter 4: Six Sigma, Define.)

Reducing Cycle Time through Value Added Flow Charts

A value added flow chart is a method to improve cycle times, and eventually
productivity, by visually separating the value-adding steps from the non-value-adding
steps in a process. It is a very simple yet effective technique. The steps are described
below, followed by a small calculation sketch.

1. List all the steps involved in the particular process. To do this, draw a diagram
box for every step from start to the end.

2. Determine the time currently required for the completion of every step of the
process. Include this time to each box. (See Figure 24.a)

3. Determine the total cycle time by adding the time taken by each step.

4. Some of the steps listed above are those which do not add any value to the
process. Such steps are inspecting, checking, revising, stocking, transporting,
delivering etc.

5. Shift such boxes (as explained above) to the right of the flow chart. (See Figure
24.b)

6. Determine the total non-value added time by adding the time taken by each non
value adding step.

7. Some of the steps listed above are those which add value to the product. Such
steps are assembling, painting, stamping etc.

8. Shift such boxes (as explained above) to the left of the flow chart. (See Figure
24.b)

9. Determine the total value added time by adding the time taken by each value
adding step.
10. Construct a pie chart to display the percentage time taken by non-value adding
steps. (See Figure 24.c)

11. Using benchmarking and analysis, decide the target process configuration.

12. Pictorially represent the target process and calculate the total target cycle time
(See Figure 24.d)

13. Explore the non value adding steps and identify the procedures which could be
trimmed down or can be done away with to save time.

14. Explore the value adding steps and identify the procedures which could be
improved to reduce the cycle time.

15. Make a flow chart of the enriched process. Keep looking for further loopholes in
the process till the target is achieved.
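The arithmetic in steps 3, 6 and 9 is simply a sum of step times by category. The following is a minimal Python sketch; the step names, times (in minutes) and value-adding flags are hypothetical and stand in for a real value added flow chart.

# Value-added flow analysis sketch (step names and times are hypothetical, in minutes)
steps = [
    ("Assemble base", 12, True),    # True  = value-adding step
    ("Wait for oven", 25, False),   # False = non-value-adding step
    ("Bake", 15, True),
    ("Inspect", 5, False),
    ("Pack", 4, True),
    ("Transport to dispatch", 9, False),
]

total = sum(time for _, time, _ in steps)
value_added = sum(time for _, time, adds_value in steps if adds_value)
non_value_added = total - value_added

print("Total cycle time     :", total, "min")
print("Value-added time     :", value_added, "min")
print("Non-value-added time :", non_value_added, "min",
      f"({100 * non_value_added / total:.0f}% of the cycle)")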
5.1 SIPOC, Box Whisker Plots, Cause and Effect Diagrams and Check Sheets
c. SIPOC

SIPOC stands for Suppliers-Inputs-Process-Outputs-Customers. SIPOC is a tool used
to identify all elements of a process before the beginning of a project. It is a high
level flowchart of a process and lists all suppliers, inputs, outputs and customers. It
provides a quick bird’s eye view of the elements of a process. A SIPOC diagram is
mostly employed in the Measure phase of the DMAIC methodology.

SIPOC is a tool that helps to define the specific portion of the overall business
process that is targeted for improvement. It is a method of applying mapping to
sub processes until arriving at that part of the process allocated for improvement. It
is often used at the start of the project, when the key elements of the project have
to be defined. It is also used when information is not clear about what the process
inputs and outputs are, and who the suppliers and customers are.

Steps in creating a SIPOC diagram

1. Set up a working surface that will enable the team to post additions to the SIPOC
diagram. This can be a flipchart, sticky notes posted to a wall or a transparency.
2. Identify the process. Create a macro process map; map it into four or five high
level steps.
3. Identify all outputs of the process. Attach them on the working surface.
4. Identify the customers who receive these outputs. Record them separately.
5. Identify the inputs that the process needs for it to function properly. Again attach
these separately.
6. Identify the input’s suppliers. Again record them separately.
7. Review the work to edit omissions, unclear phrases, duplications etc.
8. Draw a complete SIPOC diagram.

d. Box Whisker Plots

A box and whisker plot is a graph that summarizes the most important statistical
characteristics of a frequency distribution for easy understanding and comparison.
(Nancy R. Tague, 2005) A box and whisker plot, also known simply as a box plot,
looks like a box representing the central mass of the variation, with thin lines,
called whiskers, extending out on either side to represent the spread of the
distribution. The box plot is simple to construct but displays a good amount of
information; therefore it is a potent tool.
A box plot is used when analyzing the most important information about a batch of
data or when two or more sets of data have to be compared. It can also be used
when data on some other graph, like a control chart, have to be summarized.

Steps in creating a box and whisker plot (Craig Gygi, Neil DeCarlo, Bruce Williams,
2005); a short calculation sketch follows the steps:

1. Rank the captured set of data measurements for the characteristic. Reorder the
captured data from the least to the greatest values. Refer to the numbers as X1, ..., Xn.

2. Find the median of the data. The median is the observation value in the ordered
data where half the values are larger and half are smaller. If the number of
observations (n) in the data is odd, the median will be the (n+1)/2 th value:
Median = X(n+1)/2. If the number of observations (n) is even, the median is the
average of the two middle values, i.e. the n/2 th and (n/2 + 1) th values.

3. Find the first quartile Q1. The first quartile is the point in the ranked ordered
sequence where 25% of the observations fall below this value.

4. Find the third quartile Q3. This is the point in the ranked ordered sequence
where 75% of the observations fall below this value.

5. Find the greatest observation Xmax.

6. Find the lowest observation Xmin.

7. Create a horizontal line, representing the scale of measure for the characteristic.
This scale could be minutes for time, number of defects in an inspected part,
centimeters for length etc.

8. Construct the box. Draw a box spanning the first quartile Q1 to the third quartile
Q3. Draw a vertical line in the box corresponding to the calculated median value.

9. Draw the whiskers. Draw two horizontal lines, one stretching out from Q1 to the
smallest observation Xmin, and another extending from Q3 to the biggest
observation Xmax.
10. Repeat Steps 1 through 9 for each additional characteristic to be plotted and
compared against the same horizontal scale.
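The quartile arithmetic in steps 2 to 6 can be checked with a few lines of code. The following is a minimal Python sketch using an illustrative (hypothetical) data set; the statistics module of the standard library supplies the quartiles, and different quartile conventions may give slightly different Q1 and Q3 values.

import statistics

# Illustrative data set (hypothetical delivery times in minutes)
data = sorted([32, 41, 29, 35, 38, 45, 30, 33, 36, 50, 28, 39])

q1, median, q3 = statistics.quantiles(data, n=4)  # three cut points = Q1, median, Q3
print("Min   :", data[0])
print("Q1    :", q1)
print("Median:", median)
print("Q3    :", q3)
print("Max   :", data[-1])
# The box spans Q1 to Q3 with a line at the median; the whiskers run
# from Q1 down to the minimum and from Q3 up to the maximum.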

Box and whisker plots are used to compare two or more distributions. These may
be before-and-after views of a process. For example, to find out whether two or more
variation distributions are the same or different, a box plot can be created. From
figure 26.a it can be seen that distribution Q has the lowest level, but it still overlaps
the performance of distribution P, meaning that it may not be that different.
Distribution R, however, does not overlap with P and Q and has a much higher value.
It also has a much broader spread to its variation.
e. Cause and Effect Diagrams

Any process improvement initiative entails fighting the causes of variation. There
can be a huge number of causes for variations to a given problem. Dr. Ishikawa is
credited for creating the cause and effect diagram, which is a tool that is used for
brainstorming possible causes of a problem in a graphical (tree structured) format.

It is also known as the Fishbone diagram and the Ishikawa diagram. This
technique is called a fishbone diagram because it resembles the skeleton of a fish.
A fishbone diagram helps in getting to the root cause of the problem. It consists of
a fish head at one end of the diagram, which states the problem. Besides this fish
head, there is a fish spine, and there are bones attached to the spine. The bones
attached to the spine state the reasons which are causing the problem.

The fishbone diagram is employed for problem-solving by the team members. It is
used to collect all the inputs (which are causing the problem) and to present them
in a graphical manner. The advantage of using a fishbone diagram is that besides
detecting the problem, it helps the team to focus on why the problem occurs.

Procedure of implementing a fishbone diagram:

Define the problem: List down the exact problem, in detail. It should be stated in a
box, called the fish head. After stating the problem, draw a horizontal line (the spine)
leading out from the box.

Brainstorm the causes: Attach slanting lines, called the bones of the fish, to the fish
spine. These bones state the causes because of which the problem occurred.
Write down as many possible causes as could be involved. The major categories
typically involved are:

 The 4 M’s: Methods, Machines, Materials, Manpower


 The 4 P’s: Place, Procedure, People, Policies
 The 4 S’s: Surroundings, Suppliers, Systems, Skills

Further brainstorm the ‘brainstormed’ causes: Sketch out smaller lines
coming out of the larger bones, which depict the possible causes within each
category that may be affecting the problem. This helps break a complex problem
down into smaller problems. Repeat this step until a problem can no longer be
broken down into sub-problems.

Analyze the fishbone diagram: Finally, analyze the diagram and draw out results
by identifying the root cause.

Example: Suppose the MNC dealing in the home delivery of pizzas wants to find
out the various causes that are leading to a fall in their customer base. They depict
the problem graphically, by putting the problem and causes under the fishbone
diagram.

The following is the fishbone diagram, tailored to the “pizza home delivery”
example:
In the fish head, the pizza problem has been defined. The main causes leading to
the problem are shown on the fish bones. The causes are then further sub-divided
into more specific problems; for example, one of the causes, “pizza not delivered in
time”, has been further categorized into sub-causes, which give the reasons why
the pizza could not be delivered in time. The reason could be any one of these: traffic
congestion, the scooter’s tire was punctured, or the pizza delivery boy could not
locate the address easily.

f. Check Sheets

A check sheet is the most common tool for collecting data. A check sheet is a
structured form consisting of a list of items for collecting and analyzing data. It
helps display the frequency of the data. It contains pre-recorded descriptions of
events that are likely to occur. A well thought out check sheet answers questions
like: “Is everything done?” “How often does the problem occur?” “Have all
inspections been performed?”

Check sheets are tremendously useful for solving a problem and for process-
improvement. Data collected in a check sheet can be used as inputs for other tools
such as Pareto diagrams and histograms. They can be in the form of:
 process check sheets, where ranges of measurement values are written and actual
observations are marked
 defect check sheets, where defects are described and frequencies are recorded
 defect location check sheets, which are actual diagrams that show where the
problem occurs
 cause and effect check sheets, in which the problem area is shown by marking that
area on the cause and effect diagram

Steps in creating a check sheet

1. Identify the problem to be observed.


2. Decide when the data will be collected and for how long.
3. Design the form so that the data recorded does not have to be rewritten for
analysis. For example, data can be recorded by simply making check marks or
similar symbols against the fields.
4. Label all spaces on the form.

Each time the targeted event takes place, record it on the check sheet.

Example 1: The following table represents a defect check sheet in the delivery
process of a pizza manufacturing chain.

Example 2: The following figure shows a check sheet used by HR to collect data on
causes of increasing attrition rates in a BPO.
It is clear from the data collected that slow growth and high stress levels
contributed to high attrition levels in one month. This data can be used for further
analysis.
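Tallying the marks on a check sheet is straightforward to automate once the observations are recorded. The following is a minimal Python sketch; the attrition-cause observations are hypothetical and merely illustrate turning raw check marks into frequencies.

from collections import Counter

# Hypothetical observations recorded on a check sheet over one month
observations = [
    "slow growth", "high stress", "slow growth", "low pay", "high stress",
    "slow growth", "night shifts", "high stress", "slow growth", "low pay",
]

tally = Counter(observations)
for cause, frequency in tally.most_common():
    print(f"{cause:<15}{'|' * frequency}  ({frequency})")  # tally marks plus the count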

h. Stem and Leaf Plots

Stem and leaf plots are a quick way to examine the shape and spread of data. A
stem and leaf plot is a graphical method of displaying data; it is a type of histogram
that displays the individual data values.

Example: The following data shows the weights of male football players in the
National Football League, 2005.
143, 145, 149, 158, 159, 164, 167, 168, 167, 178, 170, 172, 178, 174, 180, 185, 194,
193, 192, 200, 209, 205, 203, 204, 206, 218, 215, 225, 228, 229, 226

The following table shows a stem and leaf display of the data. To draw a stem and
leaf display, first note that the data range from the 140s to the 220s, counting by tens.
Note down a column of stems, starting with 14 (representing the weights in the 140s)
and ending with 22 (representing the weights in the 220s). Draw a vertical line
separating the stems from the leaves. The leaves represent the units digits
(multiples of 1).

The next step is to enter all raw scores into the stem and leaf display. A weight of
145 would be recorded by placing a 5 against the stem 14; a weight of 205 would be
recorded by placing a 5 against the stem 20. Continue this process until all the data
are plotted. The resulting display has the same shape as a histogram plotted from
the same data.
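A stem and leaf display of this kind can be generated directly from the raw weights. The following is a minimal Python sketch using the weight data listed above; the stems are the tens values and the leaves are the units digits.

# Stem and leaf sketch for the player weights quoted above
weights = [143, 145, 149, 158, 159, 164, 167, 168, 167, 178, 170, 172, 178, 174,
           180, 185, 194, 193, 192, 200, 209, 205, 203, 204, 206, 218, 215, 225,
           228, 229, 226]

stems = {}
for w in sorted(weights):
    stems.setdefault(w // 10, []).append(w % 10)  # stem = tens, leaf = units digit

for stem in range(min(stems), max(stems) + 1):
    leaves = "".join(str(leaf) for leaf in stems.get(stem, []))
    print(f"{stem:>3} | {leaves}")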

5.2 Statistics and Probability and Collecting and Summarizing Data

B. Statistics and Probability

The Measure phase uses statistics to aid in analysis.

1. Drawing Valid Statistical Conclusions

Enumerative/Descriptive Statistics is the branch of statistics that focuses on
collecting, summarizing, and presenting a static set of data. It is the analysis of the
numerical data to provide information about the specific data that is being
analyzed.

For example, (1) the mean of the three values 3, 5 and 8 is 5.33

(2) The employee benefits like travel allowances, sick leave, healthcare costs and so
on, used by the employees of any organization in any given fiscal year.

(3) A study of customer call handling time in a BPO in a particular process in a given
month. Conclusions can be made about the average handling time of a sample of
selected customers. Questions like why processing time varies for every customer,
or are different processes facing the same problem are not addressed in an
enumerative study.

Analytical/Inferential Statistics are used to draw inferences about a broader
population based on the sample data. A proper analytical study is based on a
sufficient sample size (the sample should be neither too large nor too small) and
proper sampling methods, to give confidence that the selected sample is
representative of the population under study.

For example, children of ages 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 play certain
computer games. Analytical studies would infer that the average age of all children
who play that game is 11.

Population Parameter

To know about a population parameter, first the meaning of population is to be
understood. A population is the whole group of units, items, people or services
under a particular study, for a fixed period of time and a fixed location.

A population parameter is a statistical measure of a population. The value of a
parameter, such as the population mean or standard deviation, is estimated from a
sample drawn from the population. The exact value of a parameter is never known
with certainty.

A sample statistic is a statistical property of a set of data, such as the mean or
standard deviation of the sample. The value of the statistic is known with certainty
because it is calculated using all the items of the data set.

The difference between enumerative and analytical studies

According to Deming (1975),

(1) An enumerative study is defined as a study in which action will be taken on the
universe. “Universe” is defined as the entire group of people, items, or services
under a particular study. Sampling a selected lot to determine the nature of defects
of the entire lot is a case of an enumerative study. Enumerative studies draw
conclusions about the universe actually studied. The aim of this study is the
estimation of parameters. This is the deductive approach. It involves counting
techniques for huge numbers of possible outcomes.

(2) An analytic study is defined as a study in which action will be taken on a process
to improve performance in the future. In this study, the focus is on a process and
ways of improving it. Thus analytic studies direct their efforts at a universe which is
yet to be produced, predicting a universe of the future. This method is the inductive
method; it provides information for inductive reasoning. Analytical methods make
use of tools like control charts, run charts, histograms, stem and leaf plots etc.

In an enumerative study, the environment for study is static, whereas in an
analytical study the environment is dynamic.

Another difference between these studies is that enumerative statistics proceed
from predetermined hypotheses, whereas analytic studies aim to help the analyst
produce new hypotheses. Analytical studies involve using data to develop possible
explanations and new theories for quality improvement.

2. Sampling Distributions

Most Six Sigma projects involving enumerative studies deal with samples, and not
populations. Some common formulae that are of interest to Six Sigma are given
below.

1. The empirical distribution assigns the probability 1/n to each Xi in the sample.
Thus the mean of the distribution is

X̄ = (X1 + X2 + ... + Xn) / n

X̄ is called the sample mean, since the empirical distribution is determined by a
sample.

2. The variance of the empirical distribution is given by the following equation:

s² = Σ (Xi − X̄)² / (n − 1)

The above equation is called the sample variance.

3. The unbiased sample standard deviation is given by the following equation:

s = √[ Σ (Xi − X̄)² / (n − 1) ]

4. Another sampling statistic is the standard deviation of the average, also called
the standard error (SE). This is given by the following formula:

SE = s / √n

It is evident from the above formula that the standard error of the mean is
inversely proportional to the square root of the sample size. This relationship is
shown in the graph below:
It is seen that averages of n=4 have a distribution half as variable as the population
from which the samples are drawn.
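These formulas are easy to verify numerically. The following is a minimal Python sketch with a small illustrative sample; it computes the sample mean, the sample variance and standard deviation with the n - 1 divisor, and the standard error of the mean.

import math

sample = [9.8, 10.2, 10.1, 9.9, 10.4, 10.0, 9.7, 10.3]  # illustrative measurements
n = len(sample)

mean = sum(sample) / n
variance = sum((x - mean) ** 2 for x in sample) / (n - 1)  # sample variance
std_dev = math.sqrt(variance)                              # unbiased sample standard deviation
std_error = std_dev / math.sqrt(n)                         # standard error of the mean

print(f"mean = {mean:.3f}, s^2 = {variance:.4f}, s = {std_dev:.4f}, SE = {std_error:.4f}")
# Quadrupling the sample size would halve the standard error (SE is proportional to 1/sqrt(n)).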

3. Central Limit Theorem

The Central Limit Theorem is stated as:

Irrespective of the shape of the distribution of the population or the universe, the
distribution of average values of samples drawn from that universe will tend toward a
normal distribution as the sample size grows large; standardized, this distribution of
averages tends toward the standard normal distribution with mean 0 and standard
deviation 1 as n tends to infinity. (Thomas Pyzdek, 1976)

In other words, the distribution of an average tends to be normal, even when the
distribution from which the average is calculated is definitely not normal. A remarkable
thing about this theorem is that no matter what the shape of the original
distribution is, the sampling distribution of the mean approaches a normal
distribution.

Furthermore, the average of sample averages will have the same average as the
universe, and the standard deviation of the averages will be equal to the standard
deviation of the universe divided by the square root of the sample size.

This is a symmetric distribution; the mean, median and mode are equal.

The central limit theorem has many practical implications. The Central Limit
Theorem provides the basis for many statistical process control tools, like quality
control charts, which are used widely in Six Sigma. By the Central Limit Theorem,
you can use means of small samples to evaluate any process using the normal
distribution.
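The theorem is easy to see in a small simulation. The following is a minimal Python sketch, using only the standard library, that draws repeated samples from a clearly non-normal (uniform) population; the means of the samples cluster around the population mean with a spread close to the population standard deviation divided by the square root of the sample size.

import random, statistics

random.seed(1)
population_mean = 0.5               # mean of the Uniform(0, 1) population
population_sd = (1 / 12) ** 0.5     # standard deviation of Uniform(0, 1), about 0.289
n = 25                              # sample size

sample_means = [statistics.mean(random.random() for _ in range(n)) for _ in range(5000)]

print("mean of sample means:", round(statistics.mean(sample_means), 4))   # close to 0.5
print("sd of sample means  :", round(statistics.stdev(sample_means), 4))  # close to sd / sqrt(n)
print("predicted sd        :", round(population_sd / n ** 0.5, 4))        # about 0.0577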

Application of Inferential Statistics

The statistical methods described in the preceding section are enumerative. In Six
Sigma applications of enumerative statistics, inferences about populations based
on data from samples are made. Statistical inference is concerned with decision
making. For example, sample means and standard deviations can be used to
foretell future performance like long term yields or possible failures.

The techniques of statistical inference are so designed that it is not possible to be
certain about the correctness of a particular decision, but in the long run the
proportion of correct decisions is known in advance. Any estimate that is based
on a sample has some amount of sampling error. There are several types of errors
that may occur in statistical inference:

Type 1 error refers to rejection of a hypothesis when it should not be rejected.

Type 2 error refers to acceptance of a hypothesis when it should be rejected.


(Hypothesis testing will be discussed in the following chapter in detail: Chapter 6-
Black Belt, Analyze)

The sample statistics discussed above (sample mean, sample standard deviation,
and sample variance) are point estimators: single values used to represent
population parameters. An interval about the statistic that has a predetermined
probability of including the true population parameter can also be found. This
interval is called the confidence interval, and its endpoints are the confidence
limits. Confidence intervals can be both one-sided and two-sided.

For example, if the mean income in a sample is $6000, it may be desirable to know
the interval in which the mean income of the parameter probably lies. This is
expressed in terms of confidence limits.
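As a rough illustration, a two-sided confidence interval for a mean can be computed with the normal approximation (about 1.96 standard errors on each side for roughly 95% confidence). The sketch below is a minimal Python example with hypothetical income values; for a sample this small, a t-distribution multiplier would normally be used instead of 1.96.

import math, statistics

incomes = [5800, 6150, 5900, 6300, 6050, 5950, 6200, 6100, 5850, 6000]  # hypothetical sample
n = len(incomes)

mean = statistics.mean(incomes)
se = statistics.stdev(incomes) / math.sqrt(n)   # standard error of the mean
margin = 1.96 * se                              # normal approximation for ~95% confidence

print(f"95% confidence interval for the mean income: "
      f"{mean - margin:.0f} to {mean + margin:.0f}")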

Six Sigma uses analytic statistics most of the time, but sometimes enumerative
statistics prove useful. Analytic methods are used to locate the fundamental
process dynamics and to improve and control the processes involved in Six Sigma.
C. Collecting and Summarizing Data

The data collection plan is built while measuring the process. A process can be
improved by studying the information gathered from data collected from the actual
process. This data collected has to be accurate and relevant to the quality issue
being taken up under the Six Sigma project. Any data collection plan includes:

 A brief overview of the project, along with the problem statement (stating why the
data has to be collected)
 A list of questions, which should be answered by the data collected
 Determining the data type which will be suitable for the data a process is
generating
 Determining how many repeated observations will be enough to show the change
in a chart
 A list of the measures to be taken, once the data has been collected
 The name of the person who will be collecting the data and when

A good data collection plan facilitates the accurate and efficient collection of data.

After the data is collected, it must be figured out what kind of data the particular
process holds. Before measuring the data, it is necessary to know the type of data
you are analyzing so that you can apply an appropriate tool to it.

The following section gives the definitions and classification of data. After studying
the data, it becomes essential to identify opportunities to transform the attribute
data to variable measures.

1. Types of Data

No two things are exactly alike; therefore there are inherent differences in the data.
Each characteristic under study is referred to as a variable. In Six Sigma, these are
known as CTQ or critical to quality characteristics for a process, product or service.

Attribute (Discrete) Data: Attribute data, also known as discrete data, can take on
only a finite number of points. Typically such data is counted in whole numbers.
Attribute data cannot be broken down into smaller units. For example, the number
of family members cannot be 4.5. No additional meaning can be added to such
data. For example, the number of defects in a sample is discrete data.
Some other examples of attribute data are:

 Zip codes in a country


 “Sweet” or “Sour” taste
 “Regular”, “Medium” or “Large” sizes of pizza
 “Fat” or “Thin” attributes given to a person

Variable (Continuous) data: Variable data, also known as continuous data, is data
which can have any value on a continuous scale. Continuous data exists on
intervals, or on several intervals. Variable data can have almost any numeric value
and can be meaningfully forked into finer increments or decrements, depending
upon the precision of the measurement system.

For example: The height of a person on a ruler can be read as 1.2 meters, 1.05
meters or 1.35 meters.

The important distinction between attribute data and variable data is that variable
data can be meaningfully added or subtracted, while attribute data cannot be
meaningfully added or subtracted.

2. Scales of Measurement

The next step in data collection is to define and apply measurement scales to the
data collected.

The idea behind measurement is that improvement in a process can begin only
when quality is measured or quantified. Essentially, a numerical assignment to a
non-numerical element is called measurement. Measurements communicate
certain information about the relationship between one element and the other
elements.

There are four types of measurement scales for categorical data:

a. Nominal Scale: This shows the simplest and weakest kind of measurement.
They are a form of classification. This shows only the presence or absence of an
attribute. The data collected by nominal scale is called attribute data. For example,
success/fail, accept/reject, correct/incorrect.
Nominal measurements can represent a membership or a designation like
(1=female, 2=male). The statistics used in nominal scale are percent, proportion,
chi-square tests etc.

b. Ordinal Scale: This scale has a natural order of the values. This scale can express
the degree of how much one item is more or less than another. But the space
between the values is not defined. For example, product popularity rankings can be
high, higher, and highest. Product attributes can be taste, or attractiveness. This
scale can be studied with mathematical operators like =, ≠, <, >.
Statistical techniques can be applied to ordinal data like rank order correlation.
Ordinal data is converted to nominal data and analyzed using Binomial or Poisson
models in quality improvement models like Six Sigma.

c. Interval Scale: A variable measured on an interval scale gives the same
information about more or less as ordinal scales do, but interval variables have an
equal distance between each value. In this scale, the difference between any two
successive points is equal, as with temperature, calendar time, etc. This scale has
measurements where ratios of differences are unchanging. For example, 180°C =
356°F. Conversion between two interval scales is accomplished by the
transformation y = ax + b, a > 0.
Statistical techniques can be applied to interval data like correlations, t-tests,
multiple regression and F-tests.

d. Ratio Scale: In this scale, measurements of an object in two different metrics are
related to one another by an invariant ratio. (Thomas Pyzdek, 1976). For example, if an
object’s mass was calculated in pounds (x) and kilograms (y), then x/y = 2.2 for all
values of x and y. This means that a transformation from one ratio measurement
scale to another is executed by a transformation of the form y = ax, a >0, e.g.,
pounds = 2.2 × kilograms. Zero has a meaning here: it means an absence of mass.
Another example is temperature measured in Kelvin; no value below 0 Kelvin is
possible. A weight of 0 lbs represents a meaningful absence of weight.
Statistical techniques can be applied to ratio measurements like correlations,
multiple regression, T-tests, and F-tests.
5.3 Methods for Collecting Data (application)

3. Methods for Collecting Data (application)

Data constitute the foundation for statistical analysis. Data from the process which
has to be analyzed can be collected by applying tools such as:

1. Check Sheets: These are the most common tool for collecting data. They permit
the user to collect data from a process in an easy and systematic manner. (See the
previous section)

2. Control Charts: These are graphs used to study how a process changes over
time. Through control charts, current data can be compared to historical control
limits and conclusions can be drawn on whether the process is in control, or
displays variation (out of control) due to special causes of variation. (To read more on
control charts, see: Chapter 8- Six Sigma, Control)

3. Design of Experiments: A collection of methods for carrying out controlled,
planned experiments on a process or product. Design of experiments undertakes
a sequence of experiments that first looks at the broader variables, known as
independent variables, and then narrows down to a list of the vital few variables.
The data is collected and studied to find out the effect a variable or a number of
variables have on the process.

4. Survey: Sample Surveys are data collected from various groups of people to
gather information about their knowledge or opinion about a product or
process. (To read more on surveys, see: Chapter 2: Business Process Management)

5. Stratification: This is a way of separating data collected from various sources so
that patterns in the data can be seen. (For more information on stratification, see the
following section)

Coding Data

The data collected, no matter where it is collected from, needs to be coded before
being entered for processing, analyzing and reporting. Each item of data that goes for
processing needs a numeric response attached to it. For example, if you ask the
customers whether they prefer a particular product, the answers could be ‘yes’ or
‘no’ or ‘can’t say’. In this case you cannot use just a check mark or such other
symbols against the answer. A possible way to code it is ‘yes’=1, ‘no’=2, ‘can’t say’=3.
While processing the data, you count the number of 1s, 2s, and 3s.

However, sometimes the data variables need not be coded. If you are using weight
or age as a variable of interest, the age or weight itself can be used. Coding
becomes necessary when the analysis does not take values as they are. For
example, when you have to code the group of responses “< 18 years”, “18 to 30
years”, “> 30 years” etc., you can use < 18 years = 1, 18 to 30 years = 2, and so on.
Therefore for each numeric variable to be analyzed, either actual values or coded
values are used.

Binary Coding

When there is qualitative data, or as such observations are not available in the
given data, attributes are used. To characterize this data, sometimes binary coding
is used. If a certain character or event in the data that needs to be checked is
present, it is denoted by 1. If it is absent, it is denoted by 0. This can be shown by:

Χ (event) = {1, if the event is present, 0, if the event is absent}

For example, if the efficiency of workers is measured such that those who work for 8
hours are efficient, Χ (efficiency) will be 1 if a worker works for 8 hours every day and
0 if he works for less than 8 hours a day.
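Coding of this kind amounts to a simple lookup or rule. The following is a minimal Python sketch; the survey answers and working hours are hypothetical, and the sketch applies both the 1/2/3 coding and the binary efficiency code described above.

# Coding sketch: responses and working hours are hypothetical
answer_codes = {"yes": 1, "no": 2, "can't say": 3}
answers = ["yes", "no", "yes", "can't say", "yes", "no"]

coded_answers = [answer_codes[a] for a in answers]
print("coded answers:", coded_answers)          # [1, 2, 1, 3, 1, 2]
print("count of 1s  :", coded_answers.count(1))

hours_worked = [8, 7.5, 8, 6, 9, 8]
efficiency = [1 if h >= 8 else 0 for h in hours_worked]  # binary code: 1 = works 8 hours or more
print("efficiency   :", efficiency)             # [1, 0, 1, 0, 1, 1]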

4. Methods for Assuring Data Accuracy

Capturing incorrect or unreliable data turns out to be an expensive affair, and at
the same time slows down the decision making process. The following is a list of
important factors for assuring data accuracy:

 Bias related to tolerances or targets while measuring or registering analog and
digital images should be avoided.
 When data occurs in time series, the order of the data captured should be noted.
 In case an item feature changes over time, the measurement scale should be
recorded immediately and also after the item stabilizes.
 Rounding should be avoided because it dilutes the responsiveness of
measurement. Averages should be calculated to at least one or more decimal
places than individual readings.
 Data entry errors should be removed by filtering the data.
 Guesswork should be avoided while removing errors and objective statistical tests
should be applied while spotting outliers (an observation that is different from the
main trend in the data)
 Every significant classification identification must be noted along with the data.
This information might include time, gauge, auditor, operator, material, process
change etc.

It is necessary to select the sampling method according to how the sample data is
going to be used. There is no strict norm as to which sampling plan will be
employed for data collection and analysis; a decision has to be made on the basis
of experience and needs of the data collector. The following is a guideline on a few
sampling techniques. Every sampling method has been developed for some specific
purpose.

Simple Random Sampling

In simple random sampling, each element in the sample space has an equal chance
of getting selected in the sample. Hence the probability of any event can be
determined by listing all the possible units in the sample space.

Simple random sampling is considered the simplest sampling technique and is
also preferred because it has a time and economy advantage. But it requires a
homogeneous distribution of the samples. The sample must be representative of
the lot; hence the stress of sample selection is laid on sampling plan usage rather
than on selection of the sample itself. The sequence of sampling has to be done
through a random plan.

Stratified Sampling

When the distribution of samples is not homogeneous or proportional, stratified
sampling is used. For instance, parts may have been produced on different
machines, or under varying conditions. In this case, the total sample population is
divided into homogeneous subgroups. These subgroups are called strata, and this is
followed by applying the simple random sampling technique in each stratum. These
strata are based on predetermined criteria such as size, weight, location, assembly
line etc. Each unit in the sample space must be assigned to one stratum only.

The person using the sample data must be conscious of the presence of stratified
groups, and must document a report such that the interpretations are relevant only
to the sample selected and may not represent the universal population.

Systematic Sampling
In this sampling technique, every nth element is selected from the sample space.
The sampling interval, n, is calculated as:

n = Number in population / Number in sample

This technique is also referred to as interval sampling, as every nth sample is
selected from the list of the sample space.

For example, if there are 2000 units in the sample space and the required number of
samples is 50, then 2000/50 = 40; hence every 40th unit will be selected.
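Systematic sampling is easy to express in code. The following is a minimal Python sketch for the example above: 2000 units, 50 samples, so every 40th unit is picked, starting from a random point within the first interval.

import random

population = list(range(1, 2001))          # 2000 units, identified 1..2000
sample_size = 50
interval = len(population) // sample_size  # 2000 / 50 = 40

random.seed(7)
start = random.randrange(interval)         # random starting point within the first interval
systematic_sample = population[start::interval]

print("sampling interval:", interval)
print("first five units :", systematic_sample[:5])
print("sample size      :", len(systematic_sample))   # 50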

Clustered Sampling

In clustered sampling, all the units are grouped into clusters and a number of
clusters are selected randomly to represent the total population. Then all units
within selected clusters are included in the sample. The elements within the
clusters can be homogeneous or heterogeneous but there should be heterogeneity
between clusters.

The difference between cluster sampling and the stratified sampling is that in
cluster sampling, each cluster is treated as the sampling unit and hence analysis is
done on the number of clusters; whereas in stratified sampling, the analysis is done
on elements within strata.

5. Descriptive Statistics

Descriptive statistics are used to explain the properties of distributions of data from
samples. The following section describes the more frequently used descriptive
statistical measures.

Measures of Central Tendency

The measures of central tendency are the various ways of describing a single
central value of the entire mass of data. The central value is called average. The two
main objectives of the study of averages are:

i. to get a single value that describes the characteristic of the entire group.
ii. to facilitate comparison.
Three averages: mean, median and mode are of interest to Six Sigma.

Mean: Arithmetic mean or simply mean is the value obtained by the sum of all data
values divided by the total number of data observations. It is the most widely used
measure of central tendency.

Population Mean: μ = ΣΧ / N, where Χ is an observation and N is the population size.

Sample Mean: X̄ = ΣΧ / n, where n is the sample size.

Median: The median refers to the middle value in a distribution of a data set. One
half of the items in the data set have a value the size of the median value or
smaller, and one half has a value the size of the median value or larger. It splits the
data into two parts. It is to be noted that the median is the average of the middle
two values for an even set of data items.

Mode: The mode or modal value is that value in a series of data that occurs with
the highest frequency. It is possible for data sets to have more than one mode.

While this definition is helpful in interpreting the mode, it cannot be applied
safely to every distribution because of the erratic nature of sampling. Rather, the
mode should be thought of as the value around which the data items are most
closely concentrated; it is the value with the greatest frequency density in its
immediate neighborhood.
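
As a hypothetical illustration, the three averages can be computed with Python's standard statistics module (the data values below are invented):

import statistics

data = [12, 15, 15, 18, 20, 22, 22, 22, 25]

print(statistics.mean(data))      # arithmetic mean
print(statistics.median(data))    # middle value (average of the middle two for an even n)
print(statistics.mode(data))      # most frequent value; statistics.multimode() lists ties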

Measures of Dispersion

The measures of central tendency give one single figure that represents the entire
data set. But it is also necessary to describe the variability of the observations,
because an average alone cannot give an adequate description of the data set.
Measures of dispersion describe the spread of the data. Dispersion (also known as
scatter, spread or variation) measures the extent to which the items vary from some
central value.

The following are the main measures of dispersion:

Range: The range of a set of data values is the difference between the largest and
smallest values.

R = Largest - Smallest

Variance, Standard Deviation: The variance is the sum of squared deviations from
the mean divided by the number of observations (n − 1 is used when estimating from
a sample). The standard deviation is the square root of the variance.

The Coefficient of Variation (COV): This is equal to the standard deviation divided
by the mean and is expressed as a percentage.

Skewness: Skewness is a measure of symmetry of the distribution.

1. The normal distribution has a skewness of zero; zero signifies perfect symmetry.
2. Positive skewness signifies that the tail of the distribution is more extended on
the side above the mean.
3. Negative skewness signifies that the tail of the distribution is more extended on
the side below the mean.
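
A small Python sketch of these dispersion measures (the data values are invented, and the scipy library is assumed to be available for the skewness calculation):

import statistics
from scipy.stats import skew

data = [12, 15, 15, 18, 20, 22, 22, 22, 25]

r = max(data) - min(data)                      # range
var = statistics.variance(data)                # sample variance (n - 1 denominator)
sd = statistics.stdev(data)                    # sample standard deviation
cov = sd / statistics.mean(data) * 100         # coefficient of variation, %

print(r, var, sd, round(cov, 1), skew(data))
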
Probability Distributions: Probability distributions are relative frequency
distributions when the number of observations is made very large. They are the
distributions of the probabilities of random variables, which may be continuous or
discrete in nature. For continuous random variables, a probability density function
(p.d.f) is used, and for discrete random variables, a probability mass function
(p.m.f) is used.

Probability Density Function (p.d.f): The p.d.f describes the behavior of a random
variable in the continuous case; for normally distributed data it produces the
familiar bell-shaped curve.
When the random variable is normally distributed, the p.d.f is symmetric with
mean = mode = median, meaning they are at the same point. Mathematically, if f(x)
is a continuous density function of the random variable x with f(x) ≥ 0 for all x, then

∫ f(x) dx = 1 (integrated over the entire range of x)

i.e., the total probability of a continuous distribution is 1.

Probability Mass Function (p.m.f): Similarly, if f(x) is a discrete distribution
function of the random variable x taking n values with f(x) ≥ 0, then

Σ f(x) = 1 (summed over all n values of x)

For a given value of x there is only a single value of f(x).

Cumulative Distribution Function (c.d.f): The c.d.f represents the area under the
probability distribution function to the left of X. The c.d.f is used for both
continuous and discrete data. It is denoted by:

F(x) = P(X ≤ x)

and, for a continuous random variable x,

F(x) = ∫ f(t) dt, integrated from −∞ to x.

5.4 Graphical Methods, Histograms and Scatter Diagrams

6. Graphical Methods

Graphical methods of data analysis include box plots, stem and leaf plots, run
charts or trend charts, scatter diagrams, histograms, normal probability plots,
Weibull plots. Data constitute the foundation for statistical analysis. The best way to
analyze data and measure a process is with the help of charts, graphs, or pictures.
Charts and graphs are the most commonly used tools for displaying and analyzing
data as they offer a quick and easy way to visualize what the data characteristics
are. They show and compare changes and relationships.

1. Box Plots combine information about the distribution of values, instead of
plotting the actual values. The box plot helps to see the central tendency, and the
spread and variability of the data observations. (See topic: Process Analysis and
Documentation in the previous section.)

2. Stem and Leaf Plots display the same information as histograms in a different
form and are useful for smaller data sets (n < 200). (See topic: Process Analysis and
Documentation in the previous section.)

3. Trend Charts / Run Charts: Trend charts (also known as run charts) are typically
used to display trends in data over time. A trend chart is a quality improvement
technique and is used to monitor processes. A goal line is often added to the chart
to define the target to be achieved. One of the main advantages this chart offers is
that it helps in discovering patterns that occur over a period of time.

Uses of Trend Charts

Using trend charts can lead to improved process quality.

Trend Charts should be used for introductory analysis of continuous data or data
arranged in a time order. A trend chart of continuous data should be drawn before
doing other analysis. Analysis of run charts is used to find out if the patterns in the
data have developed because of common causes or special causes of variation.
Answers to questions like “Was the process under statistical control for the
observed period” are provided by the run chart. If the answer is no, then there
must have been special causes of variation that affected the process. If the answer
is yes, then process capability analysis can be used to approximate the long term
performance of the process (See topic: Process Capability Analysis)

A run chart should not be used if more than 30% of the data values are the same.
Run charts are also less sensitive than control charts: they cannot detect single
points which are characteristically different from the others, and hence may fail to
detect special causes of variation in spite of their presence.
The various steps involved in creating a trend chart are:

Data gathering: The data should be collected over a period of time and it should
be gathered in a chronological manner. The data collection can start at any point
and end at any point.

Data organizing: The collected data is then integrated and is divided into two sets
of values, i.e., x and y. The values for ‘x-axis’ represent time, and the values for ‘y-
axis’ represent the measurements taken from the source of operation.

Preparing the chart: The y values versus the x values are plotted, using an
appropriate scale that will make the points on the graph visible. Next, vertical lines
for the x values are drawn to separate time intervals such as weeks. Horizontal
lines are drawn to show where trends in the process, or in the operation, occur or
will occur.

Interpreting the chart: After preparing the chart, the data is interpreted and
conclusions are drawn that will be beneficial to the process or operation.

Example: Suppose you are a new manager in a company and you are concerned
about certain employees coming in late. You have decided to monitor the
employees' punctuality over the next four weeks, noting how late they are each
day (on an average basis) and then constructing a trend chart.

Data Gathering: Collect the data for each day over the next four weeks and record
it in chronological order.

Organizing Data: Determine what should be the values on x-axis and what should
be the values on y-axis. Assume day of the week on the x-axis and time on the y-
axis.
Preparing the chart: Plot the y values versus the x values on a graph sheet (on
paper) or using another computer tool like Excel or Minitab. Draw horizontal or
vertical lines on the graph where trends or deviations occur.
Interpreting Data: Conclusions can be drawn once the trend chart has been
prepared, and the results can then be interpreted in the analysis phase. In this
example, the chart makes it clear that employees usually take more time to reach
the office on Mondays.
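
A minimal Python/matplotlib sketch of such a trend chart, with invented lateness values purely for illustration, could be:

import matplotlib.pyplot as plt

# Hypothetical data: average minutes late per working day over four weeks
days = ["Mon", "Tue", "Wed", "Thu", "Fri"] * 4
minutes_late = [22, 10, 8, 9, 12, 25, 11, 7, 10, 13,
                28, 9, 8, 11, 14, 24, 12, 9, 10, 15]

plt.plot(range(1, 21), minutes_late, marker="o")                     # time order on x, measurement on y
plt.axhline(10, color="red", linestyle="--", label="Goal: 10 min")   # goal line
plt.xticks(range(1, 21), days, rotation=90)
plt.xlabel("Working day (4 weeks)")
plt.ylabel("Average minutes late")
plt.title("Run chart of employee lateness")
plt.legend()
plt.show()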

4. Histograms

A pictorial representation of a set of data is known as a histogram. It is a vertical
bar graph that displays the desired information very crisply. It is constructed from a
frequency table and thus is also called a frequency histogram. It depicts the
distribution or variation of data over a range (the range could be in terms of age,
size, length, number, etc.), showing dispersion and central tendency. It reveals the
shape of the data, i.e., normal, bimodal, saw-toothed, cliff-like, skewed and so on.

The shapes of histograms vary depending on the choice of the size of the intervals.
The horizontal axis depicts the range and scale of observations involved. The
vertical axis shows the number of data points in various intervals, i.e., the
frequency of observations in the intervals. The values on the horizontal axis are
called the upper limits (intervals) of data points.

Uses of a Histogram

A histogram makes it easy to see the scattering of data (the dispersion and central
tendency) and thus it becomes clear where the variable occurs in a critical state. It
makes comparison of the distribution to process requirements easy.

A histogram is a practical method to identify a distribution. In very large samples,
the histogram will be close to the shape of the distribution, and it becomes easier to
identify the population distribution in that case.

Histograms are also used as a quality control tool, in the analysis of and search for
possible answers to quality control problems. But histograms should be drawn
along with control charts or run charts, because histograms do not reveal 'out of
control' processes as they do not show the time sequence of the data.

Histograms help in finding solutions for process improvement. When histograms
from different time periods are compared, patterns in them can be studied for
possible solutions.

When data is obtained from different sources, the data can be stratified by plotting
different histograms.

Other uses of a histogram are listed below:

 To check whether the output of a process is normally distributed or not
 To check whether the customer's requirement can be met by the current process
 To check for process change
 To check the differences in outputs of multiple processes
 To communicate data in a faster and easier way

A histogram is an efficient tool which can be used in the early phase of data
analysis. For better analysis, it is combined with the concept of the normal curve. A
few questions are generally used to interpret the histogram:

 Is the particular process working within the stipulated specification limits?
 Is the process exhibiting a wide variation?
 And finally, which appropriate action has to be taken?

Disadvantages of Histograms

At least 50 samples must be considered for a histogram to represent the true
behavior of a process. Though histograms give information regarding the spread
and distribution, they do not give adequate information regarding the state of
control of the process. As the samples are not collected in any particular order, the
time-related trends of the process being studied are not depicted.

Histograms are an important tool in the initial phase of data analysis due to the
ease with which it can be created. But in statistical process control, the histogram
does not give any clue regarding how the process was operating at the time of data
collection.

In the example discussed previously about employees who come late, a
histogram can show how the data is dispersed (on a daily basis) over the duration
of a month.
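
Since the original chart is not reproduced here, a minimal Python/matplotlib sketch of such a histogram, using simulated lateness data, could be:

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(1)
minutes_late = np.random.normal(loc=12, scale=4, size=100)   # simulated daily lateness data

plt.hist(minutes_late, bins=10, edgecolor="black")   # frequency histogram with 10 intervals
plt.xlabel("Minutes late (interval upper limits)")
plt.ylabel("Frequency")
plt.title("Histogram of employee lateness")
plt.show()
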
5. Scatter Diagrams

(See Chapter 6- Six Sigma, Analyze)

6. Probability Plots

Probability plots are a graphical technique to check which distribution (e.g. normal,
Weibull etc.) a particular data set is following. This technique is used to verify the
collected data against any known distribution. A probability plot shows the
probability of a certain event occurring at different places within a given time
period. Each sample is selected in such a manner that each event within the sample
space has a known chance of being selected. While sampling for any event, every
observation from which the sample is drawn has a known probability of being
selected into the sample.

Probability plots give a better insight into the physical environment of a process.
With moderately small samples, probability plots produce reasonable results.
Probability plots show estimates of process yields.

Probabilities are measured on a scale from 0 to 1. An event which is most likely to
occur will have a probability near 1; an event which is least likely to occur will have
a probability near 0.

When plotted on a graph, these events usually bunch around the mean, which
occurs in a Bell curve (See topic: Basic Process Capability). This theoretical
distribution of events allows the calculation of the probability of a certain event
occurring in the sample space.

Interpretation of Probability Plots

A probability plot consists of a center line and two outer bands, one above the
center line and one below it. The nearer the data points are to the center or middle
line, the better it is thought to fit the distribution. If all the points lie within the two
outer bands then the data set is thought to be a good fit to the probability model
being used.

A straight line in a probability plot indicates that the data set follows that
particular distribution, whereas a bend in the plot suggests that the data set comes
from a different distribution, or from more than one distribution.

One positive aspect of a probability plot is that the data need not be divided into
intervals; probability plots also work well for a smaller number of data points.

On the other hand, probability plots require the correct probability distribution to
be chosen.
7. Goodness of Fit Tests

A Goodness of Fit test is a type of statistical test in which the validity of one
hypothesis is tested without the specification of an alternative hypothesis.

The procedure for such a test is,

1. To define a test statistic (some function of data measuring the distance between
the hypothesis and data) and
2. To calculate the probability of obtaining data which have a still larger value of this
test statistic than the value observed, assuming the hypothesis is true.

The result obtained is known as the size of test or the confidence level.

Probabilities of less than 1% indicate a poor fit. Probabilities close to 100% indicate
a fit which is too good to occur very frequently and may be a sign of error.

The most common tests for goodness-of-fit are chi square test, Kolmogorov test,
Cramer-Smirnov-Von-Mises test, runs etc.

The Pearsonian chi-square test is used to test whether an observed distribution
conforms to any other distribution. The method consists of organizing the
observations into a frequency table with k classes. The formula is:

χ² = Σ (O − E)² / E

where O is the observed frequency and E the expected frequency in each class.

The number of degrees of freedom is k − p − 1

Here, p = the number of parameters estimated from the data


The Kolmogorov-Smirnov test is used to test a sample for distributional adequacy.
It is used to determine whether the particular sample being studied comes from a
population with a specific distribution. The test has its own characteristics and
limitations: its critical values do not depend upon the particular cumulative
distribution function being tested, and, unlike the chi-square test, it is an exact test
rather than an approximation.
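
A short, hypothetical sketch of both tests in Python (scipy assumed; the frequencies and data are invented):

import numpy as np
from scipy import stats

# Chi-square goodness of fit: observed vs. expected class frequencies
observed = np.array([18, 22, 25, 20, 15])
expected = np.array([20, 20, 20, 20, 20])
chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p-value = {p:.3f}")

# Kolmogorov-Smirnov test of a sample against the standard normal distribution
np.random.seed(0)
sample = np.random.normal(loc=0, scale=1, size=50)
d, p_ks = stats.kstest(sample, "norm")
print(f"KS statistic = {d:.3f}, p-value = {p_ks:.3f}")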

(For more details on Goodness of Fit Tests, refer to chapter 6: Black Belt, Analyze)

5.5 Probability Distributions Commonly Used by Six Sigma Black Belts

D. Probability Distributions Commonly Used by Six Sigma Black Belts

A set of numbers collected from a well defined universe of possible measurements
arising from a property or relationship under study is called a distribution. (Thomas
Pyzdek, 1976)

Distributions reveal a lot of information about the data being studied. They reveal
the way in which probabilities are connected with the data numbers under
observation. Plots of the distribution shape can tell how probabilities change over a
range of values.

There are two distinct types of theoretical distributions, discrete and continuous,
corresponding to the two types of data or random variables. In a standard normal
distribution, half of the values lie above the mean (average) and the other half lie
below it.

Binomial Distribution

The binomial distribution is a discrete probability distribution. It is one of the
simplest theoretical distributions and is used quite often to illustrate the use and
properties of theoretical distributions more generally.

The binomial distribution is used in situations where there are just two mutually
exclusive outcomes of a trial. It gives the probability of getting x successes in a
sample of n taken from an 'infinite' population where the probability of a success
is p.

The equation which gives the probability of getting x defectives in a sample of n
units is known as the binomial probability distribution. The formula for the
binomial distribution is:

P(x) = [n! / (x!(n − x)!)] p^x q^(n−x)

where,

P(x) = probability mass function = probability of getting x defectives

p = number of non-conforming units / number of items sampled = probability of
success

n = number of units in the sample

q = 1 − p = probability of failure

Range: 0 < p < 1; x = 0, 1, 2, …, n

The binomial distribution is best utilized when the sample size is less than 10% of
the lot size. Binomial distributions can also be analyzed using Microsoft Excel.
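
As a hypothetical illustration, the same binomial probabilities can be computed in Python with scipy (the values of n, p and x below are assumptions for the example):

from scipy import stats

n, p = 20, 0.05          # sample of 20 units, 5% probability of a defective
x = 2

print(stats.binom.pmf(x, n, p))        # P(exactly 2 defectives)
print(stats.binom.cdf(x, n, p))        # P(2 or fewer defectives)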

Poisson Distribution

The Poisson distribution describes the number of discrete events occurring in a
series, or a sequence, exhibiting a particular kind of independence.

It helps in accurately determining the distribution of the number of
non-conformances in the sample.

Juran (1988) recommends using the Poisson distribution with a minimum sample
size of 16, the population size should be at least 10 times the sample size and the
probability of occurrence ‘p’ on each trial should be less than 0.1.

The formula for the Poisson distribution is:

P(x) = (μ^x e^(−μ)) / x!

where,

P(x) gives the probability of exactly x occurrences in the sample

μ = average number of non-conformances per unit

x = number of non-conformances in the sample

e = 2.7182818

Range: x = 0, 1, 2, … and μ > 0

Poisson distribution can also be carried out by using Microsoft Excel.
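 
Similarly, a minimal Python/scipy sketch with assumed values of μ and x could be:

from scipy import stats

mu = 1.6                 # assumed average non-conformances per unit
x = 3

print(stats.poisson.pmf(x, mu))        # P(exactly 3 non-conformances)
print(stats.poisson.cdf(x, mu))        # P(3 or fewer non-conformances)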

Normal Distribution

The Normal distribution is also known as the Gaussian distribution, after the
German mathematician Carl Friedrich Gauss.

Two parameters determine the normal distribution. The mean (μ) locates the
centre of the distribution and standard deviation (σ) measures the spread of the
distribution.

The normal distribution is one of the statistical distributions which appear in a wide
variety of applications. This is due to the central limit theorem, which tells us that
sums of random variables are approximately normally distributed if the number of
observations is large. (The Central Limit Theorem is described in the following section)

The probability density function (p.d.f) for a continuous normal distribution is:

f(x) = (1 / (σ√(2π))) e^(−(x − μ)² / (2σ²))

where,

μ = population mean

π = 3.1416

σ = population standard deviation

e = base of the natural logarithm = 2.71828

Range: −∞ < x < ∞, −∞ < μ < ∞ and σ² > 0

The mean (μ) locates the center of the distribution; standard deviation (σ) measures
the spread of the distribution. Almost all probability is within ± 3σ of the mean.
Graphing the results of normal distribution results in a bell shaped curve.

The normal distribution has a 'bell shape'.

The area under the curve represents the proportion of the process output that falls
within a range of values. These values, which can also be obtained from the normal
distribution table, are approximately 68.27% within ±1σ of the mean, 95.45% within
±2σ, and 99.73% within ±3σ.
The test statistic used for the normal distribution is

Z = (x − μ) / σ

This statistic follows the Standard Normal Distribution with mean zero and
variance one.
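
A short Python/scipy sketch of these normal-distribution calculations, with assumed values of μ, σ and the specification limit:

from scipy import stats

mu, sigma = 100, 5       # assumed process mean and standard deviation
usl = 110                # assumed upper specification limit

z = (usl - mu) / sigma                         # test statistic for the USL
print(stats.norm.cdf(z))                       # proportion of output below the USL
print(1 - stats.norm.cdf(z))                   # proportion expected beyond the USL
print(stats.norm.cdf(3) - stats.norm.cdf(-3))  # ~0.9973 of output within +/- 3 sigma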

Chi-square Test

The chi-square test is a non-parametric test, and probably the most commonly used
one. It is quite a flexible test and can be used in a number of circumstances.

Three types of analysis can be drawn from chi square test:

 Goodness of fit
 Test for homogeneity
 Test of independence

The chi-square test determines the probability that an observed distribution of data
is due to chance (sampling error) alone. It is used with nonparametric data, i.e.,
data based on rankings or on distribution into qualitative categories. The most
widespread application of the chi-square test is to study the connection between
two variables.

Unlike parametric tests, the chi-square test does not require the data sample to be
drawn from a normally distributed population; it only assumes that the sample is
randomly drawn from the population of interest.

The requirements for performing a chi-square test are as follows:

 The population should be randomly sampled.
 Raw frequencies should be used in reporting the data, not percentages.
 The variables should be independent.
 The categories of the independent and dependent variables must be exhaustive
and mutually exclusive, i.e., responses should be independent and not influenced by
one another.
 The observed frequencies should be sizeable.
The probability function of the chi-square distribution is:

f(χ²) = ( (χ²)^(v/2 − 1) e^(−χ²/2) ) / ( 2^(v/2) Γ(v/2) ), for χ² > 0

where,

e = 2.71828

v = number of degrees of freedom

The chi-square distribution has only one parameter, v.
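
As a hypothetical illustration of the test of independence, a minimal Python/scipy sketch with invented defect counts could be:

import numpy as np
from scipy import stats

# Hypothetical 2x3 contingency table: defect counts by shift and defect type
table = np.array([[30, 14, 6],
                  [35, 22, 13]])

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p-value = {p:.3f}")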

Student’s T-Test

T-Tests are used to compare two averages. They may be used to compare a variety
of averages, such as, effects of weight reduction techniques used by two groups of
individuals, complications occurring in patients after two different types of
operations or accidents occurring at two different junctions.

The t-test may be used when sample size is small i.e., less than 30. A t-test may be
calculated if the means, the standard deviation and the number of data points are
known. If raw data is used then these measures should be calculated before
performing the t-test.

The t-test can be performed manually, or a statistical package like Microsoft Excel
or Minitab can be used.

The test statistic is:

t = (x bar − μ) / (s / √N)

where N is the number of independent measurements, μ is the population mean,
x bar is the sample mean, and s is the sample standard deviation given by

s = √( Σ(x − x bar)² / (N − 1) )

The probability function of the Student t-distribution is:

f(t) = C (1 + t²/v)^(−(v+1)/2)

where, t = test statistic

C = a constant required to make the area under the curve equal to unity

v = n − 1, the number of degrees of freedom.
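
A minimal Python/scipy sketch, using invented data for the two groups of individuals mentioned above, could be:

from scipy import stats

# Hypothetical weight-loss results (kg) for two groups
group_a = [2.1, 3.4, 1.8, 2.9, 3.1, 2.5, 2.2]
group_b = [1.2, 1.9, 2.4, 1.5, 1.7, 2.0, 1.1]

# One-sample t-test: is the mean of group A different from a target of 2.0?
t1, p1 = stats.ttest_1samp(group_a, popmean=2.0)

# Two-sample t-test comparing the two group means
t2, p2 = stats.ttest_ind(group_a, group_b)

print(f"one-sample: t = {t1:.2f}, p = {p1:.3f}")
print(f"two-sample: t = {t2:.2f}, p = {p2:.3f}")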

F Distributions

The F distribution is also known as the Fisher-Snedecor distribution. It is a
continuous probability distribution. The F distribution is used to test whether two
observed samples have the same variance.

The F distribution is used to calculate the probability values in ANOVA, and it is the
distribution of the ratio of two estimates of variance. It is the ratio of two
chi-square variables, each divided by its respective degrees of freedom.

The F statistic is given by:

F = (χ₁² / v₁) / (χ₂² / v₂)


Other Distributions

Hyper Geometric Distribution

The hypergeometric distribution is used in sampling inspection. It is a discrete
probability distribution. It describes the number of successes in a sequence of n
draws from a finite population without replacement.

The formula for the hypergeometric probability distribution is:

P(x) = [ C(m, x) × C(N − m, n − x) ] / C(N, n)

where C(a, b) denotes the number of combinations of a items taken b at a time,

N = the lot size

m = number of defectives in the lot

n = the sample size

x = number of defectives in the sample

The hypergeometric probability distribution can also be calculated using Microsoft
Excel.

Bivariate Distribution
This distribution describes the joint behavior of two Gaussian variables x and y. The
bivariate normal distribution has 5 parameters, namely, the two means µ1 and µ2,
the two standard deviations σ1 and σ2, and the correlation between the two
variables, ρ. It is a three dimensional (surface) distribution.

The bivariate normal density is given by:

f(x, y) = 1 / (2π σ1 σ2 √(1 − ρ²)) × exp{ −1 / (2(1 − ρ²)) × [ ((x − µ1)/σ1)²
− 2ρ((x − µ1)/σ1)((y − µ2)/σ2) + ((y − µ2)/σ2)² ] }

Exponential Distribution

The exponential distribution is closely related to the Poisson distribution and is very
valuable in analyzing reliability. While the Poisson distribution is a discrete
distribution, the exponential distribution is continuous. It describes the time
interval between random occurrences and is used to describe units that have a
constant failure rate.

The formula for the exponential distribution is:

f(t) = λ e^(−λt), for t ≥ 0

The shape of the distribution is determined by just one parameter, lambda (λ),
which is equal to 1/µ, the reciprocal of the mean.

Lognormal Distribution

In probability, the lognormal distribution is defined with reference to the normal
distribution. The distribution of any random variable whose logarithm is normally
distributed is known as the lognormal distribution: a random variable X is lognormal
if its natural logarithm, Y = ln(X), is normal.
Lognormal distribution is used for modeling material properties. It is used
extensively in reliability applications to model failure times. It is expected by all the
customers that the products they buy should function for years together. Failure to
do so may result in catastrophic results and customer dissatisfaction.

The lognormal distribution is useful for random variables which are constrained to
be greater than 0. It is characterized by two parameters: mean and standard
deviation.

The lognormal distribution has the probability density function (p.d.f):

f(x) = 1 / (x σ √(2π)) × exp( −(ln x − μ)² / (2σ²) ), for x > 0

where μ and σ are the mean and standard deviation of the variable's logarithm,
respectively.

Weibull Distribution

Named after Waloddi Weibull, this is a very flexible distribution. It is useful in
reliability engineering because its parameters can be tailored to suit the product
characteristics. It may also be used to represent manufacturing and delivery times
in industrial engineering problems. It has wide application in reliability models
because it can take on the characteristics of other types of distributions, and it is
also useful in failure analysis.

The three-parameter Weibull probability density function can be written as:

f(t) = (β/θ) × ((t − t₀)/θ)^(β−1) × exp( −((t − t₀)/θ)^β ), for t ≥ t₀

β here is the shape parameter, θ is the characteristic life or scale parameter, and t₀
is the location parameter.

A number of failure characteristics can be modeled with the help of the Weibull
distribution, such as infant mortality, random failures, wear-out, and failure-free
periods. Infant mortality suggests a decreasing failure rate: products which are
defective fail early, and the failure rate decreases over time as the defective
products fall out of the population.

A constant failure rate suggests that items are failing from random events. Wear
out is suggested by an increasing failure rate; some parts are more likely to fail with
the passage of time.

E. Statistical Process Control

Statistical Process Control is the employment of valid analytical statistical methods
to pinpoint the existence of special causes of variation in a process. The special
causes of variation are called assignable causes, and they should be ascertained and
eliminated. The common causes of variation, called chance causes, should not be
ignored but dealt with by identifying long term process improvement. It is to be
noted that these chance variations cannot be totally eliminated from the data.

The main concept in statistical process control (SPC) is that every measurable
phenomenon is a statistical distribution. This means an observed set of data is
made up of a sample of the effects of unknown chance causes. After everything has
been done to eliminate special causes of variation, a certain amount of variability
showing the state of control will always remain.

The three basic characteristics of a distribution are location, spread, and shape. The
location is the typical value of the distribution, such as the mean. The spread of the
distribution is the amount by which the smaller values differ from the larger ones,
measured by, for example, the standard deviation and variance. The shape of a
distribution refers to patterns such as peakedness and symmetry; a distribution can
be bell shaped, rectangular shaped, etc.

(For more information on SPC, see Chapter 8- Six Sigma, Control)

5.6 Measurement Systems

F. Measurement Systems
1. Measurement Methods

Six Sigma is a process based methodology for pursuing continuous improvement. It
is a disciplined, data driven approach for reducing defects in the various processes
of an organization, ranging from manufacturing and transactional processes to the
service industry. The primary aim of Six Sigma is to focus on the customer first and
then use facts and data based on customer requirements to get better results and
improve the process.

The Six Sigma methodology entails the implementation of a strategy that is
measurement based, which focuses primarily on improvement of processes and
reduction of defects. Measurements are vital for any system: they are an important
basis for data gathering and subsequent enhancement of the system.

To know the organization’s position in the market in future, the present system has
to be measured. The measurement of processes, people involved, strategies
applied, products generated and performance help the organization to follow and
appraise each stage of the production process.

Measuring instruments and operators contribute to the variation in any
measurement system; a measurement system is only as good as the instruments
and the operators handling them. Problems arise in a measurement system when
different people call the same thing by different names, or call different things by
the same name, which confuses the analysis. Therefore, there arises a need for a
uniform measurement system.

There are several measurement methods and instruments like gauge blocks,
attribute screens, micrometers, calipers, optical comparators, tensile strength,
titration, etc. (See topic: Design of Experiments)

2. Measurement System Analysis

In the Six Sigma methodology, decisions are guided by the analysis of the
measurements recorded. An error in a measuring system may therefore result in an
error in the judgments made by management. The function of Measurement System
Analysis is to check the efficiency of a measurement system by evaluating its
accuracy, precision and stability.

This is a first step that precedes any kind of decision making. It is conducted before
collecting data to make sure that the measurements which will be subsequently
collected will be done without any bias. It will also ensure that the measurements
collected will be reproducible by the system used and will be the same if repeated
by other operators. It is done so that a reasonable conclusion about the quality of
the measurement system can be made.

Measurement systems analysis appraises the whole process of data collection to
ascertain the reliability of the data which will be used for quality analysis and
further improvement of the process.

Measurement systems analysis checks the whole measurement system. It analyzes
the collection of tools, the operations to be carried out on the data, the procedures
to be undertaken, the software to be used and also the personnel who are going to
perform the assignment.

Errors of a Measurement System

The errors of a measurement system can be categorized into two groups: accuracy
and precision.

1. Accuracy refers to difference between the measurement and the actual value.
2. Precision refers to the variation observed when the same thing is measured by
the instrument repeatedly.

Five parameters should be taken into account in Measurement System Analysis:
linearity, stability, bias, repeatability and reproducibility.

Linearity, stability and bias refer to the accuracy of the measurement system.
Repeatability and reproducibility refer to the precision of the measurement system.
Three characteristics should be evaluated to check the precision of a measurement
system; statistical control of the measurement system, increment of the
measurement, and standard deviation of the measurement system.

Bias

It is the difference between an observed mean measurement and its reference
value. Bias is introduced into a sample when data collection is done without regard
to the key influencing factors. Bias can be determined by choosing one appraiser
and one reference standard, and can be calculated as:
Bias = Average of the repeated measurement − Reference value.

Linearity

It is the difference between an instrument's measurements and the known
standard across the instrument's operating range. Linearity can be determined by
selecting standards or parts that cover most of the operating range of the
measurement instrument. Linearity assesses whether the bias is consistent across
the operating range of the measurement system, i.e., at the low and high ends of
the gage.

Stability

It is the change in bias over time, i.e., the variation in the measurements recorded
by an appraiser on the same parts, using the same method, over a period of time. A
system is said to be stable if the results of the measurement system are the same at
different points in time.

Statistical stability is determined by the use of control charts. Control charts are
prepared and evaluated to determine the stability of any measurement system. A
stable measurement system will show no ‘out of control’ signals.

Repeatability

Repeatability is the variation recorded in measurements when the same appraiser
takes a number of measurements using the same technique and instrument on the
same part or product. The conditions inherent in the measurement system give rise
to variation when the measurement system is applied over and over again under
the same conditions.

A measurement system is said to be repeatable if it has a consistent variability.

To calculate repeatability, a range or sigma chart of repeated measurements of
parts that cover a significant section of the process, or of the tolerance, can be
constructed. If the range or sigma chart shows an 'out of control' condition, then
special causes are contributing to the variability of the measurement system. If the
range or sigma chart is in control, then repeatability can be determined by
calculating the standard deviation based on the mean range or mean standard
deviation.
Reproducibility

Reproducibility refers to the variation in the average measurements when the
measurements are recorded by more than one appraiser using the same measuring
technique on the same part or product.

A measurement system is said to be reproducible if different appraisers produce
consistent results. Reproducibility, or appraiser bias, can be computed by comparing
each appraiser's average with that of the other appraisers.

Tools and Techniques of Measurement Systems Analysis

The common tools and techniques of Measurement Systems Analysis are as
follows:

 Attribute Gage Study
 Gage R&R
 ANOVA Gage R&R
 Destructive Testing Analysis

The tools chosen are, as a rule, determined by the characteristics of the
measurement system itself.

A general guideline for measurement system acceptability, according to the
Automotive Industry Action Group (AIAG), is:

 Errors under 10 percent are acceptable
 10 percent to 30 percent error suggests that the system may be acceptable
depending on the importance of the application, cost of the measurement device,
cost of repair, and other factors
 Over 30 percent error is considered unacceptable, and the measurement system
should be improved

Gage R&R- Gage Repeatability and Reproducibility

Gage R&R is used to statistically segregate the different types of variation in a
measurement process. These variations include:
 Repeatability, i.e., variation due to the equipment
 Reproducibility, i.e., variation due to different appraisers
 Pure error or residual error
 Variation due to interaction effects

It can be applied to any kind of measurement (attribute or variables, indeterminate
or determinate). The ANOVA method and the average and range method are the
common methods used. (See Chapter 6- Six Sigma, Analyze)

It is very important to establish a formal system of management of the
measurement system once the system is found acceptable. This is done to ensure
that the system continues to be reliable and dependable.

3. Metrology

Wrong decisions can be the result of erroneous or inaccurate measurements, and
they can have serious consequences, costing money and even lives. Therefore it is
very important to have authentic and accurate measurements approved and
established by the appropriate authorities.

Metrology is the science of measurement. The idea behind measuring is to
determine a value.

As measurements are necessary, there arises a need for standards to make
measuring uniform. Measurement standards are used to calibrate instruments or
measurement equipment so that the measurements which are taken are uniform.
They also make the results comparable.

Measurement standards provide measurement traceability to national standards.
With the worldwide acceptance of the ISO quality standards, measurement
traceability has become an international requirement. It should be defined together
with the calibration, verification and uncertainty of a measurement system.

Calibration

Calibration of equipment helps the organization using it to identify defects.
Calibration is the process of comparing an instrument of unverified accuracy with
an instrument of acknowledged accuracy in order to identify any variation from the
required performance. It is done to ensure that the equipment stays within
tolerance throughout its use.

Some important reasons to have the instruments calibrated are:

 To ascertain that the measurements which will be taken by the instrument will be
consistent throughout.
 To ensure that the measurements taken by the instrument will be accurate.
 To ascertain the reliability of the instrument.

Uncertainty of Measurement

Measurements are always subject to uncertainty. The doubt about any
measurement is called the uncertainty of measurement. The fitness of any
measurement can only be accurately evaluated when the uncertainty in the
measurement is evaluated and acknowledged.

It is very important not to confuse error with uncertainty. The two may seem alike
but are vastly different. Error refers to the difference between the true value and
the measured value; uncertainty refers to the quantification of the doubt about the
measurement outcome.

Uncertainty of measurement may arise from the instrument used for the
measurement, from the object being measured, from the person doing the
measurement or from any other source. Good ways to reduce the uncertainty of
measurement are traceable calibration, careful computation, efficient record
keeping, and thorough checking.

In Six Sigma, uncertainty of measurement is very important. It is used in
calibration, where the uncertainty of measurement must be reported on the
certificate. It is also used in tests, where knowledge of the uncertainty is needed to
determine a pass or fail and to know the tolerance of an instrument.

In reality, measurements are never made in ideal conditions. Uncertainties may
come from the following sources:
Measuring instrument

Instrumental errors due to ageing, wear and tear, interference and noise (for
electrical instruments) may cause uncertainty in measurement.

Object being measured

Unstable objects may also contribute to uncertainties.

Measurement process

The process of measurement may not be user friendly and may result in the
subject not cooperating.

Calibration uncertainty

These can also be called imported uncertainties as the calibration of the instrument
can add to the uncertainty.

Operator skill

The dexterity and assessment of the person recording the measurement also add
to the uncertainty of the measurement.

Sample selected

The sample selected from the population should be a good representation of the
process to be assessed.

Environment

Environmental conditions also add to the uncertainty of measurement;
temperature, air pressure and humidity are some of them.

However, some errors or mistakes are not to be confused with uncertainty. Mistakes
made by the operator should not be counted as uncertainties; they should be caught
by rechecking.

5.7 Process Capability Analysis

G. Process Capability Analysis

Process capability, as the name suggests, is the study of capability or the ability of a
process in any organization. This study answers the very basic question which is, “Is
this process capable or good enough?” Process capability refers to the capability of
any process to produce a defect free product when the process is in a state of
statistical control.

Process capability study can also be used to appraise the ability of any process to
meet the needed specifications. To put it simply, it answers this question, “How
capable is the process in terms of producing output within the specification limit?”

There are two ways of calculating capability- one is based on measuring the
variability of the process (the inherent variation within a sample) directly.

The other is based on counting the number of defects produced by the process.
DPMO is a common term used for ‘defect rate’ in Six Sigma. This method requires
extensive efforts to collect and analyze the data. The goal of such a study is to make
a projection of the number of defects expected from the process in the long term
by using a sample of data items from a process.

Process capability analysis can be performed with both attribute data and
continuous data, if the process is in statistical control. (See topic: SPC) Process
capability analysis done on processes that are not under statistical control
produces erroneous figures of process capability, and therefore should never be
performed. For example, if the means of successive samples fluctuate greatly, or
they are clearly off the target specification, these problems should be rectified first.

Process capability analysis can be done in two stages:

a. Bringing the process in a state of statistical control for a given period of time by
use of control charts.
b. Comparing to what extent the long-term process performance complies with the
management or engineering specifications.

In Six Sigma, process capability studies are done at the beginning and at the end
of a study to check the degree of improvement attained. A capability study
examines the capability of a process under ideal conditions over a short time, e.g.
2 to 24 hours. Process engineers are responsible for process capability studies.
Knowledge of the short term stability and capability at the end is one of the
benefits of such a study.

Process performance studies on the other hand tell us about the long term
performance of a process. They determine the long term stability and capability of
the process by using the total process variability in the standard capability
computations.

1. Designing and Conducting a Process Capability Study

To conduct a process capability study certain steps are to be followed.

Selecting the process

As any organization has limited resources, it is very important to choose the right
process for improvement. Limited resources here means limited funds and limited
manpower which can be devoted totally to the improvement process; so all the
processes cannot go through the improvement process simultaneously. The tools
for selecting the process which needs to undergo improvement at the earliest are
Pareto analysis and fishbone diagrams.

Defining the process

Once the process has been selected the next important step is to define the scope
of the process. A process is not only the manufacturing process but it involves
everything. It is a combination of machines, tools, methods, and the people
involved in the particular process. Every factor involved in the process has to be
identified at this step of the process capability study.

Meeting the resources for the study

As this study aims at the improvement of the process, it disrupts the normal
functioning of the process. This kind of study also involves considerable
expenditure both in terms of manpower and material. Every requirement should be
met for such a kind of study. At this stage, it becomes imperative to use all the
techniques of project management like planning, scheduling, and reporting.

Evaluating the measurement system

The measurement system should be first evaluated for its capability and validity.
The measurement system should be evaluated for tolerance, accuracy and
precision.

Preparing a control plan

The control plan has a dual job. On one hand it isolates and controls as many
variables as possible for the study. On the other hand it provides a method to track
those variables which cannot be controlled.

Selecting the method for analysis

The choice of the SPC method depends upon what attributes are used up to this
point in the study. One of the attribute charts is to be used if performance measure
is an attribute. Similarly variable charts are used for process performance
measures studied on a continuous scale.

Collecting and analyzing the data

Data collection and analysis should always be done by more than one person, as
this ensures greater accuracy and helps to catch accidental errors during collection
or analysis. Control charts should be used here.

Identifying and removing special causes

A special cause of variation can be a good cause or a bad one. The idea is to identify
it. It may take months to identify a special cause. For instance, inadequately trained
operators producing the variability cannot be considered as a special cause; rather
it is the inadequate training which is the special cause. Therefore, harmful special
causes are usually removed by removing the cause itself. On the other hand,
advantageous special causes are sometimes actually embedded in the routine
operating procedure.

Assessing the process capability

After the process comes under a state of control, the process capability can be
computed. This can be done by the methods described in the latter part of this
chapter. Once the statistical figures of the process capability are worked out, they
are then compared with the managerial objective for the process.

Planning for continuous process development

After a stable state or a state of statistical control has been achieved, measures
should be taken to sustain the same and also to enhance it. What is more
important here is the atmosphere of the company which helps in the process of
sustaining the stable state and also continuously improving on it. SPC is one such
measure.

2. Process Capability Indices

The voice of the process and the voice of the customer both have an effect on each
other. In Six Sigma this two-way relationship is called capability. Capability
describes how well the voice of the process, or the performance of the process,
meets the customers' expectations.

Statistically, process capability compares the output of an 'in control' process to the
specification limits by using capability indices. Process capability indices are
measures of the ability of a process to produce the final product or service
according to the customer's specifications or some other measurable
characteristics. The output of a stable process is compared with a specification to
see how well the process meets that specification, and the capability indices draw
attention to where improvement is required.

When you buy a pizza, as a customer, you have certain expectations: you want a
particular type of pizza to taste the same every time you order it. If it were too
crispy or too salty, you would immediately sense it and feel dissatisfied. The pizza
company is aware of this and controls the amount of each ingredient and the oven
temperature that go into making every pizza; in other words, the manufacturer sets
specifications to make its pizzas consistent. The value that separates acceptable
from unacceptable performance is called a specification.

Process capability indices measure whether a process is capable or not, i.e.,
whether the output is consistent and the process mean is centered on the specified
target. In general, a capable process is one in which all the measurements fall
inside the specification limits, i.e., between the Upper Specification Limit (USL)
and the Lower Specification Limit (LSL).

Histograms and control charts together are used to express process capability.
Process capability is expressed using the following indices:

1. Cp is a simple index to show process capability. It compares the width of the
specification limits to the effective short term width of the process. The width or
limits of the process have been defined by Six Sigma pioneers as three standard
deviations away from the mean on either side; these limits cover around 99.7% of
the process variation.

Cp can be computed as the distance between the upper and lower specification
limits (the specification width) divided by 6 times the process standard deviation,
or sigma:

Cp = (USL − LSL) / 6σ

(These specification limits are not statistically calculated; they are set by the
customer requirements and the process economics.) This index makes a direct
comparison of the natural tolerance of the process to the engineering
specifications.

This can also be written as Cp = Engineering tolerance / 6σ


While calculating Cp, it is assumed that all the observations drawn from the
sample are normally distributed. µ and σ are assumed as the mean and standard
deviation, respectively, of the normal data. USL, LSL, and T are the upper
specification limit, the lower specification limit and the target value, respectively.

 USL and LSL: The difference between USL and LSL defines the range of output
which the process must meet. (USL-LSL) is also called as the specification range.
 6σ: This is called the "natural tolerance" of the process. The smaller 6σ is, the
more consistent the process output.
 If Cp < 1, the denominator is greater than the numerator, i.e., the process width
(6σ) is wider than the specification width (USL − LSL). The process is therefore not
capable of generating output which abides by the specifications and is generating a
significant number of defects.
 If Cp = 1, the process is just meeting the specifications but, if centered, is still
generating about 0.27% defects. The generally accepted minimum value for Cp is
1.33.
 If Cp>1, it means that the process variation is less than the specification range and
it ensures that the specification range is narrow enough. The process is potentially
capable, if the process mean is centered on the specified engineering target.
However, defects may occur if the process is not centered on the target value.
Generally, a larger value of Cp is preferred.
 For a Six Sigma process, i.e, a process that generates 3.4 defects per million
opportunities, including a 1.5 sigma shift, the value of Cp would be 2.

Limitations of Cp:

 It cannot be used unless there are both upper and lower specification limits
 It does not account for process centering, i.e., if the process average is not
properly centered, the result will be misleading.

2. Cu is the difference between the upper specification limit and the process mean,
divided by 3 sigma.

3. Cl is the difference between the process mean and the lower specification limit,
divided by 3 sigma.

4. Cpk is the smaller of the Cu and Cl values and is called the adjusted short-term
capability. In other words, Cpk is the difference between the process mean and the
nearest specification limit divided by 3 times the standard deviation (sigma). While
calculating Cpk it is assumed that the process distribution is normal. If this value is
≥ 1, then about 99.7% of all the products of the process being studied will be within
the prescribed specification limits. If this value is less than 1, the process variation
is too wide compared to the specification (more non-conforming products are being
produced), so the process needs to be improved. For a Six Sigma process, Cpk would
be 2.
This index compares the width of the process with the width of the specification
and also accounts for any error in the placement of the central tendency. That is
why, in recent times, Cp has largely been replaced by Cpk.

5. CR, the capability ratio, is the inverse of the Cp index expressed as a percentage:
it states what percentage of the engineering specification width is used by the
process spread, assuming the process distribution is normal. A process that just
meets the specification has a CR of 100%. The generally accepted maximum for CR
is 75%, to allow for process variation or drift. This index has the same limitations as
the Cp index.

6. CM is used for machine capability studies instead of entire process capability
studies. It uses an 8 sigma spread to represent the natural tolerance of the machine,
because the variation will increase when other sources of variation such as
material, tooling and fixtures are added. A 10 sigma spread is to be used when a
machine is used in a Six Sigma process.
7. ZU measures the process location (central tendency) relative to its standard
deviation and the upper requirement. Normally, the higher the value of ZU the
better. For a Six Sigma process, ZU would be +6.

8. ZL measures the process location relative to its standard deviation and the lower
requirement. The higher the ZL the better. For a Six Sigma process, ZL would be +6.

9. ZMIN is the smaller of the ZU and ZL values. It is used in calculating Cpk. For a Six
Sigma process, ZMIN would be +6.

Cpk can also be written as ZMIN/ 3.
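
As a hypothetical illustration (using the overall sample standard deviation as a simplification for the short-term sigma, and invented specification limits), Cp and Cpk could be computed in Python as:

import numpy as np

def cp_cpk(data, usl, lsl):
    """Short-term capability indices from a sample (assumes normal, in-control data)."""
    mean = np.mean(data)
    sigma = np.std(data, ddof=1)            # sample standard deviation
    cp = (usl - lsl) / (6 * sigma)
    cu = (usl - mean) / (3 * sigma)
    cl = (mean - lsl) / (3 * sigma)
    cpk = min(cu, cl)                       # equivalently Zmin / 3
    return cp, cpk

np.random.seed(3)
sample = np.random.normal(loc=10.02, scale=0.05, size=50)
print(cp_cpk(sample, usl=10.2, lsl=9.8))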

5.8 Process Performance Indices

3. Process Performance Indices

There are some other indices called process performance indices and as the name
suggests, they study the actual process performance over a period of time. Process
performance indicators help to look at how the total variation in the process
compares to the specification. These indices are Pp, Ppk, and Cpm. Here special
causes are included to calculate total variation. These are called long-term
capability indices. They are equally important because no process operates only for
the short-term.
According to AIAG manuals, Pp and Ppk are based on statistically stable processes
with normally distributed data. The formula for computing Pp and Ppk are very
similar to those of Cp and Cpk. The main difference in their formulae lies in
calculating the standard deviation.

The sample standard deviation, s, is calculated directly from the data as:

s = √( Σ(Χ − Χ bar)² / (n − 1) )

where Χ = the data observations, Χ bar = the sample mean, and n = the sample size.

The performance indices are then computed in the same way as Cp and Cpk, but
using s in place of the short-term standard deviation:

Pp = (USL − LSL) / 6s

Ppk = ZMIN / 3

where ZMIN is the smaller of the ZU (Z upper) and ZL (Z lower) values computed
from s, and Χ double bar, the grand mean, is the total of the sample means divided
by the number of samples, i.e., (Χ1 bar + Χ2 bar + … + Χi bar) / N.

Generally, a preliminary process performance study reflects fewer sources of
variation than a full process capability study; hence the preliminary Ppk value will
be greater than the Cpk value for the process.

Cpm is the process capability measured against performance to a target. This value
measures the width of the specification range as compared to the spread of the
output of the process, together with an error term about how far the mean of the
distribution is away from the target.

When the process is centered on the target value (the center of the specification),
Cp, Cpk, and Cpm will be equal. When the mean is close to the target, Cpk and Cpm
are often approximately equal. But if the mean is more than one standard deviation
away from the target, the three indices will give different views of the process
capability.

4. Process Capability for Attributes Data

The following describes the control chart method of analyzing process capability of
attributes data. (Thomas Pyzdek, 1976)

 Collect samples from 25 or more subgroups of consecutively produced units.
Follow the guidelines described in the steps of "Designing and Conducting a Process
Capability Study".
 Plot the results in the appropriate control chart (e.g., c chart). If all groups are in
statistical control, proceed to step no.3. Otherwise identify the special cause of
variation and take action to eliminate it. Observe that a special cause might be
beneficial.
 Using the control limits from the previous step (called the operational control
limits), put the control chart to use for some time. Once you are satisfied that
enough time has passed for most special causes to have been identified and
eliminated, as verified by the control charts, go to step no. 4.
 The process capability is estimated as the control chart centerline. The centerline
on the attribute charts is the long term expected quality level of the process, e.g.,
the average proportion defective. This is the level created by the common cause of
variation.

If the process capability does not meet management requirements, action should be taken to improve the process. Note that 'problem solving' may not help and may only result in tampering. Even if the capability meets the requirement specifications, ways of improving the process should still be sought continually. The control charts will provide verification of the improvement.
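As an illustration of step 4 of the procedure above, the following Python sketch (with hypothetical defective counts) estimates attributes-data capability as the centerline of a p chart and expresses it as DPMO.

import numpy as np

defectives = np.array([4, 6, 3, 5, 7, 4, 2, 6, 5, 3])    # defectives per subgroup
subgroup_size = 200

p_bar = defectives.sum() / (subgroup_size * len(defectives))   # chart centerline
sigma_p = np.sqrt(p_bar * (1 - p_bar) / subgroup_size)
ucl, lcl = p_bar + 3 * sigma_p, max(0.0, p_bar - 3 * sigma_p)

print(f"capability (average proportion defective): {p_bar:.4f}")
print(f"control limits: {lcl:.4f} .. {ucl:.4f}")
print(f"equivalent DPMO: {p_bar * 1_000_000:.0f}")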

5. Short-term vs. Long-term Shift

From the above description, it is evident that there are two capability values: the short-term Cpk and the long-term Ppk. The differences arise because any time a sample is taken to calculate performance, it is done over a short period of time, yet processes vary over time. Mikel Harry, a pioneer of Six Sigma, states that even the most consistent of processes display a 1.5 sigma shift over the long term. In reality, the short-term capability may simply be the best capability, and the actual performance may not be as good as this short-term calculation suggests. In some cases the shift may be smaller than 1.5 sigma; in other cases, when the process is not in statistical control, the shift may be wider than 1.5 sigma. The answer can be to make a number of Cpk calculations over time to determine the shift unique to the process.

Short-term vs. Long-term Sigma Score

The sigma score (Z) can be calculated from the mean (x̄) and the standard deviation σ. If the short-term standard deviation is known, the sigma score calculated is the short-term sigma score ZST:

ZST = (SL − x̄) / σST

If the long-term standard deviation is known, the sigma score calculated is the long-term sigma score ZLT:

ZLT = (SL − x̄) / σLT

where SL is the specification limit.

The short term sigma score represents the best variation performance that can be
expected from the current process. But in the real world, a process varies over
time. The performance of a process is affected by shift, drift and trends. Six Sigma
permits the study of the effects of short term variation, while making realistic long
term projections, relative to the process’s specifications.

The following graph shows the short term variation of a process together with its
long term variation.

In the graph, the process stays within specifications in the short term. But in the
long run the process is affected by long term influences (like drift and shift) and the
process variation is expanded, causing it to generate defects beyond the
specification limit. The defects are shown in the shaded portion.

Practitioners of Six Sigma found that mathematically shifting a process's short-term distribution closer to its specification limit by a distance of 1.5 times its short-term standard deviation (σST) produces approximately the same number of defects as the process produces in the long term. This can be applied directly to the calculation of short-term and long-term sigma scores, as shown in the diagram given below.

The figure shows the short-term distribution used to project the long-term performance.

Therefore, the sigma score of the shifted distribution is

Z(shifted) = ZST − 1.5

But since the shifted distribution is approximately equal to the long-term distribution, the same equation can be written as

ZLT ≈ ZST − 1.5

Therefore, it is common practice for Six Sigma analysts to first calculate the short-term sigma score and then translate it into the long-term defect rate performance (ZLT), which is communicated in terms of defects per million opportunities (DPMO).
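A minimal Python sketch of this conversion, assuming the customary 1.5 sigma shift; the mean, short-term sigma, and single upper specification limit are hypothetical values.

from scipy.stats import norm

def short_to_long_term(mean, sigma_st, spec_limit, shift=1.5):
    z_st = (spec_limit - mean) / sigma_st      # short-term sigma score
    z_lt = z_st - shift                        # projected long-term sigma score
    dpmo = norm.sf(z_lt) * 1_000_000           # long-term defects per million opportunities
    return z_st, z_lt, dpmo

print(short_to_long_term(mean=10.0, sigma_st=0.5, spec_limit=12.0))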

6. Non-Normal Data Transformations (Process Capability for Non-Normal Data)

Up to this point, process capability for normal data has been described. What are the options for analyzing process capability when the data are not normal? Statisticians acknowledge that most studies, including Six Sigma studies, assume normality, because it is convenient to assume normality when the errors associated with that assumption are insignificant.

The output of many processes does not form a normal distribution. Characteristics like height, weight, etc. can never have an exactly normal distribution. Similarly, there are processes that involve life data and reliability studies, processes where financial information has to be tracked, and customer service processes that follow non-normal distributions. Other examples of non-normal characteristics are cycle time, average handling time (for calls), calls per hour, shrinkage, perpendicularity, straightness, and so on. But non-normal data can be transformed to resemble a normal distribution, and there are a number of statistical techniques used to perform such transformations.

There are several options for calculating process capability indices for non-normal data. It is possible to calculate capability directly with non-normal data, but it is often more helpful to first create a more useful data set from the non-normal data. This can be done by:

1. Subgroup averaging: Averaging the subgroups using control charts usually produces a normal distribution; this follows from the central limit theorem. Note that when the data are highly skewed, more samples are needed.

2. Segmenting the data through stratification, which often separates non-normal data into normal subgroups. This means dividing the data into subsets with similar characteristics.
3. Mathematically transforming data and the specifications through Box-Cox
transformations, logit transformation etc.

4. Usage of different distributions like Weibull distributions, Log-normal


distributions, Exponential distributions, Logistic distributions etc.

5. Non-parametric tests are used when the data are not normal. These tests use medians rather than means, and they are generally used to compare data groups whose sample sizes are less than 100. (For more information on non-parametric tests, see Chapter 6: Black Belt, Analyze.) For larger samples, the Central Limit Theorem can be used.

Process Capability

The following flowchart shows the flow of the process capability calculation, starting with a set of continuous data, analyzing it for non-normality, and then calculating capability for the non-normal data.
Process Capability with Subsets

Generally in a business process, different subsets of transactions go through different parts of the business process. Big dissimilarities between subsets would make the baseline data look non-normal. To measure capability in this kind of situation, the data can be subdivided into the individual subset processes. Process capability can then be calculated on each subset, and the DPMOs from each subset combined, weighted by their relative percentages, as in the sketch below.
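A small Python sketch of that weighting, with hypothetical subset shares and DPMO values:

# Each subset's DPMO is weighted by its share of the overall transaction volume
subsets = [
    {"share": 0.60, "dpmo": 3_500},
    {"share": 0.30, "dpmo": 12_000},
    {"share": 0.10, "dpmo": 45_000},
]
overall_dpmo = sum(s["share"] * s["dpmo"] for s in subsets)
print(f"combined DPMO: {overall_dpmo:.0f}")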
Transforming the Data

When data from a process looks like a non-normal but known distribution, like a
Weibull distribution or log-normal distribution, defect rates or process capability
can be calculated using properties of the distribution given the parameters of the
distribution and the specification limits. The Cpk that would have the same
proportion nonconforming for a normal distribution can be determined afterwards.
Another method is to transform the raw data into an approximately normal
distribution, and measure capability using this assumption and transformed
specification limits. A skewed distribution can be transformed into a normal
distribution using natural logarithms. Minitab software can be used to transform the raw data and the specification limits, and to calculate process capability on the transformed data.
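The following is a minimal Python sketch of the Box-Cox route using scipy rather than Minitab; the skewed cycle-time data and the upper specification limit are hypothetical.

import numpy as np
from scipy import stats
from scipy.special import boxcox as boxcox_at_lambda

rng = np.random.default_rng(7)
cycle_times = rng.lognormal(mean=1.0, sigma=0.4, size=200)   # skewed raw data
usl = 6.0                                                    # upper specification limit

transformed, lmbda = stats.boxcox(cycle_times)               # estimate lambda and transform
usl_t = boxcox_at_lambda(usl, lmbda)                         # transform the spec limit too

mean_t, s_t = transformed.mean(), transformed.std(ddof=1)
ppk_upper = (usl_t - mean_t) / (3 * s_t)                     # one-sided index on transformed scale
print(f"lambda = {lmbda:.3f}, Ppk (upper) = {ppk_upper:.2f}")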

Transforming Continuous Data to Discrete (Attribute) Data

When the above methods cannot be used, the best way to calculate capability is to
collect data on the defects themselves and summarize the results. This method
cannot infer about data beyond the available sample; so collecting much larger
data samples than is necessary for continuous data can help overcome this issue.

5.9 Estimating Process Yield Through RTY and Sigma Level


7. Estimating Process Yield through RTY and Sigma Level

The rolled throughput yield (RTY) combines defects-per-million-opportunities (DPMO) data for a process or a product. It is an indicator of the overall process quality level, or throughput. For a process, throughput is a measure of the output of the process given the raw items that go into it. For a product, throughput is a measure of the quality of the whole product as a result of the quality of its various parts. Throughput summarizes the results of capability analysis into a measure of overall performance.

The following formula is used to calculate the RTY for an N-step process (or N-
featured product):

Rolled Throughput Yield = (1 − DPMO1/1,000,000) × (1 − DPMO2/1,000,000) × … × (1 − DPMON/1,000,000)

where, DPMO i is the defects per million opportunities for step i in the process. The
sigma level equivalent for RTYs is given in the table in the end. These are the
estimated ‘process’ sigma levels.

To calculate the Normalized Yield and its sigma level, which is a kind of average across the steps, the following formula is used:

Normalized Yield = N√[(1 − DPMO1/1,000,000) × (1 − DPMO2/1,000,000) × … × (1 − DPMON/1,000,000)]

When calculating DPMO, you would not actually measure the defects over a million
opportunities. That is a long drawn out process. Instead, you calculate DPMO using
DPO as an estimate, in the following manner:

DPMO = DPO × 1,000,000

where,
DPO (defects per opportunity) is a measurement of capability:

DPO = total number of defects / (number of units × opportunities per unit)

Any item that you work on in Six Sigma is called a unit. A unit may be a product that is manufactured discretely, a new design, an invoice of receipt, or a loan application. An opportunity of any product, process, or service is a specific characteristic that can turn into either a defect or a success. Success or failure is judged by conformity to the opportunity's specification.

For example, consider a process with four individual process steps with DPMO levels of 5,000, 15,000, 1,000, and 50 respectively (yields of 0.995, 0.985, 0.999, and 0.99995).

Rolled Throughput Yield = 0.995 × 0.985 × 0.999 × 0.99995 = 0.979. This means that if production of 1,000 units were started, the output would be only 979 defect-free units. It should be remembered that the RTY is worse than the worst yield of any single process step. As the process becomes more complex and the number of process steps increases, the RTY will continue to erode.

The sigma level equivalent of this process RTY is 3.5. This is the estimated "process" sigma level. It is calculated in the following manner: corresponding to the yield value of 0.979, the long-term sigma score ZLT is about 2.0; adding the 1.5 sigma shift gives ZST ≈ 3.5 (see table below). The sigma level equivalent of this process's normalized yield (0.9947) is 4.1. This is the estimated "organization" sigma level (see table below).
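The four-step example can be reproduced with a short Python sketch; the 1.5 sigma shift is assumed when converting the long-term Z score to a sigma level.

from scipy.stats import norm

dpmo_steps = [5_000, 15_000, 1_000, 50]            # DPMO per step, as in the example
yields = [1 - d / 1_000_000 for d in dpmo_steps]

rty = 1.0
for y in yields:
    rty *= y                                       # rolled throughput yield
normalized_yield = rty ** (1 / len(yields))        # Nth root of the RTY

process_sigma = norm.ppf(rty) + 1.5                # estimated "process" sigma level
org_sigma = norm.ppf(normalized_yield) + 1.5       # estimated "organization" sigma level
print(f"RTY = {rty:.3f}, normalized yield = {normalized_yield:.4f}")
print(f"process sigma ~ {process_sigma:.1f}, organization sigma ~ {org_sigma:.1f}")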

Process σ Levels and (DPMO) Quality Levels

CHAPTER 6
6 The Analyze Phase

Objectives

The objectives of the ANALYZE phase would be to:

 Arrive at the root cause by process analysis or data analysis


 Quantify the opportunity for the project

The true reason why a problem could exist in the process is unearthed in the
Analyze phase.

The goal of analysis can be defined by the equation:

Solving for Y = f(X1, X2, …, Xn)


The goal in the analysis phase is to determine which factors (Xs) in the process are
the largest contributors to the performance of Y (output).

A. Exploratory Data Analysis

Data analysis can be divided into two phases: the exploratory phase and the confirmatory phase. Before actually studying the problem and establishing a cause-and-effect theory, one must thoroughly examine the data for patterns, trends, or gaps. This is called exploratory data analysis.

Exploratory Data Analysis (EDA) is an approach for data analysis that utilizes a
variety of techniques (mostly graphical) to:

1. maximize insight into a data set


2. reveal underlying structure
3. extract important variables
4. detect outliers and irregularities
5. test underlying assumptions
6. develop economical models; and determine optimal factor settings

Four themes come into view repeatedly throughout EDA:

1. Resistance

Resistance refers to the insensitivity of a method to a small change in the data. If a small amount of the data is contaminated, the method should not produce significantly different conclusions.

2. Residuals

Residuals are what remain after removing the effect of a model. For example, one might subtract the mean from each value, or look at deviations about a regression line.

3. Re-expression

It involves examination of different scales on which the data are displayed.

4. Visual Display

Visual display helps the analyst examine the data graphically to reveal the regularities and abnormalities in the data.

There is a wide range of EDA methods and techniques, but two are used frequently in Six Sigma: stem-and-leaf plots and box plots. Most EDA graphics are simple enough to be drawn by hand.
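As a small illustration, the Python sketch below draws a box plot and prints the five-number summary for a hypothetical skewed sample; a stem-and-leaf plot of the same data could just as easily be drawn by hand.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
cycle_time = rng.gamma(shape=4.0, scale=2.0, size=80)   # hypothetical skewed sample

q1, median, q3 = np.percentile(cycle_time, [25, 50, 75])
print(f"min={cycle_time.min():.1f}  Q1={q1:.1f}  median={median:.1f}  "
      f"Q3={q3:.1f}  max={cycle_time.max():.1f}")

plt.boxplot(cycle_time, vert=False)
plt.xlabel("cycle time")
plt.title("Box plot for exploratory data analysis")
plt.show()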

1. Multi-Variate Studies

Multi-variate studies concern the visualization of relationships between key process input and output variables. They involve matching data visualization techniques with the types of data to which they are best suited, and matching the families of variation shown by multi-variate charts with examples.

Multi-Variate Charts

A multivariate chart is a control chart for variables data (see Chapter 8: Black Belt, Control for information on control charts). Multivariate charts are used to detect shifts in the mean or in the relationship (covariance) between several related parameters.

Several charts are available for multivariate analysis:

1. The T² control chart, based upon Hotelling's T² statistic, is used to detect shifts in the process. This statistic is calculated for the process's Principal Components, which are linear combinations of the process variables. The Principal Components (PCs) are independent of one another, whereas the process variables may be correlated with one another. Independence of components is necessary for the analysis. The PCs may be used to estimate the data and thereby provide a basis for an estimate of the prediction error. The number of PCs may never exceed the number of process variables and is often constrained to be fewer.

2. The Squared Prediction Error (SPE) chart may also be used to detect shifts. The SPE is based on the error between the raw data and a Principal Component model fitted to that data.

3. Contribution Charts show the process variables' contributions to either a Principal Component (score contributions) or the SPE (error contributions) for a given sample. This is principally effective for determining which process variable is responsible for a process shift. These charts are restricted to subgroups of size one.

4. Loading Charts offer an indication of the relative contribution of each process variable towards a given Principal Component for all groups in the analysis.

Uses of Multi-Variate charts

A Multivariate Analysis (MVA) may be valuable in SPC whenever there is more than
one process variable. This becomes more useful when the effect of multiple
parameters is dependent or there is a correlation between some parameters.
Sometimes the true source of variation may not be measurable.

An important point is that almost all processes are multivariate but analysis is
frequently not required because there are only a few independent controlled
variables. However, even when the variables become dependent, the use of a single
control chart for each variable increases the probability of randomly finding a
variable ‘out of control’; the more variables there are, the more likely it is that one
of those charts will contain an ‘out of control’ condition even when the process has
not shifted. Thus, the probability of taking a wrong decision (or probability of Type 1
error) is increased if each variable is controlled separately. So the control region for
two separately acting variables is a rectangle; an ellipse would be formed as the
control region for two jointly-acting parameters.
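The following is a minimal Python sketch of the classical Hotelling T² statistic computed directly on two correlated process variables (rather than on principal components, as described above); the data are simulated for illustration.

import numpy as np

rng = np.random.default_rng(5)
x = rng.multivariate_normal(mean=[10.0, 5.0],
                            cov=[[1.0, 0.6], [0.6, 1.0]], size=50)

x_bar = x.mean(axis=0)                           # mean vector
s_inv = np.linalg.inv(np.cov(x, rowvar=False))   # inverse covariance matrix

# T^2 for each individual observation (subgroups of size one)
t2 = np.array([(xi - x_bar) @ s_inv @ (xi - x_bar) for xi in x])
print("largest T^2 values:", np.sort(t2)[-3:])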

2. Measuring and Modeling Relationships between Variables

a. Simple Least-Squares Linear Regression

The use of regression analysis is very important in Six Sigma. Regression analysis
helps the analyst to study cause and effect of a problem. This can be used in every
stage of problem solving and planning process.

Regression is the analysis of data aimed at discovering how one or more variables (called independent or predictor variables) affect other variables (called dependent or response variables). It describes the nature of the relationship between two or more variables.

For e.g., (1) you may be interested in studying the relationship between blood
pressure and age or between height and weight of a person. Here only two
variables are used. This is an example of Simple Linear Regression.

(2) The response of an experimental animal to some drug may depend on the size
of the dose and the age and weight of the animal. Here more than two variables are
used. So it is a case of Multiple Regression.

The Regression Model

It is an application of the linear model in which the population of the response or dependent variable is identified with the numeric values of one or more quantitative independent variables. The purpose of the statistical analysis of a regression model is not to make inferences about differences among the means of those populations, but rather to make inferences about how the mean of the response variable is related to the independent variables. These inferences are made through the parameters of the model.

For Example:

1. Estimating weight gain by the addition of different amounts of various dietary


supplements to a man’s diet.

2. Estimating the amount of sales associated with levels of expenditure for various
types of advertising.

Regression Line

To describe the amount of change that normally takes place in variable Y for a unit change in X, a line has to be fitted to the points plotted in the scatter diagram. This is called the regression line, or linear regression.

The regression line describes the average relationship between the two variables for the whole series. It is also called the line of average relationship.

Simple Linear Regression Equation

The standard form of equation describing a line is

Y= α + β X
When this equation describes the line marking the path of the points in a scatter
diagram, it is called regression equation. The line it describes is called the line of
regression of Y on X.

The values of α and β in the equation are termed constants, i.e., these values are fixed. The first constant, α, indicates the value of Y when X = 0; it is also called the Y-intercept.

The value β indicates the slope of the regression line and gives a measure of the change in Y for a unit change in X. It is also called the regression coefficient of Y on X. If you know the values of α and β, you can easily compute the value of Y for a given value of X.

The values of α and β are calculated with the help of the following two normal equations:

ΣY = nα + βΣX

ΣXY = αΣX + βΣX²
Standard Error

If the scatter of the points about the regression line is less than the scatter of the observed values of Y about their mean, it can be inferred that the regression equation is likely to be useful in estimating Y. The measure of the scatter of the points about the regression line is called the standard error of estimate of Y.

It is given by:

SY = √[ Σ(Y − YC)² / N ]

where SY = standard error of estimate, Y = observed value of Y, YC = estimated value of Y, and N = number of pairs of values.

Regression Model:

In the simple linear regression model two variables, X and Y are taken. The
following are the assumptions underlying the simple linear regression model:

Assumptions

a. The values of independent variable X are said to be fixed by the investigator i.e. X
is referred as non-random variable.

b. The variable X is measured without error i.e. the magnitude of the measurement
error in X is negligible.

c. For each value of X, there is a sub population of Y values. For the usual inferential
procedures of estimation and hypothesis testing to be valid, these sub populations
must be normally distributed.

d. The variances of subpopulations of Y are equal and the means of the


subpopulations of Y lie on the same straight line.

e. The values of Y are statistically independent i.e. the values of Y chosen at one
value of X in no way depend on the values of Y chosen at another value of X.

These assumptions may be summarized by means of the following equation, which is called the regression model:

y = α + βx + e

where y is a typical value from one of the subpopulations of Y, α and β are called the population regression coefficients, and e is the error term. The term e shows the amount by which y deviates from the mean of the subpopulation of Y values from which it is drawn. The e's for each subpopulation are normally distributed with a variance equal to the common variance of the subpopulations of Y values.

Scatter Diagram

A first step that is useful in studying the relationship between two variables is to
prepare a scatter diagram of the data. The points are plotted by assigning values of
independent variables X to the horizontal axis and values of the dependent variable
Y to the vertical axis. The pattern made by the points plotted on the scatter diagram
usually suggests the basic nature and strength of the relationship between the two variables. These impressions may suggest, for example, that the relationship can be described by a straight line crossing the Y-axis below the origin and making approximately a 45-degree angle with the X-axis. It looks as if it would be simple to draw, freehand, through the data points the line that describes the relationship between X and Y. In fact, it is not likely that any freehand line drawn through the data will be the line that best describes the relationship, since a freehand line will reflect the defects of vision or judgment of the person drawing it.
Usage of Scatter Diagram: Scatter diagrams are used to study cause and effect
relationships in Six Sigma. The underlying assumption is that the independent
variables are causing a change in response variables. It answers questions like, “In
the production process, is output of machine A better than Output of machine B?”
etc.

The Least-Squares Line

The method usually employed for obtaining the desired line is known as the method of least squares, and the resulting line is called the least-squares line. In general, the least-squares line will not pass through all of the observed points plotted on the scatter diagram.

The line that you have drawn through the points is best in this sense if:

The sum of the squared vertical deviations of the observed data points (y i) from
the least-squares line is smaller than the sum of the squared vertical deviations of
the data points from any other line.
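A minimal Python sketch of fitting the least-squares line and computing the standard error of estimate as defined above; the (x, y) values are hypothetical.

import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([2.1, 2.9, 3.8, 5.2, 5.9, 7.1, 8.2, 8.8])

b, a = np.polyfit(x, y, deg=1)       # slope b and intercept a of the least-squares line
y_hat = a + b * x                    # estimated values Y_C
s_y = np.sqrt(np.sum((y - y_hat) ** 2) / len(y))   # standard error of estimate

print(f"Y = {a:.2f} + {b:.2f} X,  standard error of estimate = {s_y:.3f}")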

6.1 Evaluating the Regression Equation

Evaluating the Regression Equation

Apply and interpret hypothesis test for regression statistics:

Once the regression equation has been obtained it must be evaluated to determine
whether it adequately describes the relationship between two variables and
whether it can be used effectively for prediction and estimation purposes.

When H0: β = 0 is not rejected: If the relationship between X and Y in the population is linear, then β, the slope of the line that describes this relationship, will be either positive, negative, or zero. If β = 0, sample data drawn from the population will yield regression equations that are of no value for prediction and estimation. Following a test in which the null hypothesis is not rejected, it may be concluded that:

(a) the relationship between X and Y may be linear, but β is so close to zero that the sample data are not likely to yield equations that are useful for predicting Y when X is given; or

(b) the relationship between X and Y is not linear; some curvilinear model may provide a better fit to the data, and sample data are not likely to yield equations that are useful for predicting Y when X is given.

When H0: β = 0 is rejected: The rejection of the null hypothesis may be attributed to one of the following conditions in the population: (a) the relationship is linear and of sufficient strength to justify the use of the sample regression equation to predict and estimate Y for given values of X; or (b) there is a good fit of the data to a linear model, but some curvilinear model might provide an even better fit.

Thus, before using a sample regression equation to predict and estimate, it is desirable to test H0: β = 0. You can do this either by using analysis of variance and the F statistic, or by using the t statistic.

Testing H0: β = 0 with the t-statistic:

For testing hypotheses about β, two cases arise, depending on whether σ²y/x is known.

1. Test statistic (when σ²y/x is known):

z = (b − β0) / σb

where β0 is the hypothesized value of β (the hypothesized value does not have to be zero), b is the unbiased point estimator of the parameter β, σ²y/x is the unexplained variance of the subpopulations of Y values, and σb = σy/x / √Σ(x − x̄)² is the standard deviation of b. When H0 is true and the assumptions are met, the test statistic follows the standard normal distribution with mean zero and variance 1. The level of significance (α = 0.05 or 0.01) is chosen, and the tabulated value is Zα/2.

Decision Rule

The test rejects H0 if the calculated value is greater than the tabulated value, i.e., if |Z| > Zα/2, the test is significant. This means that there is a linear relationship between the dependent variable Y and the independent variable X, and it is concluded that the slope of the true regression line is not zero.

2. Test statistic (when σ²y/x is unknown):

t = (b − β0) / sb

where sb is an estimate of σb. When H0 is true and the assumptions are met, the test statistic is distributed as Student's t with n − 2 degrees of freedom. The level of significance (α = 0.05 or 0.01) is chosen, and the tabulated value is tα/2, n−2.

Decision Rule

The test rejects H0 if the calculated value is greater than the tabulated value, i.e., if |t| > tα/2, n−2, the test is significant. This means that there is a linear relationship between the dependent variable Y and the independent variable X, and it can be concluded that the slope of the true regression line is not zero.
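The slope test can be carried out with scipy, which reports the slope, its standard error, and the p-value for H0: β = 0; the data are the same hypothetical values used in the earlier regression sketch.

import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([2.1, 2.9, 3.8, 5.2, 5.9, 7.1, 8.2, 8.8])

result = stats.linregress(x, y)
t_stat = result.slope / result.stderr          # t with n - 2 degrees of freedom
print(f"slope = {result.slope:.3f}, t = {t_stat:.2f}, p-value = {result.pvalue:.4f}")
# A p-value below alpha (e.g. 0.05) leads to rejecting H0: beta = 0.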

Using the Regression Equation

If the results of the evaluation of the sample regression equation indicate that there
is a relationship between two variables of interest, the regression equation can be
put to practical use. There are two ways in which the equation can be used. It can
be used to predict what value Y is likely to assume given a particular value of X.
When the normality assumption is met, a prediction interval for this predicted value
of Y may be constructed.

You can also use the regression equation to estimate the mean of the
subpopulation of Y values assumed to exist at any particular value of X. Again, if the
assumption of normally distributed populations holds, a confidence interval for this
parameter may be constructed.

Predicting Y for a Given X

If the assumptions are met, and when σ²y/x is unknown, the 100(1 − α) percent prediction interval for Y is given by

ŷ ± tα/2, n−2 · sy.x · √[1 + 1/n + (xp − x̄)²/Σ(x − x̄)²]

where xp is the particular value of X at which you wish to obtain a prediction interval for Y, sy.x is the sample estimate of σy/x, and the degrees of freedom used in selecting t are n − 2.

Estimating the Mean of Y for a Given Χ

The 100(1 − α) percent confidence interval for μy/x, when σ²y/x is unknown, is given by

ŷ ± tα/2, n−2 · sy.x · √[1/n + (xp − x̄)²/Σ(x − x̄)²]
b. The Multiple Least squares Linear Regression Model

In the multiple regression model it is assumed that a linear relationship exists between some variable Y, which is called the dependent variable, and k independent variables, X1, X2, …, Xk. The independent variables are sometimes referred to as explanatory variables because of their use in explaining the variation in Y.

Assumptions

a. The values of independent variables Xi are said to be fixed by the investigator or


Xi are referred as non-random variables.

b. The variable X is measured without error i.e. the magnitude of the measurement
error in X is negligible.

c. For each set of Xi , there is a sub population of Y values. For the usual inferential
procedures of estimation and hypothesis testing to be valid, these sub populations
must be normally distributed.

d. The variances of subpopulations of Y are equal.

e. The values of Y are statistically independent i.e. the values of Y chosen at one set
of X in no way depend on the values of Y chosen at another set of X values.

These assumptions may be summarized by the following equation, which is called the multiple regression model:

yj = β0 + β1x1j + β2x2j + … + βkxkj + ej

where yj is a typical value from one of the subpopulations of Y values, the βi are called the regression coefficients, x1j, x2j, …, xkj are, respectively, particular values of the independent variables X1, X2, …, Xk, and ej is a random variable with mean 0 and variance σ².

When the above equation contains one dependent and two independent variables, the model is written as:

yj = β0 + β1x1j + β2x2j + ej

A plane in three-dimensional space may be fitted to the data points. When the model contains more than two independent variables, it is described geometrically as a hyperplane.

The deviation of a point from the plane is represented by

ej = yj − (β0 + β1x1j + β2x2j)

In this equation, β0 represents the point where the plane cuts the Y-axis; that is, it represents the Y-intercept of the plane. β1 measures the average change in Y for a unit change in x1 when x2 remains unchanged, and β2 measures the average change in Y for a unit change in x2 when x1 remains unchanged. For this reason β1 and β2 are referred to as partial regression coefficients.

Least-Square Method to Obtain Multiple Regression Equation

Unbiased estimates of the parameters β0, β1, β2, …, βk of the model specified in the equation are obtained by the method of least squares. This means that the sum of squared deviations of the observed values of Y from the resulting regression surface is minimized. In the three-variable case, the method of least squares selects the sample estimates b0, b1, b2 in such a way that the quantity

Σ(yj − b0 − b1x1j − b2x2j)²

is minimized.
Evaluating the Multiple Regression Equation

Apply and interpret hypothesis test for regression statistics

Once the regression equation has been obtained it must be evaluated to determine whether it adequately describes the relationship between the dependent and independent variables and whether it can be used effectively for prediction and estimation purposes.

Testing the Regression Hypothesis

To determine whether the overall regression is significant, you have to evaluate the
strength of the linear relationship between dependent variables Y and the
independent variables Xi individually. That is, when you want to test the null
hypothesis, that is βi = 0 against the alternatives βi ≠ 0 (i=1,2,…….k), the validity of
the procedures rests on the assumptions stated earlier: that for each combination
of Xi values there is a normally distributed subpopulation of Y values with variance
σ 2.

Hypothesis tests for the βi:

To test the null hypothesis that βi is equal to some particular value, say βi0, the following t statistic may be computed:

t = (bi − βi0) / sbi

where the degrees of freedom are equal to n − k − 1 and sbi is the estimated standard deviation of bi. When H0 is true and the assumptions are met, the test statistic is distributed as Student's t with n − k − 1 degrees of freedom. The level of significance α = 0.05 is chosen, and the tabulated value is tα/2, n−k−1.

The test rejects H0 if the calculated value is greater than the tabulated value, i.e., |t| > tα/2, n−k−1. This shows that the test is significant, which means there is a significant linear relationship between the dependent variable Y and the independent variable Xi.
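A minimal sketch of multiple least-squares regression and the t tests on the individual coefficients using the statsmodels package; the data are simulated purely for illustration.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
x1 = rng.normal(size=40)
x2 = rng.normal(size=40)
y = 2.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(scale=0.5, size=40)

X = sm.add_constant(np.column_stack([x1, x2]))   # adds the intercept column
model = sm.OLS(y, X).fit()                       # least-squares fit
print(model.summary())                           # t statistics and p-values for each beta_i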
6.2 Using the Multiple Regression Equation

Using the Multiple Regression Equation

The multiple regression equation is used to obtain a ŷ value when particular values of the two or more X variables present in the equation are given. You may interpret ŷ as an estimate of the mean of the subpopulation of Y values assumed to exist for a particular combination of Xi values.

Under this interpretation ŷ is called an estimate, its equation is called the estimation equation, and the corresponding interval is called a confidence interval. The second interpretation of ŷ is that it is the value that Y is most likely to assume for given values of the Xi. In this case ŷ is called the predicted value of Y, the equation is called the prediction equation, and the corresponding interval is called a prediction interval.

The 100(1 − α) percent confidence interval for the mean of a subpopulation of Y values, given particular values of the Xi, is as follows:

ŷ ± tα/2, n−k−1 · Sŷ

where Sŷ is the standard error of the estimate ŷ. The 100(1 − α) percent prediction interval for a particular value of Y, given particular values of the Xi, is as follows:

ŷ ± tα/2, n−k−1 · S′ŷ

where S′ŷ is the standard error of the prediction.

c. Simple Linear Correlation

Correlation analysis deals with the association between two or more variables or it
is an attempt to determine the degree of relationship between two variables, when
the relationship is of a quantitative nature.

For example:
(1) to check the effect of increase in rainfall up to a point and the production of
rice.

(2) to check the effect of trained operators on the output of a process

(3) to check whether there exists some relationship between age of husband and
age of wife.

The use of correlation analysis is very important in Six Sigma. Correlation analysis
helps the analyst to study cause and effect of a problem. This can be used in every
stage of problem solving and planning process.

Significance of the study of correlation

1. Most of the variables show some kind of relationship. For example, there is
relationship between price and supply, income and expenditure, etc. With the help
of correlation analysis you can measure in one figure the degree of relationship
existing between the variables.

2. Once you know that the variables are closely related, you can estimate the value
of one variable given the value of another with the help of regression analysis.

3. In Six Sigma operations, correlation analysis enables the analyst to estimate


costs, sales, prices and other variables on the basis of some other series with which
these costs, sales, or prices may be functionally related.

Correlation Assumption

The following assumptions must hold for inferences about the population to be
valid when sampling is from bivariate distributions.

• For each value of X there is a normally distributed subpopulation of Y values.
• For each value of Y there is a normally distributed subpopulation of X values.
• The joint distribution of X and Y is a normal distribution called the bivariate normal distribution.
• The subpopulations of X and Y values all have the same variance.

Difference between Correlation and Causation


Correlation analysis helps in determining the degree of relationship between two or
more variables - it does not tell anything about cause and effect relationship.
Correlation does not necessarily imply causation though the existence of causation
always implies correlation. Even a high degree of relationship does not necessarily
imply that a relationship of cause or effect exists between the variables. In general,
if factors A and B are correlated, it may be that

1. A causes B
2. B causes A
3. A and B influence each other continuously
4. A or B both are influenced by C or
5. The correlation is due to chance

This can be explained as follows:

a. The correlation may be due to pure chance, especially in a small sample. You may get a high degree of correlation between two variables in a sample, while in the universe there may not be any relationship between the two variables at all. For example:

Income ($): 350 360 370 380 390
Weight (lbs): 120 140 160 180 200

The above data show a perfect positive relationship between income and weight, i.e., as the income increases the weight increases, and the rate of change between the two variables is the same.

b. Both of the correlated variables may be influenced by one or more other variables. It is possible that a high degree of correlation between two variables is due to some common cause affecting each in the same way. For example, suppose the correlation between teachers' salaries and the consumption of liquor over a period of years comes out to be 0.9. This does not prove that teachers drink, nor does it prove that liquor sales increase teachers' salaries; both may simply have been influenced by a common factor over the same period.

c. Both variables may be mutually influencing each other, so that neither can be designated as the cause and the other the effect. There may be a high degree of correlation between the variables, but it is difficult to pinpoint which is the cause and which is the effect. For example, as the price of a commodity increases its demand goes down, so price is the cause and demand is the effect. But it is also possible that increased demand for a commodity (due, say, to growth of the population) pushes its price up; now the increased demand is the cause and the price is the effect.

Coefficient of Correlation

The coefficient of correlation is a measure of the covariance between two series. The covariance between two series is written as:

Covariance = Σxy / N

where x and y are the deviations of the X and Y series from their respective means. The coefficient of correlation, r, is the covariance expressed in standard-deviation units:

r = Σxy / √(Σx² × Σy²)

When r = +1, there is a perfect positive correlation. When r = −1, there is a perfect negative correlation. When r = 0, there is no correlation.

Steps to Calculate Correlation Coefficient

1. Take the deviations of X series from the mean of X and denote these deviations
by x.

2. Square these deviations and obtain total i.e. Σx2

3. Take the deviations of the Y series from the mean of Y and denote these deviations by y.

4. Square these deviations and obtain the total, i.e., Σy².

5. Multiply the deviations of X and Y series and obtain the total i.e. Σ x y.

6. Substitute the values of Σ xy, Σx2, Σy2 in the formula.

Direct Method of Finding Correlation Coefficient

The correlation coefficient can also be calculated without taking deviations of items from the actual or assumed mean, i.e., using the actual X and Y values. The formula in such a case is:

r = [NΣXY − (ΣX)(ΣY)] / √{[NΣX² − (ΣX)²][NΣY² − (ΣY)²]}

Since r is a pure number, shifting the origin and changing the scale of the series does not affect the value of the correlation coefficient.
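The income and weight series shown earlier have r = +1; the small Python sketch below computes r with scipy and confirms that shifting the origin or changing the scale of a series leaves r unchanged.

import numpy as np
from scipy import stats

income = np.array([350, 360, 370, 380, 390], dtype=float)
weight = np.array([120, 140, 160, 180, 200], dtype=float)

r, p_value = stats.pearsonr(income, weight)
print(f"r = {r:.3f}, p-value = {p_value:.4f}")
print("r after rescaling income:", stats.pearsonr(10 * income + 5, weight)[0])  # unchanged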

Apply and interpret a hypothesis test for correlation coefficient


6.3 Diagnostics and Hypothesis Testing

Analysis of Residuals of the Model

The residuals carry important information concerning the appropriateness of the model assumptions. Analyses may include informal graphics to display general features of the residuals as well as formal tests to detect specific departures from the underlying assumptions.

Each formal and informal procedure is complementary and both have a place in
residual analysis.

The residual models are also called Measurement Error Models. For simple Linear
Measurement Error model, the goal is to estimate a straight line fit between two
variables from bivariate data, both of which are measured with error.

The standard linear regression model is

Y = Xβ + ε,  ε ~ N(0, σ²I)

where X is a matrix of known constants, Y is an n-vector of observable responses, β is an unknown parameter vector, and ε is an unobservable error with the indicated distributional properties. To assess the appropriateness of the model for a given problem, it is necessary to determine whether the assumptions about the errors are reasonable. Since the errors ε are unobservable, this must be done indirectly using the residuals. In the regression model it is assumed that the error term satisfies these distributional assumptions: zero mean, constant variance, independence, and normality.

Standard Residual Plots

Residuals can be used in a variety of graphical and non-graphical summaries to identify inappropriate assumptions. Generally, a number of different plots will be required to extract the available information. Standard residual plots are those in which the ei are plotted against the fitted values ŷi. These plots are commonly used to diagnose non-linearity and non-constant error variance, and they give relevant information about the fit of the model.
Identifiability of the Model

One of the most important differences between measurement error (residual) models and ordinary regression models concerns model identifiability. When all of the random variables in the measurement error model are assumed to be jointly normal, different sets of parameters can lead to the same joint distribution of x and y. In this situation, it is impossible to estimate the parameters consistently from the data.

B. Hypothesis Testing

1. Fundamentals Concepts of Hypothesis Testing

A hypothesis may be defined as a statement about one or more populations. The hypothesis is frequently concerned with the parameters of the populations about which the statement is made.

Hypothesis testing is of considerable importance in Six Sigma. A hypothesis test is designed to make an inference about the true population value at a desired level of confidence.

For example: A hospital administrator may hypothesize that the average length of
stay of patients admitted to the hospitals is five days or a physician may
hypothesize that a certain drug will be effective in 90 % of cases for which it is used.
By means of hypothesis testing one determines whether or not such statements
are compatible with the available data.

a. Hypothesis Testing

1. Study the data


The nature of the data that form the basis of the testing procedures must be
understood, since this determines the particular test to be employed. Whether the
data consists of counts or measurements must be determined.

2. Set up a hypothesis
The first step in hypothesis testing is to set up a hypothesis about a population parameter. There are two statistical hypotheses involved in hypothesis testing.

The null hypothesis is the hypothesis to be tested, designated by the symbol H0. It is often referred to as a hypothesis of no difference. In the testing process, the null hypothesis is either rejected or not rejected. If the null hypothesis is not rejected, the data on which the test is based do not provide sufficient evidence to cause rejection. If the testing procedure leads to rejection, the data in hand are not compatible with the null hypothesis.

The alternative hypothesis is a statement of what is believed to be true if the sample data cause you to reject the null hypothesis. It is designated by HA. The null and alternative hypotheses are complementary.

For example, a psychologist who wishes to test whether or not a certain class of people has a mean I.Q. higher than 100 might establish the following null and alternative hypotheses:

H0: μ ≤ 100     HA: μ > 100

3. Test statistic

The test statistic is some statistic that may be computed from the data of the sample. There are many possible values that the test statistic may assume. The test statistic serves as a decision maker, since the decision to reject or not to reject the null hypothesis depends on the magnitude of the test statistic. In general, a test statistic has the form (relevant statistic − hypothesized parameter) / (standard error of the relevant statistic). For example, a test statistic used for a large sample from a continuous normal distribution is

z = (x̄ − μ0) / (σ/√n)

The distribution of this test statistic follows the standard normal distribution if the null hypothesis is true.

4. Decision Rule

The decision rule states that the null hypothesis is rejected if the value of the test statistic computed from the sample falls in the rejection region (the region of values that are unlikely to occur if the null hypothesis is true), and is not rejected if the computed value of the test statistic falls in the non-rejection region.

b. Significance Level, Type I and Type II Errors, Power

The level of significance, designated by α is a probability and, in fact, is the probability of


rejecting a true null hypothesis.

The decision as to which values go into the rejection region and which ones go into
the non-rejection region is made on the basis of the desired level of significance, α.
The term reflects the fact that hypothesis tests are sometimes called significance
tests, and the computed value of the test statistic that falls in the rejection region is
called ‘significant’.

A small value of α is selected in order to make the probability of rejecting a true null
hypothesis small. The more frequently encountered values of α are .01, .05 and
.10.

The following diagram illustrates the regions in which one would accept or reject the null hypothesis when it is tested at the 5 percent level of significance and a two-tailed test is employed. It may be noted that 2.5 percent of the area under the curve is located in each tail.

Types of Errors

When a statistical hypothesis is tested there are four possibilities:

1. The hypothesis is true but your test rejects it. (Type I error)
2. The hypothesis is false but your test accepts it. (Type II error)
3. The hypothesis is true but your test accepts it. (Correct Decision)
4. The hypothesis is false but your test rejects it. (Correct Decision)
For example:

Assume that the difference between two population means is actually zero. If the
test of significance when applied to the sample means gives that the difference in
population means is significant, you make a Type I Error. On the other hand,
suppose there is a true difference between the two population means. Now if the
test of significance leads to the judgment “not significant”, you commit a Type II
Error.

It is considered more dangerous to accept a false hypothesis (Type II error) than to reject a correct one (Type I error). While testing a hypothesis the aim should be to reduce both errors, but with a fixed sample size it is not possible to control both errors simultaneously. Hence the probability of committing a Type I error is fixed at a certain level, called the level of significance, and the Type II error is then reduced as far as possible to get better results.

6.4 Power of a Hypothesis Test and Point and Interval Estimation

Power of a Hypothesis Test

It is important to know how well a hypothesis test works. The measure of how well the test works is called the power of the test: the probability of rejecting the null hypothesis when it is in fact false, which equals 1 − β, where β is the probability of a Type II error.
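A minimal sketch, assuming a two-sided z test for a mean with known sigma, of how the power of a test can be computed; the numerical values are hypothetical.

from scipy.stats import norm

def power_z_test(delta, sigma, n, alpha=0.05):
    """Probability of rejecting H0 when the true mean has shifted by delta."""
    z_crit = norm.ppf(1 - alpha / 2)
    shift = delta / (sigma / n ** 0.5)          # shift expressed in standard-error units
    return norm.sf(z_crit - shift) + norm.cdf(-z_crit - shift)

print(f"power = {power_z_test(delta=0.5, sigma=1.0, n=25):.3f}")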
c. Sample Size

The size of a sample means the number of sampling units selected from a population for investigation. A smaller but well-selected sample may be superior to a larger but badly selected one. The sample size should be neither too small nor too large; it should be 'optimum'. If a sample larger than necessary is used, resources are wasted; if the sample is smaller than required, the objectives of the analysis may not be achieved.

Determination of Sample Size

The formula used for determining the sample size depends upon the information that is available about the population, as illustrated in the sketch below.
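The sketch below uses the standard sample-size formulas for estimating a mean or a proportion with a maximum allowable error E; these formulas are stated as an assumption, since the courseware's own formula is not reproduced above.

from math import ceil
from scipy.stats import norm

def n_for_mean(sigma, E, alpha=0.05):
    """Sample size to estimate a mean within +/- E with known sigma."""
    z = norm.ppf(1 - alpha / 2)
    return ceil((z * sigma / E) ** 2)

def n_for_proportion(p, E, alpha=0.05):
    """Sample size to estimate a proportion within +/- E."""
    z = norm.ppf(1 - alpha / 2)
    return ceil(z ** 2 * p * (1 - p) / E ** 2)

print(n_for_mean(sigma=4.0, E=1.0))        # about 62
print(n_for_proportion(p=0.5, E=0.05))     # about 385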
2. Point and Interval Estimation

Estimation is to use the statistics obtained from the sample as estimate of


unknown parameters of the population from which the sample is drawn.

For example: Suppose a survey is done to check the education level of the
students in a city during last two years. The data is collected but it is difficult to go
through the entire data to check this level. So a better way is to collect a sample of
the records and find average of the education level for the selected areas. From
this, an estimate of the mean education level of the students in the state can be
computed.
A point estimate is a single numerical value used to estimate the corresponding unknown population parameter. The procedure in point estimation is to select a random sample of n observations from the population and then, from these observations, compute the statistic used as the estimator of the population parameter. A point estimate is a single point on the real number scale; it provides a single best guess of the value of the parameter under investigation.

For example: On the basis of a sample study, if you estimate the average income
of the people living in a city as $300 it will be a point estimate i.e., this means that
the calculated value from the sample of the given population is used as a point
estimator of population mean.

An interval estimate of a population parameter is a statement of two values between which it is estimated that the parameter lies.

An interval estimate is always specified by two values, a lower one and an upper one. Interval estimation refers to the estimation of a parameter by a random interval, called the confidence interval, whose end points are L and U with L < U. L is called the lower confidence limit and U is called the upper confidence limit. If θ is taken as the population parameter, the confidence interval is given by

P(L < θ < U) = 1 − α

For example, on the basis of a sample study, if the estimate of the average income of the people living in a city lies between $900 and $1,000, it is an interval estimate, i.e., the value of the population parameter is estimated to lie in this interval.

In Six Sigma applications, inferences about populations are made on the basis of data from samples. Any estimate that is based on a sample involves some amount of sampling error, even though it may be the 'best estimate' of the population parameter.

Unbiasedness of an Estimator

An estimator is said to be unbiased if its expected value is identical with the


population parameter being estimated. That is if T is an unbiased estimate of θ,
then E(T) = θ where E(T) is read as the expected value of T. For a finite population
E(T) is obtained by taking the average value of T computed from all possible
samples of a given size that may be drawn from the given population.

For Example: The administrator of a large hospital is interested in the mean age of
the patients admitted to the hospital during a given year. From a sample of records
it is found that the mean age of the patients admitted that year is 56. If the value
expected in totality is again 56, this means that the estimator is unbiased. The value
of the population parameter is same as the expected value of the test statistic.

Efficiency of the Estimators

The efficiency refers to the sampling variability of an estimator. If two competing


estimators are both unbiased, the one with the smaller variance is said to be
relatively more efficient for a given sample size. The smaller the variance of the
estimator, the more concentrated is the distribution of the estimator around the
parameter being estimated and therefore the better this estimator is.

For example: In a survey on people from a city in Ohio, two samples are taken of
size 500 each and it was found that average beef eaters are 280 and 285 for the two
samples. Both are unbiased means the expected value is the same as the actual
value. It was found that variation due to sampling in the first sample is 0.2 and in
the second sample is 0.5. It is seen that the average taken from first sample shows
a smaller variation with totality than the second sample. So the first sample
estimator is relatively more efficient than the second.

Standard Error

The standard deviation of the sampling distribution is called the Standard Error. It
measures the sampling variability due to chance or random forces.

It is used as an instrument in testing a given hypothesis. If the difference between the observed and expected means is more than 1.96 times the standard error, at the 5% level of significance, the result of the experiment does not support the hypothesis and the difference is regarded as significant; otherwise it is regarded as not significant.

The standard error provides an idea of the unreliability of a sample. The greater the standard error, the greater the departure of the actual frequencies from the expected ones, and hence the lower the reliability of the estimate. With the help of the standard error you can determine the limits within which the parameter values are expected to lie.

Confidence Interval

A confidence interval is an interval about the statistic that has a predetermined probability of including the true population parameter. It estimates the range in which the population parameter lies. A confidence interval can be either one-sided or two-sided. A one-sided confidence interval places an upper or a lower bound on the value of the parameter, while a two-sided confidence interval considers both.

If L is the lower and U the upper confidence limit, and θ is taken as the population parameter, then the confidence interval is given by

P(L ≤ θ ≤ U) = 1 − α

Tolerance Interval

The tolerance interval estimates the range that should contain a certain percentage of the individual values in the population. It is based upon only a sample of the entire population, and it is not a 100% confidence interval, i.e., you cannot be 100% sure that the interval will contain the specified proportion. So there are two different proportions associated with a tolerance interval: the first is the degree of confidence, and the second is the percent coverage.

For example, you may be 95% confident that 80% of the population will fall within the range specified by the tolerance interval.

The tolerance interval can be determined by using the following equation:

x̄ ± k·s

where k is the tolerance factor obtained from tolerance-factor tables for the chosen confidence level, coverage, and sample size.


Prediction Interval

The interval used to estimate what future values of the population parameter will be, based on the present and past values of the sample taken, is called the prediction interval. The confidence and tolerance intervals estimate present population parameters, while the prediction limits address future observations. A minimum amount of background data is recommended in order to determine a reliable standard deviation. To determine the prediction limits, the sample mean and standard deviation are computed from the background data of sample size n. Once you decide how many sampling periods and how many samples will be collected per sampling period, you can determine the prediction interval by using the equation:

x̄ ± t(α, n−1) · s · √(1 + 1/n)
6.5 Test for Means, Variance and Proportions

3. Test for Means, Variance and Proportions

1. Testing For Means

a. Testing hypothesis for a single population mean

Consider hypothesis testing about a population mean under three different conditions:

• When sampling is from a normal distribution with known variance.
• When sampling is from a normal distribution with unknown variance.
• When sampling is from a population that is not normally distributed.

For a large sample, when sampling is from a normal distribution with known variance, the test hypothesis is

H0: μ = μ0 against HA: μ ≠ μ0

and the test statistic is

z = (x̄ − μ0) / (σ/√n)
For example:

For a random sample of 5 persons fed on diet A, the increases in weight (in pounds) over a certain period were:

10, 12, 13, 11, 14

For another random sample of 7 persons fed on diet B, the increases in weight (in pounds) over a certain period were:

8, 9, 12, 14, 15, 10, 9

Test whether diets A and B differ significantly as regards their effect on increase in weight, at the 5% level of significance.

Solution:
n1 = 5, n2 = 7

X1 = 10, 12, 13, 11, 14

X2 = 8, 9, 12, 14, 15, 10, 9

The calculated value is less than the tabulated value and hence the null hypothesis is not rejected. The experiment provides no evidence against the hypothesis. Therefore, it is concluded that diets A and B do not differ significantly as regards their effect on increase in weight.
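The diet comparison can be reproduced with scipy's pooled (equal-variance) two-sample t test:

from scipy import stats

diet_a = [10, 12, 13, 11, 14]
diet_b = [8, 9, 12, 14, 15, 10, 9]

t_stat, p_value = stats.ttest_ind(diet_a, diet_b, equal_var=True)
print(f"t = {t_stat:.3f}, p-value = {p_value:.3f}")
# t is about 0.73 with p > 0.05, so H0 is not rejected: the diets do not
# differ significantly in their effect on weight gain.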

2. Tests for Variances


a. Hypothesis Testing for a Single Population Variance

For data consisting of a random sample drawn from a normally distributed population, the test hypothesis is H0: σ² = σ0² against HA: σ² ≠ σ0², and the test statistic is

χ² = (n − 1)s² / σ0²

which, when H0 is true, follows a chi-square distribution with n − 1 degrees of freedom.
6.6 Hypothesis testing for the ratio of Two Population variances

b. Hypothesis Testing for the Ratio of Two Population Variances

For two independent random samples from normally distributed populations, the hypotheses are H0: σ1² = σ2² against HA: σ1² ≠ σ2², and the test statistic is the variance ratio

F = s1² / s2²

which, when H0 is true, follows an F distribution with n1 − 1 and n2 − 1 degrees of freedom.
3. Tests for Proportions
a. Hypothesis Testing for a Single Population Proportion

When variables are measured qualitatively, the presence or absence of a particular characteristic is recorded. If p indicates the proportion possessing the characteristic and q the proportion not possessing it, the sampling distribution of the sample proportion is approximated by the normal distribution in accordance with the central limit theorem.

The Test Hypothesis is

H0 : p = p0 (null hypothesis)

H1 : p ≠ p0 (2-sided alternative)

H2 : p > p0 (1-sided alternative)

H3 : p < p0 (1-sided alternative)

where p0 is the specified value of the population proportion.

The test statistic used is

z = (p̂ − p0) / √[p0(1 − p0)/n]

where p̂ is the sample proportion and n is the sample size.
When H0 is true and the assumptions are met, the test statistic is distributed as
standard normal distribution with mean zero and variance one. The level of
significance α = 0.05 is known. The tabulated value is Z α for a one sided and Zα/2 for
a two sided alternative.

Decision Rule

The test rejects H0 if the calculated value is greater than the tabulated value, i.e. |Z| > Zα/2; the test is then significant. This means there is a significant difference between the actual and the hypothesized proportion.

For example:

A survey on people affected by cancer in a city found that 18 out of 423 were
affected. You wish to know if you can conclude that fewer than 5 percent of people
in the sampled population are affected by cancer at 5 percent level of significance.

Solution:

From the data, out of the responses of 423 individuals, 18 possessed the characteristic of interest, so

p̂ = 18/423 = 0.0426.

The test hypothesis is

H0 : p ≥ 0.05

HA : p < 0.05 (1-sided alternative)

The test statistic is

Z = (0.0426 - 0.05) / √(0.05 × 0.95 / 423) ≈ -0.70

Since the calculated value does not fall below the tabulated critical value of -1.645, the test fails to reject H0.

This shows that the population proportion of cancer-affected people may be 0.05 or more; the data do not allow the conclusion that fewer than 5 percent are affected.
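A sketch of the same calculation in Python, using the normal approximation Z = (p̂ - p0) / √(p0 q0 / n):

import math
from scipy import stats

n, x = 423, 18                     # sample size, number affected
p_hat, p0 = x / n, 0.05

z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
z_crit = stats.norm.ppf(0.05)      # lower-tail critical value at alpha = 0.05

print(f"p_hat = {p_hat:.4f}, z = {z:.2f}, critical value = {z_crit:.3f}")
# z (about -0.70) is not below -1.645, so H0 cannot be rejected: we cannot
# conclude that fewer than 5 percent of the population is affected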

4. Paired Comparisons Tests


Definition: For this test of the difference between two population means, it is assumed that the samples are dependent. This means that the values of the observations in one sample are related to the corresponding values in the other sample. In fact, the two samples may consist of pairs of observations made on the same object or individual.

For example, to find out the effect of training on some employees or to find out the
efficacy of a coaching class, a hypothesis test based on this type of data is known as
a paired comparisons test.

Given are two dependent random samples of size n from the same population having a normal distribution. The test hypothesis used in this case is H0 : μd = 0 against the alternative that the mean of the paired differences, μd, is not zero (or is greater than zero for a 1-sided test). The test statistic is t = d̄ / (sd/√n), where d̄ and sd are the mean and standard deviation of the n paired differences.

Decision Rule

The test rejects H0 if the calculated value is greater than the tabulated value, i.e. |t| > tn-1,α/2 for a 2-sided alternative, or t > tn-1,α for a 1-sided alternative; the test is then significant. This means that there is a difference between the actual mean and the estimated mean.

For example:

12 students were given intensive coaching and 4 tests were conducted in a month. The scores of tests 1 and 4 are given below. Do the scores from test 1 to test 4 show an improvement?

Since the calculated value is greater than the tabulated value, the test is to reject
the null hypothesis.

Hence the course has been useful.
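The scores themselves are not reproduced above, so the sketch below uses hypothetical test 1 and test 4 scores for the 12 students purely to illustrate the paired t-test calculation:

import numpy as np
from scipy import stats

# Hypothetical scores for 12 students (illustrative only; the original table is not shown)
test1 = np.array([50, 42, 51, 26, 35, 42, 60, 41, 70, 55, 62, 38])
test4 = np.array([62, 40, 61, 35, 30, 52, 68, 51, 84, 63, 72, 50])

# Paired t-test on the differences; improvement means test4 > test1 (one-sided)
t_calc, p_two_sided = stats.ttest_rel(test4, test1)
p_one_sided = p_two_sided / 2 if t_calc > 0 else 1 - p_two_sided / 2

t_tab = stats.t.ppf(0.95, df=len(test1) - 1)   # one-sided, alpha = 0.05
print(f"t = {t_calc:.2f}, tabulated t = {t_tab:.2f}, one-sided p = {p_one_sided:.4f}")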

5. Goodness of Fit Tests

Definition: The goodness of fit test enables one to ascertain how well a theoretical distribution of frequencies fits the distribution observed in the sample. When an ideal frequency curve is fitted to the data, the test shows how well this curve agrees with the observed facts.

For a given random sample of observations drawn from a relevant statistical


population, the test Hypothesis is

H0 : Population is normally distributed


HA : Population is not normally distributed

The test statistic is

X² = Σ (Oi - Ei)² / Ei ,  where i = 1, 2, 3, …, k

When H0 is true and the assumptions are met, the test statistic is distributed as a chi-square (X²) distribution with k-r degrees of freedom. The level of significance α = 0.05 or 0.01 is known. The tabulated value is X²α/2,k-r for the upper side and X²1-α/2,k-r for the lower side in a two-sided alternative. The tabulated value is X²α,k-r for the upper side and X²1-α,k-r for the lower side for a one-sided alternative.

Here, k = Number of groups for which observed and expected frequencies are
available.

r = Number of restrictions imposed on the comparison.

For Example:

In an accounting department of a bank 50 accounts are selected at random and


examined for errors. The following results have been obtained:

No. of errors: 0 1 2 3 4 5
No. of accounts: 6 13 13 8 4 3

On the basis of this information, can it be concluded that the errors are distributed
according to the Poisson process?

Solution:

To solve this question you have to obtain the Expected frequencies by supplying
Poisson distribution and test the goodness of fit by X 2 test.

The probability of the Poisson distribution is given by P(X = x) = e^(-λ) λ^x / x!, where λ is the mean number of errors per account estimated from the observed data.

No. of errors: 0 1 2 3 4 5

Observed Frequency (O): 6 13 13 8 4 3

Expected Frequency (E): 6.24 13.52 13.52 9.01 4.50 1.80

Test Hypothesis

H0 : Errors are distributed according to Poisson distribution.

HA : Errors are not distributed according to Poisson distribution.

The Test Statistic is

X² = Σ (O - E)² / E

Since the calculated value is smaller than the tabulated value, the test accepts H0, i.e. the null hypothesis is accepted. Hence the given information verifies that the errors are distributed according to the Poisson distribution.
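A minimal sketch of the same calculation, using the observed and expected frequencies above and assuming the restrictions are the total frequency and the estimated Poisson mean (so k - r = 6 - 2 = 4 degrees of freedom):

import numpy as np
from scipy import stats

observed = np.array([6, 13, 13, 8, 4, 3])
expected = np.array([6.24, 13.52, 13.52, 9.01, 4.50, 1.80])

# Chi-square statistic: sum of (O - E)^2 / E
chi_sq = ((observed - expected) ** 2 / expected).sum()

df = 6 - 2                           # k - r degrees of freedom
chi_tab = stats.chi2.ppf(0.95, df)

print(f"calculated chi-square = {chi_sq:.2f}, tabulated = {chi_tab:.2f}")
# the calculated value (about 1.0) is well below the tabulated value, so H0 is accepted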

6.7 Analysis of Variance (ANOVA)

6. Analysis of Variance (ANOVA)

ANOVA is defined as a technique in which the total variation present in the data is partitioned into two or more components, each having a specific source of variation. In the analysis, it is possible to determine the contribution of each of these sources of variation to the total variation. It is designed to test whether the means of more than two quantitative populations are equal. It consists of classifying and cross-classifying statistical results and helps in determining whether the given classifications are important in affecting the results.

The assumptions in analysis of variance are:

 Normality
 Homogeneity
 Independence of error

Whenever any of these assumptions is not met, the analysis of variance technique
cannot be employed to yield valid inferences.

a. Analysis of Variance in One-Way Classification Model


The steps in carrying out the analysis are:

1. Calculate variance between the samples

The variance between samples measures the difference between the sample mean
of each group and the overall mean. It also measures the difference from one
group to another. The sum of squares between the samples is denoted by SSB. For
calculating variance between the samples, take the total of the square of the
deviations of the means of various samples from the grand average and divide this
total by the degree of freedom, k-1 , where k = no. of samples.

2. Calculate variance within samples

The variance within samples measures the within-sample differences due to chance only. It also measures the variability around the mean of each group. The sum of squares within the samples is denoted by SSW. For calculating variance within the samples, take the total sum of squares of the deviations of the various items from the mean values of their respective samples and divide this total by the degrees of freedom, n-k, where n = total number of all the observations and k = number of samples.

3. Calculate the total variance

The total variance measures the overall variation in the sample mean.The total sum
of squares of variation is denoted by SST. The total variation is calculated by taking
the squared deviation of each item from the grand average and dividing this total
by the degree of freedom, n-1 where n = total number of observations.

4. Calculate the F ratio

It measures the ratio of between-column variance and within-column variance. If


there is a real difference between the groups, the variance between groups will be
significantly larger than the variance within the groups.

5. Decision Rule
At a given level of significance α =0.05 and at n-k and k-1 degrees of freedom, the
value of F is tabulated from the table. On comparing the values, if the calculated
value is greater than the tabulated value, reject the null hypothesis. That means the
test is significant or there is a significant difference between the sample means.

SST = Total sum of squared variation

SSB = Sum of squares between samples; MSB = Mean square between samples = SSB/(k-1)

SSW = Sum of squares within samples; MSW = Mean square within samples = SSW/(n-k)

F = MSB/MSW = Calculated value

F = Fα,(k-1),(n-k) = Tabulated value

Applicability of ANOVA

Analysis of variance has wide applicability in Six Sigma in the analysis of data derived from experiments. It is used for two different purposes:

 It is used to estimate and test hypothesis about population means.


 It is used to estimate and test hypothesis about population variances.

For example:

A manufacturing company has purchased three new machines of different makes


and wishes to determine whether one of them is faster than the others in
producing a certain output. Five hourly production figures are observed at random
from each machine and the results are given below:

Use analysis of variance and determine whether the machines are significantly
different in their mean speed at 5% level of significance.

Solution:

Take the hypothesis that the population means are equal for three samples.

H0 : Machines are equally likely in their mean speed.

HA : Machines are significantly different in their mean speed.

k = 3 (No. of Samples), n = 15 (Total no. of Observations)


F = 20/5 = 4 (Calculated Value)
Fα,(k-1),(n-k) = F0.05,2,12 = 3.88 (Tabulated Value)

The calculated value of F is more than the tabulated value; the null hypothesis is
rejected at 5% level of significance. Hence the machines are significantly different in
their mean speed.
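Because the hourly production figures are not reproduced above, the sketch below uses hypothetical data for the three machines simply to show how the one-way ANOVA is carried out; with the actual data it would reproduce the F value of 4 quoted in the solution.

from scipy import stats

# Hypothetical hourly production figures (the original table is not shown)
machine_1 = [25, 30, 36, 38, 31]
machine_2 = [31, 39, 38, 42, 35]
machine_3 = [24, 30, 28, 25, 28]

f_calc, p_value = stats.f_oneway(machine_1, machine_2, machine_3)
f_tab = stats.f.ppf(0.95, dfn=2, dfd=12)   # k - 1 = 2 and n - k = 12 degrees of freedom

print(f"F = {f_calc:.2f}, tabulated F = {f_tab:.2f}, p = {p_value:.4f}")
# reject H0 when the calculated F exceeds the tabulated value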

b. Analysis of Variance in Two-Way Classification Model

In a two way analysis of variance, the treatments constitute different levels affected
by more than one factor. For example, sales of car parts, in addition to being
affected by the point of sale display, might also be affected by the price charged,
the location of store and the number of competitive products. When two
independent factors have an effect on the dependent factor, analysis of variance
can be used to test for the effects of two factors simultaneously. Two sets of
hypotheses are tested with the same data at the same time.

Suppose there are k populations which are normally distributed with unknown parameters. A random sample X1, X2, X3, …, Xk is taken from these populations, satisfying the assumptions.

The null hypothesis for this is that all population means are equal, against the alternative that the members of at least one pair are not equal. The hypothesis follows:
The steps in carrying out the analysis are:

1. Calculate variance between the rows

The variance between rows measures the difference between the sample mean of
each row and the overall mean. It also measures the difference from one row to
another. The sum of squares between the rows is denoted by SSR. For calculating
variance between the rows, take the total of the square of the deviations of the
means of various sample rows from the grand average and divide this total by the
degree of freedom, r-1 , where r= no. of rows.

2. Calculate variance between the columns

The variance between columns measures the difference between the sample mean
of each column and the overall mean. It also measures the difference from one
column to another. The sum of squares between the columns is denoted by SSC.
For calculating variance between the columns, take the total of the square of the
deviations of the means of various sample columns from the grand average and
divide this total by the degree of freedom, c-1 , where c= no. of columns.

3. Calculate the total variance

The total variance measures the overall variation in the sample mean.The total sum
of squares of variation is denoted by SST. The Total variation is calculated by taking
the squared deviation of each item from the grand average and divide this total by
degree of freedom, n-1 where n= total number of observations.

4. Calculate the variance due to error

The variance due to error or Residual Variance in the experiment is by chance


variation. It occurs when there is some error in taking observations, or making
calculations or sometimes due to lack of information about the data. The sum of
squares due to error is denoted by SSE. It is calculated as:

Error Sum of Squares = Total Sum of Squares - Sum of Squares between


Columns - Sum of Squares between Rows.

The degree of freedom in this case will be (c-1)(r-1).

5. Calculate the F Ratio

It measures the ratio of the between-column (or between-row) variance to the variance due to error, where each sum of squares is first divided by its degrees of freedom to give a mean square.

F = Variance between the Columns / Variance due to Error

F = [SSC/(c-1)] / [SSE/((c-1)(r-1))]

F = Variance between the Rows / Variance due to Error

F = [SSR/(r-1)] / [SSE/((c-1)(r-1))]

6. Decision Rule

At a given level of significance α = 0.05 and at the appropriate degrees of freedom (c-1 or r-1 for the numerator and (c-1)(r-1) for the error term), the value of F is tabulated from the table. On comparing the values, if the calculated value is greater than the tabulated value, reject the null hypothesis. This means that the test is significant, or there is a significant difference between the sample means.
SSC = Sum of Squares between columns

SSR = Sum of Squares between rows

SSE = Sum of Squares due to Error

SST = Total sum of squares

c = No. of Columns

r = No. of Rows and n = cr

SSE = SST - (SSC + SSR)

For example:

In a factory, production can be accomplished by five different workers on four


different types of machines. A simple study of two-way ANOVA examines whether
the five workers differ with respect to mean productivity and whether the mean
productivity is the same for the four different machines. Set up an ANOVA table for
the given information and draw the inference about variance at 5 percent level of
significance.

 Sum of squares for variance between machines = 312.5


 Sum of squares for variance between workmen = 266.0
 Sum of squares for total variation = 675.0

Solution:

c = No. of columns = 4

r = No. of rows = 4

n = Total no. of observations = cr = 16

The null hypothesis to be tested is:


H0 : All five workers do not differ with respect to their mean productivity

HA : All five workers differ with respect to their mean productivity or at least two
differ with respect to their mean productivity

H0 : Mean productivity is the same for four different machines

HA : Mean productivity is different for four different machines

For 3, 9 degrees of freedom F 0.05 = 3.86 (Tabulated value)

F = 9.72(Calculated value for machines)

F = 8.27(Calculated value for workmen)

The calculated values of F are more than the tabulated value at 5% level of
significance. The null hypothesis is rejected. Hence the mean production is not the
same for the four Machines. Also employees differ with respect to mean
productivity.

6.8 Contingency Tables

7. Contingency Tables
In categorical data analysis, or when data are in the form of attributes, contingency tables are used to record and analyze the relationship between two or more variables. When only two categorical variables are taken, each with two levels, these are called 2×2 contingency tables. Suppose there are two attributes A and B, where A shows the presence of one character and B shows the presence of another character. If α shows the absence of A and β shows the absence of B, then the contingency table is shown by the following:

For example:

A survey amongst customers of a pizza chain was conducted to study the customer
satisfaction. The observations are as follows: there are two variables, Customer
Satisfaction (Happy or Not Happy) and Pizza Quality (Good or Bad). Observe the
values of both variables in a random sample of 200 customers. A contingency table
can be used to express the relationship between these two variables, as follows:
The totals of each row and each column are called marginal totals, and the figure in the bottom right corner is called the grand total.

The table shows that the proportion of customers happy with good quality of pizza
is almost the same as the proportion of customers happy with bad quality of pizza.
However the two proportions are not identical, and there must be some difference
between these two variables. The statistical significance of difference between
these categorical variables is explained by using some tests that are Pearson's chi-
square test, a G-test or Fisher's exact test. These tests follow the assumption that
the entries in the contingency table must be random samples from the population
contemplated in the null hypothesis. If the proportion of individuals in the different columns varies with rows (and, therefore, vice versa), it can be said that the table shows contingency between the two variables. If there is no contingency, it is said that the two variables are independent. Any number of rows and columns may be used in contingency tables. There may also be more than two variables, but higher-order contingency tables are rarely used. The relationship between ordinal variables, or between ordinal and categorical variables, may also be represented in contingency tables. The degree of association between the two variables can be represented by the phi coefficient, φ, defined by

φ = √(χ² / N)

where χ² is derived from the Pearson test, and N is the grand total number of observations. φ varies from 0 (corresponding to no association between the variables) to 1 (complete association). For symmetrical tables (equal numbers of rows and columns) φ can reach 1 with complete association, but it does not reach a maximum of 1 in asymmetrical tables.
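A minimal sketch of such an analysis in Python, with hypothetical counts for the 200 customers (the survey table is not reproduced above), computing Pearson's chi-square and the phi coefficient:

import numpy as np
from scipy import stats

# Hypothetical 2x2 table: rows are pizza quality (good, bad),
# columns are customer satisfaction (happy, not happy)
table = np.array([[80, 30],
                  [60, 30]])

chi2, p_value, dof, expected = stats.chi2_contingency(table)

# Phi coefficient: degree of association between the two variables
phi = np.sqrt(chi2 / table.sum())

print(f"chi-square = {chi2:.3f}, p = {p_value:.3f}, dof = {dof}, phi = {phi:.3f}")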

8. Non-Parametric Tests

Definition

Most statistical tests require the assumption that the population from which the samples are drawn is normally distributed, so that the mean and standard deviation derived from the samples can be used to estimate the corresponding population parameters. Parametric tests are based on these assumptions. Sometimes the data come from non-normal distributions, meaningful sample statistics cannot be calculated, and no assumptions can be made about population parameters such as the mean and the variance. Tests based on such data are classified as Non-Parametric Tests. Since these tests do not depend on the shape of the distribution, they are called distribution-free tests.

These tests are useful for the data which are nominal and ordinal in nature i.e. for
categorical data.

Non-parametric tests are of considerable importance in Six Sigma, they cover a


huge range of applications in Six Sigma.

Applicability of Non-Parametric Tests

1. Non-parametric tests are distribution free, i.e. they do not make statements about population parameter values. The Chi-Square Goodness of Fit test and the Tests of Independence are examples of tests possessing these advantages.

2. Nonparametric tests may be used when the form of the sampled population is unknown.

3. These tests do not require lengthy and time-consuming calculations. If significant results are obtained, no further computation is necessary.
4. Nonparametric procedures may be applied when the data being analyzed consist merely of rankings or classifications. These tests are applicable to qualitative data measured on nominal or ordinal scales.

Mood’s Median Test

A nonparametric procedure that may be used to test the null hypothesis that two
independent samples have been drawn from populations with equal medians is the
Mood’s Median Test.

Assumptions

The assumptions used for the tests are

1. The samples are selected independently and at random from their respective
populations.

2. The populations differ only in location parameter otherwise populations are of


the same form.

3. Variables are continuous in nature and two samples do not have to be of equal
size.

The Null Hypothesis used in this case is

H0 : MA = MB

where MA is the median of the first sampled population and MB is the median of the second.

Test Statistic

The test statistic used in this case is X², computed from a 2×2 contingency table:

X² = N(ad - bc)² / [(a + b)(c + d)(a + c)(b + d)]

where a, b, c, d are the observed cell frequencies and N = a + b + c + d. Since for X² tests the degrees of freedom are (r-1)(c-1), for a 2×2 contingency table the result is (2-1)(2-1) = 1 degree of freedom. To compute the median, arrange the combined observations in increasing order and obtain their median. Now determine for each group the number of observations falling above or below this median. The resulting frequencies are arranged in the 2×2 contingency table.

Decision Rule

When H0 is true and the assumptions are met, X² is distributed approximately as chi-square with 1 degree of freedom at the α = 0.05 level of significance. The test rejects H0 if the calculated value is greater than the tabulated value X²1,α obtained from the chi-square table. Rejection means that there is a significant difference in the medians of the sampled populations.
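A minimal sketch of Mood's median test in Python (scipy builds the 2×2 table of counts above and below the grand median internally); the two samples here are hypothetical:

from scipy import stats

# Hypothetical samples from two processes (illustrative data only)
sample_a = [23, 27, 31, 19, 25, 28, 30, 22]
sample_b = [18, 21, 24, 17, 20, 26, 19, 23]

stat, p_value, grand_median, table = stats.median_test(sample_a, sample_b)

print(f"chi-square = {stat:.3f}, p = {p_value:.3f}, grand median = {grand_median}")
print(table)   # counts above/below the grand median for each sample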

Kruskal-Wallis Tests

If several independent samples are involved, analysis of variance may be used to


test the null hypothesis that the several population means are equal, if the
population is normally distributed. When the underlying assumptions are not met,
that the population from which the samples are drawn are not normally distributed
with equal variance, the nonparametric analog of the analysis of variance is
the Kruskal-Wallis or H-test. In this case, the data for analysis consist only of ranks. This test helps in testing the null hypothesis that k independent random samples come from identical populations against the alternative hypothesis that the means of these samples are not equal.

Assumptions

1. Independent random samples are taken from their respective populations.


2. The measurement scale in this test is at least ordinal.
3. The distributions of values in the sampled populations are identical with the
exception that one or more of the populations are composed of values that tend to
be larger than those of the other populations.

Hypothesis

The null hypothesis used in this case is

H0 : The k populations are identical.

HA : At least one of the populations tends to exhibit larger values than at least one of the other populations.

Test statistic

The N observations from the k samples (of sizes n1, n2, …, nk) are combined and arranged from smallest to largest in order of their magnitude. After assigning ranks to these observations, the rank sum of each sample is calculated.

The test statistic used is:

H = [12 / (N(N + 1))] Σ (Ri² / ni) - 3(N + 1)

where n1, n2, …, nk are the numbers of observations in each of the k samples and N is the number of observations in all samples combined.

R1, R2, …, Rk are the rank sums of each sample.

N = n1 + n2 + … + nk

Unless the samples are very small, H is approximately distributed as chi-square with k-1 degrees of freedom.
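A minimal sketch of the Kruskal-Wallis H-test in Python, with hypothetical samples from three suppliers; the calculated H is compared with the chi-square value at k - 1 degrees of freedom:

from scipy import stats

# Hypothetical measurements from three suppliers (illustrative data only)
supplier_1 = [14.2, 15.1, 13.8, 14.9, 15.4]
supplier_2 = [13.1, 12.8, 13.5, 13.0, 12.9]
supplier_3 = [14.0, 13.7, 14.4, 13.9, 14.1]

h_calc, p_value = stats.kruskal(supplier_1, supplier_2, supplier_3)
h_tab = stats.chi2.ppf(0.95, df=3 - 1)     # chi-square approximation with k - 1 df

print(f"H = {h_calc:.2f}, tabulated chi-square = {h_tab:.2f}, p = {p_value:.4f}")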
Mann-Whitney Test

The sign test used for comparing two population distributions ignores the actual magnitudes of the paired observations. A statistical test that makes use of more of the information inherent in the data, compensating for this loss by utilizing the relative magnitudes of the observations, is the Mann-Whitney test. This test helps to determine whether two samples have come from identical populations.

Assumptions

1. The two samples of size n and m are independent and randomly drawn from
their respective populations.
2. The variables available for analysis are continuous.
3. The measurement scale is at least ordinal.
4. If the populations differ at all, they differ only with respect to their medians.

Hypothesis

The null Hypothesis used in this case is:

H0 : Two samples have come from identical populations and two populations have
equal medians.

HA : The populations do not have equal medians (two-sided test)

HB : The median of population 1 is larger than the median of population 2 (one-


sided test)

HC : The median of population 1 is smaller than the median of population 2 (one-


sided test)

If the two populations are symmetric, then within each population the mean and the median are the same.

Test Statistic

To calculate the test statistic, combine the two samples and rank all observations from smallest to largest. Tied observations are assigned a rank equal to the mean of the rank positions for which they are tied. The test statistic is T = S - n(n + 1)/2, where S is the sum of the ranks assigned to the observations of the first sample and n is the size of that sample.
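As a sketch, the equivalent Mann-Whitney U statistic can be computed directly in Python with hypothetical data:

from scipy import stats

# Hypothetical cycle times for two process layouts (illustrative data only)
layout_a = [12.4, 11.8, 13.1, 12.0, 12.7, 11.5]
layout_b = [13.6, 12.9, 14.2, 13.8, 12.6, 14.0]

u_calc, p_value = stats.mannwhitneyu(layout_a, layout_b, alternative='two-sided')

print(f"U = {u_calc:.1f}, p = {p_value:.4f}")
# a small p-value indicates that the two population medians differ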

Levene’s Test

Levene's test is used to test k samples for equal variances. Equal variances across k samples is called homogeneity of variances. The test is used to verify the assumption that variances are equal across groups or samples. The Levene test is an analog of the Bartlett test, which is used for data from normal or nearly normal distributions. Levene's test applies an analysis-of-variance F test to the absolute deviations of the observations from their group means (or medians) to judge whether the k samples have unequal standard deviations.

Hypothesis
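The statement of the hypotheses is cut off above; the usual form is H0: all k population variances are equal, against HA: not all variances are equal. A minimal sketch with hypothetical samples:

from scipy import stats

# Hypothetical measurements from three shifts (illustrative data only)
shift_1 = [2.1, 2.4, 2.2, 2.6, 2.3]
shift_2 = [2.0, 3.1, 1.8, 2.9, 2.5]
shift_3 = [2.2, 2.3, 2.1, 2.4, 2.2]

# Levene's test for homogeneity of variances (median-centred by default,
# the robust Brown-Forsythe variant)
w_calc, p_value = stats.levene(shift_1, shift_2, shift_3)

print(f"W = {w_calc:.3f}, p = {p_value:.4f}")
# a small p-value indicates that the variances are not all equal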
CHAPTER 7
7 The Improve Phase

The Improve phase comes next in line after the analyze phase in DMAIC. The goal
of Six Sigma is improvement. This is the point where improvements are worked on,
and systems are reorganized or reconstructed around these improvements. The
personnel working on the project select the solution that would work best for the
project keeping the organization goals in mind.
The Improve stage is the one which implements the conclusions drawn through
analysis. The data or process analysis is chronicled in the Analysis stage. In the
Improve phase, an organization can improve, retain or negate the results of the
analysis.

In this phase, the solutions are discussed and narrowed down to the best options
among them. The best solution is chosen. However, the emphasis remains on
choosing the result which leads to maximum customer satisfaction. This is also the
phase where an innovative resolution is found out and it is executed on an
experimental basis. In Six Sigma, an experiment is designed before it is tried. Six
Sigma offers powerful tools of experiments to assist in improvement. Design of
experiments is the main experimental tool adapted by Six Sigma.

A. Design of Experiments (DOE)

Design of Experiments is an efficient quality improvement tool and, thus, is an


important part of the Improve phase in the Six Sigma DMAIC process. It helps in the
careful implementation of newly designed ventures. This new kind of approach
called Design of Experiments, is often contrasted with the older OFAT, or one-factor-at-a-time, approach. In the DOE approach more than one factor is varied at a time, which saves a lot of time and money.

Dr. Genichi Taguchi, a Japanese engineer, developed some statistical methods by


stressing the reduction of variation and then the accomplishment of target values. The benefit of using the statistical approach instead of the OFAT approach is that it helps save a lot of cost: it does this by substituting a single designed experiment for several separate trials. Moreover, the results in this approach are expressed numerically, so it is easy to detect errors and improve upon them.

DOE is gaining popularity by the day and has been implemented across several
sectors including chemical, automobile, software development and engineering. It
is a powerful tool to minimize wastage by reducing costs and increase process
yields by reducing variation in the process.

1. Terminology

Earlier, DOE was performed mainly in agriculture, and the terminology used in it clearly reflects that origin. A designed experiment is one where one or more variables, known as independent variables, are identified and modified according to a previously drafted plan. The data are collected and studied to find out the effect a variable, or a number of variables, has on the process. There are some other terms associated with DOE. They are:

Response Variable

The variable that is under consideration is known as the Response Variable, or dependent variable. In other words, it is the final output.

Primary Variable

There are certain variables which an organization can control and which produce an effect on the result. They are known as primary variables. The primary variables can be either quantitative or qualitative.

Background Variable

The variables that cannot be taken as constant are known as background variables.
These variables can nullify the effect of primary variables if they are not taken into
consideration properly. The best way to keep these variables in control is
through blocking.

Blocking- The experimental designs split the findings into different blocks. The
blocks are filled up in a sequence. However, the pattern of each block is distinct.
Take the example of a Cola Company. Assume that they are testing an ingredient
for a soft drink. They choose two ingredients (both are substitutes for one another).
Name the ingredients X and Y . There are four samples each of X and Y. It is
suggested that all the four samples be used at the same time. However, the mixing
process requires that two ingredients be used at a time. The mixing process then
becomes a ‘blocking’ factor.

Interaction

This is a situation where the result of one variable depends on the other variable.

Error

Error is the variation in a collection of findings which cannot be explained. A Design of Experiment requires an understanding of both random error and lack-of-fit error.
Treatment

It is a blend of specified factor levels whose effect is compared to the other


treatments. These are possible sources of variability.

Factor

A factor is a controlled, independent variable of an experiment. It has different levels which are determined by the experimenter. The levels are decided according to the kinds of treatments applied to the factor. For instance, three drivers are made to drive three different kinds of cars. In this example, the drivers are the experimental units and the three kinds of cars are the levels of the factor 'kind of car'.

Level

The position of each factor in an experiment is known as its level.

Replication

Replication is the process of carrying out a part of the experiment or the complete
experiment again. The advantage of replication is that it reduces the effect of
random variation on the results. It also gives freedom to the analyst to gauge the
experimental error. However, it is also true that the experimental error can be
gauged without replicating the entire experiment. If there is variation when all the
other variables are held constant, the cause is definitely other than the variables
being controlled by the analyst. Replication also helps to reduce biasedness.

Outer Array

The thing that was innovative about Taguchi’s approach was that each experiment
was replicated through an outer array. An outer array is an orthogonal array that
deliberately imitates the sources of variation that a product might face in reality.
Experiments are conducted in the presence of a lot of noise factors to make it
robust. An outer array is used to minimize the effect obtained as a result of the
combination of different noise factors.

Orthogonal Arrays

This is described later under Higher Order Experiments.

2. Planning and Organizing Experiments


It is very important to bear the organizational goals in mind while planning a design
of experiment. The kind of analysis to be done and the results obtained might
change with the change in objectives. A DOE that is carried out using the Taguchi
method is very helpful for the manufacturing industry. The Taguchi method makes the DOE a success because it is very economical for carrying out product or process optimization projects. Problem-solving needs can also be satisfied very economically. Also, the method helps save a lot of time.

Objectives of DOE

The objective of carrying out the experiment must be in sync with the goals of the
organization. The main objective, however, remains the optimization of the product
and process designs.

Other goals may include:

1. Examining the effects of various factors like variables, parameters and inputs on
the performance.
2. Solving the manufacturing problems by carrying out investigative experiments.
3. Checking the effect of individual factors on the performance.
4. Checking the blend of factors that would be best suited to get better
performance.
5. Finding the factor that can be optimally utilized and at the same time give the
best performance.
6. Finding out the cost savings by carrying out a particular experiment.
7. Finding out the solution to develop sensitivity for the process so that it can
become immune to noise factors.
8. Finding out ways to reduce variation in the performance.

DOE users are aware that factors and interactions with statistical significance should be included in the mathematical model that predicts the response surface of the process or product being investigated. Factors that do not show statistical significance should generally be omitted from the model. However, single-factor effects can sometimes be statistically insignificant and yet need to be retained because of model hierarchy, for example when they appear in a significant interaction.

Selecting Factors

It is very important to choose the factors before carrying out the experiment, because they are the main ingredients of the process. Factors are categorized as controllable or uncontrollable. Uncontrollable factors, such as noise factors, cannot be set directly, but their influence can be managed using blocking and randomization.

Noise factors

According to Genichi Taguchi, loss function is the measure of quality. The quadratic
loss function established by him measures the level of customer dissatisfaction due
to the poor performance of the product. Poor or average performance of the
product and variation are major factors due to which the product fails to achieve
the desired target. There are certain uncontrolled factors which lead to variation in
the performance. They are known as noise factors. Therefore, it is very important to
choose the design for the product or the process which has least number of noise
factors or which is indifferent to them.

External Source of Noise

The variables which affect the performance of the product from outside like
government policies and environment are known as external source of noise.

Internal Source of Noise

The internal variables or the internal settings of the product that affect its
performance are known as internal noise factors. Whenever a product deviates
from complying with its internal features, it becomes the internal source of noise
for the product.

Randomization

Randomization is the process of selecting the run sequence in such a manner that every possible sequence is equally likely. This is important as it prevents unknown or uncontrollable variables from biasing the final result. Also, randomization helps ensure valid estimates of experimental error. Randomization can be done using some mechanical method like a random numbers table.

Measurement Methods

It is important to choose the measurement methods that would be best suited for
the experiment keeping the organizational goals in mind. This is the next step after
selecting the level of factors and responses.
Attribute Screens

It is a measurement method which helps to observe the presence or absence of


some characteristics in the units that are taken into consideration. Attribute
screens also help to calculate the units which possess those attributes.

Gauge Blocks

Gauge blocks are another measuring tool used in DOE. They are also known as
Johansson gauges or slip gauges. They are used to measure accuracy and are
lapped measuring standards. They are also used as references to set up measuring
instruments like micrometers, sine bars and dial indicators.

Calipers

Calipers are devices which help to measure linear dimensions which include
distance between two parallel sides or a diameter. Calipers are used for both
internal and external measurements. Calipers come in various shapes and forms.

Micrometers

Micrometers are also used to measure linear dimensions. The traditional


micrometer has a thimble which rotates on a barrel. The thimble rotates on a screw
which opens or closes the micrometer. One full rotation of the thimble alters the
measured dimension by 0.025”.

Optical Comparators

These are devices which project a magnified image of a part profile onto a screen.
The part profile is then compared to a standard overlay or scale. Optical
comparators are used to measure complex parts. In addition to dimensional
readings, optical comparators are used to detect defects like burs, scratches and
incomplete features.

Tensile Strength

Tensile testing involves making a test specimen of the material. The specimen is gripped at the ends using suitable equipment in a tensile testing machine and an axial force is exerted slowly until the specimen fails. The shape of the specimen is important, as the gripping forces can introduce stresses which make the specimen fail prematurely.

Titration

Titration is used to determine the concentration of a substance in solution. A


standard reagent of known concentration is added in carefully measured amounts
till a clearly defined reaction is observed. The reaction may be in the form of a color
change or a change in electrical resistance or any other form.

Types of Design

There are different kinds of models available for DOE. Experiments can be made to
suit the goals of a particular organization. A few of the more common types of
designs are discussed below:

Fixed-Effects Model

This is an experimental model where all the possible factors are taken into account
by the experimenter. For instance, if there are four factors affecting the product all
four will be taken into account.

Random- Effects Model

This is an experimental model in which the factor levels used are treated as a random sample from a larger set of possible levels, so one sampled level can act as a substitute for another. For instance, if a factor has many possible levels, only a randomly chosen few are included in the experiment.

Mixed Model

This is a model which combines the features of both fixed-effects and random-
effects model.

Randomized Design

This is an experimental model where the sequence in which the runs are performed is completely random. The experimenter does not follow a fixed order; treatments are assigned to the runs by chance.
Randomized Block Design

The findings in an experimental design are divided into particular blocks. These
blocks follow a particular pattern. However, the pattern within the blocks is
random. The idea of carrying out such an experiment is that a different treatment
would be meted out to each member or each component in a particular block.

Example

A researcher is carrying out a study of the popularity of four different television


shows. He chooses 80 people from different strata of society and belonging to
different age groups. They are divided into four groups according to these criteria.
Using the randomized block design, the TV shows are assessed and placed in
groups of four according to their category namely, daily soaps, reality shows, music
shows and sports shows. The four members of each group are then randomly
picked up, one for each of the category. This is known as randomized block design.

Latin-Square Design

This is a design where each member appears just once in a particular row or
column. This kind of a design is particularly useful when two non-homogeneous
members need to be tested together. The term Latin-square was first used in
agricultural experiments and the square was actually a square piece of land. The
term is now applied to other areas as well like where non-homogeneous elements
are used. For instance, in an industrial experiment the non-homogeneous elements
can be men, machines and materials. The Latin-square qualifies on two grounds
and they are:

1. The number of rows, the number of columns and the number of treatments must all be equal.

2. Each treatment must appear exactly once in every row and in every column.

A 4×4 Latin-square design, for example, looks like this:

A B C D
B C D A
C D A B
D A B C

7.1 Principles of Design of Experiments

3. Principles of Design of Experiments

The success of a Design of Experiment depends on a lot of careful planning. There


are certain things to consider before conducting the experiment. They are:

1. The nature and the goal of the experiment


2. The tangible restrictions while taking measurements
3. The constraints of time, money, resources and manpower

The analyst who will head the experiment should clearly explain why the experiment is being carried out and how it will benefit the organization. The experiment should be clearly documented and supported by all the members. Two concepts are very important for the analyst: replication and randomization.

Power and Sample Size

Power in DOE is the probability that the test procedure will detect a real effect, that is, will yield a statistically significant result. Power and sample size are interrelated. The sample size, the size of the type 1 (alpha) error, the actual size of the effect and the size of the experimental error are the determinants of statistical power. It is not easy to calculate power as the calculations are quite complex, so in practice it is often treated casually and not followed through.

Power increases in the following conditions. They are:


1. When the experimental variability decreases.
2. When the size of the effect increases.
3. When the sample size increases.

It is important to determine standard deviation before the sample size is calculated.


Standard deviation can be ascertained using research, pilot studies or subject
matter knowledge. It is also important to specify the difference that the researcher
is interested in detecting. The expression of the difference depends on whether a
one or two sample test is being detected. The difference is expressed in the form of
a null hypothesis when a one-sample test is performed. The difference is expressed
in the form of difference between the population means for a two-sample test.

Statistical power is the probability of rejecting a false statistical null hypothesis. A proper Design of Experiment is one which ensures that the power is reasonably high in order to detect reasonable departures from the null hypothesis. If this is not done, the experiment has little meaning. The success of a design of experiment depends on factors like the kind of test being done and the sample size.

Power also depends on the size of the experimental effect. If the null hypothesis is wrong by a substantial amount, power will be higher than when the null hypothesis is wrong by a small amount. Moreover, any error in measurement adds variability and may reduce the power by a considerable amount.
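As an illustration of how these quantities interact, the sketch below (using the statsmodels power module, with hypothetical figures for the difference of interest and the standard deviation) finds the sample size per group needed for a two-sample comparison:

from statsmodels.stats.power import TTestIndPower

# Hypothetical planning figures: detect a difference of 5 units when the
# standard deviation is about 8, i.e. a standardized effect size of 5/8
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=5 / 8, alpha=0.05, power=0.80,
                                   alternative='two-sided')
print(f"required sample size per group: {n_per_group:.1f}")
# increasing the effect size or reducing experimental variability lowers
# the required sample size for the same power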

Efficiency

Efficiency is one of the important principles of a design. The basic aim, in fact, of a
DOE is to find out the best possible method to minimize wastage and increase
efficiency. A DOE is carried out so that there can be product and process
optimization. So, DOE leads to cost savings, optimal utilization of resources and
ultimately efficiency.

Interaction

There is interaction between the factors when one factor influences the result of
another factor. It influences the optimum condition and the foreseen results. It is
imperative to decide the interactions before starting the experiment. Therefore, it is
an important principle of DOE. There are two types of interactions. They are:

1. Synergistic: In this kind of interaction, the combined effect of the two variables is greater than the sum of their individual effects, as with team effort.

2. Antagonistic: In this kind of interaction, the combined effect of the two variables is less than the sum of their individual effects.

Taguchi’s method combines orthogonal arrays with an outer array (which


comprises of noise factors) and helps deliver knowledge on interactions between
control and noise factors. These interactions help to create a process which is
immune to uncontrollable noise factors. The people in favor of Taguchi’s method
argue that his designs lead to quick results. The designs remove control factor
interactions by changing the data and by carefully selecting the quality
characteristics. However, western statisticians, like, George Box, are of the view that
Taguchi’s arrays have complex structures which make it difficult to unwind the
interactions.

Confounding

Confounding is a design where some of the treatment effects are calculated by the
same linear combination of the experimental observations as some of the effects of
blocking. If this is the case, both the treatment and the blocking effects are
confounded. Simply speaking, confounding means that one factor affects the other.
In general terms, confounding implies that the value of a main effect estimate
comes from the main effect as well as from higher order interactions which are not
statistically significant.

Confounding designs automatically occur whenever a fractional factorial design is


chosen. They also arise when full factorial designs have to be run in blocks and the
block size is smaller than the number of different treatment combinations.
Confounding decreases the number of factor level combinations and this leads to
greater efficiency of the design. One thing that needs to be kept in mind is that,
there has to be a right blend of the efficiency derived from confounding and the
clarity of the experiment.

4. Design and Analysis of One - Factor Experiments

One-Factor Experiments are the ones where several results obtained through
experimentation can be compared. This kind of an experiment allows the
experimenter to make a comparison of results obtained through experimentation
which are independent and, most probably, will give a different mean for each
group. The important thing to be taken into account is whether all the means are
equal or not.

One-factor experiments can be analyzed in several ways. The results can be plotted on an SPC chart which includes historical data from the standard process. The baseline rules then apply for assessing out-of-control conditions, to check whether the process has been modified in some way. The experimenter needs to gather data from a number of sub-groups to arrive at a conclusion. However, he needs to keep in mind that a single sub-group could also fall outside the existing control limits purely by chance.

Another substitute for the SPC chart is the F-test. It is used to compare the means
of alternate treatments. A cyclist, for instance, is preparing for a cycle race and
wants to gauge the time he would take to complete the race if he applies different
strategies. He compares the usual time he takes to complete the race to the two
times he would take if he applies two alternate strategies. He calculates the time
and records ten data points for each alternative.

The table above shows that the two alternate strategies that the cyclist employs (B and C) help him cover the track more quickly. An F-test is performed to determine whether the difference is statistically significant or is just a coincidence. For a given confidence level, the F-ratio provides a statistic that can be
compared to a probability distribution table to determine whether the treatment
means are significantly different. There are various kinds of F-distributions.
However, the most common ones are those which make use of two levels of
confidence: 95% and 99%. The following table shows the F-ratio calculation for the
above mentioned example of a cyclist:

The F-ratio of 3.61 is then compared to a value from the F-distribution table for the confidence level that has been chosen. The tabulated value of F at 2, 27 degrees of freedom is 3.35. Since the calculated value is greater than the tabulated value, the null hypothesis is rejected. This shows that there is a significant difference in means that is unlikely to be due to random chance, at the 5% level of significance.
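The cyclist's ten data points per strategy are not reproduced above, so the sketch below uses hypothetical times purely to show how the F-ratio is computed and compared with the tabulated F at (2, 27) degrees of freedom:

from scipy import stats

# Hypothetical lap times (minutes) for the usual strategy and two alternatives
usual      = [52, 55, 54, 56, 53, 55, 54, 57, 53, 55]
strategy_b = [51, 52, 53, 50, 52, 54, 51, 53, 52, 51]
strategy_c = [50, 53, 51, 52, 50, 52, 53, 51, 52, 50]

f_calc, p_value = stats.f_oneway(usual, strategy_b, strategy_c)
f_tab = stats.f.ppf(0.95, dfn=2, dfd=27)   # k - 1 = 2 and n - k = 27 degrees of freedom

print(f"F = {f_calc:.2f}, tabulated F(2, 27) = {f_tab:.2f}, p = {p_value:.4f}")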

5. Design and Analysis of Full - Factorial Experiments

This is a design in which all the combinations of the factor levels are run. The number of runs is calculated as the number of levels raised to the power of the number of factors, n. For instance, if an experiment has 2 factors and each has 3 levels, the number of runs would be 3², that is, 9. So, if an experiment has a lot of factors, it is better to conduct screening experiments on them first to reduce the number of potential factors.

Full factorial experiments make use of the Yates method. It is important to arrange the data in a fixed order to use the Yates algorithm. Each factor is evaluated at both a high and a low level in the experimental design. The high level is conventionally denoted by a '+' sign and the low level by a '-' sign. A full factorial table may look like this:
The advantage of a full factorial experiment is that many factors can be studied at
one time. However, the disadvantage is that if the number of factors goes up, it also
shoots up the cost.
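Since the full factorial table itself is not reproduced above, the sketch below generates a 2³ design matrix, with the low level coded -1 and the high level coded +1:

from itertools import product

# Generate a 2^3 full factorial design: every combination of the low (-1)
# and high (+1) settings of factors A, B and C
factors = ['A', 'B', 'C']
design = list(product([-1, +1], repeat=len(factors)))

print('run  ' + '  '.join(factors))
for run, settings in enumerate(design, start=1):
    print(f"{run:>3}  " + '  '.join(f"{s:+d}" for s in settings))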

6. Design and Analysis of Two - Level Fractional Factorial Experiments

Two-level factorial experiments are used to examine the effect of various factors at
the same time. This kind of a design saves a lot of costs as its run-size is a fraction
of the total number of combinations of the factors’ levels. The designs are
positioned according to the power they possess to detect the effect of factors.
Aberration is the most common method to position the designs. If an experiment
has k factors, the aberration criteria lists the number of effects of order 1, 2… k.

The number of runs suggested for a two-level fractional factorial design is 2^(n-p), for a 2^(n-p) fractional factorial experiment in which n factors are studied in a 1/2^p fraction of the full factorial. Extra runs are needed to design two-level factorial experiments in blocks of size two in order to gauge all the effects of interest. The accuracy in these experiments differs because different numbers of observations are used for estimation in the analysis of the resulting data. Also, the trade-off between run-size reduction and the assumption that some effects are negligible is a matter of concern when the number of factors is large.

The advantage of using two or multi-level designs is that the error rate is much lower. If we performed the same tests with single-response methods, the error rate would go
up as many individual tests are being performed at the same time. Sometimes, the
individual analysis proves to be of no use as there are many responses. This is
where multi or two-level designs come handy. Moreover, the interrelationships
between the response variables can be utilized using two-level fractional factorial
designs.
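A minimal sketch of how a half-fraction is constructed: a 2^(3-1) design runs a full 2² design in A and B and generates the third factor from the defining relation C = AB (so I = ABC), at the cost of aliasing main effects with two-factor interactions:

from itertools import product

# 2^(3-1) half-fraction: full 2^2 design in A and B, with C generated as A*B
runs = [(a, b, a * b) for a, b in product([-1, +1], repeat=2)]

print(' A   B   C')
for a, b, c in runs:
    print(f"{a:+d}  {b:+d}  {c:+d}")
# only 4 of the 8 full-factorial runs are needed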

7. Taguchi Robustness Concepts

Taguchi Robustness Concepts are statistical methods developed by Genichi Taguchi


of Japan. He is a Deming prize winner. Taguchi Robustness Concepts were
introduced by him to improve the quality of manufactured goods. Taguchi was
invited by Ford Motor Company to make a presentation on product design and
manufacturing process.

Introduction

Quality as defined by Taguchi acquires different connotations in the manufacturing


process. Quality, according to Taguchi, is the loss borne by the society from the
time a product gets shipped from the place it is manufactured to its actual
destination, say a showroom, a retail shop or directly to the consumer. According to
this loss function, the society’s loss due to the poor performance of the products is
proportional to the square of the deviation of the performance characteristic from
its target value. Taguchi divides quality into two classes. They are:

1. On-Line Quality Control

This kind of a quality check involves examining and modifying the process,
predicting and rectifying the errors, reviewing and disposition of the product. On-
line quality control also includes follow-up on the defective products shipped to the
customer.

2. Off-Line Quality Control

This kind of a quality check includes activities that involve cost savings and quality
improvement methods at the process and product designing stages. Off-line quality
check is a part of the product development cycle. There are three facets to this
control. They are listed below:

a. System Design

This is a stage which does not involve any statistical calculations. Rather, this is a
stage where scientific and technological know-how is applied. It is used to produce
an epitomized design. This design acts as the prototype for the process design
characteristics and the initial settings of the product are based on it. This is a stage
where innovation is the key. New ideas and knowledge are disseminated to
determine the ways to efficiently produce the best product.

b. Parameter Design

The term parameter is an engineering term which uses product characteristics as


product parameters. This is the design which holds prominence in the off-line
quality control process. This is concerned with identifying the correct design factor
levels. Parameter design is an examination which helps to reduce the variation in
performance. It helps to make the design robust by reducing the number of noise
factors in the process. This, in turn, reduces the loss to be borne by the customer.

Variation in various aspects of performance depends on different environments.


This variation increases the cost of manufacturing and lifetime costs. Parameter
design is a practice to recognize parameter settings that would be best suited for
the organization. This is the reason it is called parameter design.

c. Tolerance Design

This is a design to establish tolerance for the products or processes to reduce the
cost of manufacturing and lifetime costs. It is the next and the final step, after the
parameter design, while specifying the product and the process designs to identify
the tolerance among the factors. The factors affect variation and they are modified
only after the parameter stage because only those factors are modified whose
target quality values have not been achieved.

Traditional quality improvement methods were based on convention. They were


based on the notion that to hold tighter tolerances it is necessary to improve the quality of materials, machines and tools. Improvement in the quality of materials and other inputs would shoot up the costs. However, the Taguchi method ensures that the process is of the utmost quality without much increase in the costs.

Performance Statistics

Performance statistics are established to measure the effect of noise factors on the
performance settings. Taguchi makes use of a lot of performance statistics which
are an indication of the variation in the performance. These statistics also help to
reduce the loss and raise the level of performance.
Signal to Noise Ratios

Signal factors, unlike the noise factors, are the ones which are under the control of
the user of the product. Sending a short message on a mobile is in the user’s hands
and therefore it makes it a signal factor. However, the delivery of the message is
not in the hands of the user. So, it is a noise factor because it is uncontrollable. The
best product is the one which responds to the user’s signals and is unaffected by
noise factors. Therefore the goal of the experiment should be to maximize the
signal-to-noise ratio.

If the experiment is not being performed keeping a particular goal in mind, then the
signal to noise ratio should be based on Mean Squared Deviation (MSD) to examine
repeated results. Quality can be quantified as per a particular product’s effect to
noise factors and signal factors. The advantage of MSD is that it is in line with
Taguchi’s concept of quality.

Smaller-the-better:

In cases where you want to minimize the occurrence of some undesirable product characteristic, you would compute the following S/N ratio:

S/N = -10 log10 [ (1/n) Σ yi² ]
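A minimal sketch of the smaller-the-better calculation, using hypothetical repeated measurements for two parameter settings; the setting with the larger (less negative) S/N ratio is preferred:

import numpy as np

def sn_smaller_the_better(y):
    # Smaller-the-better signal-to-noise ratio: -10 * log10(mean of y^2)
    y = np.asarray(y, dtype=float)
    return -10 * np.log10(np.mean(y ** 2))

# Hypothetical repeated measurements of an undesirable characteristic
setting_1 = [1.2, 1.4, 1.1, 1.3]
setting_2 = [0.9, 1.0, 0.8, 1.1]

print(f"S/N for setting 1: {sn_smaller_the_better(setting_1):.2f} dB")
print(f"S/N for setting 2: {sn_smaller_the_better(setting_2):.2f} dB")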
Expected Loss

Thinkers like Walter A. Shewhart were of the view that loss is calculated by computing the cost of poor quality, that is, the cost incurred as a result of products which are defective or which require rework. However, Taguchi viewed loss from a broader perspective. He was of the view that the loss should also include the loss incurred by society, which could take any form, such as early deterioration of the product. The loss is 'expected', which means that it might be borne by a potential consumer at some future time. This loss would occur due to performance variation.

Producers tend to ignore the expected loss, and this prevents the markets from operating efficiently. However, Taguchi is of the view that a reduction in these losses increases brand loyalty, sales and ultimately profits. The parameter design gains more meaning when the quadratic loss function is modeled. The quadratic model is advantageous because it helps achieve the target value faster and reduces the variation in the process.

According to the traditional management view, there is no loss as long as the organization is meeting its targets (specifications). Whether the result is just inside the limits or exactly on target, the situation is treated the same, and a result just outside the limits is treated the same as one far outside them. The process distribution within the specification limits is not considered as long as the targets are being achieved. Also, there is no scope for improving performance in this case, because there is assumed to be no benefit for the organization, as the loss to society is nil.

A quadratic loss function looks like this: L(y) = k(y - T)², where y is the measured value of the quality characteristic, T is its target value and k is a cost constant.

7.2 Mixture Experiments

8. Mixture Experiments
A mixture experiment involves the integration of various components which
provide a stable value. A mixture experiment takes place while mixtures of
components that sum to a constant are analyzed. For example, if you want to
optimize the effect of a fertilizer on a yield, consisting of 4 different brands of
fertilizers, then the sum of the proportions of all fertilizers in each mixture must be
100%. Thus, the task of optimizing mixtures commonly occurs in food-processing,
refining, or the manufacturing of chemicals. A number of designs have been
developed to address specifically the analysis and modeling of mixtures.

Triangular Coordinates

The mixture proportions are usually summarized using triangular or ternary
graphs. If you have a mixture that contains 3 components A, B and C, each mixture
can be summarized by a point in the triangular coordinate system defined by the
three variables. Consider, for example, 6 different mixtures of the 3 components A,
B and C.

The sum for each mixture is 1.0, so the values for the components in each mixture
can be interpreted as proportions. If this data is graphed on a regular 3D scatter
plot, the points would form a triangle in the 3D space. Only points inside that
triangle, where the component values sum to 1, represent valid mixtures, so it
is sufficient to plot the triangle itself to summarize the proportions for each mixture.
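As a rough illustration of how such a plot can be produced, the following Python sketch maps hypothetical 3-component mixtures (each row summing to 1.0) onto 2-D ternary coordinates; the mixture values themselves are assumptions made for the example.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical mixtures of components A, B, C; each row sums to 1.0
mixtures = np.array([
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.5, 0.5, 0.0],
    [1/3, 1/3, 1/3],
    [0.2, 0.3, 0.5],
])

# Map (A, B, C) proportions onto 2-D ternary coordinates:
# vertex A at (0, 0), B at (1, 0), C at (0.5, sqrt(3)/2)
a, b, c = mixtures[:, 0], mixtures[:, 1], mixtures[:, 2]
x = b + 0.5 * c
y = (np.sqrt(3) / 2) * c

plt.figure()
plt.plot([0, 1, 0.5, 0], [0, 0, np.sqrt(3) / 2, 0], "k-")  # triangle outline
plt.scatter(x, y)
plt.axis("equal")
plt.title("Ternary plot of 3-component mixtures")
plt.show()
```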

B. Response Surface Analysis

1. Objectives and Application

Response Surface Analysis, or Response Surface Methodology (RSM), like design of
experiments, is an important tool used to measure the effects of variation in process
characteristics. The difference between the two is that RSM is the better tool for
measuring non-linear process performance, such as process contours, whereas DOE
is suited to studying only the high and low levels of performance variation. DOE is
also more time consuming than RSM: an analysis taking 8 to 10 factors into
consideration would take about one week to complete using the RSM methodology,
but with DOE it would take about two to three weeks.

In a complex engineering system, for instance, RSM helps to assess the values of
the design variables that optimize the performance settings within the constraints
of the system. RSM is also used to obtain mathematical models which describe the
functional relationships between the performance settings and the design variables.

In most RSM applications it is necessary to create an approximation model, because
the underlying function driving the response is an unknown physical mechanism.
RSM also updates the approximation as the design progresses, and it can be
initialized with the help of a design of experiments or a Taylor series expansion.

RSM is also helpful in minimizing the cost associated with computing failure
probability. It is used to smooth the failure probability as a function of the design
variables, since failure probability estimates are noisy and unfit for gradient-based
optimization. RSM can also be used to optimize quantities which are otherwise
difficult to evaluate.

The RSM approach also carries some drawbacks: obtaining highly accurate and
reliable estimates from it can be very costly.

2. Steepest Ascent/Descent Experiments

An important result from calculus, namely that the gradient points in the direction
of the maximum increase of a function, is used in the steepest ascent (descent)
technique to decide how the initial settings of the parameters should be changed to
move toward an optimal value of the response variable. The movement in each
variable is made proportional to the estimated sensitivity of the performance to that
variable.

It is generally assumed that, for small alterations, performance is linearly related to
changes in the controllable variables, and each controllable variable is changed in
proportion to the magnitude of its slope. Making small alterations in the controllable
variables corresponds to estimating the gradient at a point; if, for instance, the
surface depends on N variables, this requires N points around the point of interest.
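The following Python sketch illustrates the idea with an assumed first-order model fitted in coded units; the coefficient values and the step size are hypothetical choices made for the example.

```python
import numpy as np

# Hypothetical first-order model fitted in coded units: y = b0 + b1*x1 + b2*x2
b = np.array([1.5, 0.8])                 # assumed slopes for x1 and x2

# The direction of steepest ascent is proportional to the coefficient vector,
# so each controllable variable is changed in proportion to its slope.
direction = b / np.linalg.norm(b)

# Walk along the path in steps corresponding to 0.5 coded units of x1
step = 0.5 / abs(direction[0])
for k in range(1, 6):
    print(np.round(k * step * direction, 3))   # next candidate settings (coded units)
```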

The parallel-tangent points are obtained from bitangents and inflection points of
occluding contours when the problem is not an n-dimensional elliptical surface.
Parallel tangent points are points on the occluding contour where the tangent is
parallel to a given bitangent or the tangent at an inflection point.

The experiments are designed as first-order designs augmented with center points.
If the curvature is not significant, the experimental region does not yet contain the
optimum and the path of steepest ascent can be followed. If the curvature is large,
additional runs are required so that a second-order design can be carried out.

3. Higher Order Experiments

Sampling of the design space is required to form an approximation model. DOE
techniques permit a designer to select sample points precisely, so that an
appropriate and statistically significant approximation model can be produced. To
capture the general behavior of the objective functions and constraints involved in
the design, it is necessary to sample enough points in the design space while
keeping the computational expense at a reasonable level.

There are two groups of experimental designs:

1. Classical experimental designs

2. Space filling designs.

The most commonly used classical experimental designs are the Central Composite
Design (CCD) and the Box-Behnken (BB) design. The commonly used space-filling
designs are random Latin hypercubes, Orthogonal Arrays (OA) and Orthogonal
Array-based Latin hypercubes.

In the classical design and analysis of experiments, sample points are spread out in
the design space, and multiple data points are taken to account for random
variation; extra sample points are then added to fill gaps and cover the design
space. Space-filling designs provide better "coverage" of the design space and they
are widely used. For more than 3 design variables, space-filling experimental
designs give better results than classical experimental designs.
Box-Behnken Design

Box-Behnken designs require only three levels for each factor, coded as -1, 0 and
+1. This design was created by Box and Behnken by combining two-level factorial
designs with incomplete block designs. The method creates designs with
advantageous statistical properties and, most importantly, requires only a fraction
of the experiments needed for a full three-level factorial. These designs offer limited
blocking options, except for the three-factor design.

Central Composite Design (CCD)

For two factors, a Central Composite Design uses only 9 treatments: a factorial
portion, a star pattern of axial points and one treatment at the central position. It
has a centered and symmetric form. It is a second-order design which is often used
to optimize tablet formulations. CCD makes it possible to develop response surfaces
which permit the ranking of each variable according to its significance for the
calculated responses. Thus, it helps to predict the formulation composition that will
produce a desired response.
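As an illustration, the nine treatments of a two-factor rotatable CCD can be written out directly. The following Python sketch constructs the coded design matrix; the choice of alpha = sqrt(2) is the standard one that gives rotatability for two factors.

```python
import numpy as np

# Two-factor rotatable Central Composite Design: 4 factorial points,
# 4 axial (star) points at alpha = sqrt(2), and 1 center point = 9 treatments
alpha = np.sqrt(2)
factorial = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]])
axial = np.array([[-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha]])
center = np.array([[0, 0]])

ccd = np.vstack([factorial, axial, center])
print(ccd)          # design matrix in coded units
```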

Latin Hypercube Sampling

Latin hypercube sampling was introduced by McKay, Beckman and Conover. A Latin
hypercube sample is represented as a 3-tuple LH(m, n, s), where m, n and s are the
number of sample points, the number of input variables and the number of
symbols (integers from 1 to s), respectively. A general condition to satisfy is that m
is a multiple of s.

A feature exhibited by Latin hypercube sampling is that if you divide the range of
any input variable into s equally spaced bins, each bin gets k = m/s data points.
Another significant feature of this sampling method is that it generally gives a
lower variance in function approximation.
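A minimal Python sketch of basic Latin hypercube sampling is shown below for the simple case s = m, so that each bin gets exactly one point; the function name and random seed are illustrative choices, not part of the original text.

```python
import numpy as np

def latin_hypercube(m, n, seed=0):
    """Draw an m-point Latin hypercube sample in n dimensions on [0, 1)."""
    rng = np.random.default_rng(seed)
    sample = np.empty((m, n))
    for j in range(n):
        # place one point in each of the m equally spaced bins, in random order
        bins = rng.permutation(m)
        sample[:, j] = (bins + rng.random(m)) / m
    return sample

print(latin_hypercube(5, 3))
```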

Orthogonal Arrays

An orthogonal array is one of a set of tables used to determine the trial conditions
and the number of experiments. Orthogonal Arrays are represented as:

L-(number of experiments) (levels)^factors

For example, L-8 (2)^5, where: 8 = number of experiments, 2 = number of levels
and 5 = number of factors.

Examples of Orthogonal Arrays:

L-4 (2)^2, L-8 (2)^3, L-16 (2)^4, L-32 (2)^5, L-64 (2)^6 : all at 2 levels

L-9 (3)^2, L-18 (3)^2 (2), L-27 (3)^3 : at 3 and 2 levels

L-16 (4)^2 and L-32 (4)^2 (2) : modified, at 4 levels

OA-based Latin Hypercube Sampling

Orthogonal Array-based Latin Hypercube Sampling was introduced by Tang. The
sample points obtained from this method are Latin hypercube samples; in addition,
under the transformation [s X_ji] the set of sample points forms an orthogonal
array. The resulting sample has two properties:

1. If you divide the range of any input variable into s equally spaced bins, each bin
gets exactly one sample point.
2. If you divide the ranges of any two input variables into s*s bins, each bin
contains exactly one sample point.

Rotatability

Rotatability is a feature which is not present in 3-level factorial designs. In a
rotatable design, the variance of the predicted values of y is a function only of the
distance of a point from the center of the design, not of the direction in which the
point lies from the center. Little or no information may exist about the region which
contains the optimum response, so the experimental design matrix should not be
biased toward examination in any particular direction.

In a rotatable design, the contours of the variance of the predicted values are
concentric circles.

4. Evolutionary Operation (EVOP)

Evolutionary Operation (EVOP) was introduced in the 1950s by George Box. It is a
continuous method of operating a full-scale process. It helps to generate and
disseminate information for improving the process through a simple experimental
design applied during production. Small changes are made in the levels of the
process variables, small enough that no considerable difference appears in the
product characteristics. EVOP is run by the process operators while production is
underway and satisfactory products are being produced.

Importance

Both the traditional experimental designs and the Taguchi method involve
deliberate, sometimes large, changes to the process. They help in product and
process optimization, both in terms of levels and design factors, and they can prove
very useful in improving quality during the design stage. Although these methods
have a lot of utility, they are very costly in terms of money, time and manpower,
and most of the time they hinder the manufacturing process. Therefore, they are
not carried out actively as a routine.

The idea behind EVOP is to replace the static operation of a process with an orderly
scheme of small changes in the controlled variables. The results of these small
changes are analyzed, and this helps steer the process in the direction of
improvement.

The EVOP method is very apt when:

1. The performance of the process is changing over time.


2. More than three control variables need to be disturbed.
3. Fresh optimization for the product/ process is required with each new lot of
material.

There are one-, two- and three-factor EVOP designs.

Single Factor EVOP designs

1. X, the current production level, is taken as the center point; it may have been
determined by designed experimentation.
2. Acceptable levels X - ∆ and X + ∆, which lie within the specification limits, are
chosen.
3. The quality of the process output at all three production levels (X - ∆, X and
X + ∆) is evaluated, and the level which produces the highest quality of product is
chosen as the new center.

Two- Factor EVOP designs

If X is taken as the first factor at the current production level and Y as the second
factor, the quality of the output is evaluated at all the different combinations of X
and Y, including the ∆ steps around them. The combination which produces the
highest quality becomes the new center point.

The Three-Factor EVOP follows the same procedure as the Two-Factor but with
three factors.

CHAPTER 8
8 The Control Phase

Objectives
The objectives of the CONTROL phase are to:

 Implement a plan for maintaining the improvement or gains


 Verify long term process capability

The Six Sigma project is nearly complete. The goals have been met and the
customer has accepted the deliverables. You may be wondering: now what? There is
one thing to be careful about: you have to see that the process or project doesn't
backslide. Control is essential to ensure there is no backsliding. An organization has
to ensure that the gains are permanent and that the process is stable. A solution is
of little or no value if it isn't sustained over a long period of time.

The Control phase establishes standard measures to maintain performance and to


correct problems as required.

This is the phase where changes which were made in the X’s are maintained in
order to sustain or hold the improvements in the resulting Y’s.

The Control Phase is the last step to sustaining the improvement in the DMAIC
methodology. The Control phase is characterized by completing the project work
and handing over the improved process to the process owner. The control phase
gets special emphasis in Six Sigma as it helps to ensure that the solution stays
permanent and provides additional data to make further improvements. Experience
has shown that hard-earned results are very difficult to sustain if the process is left
to itself. A well-designed process has inherent self-control, unlike a poor process,
which requires external control.

Given below are some tools for control planning.

A. Statistical Process Control (SPC)

The main objective in any production process is to control and maintain the quality
of the manufactured product or service so that it conforms to specified quality
standards, ensuring that the proportion of defective items is not too large. This is
known as process control. Product control, on the other hand, means controlling the
quality of the product by studying the product at crucial points, through sampling
inspection plans.
The objective of the Control Phase is to ensure that the improved processes now
enable the key variables of the process to stay within the maximum acceptable
limits, by using tools like SPC.

The SPC expert collects information about the process and does a statistical
analysis on that information. He can then take necessary action to ensure that the
overall process stays in-control and to allow the product to meet the desired
specifications. He can recommend ways and means to reduce variations, optimize
the process, and perform a reliability test to see if the improvements work.

Statistical process control means planned collection and effective use of data for
studying causes of variations in quality, either as between processes, procedures,
materials, machines, etc., or over periods of time. This cause and effect analysis is
then fed back into the system with a view to continuous action on the process of
handling, manufacturing, packaging, transporting and delivery at end-use.

The main concept in statistical process control is that every measurable


phenomenon has a statistical distribution. This means an observed set of data
constitutes a sample of the effects of unknown chance causes. It follows that, after
you eliminate assignable causes of variation, there will still remain a certain amount
of variability exhibiting the state of control. A production process is said to be in a
state of statistical control, if it is governed by chance causes alone, in the absence of
assignable causes of variation. A greater consistency in fulfilling customer’s
requirements leads to greater customer satisfaction. Reduced variation in internal
processes leads to less time and money being spent on rework and waste. This
leads to greater profitability and security for your business. SPC is one of the
essential tools necessary to maintain an advantage in the competitive marketplace
of the current day.

As part of an ongoing cycle of continuous process improvement, SPC helps in


modifying your processes to the virtually error free Six Sigma level.

Variation in the quality of the manufactured product in any repetitive industrial
process is inherent and inevitable. These variations in quality characteristics are
broadly classified as being due to two kinds of causes:

Chance Causes of Variation

These are the variations which result from many minor causes that behave in a
random manner. Chance causes produce a stable pattern of variation, and there is
no way in which they can be completely eliminated. One has to allow for a certain
amount of variation within this stable pattern, usually termed allowable variation.
The range of such variation is known as the natural tolerance of the process.

Assignable Causes of Variation

These are variations that may be attributed to special non-random causes, also
termed preventable variation. These variations can creep in at any stage of the
process, from the arrival of the raw material to the final delivery of goods. Such
variations can be the result of several factors, such as a change in the raw material,
a new operation, an improper machine setting, broken parts or mechanical faults in
a plant. Assignable causes can be identified and eliminated, and they should be
discovered and removed before the production process turns out defective items.

For example, a machine produces 20,000 bolts of 3" length every day. It is very
unlikely that all the bolts are exactly 3" in length. If the measuring instrument is
sufficiently precise, some bolts slightly less than 3" and some slightly more than 3"
will be detected. This points to the possible causes of variation in the product, i.e.
chance or assignable causes.

1. Objectives and Benefits of SPC

1. Without SPC, decisions regarding quality improvement are based on intuition.
SPC provides a scientific basis for decisions regarding product improvement.
2. SPC helps in the detection and correction of many production troubles.
3. SPC brings about a substantial improvement in product quality and a reduction
of spoilage and rework.
4. SPC gives information about when to leave a process alone and when to take
action to correct troubles.
5. In the presence of good statistical control by the supplier, previous lots supply
evidence about the present lots, which provides better quality assurance at lower
inspection cost.
6. If testing is destructive in nature, a process in control gives confidence in the
quality of the untested product.
7. SPC reduces the waste of time and material to a certain extent by giving an early
warning about the occurrence of defects.

2. Selection of Variable
Control charts are a tool in statistical process control, and selection of the variable
means choosing which quality characteristic to chart. The variable chosen for the
control of average (X-bar) and range (R) charts must be something that can be
measured and expressed in numbers, such as a dimension, hardness, tensile
strength or weight. The real basis of choice is always the possibility that costs can
be reduced or prevented. From the standpoint of reducing production costs, the
candidate for a control chart is any quality characteristic that is causing rejection or
rework involving substantial costs. From the inspection and acceptance point of
view, destructive testing always suggests an opportunity to use control charts to
reduce costs. In general, if acceptance is on a sampling basis and the quality tested
can be expressed as a measured variable, it is worth examining these costs with a
view to basing acceptance on control charts for variables. Often the best chances to
save costs are in places that would not be suggested either by an examination of
the costs of spoilage and rework or by inspection costs.

While selecting variables for the initial application of the control chart technique, it
is important not only to choose those variables with opportunities for cost saving
but to meticulously select a type of saving that everyone, including those in a
supervisory capacity and the management, will readily accept as being a real saving.

The statistical tools applied in the process control are control charts (discussed in
the subsequent sections). The primary objectives of process control are (a) to keep
the manufacturing process in control so that the proportion of defective units is not
excessive and (b) to determine whether a state of control exists.

3. Rational Sub-Grouping

Rational sub-grouping means the division of observations into rational sub-groups;
the idea was given by Shewhart. Control charts provide a statistical test to determine
whether the variation from sub-group to sub-group is consistent with the average
variation within sub-groups. Sub-groups should be chosen so that the measurements
within a sub-group are as homogeneous as possible, giving the maximum
opportunity for variation to appear from one sub-group to another. The rational
sub-group is the basis of all control charts. Rational sub-groups are composed of
items produced under essentially the same conditions. The statistics, such as the
average and range, are computed for each sub-group separately and then plotted
on the control charts.

Two Schemes Involving Order of Production as a Basis for Sub-Grouping


When the order of production is used as a basis, two fundamental approaches are
possible:

 In the first method, one sub-group consists of product produced as nearly as possible at one time; the next
sub-group consists of product produced as nearly as possible at a later time, and so forth.
This method follows the rule for selection of rational sub-groups of permitting a minimum
chance for variation within a sub-group and a maximum chance for variation from sub-group to
sub-group. It gives the best estimate of the capability of the process obtainable if assignable causes
of variation can be eliminated, and it provides a more sensitive measure of shifts in the process
average.
 In the second method, one sub-group consists of product intended to be representative of all the
production over a given period of time; the next sub-group consists of product intended to be
representative of all the production over a later period, and so forth. This method is preferred
when one of the purposes of the control chart is to influence decisions on acceptance of the
product.

Calculation of Range for each Sub-group

The highest and the lowest number in the sub-group must first be identified. With
large sub-groups it is better to mark the highest value with the letter H and the
lowest with the letter L. The Range is calculated by subtracting the lowest value
from the highest value i.e. R = (H-L).
4. Analysis of Control Charts

A control chart is a statistical tool principally used for the study and control of
repetitive processes. It is a graphical tool used for presenting data so as to directly
expose the frequency and extent of variations from the established standard goals.
Control charts are simple to construct and easy to interpret and they tell at a glance
whether or not the process is in control, i.e., whether a process lies within the
tolerance limits.

A Six Sigma enterprise or any industry in general faces two kinds of problems:
a. To check whether the process is conforming to the standards.
b. To improve the level of standard and reduce variability consistent with cost
considerations.

Shewart’s control charts provide an answer to both. They provide criteria for
detecting lack of statistical control. Control charts are the running records of the
performance of the process and, as such, they contain a vast store of information
on potential improvements.

A control chart consists of three horizontal lines:

1. A central line to indicate the desired standard or level of the process (CL)
2. An Upper Control Limit (UCL)
3. A Lower Control Limit (LCL)

In the control chart, the upper control limit (UCL) and the lower control limit (LCL)
are usually plotted as dotted lines and the central line (CL) is plotted as a dark line.
If t is the underlying statistic, then these values depend on the sampling
distribution of t and are given by:

UCL = E (t) + 3 S.E. (t)

LCL = E (t) - 3 S.E. (t)

CL = E (t)

From time to time a sample is taken and the data are plotted on the graph paper.
As long as the sample points fall within the upper and lower control limits there is
no cause for worry, as the variation between the sample points can be attributed to
chance causes. The problem occurs only when a sample point falls outside the
control limits. This is considered as a danger signal, which indicates that assignable
causes give rise to variations.

This can be represented in a diagram:


Figure 32: A chart showing Control Limits.
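As a simple numerical illustration, the limits can be computed directly from the plotted statistic. The following Python sketch uses hypothetical subgroup averages, and uses the sample standard deviation of the plotted statistic as a stand-in for its standard error; the data are assumptions made for the example.

```python
import numpy as np

# Hypothetical values of the plotted statistic t (here, subgroup averages)
t = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7])

cl = t.mean()                     # CL  = E(t)
se = t.std(ddof=1)                # sample estimate standing in for S.E.(t)
ucl = cl + 3 * se                 # UCL = E(t) + 3 S.E.(t)
lcl = cl - 3 * se                 # LCL = E(t) - 3 S.E.(t)

print(f"CL={cl:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")
print("Points outside limits:", np.where((t > ucl) | (t < lcl))[0])
```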


5. Selection and Application of Control Charts

The statistical tools for data analysis in quality control of the manufactured
products are given by four techniques.

1. Control Charts for Variables

Variables are those quality characteristics of a product which are measurable and
can be expressed in specific units of measurement, such as diameter, tensile
strength or life. Such variables are of the continuous type and follow the normal
probability law. For quality control of this type of data, two control charts are used
together, technically known as the X-bar (average) chart and the R (range) chart.

The X-bar chart is used to show the quality averages of the samples drawn from a
given process and the range chart is used to show the variability of the quality
produced by a given process.
During the production process, some amount of variation is produced in the items.
The control limits in the X-bar and R charts are so placed that they show the
presence or absence of assignable causes of variation in the:

1. Average - mostly related to machine settings

2. Range - mostly related to the operator

Computational Procedure for X-bar Chart

With k sub-groups of size n, compute each sub-group average and range, and then
the grand average X-double-bar and the average range R-bar. The control limits for
the X-bar chart are:

UCL = X-double-bar + A2 * R-bar
CL = X-double-bar
LCL = X-double-bar - A2 * R-bar

where A2 is a tabulated constant that depends on the sub-group size n.

Computational Procedure for Range Chart

The range chart is used to show the variability of the quality produced by a given
process. The R chart is generally presented along with the X-bar chart, and the
general procedure for constructing it is similar to that of the X-bar chart. The values
required for constructing an R chart are the average range R-bar and the tabulated
constants D3 and D4:

UCL = D4 * R-bar
CL = R-bar
LCL = D3 * R-bar
Construction Procedure for Control Charts for X-bar and R

Control charts are plotted on a rectangular co-ordinate axis. The vertical scale
represents the statistical measure of X-bar and R, and the horizontal scale
represents the sample number.
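A minimal Python sketch of the computation is shown below, using hypothetical sub-groups of size n = 5 and the standard tabulated constants A2, D3 and D4 for that sub-group size; the measurement values are assumptions made for the example.

```python
import numpy as np

# Hypothetical data: 8 subgroups of n = 5 measurements each
data = np.array([
    [10.2,  9.9, 10.1, 10.0, 10.3],
    [ 9.8, 10.0, 10.2, 10.1,  9.9],
    [10.1, 10.3, 10.0,  9.8, 10.2],
    [10.0,  9.9, 10.1, 10.2, 10.0],
    [10.3, 10.1,  9.9, 10.0, 10.2],
    [ 9.9, 10.0, 10.1, 10.3, 10.1],
    [10.2, 10.2, 10.0,  9.9, 10.0],
    [10.1, 10.0, 10.2, 10.1,  9.8],
])

A2, D3, D4 = 0.577, 0.0, 2.114            # tabulated constants for n = 5

xbar = data.mean(axis=1)                  # subgroup averages
R = data.max(axis=1) - data.min(axis=1)   # subgroup ranges R = H - L
xbarbar, rbar = xbar.mean(), R.mean()

print("X-bar chart:", xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar)
print("R chart    :", D3 * rbar, rbar, D4 * rbar)
```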
2. Control Charts for Attributes
Attributes are those product characteristics which cannot be measured. Such
characteristics can only be identified by their presence or absence from the
product, for example, whether a bottle has cracks or not. Attributes may be judged
either by the proportion of units that are defective or by the number of defects per
unit. For quality control of such type of data, two control charts are used:

a. Control Chart for fraction Defective or p-chart

This chart is used in attributes if the quality characteristics of the product are not
amenable to measurement but can be identified by their absence or presence from
the product or by classifying product as defective or non-defective.

The p-chart is designed to control the proportion of defectives per sample. It can be
used even where the sample size varies from sample to sample, and it gives a more
straightforward and less cluttered presentation, since expressing the defectives as a
percentage of production is more meaningful. While dealing with attributes, a
process is judged to be in statistical control if all the samples can be regarded as
having the same population proportion defective P. If d is the number of defectives
in a sample of size n, then the sample proportion defective is p = d/n, and d is a
binomial variate with parameters n and P. The control limits are then placed at
p-bar ± 3 sqrt( p-bar (1 - p-bar) / n ).
c. Control Chart for the Number of Defects per unit or c-chart

This chart is used with advantage when the characteristic representing the quality
of a product is a discrete variable, for example, the number of surface defects
observed on a sheet of photographic film.

An article which does not conform to one or more specifications is termed
defective, while any single instance of the article's lack of conformity to a
specification is a defect. Thus, every defective contains one or more defects. The
sample size for a c-chart may be a single unit; for example, in the case of surface
defects, the area of surface inspected is the sample size. The pattern of variation in
the data follows the Poisson distribution, with equal mean and variance. If c is
taken as a Poisson variate with parameter λ, then

CL = λ, UCL = λ + 3 √λ, LCL = λ - 3 √λ (taken as zero if negative).
d. Number of Defects per unit for Variable Sample Size or u - chart

If the sample size n varies, then the statistic used is u = c/n. If n_i is the sample
size and c_i the total number of defects observed in the i-th sample, then

u_i = c_i / n_i  (i = 1, 2, ..., k)

gives the average number of defects per unit for the i-th sample.

The pattern of variation in the data follows the Poisson distribution with equal mean
and variance, and the control limits for the i-th sample are placed at
u-bar ± 3 √( u-bar / n_i ).

When standards are not given, λ is not known; it is estimated by the mean number
of defects per unit. Thus, if c_i is the number of defects observed on the i-th
inspected unit, the estimate of λ is the total number of defects observed divided by
the number of units inspected.
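The limits for the attribute charts can be computed in the same spirit. The following Python sketch uses hypothetical defective counts for a p-chart and hypothetical defect counts for a c-chart; all numbers are assumptions made for the example.

```python
import numpy as np

# p-chart: hypothetical defective counts d_i from samples of varying size n_i
d = np.array([4, 6, 3, 7, 5])
n = np.array([100, 120, 90, 110, 100])
p_bar = d.sum() / n.sum()
ucl_p = p_bar + 3 * np.sqrt(p_bar * (1 - p_bar) / n)            # limits vary with n_i
lcl_p = np.clip(p_bar - 3 * np.sqrt(p_bar * (1 - p_bar) / n), 0, None)

# c-chart: hypothetical defect counts per inspected unit
c = np.array([3, 5, 2, 4, 6, 3])
lam = c.mean()                            # estimate of the Poisson parameter
ucl_c = lam + 3 * np.sqrt(lam)
lcl_c = max(lam - 3 * np.sqrt(lam), 0)

print("p-chart UCLs:", np.round(ucl_p, 4))
print("c-chart:", round(lcl_c, 3), round(lam, 3), round(ucl_c, 3))
```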


6. PRE - Control

PRE-control is a statistical technique that can be used alongside X-bar (average)
and R (range) control charts. It helps operators control the process so that the
proportion of defective items is reduced during production. It describes the process
situations and causes of variation that could produce defects, and establishes
control limits without the calculations normally used for upper and lower control
limits.

PRE-control can draw control limits and conclusions from a small number of
sub-groups, while SPC requires about 25 sub-groups, so PRE-control gives feedback
about the process from the start. PRE-control limits are placed so that the process
is centered between them, that is, the PRE-control limits lie within the specification
limits. PRE-control requires almost no calculation or plotting. It divides the
tolerance into zones to provide control information. The technique can be explained
with the help of a symmetric normal distribution curve, which shows the variation
within the spread of a production process that may produce an increase in defects.

Two PRE-control (PC) lines are drawn on the normal curve to set the control limits,
each one quarter of the way inside the specification limits. From the figure given
below, it is clear that about 86% of the items should fall within the PC lines, leaving
about 14% outside them, roughly 7% in each outer band between a PC line and the
specification limit. In other words, roughly 1 part in 14 falls in a given outer band
when the process is just capable. Thus the chance of two consecutive readings
falling in the same outer band is (1/14) x (1/14), or 1/196. This is the foundation of
PRE-control: only about one pair in every 200 consecutive pieces should fall in the
same outer band by chance. Considering all 4 possible permutations of the two
consecutive pieces in the outer bands, the chance is 4/196, or nearly 2%. In other
words, only about 2% of the time will the operator get a signal to adjust a process
that is actually centered.
B. Advanced Statistical Process Control

1. Uses of Short-run SPC

Short production runs are an important topic for many manufacturing companies.
The trend in manufacturing has been toward smaller production runs, with the
product tailored to the specific needs of individual customers. The usual rule when
drawing mean and range charts is that the control limits should not be calculated
until data are available from at least 25 sub-groups of 5 units each. Many
production runs do not satisfy this rule; they involve fewer parts than are required
to start a standard control chart. Often the usual SPC methods can be modified
slightly to work with short and small runs; for example, X-bar and R control charts
can be created using moving averages and moving ranges.
However, there are some SPC methods that are particularly well suited to
application on short or small runs.

Exact Method of Computing Control Limits for Short and Small Runs

The procedure applies to any situation where a small number of sub-groups will be
used to set up a control chart, including short runs. It consists of three stages:

1. Finding the process (establishing statistical control).

2. Setting limits for the remainder of the initial run.
3. Setting limits for future runs (after the run is complete, combine the raw data
from the entire run and perform the analysis).

Stage three assumes that there are no causes of variation between runs. If there
are, the process may go out of control. This approach leads to the use of the
standard control chart tables once enough data have accumulated.

Setup Approval Procedure

The following procedure can be used to determine if a setup is acceptable using a


relatively small number of sample units.

1. After the initial setup, run 3 to 10 pieces without adjusting the process.
2. Compute the average and the range of the sample.
3. Compute T = |average - target| / range.
The target value is usually the specification midpoint or nominal.
4. If T is less than the critical value of T in the table (given below), accept the setup.
Otherwise, adjust the setup to bring it closer to the target. There is approximately 1
chance in 20 that an on-target process will fail this test.

This procedure correctly compensates for the uncertainties involved when


computing control limits for small amount of data.
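A minimal Python sketch of the setup-approval calculation is shown below; the measurements, target and especially the critical value are placeholder assumptions, since the actual critical value must come from the table referred to above.

```python
# Setup approval check: a minimal sketch under stated assumptions.
pieces = [10.02, 9.98, 10.05, 10.01, 9.99]     # hypothetical first pieces after setup
target = 10.00                                 # assumed specification nominal
critical_T = 0.95                              # placeholder; take the real value from the table

avg = sum(pieces) / len(pieces)
rng = max(pieces) - min(pieces)
T = abs(avg - target) / rng                    # T = |average - target| / range
print("T =", round(T, 3),
      "-> accept setup" if T < critical_T else "-> adjust setup")
```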
Small runs and short runs are common in modern business environments.
Different strategies are needed to deal with these situations. Advance planning is
essential.

2. Uses of EWMA Charts

In the SPC technique, as long as the variation in the statistic being plotted remains
within the control limits, the process is left alone; if a plotted statistic exceeds a
control limit, the cause has to be ascertained. This approach works well as long as
the process remains static. However, the means of many automated manufacturing
processes drift over time because of inherent process factors or common-cause
drift. In spite of this, there may be known ways of intervening in the process to
compensate for the drift. Such drift can be studied through the use of a common
cause chart. This approach involves creating a chart of the process mean; control
limits are not used, but action limits are placed on the chart. Action limits are
computed based on costs rather than on statistical theory, and a prescribed action
is taken to bring the process closer to the target value.

These charts are called ‘common cause charts’ because the changing level of the
process is due to built-in process characteristics. The process mean is tracked by
using exponentially weighted moving averages (EWMA).

EWMA Charts have a number of advantages for automated manufacturing:

1. EWMA Charts use the actual process data to determine the predicted process
value for processes that may be drifting i.e. they can be used when processes have
inherent drift.
2. If the process has trend or cyclical components, the EWMA will reflect the effect
of these components.
3. EWMA Charts provide a forecast of where the next process measurement will be.
This allows feed-forward control.
4. EWMA Charts can be used to take preemptive action to prevent a process from
going too far from the target. EWMA models can be used to develop procedures for
dynamic process control.
5. If the process has inherent non-random components, an EWMA common cause
chart should be used.

The equation for computing the EWMA is

EWMA_t = Λ x_t + (1 - Λ) EWMA_(t-1)

where x_t is the current observation and Λ (between 0 and 1) is the weight given to
the current sample.

The relationship between X-bar and EWMA charts helps in understanding the EWMA
chart. X-bar charts give 100% of the weight to the current sample and 0% to past
data; this corresponds roughly to Λ = 1 on an EWMA chart, in which case the plotted
points are all independent of one another. In contrast, the EWMA chart uses
information from all previous samples. The X-bar chart treats all the data points as
coming from a process that does not change its central tendency.
While using an X-bar chart it is not essential that the sampling interval be kept
constant, since the process is assumed to behave as if it were static. The EWMA
chart, however, is designed to account for process drift, and therefore the sampling
interval should be kept constant when using EWMA charts. This is usually not a
problem with automated manufacturing. Control limits can be placed on the EWMA
chart when the situation demands it.

The three-sigma control limits for the EWMA chart are computed as

Control limits = (process average) ± 3 σ_x-bar √( Λ / (2 - Λ) )

A value of Λ near 0 provides more smoothing by giving greater weight to the
historic data, while a Λ value near 1 gives greater weight to the current data. The
recommended range of Λ is 0.2 to 0.3. Hunter proposes an EWMA control chart
scheme where Λ = 0.4; this value of Λ provides a control chart with approximately
the same statistical properties as an X chart combined with the run tests. Thus, to
compute the control limits for an EWMA chart when Λ = 0.4, you simply compute
the X-bar chart control limits and divide the distance between the upper and lower
control limits by 2. The EWMA should remain within these limits.
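The following Python sketch computes an EWMA series and its asymptotic three-sigma limits for hypothetical readings; the target, sigma and Λ values are assumptions chosen for the example.

```python
import numpy as np

lam = 0.2                                  # smoothing constant (recommended 0.2-0.3)
x = np.array([10.1, 9.9, 10.2, 10.4, 10.3, 10.6, 10.5, 10.7])   # hypothetical readings
target, sigma = 10.0, 0.2                  # assumed process target and sigma

ewma = np.empty_like(x)
prev = target                              # start the EWMA at the target value
for i, xi in enumerate(x):
    prev = lam * xi + (1 - lam) * prev     # EWMA_t = lam*x_t + (1-lam)*EWMA_(t-1)
    ewma[i] = prev

# Asymptotic three-sigma limits for the EWMA statistic
half_width = 3 * sigma * np.sqrt(lam / (2 - lam))
print("UCL =", target + half_width, "LCL =", target - half_width)
print("EWMA:", np.round(ewma, 3))
```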

3. Uses of Cumulative Sum Chart (CUSUM-Chart)

A CUSUM is a process of subtracting a predetermined value, such as a target,


preferred or reference value from each figure in a sequence and progressively
cumulating the differences. The graph of the series of cumulative differences
is known as a CUSUM chart.

CUSUM-charts are particularly used for detecting small shifts in the process.
CUSUM-charts have shown more efficiency in detecting small shifts in the
mean of a process. This chart helps in easy visual interpretation of the data.
When it is desired that shifts be detected in the mean that are 2-sigma or less,
these charts are used. This chart also detects process changes more rapidly
than the control chart stability rules. Therefore this chart can be chosen to
monitor for small process shifts (less than 1.5-sigma).

The CUSUM chart is basically a graphical representation of the trend in the


outcomes of a series of consecutive procedures performed over time. It is
designed to quickly detect change in performance associated with an
unacceptable rate of adverse outcome. At an acceptable level of
performance, the CUSUM curve runs randomly at or above a horizontal line
(no slope/flat). However, when performance is at an unacceptable level, the
CUSUM slopes upward and will eventually cross a decision interval. These
decision intervals are horizontal lines drawn across a CUSUM chart. Thus the
CUSUM line provides an early warning of an adverse trend.

A CUSUM chart is mainly of two types:

Tabular CUSUM

V-mask CUSUM

Since a CUSUM chart is used for variable data which plots the cumulative sum
of the deviations from a target, a V-mask is used as the control limits. Since
each plotted point on the CUSUM chart uses information from all prior
samples, it detects much smaller process shifts than a normal control chart
would. CUSUM charts are especially effective with a subgroup size of one. Run
tests should not be used since each plotted point is dependent on prior points
as they contain common data values.

CUSUM charts may also be preferred when the subgroups are of size n=1. In
this case, an alternative chart might be the individual X chart, in which case
you would need to estimate the distribution of the process in order to define
its expected boundaries with control limits.

As with other control charts, CUSUM charts are used to monitor processes
over time. The charts' x-axes are time based, so that the charts show a history
of the process. For this reason, you must have data that is time-ordered; i.e.,
data entered in the sequence from which it was generated.
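A minimal Python sketch of a tabular CUSUM is shown below; the data, target, sigma, allowance k and decision interval h are illustrative assumptions (k and h are commonly set at about 0.5 and 5 process standard deviations, respectively).

```python
import numpy as np

x = np.array([10.1, 10.0, 10.3, 10.4, 10.5, 10.6, 10.4, 10.7])  # hypothetical data
target, sigma = 10.0, 0.2
k = 0.5 * sigma          # allowance (half the shift to be detected)
h = 5 * sigma            # decision interval

c_plus = c_minus = 0.0
for i, xi in enumerate(x):
    c_plus = max(0.0, c_plus + (xi - target) - k)     # upper cumulative sum
    c_minus = max(0.0, c_minus + (target - xi) - k)   # lower cumulative sum
    if c_plus > h or c_minus > h:
        print(f"Signal at sample {i}: C+={c_plus:.3f}, C-={c_minus:.3f}")
```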

4. Understand Appropriate Uses of Moving Averages Charts

The Moving Average chart monitors the process location over time, based on the
average of the current subgroup and one or more prior subgroups. The X-axis of
the chart is time based, so that the chart shows a history of the process. For this
reason, time-ordered data is needed; otherwise, trends or shifts in the process may
not be detected.

There are two types of moving averages charts:

1. Moving Average-Sigma Chart


2. Moving Average-Range Chart

 Moving Average - Range Charts may be used when the cell size is less than ten
subgroups. If the cell size is greater than ten, Moving Average - Sigma charts may
be used.
 Moving Average - Sigma Charts are a set of control charts used for quantitative and
continuous data in measurement. It monitors the variation between the subgroups
over time. The Moving Average - Range charts also monitor the variation between
the subgroups over time.
 The plotted points for a Moving Average - Sigma Chart, called a cell, include the
current subgroup and one or more prior subgroups. Each subgroup within a "cell"
may contain one or more observations, but must all be of the same size.
 The control limits on the Moving Average chart are derived from the average
moving sigma, so if the Moving Sigma chart is out of control, the control limits on
the Moving Average chart are meaningless.
 These charts are generally used for detecting small shifts in the process mean.
They detect shifts of 0.5-sigma to 2-sigma much faster than Shewhart charts with
the same sample size. They are, however, slower in detecting large shifts in the
process mean.
 Run tests cannot be used in this case because of the dependence of the data points.
These charts can also be used when the subgroups are of size n=1.
 Another use of the Moving Average Charts is for processes with known intrinsic
cycles. Many accounting processes and chemical processes fit into this
categorization. Suppose you sample at set intervals and set the cell size equal to
the number of subgroups per cycle. As you drop the oldest sample in the cell, you
pick up the corresponding point in the next cycle. If the cyclical nature of the
process is upset, then the new points added will be substantially different, causing
out of control points.

The advantage of CUSUM, EWMA and Moving Average charts is that each
plotted point includes several observations; therefore the central limit
theorem can be used to say that the average of the points or the moving
average is normally distributed and the control limits are clearly defined.

C. Lean Tools for Control

The lean tools for control are described in detail in Chapter 9- Lean Enterprise.
Some of the lean tools are listed below:

1. Poka Yoke

Mistake proofing, also known as Poka Yoke, can be used in control planning to
make sure the problem is eliminated for good.

Poka Yoke is a mechanism that prevents a mistake from being made. It works
either by eliminating or greatly reducing the opportunity for an error, or by making
the error so obvious at first glance that the defect is almost certain not to reach the
customer. Poka Yoke creates actions that eliminate mistakes, errors and defects in
everyday processes and activities; in other words, it is used to prevent the causes
that give rise to defects. Mistakes are not converted to defects if the errors are
discovered and eradicated beforehand.

An analysis of the cause-and-effect relationship of a defect is the first step towards


the mechanism of Poka Yoke. Then a remedy that wipes out the occurrence of the
mistakes that lead to that defect is applied. Poka Yoke solutions can consist of any
way that helps to ensure the mistake will be eliminated for good. It can be the
creation of a check list, an altered sequence of operation, a computer data entry
form, a message that reminds the user to complete a task etc. Poka Yoke has wide
applicability, especially in engineering, manufacturing, and transactional processes.

Poka Yoke can be done in two ways:

The Type-1 corrective action, usually believed to be the most effective form of
process control, is a type of control which when applied to a process eliminates the
possibility of an error condition from occurring.

The second most effective type of control is the Type -2 corrective action, also
known as the detection application method. This is a control that discovers when
an error occurs and stops the process flow or shuts down the equipment so that
the defect cannot move forward.

Both these methods are effective in control planning.


You will find examples of Poka Yoke in daily life. For example, in a standard 3.5-inch
floppy disk, the top right corner is shaped in such a way that the disk cannot be
inserted upside down. Electronic door locks have mistake-proofing devices: they
ensure that no door is left unlocked, and the doors won't lock when a door is open
or when the engine is running. A computer data entry form that won't let the user
proceed to the next field until the blank is filled is also mistake proofing.

2. 5S

3. Visual Factory

4. Kaizen, Kanban

D. Project Closure

Months of successful Six Sigma project planning and implementation can be
reversed if an adequate amount of attention is not paid to project closure. A key
step in any Six Sigma project life-cycle is formally closing the project. Through the
Project Closure Report, the success of future projects and high quality deliverables
can be ensured. Many plans and adjustments are made during the course of the
project in relation to the end result, and the inputs gathered from the current
project are critical to the outcomes of future projects. The lessons learned can be
used to avoid similar mistakes in future projects. That is why project closure is so
important.

Systems and Structure Changes to Institutionalize the Improvement

To ensure the permanence of the introduced changes in the Control phase and to
sustain the gains, it is necessary to institutionalize these solutions. The following is
a list of systems and structure changes to make these improvements an accepted
part of the organization.

Communication of Metrics:

It is critical that the change details and metrics be communicated at every step. A
metric is a value calculated from multiple measurements; it may include tolerances,
procedures or data sheets related to the change. It should be made sure that
appropriate quality checks, gauging and verification, and operator feedback are in
place. Changes in personnel training also need to be in place: the new and better
ways of doing things that result from the Six Sigma project need to be
communicated to the personnel involved, all current employees need to be
retrained, and new employees must receive proper instruction.

Compliance:

It should be made sure that all individuals on the project are in agreement with the
change. It is important to get everyone's approval before implementation, lest
someone challenge the change later.

Policy Changes:

The corporate policies also need to be revised in line with the results generated
from the project. It needs to be seen whether some policies have become obsolete
and whether new policies are needed.

Modification of quality appraisal and audit criteria:

To make sure the process or product conforms to requirements, the quality control
department exists in an organization. The quality control activity assures that the
documented changes will result in changes in the way the actual work is done. It
should also be ensured that there is an audit plan for regular surveillance of the
project’s gains.

Revision in budgets:

The Six Sigma project team should adjust budgets in accordance with the
improvements gained in the process. Budgets should only be adjusted to the extent
that profitability and capital inflow are not adversely affected.

Modification in engineering drawings:

Many Six Sigma projects require engineering changes as part of fixing the problem.
The project team should ensure that any engineering changes, for example in
manufacturing or software, result in the actual changes being translated into the
engineering drawings. Instructions should be issued to withdraw obsolete drawings
and work instructions.

Modification in manufacturing planning:

Six Sigma teams usually find new and improved ways of manufacturing a product. If
new manufacturing plans are not documented, they are likely to be lost. The project
teams should make new manufacturing plans at least for the processes that are
included in the project.

Revision in manpower forecasts:

As a result of the Six Sigma project, the productivity and efficiency of manpower
increase. It can happen that the Six Sigma program results in less manpower
producing the same output, and this change should be mirrored in the manpower
planning requirements. Higher quality and faster cycle times create more value for
customers and have a positive effect on sales.

Formal Project Closure Report

A project closure report is developed once the project is completed and all the
project deliverables have been delivered to the Process Owner or Business Owner.

The Black Belts, being the project leaders, are entrusted with the responsibility of
preparing a carefully detailed Project Closure Report to guarantee the project is
brought to a controlled end. The Project Closure Report template is an important
part of project closure. It is a final document produced for the product or process,
used by senior management and Black Belts to tie up the "loose ends". It contains
the framework for communicating project closure information to the main
stakeholders of the Six Sigma project.

The end project report is to be made by the project leader/manager and it should
include the main findings, outcomes, and deliverables. It should be a fair
representation of the project’s degree of success. This project closure report
template should contain:

 Details of the activities undertaken to close the project/process
 Details of outstanding issues, risks involved, and recommendations for handling them
 Details of operational matters.

The Black Belt project leader/manager should hold a review of the Six Sigma project
concerned with ensuring the completeness of all the project deliverables. From this
review it can be deduced what worked well for the project and how to avoid
repeating mistakes. The review should be attended by the process owner. The basic
question raised should be whether the process delivered the projected end product
or service within the time limit and the financial resources at the team's disposal.

CHAPTER 9

9 Lean Concepts

The word Lean was coined in the early 1990s by MIT researchers. However, Lean
manufacturing dates back to the post-World War II era; its concepts were
developed by Taiichi Ohno, a production executive with Toyota. Japanese
manufacturers were facing a lot of problems in fulfilling the demands of the
Japanese market. The mass production methods developed by Henry Ford, which
were efficient only for long runs of identical products, did not fit this situation.

The conditions faced by the manufacturing industry today are similar to those faced
by Japan in the 1940s. Therefore, Lean methods have become common industry
practice for enhancing efficiency and improving customer satisfaction. The Lean
method helps to reduce waste, commonly known as muda, in an orderly manner
throughout the value stream. Lean methodology is a challenge to muda; Lean
focuses on value, which is the opposite of muda. Ohno identified the following kinds
of muda:

1. Defects

2. Overproduction

3. Inventories (in process or finished goods)

4. Unnecessary processing

5. Unnecessary movement of people

6. Unnecessary transport of goods

7. Waiting

Womack and Jones (1996) added another kind of muda:

8. Designing goods and services which do not satisfy the customers


9.1 Theory of Constraints

The theory of constraints (TOC) is a management concept developed by Dr. Eliyahu
M. Goldratt. According to this theory, every process has a constraint. Whether the
workings of an organization are very complex or depend on only a small number of
variables, it will have at least one constraint. These often unrecognized constraints
hinder the organization from achieving exceptionally high levels of performance. If
performance is hampered, the generation of profits for the organization is
hampered as well.

The focus of the theory of constraints is to bring about an improvement in the
system. A business system is made up of several coordinated business processes,
all of which strive to achieve a common goal. A constraint is a weak link which
hinders the growth of the entire process, rendering the system weak. The job of
TOC in a production process is to identify the factors which hamper the speed at
which product moves through the process.

TOC consists of five steps:

1. Identifying the Constraint


A system is like a chain, and one weak link in the chain obstructs the performance
of the whole. The link which delivers the worst performance is the constraint.

2. Exploiting the Constraint


Once the constraint is identified, the next step is to take measures to improve upon
it.

3. Subordinating other Activities to the Constraint


All the other activities come secondary to the constraint. The individual
performance of some of the processes is held subordinate for the benefit of the
system. The speed of the other processes is matched to the speed of the constraint
for the good of the organization.

4. Elevating the Constraint


If the personnel in authority think that the output being delivered is still not
enough, they would increase the investment in equipment and manpower.
5. Repetition
If there is any change, go back to step one.

The Benefits of TOC

1. It increases the ability of the organization to accomplish its goals. It also


increases the net profits and the ROI for an organization.

2. It lessens the confusion in the organization.

3. It reduces the production lead time.

4. It reduces the stock-list. It especially lessens the work-in-process in a


manufacturing process and/or the finished goods in a distribution network.

5. It also helps to improve on-time delivery performance.

6. It gives capacity to the staff to analyze and resolve routine conflicts.

Shortcomings of TOC

According to TOC, in the absence of constraints an organization could earn
unlimited profits. TOC also holds that conventional cost accounting measures, such
as efficiency and utilization, are faulty; an organization which applies TOC would
therefore replace the traditional measures with throughput, inventory and operating
expense. These metrics are easy to calculate. However, it becomes difficult when
current reality trees, undesirable effects, evaporating clouds and future reality trees
need to be constructed during the problem solving stage. This is increasingly
difficult for laymen, such as front-line managers and supervisors.

9.2 Lean Thinking

Lean thinking is another way to improve processes. It helps to increase value and
minimize waste. Although Lean thinking is usually applied to the production process
to improve efficiency, it can be applied to all facets of the organization. The
advantages of applying the Lean methodology are that it leads to shorter cycle
times, cost savings and better quality.

Lean thinking embodies five basic principles:

1. Specifies Value

Value is determined by the customers. It is about what the customers demand and
what they are able and willing to pay for. To find out the preferences of customers
regarding existing products and services, methods such as focus groups and
surveys are used; to determine customer preferences regarding new products, the
DFSS method (Chapter 10) is used. The voice of the customer (VOC) is very
important in determining the value of a product. The opposite of value is waste, or
muda.

Consider a company, ABC, which manufactures mobile handsets. According to the
sales manager, sales are suffering because of the high price of the handset.
According to customer feedback, however, customers are shifting to other
manufacturers because of the absence of features like radio and MMS in ABC's
handsets. ABC should therefore determine the value of the product from the
customer feedback and build into its handsets the radio and other features which
the customers are looking for.

The product will have value only if it fulfills customer demands. Value plays a major
role in helping to focus on the organization's goals and in the designing of products,
and it helps in fixing the cost of a particular product or service. The organization's
job is to minimize waste and save costs across its business processes so that the
price the customers are willing to pay yields maximum profit for the organization.

2. Identifies Value Stream

Value stream is the flow of all the processes involved, including the steps from
design development to product launch and from order taking to delivery of a
specific product or service. It includes both value-added and non-value-added
activities; waste is a non-value-added activity. Although it is impossible to achieve
100% value-added processes, Lean methodologies help make considerable
improvements.

According to the Lean Thinking, there should be a partnership between the buyer
and the seller and the supply chain management to reduce wastage. The supplier
or the seller can be categorized according to the need. He can be classified as non-
significant or significant supplier or a potential partner. The classification can help
to solidify and improve relations between the supplier and the customers or
supplier and the organization.

Value Stream Mapping

After the key suppliers are categorized and the role they play for the organization is
determined, the next thing is to take steps to eliminate wastage. Tools such as
process activity mapping and quality filter mapping are used to identify and reduce
waste within the organization and also between the customer and the supplier.
There are two ways to observe the flow of work: logical and physical.

Role of Value Stream Mapping in Lean Thinking

1. It defines value from the customer’s view point.

2. It plots the present state of the value stream.

3. It applies the Lean methodology to spot muda in a process.

4. It helps predict the condition of the process in the future.

5. It develops a conversion plan.

6. It executes the plan.

7. It validates the new process.

3. Makes Value-Creating Steps Flow

Flow is the step-by-step movement of tasks along the value stream with no wastage or defects. Flow is a key factor in eliminating waste, and waste is a hindrance which stops the value chain from moving forward. A perfect value stream is one which does not hamper the manufacturing process. All the steps from the design to the launch of the product should be coordinated; this synchronization helps to reduce wastage and improve efficiency.

Customer satisfaction is paramount in making value flow. It is important to consider customer demands and the timing of those demands. This concept is known as Takt Time. The formula to compute Takt Time is

Takt Time = Available work time / Customer required volume

Work time does not include lunch or tea breaks or any other process downtime.
Takt Time is used to create short-time work schedules.
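
As a minimal sketch, the takt time calculation can be expressed in a few lines of Python; the shift length and the demand figure below are assumptions chosen purely for illustration.

# Hypothetical takt time calculation (assumed figures).
available_work_time = 450 * 60        # one 450-minute shift, in seconds, with breaks excluded
customer_required_volume = 900        # units the customer requires per shift

takt_time = available_work_time / customer_required_volume
print(f"Takt time: {takt_time:.0f} seconds per unit")   # roughly one unit every 30 seconds

Under these assumptions, a unit must leave the line about every 30 seconds to keep pace with customer demand.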

Spaghetti Charts

The current state of the physical work flow is plotted on spaghetti charts. A spaghetti chart is a map of the path that a particular product takes as it travels down the value stream. The product's path resembles a strand of spaghetti, hence the name. There is usually a great difference between the distance traveled in the current set-up and in the Lean set-up; the difference between the two distances is muda.

4. Pulls Customers towards Products or Services from the Value Stream


Traditional systems were ‘push’ systems. The conventional manufacturers believed
in mass production. Mass production means that a product is produced cheaply in
bulk. The product is then stored and the manufacturers hope that the produced
products would find a market. The Lean Thinking advocates the ‘pull’ system. The
manufacturers who adopt this principle do not produce a product unless it is
demanded by the customer.

According to this principle, the value stream pulls the customer towards products or services. The manufacturer manufactures nothing unless a need is expressed by the customer; production does not get underway merely according to forecasts or a pre-determined schedule. In short, nothing is manufactured unless it is ordered by the customer.

If a company is applying a Lean methodology and the principle of pull, it means that
it would require quickness of action and a lot of flexibility. As a result, the cycle time
required to plan, design, manufacture and deliver the products and services also
becomes very short. The communication network for the value chain should also be
very strong in the value chain so that there is no wastage and only those goods are
produced which are required by the customers. The biggest advantage of
the pull system is that non-value added tasks such as research, selection, designing
and experimentation can be minimized.

5. Perfection

Perfection is one of the most important principles of Lean Thinking. This is because
continuous improvement is required to sustain a process. The reason behind
sustaining a process is to eliminate the root causes of poor quality from the
manufacturing process. There are various methods to improve perfection and Lean
Masters work towards improving it.

Lean Masters

Lean masters are individuals from various disciplines with a common goal. They are
individual contributors who focus on the process to improve quality and
performance. Their work is to achieve efficient results. The results either may be for
their own organization or for their suppliers. The best way to achieve perfection
with the suppliers is through collaborative value engineering, supplier councils,
supplier associations, and value stream mapping between customers and
suppliers.

It is important that the above-mentioned principles be followed diligently in order to reduce wastage and deliver quality products and services to the customers.
customers and the suppliers must work in collaboration to achieve good results.
The effort of Lean Thinking should be to minimize wastage from the value stream
and improve efficiency. The Lean Thinking can be applied with the help of
committed leadership, a persuasive change agent and well-informed employees
and suppliers.

9.3 Continuous Flow Manufacturing

Continuous Flow Manufacturing (CFM) is a tactic which is used in production.


According to this strategy, a continuous improvement effort is required which calls
for an amalgamation of all the fundamentals of the manufacturing process. The
objective of a CFM process is to achieve a balanced production line with minimal
wastage, defect free production and cost savings.

In an era of increased competition, the customer is king. If a product does not satisfy the customer, efficiency and product quality inevitably suffer. A company which adopts the continuous flow manufacturing (CFM) system manufactures only the products for which the customer expresses a need. The benefits associated with the CFM methodology are greater efficiency, flexibility, cost savings and greater customer satisfaction. Simply put, CFM is a process for developing improved process flow through diligent teamwork and combined problem solving.

The problem-solving team is managed by a leadership team, which comprises three sub-teams. These teams are responsible for identifying and implementing process flow requirements, and they also evaluate the ongoing process. The entire organization acts as a change agent and helps in the improvement of the process. The efforts of the team lead the organization to adopt a continuous flow manufacturing environment.

Advantages

1. Increased customer satisfaction.


2. Decreased attrition rate.

3. Better quality and minimal wastage.

4. Better scheduling, reduced flow time and cost savings.

5. Better control over inventory and a reduction in in-process inventory.

6. Better utilization of resources.

7. Eradication of non-value-added tasks.

8. Improved safety practices.


9.4 Non-Value Added Activities

A non-value-added activity is one which neither adds value for the external customer nor provides any competitive advantage to the organization. Activities such as rework, inspection and control fail to meet the criteria for adding value. One of the main objectives of a Six Sigma project is to eliminate activities which do not add value.

Non-value-added activities add no value to the final output; they are activities which the customer does not want to pay for. It is important to note that some non-value-added activities are necessary and unavoidable. Where possible, such activities should either be folded into value-added activities or eliminated in order to save costs and obtain a better ROI.

There are eight kinds of wastes or non-value added activities which are identified in
Lean.

1. Overproduction

This is one of the most deceptive wastes. Overproduction simply means that a product is made earlier or faster than it is required, and it leads to the accumulation of unwanted stock. Overproduction happens when an organization wants to produce products cheaply in bulk, or produces extra to cover up quality deficiencies, machinery breakdowns, an unbalanced workload or a long process set-up. Overproduction leads to the unnecessary production of products which are not needed, and so wastes time, money, resources and personnel.

A Lean analysis helps to spot and eradicate the production of units which are no
longer in use or the ones which are obsolete in technology.

2. Inventory

If the supply of raw material, finished goods or work-in-process exceeds what a one-piece flow production process requires, it is considered waste. Inventory held for a year typically costs around 25% of its value in carrying costs. Lean Manufacturing helps to prevent wastage in the form of unnecessary work-in-progress or the production and storage of unwanted products.

3. Defects

Defect is a key waste which includes wastage in terms of men, machines, materials,
sorting or rework. Any product which requires scrapping, replacement or repair is
also included in the category of defective products. The reasons why products develop defects can be many. The main ones include unskilled workers, ineffectual
control over the process, lack of maintenance and imperfect engineering
specifications.

Lean analysis helps to recognize defects in the manufacturing process and helps
eradicate the production of faulty units which cannot be sold or used.

4. Processing

Processing is a waste which adds zero value to the product or service from the customer's perspective. It comprises spare copies of paperwork and other surplus processing done for unforeseen problems which might occur in the future. Waste also occurs in the form of accelerating the process to meet targets.

Lean methodology comes in handy for revealing unwanted steps or work elements which add no value to the product.

5. Transportation

Although transportation is an important aspect of the manufacturing process, it is a non-value added activity as it adds to costs but not to value. It involves the use of expensive equipment for the movement of men and material inside and outside the organization, and it incurs further costs for space, shelving, and the manpower and systems needed to track the material.

By incorporating Lean manufacturing in the organization, the transportation system can be reformed. Multiple handling of materials, hold-ups in material handling and needless handling can be avoided.

6. Waiting

Waiting means idle time. It includes waiting for parts from upstream operations, waiting for tools, and waiting for arrangements and directions from higher authority. Time wasted in measuring and procuring information also adds to idle time and is considered a waste. Idle time is time in which no value is added. In fact, idle manpower is a matter of greater concern than idle machinery.

7. Motion

Any movement of people or machinery that adds zero value to the product is motion waste. Examples of motion waste include time wasted hunting for tools, extra product handling, rearranging products, walking and loading. The causes of motion wastage include poor infrastructure, incompetent labor, weak processes and constant changes in scheduling.

Adopting Lean methodology helps as it exposes fruitless efforts and motions executed by the employees.

8. People

Wastage of manpower is a matter of concern. Factors such as the recruitment process, management style, attrition rate, low motivation from higher authority and failure to use employees' abilities to their fullest potential all contribute to waste in the form of people. People's abilities should be utilized fully in terms of their mental and creative capacity, their skills and their experience.

The main goal of Lean Manufacturing is the elimination of waste. Waste can be eliminated by identifying it and eradicating all non-value-added activities. Non-value-added activities eat up time, money and resources. However, it should also be noted that some activities, such as accounting and compliance with government laws and regulations, are necessary and cannot be avoided.
9.5 Cycle-Time Reduction

Cycle time is the time needed to complete a particular task or process. Time is
money in business and therefore cycle time is an important criterion to judge a
manufacturing process. The time taken from the customer placing the order to the
product getting delivered is an example of cycle time. This process is made up of
many sub-processes which include taking down the order, assembling, packaging,
and finally shipping the product.
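
As a minimal illustration, the order-to-delivery cycle time is simply the sum of the times of its sub-processes; the figures below are assumptions used only for the example.

# Hypothetical order-to-delivery cycle time (assumed figures, in hours).
sub_process_hours = {
    "order entry": 2,
    "assembly": 16,
    "packaging": 3,
    "shipping": 48,
}
cycle_time = sum(sub_process_hours.values())
print(f"Total cycle time: {cycle_time} hours")   # 69 hours in this example

Shortening any sub-process, or removing a non-value-added step entirely, reduces the total cycle time directly.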

Cycle Time Reduction means to recognize more efficient and effective ways to carry
out the tasks. It means to eradicate or minimize the non-value added activities
which are a source of wastage. The cycle time can be reduced during the set up of
machines, inspection and during experimentation. Cycle time reduction increases
the production throughput drastically. Also, it decreases the amount of working
capital required and the operating expenses.

Cycle time reduction forms an important facet of the manufacturing process. Reducing cycle time benefits both the customers and the organization. Customers want the order-to-delivery cycle time to be as short as possible, and at the same time a short cycle time is important for the organization because it saves time, money and resources and helps generate profits more quickly. Cycle time reduction increases customer satisfaction, and therefore many producers are reforming their supply operations.

A supplier's performance is judged on four main factors: price, quality, performance and on-time delivery. Customers have started evaluating performance on two more criteria: on-time delivery with short cycle times, and responsiveness to customer feedback. Both criteria are, in fact, very critical and help in retaining old customers and attracting new ones.

It is not just the manufacturing process that contributes to long cycle times; the causes are both internal and external. The move away from the "push" manufacturing model has forced manufacturers to adapt, and this has led to shorter cycle times. This has happened because, these days, manufacturers prefer not to stock products. Instead, products are tailor-made and produced only when customers demand them or according to sales forecasts. The following measures are being taken by top management to reduce cycle time.

1. Management of Demand

The manufacturers can use enhanced sales forecasting processes. At the same
time, it is important to keep the customer feedback in mind so that the production
takes place as and when the customer needs it.

2. Coordination and Communication

It is important to have cross-departmental coordination and communication to eliminate wastage. Customers want cycle time reduced to a minimum, so it is imperative that all processes and departments work together to achieve better results.

3. Lean Manufacturing

Lean manufacturing is a very important tool to eliminate wastage. It helps to save cost, time and resources. Also, there is great cycle time reduction and near-perfect delivery performance.

4. Management of the Supply Chain


It is important to manage the supply chain to reduce the cycle time. The supply
chain should be planned and implemented and there should be coordination in the
supply system. The customers these days are very aware and they want the
delivery to be on time. Moreover, reduced cycle time is good for the organization as
it leads to cost savings and optimization of resources.

Concurrent engineering, quality function deployment and integrated product development are also valuable tools for cycle time reduction.

Reducing Cycle Time through Kaizen

Kaizen is a philosophy which strives for continuous improvement. The term originated in Japan, where it is applied to all aspects of life. It has become a very popular tool and is applied in both manufacturing and service industries. The Kaizen approach believes in making many small improvements rather than major alterations.

There are two mechanisms in any given organization: process improvement and process control. Control means sustaining the current, improved performance of the process; as long as the process shows no deviation in performance, the standard operating procedures (SOPs) are followed. Improvement, on the other hand, implies conducting experiments and altering the process to produce better results. When an improvement is made, the SOPs are revised and a new way of doing things is established.

According to Imai (1986) the job responsibilities regarding the improvement and
maintenance of a process are divided according to the level of position held by
personnel in the organization. The figure below represents how and where Kaizen
fits in an organizational hierarchy.
In the figure drawn above, there is a portion which goes beyond Kaizen. It is a point
of radical innovation. This is where Lean Thinking is related to Six Sigma. The figure
drawn below illustrates the point.

The Kaizen approach is related to the Deming/Shewhart PDCA cycle because, like PDCA, it involves planning, doing, checking and acting. The Kaizen approach is scientific in nature. In Japan the methodology is applied to all aspects of the production cycle, while in other countries it is applied mainly to R&D.

Reducing Cycle Time through Process Mapping

To achieve a reduction in cycle time, a cross-functional process map needs to be developed. A team of individuals from every division is selected, and they map each step of the product development process from beginning to end. Two kinds of maps are developed: a map of the current state of the process and a map of the expected (future) state of the process.

The first process map helps to identify the problems in the current system,
and to improve the current system. The expected process map explains each
step in detail.

During the mapping session a list of actions is also created. This list defines in
detail the changes required to change the process from the current map to
the expected map.

For more information on process mapping and cycle time reduction through flow
charts refer to Chapter-5, Black Belt, Measure
9.6 Lean Tools

It is important that all the processes be coordinated in order to achieve a perfect flow. A process in any organization begins with the order being placed by the customer and ends with the product being delivered to him. QFD is an important tool to improve efficiency and satisfy the customers. Lean Manufacturing also offers different tools to improve the flow.

In fact, Lean is often considered simply a set of tools. Kanban, Kaizen and Poka-Yoke are the most popular Lean tools. The need for Lean tools grew out of the problems of inefficiency and of standardizing processes. The Lean tools allow a perfect flow for the organization.

1. 5s

It is a starting point in Lean Manufacturing. It is a tool which helps to systematize and standardize the organization. The 5s are derived from the Japanese words Seiri, Seiton, Seiso, Seiketsu and Shitsuke. They were a part of the Toyota Lean Manufacturing System. The 5s make way for prompt spotting of problems and quick solutions.

The 5s help to reduce wastage and optimize manufacturing efforts by systematizing the work environment. The execution of this tool helps to "clean up" the organization and makes it a better place to work. The English version of the 5s is described below:

1. Sort

It means to clear the work area. A work area should contain only those items which are required or important for the work done there. Things which are not required should be sorted out and removed.

2. Set in Order

The items which are required should be put in a proper place so that they can be
easily accessed when the need arises.

3. Shine

Shine implies keeping the workplace clean and clear. Cleanliness includes housekeeping efforts and keeping dirt away from the workplace. Cleanliness improves not only the appearance of the workplace but also safety while working.

4. Standardize

The clean-up and the storage methods should be standardized. The best practices
should be followed by everybody in the organization to set an example and to
standardize the efforts.

5. Sustain

The effort to keep the workplace organized should be sustained. The 5s involve a change in the old practices of the organization, and this culture change should be instilled in every employee.

Advantages

1. Improved processes.

2. Reduced set-up times.

3. Reduced cycle times.

4. A lower accident rate.

5. Better machine reliability.

This Lean tool motivates the employees to develop their work environment and
ultimately helps in reducing wastage, time and in-process inventory. There is
optimum utilization of space and easy accessibility of tools and materials used
during work. The 5s are also a base for other lean tools like TPM and just-in-time
production.

2. Visual Factory

Visual Factory is a tool which makes information easily accessible for everybody to see and understand. This information can be used for continuous improvement. If the knowledge about all the tools, parts, production systems and metrics is clearly displayed, everybody in the organization can understand the standing of the system at a glance. The ready availability of information simplifies work and improves manageability. The visual factory acts as a visual aid which conveys the what, when, where, who, how and why of any workplace. Simply put, it helps to view the current status of the organization.

3. Kanban

Kanban is a word taken from the Japanese language and implies “card-signal.”
Kanban/Pull systems help in the optimization of resources in an organization. They
depend on customer demands rather than on sales forecasts. There are no stocks
which lie in the store room waiting for the customer demands. Kanban is a signal
card which indicates that the system is ready to receive the input. It helps to
manage the flow of the material in the manufacturing system.

The concept of Kanban is many years old. The 'two bin system' was used in the UK well before Japanese production tools became popular in the 1970s. The Kanban system is very easy to understand and to deploy in an organization. It is very popular in industries where demand and flow are stable. In the manufacturing industry, however, demand is usually lower and supply higher, so Kanban often cannot be applied to the entire process. However, there may be sub-processes to which it can be applied.

It is a system in which the supply of raw materials and other components is ongoing. Workers receive a continuous supply of what they need, when and where they need it. The Kanban system has the following benefits:

1. It decreases stock and prevents products from becoming obsolete.

2. It minimizes wastage and scrap.

3. It allows flexibility in the manufacturing process.

4. It increases output.

5. It saves costs.

4. Poka-Yoke

Poka-Yoke is a Japanese term which stands for mistake-proofing. The term was coined by Shigeo Shingo. Poka-Yoke devices readily recognize flaws in a product and thwart the manufacture of incorrect parts. A Poka-Yoke acts as a signal tied to a specific attribute of a product or process and is the first step in mistake-proofing a system. Poka-Yoke is valuable as it saves time and money for the organization. Defects, and the resulting scrap and rework, can be prevented in the first place with the help of Poka-Yoke. Poka-Yoke puts limits on errors and helps in the accurate completion of the project.

5. SMED

Single Minute Exchange of Dies (SMED) is a method adopted for quick changeovers. The SMED tool is applicable to all kinds of industry, be it a small retail shop or a car manufacturing company. SMED is also known as "Quick Die Change" or "changeover"; the changeover period runs from the completion of the last task of one run to the beginning of the first task of the next. The time required to gather tools and raw materials and to complete the paperwork is included in the changeover time.

The SMED tool is an important part of Lean Manufacturing as it helps to save costs and avoid production losses. Decreasing setup time at capacity-constraining resources is very significant because the throughput of the entire organization is at stake at these nodes. The SMED process reduces setup time, increases utilization and improves capability and output. Changeovers of machinery and work areas can be completed in a short span of time, making SMED an important agent of change.

Benefits

1. Decrease in the raw material stock and tools.

2. Decrease in the flow time.

3. Eradication of waste and decrease in non-value added activities.

4. Increase in output.

5. Increase in flexibility.

6. Decrease in costs in terms of raw material and waste.

7. Increase in the cash flow.

8. Increase in competitiveness.

9. Improvement in customer satisfaction.

6. Standard Work

Standard Work is the basic or standard process. It is important to standardize the work before the Lean tools are applied to a manufacturing process. The elements of standard work include the pace of work matched to customer demand, the sequence of steps to complete a task and the material required by a worker to finish the task. Standard work helps improve stability, quality, production and safety in the organization.

Standard work is an important tool as far as employee empowerment and continuous improvement are concerned. In conventional work environments,
the rules or changes were laid down by the industrial engineers or the higher
authority. However, these days the change agents are the employees
themselves. They are the ones who practically work and know the changes
which are required in the organization. Thus they solve problems and assume
responsibility for the same. They can be held accountable for the standards
they create for themselves.

The norms that are established to standardize the work should be clearly
documented. The documentation ensures that the rules are being followed
and the work is being carried out in a consistent manner to ensure efficiency
and eradicate waste. The documents should be regularly modified for
continuous improvement as it brings to fore the areas that need
development.

Standardized work requires employee feedback and control. It is a benchmark for other tools like 5s and pull methods. Establishing standards
leads to minimal wastage and improves efficiency. The standard work should
be continuously upgraded and the efforts of improvement should be
sustained.
9.7 Total Productive Maintenance (TPM)

Introduction

Total Productive Maintenance (TPM) is a management system which helps in the optimization of production machinery. It involves employees from all levels and strives for organized equipment maintenance. TPM greatly reduces the production losses that occur due to breakdowns and repairs. In the early days, during the 1950s, few precautions were taken for the maintenance of machinery. However, factory managers soon realized the importance of preserving machinery to increase production.

During the 1970s the theory of 'productive maintenance' appeared. According to this concept, preventive measures were taken to maintain the machinery according to a specified schedule, with the technical or engineering staff mainly responsible for the maintenance of the equipment.

TPM originated from TQM. TQM evolved as a result of Dr. W. Edwards Deming's influence on Japanese industry. The concepts of quality introduced by Dr. Deming were very popular and became a way of life for Japanese industries. He introduced statistical procedures and the quality management methods that grew out of them. This new theory of quality became known as Total Quality Management, or TQM.
Origin

The original source of the concept of TPM is debated. According to some, it was invented by American manufacturers about forty years ago; others say it was invented by Nippondenso, a Japanese manufacturer of automotive electrical parts, in the late 1960s. The concepts associated with TPM were refined and put into practice in Japanese industry by Seiichi Nakajima, an officer with the Japan Institute of Plant Maintenance. The first widely attended TPM conference took place in the United States in 1990.

The concept of TPM followed the theory of productive maintenance, which on its own proved inadequate for the maintenance environment. According to TPM, everybody from the workers to top management is involved in equipment maintenance. Everybody in the organization feels that it is his moral duty to maintain the machinery. The operators of the machines examine, clean, oil and adjust the machines themselves; they even perform simple calibrations. Simply put, everybody in the organization is familiar with terms like zero breakdowns, maximum productivity and zero defects.

TPM gives employees a great deal of freedom and at the same time instills a sense of responsibility in them. TPM is an effort that requires some time for effective implementation: it is initially carried out in small teams and gradually spreads through the entire organization.

Application

The application of the TPM concepts in an organization requires total commitment from the entire staff. It is imperative to hire a TPM coordinator for the purpose. The job of the TPM coordinator is to disseminate the knowledge of TPM concepts among the employees. It is not an easy task to convince the employees to change their routine way of working to a new way.

When the coordinator is convinced that the work force is able to comprehend the
TPM concepts, the action teams who would carry out the TPM program are formed.
The operators of the machine, maintenance personnel, supervisors and upper
management are included in a team. These are people who have a direct bearing
on the issue at hand. Each team member is held equally responsible for their
actions. The TPM coordinator heads the team until the concepts are practically put
to use and the team members become proficient with them. The teams often begin
by addressing small problems and move on to solve the problems involving
complexity.

The tasks of the action teams include indicating the problem areas, specifying a
course of action and implementing the corrective measures. In good TPM
programs, the team members pay a visit to the cooperating plants to study and
evaluate the work in progress using the TPM methodology. The comparative
process is a measurement technique and a significant aspect of the TPM
methodology.

Ford, Eastman Kodak, Allen Bradley and Harley-Davidson are some of the big names using the TPM methodology. These companies report tremendous gains in productivity from the methodology, along with a great reduction in downtime, a decrease in the stock of spare parts and an increase in the number of on-time deliveries.

TPM has become standard practice. In some companies its importance is such that their success depends on it. It is suited to all kinds of industries, including construction, transportation and many others. The most important consideration for a TPM program remains full commitment from the entire workforce, because that is what produces a high ROI.

CHAPTER 10

10 Design for Six Sigma

DFSS is an acronym for Design for Six Sigma. DFSS does not have a universal definition; instead, it is defined differently by different organizations, and the DFSS for every organization is tailor-made to suit its needs. This makes DFSS an approach rather than a methodology. The main function of DFSS is the design or redesign of a product or service from the ground up. The minimum sigma level expected for a DFSS product is 4.5, though it can attain the full six sigma level depending on the product.
The important factor in DFSS is knowledge of customer preferences. Because the product is being designed or redesigned from the ground up, it is important to know the needs of the customers before DFSS is executed. DFSS helps to implement Six Sigma in a particular product or service as early as possible. It is a milestone in achieving customer satisfaction. DFSS helps an organization gain a larger share of the market and, at the same time, it is an approach for achieving a high ROI (Return on Investment).

A. Quality Function Deployment

For QFD, refer to Chapter-2, Business Process Management

10.1 Robust Design and Process

A successful DFSS process rests on certain fundamentals. The most important among them is the stability of critical processes. The core of a DFSS project lies in forecasting, and forecasting is based on thorough knowledge of process capability.

Another important aspect of DFSS is that it is a cross-functional activity. While process stability is handled by operations, customer needs are looked after by the marketing department and communicated by them in quantitative terms. The design characteristics are optimized through engineering to achieve robust designs which are unaffected by the intrinsic variation of key processes. The main purpose of design is the optimization of process and product.

It is important to execute DFSS in an environment where it is easy to fix responsibility. Higher management should motivate and reward the employees. DFSS is important for an organization as it improves speed, precision and customer satisfaction and reduces labor costs. DFSS should be implemented in the early phase of a project to make it a success.

1. Functional Requirements

DFSS is a systematic approach which helps to design or redesign products at the Six Sigma level. To accomplish this goal, it is important to understand the customer requirements. However, it is also important to understand that not all requirements are equally important. To identify the requirements that would be beneficial for a DFSS project, it is important to prioritize the customer needs and define the objectives of the project according to them. Nam Suh, chairman of the Mechanical Engineering Department at MIT, developed a model to design or redesign a product or process.

According to Suh, it is important to develop products or services keeping in mind four domains: customer, functional, physical and process. The customer domain covers customer characteristics and the wants and needs laid down by the customers. The functional domain views products from the viewpoint of the designer. The physical domain comprises design characteristics chosen to meet the functional goals. The process domain covers the process variables: the collection of process attributes that produce the design settings.

Another important aspect of DFSS is understanding the process's capability. Comparing customer needs with process capability helps to forecast the level at which an organization would be able to meet customer requirements. A key element of DFSS is a concentration on establishing customer requirements and translating them into technical requirements. The functional requirements are converted into technical requirements, which are transformed into product specifications and process settings. The relations among all of these are thoroughly quantified, and their characteristics and capabilities are flowed down from higher to lower levels and rolled back up from lower to higher levels.

The usual Six Sigma projects work on the principle of DMAIC whereas the DFSS
projects work on the IDOV methodology. IDOV stands for: Identify, Design,
Optimize and Verify. The identify phase helps to recognize the functional
requirements for a new product or process. It also involves translating the
functional requirements to technical requirements and linking the goals of
the project to the organizational goals. This translation is known as the
“transfer function” or “prediction equation”. The information or the data
about the product can be obtained using a process map or product drawings.
In rare circumstances, the data can be obtained using design of experiments.
The information about the product or establishing the transfer function (the
relationship between inputs and outputs) helps to forecast the quality of a
particular project. It helps to make clear whether a particular product would
fulfill customer expectations or not.

The transfer function also provides knowledge about the effect of each input on a particular output. Understanding these relationships helps in modifying the design so that it hits the target and in applying settings that reduce variability. This is known as the concept of robust design. Earlier, only performance targets and costs were known in advance. However, with the help of the transfer function it has become easy to set a specific quality level, and progress towards the goal can be measured throughout the process.
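
As a purely illustrative sketch, the idea of forecasting quality from a transfer function can be expressed as follows; the transfer function, the input distributions and the specification limits are all assumptions invented for the example, not values from any real project.

import random

# Hypothetical transfer function y = f(x1, x2); the coefficients are assumed.
def transfer_function(x1, x2):
    return 2.0 * x1 + 0.5 * x2

random.seed(1)
lower_spec, upper_spec = 9.5, 10.5        # assumed specification limits for the output
trials = 100_000
out_of_spec = 0

for _ in range(trials):
    x1 = random.gauss(4.0, 0.05)          # assumed mean and standard deviation of input 1
    x2 = random.gauss(4.0, 0.20)          # assumed mean and standard deviation of input 2
    if not (lower_spec <= transfer_function(x1, x2) <= upper_spec):
        out_of_spec += 1

print(f"Predicted fraction out of specification: {out_of_spec / trials:.4%}")

Changing an input's settings (its mean) or its variation (its standard deviation) and re-running the simulation shows how robust the design is, which is exactly the kind of forecast the transfer function makes possible.
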
10.2 Noise Strategies

According to Genichi Taguchi, the loss function (discussed in Chapter-7, Improve) is the measure of quality. The quadratic loss function he established measures the level of customer dissatisfaction due to poor performance of the product. Poor or merely average performance and variation are the major factors due to which a product fails to achieve the desired target. Certain uncontrolled factors lead to variation in performance; they are known as noise factors. Therefore, it is very important to choose a design for the product or process which has the fewest noise factors or which is insensitive to them.
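
The quadratic loss function referred to here is commonly written as L(y) = k(y - T)^2, where T is the target value and k a cost constant. A minimal sketch follows; the constant and the measurements are assumed purely for illustration.

# Quadratic loss L(y) = k * (y - T)**2; the target, k and the measurements are assumed values.
def quality_loss(y, target, k):
    """Loss incurred when a quality characteristic y deviates from its target."""
    return k * (y - target) ** 2

target = 10.0      # nominal value of the characteristic (e.g. mm)
k = 2.5            # assumed cost constant (currency units per mm squared)

for y in (10.0, 10.2, 10.5):
    print(f"y = {y:4.1f} -> loss = {quality_loss(y, target, k):.2f}")

The loss grows with the square of the deviation, which is why reducing variation around the target, and not merely staying inside specification limits, is central to robust design.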

External Source of Noise

These are variables that affect the performance of the product from the outside, such as government policies and the environment.

Internal Source of Noise

The internal variables or the internal settings of the product that affect its
performance are known as internal noise factors. Whenever a product deviates
from complying with its internal features, it becomes the internal source of noise
for the product.

The designer has to be very particular about the designs he chooses. The design should deviate as little as possible from the ideal design. A design which is immune to the noise factors is known as a minimum-sensitivity design or a robust design. There is a methodical way to reduce a design's sensitivity to noise; it is called parameter design.

Parameter Design

Parameter is an engineering term which treats product characteristics as product parameters. Parameter design holds prominence in the off-line quality control process. It is concerned with identifying the correct design factor levels. Parameter design is an investigation which helps to reduce variation in performance. It makes the design robust by reducing its sensitivity to the noise factors in the process. This, in turn, reduces the loss borne by the customer.

Variation in the various aspects of performance depends on the parameter settings, and this variation increases manufacturing and lifetime costs. Parameter design helps in cost savings by countering the noise factors in the early stages of the design. It is a practice for recognizing the parameter settings that are best suited for the organization, which is the reason it is called parameter design.
10.3 Tolerance Design

Tolerance design establishes tolerances for products or processes in order to reduce manufacturing and lifetime costs. It is the next and final step after parameter design in specifying the product and process designs, and it identifies the tolerances among the factors. The factors affect variation, and they are modified only after the parameter stage because only those factors whose target quality values have not been achieved are adjusted.

Traditional quality improvement methods were based on convention. They assumed that, to hold tighter tolerances, it is necessary to improve the quality of materials, machines and tools, and such improvements drive costs up sharply. Robustness in the design, however, ensures that the process is of the utmost quality without much increase in cost. The main impact of using tolerance design after parameter design is a reduction in costs.

There are three methods through which tolerance can be estimated. They are Cut
and Paste method, the Control Chart method, and the Root Square Error method.

a. Cut and Paste Method

The Cut and Paste method is popular among followers of Dr. E.M. Goldratt. A project manager collects the overstated estimates of task duration (given by the developers) and cuts them in half. The project is then modeled on the reduced duration estimates. The tolerances of the components are estimated as a percentage (usually 50%) of the deterministic duration estimates of their sequence of tasks. The tolerance of the project is determined as the same percentage of the deterministic estimate of the longest sequence of tasks, which is referred to as the critical chain.
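
A minimal sketch of this arithmetic follows, using assumed task estimates (in business days) purely for illustration.

# Hypothetical Cut and Paste buffer calculation; all figures are assumed.
developer_estimates = [20, 16, 24, 12]         # original task duration estimates, business days

halved = [e / 2 for e in developer_estimates]  # estimates cut in half by the project manager
critical_chain = sum(halved)                   # deterministic length of the task sequence
project_tolerance = 0.50 * critical_chain      # tolerance taken as 50% of the critical chain

print(f"Critical chain: {critical_chain} days, project tolerance: {project_tolerance} days")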

The Cut and Paste method is a simplified method and can be used by a layman. However, its disadvantage is that it is a linear model of variation: along a sequence of tasks it simply adds the variation linearly. In reality, variation grows with the square root of the number of tasks in a sequence. Therefore, the linear model provided by the Cut and Paste method is mathematically inconsistent.

The biggest disadvantage of the Cut and Paste method is that the estimates
provided by the developers are reduced by the managers. This develops a gap
between the managers and the developers. The individuals who have a direct
bearing on the logistical performance of a product are isolated.

b. Control Chart

The normalized values of project duration are graphed on a control chart according
to this method. The normalizing values which are used are the planned (baseline)
duration estimates. For instance, a project which had a planned duration of 100
business days and an actual duration of 140 business days is represented as having
the normalized duration of 1.4 on a control chart. The difference between the
control limit and the mean of the normalized duration values is the basis for
computing the ensuing project tolerances.
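
As a minimal sketch of this approach, with the planned and actual durations below assumed purely for illustration:

import statistics

# Hypothetical normalized-duration data for a set of completed projects; all figures are assumed.
planned = [100, 80, 120, 60, 90]      # baseline duration estimates (business days)
actual  = [140, 85, 150, 55, 108]     # actual durations (business days)

normalized = [a / p for a, p in zip(actual, planned)]   # e.g. 140 / 100 = 1.4
mean = statistics.mean(normalized)
sigma = statistics.stdev(normalized)
upper_control_limit = mean + 3 * sigma

# The gap between the control limit and the mean drives the project tolerance,
# expressed here as a fraction of the planned duration.
tolerance_fraction = upper_control_limit - mean
print(f"mean = {mean:.2f}, UCL = {upper_control_limit:.2f}, "
      f"tolerance = {tolerance_fraction:.0%} of planned duration")
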
The advantage of the control chart method is that it captures all the variation displayed during the project duration. This helps to get a precise estimate of the required tolerance value. The disadvantage of the control chart method is that the tolerance calculations are complex. Almost every product in a development organization exhibits some degree of variation; these variations can be erratic and large, and this is what makes the control chart method inappropriate at times.

c. Root-Square-Error (RSE)

Root-Square-Error is another method of computing tolerance, and it is readily adaptable to the tolerance design of products. The developer provides two estimates of duration per task: first an estimate which corresponds to a high level of confidence, called the "safe" estimate, and then an estimate of the mean process time, called the "average" estimate. The expected variation for the task is ascertained by calculating the difference between the safe and average estimates for the task.

The component tolerance is computed as the square root of the sum of the
squares of the differences, for the tasks in each component sequence. The same
calculation also gives an estimate of the tolerance of the project with the difference
values being those which correspond to the tasks of the primary sequence in the
project.

The figure below provides a sample calculation. The sum of the squared differences equals 1098 (business days squared), and the corresponding project tolerance, its square root, is approximately 33 business days. The 33-day tolerance value gives a commitment duration which matches the high confidence level for the whole project.
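
A minimal sketch of the calculation follows, using assumed safe and average estimates (in business days) chosen only to illustrate the arithmetic.

import math

# Hypothetical Root-Square-Error tolerance calculation; all estimates are assumed.
tasks = [
    # (safe_estimate, average_estimate) in business days
    (26, 10),
    (30, 12),
    (23, 10),
    (28, 14),
    (22, 10),
]

differences = [safe - avg for safe, avg in tasks]       # expected variation per task
sum_of_squares = sum(d ** 2 for d in differences)
project_tolerance = math.sqrt(sum_of_squares)

print(f"Sum of squared differences: {sum_of_squares}")               # 1089 business days squared
print(f"Project tolerance: {project_tolerance:.0f} business days")   # about 33 business days
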
The project tolerances calculated using the RSE method are considered absolute minimum values. This is because the RSE method considers only task-level variation, while the total variation in project duration is significantly larger; the values computed using the RSE method can therefore be unsuitably small. However, the biggest advantage of the RSE method is that it involves developers in the construction of models for the project. Their involvement increases trust, and they feel less alienated.
Tolerance and Process Capability
For information on Process Capability see Chapter-5, Six Sigma Improvement
Methodology and Tools--Measure
10.4 Failure Mode and Effects Analysis (FMEA)

The demand for high-quality, dependable products is very high. The increased capability and functionality of products makes it very difficult for manufacturers to fulfill the increasing demands of customers. Dependability was conventionally achieved through massive experimentation and through methods such as probabilistic reliability modeling. The problem with these tools, however, was that they could only be applied in the last stages of development. FMEA has the advantage of being applicable in the early stages of development.

Failure Mode and Effects Analysis (FMEA) is a powerful engineering quality tool which helps to recognize and counter potential factors that might lead to failure. It is used in the early stages for all types of products and processes. The method is very simple and can easily be used by a layman. It is not possible to forecast each and every failure mode, but the development team tries to devise as complete a list of possible failure modes as it can.

The early and continuous use of FMEA in the design process allows an engineer to anticipate failures and manufacture dependable, safe products that leave customers satisfied. The information about products or processes produced with the help of FMEA also serves as a record for future use.

Types of FMEAs

There are different kinds of FMEAs. Some are used more than others. The kinds of
FMEAs are listed below.

1. System- It focuses on universal system functions.

2. Design- It focuses on apparatus and subsystems.

3. Process-It focuses on production and assembling functions.

4. Service-It focuses on the service functions.


5. Software-It focuses on the functions of software.

Usage of FMEA

In the past, engineers have done a good job of assessing the functions and the design of products and processes in the design phase.
However, they have not been able to achieve much in terms of reliability and
quality. The engineers are usually concerned with designing a product or process
that is safe for the user. The FMEA is a tool for the engineers to produce safe,
dependable and customer-friendly products. The FMEA serves a lot of purposes for
the engineers.

1. It helps them to build the product and service requirements which reduce the
possibility of failure.

2. It helps in the assessment of the needs expressed by the customers and others
involved in the designing of products and services so that they do not turn out to be
potential failures.

3. It helps to point out the settings of the design which might lead to failure and
therefore pull them out of the system so that they cause no harm.

4. FMEA helps to develop techniques for examining products or services to make sure that the malfunctioning has been successfully eradicated.

5. It helps to trace and handle the possible risks that might develop in the design.
Tracing the risks helps in its documenting and is a major factor in the success or
failure of future projects.

6. It helps ensure that, if a failure occurs, it will not affect the customer of the product or service.

Pros

There are many benefits associated with FMEA. Some of them are listed below.

1. It increases product and process reliability.

2. It ensures better safety.


3. It improves customer satisfaction.

4. It recognizes failure modes in the early stages of development.

5. It records potential risks and the steps taken to eradicate them.

6. It helps in cost savings.

7. It helps in the reduction of warranty costs.

8. It helps in the reduction of wastage and non-value added activities.

9. It improves sales with the improvement in customer satisfaction.

10. It emphasizes problem prevention.

Steps Involved in FMEA

The steps involved in FMEA are described below.

1. The first step is to describe the product or the service and its function. It is very
important to have a thorough knowledge of the product or process. This knowledge
helps the engineers streamline the products and the processes which fall under the
intended function. This knowledge is essential as product failure leads to increase
in costs and dissatisfaction of the customers which would lead to aversion towards
the product.

2. The next step is the creation of a block diagram of the product or the process. A
block diagram depicts the major components or the steps involved in the process.
The steps or the components are linked through lines and their relationships are
shown. This helps to create a structure around which the FMEA can be formed.
Then a coding system is developed to identify the system elements. The block
diagram should always be included with the form of FMEA.

3. The next step is to complete the headings on the FMEA form worksheet. These
headers can be modified according to use.

4. The fourth step is the listing of the components of the product or the steps
involved in the process in a logical manner.
5. The failure modes are identified, and the ways in which a part, a system, a subsystem or a process could potentially fail are defined. Potential failure modes could involve corrosion, deformation, cracking or an electrical short.

6. A failure mode in one part could be the cause of a failure mode in another part. Therefore, it is important to list the failures in technical terms. Failure modes should be listed for the function of each part of the product or each step of the process. Recording previous failures helps to detect failure modes in similar products or processes in the future.

7. The next step is the description of the effects of the failure modes. The engineer working on the project should be able to identify the failure mode and the effect it would have. The effect is the result of the failure of a function as seen from the perspective of the customer, so the failure should be considered from both the internal and the external customer's point of view. A customer would consider the product a failure if it causes injury or harm in any way, if he incurs a problem in operating the product or if its performance is not up to the mark. A numerical rank should be established for the severity of the effect of the failure mode. A common industry standard uses 1 for no effect and 10 for a severe failure which affects system operation and threatens safety without warning. The ranking is used to determine the severity of the failure mode, to decide whether the failure would be minor or major, and to prioritize the failure modes.

8. This step involves identifying the causes of each failure mode. A failure cause is a weakness in the design which might result in a failure of the product or process.
The possible factors that might cause failure should be clearly documented. The
factors should be listed in technical terms and not as symptoms of a failure. The
potential causes could be contamination, improper alignment or improper
operating conditions.

9. The next step is to enter the probability factor. A numerical weight indicates the probability that the cause will occur. A common industry scale uses 1 for a cause which is least likely to occur and 10 for a cause which is most likely to occur.

10. The next step identifies the current controls. Current controls are the mechanisms which help prevent the failure mode from reaching its final destination, the customer. The testing, inspection and other methods which have been used, or can be used, on similar products to detect failures should now be identified. If a new product or process is involved, undetected failure modes may occur; in this situation, it is important to revise the FMEA methods to eradicate those failures.

11. This step helps to ascertain the probability of detection. Detection is the
evaluation of the probability that the Current Controls will determine the cause of
the failure mode and thus prevent it from reaching the customer.

12. The RPN or Risk Priority Number is evaluated. The RPN is a mathematical
product of numerical severity, probability and detection ratings. RPN is used to
prioritize items which require extra attention in terms of quality planning or action.

RPN = (Severity) x (Probability) x (Detection)
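
As a minimal sketch, the RPN calculation and the resulting prioritization look like this; the failure modes and the 1-10 ratings are assumed example values, not data from any real FMEA.

# Hypothetical RPN calculation; all ratings are assumed values on the usual 1-10 scales.
failure_modes = {
    "connector corrosion": {"severity": 7, "probability": 4, "detection": 6},
    "housing crack":       {"severity": 9, "probability": 2, "detection": 3},
    "misalignment":        {"severity": 5, "probability": 6, "detection": 8},
}

rpns = {name: r["severity"] * r["probability"] * r["detection"]
        for name, r in failure_modes.items()}

# Failure modes with the highest RPN receive attention first.
for name, rpn in sorted(rpns.items(), key=lambda item: item[1], reverse=True):
    print(f"{name:20s} RPN = {rpn}")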

13. The next step is to determine the steps which need to be taken to tackle
potential failures which have a high RPN. These steps include particular inspection
of products or processes, choice of different parts or materials, redesigning the
product or process to avoid failure modes.

14. Accountability and a target completion date are fixed for these actions. Responsibility is made clear-cut, which facilitates tracking.

15. The actions taken are recorded. After the steps have been taken, the severity, probability, detection and RPN values are reviewed to determine whether any further action is required.

16. The FMEA should be upgraded from time to time as the design or the process
changes or if some new information comes up.
Failure Modes and Effects Analysis (FMEA) Form

Design for X, or DFX, is a body of knowledge. It provides information about methods for achieving particular properties of technical systems, collected and organized into suitable forms during the design stage. The contents of this information are essential and should be tailored to the designer's needs. Instead of modifying the production system to suit the product, the product should be designed around the production system.

DFX is a value-added service in the manufacturing process. It is used to improve X, where X stands for any of a range of product properties or life-cycle functions. A conventional design usually begins with a rough draft of components and assemblies. The rough drafts are then passed on to production and assembly engineers, who are responsible for the optimization of the products. Generally, production and assembly problems are revealed at this stage and requests for design modifications are put forward.

It is important to identify needed alterations in the early stages of design; the later a change occurs, the more the cost increases. DFX plays an important part, as it helps in selecting the product concept in the identify phase and in assessing and managing risk in the design phase. Revealing design alterations in the early stages of development leads to cost savings, improves quality and reduces time to market.

DFX is a generic approach which is customized to develop more DFX tools quickly and continuously. The resulting DFX tools share a common structure and are easily executed and coordinated. The DFX process is then viewed from the wider perspective of Business Process Reengineering (BPR) to obtain a product which is produced without defects.

DFX helps to provide meaningful comparative data. It helps to quantify designs in terms of cost, quality and regulatory conformity. DFX serves as a yardstick for designs and at the same time provides an indication of the possible advantages of one design over another.

Design for Production is one of the foremost tools of DFX. It refers to the processes
which assess the performance of the production system. It answers questions such as
"How much time will it take to complete the order?" or "How much stock is needed
to keep the international supply chain running?" Answering these questions requires
knowledge of the product design, the needs of production and the production system
itself.
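
As a rough illustration of the kind of estimates Design for Production is after, the
sketch below applies a simple throughput calculation and a cover-the-lead-time
stocking rule; the quantities, lead times and the 20% buffer are hypothetical
assumptions, not figures from this text.

# Illustrative Design for Production estimates; all numbers are hypothetical.

order_quantity = 500                 # units in the order
line_throughput = 25                 # units the line can produce per day
completion_time_days = order_quantity / line_throughput
print(f"Estimated time to complete the order: {completion_time_days:.1f} days")

# Stock needed to keep an international supply chain running:
# cover the supply lead time plus a crude buffer for variability.
daily_demand = 25                    # units consumed downstream per day
supply_lead_time_days = 14           # shipping plus customs
buffer_factor = 1.2                  # 20% allowance for variability
stock_needed = daily_demand * supply_lead_time_days * buffer_factor
print(f"Stock needed to cover the supply lead time: {stock_needed:.0f} units")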

The DFX family is ever expanding and houses the family members listed below. The
X in DFX can be substituted for different variables.

1. DFM-Design for Manufacturability/Manufacture/Manufacturing

This is a method of designing the parts of the product in a way that aids their
manufacturing. DFM enhances manufacturability and provides manufacturing cost
data for a product and its parts.

The importance of DFM was realized during World War II, when the demand to build
better weapons in the shortest possible turnaround time was high and resources were
scarce. After the war, however, there was prosperity and rapid industrial growth. The
functions of design and manufacturing were carried out by isolated departments,
which resulted in an orderly development of products.

In the late 1950s and 1960s, organizations began to realize that the methodologies
then in use were not suitable for the emerging need for automated and
reprogrammable manufacturing. Gradually, organizations began to customize their
designs and processes and started carrying out independent research to this end. The
pressure of global competition and the desire to reduce lead time led to the
rediscovery of DFM. Personnel from both the design and manufacturing departments
were brought in to carry out design projects, and the manufacturing engineers took
an active part from the early stages, advising on potential ways to improve
manufacturability.

2. DFA-Design for Assembly

The need for DFA arose when efforts to increase the level of automated assembly
highlighted deficiencies in existing product designs with respect to automation
capability. Design for Assembly simplifies the form of the product and reduces the
number of components, thereby reducing the total cost of components. A design that
is easy to assemble therefore reduces both assembly and production costs. Although
these days the X in DFX can also be replaced by variables such as cost, disassembly
and recyclability, the variable of ease of assembly is still constantly refined (a
commonly used assembly-efficiency index is sketched at the end of this list of DFX
variants).

3. DFMA-Design for Manufacture and Assembly

DFMA is the combination of DFM and DFA. It helps to optimize the relationships
between design function, manufacturability and ease of assembly, which reduces cycle
time and the cost of production. DFMA demands consistent teamwork from all kinds of
engineers, from the ideation of the product through to its delivery.

4. DFD-Design for Disassembly

This is a more recent method which facilitates the easy disassembly of products and
thus their quick and easy maintenance. It is also helpful when a product or
component needs to be recycled.

5. DFC-Design for Changeover

This method helps to give precise estimates of the changeover potential of the
production machinery and of the effect that the characteristics of the final design
have on changeover.

6. DFS-Design for Service

This is also known as Design for Maintenance. It considers how subassemblies can be
exchanged as quickly and easily as possible. Which parts deserve this treatment
depends on the chances of failure of a particular component or subassembly:
serviceability is achieved by enhancing the ease of assembly and disassembly of
these components, and their higher likelihood of failure helps to justify the
additional cost incurred to make them easier to exchange.

7. DFT-Design for Testability

Design for Testability is a method that aids the quick testing of subassemblies and
of the whole unit. It also helps to determine the extent to which testing of the
system can be automated. This technology has grown tremendously, and many systems
now use BIST (Built-In Self-Test) to troubleshoot problems by themselves.

Other DFX variants in use include DFR—Design for Reliability, DFC—Design for Cost,
DFQ—Design for Quality, DFD—Design for Diagnosis, DFI—Design for Inspection/Design
for International and DFG—Design for Green.
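
To make the DFA idea above more concrete, the following minimal sketch computes a
design-efficiency index in the style of the Boothroyd-Dewhurst DFA method; the
3-second ideal time per part, the part counts and the assembly time are illustrative
assumptions rather than figures from this text.

# Illustrative DFA design-efficiency index (Boothroyd-Dewhurst style).
# All values below are hypothetical assumptions.

IDEAL_TIME_PER_PART = 3.0   # seconds; nominal "ideal" handling-plus-insertion time

def dfa_efficiency(theoretical_min_parts, total_assembly_time_s):
    """Ratio of the ideal assembly time for the theoretical minimum part count
    to the actual estimated assembly time (higher is better, 1.0 is the ceiling)."""
    return (theoretical_min_parts * IDEAL_TIME_PER_PART) / total_assembly_time_s

# Hypothetical bracket assembly: 12 parts, of which only 5 are theoretically
# necessary, with an estimated total assembly time of 110 seconds.
print(f"DFA efficiency: {dfa_efficiency(5, 110.0):.1%}")   # roughly 14%

Reducing the part count or making the remaining parts easier to handle and insert
raises the index, which is exactly the direction Design for Assembly pushes a design.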

Roles of DFX

1. Culture in an Organization

The execution of a DFX project requires active interest from employees in all
departments, and the departments report on it to the Board of Directors. An ideal
DFX project has a DFX champion who reports directly to the Board of Directors. This
is important because simply placing high expectations on the lower and middle
management (who are busy performing their routine tasks) achieves little.

The champion's job is to press the departments to allocate time and resources for
the DFX project. The champion has to be highly influential and able to convince
employees to become involved. He should be able to disseminate knowledge about DFX,
its goals and the benefits it will bring to the organization. A DFX project requires
assistance from all the departments. In big organizations where DFX is already
established, the DFX engineers report directly to the respective functional
managers, such as the R&D, manufacturing or engineering managers.

2. Concurrent Engineering

Concurrent engineering means synchronizing the development of the whole design, its
parts, its tooling needs and its assembly process. The concurrent approach saves a
great deal of time and cost, largely through the elimination of product respins. A
respin is the reworking of a design; rework can be avoided by making modifications
and corrections to the design from the early stages of development.

DFX is a tool that brings people from diverse backgrounds into the design process.
These specialists from various fields possess knowledge of different parts of the
complete lifecycle of a product and give suggestions to the design engineers. The
knowledge that the specialists share takes the form of guidelines that the design
engineers follow during the design process, or of design review meetings with the
field experts.

A product development team generally consists of concept design engineers, marketing
executives, test engineers and customer engineers. DFX is a method which helps
ensure accurate information. When DFX is combined with concurrent engineering, it
decreases respins and saves cost. Together they also help to shorten the production
and development cycle.

3. Strives for Continuous Improvement

DFX should serve as an ongoing tool. It should be merged with the other goals of the
company, such as increasing customer satisfaction. DFX helps to reduce product
development time and to improve the quality and reliability of the product as well
as customer satisfaction. It also helps to reduce cycle time, which is a major
indicator of success.

Levels of DFX Analysis

There are three different levels of DFX analysis.

1. Following a general set of rules

The rules applied at this level are not quantitative. Whether they are based on an
employee's experience or on a formal checklist, a human is needed to interpret them
and apply them to each case, because each case is unique in its own way. At the same
time, it is not feasible to start every design from scratch, so some skill is
required on the part of the designer to interpret and use the rules appropriately.

2. Following a Quantitative Analysis

Every component of the design is assigned a numeric value depending on its
producibility, and the result is referred to as a Design Scorecard. The overall
quality of the design is judged by adding up all the values. The total obtained is
treated as a score to be minimized, and the product is then redesigned against that
goal (a minimal scorecard sketch follows this list). Even this level, however,
requires thorough knowledge of the product and the process on the part of the
designer.

3. Mechanizing the Complete Process

The mechanization of a process can be carried out using a computer. Quantitative
analysis normally requires the knowledge of experts, who give valuable suggestions
to make the design a success. If the analysis is carried out with the help of a
computer, the workload of the designer is reduced. He can then concentrate more on
the creative aspects and stay focused, since his attention is not diverted by
memorizing rules or checklists.
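
As a minimal sketch of the quantitative level described above, the example below
sums per-component penalty values into a Design Scorecard total to compare two
variants of a design; the component names, the scoring scale and every value are
hypothetical assumptions, not part of any standard scorecard.

# Illustrative Design Scorecard: each component gets a numeric penalty
# (higher means harder to produce); the design total is the score to minimize.
# All names and values are hypothetical.

baseline_design = {
    "housing": 4,          # deep draw, tight tolerance
    "fastener_set": 6,     # many separate screws
    "cover": 3,
    "gasket": 2,
}

revised_design = {
    "housing": 4,
    "snap_fit_cover": 2,   # cover redesigned to snap in, fasteners eliminated
    "gasket": 2,
}

def scorecard_total(design):
    """Sum the per-component penalties to obtain the design score."""
    return sum(design.values())

print("Baseline score:", scorecard_total(baseline_design))   # 15
print("Revised score: ", scorecard_total(revised_design))    # 8

The lower total for the revised design is the quantitative signal that the redesign
moved in the right direction.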

Advantages of DFX

The major share of a product's cost is committed at the design stage, before
production begins; however, most of that cost is actually incurred only after the
design is accepted and actual production starts. It therefore becomes important to
consider production and assembly problems at the product design stage in order to
save cost and increase productivity.

10.6 Special Design Tools

Introduction

TRIZ is a powerful tool that helps designers solve problems creatively. TRIZ and
Axiomatic Design were developed independently of each other, and TRIZ in particular
evolved with little influence from the many other design strategies developed
outside Russia. Both Nam Suh's Axiomatic Design and TRIZ can be successfully applied
to a manufacturing process.

Axiomatic Design

Professor Nam Suh's seminal text 'The Principles of Design', published in 1990,
describes the Axiomatic Design approach. The approach mainly deals with organizing
and employing means of determining whether a given design is 'good' or 'bad'.
Axiomatic Design is built around two axioms.

1. ‘The Independence Axiom’

According to this axiom, a 'good' design is one in which the Functional Requirements
of the design remain independent of each other.

2. ‘The Information Axiom’

According to this axiom, a 'good' design is one in which the information content of
the design is minimized.

Both axioms are logical in nature. Suh describes the design process by drawing a
parallel with a feedback control loop. A designer creates a design solution based on
the available inputs, the needs of the customers and the output the customers
prefer. The next step is to use the two axioms as logical tests on the proposed
solution, evaluating it to find out how appropriate it is as a design solution.
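
As a minimal sketch of how the two axioms are often operationalized (this is an
illustration, not a method prescribed in this text), the example below builds a
design matrix relating Functional Requirements (FRs) to Design Parameters (DPs),
classifies it for the Independence Axiom, and computes information content as
I = log2(1/p) for the Information Axiom; the matrix entries and success
probabilities are hypothetical.

import math

# Hypothetical design matrix: rows are Functional Requirements,
# columns are Design Parameters; a nonzero entry means that DP affects that FR.
design_matrix = [
    [1, 0, 0],
    [1, 1, 0],
    [0, 0, 1],
]

def coupling(matrix):
    """Classify a square FR/DP matrix: 'uncoupled' (diagonal), 'decoupled'
    (triangular) or 'coupled' (anything else)."""
    n = len(matrix)
    off_diag = [(i, j) for i in range(n) for j in range(n) if i != j and matrix[i][j]]
    if not off_diag:
        return "uncoupled"
    if all(j < i for i, j in off_diag) or all(j > i for i, j in off_diag):
        return "decoupled"
    return "coupled"

def information_content(success_probabilities):
    """Information Axiom: I = sum of log2(1/p) over all FRs; lower is better."""
    return sum(math.log2(1.0 / p) for p in success_probabilities)

print("Independence check:", coupling(design_matrix))                      # decoupled
print("Information content:", round(information_content([0.95, 0.9, 0.99]), 2), "bits")

An uncoupled matrix satisfies the Independence Axiom outright, and a decoupled one
can satisfy it if the DPs are adjusted in the right order; between designs that
satisfy it, the one with the lower information content is preferred.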

TRIZ

TRIZ is a Russian acronym that stands for "Teoriya Resheniya Izobretatelskikh
Zadatch", the theory of inventive problem solving. It was developed by Genrich
Altshuller beginning in 1946 and has been in use ever since. TRIZ is a methodology
which helps in generating creative ideas for problem-solving. It is an algorithmic
approach, as opposed to techniques such as brainstorming that rely on the random
generation of ideas. It helps in the creation of new systems and the modification of
old ones.

Altshuller realized during his research that principles found while evaluating
patents from one industry could be applied to other industries as well, and TRIZ
grew out of this transfer of principles from one area of work to another. TRIZ
principles are applied not only to fields associated with engineering but also to
other technical and non-technical disciplines.

TRIZ is interdisciplinary and has relations with ontology, logic, systems science,
psychology, the history of culture and other fields. TRIZ has now been employed by
many Fortune 500 companies to solve complex technical issues successfully. The TRIZ
methodology is no longer limited to manufacturing; it has spread to other areas such
as medicine, business management and computer programming.

One of the basic concepts of TRIZ is the contradiction between two elements. For
instance, if you want more clarity from the camera of a mobile phone, you need a
camera with higher pixel quality; but if the pixel quality is improved, the price of
the camera increases. So, to get better quality, you have to pay a higher price.
Altshuller calls these Technical Contradictions. A designer inventing a solution
faces such contradictions. In conventional design, instead of resolving the
contradiction, the designer trades one of the contradictory characteristics off
against the other; the TRIZ approach instead seeks a new solution that removes the
contradiction rather than compromising on it.

Nam Suh, in his book, cites the example of a problem faced by General Motors in the
early 1980s. The designers at GM faced a problem with wheel covers, which at that
time were held on by simple spring clips. If the spring force was too small, the
wheel covers fell off; if the spring force was too high, vehicle owners had
difficulty removing the cover whenever the wheel needed to be removed. GM's
designers applied a good deal of scientific research and focused on the needs of the
customers to resolve this issue.

A series of customer trials using wheel covers with different spring forces was
conducted, and the designers measured the level of customer satisfaction in each
case. The results are shown in the figure below. It was discovered that 100% of the
customers were happy, from the viewpoint of ease of cover removal, if the force
needed to remove the cover was 30 N or less, while 100% of the customers were
satisfied that wheel covers would not fall off if the force was 35 N or more.
So, the optimum spring retention force lay somewhere between 30 and 35 N. The
designers also realized that mass production would lead to statistical variation in
the attainable spring force. The functional requirement for the wheel cover in terms
of retention force was therefore set at 34 ± 4 N. In non-TRIZ terms this was the
best available solution, as it dissatisfied the minimum number of customers; the
data available from GM showed that 34 ± 4 N would dissatisfy 2 to 6% of the
customers. The Axiomatic approach was able to harmonize the design variables to
attain the required Functional Requirements.
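
The following rough sketch (an illustration, not a reproduction of GM's analysis)
shows how such a dissatisfaction figure can be estimated once the manufacturing
spread is known; the normal model and the assumed 2 N standard deviation are
hypothetical, while the 34 ± 4 N requirement comes from the text.

# Rough reconstruction of the variation argument; assumptions, not GM data.
from statistics import NormalDist

force = NormalDist(mu=34.0, sigma=2.0)   # assumed distribution of retention force (N)
lower_spec, upper_spec = 30.0, 38.0      # the 34 ± 4 N functional requirement

p_outside = 1.0 - (force.cdf(upper_spec) - force.cdf(lower_spec))
print(f"Share of covers outside 34 ± 4 N: {p_outside:.1%}")   # about 4.6%

Under these assumptions roughly 4.6% of covers fall outside the requirement band,
which is consistent with the 2 to 6% dissatisfaction range quoted above.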

As far as TRIZ is concerned, it would immediately recognize the wheel cover issue as
a design contradiction. The TRIZ approach is built around the premise of 'design
without compromise' and strives to eradicate contradictions. TRIZ would recognize
this one as a Physical Contradiction and would seek to separate the contradictory
requirements in time: according to TRIZ, the retention force should be high when the
car is moving and low when it is not. Framing the problem in terms of contradictions
helps the designers find ways to eliminate them.

The Axiomatic approach holds significance in evaluating and optimizing the
theoretical solutions derived from TRIZ in some cases. It also provides a better
understanding of both the hierarchical nature of design and the need to pay due
attention to the inter-relationships which exist between successive hierarchical
layers.
