QUALITY ENGINEERING USING ROBUST DESIGN

Madhav S. Phadke
AT&T

ISBN 0-13-745167-9
FOREWORD
The main task of a design engineer is to build in the function specified by the product
planning people at a competitive cost. An engineer knows that all kinds of functions
are energy transformations. Therefore, the product designer must identify what is
input, what is output, and what is ideal function while developing a new product. It is
important to make the product's function as close to the ideal function as possible.
Therefore, it is very important to measure correctly the distance of the product's
performance from the ideal function. This is the main role of quality engineering. In
order to measure the distance, we have to consider the following problems:
1. Identify signal and noise space
2. Select several points from the space
3. Select an adequate design parameter to observe the performance
4. Consider possible calibration or adjustment method
5. Select an appropriate measurement related to the mean distance
As most of those problems require engineering knowledge, a book on quality
engineering must be written by a person who has enough knowledge of engineering.
Dr. Madhav Phadke, a mechanical engineer, has worked at AT&T Bell
Laboratories for many years and has extensive experience in applying the Robust Design
method to problems from diverse engineering fields. He has made many eminent and
pioneering contributions in quality engineering, and he is one of the best qualified
persons to author a book on quality engineering.
The greatest strength of this book is the case studies. Dr. Phadke presents four
real instances where the Robust Design method was used to improve the quality and
cost of products. Robust Design is universally applicable to all engineering fields.
You will be able to use these case studies to improve the quality and cost of your
products.
This is the first book on quality engineering written in English by an engineer.
The method described here has been applied successfully in many companies in Japan,
USA, and other countries. I recommend this book for all engineers who want to apply
experimental design for actual product design.
G. Taguchi
PREFACE
The Robust Design method uses a mathematical tool called orthogonal arrays to
study a large number of decision variables with a small number of experiments. It also
uses a new measure of quality, called signal-to-noise (S/N) ratio, to predict the quality
from the customer's perspective. Thus, the most economical product and process
design from both manufacturing and customers' viewpoints can be accomplished at the
smallest, affordable development cost. Many companies, big and small, high-tech and
low-tech, have found the Robust Design method valuable in making high-quality
products available to customers at a low competitive price while still maintaining an
acceptable profit margin.
This book will be useful to practicing engineers and engineering managers from
all disciplines. It can also be used as a text in a quality engineering course for seniors
and first year graduate students. The method is explained through a series of real case
studies, thus making it easy for the readers to follow the method without the burden of
learning detailed theory. At AT&T, several colleagues and I have developed a two-and-a-half-day course on this topic. My experience in teaching the course ten times has
convinced me that the case studies approach is the best one to communicate how to use
the method in practice. The particular case studies used in this book relate to the
fabrication of integrated circuits, circuit design, computer tuning, and mechanical routing.
Although the book is written primarily for engineers, it can also be used by
statisticians to study the wide range of applications of experimental design in quality
engineering. This book differs from the available books on statistical experimental
design in that it focuses on the engineering problems rather than on the statistical
theory. Only those statistical ideas that are relevant for solving the broad class of
product and process design problems are discussed in the book.
Chapters 1 through 7 describe the necessary theoretical and practical aspects of
the Robust Design method. The remaining chapters show a variety of applications
from different engineering disciplines. The best way for readers to use this book is,
after reading each section, to determine how the concepts apply to their projects. My
experience in teaching the method has revealed that many engineers like to see an
application of the method in their own field. Chapters 8 through 11 describe case
studies from different engineering fields. It is hoped that these case studies will help
readers see the breadth of the applicability of the Robust Design method and assist
them in their own applications.
Madhav S. Phadke
ACKNOWLEDGMENTS

I had the greatest fortune to learn the Robust Design methodology directly from its
founder, Professor Genichi Taguchi. It is with the deepest gratitude that I acknowledge
his inspiring work. My involvement in the Robust Design method began when Dr.
Roshan Chaddha asked me to host Professor Taguchi's visit to AT&T Bell
Laboratories in 1980. I thank Dr. Chaddha (Bellcore, formerly with AT&T Bell Labs) for the
invaluable encouragement he gave me during the early applications of the method in
AT&T and also while writing this book. I also received valuable support and
encouragement from Dr. E. W. Hinds, Dr. A. B. Godfrey, Dr. R. E. Kerwin, and Mr.
E. Fuchs in applying the Robust Design method to many different engineering fields
which led to deeper understanding and enhancement of the method.
Writing a book of this type needs a large amount of time. I am indebted to Ms.
Cathy Savolaine for funding the project. I also thank Mr. J. V. Bodycomb and Mr.
Larry Bernstein for supporting the project.
The case studies used in this book were conducted through collaboration with
many colleagues, Mr. Gary Blaine, Mr. Dave Chrisman, Mr. Joe Leanza, Dr. T. W.
Pao, Mr. C. S. Sherrerd, Dr. Peter Hey, and Mr. Paul Sherry. I am grateful to them for
allowing me to use the case studies in the book.
I also thank my colleagues, Mr. Don Speeney, Dr. Raghu Kackar, and Dr. Mike
Grieco, who worked with me on the first Robust Design case study at AT&T.
Through this case study, which resulted in huge improvements in the window
photolithography process used in integrated circuits fabrication, I gained much insight into
the Robust Design method.
I thank Mr. Rajiv Keny for numerous discussions on the organization of the
book. A number of my colleagues read the draft of the book and provided me with
valuable comments. Some of the people who provided the comments are: Dr. Don
Clausing (M.I.T.), Dr. A. M. Joglekar (Honeywell), Dr. C. W. Hoover, Jr. (Polytechnic
University), Dr. Jim Pennell (IDA), Dr. Steve Eick, Mr. Don Speeney, Dr. M.
Daneshmand, Dr. V. N. Nair, Dr. Mike Luvalle, Dr. Ajit S. Manocha, Dr. V. V. S.
Rana, Ms. Cathy Hudson, Dr. Miguel Perez, Mr. Chris Sherrerd, Dr. M. H. Sherif, Dr.
Helen Hwang, Dr. Vasant Prabhu, Ms. Valerie Partridge, Dr. Sachio Nakamura, Dr. K.
Dehnad, and Dr. Gary Ulrich. I thank them all for their generous help in improving
the content and readability of the book. I also thank Mr. Akira Tomishima
(Yamatake-Honeywell), Dr. Mohammed Hamami, and Mr. Bruce Linick for helpful
discussions on specific topics in the book. Thanks are also due to Mr. Yuin Wu (ASI)
for valuable general discussions.
I very much appreciate the editorial help I received from Mr. Robert Wright and
Ms. April Cormaci through the various stages of manuscript preparation. Also, I thank
Ms. Eve Engel for coordinating text processing and the artwork during manuscript
preparation.
The text of this volume was prepared using the UNIX* operating system, 5.2.6a,
and a LINOTRONIC® 300 was used to typeset the manuscript. Mr. Wright was
responsible for designing the book format and coordinating production. Mr. Don Hankinson, Ms. Mari-Lynn Hankinson, and Ms. Marilyn Tomaino produced the final
illustrations and were responsible for the layout. Ms. Kathleen Attwooll, Ms. Sharon
Morgan, and several members of the Holmdel Text Processing Center provided electronic
text processing.
Chapter 1
INTRODUCTION
The objective of engineering design, a major part of research and development (R&D),
is to produce drawings, specifications, and other relevant information needed to
manufacture products that meet customer requirements. Knowledge of scientific
phenomena and past engineering experience with similar product designs and
manufacturing processes form the basis of the engineering design activity (see Figure 1.1).
However, a number of new decisions related to the particular product must be made
regarding product architecture, parameters of the product design, the process
architecture, and parameters of the manufacturing process. A large amount of engineering
effort is consumed in conducting experiments (either with hardware or by simulation)
to generate the information needed to guide these decisions. Efficiency in generating
such information is the key to meeting market windows, keeping development and
manufacturing costs low, and having high-quality products. Robust Design is an
engineering methodology for improving productivity during research and development
so that high-quality products can be produced quickly and at low cost.
This chapter gives an overview of the basic concepts underlying the Robust
Design methodology:
• Section 1.1 gives a brief historical background of the method.
• Section 1.2 defines the term quality as it is used in this book.
• Section 1.3 enumerates the basic elements of the cost of a product.
• Section 1.4 describes the fundamental principle of the Robust Design
methodology with the help of a manufacturing example.
• Section 1.5 briefly describes the major tools used in Robust Design.
• Section 1.6 presents some representative problems and the benefits of using the
Robust Design method in addressing them.
• Section 1.7 gives a chapter-by-chapter outline of the rest of the book.
• Section 1.8 summarizes the important points of this chapter.
In the subsequent chapters, we describe Robust Design concepts in detail and, through
case studies, we show how to apply them.
When Japan began its reconstruction efforts after World War II, it faced an acute
shortage of good-quality raw material, high-quality manufacturing equipment and skilled
engineers. The challenge was to produce high-quality products and continue to
improve the quality under those circumstances. The task of developing a methodology
to meet the challenge was assigned to Dr. Genichi Taguchi, who at that time was a
manager in charge of developing certain telecommunications products at the Electrical
Communications Laboratories (ECL) of Nippon Telephone and Telegraph Company
(NTT). Through his research in the 1950s and the early 1960s, Dr. Taguchi developed
the foundations of Robust Design and validated its basic philosophies by applying them in the development of many products.

1.2 WHAT IS QUALITY?
Because the word quality means different things to different people (see, for example,
Juran [J3], Deming [D2], Crosby [C5], Garvin [G1], and Feigenbaum [F1]), we need to
define its use in this book. First, let us define what we mean by the ideal quality
which can serve as a reference point for measuring the quality level of a product. The
ideal quality a customer can expect is that every product delivers the target
performance each time the product is used, under all intended operating conditions, and
throughout its intended life, with no harmful side effects. Note that the traditional
concepts of reliability and dependability are part of this definition of quality. In specific
situations, it may be impossible to produce a product with ideal quality. Nonetheless,
ideal quality serves as a useful reference point for measuring the quality level.
The following example helps clarify the definition of ideal quality. People buy
automobiles for different purposes. Some people buy them to impress their friends
while others buy them to show off their social status. To satisfy these diverse
purposes, there are different types (species) of cars—sports cars, luxury cars, etc.—on the
market. For any type of car, the buyer always wants the automobile to provide reliable
transportation. Thus, for each type of car, an ideal quality automobile is one that
works perfectly each time it is used (on hot summer days and cold winter days),
throughout its intended life (not just the warranty life) and does not pollute the
atmosphere.
When a product's performance deviates from the target performance, its quality
is considered inferior. The performance may differ from one unit to another or from
one environmental condition to another, or it might deteriorate before the expiration of
the intended life of the product. Such deviation in performance causes loss to the user
of the product, the manufacturer of the product, and, in varying degrees, to the rest of
the society as well. Following Taguchi, we measure the quality of a product in terms
of the total loss to society due to functional variation and harmful side effects. Under
the ideal quality, the loss would be zero; the greater the loss, the lower the quality.
In the automobile example, if a car breaks down on the road, the driver would, at
the least, be delayed in reaching his or her destination. The disabled car might be the
cause of traffic jams or accidents. The driver might have to spend money to have the
car towed. If the car were under warranty, the manufacturer would have to pay for
repairs. The concept of quality loss includes all these costs, not just the warranty cost.
Quantifying the quality loss is difficult and is discussed in Chapter 2.
1. Operating Cost. Operating cost consists of the cost of energy needed to operate
the product, environmental control, maintenance, inventory of spare parts and
units, etc. Products made by different manufacturers can have different energy
costs. If a product is sensitive to temperature and humidity, then elaborate and
costly air conditioning and heating units are needed. A high failure rate of a product causes large maintenance costs and costly inventory of spare units.
A manufacturer can greatly reduce the operating cost by designing the product
robust—that is, minimizing the product's sensitivity to environmental and usage
conditions, manufacturing variation, and deterioration of parts.
2. Manufacturing Cost. Important elements of manufacturing cost are equipment,
machinery, raw materials, labor, scrap, rework, etc. In a competitive
environment, it is important to keep the unit manufacturing cost (umc) low by using
low-grade material, employing less-skilled workers, and using less-expensive
equipment, and at the same time maintain an appropriate level of quality. This is
possible by designing the product robust, and designing the manufacturing
process robust—that is, minimizing the process' sensitivity to manufacturing
disturbances.
3. R&D Cost. The time taken to develop a new product plus the amount of
engineering and laboratory resources needed are the major elements of R&D
cost. The goal of R&D activity is to keep the umc and operating cost low.
Robust Design plays an important role in achieving this goal because it improves
the efficiency of generating information needed to design products and processes,
thus reducing development time and resources needed for development.
Note that the manufacturing cost and R&D cost are incurred by the producer and
then passed on to the customer through the purchase price of the product. The
operating cost, which is also called usage cost, is borne directly by the customer and it is
directly related to the product's quality. From the customer's point of view, the
purchase price plus the operating cost determine the economics of satisfying the need for
which the product is bought. Higher quality means lower operating cost and vice
versa. Robust Design is a systematic method for keeping the producer's cost low
while delivering a high-quality product, that is, while keeping the operating cost low.
The key idea behind Robust Design is illustrated by the experience of Ina Tile
Company, described in detail in Taguchi and Wu [T7]. During the late 1950s, Ina Tile
Company in Japan faced the problem of high variability in the dimensions of the tiles
it produced [see Figure 1.2(a)]. Because screening (rejecting those tiles outside
specified dimensions) was an expensive solution, the company assigned a team of
expert engineers to investigate the cause of the problem. The team's analysis showed
that the tiles at the center of the pile inside the kiln [see Figure 1.2 (b)] experienced
lower temperature than those on the periphery. This nonuniformity of temperature
distribution proved to be the cause of the nonuniform tile dimensions. The team reported
that it would cost approximately half a million dollars to redesign and build a kiln in
which all the tiles would receive uniform temperature distribution. Although this
alternative was less expensive than screening, it was still too costly.
The team then brainstormed and defined a number of process parameters that
could be changed easily and inexpensively. After performing a small set of well-
planned experiments according to Robust Design methodology, the team concluded that
increasing the lime content of the clay from 1 percent to 5 percent would greatly
reduce the variation of the tile dimensions. Because lime was the least expensive
ingredient, the cost implication of this change was also favorable.
Thus, the problem of nonuniform tile dimensions was solved by minimizing the
effect of the cause of the variation (nonuniform temperature distribution) without
controlling the cause itself (the kiln design). As illustrated by this example, the
fundamental principle of Robust Design is to improve the quality of a product by minimizing the
effect of the causes of variation without eliminating the causes. This is achieved by
optimizing the product and process designs to make the performance minimally
sensitive to the various causes of variation. This is called parameter design. However,
parameter design alone does not always lead to sufficiently high quality. Further
improvement can be obtained by controlling the causes of variation where
economically justifiable, typically by using more expensive equipment, higher grade
components, better environmental controls, etc., all of which lead to higher product cost, or
operating cost, or both. The benefits of improved quality must justify the added
product cost.
1.5 TOOLS USED IN ROBUST DESIGN

A great deal of engineering time is spent generating information about how different
design parameters affect performance under different usage conditions. Robust Design
methodology serves as an "amplifier"—that is, it enables an engineer to generate
information needed for decision-making with half (or even less) the experimental effort.
There are two important tasks to be performed in Robust Design:
1. Measurement of Quality During Design/Development. We want a leading
indicator of quality by which we can evaluate the effect of changing a particular
design parameter on the product's performance.
2. Efficient Experimentation to Find Dependable Information about the Design
Parameters. It is critical to obtain dependable information about the design
parameters so that design changes during manufacturing and customer use can be
avoided. Also, the information should be obtained with minimum time and
resources.
The estimated effects of design parameters must be valid even when other
parameters are changed during the subsequent design effort or when designs of related
subsystems change. This can be achieved by employing the signal-to-noise (S/N) ratio to
measure quality and orthogonal arrays to study many design parameters
simultaneously. These tools are described later in this book.
Figure 1.2 (a) Distribution of tile dimensions, showing the initial distribution and the improved distribution relative to the target and the acceptable deviation. (b) Arrangement of the tiles stacked inside the kiln, with the burner and the kiln wall.
1.6 APPLICATIONS AND BENEFITS OF ROBUST DESIGN

The Robust Design method is in use in many areas of engineering throughout the United States. For example, AT&T's use of Robust Design methodology has led to
improvement of several processes in very large scale integrated (VLSI) circuit
fabrication used in the manufacture of 1-megabit and 256-kilobit memory chips, 32-bit
processor chips, and other products. Some of the VLSI applications are:
• The aluminum etching application originated from a belief that poor photoresist
print quality leads to line width loss and to undercutting during the etching
process. By making the etching process insensitive to photoresist profile variation
and other sources of variation, the visual defects were reduced from 80 percent to
15 percent. Moreover, the etching step could then tolerate the variation in the
photoresist profile.
• The reactive ion etching of tantalum silicide (described in Katz and Phadke [K3]) used to give highly nonuniform etch quality, so only 12 out of 18 possible
wafer positions could be used for production. After optimization, 17 wafer
positions became usable—a hefty 40 percent increase in machine utilization. Also,
the efficiency of the orthogonal array experimentation allowed this project to be
completed by the 20-day deadline. In this case, $1.2 million was saved in
equipment replacement costs not including the expense of disruption on the factory
floor.
• The polysilicon deposition process had between 10 and 5000 surface defects per unit area. As such, it represented a serious roadblock in advancing to line
widths smaller than 1.75 micron. Six process parameters were investigated with
18 experiments leading to consistently less than 10 surface defects per unit area.
As a result, the scrap rate was reduced significantly and it became possible to
process smaller line widths. This case study is described in detail in Chapter 4.
All these examples show that the Robust Design methodology offers
simultaneous improvement of product quality, performance and cost, and engineering
productivity. Its widespread use in industry will have a far-reaching economic impact because
this methodology can be applied profitably in all engineering activities, including
product design and manufacturing process design.
The philosophy behind Robust Design is not limited to engineering applications.
Yokoyama and Taguchi [Y1] have also shown its applications in profit planning in
business, cash-flow optimization in banking, government policymaking, and other
areas. The method can also be used for tasks such as determining optimum work force
mix for jobs where the demand is random, and improving the runway utilization at an
airport.
This book is divided into three parts. The first part (Chapters 1 through 4) describes
the basics of the Robust Design methodology. Chapter 2 describes the quality loss
function, which gives a quantitative way of evaluating the quality level of a product
rather than just the "good-bad" characterization. After categorizing the sources of
variation, the chapter further describes the steps in engineering design and the classification
of parameters affecting the product's function. Quality control activities during
different stages of the product realization process are also described there. Chapter 3 is
devoted to orthogonal array experiments and basic analysis of the data obtained
through such experiments. Chapter 4 illustrates the entire strategy of Robust Design
through an integrated circuit (IC) process design example. The strategy begins with
problem formulation and ends with verification experiment and implementation. This
case study could be used as a model in planning and carrying out manufacturing
process optimization for quality, cost, and manufacturability. The example also has the
basic framework for optimizing a product design.
The second part of the book (Chapters 5 through 7) describes, in detail, the
techniques used in Robust Design. Chapter 5 describes the concept of signal-to-noise ratio
and gives appropriate signal-to-noise ratios for a number of common engineering
problems. Chapter 6 is devoted to a critical decision in Robust Design: choosing an
appropriate response variable, called quality characteristic, for measuring the quality of
a product or a process. The guidelines for choosing quality characteristics are
illustrated with examples from many different engineering fields. A step-by-step procedure
for designing orthogonal array experiments for a large variety of industrial problems is
given in Chapter 7.
The third part of the book (Chapters 8 through 11) describes four more case
studies to illustrate the use of Robust Design in a wide variety of engineering
disciplines. Chapter 8 shows how the Robust Design method can be used to optimize
product design when computer simulation models are available. The differential operational
amplifier case study is used to illustrate the optimization procedure. This chapter also
shows the use of orthogonal arrays to simulate the variation in component values and
environmental conditions, and thus estimate the yield of a product. Chapter 9 shows
the procedure for designing an ON-OFF control system for a temperature controller.
The use of Robust Design for improving the performance of a hardware-software
system is described in Chapter 10 with the help of the UNIX operating system tuning case
study. Chapter 11 describes the router bit life study and explains how Robust Design
can be used to improve reliability.
1.8 SUMMARY
• Through his research in the 1950s and early 1960s, Dr. Genichi Taguchi
developed the foundations of Robust Design and validated the basic, underlying
philosophies by applying them in the development of many products.
• Robust Design uses many ideas from statistical experimental design and adds a
new dimension to it by explicitly addressing two major concerns faced by all
product and process designers:

a. How to reduce economically the variation of a product's function in the customer's environment.

b. How to ensure that the conclusions drawn from laboratory experiments remain valid in manufacturing and in the customer's environment, so that design changes after the design stage can be avoided.
• The ideal quality a customer can receive is that every product delivers the target
performance each time the product is used, under all intended operating
conditions, and throughout the product's intended life, with no harmful side effects.
The deviation of a product's performance from the target causes loss to the user
of the product, the manufacturer, and, in varying degrees, to the rest of society as
well. The quality level of a product is measured in terms of the total loss to the
society due to functional variation and harmful side effects.
• The three main categories of cost one must consider in delivering a product are:
(1) operating cost: the cost of energy, environmental control, maintenance,
inventory of spare parts, etc. (2) manufacturing cost: the cost of equipment,
machinery, raw materials, labor, scrap, rework, etc. (3) R&D cost: the time
taken to develop a new product plus the engineering and laboratory resources
needed.
• The two major tools used in Robust Design are: (1) signal-to-noise ratio, which
measures quality and (2) orthogonal arrays, which are used to study many
design parameters simultaneously.
• The Robust Design method has been found valuable in virtually all engineering
fields and business applications.
Chapter 2
PRINCIPLES OF
QUALITY ENGINEERING
A product's life cycle can be divided into two main parts: before sale to the customer
and after sale to the customer. All costs incurred prior to the sale of the product are
added to the unit manufacturing cost (umc), while all costs incurred after the sale are
lumped together as quality loss. Quality engineering is concerned with reducing both
of these costs and, thus, is an interdisciplinary science involving engineering design,
manufacturing operations, and economics.
It is often said that higher quality (lower quality loss) implies higher unit
manufacturing cost. Where does this misconception come from? It arises because
engineers and managers, unaware of the Robust Design method, tend to achieve higher
quality by using more costly parts, components, and manufacturing processes. In this
chapter we delineate the basic principles of quality engineering and put in perspective
the role of Robust Design in reducing the quality loss as well as the umc. This chapter
contains nine sections:
• Sections 2.1 and 2.2 are concerned with the quantification of quality loss.
Section 2.1 describes the shortcomings of using fraction defective as a measure of
quality loss. (This is the most commonly used measure of quality loss.)
Section 2.2 describes the quadratic loss function, which is a superior way of
quantifying quality loss in most situations.
• Section 2.3 describes the various causes, called noise factors, that lead to the
deviation of a product's function from its target.
• Section 2.4 focuses on the computation of the average quality loss, its
components, and the relationship of these components to the noise factors.
• Section 2.5 describes how Robust Design exploits nonlinearity to reduce the
average quality loss without increasing umc.
• Section 2.6 classifies the parameters that influence a product's or process's function into signal, noise, and control factors.

• Section 2.7 discusses different ways of formulating product and process design optimization problems and gives a heuristic solution.
• Section 2.8 addresses the various stages of the product realization process and
the role of various quality control activities in these stages.
2.1 QUALITY LOSS FUNCTION—THE FRACTION DEFECTIVE FALLACY

We have defined the quality level of a product to be the total loss incurred by society
due to the failure of the product to deliver the target performance and due to harmful
side effects of the product, including its operating cost. Quantifying this loss is
difficult because the same product may be used by different customers, for different
applications, under different environmental conditions, etc. However, it is important to
quantify the loss so that the impact of alternative product designs and manufacturing
processes on customers can be evaluated and appropriate engineering decisions made.
Moreover, it is critical that the quantification of loss not become a major task that
consumes substantial resources at various stages of product and process design.
It is common to measure quality in terms of the fraction of the total number of
units that are defective. This is referred to as fraction defective. Although commonly
used, this measure of quality is often incomplete and misleading. It implies that all
products that meet the specifications (allowable deviations from the target response) are
equally good, while those outside the specifications are bad. The fallacy here is that
the product that barely meets the specifications is, from the customer's point of view,
as good or as bad as the product that is barely outside the specifications. In reality, the
product whose response is exactly on target gives the best performance. As the
product's response deviates from the target, the quality becomes progressively worse.
The difference between merely meeting tolerances and staying close to the target is illustrated by a comparison of the color density of television sets made by Sony-USA and Sony-Japan, shown in Figure 2.1.

Figure 2.1 Distribution of color density in television sets made by Sony-USA and Sony-Japan, with the tolerance limits and grades A, B, and C marked. (Source: The Asahi, April 17, 1979.)
The perceived difference in quality becomes clear when we look closely at the
sets that met the tolerance limits. Sets with color density very near m perform best and
can be classified grade A. As the color density deviates from m, the performance
becomes progressively worse, as indicated in Figure 2.1 by grades B and C. It is clear
that Sony-Japan produced many more grade A sets and many fewer grade C sets when
compared to Sony-USA. Thus, the average grade of sets produced by Sony-Japan was
better, hence the customer's preference for the sets made by Sony-Japan.
In short, the difference in the customer's perception of quality was a result of
Sony-USA paying attention only to meeting the tolerances, whereas in Sony-Japan the
attention was focused on meeting the target.
Using a wrong measurement system can, and often does, drive the behavior of people
in wrong directions. The telephone cable example described here illustrates how using
fraction defective as a measure of quality loss can permit suboptimization by the
manufacturer leading to an increase in the total cost, which is the sum of quality loss
and umc.
Figure 2.2 Distribution of telephone cable resistance (ohms/mile) relative to the tolerance limits m ± Δ₀: (a) initial distribution; (b) after process improvement and shifting the mean.
The examples above bring out an important point regarding quantification of quality
loss. Products that do not meet tolerances inflict a quality loss on the manufacturer, a
loss visible in the form of scrap or rework in the factory, which the manufacturer adds
to the cost of the product. However, products that meet tolerance also inflict a quality
loss, a loss that is visible to the customer and that can adversely affect the sales of the
product and the reputation of the manufacturer. Therefore, the quality loss function
must also be capable of measuring the loss due to products that meet the tolerances.
The fraction-defective view of quality corresponds to the following step loss function:

L(y) = 0  if |y − m| ≤ Δ₀,  and  L(y) = A₀  otherwise.   (2.1)

Here, A₀ is the cost of replacement or repair. Use of such a loss function is apt to lead to the problems that Sony-USA and the cable manufacturer faced and, hence, should be avoided.
Figure 2.3 Quality loss functions: (a) step loss function implied by the fraction-defective measure, with loss A₀ outside the tolerance limits m ± Δ₀; (b) quadratic loss function.
2.2 QUADRATIC LOSS FUNCTION

The quadratic loss function can meaningfully approximate the quality loss in most situations. Let y be the quality characteristic of a product and m be the target value for y. (Note that the quality characteristic is a product's response that is observed for quantifying quality level and for optimization in a Robust Design project.) According to the quadratic loss function, the quality loss is given by

L(y) = k (y − m)²   (2.2)

where k is a constant called the quality loss coefficient. Equation (2.2) is plotted in Figure 2.3(b). Notice that at y = m the loss is zero and so is the slope of the loss function. This is quite appropriate because m is the best value for y. The loss L(y) increases slowly when we are near m; but as we go farther from m the loss increases more rapidly. Qualitatively, this is exactly the kind of behavior we would like the quality loss function to have. The quadratic loss function given by Equation (2.2) is the simplest mathematical function that has the desired qualitative behavior.
Note that Equation (2.2) does not imply that every customer who receives a
product with y as the value of the quality characteristic will incur a precise quality loss equal to L(y). Rather, it implies that the average quality loss incurred by those customers is
L(y). The quality loss incurred by a particular customer will obviously depend on that
customer's operating environment.
It is important to determine the constant k so that Equation (2.2) can best
approximate the actual loss within the region of interest. This is a rather difficult, though
important, task. A convenient way to determine k is to determine first the functional limits for
the value of y. The functional limit is the value of y at which the product would fail in half of the applications. Let m ± Δ₀ be the functional limits, and suppose the loss at m ± Δ₀ is A₀. Then, by substitution in Equation (2.2), we obtain

k = A₀ / Δ₀²   (2.3)

Note that A₀ is the cost of repair or replacement of the product. It includes the loss
due to the unavailability of the product during the repair period, the cost of transporting
the product by the customer to and from the repair center, etc. If a product fails in an
unsafe mode, such as an automobile breaking down in the middle of a road, then the
losses from the resulting consequences should also be included in A₀. Regardless of who pays for them—the customer, the manufacturer, or a third party—all these losses should be included in A₀.
Substituting Equation (2.3) in Equation (2.2) we obtain

L(y) = (A₀ / Δ₀²) (y − m)²   (2.4)

Suppose the functional limits for the color density are m ± 7. This means about half the customers, taking into account the diversity of their environment and taste, would find the television set to be defective if the color density is m ± 7. Let the repair of a television set in the field cost on average A₀ = $98. By substituting in Equation (2.4), the quadratic loss function can be written as

L(y) = (98 / 7²) (y − m)² = 2 (y − m)²

Thus, the average quality loss incurred by the customers receiving sets with color density m + 4 is L(m + 4) = $32, while customers receiving sets with color density m + 2 incur an average quality loss of only L(m + 2) = $8.
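As a quick numerical check of the television example, the short Python sketch below (added here for illustration; it is not part of the original text) computes the loss coefficient k from Equation (2.3) and evaluates the quadratic loss of Equation (2.2) at a few color-density deviations.

```python
def quality_loss_coefficient(a0, delta0):
    """k = A0 / Delta0**2, Equation (2.3)."""
    return a0 / delta0 ** 2

def quadratic_loss(y, m, k):
    """Nominal-the-best loss L(y) = k * (y - m)**2, Equation (2.2)."""
    return k * (y - m) ** 2

# Television example: repair cost A0 = $98, functional limits m +/- 7.
k = quality_loss_coefficient(98.0, 7.0)   # k = 2 dollars per (unit of color density)^2
m = 0.0                                   # measure color density as deviation from the target
for dev in (2.0, 4.0, 7.0):
    print(dev, quadratic_loss(m + dev, m, k))   # prints 8.0, 32.0, and 98.0 dollars
```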
Consider a power supply circuit used in a stereo system for which the target output
voltage is 110 volts. If the output voltage falls outside 110 ±20 volts, then the stereo
fails in half the situations and must be repaired. Suppose it costs $100 to repair the
stereo. Then the average loss associated with a particular value y of output voltage is
given by
L(y) = (100 / 20²) (y − 110)² = 0.25 (y − 110)²
The quadratic loss function given by Equation (2.2) is applicable whenever the quality
characteristic y has a finite target value, usually nonzero, and the quality loss is
symmetric on either side of the target. Such quality characteristics are called nominal-the-
best type quality characteristics and Equation (2.2) is called the nominal-the-best type
quality loss function. The color density of a television set and the output voltage of a
power supply circuit are examples of the nominal-the-best type quality characteristic.
Some variations of the quadratic loss function in Equation (2.2) are needed to
cover adequately certain commonly occurring situations. Three such variations are
given below.
• Smaller-the-better type characteristic. Some characteristics, such as radiation
leakage from a microwave oven, can never take negative values. Also, their
ideal value is equal to zero, and as their value increases, the performance
becomes progressively worse. Such characteristics are called smaller-the-better
type quality characteristics. The response time of a computer, leakage current in
electronic circuits, and pollution from an automobile are additional examples of
this type of quality characteristic. The quality loss in such situations can be
approximated by the following function, which is obtained from Equation (2.2)
by substituting m = 0:

L(y) = k y²   (2.5)

Note this is a one-sided loss function because y cannot take negative values. As described earlier, the quality loss coefficient k can be determined from the functional limit, Δ₀, and the quality loss, A₀, at the functional limit by using Equation (2.3).
• Larger-the-better type characteristic. For some characteristics, the ideal value is as large as possible (the target is infinity), and the performance becomes progressively worse as the value decreases. For such larger-the-better type quality characteristics, the quality loss can be approximated by

L(y) = k (1 / y²)   (2.6)

The rationale for using Equation (2.6) as the quality loss function for larger-the-better type characteristics is discussed further in Chapter 5. To determine the constant k for this case, we find the functional limit, Δ₀, below which more than half of the products fail, and the corresponding loss A₀. Substituting Δ₀ and A₀ in Equation (2.6), and solving for k, we obtain

k = A₀ Δ₀²   (2.7)

• Asymmetric loss. In some situations, a deviation of the quality characteristic in one direction is much more harmful than a deviation of the same magnitude in the other direction. In such cases, a different coefficient can be used for each direction:

L(y) = k₁ (y − m)²  if y > m,  and  L(y) = k₂ (y − m)²  if y ≤ m.   (2.8)
The four different versions of the quadratic loss function are plotted in Figure
2.4. For a more detailed discussion of the quality loss function see Taguchi [T4] and Jessup [J1].
Figure 2.4 The four versions of the quadratic loss function: (a) nominal-the-best, (b) smaller-the-better, (c) larger-the-better, and (d) asymmetric.
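The four versions of the loss function are easy to collect in code. The following Python sketch is an illustration added here (not from the original text); it implements Equations (2.2), (2.5), (2.6), and (2.8), with the coefficients to be determined separately from Equations (2.3) and (2.7).

```python
def loss_nominal_the_best(y, m, k):
    # Equation (2.2): finite target m, symmetric loss about the target
    return k * (y - m) ** 2

def loss_smaller_the_better(y, k):
    # Equation (2.5): ideal value is zero, y never negative
    return k * y ** 2

def loss_larger_the_better(y, k):
    # Equation (2.6): ideal value is infinity, loss falls as y grows
    return k / y ** 2

def loss_asymmetric(y, m, k1, k2):
    # Equation (2.8): different coefficients above and below the target
    return (k1 if y > m else k2) * (y - m) ** 2
```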
Let us look at a few common products and identify key noise factors:
• Refrigerator. Some of the important noise factors related to the temperature
control inside a refrigerator are:
— External—the number of times the door is opened and closed, the amount
of food kept and the initial temperature of the food, variation in the
ambient temperature, and the supply voltage variation.
— Unit-to-unit variation—the tightness of the door closure and the amount of
refrigerant used.
— Deterioration—the leakage of refrigerant and mechanical wear of
compressor parts.
• Automobile. The following noise factors are important for the braking distance of
an automobile:
1. External to the process. These are the noise factors related to the environment in
which the process is carried out (ambient temperature, humidity, etc.) and the
load offered to the process. Variation in the raw material and operator errors are
also examples of this category.
2. Process nonuniformity. In some processes, many units are processed
simultaneously as a batch. For example, in wave soldering of printed circuit boards as
many as 1000 or more solder joints may be formed simultaneously. Each solder
joint experiences different processing conditions based on its position on the
board. In some processes, process nonuniformity can be an important source of
variation.
3. Process drift. Due to the depletion of chemicals used or the wearing out of the
tools, the average quality characteristic of the products may drift as more units
are produced.
The following example further clarifies the three types of noise factors in a
manufacturing process:
• Developing photos. Some of the key noise factors for the developing process
are:
Because of the noise factors, the quality characteristic y of a product varies from unit to unit, and from time to time during the usage of the product. Suppose the distribution of y resulting from all sources of noise is as shown in Figure 2.5. Let y₁, y₂, ..., yₙ be n representative measurements of the quality characteristic y taken on a few representative units throughout the design life of the product. Let y be a nominal-the-best type quality characteristic and m be its target value. Then, the average quality loss, Q, resulting from this product is given by

Q = k [ (μ − m)² + σ² ]   (2.9)

where μ and σ² are the mean and the variance of y, respectively, computed as

μ = (1/n) Σ yᵢ   and   σ² = [1/(n − 1)] Σ (yᵢ − μ)²

with the sums running over i = 1, 2, ..., n. Thus, the average quality loss has the following two components:

1. k(μ − m)², resulting from the deviation of the average value of y from the target

2. kσ², resulting from the mean squared deviation of y around its own mean
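A small Python sketch (an added illustration, not from the original text) shows how the average quality loss of Equation (2.9) and its two components can be estimated from a set of representative measurements; the readings used below are invented.

```python
import statistics

def average_quality_loss(measurements, m, k):
    """Estimate Q = k[(mu - m)^2 + sigma^2] of Equation (2.9) and its two components."""
    mu = statistics.mean(measurements)
    sigma2 = statistics.variance(measurements)   # sample variance (n - 1 divisor)
    off_target = k * (mu - m) ** 2               # loss from the mean missing the target
    spread = k * sigma2                          # loss from variation about the mean
    return off_target + spread, off_target, spread

# Hypothetical color-density readings (deviations from the target m = 0), with k = 2.
total, from_mean, from_variance = average_quality_loss(
    [1.2, -0.5, 2.1, 0.4, 1.8], m=0.0, k=2.0)
```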
Figure 2.5 The quadratic quality loss function together with the distribution of y, which has mean μ and variance σ².
1. Screening out bad products. Here, products that are outside certain limits,
m ± Δ′, are rejected as defective. Typically Δ′ < Δ₀ so that measurement errors and product deterioration are properly taken into account. The rejected pieces are
either reworked or scrapped. Because inspection, scrap, and rework are
expensive, this method of reducing the variance leads to higher cost per passed
product.
2. Discovering the cause of malfunction and eliminating it. Variance can also be
reduced by discovering the cause of malfunction and eliminating it. For
example, if the cause of malfunction is fluctuation in the ambient temperature, then
the customer is asked to use the equipment in an air-conditioned place. If the
tolerance on a particular component is identified as a major contributor to system
performance, then a narrower tolerance is specified for that component. This
method of reducing the variance, which is frequently used, is also expensive but
usually less expensive than screening.
Variance can also be reduced, often at much lower cost, by reducing the sensitivity of the product's function to the noise factors. Let the dependence of the quality characteristic y on the noise factors x = (x₁, x₂, ..., xₙ) and the control factors z be written as

y = f(x, z)   (2.11)

The deviation, Δy, of the quality characteristic from the target value caused by the deviations, Δxᵢ, of the noise factors from their respective nominal values can be approximated by the following formula:

Δy ≈ (∂f/∂x₁) Δx₁ + (∂f/∂x₂) Δx₂ + ··· + (∂f/∂xₙ) Δxₙ   (2.12)

Further, if the deviations in the noise factors are uncorrelated, the variance of y, σ²(y), can be expressed in terms of the variances, σ²(xᵢ), of the individual noise factors as

σ²(y) = Σᵢ (∂f/∂xᵢ)² σ²(xᵢ)   (2.13)

Thus, the variance σ²(y) is a sum of the products of the variances of the noise factors, σ²(xᵢ), and the sensitivity coefficients, (∂f/∂xᵢ)². The sensitivity coefficients are
themselves functions of the control factor values. A robust product (or a robust
process) is one for which the sensitivity coefficients are the smallest. Thus, it is obvious
that for robust products we can allow wider manufacturing tolerances, lower grade
components or materials, and a wider operating environment. The Robust Design
method is a way of arriving at the robust products and processes efficiently.
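Equation (2.13) is easy to apply numerically when the function f can be evaluated. The Python sketch below is an added illustration (the example function and the numbers are hypothetical); it estimates the sensitivity coefficients by central finite differences and combines them with the noise-factor variances.

```python
def transmitted_variance(f, nominals, noise_variances, rel_step=1e-4):
    """Approximate Equation (2.13): var(y) ~= sum_i (df/dx_i)^2 * var(x_i).

    f               -- function of a list of noise-factor values, returning y
    nominals        -- nominal (mean) values of the noise factors
    noise_variances -- variances of the noise factors, assumed uncorrelated
    """
    var_y = 0.0
    for i, (x0, var_x) in enumerate(zip(nominals, noise_variances)):
        step = rel_step * (abs(x0) if x0 != 0 else 1.0)
        up = list(nominals); up[i] = x0 + step
        dn = list(nominals); dn[i] = x0 - step
        dfdx = (f(up) - f(dn)) / (2.0 * step)   # sensitivity coefficient df/dx_i
        var_y += dfdx ** 2 * var_x
    return var_y

# Hypothetical response depending on two noise factors.
f = lambda x: 3.0 * x[0] + 0.5 * x[1] ** 2
print(transmitted_variance(f, nominals=[10.0, 4.0], noise_variances=[0.01, 0.04]))
```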
The following example of the design of an electrical power supply circuit vividly
illustrates the exploitation of nonlinearity for reducing sensitivity coefficients. The
dependence of the output voltage, y, on the gain of a transistor, A, in the power supply circuit is shown in Figure 2.6. It is clear that this relationship is nonlinear. The relationship between the output voltage and the dividing resistor, B, is linear, as shown in the figure. To achieve a 110-volt target output voltage, one may choose A₁ as the transistor gain and B₁ as the dividing resistance. Suppose the transistor has ± 20
percent variation as a result of manufacturing variation, environmental variation, and drift.
Then, the corresponding variation in the output voltage would be ±5 volts as indicated
in the figure. Suppose the allowed variation in the output voltage is ±2 volts. That
requirement can be accomplished by reducing the tolerance on the gain by a factor of
about 2.5. This would, however, mean higher manufacturing cost.
Figure 2.6 Output voltage as a function of the transistor gain and of the resistance of the dividing resistor. The nonlinear relationship (gain) is useful for attenuating sensitivity to noise; the linear relationship (resistance) is useful for shifting the mean on target.
Now, consider moving the gain to A₂ where the corresponding output voltage is
125 volts. Here, for a ±20 percent variation in the gain, the variation in the output
voltage is only about ± 2 volts as shown in the figure. The mean output voltage can be
brought back to the desired nominal of 110 volts by moving the resistance from B₁ to B₂. Because of linearity, this change in resistance has a negligible effect on the
variation of the output voltage. Thus, we can achieve a large reduction in the variation of
the output voltage by simply changing the nominal values of the transistor gain and the
dividing resistor. This change, however, does not change the manufacturing cost of the
circuit. Thus, by exploiting nonlinearity we can reduce the quality loss without
increasing the product cost.
If the requirements on the variation of the output voltage were even tighter due
to a large quality loss associated with the deviation of the output voltage from the
target, the tolerance on the gain could be tightened as economically justifiable.
Thus, the variance of the output voltage can be reduced by two distinct actions:
1. Move the nominal value of gain so that the output voltage is less sensitive to the
tolerance on the gain, which is noise.
2. Reduce the tolerance on the gain to control the noise.
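To make the first action concrete, the Python sketch below is an added, purely illustrative experiment: the gain curve is an invented saturating function (it does not reproduce Figure 2.6), and a Monte Carlo draw of ± 20 percent gain variation is used to compare the output-voltage spread at a low-gain and a high-gain operating point, with the dividing-resistance ratio readjusted in each case to bring the nominal output back to 110 volts.

```python
import math
import random

def output_voltage(gain, resistance_ratio):
    # Invented saturating gain curve; the output scales linearly with the resistance ratio.
    return 135.0 * (1.0 - math.exp(-gain / 150.0)) * resistance_ratio

def voltage_spread(gain_nominal, resistance_ratio, rel_tol=0.20, n=20_000):
    """Range of output voltage for +/- rel_tol uniform variation of the gain."""
    samples = [output_voltage(gain_nominal * random.uniform(1 - rel_tol, 1 + rel_tol),
                              resistance_ratio)
               for _ in range(n)]
    return max(samples) - min(samples)

for gain in (100.0, 400.0):                      # low-gain vs. high-gain operating point
    ratio = 110.0 / output_voltage(gain, 1.0)    # linear adjustment: nominal output = 110 V
    print(gain, round(voltage_spread(gain, ratio), 1))
# The spread is much smaller at the higher gain, where the invented curve is flatter.
```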
should be normalized by the projected sales volume) and umc associated with tolerance
design. The role of the quadratic loss function and the average quality loss evaluation
in managing the economics of continuous quality improvement is discussed in detail by
Sullivan [S5].
Although parameter design may not increase the umc, it is not necessarily free of
cost. It needs an R&D budget to explore the nonlinear effects of the various control
factors. By using the techniques of orthogonal arrays and signal-to-noise ratios, which
are an integral part of the Robust Design method, one can greatly improve the R&D
efficiency when compared to the efficiency of the present practice of studying one
control factor at a time or an ad-hoc method of finding the best values of many control
factors simultaneously. Thus, by using the Robust Design method, there is potential to
also reduce the total R&D cost.
Block diagram of a product or process: the response y is determined by the signal factor M, the noise factors x, and the control factors z.
The parameters that influence the response of a product or process, shown in the block diagram above, can be classified into the following three classes (note that the word parameter is equivalent to the word factor in most of the Robust Design literature):
1. Signal factors (M). These are the parameters set by the user or operator of the
product to express the intended value for the response of the product. For
example, the speed setting on a table fan is the signal factor for specifying the amount
of breeze; the steering wheel angle is a signal factor that specifies the turning
radius of an automobile. Other examples of signal factors are the 0 and 1 bits
transmitted in a digital communication system and the original document to be
copied by a photocopying machine. The signal factors are selected by the design
engineer based on the engineering knowledge of the product being developed.
Sometimes two or more signal factors are used in combination to express the
desired response. Thus, in a radio receiver, tuning could be achieved by using
the coarse and fine-tuning knobs in combination.
2. Noise factors (x). Certain parameters cannot be controlled by the designer and
are called noise factors. Section 2.3 described three broad classes of noise
factors. Parameters whose settings (also called levels) are difficult to control in the
field or whose levels are expensive to control are also considered noise factors.
The levels of the noise factors change from one unit to another, from one
environment to another, and from time to time. Only the statistical
characteristics (such as the mean and variance) of noise factors can be known or specified
but the actual values in specific situations cannot be known. The noise factors
cause the response y to deviate from the target specified by the signal factor M
and lead to quality loss.
3. Control factors (z). These are parameters that can be specified freely by the
designer. In fact, it is the designer's responsibility to determine the best values
of these parameters. Each control factor can take multiple values, called levels.
When the levels of certain control factors are changed, the manufacturing cost
does not change; however, when the levels of others are changed, the
manufacturing cost also changes. In the power supply circuit example of Section 2.5, the
transistor gain and the dividing resistance are control factors that do not change
the manufacturing cost. However, the tolerance of the transistor gain has a
definite impact on the manufacturing cost. We will refer to the control factors
that affect manufacturing cost as tolerance factors, whereas the other control
factors simply will be called control factors.
Robust Design projects can be classified on the basis of the nature of the signal
factor and the quality characteristic. In some problems, the signal factor takes a
constant value. Such problems are called static problems. The other problems are called
dynamic problems. These and other types of problems are described in Chapter 5.
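As a simple way of keeping track of this classification in practice, the Python sketch below (an added illustration; the structure and the factor names are assumptions, not from the original text) records the factors of the power supply circuit of Section 2.5, which is a static problem.

```python
from dataclasses import dataclass, field

@dataclass
class FactorClassification:
    """Parameters of a product or process, grouped as in the block diagram."""
    signal_factors: list = field(default_factory=list)     # M: set by the user or operator
    control_factors: list = field(default_factory=list)    # z: chosen freely by the designer
    tolerance_factors: list = field(default_factory=list)  # control factors that change umc
    noise_factors: list = field(default_factory=list)      # x: cannot be controlled in use

# Power supply circuit of Section 2.5: a static problem (fixed 110-volt target),
# so there is no signal factor to list.
power_supply = FactorClassification(
    signal_factors=[],
    control_factors=["nominal transistor gain", "nominal dividing resistance"],
    tolerance_factors=["tolerance on the transistor gain"],
    noise_factors=["manufacturing variation of the gain",
                   "environmental variation", "drift (deterioration)"],
)
```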
Thus far in this chapter, we have described the basic principles of quality
engineering, including the quadratic loss function, the exploitation of nonlinearity, and
the classification of product or process parameters. All this material creates a
foundation for discussing the optimization of the design of products and processes in the next
section.
Note that strategies 1 and 2 are the extreme strategies a supplier can follow to
remain a preferred supplier. In between, there are infinitely many strategies, and
strategy 3 is an important one among them.
Consider the strategy of minimizing the manufacturing cost while delivering a specified
quality level. The engineering problem of optimizing a product or process design to
reflect this strategy is difficult and fuzzy. First, the relationship between the numerous
parameters and the response is often unknown and must be observed experimentally.
Secondly, during product or process design the precise magnitudes of noise factor
variations and the costs of different grades of materials, components, and tolerances are not
known. For example, during product design, exact manufacturing variations are not
known unless existing processes are to be used. Therefore, writing a single objective
function encompassing all costs is not possible. Considering these difficulties, the
following strategy has an intuitive appeal and consists of three steps: (1) concept design,
(2) parameter design, and (3) tolerance design. These steps are described below.
1. Concept design. In this step, the designer examines a variety of architectures and
technologies for achieving the desired function of the product and selects the
most suitable ones for the product. Selecting an appropriate circuit diagram or a
sequence of manufacturing steps are examples of concept design activity. This is
a highly creative step in which the experience and skill of the designer play an
important role. Usually, only one architecture or technology is selected based on
the judgment of the designer. However, for highly complex products, two or
three promising architectures are selected; each one is developed separately, and,
in the end, the best architecture is adopted. Concept design can play an
important role in reducing the sensitivity to noise factors as well as in reducing the
manufacturing cost. Quality Function Deployment (QFD) and Pugh's concept
selection method are two techniques that can improve the quality and
productivity of the concept design step (see Clausing [CI], Sullivan [S6], Hauser and
Clausing [HI], and Cohen [C4]).
2. Parameter design. In parameter design, we determine the best settings for the
control factors that do not affect manufacturing cost, that is, the settings that
minimize quality loss. Thus, we must minimize the sensitivity of the function of
the product or process to all noise factors and also get the mean function on
target. During parameter design, we assume wide tolerances on the noise factors
and assume that low-grade components and materials would be used; that is, we
fix the manufacturing cost at a low value and, under these conditions, minimize
the sensitivity to noise, thus minimizing the quality loss. If at the end of
parameter design the quality loss is within specifications, we have a design with the
lowest cost and we need not go to the third step. However, in practice the quality loss must be further reduced; therefore, we always have to go to the third step.

3. Tolerance design. In tolerance design, the quality loss is reduced further by controlling the causes of variation where it is economically justifiable: tolerances on the important noise factors are tightened selectively, and higher grade components and materials are specified, so that the reduction in quality loss outweighs the added manufacturing cost.
Robust Design and its associated methodology focus on parameter design. A full
treatment of concept design is beyond the scope of this book. Tolerance design is
discussed briefly in Chapters 8 and 11 with case studies.
2.8 ROLE OF VARIOUS QUALITY CONTROL ACTIVITIES

The goal of this section is to delineate the major quality control activities during the
various stages of the life cycle of a product and to put in perspective the role of the
Robust Design methodology. Once a decision to make a product has been made, the
life cycle of that product has four major stages:
1. Product design
2. Manufacturing process design
3. Manufacturing
4. Customer usage
The quality control activities in each of these stages are listed in Table 2.1. The
quality control activities in product and process design are called off-line quality
control, whereas the quality control activities in manufacturing are called on-line quality
control. For customer usage, quality control activities involve warranty and service.
Product Design
During product design, one can address all three types of noise factors (external, unit-
to-unit variation, and deterioration), thus making it the most important stage for
improving quality and reducing the unit manufacturing cost. Parameter design during
this stage reduces sensitivity to all three types of noise factors and, thus, gives us the
following benefits:
• The benefits that can be derived from making manufacturing process design
robust are also realized through parameter design during product design.
During manufacturing process design, we cannot reduce the effects of either the
external noise factors or the deterioration of the components on the product's performance
in the field. This can be done only through product design and selection of material
and components. However, the unit-to-unit variation can be reduced through process design, because parameter design during process design reduces the sensitivity of the unit-to-unit variation to the various noise factors that affect the manufacturing process.

Table 2.1 Quality control activities during the product realization steps and their ability to reduce the effect of the external, unit-to-unit, and deterioration noise factors. Source: Adapted from G. Taguchi, "Off-line and On-line Quality Control System," International Conference on Quality Control, Tokyo, Japan, 1978.
For some products the deterioration rate may depend on the design of the
manufacturing process. For example, in microelectronics, the amount of impurity has a
direct relationship with the deterioration rate of the integrated circuit and it can be
controlled through process design. For some mechanical parts, the surface finish can
determine the wear-out rate and it too can be controlled by the manufacturing process
design. Therefore, it is often said that the manufacturing process design can play an
important role in controlling the product deterioration. However, it is not the same
thing as making the product's performance insensitive to problems, such as impurities
or surface finish. Reducing the sensitivity of the product's performance can only be
done during product design. In the terminology of Robust Design, we consider the
problems of impurities or surface finish as a part of manufacturing variation (unit-to-
unit variation) around the target.
The benefits of parameter design in process design are:
• The expense and time spent in final inspection and on rejects can be reduced
greatly.
• Raw material can be purchased from many sources and the expense of incoming
material inspection can be reduced.
• Less expensive manufacturing equipment can be used.
• Wider variation in process conditions can be permitted, thus reducing the need
and expense of on-line quality control (process control).
Manufacturing
Screening. Here, the goal is to stop defective units from being shipped. In
certain situations, the manufacturing process simply does not have adequate
capability—that is, even under nominal operating conditions the process produces
a large number of defective products. Then, as the last alternative, all units
produced can still be measured and the defective ones discarded or repaired to
prevent shipping them to customers. In electronic component manufacturing, it
is common to burn-in the components (subject the components to normal or high
stress for a period of time) as a method for screening out the bad components.
Customer Usage
With all the quality control efforts in product design, process design, and
manufacturing, some defective products may still get shipped to the customer. The only way to
prevent further damage to the manufacturer's reputation for quality is to provide field
service and compensate the customer for the loss caused by the defective product.
2.9 SUMMARY
• Quality engineering is concerned with reducing both the quality loss, which is
the cost incurred after the sale of a product, and the unit manufacturing cost
(umc).
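• The quality loss L(y) can be approximated by quadratic functions for the three common cases; restated here for reference, these follow the standard forms developed earlier in this chapter:

  Nominal-the-best:   L(y) = (A0/Δ0²)(y − m)²
  Smaller-the-better: L(y) = (A0/Δ0²) y²
  Larger-the-better:  L(y) = A0 Δ0² (1/y²)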
In the formulas above, Δ0 is the functional limit and A0 is the loss incurred at the functional limit. The target values of the response (or the quality characteristic) for the three cases are m, 0, and ∞, respectively.
• A product response that is observed for the purpose of evaluating the quality loss
or optimizing the product design is called a quality characteristic. The
parameters (also called factors) that influence the quality characteristic can be classified
into three classes:
1. Signal factors are the factors that specify the intended value of the
product's response.
2. Noise factors are the factors that cannot be controlled by the designer.
Factors whose settings are difficult or expensive to control are also called
noise factors. The noise factors themselves can be divided into three broad
classes: (1) external (environmental and load factors), (2) unit-to-unit
variation (manufacturing nonuniformity), and (3) deterioration (wear-out,
process drift).
3. Control factors are the factors that can be specified freely by the designer.
Their settings (or levels) are selected to minimize the sensitivity of the
product's response to all noise factors. Control factors that also affect the
product's cost are called tolerance factors.
• Depending on marketing strategy and corporate policy, a supplier can adopt one
of many optimization strategies for becoming a preferred supplier. Among them,
three noteworthy strategies are: (1) minimize manufacturing cost while
delivering the same quality as the competition, (2) minimize the quality loss while
keeping the manufacturing cost the same as the competition, and (3) minimize
the sum of the quality loss and the manufacturing cost. Regardless of which
optimization strategy is adopted, one must first perform parameter design.
• The life cycle of a product has four major stages: (1) product design,
(2) manufacturing process design, (3) manufacturing, and (4) customer usage.
Quality control activities during product and process design are called off-line
quality control, while those in manufacturing are called on-line quality control.
Warranty and service are the ways for dealing with quality problems during
customer usage.
• A product's sensitivity to all three types of noise factors can be reduced during
product design, thus making product design the most important stage for
improving quality and reducing umc. The next important step is manufacturing process
design through which the unit-to-unit variation (and some aspects of
deterioration) can be reduced along with the umc. During manufacturing, the unit-to-unit
variation can be further reduced, but with less cost effectiveness than during
manufacturing process design.
Chapter 3
MATRIX EXPERIMENTS
USING
ORTHOGONAL ARRAYS
• Section 3.1 describes the matrix experiment and the concept of orthogonality.
• Section 3.2 shows how to analyze the data from matrix experiments to determine
the effects of the various parameters or factors. One of the benefits of using
orthogonal arrays is the simplicity of data analysis. The effects of the various
factors can be determined by computing simple averages, an approach that has an
intuitive appeal. The estimates of the factor effects are then used to determine
the optimum factor settings.
• Section 3.3 presents a model, called additive model, for the factor effects and
demonstrates the validity of using simple averages for estimating the factor
effects.
Consider a project where we are interested in determining the effect of four process
parameters: temperature (A), pressure (B), settling time (C), and cleaning method (D)
on the formation of certain surface defects in a chemical vapor deposition (CVD)
process. Suppose for each parameter three settings are chosen to cover the range of
interest.
The factors and their chosen levels are listed in Table 3.1. The starting levels
(levels before conducting the matrix experiment) for the four factors, identified by an
underscore in the table, are: T0°C temperature, P0 mtorr pressure, t0 minutes of
settling time, and no cleaning. The alternate levels for the factors are as shown in the
table; for example, the two alternate levels of temperature included in the study are
(T0−25)°C and (T0+25)°C. These factor levels define the experimental region or the
region of interest. Our goal for this project is to determine the best setting for each
parameter so that the surface defect formation is minimized.
The matrix experiment selected for this project is given in Table 3.2. It consists
of nine individual experiments corresponding to the nine rows. The four columns of
the matrix represent the four factors as indicated in the table. The entries in the matrix
represent the levels of the factors. Thus, experiment 1 is to be conducted with each
factor at the first level. Referring to Table 3.1, we see that the factor levels for
experiment 1 are (T0−25)°C temperature, (P0−200) mtorr pressure, t0 minutes of settling time, and no cleaning. Similarly, by referring to Tables 3.2 and 3.1, we see that experiment 4 is to be conducted at level 2 of temperature (T0 °C), level 1 of pressure (P0−200 mtorr), level 2 of settling time (t0 + 8 minutes), and level 3 of cleaning method (CM3). The settings of experiment 4 can also be referred to concisely as A2 B1 C2 D3.
Table 3.1  Factors and their levels for the CVD experiment; the starting levels (the levels used before conducting the matrix experiment) are identified by an underscore.
Table 3.2  Matrix experiment: the orthogonal array of nine experiments and the observed S/N ratios

Expt.  Temperature  Pressure  Settling  Cleaning    Observation
No.       (A)         (B)     Time (C)  Method (D)    η_i (dB)
 1         1           1         1          1        η1 = −20
 2         1           2         2          2        η2 = −10
 3         1           3         3          3        η3 = −30
 4         2           1         2          3        η4 = −25
 5         2           2         3          1        η5 = −45
 6         2           3         1          2        η6 = −65
 7         3           1         3          2        η7 = −45
 8         3           2         1          3        η8 = −65
 9         3           3         2          1        η9 = −70
Suppose for each experiment we observe the surface defect count per unit area at three locations each on three silicon wafers (thin disks of silicon used for making VLSI circuits) so that there are nine observations per experiment. We define a summary statistic, η_i, for experiment i by the following formula:

    η_i = −10 log10 (mean square defect count for experiment i)

where the mean square refers to the average of the squares of the nine observations in experiment i. We refer to the η_i calculated using the above formula as the observed η_i. Let the observed η_i for the nine experiments be as shown in Table 3.2. Note that the objective of minimizing surface defects is equivalent to maximizing η. The summary statistic η is called the signal-to-noise (S/N) ratio. The rationale for using η as the objective function is discussed in Chapter 5.
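To make the computation concrete, here is a minimal Python sketch of this summary statistic; the nine defect counts are hypothetical illustration values, not data from the case study.

    import math

    def sn_ratio_defects(observations):
        # eta = -10 log10(mean of the squared observations)
        mean_square = sum(y * y for y in observations) / len(observations)
        return -10.0 * math.log10(mean_square)

    # Nine observations: three positions on each of three test wafers (hypothetical counts).
    counts = [8, 12, 9, 11, 7, 10, 13, 9, 10]
    print(round(sn_ratio_defects(counts), 2))   # about -20 dB for counts of this size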
3.2 ESTIMATION OF FACTOR EFFECTS
Let us now see how to estimate the effects of the four process parameters from the observed values of η for the nine experiments. First, the overall mean value of η for the experimental region defined by the factor levels in Table 3.1 is given by

    m = (1/9)(η1 + η2 + ⋯ + η9).   (3.1)

By examining columns 1, 2, 3, and 4 of the orthogonal array in Table 3.2, observe that all three levels of every factor are equally represented in the nine experiments. Thus, m is a balanced overall mean over the entire experimental region.
The effect of a factor level is defined as the deviation it causes from the overall mean. Let us examine how the experimental data can be used to evaluate the effect of temperature at level A3. Temperature was at level A3 for experiments 7, 8, and 9. The average S/N ratio for these experiments, which is denoted by m_A3, is given by

    m_A3 = (1/3)(η7 + η8 + η9).   (3.2)

Thus, the effect of temperature at level A3 is given by (m_A3 − m). From Table 3.2, observe that for experiments 7, 8, and 9, the pressure level takes values 1, 2, and 3, respectively. Similarly, for these three experiments, the levels of settling time and cleaning method also take values 1, 2, and 3. So the quantity m_A3 represents an average η when the temperature is at level A3 where the averaging is done in a balanced manner over all levels of each of the other three factors.
The average S/N ratio for levels A1 and A2 of temperature, as well as those for the various levels of the other factors, can be obtained in a similar way. Thus, for example,

    m_B1 = (1/3)(η1 + η4 + η7)

is the average S/N ratio for pressure at level B1. Because the matrix experiment is based on an orthogonal array, all the level averages possess the same balancing property described for m_A3.
By taking the numerical values of η listed in Table 3.2, the average η for each level of the four factors can be obtained as listed in Table 3.3. These averages are shown graphically in Figure 3.1. They are the separate effects of each factor and are commonly called main effects. The process of estimating the factor effects discussed above is sometimes called analysis of means (ANOM).
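The averaging procedure is simple to automate. The following Python sketch, a minimal illustration using the array and the observed η values of Table 3.2, computes the overall mean and all the level averages:

    # Levels of factors A, B, C, D for the nine experiments (Table 3.2).
    L9 = [
        (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
        (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
        (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
    ]
    eta = [-20, -10, -30, -25, -45, -65, -45, -65, -70]   # observed S/N ratios (dB)

    m = sum(eta) / len(eta)                  # overall mean, about -41.67 dB
    for f, name in enumerate("ABCD"):
        for level in (1, 2, 3):
            vals = [eta[i] for i, row in enumerate(L9) if row[f] == level]
            print(name, level, sum(vals) / len(vals))    # e.g. A 3 -60.0

It reproduces the entries of Table 3.3, for example m_A3 = −60 dB and m_B1 = −30 dB.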
Figure 3.1  Plots of factor effects: the average η (dB) for each level of factors A, B, C, and D. Underscore indicates starting level. Two-standard-deviation confidence limits are also shown for the middle level.
Table 3.3  Average η (dB) by factor level

Factor                 Level 1   Level 2   Level 3
A. Temperature           −20       −45       −60
B. Pressure              −30       −40       −55
C. Settling time         −50       −35       −40
D. Cleaning method       −45       −40       −40
The predicted best settings need not correspond to one of the rows in the matrix
experiment. In fact, often they do not correspond as is the case in the present example.
Also, typically, the value of η realized for the predicted best settings is better than the best among the rows of the matrix experiment.
3.3 ADDITIVE MODEL FOR FACTOR EFFECTS
In the preceding section, we used simple averaging to estimate factor effects. The same nine observations (η1, η2, … , η9) are grouped differently to estimate the factor effects. Also, the optimum combination of settings was determined by examining the effect of each factor separately. Justification for this simple procedure comes from the following additive model for the factor effects:

    η(Ai, Bj, Ck, Dl) = μ + ai + bj + ck + dl + e.   (3.5)

In the above equation, μ is the overall mean—that is, the mean value of η for the experimental region; the deviation from μ caused by setting factor A at level Ai is ai; the terms bj, ck, and dl represent similar deviations from μ caused by the settings Bj, Ck, and Dl of factors B, C, and D, respectively; and e stands for the error. Note that by error we imply the error of the additive approximation plus the error in the repeatability of measuring η for a given experiment.
An additive model is also referred to as a superposition model or a variables
separable model in engineering literature. Note that superposition model implies that
the total effect of several factors (also called variables) is equal to the sum of the
individual factor effects. It is possible for the individual factor effects to be linear,
quadratic, or of higher order. However, in an additive model cross product terms
involving two or more factors are not allowed.
By definition a1, a2, and a3 are the deviations from μ caused by the three levels of factor A. Thus,

    a1 + a2 + a3 = 0 .   (3.6)

Similarly,

    b1 + b2 + b3 = 0
    c1 + c2 + c3 = 0    (3.7)
    d1 + d2 + d3 = 0 .
It can be shown that the averaging procedure of Section 3.2 for estimating the factor
effects is equivalent to fitting the additive model, defined by Equations (3.5), (3.6), and
(3.7), by the least squares method. This is a consequence of using an orthogonal array
to plan the matrix experiment.
Now, consider Equation (3.2) for the estimation of the effect of setting temperature at level 3:

    m_A3 = (1/3)(η7 + η8 + η9)
         = (1/3)[(μ + a3 + b1 + c3 + d2 + e7) + (μ + a3 + b2 + c1 + d3 + e8) + (μ + a3 + b3 + c2 + d1 + e9)]
         = μ + a3 + (1/3)(e7 + e8 + e9).   (3.8)

Note that the terms corresponding to the effects of factors B, C, and D drop out because of Equation (3.7). Thus, m_A3 is an estimate of (μ + a3).
Furthermore, the error term in Equation (3.8) is an average of three error terms. Suppose σ_e² is the average variance for the error terms e1, e2, … , e9. Then the error variance for the estimate m_A3 is approximately (1/3)σ_e². (Note that in computing the error variances of the estimate m_A3 and other estimates in this chapter, we treat the individual error terms as independent random variables with zero mean and variance σ_e². In reality, this is only an approximation because the error terms include the error of the additive approximation so that the error terms are not strictly independent random variables with zero mean. This approximation is adequate because the error variance is used for only qualitative purposes.) This represents a 3-fold reduction in error variance compared to conducting a single experiment at the setting A3 of factor A.
The term replication number is used to refer to the number of times a particular
factor level is repeated in an orthogonal array. The error variance of the average effect
for a particular factor level is smaller than the error variance of a single experiment by
a factor equal to its replication number. To obtain the same accuracy of the factor
level averages, we would need a much larger number of experiments if we were to use
the traditional approach of studying one factor at a time. For example, we would have to conduct 3 × 3 = 9 experiments to estimate the average η for three levels of temperature alone (three repetitions each for the three levels), while keeping the other factors fixed at certain levels, say, B1, C1, D1.
We may then fix temperature at its best setting and experiment with levels B2 and B3 of pressure. This would need 3 × 2 = 6 additional experiments. Continuing in this manner, we can study the effects of factors C and D by performing 2 × 6 = 12 additional experiments. Thus, we would need a total of 9 + 3 × 6 = 27 experiments to
study the four factors, one at a time. Compare this to only nine experiments needed
for the orthogonal array based matrix experiment to obtain the same accuracy of the
factor level averages.
Different factors affect the surface defect formation to different degrees. The relative magnitude of the factor effects could be judged from Table 3.3, which gives the average η for each factor level. A better feel for the relative effect of the different factors can be obtained by the decomposition of variance, which is commonly called analysis of variance (ANOVA). ANOVA is also needed for estimating the error variance for the factor effects and the variance of the prediction error.
• The columns in the matrix experiment are orthogonal, which is analogous to the
orthogonality of the different harmonics.
The analogy between the Fourier analysis of the power of an electrical signal and
ANOVA is displayed in Figure 3.2. The experiments are arranged along the horizontal
axis like time. The overall mean is plotted as a straight line like a dc component. The effect of each factor is displayed as a harmonic. The level of factor A for experiments 1, 2, and 3 is A1. So, the height of the wave for A is plotted as m_A1 for these experiments. Similarly, the height of the wave for experiments 4, 5, and 6 is m_A2, and the height for experiments 7, 8, and 9 is m_A3. The waves for the other factors are also plotted similarly. By virtue of the additive model [Equation (3.5)], the observed η for
any experiment is equal to the sum of the height of the overall mean and the deviation
from mean caused by the levels of the four factors. By referring to the waves of the
different factors shown in Figure 3.2 it is clear that factors A, B, C, and D are in the
decreasing order of importance. Further aspects of the analogy are discussed in the rest
of this section.
Figure 3.2  Analogy with Fourier analysis: the overall mean and the effects of factors A, B, C, and D plotted against the experiment number (1 through 9).
The sum of the squared values of η is called the grand total sum of squares. Thus, we have

    Grand total sum of squares = η1² + η2² + ⋯ + η9² = 19,425 (dB)² .

The grand total sum of squares is analogous to the total signal power in Fourier analysis. It can be decomposed into two parts—the sum of squares due to mean and the total sum of squares—which are defined as follows:

    Sum of squares due to mean = 9 m² = 9 (−41.67)² = 15,625 (dB)²

    Total sum of squares = (η1 − m)² + (η2 − m)² + ⋯ + (η9 − m)²
                         = (−20 + 41.67)² + (−10 + 41.67)² + ⋯ + (−70 + 41.67)²
                         = 3,800 (dB)² .
The sum of squares due to mean is analogous to the dc power of the signal and the
total sum of squares is analogous to the ac power of the signal in Fourier analysis.
Because m is the average of the nine η_i values, we have the following algebraic identity:

    (η1 − m)² + ⋯ + (η9 − m)² = (η1² + ⋯ + η9²) − 9 m² .
The above equation is analogous to the fact from Fourier analysis that the ac power is
equal to the difference between the total power and the dc power of the signal.
The sum of squares due to factor A is equal to the total squared deviation of the wave for factor A from the line representing the overall mean. There are three experiments each at levels A1, A2, and A3. Consequently,

    Sum of squares due to A = 3 (m_A1 − m)² + 3 (m_A2 − m)² + 3 (m_A3 − m)²
                            = 3 (−20 + 41.67)² + 3 (−45 + 41.67)² + 3 (−60 + 41.67)²
                            = 2,450 (dB)² .
Proceeding along the same lines, we can show that the sums of squares due to factors B, C, and D are, respectively, 950, 350, and 50 (dB)². The sums of squares due to the various factors are tabulated in Table 3.4. These sums of squares values are analogous to the power in the various harmonics, and are a measure of the relative importance of the factors in changing the values of η.
Thus, factor A explains a major portion of the total variation of η. In fact, it is responsible for (2450/3800) × 100 = 64.5 percent of the variation of η. Factor B is responsible for the next largest portion, namely 25 percent; and factors C and D together are responsible for only a small portion, a total of 10.5 percent, of the variation in η.
Knowing the factor effects (that is, knowing the values of ai, bj, ck, and dl), we can use the additive model given by Equation (3.5) to calculate the error term e_i for each experiment i. The sum of squares due to error is the sum of the squares of the error terms. Thus we have

    Sum of squares due to error = e1² + e2² + ⋯ + e9² .
In the present case study, the total number of model parameters (μ, a1, a2, a3, b1, b2, etc.) is 13; the number of constraints, defined by Equations (3.6) and (3.7), is 4. The number of model parameters minus the number of constraints is equal to the number of experiments. Hence, the error term is identically zero for each experiment. Hence, the sum of squares due to error is also zero. Note that this need not be the situation with all matrix experiments.
Table 3.4  ANOVA for the CVD matrix experiment (the sums of squares of factors C and D, shown in parentheses, are pooled to estimate the error variance)

Factor               Degrees of   Sum of    Mean
                     Freedom      Squares   Square      F
A. Temperature           2          2450      1225     12.25
B. Pressure              2           950       475      4.75
C. Settling time         2          (350)      175       —
D. Cleaning method       2           (50)       25       —
Error                    0             0        —        —
Total                    8          3800
(Pooled error)          (4)         (400)      100
Equation (3.9) is analogous to Parseval's equation for the decomposition of the power
of a signal into power in different harmonics. Equation (3.9) is often used for
calculating the sum of squares due to error after computing the total sum of squares and the
sum of squares due to various factors. Derivation of Equation (3.9) as well as detailed
mathematical description of ANOVA can be found in many books on statistics, such as
Scheffe [S1], Rao [R3], and Searle [S2].
For the matrix experiment described in this chapter, Equation (3.9) implies:

    (Total sum of squares) = (sum of squares due to A) + (sum of squares due to B)
                             + (sum of squares due to C) + (sum of squares due to D)
                             + (sum of squares due to error),

that is, 3,800 = 2,450 + 950 + 350 + 50 + 0. Note that the various sums of squares tabulated in Table 3.4 do satisfy the above equation.
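The same data can be carried through the ANOVA decomposition in a few lines; the Python sketch below (again using the η values and array of Table 3.2) computes the quantities appearing in Table 3.4 and in the equations above:

    L9 = [
        (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
        (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
        (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
    ]
    eta = [-20, -10, -30, -25, -45, -65, -45, -65, -70]
    n = len(eta)
    m = sum(eta) / n

    grand_total_ss = sum(y * y for y in eta)              # 19,425 (dB)^2
    ss_mean = n * m * m                                    # 15,625 (dB)^2
    total_ss = sum((y - m) ** 2 for y in eta)              # 3,800 (dB)^2

    factor_ss = {}
    for f, name in enumerate("ABCD"):
        ss = 0.0
        for level in (1, 2, 3):
            vals = [eta[i] for i, row in enumerate(L9) if row[f] == level]
            ss += len(vals) * (sum(vals) / len(vals) - m) ** 2
        factor_ss[name] = ss                               # A: 2450, B: 950, C: 350, D: 50

    error_ss = total_ss - sum(factor_ss.values())          # essentially zero here
    print(grand_total_ss, ss_mean, total_ss, factor_ss, round(error_ss, 6))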
Degrees of Freedom
Factor A has three levels, so its effect can be characterized by three parameters: a1, a2, and a3. But these parameters must satisfy the constraint given by Equation (3.6). Thus, effectively, factor A has only two independent parameters and, hence, two degrees of freedom. Similarly, factors B, C, and D have two degrees of freedom each. In general, the degrees of freedom associated with a factor is one less than the number of levels.
The total sum of squares has eight degrees of freedom (one less than the number of experiments), and these degrees of freedom can be decomposed just like the sums of squares:

    (degrees of freedom for total sum of squares) = (sum of the degrees of freedom of the factors) + (degrees of freedom for error).   (3.10)

Note the similarity between Equations (3.9) and (3.10). Equation (3.10) is useful for computing the degrees of freedom for error. In the present case study, the degrees of freedom for error come out to be zero. This is consistent with the earlier observation that the error term is identically zero for each experiment in this case study.
It is customary to write the analysis of variance in the tabular form shown in Table 3.4. The mean square for a factor is computed by dividing its sum of squares by its degrees of freedom.
The error variance, which is equal to the error mean square, can then be estimated as follows:

    error variance = (pooled sum of squares) / (pooled degrees of freedom) = (350 + 50)/(2 + 2) = 100 (dB)² .
The estimation of the error variance by pooling will be further illustrated through
the applications discussed in the subsequent chapters. As it will be apparent from
these applications, deciding which factors' sum of squares should be included in the
error variance is usually obvious by inspecting the mean square column. The decision
process can sometimes be improved by using a graphical data analysis technique called
half-normal plots (see Daniel [D1] and Box, Hunter, and Hunter [B3]).
Confidence intervals for factor effects are useful in judging the size of the change caused by changing a factor level compared to the error standard deviation. As shown in Section 3.3, the variance of the effect of each factor level for this example is (1/3)σ_e² = (1/3)(100) = 33.3 (dB)². Thus, the width of the two-standard-deviation confidence interval, which is approximately a 95 percent confidence interval, for each estimated effect is ±2√33.3 = ±11.5 dB. In Figure 3.1 these confidence intervals are plotted for only the starting level to avoid crowding.
Variance Ratio
The variance ratio, denoted by F in Table 3.4, is the ratio of the mean square due to a factor and the error mean square. A large value of F means the effect of that factor is large compared to the error variance. Also, the larger the value of F, the more important that factor is in influencing the process response η. So, the values of F can be used to rank order the factors.
Referring to the sum of squares column in Table 3.4, notice that factor A makes the largest contribution to the total sum of squares, namely, (2450/3800) × 100 = 64.5 percent. Factor B makes the next largest contribution, (950/3800) × 100 = 25.0 percent, to the total sum of squares. Factors C and D together make only a 10.5 percent contribution to the total sum of squares. The larger the contribution of a particular factor to the total sum of squares, the larger the ability of that factor to influence η.
In this matrix experiment, we have used all the degrees of freedom for estimating
the factor effects (four factors with two degrees of freedom each make up all the eight
degrees of freedom for the total sum of squares). Thus, there are no degrees of
freedom left for estimating the error variance. Following the rule of thumb spelled out earlier in this section, we use the bottom-half factors—those with the smallest mean squares—to estimate the error variance. Thus, we obtain the error sum of squares, indicated by parentheses in the ANOVA table, by pooling the sums of squares due to factors C and D. This gives 100 as an estimate of the error variance.
The largeness of a factor effect relative to the error variance can be judged from
the F column. The larger the F value, the larger the factor effect is compared to the
error variance.
This section points out that our purpose in conducting ANOVA is to determine the relative magnitude of the effect of each factor on the objective function η and to estimate the error variance. We do not attempt to make any probability statements
about the significance of a factor as is commonly done in statistics. In Robust Design,
ANOVA is also used to choose from among many alternatives the most appropriate
quality characteristic and S/N ratio for a specific problem. Such an application of
ANOVA is described in Chapter 8. Also, ANOVA is useful in computing the S/N
ratio for dynamic problems as described in Chapter 9.
Using the additive model, the S/N ratio under the optimum conditions (temperature at level A1 and pressure at level B1, the levels with the highest average η) is predicted as

    η_opt = m + (m_A1 − m) + (m_B1 − m)
          = −41.67 + (−20 + 41.67) + (−30 + 41.67)
          = −8.33 dB.   (3.12)
Note that since the sums of squares due to factors C and D are small and these terms are included as error, we do not include the corresponding improvements in the prediction of η under optimum conditions. Why are the contributions by factors having a small sum of squares ignored? Because if we include the contributions from all factors, it can be shown that the predicted improvement in η exceeds the actual
realized improvement—that is, our prediction would be biased on the higher side. By
ignoring the contribution from factors with small sums of squares, we can reduce this
bias. Again, this is a rule of thumb. For more precise prediction, we need to use
appropriate shrinkage coefficients described by Taguchi [T1].
Thus, by Equation (3.12) we predict that the defect count under the optimum conditions would be −8.33 dB. This is equivalent to a mean square count of

    (mean square defect count) = 10^(−η_opt/10) = 10^0.833 = 6.8 (defects/unit area)² .
Similarly, the predicted S/N ratio under the starting conditions (levels A2 and B2) is m + (m_A2 − m) + (m_B2 − m) = −43.33 dB, so the predicted improvement from moving to the optimum conditions is

    η_opt − η_starting = −8.33 − (−43.33) = 35 dB.   (3.13)

Once again we do not include the terms corresponding to factors C and D for the reasons explained earlier.
We need to determine the variance of the prediction error so that we can judge the closeness of the observed η_opt to the predicted η_opt. The prediction error, which is the difference between the observed η_opt and the predicted η_opt, has two independent components. The first component is the error in the prediction of η_opt caused by the errors in the estimates of m, m_A1, and m_B1. The second component is the repetition error of an experiment. Because these two components are independent, the variance of the prediction error is the sum of their respective variances.
Consider the first component. Its variance can be shown to be equal to (1/n0)σ_e², where σ_e² is the error variance whose estimation was discussed earlier and n0 is the equivalent sample size for the estimation of η_opt. The equivalent sample size n0 can be computed as follows:

    1/n0 = 1/n + (1/n_A1 − 1/n) + (1/n_B1 − 1/n)   (3.14)

where n is the number of rows in the matrix experiment and n_A1 is the number of times level A1 was repeated in the matrix experiment—that is, n_A1 is the replication number for factor level A1 and n_B1 is the replication number for factor level B1.

Observe the correspondence between Equations (3.14) and (3.12). The term (1/n) in Equation (3.14) corresponds to the term m in the prediction Equation (3.12); and the terms (1/n_A1 − 1/n) and (1/n_B1 − 1/n) correspond, respectively, to the terms (m_A1 − m) and (m_B1 − m). This correspondence can be used to generalize Equation (3.14) to other prediction formulae.
Now, consider the second component. Suppose we repeat the verification experiment nr times under the optimum conditions and call the average η for these experiments the observed η_opt. The repetition error is given by (1/nr)σ_e². Thus, the variance of the prediction error, σ_pred², is

    σ_pred² = (1/n0) σ_e² + (1/nr) σ_e² .   (3.15)
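As a worked check of Equations (3.14) and (3.15) for this example, assuming the verification experiment is repeated nr = 4 times (a value inferred from the confidence limits quoted next, not stated explicitly here): 1/n0 = 1/9 + (1/3 − 1/9) + (1/3 − 1/9) = 5/9, so n0 = 1.8; then σ_pred² = (1/1.8 + 1/4)(100) ≈ 80.6 (dB)², and 2σ_pred ≈ 17.96 dB.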
The corresponding two-standard-deviation confidence limits for the prediction error are
±17.96 dB. If the prediction error is outside these limits, we should suspect the
possibility that the additive model is not adequate. Otherwise, we consider the additive
model to be adequate.
It is obvious from Equation (3.15) that the variance of the prediction error, σ_pred², is the same for all combinations of the factor levels in the experimental region. It does not matter whether the particular combination does or does not correspond to one of the
rows in the matrix experiment. Before conducting the matrix experiment we do not
know what would be the optimum combination. Hence, it is important to have the
property of uniform prediction error.
When interactions between two or more factors are present, we need cross product terms to describe the variation of η in terms of the control factors. A model for such a situation needs more parameters than an additive model and, hence, it needs
more experiments to estimate all the parameters. Further, as discussed in Chapter 6,
using a model with interactions can have problems in the field. Thus, we consider the
presence of interactions to be highly undesirable and try to eliminate them.
When the quality characteristic is correctly chosen, the S/N ratio is properly constructed, and the control factors are judiciously chosen (see Chapter 6 for guidelines), the additive model provides an excellent approximation to the relationship between η and
the control factors. The primary purpose of the verification experiment is to warn us
when the additive model is not adequate and, thus, prevent faulty process and product
designs from going downstream. Some applications call for a broader assurance of the
additive model. In such cases, the verification experiment consists of two or more
conditions rather than just the optimum conditions. For the additive model to be
considered adequate, the predictions must match the observation under all conditions that
are tested. Also, in certain situations, we can judge from engineering knowledge that
particular interactions are likely to be important. Then, orthogonal arrays can be
suitably constructed to estimate those interactions along with the main effects, as described
in Chapter 7.
3.6 SUMMARY
• Matrix experiments are also called designed experiments, parameters are also
called factors, and parameter settings are also called levels.
• Conducting matrix experiments using orthogonal arrays is an important technique
in Robust Design. It gives more reliable estimates of factor effects with fewer
experiments when compared to the traditional methods, such as one factor at a
time experiments. Consequently, more factors can be studied in given R&D
resources, leading to more robust and less expensive products.
• The columns of an orthogonal array are pairwise orthogonal—that is, for every
pair of columns, all combinations of factor levels occur an equal number of
times. The columns of the orthogonal array represent factors to be studied and
the rows represent individual experiments.
• Some important terms used in matrix experiments are: The region formed by the
factors being studied and their alternate levels is called the experimental region.
The starting levels of the factors are the levels used before conducting the matrix
experiment. The main effects of the factors are their separate effects. If the
effect of a factor depends on the level of another factor, then the two factors are
said to have an interaction. Otherwise, they are considered to have no
interaction. The replication number of a factor level is the number of experiments in
the matrix experiment that are conducted at that factor level. The effect of a
factor level is the deviation it causes from the overall mean response. The optimum
level of a factor is the level that gives the highest S/N ratio.
• An additive model (also called superposition model or variables separable
model) is used to approximate the relationship between the response variable and
the factor levels. Interactions are considered errors in the additive model.
• Orthogonal array based matrix experiments are used for a variety of purposes in
Robust Design. They are used to:
— Study the effects of control factors
— Study the effects of noise factors
— Evaluate the S/N ratio
3. Perform ANOVA to evaluate the relative importance of the factors and the
error variance.
4. Determine the optimum level for each factor and predict the S/N ratio for
the optimum combination.
Chapter 4
STEPS IN ROBUST DESIGN

1) Identify the main function, side effects, and failure modes.
2) Identify noise factors and the testing conditions for evaluating the quality loss.
3) Identify the quality characteristic to be observed and the objective function to be optimized.
4) Identify the control factors and their alternate levels.
5) Design the matrix experiment and define the data analysis procedure.
6) Conduct the matrix experiment.
7) Analyze the data, determine optimum levels for the control factors, and predict performance under these levels.
8) Conduct the verification experiment and plan future actions.
These eight steps make up a Robust Design cycle. We will illustrate them in
this chapter by using a case study of improving a polysilicon deposition process. The
case study was conducted by Peter Hey in 1984 as a class project for the first offering
of the 3-day Robust Design course developed by the author, Madhav Phadke, and Chris
Sherrerd, Paul Sherry, and Rajiv Keny of AT&T Bell Laboratories. Hey and Sherry
jointly planned the experiment and analyzed the data. The experiment yielded a 4-fold
reduction in the standard deviation of the thickness of the polysilicon layer and nearly
two orders of magnitude reduction in surface defects, a major yield-limiting problem
which was virtually eliminated. These results were achieved by studying the effects of
six control factors by conducting experiments under 18 distinct combinations of the
levels of these factors—a rather small investment for huge benefits in quality and yield.
• Sections 4.1 through 4.8 describe in detail the polysilicon deposition process
case study in terms of the eight steps that form a Robust Design cycle.
• Section 4.9 summarizes the important points of this chapter.
Manufacturing very large scale integrated (VLSI) circuits involves about 150 major steps. Deposition of polysilicon comes after about half of the steps are complete, and, as a result, the silicon wafers (thin disks of silicon) used in the process have a significant amount of value added by the time they reach this step. The polysilicon layer is very important for defining the gate electrodes for the transistors. There are over 250,000 transistors in a square centimeter of chip area for the 1.75 micron (micrometer = micron) design rules used in the case study.
A hot-wall, reduced-pressure reactor (see Figure 4.1) is used to deposit
polysilicon on a wafer. The reactor consists of a quartz tube which is heated by a 3-zone
furnace. Silane and nitrogen gases are introduced at one end and pumped out the other.
The silane gas pyrolizes, and a polysilicon layer is deposited on top of the oxide layer
on the wafers. The wafers are mounted on quartz carriers. Two carriers, each carrying
25 wafers, can be placed inside the reactor at a time so that polysilicon is deposited
simultaneously on 50 wafers.
Figure 4.1  Schematic of the hot-wall, reduced-pressure reactor, showing the pressure sensor, the wafers, and the loading door.
The main function of the process is to deposit a polysilicon layer of a specified thickness. In the case study, the experimenters were interested in achieving a 3600 angstrom (Å) thickness (1 Å = 10⁻¹⁰ meter). Figure 4.2 shows a cross section of the wafer after the deposition of the polysilicon layer.
Figure 4.2  Cross section of the wafer after deposition, showing the deposited P-doped polysilicon layer (3600 Å), the interlevel dielectric SiO₂ (2300 Å), a lower P-doped polysilicon layer (2700 Å), and the Si substrate.
At the start of the study, two main problems occurred during the deposition
process: (1) too many surface defects (see Figure 4.3) were encountered, and (2) too large
70
Steps in Robust Design Chap. 4
a thickness variation existed within wafers and among wafers. In a subsequent VLSI
manufacturing step, the polysilicon layer is patterned by an etching process to form
lines of appropriate width and length. Presence of surface defects causes these lines to
have variable width, which degrades the performance of the integrated circuits. The nonuniform thickness is detrimental to the etching process because it can lead to
residual polysilicon in some areas and an etching away of the underlying oxide layer in other areas.
Prior to the case study, Hey noted that the surface-defect problem was crucial because a significant percentage of wafers were scrapped due to excessive defects. Also, he observed that controlling defect formation was particularly difficult due to its intermittent occurrence; for example, some batches of wafers (50 wafers make one batch) had approximately ten defects per unit area, while other batches had as many as 5,000 defects per unit area. Furthermore, no theoretical models existed to predict defect formation as a function of the various process parameters; therefore, experimentation was the only way to control the surface-defect problem.
The testing conditions for this case study are rather simple: observe thickness
and surface defects at three positions of three wafers, which are placed in specific
positions along the length of the reactor. Sometimes orthogonal arrays (called noise
orthogonal arrays) are used to determine the testing conditions that capture the effect
of many noise factors. In some other situations, the technique of compound noise
factor is used. These two techniques of constructing testing conditions are described in
Chapter 8.
It is often tempting to observe the percentage of units that meet the specification and
use that percentage directly as an objective function to be optimized. But, such
temptation should be meticulously avoided. Besides being a poor measure of quality loss,
using percentage of good (or bad) wafers as an objective function leads to orders of
magnitude reduction in efficiency of experimentation. First, to observe accurately the
percentage of "good" wafers, we need a large number (much larger than three) of test
wafers for each combination of control factor settings. Secondly, when the percentage
of good wafers is used as an objective function, the interactions among control factors
often become dominant; consequently, additive models cannot be used as adequate
approximations. The appropriate quality characteristics to be measured for the polysilicon deposition process in the case study were the polysilicon thickness and the surface defect count. The specifications were that the thickness should be within ±8 percent of the target thickness and that the surface defect count should not exceed 10 per square centimeter.
    η = −10 log10 [ (1/9) Σ_i Σ_j y_ij² ]   (4.1)

where y_ij is the observed surface defect count at position j on test wafer i. Note that j = 1, 2, and 3 stand for the top, center, and bottom positions, respectively, on a test wafer, and i = 1, 2, and 3 refer to position numbers 3, 23, and 48, respectively, along the length of the tube. Maximizing η leads to minimization of the quality loss due to surface defects.
The target value in the study for the thickness of the polysilicon layer was x0 = 3600 Å. Let x_ij be the observed thickness at position j on test wafer i. The mean and variance of the thickness are given by

    μ = (1/9) Σ_i Σ_j x_ij   (4.2)

    σ² = (1/8) Σ_i Σ_j (x_ij − μ)² .   (4.3)
The goal in optimization for thickness is to minimize variance while keeping the
mean on target. This is a constrained optimization problem, which can be very
difficult, especially when many control factors exist. However, as Chapter 5 shows,
when a scaling factor (a factor that increases the thickness proportionally at all points
on the wafers) exists, the problem can be simplified greatly.
In the case study, the deposition time was a clear scaling factor—that is, for
every surface area where polysilicon was deposited, (thickness) = (deposition rate) x
(deposition time). The deposition rate may vary from one wafer to the next, or from
one position on a wafer to another position, due to the various noise factors cited in the
previous section. However, the thickness at any point is proportional to the deposition
time.
Thus, the constrained optimization problem in the case study can be solved in two steps as follows:

1. Maximize the signal-to-noise (S/N) ratio

       η′ = 10 log10 (μ²/σ²) .   (4.4)

2. Adjust the deposition time so that the mean thickness is on target.
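The two-step idea can be sketched in a few lines of Python; the thickness numbers below are illustrative only, not case-study data:

    import math

    def sn_ratio_thickness(mean, std):
        # eta-prime = 10 log10(mean^2 / sigma^2); it measures spread relative to the mean.
        return 10.0 * math.log10(mean ** 2 / std ** 2)

    mean_thickness = 3200.0    # angstroms, observed for one candidate setting
    std_thickness = 40.0       # angstroms
    target = 3600.0            # angstroms

    eta_prime = sn_ratio_thickness(mean_thickness, std_thickness)  # step 1: maximize over settings
    time_scale = target / mean_thickness                           # step 2: scale deposition time
    print(round(eta_prime, 2), round(time_scale, 3))               # 38.06 dB, scale by 1.125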
In the case study, six control factors were selected for optimization. These
factors and their alternate levels are listed in Table 4.1. The deposition temperature (A) is
the steady state temperature at which the deposition takes place. When the wafers are
placed in the reactor, they first have to be heated from room temperature to the
deposition temperature and then held at that temperature. The deposition pressure (B) is the
constant pressure maintained inside the reactor through appropriate pump speed and
butterfly adjustment. The nitrogen flow (C) and the silane flow (D) are adjusted using
the corresponding flow meters on gas tanks. Settling time (E) is the time between
placing the wafer carriers in the reactors and the time at which gases flow. The
settling time is important for establishing thermal and pressure equilibrium inside the
reactor before the reaction is allowed to start. Cleaning method (F) refers to cleaning the wafers prior to the deposition step. Before undertaking the case study experiment, the practice was to perform no cleaning. The alternate two cleaning methods the experimenters wanted to study were CM2, performed inside the reactor, and CM3, performed outside the reactor.
Table 4.1  Control factors and their alternate levels for the polysilicon deposition experiment; the starting levels are identified by an underscore.
Thus, it is important to resist the tendency to choose control factor levels that are
rather close. Of course, during subsequent refinement experiments, levels closer to
each other could be chosen. In the polysilicon deposition case study, the ratio of the
largest to the smallest levels of factors B, C, D, and E was between three and five, which represents a wide variation. Temperature variation from (T0−25) °C to (T0+25) °C also represents a wide range in terms of the known impact on the deposition rate.
The initial settings of the six control factors are indicated by an underscore in Table 4.1. The objective of this project was to determine the optimum level for each factor so that η and η′ are improved, while ensuring simultaneously that the deposition rate, r, remained as high as possible. Note that the six control factors and their selected settings define the experimental region over which process optimization was done.
An efficient way to study the effect of several control factors simultaneously is to plan
matrix experiments using orthogonal arrays. As pointed out in Chapter 3, orthogonal
arrays offer many benefits. First, the conclusions arrived at from such experiments are
valid over the entire experimental region spanned by the control factors and their
settings. Second, there is a large saving in the experimental effort. Third, the data
analysis is very easy. Finally, it can detect departure from the additive model.
An orthogonal array for a particular Robust Design project can be constructed
from the knowledge of the number of control factors, their levels, and the desire to
study specific interactions. While constructing the orthogonal array, we also take into
account the difficulties in changing the levels of control factors, other physical
limitations in conducting experiments, and the availability of resources. In the polysilicon
deposition case study, there were six factors, each at three levels. The experimenters
found no particular reason to study specific interactions and no unusual difficulty in
changing the levels of any factor. The available resources for conducting the
experiments were such that about 20 batches could be processed and appropriate
measurements made. Using the standard methods of constructing orthogonal arrays, which are
described in Chapter 7, the standard array L18 was selected for this matrix experiment.
The L18 orthogonal array is given in Table 4.2. It has eight columns and
eighteen rows. The first column is a 2-level column—that is, it has only two distinct
entries, namely 1 or 2. All the chosen six control factors have three levels. So,
column 1 was kept empty or unassigned. From the remaining seven 3-level columns,
column 7 was arbitrarily designated as an empty column, and factors A through F were
assigned, respectively, to columns 2 through 6 and 8. (Note that keeping one or more
columns empty does not alter the orthogonality property of the array. Thus, the matrix
formed by columns 2 through 6 and 8 is still an orthogonal array. But, if one or more
rows are dropped, the orthogonality is destroyed.) The reader can verify the
orthogonality by checking that for every pair of columns all combinations of levels occur,
and they occur an equal number of times.
The 18 rows of the L18 array represent the 18 experiments to be conducted.
Thus, experiment 1 is to be conducted at level 1 for each of the six control factors.
These levels can be read from Table 4.1. However, to make it convenient for the
experimenter and to prevent translation errors, the entire matrix of Table 4.2 should be
translated using the level definitions in Table 4.1 to create the experimenter's log sheet
shown in Table 4.3.
Table 4.2  The L18 orthogonal array and the column assignment of the factors (e denotes an empty column)

Expt.   Column:   1    2    3    4    5    6    7    8
No.     Factor:   e    A    B    C    D    E    e    F
  1               1    1    1    1    1    1    1    1
  2               1    1    2    2    2    2    2    2
  3               1    1    3    3    3    3    3    3
  4               1    2    1    1    2    2    3    3
  5               1    2    2    2    3    3    1    1
  6               1    2    3    3    1    1    2    2
  7               1    3    1    2    1    3    2    3
  8               1    3    2    3    2    1    3    1
  9               1    3    3    1    3    2    1    2
 10               2    1    1    3    3    2    2    1
 11               2    1    2    1    1    3    3    2
 12               2    1    3    2    2    1    1    3
 13               2    2    1    2    3    1    3    2
 14               2    2    2    3    1    2    1    3
 15               2    2    3    1    2    3    2    1
 16               2    3    1    3    2    3    1    2
 17               2    3    2    1    3    1    2    3
 18               2    3    3    2    1    2    3    1
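The balance property mentioned above is easy to check mechanically; the following Python sketch, using the L18 array of Table 4.2, counts the level combinations for every pair of columns:

    from itertools import combinations
    from collections import Counter

    L18 = [
        (1, 1, 1, 1, 1, 1, 1, 1), (1, 1, 2, 2, 2, 2, 2, 2), (1, 1, 3, 3, 3, 3, 3, 3),
        (1, 2, 1, 1, 2, 2, 3, 3), (1, 2, 2, 2, 3, 3, 1, 1), (1, 2, 3, 3, 1, 1, 2, 2),
        (1, 3, 1, 2, 1, 3, 2, 3), (1, 3, 2, 3, 2, 1, 3, 1), (1, 3, 3, 1, 3, 2, 1, 2),
        (2, 1, 1, 3, 3, 2, 2, 1), (2, 1, 2, 1, 1, 3, 3, 2), (2, 1, 3, 2, 2, 1, 1, 3),
        (2, 2, 1, 2, 3, 1, 3, 2), (2, 2, 2, 3, 1, 2, 1, 3), (2, 2, 3, 1, 2, 3, 2, 1),
        (2, 3, 1, 3, 2, 3, 1, 2), (2, 3, 2, 1, 3, 1, 2, 3), (2, 3, 3, 2, 1, 2, 3, 1),
    ]

    # For every pair of columns, each combination of levels must occur equally often.
    for c1, c2 in combinations(range(8), 2):
        counts = Counter((row[c1], row[c2]) for row in L18)
        assert len(set(counts.values())) == 1, (c1, c2)
    print("all column pairs are balanced")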
Row 5 of the experimenter's log sheet (Table 4.3), for example, reads: temperature T0 °C, pressure P0 mtorr, nitrogen flow N0−150 sccm, silane flow S0 sccm, settling time t0 + 16 minutes, and no cleaning.
Now we combine the experimenter's log sheet with the testing conditions
described in Section 4.2 to create the following experimental procedure:
1. Conduct 18 experiments as specified by the 18 rows of Table 4.3.
2. For each experiment, process one batch, consisting of 47 dummy wafers and
three test wafers. The test wafers should be placed in positions 3, 23, and 48.
3. For each experiment, compute to your best ability the deposition time needed to
achieve the target thickness of 3600 Å. Note that in the experiment the actual thickness may turn out to be much different from 3600 Å. However, such data are perfectly useful for analysis. Thus, a particular experiment need not be redone by adjusting the deposition time to obtain 3600 Å thickness.
4. For each experiment, measure the surface defects and thickness at three specific
points (top, center, and bottom) on each test wafer. Follow standard laboratory
practice to prepare data sheets with space for every observation to be recorded.
From Table 4.3 it is apparent that, from one experiment to the next, levels of several
control factors must be changed. This poses a considerable amount of difficulty to the
experimenter. Meticulousness in correctly setting the levels of the various control
factors is critical to the success of a Robust Design project. Let us clarify what we mean
by meticulousness. Going from experiment 3 to experiment 4 we must change
temperature from (T0-25) °C to T0 °C, pressure from (P0 + 200) mtorr to (P0-200)
mtorr, and so on. By meticulousness we mean ensuring that the temperature, pressure,
and other dials are set to their proper levels. Failure to set the level of a factor
correctly could destroy the valuable property of orthogonality. Consequently,
conclusions from the experiment could be erroneous. However, if an inherent error in the equipment leads to an actual temperature of (T0−1) °C or (T0+2) °C when the dial is set at T0 °C, we should not bother to correct for such variations. Why? Because
unless we plan to change the equipment, such variations constitute noise and will
continue to be present during manufacturing. If our conclusions from the matrix
experiment are to be valid in actual manufacturing, our results must not be sensitive to such
inherent variations. If we were to keep these variations out of our experiments, we would lose the ability to test for robustness against such variations. The matrix experiment, coupled
with the verification experiment, has a built-in check for sensitivity to such inherent
variations.
setting. Suppose six batches are processed at each temperature setting. (Note that in the L18 array the replication number is six; that is, there are six experiments for each factor level.) Then, we would need 18 batches to evaluate the effect of three temperature settings. For the other factors, we need to experiment with the two alternate levels, so that we need to process 12 batches each. Thus, for the six factors, we would need to process 18 + 5 × 12 = 78 batches. This is a large number compared to the 18
batches needed for the matrix experiment. Further, if there are strong interactions
among the control factors, this method of experimentation cannot detect them.
The matrix experiment, though somewhat tedious to conduct, is highly
efficient—that is, when compared to the practices above, we can generate more
dependable information about more control factors with the same experimental effort. Also,
this method of experimentation allows for the detection of the interactions among the
control factors, when they are present, through the verification experiment.
In practice, many design improvement experiments, where only one factor is
studied at a time, get terminated after studying only a few control factors because both
the R&D budget and the experimenter's patience run out. As a result, the quality
improvement turns out to be only partial, and the product cost remains somewhat high.
This danger is reduced greatly when we conduct matrix experiments using orthogonal
arrays.
count was then multiplied by an appropriate number to determine the defect count per unit area (0.2 cm²). The thickness was measured by an optical interferometer. The deposition rate was computed by dividing the average thickness by the deposition time.
The first step in data analysis is to summarize the data for each experiment. For the
case study, these calculations are illustrated next.
For experiment number 1, the S/N ratio for the surface defects, given by Equation (4.1), was computed as follows:

    η = −10 log10 [ (1/9)(1² + 0² + 1² + 2² + 0² + 0² + 1² + 1² + 0²) ]
      = −10 log10 (8/9)
      = 0.51 dB.

From the thickness data, the mean, variance, and S/N ratio were calculated as follows by using Equations (4.2), (4.3), and (4.4):

    μ = (1/9)(2029 + 1975 + ⋯ + 1949) = 1958.1 Å

    σ² = (1/8)[(2029 − 1958.1)² + (1975 − 1958.1)² + ⋯ + (1949 − 1958.1)²] = 1151.36 (Å)²

    η′ = 10 log10 (μ²/σ²) = 10 log10 (1958.1²/1151.36) = 35.22 dB.
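These hand calculations can be verified with a short Python sketch; it uses the experiment 1 data listed below and should reproduce the values quoted above, together with the deposition-rate S/N ratio computed next from the 14.5 Å/min rate:

    import math

    defects = [1, 0, 1, 2, 0, 0, 1, 1, 0]                                  # counts, experiment 1
    thickness = [2029, 1975, 1961, 1975, 1934, 1907, 1952, 1941, 1949]     # angstroms
    deposition_rate = 14.5                                                 # angstroms per minute

    # Surface defects: eta = -10 log10(mean square count), Equation (4.1).
    eta = -10 * math.log10(sum(y * y for y in defects) / len(defects))

    # Thickness: mean, sample variance (n - 1 denominator), eta' = 10 log10(mu^2 / sigma^2).
    mu = sum(thickness) / len(thickness)
    var = sum((x - mu) ** 2 for x in thickness) / (len(thickness) - 1)
    eta_prime = 10 * math.log10(mu ** 2 / var)

    # Deposition rate: eta'' = 10 log10(rate^2).
    eta_rate = 10 * math.log10(deposition_rate ** 2)

    print(round(eta, 2), round(mu, 1), round(var, 2), round(eta_prime, 2), round(eta_rate, 2))
    # Expected: 0.51  1958.1  1151.36  35.22  23.23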
Surface defect counts at three positions (top, center, bottom) on each of the three test wafers (tube positions 3, 23, and 48)

            Test wafer 1            Test wafer 2            Test wafer 3
Expt.    Top  Center  Bottom     Top  Center  Bottom     Top  Center  Bottom
No.
  1        1     0       1        2      0       0        1      1       0
  2        1     2       8      180      5       0      126      3       1
  4        6    15       6       17     20      16       15     40      18
 10        3     0       0        3      0       0        1      0       1
 11        1     0       1        5      0       0        1      0       1
 14        3    21     162       90      6       1       63     15      39
 16        5     6      40       54      0       8       14      1       1
Thickness data (Å) and deposition rate

            Test wafer 1            Test wafer 2            Test wafer 3         Deposition
Expt.    Top  Center  Bottom     Top  Center  Bottom     Top  Center  Bottom    Rate (Å/min)
No.
1 2029 1975 1961 1975 1934 1907 1952 1941 1949 14.5
2 5375 5191 5242 5201 5254 5309 5323 5307 5091 36.6
3 5989 5894 5874 6152 5910 5886 6077 5943 5962 41.4
4 2118 2109 2099 2140 2125 2108 2149 2130 2111 36.1
5 4102 4152 4174 4556 4504 4560 5031 5040 5032 73.0
6 3022 2932 2913 2833 2837 2828 2934 2875 2841 49.5
7 3030 3042 3028 3486 3333 3389 3709 3671 3687 76.6
8 4707 4472 4336 4407 4156 4094 5073 4898 4599 105.4
9 3859 3822 3850 3871 3922 3904 4110 4067 4110 115.0
10 3227 3205 3242 3468 3450 3420 3599 3591 3535 24.8
11 2521 2499 2499 2576 2537 2512 2551 2552 2570 20.0
12 5921 5766 5844 5780 5695 5814 5691 5777 5743 39.0
13 2792 2752 2716 2684 2635 2606 2765 2786 2773 53.1
14 2863 2835 2859 2829 2864 2839 2891 2844 2841 45.7
15 3218 3149 3124 3261 3205 3223 3241 3189 3197 54.8
16 3020 3008 3016 3072 3151 3139 3235 3162 3140 76.8
17 4277 4150 3992 3888 3681 3572 4593 4298 4219 105.3
18 3125 3119 3127 3567 3563 3520 4120 4088 4138 91.4
Finally, for experiment 1, the S/N ratio for the deposition rate was η″ = 10 log10 (14.5)² = 23.23 dBam.
The data summary for all 18 experiments was computed in a similar fashion and
the results are tabulated in Table 4.5.
Observe that the mean thickness for the 18 experiments ranges from 1958 Å to 5965 Å. But we are least concerned about this variation in the thickness because the
average thickness can be adjusted easily by changing the deposition time. During a
Robust Design project, what we are most interested in is the S/N ratio, which in this
case is a measure of variation in thickness as a proportion of the mean thickness.
Hence, no further analysis on the mean thickness was done in the case study, but the
mean thickness, of course, was used in computing the deposition rate, which was of
interest.
After the data for each experiment are summarized, the next step in data analysis
is to estimate the effect of each control factor on each of the three characteristics of
interest and to perform analysis of variance (ANOVA) as described in Chapter 3.
The factor effects for surface defects (η), thickness (η′), and deposition rate (η″), and the respective ANOVA are given in Tables 4.6, 4.7, and 4.8, respectively. A
summary of the factor effects is tabulated in Table 4.9, and the factor effects are displayed
graphically in Figure 4.5, which makes it easy to visualize the relative effects of the
various factors on all three characteristics.
To assist the interpretation of the factor effects plotted in Figure 4.5, we note the
following relationship between the decibel scale and the natural scale for the three
characteristics:
    η = −10 log10 (mean square surface defect count) for surface defects,
    η′ = 10 log10 (μ²/σ²) for thickness, and
    η″ = 10 log10 (deposition rate)² for deposition rate.

Table 4.5  Data summary for the 18 experiments: the experiment condition (levels in columns e A B C D E e F), η for surface defects (dB), mean thickness (Å), η′ for thickness (dB), and η″ for deposition rate (dBam).
Figure 4.5  Plots of factor effects for surface defects (η, dB), thickness (η′, dB), and deposition rate (η″, dBam) against the levels of temperature (°C), pressure (mtorr), nitrogen flow (sccm), silane flow (sccm), settling time (min), and cleaning method. Underline indicates starting level. Two-standard-deviation confidence limits are also shown for the starting level. Estimated confidence limits for η″ are too small to show.
For the case study, we can make the following observations about the optimum settings from Figure 4.5 and Table 4.9:
• Deposition pressure (factor B) has the next largest effect on surface defects and deposition rate. Reducing the pressure from the starting level of P0 mtorr to (P0−200) mtorr can improve η by about 20 dB (a 10-fold reduction in the root mean square surface defect count) at the expense of reducing the deposition rate by 2.75 dBam (37 percent reduction in deposition rate). The effect of pressure on thickness uniformity is very small.
• Nitrogen flow rate (factor C) has a moderate effect on all three characteristics. The starting setting of N0 sccm gives the highest S/N ratios for surface defects and thickness uniformity. There is also a possibility of further improving these two S/N ratios by increasing the flow rate of this diluent gas. This is an important fact to be remembered for future experiments. The effect of nitrogen flow rate on deposition rate is small compared to the effects of temperature and pressure.
• Silane flow rate (factor D) also has a moderate effect on all three characteristics. Thickness uniformity is the best when the silane flow rate is set at (S0−50) sccm. This can also lead to a small reduction in surface defects and the deposition rate.
• Cleaning method (factor F) has no effect on deposition rate and surface defects. But, by instituting some cleaning prior to deposition, the thickness uniformity can be improved by over 6.0 dB (a factor of 2 reduction in the standard deviation of thickness).
88 Steps in Robust Design Chap. 4
From these observations, the optimum settings of factors E and F are obvious,
namely E₂ and F₂. However, for factors A through D, the directions in which the
quality characteristics (surface defects and thickness uniformity) improve tend to reduce the
deposition rate. Thus, a trade-off between quality loss and productivity must be made
in choosing their optimum levels. In the case study, since surface defects were the key
quality problem that caused significant scrap, the experimenters decided to take care of
it by changing temperature from A₂ to A₁. As discussed earlier, this also meant a
substantial reduction in deposition rate. Also, they decided to hold the other three factors
at their starting levels, namely B₂, C₁, and D₃. The potential these factors held would
have been used if the confirmation experiment indicated a need to improve the surface
defect and thickness uniformity further. Thus, the optimum conditions chosen were:
A₁B₂C₁D₃E₂F₂.
The next step in data analysis is to predict the anticipated improvements under
the chosen optimum conditions. To do so, we first predict the S/N ratios for surface
defects, thickness uniformity, and deposition rate using the additive model. These
computations for the case study are displayed in Table 4.10. According to the table, an
improvement in surface defects equal to [-19.84 -(-56.69)] = 36.85 dB should be
anticipated, which is equivalent to a reduction in the root mean square surface defect
count by a factor of 69.6. The projected improvement in thickness uniformity is
36.79-29.95 = 6.84 dB, which implies a reduction in standard deviation by a factor of
2.2. The corresponding change in deposition rate is 29.60-34.97 = -5.37 dB, which
amounts to a reduction in the deposition rate by a factor of 1.9.
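The decibel gains above translate into natural-scale factors through a single rule: each of these S/N ratios is 10 log₁₀ of a squared quantity, so a gain of x dB changes the corresponding root-mean-square quantity by a factor of 10^(x/20). The short Python sketch below simply repeats that arithmetic for the three improvements quoted in the text; it is an illustration, not part of the original analysis.

def rms_factor_from_db(gain_db: float) -> float:
    """Factor by which an RMS-type quantity changes for a given dB gain.

    Each S/N ratio here is 10*log10 of a squared quantity, so a gain of
    x dB corresponds to a factor of 10**(x / 20) on the un-squared scale.
    """
    return 10 ** (gain_db / 20.0)

# Improvements predicted by the additive model (values quoted in the text).
surface_defect_gain = -19.84 - (-56.69)   # 36.85 dB
uniformity_gain = 36.79 - 29.95           #  6.84 dB
deposition_change = 29.60 - 34.97         # -5.37 dB

print(f"RMS defect count reduced by a factor of {rms_factor_from_db(surface_defect_gain):.1f}")
print(f"Thickness std. dev. reduced by a factor of {rms_factor_from_db(uniformity_gain):.1f}")
print(f"Deposition rate reduced by a factor of {1 / rms_factor_from_db(deposition_change):.1f}")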
One way to address a potential additivity problem is to study a few key interactions among the control factors in
future experiments. Construction of orthogonal arrays that permit the estimation of a
few specific interactions, along with all main effects, is discussed in Chapter 7.
The verification experiment has two aspects: the first is that the predictions must
agree with the observed results under laboratory conditions; the second is that the predictions
should be valid under actual manufacturing conditions for a process design and under actual
field conditions for a product design. A judicious choice of both the noise factors to
be included in the experiment and the testing conditions is essential for the predictions
made through the laboratory experiment to be valid under both manufacturing and field
conditions.
For the polysilicon deposition case study, four batches of 50 wafers, each containing 3
test wafers, were processed under both the optimum and the starting
conditions. The results are tabulated in Table 4.11. It is clear that the data agree very
well with the predictions about the improvement in the S/N ratios and the deposition
rate. So, we could adopt the optimum settings as the new process settings and proceed
to implement these settings.
Follow-up Experiments
Range of Applicability
In any development activity, it is highly desirable that the conclusions continue to be
valid when we advance to a new generation of technology. In the case study of the
polysilicon deposition process, this means that having developed the process with 4-
inch wafers, we would want it to be valid when we advance to 5-inch wafers. The
process developed for one application should be valid for other applications. Processes
and products developed by the Robust Design method generally possess this
characteristic of design transferability. In the case study, going from 4-inch wafers to 5-inch
wafers was achieved by making minor changes dictated by the thermal capacity
calculations. Thus, a significant amount of development effort was saved in transferring the
process to the reactor that handled 5-inch wafers.
4.9 SUMMARY
Optimizing the product or process design means determining the best architecture,
levels of control factors, and tolerances. Robust Design is a methodology for finding the
optimum settings of control factors to make the product or process insensitive to noise
factors. It involves eight major steps which can be grouped as planning a matrix
experiment to determine the effects of the control factors (Steps 1 through 5),
conducting the matrix experiment (Step 6), and analyzing and verifying the results (Steps 7
and 8).
• Step 1. Identify the main function, side effects and failure modes. This step
requires engineering knowledge of the product or process and the customer's
environment.
• Step 2. Identify noise factors and testing conditions for evaluating the quality
loss. The testing conditions are selected to capture the effect of the more
important noise factors. It is important that the testing conditions permit a consistent
estimation of the sensitivity to noise factors for any combination of control factor
levels. In the polysilicon deposition case study, the effect of noise factors was
captured by measuring the quality characteristics at three specific locations on
each of three wafers, appropriately placed along the length of the tube. Noise
orthogonal array and compound noise factor are two common techniques for
constructing testing conditions. These techniques are discussed in Chapter 8.
• Step 3. Identify the quality characteristic to be observed and the objective
function to be optimized. Guidelines for selecting the quality characteristic and the
objective function, which is generically called S/N ratio, are given in Chapters 5
and 6. The common temptation of using the percentage of products that meet
the specification as the objective function to be optimized should be avoided. It
leads to orders of magnitude reduction in efficiency of experimentation. While
optimizing manufacturing processes, an appropriate throughput characteristic
should also be studied along with the quality characteristics because the
economics of the process is determined by both of them.
• Step 4. Identify the control factors and their alternate levels. The more complex
a product or a process, the more control factors it has and vice versa. Typically,
six to eight control factors are chosen at a time for optimization. For each
control factor two or three levels are selected, out of which one level is usually the
starting level. The levels should be chosen sufficiently far apart to cover a wide
experimental region because sensitivity to noise factors does not usually change
with small changes in control factor settings. Also, by choosing a wide
experimental region, we can identify good regions, as well as bad regions, for control
factors. Chapter 6 gives additional guidelines for choosing control factors and
their levels. In the polysilicon deposition case study, we investigated three levels
each of six control factors. One of these factors (cleaning method) had discrete
levels. For four of the factors the ratio of the largest to the smallest levels was
between three and five.
• Step 5. Design the matrix experiment and define the data analysis procedure.
Using orthogonal arrays is an efficient way to study the effect of several control
factors simultaneously. The factor effects thus obtained are valid over the
experimental region, and the matrix experiment provides a way to test for the additivity of the factor
effects. The experimental effort needed is much smaller when compared to other
methods of experimentation, such as guess and test (trial and error), one factor at
a time, and full factorial experiments. Also, the data analysis is easy when
orthogonal arrays are used. The choice of an orthogonal array for a particular
project depends on the number of factors and their levels, the convenience of
changing the levels of a particular factor, and other practical considerations.
Methods for constructing a suitable orthogonal array are given in Chapter 7. The
orthogonal array L18, consisting of 18 experiments, was used for the polysilicon
deposition study. The array L18 happens to be the most commonly used array
because it can be used to study up to seven 3-level factors and one 2-level factor.
• Step 6. Conduct the matrix experiment. Levels of several control factors must
be changed when going from one experiment to the next in a matrix experiment.
Meticulousness in correctly setting the levels of the various control factors is
essential—that is, when a particular factor has to be at level 1, say, it should not
be set at level 2 or 3. However, one should not worry about small perturbations
that are inherent in the experimental equipment. Any erroneous experiments or
missing experiments must be repeated to complete the matrix. Errors can be
avoided by preparing the experimenter's log and data sheets prior to conducting
the experiments. This also speeds up the conduct of the experiments
significantly. The 18 experiments for the polysilicon deposition case study were
completed in 9 days.
• Step 7. Analyze the data, determine optimum levels for the control factors, and
predict performance under these levels. The various steps involved in analyzing
the data resulting from matrix experiments are described in Chapter 3. S/N
ratios and other summary statistics are first computed for each experiment. (In
Robust Design, the primary focus is on maximizing the S/N ratio.) Then, the
factor effects are computed and ANOVA performed. The factor effects, along
with their confidence intervals, are plotted to assist in the selection of their
optimum levels. When a product or a process has multiple quality
characteristics, it may become necessary to make some trade-offs while choosing the
optimum factor levels. The observed factor effects together with the quality loss
function can be used to make rational trade-offs. In the polysilicon case study,
the data analysis indicated that levels of three factors—deposition temperature
(A), settling time (E), and cleaning method (F)—be changed, while the levels of
the other three factors be kept at their starting levels.
The concept of quadratic loss function introduced in Chapter 2 is ideally suited for
evaluating the quality level of a product as it is shipped by a supplier to a customer.
"As shipped" quality means that the customer would use the product without any
adjustment to it or to the way it is used. Of course, the customer and the supplier
could be two departments within the same company.
A few common variations of the quadratic loss function were given in Chapter 2.
Can we use the quadratic loss function directly for finding the best levels of the control
factors? What happens if we do so? What objective function should we use to
minimize the sensitivity to noise? We examine these and other related questions in this
chapter. In particular, we describe the concepts behind the signal-to-noise (S/N) ratio
and the rationale for using it as the objective function for optimizing a product or
process design. We identify a number of common types of engineering design problems
and describe the appropriate S/N ratios for these problems. We also describe a
procedure that could be used to derive S/N ratios for other types of problems. This
chapter has six sections:
• Section 5.2 presents a general procedure for deriving the S/N ratio.
• Section 5.3 describes common static problems (where the target value for the
quality characteristic is fixed) and the corresponding S/N ratios.
• Section 5.4 discusses common dynamic problems (where the quality
characteristic is expected to follow the signal factor) and the corresponding S/N ratios.
• Section 5.5 describes the accumulation analysis method for analyzing ordered
categorical data.
• Section 5.6 summarizes the important points of this chapter.
One of the two quality characteristics optimized in the case study of the polysilicon
deposition process in Chapter 4 was the thickness of the polysilicon layer. Recall that
one of the goals was to achieve a uniform thickness of 3600 Å. More precisely, the
experimenters were interested in minimizing the variance of thickness while keeping
the mean on target. The objective of many robust design projects is to achieve a
particular target value for the quality characteristic under all noise conditions. These types
of projects were previously referred to as nominal-the-best type problems. The detailed
analysis presented in this section will be helpful in formulating such projects. This
section discusses the following issues:
• Comparison of the quality of two process conditions
• Relationship between S/N ratio and quality loss after adjustment (Qa)
• Optimization for different target thickness
• Interaction induced by the wrong choice of objective function
• Identification of a scaling factor
• Minimization of standard deviation and mean separately
target, but the standard deviation is large. As we observe here, it is very typical for
both the mean and standard deviation to change when we change the level of a factor.
[Table 5.1 lists, for each of the two experiments, the temperature setting, the mean thickness (μ), and the standard deviation (σ) of the thickness.]
From the data presented in Table 5.1, which temperature setting can we
recommend? Since both the mean and standard deviation change when we change the
temperature, we may decide to use the quadratic loss function to select the better
temperature setting. For a given mean, μ, and standard deviation, σ, the quality loss without
adjustment, denoted by Q, is given by

    Q = k [ (μ − μ₀)² + σ² ]                    (5.1)

where μ₀ is the target thickness and k is the quality loss coefficient. Note that throughout this chapter we ignore the
constant k (that is, set it equal to 1) because it has no effect on the choice of optimum levels for the control factors. The quality loss under T0 °C is 3.24 × 10⁶, while under
(T0 + 25) °C it is 8.0 × 10⁴. Thus, we may conclude that (T0 + 25) °C is the better
temperature setting. But, is that really a correct conclusion?
Recall that the deposition time is a scaling factor for the deposition process—that
is, for any fixed settings of all other control factors, the polysilicon thickness at the
various points within the reactor is proportional to the deposition time. Of course, the
proportionality constant, which is the same as the deposition rate, could be different at different locations within the reactor. This is what leads to the variance, σ², of the
polysilicon thickness. We can use this knowledge of the scaling factor to estimate the
quality loss after adjusting the mean on target.
For T0 °C temperature, we can attain the mean thickness of 3600 A by
increasing the deposition time by a factor of 3600/1800 = 2.0. Correspondingly, the standard
deviation of the thickness would also increase by the same factor of 2.0.
The general formula for computing the quality loss after adjustment for the polysilicon
thickness problem, which is a nominal-the-best type problem, can be derived as
follows: If the observed mean thickness is μ, we have to increase the deposition time by
a factor of μ₀/μ to get the mean thickness on target, where μ₀ is the target thickness. The predicted standard deviation
after adjusting the mean on target is (μ₀/μ)σ, where σ is the observed standard
deviation. So, we have

    Qa = k μ₀² (σ²/μ²)                    (5.3)

Since k and μ₀² are constants, minimizing Qa is equivalent to maximizing (μ²/σ²) or, equivalently, the signal-to-noise (S/N) ratio

    η = 10 log₁₀ (μ²/σ²)                    (5.4)

Although it is customary to refer to both (μ²/σ²) and η as the S/N ratio, it is clear
from the context which one we mean. The range of values of (μ²/σ²) is (0, ∞),
while the range of values of η is (−∞, ∞). Thus, in the log domain, we have better
additivity of the effects of two or more control factors. Since log is a monotone function, maximizing (μ²/σ²) is equivalent to maximizing η.
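As a concrete illustration of Equation (5.4), the following sketch computes the nominal-the-best S/N ratio from two small sets of thickness readings and picks the setting with the larger η; the readings are invented for illustration and are not the case-study data.

import math
import statistics

def sn_nominal_the_best(readings):
    """Nominal-the-best S/N ratio, eta = 10*log10(mu^2 / sigma^2), in dB."""
    mu = statistics.mean(readings)
    sigma = statistics.stdev(readings)  # sample standard deviation
    return 10 * math.log10(mu ** 2 / sigma ** 2)

# Hypothetical thickness readings (angstroms) for two control-factor settings.
setting_1 = [1750, 1820, 1790, 1840, 1800]
setting_2 = [3550, 3700, 3500, 3650, 3600]

for name, data in [("setting 1", setting_1), ("setting 2", setting_2)]:
    print(f"{name}: mean = {statistics.mean(data):.0f} A, "
          f"eta = {sn_nominal_the_best(data):.2f} dB")

# The setting with the larger eta is preferred; the mean is then brought on
# target separately, using a scaling factor such as the deposition time.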
Optimization for Different Target Thicknesses
Using the S/N ratio rather than the mean square deviation from target as an objective
function has one additional advantage. Suppose for a different application of the
polysilicon deposition process, such as manufacturing a new code of microchips, we
want to have 3000 Å target thickness. Then, the optimum conditions obtained by
maximizing the S/N ratio would still be valid, except for adjustment of the mean.
However, the same cannot be said if we used the mean square deviation from target as
the objective function. We would have to perform the optimization again.
The problem of minimizing the variance of thickness while keeping the mean on
target is a problem of constrained optimization. As discussed in Appendix B, by using
the S/N ratio, the problem can be converted into an unconstrained optimization
problem that is much easier to solve. The property of unconstrained optimization is the
basis for our ability to separate the actions of minimizing sensitivity to noise factors by
maximizing the S/N ratio and the adjustment of mean thickness on target.
When we advance from one technology of integrated circuit manufacturing to a
newer technology, we must produce thinner layers, print and etch smaller width lines,
etc. With this in mind, it is crucial that we focus our efforts on reducing sensitivity to
noise by optimizing the S/N ratio. The mean can then be adjusted to meet the desired
target. This flexible approach to process optimization is needed not only for integrated
circuit manufacturing, but also for virtually all manufacturing processes and
optimization of all product designs.
During product development, the design of subsystems and components must
proceed in parallel. Even though the target values for various characteristics of the
subsystems and components are specified at the beginning of the development activity,
it often becomes necessary to change the target values as more is learned about the
product. Optimizing the S/N ratio gives us the flexibility to change the target later in
the development effort. Also, the reusability of the subsystem design for other
applications is greatly enhanced. Thus, by using the S/N ratio we improve the overall
productivity of the development activity.
Using the quality loss without adjustment as the objective function to be optimized can
also lead to unnecessary interactions among the control factors. To understand this
point, let us consider again the data in Table 5.1. Suppose the deposition time for the
two experiments in Table 5.1 was 36 minutes. Now suppose we conducted two more
experiments with 80 minutes of deposition time and temperatures of T0 °C and
(T0 + 25) °C. Let the data for these two experiments be as given in Table 5.2. For
ease of comparison, the data from Table 5.1 are also listed in Table 5.2.
[Figure 5.1 plots the objective functions against temperature (T0 and T0 + 25 °C) for the 36-minute and 80-minute deposition times: (a) when Q is the objective function, the control factors temperature and time have a strong antisynergistic interaction; (b) when Qa is the objective function, there is no interaction between temperature and time; here, since time is a scaling factor, the curves for the 36-minute and 80-minute deposition times are almost overlapping; (c) much of the interaction in (a) is caused by the deviation of the mean from the target.]
The squared deviation of the mean from the target thickness is a component of
the objective function Q [see Equation (5.1)]. This component is plotted in Figure
5.1(c). From the figure it is obvious that the interaction revealed in Figure 5.1(a) is
primarily caused by this component. The objective function Qa does not have the
squared deviation of the mean from the target as a component. Consequently, the
corresponding interaction, which unnecessarily complicates the decision process, is
eliminated.
In the polysilicon deposition case study, the deposition time is an easily identified
scaling factor. However, in many situations where we want to obtain the mean on target, the
scaling factor cannot be identified readily. How should we determine the best settings
of the control factors in such situations?
It might, then, be tempting to use the mean squared deviation from the target as
the objective function to be minimized. However, as explained earlier, minimizing the
mean squared deviation from the target can lead to wrong conclusions about the
optimum levels for the control factors; so, the temptation should be avoided. Instead,
we should begin with an assumption that a scaling factor exists and identify such a
factor through experiments.
The objective function to be maximized, namely η, can be computed from the
observed μ and σ without knowing which factor is a scaling factor. Also, the scaling
operation does not change the value of η. Thus, the process of discovering a scaling
factor and the optimum levels for the various control factors is a simple one. It
consists of determining the effect of every control factor on η and μ, and then classifying
these factors as follows (a brief illustrative sketch follows the list):
1. Factors that have a significant effect on η. For these factors, we should pick the
levels that maximize η.
2. Factors that have a significant effect on μ but practically no effect on η. Any
one of these factors can serve as a scaling factor. We use one such factor to
adjust the mean on target. We are generally successful in finding at least one
scaling factor. However, sometimes we must settle for a factor that has a small
effect on η as a scaling factor.
3. Factors that have no effect on η and no effect on μ. These are neutral factors
and we can choose their best levels from other considerations such as ease of
operation or cost.
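Once the factor effects on η and on the mean have been tabulated, the classification above is mechanical. The sketch below shows one way it might be coded; the factor names, effect values, and significance thresholds are purely illustrative (in practice, ANOVA is used to judge which effects are significant).

def classify_factors(effects, eta_threshold=1.0, mean_threshold=100.0):
    """Group control factors by their effects on eta and on the mean.

    `effects` maps a factor name to a tuple (effect_on_eta_dB, effect_on_mean).
    The thresholds for calling an effect 'significant' are illustrative only.
    """
    groups = {"maximize_eta": [], "scaling_candidates": [], "neutral": []}
    for factor, (d_eta, d_mean) in effects.items():
        if abs(d_eta) >= eta_threshold:
            groups["maximize_eta"].append(factor)
        elif abs(d_mean) >= mean_threshold:
            groups["scaling_candidates"].append(factor)
        else:
            groups["neutral"].append(factor)
    return groups

# Hypothetical factor effects: (change in eta in dB, change in mean thickness).
effects = {
    "temperature": (4.0, 900.0),
    "pressure": (2.5, 400.0),
    "deposition_time": (0.1, 1800.0),
    "cleaning_method": (0.2, 10.0),
}
print(classify_factors(effects))
# deposition_time lands among the scaling candidates: it moves the mean
# strongly while leaving eta essentially unchanged.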
Another way to approach the problem of minimizing variance with the constraint that
the mean should be on target is, first, to minimize standard deviation while ignoring
the mean, and, then, bring the mean on target without affecting the standard deviation
by changing a suitable factor. The difficulty with this approach is that often we cannot
find a factor that can change the mean over a wide range without affecting the
standard deviation. This can be understood as follows: In these problems, when the mean
is zero, the standard deviation is also zero. However, for all other mean values, the
standard deviation cannot be identically zero. Thus, whenever a factor changes the
mean, it also affects the standard deviation. Also, an attempt to minimize standard
deviation without paying attention to the mean drives both the standard deviation and
the mean to zero, which is not a worthwhile solution. Therefore, we should not try to
minimize a without paying attention to the mean. However, we can almost always
find a scaling factor. Thus, an approach where we maximize the S/N ratio leads to
useful solutions.
Note that the above discussion pertains to the class of problems called nominal-
the-best type problems, of which polysilicon thickness uniformity is an example. A
class of problems called signed-target type problems where it is appropriate to first
minimize variance and then bring the mean on target is described in Section 5.3.
Let us now examine the general problem of evaluating sensitivity to noise for a
dynamic system. Recall that in a dynamic system the quality characteristic is expected
to follow the signal factor. The ideal function for many products can be written as
y = M (5.5)
where y is the quality characteristic (or the observed response) and M is the signal (or
the command input). In this section we discuss the evaluation of sensitivity to noise
for such dynamic systems. For specificity, suppose we are optimizing a servomotor (a
device such as an electric motor whose movement is controlled by a signal from a
command device) and that y is the displacement of the object that is being moved by
the servomotor and M specifies the desired displacement. To determine the sensitivity
of the servomotor, suppose we use the signal values M₁, M₂, …, Mₘ; and for each
signal value, we use the noise conditions x₁, x₂, …, xₙ. Let yᵢⱼ denote the observed
displacement for a particular value of the control factor settings, z = (z₁, z₂, …, z_q)ᵀ,
when the signal is Mᵢ and the noise is xⱼ. Representative values of yᵢⱼ and the ideal
function are shown in Figure 5.2. The average quality loss, Q(z), associated with the
control factor settings, z, is given by

    Q(z) = (k/mn) Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ (yᵢⱼ − Mᵢ)²                    (5.6)
As shown by Figure 5.2, Q(z) includes not only the effect of noise factors but
also the deviation of the mean function from the ideal function. In practice, Q(z) could
be dominated by the deviation of the mean function from the ideal function. Thus, the
direct minimization of Q(z) could fail to achieve truly minimum sensitivity to noise. It
could lead simply to bringing the mean function on target, which is not a difficult
problem in most situations anyway. Therefore, whenever adjustment is possible, we
should minimize the quality loss after adjustment.
For the servomotor, it is possible to adjust a gear ratio so that, referring to Figure
5.2, the slope of the observed mean function can be made equal to the slope of the
ideal function. Let the slope of the observed mean function be β. By changing the
gear ratio we can change every displacement yᵢⱼ to y′ᵢⱼ = (1/β)yᵢⱼ. This brings the mean
function on target.
For the servomotor, the change of gear ratio leads to a simple linear transformation
of the displacement yᵢⱼ. In some products, however, the adjustment could lead to a
more complicated relationship between the adjusted value y′ᵢⱼ and the unadjusted value yᵢⱼ. For a general case, let the effect of the adjustment be to change each yᵢⱼ to a value
y′ᵢⱼ = h_R(yᵢⱼ), where the function h_R defines the adjustment that is indexed by a
parameter R. After adjustment, we must have the mean function on target—that is, the errors
(y′ᵢⱼ − Mᵢ) must be orthogonal to the signal Mᵢ. Mathematically, the requirement of
orthogonality can be written as

    Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ (y′ᵢⱼ − Mᵢ) Mᵢ = 0                    (5.7)

Equation (5.7) can be solved to determine the best value of R for achieving the
mean function on target. Then the quality loss after adjustment, Qa(z), can be
evaluated as follows:

    Qa(z) = (k/mn) Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ (y′ᵢⱼ − Mᵢ)²                    (5.8)
The quantity Qa(z) is a measure of sensitivity to noise. It does not contain any part
that can be reduced by the chosen adjustment process. However, any systematic part
of the relationship between y and M that cannot be adjusted is included in Qa(z). [For
the servomotor, the nonlinearity (2nd, 3rd, and higher order terms) of the relationship
between y and M are contained in Qa(z).] Minimization of Qa(z) makes the design
robust against the noise factors and reduces the nonadjustable part of the relationship
between y and M. Any control factor that has an effect on the mean function but has no effect on
Qa(z) can be used to adjust the mean function on target without altering the sensitivity
to noise, which has already been minimized. Such a control factor is called an
adjustment factor.
Note that the constant k in Qa(z) and sometimes some other constants are generally
ignored because they have no effect on the optimization.
2. For the factors that have a significant effect on η, select levels that maximize η.
3. Select any factor that has no effect on η but a significant effect on the mean
function as an adjustment factor. In practice, we must sometimes settle for a
factor that has a small effect on η but a significant effect on the mean function as
an adjustment factor. Use the adjustment factor to bring the mean function on
target. Adjusting the mean function on target is the main quality control activity
in manufacturing. It is needed because of changing raw material, varying
processing conditions, etc. Thus, finding an adjustment factor that can be changed
conveniently during manufacturing is important. However, finding the level of
the adjustment factor that brings the mean precisely on target during product or
process design is not important.
4. For factors that have no effect on η and the mean function, we can choose any
level that is most convenient from the point of view of other considerations, such
as other quality characteristics and cost.
Minimizing the surface defect count and achieving target thickness in polysilicon
deposition are both examples of static problems. In each case, we are interested in a
fixed target, so that the signal factor is trivial, and for all practical purposes, we can
say it is absent. In contrast, the design of an electrical amplifier is a dynamic problem
in which the input signal is the signal factor and our requirement is to make the output
signal proportional to the input signal. The tracking of the input signal by the output
signal makes it a dynamic problem. We discuss dynamic problems in Section 5.4.
Static problems can be further characterized by the nature of the quality
characteristic. Recall that the response we observe for improving quality is called quality
characteristic. The classification of static problems is based on whether the quality
characteristic is:
• Continuous or discrete
Smaller-the-Better Type Problems

Here, the quality characteristic is continuous and nonnegative—that is, it can take any
value from 0 to ∞. Its most desired value is zero. Such problems are characterized by
the absence of a scaling factor or any other adjustment factor. The surface defect count
is an example of this type of problem. Note that for all practical purposes we can treat
this count as a continuous variable.
If y₁, y₂, …, yₙ denote the n observations of the quality characteristic, the average quality loss is

    Q = k (1/n) Σᵢ₌₁ⁿ yᵢ²                    (5.10)

Minimizing Q is equivalent to maximizing η defined by the following equation:

    η = −10 log₁₀ [ (1/n) Σᵢ₌₁ⁿ yᵢ² ]                    (5.11)

Note that we have ignored the constant k and expressed the quality loss in the decibel
scale.
In this case the signal is constant, namely to make the quality characteristic equal
to zero. Therefore, the S/N ratio, η, measures merely the effect of noise.
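A minimal sketch of the computation in Equation (5.11); the nine defect counts below are invented placeholders for the nine observations of one matrix experiment, not the case-study data.

import math

def sn_smaller_the_better(observations):
    """Smaller-the-better S/N ratio: eta = -10*log10(mean of y^2), in dB."""
    mean_square = sum(y ** 2 for y in observations) / len(observations)
    return -10 * math.log10(mean_square)

# Hypothetical defect counts for the nine observations of one experiment.
counts = [3, 0, 1, 12, 5, 2, 0, 7, 4]
print(f"eta = {sn_smaller_the_better(counts):.2f} dB")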
Larger-the-Better Type Problems

Here, the quality characteristic is continuous and nonnegative, and we want it to be as
large as possible; there is no adjustment factor. The quality loss is minimized by maximizing η, where

    η = −10 log₁₀ [ (1/n) Σᵢ₌₁ⁿ (1/yᵢ²) ]                    (5.14)
The following questions are often asked about the larger-the-better type
problems: Why do we take the reciprocal of a larger-the-better type characteristic and then
treat it as a smaller-the-better type characteristic? Why do we not maximize the mean
square quality characteristic? This can be understood from the following result from
mathematical statistics:

    E[1/y²] ≈ (1/μ²)(1 + 3σ²/μ²)

where μ and σ² are the mean and variance of the quality characteristic. [Note that if y
denotes the quality characteristic, then the mean square reciprocal quality characteristic
is the same as the expected value of (1/y)².] Minimizing the mean square reciprocal
quality characteristic implies maximizing μ and minimizing σ², which is the desired
thing to do. However, if we were to try to maximize the mean square quality characteristic, which is equal to (μ² + σ²), we would end up maximizing both μ and σ²,
which is not a desirable thing to do.
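The approximation quoted above follows from a second-order Taylor expansion of 1/y² about the mean; the lines below sketch that reasoning under the assumption that the relative deviation e/μ is small. This is offered as a plausibility argument, not as a reproduction of the book's own derivation.

% Write y = \mu + e with E[e] = 0, E[e^2] = \sigma^2, and assume |e/\mu| is small.
\begin{align*}
\frac{1}{y^{2}}
  &= \frac{1}{\mu^{2}}\Bigl(1 + \frac{e}{\mu}\Bigr)^{-2}
   \approx \frac{1}{\mu^{2}}\Bigl(1 - \frac{2e}{\mu} + \frac{3e^{2}}{\mu^{2}}\Bigr), \\
E\!\left[\frac{1}{y^{2}}\right]
  &\approx \frac{1}{\mu^{2}}\Bigl(1 + \frac{3\sigma^{2}}{\mu^{2}}\Bigr).
\end{align*}
% Minimizing the mean square reciprocal characteristic therefore pushes \mu up
% and \sigma^2 down at the same time.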
Signed-Target Type Problems

For signed-target type problems, the quality characteristic can take positive or negative
values and the target is finite, usually zero. The appropriate S/N ratio is

    η = −10 log₁₀ σ²

where σ² is the variance of the quality characteristic. Note that this type of problem occurs relatively less frequently compared to the
nominal-the-best type problems.
Fraction-Defective Type Problems

This is the case when the quality characteristic, denoted by p, is a fraction taking values
between 0 and 1. Obviously, the best value for p is zero. Also, there is no adjustment
factor for these problems. When the fraction defective is p, on the average, we have to
manufacture 1/(1−p) pieces to produce one good piece. Thus, for every good piece
produced, there is a waste and, hence, a loss that is equivalent to the cost of processing
{1/(1−p) − 1} = p/(1−p) pieces. Thus, the quality loss is given by

    Q = k p/(1−p)                    (5.16)

where k is the cost of processing one piece. Ignoring k, we obtain the objective function
to be maximized in the decibel scale as

    η = −10 log₁₀ [ p/(1−p) ]                    (5.17)

Note that the range of possible values of Q is 0 to ∞, but the range of possible values of η
is −∞ to ∞. Therefore, the additivity of factor effects is better for η than for Q. The S/N
ratio for the fraction-defective problems is the same as the familiar logit transform, which
is commonly used in biostatistics for studying drug response.
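A brief sketch of the fraction-defective S/N ratio of Equation (5.17); the defect fractions used below are illustrative.

import math

def sn_fraction_defective(p):
    """S/N ratio for fraction defective p: eta = -10*log10(p/(1-p)), in dB."""
    if not 0.0 < p < 1.0:
        raise ValueError("p must lie strictly between 0 and 1")
    return -10 * math.log10(p / (1.0 - p))

for p in (0.5, 0.1, 0.01):
    print(f"p = {p:5.2f}  ->  eta = {sn_fraction_defective(p):6.2f} dB")
# p = 0.5 gives 0 dB; each 10-fold reduction in p/(1-p) adds about 10 dB.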
Ordered Categorical Type Problems

Here, the quality characteristic takes ordered categorical values. For example, after a
drug treatment we may observe a patient's condition as belonging to one of the following
categories: worse, no change, good, or excellent. In this situation, the extreme category,
excellent, is the most desired category. However, in some other cases, an intermediate
category is the most desired category. For analyzing data from ordered categorical
problems, we form cumulative categories and treat each category (or its complement, as the
case may be) as a fraction-defective type problem. We give an example of analysis of
ordered categorical data in Section 5.5.
Dynamic problems have even more variety than static problems because of the many
types of potential adjustments. Nonetheless, we use the general procedure described in
Sections 5.1 and 5.3 to derive the appropriate objective functions or the S/N ratio.
Dynamic problems can be classified according to the nature of the quality characteristic
and the signal factor, and, also, the ideal relationship between the signal factor and the
quality characteristic. Some common types of dynamic problems and the
corresponding S/N ratios are given below (see also Taguchi [T1], Taguchi and Phadke [T6], and
Phadke and Dehnad [P4]).
Continuous-Continuous (C-C) Type Problems

Here, both the signal factor and the quality characteristic take positive or negative
continuous values. When the signal is zero, that is, M = 0, the quality characteristic is also
zero, that is, y = 0. The ideal function for these problems is y = M, and a scaling
factor exists that can be used to adjust the slope of the relationship between y and M.
This is one of the most common types of dynamic problems. The servomotor
example described in Section 5.2 is an example of this type. Some other examples are
analog telecommunication, design of test sets (such as voltmeter and flow meter), and
design of sensors (such as the crankshaft position sensor in an automobile).
We now derive the S/N ratio for the C-C type problems. As described in
Section 5.2, let yᵢⱼ be the observed quality characteristic for the signal value Mᵢ and noise
condition xⱼ. The quality loss without adjustment is given by

    Q = (k/mn) Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ (yᵢⱼ − Mᵢ)²

The quality loss has two components. One is due to the deviation from linearity and
the other is due to the slope being other than one. Of the two components, the latter
can be eliminated by adjusting the slope. In order to find the correct adjustment for
given control factor settings, we must first estimate the slope of the best linear
relationship between yᵢⱼ and Mᵢ. Consider the regression of yᵢⱼ on Mᵢ given by

    yᵢⱼ = βMᵢ + eᵢⱼ                    (5.18)

where β is the slope and eᵢⱼ is the error. The slope β can be estimated by the least squares criterion as follows:
    ∂/∂β [ Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ (yᵢⱼ − βMᵢ)² ] = 0                    (5.19)

that is,

    Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ (yᵢⱼ − βMᵢ) Mᵢ = 0                    (5.20)

that is,

    β = [ Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ yᵢⱼ Mᵢ ] / [ Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ Mᵢ² ]                    (5.21)
Note that Equation (5.20) is nothing but a special case of the general Equation
(5.7) for determining the best adjustment. Here, h_R(yᵢⱼ) = (1/β)yᵢⱼ = y′ᵢⱼ and β is the
same as the index R. Also note that the least squares criterion is analogous to the
criterion of making the error [(1/β)yᵢⱼ − Mᵢ] orthogonal to the signal, Mᵢ.
The quality loss after adjustment is given by

    Qa = (k/mn) Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ (y′ᵢⱼ − Mᵢ)²
       = (k/mn) (1/β²) Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ (yᵢⱼ − βMᵢ)²
       = k (mn − 1) σₑ² / (mn β²)

where the error variance, σₑ², is given by

    σₑ² = [1/(mn − 1)] Σᵢ₌₁ᵐ Σⱼ₌₁ⁿ (yᵢⱼ − βMᵢ)²

Apart from constants, Qa is proportional to σₑ²/β², so minimizing Qa is equivalent to maximizing the S/N ratio

    η = 10 log₁₀ (β²/σₑ²)                    (5.22)

Note that β is the change in y produced by a unit change in M. Thus, β² quantifies the
effect of the signal. The denominator σₑ² is the effect of noise. Hence, η is called the S/N ratio. Note that σₑ² includes sensitivity to noise factors as well as the nonlinearity of
the relationship between y and M. Thus, maximization of η leads to a reduction in non-
linearity along with the reduction in sensitivity to noise factors.
In summary, the C-C type problems are optimized by maximizing η given by
Equation (5.22). After maximization of η, the slope is adjusted by a suitable scaling
factor. Note that any control factor that has no effect on η but an appreciable effect on
β can serve as a scaling factor.
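To make Equations (5.21) and (5.22) concrete, here is a small Python sketch for a C-C dynamic problem; the signal levels and observations are invented, and a real analysis would use the measured yᵢⱼ from the matrix experiment.

import math

def sn_dynamic_cc(signals, observations):
    """S/N ratio for a continuous-continuous dynamic problem.

    `observations[i][j]` is y_ij, the response at signal M_i under noise
    condition j.  Returns (beta, eta), where beta is the least-squares slope
    of y = beta*M and eta = 10*log10(beta^2 / sigma_e^2) in dB.
    """
    m = len(signals)
    n = len(observations[0])
    num = sum(observations[i][j] * signals[i] for i in range(m) for j in range(n))
    den = sum(signals[i] ** 2 for i in range(m) for j in range(n))
    beta = num / den
    sse = sum((observations[i][j] - beta * signals[i]) ** 2
              for i in range(m) for j in range(n))
    sigma_e2 = sse / (m * n - 1)  # error variance
    return beta, 10 * math.log10(beta ** 2 / sigma_e2)

# Hypothetical servomotor data: commanded displacement M and observed y
# under two noise conditions.
M = [1.0, 2.0, 3.0]
y = [[1.05, 0.97], [2.10, 1.92], [3.20, 2.85]]
beta, eta = sn_dynamic_cc(M, y)
print(f"slope beta = {beta:.3f}, eta = {eta:.2f} dB")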
Although we have shown the optimization for the target function y = M, it is still
valid for all target functions that can be obtained by adjusting the slope—that is, the
optimization is valid for any target function of the form y = β₀M, where β₀ is the
desired slope.
Another variation of the C-C type target function is

    y = α₀ + β₀M                    (5.23)

In this case, we must consider two adjustments: one for the intercept and the other for
the slope. One might think of this as a vector adjustment factor. The S/N ratio to be
maximized for this problem can be shown to be η, given by Equation (5.22). The two
adjustment factors should be able to change the intercept and the slope, and should
have no effect on η.
A temperature controller where the input temperature setting is continuous, while the
output (which is the ON or OFF state of the heating unit) is discrete is an example of
the C-D type problem. Such problems can be divided into two separate problems: one
for the ON function and the other for the OFF function. Each of these problems can
be viewed as a separate continuous-continuous or nominal-the-best type problem. The
design of a temperature control circuit is discussed in detail in Chapter 9.
The familiar digital-to-analog converter is an example of the D-C type problem. Here
again, we separate the problems of converting the 0 and 1 bits into the respective
analog output levels; each can then be treated as a separate nominal-the-best type problem.
Digital communication systems, computer operations, etc., where both the signal factor
and the quality characteristic are digital, are examples of the D-D type problem. Here,
the ideal function is that whenever 0 is transmitted, it should be received as 0, and
whenever 1 is transmitted, it should be received as 1. Let us now derive an
appropriate objective function for minimizing sensitivity to noise.
Here, the signal values for testing are M₀ = 0 and M₁ = 1. Suppose under
certain settings of control factors and noise conditions, the probability of receiving 1,
when 0 is transmitted, is p (see Table 5.3). Thus, the average value of the received
signal, which is the same as the quality characteristic, is p and the corresponding
variance is p(1−p). Similarly, suppose the probability of receiving 0, when 1 is
transmitted, is q. Then, the average value of the corresponding received signal is (1−q) and the
corresponding variance is q(1−q). The ideal transmit-receive relationship and the
observed transmit-receive relationship are shown graphically in Figure 5.3. Although
the signal factor and the quality characteristic take only 0-1 values, for convenience we
represent the transmit-receive relationship as a straight line. Let us now examine the
possible adjustments.
TABLE 5.3 TRANSMIT-RECEIVE PROBABILITIES

                         Received 0    Received 1    Mean     Variance
Transmitted signal 0        1−p            p           p       p(1−p)
Transmitted signal 1         q            1−q         1−q      q(1−q)
If it is possible to measure the underlying continuous variable at the receiving terminal, we should prefer it. In that case, the problem can be classified as
a C-D type and dealt with by the procedure described earlier. Here, we consider the
situation when it is not possible to measure the continuous variable.
Figure 5.4 shows possible distributions of the continuous variable received at the
output terminal when 0 or 1 is transmitted. If the threshold value is R₁, the errors of
0 would be far more likely than the errors of 1. However, if the threshold is moved to
R₂, we would get approximately equal errors of 0 and 1. The effect of this adjustment
is also to reduce the total error probability (p + q).
Figure 5.4 (a) When the threshold is at R₁, the error probabilities p and q are not equal.
(b) By adjusting the threshold to R₂, we can make the two error probabilities equal, i.e., p′ = q′.
How does one determine p′ (which is equal to q′) corresponding to the observed
error rates p and q? The relationship between p′, p, and q will obviously depend on
the continuous distribution. However, we are considering a situation where we do not
have the ability to observe the distributions. Taguchi has suggested the use of the
following relationship for estimating p′ after equalization or leveling:

    −2 × 10 log₁₀ [ p′/(1 − p′) ] = −10 log₁₀ [ p/(1 − p) ] − 10 log₁₀ [ q/(1 − q) ]                    (5.24)
The two terms on the right hand side of Equation (5.24) are fraction-defective type S/N
ratios for the separate problems of the errors of 0 and errors of 1. Equation (5.24)
asserts that the effect of equalization is to make the two S/N ratios equal to the average
of the S/N ratios before equalization.
Solving Equation (5.24) for p′ gives

    p′ = [ 1 + √( (1/p − 1)(1/q − 1) ) ]⁻¹                    (5.25)

The S/N ratio for the D-D problem is then

    η = 10 log₁₀ [ (1 − 2p′)² / (p′(1 − p′)) ]

Observe that (1 − 2p′) is the difference of the averages of the received signal when 0
and 1 are transmitted. The quantity p′(1 − p′) is the variance of the received signal.
So η measures the ability of the communication system to discriminate between 0 and
1 at the receiving terminal.
The strategy to optimize a D-D system is to maximize η, and then use a control
factor which has no effect on η, but can alter the ratio p:q to equalize the two error
probabilities.
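A short sketch of this leveling computation: Equation (5.25) gives the equalized error probability p′ from the observed error rates, and the S/N ratio then follows from the (1 − 2p′)²/[p′(1 − p′)] form quoted above. The error rates used here are illustrative.

import math

def equalized_error_probability(p, q):
    """Equalized error probability p' from observed error rates p and q."""
    return 1.0 / (1.0 + math.sqrt((1.0 / p - 1.0) * (1.0 / q - 1.0)))

def sn_digital_digital(p, q):
    """D-D S/N ratio: eta = 10*log10((1 - 2p')^2 / (p'(1 - p'))), in dB."""
    p_eq = equalized_error_probability(p, q)
    return 10 * math.log10((1 - 2 * p_eq) ** 2 / (p_eq * (1 - p_eq)))

# Illustrative error rates: 2% of transmitted 0s received as 1,
# 8% of transmitted 1s received as 0.
p, q = 0.02, 0.08
print(f"p' = {equalized_error_probability(p, q):.4f}, "
      f"eta = {sn_digital_digital(p, q):.2f} dB")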
I   : 0–3 defects
II  : 4–30 defects
III : 31–300 defects
IV  : 301–1000 defects
V   : 1001 and more defects
Thus, among the nine observations of experiment 2, five belong to category I, two to
category II, two to category III, and none to categories IV and V. The categorical
data for the 18 experiments are listed in Table 5.4.
We will now describe Taguchi's accumulation analysis method [T7, T1], which
is an effective method for determining optimum control factor settings in the case of
ordered categorical data. (See Nair [N1] for an alternate method of analyzing ordered
categorical data.) The first step is to define cumulative categories as follows:
(I) = I : 0–3 defects; (II) = I + II : 0–30 defects; (III) = I + II + III : 0–300 defects; (IV) = I + II + III + IV : 0–1000 defects; (V) = I + II + III + IV + V : all observations.
The numbers of observations in the cumulative categories for the eighteen experiments
are listed in Table 5.4. For example, the number of observations in the five cumulative
categories for experiment 2 are 5, 7, 9, 9, and 9, respectively.
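Forming the cumulative categories is just a running sum over the category counts, as this small sketch shows for the experiment 2 counts quoted above.

from itertools import accumulate

def cumulative_categories(counts):
    """Running totals: (I) = I, (II) = I + II, (III) = I + II + III, and so on."""
    return list(accumulate(counts))

# Category counts I..V for experiment 2 (from Table 5.4).
experiment_2 = [5, 2, 2, 0, 0]
print(cumulative_categories(experiment_2))  # [5, 7, 9, 9, 9]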
The second step is to determine the effects of the factor levels on the probability
distribution over the defect categories. This is accomplished in a manner analogous to the
determination of the factor effects described in Chapter 3. To determine the effect of
temperature at level A₁, we identify the six experiments conducted at that level and sum the
observations in each cumulative category as follows:
Cumulative Categories
(I) (II) (III) (IV) (V)
Experiment 1 9 9 9 9 9
Experiment 2 5 7 9 9 9
Experiment 3 1 1 7 9 9
Experiment 10 9 9 9 9 9
Experiment 11 8 9 9 9 9
Experiment 12 2 5 8 8 9
Total 34 40 51 53 54
The number of observations in the five cumulative categories for every factor
level are listed in Table 5.5. Note that the entry for the cumulative category (V) is
equal to the total number of observations for the particular factor level and that entry is
uniformly 54 in this case study. If we had used the 2-level column, namely column 1,
or if we had used the dummy level technique (described in Chapter 7), the entry in
category (V) would not be 54. The probabilities for the cumulative categories shown
in Table 5.5 are obtained by dividing the number of observations in each cumulative
category by the entry in the last cumulative category for that factor level, which is 54
for the present case.
TABLE 5.4 CATEGORIZED SURFACE DEFECT DATA AND CUMULATIVE CATEGORY COUNTS

Expt. No.   I  II  III  IV  V     (I)  (II)  (III)  (IV)  (V)
 1          9   0    0   0  0      9     9      9     9    9
 2          5   2    2   0  0      5     7      9     9    9
 3          1   0    6   2  0      1     1      7     9    9
 4          0   8    1   0  0      0     8      9     9    9
 5          0   1    0   4  4      0     1      1     5    9
 6          1   0    4   1  3      1     1      5     6    9
 7          0   1    1   4  3      0     1      2     6    9
 8          3   0    2   1  3      3     3      5     6    9
 9          0   0    0   4  5      0     0      0     4    9
10          9   0    0   0  0      9     9      9     9    9
11          8   1    0   0  0      8     9      9     9    9
12          2   3    3   0  1      2     5      8     8    9
13          4   2    2   1  0      4     6      8     9    9
14          2   3    4   0  0      2     5      9     9    9
15          0   1    1   1  6      0     1      2     3    9
16          3   4    2   0  0      3     7      9     9    9
17          2   1    0   2  4      2     3      3     5    9
18          0   0    0   2  7      0     0      0     2    9
The third step in data analysis is to plot the cumulative probabilities. Two useful
plotting methods are the line plots shown in Figure 5.5 and the bar plots shown in
Figure 5.6. From both figures, it is apparent that temperature (factor A) and pressure
(factor B) have the largest impact on the cumulative distribution function for the surface
defects. The effects of the remaining four factors are small compared to temperature
and pressure. Among the factors C, D, E, and F, factor F has a somewhat larger effect.
In the line plots of Figure 5.5, for each control factor we look for a level for
which the curve is uniformly higher than the curves for the other levels of that factor.
[TABLE 5.5 FACTOR EFFECTS FOR THE CATEGORIZED SURFACE DEFECT DATA lists, for each factor level, the number of observations and the corresponding probability in each cumulative category (I) through (V).]
A uniformly higher curve implies that the particular factor level produces more
observations with lower defect counts; hence, it is the best level. In Figure 5.6, we look for
a larger height of category I and a smaller height of category V. From the two figures, it
is clear that A₁, B₁, and F₂ are the best levels for the respective factors. The choice
of the best level is not as clear for the remaining three factors. However, the curves
for the factor levels C₂, D₃, and E₃ lie uniformly lower among the curves for all
levels of the respective factors, and these levels must be avoided. Thus, the optimum
settings suggested by the analysis are A₁B₁(C₁/C₃)(D₁/D₂)(E₁/E₂)F₂. By comparing
Figures 5.5, 5.6, and 4.5, it is apparent that the conclusions based on the ordered
categorical data are consistent with the conclusions based on the actual counts, except
for factors C, D, and E, whose effects are rather small.
The next step in the analysis is to predict the distribution of defect counts under
the starting and optimum conditions. This can be achieved analogous to the procedure
described in Chapter 3, except that we must use the omega transform, also known as
[Figure 5.5 consists of one panel per control factor (temperature, pressure, nitrogen, silane, settling time, cleaning method); each panel plots the cumulative probability (0 to 1.0) against the cumulative categories (I) through (V), with one curve per factor level.]
Figure 5.5 Line plots of the factor effects for the categorized surface defect data.
[Figure 5.6 consists of one panel per control factor; each panel shows bar plots of the cumulative probabilities (0 to 1.00) for the levels of that factor, with the bars subdivided by defect category. Key: I = 0–3, II = 4–30, III = 31–300, IV = 301–1000, V = more than 1000 defects.]
Figure 5.6 Bar plots of the factor effects for the categorized surface defect data.
the logit transform, of the probabilities for the cumulative categories (see
Taguchi [T1]). The omega transform for probability p is given by the following
equation:

    Ω = 10 log₁₀ [ p/(1 − p) ]   (dB)

Applying the additive model to the omega-transformed probabilities, the predicted
value for cumulative category (I) under the optimum conditions works out to 5.42 dB.
Then, by the inverse omega transform, the predicted probability for category (I) is 0.78.
Predicted probabilities for the cumulative categories (II), (III) and (IV) can be obtained
analogously. Prediction is obviously 1.0 for category (V). The predicted probabilities
for the cumulative categories for the starting and the optimum settings are listed in
Table 5.6. These probabilities are also plotted in Figure 5.7. It is clear that the
recommended optimum conditions give much higher probabilities for the low defect count
categories when compared to the starting conditions. The probability of 0-3 defects,
(category I), is predicted to increase from 0.23 to 0.78 by changing from starting to the
optimum conditions. Likewise, the probability for the 1001 and more category reduces
from 0.37 to 0.01.
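A small sketch of the omega transform and its inverse; as a check, 5.42 dB maps back to a probability of about 0.78, in line with the prediction quoted above.

import math

def omega(p):
    """Omega (logit) transform of a probability, in dB: 10*log10(p/(1-p))."""
    return 10 * math.log10(p / (1.0 - p))

def inverse_omega(db):
    """Probability corresponding to an omega value given in decibels."""
    ratio = 10 ** (db / 10.0)
    return ratio / (1.0 + ratio)

print(f"inverse_omega(5.42) = {inverse_omega(5.42):.2f}")  # about 0.78
print(f"omega(0.78) = {omega(0.78):.2f} dB")               # about 5.5 dB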
5.6 SUMMARY
• The quadratic loss function is ideally suited for evaluating the quality level of a
product as it is shipped by a supplier to a customer. It typically has two
components: one related to the deviation of the product's function from the target, and
the other related to the sensitivity to noise factors.
• The S/N ratio, developed by Genichi Taguchi, is a predictor of quality loss after making
certain simple adjustments to the product's function. It isolates the sensitivity of
the product's function to noise factors. In Robust Design we use the S/N ratio as
the objective function to be maximized.
• Benefits of using the S/N ratio for optimizing a product or process design are:
— Optimization does not depend on the target mean function. Thus, the design
can be reused in other applications where the target is different.
— Additivity of the factor effects is good when an appropriate S/N ratio is used.
Otherwise, large interactions among the control factors may occur, resulting
in high cost of experimentation and potentially unreliable results.
TABLE 5.6 PREDICTED OMEGA VALUES AND PROBABILITIES FOR THE CUMULATIVE CATEGORIES

                             Omega values (dB)              Predicted probabilities
Condition                    (I)     (II)    (III)   (IV)   (I)   (II)  (III)  (IV)  (V)
Optimum (A₁B₂C₁D₃E₂F₂)       5.42    6.98   14.53   19.45   0.78  0.83  0.97  0.99  1.00
Starting (A₂B₂C₁D₃E₁F₁)     −3.68   −1.41    0.04    2.34   0.23  0.42  0.50  0.63  1.00
[Figure 5.7 plots the predicted cumulative probabilities of Table 5.6 against the cumulative categories (I) through (V), with one curve for the starting condition and one for the optimum condition.]
• Robust Design problems can be divided into two broad classes: static problems,
where the target value for the quality characteristic is fixed, and dynamic problems,
where the quality characteristic is expected to follow the signal factor.
• Common types of static problems and the corresponding S/N ratios are summarized
in Table 5.7.
• Common types of dynamic problems and the corresponding S/N ratios are
summarized in Table 5.8.
• For the problems where an adjustment factor does not exist, the optimization is
done by simply maximizing the S/N ratio.
• For the problems where an adjustment factor exists, the problem can be generically
stated as minimize sensitivity to noise factors while keeping the mean function on
target. By using S/N ratio, these problems can be converted into unconstrained
optimization problems and solved by the following two-step procedure:
2. For factors that have a significant effect on η, select levels that maximize η.
3. Select any factor that has no effect on η but a significant effect on the mean
function as an adjustment factor. Use it to bring the mean function on target.
(In practice, we must sometimes settle for a factor that has a small effect on
η but a significant effect on the mean function as an adjustment factor.)
4. For factors that have no effect on η and the mean function, we can choose
any level that is convenient from other considerations such as cost or other
quality characteristics.
1 "
Smaller-the-better
type
0<y<<x. 0 None
r| = -10 log10 ."£i .
Nominal-the-best U2
type
0<y <°° Nonzero, finite Scaling r| = 10 log10 o^y
1 "
o2 = -LrI(yi-n)2
Larger-the-better
type
0<y<o° OO None
\\ = -10 log,0 nky*
Signed-target — <x><y <<» Finite, usually 0 Leveling r| = -10 log10 o2
a2 = -LrI(y,-H)2
0
Fraction defective 0<p<l None
r| = -10 log10 p ll-pj
Ordered categorical Use accumulation
analysis. See Section 5.5.
B2
Continuous-
continuous — oo<M <°° -<x><y <oo; y = M Scaling ti = 10 log10 *y
y = 0 when
(C-C) M=0
B=
I I(M,2)
i-i j=i
Digital-
digital Binary; 0, 1 Binary; 0, 1 y = M Leveling r| = 10 log,, Ip'(i-p')J
(l-2p')2
(D-D)
p= iWi-p._bi p <?
equalized error probability
p = error probability of output
being 1 when input is 0
q = error probability of output
being 0 input is 1
Chapter 6
ACHIEVING ADDITIVITY
The goal of a Robust Design project is to determine the best levels of each of the
various control factors in the design of a product or a manufacturing process. To select the
best levels (or settings) of the control factors, we must first be able to predict the
product's performance and robustness for any combination of the settings. Further, the
prediction must be valid not only under the laboratory conditions but also under
manufacturing and customer conditions.
As pointed out in Chapter 3, if the effects of the control factors on performance
and robustness are additive (that is, they follow the superposition principle), then we
can predict the product's performance for any combination of levels of the control
factors by knowing only the main effects of the control factors. The experimental effort
needed for estimating these effects is rather small. On the other hand, if the effects are
not additive (that is, interactions among the control factors are strong), then we must
conduct experiments under all combinations of control factor settings to determine the
best combination. This is clearly expensive, especially when the number of control
factors is large.
A more important reason exists for seeking additivity. The conditions under
which experiments are conducted can also be considered as a control factor. There are
three types of these conditions: laboratory, manufacturing, and customer usage. If
strong interactions among the control factors are observed during laboratory
experiments, these control factors are also likely to interact with conditions of
experimentation. Consequently, the levels found optimum in the laboratory may not prove to be
the best under actual manufacturing and customer usage conditions.
• Section 6.3 describes the process of selecting the S/N ratio with illustrative
examples.
• Section 6.4 discusses the selection of control factors and their levels.
The discussion in this chapter is based on a paper by Phadke and Taguchi [P7].
Many Robust Design experiments are aimed at improving the yield of manufacturing
processes. However, if yield is used as the quality characteristic in these experiments,
it is possible to lose monotonicity, which will lead to an unnecessarily large number of
experiments. As an example, consider a photolithography process used in integrated
circuit manufacturing to print lines of a certain width. The percentage of microchips
with line widths within the limits 2.75 to 3.25 micrometers represents the yield of good
microchips. This is the customer-observable response we want to maximize. Let us
examine the problem of using this response as the quality characteristic.
Exposure and develop time are two important control factors for this photolithography
process. For certain settings of these factors, called initial levels, the yield is 40
percent as indicated in Table 6.1. When the exposure time alone is increased to its
high setting, the yield becomes 75 percent. Also, when the develop time alone is
increased to its high setting, the yield increases to 75 percent. Thus, we may anticipate
that increasing both the exposure time and the develop time would improve the yield
beyond 75 percent. But when that is done, the yield drops down to 40 percent. This
is the lack of monotonicity. Interactions among control factors are critically important
when there is lack of monotonicity. In such situations, we need to study all
combinations of control factors to find the best settings. With only two control factors,
studying all combinations is not a major issue. But with eight or ten control factors, the
experimental resources needed would be prohibitively large.
What is, then, a better quality characteristic for the photolithography process?
To answer this question, let us look at Table 6.2 which shows not only the yield (that
is, the percentage of chips with the desired line width) but also the percentage of chips
with line widths smaller or larger than the desired line width. Such data are called
ordered categorical data. The reason for getting low yield, when both the exposure and
the develop time are set at high levels, becomes clear from this table: the effect of
each of these two control factors is to increase the overall line width. Consequently,
Sec. 6.2 Examples of Quality Characteristics 137
recording the data in all three categories (small, desired, and large) provides a better
quality characteristic than only yield by itself. The monotonicity can be observed
through the cumulative categories—small, and small plus desired—as shown in
Table 6.2.
TABLE 6.1 OBSERVED YIELD FOR DIFFERENT EXPOSURE AND DEVELOP TIMES
Expt. No.   Exposure   Develop Time   Yield (%)*
1           Initial    Initial        40
2           High       Initial        75
3           Initial    High           75
4           High       High           40
[Table 6.2 lists, for the four experiments, the percentage of chips with line width smaller than, within, and larger than the desired range, together with the cumulative categories small and small plus desired.]
There are many chemical processes that begin with a chemical A, which after reaction,
becomes chemical B and, if the reaction is allowed to continue, turns into chemical C.
If B is the desired product of the chemical process, then considering the yield of B as a
quality characteristic is a poor choice. As in the case of photolithography, the yield is
not a monotonic characteristic. A better quality characteristic for this experiment is the
concentration of each of the three chemicals. The concentration of A and the
concentration of A plus B possess the needed monotonicity property.
Basic types of Robust Design problems and the associated S/N ratios were described in
Chapter 5. A majority of Robust Design projects fall into one of these basic types of
problems. This section gives three examples to illustrate the process of classification
of Robust Design problems. Two of these examples also show how a complex
problem can be broken down into a composite of several basic types of problems.
Heat exchangers are used to heat or cool fluids. For example, in a refrigerator a heat
exchanger coil is used inside the refrigerator compartment to transfer the heat from the
air in the compartment to the refrigerant fluid. This leads to lowering of the
temperature inside the compartment. Outside the refrigerator, the heat from the refrigerant is
transferred to the room air through another heat exchanger.
In optimizing the designs of heat exchangers and other heat-transfer equipment,
defining the reference temperature is critical so that the optimization problem can be
correctly classified.
Consider the heat exchanger shown in Figure 6.1, which is used to cool the fluid
inside the inner tube. The inlet temperature of the fluid to be cooled is T1. As the
fluid moves through the tube, it loses heat progressively to the fluid outside the tube;
its outlet temperature is T2. The inlet and outlet temperature for the coolant fluid are
T3 and T4, respectively. Let the target outlet temperature for the fluid being cooled be
T0. Also, suppose the customer's requirement is that |T2 - T0| < 10 °C. What is
the correct quality characteristic and S/N ratio for this Robust Design problem?
One might be tempted to use the deviation of the outlet temperature from its target, y = T2 - T0, as the quality characteristic and treat this as a smaller-the-better type problem. The difficulty with this formulation of the problem is that by taking the square of y the positive and negative
deviations in temperature are treated similarly. Consequently, interactions become
important. This can be understood as follows: If y is too large because T2 is too large
compared to T0, then y can be reduced by increasing the coil length. Note that a
longer coil length leads to more cooling of the fluid and, hence, smaller T2. On the
contrary, if y is too large because T2 is too small, then y can be reduced by decreasing
the coil length. Thus, there are two opposite actions that can reduce y, but they cannot
be distinguished by observing y. Therefore, y is not a good quality characteristic, and
this problem should not be treated as smaller-the-better type.
Here, the proper reference temperature is T3 because it represents the lowest
temperature that could be achieved by the fluid inside the tube. Thus, the correct quality
characteristic is y' = T2 - T3. Note that y' is always positive. Also, when the mean
of y' is zero, its variance must also be zero. Hence, the problem should be classified
as a nominal-the-best type with the target value of y' equal to T0-T3. This
formulation does not have the complication of interaction we described with y as the quality
characteristic. Furthermore, if the target temperature T0 were changed, the information
obtained using y' as the quality characteristic would still be useful. All that is
necessary is to adjust the mean temperature on the new target. However, if y were used as
the quality characteristic, the design would have to be reoptimized when T0 is changed,
which is undesirable. This loss of reusability is one of the reasons for lower R&D
productivity.
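As a small illustration of the difference between the two formulations, the sketch below scores hypothetical outlet-temperature data with the usual smaller-the-better and nominal-the-best S/N ratio definitions of Chapter 5. The temperature values, the function names, and the use of Python with NumPy are assumptions made here only for illustration.

    import numpy as np

    def sn_smaller_the_better(y):
        # eta = -10 log10( mean of y^2 ); the sign of y is lost when squaring
        y = np.asarray(y, dtype=float)
        return -10.0 * np.log10(np.mean(y ** 2))

    def sn_nominal_the_best(y):
        # eta = 10 log10( mean^2 / variance ); the mean is later put on target
        # with an adjustment factor
        y = np.asarray(y, dtype=float)
        return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

    # Hypothetical outlet temperatures T2 (deg C) observed under several noise
    # conditions, with coolant inlet T3 = 10 and target T0 = 25.
    T2 = np.array([24.0, 26.5, 23.5, 27.0])
    T3, T0 = 10.0, 25.0

    print(sn_smaller_the_better(T2 - T0))   # y  = T2 - T0: two opposite fixes look alike
    print(sn_nominal_the_best(T2 - T3))     # y' = T2 - T3: target value is T0 - T3

Because y' separates the mean from the variability, a later change of the target temperature only requires re-adjusting the mean onto the new target, which is the reusability argument made above.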
A schematic diagram of a paper feeder is shown in Figure 6.2(a). The two main
defects that arise in paper feeding are: no sheet fed or multiple sheets fed. A
fundamental characteristic that controls paper feeding is the normal force needed to pick up a
sheet. Thus, we can measure the threshold force, F1, to pick up just one sheet and the
threshold force, F2, to pick up two sheets. Note that the normal force is a control
factor and that F1 and F2 meet the guidelines listed in Section 6.1 and are better quality
characteristics compared to X. By making F1 as small as possible and F2 as large as
possible, we can widen the operating window F2 - F1 [see Figure 6.2(b)], reduce
both types of paper feeding defects, and thus increase X. The idea of enlarging the
operating window as a means of improving product reliability is due to Clausing [C2].
Figure 6.2 (a) Schematic diagram of a paper feeder: a normal force presses the paper stack against the feed mechanism and guide. (b) The operating window between the threshold force F1 for feeding a single sheet and the threshold force F2 for feeding two sheets.
Here, the appropriate S/N ratios for F1 and F2 are, respectively, the smaller-the-
better type and the larger-the-better type. Note that the two threshold forces comprise
a vector quality characteristic. We must measure and optimize both of them. This is
what we mean by completeness of a quality characteristic.
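A minimal sketch of how the two components of this vector characteristic might be scored, assuming the usual smaller-the-better and larger-the-better S/N definitions of Chapter 5 and purely hypothetical threshold-force measurements:

    import numpy as np

    # Hypothetical threshold forces (arbitrary units) measured under several
    # noise conditions.
    F1 = np.array([1.1, 0.9, 1.3])    # force to feed one sheet: smaller-the-better
    F2 = np.array([3.8, 4.2, 3.5])    # force to feed two sheets: larger-the-better

    sn_F1 = -10 * np.log10(np.mean(F1 ** 2))          # smaller-the-better
    sn_F2 = -10 * np.log10(np.mean(1.0 / F2 ** 2))    # larger-the-better
    print(sn_F1, sn_F2)   # both components of the vector characteristic are optimized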
Here, also, the two arrival times can be viewed as a vector quality characteristic.
Both times must be measured and optimized. If we omit one or the other, we cannot
guard against failure due to the paper getting twisted during transport. Also, by
optimizing for both the arrival times (that is, minimizing the variability of both the arrival
times and making their averages equal to each other), the design of each paper
transport module can be decoupled from other modules. Optimizing each of the paper-
feeding and paper-transport characteristics, described above, automatically optimizes X.
Thus, the problem of life improvement is broken down into several problems of
nominal-the-best, smaller-the-better, and larger-the-better types. It is quite obvious that
optimizing these separate problems automatically improves X, the number of pages
copied before failure.
Figure 6.4 (b) Desired frequency response function: gain plotted against frequency, with the customer-specified upper and lower limits on the gain and the band boundaries f1, f2, f3, and f4.
Figure 6.4(b) shows an example of a desired frequency response function and the
customer-specified upper and lower limits for the gain. If the customer-specified gain
limits are violated at any frequency, the filter is considered defective. From the
preceding discussion in this chapter, it should be apparent that counting the percentage
of defective filters, though easiest to measure, is not a good quality characteristic.
This problem can be solved more efficiently by dividing the frequencies into
several bands, say five bands as shown in Figure 6.4(b). For each of the middle three
bands, we must achieve gain equal to the gain specified by the frequency response
function. Therefore, we treat these as three separate nominal-the-best type problems.
For each band, we must identify a separate adjustment factor that can be used to set the
mean gain at the right level. Note that a resistor, capacitor, or some other component
in the circuit can serve as an adjustment factor. For any one of these bands, the
adjustment factors for the other two bands should be included as noise factors, along with
other noise factors, such as component tolerances and temperature. Then, adjusting the
gain in one band would have a minimal effect on the mean gain in the other bands.
For each of the two end bands, we must make the gain as small as possible.
Accordingly, these two bands belong to the class of smaller-the-better type problems.
Thus, we have divided a problem where we had to achieve a desired curvilinear
response into several familiar problems.
Additivity of the effects of the control factors is also influenced by the selection of the
control factors and their levels. By definition, the control factors are factors whose
levels can be selected by the designer. Next, it is important that each control factor
influence a distinct aspect of the basic phenomenon affecting the quality characteristic.
If two or more control factors affect the same aspect of the basic phenomenon, then the
possibility of interaction among these factors becomes high. When such a situation is
recognized, we can reduce or even eliminate the interaction through proper
transformation of the control factor levels. We refer to this transformation as sliding levels. The
following examples illustrate some of the important considerations in the selection of
control factors. A qualitative understanding of how control factors affect a product is
very important in their selection.
This rather simplified example brings out an important consideration in the selection of
control factors. Consider three drugs (A, B, and C) proposed by three scientists for
treating wheezing in asthmatic patients. Suppose the drug test results indicate that if
no drug is given, the patient's condition is bad. If only drug A is given, the patients
get somewhat better; if only drug B is given, the patients feel well; and if only drug C
is given, the patients feel moderately good. Can the three drugs be considered as three
separate control factors? If so, then a natural expectation is that by giving all three
drugs simultaneously, we can make the patients very well.
Suppose we take a close look at these drugs to find out that all three drugs
contain theophillin as an active ingredient, which helps dilate the bronchial tubes. Drug A
has 70 percent of full dose, drug B has 100 percent of full dose, and drug C has 150
percent of full dose. Administering all three drugs simultaneously implies giving 320
percent of full dose of theophillin. This could significantly worsen the patient's
condition. Therefore, the three drugs interact. The proper way to approach this problem is
to think of the theophillin concentration as a single control factor with four levels: 0
percent (no drug), 70 percent (drug A), 100 percent (drug B), and 150 percent (drug
C). Here the other ingredients of the three drugs should be examined as additional
potential control factors.
Photolithography Process
Aperture and exposure are two important control factors in the
photolithography process used in VLSI fabrication (see Phadke, Kackar, Speeney, and Grieco
[P5]). The width of the lines printed by photolithography depends on the depth of field
and the total light energy falling on the photoresist. The aperture alone determines the
depth of field. However, both aperture and exposure time influence the total light
energy. In fact, the total light energy for fixed light intensity is proportional to the
product of the aperture and exposure time. Thus, if we chose aperture and exposure
time as control factors, we would expect to see strong interaction between these two
factors. The appropriate control factors for this situation are aperture and total light
energy.
Suppose 1.2N, N, and 0.8N are used as three levels for light energy, where N
stands for the nominal level or the middle level. We can achieve these levels of light
energy for various apertures through the sliding levels of exposure as indicated in
Table 6.3. The level N of total light energy can be achieved by setting exposure at
120 when aperture is 1, exposure at 90 when aperture is 2, and exposure at 50 when
aperture is 3.
TABLE 6.3 SLIDING LEVELS OF EXPOSURE (PEP SETTING) FOR THE THREE LEVELS OF TOTAL LIGHT ENERGY

                      Total Light Energy
Aperture        0.8N         N         1.2N
   1             96         120         144
   2             72          90         108
   3             40          50          60
The photoresist thickness is similarly determined jointly by the photoresist
viscosity and the spin speed. Here too, sliding levels of spin speed should be
considered to minimize interactions (see Phadke, Kackar, Speeney, and Grieco [P5]).
Consider a matrix experiment where we assign only main effects to the columns
of an orthogonal array so that the interactions (2-factor, 3-factor, etc.) are confounded
with the main effects (see Chapter 7). There are two possibilities for the relative
magnitudes of the interactions:
1. If one or more of these interactions are large compared to the main effects, then
the main effects with which these interactions are confounded will be estimated
with large bias or error. Consequently, the observed response under the
predicted optimum conditions will not match the prediction based on the additive
model. Thus, in this case the verification experiment will point out that large
interactions are present.
2. On the contrary, if the interactions are small compared to the main effects, then
the observed response under the predicted optimum conditions will match the
prediction based on the additive model. Thus, in this case the verification
experiment will confirm that the main effects dominate the interactions.
Optimization studies where only one factor is studied at a time are not capable of
determining if interactions are or are not large compared to the main effects. Thus, it
is important to conduct multifactor experiments using orthogonal arrays. Dr. Taguchi
considers the ability to detect the presence of interactions to be the primary reason for
using orthogonal arrays to conduct matrix experiments.
Sections 6.2, 6.3, and 6.4 described the engineering considerations in selecting
the quality characteristics, S/N ratios, and control factors and their levels. Matrix
experiments using orthogonal arrays provide a test to see whether the above selections
can successfully achieve additivity. If additivity is indeed achieved, the matrix
experiment provides simultaneously the optimum values for the control factors. If additivity
is not achieved, the matrix experiment points it out so that one can re-examine the
selection of the quality characteristics, S/N ratios, and control factors and their levels.
6.6 SUMMARY
• Ability to predict the robustness (sensitivity to noise factors) of a product for any
combination of control factor settings is needed so that the best control factor levels
can be selected. The prediction must be valid, not only under the laboratory
conditions, but also under manufacturing and customer usage conditions.
• The additivity is influenced greatly by the choice of the quality characteristic, the
S/N ratio, and control factors and their levels.
• For products having feedback mechanisms, the open-loop, sensor, and
compensation modules should be optimized separately, and the modules should
then be integrated. Similarly, complex products should be divided into
suitable modules for optimization purposes.
• Although the final success of a product or a process may depend on the reliability
or the yield, such responses often do not make good quality characteristics. They
tend to cause strong interactions among the control factors as illustrated by the
photolithography example.
• Different types of variables can be used as quality characteristics: the output or the
response variable, and threshold values of suitable control factors or noise factors
for achieving a certain value of the output. When the output is discrete, such as
ON-OFF states, it becomes necessary to use the threshold values.
• Additivity of the effects of the control factors is also influenced by the selection of
control factors and their levels. If two or more control factors affect the same
aspect of the basic phenomenon, then the possibility of interaction among these
factors becomes high. When such a situation is recognized, the interaction can be
reduced or even eliminated through proper transformation of the control factor
levels (sliding levels). A qualitative understanding of how control factors affect a
product is important in their selection.
• Selecting a good quality characteristic, S/N ratio, and control factors and their
levels is essential in improving the efficiency of development activities. The
selection process is not always easy. However, when experiments are conducted using
orthogonal arrays, a verification experiment can be used to judge whether the
interactions are severe. When interactions are found to be severe, it is possible to
look for an improved quality characteristic, S/N ratio, and control factor levels, and,
thus, mitigate potential manufacturing problems and field failures.
• Matrix experiment based on an orthogonal array followed by a verification
experiment is a powerful tool for detecting lack of additivity. Optimizing a product
design one factor at a time does not provide the needed test for additivity.
Chapter 7
CONSTRUCTING
ORTHOGONAL ARRAYS
The benefits of using an orthogonal array to conduct matrix experiments as well as the
analysis of data from such experiments are discussed in Chapter 3. The role of
orthogonal arrays in a Robust Design experiment cycle is delineated in Chapter 4 with the
help of the case study of improving the polysilicon deposition process. This chapter
describes techniques for constructing orthogonal arrays that suit a particular case study
at hand.
This chapter describes how to construct an orthogonal array to meet these requirements
and consists of the following eleven sections:
• Section 7.1 describes how to determine the minimum number of rows for the
matrix experiment by counting the degrees of freedom.
• Section 7.2 lists a number of standard orthogonal arrays and a procedure for
selecting one in a specific case study. A novice to Robust Design may wish to
use a standard array that is closest to the needs of the case study, and if
necessary, slightly modify the case study to fit a standard array. The remaining
sections in this chapter describe various techniques of modifying the standard
orthogonal arrays to construct an array to fit the case study.
• Section 7.3 describes the dummy level method which is useful for assigning a
factor with number of levels less than the number of levels in a column of the
chosen orthogonal array.
• Section 7.4 discusses the compound factor method which can be used to assign
two factors to a single column in the array.
• Section 7.5 describes Taguchi's linear graphs and how to use them to assign
interactions to columns of the orthogonal array.
• Section 7.6 presents a set of rules for modifying a linear graph to fit the needs of
a case study.
• Section 7.7 describes the column merging method, which is useful for merging
columns in a standard orthogonal array to create columns with larger number of
levels.
• Section 7.8 describes process branching and shows how to use the linear graphs
to construct an appropriate orthogonal array for case studies involving process
branching.
• Section 7.9 presents three step-by-step strategies (beginner, intermediate, and
advanced) for constructing an orthogonal array.
• Section 7.10 describes the differences between Robust Design and classical
statistical experiment design.
• Section 7.11 summarizes the important points of this chapter.
The first step in constructing an orthogonal array to fit a specific case study is to count
the total degrees of freedom that tells the minimum number of experiments that must
be performed to study all the chosen control factors. To begin with, one degree of
freedom is associated with the overall mean regardless of the number of control factors
to be studied. A 3-level control factor counts for two degrees of freedom because for a
3-level factor, A, we are interested in two comparisons. Taking any one level, A1, as
the base level, we want to know how the response changes when we change the level
to A2 or A3. In general, the number of degrees of freedom associated with a factor is
equal to one less than the number of levels for that factor.
The degrees of freedom associated with interaction between two factors, called A
and B, are given by the product of the degrees of freedom for each of the two factors.
This can be seen as follows. Let nA and nB be the number of levels for factors A and
B. Then, there are nA nB total combinations of the levels of these two factors. From
that we subtract one degree of freedom for the overall mean, (nA - 1) for the degrees of
freedom of A, and (nB - 1) for the degrees of freedom of B. Thus, the degrees of
freedom for the A x B interaction are

    nA nB - 1 - (nA - 1) - (nB - 1) = (nA - 1)(nB - 1)
Example 1.
Let us illustrate the computation of the degrees of freedom. Suppose a case study has
one 2-level factor (A), five 3-level factors (B, C, D, E, F), and we are interested in
estimating the interaction A x B. The degrees of freedom for this experiment are then
computed as follows:
Overall mean                            1
A                     2 - 1       =     1
B, C, D, E, F         5 x (3 - 1) =    10
A x B                 (2 - 1) x (3 - 1) = 2
Total                                  14
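The counting rule is easy to mechanize. The following sketch (a hypothetical helper written for this discussion, not part of the book's procedure) reproduces the computation of Example 1:

    def total_degrees_of_freedom(factor_levels, interactions=()):
        """Minimum number of experiments: 1 for the overall mean, (levels - 1)
        per factor, and the product of those terms for each interaction."""
        dof = 1  # overall mean
        dof += sum(n - 1 for n in factor_levels.values())
        dof += sum((factor_levels[a] - 1) * (factor_levels[b] - 1)
                   for a, b in interactions)
        return dof

    # Example 1: one 2-level factor A, five 3-level factors B..F, interaction A x B
    levels = {"A": 2, "B": 3, "C": 3, "D": 3, "E": 3, "F": 3}
    print(total_degrees_of_freedom(levels, interactions=[("A", "B")]))  # -> 14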
Taguchi [T1] has tabulated 18 basic orthogonal arrays that we call standard orthogonal
arrays (see Appendix C). Most of these arrays can also be found in somewhat
different forms in one or more of the following references: Addelman [A1], Box, Hunter,
and Hunter [B3], Cochran and Cox [C3], John [J2], Kempthorne [K4], Plackett and
Burman [P8], Raghavarao [R1], Seiden [S3], and Diamond [D3]. In many case
studies, one of the arrays from Appendix C can be used directly to plan a matrix
experiment.
TABLE 7.1 STANDARD ORTHOGONAL ARRAYS

                                            Maximum Number of Columns
                                                at These Levels
Orthogonal   Number     Maximum Number
Array        of Rows    of Factors         2      3      4      5
L4              4            3             3      -      -      -
L8              8            7             7      -      -      -
L9              9            4             -      4      -      -
L12            12           11            11      -      -      -
L16            16           15            15      -      -      -
L'16           16            5             -      -      5      -
L18            18            8             1      7      -      -
L25            25            6             -      -      -      6
L27            27           13             -     13      -      -
L32            32           31            31      -      -      -
L'32           32           10             1      -      9      -
L36            36           23            11     12      -      -
L'36           36           16             3     13      -      -
L50            50           12             1      -      -     11
L54            54           26             1     25      -      -
L64            64           63            63      -      -      -
L'64           64           21             -      -     21      -
L81            81           40             -     40      -      -
Example 2:
A case study has seven 2-level factors, and we are only interested in main effects.
Here, there are a total of eight degrees of freedom—one for overall mean and seven for
the seven 2-level factors. Thus, the smallest array that can be used must have eight or
more rows. The array L8 has seven 2-level columns and, hence, fits this case study
perfectly—each column of the array will have one factor assigned to it.
Example 3:
A case study has one 2-level factor and six 3-level factors. This case study has 14
degrees of freedom—one for overall mean, one for the 2-level factor and twelve for the
six, 3-level factors. Looking at Table 7.1, we see that the smallest array with at least
14 rows is L16. But this array has fifteen 2-level columns. We cannot directly assign
these columns to the 3-level factors. The next larger array is L18 which has one 2-
level and seven 3-level columns. Here, we can assign the 2-level factor to the 2-level
column and the six 3-level factors to six of the seven 3-level columns, keeping one 3-
level column empty. Orthogonality of a matrix experiment is not lost by keeping one
or more columns of an array empty. So, L18 is a good choice for this experiment. In
a situation like this, we should take another look at the control factors to see if there is
an additional control factor to be studied, which we may have ignored as less
important. If one exists, it should be assigned to the empty column. Doing this allows us a
chance to gain information about this additional factor without spending any more
resources.
Example 4:
Suppose a case study has two 2-level and three 3-level factors. The degrees of
freedom for this case study are nine. However, L9 cannot be used directly because it has
no 2-level columns. Similarly, the next larger array L12 cannot be used directly
because it has no 3-level columns. This line of thinking can be extended all the way
through the array L27. The smallest array that has at least two 2-level columns and
three 3-level columns is L36. However, if we selected L36, we would be effectively
wasting 36-9 = 27 degrees of freedom, which would be very inefficient
experimentation. This raises the question of whether these standard orthogonal arrays are flexible
enough to be modified to accommodate various situations. The answer is yes, and the
subsequent sections of this chapter describe the different techniques of modifying
orthogonal arrays.
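The reasoning of Examples 2 through 4 can be summarized as a small search over the arrays of Table 7.1. The sketch below is only an illustration: it encodes a subset of Table 7.1 (array names, row counts, and column counts) and checks the degrees of freedom and column availability exactly as done in the examples.

    # Rows and column counts for a few standard orthogonal arrays, transcribed
    # from Table 7.1: name -> (rows, {number of levels: number of columns}).
    STANDARD_ARRAYS = {
        "L4":  (4,  {2: 3}),
        "L8":  (8,  {2: 7}),
        "L9":  (9,  {3: 4}),
        "L12": (12, {2: 11}),
        "L16": (16, {2: 15}),
        "L18": (18, {2: 1, 3: 7}),
        "L27": (27, {3: 13}),
        "L36": (36, {2: 11, 3: 12}),
    }

    def smallest_direct_fit(two_level, three_level):
        """Smallest listed array that accepts the factors without modification."""
        dof = 1 + two_level * 1 + three_level * 2
        for name, (rows, cols) in sorted(STANDARD_ARRAYS.items(),
                                         key=lambda kv: kv[1][0]):
            if (rows >= dof and cols.get(2, 0) >= two_level
                    and cols.get(3, 0) >= three_level):
                return name
        return None

    print(smallest_direct_fit(7, 0))   # Example 2 -> L8
    print(smallest_direct_fit(1, 6))   # Example 3 -> L18
    print(smallest_direct_fit(2, 3))   # Example 4 -> L36, hence the need to modify arrays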
The dummy level technique allows us to assign a factor with m levels to a column that
has n levels where n is greater than m. Suppose a factor A has two levels, A1 and A2.
We can assign it to a 3-level column by creating a dummy level A3 which could be
taken the same as A1 or A2.
Example 5:
Let us consider a case study that has one 2-level factor (A) and three 3-level factors (B,
C, and D) to illustrate the dummy level technique. Here we have eight degrees of
freedom. Table 7.2 (a) shows the L9 array and Table 7.2 (b) shows the experiment layout
generated by assigning the factors A, B, C, and D to columns 1, 2, 3, and 4,
respectively, and by using the dummy level technique. Here we have taken A3 = A1 and
called it A1' to emphasize that this is a dummy level.
Note that after we apply the dummy level technique, the resulting array is still
proportionally balanced and, hence, orthogonal (see Appendix A and Chapter 3).
Also, note that in Example 5, we could just as well have taken A3 = A2. But to ensure
orthogonality, we must consistently take A3 = A1 or A3 = A2 within the matrix
experiment. The choice between taking A3 = A1 and A3 = A2 depends on many issues. Some
of the key issues are as follows:

1. If we take A3 = A2, then the effect of A2 will be estimated with two times more
precision than the effect of A1. Thus, the dummy level should be taken to be the
one about which we want more precise information. Thus, if A1 is the starting
condition about which we have a fair amount of experience and A2 is the new
alternative, then we should choose A3 = A2.
TABLE 7.2 (a) THE L9 ARRAY; (b) EXPERIMENT LAYOUT WITH FACTORS A, B, C, AND D USING THE DUMMY LEVEL A3' = A1; (c) EXPERIMENT LAYOUT WITH THE COMPOUND FACTOR AE AND FACTORS B, C, AND D
One can apply the dummy level technique to more than one factor in a given
case study. Suppose in Example 5 there were two 2-level factors (A and B) and two
3-level factors (C and D). We can assign the four factors to the columns of the
orthogonal array L9 by taking dummy levels A3 = A1 (or A3 = A2) and B3 = B1 (or B3 = B2).
Note that the orthogonality is preserved even when the dummy level technique is
applied to two or more factors.
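A brief sketch of the dummy level technique applied to one column of the L9 array. The array is written here in one standard form (the layout of Table 7.2 is not reproduced above), and the function name is a hypothetical helper for illustration only.

    import numpy as np

    # One standard form of the L9(3^4) orthogonal array, levels coded 1, 2, 3.
    L9 = np.array([
        [1, 1, 1, 1],
        [1, 2, 2, 2],
        [1, 3, 3, 3],
        [2, 1, 2, 3],
        [2, 2, 3, 1],
        [2, 3, 1, 2],
        [3, 1, 3, 2],
        [3, 2, 1, 3],
        [3, 3, 2, 1],
    ])

    def apply_dummy_level(array, column, dummy_from=3, dummy_to=1):
        """Assign a 2-level factor to a 3-level column by replacing level 3
        with a repeat of level 1 (A3' = A1), as in Example 5."""
        out = array.copy()
        col = out[:, column]
        col[col == dummy_from] = dummy_to
        return out

    layout = apply_dummy_level(L9, column=0)
    print(layout)   # column 0 now carries only levels 1 and 2; level 1 (A1 and its
                    # dummy repeat A3' = A1) appears twice as often as level 2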
The compound factor method allows us to study more factors with an orthogonal array
than the number of columns in the array. It can be used to assign two 2-level factors
to a 3-level column as follows. Let A and B be two 2-level factors. There are four
total combinations of the levels of these factors: A1B1, A2B1, A1B2, and A2B2. We
pick the three more important combinations and call them the three levels of the compound
factor AB. Suppose we choose the three levels as follows: (AB)1 = A1B1,
(AB)2 = A1B2, and (AB)3 = A2B1. Factor AB can be assigned to a 3-level column
and the effects of A and B can be studied along with the effects of the other factors in
the experiment.
For computing the effects of the factors A and B, we can proceed as follows:
the difference between the level means for (AB)1 and (AB)2 tells us the effect of
changing from B1 to B2. Similarly, the difference between the level means for (AB)1
and (AB)3 tells us the effect of changing from A1 to A2.
In the compound factor method, however, there is a partial loss of orthogonality.
The two compounded factors are not orthogonal to each other. But each of them is
orthogonal to every other factor in the experiment. This complicates the computation
of the sum of squares for the compounded factors in constructing the ANOVA table.
The following examples help illustrate the use of the compound factor method.
Example 6:
Let us go back to Example 4 in Section 7.2 where the case study has two 2-level
factors (A and E) and three 3-level factors (B, C, and D). We can form a compound
factor AE with three levels (AE)1 = A1E1, (AE)2 = A1E2, and (AE)3 = A2E1. This leads
us to four 3-level factors that can be assigned to the L9 orthogonal array. See Table
7.2(c) for the experiment layout obtained by assigning factors AE, B, C, and D to
columns 1, 2, 3, and 4, respectively.
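As a small sketch, the compound column of Example 6 can be decoded back into the levels of A and E, and the effects of A and E read off from the level means of the compound column as described above. The dictionary and function below are illustrative helpers, not part of the case study.

    # Level pairing of the compound factor from Example 6:
    # (AE)1 = A1E1, (AE)2 = A1E2, (AE)3 = A2E1.
    COMPOUND_AE = {1: ("A1", "E1"), 2: ("A1", "E2"), 3: ("A2", "E1")}

    def decode_compound(levels):
        """Translate compound-column levels into the underlying A and E levels."""
        return [COMPOUND_AE[level] for level in levels]

    # Effects are read from the level means of the compound column:
    #   mean at (AE)2 - mean at (AE)1  -> effect of changing E1 to E2 (A held at A1)
    #   mean at (AE)3 - mean at (AE)1  -> effect of changing A1 to A2 (E held at E1)
    print(decode_compound([1, 2, 3, 1, 2, 3, 1, 2, 3]))   # column 1 of the L9 array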
Example 7:
The window photolithography case study described by Phadke, Kackar, Speeney and
Grieco [P5] had three 2-level factors (A, B, and D) and six 3-level factors (C, E, F, G,
H, and I). The total degrees of freedom for the case study are sixteen. The next larger
standard orthogonal array that has several 3-level factors is L18 (2^1 x 3^7). The
experimenters formed a compound factor BD with three levels (BD)1 = B1D1, (BD)2 = B2D1,
and (BD)3 = B1D2. This gave them one 2-level and seven 3-level factors that match
perfectly with the columns of the L18 array. Reference [P5] also describes the
computation of ANOVA for the compound factor method.
As a matter of fact, the experimenters had started the case study with two 2-level
factors (A and B) and seven 3-level factors (C through I). However, observing that by
dropping one level of one of the 3-level factors, the L18 orthogonal array would be
suitable, they dropped the least important level of the least important factor, namely
factor D. Had they not made this modification to the requirements of the case study,
they would have needed to use the L27 orthogonal array, which would have amounted
to 50 percent more experiments! As illustrated by this example, the experimenter
should always consider the possibility of making small modifications in the
requirements for saving the experimental effort.
Sections 7.2 through 7.4 considered the situations where we are not interested in
estimating any interaction effects. Although in most Robust Design experiments we
choose not to estimate any interactions among the control factors, there are situations
where we wish to estimate a few selected interactions. The linear graph technique,
invented by Taguchi, makes it easy to plan orthogonal array experiments involving
interactions.
Let us consider the orthogonal array L8 [Table 7.3 (a)] and suppose we assigned
factors A, B, C, D, E, F, and G to the columns 1 through 7, respectively. Suppose we
believe that factors A and B are likely to have strong interaction. What effect would
the interaction have on the estimates of the effects of the seven factors obtained from
this matrix experiment?
The interaction effect is depicted in Figure 7.1. We can measure the magnitude
of interaction by the extent of nonparallelism of the effects shown in Figure 7.1. Thus,

    (A x B interaction) = (yA2B2 + yA1B1) - (yA2B1 + yA1B2)

where yAiBj denotes the average response when factor A is at level i and factor B is at level j.
From Table 7.3 (a) we see that experiments under level C1 of factor C (experiments 1,
2, 7 and 8) have combinations A1B1 and A2B2 of factors A and B; and experiments
under level C2 of factor C (experiments 3, 4, 5 and 6) have combinations A1B2 and
A2B1 of factors A and B. Thus, we will not be able to distinguish the effect of factor
C from the A x B interaction. Inability to distinguish effects of factors and
interactions is called confounding. Here we say that factor C is confounded with interaction
A x B. We can avoid the confounding by not assigning any factor to column 3 of the
array L8.
Figure 7.1 Two-factor interaction. Interaction between factors A and B shows as nonparallelism of the effects of factor A under levels B1 and B2 of factor B.
Interaction Table
The interaction table, shown in Table 7.3 (b), shows, for every pair of columns of the
L8 array, the column with which their interaction is confounded (or in which it is contained). Thus, it
can be used to determine which column of the L8 array should be kept empty (that is,
not be assigned to a factor) in order to estimate a particular interaction. From the
table, we see that the interaction of columns 1 and 2 is confounded with column 3, the
interaction of columns 3 and 5 is confounded with column 6, and so on. Note that the
interaction between columns a and b is the same as that between columns b and a.
That is, the interaction table is a symmetric matrix. Hence, only the upper triangle is
given in the table, and the lower triangle is kept blank. Also, the diagonal terms are
indicated in parentheses as there is no real meaning to interaction between columns a
and a.
The interaction table contains all the relevant information needed for assigning
factors to columns of the orthogonal array so that all main effects and desired
interactions can be estimated without confounding. The interaction tables for all standard
orthogonal arrays prepared by Taguchi [T1] are given in Appendix C, except for the
arrays where the interaction tables do not exist, and for the arrays L64, L'64, and L81,
because they are used rather infrequently. The interaction tables are generated directly
from the linear algebraic relations that were used in creating the orthogonal arrays
themselves.
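The following sketch illustrates that relation for the L8 array: multiplying two columns (with levels recoded as +1 and -1) yields exactly the column listed in the interaction table. The array is written in the form shown in Table 7.3 (a); the function name is a hypothetical helper.

    import numpy as np
    from itertools import combinations

    # The L8(2^7) array of Table 7.3 (a), levels coded 1 and 2.
    L8 = np.array([
        [1, 1, 1, 1, 1, 1, 1],
        [1, 1, 1, 2, 2, 2, 2],
        [1, 2, 2, 1, 1, 2, 2],
        [1, 2, 2, 2, 2, 1, 1],
        [2, 1, 2, 1, 2, 1, 2],
        [2, 1, 2, 2, 1, 2, 1],
        [2, 2, 1, 1, 2, 2, 1],
        [2, 2, 1, 2, 1, 1, 2],
    ])

    def interaction_column(array, a, b):
        """Return the column (1-indexed) whose levels equal the product of
        columns a and b after recoding level 1 -> +1 and level 2 -> -1; this is
        the column with which the a x b interaction is confounded."""
        signs = lambda col: 1 - 2 * (array[:, col - 1] - 1)
        product = signs(a) * signs(b)
        for c in range(1, array.shape[1] + 1):
            if c not in (a, b) and np.array_equal(signs(c), product):
                return c
        return None

    # Rebuild the entries of the interaction table, Table 7.3 (b).
    for a, b in combinations(range(1, 8), 2):
        print(f"columns {a} x {b} -> column {interaction_column(L8, a, b)}")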
TABLE 7.3 (a) THE L8 ORTHOGONAL ARRAY WITH FACTOR ASSIGNMENT; (b) INTERACTION TABLE FOR THE L8 ARRAY

(a)
            Column
Expt.
No.     1   2   3   4   5   6   7
 1      1   1   1   1   1   1   1
 2      1   1   1   2   2   2   2
 3      1   2   2   1   1   2   2
 4      1   2   2   2   2   1   1
 5      2   1   2   1   2   1   2
 6      2   1   2   2   1   2   1
 7      2   2   1   1   2   2   1
 8      2   2   1   2   1   1   2

Factor  A   B   C   D   E   F   G

(b)
Column    1     2     3     4     5     6     7
         (1)    3     2     5     4     7     6
               (2)    1     6     7     4     5
                     (3)    7     6     5     4
                           (4)    1     2     3
                                 (5)    3     2
                                       (6)    1
                                             (7)

Note: Entries in this table show the column with which the interaction between every pair of columns is confounded.
Linear Graphs
Using the interaction tables, however, is not very convenient. Linear graphs represent
the interaction information graphically and make it easy to assign factors and
interactions to the various columns of an orthogonal array. In a linear graph, the columns of
an orthogonal array are represented by dots and lines. When two dots are connected by
a line, it means that the interaction of the two columns represented by the dots is
contained in (or confounded with) the column represented by the line. In a linear graph,
each dot and each line has a distinct column number(s) associated with it. Further,
every column of the array is represented in its linear graph once and only once.
One standard linear graph for the array L8 is given in Figure 7.2 (a). It has four
dots (or nodes) corresponding to columns 1, 2, 4, and 7. Also, it has three lines (or
edges) representing columns 3, 6, and 5. These lines correspond to the interactions
between columns 1 and 2, between columns 2 and 4, and between columns 1 and 4,
respectively. From the interaction table, Table 7.3 (b), we can verify that columns 3,
6, and 5 indeed correspond to the interactions mentioned above.
In general, a linear graph does not show the interaction between every pair of
columns of the orthogonal array. It is not intended to do so; that information is
contained in the interaction table. Thus, the interaction between columns 1 and 3, between
columns 2 and 7, etc., are not shown in the linear graph of L8 in Figure 7.2 (a).
Figure 7.2 Two standard linear graphs, (a) and (b), for the orthogonal array L8.
The other standard linear graph for L8 is given in Figure 7.2 (b). It, too, has
four dots corresponding to columns 1, 2, 4, and 7. Also, it has three lines representing
columns 3, 5 and 6. Here, these lines correspond to the interactions between columns
1 and 2, between columns 1 and 4, and between columns 1 and 7, respectively. Let us
see some examples of how these linear graphs can be used.
In general, an orthogonal array can have many linear graphs. Each linear graph,
however, must be consistent with the interaction table of the orthogonal array. The
different linear graphs are useful for planning case studies having different requirements.
Taguchi [T1] has prepared many linear graphs, called standard linear graphs, for each
orthogonal array. Some of the important standard linear graphs are given in
Appendix C. Note that the linear graphs for the orthogonal arrays L64 and L81 are not
given in Appendix C because they are needed rather infrequently. However, they can
be found in Taguchi [T1]. Section 7.6 describes the rules for modifying linear graphs
to fit them to the needs of a given case study.
Example 8:
Suppose in a case study there are four 2-level factors A, B, C, and D. We want to
estimate their main effects and also the interactions A x B , B x C, and B x D. Here,
the total degrees of freedom are eight, so L8 is a candidate array. The linear graph in
Figure 7.2 (b) can be used directly here. The obvious column assignment is: factor B
should be assigned to column 1. Factors A, C, and D can be assigned in an arbitrary
order to columns 2, 4, and 7. Suppose we assign factors A, C, and D to columns 2, 4,
and 7, respectively. Then the interactions A x B, B x C and B x D can be obtained
from columns 3, 5, and 6, respectively. These columns must be kept empty. Table 7.4
shows the corresponding experiment layout.
TABLE 7.4 EXPERIMENT LAYOUT FOR EXAMPLE 8

            Column*
Expt.
No.      1 (B)    2 (A)    4 (C)    7 (D)
 1        B1       A1       C1       D1
 2        B1       A1       C2       D2
 3        B1       A2       C1       D2
 4        B1       A2       C2       D1
 5        B2       A1       C1       D2
 6        B2       A1       C2       D1
 7        B2       A2       C1       D1
 8        B2       A2       C2       D2

* Note that columns 3, 5, and 6 are left empty (no factors are assigned) so that interactions A x B, B x C, and B x D can be estimated.
                        Level of Factor B
                        B1               B2

Level of     A1     (y1 + y2)/2      (y5 + y6)/2
Factor A     A2     (y3 + y4)/2      (y7 + y8)/2
In the above table, yi stands for the response for experiment i. Experiments 1 and
2 are conducted at levels A1 and B1 of factors A and B (see Table 7.4). Accordingly,
the entry in the A1B1 position is (y1 + y2)/2. The entries in the other positions of
the table are determined similarly. The data of the above 2-way table can be plotted to
display the A x B interaction. The interactions B x C and B x D can be estimated in the
same manner. In fact, this estimation procedure can be used regardless of the number
of levels of a factor.
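A short sketch of the estimation procedure, using purely hypothetical responses y1 through y8 arranged according to the layout of Table 7.4 (B in column 1, A in column 2):

    import numpy as np

    # Hypothetical responses from the eight experiments of Table 7.4.
    y = np.array([12.0, 14.0, 9.0, 11.0, 15.0, 17.0, 8.0, 10.0])
    A = np.array([1, 1, 2, 2, 1, 1, 2, 2])   # level of A in experiments 1..8
    B = np.array([1, 1, 1, 1, 2, 2, 2, 2])   # level of B in experiments 1..8

    # 2-way table of averages: entry (A1, B1) is (y1 + y2)/2, and so on.
    two_way = np.array([[y[(A == i) & (B == j)].mean() for j in (1, 2)]
                        for i in (1, 2)])
    print(two_way)

    # Nonparallelism measure of the A x B interaction, as defined above.
    interaction = (two_way[1, 1] + two_way[0, 0]) - (two_way[1, 0] + two_way[0, 1])
    print(interaction)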
Example 9:
Suppose there are five 2-level factors A, B, C, D, and E. We want to estimate their
main effects and also the interactions A x B and B x C. Here, also, the needed degrees
of freedom is eight, making L8 a candidate array. However, neither of the two standard
linear graphs of L8 can be used directly. Section 7.6 shows how the linear graphs can
be modified so that a wide variety of experiment designs can be constructed
conveniently.
Linear graphs and interaction tables for the arrays L9, L27, etc., which have
3-level columns, are slightly more complicated than those for arrays with 2-level
columns. Each column of a 3-level factor has two degrees of freedom associated with
it. The interaction between two 3-level columns has four degrees of freedom. Hence,
to estimate the interaction between two 3-level factors, we must keep two 3-level
columns empty, in contrast to only one column needed to be kept empty for 2-level
orthogonal arrays. This fact is reflected in the interaction tables and linear graphs
shown in Appendix C.
The previous section showed how linear graphs can be used to assign main effects and
interactions to the columns of standard orthogonal arrays. However, the principal
utility of linear graphs is for creating a variety of different orthogonal arrays from the
standard ones to fit real problems. The linear graphs are useful for creating 4-level
columns in 2-level orthogonal arrays, 9-level columns in 3-level orthogonal arrays and
6-level columns in mixed 2- and 3-level orthogonal arrays. They are also useful for
constructing orthogonal arrays for process branching. Sections 7.7 and 7.8 describe
these techniques. Common to all these applications of linear graphs is the need to
modify a standard linear graph of an orthogonal array so that it matches the linear
graph required by a particular problem.
A linear graph for an orthogonal array must be consistent with the interaction
table associated with that array; that is, every line in a linear graph must represent the
interaction between the two columns represented by the dots it connects. In the
following discussion we assume that for 2-level orthogonal arrays, the interaction between
columns a and b is contained in column c. Also, the interaction between columns f
and g is contained in column c. If it is a 3-level orthogonal array, we assume that the
interaction between columns a and b is contained in columns c and d. Also, the
interaction between columns f and g is contained in columns c and d. The following
three rules can be used for modifying a linear graph to suit the needs of a specific case
study.
1. Breaking a line. In the case of a 2-level orthogonal array, a line connecting two
dots, a and b, can be removed and replaced by a dot. The column associated
with this dot is the same as the column associated with the line it was created from.
In the case of linear graphs for 3-level orthogonal arrays, a line has two columns
associated with it and it maps into two dots. Figures 7.4 (a) and (b) show this
rule diagrammatically.
Figure 7.4 Rules for modifying linear graphs: (a) and (b) breaking a line, (c) and (d) forming a line, (e) and (f) moving a line. Parts (a), (c), and (e) apply to 2-level orthogonal arrays; parts (b), (d), and (f) apply to 3-level orthogonal arrays.
2. Forming a line. A line can be added in the linear graph of a 2-level orthogonal
array to connect two dots, a and b, provided we remove the dot c associated with
the interaction between a and b. In the case of the linear graphs for a 3-level
orthogonal array, two dots c and d, which contain the interaction of a and b,
must be removed. The particular dot or dots to be removed can be determined
from the interaction table for the orthogonal array. Figures 7.4 (c) and (d) show
this rule diagrammatically.
3. Moving a line. This rule is really a combination of the preceding two rules. A
line connecting two dots a and b can be removed and replaced by a line joining
another set of two dots, say f and g, provided the interactions a x b and f x g
are contained in the same column or columns. This rule is diagrammatically
shown in Figures 7.4 (e) and (f).
The following examples illustrate the modification of linear graphs.
Example 10:
Consider Example 9 in Section 7.5. The standard linear graph of L8 shown in Figure 7.5 (a)
can be changed into the linear graph shown in Figure 7.5 (b) by breaking the line
that represents column 6 and replacing it with an isolated dot. This modified linear graph matches the problem perfectly. The
factors A, B, C, D and E should be assigned, respectively, to columns 2, 1, 4, 6, and 7.
The A x B and B x C interactions can be estimated by keeping columns 3 and 5 empty.
Example 11:
The purpose of this example is to illustrate rule 3, namely moving a line. Figure
7.6 (a) shows one of the standard linear graphs of the orthogonal array L16. It can be
changed into Figure 7.6 (b) by breaking the line connecting columns 6 and 11, which
creates an isolated dot for column 13. This can be further turned into Figure 7.6 (c) by
adding a line to connect columns 7 and 10, and simultaneously removing the isolated
dot 13.
Figure 7.6 Modifying a linear graph of the L16 array: (a) a standard linear graph; (b) the graph after breaking the line connecting dots 6 and 11 to form an isolated dot for column 13; (c) the graph after adding a line connecting dots 7 and 10 and removing the isolated dot 13.
The column merging method can be used to create a 4-level column in a standard
orthogonal array with all 2-level columns, a 9-level column in a standard orthogonal
array with all 3-level columns, and a 6-level column in a standard orthogonal array
with some 2-level and some 3-level columns.
Let us first see how a 4-level column can be created in an array with 2-level columns. The procedure consists of two steps:

1. Take any two 2-level columns, a and b, and the column c that contains their
interaction. Form a new 4-level column whose level for each experiment is
determined by the level combination of columns a and b: (1, 1) becomes level 1,
(1, 2) becomes level 2, (2, 1) becomes level 3, and (2, 2) becomes level 4.

2. Remove columns a, b, and c from the array. These columns cannot be used to
study any other factors or interactions.
The creation of a 4-level column using columns 1, 2, and 3 of L8 is shown in
Table 7.5. It can be checked that the resulting array is still balanced and, hence,
orthogonal. It can be used to study one 4-level factor and up to four 2-level factors.
TABLE 7.5 CREATION OF A 4-LEVEL COLUMN USING COLUMNS 1, 2, AND 3 OF THE L8 ARRAY

            Original array                            Modified array
Expt.          Column                 Expt.              Column
No.    1   2   3   4   5   6   7      No.    (1-2-3)   4   5   6   7
 1     1   1   1   1   1   1   1       1        1      1   1   1   1
 2     1   1   1   2   2   2   2       2        1      2   2   2   2
 3     1   2   2   1   1   2   2       3        2      1   1   2   2
 4     1   2   2   2   2   1   1       4        2      2   2   1   1
 5     2   1   2   1   2   1   2       5        3      1   2   1   2
 6     2   1   2   2   1   2   1       6        3      2   1   2   1
 7     2   2   1   1   2   2   1       7        4      1   2   2   1
 8     2   2   1   2   1   1   2       8        4      2   1   1   2
       a   b   c = a x b
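A compact sketch of the column merging procedure for Table 7.5. The L8 array is repeated here so the sketch is self-contained, and the function name is a hypothetical helper.

    import numpy as np

    L8 = np.array([
        [1, 1, 1, 1, 1, 1, 1],
        [1, 1, 1, 2, 2, 2, 2],
        [1, 2, 2, 1, 1, 2, 2],
        [1, 2, 2, 2, 2, 1, 1],
        [2, 1, 2, 1, 2, 1, 2],
        [2, 1, 2, 2, 1, 2, 1],
        [2, 2, 1, 1, 2, 2, 1],
        [2, 2, 1, 2, 1, 1, 2],
    ])

    def merge_columns(array, a, b, c):
        """Column merging: combine 2-level columns a and b into one 4-level
        column and drop columns a, b, and their interaction column c."""
        la, lb = array[:, a - 1], array[:, b - 1]
        merged = 2 * (la - 1) + lb          # (1,1)->1, (1,2)->2, (2,1)->3, (2,2)->4
        keep = [i for i in range(array.shape[1]) if i + 1 not in (a, b, c)]
        return np.column_stack([merged, array[:, keep]])

    print(merge_columns(L8, 1, 2, 3))   # one 4-level column followed by columns 4-7,
                                        # matching the modified array of Table 7.5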
A process for applying covercoat on printed wiring boards consists of (1) spreading the
covercoat material (a viscous liquid) on a board, and (2) baking the board to form a
hard covercoat layer. Suppose, to optimize this process, we wish to study two types of
material (factor A), two methods of spreading (factor B) and two methods of baking
(factor C). The two methods of baking are a conventional oven (C1) and an infrared
oven (C2). For the conventional oven, there are two additional control factors: bake
temperature (factor D, two levels) and bake time (factor E, two levels). For the
infrared oven, on the other hand, there are two different control factors: infrared light intensity (factor
F, two levels) and conveyor belt speed (factor G, two levels).
The factors for the covercoat process are diagrammed in Figure 7.7. Factor C is
called a branching factor because, depending on its level, we have different control
factors for further processing steps. Branching design is a method of constructing
orthogonal arrays to suit such case studies.
Linear graphs are extremely useful in constructing orthogonal arrays when there
is process branching. The linear graph required for the covercoat process is given in
Figure 7.8 (a). We need a dot for the branching factor C, and two dots connected with
lines to that dot. These two dots correspond to the factors D and E for the
conventional oven branch, and F and G for the infrared oven branch. The columns associated
with the two interaction lines connected to the branching dot must be kept empty. In
the linear graph we also show two isolated dots corresponding to factors A and B.
The standard linear graph for L8 in Figure 7.8(b) can be modified easily to
match the linear graph in Figure 7.8(a). We break the bottom line to form two isolated
dots corresponding to columns 6 and 7. Thus, by matching the modified linear graph
with the required linear graph, we obtain the column assignment for the control factors
as follows:
Factor Column
A 6
B 7
C 1
D, F 2 (3)
E, G 4 (5)
Figure 7.7 Control factors for the covercoat process: A (covercoat material) and B (method of spreading) are followed by the branching factor C (method of baking), which branches into the conventional oven (factors D and E) and the infrared oven (factors F and G).
Columns 3 and 5, shown in parentheses, must be kept empty. The factors D and
F are assigned to the same column, namely column 2. Whether a particular experiment
is conducted by using factor D or factor F depends on the level of factor C, which is
determined by column 1. Thus, the levels of factors D and F are determined jointly by
columns 1 and 2 as follows:

Column 1     Column 2     Level Used
1 (C1)       1            D1
1 (C1)       2            D2
2 (C2)       1            F1
2 (C2)       2            F2
Factors D and F can have quite different effects; that is, mD2 - mD1 need not be equal
to mF2 - mF1. This difference shows up as interaction between columns 1 and 2,
which is contained in column 3. Hence, column 3 must be kept empty. The factors E
and G are assigned to column 4 in a similar way, and column 5 is kept empty.
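A minimal sketch of how this joint decoding of the branching columns might be expressed; the function is a hypothetical helper written only to mirror the rule stated above.

    def decode_branch(c_level, col2_level, col4_level):
        """Read columns 2 and 4 as D and E under the conventional oven (C1),
        and as F and G under the infrared oven (C2)."""
        if c_level == 1:                      # conventional oven branch
            return ("C1", f"D{col2_level}", f"E{col4_level}")
        return ("C2", f"F{col2_level}", f"G{col4_level}")

    print(decode_branch(1, 2, 1))   # experiment 3 of Table 7.6: C1, D2, E1
    print(decode_branch(2, 2, 1))   # experiment 7 of Table 7.6: C2, F2, G1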
C "V^ A
? E,G
D, F
6 7
A B
E,G
The experiment layout for the covercoat process is given in Table 7.6. Note that
experiments 1 through 4 are conducted using the conventional oven, while experiments
5 through 8 are conducted using the infrared oven.
It is possible that after branching, the process can reunite in subsequent steps.
Thus, in the printed wiring board application, after the covercoat is applied, we may go
through common printing and etching steps that all have a common set of control
factors. Branching can also occur in product design; for example, we may select different
mechanisms to achieve a part of the function. Here, associated with each mechanism,
there would be different control factors.
TABLE 7.6 EXPERIMENT LAYOUT FOR THE COVERCOAT PROCESS

              Column*
Expt.
No.     1 (C)    2 (D or F)    4 (E or G)    6 (A)    7 (B)
 1       C1         D1             E1          A1       B1
 2       C1         D1             E2          A2       B2
 3       C1         D2             E1          A2       B2
 4       C1         D2             E2          A1       B1
 5       C2         F1             G1          A1       B2
 6       C2         F1             G2          A2       B1
 7       C2         F2             G1          A2       B1
 8       C2         F2             G2          A1       B2

* Columns 3 and 5 are kept empty.
Up to this point, this chapter discussed many techniques for constructing orthogonal
arrays needed by the matrix experiments. This section focuses on showing how to
orchestrate the techniques for constructing an orthogonal array to suit a particular case
study. The skill needed to apply these techniques varies widely. Accordingly, we
describe three strategies—beginner, intermediate, and advanced—requiring
progressively higher levels of skill with the techniques described earlier in this chapter. A
vast majority of case studies can be taken care of by the beginner and intermediate
strategies, whereas a small fraction of the case studies requires the advanced strategy.
The router bit life improvement case study in Chapter 11 is one such case study.
Beginner Strategy
A beginner should stick to the direct use of one of the standard orthogonal arrays.
Table 7.7 is helpful in selecting a standard orthogonal array to fit a given case study.
Because it gets difficult to keep track of data from a larger number of experiments, the
beginner is advised to not exceed 18 experiments, which makes the possible choices of
orthogonal arrays as L4, L8, L9, L12, L16, L'16, and L18.
TABLE 7.7 SELECTION OF A STANDARD ORTHOGONAL ARRAY FOR THE BEGINNER STRATEGY

No. of 2-level   Recommended          No. of 3-level   Recommended
Factors          Orthogonal Array     Factors          Orthogonal Array
2-3              L4                   2-4              L9
4-7              L8                   5-7              L18*
8-11             L12
12-15            L16

* When L18 is used, one 2-level factor can be used in addition to seven 3-level factors.
A beginner should consider either all 2-level factors or all 3-level factors
(preferably 3-level factors) and not attempt to estimate any interactions. This may require
him or her to modify slightly the case-study requirements. The rules given in Table
7.7 can then be used to select the orthogonal array.
The assignment of factors to the columns is straightforward in the cases
discussed above. Any column can be assigned to any factor, except for factors that are
difficult to change, which should be assigned to the columns toward the left.
Among all the arrays discussed above, the array L18 is the most commonly used
array because it can be used to study up to seven 3-level factors and one 2-level factor,
which is the situation with many case studies.
Intermediate Strategy
Experimenters with modest experience in using matrix experiments should use the
dummy level, compound factor, and column merging techniques in conjunction with
the standard orthogonal arrays to broaden the possible combinations of the factor
levels. The factors should have preferably two or three levels and the estimation of
interactions should be avoided. Also, as far as possible, arrays larger than L18 should
be avoided. Table 7.8 can be used to select an appropriate standard orthogonal array
depending on the number of 2- and 3-level factors in the case study. The following
rules can then be used to modify the chosen standard orthogonal array to fit the case
study:
1. To create a 3-level column in the array L8 or L16, merge three columns in the
array (two columns and the column containing their interaction) to form a 4-level
column. Then use the dummy level technique to convert the 4-level column into
a 3-level column.
2. To create two 3-level columns in the array L16, merge two distinct sets of three
columns in the array (two columns and the column containing their interaction)
to form two 4-level columns. Then use the dummy level technique to convert
the 4-level columns into 3-level columns.
3. When the array L9 is suggested by the Table 7.8 and the total number of factors
is less than or equal to four, use the dummy level technique to assign a 2-level
factor to a 3-level column.
4. When the array L9 is suggested by the Table 7.8 and the total number of factors
exceeds four, use the compound factor technique to create a 3-level factor from
two 2-level factors until the total number of factors becomes 4.
5. When the array L18 is suggested by Table 7.8 and the number of 2-level
columns exceeds one, use the dummy level and compound factor techniques in
the manner similar to rules 3 and 4 above.
TABLE 7.8 RECOMMENDED STANDARD ORTHOGONAL ARRAY AS A FUNCTION OF THE NUMBER OF 2-LEVEL FACTORS AND THE NUMBER OF 3-LEVEL FACTORS (0 THROUGH 7)
Advanced Strategy
3. Select an appropriate standard orthogonal array from among those listed in Table
7.1. If most of the factors are 2- or 4-level factors, then a 2-level array should
be selected. If most of the factors are 3-level factors, then a 3-level array should
be selected.
4. Construct the linear graph required for the case study. The linear graph should
contain the interactions to be estimated and also the appropriate patterns for
column merging and process branching.
5. Select a standard linear graph for the chosen array that is closest to the required
linear graph.
6. Modify the standard linear graph to match the required linear graph by using the
rules in Section 7.6. The column assignment is obvious when the two linear
graphs match. If we do not succeed in matching the linear graphs we must
repeat the procedure above with either a different linear graph for the chosen
standard orthogonal array, or choose a larger standard orthogonal array, or
modify the requirements for the case study.
The advanced strategy needs some skill in using the linear graph modification
rules. The router bit life improvement case study of Chapter 11 illustrates the use of
the advanced strategy. Artificial intelligence programs can be used to carry out the
modifications efficiently as described by Lee, Phadke, and Keny [L1].
section will help such readers understand and apply the Robust Design Method. This
section may be skipped without affecting the readability of the rest of the book.
Any method which was developed over several decades is likely to have
variations in the way it is applied. Here, the term classical statistical experiment design
refers to the way the method is practiced by the majority of its users. Exceptions to
the majority practice are not discussed here. The term Robust Design, of course,
means the way it is described in this book.
The comparison is made in three areas: problem formulation, experiment layout,
and data analysis. The differences in the areas of experiment layout and data analysis
are primarily a result of the fact that the two methods address different problems.
Frequently, the final goal of a project is to maximize the yield or the percent of
products meeting specifications. Accordingly, in classical statistical experiment design
yield is often used as a response to be modeled in terms of the model factors. As
discussed in Chapters 5 and 6, use of such response variables could lead to unnecessary
interactions and it may not lead to a robust product design.
The two methods also differ in the treatment of noise during problem formulation.
Since classical statistical experiment design method is not concerned with minimizing
sensitivity to noise factors, the evaluation of the sensitivity is not considered in the
method. Instead, noise factors are considered nuisance factors. They are either kept at
constant values during the experiments, or techniques called blocking and
randomization are used to block them from having an effect on the estimation of the
mathematical model describing the relationship between the response and the model factors.
On the contrary, minimizing sensitivity to noise factors (factors whose levels
cannot be controlled during manufacturing or product usage, which are difficult to
control, or expensive to control) is a key idea in Robust Design. Therefore, noise factors
are systematically sampled for a consistent evaluation of the variance of the quality
characteristic and the S/N ratio. Thus, in the polysilicon deposition case study of
Chapter 4, the test wafers were placed in specific positions along the length of the
reactor and the quality characteristics were measured at specific points on these wafers.
This ensures that the effect of noise factors is equitable in all experiments. When there
exist many noise factors whose levels can be set in the laboratory, an orthogonal array
is used to select a systematic sample, as discussed in Chapter 8, in conjunction with
the design of a differential operational amplifier. Use of an orthogonal array for
sampling noise is a novel idea introduced by Robust Design and it is absent in classical
statistical experiment design.
Let us define some terms commonly used in classical statistical experiment design.
Resolution V designs are matrix experiments where all 2-factor interactions can be
estimated along with the main effects. Resolution IV designs are matrix experiments
where no 2-factor interaction is confounded with the main effects, and no two main
effects are confounded with each other. Resolution III designs (also called saturated
designs) are matrix experiments where no two main effects are confounded with each
other. In a Resolution III design, 2-factor interactions are confounded with main
effects. In an orthogonal array if we allow assigning a factor to each column, then it
becomes a Resolution III design. It is possible to construct a Resolution IV design
from an orthogonal array by allowing only specific columns to be used for assigning
factors.
It is obvious from the above definitions that for a given number of factors,
Resolution III design would need the smallest number of experiments, Resolution IV would
need somewhat more experiments and Resolution V would need the largest number of
experiments. Although heavy emphasis is placed in classical statistical experiment
design on ability to estimate 2-factor interactions, Resolution V designs are used only
very selectively because of the associated large experimental cost. Resolution IV
designs are very popular in classical statistical experiment design. Robust Design
almost exclusively uses Resolution III designs, except in some situations where
estimation of a few specific 2-factor interactions is allowed.
The relative economics of Resolution III and Resolution IV designs can be
understood as follows. By using the interaction tables in Appendix C one can see that
Resolution IV designs can be realized in 2-level standard orthogonal arrays by
assigning factors to selected columns as shown in Table 7.9.
TABLE 7.9
Orthogonal Array | Resolution III: Maximum Number of Factors | Columns to be Used | Resolution IV: Maximum Number of Factors | Columns to be Used
L4 | 3 | 1-3 | 2 | 1, 2
L8 | 7 | 1-7 | 4 | 1, 2, 4, 7
From the above table it is apparent that for a given orthogonal array roughly twice as
many factors can be studied with Resolution III design compared to Resolution IV
design.
Screening Experiments
Classical statistical experiment design frequently uses the following strategy for
building a mathematical model for the response:
1. Screening. Use Resolution III designs to conduct experiments with a large
number of model factors for determining whether each of these factors should be
included in the mathematical model.
Because of the heavy emphasis on the ability to estimate interactions and the
complexity of the interactions between 3-level factors, classical statistical experiment design is
frequently restricted to the use of 2-level fractional factorial designs. Consequently, the
number of possible types of experiment conditions is limited. For example, it is not
possible to compare three or four different types of materials with a single 2-level
fractional factorial experiment. Also, the curvature effect of a factor (see Figure 4.4)
cannot be determined with only two levels. However, as discussed earlier in this chapter,
the standard orthogonal arrays and the linear graphs used in Robust Design provide
excellent flexibility and simplicity in planning multifactor experiments.
Central composite designs are commonly used in classical experiment design, especially
in conjunction with the response surface methodology (see Myers [M2]) for estimating the
curvature effects of the factors. Although some research is needed to compare the central
composite designs with 3-level orthogonal arrays used in Robust Design, the following
main differences between them are obvious: the central composite design is useful for
only continuous factors, whereas the orthogonal arrays can be used with continuous as
well as discrete factors. As discussed in Chapter 3, the predicted response under any
combination of the control factor levels has the same variance when an orthogonal array is
used. However, this is not true with central composite designs.
Randomization
In classical statistical experiment design, the experiments are performed in a random
order to protect against the effects of nuisance factors. With many factors, however,
randomizing the run order does not scramble the order for all factors effectively. That is, even after
arranging the experiments in a random order, it looks as though the experiments are in a
nearly systematic order for one or more of the factors.
Nuisance factors are analogous to noise factors in Robust Design terminology.
Since robustness against the noise factors is the primary goal of Robust Design, we
introduce the noise factors in a systematic sampling manner to permit equitable
evaluation of sensitivity to them.
Before we describe the differences in data analysis, we note that many of the
layout techniques described in this book can be used beneficially for modeling the mean
response also.
As mentioned earlier, the differences in data analysis arise from the fact that the two
methods were developed to address different problems. One of the common problems
in Robust Design is to find control factor settings that minimize variance while
attaining the mean on target. In solving this problem, provision must be made to ensure that
the solution can be adapted easily in case the target is changed. This is a difficult,
multidimensional, constrained optimization problem. The Robust Design method
solves it in two steps. First, we maximize the S/N ratio and, then, use a control factor
that has no effect on the S/N ratio to adjust the mean function on target. This is an
unconstrained optimization problem, much simpler than the original constrained
optimization problem. Robust Design addresses many engineering design optimization
problems as described in Chapter 5.
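The two-step procedure can be sketched in a few lines of code. The following Python fragment is only an illustration; the factor names, the level effects, and the choice of B as the adjustment factor are hypothetical and are not taken from any case study in this book.

    # Sketch of the two-step Robust Design optimization (hypothetical data).
    # Average S/N ratio (dB) by level for three control factors, from a matrix experiment.
    sn_by_level = {
        "A": [18.2, 21.5, 19.9],
        "B": [20.1, 19.7, 19.8],
        "C": [17.8, 20.0, 21.8],
    }

    # Step 1: maximize the S/N ratio factor by factor (an additive model is assumed).
    best_levels = {f: effects.index(max(effects)) + 1 for f, effects in sn_by_level.items()}

    # Step 2: put the mean on target using a factor (here B) that has little effect
    # on the S/N ratio but an appreciable, roughly monotonic effect on the mean.
    target = 50.0
    mean_by_level_B = [42.0, 50.5, 61.0]          # mean response at B1, B2, B3
    best_levels["B"] = min(range(3), key=lambda i: abs(mean_by_level_B[i] - target)) + 1

    print(best_levels)

The value of the decomposition is that each step is a simple unconstrained search: Step 1 looks only at the S/N ratio, and Step 2 looks only at the mean.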
Classical statistical experiment design has been traditionally concerned only with
modeling the mean response. Some of the recent attempts to solve the engineering
design optimization problems in the classical statistical experiment design literature are
discussed in Box [Bl], Leon, Shoemaker, and Kackar [L2], and Nair and Pregibon
[N2].
Significance Tests
In classical statistical experiment design, significance tests, such as the F test, play an
important role. They are used to determine if a particular factor should be included in
the model. In Robust Design, F ratios are calculated to determine the relative
importance of the various control factors in relation to the error variance. Statistical
significance tests are not used because a level must be chosen for every control factor
regardless of whether that factor is significant or not. Thus, for each factor the best
level is chosen depending upon the associated cost and benefit.
7.11 SUMMARY
• The process of fitting an orthogonal array to a specific project has been made
particularly easy by the standard orthogonal arrays and the graphical tool, called
linear graphs, developed by Taguchi to represent interactions between pairs of
columns in an orthogonal array. Before constructing an orthogonal array, one
must define the requirements which consist of:
1. Number of factors to be studied
2. Number of levels for each factor
3. Two-factor interactions, if any, to be estimated
• The first step in constructing an orthogonal array to fit a specific case study is to
count the total degrees of freedom, which tells the minimum number of experiments
that must be performed to study the main effects of all control factors and the
chosen interactions.
• The columns of the standard orthogonal arrays are arranged in the increasing
order of number of changes; that is, the number of times the level of a factor
must be changed in running the experiments in the numerical order is smaller for
the columns on the left than those on the right. Consequently, factors whose
levels are difficult to change should be assigned to columns on the left.
• Although in most Robust Design experiments one chooses not to estimate any
interactions among the control factors, there are situations where it is desirable to
estimate a few selected interactions. The linear graph technique makes it easy to
plan orthogonal array experiments that involve interactions.
• Linear graphs represent interaction information graphically and make it easy to
assign factors and interactions to the various columns of an orthogonal array. In
a linear graph, the columns of an orthogonal array are represented by dots and
lines. When two dots are connected by a line, it means that the interaction of the
two columns represented by the dots is contained in (or confounded with) the
column(s) represented by the line. In a linear graph, each dot and each line has a
distinct column number(s) associated with it. Furthermore, every column of the
array is represented in its linear graph once and only once.
• Depending on the needs of the case study and experience with matrix
experiments, the experimenter should use the beginner, intermediate, or advanced
strategy to plan experiments. The beginner strategy (see Table 7.7) involves the use
of a standard orthogonal array. The intermediate strategy (see Table 7.8)
involves minor but simple modifications of the standard orthogonal arrays using
the dummy level, compound factor, and column merging techniques. A vast
majority of case studies can be handled by the beginner or the intermediate
strategies. The advanced strategy requires the use of the linear graph modification
rules and is needed relatively infrequently. In complicated case studies, the
advanced strategy can greatly simplify the task of constructing orthogonal arrays.
Chapter 8
COMPUTER AIDED ROBUST DESIGN

The Robust Design steps outlined in Chapter 4 for use with hardware
experiments can be used just as well to optimize a product or process design when computer
models are used to evaluate the response. Of course, some of these steps can be
automated with the help of appropriate software, thus making it easier to optimize the
design.
• Section 8.1 describes the differential op-amp circuit and its main function (Step 1
of Robust Design steps described in Chapter 4).
• Section 8.2 discusses the noise factors and their statistical properties (Step 2).
• Section 8.3 summarizes some commonly used methods of simulating the effect
of variation in noise factors.
• Section 8.4 discusses the orthogonal array used in determining the testing
conditions and the evaluation of the effect of noise factors for our circuit (Step 2).
• Section 8.5 gives the S/N ratio used in this example (Step 3).
• Section 8.6 describes the control factors, their alternate levels, and the use of an
orthogonal array for finding optimum settings of control factors (Steps 4 through
7). It also describes the verification experiment and the final results (Step 8).
The circuit diagram for the differential op-amp circuit is given in Figure 8.1.
There are two current sources (OCS, CPCS), five transistors, and eight resistors. The
circuit has a balancing property (symmetry) that requires the following relationships
among the nominal values of the various circuit parameters:
RFP = RFM
RPEP = RPEM
RNEP = RNEM
AFPP = AFPM                    (8.1)
AFNP = AFNM
SIEPP = SIEPM
SIENP = SIENM
Figure 8.1 Differential operational amplifier circuit. The input is shorted for evaluating
the dc offset voltage.
The circuit parameter names beginning with AF refer to the alpha parameters of the
transistors, and those beginning with SIE are the saturation currents for the transistors.
Further, the gain requirement of the circuit dictates certain ratios of resistance
values, given by Equation (8.2).
These relationships among the circuit parameters, expressed by Equations (8.1) and
(8.2), are called tracking relationships. Many product architectures include a variety of
tracking relationships among the product parameters. Past experience with similar
products is often specified through such relationships.
Because of the symmetric architecture of the circuit, the dc offset voltage is nearly zero
if all circuit parameters are exactly at nominal values; however, that is not the case in
practice. Manufacturing variations violate the symmetry, which leads to high offset
voltage. So, the deviations in circuit parameter values from the respective nominal
values are the primary noise factors for this circuit. Another important noise factor is
the ambient temperature because the circuit is expected to function outdoors in many
climatically different areas. Thus, there are 21 noise parameters for this circuit:
deviations in the 20 circuit parameters from their nominal values and the temperature
variation.
2. Correlation. When the circuits are made as an integrated circuit, which is the
case with this differential op-amp circuit, values of certain components can have
a high correlation. See Figure 8.2(c).
For the starting design in the case study, the mean values and the tolerances on
the noise factors are listed in Table 8.1. The listed tolerances are the three standard-
deviation limits. Thus, for RPEM, the standard deviation is 21/3 = 7 percent of its
nominal value. All saturation currents in these transistors have a long-tailed
distribution. Accordingly, the tolerances on these parameters are specified as a multiple.
Thus, we approximate the distribution of SIEPM as follows: log10(SIEPM) has mean value log10(3 × 10^-13) and standard deviation (log10 7)/3.
In this circuit, the mean values of only the parameters 1 through 5 can be
independently specified by the designer. The other mean values are determined by
either the tracking relationships or the chosen manufacturing technology. Consider the
resistance RPEP. Its nominal value is equal to the nominal value of RPEM. Further,
these resistors are located close to each other on a single chip. As a result, there is less
variation of RPEP around RPEM on the same chip when compared to the variation of
either of these resistances from chip to chip on the same wafers or across wafers. This
correlation is expressed by a large tolerance (21 percent) on RPEM and a small
tolerance (2 percent) on RPEP around the value of RPEM. Figure 8.2(c) shows the
correlation in a graphical form. Suppose in a particular group of circuits RPEM = 15 kΩ.
Then, for that group of circuits RPEP will vary around 15 kΩ with three standard-
deviation limits of 2 percent of 15 kΩ. If for another group of circuits RPEM is
16.5 kΩ (10 percent over 15 kΩ), then RPEP will vary around 16.5 kΩ with three
standard-deviation limits equal to 2 percent of 16.5 kΩ.
The correlations among the other circuit parameters are specified in a similar
manner in Table 8.1. RFP, RIM, and RIP are correlated with RFM; RPEP with
RPEM; RNEP with RNEM; AFPP with AFPM; AFNP with AFNM; SIEPP with
SIEPM; and SIENP with SIENM.
Table 8.1 lists, for each noise factor, its mean,* its tolerance,† and its three levels
(expressed as multiples of the mean).
* The mean values for these parameters can be set by the designer. The values
shown in this table refer to a particular design, called the starting design.
† Tolerance means the three-standard-deviation limit.
Let us denote the noise factors by x1, ..., xk, where k = 21. Suppose all these noise
factors meet the mean and tolerances, including the implied correlations, specified by
Table 8.1. How can we evaluate the mean and variance of the offset voltage, which is
the product's response being considered? Three common methods of evaluating the
mean and variance of a product's response resulting from variations in many noise
factors are Monte Carlo simulation, Taylor series expansion, and orthogonal array based
simulation. They are described briefly next.
Taylor Series Expansion

In this method, the mean response is estimated by setting each noise factor equal to its
nominal value. To estimate the variance of the response, we find the derivatives of the
response with respect to each noise factor. Let v denote the offset voltage and σ1², ..., σk²
denote the variances of the k noise factors. The variance of v is then computed by the
following formula:

\sigma_v^2 = \sum_{i=1}^{k} \left( \frac{\partial v}{\partial x_i} \right)^2 \sigma_i^2     (8.3)
Note that the derivatives used in this formula can be evaluated mathematically or
numerically. Equation (8.3), which is based on first-order Taylor series expansion,
gives quite accurate estimates of variance when the correlations among the noise
factors are negligible and the tolerances are small, so that interactions among the noise
factors and the higher order terms are negligible. Otherwise, higher order Taylor series
expansions must be used, which makes the formula for evaluating the variance of the
response quite complicated and computationally expensive. Thus, Equation (8.3) does
not always give an accurate estimate of variance.
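When the response is available only as a simulation routine, the first-order formula of Equation (8.3) can be evaluated with numerical derivatives. The Python sketch below is illustrative only; the response function is a stand-in, not the op-amp model used in this chapter.

    # First-order Taylor series (propagation of error) estimate of the response variance,
    # as in Equation (8.3), using central-difference derivatives.
    def response(x):                      # stand-in for the circuit simulation
        return 2.0 * x[0] - 0.5 * x[1] * x[1] + x[0] * x[2]

    means  = [1.0, 2.0, 0.5]              # nominal (mean) values of the noise factors
    sigmas = [0.05, 0.10, 0.02]           # standard deviations of the noise factors

    mean_v = response(means)              # mean response at the nominal values
    var_v = 0.0
    for i, (m, s) in enumerate(zip(means, sigmas)):
        hi = means.copy(); hi[i] = m + 1e-6
        lo = means.copy(); lo[i] = m - 1e-6
        dv_dxi = (response(hi) - response(lo)) / 2e-6
        var_v += (dv_dxi ** 2) * s ** 2

    print(mean_v, var_v)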
Orthogonal Array Based Simulation

In this method, an orthogonal array is used to sample the variation of the noise
factors. For each noise variable, we take either two or three levels. Suppose μi and σi²
are the mean and variance, respectively, of the noise variable xi. When we
take two levels, we choose them to be μi − σi and μi + σi. Note that the mean and
variance of these two levels are μi and σi², respectively. Similarly, when we take three
levels, we choose them to be μi − √(3/2) σi, μi, and μi + √(3/2) σi. Here also, the mean
and variance of the three levels are μi and σi², respectively.
factors to the columns of an orthogonal array to determine testing conditions (sampling
points) for evaluating the response. From these values of response, we can estimate
the mean and variance of the response. Note that orthogonal array based simulation
can be performed with hardware experiments as well, provided experimental equipment
allows us to set the levels of noise factors.
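A small sketch of the level construction just described follows; it simply builds the two-level and three-level settings and verifies that they reproduce the mean and the variance of the noise factor.

    import math

    def two_levels(mu, sigma):
        return [mu - sigma, mu + sigma]

    def three_levels(mu, sigma):
        d = math.sqrt(1.5) * sigma
        return [mu - d, mu, mu + d]

    # Check that the levels reproduce the mean and the variance of the noise factor.
    for levels in (two_levels(10.0, 2.0), three_levels(10.0, 2.0)):
        n = len(levels)
        mean = sum(levels) / n
        var = sum((x - mean) ** 2 for x in levels) / n    # population variance over the levels
        print(levels, mean, var)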
The advantage of this method over the Monte Carlo method is that it needs a
much smaller (orders of magnitude smaller) number of testing conditions; yet, the
accuracy is excellent. Next, the orthogonal array based simulation gives common
testing conditions for comparing two or more combinations of control factor settings. If
different seeds are used in generating the random numbers, we do not get common
testing conditions. Further, when interactions and correlations among the noise factors are
strong, an orthogonal array based simulation gives more accurate estimates of mean
and variance compared to Taylor series expansion. On the other hand, when the
interactions and correlations are small, both the Taylor series expansion method and the
orthogonal array based simulation method give the same results.
Selecting three levels rather than two levels for the noise factors gives more accurate
estimates of variance. However, selecting two levels leads to a smaller orthogonal
array and, hence, a saving in the simulation cost. In the differential op-amp
application, the design team wanted to limit the size of the orthogonal array for simulating the
variation in noise factors to L36. Note that the L36 array, shown in Table 8.2, has
eleven 2-level columns and twelve 3-level columns. So, two levels were taken for the
first ten factors (the eight resistors and the two current sources), and three levels were
taken for the remaining eleven factors (the ten transistor parameters and the
temperature).
The levels for the various noise parameters are shown in Table 8.1. The levels
shown are the nominal value plus the deviation from the nominal value. Thus, for the
parameter RPEM, σ = 21/3 = 7 percent. So, level 1 is μ − 0.07μ = 0.93μ, and level 2
is μ + 0.07μ = 1.07μ, where μ is the mean value for RPEM. Due to the correlation
induced by the manufacturing process, the mean value of RPEP is equal to the realized
value of RPEM. The σ for the variation of RPEP around RPEM is 2/3 = 0.67 percent.
Thus, the two levels of RPEP are 0.9933 and 1.0067 times the realized value of
RPEM. In this way, we take care of the correlation through sliding levels.
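The sliding-level construction can be written out directly: the levels of RPEP are defined as multiples of whatever value RPEM takes in a given testing condition, rather than multiples of a fixed nominal value. A sketch with the percentages quoted above:

    # Sliding levels: RPEP varies around the realized value of RPEM, not around a fixed nominal.
    RPEM_NOMINAL = 15.0e3          # ohms
    RPEM_SIGMA   = 0.07            # 21 percent tolerance divided by 3
    RPEP_SIGMA   = 0.0067          # 2 percent tolerance divided by 3, relative to the realized RPEM

    rpem_levels = [RPEM_NOMINAL * (1 - RPEM_SIGMA), RPEM_NOMINAL * (1 + RPEM_SIGMA)]

    for rpem in rpem_levels:
        rpep_levels = [rpem * (1 - RPEP_SIGMA), rpem * (1 + RPEP_SIGMA)]
        print(f"RPEM = {rpem:9.1f}  RPEP levels = {rpep_levels[0]:9.1f}, {rpep_levels[1]:9.1f}")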
TABLE 8.2*
Expt. No. | Column Number: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 | Offset Voltage‡ (mV)
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 -22.8
2 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 -4.8
3 1 1 1 1 1 1 3 3 3 3 3 3 3 3 3 3 3 3 14.2
4 2 2 2 2 2 2 1 1 1 1 2 2 2 2 3 3 3 3 16.0
5 2 2 2 2 2 2 2 2 2 2 3 3 3 3 1 1 1 1 -55.8
6 2 2 2 2 2 2 3 3 3 3 1 1 1 1 2 2 2 2 37.7
7 2 2 2 1 1 1 2 2 2 1 1 2 3 1 2 3 3 1 2 2 3 -16.5
8 2 2 2 1 1 1 2 2 2 2 2 3 1 2 3 1 1 2 3 3 1 20.8
9 2 2 2 1 1 1 2 2 2 3 3 1 2 3 1 2 2 3 1 1 2 -9.1
10 2 1 2 2 1 2 2 1 1 2 1 1 3 2 1 3 2 3 2 1 3 2 33.5
11 2 1 2 2 1 2 2 1 1 2 2 2 1 3 2 1 3 1 3 2 1 3 4.9
12 2 1 2 2 1 2 2 1 1 2 3 3 2 1 3 2 1 2 1 3 2 1 -56.7
13 2 2 1 2 2 1 2 1 2 1 2 3 1 3 2 1 3 3 2 1 2 25.2
14 2 2 1 2 2 1 2 1 2 2 3 1 2 1 3 2 1 1 3 2 3 -40.6
15 2 2 1 2 2 1 2 1 2 3 1 2 3 2 1 3 2 2 1 3 1 -12.4
16 2 2 2 1 2 2 1 2 1 1 2 3 2 1 1 3 2 3 3 2 1 61.3
17 2 2 2 1 2 2 1 2 1 2 3 1 3 2 2 1 3 1 1 3 2 -38.5
18 2 2 2 1 2 2 1 2 1 3 1 2 1 3 3 2 1 2 2 1 3 -15.5
19 2 1 2 2 1 1 2 2 1 2 1 2 1 3 3 3 1 2 2 1 2 3 -29.2
20 2 1 2 2 1 1 2 2 1 2 2 3 2 1 1 1 2 3 3 2 3 1 27.2
21 2 1 2 2 1 1 2 2 1 2 3 1 3 2 2 2 3 1 1 3 1 2 -31.4
22 2 1 2 1 2 2 2 1 1 1 2 1 2 2 3 3 1 2 1 1 3 3 2 -48.1
23 2 1 2 1 2 2 2 1 1 1 2 2 3 3 1 1 2 3 2 2 1 1 3 13.4
24 2 1 2 1 2 2 2 1 1 1 2 3 1 1 2 2 3 1 3 3 2 2 1 21.8
25 2 1 1 2 2 2 1 2 2 1 1 1 3 2 1 2 3 3 1 3 1 2 2 19.7
26 2 1 1 2 2 2 1 2 2 1 1 2 1 3 2 3 1 1 2 1 2 3 3 -19.0
27 2 1 1 2 2 2 1 2 2 1 1 3 2 1 3 1 2 2 3 2 3 1 1 8.2
28 2 2 2 1 1 1 1 2 2 1 2 1 3 2 2 2 1 1 3 2 3 1 3 10.8
29 2 2 2 1 1 1 1 2 2 1 2 2 1 3 3 3 2 2 1 3 1 2 1 40.1
30 2 2 2 1 1 1 1 2 2 1 2 3 2 1 1 1 3 3 2 1 2 3 2 -40.1
31 2 2 1 2 1 2 1 1 1 2 2 1 3 3 3 2 3 2 2 1 2 1 1 -31.1
32 2 2 1 2 1 2 1 1 1 2 2 2 1 1 1 3 1 3 3 2 3 2 2 -56.6
33 2 2 1 2 1 2 1 1 1 2 2 3 2 2 2 1 2 1 1 3 1 3 3 48.9
34 2 2 1 1 2 1 2 1 2 2 1 1 3 1 2 3 2 3 1 2 2 3 1 -46.5
35 2 2 1 1 2 1 2 1 2 2 1 2 1 2 3 1 3 1 2 3 3 1 2 66.2
36 2 2 1 1 2 1 2 1 2 2 1 3 2 3 1 2 1 2 3 1 1 2 3 -22.7
Noise factor assignment: noise factors 1-10 in columns 1-10, noise factors 11-21 in columns 12-22; columns 11 and 23 are empty (e).
* Columns 1-23 form the L36 orthogonal array. Columns 1-10 and 12-22 form the noise orthogonal array.
† Empty columns are identified by e.
‡ The values in this column correspond to the starting design.
Let us now see how the levels for the transistor parameter SIEPM are calculated.
As mentioned earlier, this parameter has a long-tailed distribution that could be
approximated by the log-normal density. Let log10(μ16) be the mean for log10(SIEPM).
Then, log10(SIEPM/μ16) has mean zero and standard deviation equal to
(log10 7)/3 = 0.2817. So, level 1 for log10(SIEPM/μ16) is −√(3/2) (0.2817) = −0.345.
Thus, in the natural scale, level 1 for SIEPM is 10^(−0.345) = 0.45 times μ16, while level 2
is equal to μ16. Similarly, level 3 for SIEPM is 2.21 times μ16.
The levels for the other noise parameters are calculated in a similar manner.
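The calculation for SIEPM can also be checked numerically; the three levels come out as multiplicative factors of roughly 0.45, 1.0, and 2.21 applied to the nominal value μ16 (a sketch):

    import math

    log_sigma = math.log10(7) / 3                 # standard deviation of log10(SIEPM / mu16)
    shift = math.sqrt(1.5) * log_sigma            # displacement of levels 1 and 3 in the log scale

    factors = [10 ** (-shift), 1.0, 10 ** (shift)]
    print([round(f, 2) for f in factors])         # [0.45, 1.0, 2.21], multiples of the nominal value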
The factors 1 through 10 were assigned to columns 1 through 10 of the L36
array, respectively, and the factors 11 through 21 were assigned to columns 12 through
22, respectively. Columns 11 and 23 were kept empty. The submatrix of L36 formed
by columns 1 through 10 and 12 through 22 is called the noise orthogonal array, as it
is used to simulate the effect of noise factors.
It is obvious that the quality loss, denoted by Q, for this circuit design is given by

Q = k × (mean square offset voltage)
  = k × (3.54² + 34.4²)
  = 1197 k,

where k is the quality loss coefficient. Suppose the maximum allowed variation for the
offset voltage is 0 ± 35 mV. Then, assuming that the distribution of the offset voltage
is approximately normal (gaussian) with mean and variance as determined earlier, the
percent yield (p) of circuits produced under this design would be given by

p = Φ(1.12) − Φ(−0.91) ≈ 0.69,

that is, about 69 percent of the circuits would meet the specification.
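The quality loss and the yield estimate above can be reproduced with a few lines of Python, using the mean of −3.54 mV and standard deviation of about 34.4 mV quoted for the starting design (a sketch; the quality loss coefficient k is left symbolic, as in the text, and Φ is the standard normal distribution function):

    from math import erf, sqrt

    mean_mv, sd_mv = -3.54, 34.41          # mean and standard deviation of the offset voltage (mV)
    limit_mv = 35.0                        # allowed variation: 0 +/- 35 mV

    mean_square = mean_mv ** 2 + sd_mv ** 2
    print(mean_square)                     # about 1197; quality loss Q = k * mean_square

    def phi(z):                            # standard normal distribution function
        return 0.5 * (1.0 + erf(z / sqrt(2.0)))

    p = phi((limit_mv - mean_mv) / sd_mv) - phi((-limit_mv - mean_mv) / sd_mv)
    print(p)                               # about 0.69, i.e., roughly 69 percent yield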
TABLE 8.3 ANOVA FOR THE OFFSET VOLTAGE UNDER THE STARTING DESIGN*
Source | Degrees of Freedom | Sum of Squares
Error | 3 | 8
Total | 35 | 41449
The selection of offset voltage as the quality characteristic was quite straightforward in
this project and was a natural choice, as it intuitively satisfies the guidelines given in
Chapter 6. The ideal value for the offset voltage is 0.0 mV. Depending on the
particular values of the circuit parameters, the offset voltage can be either positive or
negative. The design of the differential op-amp circuit for offset voltage is clearly a static
problem. Referring to the classification of static problems given in Chapter 5, the
current problem can be classified as a signed-target type of problem. Thus, the
appropriate S/N ratio, η, to be maximized is

η = −10 log10 (variance of the offset voltage).
If under the control factor settings that maximize η we get a nonzero mean offset
voltage, we can take care of it in one of two ways: (1) subtract the mean voltage in
the circuit that receives the output of the differential op-amp circuit, or (2) find a
control factor of the differential op-amp circuit that has a negligible or no effect on η, but
has an appreciable effect on the mean, and use it to adjust the mean at zero.
Notice that there is always a potential to misclassify a nominal-the-best type
problem as a signed-target problem by subtracting the nominal value. Here, we are not
making such a mistake because zero offset voltage is a naturally occurring value. One
way to test whether a problem is being misclassified as signed-target is to see whether
the variance is identically zero for a particular value of the quality characteristic. If it
is, then by shifting the origin to that value, the problem should be classified as a
nominal-the-best type. Note that it is also necessary to ensure that after shifting the
origin, the quality characteristic takes only positive values. Recall that we followed
this strategy for the heat exchanger example in Chapter 6.
It should also be observed that we should not classify this problem as a smaller-
the-better type because the offset voltage can be positive as well as negative. The
selection of an appropriate S/N ratio is discussed further in Section 8.10.
As mentioned earlier, the circuit designer can set values of only five parameters: RFM,
RPEM, RNEM, CPCS, and OCS. These parameters constitute the control factors. The
other parameters are determined by the tracking relationships and the manufacturing
process. When the design team started the project, the circuit was "optimized"
intuitively to get low offset voltage. The corresponding values of the control factors were
RFM = 71 kΩ, RPEM = 15 kΩ, RNEM = 2.5 kΩ, CPCS = 20 μA, and OCS = 20 μA.
For each of these control factors, three levels were taken as shown in Table 8.4. For
each control factor, level 2 is the starting level, level 1 is one-half of the starting level,
and level 3 is two times the starting level. Thus, we include a wide range of values
with these levels. This is necessary to benefit adequately from the nonlinearity of the
relationship between the control factors, noise factors, and the offset voltage.
Table 8.4 lists, for each control factor, its name, a brief description, and its three levels.
TABLE 8.5 CONTROL ORTHOGONAL ARRAY
Expt. No. | A B C D E          Expt. No. | A B C D E
1 11111 19 12 13 3
2 2 2 2 2 2 20 2 3 2 11
3 3 3 3 3 3 21 3 13 2 2
4 11112 22 12 2 3 3
5 2 2 2 2 3 23 2 3 3 11
6 3 3 3 3 1 24 3 112 2
7 112 3 1 25 13 2 12
8 2 2 3 12 26 2 13 2 3
9 3 3 12 3 27 3 2 13 1
10 113 2 1 28 13 2 2 2
11 2 2 13 2 29 2 13 3 3
12 3 3 2 13 30 3 2 111
13 12 3 13 31 13 3 3 2
14 2 3 12 1 32 2 1113
15 3 12 3 2 33 3 2 2 2 1
16 12 3 2 1 34 13 12 3
17 2 3 13 2 35 2 12 3 1
18 3 12 13 36 3 2 3 12
Simulation Algorithm
Each row of the control orthogonal array represents a different trial design. For each
trial design, the S/N ratio was evaluated using the procedure described in Sections 8.4
and 8.5. The simulation algorithm is graphically displayed in Figure 8.3. It consists
of the following calculations for each row of the control orthogonal array:
1. Determine the control factor settings for a row of the control orthogonal array
(Table 8.5) by using Table 8.4. For example, row 1 comprises level 1 for all five control factors.
Figure 8.3 (schematic): for each row of the control orthogonal array, the offset voltage is
evaluated at every row of the noise orthogonal array, and the mean and S/N ratio are computed.
The results of the above calculations for the 36 rows of the control orthogonal array
(OA) are given in Table 8.6.
Here, the entire simulation amounts to 36 x 36 = 1,296 evaluations of the offset
voltage. When evaluation of the response is expensive, such a large number of
evaluations can prove to be impractical. Further, such a large number of evaluations is, as a
rule, not necessary. Later in this chapter (Section 8.8) we discuss some practical ways
of reducing the number of evaluations.
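A skeleton of the simulation algorithm is sketched below. The circuit simulator, the two orthogonal arrays, and their sizes are stand-ins; only the structure of the calculation, evaluating the offset voltage over the noise orthogonal array for each row of the control orthogonal array and then computing the mean and the signed-target S/N ratio, follows the procedure described in this chapter.

    import math

    def simulate_offset(control_row, noise_row):
        # Stand-in for the circuit simulator; returns the offset voltage in volts.
        return 0.001 * (sum(control_row) - sum(noise_row))

    def sn_and_mean(control_row, noise_oa):
        # Evaluate the offset voltage at every row of the noise orthogonal array,
        # then compute the mean and the signed-target S/N ratio eta = -10 log10(variance).
        v = [simulate_offset(control_row, noise_row) for noise_row in noise_oa]
        mean = sum(v) / len(v)
        var = sum((x - mean) ** 2 for x in v) / (len(v) - 1)
        eta = -10.0 * math.log10(var)
        return eta, mean

    # Tiny stand-ins for the control and noise orthogonal arrays.
    control_oa = [[1, 1, 1], [1, 2, 2], [2, 1, 2], [2, 2, 1]]
    noise_oa = [[1, 1], [1, 2], [2, 1], [2, 2]]

    for row in control_oa:
        print(row, sn_and_mean(row, noise_oa))

In the op-amp case study the outer loop runs over the 36 rows of the control orthogonal array and the inner loop over the 36 rows of the noise orthogonal array, which is what produces the 1,296 evaluations mentioned above.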
Data Analysis
Table 8.6 lists the S/N ratio and the mean offset voltage computed for each of the 36
rows of the control orthogonal array. By analyzing the S/N ratio data, we get the
information in Table 8.7. The effects of the various factors on η are displayed in
Figure 8.4. The 2σ confidence limits are also shown in the figure. It is clear from the
plots and the ANOVA table that a major improvement in η is possible by reducing
RPEM from 15 kΩ to 7.5 kΩ. A modest improvement is possible by increasing
RNEM from 2.5 kΩ to 5 kΩ, and by reducing both the current sources, CPCS and
OCS, from 20 μA to 10 μA. The resistance RFM has a negligible effect.
The results of the analysis of the mean offset voltage data are given in Table 8.8
and are plotted in Figure 8.4. We can see that the three resistors have only a small
effect on the mean offset voltage; however, the two current sources have a large effect
on the mean offset voltage. Considering that their effects on η are not large, the two
current sources could be used to adjust the mean offset voltage on zero.
Optimum Settings
Considering the data analysis above, the design team chose the following two designs
as potential optimum designs:
• Optimum 1: Only change RPEM from 15 kΩ to 7.5 kΩ. Using the procedure in
Sections 8.4 and 8.5, the value of η for this design was found to be 33.58 dB
compared to 29.27 dB for the starting design. In terms of the standard deviation
TABLE 8.6 (rows 1-6)
Row No. of Control OA | Mean Offset Voltage (10⁻³ V) | Variance of Offset Voltage (10⁻⁶ V²) | η = −10 log10 (Variance)
1 | −1.28 | 321 | 34.93
2 | −3.54 | 1184 | 29.27
3 | −11.47 | 7301 | 21.37
4 | 14.12 | 389 | 34.10
5 | 39.68 | 1789 | 27.47
6 | −127.26 | 4850 | 23.14
Figure 8.4 Plots of control factor effects on η = −10 log10 (variance) and on the mean
offset voltage, for levels A1-A3, B1-B3, C1-C3, D1-D3, and E1-E3. Underscore indicates
starting level. Two-standard-deviation confidence limits are shown for the starting level.
Table 8.7 (column headings): Factor; Average η by Level† 1, 2, 3; Degrees of Freedom; Sum of Squares; Mean Square; F.
Table 8.8 (column headings): Factor; Average μ by Level† (10⁻³ V) 1, 2, 3; Degrees of Freedom; Sum of Squares (10⁻⁶ V²); Mean Square (10⁻⁶ V²); F.
of the offset voltage, this represented a reduction from 34.41 to 20.95 mV.
Correspondingly, the mean offset voltage also changed from -3.54 to -1.94 mV,
which is closer to the desired value of 0 mV.
• Optimum 2: Change RPEM to 7.5 kΩ. Also change RNEM to 5.0 kΩ, CPCS to
10 μA, and OCS to 10 μA. The η for this design was computed to be 37.0 dB,
which is equivalent to a standard deviation of offset voltage equal to 14.13 mV.
The mean offset voltage for this design was −16.30 mV. As discussed earlier,
the mean could be easily brought to 0 mV by adjusting either CPCS or OCS, but
with some adverse effect on η.
In the discussion thus far, we have paid attention to only the dc offset voltage.
Stability under ac operation is also an important consideration. For a more elaborate study
of this characteristic, one must generate data similar to that in Table 8.6 and
Figure 8.4. The optimum control factor setting should then be obtained by jointly
considering the effects on both the dc and ac characteristics. If conflicts occur, appropriate
trade-offs can be made using the quantitative knowledge of the effects. In the
differential op-amp circuit example, the design team simply checked for ac stability at the two
optimum conditions. For sufficient safety margin with respect to ac stability, optimum
1 was selected as the best design and was called simply optimum design.
This concludes the Robust Design cycle for the differential op-amp case study.
For another circuit design example, see Anderson [A2]. The remaining sections of this
chapter describe additional quality engineering ideas using the differential op-amp case
study.
Selecting values of control parameters that maximize the S/N ratio gives us a lower
quality loss without any increase in the product cost. The optimum design arrived at in
the preceding section gave a 33.58 - 29.27 = 4.31 dB gain in S/N ratio, which is
equivalent to a reduction in the variance of offset voltage and, hence, the quality loss by a factor of 10^0.431 = 2.7. If further quality improvement is needed or desired, we must
address the tolerance factors and make appropriate economic trade-offs between the
increased cost of the product and the improved quality. As discussed in Chapter 2, for
best engineering economics, tolerance design should come only after the S/N ratio has
been maximized.
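The conversion between a gain in S/N ratio and the corresponding reduction in variance is a one-line calculation (a sketch):

    # A gain of G dB in the S/N ratio corresponds to dividing the variance
    # (and hence the quality loss) by 10 ** (G / 10).
    gain_db = 33.58 - 29.27                     # 4.31 dB, from the op-amp optimization
    variance_ratio = 10 ** (-gain_db / 10.0)    # variance(optimum) / variance(start)
    print(gain_db, variance_ratio)              # about 0.37, i.e., a 63 percent reduction
    print(1.0 / variance_ratio)                 # about 2.7, the factor quoted above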
The first step in tolerance design is to determine the contribution of each noise
factor to the quality loss. This can be done by using the procedure in Section 8.4.
Table 8.9 gives the contribution to total variance of the noise factors for the optimum
design, as well as the starting design. Note that the contribution to the total variance
by a noise factor is computed by dividing the sum of squares due to that factor by the
degrees of freedom for the total sum of squares, namely 35. In this table, we list only
the top four contributors. Together, these four noise factors account for more than 95
percent of the total variance. Thus, the contribution of noise factors exhibits the
typical Pareto principle. To improve the joint economics of product cost and quality loss,
we should consider the following issues:
1. For noise factors that contribute a large amount to the variance of the offset
voltage, consider ways of reducing their variation. This is the situation with the
noise factors SIENP, AFNO, AFNM, and SIEPP.
2. For the noise factors that contribute only a small amount to the variance,
consider ways of saving cost by allowing more variation. This is the situation with
the noise factors other than those listed above.
TABLE 8.9 (partial)
Noise Factor | Contribution to the Variance of Offset Voltage (10⁻⁶ V²): Starting Design | Optimum Design
AFNO | 224 | 76
AFNM | 167 | 57
SIEPP | 53 | 55
Remainder | 54 | 19
Here, we use some fictitious cost values to illustrate the economic trade-off
involved in tolerance design. Suppose there are three layout techniques for the
integrated circuit. We call technique 1 the baseline technique. If technique 2 is used,
it leads to a factor of two reduction in variance of SIENP, and it costs $0.20 more per
circuit to manufacture. If technique 3 is used, it leads to a factor of four reduction in
variance of SIENP, and it costs $1.00 more per circuit to manufacture. Which layout
technique should we specify? The layout technique is a tolerance factor because it
affects both the quality loss and unit manufacturing cost. Selecting levels of such
factors is the goal of tolerance design.
To select the layout technique, we need to know the quality loss coefficient,
which was defined in Chapter 2. Suppose the LD50 point (the point at which one-half
of the circuits fail) is Δ0 = 100 mV and the cost of failure is A0 = $30. Then, the
quality loss coefficient, k, is given by

k = A0 / Δ0² = 30 / (0.1 V)² = $3,000 per V².

So, the quality loss due to variation in SIENP when layout technique 1 is used with the
optimum design is approximately $0.70 per circuit.
Since the variance of offset voltage caused by SIENP is directly proportional to its own
variance, the quality loss under techniques 2 and 3 when optimum design is used
would be $0.35/circuit and $0.18/circuit, respectively. In each case, the quality loss
due to the other noise factors is $0.62/circuit. Consequently, the total cost (which
consists of product cost plus the quality loss due to all noise factors) for layout
techniques 1, 2, and 3 would be $1.32/circuit, $1.17/circuit, and $1.80/circuit, respectively.
These costs are listed in Table 8.10. It is clear that in order to minimize the total cost
incurred by the customer (which is the quality loss) and by the manufacturer (extra cost
for the layout technique), we should choose technique 2 for layout. Cost trade-offs of
this nature should be done with all tolerance factors.
* Total cost is the sum of incremental product cost and quality loss due to all sources.
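The trade-off among the three layout techniques can also be tabulated directly. The sketch below uses the fictitious costs given above; the $0.70 quality loss for technique 1 is inferred from the halved and quartered values quoted for techniques 2 and 3.

    # Total cost = incremental product cost + quality loss due to SIENP + quality loss
    # due to the other noise factors (fictitious values from the example above).
    other_loss = 0.62                                    # $/circuit, loss due to the other noise factors
    extra_cost = {1: 0.00, 2: 0.20, 3: 1.00}             # $/circuit, added manufacturing cost
    sienp_loss = {1: 0.70, 2: 0.35, 3: 0.18}             # $/circuit, loss due to SIENP

    for technique in (1, 2, 3):
        total = extra_cost[technique] + sienp_loss[technique] + other_loss
        print(f"technique {technique}: total cost = ${total:.2f}/circuit")
    # Prints $1.32, $1.17, and $1.80; technique 2 minimizes the total cost.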
It often proves to be very expensive to simulate the product performance even under
one set of parameter values. In that case, using an array such as L36 to evaluate the
S/N ratio for each combination of control factor settings in the control orthogonal array
can be prohibitively expensive. This section presents some practical approaches for
reducing the number of testing conditions used in evaluating the S/N ratio for each
combination of control factor levels.
By comparing the two columns in Table 8.9 that correspond to the starting and optimum
designs, we notice that the sensitivity to all noise factors is reduced more or less uniformly
by a factor of about 3. (Tolerance in SIEPP is an exception, but its contribution to the total
variance is rather small.)
For the circuit, it would have been adequate to consider only the tolerances in
the parameters SIENP, AFNO, and AFNM for optimizing the circuit for offset voltage.
This would mean that for the purpose of optimization the L9 array could be used in
place of the L36 array for simulating the effect of noise factors. This amounts to a 4-fold
reduction in the total number of evaluations of the offset voltage.
Accordingly, a compound noise factor, CN, can be formed with the following three levels:

• Level 1 of CN. Select the levels for SIENP, AFNO, and AFNM so that we get a
compounded effect of lowering the offset voltage—that is, pushing it to the
negative side. From Table 8.3, we see that level 1 of CN should consist of level 1 of
SIENP, level 3 of AFNO, and level 1 of AFNM.
• Level 2 of CN. Take nominal levels for SIENP, AFNO, and AFNM.
• Level 3 of CN. Select the levels of SIENP, AFNO, and AFNM so that we get a
compounded effect of increasing the offset voltage—that is, pushing it to the
positive side. Again from Table 8.3, we see that level 3 of CN should consist of
level 3 of SIENP, level 1 of AFNO, and level 3 of AFNM.
The three levels of the compound noise factor are simply three testing conditions.
Thus, for each combination of control factor settings, we would evaluate the offset
voltage at only three testing conditions and compute the S/N ratio from these three values.
This reduces the testing conditions to bare bones. In order for the optimization
performed with the compound noise factor to give meaningful results, it is necessary that
we know the following:

1. The noise factors that contribute most of the variation in the quality characteristic

2. The directionality of the effects of these major noise factors

3. That the directionality of these effects does not depend on the settings of the
control factors.
Note that if conditions two and three above are violated, then during simulation or
experimentation the effect of one noise factor may get compensated for by another
noise factor. In that case, the optimization based on compounded noise can give
confused results.
In any given project, one can use a large orthogonal array like L 36 under a
starting design to identify major noise factors and the directionality of their effects.
Properly selecting the quality characteristic helps ensure consistency in the directionality of
the effects of noise factors. Lack of such consistency can only be detected through the
final verification experiment.
Is the S/N ratio calculated from the compound noise factor equal to the S/N ratio
calculated from an orthogonal array based simulation? Of course, they need not be
equal. However, for optimization purposes, it does not make any difference whether
we use one or the other, except for the computational effort.
It is helpful to examine the nature of the nonlinearity of the relationship between the
circuit parameters and the offset voltage. Figure 8.5 shows the plot of offset voltage as
a function of SIENP/SIENM for RPEM = 15 kΩ and RPEM = 7.5 kΩ. Note that the
nominal value of SIENP is SIENM. Therefore, we take the ratio SIENP/SIENM to
study the sensitivity of offset voltage to variation in SIENP. All other circuit
parameters are set at their nominal levels for these plots. It is clear that sensitivity to
variation in SIENP/SIENM is altered by changing another parameter, that is, RPEM.
Compare this with the nonlinearity studied in Chapter 2, Section 2.5 where a change in the
nominal value of the gain leads to a change in the sensitivity to variation in the gain
itself. In Robust Design, we are interested in exploiting both types of nonlinearity to
reduce the sensitivity to noise factors.
Figure 8.5 Relationship between offset voltage and SIENP when (a) RPEM = 15 kΩ and
(b) RPEM = 7.5 kΩ. Note that changing RPEM changes the slope. Also shown in the
plots is the transfer of variance from SIENP to offset voltage.
One benefit of using an orthogonal array with many degrees of freedom for error is that
we can judge the additivity of the factor effects. Referring to Table 8.7, we see that in
this case study the error mean square for the chosen S/N ratio (signed-target) is 0.724.
The total degrees of freedom for the control factors is 10, the corresponding sum of
squares is 3.1 + 504.3 + 53.4 + 108.2 + 50.1 = 719.1 so that the mean square for the
factor effects is 71.91. Thus, the ratio of the error mean square (0.724) to the mean
square for the factor effects is 0.01. This implies that the additivity of the factor
effects is excellent for the chosen S/N ratio.
What if we had taken the absolute value of the offset voltage and treated it as a
smaller-the-better type quality characteristic? ANOVA was performed by using the
corresponding S/N ratio, which yielded the error mean square equal to 9.26 and the
mean square for the factor effects equal to 70.21. For this S/N ratio, the ratio of the
error mean square to the mean square for the factor effects is 0.13, which is more than
10 times larger than the corresponding ratio for the chosen S/N ratio. This is evidence
that the chosen S/N ratio, namely, the signed-target S/N ratio, is substantially
preferable to the smaller-the-better type S/N ratio for this case study. Thus, when in doubt,
one can conduct ANOVA with the candidate S/N ratios and pick the one that gives the
smallest relative error mean square.
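The comparison amounts to computing, for each candidate S/N ratio, the ratio of the error mean square to the mean square for the factor effects and preferring the candidate with the smaller ratio. A sketch using the numbers quoted above:

    # Compare candidate S/N ratios by the ratio (error mean square) / (factor-effect mean square);
    # the smaller the ratio, the better the additivity of the candidate.
    candidates = {
        "signed-target":      {"error_ms": 0.724, "factor_ms": 71.91},
        "smaller-the-better": {"error_ms": 9.26,  "factor_ms": 70.21},
    }

    ratios = {name: c["error_ms"] / c["factor_ms"] for name, c in candidates.items()}
    best = min(ratios, key=ratios.get)
    print(ratios)    # roughly {'signed-target': 0.01, 'smaller-the-better': 0.13}
    print(best)      # 'signed-target'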
Some caution is needed in using the approach of this section for selecting the
appropriate S/N ratio. The candidate S/N ratios should only be those that can be
justified from engineering knowledge. The orthogonal array can then help identify
which adjustments are easier to accomplish. The approach of this section can also be
used in deciding which is a better quality characteristic from among a few that can be
justified from engineering knowledge.
8.11 SUMMARY
• Orthogonal array based simulation of noise factors can be used to evaluate the
S/N ratio for each combination of control factor settings used in a matrix
experiment. However, this may prove to be too expensive in some projects. Selecting
only major noise factors and forming a compound noise factor can greatly reduce
the simulation effort and, hence, are recommended for design optimization.
• For the differential op-amp circuit example, the quality characteristic (offset
voltage) was of the signed-target type. It had five control factors and 21 noise
factors. The standard orthogonal array L36 was used to simulate the effect of noise
factors and also to construct trial combinations of control factor settings. Thus,
36 x 36 = 1,296 circuit evaluations were performed. The selected optimum
control factor settings gave a 4.31 dB increase in S/N ratio. This represents a 63
percent reduction in the mean square offset voltage.
• Selecting the levels of control factors that maximize the S/N ratio gives a lower
quality loss without any increase in product cost. Thus, here, 63 percent
reduction in quality loss was obtained. If further quality improvement is needed or
desired, tolerance factors must be addressed and appropriate economic trade-offs
must be made between the increased cost of the product and the improved
quality. Here, an 11.4 percent reduction in total cost was achieved by tolerance
design. For best engineering economics (as noted in Chapter 2), tolerance design
should come only after the S/N ratio has been maximized (see Table 8.11).
• The first step of tolerance design is to determine the contribution of each noise
factor to the quality loss. To improve the joint economics of product cost and
quality loss, one should consider two issues:
1. Ways of reducing the variation of the noise factors that contribute a large
amount to the quality loss
2. Ways of saving cost by allowing wider variation for the noise factors that
contribute only a small amount to the quality loss
• Matrix experiments using orthogonal arrays are useful for selecting the most
appropriate S/N ratio from among a few candidates. In such an application, a
few degrees of freedom should be reserved for estimating the error variance. A
smaller error variance, relative to the mean square for the control factor effects,
signifies that the additivity of the particular S/N ratio is better, and, hence, the
S/N ratio is more suitable.
Chapter 9
DESIGN OF
DYNAMIC SYSTEMS
Dynamic systems are those in which we want the system's response to follow the
levels of the signal factor in a prescribed manner. Chapter 5 gave several examples of
dynamic systems and the corresponding S/N ratios. The changing nature of the levels
of the signal factor and the response make designing a dynamic system more
complicated than designing a static system. However, the eight steps of Robust Design
described in Chapter 4 are still valid. This chapter describes the design of a
temperature control circuit to illustrate the application of the Robust Design method to a
dynamic system. This chapter has six sections:
• Section 9.1 describes the temperature control circuit and its main function (Step
1 of the Robust Design steps described in Chapter 4).
• Section 9.2 gives the signal, control, and noise factors for the circuit (Steps 2 and
4).
• Section 9.3 discusses the selection of the quality characteristics and the S/N
ratios (Step 3).
• Section 9.4 describes the steps in the optimization of the circuit, including
verification of the optimum conditions (Steps 5 through 8).
• Section 9.5 discusses iterative optimization using matrix experiments based on
orthogonal arrays.
• Section 9.6 summarizes the important points of this chapter.
For a particular target temperature, the circuit must turn a heater ON or OFF to
control the heat input, which makes the temperature controller a dynamic problem.
Further, the target temperature can be changed by the users (that is, on one day users
may want the target temperature at 80°C, whereas on another day they may want it at
90°C) which also makes the design of a temperature control circuit a dynamic problem.
Thus, designing a temperature control circuit is a doubly dynamic problem.
Figure 9.2 shows a standard temperature control circuit. (The tolerance design of a
slightly modified version of this circuit was discussed by Akira Tomishima [T9].)
Suppose we want to use the circuit to maintain the temperature of a bath at a value that is
above the ambient temperature. The temperature of the bath is sensed by a thermistor,
which we assume to have a negative temperature coefficient; that is, as shown in Figure
9.3, the thermistor resistance, RT, decreases with an increase in the temperature of the
bath. When the bath temperature rises above a certain value, the resistance RT drops
below a threshold value so that the difference in the voltages between terminals 1 and 2
of the amplifier becomes negative. This actuates the relay and turns OFF the heater.
Likewise, when the temperature falls below a certain value, the difference in voltages
between the terminals 1 and 2 becomes positive so that the relay is actuated and the
heater is turned ON. In the terminology of Chapter 5, bang-bang controllers are
Continuous-Discrete (C-D) type dynamic problems.
Let us denote the value of the thermistor resistance at which the heater turns ON
by RT-ON, and the value of the thermistor resistance at which the heater turns OFF by
RT-OFF. Consider the operation of the controller for a particular target temperature. The
values of RT-ON and RT-OFF can change due to variation in the values of the various
circuit components. This is graphically displayed in Figure 9.4. In the figure, mT is the
thermistor resistance at the target temperature; and mON and mOFF are the mean values of
RT-ON and RT-OFF, respectively.
Figure 9.3 Resistance vs. temperature plot for a thermistor with a negative temperature coefficient.
Thus, the ideal function of the temperature control circuit can be written as
follows:
This is clearly a doubly dynamic problem—first for the ability to set mT (C-C type
system), second for the ON-OFF transition (C-D type system).
Figure 9.4 Hysteresis and variation in the operation of a temperature control circuit.

9.2 SIGNAL, CONTROL, AND NOISE FACTORS
Let us first examine the choice of a signal factor. Referring to the circuit diagram of
Figure 9.2, we notice that the four resistances, R1, R2, R3, and RT, form a Wheatstone
bridge. Therefore, any one of the three resistances, R1, R2, or R3, can be used to
adjust the value of RT at which the bridge balances. We decide to use R3, and, thus, it
is our signal factor for deciding the temperature setting. The resistance RT by itself is
the signal factor for the ON-OFF operations.
The purpose of the Zener diode (nominal voltage = Ez) in the circuit is to
regulate the voltage across the terminals a and b (see Figure 9.2). That is, when the Zener
diode is used, the voltage across the terminals a and b remains constant even if the
power supply voltage, E0, drifts or fluctuates. Thus, it reduces the dependence of the
threshold values RT-ON and RT-OFF on the power supply voltage E0.
As a general rule, the nominal values of the various circuit parameters are
potential control factors, except for those completely defined by the tracking rules. In the
temperature control circuit, the control factors are R1, R2, R4, and Ez. Note that we do
not take E0 as a control factor. As a rule, the design engineer has to make the
decision about which parameters should be considered control factors and which should
not. The main function of E0 is to provide power for the operation of the relay, and
its nominal value is not as important for the ON-OFF operation. Hence, we do not
include E0 as a control factor. The tolerances on R1, R2, R4, Ez, and E0 are the noise
factors.
For proper operation of the circuit, we must have Ez < E0. Also, R4 must be
much bigger than R1 or R2. These are the tracking relationships among the circuit
parameters.
The resistances RT-ON and RT-OFF are continuous variables that are obviously directly
related to the ON-OFF operations; together, they completely characterize the circuit
function. Through standard techniques of circuit analysis, one can express the values
of RT-ON and RT-OFF as the following simple mathematical functions of the other circuit
parameters:

R_{T-ON} = \frac{R_3 R_2 (E_z R_4 + E_0 R_1)}{R_1 (E_z R_2 + E_z R_4 - E_0 R_2)}     (9.1)

R_{T-OFF} = \frac{R_3 R_2 R_4}{R_1 (R_2 + R_4)} .     (9.2)

Thus, by the criteria defined in Chapter 6, RT-ON and RT-OFF are appropriate choices
for the quality characteristics.
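Equations (9.1) and (9.2) are easy to evaluate directly. The Python sketch below uses the starting nominal values quoted later in this chapter and assumes, purely for illustration, a supply voltage E0 = 10 V (E0 is not specified in the text shown here); with that assumption the slope RT-ON/R3 comes out near the value β = 2.70 reported for the starting design.

    def rt_on(R1, R2, R3, R4, E0, Ez):
        # Equation (9.1)
        return R3 * R2 * (Ez * R4 + E0 * R1) / (R1 * (Ez * R2 + Ez * R4 - E0 * R2))

    def rt_off(R1, R2, R3, R4):
        # Equation (9.2)
        return R3 * R2 * R4 / (R1 * (R2 + R4))

    # Starting nominal values (resistances in kilo-ohms, voltages in volts); E0 = 10 V is assumed.
    R1, R2, R4, Ez, E0 = 4.0, 8.0, 40.0, 6.0, 10.0
    for R3 in (0.5, 1.0, 1.5):
        print(R3, rt_on(R1, R2, R3, R4, E0, Ez), rt_off(R1, R2, R3, R4))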
Suppose Equations (9.1) and (9.2) for the evaluation of RT-ON and RT-OFF were
not available and that hardware experiments were needed to determine their values.
Measuring RT-ON and RT-OFF would still be easy. It could be accomplished by
incrementing the values of RT through small steps until the heater turns ON and
decrementing the values of RT until the heater turns OFF.
The ideal relationship of RT-ON and RT-OFF with R3 (the signal factor) is linear,
passing through the origin, as shown in Figure 9.5. So for both quality characteristics, the
appropriate S/N ratio is the C-C type S/N ratio, described in Chapter 5. Suppose for
some particular levels of the control factors and particular tolerances associated with
Figure 9.5 Plot of RT-ON and RT-OFF vs. R3 for the starting design.
the noise factors, we express the dependence of RT-ON on R3 by the following equation
obtained by the least squares fit:

R_{T-ON} = \beta R_3 + e     (9.3)

where β is the slope and e is the error. Note that any nonlinear terms in R3 (such as R3² or
R3³) are included in the error e. The S/N ratio for RT-ON is given by

\eta = 10 \log_{10} \frac{\beta^2}{\sigma_e^2}     (9.4)

where σe² is the variance of the error e. Similarly, suppose the dependence of RT-OFF on
R3 is expressed as

R_{T-OFF} = \beta' R_3 + e'     (9.5)

where β′ is the slope and e′ is the error. Then, the corresponding S/N ratio for
RT-OFF is given by

\eta' = 10 \log_{10} \frac{\beta'^2}{\sigma_{e'}^2} .     (9.6)
Let us first see the computation of the S/N ratio for RT-ON. The nominal values of the
circuit parameters under the starting conditions, their tolerances (three-standard-
deviation limits), and the three levels for testing are shown in Table 9.1. These levels
were computed by the procedure described in Chapter 8; that is, for each noise factor,
the levels 1 and 3 are displaced from level 2, which is equal to its mean value, on
either side by √(3/2) σ, where σ is one-third the tolerance.
TABLE 9.1 NOISE AND SIGNAL FACTORS FOR TEMPERATURE CONTROL CIRCUIT
Column headings: Factor; Mean*; Tolerance (%); Levels 1, 2, 3 (multiply by mean for noise factors). The signal factor is R3.
* Mean values listed here correspond to the nominal values for the starting design.
The ideal relationship between R3 and RT-ON is a straight line through the origin
with the desired slope. Second- and higher-order terms in the relationship between R3
and RT-ON should therefore be minimized. Thus, we take three levels for the signal
factor (R3): 0.5 kΩ, 1.0 kΩ, and 1.5 kΩ. Here RT-ON must be zero when R3 is zero.
So, with three levels of R3, we can estimate the first-, second-, and third-order terms in
the dependence of RT-ON on R3. The first order, or the linear effect, constitutes the
desired signal factor effect. We include the higher-order effects in the noise variance
so they are reduced with the maximization of η. [It is obvious from Equations (9.1)
and (9.2) that the second- and higher-order terms in R3 do not appear in this circuit.
Thus, taking only one level of R3 would have been sufficient. However, we take three
levels to illustrate the general procedure for computing the S/N ratio.]
As discussed in Chapter 8, an orthogonal array (called noise orthogonal array)
can be used to simulate the variation in the noise factors. In addition to assigning
noise factors to the columns of an orthogonal array, we can also assign one of the
columns to the signal factor. From the values of RT-ON corresponding to each row of
the noise orthogonal array, we can perform least squares regression (see Section 5.4, or
Hogg and Craig [H3], or Draper and Smith [D4]) to estimate β and σe² and then the
S/N ratio, η.
Chapter 8 pointed out that the computational effort can be reduced greatly by
forming a compound noise factor. For that purpose, we must first find the directionality of
the changes in RT-ON caused by the various noise factors. By studying the derivatives
of RT-ON with respect to the various circuit parameters, we observed the following
relationships: RT-ON increases whenever R1 decreases, R2 increases, R4 decreases, E0
increases, or Ez decreases. (If the formula for RT-ON were complicated, we could have
used the noise orthogonal array to determine the directionalities of the effects.) Thus,
we form the three levels of the compound noise factor as follows:
For every level of the signal factor, we calculate RT-ON with the noise factor
levels set at the levels (CN)1, (CN)2, and (CN)3. Thus, we have nine testing conditions
for the computation of the S/N ratio. The nine values of RT-ON corresponding to the
starting values of the control factors (R1 = 4.0 kΩ, R2 = 8.0 kΩ, R4 = 40.0 kΩ, and
Ez = 6.0 V) are tabulated in Table 9.2. Let y_i denote the value of RT-ON for the ith
testing condition, and let R3(i) be the corresponding value of R3. Then, from standard
least squares regression analysis (see Section 5.4), we obtain

\beta = \frac{\sum_{i=1}^{9} R_{3(i)} y_i}{\sum_{i=1}^{9} R_{3(i)}^2}     (9.7)

\sigma_e^2 = \frac{1}{8} \sum_{i=1}^{9} \left( y_i - \beta R_{3(i)} \right)^2 .     (9.8)
Substituting the appropriate values from Table 9.2 in Equations (9.7) and (9.8), we
obtain β = 2.6991 and σe² = 0.030107. Thus, the S/N ratio for RT-ON corresponding
to the starting levels of the control factors is

\eta = 10 \log_{10} \frac{\beta^2}{\sigma_e^2} = 23.84 \text{ dB}.

The S/N ratio, η′, for RT-OFF can be computed in exactly the same manner as
we computed the S/N ratio for RT-ON.
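Given the nine (R3, RT-ON) pairs of Table 9.2, Equations (9.7), (9.8), and (9.4) can be evaluated as follows. The data in the sketch are placeholders, not the actual entries of Table 9.2.

    import math

    def cc_sn_ratio(r3_values, y_values):
        # Least squares line through the origin: beta (Eq. 9.7), error variance (Eq. 9.8),
        # and the C-C type S/N ratio eta = 10 log10(beta^2 / sigma_e^2) (Eq. 9.4).
        beta = sum(r * y for r, y in zip(r3_values, y_values)) / sum(r * r for r in r3_values)
        resid = [y - beta * r for r, y in zip(r3_values, y_values)]
        sigma_e2 = sum(e * e for e in resid) / (len(y_values) - 1)
        return beta, sigma_e2, 10.0 * math.log10(beta ** 2 / sigma_e2)

    # Placeholder data: three signal levels, each observed at three compound-noise settings.
    r3 = [0.5, 0.5, 0.5, 1.0, 1.0, 1.0, 1.5, 1.5, 1.5]           # kilo-ohms
    y  = [1.30, 1.35, 1.41, 2.62, 2.70, 2.79, 3.92, 4.05, 4.19]  # RT-ON, kilo-ohms

    print(cc_sn_ratio(r3, y))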
Note that for dynamic systems, one must identify the signal factor and define the
S/N ratio before making a proper choice of testing conditions. This is the case with
the temperature control circuit.
TABLE 9.2 (column headings): Test No.; R3 (signal factor) (kΩ); CN (compound noise factor); y = RT-ON (kΩ).
7 | 1.5 |  | 3.7757
The four control factors, their starting levels, and the alternate levels are listed in Table
9.3. For the three resistances (R1, R2, and R4), level 2 is the starting level, level 3 is
1.5 times the starting level, and level 1 is 1/1.5 = 0.667 times the starting level. Thus,
we include a fairly wide range of values with the three levels for each control factor.
Since the available range for Ez is restricted, we take its levels somewhat closer. Level
3 of Ez is 1.2 times level 2, while level 1 is 0.8 times level 2.
TABLE 9.3 CONTROL FACTORS AND THEIR LEVELS

                          Levels
Factor                1       2       3
A. R1 (kΩ)           2.7     4.0     6.0
B. R2 (kΩ)           5.3     8.0    12.0
C. R4 (kΩ)          26.7    40.0    60.0
D. Ez (V)            4.8     6.0     7.2
The orthogonal array L9, which has four 3-level columns, is just right for studying the effects of the four control factors. However, by taking a larger array, we can also get a better indication of the additivity of the control factor effects. Further, computation is very inexpensive for this circuit. So, we use the L18 array to construct the control orthogonal array. The L18 array and the assignment of the control factors to the columns are given in Table 9.4. The control orthogonal array for this study is the submatrix of L18 formed by the columns assigned to the four control factors.
For each row of the control orthogonal array, we computed β and η for R_T-ON. The values of η and β² are shown in Table 9.4 along with the control orthogonal array. The possible range for the values of β is 0 to ∞, and we are able to get a better additive model for β in the log transform. Therefore, we study the values of β in the decibel scale, namely 20 log10 β. The results of performing the analysis of variance on η and 20 log10 β are tabulated in Tables 9.5 and 9.6. The control factor effects on η and 20 log10 β are plotted in Figure 9.6(a) and (b). Also shown in the figure are the control factor effects on η' and 20 log10 β' corresponding to the quality characteristic R_T-OFF.
The following observations can be made from Figure 9.6(a):

• For the ranges of control factor values listed in Table 9.3, the overall S/N ratio for the OFF function is higher than that for the ON function. This implies that the spread of R_T-ON values caused by the noise factors is wider than the spread of R_T-OFF values.
• The effects of the control factors on η' are much smaller than the effects on η.
• R1 has negligible effect on η or η'.
• η can be increased by decreasing R2; however, this leads to a small reduction in η'.
• η can be increased by increasing R4; however, this too leads to a small reduction in η'.
• η can be increased by increasing Ez, with no adverse effect on η'.
TABLE 9.4 CONTROL ORTHOGONAL ARRAY (L18) AND DATA FOR THE ON FUNCTION

          Column number and factor assignment
Expt.    1    2    3    4    5    6    7    8        η        β²
No.      e    e    A    B    C    e    D    e      (dB)
 1       1    1    1    1    1    1    1    1      22.41     9.59
 5       1    2    2    2    3    3    1    1      24.19     7.12
 6       1    2    3    3    1    1    2    2      19.47    15.66
 7       1    3    1    2    1    3    2    3      22.25    19.27
 8       1    3    2    3    2    1    3    1      23.61    15.04
 9       1    3    3    1    3    2    1    2      24.93     1.42
10       2    1    1    3    3    2    2    1      24.23    31.22
11       2    1    2    1    1    3    3    2      24.50     3.07
12       2    1    3    2    2    1    1    3      22.13     5.03
13       2    2    1    2    3    1    3    2      26.02    11.31
14       2    2    2    3    1    2    1    3      16.19    61.02
15       2    2    3    1    2    3    2    1      24.60     1.49
16       2    3    1    3    2    3    1    2      20.26    58.36
17       2    3    2    1    3    1    2    3      25.94     2.49
18       2    3    3    2    1    2    3    1      23.05     3.95
The average η by level and the average 20 log10 β by level are also tabulated (the corresponding totals are 17 degrees of freedom, with sums of squares 108.67 and 391.4, respectively).
Figure 9.6 Plots of factor effects: (a) η for the ON function and η' for the OFF function, and (b) 20 log10 β and 20 log10 β', plotted against the levels of the control factors A (R1: 2.7, 4.0, 6.0 kΩ), B (R2: 5.3, 8.0, 12.0 kΩ), C (R4: 26.7, 40.0, 60.0 kΩ), and D (Ez: 4.8, 6.0, 7.2 V). Underscore indicates the starting level. Two-standard-deviation confidence limits are also shown for the starting level. Estimated confidence limits for 20 log10 β are too small to show.
Thus, the optimum settings of the control factors suggested by Figure 9.6(a) are R1 = 4.0 kΩ, R2 = 5.33 kΩ, R4 = 60.0 kΩ, and Ez = 7.2 V. The verification experiment under the optimum conditions gave η = 26.43 dB, compared to 23.84 dB under the starting design. Similarly, under the optimum conditions we obtained η' = 29.10 dB, compared to 29.94 dB under the starting design. Note that the increase in η is much larger than the reduction in η'.
From Figure 9.6(b), it is clear that both R1 and R2 have a large effect on β and β'. The control factors R4 and Ez have a somewhat smaller effect on β and β'. Since R1 has no effect on the S/N ratios, it is an ideal choice for adjusting the slopes β and β'. This ability to adjust β and β' gives the needed freedom to: (1) match the values of R_T-ON and R_T-OFF with the chosen thermistor for the desired temperature range, and (2) obtain the desired hysteresis. As discussed earlier, the needed separation (hysteresis) between R_T-ON and R_T-OFF is determined by the thermal analysis of the heating system, which is not discussed here.
9.5 ITERATIVE OPTIMIZATION

The preceding section showed one cycle of Robust Design. It is clear from Figure 9.6 that the potential exists for further improvement. By taking the optimum point as a starting design, one can repeat the optimization procedure to achieve this potential; that was indeed done for the temperature control circuit. For each iteration, we took the middle level for each control factor to be the optimum level from the previous iteration, and then took levels 1 and 3 to have the same relationship with level 2 for that factor as in the first iteration. However, during these iterations, we did not let the value of Ez exceed 7.2 V, so that adequate separation between Ez and E0 could be maintained. The improvements obtained through three iterations are shown in Table 9.7. Of course, some additional improvement is possible, but by the third iteration the rate of improvement has clearly slowed down, so one need not proceed further.
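The re-centering rule used in these iterations can be sketched as follows; the function below illustrates the rule just described and is not code from the case study, with the ratio values taken from Table 9.3.

```python
# Illustrative sketch of the level re-centering rule described above.
# Level 2 of each factor becomes the optimum from the previous iteration;
# levels 1 and 3 keep the same ratios to level 2 as in the first iteration,
# and Ez is capped at 7.2 V to preserve its separation from E0.
RATIOS = {"R1": (1 / 1.5, 1.5), "R2": (1 / 1.5, 1.5), "R4": (1 / 1.5, 1.5), "Ez": (0.8, 1.2)}
EZ_MAX = 7.2  # volts

def next_levels(previous_optimum):
    """Return {factor: (level1, level2, level3)} for the next iteration."""
    levels = {}
    for factor, center in previous_optimum.items():
        if factor == "Ez":
            center = min(center, EZ_MAX)
        lo_ratio, hi_ratio = RATIOS[factor]
        lo, hi = center * lo_ratio, center * hi_ratio
        if factor == "Ez":
            hi = min(hi, EZ_MAX)
        levels[factor] = (lo, center, hi)
    return levels

# Example: starting a second cycle from the optimum of the first cycle
# (resistances in kOhm, Ez in volts).
print(next_levels({"R1": 4.0, "R2": 5.33, "R4": 60.0, "Ez": 7.2}))
```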
Experience with this use of orthogonal arrays indicates that, because of the region approach, it works particularly well in the early stages, that is, when the starting point is far from the optimum. Once we get near the optimum point, some of the standard nonlinear programming methods, such as the Newton-Raphson method, work very well. Thus, one may wish to use the orthogonal array method in the beginning and then switch to a standard nonlinear programming method.
Table 9.7 lists η, η', and η + η' (all in dB) for each iteration.
9.6 SUMMARY
• Dynamic systems are those in which we want the system's response to follow
the levels of the signal factor in a prescribed manner. The changing nature of
the levels of the signal factor and the response make the design of a dynamic
system more complicated than designing a static system. Nevertheless, the eight
steps of Robust Design (described in Chapter 4) still apply.
• A temperature controller is a feedback system and can be divided into three main
modules: (1) temperature sensor, (2) temperature control circuit, and (3) a
heating (or cooling) element. For designing a robust temperature controller, the three
modules must be made robust separately and then integrated together.
• The temperature control circuit is a doubly dynamic system. First, for a
particular target temperature of the bath, the circuit must turn a heater ON or OFF at
specific threshold temperature values. Second, the target temperature may be
changed by the user.
• Four circuit parameters (R1, R2, R4, and Ez) were selected as control factors. The resistance R3 was chosen as the signal factor. The tolerances in the control factors were the noise factors.
• The threshold resistance, R_T-ON, at which the heater turns ON and the threshold resistance, R_T-OFF, at which the heater turns OFF were selected as the quality characteristics. The variation of R_T-ON and R_T-OFF as a function of R3 formed two C-C type dynamic problems.
• To evaluate the S/N ratio (η) for the ON function, a compound noise factor, CN, was formed. Three levels were chosen for the signal factor and the compound noise factor. R_T-ON was computed at the resulting nine combinations of the signal and noise factor levels, and the S/N ratio for the ON function was then computed. (An orthogonal array can be used for computing the S/N ratio when engineering judgment dictates that multiple noise factors be used.) The S/N ratio for the OFF function (η') was evaluated in the same manner.
• The L18 array was used as the control orthogonal array. Through one cycle of Robust Design, the sum η + η' was improved by 1.75 dB. Iterating the Robust Design cycle three times led to a 2.50 dB improvement in η + η'.
• Orthogonal arrays can be used to optimize iteratively a nonlinear function. They
provide a region approach and perform especially well when the starting point is
far from the optimum and when some of the parameters are discrete.
Chapter 10
TUNING COMPUTER
SYSTEMS FOR
HIGH PERFORMANCE
This chapter presents a case study to illustrate the use of the Robust Design
method in tuning computer performance. A few details have been modified for
pedagogic purposes. The case study was performed by T. W. Pao, C. S. Sherrerd, and
M. S. Phadke [PI] who are considered to be the first to conduct such a study to
optimize a hardware-software system using the Robust Design method.
• Section 10.1 describes the problem formulation of the case study (Step 1 of the
Robust Design steps described in Chapter 4).
• Section 10.2 discusses the noise factors and testing conditions (Step 2).
• Section 10.3 describes the quality characteristic and the signal-to-noise (S/N)
ratio (Step 3).
• Section 10.4 discusses control factors and their alternate levels (Step 4).
• Section 10.5 describes the design of the matrix experiment and the experimental
procedure used by the research team (Steps 5 and 6).
• Section 10.6 gives the data analysis and verification experiments (Steps 7 and 8).
• Section 10.7 describes the standardized S/N ratio that is useful in compensating
for variation in load during the experiment.
10.1 PROBLEM FORMULATION

The case study concerns the performance of a VAX 11-780 machine running the UNIX operating system, Release 5.0. The machine had 48 user terminal ports, two remote job entry links, four megabytes of memory, and five disk drives. The average number of users logged on at a time was between 20 and 30.
Before the start of the project, the users' perceptions were that system
performance was very poor, especially in the afternoon. For an objective measurement of the
response time, the experimenters used two specific, representative commands called
standard and trivial. The standard command consisted of creating, editing, and
removing a file; the trivial command was the UNIX system date command, which does not
involve input/output (I/O). Response times were measured by submitting these
commands via the UNIX system crontab facility and clocking the time taken for the
computer to respond using the UNIX system timex command, both of which are automatic
system processes.
For the particular users of this machine, the response times for the standard and trivial commands could be considered representative of the response times for the various other commands run on that computer. In some other computer installations, the response time for the compilation of a C program or the time taken by the troff command (a text processing command) may be more representative.
Figure 10.1(a)-(b) shows the variation of the response times as functions of
time of day for the standard and trivial commands. Note that at the start of the study,
the average response time increased as the afternoon progressed (see the curves marked
"Initial" in the figure). The increase in response time correlated well with the increase
in the work load during the afternoon. The objective in the experiment was to make
the response time uniformly small throughout the day, even when the load increased as
usual.
There are two broad approaches for optimizing a complex system such as a
computer: (1) micro-modeling and (2) macro-modeling. They are explained next.
Figure 10.1 Response time as a function of time of day (9:00 to 16:00): (a) for the standard command; (b) for the trivial command. Curves are shown for the initial and the optimized system configurations.
Micro-Modeling
Micro-modeling requires a detailed understanding of the system's operation, as well as considerable effort to develop the model. Furthermore, the more we simplify, the less realistic the model will be and, hence, the less adequate it will be for precise optimization. But once an adequate model is constructed, a number of well-known optimization methods, including Robust Design, can be used to find the best system configuration.
Macro-Modeling
The UNIX system is viewed as a "black box," as illustrated in Figure 10.2. The
parameters that influence the response time are identified and divided into two classes:
noise factors and control factors. The best settings of the control factors are
determined through experiments. Thus, the Robust Design method lends itself well for
optimization through the macro-modeling approach.
Figure 10.2 The UNIX system viewed as a black box: the response time is y = f(x; z), where x denotes the noise factors (system load) and z denotes the control factors (system configuration).
10.2 NOISE FACTORS AND TESTING CONDITIONS

Load variation during use of the machine, from day to day and as a function of the time of day, constitutes the main noise factor for the computer system under study.
The number of users logged on, central processor unit (CPU) demand, I/O demand, and
memory demand are some of the more important load measures. Temperature and
humidity variations in the computer room, as well as fluctuations in the power supply
voltage, are also noise factors but are normally of minor consequence.
The case study was conducted live on the computer. As a result, the normal
variation of load during the day provided the various testing conditions for evaluating
the S/N ratio.
At the beginning of the study, the researchers examined the operating logs for
the previous few weeks to evaluate the day-to-day fluctuation in response time and
load. The examination revealed that the response time and the load were roughly
similar for all five weekdays. This meant that Mondays could be treated the same as
Tuesdays, etc. If the five days of the week had turned out to be markedly different from
each other, then those differences would have had to be taken into account in planning
the experiment.
10.3 QUALITY CHARACTERISTIC AND S/N RATIO

Let us first consider the standard command used in the study. Suppose it takes t0 seconds to execute that command under the best circumstances, that is, when the load is zero; t0 is then the minimum possible time for the command. It becomes obvious that the actual response time for the standard command minus t0 is a quality characteristic that is always nonnegative and has a target value of zero; that is, the actual response time minus t0 belongs to the smaller-the-better type problems. In the case study, the various measurements of response time showed that t0 was much smaller than the observed response time. Hence, t0 was ignored and the measured response time was treated as a smaller-the-better type characteristic. The corresponding S/N ratio to be maximized is

    η = −10 log10 [ (1/n) Σ_{i=1}^{n} y_i² ]        (10.1)

where y_1, y_2, ..., y_n are the observed response times for the standard command.
Referring to Figure 10.1, it is clear that at the beginning the standard deviation
of the response time was large, so much so that it is shown by bars of length ±1/2
standard deviation, as opposed to the standard practice of showing ±2 standard
deviations. From the quadratic loss function considerations, reducing both the mean and
variance is important. It is clear that the S/N ratio in Equation (10.1) accomplishes
this goal because mean square response time is equal to sum of the square of the mean
and the variance.
For the response time for the trivial command, the same formulation was used. That is, the S/N ratio was defined as follows:

    η' = −10 log10 [ (1/n) Σ_{i=1}^{n} y_i'² ]        (10.2)

where y_1', y_2', ..., y_n' are the observed response times for the trivial command.
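As a minimal sketch, assuming hypothetical response-time measurements, the smaller-the-better S/N ratio of Equations (10.1) and (10.2) can be computed as follows:

```python
import numpy as np

# Hypothetical response-time samples (seconds) for one experiment.
response_times = np.array([4.2, 5.1, 3.8, 6.0, 4.7, 5.5])

# Smaller-the-better S/N ratio: eta = -10 log10( mean of y^2 ).
eta = -10 * np.log10(np.mean(response_times ** 2))
print(f"eta = {eta:.2f} dB")

# The mean square equals (mean)^2 + variance, so maximizing eta drives
# down both the average response time and its variation.
assert np.isclose(np.mean(response_times ** 2),
                  response_times.mean() ** 2 + response_times.var())
```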
10.4 CONTROL FACTORS AND THEIR ALTERNATE LEVELS

The UNIX operating system provides a number of tunable parameters, some of which relate to the hardware and others to the software. Through discussions with a group of system administrators and computer scientists, the experiment team decided to include the eight control factors listed in Table 10.1 in the tuning study. Among them, factors A, C, and F are hardware related, and the others are software related. A description of these parameters and their alternate levels is given next. The discussion about the selection of levels is particularly noteworthy because it reveals some of the practical difficulties faced in planning and carrying out Robust Design experiments.
TABLE 10.1 CONTROL FACTORS AND THEIR LEVELS

                                              Levels
Factor                              1              2              3
A. Disk drives               4 RM05 + 1 RP06  4 RM05 + 2 RP06  (see text)
B. File distribution                a              b              c
C. Memory size (MB)               4.0            3.0            3.5
D. System buffers
   (fraction of memory)           1/5            1/4            1/3
E. Sticky bits                      0              3              8
F. KMCs used                        2              0
G. INODE table entries            400            500            600
H. Other system tables              a              b              c
The number and type of disk drives (factor A) is an important parameter that
determines the I/O access time. At the start, there were four RM05 disks and one
RP06 disk. The experimenters wanted to see the effect of adding one more RP06 disk (level A2), as well as the effect of adding one RP07 disk and a faster memory controller (level A3). However, the RP07 disk did not arrive in time for the experiments. So, level A3 was defined to be the same as level A2 for factor A. The next
section discusses the care taken in planning the matrix experiment, which allowed the
experimenters to change the plan in the middle of the experiments.
The file system distributions (factor B) a, b and c refer to three specific
algorithms used for distributing the user and system files among the disk drives.
Obviously, the actual distribution depends on the number of disk drives used in a particular
system configuration. Since the internal entropy (a measure of the lack of order in
storing the files) could have a significant impact on response time, the team took care
to preserve the internal entropy while changing from one file system to another during
the experiments.
One system administrator suggested increasing the memory size (factor C) to
improve the response time. However, another expert opinion held that additional memory would not improve the response for the particular computer system being studied. Therefore, the team decided not to purchase more memory until they were reasonably sure its cost would be justified. They took level C1 as the existing memory size, namely 4 MB, and disabled some of the existing memory to form levels C2 and C3. They decided to purchase more memory only if the experimental data showed that disabling a part of the memory led to a significant reduction in performance.
Total memory is divided into two parts: system buffers (factor D) and user
memory. The system buffers are used by the operating system to store recently used
data in the hope that the data might be needed again soon. Increasing the size of the
system buffers improves the probability (technically called hit ratio) of finding the
needed data in the memory. This can contribute to improved performance, but it also
reduces the memory available to the users, which can lead to progressively worse
system performance. Thus, the optimum size of system buffers can depend on the
particular load pattern. We refer to the size of the system buffers as a fraction of the total
memory size. Thus, the levels of the system buffers are sliding with respect to the
memory size.
Sticky bit (factor E) is a way of telling an operating system to treat a command
in a special way. When the sticky bit for a command such as rm or ed is set, the
executable module for that command is copied contiguously in the swap area of a disk
during system initialization. Every time that command is needed but not found in the
memory, it is brought back from the swap area expeditiously in a single operation.
However, if the sticky bit is not set and the command is not found in the memory, it
must be brought back block by block from the file system. This adds to the execution
time.
In this case study, factor E specifies how many and which commands had their sticky bits set. For level E1, no command had its sticky bit set. For level E2, the three commands that had their sticky bits set were sh, ksh, and rm. These were the three most frequently used commands during the month before the case study, according to a 5-day accounting command summary report. For level E3, the eight commands that had their sticky bits set were the three commands mentioned above, plus the next five most commonly used commands, namely, ls, cp, expr, chmod, and sadc (a local library command).
KMCs (factor F) are special devices used to assist the main CPU in handling the
terminal and remote job entry traffic. They attempt to reduce the number of interrupts
faced by the main CPU. In this case study, only the KMCs used for terminal traffic
were changed. Those used for the remote job entry links were left alone. For level F1, two KMCs were used to handle the terminal traffic, whereas for level F2 the two KMCs were disabled.
The number of entries in the INODE table (factor G) determines the number of
user files that can be handled simultaneously by the system. The three levels for the
factor G are 400, 500, and 600. The three levels of the eighth factor, namely, the other
system tables (factor H), are coded as a, b, and c.
Note that the software factors (B, D, E, G, and H) can affect only response time.
However, the three hardware factors (A, C, and F) can affect both the response time
and the computer cost. Therefore, this optimization problem is not a pure parameter
design problem, but, rather, a hybrid of parameter design and tolerance design.
10.5 DESIGN OF THE MATRIX EXPERIMENT AND THE EXPERIMENTAL PROCEDURE

This case study has seven 3-level factors and one 2-level factor. There are 7 × (3 − 1) + 1 × (2 − 1) + 1 = 16 degrees of freedom associated with these factors. The orthogonal array L18 is just right for this project because it has seven 3-level columns and one 2-level column to match the needs of the matrix experiment. The L18 array and the assignment of columns to factors are shown in Table 10.2. Aside from assigning the 2-level factor to the 2-level column, there is really no other reason for assigning a particular factor to a particular column. The factors were assigned to the columns in the order in which they were listed at the time the experiment was planned. Some aspects of the assignment of factors to columns are discussed next.
The experiment team found that changing the level of disk drives (factor A) was the most difficult among all the factors because it required an outside technician and took three to four hours. Consequently, in conducting these experiments, the team first conducted all experiments with level A1 of the disk drives, then those with level A2, and finally those with level A3. The experiments with level A3 of the disk drives were kept for last to allow time for the RP07 disk to arrive. However, because the RP07 disk did not arrive in time, the experimenters redefined level A3 of the disk drives to be the same as level A2 and continued with the rest of the plan. According to the dummy level technique discussed in Chapter 7, this redefinition of a level does not destroy the orthogonality of the matrix experiment. This arrangement, however, gives 12 experiments with level A2 of the disk drives; hence, more accurate information about that level is obtained when compared to level A1. This is exactly what we should look for, because level A2 is the new level about which we have less prior information.
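A small sketch of the dummy level technique as applied here; the column values are those of column 6 of the L18 array (assigned to factor A in Table 10.2), and the code is illustrative rather than from the case study.

```python
import numpy as np

# Column 6 of the L18 array (assigned to factor A, disk drives, in Table 10.2).
column_a = np.array([1, 2, 3, 2, 3, 1, 3, 1, 2, 2, 3, 1, 1, 2, 3, 3, 1, 2])

# Dummy level technique: level 3 is redefined to be the same as level 2,
# since the RP07 disk (the original level 3) did not arrive in time.
column_a_dummy = np.where(column_a == 3, 2, column_a)

# Level 2 now appears 12 times and level 1 only 6 times, so the effect of the
# new, less-known level A2 is estimated more precisely, while orthogonality
# of the matrix experiment is preserved, as stated in the text.
levels, counts = np.unique(column_a_dummy, return_counts=True)
print(dict(zip(levels.tolist(), counts.tolist())))  # {1: 6, 2: 12}
```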
TABLE 10.2 THE L18 ARRAY AND THE ASSIGNMENT OF CONTROL FACTORS TO COLUMNS

          Column number and factor
Expt.    1   2   3   4   5   6   7   8
No.      F   B   C   D   E   A   G   H
 1       1   1   1   1   1   1   1   1
 2       1   1   2   2   2   2   2   2
 3       1   1   3   3   3   3   3   3
 4       1   2   1   1   2   2   3   3
 5       1   2   2   2   3   3   1   1
 6       1   2   3   3   1   1   2   2
 7       1   3   1   2   1   3   2   3
 8       1   3   2   3   2   1   3   1
 9       1   3   3   1   3   2   1   2
10       2   1   1   3   3   2   2   1
11       2   1   2   1   1   3   3   2
12       2   1   3   2   2   1   1   3
13       2   2   1   2   3   1   3   2
14       2   2   2   3   1   2   1   3
15       2   2   3   1   2   3   2   1
16       2   3   1   3   2   3   1   2
17       2   3   2   1   3   1   2   3
18       2   3   3   2   1   2   3   1
File distribution (factor B) was the second most difficult factor to change. Therefore, among the six experiments with level A1 of the disk drives, the experiment team first conducted the two experiments with level B1, then those with level B2, and finally those with level B3. The same pattern was repeated for level A2 of the disk drives and then level A3 of the disk drives. The examination of the L18 array given in Table 10.2 indicates that some of the bookkeeping of the experimental conditions could have been simplified if the team had assigned factor A to column 2 and factor B to column 3.
10.6 DATA ANALYSIS AND VERIFICATION EXPERIMENTS

From the 96 measurements of standard response time for each experiment, the team computed the mean response time and the S/N ratio. The results are shown in Table 10.3. Similar computations were made for the trivial response time, but they are not shown here. The effects of the various factors on the S/N ratio for the standard response time are shown, along with the corresponding ANOVA, in Table 10.4. The factor effects are plotted in Figure 10.3. Note that the levels C1, C2, and C3 of memory size are 4.0 MB, 3.0 MB, and 3.5 MB, respectively, which are not in a monotonic order. While plotting the data in Figure 10.3, the experimenters considered the correct order.

It is apparent from Table 10.4 that the factor effects are rather small, especially when compared to the error variance. The problem of getting a large error variance is more likely with live experiments, such as this computer system optimization experiment, because the different rows of the matrix experiment are apt to see quite different noise conditions, that is, quite different load conditions. Also, while running live
experiments, the tendency is to choose levels of the control factors that are not far apart. However, we can still draw valuable conclusions about the optimum settings of the control factors and then see whether the improvements observed during the verification experiment are significant or not.

Figure 10.3 Factor effects for the S/N ratio for the standard response time, plotted against the levels of disk drives (4/1, 4/2), file distribution (a, b, c), memory size (4.0, 3.5, 3.0 MB), system buffers (1/5, 1/4, 1/3 of memory), and the remaining control factors. Underscore indicates the starting level. One-standard-deviation limits are also shown.
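The level averages behind Figure 10.3 and the contributions used later in Table 10.5 can be computed as sketched below; the design column and η values are placeholders, not the study's data.

```python
import numpy as np

# Placeholder inputs: the level (1, 2, or 3) of one control factor in each of
# the 18 runs, and the S/N ratio observed in each run.
factor_levels = np.array([1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3])
eta = np.random.default_rng(0).normal(-14.0, 1.5, size=18)  # placeholder eta values (dB)

overall_mean = eta.mean()
for level in np.unique(factor_levels):
    level_mean = eta[factor_levels == level].mean()
    # "Contribution" in the sense of Table 10.5: deviation of the level mean
    # from the overall mean of the S/N ratio.
    print(f"level {level}: mean = {level_mean:.2f} dB, "
          f"contribution = {level_mean - overall_mean:+.2f} dB")
```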
TABLE 10.3 MEAN RESPONSE TIME AND S/N RATIO FOR THE STANDARD COMMAND

Expt.    Mean       η
No.     (sec)     (dB)
 1       4.65    -14.66
 2       5.28    -16.37
 3       3.06    -10.49
 4       4.53    -14.85
 5       3.26    -10.94
 6       4.55    -14.96
 7       3.37    -11.77
 8       5.62    -16.72
 9       4.87    -14.67
10       4.13    -13.52
11       4.08    -13.79
12       4.45    -14.19
13       3.81    -12.89
14       5.87    -16.75
15       3.42    -11.65
16       3.66    -12.23
17       3.92    -12.81
18       4.42    -13.71
The following observations can be made from the plots in Figure 10.3 and Table
10.4 (note that these conclusions are valid only for the particular load characteristics of
the computer being tuned):
1. Going from not setting any sticky bits to setting sticky bits on the three most
used commands does not improve the response time. This is probably because
these three commands tend to stay in the memory as a result of their very
frequent use, regardless of setting sticky bits. However, when sticky bits are set on
the five next most used commands, the response time improves by 1.69 dB.
This suggests that we should set sticky bits on the eight commands, and in future
experiments, we should consider even more commands for setting sticky bits.
2. KMCs do not help in improving response time for this type of computer
environment. Therefore, they may be dropped as far as terminal handling is concerned,
thus reducing the cost of the hardware.
3. Adding one more disk drive leads to better response time. Perhaps even more
disks should be considered for improving the response time. Of course, this
would mean more cost, so proper trade-offs would have to be made.
4. The S/N ratio is virtually the same for 4 MB and 3.5 MB memory. It is
significantly lower for 3 MB memory. Thus, 4 MB seems to be an optimum
value—that is, buying more memory would probably not help much in
improving response time.
5. There is some potential advantage (0.8 dB) in changing the fraction of system
buffers from 1/3 to 1/4.
6. The effects of the remaining three control factors are very small and there is no
advantage in changing their levels.
The optimum system configuration inferred from the data analysis above is shown in
Table 10.5 along with the starting configuration. Changes were recommended in the
settings of sticky bits, disk drives, and system buffers because they lead to faster
response. KMCs were dropped because they did not help improve response, and
dropping them meant saving hardware. The prediction of the S/N ratio for the standard
response time under the starting and optimum conditions is also shown in Table 10.5.
Note that the contributions of the factors, whose sum of squares were among the
smallest and were pooled, are ignored in predicting the S/N ratio. Thus, the S/N ratio
predicted by the data analysis under the starting condition is -14.67 dB, and under the
optimum conditions it is -11.22 dB. The corresponding predicted rms response times under the starting and optimum conditions are 5.41 seconds and 3.63 seconds, respectively.
TABLE 10.5 STARTING AND OPTIMUM CONDITIONS AND THE PREDICTED S/N RATIO

For each control factor, the table lists its setting and its contribution† under the starting condition and under the optimum condition. The factors whose levels are changed from the starting to the optimum conditions are marked with an asterisk (*); for example, KMCs used* changes from F1 to F2, while file distribution remains at B1.

† By contribution we mean the deviation from the overall mean caused by the particular factor level.
As noted earlier, here the error variance is large. We would also expect the
variance of the prediction error to be large. The variance of the prediction error can be
computed by the procedure given in Chapter 3 [see Equation (3.14)]. For the starting condition, the equivalent sample size, n_e, satisfies

    1/n_e = 1/n + Σ ( 1/n_l − 1/n )
          = 1/18 + (1/6 − 1/18) + (1/6 − 1/18) + (1/6 − 1/18) + (1/6 − 1/18) + (1/6 − 1/18)
          = 0.61,

where the sum is over the factor levels used in the prediction and n_l is the number of experiments conducted at that level.
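A short sketch of the same calculation (the level counts are those appearing in the expression above):

```python
# Sketch of the equivalent-sample-size calculation above.
n_total = 18                      # rows in the L18 matrix experiment
level_counts = [6, 6, 6, 6, 6]    # replications of each factor level used in the prediction

inv_ne = 1.0 / n_total + sum(1.0 / n_i - 1.0 / n_total for n_i in level_counts)
print(f"1/n_e = {inv_ne:.2f}, so n_e = {1.0 / inv_ne:.2f}")  # 1/n_e = 0.61
```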
Note that the observed S/N ratios for standard response time under the starting
and optimum conditions are within their respective two-standard-deviation confidence
limits. However, they are rather close to the limits. Also, the observed improvement
(8.39 dB) in the S/N ratio is quite large compared to the improvement predicted by the
data (3.47 dB). However, the difference is well within the confidence limits. Also,
observe that the S/N ratio under the optimum conditions is better than the best among
the 18 experiments.
Thus, here we achieved a 60- to 70-percent improvement in response time by
improved system administration. Following this experiment, two similar experiments
were performed by Klingler and Nazaret [K5], who took extra care to ensure that
published UNIX system tuning guides were used to establish the starting conditions. Their
experiments still led to a 20- to 40-percent improvement in response time. One extra
factor they considered was the use of PDQs, which are special auxiliary processors for
handling text processing jobs (troff). For their systems, it turned out that the use of
PDQs could hurt the response time rather than help.
10.7 STANDARDIZED S/N RATIO

When running Robust Design experiments with live systems, we face the methodological problem that the noise conditions, which are the load conditions for our computer system optimization, are not the same for every row of the control orthogonal array. This can lead to inaccuracies in the conclusions. One way to minimize the impact of changing noise conditions is to construct a standardized S/N ratio, which we describe next.
As noted earlier, some of the more important load measures for the computer
system optimization experiment are: number of users, CPU demand, I/O demand, and
memory demand. After studying the load pattern over the days when the case study
was conducted, we can define low and high levels for each of these load measures, as
shown in Figure 10.4. These levels should be defined so that for every experiment we
have a reasonable number of observations at each level. The 16 different possible
combinations of the levels of these four load measures are listed in Table 10.7 and are
nothing more than 16 different noise conditions.
Figure 10.4 Definition of the low and high levels for each load measure in terms of its average value and range.
In a live experiment, the number of observations for each noise condition can
change from experiment to experiment. For instance, one day the load might be heavy
while another day it might be light. Although we cannot dictate the load condition for
the different experiments, we can observe the offered load. The impact of load
variation can be minimized as follows: We first compute the average response time for
each experiment in each of the 16 load conditions. We then treat these 16 averages as
raw data to compute the S/N ratio for each experiment. The S/N ratio computed in
this manner is called standardized S/N ratio because it effectively standardizes the load
conditions for each experiment. The system can then be optimized using this
standardized S/N ratio.
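A hedged sketch of this procedure, assuming hypothetical response times and noise-condition labels (the function name standardized_sn is illustrative, not from the study):

```python
import numpy as np

def standardized_sn(times, conditions):
    """Standardized smaller-the-better S/N ratio for one experiment.

    times:      observed response times for the experiment
    conditions: the noise-condition label (1..16, as in Table 10.7) of each observation
    The observations are first averaged within each noise condition, and the
    S/N ratio is then computed from those per-condition averages.
    """
    times = np.asarray(times, dtype=float)
    conditions = np.asarray(conditions)
    condition_means = np.array([times[conditions == c].mean()
                                for c in np.unique(conditions)])
    return -10 * np.log10(np.mean(condition_means ** 2))

# Placeholder data: twelve observations spread over four of the sixteen noise conditions.
t = [3.1, 3.4, 2.8, 5.9, 6.3, 4.0, 4.4, 4.1, 7.2, 6.8, 7.5, 3.0]
c = [1, 1, 1, 6, 6, 3, 3, 3, 14, 14, 14, 1]
print(f"standardized eta = {standardized_sn(t, c):.2f} dB")
```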
Note that for the standardized S/N ratio to work, we must have good definitions
of noise factors and ways of measuring them. Also, in each experiment every noise
condition must occur at least once to be able to compute the average. In practice,
however, if one or two conditions are missing, we may compute the S/N ratio with the available noise conditions without much harm.

TABLE 10.7 THE 16 NOISE CONDITIONS FORMED BY THE LOW (1) AND HIGH (2) LEVELS OF THE FOUR LOAD MEASURES

Noise       Number     CPU       I/O      Memory    Average response
condition   of users   demand    demand   demand    time
 1             1         1         1        1           ȳ_i1
 2             1         1         1        2           ȳ_i2
 3             1         1         2        1           ȳ_i3
 4             1         1         2        2           ȳ_i4
 5             1         2         1        1           ȳ_i5
 6             1         2         1        2           ȳ_i6
 7             1         2         2        1           ȳ_i7
 8             1         2         2        2           ȳ_i8
 9             2         1         1        1           ȳ_i9
10             2         1         1        2           ȳ_i10
11             2         1         2        1           ȳ_i11
12             2         1         2        2           ȳ_i12
13             2         2         1        1           ȳ_i13
14             2         2         1        2           ȳ_i14
15             2         2         2        1           ȳ_i15
16             2         2         2        2           ȳ_i16

(ȳ_ij denotes the average response time for experiment i under noise condition j.)
In the computer system optimization case study discussed in this chapter, the
experimenters used the concept of the standardized S/N ratio to obtain a better comparison of the starting and optimum conditions. Some researchers expressed a concern that part
of the improvement observed in this case study might have been due to changed load
conditions over the period of three months that it took the team to conduct the matrix
experiment. Accordingly, the experimenters computed the standardized S/N ratio for
the two conditions and the results are shown in Table 10.6 along with the other results
of the verification experiment. Since the improvement in the standardized S/N ratio is
quite close to that in the original S/N ratio, the experiment team concluded that the
load change had a minimal impact on the improvement.
The success in improving the telephone network traffic management suggests that
the Robust Design approach could be used successfully to improve other networks, for
example, air traffic management. In fact, it is reported that the method has been used
in Japan to optimize runway and air-space usage at major airports.
10.9 SUMMARY
• There are two systematic approaches for optimizing a complex system: (1)
micro-modeling and (2) macro-modeling. The macro-modeling approach can
utilize the Robust Design method to achieve more rapid and efficient results.
• Load variation during use of the computer, from day to day and as a function of
the time of day, constitutes the main noise factor for the computer system. The
number of users logged on, CPU demand, I/O demand, and memory demand are
some of the more important load measures.
• The response time for the standard (or the trivial) command minus the minimum
possible time for that command was the quality characteristic for the case study.
It is a smaller-the-better type characteristic. The minimum possible response
time was ignored in the analysis as it was very small compared to the average
response time.
• Eight control factors were chosen for the case study: disk drives (A), file
distribution (B), memory size (C), system buffers (D), sticky bits (E), KMCs used (F),
INODE table entries (G), and other system tables (H). Factors A, C, and F are
hardware related, whereas the others are software related. Factor F had two
levels while the others had three levels.
• The L18 orthogonal array was used for the matrix experiment. Disk drives was the most difficult factor to change, and file distribution was the next most difficult factor to change. Therefore, the 18 experiments were conducted in an order that minimized changes in these two factors. Also, the experiments were ordered in a way that allowed the definition of level 3 of disk drives to be changed, as anticipated during the planning stage.
• The experiments were conducted on a live system. Each experiment lasted for
two days, with eight hours per day. Response time for the standard and the
trivial commands was observed once every 10 minutes by using an automatic
measurement facility.
• Standardized S/N ratios can be used to reduce the adverse effect of changing
noise conditions, which is encountered in running Robust Design experiments on
live systems.
• Data analysis indicated that levels of four control factors (A, D, E, and F) should
be changed and that the levels of the other four factors should be kept the same.
It also indicated that the next round of experiments should consider setting sticky
bits on more than eight commands.
Chapter 11
RELIABILITY IMPROVEMENT
First, let us note the difference between reliability characterization and reliability
improvement. Reliability characterization refers to building a statistical model for the
failure times of the product. Log-normal and Weibull distributions are commonly used
for modeling the failure times. Such models are most useful for predicting warranty
cost. Reliability improvement means changing the product design, including the
settings of the control factors, so that time to failure increases.
Invariably, it is expensive to conduct life tests so that an adequate failure-time
model can be estimated. Consequently, building adequate failure-time models under
various settings of control parameters, as in an orthogonal array experiment, becomes
impractical and, hence, is hardly ever done. In fact, it is recommended that conducting
life tests should be reserved as far as possible only for a final check on a product.
Accelerated life tests are well-suited for this purpose.
For improving a product's reliability, we should find appropriate quality characteristics for the product and minimize their sensitivity to all noise factors. This automatically increases the product's life. The following example clarifies the relationship between the life of a product and sensitivity to noise factors.
Consider an electrical circuit whose output voltage, y, is a critical characteristic.
If it deviates too far from the target, the circuit's function fails. Suppose the variation
in a resistor, R, plays a key role in the variation of y. Also, suppose that the resistance R is sensitive to environmental temperature and that the resistance increases at a certain
rate with aging. During the use of the circuit, the ambient temperature may go too
high or too low, or sufficient time may pass leading to a large deviation in R.
Consequently, the characteristic y would go outside the limits and the product would fail.
Now, if we change the nominal values of appropriate control factors, so that y is much
less sensitive to variation in R, then for the same ambient temperatures faced by the
circuit and the same rate of change of R due to aging, we would get longer life out of
that circuit.
Sensitivity of the voltage y to the noise factors is measured by the S/N ratio.
Note that in experiments for improving the S/N ratio, we may use only temperature as
the noise factor. Reducing sensitivity to temperature means reducing sensitivity to
variation in R and, hence, reducing sensitivity to the aging of R also. Thus, by
appropriate choice of testing conditions (noise factor settings) during Robust Design
experiments, we can improve the product life as well.
It is often the case that the rate of drift of a product's quality characteristic is
proportional to the sensitivity of the quality characteristic to noise factors. Also, the
drift in the quality characteristic as a function of time can be approximated reasonably
well by the Wiener process. Then, through standard theory of level crossing, we can
infer that the average life of the current product would be r times longer than the life
of the benchmark product whose average life is known through past experience. Thus,
the S/N ratio permits us to estimate the life of a new product in a simple way without
conducting expensive and time-consuming life tests.
This section described the role of S/N ratios in reliability improvement. This is
a more cost-effective and, hence, preferred way to improve the reliability of a product
or a process. However, for a variety of reasons (including lack of adequate engineering
know-how about the product) we are forced, in some situations, to conduct life tests to
find a way of improving reliability. In the remaining sections of this chapter, we
describe a case study of improving the life of router bits by conducting life studies.
Typically, printed wiring boards are made in panels of 18 × 24 in. size. Appropriate size boards, say 8 × 4 in., are cut from the panels by stamping or by the routing
process. A benefit of the routing process is that it gives good dimensional control and
smooth edges, thus reducing friction and abrasion during the circuit pack insertion
process. When the router bit gets dull, it produces excessive dust which then cakes on the
edges of the boards and makes them rough. In such cases, a costly cleaning operation
is necessary to smooth the edges. However, changing the router bits frequently is also
expensive. In the case study, the objective was to increase the life of the router bits,
primarily with regard to the beginning of excessive dust formation.
The routing machine used had four spindles, all of which were synchronized in
their rotational speed, horizontal feed (X-Y feed), and vertical feed (in-feed). Each
spindle did the routing operation on a separate stack of panels. Typically, two to four
panels are stacked together to be cut by each spindle. The cutting process consists of
lowering the spindle to an edge of a board, cutting the board all around using the X-Y
feed of the spindle, and then lifting the spindle. This is repeated for each board on a
panel.
Some of the important noise factors for the routing process are the out-of-center
rotation of the spindle, the variation from one router bit to another, the variation in the
material properties within a panel and from panel to panel, and the variation in the
speed of the drive motor.
Ideally, we should look for a quality characteristic that is a continuous variable
related to the energy transfer in the routing process. Such a variable could be the wear
of the cutting edge or the change in the cutting edge geometry. However, these
variables are difficult to measure, and the researchers wanted to keep the experiment
simple. Therefore, the amount of cut before a bit starts to produce an appreciable amount
of dust was used as the quality characteristic. This is the useful life of the bit.
11.4 CONTROL FACTORS AND THEIR LEVELS
The control factors selected for this project are listed in Table 11.1. Also listed in the
table are the control factors' starting and alternate levels. The rationale behind the
selection of some of these factors and their levels is given next.
TABLE 11.1 CONTROL FACTORS AND THEIR LEVELS

                                Levels
Factor                    1       2       3       4
A. Suction (in. Hg)       1       2
B. X-Y feed (in/min)     60      80
C. In-feed (in/min)      10      50
D. Type of bit            1       2       3       4
E. Spindle position       1       2       3       4
F. Suction foot          SR      BB
G. Stack height (in.)   3/16    1/4
H. Depth of slot (thou)  60     100
I. Speed (rpm)          30K     40K
Suction (factor A) is used around the router bit to remove the dust as it is
generated. Obviously, higher suction could reduce the amount of dust retained on the
boards. The starting suction was two inches of mercury (Hg). However, the pump
used in the experiment could not produce more suction. So, one inch of Hg was
chosen as the alternate level, with the plan that if the experiments showed a significant
difference in the dust, a more powerful pump would be obtained.
Related to the suction are suction foot and the depth of the backup slot. The
suction foot determines how the suction is localized near the cutting point. Two types of
suction foot (factor F) were chosen: solid ring (SR) and bristle brush (BB). A backup
panel is located underneath the panels being routed. Slots are precut in this backup
panel to provide air passage and a place for dust to accumulate temporarily. The depth
of these slots was a control factor (factor H) in the case study.
Stack height (factor G) and X-Y feed (factor B) are control factors related to the
productivity of the process—that is, they determine how many boards are cut per hour.
The 3/16-in. stack height meant three panels were stacked together while 1/4-in. stack
height meant four panels were stacked together. The in-feed (factor C) determines the
impact force during the lowering of the spindle for starting to cut a new board. Thus,
it could influence the life of the bit regarding breakage or damage to its point. Four
different types of router bits (factor D) made by different manufacturers were
investigated in this study. The router bits varied in cutting geometry in terms of the helix
angle, the number of flutes, and the type of point.
Spindle position (factor E) is not a control factor. The variation in the state of
adjustment of the four spindles is indeed a noise factor for the routing process. All
spindle positions must be used in actual production; otherwise, the productivity would
suffer. The reason it was included in the study is that in such situations one must
choose the settings of control factors that work well with all four spindles. The
rationale for including the spindle position along with the control factors is given in
Section 11.5.
11.5 DESIGN OF THE MATRIX EXPERIMENT

For this case study, the goal was not only to estimate the main effects of the nine factors listed in the previous section, but also to estimate four key 2-factor interactions. Note that there are 36 distinct ways of choosing two factors from among nine factors. Thus, the number of 2-factor interactions associated with nine factors is 36. An attempt to estimate them all would take excessive experimentation, which is unnecessary anyway. The four interactions chosen for the case study were the ones judged to be the most important based on knowledge of the cutting process:
1. (X-Y feed) × (speed), that is, B × I
2. (in-feed) × (speed), that is, C × I
3. (stack height) × (speed), that is, G × I
4. (X-Y feed) × (stack height), that is, B × G
In addition to the requirements listed thus far, the experimenters had to consider
the following aspects from a practical viewpoint:
• Suction (factor A) was difficult to change due to difficult access to the pump.
• All four spindles on a machine move in identical ways—that is, they have the
same X-Y feed, in-feed, and speed. So, the columns assigned to these factors
should be such that groups of four rows can be made, where each group has a
common X-Y feed, in-feed, and speed. This allows all four spindles to be used
effectively in the matrix experiment.
These requirements for constructing the control orthogonal array are fairly complicated.
Let us now see how we can apply the advanced strategy described in Chapter 7 to
construct an appropriate orthogonal array for this project.
First, the degrees of freedom for this project can be calculated as follows:

    Overall mean                                                  1
    Seven 2-level factors (A, B, C, F, G, H, I): 7 × (2 − 1)      7
    Two 4-level factors (D, E): 2 × (4 − 1)                       6
    Four 2-factor interactions: 4 × (1 × 1)                       4
    Total                                                        18
Since there are 2-level and 4-level factors in this project, it is preferable to use an array from the 2-level series. Because there are 18 degrees of freedom, the array must have 18 or more rows; the smallest 2-level array with at least 18 rows is L32.
The linear graph needed for this case study, called the required linear graph, is
shown in Figure 11.1(a). Note that each 2-level factor is represented by a dot, and
interaction is represented by a line connecting the corresponding dots. Each 4-level
factor is represented by two dots, connected by a line according to the column merging
method in Chapter 7.
The next step in the advanced strategy for constructing orthogonal arrays is to select a suitable linear graph of the orthogonal array L32 and modify it to fit the required linear graph. Here we take a slightly different approach. We first simplify the required linear graph by taking advantage of the special circumstances and then proceed to fit a standard linear graph.
Figure 11.1 Linear graphs for the router bit case study: (a) the required linear graph, showing the 2-level factors A, B, C, F, G, H, and I as dots, the interactions I × B, I × C, I × G, and B × G as connecting lines, and the 4-level factors D and E as pairs of dots joined by a line; (b) the required linear graph after dropping factor I and its interactions; (c) a standard linear graph of the L16 array, having five lines that each connect two distinct dots; (d) the modified standard linear graph showing the assignment of factors to columns.
We notice that we must estimate the interactions of the factor I with three other
factors. One way to simplify the required linear graph is to treat I as an outer
factor—that is, to first construct an orthogonal array by ignoring I and its interactions.
Then, conduct each row of the orthogonal array with the two levels of I. By so doing,
we can estimate the main effect of I and also the interactions of I with all other factors.
The modified required linear graph, after dropping factor I and its interactions, is shown in Figure 11.1(b). This is a much simpler linear graph and, hence, easier to fit to a standard linear graph. Dropping the 2-level factor I and its interactions with three 2-level factors is equivalent to reducing the degrees of freedom by four. Thus, there are 14 degrees of freedom associated with the linear graph of Figure 11.1(b).
Therefore, the orthogonal array L16 can be used to fit the linear graph of Figure 11.1(b).
This represents a substantial simplification compared with having to use the array L32
for the original required linear graph. The linear graph of Figure 11.1(b) has three
lines, connecting two dots each, and four isolated dots. Thus, a standard linear graph
that has a number of lines that connect pairs of dots seems most appropriate. Such a
linear graph was selected from the standard linear graphs of L16 given in Appendix C
and it is shown in Figure 11.1(c). It has five lines, each connecting two distinct dots.
The step-by-step modification of this linear graph to make it fit the one in Figure
11.1 (b) is discussed next.
Next, from among the remaining three lines in the standard linear graph, we
arbitrarily chose columns 7 and 9 to form a 4-level column for factor D. Of course, the
interaction column 14 must be kept empty.
The two remaining lines are then broken to form six isolated dots corresponding
to columns 5, 10, 15, 6, 11, and 13. The next priority is to pick a column for factor G
so that the interaction B x G would be contained in one of the remaining five columns.
For this purpose, we refer to the interaction table for the L16 array given in Appendix
C. We picked column 15 for factor G. Column 13 contains interaction between
columns 2 and 15, so it can be used to estimate the interaction B x G. We indicate
this in the linear graph by a line joining the dots for the columns 2 and 15.
From the remaining four columns, we arbitrarily assign columns 10 and 5 to
factors F and H. Columns 6 and 11 are kept empty.
TABLE 11.2 THE STANDARD ORTHOGONAL ARRAY L16

                       Column number
Expt.
No.   1  2  3  4  5  6  7  8  9 10 11 12 13 14 15
 1    1  1  1  1  1  1  1  1  1  1  1  1  1  1  1
 2    1  1  1  1  1  1  1  2  2  2  2  2  2  2  2
 3    1  1  1  2  2  2  2  1  1  1  1  2  2  2  2
 4    1  1  1  2  2  2  2  2  2  2  2  1  1  1  1
 5    1  2  2  1  1  2  2  1  1  2  2  1  1  2  2
 6    1  2  2  1  1  2  2  2  2  1  1  2  2  1  1
 7    1  2  2  2  2  1  1  1  1  2  2  2  2  1  1
 8    1  2  2  2  2  1  1  2  2  1  1  1  1  2  2
 9    2  1  2  1  2  1  2  1  2  1  2  1  2  1  2
10    2  1  2  1  2  1  2  2  1  2  1  2  1  2  1
11    2  1  2  2  1  2  1  1  2  1  2  2  1  2  1
12    2  1  2  2  1  2  1  2  1  2  1  1  2  1  2
13    2  2  1  1  2  2  1  1  2  2  1  1  2  2  1
14    2  2  1  1  2  2  1  2  1  1  2  2  1  1  2
15    2  2  1  2  1  1  2  1  2  2  1  2  1  1  2
16    2  2  1  2  1  1  2  2  1  1  2  1  2  2  1
The final, modified standard linear graph, along with the assignment of factors to columns, is shown in Figure 11.1(d). The assignment of factors to the columns of the L16 array is as follows:

    A: column 1            F: column 10
    B: column 2            G: column 15
    C: column 3            H: column 5
    D: columns 7, 9, 14    B×G: column 13
    E: columns 4, 8, 12
The 4-level columns for factors D and E were formed in the L16 array by the column merging method of Section 7.7. The resulting 16-row orthogonal array is the same as the first 16 rows of Table 11.3, except for the column for factor I. Because I is an
outer factor, we obtain the entire matrix experiment as follows: make rows 17-32 the
same as rows 1-16. Add a column for factor I that has 1 in the rows 1-16 and 2 in the
rows 17-32. Note that the final matrix experiment shown in Table 11.3 is indeed an
orthogonal array—that is, in every pair of columns, all combinations occur and they
occur an equal number of times. We ask the reader to verify this claim for a few pairs
of columns. Note that the matrix experiment of Table 11.3 satisfies all the
requirements set forth earlier in this section.
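The claim can be checked mechanically. The sketch below rebuilds a 32-row design from the standard L16 of Table 11.2 and then verifies that every pair of columns is balanced. It uses one consistent way of applying the column-merging step (columns 7 and 9 for D, columns 4 and 12 for E); the text lists columns 4, 8, and 12 for E without specifying the pairing, so that choice is an assumption, and this is an illustrative reconstruction rather than code from the book.

```python
import numpy as np
from itertools import combinations

def l16():
    """Standard L16(2^15) array with entries 1/2 (16 rows, columns 1-15)."""
    arr = np.zeros((16, 15), dtype=int)
    for row in range(16):
        for col in range(1, 16):
            mask = int(f"{col:04b}"[::-1], 2)  # generator bits of this column
            arr[row, col - 1] = 1 + bin(row & mask).count("1") % 2
    return arr

def merge4(a, b):
    """Column merging: combine two 2-level columns into one 4-level column."""
    return (a - 1) * 2 + b  # (1,1)->1, (1,2)->2, (2,1)->3, (2,2)->4

L16 = l16()

# One consistent assignment following the text: A, B, C, F, G, H on single
# columns; D and E each formed from a pair of columns, with a third column
# reserved for that pair's interaction and left empty.
inner = np.column_stack([
    L16[:, 0],                       # A  <- column 1
    L16[:, 1],                       # B  <- column 2
    L16[:, 2],                       # C  <- column 3
    merge4(L16[:, 6], L16[:, 8]),    # D  <- columns 7 and 9 (column 14 left empty)
    merge4(L16[:, 3], L16[:, 11]),   # E  <- columns 4 and 12 (column 8 left empty)
    L16[:, 9],                       # F  <- column 10
    L16[:, 14],                      # G  <- column 15
    L16[:, 4],                       # H  <- column 5
])

# Outer factor I: the 16 rows are repeated, first with I = 1, then with I = 2.
design = np.vstack([np.column_stack([inner, np.full(16, 1, dtype=int)]),
                    np.column_stack([inner, np.full(16, 2, dtype=int)])])

# Orthogonality check: in every pair of columns, every level combination that
# occurs does so an equal number of times.
for i, j in combinations(range(design.shape[1]), 2):
    _, counts = np.unique(design[:, [i, j]], axis=0, return_counts=True)
    assert len(set(counts.tolist())) == 1, (i, j)
print("All column pairs of the 32-row matrix experiment are balanced.")
```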
The 32 experiments in the control orthogonal array of Table 11.3 are arranged in
eight groups of four experiments such that:
a. For each group there is a common speed, X-Y feed, and in-feed
b. The four experiments in each group correspond to four different spindles
Thus, each group constitutes a machine run using all four spindles, and the 32
experiments in the control orthogonal array can be conducted in eight runs of the routing
machine.
Observe the ease with which we were able to construct an orthogonal array for a
very complicated combinatoric problem using the standard orthogonal arrays and linear
graphs prepared by Taguchi.
As a rule, noise factors should not be mixed in with the control factors in a matrix
experiment (orthogonal array experiment). Instead, noise factors should be used to
form different testing conditions so that the S/N ratio can accurately measure
sensitivity to noise factors. According to this rule, we should have dropped the spindle-position column in Table 11.3 and considered the four spindle positions as four testing conditions for each row of the orthogonal array, which would amount to a four times larger experimental effort.
TABLE 11.3 MATRIX EXPERIMENT AND ROUTER BIT LIFE DATA*

Expt.              Factor levels
No.     A  B  C  D  E  F  G  H  I      Life
 5      1  2  2  3  1  2  2  1  1       0.5
 6      1  2  2  4  2  1  1  1  1       2.5
 7      1  2  2  1  4  2  1  2  1       0.5
 8      1  2  2  2  3  1  2  2  1       0.5
 9      2  1  2  4  1  1  2  2  1      17.5
10      2  1  2  3  2  2  1  2  1       2.5
11      2  1  2  2  4  1  1  1  1       0.5
12      2  1  2  1  3  2  2  1  1       3.5
13      2  2  1  2  1  2  1  2  1       0.5
14      2  2  1  1  2  1  2  2  1       2.5
15      2  2  1  4  4  2  2  1  1       0.5
16      2  2  1  3  3  1  1  1  1       3.5
21      1  2  2  3  1  2  2  1  2       0.5
22      1  2  2  4  2  1  1  1  2      17.5
23      1  2  2  1  4  2  1  2  2      14.5
24      1  2  2  2  3  1  2  2  2       0.5
25      2  1  2  4  1  1  2  2  2      17.5
26      2  1  2  3  2  2  1  2  2       3.5
27      2  1  2  2  4  1  1  1  2      17.5
28      2  1  2  1  3  2  2  1  2       3.5
29      2  2  1  2  1  2  1  2  2       0.5
30      2  2  1  1  2  1  2  2  2       3.5
31      2  2  1  4  4  2  2  1  2       0.5
32      2  2  1  3  3  1  1  1  2      17.5

* Life was measured in hundreds of inches of movement in the X-Y plane. Tests were terminated at 1,700 inches.
In order to economize on the size of the experiment, the experimenters took only one
observation of router bit life per row of the control orthogonal array. Of course, they
realized that taking two or three noise conditions per row of the control orthogonal
array would give them more accurate conclusions. However, doing this would mean
exceeding the allowed time and budget. Thus, a total of only 32 bits were used in this
project to determine the optimum settings of the control factors.
During each machine run, the machine was stopped after every 100 in. of cut
(that is, 100 in. of router bit movement in the X-Y plane) to inspect the amount of
dust. If the dust was beyond a certain minimum predetermined level, the bit was
recorded as failed. Also, if a bit broke, it was obviously considered to have failed.
Otherwise, it was considered to have survived.
Before the experiment was started, the average bit life was around 850 in. Thus,
each experiment was stopped at 1,700 in. of cut, which is twice the original average
life, and the survival or failure of the bit was recorded.
Table 11.3 gives the experimental data in hundreds of inches. A reading of 0.5 means
that the bit failed prior to the first inspection at 100 in. A reading of 3.5 means that
the bit failed between 300 and 400 in. Other readings have similar interpretation,
except the reading of 17.5 which means survival beyond 1,700 in., the point where the
experiment was terminated. Notice that for 14 experiments, the life is 0.5 (50 in.),
meaning that those conditions are extremely unfavorable. Also, there are eight cases of
life equal to 17.5, which are very favorable conditions. During experimentation, it is
important to take a broad range for each control factor so that a substantial number
of favorable and unfavorable conditions are created. Much can be learned about the
optimum settings of control factors when there is such diversity of data.
Now we will show two simple and separate analyses of the life data for
determining the best levels for the control factors. The first analysis is aimed at
determining the effect of each control factor on the mean failure time. The second analysis,
described in the next section, is useful for determining the effect of changing the level
of each factor on the survival probability curve.
The life data was analyzed by the standard procedures described in Chapter 3 to
determine the effects of the control factors on the mean life. The mean life for each
factor level and the results of analysis of variance are given in Table 11.4. These
results are plotted in Figure 11.2. Note that in this analysis we have ignored the effect
of both types of censoring. The following conclusions are apparent from the plots in
Figure 11.2:
• 1-in. suction is as good as 2-in. suction. Therefore, it is unnecessary to increase
suction beyond 2 in.
• Slower X-Y feed gives longer life.
• The effect of in-feed is small.
TABLE 11.4 FACTOR EFFECTS AND ANALYSIS OF VARIANCE FOR ROUTER BIT LIFE*

[Table 11.4 appears here. Its columns are: factor; level means for levels 1 through 4; sum of squares; degrees of freedom; mean square; and F ratio. Only the total row is reproduced: total sum of squares 1,627.90 with 31 degrees of freedom.]
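The level means and sums of squares reported in a table like Table 11.4 follow the standard procedure of Chapter 3. The sketch below illustrates that computation for a single factor; the life readings and the level assignments used here are hypothetical placeholders, not the actual experimental layout.

```python
import numpy as np

# Hypothetical data: one life reading (hundreds of inches) per row of the
# control orthogonal array, and the level of one factor in each row.
life = np.array([0.5, 2.5, 0.5, 0.5, 17.5, 2.5, 0.5, 3.5])   # placeholder readings
levels = np.array([1, 1, 1, 1, 2, 2, 2, 2])                   # placeholder factor column

overall_mean = life.mean()

# Level means: average life of the rows held at each level of the factor.
level_means = {lv: life[levels == lv].mean() for lv in np.unique(levels)}

# Sum of squares due to the factor:
# (number of rows at the level) x (level mean - overall mean)^2, summed over levels.
ss_factor = sum((levels == lv).sum() * (m - overall_mean) ** 2
                for lv, m in level_means.items())

# Total sum of squares about the overall mean (1,627.90 with 31 d.f. in Table 11.4).
ss_total = ((life - overall_mean) ** 2).sum()

print(level_means, ss_factor, ss_total)
```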
The best settings of the control factors, called optimum 1, suggested by the results
above, along with their starting levels, are displayed side by side in Table 11.5.
Using the linear model and taking into consideration only the terms for which
the variance ratio is large (that is, the factors B, D, F, G, and I, and the interaction I x G), we
can predict the router bit life under the starting, optimum, or any other combination of
control factor settings. The predicted lives under the starting and optimum conditions
are 888 in. and 2,225 in., respectively. The computations involved in the prediction
are displayed in Table 11.5. Note that the contribution of the I x G interaction under
the starting conditions was computed as follows:

    (m_I2G2 - m) - (m_I2 - m) - (m_G2 - m) = -2.30 ,

where m_I2G2 is the mean life for the experiments with speed I2 and stack height G2,
m_I2 and m_G2 are the corresponding factor level means, and m is the overall mean life.
The contribution of the I x G interaction under the optimum conditions was
computed in a similar manner.
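The additive prediction just described can be carried out mechanically. The sketch below is only an illustration of the arithmetic: the overall mean, the factor-level means, and the interaction cell mean used here are hypothetical placeholder values, not the actual router bit data.

```python
# Hypothetical values (hundreds of inches); only the form of the computation
# follows the text.
m = 5.3                                        # overall mean life (placeholder)
level_means = {"B": 7.1, "D": 6.0, "F": 6.5,
               "G": 7.8, "I": 6.9}             # placeholder means of the chosen levels
m_IG = 9.0                                     # placeholder mean of the chosen I, G cell

# Main-effect part: overall mean plus the deviation of each selected level mean.
predicted = m + sum(m_level - m for m_level in level_means.values())

# Interaction part: (cell mean - m) minus the two main-effect deviations,
# i.e. what the I x G cell adds beyond the I and G main effects.
predicted += (m_IG - m) - (level_means["I"] - m) - (level_means["G"] - m)

print(round(predicted, 2))                     # predicted mean life under these settings
```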
[Figure 11.2 appears here: plots of mean router bit life against the levels of each control factor, including suction (in.), X-Y feed, in-feed, spindle position, suction foot, stack height, depth of slot (thou), and spindle speed (rpm).]

Figure 11.2 Main effects of control factors on router bit life and some 2-factor interactions. Two-standard-deviation confidence limits on the main effect for the starting level are also shown.
[Figure 11.2 (continued) appears here: plots of the I x B, I x G, I x C, and G x B interactions.]

Figure 11.2 (Continued) Main effects of control factors on router bit life and some 2-factor interactions.
[Table 11.5 appears here: the starting and optimum (optimum 1) settings of the control factors, together with the computation of the predicted life under the starting and optimum conditions.]
Because of the censoring of the life data at 1,700 in., these predictions are likely to be on the low side,
especially the prediction under the optimum conditions, which is likely to be much less than the realized value.
From the machine logs, the router bit life under starting conditions was found to
be 900 in., while the verification (confirmatory) experiment under optimum conditions
yielded an average life in excess of 4,150 in.
In selecting the best operating conditions for the routing process, one must
consider the overall cost, which includes not only the cost of router bits but also the cost
of machine productivity, the cost of cleaning the boards if needed, etc. Under the
optimum conditions shown in Table 11.5, the stack height is 3/16 in. as opposed to 1/4
in. under the starting conditions. This means three panels are cut simultaneously
instead of four panels. However, the lost machine productivity caused by this change
can be made up by increasing the X-Y feed. If the X-Y feed is increased to 80 in. per
minute, the productivity of the machine would get back approximately to the starting
level. The predicted router bit life under these alternate optimum conditions, called
optimum 2, is 1,863 in., which is about twice the predicted life for starting conditions.
Thus, a 50-percent reduction in router bit cost can be achieved while still maintaining
machine productivity. An auxiliary experiment typically would be needed to estimate
precisely the effect of X-Y feed under the new settings of all other factors. This would
enable us to make an accurate economic analysis.
In summary, orthogonal array based matrix experiments are useful for finding
optimum control factor settings with regard to product life. In the router bit example,
the experimenters were able to improve the router bit life by a factor of 2 to 4.
Sometimes, in order to see any failures in a reasonable time, life tests must be
conducted under stressed conditions, such as higher than normal temperature or humidity.
Such life tests are called accelerated life tests. An important concern in using
accelerated life tests is how to ensure that the control factor levels found optimum
during the accelerated tests will also be optimum under normal conditions. This can be
achieved by including several stress levels in the matrix experiment and demonstrating
additivity. For an application of the Robust Design method for accelerated life tests,
see Phadke, Swann, and Hill [P6] and Mitchell [M1].
11.8 SURVIVAL PROBABILITY CURVES

The life data can also be analyzed in a different way (refer to the minute analysis
method described in Taguchi [T1] and Taguchi and Wu [T7]) to construct the survival
probability curves for the levels of each factor. To do so, we look at every 100 in. of
cut and note which router bits failed and which survived. Table 11.6 shows the
survival data displayed in this manner. Note that a 1 means survival and a 0 means
failure. The survival data at every time point can be analyzed by the standard method
described in Chapter 3 to determine the effects of various factors. Thus, for suction
levels A1 and A2, the level means at 100 in. of cut are 0.4375 and 0.6875,
respectively. These are nothing but the fraction of router bits surviving at 100 in. of cut for
the two levels of suction. The survival probabilities can be estimated in a similar
manner for each factor and each time period—100 in., 200 in., etc. These data are
plotted in Figure 11.3. These plots graphically display the effects of factor level
changes on the entire life curve and can be used to decide the optimum settings of the
control factors. In this case, the conclusions from these plots are consistent with those
from the analysis described in Section 11.7.
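A minimal sketch of this computation is given below, assuming the survival data are held as an array of 0s and 1s like Table 11.6 and that the level of the factor of interest in each row is known; the small arrays shown are placeholders rather than the actual 32-row data.

```python
import numpy as np

# Placeholder survival table: rows = experiments, columns = inspections at
# 100, 200, ..., 1,700 in. (1 = survived, 0 = failed), as in Table 11.6.
survival = np.array([
    [1, 1, 1, 0],      # only four inspection points shown for brevity
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 0, 0, 0],
])
factor_level = np.array([1, 1, 2, 2])   # placeholder level of one factor per row

# Fraction of bits surviving at each inspection point, for each factor level.
for lv in np.unique(factor_level):
    curve = survival[factor_level == lv].mean(axis=0)
    print(f"level {lv}:", curve)        # one point of the survival curve per inspection
```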
Plots similar to those in Figure 11.3 can be used to predict the entire survival
probability curve under a new set of factor level combinations such as the optimum
combination. The prediction method is described in Chapter 5, Section 5.5 in
conjunction with the analysis of ordered categorical data (see also Taguchi and Wu [T7]).
[Figure 11.3 appears here: survival probability curves for the levels of each control factor (suction, X-Y feed, in-feed, suction foot, stack height, depth of slot, and speed), plotted against inches of router bit movement from 0 to 2,000 in.]
1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
9 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
10 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
11 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
12 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
13 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
14 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
15 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
16 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
17 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
18 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
19 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
20 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
21 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
22 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
23 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0
24 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
25 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
26 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
27 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
28 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
29 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
30 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
31 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
32 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
Note that in this method of determining life curves, no assumption was made
regarding the shape of the curve—such as Weibull or log-normal distribution. Also,
the total amount of data needed to come up with the life curves is small. In this
example, it took only 32 samples to determine the effects of eight control factors. For a
single good fit of a Weibull distribution, one typically needs several tens of observations.
So, the approach used here can be very beneficial for reliability improvement projects.
11.9 SUMMARY
• The S/N ratio calculated from a continuous quality characteristic can be used to
estimate the average life of a new product. Let T|i be the S/N ratio for the new
product and r|2 be the S/N ratio for a benchmark product whose average life is
known. Then the average life of the new product is r times the average life of
the benchmark product, where
• The goal of the router bit case study was to reduce dust formation. Since there
existed no continuous quality characteristic that could be observed conveniently,
the life test was conducted to improve the router bit life.
• Effects of nine factors, eight control factors and spindle position, were studied
using an orthogonal array with 32 experiments. Out of the nine factors, two
factors had four levels, and the remaining seven had two levels. Four specific
2-factor interactions were also studied. In addition, there were several physical
restrictions regarding the factor levels. Use of Taguchi's linear graphs made it
easy to construct the orthogonal array, which allowed the estimation of desired
factor main effects and interactions while satisfying the physical restrictions.
• Only one router bit was used per experiment. Dust formation was observed
every 100 in. of cut in order to judge the failure of the bit. The length of cut
prior to formation of appreciable dust or breakage of the bit was called the bit
life and it was used as the quality characteristic. Each experiment was
terminated at 1,700 in. of cut regardless of failure or survival of the bit. Thus the
life data were censored.
• Effects of the nine factors on router bit life were computed and optimum levels
for the control factors were identified. Under a set of optimum conditions, called
optimum 1, a 4-fold increase in router bit life was observed, but with a
12.5-percent reduction in throughput. Under another set of optimum conditions,
called optimum 2, a 2-fold increase in router bit life was observed, with no drop
in throughput.
• The life data from a matrix experiment can also be analyzed by the minute analysis method
to determine the effects of the control factors on the survival probability curves.
This method of analysis does not presume any failure time distribution, such as
log-normal or Weibull distribution. Also, the total amount of data needed to
determine the survival probability curves is small.
Appendix A
ORTHOGONALITY OF A MATRIX EXPERIMENT
Consider the matrix experiment of Table 3.2 (Chapter 3), which has nine experiments, and let y1, y2, . . . , y9 denote the corresponding observations. Consider the linear form

    L = w1 y1 + w2 y2 + · · · + w9 y9 ,                               (A.1)

which is a weighted sum of the nine observations. The linear form L is called a
contrast if the weights add up to zero—that is, if

    w1 + w2 + · · · + w9 = 0 .                                        (A.2)

Two contrasts, L1 = a1 y1 + · · · + a9 y9 and L2 = b1 y1 + · · · + b9 y9, are said to be orthogonal if the inner product of the
vectors corresponding to their weights is zero—that is, if

    a1 b1 + a2 b2 + · · · + a9 b9 = 0 .                               (A.3)

Let us consider three weights w11, w12, and w13 corresponding to the three
levels in column 1 of the matrix experiment given by Table 3.2 (Chapter 3). Then we
call the following linear form, L1, the contrast corresponding to column 1:

    L1 = w11 (y1 + y2 + y3) + w12 (y4 + y5 + y6) + w13 (y7 + y8 + y9) .    (A.4)
Note that in Equation (A.4) we use the weight w11 whenever the level is 1, weight w12
whenever the level is 2, and weight w13 whenever the level is 3. Similarly, let w21, w22, and w23
be the weights corresponding to the three levels of column 2, so that the contrast corresponding
to column 2 is L2 = w21 (y1 + y4 + y7) + w22 (y2 + y5 + y8) + w23 (y3 + y6 + y9).

The inner product of the vectors corresponding to the weights in the two contrasts L1
and L2 is given by

    w11 w21 + w11 w22 + w11 w23 + w12 w21 + w12 w22 + w12 w23 + w13 w21 + w13 w22 + w13 w23
        = (w11 + w12 + w13)(w21 + w22 + w23) = 0 .
Hence, columns 1 and 2 are mutually orthogonal. The orthogonality of all pairs of
columns in the matrix experiment given by Table 3.2 can be verified in a similar
manner. In general it can be shown that the balancing property is a sufficient condition
for a matrix experiment to be orthogonal.
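As an informal check, the orthogonality of the column-1 and column-2 contrasts of the L9 array can also be verified numerically, as in the sketch below; the weights are arbitrary values that merely sum to zero, so this is an illustration rather than a proof.

```python
import numpy as np

# Columns 1 and 2 of the L9 array (Appendix C): levels of the nine experiments.
col1 = [1, 1, 1, 2, 2, 2, 3, 3, 3]
col2 = [1, 2, 3, 1, 2, 3, 1, 2, 3]

# Arbitrary contrast weights for the three levels of each column; each set sums to zero.
w1 = {1: 2.0, 2: -0.5, 3: -1.5}
w2 = {1: 1.0, 2: 1.0, 3: -2.0}

# Weight vectors over the nine observations, as in Equation (A.4).
v1 = np.array([w1[level] for level in col1])
v2 = np.array([w2[level] for level in col2])

# Inner product of the two weight vectors; zero means the contrasts are orthogonal.
print(np.dot(v1, v2))   # -> 0.0
```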
Appendix B
Here we define in precise mathematical terms the problem of minimizing the variance
of thickness while keeping the mean on target and derive its solution.
Let z = (z1, z2, . . . , zq)^T be the vector formed by the control factors, let x be the
vector formed by the noise factors, and let y(x; z) denote the observed quality characteristic.
Further, let μ(z) and σ²(z) denote the mean and the variance of y(x; z) with respect to the
noise factors, and let μ0 be the target value for the mean.

Problem Statement:

    Minimize over z:   σ²(z)
    Subject to:        μ(z) = μ0 .                                    (B.1)
Solution:
We postulate that one of the control factors, say z1, is a scaling factor. This implies that

    y(x; z) = z1 h(x; z')                                             (B.2)

for all x and z, where z' = (z2, z3, . . . , zq)^T, and h(x; z') does not depend on z1.
It follows that

    μ(z1, z') = z1 μh(z')                                             (B.3)

and

    σ²(z1, z') = z1² σh²(z') ,                                        (B.4)

where μh(z') and σh²(z') are, respectively, the mean and variance of h(x; z'). Now define
z* = (z1*, z'*) as follows: (a) choose z'* to minimize σh²(z')/μh²(z'), and (b) choose z1*
so that μ(z1*, z'*) = μ0.
We will now show that z* is an optimum solution to the problem defined by Equation (B.1).

First, note that z* is a feasible solution since μ(z1*, z'*) = μ0. Next, consider
any feasible solution z = (z1, z'). From Equations (B.3) and (B.4) we have

    σ²(z1, z') = μ²(z1, z') · σh²(z') / μh²(z') = μ0² · σh²(z') / μh²(z') .    (B.5)

Combining the definition of z'* in step (a) above and Equation (B.5), we have for all
feasible solutions

    σ²(z1, z') ≥ μ0² · σh²(z'*) / μh²(z'*) = σ²(z1*, z'*) .

Thus, z* = (z1*, z'*) is an optimal solution for the problem defined by Equation (B.1).
In fact, it is not even necessary to know which control factor is a scaling factor. We
can discover the scaling factor by examining the effects of all control factors on the
signal-to-noise (S/N) ratio and the mean. Any factor that has no effect on the S/N
ratio, but a significant effect on the mean can be used as a scaling factor.
In summary, the original constrained optimization problem can be solved as an
unconstrained optimization problem, step (a), followed by adjusting the mean on target,
step (b). For obvious reasons, this procedure is called a 2-step procedure. For further
discussions on the 2-step procedure, see Taguchi and Phadke [T6]; Phadke and Dehnad
[P4]; Leon, Shoemaker and Kackar [L2]; Nair and Pregibon [N2]; and Box [Bl]. The
particular derivation given in this appendix was suggested by M. Hamami.
Note that the derivation above is perfectly valid if we replace z1 in Equation
(B.2) by an arbitrary function g(z1) of z1. This represents a generalization of the
common concept of linear scaling.
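As an informal illustration of the 2-step procedure, the sketch below simulates a model of the form y = z1 · h(x; z2); the particular function h and the numbers are hypothetical. Step (a) maximizes μh²/σh² over the non-scaling factor, and step (b) uses the scaling factor to put the mean on target.

```python
import numpy as np

rng = np.random.default_rng(0)
target = 10.0                                # target mean (mu_0)

def h(z2, x):
    # Hypothetical response of the non-scaling factor z2 under noise x.
    return 1.0 + 0.2 * z2 + (1.0 / (1.0 + z2)) * x

# Step (a): choose z2 to maximize the S/N ratio mu_h^2 / sigma_h^2 (z1 plays no role here).
x = rng.normal(size=10_000)                  # noise sample
best_z2 = max(np.linspace(0.0, 5.0, 51),
              key=lambda z2: h(z2, x).mean() ** 2 / h(z2, x).var())

# Step (b): choose the scaling factor z1 so that the mean is on target.
z1 = target / h(best_z2, x).mean()

y = z1 * h(best_z2, x)
print(best_z2, round(y.mean(), 3), round(y.var(), 4))
```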
Appendix C
STANDARD ORTHOGONAL ARRAYS AND LINEAR GRAPHS*
The orthogonal arrays and linear graphs are reproduced with permission from Dr. Genichi Taguchi
and with help from Mr. John Kennedy of American Supplier Institute, Inc. For more details of the
orthogonal arrays and linear graphs, see Taguchi [T1] and Taguchi and Konishi [T5].
L4 (2^3)
Expt. Column
No. 1 2 3
1 1 1 1
2 1 2 2
3 2 1 2
4 2 2 1
L8 (2^7)

Expt.         Column
No.     1  2  3  4  5  6  7
1       1  1  1  1  1  1  1
2       1  1  1  2  2  2  2
3       1  2  2  1  1  2  2
4       1  2  2  2  2  1  1
5       2  1  2  1  2  1  2
6       2  1  2  2  1  2  1
7       2  2  1  1  2  2  1
8       2  2  1  2  1  1  2

Interaction table for L8 (2^7):

Column   1    2    3    4    5    6    7
1       (1)   3    2    5    4    7    6
2            (2)   1    6    7    4    5
3                 (3)   7    6    5    4
4                      (4)   1    2    3
5                           (5)   3    2
6                                (6)   1
7                                     (7)

[The linear graphs for the L8 (2^7) array appear here.]
L9 (3^4)

Expt.      Column
No.     1  2  3  4
1       1  1  1  1
2       1  2  2  2
3       1  3  3  3
4       2  1  2  3
5       2  2  3  1
6       2  3  1  2
7       3  1  3  2
8       3  2  1  3
9       3  3  2  1
L12 (2^11)

Expt.                Column
No.     1  2  3  4  5  6  7  8  9  10  11
1       1  1  1  1  1  1  1  1  1   1   1
2       1  1  1  1  1  2  2  2  2   2   2
3       1  1  2  2  2  1  1  1  2   2   2
4       1  2  1  2  2  1  2  2  1   1   2
5       1  2  2  1  2  2  1  2  1   2   1
6       1  2  2  2  1  2  2  1  2   1   1
7       2  1  2  2  1  1  2  2  1   2   1
8       2  1  2  1  2  2  2  1  1   1   2
9       2  1  1  2  2  2  1  2  2   1   1
10      2  2  2  1  1  1  1  2  2   1   2
11      2  2  1  2  1  2  1  1  1   2   2
12      2  2  1  1  2  1  2  1  2   2   1
Note: The interaction between any two columns is confounded partially with the remaining nine columns.
Do not use this array if the interactions must be estimated.
L16 (2^15)
Expt Column
No. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 2 2 2 2 2 2 2 2
3 1 1 2 2 2 2 1 1 1 1 2 2 2 2
4 1 1 2 2 2 2 2 2 2 2 1 1 1 1
5 2 2 1 1 2 2 1 1 2 2 1 1 2 2
6 2 2 1 1 2 2 2 2 1 1 2 2 1 1
7 2 2 2 2 1 1 1 1 2 2 2 2 1 1
8 2 2 2 2 1 1 2 2 1 1 1 1 2 2
9 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2
10 2 1 2 1 2 1 2 2 1 2 1 2 1 2 1
11 2 1 2 2 1 2 1 1 2 1 2 2 1 2 1
12 2 1 2 2 1 2 1 2 1 2 1 1 2 1 2
13 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1
14 2 2 1 1 2 2 1 2 1 1 2 2 1 1 2
15 2 2 1 2 1 1 2 1 2 2 1 2 1 1 2
16 2 2 1 2 1 1 2 2 1 1 2 1 2 2 1
Column
Column 12 3 4 5 6 7 8 9 10 11 12 13 14 15
1 (1) 3 2 5 4 7 6 9 8 11 10 13 12 15 14
2 (2) 1 6 7 4 5 10 11 8 9 14 15 12 13
3 (3) 7 6 5 4 11 10 9 8 15 14 13 12
4 (4) 1 2 3 12 13 14 15 8 9 10 11
5 (5) 3 2 13 12 15 14 9 8 11 10
6 (6) 1 14 15 12 13 10 11 8 9
7 (7) 15 14 13 12 11 10 9 8
8 (8) 1 2 3 4 5 6 7
9 (9) 3 2 5 4 7 6
10 (10) 1 6 7 4 5
11 (11) 7 6 5 4
12 (12) 1 2 3
13 (13) 3 2
14 (14) 1
15 (15)
[The standard linear graphs for the L16 (2^15) array appear here.]
L16 (4^5)

Expt.        Column
No.     1  2  3  4  5
1       1  1  1  1  1
2       1  2  2  2  2
3       1  3  3  3  3
4       1  4  4  4  4
5       2  1  2  3  4
6       2  2  1  4  3
7       2  3  4  1  2
8       2  4  3  2  1
9       3  1  3  4  2
10      3  2  4  3  1
11      3  3  1  2  4
12      3  4  2  1  3
13      4  1  4  2  3
14      4  2  3  1  4
15      4  3  2  4  1
16      4  4  1  3  2
Note: To estimate the interaction between columns 1 and 2, all other columns must be kept empty.
[The linear graph for the L16 (4^5) array appears here.]
L18 (2^1 x 3^7)

Expt.            Column
No.     1  2  3  4  5  6  7  8
1       1  1  1  1  1  1  1  1
2       1  1  2  2  2  2  2  2
3       1  1  3  3  3  3  3  3
4       1  2  1  1  2  2  3  3
5       1  2  2  2  3  3  1  1
6       1  2  3  3  1  1  2  2
7       1  3  1  2  1  3  2  3
8       1  3  2  3  2  1  3  1
9       1  3  3  1  3  2  1  2
10      2  1  1  3  3  2  2  1
11      2  1  2  1  1  3  3  2
12      2  1  3  2  2  1  1  3
13      2  2  1  2  3  1  3  2
14      2  2  2  3  1  2  1  3
15      2  2  3  1  2  3  2  1
16      2  3  1  3  2  3  1  2
17      2  3  2  1  3  1  2  3
18      2  3  3  2  1  2  3  1
Note: Interaction between columns 1 and 2 is orthogonal to all columns and hence can be estimated
without sacrificing any column. The interaction can be estimated from the 2-way table of columns 1 and
2. Columns 1 and 2 can be combined to form a 6-level column. Interactions between any other pair of
columns are confounded partially with the remaining columns.
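The combination mentioned in the note can be done mechanically; one common convention is sketched below, with short placeholder columns standing in for the full 18-row columns 1 and 2.

```python
# Columns 1 (2 levels) and 2 (3 levels) of an L18-type array; placeholders for
# the full 18 rows.
col1 = [1, 1, 1, 2, 2, 2]
col2 = [1, 2, 3, 1, 2, 3]

# One common convention: pair (a, b) -> 3*(a - 1) + b, giving levels 1 through 6.
combined = [3 * (a - 1) + b for a, b in zip(col1, col2)]
print(combined)   # -> [1, 2, 3, 4, 5, 6]
```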
[The linear graph for the L18 (2^1 x 3^7) array appears here.]
L25 (5^6)

Expt.           Column
No.     1  2  3  4  5  6
1       1  1  1  1  1  1
2       1  2  2  2  2  2
3       1  3  3  3  3  3
4       1  4  4  4  4  4
5       1  5  5  5  5  5
6       2  1  2  3  4  5
7       2  2  3  4  5  1
8       2  3  4  5  1  2
9       2  4  5  1  2  3
10      2  5  1  2  3  4
11      3  1  3  5  2  4
12      3  2  4  1  3  5
13      3  3  5  2  4  1
14      3  4  1  3  5  2
15      3  5  2  4  1  3
16      4  1  4  2  5  3
17      4  2  5  3  1  4
18      4  3  1  4  2  5
19      4  4  2  5  3  1
20      4  5  3  1  4  2
21      5  1  5  4  3  2
22      5  2  1  5  4  3
23      5  3  2  1  5  4
24      5  4  3  2  1  5
25      5  5  4  3  2  1
Note: To estimate the interaction between columns 1 and 2, all other columns must be kept empty.
L27 (3^13)

Expt.    Column
No.    1  2  3  4  5  6  7  8  9  10  11  12  13
1 1 1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 2 2 2 2 2 2 2 2 2
3 1 1 1 3 3 3 3 3 3 3 3 3
4 2 2 2 1 1 1 2 2 2 3 3 3
S 2 2 2 2 2 2 3 3 3 1 1 1
6 2 2 2 3 3 3 1 1 1 2 2 2
7 3 3 3 1 1 1 3 3 3 2 2 2
8 3 3 3 2 2 2 1 1 1 3 3 3
9 3 3 3 3 3 3 2 2 2 1 1 1
10 2 1 2 3 1 2 3 1 2 3 1 2 3
11 2 1 2 3 2 3 1 2 3 1 2 3 1
12 2 1 2 3 3 1 2 3 1 2 3 1 2
13 2 2 3 1 1 2 3 2 3 1 3 1 2
14 2 2 3 1 2 3 1 3 1 2 1 2 3
15 2 2 3 1 3 1 2 1 2 3 2 3 1
16 2 3 1 2 1 2 3 3 1 2 2 3 1
17 2 3 1 2 2 3 1 1 2 3 3 1 2
18 2 3 1 2 3 1 2 2 3 1 1 2 3
19 3 1 3 2 1 3 2 1 3 2 1 3 2
20 3 1 3 2 2 1 3 2 1 3 2 1 3
21 3 1 3 2 3 2 1 3 2 1 3 2 1
22 3 2 1 3 1 3 2 2 1 3 3 2 1
23 3 2 1 3 2 1 3 3 2 1 1 3 2
24 3 2 1 3 3 2 1 1 3 2 2 1 3
25 3 3 2 1 1 3 2 3 2 1 2 1 3
26 3 3 2 1 2 1 3 1 3 2 3 2 1
27 3 3 2 1 3 2 1 2 1 3 1 3 2
Column
Column 1 2 3 4 S 6 7 8 9 10 11 12 13
1 2 6 5 5 9 8 8 12 11 11
(1) I 4
4 4 3 7 7 6 10 10 9 13 13 12
2 1 8 9 10 5 6 7 5 6 7
(2) ' 4 3 11 12 13 11 12 13 8 9 10
3 9 10 8 7 5 6 6 7 5
(3) 1 2 13 11 12 12 13 11 10 8 9
4 8 9 6 7 5 7 5 6
(4) 10
12 13 11 13 11 12 9 10 8
S 1 2 3 4 2 4 3
(5) 17 6 11 13 12 8 10 9
6 2 3 3 2 4
(6) 5
1 13
4 12 11 10 9 8
7 4 2 4 3 2
(7) ,32 11 13 9 8 10
8 1 2 3 4
(8) 1 10 9 5 7 6
9 1 4 2 3
(9) 8 7 6 5
10 4 2
(10) 36 5 7
11 1 1
(ID 13 12
12 1
(12) 11
13 (13)
[The linear graphs for the L27 (3^13) array appear here.]
L32 (2^31)
Expt. Column
No. 12 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
17 2 12 12 12 1 212121212121 2 12 12 12 12 12
18 2 12 12 12 1 212121221212 12 12 12 12 12 1
19 2 12 12 12 2 121212112121 2 12 2 12 12 12 1
20 2 12 12 12 2 121212121212 12 112 12 12 12
21 2 12 2 12 11 2122121 12122 12 1 12 12 2 12 1
22 2 12 2 12 11 21221212121 1 2 12 2 12 112 12
23 2 12 2 12 12 121 121212122 12 12 12 112 12
24 2 12 2 12 12 121121221211 2 12 12 12 2 12 1
1 (1) 1 2 5 4 7 6 9 8 II 10 13 12 15 14 17 16 19 18 21 20 23 22 25 24 27 26 29 28 31 30
2 (2) 1 6 7 4 5 10 11 8 9 14 15 12 13 18 19 16 17 22 21 20 21 26 27 24 25 30 31 28 29
3 (3) 7 6 5 4 II 10 9 8 15 14 13 12 19 18 17 16 23 22 21 20 27 26 25 24 31 30 29 28
4 (4) 1 2 3 12 13 14 15 8 9 10 11 20 21 22 23 16 17 18 19 28 29 30 31 24 25 26 27
5 (5) 3 2 13 12 15 14 9 8 II 10 21 20 23 22 17 16 19 18 29 28 31 30 25 24 27 26
6 (6) 1 14 15 12 13 10 11 8 9 22 23 20 21 18 19 16 17 30 31 28 29 26 27 24 25
7 (7) 15 14 13 12 II 10 9 8 23 22 21 20 19 18 17 16 31 30 29 28 27 26 25 24
8 (8) 1 2 3 4 5 6 7 24 25 26 27 28 29 30 31 16 17 18 19 20 21 22 23
9 (9) 3 2 5 4 7 6 25 24 27 26 29 28 31 30 17 16 19 18 21 20 23 22
10 (10) 1 6 7 4 5 26 27 24 25 30 31 28 29 18 19 16 17 22 23 20 21
11 (II) 7 6 5 4 27 26 25 24 31 30 29 28 19 18 17 16 23 22 21 20
12 (12) 1 2 3 28 29 30 31 24 25 26 27 20 21 22 23 16 17 18 19
13 (13) 3 2 29 28 31 30 25 24 27 26 21 20 23 22 17 16 19 18
14 (14) 1 30 31 28 29 26 27 24 25 22 23 20 21 18 19 16 17
15 (15) 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16
16 (16) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
17 (17) 3 2 5 4 7 6 9 8 11 10 13 12 15 14
18 (18) 1 6 7 4 5 10 11 8 9 14 15 12 13
19 (19) 7 6 5 4 II 10 9 8 15 14 13 12
20 (20) 1 2 3 12 13 14 15 8 9 10 11
21 (21) 3 2 13 12 15 14 9 8 II 10
22 (22) 1 14 15 12 13 10 11 8 9
23 (23) 15 14 13 12 II 10 9 8
24 (24) 1 2 3 4 5 6 7
25 (25) 3 2 5 4 7 6
26 (26) 1 6 7 4 5
27 (27) 7 6 5 4
28 (28) 1 2 3
29 (29) 3 2
30 (30) 1
31 (3D
[The standard linear graphs for the L32 (2^31) array appear here.]
L32 (2^1 x 4^9)

Expt.          Column
No.     1  2  3  4  5  6  7  8  9  10
1 11 11 11 11 11
2 11 22 22 22 22
3 11 33 33 33 33
4 11 44 44 44 44
5 12 11 2 2 3 3 4 4
6 12 2 2 11 4 4 3 3
7 12 3 3 4 4 11 2 2
8 12 4 4 3 3 2 2 11
9 13 12 3 4 12 3 4
10 13 2 1 4 3 2 1 4 3
11 13 3 4 12 3 4 12
12 13 4 3 2 1 4 3 2 1
13 14 12 4 3 3 4 2 1
14 14 2 1 3 4 4 3 12
15 14 3 4 2 1 12 4 3
16 14 4 3 12 2 1 3 4
17 2 1 14 14 2 3 2 3
18 2 1 2 3 2 3 14 14
19 2 1 3 2 3 2 4 1 4 1
20 2 1 4 1 4 1 3 2 3 2
21 22 14 23 41 32
22 22 23 14 32 41
23 22 32 41 23 14
24 22 41 32 14 23
25 23 13 31 24 42
26 23 24 42 13 31
27 23 31 13 42 24
28 23 42 24 31 13
29 24 13 42 42 13
30 24 24 31 31 24
31 24 31 24 24 31
32 24 42 13 13 42
Note: Interaction between columns 1 and 2 is orthogonal to all columns and hence can be estimated
without sacrificing any column. It can be estimated from the 2-way table of these columns. Columns 1
and 2 can be combined to form an 8-level column. Interactions between any two 4-level columns are
confounded partially with each of the remaining 4-level columns.
[The linear graph for the L32 (2^1 x 4^9) array appears here.]
L36 (2^11 x 3^12)
Expt Column
No. 123456789 10 11 12 13 14 15 16 17 18 19 20 21 22 23
1 111111111 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 111111111 1 1 2 2 2 2 2 2 2 2 2 2 2 2
3 111111111 1 1 3 3 3 3 3 3 3 3 3 3 3 3
4 111112 2 2 2 2 2 1 1 1 1 2 2 2 2 3 3 3 3
S 111112 2 2 2 2 2 2 2 2 2 3 3 3 3 1 1 1 1
6 111112 2 2 2 2 2 3 3 3 3 1 1 1 1 2 2 2 2
7 112 2 2 1112 2 2 1 1 2 3 1 2 3 3 1 2 2 3
8 112 2 2 1112 2 2 2 2 3 1 2 3 1 1 2 3 3 1
9 112 2 2 1112 2 2 3 3 1 2 3 1 2 2 3 1 1 2
10 12 12 2 12 2 1 1 2 1 1 3 2 1 3 2 3 2 1 3 2
11 12 12 2 12 2 1 1 2 2 2 1 3 2 1 3 1 3 2 1 3
12 12 12 2 12 2 1 1 2 3 3 2 1 3 2 1 2 1 3 2 1
13 12 2 12 2 12 1 2 1 1 2 3 1 3 2 1 3 3 2 1 2
14 12 2 12 2 12 1 2 1 2 3 1 2 1 3 2 1 1 3 2 3
15 12 2 12 2 12 1 2 1 3 1 2 3 2 1 3 2 2 1 3 1
16 12 2 2 12 2 12 1 1 1 2 3 2 1 1 3 2 3 3 2 1
17 12 2 2 12 2 12 1 1 2 3 1 3 2 2 1 3 1 1 3 2
18 12 2 2 12 2 12 1 1 3 1 2 1 3 3 2 1 2 2 1 3
19 2 12 2 1 12 2 1 2 1 1 2 1 3 3 3 1 2 2 1 2 3
20 2 12 2 1 12 2 1 2 1 2 3 2 1 1 1 2 3 3 2 3 1
21 2 12 2 1 12 2 1 2 1 3 1 3 2 2 2 3 1 1 3 1 2
22 2 12 12 2 2 11 1 2 1 2 2 3 3 1 2 1 1 3 3 2
23 2 12 12 2 2 11 1 2 2 3 3 1 1 2 3 2 2 1 1 3
24 2 12 12 2 2 1 1 1 2 3 1 1 2 2 3 1 3 3 2 2 1
25 2 1 12 2 2 12 2 1 1 1 3 2 1 2 3 3 1 3 1 2 2
26 2 1 12 2 2 12 2 1 1 2 1 3 2 3 1 1 2 1 2 3 3
27 2 112 2 2 12 2 1 1 3 2 1 3 1 2 2 3 2 3 1 1
1
28 2 2 2 11112 2 1 2 1 3 2 2 1 1 3 2 3 1 3
29 2 2 2 11112 2 1 2 2 1 3 3 3 2 2 1 3 1 2 1
30 2 2 2 11112 2 1 2 3 2 1 1 1 3 3 2 1 2 3 2
31 2 2 12 12 111 2 2 1 3 3 3 2 3 2 2 1 2 1 1
32 2 2 12 12 111 2 2 2 1 1 1 3 1 3 3 2 3 2 2
33 2 2 12 12 111 2 2 3 2 2 2 1 2 1 1 3 1 3 3
34 2 2 112 12 12 2 1 1 3 1 2 3 2 3 1 2 2 3 1
35 2 2 1 12 12 12 2 1 2 1 2 3 1 3 1 2 3 3 1 2
36 2 2 112 12 12 2 1 3 2 3 1 2 1 2 3 1 1 2 3
Note: Interaction between any two columns is partially confounded with the remaining columns.
L36 (2^3 x 3^13)
Expt. Column
No. 123456789 10 11 12 13 14 15 16
1 1111111111 1 1 1 1 1 1
2 1 1 1 12 2 2 2 2 2 2 2 2 2 2 2
3 1 1 1 13 3 3 3 3 3 3 3 3 3 3 3
4 12 2 111112 2 2 2 3 3 3 3
S 1221222233 3 3 1 1 1 1
6 12 2 13 3 3 3 11 1 1 2 2 2 2
7 2 12 1 1 12 3 12 3 3 1 2 2 3
8 2 12 12 2 3 12 3 1 1 2 3 3 1
9 2 12 13 3 12 3 1 2 2 3 1 1 2
10 2 2 11113 2 13 2 3 2 1 3 2
11 2 2 112 2 13 2 1 3 1 3 2 1 3
12 2 2 113 3 2 13 2 1 2 1 3 2 1
13 1 1 12 12 3 13 2 1 3 3 2 1 2
14 1112 2 3 12 13 2 1 1 3 2 3
15 1112 3 12 3 2 1 3 2 2 1 3 1
16 12 2 2 12 3 2 11 3 2 3 3 2 1
17 1222231322 1 3 1 1 3 2
18 12 2 2 3 12 13 3 2 1 2 2 1 3
19 2 12 2 12 13 3 3 1 2 2 1 2 3
20 2 12 2 2 3 2 1 11 2 3 3 2 3 1
21 2122313222 3 1 1 3 1 2
22 2 2 12 12 2 3 3 1 2 1 1 3 3 2
23 2 2 12 2 3 3 1 12 3 2 2 1 1 3
24 2 2 12 3 1 12 2 3 1 3 3 2 2 1
25 1113 13 2 12 3 3 1 3 1 2 2
26 1113 2 13 2 3 1 1 2 1 2 3 3
27 1 1 13 3 2 13 12 2 3 2 3 1 1
28 12 2 3 13 2 2 2 1 1 3 2 3 1 3
29 1223213332 2 1 3 1 2 1
30 12 2 3 3 2 1 1 13 3 2 1 2 3 2
31 2123133323 2 2 1 2 1 1
32 2 12 3 2 1 1 13 1 3 3 2 3 2 2
33 2123322212 1 1 3 1 3 3
34 2 2 13 13 12 3 2 3 1 2 2 2 3
35 2 2 13 2 12 3 13 1 2 3 3 1 2
36 2 2 13 3 2 3 12 1 2 3 1 1 2 3
Notes: (i) The interactions 1 x 4, 2 x 4, and 3 x 4 are orthogonal to all columns and hence can be obtained
without sacrificing any column. (ii) The 3-factor interaction between columns 1, 2, and 4 can be obtained by
keeping only column 3 empty. Thus, a 12-level factor can be formed by combining columns 1, 2, and 4 and
by keeping column 3 empty. (iii) Columns 5 through 16 in the array L36 (2^3 x 3^13) are the same as
columns 12 through 23 in the array L36 (2^11 x 3^12).
L50 (2^1 x 5^11)

Expt.          Column
No.     1  2  3  4  5  6  7  8  9  10  11  12
1 111111111111
2 1 12222222222
3 1 13333333333
4 1 1 4444444444
5 115555555555
6 121234512345
7 122345123451
8 123451234512
9 124512345123
10 125123451234
11 131352441352
12 132413552413
13 1335241 13524
14 134135224135
15 135241335241
16 141425353142
17 142531414253
18 143142525314
19 144253131425
20 145314242531
21 151543243215
22 1 521 54354321
23 153215415432
24 154321521543
25 155432132154
26 21 1 145432523
27 212251543134
28 213312154245
29 214423215351
30 215534321412
31 221213324554
32 2223244351 15
33 223435541221
34 224541152332
35 225152213443
36 231331255424
37 232442311535
38 233553422141
39 2341 14533252
40 235225144313
(Continued)
Expt. Column
No. 1 2 3 4 5 6 7 8 9 10 11 12
41 2 4 1 4 5 4 1 2 5 2 3 3
42 2 4 2 5 1 5 2 3 1 3 4 4
43 2 4 3 1 2 1 3 4 2 4 5 5
44 2 4 4 2 3 2 4 5 3 5 1 1
45 2 4 5 3 4 3 5 1 4 1 2 2
46 2 5 1 5 2 2 5 3 4 4 3 1
47 2 5 2 1 3 3 1 4 5 5 4 2
48 2 5 3 2 4 4 2 5 1 1 5 3
49 2 5 4 3 5 5 3 1 2 2 1 4
50 2 5 5 4 1 1 4 2 3 3 2 5
Note: Interaction between columns 1 and 2 is orthogonal to all columns and hence can be estimated
without sacrificing any column. It can be estimated from the 2-way table of these two columns. Columns
1 and 2 can be combined to form a 10-level column.
L54 (2^1 x 3^25)

Expt.          Column
No.    1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
1 1111111111 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 111111112 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
3 111111113 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
4 1 12 2 2 2 2 2 11 1 1 1 1 2 3 2 3 2 3 2 3 2 3 2 3
5 1 122222222 2 2 2 2 3 1 3 1 3 1 3 1 3 1 3 1
6 1 1 22222233 3 3 3 3 1 2 1 2 1 2 1 2 1 2 1 2
7 1 13 3 3 3 3 3 11 1 1 1 1 3 2 3 2 3 2 3 2 3 2 3 2
8 1 133333322 2 2 2 2 1 3 1 3 1 3 1 3 1 3 1 3
9 1 133333333 3 3 3 3 2 1 2 1 2 1 2 1 2 1 2 1
10 12 1 12 2 3 3 11 2 2 3 3 1 1 1 1 2 3 2 3 3 2 3 2
11 12 1 12 2 3 3 2 2 3 3 1 1 2 2 2 2 3 1 3 1 1 3 1 3
12 12 1 12 2 3 3 3 3 1 1 2 2 3 3 3 3 1 2 1 2 2 1 2 1
13 12 2 2 3 3 1 1 11 2 2 3 3 2 3 2 3 3 2 3 2 1 1 1 1
14 12 2 2 3 3 1 12 2 3 3 1 1 3 1 3 1 1 3 1 3 2 2 2 2
15 12 2 2 3 3 1 13 3 1 1 2 2 1 2 1 2 2 1 2 1 3 3 3 3
16 12 3 3 1 12 2 11 2 2 3 3 3 2 3 2 1 1 1 1 2 3 2 3
17 12 3 3 112 2 2 2 3 3 1 1 1 3 1 3 2 2 2 2 3 1 3 1
18 12 3 3 1 12 2 3 3 1 1 2 2 2 1 2 1 3 3 3 3 1 2 1 2
19 13 12 13 2 3 12 1 3 2 3 1 1 2 3 1 1 3 2 2 3 3 2
20 13 12 13 2 3 2 3 2 1 3 1 2 2 3 1 2 2 1 3 3 1 1 3
21 13 12 13 2 3 3 1 3 2 1 2 3 3 1 2 3 3 2 1 1 2 2 1
22 13 2 3 2 13 1 12 1 3 2 3 2 3 3 2 2 3 1 1 3 2 1 1
23 13 2 3 2 13 12 3 2 1 3 1 3 1 1 3 3 1 2 2 1 3 2 2
24 13 2 3 2 13 13 1 3 2 1 2 1 2 2 1 1 2 3 3 2 1 3 3
25 13 3 13 2 12 12 1 3 2 3 3 2 1 1 3 2 2 3 1 1 2 3
26 13 3 13 2 12 2 3 2 1 3 1 1 3 2 2 1 3 3 1 2 2 3 1
27 13 3 13 2 12 3 1 3 2 1 2 2 1 3 3 2 1 1 2 3 3 1 2
28 2 1 13 3 2 2 1 13 3 2 2 1 1 1 3 2 3 2 2 3 2 3 1 1
29 2 1 13 3 2 2 12 1 1 3 3 2 2 2 1 3 1 3 3 1 3 1 2 2
30 2 1 13 3 2 2 13 2 2 1 1 3 3 3 2 1 2 1 1 2 1 2 3 3
31 2 12 113 3 2 13 3 2 2 1 2 3 1 1 1 1 3 2 3 2 2 3
32 2 12 1 13 3 2 2 1 1 3 3 2 3 1 2 2 2 2 1 3 1 3 3 1
33 2 12 113 3 2 3 2 2 1 1 3 1 2 3 3 3 3 2 1 2 1 1 2
34 2 13 2 2 113 13 3 2 2 1 3 2 2 3 2 3 1 1 1 1 3 2
35 2 13 2 2 1 13 2 1 1 3 3 2 1 3 3 1 3 1 2 2 2 2 1 3
36 2 13 2 2 1 13 3 2 2 1 1 3 2 1 1 2 1 2 3 3 3 3 2 1
37 2 2 12 3 13 2 12 3 1 3 2 1 1 2 3 3 2 1 1 3 2 2 3
38 2212313223 1 2 1 3 2 2 3 1 1 3 2 2 1 3 3 1
39 2 2 12 3 13 2 3 1 2 3 2 1 3 3 1 2 2 1 3 3 2 1 1 2
(Continued)
Expl. Column
No. 123456789 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
40 2 2 2 3 12 13 1 2 3 1 3 2 2 3 3 2 1 1 2 3 113 2
41 2 2 2 3 12 13 2 3 1 2 1 3 3 1 1 3 2 2 3 12 2 13
42 2 2 2 3 12 13 3 1 2 3 2 1 1 2 2 1 3 3 1 2 3 3 2 1
43 2 2 3 12 3 2 11 2 3 1 3 2 3 2 1 1 2 3 3 2 2 3 11
44 2 2 3 12 3 2 12 3 1 2 1 3 1 3 2 2 3 1 1 3 3 12 2
45 2 2 3 12 3 2 13 1 2 3 2 1 2 1 3 3 1 2 2 112 3 3
46 2 3 13 2 3 12 1 3 2 3 1 2 1 1 3 2 2 3 3 2 112 3
47 2 3 13 2 3 12 2 1 3 1 2 3 2 2 1 3 3 1 1 3 2 2 3 1
48 2 3 13 2 3 12 3 2 1 2 3 1 3 3 2 1 1 2 2 13 3 12
49 2 3 2 13 12 2 1 3 2 3 1 2 2 3 1 1 3 2 1 12 3 3 2
50 2 3 2 13 12 3 2 1 3 1 2 3 3 1 2 2 1 3 2 2 3 113
51 2 3 2 13 12 3 3 2 1 2 3 1 1 2 3 3 2 1 3 3 12 2 1
52 2 3 3 2 12 3 1 1 3 2 3 1 2 3 2 2 3 1 1 2 3 3 2 11
53 2 3 3 2 12 3 12 1 3 1 2 3 1 3 3 1 2 2 3 113 2 2
54 2 3 3 2 12 3 13 2 1 2 3 1 2 1 1 2 3 3 1 2 2 13 3
Notes: (i) Interaction between columns 1 and 2 is orthogonal to all columns and hence can be estimated
without sacrificing any column. Also, these columns can be combined to form a 6-level column, (ii) The
interactions 1 x 9, 2 x 9, and 1x2x9 appear comprehensively in the columns 10, 11, 12, 13, and 14.
Hence, the aforementioned interactions can be obtained by keeping columns 10 through 14 empty. Also,
columns 1, 2, and 9 can be combined to form an 18-level column by keeping columns 10 through 14
empty.
L64 (2^63)

Expt. Column
No. 123456789 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
1 1111111111 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 1111111111 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
3 1111111111 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
4 1111111111 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
5 11111112 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2
6 11111112 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2
7 11111112 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1
8 11111112 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1
9 1 1 12 2 2 2 1 11 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2
10 1 1 12 2 2 2 1 11 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2
11 1112 2 2 2 111 1 2 2 2 2 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1
12 1112 2 2 2 111 1 2 2 2 2 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1
13 1112 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1
14 1 1 12 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 1 1 1 1
15 1 1 12 2 2 2 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2
16 1 1 12 2 2 2 2 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2
17 12 2 1 12 2 1 12 2 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2
18 12 2 1 12 2 1 12 2 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 1 2 2
19 12 2 112 2 112 2 1 1 2 2 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 1
20 12 2 1 12 2 1 12 2 1 1 2 2 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 1
21 12 2 1 12 2 2 2 1 1 2 2 1 1 1 1 2 2 1 1 2 2 2 2 1 1 2 2 1 1
22 12 2 1 12 2 2 2 1 1 2 2 1 1 1 1 2 2 1 1 2 2 2 2 1 1 2 2 1 1
23 12 2 1 12 2 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 1 1 1 2 2 1 1 2 2
24 12 2 1 12 2 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1 1 1 1 2 2 1 1 2 2
25 12 2 2 2 11112 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1
26 12 2 2 2 1 1 1 12 2 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1 1
27 12 2 2 2 11112 2 2 2 1 1 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2
28 12 2 2 2 11112 2 2 2 1 1 2 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2
29 12 2 2 2 112 2 1 1 1 1 2 2 1 1 2 2 2 2 1 1 2 2 1 1 1 1 2 2
30 12 2 2 2 112 2 1 1 1 1 2 2 1 1 2 2 2 2 1 1 2 2 1 1 1 1 2 2
31 12 2 2 2 1 12 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 1 1 2 2 1 1 1 1
32 12 2 2 2 1 12 2 1 1 1 1 2 2 2 2 1 1 1 1 2 2 1 1 2 2 2 2 1 1
33 2 12 12 12 12 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2
34 2 12 12 12 12 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2
35 2 12 12 12 12 1 2 1 2 1 2 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1
36 2 12 12 12 12 1 2 1 2 1 2 2 1 2 1 2 1 2 1 2 1 2 1 2 1 2 1
37 2 12 12 12 2 12 1 2 1 2 1 1 2 1 2 1 2 1 2 2 1 2 1 2 1 2 1
38 2 12 12 12 2 12 1 2 1 2 1 1 2 1 2 1 2 1 2 2 1 2 1 2 1 2 1
39 2 12 12 12 2 12 1 2 1 2 1 2 1 2 1 2 1 2 1 1 2 1 2 1 2 1 2
40 2 12 12 12 2 12 1 2 1 2 1 2 1 2 1 2 1 2 1 1 2 1 2 1 2 1 2
(Continued)
Expl. Column
No. 123456789 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
41 2 12 2 12 112 1 2 2 1 2 1 1 2 1 2 2 1 2 1 12 12 2 12 1
42 2 12 2 12 112 1 2 2 1 2 1 1 2 1 2 2 1 2 1 12 12 2 12 1
43 2 12 2 12 112 1 2 2 1 2 1 2 1 2 1 1 2 12 2 12 1 12 12
44 2 12 2 12 1 12 1 2 2 1 2 1 2 1 2 1 1 2 12 2 12 1 12 12
45 2 12 2 12 12 1 2 1 1 2 1 2 1 2 1 2 2 1 2 12 12 112 12
46 2 12 2 12 12 1 2 1 1 2 1 2 1 2 1 2 2 1 2 12 12 112 12
47 2 12 2 12 12 1 2 1 1 2 1 2 2 1 2 1 1 2 12 12 12 2 12 1
48 2 12 2 12 12 1 2 1 1 2 1 2 2 1 2 1 1 2 12 12 12 2 12 1
57 2 2 12 112 12 2 1 2 1 1 2 1 2 2 1 2 1 12 12 2 12 112
58 2 2 12 112 12 2 1 2 1 1 2 1 2 2 1 2 1 12 12 2 12 112
59 2 2 12 1 12 12 2 1 2 1 1 2 2 1 1 2 1 2 2 12 1 12 12 2 1
60 2 2 12 112 12 2 1 2 1 1 2 2 1 1 2 1 2 2 12 112 12 2 1
61 2 2 12 1 12 2 1 1 2 1 2 2 1 1 2 2 1 2 1 12 2 1 12 12 2 1
62 2 2 12 112 2 1 1 2 1 2 2 1 1 2 2 1 2 1 12 2 112 12 2 1
63 2 2 12 1 12 2 1 1 2 1 2 2 1 2 1 1 2 1 2 2 112 2 12 112
64 2 2 12 112 2 1 1 2 1 2 2 1 2 1 1 2 1 2 2 112 2 12 112
(Continued)
L64 (2^63) (Continued)
Expl. Column
No. 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
9 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 11 1 1 2 2 2 2 1 1 112 2 2 2
10 2 2 2 2 1 1 1 1 2 2 2 2 1 1112 2 2 2 11112 2 2 2 11 1 1
11 1 1 1 1 2 2 2 2 1 1 1 1 2 2 2 2 2 2 2 2 11112 2 2 2 11 1 1
12 2 2 2 2 1 1 1 1 2 2 2 2 1 11111 1 1 2 2 2 2 1 1 112 2 2 2
13 1 1 1 1 2 2 2 2 2 2 2 2 1 11111 1 1 2 2 2 2 2 2 2 2 11 1 1
14 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 11111 1 112 2 2 2
15 1 1 1 1 2 2 2 2 2 2 2 2 1 1112 2 2 2 11111 1 112 2 2 2
16 2 2 2 2 1 1 1 1 1 1 1 1 2 2 2 2 11 1 1 2 2 2 2 2 2 2 2 11 1 1
17 1 1 2 2 1 1 2 2 1 1 2 2 1 12 2 11 2 2 112 2 1 1 2 2 11 2 2
18 2 2 1 1 2 2 1 1 2 2 1 1 2 2 112 2 1 1 2 2 112 2 112 2 1 1
19 1 1 2 2 1 1 2 2 1 1 2 2 1 12 2 2 2 1 1 2 2 112 2 112 2 1 1
20 2 2 1 1 2 2 1 1 2 2 1 1 2 2 1111 2 2 112 2 1 1 2 2 11 2 2
21 1 1 2 2 1 1 2 2 2 2 1 1 2 2 1111 2 2 112 2 2 2 112 2 1 1
22 2 2 1 1 2 2 1 1 1 1 2 2 1 12 2 2 2 1 1 2 2 111 1 2 2 11 2 2
23 1 1 2 2 1 1 2 2 2 2 1 1 2 2 112 2 1 1 2 2 111 1 2 2 11 2 2
24 2 2 1 1 2 2 1 1 1 1 2 2 1 12 2 11 2 2 112 2 2 2 112 2 1 1
25 1 1 2 2 2 2 1 1 1 1 2 2 2 2 1111 2 2 2 2 111 1 2 2 2 2 1 1
26 2 2 1 1 1 1 2 2 2 2 1 1 1 12 2 2 2 1 1 112 2 2 2 1111 2 2
27 1 1 2 2 2 2 1 1 1 1 2 2 2 2 112 2 1 1 112 2 2 2 1111 2 2
28 2 2 1 1 1 1 2 2 2 2 1 1 1 12 2 11 2 2 2 2 111 1 2 2 2 2 1 1
29 1 1 2 2 2 2 1 1 2 2 1 1 1 12 2 11 2 2 2 2 112 2 1111 2 2
30 2 2 1 1 1 1 2 2 1 1 2 2 2 2 112 2 1 1 112 2 1 1 2 2 2 2 1 1
31 1 1 2 2 2 2 1 1 2 2 1 1 1 12 2 2 2 1 1 112 2 1 1 2 2 2 2 1 1
32 2 2 1 1 1 1 2 2 1 1 2 2 2 2 1111 2 2 2 2 112 2 1111 2 2
33 1 2 1 2 1 2 1 2 1 2 1 2 1 2 12 12 1 2 12 12 1 2 12 12 1 2
34 2 1 2 1 2 1 2 1 2 1 2 1 2 12 12 1 2 1 2 12 12 1 2 12 1 2 1
35 1 2 1 2 1 2 1 2 1 2 1 2 1 2 12 2 1 2 1 2 12 12 1 2 12 1 2 1
36 2 1 2 1 2 1 2 1 2 1 2 1 2 12 112 1 2 12 12 1 2 12 12 1 2
37 1 2 1 2 1 2 1 2 2 1 2 1 2 12 112 1 2 12 12 2 1 2 12 1 2 1
38 2 1 2 1 2 1 2 1 1 2 1 2 1 2 12 2 1 2 1 2 12 11 2 12 12 1 2
39 1 2 1 2 1 2 1 2 2 1 2 1 2 12 12 1 2 1 2 12 11 2 12 12 1 2
40 2 1 2 1 2 1 2 1 1 2 1 2 1 2 12 12 1 2 12 12 2 1 2 12 1 2 1
(Continued)
Expl. Column
No. 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63
57 12212112122121121221211212212112
58 21121221211212212112122121121221
59 122 121 12122121 1221 12122121 121221
60 21121221211212211221211212212112
61 122121 1221 121221 122121 1221 121221
62 21 121221 122121 1221 121221 122121 12
63 12212112211212212112122112212112
64 21121221122121121221211221121221
L64 (4^21)
Expt. Column
No. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
3 1 1 1 1 1 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
4 1 1 1 1 1 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4
5 1 2 2 2 2 1 1 1 1 2 2 2 2 3 3 3 3 4 4 4 4
6 1 2 2 2 2 2 2 2 2 1 1 1 1 4 4 4 4 3 3 3 3
7 1 2 2 2 2 3 3 3 3 4 4 4 4 1 1 1 1 2 2 2 2
8 1 2 2 2 2 4 4 4 4 3 3 3 3 2 2 2 2 1 1 1 1
9 1 3 3 3 3 1 1 1 1 3 3 3 3 4 4 4 4 2 2 2 2
10 1 3 3 3 3 2 2 2 2 4 4 4 4 3 3 3 3 1 1 1 1
11 1 3 3 3 3 3 3 3 3 1 1 1 1 2 2 2 2 4 4 4 4
12 1 3 3 3 3 4 4 4 4 2 2 2 2 1 1 1 1 3 3 3 3
13 1 4 4 4 4 1 1 1 1 4 4 4 4 2 2 2 2 3 3 3 3
14 1 4 4 4 4 2 2 2 2 3 3 3 3 1 1 1 1 4 4 4 4
15 1 4 4 4 4 3 3 3 3 2 2 2 2 4 4 4 4 1 1 1 1
16 1 4 4 4 4 4 4 4 4 1 1 1 1 3 3 3 3 2 2 2 2
17 2 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4 1 2 3 4
18 2 1 2 3 4 2 1 4 3 2 1 4 3 2 1 4 3 2 1 4 3
19 2 1 2 3 4 3 4 1 2 3 4 1 2 3 4 1 2 3 4 1 2
20 2 1 2 3 4 4 3 2 1 4 3 2 1 4 3 2 1 4 3 2 1
21 2 2 1 4 3 1 2 3 4 2 1 4 3 3 4 1 2 4 3 2 1
22 2 2 1 4 3 2 1 4 3 1 2 3 4 4 3 2 1 3 4 1 2
23 2 2 1 4 3 3 4 1 2 4 3 2 1 1 2 3 4 2 1 4 3
24 2 2 1 4 3 4 3 2 1 3 4 1 2 2 1 4 3 1 2 3 4
25 2 3 4 1 2 1 2 3 4 3 4 1 2 4 3 2 1 2 1 4 3
26 2 3 4 1 2 2 1 4 3 4 3 2 1 3 4 1 2 1 2 3 4
27 2 3 4 1 2 3 4 1 2 1 2 3 4 2 1 4 3 4 3 2 1
28 2 3 4 1 2 4 3 2 1 2 1 4 3 1 2 3 4 3 4 1 2
29 2 4 3 2 1 1 2 3 4 4 3 2 1 2 1 4 3 3 4 1 2
30 2 4 3 2 1 2 1 4 3 3 4 1 2 1 2 3 4 4 3 2 1
31 2 4 3 2 I 3 4 1 2 2 1 4 3 4 3 2 1 1 2 3 4
32 2 4 3 2 1 4 3 2 1 1 2 3 4 3 4 1 2 2 1 4 3
33 3 1 3 4 2 1 3 4 2 1 3 4 2 1 3 4 2 1 3 4 2
34 3 1 3 4 2 2 4 3 1 2 4 3 1 2 4 3 1 2 4 3 1
35 3 1 3 4 2 3 1 2 4 3 1 2 4 3 1 2 4 3 1 2 4
36 3 1 3 4 2 4 2 1 3 4 2 1 3 4 2 1 3 4 2 1 3
37 3 2 4 3 1 1 3 4 2 2 4 3 1 3 1 2 4 4 2 1 3
38 3 2 4 3 1 2 4 3 1 1 3 4 2 4 2 1 3 3 1 2 4
39 3 2 4 3 1 3 1 2 4 4 1 1 3 1 3 4 2 2 4 3 1
40 3 2 4 3 1 4 2 1 3 3 2 2 4 2 4 3 1 1 3 4 2
(Continued)
Appendix C 315
Expl. Column
No. 123456789 10 11 12 13 14 15 16 17 18 19 20 21
41 3 3 12 4 13 4 2 3 1 2 4 4 2 1 3 2 4 3 1
42 3 3 12 4 2 4 3 1 4 2 1 3 3 1 2 4 1 3 4 2
43 3 3 12 4 3 12 4 1 3 4 2 2 4 3 1 4 2 1 3
44 3 3 12 4 4 2 13 2 4 3 1 1 3 4 2 3 1 2 4
45 3 4 2 13 13 4 2 4 2 1 3 2 4 3 1 3 1 2 4
46 3 4 2 13 2 4 3 1 3 1 2 4 1 3 4 2 4 2 1 3
47 3 4 2 13 3.124 2 4 3 1 4 2 1 3 1 3 4 2
48 3 4 2 13 4 2 13 1 3 4 2 3 1 2 4 2 4 3 1
49 4 14 2 3 14 2 3 1 4 2 3 1 4 2 3 1 4 2 3
50 4 14 2 3 2 3 14 2 3 1 4 2 3 1 4 2 3 1 4
51 4 14 2 3 3 2 4 1 3 2 4 1 3 2 4 1 3 2 4 1
52 4 14 2 3 4 13 2 4 1 3 2 4 1 3 2 4 1 3 2
53 4 2 3 14 14 2 3 2 3 1 4 3 2 4 1 4 1 3 2
54 4 2 3 14 2 3 14 1 4 2 3 4 1 3 2 3 2 4 1
55 4 2 3 14 3 2 4 1 4 1 3 2 1 4 2 3 2 3 1 4
56 4 2 3 14 4 13 2 3 2 4 1 2 3 1 4 1 4 2 3
57 4 3 2 4 1 14 2 3 3 2 4 1 4 1 3 2 2 3 1 4
58 4 3 2 4 12 3 14 4 1 3 2 3 2 4 1 1 4 2 3
59 4 3 2 4 13 2 4 1 1 4 2 3 2 3 1 4 4 1 3 2
60 4 3 2 4 14 13 2 2 3 1 4 1 4 2 3 3 2 4 1
61 4 4 13 2 14 2 3 4 1 3 2 2 3 1 4 3 2 4 1
62 4 4 13 2 2 3 14 3 2 4 1 1 4 2 3 4 1 3 2
63 4 4 13 2 3 2 4 1 2 3 1 4 4 1 3 2 1 4 2 3
64 4 4 13 2 4 13 2 1 4 2 3 3 2 4 1 2 3 1 4
L81 (3^40)
Expl. Column
No. 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2
3 1 1 1 1 1 1 1 1 1 3 3 3 3 3 3 3
4 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1
5 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
6 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3
7 3 3 3 3 3 3 3 3 3 1 1 1 1 1 1 1
8 3 3 3 3 3 3 3 3 3 2 2 2 2 2 2 2
9 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
10 2 2 2 1 1 1 2 2 2 3 3 3 1 1 1 2 2 2 3
11 2 2 2 1 1 1 2 2 2 3 3 3 2 2 2 3 3 3 1
12 2 2 2 1 1 1 2 2 2 3 3 3 3 3 3 1 1 1 2
13 2 2 2 2 2 2 3 3 3 1 1 1 1 1 1 2 2 2 3
14 2 2 2 2 2 2 3 3 3 1 1 1 2 2 2 3 3 3 1
15 2 2 2 2 2 2 3 3 3 1 1 1 3 3 3 1 1 1 2
16 2 2 ¦ 2 3 3 3 1 1 1 2 2 2 1 1 1 2 2 2 3
17 2 2 2 3 3 3 1 1 1 2 2 2 2 2 2 3 3 3 1
18 2 2 2 3 3 3 1 1 1 2 2 2 3 3 3 1 1 1 2
19 3 3 3 1 1 1 3 3 3 2 2 2 1 1 1 3 3 3 2
20 3 3 3 1 1 1 3 3 3 2 2 2 2 2 2 1 1 1 3
21 3 3 3 1 1 1 3 3 3 2 2 2 3 3 3 2 2 2 1
22 3 3 3 2 2 2 1 1 1 3 3 3 1 1 1 3 3 3 2
23 3 3 3 2 2 2 1 1 1 3 3 3 2 2 2 1 1 1 3
24 3 3 3 2 2 2 1 1 1 3 3 3 3 3 3 2 2 2 1
25 3 3 3 3 3 3 2 2 2 1 1 1 1 1 1 3 3 3 2
26 3 3 3 3 3 3 2 2 2 1 1 1 2 2 2 1 1 1 3
27 3 3 3 3 3 3 2 2 2 1 1 1 3 3 3 2 2 2 1
28 2 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1
29 2 2 3 1 2 3 1 2 3 1 2 3 2 3 1 2 3 1 2
30 2 2 3 1 2 3 1 2 3 1 2 3 3 1 2 3 1 2 3
31 2 2 3 2 3 1 2 3 1 2 3 1 1 2 3 1 2 3 1
32 2 2 3 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2
33 2 2 3 2 3 1 2 3 1 2 3 1 3 1 2 3 1 2 3
34 2 2 3 3 1 2 3 1 2 3 1 2 1 2 3 1 2 3 1
35 2 2 3 3 1 2 3 1 2 3 1 2 2 3 1 2 3 1 2
36 2 2 3 3 1 2 3 1 2 3 1 2 3 1 • 2 3 1 2 3
37 2 2 3 1 1 2 3 2 3 1 3 1 2 1 2 3 2 3 1 3
38 2 2 3 1 1 2 3 2 3 1 3 1 2 2 3 1 3 1 2 1
39 2 2 3 1 1 2 3 2 3 1 3 1 2 3 1 2 1 2 3 2
(Continued)
[Experiments 40 through 81 of the L81 (3^40) array, columns 1 through 20, appear here.]
L81 (3^40) (Continued)
Expt. Column
No. 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
4 1 1 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3
5 2 2 3 3 3 3 3 3 3 3 3 1 1 1 1 1 1 1 1 1
6 3 3 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2
7 1 1 3 3 3 3 3 3 3 3 3 2 2 2 2 2 2 2 2 2
8 2 2 1 1 1 1 1 1 1 1 1 3 3 3 3 3 3 3 3 3
9 3 3 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1
10 3 3 1 1 1 2 2 2 3 3 3 1 1 1 2 2 2 3 3 3
11 1 1 2 2 2 3 3 3 1 1 1 2 2 2 3 3 3 1 1 1
12 2 2 3 3 3 1 1 1 2 2 2 3 3 3 1 1 1 2 2 2
13 3 3 2 2 2 3 3 3 1 1 1 3 3 3 1 1 1 2 2 2
14 1 1 3 3 3 1 1 1 2 2 2 1 1 1 2 2 2 3 3 3
15 2 2 1 1 1 2 2 2 3 3 3 2 2 2 3 3 3 1 1 1
16 3 3 3 3 3 1 1 1 2 2 2 2 2 2 3 3 3 1 1 1
17 1 1 1 1 1 2 2 2 3 3 3 3 3 3 1 1 1 2 2 2
18 2 2 2 2 2 3 3 3 1 1 1 1 1 1 2 2 2 3 3 3
19 2 2 1 1 1 3 3 3 2 2 2 1 1 1 3 3 3 2 2 2
20 3 3 2 2 2 1 1 1 3 3 3 2 2 2 1 1 1 3 3 3
21 1 1 3 3 3 2 2 2 1 1 1 3 3 3 2 2 2 1 1 1
22 2 2 2 2 2 1 1 1 3 3 3 3 3 3 2 2 2 1 1 1
23 3 3 3 3 3 2 2 2 1 1 1 1 1 1 3 3 3 2 2 2
24 1 1 1 1 1 3 3 3 2 2 2 2 2 2 1 1 1 3 3 3
25 2 2 3 3 3 2 2 2 1 1 1 2 2 2 1 1 1 3 3 3
26 3 3 1 1 1 3 3 3 2 2 2 3 3 3 2 2 2 1 1 1
27 1 1 2 2 2 1 1 1 3 3 3 1 1 1 3 3 3 2 2 2
28 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3
29 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1
30 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2 3 1 2
31 2 3 2 3 1 2 3 1 2 3 1 3 1 2 3 1 2 3 1 2
32 3 1 3 1 2 3 1 2 3 1 2 1 2 3 1 2 3 1 2 3
33 1 2 1 2 3 1 2 3 1 2 3 2 3 1 2 3 1 2 3 1
34 2 3 3 1 2 3 1 2 3 1 2 2 3 1 2 3 1 2 3 1
35 3 1 1 2 3 1 2 3 1 2 3 3 1 2 3 1 2 3 1 2
36 1 2 2 3 1 2 3 1 2 3 1 1 2 3 1 2 3 1 2 3
37 1 2 1 2 3 2 3 1 3 1 2 1 2 3 2 3 1 3 1 2
38 2 3 2 3 1 3 1 2 1 2 3 2 3 1 3 1 2 1 2 3
39 3 1 3 1 2 1 2 3 2 3 1 3 1 2 1 2 3 2 3 1
(Continued)
Expt. Column
No. 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40
40 1 2 2 3 1 3 1 2 1 2 3 3 1 2 1 2 3 2 3 1
41 2 3 3 1 2 1 2 3 2 3 1 1 2 3 2 3 1 3 1 2
42 3 1 1 2 3 2 3 1 3 1 2 2 3 1 3 1 2 1 2 3
43 1 2 3 1 2 1 2 3 2 3 1 2 3 1 3 1 2 1 2 3
44 2 3 1 2 3 2 3 1 3 1 2 3 1 2 1 2 3 2 3 1
45 3 1 2 3 1 3 1 2 1 2 3 1 2 3 2 3 1 3 1 2
46 3 1 1 2 3 3 1 2 2 3 1 1 2 3 3 1 2 2 3 1
47 1 2 2 3 1 1 2 3 3 1 2 2 3 1 1 2 3 3 1 2
48 2 3 3 1 2 2 3 1 1 2 3 3 1 2 2 3 1 1 2 3
49 3 1 2 3 1 1 2 3 3 1 2 3 1 2 2 3 1 1 2 3
50 1 2 3 1 2 2 3 1 1 2 3 1 2 3 3 1 2 2 3 1
51 2 3 1 2 3 3 1 2 2 3 1 2 3 1 1 2 3 3 1 2
52 3 1 3 1 2 2 3 1 1 2 3 2 3 1 1 2 3 3 1 2
53 1 2 1 2 3 3 1 2 2 3 1 3 1 2 2 3 1 1 2 3
54 2 3 2 3 1 1 2 3 3 1 2 1 2 3 3 1 2 2 3 1
55 3 2 1 3 2 1 3 2 1 3 2 1 3 2 1 3 2 1 3 2
56 1 3 2 1 3 2 1 3 2 1 3 2 1 3 2 1 3 2 1 3
57 2 1 3 2 1 3 2 1 3 2 1 3 2 1 3 2 1 3 2 1
58 3 2 2 1 3 2 1 3 2 1 3 3 2 1 3 2 1 3 2 1
59 1 3 3 2 1 3 2 1 3 2 1 1 3 2 1 3 2 1 3 2
60 2 1 1 3 2 1 3 2 1 3 2 2 1 3 2 1 3 2 1 3
61 3 2 3 2 1 3 2 1 3 2 1 2 1 3 2 1 3 2 1 3
62 1 3 1 3 2 1 3 2 1 3 2 3 2 1 3 2 1 3 2 1
63 2 1 2 1 3 2 1 3 2 1 3 1 3 2 1 3 2 1 3 2
64 2 1 1 3 2 2 1 3 3 2 1 1 3 2 2 1 3 3 2 1
65 3 2 2 1 3 3 2 I 1 3 2 2 1 3 3 2 1 1 3 2
66 1 3 3 2 1 1 3 2 2 1 3 3 2 1 1 3 2 2 1 3
67 2 1 2 1 3 3 2 1 1 3 2 3 2 1 1 3 2 2 1 3
68 3 2 3 2 1 1 3 2 2 1 3 1 3 2 2 1 3 3 2 1
69 1 3 1 3 2 2 1 3 3 2 1 2 1 3 3 2 1 1 3 2
70 2 1 3 2 1 1 3 2 2 1 3 2 1 3 3 2 1 1 3 2
71 3 2 1 3 2 2 1 3 3 2 1 3 2 1 1 3 2 2 1 3
72 1 3 2 1 3 3 2 1 I 3 2 1 3 2 2 1 3 3 2 1
73 1 3 1 3 2 3 2 I 2 I 3 1 3 2 3 2 I 2 I 3
74 2 1 2 1 3 1 3 2 3 2 1 2 1 3 1 3 2 3 2 1
75 3 2 3 2 1 2 1 3 1 3 2 3 2 1 2 1 3 1 3 2
76 1 3 2 1 3 1 3 2 3 2 1 3 2 1 2 1 3 1 3 2
77 2 1 3 2 1 2 1 3 1 3 2 ] 3 2 3 2 1 2 1 3
78 3 2 1 3 2 3 2 1 2 1 3 2 1 3 1 3 2 3 2 1
79 1 3 3 2 1 2 1 3 1 3 2 2 1 3 1 3 2 3 2 1
80 2 1 1 3 2 3 2 1 2 1 3 3 2 I 2 1 3 I 3 2
81 3 2 2 1 3 1 3 2 3 2 1 1 3 2 3 2 1 2 1 3
REFERENCES
C3. Cochran, W. G. and Cox, G. M. Experimental Designs. New York: John Wiley
and Sons, 1957.
C4. Cohen, L. "Quality Function Deployment and Application Perspective from
Digital Equipment Corporation." National Productivity Review, vol. 7, no. 3
(Summer, 1988), pp. 197-208.
C5. Crosby, P. Quality is Free. New York: McGraw-Hill Book Co., 1979.
D1. Daniel, C. Applications of Statistics to Industrial Experimentation. New York:
John Wiley and Sons, 1976.
D2. Deming, W. E. Quality, Productivity, and Competitive Position. Cambridge:
Massachusetts Institute of Technology, Center for Advanced Engineering Study,
1982.
F1. Feigenbaum, A. V. Total Quality Control, 3rd Edition. New York: McGraw
Hill Book Company, 1983.
G1. Garvin, D. A. "What Does Product Quality Really Mean?" Sloan Management
Review, Fall 1984, pp. 25-43.
G2. Grant, E. L. Statistical Quality Control, 2nd Edition. New York: McGraw Hill
Book Co., 1952.
K1. Kackar, R. N. "Off-line Quality Control, Parameter Design and the Taguchi
Method." Journal of Quality Technology (Oct. 1985) vol. 17, no. 4, pp.
176-209.
P1. Pao, T. W., Phadke, M. S., and Sherrerd, C. S. "Computer Response Time
Optimization Using Orthogonal Array Experiments." IEEE International
Communications Conference. Chicago, IL (June 23-26, 1985) Conference Record,
vol. 2, pp. 890-895.
P2. Phadke, M. S. "Quality Engineering Using Design of Experiments." Proceedings
of the American Statistical Association, Section on Statistical Education (August
1982) Cincinnati, OH, pp. 11-20.
P3. Phadke, M. S. "Design Optimization Case Studies." AT&T Technical Journal
(March/April 1986) vol. 65, no. 2, pp. 51-68.
P4. Phadke, M. S. and Dehnad, K. "Optimization of Product and Process Design for
Quality and Cost." Quality and Reliability Engineering International (April-June
1988) vol. 4, no. 2, pp. 105-112.
P5. Phadke, M. S., Kackar, R. N., Speeney, D. V., and Grieco, M. J. "Off-Line Quality
Control in Integrated Circuit Fabrication Using Experimental Design." The Bell
System Technical Journal, (May-June 1983) vol. 62, no. 5, pp.
1273-1309.
P6. Phadke, M. S., Swann, D. W., and Hill, D. A. "Design and Analysis of an
Accelerated Life Test Using Orthogonal Arrays." Paper presented at the 1983
Annual Meeting of the American Statistical Association, Toronto, Canada.
P7. Phadke, M. S. and Taguchi, G. "Selection of Quality Characteristics and S/N
Ratios for Robust Design." Conference Record, GLOBECOM 87 Meeting, IEEE
Communications Society. Tokyo, Japan (November 1987) pp. 1002-1007.
P8. Plackett, R. L. and Burman, J. P. "The Design of Optimum Multifactorial
Experiments." Biometrika (1946) vol. 33, pp. 305-325.
P9. Proceedings of Supplier Symposia on Taguchi Methods, April 1984, November
1984, October 1985, October 1986, October 1987, and October 1988, American
Supplier Institute, Inc., 6 Parklane Blvd., Suite 411, Dearborn, MI 48126.
R1. Raghavarao, D. Constructions and Combinatorial Problems in Design of Experiments.
New York: John Wiley and Sons, 1971.
R2. Rao, C. R. "Factorial Experiments Derivable from Combinatorial Arrangements of
Arrays." Journal of Royal Statistical Society (1947) Series B, vol. 9, pp. 128-139.
R3. Rao, C. R. Linear Statistical Inference and Its Applications, 2nd Edition. New
York: John Wiley and Sons, Inc., 1973.
S1. Scheffe, H. Analysis of Variance. New York: John Wiley and Sons, Inc., 1959.
S2. Searle, S. R. Linear Models. New York: John Wiley and Sons, 1971.
S6. Sullivan, L. P. "Quality Function Deployment." Quality Progress (June 1986) pp.
39-50.
T1. Taguchi, G. Jikken Keikakuho, 3rd Edition. Tokyo, Japan: Maruzen, vol. 1 and
2, 1977 and 1978 (in Japanese). English translation: Taguchi, G. System of
Experimental Design, Edited by Don Clausing. New York: UNIPUB/Kraus
International Publications, vol. 1 and 2, 1987.
T2. Taguchi, G. "Off-line and On-Line Quality Control System." International
Conference on Quality Control. Tokyo, Japan, 1978.
T3. Taguchi, G. On-line Quality Control During Production. Tokyo, Japan:
Japanese Standards Association, 1981. (Available from the American Supplier
Institute, Inc., Dearborn, MI).
T4. Taguchi, G. Introduction to Quality Engineering. Asian Productivity
Organization, 1986. (Distributed by American Supplier Institute, Inc., Dearborn, MI).
T5. Taguchi, G. and Konishi, S. Orthogonal Arrays and Linear Graphs. Dearborn,
MI: ASI Press, 1987.
"Robust Design can be used to improve product quality, which includes better performance,
reduced manufacturing cost, and less development time. Its power comes from reducing the effects
of the causes of variation without having to eliminate the causes. We have had remarkable success
in using the method at AT&T to design numerous products and processes from diverse engineering
fields resulting in millions of dollars of savings. Dr. Phadke, who had been a vigorous leader in
implementing the method at AT&T, has described the method through real case studies making it
easy to read the book."
John S. Mayo, Executive Vice President
Network Systems.
AT&T Bell Laboratories
"Robust product design and robust factory processes are the most important improvements now
taking place in United States industries. They are key to further large improvements in quality, and
to the critical objective of reducing time to market. Dr. Phadke's book is the first written originally in
English and based on American experiences to explicitly address the systematic improvement of
robustness. This book fills a great need for an introduction to the development of robustness, and
should be read by executives of manufacturing companies. Dr. Phadke's book is a clear introduction
to a subject that is essential for competitiveness in today's international economy."
Don Clausing,
Bernard M. Gordon, Adjunct Professor of
Engineering Innovation and Practice,
Massachusetts Institute of Technology
ISBN 0-13-745167-9