QUALITY ENGINEERING USING ROBUST DESIGN

MADHAV S. PHADKE
AT&T Bell Laboratories

PTR Prentice Hall, Englewood Cliffs, New Jersey 07632

Library of Congress Cataloging-in-Publication Data
Phadke, Madhav Shridhar.
Quality engineering using robust design / Madhav S. Phadke.
ISBN 0-13-745167-9
1. Engineering design. 2. Computer-aided design. 3. UNIX (Computer operating system). 4. Integrated circuits--Very large scale integration. I. Title.
89-3927 CIP

© 1989 by AT&T Bell Laboratories
Published by PTR Prentice-Hall, Inc.
A Simon & Schuster Company
Englewood Cliffs, New Jersey 07632

All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher.

Printed in the United States of America

ISBN 0-13-745167-9

Prentice-Hall International (UK) Limited, London
Prentice-Hall of Australia Pty. Limited, Sydney
Prentice-Hall Canada Inc., Toronto
Prentice-Hall Hispanoamericana, S.A., Mexico
Prentice-Hall of India Private Limited, New Delhi
Prentice-Hall of Japan, Inc., Tokyo
Simon & Schuster Asia Pte. Ltd., Singapore
Editora Prentice-Hall do Brasil, Ltda., Rio de Janeiro

To my parents, and Maneesha, Kedar, and Lata.

CONTENTS

Foreword xi
Preface xv
Acknowledgments xvii

CHAPTER 1  INTRODUCTION 1
1.1 A Historical Perspective 2
1.2 What Is Quality? 3
1.3 Elements of Cost 4
1.4 Fundamental Principle 5
1.5 Tools Used in Robust Design 6
1.6 Applications and Benefits of Robust Design 8
1.7 Organization of the Book 10
1.8 Summary 10

CHAPTER 2  PRINCIPLES OF QUALITY ENGINEERING 13
2.1 Quality Loss Function—The Fraction Defective Fallacy 14
2.2 Quadratic Loss Function 18
2.3 Noise Factors—Causes of Variation 23
2.4 Average Quality Loss 25
2.5 Exploiting Nonlinearity 27
2.6 Classification of Parameters: P Diagram 30
2.7 Optimization of Product and Process Design 32
2.8 Role of Various Quality Control Activities 35
2.9 Summary 38

CHAPTER 3  MATRIX EXPERIMENTS USING ORTHOGONAL ARRAYS 41
3.1 Matrix Experiment for a CVD Process 42
3.2 Estimation of Factor Effects 44
3.3 Additive Model for Factor Effects 48
3.4 Analysis of Variance 51
3.5 Prediction and Diagnosis 59
3.6 Summary 63

CHAPTER 4  STEPS IN ROBUST DESIGN 67
4.1 The Polysilicon Deposition Process and Its Main Function 68
4.2 Noise Factors and Testing Conditions 71
4.3 Quality Characteristics and Objective Functions 72
4.4 Control Factors and Their Levels 74
4.5 Matrix Experiment and Data Analysis Plan 76
4.6 Conducting the Matrix Experiment 79
4.7 Data Analysis 80
4.8 Verification Experiment and Future Plan 90
4.9 Summary 93

CHAPTER 5  SIGNAL-TO-NOISE RATIOS 97
5.1 Optimization for Polysilicon Layer Thickness Uniformity 98
5.2 Evaluation of Sensitivity to Noise 105
5.3 S/N Ratios for Static Problems 108
5.4 S/N Ratios for Dynamic Problems 114
5.5 Analysis of Ordered Categorical Data 121
5.6 Summary 128

CHAPTER 6  ACHIEVING ADDITIVITY 133
6.1 Guidelines for Selecting Quality Characteristics 135
6.2 Examples of Quality Characteristics 136
6.3 Examples of S/N Ratios 138
6.4 Selection of Control Factors 144
6.5 Role of Orthogonal Arrays 146
6.6 Summary 146

CHAPTER 7  CONSTRUCTING ORTHOGONAL ARRAYS 149
7.1 Counting Degrees of Freedom 150
7.2 Selecting a Standard Orthogonal Array 151
7.3 Dummy Level Technique 154
7.4 Compound Factor Method 156
7.5 Linear Graphs and Interaction Assignment 157
7.6 Modification of Linear Graphs 163
7.7 Column Merging Method 166
7.8 Branching Design 168
7.9 Strategy for Constructing an Orthogonal Array 171
7.10 Comparison with the Classical Statistical Experiment Design 174
7.11 Summary 181

CHAPTER 8  COMPUTER AIDED ROBUST DESIGN 183
8.1 Differential Op-Amp Circuit 184
8.2 Description of Noise Factors 186
8.3 Methods of Simulating the Variation in Noise Factors 189
8.4 Orthogonal Array Based Simulation of Variation in Noise Factors 190
8.5 Quality Characteristic and S/N Ratio 194
8.6 Optimization of the Design 194
8.7 Tolerance Design 202
8.8 Reducing the Simulation Effort 205
8.9 Analysis of Nonlinearity 207
8.10 Selecting an Appropriate S/N Ratio 208
8.11 Summary 209

CHAPTER 9  DESIGN OF DYNAMIC SYSTEMS 213
9.1 Temperature Control Circuit and Its Function 214
9.2 Signal, Control, and Noise Factors 217
9.3 Quality Characteristics and S/N Ratios 218
9.4 Optimization of the Design 222
9.5 Iterative Optimization 227
9.6 Summary 228

CHAPTER 10  TUNING COMPUTER SYSTEMS FOR HIGH PERFORMANCE 231
10.1 Problem Formulation 232
10.2 Noise Factors and Testing Conditions 234
10.3 Quality Characteristic and S/N Ratio 235
10.4 Control Factors and Their Alternate Levels 236
10.5 Design of the Matrix Experiment and the Experimental Procedure 238
10.6 Data Analysis and Verification Experiments 240
10.7 Standardized S/N Ratio 246
10.8 Related Applications 249
10.9 Summary 249

CHAPTER 11  RELIABILITY IMPROVEMENT 253
11.1 Role of S/N Ratios in Reliability Improvement 254
11.2 The Routing Process 256
11.3 Noise Factors and Quality Characteristics 256
11.4 Control Factors and Their Levels 257
11.5 Design of the Matrix Experiment 258
11.6 Experimental Procedure 265
11.7 Data Analysis 265
11.8 Survival Probability Curves 271
11.9 Summary 275

APPENDIX A  ORTHOGONALITY OF A MATRIX EXPERIMENT 277
APPENDIX B  UNCONSTRAINED OPTIMIZATION 281
APPENDIX C  STANDARD ORTHOGONAL ARRAYS AND LINEAR GRAPHS 285
REFERENCES 321
INDEX 327

FOREWORD

The main task of a design engineer is to build in the function specified by the product planning people at a competitive cost. An engineer knows that all kinds of functions are energy transformations. Therefore, the product designer must identify what is input, what is output, and what is the ideal function while developing a new product. It is important to make the product's function as close to the ideal function as possible. Therefore, it is very important to measure correctly the distance of the product's performance from the ideal function. This is the main role of quality engineering. In order to measure the distance, we have to consider the following problems:

1. Identify the signal and noise space
2. Select several points from the space
3. Select an adequate design parameter to observe the performance
4. Consider possible calibration or adjustment methods
5. Select an appropriate measurement related to the mean distance

As most of those problems require engineering knowledge, a book on quality engineering must be written by a person who has enough knowledge of engineering. Dr. Madhav Phadke, a mechanical engineer, has worked at AT&T Bell Laboratories for many years and has extensive experience in applying the Robust Design method to problems from diverse engineering fields.
He has made many eminent and pioneering contributions in quality engineering, and he is one of the best qualified persons to author a book on quality engineering.

The greatest strength of this book is the case studies. Dr. Phadke presents four real instances where the Robust Design method was used to improve the quality and cost of products. Robust Design is universally applicable to all engineering fields. You will be able to use these case studies to improve the quality and cost of your products. This is the first book on quality engineering written in English by an engineer. The method described here has been applied successfully in many companies in Japan, the USA, and other countries. I recommend this book for all engineers who want to apply experimental design to actual product design.

G. Taguchi

PREFACE

Designing high-quality products and processes at low cost is an economic and technological challenge to the engineer. A systematic and efficient way to meet this challenge is a new method of design optimization for performance, quality, and cost. The method, called Robust Design, consists of

1. Making product performance insensitive to raw material variation, thus allowing the use of low-grade material and components in most cases,
2. Making designs robust against manufacturing variation, thus reducing labor and material cost for rework and scrap,
3. Making the designs least sensitive to the variation in operating environment, thus improving reliability and reducing operating cost, and
4. Using a new structured development process so that engineering time is used most productively.

All engineering designs involve setting values of a large number of decision variables. Technical experience, together with experiments through prototype hardware models or computer simulations, is needed to come up with the most advantageous decisions about these variables. Studying these variables one at a time or by trial and error is the common approach to the decision process. It leads to either a very long and expensive time span for completing the design or premature termination of the design process so that the product design is nonoptimal. This can mean missing the market window and/or delivering an inferior quality product at an inflated cost.

The Robust Design method uses a mathematical tool called orthogonal arrays to study a large number of decision variables with a small number of experiments. It also uses a new measure of quality, called signal-to-noise (S/N) ratio, to predict the quality from the customer's perspective. Thus, the most economical product and process design, from both the manufacturer's and the customer's viewpoints, can be accomplished at the smallest affordable development cost. Many companies, big and small, high-tech and low-tech, have found the Robust Design method valuable in making high-quality products available to customers at a low competitive price while still maintaining an acceptable profit margin.

This book will be useful to practicing engineers and engineering managers from all disciplines. It can also be used as a text in a quality engineering course for seniors and first-year graduate students. The method is explained through a series of real case studies, thus making it easy for readers to follow the method without the burden of learning detailed theory.
At AT&T, several colleagues and I have developed a two and a half day course on this topic, My experience in teaching the course ten times has convinced me that the case studies approach is the best one to communicate how to use the method in practice. The particular case studies used in this book relate to the fabri- cation of integrated circuits, circuit design, computer tuning, and mechanical routing. Although the book is written primarily for engineers, it can also be used by stat- isticians to study the wide range of applications of experimental design in quality engineering. This book differs from the available books on statistical experimental design in that it focuses on the engineering problems rather than on the statistical theory. Only those statistical ideas that are relevant for solving the broad class of pro- duct and process design problems are discussed in the book. Chapters 1 through 7 describe the necessary theoretical and practical aspects of the Robust Design method. The remaining chapters show a variety of applications from different engineering disciplines. ‘The best way for readers to use this book is, after reading each section, to determine how the concepts apply to their projects. My experience in teaching the method has revealed that many engineers like to see an application of the method in their own field. Chapters 8 through 11 describe case stud- ies from different engineering fields. It is hoped that these case studies will help readers see the breadth of the applicability of the Robust Design method and assist them in their own applications. Madhav S. Phadke AT&T Bell Laboratories Holmdel, NJ.ACKNOWLEDGMENTS I had the greatest fortune to leam the Robust Design methodology directly from its founder, Professor Genichi Taguchi. It is with the deepest gratitude that I acknowledge his inspiring work. My involvement in the Robust Design method began when Dr. Roshan Chaddha asked me to host Professor Taguchi's visit to AT&T Bell Labora- tories in 1980. I thank Dr. Chaddha (Bellcore, formerly with AT&T Bell Labs) for the invaluable encouragement he gave me during the early applications of the method in AT&T and also while writing this book. I also received valuable support and encouragement from Dr. E. W. Hinds, Dr. A. B. Godirey, Dr. R. E. Kerwin, and Mr. E, Fuchs in applying the Robust Design method to many different engineering fields which led to deeper understanding and enhancement of the method. Writing a book of this type needs a large amount of time. I am indebted to Ms. Cathy Savolaine for funding the project. I also thank Mr. J. V. Bodycomb and Mr. Larry Bernstein for supporting the project. ‘The case studies used in this book were conducted through collaboration with many colleagues, Mr. Gary Blaine, Mr. Dave Chrisman, Mr. Joe Leanza, Dr. T. W. Pao, Mr. C. S. Sherrerd, Dr. Peter Hey, and Mr. Paul Sherry. I am grateful to them for allowing me to use the case studies in the book. I also thank my colleagues, Mr. Don Speeney, Dr. Raghu Kackar, and Dr. Mike Grieco, who worked with me on the first Robust Design case study at AT&T. Through this case study, which resulted in huge improvements in the window photo- lithography process used in integrated circuits fabrication, I gained much insight into the Robust Design method. xvilvill ‘Acknowledgments I thank Mr, Rajiv Keny for numerous discussions on the organization of the book. 
A number of my colleagues read the draft of the book and provided me with valuable comments, Some of the people who provided the comments are: Dr. Don Clausing (M.LT.), Dr. A. M. Joglekar (Honeywell), Dr. C. W. Hoover, Jr. (Polytechnic University), Dr. Jim Pennell (IDA), Dr. Steve Eick, Mr. Don Speeney, Dr. M. Daneshmand, Dr. V. N. Nair, Dr. Mike Luvalle, Dr. Ajit S. Manocha, Dr. V. V. S. Rana, Ms. Cathy Hudson, Dr. Miguel Perez, Mr. Chris Sherrerd, Dr. M. H. Sherif, Dr. Helen Hwang, Dr. Vasant Prabhu, Ms. Valerie Partridge, Dr. Sachio Nakamura, Dr. K. Dehnad, and Dr. Gary Ulrich, I thank them all for their generous help in improving the content and readability of the book. I also thank Mr, Akira Tomishima (Yamatake-Honeywell), Dr. Mohammed Hamami, and Mr. Bruce Linick for helpful discussions on specific topics in the book. Thanks are also due to Mr. Yuin Wu (ASI) for valuable general discussions, [ very much appreciate the editorial help I received from Mr. Robert Wright and Ms. April Cormaci through the various stages of manuscript preparation. Also, I thank Ms. Eve Engel for coordinating text processing and the artwork during manuscript preparation, The text of this volume was prepared using the UNIX* operating system, 5.2.6a, and a LINOTRONIC® 300 was used to typeset the manuscript. Mr. Wright was responsible for designing the book format and coordinating production. Mr. Don Han kinson, Ms. Mari-Lynn Hankinson, and Ms, Marilyn Tomaino produced the final illus- trations and were responsible for the layout. Ms. Kathleen Attwooll, Ms, Sharon Mor- gan, and several members of the Holmdel Text Processing Center provided electronic text processingChapter 1 INTRODUCTION ‘The objective of engineering design, a major part of research and development (R&D), is to produce drawings, specifications, and other relevant information needed to manufacture products that meet customer requirements. Knowledge of scientific phenomena and past engineering experience with similar product designs and manufac- turing processes form the basis of the engineering design activity (see Figure 1.1). However, a number of new decisions related to the particular product must be made regarding product architecture, parameters of the product design, the process architec- ture, and parameters of the manufacturing process. A large amount of engineering effort is consumed in conducting experiments (either with hardware or by simulation) to generate the information needed to guide these decisions. Efficiency in generating such information is the key to meeting market windows, keeping development and manufacturing costs low, and having high-quality products. Robust Design is an ‘engineering methodology for improving productivity during research and development so that high-quality products can be produced quickly and at low cost. This chapter gives an overview of the basic concepts underlying the Robust Design methodology’ + Section 1.1 gives a brief historical background of the method. * Section 1.2 defines the term quality as itis used in this book. * Section 1.3 enumerates the basic elements of the cost of a product. * Section 1.4 describes the fundamental principle of the Robust Design methodol- ogy with the help of a manufacturing example.2 Introduction Chap. 1 Section 1.5 briefly describes the major tools used in Robust Design, * Section 1.6 presents some representative problems and the benefits of using the Robust Design method in addressing them + Section 1.7 gives a chapter-by-chapter outline of the rest of the book. 
* Section 1.8 summarizes the important points of this chapter.

In the subsequent chapters, we describe Robust Design concepts in detail and, through case studies, we show how to apply them.

[Figure 1.1: Block diagram of R&D activity. Customer requirements (desired function and usage environment, low cost, high quality, low failure cost), scientific knowledge (understanding of natural phenomena), and engineering knowledge (experience with previous designs and manufacturing processes) feed the R&D activity for product design and manufacturing.]

1.1 A HISTORICAL PERSPECTIVE

When Japan began its reconstruction efforts after World War II, it faced an acute shortage of good-quality raw material, high-quality manufacturing equipment, and skilled engineers. The challenge was to produce high-quality products and continue to improve the quality under those circumstances. The task of developing a methodology to meet the challenge was assigned to Dr. Genichi Taguchi, who at that time was a manager in charge of developing certain telecommunications products at the Electrical Communications Laboratories (ECL) of Nippon Telephone and Telegraph Company (NTT). Through his research in the 1950s and the early 1960s, Dr. Taguchi developed the foundations of Robust Design and validated its basic philosophies by applying them in the development of many products. In recognition of this contribution, Dr. Taguchi received the individual Deming Award in 1962, which is one of the highest recognitions in the quality field.

The Robust Design method can be applied to a wide variety of problems. The application of the method in electronics, automotive products, photography, and many other industries has been an important factor in the rapid industrial growth and the subsequent domination of international markets in these industries by Japan.

Robust Design draws on many ideas from statistical experimental design to plan experiments for obtaining dependable information about variables involved in making design decisions. The science of statistical experimental design originated with the work of Sir Ronald Fisher in England in the 1920s. Fisher founded the basic principles of experimental design and the associated data-analysis technique called analysis of variance (ANOVA) during his efforts to improve the yield of agricultural crops. The theory and applications of experimental design and the related technique of response surface methodology have been advanced by many statistical researchers. Today, many excellent textbooks on this subject exist, for example, Box, Hunter and Hunter [B3], Box and Draper [B2], Hicks [H2], John [J2], Raghavarao [R1], and Kempthorne [K4]. Various types of matrices are used for planning experiments to study several decision variables simultaneously. Among them, Robust Design makes heavy use of orthogonal arrays, whose use for planning experiments was first proposed by Rao [R2].

Robust Design adds a new dimension to statistical experimental design. It explicitly addresses the following concerns faced by all product and process designers:

+ How to reduce economically the variation of a product's function in the customer's environment. (Note that achieving a product's function consistently on target maximizes customer satisfaction.)
+ How to ensure that decisions found to be optimum during laboratory experiments will prove to be so in manufacturing and in customer environments In addressing these concems, Robust Design uses the mathematical formalism of sta tistical experimental design, but the thought process behind the mathematics is different in many ways. The answers provided by Robust Design to the two concems listed above make it a valuable tool for improving the productivity of the R&D activity. The Robust Design method is still evolving. With the active research being car- ried out in the United States, Japan, and other countries, it is expected that the applica- tions of the method and the method itself will grow rapidly in the coming decade. 1.2 WHAT IS QUALITY? Because the word quality means different things to different people (see, for example, Juran [13], Deming [D2], Crosby [C5], Garvin [G1], and Feigenbaum [F1}), we need to define its use in this book. First, let us define what we mean by the ideal quality4 Introduction Chap. 1 which can serve as a reference point for measuring the quality level of a product. The ideal quality a customer can expect is that every product delivers the target perfor- ‘mance each time the product is used, under all intended operating conditions, and throughout its intended life, with no harmful side effects. Note that the traditional con- cepts of reliability and dependability are part of this definition of quality. In specific situations, it may be impossible to produce a product with ideal quality. Nonetheless, ideal quality serves as a useful reference point for measuring the quality level. The following example helps clarify the definition of ideal quality. People buy automobiles for different purposes. Some people buy them to impress their friends while others buy them to show off their social status. To satisfy these diverse pur- poses, there are different types (species) of cars—sports cars, luxury cars, ete—on the market. For any type of car, the buyer always wants the automobile to provide reliable transportation. ‘Thus, for each type of car, an ideal quality automobile is one that works perfectly each time it is used (on hot summer days and cold winter days), throughout its intended life (not just the warranty life) and does not pollute the atmo- sphere. When a product's performance deviates from the target performance, its quality is considered inferior. The performance may differ from one unit to another or from one environmental condition to another, or it might deteriorate before the expiration of the intended life of the product. Such deviation in performance causes loss to the user of the product, the manufacturer of the product, and, in varying degrees, to the rest of the society as well. Following Taguchi, we measure the quality of a product in terms of the total loss to society due to functional variation and harmful side effects. Under the ideal quality, the loss would be zero; the greater the loss, the lower the quality. In the automobile example, if a car breaks down on the road, the driver would, at the least, be delayed in reaching his or her destination. The disabled car might be the cause of traffic jams or accidents. ‘The driver might have to spend money to have the car towed. If the car were under warranty, the manufacturer would have to pay for repairs. The concept of quality loss includes all these costs, not just the warranty cost. Quantifying the quality loss is difficult and is discussed in Chapter 2. 
Note that the definition of quality of a product can be easily extended to processes as well as services. AS a matter of fact, the entire discussion of the Robust Design method in this book is equally applicable for processes and services, though for simplicity, we do not state so each time. 1.3 ELEMENTS OF COST Quality at what cost? Delivering a high-quality product at low cost is an interdisci- plinary problem involving engineering, economics, statistics, and management. The three main categories of cost one must consider in delivering a product are:Sec. 1.4 Fundamental Principle 5 1, Operating Cost. Operating cost consists of the cost of energy needed to operate the product, environmental control, maintenance, inventory of spare pars and units, etc. Products made by different manufacturers can have different energy costs. If a product is sensitive to temperature and humidity, then elaborate and costly air conditioning and heating units are needed. A high product failure rate of a product causes large maintenance costs and costly inventory of spare units. ‘A manufacturer can greatly reduce the operating cost by designing the product robust—that is, minimizing the product's sensitivity to environmental and usage conditions, manufacturing variation, and deterioration of pars. 2. Manufacturing Cost. Important elements of manufacturing cost are equipment, machinery, raw materials, labor, scrap, rework, etc. In a competitive environ- ment, it is important to keep the unit manufacturing cost (ume) low by using low-grade material, employing less-skilled workers, and using less-expensive equipment, and at the same time maintain an appropriate level of quality. This is possible by designing the product robust, and designing the manufacturing pro- cess robust—that is, minimizing the process’ sensitivity to manufacturing distur- bances. 3. R&D Cost. The time taken to develop a new product plus the amount of engineering and laboratory resources needed are the major elements of R&D cost. The goal of R&D activity is to keep the ume and operating cost low. Robust Design plays an important role in achieving this goal because it improves the efficiency of generating information needed to design products and processes, thus reducing development time and resources needed for development. Note that the manufacturing cost and R&D cost are incurred by the producer and then passed on to the customer through the purchase price of the product. The operat- ing cost, which is also called usage cost, is borne directly by the customer and it is directly related to the product's quality. From the customer's point of view, the pur- chase price plus the operating cost determine the economics of satisfying the need for which the product is bought. Higher quality means lower operating cost and vice versa. Robust Design is a systematic method for keeping the producer’s cost low while delivering a high-quality product, that is, while keeping the operating cost low. 1.4 FUNDAMENTAL PRINCIPLE ‘The key idea behind Robust Design is illustrated by the experience of Ina Tile Com- pany, described in detail in Taguchi and Wu [T7]. During the late 1950s, Ina Tile ‘Company in Japan faced the problem of high variability in the dimensions of the tiles it produced [see Figure 1.2(a)]. 
Because screening (rejecting those tiles outside specified dimensions) was an expensive solution, the company assigned a team of expert engineers to investigate the cause of the problem, The team’s analysis showed that the tiles at the center of the pile inside the kiln (see Figure 1.2 (b)] experienced lower temperature than those on the periphery. This nonuniformity of temperature dis- tribution proved to be the cause of the nonuniform tile dimensions. The team reported6 Introduction Chap. 1 that it would cost approximately half a million dollars to redesign and build a kiln in which all the tiles would receive uniform temperature distribution. Although this alter- native was less expensive than screening it was still too costly. The team then brainstormed and defined a number of process parameters that could be changed easily and inexpensively. After performing a small set of well- planned experiments according to Robust Design methodology, the team concluded that increasing the lime content of the clay from I percent to 5 percent would greatly reduce the variation of the tile dimensions. Because lime was the least expensive ingredient, the cost implication of this change was also favorable. ‘Thus, the problem of nonuniform tile dimensions was solved by minimizing the effect of the cause of the variation (nonuniform temperature distribution) without con- trolling the cause itself (the kiln design). As illustrated by this example, the fundamen- tal principle of Robust Design is to improve the quality of a product by minimizing the effect of the causes of variation without eliminating the causes. This is achieved by optimizing the product and process designs to make the performance minimally sensi- tive to the various causes of variation. This is called parameter design. However, parameter design alone does not always lead to sufficiently high quality. Further improvement can be obtained by controlling the causes of variation where economi- cally justifiable, typically by using more expensive equipment, higher grade com- ponenis, better environmental controls, efc., all of which lead to higher product cost, or operating cost, or both. The benefits of improved quality must justify the added prod- uct cost. 1.5 TOOLS USED IN ROBUST DESIGN ‘A great deal of engineering time is spent generating information about how different design parameters affect performance under different usage conditions. Robust Design methodology serves as an “amplifier’—that is, it enables an engineer to generate infor- ‘mation needed for decision-making with half (or even less) the experimental effort. ‘There are two important tasks to be performed in Robust Design: 1, Measurement of Quality During Design/Development. We want a leading indi- cator of quality by which we can evaluate the effect of changing a particular design parameter on the product’s performance. 2. Efficient Experimentation to Find Dependable Information about the Design Parameters. It is critical to obtain dependable information about the design parameters so that design changes during manufacturing and customer use can be avoided. Also, the information should be obtained with minimum time and resources. The estimated effects of design parameters must be valid even when other param- eters are changed during the subsequent design effort or when designs of related sub- systems change. This can be achieved by employing the signal-to-noise (SIN) ratio to measure quality and orthogonal arrays to study many design parameters simultane- ‘ously. 
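As a concrete preview of these two tools, the short sketch below builds the smallest standard orthogonal array, L4 (four runs for up to three two-level factors), verifies its defining balance property, and evaluates one common static form of the S/N ratio, the nominal-the-best measure 10 log10(mean^2/variance), for a set of repeated observations. The observation values and helper names are illustrative only; the book's own arrays and S/N ratio definitions are developed in Chapters 3, 5, and 7.

import math
from itertools import combinations

# The standard L4 orthogonal array: 4 runs for up to three two-level factors.
# In every pair of columns, each of the four level combinations appears exactly once.
L4 = [
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]

def is_orthogonal(array):
    """Check the balance property: every pair of columns contains each level pair equally often."""
    n_cols = len(array[0])
    for i, j in combinations(range(n_cols), 2):
        pairs = [(row[i], row[j]) for row in array]
        counts = {p: pairs.count(p) for p in set(pairs)}
        if len(counts) != 4 or len(set(counts.values())) != 1:
            return False
    return True

def sn_nominal_the_best(observations):
    """Nominal-the-best S/N ratio in decibels: 10 * log10(mean^2 / variance)."""
    n = len(observations)
    mean = sum(observations) / n
    variance = sum((y - mean) ** 2 for y in observations) / (n - 1)
    return 10.0 * math.log10(mean ** 2 / variance)

if __name__ == "__main__":
    print("L4 satisfies the orthogonality (balance) property:", is_orthogonal(L4))
    # Hypothetical repeated observations of a response for one experimental run.
    y = [14.8, 15.1, 15.0, 14.7, 15.4]
    print("Nominal-the-best S/N ratio: %.2f dB" % sn_nominal_the_best(y))

A larger S/N ratio indicates that the mean response dominates the unit-to-unit variation, and the balance property is what allows the effect of each column (factor) to be estimated independently of the settings of the others.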
These tools are described later in this book.

[Figure 1.2: Tile manufacturing example. (a) Probability distribution of tile dimensions. (b) Schematic diagram of the kiln.]

1.6 APPLICATIONS AND BENEFITS OF ROBUST DESIGN

The Robust Design method is in use in many areas of engineering throughout the United States. For example, AT&T's use of Robust Design methodology has led to improvement of several processes in very large scale integrated (VLSI) circuit fabrication used in the manufacture of 1-megabit and 256-kilobit memory chips, 32-bit processor chips, and other products. Some of the VLSI applications are:

* The window photolithography application (documented in Phadke, Kackar, Speeney, and Grieco [P5]) was the first application in the United States that demonstrated the power of Taguchi's approach to quality and cost improvement through robust process design. In particular, the benefits of the application were:
  - 4-fold reduction in process variance
  - 3-fold reduction in fatal defects
  - 2-fold reduction in processing time (because the process became stable, allowing time-consuming inspection to be dropped)
  - Easy transition of design from research to manufacturing
  - Easy adaptation of the process to finer-line technology (adaptation from 3.5-micron to 2.5-micron technology), which is typically a very difficult problem.
* The aluminum etching application originated from a belief that poor photoresist print quality leads to line width loss and to undercutting during the etching process. By making the etching process insensitive to photoresist profile variation and other sources of variation, the visual defects were reduced from 80 percent to 15 percent. Moreover, the etching step could then tolerate the variation in the photoresist profile.
* The reactive ion etching of tantalum silicide (described in Katz and Phadke [K3]) used to give highly nonuniform etch quality, so only 12 out of 18 possible wafer positions could be used for production. After optimization, 17 wafer positions became usable, a hefty 40 percent increase in machine utilization. Also, the efficiency of the orthogonal array experimentation allowed this project to be completed by the 20-day deadline. In this case, $1.2 million was saved in equipment replacement costs, not including the expense of disruption on the factory floor.
* The polysilicon deposition process had between 10 and 5000 surface defects per unit area. As such, it represented a serious roadblock in advancing to line widths smaller than 1.75 micron. Six process parameters were investigated with 18 experiments, leading to consistently less than 10 surface defects per unit area. As a result, the scrap rate was reduced significantly and it became possible to process smaller line widths. This case study is described in detail in Chapter 4.

Other AT&T applications include:

* The router bit life-improvement project (described in Chapter 11 and Phadke [P3]) led to a 2-fold to 4-fold increase in the life of router bits used in cutting printed wiring boards. The project illustrates how reliability or life improvement projects can be organized to find the best settings of the routing process parameters with a very small number of samples.
The number of samples needed in this approach is very small, yet it can give valuable information about how each parameter changes the survival probability curve (change in survival probability as a function of time) + In the differential operational amplifier circuit optimization application (described in Chapter 8 and Phadke [P3]), a 40-percent reduction in the root mean square (rms) offset voltage was realized by simply finding new nominal values for the circuit parameters. ‘This was done by reducing sensitivity to all tolerances and temperature, rather than reducing tolerances, which could have increased manufacturing cost. * The Robust Design method was also used to find optimum proportions of ingredients for making water-soluble flux. By simultaneous study of the parame- ters for the wave soldering process and the flux composition, the defect rate was reduced by 30 to 40 percent (see Lin and Kackar [L3)). * Orthogonal array experiments can be used to tune hardwarelsoftware systems. By simultaneous study of three hardware and six software parameters, the response time of the UNIX operating system was reduced 60 percent for a partic~ ular set of load conditions experienced by the machine (see Chapter 10 and Pao, Phadke, and Sherrerd (P1)) Under the leadership of American Supplier Institute and Ford Motor Company, a number of automotive suppliers have achieved quality and cost improvement through Robust Design. These applications include improvements in metal casting, injection molding of plastic parts, wave soldering of electronic components, speedometer cable design, integrated circuit chip bonding, and picture tube lens coating. Many of these applications are documented in the Proceedings of Supplier Symposia on Taguchi Methods [P9] Al these examples show that the Robust Design methodology offers simultane- ous improvement of product quality, performance and cost, and engineering produc- tivity. Its widespread use in industry will have a far-reaching economic impact because this methodology can be applied profitably in all engineering activities, including prod- uct design and manufacturing process design. The philosophy behind Robust Design is not limited to engineering applications. Yokoyama and Taguchi [Y1] have also shown its applications in profit planning in business, cash-flow optimization in banking, government policymaking, and other areas. The method can also be used for tasks such as determining optimum work force mix for jobs where the demand is random, and improving the runway utilization at an airport.10 Introduction Chap. 1 1.7 ORGANIZATION OF THE BOOK This book is divided into three parts. The first part (Chapters 1 through 4) describes the basics of the Robust Design methodology. Chapter 2 describes the quality loss function, which gives a quantitative way of evaluating the quality level of a product, rather than just the "good-bad” characterization. After categorizing the sources of vari- ation, the chapter further describes the steps in engineering design and the classification ‘of parameters affecting the product’s function Quality control activities during di ferent stages of the product realization process are also described there. Chapter 3 is devoted to orthogonal array experiments and basic analysis of the data obtained through such experiments. Chapter 4 illustrates the entire strategy of Robust Design through an integrated circuit (IC) process design example. 
The strategy begins with problem formulation and ends with verification experiment and implementation. This case study could be used as a model in planning and carrying out manufacturing pro- cess optimization for quality, cost, and manufacturability. ‘The example also has the basic framework for optimizing a product design. ‘The second part of the book (Chapters 5 through 7) describes, in detail, the tech- niques used in Robust Design. Chapter 5 describes the concept of signal-to-noise ratio and gives appropriate signal-to-noise ratios for a number of common engineering prob- lems. Chapter 6 is devoted to a critical decision in Robust Design: choosing an appropriate response variable, called quality characteristic, for measuring the quality of a product or a process. The guidelines for choosing quality characteristics are illus- trated with examples from many different engineering fields. A step-by-step procedure for designing orthogonal array experiments for a large variety of industrial problems is given in Chapter 7. The third part of the book (Chapters 8 through 11) describes four more case studies to illustrate the use of Robust Design in a wide variety of engineering discip- lines. Chapter 8 shows how the Robust Design method can be used to optimize prod- uct design when computer simulation models are available, The differential operational amplifier case study is used to illustrate the optimization procedure. This chapter also shows the use of orthogonal arrays to simulate the variation in component values and environmental conditions, and thus estimate the yield of a product. Chapter 9 shows the procedure for designing an ON-OFF control system for a temperature controller. ‘The use of Robust Design for improving the performance of a hardware-software sys- tem is described in Chapter 10 with the help of the UNIX operating system tuning case study. Chapter 11 describes the router bit life study and explains how Robust Design can be used to improve reliability. 1.8 SUMMARY ‘+ Robust Design is an engineering methodology for improving productivity during research and development so that high-quality products can be produced quickly and at low cost.” Its use can greatly improve an organization's ability to meet market windows, keep development and manufacturing costs low, and deliver high-quality products.Sec. 1.8 Summary 1” + Through his research in the 1950s and early 1960s, Dr. Genichi Taguchi developed the foundations of Robust Design and validated the basic, underlying philosophies by applying them in the development of many products, ‘+ Robust Design uses many ideas from statistical experimental design and adds a new dimension to it by explicitly addressing two major concems faced by all product and process designers: a. How to reduce economically the variation of a product's function in the customer's environment. b. How to ensure that decisions found optimum during laboratory experiments will prove to be so in manufacturing and in customer environments + The ideal quality a customer can receive is that every product delivers the target performance each time the product is used, under all intended operating condi- tions, and throughout the product's intended life, with no harmful side effects The deviation of a product's performance from the target causes loss to the user of the product, the manufacturer, and, in varying degrees, to the rest of society as well. 
The quality level of a product is measured in terms of the total loss to the society due to functional variation and harmful side effects. + The three main categories of cost one must consider in delivering a product are: (1) operating cost: the cost of energy, environmental control, maintenance, inven- tory of spare parts, etc. (2) manufacturing cost: the cost of equipment, machinery, raw materials, labor, scrap, network, etc. (3) R&D cost: the time taken to develop a new product plus the engineering and laboratory resources needed. * The fundamental principle of Robust Design is to improve the quality of a prod- uct by minimizing the effect of the causes of variation without eliminating the causes. This is achieved by optimizing the product and process designs to make the performance minimally sensitive to the various causes of variation, a process called parameter design * The two major tools used in Robust Design are: (1) signal-to-noise ratio, which measures quality and (2) orthogonal arrays, which are used to study many design parameters simultaneously. * The Robust Design method has been found valuable in virtually all engineering fields and business applicationsChapter 2 PRINCIPLES OF QUALITY ENGINEERING A product's life eycle can be divided into two main parts: before sale to the customer and after sale to the customer. All costs incurred prior to the sale of the product are added to the unit manufacturing cost (umc), while all costs incurred after the sale are lumped together as quality loss. Quality engineering is concerned with reducing both of these costs and, thus, is an interdisciplinary science involving engineering design, manufacturing operations, and economics, It is offen said that higher quality (lower quality loss) implies higher unit manufacturing cost. Where does this misconception come from? It arises because engineers and managers, unaware of the Robust Design method, tend to achieve higher quality by using more costly parts, components, and manufacturing processes. In this chapter we delineate the basic principles of quality engineering and put in perspective the role of Robust Design in reducing the quality loss as well as the ume. This chapter contains nine sections * Sections 2.1 and 2.2 are concerned with the quantification of quality loss. Sec- tion 2.1 describes the shortcomings of using fraction defective as a measure of quality loss. (This is the most commonly used measure of quality loss.) Sec- tion 2.2 describes the quadratic loss function, which is a superior way of quanti fying quality loss in most situations. * Section 2.3 describes the various causes, called noise factors, that lead to the deviation of a product’s function from its target. 18“ Principles of Quality Engineering Chap. 2 * Section 2.4 focuses on the computation of the average quality loss, its com- ponents, and the relationship of these components to the noise factors, * Section 2.5 describes how Robust Design exploits nonlinearity to reduce the average quality loss without increasing ume * Section 2.6 describes the classification of parameters, an important activity in quality engineering for recognizing the different roles played by the various parameters that affect a product's performance. * Section 2.7 discusses different ways of formulating product and process design ‘optimization problems and gives a heuristic solution. + Section 2.8 addresses the various stages of the product realization process and the role of various quality control activities in these stages. 
* Section 2.9 summarizes the important points of this chapter.

Various aspects of quality engineering are described in the following references: Taguchi [T2], Taguchi and Wu [T7], Phadke [P2], Taguchi and Phadke [T6], Kackar [K1, K2], Taguchi [T4], Clausing [C1], and Byrne and Taguchi [B4].

2.1 QUALITY LOSS FUNCTION—THE FRACTION DEFECTIVE FALLACY

We have defined the quality level of a product to be the total loss incurred by society due to the failure of the product to deliver the target performance and due to harmful side effects of the product, including its operating cost. Quantifying this loss is difficult because the same product may be used by different customers, for different applications, under different environmental conditions, etc. However, it is important to quantify the loss so that the impact of alternative product designs and manufacturing processes on customers can be evaluated and appropriate engineering decisions made. Moreover, it is critical that the quantification of loss not become a major task that consumes substantial resources at various stages of product and process design.

It is common to measure quality in terms of the fraction of the total number of units that are defective. This is referred to as fraction defective. Although commonly used, this measure of quality is often incomplete and misleading. It implies that all products that meet the specifications (allowable deviations from the target response) are equally good, while those outside the specifications are bad. The fallacy here is that the product that barely meets the specifications is, from the customer's point of view, as good or as bad as the product that is barely outside the specifications. In reality, the product whose response is exactly on target gives the best performance. As the product's response deviates from the target, the quality becomes progressively worse.

Example—Television Set Color Density:

The deficiency of fraction defective as a quality measure is well illustrated by the Sony television customer preference study published by the Japanese newspaper The Asahi [T8]. In the late 1970s, American consumers showed a preference for the television sets made by Sony-Japan over those made by Sony-USA; the attribute at issue in the study was quality. Both factories, however, made televisions using identical designs and tolerances. What, then, could account for the perceived difference in quality?

In its investigative report, the newspaper showed the distribution of color density for the sets made by the two factories (see Figure 2.1). In the figure, m is the target color density and m ± 5 are the tolerance limits (allowable manufacturing deviations). The distribution for the Sony-Japan factory was approximately normal with mean on target and a standard deviation of 5/3. The distribution for Sony-USA was approximately uniform in the range m ± 5. Among the sets shipped by Sony-Japan, about 0.3 percent were outside the tolerance limits, while Sony-USA shipped virtually no sets outside the tolerance limits. Thus, the difference in customer preference could not be explained in terms of the fraction of defective sets.

[Figure 2.1: Distribution of color density for sets made by Sony-USA (approximately uniform) and Sony-Japan (approximately normal), with grades A, B, and C indicated (The Asahi, 1979).]

The perceived difference in quality becomes clear when we look closely at the sets that met the tolerance limits. Sets with color density very near m perform best and can be classified grade A. As the color density deviates from m, the performance becomes progressively worse, as indicated in Figure 2.1 by grades B and C.
It is clear that Sony-Japan produced many more grade A sets and many fewer grade C sets when compared to Sony-USA. Thus, the average grade of sets produced by Sony-Japan was better, hence the customers' preference for the sets made by Sony-Japan. In short, the difference in the customers' perception of quality was a result of Sony-USA paying attention only to meeting the tolerances, whereas in Sony-Japan the attention was focused on meeting the target.

Example—Telephone Cable Resistance:

Using a wrong measurement system can, and often does, drive the behavior of people in wrong directions. The telephone cable example described here illustrates how using fraction defective as a measure of quality loss can permit suboptimization by the manufacturer, leading to an increase in the total cost, which is the sum of quality loss and umc.

A certain gauge of copper wire used in telephone cables had a nominal resistance value of m ohms/mile, and the maximum allowed resistance was (m + Δ0) ohms/mile. This upper limit was determined by taking into consideration the manufacturing capability, represented by distribution (a) in Figure 2.2, at the time the specifications were written. Consequently, the upper limit (m + Δ0) was an adequate way to ensure that the drawing process used to form the copper wire was kept in control with the mean on target.

[Figure 2.2: Distribution of telephone cable resistance (ohms/mile), with the axis marked at m − Δ0, m, and m + Δ0. (a) Initial distribution. (b) After process improvement and shifting the mean.]

By improving the wire drawing process through the application of new technology, the manufacturer was able to reduce substantially the process variance. This permitted the manufacturer to move the mean close to the upper limit and still meet the fraction defective criterion for quality [see distribution (b) in Figure 2.2]. At the same time, the manufacturer saved on the cost of copper, since larger resistance implies a smaller cross section of the wire. However, from the network point of view, the larger average resistance resulted in high electrical loss, causing complaints from the telephone users. Solving the problem in the field meant spending far more money on installing additional repeaters and on other corrective actions than the money saved in manufacturing; that is, the increase in the quality loss far exceeded the saving in the umc. Thus, there was a net loss to the society consisting of both the manufacturer and the telephone company that offered the service. Therefore, a quality loss metric that permits such local optimization leading to higher total cost should be avoided. Section 2.2 discusses a better way to measure the quality loss.

Interpretation of Engineering Tolerances

The examples above bring out an important point regarding quantification of quality loss. Products that do not meet tolerances inflict a quality loss on the manufacturer, a loss visible in the form of scrap or rework in the factory, which the manufacturer adds to the cost of the product. However, products that meet tolerances also inflict a quality loss, a loss that is visible to the customer and that can adversely affect the sales of the product and the reputation of the manufacturer. Therefore, the quality loss function must also be capable of measuring the loss due to products that meet the tolerances. Engineering specifications are invariably written as m ± Δ0.
These specifications should not be interpreted to mean that any value in the range (m − Δ0) to (m + Δ0) is equally good for the customer and that as soon as the range is exceeded the product is bad. In other words, the step function shown below and in Figure 2.3(a) is an inadequate way to quantify the quality loss:

L(y) = 0     if |y − m| ≤ Δ0
L(y) = A0    otherwise

where A0 denotes the loss (for example, the cost of repair or replacement) incurred when the product falls outside the specification limits.
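Section 2.2 replaces this step function with the quadratic loss function, which takes the form L(y) = k(y - m)^2, so that loss grows steadily with deviation from the target even inside the tolerance limits. The sketch below contrasts the two measures using the color-density example above (target m, tolerance m ± 5, Sony-Japan approximately normal with standard deviation 5/3, Sony-USA approximately uniform over m ± 5); the target value, the choice of k, and the simulation itself are illustrative assumptions, not numbers from the book.

import random
import statistics

random.seed(0)

M = 100.0       # target color density (illustrative number, not from the book)
DELTA_0 = 5.0   # tolerance limits are m +/- 5, as in the text
A_0 = 1.0       # loss assigned by the step function outside the limits (illustrative)
K = A_0 / DELTA_0 ** 2   # quadratic coefficient chosen so both losses agree at the limits

def step_loss(y):
    """Step-function loss: zero inside the tolerance limits, A_0 outside."""
    return 0.0 if abs(y - M) <= DELTA_0 else A_0

def quadratic_loss(y):
    """Quadratic loss function L(y) = k * (y - m)^2."""
    return K * (y - M) ** 2

# Shipped sets from the two factories, as characterized in the text.
japan = [random.gauss(M, DELTA_0 / 3.0) for _ in range(100_000)]          # normal, mean on target
usa = [random.uniform(M - DELTA_0, M + DELTA_0) for _ in range(100_000)]  # uniform over m +/- 5

for name, sets in [("Sony-Japan", japan), ("Sony-USA", usa)]:
    avg_step = statistics.fmean(step_loss(y) for y in sets)
    avg_quad = statistics.fmean(quadratic_loss(y) for y in sets)
    print(f"{name:10s}  average step-function loss: {avg_step:.4f}   "
          f"average quadratic loss: {avg_quad:.3f}")

Under the step function (equivalently, the fraction defective criterion) Sony-USA looks at least as good, but its average quadratic loss comes out roughly three times that of Sony-Japan, which is consistent with the customer preference reported in the study.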
In the power supply circuit example of Section 2.5, the mean output voltage can be brought back on target by adjusting the nominal value of the dividing resistance. Because of linearity, this change in resistance has a negligible effect on the variation of the output voltage. Thus, we can achieve a large reduction in the variation of the output voltage by simply changing the nominal values of the transistor gain and the dividing resistor. This change, however, does not change the manufacturing cost of the circuit. Thus, by exploiting nonlinearity we can reduce the quality loss without increasing the product cost.

If the requirements on the variation of the output voltage were even tighter due to a large quality loss associated with the deviation of the output voltage from the target, the tolerance on the gain could be tightened as economically justifiable. Thus, the variance of the output voltage can be reduced by two distinct actions:

1. Move the nominal value of the gain so that the output voltage is less sensitive to the tolerance on the gain, which is noise.
2. Reduce the tolerance on the gain to control the noise.

Genichi Taguchi refers to action 1 as parameter design and action 2 as tolerance design. Typically, no manufacturing cost increase is associated with changing the nominal values of product parameters (parameter design). However, reducing tolerances (tolerance design) leads to higher manufacturing cost.

Managing the Economics of Quality Improvement

It is obvious from the preceding discussion that to minimize the total cost, which consists of the unit manufacturing cost and quality loss, we must first carry out parameter design. Next, during tolerance design, we should adjust the tolerances to strike an economic balance between reduction in quality loss and increase in manufacturing cost. This strategy for minimization of the total cost is a more precise statement of the fundamental principle of Robust Design described in Chapter 1.

Engineers and managers who are unaware of the benefits of designing robust products and of the Robust Design methodology tend to use more costly parts, components, and manufacturing processes to improve quality without first obtaining the most benefit out of parameter design. As a result, they miss the opportunity to improve quality without increasing manufacturing cost. This leads to the misconception that higher quality always means higher umc.

The average quality loss evaluation described in Section 2.4 can be used to justify the investment in quality improvement for a specific product or process. The investment consists of two parts: the R&D cost associated with parameter design (this cost should be normalized by the projected sales volume) and the umc associated with tolerance design. The role of the quadratic loss function and the average quality loss evaluation in managing the economics of continuous quality improvement is discussed in detail by Sullivan [S5].

Although parameter design may not increase the umc, it is not necessarily free of cost. It needs an R&D budget to explore the nonlinear effects of the various control factors. By using the techniques of orthogonal arrays and signal-to-noise ratios, which are an integral part of the Robust Design method, one can greatly improve R&D efficiency when compared to the present practice of studying one control factor at a time or using an ad hoc method of finding the best values of many control factors simultaneously. Thus, by using the Robust Design method, there is potential to also reduce the total R&D cost.
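The two actions can be imitated numerically. The sketch below uses a purely hypothetical saturating gain-to-voltage curve standing in for the circuit's real characteristic (which is developed in the full text of Section 2.5), with the dividing resistance represented by a linear scale factor that puts the nominal output on target. It compares the output spread when the nominal gain sits on the steep part of the curve, when it is moved to the flat part (parameter design), and when the tolerance is tightened instead (tolerance design); every number in it is an assumption made for illustration.

import random
import statistics

random.seed(1)

TARGET_V = 110.0   # hypothetical target output voltage

def transfer(gain):
    """Hypothetical saturating gain-to-voltage characteristic: steep at low gain, flat at high gain."""
    return gain / (20.0 + gain)

def simulate(nominal_gain, rel_tolerance, n=20_000):
    """Monte Carlo of the output voltage when the gain varies uniformly within its tolerance.
    The dividing resistance is represented by a linear scale factor chosen so that the
    output at the nominal gain sits on target (the adjustment that linearity makes cheap)."""
    scale = TARGET_V / transfer(nominal_gain)
    outputs = []
    for _ in range(n):
        gain = nominal_gain * random.uniform(1.0 - rel_tolerance, 1.0 + rel_tolerance)
        outputs.append(scale * transfer(gain))
    return statistics.fmean(outputs), statistics.stdev(outputs)

cases = [
    ("steep region, 20% gain tolerance", 20.0, 0.20),
    ("flat region,  20% gain tolerance", 80.0, 0.20),  # parameter design: move the nominal
    ("steep region,  5% gain tolerance", 20.0, 0.05),  # tolerance design: tighten the noise
]
for label, nominal_gain, tol in cases:
    mean_v, sd_v = simulate(nominal_gain, tol)
    print(f"{label}:  mean = {mean_v:6.1f} V,  standard deviation = {sd_v:5.2f} V")

The first two rows show parameter design at work: with the same inexpensive 20 percent tolerance on the gain, moving the nominal to the flat part of the curve sharply reduces the output-voltage spread. Getting a comparable reduction while staying in the steep region requires tightening the tolerance, the step the text reserves for tolerance design because it raises manufacturing cost.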
2.6 CLASSIFICATION OF PARAMETERS: P DIAGRAM

A block diagram representation of a product is shown in Figure 2.7. The response of the product is denoted by y. The response could be the output of the product or some other suitable characteristic. Recall that the response we consider for the purpose of optimization in a Robust Design experiment is called a quality characteristic.

[Figure 2.7: Block diagram of a product/process: P diagram. The signal factor M and the control factors z enter the product/process, the noise factors x disturb it, and y is the response.]

A number of parameters can influence the quality characteristic or response of the product. These parameters can be classified into the following three classes (note that the word parameter is equivalent to the word factor in most of the Robust Design literature):

1. Signal factors (M). These are the parameters set by the user or operator of the product to express the intended value for the response of the product. For example, the speed setting on a table fan is the signal factor for specifying the amount of breeze; the steering wheel angle is a signal factor that specifies the turning radius of an automobile. Other examples of signal factors are the 0 and 1 bits transmitted in a digital communication system and the original document to be copied by a photocopying machine. The signal factors are selected by the design engineer based on the engineering knowledge of the product being developed. Sometimes two or more signal factors are used in combination to express the desired response. Thus, in a radio receiver, tuning could be achieved by using the coarse- and fine-tuning knobs in combination.

2. Noise factors (x). Certain parameters cannot be controlled by the designer and are called noise factors. Section 2.3 described three broad classes of noise factors. Parameters whose settings (also called levels) are difficult to control in the field or whose levels are expensive to control are also considered noise factors. The levels of the noise factors change from one unit to another, from one environment to another, and from time to time. Only the statistical characteristics (such as the mean and variance) of noise factors can be known or specified, but the actual values in specific situations cannot be known. The noise factors cause the response y to deviate from the target specified by the signal factor M and lead to quality loss.

3. Control factors (z). These are parameters that can be specified freely by the designer. In fact, it is the designer's responsibility to determine the best values of these parameters. Each control factor can take multiple values, called levels. When the levels of certain control factors are changed, the manufacturing cost does not change; however, when the levels of others are changed, the manufacturing cost also changes. In the power supply circuit example of Section 2.5, the transistor gain and the dividing resistance are control factors that do not change the manufacturing cost. However, the tolerance of the transistor gain has a definite impact on the manufacturing cost. We will refer to the control factors that affect manufacturing cost as tolerance factors, whereas the other control factors simply will be called control factors.

The block diagram of Figure 2.7 can be used to represent a manufacturing process or even a business system. Identifying the important responses, signal factors, noise factors, and control factors in a specific project is an important task.
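For a project notebook it can help to record this classification explicitly. The sketch below restates the chapter's power supply example in that form; the groupings are taken from the definitions above, and the structure itself (the field names and the use of a dataclass) is just one convenient way to write them down, not something prescribed by the book.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PDiagram:
    """The roles a parameter can play in the P diagram of Figure 2.7."""
    response: str                                                 # y: the quality characteristic
    signal_factors: List[str] = field(default_factory=list)      # M: set by the user or operator
    noise_factors: List[str] = field(default_factory=list)       # x: cannot (or is too costly to) be controlled
    control_factors: List[str] = field(default_factory=list)     # z: freely specified, no effect on manufacturing cost
    tolerance_factors: List[str] = field(default_factory=list)   # control factors whose levels change manufacturing cost

# The power supply circuit of Section 2.5, classified per the definitions above.
power_supply = PDiagram(
    response="output voltage",
    signal_factors=[],  # a static problem: the intended output is a fixed target
    noise_factors=["variation of the transistor gain about its nominal value"],
    control_factors=["nominal transistor gain", "nominal dividing resistance"],
    tolerance_factors=["tolerance on the transistor gain"],
)

if __name__ == "__main__":
    for role, entries in vars(power_supply).items():
        print(f"{role:17s}: {entries}")

For a dynamic problem, the signal_factors list would carry entries such as the fan speed setting or the steering wheel angle mentioned above.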
In planning a Robust Design project, it is also important to recognize which control factors change ‘the manufacturing cost and which do not. The best settings of the latter are determined through parameter design, whereas the best settings of the former are determined through tolerance design. {Sometimes, tolerance factors are also optimized during parameter design (see Chapters 10 and 11).]32 Principles of Quality Engineering Chap. 2 Robust Design projects can be classified on the basis of the nature of the signal factor and the quality characteristic. In some problems, the signal factor takes @ con- stant value. Such problems are called static problems. "The other problems are called dynamic problems. ‘These and other types of problems are described in Chapter 5. ‘Thus far in this chapter, we have described the basic principles of quality engineering, including the quadratic loss function, the exploitation of nonlinearity, and the classification of product or process parameters. All this material creates a founda- tion for discussing the optimization of the design of products and processes in the next section. 2.7 OPTIMIZATION OF PRODUCT AND PROCESS DESIGN Designing a product or a manufacturing process is a complex activity. ‘The output of the activity is a set of drawings and written specifications that specify how to make the particular product. Three essential elements of these drawings and specifications are: (a) system architecture, (b) nominal values for all parameters of the system, and (©) the tolerance or the allowable variation in each parameter. Optimizing a product or process design means determining the best architecture, the best parameter values, and the best tolerances. Optimization Strategy for Becoming a Preferred Supplier Consider a market where there are two suppliers for a product and the customers are capable of evaluating their quality loss. Recall that the quality loss includes the operat- ing cost of the product as well as other losses due to the deviation of the product's function from the target. Suppose the suppliers differ in price and quality loss for their products. In such a market, the preferred supplier would be the one for whom the sum total of quality loss and price is the smallest. Depending on marketing strategy and corporate policy, a supplier can adopt one of many optimization strategies for becom- ing a preferred supplier. Among them, three noteworthy strategies are: 1, Minimize manufacturing cost while delivering the same quality as the competi- tor. Here the supplier would be able to maximize per unit profit. 2, Minimize the quality loss while keeping the manufacturing cost the same as the competitor (as judged by the price). With this strategy, the supplier can build a reputation for quality. 3. Minimize the sum of the quality loss and manufacturing cost. This is a strategy for best utilization of the sum of supplier's and customer's resources. It is most appropriate when the supplier and the customer are part of the same corporation. Also, public utility commissions are required to follow this strategy in regulating the utility companies.Sec. 2.7 Optimization of Product and Process Design 33 Note that strategies 1 and 2 are the extreme strategies a supplier can follow to remain a preferred supplier. In between, there are infinitely many strategies, and strat- egy 3 is an important one among them. Engineering Design Problem Consider the strategy of minimizing the manufacturing cost while delivering a specified quality level. 
The engineering problem of optimizing a product or process design to reflect this strategy is difficult and fuzzy. First, the relationship between the numerous parameters and the response is often unknown and must be observed experimentally. Secondly, during product or process design the precise magnitudes of noise factor vari- ations and the costs of different grades of materials, components, and tolerances are not known, For example, during product design, exact manufacturing variations are not Known unless existing processes are to be used, Therefore, writing a single objective function encompassing all costs is not possible. Considering these difficulties, the fol- lowing strategy has an intuitive appeal and consists of three steps: (1) concept design, (2) parameter design, and (3) tolerance design. These steps are described below. 1. Concept design. In this step, the designer examines a variety of architectures and technologies for achieving the desired function of the product and selects the most suitable ones for the product. Selecting an appropriate circuit diagram or @ sequence of manufacturing steps are examples of concept design activity. This is a highly creative step in which the experience and skill of the designer play an important role. Usually, only one architecture or technology is selected based on the judgment of the designer. However, for highly complex products, two or three promising architectures are selected; each one is developed separately, and, in the end, the best architecture is adopted. Concept design can play an impor- tant role in reducing the sensitivity to noise factors as well as in reducing the manufacturing cost. Quality Function Deployment (QFD) and Pugh’s concept selection method are two techniques that can improve the quality and produc- tivity of the concept design step (see Clausing [C1], Sullivan (S6}, Hauser and Clausing (H1}, and Cohen (C4). 2. Parameter design. In parameter design, we determine the best settings for the control factors that do not affect manufacturing cost, that is, the settings that ‘minimize quality loss. Thus, we must minimize the sensitivity of the function of the product or process to all noise factors and also get the mean function on tar- get. During parameter design, we assume wide tolerances on the noise factors and assume that low-grade components and materials would be used; that is, we fix the manufacturing cost at a low value and, under these conditions, minimize the sensitivity to noise, thus minimizing the quality loss. If at the end of param- eter design the quality loss is within specifications, we have a design with the lowest cost and we need not go to the third step. However, in practice thea4 Principles of Quality Engineering Chap. 2 quality loss must be further reduced; therefore, we always have to go to the third step. 3. Tolerance design. In tolerance design, a trade-off is made between reduction in the quality loss due to performance variation and increase in manufacturing cost; that is, we selectively reduce tolerances and selectively specify higher-grade material (note that these are all tolerance factors) in the order of their cost effec- tiveness. Tolerance design should be performed only after sensitivity to noise has been minimized through parameter design. Otherwise, to achieve the desired low value of quality loss, we would have to specify unnecessarily higher-grade materials and components leading to higher manufacturing cost. 
Sometimes the variation in a products response can be reduced by adding a suitable compensa- tion mechanism, such as feedback control. Of course, this leads to higher prod- uct cost. Thus, the inclusion of a compensation mechanism should be considered as a tolerance factor to be optimized along with the component tolerances. Many Japanese companies do an excellent job of minimizing sensitivity to noise by using the Robust Design method. As a resuli, they can manufacture higher quality Products at a lower cost. Many American companies, unaware of the Robust Design method, depend heavily on tolerance design and concept design to improve quality. Relying on tolerance design makes products more expensive to manufacture, and rely- ing on improved concept design requires achieving breakthroughs which are difficult to schedule and, hence, lead to longer development time. Robust Design and its associated methodology focus on parameter design. A full treatment of concept design is beyond the scope of this book. Tolerance design is dis- cussed briefly in Chapters 8 and 11 with case studies. At the beginning of this section, we described three optimization strategies for becoming a preferred supplier. Each of these strategies can be realized through the concept design, parameter design, and tolerance design. The only difference is in the tolerance design step in which the "stopping rule” is different for the selective process of specifying higher-grade material and components. To maximize per unit profit, we should stop the selective specification of higher-grade material as soon as the quality loss for the product equals that of our competitors. To gain a reputation for quality, we should stop as soon as our manufacturing cost equals the competitor's manufactur ing cost. To maximize the use of societal resources, we should stop as soon as the marginal increase in manufacturing cost equals the marginal reduction in quality loss. In al these cases, parameter design is an essential step for gaining the most benefit of a given concept design. The engineering design activity for complicated products is often organized in the following hierarchy: (1) design of the overall system, (2) subsystem design, and (3) component design. ‘The steps of concept design, parameter design, and tolerance design can be applied in each of the three hierarchical levels.Sec, 2.8 Role of Various Quality Control Activities 35 2.8 ROLE OF VARIOUS QUALITY CONTROL ACTIVITIES ‘The goal of this section is to delineate the major quality control activities during the various stages of the life cycle of a product and to put in perspective the role of the Robust Design methodology. Once a decision to make a product has been made, the life cycle of that product has four major stages: 1. Product design 2. Manufacturing process design 3. Manufacturing 4. Customer usage The quality control activities in each of these stages are listed in Table 2.1. ‘The quality control activities in product and process design are called off-line quality con- trol, whereas the quality control activities in manufacturing are called on-line quality control. For customer usage, quality control activities involve warranty and service. Product Design During product design, one can address all three types of noise factors (external, unit- to-unit variation, and deterioration), thus making it the most important stage for improving quality and reducing the unit manufacturing cost. 
Parameter design during this stage reduces sensitivity to all three types of noise factors and, thus, gives us the following benefits:

* The product can be used in a wide range of environmental conditions, so the product's operating cost is lower.
* The use of costly compensation devices, such as temperature-compensation circuits and airtight encapsulation, can be avoided, thus making the product simpler and cheaper.
* Lower-grade components and materials can be used.
* The benefits that can be derived from making the manufacturing process design robust are also realized through parameter design during product design.

Manufacturing Process Design

During manufacturing process design, we cannot reduce the effects of either the external noise factors or the deterioration of the components on the product's performance in the field. This can be done only through product design and selection of materials and components. However, the unit-to-unit variation can be reduced through process design, because parameter design during process design reduces the sensitivity of the unit-to-unit variation to the various noise factors that affect the manufacturing process.

TABLE 2.1 QUALITY CONTROL ACTIVITIES DURING VARIOUS PRODUCT REALIZATION STEPS

| Product Realization Step | Quality Control Activity | External | Unit-to-Unit | Deterioration | Comments |
|---|---|---|---|---|---|
| Product design | a) Concept design | Yes | Yes | Yes | Involves innovation to reduce sensitivity to all noise factors. |
| | b) Parameter design | Yes | Yes | Yes | Most important step for reducing sensitivity to all noise factors. Uses the Robust Design method. |
| | c) Tolerance design | Yes | Yes | Yes | Method for selecting most economical grades of materials, components and manufacturing equipment, and operating environment for the product. |
| Manufacturing process design | a) Concept design | No | Yes | No | Involves innovation to reduce unit-to-unit variation. |
| | b) Parameter design | No | Yes | No | Important for reducing sensitivity of unit-to-unit variation to manufacturing variations. |
| | c) Tolerance design | No | Yes | No | Method for determining tolerances on manufacturing process parameters. |
| Manufacturing | a) Detection and correction | No | Yes | No | Method of detecting problems when they occur and correcting them. |
| | b) Feedforward control | No | Yes | No | Method of compensating for known problems. |
| | c) Screening | No | Yes | No | Last alternative, useful when process capability is poor. |
| Customer usage | Warranty and service | No | No | No | |

(The External, Unit-to-Unit, and Deterioration columns indicate the ability of each activity to reduce the effect of that class of noise factors.)

Source: Adapted from G. Taguchi, "Off-line and On-line Quality Control System," International Conference on Quality Control, Tokyo, Japan, 1978.

For some products the deterioration rate may depend on the design of the manufacturing process. For example, in microelectronics, the amount of impurity has a direct relationship with the deterioration rate of the integrated circuit, and it can be controlled through process design. For some mechanical parts, the surface finish can determine the wear-out rate, and it too can be controlled by the manufacturing process design. Therefore, it is often said that the manufacturing process design can play an important role in controlling the product deterioration. However, this is not the same thing as making the product's performance insensitive to problems, such as impurities or surface finish. Reducing the sensitivity of the product's performance can only be done during product design.
In the terminology of Robust Design, we consider the problems of impurities or surface finish as a part of manufacturing variation (unit-to- unit variation) around the target. ‘The benefits of parameter design in process design are: * The expense and time spent in final inspection and on rejects can be reduced greatly. ‘+ Raw material can be purchased from many sources and the expense of incoming ‘material inspection can be reduced. * Less expensive manufacturing equipment can be used. * Wider variation in process conditions can be permitted, thus reducing the need and expense of on-line quality control (process control), Manufacturing ‘No matter how well a manufacturing process is designed, it will not be perfect. There- fore, it is necessary to have on-line quality control during daily manufacturing so that the unit-to-unit variation can be minimized as justified by manufacturing economics. There are three major types of on-line quality control activities (see Taguchi [T3] for detailed discussion of on-line quality control): 1. Detection and correction. Here the goal is to recognize promptly the breakdown of a machine or a piece of equipment, a change in raw material characteristic, or an operation error which has a consistent effect on the process. This is accom- plished by periodically observing either the process conditions or product charac- teristics. Once such a deviation is recognized, appropriate corrective action is taken to prevent the future units from being off-target. The detection and correc- tion method is a way of balancing the customer's quality loss resulting from unit-to-unit variation against the manufacturer’s operating expenses, including the cost of perioilic testing and correction of problems. Thus, it includes activities such as prevehtive maintenance and buying better test equipment. Statistical pro- ‘cess control (SPC) techniques are often used for detecting process problems (see Grant {G2}, Duncan [D5}, and Feigenbaum (F1)). 2. Feedforward control. Here, the goal is to send information about errors or prob- Jems discovered in one step to the next step in the process so that variation can be reduced. Consider a situation where, by mistake, a 200 ASA film is exposed with a 100 ASA setting on the camera. If that information is passed on to a film38 Principles of Quality Engineering Chap. 2 developing organization, the effect of the error in exposure can be reduced by adjusting the developing parameters. Similarly, by measuring the properties of incoming material or informing the subsequent manufacturing step about the problems discovered in an earlier step, unit-to-unit variation in the final product, can be reduced. This is the essence of feedforward control. 3. Screening. Here, the goal is to stop defective units from being shipped. In cer- tain situations, the manufacturing process simply does not have adequate capability—that is, even under nominal operating conditions the process produces a large number of defective products. Then, as the last alternative, all units pro- duced can still be measured and the defective ones discarded or repaired to prevent shipping them to customers. In electronic component manufacturing, it is common to burn-in the components (subject the components to normal or high stress for a period of time) as a method for screening out the bad components. Customer Usage With all the quality control efforts in product design, process design, and manufactur- ing, some defective products may still get shipped to the customer. 
The only way to prevent further damage to the manufacturer's reputation for quality is to provide field service and compensate the customer for the loss caused by the defective product.

2.9 SUMMARY

* Quality engineering is concerned with reducing both the quality loss, which is the cost incurred after the sale of a product, and the unit manufacturing cost (umc).

* Fraction defective is often an incomplete and misleading measure of quality. It connotes that all products that meet the specification limits are equally good, while those outside the specification limits are bad. However, in practice the quality becomes progressively worse as the product's response deviates from the target value.

* The quadratic loss function is a simple and meaningful function for approximating the quality loss in most situations. The three most common variations of the quadratic loss function are:

  1. Nominal-the-best type:   L(y) = (A0 / Δ0^2) (y - m)^2

  2. Smaller-the-better type: L(y) = (A0 / Δ0^2) y^2

  3. Larger-the-better type:  L(y) = A0 Δ0^2 (1 / y^2)

  In the formulae above, Δ0 is the functional limit and A0 is the loss incurred at the functional limit. The target values of the response (or the quality characteristic) for the three cases are m, 0, and infinity, respectively.

* A product response that is observed for the purpose of evaluating the quality loss or optimizing the product design is called a quality characteristic. The parameters (also called factors) that influence the quality characteristic can be classified into three classes:

  1. Signal factors are the factors that specify the intended value of the product's response.

  2. Noise factors are the factors that cannot be controlled by the designer. Factors whose settings are difficult or expensive to control are also called noise factors. The noise factors themselves can be divided into three broad classes: (1) external (environmental and load factors), (2) unit-to-unit variation (manufacturing nonuniformity), and (3) deterioration (wear-out, process drift).

  3. Control factors are the factors that can be specified freely by the designer. Their settings (or levels) are selected to minimize the sensitivity of the product's response to all noise factors. Control factors that also affect the product's cost are called tolerance factors.

* A robust product or a robust process is one whose response is least sensitive to all noise factors. A product's response depends on the values of the control and noise factors through a nonlinear function. We exploit the nonlinearity to achieve robustness.

* The three major steps in designing a product or a process are:

  1. Concept design: selection of product architecture or process technology.

  2. Parameter design: selection of the optimum levels of the control factors to maximize robustness.

  3. Tolerance design: selection of the optimum values of the tolerance factors (material type, tolerance limits) to balance the improvement in quality loss against the increase in the umc.

* Quality improvement through concept design needs breakthroughs, which are difficult to schedule. Parameter design improves quality without increasing the umc. It can be performed systematically by using orthogonal arrays and the signal-to-noise ratios, which is the most inexpensive way to improve quality.
Japanese com- panies have gained huge quality and cost advantage by emphasizing parameter design. The Robust Design methodology focuses on how to perform parameter design efficiently. Depending on marketing strategy and corporate policy, a supplier can adopt one of many optimization strategies for becoming a preferred supplier. Among them, three noteworthy strategies are: (1) minimize manufacturing cost while deliver- ing the same quality as the competition, (2) minimize the quality loss while keeping the manufacturing cost the same as the competition, and (3) minimize the sum of the quality loss and the manufacturing cost. Regardless of which optimization strategy is adopted, one must first perform parameter design. The life cycle of a product has four major stages: (1) product design, (2) manufacturing process design, (3) manufacturing, and (4) customer usage. Quality control activities during product and process design are called off-line quality control, while those in manufacturing are called on-line quality control Warranty and service are the ways for dealing with quality problems during cus- tomer usage. A product's sensitivity to all three types of noise factors can be reduced during product design, thus making product design the most important stage for improv- ing quality and reducing umc. The next important step is manufacturing process design through which the unit-to-unit variation (and some aspects of deteriora- tion) can be reduced along with the ume. During manufacturing, the unit-to-unit variation can be further reduced, but with less cost effectiveness than during manufacturing process design.Chapter 3 MATRIX EXPERIMENTS USING ORTHOGONAL ARRAYS ‘A matrix experiment consists of a set of experiments where we change the settings of the various product or process parameters we want to study from one experiment to another. After conducting a matrix experiment, the data from all experiments in the set taken together are analyzed to determine the effects of the various parameters. Con- ducting matrix experiments using special matrices, called orthogonal arrays, allows the effects of several parameters to be determined efficiently and is an important technique in Robust Design. This chapter introduces the technique of matrix experiments based on orthogonal arrays through a simulated example of a chemical vapor deposition (CVD) process. In particular, it focuses on the analysis of data collected from matrix experiments and the benefits of using orthogonal arrays. The engineering issues involved in planning and conducting matrix experiments are discussed in Chapter 4, and the techniques of con- structing orthogonal arrays are discussed in Chapter 7. This chapter consists of six sections: * Section 3.1 describes the matrix experiment and the concept of orthogonality. * Section 3.2 shows how to analyze the data from matrix experiments to determine the effects of the various parameters or factors. One of the benefits of using orthogonal arrays is the simplicity of data analysis. The effects of the various factors can be determined by computing simple averages, an approach that has an a2 Matrix Experiments Using Orthogonal Arrays Chap. 3 intuitive appeal. The estimates of the factor effects are then used to determine the optimum factor settings. * Section 3.3 presents a model, called additive model, for the factor effects and demonstrates the validity of using simple averages for estimating the factor effects. 
* Section 3.4 describes analysis of variance (ANOVA), a useful technique for estimating error variance and for determining the relative importance of various factors. Here we also point out the analogy between ANOVA and Fourier series decomposition.

* Section 3.5 discusses the use of the additive model for prediction and diagnosis, as well as the concept of interaction and the detection of the presence of interaction through diagnosis.

* Section 3.6 summarizes the important points of this chapter.

In statistical literature, matrix experiments are called designed experiments and the individual experiments in a matrix experiment are sometimes called runs or treatments. Settings are also referred to as levels and parameters as factors. The theory behind the analysis of matrix experiments can be found in many textbooks on experimental design, including Hicks [H2]; Box, Hunter, and Hunter [B3]; Cochran and Cox [C3]; John [J2]; and Daniel [D1].

3.1 MATRIX EXPERIMENT FOR A CVD PROCESS

Consider a project where we are interested in determining the effect of four process parameters: temperature (A), pressure (B), settling time (C), and cleaning method (D), on the formation of certain surface defects in a chemical vapor deposition (CVD) process. Suppose for each parameter three settings are chosen to cover the range of interest.

The factors and their chosen levels are listed in Table 3.1. The starting levels (levels before conducting the matrix experiment) for the four factors, identified by an underscore in the table, are: T0 deg C temperature, P0 mtorr pressure, t0 minutes of settling time, and no cleaning. The alternate levels for the factors are as shown in the table; for example, the two alternate levels of temperature included in the study are (T0 - 25) deg C and (T0 + 25) deg C. These factor levels define the experimental region or the region of interest. Our goal for this project is to determine the best setting for each parameter so that the surface defect formation is minimized.

The matrix experiment selected for this project is given in Table 3.2. It consists of nine individual experiments corresponding to the nine rows. The four columns of the matrix represent the four factors as indicated in the table. The entries in the matrix represent the levels of the factors. Thus, experiment 1 is to be conducted with each factor at the first level. Referring to Table 3.1, we see that the factor levels for experiment 1 are (T0 - 25) deg C temperature, (P0 - 200) mtorr pressure, t0 minutes settling time, and no cleaning. Similarly, by referring to Tables 3.2 and 3.1, we see that experiment 4 is to be conducted at level 2 of temperature (T0 deg C), level 1 of pressure (P0 - 200 mtorr), level 2 of settling time (t0 + 8 minutes), and level 3 of cleaning method (CM3). The settings of experiment 4 can also be referred to concisely as A2 B1 C2 D3.

TABLE 3.1 FACTORS AND THEIR LEVELS

| Factor | Level 1 | Level 2 | Level 3 |
|---|---|---|---|
| A. Temperature (deg C) | T0 - 25 | T0 | T0 + 25 |
| B. Pressure (mtorr) | P0 - 200 | P0 | P0 + 200 |
| C. Settling time (min) | t0 | t0 + 8 | |
| D. Cleaning method | None | | CM3 |

(The starting level for each factor, underscored in the original table, is level 2 for temperature and pressure and level 1 for settling time and cleaning method.)

TABLE 3.2 MATRIX EXPERIMENT

| Expt. No. | 1 Temperature (A) | 2 Pressure (B) | 3 Settling Time (C) | 4 Cleaning Method (D) | Observation η (dB)* |
|---|---|---|---|---|---|
| 1 | 1 | 1 | 1 | 1 | -20 |
| 2 | 1 | 2 | 2 | 2 | 0 |
| 3 | 1 | 3 | 3 | 3 | -40 |
| 4 | 2 | 1 | 2 | 3 | -35 |
| 5 | 2 | 2 | 3 | 1 | -45 |
| 6 | 2 | 3 | 1 | 2 | -55 |
| 7 | 3 | 1 | 3 | 2 | -65 |
| 8 | 3 | 2 | 1 | 3 | -45 |
| 9 | 3 | 3 | 2 | 1 | -70 |

* η = -10 log10 (mean square surface defect count)
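The layout of Table 3.2 is easy to generate and check by machine. The following sketch (plain Python, assuming nothing beyond the level assignments printed in the table) builds the nine rows and verifies the balancing property discussed in the next paragraphs: for every pair of columns, each of the 3 x 3 = 9 level combinations occurs exactly once.

```python
from itertools import combinations
from collections import Counter

# The L9 layout of Table 3.2: nine experiments by four 3-level factors (A, B, C, D).
L9 = [
    (1, 1, 1, 1),
    (1, 2, 2, 2),
    (1, 3, 3, 3),
    (2, 1, 2, 3),
    (2, 2, 3, 1),
    (2, 3, 1, 2),
    (3, 1, 3, 2),
    (3, 2, 1, 3),
    (3, 3, 2, 1),
]

def is_balanced(array, col_i, col_j):
    """True if every combination of levels of the two columns occurs equally often."""
    counts = Counter((row[col_i], row[col_j]) for row in array)
    return len(counts) == 9 and len(set(counts.values())) == 1

for i, j in combinations(range(4), 2):
    print(f"columns {i + 1} and {j + 1}: balanced = {is_balanced(L9, i, j)}")
# All six pairs print True, which is the combinatoric sense of orthogonality.
```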
The matrix experiment of Table 3.2 is the standard orthogonal array L9 of Taguchi and Wu [T7]. As the name suggests, the columns of this array are mutually orthogonal. Here, orthogonality is interpreted in the combinatoric sense; that is, for any pair of columns, all combinations of factor levels occur, and they occur an equal number of times. This is called the balancing property, and it implies orthogonality. A more formal mathematical definition of orthogonality of a matrix experiment is given in Appendix A at the end of this book.

In this matrix, for each pair of columns, there exist 3 x 3 = 9 possible combinations of factor levels, and each combination occurs precisely once. For columns 1 and 2, the nine possible combinations of factor levels, namely the combinations (1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1), (3,2), and (3,3), occur in experiments (or rows) 1, 2, 3, 4, 5, 6, 7, 8, and 9, respectively. In all, six pairs of columns can be formed from the four columns. We ask the reader to verify orthogonality for at least a few of these pairs.

A common method of finding the frequency response function of a dynamic system to time-varying input is to observe the output of the system for sinusoidal inputs of different frequencies, one frequency at a time. Another approach is to use an input consisting of several sinusoidal frequencies and observe the corresponding output. Fourier analysis is then used to determine the gain and phase for each frequency. Conducting matrix experiments with several factors is analogous to the use of multifrequency input for finding the frequency response function. The analogy of matrix experiments with Fourier analysis is discussed further in Section 3.5.

There exists a large variety of industrial experiments. Each experiment has a different number of factors. Some factors have two levels, some three levels, and some even more. The following sections discuss the analysis of such experiments and demonstrate the benefits of using orthogonal arrays to plan matrix experiments. A number of standard orthogonal arrays and the techniques of constructing orthogonal arrays to suit specific projects are described in Chapter 7.

3.2 ESTIMATION OF FACTOR EFFECTS

Suppose for each experiment we observe the surface defect count per unit area at three locations each on three silicon wafers (thin disks of silicon used for making VLSI circuits) so that there are nine observations per experiment. We define by the following formula a summary statistic, η_i, for experiment i:

    η_i = -10 log10 (mean square defect count for experiment i) ,

where the mean square refers to the average of the squares of the nine observations in experiment i. We refer to the η_i calculated using the above formula as the observed η_i. Let the observed η_i for the nine experiments be as shown in Table 3.2. Note that the objective of minimizing surface defects is equivalent to maximizing η. The summary statistic η is called the signal-to-noise (S/N) ratio. The rationale for using η as the objective function is discussed in Chapter 5.

Let us now see how to estimate the effects of the four process parameters from the observed values of η for the nine experiments. First, the overall mean value of η for the experimental region defined by the factor levels in Table 3.1 is given by

    m = (1/9) * sum_{i=1}^{9} η_i = (1/9)(η_1 + η_2 + ... + η_9) .    (3.1)

By examining columns 1, 2, 3, and 4 of the orthogonal array in Table 3.2, observe that all three levels of every factor are equally represented in the nine experiments.
Thus, m is a balanced overall mean over the entire experimental region.

The effect of a factor level is defined as the deviation it causes from the overall mean. Let us examine how the experimental data can be used to evaluate the effect of temperature at level 3. Temperature was at level A3 for experiments 7, 8, and 9. The average S/N ratio for these experiments, which is denoted by m_A3, is given by

    m_A3 = (1/3)(η_7 + η_8 + η_9) .    (3.2)

Thus, the effect of temperature at level A3 is given by (m_A3 - m). From Table 3.2, observe that for experiments 7, 8, and 9, the pressure level takes values 1, 2, and 3, respectively. Similarly, for these three experiments, the levels of settling time and cleaning method also take values 1, 2, and 3. So the quantity m_A3 represents an average η when the temperature is at level A3, where the averaging is done in a balanced manner over all levels of each of the other three factors.

The average S/N ratio for levels A1 and A2 of temperature, as well as those for the various levels of the other factors, can be obtained in a similar way. Thus, for example,

    m_A2 = (1/3)(η_4 + η_5 + η_6)    (3.3)

represents the average S/N ratio for temperature at level 2, and

    m_B2 = (1/3)(η_2 + η_5 + η_8)    (3.4)

is the average S/N ratio for pressure at level B2. Because the matrix experiment is based on an orthogonal array, all the level averages possess the same balancing property described for m_A3.

By taking the numerical values of η listed in Table 3.2, the average η for each level of the four factors can be obtained as listed in Table 3.3. These averages are shown graphically in Figure 3.1. They are separate effects of each factor and are commonly called main effects. The process of estimating the factor effects discussed above is sometimes called analysis of means (ANOM).

[Figure 3.1 Plots of factor effects (average η in dB for each level of temperature, pressure, settling time, and cleaning method). Underscore indicates starting level. Two-standard-deviation confidence limits are also shown for the middle level.]

TABLE 3.3 AVERAGE η BY FACTOR LEVELS (dB)

| Factor | Level 1 | Level 2 | Level 3 |
|---|---|---|---|
| A. Temperature | -20* | -45 | -60 |
| B. Pressure | -40 | -30* | -55 |
| C. Settling time | -40 | -35* | -50 |
| D. Cleaning method | -45 | -40* | -40* |

Overall mean m = -41.67 dB. The starting level is identified by an underscore in the original table, and the optimum level is identified by *.

Selecting Optimum Factor Levels

A primary goal in conducting a matrix experiment is to optimize the product or process design, that is, to determine the best or the optimum level for each factor. The optimum level for a factor is the level that gives the highest value of η in the experimental region. The estimated main effects can be used for this purpose provided the variation of η as a function of the factor levels follows the additive model described in the next section. How to ensure that the additive model holds in a specific project is a crucial question, which is addressed later in this chapter and also at various places in the rest of the book as the appropriate situations arise.

Recall that our goal in the CVD project is to minimize the surface defect count. Since log is a monotone decreasing function, it implies that we should maximize η. Note that η = -20 is preferable to η = -45, because -20 is greater than -45. From Figure 3.1 we can determine the optimum level for each factor as the level that has the highest value of η.
Thus, the best temperature setting is A1, the best pressure is B2, the best settling time is C2, and the best cleaning method could be D2 or D3, since the average η for both D2 and D3 is -40 dB. Based on the matrix experiment, we can conclude that the settings A1 B2 C2 D2 and A1 B2 C2 D3 would give the highest η, or the lowest surface defect count.

The predicted best settings need not correspond to one of the rows in the matrix experiment. In fact, often they do not correspond, as is the case in the present example. Also, typically, the value of η realized for the predicted best settings is better than the best among the rows of the matrix experiment.

3.3 ADDITIVE MODEL FOR FACTOR EFFECTS

In the preceding section, we used simple averaging to estimate factor effects. The same nine observations (η_1, η_2, ..., η_9) are grouped differently to estimate the factor effects. Also, the optimum combination of settings was determined by examining the effect of each factor separately. Justification for this simple procedure comes from

* Use of the additive model as an approximation
* Use of an orthogonal array to plan the matrix experiment

We now examine the additive model. The relationship between η and the process parameters A, B, C, and D can be quite complicated. Empirical determination of this relationship can, therefore, turn out to be quite expensive. However, in most situations, when η is chosen judiciously, the relationship can be approximated adequately by the following additive model:

    η(A_i, B_j, C_k, D_l) = μ + a_i + b_j + c_k + d_l + e .    (3.5)

In the above equation, μ is the overall mean, that is, the mean value of η for the experimental region; the deviation from μ caused by setting factor A at level A_i is a_i; the terms b_j, c_k, and d_l represent similar deviations from μ caused by the settings B_j, C_k, and D_l of factors B, C, and D, respectively; and e stands for the error. Note that by error we imply the error of the additive approximation plus the error in the repeatability of measuring η for a given experiment.

An additive model is also referred to as a superposition model or a variables separable model in engineering literature. Note that a superposition model implies that the total effect of several factors (also called variables) is equal to the sum of the individual factor effects. It is possible for the individual factor effects to be linear, quadratic, or of higher order. However, in an additive model cross product terms involving two or more factors are not allowed.

By definition, a_1, a_2, and a_3 are the deviations from μ caused by the three levels of factor A. Thus,

    a_1 + a_2 + a_3 = 0 .    (3.6)

Similarly,

    b_1 + b_2 + b_3 = 0
    c_1 + c_2 + c_3 = 0    (3.7)
    d_1 + d_2 + d_3 = 0 .

It can be shown that the averaging procedure of Section 3.2 for estimating the factor effects is equivalent to fitting the additive model, defined by Equations (3.5), (3.6), and (3.7), by the least squares method. This is a consequence of using an orthogonal array to plan the matrix experiment.

Now, consider Equation (3.2) for the estimation of the effect of setting temperature at level 3:

    m_A3 = (1/3)(η_7 + η_8 + η_9)
         = (1/3)[(μ + a_3 + b_1 + c_3 + d_2 + e_7) + (μ + a_3 + b_2 + c_1 + d_3 + e_8) + (μ + a_3 + b_3 + c_2 + d_1 + e_9)]
         = (1/3)(3μ + 3a_3) + (1/3)(b_1 + b_2 + b_3) + (1/3)(c_1 + c_2 + c_3) + (1/3)(d_1 + d_2 + d_3) + (1/3)(e_7 + e_8 + e_9)
         = (μ + a_3) + (1/3)(e_7 + e_8 + e_9) .    (3.8)

Note that the terms corresponding to the effects of factors B, C, and D drop out because of Equation (3.7). Thus, m_A3 is an estimate of (μ + a_3).
Furthermore, the error term in Equation (3.8) is an average of three error terms. Suppose σ_e^2 is the average variance for the error terms e_1, e_2, ..., e_9. Then the error variance for the estimate m_A3 is approximately (1/3)σ_e^2. (Note that in computing the error variances of the estimate m_A3 and other estimates in this chapter, we treat the individual error terms as independent random variables with zero mean and variance σ_e^2. In reality, this is only an approximation because the error terms include the error of the additive approximation, so that the error terms are not strictly independent random variables with zero mean. This approximation is adequate because the error variance is used for only qualitative purposes.) This represents a 3-fold reduction in error variance compared to conducting a single experiment at the setting A3 of factor A.

Substituting Equation (3.5) in Equation (3.3) verifies that m_A2 estimates μ + a_2 with error variance (1/3)σ_e^2. Similarly, substituting Equation (3.5) in Equation (3.4) shows that m_B2 estimates μ + b_2 with error variance (1/3)σ_e^2. It can be verified that similar relationships hold for the estimation of the remaining factor effects.

The term replication number is used to refer to the number of times a particular factor level is repeated in an orthogonal array. The error variance of the average effect for a particular factor level is smaller than the error variance of a single experiment by a factor equal to its replication number.

To obtain the same accuracy of the factor level averages, we would need a much larger number of experiments if we were to use the traditional approach of studying one factor at a time. For example, we would have to conduct 3 x 3 = 9 experiments to estimate the average η for three levels of temperature alone (three repetitions each for the three levels), while keeping other factors fixed at certain levels, say, B1, C1, D1. We may then fix temperature at its best setting and experiment with levels B2 and B3 of pressure. This would need 3 x 2 = 6 additional experiments. Continuing in this manner, we can study the effects of factors C and D by performing 2 x 6 = 12 additional experiments. Thus, we would need a total of 9 + 3 x 6 = 27 experiments to study the four factors, one at a time. Compare this to only nine experiments needed for the orthogonal array based matrix experiment to obtain the same accuracy of the factor level averages.

Another common approach to finding the optimum combination of factor levels is to conduct a full factorial experiment, that is, to conduct experiments under all combinations of factor levels. In the present example, it would mean conducting experiments under 3^4 = 81 distinct combinations of factor levels, which is much larger than the nine experiments needed for the matrix experiment. When the additive model [Equation (3.5)] holds, it is obviously unnecessary to experiment with all combinations of factor levels. Fortunately, in most practical situations the additive model provides an excellent approximation. The additivity issue is discussed in much detail in Chapters 5 and 6.

Conducting matrix experiments using orthogonal arrays has another statistical advantage. If the errors, e_i, are independent with zero mean and equal variance, then the estimated factor effects are mutually uncorrelated. Consequently, the best level of each factor can be determined separately.
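The analysis of means just described is short enough to write out in full. The sketch below uses the η values shown in Table 3.2 (the observation column as reproduced here has been inferred from the level averages and sums of squares quoted later in the chapter, so treat the specific numbers as illustrative) together with the L9 layout; it reproduces the level averages of Table 3.3 and the optimum settings A1 B2 C2 D2.

```python
# Analysis of means (ANOM) for the L9 matrix experiment of Table 3.2.
eta = [-20.0, 0.0, -40.0, -35.0, -45.0, -55.0, -65.0, -45.0, -70.0]  # observed S/N ratios, dB

L9 = [(1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
      (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
      (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1)]
factors = "ABCD"

m = sum(eta) / len(eta)                     # overall mean, Equation (3.1)
print(f"overall mean m = {m:.2f} dB")

level_means = {}
for col, name in enumerate(factors):
    for level in (1, 2, 3):
        group = [eta[i] for i, row in enumerate(L9) if row[col] == level]
        level_means[(name, level)] = sum(group) / len(group)   # e.g. Equation (3.2)

# Optimum level for each factor = level with the highest average eta.
for name in factors:
    best = max((1, 2, 3), key=lambda lv: level_means[(name, lv)])
    means = {lv: round(level_means[(name, lv)], 2) for lv in (1, 2, 3)}
    print(name, means, "-> optimum level", best)
```

Because the design is orthogonal, these simple group averages are exactly the least squares estimates of μ + a_i, μ + b_j, and so on in the additive model of Equation (3.5).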
In order to preserve the benefits of using an orthogonal array, it is important that all experiments in the matrix be performed. If experiments corresponding to one or more rows are not conducted, or if their data are missing or erroneous, the balancing property and, hence, the orthogonality is lost. In some situations, incomplete matrix experiments can give useful results, but the analysis of such experiments is complicated. (Statistical techniques used for analyzing such data are regression analysis and linear models; see Draper and Smith [D4].) Thus, we recommend that any missing experiments be performed to complete the matrix.

3.4 ANALYSIS OF VARIANCE

Different factors affect the surface defect formation to a different degree. The relative magnitude of the factor effects could be judged from Table 3.3, which gives the average η for each factor level. A better feel for the relative effect of the different factors can be obtained by the decomposition of variance, which is commonly called analysis of variance (ANOVA). ANOVA is also needed for estimating the error variance for the factor effects and the variance of the prediction error.

Analogy with Fourier Analysis

An important reason for performing Fourier analysis of an electrical signal is to determine the power in each harmonic to judge the relative importance of the various harmonics. The larger the amplitude of a harmonic, the larger the power is in it and the more important it is in describing the signal. Similarly, an important purpose of ANOVA is to determine the relative importance of the various factors. In fact, there is a strong analogy between ANOVA and the decomposition of the power of an electrical signal into different harmonics:

* The nine observed values of η are analogous to the observed signal.
* The sum of squared values of η is analogous to the power of the signal.
* The overall mean of η is analogous to the dc part of the signal.
* The four factors are like four harmonics.
* The columns in the matrix experiment are orthogonal, which is analogous to the orthogonality of the different harmonics.

The analogy between the Fourier analysis of the power of an electrical signal and ANOVA is displayed in Figure 3.2. The experiments are arranged along the horizontal axis like time. The overall mean is plotted as a straight line like a dc component. The effect of each factor is displayed as a harmonic. The level of factor A for experiments 1, 2, and 3 is A1. So, the height of the wave for A is plotted as m_A1 for these experiments. Similarly, the height of the wave for experiments 4, 5, and 6 is m_A2, and the height for experiments 7, 8, and 9 is m_A3. The waves for the other factors are plotted similarly. By virtue of the additive model [Equation (3.5)], the observed η for any experiment is equal to the sum of the height of the overall mean and the deviations from the mean caused by the levels of the four factors. By referring to the waves of the different factors shown in Figure 3.2, it is clear that factors A, B, C, and D are in the decreasing order of importance. Further aspects of the analogy are discussed in the rest of this section.

[Figure 3.2 Orthogonal decomposition of the observed S/N ratio: the observed η for the nine experiments is decomposed into the overall mean and the effects of factors A, B, C, and D, each plotted against the experiment number 1 through 9.]
Computation of Sum of Squares

The sum of the squared values of η is called the grand total sum of squares. Thus, we have

    Grand total sum of squares = sum_{i=1}^{9} η_i^2 = (-20)^2 + (0)^2 + ... + (-70)^2 = 19,425 (dB)^2 .

The grand total sum of squares is analogous to the total signal power in Fourier analysis. It can be decomposed into two parts: the sum of squares due to mean and the total sum of squares, which are defined as follows:

    Sum of squares due to mean = (number of experiments) x m^2 = 9 (41.67)^2 = 15,625 (dB)^2 .

    Total sum of squares = sum_{i=1}^{9} (η_i - m)^2 = (-20 + 41.67)^2 + (0 + 41.67)^2 + ... + (-70 + 41.67)^2 = 3,800 (dB)^2 .

The sum of squares due to mean is analogous to the dc power of the signal, and the total sum of squares is analogous to the ac power of the signal in Fourier analysis. Because m is the average of the nine η_i values, we have the following algebraic identity:

    sum_{i=1}^{9} (η_i - m)^2 = sum_{i=1}^{9} η_i^2 - 9 m^2 ,

which can also be written as

    Total sum of squares = (grand total sum of squares) - (sum of squares due to mean) .

The above equation is analogous to the fact from Fourier analysis that the ac power is equal to the difference between the total power and the dc power of the signal.

The sum of squares due to factor A is equal to the total squared deviation of the wave for factor A from the line representing the overall mean. There are three experiments each at levels A1, A2, and A3. Consequently,

    Sum of squares due to factor A = 3(m_A1 - m)^2 + 3(m_A2 - m)^2 + 3(m_A3 - m)^2
                                   = 3(-20 + 41.67)^2 + 3(-45 + 41.67)^2 + 3(-60 + 41.67)^2
                                   = 2,450 (dB)^2 .

Proceeding along the same lines, we can show that the sums of squares due to factors B, C, and D are, respectively, 950, 350, and 50 (dB)^2. These sums of squares are tabulated in Table 3.4. The sums of squares due to the various factors are analogous to the power in the various harmonics, and they are a measure of the relative importance of the factors in changing the values of η.

Thus, factor A explains a major portion of the total variation of η. In fact, it is responsible for (2450/3800) x 100 = 64.5 percent of the variation of η. Factor B is responsible for the next largest portion, namely 25 percent; and factors C and D together are responsible for only a small portion, a total of 10.5 percent, of the variation in η.

Knowing the factor effects (that is, knowing the values of a_i, b_j, c_k, and d_l), we can use the additive model given by Equation (3.5) to calculate the error term e_i for each experiment i. The sum of squares due to error is the sum of the squares of the error terms. Thus we have

    Sum of squares due to error = sum_{i=1}^{9} e_i^2 .

In the present case study, the total number of model parameters (μ, a_1, a_2, a_3, b_1, b_2, etc.) is 13; the number of constraints, defined by Equations (3.6) and (3.7), is 4. The number of model parameters minus the number of constraints is equal to the number of experiments. Hence, the error term is identically zero for each experiment, and the sum of squares due to error is also zero. Note that this need not be the situation with all matrix experiments.

TABLE 3.4 ANOVA TABLE FOR η

| Factor / Source | Degrees of Freedom | Sum of Squares | Mean Square | F |
|---|---|---|---|---|
| A. Temperature | 2 | 2,450 | 1,225 | 12.25 |
| B. Pressure | 2 | 950 | 475 | 4.75 |
| C. Settling time | 2 | 350* | 175 | |
| D. Cleaning method | 2 | 50* | 25 | |
| Error | 0 | 0 | | |
| Total | 8 | 3,800 | | |
| (Error) | (4) | (400) | (100) | |

* Indicates sum of squares added together to estimate the pooled error sum of squares, indicated by parentheses.
F ratio is calculated by using the pooled error mean square.

Relationship Among the Various Sums of Squares

The orthogonality of the matrix experiment implies the following relationship among the various sums of squares:

    (Total sum of squares) = (sum of the sums of squares due to various factors) + (sum of squares due to error) .    (3.9)

Equation (3.9) is analogous to Parseval's equation for the decomposition of the power of a signal into power in different harmonics. Equation (3.9) is often used for calculating the sum of squares due to error after computing the total sum of squares and the sums of squares due to the various factors. Derivation of Equation (3.9), as well as a detailed mathematical description of ANOVA, can be found in many books on statistics, such as Scheffe [S1], Rao [R3], and Searle [S2].

For the matrix experiment described in this chapter, Equation (3.9) implies:

    (Total sum of squares) = (sum of the sums of squares due to factors A, B, C, and D) + (sum of squares due to error) .

Note that the various sums of squares tabulated in Table 3.4 do satisfy the above equation.

Degrees of Freedom

The number of independent parameters associated with an entity like a matrix experiment, or a factor, or a sum of squares is called its degrees of freedom. A matrix experiment with nine rows has nine degrees of freedom and so does the grand total sum of squares. The overall mean has one degree of freedom and so does the sum of squares due to mean. Thus, the degrees of freedom associated with the total sum of squares is 9 - 1 = 8. (Note that the total sum of squares is equal to the grand total sum of squares minus the sum of squares due to mean.)

Factor A has three levels, so its effect can be characterized by three parameters: a_1, a_2, and a_3. But these parameters must satisfy the constraint given by Equation (3.6). Thus, effectively, factor A has only two independent parameters and, hence, two degrees of freedom. Similarly, factors B, C, and D have two degrees of freedom each. In general, the degrees of freedom associated with a factor is one less than the number of levels.

The orthogonality of the matrix experiment implies the following relationship among the various degrees of freedom:

    (Degrees of freedom for the total sum of squares) = (sum of the degrees of freedom for the various factors) + (degrees of freedom for the error) .    (3.10)

Note the similarity between Equations (3.9) and (3.10). Equation (3.10) is useful for computing the degrees of freedom for error. In the present case study, the degrees of freedom for error comes out to be zero. This is consistent with the earlier observation that the error term is identically zero for each experiment in this case study.

It is customary to write the analysis of variance in the tabular form shown in Table 3.4. The mean square for a factor is computed by dividing the sum of squares by the degrees of freedom.

Estimation of Error Variance

The error variance, which is equal to the error mean square, can then be estimated as follows:

    Error variance = (sum of squares due to error) / (degrees of freedom for error) .    (3.11)

The error variance is denoted by σ_e^2. In the interest of gaining the most information from a matrix experiment, all or most of the columns should be used to study process or product parameters. As a result, no degrees of freedom may be left to estimate the error variance. Indeed, this is the situation with the present example.
In such situations, we cannot directly estimate the error variance. However, an approximate estimate of the error variance can be obtained by pooling the sums of squares corresponding to the factors having the lowest mean square. As a rule of thumb, we suggest that the sums of squares corresponding to the bottom half of the factors (as defined by lower mean square), corresponding to about half of the degrees of freedom, be used to estimate the error mean square or error variance. This rule is similar to considering the bottom half harmonics in a Fourier expansion as error and using the rest to explain the function being investigated. In the present example, we use factors C and D to estimate the error mean square. Together, they account for four degrees of freedom and the sum of their sums of squares is 400. Hence, the error variance is 100. Error variance computed in this manner is indicated by parentheses, and the computation method is called pooling. (By the traditional statistical assumptions, pooling gives a biased estimate of error variance. To obtain a better estimate of error variance, a significantly larger number of experiments would be needed, the cost of which is usually not justifiable compared to the added benefit.)

In Fourier analysis of a signal, it is common to compute the power in all harmonics and then use only those harmonics with large power to explain the signal and treat the rest as error. Pooling of the sums of squares due to the bottom half factors is exactly analogous to that practice. After evaluating the sums of squares due to all factors, we retain only the top half factors to explain the variation in the process response η and use the rest to estimate approximately the error variance.

The estimation of the error variance by pooling will be further illustrated through the applications discussed in the subsequent chapters. As will be apparent from these applications, deciding which factors' sums of squares should be included in the error variance is usually obvious by inspecting the mean square column. The decision process can sometimes be improved by using a graphical data analysis technique called half-normal plots (see Daniel [D1] and Box, Hunter, and Hunter [B3]).

Confidence Intervals for Factor Effects

Confidence intervals for factor effects are useful in judging the size of the change caused by changing a factor level compared to the error standard deviation. As shown in Section 3.3, the variance of the effect of each factor level for this example is (1/3)σ_e^2 = (1/3)(100) = 33.3 (dB)^2. Thus, the width of the two-standard-deviation confidence interval, which is approximately a 95 percent confidence interval, for each estimated effect is +/- 2 sqrt(33.3) = +/- 11.5 dB. In Figure 3.1 these confidence intervals are plotted for only the starting level to avoid crowding.

Variance Ratio

The variance ratio, denoted by F in Table 3.4, is the ratio of the mean square due to a factor and the error mean square. A large value of F means the effect of that factor is large compared to the error variance. Also, the larger the value of F, the more important that factor is in influencing the process response η. So, the values of F can be used to rank order the factors.
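The whole decomposition of Table 3.4, including the pooling rule of thumb and the F ratios, reduces to a few lines of arithmetic. The sketch below reuses the η values and L9 layout used in the earlier sketches; the pooling step simply takes the bottom half of the factors by mean square, exactly as described above.

```python
# ANOVA decomposition for the L9 matrix experiment of Table 3.2.
eta = [-20.0, 0.0, -40.0, -35.0, -45.0, -55.0, -65.0, -45.0, -70.0]
L9 = [(1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
      (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
      (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1)]
factors = "ABCD"

n = len(eta)
m = sum(eta) / n
grand_total_ss = sum(y * y for y in eta)      # 19,425 (dB)^2
ss_mean = n * m * m                           # 15,625 (dB)^2
total_ss = grand_total_ss - ss_mean           # 3,800 (dB)^2

ss, dof = {}, {}
for col, name in enumerate(factors):
    s = 0.0
    for level in (1, 2, 3):
        group = [eta[i] for i, row in enumerate(L9) if row[col] == level]
        s += len(group) * (sum(group) / len(group) - m) ** 2
    ss[name], dof[name] = s, 2                # 3 levels -> 2 degrees of freedom

# Pool the bottom half of the factors (smallest mean squares) as error.
ranked = sorted(factors, key=lambda f: ss[f] / dof[f])
pooled = ranked[: len(factors) // 2]          # here: factors D and C
error_ss = sum(ss[f] for f in pooled)
error_dof = sum(dof[f] for f in pooled)
error_var = error_ss / error_dof              # 100 (dB)^2

for name in factors:
    mean_sq = ss[name] / dof[name]
    print(f"{name}: SS = {ss[name]:7.1f}  MS = {mean_sq:7.1f}  F = {mean_sq / error_var:5.2f}")
print(f"total SS = {total_ss:.1f}, pooled error variance = {error_var:.1f}")
```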
In statistical literature, the F value is often compared with the quantiles of a probability distribution called the F-distribution to determine the degree of confidence that a particular factor effect is real and not just a random occurrence (see, for example, Hogg and Craig [H3]). However, in Robust Design we are not concerned with such probability statements; we use the F ratio only for a qualitative understanding of the relative factor effects. A value of F less than one means the factor effect is smaller than the error of the additive model. A value of F larger than two means the factor effect is not quite small, whereas a value larger than four means the factor effect is quite large.

Interpretation of ANOVA Table

Thus far in this section, we have described the computation involved in the ANOVA table, as well as the inferences that can be made from the table. A variety of computer programs can be used to perform the calculations, but the experimenter must make the appropriate inferences. Here we put together the major inferences from the ANOVA table.

Referring to the sum of squares column in Table 3.4, notice that factor A makes the largest contribution to the total sum of squares, namely, (2450/3800) x 100 = 64.5 percent. Factor B makes the next largest contribution, (950/3800) x 100 = 25.0 percent, to the total sum of squares. Factors C and D together make only a 10.5 percent contribution to the total sum of squares. The larger the contribution of a particular factor to the total sum of squares, the larger the ability is of that factor to influence η.

In this matrix experiment, we have used all the degrees of freedom for estimating the factor effects (four factors with two degrees of freedom each make up all the eight degrees of freedom for the total sum of squares). Thus, there are no degrees of freedom left for estimating the error variance. Following the rule of thumb spelled out earlier in this section, we use the bottom half factors that have the smallest mean square to estimate the error variance. Thus, we obtain the error sum of squares, indicated by parentheses in the ANOVA table, by pooling the sums of squares due to factors C and D. This gives 100 as an estimate of the error variance.

The largeness of a factor effect relative to the error variance can be judged from the F column. The larger the F value, the larger the factor effect is compared to the error variance.

This section points out that our purpose in conducting ANOVA is to determine the relative magnitude of the effect of each factor on the objective function η and to estimate the error variance. We do not attempt to make any probability statements about the significance of a factor as is commonly done in statistics. In Robust Design, ANOVA is also used to choose from among many alternatives the most appropriate quality characteristic and S/N ratio for a specific problem. Such an application of ANOVA is described in Chapter 8. Also, ANOVA is useful in computing the S/N ratio for dynamic problems as described in Chapter 9.

3.5 PREDICTION AND DIAGNOSIS

Prediction of η under Optimum Conditions

As discussed earlier, a primary goal of conducting Robust Design experiments is to determine the optimum level for each factor. For the CVD project, one of the two identified optimum conditions is A1 B2 C2 D2. The additive model, Equation (3.5), can be used to predict the value of η under the optimum conditions, denoted by η_opt,
as follows:

    η_opt = m + (m_A1 − m) + (m_B1 − m)
          = −41.67 + (−20 + 41.67) + (−30 + 41.67)
          = −8.33 dB.                                   (3.12)

Note that since the sums of squares due to factors C and D are small, and these terms are included as error, we do not include the corresponding improvements in the prediction of η under optimum conditions. Why are the contributions by factors having a small sum of squares ignored? Because if we include the contribution from all factors, it can be shown that the predicted improvement in η exceeds the actual realized improvement—that is, our prediction would be biased on the higher side. By ignoring the contribution from factors with small sums of squares, we can reduce this bias. Again, this is a rule of thumb. For more precise prediction, we need to use the appropriate shrinkage coefficients described by Taguchi [T1].

Thus, by Equation (3.12) we predict that the defect count under the optimum conditions would be −8.33 dB. This is equivalent to a mean square defect count of 10^(0.833) = 6.8 (defects/unit area)². The corresponding root-mean-square defect count is √6.8 = 2.6 defects/unit area.

The purpose of taking the log in constructing the S/N ratio can be explained in terms of the additive model. If the actual defect count were used as the characteristic for constructing the additive model, it is quite possible that the defect count predicted under the optimum conditions would have been negative. This is highly undesirable since negative counts are meaningless. However, in the log scale, such negative counts cannot occur. Hence, it is preferable to take the log.

The additive model is also useful in predicting the difference in defect counts between two process conditions. The anticipated improvement in changing the process conditions from the initial settings (A2 B2 C1 D1) to the optimum settings (A1 B1 C2 D2) is

    Δη = η_opt − η_init = (m_A1 − m_A2) + (m_B1 − m_B2)
       = (−20 + 45) + (−30 + 40)
       = 35 dB.                                         (3.13)

Once again we do not include the terms corresponding to factors C and D, for the reasons explained earlier.

Verification (Confirmation) Experiment

After determining the optimum conditions and predicting the response under these conditions, we conduct an experiment with the optimum parameter settings and compare the observed value of η with the prediction. If the predicted and observed η are close to each other, then we may conclude that the additive model is adequate for describing the dependence of η on the various parameters. On the contrary, if the observation is drastically different from the prediction, then we say the additive model is inadequate. This is evidence of a strong interaction among the parameters, which is described later in this section.

Variance of Prediction Error

We need to determine the variance of the prediction error so that we can judge the closeness of the observed η_opt to the predicted η_opt. The prediction error, which is the difference between the observed η_opt and the predicted η_opt, has two independent components. The first component is the error in the prediction of η_opt caused by the errors in the estimates of m, m_A1, and m_B1. The second component is the repetition error of an experiment. Because these two components are independent, the variance of the prediction error is the sum of their respective variances.

Consider the first component. Its variance can be shown to equal (1/n_0)σ²_e, where σ²_e is the error variance whose estimation was discussed earlier, and n_0 is the equivalent sample size for the estimation of η_opt.
The equivalent sample size n_0 can be computed as follows:

    1/n_0 = 1/n + (1/n_A1 − 1/n) + (1/n_B1 − 1/n)       (3.14)

where n is the number of rows in the matrix experiment and n_A1 is the number of times level A1 was repeated in the matrix experiment—that is, n_A1 is the replication number for factor level A1 and n_B1 is the replication number for factor level B1. Observe the correspondence between Equations (3.14) and (3.12). The term (1/n) in Equation (3.14) corresponds to the term m in the prediction Equation (3.12); and the terms (1/n_A1 − 1/n) and (1/n_B1 − 1/n) correspond, respectively, to the terms (m_A1 − m) and (m_B1 − m). This correspondence can be used to generalize Equation (3.14) to other prediction formulae.

Now, consider the second component. Suppose we repeat the verification experiment n_r times under the optimum conditions and call the average η for these experiments the observed η_opt. The repetition error is given by (1/n_r)σ²_e. Thus, the variance of the prediction error, σ²_pred, is

    σ²_pred = (1/n_0)σ²_e + (1/n_r)σ²_e.                (3.15)

In the example, n = 9 and n_A1 = n_B1 = 3. Thus, (1/n_0) = (1/9) + (1/3 − 1/9) + (1/3 − 1/9) = (5/9). Suppose n_r = 4. Then σ²_pred = 80.6 (dB)².

The corresponding two-standard-deviation confidence limits for the prediction error are ±17.96 dB. If the prediction error is outside these limits, we should suspect the possibility that the additive model is not adequate. Otherwise, we consider the additive model to be adequate.

Uniformity of Prediction Error Variance

It is obvious from Equation (3.15) that the variance of the prediction error, σ²_pred, is the same for all combinations of the factor levels in the experimental region. It does not matter whether the particular combination does or does not correspond to one of the rows in the matrix experiment. Before conducting the matrix experiment we do not know what the optimum combination will be. Hence, it is important to have this property of uniform prediction error.

Interactions among Control Factors

The concept of interactions can be understood from Figure 3.3. Figure 3.3(a) shows the case of no interaction between two factors A and B. Here, the lines of the effect of factor A for the settings B1, B2, and B3 of factor B are parallel to each other. Parallel lines imply that if we change the level of factor A from A1 to A2 or A3, the corresponding change in η is the same regardless of the level of factor B. Similarly, a change in the level of B produces the same change in η regardless of the level of factor A. The additive model is perfect for this situation. Figures 3.3(b) and 3.3(c) show two examples of the presence of interaction. In Figure 3.3(b), the lines are not parallel, but the direction of improvement does not change. In this case, the optimum levels identified by the additive model are still valid. In Figure 3.3(c), not only are the lines not parallel, but the direction of improvement is also not consistent. In such a case, the optimum levels identified by the additive model can be misleading. The type of interaction in Figure 3.3(b) is sometimes called synergistic interaction, while the one in Figure 3.3(c) is called antisynergistic interaction. The concept of interaction between two factors described above can be generalized to apply to interaction among three or more factors.

When interactions between two or more factors are present, we need cross-product terms to describe the variation of η in terms of the control factors.
A model for such a situation needs more parameters than an additive model and, hence, it needs more experiments to estimate all the parameters. Further, as discussed in Chapter 6, using a model with interactions can have problems in the field. Thus, we consider the presence of interactions to be highly undesirable and try to eliminate them. When the quality characteristic is correctly chosen, the S/N ratio is properly con- structed, and the control factors are judiciously chosen (see Chapter 6 for guidelines), the additive model provides excellent approximation for the relationship between 1) and the control factors. The primary purpose of the verification experiment is to warn usSec.36 © Summary 63 (@) No Interaction (0) Synergistic (6) Antisynergistic Interaction Interaction Figure 3.3. Examples of interaction. when the additive model is not adequate and, thus, prevent faulty process and product designs from going downstream. Some applications call for a broader assurance of the additive model. In such cases, the verification experiment consists of two or more con- ditions rather than just the optimum conditions. For the additive model to be con- sidered adequate, the predictions must match the observation under all conditions that are tested. Also, in certain situations, we can judge from engineering knowledge that particular interactions are likely to be important. ‘Then, orthogonal arrays can be suit- ably constructed to estimate those interactions along with the main effects, as described in Chapter 7. 3.6 SUMMARY + A matrix experiment consists of a set of experiments where the settings of several product or process parameters to be studied are changed from one experi- ment to another. + Matrix experiments are also called designed experiments, parameters are also called factors, and parameter settings are also called levels. * Conducting matrix experiments using orthogonal arrays is an important technique in Robust Design. It gives more reliable estimates of factor effects with fewer experiments when compared to the traditional methods, such as one factor at @ time experiments. Consequently, more factors can be studied in given R&D resources, leading to more robust and less expensive products.Matrix Experiments Using Orthogonal Arrays Chap. 3 * The columns of an orthogonal array are pairwise orthogonal—that is, for every pair of columns, all combinations of factor levels occur an equal number of times. ‘The columns of the orthogonal array represent factors to be studied and the rows represent individual experiments. * Conducting a matrix experiment with an orthogonal array is analogous to finding the frequency response function of a dynamic system by using a multifrequency input. ‘The analysis of data obtained from matrix experiments is analogous to Fourier analysis. + Some important terms used in matrix experiments are: The region formed by the factors being studied and their alternate levels is called the experimental region. ‘The starting levels of the factors are the levels used before conducting the matrix experiment. ‘The main effects of the factors are their separate effects. If the effect of a factor depends on the level of another factor, then the two factors are said to have an interaction. Otherwise, they are considered to have no interac- tion. ‘The replication number of a factor level is the number of experiments in the matrix experiment that are conducted at that factor level. The effect of a fac- tor level is the deviation it causes from the overall mean response. 
‘The optimum level of a factor is the level that gives the highest S/N ratio. + An additive model (also called superposition model ot variables separable ‘model) is used to approximate the relationship between the response variable and the factor levels. Interactions are considered errors in the additive model. * Orthogonal array based matrix experiments are used for a variety of purposes in Robust Design. They are used to: — Study the effects of control factors — Study the effects of noise factors — Evaluate the S/N ratio — Determine the best quality characteristic or S/N ratio for particular applica- tions + Key steps in analyzing data obtained from a matrix experiment are: 1, Compute the appropriate summary statistics, such as the S/N ratio for each experiment. 2, Compute the main effects of the factors. 3. Perform ANOVA to evaluate the relative importance of the factors and the error variance. 4. Determine the optimum level for each factor and predict the S/N ratio for the optimum combination,Sec.36 Summary 65 5, Compare the results of the verification experiment with the prediction. If the results match the prediction, then the optimum conditions are con- sidered confirmed; otherwise, additional analysis and experimentation are needed. ‘+ If one or more experiments in a matrix experiment are missing or erroneous, then those experiments should be repeated to complete the matrix. This avoids the need for complicated analysis. ‘+ Matrix experiment, followed by a verification experiment, is a powerful tool for detecting the presence of interactions among the control factors. If the predicted response under the optimum conditions does not match the observed response, then it implies that the interactions are important. If the predicted response ‘matches the observed response, then it implies that the interactions are probably not important and that the additive model is a good approximation,Chapter 4 STEPS IN ROBUST DESIGN ‘As explained in Chapter 2, optimizing a product or process design means determining the best architecture, levels of control factors, and tolerances. Robust Design is a methodology for finding the optimum settings of the control factors to make the prod- uct or process insensitive to noise factors. It involves eight steps that can be grouped into the three major categories of planning experiments, conducting them, and analy2- ing and verifying the results. *+ Planning the experiment 1) Identify the main function, side effects, and failure modes. 2) Identify noise factors and the testing conditions for evaluating the quality loss. 3) Identify the quality characteristic to be observed and the objective function to be optimized. 4) Identify the control factors and their alternate levels. 5) Design the matrix experiment and define the data analysis procedure. + Performing the experiment 6) Conduct the matrix experiment, 6768 ‘Steps in Robust Design Chap. 4 + Analyzing and verifying the experiment results 7) Analyze the data, determine optimum levels for the control factors, and predict performance under these levels. 8) Conduct the verification (also called confirmation) experiment and plan future actions. These eight steps make up a Robust Design cycle. We will illustrate them in this chapter by using a case study of improving a polysilicon deposition process. 
‘The case study was conducted by Peter Hey in 1984 as a class project for the first offering Of the 3-day Robust Design course developed by the author, Madhav Phadke, and Chris Sherrerd, Paul Sherry, and Rajiv Keny of AT&T Bell Laboratories. Hey and Sherry jointly planned the experiment and analyzed the data. The experiment yielded a 4-fold reduction in the standard deviation of the thickness of the polysilicon layer and nearly two orders of magnitude reduction in surface defects, a major yield-limiting problem which was virtually eliminated. These results were achieved by studying the effects of six control factors by conducting experiments under 18 distinct combinations of the levels of these factors—a rather small investment for huge benefits in quality and yield. This chapter consists of nine sectio * Sections 4.1 through 4,8 describes in detail the polysilicon deposition process case study in terms of the eight steps that form a Robust Design cycle. * Section 4.9 summarizes the important points of this chapter. 4.1 THE POLYSILICON DEPOSITION PROCESS AND ITS MAIN FUNCTION Manufacturing very large scale intergrated (VLSI) circuits involves about 150 major steps. Deposition of polysilicon comes after about half of the steps are complete, and, as a result, the silicon wafers (thin disks of silicon) used in the process have a significant amount of value added by the time they reach this step. The polysilicon layer is very important for defining the gate electrodes for the transistors. ‘There are ‘over 250,000 transistors in a square centimeter chip areafor the 1.75 micron (microme- ter = micron) design rules used in the case study. A hot-wall, reduced-pressure reactor (see Figure 4.1) is used to deposit polysili- ‘con on a wafer. ‘The reactor consists of a quartz tube which is heated by a 3-zone fur- nace, Silane and nitrogen gases are introduced at one end and pumped out the other. ‘The silane gas pyrolizes, and a polysilicon layer is deposited on top of the oxide layer on the wafers. The wafers are mounted on quartz carriers. Two carriers, each carrying 25 wafers, can be placed inside the reactor at a time so that polysilicon is deposited simultaneously on 50 wafers.‘Sec. 4.1. The Polysiicon Deposition Process and its Main Function 69 Pressure Sensor Figure 4.1 Schematic diagram of a reduced pressure reactor. The function of the polysilicon deposition process is to deposit a uniform layer of a specified thickness. In the case study, the experimenters were interested in achiev- ing 3600 angstrom(A ) thickness (1A = 10"? meter). Figure 4.2 shows a cross sec- tion of the wafer after the deposition of the polysilicon layer. ‘nterievel Dialectic SiO, 2300A P-doped Polysilicon 3600 \\ SiO, Gate Layer 360A, Suberaia RRR ‘SiO, Gate Layer 360A \ oped Plyicon 00k Figure 4.2. Cross section of a wafer showing polysilicon layer. At the start of the study, two main problems occurred during the deposition pro- cess: (1) too many surface defects (see Figure 4.3) were encountered, and (2) too large70 ‘Steps in Robust Design Chap. 4 a thickness variation existed within wafers and among wafers. In a subsequent VLSI ‘manufacturing step, the polysilicon layer is pattemed by an etching process to form lines of appropriate width and length. Presence of surface defects causes these lines to have variable width, which degrades the performance of the integrated circuits. 
‘The nonuniform thickness is detrimental to the etching process because it can lead to resid- ual polysilicon in some areas and an etching away of the underlying oxide layer in other areas. Figure 4.3 Photographs of polysilicon surface showing surface defects. Prior to the case study, Hey noted that the surface-defect problem was crucial because a significant percentage of wafers were scrapped due to excessive defects. Allso, he observed that controlling defect formation was particularly difficult due to its intermittent occurrence; for example, some batches of wafers (50 wafers make one batch) had approximately ten defects per unit area, while other batches had as many as 5,000 defects per unit area. Furthermore, no theoretical models existed to predict defect formation as a function of the various process parameters; therefore, experi- ‘mentation was the only way to control the surface-defect problem. However, theSec. 42 Noise Factors and Testing Conditions n intermittency of the problem had rendered the traditional method of experimentation, where only one process parameter is changed at a time, virtually useless. 4.2 NOISE FACTORS AND TESTING CONDITIONS To minimize sensitivity to noise factors, we must first be able to estimate the sensi- tivity in a consistent manner for any combination of the control factor levels. This is achieved through proper selection of testing conditions. In a Robust Design project, we identify all noise factors (Factors whose levels cannot be controlled during manufacturing, which are difficult to control, or expensive to control), and then select a few testing conditions that capture the effect of the more important noise factors. Simulating the effects of all noise factors is impractical because the experimenter may not know all the noise sources and because total simula- tion would require too many testing conditions and be costly. Although it is not neces- sary to include the effect of all noise factors, the experimenter should list as many of them as possible and, then, use engineering judgment to decide which are more impor- tant and what testing conditions are appropriate to capture their effects. Various noise factors exist in the deposition process. The nonuniform thickness and the surface defects of the polysilicon layer are caused by the variations in the parameters involved in the chemical reactions associated with the deposition process. First, the gases are introduced at one end of the reactor (see Figure 4.1). As they travel to the other end, the silane gas decomposes into polysilicon, which is deposited on the wafers, and into hydrogen. This activity causes a concentration gradient along the length of the reactor. Further, the flow pattern (direction and speed) of the gases need not be the same as they travel from one end of the tube to the other, The flow pattem could also vary from one part of a wafer to other parts of the same wafer. Another important noise factor is the temperature variation along the length and cross section of the tube. There are, of course, other sources of variation or noise factors, such as topography of the wafer surface before polysilicon deposition, variation in pumping speed, and variation in gas supply. For the case study of the polysilicon deposition process, Hey and Sherry decided to process one batch of 50 wafers to evaluate the quality associated with each combina- tion of control factor settings suggested by the orthogonal array experiment. 
Of these 50 wafers, only 3 were test wafers, while the remaining 47 were dummy wafers, which provided the needed "full load” effect while saving the cost of expensive test wafers. To capture the variation in reactant concentration, flow pattem variation, and tempera- ture variation along the length of the tube, the test wafers were placed in positions 3, 23, and 48 along the tube. Furthermore, to capture the effect of noise variation across 1a wafer, the thickness and surface defects were measured at three points on each test wafer: top, middle, and bottom. Other noise factors were judged to be less important. To include their effect, the experimenters would have had to process multiple batches, thus making the experiments very expensive. Consequently, the other noise factors ‘were ignored,2 ‘Stops in Robust Design Chap. 4 The testing conditions for this case study are rather simple: observe thickness and surface defects at three positions of three wafers, which are placed in specific posi- tions along the length of the reactor. Sometimes orthogonal arrays (called noise orthogonal arrays) are used to determine the testing conditions that capture the effect of many noise factors. In some other situations, the technique of compound noise fac- tor is used. These two techniques of constructing testing conditions are described in Chapter 8. 4.3 QUALITY CHARACTERISTICS AND OBJECTIVE FUNCTIONS It is often tempting to observe the percentage of units that meet the specification and use that percentage directly as an objective function to be optimized, But, such temp- tation should be meticulously avoided. Besides being a poor measure of quality loss, using percentage of good (or bad) wafers as an objective function leads to orders of magnitude reduction in efficiency of experimentation. First, to observe accurately the percentage of "good" wafers, we need a large number (much larger than three) of test wafers for each combination of control factor settings. Secondly, when the percentage of good wafers is used as an objective function, the interactions among control factors often become dominant; consequently, additive models cannot be used as adequate approximations. The appropriate quality characteristics to be measured for the polysili- con deposition process in the case study were the polysilicon thickness and the surface defect count. The specifications were that the thickness should be within + 8 percent Of the target thickness and that the surface defect count should not exceed 10 per square centimeter. As stated in Section 4.2, nine measurements (3 wafers x 3 measurements per wafer) of thickness and surface defects were taken for each combination of control fac- tor settings in the matrix experiment. The ideal value for surface defects is zer0—the smaller the number of surface defects per cm?, the better the wafer. So, by adopting the quadratic loss function, we see that the objective function to be maximized is =10 logyo (mean square surface defects) 1s 10 login} 9X Lyi (4.1) where yi is the observed surface defect count at position j on test wafer i. Note that Jl, 2, and 3 stand for top, center, and bottom positions, respectively, on a test wafer. ‘And i=1, 2, and 3 refer to position numbers 3, 23, and 48, respectively, along the length of the tube. Maximizing 9 leads to minimization of the quality loss due to sur- face defects.Sec.43 Quality Characteristics and Objective Functions B ‘The target value in the study for the thickness of the polysilicon layer was % = 3600 A. 
Let t; be the observed thickness at position j on test wafer i, The ‘mean and variance of the thickness are given by (42) o 43) ‘The goal in optimization for thickness is to minimize variance while keeping the ‘mean on target. This is a constrained optimization problem, which can be very difficult, especially when many control factors exist. However, as Chapter 5 shows, when a scaling factor (a factor that increases the thickness proportionally at all points on the wafers) exists, the problem can be simplified greatly. In the case study, the deposition time was a clear scaling factor—that is, for every surface area where polysilicon was deposited, (thickness) = (deposition rate) x (deposition time). The deposition rate may vary from one wafer to the next, or from ‘one position on a wafer to another position, due to the various noise factors cited in the previous section. However, the thickness at any point is proportional to the deposition time. Thus, the constrained optimization problem in the case study can be solved in two steps as follows: 1, Maximize the Signal-to-noise (S/N) ratio, 1, 2 MY = 10 logio 5 (4a) Adjust the deposition time so that mean thickness is on target. In summary, the two quality characteristics to be measured were the surface defects and the thickness. The corresponding objective functions to be maximized were 1] and 1’ defined by Equations (4.1) and (4.4), respectively. (Note that S/N ratio is a general term used for measuring sensitivity to noise factors. It takes a different form depending on the type of quality characteristic, as discussed in detail in Chapter 5. Both 7 and 1 are different types of S/N ratios.) The economics of a manufacturing process is determined by the throughput as well as by the quality of the products produced. Therefore, along with the quality characteristics, a throughput characteristic also must be studied. Thus, in the case study, the experimenters also observed the deposition rate, r, measured in angstroms of thickness growth per minute,1” ‘Steps in Robust Design Chap. 4 4.4 CONTROL FACTORS AND THEIR LEVELS Processes, such as polysilicon deposition, typically have a large number of control fac- tors (factors that can be freely specified by the process designer). The more complex process, the more control factors it has and vice versa. Typically, we choose six to eight control factors at a time to optimize a process. For each factor we generally select two or three levels (or settings) and take the levels sufficiently far apart so that a wide region can be covered by the three levels. Commonly, one of these levels is taken to be the initial operating condition. Note that we are interested in the nonlinear- ity, so taking the levels of control factors too close together is not very fruitful. If we take only two levels, curvature effects would be missed, whereas such effects can be identified by selecting three levels for a factor (see Figure 4.4). Furthermore, by select- ing three levels, we can simultancously explore the region on either side of the initial operating condition. Hence, we prefer three levels. 1 1 Ay Ar Aa Ay A As (a) With two points we, (0) With three points we can only fita straight ‘can identity curvature line. fffects and, hence, peaks. Figure 44 Linear and curvature effects of a factor. In the case study, six control factors were selected for optimization. ‘These fac- tors and their alternate levels are listed in Table 4.1. 
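Before describing the individual factors, the point made above about three levels and curvature can be illustrated numerically. The level means below are hypothetical, not the case-study data, and the sketch assumes NumPy is available; fitting both a straight line and a quadratic through three level means shows how the quadratic term exposes a peak that a two-level study would miss.

```python
# Illustrative sketch (hypothetical numbers): two levels support only a
# straight-line fit, while three levels reveal curvature and hence peaks.
import numpy as np

levels = np.array([1.0, 2.0, 3.0])          # coded settings A1, A2, A3
eta = np.array([-50.0, -44.0, -47.0])       # hypothetical mean S/N ratios (dB)

linear = np.polyfit(levels, eta, 1)          # slope, intercept
quad = np.polyfit(levels, eta, 2)            # curvature, slope, intercept

print("linear fit slope:", linear[0])
print("quadratic curvature coefficient:", quad[0])
# A nonzero (here negative) curvature coefficient indicates the response
# peaks between the extreme settings -- information two levels cannot give.
```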
The deposition temperature (A) is the steady state temperature at which the deposition takes place. When the wafers are placed in the reactor, they first have to be heated from room temperature to the deposi- tion temperature and then held at that temperature. ‘The deposition pressure (B) is the constant pressure maintained inside the reactor through appropriate pump speed and butterfly adjustment. ‘The nitrogen flow (C) and the silane flow (D) are adjusted using the corresponding flow meters on gas tanks. Settling time (E) is the time between placing the wafer carriers in the reactors and the time at which gases flow. The set- {ling time is important for establishing thermal and pressure equilibrium inside theSec.44 Control Factors and Their Levels 7 reactor before the reaction is allowed to start, Cleaning method (F) refers to cleaning the wafers prior to the deposition step. Before undertaking the case study experiment, the practice was to perform no cleaning. The altemate two cleaning methods the experimenters wanted to study were CM, performed inside the reactor, and CM 3, per- formed outside the reactor. TABLE 4.1 CONTROL FACTORS AND THEIR LEVELS Levelt Factor fa 2 3 A, Deposition temperature (C) | To-25 | Ty | T9425 B. Deposition pressure (tor) | Po-200 | Pe | Po +200 . Nitrogen flow (sm) Ny | Mo-t50 | No=78 D, Silane flow (sem) 59-100 | 54-50 | Sy E, Seting time (nin) te toh | +16 F. Cleaning method None | cM; — | omy * Staning levels are identitied by underscore. While deciding on the levels of control factors, a frequent tendency is to choose the levels relatively close to the starting levels. This is due to the experimenter’s con- cem that a large number of bad products may be produced during the matrix experi- ment. But, producing bad products during the experiment stage may, in fact, be beneficial because it tells us which region of control factor levels should be avoided, Also, by choosing levels that are wide apart, we increase the chance of capturing the nonlinearity of the relationship between the control factors and the noise factors, and, thus, finding the levels of control factors that minimize sensitivity to noise. Further, when the levels are wide apart, the factor effects are large when compared to the exper imental errors. As a result, the factor effects can be identified without too many repeti- tions. ‘Thus, it is important to resist the tendency to choose control factor levels that are rather close. Of course, during subsequent refinement experiments, levels closer to each other could be chosen. In the polysilicon deposition case study, the ratio of the largest to the smallest levels of factors B,C, D, and, E was between three and five which represents a wide variation. Temperature variation from (T 9-25) °C to (To+25) °C also represents a wide range in terms of the known impact on the deposi- tion rate,76 ‘Stops in Robust Design Chap. 4 The initial settings of the six control factors are indicated by an underscore in Table 4.1. The objective of this project was to determine the optimum level for each factor so that 7) and 1’ are improved, while ensuring simultaneously that the deposition rate, r, remained as high as possible. Note that the six control factors and their selected settings define the experimental region over which process optimization was done. 4.5 MATRIX EXPERIMENT AND DATA ANALYSIS PLAN An efficient way to study the effect of several control factors simultaneously is to plan matrix experiments using orthogonal arrays. 
As pointed out in Chapter 3, orthogonal arrays offer many benefits. First, the conclusions arrived at from such experiments are valid over the entire experimental region spanned by the control factors and their set- tings. Second, there is a large saving in the experimental effort. Third, the data analysis is very easy. Finally, it can detect departure from the additive model. ‘An orthogonal array for a particular Robust Design project can be constructed from the knowledge of the number of control factors, their levels, and the desire to study specific interactions. While constructing the orthogonal afray, we also take into account the difficulties in changing the levels of control factors, other physical limita- tions in conducting experiments, and the availability of resources. In the polysilicon deposition case study, there were six factors, each at three levels. The experimenters found no particular reason to study specific interactions and no unusual difficulty in changing the levels of any factor. The available resources for conducting the experi- ments were such that about 20 batches could be processed and appropriate measure- ments made, Using the standard methods of constructing orthogonal arrays, which are described in Chapter 7, the standard array Lg was selected for this matrix experiment. The Lyg orthogonal array is given in Table 4.2. It has eight columns and eigh- teen rows. The first column is a 2-level column—that is, it has only two distinct entries, namely 1 or 2. All the chosen six control factors have three levels. So, column 1 was kept empty or unassigned. From the remaining seven 3-level columns, column 7 was arbitrarily designated as an empty column, and factors A through F were assigned, respectively, to columns 2 through 6 and 8. (Note that keeping one or more columns empty does not alter the orthogonality property of the array. Thus, the matrix formed by columns 2 through 6 and 8 is still an orthogonal array. But, if one or more rows are dropped, the orthogonality is destroyed.) The reader can verify the ortho- gonality by checking that for every pair of columns all combinations of levels occur, and they occur an equal number of times. The 18 rows of the Lig array represent the 18 experiments to be conducted. Thus, experiment 1 is to be conducted at level 1 for each of the six control factors. These levels can be read from Table 4.1. However, to make it convenient for the ‘experimenter and to prevent translation errors, the entire matrix of Table 4.2 should beSec. 4.5 Matrix Experiment and Data Analysis Plan 7 translated using the level definitions in Table 4.1 to create the experimenter’s log sheet shown in Table 4.3. TABLE 4.2 Lj, ORTHOGONAL ARRAY AND FACTOR ASSIGNMENT Column Numbers and Factor Assignment® 3 B 4 c 5 D 6 E uu 2 1 1 1 1 8 1“ 15 16 7 18 * Empty columns are identified by ,TABLE 4.3 EXPERIMENTER'S LOG ‘Stops in Robust Design Expt. Setting TNO. | Temperature | Pressure | Nitrogen | Silane | "Time 1 | 7-25 | Po~200 | No 50100 | to a te |r Np 180 tot8 * | CMs 3 | 1-25 | P9420 | %y-75 | sy to+16 | CM alt P.~200 | No 55-50 | toe | CM, Ons Po No~150 | So to416 | None 6 |t 44200 | No~75 to | eM 7 | 14425 =200 | N~150 | 5-100 | 49416 | cM, 8 | rss | No~75 | 50-50 | to | None 9 | ys25 | pys20| x | So tort | CM 10 | 75-25 | Po-200 | No-75 | So tot8 | None a ||P No So=100 | to+16 | CMs 12 | 74-25 | P+200 | No~150 | 59-80 | % | CM 3 [7 200 | No~150 | Sy % |r Po | No-75 | $5100 | +8 | cm, 1s | 7. 
Po+200| No | S80 | #0416 | None 16 | 70425 | Po-200 | No~75 | $5-50 | torte | CMs a7 | 1425 | pp | Me So Jt fem [| 1425 | pow2mn | No=150 | 59-100 | vo | None Chap. 4 Now we combine the experimenter’s log sheet with the testing conditions described in Section 4.2 to create the following experimental procedure: 1. Conduct 18 experiments as specified by the 18 rows of Table 4.3. 2. For each experiment, process one batch, consisting of 47 dummy wafers and three test wafers. The test wafers should be placed in positions 3, 23, and 48.Sec. 4.8 Conducting the Matrix Experiment 79 3. For each experiment, compute to your best ability the deposition time needed to achieve the target thickness of 3600A.. Note that in the experiment the actual thickness may turn out to be much different from 360A. However, such data are perfectly useful for analysis. Thus, a particular experiment need not be redone by adjusting the deposition time to obtain 3600A thickness. 4. For each experiment, measure the surface defects and thickness at three specific points (top, center, and bottom) on each test wafer. Follow standard laboratory practice to prepare data sheets with space for every observation to be recorded, 4.6 CONDUCTING THE MATRIX EXPERIMENT From Table 4.3 it is apparent that, from one experiment to the next, levels of several control factors must be changed. This poses a considerable amount of difficulty to the experimenter. Meticulousness in correctly setting the levels of the various control fac- tors is critical to the success of a Robust Design project. Let us clarify what we mean by meticulousness. Going from experiment 3 to experiment 4 we must change tem- perature from (T)~25) °C to To °C, pressure from (Po +200) mtorr to (Po -200) mtorr, and so on, By meticulousness we mean ensuring that the temperature, pressure, and other dials are set to their proper levels. Failure to set the level of a factor correctly could destroy the valuable property of orthogonality. Consequently, conclu- sions from the experiment could be erroneous. However, if an inherent error in the equipment leads to an actual temperature of (Tq —1) °C of (To +2) °C when the dial is set at Ty °C, we should not bother to correct for such variations, Why? Because unless we plan to change the equipment, such variations constitute noise and will con- tinue to be present during manufacturing. If our conclusions from the matrix experi- ment are to be valid in actual manufacturing, our results must not be sensitive to such inherent variations. By keeping these variations out of our experiments, we lose the ability to test for robustness against such variations. The matrix experiment, coupled with the verification experiment, has a built-in check for sensitivity to such inherent variations. A difficulty in conducting matrix experiments is their radical difference from the current practice of conducting product or process design experiments. One common practice is to guess, using engineering judgment, the improved settings of the control factors and then conduct a paired comparison with the starting conditions. The guess- and-test cycle is repeated until some minimum improvement is obtained, the deadline is reached, or the budget is exhausted. This practice relies heavily on luck, and it is inefficient and time-consuming, Another common practice is to optimize systematically one control factor at a time. 
Suppose we wish to determine the effect of the three temperature settings while keeping the settings of the other control factors fixed at their starting levels. To reduce the effect of experimental error, we must process several batches at each temperature80 Steps in Robust Design Chap. 4 setting. Suppose six batches are processed at each temperature setting. (Note that in the Lyg array the replication number is six; that is, there are six experiments for each factor level.) Then, we would need 18 batches to evaluate the effect of three tempera- ture settings. For the other factors, we need to experiment with the two alternate levels, so that we need to process 12 batches each. Thus, for the six factors, we would need to process 18 + 5 x 12 = 78 batches. This is a large number compared to the 18 batches needed for the matrix experiment. Further, if there are strong interactions among the control factors, this method of experimentation cannot detect them. The matrix experiment, though somewhat tedious to conduct, is highly cfficient—that is, when compared to the practices above, we can generate more depend- able information about more control factors with the same experimental effort. Also, this method of experimentation allows for the detection of the interactions among the control factors, when they are present, through the verification experiment. In practice, many design improvement experiments, where only one factor is studied at a time, get terminated after studying only a few control factors because both the R&D budget and the experimenter’s patience run out. As a result, the quality improvement turns out to be only partial, and the product cost remains somewhat high. This danger is reduced greatly when we conduct matrix experiments using orthogonal arrays. In the polysilicon deposition case study, the 18 experiments were conducted according to the experimenter’s log given in Table 4.3. It took only nine days (2 ‘experiments pet day) to conduct them. The observed data on surface defects are listed in Table 4.4(a), and the thickness and deposition rate data are shown in Table 4.4(b). The surface defects were measured by placing the specimen under an optical micro- scope and counting the defects in a field of 0.2 cm?, When the count was high, the field area was divided into smaller areas, defects in one area were counted, and the count was then multiplied by an appropriate number to determine the defect count per unit area (0.2 cm*). The thickness was measured by an optical interferometer. ‘The deposition rate was computed by dividing the average thickness by the deposition time, 4.7 DATA ANALYSIS The first step in data analysis is to summarize the data for each experiment, For the case study, these calculations are illustrated next. For experiment number 1, the S/N ratio for the surface defects, given by Equa- tion (4.1), was computed as follows: 1323 10 loge | 5 EE viiSec.4.7 Data Analysis a (274+ +17)+(22+0+0) + (7+ +0?) 9 From the thickness data, the mean, variance, and S/N ratio were calculated as follows by using Equations (4.2), (4.3) and (4.4): 10 logio { ele 10 logio | = 051 33 See Eq, (4.2) p Syst i ist (2029 + 1975 + 1961) + (1975 +1934+1907) + (1952+ 1941 +1949)| 1 9 = 1958.1 A See Eq. (4.3) o? =t (2029- 1958.1)? + + -+ (1949-1958, v) = 1151.36 (Ay. 2 WY = 10 logo 5 1958.1? 1151.36 = 10 logio = 35.22 4B.‘Stops in Robust Design Chap. 4 ‘TABLE 4.4(a) SURFACE DEFECT DATA (OEFECTS/UNIT AREA) Test Wafer 1 ‘Test Water 2 ‘Test Water 3 Expt. : 7 Na. 
| Top | Center| Bottom | Top | Center| Bottom | Top | Center| Bottom 1 a o 1 2 “0 0 Et it 0 2} sf 2] 8/1) s] of 6) 3] 1 3] 3{ 35] 106 | 360] 38] 13s | 315) so] 180 4} 6) ws] 6] i} 2] ww} as) a} os | 1720} 1980 | 2000 | 487] 810 | 00 | 2020) 360) 13 6 | 135] 360 | 1620 |2430| 207] 2 | 2800 270 | 35 7 | 360 sio | 121s | 1620] 117 | 30 | 1800) 720 | 315 8 | 270| 2730 | sooo | 360} 1] 2 | 99%] 25} 1 me] 3) 0] 0] 3) 6] 6] 1) 6] ji) o) 1) 5) 6) 6] | 6) a] | toon) on | ig | 5 | fom) a | s w|i] 2] 20 [sol iw] 1] 2s| 3] 0 uw} 3| a} a2} 0] 6] 1] o] is] 3 1s | 450| 1200 | 1800 |2s30| 2080 | 2080 | 1890] 180 | 25 ie | 5) 6 | |) e) 6| al i]s i 17 | 1200] 3500 | 300 | 1000] 3] 1 |9999| 600] 8 18 | 8000} 2500 | 3500 | s000| 1000 | 1000 | 5000} 2000 | 2000S0c.4.7 Data Analysis TABLE 4.4(0) THICKNESS AND DEPOSITION RATE DATA ‘Thickness (A) Test Wafer 1 Test Wafer 2 Test Water 3 [— i] Deposition Expt. “fale ‘Nex | Top [Center| Bottom | Top |Center| Hotiom | Top | Center| Botom| (Amin) 1 [ams] i975 | i961 J975 | 1934 | 1907 [i9s2] 198i | 1909 | 4s 2 | sans] sisi | so4z | szor | asa | sso /ss03] ssor | soo | 360 3 | soe seve | sera |is2] soi0 | sexs [077] sous | som] ans 4 [2118] 2109 | 2099 J2i40] 2125 | 2108 | 249] 2130 | am | 36. ss | aoa] sisa | 417s |4ss6| as0s | 4560 [031 | soio | sox | 720 6 | 022} 2992 | 2913 | 2833] 2837 | 2508 f2ms| 2875 | agar | a9 7 | 3020] 3082 | sone | 486] s033 | 2389 [3709] aon | 3687 | 756 | 4707] sere | 4336 [4407] sis | 4004 |s07s| 4998 | 4599 | 1054 9 | 859] ae22 | 3850 | en | a922 ) 900 Jart0| 067 | an1o | 1150 10 | 3227| sans | 242 | 3408 | 3450 | 3420 | 500 aso | sas | 248 11} a1] 2459 | 2499 | 2576 2537 | asi2 |sst| 2ss2 | 2870 | 200 12 | s921| s7as | seas | 780] soss | seis | s001| s777 | s7as | 390 13 | 2792] 2752 | 2716 [anes] 2635 | 2606 2765] 2786 | 2773 | sa 14 | 2963) 2535 | 2859 [2529] 2864 | zea | 2601 | 2ess | ase | 45.7 1s | sais] s149 | 124 [261 | 205 | 3225 |2a1 s1a9 | s1s7 | saa 16 | 3020| s008 | 01s |s072| sisi | 3139 | 3235) s162 | a0 | 768 17 | 4277] 150 | 992 [808 | 6s1 | asre |4593] a298 | «219 | 1053 us | 5125] aus | s127 | 3567] 3563 | 3820 | 4120] sons | aise | 9143 Steps in Robust Design Chap. 4 ‘The deposition rate in the decibel scale for experiment 1 is given by 1” = 10 logyo r? 20 logio r = 20 logio(14.5) = 23.23 dBam ‘where dBam stands for decibel A /min. ‘The data summary for all 18 experiments was computed in a similar fashion and the results are tabulated in Table 4.5. Observe that the mean thickness for the 18 experiments ranges from 1958 A to 5965 A. But we are least concemed about this variation in the thickness because the average thickness can be adjusted easily by changing the deposition time. During a Robust Design project, what we are most interested in is the S/N ratio, which in this case is a measure of variation in thickness as a proportion of the mean thickness. Hence, no further analysis on the mean thickness was done in the case study, but the mean thickness, of course, was used in computing the deposition rate, which was of interest. After the data for each experiment are summarized, the next step in data analysis is to estimate the effect of each control factor on each of the three characteristics of interest and to perform analysis of variance (ANOVA) as described in Chapter 3. The factor effects for surface defects (1), thickness (1), and deposition rate (n”), and the respective ANOVA are given in Tables 4.6, 4.7, and 4.8, respectively. 
A sum- mary of the factor effects is tabulated in Table 4.9, and the factor effects are displayed graphically in Figure 4.5, which makes it easy to visualize the relative effects of the Various factors on all three characteristics. To assist the interpretation of the factor effects plotted in Figure 4.5, we note the following relationship between the decibel scale and the natural scale for the three characteristics: * An increase in 7) by 6 dB is equivalent to a reduction in the root mean square surface defects by a factor of 2. An increase in 7) by 20 dB is equivalent to a reduction in the root mean square surface defects by a factor of 10. + The above statements are valid if we substitute 1)’ or 17” for 1, and standard deviation of thickness or deposition rate for root mean square surface defects. The task of determining the best setting for each control factor can become com- plicated when there are multiple characteristics to be optimized. This is because dif- ferent levels of the same factor could be optimum for different characteristics. ‘The quality loss function could be used to make the necessary trade-offs when different characteristics suggest different optimum levels. For the polysilicon deposition caseS00. 4.7 Data Analysis. TABLE 4.5 DATA SUMMARY BY EXPERIMENT ‘Surface| Deposition Experiment Condition | Defects | Thickness Rate i Mateixt aie fw v Na [eapcDEeF| cam [Ay | cB) | Bam vfrrrraaii] ost[ ise] 522 | 22 2 [11222222] -s730 | s2s5| 3576 | s127 3 [01333333 | 4517 | 5965 | 602 | 3238 4 02112233 | -2576 |r | 4225 | tas s [12223311] -0s6]as| a4) sar 6 [12331122 | -6225 | 2801 | 3291 | s389 [7 [13121323 | -s99e | 3375 | 2139 | 3768 | 8 [13232131 |-r16 | 4s27| 2284 | sons 9 [133132126815 | 3096 | 3060 | 4121 to [21133221| -s47 [ses | 2685 | 2709 m 21211332] -so8 {2535 | 880 | 2602 [21322113 |-s895 | sve | 3806 | s182 ws [22123132 |-4938 [2s] 3207 | 3450 22231213 |~3654 | 2852 | 4334 | 3320 is [22312321 |-6418 | 3201] 3744 | 3476 te | 23.13.2312 |-2731 | 3105] 386] 3771 17 [23213123 |-n151 | 4074] 2201 | soas w | 23321231 | 7200} 3596] 1842 | s922 * Empty column is denoted by e86 ‘Stops in Robust Design Chap. 4 8 4 11 ==10 log,» (mean square surface defects) 24 1 po fA 504 as As AaAy By ByBy C20,C, 0, 0,0, EE, Ey FFP a = 10 log (5) or thickness ob A. a _- a i { zo AgAs By ByBy CzC,C, 0,0; Dy EE, Es Fi Fafa Bam 4 nt = 10 log, (deposition rate)® 40: }-- fpr et et ee 20 AaAs By ByBs Cy CyC; Dy Oy Dy Ej, E> Ey Fi Fo Fs 2 2 aoe a Temp. Pressure Nitrogen Silane Settling Cleaning oO oo a eve Figure 4.3 Plots of factor effects. Underline indicates starting level. ‘Two-standard- Geviation confidence limits are also shown for the starting level. Estimated confidence limits for 1” are too small to show.Sec.4.7 Data Analysis 87 study, we can make the following observations about the optimum setting from Fig- ure 4.5 and Table 4.9: * Deposition temperature (factor A) has the largest effect on all three characteris- ics. By reducing the temperature from the starting setting of Ty °C to To- °C, n can be improved by {(-24.23) ~ (-50.10)} = 26 dB. This is equivalent to a 20-fold reduction in root mean square surface defect count. The effect of this temperature change on thickness uniformity is only (35.12—34.91) 0.21 dB, which is negligible. But the same temperature change would lead to a reduction in deposition rate by (34.13-28.76) = 5.4 dB, which is approximately a 2-fold reduction in the deposition rate. 
Thus, temperature can dramatically reduce the surface defect problem, but it also would double the deposition time. Accordingly, there is a trade-off to be made between reducing the quality cost (including the scrap due to high surface defect count) and the number of wafers processed per day by the reactor. * Deposition pressure (factor B) has the next largest effect on surface defect and deposition rate. Reducing the pressure from the starting level of Po mtorr to (Po~200) mtorr can improve 1 by about 20 dB (a 10-fold reduction in the root ‘mean square surface defect count) at the expense of reducing the deposition rate by 2.75 dBam (37 percent reduction in deposition rate). ‘The effect of pressure on thickness uniformity is very small. * Nitrogen flow rate (factor C) has a moderate effect on all three characteristics. The starting setting of No sccm gives the highest S/N ratios for surface defects and thickness uniformity. ‘There is also a possibility of further improving these two S/N ratios by increasing the flow rate of this dilutant gas. This is an impor- tant fact to be remembered for future experiments. The effect of nitrogen flow rate on deposition rate is small compared to the effects of temperature and pres- sure. * Silane flow rate (factor D) also has a moderate effect on all three characteristics, Thickness uniformity is the best when silane flow rate is set at (Sq—50) scem. This can also lead to a small reduction in surface defects and the deposition rate. Settling time (factor E) can be used to achieve about 10 dB improvement in sur- face defects by increasing the time from fo minutes to (to +8) minutes. The data indicates that a further increase in the settling time to (to +16) minutes could negate some of the reduction in surface defect count. However, this change is small compared to the standard deviation of the error; and it is not physically iable. Settling time has no effect on the deposition rate and the thickness + Cleaning method (factor F) has no effect on deposition rate and surface defects. But, by instituting some cleaning prior to deposition, the thickness uniformity can be improved by over 6.0 dB (a factor of 2 reduction in standard deviation of38 Steps in Robust Design Chap. 4 thickness), Cleaning with CM, or CM; could give the same improvement in thickness uniformity. However, CM> cleaning can be performed inside the reac- tor, whereas CM cleaning must be done outside the reactor. Thus, CM> clean- ing is more convenient. From these observations, the optimum settings of factors E and F are obvious, namely E> and Fy. However, for factors A through D, the direction in which the qual- ity characteristics (surface defects and thickness uniformity) improve tend to reduce the deposition rate. Thus, a trade-off between quality loss and productivity must be made in choosing their optimum levels. In the case study, since surface defects were the key quality problem that caused significant scrap, the experimenters decided to take care of it by changing temperature from A to A. As discussed earlier, this also meant a sub- stantial reduction in deposition rate. Also, they decided to hold the other three factors at their starting levels, namely B2, Cy, and D3. The potential these factors held would TABLE 4.6 ANALYSIS OF SURFACE DEFECTS DATA* Average n by Factor Level T 4 (ae) | sum ot | Mean Factor Squares | Square | F 1A Tempertre wav | aaa | 7 B. Pressure sais | 708 | 0 C. Nien =2901 | = 5599 2 | 10 | sis | 64 D. 
Silane = 3920 | - 468s 2 mm | ise | 23 B, Setng ime = 4054 2 me | is | 23 F. Cleaning method -4158 | -s89s| 2 rest | 92 Ener s | ast | a Total 7 | iow x00) fe o | 6» | eo * Overall mean 1] = ~45.36 dB. Underscore indicates starting level 4 Indicates the sum of squares added together to form the pooled error sum of squares shown in parentheses.Sec.47 Data Analysis 89 have been used if the confirmation experiment indicated a need to improve the surface defect and thickness uniformity further. Thus, the optimum conditions chosen were: A\B2C D3E2F >. ‘The next step in data analysis is to predict the anticipated improvements under the chosen optimum conditions. To do so, we first predict the S/N ratios for surface defects, thickness uniformity, and deposition rate using the additive model. These computations for the case study are displayed in Table 4.10. According to the table, an improvement in surface defects equal to [-19.84—(—56.69)] = 36.85 dB should be anticipated, which is equivalent to a reduction in the root mean square surface defect count by a factor of 69.6. The projected improvement in thickness uniformity is 36.79-29.95 = 6.84 dB, whict a reduction in standard deviation by a factor of 2.2. The corresponding change in deposition rate is 29.60-34.97 = -5.37 dB, which amounts to a reduction in the deposition rate by a factor of 1.9. TABLE 4.7 ANALYSIS OF THICKNESS DATA ‘Average 1 by Level ‘) Degree of | Sum of | Mean ver =| a | a | a | Bmmet| Senet] ates | [ie eee lia eo eee ee lee luelawl | a Nogen 2 | aw lo | s0 |b. stan 2 | um | oo | a foe tal | coming metoa | 2704 | x67) nas} 2 | an | ons | oe cee “| W 1004 59.1 oO es [ee] | * Overall mean n = 31.52 dB. Underscore indicates starting level 4 Indicates the sum of squares added together to form the pooled error sum of squares shown in parentheses.90 Stops in Robust Design Chap. 4 TABLE 4.8 ANALYSIS OF DEPOSITION RATE DATA‘ Average 1” by Factor Level E (aBam) | Degree of | Sum of | Mean Factor 1 2 3 | Freedom | Squares | Square | F ‘A Tempernue | 2a76 | sais | soas | 2 | sear [ans | 359 B, Pressure zoos | sare | sss | 2 | ato | 205 | 06 . Nitrogen saat | 3529 | asas | 2 | sz | 94 | 30 D, Silane san | sass | ast 2 | 363 | ir | se E.Sentingtime | 360s | aso | 3430 | 2 oat | 02 F, Cleaning metnoa | aaa | sao | sae | 2 it | 06 Error 5 1st | 026 Total 7 | ass 259 x00) © | 2% | on ‘Overall mean 1)” = 34.12 dBam. Underscore indicates staring level. + Indicates the sum of squares added together to form the pooled error sum of squares shown in parentheses 4.8 VERIFICATION EXPERIMENT AND FUTURE PLAN Conducting a verification experiment is a crucial final step of a Robust Design project. Its purpose is to verify that the optimum conditions suggested by the matrix experiment do indeed give the projected improvement. If the observed S/N ratios under the ‘optimum conditions are close to their respective predictions, then we conclude that the additive model on which the matrix experiment was based is a good approximation of the reality. Then, we adopt the recommended optimum conditions for our process ot product, as the case may be. However, if the observed S/N ratios under the optimum conditions differ drastically from their respective predictions, there is an evidence of failure of the additive model. There can be many reasons for the failure and, thus, there are many ways of dealing with it. 
The failure of the additive model generally indicates that choice of the objective function or the S/N ratio is inappropriate, the observed quality characteristic was chosen incorrectly, or the levels of the control fac- tors were chosen inappropriately. The question of how to avoid serious additivity problems by properly choosing the quality characteristic, the S/N ratio, and the control factors and their levels is discussed in Chapter 6. Of course, another way to handle theSoc. 4.8 Verification Exporiment and Future Plan on ‘TABLE 4.9 SUMMARY OF FACTOR EFFECTS Surface Defects | Thickness | Deposition Rate a w 1” Factor Level ae) | F | @ | F | Bam | F ‘A. Temperature (©) Ay:Ty-25 | -24.23 28.76 Az To =s0.10 | 27 te | 3413 | ssa AyTy25 | ~61.76 39.46 B, Pressure (mio) B,: Po~200 | 27.55, 3203 Bai Po na7as | 21 - | 3478 | 66 By Po+200 | -61.10 3554 C. Nitrogen (seem) 3281 64 so} 3529 | 30 3425 D. Silane (seem) j 3221 23 aa| sass | se 3561 E, Settling time (min) Ex: te 3152 3406 Extos | 40.54 | 23 = 2) so | | Ey tori6 | 4403 | 3430 F. Cleaning method = Fy: None | ~45.56 3381 Fri; | -4158 | - 6a} 340 | - Fy:CMy | ~4895 ae ‘Overall mean | =4536 | 3412 additivity problem is to study a few key interactions among the control factors in future experiments. Construction of orthogonal arrays that permit the estimation of a few specific interactions, along with all main effects, is discussed in Chapter 7. ‘The verification experiment has two aspects: the first is that the predictions must agree under the laboratory conditions; the second aspect is that the predictions should be valid under actual manufacturing conditions for the process design and under actual field conditions for the product design. A judicious choice of both the noise factors to be included in the experiment and the testing conditions is essential for the predictions made through the laboratory experiment to be valid under both manufacturing and field conditions. For the polysilicon deposition case study, four batches of 50 wafers containing 3 test wafers were processed under both the optimum condition and under the starting conditions. The results are tabulated in Table 4.11. It is clear that the data agree very well with the predictions about the improvement in the S/N ratios and the deposition rate. So, we could adopt the optimum settings as the new process settings and proceed to implement these settings.92 Stops in Robust Design Chap. 4 TABLE 4.10 PREDICTION USING THE ADDITIVE MODEL, ‘Starting Condition ‘Optimum Condition Contribution (4B) Contribution? (4B) Surface Deposition Surface Deposition Factor | Setting | Defects | Thickness | Rate | Setting | Defects | Thickness | Rate av [oar | -am | 339 Crea ere teesnee fee yee eee) B | a, | -208| 000 06s | B, | -208} 000 0.66 c |e. éa5| om | Fit |e: 633 | 287 | -131 D | dy | -468| 335 149 | dD, | -468 | -335 149 ze | gz, | -616| 000 ooo | F, 482 | 0.00 0.00 po | oF, 0.00 | 4.48 00 | Fy 0.00 | 215 0.00 Overall nas36 | 3us2 | 3412 wasze | 3isz | 3412 Mean, Total -s669 | 2995 | 3497 -1988 | 3679 | 2960 * Indicates the factors whose levels are changed from the starting to the optimum condi + By contribution we mean the deviation from the overall mean caused by the particular factor level. TABLE 4.11, RESULTS OF VERIFICATION EXPERIMENT Starting | Optimum Condition | Condition | improvement Surface ms | e0ojom® | 7/em® Defects =) n | -s56aB | -16948 | 38:7 4B sid, dev. 
Follow-up Experiments

Optimization of a process or a product need not be completed in a single matrix experiment. Several matrix experiments may have to be conducted in sequence before completing a product or process design. The information learned in one matrix experiment is used to plan the subsequent matrix experiments for achieving even more improvement in the process or the product. The factors studied in such subsequent experiments, or the levels of the factors, are typically different from those studied in the earlier experiments.

From the case-study data on the polysilicon deposition process, temperature stood out as the most important factor—both for quality and productivity. The experimental data showed that high temperature leads to excessive formation of surface defects and nonuniform thickness. This led to identifying the type of temperature controller as a potentially important control factor. The controller used first was an underdamped controller, and, consequently, during the initial period of deposition, the reactor temperature rose significantly above the steady-state set-point temperature. It was then decided to try a critically damped controller. Thus, an auxiliary experiment was conducted with two control factors: (1) the type of controller, and (2) the temperature setting. This experiment identified the critically damped controller as being significantly better than the underdamped one.

The new controller allowed the temperature setting to be increased to T0 − 10 °C while keeping the surface defect count below 1 defect per unit area. The higher temperature also led to a deposition rate of 55 Å/min rather than the 35 Å/min that was observed in the initial verification experiment. Simultaneously, a standard deviation of thickness equal to 0.007 times the mean thickness was achieved.

Range of Applicability

In any development activity, it is highly desirable that the conclusions continue to be valid when we advance to a new generation of technology. In the case study of the polysilicon deposition process, this means that having developed the process with 4-inch wafers, we would want it to be valid when we advance to 5-inch wafers. The process developed for one application should be valid for other applications. Processes and products developed by the Robust Design method generally possess this characteristic of design transferability. In the case study, going from 4-inch wafers to 5-inch wafers was achieved by making minor changes dictated by the thermal capacity calculations. Thus, a significant amount of development effort was saved in transferring the process to the reactor that handled 5-inch wafers.

4.9 SUMMARY

Optimizing the product or process design means determining the best architecture, levels of control factors, and tolerances. Robust Design is a methodology for finding the optimum settings of control factors to make the product or process insensitive to noise factors. It involves eight major steps, which can be grouped as planning a matrix experiment to determine the effects of the control factors (Steps 1 through 5), conducting the matrix experiment (Step 6), and analyzing and verifying the results (Steps 7 and 8).
• Step 1. Identify the main function, side effects, and failure modes. This step requires engineering knowledge of the product or process and the customer's environment.

• Step 2. Identify noise factors and testing conditions for evaluating the quality loss. The testing conditions are selected to capture the effect of the more important noise factors. It is important that the testing conditions permit a consistent estimation of the sensitivity to noise factors for any combination of control factor levels. In the polysilicon deposition case study, the effect of noise factors was captured by measuring the quality characteristics at three specific locations on each of three wafers, appropriately placed along the length of the tube. Noise orthogonal array and compound noise factor are two common techniques for constructing testing conditions. These techniques are discussed in Chapter 8.

• Step 3. Identify the quality characteristic to be observed and the objective function to be optimized. Guidelines for selecting the quality characteristic and the objective function, which is generically called the S/N ratio, are given in Chapters 5 and 6. The common temptation of using the percentage of products that meet the specification as the objective function should be avoided; it leads to orders of magnitude reduction in the efficiency of experimentation. While optimizing manufacturing processes, an appropriate throughput characteristic should also be studied along with the quality characteristics because the economics of the process is determined by both of them.

• Step 4. Identify the control factors and their alternate levels. The more complex a product or a process, the more control factors it has, and vice versa. Typically, six to eight control factors are chosen at a time for optimization. For each control factor two or three levels are selected, of which one is usually the starting level. The levels should be chosen sufficiently far apart to cover a wide experimental region because sensitivity to noise factors does not usually change with small changes in control factor settings. Also, by choosing a wide experimental region, we can identify good regions, as well as bad regions, for control factors. Chapter 6 gives additional guidelines for choosing control factors and their levels. In the polysilicon deposition case study, we investigated three levels each of six control factors. One of these factors (cleaning method) had discrete levels. For four of the factors the ratio of the largest to the smallest level was between three and five.

• Step 5. Design the matrix experiment and define the data analysis procedure. Using orthogonal arrays is an efficient way to study the effect of several control factors simultaneously. The factor effects thus obtained are valid over the experimental region, and the matrix experiment provides a way to test for the additivity of the factor effects. The experimental effort needed is much smaller when compared to other methods of experimentation, such as guess and test (trial and error), one factor at a time, and full factorial experiments. Also, the data analysis is easy when orthogonal arrays are used. The choice of an orthogonal array for a particular project depends on the number of factors and their levels, the convenience of changing the levels of a particular factor, and other practical considerations. Methods for constructing a suitable orthogonal array are given in Chapter 7.
The orthogonal array L18, consisting of 18 experiments, was used for the polysilicon deposition study. The array L18 happens to be the most commonly used array because it can be used to study up to seven 3-level factors and one 2-level factor.

• Step 6. Conduct the matrix experiment. Levels of several control factors must be changed when going from one experiment to the next in a matrix experiment. Meticulousness in correctly setting the levels of the various control factors is essential—that is, when a particular factor has to be at level 1, say, it should not be set at level 2 or 3. However, one should not worry about small perturbations that are inherent in the experimental equipment. Any erroneous experiments or missing experiments must be repeated to complete the matrix. Errors can be avoided by preparing the experimenter's log and data sheets prior to conducting the experiments. This also speeds up the conduct of the experiments significantly. The 18 experiments for the polysilicon deposition case study were completed in 9 days.

• Step 7. Analyze the data, determine optimum levels for the control factors, and predict performance under these levels. The various steps involved in analyzing the data resulting from matrix experiments are described in Chapter 3. S/N ratios and other summary statistics are first computed for each experiment. (In Robust Design, the primary focus is on maximizing the S/N ratio.) Then, the factor effects are computed and ANOVA is performed. The factor effects, along with their confidence intervals, are plotted to assist in the selection of their optimum levels. When a product or a process has multiple quality characteristics, it may become necessary to make some trade-offs while choosing the optimum factor levels. The observed factor effects together with the quality loss function can be used to make rational trade-offs. In the polysilicon case study, the data analysis indicated that the levels of three factors—deposition temperature (A), settling time (E), and cleaning method (F)—be changed, while the levels of the other three factors be kept at their starting levels.

• Step 8. Conduct the verification (confirmation) experiment and plan future actions. The purpose of this final and crucial step is to verify that the optimum conditions suggested by the matrix experiments do indeed give the projected improvement. If the observed and the projected improvements match, we adopt the suggested optimum conditions. If not, then we conclude that the additive model underlying the matrix experiment has failed, and we find ways to correct that problem. The corrective actions include finding better quality characteristics or signal-to-noise ratios, or different control factors and levels, or studying a few specific interactions among the control factors. Evaluating the improvement in quality loss, defining a plan for implementing the results, and deciding whether another cycle of experiments is needed are also a part of this final step of Robust Design. It is quite common for a product or process design to require more than one cycle of Steps 1 through 8 to achieve the needed quality and cost improvement. In the polysilicon deposition case study, the verification experiment confirmed the optimum conditions suggested by the data analysis. In a follow-up Robust Design cycle, two control factors were studied—deposition temperature and type of temperature controller.
The final optimum process gave nearly two orders of magnitude reduction in surface defects and a 4-fold reduction in the standard deviation of the thickness of the polysilicon layer.

Chapter 5

SIGNAL-TO-NOISE RATIOS

The concept of the quadratic loss function introduced in Chapter 2 is ideally suited for evaluating the quality level of a product as it is shipped by a supplier to a customer. "As shipped" quality means that the customer would use the product without any adjustment to it or to the way it is used. Of course, the customer and the supplier could be two departments within the same company.

A few common variations of the quadratic loss function were given in Chapter 2. Can we use the quadratic loss function directly for finding the best levels of the control factors? What happens if we do so? What objective function should we use to minimize the sensitivity to noise? We examine these and other related questions in this chapter. In particular, we describe the concepts behind the signal-to-noise (S/N) ratio and the rationale for using it as the objective function for optimizing a product or process design. We identify a number of common types of engineering design problems and describe the appropriate S/N ratios for these problems. We also describe a procedure that could be used to derive S/N ratios for other types of problems.

This chapter has six sections:

• Section 5.1 discusses the analysis of the polysilicon thickness uniformity. Through this discussion, we illustrate the disadvantages of direct minimization of the quadratic loss function and the benefits of using the S/N ratio as the objective function for optimization.

• Section 5.2 presents a general procedure for deriving the S/N ratio.

• Section 5.3 describes common static problems (where the target value for the quality characteristic is fixed) and the corresponding S/N ratios.

• Section 5.4 discusses common dynamic problems (where the quality characteristic is expected to follow the signal factor) and the corresponding S/N ratios.

• Section 5.5 describes the accumulation analysis method for analyzing ordered categorical data.

• Section 5.6 summarizes the important points of this chapter.

5.1 OPTIMIZATION FOR POLYSILICON LAYER THICKNESS UNIFORMITY

One of the two quality characteristics optimized in the case study of the polysilicon deposition process in Chapter 4 was the thickness of the polysilicon layer. Recall that one of the goals was to achieve a uniform thickness of 3600 Å. More precisely, the experimenters were interested in minimizing the variance of thickness while keeping the mean on target. The objective of many Robust Design projects is to achieve a particular target value for the quality characteristic under all noise conditions. These types of projects were previously referred to as nominal-the-best type problems. The detailed analysis presented in this section will be helpful in formulating such projects.

This section discusses the following issues:

• Comparison of the quality of two process conditions

• Relationship between the S/N ratio and the quality loss after adjustment (Q_a)
• Optimization for different target thicknesses

• Interaction induced by the wrong choice of objective function

• Identification of a scaling factor

• Minimization of standard deviation and mean separately

Comparing the Quality of Two Process Conditions

Suppose we are interested in determining which is the preferred temperature setting, T0 °C or (T0 + 25) °C, for achieving uniform thickness of the polysilicon layer around the target thickness of 3600 Å. We may attempt to answer this question by running a number of batches under the two temperature settings while keeping the other control factors fixed at certain levels. Suppose the observed mean thickness and standard deviation of thickness for these two process conditions are as given in Table 5.1. Although no experiments were actually conducted under these conditions, the data in Table 5.1 are realistic based on experience with the process. This is also true for all other data used in this section. Note that under temperature T0 °C, the mean thickness is 1800 Å, which is far away from the target, but the standard deviation is small. Whereas under temperature (T0 + 25) °C, the mean thickness is 3400 Å, which is close to the target, but the standard deviation is large. As we observe here, it is very typical for both the mean and standard deviation to change when we change the level of a factor.

TABLE 5.1  EFFECT OF TEMPERATURE ON THICKNESS UNIFORMITY

  Expt.   Temperature   Mean Thickness   Standard Deviation    Q†
  No.     (°C)          μ (Å)            σ (Å)
  1       T0            1800             32                    3.241 × 10⁶
  2       T0 + 25       3400             200                   8.000 × 10⁴

  * Target mean thickness = μ0 = 3600 Å.
  † Q = (μ − μ0)² + σ².

From the data presented in Table 5.1, which temperature setting can we recommend? Since both the mean and standard deviation change when we change the temperature, we may decide to use the quadratic loss function to select the better temperature setting. For a given mean, μ, and standard deviation, σ, the quality loss without adjustment, denoted by Q, is given by

    Q = quality loss without adjustment = k [ (μ − 3600)² + σ² ]        (5.1)

where k is the quality loss coefficient. Note that throughout this chapter we ignore the constant k (that is, set it equal to 1) because it has no effect on the choice of optimum levels for the control factors. The quality loss under T0 °C is 3.24 × 10⁶, while under (T0 + 25) °C it is 8.0 × 10⁴. Thus, we may conclude that (T0 + 25) °C is the better temperature setting. But is that really a correct conclusion?

Recall that the deposition time is a scaling factor for the deposition process—that is, for any fixed settings of all other control factors, the polysilicon thickness at the various points within the reactor is proportional to the deposition time. Of course, the proportionality constant, which is the same as the deposition rate, could be different at different locations within the reactor. This is what leads to the variance, σ², of the polysilicon thickness. We can use this knowledge of the scaling factor to estimate the quality loss after adjusting the mean on target. For T0 °C temperature, we can attain the mean thickness of 3600 Å by increasing the deposition time by a factor of 3600/1800 = 2.0. Correspondingly, the standard deviation would also increase by the factor of 3600/1800 to 64 Å. Thus, the estimated quality loss after adjusting the mean is 4.1 × 10³.
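The adjustment calculation just carried out for T0 °C can be written as a few lines of code. The sketch below is not part of the original text; it simply reproduces the arithmetic of Table 5.1 with k set equal to 1:

```python
def quality_loss(mu, sigma, target=3600.0):
    """Quality loss without adjustment, Equation (5.1), with k = 1."""
    return (mu - target) ** 2 + sigma ** 2

def quality_loss_after_adjustment(mu, sigma, target=3600.0):
    """Scale the deposition time by target/mu to put the mean on target;
    the standard deviation scales by the same factor."""
    scale = target / mu
    return (scale * sigma) ** 2

# Condition 1 of Table 5.1: temperature T0
print(quality_loss(1800, 32))                    # ~3.24e6, loss without adjustment
print(quality_loss_after_adjustment(1800, 32))   # std. dev. becomes 64 A -> ~4.1e3
```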
Similarly, for (T0 + 25) °C we can obtain 3600 Å thickness by increasing the deposition time by a factor of 3600/3400, which would result in a standard deviation of 212 Å. Thus, the estimated quality loss after adjusting the mean is 4.49 × 10⁴. From these calculations it is clear that when the mean is adjusted to be on target, the quality loss for T0 °C is an order of magnitude smaller than the quality loss for (T0 + 25) °C; that is, the sensitivity to noise is much less when the deposition temperature is T0 °C as opposed to (T0 + 25) °C. Hence, T0 °C is the preferred temperature setting.

A decision based on the quality loss without adjustment (Q) is influenced not only by the sensitivity to noise (σ), but also by the deviation from the target mean (μ − μ0). Often, such a decision is heavily influenced, if not dominated, by the deviation from the target mean. As a result, we risk the possibility of not choosing the factor level that minimizes sensitivity to noise. This, of course, is clearly undesirable. But when we compute the quality loss after adjustment, denoted by Q_a, for all practical purposes we eliminate the effect of the change in mean. In fact, it is a way of isolating the sensitivity to noise factors. Thus, a decision based on Q_a minimizes the sensitivity to noise, which is what we are most interested in during Robust Design.

Relationship between S/N Ratio and Q_a

The general formula for computing the quality loss after adjustment for the polysilicon thickness problem, which is a nominal-the-best type problem, can be derived as follows: If the observed mean thickness is μ, we have to increase the deposition time by a factor of μ0/μ to get the mean thickness on target. The predicted standard deviation after adjusting the mean on target is (μ0/μ)σ, where σ is the observed standard deviation. So, we have

    Q_a = quality loss after adjustment = k (μ0/μ)² σ²        (5.2)

We can rewrite Equation (5.2) as follows:

    Q_a = k μ0² [ σ² / μ² ]        (5.3)

Since in a given project k and μ0 are constants, we need to focus our attention only on (μ²/σ²). We call (μ²/σ²) the S/N ratio because σ² is the effect of the noise factors and μ² is the desirable part of the thickness data. Maximizing (μ²/σ²) is equivalent to minimizing the quality loss after adjustment, given by Equation (5.3), and also equivalent to minimizing sensitivity to noise factors.

For improved additivity of the control factor effects, it is common practice to take the log transform of (μ²/σ²) and express the S/N ratio in decibels,

    η = 10 log10 [ μ² / σ² ]        (5.4)

Although it is customary to refer to both (μ²/σ²) and η as the S/N ratio, it is clear from the context which one we mean. The range of values of (μ²/σ²) is (0, ∞), while the range of values of η is (−∞, ∞). Thus, in the log domain, we have better additivity of the effects of two or more control factors. Since log is a monotone function, maximizing (μ²/σ²) is equivalent to maximizing η.

Optimization for Different Target Thicknesses

Using the S/N ratio rather than the mean square deviation from the target as the objective function has one additional advantage. Suppose for a different application of the polysilicon deposition process, such as manufacturing a new code of microchips, we want to have a 3000 Å target thickness. Then, the optimum conditions obtained by maximizing the S/N ratio would still be valid, except for adjustment of the mean. However, the same cannot be said if we used the mean square deviation from the target as the objective function.
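To make this point concrete, the following sketch (not from the original text) computes η for the two conditions of Table 5.1 and then recovers the quality loss after adjustment for any target thickness from Equation (5.3). Because changing the target only multiplies Q_a by the constant μ0², the ranking of the two conditions, and hence the choice of optimum settings, does not depend on the target:

```python
import math

def sn_ratio_db(mu, sigma):
    """Nominal-the-best S/N ratio, Equation (5.4): eta = 10*log10(mu^2/sigma^2)."""
    return 10.0 * math.log10(mu ** 2 / sigma ** 2)

def loss_after_adjustment(eta_db, target):
    """Equation (5.3) with k = 1: Q_a = target^2 * 10 ** (-eta/10)."""
    return target ** 2 * 10.0 ** (-eta_db / 10.0)

eta_1 = sn_ratio_db(1800, 32)    # ~35.0 dB for T0
eta_2 = sn_ratio_db(3400, 200)   # ~24.6 dB for T0 + 25

for target in (3600.0, 3000.0):
    # The ratio of the two losses is the same for every target thickness,
    # so the comparison based on eta does not change with the target.
    print(target, loss_after_adjustment(eta_1, target), loss_after_adjustment(eta_2, target))
```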
With the mean square deviation as the objective function, we would have to perform the optimization all over again.

The problem of minimizing the variance of thickness while keeping the mean on target is a problem of constrained optimization. As discussed in Appendix B, by using the S/N ratio, the problem can be converted into an unconstrained optimization problem that is much easier to solve. The property of unconstrained optimization is the basis for our ability to separate the two actions of minimizing sensitivity to noise factors by maximizing the S/N ratio and of adjusting the mean thickness on target.

When we advance from one technology of integrated circuit manufacturing to a newer technology, we must produce thinner layers, print and etch smaller width lines, etc. With this in mind, it is crucial that we focus our efforts on reducing sensitivity to noise by optimizing the S/N ratio. The mean can then be adjusted to meet the desired target. This flexible approach to process optimization is needed not only for integrated circuit manufacturing, but also for virtually all manufacturing processes and optimization of all product designs.

During product development, the design of subsystems and components must proceed in parallel. Even though the target values for various characteristics of the subsystems and components are specified at the beginning of the development activity, it often becomes necessary to change the target values as more is learned about the product. Optimizing the S/N ratio gives us the flexibility to change the target later in the development effort. Also, the reusability of the subsystem design for other applications is greatly enhanced. Thus, by using the S/N ratio we improve the overall productivity of the development activity.

Interactions Induced by Wrong Choice of Objective Function

Using the quality loss without adjustment as the objective function to be optimized can also lead to unnecessary interactions among the control factors. To understand this point, let us consider again the data in Table 5.1. Suppose the deposition time for the two experiments in Table 5.1 was 36 minutes. Now suppose we conducted two more experiments with 80 minutes of deposition time and temperatures of T0 °C and (T0 + 25) °C. Let the data for these two experiments be as given in Table 5.2. For ease of comparison, the data from Table 5.1 are also listed in Table 5.2.

TABLE 5.2  INTERACTIONS CAUSED BY THE MEAN

  Expt.   Temperature   Time    Mean Thickness   Standard Deviation    Q†             Q_a‡
  No.     (°C)          (min)   μ (Å)            σ (Å)
  1       T0            36      1800             32                    3.241 × 10⁶    4.096 × 10³
  2       T0 + 25       36      3400             200                   8.000 × 10⁴    4.484 × 10⁴
  3       T0            80      4000             70                    1.649 × 10⁵    3.969 × 10³
  4       T0 + 25       80      7550             440                   15.796 × 10⁶   4.402 × 10⁴

  * Target mean thickness = μ0 = 3600 Å.
  † Q = (μ − μ0)² + σ².
  ‡ Q_a = (μ0/μ)² σ².

The quality loss without adjustment is plotted as a function of temperature for the two values of deposition time in Figure 5.1(a). We see that for 36 minutes of deposition time, (T0 + 25) °C is the preferred temperature, whereas for 80 minutes of deposition time the preferred temperature is T0 °C. Such opposite conclusions about the optimum levels of control factors (called interactions) are a major source of confusion and inefficiency in experimentation for product or process design improvement.
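The interaction can be reproduced numerically. The sketch below is not from the original text; it evaluates Q and Q_a for the four experiments of Table 5.2 and shows that the temperature preferred by Q flips when the deposition time changes, while Q_a prefers T0 at both times:

```python
# (expt, temperature label, time in min, mean thickness, std. deviation) from Table 5.2
experiments = [
    (1, "T0",      36, 1800.0,  32.0),
    (2, "T0 + 25", 36, 3400.0, 200.0),
    (3, "T0",      80, 4000.0,  70.0),
    (4, "T0 + 25", 80, 7550.0, 440.0),
]

TARGET = 3600.0   # target mean thickness, Angstrom

for expt, temp, time, mu, sigma in experiments:
    q = (mu - TARGET) ** 2 + sigma ** 2       # quality loss without adjustment
    q_a = (TARGET / mu) ** 2 * sigma ** 2     # quality loss after adjustment
    print(expt, temp, time, round(q), round(q_a))

# Ranking by q:   T0 + 25 wins at 36 min, but T0 wins at 80 min (an interaction).
# Ranking by q_a: T0 wins at both 36 min and 80 min, so the interaction disappears.
```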
Not only is the estimation of interactions expensive, but the estimation might not yield the true optimum settings for the control factors—that is, if there are strong antisynergistic interactions among the control factors, we risk the possibility of choosing a wrong combination of factor levels for the optimum conditions. In this example, based on Q, we would pick the combination of (T0 + 25) °C and 36 minutes as the best combination. But, if we use the S/N ratio or Q_a as the objective function, we would unambiguously conclude that T0 °C is the preferred temperature [see Figure 5.1(b)].

[Figure 5.1, plots garbled in extraction. Caption: Interactions caused by the mean. (a) When Q is the objective function, the control factors, temperature and time, have a strong antisynergistic interaction. (b) When Q_a is the objective function, there is no interaction between temperature and time; since time is a scaling factor, the curves for 36 min and 80 min deposition time are almost overlapping. (c) From this figure we see that much of the interaction in (a) is caused by the deviation of the mean from the target.]

The squared deviation of the mean from the target thickness is a component of the objective function Q [see Equation (5.1)]. This component is plotted in Figure 5.1(c). From the figure it is obvious that the interaction revealed in Figure 5.1(a) is primarily caused by this component. The objective function Q_a does not have the squared deviation of the mean from the target as a component. Consequently, the corresponding interaction, which unnecessarily complicates the decision process, is eliminated.

In general, if we observe that for a particular objective function the interactions among the control factors are strong, we should look for the possibility that the objective function may have been selected incorrectly. The possibility exists that the objective function did not properly isolate the effect of noise factors and that it still has the deviation of the product's mean function from the target as a component.

Identification of a Scaling Factor

In the polysilicon deposition case study, the deposition time is an easily identified scaling factor. However, in many situations where we want to obtain the mean on target, the scaling factor cannot be identified readily. How should we determine the best settings of the control factors in such situations?

It might, then, be tempting to use the mean squared deviation from the target as the objective function to be minimized. However, as explained earlier, minimizing the mean squared deviation from the target can lead to wrong conclusions about the optimum levels for the control factors; so, the temptation should be avoided. Instead, we should begin with the assumption that a scaling factor exists and identify such a factor through experiments.

The objective function to be maximized, namely η, can be computed from the observed μ and σ without knowing which factor is a scaling factor. Also, the scaling operation does not change the value of η. Thus, the process of discovering a scaling factor and the optimum levels for the various control factors is a simple one. It consists of determining the effect of every control factor on η and μ, and then classifying these factors as follows:

1. Factors that have a significant effect on η.
For these factors, we should pick the levels that maximize η.

2. Factors that have a significant effect on μ but practically no effect on η. Any one of these factors can serve as a scaling factor. We use one such factor to adjust the mean on target. We are generally successful in finding at least one scaling factor. However, sometimes we must settle for a factor that has a small effect on η as a scaling factor.

3. Factors that have no effect on η and no effect on μ. These are neutral factors, and we can choose their best levels from other considerations such as ease of operation or cost.

Minimizing Standard Deviation and Mean Separately

Another way to approach the problem of minimizing variance with the constraint that the mean should be on target is, first, to minimize the standard deviation while ignoring the mean, and, then, to bring the mean on target without affecting the standard deviation by changing a suitable factor. The difficulty with this approach is that often we cannot find a factor that can change the mean over a wide range without affecting the standard deviation. This can be understood as follows: In these problems, when the mean is zero, the standard deviation is also zero. However, for all other mean values, the standard deviation cannot be identically zero. Thus, whenever a factor changes the mean, it also affects the standard deviation. Also, an attempt to minimize the standard deviation without paying attention to the mean drives both the standard deviation and the mean to zero, which is not a worthwhile solution. Therefore, we should not try to minimize the standard deviation without paying attention to the mean. However, we can almost always find a scaling factor. Thus, an approach where we maximize the S/N ratio leads to useful solutions.

Note that the above discussion pertains to the class of problems called nominal-the-best type problems, of which polysilicon thickness uniformity is an example. A class of problems called signed-target type problems, where it is appropriate to first minimize variance and then bring the mean on target, is described in Section 5.3.

5.2 EVALUATION OF SENSITIVITY TO NOISE

Let us now examine the general problem of evaluating sensitivity to noise for a dynamic system. Recall that in a dynamic system the quality characteristic is expected to follow the signal factor. The ideal function for many products can be written as

    y = M        (5.5)

where y is the quality characteristic (or the observed response) and M is the signal (or the command input). In this section we discuss the evaluation of sensitivity to noise for such dynamic systems.

For specificity, suppose we are optimizing a servomotor (a device such as an electric motor whose movement is controlled by a signal from a command device) and that y is the displacement of the object that is being moved by the servomotor and M specifies the desired displacement. To determine the sensitivity of the servomotor, suppose we use the signal values M_1, M_2, ..., M_m; and for each signal value, we use the noise conditions x_1, x_2, ..., x_n. Let y_ij denote the observed displacement for a particular value of the control factor settings, z = (z_1, z_2, ..., z_q)ᵀ, when the signal is M_i and the noise is x_j. Representative values of y_ij and the ideal function are shown in Figure 5.2. The average quality loss, Q(z), associated with the control factor settings, z, is given by

    Q(z) = (k / mn) Σi Σj (y_ij − M_i)²        (5.6)
As shown by Figure 5.2, Q(z) includes not only the effect of the noise factors but also the deviation of the mean function from the ideal function. In practice, Q(z) could be dominated by the deviation of the mean function from the ideal function. Thus, the direct minimization of Q(z) could fail to achieve truly minimum sensitivity to noise. It could lead simply to bringing the mean function on target, which is not a difficult problem in most situations anyway. Therefore, whenever adjustment is possible, we should minimize the quality loss after adjustment.

[Figure 5.2, plot garbled in extraction. Caption: Evaluation of sensitivity to noise. The figure shows the ideal function y = M, the observed data, and the observed mean function plotted against the signal M.]

For the servomotor, it is possible to adjust a gear ratio so that, referring to Figure 5.2, the slope of the observed mean function can be made equal to the slope of the ideal function. Let the slope of the observed mean function be β. By changing the gear ratio we can change every displacement y_ij to v_ij = (1/β) y_ij. This brings the mean function on target. For the servomotor, the change of gear ratio leads to a simple linear transformation of the displacement y_ij.

In some products, however, the adjustment could lead to a more complicated function between the adjusted value v_ij and the unadjusted value y_ij. For a general case, let the effect of the adjustment be to change each y_ij to a value v_ij = h_R(y_ij), where the function h_R defines the adjustment that is indexed by a parameter R. After adjustment, we must have the mean function on target—that is, the errors (v_ij − M_i) must be orthogonal to the signal M_i. Mathematically, the requirement of orthogonality can be written as

    Σi Σj (v_ij − M_i) M_i = 0        (5.7)

Equation (5.7) can be solved to determine the best value of R for achieving the mean function on target. Then the quality loss after adjustment, Q_a(z), can be evaluated as follows:

    Q_a(z) = (k / mn) Σi Σj (v_ij − M_i)²        (5.8)

The quantity Q_a(z) is a measure of sensitivity to noise. It does not contain any part that can be reduced by the chosen adjustment process. However, any systematic part of the relationship between y and M that cannot be adjusted is included in Q_a(z). [For the servomotor, the nonlinearity (2nd, 3rd, and higher order terms) of the relationship between y and M is contained in Q_a(z).] Minimization of Q_a(z) makes the design robust against the noise factors and reduces the nonadjustable part of the relationship between y and M. Any control factor that has an effect on y_ij but has no effect on Q_a(z) can be used to adjust the mean function on target without altering the sensitivity to noise, which has already been minimized. Such a control factor is called an adjustment factor.

It is easy to verify that minimization of Q_a(z), followed by adjusting the mean function on target using an adjustment factor, is equivalent to minimizing Q(z) subject to the constraint that the mean function is on target. This optimization procedure is called a two-step procedure for obvious reasons. For further discussion of the two-step procedure and the S/N ratios, see Taguchi and Phadke [T6], Phadke and Dehnad [P4], Leon, Shoemaker, and Kackar [L2], Nair and Pregibon [N2], and Box [B1].

It is important to be able to predict the combined effect of several control factors from the knowledge of the effects of the individual control factors.
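A minimal numerical sketch of this adjustment (not from the original text; the servomotor data below are made up purely for illustration) solves the orthogonality condition for the linear, gear-ratio type adjustment and then evaluates the loss after adjustment:

```python
def adjusted_loss(signals, responses):
    """Sketch of Equations (5.7)-(5.8) for the linear adjustment v_ij = y_ij / beta,
    with k = 1. `signals` holds the signal values M_i and `responses[i]` holds the
    observations y_ij taken at signal M_i under the different noise conditions."""
    # Orthogonality condition: sum((y/beta - M) * M) = 0 over all observations,
    # which gives beta = sum(y * M) / sum(M * M).
    num = sum(M * y for M, ys in zip(signals, responses) for y in ys)
    den = sum(M * M * len(ys) for M, ys in zip(signals, responses))
    beta = num / den

    # Quality loss after adjustment, Equation (5.8): mean squared error of the
    # adjusted responses about the ideal function y = M.
    n_obs = sum(len(ys) for ys in responses)
    q_a = sum((y / beta - M) ** 2 for M, ys in zip(signals, responses) for y in ys) / n_obs
    return beta, q_a

# Illustrative data: three signal levels, two noise conditions each.
signals = [10.0, 20.0, 30.0]
responses = [[10.8, 11.4], [21.5, 22.9], [32.0, 34.6]]
beta, q_a = adjusted_loss(signals, responses)
print(beta, q_a)   # slope of the observed mean function, and the loss after adjustment
```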
The natural scale of Q_a(z) is not suitable for predicting such combined effects because it could easily give us a negative prediction for Q_a(z), which is absurd. By using the familiar decibel scale, we not only avoid the negative prediction, but also improve the additivity of the factor effects. Thus, to minimize the sensitivity to noise factors, we maximize η, which is given by

    η = −10 log10 Q_a(z)        (5.9)

Note that the constant k in Q_a(z), and sometimes some other constants, are generally ignored because they have no effect on the optimization.

Following Taguchi, we refer to η as the S/N ratio. In the polysilicon deposition example discussed in Section 5.1, we saw that Q_a ∝ (σ²/μ²), where σ² is the effect of the noise factors and μ² is the desirable part of the thickness data. Thus, maximizing η amounts to maximizing (μ²/σ²), the ratio of the power of the signal (the desirable part of the response) to the power of the noise (the undesirable part of the response). As will be seen through the cases discussed in the subsequent sections of this chapter, whenever a scaling type of adjustment factor exists, the quantity being maximized takes this form of a ratio of the power of the signal to the power of the noise. Therefore, Q_a and η are both referred to as the S/N ratio. As a matter of convention, we call Q_a and η the S/N ratio even in other cases where the "ratio" form is not that apparent.

The general optimization strategy can be summarized as follows:

1. Evaluate the effects of the control factors under consideration on η and on the mean function.

2. For the factors that have a significant effect on η, select levels that maximize η.

3. Select any factor that has no effect on η but a significant effect on the mean function as an adjustment factor. In practice, we must sometimes settle for a factor that has a small effect on η but a significant effect on the mean function as an adjustment factor. Use the adjustment factor to bring the mean function on target. Adjusting the mean function on target is the main quality control activity in manufacturing. It is needed because of changing raw material, varying processing conditions, etc. Thus, finding an adjustment factor that can be changed conveniently during manufacturing is important. However, finding the level of the adjustment factor that brings the mean precisely on target during product or process design is not important.

4. For factors that have no effect on η and the mean function, we can choose any level that is most convenient from the point of view of other considerations, such as other quality characteristics and cost.

What adjustment is meaningful in a particular engineering problem and what factor can be used to achieve the adjustment depend on the nature of the particular problem. Subsequent sections discuss several common engineering problems and derive the appropriate S/N ratios using the results of this section.

5.3 S/N RATIOS FOR STATIC PROBLEMS

Finding a correct objective function to maximize in an engineering design problem is very important. Failure to do so, as we saw earlier, can lead to great inefficiencies in experimentation and even to wrong conclusions about the optimum levels. The task of finding what adjustments are meaningful in a particular problem and determining the correct S/N ratio is not always easy. Here, we describe some common types of static problems and the corresponding S/N ratios.
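The nominal-the-best problem of Section 5.1 is the simplest of these static problems. As a concrete preview (a sketch that is not part of the original text; the data are hypothetical), the general two-step strategy summarized above looks like this in code: compute η and the mean for each candidate factor level, pick the level with the largest η, and then put the mean on target with a scaling factor:

```python
import math

def sn_nominal_the_best(values):
    """Nominal-the-best S/N ratio, Equation (5.4): eta = 10*log10(mean^2 / variance).
    Returns the ratio in dB together with the sample mean."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 10.0 * math.log10(mean ** 2 / var), mean

# Step 1: compare candidate control factor levels by eta alone.
# (Hypothetical thickness readings, in Angstrom, under the noise conditions.)
levels = {
    "A1": [1780, 1820, 1795, 1805],
    "A2": [3350, 3600, 3250, 3400],
}
for level, data in levels.items():
    eta, mean = sn_nominal_the_best(data)
    print(level, round(eta, 1), round(mean))

# Step 2: for the level with the highest eta, bring the mean on target with the
# scaling factor (here, deposition time), which changes the mean but not eta.
target = 3600.0
eta, mean = sn_nominal_the_best(levels["A1"])
scale = target / mean            # multiply the deposition time by this factor
print(scale)
```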