Taguchi Methods Explained
Eastern Economy Edition
Rs. 175
TAGUCHI METHODS EXPLAINED Practical Steps to Robust Design
Tapan P. Bagchi (Ph.D., Toronto)

This well-organized, compact volume introduces the reader to Taguchi Methods — a revolutionary approach innovated in Japan to engineer quality and performance into new products and manufacturing processes. It explains on-the-job application of Taguchi Methods to make products and processes perform consistently on target, hence making them insensitive to factors that are difficult to control. Designed for practising engineering managers with responsibility in process performance, quality control, and R&D, and for students of engineering and process design, the text provides all the essential tools for planning and conducting prototype development and tests which guarantee improved final field performance of products and manufacturing processes. Replete with examples, exercises, and actual case studies, the book shows how electronic circuit devices, mechanical fabrication methods, and chemical and metallurgical processes can be made robust and stable to consistently provide on-target performance, in spite of the presence of 'noise' (raw material quality variations, environmental changes, voltage fluctuations, operator's inconsistency, and so on), all of which are external factors that cannot be economically controlled. The book also shows the reader how to plan reliable and efficient tests with physical prototypes and computer models to evaluate products and processes during development to improve them. Finally, it explains state-of-the-art methods to make even complex systems robust, where design variables "interact", making conventional design optimization methods difficult to apply.

(continued on back flap)
TAGUCHI METHODS EXPLAINED Practical Steps to Robust Design
TAPAN P. BAGCHI
Professor, Industrial and Management Engineering
Indian Institute of Technology, Kanpur
Prentice-Hall of India
New Delhi-110001
1993
Rs. 175.00
TAGUCHI METHODS EXPLAINED: Practical Steps to Robust Design
by Tapan P. Bagchi
PRENTICE-HALL INTERNATIONAL, INC., Englewood Cliffs. PRENTICE-HALL INTERNATIONAL (UK) LIMITED, London. PRENTICE-HALL OF AUSTRALIA PTY. LIMITED, Sydney. PRENTICE-HALL CANADA, INC., Toronto. PRENTICE-HALL HISPANOAMERICANA, S.A., Mexico. PRENTICE-HALL OF JAPAN, INC., Tokyo. SIMON & SCHUSTER ASIA PTE. LTD., Singapore. EDITORA PRENTICE-HALL DO BRASIL, LTDA., Rio de Janeiro.
© 1993 by Prentice-Hall of India Private Limited, New Delhi. All rights reserved. No part of this book may be reproduced in any form, by mimeograph or any other means, without permission in writing from the publishers.
ISBN 0-87692-808-4
The export rights of this book are vested solely with the publisher.
Published by Prentice-Hall of India Private Limited, M-97, Connaught Circus, New Delhi-110001 and Printed by Bhuvnesh Seth at Rajkamal Electric Press, B-35/9, G.T. Karnal Road Industrial Area, Delhi-110033.
To
the Fond Memory of
Bhalokaku
Contents

Preface  ix

1. What Are Taguchi Methods?  1-17
   1.1 The Road to Quality Starts at Design  1
   1.2 Achieving Quality—Taguchi's Seven Points  2
   1.3 Optimized Design Reduces R&D, Production, and Lifetime Cost  3
   1.4 Taguchi's Definition of Quality  6
   1.5 What Causes Performance to Vary?  9
   1.6 Prevention by Quality Design  11
   1.7 Steps in Designing Performance into a Product  12
   1.8 Functional Design: The Traditional Focus  13
   1.9 Parametric Design: The Engineering of Quality  14
   1.10 Statistical Experiments Discover the Best Design Reliably and Economically  16
   Exercises  17

2. Handling Uncertainty  18-40
   2.1 The Mystique of Probability  18
   2.2 The Idea of a Random Variable  21
   2.3 Some Useful Formulas  25
   2.4 'Hypothesis Testing': A Scientific Method to Validate or Refute Speculations  27
   2.5 Comparing Two Population Means Using Observed Data  30
   2.6 Cause-Effect Models and Regression  32
   2.7 Evaluating a Suspected "Cause" Factor  33
   2.8 The F-Statistic  37
   2.9 An Alternative Approach to Finding F: The Mean Sum of Squares  39
   Exercises  40

3. Design of Experiments  41-60
   3.1 Testing Factors One-at-a-Time is Unscientific  41
   3.2 The One-Factor Designed Experiment  44
   3.3 ANOVA Helps Compare Variabilities  49
   3.4 The F-Test Tells If Factor Effects are Statistically Significant  53
   3.5 Formulas for Sum of Squares and the F-Test  55
   3.6 Summary  58
   Exercises  59

4. The Foundation of Taguchi Methods: The Additive Cause-Effect Model  61-78
   4.1 What is Additivity?  61
   4.2 Why Achieving Additivity is So Important  62
   4.3 The Verification of Additivity  65
   4.4 The Response Table: A Tool That Helps Find Main Effects Quickly  65
   4.5 Graphic Evaluation of Main Effects  68
   4.6 Optimization of Response Level and Variability  70
   4.7 Orthogonal Arrays vs. Classical Statistical Experiments  72
   4.8 Summary  77
   Exercises  78

5. Optimization Using Signal-to-Noise Ratios  79-89
   5.1 Selecting Factors for Taguchi Experiments  79
   5.2 To Seek Robustness One Should Measure Performance by S/N Ratios  81
   5.3 S/N Ratio in Optimization—An Example  84
   5.4 Not All Performance Characteristics Display Additivity  85
   5.5 The OA as the Experiment Matrix  86
   5.6 The Axiomatic Approach to Design  87
   5.7 Summary  88
   Exercises  88

6. Use of Orthogonal Arrays  90-106
   6.1 What are Orthogonal Arrays?  90
   6.2 OAs are Fractional Factorial Designs  92
   6.3 Not All Factors Affect Performance the Same Way  94
   6.4 Identifying Control and Noise Factors: The Ishikawa Diagram  95
   6.5 At What Levels Should One Study Each Factor?  97
   6.6 Reaching the Optimized Design  99
   6.7 Testing for Additivity  100
   6.8 The Optimization Strategy  100
   6.9 Taguchi's Two Steps to On-Target Performance with Minimum Variability  103
   6.10 Summary  104
   Exercises  104

7. Case Study 1: Process Optimization—Optical Filter  107-113
   7.1 The Process for Manufacturing Optical Filters  107
   7.2 Test Settings of Control Parameters and the OA  108
   7.3 Performance Measurements and the S/N Ratio  110
   7.4 Minimizing log10(s²), the Variability of Thickness  111
   7.5 The Confirmation Experiment  111
   7.6 Adjusting Mean Crystal Thickness to Target  112

8. Selecting Orthogonal Arrays and Linear Graphs  114-122
   8.1 Sizing up the Design Optimization Problem  114
   8.2 Linear Graphs and Interactions  116
   8.3 Modification of Standard Linear Graphs  118
   8.4 Estimation of Factor Interactions Using OAs  119
   8.5 Summary  122
   Exercises  122

9. Case Study 2: Product Optimization—Passive Network Filter Design  123-139
   9.1 The Passive Network Filter  123
   9.2 Formal Statement of the Design Problem  125
   9.3 The Robust Design Formulation of the Problem  125
   9.4 Data Analysis and Estimation of Effects  129
   9.5 Effects of the Design Parameters  131
   9.6 Discussion on Results  135
   9.7 Filter Design Optimization by Advanced Methods  136

10. A Direct Method to Achieve Robust Design  140-161
   10.1 Re-Statement of the Multiple Objective Design Optimization Problem  140
   10.2 Target Performance Requirements as Explicit Constraints  141
   10.3 Constraints Present in the Filter Design Problem  142
   10.4 Seeking Pareto-Optimal Designs  143
   10.5 Monte Carlo Evaluation of S/N Ratios  144
   10.6 Can We Use C (or R2) as the Independent DP instead of R3?  146
   10.7 Some Necessary Mathematical Tools  147
   10.8 Developing a Multiple Regression Model  150
   10.9 Rationale of the Constrained Robust Design Approach  153
   10.10 Application of the Constrained Approach to Real Problems  155
   10.11 Discussion of the Constrained Design Optimization Approach  159

11. Loss Functions and Manufacturing Tolerances  162-171
   11.1 Loss to Society is More Than Defective Goods  162
   11.2 Determining Manufacturing Tolerances  164
   11.3 Loss Functions for Mass-Produced Items  170
   11.4 Summary  171
   Exercises  171

12. Total Quality Management and Taguchi Methods  172-183
   12.1 Why Total Quality Management?  172
   12.2 What Really is Quality?  174
   12.3 What is Control?  174
   12.4 Quality Management Methods  174
   12.5 The Business Impact of TQM  176
   12.6 Control of Variability: Key to QA  177
   12.7 How is Statistics Helpful?  178
   12.8 Practical Details of Planning a Taguchi Project  179

Appendix A: Standard Normal, t, Chi-square, and F-Tables  185-190
Appendix B: Selected Orthogonal Arrays and Their Linear Graphs  191-196
Glossary  197-202
References  203-204
Index  205-209
Preface

Taguchi methods are the most recent additions to the toolkit of design, process, and manufacturing engineers, and Quality Assurance (QA) experts. In contrast to Statistical Process Control (SPC), which attempts to control the factors that adversely affect the quality of production, Taguchi methods focus on design: the development of superior performance designs (of both products and manufacturing processes) to deliver quality. Taguchi methods lead to excellence in the selection and setting of product/process design parameters and their tolerances. In the past decade, engineers have applied these methods in over 500 automotive, electronics, information technology, and process industries worldwide. These applications have reduced cracks in castings, increased the life of drill bits, produced VLSI with fewer defects, speeded up the response time of UNIX V, and even guided human resource management systems design.

Taguchi methods systematically reveal the complex cause-effect relationships between design parameters and performance. These in turn lead to building quality performance into processes and products before actual production begins. Taguchi methods have rapidly attained prominence because wherever they have been applied, they have led to major reductions in product/process development lead time. They have also helped in rapidly improving the manufacturability of complex products and in the deployment of engineering expertise within an enterprise.

The first objective of Taguchi methods, which are empirical, is reducing the variability in quality. A key premise of Taguchi methods is that society incurs a loss any time a product whose performance is not on target gets shipped to a customer. This loss is measurable by the loss function, a quantity dependent on the deviation of the product's performance from its target performance. Loss functions are directly usable in determining manufacturing tolerance limits.

Delivering a robust design is the second objective of Taguchi methods. Often there are factors present in the environment over which the user of a product has little or no control. The robust design procedure adjusts the design features of the product such that the performance of the product remains unaffected by these factors. For a process, the robust design procedure optimizes the process parameters such that the quality of the product that the process delivers stays on target and is unaffected by factors beyond control. Robust design minimizes variability (and thus the lifetime cost of the product), while retaining the performance of the product on target. Statistically designed experiments using orthogonal arrays and signal-to-noise (S/N) ratios constitute the core of the robust design procedure.

This text provides the practising engineer an overview of the state of the art in Taguchi methods, the methods for engineering superior and lasting performance into products and processes.
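For a modern reader, the signal-to-noise ratio mentioned above can be sketched in a few lines of Python. This is an illustrative sketch only: the formula used is Taguchi's nominal-the-best variant, 10 log10(mean^2/variance), which is one of several S/N forms, and the measurement data are invented.

```python
import math

def sn_nominal_the_best(values):
    """Nominal-the-best S/N ratio: 10 * log10(mean^2 / variance).

    A higher value means the response is large relative to its spread,
    i.e. the design setting is more robust."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return 10 * math.log10(mean ** 2 / var)

# Hypothetical repeated measurements at two candidate design settings
setting_a = [9.8, 10.1, 10.0, 10.2, 9.9]   # same rough mean, tight spread
setting_b = [8.5, 11.0, 10.5, 9.0, 11.5]   # same rough mean, wide spread

print(sn_nominal_the_best(setting_a))  # larger S/N: the more robust setting
print(sn_nominal_the_best(setting_b))
```

Maximizing this ratio over candidate settings, rather than the mean response alone, is what steers the optimization toward low variability.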
Chapters 1-3 introduce the reader to the basic ideas in the engineering of quality, and the needed tools in probability and statistics. Chapter 4 presents the additive cause-effect model, the foundation of the Taguchi methodology for design optimization. Chapter 5 defines the signal-to-noise ratio, the key performance metric that measures the robustness of a design. Chapter 6 describes the use of orthogonal arrays (OAs), the experimental framework in which empirical studies to determine the dependency of performance on design and environmental factors can be efficiently done. Chapter 7 illustrates the use of these methods in reducing the sensitivity of a manufacturing process to uncontrolled environmental factors. Chapter 8 provides the guidelines for the selection of appropriate orthogonal arrays for real-life robust design problems. A case study in Chapter 9 shows how one optimizes a product design. Chapter 10 presents a constrained optimization approach which would be of assistance when the design parameter effects interact. Chapter 11 shows how Taguchi loss functions can be used in setting tolerances for manufacturing. Chapter 12 places Taguchi methods in the general framework of Total Quality Management (TQM) in an enterprise.

Throughout the text, examples and exercises have been provided for enabling the reader to have a better grasp of the ideas presented. Besides, the fairly large number of References should stimulate the student to delve deeper into the subject.

I am indebted to Jim Templeton, my doctoral guide and Professor; from him I had the privilege of imbibing much of my knowledge in applied probability. I am also grateful to Birendra Sahay and Manjit Kalra, whose enormous confidence in me led to the writing of this book.
I wish to thank Mita Bagchi, my wife, and Damayanti Singh, Rajesh Bhaduri, and Ranjan Bhaduri, whose comments and suggestions have been of considerable assistance in the preparation of the manuscript. The financial assistance provided by the Continuing Education Centre, Indian Institute of Technology, Kanpur to partially compensate for the preparation of the manuscript is gratefully acknowledged. Finally, this book could not have been completed without the professionalism and dedication demonstrated by the Publishers, Prentice-Hall of India, during both the editorial and production stages. Any comments and suggestions for improving the contents would be warmly appreciated.
Tapan P. Bagchi
1. What Are Taguchi Methods?
1.1 THE ROAD TO QUALITY STARTS AT DESIGN
Quality implies delivering products and services that meet customers' standards and fulfill their needs and expectations. Quality has been traditionally assured by Statistical Process Control (SPC), a collection of powerful statistical methods facilitating the production of quality goods by intelligently controlling the factors that affect a manufacturing process. SPC attempts to achieve quality by reacting to deviations in the quality of what the manufacturing plant has recently produced. In this chapter, however, we present an overview of a somewhat different approach for assuring quality, consisting essentially of certain specially designed experimental investigations. Collectively known as the Taguchi methods, these methods focus on improving the design of manufacturing processes and products. A designer applies Taguchi methods off-line, before production begins. When applied to process design, Taguchi methods can help improve process capability. These methods also reduce the sensitivity of the process to assignable causes, thereby substantially reducing the on-line SPC effort required to keep the quality of production on target.

The significance of beginning Quality Assurance (QA) with an improved process or product design is not difficult to gauge. Experience suggests that nearly 80 per cent of the lifetime cost of a product becomes fixed once its design is complete. Recent studies suggest that a superior product design ranks among the foremost attributes of a successful enterprise [1]. The application of Taguchi methods leads to superior performance designs known as robust designs.
Statistical experimentation and analysis methods have been known for over 60 years [2, 3]. However, the Japanese appear to have been the first to use these methods formally in selecting the best settings of process/product design parameters [4]. In the West, the most notable user of Taguchi methods has been AT&T, U.S.A., whose product development efforts now incorporate parametric optimization [5]. The foundation of the Taguchi methods is based on two premises:

1. Society incurs a loss any time the performance of a product is not on target. Taguchi has argued that any deviation from target performance results in a loss to society. He has redefined the term 'quality' to be the losses a product imparts to society from the time it is shipped.

2. Product and process design requires a systematic development, progressing stepwise through system design, parametric design, and finally, tolerance design. Taguchi methods provide an efficient, experimentation-based framework to achieve this.
The first premise suggests that whenever the performance of a product deviates from its target performance, society suffers a loss. Such a loss has two components: the manufacturer incurs a loss when he repairs or rectifies a returned or rejected product not measuring up to its target performance; the consumer incurs a loss in the form of inconvenience, monetary loss, or a hazardous consequence of using the product.

The second premise forms the foundation of quality engineering, a discipline that aims at engineering not only the function, but also quality performance into products and processes.

Taguchi's original work circulated mainly within his native country, Japan, until the late '70s, when some translations became available in other countries. The American Society for Quality Control published a review of Taguchi's methods, especially of "off-line quality control", in 1985 [6]. Since then, many engineers outside Japan have also successfully applied these methods [7].

The Taguchi philosophy professes that the task of assuring quality must begin with the engineering of quality: product and process design optimization for performance, quality, and cost. To be effective, it must be a team effort involving marketing, Research and Development (R&D), production, and engineering. Quality engineering must be completed before the product reaches the production stage. One can often take countermeasures during process and product design. Such countermeasures can effectively assure that the product a manufacturing process delivers will be on target, and that it will continue to perform on target. These countermeasures require a well-planned, systematic, and essentially empirical investigation during process/product design and development. For this reason, Taguchi called this procedure "off-line" [8]; it precedes on-line Quality Control (QC) done during manufacturing, using control charts and other reactive methods (see Section 12.4).

1.2 ACHIEVING QUALITY—TAGUCHI'S SEVEN POINTS
Achieving superior performance calls for an attitude that must continuously search for incremental improvement. The Japanese call this kaizen. This trait is different from the commonly applied method of relying only on new technologies and innovations as the route to quality improvement. The following seven points highlight the distinguishing features of Taguchi's approach (as different from the traditional approach) which is aimed at assuring quality:

1. Taguchi defined the term 'quality' as the deviation from on-target performance, which appears at first to be a paradox. According to him, the quality of a manufactured product is the total loss generated by that product to society from the time it is shipped.

2. In a competitive economy, Continuous Quality Improvement (CQI) and cost reduction are necessary for staying in business.

3. A CQI programme includes continuous reduction in the variation of product performance characteristics about their target values.

4. The customer's loss attributable to product performance variation is often proportional to the square of the deviation of the performance characteristic from its target value.

5. The final quality and cost (R&D, manufacturing, and operating) of a manufactured product depend primarily on the engineering design of the product and its manufacturing process.

6. Variation in product (or process) performance can be reduced by exploiting the nonlinear effects of the product (or process) parameters on the performance characteristics.

7. Statistically planned experiments can efficiently and reliably identify the settings of product and process parameters that reduce performance variation.

One achieves kaizen by formally integrating design and R&D efforts with actual production in order to get the process right and continually improve it. A large number of design, process, and environmental factors are usually involved in such a task. Consequently, there is no effective way of doing kaizen except by the pervasive use of scientific methods. Statistically designed experiments, in particular, can generate highly valuable insights about the behaviour of a process or product, normally using only a surprisingly small number of experiments. The consequence of superior performance is the superior fit of the manufacturing process or product to its users' requirements. Subsequently this reduces the product's lifetime cost of use.

1.3
OPTIMIZED DESIGN REDUCES R&D, PRODUCTION, AND LIFETIME COST
Cost trade-offs in quality decisions are not new. This is how industry sometimes justifies its QA programmes. Most managers believe that quality requires action when quality-related operating costs, which belong to one of the three following categories, go out of line:

Failure costs result from inferior quality products in the form of scrap, rejects, repair, etc. Failure costs are also involved in the returns from customers, loss of goodwill, or a plant failure causing loss of production, property, or life at the customer's site.

Appraisal costs are incurred while inspecting, appraising, and evaluating the quality of the products one manufactures, or the materials, parts, and supplies one receives.

Prevention costs are incurred when one attempts to prevent quality problems from occurring by (a) engaging process control, optimization experiments and studies; (b) training operators on correct procedures; and (c) conducting R&D to produce close-to-target products.

A manufacturer often trades off one of these costs for another. Some manufacturers choose not to invest in prevention, engaging instead a team of technicians to do warranty service. When there is a monopoly, sometimes the warranty service is also cut, regardless of its effect on customers.
A large sum of money spent on appraisal can help screen out defective products, preventing them from getting to customers. This is inspection-based QA. As of today, most Very Large Scale Integration (VLSI) chips have to be produced this way. It should be clear, however, that QA based on appraisal is reactive and not preventive: it takes action after production. If resources can be directed to prevention instead, one increases the likelihood of preventing defects and quality problems from developing. With preventive action, prevention costs for an enterprise may rise, but failure costs are often greatly reduced [9]. Reduction of defective production directly cuts down in-house scrap and rejects. This also reduces returns from customers and their dissatisfaction with the product. Also, the producer projects a quality image, which often gives a marketing edge. It may be possible, of course, to go overboard with quality if we disregard real requirements. The ISO 9000 Standards document [10] as well as QFD [34] also emphasize the value of establishing the customer's real needs first. Business economists suggest that the target for quality should be set at a level at which the profit contribution of the product is most favourable (Fig. 1.1).
[Figure: curves of market value, manufacturing cost, and their difference (contribution) plotted against increasing precision of design.]

Fig. 1.1 Contribution and precision of design
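The trade-off sketched in Fig. 1.1 can be illustrated numerically. Both curves below are invented for illustration: a market value that saturates as precision of design increases, and a manufacturing cost that keeps climbing, so the contribution (value minus cost) peaks at an intermediate precision rather than at the maximum attainable precision.

```python
import math

# Hypothetical curves: market value saturates with precision, while
# manufacturing cost grows steeply, so contribution peaks in between.
def market_value(p):
    """Value the market places on the product at precision level p."""
    return 100.0 * (1.0 - math.exp(-0.5 * p))

def manufacturing_cost(p):
    """Cost of manufacturing to precision level p."""
    return 5.0 * p ** 1.8

precisions = [0.5 * i for i in range(1, 15)]          # 0.5 .. 7.0
contribution = {p: market_value(p) - manufacturing_cost(p)
                for p in precisions}
best = max(contribution, key=contribution.get)
print(best)   # an interior optimum: neither minimal nor maximal precision
```

With these particular curves the optimum falls in the middle of the grid, which is the qualitative point of the figure: past some precision, added manufacturing cost outruns added market value.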
In his writings Taguchi has stated that delivering a high quality product at low cost involves engineering, economics, use of statistical methods, and an appropriate management approach emphasizing continuous improvement. To this end Taguchi has proposed a powerful preventive procedure that he calls robust design. This procedure optimizes product and process designs such that the final performance is on target and has minimum variability about this target. One major outcome of off-target performance, be it with ill-fitting shoes, defective keyboards, or a low-yielding chemical process, is the increase in the lifetime cost of the product or process (see Table 1.1). We may classify this total cost as the cost that the product/process imposes on society (the producer, the consumer, and others who may not even be its direct users) as follows:
Operating cost: The costs of energy, consumables, maintenance, environmental control, inventory of spare parts, special skills needed to use the product, etc. constitute the product's operating cost. Generally, with robust design this cost can be greatly reduced.

Manufacturing cost: Jigs, special machinery, raw and semi-finished materials, skilled and unskilled labour, QC, scrap, rework, etc. constitute the manufacturing cost. Again, with robust design, the requirements of special skills, raw materials, special equipment, controlled environment, on-line QC effort, etc. can be substantially reduced.

R&D cost: Engineering and laboratory resources, expert know-how, patents, technical collaborations, prototype development, field trials, etc. constitute the R&D cost of the product. R&D aims at producing drawings, specifications, and all other information about technology, machinery, skills, materials, etc. needed to manufacture products that meet customer requirements. The goal here is to develop, document and deliver the capability for producing a product with the optimum performance, at lowest manufacturing and operating cost. Robust design can play a key role in this effort too.

TABLE 1.1  INITIAL PRICE vs. LIFETIME COST OF PRODUCTS IN COMMON USE*

Product                         Initial Price ($)   Lifetime Cost ($)
Air Conditioners                       200                 665
Dishwasher                             245                 617
Electric Dryer                         182                 670
Gas Dryer                              207                 370
Freezer, 15 cu. ft.                    165                 793
Electric Range                         175                 766
Gas Range                              180                 330
Frost-Free Refrigerator                230                 791
B&W Television                         175                 505
Colour Television                      540                1086
Electric Typewriter                    163                 395
Vacuum Cleaner                          89                 171
Washing Machine                        235                 852
Industrial Process Equipment        75,000           32,182/yr**

* F.M. Gryna (1977): Quality Costs — User vs. Manufacturer, Quality Progress, June, pp. 10-13.
** Includes repairs (part, material and labour), contract labour, defective product produced and lost production.
Generally, the producer incurs the R&D and manufacturing costs and then passes these on to the consumer. In addition, the consumer incurs the operating cost as he uses the product, especially when performance deviates from target. The knowledge emerging from Taguchi's work affirms that high quality means lower operating cost and vice versa. Loss functions provide a means to quantify this statement.
The robust design method — the key QA procedure put forth by Taguchi — is a systematic method for keeping the producer's costs low while delivering the highest quality to the consumer. Concerning the manufacturing process, the focus of robust design is to identify process setting regions that are least sensitive to inherent process variation. As will be shown later, this eventually helps improve the quality of what is produced, by minimizing the effect of the causes of variation, without necessarily eliminating the causes.

1.4
TAG UCHI’ S DEFINITION OF QUALITY
What quality should a manufacturer aim to deliver? To resolve this fundamental dilemma as the debate intensified, Juran and others [9] defined the quality of a produc pro ductt to be its “fitnes “fitnesss for use” as asse assessed ssed by the custo customer. mer. Taguchi Tagu chi has given an improved definition of this: a product has the ideal quality when it delivers on target performance each time its user uses the product, under un der all intended operating conditions, and throughout its intended life [4]. This ideal quality serves as a reference point even though it may not be possible to produce a product with ideal quality. A manufacturer should not think of o f quality except in terms of meeting customer expectations, which may be specific and many. People using pencils, for example, may desire durable points providing clear lines, and erasers that last until at least half the pencil is used. Pencil chewers would additionally want that the paint be lead-free! The ideal quality is performance at target rather than within some specification specification tolerance limits. This has been best shown by a study of customer preference of colour TV sets manufactured using identical designs and tolerances, but with different quality objectives. The Asahi newspaper reported this study on April 17, 1979 [5]. A Sony-U.S.A. factory aimed at producing sets within colour density tolerance m ± 5. It produced virtually no sets outside this tolerance. A SonyJapan factory produced identical sets but it aimed directly at hitting the target density m, resulting in a roughly normal distribution of densities with a standard deviation 5/3 (see Fig. 1.2). Careful customer preference studies showed that American customers who bought bou ght these TVs preferred prefe rred the sets made in Japan ove overr those mad madee in U.S.A. Even if the fraction of sets falling outside the spec limits in U.S. 
production was lower than that in the Japanese production, the proportion of “Grade A” sets (those judged to do the best) from Japan was considerably higher and that of “Grade C” sets considerably lower. Thus the average grade of sets made by Sony-Japan was better than that by Sony-U.S.A. This reflected the higher quality value of sets made by Sony-Japan. At least two other major industry studies involving automobile manufacture have led to identical conclusions [1, 11]. Reflecting on experiences such as these, Taguchi suggested that a product imparts a loss to society when its performance is not on target. This loss includes any inconvenience, and monetary or other loss, the customer incurs when he uses the product. Taguchi proposed that manufacturers approach the ideal quality by
WHAT ARE TAGUCHI METHODS?
[Fig. 1.2 Distribution of colour density in television sets. The Sony-U.S.A. and Sony-Japan density distributions are shown against the tolerance limits m − 5 and m + 5, with sets graded (A, B, C, D) by the distance of their colour density from the target m. (Source: The Asahi, April 17, 1979.)]
examining the total loss a product causes because of its functional variation from this ideal quality and any harmful side effect the product causes. The primary goal of robust design is to evaluate these losses and effects, and determine (a) process conditions that would assure the product made is initially on target, and (b) characteristics of a product which would make its performance robust (insensitive) to environmental and other factors not always in control at the site of use, so that performance remains on target during the product’s lifetime of use. To enforce these notions Taguchi (re)defined the quality of a product to be the loss imparted to society from the time the product is shipped. Experts feel this loss should also include societal loss during manufacturing [6]. The loss caused to a customer ranges from mere inconvenience to monetary loss and physical harm. If Y is the performance characteristic measured on a continuous scale when the ideal or target performance level is τ, then, according to Taguchi, the loss caused L(Y) can be effectively modelled by a quadratic function (Fig. 1.3)

L(Y) = k(Y − τ)²
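The pull of on-target production over merely within-spec production can be quantified with this loss function. Below is a minimal Python sketch; the loss coefficient k = 1 and the two illustrative distributions (uniform within τ ± 5, versus normal with σ = 5/3 centred on τ, echoing the Sony example) are assumptions used only for illustration, not figures from the study:

```python
# Quadratic loss L(Y) = k*(Y - tau)**2; expected loss = k * E[(Y - tau)^2].
# Illustrative comparison (k = 1 is an arbitrary assumption):
# - "within-spec" production: density uniform over tau +/- 5
# - "on-target" production:  density normal, mean tau, sigma = 5/3

k = 1.0
tol = 5.0

# E[(Y - tau)^2] for a uniform distribution over [tau - tol, tau + tol]
# is (2*tol)**2 / 12.
loss_within_spec = k * (2 * tol) ** 2 / 12

# E[(Y - tau)^2] for a normal distribution centred on tau is sigma**2.
sigma = tol / 3
loss_on_target = k * sigma ** 2

print(round(loss_within_spec / loss_on_target, 2))  # -> 3.0
```

Under these assumptions the expected loss of the within-spec producer is three times that of the on-target producer, which is the direction of the customer preference reported above.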
Note here that the loss function relates quality to a monetary loss, not to a ‘gut feeling’ or other mere emotional reactions. As will be shown later, the quadratic loss function provides the necessary information (through signal-to-noise ratios) to achieve effective quality improvement. Loss functions also show why it is not good enough for products to be merely within specification limits. Parts and components that must fit together to function are better made at their nominal (or the midpoint specification) dimensions than merely within their respective specification tolerances [11]. When performance varies, one determines the average loss to customers by statistically averaging the quadratic loss. The average loss is proportional to the mean squared error of Y about its target value τ, found as follows: If one
[Fig. 1.3 The relationship between quality loss and performance deviation from target. (Axes: loss vs. performance characteristic.)]
produces n units of a product giving performances y1, y2, y3, ..., yn respectively, then the average loss caused by these units because of their not being exactly on target τ is

(1/n)[L(y1) + L(y2) + ... + L(yn)] = (k/n)[(y1 − τ)² + (y2 − τ)² + ... + (yn − τ)²]

= k[(μ − τ)² + ((n − 1)/n)σ²]
where μ = Σyi/n and σ² = Σ(yi − μ)²/(n − 1). Thus the average loss, caused by variability, has two components:

1. The average performance (μ) being different from the target τ contributes the loss k(μ − τ)².
2. Loss kσ² results from the performance {yi} of the individual items being different from their own average μ.

Thus the fundamental measure of variability is the mean squared error of Y (about the target τ), and not the variance σ² alone. Interestingly, it may be noted that ideal performance requires perfection in both accuracy (implying that μ be equal to τ) as well as precision (implying that σ² be zero). A high quality product performs near the target performance value consistently throughout the life span of the product.
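The two-component decomposition can be verified numerically. In this Python sketch the sample values, k, and target τ are made-up numbers used only to check the algebra:

```python
# Decomposition of the average quadratic loss into a bias term and a
# variance term, checked numerically on a small illustrative sample
# (the performance values, k and target tau below are invented).
k, tau = 2.0, 10.0
y = [9.2, 10.5, 10.1, 9.8, 10.9, 9.6]
n = len(y)

# Direct average loss: (k/n) * sum of squared deviations from target.
avg_loss_direct = k * sum((yi - tau) ** 2 for yi in y) / n

# Decomposed form: k * [ (mu - tau)^2 + (n-1)/n * s^2 ].
mu = sum(y) / n
s2 = sum((yi - mu) ** 2 for yi in y) / (n - 1)   # sample variance
avg_loss_decomposed = k * ((mu - tau) ** 2 + (n - 1) / n * s2)

print(abs(avg_loss_direct - avg_loss_decomposed) < 1e-9)  # -> True
```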
Whenever available, a quantitative model that describes how the performance of a product or process design depends on the various design parameters is of great help in the optimization of designs. This dependency may become evident by invoking scientific and engineering principles, or by conducting experiments with a physical prototype. In the trial-and-error method of experimentation, intuition rather than a systematic procedure guides what levels of variable settings one should try. This approach appeals to many investigators for its apparent ‘simplicity’ [12]. In this approach, chance plays an important role in delivering the optimized design. The next popular approach is the one-variable-at-a-time experimental search to find the optimum setting. This method too is simple, but (a) the one-at-a-time approach is
inefficient when the number of variables is large and (b) it can miss detection of critical interactions among design variables [12]. By sharp contrast to the trial-and-error approach, statistical design of experiments is a systematic method for setting up experimental investigations. Several factors can be varied in these experiments at one time. This procedure yields the maximum amount of information about the effect of several variables and their interactions while using the minimum number of experiments. In a statistically designed experiment, one varies the levels of the independent input variables from trial to trial in a systematic fashion. A matrix of level settings defines these settings such that maximum information can be generated from a minimum number of trials. Moreover, some special statistical experiments require mere simple arithmetical calculations to yield sufficiently precise and reliable information. Classical statistical experiments, called full factorial designs, require trials under all combinations of factors. Taguchi has shown that if one runs orthogonally designed experiments instead, many product and process designs can be optimized — economically and effectively, and with surprising efficiency. Taguchi’s robust design experiments for the most part use only orthogonal arrays (OAs) rather than full factorial designs. Orthogonally designed parametric optimization experiments act as an efficient distillation mechanism that identifies and separates the effect each significant design or environmental factor has on performance. This in turn leads to products that (a) deliver on-target performance and (b) show minimum sensitivity to noise or uncontrolled environmental factors.
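The balancing property that makes orthogonal arrays work can be checked in a few lines of Python. The L4 array below is the standard 4-run array for three two-level factors:

```python
# The standard L4 orthogonal array: 4 runs, 3 two-level factors.
# In every pair of columns each level combination (1,1), (1,2), (2,1), (2,2)
# appears exactly once -- the balancing property that lets main effects be
# separated with only 4 of the 2**3 = 8 full-factorial runs.
from itertools import combinations

L4 = [
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
]

def is_orthogonal(array):
    cols = list(zip(*array))
    for a, b in combinations(range(len(cols)), 2):
        pairs = list(zip(cols[a], cols[b]))
        if len(set(pairs)) != 4:            # all four combinations present
            return False
        if any(pairs.count(p) != 1 for p in set(pairs)):  # each exactly once
            return False
    return True

print(is_orthogonal(L4))  # -> True
```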
1.5 WHAT CAUSES PERFORMANCE TO VARY?
Variation of a product’s quality performance arises due to (a) environmental factors; (b) unit-to-unit variation in material, workmanship, manufacturing methods, etc.; and (c) aging or deterioration (see Table 1.2). The Taguchi approach focusses on minimizing variations in performance by determining the “vital few” conditions of manufacture from the “trivial many”, economically and efficiently, such that when one finally manufactures the product, it is highly probable that it is, and remains on, target. Robust design aims specifically at determining product features such that performance becomes insensitive to the
environmental and other factors that the customer would perhaps not be able to or wish to control.

TABLE 1.2 FACTORS AFFECTING PRODUCT AND PROCESS PERFORMANCE

Outer Noise
  Product performance: Consumer’s usage conditions, low temperature, high temperature, solar radiation, shock, vibration, humidity, dust
  Process performance: Ambient temperature, humidity, dust, incoming material, operator performance, voltage and frequency, batch-to-batch variation

Inner Noise
  Product performance: Deterioration of parts, deterioration of material, oxidation
  Process performance: Machinery aging, tool wear, shift in control

Between Products/Processes
  Product performance: Occurrence of piece-to-piece variation when the pieces are supposed to be the same
  Process performance: Occurrence of process-to-process variation when the processes are supposed to be the same

Controllable Factors
  Product performance: All design parameters such as dimension, material, configuration, packaging, etc.
  Process performance: All process design parameters; all process setting parameters
Most real-life manufacturing processes lead to unit-to-unit variation in production and to what is produced not being always on target. Such variations may be caused by raw material differences, operator’s errors and inconsistencies, and factors such as vibration, temperature changes, humidity, etc. When one produces items in a batch, batch-to-batch process setting differences also introduce variation in product performance. In addition, manufacturing processes have a tendency to drift, causing off-target production as time passes. The first step toward robust process design is the tentative identification of all the above-mentioned factors. Such a step, to be effective, requires contributions from technology experts, workers, designers, marketers, and even customers. One then includes the factors found in the statistical experiments so that their effects (individual, or interactive) may be estimated and countermeasured, if necessary. The challenges in product design are similar. The opening/closing of a refrigerator door, the amount of food kept in it, the initial temperature of food, variation in ambient temperature, and power supply voltage fluctuation are environmental factors that can affect a refrigerator’s performance. For a solar cooker, all but the last aspect might be important. One requires engineering and operational experience with the product and sound scientific judgment to ensure that all relevant factors are included in robust product design studies. Only then may experiments to optimize the design be planned.
An efficient tool for locating and identifying the potential factors that may affect product or process performance is the Ishikawa Cause-Effect diagram (Fig. 1.4).

[Fig. 1.4 Cause and effect diagram for potential causes leading to cracks during contact lens grinding. The main branches group the potential causes under Operator, Machine, Material, and Method.]
Engineers sometimes use screening experiments to review a large number of potentially important factors to separate the key factors. In such experiments the objective is to identify the input factors having the largest impact on the process — the vital few among the trivial many. Taguchi experiments can then be used to optimize and confirm the settings of these vital factors.

1.6 PREVENTION BY QUALITY DESIGN
Next to quality, manufacturing cost is a primary attribute of a product. However, it may appear impossible or at best difficult to reduce manufacturing cost while one is seeking on-target performance plus low variability. Somewhat surprisingly, Taguchi methods deliberately and consciously seek designs that use inexpensive components and parts and yet deliver on-target performance. The premise of this approach is that 80% of the lifetime cost (Table 1.1) of a product is fixed in its design stage. If the design calls for steel instead of plastic, then manufacturing can only aim at the remaining 20% (mostly labour) by seeking productivity during production. It is very important, therefore, that besides those affecting performance, the design engineer identifies aspects that have a significant bearing on the cost and manufacturability of the product, and then through statistical experiments sets these also at optimal levels.
One is often unaware of the dependency of the output (performance) on the input (design and environmental factors) even if the technology is familiar and the manufacturing plant has made the product many times over. For instance, it is possible for forgings to show cracks even after a plant has made thousands of them. In practice, one does not generally know the effect of all the control factors that can be manipulated by the product/process designer. Also, one is often unaware of the noise factors that are uncontrollable but present during production or in the environment in which the product is used. However, achieving robust quality design requires that one finds out these effects systematically and countermeasures them. The Japanese discovered in 1953 that the most effective solution of quality problems is during product and process design. In that year, the Ina Tile Company used statistical experiments to successfully reduce finished tile size variability by a factor of 10 [4, 5]. Many investigations have now confirmed that it is too late to start thinking about quality control when the product is coming out of the reactors or exiting the production line. The remedy here, as proposed by Taguchi, is a three-step approach to correctly designing the product. These steps must precede production to maximize the product’s chances of delivering on-target performance with minimum variability.

1.7 STEPS IN DESIGNING PERFORMANCE INTO A PRODUCT
Designing with the objective of building quality into a product involves three steps [4]:

1. System (or concept or functional) design. This is the first step in design and it uses technical knowledge to reach the initial design of the product that delivers the basic, desired functional performance. Several different types of circuits or chemical reactions or mechanisms may be investigated, for instance, to arrive at a functional audio amplifier, a synthetic lubricant or a braking device. The technology of a special field often plays a major role in this step to reach the functional design — the initial, acceptable settings of the design parameters.

2. Parameter design. In this step, one finds the optimum settings of the design parameters. To achieve this, one fabricates or develops a physical or mathematical prototype of the product based on the functional design (from step 1) and subjects this prototype to efficient statistical experiments. This gives the parameter values at which performance is optimum. Two types of experiments are conducted here: the first aims at identifying process parameter values or settings such that the product made by the process performs on target; the second aims at determining the effects of the uncontrolled, environmental, and other factors, to find design parameter settings such that performance suffers minimal deviation from target (i.e., it is robust) when one actually uses the product in the field. Parameter design identifies the optimum nominal values of the design parameters.

3. Tolerance design. Here, one determines the tolerances on the product design parameters, considering the loss that would be caused to society should the performance of the product deviate from the target.
In the functional design, one develops a prototype design (physical or mathematical) by applying scientific and engineering knowledge. From this effort one produces a basic design that broadly meets the customer’s requirements. Functional design is a highly creative step in which the designer’s experience and creativity play a key role. Good judgment used in functional design can reduce both the sensitivity of the product to environmental noise and its manufacturing cost. In parameter design, one conducts extensive empirical investigation to systematically identify the best settings of (a) process parameters that would yield a product that meets (the customer’s) performance requirement and (b) the design parameters of the product such that the product’s performance will be robust (stay near the target performance) while the product is in actual field use. Parameter design uses orthogonal arrays and statistical experiments to determine parameter settings that deliver on-target performance as also minimum variability for the product’s quality characteristics. In tolerance design, one determines manufacturing tolerances that minimize the product’s lifetime and manufacturing costs. The special device used here for expressing costs and losses is the Taguchi Loss Function, mentioned earlier. The objective in tolerance design is to achieve a judicious trade-off between (a) the quality loss attributable to performance variation and (b) any increase in the product’s manufacturing cost. The loss function philosophy acknowledges that society (consumers, manufacturers, and those affected indirectly by the product) incurs a loss with the product whenever the product’s performance deviates from its expected target performance. Thus, it is not enough for a product to “meet specifications”. Its performance must be as close to the target as possible. The loss function-based approach to robust design (through measures known as signal-to-noise ratios) also reduces problems in the field and is thus a preventive quality assurance step. As will be explained later, a third major advantage of aiming at on-target production (rather than only meeting specifications) is the reduction of catastrophic stack-up of deviations [1]. Loss functions help in bringing the customer requirement orientation into a plant. They also eliminate inequitable assignment of manufacturing tolerances between departments making parts that should fit and function together. Each
department then views the department following it as its customer and sets its own manufacturing tolerances using the loss function. Chapter 10 discusses these techniques. In this manner the manufacturing organization makes tolerance adjustments in whichever departments they are most economical to make, resulting in the reduction of the total manufacturing cost per unit [8].

1.8 FUNCTIONAL DESIGN: THE TRADITIONAL FOCUS
Functional design ideally creates a prototype process or product that delivers functional performance. Sometimes a product has to meet more than one Functional Requirement (FR) [13]. This requires research into concepts, technologies, and specialized fields. Many innovations occur at this stage and the core of this effort is concept design.
A functional design sometimes produces a mathematical formula; by using it, performance can be expressed as an explicit function of the (values of) design parameters. For instance, developing a mathematical representation of the functional design of passive filter type devices is a common activity in electrical engineering. By using the Kirchhoff current law, the transfer function (V0/Vs) for the circuit shown in Fig. 9.1 may be obtained as

V0/Vs = R3Rg / [(R2 + Rg)(Rs + R3) + R3Rs + (R2 + Rg)R3RsCs′]

where s′ is the Laplace variable. From this transfer function, the filter cutoff frequency ωc and the galvanometer full-scale deflection D may be found respectively as

ωc = [(R2 + Rg)(Rs + R3) + R3Rs] / [2π(R2 + Rg)R3RsC]

D = Vs / {Gscn[(R2 + Rg)(Rs + R3) + RsR3]}
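The cutoff-frequency expression can be evaluated numerically for candidate parameter settings, which is the kind of calculation a parameter-design study repeats many times. In this Python sketch every component value is hypothetical, chosen only for illustration:

```python
import math

# Numeric sketch of the filter cutoff frequency.  All resistor and
# capacitor values below are hypothetical, chosen only to illustrate how
# the designer-chosen parameters R2, R3 and C drive the response.
R2, R3, C = 100.0, 50.0, 10e-6   # designer-chosen design parameters (assumed)
Rs, Rg = 20.0, 1000.0            # source and galvanometer resistances (assumed)

numerator = (R2 + Rg) * (Rs + R3) + R3 * Rs
wc = numerator / (2 * math.pi * (R2 + Rg) * R3 * Rs * C)
print(f"cutoff frequency: {wc:.1f}")
```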
The design parameters (DPs) that the designer is free to specify are R2, R3, and C. Another design example, from chemical engineering, illustrates a similar functional process model — also a mathematical relationship between the design parameters and performance. Many chemical processes apply mechanical agitation to promote contacting of gases with liquids to encourage reaction. Based on reaction engineering principles, the relationship between the utilization of the reacting gas and the two key controllable process variables may be given by

Utilization (%) = K (mixing HP/1000 gal)^a (superficial velocity)^b

As will be illustrated through a case study in Chapter 9, such mathematical models can be as useful as physical prototypes in achieving a robust design. Traditionally, product and process design receive maximum attention during functional design. Most engineering disciplines expound the translation of scientific concepts to their applications so that the designer is able to develop the functional design. Refinements to this initial design by trial and error may be attempted on the shop floor — combined possibly with limited field testing of the prototype. True optimization of the design, however, is rarely thus achieved or
attempted [12]. The Taguchi philosophy sharply contrasts with this traditional approach to design. Taguchi has contended that, besides building function into a product, its design should engineer quality also. In his own words: “quality is a virtue of design.”

1.9 PARAMETRIC DESIGN: THE ENGINEERING OF QUALITY
A quality product, during its period of use, should have no functional variation. The losses caused by it to society by repairs, returns, fixes, adjustment, etc., and by its
harmful side effects are designed to be small. During its design, one takes countermeasures to assure this objective. The use of Taguchi methods makes it possible that measures may be taken at the product design stage itself to achieve (a) a manufacturing process that delivers products on target and (b) a product that has robust performance and continues to perform near its target performance. As already stated, the performance of a robust product is minimally affected by environmental conditions in the field, by the extent of use (aging), or by item-to-item variation during manufacturing. Besides, robust product design aims at the selection of parts, materials, components, and nominal operating conditions so that the product will be producible at minimum cost. The three steps involved in robust design are:

1. Planning the statistical experiments is the first step and includes identification of the product’s main function(s), what side effects the product may have, and factor(s) constituting failure. This planning step spells out the quality characteristic Y to be observed, the control factors {θ1, θ2, θ3}, the observable noise factors {w1, w2, w3}, and the levels at which they will be set during the various test runs (experiments). It also states which orthogonal design will be employed (see Fig. 1.5) to conduct the statistical experiments and how the observed data {y1, y2, y3, ...} will be analyzed.
[Fig. 1-5 A parameter design experiment plan. A design matrix (control factors θ1, θ2, θ3, set using an inner orthogonal array constructed from the design factor treatments, with three treatments available for each factor) is crossed with a noise matrix (observable noise factors w1, w2, w3, set using an outer orthogonal array, with two distinct levels for each noise factor). Running every inner-array trial against every outer-array noise condition yields the observed performance characteristics y1, ..., y36, from which a computed performance statistic Z(θ)1, ..., Z(θ)9 is obtained for each inner-array trial.]
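The crossed-array data collection of a parameter design experiment can be sketched in Python. This is a scaled-down illustration: the small inner and outer arrays, the response function, and the use of the nominal-the-best signal-to-noise ratio as the performance statistic Z are all assumptions made for the sketch, not the book’s data:

```python
import math
import random

# Scaled-down sketch of a crossed inner/outer array: each control-factor
# setting (inner-array row) is tested under every noise condition
# (outer-array row), and a performance statistic Z is computed per row.
# The response function below is invented purely for illustration.
random.seed(1)

inner = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]   # control factors
outer = [(1, 1), (1, 2), (2, 1), (2, 2)]               # noise factors

def response(theta, w):
    # Hypothetical process: how strongly noise bites depends on theta.
    base = 10 + 2 * theta[0] - theta[1]
    noise = (0.5 if theta[2] == 1 else 2.0) * (w[0] - 1.5) * (w[1] - 1.5)
    return base + noise + random.gauss(0, 0.1)

results = []
for theta in inner:
    ys = [response(theta, w) for w in outer]
    m = sum(ys) / len(ys)
    v = sum((y - m) ** 2 for y in ys) / (len(ys) - 1)
    z = 10 * math.log10(m ** 2 / v)   # nominal-the-best S/N ratio
    results.append((theta, round(z, 1)))
print(results)
```

The control setting with the largest Z is the least noise-sensitive, which is exactly the comparison the inner/outer plan is built to support.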
2. Actual conducting of the experiments.

3. Analysis of the experimental observations to determine the optimum settings for the control factors, to predict the product’s performance at these settings, and to conduct the validation experiments for confirming the optimized design and making plans for future actions. Taguchi has recommended that one should analyze this using a specially transformed form of the performance characteristic Y, known as the signal-to-noise ratio {Z(θ)i} (see Section 4.2), rather than using the observed responses {yi} directly.

One conducts all the required experiments, guided by the principles of design of experiments (see Chapter 3). This assures that the conclusions reached are valid, reliable, and reproducible. Briefly stated, the novel idea behind parametric design is to minimize the effect of the natural and uncontrolled variation in the noise factors by choosing the settings of the control factors judiciously to exploit the interactions between control and noise factors, rather than by reaching for high-precision and expensive parts, components and materials and plant control schemes. Such a possibility was perceived by Taguchi before anyone else.

1.10 STATISTICAL EXPERIMENTS DISCOVER THE BEST DESIGN RELIABLY AND ECONOMICALLY
Many engineers hesitate to use statistical formulas or analysis in their work. Sound decisions about quality, however, require that one obtains the appropriate data (in the most efficient way) and analyzes it correctly. That is precisely what statistics helps us to do. In particular, when several factors influence product or process performance, statistically designed experiments are able to separate reliably the vital few factors that have the most effect on performance from the trivial many. This separation results in mathematical models that make true product and process design optimization possible. Also, statistical experiments produce the supporting data for verifying some hypothesis about a response (dependent) variable — usually with the smallest number of individual experiments. An example can best illustrate this point. If four types of tyre materials (M1, M2, M3, and M4) are available, four vehicle types (V1, V2, V3, and V4) are present, and four types of terrains (T1, T2, T3, and T4) exist on which the vehicles will be used, then the total number of ways to combine these factors to study them is 4³ or 64. At first, 64 may appear to be the number of tests with the different vehicles, terrains, etc. that one must run. However, if prior knowledge suggests that tyre wear is unaffected by which tyre material is used on which vehicle, and on which terrain (i.e., there are no interactions (Section 3.1)), and the objective is to identify the main effect of materials and the effects of changing vehicle type and the driving terrain, one will need to run only 16 (Latin-square designed) statistical experiments (Fig. 1.6) to grade the materials based on wear. This is a substantial saving of effort.
[Fig. 1.6 A 4 × 4 Latin-square layout for the tyre wear study: rows are vehicle types V1–V4, columns are terrains T1–T4, and each cell assigns one of the tyre materials M1–M4.]
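The run-count argument can be checked directly. This Python sketch builds a cyclic 4 × 4 Latin square (one valid choice among many) and confirms its balancing property:

```python
# A 4x4 Latin square assigns each tyre material Mi exactly once to every
# vehicle (row) and every terrain (column), so all three main effects can
# be estimated from 16 runs instead of the 4**3 = 64 full-factorial runs.
n = 4
square = [[(r + c) % n + 1 for c in range(n)] for r in range(n)]  # cyclic design

runs = [(f"V{r + 1}", f"T{c + 1}", f"M{square[r][c]}")
        for r in range(n) for c in range(n)]

assert len(runs) == 16                  # vs 4**3 == 64 combinations
# Latin property: each material once per row and once per column.
assert all(sorted(row) == [1, 2, 3, 4] for row in square)
assert all(sorted(col) == [1, 2, 3, 4] for col in zip(*square))

print(runs[:4])
# -> [('V1', 'T1', 'M1'), ('V1', 'T2', 'M2'), ('V1', 'T3', 'M3'), ('V1', 'T4', 'M4')]
```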
picions, one needs procedures that reliably analyze the observed data. Such analysis should separate the effect of the factor of interest from all other
influences and variabilities. The factor of interest here may be quantifiable (i.e., measurable in numbers) or merely observable as a condition (“morning shift” vs. “afternoon shift”, or steel vs. copper, etc.). There is a question here of sufficiency of data. Obviously, a single secretary typing only business letters cannot confirm whether wordprocessor A is easier to use than wordprocessor B. In order to establish that A is easier or harder to use (than B), one would be well-advised to test several secretaries on several different machines, working on various assignments. In hypothesis testing, when the response (the output of the process one is studying) is observed or the performance measurements are taken, one must ensure that the data collected would allow the investigator to compare, for instance, machine-to-machine, person-to-person, assignment-to-assignment and wordprocessor-to-wordprocessor variations in performance. These factors can affect the secretary’s performance over and above any effect that changing the wordprocessor alone may produce. In making such a comparison, one must also decide what summaries (statistics) would be calculated from the observed data, and what tests should be applied to these statistics. Fisher [2] in 1926 established that the analysis of variance (ANOVA) procedure provides one of the best procedures to conduct such comparisons. Why is it necessary that several different machines, assignments, and secretaries be involved in such tests? Perhaps one feels that such elaboration adds needlessly to the complexity of the study and is perhaps wasteful of time and resources. If the same person is always going to use the same machine and type only business memos, one may perhaps get away with doing the ‘convenient’ investigation.
However, after the investigator has made his recommendation, the typing assignments would perhaps differ — some requiring text work, numbers, columns, and tables, or even flow charts. It would be desirable then to use a method of comparison that is valid under less restrictive and perhaps more realistic conditions. Further, as we shall see later, some influencing factors might be beyond the investigator’s control. One here needs randomizing, a procedure that attempts to average out the influence of the uncontrolled factors on all observations. The scientific approach of evaluating or comparing the effects of various factors uses statistically designed experiments, a systematic procedure of drawing observations after setting the factors at certain desired levels, and then analyzes the observed data using the ANOVA procedure.

2.7.1 An Illustration of the ANOVA Method
Suppose one speculates that crops grow better with the application of the Miragro brand fertilizer. This speculation could be stated as the null hypothesis H0 (see Section 2.3): “Miragro is better than plain water”. To test the acceptability of this hypothesis, one plans an investigation. The investigator decides to measure plant growth as the height achieved in inches — after 12 weeks of planting healthy seedlings, with and without the application of Miragro. The other factors that also could influence growth (or the lack of it) are soil
HANDLING UNCERTAINTY
quality, amount of sunlight, seed quality, moisture, etc. To reach a valid conclusion in this investigation, therefore, one would have to neutralize these influences by randomizing the plant growth trials with respect to these factors.

If this randomizing is without any plan or logic, it is possible that, by the luck of the draw, most plain water-fed plants would end up growing, for instance, in shade. In order to avoid this, some deliberate balancing would have to be planned. If 16 plants are to be grown, eight would be given plain water, while the other eight would be given Miragro. However, randomizing would decide which plants would receive Miragro, and which plain water, regardless of where one plants them.

Suppose that one obtains the following height measurements after 12 weeks of planting, beginning with 16 equally healthy seedlings:

    'Treatment'      Miragro          Plain Water
                     26  28  30  33   25  27  29  30
                     22  24  26  27   21  23  25  24
    Sample Mean      27               25.5
    Variance         10.25            8
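The sample means and variances in the table can be reproduced with a short script (a minimal sketch in Python; note that, to match the table's figures, the variance here divides by n rather than n − 1):

```python
# Plant heights (inches) after 12 weeks, 8 plants per treatment,
# transcribed from the table above.
miragro = [26, 28, 30, 33, 22, 24, 26, 27]
plain_water = [25, 27, 29, 30, 21, 23, 25, 24]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    # The table divides by n (not n - 1), so we do the same here.
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(mean(miragro), variance(miragro))          # 27.0 10.25
print(mean(plain_water), variance(plain_water))  # 25.5 8.0
```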
The calculated means and variances under the two treatments immediately show that the mean heights of the plants grown under the two treatments 'differ'. Also, notice the considerable difference in height from plant to plant under each of the two treatments. This suggests that one cannot be certain that the fertilizer treatment caused the difference in means, and not chance ("chance" here includes all the factors the investigator did not or could not control).

The observed difference between the two sample means, 27 and 25.5 inches, under the two treatments could be either because of a true difference influenced by these two treatments, or because of the large variance of a single distribution of plant heights under various influences. Therefore, to probe the hypothesis H0 further, we begin by assuming that the effects of plain water and Miragro are unequal and set up two simple cause-effect models:

    Height with plain water:  y = μ1 + ε
    Height with Miragro:      y = μ2 + ε

In these models the parameter μi is the expected effect on height (caused by Miragro or by water), and ε is the unexplained deviation or error, a random (chance-influenced) variable representing the influence of all other uncontrolled factors (sunshine, moisture, soil condition, etc.).
TAGUCHI METHODS EXPLAINED — PRACTICAL STEPS TO ROBUST DESIGN
Comparing Miragro with plain water. In reality, if Miragro makes no difference in growth over plain water, then μ1 and μ2 would be equal and the observed difference between average heights (27 − 25.5) would be attributable only to the random error ε. How can the investigator determine whether there exists a difference between Miragro and plain water treatment? The key to this question is the plant-to-plant height variations. If the plants vary very little within a given treatment, then (27 − 25.5) is persuasive enough to conclude that Miragro affects growth over plain water. On the other hand, if the heights within a given treatment vary considerably from plant to plant, the (27 − 25.5) difference noted is not persuasive enough. Therefore, we proceed next to compare the observed common variance across all plants grown to the observed difference in sample means under the two treatments.

The overall pooled or common variance (σ²) of plant heights reflects the variable influence of all factors — controlled and uncontrolled — that influence plant growth. This common variance may be estimated by the formula

    σ² = (variance across plants with Miragro + variance across plants with plain water)/2    (2.7.1)
       = (10.25 + 8)/2 = 9.125

The reader should verify Eq. (2.7.1) using Eq. (2.5.3). With this common variance of individual plant heights known, we can, given sample size n, next estimate the variance of sample averages. This will equal (σ²/n) (see Eq. (2.3.10)). In the present example, the averages 27 and 25.5 are sample averages, each with sample size 8. Therefore, the variance of the sample averages is

    9.125/8 = 1.140625    (2.7.2)
Since two sample means (27 and 25.5) were estimated, with their average being 26.25, one could directly calculate the variance of sample means, using the definition of variance (Eq. 2.3.6), as

    Variance of sample means = [(27 − 26.25)² + (25.5 − 26.25)²]/(2 − 1)    (2.7.3)
                             = 1.125

One may now analyze the two variances calculated from Eq. (2.7.2) and Eq. (2.7.3) for the sample means. The observed plant-to-plant variance of 9.125 across all plants implies a variance of 9.125/8 (= 1.140625) for the sample means with sample size of 8. If mean growth did get affected by Miragro, then that would cause the two sample means to be significantly different from each other or, in
other words, to result in a directly calculated variance of sample means greater than 1.140625, and not 1.125 as calculated above. Since 1.125 < 1.140625, this suggests that the observations obtained do not support acceptance of the hypothesis (H0) that Miragro application is better than plain water.

To summarize, in the foregoing discussion we obtained two estimates of the variance of mean plant height, the first from the overall variance of the plants grown, and the second based directly on the average heights of plants grown under Miragro and under plain water treatment. If Miragro treatment affected mean plant height more (or less) than did plain water, the average heights under the two treatments would differ and thus produce a larger variance of mean plant height than that found from the overall variance of the plants grown. Thus, we base the test of the hypothesis here on a comparison of variances of sample means observed under different conditions.
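The two estimates of the variance of the sample mean, Eqs. (2.7.1) through (2.7.3), can be checked numerically (a sketch; the figures are those of the Miragro example):

```python
# Two estimates of the variance of the sample mean (n = 8 per treatment),
# following Eqs. (2.7.1)-(2.7.3) with the table's figures.
n = 8
var_miragro, var_water = 10.25, 8.0
mean_miragro, mean_water = 27.0, 25.5

# Eq. (2.7.1): pooled (common) plant-to-plant variance
pooled_var = (var_miragro + var_water) / 2          # 9.125

# Eq. (2.7.2): implied variance of a sample mean of n plants
var_mean_implied = pooled_var / n                   # 1.140625

# Eq. (2.7.3): variance of sample means computed directly (k = 2 means)
grand = (mean_miragro + mean_water) / 2             # 26.25
var_mean_direct = ((mean_miragro - grand) ** 2 +
                   (mean_water - grand) ** 2) / (2 - 1)   # 1.125

F = var_mean_direct / var_mean_implied
print(round(F, 3))  # 0.986
```

Since F falls below 1.0, the directly observed spread of the two treatment means is no larger than what plant-to-plant variation alone would produce.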
2.8 THE F-STATISTIC
In Section 2.5 the Z-, the t-, and the chi-square statistics have been described. Another useful data summary, known as the F-statistic, is calculated as the ratio of observed sample variances. The F-statistic is particularly helpful in the comparison of variances, as attempted above in the Miragro fertilizer example.

In the Miragro fertilizer example, we first estimated the variance of plant height across all plants, σ², by averaging the two variances in the two 8-plant samples. Dividing this by n (= 8, the sample size), we produced one estimate of the variance of sample means. Next, we computed the observed variance of sample means directly from the two sample means (27 and 25.5) estimated under the two treatment conditions. We are now in a position to calculate the F-statistic for this problem, which is the ratio of the two variance estimates for the sample means:

    F = (directly calculated variance of sample means)/(average variance of individual plant heights / n)
A calculated ratio (of variance estimates) near 1.0, as one would expect, would suggest that all the plant growth data came from a single large population, implying that application of Miragro fertilizer made no difference. If, on the other hand, the ratio (F) is much larger than 1.0, this suggests that the variance of sample means, directly calculated using the observed sample means obtained at different treatments, is large. This may also suggest that the sample means obtained under the different treatment conditions (here Miragro feeding and plain water) differ considerably from each other, or are too large to be explained away by the sampling variation from one plant to the next. In the Miragro fertilizer example, F = 0.986, which is not convincing evidence that adding Miragro makes a difference.

The general character of the F-statistic is as follows: The distribution of the F-statistic depends on two factors: the number of distinct treatments (k) and the number of observations (n) in each treatment. In the above example, k is 2 and n is 8. The numerator of the F-statistic is the directly calculated variance of the sample mean, determined from the k sample means obtained under k different
treatments. The numerator has (k − 1) degrees of freedom, one dof being used in estimating the average of sample means for the variance of sample means to be calculated. The denominator, the estimate of the variance of sample means for a sample size n based on the averaged variance of individual observations, uses a total of k × n observations. The calculation of the mean of the k sample variances (in the k treatments) from these kn observations, however, requires that one first calculate k sample means (one each in the k treatments used). Thus, the denominator of F will have k(n − 1) degrees of freedom. The two degrees of freedom (that of the numerator and that of the denominator) determine the exact distribution of the F-statistic. Various F-distributions appear in Fig. 2.4. One should remember that an F value near 1.0 indicates that the effects of the treatments do not differ. On the other hand, if the F-statistic is significantly larger, it would suggest that the mean treatment effects vary significantly from each other.
Fig. 2.4 The F-distribution: (a) with critical values F0.975 and F0.025; and (b) three F-distributions with different degrees of freedom.
2.9 AN ALTERNATIVE APPROACH TO FINDING F: THE MEAN SUM OF SQUARES
As shown above, the F-statistic forms an important basis in the test of hypotheses about means and variances using experimental data. The F-statistic can be found by an alternative method that is arithmetically simpler, because only certain sums of squares rather than actual variances have to be calculated in this second approach. This method hinges on the fact, first pointed out by Fisher [2] in 1926, that the total sum of squares of deviations of the individual observations {Yj} from their grand mean (Ybar) is the sum of two parts.

That is, let there be k treatments (Treatment 1, Treatment 2, ..., Treatment k) in an experiment, with n observations obtained with each of these different treatments. The total set of observations is then Y1, Y2, ..., Ykn — k times n in number. Let the average of the effects in Treatment 1 be Ybar1 (= (Y1 + Y2 + ... + Yn)/n), the average of the effects in Treatment 2 be Ybar2 (= (Yn+1 + Yn+2 + ... + Y2n)/n), and so on. The grand mean (average of all observations), Ybar, is given by

    Ybar = (Y1 + Y2 + ... + Ykn)/(kn)

If one now calculates the total sum of squares of the deviation of each observation Yj from Ybar, one obtains

    Total sum of squares = Σ(j=1..kn) (Yj − Ybar)²    (2.9.1)
Now, for treatment i, which consists of the n observations {Yj, j = ((i − 1)n + 1), ((i − 1)n + 2), ..., in}, the term (Yj − Ybar)² may be expanded as follows:

    (Yj − Ybar)² = [(Yj − Ybari) + (Ybari − Ybar)]²
                 = (Yj − Ybari)² + (Ybari − Ybar)² + 2(Yj − Ybari)(Ybari − Ybar)

Now

    Σ(j=(i−1)n+1..in) (Yj − Ybari)(Ybari − Ybar)
        = Σj Yj (Ybari − Ybar) − Σj Ybari (Ybari − Ybar)
        = n Ybari (Ybari − Ybar) − n Ybari (Ybari − Ybar) = 0
since Ybari = (Σj Yj)/n, the summation being over the n observations in treatment i. The cross term thus vanishes when the expansion is summed over all kn observations, and the total sum of squares separates into two terms. The first term is the sum of squares of the deviations of the individual observations {Yj, j = 1, 2, ..., kn} about the respective treatment means {Ybari, i = 1, 2, ..., k}. The second term is the sum of the squares of the difference of each Ybari (at treatment level i) from the grand mean, Ybar.

Recall that the total sum of squares is a measure of variation among the individual observations {Yj}. The above decomposition shows that this total variation is the sum of (a) how much each observation varies about the mean of each treatment and (b) how much the average value of Y varies from one treatment to the next. This important result can also be expressed as
    Total variation in observations = variation within treatments + variation between treatments    (2.9.3)
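The decomposition of Eq. (2.9.3) can be verified numerically on the Miragro data of Section 2.7.1 (a sketch; the treatment labels are illustrative):

```python
# Verify: total SS = within-treatment SS + between-treatment SS,
# using the 16 plant heights of the Miragro example.
treatments = {
    "miragro": [26, 28, 30, 33, 22, 24, 26, 27],
    "water":   [25, 27, 29, 30, 21, 23, 25, 24],
}
all_obs = [y for ys in treatments.values() for y in ys]
grand = sum(all_obs) / len(all_obs)

total_ss = sum((y - grand) ** 2 for y in all_obs)

within_ss = 0.0
between_ss = 0.0
for ys in treatments.values():
    tbar = sum(ys) / len(ys)
    within_ss += sum((y - tbar) ** 2 for y in ys)       # about treatment mean
    between_ss += len(ys) * (tbar - grand) ** 2         # treatment mean vs grand mean

print(total_ss, within_ss, between_ss)  # 155.0 146.0 9.0
```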
The purpose of decomposing the observation-to-observation variation in an experiment as mentioned above is to clarify that the effect observed, Y, varies for the following reasons:
1. Each (controlled) treatment may have a different effect on Y.
2. For any given treatment, there are other uncontrolled factors that also affect Y and cause it to vary about its expected value.

The uncontrolled factors lead to the 'within treatment' variability in observed data. If there is no difference in the effect attributable to the different treatments, the total variation observed would only equal the within-treatment variability. If there is a treatment-to-treatment difference, the total variation observed and then quantified by the total sum of squares will be significantly larger than the within-treatment sum of squares. This, as we shall see in Section 3.4, may be detected by the F-test.

EXERCISES
1. An additional set of 25 lines was randomly picked from the book used in Example 2.1, with the data summarized as follows:

    Word Count of 25 Randomly Selected Lines

    9   7  11   5  13
    14  13   8   5   2
    11  12   4   2  13
    8   14   7  13   9
    14   2  13   7  10
Combine the above data with the data of Table 2.1 and confirm that the new 95% confidence interval for μ using the 50 random observations will be the narrower interval 7.8131 < μ < 10.3069. [Hint: Use t(0.025, 49) from Appendix A.]

2. Conduct an F-test to accept or refute the hypothesis that the variances of the two sets of data presented in Table 2.1 and Exercise 1 above are equal. If there are 435 pages in the book in question, give estimates for the total word count for this book and the variance of this count.
Design of Experiments

3.1 TESTING FACTORS ONE-AT-A-TIME IS UNSCIENTIFIC
Disputes over why quality is lacking or why a factory can't produce acceptable goods often last for months and even years. Even 'experts' sometimes don't seem to agree on the remedy — a switch-over of material, loading methods, operator skills, tools, or QA practices. For want of irrefutable evidence, the blame may subsequently fall on manufacturing, R&D, the design office, suppliers, and even the customer. This chapter elaborates the F-test — a highly precise data analysis method that ranks among the best known methods for empirically exploring which factors influence which features. Establishing the existence of cause-effect relationships scientifically is pivotal in resolving disputes and questions such as those cited above and in guiding later decisions and actions. As we shall see, the F-test plays a key role in identifying design features that have significant influence on performance and robustness.

In the study of physical processes aimed at predicting the course of these processes, one often explores cause-effect relationships using regression analysis. Strictly speaking, however, regression should be attempted only after one has established the presence of a cause-effect relationship, and when the variables involved are measurable. When one has not already established the cause-effect relationship, or when the variables are functional or all influenced by a 'third' factor, regression or correlation studies can be misleading. Further, regression is decidedly not useful when the independent factors are attributive (e.g., 'steel' vs. 'plastic'). By contrast, precise and reliable insight into any cause-effect relationships existing in such cases can be obtained from statistically designed experiments.
Design is defined as the selection of parameters and specification of features that would help the creation of a product or process with a pre-defined, expected performance. When complete, a design improves our capability to fulfil needs through the creation of physical or informational structures, including products, machines, organizations, and even software. Except in the most trivial cases, however, the designer faces the joint optimization of all design features, keeping in view objective aspects that may include functionality, manufacturability, maintainability, serviceability, and reliability. Often this cannot be done in one step, because design as a process involves a continual interplay between the characteristics the design should deliver and how this is to be achieved. Producing a robust design, in particular, is a complex task. As mentioned in Chapter 1, robust design aims at finding parameter settings which would ensure that performance is on target, minimizing simultaneously the influence of any adverse factors (the noise) that the product user may be unable to control economically or eliminate. Robust design
aims specifically at delivering quality performance throughout the lifetime of service and use of the product. Finding how the various design and environmental factors affect performance can become extremely difficult for the designer. Sometimes the direct application of scientific and engineering knowledge can lead to (mathematical) cause-effect models, and when such a model is available, the designer may attempt a direct optimization. When this is not possible, well-controlled experimentation with the suspect factors may enable one to evaluate empirically how performance depends on these various factors. Experimentation here would be a systematic learning process — the act of observing to improve the understanding of certain physical processes.

In the physical, social, and sometimes the behavioural sciences, experimentation is a common experience. Use of litmus paper to test acidity might perhaps be the first 'scientific experiment' to which the reader was exposed. Observing reaction times of pilots under contrived stress, or polling viewers to understand their TV viewing habits and preferences, are also experiments. In the improvement of quality also, controlled trials and tests with prototype products and processes, followed by scientific analysis of the outcomes, may produce valuable information. Such experiments may explore the effect of material choices, design features, or process conditions on the performance of products and processes. Experimentation is perhaps the only way of finding the average breaking strength of new mixes of concrete, or of confirming that certain curing temperature and time are the best settings in moulding plastic parts from polypropylene resins. As mentioned above, statistically designed experiments are among the best known approaches for empirically discovering cause-effect relationships.
These experiments also require the smallest number of trials, thereby providing the best economy. Statistical experiments are certainly not mere observations of an uncontrolled, random process. Rather, these are well-planned, controlled experiments in which certain factors are systematically set and modified and their effect on the results (the response) observed. Statistical experimental designs specify the procedure of drawing a sample (certain special observations) with the intention of reaching a decision (about whether certain factors cause a certain effect or that they do not cause the effect).

Statistical experiments provide many advantages over the popular "one-factor-at-a-time" studies for the following reasons:
1. Statistical experiments secure a larger amount of appropriate data (than do experiments conducted otherwise) for drawing sound conclusions about cause-effect relationships.
2. The data from a statistical experiment yield more information per observation. Statistical experiments routinely allow 'all-at-once' experimentation,
yet their precise data analysis procedure is able to separate the individual factor effects and any interaction effect due to the different factors influencing each other's effects. Interactions cannot be uncovered by one-factor-at-a-time experiments.
3. In statistical experiments using Orthogonal Arrays or OAs (many design optimization experiments are of this type), the data are obtained in a form that makes the prediction of the output for some specified settings of the input variables easy. Furthermore, OAs greatly simplify the estimation of individual factor effects even when several factors are varied simultaneously.

The study of interactions is clearly one area in which statistical experiments continue to be the only procedure known to us. An illustration of the significance of interaction effects is provided by the lithograph printing example [5] in Table 3.1.

TABLE 3.1 LITHOGRAPH PRINTING EXPERIMENTAL DATA
    Experiment    Exposure Time    Development Time    Yield (%)
    1             Low              Low                 40
    2             High             Low                 75
    3             Low              High                75
    4             High             High                40
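The interaction in Table 3.1 can be quantified directly: both main effects average out to zero, while the like-versus-unlike contrast is large (a sketch; the effect definitions follow the usual two-level convention, not a formula from this book):

```python
# Main effects and interaction from the 2x2 lithograph data of Table 3.1.
# Each run: (exposure, development, yield %)
runs = [("low", "low", 40), ("high", "low", 75),
        ("low", "high", 75), ("high", "high", 40)]

def main_effect(factor_index):
    # Average yield at 'high' minus average yield at 'low'.
    hi = [y for *levels, y in runs if levels[factor_index] == "high"]
    lo = [y for *levels, y in runs if levels[factor_index] == "low"]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

exposure_effect = main_effect(0)     # (75+40)/2 - (40+75)/2 = 0
development_effect = main_effect(1)  # (75+40)/2 - (40+75)/2 = 0

# Interaction: average yield when the two factors are set alike,
# minus average yield when they are set differently.
alike = [y for e, d, y in runs if e == d]
differ = [y for e, d, y in runs if e != d]
interaction = sum(alike) / 2 - sum(differ) / 2   # 40 - 75 = -35

print(exposure_effect, development_effect, interaction)
```

A one-factor-at-a-time study would report both factors as having no effect, yet the interaction of −35 yield points dominates the process.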
The table above shows the typical observed effects of exposure and development times on yield (per cent of prints in the acceptable range) in lithography. Note the large fall in yield when one sets both exposure and development times high. Such an effect (an interaction between exposure time and development time) could be at most suspected but not established by varying only one of these factors at a time. If the study involves more factors, interactions would be untraceable in one-factor-at-a-time experiments.

Statistical experiments consist of several well-planned individual experiments conducted together. The setting up of a statistical experiment (also known as designing it) involves several steps such as the following:
1. Selection of the responses (performance characteristics of interest) that will be observed
2. Identification of the factors (the independent or influencing conditions) to be studied
3. The different treatments (or levels) at which these factors will be set in the different individual experiments
4. Consideration of blocks (the observable noise factors that may influence the experiments as a source of error or variability).

In the lithography example above, yield % is the response, and exposure time and development time are the process design or influencing factors. Each of these factors has two possible treatment levels ('high' and 'low') at which the lithographer would set them as needed. Non-uniformity of processing temperature and that of the concentration of chemicals would constitute the noise factors here.

Before the investigator plans statistical experiments, he must clearly know the objective of conducting the experiments. The clarity in this objective is of enormous value. For example, when one states the experiment's objective as "Select
the optimal values for resistance R1 and inductance L2 in the design of a power conditioner unit to minimize sensitivity to input voltage and frequency variations," it has the required clarity.

The domain in which the results of a set of designed experiments are applicable is called the influence space. It is important that one makes this influence space sufficiently wide by selecting well-spread factor settings, without concern for "off-quality production" during the conduct of the experimental investigation. During such experimentation, the investigator should uncover the range of the input variables over which performance improves, as also the range of input settings over which performance deteriorates. Only then can appropriate countermeasures be identified and devised.

The elements of this domain on which the experiments are conducted are called experimental units. The experimental units are the objects, prototypes, mathematical models, or materials to which the investigator applies the different experimental conditions and then observes the response. In statistical experimentation, one distributes the experimental units randomly in the backdrop of noise factors to represent correctly the character of the overall influence space. This minimizes the chances of any biasing effect caused by the uncontrolled factors. For example, in testing the productive utility of fertilizers, one takes care to distribute the planting of seedlings so that the effects of sun/shade, soil differences, depth of tilling, planting, etc. average out. These are the factors that the investigator does not control during the trials.

In the design optimization experiments as proposed by Taguchi, the investigator changes the settings of the parameters under study from trial to trial in a systematic manner. Special matrices, called OAs (Fig. 1.5), guide these settings.
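The defining property of such an array (every pair of columns contains each combination of levels equally often) can be checked mechanically. The sketch below uses the standard two-level L4 array; the array and the level coding (1, 2) are conventional choices, not taken from Fig. 1.5:

```python
# Check the balance property that makes an array "orthogonal":
# every pair of columns contains each (level, level) pair equally often.
from itertools import combinations, product
from collections import Counter

# Standard L4 array: 4 runs, 3 two-level columns.
L4 = [
    [1, 1, 1],
    [1, 2, 2],
    [2, 1, 2],
    [2, 2, 1],
]

def is_orthogonal(array):
    cols = list(zip(*array))
    for c1, c2 in combinations(range(len(cols)), 2):
        counts = Counter(zip(cols[c1], cols[c2]))
        pairs = list(product((1, 2), repeat=2))
        # Each of the 4 level pairs must appear the same number of times.
        if any(counts[p] != len(array) // len(pairs) for p in pairs):
            return False
    return True

print(is_orthogonal(L4))  # True
```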
OAs are matrices that specify the exact combinations of factor treatments with which one conducts the different experimental trials. It is common to symbolically represent or 'code' the distinct levels of each design or noise parameter by (1, 0, −1), or (1, 2, 3), etc. (Fig. 1.5) to distinguish the different combinations of parameter settings from each other. The foremost reason for using OAs rather than other possible arrangements in robust design experiments is that OAs allow rapid estimation of the individual factor (also known as main) effects, without the fear of distortion of results by the effects of other factors. In design optimization, one uses OAs to the maximum extent possible to achieve efficiency and economy. Orthogonal arrays also simplify the simultaneous study of the effects of several parameters.

3.2 THE ONE-FACTOR DESIGNED EXPERIMENT
It rarely suffices to study how a certain response variable depends on only one independent variable. However, one may sometimes have strong reasons to determine the effect of only a single influencing factor on a process. One treats here all other factors — process conditions, operator actions, machine used, etc. — as noise or uncontrollable. Such a study constitutes a one-factor investigation. After he has reliably understood the influence of the single chosen factor, the investigator may expand the scope of the investigation to include some other factors, and conduct further experiments.
Even the one-factor statistical investigation has a formal, statistical structure. The investigator starts with some stated hypothesis or speculation (e.g., "material choice has no influence on quality"). The investigator then proceeds to obtain experimental evidence to confirm or reject this hypothesis.

Since most real processes involve more than one influencing factor, the one-factor statistical experiment is not a very common practice. The purpose of our discussing the one-factor experiment in some detail here is to describe the steps and the methods involved in experimental data analysis. These methods apply to the larger and more complex experiments also. The data analysis procedure uses ANOVA and the F-test, mentioned in Sections 2.7 and 2.8. One plans one-factor statistical experiments only when there is sufficient reason to study the effect of only one independent factor; one treats here all remaining factors as uncontrolled or noise. One uses 'blocking' and randomization here to minimize bias effects due to the factors not controlled, for otherwise factors like time of day, humidity, the load on the system, etc. may affect the conclusions. Randomization with respect to the uncontrolled factors is one important reason that the one-factor designed experiment is fundamentally different from the change-one-factor-at-a-time mode of experimentation. The other key difference arises from the application of ANOVA — the manner in which the investigator analyses the observed data.

Sensing any effect (or 'signal') due to the factor under study — in the presence of uncontrolled disturbances — is the primary challenge in the one-factor experimental design. Besides randomization, replication is a common technique employed in one-factor design.
Replication implies the repeating of experiments under identical controlled conditions, aimed at averaging out any extreme effect of the uncontrolled factors on a single experimental run. The goal of randomization and replication is to attempt to spread the uncontrolled disturbances evenly over the different observations. In statistical terminology such an arrangement is called the completely randomized one-factor experiment.

The investigator contemplating a one-factor experiment starts his work by speculating or hypothesizing that the influence of the factor to be studied can be described by a simple cause-effect model. He assumes that if he sets the independent factor at k different levels — with ni replicate observations of the response taken when the independent factor is set at its ith level — the effect on the response variable Y can be modelled simply as
    Yij = μi + εij,    i = 1, 2, ..., k;  j = 1, 2, ..., ni    (3.2.1)

where Yij is the observed value of Y in the jth run of replication when the independent factor was set at its ith level, and μi is the mean effect on Y of the independent factor being set at this level. εij designates an independent, normally distributed random variable with zero mean and σ² variance, representing the random contribution in Yij of all the uncontrolled factors. Thus one speculates the effects on Y due to the different treatments to be μ1, μ2, μ3, ....

The one-factor model cannot, however, be set up arbitrarily. In adopting this model the investigator assumes that the uncontrolled factors uniformly affect each of the observations and cause random disturbances {εij} representable by σ². This effect is shown in Eq. (3.2.1), which links the
observations {Yij} to the influence (μi) of the control factor; Eq. (3.2.1) also links {Yij} to {εij}, the effect of the uncontrolled factors.

The statistical analysis of the results of the one-factor investigation depends strongly on three assumptions: linearity, additivity, and separability of the effects of the control factor and the uncontrolled factors. Only under these assumptions would the simple model of Eq. (3.2.1) be a valid description of the factor-response relationship.

The one-factor statistical investigation can be useful in situations such as the following: A bridge designer may speculate that the material chosen (steel, Alloy X, or Alloy Y) to fabricate a structural beam has an influence on the beam's deflection characteristics under a standard load, independent of other factors. Subsequently, the deflections observed of prototype beams built using these materials may be used to (a) establish whether material choice has any influence on deflection, and (b) identify the material with the maximum or minimum flexibility. As mentioned earlier, for the conclusions to be valid, it is critical, even in this apparently straightforward investigation, that one randomizes the runs and their replications with respect to the uncontrolled factors.

When one uses k treatment levels (say k different materials to construct the beam), it is common to summarize the observed data in a table such as Table 3.2, which shows that the investigator obtained ni replicated observations at Treatment i, and a total of n1 + n2 + n3 + ... + nk (= N) observations.

TABLE 3.2 OBSERVATIONS AND SAMPLE MEANS FOR A ONE-FACTOR EXPERIMENT
    Factor Level (i)    Observations                Sample Mean
    1                   Y11, Y12, . . ., Y1n1       Ybar1 = (1/n1) Σj Y1j
    . . .               . . .                       . . .
    k                   Yk1, Yk2, . . ., Yknk       Ybark = (1/nk) Σj Ykj
Often the focus of the one-factor statistical experiment is on determining whether it is reasonable to accept that the average effect μi caused by level i is identical for all the factor level settings (i = 1, 2, 3, . . . , k). Alternatively, one may hypothesize that the effect is different at least at one level, which causes the response at this level to be noticeably different from the overall average. Note that one uses the terms 'level' and 'treatment' interchangeably; both represent the distinct levels at which one sets the factor in control.
The observation averages Ybar1, Ybar2, etc. shown under "Sample Mean" in Table 3.2 estimate the treatment effects μi, i = 1, 2, 3, . . . , k. To determine now whether the treatment effects are unequal, one would statistically compare the following two sources that may cause the {Ybari} averages to differ from each other (see Eq. 2.9.3):

1. Within-factor (also called within-treatment) variability.
2. Between-factor (also called between-treatment) variability.
If between-treatment variability is (statistically speaking) larger than what one expects from the variation that occurs within a typical treatment when one replicates observations, one would question whether the effects μi, i = 1, 2, 3, . . . , k, are all the same. Perhaps the reader can see that the approach here parallels the ideas that led to the illustration of ANOVA in Section 2.7.

One key measure of variability in a set of observations is how far a single observation deviates from its expected average. For a group of observations, one determines variability collectively by summing up the squares of the differences of the individual observations from the average. One calls this sum the sum of squares of deviations or, more explicitly, the error sum of squares. This quantity measures the experimental error (resulting from the influence of uncontrolled factors) in replicating or repeating observations (ni times) when treatment i is held constant. One computes the experimental error, which reflects the variability caused by all factors not in control or not deliberately set, as the sum of squares of the deviations of individual observations from their respective expected averages. Thus, if the observations resulting from replicating the experiment at treatment i are {Yij}, j = 1, 2, 3, . . . , ni, and their average is Ybari, then the experimental error accumulated by replicated runs at treatment level i is

    Σj (Yij - Ybari)²,   j = 1, 2, . . . , ni

The average variability among the observations is called the mean sum of squares or mean square error. The mean square error at treatment i is

    [1/(ni - 1)] Σj (Yij - Ybari)²

In the above, the quantity (ni - 1) is called the degrees of freedom (or dof) of the mean sum of squares at treatment i. The dof acknowledges that of the ni observations obtained, if one calculates a statistic (Ybari) using these data values, then this statistic (Ybari) and (ni - 1) observations together can determine the value of the one remaining (i.e., the ni-th) observation.
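As a quick numerical check, the error sum of squares and mean square error for a single treatment follow directly from its replicated observations. The sketch below (helper names are my own, not the book's) uses the steel-beam deflection data of Example 3.1:

```python
def error_ss(obs):
    """Sum of squared deviations of replicated observations from their mean."""
    ybar = sum(obs) / len(obs)
    return sum((y - ybar) ** 2 for y in obs)

def mean_square_error(obs):
    """Error sum of squares divided by its (n - 1) degrees of freedom."""
    return error_ss(obs) / (len(obs) - 1)

steel = [82, 86, 79, 83, 85, 84, 86, 87]   # deflections, 1/1000 in.
print(error_ss(steel))                      # 48.0
print(mean_square_error(steel))             # 48/7, about 6.857
```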
EXAMPLE 3.1: A one-factor investigation — the beam deflection problem. An investigator constructed several beams of identical geometry using each of the three available materials, steel, Alloy X, and Alloy Y (thus k = 3). The deflections {Yij} observed under a standard load were as in Table 3.3.
TABLE 3.3 BEAM DEFLECTION TEST RESULTS
    Material (i)   Observations {Yij}           Ybari   Sum of Squares of the Difference of
                   (Deflection measurements             Individual Observations from Ybari
                   with beams made of each
                   material, under standard
                   load, in 1/1000 in.)
    Steel          82 86 79 83 85 84 86 87      84      48
    Alloy X        74 82 78 75 76 77            77      40
    Alloy Y        79 79 77 78 82 79            79      14
What can one infer from these test results? Note the following salient aspects of these observations:

1. The number of observations (i.e., replications done by building several beams and measuring their deflection under standard load) for the three different materials (i.e., the treatments) is unequal. This is an important fact about one-factor investigations in general. In these investigations it is not necessary that an equal number of observations be obtained for each treatment.
2. It is not readily apparent from the average deflections {Ybari} calculated for each material type that under standard load a material-to-material difference in flexibility (manifested by deflection) exists.
3. One cannot yet comment on the Sum of Squares values (a measure of variability from the respective expected average) in the table, for these contain contributions from an unequal number of data points (replications).

With the help of these observations, an important data summary (statistic) can be calculated. If one pools all the three Sums of Squares and then averages them by dividing by the total dof for this pooled sum (as done to define the estimated pooled variance in Eq. (2.5.3)), one produces the overall observation-to-observation (or error) variability, known as the within-treatment variability (see Section 2.6). This variability equals

    Mean SSerror = [1/(N - k)] Σi Σj (Yij - Ybari)²
Mean SSerror, often called the Mean Sum of Squares for error, as already mentioned, reflects the typical observation-to-observation variability when any particular treatment is held constant and replicate observations are made. In the beam deflection example, replications are made by fabricating several identical beams using the same material and observing their respective deflections under the standard load. Note that the averaging to get Mean SSerror uses all N observations and spans across each of the k treatments used.

The other variability, the one that is closer to the objective of the one-factor experimental investigation, manifests the impact on the observations caused by different treatments. One calculates this variability as the "between-treatment sum of squares." One determines the between-treatment sum of squares by setting a
reference average value — equal to the grand average of all observations, or ybar,

    ybar = (1/N) Σi Σj Yij

Recall that we have used a total of k treatments here and ni represents the number of observations obtained at treatment level i. A total of N observations were originally obtained. One finds the between-treatment sum of squares as
    SStreatment = n1(Ybar1 - ybar)² + n2(Ybar2 - ybar)² + . . . + nk(Ybark - ybar)²
Since one uses up one dof among the k treatments in calculating the grand average ybar, one calculates the mean between-treatment sum of squares statistic as

    Mean SStreatment = [1/(k - 1)] Σi ni(Ybari - ybar)²
Mean SStreatment reflects the treatment-to-treatment variability, each treatment being a distinct level at which one has set the factor whose effect on Y is being investigated. Thus the above procedure leads to the estimation of two average variabilities, the mean within-treatment variability (Mean SSerror) and the mean treatment-to-treatment or between-treatment variability, Mean SStreatment.

One may now use the data of the beam deflection experiment to find these two variabilities (variation among deflections caused by material-to-material differences, and replication of the experiment with the same material). One finds that
    ybar = 80.4

    Mean SSerror = (48 + 40 + 14)/[(8 - 1) + (6 - 1) + (6 - 1)] = 102/17 = 6.0

    Mean SStreatment = [8(84 - 80.4)² + 6(77 - 80.4)² + 6(79 - 80.4)²]/(3 - 1) = 92.4

By these calculations we have separated the two sources of variability (within- and between-treatment variability) — the prime objective in statistically designing the one-factor experiment to study treatment effects. In the numerical example above, the average material-to-material (between-treatment) variability appears to be large (it equals 92.4) when compared with the observation-to-observation or within-treatment variability (which equals 6.0). Without further analysis, though, one cannot yet say that the effect due to materials is significant (see Section 2.4) in the backdrop of the noise (observation-to-observation variability). Such analysis requires a statistical comparison of the two types of variabilities, within- and between-treatment. The comparison procedure to be used here is again ANOVA, introduced in Section 2.7.
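The arithmetic above is mechanical enough to script. The following sketch (variable names are my own) reproduces the within- and between-treatment mean squares for the beam deflection data:

```python
groups = {
    "Steel":   [82, 86, 79, 83, 85, 84, 86, 87],
    "Alloy X": [74, 82, 78, 75, 76, 77],
    "Alloy Y": [79, 79, 77, 78, 82, 79],
}

N = sum(len(g) for g in groups.values())                 # 20 observations
k = len(groups)                                          # 3 treatments
grand = sum(sum(g) for g in groups.values()) / N         # ybar = 80.4

# Within-treatment (error) mean square: pooled SS divided by (N - k)
ss_error = sum(sum((y - sum(g) / len(g)) ** 2 for y in g)
               for g in groups.values())
mean_ss_error = ss_error / (N - k)                       # 102/17 = 6.0

# Between-treatment mean square: sum of ni*(Ybari - ybar)^2 over (k - 1)
ss_treat = sum(len(g) * (sum(g) / len(g) - grand) ** 2
               for g in groups.values())
mean_ss_treat = ss_treat / (k - 1)                       # 184.8/2 = 92.4

print(round(mean_ss_error, 1), round(mean_ss_treat, 1))  # 6.0 92.4
```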
3.3 ANOVA HELPS COMPARE VARIABILITIES

We elaborate some aspects of the ANOVA procedure in this section. As described in the section above, two types of variation may be present in the one-factor
experimental data, namely, the between- and the within-treatment variability. The purpose of ANOVA, which one performs with the mean sums of squares, is to separate and then compare such variabilities. Also, as we will see later, ANOVA applies not only to one-factor investigations, but also to multiple-factor studies. This is a considerable capability for a test because variabilities may be caused by one or several independently influencing factors, and by their interactions. Recall first that if one squares the deviation of each observed data value {Yij} from the grand average and then sums these squares to a total, one ends up with the result (due to Fisher [13]) derived in Section 2.9:

    Total Sum of Squares = Sum of Squares due to error + Sum of Squares due to treatment
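This decomposition identity is easy to verify numerically. The sketch below checks it on a small made-up two-treatment data set (numbers invented purely for illustration):

```python
data = {1: [5.0, 7.0, 6.0], 2: [9.0, 11.0]}   # treatment -> replicated observations

all_y = [y for g in data.values() for y in g]
grand = sum(all_y) / len(all_y)

ss_total = sum((y - grand) ** 2 for y in all_y)
ss_error = sum(sum((y - sum(g) / len(g)) ** 2 for y in g) for g in data.values())
ss_treat = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in data.values())

# Fisher's identity: total variability splits exactly into the two parts
assert abs(ss_total - (ss_error + ss_treat)) < 1e-9
```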
This decomposition of total variability shows that in one-factor experiments there is no other variability in the data {Yij} except that caused by within-treatment variability (the errors in repeating or replicating the observations) and between-treatment variability. The k different treatments at which the investigator sets the factor under investigation cause the between-treatment variability.

One may now use a standard result from probability theory to assist in the study of these variabilities. If random variables X1, X2, X3, . . . , Xk are distributed normally (each with variance σ²), then the quantity

    Σi (Xi - Xbar)²/σ²,   i = 1, 2, . . . , k

represents the sum of the squares of k standard normal variates and therefore has a chi-square distribution, Xbar being Σ Xi/k. As mentioned in Section 2.5, the chi-square distribution is also a standard probability distribution like the normal distribution. The chi-square distribution has only one parameter, its dof. With the squares of the deviations of the k observations {Xi, i = 1, 2, 3, . . . , k} from their mean Xbar summed, the chi-square variable written above will have (k - 1) degrees of freedom. In the one-factor investigation, if the mean effects μ1, μ2, . . . , μk due to the k different treatments are all equal, then the total N observations taken in the experiments would all belong to the same normal population — with variance σ².

    SA×C = (y1 + y2 + y7 + y8)²/(2 + 2) + (y3 + y4 + y5 + y6)²/(2 + 2) - CF
Similarly,

    SB×C = (y1 + y4 + y5 + y8)²/(2 + 2) + (y2 + y3 + y6 + y7)²/(2 + 2) - CF
We can find the Sum of Squares for Error (Se) as follows:

    Se = ST - SA - SB - SC - SD - SE - SA×C - SB×C

We then determine the respective dof. The total dof fT is given by fT = total number of observations - 1, or (N - 1). The other dof are as follows:
    fA = (number of distinct levels of A) - 1 = 2 - 1 = 1
    fB = (number of distinct levels of B) - 1 = 2 - 1 = 1
    fC = (number of distinct levels of C) - 1 = 2 - 1 = 1
    fD = (number of distinct levels of D) - 1 = 2 - 1 = 1
    fE = (number of distinct levels of E) - 1 = 2 - 1 = 1
    fA×C = fA × fC = 1 × 1 = 1
    fB×C = fB × fC = 1 × 1 = 1

The dof for error (the influence of uncontrolled factors influencing the response) may be found as

    ferror = ftotal - (fA + fB + fC + fD + fE + fA×C + fB×C)

which for the data given in Table 3.6 equals

    (8 - 1) - 1 - 1 - 1 - 1 - 1 - 1 - 1 = 0
The Mean Sums of Squares are given by the general formula

    Mean Sum of Squares = Sum of Squares/dof

Accordingly,

    Mean SSA = Sum of SquaresA/dofA = SA/fA = {[A1]²/NA1 + [A2]²/NA2 - CF}/fA

    Mean SSA×C = Sum of SquaresA×C/dofA×C

Therefore,

    Mean SSA×C = {(y1 + y2 + y7 + y8)²/(2 + 2) + (y3 + y4 + y5 + y6)²/(2 + 2) - CF}/fA×C

One uses the Mean Sum of Squares in the evaluation of the significance of the factor and interaction effects on the response y. The F-test accomplishes this. We should point out that the F-test requires evaluation of the F-statistic, determined as the ratio

    F-statistic = Mean SSfactor/Mean SSerror

The Mean Sum of Squareserror may be evaluated if doferror > 0. This is always possible if one replicates some or all the experiments. However, if one obtains only one observation per experimental setting (as in the 5-factor example in Table 3.6), doferror may equal 0, making it impossible to find the denominator of the F-statistic. In such cases one 'pools' certain sums of squares, as follows. If there are reasons to believe that certain main factors and interactions have no or little effect on the response y, then the sums of squares of these factors and interactions, and the corresponding dofs, are pooled — to construct the Error Sum of Squares, Se, and the dof for error, ferror. For instance, if factors A and D have little effect on y and if the interaction A × C may be ignored, then
    Se = ST - SB - SC - SE - SB×C

    ferror = fT - (fB + fC + fE + fB×C)

This provides Mean SSerror = Se/ferror for substitution into the formula for the F-statistic given above.
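The pooling bookkeeping can be sketched as follows. The factor names and sums of squares below are invented placeholders (not the data of Table 3.6), chosen only to show the mechanics of pooling negligible effects into the error term and forming F-statistics:

```python
# Hypothetical sums of squares from a saturated 8-run, two-level experiment
ss = {"A": 0.5, "B": 12.0, "C": 30.0, "D": 0.3, "E": 8.0, "AxC": 0.7, "BxC": 5.0}
dof = {name: 1 for name in ss}      # two-level factors: 2 - 1 = 1 dof each

# Pool the effects judged negligible (here A, D, and A x C) into error
pooled = ["A", "D", "AxC"]
se = sum(ss[name] for name in pooled)            # 0.5 + 0.3 + 0.7 = 1.5
f_error = sum(dof[name] for name in pooled)      # 3
mean_ss_error = se / f_error                     # 0.5

# F-statistic for each remaining effect: Mean SS_factor / Mean SS_error
f_stats = {name: (ss[name] / dof[name]) / mean_ss_error
           for name in ss if name not in pooled}
print(f_stats)   # {'B': 24.0, 'C': 60.0, 'E': 16.0, 'BxC': 10.0}
```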
3.6 SUMMARY

A key objective in Taguchi methods is to uncover how the various design parameters and environmental factors affect the ultimate performance of the product or process being designed. Performance may be affected not only individually by the design parameters and by some factors in the environment, but also by possible interactions among design factors and the interaction between the design and
environmental factors. Specifically, a robust design cannot be evolved without uncovering these effects and interactions [32]. The traditional 'vary one factor at a time' experiments are intrinsically incapable of uncovering interaction among factors. Statistically designed experiments conducted with a prototype product or process constitute the only sound and scientific approach available today to study such phenomena. Statistical experiments are well-planned, both with respect to the combination of settings of the different independent factors at which the experimental trials have to be run, and with respect to the manner in which the response data (the outcome of these experiments) are analyzed. The objective of this well-planned effort is to uncover (a) which design and environmental factors significantly affect performance; and (b) what countermeasures can be devised to minimize the effect of the adverse factors and conditions such that the final performance will be least sensitive particularly to factors that the user of the product/process is unable to economically or physically control. ANOVA and the F-test provide some of the mathematical machinery needed here.
EXERCISES

1. Design optimization experiments to help prolong router bits in printed circuit board manufacture conducted by the AT&T Company reported the following average router life data [14, p. 194]:

                     x-y feed (in/min)     stack height (in)
    Speed            60        80          3/16       1/4
    30,000 rpm       5.75      1.375       3.875      3.25
    40,000 rpm       9.75      6.875       13.25      5.5625
Develop appropriate graphical displays to evaluate the extent of interaction between (a) speed and x-y feed, and (b) speed and stack height.

2. Mixing synthetic fibre with cotton is being contemplated for producing tarpaulin material with increased tensile strength. Designers speculate that the strength of the cloth is affected by the percentage of cotton in the fibre. Twenty-five experiments were randomly conducted with % cotton mix in fibre as shown. Perform the appropriate ANOVA and comment on the acceptability of the designer's suggestions. Develop a cause-effect diagram showing the various factors other than cotton (%) that might affect the results and discuss why repetition of the trials is necessary here.

TABLE E 3.1 RESULTS OF CLOTH STRENGTH TESTS

    % Cotton in Fibre    Tensile Strength of Cloth (lb/sq. in)
    15                   8    8    16   11   10
    20                   13   18   12   19   19
    25                   15   18   19   20   19
    30                   20   26   23   19   23
    35                   8    10   12   15   11
3. Three different nozzle designs are available for assembling fire extinguishers. Five test runs made using each nozzle type with discharge velocities under identical inlet conditions produced observations as shown in Table E 3.2. Confirm that at the significance level α = 0.05, the performance difference among the nozzle designs cannot be ignored.

TABLE E 3.2 RESULTS OF NOZZLE DISCHARGE TESTS

    Nozzle Design    Discharge Velocity (cm/sec)
    A                97.6   98.2   97.4   98.4   98.8
    B                98.5   97.4   98.0   97.2   97.8
    C                98.0   97.0   96.6   96.8   98.0

What factors might affect the above observations? Describe a scheme for randomizing the experimental trials.
4 The Foundation of Taguchi Methods: The Additive Cause-Effect Model

4.1 WHAT IS ADDITIVITY?

An experienced plant engineer is hardly surprised at finding that product or process performance Y depends on several different influencing parameters P, Q, R, S, etc. These dependencies, in general, can be quite complicated. As a result, the empirical studies to determine them can become large and even difficult to run. Fortunately, as pointed out by Taguchi, in many practical situations these studies can be restricted to the main-effect dependencies (Section 3.5). In these cases the dependencies are additive and can be satisfactorily represented by what one calls the additive (or main factor) cause-effect model. The additive model has the form

    y = μ + pi + qj + rk + sl + ε
(4.1.1)
where μ is the mean value of y in the region of experiment, pi, qj, etc. are the individual or main effects of the influencing factors P, Q, etc., and ε is an error term. The term "main effect" designates the effect on the response y that one can trace to a single process or design parameter (DP), such as P. In an additive model such as the one given by Eq. (4.1.1), one assumes that interaction effects are absent. In this model, pi represents the portion of the deviation of y (or the effect on y) caused by setting the factor P at treatment Pi, qj that due to the factor Q at Qj, rk that due to setting R at Rk, and so on. The term ε represents the combined errors resulting from the additive approximation (i.e., the omission of interactions) and the limited repeatability of an experiment run with experimental factor P set at Pi, Q at Qj, R at Rk, and S at Sl. Repeated experiments usually show some variability, which reflects the influence of factors the investigator does not control.

The additivity assumption also implies that the individual effects of the factors P, Q, R, etc. on performance Y are separable. Under this assumption the effect of each factor can be linear, quadratic, or of higher order, but the additive model assumes that there exist no cross-product effects (interactions) among the individual factors. (Recall the instance of interaction of effects seen between exposure time and development time in the lithography example, Table 3.1.)

If we assume that the respective effects (α and β) of two influencing factors
A and B on the response variable Y are additive, we are then effectively saying that the model

    Yij (= μij + εij) = μ + αi + βj + εij
(4.1.2)
represents the total effect of the factors A and B on Y. Note again that this representation assumes that there is no interaction between factors A and B, i.e.,
the effect of factor A does not depend on the level of factor B and vice versa. Interactions make the effects of the individual factors non-additive. If at any time μij is different from (μ + αi + βj), where αi and βj are the individual (or the main) effects of the respective factors, then one says that the additivity (or separability) of main factor effects does not hold, and the effects interact. The chemical process model shown below provides an example of an interaction between two process factors:

    Utilization (%) = K (mixing HP/1000 g)^L (superficial velocity)^M

For this process, the effect on the response variable "Utilization (%)" is multiplicative rather than additive. Here, the effect of "mixing HP/1000 g" depends on the level of the second process factor, "superficial velocity", and vice versa. This effect may be modelled by

    μij = μ αi βj
(4.1.3)
Sometimes one is able to convert the multiplicative (or some other non-additive) model into an additive model by mathematically transforming the response Y into log [Y], or 1/Y, or √Y, etc. Such a conversion greatly helps in planning and running multi-factor experiments using OAs. (We shall see in the next section that OAs impart much efficiency and economy to statistical experiments.) The presence of additivity also simplifies the analysis of experimental data. The transformation that would convert the above chemical process model (which involves the interaction of the factors "mixing HP per 1000 g" and "superficial velocity") is the taking of logarithms on both sides. This gives

    log (% utilization) = log (K) + L log (HP per 1000 g) + M log (superficial velocity)

The model equation (4.1.3) then becomes additive, and is written equivalently as
    μij = μ + αi + βj
(4.1.4)
To remind the reader: because the interaction terms are absent in it, one often calls the additive model the main effects model.
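The effect of such a transformation is easy to demonstrate numerically. In the sketch below, the constants K, L, M and the factor levels are invented for illustration; the multiplicative response shows a non-zero 2×2 interaction contrast, which vanishes once logarithms are taken:

```python
import math

K, L, M = 2.0, 0.4, 0.7           # invented model constants
hp = [1.0, 4.0]                   # mixing HP/1000 g levels
vel = [2.0, 8.0]                  # superficial velocity levels

def util(h, v):
    """Multiplicative model in the style of Eq. (4.1.3)."""
    return K * h ** L * v ** M

def contrast(f):
    """2x2 interaction contrast: zero exactly when effects are additive."""
    return f(hp[0], vel[0]) - f(hp[0], vel[1]) - f(hp[1], vel[0]) + f(hp[1], vel[1])

print(contrast(util))                                # non-zero: effects interact
print(contrast(lambda h, v: math.log(util(h, v))))   # ~0: log made them additive
```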
4.2 WHY ACHIEVING ADDITIVITY IS SO IMPORTANT

In Taguchi's robust design procedure, one earnestly seeks to confine the model to the main effects or, equivalently, to the additivity of effects. This permits the use of certain special partial factorial designs and simple arithmetic, as we see below, in reaching the optimum settings for each of the product or process design parameters. Additivity of effects also leads to a major reduction in the number of experiments that need to be run. These benefits of additivity may be visualized as follows: Suppose that a designer wishes to investigate whether four potential design factors, P, Q, R, and S, have an influence on performance Y. Also suppose that the designer has the choice of setting each factor at any one of three distinct
treatment levels. If the respective effects of these factors are additive, then performance Y may be modelled by
    Y = μ + pi + qj + rk + sl + ε
(4.2.1)
Since this model contains no interaction terms, it is an additive or main factor model. Now, since each of the four factors (P, Q, R, and S) may be set at three distinct treatment levels, there will be 3⁴ or 81 ways of combining these different treatments. It may then appear that to investigate the effects of the four factors, one has to run each one of these 81 experiments. We now show that if additivity of main effects is present, then only a small subset (shown in Table 4.1) of the possible 81 experiments needs to be run to evaluate the effect of the four design factors. This subset is called the orthogonal matrix experiment.

TABLE 4.1 AN ORTHOGONAL MATRIX EXPERIMENT AND ITS RESULTS
    The 'Orthogonal' Matrix of Treatments

    Experiment   P    Q    R    S     By Additivity Assumption, yi
    1            P1   Q1   R1   S1    y1 = μ + p1 + q1 + r1 + s1 + ε1
    2            P1   Q2   R2   S2    y2 = μ + p1 + q2 + r2 + s2 + ε2
    3            P1   Q3   R3   S3    y3 = μ + p1 + q3 + r3 + s3 + ε3
    4            P2   Q1   R2   S3    y4 = μ + p2 + q1 + r2 + s3 + ε4
    5            P2   Q2   R3   S1    y5 = μ + p2 + q2 + r3 + s1 + ε5
    6            P2   Q3   R1   S2    y6 = μ + p2 + q3 + r1 + s2 + ε6
    7            P3   Q1   R3   S2    y7 = μ + p3 + q1 + r3 + s2 + ε7
    8            P3   Q2   R1   S3    y8 = μ + p3 + q2 + r1 + s3 + ε8
    9            P3   Q3   R2   S1    y9 = μ + p3 + q3 + r2 + s1 + ε9
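The 'orthogonality' of the matrix above is a balance property: for every pair of columns, each of the nine possible level pairs appears exactly once. A short script can verify this for the treatment matrix of Table 4.1:

```python
from itertools import combinations
from collections import Counter

# The treatment matrix of Table 4.1, levels coded 1..3 for columns P, Q, R, S
L9 = [
    (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
    (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
    (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
]

for c1, c2 in combinations(range(4), 2):
    counts = Counter((row[c1], row[c2]) for row in L9)
    # each pair of columns shows all 9 level pairs, each exactly once
    assert len(counts) == 9 and set(counts.values()) == {1}
print("all column pairs balanced")
```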
Table 4.1 contains an example of an orthogonal matrix of treatments. Note the two special aspects of the nine experiments shown in Table 4.1:

1. The total number of experiments to be run above equals 3 × 3 or 9 — only a fraction of 81. The number 9 reflects the total number of combinations possible of the three levels of any two factors among P, Q, R, and S. Note also that no experiment here is a repeat of any other experiment. (A question, nonetheless, remains: Will these nine experiments suffice?)
2. The combination of the treatments of the four factors in any of the nine experiments is not arbitrary. One constructs these combinations carefully — in order to permit quick estimation of each factor's main effect, if such effect exists, from the observations {y1, y2, y3, . . . , y9}.

We now show how one may rapidly estimate the effects of factors P, Q, R, and S from the observations {yi}. In the additive model (4.1.1), μ represents the overall mean value of y in the region of experimentation in which one varies the
factors P, Q, R, and S. Further, p1, p2 and p3 are the deviations of y from μ caused by factor settings (treatments) P1, P2 and P3, respectively. Then, since each factor has its own (positive or negative) effect on y and one assumes the factor effects to be additive and hence separable from the overall mean μ and from each other, one must have

    p1 + p2 + p3 = 0
(4.2.2)
Similarly, q1 + q2 + q3 = 0, r1 + r2 + r3 = 0, and s1 + s2 + s3 = 0. To estimate the effect p3, one picks the three observations (y7, y8 and y9), in which the P treatment equals P3, and then averages them:

    (y7 + y8 + y9)/3 = [μ + p3 + q1 + r3 + s2 + ε7 + μ + p3 + q2 + r1 + s3 + ε8 + μ + p3 + q3 + r2 + s1 + ε9]/3
                     = μ + p3 + (ε7 + ε8 + ε9)/3

since the q, r, and s effects each sum to zero.
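This averaging trick — each factor-level average isolates μ plus that level's main effect, because the other factors' effects cancel across the balanced matrix — can be sketched end-to-end. Below, observations are generated from an additive model with invented effect values (noise omitted so the recovery is exact), and p3 is recovered by averaging the runs where P is at level P3:

```python
# Invented main effects; each set sums to zero, as Eq. (4.2.2) requires
mu = 10.0
p = [1.5, -0.5, -1.0]
q = [0.8, 0.0, -0.8]
r = [-0.3, 0.6, -0.3]
s = [0.2, 0.2, -0.4]

# Treatment matrix of Table 4.1, levels coded 0..2
L9 = [(0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
      (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
      (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0)]

# Additive (main effects) model, noise-free for clarity
y = [mu + p[i] + q[j] + r[k] + s[l] for (i, j, k, l) in L9]

# Estimate p3: average the runs with P at level P3, subtract the grand mean
p3_hat = sum(y[t] for t in range(9) if L9[t][0] == 2) / 3 - sum(y) / 9
print(round(p3_hat, 6))   # recovers p[2] = -1.0
```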