Development of a Process for Continuous Creation of Lean Value in Product Development Organizations
by
Jin Kato
at the
Massachusetts Institute of Technology
June 2005
Signature of Author………………….……………………………………………………..………..
Department of Mechanical Engineering
May 6, 2005
Certified by ……….…………………………………………………………………………………..
Warren Seering
Weber-Shaughness Professor of Mechanical Engineering and Engineering Systems
Thesis Supervisor
Accepted by …….……………………………….…………………………………………………..
Lallit Anand
Chairman, Department Committee on Graduate Students
Development of a Process for Continuous Creation of Lean
Value in Product Development Organizations
by
Jin Kato
[email protected]
ABSTRACT
Ideas and methodologies of lean product development were developed into tools and processes that help
product development organizations improve their performance. The definition of waste in product
development processes was re-examined and, through preliminary case studies, developed into a compact set
that covers all types of waste in product development processes. Value stream mapping (VSM) was adapted for
measuring the waste indicators in product development processes. Typical causes of low product
development project performance were organized into a root-cause analysis diagram.
Three case studies in product development companies were performed. The tools were tested and improved
through intensive interviews with both project managers and engineers. VSM was effective for identifying
and measuring waste indicators. The root-cause analysis diagram was effective for quickly identifying root
causes of low product development project performance. Using these tools together made it possible
to measure each root cause's impact on project performance. The measurements revealed both
problems shared by all the projects and problems specific to individual projects, indicating that the tools and processes
developed in this research can provide suggestions for continuous improvement of product development
processes.
Some waste indicators were more prevalent than others, implying that the number of waste indicators to
be considered can be reduced. Inventory of information was prevalent in all the projects, and its analysis
implied that today's product development processes are as immature as manufacturing processes were several
decades ago. The wastefulness of information inventory was demonstrated quantitatively. Time spent on one
occurrence of rework was shown to be longer near the end of a project than at its beginning.
ACKNOWLEDGEMENTS
I am also thankful to Professor Seering's students. Victor Tang kindly helped me to find
research investigation opportunities, based on his outstanding career in managing big
projects. Christoph Bauch, Martin Graebsch, Josef Oehmen, and Ryan Whitaker made it
possible to formulate my ideas through their thoughtful suggestions.
I would like to thank East Japan Railway Company, my employer, for giving me a special
opportunity to learn in the best environment.
I am indebted to the two excellent companies that kindly accepted my proposal to conduct
my research investigation.
Lean Aerospace Initiative (LAI) made it possible to investigate these two companies in
Japan.
I could not have completed my study at MIT without the full support of my wife, Chiaki,
and my daughter, Niina.
TABLE OF CONTENTS
ABSTRACT 3
ACKNOWLEDGEMENTS 4
TABLE OF CONTENTS 5
3.4.4 EVALUATION 30
6.2.5 INTERACTIONS 52
6.2.7 INTERRUPTIONS 54
CHAPTER 7: HOW TO MEASURE WASTED TIME USING VALUE STREAM
MAPPING
7.1 INTRODUCTION OF THIS CHAPTER 57
7.2 OVERPRODUCTION 57
7.3 WAITING 58
7.6 MOTION 61
7.7 REWORK 62
7.8 RE-INVENTION 63
7.9 HAND-OFF 64
CHAPTER 10: RESULTS OF RESEARCH INVESTIGATIONS 1: ANALYSES ON
NINE WASTE INDICATORS
10.1 OVERVIEW OF THIS CHAPTER 75
10.2 OVERALL RESULTS
10.2.1 NUMBER OF OCCURRENCES 75
10.3.1 OVERVIEW 87
10.3.6 MOTION 98
11.8.2 AVERAGE INVENTORY PERIODS 144
BIBLIOGRAPHY 205
LIST OF FIGURES AND TABLES
FIGURES
Figure 2.1 Ten Categories of Waste in Product Development (Bauch, 2004) 22
Figure 3.1 Value Stream Map for One Designer’s Activities with Waste Defined by
McManus (2004) and Morgan (2002) –Continued to Figure 3.2 24
Figure 3.2 Value Stream Map for One Engineer’s Activities with Waste Defined by
McManus (2004) and Morgan (2002) –Continued from Figure 3.1 25
Figure 3.3 Simplified Value Stream Map with Rework with Going-Back Arrows 26
Figure 3.4 Value Stream Map for Whole Railway Vehicle Development Project 28
Figure 3.5 Value Stream Map for a MIT Student Product Development Project – Continued
to Figure 3.6. 30
Figure 3.6 Value Stream Map for a MIT Student Product Development Project – Continued
from Figure 3.5. 31
Figure 5.1 Analyses of Relationships among the Categories of Waste Defined by Morgan
and McManus – The Categories without the Name of Morgan or McManus
were added by the Author. 36
Figure 6.10 Displaying Over Processing 49
Figure 6.11 Timely Information Transfer and Delayed Information Transfer – Information
Stored for a Specific Period is Distinguished by Using Green Lines 50
Figure 6.12 Displaying an Interruption 50
Figure 6.13 Different Types of Hand-Offs 51
Figure 6.14 Displaying Bi-directional Information Exchanges 52
Figure 6.15 Displaying Cross-Functional Tasks 53
Figure 6.16 Displaying Interruptions 54
Figure 6.17 Displaying Spent Time 55
Figure 6.18 Displaying Wasted Time 56
Figure 7.1 Measuring Time Spent on Overproduction – A Hatched Process Box Means
Overproduction 57
Figure 7.2 Measuring Time Spent on Waiting 58
Figure 7.3 Measuring Time Spent on Transportation 59
Figure 7.4 Measuring Time Spent on Over Processing 60
Figure 7.5 Measuring Time Spent on Motion 61
Figure 7.6 Measuring Time Spent on Rework 62
Figure 7.7 Measuring Time Spent on Re-Invention 63
Figure 7.8 Measuring Time Spent on Hand-Off 64
Figure 7.9 Measuring Time Spent on Defective Information 65
Figure 10.7 Waste Indicator Distributions by Engineer (Project A) 84
Figure 10.8 Waste Indicator Distributions by Engineer (Project B) 85
Figure 10.9 Waste Indicator Distributions by Engineer (Project C) 86
Figure 10.10 Normalized Waste Time on Over Production per 50 Engineering Weeks and
the Corresponding Causes 88
Figure 10.11 Normalized Waste Time on Waiting per 50 Engineering Weeks and The
Corresponding Causes 89
Figure 10.12 Normalized Waste Time on Transportation per 50 Engineering Weeks and the
Corresponding Causes 90
Figure 10.13 Part of the Root-Cause Analysis Diagram of Transportation of Information
91
Figure 10.14 Normalized Waste Time on Over Processing per 50 Engineering Weeks and
The Corresponding Causes 94
Figure 10.15 Part of the Root-Cause Analysis Diagram of Over Processing 95
Figure 10.16 Wasteful Information Flow without Effective Verification Processes 97
Figure 10.17 Normalized Waste Time on Motion per 50 Engineering Weeks and The
Corresponding Causes 98
Figure 10.18 Part of the Root-Cause Analysis Diagram of Motion 100
Figure 10.19 Example of a Process with Frequent Design Reviews – Engineer KZ’s design
Process in Project C 101
Figure 10.20 An Example of Rework in Value Stream Map (Project A, Engineer T) 102
Figure 10.21 Normalized Waste Time on Rework per 50 Engineering Weeks and The
Corresponding Causes 103
Figure 10.22 Part of the Root-Cause Analysis Diagram of Motion 106
Figure 10.23 Wasteful Information Flow without Effective Verification Processes 107
Figure 10.24 Changes in Rework Time (Projects A and B) 108
Figure 10.25 Changes in Rework Time (Project C) 108
Figure 10.26 Normalized Waste Time on Hand-Off per 50 Engineering Weeks and the
Corresponding Causes 109
Figure 10.27 Normalized Waste Time on Defective Information per 50 Engineering Weeks
and Causes 110
Figure 10.28 Company Y’s Relationship with Users – There’s Virtually No Communication
Channel with Users 112
Figure 11.1 Number of Occurrences of Inventory of Information per Week 115
Figure 11.2 Average Periods of Inventory 116
Figure 11.3 Total Inventory Time per Engineering Week 117
Figure 11.4 Example of Type 1 Inventory 120
Figure 11.5 I328, Inventory of Information at the Center of This Figure, was Caused by an
Interrupting Event from Inside of the Project (Engineer Y, Week 11) 121
Figure 11.6 Example of Type 2 Inventory 122
Figure 11.7 Example of Switching to Higher-Priority Task outside of the Project –
System-Level Services Task was Interrupted by a Support Work Outside of
Project A 123
Figure 11.8 Waiting for Information from another Task 124
Figure 11.9 Example of Inventory (3)-1 – The Two Tasks Circled in This Figure were Both
Testing Processes. 125
Figure 11.10 Example of Inventory (3)-2 – the Task in the Dashed Circle was Left
Untouched until the Downstream Task Got All the Information It Needed.
126
Figure 11.11 Example of Inventory Caused by Review/ Testing Work 127
Figure 11.12 Example of Inventory Caused by Day Off – PMU DC CAL Design/
Implementation Task was Interrupted by a Week Off. 128
Figure 11.13 Inventory of Information Caused by Maintenance of Documenting 129
Figure 11.14 Example of Inventory of Information Caused by Maintenance of
Documenting 130
Figure 11.15 Inventory of Information Caused by Rework Discovery 131
Figure 11.16 Example of Inventory of Information Caused by Rework Discovery 132
Figure 11.17 Example of Inventory of Information Caused by Other Engineer’s Availability
133
Figure 11.18 Inventory of Information Caused by Downstream Engineer’s Availability 134
Figure 11.19 Example of Inventory of Information Caused by Downstream Engineer’s
Availability 135
Figure 11.20 Inventory of Information Caused by Waiting for an Answer 136
Figure 11.21 Example of Inventory of Information Caused by Waiting for an Answer 137
Figure 11.22 Example of Inventory of Information Caused by Ambiguous Information 138
Figure 11.23 Example of Inventory of Information Caused by Limited Availability of
Tool/Board/System 139
Figure 11.24 Distribution of Types of Inventory (Project A) 140
Figure 11.25 Distribution of Types of Inventory (Project B) 141
Figure 11.26 Distribution of Types of Inventory (Project C) 142
Figure 11.27 Normalized Occurrences of Inventory of Information per 50 Engineering
Weeks and the Corresponding Types 143
Figure 11.28 Average Inventory Periods and the Corresponding Types 145
Figure 11.29 Total Inventory Time and the Corresponding Types 147
Figure 11.30 Unsynchronized Manufacturing Process Described in “The Goal” (Goldratt,
1984). 148
Figure 11.31 Unsynchronized Development Process of Project A 149
Figure 11.32 Ratio of Rotten Inventory of Information in Number (Project A) 150
Figure 11.33 Changes in Ratio of Rotten Inventory of Information with Time (Project A)
151
Figure 11.34 Trend Line of Changes in Ratio of Rotten Inventory with Time (Project A)
152
Figure 11.35 Changes in Rework Ratio with Time 153
Figure 11.36 Relationships between Ratio of Rotten Inventory and the Corresponding
Types 155
Figure A-3 Relationship between Wasted Time and Nine Waste Indicators 166
Figure A-4 An Example of Identified Problem 166
Figure A-5 Average Time Spent on One Occurrence of Rework 167
Figure A-6 Total Inventory Time per Engineering Week: Number of Engineers in Projects A,
B, and C are 6, 6, and 5 Respectively. 168
Figure A-7 Total Inventory Time and the Corresponding Type 169
Figure A-8 Trend Line of Changes in Ratio of Rotten Inventory with Time (Project A) 170
Figure A-9 Changes in Rework Ratio with Time 170
TABLES
Table 5.1 Comparison of the Definitions of Waste 35
Table 5.2 Definitions and Examples of Nine Waste Indicators 39
CHAPTER 1 INTRODUCTION AND OVERVIEW
1.1 MOTIVATION
Although different types of waste in product development have been suggested, no research
investigation has determined which types of waste are more prevalent than
others in terms of wasted engineering time. Although there is a substantial amount of
literature on how to successfully manage product development projects, there is no
practical tool with which project managers can quantitatively analyze their unsuccessful
projects. For these reasons, there is no effective project management tool that tells
product development project managers how much each factor quantitatively affects
their project performance. For example, they may attribute a
product failure to late specification or requirement changes, but they cannot estimate how
much those changes affected the overall project performance. Therefore, it is difficult for
them to know what the right things are to improve in their product development processes.
Chapter 2 takes a look at lean manufacturing, and how it has been developed into lean
product development.
Chapter 3 describes three preliminary case studies in which value stream
mapping's applicability to quantitative measurement of the wastes suggested in the literature on
lean product development was evaluated. Several problems were identified; these
problems are addressed in chapter 4.
Different types of waste that have been suggested in studies of lean product development
are compared in chapter 5 by exploring the causal relationships among them. This comparison
was then developed into the root-cause analysis diagram by adding further types of waste
identified in papers, books, and the preliminary case studies.
Chapter 6 is a how-to manual for drawing value stream maps for quantitative measurement
of waste. Many features of value stream mapping that were unnecessary for this purpose
were eliminated.
A methodology for measuring waste using nine waste indicators is described in chapter 7.
Inventory of information can be measured by the methodology described in chapter 8.
These methodologies were applied in the case studies covered in the following three
chapters.
Chapter 9 introduces the case studies in three industrial product development projects.
Chapter 10 discusses the quantitative analysis results obtained using the nine waste indicators.
Three waste indicators detected significantly more waste than the other six. The results also
revealed that rework takes more time at the end of a project than at its beginning.
Chapter 11 discusses the results obtained by identifying inventory of information. In a project
in which market and technical risks were high, inventory of information went bad at a
rate of six percent per month. The analysis of the results described in chapters 10 and 11
suggests that the tools and methodologies developed in this research can show how
engineers' time is wasted in each specific project, implying that they can be used for improving
each project's processes.
Chapter 12 concludes this research.
1.4 NOTE
This thesis contains some figures in color, although the author tried to convey all the
information in black and white whenever possible. The full-color version is available online
at the Lean Aerospace Initiative (LAI) website: lean.mit.edu.
CHAPTER 2 LITERATURE REVIEW
The concept of "lean" was first applied to product development by introducing the ideas and
tools of lean manufacturing. Womack and Jones (1996) defined five lean principles:
"specifying value," "identifying the value stream," "flow," "pull," and "striving for
perfection." Two research topics, the definition of waste and a practical way of doing value stream
mapping, have been the focus of many scholars and product development practitioners,
based on the idea that addressing these topics leads to realization of the five lean principles.
From the perspective of waste, Womack and Jones introduced nine categories of waste by
adding two new categories to Toyota's seven categories of waste in manufacturing. Slack
(1999) tried to prioritize the nine types of waste by surveying product
development organizations about each category's frequency. He also analyzed each
category's effect on value.
The definition of the categories of waste has continuously been discussed by exploring the
differences between the manufacturing and product development environments. Morgan (2002)
dramatically changed the definition of waste from the perspective of systems engineering.
Based on the idea that unsynchronization leads to low performance in product development
processes, he introduced eleven categories of waste, replacing all but one: waiting.
Recognizing interdependency among the categories of waste defined by forerunners, Bauch
(2004) re-defined ten categories of waste by analyzing interactions among the categories.
Attempts have also been made to apply value stream mapping to product development processes, as
product development value stream mapping (PDVSM). Early versions of PDVSM, which
inherited many features from value stream mapping for manufacturing, were not capable of
displaying activities specific to the product development environment, such as iteration and
multiple tasking. Morgan (2002) improved PDVSM by making each process box's length
proportional to the time spent on it. Although this suggestion clearly differentiated PDVSM
from ordinary process maps in that unsynchronization became visible, how to display
iteration and multiple tasking remained to be solved.
[Figure 2.1 Ten Categories of Waste in Product Development (Bauch, 2004) – main categories include (1) waiting (people waiting for data, answers, specifications, requirements, test results, approvals, decisions, releases, review events, or signs; information waiting for people), (2) transport/hand-offs, (3) movement (stop-and-go tasks, task switching), (4) over processing, (5) inventory, and (6) overproduction/over-dissemination of information, organized under waste drivers such as unsynchronized processes, poor synchronisation as regards time, capacity, and contents, ineffective communication, redundant tasks, and exceeding capacity utilization, with effects on time, resources/capacity, information/knowledge, quality, and opportunity.]
The definitions of waste had not been explicitly utilized for displaying waste in value
stream maps until Graebsch (2005) applied Bauch's definition to his microscopic case
studies of ongoing MIT student projects. He successfully measured occurrences of waste by
displaying them in value stream maps, making it possible to measure the frequency of each
category of waste on a value stream map. This achievement raised the following research
questions:
1. Can value stream mapping be used for measuring wasted time?
2. Can value stream mapping be applied to analyses of industrial product development
processes?
CHAPTER 3 PRELIMINARY STUDIES
3.1 INTRODUCTION
To evaluate value stream mapping's applicability to measuring wasted time, three
preliminary case studies were performed.
[Figure 3.1 Value Stream Map for One Designer's Activities with Waste Defined by McManus (2004) and Morgan (2002) – continued in Figure 3.2. The map covers a railway vehicle body structure design, from receiving an unofficial order with a rough specification, through layout coordination, FEA outsourced to Tokyu Car Corp., a field study, design reviews with the production management, manufacturing, and purchasing sections, to FE model development and documentation of expertise; identified wastes include waiting, hand-offs, transaction waste, transportation, unnecessary motion, redundant tasks, and unsynchronized concurrent tasks, each roughly scored for wasted time (4 = a day, 3 = a few hours, 2 = an hour).]
[Figure 3.2 Value Stream Map for One Engineer's Activities with Waste Defined by McManus (2004) and Morgan (2002) – continued from Figure 3.1. The map covers generating and screening candidate structures for the new vehicle, running FEA, consulting an outsourced company for advice, design reviews, and documenting expertise; identified wastes include transaction waste, transportation, unnecessary motion, and ineffective communication.]
3.2.4 EVALUATION
In this case study, rework was shown by using an arrow that went back to the reworked task.
For example, in figures 3.1 and 3.2, tasks between (2) and (20) were repeated several times.
This way of showing rework made it impossible to satisfy the two rules suggested by
Morgan.
This problem is obvious when some downstream tasks are affected by an occurrence of
rework, as in figure 3.3. In this figure, task 3, which was started after task 2's start, appears
before task 2, which does not satisfy the second rule above. Thus, using going-back arrows
makes it impossible to follow Morgan's two rules.
Another problem is that using a going-back arrow makes it impossible to display how a
project's schedule is affected by an occurrence of a wasteful activity, such as creating
defective information. For example, it is not obvious in figure 3.3 whether task 2 was
reworked because of defective information received from task 1, or whether it was reworked
because of defective information created inside task 2. Thus, using a going-back arrow
makes it unclear how other tasks are affected by a wasteful activity. For the same reason,
measuring waste is also difficult when a going-back arrow is used.
Figure 3.3 Simplified Value Stream Map with Rework with Going-Back Arrows
3.3.2 SELECTED PROJECT AND FOCUS
The same railway vehicle development project was selected for the second case study. This
time, the whole development process was selected as the scope of the value stream map.
3.3.4 EVALUATION
Adoption of the swim-lane format made it possible to draw a value stream map without making
it too complicated.
[Figure 3.4 Value Stream Map for the Whole Railway Vehicle Development Project – drawn in swim-lane form, with lanes for the Processing & Mfg. Center, Manufacturing Center, Body Structure Center, Fitting Center, and Bogie Mfg. Center.]
3.4 THE THIRD PRELIMINARY CASE STUDY
3.4.1 OBJECTIVE OF THE THIRD CASE STUDY
The objective of this case study was to test the applicability of value stream mapping to
measuring waste inside of each task.
[Figure 3.5 Value Stream Map for a MIT Student Product Development Project – continued in Figure 3.6. The map compares Teams A and B through market analysis/customer survey, solution generation and selection, feasibility analysis, mock-up, and lab reviews (lab selection 10/13, downselection 10/20-10/21, mock-up review 10/27, final model review 11/3, assembly review 11/8-11/10); annotated wastes include undiscovered rework, unplanned rework iterations, defective information, work perceived to be complete, over processing and under processing driven by an unrealistic schedule, and a compromise in a decided feature after a more promising solution was found. One root cause triggered many wastes.]
3.4.4 EVALUATION
In this case study, all rework took more time than the original work. One reason for this
phenomenon was that the scope of the tasks expanded as the students identified several problems
through the manufacturing of a mock-up. However, there was another reason: the
students did not work as intensively as they had on the original work, and the tasks were not
worked on all the time. This situation corresponds to today's prevalent product
development environment, in which the same engineers are shared by several projects. And,
generally, the magnitude of impacts from outside the project fluctuates. Possible problems
caused by impacts from outside the project are the following:
1. Project delays due to low availability of engineers.
2. Information loses its value even while it is not worked on because of risks, including
market risk (see chapter 8).
Therefore, even when a value stream map's focus is one project, impacts from outside of
the focused project should be displayed explicitly in some way.
[Figure 3.6 Value Stream Map for a MIT Student Product Development Project – continued from Figure 3.5. The map follows investigation of safety/regulations, CAD modeling, and prototyping against the lab milestones (downselection 10/20-10/21, mock-up review 10/27, final model review 11/3, assembly model review 11/8-11/10, in-class review 11/12, final lab review 11/17); a waiting period for a hand-off around 11/5 is annotated with its root cause, a delay in the previous task.]
CHAPTER 4 IMPROVEMENT SUGGESTIONS FOR VALUE
STREAM MAPPING
[Figure: the original work and the rework of Function A Phase 1 are placed along the same line – only the same tasks should be along the same line – while Phases 2 and 3 are placed on their own lines.]
4.3 SHOWING BOTH PLANNED AND ACTUAL SCHEDULES
In the third preliminary case study, interviews with students were performed several times.
When students were asked whether they had found any wasteful activity in their development
process using a value stream map displaying only the actual processes, it was difficult to get
enough information related to waste. After the original schedule information was added to the
value stream map, however, it became significantly easier. Thus, showing both planned and
actual schedules turned out to be effective for identifying wasteful activities through interviews.
[Figure: Function A Phase 1 (original work) followed by Phases 2 and 3, with an expedited task shown explicitly.]
CHAPTER 5 ROOT-CAUSE ANALYSIS DIAGRAM
Table 5.1 Comparison of the Definitions of Waste (excerpt)

Toyota's seven wastes (Ohno, 1978) | Womack and Jones (1996) | Morgan (2002) | McManus (2004) | Bauch (2004)
1 Waiting | Wait Time | Waiting | Waiting | Waiting
3 Over Processing | Excessive Processing | - | - | Over Processing
[Figure 5.1 Analyses of Relationships among the Categories of Waste Defined by Morgan and McManus – the categories without the name of Morgan or McManus were added by the author. Root causes on the cause side (e.g., high process and arrival variation, lack of system discipline, existence of interfaces, poor knowledge management, external quality enforcement) lead through intermediate categories (going ahead leaving uncertainties, redundant tasks, unsynchronized concurrent tasks, ineffective communication, re-invention, transaction waste, transportation, hand-offs, unnecessary motion, excessive processing) to effects such as over production, rework, defects, and the existence of waste.]
5.3 DEFINITIONS OF NINE WASTE INDICATORS AND INVENTORY OF
INFORMATION
5.3.1 DEDUCING NINE WASTE INDICATORS FROM THE ROOT-CAUSE ANALYSIS
DIAGRAM
The root-cause analysis diagram is shown in 5.4. The rightmost categories in the diagram
are root causes, and the leftmost ones are effects. From the perspective of measurement, the desirable
categories of waste are the ones that can easily be identified and whose wasted time can be measured.
It was found that effects are easier to measure than their causes. For example, wasted time
on "waiting" can be measured by measuring an engineer's waiting periods. However,
wasted time on "system over utilization," the root cause of waiting in figure 5.1, is difficult to
measure. Therefore, the nine leftmost categories, which are
effects, were chosen as metrics of waste in product development processes. They are named
waste indicators because they are not causes of waste, but indicate that time is wasted for
some reason.
Rework is the only indicator that had not been identified by any of the forerunners in Table 5.1. It was
added by the author because it has frequently been pointed out as an indicator of low
performance by many scholars and product development practitioners.
As can be understood from 5.4, the entries in the nine waste indicators are not mutually
exclusive. For example, defective information causes rework (5.4.5). The reason the
author allowed this redundancy is the strong interdependency among wastes in
product development: one occurrence of waste can cause many different types of waste,
possibly forming a vicious circle, and this interdependency differs on a case-by-case basis.
Removing one waste indicator could therefore make the set of waste indicators
insufficient. Thus, the author maintained the nine (plus the one described in 5.3.2) waste
indicators at this point; the waste indicators are prioritized in the case studies discussed in
chapter 10.
[Table 5.2 Definitions and Examples of Nine Waste Indicators – columns: Waste Indicators, Description, Typical Examples. Surviving entries include 2. Waiting of People (people are waiting; e.g., people are forced to wait for specifications, requirement changes, or meetings) and 7. Re-Invention (designing similar things; e.g., designing a similar thing twice despite past experience in other groups/people).]
Inventory of information is different from the nine waste indicators in that engineers' time
is not wasted while information is inventoried. In spite of this, inventory of information was
also identified as a waste indicator in this research because inventoried information can
lose value, causing rework. Inventory of information is discussed in detail in chapter 8.
CHAPTER 6: VALUE STREAM MAPPING FOR QUANTITATIVE
ANALYSIS OF PD PROJECTS
6.1 OBJECTIVE
Value stream mapping was originally developed for use in the manufacturing environment, and
later its scope was expanded to the product development environment as PDVSM (Product
Development Value Stream Mapping). For this historical reason, PDVSM inherited various
rules for displaying different types of information flows and activities that had been
developed for detailed analyses of manufacturing processes. Some of these rules in
PDVSM are unnecessary in this research because the main purpose of using value stream
mapping here is measuring wasted time. The unnecessary rules are eliminated, and, instead, a
minimum set of rules necessary for the measurements is introduced in this chapter based on
the suggestions discussed in 4.2.
6.2 VALUE STREAM MAPPING (VSM) FOR DETAILED ANALYSIS
OF PROJECTS
[Figure: the value stream map is divided into swim lanes, one for each group (Group A, Group B, Group C).]
6.2.2 PROCESS BOXES – ALLOCATION AND LENGTH
(1) Process Boxes for the Development of the Same Function
If the development of a function can be divided into several different phases, each phase should
be assigned one process box (figure 6.2). As can be seen in this figure, each downstream phase
of a function is allocated immediately below its adjacent upstream phase.
[Figures 6.2 and 6.3: process boxes for Phases 1, 2, and 3 of Function A, with each downstream phase placed immediately below its adjacent upstream phase; the original work and rework of the same phase share the same line.]
This rule does not apply to the process boxes for different functions (figure 6.4). Different
tasks can be on the same line if they are not for the same function.
[Figure 6.4: process boxes for Function A and Function B – Phase 1 of Function A and Phase 1 of Function B, and likewise their Phase 2 boxes, may share the same lines because they belong to different functions.]
(3) Beginnings and Ends of Process Boxes
Beginnings and the ends of the process boxes should be consistent with the time line at the
top of the map (Morgan, 2002). As a result, process boxes’ lengths become proportional to
the time spent on them.
[Figure 6.5 Beginnings and Ends of Process Boxes – the beginnings and ends of the process boxes should match the time line at the top of the value stream map. In the example, Function A Phase 1 (3 days) starts on the first day of week 1 and is worked on for three days without any interruption lasting more than one day, followed by Phase 2 (2 days) and Phase 3 (5 days).]
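To make the proportional-length rule concrete, the following sketch (an illustration only; the names and the day-based units are assumptions, not part of the thesis) turns a task's start date and working-day duration into a box whose offset and length follow the time line:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ProcessBox:
    task: str
    start: date          # first day the task was worked on
    duration_days: int   # days of work, ignoring interruptions shorter than one day

    def end(self) -> date:
        # The box ends after duration_days of work.
        return self.start + timedelta(days=self.duration_days)

    def geometry(self, timeline_start: date, units_per_day: float = 1.0):
        """Return (offset, length) so the box length is proportional to the time spent."""
        offset = (self.start - timeline_start).days * units_per_day
        length = self.duration_days * units_per_day
        return offset, length

# Example: Function A, Phase 1 starts on the first day of week 1 and takes 3 days.
week1_start = date(2005, 1, 3)
phase1 = ProcessBox("Function A / Phase 1", start=week1_start, duration_days=3)
print(phase1.geometry(timeline_start=week1_start))  # (0.0, 3.0)
```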
6.2.3 BOXES – COLOR CODES
(1) Displaying Actual Processes (figure 6.6)
Process boxes for actual processes are painted blue as long as their lengths do not exceed
the scheduled periods. In cases in which tasks took longer than
scheduled, the excess time should be shown by painting it pink.
[Figure 6.6: Function A Phase 1 (2 days) finished one day earlier than the original schedule, while Function A Phase 2 (6 days) took one day longer than the original schedule, shown as an extended period of one day.]
Another usage of white boxes is to use them only when tasks are finished earlier than
scheduled; in figure 6.6, a white box is used to show that Phase 1 was finished one day
before its due date. This method is applied to the case studies of Projects A and C
(see chapter 9).
[Figure 6.7: the same example using white boxes only for early finishes – Function A Phase 1 (2 days) finished one day before its due date (white box labeled Func. A P1), while Function A Phase 2 (6 days) took one day longer than scheduled.]
(3) Displaying Review and Testing Processes
Review and testing processes should be shown by using diamonds (figure 6.8).
[Figures 6.8 and 6.9: review and testing processes are shown as diamonds; the original work and the rework of Function A Phase 1 are placed along the same line, followed by Phases 2 and 3.]
(5) Displaying Over Processing
Usually over processing is submerged in value-adding activities, but when the whole output
of a process is considered over processing, its process box should be painted yellow
(figure 6.10).
[Figure 6.10 Displaying Over Processing – Function A Phase 2, whose whole work was abandoned after a review, is painted yellow as over processing.]
[Figure 6.11 Timely Information Transfer and Delayed Information Transfer – information used within one day is a timely transfer; information stored for a specific period is distinguished by using green lines.]
This rule also applies to information stored because the group or the engineer is
interrupted in the middle of a task (figure 6.12).
[Figure 6.12 Displaying an Interruption – the original work of Function A Phase 1 and its continuation are separated by an interruption of one day or more.]
(2) Information Transfer with Hand-Off (Information Flow across Swim Lanes)
Hand-offs (information transfers among engineers/groups) are marked in blue if the
information is handed off immediately (figure 6.13). Typically, transferred information used
within one day satisfies this condition. However, this criterion is contingent on various factors,
including how quickly the market moves and the scheduled development period. A hand-off in
which information is kept untouched for one day or more is wasteful and is covered by two
waste indicators, hand-off and inventory of information. Hand-offs with inventory periods
should be distinguished from the other hand-offs by using red lines (figure 6.13).
[Figure 6.13 Different Types of Hand-Offs – a timely hand-off (information used within one day) is drawn in blue, while a delayed hand-off (a hand-off with an inventory period) is drawn in red; the example shows Function A Phases 1-3 (2 days each) handed off across swim lanes.]
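A minimal sketch of the classification above (the one-day threshold comes from this section; the class and field names are assumed purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class HandOff:
    sender: str
    receiver: str
    inventory_days: float  # days the transferred information sat untouched

    def is_timely(self, threshold_days: float = 1.0) -> bool:
        # Information used within one day counts as a timely hand-off (blue line);
        # information kept untouched for one day or more is a delayed hand-off
        # (red line) and also counts toward the inventory-of-information indicator.
        return self.inventory_days < threshold_days

handoffs = [HandOff("hardware team", "software team", 0.2),
            HandOff("designer", "FEA group", 4.0)]
print(sum(not h.is_timely() for h in handoffs))  # 1 delayed hand-off
```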
6.2.5 INTERACTIONS
Bi-directional information exchanges are shown by using bi-directional arrows (figure
6.14).
[Figure 6.14 Displaying Bi-directional Information Exchanges – a bi-directional arrow connects Function A Phase 2 and Function B Phase 1.]
6.2.6 CROSS-FUNCTIONAL TASKS
Cross-functional tasks should be shown in the way described in figure 6.15.
[Figure 6.15 Displaying Cross-Functional Tasks – a cross-functional task spans the process boxes of Functions A, B, and C.]
6.2.7 INTERRUPTIONS
In most organizations, engineers are required to work on multiple tasks. Many scholars and
product development practitioners have pointed out that this multiple tasking significantly
affects product development projects. In this value stream mapping, all tasks that are not
part of the focused project are treated as interruptions, and shown as in figure 6.16.
[Figure 6.16 Displaying Interruptions – an expedited task from outside the project interrupts the work between Function A Phase 2 and Phase 3.]
6.2.8 TIME RECORDING
Time spent on every task should be put on the value stream map (figure 6.17). The total
weekly labor hours of each functional group should also be put on the value stream map.
These numbers will be used for measuring wasted time (chapters 7 and 8).
[Figure 6.17 Displaying Spent Time – hours worked (e.g., 26, 24, and 16) are recorded on the process boxes for Function A Phases 1-3, and the weekly labor hours of the group (40) are recorded as well.]
6.2.9 WASTE INDICATORS
Measured wasted time should be put on value stream maps in the way described in figure
6.18.
[Figure 6.18 Displaying Wasted Time – each occurrence of a waste indicator is marked on the map with an assigned ID (w for one of the nine waste indicators, i for inventory of information) and its measured duration, e.g., i001 – 5 days of inventory between the original work and the continuation of Function A Phase 1.]
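The recording conventions of 6.2.8 and 6.2.9 suggest a simple tabular data model. The sketch below is a hypothetical illustration with assumed field names; it shows how each identified occurrence could be logged so that the measurements described in chapters 7 and 8 can be totaled later:

```python
from dataclasses import dataclass
from typing import List

WASTE_INDICATORS = [
    "overproduction", "waiting", "transportation", "over processing", "motion",
    "rework", "re-invention", "hand-off", "defective information",
]

@dataclass
class WasteRecord:
    record_id: str       # assigned ID on the map, e.g. "w017" or "i001"
    indicator: str       # one of WASTE_INDICATORS, or "inventory of information"
    engineer: str
    week: int            # project week in which the occurrence was identified
    wasted_hours: float  # measured wasted engineering time (chapter 7)

def total_wasted_hours(records: List[WasteRecord], indicator: str) -> float:
    """Total measured wasted time for one waste indicator."""
    return sum(r.wasted_hours for r in records if r.indicator == indicator)

records = [
    WasteRecord("w001", "rework", "T", week=11, wasted_hours=16.0),
    WasteRecord("i001", "inventory of information", "Y", week=11, wasted_hours=0.0),
]
print(total_wasted_hours(records, "rework"))  # 16.0
```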
CHAPTER 7: HOW TO MEASURE WASTED TIME IDENTIFIED BY
NINE WASTE INDICATORS USING VALUE STREAM MAPPING
7.2 OVERPRODUCTION
When overproduction occurs, the engineering time spent on it is regarded as
time wasted on overproduction (figure 7.1).
[Figure 7.1 Measuring Time Spent on Overproduction – a hatched process box means overproduction; the time spent on the non-value-adding work is the wasted time.]
7.3 WAITING
When an engineer was forced to wait doing nothing, the period for which the engineer
waited is regarded as time wasted on waiting (figure 7.2). Waiting is rare in today's product
development environment, for engineers usually have several tasks in their queues.
[Figure 7.2 Measuring Time Spent on Waiting – the period the engineer spent waiting (doing nothing) is the wasted time.]
7.4 TRANSPORTATION OF INFORMATION
Sometimes transportation of information takes up a substantial amount of engineers' time.
Figure 7.3 shows an example case in which an engineer needs to provide his/her CAD data to a
supplier. He/she may need to spend time on data conversion processes, which are
usually not completely automatic. In this case, the time the engineer spent on data conversion is
the time wasted on transportation.
[Figure 7.3 Measuring Time Spent on Transportation – the wasted time is the period the engineer spent converting and transferring data to a supplier, an outsourced company, etc.]
7.5 OVER PROCESSING
Wasted time on over processing can be measured in the way shown in figure 7.4.
Determination of the actual time spent on over processing usually requires intensive
interviews with engineers, for over processing occurs concurrently with other value adding
work.
[Figure 7.4 Measuring Time Spent on Over Processing – the wasted time is the period spent on the non-value-adding work that was later discarded.]
7.6 MOTION
Figure 7.5 shows an example of motion. In this example, an engineer spent some time
reviewing another engineer's work. The time spent on reviewing is considered to be wasted
time.
7.7 REWORK
Figure 7.6 shows an example of measuring wasted time on rework. In this example, the original
work was partially reworked. In such a case, the wasted time on rework is the total of
A and B (figure 7.6): A is the time spent on the discarded work, and B is the time spent on
troubleshooting. C is considered to be a value-adding activity. A is sometimes difficult to
measure, but C can substitute for A when measuring wasted time. An example of a
measurement of rework is shown in figure 10.20.
[Figure 7.6 Measuring Time Spent on Rework – the measured period is the sum of A (time spent on the discarded portion of the original work) and B (time spent on troubleshooting); C is the redone, value-adding work and can substitute for A when A cannot be measured directly.]
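A small sketch of the rule above, with hypothetical numbers: the wasted time for one occurrence of rework is A plus B, and C substitutes for A when A cannot be measured directly.

```python
from typing import Optional

def rework_wasted_hours(a_discarded: Optional[float],
                        b_troubleshooting: float,
                        c_redone: float) -> float:
    """Wasted time on one occurrence of rework, following figure 7.6:
    A = time spent on the discarded portion of the original work,
    B = time spent on troubleshooting,
    C = time spent redoing the work (value adding), used as a substitute
        for A when A cannot be measured directly."""
    a = a_discarded if a_discarded is not None else c_redone
    return a + b_troubleshooting

# A could not be measured here, so C (12 hours) substitutes for it.
print(rework_wasted_hours(a_discarded=None, b_troubleshooting=4.0, c_redone=12.0))  # 16.0
```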
7.8 RE-INVENTION
Figure 7.7 is an example in which two engineers invented the same information. In this
case, time spent on the second invention is regarded as the time wasted on re-invention.
[Figure 7.7 Measuring Time Spent on Re-Invention – the second engineer's time spent re-creating the same information is the wasted time; the communication that should have occurred between the two engineers is also indicated.]
7.9 HAND-OFF
Sometimes hand-offs take both the sender's and the receiver's time: the sender may need
to spend time on documentation that could have been avoided without the hand-off, and the
receiver usually needs to spend time understanding the sender's work. Figure 7.8
shows an example in which both engineers wasted time on a hand-off.
7.10 DEFECTIVE INFORMATION
Defective information causes waste of time in various forms, including rework, time spent
on reviews and testing, and customer support work after launching the product. Figure 7.9
shows an example in which defective information caused rework. In this case, the wasted time is
the time spent on creating the defective information and fixing it. In many cases, the time spent on
creating defective information cannot easily be distinguished from other value-adding
activities. In such cases, measuring the time spent on fixing the defective information is sufficient.
[Figure 7.9 Measuring Time Spent on Defective Information – the time spent creating the defective information is wasted but cannot easily be measured; the measured period is the rework caused by it, including the discarded portion.]
CHAPTER 8: INVENTORY OF INFORMATION AND HOW TO
MEASURE IT USING VALUE STREAM MAPPING
8.1 INTRODUCTION
Goldratt (1997) insists in his book "Critical Chain" on not allocating buffer times except at
the ends of projects. McManus (2004) stresses the wastefulness of inventory of
information in his "PDVSM Manual," arguing that "work in progress" information may become
obsolete while it is stored. Both arguments share the common idea that created information
should not be kept waiting. However, as in the manufacturing environment, inventory
cannot be completely eliminated, for two reasons. One is that product development teams
usually do not have enough engineers to keep all information being worked on all the time.
The other is the risks and uncertainties existing in product development projects. This is
why even the scheduling methodology suggested by Goldratt requires buffer time allocated
on feeding paths.
Thus, there exist two tradeoffs related to inventory of information. One is between the cost
of having a big team and the cost of the obsolete information caused by having a small
team. The other is between the risk of depleting buffer time and, again, the risk of
having obsolete information. Depletion of buffer time unsynchronizes the whole project
schedule, causing subsequent waste.
Because of these trade-offs, there can be no universal solution for determining the right
number of engineers and the right buffer times: product development organizations need to
know how much inventory of information costs in their specific contexts. Without
quantitative data, they cannot optimize the buffer allocations in their schedules. For instance, in
an environment in which the market is significantly unstable, a huge team that realizes short
development cycle times may be desirable because the information created goes bad quickly.
This research tries to shed light on this topic: the deterioration of inventoried information and
how to measure it. 8.2 discusses how information goes bad. The "interest rate" of inventory of
information is calculated in the case study of Project A; the result is discussed in chapter 11.
8.2 DEFINITIONS OF ROTTEN INFORMATION AND FRESH INFORMATION
Rotten inventory in this thesis is the information inventory that needs to be reworked
partially or completely due to changes that occurred inside or outside the project. For example,
information inventory may need to be reworked because a significant market change is
identified. More discussion of the causes of rotten information is covered in 8.3.
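As an illustration of how the deterioration of inventoried information could be quantified (a sketch only, assuming each occurrence of inventory is logged with the week it arose and whether it later rotted; the thesis's own calculation for Project A appears in chapter 11):

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class InventoryRecord:
    record_id: str  # e.g. "i328"
    week: int       # project week in which the information went into inventory
    rotten: bool    # True if the inventoried information later had to be reworked

def rotten_ratio_by_period(records: List[InventoryRecord],
                           weeks_per_period: int = 4) -> Dict[int, float]:
    """Fraction of inventory occurrences that rotted, per period of the project."""
    buckets: Dict[int, List[InventoryRecord]] = {}
    for r in records:
        buckets.setdefault(r.week // weeks_per_period, []).append(r)
    return {p: sum(r.rotten for r in rs) / len(rs) for p, rs in sorted(buckets.items())}

def deterioration_rate(ratios: Dict[int, float]) -> float:
    """Least-squares slope of the rotten ratio over periods; with four-week periods
    it can be read roughly as a deterioration rate per month."""
    xs, ys = list(ratios.keys()), list(ratios.values())
    if len(xs) < 2:
        return 0.0
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    return (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
            / sum((x - mean_x) ** 2 for x in xs))
```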
[Figure: rotten inventory of information – an upstream task pre-releases a tentative output that goes into inventory; the final output turns out to be inconsistent with the pre-released information because of technical problems, so the rotten portion of the inventory must be replaced, and the engineers' time spent on it is wasted.]
CHAPTER 9: OVERVIEW OF THE RESEARCH INVESTIGATIONS
(5) OVERALL
• Can the lean tools and processes described above deliver to product development
organizations information that leads to continuous improvement of their
value-creating processes?
To answer these questions, the next step was to test the tools and processes in
several product development projects.
9.2.2 DIFFICULTIES IN THE PROJECTS
Although the three projects are similar in that they are all embedded software development
projects, they all had their own difficulties.
Project A
Design Issues
• Major architecture design change caused by the decision to integrate several
components of the existing model into a single component.
• Higher complexity incurred by this integration.
• In spite of the additional constraints caused by the integrated design, higher performance was
required due to technological developments in the market at the start of
the project.
Project B
Design Issue
• Maintain compatibility with the previous model.
Project C
Design Issue
• Realize high compatibility with the other manufacturer’s machine.
• Basically no communication channel with users – their needs are communicated via the
customer.
9.3 INVESTIGATION SCHEDULE
9.3.1 OVERVIEW OF INVESTIGATION SCHEDULE
Companies X and Y were visited three times and twice respectively. Phone calls and emails
were exchanged between and after the visits. Company X's investigation lasted for
three months, and Company Y's, two months. The detailed schedule for the investigation of
Company X is described in the following sections. The investigation of Company Y
followed similar processes, although I could make it more efficient by applying what I had
learned through the investigation of Company X.
Although most meetings were held in the company's conference rooms, I was allowed to
occupy a desk located close to the development team. This helped me to develop a better
understanding of how they work, how information is exchanged, and even each
engineer's personality.
Project B:
• The MS-Project file that included both the scheduled and actual processes.
What I learned
Value stream maps for the finished portions of projects should be completed in advance if possible, for
drawing them takes a long time. In order to do this, both the actual and planned schedules
should be obtained well before a visit. For this reason, before I visited Company Y, I obtained
as much information as possible, leading to more effective and efficient information
exchanges during the visits.
Drawing Value Stream Maps
Project A’s value stream map was made based on the schedule information in the
MS-Project file. Project B’s value stream map was updated after each telephone conference.
CHAPTER 10: RESULTS OF RESEARCH INVESTIGATIONS 1:
ANALYSES ON NINE WASTE INDICATORS
[Figure 10.1 Occurrences of Waste Indicators per 50 Engineering Weeks – normalized number of occurrences for Projects A, B, and C, broken down by the nine waste indicators (overproduction, waiting, transportation, over processing, motion, rework, re-invention, hand-offs, and defective information).]
10.2.2 AVERAGE WASTED TIME PER OCCURRENCE
Figure 10.2 shows the average wasted time on each waste indicator in the three projects.
The overall average wasted time per occurrence of a waste indicator in Projects A, B, and
C was 17, 3, and 8 engineering hours respectively. Overproduction took 23 hours in
Project A on average. Waiting, along with motion and hand-off, had less average wasted
time than the others. Over processing, rework, and defective information took 17 hours or
more on average, except in Project B, which was in its investigation phase during my
survey period.
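The normalization used throughout this chapter can be sketched as follows (an illustration only), assuming that engineering weeks simply means the number of engineers multiplied by the calendar weeks observed (figure A-6 lists 6, 6, and 5 engineers for Projects A, B, and C):

```python
from typing import List

def normalize_per_50_eng_weeks(value: float, engineers: int, weeks_observed: float) -> float:
    """Scale a raw count or raw wasted hours to a common basis of 50 engineering weeks,
    assuming engineering weeks = engineers x calendar weeks observed."""
    return value * 50.0 / (engineers * weeks_observed)

def average_per_occurrence(wasted_hours: List[float]) -> float:
    """Average wasted engineering hours over the recorded occurrences of one indicator."""
    return sum(wasted_hours) / len(wasted_hours) if wasted_hours else 0.0

# Toy example: 30 occurrences observed with 6 engineers over 25 calendar weeks.
print(normalize_per_50_eng_weeks(30, engineers=6, weeks_observed=25))  # 10.0
```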
[Figure 10.2 Wasted Time per Each Occurrence of a Waste Indicator in Projects A, B, and C – average wasted engineering hours per occurrence for each of the nine waste indicators and for all indicators combined.]
10.2.3 TOTAL WASTED TIME
Figure 10.3 shows the total wasted time on each waste indicator in the three projects. Over
processing, motion, rework, and defective information were the top four waste indicators in
almost all the three projects. This implies that these four waste indicators are more important
than the others. Although one occurrence of overproduction wasted 23 hours on average in
Project A, the total wasted time on it was trivial compared to the top four waste indicators.
Waiting was also trivial, implying engineers always have some tasks in their queues.
Project A's wasted time on over processing was outstanding among the three projects:
1,438 engineering hours were wasted for reasons including changes
and errors. Project A also wasted more time on rework than the other two projects. Project
B's wasted time was much less than the others' in transportation, motion, rework, and
defective information. These results are analyzed in detail in 10.3.
[Figure 10.3 Normalized Total Wasted Time per 50 Engineering Weeks and Waste Indicators – total wasted hours per 50 engineering weeks for Projects A, B, and C, broken down by the nine waste indicators; over processing, motion, rework, and defective information dominate.]
10.2.4 TEMPORAL CHANGES IN WASTE INDICATOR DISTRIBUTIONS
Figures 10.4-10.6 show the temporal changes in the waste indicator distributions in Projects A-C
respectively. In Project A, the total wasted time fluctuated over time. This fluctuation
implies the software team's activities, which are downstream of the hardware
team's activities, were largely affected by intermittent hardware releases, for Project A
involved major changes in both hardware and software. In contrast, Project C, which
involved no major hardware change, had much less fluctuation than Project A. Wasted time
on rework increased as time spent on the project increased in all three projects.
[Figure 10.4 Temporal Changes in Waste Indicator Distribution (Project A) – wasted engineering hours per five-week interval (weeks 1-50), broken down by waste indicator (overproduction, waiting, transportation, over processing, motion, rework, re-invention, hand-offs, defective information).]
[Figure 10.5 Temporal Changes in Waste Indicator Distribution (Project B) – wasted engineering hours per five-week interval (weeks 1-15).]
[Figure 10.6 Temporal Changes in Waste Indicator Distribution (Project C) – wasted engineering hours per five-week interval (weeks 1-30).]
10.2.5 DISTRIBUTION OF WASTE INDICATORS AMONG ENGINEERS
Figures 10.7, 10.8, and 10.9 show the distributions of waste indicators for each engineer in
the three projects. As can be understood from these figures, the distributions differ
significantly among engineers. Tables 10.1, 10.2, and 10.3 briefly introduce the engineers'
profiles. These results illustrate the following:
The waste indicator distribution is contingent on each engineer's qualification level, each
engineer's role, and how information flows.
Looking into the distributions of U (in Project A) and NF (in Project C) reveals that the
waste indicator distribution is also affected by each engineer's role. Engineer U in Project A and
Engineer NF in Project C had similar waste indicator distributions: they are the only
engineers who wasted their time in the following order:
1. Motion, 2. Rework, 3. Defective information
As can be understood from tables 10.1 and 10.3, they share the same role: both are
responsible for other engineers' engineering issues while working on their own design
tasks. In Project B, U's waste distribution changes significantly: he/she wasted time
on hand-off most. This is mainly because his/her role in Project B was that of a project manager who
provides the engineers with tasks and the necessary information, including specifications.
Engineer H's waste indicator distribution in Project A clearly shows that the distribution
is affected by how information flows. H wasted his time on over processing most. This was
due to his working on tentative information, which was caused by late information releases
from the hardware team.
[Figure 10.7 Waste Indicator Distributions by Engineer (Project A) – percentage of wasted time by waste indicator for engineers F, M, T, Y, H, and U.]
[Figure 10.8 Waste Indicator Distributions by Engineer (Project B) – percentage of wasted time by waste indicator for engineers U, Y, T, F, N, and J.]
[Figure 10.9 Waste Indicator Distributions by Engineer (Project C) – percentage of wasted time by waste indicator for engineers KZ, NF, HS, HG, and IH.]
10.3 DETAILED ANALYSIS OF EACH WASTE INDICATOR
10.3.1 OVERVIEW
The five waste indicators that were less significant than the others (see figure 10.3), namely
overproduction, waiting, transportation, re-invention, and hand-off, are briefly reviewed in
this section. The top four waste indicators, over processing, motion, rework, and defective
information, are analyzed in detail using the root-cause analysis diagram.
Under qualification
Inexperienced engineers tend to write redundant source code.
[Figure 10.10 Normalized Wasted Time on Overproduction per 50 Engineering Weeks and the Corresponding Causes – causes include under qualification, unclear division of labor, premature architecture design, and making use of the previous model's architecture.]
10.3.3 WAITING – WASTED ENGINEERING HOURS BY CAUSES
Figure 10.11 shows the relationship between wasted time on waiting and the corresponding
causes. Waiting was identified in Projects A and B; insufficient maintenance of the development
environment and limited tools/prototypes/hardware were the respective causes. This
result implies that engineers are forced to wait only in unexpected situations in which
they encounter hardware problems; in other situations, they switched tasks. For
example, when some information was necessary to process a task, the engineers started
working on another task instead of waiting for the information.
[Figure 10.11 Normalized Wasted Time on Waiting per 50 Engineering Weeks and the Corresponding Causes – insufficient maintenance of the development environment (Project A) and limited tools/prototypes/hardware (Project B).]
10.3.4 TRANSPORTATION – WASTED ENGINEERING HOURS BY CAUSES
Figure 10.12 shows the relationship between wasted time on transportation and the
corresponding causes. Although transportation was identified in all three projects, the
distributions of causes differed from each other: only spatial/structural barriers were
shared by multiple projects. In Project A, changes in design methodology and changes in
documenting/database format/guidelines were the two significant causes. Both causes
required re-formatting of information (figure 10.13).
[Figure 10.12 Normalized Wasted Time on Transportation per 50 Engineering Weeks and the Corresponding Causes – causes include changes in design methodology, changes in documenting/database format/guidelines, spatial/structural barriers, areas new to the engineers, and having no business relation with users.]
[Figure 10.13 Part of the Root-Cause Analysis Diagram of Transportation of Information.]
10.3.5 OVER PROCESSING
(1) Wasted Engineering Hours by Causes
Figure 10.14 shows the relationship between wasted time on over processing and the
corresponding causes. Over processing was more significant in Project A than in the other
two. (i) “Undiscovered errors in outputs from upstream”, (ii) “Upstream changes/ poor
concept design/ marketing information” were the most significant causes in Project A. (iii)
“Prototype version confusion” was the most significant one in Project B.
- Outsourcing
- Functional organization
- Complex hierarchy structure of organization
[Figure 10.14 Normalized Wasted Time on Over Processing per 50 Engineering Weeks and the Corresponding Causes – causes include undiscovered errors in outputs from upstream (911 hours in Project A), upstream changes/poor concept design/marketing information, creating a workaround solution, insufficient/no communication, insufficient pervasion of goal information, prototype version confusion (the most significant cause in Project B), defective information, limited tools/prototypes/hardware, strategically processing multiple solutions, unclear task, schedule pressure, design technique, and poorly manufactured prototypes.]
[Part of the root-cause analysis diagram of over processing: root causes shown include upstream changes (iteration cannot be eliminated; identifying all interfaces in advance is impossible; poor information quality from upstream phases; premature concept and architecture design), undiscovered errors in outputs from upstream (upstream tasks' dependency on downstream tasks for verification; existence of uncertainties/risks; limited capability of organization), working on unreliable/defective information, prototype version confusion (poor work-in-process version management), and insufficient/no communication (spatial barrier; complex organizational structure: outsourcing, functional organization, complex hierarchy structure of organization).]
(3) Discussion
The most significant cause of over processing in Project A, "(i) Undiscovered errors in outputs from upstream," is common in many embedded software development projects in which hardware and software are developed concurrently. The first two root causes, "Defective information" and "Upstream task's dependency on downstream tasks for verification," reveal the wasteful relationship between the hardware and the software team, in which defective information flows down to the software team (figure 10.16). Because of this pattern, the software team needed to waste time on defective information for which it was not responsible. Creating defective information is itself wasteful, and passing the defective information downstream is also wasteful. If there had been an effective verification process before handing off information to the downstream team, both teams could have reduced wasted time: the software team could have spent less time on over processing, and the hardware team could have received feedback more quickly. It is generally difficult to test prototype hardware without a complete version of the embedded software, but finding ways to check errors effectively before handing off a hardware prototype, along with efforts to improve the prototypes' quality (which is more essential), could recover some portion of the 911 wasted hours per 50 engineering weeks (figure 10.14).
The third root cause, "Existence of risks/uncertainties," cannot be controlled. However, wasted time on over processing can be reduced by identifying risks and uncertainties earlier, by introducing front-loaded processes such as set-based concurrent engineering or spiral processes.
The fourth root cause, "Limited capability of organization," may be controllable. However, it is usually difficult to address, because developing embedded software engineers takes longer than developing general software engineers: embedded software engineers need expertise in specific hardware.
[Figure 10.16 Flow of Defective Information between the Hardware Team and the Embedded Software Team: the verification process between the two teams is not effective.]
The identified root causes for the second most significant cause, "(ii) Upstream changes/poor concept design/marketing information," suggest that Project A's concept development phase might have been premature. Because the project proceeded without mature, good-quality information, the team ended up spending 195 hours per 50 engineering weeks on over processing. However, some portion of this wasted time is caused by deterioration of information, which is discussed in the next chapter.
10.3.6 MOTION
(1) Wasted Engineering Hours by Causes
Figure 10.17 shows the relationship between wasted time on motion and the corresponding
causes. The top three causes for motion were (i) "Documenting," (ii) "Testing/QC," and (iii) "Meeting." In these investigations, testing/QC included only review and testing tasks performed by other engineers, because self-testing activities were not clearly distinguishable from fixing errors.
Documenting and testing/QC were the most significant causes in Projects A and C, while
they were not significant in Project B.
[Figure 10.17 Normalized Waste Time on Motion per 50 Engineering Weeks and the Corresponding Causes: bar chart of wasted time (hours per 50 engineering weeks) by cause, including documenting, meeting, information hunting, business trips, walking to another floor/building or to another engineer, outsourcing, and using new engineers, for Projects A, B, and C.]
(2) Examples of Root-Cause Analysis
The root-cause analyses of causes (i), (ii), and (iii) are quoted from the root-cause analysis diagram discussed in chapter 5 (see figure 10.18).
(i) Documenting
As can be understood from figure 10.18, the typical root causes for this cause are the
following:
- Required activity
- Transportation
While documenting wastes the ongoing project’s time, it may save time later in future
projects: without easy-to-access documentation, re-invention waste may occur in the future.
(ii) Testing/ QC
As can be understood from figure 10.18, the typical root causes for this cause are the
following:
-Defective information.
-Outsourcing
Testing activities are classified as waste because the better a development process is, the less time it needs for testing. This means that a development process with a long testing time does not necessarily guarantee high-quality products. Rather, it may be a poor process whose work-in-process information is of low quality, because defective information lengthens testing that would have been unnecessary otherwise.
(iii) Meeting
Not all meetings are wasteful, because well-organized meetings lead to high-quality information transfer (Graebsch, 2005). On the other hand, meetings held only for unilateral information transfer or attended by people who do not need to be there are wasteful, and they cause other waste by taking engineers' time.
(3) Discussion
Considering the three major causes for motion identified, measuring time spent on motion is not an effective way to reduce waste. Documenting can be considered an investment in the future. Spending a long time on testing is waste, but one of its two causes is defective information, which is itself one of the waste indicators. In fact, in Project C, design reviews sometimes worked as a training process. For example, many design reviews can be seen in figure 10.19. The project spent a long time on these reviews, which lasted 1.5 to 2 hours each, but they helped engineer KZ learn the company's coding guidelines and design philosophy. Using design reviews as an opportunity for training may not always be the best way of training, but it, too, can be regarded as an investment in the future. Deciding whether a meeting is wasteful requires careful analysis.
[Figure 10.18 Part of the Root-Cause Analysis Diagram of Motion: root causes shown include defective information and outsourcing (for testing/QC); required activity and transportation (for documenting); and meetings held just for information sharing, limited capability of tele-conferencing, spatial barriers, complex organizational structure, and time spent operating complex support equipment (for meetings and other motion).]
Figure 10.19 Example of a Process with Frequent Design Reviews – Engineer KZ’s design Process in Project C
10.3.7 REWORK
(1) Rework in VSM
As can be seen in figure 10.20, rework is conspicuous in value stream maps. How time is spent on rework can easily be understood by glancing at a value stream map; in Projects A and C, some engineers spent more than half of their time on rework.
[Figure 10.21 Normalized Waste Time on Rework per 50 Engineering Weeks and the Corresponding Causes: bar chart of wasted time (hours per 50 engineering weeks) by cause, including defective information (1,237 hours in Project C), troubleshooting (861 hours in Project A), upstream changes, unclear division of labor, under qualification, insufficient/no communication, prototype version confusion, over processing, overproduction, necessary iteration, and hand-off, for Projects A, B, and C.]
(3) Examples of Root-Cause Analysis
The root-cause analyses of causes (i) and (ii) are quoted from the root-cause analysis diagram discussed in chapter 5 (see figure 10.22). Troubleshooting is caused by defective information, but in this investigation, when an engineer had spent time on troubleshooting, the wasted time was classified not as rework caused by defective information but as troubleshooting. On the other hand, when an engineer had spent time on fixing defective information, the wasted time was classified as rework caused by defective information.
(4) Discussion
Figure 10.23 explains why 861 hours were spent on troubleshooting in Project A. As can be understood from this figure, the embedded software team needed to take care of errors in both the hardware and the software, which made troubleshooting processes complicated. Furthermore, the software engineers had limited knowledge of the prototype hardware. In particular, the inexperienced engineers (F and Y) spent a long time identifying where bugs were. The quality of their work was not as high as that of the experienced engineers at the time (they showed notable improvement in Project B), and their troubleshooting processes were not as effective as those of the experienced engineers. Their lack of confidence, coming from inexperience, sometimes made them take a long time before determining who was responsible for the bugs. It was sometimes difficult for them, partly because of the Japanese culture, to point out errors made by hardware engineers who were more experienced. Thus, engineers wasted 861 hours on troubleshooting.
3. Try to retrieve alarming information from engineers: hidden problems are sometimes more troublesome than visible ones. (This seems to have been already realized by engineer U in Project B.)
On the other hand, Project C spent 1,237 hours on rework caused by defective information. Defective information is discussed in 10.3.10.
[Figure 10.22 Part of the Root-Cause Analysis Diagram of Rework: root causes shown include defective information from upstream, unclear division of labor and unplanned iteration, prototype version confusion (poor work-in-process version management), insufficient/no communication (spatial barriers, complex organizational structure, outsourcing, functional organization), schedule pressure from a too tight (unrealistic) schedule, insufficient training, troubleshooting/diagnosis, and working on unreliable/defective information.]
[Figure 10.23 Flow of Defective Information from the Hardware Team to the Embedded Software Team (861 wasted hours): the verification process between the teams is not effective.]
The two graphs proved that one occurrence of rework takes longer near the end of a project than at the beginning, and that the increase is exponential. Project A's curve fluctuated because information came from the upstream (hardware) team periodically: the software team identified rework after information releases such as detailed specification releases and prototype releases.
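The claim that rework time per occurrence grows roughly exponentially can be checked with a simple regression on the weekly averages read off the charts. The values below are hypothetical placeholders rather than the thesis's measured data; the sketch only illustrates the fitting procedure.

import numpy as np

# Hypothetical weekly averages (engineering hours per rework occurrence);
# substitute values read off the charts for a real check.
week_midpoints = np.array([3, 8, 13, 18, 23, 28, 33, 38, 43, 48])
avg_rework_hours = np.array([5, 6, 9, 10, 12, 14, 21, 23, 43, 67])

# Fit log(y) = a*x + b, i.e. an exponential trend y = exp(b) * exp(a*x).
a, b = np.polyfit(week_midpoints, np.log(avg_rework_hours), 1)
print(f"weekly growth rate: {a:.3f} (doubling every {np.log(2)/a:.1f} weeks)")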
These results imply that problems should be identified as soon as possible. Possible
solutions are the following:
1. Introducing a suitable spiral process with frequent prototyping
2. Introducing a front-loaded process such as set-based concurrent engineering
3. Trying to listen to engineers. Watching for hidden problems.
(Footnote 1: Not all the engineers switched to the second phase at the same time in Project B.)
[Charts: Average Time Spent on One Occurrence of Rework by Week. The first chart plots average rework time per occurrence (engineering hours) for Projects A and B over weeks 1–60, rising from several hours in the early weeks to 67 hours in weeks 56–60; the second plots average wasted time per occurrence over weeks 1–30.]
10.3.8 RE-INVENTION
Re-invention was identified only in Project A, and its identified primary root cause was scattered locations. An engineer spent a long time developing a function that was similar to one already developed by another engineer.
[Figure 10.26 Normalized Waste Time on Hand-Off per 50 Engineering Weeks and the Corresponding Causes: bar chart of wasted time (hours per 50 engineering weeks) by cause, including dissemination of requirements, absence of a task owner, outsourcing, switching to an expert, work sharing, and unloading tasks on the critical path, for Projects A, B, and C.]
10.3.10 DEFECTIVE INFORMATION
(1) Wasted Engineering Hours by Causes
(i) Under processing/errors/lapses was the most significant cause of defective information in all three projects (figure 10.27), and the amounts of wasted time due to this cause were similar in Projects A and C. Under processing means that something is missing in the output of a task: a task that was considered complete is not actually complete. (ii) Under qualification was the second most significant cause of wasted time on defective information in Projects A and C. (iii) Insufficient communication was the third most significant cause in Project A.
[Figure 10.27 Normalized Waste Time on Defective Information per 50 Engineering Weeks and the Corresponding Causes: bar chart of wasted time (hours per 50 engineering weeks) by cause, including under processing/errors/lapses, under qualification, over processing, interface design, and schedule pressure (unrealistic schedule), for Projects A, B, and C.]
(2) Discussion
Although Projects A and C similarly suffered from (i) under processing/errors/lapses, the mechanisms by which they wasted time were different. Project C's case was specific to suppliers (figure 10.28). Because Company Y sells key components to Company Z, Company Y has no reliable communication channel with user companies. This is critical for them, because how their products are used differs among users, and many users use the products in ways the designers have never taken into account. For these reasons, designers in Company Z do not know every aspect of the specifications of their products exactly. This is similar to a case in which the user of a key uses it for opening bottles: the user may complain about a new key that cannot be used for his or her unique purpose. Similar cases are prevalent in Company Z's market. Company Z sets specifications of new products without completely covering every use case, and Company Y sets more detailed specifications based on the target specifications given by Company Z. Company Y's engineers ask questions when they encounter problems, and this usually ends with them finding that the task is more complicated than they thought.
[Figure 10.28: the development team in Company Y (supplier of key components) has no reliable communication channel with Company Z (Company Y's customer).]
• The value stream mapping method improved in this research was applicable to quantitative measurement of the nine waste indicators in all three projects.
• There was no need to add new types of waste indicators.
• Three waste indicators (over processing, rework, and defective information) were more important than the others in terms of wasted engineering hours. Motion was also significant in terms of wasted time, but analysis of its causes revealed that trying to reduce time spent on motion is not likely to improve product development processes significantly.
• The root-cause analysis diagram was helpful for quickly identifying causes for the occurrences of waste indicators.
• Quantitative analyses of causes for waste indicators showed different patterns among companies and projects, proving that this methodology is helpful for a company to identify its specific problems.
• Additionally, the empirical idea, "Time to solve a problem increases exponentially as time goes by," was valid in all projects.
CHAPTER 11: RESULTS OF THE RESEARCH INVESTIGATIONS 2:
ANALYSES ON INVENTORY OF INFORMATION
[Chart: Comparison of Weekly Occurrences of Inventory: occurrences of inventory per engineering week for Projects A, B, and C (values of 5.9, 5.7, and 1.4 across the three projects).]
11.3 AVERAGE INVENTORY PERIOD
Figure 11.2 compares the average periods of inventory in the three projects. For example, in Project A, once a task was stopped, it took twelve days on average before it was restarted. This period is especially important in contexts in which risks are high (see chapter 8).
[Figure 11.2 Average Periods of Inventory: average inventory period in engineering days for Projects A, B, and C (12, 9, and 5 days; Project A: 12 days).]
11.4 TOTAL INVENTORY TIME
Figure 11.3 compares the total inventory time per engineering week in the three projects. Project A had five times more inventory time per week than Project C, although the sizes of the two teams are similar (Project A: six engineers, Project C: five engineers) and they were both in the same development phase.
[Figure 11.3 Total Inventory Time per Engineering Week: 64 engineering days for Project A, 30 for Project B, and 13 for Project C.]
11.5 IMPLICATIONS
The result of total inventory time showed that Project A had a substantial amount of inventory time, indicating that the team suffered from low throughput. This situation resembles that of manufacturing processes in the 1980s, in which factories had huge amounts of work-in-process inventory. Goldratt (1984) attributed that situation to the inappropriate performance measurements that were prevalent then, claiming that measuring the efficiency of machines did not improve productivity but led to unsynchronized production processes with huge amounts of inventory. In his theory, TOC, he recommended measuring inventory instead of the efficiency of machines.
Project managers tend to become uneasy when they see their people idle, like a plant director who sees workers taking rests. On the other hand, idle information is not as conspicuous as idle engineers. Rather, project managers may be blamed if they give their engineers fewer tasks than they can work on. Goldratt's suggestion was paradoxical in the manufacturing world twenty years ago: because measurements focused on the utilization levels of machines, huge amounts of inventory received little attention although they indicated low throughput of the production processes. Today's product development organizations may be in the same situation. Engineers are kept busy because their utilization level is monitored. On the other hand, idleness of information, which gets rotten with time, remains significant because it is not monitored.
The results shown in figures 11.3 and 10.1 back up this idea, especially in Project A. In Project A, the identified waiting of engineers was only four hours in fifty weeks. On the other hand, information was waiting (as inventory) for sixty-four engineering days per engineering week. Projects B and C showed the same tendency, although not as strongly. These results imply that today's product development processes are in a similar situation to manufacturing processes twenty years ago. The Toyota production system has seven wastes (Ohno, 1978). TOC has only three metrics: throughput, inventory, and operating cost (Goldratt, 1984). The Toyota production system tries to control both waiting and inventory, while TOC ignores waiting. The results in figures 11.3 and 10.1 imply that today's product development activities are as premature as manufacturing was a few decades ago, so that applying all seven wastes to product development is too early. The more detailed analysis of inventory in 11.8 reveals striking similarities between manufacturing processes of a few decades ago and Project A's development process.
11.6 CLASSIFICATION OF IDENTIFIED INVENTORY
The identified inventory of information falls into the following thirteen types:
(1) Type 1: Taking care of a more urgent task in the project
(2) Type 2: Switching to a higher-priority task outside of the project
(3) Type 3: Waiting for information from another task
(4) Type 4: Review/ testing work
(5) Type 5: Day off
(6) Type 6: Maintenance of Documents
(7) Type 7: Rework discovery
(8) Type 8: Other engineers’ availability
(9) Type 9: Downstream engineer’s availability
(10) Type 10: Waiting for an answer
(11) Type 11: Ambiguous information
(12) Type 12: Limited availability of tool/board/system
(13) Type 13: Others
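For bookkeeping during a VSM-based analysis, the thirteen types can be encoded and tallied per engineer. The enumeration below simply mirrors the list above; the tallying step is a hypothetical illustration of how distributions such as those in figures 11.24 to 11.27 could be produced.

from collections import Counter
from enum import IntEnum

class InventoryType(IntEnum):
    URGENT_TASK_IN_PROJECT = 1        # Type 1
    HIGHER_PRIORITY_OUTSIDE = 2       # Type 2
    WAITING_FOR_INFORMATION = 3       # Type 3
    REVIEW_TESTING = 4                # Type 4
    DAY_OFF = 5                       # Type 5
    MAINTENANCE_OF_DOCUMENTS = 6      # Type 6
    REWORK_DISCOVERY = 7              # Type 7
    OTHER_ENGINEERS_AVAILABILITY = 8  # Type 8
    DOWNSTREAM_AVAILABILITY = 9       # Type 9
    WAITING_FOR_AN_ANSWER = 10        # Type 10
    AMBIGUOUS_INFORMATION = 11        # Type 11
    LIMITED_TOOL_AVAILABILITY = 12    # Type 12
    OTHERS = 13                       # Type 13

# Hypothetical occurrences tagged for one engineer while reading a value stream map.
occurrences = [InventoryType.URGENT_TASK_IN_PROJECT,
               InventoryType.WAITING_FOR_INFORMATION,
               InventoryType.REWORK_DISCOVERY,
               InventoryType.URGENT_TASK_IN_PROJECT]
print(Counter(occurrences))  # counts per inventory type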
(1) Type 1: Taking care of a more urgent task in the project
This occurs when a group/ an engineer needs to stop working on a process and switch to
another process in the same project because the latter process has higher priority. Typical
appearance of this in VSM is shown in figure 11.4.
[Figure 11.4 Typical Appearance in VSM of Inventory Caused by Taking Care of a More Urgent Task in the Project: Task A (or Task B) is interrupted by a more urgent task.]
Figure 11.5 I328, Inventory of Information at the Center of This Figure, was Caused by an Interrupting Event from Inside of the Project (Engineer Y, Week 11)
(2) Type 2: Switching to a higher-priority task outside of the project
This type is the same as type (1) except that the interrupting processes do not belong to the
project. The typical appearance of this in VSM is shown in figure 11.6.
[Figure 11.6 Typical Appearance in VSM of Inventory Caused by Switching to a Higher-Priority Task outside of the Project: Task A is interrupted by work from another project.]
Figure 11.7 Example of Switching to Higher-Priority Task outside of the Project –
System-Level Services Task was Interrupted by a Support Work Outside of Project A
(3) Type 3: Waiting for Information from another Task
This type of inventory is incurred when a task needs information from the task(s) it depends on in order to be processed further. The dependent task may be worked on by the same group or engineer ((3)-1 in figure 11.8) or by a different one ((3)-2 in figure 11.8). In this classification, not only waiting for information but also waiting for hardware prototypes developed within the project is included, because what the software engineers need is information on whether their software works on the prototype hardware.
[Figure 11.8 Typical Appearance in VSM of Inventory Caused by Waiting for Information from Another Task: in case (3)-1 the dependent task is handled by the same engineer; in case (3)-2 it is handled by a different engineer.]
Figure 11.9 Example of Inventory (3)-1 – The Two Tasks Circled in This Figure were
Both Testing Processes.
(2) Example of Inventory (3)-2
Figure 11.10 shows an example of inventory (3)-2. The task in the dashed circle was left
untouched until the downstream task got all the information it needed.
Figure 11.10 Example of Inventory (3)-2 – the Task in the Dashed Circle was Left
Untouched until the Downstream Task Got All the Information It Needed.
(4) Type 4: Review/ Testing Work
This type of cause was separated from type 1 because review/testing tasks appeared often (25 times) throughout the development phase (figure 11.11). Review/testing work has high priority for the following reasons:
- Conducting it earlier reduces rework discovery time.
- Several tasks may be processed based on tentative information from the task yet to be
reviewed.
- Some types of reviewing involve several busy key-players, causing difficulty in making
date changes.
(5) Type 5: Day Off
Although days off are not a waste of engineering hours, they cause inventory of information. Figure 11.12 shows an example of inventory caused by a day off: the XXX DC CAL Design/Implementation task was interrupted by a week off. Since the engineer took a week off then, the total labor hours were zero (see the number in orange in figure 11.12).
Figure 11.12 Example of Inventory Caused by Day Off – PMU DC CAL Design/
Implementation Task was Interrupted by a Week Off.
(6) Type 6: Maintenance of Documents
As is shown in figure 11.13, this type of inventory appears when engineers postpone documentation or its completion. Although Goldratt (1997) argues that documentation should not be done immediately after every task, because doing so can cause delay on the critical path, delaying documentation can incur time spent recalling details and loss of information from memory.
[Figure 11.13 Typical Appearance in VSM of Inventory Caused by Maintenance of Documents: inventory appears when documentation, or its completion, is postponed while other tasks are worked on.]
Figure 11.14 Example of Inventory of Information Caused by Maintenance of
Documenting
(7) Type 7: Rework Discovery
This inventory appears when a task that was perceived to be completed is reworked (figure
11.15). Since this inventory is related to rework discovery time – one of the important
metrics in system dynamics – reduction of this inventory could prevent rework caused by
“upstream changes” (pp. x) and rework caused by “working on unreliable/defective info.”
[Figure 11.15 Typical Appearance in VSM of Inventory Caused by Rework Discovery: Task A is perceived to be complete, and rework is discovered later.]
Figure 11.16 Example of Inventory of Information Caused by Rework Discovery
(8) Type 8: Other Engineers’ Availability
Example from the investigation
Working in a team, a task sometimes needs to be stopped until its output is confirmed to be acceptable to the upstream engineer. This can take a long time because the needed engineer may have some high-priority tasks. Figure 11.17 shows an example of this type. In this case, the testing task needed to be reviewed by another engineer who was taking care of its dependent task. If the needed engineer is to process a downstream task, the inventory of information is classified as type 9: downstream engineer's availability.
(9) Type 9: Downstream Engineer’s Availability
As is shown in figure 11.18, this type of inventory appears when handed-off information is
left untouched for some period.
Example from the investigation
Figure 11.19 shows an example of this type. In this case, it took nine days before the
engineer used the information from the H/W team.
(10) Type 10: Waiting for an answer
As is shown in figure 11.20, this type of inventory occurs when an engineer comes up with a question and it takes some time before he/she gets an answer to it.
[Figure 11.20 Typical Appearance in VSM of Inventory of Information Caused by Waiting for an Answer (for example, during rework or troubleshooting).]
Figure 11.21 Example of Inventory of Information Caused by Waiting for an Answer
(11) Type 11: Ambiguous information
This type of inventory of information occurs when an engineer finds the information at hand too unclear or unreliable to be used.
(12) Type 12: Limited availability of tool/board/system
Example from the investigation
In the example shown in figure 11.23, a limited number of prototype H/W boards were
shared by the software team, causing inventory of information against engineers’ will.
11.7 DISTRIBUTION OF INVENTORY TYPES AMONG ENGINEERS
Figures 11.24, 11.25, and 11.26 compare the distributions of types of inventory of
information by engineers. These results revealed that the distributions differ among
engineers and projects.
[Figure 11.24 Distribution of Types of Inventory (Project A): stacked percentage bars of inventory types for engineers U, F, H, Y, T, M, and all.]
[Figure 11.25 Distribution of Types of Inventory (Project B): stacked percentage bars of inventory types for engineers U, Y, T, F, N, J, and all.]
[Figure 11.26 Distribution of Types of Inventory (Project C): stacked percentage bars of inventory types for engineers KZ, NF, HS, HG, and all.]
[Figure 11.27 Normalized Occurrences of Inventory of Information per 50 Engineering Weeks and the Corresponding Types: bar chart of occurrences by inventory type for Projects A, B, and C.]
11.8.2 AVERAGE INVENTORY PERIODS
Figure 11.28 compares the three projects' average lengths of inventory by type. In Project A, "Ambiguous information (type 11)," "Rework discovery (type 7)," and "Waiting for information from another task (type 3)" were the most outstanding, with averages of around forty engineering days. This is because the embedded software team depends on its upstream process, hardware development. Rework discovery sometimes takes a long time because some types of bugs in software cannot be identified without hardware prototypes, and software engineers sometimes suspend their tasks until prototypes become available. Project B had much shorter average inventory periods for the three types above, mainly because there were no major changes in hardware specifications. Project C was similar to Project B in terms of these three types of inventory, for the project did not involve major hardware changes. In Project C, downstream engineer's availability (type 9) caused the longest average inventory period, 21 days. This implies that Project C is managed in a way that avoids interrupting engineers.
[Figure 11.28 Average Inventory Time by Type (Engineering Days) for Projects A, B, and C.]
11.8.3 TOTAL INVENTORY TIME
The overall inventory time by type is shown in figure 11.29. The two most significant types of inventory in Project A were "Taking care of a more urgent task in the project (type 1)" and "Waiting for information from another task (type 3)." These two types account for about two thirds of the total inventory time of Project A. Project B had the longest inventory time in "Switching to a higher-priority task outside of the project" (type 2) of all three projects.
Project A is in a similar situation (figure 11.31). WIP (a) in figure 11.30 corresponds to information inventory (a), which is type 1 inventory, and WIP (c) in figure 11.30 corresponds to information inventory (c). Therefore, Project A, with high amounts of type 1 and type 3 inventory, is considered to be about as unproductive as the manufacturing process in figure 11.30.
Morgan (2002) listed "over utilization" as one of twelve product development process wastes, arguing that a utilization level over 80% significantly decreases the system's throughput. Project A's lack of synchronization implies that the project chronically suffered from over utilization.
[Figure 11.29 Total Inventory Time by Type (Engineering Days) for Projects A, B, and C: the largest values are taking care of a more urgent task in the project (1,085 days, Project A) and waiting for information from another task (1,007 days, Project A).]
[Figures 11.30 and 11.31: an unsynchronized manufacturing process with work-in-process inventory (WIP (c) is waiting because other parts are missing) compared with Project A's information inventory.]
11.9 ROTTEN AND FRESH INVENTORY
11.9.1 RATIO OF ROTTEN INVENTORY AND TIME
Investigation of rotten inventory was performed based on the idea described in chapter 8. Figure 11.32 shows the percentage of rotten information identified in Project A; only 5% was found to be partially or completely rotten. Figure 11.33 shows how the ratio of rotten inventory of information changed with time; it reveals that the ratio of rotten inventory increased with time, and that almost twenty percent of information got rotten when it was kept for three to four engineering weeks. Figure 11.34 shows the trend line of the relationship between the ratio of rotten information and time. The trend line was the following:
(Ratio of Rotten Inventory) = 0.0054 × (Inventory Time [Eng. Days]) + 0.0356, R² = 0.8094
(Equation 11.1)
[Figure 11.32 Ratio of Rotten and Fresh Inventory in Project A: 5% rotten, 95% fresh.]
[Figure 11.33 Ratio of Rotten and Fresh Inventory versus Inventory Time (grouped into 1–3, 4–7, 8–10, 11–20, 21–30, and 31+ engineering days).]
[Figure 11.34 Trend Line of Changes in Ratio of Rotten Inventory with Time (Project A): y = 0.0054x + 0.0356, R² = 0.8094.]
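The trend line and R² of figure 11.34 can be reproduced with an ordinary least-squares fit. The data points below are hypothetical stand-ins for the (inventory time, rotten ratio) pairs read off the figure; the sketch shows only the fitting procedure.

import numpy as np

# Hypothetical (inventory time [eng. days], ratio of rotten inventory) pairs.
days = np.array([2, 5, 9, 15, 25, 40, 60, 80])
ratio = np.array([0.04, 0.06, 0.08, 0.12, 0.17, 0.25, 0.36, 0.47])

slope, intercept = np.polyfit(days, ratio, 1)  # linear trend line
predicted = slope * days + intercept
r_squared = 1 - np.sum((ratio - predicted) ** 2) / np.sum((ratio - ratio.mean()) ** 2)
print(f"y = {slope:.4f}x + {intercept:.4f}, R^2 = {r_squared:.3f}")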
11.9.2 RATIO OF LOST VALUE IN ROTTEN INFORMATION
Rework Ratio can be calculated with the following equation:
Rework Ratio = (Time Spent on Rework)/(Time Spent on Original Work)
(Equation 11.2)
Figure 11.35 shows the relationship between rework ratio and inventory time of all the
rotten information in Project A; there was no strong correlation between them. The
average rework ratio was 53%.
[Figure 11.35 Rework Ratio versus Inventory Time for Rotten Information in Project A (average rework ratio: 53%).]
11.9.3 MONTHLY INTEREST RATE CALCULATION
Because the ratio of rotten inventory increased linearly with time (equation 11.1), and the rework ratio had no correlation with time, inventory of information can be considered to carry an interest rate. The monthly interest rate can be calculated with the following equation:
0.0054 [/Eng. Day] × 21 [Eng. Day/Eng. Month] × 0.53 ≈ 6% [/Eng. Month]
(Equation 11.3)
This implies that if information is kept as inventory for a month, engineers need to work an extra 6% on average to make up for the loss.
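Equation 11.3 is simple arithmetic on the two measured quantities; a minimal sketch of the calculation:

# Monthly "interest rate" of information inventory (equation 11.3).
slope_per_eng_day = 0.0054   # growth of the rotten-inventory ratio per engineering day (figure 11.34)
eng_days_per_month = 21      # engineering days per engineering month
avg_rework_ratio = 0.53      # average rework ratio of rotten information (figure 11.35)

monthly_interest = slope_per_eng_day * eng_days_per_month * avg_rework_ratio
print(f"monthly interest rate: {monthly_interest:.1%}")  # about 6% per engineering month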
[Figure 11.36 Relationships between Ratio of Rotten Inventory and the Corresponding Types.]
CHAPTER 12 FUTURE WORK
Another topic related to the interest rate of information inventory is exploring the relationship between the interest rate of information inventory (X in figure 12.1) and the reduction of the released product's value (Y in figure 12.2). X's causes include market, requirement, and technical risks, while Y's cause is market risk. Therefore, X and Y should be correlated, and X should be larger than Y. Deducing X requires drawing a value stream map; Y can be deduced from sales information, which takes less effort. If the correlation between X and Y becomes known, then X, which is useful for re-designing organizations, can be deduced without the detailed analysis that value stream mapping requires.
[Figure 12.1: the information's value loses X%. Figure 12.2: the product's value loses Y%.]
12.3 SUBSTITUTING TASK INVENTORY FOR INFORMATION INVENTORY
Task inventory consists of idle tasks that are ready to be worked on by engineers. Task inventory can be counted by engineers. It can also be counted with a value stream map; in figure 12.3, the engineer has six inventoried tasks on 2/19. Counting task inventory is easier than measuring inventory time, which was performed in this research. However, unlike in manufacturing, the time needed for a task varies significantly, and engineers tend to start working on the easiest task. Therefore, the number of inventoried tasks may not have the same meaning throughout a project. However, investigating the correlation between information inventory and task inventory may verify the possibility of replacing information inventory with task inventory.
[Figure 12.3 Counting Task Inventory with a Value Stream Map (Project B) – Task Inventory Can be Calculated by Counting Green or Red Lines Crossing a Day.]
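Counting task inventory from a value stream map amounts to counting the idle task intervals that cross a given day. A minimal sketch, with hypothetical task intervals (not data from the investigated projects), could look like this:

from datetime import date

# Hypothetical idle intervals (task, idle from, idle until) read off a value stream map.
idle_intervals = [
    ("Task A", date(2005, 2, 16), date(2005, 2, 23)),
    ("Task B", date(2005, 2, 18), date(2005, 2, 20)),
    ("Task C", date(2005, 2, 19), date(2005, 2, 25)),
]

def task_inventory_on(day, intervals):
    """Count idle (inventoried) tasks whose idle interval crosses the given day."""
    return sum(1 for _, start, end in intervals if start <= day <= end)

print(task_inventory_on(date(2005, 2, 19), idle_intervals))  # -> 3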
12.4 ARE TOC’S THREE METRICS SUFFICIENT?
Analysis of Project A (see chapter 11) revealed that the project's development process is similar to the manufacturing process described in "The Goal" (Goldratt, 1984). This implies that the three metrics of the Theory of Constraints (TOC) – throughput, inventory, and operating expense – may be sufficient for addressing waste in today's product development processes. Verification of this idea requires the following steps:
1. Define "throughput" in product development processes.
2. Test the three metrics for measuring waste in product development processes.
3. Examine whether the three metrics address most of the waste in product development processes.
[Figure 12.4 Measuring Time Spent on Overproduction – A Hatched Process Box Means Overproduction (Same as Figure 7.1).]
12.6 EXPLORATION OF ENGINEER UTILIZATION LEVELS
The value stream maps in this research have all the information necessary for measuring
engineer utilization levels. Measuring utilization levels, which was not performed in this
research, may make it clear how utilization levels affect product development processes.
CHAPTER 13 CONCLUSIONS
• It has been proved that the time per one occurrence of rework increases exponentially as the time spent on the project increases.
• Analysis of inventory of information has revealed that the development processes of the investigated projects turned out to be more or less similar to the unsynchronized manufacturing processes of several decades ago.
• In one of the investigated projects, information got rotten at a rate of 6% a month. This indicates that information inventoried for a month causes about 6% additional engineering work.
APPENDIX I EXECUTIVE SUMMARY
2. RESEARCH PROCESS
2.1 Define nine plus one waste indicators for waste measurement (table A-1).
[Table A-1 Nine Waste Indicators (surviving excerpt):
1. Overproduction of Information (Duplication) – Different people/groups are unintentionally creating the same information.
4. Over Processing – Engineers create information that won't contribute to the value of the product.
5. Motion of People (information hunting, travel, reviews, ...) – People have to spend time on non-value-adding motions.
8. Hand-Off (hand-off inside of project) – Information is handed off with its responsibility between two groups/people.]
2.2 Create the root-cause analysis diagram that is useful for quickly identifying root causes for waste (figure A-1).
[Figure A-1 The Root-Cause Analysis Diagram: about 300 typical root causes in total.]
2.3 Optimize value stream mapping for measuring waste identified by the nine plus
one waste indicators (figure A-2).
[Figure A-2 An Example of the Value Stream Map for Quantitative Analysis: annotations mark inventory, rework, overdue tasks, other commitments, and the identified waste indicators.]
3. RESULTS 1: NINE WASTE INDICATORS
The value stream mapping improved in this research made it possible to measure wasted time in each category of waste (figure A-3). These results revealed that "Over Processing," "Rework," and "Defective Information" are more prevalent than the other waste indicators.
[Figure A-3 Relationship between Wasted Time and the Nine Waste Indicators: normalized wasted time (hours per 50 engineering weeks) by waste indicator for Projects A, B, and C; over processing, rework, and defective information show the largest values.]
Detailed analyses using the root-cause analysis diagram made it possible to identify problems specific to each company and to show how many hours were wasted on them (figure A-4).
[Figure A-4 An Example of an Identified Problem: defective information flows from the hardware team to the embedded software team because the verification process between them is not effective (861 wasted hours).]
One occurrence of rework turned out to take more time near the end of a project than at its beginning (figure A-5).
[Figure A-5 Average Time Spent on One Occurrence of Rework by Week (Projects A and B): average rework time per occurrence (engineering hours) rises from several hours in early weeks to 67 hours in weeks 56–60.]
4. RESULTS 2: INVENTORY OF INFORMATION
4.1 TOTAL INVENTORY TIME
Figure A-6 shows the total inventory time. Project A's inventory time was especially significant: 64 engineering days of inventory time per engineering week on average (the project had six engineers).
[Figure A-6 Total Inventory Time per Engineering Week: 64 engineering days for Project A, 30 for Project B, and 13 for Project C. The numbers of engineers in Projects A, B, and C are 6, 6, and 5 respectively.]
[Chart: Total Inventory Time by Type (Engineering Days) for Projects A, B, and C (the same data as figure 11.29).]
4.2 ROTTEN INVENTORY
Figure A-8 shows how the ratio of rotten inventory increases with time. Figure A-9 shows the relationship between rework ratio and time. The rework ratio is defined by the following equation:
Rework Ratio = (Time Spent on Rework) / (Time Spent on Original Work)
The monthly interest rate of information inventory can be calculated as follows:
0.0054 [/Eng. Day] × 21 [Eng. Day/Eng. Month] × 0.53 ≈ 6% [/Eng. Month]
If information is kept as inventory for a month, engineers need to work an extra 6% on average to make up for the loss.
[Figure A-8 Trend Line of Changes in Ratio of Rotten Inventory with Time (Project A): y = 0.0054x + 0.0356, R² = 0.8094.]
[Figure A-9 Rework Ratio versus Inventory Time (Project A): average rework ratio 53%.]
5. CONCLUSION
Overall
The lean tools and processes developed in this research have proved to be able to
identify problems both peculiar and common to the organizations.
Therefore, these lean tools and processes can deliver to product development
organizations information that leads to continuous improvement of their value-creating
processes.
Nine Waste Indicators
The nine waste indicators were sufficient for identifying and measuring waste in product
development processes.
Among the nine waste indicators, three – over processing, rework, and defective information – were more significant than the others, implying the possibility of reducing the number of waste indicators.
Rework
It has been shown that the time per one occurrence of rework increases exponentially as the time spent on the project increases.
Inventory of Information
Inventory of information was prevalent in product development processes.
Analysis of inventory of information has revealed that the development processes of the investigated projects turned out to be more or less similar to the unsynchronized manufacturing processes of several decades ago.
In one of the investigated projects, information got rotten at a rate of 6% a month. This indicates that information inventoried for a month causes about 6% additional engineering work.
Root-Cause Analysis Diagram
The root-cause analysis diagram was useful for identifying typical root-causes for waste.
Value Stream Mapping for Quantitative Analysis
The value stream mapping optimized for quantitative analysis was applicable for measuring waste using the waste indicators.
APPENDIX II ROOT-CAUSE ANALYSIS DIAGRAM
1. OVERPRODUCTION Root-Cause
2. WAITING Root-Cause
[Root-cause analysis diagram for Waiting: branches include scheduled waiting (waiting time excessively long within the scheduled buffer/slack time, inadequate scheduling technique, statistical fluctuation), engineers' neglect of the schedule (lack of strong enforcement or effective incentives), waiting for hand-off of information (limited resources, system over-utilization, too tight/unrealistic schedules, limited qualification of designers), defective information, and happenings such as unidentified risks/uncertainties, critical design issues, and expediting trivial questions.]
3. TRANSPORTATION Root-Cause
4. OVER PROCESSING Root-Cause
5. MOTION Root-Cause
6. REWORK Root-Cause
[Root-cause analysis diagram for Rework: branches include upstream changes (iteration cannot be eliminated, identifying all interfaces in advance is impossible, poor information quality from upstream phases, premature concept and architecture design), defective information, unplanned iteration caused by unclear division of labor, tardy information transfer (scattered locations, complex organizational structure, outsourcing, functional organization), prototype version confusion (poor work-in-process version management), insufficient/no communication, schedule pressure (too tight/unrealistic schedule, highly competitive market, time-sensitive product), insufficient training, troubleshooting/diagnosis, working on unreliable/defective information, and leaving risks/uncertainties intact (risk-taking strategy, lack of risk management, suppressed risk alarms, reluctance to accept bad news).]
7. RE-INVENTION Root-Cause
[Root-cause analysis diagram for Re-Invention: branches include spatial/structural barriers (project organization as opposed to functional organization, scattered locations/distributed teams, complex organizational structure with redundant functions, outsourcing) and designers' unwillingness to share their expertise (corporate culture in which designers protect their value by not sharing expertise, lack of an appropriate incentive/measurement system that promotes expertise sharing).]
8. HAND-OFF Root-Cause
[Root-cause analysis diagram for Hand-Off: branches include dissemination of requirements, scheduled hand-off, time-sensitive products, work sharing, the intention to level workload, outsourcing, and absence of a task owner.]
9. DEFECTIVE INFORMATION Root-Cause
[Root-cause analysis diagram for Defective Information: branches include poor resource allocation (no leveled scheduling, local over-utilization), limited resources (limited qualification of designers), and corporate culture (reluctance to accept bad news, "kill the messenger," bounded rationality leading to suppressed alarms, risk-taking strategy).]
BIBLIOGRAPHY
1. Allen, Thomas J. (1984), “Managing the Flow of Technology: Technology Transfer and the
Dissemination of Technological Information within the R&D Organization,” The MIT Press,
Cambridge, MA.
2. Bauch, Christoph (2004), “Lean Product Development: Making Waste Transparent,” Diploma Thesis, Massachusetts Institute of Technology and Technical University of Munich.
3. Brooks, Frederick P. (1975), “The Mythical Man-Month,” Addison-Wesley Pub. Co., Reading, MA.
4. C.M. Creveling, J.L. Slutsky and D. Antis, Jr. (2000), “Design for Six Sigma in Technology and
Product Development,” Prentice Hall PTR, Upper Saddle River, N.J.
5. Fleisher, Mitchell and Jeffrey Liker (1997), “Concurrent Engineering Effectiveness : Integrating
Product Development across Organizations,” Hanser Gardner Publications, Cincinnati.
6. Forsberg, Kevin, Hal Mooz, and Howard Cotterman (1996), “Visualizing Project
Management,” John Wiley & Sons, Inc.
7. Graebsch, Martin (2005), “Information and Communication in Lean Product Development,”
Diploma Thesis, Massachusetts Institute of Technology and Technical University of Munich.
8. Goldratt, Eliyahu M. and Jeff Cox (1984), “The Goal,” The North River Press, Great Barrington,
MA.
9. Goldratt, Eliyahu M. (1997), “Critical Chain,” The North River Press, Great Barrington, MA.
10. Kaplan, Robert S. and David P. Norton (1992), “The Balanced Scorecard – Measures that Drive
Performance” Harvard Business Review, Jan-Feb 1992, Cambridge, MA.
11. Kruchten, Philippe (1999), “The Rational Unified Process,” Addison-Wesley, Reading, MA.
12. Lyneis, James (1999), “System Dynamics for Market Forecasting and Structural Analysis,”
System Dynamics Review, Vol. 15-1, 2000
13. Lyneis, James (2000), “System Dynamics for Business Strategy: A Phased Approach,” System
Dynamics Review, Vol. 16-1, 2000
14. McManus, Hugh (2004), “Product Development Value Stream Mapping Manual, Beta Version,” Lean Aerospace Initiative, MIT, Cambridge, MA.
15. Morgan, James M. (2002), “High Performance Product Development: A Systems Approach to a Lean Product Development Process,” Ph.D. Thesis in Industrial and Operations Engineering, The University of Michigan.
16. Murman, Earll, Thomas Allen, Kirkor Bozdogan, Joel Cutcher-Gershenfeld, Hugh McManus, Deborah Nightingale, Eric Rebentisch, Tom Shields, Fred Stahl, Myles Walton, Joyce Warmkessel, Stanley Weiss, and Sheila Widnall (2002), “Lean Enterprise Value: Insights from MIT's Lean Aerospace Initiative,” Palgrave, New York.
17. Ohno, Taiichi (1978), “Toyota Production System: Beyond Large-Scale Production,”
Productivity Press, Cambridge, MA.
18. Oppenheim, Bohdan W. (2004), “Lean Product Development Flow,” Systems Engineering, Vol. 7-4, 2004.
19. Pich, Michael T., Christoph H. Loch, and Arnoud De Meyer (2002), “On Uncertainty,
Ambiguity and Complexity in Project Management,” Management Science, Vol. 48-8, 2002.
20. Repenning, Nelson P. (2001), “Understanding Fire Fighting in New Product Development,” The
Journal of Product Innovation Management, Vol. 18, 2001, pp. 285-300.
21. Shenhar Aaron J, Dov Dvir, Thomas Lechler, and Michael Poli (2003), “One Size Does Not Fit
All”
22. Slack, Robert A. (1999), “The Application of Lean Principles to the Military aerospace Product
Development Process,” Master’s Thesis in Engineering and Management, MIT, Cambridge,
MA.
23. Söderlund, Jonas (2002), “Managing Complex Development Projects: Arenas, Knowledge Processes and Time,” R&D Management, Vol. 32-5, 2002.
24. Sterman, John D. (2000), “Business Dynamics,” McGraw-Hill Higher Education.
25. Stewart, Wendy E. (2001), “Balanced Scorecard for Projects,” Project Management Journal,
March, 2001.
26. Ward, Allen, Jeffrey K. Liker, John J. Christiano, and Sobek II, Durward K. (1995), “The
Second Toyota Paradox: How Delaying Decisions Can Make Better Cars Faster,” Sloan
Management Review, Spring 1995, Cambridge, MA.
27. White, Diana and Joyce Fortune (2002), “Current practice in project management – an empirical study,” International Journal of Project Management, Vol. 20, 2002.
28. Womack, James P., Daniel T. Jones, and Daniel Roos (1991), “The Machine That Changed the World,” Harper Perennial.
29. Womack, James P. and Daniel T. Jones (1996), “Lean Thinking,” Simon & Schuster.