Systems Engineering Agile Design Methodologies

James A. Crowder
Raytheon
Englewood, CO, USA

Shelli Friess
Relevant Counseling LLC
Englewood, CO, USA
Dr. Crowder has been involved in the research, design, development, implemen-
tation, and installation of engineering systems from several thousand dollars up to
a few billion dollars. Both Dr. Crowder and Ms. Friess have been involved in
raving successes and dismal failures not only in development efforts, but in team
building and team dynamics as well. All the failures have a common theme: the
inability of engineers, managers, and teams to respond well to change, whether
changes were due to problems in the development, or changes because the
requirements for the system were in flux. While this certainly was not the only
problem, it was a large contributing factor. The resistance to change has been a
part of not just engineers, but people in general, since humans first began to create
and build. However, the world is changing faster than ever before, and will con-
tinue to change, not just at the same rate, but at an ever increasing rate as time
progresses. The organizations that survive will be those that have technology,
people, processes, and methods that allow for and embrace change as a normal part
of doing business.
This book was written to help engineering organizations understand not just the
need for change, but to suggest methodologies, technologies, information systems,
management strategies, processes, and procedural philosophies that will allow
them to move into the future and be successful over the long term in our new
information-rich, hypermedia-driven, and global environment.
This book is not intended to be exhaustive, but to introduce systems, software,
hardware, and test engineers, as well as management, to a new way of thinking: a
new way of doing business. This includes not just the technologies and organi-
zational structures, but the team dynamics and soft people skills that will be
required to create, attain, retain, and facilitate efficient product teams required for
future engineering development efforts. In short, we have to rethink everything we
have ever thought about how to design, build, install, and maintain engineering
systems. This book is a start along that process.
Contents

1 Introduction
  1.1 Change as a Precept Rather than a Fear
  1.2 The Historical Significance of Change
    1.2.1 The Stirrup
    1.2.2 The Luddite
  1.3 The Modern Design Folly: Engineering Processes and Metrics
  1.4 The Modern Design Folly: Embracing Modern Capabilities
  1.5 Layout of the Book
References
Index
Chapter 1
Introduction
The agile design methodology has become an unavoidable factor in the modern
design paradigm. The formal agile software design process has been utilized since
the mid-1990s. Scrum was introduced in 1995 and Extreme Programming in 1996.
In 2001, the Agile Alliance was formed and established the Agile Manifesto [4],
which states:
• Individuals and interactions over processes and tools.
• Working software over comprehensive documentation.
• Customer collaboration over contract negotiation.
• Responding to change over following a plan.
There was, and still is, much resistance to agile software development. However,
it is now an accepted, modern method for developing software in the new world of
rapid development. Unfortunately, systems engineering design methods have not
kept pace with software, creating schisms between the systems designs and the
final delivered software. A new culture must be created that integrates research and
development (R&D), systems, software, test, and reuse into a modern, agile
methodology that makes change the rule rather than the exception. Agile
engineering is a conundrum of fact, science, folklore, and misconceptions. This
drives the need not just for wisdom, but for well-established methodologies in all
areas pertaining to engineering design. Three areas of change will be discussed
here that are critical to creating an overall agile design methodology.
• New organizations are required to take advantage of new technology environ-
ments. Examples are:
– Domain centric
– Design teams
– Concurrent Engineering
– Systems of Systems Enterprise Architectures
Change has become an unavoidable factor in the modern design process. This is
partly due to constantly changing computer technology and its effect on the design
process [7]. The effect that IS technology is having on the engineer’s tools is
understood. The tools are providing a revolutionary effect on the organization of
the engineer’s work, but what must also be understood are the cultural changes
needed in the design process to match the evolution of the tools [23]. Without this
understanding we become outmoded, our relative productivity drops, and our
engineering becomes increasingly uncompetitive.
Revolutions in human development yield equally revolutionary increases in
human productivity [12]. The first was the Agricultural Revolution in 6000 B.C.,
thought to have been brought about by changing climatic conditions. It was
farming that spawned so much of civilization that we now take for granted,
especially the accumulation of wealth and the forming of a government to protect
that wealth. The second was the Gunpowder Revolution in the fifteenth century
and the development of the cannon. This revolution gave rise to a new military
technology and a centralization of power that led to higher levels of organization
and greater wealth development, specifically the Industrial Revolution. The third
revolution is the Information Revolution (which we are in now), where the raw
materials for wealth development are no longer energy, ore, and muscle, but
computer technology, data, and intellect. The Information Revolution should
increase productivity through higher quality work (and workforce) to reduce
waste.
The trouble with implementing change in the engineering process is that it is
bound to upset someone. It is, in fact, akin to being an unwelcome prophet.
However, avoiding change is hazardous, and often so on a grand scale. The
following examples illustrate the point.
The invention of the stirrup meant that owners of horses had a powerful weapon
that could not be defeated by a soldier on-foot. The stirrup afforded the rider
greater stability and, therefore, greater lethality. Because horses were a commodity
in short supply, this new form of power was soon monopolized. The result was a
new social structure: feudalism. In this structure, the knightly class owned the
horses, and hence the power; the peasant class supplied the materials. In fact, the
word ‘‘imbecile’’ has its origins in the Gothic Latin word imbelle, which originally
was applied to the scorned masses of peasants who did not own horses and were
consequently weak [3].
Failure to implement this new technology explains King Harold’s loss of the
Battle of Hastings in 1066 [39]. If King Harold had possessed the same well-armed
knights as King William, the outcome might have been quite different. King
Harold suffered from being on an island isolated from technological change in
Europe.
Early in the nineteenth century, new labor saving devices invented in England
enabled greater productivity in the textile industry. Ned Ludd and his followers
disapproved on the grounds that it put at risk their livelihood by diminishing their
employment (the loom would replace them all). Their solution was to tour the
countryside destroying any new loom they could find. This solution was short-
sighted, as change was inevitable. The term ''Luddite'' has since become part of the modern lexicon.
After 25 years of systems engineering work with most of the major aerospace
companies, the following complaint has often been overheard from program managers:
Why is our overall productivity the same as it was in the 1900s?
There have been many advances in software development over the past
15 years that have sought to dramatically improve the efficiency of code pro-
duction. However, similar advances have not been seen in systems architecture and
test design to keep pace with agile and extreme programming initiatives. And
while the industry pushes to utilize Object Oriented System Engineering, with
tools like SysML (http://www.omgsysml.org/) and Test-Driven Development [5], we can no longer see system
engineering, software development, and testing as separate organizations and
separate entities within a development project. Today we are faced with market
demands and evolving computer/IS technologies [15]. The modern equivalent to
King Harold’s Folly and the Luddite mentality is the reliance on ‘‘our process’’ and
‘‘its associated metrics.’’ Too often we rely on these processes and metrics to form
the bases for our proposals. Treating each of these design/development elements as
separate is why we still struggle with overall program execution productivity, are
often faced with costs that exceed projections, and end up with a final system that
does not conform to the design requirements and expectations.
As the computer and IS technologies continually improve, and our develop-
ment/design tools become more sophisticated, we tend to apply these improved
capabilities to enhance our proposals; however, the underlying processes and
metrics are utilized to form the basis of the bid. More often than not, the tech-
nologies are applied to current processes, which are then utilized to generate the
same kind of metrics. What we engage in is folly because we fail to realize that
these new technologies allow for a shift in engineering methods (e.g., automated
design, test, documentation, etc.), which changes productivity and, hence, changes
the metrics that form the basis for our bids.
We have seen over the past decade that computer, IS, and collaborative
technologies continually improve exponentially and appear to provide the abilities
to improve long-term productivity. However, long-term productivity has severely
failed to match the advances in technology. Many theories have been posed to
explain this phenomenon: more complex problems, increased customer demands,
increase in data, etc. We see that one of the major long-term problems affecting
engineering organizations is ‘‘metric inflation.’’ This is caused by our continued
reliance on ‘‘tried and true’’ metrics, most of which are ineffective and wrong for
modern systems design/development/implementation, hence causing long-term
problems with our business models.
For most organizations, program performance is measured against the success
of meeting classical metrics (e.g., cost, schedule, line-of-code counts, etc.).
Therefore, if the proposal calls for 300 staff positions (systems, software, test,
hardware, etc.), these positions are staffed, based on these classical metrics. This
occurs without a complete and full productivity assessment against improved
computer/IS/collaborative technologies, but is based on the same tried-and-true
metrics we have been using since the 1960s. This becomes what we will call a
‘‘self-licking ice cream cone,’’ in that the application of improved productivity
tools is offset by conventional staffing metrics.
Inflation occurs through a number of factors; examples include:
• Rather than modifying the engineering processes to account for new technology
advances, more ''process'' is applied to the engineering methods and their use of
technology to ''improve'' the process. This decreases the efficiency of any
improvements and actually reduces the expected gains in quality, since too much
process interferes with the creative process and mistakes are made. We fall too
easily into the ''Process over Productivity'' paradigm.
• Budgets are always fully expended; therefore, work is expanded to fill the
budget and schedule allowances.
Over the long term, over the course of program execution, any savings in the
form of improved productivity (which equate to improved competitiveness) are
hidden beneath overbearing processes and budget
expenditures. For an organization to remain competitive, the engineering metrics
must be modified to baseline them against improved productivity tools and
technologies. This is not to say that process in and of itself is inherently bad. Many
a program has failed because of lack of any engineering processes. However, it
does not follow that more process makes the project better and more productive.
There must be a balance. And the processes that are used must be appropriate for
the project. One size does not fit all when it comes to engineering processes; and
that does not mean tailoring out given sets of processes for small, medium, or large
programs. It means having the right processes and the right metrics for each
individual project.
The engineering methods and processes must evolve to account for new
technologies and methods, and the metrics associated with these changes must also
be evolved and established in order for an engineering organization to remain
competitive. The ability to evolve and remain competitive requires engineering
organizations to challenge their processes and metrics constantly.
We have arranged the book to build up to new methods for Agile Systems
Development. We start by understanding why people are resistant to change,
particularly engineers, and then describe a modern, agile system design method-
ology that is created to help people develop a philosophy of change as a way of
life. The progression of the book is described below:
Chapter 2: This describes why people, and in particular engineers, are resistant
to change, and how we can provide them tools to embrace change as a normal part
of their design process.
Chapter 3: This chapter describes changes that are required within engineering
organizations to facilitate modern design methodologies and processes and what
types of organizational structures actually suppress the modern, agile design
methodologies.
Chapter 4: Here we emphasize the domains that engineering organizations must
master in order to promote and execute Agile System Design methods.
Chapter 5: This chapter describes the types of organizations that are required
for modern system design, those that promote productive behavior. We introduce
new organizational structures (e.g., consensus engineering that facilitates collab-
orative engineering) that are required for future design methods. Also included are
design methods that will be required to drive agile systems designs, and the
transitions needed to get there (e.g., eliminate stove-pipe engineering).
Chapter 6: Here we introduce the new Informational and KM techniques that
provide the capabilities for agile systems designs, including a new automated
design tool, called the Functionbase, which provides a complete agile systems
design process capture and reuse paradigm.
Chapter 7: This chapter discusses the total agile systems design process,
including tools, that allow major increases in quality and efficiency over current
design and test methods.
Chapter 8: Here we wrap up our discussion, again emphasizing the need to
embrace change, and how the methods discussed in the book more easily allow
engineering organizations to embrace change as a normal part of their everyday
existence.
Chapter 2
The Psychology of Change
If you do not change your direction, you may end up where you
are going.
Lao Tzu
If change has always been an integral part of life, why do we resist it so? Why, in
every generation, do we have Luddites? What goes through a person’s mind when
they are informed of or predict change? Is my position safe? What will I have to do?
What will I need to know? Am I capable and confident with new direction? Do I have
any say about this or any control over what is about to happen? Do I really need to
change? Do I have time for this? How am I going to do that and this? How will this
impact what I have already done? As you can see there are many questions that come
up when even the thought of change occurs. It seems obvious that there should be
thought put into change theories as we ask for people and environments to change.
Some key components to encourage change are empowerment and communi-
cation. People need time to think about expected change. As we discussed earlier,
people are good at change in order to master or improve their world or environ-
ment. When people remain agile they are better at being agile. When change is
part of the regular process then one becomes agile. Alternatively, when one is used
to doing something exactly the same way or systematically then it becomes the
way one likes to operate.
Why, in particular, do some engineers not like change? When asking this
question it seems obvious to consider education. What is it that has been required
of engineers in the past and present, and what will be required of them in the future?
Lucena [40] hypothesizes connecting engineers’ educational experiences with
their response to organizational change and offers a curriculum proposal to help
engineers prepare for changing work organizations. As our technology increases
and our work world becomes more agile it makes sense also that soft skills will
become more and more important for engineers. Engineers can organize them-
selves to optimize performance with soft skills.
Trust is the most important factor in change. Trust helps to balance fear, which is
often the root of most resistance. Who in their right mind will blindly make changes
without trusting others? Agile methods increase trust by increasing transparency,
accountability, communication, and knowledge sharing [41]. Iteration/sprint planning
5. Staff—people/human resources.
6. Skills—the right skills, not generic engineers.
7. Style—soft skills that facilitate cooperation and collaboration within the team:
the cultural style of the organization.
there is less guesswork about where management came up with the decision to operate
in that way to get some end result. People experience change often. The facilitator
or QFD can promote healthy change that keeps stress to a minimum.
In ‘‘A Meta Model of Change’’ [55] the author writes about a Meta-Analysis of
many change theories. He found nine common themes that are appropriate for our
discussion. He writes that change starts with an existing paradigm and the nature
of this paradigm will determine if there will be recognition that change is needed.
Next, there is a stimulus and then consideration. The stimulus is the motivator
and the consideration is limited by the observer. He also observes that there is a
need for different viewpoints. The next stage involves what he describes as vali-
dating the need. This stage answers the question: is there enough evidence that
change should occur? Once it is determined that change should occur then one
must prepare, plan, or reengineer. The following step is a commitment to act
followed by what he calls transition; the do-check-act. Here we ask: is the vision
or reengineering meeting the goals and do adjustments need to be made? The next
phase defines the specific results of the efforts to change. The final phase is
enduring the benefits. The change has also produced the ability to change and all
that comes with the finished product.
How many more choices will an individual see when they feel empowered
versus ordered? Those are extremes, but they help bring the point home.
Empowerment gives people a sense of ownership and an opportunity for creativity.
People can come up with new ideas and solutions to problems and are more likely
to be motivated to work toward those solutions when they have some personal
investment in them.
Chapter 3
The Modern Design Philosophy: Avoiding
Change is Perilous
While most upper management would probably get pitchforks and torches and
come after a program manager for such a blasphemous statement, the simple fact is
that human nature is what it is: if there is no stimulus that drives us to change,
organizations continue to conduct business as usual and ignore opportunities to
grow, evolve, and change. The following was actually spoken by a customer to an
organization:
You are a dinosaur that is going extinct, you just don’t realize it and won’t until you’re
gone.
Government Customer, circa 2002.
The ideal bureaucratic method is associated with Max Weber [54] and the
objective was to define the best types of organizations to achieve company goals.
His method was to design job descriptions and a supervisory hierarchy that was
optimal. This led to job specialization and rigid structures with their respective
shortcomings that we often associate with the term ‘‘red tape’’. Such a system is
most effective for invariant processes, most of which have nothing to do with long-
term development projects. This type of management and team structure is wholly
inadequate for a dynamic development process. Unfortunately, the bureaucratic
organization is the most prevalent, as it is easy to conceptualize. Even more
unfortunate, most organizations consider bureaucracies essential to manage
changes in a complex system, due to the risks involved. They often add process on
top of process, review board on top of review board, assuming more process will
reduce risk and enhance engineering. Neither are true, as change is inevitable, and
change is only risky when managed by a bureaucratic organization. This is not to
say that we should not manage change, however, the ability to handle change
should be an integral part of the overall design and the overall organization, and
not a bureaucratic methodology.
Many modern engineering organizations have tried to move to what is thought
to be a more suitable organizational structure, the Matrix organization. This
organizational structure manages both the product structure and the functional
structure simultaneously, in an attempt to create the ‘‘best of both worlds’’ for
engineering. However, in most cases, this creates a struggle, as each engineer has
two supervisors and a confused chain of command, as it is not unusual for both
structures to be in disharmony. But, the Matrix Organization method is, at least, an
attempt to manage change positively.
We have gone from an environment where change was seen as a source of risk to our
customers, to a new environment where organizations that cannot adapt to change are
seen as the new source of risk.
When you consider the technological changes that have happened over the last
40 years, and more importantly, over the last 10 years, you see an exponential
increase in capabilities, not just in computing power, but in Human System
Interfaces (HSI), network technologies, wireless technologies, miniaturization
technologies (e.g., nanotechnology), energy technologies, etc. For large, multiyear
In her article for the Daily Record, Nicole Black discussed the incredible speed at
which technology is advancing [7]. Her conclusion is that technology is advancing
faster than the legal and social communities can keep up with and ethics
committees and lawmakers need guidance from technology pioneers just to
understand and navigate the ever-changing technology world we live in.
As the velocity of technology increases, and as the need grows for organizations
that are not just adept at change but geared to embrace and use change as an
integral part of their development, we have to move away from classical
management structures. If we look at Matrix Management and its attempts to
manage employees, careers, projects, and functional needs, we find an organiza-
tion that is antithetic to change. In terms of ‘‘managing for change’’, the classical
Matrix Management would be considered an ‘‘Anti-pattern’’. Our current
Integrated Product Team structures are constructed with people from a variety of
disciplines, each of which has a separate matrix manager, possibly a functional
manager from within the project, an IPT lead, and a project manager. There is no
team cohesiveness because every member of the team has cross purposes driven by
a multiorthogonal organizational structure.
The next problem with classical matrix and IPT organizational structures is that
people are not trained with the ‘‘soft’’ people skills to adequately manage an actual
integrated team, where interpersonal team dynamics becomes extremely important
for success. Most organizations try to ‘‘fix’’ this problem with engineering
directives and multiple engineering processes that do not drive engineering, but
hamper and kill creativity, making the team a danger to change. That is not to say
process is not important, but what is needed is processes, technologies, and
methodologies that help manage and embrace change. Also, classical program
management training does not teach the soft people skills necessary to facilitate
teams of agile systems developers. These include skills like communication,
1 The Bible, Old Testament, Genesis 47:23.
2 This name implies all the necessary ingredients for product development have been carefully selected such that the team is adaptive to a changing environment.
(Figure: drivers of change, including new organizations, market demands, communication, increasing quality, and collaborative methods.)
3 Notice we are not invoking ''Moore's Law'', as we feel the need/want for information and knowledge always outpaces Moore's Law.
Chapter 5
A New Organization is Needed: Beyond
Integrated Product Teams
The phrase ‘‘throwing good money after bad’’ can be all too relevant when
developing new engineering. Although it is possible to bring new technologies and
concepts to an organization, there is no guarantee that they will be implemented.
All too often, a new machine is bought only to collect dust.
The starting point is to understand how the engineering organization affects
behavior and productivity. Then we can begin to define an organization designed
to respond to this new environment of continual, rapid change.
Classical methods of organization did not account for behavioral factors. This
was shown with the Hawthorne experiment [27] where worker productivity was
measured under different working conditions. Productivity rose whenever the
conditions changed, whether for better or worse. The conclusion was that pro-
ductivity was also tied to how the workers were responding to the sudden interest
shown in them. Behavior plays an important role.
For the purposes of our discussion here, we will focus on Rensis Likert’s research.
His contention was that productivity would best benefit from developing work
groups that had challenging objectives. Likert defined four systems that describe
the span of organizations, from the classical viewpoint that engineers are a cost to
be controlled, to a modern viewpoint that considers the engineers to be a resource
to be developed.
As the organization matures, the product teams must be free to exercise more
responsibility and migrate toward a System 4. Eventually the teams must advance
beyond System 4, becoming responsible for their own education and motivation,
while their dependency on the parent organization continually lessens.
These considerations have been integrated into Fig. 5.1 to illustrate progressive
improvement in productivity as organizations implement less System 2 and more
System 4 characteristics. Argyris has shown how this migration toward a System 4
management structure does, in fact, improve productivity [25]. There is no doubt
that given the computer hardware, software, and humanware resources, a System 4
organization will outperform a System 2 organization.
Fig. 5.1 Productivity versus organization potential: System 1; System 2 (bureaucracy); System 3 (Integrated Product Teams, IPTs); System 4 (automated, collaborative teams, Theory Z)
Codeterminism: This redefines the relationship between the manager and the
worker, such that the supervisor–subordinate language becomes outmoded. This
does NOT mean the elimination of leaders. It means management responsibilities
are shared and developed by mutually establishing and allocating objectives.
Consistent with this, rewards depend not on individual performance, but on team
performance. The method of implementing this will be described later.
Open Functionbases (a term created for this book, referring to a database of procedures, software code, design algorithms, and other information, as distinct from databases that contain the data used and produced by those functions): These enable members of an organization to participate
in the processes, which facilitate reuse and rapid integration of new members into
teams. This ‘‘openness’’ places an emphasis on continual improvement as it ele-
vates team performance; it is no longer a matter of what you know, but how you
use what you know. It is therefore incumbent upon each team member to browse
each other’s work and to add value for the betterment of the team. The openness is
extensible to include geographically distributed members such that the product
teams are now ‘‘virtual’’.
Automation of Processes: This, including documentation automation, will
transform the amount of spare time an engineer has to pursue continual
improvement. Making recurring engineering trivial will transform the work
environment such that the larger portion of time is spent on invention and
improvement—quite the reverse of today’s environment. There will be other
advantages in that automation enables the following: recurring engineering will be
right the first time, change can be rapidly incorporated, and wasteful rework
efforts are eliminated. This will vex the Luddite as the workload is diminished. But this
attribute of Theory Z is essential if change is to be a natural feature of advanced
engineering methods. Quite simply, change takes time, and time must be made
available to account for it.
The standard IPT is an organization that might aspire to be Theory Z. The classical
organization approach, as illustrated in Fig. 5.2, has been described by Jack Heinz
in ''What Went Wrong'' [29].
The temptation is to superimpose management levels onto the organization
chart and fill in the blocks with the names of the leads. Both of these notions fall
out from the bureaucratic method of design. Furthermore, the lead position floats,
passing hands as the Enterprise Domains mature over the life cycle of the system.
(Fig. 5.2: the classical program organization, headed by a Program Manager and a Program SEIT.)

The Theory Z organization, in contrast, is antithetic to most current management
structures, a point made clear by Heinz's historic perspective [29]. Heinz
looks at a top-down management structure, derived from operations research,
where decisions are made largely at the top. The underpinning assumption is that
managers do not need to understand the engineering they are managing. Conse-
quently, expert designers are excluded from the process of managing development,
and there is a gradual erosion of the technology base with a consequent loss in
productivity.
This loss in productivity affects a growing differential in rewards between the
top and bottom of the organization.
In contrast, consider the obvious success of several commercial computer software
and hardware companies where management is in the hands of competent
designers. We will not name them, but you know who they are.
Instead of the classical organizational structure, an alternative organization,
based on Theory Z, is proposed that is based on knowledge workers and an
information-based organization. The Theory Z organization, portrayed in Fig. 5.3,
is a conceptual model of a human activity system for engineering [12]. As a formal
system, it has the following components:
A Mission: to produce higher quality engineering. This renders the systems
‘soft’ in that it is a continuing pursuit. However, it is also ‘hard’ in that there are
fixed objectives to pursue for each particular product.
Fig. 5.3 The Theory Z organization: customers and users, central operations, legal, systems engineering, research and development, product teams, and specialty support functions, exchanging expectations, funding, objectives, variances, performance data, and field products within an information-based environment

The
product teams must be given a high degree of autonomy to practice their skills on
objectives established with management, with complete control over their own
resources. The Theory Z product team is composed of knowledge specialists who
will resist a management chain of command and establish their own informal
groups if necessary.
The systems engineering group (SEIT), while not controlling the engineering
organization, ensures product integration and collects engineering performance
data to compile overall variance information. It is the variance information that is
required to derive R&D plans for developing new technology and concepts.
In contrast to the bureaucracy, the reformed Theory Z organization is a flat
network of teams. There is no longer a dependency on middle management,
because responsibility has been given back to the expert designer to manage the
processes locally. This approach is consistent with the Codeterminism and open
Functionbase parameters of a Theory Z organization. Another point of detail is the
absence of a Quality Department; quality MUST be built into the engineering
process as will be explained.
Perhaps another way to look at the Theory Z organization is to see it as a
learning organization that replicates and improves on the Matrix organizational
structure. On one axis there are programs decomposed into product teams
responsible for work capital. On the second axis there are corporate critical skills
decomposed into their specializations responsible for developing knowledge
capital. The term capital is used to describe both axes in order to convey the idea
that both work and knowledge are required to make money.
Knowledge capital delineates one company from another and becomes visible
during competition. It can be harder to develop than work capital, as it is the union
of the knowledge of the engineers within the organization. Figure 5.4 illustrates
the union of people within all aspects of a Theory Z organization and Table 5.1
describes the assumptions made by management in the Theory X, Y, and Z
organizations.
Fig. 5.4 The union of people, purpose, and data within a Theory Z organization
Imposing discipline from above onto engineering programs with a large, new
engineering content is fraught with difficulties. One difficulty has been called the
Mythical Man Month [9], where planning makes assumptions about how an
engineering effort is going to proceed, all of which turns out to be a myth.
Another major problem causing the ‘‘Mythical Man Month’’ is the use of
Software Lines of Code (SLOC) to estimate the time required to produce engi-
neering products. In modern engineering environments, with automated code
generators and automated processes, SLOC becomes an outdated estimation
method, being replaced by estimation methods based on the Function Base of the
system (i.e., the number and complexity of functionalities and their interrelated
processes).
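To make the contrast concrete, the following is a minimal sketch (in Python, with weights, names, and values invented for illustration rather than taken from any published estimation model) of an estimate driven by the number and complexity of functionalities and their interrelated processes instead of by lines of code.

```python
from dataclasses import dataclass

@dataclass
class Functionality:
    name: str
    complexity: int   # 1 = simple, 5 = very complex
    interfaces: int   # number of interrelated processes it touches

def functionbase_effort(functions, hours_per_point=40.0, coupling_penalty=0.15):
    """Estimate effort from functional complexity and inter-process coupling.

    Each functionality contributes its complexity rating in 'points';
    every interface it shares with another process inflates that cost,
    reflecting coordination work that SLOC counts never capture.
    """
    total_points = 0.0
    for f in functions:
        total_points += f.complexity * (1.0 + coupling_penalty * f.interfaces)
    return total_points * hours_per_point

if __name__ == "__main__":
    system = [
        Functionality("mission management", complexity=5, interfaces=4),
        Functionality("navigation", complexity=3, interfaces=2),
        Functionality("data services", complexity=2, interfaces=5),
    ]
    print(f"Estimated effort: {functionbase_effort(system):.0f} hours")
```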
Fred Brooks identified this problem back in the 1970s, while managing the OS/
360 software program development for IBM [9]. His conclusions can be sum-
marized as follows:
Optimism is a natural attribute of the design engineer. Therefore schedules and
plans built on historic data tailored by the design engineer are asking for trouble.
Any schedule analysis must be based on the non-finite probabilities of achieving
an objective (i.e., the functionality).
Erroneous Man Month estimates are caused by increasing system complex-
ities and the number of interrelated processes. Stated simply, no amount of effort is
going to change the fact that the gestation period for a baby is nine months. As
illustrated in Fig. 5.5, as the Functional Complexity increases, and the number of
interrelated processes increases, there are some engineering problems that are
actually worsened by increasing the level of effort (adding more engineers).
Clearly, when designing the engineering process, attention must be given to
minimizing the relationships between the individual processes, even if it impacts
the product technology [32].
Fig. 5.5 Schedule (months) versus personnel: as system complexity and the number of interrelated processes increase, adding personnel can lengthen rather than shorten the schedule
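The effect shown in Fig. 5.5 can be sketched with a toy model (ours, not Brooks'): each added engineer divides the partitionable work, but the sequential portion is irreducible and every pair of engineers adds a communication path among the interrelated processes. The parameter values below are illustrative assumptions only.

```python
def schedule_months(base_effort_months, team_size, coupling=0.05, sequential_fraction=0.2):
    """Toy schedule model in the spirit of Brooks' observation.

    base_effort_months  : total effort for one engineer working alone
    sequential_fraction : portion of the work that cannot be partitioned
                          (the 'nine-month gestation' component)
    coupling            : overhead per pairwise communication path
    """
    partitionable = base_effort_months * (1 - sequential_fraction) / team_size
    sequential = base_effort_months * sequential_fraction
    # n engineers have n(n-1)/2 communication paths among them
    communication = coupling * team_size * (team_size - 1) / 2
    return partitionable + sequential + communication

if __name__ == "__main__":
    for n in (1, 5, 10, 20, 40):
        print(n, round(schedule_months(120, n), 1))
```

Run for increasing team sizes, the schedule first shortens and then lengthens, which is the behavior the figure describes.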
2 The term Surgical Team draws on the hospital analogy of a highly specialized team of professionals, each with a specific skill, that cooperates and collaborates to achieve specific goals.
Change is never natural for anyone; therefore, this role is important in ensuring the
team never repeats King Harold’s folly.
The Toolmaker: This person has the role of integrating and interfacing all the
tool wealth of each individual team member, including searching the network for
better implementations that might exist. A diversity of tools complicates com-
munication between team members; conversely, defining a standard set of tools for
the whole organization kills ingenuity and invention, the lifeblood of engineering
change and advance.
The Implementer(s): These engineers design the assemblies. They are spe-
cialists in their respective fields and often must be coerced into communicating
with their peers by the Chief Whip and Technology Champion.
It is very important to understand that these are team functions that describe the
various roles played by members of an effective Theory Z team. As the product
matures, the roles may change hands. For instance, the Marking Implementer
might be the Chief Whip during proposal time; the Control Engineer during
design; the Software Engineer during implementation; the Test Engineer during
the test phase.
Before embarking on the technical impacts of the Theory Z organization, there are
two concerns that require attention:
• The effect that a shift from a command-and-control structure has upon regulation;
and
• The problem of defining the organization’s mission.
To address these, first we review the consensus problem, and then the solution
using the Quality Function Deployment (QFD) process.
The problem appears at least once a year for the engineering organization; man-
agement wants a detailed plan that goes out for 5 years. The trouble is the work,
especially research, has not been performed to be able to see beyond 1 or 2 years,
and the competition is planning the next 20 years.
This is not uncommon in bureaucracies. Remember, the bureaucracy is
designed for invariant systems and business environments. Consequently, vision
and engineering don’t seem to mix. Clearly, a Theory Z organization requires a
clear definition of ''vision'' that is both practical and futuristic; without it there will
be compromise and disagreement. One such definition is given in Table 5.2.
The idea of knowledge capital and Imagineering is not new. Here are some
examples of organizational methods for their implementation.
Gatekeepers: the most common visionaries to be found in engineering; they
link organizations to the technical world at large. Gatekeepers have been
characterized by Tanzik [47] as high performers recognized within their
organizations, with roughly five years of tenure and high status, who tend to be
first-line supervisors maintaining an up-to-date understanding of new
technologies. It has been noted by [14], however, that without management
leadership, much of the organization can remain suspicious and not willing to
listen to the gatekeepers. Consequently, the Gatekeeper’s role is by its nature
informal and underground in organizations where they are not recognized by
management.
3 Although many people believe Walt Disney coined this term, it was actually popularized by Alcoa in the 1940s.
The objectives described in Table 5.2 form the operations paradigm. If there is
no rational, measurable basis for those objectives, then it is unlikely that the
organization will accept the news that the paradigm is broken. Without that basis,
anyone who ventures to suggest that the paradigm is broken will be considered to
be making an unwarranted personal attack, as the paradigm is powerful in defining
the engineering culture. The consequence is paradigm paralysis, where an orga-
nization is blinded by the apparent success of its own culture.
Our Theory Z concepts of Codeterminism and Open Functionbases form an
essential part of the solution to paradigm paralysis. By having a rational basis for
objectives and treating captured knowledge as group owned, the various product
teams as well as the R&D groups can browse and identify shortcomings in the
paradigms and suggest alternatives based on objective facts.
The destinations described in Table 5.2 form the future paradigms. Often there
is no rational basis, it is intuitive, and evidence might even be anecdotal in nature.
This is why a Theory Z organization has an R&D group that is not dependent upon
product areas for approval of their funding. Intuition is anathema to the product
manager. The Theory Z concepts of automation are essential here to allow the
product teams time to communicate with the R&D group so as not to slip
schedules agreed to with the product manager.
QFD consists of seven tools, as described by King [38], that can be used to address
various aspects of the design process. They are system analysis tools that identify
the objectives to a fine degree of detail and consist of:
1. Affinity diagrams for brainstorming.
2. Interrelationship digraphs to ensure the problem is understood.
3. Tree diagrams to hierarchically structure ideas and system components.
4. Matrix diagrams to correlate the trees.
5. Matrix data analysis.
6. Decision program charts to capture failure modes.
7. The arrow diagram to map out the final process.
For a bureaucratic organization, QFD may have limited value and be considered
too expensive to implement, hence its lack of acceptance within engineering
organizations. But for the Theory Z organization, which has to manage change, QFD
provides a rational basis for continual improvement. It provides a discipline for
integrating the voice of the customer with the voice of the engineer. It enables the
team to understand what the customer really wants. It also enables the team to
evaluate and integrate one another’s values to form a solid consensus, thereby
integrating vision and constructively addressing the culture clash problem.
QFD also provides a rational basis for identifying new technology and con-
cepts. The product teams can work in concert with the R&D group to identify the
ideas with the most promise, thereby pre-empting the ''not invented here'' syndrome.
If properly utilized as a systems tool, QFD provides a priori traceability from
the customer’s demands to the final product before design and manufacturing have
started. A corollary to this is that QFD facilitates reuse when implemented using
an automated, hypertext-driven system. Any new customer demand can immedi-
ately be reviewed relative to existing designs and the closest fit found.
Another element of QFD is the management of objectives such that they define
the desired robustness characteristics. Robustness is a key feature in Concurrent
Engineering. It pertains to the resilience of the engineering processes and product
to uncertainties and unexpected changes.
Therefore, the QFD methods, and the Functionbases they generate, are the
principal tools for integrating the organizational elements described in Fig. 5.3.
Figure 5.6 illustrates the balance gained by using QFD as the goals of Con-
current Engineering, which can be defined as ‘‘the systematic approach to the
integrated concurrent design of products and their related processes, including
manufacturing and support’’ [20–22].
The QFD process can be used by any organization, but it promises the best
return on investment when used by an organization that is optimized for change.
First, consistent with Likert’s System 4, QFD facilitates communications
vertically and horizontally, especially when it is accessible over a network.
Second, consistent with Theory Z, the QFD method should facilitate extended
communication such that a consensus is developed.
This is why the load-bearing beam in Fig. 5.6 is titled ‘‘Cooperation and
Discipline’’ and the operative term is ‘‘should facilitate’’. Such a return on
investment is unlikely in a Theory X or Theory Y organization.
The use of the QFD method and Concurrent Engineering is really a move away
from the sequential design approach to an incremental design approach (which
many might call an ‘‘agile’’ approach) [12].
The sequential method, often called stove piping, is used by organizations
composed of tight functional groups. The product design is passed from function to
function, each adding a layer of detail. If adding a layer uncovers an error in one of
the earlier layers, the whole design has to be peeled back, corrected, and then re-
layered.
The incremental approach uses the multifunctional (product) team to pre-empt
design faults, especially when the QFD process is utilized.
The overall effect is to change the normalized cost profiles, as illustrated in
Fig. 5.7. Sequential design (Fig. 5.7a) is alluring in that the initial costs are low,
giving the program manager a false sense of security. Even if cost and schedule are
monitored and on track the schedule never includes the hidden dangers of
sequential design. As the design matures, more problems surface causing cost to
grow exponentially.
Incremental design changes the nature of the cost-incurred curve (Fig. 5.7b).
The method is more expensive initially, leaving the program manager feeling
insecure. The program manager’s insecurity is heightened by the fact that the
incremental design still matures slowly; therefore, less work appears to be com-
pleted early in the program. But unlike the sequential method, the costs stabilize
(i.e., there are few hidden flaws left to be discovered).
Another problem with sequential design is that it commits costs early in the
program. With incremental design, more time is spent on the initial design;
therefore, costs are not committed until the team is certain about the requirements
and their correctness.
During the initial phases, incremental design requires commitment on the part
of the organization and faith on the part of the engineering team, before payoff is
seen in the later phases.
The purpose of describing the matrices is not to give a detailed explanation of the
QFD process or use. That can be found in King’s work. The purpose is to show
how the QFD matrices provide the information that binds together the organiza-
tional team.
Fig. 5.7 a Cost profiles utilizing sequential engineering. b Changing cost profiles through concurrent engineering (percent of total costs committed and incurred across the conception, design, testing, process, and production phases)
In King’s system of matrices, the A1 matrix (often called the House of Quality
matrix), which we illustrate in Fig. 5.8, correlates the Customer’s Demands to the
Requirements [including Quality Attributes (i.e., non-functional requirements)].
The matrix is first completed horizontally to determine the most important cus-
tomer demands and their relative weighting. Then the matrix is completed verti-
cally to determine the most important requirements and their relative importance.
This matrix is completed with the concurrence of the customer, program man-
agement, systems engineering, and the product teams responsible for the engi-
neering. The important attributes of the Quality Matrix are described as follows:
The Quality Attributes act as robustness drivers. They can come from both
customer quality requirements and from statistical analysis to create robustness in
the system design, such that the Enterprise Domains are not unraveled by dis-
persions in the performance of the interfacing Enterprise Domains. Invariably, it is
the Quality Attributes that cause the greatest disruption to engineering and drive
the overall design of the system and Enterprise Domains. As the program matures,
the customer and designer discover the need for better, higher performance. As the
customer demands and functions change and adapt, the software and hardware
components of the system have to change and adapt to accommodate the new
quality attributes and changing functional requirements.
The Quality Matrix provides a roadmap for Continual Improvement.
Current design discipline dictates a pragmatic approach to engineering. Therefore,
requirements are seen only as absolutes, either ‘‘they are met’’ or ‘‘they are not
met.’’ The approach described above is focused on ‘‘how well’’ we meet the
requirements thereby providing information on what requires improvement and
how new technology might improve the quality.
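As a concrete illustration of how the A1 matrix is completed vertically, the minimal Python sketch below (the demands, requirements, weights, and correlation strengths are all invented for this example) computes each requirement's relative importance as the demand-weighted sum of its correlations with the customer demands.

```python
# Hypothetical House of Quality (A1) data: customer demands with relative
# weights, requirements, and a correlation matrix using the conventional
# QFD strengths (9 = strong, 3 = moderate, 1 = weak, 0 = none).
demands = {"rapid retasking": 5, "high availability": 4, "secure data access": 3}
requirements = ["net centricity", "scalability", "security services"]

correlation = {
    "rapid retasking":    {"net centricity": 9, "scalability": 3, "security services": 1},
    "high availability":  {"net centricity": 3, "scalability": 9, "security services": 0},
    "secure data access": {"net centricity": 1, "scalability": 0, "security services": 9},
}

# Complete the matrix "vertically": each requirement's importance is the
# demand-weighted sum of its correlations with the customer demands.
importance = {
    req: sum(weight * correlation[dem][req] for dem, weight in demands.items())
    for req in requirements
}
total = sum(importance.values())
for req, score in sorted(importance.items(), key=lambda kv: -kv[1]):
    print(f"{req:20s} {score:4d}  ({100 * score / total:.0f} % of relative importance)")
```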
The A2 Matrix is used as a crosscheck, ensuring all the Quality Attributes and
Enterprise Domain Functions have been documented and correlated. The A2
Matrix is illustrated in Fig. 5.9.
The A3 Matrix (Fig. 5.10) is used to cross-correlate the Quality Attributes. In
this manner, a notification map can be created, enabling the design engineers to
manage the interrelationships between Quality Attributes.
The first three matrices are important for product (hardware or service) man-
agement. The A4 matrix develops the second level of detail, identifying the
hardware and software components required for Enterprise component design and
implementation. An example is given in Fig. 5.11.
Other matrices can be developed to evaluate new concepts and new technol-
ogies. Each of the three trees, Quality Attributes, Functionality, and Enterprise
Components, are evaluated independently against new ideas.
(Figure: example matrix labels showing quality attributes, such as net centricity, availability, scalability, reliability, flexibility, and integrity, correlated against enterprise domain functions, such as mission management, navigation, command and control, security services, data services, and web services.)
The process
facilitates a rational evaluation to help the team members overcome their preju-
dices, forging the farsighted vision required to define destinations for the
organization.
As will be seen, the QFD matrices form the basis for quality design and an
information database.
The engineering organization’s ability to change rests more than ever on the
education of its engineers and managers, meaning ‘‘to provide them with knowl-
edge and training’’. Education will differentiate the learning organizations that
implement Theory Z from older, more bureaucratic organizations. The process of
becoming a learning organization will include the following:
Implement QFD to capture the knowledge of experienced engineers to form a
computerized Functionbase. Such an implementation will increase the productivity
of all the members of the organization as information becomes more readily
available.
Automate Engineering Processes to free the engineers from mundane tasks.
Historically, automation has been viewed as a threat to one’s livelihood. However,
automation has always resulted in the user moving on to loftier endeavors. In the
case of the engineer, the organizational goal should be to enable the engineer to
spend at least 30 % of his/her working hours on education.
Develop autonomous product teams whose budget is not geared to time and
motion studies, but to the perceived value of their objectives (as determined
through the QFD process). The team must be free to generate their work plans and
resource management plans. This freedom must include management of their tools
and training requirements.
(Figure: the evolution of information. Paper and books hold locked data; Web-based information and office automation on computers hold slowly changing data; electronic information on computer/information systems technology holds fast-changing data.)
Perhaps the simplest example of how power is being shifted by computer technology
is to consider communication rights. The telephone (including cell phones) and
email are indispensable components in getting work done. This becomes painfully
evident in third world countries where telephones, cell towers, and Internet con-
nectivity are scarce; there are no beepers, no voice mail, and no email [25].
In modern companies workers have communications rights, including long
distance and international communication through email and web-based connec-
tivity without asking permission. The engineer can now meet colleagues elec-
tronically and develop relationships without ever meeting them face to face
(although they can meet virtually face to face through video conferencing).
This newfound freedom in communications has provided a plethora of oppor-
tunities. Web sites can be found that are full of any information you might be
looking for. News events are published onto the Web as they are happening.
Streaming video is available across the world on any event.
Suddenly we are living in an electronic village. The shy person is not afraid to
speak up, as they are secure in their electronic village. Also, distance no longer
prevents casual conversation worldwide, as you can simply open up your virtual
cottage window and chat with the electronic passerby. In addition, one can
browse electronic databases from anywhere in the world like walking a trail
through the local park.
Current computer technology enables us to redeem the time we need to discover
what’s out there. It provides a broad horizontal integration of the workplace and
the rest of life.
Within an organization, the horizontal integration is more focused. Just as the
manager replaced the owner at the turn of the century, today the team is replacing
the manager. Teams can be formed on the fly as engineers and scientists find one
another over the network, identify their common vision, and then they can disband
once they reach their objective.
The bureaucracy that integrated the clerks is being replaced by the electronic
network of experts [22] and systems engineers. The middle manager will have a
new role on the team: facilitating the QFD sessions and keeping relative focus for
the team. This is illustrated in Fig. 6.3, where teams replace the manager and
experts replace the clerk.
However, there is another important distinction. The Industrial Revolution
enabled the rapid proliferation of information, and now the Information revolution
enables the rapid reuse of information and knowledge.
There is a need for Functionbases, which capture rules for using information and
knowledge. In this manner a high level of organization can be imposed upon the
information for rapid reuse that provides history, background, and context for the
information. Automation then enables members of teams to use one another’s
functions with ease. Less time is wasted on the detail of how; more time is spent
on the details of why [21]. This is how teams self-manage, replacing managers as
owners of the product.
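What a Functionbase record might contain can be sketched as follows; the field names and the example function are our own illustration, not a prescribed schema, but they show how the rules, rationale, and history travel with the function itself.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class FunctionbaseEntry:
    """One reusable unit of knowledge capital: the 'how' plus the 'why'."""
    name: str
    implementation: Callable          # the procedure or design algorithm itself
    usage_rules: List[str]            # rules for applying the function correctly
    rationale: str                    # why it was designed this way (context)
    history: List[str] = field(default_factory=list)            # change log for corporate memory
    verified_against: List[str] = field(default_factory=list)   # requirement IDs

def coarse_pointing_error(attitude_deg: float, target_deg: float) -> float:
    return abs(attitude_deg - target_deg)

entry = FunctionbaseEntry(
    name="coarse pointing error",
    implementation=coarse_pointing_error,
    usage_rules=["angles must be in degrees", "valid only for small-angle pointing"],
    rationale="simple difference adequate for coarse-mode control loop design",
    history=["v1: initial capture from prototype analysis"],
    verified_against=["REQ-NAV-012"],
)
print(entry.name, "->", round(entry.implementation(10.0, 9.7), 3))
```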
The paper paradigm pervades every aspect of our lives. Efficiency improvements
often lead to the generation of more paper as quality improvements are sought.
Engineering tools like MATLAB® and IDL® serve two important functions: they
can be used as executable specifications for software engineers, and they provide
the design engineer with a sandbox for experiments, as illustrated in Fig. 6.4.
[Fig. 6.4 Integrated analysis and design methodology using automated engineering tools: the automated engineering tools produce prototypes, performance measures, and pseudo-code feeding verification, validation, and production code for the target system.]
(3) Embedded Prose works hand-in-hand with the natural readability of the
engineering tool’s mathematical notation. Prose is easily embedded into the
code to provide further explanation as to what the code means.
(4) Executable Code is the icing on the cake. Because the code from the engi-
neering tools provides performance data, analyzing the performance implicitly
ensures the quality of the code as a pseudo-code for the target system software
engineers.
(5) Independent Review of the engineering tool code is a natural attribute of the method, afforded by the target system's software engineers: the code design is reviewed methodically and independently as it is re-hosted onto the target system. (A brief sketch of engineering-tool code used this way follows this list.)
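The book's examples of engineering tools are MATLAB and IDL; as a language-neutral illustration of the same idea, the sketch below (in Python, with invented names) shows an executable specification whose embedded prose travels with the math and whose execution yields performance data for free.

    import random
    import time

    def moving_average(samples, window):
        """Executable specification: output[i] is the mean of the last `window`
        samples up to and including i. The prose travels with the math, so the
        target system's software engineers read intent and behavior together."""
        out = []
        for i in range(len(samples)):
            lo = max(0, i - window + 1)
            out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
        return out

    # Performance data falls out of simply running the specification.
    data = [random.random() for _ in range(10_000)]
    start = time.perf_counter()
    filtered = moving_average(data, window=16)
    print(f"{len(filtered)} samples filtered in {time.perf_counter() - start:.3f} s")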
Finding tools that work for all the teams can be one of the hardest tasks. Many
tools provide a means of integrating analysis code and real-time pseudo-code.
They provide the teams with a sandbox by virtue of their rich function libraries.
Whether MATLAB®, IDL®, Satellite Tool Kit®, or other tools, they can
provide the engineer with an inexhaustible supply of ideas and experiments,
especially when user groups are taken into consideration.
Designs that once required specialized code on servers can now be produced
using existing library functions on the desktop. This has collapsed analysis time by
up to 90 % in some instances; this productivity improvement is what makes
product improvement feasible. Without it, change becomes a major undertaking
for even the smallest improvement.
[Figure: recurring design process flow linking configuration data, requirements, the engineering design tool, design code, algorithm development, performance measure data, data dictionaries, difference files, simulation files, code, and pseudo-code; the legend distinguishes process flow, graphics, text, hypertext tree, process hierarchy, and dependency links.]
being made). When run, the new scripts that generate the Performance Measure
Data will use the Data Dictionary Tree and the Pseudo-code Simulation. In this
manner the files used by the target system software engineers are automatically
checked through the performance analysis.
On completing the Recurring Design exercise, the target system software
engineers can access the pseudo-code trees and export the necessary data files.
Included are Difference Files that show exactly what pieces of data have changed.
This includes a change log with references back to the design discrepancies that
invoked the change. The log is important in that it provides a historic trail for new
members to understand the design and coding evolution. Without such a trail there
is no corporate memory and old mistakes will be repeated.
Verification Matrices are included to enable any user to trace how the
requirements are being met. In the QFD sense there is also a link to the Perfor-
mance Measures to ascertain how well the requirements are being met. Links are
again placed on the page so the user can ‘‘jump’’ from a requirement to the
particular page within the Functionbase that addresses that requirement.
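A minimal sketch of such a verification matrix, with hypothetical requirement identifiers and links, might be kept as a simple table that the EEN renders as hyperlinks:

    # Hypothetical verification matrix: each requirement points to the Functionbase
    # page that addresses it and the performance measure showing how well it is met.
    verification_matrix = {
        "REQ-101": {"functionbase_page": "fb/tracking/kalman_filter",
                    "performance_measure": "position error (m), Cp = 1.3"},
        "REQ-102": {"functionbase_page": "fb/comms/link_budget",
                    "performance_measure": "link margin (dB), Cp = 1.1"},
    }

    def trace(requirement_id):
        """Jump from a requirement to the page and measure that address it."""
        row = verification_matrix[requirement_id]
        return f"{requirement_id} -> {row['functionbase_page']} ({row['performance_measure']})"

    print(trace("REQ-101"))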
The Algorithm Development tree provides additional information to enable the user to understand the design and the pseudo-code/target system software. Again, links are provided so the user can jump through the Functionbase. This section might also provide the scripts that run the pseudo-code and the notes that explain certain design decisions (e.g., trade studies and sensitivity analyses). Without this, errors will be repeated as team members relearn old lessons.
6.3 Engineering Design Reuse
No one wants to waste time reinventing the wheel, and nobody wants to admit that
errors are repeated on a daily basis. Such a state of affairs is wasteful and lacks
quality. The engineer’s desire is to have a library of code to be reused with all
available design and test information available easily. But before jumping on the
reuse bandwagon, consider the classes of reuse.
Software Libraries contain annotated code for reuse (e.g., freeware). This type of code can be helpful when it works the first time and when proof of correctness is not a requirement. But for most software in most companies, software libraries might be better named "junkyards," as functionality and correctness cannot be guaranteed.
Designed for Reuse Functionbases contain more than just code. Included are
the knowledge of the behavior, interfaces, design models, and proofs. This con-
stitutes the data required to certify the code for use.
Domain-Specific Solutions are complete hardware–software solutions. The
home PC delivered with preloaded operating system and office automation soft-
ware is an example.
Discussed here will be the second class: Designed for Reuse Functionbases. For
this class, the reuse requirements must be carefully defined.
First, knowledge must be defined and included in terms of a mathematical
basis. This is why the reuse junkyard is so hard to deal with, as the mathematical
basis is not known; therefore, its behavior within a system is unknown. The need for a mathematical basis is the reason engineering tools, such as MATLAB, are useful as an executable functional specification. The engineering tool pseudo-code syntax is mathematically based and provides insight into the mathematical basis for the design.
Second, interfaces have to be completely defined to avoid errors that involve
data flow, priorities, and timing errors at both the highest and lowest levels of the
system [28]. Common experience shows interface problems account for 75–90 % of the errors found after implementation of reuse code with conventional techniques. This drives up cost due to all the investment in producing interface control documents (ICDs). Robust error checking is an essential design-for-reuse feature,
as it precludes this type of expensive error. Code correctness must be assured
before compiling.
Third, design proofs are required to preclude system errors. A priori verification of the code precludes expensive system bugs, such as instabilities. The design proof
leverages off the mathematical basis; algorithm correctness must also be assured
before compilation.
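As a hedged illustration of complete interface definition with robust error checking at the source, an interface might be declared with explicit units and ranges and validated before any data crosses it; the message type and field names below are invented for the example.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class AttitudeMessage:
        """A fully defined interface: every field has a stated unit and range."""
        timestamp_s: float   # seconds since epoch, must be non-negative
        roll_deg: float      # degrees, -180 .. 180
        pitch_deg: float     # degrees, -90 .. 90

        def __post_init__(self):
            # Robust error checking at the interface, precluding the data-flow
            # and range errors that dominate reuse failures.
            if self.timestamp_s < 0:
                raise ValueError("timestamp_s must be non-negative")
            if not -180.0 <= self.roll_deg <= 180.0:
                raise ValueError("roll_deg out of range")
            if not -90.0 <= self.pitch_deg <= 90.0:
                raise ValueError("pitch_deg out of range")

    msg = AttitudeMessage(timestamp_s=12.5, roll_deg=3.2, pitch_deg=-1.1)   # accepted
    # AttitudeMessage(timestamp_s=-1.0, roll_deg=0.0, pitch_deg=0.0) would raise.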
Four basic tools are required to provide the attributes for a real design-for-reuse
paradigm:
(1) A distributed hypermedia tool for workstations to allow creation and tracking
of Functionbases.
(2) Engineering design tools like MATLAB, IDL, and Satellite Toolkit for
building pseudo-code simulations and prototypes.
(3) A mathematical equation tool, like Macsyma®, for building and encoding
mathematical models.
(4) A Case tool used to implement the robust code building routines, needed to
preclude errors at their source.
The relations between these tools are illustrated in Fig. 6.6. The reuse meth-
odology integrates the behavior of the two ends of the system. The engineering
tools form a mathematical basis for capturing the system’s algorithmic behavior.
The case tools have a mathematical basis for capturing the target system’s
behavior. Experiments can quickly be run in the sandbox before time is spent
completing the design. A Data Dictionary can be built that complements the engineering tool's functions, and the target code (actual C++, Java, etc.) can be re-ingested into the engineering tool for verification. Tests can then be devised in
the sandbox for running on the target system for final verification and validation.
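A toy sketch of that round trip, under the assumption that both the sandbox (reference) implementation and the re-ingested target implementation can be driven from the same test vectors; names and tolerances are illustrative only.

    def reference_gain(samples, gain):
        """Sandbox (engineering-tool) version: the executable specification."""
        return [gain * s for s in samples]

    def target_gain(samples, gain):
        """Stand-in for the re-ingested target-system code (e.g., ported C++)."""
        return [gain * s for s in samples]

    def verify(test_vectors, gain, tol=1e-9):
        """Run both implementations on the same inputs and compare element-wise."""
        for vec in test_vectors:
            ref, tgt = reference_gain(vec, gain), target_gain(vec, gain)
            if any(abs(r - t) > tol for r, t in zip(ref, tgt)):
                return False
        return True

    print(verify([[0.0, 1.5, -2.0], [10.0, 20.0]], gain=2.0))   # True when they agree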
[Fig. 6.6 Engineering design sandbox and operational system development: engineering design tools and algorithm/equation generation feed encoded math models and CASE tools, which produce software functional models for target system software, test, and evaluation.]
In between these two life cycle ends is the Case Tool to bridge the gap between
the functional architecture and the resource architecture of the target system.
The functional architecture is designed by function hierarchies (called FMaps) used by the engineering tools and type hierarchies (called TMaps) implied by the engineering tools and used by the Case Tools. Three primitive control struc-
tures are used. There is one for defining dependent relationships, one for defining
independent relationships, and one for defining decision-making relationships. A
formal set of rules associated with these is used to remove design errors from the
maps. Because the primitive structures are reliable and because the building
mechanisms have formal proofs of correctness, the final system is reliable. Fur-
thermore, all modal viewpoints can be obtained from the FMaps and TMaps (e.g.,
data flows, control flows, state transitions, etc.) to aid the designer in visualizing
the design.
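The following sketch is loosely inspired by that description and is not the formal FMap/TMap notation itself; it shows a function hierarchy node restricted to the three primitive control structures, with a simple rule check applied before a map is accepted.

    from dataclasses import dataclass, field

    # The three primitive control structures named in the text.
    DEPENDENT, INDEPENDENT, DECISION = "dependent", "independent", "decision"

    @dataclass
    class FMapNode:
        """A node in a function hierarchy, decomposed by one primitive structure."""
        name: str
        control: str = ""                              # one of the three primitives; "" for a leaf
        children: list = field(default_factory=list)   # child FMapNode objects

    def check(node):
        """A simple formal rule check: every decomposed node must name one of the
        three primitives, and a decision must offer at least two alternatives."""
        if node.children:
            assert node.control in (DEPENDENT, INDEPENDENT, DECISION), node.name
            if node.control == DECISION:
                assert len(node.children) >= 2, node.name
        for child in node.children:
            check(child)

    fmap = FMapNode("track_target", DEPENDENT, [
        FMapNode("estimate_state"),
        FMapNode("select_mode", DECISION, [FMapNode("search"), FMapNode("follow")]),
    ])
    check(fmap)   # raises AssertionError if a construction rule is violated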
The engineering tool's reuse value is in its mathematical basis. The reuse value of the Case tools is founded in their separation of functional and resource architectures. Once the functional architecture is defined, it can be used on any target system whose resource architectures are fully reusable by the engineer, when they match the engineering tool's functions. Put another way, reuse requires two Functionbases: the engineering tool's functions and the case tool's functional architectures that capture the system behavior. These two Functionbases capture all the information required to completely specify the system, no more, no less.
All that is required is the case tool that integrates and analyzes these architectures.
This is illustrated in Fig. 6.7.
The analyzer is used during the definition of the TMaps and FMaps to test for
consistency and logical correctness before placing the maps in the library. Tem-
plates of the particular target system are built and populate the resource archi-
tecture library. The Resource Allocation integrates templates from the library and
then automatically generates the required source code. Run-time performance
analysis of the code can be verified locally to ensure it meets the constraints of the
target system (e.g., timing).
A synergism is realized by using the engineering tools and the case tools together.
They provide a design process with built-in quality. When combined with the QFD
methodologies, this paradigm provides a framework for automation, aiding the
organization in defining and implementing process improvements [37, 46].
The Electronic Engineer's Notebook (EEN) requires no coaching to use and includes tutorials to aid understanding.
The objective is to enable complete reuse of the Functionbase by a first time
‘‘functional stranger.’’ The architecture required to support the multi-functional
approach is illustrated in Fig. 6.8.
The EEN paradigm has some important attributes:
Process Objects are directories containing all the files and directories required
to complete a discrete design or analysis. The object is a collection of linked text,
graphics, and applications.
Transparent Network Objects enable team members to browse one another’s
Process Objects. Virtual documents can be built and printed if required, using links
that traverse the network to integrate Process Objects.
Process Automation is provided using a scripting language. The EEN can be
taught the process such that designs are automated.
Functionbase Security is provided to ensure data integrity is not violated.
Anyone who understands the above should see that this is infinitely doable
utilizing today’s Web Services and Java Scripting, coupled with Office Automa-
tion Tools that include hyperlinking capabilities.
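To make this concrete, here is a sketch of a Process Object as a directory of linked artifacts served over ordinary web technology; the paths, fields, and URL are hypothetical.

    import json
    from pathlib import Path

    def create_process_object(root, name, links):
        """A Process Object: a directory holding the files for one discrete design
        or analysis task, plus a manifest of hyperlinks to related objects."""
        obj = Path(root) / name
        obj.mkdir(parents=True, exist_ok=True)
        (obj / "notes.txt").write_text("Design notes and observations go here.\n")
        (obj / "manifest.json").write_text(json.dumps({
            "name": name,
            "links": links,             # URLs into teammates' Process Objects
            "security": "team-read",    # Functionbase integrity / access control
        }, indent=2))
        return obj

    po = create_process_object("een", "antenna_gain_analysis",
                               ["http://een.example/objects/link_budget"])
    print(po / "manifest.json")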
To support a team, the EEN must contain a broad range of user functions. Two
classifications can be made: engineering functions that support the program and
knowledge management functions that support the engineer. These are, of course,
correlated, as illustrated in Fig. 6.9.
The major categories for Required Engineering, shown in Fig. 6.9 are:
Interactive Experiments refers to the process whereby an engineer can create a
‘‘living’’ notebook in an online environment to retain analytical results and
interleave comments and observations. This serves as a replacement of the classic
Engineer’s Notebook. This is accomplished in a real-time environment while
analyses are actively being performed. This also allows creation of hypertext links
to any other relevant material to provide a dynamically growing structure of cross-
linked reference material. Analysis differs from the sandbox only in terms of rigor.
Recurring Engineering refers to engineering tasks such as performance anal-
yses, design analyses, and the measurement of performance for quality assurance,
all of which can be automated.
Non-recurring Engineering—there are three domains. First, the Functionbase
that contains the QFD matrices governing the integration of the organizational
elements and the incorporation of new technologies. Second, resource planning
requires worksheets, networks, and analyses essential to the determination of the
work plans, the integration and management of resources, and the incorporation of
improvements. Third, the design phase, which comprises all those activities performed by the engineer, such as building a Functional Architecture Library.
Publications are the release of engineering information, requiring navigation paths to select subsets of that information. This can be performed in the knowledge management domain (Web Services) or can involve Case Tools, and may be published in electronic media form.
The major categories for Knowledge Management Functions, shown in Fig. 6.9
are:
Data Management involves integration of both graphics and text into a
hypermedia Functionbase. Data access must be capable of being automated.
Function Management involves the development and structuring of procedures.
An example is code management (Configuration Management), where code should
be captured within a tree and executed directly (i.e., through a button on the
screen).
Linked Data Structures are methods of data organization, which consistently
allow the monitoring and modification of data interrelationships and interdepen-
dencies. The structures are composed of trees that form objects and navigation
paths across the trees.
Single File System means a data item exists in one place only. There are no
electronic copies (except for backup), as this would compromise the Functionbase.
All processes refer to the one copy.
Automatic Notification is the ability to automatically notify a designated user,
or list of users, when a particular, selectable, event has occurred. This is used when
a browser leaves notes and comments, or when data someone else is dependent on
is changed by the originator.
Quality Assurance ensures engineering is not released until all Quality Char-
acteristics have been verified. Because the QFD established the minimum per-
formance requirements, all that is required is the knowledge management system
within the EEN to check that each Process Capability Index (Cp) parameter is
greater than one (Cp will be explained later, see Fig. 7.2). Anything less would
indicate nonconformance.
Tool Interface is the ability to invoke Case Tool or Engineering Tool programs
by spawning a separate process. This includes the ability to interface with the
program by sending input to and receiving input from that program, from within
the EEN. The data so transmitted (and stored in the EEN) must be both alpha-
numeric and graphical. This program should be capable of being run in an inter-
active mode as well as batch.
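A minimal sketch of the Single File System and Automatic Notification functions described above, with an invented event model: one authoritative copy of a data item notifies its subscribers when the originator changes it.

    class DataItem:
        """The single authoritative copy of a data item; dependents subscribe to it."""
        def __init__(self, name, value):
            self.name, self.value = name, value
            self._subscribers = []

        def subscribe(self, user):
            self._subscribers.append(user)

        def update(self, new_value, originator):
            self.value = new_value
            for user in self._subscribers:
                print(f"notify {user}: {originator} changed {self.name} to {new_value}")

    mass_budget = DataItem("payload_mass_kg", 118.0)
    mass_budget.subscribe("thermal_team")
    mass_budget.subscribe("structures_team")
    mass_budget.update(121.5, originator="propulsion_team")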
The Need to Use Metaphors: As with any paradigm shift, Knowledge Management has been received with skepticism. That the Knowledge Management paradigm is an improvement is more readily perceived when it is packaged within a familiar metaphor, hence the Electronic Engineer's Notebook name. Once engineers begin to experience it, and management sees it, the community will accept it as a productivity booster.
As a Life Cycle Tool: Knowledge Management will gain acceptance across the
industry, but not without some pressure being applied to the engineers that use it.
This can be attributed to the general reluctance to change we exhibit as humans.
Making the Knowledge Management paradigm broad in application will help in
that an engineer will be able to use it for a task that is personally comfortable (e.g.,
making viewgraphs). Once on the learning curve, the engineer will grasp some of
the other more subtle aspects of the paradigm with growing experience.
Advancing Communications: Being able to pass reusable knowledge to a peer
will have advantages on both the Intranet and Internet. Whole trees are treated as
objects, and can be shipped out to teammates in organizations in any geographi-
cally diverse location, or are used in conjunction with a Virtual Development
Environment. The advantage this provides is that teams become virtual teams. The teams will appear to operate "elbow to elbow" even though they seldom
meet ‘‘face to face.’’ The hypermedia aspect afforded by the Web and by Web
Services makes this process doable today.
Chapter 7
Agile, Robust Designs: Increasing Quality and Efficiency
Figure 5.6 implied that quality is a matter of balance. Another way to view this is to consider three principal elements in tension within engineering: People, Profit, and Process, illustrated in Fig. 7.1.
Conventional management is invariably focused on profit. This has been described by Walton [52] as a management disease in terms of lack of consistency of purpose, emphasis on short-term profits, management by fear, and management mobility that creates prima donnas and dissolves commitment. All of these concepts are antithetical to Theory Z.
Focusing on Process is the subject of the DoD Quality Master Plan, a strategy intended to drive continuous improvement at every stage of a program. Its objective is to combine management techniques, improvement techniques, and specialized tools into a disciplined approach to process improvement. It states: "We have always managed the product to be in conformance to requirements, but we have not managed the processes that produce the products." W. Edwards Deming [52] stated it this way: "do not manage the outcome;
manage the activities that produce the outcome." This is consistent with a Theory Z organization in that team members must be disciplined to browse the Functionbases to identify improvements before the product is made: quality is built in a priori. This is in contrast to the sign-off method, where peer review is post-priori: the discipline of inspecting quality into the product seems easier, but it is always too little, too late.
Historically, the focus was on People through the use of formal organizations
and mentorship. Young engineers are ‘‘mentored’’ by more experienced engineers,
ensuring the continuance of product quality. The problem with mentoring is that mentors and their protégés may have very little reuse if their codes and designs are not captured in context with all of their information in one complete Functionbase.
[Fig. 7.1 The tension among People, Profit, and Process: reward and investment, mentorship, application-specific code improvement, and the trade between correct software, production speed, and zero-cost, quick-and-dirty solutions.]
Correct software is a challenge. Historically, programmers have depended upon
fast compilers and debuggers. With faster computers it is tempting to depend
extensively on the machine to debug code. However, ‘‘bug’’ is a euphemism for
error and constitutes wasted effort. The objective must be to write provably correct
code in order to both improve quality and reduce cost.
A statement often heard in design is, ‘‘if you want it really bad, you’ll get it
really bad’’, meaning that speed generally kills quality. Another similar statement
is, ‘‘we don’t have time to do it right, but we always have time to do it over’’. The
conclusions would seem to be that you could have either quality or speed, but not
both. This describes the tension between production speed and production quality.
In either case, the common denominator is waste. Reduced waste can be correlated
with both higher quality and higher speeds. To this end we have defined quality as
a process–product dual.
Process Quality is achieved through the minimization of waste and ‘‘loss to
society’’ through measurement and continual process improvement.
Product Quality is realized through the features and characteristics of a
product, or service, which bear on its ability to meet and exceed user expectations
(e.g., few discrepancies in our software).
life expectancy. That means minimizing the variance of normal energy and
exceeding the customer’s expectations, not just conformance to a specified
requirement. In short, ‘‘good is not good enough’’.
In a more general sense, there is requirements-pull and technology-push. The
organization that responds only to requirements, works on the basis of require-
ments-pull; vision is nearsighted and management is by objectives only; confor-
mance to specification is the predominant concept and quality is inspected into the
product; technology is generated only to satisfy requirements and R&D groups
must have funding approved by the program community.
The organization that responds to exceeding expectations is sensitive to tech-
nology-push; farsighted vision looks for new technology and concepts to integrate;
products have quality because their processes have quality; R&D groups are
looking to change paradigms from the outside of programs.
This was the farsighted goal of the Air Force's R&M 2000 Variability Reduction Program (VRP) [35], illustrated in Fig. 7.3. Whereas most programs
stop improving the design once conformance to the specifications is met, R&M
2000 calls for continual improvement, requiring the engineering process to exceed
conformance [44]. In fact, the Statement of Work is to include a target for the
‘‘process capability index, Cp’’ defined as:
Cp = (Specification Range) / (Process Range (6σ))
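A small sketch of the Cp calculation as defined above (specification range divided by the 6σ process range), using made-up measurements for a hypothetical quality characteristic:

    import statistics

    def process_capability_index(measurements, spec_lower, spec_upper):
        """Cp = specification range / process range, with the process range taken as 6 sigma."""
        sigma = statistics.stdev(measurements)
        return (spec_upper - spec_lower) / (6.0 * sigma)

    # Hypothetical quality characteristic: latency samples against a 10-50 ms specification.
    samples_ms = [28.1, 30.4, 29.8, 31.2, 27.9, 30.0, 29.5, 30.8]
    cp = process_capability_index(samples_ms, spec_lower=10.0, spec_upper=50.0)
    print(f"Cp = {cp:.2f}")   # Cp greater than one indicates the process fits within the spec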
With the variance computed and the sensitive parameters identified, SPC can be
applied for program management to remotely monitor the Quality Characteristics
identified during the QFD process development. Not all parameters will be identified for reduction; some may simply be present [6]. It is important to note that the parameter under statistical control is a moving target (it should be changing); this does not mean the process is out of control. As has been explained,
engineering design means taking an idea, implementing it, then turning it into
reality. There is always a lot to be learned along the way, where experimental data
are accrued and lessons learned. We are never certain about the outcome of our
assumptions at the beginning of the design. ‘‘How do I know what I think until I
see what I say’’, is one way of describing this, a line attributed to the Harvard
psychologist Jerome Bruner. The purpose is to get the design moving, focusing on
the most important objectives determined during the QFD process.
The Software Cleanroom is a concept that integrates the product teams, infor-
mation technology, and the demand for increasing quality. It is designed to fulfill
the long-term vision: to develop and integrate new technologies into a software
development program. The quality goal is to increase the engineer’s productivity
tenfold over time and make continual improvement a reality.
Cleanroom software engineering is a concept that integrates statistical quality
control into engineering. There are three attributes defined for the cleanroom
discipline:
1. A design methodology that prevents defects in preference to correcting them;
2. A design that is incremental in preference to sequential; and
3. Testing that is statistical, based on usage (the Use Cases), in preference to exhaustive coverage testing.
The first attribute is provided through the engineering tool-case tool symbiosis.
The second is achieved through the integration of the QFD matrices, the sandbox,
and the target system. Monte Carlo simulations and a subset, the Use Cases, run on
the target system, provide the third attribute.
The cradle-to-grave view is illustrated in Fig. 7.4. In addition to the software team,
there is an integration team and the test (e.g., V&V) team. The objective is for the
software product teams to design, build, and test their software remotely from the
target system. The Test team validates the implementation using the identical
Quality Characteristics drawn from the QFD database and the Use Cases. The
quality of the code is measured against the proportion of the Use Cases, or User
Scenarios, the implementation passes. This form of quality measurement has little
relation to system reliability and dependability of the target system code. Based on
the User Test results, process improvement requirements can be identified.
[Fig. 7.4 The cleanroom flow: Quality Function Deployment objectives drive the engineering sandbox and CASE tools, which generate target templates, Monte Carlo test generation, and measurements against the target system.]
For example, if the User Tests reveal timing frames are being ‘‘blown’’, the
required improvement will be to the Resource Architecture Library, referenced in
Fig. 6.7.
Discipline in the cleanroom will result in corrections being made to the engi-
neering process, not the product. This ensures the lesson is truly learned and that
the error is not repeated.
The QFD A1 matrix is the cornerstone of the cleanroom and constitutes the first
increment in the design process. The A1 and A2 combined provide the bases for
measuring design progress through the Quality Characteristics. Using the reusable
libraries in the Functionbases, the software components in the A4 matrix can
change rapidly. Progress is not measured relative to the number of A4 components
completed nor to the lines of code produced. Process is monitored using Cp.
Deriving stable A1–A2 matrices becomes important as it enables clear observation
about the quality of the software process. Without it, low process quality will be
lost in the noise caused by ever-changing specifications.
The objective of the software cleanroom is to be able to certify the system
software. This is achieved by using SPC to monitor the Quality Characteristics and
Use Case statistics. Certification is achieved once conformance has been proved
(i.e., Cp = 1). Once the software has been qualified, it can be placed in the
Functionbase reuse library, along with all the relative materials (e.g., designs,
requirements, etc.) that compose a Functionbase.
Many of the processes discussed here are facilitated through Commercial Off-the-Shelf (COTS) software products that are currently available. Several versions of the Electronic Engineering Notebook exist, including the E-WorkBook Suite by IDBS, the Electronic Lab Notebook by LabArchives, and the Oak Ridge National Laboratory (ORNL) Electronic Engineering Notebook Project, sponsored by the DOE 2000 Electronic Notebook Project.1 The
purpose of these products is to provide an electronic equivalent to the paper
research or lab notebooks engineers have used for many decades. They record sketches, equations, plots, graphs, images, and signatures: everything produced during the process from R&D, to Systems Design, to Software Design and Development, through Integration and Test, and through maintenance, i.e., cradle-to-grave.
1 http://www.csm.ornl.gov/~geist/java/applets/enote/#demo
[Fig. 7.5 The agile systems/software development process with reverse engineering comparisons: documentation management (DocExpress), a collaborative framework, architecture documentation (System Architect for DoDAF), systems integration and test, and software engineering.]
The use of modern software development tools allows the Systems and Soft-
ware Architectures to be linked to requirements. This then allows the architecture
to be linked to class diagrams and then ultimately to the code to provide the
mechanisms to create the software "cleanroom" described in this book. By reverse engineering the code in a "cleanroom" and comparing the resultant software design (based solely on the code) with the original software architecture, it can be determined whether the architecture of the code as written resembles the intended architecture, i.e., "does the code do what it was architected to do?" Figure 7.5 illustrates this process.
Architecture development tools, such as Rhapsody, Control Center, and
many others, in conjunction with the Electronic Engineering Notebook, Func-
tionbases and cleanroom concepts, provide the capability to forward and reverse
engineer the code for comparisons. This provides the necessary validation that the
code meets requirements by verifying that the code written matches the archi-
tecture and ties to the requirements that drove the architecture.
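As a loose illustration of the forward/reverse comparison, independent of any particular architecture tool, the dependencies declared in the architecture can be compared with those recovered from the code (for example, from import statements or a call graph); the module names below are invented.

    # Dependencies as architected (forward direction), module names invented.
    architected = {
        "tracker":   {"sensor_io", "filters"},
        "filters":   {"math_utils"},
        "sensor_io": set(),
    }

    # Dependencies recovered by scanning the code (reverse direction),
    # e.g., harvested from import statements or a call graph.
    recovered = {
        "tracker":   {"sensor_io", "filters", "logging_hack"},
        "filters":   {"math_utils"},
        "sensor_io": set(),
    }

    for module, intended in architected.items():
        extra = recovered.get(module, set()) - intended
        missing = intended - recovered.get(module, set())
        if extra or missing:
            print(f"{module}: undocumented dependencies {extra}, unrealized dependencies {missing}")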
All of this work would be captured in the Electronic Engineer’s Notebook so
that the entire process and Functionbases can be delivered and archived for future
software and architecture reuse.
Chapter 8
Conclusion: Modern Design Methodologies—Information and Knowledge Management
People dislike change. It takes them out of their comfort zone. Human nature
demands continuity and consistency. These factors must be taken into consider-
ation in our engineering if continual improvement is to be realized. There are
telltale signs that indicate when change is a natural part of the process:
Quality is measured to prove the customer’s issues are being met and to provide
the basis for forcing change. Changes required for the Continual Process–Product
Improvement (CPPI) become an everyday affair.
Quality is implicit and is as much a part of the engineering as correct math; if the team cannot generate correct math, then the organization needs a math department. The same is true of quality: if the organization has a quality department, it indicates engineering processes that are not robust and cannot integrate change.
Quality is related to training and education because change is perpetual.
Quality is a race to be run, not just an objective to complete. Therefore, it is likely
that at least one-fifth of an engineer’s time will be spent in training and education.
Without it, the Interactive Experiments required to find improvement opportunities
will have low yields.
A Theory Z organization will ensure the team owns the product and the
resources. In some respects, this is a return to Taylor’s Scientific Management
where the team is the industrial engineer, determining how the resources can best
be utilized for process–product improvement [48]. These resources may be allo-
cated to education, faster hardware and software, additional team members, etc.
This may require an organizational focus on long-term profit before commitment
can be made, as the cultural change involves the engineer as well as the manager.
Some short-term losses will no doubt be incurred as the team learns to take
advantage of the new paradigm.
The Expert Systems Designer is our concept generated by all the ideas discussed in
this chapter and is graphically represented in Fig. 8.1.
[Fig. 8.1 The Expert Systems Designer: imagination and experience, the Electronic Engineering Notebook, the QFD process, provable synthesis methods and tools, engineering tools, improvement requirements, target system and requirements analysis, QFD-based CPPI, and performance analysis.]
libraries have been made for reuse. What is required is to integrate the knowledge about repeated errors to effect robust error checking, hence the Case Tools. Also, there is the question of the quality of the requirements. This requires the analyst to question his own thinking through experimentation. The Engineering Tools provide the Engineering Sandbox with rich function libraries. The two tools together provide a synergism where new ideas can be developed and implemented Right-First-Time. The method for eliminating errors is aggressively preventative in nature, not corrective, as post-priori bug elimination is wasteful and uncertain.
The Electronic Engineer’s Notebook integrates the QFD, methods-tools, and
improvement requirements. It enables the functional-stranger to rapidly repeat an
analysis or regenerate a design. It is the Notebook that will enable the engineer to
spend less time on clerical work and, through automation, more time on invention
and design.
Analyses of Performance estimates form a part of the Electronic Engineering Notebook (EEN). Development of models provides the means to test and tighten
performance estimates and validate what we think we know. This provides the
basis for change; changing what we don’t know is always dicey. With proven
knowledge, change can be made in a managed fashion.
QFD-Based Continual Process–Product Improvement (CPPI) is the integration
of new technology, concepts, and knowledge. Decision-making is based on Cp
measurements and how they will be affected. Such a rational basis is a prerequisite
to paradigm busting. QFD is a discipline that ensures the product does not suffer
paradigm paralysis. If it does, the team will experience King Harold's folly.
The Evolutionary Rapid Prototype describes the capability realized by the
Expert Designer concept. Evolution is a gradual process in which something
changes into a different, and usually more complex, form. Rapid means moving, and moving swiftly. Prototype is an original type, form, or instance that serves as a
model on which later stages are based or judged. Therefore, the Evolutionary
Rapid Prototype means each process–product is a basis for the next. The process–
product is ever changing and always improving.
Three of the Required Engineering Domains described in Fig. 6.8 are shown in
Fig. 8.2 to illustrate their relationship to each other. The engineer’s time might
well be split evenly between each domain to generate the necessary rate of
improvement. Treating any one of the domains in isolation will result in ineffectual
change. Also, loss of balance between the three will result in loss of competi-
tiveness. Lack of commitment to the three by the organization is a lack of com-
mitment to Quality, People, and Process.
Recurring Engineering is enabled through the EEN, and allows software loads
to be generated and tested automatically [40]. With all the functions predetermined
and all the design tools interfaced through Knowledge Management, the generation of the software loads is simply a matter of CPU time.
[Fig. 8.2 The Required Engineering domains in relation: QFD driven by customer demands and seed technology, the sandbox/prototype supporting browsing, modeling, interactive experiments, lessons learned, and new concepts and technologies, and the non-recurring engineering databases supplying function and performance data.]
The process is as follows (a schematic sketch follows the list):
• The analyst completes the data definitions using the data management tools
within the EEN.
• Once complete, the engineer can invoke the design macros built into the EEN
through the Knowledge Management Process. These macros know how to read
the data and spawn the design processes.
• These processes include the Monte Carlo simulations and/or regression tests
(if required) to measure performance; therefore performance analysis is built
into the process that enables requirements verification.
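A schematic sketch of that pipeline, with every step and name invented for illustration:

    import random

    def complete_data_definitions():
        """Step 1: the analyst's data definitions (here, just a dictionary)."""
        return {"gain": 2.0, "samples": [random.gauss(0.0, 1.0) for _ in range(1000)]}

    def design_macro(data):
        """Step 2: a 'design macro' that knows how to read the data and spawn the
        design process (here, a trivial scaling design stands in for the real one)."""
        return [data["gain"] * s for s in data["samples"]]

    def monte_carlo_performance(design_output, runs=100):
        """Step 3: Monte Carlo measurement built into the process; the result feeds
        requirements verification."""
        peaks = []
        for _ in range(runs):
            noisy = [x + random.gauss(0.0, 0.05) for x in design_output]
            peaks.append(max(abs(v) for v in noisy))
        return sum(peaks) / runs

    data = complete_data_definitions()
    load = design_macro(data)
    print(f"mean peak output over Monte Carlo runs: {monte_carlo_performance(load):.2f}")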
Interactive Experiments are essential to CPPI, as they indicate where the paradigm is weak or broken. Given robust error checking methods and thorough knowledge captured through QFD, there still remain sets of errors that can be traced to inadequate requirements analysis. When addressing this domain, there does not appear to be any substitute for human intuition and insight. At present, the organization depends on peer reviews using viewgraphs and questions. The Expert Designer makes the process–product Functionbases available to anyone over the internal organization web through a variety of collaborative environments, thereby providing the familiarity normally reserved for the designer to the whole community. The least experienced member of the technical community will use the Recurring Engineering functionality, and the erudite will use the Interactive Experiments components.
Great Spirits have always encountered violent opposition from mediocre minds.
Albert Einstein
Will the manager cede control to the team? The solution will most likely come
through economic necessity, and the sheer inevitability of the Information Age.
Whatever happens, a valuable tool in the arsenal is QFD. Engineers can use it to
build technology roadmaps to new products. Managers can use it to build the New
Organization to improve Customer Satisfaction.
Increasing Quality should be closely linked to Imagineering. Imagine a
process with all waste eliminated. Now design it. In contrast, engineering that is
driven by conformance-to-spec is suffering paradigm paralysis. To eliminate waste
there must be quality in the process, as this will affect quality of the product. For
the design engineer, quality and robustness are often linked, as the measure of
performance can be the same for each. Therefore, advanced design algorithms
have become even more valued as a means of reducing variance. The toughest
challenge for the design engineer may be to make statistics an intuitive skill;
solutions to improving quality will then follow more comfortably [23].
Finally, there are two major points that this chapter is trying to put across:
• Quality and robustness are defined through measurements, and maintained
through automated measurements (e.g., regression tests), not impressions; and
• Higher quality demands change, which will happen with or without us.
References
52. Walton, M. (1986). The Deming management method. Mesa, AZ: Perigree Books.
53. Wan, T., Zhou, Y. (2011). Effectiveness of psychological empowerment model in the science
and technology innovation team and mechanism research. In Proceedings of the 2011
International Conference on Information Management, Innovation Management, and
Industrial Engineering (ICIII).
54. Weber, M. (1947). The theory of social and economic organization. New York: Simon &
Schuster Inc.
55. Young, M. (2009). A meta model of change. Journal of Organizational Change
Management, 22(5), 524–548.
Index
A
Agile design, 1, 2, 7, 10, 16
Application specific codes, 60

B
Bureaucratic method, 15, 16, 19, 24, 26

C
Case tool, 51–53, 57, 64
Chief whip, 31, 32
Codeterminism, 26, 29, 34
Configuration data, 48
Consensus engineering, 7, 32
Continual improvement, 26, 35, 39, 62, 63, 67, 70
Continuous improvement, 59, 69

D
Data dictionary, 49, 51
Data management system, 6
Designed for reuse, 50
Development domains, 20

E
Electronic engineering notebook, 55, 57, 65, 66, 69
Expert systems designer, 68
Extreme programming, 1, 4

F
FMaps, 52
FPGA, 60
Functionbase, 7, 26, 29, 34, 36, 47–52, 55, 57–61, 65, 68, 71

G
Gatekeepers, 33

H
Holistic, 21, 71
Horizontal integration, 45
Human system interface, 16

I
Information age, 25, 45, 72
Information revolution, 45
Integrated product team, 17, 21

K
Knowledge capital, 29, 33
Knowledge management, 2, 4, 7, 48, 56, 57, 59, 60, 67, 70
Knowledge paradigm, 43, 58
Knowledge sharing, 9

L
Likert's systems, 23–25

M
Matrix management, 17, 24

N
Non-recurring engineering, 55, 71

P
Process automation, 55
Process documentation, 48

Q
Quality assurance, 31, 55, 57
Quality attributes, 38–40
Quality characteristics, 41, 61, 63–65, 71
Quality function deployment, 2, 12, 21, 32, 68

R
Recurring design, 48, 49
Recurring engineering, 26, 55, 69, 71
Robust design, 58, 59, 68

S
Scientific management, 15, 67
SEIT, 24, 29
Sequential design, 36, 37, 58
Shareability, 46
Skunkworks, 34
Soft skills, 9–11, 17

T
Technology champion, 31, 32
Test-driven development, 4
Theory X, 24, 29, 36
Theory Y, 24, 36
Theory Z, 10–12, 25–36, 40, 41, 59, 67
TMaps, 52
Toolmaker, 32

U
Unified modeling language (UML), 47
Use cases, 64

V
Variability reduction program, 62
Velocity of change, 17
Verification and validation, 46, 51