Applied Space Systems Engineering
Space Technology Series
This book is published as part of the Space Technology Series,
a cooperative activity of the United States Department of Defense
and the National Aeronautics and Space Administration.
Wiley J. Larson
Managing Editor
From McGraw-Hill: Space Mission Analysis and Design, Second Edition, by Larson and Wertz.
McGraw-Hill Learning Solutions
Boston  Burr Ridge, IL  Dubuque, IA  New York  San Francisco  St. Louis
Bangkok  Bogota  Caracas  Lisbon  London  Madrid
Mexico City  Milan  New Delhi  Seoul  Singapore  Sydney  Taipei  Toronto
Applied Space Systems Engineering
Space Technology Series
ISBN-13: 978-0-07-340886-6
ISBN-10: 0-07-340886-7
List of Authors and Editors
William H. Arceneaux. Constellation Program Test and Evaluation Director,
NASA Johnson Space Center, Houston, Texas. B.S. (Computer Engineering)
Florida Institute of Technology; B.S. (Space Sciences) Florida Institute of
Technology. Chapter 11 — Verification and Validation.
Mary Jane Cary. President, Cyber Guides Business Solutions, Inc., Naples, Florida.
M.E.A. (Engineering Administration) George Washington University; B.S.
(Nuclear Engineering) Kansas State University. Chapter 16—Manage Configuration.
Bruce C. Chesley. Director, Strategic Architectures, The Boeing Company,
Colorado Springs, Colorado. Ph.D. (Aerospace Engineering) University of
Colorado, Boulder; M.S. (Aerospace Engineering) University of Texas at Austin;
B.S. (Aerospace Engineering) University of Notre Dame.
Wiley J. Larson. Space Technology Series Editor for the U.S. Air Force Academy's
Space Mission Analysis and Design Project, President, CEI, Colorado Springs,
Colorado. D.E. (Spacecraft Design) Texas A&M University; M.S. and B.S. (Electrical
Engineering), University of Michigan. Chapter 1 —Space Systems Engineering.
Ted A. Leemann. President, The Center for Systems Management, Vienna, Virginia.
M.S. (Logistics Management) Florida Institute of Technology; B.A. (Business and
Economics) Westminster College Fulton, Missouri. Chapter 6—Decision Making.
Randy Liefer. President and Partner, Teaching Science & Technology, Inc.,
Colorado Springs, Colorado. Ph.D. (Aerospace Engineering) University of Kansas;
M.S. (Aeronautics and Astronautics) Massachusetts Institute of Technology; B.S.
(Aeronautical Engineering) United States Air Force Academy. Chapter 12—Product
Transition.
Jerry Jon Sellers. Senior Space Systems Engineer, Teaching Science & Technology,
Inc., Manitou Springs, Colorado. D.Phil. (Satellite Engineering) University of
Surrey, UK; M.Sc. (Aero/Astro Engineering) Stanford University; M.Sc. (Physical
Science) University of Houston, Clear Lake; B.S. (Human Factors Engineering)
United States Air Force Academy. Chapter 11 — Verification and Validation; Chapter
19—FireSAT End-to-End Case Study.
Peter A. Swan. Vice President and Partner, Teaching Science and Technology, Inc.,
Paradise Valley, Arizona. Ph.D. (Space Systems Design) University of California at
Los Angeles; M.S.S.M. (Systems Management) University of Southern California.
Chapter 14—Technical Direction and Management: The Systems Engineering
Management Plan (SEMP) in Action.
Lawrence Dale Thomas. Deputy Manager, Constellation Program, NASA
Marshall Space Flight Center, Huntsville, Alabama. Ph.D. (Systems Engineering)
University of Alabama in Huntsville; M.S. (Industrial Engineering) North Carolina
State University; B.S.E. (Industrial and Systems Engineering) University of
Alabama in Huntsville. Chapter 13—Plan and Manage the Technical Effort.
Peter M. Van Wirt. Senior Space Systems Engineer, Teaching Science &
Technology, Inc., Monument, Colorado. Ph.D. (Electrical Engineering) Utah State
University. Chapter 19—FireSAT End-to-End Case Study.
Dinesh Verma. Professor of Systems Engineering, Stevens Institute of Technology,
Castle Point on Hudson, Hoboken, New Jersey. Ph.D. (Industrial and Systems
Engineering) Virginia Tech. Chapter 1 — Space Systems Engineering; Chapter 2—
Stakeholder Expectations and Requirements Definition; Chapter 3—Concept of
Operations and System Operational Architecture.
Preface
Today's challenges in designing, developing, and implementing complex
aerospace systems are staggering. The Department of Defense, the National
Aeronautics and Space Administration, National Reconnaissance Office (NRO),
and the Federal Aviation Administration alone invest up to $50 billion (in FY2008
dollars) annually in space-related products and services. Globally, governments
invest about $74 billion every year. And commercial spending on communications
spacecraft, navigation systems, imaging systems, and even human spaceflight
equals or exceeds government spending in some years. The complexities of
technology, multiple organizations, geographical distribution, various political
entities, assorted budget cycles, and intercultural concerns challenge even the most
experienced systems engineers and program managers.
For the editors, ASSE is a dream come true—and not just because we're
finished! After decades of working in the aerospace community, we were able to
persuade the "best of the best" performing system engineers to contribute their
wealth of experience, successful tools and approaches, and lessons learned to this
project. This is good for us, but REALLY good for you, because you'll benefit from
the 350-plus years (that's over three and a half centuries) of collective systems
engineering experience!
Applied Space Systems Engineering is the 17th book produced by the US Air Force
Academy's Space Technology Series team. The Department of Astronautics at the
Academy has continued to provide the leadership for this project. Our deepest
gratitude goes to Brigadier Generals Bob Giffen and Mike DeLorenzo and Colonel
Marty France for their vision and persistence in leading this community effort.
Lieutenant Colonels Mark Charlton and Timothy Lawrence supplied the hands-on
leadership to make ASSE a reality. Thank you!
The Space Technology Series is sponsored and funded by the country's space
community—and has been for more than 20 years. Twenty-three national
organizations have championed this effort since it began in 1988—many of them on
an annual basis. Leadership, funding, and support from Air Force Space and
Missile Systems Center (SMC), numerous program offices and AF Wings, NRO,
NASA Headquarters (including several mission directorates), NASA/Goddard
Space Flight Center (GSFC), the Federal Aviation Administration, Naval Research
Laboratory, and a host of others, have made this work possible. Thank you for your
continued support!
Several sponsors that provided pivotal inspiration, guidance, and motivation
for this particular book include: Mike Ryschkewitsch, NASA's Chief Engineer;
Doug Loverro (Chief Technical Officer); Colonel Jim Horejsi, Chief Engineer of
SMC; and Vernon Grapes, Chief Engineer of NRO.
Here we must recognize the forbearance and long suffering of the 68 authors of
this text. For four years they outlined, drafted, reorganized, reviewed, and
recrafted their material to suit the editors' demands. This book would not be possible
without their expertise, persistence, and patience. As editors, we have strived to
create a useful, professional book of which the authors can be proud.
The true heroes of this effort are Anita Shute and Marilyn McQuade. As a team,
they edited every word, developed and finalized all graphics, and did everything
necessary to create a camera-ready copy that truly represents the quality of the
author and editor team. Mary Tostanoski created many complicated graphics, as
well as the cover, and Perry Luckett provided early editing. Anita Shute personally
entered and formatted every word and sentence in this book, as she has 13 others—
over 11,000 pages of text! We could not have done this book without you, Anita!
A team of 17 seasoned Government and industry systems engineers, guided by
Mark Goldman of GSFC, laid the foundation for this book. Working with NASA's
Academy of Program/Project and Engineering Leadership, his team painstakingly
documented the capabilities and competencies that systems engineers must possess
to be successful in complex systems design, development, and operation. The
resulting capabilities were vetted with and reviewed by representatives of the
International Council on Systems Engineering (INCOSE), Defense Acquisition
University, and NASA's Systems Engineering Working Group.
Multiple teams (over 100 individuals) critiqued drafts of this book to ensure
credibility—Constellation's EVA systems team, and NASA Ames's and Johnson
Space Center's (JSC's) participants in the Graduate Certificate in Space Systems
Engineering. Maria So led a highly effective review team at GSFC to conduct a
detailed review of the book's contents, providing a complete set of integrated
comments and improvements. We sincerely thank all of these folks for their
conscientious contributions to the overall quality of the book!
Finally, we want to express our heartfelt appreciation to our spouses, our
children, and our friends for their unswerving support and encouragement, while
sharing our dream with us!
The authors and editors of ASSE, and of its companion, Applied Project
Management for Space Systems, hope that our colleagues—old hands and neophytes
alike—will find useful and insightful information and approaches here to help
them meet the challenges in engineering complex aerospace systems for national
defense, civil application, and commercial enterprise.
Chapter 1—Space Systems Engineering
instrument. Each instrument requires different expertise and skill. Some musicians
spend their entire careers mastering a single instrument, but sophisticated music
involves many different instruments played in unison. Depending on how well
they come together, they may produce beautiful music or an ugly cacophony.
We can think of a symphony as a system. The musicians apply the science of
music: they translate notes on a page to play their instruments. But an orchestra
conductor, a maestro, must lead them to connect the process of playing to the art
of creating great music. Maestros do a lot more than just keep time! They:
The systems engineer is like the maestro, who knows what the music should
sound like (the look and function of a design) and has the skills to lead a team in
achieving the desired sound (meeting the system requirements). Systems engineers:
Today's space systems are large and complex, requiring systems engineers to
work in teams and with technical and other professional experts to maintain and
enhance the system's technical integrity. The creativity and knowledge of all the
people involved must be brought to bear to achieve success. Thus leadership and
communications skills are as important as technical acumen and creativity. This
part of SE is about doing the job right.
For large complex systems, there are literally millions of ways to fail to meet
objectives, even after we have defined the "right system." It's crucial to work all the
details completely and consistently and ensure that the designs and technical
activities of all the people and organizations remain coordinated—art is not enough.
Systems management is the science of systems engineering. Its aim is to rigorously
and efficiently manage the development and operation of complex systems.
Effective systems management applies a systematic, disciplined approach that's
quantifiable, recursive, repeatable, and demonstrable. The emphasis here is on
organizational skills, processes, and persistence. Process definition and control are
essential to effective, efficient, and consistent implementation. They demand a clear
understanding and communication of the objectives, and vigilance in making sure
that all tasks directly support the objectives.
Systems management applies to developing, operating, and maintaining
integrated systems throughout a project's or program's lifecycle, which may
extend for decades. Since the lifecycle may exceed the memory of the individuals
involved in the development, it's critical to document the essential information.
To succeed, we must blend technical leadership and systems management into
complete systems engineering. Anything less results in systems not worth having
or that fail to function or perform.
FIGURE 1-1. The Scope of Systems Engineering. Systems engineers often concentrate on one
lifecycle phase like architecture and design versus development or operations, but
good systems engineers have knowledge of and experience in all phases.
(Figure labels: intellectual curiosity, the ability and desire to learn new things; the ability to see the big picture yet get into the details; the ability to make system-wide connections.)
in a cookbook. Processes are tools that provide them with a common frame of
reference, help them manage design, cost, schedule and risk, and allow the team
to work together to produce the right system. But processes alone don't guarantee
a great product.
Diverse technical skills. A systems engineer must be able to apply sound
principles across diverse technical disciplines. Good systems engineers know the
theory and practice of many technical disciplines, respect expert input, and can
credibly interact with most discipline experts. They also have enough engineering
maturity to delve into and learn new technical areas.
Herein lies the art—how well does the maestro lead the people and use the
tools provided? Maestros know how to bring out the best in their musicians; they
know how to vary the tempo and the right moment to cue the horn section to draw
in the listeners. The same is true for systems engineers. It's what we DO with the
processes and talents of the team that matters. Ideally, as systems engineers gain
experience, they are able to deal with more complex systems through:
For our purpose, architects provide the rules; designers create solutions using
those rules. Systems engineers do both; they help create the design and maintain
its integrity throughout its lifecycle. Designing new aerospace missions and
systems is a very creative, technical activity. Most engineers use a variation of a
fundamental thought process —1) define the problem, 2) establish selection
criteria, 3) synthesize alternatives, 4) analyze alternatives, 5) compare alternatives,
6) make a decision, and 7) implement (and iterate, for that matter). Though not
usually mandated, we use this process, or one like it, because it produces good,
useful results—it works!
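To make the comparison steps of this thought process concrete, the sketch below scores a handful of candidate concepts against weighted selection criteria. It is only an illustration of steps 2 through 5 (decision analysis and trade studies are treated in depth in Chapter 6): the alternatives, criteria, weights, and scores are hypothetical, not values from this book.

```python
# Minimal weighted decision-matrix sketch for steps 2-5: establish criteria,
# synthesize alternatives, analyze, and compare. All names and numbers below
# are hypothetical placeholders for illustration only.

criteria = {              # selection criteria and their relative weights (sum to 1.0)
    "detection time": 0.40,
    "lifecycle cost": 0.35,
    "technical risk": 0.25,
}

# Scores on a 1 (poor) to 5 (excellent) scale for each alternative concept
alternatives = {
    "GEO single satellite":         {"detection time": 5, "lifecycle cost": 2, "technical risk": 3},
    "LEO small-sat constellation":  {"detection time": 4, "lifecycle cost": 4, "technical risk": 4},
    "High-altitude UAV fleet":      {"detection time": 3, "lifecycle cost": 3, "technical risk": 2},
}

def weighted_score(scores: dict) -> float:
    """Weighted sum of criterion scores for one alternative."""
    return sum(criteria[name] * value for name, value in scores.items())

# Rank the alternatives from best to worst total score
ranked = sorted(alternatives, key=lambda alt: weighted_score(alternatives[alt]), reverse=True)
for alt in ranked:
    print(f"{alt}: {weighted_score(alternatives[alt]):.2f}")
```

The value of such an artifact is less the final number than the fact that the criteria, weights, and scores are written down, so the team can revisit and iterate on them as the design matures.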
The first credible design for a space mission and its associated systems is
usually the product of a few individuals or a small design team. They:
• Determine stakeholders' needs and success criteria
• Identify critical top-level requirements (normally 3 to 7) and understand the
acceptance criteria
• Create a mission concept as well as physical and functional architectures
• Develop a concept of operations and integrate it with the mission concept,
architecture, and top-level requirements
• Design critical interfaces among the architecture's elements
• Develop clear and unambiguous requirements that derive from the mission
concept, architecture, concept of operations, and defined interfaces
The result of this intense, highly iterative, and creative activity is a first
credible design that's consistent with basic physics and engineering principles and
meets top-level requirements. It's a baseline from which we apply systems
management processes to do tradeoffs and more detailed quantitative analyses
that focus on enhancing the design detail. We also continue to identify and
mitigate technical, cost, and schedule risks.
Defining the interfaces is key. We have to keep the number of interfaces to an
acceptable minimum; less is usually more as long as the appropriate level of
isolation of interaction is maintained. We should also keep them as simple as
possible and, when confronted with a particularly difficult interface, try changing
its characteristics. And of course, we have to watch out for Murphy's Law!
Designers and systems engineers engaged in this early activity, and indeed,
throughout the lifecycle, follow several hard-learned principles:
• Apply equal sweat
• Maintain healthy tension
• Manage margin
• Look for gaps and overlaps
• Produce a robust design
• Study unintended consequences
• Know when to stop
Apply equal sweat. The concept of equal sweat is to apportion the required
performance or functional requirements in such a way that no single subsystem
has an insurmountable problem. Figure 1-3 provides an example of the allocation
and flow down of the top-level requirement for mapping error. If the mapping
error is misallocated, it can easily drive the cost and complexity of one element up
significantly, while allowing another element to launch with excess margin. Good
engineering judgment and communication are required to allocate a requirement
across subsystem boundaries in such a way that each element expends equal sweat
in meeting the requirement. We must be prepared to reallocate when a problem
becomes unexpectedly difficult in one area. To achieve this, team leaders must
maintain open communications and the expectation that the members can and
should raise issues. The concept of equal sweat applies to many aspects of space
systems. Table 1-1 lists a few.
(Figure 1-3 diagram: the mapping error requirement is allocated among design margin, the spacecraft (bus and payload), mission control (operations facility and communications), and mission data processing.)
Figure 1-3. An Example of Equal Sweat. Here we see a potential mapping error allocation for
a space system. Mapping error represents how well the system is expected to
pinpoint the location of an image created by the system. (Zero mapping error is
perfection.) The goal, after setting sufficient design margin aside, is to allocate the
mapping error to elements of the system in such a way that no element has an
insurmountable challenge and each element expends roughly equal effort (sweat) in
meeting the requirement.
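As a rough illustration of how such an allocation might be checked, the sketch below combines element-level mapping-error allocations by root-sum-square (RSS), one common convention when the error sources are assumed independent. The element names echo Figure 1-3, but the numbers, the units, and the RSS convention itself are assumptions made for this example, not values or methods prescribed by the book.

```python
import math

# Illustrative RSS check of a mapping-error allocation. Element names echo
# Figure 1-3; all numeric values (in km) are hypothetical.

top_level_requirement_km = 1.0   # allowed system-level mapping error
design_margin_km = 0.3           # margin held back before allocating to elements

allocations_km = {
    "spacecraft bus (pointing, ephemeris)": 0.45,
    "payload (sensor geometry)":            0.45,
    "mission control / operations":         0.30,
    "mission data processing":              0.40,
}

# If the element errors are independent, they combine roughly as a root sum of squares.
rss_km = math.sqrt(sum(v**2 for v in allocations_km.values()))
total_with_margin_km = math.sqrt(rss_km**2 + design_margin_km**2)

print(f"RSS of element allocations: {rss_km:.2f} km")
print(f"Including design margin:    {total_with_margin_km:.2f} km")
print("Meets top-level requirement:", total_with_margin_km <= top_level_requirement_km)
```

Rebalancing the allocation is then a matter of adjusting the element budgets (the "sweat") until no single element carries an insurmountable share while the combined total still closes against the top-level requirement.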
significant near term constraints such as cost, schedule, or mass. It also plays out
during development and operations, when teams balance design changes and
workarounds with ensuring safe, successful systems. Throughout the lifecycle
continual tension helps maintain the proper requirements, constraints, and testing.
For example, we must strike a balance between too little and too much testing. Not
enough testing adds risk to a program, whereas testing too much can be very costly
and may add unnecessary run-time to the equipment. These healthy tensions are
a key to creating and maintaining the environment that will produce the best-
balanced system, and the systems engineer must embrace and foster them.
Manage margin. Good systems engineers maintain a running score of the
product's resources: power, mass, delta-V, and many others. But more importantly,
they know the margins. What exactly does margin mean? Margin is the difference
between requirements and capability. If a spacecraft must do something (have
some capability), we allocate requirements. If we meet requirements, test
effectively, and do the job correctly, we create a capability. One way to add margin
is to make the requirements a little tougher than absolutely necessary to meet the
mission's level-one requirements, which some people call contingency.
In Figure 1-4, the outer shape defines the capability, the inner shape represents
the requirement, and the space between the two represents margin. The
requirement comes very close to the capability on the right side of the diagram, so
there we have a minimum margin. (The figure also applies to characteristics.)
FIGURE 1-4. Requirements, Capability, and Margin. Where the requirements come close to the
capability (as on the right side of the figure), we have little margin [Adapted from Lee, 2007].
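A minimal sketch of a margin ledger appears below: it computes margin as capability minus requirement for a few technical resources and flags any that fall below a watch threshold. The resource names, values, and the 10 percent threshold are hypothetical, intended only to show how a running score of margins might be kept.

```python
# Minimal margin-ledger sketch: margin = capability - requirement.
# Resource names, numbers, and the 10% watch threshold are hypothetical.

budgets = [
    # (resource, unit, requirement, current capability or estimate)
    ("Orbit-average power", "W",    310.0, 344.0),
    ("Downlink data rate",  "Mbps",   2.0,   2.6),
    ("Delta-V",             "m/s",  180.0, 187.0),
]

WATCH_THRESHOLD = 0.10  # flag any margin below 10% of the requirement

for name, unit, requirement, capability in budgets:
    margin = capability - requirement
    fraction = margin / requirement
    flag = "  <-- watch" if fraction < WATCH_THRESHOLD else ""
    print(f"{name}: requirement {requirement} {unit}, capability {capability} {unit}, "
          f"margin {margin:.1f} {unit} ({fraction:.0%}){flag}")
```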
Look for gaps and overlaps. Once we begin feeling comfortable and confident
about our design, looking for gaps and overlaps will help us recover from our
comfort and confidence. What have we forgotten? Which requirements are
incomplete? Where are the disconnects among our project's top-level requirements,
architecture, design, and concept of operations? We must also carefully consider all
system interfaces and look on both sides of these interfaces to identify what could
interfere with our system. When we do this we often find that our system of interest
or the design's scope is not sufficiently defined.
Create a robust design. Robust design is a proven development philosophy
for improving the reliability of systems, products, and services. Terms that
characterize it include resilient, stable, flexible, and fault tolerant. Robust design is
vital to successful missions and systems. It must be an early and integral part of
development. Our objective is to make our systems resistant to factors that could
harm performance and mission success. A robust design performs consistently as
intended throughout its lifecycle, under a wide range of conditions and outside
influences, and it withstands unforeseen events. In other words, a robust design
provides stability in the presence of ignorance!
Study unintended consequences. A key to success in spaceflight is to
rigorously analyze failure modes and effects to determine how the system will
perform when individual elements, subsystems, or components fail. Good systems
engineers study failures of complex systems to gain insights into their root causes,
ripple effects, and contributing factors. Hardware, software, interfaces,
organizations, and people introduce complexity, so we study failures to avoid
them. Henry Petroski, a professor at Duke University and author of Success
Through Failure, points out that studying failures helps us better assess our design's
unintended consequences [Petroski, 2006]. In Apollo: The Race to the Moon, Murray
and Cox offer a stirring account of the Apollo 13 oxygen tank's explosion—a
significant anomaly that resulted in mission failure. It shows how flight and
ground crews creatively worked together to save the lives of the astronauts
[Murray and Cox, 1989]. The systems engineer should study as many failures as
possible to develop good engineering judgment.
Know when to stop. At some point in the project, we'll have discussed the
philosophy of mission and system design, reviewed hard-earned wisdom about
design, and even applied what we have learned from previous failures and created
our "first credible" design. But we may hesitate to show the design to others until
we've enhanced it a little more, a little more, and even a little more. It's hard to stop
tweaking the design to make it better. Eventually, though, because of such realities
as lack of money or time, we have to say, "Better is the enemy of good enough."
The principle of knowing when to stop applies during the whole life of the project.
In universities, engineers learn to optimize designs, especially in the
traditional electrical and mechanical disciplines. But in a large, complex system
design, competing requirements and constraints often make it inappropriate to
optimize subsystems. We need a balanced design that meets stakeholder needs as
well as top-level critical requirements and constraints. However, system
constraints such as mass often require the overall system to be optimized.
With the increasing complexity of space missions and systems, the challenge
of engineering systems to meet the cost, schedule, and performance requirements
within acceptable levels of risk requires revitalizing SE. Functional and physical
interfaces are becoming more numerous and complicated. Software and
embedded hardware must be integrated with platforms of varying intricacy.
Preplanned project development and the extension of system applications drive
higher levels of integration. Another driver of increasing system complexity is the
significant reduction of operations staff to reduce lifecycle cost and the
incorporation of their workload into the system. In addition, systems are becoming
more autonomous with stored knowledge, data gathering, intra- and inter-system
communications, and decision-making capabilities.
While rising to the greater challenge, we must also address concerns over past
failures. The need for more rigorous guidance and approaches is driven both by
past experience and evolving project and program requirements. Drawing on the
result of reports and findings of space-related experience since the mid-1990s,
DOD, NASA, and industry have revamped their approach to SE to provide for
future missions. Achieving our goals in space requires systems-level thinking on
the part of all participants.
This book is intended to provide the processes, tools, and information that a
space systems engineer can use to design, develop, and operate space systems. To
this end we focus on "how to" as we proceed through the topics listed in Table 1-2.
The flow of activities from top to bottom looks linear, but nothing could be
further from the truth. Each activity has many iterations, which have impacts on
other activities, which are constantly changing.
TABLE 1-2. The Applied Space Systems Engineering Flow. This table provides an overview of
the topics in this book. The early emphasis is on getting the right design, followed by
realizing the system and managing the development and implementation of the system.
DESIGN the system:
• Define needs and stakeholder expectations (Chap. 2)
• Generate concept of operations and operational architecture (Chap. 3)
• Develop system architecture—functional and physical (Chap. 4)
• Determine technical requirements, constraints, and assumptions (Chap. 5)
• Make decisions and conduct trade-off analyses (Chap. 6)
• Estimate lifecycle cost (Chap. 7)
• Assess technical risk (Chap. 8)

REALIZE the system:
• Implement the system (Chap. 9)
• Integrate the system (Chap. 10)
• Verify and validate the system (Chap. 11)
• Transition the system into use (Chap. 12)

MANAGE creation, development, and implementation of the system:
• Plan and manage the technical effort (Chap. 13)
• Develop and implement a systems engineering management plan (Chap. 14)
• Control interfaces (Chap. 15)
• Maintain configuration (Chap. 16)
• Manage technical data (Chap. 17)
• Review and assess technical effort (Chap. 18)
• Document and iterate (Chap. 1)
Definitions
• Decision authority—The highest-ranking individual assigned by the
organization with the authority to approve a project's formal transition to the
next lifecycle phase
• Gates—Events, usually mandatory, that a project must participate in during
which it will be reviewed by the stakeholders, user community, or decision
authority
• Key decision point (KDP)—A milestone at which the decision authority
determines the readiness of a project to formally proceed into the next phase
of the lifecycle. The decision is based on the completed work of the project
team as independently verified by the organization.
• Lifecycle—All the phases through which an end-item deliverable (i.e.,
system) passes from the time it's initially conceived and developed until the
time it's disposed of
• Milestone—An important event within a project, usually the achievement of
a key project deliverable or set of deliverables, or the demonstration of a
group of functionalities. Milestones can be "internal"—self-imposed by the
project team or "external"—imposed by the stakeholder, user community,
or decision authority.
• Phases—In projects, a specific stage of the lifecycle during which the
majority of the project's resources are involved in the same primary activity
for a common goal or deliverable, such as requirements definition, design,
or operations
• System—The combination of elements—flight and ground—that function
together in an integrated fashion to produce the capability required to meet
an operational need. Each element comprises all the hardware, software,
firmware, equipment, facilities, personnel, processes, and procedures
needed for the element's contribution to the overall purpose.
• Technical authority—The individual, typically the project's systems engineer,
that the decision authority assigns to maintain technical responsibility over
establishment of, changes to, and waivers of requirements in the system
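The sketch below models a few of these terms in code: a project advances to the next lifecycle phase only when the decision authority approves it at a key decision point. The phase names are those of the DoD acquisition lifecycle noted just after this list; the Project class and its field names are illustrative assumptions, not a structure defined by NPR 7123.1a or the DoD guidebook.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative model of phases, KDPs, and the decision authority's role.
# Phase names follow the DoD acquisition lifecycle figure; the class layout
# itself is a hypothetical sketch.

PHASES = [
    "Concept refinement",
    "Technology development",
    "System development and demonstration",
    "Production and deployment",
    "Operations and support",
]

@dataclass
class Project:
    name: str
    phase_index: int = 0
    kdp_record: List[str] = field(default_factory=list)  # record of KDP outcomes

    @property
    def phase(self) -> str:
        return PHASES[self.phase_index]

    def pass_kdp(self, decision_authority: str, approved: bool) -> None:
        """Advance one phase only if the decision authority approves at the KDP."""
        if approved and self.phase_index < len(PHASES) - 1:
            self.kdp_record.append(f"KDP approved by {decision_authority} after {self.phase}")
            self.phase_index += 1

firesat = Project("FireSAT")
firesat.pass_kdp("Program decision authority", approved=True)
print(firesat.phase)   # -> Technology development
```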
(Lifecycle figure: program initiation; phases of concept refinement, technology development, system development and demonstration, production and deployment, and operations and support.)
refinement development and demonstration deployment and support
encounter in a given activity. Where practical, the subject matter experts have
focused on the elements for success in terms of systems engineering as a discipline,
less on the specific nuances of any one community. Where information is specific
to one community, the author makes that distinction clear. Members of other
communities would be wise to study these instances to gain a richer
understanding of the trade space. Otherwise, this book directs the reader to the
core SE competencies that are fundamental ingredients for success. We leave it to
the systems engineer to take this knowledge and apply it within his or her
organization's environment.
one. Still, a space project's technical baseline typically corresponds to the system's
development phases and therefore consists of seven discrete parts:
Baselines and the reviews at which they are typically established:
• Mission baseline (MCR)
• System baseline (SRR)
• Functional baseline (SDR)
• Design-to baseline (PDR)
• Build-to baseline (CDR)
• As-built baseline (SAR, ORR)
• As-deployed baseline (FRR, PLAR)
Figure 1-7. Typical Technical Baselines for Space Systems and Missions. Our discussion
centers on seven baselines in the project lifecycle. Each has a set of artifacts or
documents that represent its status. These baselines are established at certain
milestone reviews. (MCR is mission concept review; SRR is system requirements
review; SDR is system design review; PDR is preliminary design review; CDR is critical
design review; SAR is system acceptance review; ORR is operational readiness
review; FRR is flight readiness review; PLAR is post-launch assessment review.)
(Table 1-3, first portion; the phase-by-phase status columns are not reproduced here): 1. FAD; 2. Program requirements on the project (from the program plan); 3. ASM minutes; 4. NEPA compliance documentation (EIS); 5. Interagency and international agreements.
TABLE 1-3. Typical Baseline Items from NASA's NPR 7123.1a. (Continued) This table shows a more exhaustive list of system-related products and artifacts and tells when they're initially developed and baselined. (KDP is key decision point; FAD is formulation authorization document; ASM is Acquisition Strategy Meeting; NEPA is National Environmental Policy Act; EIS is environmental impact statement; CADRe is cost analysis data requirement; SRB is standing review board; MD is mission director; CMC is Center Management Council.)
(Table 1-3, continued; the phase-by-phase status columns are not reproduced here): 1. Work agreement for next phase; 2. Integrated baseline; 3. Project plan; 4. CADRe; 5. Planetary protection plan (and planetary protection certification); 6. Nuclear safety launch approval plan; 7. Business case analysis for infrastructure; 8. Range safety risk management plan; 9. System decommissioning and disposal plan.
FIGURE 1-8. Another Perspective on Technical Baselines. This figure captures the evolution of
the system design through the various technical baselines. Space systems
engineering takes the relatively crude concepts in the Mission Baseline and uses the
design, manage, and realize processes to deliver the capability in the as-deployed
baseline.
requirements in the intended environments over the system's planned life within
cost and schedule constraints. To make this happen, all project participants need
to have a systems perspective.
FIGURE 1-9. Systems Engineering Activities. The systems engineering framework (discussed
below) supports iterative and recursive synthesis, analysis, and evaluation activities
during complex systems development. Domain knowledge, systems thinking, and
engineering creativity are key prerequisites.
The SE framework itself consists of the three elements shown in Figure 1-10.
The integrated implementation of common technical processes, workforce, and
tools and methods provides greater efficiency and effectiveness in the engineering
of space systems. This section describes each element. Systems engineering
processes are one element in a larger context, including workforce (that's the
team), and tools and methods to produce quality products and achieve mission
success. Together, these elements constitute the SE capability of an organization.
Furthermore, the SE processes themselves represent a framework for the
coordinated evolution and increasing maturity of the mission, the required
operational scenarios and performance requirements, and the constraints on the
one hand; and the conceived solution and its architecture, design, and
configuration on the other.
TABLE 1-4. Systems Engineering Processes. There is considerable agreement on the definition
of the core set of SE processes, classified into technical management processes and
technical processes.
Systems engineering processes per the OSD Defense Acquisition Guidebook:
• Technical management processes: decision analysis; technical planning; technical assessment; requirements management; risk management; configuration management; technical data management; interface management
• Technical processes: requirements development; logical analysis; design solution; implementation; integration; verification; validation; transition

Systems engineering processes per NPR 7123.1a—SE Processes and Requirements:
• Technical management processes: decision analysis; technical planning; technical assessment; requirements management; technical risk management; configuration management; technical data management; interface management
• Technical processes: stakeholder expectation definition; technical requirements definition; logical decomposition; physical solution; product implementation; product integration; product verification; product validation; product transition
FIGURE 1-11. The Systems Engineering Engine, Depicting the Common Systems
Engineering Sub-Processes, Per NASA NPR 7123.1a. These processes apply at
any level of the development effort. The three groups of processes shown here are
used to develop, produce, and deliver products and services.
• Know the technical "red line" through the system; be able to anticipate the
impact of changes throughout the system and lifecycle
• Maintain the big picture perspective, integrating all disciplines
• Establish and maintain the technical baseline of a system
• Maintain the technical integrity of the project, whatever it takes
do for each item in the table?" We can discuss each item and the integrated whole
in terms of performance levels. Table 1-6 lists four levels of performance.
Table 1-5. Systems Engineer Capabilities. In the life of a project, a systems engineer should be
able to deal with the items shown here.
TABLE 1-6. Systems Engineer Performance Levels. The focus of this book is to help develop
systems engineers that can operate at performance levels 1-3. Those who enhance
their capabilities and gain experience may increase their performance level and be
candidates for more significant responsibilities.
(Table 1-6 columns: performance level, generic title, description, and example job titles; the rows are not reproduced here.)
• Understand their technical domain better than anyone else, and continually
learn about other technical domains and technologies
• Learn to deal effectively with complexity
• Maintain a "big picture" perspective
• Strive to improve in written and oral communications
• Strive to improve interpersonal skills and team member and team leadership
abilities
Figure 1-12. The FireSAT Mission Architecture. Here we see the various elements of the
complete mission architecture.
(Figure 1-13 hierarchy: level 0 is the FireSAT mission architecture, a system of systems; level 1 comprises the orbits and trajectory environment, the space element, the launch element, the mission operations element, and the subject element (wildfires).)
FIGURE 1-13. FireSAT Mission Architecture (aka Physical Architecture) Hierarchy Diagram.
This is another way to depict information similar to that in Figure 1-12.
"FireSAT" to mean the project or mission level. However, for the most part our
primary emphasis is on the space element—the FireSAT spacecraft and its
subsystems. On occasion we even dip down to the part level to illustrate a given
point, but the objective is always to reinforce how the approaches and application
of SE processes apply at any level.
Chapter 2 begins the FireSAT saga, starting with the initial needs of the
stakeholder and leading us through the development of stakeholder or mission-level
requirements. Subsequent chapters in the "Design" unit of the book continue through
architecture development, requirements engineering, logical decomposition and
eventually the physical solution of the FireSAT system. Then in the "Manage" unit
we explore how decision analysis, risk management, and other processes support
both design and implementation throughout the lifecycle. In the "Implement" unit,
we first examine the issues associated with FireSAT buy, build, or reuse decisions,
then move on to integration, verification, and validation. The transition chapter
includes examples of how FireSAT goes from delivery to operations.
Finally, to tie everything up, the end-to-end case study chapter looks at the
FireSAT example from a different perspective. There the approach concentrates on
each of the seven primary technical baselines in the lifecycle, to illustrate how the
17 SE processes come to bear to tackle the real world challenges of moving a
program from concept to launch to operations, one major milestone at a time.
References
Chesley, Julie, Wiley J. Larson, Marilyn McQuade, and Robert J. Menrad. 2008. Applied
Project Management for Space Systems. New York, NY: McGraw-Hill Companies.
Griffin, Michael D. March 28, 2007. Systems Engineering and the "Two Cultures" of Engineering,
NASA, The Boeing Lecture.
Larson, Wiley J. and James R. Wertz. 1999. Space Mission Analysis and Design. 3rd Ed.
Dordrecht, Netherlands: Kluwer Academic Publishers.
Lee, Gentry. 2007. So You Want to Be a Systems Engineer. DVD, JPL, 2007.
Murray, Charles and Catherine Bly Cox. 1989. Apollo: Race to the Moon. New York, NY:
Simon and Schuster.
NASA. 2007 (1). NPR 7123.1a—NASA Systems Engineering Processes and Requirements.
Washington, DC: NASA.
NASA. 2007 (2). NPR 7120.5D—Space Flight Program and Project Management
Requirements. Washington, DC: NASA.
Office of the Secretary of Defense - Acquisition, Technology and Logistics (OSD - AT&L).
December 8, 2008. DoDI 5000.02, Defense Acquisition Guidebook. Ft. Belvoir, VA:
Defense Acquisition University.
Personal Interviews, Presentations, Emails and Discussions:
Michael Bay, Goddard Space Flight Center
Harold Bell, NASA Headquarters
Bill Gerstenmaier, NASA Headquarters
Chris Hardcastle, Johnson Space Center
Jack Knight, Johnson Space Center
Ken Ledbetter, NASA Headquarters
Gentry Lee, Jet Propulsion Laboratory
Michael Menzel, Goddard Space Flight Center
Brian Muirhead, Jet Propulsion Laboratory
Bob Ryan, Marshall Space Flight Center
Petroski, Henry. 2006. Success through Failure: The Paradox of Design. Princeton, NJ: Princeton
University Press.
Schaible, Dawn, Michael Ryschkewitsch, and Wiley Larson. The Art and Science of Systems
Engineering. Unpublished work, 2009.
Additional Information
Collins, Michael. Carrying the Fire: An Astronaut's Journeys. New York, NY: Cooper Square
Press, June 25, 2001.
Defense Acquisition University Systems Engineering Fundamentals. Ft. Belvoir, Virginia:
Defense Acquisition University Press, December 2000.
Derro, Mary Ellen and P.A. Jansma. Coaching Valuable Systems Engineering Behaviors.
IEEE AC Paper #1535, Version 5, December 17, 2007.
Ferguson, Eugene S. 1992. Engineering and the Mind's Eye. Cambridge, MA: MIT Press.
Gladwell, Malcolm. Blink: The Power of Thinking Without Thinking. New York, NY: Back Bay
Books, April 3, 2007.
Gleick, James. Genius: The Life and Science of Richard Feynman. New York, NY: Vintage
Publications, November 2, 1993.
Griffin, Michael D. and James R. French. 2004. Space Vehicle Design. 2nd Ed., Reston, VA:
AIAA Education Series.
Johnson, Stephen B. 2006. The Secret of Apollo: Systems Management in American and European
Space Program (New Series in NASA History). Baltimore, MD: Johns Hopkins University
Press.
Kidder, Tracy. 2000. The Soul Of A New Machine. New York, NY: Back Bay Books.
Larson, Wiley J., Robert S. Ryan, Vernon J. Weyers, and Douglas H. Kirkpatrick. 2005. Space
Launch and Transportation Systems. Government Printing Office, Washington, D.C.
Larson, Wiley J. and Linda Pranke. 2000. Human Spaceflight: Design and Operations. New
York, NY: McGraw-Hill Publishers.
Logsdon, Thomas. 1993. Breaking Through: Creative Problem Solving Using Six Successful
Strategies. Reading, MA: Addison-Wesley.
McCullough, David. 1999. The Path Between the Seas: The Creation of the Panama Canal 1870-
1914. New York, NY: Simon and Schuster.
Menrad, Robert J. and Wiley J. Larson. Development of a NASA Integrated Technical Workforce
Career Development Model. International Astronautical Federation (IAC) Paper IAC-08-
Dl.3.7, September 2008.
Perrow, Charles. 1999. Normal Accidents: Living with High-Risk Technologies. Princeton, NJ:
Princeton University Press.
Rechtin, Eberhard. 1991. System Architecting. Prentice Hall, Inc.
Ryan, Robert. 2006. Lessons Learned—Apollo through Space Transportation System. CD, MSFC.
Sellers, Jerry Jon. 2004. Understanding Space. 3rd Ed. New York, NY: McGraw Hill.
Squibb, Gael, Wiley J. Larson, and Daryl Boden. 2004. Cost-effective Space Mission Operations.
2nd Ed. New York, NY: McGraw-Hill.
USGOV. Columbia Accident Investigation Board, Report V. 5, October 2003 (3 Books),
PPBK, 2003, ISBN-10: 0160679044, ISBN-13: 978-0160679049.
Williams, Christine and Mary-Ellen Derro. 2008. NASA Systems Engineering Behavior
Study, October, 2008. Publication TBD.
Chapter 2—Stakeholder Expectations and Requirements Definition
The top priority of any new program or project is to define and scope the
customer's expectations. These expectations lay out the mission, the required
capability, or the market opportunity. They drive all of design, development,
integration, and deployment. Customer expectations, established by the mission
need statement or capability definition, represent the problem space for systems
engineering. We use them to identify relevant stakeholders and the technical
requirements that describe what the mission is to achieve. Failure to adequately
capture these initial expectations results in a mismatch between what is needed
and what is delivered. As Yogi Berra famously said, "If you don't know where
you're going, you'll probably end up some place else."
Evolving initial customer expectations into a set of stakeholder requirements,
and translating these into technical requirements and system architecture is highly
iterative, as Figure 2-1 shows. In fact, developing solution concepts and
architectures often helps better define the need or capability itself.
FIGURE 2-1. Iterative Evolution of the Problem and Solution Spaces. The early elements of
systems engineering are iterative and interdependent, rather than procedural and
linear.
Figure 2-2 provides a context for the discussions in Chapters 2 through 5, and
emphasizes their interdependence. A "process oriented" view of systems
engineering suggests a procedural and linear implementation. In reality, these
processes are iterative and interdependent. While we mustn't confuse the problem
with the solution, better understanding of one leads to a better definition of the
other. An architecture often helps clarify the scope of the problem and the
stakeholder expectations, while the concept of operations helps us understand the
requirements that an architecture must satisfy.
Stakeholder requirements, their expectations, and major concepts drive the
development of the concept of operations which helps us assess system concepts
in an operational context. It also validates stakeholder expectations and mission
requirements. Table 2-1 reflects these activities and also outlines this chapter. We
start by seeing how to define the customer's initial expectations, then step through
the other activities. From there we can begin to define concepts and operational
architectures, and eventually technical requirements, as described in subsequent
chapters. Figure 2-2 diagrams this process.
Figure 2-2. An Abstraction of the Framework Presented in Table 2-1. While the systems
engineering process is iterative, our understanding of the problem and solution
matures progressively.
Step / Description / Where Discussed
• 2.3 Solicit and synthesize stakeholder expectations and requirements (Section 2.3)
• 2.5 Rank order mission requirements and identify critical acceptance criteria (Section 2.5)
• 2.7 Validate and baseline requirements; translate into technical requirements (Section 2.7)
* For our purposes, customer expectations encompass the mission or capability desired, or the market opportunity being targeted.
Finally, a study of how operators and maintainers use a system may suggest
ways to make this interaction more efficient and the overall system more effective.
In a complex system, the weakest link is often the human operator or maintainer.
A study of this link may be the basis for system improvements—in functionality,
reliability, safety, and security. Many of the efforts in NASA's Constellation
Program are aimed at reducing the cost and complexity of Ares 1 ground
processing. The automobile industry has designed many improvements based on
behavior analysis. These include automatic tilt-down of the passenger-side view
mirror (to aid in parallel parking), auto-dimming rear-view mirrors, and cut-out
trunks for easier storage, to name a few.
• We can use a common format to write a good mission need statement. "[A
customer] has a need to [do something]." This is a succinct statement of the
operational need. We can then describe the rationale or business case. This
form expresses a need to "have" or "do" something, not a need "for"
something. This way, we focus on the need. If we state a need "for"
something, the focus becomes the solution. For example, we could write the
statement as "The US Forest Service needs to more effectively detect and
monitor potentially dangerous wildfires." This is better (less solution-
dependent) than writing it as "The US Forest Service has a need for a fire
detection satellite system".
With this in mind, let's review the initial customer expectations for the
FireSAT project, as defined by the customers (US Forest Service (USFS), NASA,
and the National Oceanic and Atmospheric Administration (NOAA)), and stated
as goals with associated objectives. Goals are generally qualitative and descriptive.
Objectives are specific and quantitative expansions on goals.
For a systems engineer, Table 2-2 is the beginning of understanding
stakeholder expectations. We have to clarify, validate, and further detail the list in
the table by eliciting the expectations from all the key stakeholders. We must also
understand the constraints that the legacy environment and the regulations
impose upon any solution. For example, what does "potentially dangerous
wildfires" mean? Will our system just detect and monitor certain wildfires, or will
it also be able to assess the potential "danger" associated with them? This
difference will have a significant impact on the system we conceive. Furthermore,
some of the objectives as stated may actually be compound stakeholder
requirements, or multiple requirements included in a single stakeholder
expectation. We have to resolve this matter as well.
TABLE 2-2. FireSAT Need, Goals, and Objectives. This is the beginning of the process of
understanding stakeholder expectations.
Mission Need: The US Forest Service (USFS) needs a more effective means to detect and monitor potentially dangerous wildfires.
Goals and objectives:
1. Provide timely detection and notification of potentially dangerous wildfires
   1.1. Detect a potentially dangerous wildfire in less than 1 day (threshold), 12 hours (objective)
2. Provide continuous monitoring of dangerous and potentially dangerous wildfires
   2.1. Provide 24/7 monitoring of high-priority dangerous and potentially dangerous wildfires
4. Reduce the risk to firefighting personnel
   4.1. Reduce the average size of fire at first contact by firefighters by 20% from the 2006 average baseline
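One way to keep these threshold and objective values traceable into later technical requirements is to record them as structured data from the start. The sketch below is a hypothetical representation of the objectives in Table 2-2; the field names and the use of a Python dataclass are illustrative assumptions, not a format the FireSAT project or the book defines.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical structured capture of the FireSAT objectives in Table 2-2, so
# threshold/objective values can be traced into later technical requirements.
# The field names and this representation are illustrative assumptions.

@dataclass
class Objective:
    identifier: str
    text: str
    threshold: str                   # minimum acceptable performance
    objective: Optional[str] = None  # desired (stretch) performance, if stated

firesat_objectives = [
    Objective("1.1", "Detect a potentially dangerous wildfire",
              threshold="in less than 1 day", objective="in less than 12 hours"),
    Objective("2.1", "Monitor high-priority dangerous and potentially dangerous wildfires",
              threshold="24/7 coverage"),
    Objective("4.1", "Reduce the average fire size at first contact by firefighters",
              threshold="20% reduction from the 2006 average baseline"),
]

for obj in firesat_objectives:
    stretch = f" (objective: {obj.objective})" if obj.objective else ""
    print(f"{obj.identifier}: {obj.text}; threshold: {obj.threshold}{stretch}")
```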
contextual setting. This drives the definition of the boundary of the new system.
Undocumented or implicit interfaces and interactions pose significant risk and
can lead to unanticipated test and integration challenges. So we must understand
and make explicit the as-is contextual setting, business processes, and
operational environment. To fully understand the legacy system context, we
should gather three types of information:
• The legacy processes, including the logical interfaces
• The legacy resources, including the physical interfaces
• Limitations within the legacy context
The legacy processes describe how the legacy system is used. They specify the
business or mission flow of work as the users perform their duties. The processes
may be defined with a graphical representation (e.g., use cases, interaction
diagrams, etc.) or in text. Processes show how the business uses the legacy
systems, and the logical interfaces between these systems.
The legacy resources describe the physical assets and resources used to perform
the legacy processes and the interactions between these resources. Physical
resources include hardware, software, and people, as well as the interfaces
between these resources.
The legacy context limitations describe any defects, limitations, or potential risks
within the legacy processes. This information allows the systems engineer to better
understand the strengths and weaknesses of current processes and provide a
better solution to meet the operational need.
The legacy context described above may not apply to a completely new
mission or operational capability, but we still have to understand how the
operational need fits within the mission or business processes. The systems
engineer must understand the environment that the current need lives within to
better understand the need and, subsequently, the more detailed operational
requirements. We must also know who will interface with the system, to
determine possible stakeholders.
Consider a military SATCOM with a need to increase geostationary
communications bandwidth and coverage. A lot of capability already exists within
the legacy context. Any new systems built to support the mission must fit within
this context. The processes for spacecraft command and control have plenty of
heritage so the new satellite system has to use the same operational processes (with
minor differences to account for expanded capabilities). Similarly, we would face
considerable pressure to leverage the existing military infrastructure that supports
geostationary satellite communications command, control, and operations. Remote
tracking and data relay sites, command and control centers, and operations centers
represent massive investments and years of design and development and will only
be modified or replaced reluctantly. Finally, compatibility with these existing
processes and resources imposes a host of constraints on the new system. In many
cases, we can't exploit the latest technologies or different concepts of operations
because we have to work within the existing architecture. Few development
2.2.3 Sponsors
Sponsors control the program development and procurement funding or
resources. They either buy the product or fund it with a specific mission in mind
or a profitability objective. There may be more than one sponsor for a given need
(Table 2-3). A system may be targeted for several industries or to multiple
sponsoring stakeholders (e.g., Boeing's 787 has multiple stakeholders at many
airlines, cargo air carriers, aircrew members, ground crew members, etc.).
As the holder of the purse strings, the sponsor is arguably the most important
stakeholder. A customer may be either a passive or an active stakeholder. It
depends on whether the customer interacts with the system when it is operational.
An example of a sponsor who's an active stakeholder is a person who buys a
computer for personal use. This person is a sponsor (purchases the product) and an
active stakeholder (uses the product). An example of a passive customer is the
Army Acquisition Office, which buys the soldiers' weapons. The Acquisition Office
is the sponsor (buys the product), but does not use the product (the soldiers use it).
Table 2-3. Stakeholders and Their Roles for the FireSAT Mission. Sponsors provide funding,
and may be either active or passive.
(Table 2-3 columns: stakeholder and type; the rows are not reproduced here.)
The mission requirements should state what the product should do, which
current products can't do or don't do well enough. The requirement should state
some capability that is missing or what current capability is deficient and the nature
of this deficiency (it can't be done fast enough or cheap enough, it's not robust
enough for its intended environment, or it's not available enough of the time). We
should state the requirement in terms of the operational outcome rather than the
system response. Operational requirements must address both the functional
(capabilities) and nonfunctional (characteristics) needs of the stakeholders.
Mission requirements should apply for the entire lifecycle of the solution.
Different stakeholders have requirements for different phases of the lifecycle.
Users and other active stakeholders normally define requirements for the
operational phase. Sponsors may define requirements for all phases (cost and
schedule for the development phase, cost for the production phase, ease of
upgrade for the operational phase, ease of disposal for the retirement phase, etc.).
Passive stakeholders define constraining requirements for the development,
manufacturing, and production phases. We must include all phases of the lifecycle
when we capture stakeholder requirements. The word capture indicates that we
must take control of and own the stakeholder requirements. No matter who writes
the original requirements, the systems engineering team must own, manage, and
control those requirements.
requirements: How fast? How many? How big? How accurate? or When? Some
examples of stakeholder expectations that are system characteristics include:
• If a wildfire is detected, the US Forest Service should be notified within an
hour of detection. If possible, the notification time should be reduced to 30
minutes.
• Reduce the average size of a wildfire first addressed by firefighters by an
average of 20%
• Once detected and classified as a wildfire, it should be monitored 24/7
Table 2-4. Information Gathering Tools and Techniques. These tools and techniques are
invaluable in obtaining information from active and passive stakeholders and sponsors.
• Cost/benefit analysis: Helps prioritize business requirements by estimating the cost of delivery against anticipated business benefits. Helps refine scope and architecture.
• SWOT (strengths, weaknesses, opportunities, and threats) analysis: Identifies internal and external forces that drive an organization's competitive position in the market. Helps rank order system development and commercialization phases: time to market (market business opportunity) and probability of success.
• Brainstorming or whiteboard session: A method to develop creative solutions for a problem. We can also use it to define a set of problems.
• Field data and analysis: Analyzes existing and analogous systems and missions to understand pros and cons. Analyzes field data, when available, to deepen insights into the strengths and weaknesses of existing and analogous solutions.
"We need to reduce the loss of life and property caused by forest
fires each year"
"We need to be able to detect small fires before they get out of
hand"
TABLE 2-5. Examples of Requirements and Images. Here are some classic examples of
requirement statements and images that can help us understand the current or desired
situation. (ISDN = Integrated Services Digital Network.)
Requirement: "I need to have the same communication system at home as I have at work"
Image: "I've got phone wires in every nook and cranny"
Requirement: "I'd like to be able to go to the manual and find out how to add an ISDN line"
Image: "A typical day for me is 24 hours long"
Requirement: "I wish I could separate Caribbean, domestic, and Latin American calls"
Image: "Sometimes, our receptionist, you hear her screaming, 'Oh my God!'"
TABLE 2-6. Differences Between Requirements and Images. Requirements are known, explicit
needs. Images are unknown, latent needs.
Requirements: Describe a specific need or demand that will solve a current problem; are usually not emotional.
Images: Describe the context and condition of a solution's use; are emotional and vivid; draw on the customer's frustrations, anxieties, likes, and dislikes; conjure up a picture.
"My home was destroyed by a forest fire; why isn't the USFS
doing something?"
"Forest fires are critical to the health of the forest, but too much of
a good thing can be dangerous to people"
• Start with a question about the interviewee's role with regard to the need
• Maintain a proper balance between specificity and generality (we can think
of the guide as a discussion outline with prompts and reminders)
• Define the topic areas (6-9 is ideal) to be covered, summarized by
overarching themes
• Document the interview in outline form with topics and low-level
questions; subjects include past breakdowns and weaknesses, current
needs, future enhancements
• Include a few probes (2-4 per topic area); probes should flow from the conversation with the stakeholder and are typically variations on "why" questions
• Avoid closed-ended questions ("will," "could," "is," "do," "can" or other
question leads that can be answered with a simple yes or no)
• Have questions in the guide elicit both images and requirements, not
potential solutions
• Provide a closing statement and a plan for future contact or follow-up
2. Conduct the Interview—We must remember the goal of the interview: to
elicit mission requirements. Interviewers should not use the interview to sell or
develop solutions, and should prevent the interviewees from describing solutions.
At this point, we want to define the problem; then we can design solutions. We
have to cover the topics and discussion points defined in the interview guide, but
also allow for open-ended discussion. The interview should include at least two
people besides the interviewees—the interviewer (moderator) and the note taker.
Role of the moderator:
• Manage and conduct the interview
• Lead the interview (if there is more than one interviewer)
• Build rapport with the customer
• Execute the interview guide
• Take very few notes
• Write observations
• Probe the interviewees for their true requirements
Role of the note taker:
• Take verbatim notes. Do not filter the gathered data. Get vivid, specific, and
rich nuggets of information
• Rarely interrupt
• Capture the images within the notes (the emotions and verbal pictures)
• Write observations
A key to success in Voice of the Customer is having the persistence to ask "Why?" until
the customer's true need is identified.
FIGURE 2-3. Probing for Requirements. It’s often difficult to determine a stakeholder's true
needs—we must persist and probe until we're sure we’ve identified the needs.
Repeatedly asking “Why?" increases the depth of probing to get past expressed and
tacit data to the underlying need.
Unless we probe for answers, stakeholders will talk in terms of features they
think they need (e.g., "I need a database to store our customer information"). The
interviewer needs to find out why that feature is important and to determine what
the stakeholder need truly is (e.g., "I need to be able to access our customer
information more quickly"). While probing for requirements:
stakeholders will agree that the solution meets the requirement or its sign-off
condition. Acceptance criteria constitute an agreement between the stakeholders
and the development team—it says, "If the solution meets these criteria, it meets
the stakeholder requirement." Effective acceptance criteria reduce the risk to both
the development team and the customer, since they point to the core capabilities
and characteristics needed to satisfy the mission. Acceptance criteria afford several
benefits to the systems engineer and the development team. They:
• Provide the basis for the acceptance or sign-off of the mission requirements.
During solution validation and acceptance, the solution is deemed
acceptable for a requirement if it meets the acceptance criteria associated
with that requirement.
• Clarify or add detail to "fuzzy", vague, or poorly written mission
requirements, and help the development team and the customer agree on
their meaning. If a mission requirement contains words like "easy to use",
"world class", "fast", "inexpensive" or other vague expressions, the
acceptance criteria help clarify the meaning. For example, the business may
want a "world class web site". What does this mean? The customer's idea of
"world class" (much better than others) may not be that of the development
team (our usual product). Or if the functionality has to be "easy", what does
that mean? The acceptance criteria can clarify "easy" by saying that the
functionality will be provided "in fewer than 3 steps" or "in less than 5
seconds." By specifying the acceptance criteria, the stakeholder and
development team agree on the meaning of the vague requirement.
• Provide a basis for validation and user acceptance test plans. Acceptance
criteria provide all stakeholders (including the test team) insight into the
necessary number and complexity of systems tests.
We can later expand this table into a verification matrix as we develop the
derived system technical requirements from the stakeholder expectations, and it
becomes a critical part of system verification and validation. This is discussed in
more detail in Chapter 11.
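To make this concrete, the sketch below shows one possible way (a minimal illustration, not a prescribed format) to record a stakeholder requirement together with its acceptance criteria and rationale so that the same records can later grow into a verification matrix. The class name, identifiers, and wording are hypothetical.

from dataclasses import dataclass

@dataclass
class StakeholderRequirement:
    req_id: str
    name: str
    statement: str            # the "shall" statement
    acceptance_criteria: str  # sign-off condition agreed with stakeholders
    rationale: str            # why the requirement exists
    verification_method: str = "TBD"  # later: test, demo, inspection, analysis, or modeling

# One illustrative entry, loosely patterned on the detection requirement in Table 2-7
requirements = [
    StakeholderRequirement(
        "MR-1", "Detection",
        "The FireSAT system shall detect potentially dangerous wildfires "
        "(greater than 150 m in any linear dimension) with a confidence interval of 95%.",
        "Detection performance demonstrated at or above the 95% confidence level on agreed reference cases.",
        "The USFS has determined that a 95% confidence interval is sufficient."),
]

# Growing toward a verification matrix is then just a matter of filling in more columns
for r in requirements:
    print(f"{r.req_id} {r.name}: {r.acceptance_criteria} [{r.verification_method}]")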
TABLE 2-7. Stakeholder Requirements, Acceptance Criterion, and Rationale. We can expand
this table into a verification matrix. The matrix affords a compact summary of
requirements and their associated acceptance criteria.
1. Detection: The FireSAT system shall detect potentially dangerous wildfires (defined to be greater than 150 m in any linear dimension) with a confidence interval of 95%. Rationale: The USFS has determined that a 95% confidence interval is sufficient for the scope of FireSAT.
2. Coverage: The FireSAT system shall cover the entire United States, including Alaska and Hawaii. Rationale: As a US Government funded program, coverage of all 50 states is a political necessity.
3. Persistence: The FireSAT system shall monitor the coverage area for potentially dangerous wildfires at least once per 12-hour period. Rationale: The USFS has determined that this revisit frequency is sufficient to meet mission objectives for the available budget.
4. Timeliness: The FireSAT system shall send fire notifications to users within 30 minutes of fire detection (objective), 1 hour (threshold). Rationale: The USFS has determined that a 1-hour to 30-minute notification time is sufficient to meet mission objectives for the available budget.
5. Geo-location: The FireSAT system shall geo-locate potentially dangerous wildfires to within 5 km (objective), 500 m (threshold). Rationale: The USFS has determined that a 500-m to 5-km geo-location accuracy on detected wildfires will support the goal of reducing firefighting costs.
7. Design Life: The FireSAT system shall have an operational on-orbit lifetime of 5 years. The system should have an operational on-orbit lifetime of 7 years. Rationale: The USFS has determined that a minimum 5-year design life is technically feasible; 7 years is a design objective.
8. Initial/Full Operational Capability: The FireSAT system initial operational capability (IOC) shall be within 3 years of Authority to Proceed (ATP), with full operational capability within 5 years of ATP. Rationale: The on-going cost of fighting wildfires demands a capability as soon as possible. A 3-year IOC is reasonable given the scope of the FireSAT system compared to other spacecraft of similar complexity.
10. Ground System Interface*: The FireSAT system shall use existing NOAA ground stations at Wallops Island, Virginia and Fairbanks, Alaska for all mission command and control. The detailed technical interface is defined in NOAA GS-ISD-XYX. Rationale: The NOAA ground stations represent a considerable investment in infrastructure. The FireSAT project must be able to leverage these existing assets, saving time, money, and effort.
11. Budget: The FireSAT system total mission lifecycle cost, including 5 years of on-orbit operations, shall not exceed $200M (in FY 2007 dollars). Rationale: This is the budget constraint levied on the project based on projected funding availability.
KPPs in the initial capabilities document and validates them in the capabilities
description document. Defining KPPs often takes the collaboration of multiple
stakeholders, but it's critical to providing focus and emphasis on complex multi-year, multi-agency, multi-center (within NASA) development programs.
TABLE 2-8. Key Performance Parameters (KPPs) for the FireSAT Project. These are the
sacred requirements for the FireSAT Project from the stakeholder's perspective.
No. KPP
2 Coverage—The FireSAT system shall cover the entire United States, including Alaska and
Hawaii
3 Persistence—The FireSAT system shall monitor the coverage area at least once per 12-
hour period
4 Timeliness—The FireSAT system shall send fire notifications to users within 30 minutes of
fire detection (objective), 1 hour (threshold)
References
Institute of Electrical and Electronics Engineers (IEEE). 1998. Std 830-1998—IEEE Recommended Practice for Software Requirements Specifications.
National Aeronautics and Space Administration (NASA). March 2007. NPR 7123.1a—"NASA Systems Engineering Processes and Requirements."
Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics (OSD (AT&L)). July 24, 2006. Defense Acquisition Guidebook, Version 1.6.
Stevens Institute of Technology (SDOE). July 2006. Course Notes: "SDOE 625—Fundamentals of Systems Engineering, System Design and Operational Effectiveness Program."
Chapter 3—Concept of Operations and System Operational Architecture
FIGURE 3-1. The Concept of Operations. Developing a concept of operations is a major step in
translating stakeholder expectations and requirements into system and technical
requirements.
Table 3-1. Framework for Developing the Concept of Operations. This table outlines a
process for developing the mission’s concept of operations and the system's
operational architecture.
1 Validate the mission scope and the system boundary Section 3.1
4 Synthesize, analyze, and assess key implementing concepts for the Section 3.4
system and its elements
5 Document the concept of operations and the system’s operational Section 3.5
architecture; apply architectural frameworks
statement from Chapter 2: The US Forest Service needs a more effective means to
detect and monitor potentially dangerous wildfires. This mission has the goals and
objectives shown in Table 3-2.
TABLE 3-2. FireSAT Need, Goals, and Objectives. This is the beginning of the process of
understanding stakeholder expectations.
Mission Need: The US Forest Service needs a more effective means to detect
and monitor potentially dangerous wildfires
Goal 1. Provide timely detection and notification of potentially dangerous wildfires. Objective 1.1: Detect a potentially dangerous wildfire in less than 1 day (threshold), 12 hours (objective).
Goal 2. Provide continuous monitoring of dangerous and potentially dangerous wildfires. Objective 2.1: Provide 24/7 monitoring of high-priority dangerous and potentially dangerous wildfires.
Goal 4. Reduce the risk to firefighting personnel. Objective 4.1: Reduce the average size of fire at first contact by firefighters by 20% from the 2006 average baseline.
Ideally, our preconceived notion of how to apply the system doesn't constrain
the concept of operations. Instead, we start by describing the current operational
environment: inability to detect fires quickly and accurately, to forecast a fire's
spread tendencies, and so on. Then we describe the envisioned system to transition
the current environment, with its drawbacks, to the envisioned environment with
its benefits.
If we understand FireSAT's scope and goals, as well as the active stakeholders,
we can develop a simple context diagram for the system of interest and show how
to apply systems engineering principles. Because this book emphasizes space-
based systems, we take some shortcuts and make assumptions to arrive at an
implementing concept that includes a space-based element. But a different concept
is imaginable, such as more unpiloted airborne vehicles or fire-lookout towers.
Given the intent of the concept of operations, we begin with a simple black-box
view of the system and likely active stakeholders that represent the system's
context, as reflected in Figure 3-2. For complex systems, the development teams,
users, and operators often have an implementation concept and system elements
in mind, based on legacy systems and experience with similar systems. If a
reference operational architecture is available, and the community accepts the
system elements, we reflect this system-level perspective (Figure 3-3). Two other
perspectives on the initial context diagram for FireSAT are in Figures 3-4 and 3-5.
Diverse views are common and necessary to discussions with stakeholders and
within the development team. After several iterations, we converge on the final
context diagram, mission scope, and system boundary—based on trade studies of
the implementing concepts, scenarios, and so on.
FIGURE 3-2. A Simple Context Diagram for the FireSAT System. A context diagram reflects
the boundary of the system of interest and the active stakeholders.
FIGURE 3-3. Context Diagram for the FireSAT System, Including Likely Reference System
Elements. If we have a reference architecture for the system of interest, the context
diagram can reflect it.
Figure 3-4. Another Perspective on the FireSAT System’s Mission Scope and Boundary.
During a complex development’s early stages, people have different views of the
mission scope and system boundary. (Adapted from Wertz and Larson [1999].)
FIGURE 3-5. Yet Another Perspective on the FireSAT System’s Mission Scope and Boundary.
We use many graphical tools to depict the mission scope and system boundary.
Figures 3-2, 3-4, and 3-5 reflect typically diverse views that lead to the kinds of
questions summarized in Table 3-3.
As discussions and decisions develop from these types of questions,
stakeholders deepen their understanding of mission and system boundaries. This
is iterative, and we may have to proceed to the next step before we answer some
of these questions.
Table 3-3. Questions that Stimulate Discussion. These questions raise key issues, which lead
to decisions that move the program forward.
Question: Do orbits and constellation truly represent a system element, a contextual element, or just a constraint on the space element as part of our selected concept for implementing the space element (such as low versus geosynchronous Earth orbit)?
Discussion: Here we're also clarifying what is an actual element of the system versus simply the characteristics of one of these elements. In this example, we could argue that the FireSAT constellation represents the space element environment or context, rather than a specific system element.
Question: Is the mission-operations center part of NOAA or separate? Does FireSAT have a dedicated ground element for the FireSAT mission?
Discussion: We must be clear about the boundaries between the new system and the existing and legacy systems. We often need to think of the system boundary in logical and physical terms. NOAA may have a physical mission-operations center, but we still have to decide whether to embed new functions for FireSAT in this center. Would that be an advantage? Do we need to logically separate or combine them?
The funding and schedule constraints, plus necessary changes to the legacy
doctrine, may require the concept of operations to reflect an evolutionary transition
involving airborne reconnaissance between the current and intended states. NASA's
Constellation Program and the Vision for Space Exploration are other examples of
evolving concepts of operations. In this vision, the Shuttle is first to be replaced by
the Orion crew module lifted by the new Ares 1 launch vehicle to service the Space
Station in low-Earth orbit (LEO). LEO transportation would eventually be
augmented by the commercial carriers under the Commercial Orbital
Transportation Systems effort. Finally, a lunar mission concept of operations would
evolve using the Orion and Ares 1 elements along with the new Ares 5 heavy lift
vehicle to place lunar landers and other large payloads into LEO where they would
dock with the Orion vehicles for departure to the Moon and beyond.
For the FireSAT mission, Figure 3-6 captures the current environment, and
Figure 3-7 shows the envisioned environment. In summary, the concept of
operations should capture the current and envisioned operational environment in
six areas:
TABLE 3-4. FireSAT’s Key Performance Parameters. Performance drivers and constraints affect
the concept of operations. [Wertz and Larson, 1999]
Persistence—Monitor the coverage area for potentially dangerous wildfires at least once every 12 hours. First-order algorithm (low-Earth orbit): (number of spacecraft)/12 hr. First-order algorithm (geosynchronous): scan frequency. Performance driver: number of spacecraft for low orbit.
Timeliness—Send fire notices to users within 30 minutes of fire detection (objective), 1 hour (threshold). First-order algorithm (low-Earth orbit): onboard storage delay + processing time. First-order algorithm (geosynchronous): communications + processing time. Performance driver: storage delay (if applicable).
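As a worked illustration of these first-order algorithms, the sketch below computes a LEO revisit estimate and checks a notification time against the timeliness threshold. All numeric values are assumptions for illustration, not FireSAT design data.

def leo_revisit_hours(num_spacecraft: int, coverage_window_hr: float = 12.0) -> float:
    """First-order LEO persistence: revisit time scales as the coverage window over the number of spacecraft."""
    return coverage_window_hr / num_spacecraft

def leo_notification_minutes(storage_delay_min: float, processing_min: float) -> float:
    """First-order LEO timeliness: onboard storage delay plus processing time."""
    return storage_delay_min + processing_min

def geo_notification_minutes(comm_min: float, processing_min: float) -> float:
    """First-order GEO timeliness: communications plus processing time."""
    return comm_min + processing_min

# Example: one LEO spacecraft gives roughly a 12-hour revisit under this rough model,
# and a 45-minute store-and-forward delay plus 10 minutes of processing meets the
# 1-hour threshold but misses the 30-minute objective.
print(leo_revisit_hours(num_spacecraft=1))
print(leo_notification_minutes(45.0, 10.0) <= 60.0)
print(geo_notification_minutes(5.0, 10.0) <= 30.0)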
FIGURE 3-6. The “As-is” Environment to Be Addressed by the FireSAT Mission. It’s
important to understand the legacy (current) context (such as systems, policies,
procedures, and doctrine) when developing a concept of operations. (UAV is
unmanned aerial vehicle.)
FIGURE 3-7. The FireSAT Mission’s Envisioned Environment. Although we show the as-is and
envisioned environments in physical terms, we must also consider the logical
perspective through operational scenarios. (UAV is unmanned aerial vehicle.)
TABLE 3-5. Performance Characteristics for FireSAT Users’ Main Need: To Detect and
Monitor Forest Fires. These characteristics help us understand the boundary
conditions for each capability and identify likely operational scenarios.
Characteristic Description
Number of targets How many targets (simultaneous wildfires) the system is tracking
Benign or hostile environment A benign environment requires no special considerations. A
hostile environment involves weather problems or solar flares and
other potentially anomalous conditions.
Number of platforms A platform is a single instance of the system being designed, in
this case the FireSAT spacecraft
Number of air-ground stations Number of stations, or other active stakeholder assets, that the
satellites will need to communicate with
Nature of activity The system must support various types of activities, such as target
tracking, search and rescue, and environmental situations, that
may require special behavior or functions from the system
scenarios successfully. Words and simple cartoons help everyone understand and
agree on how the system might operate and how we might assign system elements
to parts of the operation. These sequence diagrams are also useful for system-level
testing and integration planning.
FIGURE 3-8. Simple Cartoon of Timeline for “Detect Wildfire” Capability. Pictures quickly
capture a simple scenario that shows how a fire starts, becomes larger, and is
detected by the FireSAT system, which then alerts NOAA and the firefighters.
(NOAA is National Oceanic and Atmospheric Administration.)
FIGURE 3-9. Block Diagram for an Operational Scenario That Responds to the “Detect
Wildfire” Capability. We use many informal and formal graphical methods to
describe operational scenarios. In this context, a “911” message is an alarm.
Table 3-6 describes the sequence for our first operational scenario and several
unresolved concerns, and Figure 3-10 is a simple illustration of this scenario in a
context diagram. This figure visually communicates interactions between system
elements and active stakeholders, but more formal schemes exist. For example,
graphics like Figure 3-11 make the sequence of threads explicit and enable
discussions that lead to agreed-on functional threads from end to end. Another
option is use-case scenarios, as we describe in Chapter 4.
TABLE 3-6. Using Descriptive Language to Capture an Operational Scenario. This approach is
important because it gives users and systems engineers a shared understanding of
how the system handles a capability. It also makes it easier to validate users'
expectations and requirements.
Seq.
# Event and Description Remarks and Questions
The sequence in Table 3-6 and Figures 3-9 through 3-11 helps us create
timelines for different aspects of this operational scenario, which we use to analyze
a capability's latency requirements (how quickly the system must respond).
Latency influences how we package and apply the capability, as we describe in
Section 3.4. Capturing an operational scenario in words and graphics helps users
and systems engineers agree and share an understanding of the capability. Figures
3-12 and 3-13 show how timeline analysis works. "Detection lag" depends on
Objective 1.1 —Detect a wildfire in less than 1 day (threshold), 12 hours (objective).
And the "notification lag" depends on Objective 1.2—Notify USFS within 1 hour
of detection (threshold), 30 minutes (objective). This timeline provides a time
budget for each of these periods, which later analyses will allocate to the functions
that complete each aspect of the scenario.
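The timeline budget itself is easy to manage as a simple allocation table. The sketch below (with hypothetical function names and allocations) sums the allocated segments and checks them against the notification threshold and objective from Objective 1.2.

# Thresholds from Objective 1.2; the allocation below is invented for illustration
NOTIFICATION_THRESHOLD_MIN = 60.0   # notify USFS within 1 hour (threshold)
NOTIFICATION_OBJECTIVE_MIN = 30.0   # 30 minutes (objective)

allocation_min = {
    "onboard detection processing": 5.0,
    "downlink to NOAA ground station": 20.0,
    "ground processing and fire confirmation": 10.0,
    "'911' message dissemination to USFS": 5.0,
}

total = sum(allocation_min.values())
print(f"Total notification lag: {total:.0f} min")
print("Meets threshold:", total <= NOTIFICATION_THRESHOLD_MIN)
print("Meets objective:", total <= NOTIFICATION_OBJECTIVE_MIN)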
FIGURE 3-10. Tracing an Operational Scenario on a Context Diagram. This type of diagram
brings users and systems engineers together in understanding required capabilities
and the system that will realize them.
Figure 3-11. Another Graphical Depiction of the Operational Scenario for “Detect Wildfire.”
Graphically depicting scenarios is critical to capturing and validating capabilities that
stakeholders want.
Figure 3-12. Timeline Analysis for the Operational Scenario. Analyzing how long detection
takes is the first step in allocating the time budget.
Figure 3-13. More Detailed Timeline Analysis for the Operational Scenario. This input is
critical to assessing and applying implementation concepts for the system elements.
[Wertz and Larson, 1999; Sellers, 2004]
FireSAT), we may need to identify more detailed concepts for selected subsystems.
Representative subsystems for FireSAT are its elements: ground-based, space-
based, and mission analysis. Thus, we must develop and analyze alternatives for
each element and then select a preferred concept.
For space missions, basic implementing concepts often develop around the
orbit or constellation design. (For a complete discussion of different orbit types see
Understanding Space [Sellers et al, 2005]). The same mission requires a different
satellite for every orbit type, and each orbit has advantages and disadvantages,
depending on the mission's goals and objectives. Table 3-7 summarizes
alternatives for the space-based element.
Generating conceptual solutions is highly subjective, so experience and
familiarity with similar systems often influence this process, which involves creative
thought and innovation. McGrath [1984] notes that people working separately on
concepts "generate many more, and more creative ideas than do groups, even when
the redundancies among member ideas are deleted, and, of course, without the
stimulation of hearing and piggy-backing on the ideas of others." On the other hand,
groups are better for evaluating and selecting concepts. Design teams often use
techniques such as brainstorming, analogy, and checklists to aid creative thinking
and conceptual solutions:
TABLE 3-7. Examples of Orbit-defined Implementing Concepts for Typical Earth Missions. The orbit we choose strongly affects which concept to use for space missions. This table summarizes typical special and generic orbits with their advantages and disadvantages. We use such a table to trade off implementing concepts for the system elements. (ISS is International Space Station.)
Molniya: Period = 12 hr; inclination = 63.4 deg. Advantages: fixed apogee and perigee locations; great high-latitude coverage. Disadvantages: high altitude is more expensive to reach; high altitude means lower imaging resolution and higher space losses. Typical missions: communications for high arctic regions.
Low-Earth orbit (LEO), single satellite: Period 90-120 min; inclination 0-90 deg. Advantages: low altitude is less expensive to reach; low altitude means higher imaging resolution and lower space losses. Disadvantages: limited coverage; more complex tracking. Typical missions: science missions, ISS, Space Shuttle.
LEO constellation: Period 90-120 min; inclination 0-90 deg; multiple planes, multiple satellites per plane. Advantages: same as a single LEO satellite; more satellites expand coverage. Disadvantages: multiple satellites and possibly multiple launches mean higher cost. Typical missions: communications (such as Iridium).
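When we evaluate orbit-defined concepts like these as a group, a weighted decision matrix is one common aid. The sketch below is illustrative only; the criteria, weights, and scores are invented and would in practice come from the team's trade studies.

# Invented weights and 1-5 scores (higher is better) for a simple weighted trade
criteria_weights = {"coverage": 0.4, "cost": 0.3, "imaging resolution": 0.3}

concept_scores = {
    "Molniya":              {"coverage": 4, "cost": 2, "imaging resolution": 2},
    "LEO single satellite": {"coverage": 2, "cost": 5, "imaging resolution": 5},
    "LEO constellation":    {"coverage": 5, "cost": 3, "imaging resolution": 5},
}

def weighted_score(scores: dict) -> float:
    # Sum each criterion score multiplied by its weight
    return sum(criteria_weights[c] * s for c, s in scores.items())

# Rank the concepts from highest to lowest weighted score
for concept, scores in sorted(concept_scores.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{concept}: {weighted_score(scores):.2f}")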
FIGURE 3-14. First Concept for Implementing the Space-based Element. This concept is one
we synthesized for the FireSAT mission. (IR is infrared; GEO is geosynchronous
orbit; Comm is communication; EELV is Evolved Expendable Launch Vehicle.)
FIGURE 3-15. Second Concept for Implementing the Space-based Element. This is another
way of meeting FireSAT's mission. Now we must assess it against the concept in
Figure 3-14 and select the better one for development. (IR is infrared; LEO is low-Earth orbit; EELV is Evolved Expendable Launch Vehicle; RAAN is right ascension of the ascending node; Comm is communication.)
control how contractors and other government organizations prepare and present
architecture information.
One of the first frameworks was the Department of Defense's Architecture
Framework (DoDAF) [Dam, 2006], which compares architectures from concept
definition through system design. Several agencies involved in space use the
DoDAF. The National Security Agency, National Geospatial Intelligence Agency,
and National Reconnaissance Office are a few examples.
Table 3-9 summarizes the DoDAF 1.0 artifacts; it offers a special version of some of the products, such as SV-5, to focus on net-centricity. DoDAF 1.0's artifacts fall into four categories of views. First are the "all views" or common views, consisting of only two products: the AV-1, which is an executive summary of the architecture, and the AV-2, which captures all architecture definitions. The AV-2 is more than a glossary; it also includes the metadata and schema for collecting the architecture information.
TABLE 3-8. Suggested Outline for a System Concept of Operations. This table consolidates the
major aspects of a concept of operations.
TABLE 3-9. Products of the Department of Defense’s Architecture Framework (DoDAF). The OV-1 represents the system’s operational architecture.
All Views, AV-1, Overview and summary information: Scope, purpose, intended users, environment depicted, analytical findings
All Views, AV-2, Integrated dictionary: Architecture data repository with definitions of all terms used in all products
Operational, OV-1, High-level operational concept graphic: High-level graphical and textual description of the operational concept
Operational, OV-2, Operational node connectivity description: Operational nodes, connectivity, and information exchange need lines between nodes
Operational, OV-3, Operational information exchange matrix: Information exchanged between nodes and the relevant attributes of that exchange
Operational, OV-4, Organizational relationships chart: Organizational roles or other relationships among organizations
Operational, OV-5, Operational activity model: Capabilities, operational activities, relationships among activities, inputs, and outputs; overlays can show cost, performing nodes, or other pertinent information
Operational, OV-6a, Operational rules model: One of three products that describe operational activity—identifies business rules that constrain operation
Operational, OV-6b, Operational state transition description: One of three products that describe operational activity—identifies business process responses to events
Operational, OV-6c, Operational event-trace description: One of three products that describe operational activity—traces actions in a scenario or sequence of events
Operational, OV-7, Logical data model: Documentation of the system data requirements and structural business process rules of the operational view
Systems, SV-1, Systems interface description: Identification of systems nodes, systems, and system items and their interconnections, within and between nodes
Systems, SV-2, Systems communications description: Systems nodes, systems, and system items, and their related communications lay-downs
Systems, SV-3, Systems-systems matrix: Relationships among systems in a given architecture; can be designed to show relationships of interest, e.g., system-type interfaces, planned vs. existing interfaces, etc.
Systems, SV-4, Systems functionality description: Functions performed by systems and the system data flows among system functions
Systems, SV-5, Operational activity to systems function traceability matrix: Mapping of systems back to capabilities or of system functions back to operational activities
Systems, SV-6, Systems data exchange matrix: Details of system data elements being exchanged between systems and the attributes of those exchanges
Systems, SV-7, Systems performance parameters matrix: Performance characteristics of systems view elements for the appropriate timeframes
Second are the operational views (OVs). These include a high-level concept
diagram (OV-1), interface diagrams (OV-2 and OV-3), organizational chart (OV-4),
functional analysis products (OV-5 and OV-6), and the logical data model (OV-7).
We say the OV-1 is the "as is" of the contextual setting for FireSAT (Figure 3-3b).
Third are the system views (SVs). They include several versions of interface
diagrams (SV-1, SV-2, SV-3, and SV-6); functional analysis products (SV-4, SV-5,
and SV-10); a performance matrix (SV-7); transition planning diagrams (SV-8 and
SV-9); and finally the physical data model (SV-11).
Last are the two standard technical views: one showing the current (or near-
term projected) standards and the other forecasting standards that may appear
over the life of the architecture.
The DoDAF is just one of the architectural frameworks used within industry
and government. It was originally designed for Command, Control,
Communications, Computers, Intelligence, Surveillance, and Reconnaissance
(C4ISR) but now applies generally to DoD systems. The Zachman Framework
(Figure 3-16) focuses on enterprise architectures. Many of the ideas and concepts
from Zachman have become part of the Federal Enterprise Architecture Framework
[CIO, 2001], which now includes reference models and has been applied to projects
within the US federal government.
FIGURE 3-17. High-level Context and Concept View of FireSAT. This view clarifies the
envisioned system’s “physical boundary” within the larger context. (UAV is
unmanned aerial vehicle.)
FIGURE 3-18. An NxN Diagram Showing the Connectivity among Envisioned System
Elements. This diagram further clarifies the interfaces among the system elements.
(RF is radio frequency; C2 is command and control.)
TABLE 3-10. The Operational Nodes within the FireSAT System. This table lists the operational
nodes and a loose connectivity mapping of the system’s physical elements.
Wildfire command center • Receive recommendations and requests from field offices
• Rank order needs
• Generate tasking orders to firefighting assets
• Request and receive archival data
Firefighting assets • Receive orders from command center
• Fight wildfires
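One way to keep this connectivity mapping current is to hold it as a simple N-squared structure, as sketched below. The element names and interface descriptions are illustrative, patterned loosely on Figure 3-18 and this table.

# Illustrative N x N interface view: rows and columns are elements or nodes, and each
# off-diagonal cell names the interface between them. A "-" means no direct interface.
elements = ["Space element", "NOAA ground stations", "Wildfire command center", "Regional field offices"]

interfaces = {
    ("Space element", "NOAA ground stations"): "telemetry and commands (packetized data, RF link)",
    ("NOAA ground stations", "Wildfire command center"): "'911' messages and archival data (secure internet link, storage media)",
    ("Regional field offices", "Wildfire command center"): "recommendations and requests (email)",
}

for row in elements:
    # Look up the interface in either direction, since the matrix is symmetric here
    cells = [interfaces.get((row, col)) or interfaces.get((col, row)) or "-" for col in elements]
    print(f"{row:25s} | " + " | ".join(c[:35] for c in cells))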
References
ANSI/AIAA G-043-1992. 1993. Guide for the Preparation of Operational Concept
Documents. Reston, VA: American Institute of Aeronautics and Astronautics.
Asimow, Morris. 1964. Introduction to Design. Englewood Cliffs, NJ: Prentice Hall, Inc.
Chief Information Officers Council (CIO). February 2001. A Practical Guide to Federal
Enterprise Architecture, version 1.1; available at the CIO website.
Dam, Steven H. 2006. DoD Architecture Framework - A Guide to Applying System Engineering
to Develop Integrated, Executable Architectures. Marshall, VA: SPEC Publishing.
Defense Systems Management College (DSMC). 2001. Systems Engineering Fundamentals.
Fort Belvoir, VA: Defense Acquisition University Press.
Larson, Wiley J. and Wertz, James R., eds. 1999. Space Mission Analysis and Design, 3rd
Edition. Torrance, CA: Microcosm Press and Dordrecht, The Netherlands: Kluwer
Academic Publishers.
McGrath, J. E. 1984. Groups: Interaction and Performance. Englewood Cliffs, NJ: Prentice Hall,
Inc.
Osborn, A. 1957. Applied Imagination - Principles and Practices of Creative Thinking. New York,
NY: Charles Scribners Sons.
Pugh, S. 1991. Total Design: Integrated Methods for Successful Product Engineering. New York,
NY: Addison-Wesley, Inc.
Sellers, Jerry Jon. 2004. Understanding Space. 3rd Edition. New York, NY: McGraw-Hill
Companies, Inc.
Zachman, John. 1987. A Framework for Information Systems Architecture. IBM Systems Journal, Vol. 26, No. 3, pp. 276-290.
Chapter 4—Engineering and Managing System Requirements
Table 4-1. Best Practices for Developing and Managing Requirements. The steps below focus
effort on the proper issues. (APMSS is Applied Project Management for Space
Systems [Chesley et al., 2008].)
Identify external interfaces: Determine the boundary between the system and the rest of the world, clarifying the system’s inputs and outputs. (Section 4.1.4; Chaps. 2, 3, 5, and 15)
Develop system requirements to meet stakeholder expectations: Develop a set of system requirements derived from the stakeholders’ expectations (capabilities and characteristics). These requirements define what the solution must do to meet the customer’s needs. (Section 4.2; Chaps. 2 and 5)
Write defect-free system requirements: Guide system design, in unambiguous terms, toward what the stakeholders expect. Expose assumptions and factual errors; capture corporate knowledge. Write all requirements at the correct level, being sure that we can trace them back to their origins. Avoid implementation. Identify each requirement's verification technique and associated facilities and equipment. (Section 4.2; Chaps. 2 and 11)
Organize system requirements: Include the appropriate types of requirements, making sure that the development team can locate them within a document or database. Organize system requirements in a way that helps the reader understand the system being specified. (Section 4.3)
Baseline system requirements: Validate that requirements are correct, complete, consistent, and meet project scope and stakeholders’ expectations; don’t add gold plating. (Section 4.4; Chaps. 2 and 11; APMSS Chaps. 5, 6, and 8)
Manage system requirements: Control the foregoing activities; develop defect-free requirements to reduce unnecessary changes caused by poor requirements; capture management information; control necessary requirement changes. (Section 4.5; Chaps. 2, 3, 15, and 16; APMSS Chaps. 7, 8, and 14)
share it with the whole team so everyone has the same vision and viewpoint when
generating and reviewing the requirements.
Once the team has gathered the scope elements from all stakeholders, they
prepare the scope document and review it with stakeholders to get their buy-in. A
representative from each stakeholder organization signs the scope document,
which then goes under configuration management just like any other project
document.
designers, developers, end users, trainers, testers, maintenance personnel, reliability personnel, safety personnel, operations personnel, and customers
We must identify relevant stakeholders by office code if possible and plan how
we will manage their expectations and acquire the information we need from
them. Chapter 2 suggests dividing stakeholders into active and passive
stakeholders. Active stakeholders use the system and typically generate capabilities
the system must have. Passive stakeholders influence the system and often generate
characteristics that the system must have. Each stakeholder has a unique
perspective and vision about the system, and each is a source of requirements. By
gathering stakeholder needs and expectations during scope development, we
reduce the likelihood of surprise later in the development cycle. Incorporating new
expectations (requirements) into the system late in the lifecycle entails a lot of
rework and resources—staffing, cost, and schedule.
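A simple stakeholder registry helps us act on this distinction. The sketch below, with invented entries, records each stakeholder as active or passive and notes whether we expect capabilities or characteristics from them, so we can plan elicitation and avoid missing anyone.

# Illustrative registry; the names and classifications are assumptions, not project data
stakeholders = [
    {"name": "USFS field firefighters", "type": "active",  "contributes": "capabilities"},
    {"name": "NOAA ground operations",  "type": "active",  "contributes": "capabilities"},
    {"name": "Program sponsor",         "type": "passive", "contributes": "characteristics and constraints"},
    {"name": "Launch range safety",     "type": "passive", "contributes": "characteristics and constraints"},
]

# Group the registry so we can plan how to elicit expectations from each class
for kind in ("active", "passive"):
    names = [s["name"] for s in stakeholders if s["type"] == kind]
    print(f"{kind} stakeholders: {', '.join(names)}")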
Not all stakeholders are created equal. For example, the stakeholder holding
the purse strings, the sponsor, probably has the biggest influence. The end users
offer the most detail about how they work and hence what they expect the system
to do. We should use our list of stakeholders as a starting point and tailor it for the
system. Although we have to consult many people, missing just one during scope
development could result in considerable pain later on. Finding missing
requirements during development or testing—or worse yet, during operations—
guarantees cost and schedule overruns.
Scenario: Requirements are received from another organization or individual, without any rationale or “traceability” back to the mission need and goals. Remarks: This decoupling often results in the development of requirements that are not focused or aligned with the mission need and goals. It may lead to misinformed trade-offs and priorities. Furthermore, in the event of ambiguities, no good clarifications exist, resulting in incorrect assumptions.
Scenario: Requirements are solicited from an unresponsive set of stakeholders and customers. Remarks: This unhealthy scenario might reflect a lack of commitment on the part of the customers and stakeholders. They may respond with requirements that speak more to implementation than to the real need and goals.
Scenario: Requirements take a back seat to design and implementation. Remarks: In this approach, we simply assume that the engineers and developers inherently and consistently understand the real need, goals, and objectives. This highly risky approach is likely to develop a system that’s irrelevant to the mission, or unresponsive to the market opportunity.
Scenario: Requirements are developed by engineers and developers without engaging with stakeholders and customers. Remarks: This scenario reflects unhealthy engineering arrogance. The engineers and developers assume knowledge of stakeholder and customer needs and expectations. This approach is also very risky.
Although the goals are in general terms, we can verify they've been met.
Likewise, the objectives are in measurable terms, which the team can verify while
assessing the as-built system. Another way to look at goals and objectives is that
goals state the problem that a solution must solve (the need), and objectives state
the acceptable or expected performance.
• Resolve conflicts early—We must identify issues and address them early in
requirement development, instead of waiting until the requirement review,
or a major design review, to uncover a problem
• Set stakeholder expectations—The concept of operations establishes the
stakeholders' expectations and precludes surprises when we deliver the
system
TABLE 4-3. FireSAT Key Performance Parameters and Illustrative Constraints. Performance
drivers and constraints impact the selection of implementation concepts. [Larson and
Wertz, 1999] (IOC is initial operational capability; ATP is authority to proceed; FOC is
final operational capability; NOAA is National Oceanic and Atmospheric Administration;
GS-ISD-XYX is Ground System Interface System Document -XYX.)
Persistence—FireSAT shall monitor the coverage area for potentially dangerous wildfires at least once per 12-hour period. Performance driver: number of spacecraft for low orbit.
Timeliness—FireSAT shall send fire notifications to users within 30 minutes of fire detection (objective), 1 hour (threshold). Performance driver: storage delay (if applicable).
Except for the full operational capability schedule, all drivers depend on
assumptions about how the FireSAT spacecraft will be used. These assumptions
may also be program or project mandates, but they remain assumptions until
verified.
Figure 4-3. Context Diagram for the FireSAT Spacecraft's External Interfaces. This diagram
shows the major players in the FireSAT system, including the external interfaces to
wildfires, the US Forest Service, and the NOAA ground stations.
While each of these systems of interest may be subsystems to the higher level
system, their developers consider them to be systems and thus use the same
processes to develop their requirements. As we show in Figures 4-1 and 4-4 (and
in Chapters 2, 3, and 5), technical requirements development is an iterative process
in conjunction with architecture development and synthesis of the concept of
operations to define the solution in response to a customer or mission need.
Through these processes, the information becomes less abstract and more detailed
as well as solution- and implementation-specific. A set of technical requirements at
any level of abstraction is based on (and should be traceable to) the requirements and
the architecture at the next higher level. The "how oriented" requirements at any level
of abstraction become "what oriented" requirements at the next level down. Chapter
5 describes how to develop the solution architecture, and this section describes how to
concurrently develop the solution (system) requirements.
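The traceability rule above is straightforward to check mechanically. The sketch below, using hypothetical requirement identifiers, flags any system requirement that lacks a recorded parent at the next level up.

# Hypothetical mission-level requirement IDs and child system requirements with parent links
parent_requirements = {"MR-1", "MR-4"}
system_requirements = {
    "SYS-10": "MR-1",   # traces to mission requirement MR-1
    "SYS-11": "MR-4",
    "SYS-12": None,     # orphan: no parent recorded
}

# Any child whose recorded parent is not a known higher-level requirement needs attention
orphans = [rid for rid, parent in system_requirements.items()
           if parent not in parent_requirements]
print("Requirements missing upward traceability:", orphans)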
need, goals, and objectives. The expectations should state what the business or
mission needs, but currently cannot realize. The requirement should state some
missing capability or what current capability can't be done well enough (it can't be
performed fast enough, cheap enough, or it's not available enough of the time).
The expectations should be stated in terms of the operational outcome rather than
the system response. These stakeholder expectations must address the functional
(capabilities required) and nonfunctional (characteristics required along with
imposed constraints) needs of the stakeholders.
The capabilities define the functionality, the services, the tasks, and the
activities that the stakeholders need. Because the active stakeholders use or
interact with the system when it's operational, they usually specify the capabilities.
Nonfunctional expectations and constraints (cost, schedule, and legacy
implementations) are the operational characteristics; they define the necessary
quality attributes of the system. Nonfunctional expectations include performance,
assumptions, dependencies, technology constraints, security and safety, human
factors and other system "ilities", standards and regulations, and cost and
schedule constraints. Most nonfunctional expectations are measurable. Some are
stated in terms of constraints on the desired capabilities (e.g., constraints on the
speed or efficiency of a given task). We often use capability-based expectations to
develop one or more performance or quality characteristics.
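As a rough illustration of this split, the sketch below sorts a few example expectation statements into capabilities and characteristics using simple keyword hints. The hint list and statements are assumptions for illustration only; real classification requires engineering judgment.

# Crude keyword hints for nonfunctional (characteristic) expectations; purely illustrative
NONFUNCTIONAL_HINTS = ("within", "at least", "no more than", "available", "cost", "secure", "reliab")

expectations = [
    "Detect a potentially dangerous wildfire anywhere in the coverage area",
    "Send fire notifications to users within 30 minutes of detection",
    "Total lifecycle cost shall not exceed the approved budget",
]

for text in expectations:
    kind = "characteristic" if any(h in text.lower() for h in NONFUNCTIONAL_HINTS) else "capability"
    print(f"{kind:14s}: {text}")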
(Figure 4-5 content: the mission need or market opportunity leads to the stakeholder/mission requirements (capabilities, characteristics, and constraints). Capabilities are translated, supported by methods such as use-case scenarios and interaction diagrams, into input, output, and functional system requirements. Characteristics and constraints are translated, supported by market analysis, benchmarking, modeling, prototyping, simulation, and tradeoffs, and captured using methods such as quality function deployment, into performance and other nonfunctional system requirements. Both paths converge on the system requirements.)
FIGURE 4-5. Translation of Stakeholder Expectations into System Requirements. Different
methods and tools support the translation of capabilities and characteristics into
system requirements.
Facilities—Where will teams develop, test, assemble, integrate, and deploy the
system? Do any facility restrictions affect it, such as size, weight, or cleanliness?
Example: The FireSAT spacecraft shall pass fully assembled through
the payload entry and exit doors of the Spacecraft Final
Assembly Building #166 at KSC.
Reliability involves attributes that enable the system to perform and maintain its
functions under specified conditions for a given period, such as mean time
between failure. Examples of these attributes are the probability that it will
perform as specified, the failure rate, and the lifetime.
Maintainability is associated with the mean time to repair (or replace), so this
requirement covers the system's attributes that make it easy to maintain.
Operability concerns the system's attributes that improve the ease of everyday
operations.
Security and safety relate to attributes that enable the system to comply with
regulations and standards in these two areas.
Example: The FireSAT spacecraft shall meet payload safety
requirements in Chapter 3 of EWR 127-1, Eastern and
Western Range Safety Requirements.
Table 4-5. Input/Output (I/O) Matrix Information. A well thought out I/O matrix reduces surprises
later in the lifecycle.
Inputs (intended): These inputs (in a particular form and format, with a particular latency and bandwidth, etc.) are expected by the system for it to produce the necessary outputs or system behavior. The constraints associated with these inputs should also be articulated here.
Inputs (unintended): These unintended, and sometimes undesired, inputs are also important. They’re often beyond the system’s control, which defines the environment in terms of operating conditions, facilities, equipment and infrastructure, personnel availability, skill levels, etc.
Outputs (intended): Descriptions of desired outputs. Upon development and deployment, the system should produce these outputs.
Outputs (unintended): A newly developed system inevitably produces some undesirable outputs that, if anticipated in time, can be minimized. Examples include unintended electromagnetic and other emissions.
We show an example of an I/O matrix for different types of inputs and outputs
in Table 4-6. In this example, the systems engineer writes I/O performance
requirements for the inputs and outputs pertaining to each type of requirement.
For example, if the SOI is to receive a signal from an external system, the systems
engineer would write I/O characteristic requirements describing the pulse shapes
of the input signal, its data rate, its signal-to-noise ratio, etc.
TABLE 4-6. An Example of an Input/Output (I/O) Matrix. We replace the generic descriptions in
the matrix cells with specific or quantitative requirements data as it becomes known.
Inputs Outputs
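One convenient way to hold the I/O matrix while it fills in is a nested structure keyed by direction (inputs, outputs) and by intended versus unintended items, as sketched below with placeholder entries.

# Illustrative I/O matrix skeleton; every entry below is a placeholder, not FireSAT data
io_matrix = {
    "inputs": {
        "intended":   [{"item": "command uplink", "attributes": {"data rate": "2 kbps", "latency": "< 5 s"}}],
        "unintended": [{"item": "RF interference", "attributes": {"mitigation": "filtering, link margin"}}],
    },
    "outputs": {
        "intended":   [{"item": "'911' fire notification", "attributes": {"latency": "< 30 min objective"}}],
        "unintended": [{"item": "electromagnetic emissions", "attributes": {"limit": "per applicable EMC standard"}}],
    },
}

# Walk the matrix and print each entry with the attributes that become I/O requirements
for direction, kinds in io_matrix.items():
    for kind, items in kinds.items():
        for entry in items:
            print(f"{direction}/{kind}: {entry['item']} -> {entry['attributes']}")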
Checklists
The other approaches to translate the desired system characteristics into
system requirements depend on such factors as the maturity of an organization
with developing similar systems and the newness of the implementation concepts.
For example, organizations with experience in developing certain types of systems
(e.g., Ford Motor Company and automobiles, IBM and IT systems and solutions,
Nokia and cell phones) have the necessary domain knowledge and experience,
and understand the correlations between desired characteristics and related
system requirements. They have put together checklists delineating these
correlations for use by new programs. After all, their development projects have
much in common, and this way the development teams don’t have to go through
an extensive exploration and analysis to understand and derive the nonfunctional
system requirements. Illustrative examples of such checklists are included in
Figure 4-7. MIL-STD-490X-Specification Practices provides a set of guidelines and
checklists to support the development of nonfunctional system requirements in
the defense sector.
Unprecedented systems, or new technologies and new implementation
concepts, call for more engineering exploration, benchmarking, analysis and
modeling, prototyping, and tradeoffs. The engineering team has to address a
larger number of unknown variables and correlations. Models, prototyping,
benchmarking and trade-offs are vital. For example, one stakeholder expectation
might be, "The cell phone should feel good to the person holding it." The
engineering team needs to address this statement, and in the absence of domain
knowledge and experience with such matters, they have to make certain
engineering judgments. They build models and prototypes to test the correlations
and then perhaps leverage benchmarking to validate these correlations. The
engineering parameters they finally identify might relate to the physical shape of
the phone, its weight and center of gravity, its texture and thermal conductivity,
and so on. While each of these engineering parameters is important, equally
important are the correlations between them.
FIGURE 4-8. Quality Function Deployment (QFD) Process Steps. We follow this process to
move from the program need to the nonfunctional system requirements.
The I/O matrices help us develop the input and output performance or
characteristic requirements, but not the other types of nonfunctional requirements.
These requirements (e.g., reliability, supportability, maintainability, size, mass,
usability, etc.) call for another technique. Requirement checklists apply to systems
in environments where similar systems have been previously developed. And QFD
is a technique for all types of nonfunctional requirements (I/O and others) in which
a direct correlation exists between the stakeholder requirements and the system
requirements. Table 4-7 lists the steps to develop a QFD matrix.
FIGURE 4-9. Simplified Quality Function Deployment (QFD) for FireSAT Mission. This limited
set of needs and corresponding technical attributes is expanded significantly in the
complete design process. [Larson and Wertz, 1999]
TABLE 4-7. Steps to Develop a Quality Function Deployment (QFD) Matrix. The matrix reveals
correlations between expectations and nonfunctional requirements.
1. Identify and classify characteristics: These entries are the row headers of the matrix. From the list of stakeholder requirements, we find the characteristics (nonfunctional requirements). We then classify or group them into a hierarchy. The grouping may be based on several criteria, depending on the types and conciseness of the requirements. Possible ways to sort are: by requirement details (abstract to more detailed), by types of requirements (cost, availability, performance, etc.), or by the stakeholder (user, customer, external systems, operations and maintenance, etc.). In all cases, the goal is a firm definition of WHAT we want from the system.
2. Identify importance of characteristics: As Section 4.5.1 points out, we should rank order all requirements. The priorities may be relative (“this requirement is more important than that one”) or absolute (“this is the most important requirement, then this one, then...”). Either way, the priority represents the customer’s value system. This effort lets the systems engineer know which requirements are more important than others.
3. Identify relevant design parameters or system objectives: These entries are the column headers. This step is critical because the design parameters describe HOW the system will address the stakeholder requirements. They become system requirements. While the stakeholder requirements define what the stakeholder needs (in their language), the design parameters reflect what the system will do to meet the stakeholder requirements. The design parameters are the system requirements that relate directly to the customer requirements and must be deployed selectively throughout the design, manufacturing, assembly, and service process to manifest themselves in the final system performance and customer acceptance. Thus, the design parameters must be verifiable (they are often quantifiable) and meet the requirement ‘goodness’ characteristics defined in Section 4.3.1.
4. Correlate characteristics and design parameters or system objectives: This step populates the cells in the QFD matrix. We analyze each design parameter for its influence on customer requirements. The correlation matrix has several levels of correlation. Depending upon how much resolution we need, we use three to five levels of correlation.
5. Check correlation grid: We have to see that the design parameters address and impact every characteristic, eliminating any that do not. We look at each cell for:
• Blank rows that represent characteristics not addressed by the design parameters
• Blank columns that represent possibly unnecessary design parameters
• Rows and columns with very weak correlations that indicate possible missing correlations or design parameters
• A matrix that shows too many correlations, which typically suggests that the stakeholder requirements need more detail
6. Specify design priorities: This step helps us understand whether the potential solution will meet the design parameters (system requirements). It also helps us understand the design parameter priorities (which one is most important in meeting the customer’s need). This step may drive trade studies or design studies into potential solutions to meet the design parameters.
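The arithmetic behind steps 4 through 6 is simple: weight each correlation by the characteristic's importance and sum down each design-parameter column. The sketch below uses an often-seen 9/3/1 correlation scale; the importances and correlations themselves are invented for illustration.

# Invented stakeholder-characteristic importances (higher = more important)
importance = {"timely notification": 5, "nationwide coverage": 4, "affordable operations": 3}

# Correlation strengths: 9 = strong, 3 = medium, 1 = weak, 0 = none (a common QFD scale)
correlation = {
    "timely notification":   {"downlink latency": 9, "number of spacecraft": 3, "ground automation": 3},
    "nationwide coverage":   {"downlink latency": 0, "number of spacecraft": 9, "ground automation": 1},
    "affordable operations": {"downlink latency": 1, "number of spacecraft": 3, "ground automation": 9},
}

# Design-parameter priority = sum over characteristics of (importance x correlation strength)
priorities = {}
for characteristic, row in correlation.items():
    for parameter, strength in row.items():
        priorities[parameter] = priorities.get(parameter, 0) + importance[characteristic] * strength

for parameter, score in sorted(priorities.items(), key=lambda kv: -kv[1]):
    print(f"{parameter}: {score}")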
TABLE 4-8. A Sample Requirements Verification Matrix. Here we list sample system
requirements and acceptance criteria for the FireSAT system.
System requirement: The system shall provide current system status when the user logs on. Acceptance criteria: When the user logs on, the system must display the current status of all fires. Verification method: Demonstration.
System requirement: The system shall provide the current position of all fires within 5 km (threshold), 500 m (objective). Acceptance criteria: Using a simulator to inject a potential fire, the position of the leading edge of the fire must be within 5 km (threshold), 500 m (objective) of the injected position. Verification method: Simulation and modeling.
System requirement: The system shall have an availability of at least 98%, excluding outages due to weather. Acceptance criteria: The system must be operational for a 100-hour demonstration time period. Verification method: Demonstration.
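A matrix like this is easy to carry in a lightweight structure, as sketched below. The rows are condensed from Table 4-8, and the method codes are the typical ones described in the next paragraph; the structure itself is an assumption, not a mandated format.

# Condensed, illustrative verification matrix; method codes follow the T/D/I/A/S-M convention
verification_matrix = [
    {"requirement": "Provide current system status at user logon",
     "acceptance": "Current status of all fires displayed at logon", "method": "D"},
    {"requirement": "Report fire position within 5 km (threshold), 500 m (objective)",
     "acceptance": "Simulated fire position reported within tolerance", "method": "S/M"},
    {"requirement": "Availability of at least 98%, excluding weather outages",
     "acceptance": "Operational over a 100-hour demonstration period", "method": "D"},
]

methods = {"T": "test", "D": "demonstration", "I": "inspection", "A": "analysis", "S/M": "simulation/modeling"}
for row in verification_matrix:
    print(f"[{methods[row['method']]}] {row['requirement']} -> {row['acceptance']}")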
Typical verification methods are: formal test (T), demonstration (D), inspection (I), analysis (A), and simulation or modeling (S/M). We may add more information to the matrix during the system lifecycle as it becomes known.
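One lightweight way to hold such a matrix is as a list of records keyed by requirement, with the verification method stored as one of these codes. The Python sketch below is a hypothetical illustration, not the FireSAT project's actual data or tooling.

# Hypothetical sketch of a requirements verification matrix as simple records.
VERIFICATION_METHODS = {"T": "Test", "D": "Demonstration", "I": "Inspection",
                        "A": "Analysis", "S/M": "Simulation or modeling"}

rvm = [
    {"id": "SYS-001",
     "requirement": "Provide current system status when the user logs on",
     "acceptance": "Current status of all fires displayed at login",
     "method": "D"},
    {"id": "SYS-002",
     "requirement": "Provide fire position within 5 km (threshold), 500 m (objective)",
     "acceptance": "Leading-edge position within tolerance of injected fire",
     "method": "S/M"},
]

for row in rvm:
    # Every entry must use one of the agreed verification method codes.
    assert row["method"] in VERIFICATION_METHODS, f"Unknown method on {row['id']}"
    print(row["id"], "->", VERIFICATION_METHODS[row["method"]])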
TABLE 4-9. Examples of System and Statement-of-Work Requirements for FireSAT. Here we list system requirements and requirements for the statement of work. A complete list for a small project could comprise dozens of items.

System requirement: The FireSAT sensor shall operate on 28 Vdc.
Statement-of-work requirement: The contractor shall submit cost and schedule status each month according to...

System requirement: The FireSAT spacecraft shall have a lifetime of at least five years.
Statement-of-work requirement: The contractor shall do trade studies to determine...

System requirement: The FireSAT spacecraft shall have a diameter of not more than one meter.
Statement-of-work requirement: The contractor shall provide technical status following...
Mandatory Characteristics
Just because a sentence contains the word "shall" doesn't mean it's an
appropriate requirement. Every requirement must have three characteristics:
Other Characteristics
Requirements communicate what the project or the customer wants from the
provider or developer. The key to good requirements is good communication,
which depends on meeting five criteria:
• Simple—The same words should mean the same things. A gate is a gate, not
a door, portal, or opening. Good requirements can be boring to read; but we
shouldn't dive into a thesaurus to make them more interesting.
• Stated positively—Use positive statements. Verifying that a system doesn't
do something is next to impossible. Some exceptions exist, but in general we
should avoid negative requirements.
• Grammatically correct—Requirements are hard enough to understand without introducing poor grammar.
Things to Avoid
To improve requirements, avoid ambiguous terms and don't include words that refer to implementation (how to do something) or operations. Ambiguous terms aren't verifiable, so statements containing them fail the "Is it verifiable?" test. Hooks [2007] lists many examples of ambiguous terms.
There are many others. We should create a list for our requirement
specification, search all the requirements for these terms, and replace them with
verifiable terms.
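A project can automate much of this search. The following Python sketch is a minimal illustration; the term checklist and requirement text are made-up examples, not the published Hooks [2007] list.

import re

# Hypothetical project checklist of ambiguous terms (not the published list).
AMBIGUOUS_TERMS = ["as appropriate", "user-friendly", "adequate", "quickly",
                   "maximize", "minimize", "support"]

requirements = {
    "FS-101": "The FireSAT ground element shall respond quickly to operator commands.",
    "FS-102": "The FireSAT spacecraft shall downlink sensor data within 30 seconds.",
}

for req_id, text in requirements.items():
    # Flag any requirement statement containing a term from the checklist.
    hits = [t for t in AMBIGUOUS_TERMS
            if re.search(r"\b" + re.escape(t) + r"\b", text, re.IGNORECASE)]
    if hits:
        print(f"{req_id}: ambiguous terms found -> {hits}")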
Including implementation in a requirement means saying how to provide
something rather than what we need. Requirements that tell the developers how force them to use a particular design or solution that may not be the best one.
Although we may believe that stating the solution covers all real requirements or
needs, we could be missing the true need. The developer may deliver what
stakeholders ask for, but it may not be what they wanted.
For example, one of the requirements in the original specification for the DC-
3 aircraft was: "The aircraft shall have three engines." This statement imposed a
solution of three engines. The developer (Douglas Aircraft) didn't think it made
sense and asked the airline: "Why did you specify three engines?" The answer was
that if the aircraft lost one engine, it still needed to be able to land safely. So the real
requirement was: "The aircraft shall meet its requirements with a single engine
out." The initial requirement could have led to an aircraft with three engines that
crashed or became unstable if one of the three engines failed—totally missing the
stakeholder's intent.
Sometimes, we do have to state how to implement something, such as when a
higher-level requirement directs a particular solution or imposes a specific regulation
or standard. Still, best practices dictate that requirements typically state what
customers want, not how to provide it, so developers can deliver the best solution.
Operations show up in requirements when they blur the line between
operational concept and requirement. Operational concepts are an important part of
understanding what the requirements are, or the need for a particular requirement,
but they aren't requirements. In other words, we can't rewrite operational-concept
statements with shalls and turn them into requirements. Table 4-10 shows how we
must rewrite a requirement that includes an operational statement.
TABLE 4-10. Operations Statement Versus True Requirement. The requirement should state
what the product is to do, not how it will fit into an operational scenario.
Operations statement: The FireSAT sensor operator shall be able to receive a FireSAT sensor's data download no more than 45 [TBD] seconds after initiating an "out-of-cycle download" command.

True requirements: The FireSAT spacecraft shall downlink sensor data no later than 30 [TBD] seconds after receipt of an "out-of-cycle download" command from the ground. The FireSAT system shall display updated sensor data no more than 45 [TBD] seconds after initiation of an "out-of-cycle download" command.
We must remember that requirements need to follow the who shall do what
format, and that we write requirements for the system, not for people. The phrase
shall be or shall be able to often indicates an operations requirement. The
requirements should capture functions the system needs to do or features it must
have to meet the operational concept.
Correctly representing information in an operations statement may take more
than one requirement. The who for each of the new requirements in the example
refers to two systems: the FireSAT spacecraft and the FireSAT system as a whole.
For this reason, operations statements can cause requirement omissions throughout
the system hierarchy, so we need to remove them from all system requirements.
TABLE 4-11. Example FireSAT Requirement with Rationale. Including the rationale beside the
requirement connects every requirement to the reason it exists. This process reduces
misunderstandings during the entire program lifecycle.
Requirement: The FireSAT spacecraft shall meet the payload safety requirements in Chapter 3 of EWR 127-1, Eastern and Western Range Safety Requirements.
Rationale: Nothing can be launched from the Kennedy Space Center unless it complies with the ground safety standards in Chapter 3 of EWR 127-1.
may be very expensive, but an assessment may indicate that a wider tolerance is
acceptable. If we change our requirement to the wider tolerance, we may improve
cost and schedule during verification. Chapter 11 provides details on developing
verification requirements that address the "what," "when," and "how well" of
determining that the system has met a requirement.
FIGURE 4-10. Notional System-of-Interest Hierarchy for FireSAT. This diagram shows the various system-of-interest levels for the FireSAT program. Each level has stakeholders and needs requirements prepared to ensure it integrates well with the next higher level and contributes to mission success.
documents change. Small projects may be able to allocate and trace requirements
in a spreadsheet, with precautions for keeping ID numbers unique, but larger
projects need to use a requirement management tool to track everything properly.
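For a small project, the "spreadsheet with unique IDs" approach might look like the Python sketch below (illustrative only; the IDs and parent traces are hypothetical). The uniqueness and dangling-trace checks shown here are exactly the bookkeeping a requirement management tool automates.

# Minimal sketch of spreadsheet-style requirement tracing (hypothetical data).
rows = [
    {"id": "MIS-010", "text": "Detect wildfires within 1 hour", "parent": None},
    {"id": "SYS-020", "text": "Spacecraft shall downlink detections within 30 s", "parent": "MIS-010"},
    {"id": "SYS-021", "text": "Ground element shall notify USFS within 15 min", "parent": "MIS-010"},
]

ids = [r["id"] for r in rows]
assert len(ids) == len(set(ids)), "Requirement IDs must stay unique"

# Flag traces that point at requirements that no longer exist.
dangling = [r["id"] for r in rows if r["parent"] and r["parent"] not in set(ids)]
print("Dangling traces:", dangling or "none")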
Validating the scope and the resulting requirements is the key to developing
quality, defect-free requirements. We want to remove requirement defects before,
not after, we baseline.
The answers to these questions quantify the scope's completeness. We use the
above list to decide if we're ready to hold the scope review. If we answer no to any
of these questions, we must assess the risk of holding a review. A scope review
should confirm that we've answered these questions to the stakeholders'
satisfaction and have agreement that we're ready to move on to writing
requirements. Our last step in the scope review is to document agreement among
the stakeholders, with the understanding that all scope documentation then goes
under formal configuration control.
Criteria for the review are determined in advance, and reviewers measure
requirements only against those criteria. Example criteria might be: "The
requirements are free of ambiguous terms, have the proper who shall do what
format, include rationales, and identify the verification method." So we inspect a
reduced set of the requirements and focus on specific defects we wish to remove.
We use inspection results not only to remove the requirement defects but also
to help determine their root causes, so we can improve how we define our
requirements. For example, if we find that several authors repeatedly use certain
ambiguous terms, we should update our ambiguous-terms checklist. Inspections
are a proven method and integral to practices under the Capability Maturity
Model® Integration (CMMI®).
We use inspections for more than goodness or quality checks on the requirements. We also focus them on the requirements' content. For example, are
the requirements correct and consistent? If we're involved in model-based systems
engineering (MBSE), do they reflect the systems engineering models, and do the
models reflect what the requirements state? Validating the requirements is essential
to validating the evolving MBSE design. Even if we use MBSE, however, we have to
build the system and verify that it meets the requirements. The models aren't
sufficient to build the system; they simply help us understand the requirements the
same way, improve communications, and reduce misunderstandings.
In other words, we use requirements to generate the model, which in turn
helps us more efficiently determine the completeness of our requirements by using
the automated modeling tool capabilities to help us do consistency checks,
generate NxN interface diagrams, simulate system behavior, and complete other
automated cross-checking activities. But the model exists and is valid only insofar
as it correctly reflects the requirements.
Review of the requirements for a milestone review takes a lot of time and
money, so we want the requirements to be in the best condition possible. Table 4-
12 depicts a process to reduce the time to conduct the review and correct the
document, limit the effort of individual reviewers, and produce the best results
[Hooks and Farry, 2001].
TABLE 4-12. 4-1/2-step Process for a Requirement Review. Preparing for a
requirements review takes a staff of specialists and most of the
stakeholders to review the documents, make and review corrections,
and publish the best documents possible for the official review.
The 4-1/2-step process establishes the initial requirements baseline, such as for a system requirements review. It's equally effective in preparing a scope review or any other set of formal documents for review. Before distributing the requirements in a specification or requirements database, someone must edit the wording, and afterward technical specialists must check the contents for goodness. If we inspect
all along, these two steps shouldn't require much time or resources. Then the
stakeholders have a good set of requirements to review and won't waste time with
misspellings, bad formatting, or unclear wording. With all changes resolved and
approved, we need to assess risk using the requirement risk factors:
• Unclear—The requirement is ambiguous or unverifiable, or has poor
grammar, the wrong format, or inaccurate or missing rationale
• Incomplete—The requirement is missing related requirements or requirement
attributes
• Subject to change—The stakeholders are not in agreement, the requirement
depends on an undefined interface, or the selected technology is not ready
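These factors lend themselves to a simple rating rule. The Python sketch below is one hypothetical scheme, not a prescribed standard; each project defines its own mapping from risk factors to a high, medium, or low rating.

# Hypothetical sketch: derive a requirement risk rating from its risk factors.
def requirement_risk(unclear: bool, incomplete: bool, subject_to_change: bool) -> str:
    """Return 'H', 'M', or 'L' based on how many risk factors apply."""
    factors = sum([unclear, incomplete, subject_to_change])
    if factors >= 2:
        return "H"
    if factors == 1:
        return "M"
    return "L"

print(requirement_risk(unclear=False, incomplete=True, subject_to_change=True))   # H
print(requirement_risk(unclear=False, incomplete=False, subject_to_change=False)) # L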
If we do the 4-1/2-step process with proper management controls in place, we'll
never again have to apply this exact process to this document. We'll do
incremental changes and update versions, but we won't need a complete review.
Before we baseline the requirements, we can change them as needed with few
resources, such as inspections. But afterwards, a configuration change control
board will manage changes, which requires a lot more resources and more
strongly affects the schedule.
• Is structured correctly and written at the correct level (in who shall do what
format, with the who matching the system for which we're writing
requirements)
• Meets all characteristics of a good requirement (one thought, concise,
simple, positively stated, grammatically correct, and unambiguous)
• Has its requirement quality attributes (rationale, verification method,
traceability, and allocation) documented and validated
• Has its requirement management attributes (owner, author, risk, priority,
validation status, change status) documented and validated
• Is accessible with all of its attributes to all stakeholders for review and
status, either through direct access in an automated tool for managing
requirements or through periodic status reports
• Is properly assessed to determine how changes affect it
Previously, we discussed the first three bullets and how we use them to improve
the quality of our requirements. But what are requirement management attributes
and how do we use them to manage requirements?
If we have allocated and validated nearly 100 percent of the requirements, and the allocations themselves are correct (valid), we can safely assume we're nearly ready for the SRR.
The following is a list of requirement attributes whose metrics provide us with
management information. The list isn't all-inclusive, but does suggest the kind of
information we should maintain with each requirement, so we have statistics that
are invaluable in managing the system development.
Cross check against system models for projects using model-based systems
engineering.
• Requirement volatility—Determine the number of requirements with a
change number entered in the change status field, and evaluate over time to
determine if they need management attention. (This is most effective after
baselining the requirements.)
• Risk assessment—Determine the number of requirements with high (H),
medium (M), and low (L) risk; evaluate over time to determine how well
we're meeting the plan to reduce risk.
Note: In all of the above examples the team needs access to more than the
requirements for their own SOI. To manage their SOI requirements
effectively, they must also have access to the requirements (and related
requirement attributes) in the first level above and below their level of the
architecture.
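Such metrics fall out of the requirement attributes almost for free once we maintain them consistently. The Python sketch below uses hypothetical records and attribute names to count volatility and the risk distribution the way the bullets above describe.

from collections import Counter

# Hypothetical requirement records with management attributes.
requirements = [
    {"id": "SYS-020", "risk": "H", "change_number": "CR-042"},
    {"id": "SYS-021", "risk": "M", "change_number": None},
    {"id": "SYS-022", "risk": "L", "change_number": "CR-051"},
]

# Requirement volatility: how many requirements carry a change number.
volatile = sum(1 for r in requirements if r["change_number"])
print(f"Volatile requirements: {volatile} of {len(requirements)}")

# Risk assessment: distribution of H/M/L ratings, tracked over time.
print("Risk distribution:", Counter(r["risk"] for r in requirements))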
When we generate the trace to all the requirements that a change affects and find
that many are priority 1, we may have a serious problem. This situation is particularly
difficult to resolve if the change is driven by decisions outside our project's span of
control. Examples include a change in an interface owned by another organization, a
correction to a safety hazard, and violation of a regulation or standard.
Alternatively, if we respond to a budget cut or schedule reduction, we can
search for requirements that have priority 3. Again, we look at the requirement
traces to determine how many requirements are affected (some of which may be
priority 1) and then assess which set of priority 3 requirements will least affect the
priority 1 requirements.
Before we baseline the requirements, we manage requirement risk by focusing
on developing a valid scope and inspecting requirements to reduce the impact of
requirement change risk factors (defined in Section 4.4.4) for "understood one way",
"incomplete", and "subject to change". As we describe in the 4-1/2-step process, we
assign a value for requirement risk as we baseline the requirements. Before then,
requirements are too volatile and management overhead too large for the measure
to be useful. Requirements traceable to key performance parameters (KPPs)
identified during scope development always carry a risk rating of high or medium
because a change in them affects whether we meet stakeholders' expectations.
We also must manage requirements that continue to contain an outstanding To
Be Determined (TBD) after they're baselined. Some organizations automatically rate
such a requirement as high risk, with good reason. In fact, we can quickly check
whether requirements are ready for baselining by determining the number of TBDs;
more than one or two percent of the total requirements being TBD should be a red
flag. Still, we manage these much as we do the requirements themselves. First, we
generate a list (or matrix) of all To Be Determined requirements in any SOI. The list
needs to contain the requirement ID, requirement text, rationale, and another
requirement attribute called TBD resolution date. Rationale should have the source of
any numbers in the requirement and refer to the document that defines its value. We
then take them from the list and place them in a table or spreadsheet for tracking.
If we don't have a requirement-management tool, we just search for To Be
Determined requirements in our document and manually build a table containing
a TBD management matrix. For tracking, we could enter their numbers in a
tracking action log, but the requirement ID should be sufficient to track and
identify which one we're commenting on. Once the To Be Determined closes, we
won't need the TBD number. Table 4-13 is an example of a TBD tracking table.
Table 4-13. Example Management Matrix for To Be Determined (TBD) Requirements. Because
their uncertainty poses a risk to the project, we need to track TBD requirements until
we can resolve them.
We notice that the example TBD requirement also includes a value. We must have
a "current estimate" and place it in the requirement. This process at least bounds
the problem for the reader and gives a clue about the units, such as seconds or
milliseconds, centimeters or meters, grams or kilograms. By combining the
requirement statement (which identifies the requirement), the TBD related to it,
and the To Be Determined management matrix, we have all the information
needed to assess the status of our TBDs.
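If the TBD matrix lives in a spreadsheet export or small script, flagging overdue items and the red-flag percentage is straightforward. The Python sketch below is hypothetical; the requirement IDs, dates, and the assumed total requirement count are illustrative only.

from datetime import date

# Hypothetical TBD management matrix (Table 4-13 style).
tbds = [
    {"req_id": "SYS-030", "text": "Downlink latency shall not exceed 30 [TBD] seconds",
     "current_estimate": "30 seconds", "resolution_date": date(2026, 3, 1)},
    {"req_id": "SYS-041", "text": "Sensor mass shall not exceed 25 [TBD] kg",
     "current_estimate": "25 kg", "resolution_date": date(2025, 11, 15)},
]

today = date(2026, 1, 10)       # assumed status date for the example
overdue = [t["req_id"] for t in tbds if t["resolution_date"] < today]
tbd_fraction = len(tbds) / 400  # assuming roughly 400 total requirements

print("Overdue TBDs:", overdue)
print(f"TBD fraction: {tbd_fraction:.1%} (red flag above about 1-2%)")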
Any requirement for which we have an unresolved To Be Determined should
also have a high or medium risk, depending on the requirement's priority, how
soon we need the information to avoid affecting the schedule, and how the
unknown value may affect meeting stakeholders' expectations. When we know
the value, we enter it into the requirement and update its rationale to reflect the
justification for selecting the final value. Whether we change the risk attribute's
value depends on other risk factors.
To manage change we need to assess risk continually; we don't want to wait
until a problem occurs. One way to provide insight into the risk is a risk-priority
matrix, such as the one shown in Figure 4-11 for the FireSAT's system
requirements document. The risk-management plan states what to enter in the
blocks, but normally we identify requirements with high priority and high-to-
medium risk by requirement number. For tracking, the matrix depicts only the
number of requirements meeting other lower-risk criteria.
FIGURE 4-11. Notional Risk (Likelihood)—Priority (Impact) Matrix for FireSAT Requirements.
Such a matrix provides a snapshot of the project's current risk status.
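A snapshot like Figure 4-11 can be generated directly from the risk and priority attributes. The Python sketch below is a hypothetical illustration that tallies requirements into risk-priority cells and lists IDs only for the high-priority, high-to-medium-risk cells.

from collections import defaultdict

# Hypothetical requirement attributes: (id, priority 1-3, risk H/M/L).
reqs = [("SYS-020", 1, "H"), ("SYS-021", 1, "M"), ("SYS-022", 2, "L"),
        ("SYS-030", 3, "M"), ("SYS-041", 1, "L")]

cells = defaultdict(list)
for req_id, priority, risk in reqs:
    cells[(priority, risk)].append(req_id)

for (priority, risk), ids in sorted(cells.items()):
    # Show IDs only where management attention is warranted; otherwise just count.
    show_ids = priority == 1 and risk in ("H", "M")
    print(f"Priority {priority} / Risk {risk}:", ids if show_ids else len(ids))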
we will develop the scope while writing, but it's much easier to gather the
knowledge first.
If we take enough time up front to define scope before writing requirements and
stay consistent while defining product requirements, we increase our chances of
delivering a quality product. Otherwise, we’re guaranteed "big bad" requirement
documents and a failed or very unsatisfactory product.
References
Chesley, Julie, Wiley J. Larson, Marilyn McQuade, and Robert J. Menrad. 2008. Applied
Project Management for Space Systems. New York, NY: McGraw-Hill Companies.
Fagan, Michael E. July 1986. Advances in Software Inspections. IEEE Transactions on
Software Engineering, Vol. SE-12, No. 7, pp. 744-751.
Hooks, Ivy F., and Kristin A. Farry. 2001. Customer Centered Systems. New York, NY:
AMACOM.
Hooks, Ivy F. 2007. Systems Requirements Seminar. Boerne, TX: Compliance Automation Inc.
Larson, Wiley J. and James R. Wertz, eds. 1999. Space Mission Analysis and Design, 3rd edition. Torrance, CA: Microcosm Press and Dordrecht, The Netherlands: Kluwer Academic Publishers.
NASA, NPR 7123.1a. March 2007. NASA Systems Engineering Processes and Requirements.
Vinter, Otto, Søren Lauesen, and Jan Pries-Heje. 1999. A Methodology for Preventing Requirements Issues from Becoming Defects. ESSI Project 21167. Final Report, Brüel & Kjær Sound & Vibration Measurement A/S, DK-2850 Naerum, Denmark.
Chapter 5
System Functional and Physical Partitioning
FIGURE 5-1. Problems and Solutions Evolve Together. The early elements of systems
engineering are highly iterative and interdependent, not sequential or linear.
Step 1: Specify the system context (discussed in Section 5.1)
Step 2: Define the functional partitions (Section 5.2)
Step 3: Create the physical partitions and allocate functions to the physical components (Section 5.3)
FIGURE 5-2. Recursive Partitioning of Functions and Physical Components. This process
repeats and may advance in layers—top-down, bottom-up, or some combination of
both. (V&V is verification and validation.)
Figure 5-3. Developing System Architectures and Requirements. While the left side of the
figure reflects the evolution in expectations, requirements, and specifications, the right
side reflects the evolution in the concept of operations, the architectures, and the
system design. These activities are iterative at any level, and recursive across levels.
Iteratively define all three views and allocate the requirements to the
elements within these views to ‘completely’ define the architecture.
Figure 5-4. Architecture Viewpoints and Work Products. By developing these three architectures at the same time, we gain insights and produce a more complete solution. (IDEF0 is integrated definition for function modeling.)
[Figure content: the FireSAT mission architecture is a system of systems comprising the launch element, orbit and trajectory element, space element, ground element, and mission operations element, with the US Forest Service (USFS) and the NOAA ground stations as stakeholders and the wildfire as the subject.]
Figure 5-5. Space Element as Part of the FireSAT System of Systems. We focus on the
space element of the FireSAT System to illustrate the concepts discussed in this
chapter. (USFS is US Forest Service.)
• Portrays all external systems with which our system must interface and the
mechanisms for interfacing
• Provides a structure for partitioning behaviors and data assigned to the
interfaces
• Bounds the system problem so we don't add something unintended or
unneeded
FIGURE 5-6. System Context Diagram for the FireSAT Space Element. The system of interest
is at the center, linked by inputs and outputs (arrows) to and from external systems.
What we can't tell from this diagram is how the system transforms the inputs it
receives to the outputs it sends or what physical elements allow it to do so.
Developing the system's functional architecture is the first step in framing
how a system will meet its requirements and operational scenarios. To do so, we
must understand each function as a process that transforms inputs into outputs. A
function describes an action from the system or one of its elements, so it should
begin with a verb. We must not confuse the thing that performs the function with
the function itself. For example, a database is not a function, but we can use one to
perform the function of Store and retrieve data. Accordingly, the following are not
functions:
• Computers or software applications—we assign them the functions of
Process data or Compute current navigation state
• Cold-gas thrusters—they provide the function of Generate thrust
• Batteries—they have the function Provide electrical power
the system meets its input and output requirements, and more specifically “what
the system does." So it fills two roles:
• A partition of the system into more detailed functions
• A model indicating how the system transforms its inputs into outputs and
achieves its ability to realize required behaviors
Just as the system context diagram fully defines the system at a highly abstract
level, the functional architecture must completely define the system's functions at
increasing levels of detail. To ensure that it does, we follow two key principles:
• The sum of the lower-level functions is the same as the sum of the upper-
level functions
• External inputs and outputs are conserved, meaning subfunctions must
have the same number and type of inputs and outputs as the function
A data model usually captures these inputs and outputs. Thus, the functional
architecture shows:
• Elements—functions or tasks; what the system does
• Relationships among elements—the data, information, material, or energy
that functions exchange to carry out their tasks
• Relationships among functions—the chronological order in which
functions execute and how one action depends on another. As described in
Section 5.5, we use this relationship to develop an executable model of the
architecture.
Developing this architecture requires three major steps:
1. Determine the functions
2. Determine which functions receive the external inputs and generate the
external outputs
3. Determine the inputs and outputs between functions and verify that the
resulting functional architecture is correct and complete
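Both conservation principles are easy to check mechanically as the partitioning evolves. The Python sketch below is illustrative; the function names and I/O items are hypothetical, and internal flows between subfunctions are omitted for brevity.

# Hypothetical parent function with external I/O, and its subfunctions.
parent = {"inputs": {"IR emissions", "Ground commands", "Nav signals"},
          "outputs": {"Wildfire 911 notification", "Telemetry"}}

subfunctions = {
    "Handle inputs": {"inputs": {"IR emissions", "Ground commands", "Nav signals"},
                      "outputs": set()},
    "Handle outputs": {"inputs": set(),
                       "outputs": {"Wildfire 911 notification", "Telemetry"}},
}

sub_inputs = set().union(*(s["inputs"] for s in subfunctions.values()))
sub_outputs = set().union(*(s["outputs"] for s in subfunctions.values()))

# Every external input and output of the parent must reappear on a subfunction.
assert parent["inputs"] <= sub_inputs, f"Unassigned inputs: {parent['inputs'] - sub_inputs}"
assert parent["outputs"] <= sub_outputs, f"Unassigned outputs: {parent['outputs'] - sub_outputs}"
print("External inputs and outputs are conserved across the partition.")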
TABLE 5-2. Characteristics of Diagrams for Functional Architectures. The architects should
develop the diagrams that best communicate the functional architecture to the
stakeholders. The diagrams may vary by the type of system or even by the preferences
of the architect and stakeholders.
NxN (also called N2): Functions are on a diagonal. The top row contains external inputs into each function, with external outputs in the far right column. Shows each input and output between functions in the function's row or column.
FFBD (functional flow block diagram): Shows functions in the order in which they interact with one another throughout the functional flow.
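Because an NxN diagram is just a square matrix of functions, even a small script can list its cells for review. The Python sketch below is illustrative; the functions and interface items are hypothetical.

# Hypothetical N-squared (NxN) matrix: functions sit on the diagonal and
# cell (i, j) holds what function i sends to function j.
functions = ["Handle wildfire inputs", "Handle GPS inputs", "Produce 911 notices"]
flows = {
    ("Handle wildfire inputs", "Produce 911 notices"): "Detected wildfire",
    ("Handle GPS inputs", "Produce 911 notices"): "Wildfire location",
}

for i, src in enumerate(functions):
    for j, dst in enumerate(functions):
        if i == j:
            print(f"[{i},{j}] {src}  (diagonal)")
        elif (src, dst) in flows:
            print(f"[{i},{j}] {src} -> {dst}: {flows[(src, dst)]}")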
FIGURE 5-7. Example FireSAT Diagram in an Integrated Definition (IDEF) Format. In this
format, we decompose the FireSAT mission functions into the primary subfunctions
shown here. We then represent the interactions between these functions with the
arrows showing the inputs and outputs.
FIGURE 5-8. FireSAT Mission Context Diagram in a Modified NxN Format. Mission inputs are
at the top, and outputs external to the system are on the left. Mission functions are
on the diagonal. The inputs and outputs between the functions are in the function's
row and column.
Perform ground operations. The diagrams also show inputs and outputs between each
external function and the system function.
Items that move across the system's boundary to any external system are
system-level outputs. They may develop whenever the system processes a stimulus
or generates and transmits a message or command to any external system.
Figure 5-9. Functional Decomposition Approach. This method starts with a single definition of
the system function, and allocates functionality systematically through the lower levels.
more than one level. Figure 5-10 shows an example of this situation. Rather than
breaking the one function into eight subfunctions, we decompose it into four and
then further break two of the subfunctions into six more subfunctions. By limiting
the number of subfunctions, we reduce complexity and enable reviewers of the
architecture to comprehend it better. But a rule of thumb isn't a mandate.
Sometimes, seven or more subfunctions come from a single function, if the
decomposition isn't too complex.
FIGURE 5-11. Functional Composition Approach. This method starts by defining bottom-level
functions and builds them into a functional hierarchy.
FIGURE 5-12. Functional Hierarchy Based on Operational Modes. This diagram shows how we
partition the space element’s Perform space mission function into subfunctions
based on operating modes. This hierarchy violates the 'no more than six subfunctions' rule of thumb, but having more than six subfunctions makes sense in this case.
Use inputs and outputs. We sometimes base a function's partitions on its inputs
and outputs (I/O) or the types of inputs and outputs, thus capturing its external
interfaces. For example, a computer's subfunctions might be: Provide visual output;
Provide audio output; Handle inputs for keypad, touch screen, and mouse; and Interface with
peripherals such as the printer, memory stick, fax machine, and so on. Because this
method focuses on the external I/O, our decomposition must account for internal
processing or control of the system in some function. We often add a separate
processing and control subfunction (see the Hatley-Pirbhai template below). Figure
5-14 shows how to use I/O for the space example.
Use stimulus-response threads (use cases or operational scenarios).
Operational scenarios or system use cases illustrate how the system will be used. We
partition the system's functions to match these scenarios, so users and customers
FIGURE 5-13. Functional Flow Block Diagram Based on Operational Modes. This diagram
helps us visualize the logical sequencing of system functions. (LP is loop; OR is an
“or” decision point; LE is loop exit.)
FIGURE 5-14. Functional Hierarchy Based on Inputs and Outputs. This partitioning method
emphasizes a system’s external interfaces by focusing on how it handles inputs and
produces outputs. (IR is infrared; GPS is Global Positioning System.)
better understand the architecture. If the scenarios don't interact much, the resulting
architecture is fairly simple. But if they do similar things or have similar interfaces
with users, decomposition by this method may hide possibilities for reuse. An
example of using stimulus-response threads is decomposing an automatic teller
machine system into functions such as Login, Perform withdrawal, Perform deposit, and
Perform balance inquiry. Figure 5-15 shows the FireSAT example.
[Figure content: the Perform space element function partitions into five use-case subfunctions: Respond to ground integration and testing, Respond to ground commands, Respond to wildfire IR emissions, Respond to launch environment inputs, and Respond to space environment inputs.]
FIGURE 5-15. Functional Partitioning Based on Use Cases. This example uses stimulus-
response threads to partition FireSAT’s Perform FireSAT mission function into its
subfunctions.
FIGURE 5-16. The Hatley-Pirbhai Template. This template suggests that we can partition any function into the generic subfunctions shown here.
either partition the processing in the architecture or leave it to the developer, but the
former better describes the triggers for each function and the throughput needed
for the architecture's physical components (described in Section 5.4).
Use organizational structure. Eventually, a project's functional architecture
gets divided into parts that development teams design and build or commercial
products handle. Thus, at the lowest level, every function should be assigned to a
single development team (or a purchased product). If we know the commercial
products and development-team structure before or while developing the
architecture, we should simply partition it to match the organizational structure.
We could also match the customer's organization—a common approach. For
example, because Department of Defense contracts often call for developing
systems of systems, several customers may fund a single solution. In this case,
partitioning to match the functions these customers request may be helpful and
effective. That way, customers can visualize the system from their own
perspectives and better understand what they're getting for their money.
Use functional requirements. Because a system's functional requirements
describe what it must do, we use them to determine its subfunctions.
Requirements and architectures develop iteratively, so we use the same method
for lower-level functions. Each subsystem's functional requirements help us define
its subfunctions. Figure 5-18 shows an example for FireSAT.
Consider stakeholder priorities. For any project, the stakeholders' rank-
ordered requirements trace to a set of rank-ordered system requirements. We use
this information when creating the system's functional architecture. For example,
we isolate functions with very high or low priority to customize services for our
customers. By isolating high-priority functions and specifying their interfaces, we
make sure that they're developed first. On the other hand, partitioning low-priority
[Figure content: template-derived subfunctions include Receive payload commands, Receive spacecraft commands, Detect wildfire emissions, Produce payload telemetry, Produce spacecraft telemetry, Process wildfire emissions, Produce wildfire "911" message, Process spacecraft commands, Process payload commands, Control attitude, Control orbit, Control power, Control temperature, and Control propellant.]
Figure 5-17. FireSAT Example Using the Hatley-Pirbhai Template. Here we show how the subfunctions for Perform space element functions come from the template, and those subfunctions divide into particular items that FireSAT must receive, produce, control, and process. (Functions 1 and 4 also have partitioning at this level; for simplicity, we don't show it in this figure.)
[Figure 5-18: the Perform space element function, partitioned by functional requirements into Operate in the space environment, Cover US territory, Revisit coverage area, Detect wildfires, Geo-locate wildfires, Notify users, and Provide for command and control.]
functions lets us reduce total risk. If cost or schedule issues arise, we can drop these
functions or move them to a later delivery.
Match the physical architecture. If we know a system's physical architecture
(Section 5.4), we can allocate functions to physical resources—hardware, software,
and people. Several conditions could enable architects to know these physical
components early in development. For example, we may be updating a legacy
system whose hardware and software are established. To avoid affecting this
system, we should partition the functions based on these components. Sometimes
stakeholders require certain physical resources. For example, they may require a
specific piece of hardware, software, or commercial-off-the-shelf product.
Architects may also want to match the physical architecture to limit costs for
support and maintenance. Suppose we know before developing the functional
architecture that a system will need to refresh many components while operating.
In this case, we partition the architecture to create distinct interface boundaries
between the physical components, so the refreshes will be easier and cheaper.
Figure 5-19 shows a way to partition system functions based on the physical
architecture, first dividing it into two main parts and then partitioning each part
into lower-level functions.
Table 5-3 summarizes benefits and shortcomings of the 11 functional-
partitioning methods discussed in this section. Architects may use several ways to
partition a function because each method develops a different functional
architecture. They may also combine techniques. For example, they may
decompose a function into a set of subfunctions with the use-case method, but all
of the use cases may have a common input or output function. The best method
depends on the functions being partitioned and the external interfaces to users and
outside systems; it may be the simplest, the most modular, or the most detailed.
(Section 5.4 discusses techniques and metrics for assessing architectures.)
Apply the Hatley-Pirbhai template
Benefits: A simple method that applies to many different types of functions; effectively partitions at a high level.
Shortcomings: Doesn't offer detailed understanding of processing or control functions; may need another method at a lower level to understand processing.

Use the function states
Benefits: Very good to describe functions with multiple steps; similar to a state machine that is familiar to many engineers; easy for most people to comprehend; can help architects discover the details of functions not completely known.
Shortcomings: Architects must understand the start and end states up front; gets complicated for long processing strings or complex functions.

Use processing rates
Benefits: Is simple; matches how functions will apply, because hardware or software separates them by processing rates.
Shortcomings: Partitions only part of the system's functions; must use with other methods for complete partitioning.

Use organizational structure
Benefits: Matches how functions will apply, because they eventually allocate to components that different organizations will handle; used in organizations whose structure is set or difficult to change.
Shortcomings: Because it forces functions to meet the organization's requirements, may inhibit creativity or trade-offs for a more efficient solution (see discussion of the Morphological Box in Section 5.3.2).

Consider stakeholder priorities
Benefits: Enables effective support and maintenance of systems; supports giving customers the "best bang for the buck" by creating an architecture that meets their priorities.
Shortcomings: Partitions only part of the system's functions; must use with other methods for complete partitioning.

Match the physical architecture
Benefits: Matches how functions will apply; normally used (for all or part) when we must incorporate a legacy system, commercial-off-the-shelf items, or purchased products.
Shortcomings: Because it forces one-to-one mapping of the functional and physical architectures, may inhibit creativity or trade-offs for a more efficient solution (see discussion of the Morphological Box in Section 5.3.2).
[Figure content: the space element is partitioned into subfunction 1, Handle inputs (wildfire IR emissions, commands, environment inputs, and GPS signals), and subfunction 2, Handle outputs (wildfire "911" warnings, telemetry, waste heat, and thrust).]
FIGURE 5-20. Allocating External Inputs and Outputs to the Subfunctions. The external inputs
and outputs are assigned to the appropriate subfunction. Here we simply partition
the space element into two subfunctions. These are later partitioned into more
descriptive subfunctions that better describe what the system does. (IR is infrared;
GPS is Global Positioning System.)
1. Trace the operational scenarios defined at the system level. We start with
system-level operational scenarios to verify whether each level of
functional partitioning generates required outputs based on given inputs.
These scenarios show the system's sequential interactions with operators
and other external systems as the operators use it. By tracing them through
the functional architecture, we show that the functional interactions meet
their requirements.
Figures 5-21 and 5-22 show two operational scenarios. Each figure depicts the
scenario as a sequence diagram at the system level and at the functional partition
level. In both figures, part a) shows the scenario at the system level. The sequence
diagram shows the system of interest (FireSAT space element) and the external
systems (users) as vertical lines. System inputs are arrows that end at the FireSAT
space element line. System outputs are arrows that begin with the FireSAT Space
element line. Time passes downward in the vertical axis. Thus, in Figure 5-21, the
system receives the input Navigation Signals before it produces the output
Wildfire "911" Notification. The scenario shows the sequence of inputs and
outputs to and from the system of interest.
Parts b) of Figures 5-21 and 5-22 illustrate the operational scenarios traced
through the functional partitions of the space element as described in Figure 5-14.
In these examples, the space element subfunctions appear as dashed vertical lines
instead of the solid vertical lines that represent the space element and the external
systems.
External inputs and outputs (I/O) come into or emerge from one of the
subfunctions. The diagrams also show internal I/O between the subfunctions, as the
functions execute during the scenario. For example, in Figure 5-21b, when the
subfunction 1.1 Handle Wildfire Inputs receives the external input IR Emissions, it sends
the internal message Detected Wildfire to the subfunction 2.1 Produce Wildfire "911"
Notifications. For the system to produce the actual "911" notification, it must know the
location of the wildfire. When the subfunction 1.4 Handle GPS Inputs receives the
external input Nav Signals, it sends the internal message Wildfire Location to the
subfunction 2.1 Produce Wildfire "911" Notifications. This function now has all of the
information it needs to generate the external output Wildfire "911" Notification to the
Ground Element external system.
Whenever the system receives an external input, we must show a connection
between the subfunction that receives the input and the subfunction that generates
the subsequent external output. The system level scenarios (part a of the figures)
are fully realized at the subfunction level (part b of the figures).
2. Trace other functional scenarios we may need to develop. Sometimes the
system-level operational scenarios don't cover the system's external
inputs and outputs. In other cases, internal functions drive external
outputs, such as internal error processing that leads to an error status
being reported to the operator. Planners may not have developed all the
operational scenarios because, in the interest of cost and schedule, they've
Figure 5-21. The Wildfire "911" Scenario. a) At the System Level. We show the inputs and outputs of the scenario for the system. b) At the Subfunction Level. The system-level inputs and outputs from a) are now allocated to the subfunctions. Inputs and outputs between the subfunctions complete the scenario. (GPS is Global Positioning System; IR is infrared; USFS is US Forest Service.)
FIGURE 5-22. The Control Scenario. a) At the System Level. Here we show the scenario inputs and outputs for the system. b) At the Subfunction Level. The system-level inputs and outputs from a) are now allocated to the subfunctions. Inputs and outputs between the subfunctions complete the scenario. (Space environment inputs and waste heat are produced continuously, although we represent them discretely here.) (NOAA is National Oceanic and Atmospheric Administration.)
settled for only the architecturally significant ones at the system level. But
we must still address all external inputs and outputs in the functional
architecture by defining functional scenarios and tracing them through the
architecture. We then use these scenarios to determine if we need more
functions or more I/O to develop a complete functional architecture.
"Complete" means that the architecture covers all the system's external
inputs, outputs, and required functions.
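"Complete" in this sense is checkable: the union of external inputs and outputs exercised by all scenarios should cover the external I/O on the system context diagram. The Python sketch below illustrates the check with hypothetical names.

# Hypothetical check that the traced scenarios cover all external I/O
# from the system context diagram.
context_io = {"IR emissions", "Nav signals", "Ground commands",
              "Wildfire 911 notification", "Telemetry", "Waste heat"}

scenario_io = {
    "Wildfire 911 scenario": {"IR emissions", "Nav signals", "Wildfire 911 notification"},
    "Control scenario": {"Ground commands", "Telemetry"},
}

covered = set().union(*scenario_io.values())
uncovered = context_io - covered
print("External I/O not covered by any scenario:", uncovered or "none")
# Here we'd need another functional scenario to address 'Waste heat'.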
Functions with differing inputs and outputs or exit conditions are different
functions, so we must account for their differences when we identify the unique
system functions while integrating threads. At the same time, we should limit the
size and complexity of our final integration. We should also try not to duplicate
functions and logic streams. Although integrating these scenarios would be easy if
we kept them in their original form and linked them in parallel, that approach makes
it difficult to understand how system functions interact in an integrated way. One
reason to integrate the scenarios into a single structure or architecture is to make it
understandable to the systems engineer. If we don't integrate well, design engineers
won't understand the interactions as system scenarios run concurrently.
Figure 5-23. Functional Partitioning. a) The Wildfire "911" Scenario. b) The Control Scenario. We combine the partitioning of all the scenarios to show a complete functional partitioning of the system (see Figure 5-24 for an example). (GPS is Global Positioning System; IR is infrared; NOAA is National Oceanic and Atmospheric Administration.)
System scenarios identify how the system responds to each input independent
of all other inputs. But these individual scenarios don't account for functions that
work together or for interactions and logical controls between scenarios.
Integrating them is one way to address this apparent hole. This activity is
challenging but essential to creating efficient, understandable system logic.
FIGURE 5-25. System Context for the Subfunction Handle Wildfire IR Inputs. We must
understand the context of a function before partitioning it to the next level of detail.
(IR is infrared.)
1. Define the system context (Figure 5-25) for this subfunction, which is now the "function of interest"
2. Determine the subfunctions of the "function of interest" (Figure 5-26)
3. Determine which of the subfunctions receives or sends each of the
system's external inputs and outputs
Figure 5-26. The Subfunctions of the Function Handle Wildfire IR Inputs. We implement the
same process described above to determine the subfunctions of every function that
must be partitioned to a lower level of detail. (LP is loop; LE is loop exit.)
Figure 5-27. A System Context Diagram Represented as a Physical Block Diagram. The
diagram shows the space element, the external systems with which it interfaces,
and the physical interfaces with those external systems. (GPS is Global Positioning
System; IR is infrared; LV is launch vehicle; USFS is US Forest Service; NOAA is
National Oceanic and Atmospheric Administration; UHF is ultra-high frequency.)
physical components, we should decide whether to split the function into several
subfunctions, so we can allocate each subfunction to a single component. This step
is important because the physical architecture drives requirements for the
components and the interfaces between them. If we split a function across
components, the requirements also split across their developers, so the interface
between them may be nebulous. As described in Sections 5.3.4 and 5.4, an
architecture is usually better when it's modular and the interfaces are well defined.
Sometimes splitting a function handled by more than one physical component
doesn't make sense. For example, a system may have a function, Provide information
to the user, that puts system responses on a display for viewing. Two or more
physical components may cover this function—for example, a display monitor and
a software application. It may not be cost-effective to split this function into two
functions (one performed by the display monitor and one performed by the
application software) because the development teams know the functionality of
each of these components and the interfaces between them. Distributed systems are
another case in which several physical components may do the same function. As
long as we know the interfaces between the components, changing the functional
and physical architectures to further document this solution may not add value.
Figure 5-30 shows how we allocate functions to the physical architecture. A
single system function corresponds to the system of interest and each subfunction
allocates to a single subsystem.
FIGURE 5-30. Allocating Functions to the Physical Components. The functions in the
functional architecture must be allocated to the components in the physical
architecture. Every function must be performed by some component.
Although some people say each physical component should perform only one
function, this is typically not the case in real life, nor should it be. Architects must
understand functions in the functional architecture and allocate them in the best
way to the physical architecture. This process seldom produces a one-to-one
mapping of functions to components. Figure 5-31 shows an example of multiple
functions allocated to physical components. But all physical components must
normally perform at least one function. Reasons to group functions in a single
physical component include:
• Functions are related or handle similar tasks
• Performance will improve if functions combine in a single component
• Existing components (commercial-off-the-shelf or otherwise) cover several
functions
• Functions are easier to test
• Interfaces are less complex
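Whatever grouping rationale we choose, two bookkeeping rules still apply: every function must be performed by at least one component, and every component should normally perform at least one function. The Python sketch below illustrates the check with hypothetical functions and components.

# Hypothetical allocation of functions to physical components.
allocation = {
    "Detect wildfire emissions": ["Payload subsystem"],
    "Produce wildfire 911 message": ["Data handling subsystem"],
    "Control attitude": ["ADCS"],
    "Provide electrical power": [],          # defect: unallocated function
}
components = {"Payload subsystem", "Data handling subsystem", "ADCS", "Harness"}

unallocated = [f for f, comps in allocation.items() if not comps]
used = {c for comps in allocation.values() for c in comps}
idle = components - used

print("Functions with no component:", unallocated)
print("Components performing no function:", idle)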
FIGURE 5-31. Other Examples of Allocating Functions to Components. Here we show generic
examples of allocating functions to physical components.
Figure 5-32. Functional Architecture Showing Allocation to the Physical Elements. Here we
show the physical components within the parentheses and the arrow pointing up to the
function being performed by each component. (GPS is Global Positioning System; NOAA
is National Oceanic and Atmospheric Administration; USFS is US Forest Service.)
TABLE 5-4. Morphological Box for FireSAT's Spacecraft Systems Design. We use the morphological box to select the best physical components. [Table content: candidate options by design area, such as UHF, S-band, Ka-band, or optical communications; Sun sensors or magnetometers for attitude determination; free tumble, dual spin, reaction wheels, or control moment gyros for attitude control; primary batteries, fuel cells, or Ni-H batteries for power; ground-based or space-based navigation; and cold-gas, bi-propellant, arcjet, ion, Hall effect, or pulse plasma thrusters for propulsion.]
Figure 5-33 shows a physical hierarchy of the architecture, with a block for
each of the spacecraft's physical components. We haven't yet dealt with the
interfaces between the components (Section 5.3.4).
stakeholder requirements or from characteristics that all (or most) systems meet.
We generally either measure the system (or intended system) to determine
whether its design meets the metrics or look at other system architectures to select
an alternative that best meets them. In either case, basic methods include using the
Software Engineering Institute's architecture trade-off analysis, applying quality
attribute characteristics, and assessing an architecture based on metrics common
to all or most systems.
Software Engineering Institute's Architecture Trade-off Analysis Method
(ATAM). This method looks at business or mission drivers for an architecture. It
describes each driver's measurable attributes and the scenarios in which it will be
measured (or estimated) to determine whether the architecture meets the attribute.
Figure 5-35 shows an example of this method.
Figure 5-35. The Software Engineering Institute’s Architecture Trade-off Analysis Method
(ATAM) for Assessment. Here we show an architecture assessment process that
is used within industry. The pairs of letters (high, medium, low) in parentheses show,
respectively, customer priority and expected difficulty in meeting the attribute.
(CORBA is Common Object Requesting Broker Architecture.)
Figure 5-36. Quality Attribute Characteristics Method. Here is another architecture assessment
process used within industry.
FIGURE 5-37. Commonly Used Metrics for Architectural Assessments. We use these metrics to assess architectures. These examples are
from the Architecture Trade-off Analysis Method (ATAM) Business Drivers (Figure 5-35) or Quality Attribute Requirements (Figure 5-
36). Middleware is a layer of software that enables open systems. (RMT is reliability, maintainability, and testability; OMI is operator
machine interface; BIT is built-in test.)
Models may also determine whether the system meets other technical
requirements, such as availability, maintainability, size, or mass.
Figure 5-39. Defining Requirements for the Physical Components. We generally trace
functional and input/output (I/O) requirements to functions. We also trace
nonfunctional requirements to functions or the physical components. Either way, the
requirements for a component include the ones traced to the component’s function
and the ones traced directly to the component.
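The rule in Figure 5-39 can be expressed directly: a component's requirement set is everything traced to the functions it performs plus everything traced straight to the component. The Python sketch below uses hypothetical traces to show the idea.

# Hypothetical traces: requirements -> functions, functions -> components,
# plus nonfunctional requirements traced directly to components.
req_to_function = {"SYS-020": "Detect wildfire emissions", "SYS-030": "Control attitude"}
function_to_component = {"Detect wildfire emissions": "Payload subsystem",
                         "Control attitude": "ADCS"}
req_to_component = {"SYS-090": "Payload subsystem"}  # e.g., a mass requirement

def component_requirements(component: str) -> set:
    """Gather requirements traced via the component's functions or directly to it."""
    via_functions = {r for r, f in req_to_function.items()
                     if function_to_component.get(f) == component}
    direct = {r for r, c in req_to_component.items() if c == component}
    return via_functions | direct

print("Payload subsystem requirements:", component_requirements("Payload subsystem"))
# Expected contents: SYS-020 and SYS-090 (set order may vary).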
future evolution of the architectures, adding to the long-term risk with regard to
integration and test, upgrade and refinement programs, and so on.
This step results in specified requirements for each system component, including
specifications for processing, interfaces, and nonfunctional requirements. They are
the basis for teams to develop or acquire hardware, software, people, and processes.
[Figure content: the space element's physical partitioning (2.1 through 2.10) comprises the attitude determination and control, communication, data handling, guidance, navigation, and control, electrical power, propulsion (including the 2.6.1 rocket engine component), structure, thermal control, mechanisms, and harness subsystems.]
FIGURE 5-41. Output of a Requirements Management Tool. This diagram shows the physical components of the space element to which we
trace the requirements.
Legend: Enter one or more of the following in each test method cell. A blank cell indicates that the test method is not required for that requirement. A = Analysis; D = Demonstration; I = Inspection; S/M = Simulation and modeling; T = Test.
FVT = Functional verification test; CVT = Component verification test; TVT = Translation verification test; ASCA = Applications system control and auditability; SIT = System integration test; PST = Performance stress test; UAT = User acceptance test; Pre-prod = Pre-production test.
FIGURE 5-42. Example of a Requirements Traceability and Verification Matrix (RTVM). The
RTVM shows a tracing of one level of requirements to the next level, the tracing of
each requirement to its acceptance criteria (pass-fail criteria), and the tracing of
requirements to the phase and method of verification. The requirements traceability
matrix (RTM) documents the tracing through the various levels of the architecture.
The requirements verification matrix (RVM) documents the tracing of each
requirement to its verification information.
As this example illustrates, defining the derived requirements is also the start
of establishing a qualification strategy. In addition, setting the acceptance or
pass/fail criteria helps all stakeholders to better understand the requirements
statements, which may mean different things to each stakeholder.
Summary
Developing the functional and physical architecture results in several outputs:
• Architectural models that describe what the system does, the physical
components that perform these functions, and all interface definitions
between the functions and components. Depending on the system's
complexity, these models may be one level or many levels deep (with each
level further detailing the solution)
• Derived requirements for the hardware and software development teams,
from which they begin designing and building their components
Figure 5-43 shows how the development, test and integration, and systems
management teams use these outputs to create their work products.
Figure 5-43. Deliverables to Development and Test Organizations. This diagram shows the
deliverables from the requirements and architecture development processes to the
development and test processes. These deliverables provide a baseline from which
the development and test processes begin.
Not shown in the figure, but just as important, is how project or program
managers use the architecture outputs. The architecture and component
requirements give them the information to develop costs and schedules for the
solution. Because these requirements specify the hardware and software
components to build or acquire, the staff can develop the project or program plan
from them. The same information enables managers to better understand the costs
for developing, integrating, verifying, and validating the solution.
The process is iterative: each step uncovers more information about the
solution and may require revisiting previous steps. Thus, the process produces
important technical decisions and trade-off analyses that help project staff
understand the current solution and support future decisions. As discussed in
Section 5.4, all architectures create issues and risks. We assess architectures
formally and informally to understand their risks and to verify that they meet the
stakeholders' needs.
By developing the physical and functional architectures, as well as their
corresponding requirements, we provide a technical baseline for the rest of the
project or program's development. This baseline enables the project's team to
produce a detailed cost and schedule, understand the solution's risk, and give the
development, test, and systems management teams confidence that the solution
addresses the stakeholders' needs.
References
Lykins, Howard, Sanford Friedenthal, and Abraham Meilich. 2001. "Adapting UML for an
Object Oriented Systems Engineering Method (OOSEM)." International Council On
Systems Engineering (INCOSE).
SysML.org. Ongoing. "System Modeling Language (SysML)-Open Source Specification
Project." Available at the SysML website.
Chapter 6
Decision Making
Michael S. Dobson, Dobson Solutions
Paul Componation, Ph.D., University of Alabama
Ted Leemann, Center for Systems Management
Scott P. Hutchins, NASA Marshall Space Flight Center
6.1 Identify What We Need to Decide
6.2 Frame the Decision
6.3 Select a Method to Evaluate Alternatives
6.4 Generate Alternatives
6.5 Choose the Best Solution
6.6 Test and Evaluate
6.7 Consider a Case Study
We make decisions every day, mostly using informal processes based on our
intuition. So we might ask: "Why do we need a formal decision making process in
space system design and management?" The reasons include [Clemen and Reilly, 2001]:
• The inherent complexity of problems we face in designing and managing
space systems
• The uncertainty of situations under which we address the problems
• The need to satisfy multiple objectives
• The conflicting needs of multiple stakeholders
These decisions clearly benefit from a formal, rigorous process, beginning with guidelines that determine which technical issues require formal analysis or evaluation.
Decision making is too big a subject for a single chapter, or even for a single
book. We can't prescribe one process for all environments and situations, so we
encourage the reader to seek more resources on this vital topic. Organizations
benefit from identifying common tools and approaches that improve the quality
and consistency of mission-critical decisions. Table 6-1 describes the steps to
consider in decision making. We discuss each step in this chapter, followed by
pertinent questions to ask to be sure that decisions have followed the process steps.
Table 6-1. Formal Process for Decision Making. A methodical decision process requires careful
progression from defining the problem through testing and evaluating the solution.
Formal and informal methods abound.
1. Identify what we need to decide. The first step in getting the right answer is asking the right question. What is the decision? What's at stake? In what context will the decision take place? Who should decide? (Sections 6.1, 6.2.2, 6.7)
2. Frame the decision. Decisions aren't made in a vacuum. Define the environmental, organizational, mission-related, and major uncertainty factors that influence the right choice. (Sections 6.2, 6.7)
3. Select a method to evaluate alternatives. Establish criteria for evaluating the alternatives. Which tools are best for this decision? Formal processes take more time and effort to document the decision, help establish a consensus, and apply an objective method. (Sections 6.4, 6.7)
4. Generate alternatives. Before settling on a single answer, look for several. Create backups in case the first-choice solution doesn't work. (Sections 6.3, 6.4, 6.7)
5. Evaluate alternatives. Select evaluation methods and tools, and examine the solutions in light of established criteria. Look for cognitive and perceptual biases that may distort the process. (Sections 6.4, 6.5, 6.7)
6. Choose the best solution. Select the best solution. Document the process, alternatives considered and rejected, and rationales. List action steps required, and then carry out the decision. (Section 6.5)
7. Test and evaluate. Evaluate the applied decision's effects. Determine the likely outcome of rejected strategies. Prepare necessary documentation. Integrate lessons learned for future decisions. (Sections 6.6, 6.7)
TABLE 6-2. Objectives for an Enterprise Resource Planning (ERP) System. Each objective is
important when selecting this system.
1. Internal rate of return. The project's internal rate of return will meet or exceed the company's minimum attractive rate of return of 25 percent.
2. ERP requirements. The computer-integrated manufacturing system will meet the requirements identified by the plant's departments, including production, logistics, scheduling, and accounts payable.
3. Serviceability. The vendor will respond to and correct 95 percent of reported system problems within 24 hours of being notified.
4. Engineering support. Engineering support, in labor hours, will meet the level required to support designing and installing the system.
5. Risk. The project won't exceed the level of technical, cost, and schedule risk identified by the implementation team.
TABLE 6-3. Using Rank Sum and Rank Reciprocal to Generate Priorities for Objectives.
These mathematical techniques enable decision makers to generate weights quickly
based on the objectives’ order of importance. Rank sum produces a relatively linear
decrease in weights; rank reciprocal produces a non-linear weight distribution because
it places greater importance on the first objective. (IRR is internal rate of return; ERP is
enterprise resource planning.)
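To make the two techniques concrete, the short sketch below computes both weight sets for a ranked list of objectives. It is an illustration only, not the book's tool; the objective names come from the ERP example above, and the ranks are assumed for demonstration.

    # Rank sum and rank reciprocal weighting (illustrative sketch).
    def rank_sum_weights(ranks):
        n = len(ranks)
        raw = [n - r + 1 for r in ranks]      # roughly linear decrease with rank
        total = sum(raw)
        return [x / total for x in raw]

    def rank_reciprocal_weights(ranks):
        raw = [1.0 / r for r in ranks]        # non-linear; favors the rank-1 objective
        total = sum(raw)
        return [x / total for x in raw]

    objectives = ["Internal rate of return", "ERP requirements", "Serviceability",
                  "Engineering support", "Risk"]
    ranks = [1, 2, 3, 4, 5]                   # assumed order of importance
    for name, ws, wr in zip(objectives, rank_sum_weights(ranks),
                            rank_reciprocal_weights(ranks)):
        print(f"{name:25s} rank sum: {ws:.2f}   rank reciprocal: {wr:.2f}")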
Decisions often require trade-offs because a perfect solution may not exist.
Each potential choice has a downside, or includes risk or uncertainty. In some
ways, choosing the least bad alternative from a set of poor ones takes greater skill
and courage than making a conventionally "good" decision.
A decision's outcome, whether positive or negative, doesn't prove its quality,
especially where probability is concerned. The odds may dramatically favor a
positive outcome, but low-probability events occur. Conversely, a stupid decision
that results in a good outcome is still stupid. A solid decision process improves our
odds of achieving the desired outcome most of the time.
The decision making process must often be open and auditable. We have to
know the decision and the process that led us to it. If the outcome is bad, someone
else—a boss, a customer, a Congressional committee—uses "20-20 hindsight" to
determine whether our decision was reasonable and appropriate. This second-
guessing leads to the rational (but not always appropriate) strategy known as
"CYA" (cover your assets), in which the decision focuses less on the mission
perspective than on a method that ensures that blame and punishment fall
elsewhere if an outcome is bad. As Fletcher Knebel points out, a "decision is what
a man makes when he can't find anybody to serve on a committee." [Knebel, n.d.]
With enough time and resources, we can structure and study even the most
complex problems in a way that leads to the best decision. Without them, however,
we still must decide and remain accountable. That responsibility brings to mind
the story of a person who called an attorney and received this advice: “Don't
worry, they can't put you in jail for that." The person replied, "But counselor, I'm
calling from jail!"
No formal process or method removes all risk from decisions. Many tools and
techniques improve decision making, but ultimately it requires good judgment.
Good judgment comes from experience combined with wisdom, but experience
often comes from bad, unwise judgment. To improve on experience, we should
focus on central questions about risk and uncertainty:
• How much do we know? How reliable is that knowledge?
• What do we know we don't know?
• How certain is a given outcome?
• Is there stakeholder or political conflict?
• What are the consequences of being wrong?
• How much time do we have to gather information?
• What values are important, and how much weight should each receive?
• Are there any unequivocally positive outcomes, or do all the options carry
risk and uncertainty?
[Figure 6-1 diagram: a two-by-two matrix crossing the initial (unknown) situation with the choice of action or no action.]
FIGURE 6-1. Options for Analyzing Decisions. Decisions (including the decision not to act) fall
into four categories—two appropriate and two inappropriate.
Driving carpet tacks with a sledgehammer is usually a bad idea, so the method
for a given decision must be proportionate to its importance. It can't take more time,
energy, or resources than the decision's value allows, but it must be rigorous enough to
support recommendations and choices. And if the outcome is bad, it must provide
enough detail to evaluate the decision process. When it's a short-fuse decision, we
may not have enough time to use the tools properly. Table 6-4 lists some ways to
evaluate decision making, but it's not exhaustive. The case study in Section 6.7
uses a different method.
TABLE 6-4. A Range of Ways to Evaluate Decision Making. Formal, informal, analytic, and
intuitive evaluation tools all have a place in effective decision making.
Intuition: "Gut feel" decisions don't follow formal processes, but they may be the only tool available when time is very short. Intuition often supplements other, more formal, methods. If we've done a complete analysis, and our gut argues strongly that something is wrong, we're usually wise to listen.
Charts: A table of options, a "pro-con" list, or a force-field analysis (described below) all use charting techniques.
Heuristics: Rules, often derived from practical experience, guide decisions in this method. For example, a traditional heuristic in problem solving is: "If understanding a problem is difficult, try drawing a picture."
Qualitative analytic models: These models are intended to completely describe the matter at hand.
Quantitative analytic models: These models involve building complex statistical representations to explain the matter at hand.
Simulations: Simulations imitate some real thing, state, or process by representing a selected system's main characteristics and behaviors.
Real systems test: Ultimately, if we want to know whether the Saturn V engines work, we have to test the real thing. Properly developed heuristics, models, and simulations can be highly predictive but may omit variables that only a real test will reveal.
A number of factors influence our choice. What are the quality and completeness of the available data? What will it cost in time and resources to get missing data? We point out here that data isn't the same as knowledge. Numbers are not decisions;
they're inputs. Subjective data reflects psychology or behavior, as in how the
project's stakeholders feel about the amount of progress. We can make subjective
data at least partly quantitative using surveys, interviews, and analytical
hierarchies. But both types of input have value, depending on the nature of the
question. Data comes in many forms and levels of certainty, so we must
understand its nature and origin to use it properly. Objective data includes:
• Precise spot estimates (deterministic)
• Expected values (deterministic summary of probabilistic data)
• Range of estimates
• Probability distributions
Table 6-5. Force-field Analysis. This table shows forces that improve or worsen
organizational decisions. To alter the situation, we must add or amplify forces
on the positive side, or eliminate or mute them on the negative side.
Additive model. For the additive model [Philips, 1984], we add the products
of the weights and scores to arrive at a single score for each design alternative.
Figure 6-2 shows the general format for data collection.
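The additive model reduces to a weighted sum. The sketch below shows the arithmetic in the spirit of Figure 6-2; the objectives, weights, and scores are invented for illustration and are not values from the text.

    # Additive model: sum of (weight x standardized score) for each alternative.
    weights = {"cost": 0.40, "performance": 0.35, "risk": 0.25}   # assumed weights

    alternatives = {
        "Design A": {"cost": 3, "performance": 4, "risk": 2},     # assumed scores (1-5 scale)
        "Design B": {"cost": 4, "performance": 3, "risk": 3},
    }

    for name, scores in alternatives.items():
        total = sum(weights[obj] * scores[obj] for obj in weights)
        print(f"{name}: weighted score = {total:.2f}")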
Data collection to populate the model is usually one of the most time-
consuming steps in decision analysis. Two items are particularly important. First,
we must identify the performance range, threshold, and optimum for each
objective. Second, we have to develop a standardized scale, so we can relate each
objective's performance to other objectives. A five-point scale is common, and odd-
number scales allow for a median. Table 6-6 provides an example.
Figure 6-2. Additive Model. We customize this general form to collect data for
widely varying decisions.
TABLE 6-6. Objective Evaluation Sheet. This table illustrates how decision analysts might use a
five-point standardized scale to relate each objective’s performance to other objectives.
Very good (4): The technology has passed type III (prototype) testing
Good (3): The technology has been verified by type II (brassboard) testing to check for form, fit, and function
Fair (2): The technology has been verified by type I (breadboard) testing to check for functionality
Poor (1): The technology has been verified by analytical testing
FIGURE 6-3. Decision Tree. This decision tree compares the financial impact of three possible
strategies to build a database management system (DBMS): build, reuse, or buy.
Build, with an expected monetary value of 1560, appears to be the best financial
option, but other information is usually necessary to arrive at the best decision.
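A decision tree's numbers roll up as expected monetary values (EMVs): each strategy's EMV is the probability-weighted sum of its possible payoffs. The sketch below shows that rollup; the probabilities and payoffs are assumptions for illustration and do not reproduce the figure's values (such as the 1560 for the build option).

    # Expected monetary value (EMV) rollup for a simple decision tree.
    def expected_monetary_value(branches):
        """branches: list of (probability, payoff) pairs for one strategy."""
        return sum(p * payoff for p, payoff in branches)

    strategies = {                      # assumed outcome branches, for illustration
        "build": [(0.6, 2500), (0.4, 500)],
        "reuse": [(0.7, 1800), (0.3, 300)],
        "buy":   [(1.0, 1200)],
    }

    for name, branches in strategies.items():
        print(f"{name}: EMV = {expected_monetary_value(branches):.0f}")

    best = max(strategies, key=lambda s: expected_monetary_value(strategies[s]))
    print("Highest EMV strategy:", best)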
Figure 6-4. Kepner-Tregoe Technique. This tool allows managers to list "musts" and "wants," along with their weights and scores, as they apply to each decision alternative. Adapted from [Goodwin, 2005].
Table 6-7. Rating and Weighting Table. This example identifies four
design alternatives for the FireSAT satellite and how well
they satisfy the three weighted criteria shown. In this
example, quality is three times as important as cost in
making the decision. The lower the score, the better.
Alternative:           A    B    C    D
Rank x weight total:  21   19   11    9
Spreadsheet programs can handle decisions with large numbers of values and also allow pairwise comparison modeling. Figure 6-5 shows a multi-variable approach to selecting a
launch vehicle. Table 6-8 summarizes the comparisons.
Figure 6-5. Applying the Pairwise Comparison Technique to Select a Launch Vehicle. This
diagram sets up the rating of three different launch vehicles against five quality
criteria. Criteria can be quantitative and qualitative.
TABLE 6-8. Weights for Top-level Criteria Resulting from the Pairwise Comparison
Technique. This table summarizes a series of pairwise comparisons, showing relative
weight for each criterion. (Weights shown total only to 99% due to rounding.)
Criteria: Payload, Reliability, Availability, Cost, Safety (each row carries a total and a resulting weight).
Column totals: Payload 9.50; Reliability 1.12; Availability 3.03; Cost 7.13; Safety 21.00. Grand total: 41.78.
Each percentage represents that criterion's relative weight, which we then factor
into our launch vehicle selection, as depicted in Figure 6-6. Table 6-9 shows that
we're now ready to compare alternatives numerically, with all the data rolled into
a single total.
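One simple way to roll pairwise comparisons up into criterion weights is to total each criterion's comparison scores and divide by the grand total, as the column totals and grand total above suggest. The sketch below follows that pattern; the comparison values are assumptions for illustration, not the study's numbers, and a real application elicits them from stakeholders.

    # Simplified pairwise comparison rollup: each criterion's weight is its
    # share of the grand total of comparison scores.
    criteria = ["payload", "reliability", "availability", "cost", "safety"]

    # comparison[i][j] > 1 means criterion i matters more than criterion j
    # (assumed judgments; only one direction is entered, reciprocals are implied)
    comparison = {
        "payload":      {"reliability": 0.5, "availability": 2.0, "cost": 1.0, "safety": 0.33},
        "reliability":  {"availability": 3.0, "cost": 2.0, "safety": 0.5},
        "availability": {"cost": 0.5, "safety": 0.25},
        "cost":         {"safety": 0.33},
        "safety":       {},
    }

    totals = {c: 0.0 for c in criteria}
    for i, row in comparison.items():
        for j, value in row.items():
            totals[i] += value            # i's score against j
            totals[j] += 1.0 / value      # implied reciprocal score for j

    grand_total = sum(totals.values())
    for c in criteria:
        print(f"{c:12s} weight = {totals[c] / grand_total:.0%}")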
Figure 6-6. Relative Weighting of Criteria for Selecting a Launch Vehicle. The criteria now
have percentage weights, so their effect on the final answer will be proportional.
TABLE 6-9. Rating the Alternatives. To rate alternatives, we calculate relative scores for each
criterion and each launch vehicle. Then we multiply them by the weighting of each
criterion to get totals for meaningful comparison. Vehicle C appears to be the best
choice, even though it gets a 0% for payload capability.
Excellent = 100%; Very good = 80%; Good = 60%; Marginal = 20%; Poor = 0%
• How much time and resources can we afford to commit to this decision?
• What is this decision's value?
• What other decisions does the project require?
• What data is available?
• What is its quality and level of detail?
• Are the factors to consider objective, subjective, values-driven, or
unknowns?
• Would small changes in assumptions result in big changes in outcomes or
costs?
• Is the method we've selected giving us useful results?
Once we've answered these questions and analyzed alternatives using one of the
methods described in this section, our next step is to select a solution.
TABLE 6-10. Brainstorming Techniques. Several kinds of brainstorming techniques are available
to decision makers as a way to create a larger set of alternatives, which typically results
in better-quality decisions.
Table 6-11. Methods for Selecting a Decision. The individual decision’s nature and circumstances
influence the choice of best method.
When teams decide, other considerations come into play. Conflict is not only
inevitable but, if managed properly, desirable. The goal is to reach a consensus,
which isn't necessarily 100-percent agreement but a decision that all team
members can live with. Teams make effective decisions when they:
• Clearly define the problem or issue
that may distort our judgment. To improve intuitive decisions, we ask certain
shaping questions:
• Does my intuition recommend a particular decision?
• Does the formal process reach the same conclusion, or does it recommend
another decision?
• What factors not in the formal process may I have considered by using
intuition?
• Was the intuitive choice validated? Did it hamper my considering other,
possibly better, alternatives?
• When I finally know the outcome, was the intuitive call more or less correct
than the formal one?
Table 6-12. Common Decision Traps and Biases. The way in which we look at the world in
general is not always accurate. Various decision traps and biases often lead to serious
error if not corrected.
Single- and multi-play decisions: If we launch a single satellite with some probability of failure, we sense that all our eggs are in one basket. Launching ten satellites with the same probability of failure per launch feels safer, but if the consequences of failure are undiminished, the decision situation hasn't improved.
Wishful thinking: People often overestimate probabilities of desirable events and underestimate the undesirable (Challenger is an example). This tendency results from optimism or "organizational cheerleading": the idea that imagining bad outcomes is disloyal.
Groupthink: Social psychologist Robert Cialdini [2006] describes this characteristic as "social proof": the idea that uncertain people decide what is correct by finding out what other people think is correct. Standing against the majority is difficult. Even determined nonconformists feel the pressure. If we're last in line to approve a launch, and everyone before us has said "yes," the pressure to agree is very strong.
Status quo bias: "If it ain't broke, don't fix it," says the proverb. People perceive change as inherently risky. With more choices available, the status quo bias amplifies. It also carries with it a type of immunity against wrong decisions: decision makers perceive it as carrying a lower risk of personal consequences. For example, at one time, managers of institutional technology said: "Nobody ever got fired for buying IBM."
Availability bias: Decision makers often exaggerate more recent or more vivid information. For example, far more people die in automobile accidents each year than fall victim to terrorist attacks. Yet, the extreme vividness of such attacks makes them loom far larger for decision makers. Events that lead to losing a space vehicle, no matter how improbable, tend to have an exaggerated effect on the next launch.
Confirming evidence: People tend to seek information that supports their point of view and ignore or trivialize what contradicts it.
Anchoring bias: People tend to weigh disproportionately the first information received, especially when a large amount of data exists.
Sunk-cost bias: People tend to decide in a way that justifies past choices. Cialdini [2006] reports a study in which bettors at a racetrack are much more confident their horse will win immediately after placing the bet than before. Once people have taken a stand, they believe they must stick to it.
• What steps have we taken to avoid common decision traps for ourselves
and others?
• What decision does the process recommend?
• Is one alternative clearly superior, or are several choices almost equally
good?
FIGURE 6-7. International Space Station's (ISS's) Node 3. ISS Node 3 is a cylindrically shaped,
pressurized module that gives the Space Station more docking ports. It also houses
key avionics and life-support systems. New constraints on extravehicular activity for
the assembly crew require a decision on how to protect the node from freezing while
temporarily docked for many extra hours to Node 1. [HEM, ca. 2008; NASA, n.d.]
The ISS Program Office established a Node 3 "Lead Test Assembly (LTA)
Team" (referred to here as the team) to assess the issue and offer alternative
solutions. What would be the most effective way to keep Node 3 from freezing
while temporarily docked to Node 1? The team consisted of representatives from
the Node 3 Project Office and its hardware contractor, the EVA Office, and the ISS
Program Office [Node 3 PMR, 2005].
FIGURE 6-8. Lead Test Assembly (LTA) Cable Routing to Node 3's Shell
Heaters. The purpose of this modification is to keep Node 3 from freezing during its
protracted docking to Node 1. [Alenia Spazio, 2004] (MDPS is meteoroid and debris
protection system.)
[Scoring definitions for the four ranked evaluation criteria: (1) potential for significant effect on timeline for EVA operations, (2) potential for significant fatigue in EVA astronauts, (3) task complexity and EVA crew training, and (4) potential for issues with work access or reach envelope.]

Excellent (5)
  Timeline: Any effects on the timeline for EVA stay within the planned 6.5-hour work day.
  Fatigue: Low concern that added EVA tasks will increase the significance of general body fatigue or local hand fatigue above current expected levels.
  Training: New EVA tasks are simple; no new dedicated training of the EVA crew is likely.
  Access: Initial worksite analysis indicates all anticipated gloved-hand access and EMU reach envelopes for LTA cable connections easily exceed EVA requirements.

Very good (4)
  Timeline: Requires a planning timeline between 6.5 and 6.75 hours. Flagged as a concern, but acceptable with no EVA operational requirement waivers required.
  Fatigue: Intermediate level.
  Training: Intermediate level; expect one to two dedicated training sessions.
  Access: Intermediate level.

Good (3)
  Timeline: Requires a planning timeline between 6.75 and 7 hours. Acceptable with approval of EVA operations management.
  Fatigue: Moderate concern that the extra EVA tasks will increase the significance of general body or local hand fatigue above current expected levels.
  Training: Moderate complexity; two to three dedicated training sessions expected for EVA crews.
  Access: Anticipate several tight but acceptable working spaces for the EVA crew's gloved-hand and EMU reach access, pending a more detailed worksite evaluation.

Fair (2)
  Timeline: Requires a planning timeline between 7 and 7.25 hours. Requires a formal waiver to EVA requirements approved by EVA-level management.
  Fatigue: Intermediate level.
  Training: Intermediate level; expect four to five dedicated training sessions.
  Access: Intermediate level.

Poor (1)
  Timeline: Requires a planning timeline of more than 7.25 hours. Requires a formal waiver to EVA operational requirements approved by senior-level ISS management.
  Fatigue: High concern that the added EVA tasks will increase the significance of general body fatigue or local hand fatigue above current expected levels.
  Training: New tasks are complex; six or more dedicated EVA crew training sessions are anticipated.
  Access: Anticipate one or more tight working spaces for the EVA crew. Expect gloved-hand and reach-access requirements to be unacceptable after more detailed worksite evaluation. Likely to require developing unique tools or handling aids to help the EVA crew finish the task.
Interview experts. The team talked to the NBL's simulation EVA crew and
training staff, covering results of the partial NBL simulation and experience with
the LTA alternatives. They used this knowledge to extrapolate likely times for
work elements that they couldn't measure directly, relative task complexity and
training requirements, and fatigue-related issues.
Use computer-based models to analyze work sites. The team applied the
worksite analysis to understand the extravehicular mobility unit's reach envelopes
and reach-access conditions for each alternative.
Potential for significant astronaut fatigue. At the end of the simulation, the
team interviewed the crew to discuss potential fatigue for each alternative. They
told the crew that the task of connecting the LTA power cable must occur at the
end of the current six-hour day, after completing other tasks in the sequence, just
as in the five-hour training session.
The team used a variation of Borg's Subjective Ratings of Perceived Exertion
with Verbal Anchors [Freivalds and Niebel, 2003] to help the crew evaluate the
individual and cumulative effects of task elements for general body and local hand
fatigue. Based on the simulation and past experience, the crew rated the elements
for body and hand discomfort from fatigue on a scale of 0 (no discomfort) to 10
(extremely strong discomfort). The crew then reported an overall evaluation score
for the second criterion: potential for significant EVA astronaut fatigue.
The team and crew compared results for Alternative I to those anticipated for
Alternatives II and III. The crew estimated how much easier or harder the latter's
element tasks would be compared to Alternative I and evaluated them in terms of
general and local fatigue. Table 6-14 summarizes the fatigue assessment.
TABLE 6-14. Assessments of Astronaut Fatigue. Alternative III scored the best on this criterion.
(LTA is lead test assembly.)
Alt I Fair A moderate to high concern about significant fatigue from the extra LTA
tasks. A long work time with a lot of cable manipulation and translation.
Estimate of 48 significant hand manipulations to secure and connect a loose
cable.
Alt II Good Moderate concern about significant fatigue from the extra LTA tasks. Work
time is more than half of Alt I. Requires inspection translations, as well as
cable deployments at each end of Node 3’s heaters. Only 20 hand
manipulations to secure and connect the cable.
Alt III Excellent Low concern about significant fatigue from the extra LTA tasks. Short
amount of added work time. Cable manipulations are the shortest; expect
fewer than ten hand manipulations to secure and connect the cable.
Task complexity and crew training. The LTA team discussed with training
experts the task complexity and estimated training required for each alternative. The
group evaluated task complexity and potential for dedicated training based on the
premise that the following cases may require unique training [Crocker et al., 2003]:
• The task is highly complex, requiring multiple steps and tools
• The task is safety-critical, requiring a very high probability for success
• The error margin is small with regard to avoiding hazards
• The sequence of events is time-critical for accomplishing tasks or because of
the EMU's endurance limits
For all alternatives, time is critical to determining the need for more unique
training. Installing the power cable before the life support systems reach their
endurance limit is critical. Although none of the alternatives is terribly complex, all
have at least some unique tasks that will require more training. Table 6-15
summarizes the results of the assessment for training effects.
TABLE 6-15. Assessments of How the Alternatives Affect Training. Alternative III scored the
best on this criterion.
Alt I Fair The estimated one-hour timeline and unique cable deployment and
routing work elements will likely require four to five dedicated training
sessions in the Neutral Buoyancy Laboratory (NBL) or other specialized
training
Alt II Good Expect two or three dedicated training sessions to train for external cable
inspections and unique tasks for routing end cone cables
Alt III Very good Expect only one or two training sessions in the NBL
Potential for issues with work access or reach envelope. The team assessed
work access and reach envelope during the NBL run. They used the assessment for
Alternative I to help indicate areas that might cause issues for a particular
worksite. They were especially concerned about LTA task activities in the
relatively confined work space where Node 3 joins Node 1.
The team asked a worksite analysis team to help identify potential reach and
access concerns for the alternatives using computer modeling. The latter modeled
expected crew positions and reach and access requirements for each work element
in tight areas. Their analysis complemented the overall simulation, as summarized
in Table 6-16.
TABLE 6-16. Assessment of Potential Issues with Work Access and Reach Envelopes.
Alternative I scored the best on this criterion. (LTA is lead test assembly; EVA is
extravehicular activity.)
Alt I Very good Using EVA to install the LTA cable results in very few close-access areas.
No issues expected.
Alt II Fair The worksite analysis indicated a possible violation of access requirements
for the EVA crew and extravehicular mobility unit. The crew must access
the preinstalled LTA cable very close to Node 1 hardware, possibly keeping
them from releasing the stowed cable. The project may require redesigning
cable routes, developing new cable-handling tools, or using other
operational workarounds to avoid the potential interference.
Alt III Good Expect reach envelopes and access clearances to be acceptable pending
more detailed hardware information.
Summary evaluation. The LTA team used the rank reciprocal method
[Componation, 2005] to determine the relative scoring for each alternative and
recommend one. This method places the most disproportionate weight on the
criterion ranked number 1. They ranked the four selection criteria 1 through 4, as
previously mentioned.
This method is a good way to establish the relative weights of trade-off criteria
whenever all agree that the number one objective is clearly the most important
[Componation, 2005]. The LTA team, with direction from the EVA team and ISS
management, established the timeline as the most important criterion. Table 6-17
summarizes the rank reciprocal model's analysis and evaluation results.
Table 6-17. Rank Reciprocal Weighting and Scoring. For each criterion, the table lists its rank (A); 1/A; the weight calculation (1/A) / Σ(1/A), where Σ(1/A) = 1/1 + 1/2 + 1/3 + 1/4 = 2.083; the resulting weight; and, for each alternative (Alt I, EVA installed; Alt II, ground installed above the shields; Alt III, ground installed below the shields), an evaluation and a weighted score (weight x evaluation).

Task complexity and training: rank 3; 1/A = 0.33; weight = 0.33 / 2.083 = 0.16. Alt I: Fair (2), weighted score 0.32. Alt II: Good (3), weighted score 0.48. Alt III: Very Good (4), weighted score 0.64.
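The weight column follows directly from the rank reciprocal formula. As a worked check of the 0.16 entry above, the four criterion weights are:

    \[
      \sum_{j=1}^{4}\frac{1}{r_j} \;=\; \frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\frac{1}{4} \;=\; 2.083,
      \qquad w_i \;=\; \frac{1/r_i}{2.083}
    \]
    \[
      w_1 = 0.48,\qquad w_2 = 0.24,\qquad w_3 = 0.16,\qquad w_4 = 0.12
    \]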
References
Adams, Douglas. 1979. The Hitchhiker's Guide to the Galaxy. New York, NY: Harmony Books.
Alenia Spazio. December 2004. Node 3 Launch-To-Activation Report. N3-RP-AI-0180. Turin, Italy.
Cialdini, Robert B. 2006. Influence: The Psychology of Persuasion. New York, NY: William
Morrow and Company, Inc.
Clemen, Robert T., and Terence Reilly. 2001. Making Hard Decisions with Decision Tools.
Pacific Grove, CA: Dusbury.
Columbia Accident Investigation Board (CAIB). August 2003. Columbia Accident
Investigation Board Report. Washington, DC: U. S. Government Printing Office.
Componation, Paul. 2005. ISE 734, Decision Analysis Lecture Notes. University of Alabama
in Huntsville, Alabama.
Crocker, Lori, Stephanie E. Barr, Robert Adams, and Tara Jochim. May 2003. EVA Design
Requirements and Considerations, EVA Office Technical Document, No. JSC 28918.
Houston, TX: NASA Johnson Space Center.
Dobson, Michael, and Heidi Feickert. 2007. The Six Dimensions of Project Management,
Vienna, Virginia: Management Concepts, pp. 20-34.
Freivalds, Andris, and Benjamin Niebel. 2003. Methods, Standards, and Work Design. Eleventh
Edition. New York: McGraw-Hill Companies, Inc.
Goodwin, Paul, and George Wright. 2005. Decision Analysis for Management Judgment.
West Sussex, England: John Wiley & Sons, Ltd.
Habitation Extension Module (HEM) home page. ca. 2008. www.aer.bris.ac.uk/.../hem/hem_and_node_3.jpg
Hammond, John S., Ralph L. Keeney, and Howard Raiffa. 1999. Smart Choices: A Practical
Guide to Making Better Life Decisions. Boston, MA: Harvard Business School Press.
Harless, D. 2005. Preliminary Node 3 LTA WSA. Houston, TX: NASA Johnson Space Center.
Knebel, Fletcher, n.d. Original source unavailable, http://thinkexist.com/quotes/fletcher_
knebel/
NASA. n.d. https://node-msfc.nasa.gov/g6ninfo/PhotoGallery/Node3/aeanEoom/DSCN1609.JP
Node 3 Program Management Review (PMR). January 2005. Briefing for Node 3 LTA, no
document number. Huntsville, Alabama: Marshall Space Flight Center.
Shaw, George Bernard. 1898. Caesar and Cleopatra, Act II.
US Marine Corps. October 1996. MCDP6 - Command and Control. Washington, DC:
Department of the Navy.
Chapter 7
Lifecycle Cost Analysis
Cost estimating and analysis have always been part of defining space systems.
We often think of cost analysis as a separate discipline, and in many organizations
a dedicated functional unit estimates costs. But the trend today is to consider cost
as one of the engineering design variables of systems engineering. These days,
most aerospace design organizations integrate cost analysis into concurrent
engineering. An initiative in systems engineering called "cost as an independent
variable" strongly emphasizes this approach.
In concurrent engineering, cost analysts quantify the cost effects of design
decisions. This chapter provides an effective cost model, which we use to evaluate acquisition costs as a function of technical performance metrics, technology level, and degree of new design for various launch systems and spacecraft. We offer other ways to
estimate cost for the systems' operations, so we can evaluate the total lifecycle cost.
Table 7-1 displays the process to estimate the lifecycle cost of space systems. In
the first step, we develop the project's work breakdown structure (WBS), which
delineates the systems and subsystems that constitute the flight hardware as well as the rest of the system.
TABLE 7-1. Estimating Lifecycle Costs of Space Systems. We must iterate this process many
times to get a credible estimate. Here we use a launch system (space launch and
transportation system) as an example.
6 Estimate total lifecycle cost and assess cost risk Section 7.6
We also gather the system characteristics for estimating the lifecycle cost. We
collect the cost estimating inputs such as technical data, ground rules,
assumptions, and enough information to allow us to understand the project's
inheritance and complexity. The following are key principles in estimating costs:
• Use cost models and other estimating techniques to estimate cost for system acquisition, operations and support, and infrastructure development
• Aggregate the total lifecycle cost phased over time using beta distributions and the project schedule (see the sketch after this list)
• Do a sensitivity or cost-risk analysis to evaluate the cost estimate's
sensitivity to the major assumptions and variables, weigh the major project
risks, and quantify our confidence in the estimate
• Employ, as needed, economic analyses to compare alternatives and to assess the project's metrics for return on investment
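As a sketch of the beta-distribution phasing mentioned in the second bullet, the fragment below spreads a total cost across fiscal years by differencing a beta CDF over normalized schedule time. The total cost, number of years, and shape parameters are assumptions for illustration only.

    # Time-phasing a total cost with a beta distribution (illustrative sketch).
    from scipy.stats import beta

    total_cost = 1200.0      # $M, assumed
    years = 5                # assumed project length
    a, b = 2.0, 3.0          # assumed shape: spending peaks early-to-mid project

    edges = [i / years for i in range(years + 1)]              # normalized time 0..1
    cumulative = [beta.cdf(t, a, b) for t in edges]            # fraction spent by each edge
    yearly = [total_cost * (cumulative[i + 1] - cumulative[i]) for i in range(years)]

    for year, amount in enumerate(yearly, start=1):
        print(f"Year {year}: ${amount:.0f}M")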
There are two distinct approaches to estimate a space system's lifecycle cost:
parametric cost estimating and detailed engineering estimating (other methods,
such as analogy estimating, are similar to these). Parametric cost estimating
mathematically relates cost to engineering variables of the system. Detailed
engineering estimating uses functional estimates of the labor hours and materials we
expect to need for designing and fabricating items in the WBS or vendor quotes on
those items. Early in project definition, planners commonly prefer parametric cost
estimating, moving to detailed engineering estimating once the design is relatively
mature or manufacturing has begun. Because we focus on conceptual approaches,
this chapter illustrates parametric techniques.
The premise of parametric estimating is that we can predict cost with variables
analogous to, but not wholly the cause of, final cost. The Rand Corporation
invented this technique just after World War II, when they needed a way to
estimate military aircraft cost rapidly and early in development. Today many
industries, including aerospace, the chemical industry, shipbuilding, building
construction, mining, power plants, and software development, use parametric
estimating. The technique relies on statistically derived mathematical relationships
called cost estimating relationships (CERs). Cost is the dependent (predicted) variable, and
engineering variables are the independent (input) ones. We usually derive CERs
using data from records of past projects and often do regression analysis to find the
best statistical fit among cost and engineering variables. In space launch systems,
dry mass is one of the most common engineering parameters, but the best CERs also
take into account other variables, including those for management.
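As an illustration of the power-law form many mass-based CERs take, the sketch below fits cost = a * (dry mass)^b by regression on log-transformed data and then applies the fitted relationship to a new system. The historical data points, coefficient, and exponent are invented for demonstration; they are not the SLaTS model's values.

    # Fitting and applying a simple mass-based CER (illustrative sketch).
    import math

    def cer_cost(dry_mass_kg, a, b):
        """Estimated cost ($M) = a * (dry mass in kg) ** b."""
        return a * dry_mass_kg ** b

    # Assumed historical (dry mass kg, cost $M) records for the regression
    history = [(250, 30.0), (800, 62.0), (1500, 95.0), (4000, 170.0)]
    xs = [math.log(m) for m, _ in history]
    ys = [math.log(c) for _, c in history]
    n = len(history)
    x_bar, y_bar = sum(xs) / n, sum(ys) / n
    b_fit = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    a_fit = math.exp(y_bar - b_fit * x_bar)

    print(f"Fitted CER: cost = {a_fit:.2f} * mass^{b_fit:.2f}")
    print(f"Estimate for a 1,000 kg stage: ${cer_cost(1000, a_fit, b_fit):.0f}M")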
In the example presented here, parametric cost estimating helps gauge
essential elements of the lifecycle cost for developing, producing, launching, and
operating space launch and transportation systems (SLaTS). A space system's
lifecycle cost divides into three main phases. The design, development, test, and
evaluation (DDT&E) phase includes designing, analyzing, and testing breadboards,
brassboards, prototypes, and qualification units. It also encompasses proto-flight
units and one-time infrastructure costs, but not technology development for
system components. The production phase includes the cost of producing the
system. One concept in modeling costs is the theoretical first unit (TFU), the cost of
the first flight-qualified vehicle or system off the assembly line. For multiple units,
we estimate the production cost using a learning curve factor applied to the TFU
cost, which we discuss later. Design, development, test, and evaluation, together
with production cost, form acquisition cost. The operations and support (O&S) phase
consists of ongoing operations and support costs, often called recurring costs, incurred in operating and sustaining the vehicle or system.
We use a SLaTS as an example throughout this chapter. But the cost model is
appropriate for most space systems, including remote spacecraft, crewed
spacecraft or space stations, launch systems (crewed or not), and the supporting
infrastructure. This chapter is not so much about the SLaTS cost model as it is
about the issues for any cost model. In using one we must consider the following:
• Impact of the selected cost model
• Definitions in the work breakdown structure
• Cost estimating relationships, variables, maturity, range of values, and
applicability
• Development and operations as part of total lifecycle cost
• Models useful for estimating the cost of system test operations, systems
engineering and integration, and project management
• Assessment of the technology's readiness
• Time phasing of money for budget development
• Cost risk
Project managers and systems engineers should be very concerned about the
lifecycle cost model for their projects, because it greatly affects both the project and
their professional life. Ideally, they estimate their project's lifecycle cost with the
estimating package and estimator by which others will judge the project in the
future. A project's lifecycle cost estimate hangs between two extremes: if it's too high, the project gets killed or de-scoped; if it's too low, it gets accepted and makes the project manager's and systems engineer's lives excessively difficult. Our goal is to avoid both extremes.
Selecting the proper estimating package and approach for lifecycle cost is vital
to project success. The package should be validated and verified, just like other
models in the project. We have to track the model's maturity and how well its
results align with the known costs of previous projects. Does the current project
estimate exceed the range of values for any of the CERs? If so, the estimate may be
unusable. Is the cost estimating package truly appropriate for the type of project
and the environment within which the project is developed and operated? Many
cost estimating packages provide an estimate of development costs, but very few
estimate operations and support costs for space systems.
Below, we show one approach to both types of cost, with many details to help
understand the complexities of such models. The development cost model also
tells how to estimate the cost of systems engineering and integration, a particularly
difficult task. What's more, it illustrates cost estimating for project management
and detailed areas, such as system test operations.
Projects often get into trouble through optimism about the technology
readiness level (TRL) of their hardware and software. In the example below, we
describe an excellent way to assess these levels.
To prepare or review the project budget, or certain elements of it, the section
on time phasing project cost is important. It helps us think through the strategy for
budgeting a project. The last section offers some useful thoughts about identifying
and resolving cost risk.
FIGURE 7-1. Work Breakdown Structure (WBS) for Estimating the Cost of the Space Launch
and Transportation System (SLaTS). These elements constitute the WBS. See
APMSS [Chesley et al., 2008] for general details on work breakdown structures.
The space systems cost model takes the latter approach, developing acquisition CERs
at the group level. Table 7-2 exemplifies the dictionary for WBS elements in the
model, including their functional description, typical components, content, and
other considerations.
Table 7-2. Example of Work Breakdown Structure (WBS) Definitions for the Space Systems
Cost Model. The table deals only with the structural and mechanical element. Good
practice for WBS definitions demands three basic elements: functional description,
typical components and content, and other considerations. (RCS is reaction control
system; CER is cost estimating relationship.)
We derived the space systems cost model's acquisition CERs using data from
NASA's REDSTAR database: a limited-access repository of historical data on most
space missions flown since the 1960s. For each group in the WBS, the model
provides two acquisition CERs, one for design, development, test, and evaluation
(DDT&E) and another for the theoretical first unit's (TFU) production cost. These
two costs are the basic building blocks of the model. In addition, the model
provides CERs for the system-level costs that result when we sum costs for the
DDT&E and TFU groups.
Table 7-3, the SLaTS cost model's CER variables, lists the system
characteristics by the elements that the model requires. The global variables
(affecting all WBS elements) are first, followed by variables for specific elements in
the WBS. The first column gives the element variable name with the corresponding
range of values in the second column. Dry mass is a required input to all CERs for
the flight-hardware systems. Each system and element CER also uses other
independent variables specific to it. Descriptions of global variables follow.
Other independent variables for the system or its elements appear in the CERs, as
defined in Table 7-3.
Just having a cost estimating relationship doesn't mean the estimated cost is
correct. The CER may exclude certain variables of interest that affect cost. It does
the best it can with the information given to estimate cost, but it often doesn't
contain all pertinent cost drivers. So when we say a CER is valid, we're just saying
it does a decent job of explaining variances between the historical database and
included variables. Other variables may be at work, and they may change over
time. A further caveat: the input values fed into the cost estimating relationship
may be wrong—CERs don't correct for bad input.
TABLE 7-3. Space Systems Cost Model CER Variables. We must be precise in defining
variables, since we use them to develop our estimate. See the discussion in the text for
expanded definition of some of the variables. (Al is aluminum.)
System or Element/Variable
Abbreviation Variable Range of Values
Deployables complexity: 1 = no deployables; 2 = simple or minor; 3 = nominal; 4 = moderately complex; 5 = complex
Parts count: 1 = very low (extensive use of net casting/machining to avoid assemblies of parts; minimal number of different parts); 2 = low parts count; 3 = nominal parts count; 4 = high parts count; 5 = very high parts count
Thermal Control
Dry mass
Coatings, surfaces, multi-layer insulation (MLI), tiles, reinforced carbon-carbon (RCC), etc.: 1 = none; 2 = coatings, reflective surfaces; 3 = MLI, blankets, reusable flexible surface insulation, etc.; 5.5 = low temperature tiles; 7 = RCC
Cold plates, heaters, etc.: 1 = none; 2 = cold plates without radiators, electric heaters, etc.
Radiators: 1 = none; 2 = radiators with passive cold plates; 2.2 = radiators with louvers
Pumped fluids, heat pipes, etc.: 1 = none; 2 = radiators with pumped fluid cold plates; 2.2 = radiators with louvers; 2.5 = heat pipes
Stored cryogenics or refrigeration: 1 = none; 2 = stored cryogenics; 2.5 = active refrigeration
CC&DH (Communication, Command, and Data Handling)
Dry mass
Communication downlink: 1 = very low (~1 kbps); 2 = low (~10 kbps); 3 = moderate (~100 kbps); 4 = high (~1 Mbps); 5 = very high (~10 Mbps); 6 = ultra high (~100 Mbps)
Redundancy level: 1 = single string; 2 = dual string; 3 = triple string; 4 = quadruple redundancy; 5 = quintuple redundancy
Electronics parts class: 1 = Class B; 2 = Class B+; 3 = Class S
Dry mass
Electrical and data requirements: 1 = no; 2 = yes
Fluid requirements: 1 = no; 2 = yes
Jet Engine Package
Dry mass
Air start qualification: 1 = no; 2 = yes
Navigation, power, propulsion, and environmental control and life support systems
may use inherited materials or components. But they're usually configured into a
new geometry, which engineers must design, analyze, and test (and this is where
most of the expense is anyway). Reusing software isn't very practical for launch
systems because most are unique and were developed many years apart. For
example, Ariane V's first flight failed because it reused software from Ariane IV.
Off-the-shelf hardware and software also may not meet specifications as the
project's definition matures and engineers cope with demands to optimize vehicle
performance. These issues have been major reasons for cost growth during concept
formulation and while transitioning to implementation. We have to judge
carefully the reasonableness of early estimates regarding how much to use off-the-
shelf elements. We should document assumptions and rationale for inheritance,
examine the rationale carefully, and carry enough cost reserves to cover new
hardware requirements that naturally creep into projects.
By dividing the actual historical cost by an estimated percent new factor from
Table 7-5, we've adjusted the D&D-group CERs for our space systems cost model
to yield 100% new design costs. The percent new design factor included as a
multiplier in each of these CERs enables us to account for specific inheritance
situations when estimating the cost of a new system. We should select the new-
design factor in the SLaTS cost model based on the guidelines below.
Table 7-5. Percent New Design Definitions. We should choose new design factors carefully to
correspond to the design effort’s expected scope and the technology's maturity.
New Design
Factor Basis
0.0 to 0.1 Existing design
• The system is an off-the-shelf design requiring no modifications and no
qualification.
• All drawings are available in required format
• All engineering analyses apply
• All elements are within the current state of the art
• Needs no new standards, specifications, or material adaptations or requires only
very minor interface verification
• Requires no qualification testing of the system
• The system is in production or has been produced recently
• Current team fully understands the design
0.6 to 0.7 Requires very major redesign or new design with considerable inheritance
• An adaptable design exists but requires major modifications to most of the
components or features
• Requires a new design but one that can reuse many components, engineering
analyses, or knowledge
• Current team fully understands the design
0.8 to 0.9 Almost totally new design with minor inheritance or advanced development work
* TRL refers to NASA's Technology Readiness Level scale (Chapter 13). In general, no new NASA
project should enter the implementation phase before achieving TRL 6 or higher.
where yavg is the average unit cost over x units, and a and b are defined as above.
Thus, consider the tenth average unit cost using a TFU of $100 and assuming 90% learning.
The resulting average unit cost of $80.06 from the computationally convenient
formula is acceptably close to the actual average unit cost of $79.94 calculated the
hard way (and shown in Table 7-6).
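The arithmetic behind those two figures can be sketched as follows. The unit-theory cost of unit x is the standard power law y = a x^b with b = ln(learning rate)/ln 2, and the "computationally convenient" average is the usual closed-form approximation. Both are shown here as assumptions consistent with the text's numbers rather than formulas quoted from it.

    # Learning curve sketch reproducing the $79.94 and $80.06 figures above.
    import math

    def unit_cost(tfu, slope, x):
        """Cost of unit x under unit theory: y = a * x**b, with b = ln(slope)/ln(2)."""
        b = math.log(slope) / math.log(2.0)
        return tfu * x ** b

    def average_cost_exact(tfu, slope, n):
        """Average unit cost over n units, summed the 'hard way'."""
        return sum(unit_cost(tfu, slope, x) for x in range(1, n + 1)) / n

    def average_cost_approx(tfu, slope, n):
        """Closed-form approximation to the average unit cost over n units."""
        b = math.log(slope) / math.log(2.0)
        return tfu * ((n + 0.5) ** (1 + b) - 0.5 ** (1 + b)) / (n * (1 + b))

    print(round(average_cost_exact(100, 0.90, 10), 2))     # 79.94
    print(round(average_cost_approx(100, 0.90, 10), 2))    # 80.06
    # The same relations give roughly the 56% average-unit-cost factor cited
    # later for a 120-unit production run at 90% learning:
    print(round(average_cost_approx(100, 0.90, 120) / 100, 3))   # ~0.566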
Table 7-6. Comparison of Unit and Cumulative Average Theories. Results from the learning
curve depend on which of the two types of learning-curve theories we apply.
Columns: unit number; then, under each theory, the cost per unit, average cost, and total cost.
Space launch and transportation systems have exhibited learning curve rates
between 80% and 100% (100% means no learning). We should choose a learning-
curve percentage based on our best judgment about the economies of scale being
contemplated for the program as production ramps up. Learning doesn't affect the
cost of reusable launch vehicles (RLVs) much (because not many units are likely to be built in the foreseeable future), but it's a very important factor for expendable launch vehicles (ELVs) at their higher production quantities.
Table 7-7. Mass Statement and Other Information For Costing the Reference SLaTS Vehicle.
Cost analysts often do not receive all the information required for cost estimating, so
assumptions are made and verified at a later date. (CC&DH is communication,
command, and data handling; GN&C is guidance, navigation, and control; LH2 is liquid
hydrogen; LOX is liquid oxygen.)
* Not provided; calculated as 10% x total structural and mechanical weight for booster, 5% for 2nd stage and fairing
** Not provided; calculated as 10% x total other stage mass
*** Moved to 2nd stage for costing; obtained from the given avionics mass of 126 kg by allotting 70% to CC&DH and 30% to GN&C
We begin by specifying the cost model's global variables in Table 7-8. (For
brevity, we show only the inputs for the second stage here, excluding a similar set
of inputs required for the booster and for the payload fairing). The first global
variable reflects the fact that our vehicle isn't human-rated (human rated = 1). Next
we set mission type to 2, corresponding to an ELV. Our configuration is a fully
operational vehicle (XYZ factor = 3) and has an authority to proceed date of 2003
(implying a year of technology freeze = 2003). We estimate that an average amount
of work has defined the configuration before the project start and therefore set pre
development = 2. We expect no more than the usual amount of changes to occur in
Table 7-8A. Global Inputs for the Space Systems Cost Model. These are for the second stage of
the reference vehicle. Global inputs are the same for all CERs. (D&D is design and
development; TFU is theoretical first unit.)
Human rated 1 1
Mission type 2 2
XYZ factor 3 3
Year of technology freeze 2003 2003
Pre-development 2 2
Engineering change activity 3 3
Number of organizations 3 3
Team experience 3 3
TABLE 7-8 B. Subsystem Inputs for the Space Systems Cost Model. Results here are only for
structural and mechanical, thermal control, communication, command, and data handling
(CC&DH) and GN&C elements. (D&D is design and development; TFU is theoretical first
unit; TRL is technology readiness level; MLI is multi-layered insulation; RCC is reinforced
carbon-carbon.) For “percent new” see Table 7-5.
Columns: cost estimating relationship (CER) variables; inputs; D&D exponent (b); TFU exponent (b); D&D coefficient (a); TFU coefficient (a); computed D&D cost ($M); computed TFU cost ($M).
Team experience 3 0.25 0.15 0.76 0.85
Subsystem TRL 6 1.10 0.30 1.00 1.00
Percent new 1.00 1.00 1.00
CC&DH 5.93 0.198 $39.9 $1.8
Mass (kg) 88 0.40 0.70 -- --
Comm kilobits down 3 0.20 0.10 1.25 1.12
Redundancy level 2 0.20 0.10 1.15 1.07
Electronics parts class 2 0.08 0.12 1.06 1.09
Reserved 1 1.00 1.00 1.00 1.00
Reserved 1 1.00 1.00 1.00 1.00
Human rated 1 0.60 0.30 1.00 1.00
Mission type 2 -- -- 1.30 0.40
XYZ factor 3 0.90 0.50 2.59 1.73
Year of technology freeze 2002 0.040 0.030 0.19 0.29
Pre-development 2 0.20 0.05 0.87 0.97
Engineering change activity 3 0.35 0.05 1.47 1.06
Number of organizations 3 0.20 0.10 1.25 1.12
Team experience 3 0.25 0.15 0.76 0.85
Subsystem TRL 6 1.10 0.30 1.00 1.00
Percent new 1 1.00 1.00
Calculated nonredundant 53 -- -- -- --
CC&DH mass (kg)
GN&C 3.55 0.164 $29.5 $1.3
Mass (kg) 38 0.40 0.70 -- --
Stabilization type 1 0.10 0.20 1.00 1.00
Redundancy level 2 0.20 0.10 1.15 1.07
Sensors type 1 0.20 0.10 1.00 1.00
Degree of autonomy 1 0.10 0.20 1.00 1.00
Reserved 1 1.00 1.00 1.00 1.00
Human rated 1 0.90 0.50 1.00 1.00
Mission type 2 -- -- 1.30 0.40
XYZ factor 3 1.10 0.70 3.35 2.16
Year of technology freeze 2002 0.025 0.020 0.35 0.44
Next we need to feed information on each subsystem and other cost elements
in our WBS into the space systems cost model in Table 7-8. Some of this
information comes from Table 7-7, but we need to supply other inputs based on
our knowledge of the project and our experience. For example, to estimate
structural and mechanical cost, the space systems cost model requires inputs on
material complexity, deployables complexity, parts count, and so on. In this case,
we've generated these inputs ourselves, as shown in Table 7-8, using our best
engineering judgment. We specify a material complexity of 1.0 corresponding to
"mostly aluminum" structure because design engineers tell us this material is
dominant. We use a deployables complexity of 2.0 corresponding to simple or
minor deployables because the only mechanism required is the staging
mechanism that separates the booster from the second stage. The assigned parts-
count value is 3, which corresponds to a normal number of parts, because the
design uses some net casting and other parts-count strategies but not more than
usual for a project of this type. We get several other inputs for structural and
mechanical parts from the global inputs. The final two structural and mechanical
inputs are the subsystem's technology readiness level (TRL) and the percent new
design. We set the TRL at 6 because all technology issues related to this element
have been demonstrated in a relevant environment.
We set percent new design to 1.0 to reflect our assessment that the design
represents all new effort. This last input requires more explanation. Table 7-5
implies that, in general, a TRL level of 6 also implies a percent new design factor
less than 1.0 (the table gives values of 0.5 to 0.8 for a TRL of 6). But we must
sometimes depart from the cookbook instructions and use our own judgment. In
this case, even with a technology readiness level of 6, we judged the design to be
all new with essentially no inheritance from previous designs, so it warrants a new
design factor of 1.0.
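To make the mechanics concrete, the sketch below evaluates a generic weight-based, multiplicative CER of the form cost = a x mass^b x (product of adjustment coefficients), where a and b are a subsystem's coefficient and exponent and the adjustment coefficients capture complexity, redundancy, team experience, and similar inputs such as those in Table 7-8. The function name, the sample values, and the exact functional form are illustrative assumptions, not the space systems cost model itself.

# Illustrative sketch of a multiplicative, weight-based CER.
# The functional form and sample numbers are assumptions for illustration
# only; they are not the actual space systems cost model.

def cer_cost(coefficient, mass_kg, mass_exponent, adjustment_factors):
    """Return cost in $M: a * mass^b * product of adjustment coefficients."""
    cost = coefficient * mass_kg ** mass_exponent
    for factor in adjustment_factors:
        cost *= factor
    return cost

# Hypothetical D&D estimate for one subsystem (placeholder values):
dd_cost = cer_cost(
    coefficient=5.9,          # subsystem base coefficient (a)
    mass_kg=88.0,             # subsystem mass
    mass_exponent=0.40,       # D&D mass exponent (b)
    adjustment_factors=[1.25, 1.15, 1.06, 1.30, 0.87, 0.76],  # complexity, redundancy, etc.
)
print(f"Illustrative D&D cost: ${dd_cost:.1f}M")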
Now we must provide the cost model's required inputs for the reference
vehicle's other subsystems and elements, either by estimating them ourselves or
by interviewing design engineers or project managers. In Table 7-8, for brevity, we
show only three of these additional subsystems: thermal control (TCS);
communication, command, and data handling (CC&DH); and guidance,
navigation, and control (GN&C).
After we insert inputs for all WBS elements of each stage of the reference vehicle,
the space systems cost model computes each element's cost (based on all the CERs).
Table 7-8B gives the resulting estimated costs for the second stage. The estimates for
the structural and mechanical element for the SLaTS vehicle's second stage are a
D&D cost of $162.0 million and a TFU cost of $13.8 million. Similarly, the table shows
costs for the TCS, CC&DH, and GN&C elements. All these cost elements accumulate
to a DDT&E cost of $1200 million and a TFU cost of $85 million for the second stage.
(We use the term D&D cost for the subsystem and the term DDT&E for total stage
cost because the stage-level costs include those for system-test hardware and
operations. We combine these costs with the stage-level integration and check-out,
ground support equipment (GSE), systems engineering and integration, and project
management costs to bring the D&D cost of the subsystems up to full DDT&E.)
Though we don't discuss it here, we employ the same process to generate costs
for the booster stage and the payload shroud. These costs are, in total, a booster-
stage cost of $2240 million DDT&E and $189 million TFU and a payload shroud
cost of $330 million DDT&E and $13 million TFU.
Table 7-9 summarizes these results. We derived the average unit costs of the
vehicle stages and shroud in the table by using the TFU from the space systems
cost model and the learning curve method we described in Section 7.3.2. To apply
learning, we multiplied the TFU costs for the booster, the second stage, and the
shroud by the average unit cost factor—assuming a production run of 12 units per
year over 10 years (120 total units). We assume the fairly standard 90% learning
curve here. Using the learning curve equations from Section 7.3, we find that the
average unit cost decreases to 56% of the TFU cost over the production run of 120
units; the sketch below reproduces this computation.
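The learning curve equations of Section 7.3.2 aren't repeated here; the sketch below uses a common cumulative-average formulation as an assumption (unit n costs TFU x n^b, with b = ln(slope)/ln 2). It yields a factor of roughly 0.56 to 0.57, consistent with the 56% used in this example.

import math

# Approximate average-unit-cost factor for a learning curve, assuming
# unit n costs TFU * n**b with b = ln(slope)/ln(2). This is a common
# formulation; the exact equations of Section 7.3.2 may differ slightly.

def average_unit_cost_factor(slope, units):
    b = math.log(slope) / math.log(2.0)           # 90% slope -> b ~ -0.152
    total = sum(n ** b for n in range(1, units + 1))
    return total / units                          # average relative to the TFU

factor = average_unit_cost_factor(slope=0.90, units=120)
print(f"Average unit cost ~ {factor:.2f} x TFU")  # roughly 0.56-0.57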
As we see in Table 7-9, applying a 0.56 learning factor to the TFU costs results in a
total average unit cost of $161 million for the complete vehicle.
Table 7-9. Summary of the SLaTS Reference Vehicle’s Average Unit Costs for Design,
Development, Test, and Evaluation (DDT&E). The table gives the costs in 2003
dollars. (TFU is theoretical first unit.)
systems. Operations data and the development of assessment and costing methods
have particularly suffered from neglect. As a result, we've lacked the information
needed to help develop an analysis process, and the aerospace community hasn't
agreed on the approaches to use. More recently, emerging analysis methods have
addressed space transportation O&S from several perspectives and various levels
of detail to give us better insight. Many of these tools were developed to assess the
operability of new concepts, but they now offer the best ways to get information
needed to assess operations and support costs.
Figure 7-2. Lifecycle Phases. Here we show that the operations and support costs are usually
the most significant cost element over a program’s lifecycle. [DoD, 1992]
Because reusable launch systems are unique and relatively new to operations,
we haven't developed simple rules of thumb for estimating O&S costs based on
their development or production cost. We also haven't had enough systems to
develop parametric relationships of cost-to-engineering variables for O&S, as is
the case for DDT&E estimates. Unlike aircraft systems, which have a common
operational scenario, the launch systems' low flight rates and unique support
structures require a more involved approach to developing O&S cost estimates.
Possible launch concepts include crewed and uncrewed vehicles; horizontal and
vertical takeoff and landing; reusable and expendable rocket systems; aircraft-like
systems with air-breathing engines; and others.
Because no two systems are alike, our analysis must reflect the specialized
nature of each system's operation and support needs that may also vary with flight
rate. Such needs include facilities, specialized support equipment, processes,
levels of support, and so forth. The analysis process and models must fit the
complexity of these distinctive concepts with their different operating
requirements. Costing for O&S requires us to realistically estimate the resources
needed to achieve the flight rates required over the system's operating life. In this
section, we describe how to estimate these costs.
(Figure: typical space transportation operations functions, spanning launch operations (preflight processing, integration, servicing, maintenance, launch), flight operations (landing and recovery, traffic and flight control, post-landing operations, flight crew support), and off-line functions (payload processing, depot maintenance, logistics, transportation, manufacturing, management, processing).)
The O&S cost comprises nonrecurring startup costs (incurred while preparing
for operations) and recurring costs of support once the system is fully
operational. The nonrecurring cost elements usually cover initial costs of designing,
developing, and building the support facilities, as well as the cost of acquiring and
training people to operate and support the system before it deploys. The recurring
costs include the cost of people needed to process and maintain the launch vehicle,
together with the launch system's support equipment and facilities. We usually
express them as an annual cost that depends on flight rate and the hardware's
characteristics. Costs for O&S also encompass those for material, consumables,
and expendables used for operations. (We normally account for the cost of O&S
software and its maintenance as a part of the cost element that the software
supports.) Costs to replace components that exceed limits on hardware life, such
as an engine, also fall under O&S. Table 7-10 summarizes typical O&S costs.
In addition, recurring costs have fixed and variable components, as Figure 7-4
illustrates. In economic terms, fixed costs are those not affected by changes in output
or activity for a specified system capability, whereas variable costs change with the
output level. We express a space launch system's capability in terms of flight rate.
Fixed costs are for maintaining the system's ability to meet a specified flight rate at
a given time, whether or not the specified number of flights is actually flown.
Among fixed costs are staff salaries and infrastructure upkeep required to sustain
the specified flight rate (users may add interest, fees, insurance, and so on). A fixed
cost may change with major changes in the system's capability. Variable costs apply
to individual flights. Examples are the cost of spares, expendable hardware,
consumables, and extra people needed to meet a higher flight rate. No specific
conventions govern how to classify O&S costs, so we should compare carefully with
other O&S costs to be sure we include similar cost elements in each study.
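As a minimal illustration of this split, the sketch below computes annual recurring O&S cost as a fixed component plus a per-flight variable component times the flight rate. The dollar values are placeholders, not estimates from this chapter.

# Minimal fixed-plus-variable recurring cost sketch; all values are placeholders.

def annual_recurring_os_cost(fixed_per_year, variable_per_flight, flights_per_year):
    """Annual O&S cost = fixed (salaries, upkeep) + variable (spares, consumables)."""
    return fixed_per_year + variable_per_flight * flights_per_year

print(annual_recurring_os_cost(fixed_per_year=80.0,       # $M/yr to sustain capability
                               variable_per_flight=5.0,   # $M per flight
                               flights_per_year=12))      # -> 140.0 ($M/yr)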
TABLE 7-10. Operations and Support (O&S) Cost Elements. These divide into nonrecurring costs to
build the infrastructure for initial operation and recurring costs for ongoing operations. The
costs include facilities, personnel, hardware, and software. Many of these items are
discussed in Chapter 9 of APMSS [Chesley et al., 2008]. (GSE is ground support equipment.)
FIGURE 7-4. Elements of Operations and Support (O&S) Costs. The figure illustrates the
taxonomy we use to distinguish O&S cost elements.
Because decisions that affect these issues occur early in the design process, we
must capture their effects on the O&S costs during the conceptual phase. Designers
can modify a concept early in design to meet its cost goals when changes are least
costly. Next, we look at the effect of each issue on new launch concepts.
Operations Concept
An operations concept is integral to assessing O&S costs. Chapter 3 and Cost
Effective Space Mission Operations (CESMO) [Squibb et al., 2006] describe how to
develop an operations concept. Suppose a concept includes vehicles that land
away from the launch site or arrive in segments to be mated at the pad. These
features affect O&S costs. Typical choices for a launch vehicle operations concept
are an integrate-transfer-launch or assemble-on-pad scenario, vertical or
horizontal processing and integration, where to install the payload, and how much
processing to place in a processing station versus a launch pad. Horizontal or
vertical processing and integration affect O&S differently for any concept, but the
ultimate driver—the amount of work—influences it the most.
Support Policy
Repairs and servicing must take place before reflight, but support policy
affects the environment under which they occur. Support policy encompasses such
decisions as the level of repair at the launch site, the number of available spares,
shift and staffing for a facility, and the level of inspection required to ensure safe
and reliable flight. These policies often result from trying to meet mission flight-
rate goals as safely and economically as possible. Except for shift policy, many of
these decisions are implicit in conceptual studies. We should be aware of a study's
assumptions and stay consistent with them. For instance, the planned work and
level of inspection necessary before flight are a matter of policy because they
depend on the system's maturity, technology, and design confidence.
Developmental and crewed systems receive more extensive servicing and
inspection to ensure the safety of flight, so they require more staffing and longer
turnaround times between flights. That effort reduces a vehicle's availability and
increases O&S cost. The change in minimum turnaround times observed after the
Challenger accident best exemplifies the effect of a change in support policies. The
shortest turnaround time before the Challenger accident had been 46 workdays
(STS 61-B), but new policies for flight safety have lengthened the best time since
then to 81 workdays (STS 94) for what is essentially the same vehicle system.
Table 7-11. Steps Required to Estimate Operations and Support (O&S) Cost. We use these
steps to estimate the cost for operations and support. (APMSS is Applied Project
Management for Space Systems [Chesley et al., 2008].)
Step | Task | Where Discussed
1. Understand the system and purpose of the analysis | Define purpose, scope, and depth of analysis | Chaps. 1-6
4. Determine the ground rules and assumptions | Derive from mission needs, requirements, and constraints; limit scope and depth | Chaps. 1 and 2
5. Choose a cost element structure (CES) | Develop a CES to assure that results capture all the cost elements needed for analysis | APMSS Chap. 11
We should make sure that the study's depth matches its purpose and the
available information. If the purpose is to trade the number of stages, we may not
need to estimate cost to the subsystem level. If the purpose is to evaluate the effect
of a new subsystem technology, we must estimate subsystem support costs so we
can assess that effect. Not all parts of the cost analysis require the same level of
detail for analysis.
TABLE 7-12. Examples of Cost Element Structures (CES). Column A reflects a CES based on the
support of ground and launch operations for the Shuttle. Column B represents a more
general type of cost structure used by the military for logistics support. Column C lists
the major cost elements for new Spaceport launch systems. (GFE is government-
furnished equipment; GSE is ground support equipment; JSC is Johnson Space
Center; KSC is Kennedy Space Center; LPS is launch-processing system; O&M is
operations and maintenance; ILS is integrated logistic support; R&D is research and
development.)
A. Access to Space Cost Estimating Structure | B. General CES | C. Spaceport Operations Modules and Facility Functions
TABLE 7-13. Comparing Per-flight Prices. Here we show the estimated price ranges for current
launch systems. Values come from Isakowitz, Hopkins, and Hopkins [1999] and aren’t
converted to 2003 dollars.
R is Reusable, E is Expendable
** Cost based on 5 to 8 flights per year
* Here "payload" refers to the entire satellite that is placed in orbit, not just to the part of
the satellite that carries out the mission.
on the pad, and no elements will return for recovery. The first stage's
characteristics are 4.6 m diameter, 30.6 m height, and 145,600 kg of propellants. The
second stage's characteristics are 4.6 m diameter, 10.3 m height, 30,400 kg of
propellants. The payload shroud is 14.7 m in height. The flight vehicle's total
stacked height is 55.5 m. (We assume a mixture ratio of 6:1 for oxidizer: fuel.)
Step 1. Understand the System and Purpose of the Analysis
The scope of the analysis includes the launch site and mission support
functions only, capturing the direct cost for the design and its support
infrastructure. This allows us to compare designs and support concepts. We don't
address secondary (but not inconsequential) costs for off-site support such as
sustaining engineering and testing, off-site depot work, supporting off-site
landing sites, and improving or managing products. (We would need to address
these costs if the purpose were to develop a cost for budgeting or if we wanted to
establish a commercial launch price.)
Step 2. Gather Information
This step organizes the information we have obtained so we can ascertain
whether we are missing any critical data.
• Design—Uncrewed, expendable vehicle requires two stages to orbit
• Technologies—The first and second stages (boosters) each require a single
LH2/LOX engine. The payload has a solid kick motor (upper stage) to place
it in orbit.
• Dimensions — Assembled flight vehicle has 4.6 m diameter, 55.5 m stacked
height
• Propellant masses—Propellant includes 151,000 kg LOX and 25,200 kg LH2
based on an assumed mixture ratio of 6:1
• Mission—Requires payload delivery to orbit, 12 flights per year
Step 3. Operations Concept
We deliver launch elements to the site and erect them directly on the launch
pad. Teams check each stage independently before adding the next stage. They
integrate the payload stage on the pad, enclose it with the payload shroud, and do
an integrated check-out. A separate support crew provides mission support from
an onsite firing room. We assume launch-day operations require one day. An off-
site organization handles on-orbit payload support.
Step 4. Ground Rules and Assumptions
• "Clean-sheet" approach: estimate the cost of major facilities
• Ten-year lifecycle for operations
• Flight rate of 12 flights per year
• Missions split between NASA and commercial enterprises (50% each)
• No learning curve
• No developmental flights
• Constant flight rate (no growth over lifecycle)
• Results in FY2003 dollars
Step 5. Choose a Cost Element Structure
In this case, the COMET/OCM model in Step 6 determines the cost element
structure and thus defines the cost data needs.
Step 6. Select Method or Model for Analysis
The preceding sections in this chapter describe elements we should consider in
developing O&S costs. Computer-based models that address them are still being
developed. To exemplify how to compute recurring costs for O&S, we've developed
a high-level modeling process based mainly on information extracted from the
Conceptual Operations Manpower Estimating Tool/Operations Cost Model
(COMET/OCM, or OCM). We use it for rough order-of-magnitude cost estimates
using parameters typically available during concept development. But the model
can't directly reflect all the benefits or disadvantages of using newer technologies,
processes, or designs—all of which we should consider in a cost estimate. For that
level of estimate, we suggest reviewing the models discussed in Section 7.4.3, Step 6.
Step 7. Perform Analysis
We trace costs for space launch operations to the tasks required to prepare a
vehicle for launch, conduct mission operations, and provide support. These cost
elements focus on people, facilities, infrastructure, ground support equipment,
consumables, expendables, spares, and so forth. But detailed information required to
determine accurately the number of people or the time needed for specific tasks
usually isn't available for a conceptual system. So OCM uses historical data and a
technique called ratio analysis to estimate costs for new concepts. Its main parameters
are the number of people required to process vehicles and plan flights, which then
helps estimate the staffing needed to do the additional launch and flight operations
(Table 7-14 and Figure 7-5). Most of the other cost elements depend on these staffing
levels. We also allow for cost elements not related to staffing levels, such as
consumables and spares. Below, we describe that process for the example concept.
We've developed the total recurring O&S cost for ground and mission
operations. As presented here, the results reflect the support environments in the
mid-1990s for the Shuttle and expendable launch systems such as Atlas and Titan.
We have to adjust these values based on engineering judgment on how new
designs, technologies, and processing changes affect the support of new concepts.
Step 7.1. Describe Vehicle and Mission Concepts
We begin by using Tables 7-15 and 7-16 to describe the vehicle and the mission.
The parameters we need to cost these functions are based on concepts defined in
that format. We use the following terms: booster is any stage that provides lift to
orbit but doesn't achieve orbit or participate in on-orbit maneuvers. A low-Earth-
orbit (LEO) stage does on-orbit maneuvers but need not be reusable. An upper
TABLE 7-14. Process to Estimate Recurring Costs for Operations and Support (O&S). These
five steps describe how to estimate recurring support costs based on information from
the Operational Cost Model.
Step Source
7.1. Describe vehicle and mission concepts Tables 7-15 and 7-16
7.2. Compute primary head count for vehicle processing and flight
planning
• Launch operations (vehicle processing) Tables 7-17 through 7-19
• Flight operations (flight planning) Table 7-20
7.3. Compute total head counts
• Head count for basic flight rate of 8 per year
• Head count for desired flight rate Eqs. (7-12) and (7-13)
stage is the final element that either places a payload in orbit or transfers it from
LEO to a higher orbit. It may be carried in the bay of a LEO vehicle such as the
Shuttle, or atop a booster stage such as Titan or Ariane. An upper stage may act as
a cargo-transfer vehicle or orbital-maneuvering vehicle if we have no LEO stage.
We must complete the process in Table 7-14 for each booster element of the concept
(we show only the stage-one element). A major purpose for filling out Tables 7-15
and 7-16 is to define the system in terms that match the format needed in Tables 7-
17, 7-18, and 7-20.
Step 7.2. Compute Primary Head Counts for Vehicle Processing and Flight
Planning
To estimate the number of people for vehicle processing, we consider them by
operational phase (Table 7-17) and type of element being processed (Tables 7-18
and 7-19). Then we use Equation (7-5) to estimate the head count for vehicle
processing. We estimate the number of people for flight planning in a similar way,
using Equations (7-7) through (7-11) plus Table 7-20.
Figure 7-5. The Contribution of Head Count to Total System Costs. Head counts for vehicle
processing and flight planning drive the total head count, which in turn feeds into the
aggregate cost of the system. Step 7.1, “Describe vehicle and mission concepts,” is
omitted in this figure. (O&M is operations and maintenance; O&S is operations and
support.)
TABLE 7-15. Concept Definition for Operations and Support (O&S). The first step in estimating
O&S costs is to describe the concept and vehicle using this table and others that follow.
As shown here, we fill out the table for the first-stage element (first and second stages are
boosters), the upper stage (with the solid kick motor) which contains the payload, and the
total propellant requirements for the launch vehicle. (CTV is crew transfer vehicle; OMV is
orbital maneuvering vehicle; TPS is thermal protection system.)
Concept Description
Booster Definition:
For each booster type:
1) Enter the number of boosters: 1
2) Enter the type of boosters: Solid:
Hybrid:
Liquid: X
a. If solid or hybrid, what type? Monolithic: Segmented: [ ]
b. Enter the number of engines per
booster: 1
3) Are the boosters reusable? Reusable: Expendable: [X]
a. if so, enter recovery type: Parachute/water:
Parachute/land:
Flyback/land:
Table 7-16. Mission Description. The table helps define the mission parameters in enough detail
to facilitate operations and support (O&S) cost estimation. The events are based on
liftoff, engine ignition, cutoff (both stages), and separation for each stage plus jettison of
the payload fairing. The table uses a standard design and mission type and splits
mission payloads evenly between commercial and NASA launches. (OMS/RCS is
orbital maneuvering system/reaction control system; ET is external tank; EVA is
extravehicular activity.)
Mission Description
Mission Profile: Factor Factor
Number of events during:
Ascent: 8 See Events Table*
On-orbit 2 See Events Table*
Descent See Events Table*
Type of mission: Standard X Unique
Maturity of payload design: Mature X First flight
Trajectory and post-flight analysis will be: Manual: Automated: X
Crew/Payload Planning:
Will the vehicle have a flight crew? No X Yes
Average mission length (days):
Average crew size:
Mission man-days: Mission
length x crew
size
Will there be EVA operations? No: X Yes:
Mission Model:
Percent of missions that are:
Commercial 50
Civil/NASA 50
DoD 0
*Events Table

Ascent maneuvers or events: Main engine start; Booster engine start; Liftoff; Booster engine burnout; Main engine cut-off; 1st-stage separation; Payload module separation; 2nd-stage engine ignition; 2nd-stage engine cut-off; 2nd-stage separation; OMS ignition; OMS cut-off; Insulation panel jettison; Payload fairing jettison; Alignment to spacecraft separation attitude; Spacecraft separation; Upper stage collision avoidance maneuver; Booster or payload module parachute deployment; ET separation

On-orbit maneuvers or events: Orbit change OMS/RCS ignition; Orbit change OMS/RCS cut-off; Alignment to spacecraft separation attitude; EVA attitude adjustments; Spacecraft separation; Rendezvous with docking platform; Docking maneuver; Separation from docking platform

Descent maneuvers or events: Deorbit OMS/RCS ignition; Deorbit OMS/RCS cut-off; Pre-reentry attitude alignment; Post-reentry attitude alignment; Parachute deployment; Final approach landing alignment; Runway landing; Splashdown; Flotation device deployment
For launch operations, we calculate the personnel head count (HC) for each
element in the concept (booster, LEO vehicle, upper stage, cargo, and integrated
vehicle) for each applicable vehicle-processing phase (process, stack, integrate,
countdown, and recover) by multiplying the number of elements of each type by the
HC required to perform the phase activity for the element. Table 7-17 gives guidance
on matching the necessary head-count parameters to our concept description, while
Table 7-18 provides the basic HC values required to perform each function.
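Equation (7-5) is not reproduced here, but its logic, as described above, is a sum of products: for each element and each applicable processing phase, multiply the number of elements by the per-element head count from Table 7-18. The sketch below implements that logic generically; the phase-to-parameter mapping and the sample entries are illustrative assumptions, not the full Table 7-17 mapping, so it does not reproduce the reference vehicle's total.

# Generic vehicle-processing head-count roll-up in the spirit of Eq. (7-5).
# The element/phase entries below are illustrative; the real mapping comes
# from Tables 7-17 and 7-18.

def vehicle_processing_head_count(elements):
    """elements: list of (count, dict of per-element head counts by phase)."""
    return sum(count * sum(phase_hc.values()) for count, phase_hc in elements)

example_elements = [
    # 1 expendable liquid booster: base + one engine processing, uncrewed countdown
    (1, {"process": 10 + 30, "countdown": 98}),
    # 1 solid upper stage with on-pad cargo encapsulation and integration
    (1, {"process": 10, "integrate": 55}),
]
print(vehicle_processing_head_count(example_elements))  # illustrative total: 203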
TABLE 7-17. Vehicle Processing Manpower Requirements by Phase. For each element type, the
table identifies the corresponding head-count parameters to compute the vehicle
processing head count. Table 7-18 defines the terms and head-count values. Table 7-
19 summarizes the results. (R is reusable; E is expendable; other terms are defined in
Table 7-18.)
Table 7-18. Vehicle Processing Manpower Requirements by Element Type. We use these
values to define head counts (HC) for each element identified in Table 7-17. (CTV is
crew transfer vehicle; OMV is orbital maneuvering vehicle.)
Element Abbreviation | Head Count per Element | Element Description or Function
1Eng 30 Basic engine processing head count per stage
BLEO 770 Basic head count required for vehicles such as an Orbiter, CTV, etc.
performing operations in low-Earth orbit (LEO)
C 62 Head count for a crewed system
CDM 288 Crewed countdown head count
CDU 98 Uncrewed countdown head count
COF 10 Head count for upper stages performing CTV/OMV functions
ET 60 External tank processing head count
GR 100 Glider recovery head count
LCBase 10 Liquid rocket booster and core base head count
LMate 34 Liquid rocket booster mating head count
LSI 230 LEO stage integration head count
Mono 4 Monolithic processing head count
OffEP 9 Head count contribution for off-line encapsulated processing and
integration
OnEP 55 Head count to encapsulate and integrate cargo on the launch pad
PBI 136 Head count contribution to integrate cargo into the payload bay
PBP 70 Head count contribution to payload bay processing
PR 17 Parachute recovery head count
RLEO 200 Head count required for reusable LEO vehicle processing
RUS 15 Reusable upper stage head count
SegP 36 Segmented processing head count
SegS 48 Segmented stacking head count
SUS 10 Solid upper stage processing base head count term
TPS 300 Thermal protection system processing head count (Shuttle value)
USB 10 Liquid upper stage processing base head count
XEng 15 Head count for processing each additional engine on a stage
TABLE 7-19. Estimated Manpower to Process the Reference Vehicle. The table summarizes the
results from using Tables 7-17 and 7-18 by element type and processing phase. The
total is the head count required to support vehicle processing. We use this value later
to define the total head count to support launch operations as part of defining recurring
costs for ground operations. Numbers in this table are based on eight flights per year.
We derive the cost of labor to support flight operations from the number of people
required to support flight planning. This primary flight-planning head count comes
from the mission description in Table 7-16. We compute flight planning from the
number of people needed to design and plan the mission, plus the number required
to plan the crew activities and payload handling while on orbit. All these
calculations are adjusted by factors that account for the change in management
requirements based on the type of organization launching the payload.
Flight planning head count =
[(HC for flight profile design and mission planning) +
(HC for crew or payload activity planning)] x
(program management, document control, data reporting, testing, etc. factor) (7-8)
where
HC for flight profile design and mission planning =
{[(10 x ascent phase difficulty level x ln(# events during ascent phase +1))
+ (10 x on-orbit phase difficulty level x ln(# events during on-orbit phase +1))
+ (10 x descent phase difficulty level x ln(# events during descent phase +1))]
x crewed mission multiplier x payload design multiplier
x automated post-flight analysis (PFA) multiplier
x mission-peculiar multiplier} (7-9)
Table 7-20 defines the terms in Equation (7-10). The difficulty level ranges from 1.0
to 8.0, with values for the reference system equal to 4.5 for the ascent phase, 7.5 for
the on-orbit phase, and 6.0 for the re-entry phase. We derive these values from
model calibrations using flight-planning histories for the Shuttle and expendable
launch vehicles. The crewed mission multiplier is 1.0 for uncrewed and 1.9 for crewed
flights. The payload design, automated PFA, and mission-peculiar multiplier values
range from 1.0 to 1.2, 1.0 to 0.97, and 1.0 to 1.2, respectively.
where man-days in space = mission length in days x crew size, and where the crew
activity difficulty level is similar to the flight-planning difficulty level (Table 7-20),
with a value range from 1 to 8. The EVA factor is 1.2 for a mission with EVA and 1.0
for no EVA. We find the final term in Equation (7-7) using
Program management, document control, data reporting, testing, etc. =
(% commercial payloads x commercial payload factor)
+ (% civil/NASA payloads x civil/NASA payload factor)
+ (% DoD payloads x DoD payload factor) (7-11)
TABLE 7-20. Mission Operations Factors. This table contains typical factors and ranges needed to
estimate the flight-planning head count [Eqs. (7-7) through (7-11)]. (PFA is post-flight
analysis; EVA is extravehicular activity.)
Staffing for the flight design and crew activity functions depends strongly on
the customer. Commercial missions in general don't require the same level of
effort for these tasks as, for instance, a DoD or NASA mission. NASA missions
normally require many briefings to report the results of analysis, as well as more
program management. These add a lot of staff to mission planning for flight design
and crew activities. Values for these three types of missions are
• Commercial missions (baseline) = 1.0
• Civil/NASA missions = 1.4
• DoD missions = 1.6
For our example problem, the head count for flight planning equals 176 for the
flight profile design and mission planning and 0 for the crew or payload activity
planning. The program management factor of 1.2 means the total head count is 211
for launching 50% commercial and 50% NASA payloads at a rate of eight flights
per year.
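The arithmetic behind those numbers can be reproduced with Equations (7-8), (7-9), and (7-11), as sketched below. The difficulty levels (4.5, 7.5, and 6.0) and the 50/50 payload split come from the text; the uncrewed and automated post-flight-analysis multipliers (1.0 and 0.97) come from the ranges quoted above; the payload-design and mission-peculiar multipliers of 1.0 for a mature payload and a standard mission are inferred, since Table 7-20 is not reproduced here.

import math

# Reproduces the flight-planning head count using Eqs. (7-8), (7-9), and (7-11).
# Multipliers of 1.0 for a mature payload and standard mission are inferred
# from the ranges quoted in the text.

def flight_profile_hc(events, difficulty, crewed=1.0, payload=1.0, pfa=0.97, peculiar=1.0):
    base = sum(10.0 * d * math.log(n + 1) for d, n in zip(difficulty, events))
    return base * crewed * payload * pfa * peculiar              # Eq. (7-9)

profile_hc = flight_profile_hc(events=[8, 2, 0], difficulty=[4.5, 7.5, 6.0])
crew_payload_hc = 0.0                                            # no crew, no EVA planning
mgmt_factor = 0.5 * 1.0 + 0.5 * 1.4                              # Eq. (7-11): 50% commercial, 50% NASA
total_hc = (profile_hc + crew_payload_hc) * mgmt_factor          # Eq. (7-8)
print(round(profile_hc), round(total_hc))                        # ~176 and ~211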
Step 7.3. Compute Total Head Count
We based the head counts for vehicle processing (VPHC) and flight planning
(FPHC)—computed in the previous steps—on a rate of eight flights per year,
because this rate was typical when we collected the support data. We must adjust
these head-count values to the desired flight rate (R) using
(7-12)
(7-13)
for the flight-planning head count. (Note: In estimating the O&S costs of
conceptual launch systems, we have to be cautious using Eqs. (7-12) and (7-13).
They represent Shuttle operations and are usually accurate only for relatively low
flight rates—fewer than 17 per year. Above this rate they tend to underestimate the
head counts.)
For the example case, the vehicle-processing head count is 287 for eight flights
per year and 337 for 12 flights per year. The flight-planning head count is 211 and
223 respectively. From the primary vehicle-processing and flight-planning head
counts for launch and flight operations, we use a ratio analysis with historical data
as a basis to calculate the HC for other launch and flight-operation activities. The
analysis is based on Table 7-21 and the following equation:
(7-14)
Here the standard operating percentage (SOP) represents the fractional
contribution of each cost element for the launch and flight-operations head counts
respectively. The parameters Base, K1, K2, and b come from the table values.
Because the distribution of people varies with flight rate and the type of architecture,
we need to base the parameters from Table 7-21 on one of these four types: crewed-
reusable, uncrewed-reusable, crewed-expendable, and uncrewed-expendable. The
distributions that this equation computes capture the way of doing business in the
early 1990s and for that combination of characteristics (manning, reusability, etc.)
and flight rate. We need engineering judgment to adjust these values and assess the
effects of changing technologies, designs, or support policies. For the example
concept, we apply the uncrewed-expendable portion (Set 4) of the table.
Using ratio analysis, we calculate the head counts for the added launch and
flight-operations elements, such as process engineering or fixed support for launch
operations. First, we compute the total launch-operations head count (LOHC) using
the VPHC for 12 flights per year (VPHC12 = 337). Then we divide by the fraction
that it represents of the total (SOP = 0.512) as calculated by Equation (7-14).
Ratio analysis helps us find the total number of people required to support
launch operations: LOHC12 equals 337/0.512 = 658. We then determine the head
counts for the remaining launch-operations elements from their respective fractions:
(7-15)
We calculate similarly to determine the total flight-operations head count, with the result
(7-16)
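The ratio analysis itself reduces to a division and a set of multiplications, sketched below: divide the primary vehicle-processing head count by its standard operating percentage to get the total launch-operations head count, then multiply that total by each element's fraction. The fractions here are the rounded values shown in Table 7-22, so the element head counts differ slightly from the table, which carries unrounded SOP values from Equation (7-14).

# Ratio-analysis sketch for launch-operations head counts (Table 7-22 fractions).

vphc_12 = 337            # vehicle-processing head count at 12 flights/yr
sop_vehicle_processing = 0.512

lohc_12 = vphc_12 / sop_vehicle_processing          # total launch-ops head count ~658
fractions = {                                        # rounded distribution from Table 7-22
    "vehicle processing": 0.50,
    "process engineering": 0.09,
    "recovery operations": 0.00,
    "fixed support": 0.18,
    "facility O&M": 0.10,
    "base support": 0.13,
}
head_counts = {k: round(lohc_12 * f) for k, f in fractions.items()}
labor_cost_m = lohc_12 * 98_600 / 1e6                # annual cost factor of $98,600 per person
print(round(lohc_12), head_counts, f"${labor_cost_m:.1f}M")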
Table 7-21. Coefficients (Base, K1, K2, and b) for Standard Operating Percentage (SOP)
Equations. This table provides the coefficients to compute the fraction of support
personnel required (out of the total) to support each element of launch and mission
operations. Four combinations exist for manning and reusability: crewed/reusable
(M/R), uncrewed/reusable (UM/R), crewed/expendable (M/EX), and uncrewed/
expendable (UM/EX). (O&M is operations and maintenance.)
Launch Operations
Fixed support -0.0050 0.2627 -0.1149 0.2283 -0.0325 0.2541 -0.1424 0.2197
Facility O&M -0.0280 0.2104 -0.0665 0.1360 -0.0376 0.1918 -0.0761 0.1174
Base support -0.0566 0.1428 -0.0925 0.1538 -0.0656 0.1455 -0.1015 0.1565
Flight Operations
Fixed support 0.0103 0.1043 -0.0747 0.1218 -0.0110 0.1087 -0.0960 0.1262
Table 7-22. Operations and Support Cost Estimate for the Reference Mission and
Vehicle. We use this table in conjunction with Table 7-23 to summarize the cost
elements for ground and mission operations' annual costs. (Dist is distribution; MP is
manpower; HC is head count; prop is propellant mass; GSE is ground support
equipment.)
Ground Operations
Cost Element | Fractional Dist of MP | HC | $M/Yr
Vehicle processing 0.50 337 $33.2
Process engineering 0.09 58 $5.7
Recovery operations 0.00 0 $0
Fixed support 0.18 117 $11.5
Facility operations and maintenance 0.10 63 $6.2
Base support 0.13 83 $8.2
Labor 1.00 658 $64.8 = Operations HC x annual cost factor of
$98,600
Supplies and material $6.5 = Labor cost x operations factor
Propellants $3.2 = Flight rate x [(prop $/kg) x (kg prop) x (1 + %boil-off)]
GSE spares $0.9 = Cost of nonrecurring GSE* x GSE
spares factor
Wraps $22.7 = (Fee + contingency + government
support) x labor cost
Total cost/year for flight planning $95.1
Cost/flight $7.9
* Obtain “Cost of nonrecurring GSE” from infrastructure computations in Chap. 8, Step 5, Space
Launch and Transportation Systems [Larson et al., 2005].
† “Average hours/flight” is the amount of time we need the range and mission networks to support the
flight.
Table 7-23. General Rates and Factors. We use these values in Table 7-22 to compute costs.
(LOX is liquid oxygen; LH2 is liquid hydrogen; CH4 is methane; C3H8 is propane; RP1 is
rocket propellant 1; N2O4 is nitrogen tetroxide; N2H4-UDMH is hydrazine-
unsymmetrical dimethylhydrazine; MMH is monomethyl hydrazine.)
Propellant cost and boil-off ($/kg; % boil-off): C3H8 0.516, 0.0%; RP1 1.22, 0.0%; N2H4-UDMH 35.30, 0.0%
Other rates and factors: supplies and materials, launch operations 10.0%; government support 10%; total wrap factor, launch operations 35%; network support 1.17% of nonrecurring
Table 7-24 captures the building's "brick and mortar" costs plus that of equipment
to support what it does. Among these are the costs to design and develop the
systems, as well as a contingency allowance, site inspection and engineering
studies, and activation. All values are expressed in FY 2003 dollars, which we must
convert to the time frame of interest.
We extracted the cost estimating relationships in Table 7-24 from a modeling
analysis conducted at Kennedy Space Center in the 1980s. They come from the
Shuttle and Saturn facilities, so they reflect a heavy influence from those concepts
and support needs. But we've added notes to help adjust calculations for the
required dimensions. We must choose facilities whose functions best match the
needs in the new operations concept. Then, using the dimensions of the element or
integrated vehicle, we compute the cost of the facility, associated GSE, and initial
spares from the cost coefficients for each facility in the operations concept. These
facilities mainly support the online processing flow. Although many ancillary
buildings at a launch site are important to the support, those in Table 7-24 are of
primary interest and generally capture the main facility cost requirements.
Table 7-24. Facility Cost Table. We use these cost coefficients to estimate the cost of facilities that
support new concepts. Estimates are usually based on concept dimensions. (GSE is
ground support equipment; COF is cost of facilities.)
Major Facility Type | Cost Coeff (FY2003 $/m³) | GSE (% COF) | Initial Spares (% GSE) | Notes | Suggested for: R = Reusable, E = Expendable
Solid rocket motor processing facility 185 95 6.5 1 R, E
Solid rocket motor assembly and 685 278 6.5 1 R, E
refurbish facility
Solid rocket motor recovery facility 277 112 6.5 1 R
Solid rocket motor stacking facility 211 55 6.5 1 R, E
Tank processing facility 458 80 6.5 1 R, E
Horizontal vehicle processing facility 590 447 6.5 1,6 R
8. A single Shuttle main engine processing cell measures 19.5 m x 11.3 m x 14.0 m. The Space
Shuttle main engine dimensions are 4.3 m long x 2.3 m diameter. Multiply the appropriate
dimensions by the number of cells desired to obtain facility size.
9. This cost coefficient is per square meter and used for total runway, taxiway, and apron areas.
Shuttle runway is 91.4 m wide by 4572 m long.
Information derived primarily from ground-operations cost model (GOCOM).
Cost depends heavily on the number of facilities required, on how long the
launch element occupies the facility, and on the necessary flight rate. Computing
these values is beyond this book's scope, but Table 7-25 has representative times
for successive launches of several ELVs from the same launch pad [Isakowitz et al.,
1999]. We use these values to guide our assumptions for the required launch-pad
time. For integrate-transfer-launch (ITL) systems, it encompasses readying the
integrated vehicle for launch as well as preparing the payload. For build-on-pad
(BOP) concepts, it's how long it takes to stack and integrate the vehicle and
integrate the payload.
Although we emphasize time-on-pad (TOP) as a driver for the number of
launch pads required to support a flight rate, we can apply the same process to
other facilities used in processing. We define BOP or ITL as a type of operations
concept, but a clear distinction usually doesn't exist. Preprocessing, stacking, and
TABLE 7-25. Representative Times Selected Vehicles Need to Use the Launch Pad. To establish
a time-on-pad (TOP) estimate for new launch concepts, select a system with
characteristics similar to the new concept and use the TOP value as a starting point. (Cal
Days are calendar days; BOP is build-on-pad; ITL is integrate-transfer-launch.)
Launch Vehicle Model | Stages | Boosters | Stack Height up to (m) | Gross Liftoff Mass (×1000 kg) | Payload (kg) | Support Type | Time-on-Pad (TOP), Cal Days
integrating of some elements occur off the pad. We must estimate the number and
mix of facilities the new concept needs to support the desired flight rate. Picking a
length of time similar to one in Table 7-25 implies a similar type of launch vehicle
and support concept. Assuming TOP values different from the ranges shown
implies new technologies or support procedures. To determine the number of
launch pads needed to support a given flight rate, we multiply the annual flight
rate desired by the assumed time-on-pad (in days) for that concept, divided by 365
days. A similar process applies for the other facilities based on the time they're
occupied (plus refurbishment time) and not available for the next element.
Table 7-25 shows the Shuttle's TOP for comparison with a reusable concept. In
addition to the 25 workdays on the pad, this well-defined processing flow usually
has a dwell time of 84 workdays in the Orbiter Processing Facility and five
workdays in the Vertical Assembly Building to integrate with the external tank
and solid rocket boosters.
For the example system, which will be built on the pad, the facilities are a
launch pad with a mobile service structure for assembly, a launch umbilical tower,
and a firing room for launch operations. Based on Table 7-25, we choose a TOP of
60 days for the new concept, as its assembly, number of stages, stack height, and
payload characteristics are similar to the Atlas IIAS. The estimate for the example
is lower because it uses no solid boosters. Using the required flight rate of 12
flights/year, we need two launch pads to meet this demand: (12 flt/yr x 60 d/flt) /
(365 d/yr) ≈ 2 pads.
From Table 7-24, we use a cost coefficient of $336 per cubic meter to estimate
the cost of a launch pad with a mobile service structure. As the note indicates, we
add 12.2 meters of work space on all sides of the vehicle and to the stack height (for
an overhead crane) to estimate the size of the structure used to compute the costs:
[(12.2 + 4.6 + 12.2) (12.2 + 4.6 + 12.2) (55 + 12.2)]m3 = 57,400 m3, for a cost of $19.3
million each. Because the flight rate requires two pads, the total cost including GSE
and initial spares is $75 million in FY2003 dollars.
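The pad-count and facility-cost arithmetic above is easy to script, as sketched below: compute the number of pads from flight rate and time-on-pad, then multiply the cost coefficient by the structure volume. The volume is taken as the 57,400 m³ stated in the text; the GSE and initial-spares percentages for the pad are not in the excerpted table rows, so the sketch stops at the basic facility cost.

import math

# Launch-pad count and basic facility cost, following the worked example.

flights_per_year = 12
time_on_pad_days = 60
pads = math.ceil(flights_per_year * time_on_pad_days / 365)   # -> 2 pads

cost_coeff_per_m3 = 336        # FY2003 $/m^3 for a pad with mobile service structure
structure_volume_m3 = 57_400   # volume used in the text for the sized structure
facility_cost_m = cost_coeff_per_m3 * structure_volume_m3 / 1e6
print(pads, f"${facility_cost_m:.1f}M per pad")                # 2, ~$19.3M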
The cost coefficient for the launch umbilical tower is $1568 per cubic meter
based on the Saturn's tower (as the note indicates). Because the new concept is
about half the stack height and doesn't support a crewed launch, we assume the
tower dimensions chosen for the example concept are 9 m x 9 m x 65 m, resulting
in a facility cost of $8.3 million and a total of $24 million for two towers including
the GSE and spares.
Again from Table 7-24, we use a cost coefficient of $1011 per cubic meter to
estimate cost for the firing room. Based on the Shuttle firing room's size of 36.6 m
x 27.4 m x 4.6 m and the fact that this system isn't crewed, we assume for the
example concept a smaller room of 18 m x 14 m x 3 m. This concept yields a facility
cost of $0.76 million. With the GSE and initial spares included, the cost is $11
million. So we estimate the total nonrecurring cost for O&S infrastructure is $110
million in FY2003 dollars.
Table 7-26. Summary of SLaTS Reference Vehicle’s Lifecycle Costs. The table gives costs in
millions of 2003 dollars.
60% of the cost in 50% of the time. This shows fairly accurately the funding
requirements by time for space projects. If we need very accurate spreading, the
time periods can be quarters or even months. Here, we spread cost by year.
Figure 7-6 gives the spreading percentages for a 60% cost in 50% time beta
distribution for up to 10 periods of time. In our case, this is 10 years, so we use the
10th row to obtain the cost allocation by year.
9 1.16% 7.75% 16.50% 22.16% 23.99% 16.44% 8.93% 2.83% 0.24% 100.00%
10 0.87% 5.96% 13.32% 19.05% 20.83% 18.32% 12.83% 6.63% 2.03% 0.17% 100.00%
FIGURE 7-6. Cost Spreading for Budgeting Analysis. Spreading percentages represent a 60%
cost in 50% time beta distribution for one to ten periods of time. Start and end dates
for the time phasing come from the project schedule.
We should take the start and end dates for the time phasing from the project
schedule. The Shuttle required about nine years from authority to proceed to first
flight. Because the SLaTS vehicle is expendable and uncrewed, we assume seven
years is enough time for total DDT&E. This can be time-phased using the beta
distribution percentages from Figure 7-6 for seven years and applying those
percentages against the total DDT&E cost of $3880 million from Table 7-27. For
example, the first year of DDT&E is 0.0242 x $3880 million = $94 million. We spread
production costs over two or three years using the beta distribution (two or three
years being the likely production span for one vehicle). This approach increases
production costs in the first two or three years before reaching steady state. But we
assume production is a steady-state cost of $1927 million per year (1/10 of the total
ten-year production cost of $19,270 million from Table 7-27). Similarly, O&S costs are
$193 million per year (1/10 of the total ten year O&S cost of $1930 million).
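Once the spreading percentages are chosen, time phasing is a simple element-wise multiplication, sketched below with the ten-period row from Figure 7-6. The text phases DDT&E with the seven-period row (not reproduced here), so the total in this sketch is a placeholder to show the mechanics only.

# Time-phase a total cost using beta-distribution spreading percentages.
# The ten-period row from Figure 7-6 is used; the total below is a placeholder.

ten_period_spread = [0.0087, 0.0596, 0.1332, 0.1905, 0.2083,
                     0.1832, 0.1283, 0.0663, 0.0203, 0.0017]

def time_phase(total_cost_m, spread):
    return [round(total_cost_m * p) for p in spread]

print(time_phase(1000.0, ten_period_spread))  # $M per period; sums to ~$1000M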
Table 7-27. Time Phased Lifecycle Cost (2003 Dollars in Millions). These numbers can be used
to establish budget requirements for the reference vehicle. Negative years are before
launch. (DDT&E is design, development, test, and evaluation; O&S is operations and
support.)
Year | -7 | -6 | -5 | -4 | -3 | -2 | -1 | 1 | 2 | 3 | ... | 10 | Total
Total | $94 | $624 | $1046 | $1082 | $728 | $283 | $23 | $2120 | $2120 | $2120 | ... | $2120 | $25,080
a time. Project cost simulations can gauge the effects of numerous variables that
change simultaneously.
Figure 7-7 shows one approach for cost-risk analysis, in which we "dollarize"
safety, technical, schedule, and cost risk using CERs similar to a deterministic cost
estimate. But the cost-risk process accounts for uncertainty in the CERs, project
schedule, and technical specifications. We define these uncertainties with
probability distributions and model the entire cost estimating process in a Monte
Carlo simulation to build a statistical sample of cost outcomes over a large number
of simulations.
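A minimal Monte Carlo version of that process is sketched below: draw the uncertain CER inputs and coefficients from assumed probability distributions, compute a cost for each draw, and report percentiles of the resulting sample. The triangular distributions and parameter values are placeholders; a real analysis would use calibrated distributions for each CER, schedule, and technical input.

import random

# Minimal Monte Carlo cost-risk sketch. Distributions and parameters are
# placeholders, not calibrated values.

def one_cost_draw():
    mass_kg = random.triangular(80, 110, 90)        # technical uncertainty
    coefficient = random.triangular(5.0, 7.0, 5.9)  # CER coefficient uncertainty
    schedule_factor = random.triangular(1.0, 1.4, 1.1)
    return coefficient * mass_kg ** 0.40 * schedule_factor

random.seed(1)
sample = sorted(one_cost_draw() for _ in range(10_000))
p50, p80 = sample[5000], sample[8000]               # approximate percentiles
print(f"50th percentile ${p50:.1f}M, 80th percentile ${p80:.1f}M")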
"Dollarizing" Risks
(Figure 7-7 elements: risk areas (safety, technical/performance/engineering, schedule, and cost) are dollarized through risk mitigators such as redundancy, parts programs, and tests, with estimating variance applied to the safety and technical, schedule, and cost estimates.)
FIGURE 7-7. Cost-Risk Analysis Using CERs in a Deterministic Estimate. Here, we define
uncertainties with probability distributions and model the entire cost estimating
process in a Monte Carlo simulation to build a statistical sample of cost outcomes
over many simulations.
Summary
References
Chesley, Julie, Wiley J. Larson, Marilyn McQuade, and Robert J. Menrad. 2008. Applied
Project Management for Space Systems. New York, NY: McGraw-Hill Companies.
Department of Defense. 1992. Cost Analysis Improvement Group, Operating and Support Cost
Estimating Guide. Washington, D.C.: Office of the Secretary of Defense.
Isakowitz, Steven J., Joseph P. Hopkins Jr., and Joshua B. Hopkins. 1999. International
Reference Guide To Space Launch Systems. Washington, DC: American Institute of
Aeronautics and Astronautics.
Larson, Wiley J., Robert S. Ryan, Vernon J. Weyers, and Douglas H. Kirkpatrick. 2005. Space
Launch and Transportation Systems. Government Printing Office, Washington, D.C.
Morris, W. Douglas et al. 1995. Defining Support Requirements During Conceptual Design of
Reusable Launch Vehicles. Presented at the AIAA Space Programs and Technologies
Conference, Sept. 1995, Paper No. AIAA 95-3619.
NASA. 1993. Transportation Systems Analysis. Operations Cost Model User's/Analyst's Guide
(NAS8-39209). Huntsville, AL: NASA Marshall Space Flight Center.
NASA. 1997. A Guide for the Design of Highly Reusable Space Transportation. Cape Canaveral,
FL: NASA Space Propulsion Synergy Team, Kennedy Space Center.
NASA. 2000. The NASA Reliability and Maintainability Model (RMAT 2001), User's Manual
(NAS1-99148). Hampton, VA: NASA Langley Research Center.
NASA. 2001. Spaceport Concept and Technology Roadmapping, Investment Steps to Routine, Low-
Cost Spaceport Systems. Final Report to the NASA Space Solar Power Exploratory
Research and Technology (SERT) Program, by the Vision Spaceport Partnership,
National Aeronautics and Space Administration, John F. Kennedy Space Center, and
Barker-Ramos Associates, Inc., Boeing Company, Command and Control Technologies
Corp., and Lockheed Martin Space Systems, JSRA NCA10-0030.
Nix, Michael, Carey McCleskey, Edgar Zapata, Russel Rhodes, Don Nelson, Robert Bruce,
Doug Morris, Nancy White, Richard Brown, Rick Christenson, and Dan O'Neil. 1998. An
Operational Assessment of Concepts and Technologies for Highly Reusable Space Transportation.
Huntsville, AL: NASA Marshall Space Flight Center.
Squibb, Gael, Daryl Boden, and Wiley Larson. 2006. Cost-Effective Space Mission Operations.
New York, NY: McGraw-Hill.
Chapter 8
Technical Risk Management
objective have limited resources (usually budget) or must meet strict return-on-
investment (ROI) constraints. In a resource-constrained environment, a project has
a better chance of succeeding if it deals with its risks systematically and objectively.
Even then we have no guarantees that the project will remain viable, but technical
risk management encourages or even forces the project team to collect and analyze
risk information and control the risk within its resources. (For an in-depth
treatment of risk management from a project management viewpoint, see Chapter
15 of Applied Project Management for Space Systems (APMSS) [Chesley et al., 2008].)
So in technical risk management, the project risk manager and project team
need to ask first: "What are the project risks and what is the overall project risk
position?" and second: "What can and should we do about the risks?" The
activities involved continue throughout the lifecycle and this chapter explains
them in greater detail.
To determine the project's risk position, the risk manager works to identify and
collect the "knowable" risks in the project. This step is intended to be
comprehensive, covering all types of risks, project elements, and project phases. The
risk manager, project manager, and some project team members supported by
others with specialized skills in risk assessment work together to assess the collected
risk items. Ideally, this activity results in consistent and useful assessments, finding
the "tall poles" of the risk tent. Some of these tall poles may warrant in-depth
analysis to confirm or reject likelihood or consequence estimates. For the overall risk
position, the risk manager aggregates the risks into an integrated picture of the
project's risk exposure and communicates that picture to stakeholders.
In deciding what can and should be done, the risk manager and risk owners try
to identify, before the risk is realized, fruitful mitigations and other pro-active
actions to reduce the risk exposure. As part of this thinking ahead, they also
determine if and when to trigger such actions. Lastly, they track the risk and the
effects of any mitigation actions to see if these are working as intended. Throughout,
the risk manager communicates to stakeholders any changes in risk exposure.
Risk as a Vector
To identify and characterize a project risk in terms of its likelihood and
consequences, we plot the risk in a three-dimensional space. One axis represents
the likelihood (probability) and the other represents the consequence (e.g., dollars,
reduced mission return). Unfortunately, we may not know either of these with
certainty. The real-world complexity of risks coupled with the practical limitations
of even the best engineering models results in some uncertainty in one or both
measures. To a risk manager, a risk item looks like Figure 8-1 (left).
When we apply risk reduction mitigations and they work, the risk likelihood,
consequences, or both are lowered. The uncertainty may also show less of a spread.
To a risk manager, a successful risk reduction action looks like Figure 8-1 (right).
FIGURE 8-1. Risk as a Vector. The size of a risk is characterized by its likelihood and
consequence, though considerable uncertainty may exist in both measures.
Uncertainties are plotted in the third dimension as probability distributions. The left
shows the size of the risk before risk reduction actions are taken. The right shows
how the size of the risk (along the axes) and its uncertainty decrease (narrower
probability curves) after such actions.
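One minimal way to capture this view of a risk item in software is sketched below: store the likelihood and the consequence each as a range rather than a point value, and let a mitigation shift and narrow those ranges. The class and field names are illustrative, not a standard risk-tool schema.

from dataclasses import dataclass

# Illustrative risk-item record: likelihood and consequence each carry an
# uncertainty band rather than a single point value.

@dataclass
class Risk:
    name: str
    likelihood: tuple        # (low, best estimate, high) probability
    consequence_m: tuple     # (low, best estimate, high) consequence, $M

    def expected_consequence_m(self):
        return self.likelihood[1] * self.consequence_m[1]

before = Risk("Valve qualification failure", (0.10, 0.30, 0.50), (5.0, 20.0, 60.0))
after = Risk("Valve qualification failure", (0.05, 0.15, 0.25), (5.0, 15.0, 30.0))
print(before.expected_consequence_m(), after.expected_consequence_m())  # 6.0 2.25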
Types of Risks
Categorizing risks is useful if it communicates something important about the
source, timing, and consequences of the risk and possible mitigations. Table 8-1
shows some categories, but there appears to be no formal taxonomy free of
ambiguities. One reason for this is that one type of risk can often be traded for
another, for example, exchanging a schedule risk for a cost risk.
Table 8-1. Risk Types and Examples. Most risk items involve more than one type of risk.
Programmatic risk Failure to secure long-term political support for program or project
Regulatory risk Failure to secure proper approvals for launch of nuclear materials
Operational risk Failure of spacecraft during mission
Safety risk Hazardous material release while fueling during ground operations
Sources of Risks
While each space project has its unique risks, the underlying sources of risks
include the following:
• Technical complexity—many design constraints or many dependent
operational sequences having to occur in the right sequence and at the right
time
• Organizational complexity—many independent organizations having to
perform with limited coordination
• Inadequate margins and reserves
• Inadequate implementation plans
• Unrealistic schedules
• Total and year-by-year budgets mismatched to the implementation risks
• Over-optimistic designs pressured by mission expectations
• Limited engineering analysis and understanding due to inadequate
engineering tools and models
• Limited understanding of the mission's space environments
• Inadequately trained or inexperienced project personnel
• Inadequate processes or inadequate adherence to proven processes
Figure 8-2. Risk Management Process Flowchart. This summarizes the project risk
management activities, methods, and interfaces discussed in this chapter. The dashed
lines for risk analysis indicate options; the analyst may choose one or more of the
techniques shown. (FMECA is failure modes, effects, and criticality analysis; FMEA is
failure modes and effects analysis; TPM is technical performance measurement.)
TABLE 8-2. Risk Management Process. These steps are for managing risk in a space
systems engineering project. [NASA, 2007]
Even so, risk-averse design practices cannot eliminate all risks. We still need a
technical risk management strategy. This strategy translates the received philosophical
direction into an action plan. It identifies the technical risk management products to
be created, by whom (at least notionally), on what schedule, with what degree of
depth, and against what standards. The work breakdown structure and mission
operations phases (launch, proximity operations, entry, descent, and landing)
usually serve as organizing structures for these products. Generally projects with a
greater lifecycle cost spend more on technical risk management activities than
smaller ones, although the percentage of the lifecycle cost may be less.
Whatever the resources assigned to technical risk management, the risk
manager should create a strategy that maximizes its benefits (or return-on-
investment). We document the strategy formally in the project risk management
plan. This plan also describes other essential technical risk management activities:
• An overview of the technical risk management process (see technical risk
management basics at the beginning of this chapter)
• The organization within the project responsible for technical risk
management activities
• The communication paths for reporting risk issues and status to review
boards
• The risk tracking and reporting frequency and methods to use during
formulation and implementation (including during operations)
• General risk handling responses based on the risk classification (Section 8.3)
• Training needed for technical risk management personnel
• Integration with other cross-cutting processes such as technology insertion,
risk-based acquisition, earned-value measurement, IT security, and
knowledge capture
• Assessment
- Implementation risk
• Likelihood
• Consequences
- Mission risk
• Likelihood
• Consequences
• Mitigation options
- Descriptions
- Costs
- Reduction in the assessed risk
• Significant milestones
- Opening and closing of the window of occurrence
- Risk change points
- Decision points for implementing mitigation effectively
In filling out the SRL, we must list as many specifics as we can to describe the
concern and identify the context or condition for the risk. We should try to
envision and describe the feared event as specifically as possible. We need to
describe the effect of the event in terms that other project team members
understand. This may involve thinking ahead to characterize the effort (time,
money, mass, power, etc.) needed to recover from the event (implementation risk)
and the penalty in mission success that would result (mission risk). The risk
manager assigns ownership of a risk to the individual or organization responsible
for its detailed management. This is often the one that first identified the risk.
Sometimes the risk management plan includes an initial SRL, i.e., a list of those
risks that have been identified early in the systems engineering process. It's good
practice to maintain a single, configuration-managed data repository for the
project's significant risks that is kept current. That's why on large projects the
operational SRL is kept in some form of database or dedicated risk management tool.
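As an illustration only, here is a minimal sketch of how one SRL entry might be structured in such a configuration-managed risk repository. The field names and types are assumptions made for the example, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskAssessment:
    """Likelihood and consequence levels (1 = lowest, 5 = highest)."""
    likelihood: int
    consequence: int

@dataclass
class MitigationOption:
    description: str
    cost: float                          # estimated cost of the mitigation
    reduced_assessment: RiskAssessment   # assessed risk after the mitigation

@dataclass
class SRLEntry:
    """One significant risk list (SRL) record (illustrative fields only)."""
    risk_id: str
    title: str
    condition: str                       # context or condition for the risk
    feared_event: str                    # specific description of what could go wrong
    owner: str                           # individual or organization managing the risk
    implementation_risk: RiskAssessment  # recovery effort: time, money, mass, power, ...
    mission_risk: RiskAssessment         # penalty in mission success
    mitigations: List[MitigationOption] = field(default_factory=list)
    milestones: List[str] = field(default_factory=list)  # window open/close, decision points
```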
valuable for determining how to allocate scarce mitigation resources and project
reserves. It's also sometimes the only way to verify risk-related requirements.
Table 8-3. Consequence Levels. This table provides definitions to guide the
classification of each risk in terms of consequences. We use separate
definitions to distinguish mission risk from implementation risk.
5 Mission failure
4 Significant reduction in mission return
3 Moderate reduction in mission return
2 Small reduction in mission return
1 Minimal or no impact to mission
For high or medium risks, the systems engineer may decide whether the
project can benefit from a quantitative analysis. If an unplanned quantitative
analysis is deemed advantageous and the resources are available, we amend the
risk management plan accordingly.
Table 8-4. Likelihood Levels. This table classifies risk—here, cost risk—in terms of likelihood.
For other types of risk, we assign different values to the levels.
Likelihood Level | Probability (Pr) | Qualitative Description
Tools like 5x5 matrices and likelihood tables have limitations. They allow only a
few subjective levels, which must cover probabilities from zero to one. They also pose
problems with their imprecise language. Qualitative terms like "low", "medium",
and "high" mean different things to different people, and the meaning changes
depending on the context. For example, Table 8-4 provides typical values for
likelihood definitions of cost risk, perhaps that of a cost overrun. We routinely accept
a 30% probability of a cost overrun—regrettable, but that’s the reserves target for
NASA projects. Acceptable mission risks, however, are considerably lower, and for
human spaceflight, a loss-of-mission risk of 0.5% (1 in 200) may be unacceptably high.
* Common to these methods is the use of Monte Carlo simulation techniques [Morgan et al., 1990].
Figure 8-3. A Typical 5x5 Risk Classification Matrix. Risks are subjectively assigned a level
of consequences and a level of likelihood using expert judgment. Each is classified
as high, medium, or low risk based on the box it’s in.
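A minimal sketch of the matrix lookup such a classification implies, written in Python. The level thresholds below are invented for illustration; each project defines its own matrix coloring.

```python
def classify_risk(likelihood: int, consequence: int) -> str:
    """Map 5x5 matrix levels (1-5 each) to 'high', 'medium', or 'low'.
    The thresholds here are illustrative assumptions, not a standard."""
    score = likelihood + consequence
    if score >= 8:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

print(classify_risk(4, 3))   # medium
print(classify_risk(5, 4))   # high
```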
FIGURE 8-4. A 5x5 Risk Classification Matrix for FireSAT in Phase A. These top five risks for
FireSAT are the focus of the risk mitigation planning for the project. (USFS is United
States Forest Service; NOAA is National Oceanic and Atmospheric Administration;
MOA is memorandum of agreement.)
[Figure: Sojourner risk-analysis architecture, comprising a Mars terrain simulator (generates high-resolution synthetic Martian terrain), plug-and-play subsystem models, a rover operations Monte Carlo simulation, a decision tree (initializes the analysis and computes the system reliability CDF), a database (mediates the transfer of data across models and simulations), the Satellite Orbit Analysis Program (SOAP), and a hardware reliability model (HRM) that computes system reliability for each combination of environmental parameters using a component reliability database and a rover equipment list.]
FIGURE 8-5. A Decision Tree for Sojourner Surface Operations. This decision tree was
constructed using commercial software to capture formally the uncertainty in the
atmospheric opacity and surface roughness at the Mars Pathfinder landing site.
Project scientists supplied the probabilities. A limited probabilistic risk assessment
(PRA), using hardware failure models, calculated the outcomes.
This technique was part of a quantitative risk assessment of the Mars rover,
Sojourner [Shishko, 2002]. In that assessment, we faced major uncertainties
regarding the conditions the rover would find when it reached the Martian surface.
These uncertainties concerned the atmospheric opacity, which had implications for
anticipated near-surface temperatures, and the rock size distribution in the
immediate vicinity of the Mars Pathfinder lander. Would the frequency distribution
of sufficiently large rocks impede rover movements and would temperatures be too
low for the rover to survive its mission? We captured these uncertainties in the
decision tree shown in Figure 8-5 and encoded probabilities. While the
environmental conditions and the attendant uncertainties were of interest by
themselves, our concern was to estimate how such conditions would affect the
ability of the rover to perform its full mission. We did a limited probabilistic risk
assessment (PRA), described below, to deal with this loss-of-mission issue.
FIGURE 8-6. The Project Cost S-Curve. A project cost probability density function (PDF) is
shown on the left and its cumulative distribution function (CDF) or S-Curve on the
right. The line on the CDF indicates the 85th percentile of the estimated cost
distribution corresponding to the shaded area on the PDF.
design reference mission. But we have yet to understand the detailed technical and
schedule or programmatic risks. As a proxy for these risks, we commonly place
probability distributions on the continuous inputs (such as mass) in the estimating
relationship, and use Monte Carlo simulations to develop the cost S-curve.
These probability distributions are often subjective. The usual source for them
is the project team, though the risk manager should be aware of the potential for
advocacy optimism here. Any probability elicitation and encoding should follow
established protocols and methods such as those in Morgan et al. [1990].
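The sketch below illustrates the idea under stated assumptions: a hypothetical mass-based cost estimating relationship and a subjective triangular distribution on dry mass stand in for the real estimating relationship and elicited risk inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

def cer_cost(dry_mass_kg):
    """Hypothetical parametric cost estimating relationship (CER): cost in $M
    as a power law of spacecraft dry mass. Coefficients are placeholders,
    not taken from any published cost model."""
    return 1.2 * dry_mass_kg ** 0.8

# Proxy for technical risk: a subjective triangular distribution on dry mass (kg)
mass = rng.triangular(left=450, mode=500, right=650, size=100_000)
cost = cer_cost(mass)

# Empirical S-curve: sorted costs vs. cumulative probability
s_curve_x = np.sort(cost)
s_curve_y = np.arange(1, cost.size + 1) / cost.size

print(f"Median cost: {np.median(cost):.1f} $M")
print(f"85th-percentile cost: {np.percentile(cost, 85):.1f} $M")
```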
Cost Risk Analysis for Analogy Estimation. Even with analogy estimation,
we can capture cost risk and build a project cost S-curve. As with parametric
estimation, we often do analogy estimation before we understand the detailed
technical and schedule or programmatic risks. In analogy estimation, each
estimator, usually a discipline expert with substantial project experience, scales an
analogous project's actual cost, accounting for changes in requirements,
technology, and other project implementation factors. As a proxy for project risks,
we represent each scaling factor by a subjective probability distribution, thus
turning a point estimate into a probability distribution. We then use Monte Carlo
simulation to develop the cost S-curve. As with any subjective probability
elicitation, we should use established protocols and methods.
Cost-Risk Analysis for Grass-Roots Estimation. A cost-risk analysis for
grass-roots estimation requires an understanding of the sources of cost risk. A
thorough risk analysis of the project should have already identified those elements
of the WBS that have significant technical and schedule or programmatic risk.
These risks typically arise from inadequacies in the project definition or
requirements information, optimistic hardware and software heritage
assumptions, assumptions about the timeliness of required technological
advances, and overestimating the performance of potential contractors and other
implementers. Two methods are available for performing a cost-risk analysis for a
grass-roots estimate. We should identify both the method and the analysis data.
where
• predicted WBS element cost at time t, in year-t dollars
• inflation rate per period
• WBS element volatility parameter
• WBS element duration
• random variable that is normally distributed with mean zero and variance dt
Each WBS element has a characteristic volatility parameter (σ) derived from
the 95th percentile elicitation and the element's duration [Ebbeler et al., 2003].
Because the cost growth process is stochastic, we have to do many runs for each
WBS element to generate a cost PDF like the one shown in Figure 8-6 (left).
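A minimal simulation sketch of this kind of stochastic cost-growth process, assuming a geometric-Brownian-motion form (the chapter's actual equations are not reproduced here). The inputs c0, r, sigma, and duration correspond to the quantities listed above, and all numbers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_wbs_cost(c0, r, sigma, duration, dt=1.0 / 12.0, n_runs=50_000):
    """Simulate final WBS-element cost under an assumed geometric-Brownian-motion
    growth process: c0 is the point estimate, r the inflation rate per period,
    sigma the element volatility, duration the element length in periods."""
    n_steps = int(round(duration / dt))
    dz = rng.normal(0.0, np.sqrt(dt), size=(n_runs, n_steps))  # mean 0, variance dt
    growth = np.exp((r - 0.5 * sigma**2) * dt + sigma * dz).prod(axis=1)
    return c0 * growth

final_costs = simulate_wbs_cost(c0=10.0, r=0.03, sigma=0.25, duration=3.0)
print(f"Mean: {final_costs.mean():.2f} $M   95th percentile: "
      f"{np.percentile(final_costs, 95):.2f} $M")
```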
The principal effort in PRA is developing scenarios, which are series of key
events. Each scenario begins with an initiating event, which propagates through
the system, and ends with a particular outcome. Non-quantitative failure modes
and effects analyses (FMEAs) often serve as precursor analyses for a PRA. In
developing scenarios, a PRA analyst also attempts to capture any subtle
interactions among system elements (hardware, software, and operator) that
might together lead to undesired end states. Scenarios are developed and
documented in a variety of diagrams and formal PRA computational tools such as
the Systems Analysis Programs for Hands-on Integrated Reliability (SAPHIRE)
and the Quantitative Risk Assessment System (QRAS). Diagrams help organize
the information needed for the formal computational tools in a way that's "human-
readable" so that the underlying logic may be verified.
Probabilistic risk assessments come in a variety of "flavors," depending on the
scope and nature of the problem, the time, resources, and data available. NASA
characterizes these flavors as full, limited, and simplified [NASA, 2004 (2)]. In the
following paragraphs we describe applications of these three types of PRAs.
FIGURE 8-7. The International Space Station (ISS) Probabilistic Risk Assessment (PRA)
Top-down Logic. The architecture of the ISS PRA is depicted, showing the flow
from the master logic diagram (MLD) through the event sequence diagrams, fault
trees, and component reliability data. Though not shown in the figure, the event
sequence diagrams (ESDs) are translated into event trees for computational data
entry. Event trees are essentially decision trees with chance nodes only.
International Space Station (ISS). The PRA conducted for the ISS provides a
particularly good didactic example of a full assessment [Smith, 2002]. The
principal end states of concern were those involving the loss of the station, loss of
crew, and evacuation of the station resulting in a loss of mission (LOM). Figure 8-
7 shows the top-down logic of this model. The master logic diagram (MLD)
represents a decomposition of the ISS into functions and systems needed for a
working station. At the bottom are initiating events, each of which kicks off an
event sequence diagram (ESD). One such ESD for the Russian CO2 removal
assembly, Vozdukh, is illustrated in Figure 8-8.
Figure 8-8. Event Sequence Diagram (ESD) for Vozdukh Failure. In this ESD, the Vozdukh
assembly failure is the initiating event. Other events depicted involve whether
backup systems (CDRA and Service Module CO2 scrubbers) work and whether
spares or maintenance resources to repair either Vozdukh or the backups are available
in time to avoid the loss of the environmental control and life-support system
(ECLSS). (CDRA is Carbon Dioxide Removal Assembly.)
For each key event (called a pivotal event in this application) in the ESD, an
associated fault tree (FT) diagram helps determine the event's likelihood of
occurrence. At the bottom of each FT are the basic hardware, software, and human
failure events, whose probabilities of occurrence come from actual or estimated
failure data. Because of the uncertainty in such failure data, that uncertainty is
allowed to propagate (via Monte Carlo simulation) up to the top-level PRA results.
The likelihoods of occurrence and their uncertainty for the end states of concern
for the ISS were computed and aggregated using the SAPHIRE software. The
principal value in building the logical and data structures for performing a PRA is
the ability to exercise it in trade studies and other "what if" scenarios. Although it
takes a great deal of effort, once built, the PRA model and supporting data become
part of the risk management process for the remainder of the project lifecycle.
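A minimal sketch of this uncertainty propagation, assuming a toy fault tree and invented lognormal uncertainty on the basic-event probabilities (not ISS data):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

# Basic-event failure probabilities with uncertainty, modelled here as
# lognormal distributions; parameters are illustrative only.
p_a = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=N)
p_b = rng.lognormal(mean=np.log(5e-4), sigma=0.5, size=N)
p_c = rng.lognormal(mean=np.log(2e-3), sigma=0.5, size=N)

# Fault-tree logic for the top event: (A AND B) OR C, assuming independence
p_top = 1.0 - (1.0 - p_a * p_b) * (1.0 - p_c)

print(f"Top-event probability: median {np.median(p_top):.2e}, "
      f"5th-95th percentile [{np.percentile(p_top, 5):.2e}, "
      f"{np.percentile(p_top, 95):.2e}]")
```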
Mars Rover Surface Operations. A full PRA in the style described above was
performed for the entry, descent, and landing of the Mars Pathfinder spacecraft. A
PRA for the surface operations of its principal payload, the Sojourner rover,
required a more limited approach focused on the major mobility and electronic
hardware items and the risks identified by the project team.
The first step was to identify a quantified performance objective for the
Sojourner rover during its surface mission. A performance requirement for the
rover's total travel distance during the mission was set at 100 meters of geodesic
(straight-line) travel. The project-level systems engineer, working with a risk
manager, established the hardware and software configuration of the rover to be
used in the analysis, the Mars site to be explored, and, with the help of the mission
designers, the surface mission start time.
Next, the risk manager and project scientist established a probabilistic
description of the environmental conditions at the selected landing site. They used
the decision tree tool to elicit the scientist's beliefs about surface, atmospheric, and
near-surface thermal conditions, and their probabilities. The scientist used
whatever hard data were available from actual observations. The results of this
step were a set of quantitative values and concomitant probabilities describing the
surface terrain (including rock-size frequency distribution, average slope, and
surface roughness), atmospheric opacity, and diurnal temperature minimums and
maximums (including an estimate of their statistical dispersion). Atmospheric
opacity and diurnal temperatures are causally connected, so these values and
probabilities were elicited conditionally. Figure 8-5 shows part of the user interface
of the decision tree tool used in the elicitation.
One of the PRA's uncertainties was how far the Sojourner rover would have to
travel on the odometer to meet the 100-meter goal. The actual travel distance is
affected by the rover's physical size, navigation and hazard avoidance algorithms,
and the rock-size frequency distribution. This question was important because
actual travel distance was the main failure driver for the rover's mobility system.
(The failure driver was really the wheel motor revolutions, which are nearly
proportional to distance traveled.)
To produce an estimate of this random variable, a simulation was performed
in which a virtual Sojourner rover, complete with actual flight software, was run
on computer-generated patches of Martian terrain. This was no small feat since it
required specially developed parallel processing software running on the Caltech
supercomputer. The characteristics of the synthetic Martian terrain could be
parametrically varied to match the different values represented in the decision
tree. For each terrain, the simulation captured the actual travel distance. By
randomizing the initial location of the virtual rover relative to its target, and
running the simulation many times, the risk manager generated enough data to
produce a Weibull probability density function (PDF) that described the actual
travel distance needed to complete the 100 meters geodesic distance. Figure 8-9
shows the resultant PDFs for two Martian terrains.
Figure 8-9. Results of the Sojourner Surface Operations Simulations. The two curves
represent different terrains; the dashed line, a terrain with 50 percent of the rock-
size frequency of the actual Viking Lander 2 site; and the solid line, a terrain with 25
percent. Each probability density function was produced from multiple simulations of
a virtual Sojourner moving across a computer-generated terrain.
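A small sketch of the curve-fitting step, assuming stand-in simulation outputs and using SciPy's Weibull fit with the location fixed at the 100-meter geodesic distance:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Stand-in for the terrain-simulation outputs: actual odometer distance (m)
# needed to achieve 100 m of geodesic travel on one synthetic terrain.
simulated_distances = 100.0 + rng.weibull(a=1.8, size=500) * 35.0

# Fit a Weibull PDF to the simulated distances; the location is fixed at 100 m
# because odometer distance cannot be less than the geodesic distance.
shape, loc, scale = stats.weibull_min.fit(simulated_distances, floc=100.0)
print(f"Weibull shape = {shape:.2f}, scale = {scale:.1f} m "
      f"(location fixed at {loc:.0f} m)")
```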
The results of each simulation (time, distance traveled, and on-off cycles) were
also passed to another model, the hardware reliability model (HRM). For each
simulation run, the HRM computed the likelihood of no critical failures (system
reliability). It based its calculations on data provided by the simulation of failure
drivers and on component-level reliability data provided by reliability engineers.
The HRM user interface served mainly to input these component-level data,
namely the failure modes for each component, the associated failure driver, failure
density functional form (exponential, lognormal, Weibull, etc.), and the
parameters of the distribution. Data collection was a challenge for the reliability
engineers, since quantitative data of this sort are very scarce.
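A minimal sketch of an HRM-style calculation, assuming a series system with independent failure modes; the failure drivers, density forms, and parameters are invented placeholders, not Sojourner data:

```python
import numpy as np

def component_reliability(driver_value, model, params):
    """Reliability of one component as a function of its failure driver
    (operating time, wheel revolutions, on-off cycles, ...)."""
    if model == "exponential":
        return float(np.exp(-params["rate"] * driver_value))
    if model == "weibull":
        return float(np.exp(-(driver_value / params["scale"]) ** params["shape"]))
    raise ValueError(f"unknown failure model: {model}")

# Illustrative component list: (failure-driver value from one simulation run,
# failure-density form, distribution parameters).
components = [
    (1.4e5, "weibull",     {"shape": 1.5, "scale": 1.0e6}),  # wheel motor revolutions
    (600.0, "exponential", {"rate": 2.0e-5}),                # electronics operating hours
    (80.0,  "exponential", {"rate": 1.0e-4}),                # on-off thermal cycles
]

# System reliability: probability of no critical failure, assuming a series
# system with independent failure modes.
r_system = np.prod([component_reliability(v, m, p) for v, m, p in components])
print(f"System reliability for this run: {r_system:.4f}")
```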
To complete the limited PRA, the risk manager used the original decision tree
to merge the system reliability results with the probability estimates for each
combination of Mars environmental parameters. Gathering successful end states
resulted in a cumulative distribution function (CDF) for Sojourner's reliability at
the Mars Pathfinder landing site. The PRA results in Figure 8-10 capture the
uncertainty in the estimate.
FireSAT. The methodology for full and simplified PRAs is the same, but the
latter usually contains a reduced set of scenarios designed to capture only the
major (rather than all) mission risk contributors. A simplified PRA may be
FIGURE 8-10. Estimated Reliability of the Sojourner Rover at the Mars Pathfinder Landing
Site. A limited probabilistic risk analysis (PRA) was performed to obtain the
reliability of the rover at 100 meters of geodesic travel. The PRA used a Monte
Carlo simulation of the rover traversing Mars terrains combined with component
reliability models and a decision tree to determine the uncertainty in the estimate.
The shaded band approximates the 75 percent confidence interval.
appropriate early in the project, when less detailed design data are available, and as a
precursor to a full PRA later. During trade studies, for example, it may provide the
systems engineer with some early quantification of the top-level risks inherent in
alternative designs. The next few paragraphs illustrate a simplified PRA for the
FireSAT mission and describe some of the issues involved.
One FireSAT success criterion requires the constellation of satellites to be
operational for at least five years, with a goal of seven years. The simplified PRA
master logic diagram (MLD) for FireSAT, which addresses the probability of
failing to meet that criterion, might look like the one in Figure 8-11.
Understanding the risks due to the space environments depends on scenarios
in which micro-meteoroid and orbital debris events, charged-particle events, and
solar proton events lead to the loss of a satellite. We analyze each of these events
to determine the likelihood of their occurrence using models (some standardized)
of such phenomena*. The systems engineer needs to validate these models to be
sure that they apply to FireSAT and to account for the FireSAT orbit (700 km
circular, 55° inclination), spacecraft shielding, and parts types.
We analyze spacecraft hardware reliability early in the project based on
subsystem block diagrams showing the degree of redundancy. And we estimate
subsystem failure rates by applying parts reliability and uncertainty estimates
from several standard databases. The systems engineer may wish to augment
random failure rate estimates by taking into account infant failures, which occur
* In the case of solar proton events, these models generally take the form of predictions for
the probability of a particular fluence (particles/m2) during the period of solar activity
maxima [Larson and Wertz, 1999].
Figure 8-11. Master Logic Diagram (MLD) for the FireSAT Project. A simplified probabilistic
risk assessment (PRA) does not include risks that are deemed less significant.
Here, software failures and operator mistakes do not appear in the MLD. Later in the
project cycle, these risk sources should be included in a full PRA.
early in operations, and wear-out failures, which occur with greater likelihood as
the system ages.
The exhaustion of consumables (primarily propellant) is another risk to FireSAT
mission success. The systems engineer can determine the likelihood of
such scenarios from assumptions made in developing the project's ΔV budget. For
example, what assumption was made regarding atmospheric density during
FireSAT's operational period? What is the probability that the actual density will
exceed the assumed density and how does that translate into propellant
consumption rates? The latter depends on the FireSAT spacecraft effective frontal
area, orbital parameters, and phase in the solar cycle.
To illustrate the results of a simplified PRA, we assume that the estimated
five-year loss of mission (LOM) probability from space environments effects is
7 × 10^-4 ±15%, from spacecraft hardware failures 1 × 10^-3 ±10%, and from
exhaustion of consumables 5 × 10^-5 ±10%. Then, using the laws of probability and
assuming stochastic independence for convenience, the overall five-year LOM
probability is approximately 1.75 × 10^-3 ±8%, or roughly one chance in 570. It is perhaps more
interesting to view these results in terms illustrated in Figure 8-12, which shows
the relative contribution of each top-level factor in the MLD. To the systems
engineer, the simplified PRA suggests that the spacecraft hardware reliability and
space environments effects contribute over 97% of the risk. We should confirm this
result by further analyses as better design information becomes available.
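A quick check of that arithmetic, assuming independence as stated:

```python
# Combining the illustrative top-level contributors, assuming independence.
p_env  = 7e-4    # space environments
p_hw   = 1e-3    # spacecraft hardware failures
p_cons = 5e-5    # exhaustion of consumables

p_lom = 1 - (1 - p_env) * (1 - p_hw) * (1 - p_cons)
print(f"P(LOM) ~ {p_lom:.2e}  (~1 chance in {1 / p_lom:.0f})")
# For probabilities this small, the simple sum 7e-4 + 1e-3 + 5e-5 = 1.75e-3 is an
# excellent approximation, and 1/1.75e-3 is roughly 1 chance in 570.
```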
Figure 8-12. Loss of Mission (LOM) Contributors. The figure shows how the top-level factors
in the master logic diagram contribute to the total LOM probability for one particular
project architecture and spacecraft design. Changes in the design or orbit
parameters affect the relative contributions as well as the total probability.
* Instantaneous availability A(t) is formally defined as the probability that the system is
performing its function at a prescribed time t. Average availability over the interval [t1, t2]
is defined as 1/(t2 − t1) ∫ A(t) dt, integrated from t1 to t2. Steady-state availability is defined
as lim A(t) as t → ∞, when that limit exists. Analytic expressions for these availability
functions can be derived for simple systems that can be modeled as Markov chains with
constant hazard and repair rates.
(8-2)
(8-3)
(8-4)
We find the solution by first taking the natural log of the availability function,
transforming the product into a weighted sum, and then applying the Kuhn-Tucker
Theorem for constrained optimization. The solution requires for each ORU type the
calculation of the left-hand side of Equation (8-5), which is approximated by the
right-hand side. If the total mass of the spares is the resource constraint, then the
denominator is just the mass of ORU type i, m_i.
(8-5)
We find the optimal spares set by ranking ORU selections from highest to
lowest value according to the value of the right-hand side of Equation (8-5). The
cut-off ORU is determined when the constraint mass is reached. The resulting
availability then is computed from Equation (8-2) [Kline, 2006].
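A minimal sketch of the marginal-analysis idea behind Equations (8-2) and (8-5), assuming Poisson spares demand and an invented ORU list; it greedily buys the spare with the largest gain in the log of the no-shortage probability per kilogram until the mass constraint is reached:

```python
import math
from scipy import stats

# Illustrative ORU set: (name, mass in kg, expected demand over the resupply
# interval). All numbers are placeholders.
orus = [("pump", 20.0, 0.8), ("avionics box", 35.0, 0.5), ("filter", 5.0, 1.5)]
mass_limit = 100.0   # total spares mass constraint, kg

def ln_fill(demand, spares):
    """ln of the probability of no shortage with `spares` on hand,
    assuming Poisson demand (a common sparing assumption)."""
    return math.log(stats.poisson.cdf(spares, demand))

stock = {name: 0 for name, _, _ in orus}
mass_used = 0.0
while True:
    # Marginal gain in ln(availability proxy) per kilogram for the next spare
    candidates = []
    for name, mass, demand in orus:
        if mass_used + mass <= mass_limit:
            gain = (ln_fill(demand, stock[name] + 1) - ln_fill(demand, stock[name])) / mass
            candidates.append((gain, name, mass))
    if not candidates:
        break                      # constraint mass reached; cut off further spares
    gain, name, mass = max(candidates)
    stock[name] += 1
    mass_used += mass

availability_proxy = math.exp(sum(ln_fill(d, stock[n]) for n, _, d in orus))
print(stock, f"mass used {mass_used:.0f} kg, no-shortage probability "
      f"{availability_proxy:.3f}")
```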
We can embellish this model by adding features that allow indentured parts,
usually shop replaceable units (SRUs), to be included in the set of potential spare
parts in the availability function. This allows presumably lighter and smaller SRUs
to be carried on human space missions rather than their parent ORUs. Repairs
using SRUs, however, may engender increases in crew maintenance time that are
not valued in the model, so the optimality of the solution is necessarily local. Other
sparing-to-availability models have been proposed that also allow for multiple
echelons of spares inventories [Sherbrooke, 2004].
SpaceNet. This model is a discrete event simulation of an interplanetary
supply chain, developed at MIT and JPL [de Weck et al., 2006]. The scope of the
simulation is comprehensive, from launch to Earth return, and models exploration
from a logistics perspective. SpaceNet simulates individual missions (e.g., sortie,
pre-deploy, or resupply) or campaigns (sets of missions). It allows evaluation of
alternative exploration scenarios with respect to mission-level and campaign-level
measures of effectiveness (MOEs) that include supply chain risk metrics.
The basic building blocks of the simulation are nodes, elements, and supplies,
along with two concepts that tie these together: the time-expanded network and
processes for movement through the time-expanded network. We explain each of
these below. Collectively, these building blocks allow a complete description of the
demand and the movement of all items in a logistics scenario:
• Nodes—Nodes are dynamic spatial locations in the solar system. Nodes can
be of three types: surface locations, orbits, or Lagrange points.
• Supplies — Supplies are any items that move through the network from node
to node. Supplies include all the items needed at the planetary base, or
during the journeys to and from the base. Examples include consumables,
science equipment, surface vehicles, and spares. To track and model the
extraordinary variety of supplies that could be required, they are
aggregated into larger supply classes. Each supply class has a unique
demand model.
• Elements — Elements are defined as the indivisible physical objects that
travel through the network and in general hold or transport crew and
supplies. Most elements are what we generally think of as "vehicles," such
as the Orion Crew Exploration Vehicle and various propulsion stages, but
they also include other major end items such as surface habitats and
pressurized rovers.
• Time-expanded network—Nodes in the simulation are linked to other nodes to
create a static network. The time-expanded network adds the "arrow of time"
by allowing only those trajectories that reflect astrodynamic constraints.
• Processes—Five processes are modeled in SpaceNet: each has its particular
parameters (e.g., ΔV) that create relationships among the other building
blocks and time:
- Waiting—remaining at the same physical node
* The Mir proximity mishap occurred in June 1997, when a Progress resupply spacecraft
collided with the Mir space station, damaging a solar array and the Spektr module.
8.4 Determine an Initial Risk Handling Approach
Table 8-5. Typical Mitigations. This table shows some actions to consider in risk mitigation
planning. (FTA is fault tree analysis; FMEA is failure modes and effects analysis; CPU
is central processing unit.)
• Design margins: Provide mass, power, and other design margins early
• Workforce application: Control the tendency to maintain “standing armies” (large groups of workers) during development and operations
• Monitor cost trends: Use earned value management to track the estimate at completion (EAC)
Schedule risks:
• Schedule estimation: Know and schedule all deliverables; understand integrated schedule risk; monitor the critical paths
• Schedule margins: Plan schedule margins at key points in the project cycle (e.g., major reviews, system tests)
• New technology insertion: Provide hedges for new technology developments; monitor new technology
The cost variance (CV) at a given point in the project is the difference between
what the project budgeted for the work elements completed to that point (BCWP)
and the actual cost for that work (ACWP). A large negative CV (as in Figure 8-15)
suggests, in the absence of corrective actions, a project cost overrun. Schedule
variance (SV) is the difference between what the project budgeted for the work
elements accomplished up to that point (BCWP) and what the project budgeted for
the work elements planned up to that point (BCWS). A large negative SV suggests,
in the absence of corrective action, a project schedule slip.
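A short worked example with invented values; the EAC formula shown (budget at completion scaled by the cost performance index) is one common choice, not necessarily the one behind Equation (8-7):

```python
# Illustrative earned-value snapshot (all values in $M).
BCWS = 42.0   # budgeted cost of work scheduled to date
BCWP = 37.0   # budgeted cost of work performed (earned value)
ACWP = 45.0   # actual cost of work performed
BAC  = 120.0  # budget at completion

CV = BCWP - ACWP          # negative -> spending more than earned
SV = BCWP - BCWS          # negative -> behind schedule
CPI = BCWP / ACWP         # cost performance index
EAC = BAC / CPI           # estimate at completion if current efficiency persists

print(f"CV = {CV:+.1f} $M, SV = {SV:+.1f} $M, CPI = {CPI:.2f}, EAC = {EAC:.1f} $M")
```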
FIGURE 8-15. Earned Value Management (EVM). EVM combines actual cost, schedule, work
planned, and work completed data to produce the estimate at completion (EAC).
(8-7)
FIGURE 8-16. Project Risk Position Reporting. In this combination chart, the project’s top ten
risks are in a 5x5 risk matrix on the left. The right side identifies each risk, shows the
risk’s trend since the last risk report with block arrows, and displays codes for the
risk handling approach. This chart is used often in high-level presentations, because
it conveys a large amount of risk information in a compact form.
References
Bankes, Steven C. May 14, 2002. "Tools and Techniques for Developing Policies for Complex
and Uncertain Systems." Proceedings of the National Academy of Sciences. 99:7263-7266.
Chesley, Julie, Wiley J. Larson, Marilyn McQuade, and Robert J. Menrad. 2008. Applied
Project Management for Space Systems. New York, NY: McGraw-Hill Companies.
Department of Defense. 2000. Systems Engineering Fundamentals. Ft. Belvoir, VA: Defense
Acquisition University Press.
Department of the Army. May 2002. Cost Analysis Manual. Arlington, VA: U.S. Army Cost
and Economic Analysis Center.
de Weck, Olivier L., David Simchi-Levi, et al. 2006. "SpaceNet v1.3 User's Guide."
NASA/TP—2007-214725. Cambridge, MA: Massachusetts Institute of Technology and
Pasadena, CA: Jet Propulsion Laboratory.
Ebbeler, Donald H., George Fox, and Hamid Habib-agahi. September 2003. "Dynamic Cost
Risk Estimation and Budget Misspecification." Proceedings of the AIAA Space 2003
Conference. Long Beach, CA.
Frankel, Ernst G. March 31, 1988. Systems Reliability and Risk Analysis (Engineering
Applications of Systems Reliability and Risk Analysis), 2nd edition. Dordrecht, The
Netherlands: Kluwer Academic Publishers.
Jorgensen, Edward J., Jake R. Matijevic, and Robert Shishko. 1998. "Microrover Flight
Experiment: Risk Management End-of-Mission Report." JPL D-11181-EOM. Pasadena,
CA: Jet Propulsion Laboratory.
Kline, Robert C., Tovey C. Bachman, and Carol A. DeZwarte. 2006. LMI Model for Logistics
Assessments of New Space Systems and Upgrades. LMI, NS520T1. McLean, VA: Logistics
Management Institute.
Kouvelis, Panos and Gang Yu. 1997. Robust Discrete Optimization and Its Applications.
Dordrecht, The Netherlands: Kluwer Academic Publishers.
Larson, Wiley J. and James R. Wertz, eds. 1999. Space Mission Analysis and Design, 3rd
edition. Torrance, CA: Microcosm Press and Dordrecht, The Netherlands: Kluwer
Academic Publishers.
McCormick, Norman J. 1981. Reliability and Risk Analysis. New York: Academic Press.
Morgan, Millett G., and Max Henrion. 1990. Uncertainty: A Guide to Dealing with Uncertainty
in Quantitative Risk and Policy Analysis. Cambridge, UK: Cambridge University Press.
NASA. March 26, 2007. "Systems Engineering Processes and Requirements." NPR 7123.1.
Washington, DC: Office of Safety and Mission Assurance.
NASA. April 25, 2002. "Risk Management Procedural Requirements." NPR 8000.4.
Washington, DC: Office of Safety and Mission Assurance.
NASA. August 2002. "Probabilistic Risk Assessment Procedures Guide for NASA Managers
and Practitioners." Washington, DC: Office of Safety and Mission Assurance.
NASA. June 14, 2004. "Risk Classification for NASA Payloads." NPR 8705.4. Washington,
DC: Office of Safety and Mission Assurance.
NASA. July 12, 2004. "Probabilistic Risk Assessment (PRA) Procedures for NASA Programs
and Projects." NPR 8705.5. Washington, DC: Office of Safety and Mission Assurance.
Sherbrooke, Craig C. 2004. Optimal Inventory Modeling of Systems: Multi-Echelon Techniques,
2nd edition. Norwell, MA: Kluwer Academic Publishers.
Shishko, Robert. August 2002. "Risk Analysis Simulation of Rover Operations for Mars
Surface Exploration." Proceedings of the Joint ESA-NASA Space-Flight Safety Conference.
ESA SP-486. Noordwijk, The Netherlands: ESTEC.
Smith, Clayton A. August 2002. "Probabilistic Risk Assessment for the International Space
Station." Proceedings of the Joint ESA-NASA Space-Flight Safety Conference. ESA SP-486.
Noordwijk, The Netherlands: ESTEC.
United States Air Force (USAF). April 2007. Air Force Cost Risk and Uncertainty Analysis
Handbook. Arlington, VA: U.S. Air Force Cost Analysis Agency.
Chapter 9
Product Implementation
David Y. Kusnierkiewicz,
Johns Hopkins University Applied Physics Laboratory
9.1 Prepare for Implementation
9.2 Participate in Buying the Product
9.3 Participate in Acquiring the Reuse End Product
9.4 Evaluate the Readiness of Items that Enable
Product Implementation
9.5 Make the Product
9.6 Prepare Appropriate Product Support
Documentation
9.7 Capture Product Implementation Work
Products
9.8 Ensure Effective Communication
TABLE 9-1. The Product Implementation Process. The process includes not only physical
acquisition, but preparation, documentation, and communication. Not all steps apply to
all acquisitions.
Figure 9-1 shows how the process steps interact; all participants in
manufacturing do the activities shown in the dotted box. Inputs to the
implementation process fall into three main categories:
• Documentation (design and interface specifications and configuration
documentation)
• Raw materials (metallic, non-metallic, electronic piece-parts, etc.)
• Things that enable product implementation (computer-aided design,
modeling, and simulation tools, machine tools, facilities, etc.)
Outputs include:
• The product
• Documentation (end-item data package, user's manuals, etc.)
• Work products (procedures used, decision rationale and assumptions,
corrective actions, and lessons learned)
FIGURE 9-1. Implementation Process Interactions. Manufacture requires more steps than either
purchase or reuse.
FIGURE 9-2. The Players in Product Implementation. Activities between tiers are similar.
FIGURE 9-3. Implementation Activities and Interactions. Sponsor, prime, and subcontractor
activities and interactions are similar between tiers. Activities for reuse are much the
same as for purchase. (SOW is statement of work.)
Table 9-2. Communication for Implementation. Sponsor, prime, and subcontractor communications
must be timely and thorough throughout the project.
Input to Lower Tier | Output Received | Communication and Oversight
Example: Acme Space Systems Enterprises, the FireSAT spacecraft supplier, has
elected to buy a star tracker through a competitive procurement. They awarded the
contract to a relatively new company on the Approved Supplier List (the company
having passed an initial audit). A larger corporation certified to ISO 9001, but not the
more stringent AS9100, recently acquired the company. After negotiation with the
subcontractor, the Acme supplier quality assurance manager conducts an on-site
audit of the quality management system and practices. The audit focuses on the
following areas:
• Configuration management process
• Corrective and preventive action system
• Equipment calibration records
• Electrostatic discharge (ESD) and material handling processes
• Inspection process
• Test process
• Limited-life (e.g., shelf-life) material tracking and handling process
• Documentation and disposition of non-conformances
• Manufacturing process control
• Quality records control and retention
• Training and certification
• Self-audit process
The auditor reviews process documents and as-built records, inspects facilities,
and interviews personnel. The audit results in two findings:
1. The solder training is conducted to military/DOD standards, but not to
NASA-STD-8739.3 (Soldered Electrical Connections), which the contract
requires
2. Some of the ESD workstations had prohibited materials that produced
excessive static fields, as determined by the auditor's hand-held static
meter
The audit recommendation is to "conditionally accept the supplier." The
subcontractor receives the results of the audit, and agrees to address the findings
through their corrective action system. They train to the required NASA soldering
standard, and forward copies of the certification records. They also retrain and
certify staff to ESD handling and housekeeping practices, and increase the
frequency of self-audits.
systems engineer, the primary technical point of contact for the project (often the
lead subsystem engineer), a performance assurance engineer, and others as
desired, such as the sponsor.
Part of this kick-off may be the heritage review for an off-the-shelf purchase.
The review focuses on confirming the component's suitability for the application.
It encompasses assessments regarding technical interfaces (hardware and
software) and performance, and the environments to which the unit has been
previously qualified, like electromagnetic compatibility (EMC), radiation, and
contamination. It also assesses the design's compatibility with parts quality
requirements. We have to identify and document all non-compliances, and
address them either by modifying the components to bring them into compliance,
or with formal waivers or deviations for accepted deficiencies.
We must carefully review and analyze all modifications to an existing,
qualified design before incorporating them. How will these changes affect the
performance and qualification of the product? Are the changes driven by
requirements? A variety of factors may drive modifications:
• Quality requirements—do the existing parts in the design comply with the
program parts quality requirements?
• Environmental qualification requirements (EMC and mechanical, which
includes random vibration or acoustics and shock, thermal, radiation, and
contamination)
• Parts obsolescence issues
• Product improvement
We must take care when buying even simple off-the-shelf components or
mechanisms with previous flight heritage. Manufacturers may make seemingly
minor changes for reasons of materials availability. They may consider such
changes minor enough not to mention in the documentation normally reviewed by
a customer. We have to be conscientious in inquiring into such changes and work
with the vendor to ensure the changes are acceptable, and that the modified design
still qualifies for the application.
The vendor and the project team members must communicate problems,
failures, and non-conformances. It's usual to report failures within 24 hours. The
project should participate in the disposition of all non-conformances. Any changes
to requirements must be agreed to and documented via revisions to requirements
documents, contracts, or formal waivers and deviations as appropriate.
The statement of work may specify formal milestone reviews. Reviews to
determine readiness to enter the environmental qualification program, and a pre-ship
or buy-off review at the completion of the test program, are common.
FIGURE 9-4. Electronics Box Integration. The integration progresses in steps, from component
through subsystem.
Example: During the fabrication of electronic boards for FireSAT, Acme Space
Systems Enterprises quality control inspectors start rejecting solder workmanship
associated with the attachment of surface-mounted capacitors using tin-lead solder.
The solder is not adhering properly to the capacitor end-caps. Materials engineers
determine that the end-cap finish is silver-palladium, and these parts are intended
for use with silver epoxy for terrestrial applications, and not tin-lead solder. The
manufacturer, who has eliminated lead from their processes for environmental
reasons, discontinued the tin-lead end-cap finish. Parts are available with a pure tin
end-cap finish that's compatible with tin-lead solder, but parts with a pure tin finish
are prohibited due to concerns over tin whisker growth.
The Acme systems engineer and project office inform GSFC of the situation,
and ask to engage GSFC materials engineers in resolving the issue. Between the
two organizations, they determine that pure tin end-caps can be tinned with tin-
lead solder before being attached to the boards, mitigating the tin whisker issue.
They modify the part attachment process for these parts, and replace all affected
surface-mount capacitors with pure tin end-caps, which are readily available from
the part supplier. Schedule and cost impacts to the program are minimal.
The build processes are analogous in that construction starts with lower levels
of assembly that are tested individually, and then integrated into a more complete
subsystem with increasing complexity and functionality. That is, an electronics box
may consist of a number of boards that are tested individually before being
integrated into the box, and several boxes may constitute a single subsystem.
Similarly, software may consist of a number of individual modules that are
developed separately and then integrated into a "build" for integration with the
target hardware. Subsequently, the completed hardware subsystem undergoes a
qualification (or acceptance) test program, after which modification is limited to
rework due to failures or incompatibility issues. An incompatibility may be a
requirement or interface non-compliance that we can't remedy with a software fix.
Software development is often a series of successive builds of increasing
functionality that extends after the first build delivered for hardware integration.
Several software builds may be planned during the system (spacecraft) integration
phase. The schedule may reserve a slot for delivery of a software build intended
solely to fix bugs identified through testing and use. The final software build is
usually delivered during system-level integration and testing. Figure 9-5
illustrates this process.
Summary
References
International Organization for Standardization (ISO). November 2002. Systems
Engineering—System Life Cycle Processes, ISO/IEC 15288. ISO/IEC.
Chapter 10
System Integration
Gerrit Muller, Buskerud University College
Eberhard Gill, Delft University of Technology
10.1 Define the Role of System Integration within
Systems Engineering
10.2 Select a System Integration Approach and
Strategy
10.3 Set Up a Test and Integration Plan
10.4 Schedule in the Presence of Unknowns
10.5 Form the Integration Team
10.6 Manage a Changing Configuration
10.7 Adapt Integration to the Project Specifics
10.8 Solve Typical Problems
Table 10-1. Chapter structure. This table summarizes the topics of this
chapter.
following the deployment and check-out phase of a spacecraft. In this phase the
space segment operates together with the ground and control segments to realize
the mission objectives.
Figure 10-1 depicts the role of system integration within system and use
contexts. Here, components and functions come together to achieve a reproducible
and supportable system working in its environment and fulfilling its users' needs.
Detailed knowledge of integration on the technical, organizational, and reflection
levels serves as a framework for the chapter.
FIGURE 10-1. Map of System Integration. The systems engineering context ties together the
system integration effort.
the design solution definition into the desired end product of the work breakdown
structure (WBS) model through assembly and integration of lower-level validated
end products in a form consistent with the product-line lifecycle phase exit criteria
and that satisfies the design solution definition requirements.
These definitions of system integration vary substantially in emphasis, but
they all share the idea of combining less complex functions to achieve a system
satisfying its requirements. Everyone on a space project must have a clear and
unambiguous understanding of what system integration is and what it comprises.
Its processes and activities depend on the scope and complexity of the space
mission, and on the customer. Large-scale integration for manned space flight, for
example, is far more complex than integrating a technology demonstration
mission using a micro-satellite. Table 10-2 lists typical engineering activities
related to the system integration tasks within the development of a space mission.
Table 10-2. Space System Levels with Their Associated System Integration Tasks. The tasks
depend not only on system level, as shown here, but on mission complexity.
System Level | Example | Typical Integration Task | Task Example
Part | Resistor | Integrate on board | Functional test
Component | Printed circuit board | Integrate component | Soldering
Subassembly | Actuator electronics | Integrate hardware and software | Software upload
Figure 10-2. System Decomposition and Integration within the Systems Engineering
Framework. Decomposition follows a top-down approach, which is complemented
by the bottom-up integration process. The vertical verification arrow indicates
verification during final integration; the horizontal verification arrows indicate
verification at the respective requirement levels.
FIGURE 10-3. Typical Phases of a Systems Engineering Process, a) The top diagram depicts a
simplistic unidirectional waterfall scheme, which is inefficient. b) The middle process
is acceptable, but suboptimal, as it limits integration activities. c) The bottom shows
a shifting focus of activities, which is preferable from an integration perspective.
Table 10-3. Types of Interfaces. The examples here are typical of space systems.
example, may lack a common language and a sufficient knowledge of the others'
area. Beyond the conventional ICDs, a system integration point of view demands
that we consider all interfaces between subsystems, assemblies, etc. A matrix of all
elements at a particular system level is a good way to indicate and trace interfaces.
FIGURE 10-4. Integration Over Time. Integration takes place in a bottom-up fashion with large
overlaps between the integration levels.
FIGURE 10-5. NASA’s AURA Earth Observation Platform with a Mass of about 1800 kg. Here
we show the completely integrated AURA spacecraft during acoustic tests in 2003.
FIGURE 10-6. Integration of the Delfi-C3 Nano-satellite with a Mass of 3 kg. Here the
components come together to form subassemblies, subsystems, and the systems in
the clean room at the Delft University of Technology.
Figure 10-7. A Process that Integrates a New Application and a New Subsystem for Final
Integration into a New System. During integration, we transition from previous
systems and partial systems to the new system configuration. The integration does not
necessarily have to wait for new hardware but can apply simulated hardware instead.
and orbit control subsystem (AOCS), with the goal to verify its correct functional
operation. We selected the AOCS because such subsystems are notoriously complex.
This complexity is not due primarily to the dynamics or the control law, but because
it involves many sensors and actuators, and we must accommodate many modes,
safety precautions, and operational constraints. This scenario considers a set of four
gyroscopes to measure the spacecraft's rotation rates, a coarse Earth-Sun sensor to
determine the spacecraft's orientation with respect to Earth and the Sun, a set of two
star trackers for determining the precise inertial spacecraft orientation, and fine Sun
sensors for precision orientation with respect to the Sun.
The actuators for the FireSAT AOCS are a set of four reaction wheels to orient
the spacecraft in all three axes, two magnetic torquers to support reaction wheel
desaturation, and a thruster system for orbit control. Typical attitude modes are: Sun acquisition
mode where the solar panels orient toward the Sun for charging the batteries;
Earth acquisition mode for the payload, such as a camera, pointing toward Earth;
and safe mode with a limited set of sensors and actuators to assure survival in
critical situations. As the AOCS requires micro-second timing, we must prove the
performance of the subsystem with flight software and hardware. Thus, in
addition to computer simulations, we also need closed-loop hardware-in-the-loop
tests. The relevant elements of the FireSAT environment include the spacecraft
orbit, torques acting on the spacecraft, the position of the Sun, and other spacecraft
subsystems, all of which are software simulated. The AOCS sensor and actuator
units and the attitude control computer (ACC) are first software simulated and
then replaced step-by-step by hardware.
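As a purely illustrative sketch of the kind of all-software closed-loop simulation that precedes such hardware substitution, here is a single-axis rate-and-attitude loop with a simulated gyro and reaction wheel; the inertia, gains, and limits are invented:

```python
import numpy as np

# Minimal single-axis closed-loop sketch: a simulated gyro measures the body
# rate, a PD-style controller commands a reaction wheel torque, and the
# spacecraft dynamics are propagated. All parameters are illustrative.
I = 45.0                 # spacecraft moment of inertia about one axis, kg m^2
kp, kd = 0.8, 4.0        # controller gains (invented)
dt = 0.1                 # simulation step, s

theta, omega = np.radians(10.0), 0.0     # initial attitude error (rad) and rate (rad/s)
for _ in range(int(120 / dt)):           # two minutes of simulated time
    gyro_rate = omega + np.random.normal(0.0, 1e-4)   # simulated gyro noise
    torque = -(kp * theta + kd * gyro_rate)           # reaction wheel command
    torque = float(np.clip(torque, -0.05, 0.05))      # wheel torque limit, N m
    omega += (torque / I) * dt
    theta += omega * dt

print(f"Residual pointing error after 120 s: {np.degrees(theta):.3f} deg")
```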
Figure 10-8 shows a test setup for the FireSAT attitude and orbit control
subsystem unit. These tests usually happen at the unit contractor to test the
interfaces of their unit against the standardized check-out equipment (COE). They
also test the physical performance of the sensor with specialized equipment.
FIGURE 10-8. Integrated FireSAT Attitude and Orbit Control System (AOCS) Closed-loop
Test with the Integrated Attitude Control Computer (ACC). Here, the electronics
of the fine Sun sensor (FSS) are stimulated by check-out equipment (COE). The
loop is closed by a serial data bus between the FSS electronics and the COE.
The integrator establishes a closed-loop test bed for the FireSAT AOCS. The
attitude control computer is tested on interfaces and is then the first unit to be
integrated into the subsystem. At this stage, the subsystem check-out system
(AOCS check-out equipment), which simulates the spacecraft's environment, is
integrated as well (Figure 10-9). After establishing and testing this setup, engineers
enhance it for new functionality. Software models for the sensors and actuators
then are integrated into the ACC. The prescribed integration setup is rather
flexible. If, for example, the attitude control computer is not yet available in
hardware, the subsystem COE could substitute for it.
The goal of the integration test is to integrate as much FireSAT AOCS
hardware as possible to verify the system's performance. To this end, the software
models are replaced step-by-step by the COE (Figure 10-7) and by their sensor and
actuator counterparts (based on Figure 10-8). Figure 10-10 shows a fully integrated
closed-loop test setup for the FireSAT AOCS. The stimuli for the sensors and
actuators, generated by the COE, are electrical. A physical excitation is in most
cases not possible in the test environment.
In the figures, spacecraft hardware or engineering models are indicated as
boxes with rounded corners. In Figure 10-10, actuators are indicated with bold
FIGURE 10-9. Closed-loop Integration Test Setup for the FireSAT Attitude and Orbit Control
System (AOCS). The attitude control computer (ACC) connects to the AOCS check
out equipment (COE) via a remote terminal unit (RTU). The ACC connection through
the onboard data handling (OBDH) and the telemetry and telecommand interface
(TM/TC IF) allows us to achieve an integrated operation at early stages. We note that
the software interface (SW IF) is available to provide the means to upload software or
debug output generated by the ACC. (Adapted from the Infrared Space Observatory
[ISO] AOCS.)
boxes. The setup comprises assembly check-out equipment providing stimuli for
gyroscopes, a coarse Earth-Sun sensor, star trackers, a fine Sun sensor, magnetic
torquers, reaction wheels, and thrusters, which are connected through a data bus
to an attitude control computer (ACC). The ACC connects to the simulator through
a remote terminal unit. Communication happens through an operator interface to
the AOCS simulator and through the onboard data handling system and a
telemetry and telecommand interface.
In closed loop AOCS test setups, such as the one depicted in Figure 10-10, we
have to verify the correct functional performance of all the attitude modes and the
transitions between them. We must also test all of the safeguard functions. This
amount of testing often challenges engineers because it may require massive
manipulation of the test configuration to activate these functions, which are not
activated in normal operations.
The discussion so far has focused on the functional aspects of integration
testing. But another important role of environment simulation is removing the effects
of the terrestrial environment. Earth's gravity in a laboratory is a completely different
environment from the free-fall of objects in space. The deployment mechanisms
FIGURE 10-10. Closed Loop Integration Test Setup for the FireSAT AOCS Subsystem. Multiple
interfaces present special challenges in testing. (AOCS is attitude and orbit control
system; COE is check-out equipment; GYR is gyroscope; CES is coarse Earth-Sun
sensor; ST is star trackers; FSS is fine Sun sensor; MTQ is magnetic torquers; RW is
reaction wheel; TH is thrusters; TM/TC IF is telemetry and telecommand interface;
Operator IF is operator interface; RTU is remote terminal unit; OBDH is onboard data
handling; ACC is attitude control computer.) (Adapted from Infrared Space
Observatory attitude and orbit control system.)
and drives for large solar arrays require special test setups to eliminate or
compensate for the effects of gravity. The magnetic environment in an integration
facility must be known and possibly changed to test the functional behavior of
AOCS sensors like magnetic coils.
The challenge for the project team is to determine which intermediate
integration configurations are beneficial. Every additional configuration costs
money to create and keep up-to-date and running. An even more difficult matter
is that the same critical resources, an imaging expert for instance, are needed for
the different configurations. Do we concentrate completely on the final product, or
do we invest in intermediate steps? Finally we have the difficulty of configuration
management for all the integration configurations. When hundreds or thousands
of engineers are working on a system, most are busy with changing
implementations. Strict change procedures for integration configurations reduce
the management problem, but in practice they conflict with the troubleshooting
needs during integration. Crucial questions in determining what intermediate
configurations to create are:
• How critical or sensitive is the subsystem or function to be integrated?
• What aspects are sufficiently close to final operation that the feedback from
the configuration makes sense?
• How much must we invest in this intermediate step? We have to pay special
attention to critical resources.
• Can we formulate the goal of this integration system in such a way that it
guides the configuration management problem?
• How do contractual relations (e.g., acceptance) affect and determine the
configurations?
Table 10-4 details a stepwise integration approach based on these
considerations. The first step is to determine a limited set of the most critical
system performance parameters, such as image quality, attitude control accuracy,
or power consumption. These parameters are the outcome of a complicated
interaction of system functions and subsystems; we call the set of functions and
subsystems that result in a system parameter a chain. In the second step we
identify the chains that lead to critical performance parameters. For the FireSAT
AOCS example of Section 10.2.3, the chain comprises the star trackers as attitude
determination sensors, the reaction wheels as attitude actuators, and the
algorithms implemented in the attitude control computer.
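One way to make such a chain explicit enough to drive an integration configuration is to record it as a small data structure, as in the hedged sketch below; the element names are taken from the example above, and the structure itself is only an illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Chain:
    """A critical system performance parameter plus the functions and
    subsystems whose interaction produces it."""
    parameter: str
    elements: List[str] = field(default_factory=list)

# Illustrative FireSAT AOCS chain from the example above
attitude_control_accuracy = Chain(
    parameter="attitude control accuracy",
    elements=[
        "star trackers (attitude determination sensors)",
        "reaction wheels (attitude actuators)",
        "control algorithms in the attitude control computer",
    ],
)

# An intermediate integration configuration is then whatever hardware,
# software, and check-out equipment is needed to exercise every element.
print(attitude_control_accuracy.parameter, "->", attitude_control_accuracy.elements)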
Step three specifies these chains as the baseline for integration configurations.
We test components and subsystems to verify compliance with the specification.
The integration focuses on the aggregate performance of the elements. In this way
we verify that the composition behaves and performs as it should. We start to
define partial system configurations as integration vehicles after we identify
critical chains. These chains serve as guidance for the integration process.
Performance parameters and their related chains are critical for different reasons:
a parameter may be critical per se, or we may know (perhaps based on historic data)
that the parameter is very sensitive, vulnerable, or uncertain. An example is the
performance of the attitude control algorithm, composed of many low-level
functions. Experience shows that individual functions tend to perform well, but the
composition of many functions performs badly, due to low-level resource conflicts.
We should identify the critical system performance parameters as early as
possible. In Table 10-4, this occurs in steps 4 and 5, starting with a focus on typical
TABLE 10-4. Stepwise Integration Approach. This approach guides the use of resources to the
intermediate integration of the most important system parameters.

Step  Description
1     Determine the most critical system performance parameters
2     Identify the chains of functions and subsystems that lead to the critical performance parameters
3     Specify these chains as the baseline for integration configurations
4     Show system performance parameters as early as possible; start with "typical" system
      performance
5     Show worst-case and boundary system performance
6     Rework manual integration tests in steps into automated regression tests
7     Replace preliminary integration test setups with mature systems that support automated
      regression testing
8     Integrate the chains and demonstrate the critical system performance parameters simultaneously
performance. Once the system gets more stable and predictable, we have room to
study worst-case and boundary performance, e.g., a reduced number of star
trackers. In steps 6 and 7, we replace preliminary integration test setups with
mature systems, which allow, depending on the test case, automated regression
testing. We then evaluate, analyze, and iterate the test results until they show
compliance with the requirements. The final step integrates the chains and enables
a simultaneous performance demonstration of the system.
Table 10-5. Physical Models for Space Systems Integration and Verification. Almost no space
mission uses all of these models. The project must identify the relevant ones based on
the mission characteristics and the constraints. (FEM is functional engineering model.)
These test methods may be applied to different physical models at different system
levels.
Functional tests verify the system's functional requirements at all levels. They
may include a variety of test types depending on the functions of the product.
Examples are deployment tests of a solar array, closed-loop control tests of an attitude
control system with real hardware in a simulated environment, radio frequency tests
of a communication system, or the test of an algorithm on the onboard computer.
Integration tests normally comprise mechanical, electrical, and software
integration. In mechanical integration tests, we test the assembly of lower-level
systems, check mechanical interfaces, verify envelopes and alignments, and
determine mass properties. Electrical integration tests check the electrical and power
interfaces, power consumption, data transfer, grounding, and soldering and may
include electromagnetic compatibility tests as well. Software tests check the correct
functioning of software on simulated hardware or in a simulated environment.
Environmental tests verify that items will withstand the conditions they're
exposed to during their operational lifetime. Included are tests related to radiation
exposure and thermal and mechanical loads. Electromagnetic compatibility testing
demonstrates that the equipment's electromagnetic emission and susceptibility
won't lead to malfunction. Total dose tests verify that the equipment
can tolerate the ionizing space environment. Thermal testing demonstrates the
equipment's ability to withstand worst-case conditions in a thermal-vacuum
environment. These tests predominantly address orbital conditions, but
mechanical tests, for such matters as linear acceleration or vibration and shock,
focus on transport, handling, launch, and reentry conditions.
Some items may require other test methods, such as humidity tests if the
equipment can't be environmentally protected, or leakage tests for pressurized
equipment. Extraterrestrial space missions must meet protection standards to
prevent biological contamination of planetary surfaces. Human space missions
require tests for toxic off-gassing and audible noise.
Environmental tests are usually conducted sequentially and combined with a
number of functional tests as shown in Figure 10-11. Testing the quality of
workmanship is also important. Besides a visual or electrical inspection, a test under
harsh environmental conditions, especially thermal-vacuum and mechanical ones,
may uncover problems with workmanship, like soldering errors or loose screws.
Testing is a significant cost factor in space systems engineering. Costs come
from more than planning, defining, conducting, analyzing and documenting tests.
Testing also entails the necessary infrastructure, particularly costly check-out
equipment (COE). The equipment is sometimes specific for a particular test or
mission and developing it is often a project in itself. Examples of COE are
electronic ground support equipment (EGSE) or the GPS signal simulator (GSS).
We use the EGSE during ground tests to connect to the space segment via the space
link and to effectively replace the ground station. The GSS simulates, based on a
predefined user satellite trajectory and GPS ephemerides, the radio frequency
signals that the antenna of a GNSS receiver would receive in space, including the
time-varying Doppler shift. Mechanical ground support equipment should
eliminate or compensate for gravity effects during the deployment tests for booms,
antennas, or solar arrays.
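As a simple illustration of what the GSS has to reproduce, the sketch below computes the first-order Doppler-shifted carrier that a GNSS receiver would see for a given line-of-sight range rate. The GPS L1 frequency is standard; the sample range rate is an arbitrary assumption.

# Sketch: first-order Doppler shift of a GPS L1 carrier, as a GSS must
# reproduce it along a predefined user-satellite trajectory.
C = 299_792_458.0        # speed of light, m/s
F_L1 = 1_575.42e6        # GPS L1 carrier frequency, Hz

def received_frequency(range_rate_m_s: float) -> float:
    """Range rate > 0 means transmitter and receiver are separating,
    which lowers the received frequency."""
    return F_L1 * (1.0 - range_rate_m_s / C)

# Example: a user satellite closing on a GPS satellite at 5 km/s
# (illustrative value) sees the carrier raised by roughly 26 kHz.
shift = received_frequency(-5_000.0) - F_L1
print(f"Doppler shift: {shift:.0f} Hz")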
Figure 10-11. Typical Sequence of Environmental Tests Combined with Functional Tests.
Environmental tests are depicted as mechanical (rectangles) and radiation (rounded
boxes) tests. The sequence varies depending on the tested item and the mission
requirements. The embedded functional tests vary with the effort required for the
functional and specific environmental tests and the risks associated with them.
(EMC is electromagnetic compatibility.)
physically beyond their specification limits. Typical examples are thermal testing
of electronic equipment at temperatures close to or exceeding the component
specifications, and vibration tests that simulate the stresses on the spacecraft
structure during launch.
Lifecycle testing—This testing denotes system tests under conditions that emulate
the lifetime of the system. It's comparable to load testing with time as the critical
parameter. In software dominated areas, we target effects with long time
constants, such as drift or resource fragmentation. For hardware dominated areas,
we address lubrication, fatigue, wear, and creep. We have to design the load and
vary the inputs so as to find as many problems as possible with long time constants
in a short time. The available integration and test time is often orders of magnitude
shorter than the system's expected operational lifetime. Creativity and
engineering ingenuity are required to "accelerate" system aging and find effects
with very long time constants. Risk analysis helps decide the need for costly lifecycle
tests. Lifecycle testing is a good example of the link between operational and
functional analysis.
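For thermally driven wear-out mechanisms, one common way to quantify such acceleration is the Arrhenius acceleration factor. The sketch below uses the generic textbook relation; the activation energy and temperatures are assumed example values, not values derived from any specific FireSAT component.

import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_acceleration(ea_ev: float, t_use_c: float, t_test_c: float) -> float:
    """Acceleration factor of a thermally activated failure mechanism
    when tested at t_test_c instead of the use temperature t_use_c."""
    t_use = t_use_c + 273.15
    t_test = t_test_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_test))

# Example: an assumed 0.7 eV mechanism tested at 85 degC versus 40 degC use
af = arrhenius_acceleration(0.7, 40.0, 85.0)
print(f"Each test hour covers about {af:.0f} hours of use")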
Test Configuration
Testing using systematic data—This involves letting parameter sets vary
systematically within a predefined parameter space. An example is the power
input generated by solar arrays to the spacecraft's power subsystem. Systematic
test sets are desirable for their reproducibility and ease of diagnostics. We
systematically test the boundary of the parameter space to verify that the system
is well-behaved. This last conclusion, however, is based on the assumption that the
design is smooth and continuous, without any resonances or hot spots. But in
many complex systems this assumption isn't valid. For instance, mechanical
systems have resonance frequencies that cause significantly higher deformations
when excited than do other frequencies. So we must include test configurations at
pre-determined or suspect parts of the parameter space.
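A minimal sketch of generating such a systematic test set is shown below: an evenly spaced grid over an assumed solar-array power range and bus-voltage range, with the boundaries always included and pre-identified suspect points added explicitly. The parameter ranges and the suspect value are illustrative assumptions.

import itertools

def systematic_test_points(lo, hi, steps, suspect=()):
    """Evenly spaced points across [lo, hi], always including both
    boundaries, plus any pre-identified suspect values."""
    grid = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return sorted(set(grid) | set(suspect))

# Illustrative parameter space: array power input (W) and bus voltage (V)
power_points = systematic_test_points(0.0, 800.0, 9, suspect=(735.0,))
voltage_points = systematic_test_points(24.0, 34.0, 6)

# Full-factorial combination of the two parameters for the test campaign
test_cases = list(itertools.product(power_points, voltage_points))
print(f"{len(test_cases)} systematic test cases")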
Regression testing—Since many parts of the system are still changing, we have to
monitor system performance regularly. Common problems after software updates
include functions that can no longer be executed or that deviate from nominal
performance. Regression tests allow us to monitor and identify regression bugs—
one way is by redoing previously successful test runs. Early integration tests are
usually done manually, because the system is still premature, and because
integrators have to respond to many unexpected problems. In time, the chain and
surrounding system get more stable, allowing automated testing. The early
manual integration steps then transition into automated regression testing.
Systems engineers must monitor and analyze the results of regularly performed
regression tests. This analysis isn't pass or fail, but looks for trends, unexplained
discontinuities, or variations.
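A hedged sketch of such an automated regression check appears below: it compares a newly measured performance figure against an archived history and flags unexplained changes. The metric name, tolerance, and baseline-file layout are assumptions for illustration.

import json
from pathlib import Path

BASELINE = Path("regression_baseline.json")  # archived earlier results (assumed layout)
TOLERANCE = 0.05                              # 5% band, assumed

def check_regression(metric: str, new_value: float) -> str:
    history = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}
    past = history.get(metric, [])
    verdict = "no baseline yet"
    if past:
        reference = past[-1]
        drift = abs(new_value - reference) / abs(reference)
        verdict = "PASS" if drift <= TOLERANCE else f"REGRESSION ({drift:.1%} change)"
    history.setdefault(metric, []).append(new_value)   # keep the history for trend analysis
    BASELINE.write_text(json.dumps(history, indent=2))
    return verdict

# Example run after a software update (illustrative metric and value)
print(check_regression("pointing_error_arcsec", 12.3))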
Interface Testing
The objective is to uncover inconsistencies or errors in design of subsystems or
interfaces. This is often done on the mechanical, electrical, and software levels.
Interface testing allows us to identify and resolve these problems before a
subsystem or instrument undergoes final production and verification. It's an
important tool to reduce schedule risk during integration.
Later during integration, we integrate the chains and show the simultaneous
performance of the critical performance parameters. We must document and
archive all test results and test conditions. This process creates a set of historical
system data, which helps troubleshooting. It answers such questions as, "Did we
see this effect before?" and "How and when did the effect occur?"
Table 10-6. Verification and Test Levels Related to the Model Philosophy. Test levels are
indicated by demonstration (D), qualification (Q), and acceptance (A). The relation
shows a typical space scenario, which may be adapted depending on mission
characteristics.
Mock-up X D
Development model X X X D
Integration model X X X D
Structural model X Q
Thermal model X Q
Engineering model X X X Q
Qualification model X X Q
Flight model X X X A
Figure 10-12. Simplified Subset of Functions Required for the Performance Parameter of an
Image Quality System of a Waferstepper. At the left hand side, elementary (local)
functions are shown. As we move to the right, these local functions are combined
into system-level functions. Combining functions continues until the desired integral
system function is achieved (an exposure that qualifies in the semiconductor factory
context).
10.4.2 Reviews
Integration is a continuous process extending through most phases of a space
mission. The systems integration review (SIR) marks the end of the final design
and fabrication phase (Phase C) and its successful completion indicates readiness
to proceed to the system assembly, integration and test, and launch phase (Phase
D) [NASA, 2007]. The objective of the SIR is to ensure that the system is ready to
be integrated. To that end all subsystems and components, integration plans and
procedures, and integration facilities and personnel must be prepared. The
review's success is determined by criteria for accepting the integration plan,
identifying and accepting the risk level, and defining and documenting the work
flow. The process also includes defining consequences and recovery strategies in
the case of a less than fully successful review.
Given the close relation between integration and testing, a test readiness
review (TRR), conducted early in Phase D, follows the SIR. Here, the objective is to
ensure that the article to be tested, the required test facilities, and means for data
handling are ready. A successful TRR provides approved test plans, including the
coordination of required test resources and facilities. It doesn't mark the beginning
of the project test program but provides a formal initiation of tests to be conducted
at a higher integration level.
FIGURE 10-13. Roles and Responsibilities during the Integration Process. These roles and
responsibilities aren't isolated. Communication and coordination among team
members are crucial for successful integration of a complex system.
The project leader takes care of managing resources, schedule, and budget.
Based on inputs from the systems engineer, the project leader claims and acquires
the required resources, and also facilitates the integration process. This
contribution is critical for project timing.
The systems engineer's role is in fact a spectrum of roles comprising
architecture, integration, and systems engineering. Depending on their
capabilities, one or more persons may take on this role—a systems engineer might
be a good architect and a bad integrator or vice versa. This role depends on
content, relating critical system performance parameters to design and test. It
determines the rationale of the integration schedule; the initial integration
schedule is a joint effort of project leader and systems engineer. The integral
perspective of this role results in a natural contribution to the troubleshooting. The
systems architect or engineer is responsible for the integration tests' success, and is
also in charge of evaluating the tests.
The system tester actually performs most of the tests. During the integration
phase, this person spends a lot of time in troubleshooting, often taking care of
trivial problems. More difficult problems should be referred to engineers or the
system integrator. The system tester documents test results in reports for the
systems engineer to review and evaluate.
The system owner is responsible for maintaining a working up-to-date test
model. This is a challenging job, because many engineers are busy making updates
and doing local tests, while system integrators and system testers need
undisturbed access to a stable test model. Explicit ownership of one test model by
one system owner increases the test model stability significantly. Projects without
such a role lose a lot of time to test model configuration problems.
Figure 10-14. Simplified Process Diagram That Shows Processes That are Relevant from a
Configuration Management Perspective. Here we show that the supplier fills
orders from the company that is filling orders from the customer. All orders must
meet specifications. All data and physical entities shown in the diagram are under
configuration management. The company is responsible to the customer for the
supplier’s configuration management activities. (TPD is technical product
documentation.)
of the figure shows the real situation: a number of undocumented relations between
design choices pop up, causing an avalanche of additional affected requirements.
Every additional relationship triggers its own avalanche of affected higher-level
requirements. The counteraction to repair the design triggers a new avalanche in the
top-down direction.
FIGURE 10-15. Impact of Changes on Requirements. The left side shows an ideal case while the
right side exhibits a much more complicated situation as encountered in practice.
Table 10-7. Typical System Integration Problems, Their Occurrence, and How to Avoid Them.
In spite of the project team's best efforts, and adherence to correct processes,
unforeseen difficulties can and do arise.

Problem: System performance is not achieved
  Occurrence: After the system operates
  How to avoid: Review the test conditions; analyze the performance under simplified conditions

Problem: System is not reliable
  Occurrence: After the system operates nominally
  How to avoid: Analyze late design changes; identify conditions not tested before
System cannot be integrated. The failure to integrate the system (or to build it
when software integration is concerned) often arises from the use of implicit
know-how. An example is a relatively addressed data file that resides on the
Summary
Systems integration is critical to the entire creation process. All unknowns and
uncertainties left over from previous activities show up during this phase. In space
systems, the final integration takes place only after the spacecraft is launched, which
increases the importance of all types of tests during integration. We've discussed
how to set up tests and an integration plan, and have elaborated the roles in the
integration team and the complexity of configuration management. Integrators
always have to troubleshoot and solve problems, despite the smooth interplay of
process, people, and plans.
References
Department of Defense (DoD). January 2001. System Engineering Fundamentals. Fort
Belvoir, VA: Defense Acquisition University Press (DAU).
European Cooperation for Space Standardization (ECSS) Secretariat. April 1996. Space
Project Management. ECSS-M-40A. Noordwijk, The Netherlands: ESA Publications
Division.
ECSS Secretariat, ESA-ESTEC Requirements and Standards Division. November 17, 1998.
Space Engineering—Verification. ECSS-E-10-02A. Noordwijk, The Netherlands: ESA
Publications Division.
ECSS. February 15, 2002. Space Engineering—Testing. ECSS-E-10-03A. Noordwijk, The
Netherlands: ESA Publications Division.
ECSS. 2008. System Engineering—Part 1: Requirements and Process. ECSS-E-ST-10 Part 1B.
Noordwijk, The Netherlands: ESA Publications Division.
Forsberg, Kevin and Hal Mooz. September 1992. "The Relationship of Systems Engineering
to the Project Cycle." Engineering Management Journal, Vol. 4, No. 3, pp. 36-43.
Hamann, Robert J. and Wim van Leeuwen. June 1999. "Introducing and Maintaining
Systems Engineering in a Medium Sized Company." Proceedings of the 9th Annual
International Symposium of the International Council on Systems Engineering.
International Council on Systems Engineering (INCOSE). June 2004. INCOSE Systems
Engineering Handbook, Version 2a. INCOSE-TP-2003-016-02.
Meijers, M.A.H.A., Robert J. Hamann, and B.T.C. Zandbergen. 2004. "Applying Systems
Engineering to University Satellite Projects." Proceedings of the 14th Annual International
Symposium of the International Council on Systems Engineering.
National Aeronautics and Space Administration (NASA). June 1995. NASA Systems
Engineering Handbook. NASA SP-610S.
NASA. 2001. Launch Vehicle Design Process Characterization, Technical Integration, and Lessons
Learned. NASA TP-2001-210992. Huntsville, AL: Marshall Space Flight Center.
NASA. March 26, 2007. NASA Systems Engineering Procedural Requirements, NPR 7123.1a.
National Research Council (NRC) Committee on Systems Integration for Project
Constellation. September 21, 2004. Systems Integration for Project Constellation: Letter
Report. Washington, DC: The National Academies Press.
Office of the Under Secretary of Defense (OUSD) for Acquisition, Technology and Logistics.
July 2006. Defense Acquisition Guidebook, Version 1.6. Fort Belvoir: DAU Press.
http://akss.dau.mil/dag.
Pisacane, Vincent L., ed. 2005. Fundamentals of Space Systems, 2nd Edition. Oxford: Oxford
University Press.
Ramesh, Balasubramaniam et al. January 1997. "Requirements traceability: Theory and
practice." Annals of Software Engineering, no. 3 (January): 397-^15. Dordrecht, The
Netherlands: Springer Netherlands.
Chapter 11
Verification and Validation
In this chapter we turn our attention to the evaluation processes of the systems
engineering "engine" as shown in Figure 11-1. In Table 11-1 we show the structure of
our discussion. Inadequately planned or poorly executed verification and validation
(V&V) activities have been at the heart of too many high-profile space failures. Table
11-2 summarizes a few of the major mission failures that more rigorous system V&V
might have prevented. Project verification and validation encompasses a wide
variety of highly interrelated activities aimed at answering several key questions
throughout the mission lifecycle in roughly the following order:
Figure 11-1. We Are Here! Verification and validation activities fit on the right side of the systems
engineering “V.” However, much of the planning begins considerably earlier, on the
left side of the “V."
Table 11 -1. Overview of Verification and Validation Activities. This table summarizes the major
steps and key activities that fall under the broad umbrella of verification and validation.
TABLE 11-2. Major Mission Failures That More Rigorous System V&V Might Have Prevented.

Genesis (2004)
  What happened: G-switch installed backwards; parachute not deployed; hard landing
  Investigation findings: "...tests used a bypass of the G-switch sensors and focused on
  higher-level verification and validation..."; "...inappropriate faith in heritage designs..."

Columbia (2003)
  What happened: Debris damaged thermal tiles; loss of crew and vehicle
  Investigation findings: "...current tools, including the Crater model, are inadequate to
  evaluate Orbiter Thermal Protection System damage..."; "Flight configuration was validated
  using extrapolated test data...rather than direct testing"
Table 11-3 shows the top-level process flow for each of these activities. Because
systems of interest vary from the part level to the complete system of systems level,
or, in the case of software, from the code level up to the installed full system level,
the terms product verification and product validation apply at any level.
Verification and validation occur for products we use every day. From
shampoo to ships, extensive V&V has occurred (we hope) to ensure these products
meet their intended purpose safely. What makes space systems so different? Unlike
shampoo, space systems are usually one-of-a-kind. We don't have the luxury of lot
testing to verify production processes. Unlike ships, space systems don't go on
"shakedown cruises" to look for flaws, then sail back to port to be fixed; we have
only one chance to get it right the first time. So space system verification and
validation must be creative, rigorous, and unfortunately, often expensive.
TABLE 11-3. Top-level Process Flow for Verification, Validation, and Certification Activities.
Many of the sub-process outputs serve as inputs for succeeding sub-processes. For
instance, validated requirements are an output of the requirements validation sub-
process, and serve as an input to the product verification sub-process. (FCA is
functional configuration audit; PCA is physical configuration audit.)
Requirements validation
  Inputs: Unvalidated requirements
  Outputs: Validated requirements

Model validation
  Inputs: Unvalidated mission-critical models; validated model requirements
  Outputs: Validated models; model uncertainty factors; list of model idiosyncrasies

Product verification
  Inputs: Validated requirements; unverified end products; verification plan (including
  incompressible test list); verification enabling products (e.g., validated models)
  Outputs: Verification plan (as implemented); verified end products; verification products
  (data, reports, verification completion notices, work products)

Product validation
  Inputs: Verified end products; customer expectations (e.g., measures of effectiveness and
  other acceptance criteria); operations concept; validation plan; validation enabling products
  Outputs: Validated products; validation products such as data and test reports and work products

Flight certification
  Inputs: Verified and validated products; verification and validation products; real-world
  operational context for end products
  Outputs: Certified product; certification products (e.g., signed DD250—Material Inspection
  and Receiving Report; completed FCA and PCA; mission rules)
Let's start with some basic definitions. NPR 7120.5d [NASA (1), 2007] defines
verification as "proof of compliance with design solution specifications and
descriptive documents." It defines validation as "proof that the product
accomplishes the intended purpose based on stakeholder expectations." In both
cases, we achieve the proof by some combination of test, demonstration, analysis,
inspection, or simulation and modeling. Historically, the space industry has
developed one-of-a-kind products to fulfill a single set of requirements that
encompass the design and manufacturing specifications, as well as the customer
requirements. This heritage has merged verification and validation into one
process in the minds of most practicing aerospace engineers. But products
developed by the space industry are becoming increasingly varied, and we need to
fit new space systems into larger, existing system architectures supporting a
greater diversity of customers. So we treat the two processes as having two distinct
objectives. This chapter describes product verification and validation separately. It
also includes a separate description of certification, and highlights how V&V
supports this important activity. For any mission ultimately to succeed, we carry
out all five of the activities listed in Table 11-3.
Figure 11-2 shows the basic flow of V&V activities. The emphasis of the
different types of validation and verification varies throughout the project lifecycle,
as shown in Figure 11-3. An appreciation for the activities that constitute V&V, and
their shifting emphasis throughout the project, better prepares us to develop good
requirements and models, verified products, and ultimately systems that work the
way the customer intended.
FIGURE 11-2. Verification and Validation Activities in the System Lifecycle. This diagram
depicts the logical sequence of these activities and their interdependencies. Shaded
blocks indicate topics discussed at length in this chapter. Parallelograms indicate
inputs or outputs of activities, which are shown in rectangular boxes.
This chapter examines each of the five activities of project V&V, starting with a
brief review of the importance of having properly validated requirements as the
basis. Next we discuss model validation as a critical but somewhat separate activity
that supports other V&V and operational planning activities. We then address
product verification activities, the most crucial, and typically most expensive, aspect
of all V&V. After this, we examine the issues associated with product validation
Figure 11-3. Nominal Phasing of Various Types of Verification and Validation (V&V) in the
Project Lifecycle. This figure illustrates the relative level of effort expended on each
of the V&V activities throughout the lifecycle. The traditional scope of V&V activities
centers on product verification and validation. But the broader scope includes
requirements and model validation, and flight certification. For completeness, we
include operational certification, also known as on-orbit commissioning.
before turning our attention to the capstone problem of flight certification. Finally,
we review unique aspects of using commercial-off-the-shelf (COTS) or non-
developmental items (NDI) for space applications followed by an examination of
software V&V. The chapter concludes with case studies and lessons learned
followed by a look at the FireSAT example.
This chapter uses the terms "system" and "product" interchangeably. A
product or system V&V program includes flight hardware, flight software, ground
support equipment (GSE), ground facilities, people, processes, and procedures
(i.e., everything we use to conduct the mission). As Figure 11-3 illustrates, these
activities span the entire project lifecycle. We categorize V&V activities as follows:
1. Pre-launch V&V leading to certification of flight readiness—Phases A-D
2. Operations V&V such as on-orbit check-out and commissioning—Phase E
We focus primarily on pre-launch V&V activities, but most of these also apply to
operational and calibration mission phases. Let's review the supporting elements
that make V&V possible. These include:
• Requirements
• Resources, such as:
- Time—put V&V activities in the schedule on day one
- Money
- Facilities—GSE, simulation resources
- People—subject matter and verification experts who have "been there
and done that" to avoid reinventing the wheel and repeating old
mistakes
• Risk management
• Configuration management
• Interface management
• Technical reviews
The first item is requirements. Without good requirements we have nothing to
verify the product against and can't even build the product. Therefore, good, valid
requirements form the basis for all other V&V. The next section addresses this first
of the five activities.
FIGURE 11-4. Requirements Validation Flow Chart. This figure illustrates the feedback between
customer expectations and technical requirements. Requirements validation
determines whether a system built to meet the requirements will perform to the
customers’ expectations. Shaded blocks indicate topics discussed at length in this
chapter. Parallelograms indicate inputs or outputs of activities, which are shown in
rectangular boxes.
Good requirements may have attributes that go beyond these (e.g., "needed" as
described in Chapter 4). However, VALID serves as a good starting point and
provides succinct guidance for anyone charged with reviewing and validating
requirements.
We usually complete formal validation of the preliminary or top-level system
requirements before the system requirements review (SRR) early in Phase A.
Subsequently, we should constantly review all new, derived, or evolving
requirements for validity as the system design matures. Requirements we initially
thought to be achievable and verifiable, for example, may turn out to be neither,
due to unavailability of technology, facilities, time, or money. Therefore, we
should periodically revalidate the set of requirements for each system of interest
as part of each technical baseline.
Table 11-4 shows an example of FireSAT requirements. We start with a
mission-level requirement to maintain 12-hour revisit of the coverage area and see
how this flows down through a series of mission analyses and allocation to
component-level performance specifications. We then judge each requirement
against the VALID criteria with methods and issues presented in the discussion.
Such a matrix is useful to systematically assess each requirement. The earlier we
identify flaws in requirements, the less it costs to fix them.
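A small sketch of such a compliance check in code is shown below, scoring each requirement against the five VALID attributes used in Table 11-4 (Verifiable, Achievable, Logical, Integral, Definitive). The requirement identifier and the verdicts are placeholders, not actual FireSAT entries.

VALID_ATTRIBUTES = ("Verifiable", "Achievable", "Logical", "Integral", "Definitive")

def validate_requirement(req_id: str, assessment: dict) -> list:
    """Return the VALID attributes the requirement fails, so flaws are
    caught while they are still cheap to fix."""
    return [attr for attr in VALID_ATTRIBUTES if not assessment.get(attr, False)]

# Placeholder entry for a component-level requirement
findings = validate_requirement(
    "PROP-023 (thruster Isp >= 210 s)",
    {"Verifiable": True, "Achievable": True, "Logical": True,
     "Integral": True, "Definitive": False},   # e.g., wording judged ambiguous
)
print(findings or "requirement is VALID")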
Chapter 4 addresses the challenge of crafting well-written requirements that
are clear and concise. Here we focus more on their achievability and verifiability.
While precise wording is essential to communicating requirements, it's equally
important that they be the right ones. Thus, to truly validate a requirement, we
must dig into the analyses or other models used to justify it. The next section
examines the issue of model validation in greater detail.
Poorly defined or poorly allocated requirements lead to mission failure. This
occurred on the Mars Polar Lander mission. Premature shutdown of the descent
engines was the probable cause of the failure. The mishap report attributed the root
cause of this failure to poor allocation of system-level requirements to subsystems,
in this case, software-level requirements. Figure 11-5 illustrates the mapping from
system to software requirements. The system requirements specified that sensor
data not be used until the lander reached an altitude of 12 meters. They set further criteria to protect
against premature descent engine thrust termination in the event of failed sensors
and possible transients. However, the requirements did not specifically state failure
modes. So software designers did not include protection against transients, nor think
of testing for such transients. The lack of a complete software specification meant
failure to verify that requirement during unit-level tests. In Section 11.5, we describe
how to use product validation to uncover such problems.
A key part of requirements validation, and product verification and validation,
is conducting and reviewing the results of the analyses we did to develop
requirements. We often think of analyses as models that predict or describe system
behavior. A fundamental challenge in any system development is keeping in synch
all the myriad assumptions used in multiple, parallel analysis exercises.
System complexity often defies the "black-box" model. Real-world systems
exhibit tight coupling and interdependencies between the subsystems and the
TABLE 11-4. Example of FireSAT Requirement Validation Exercise. A compliance matrix such
as this helps us to systematically review the validity of each requirement and includes
the validation thought process for it. Requirements must embody the VALID
characteristics. (USFS is US Forest Service; N/A is not applicable.)
Level: Component

Requirement: The orbital maneuver thruster specific impulse (Isp) shall be at least 210 seconds.
(Rationale: Rocket performance was derived by system trade-off analysis [Sellers, 2005] using the
mission ΔV requirement and the spacecraft dry mass.)

VALID assessment:
  Verifiable—by test. Component-level tests of the propulsion subsystem thruster provide evidence
  of this performance level.
  Achievable—by inspection. Mono-propellant thrusters with this Isp are widely available and
  represent the state of the industry.
  Logical—by engineering judgment. This requirement is reasonable given the goals of the project
  and follows logically from the mission ΔV requirement; it's based on a reasoned trade-off of
  propulsion technology options.
  Integral—N/A. Achieving this Isp is necessary to support the mission ΔV requirement.
  Definitive—by engineering judgment. This requirement is concise and unambiguous.
System requirements:
1) The touchdown sensors shall be sampled at a 100-Hz rate. The sampling process shall be
   initiated prior to lander entry to keep processor demand constant. However, the use of the
   touchdown sensor data shall not begin until 12 meters above the surface.
2) Each of the 3 touchdown sensors shall be tested automatically and independently prior to use
   of the touchdown sensor data in the onboard logic. The test shall consist of two (2) sequential
   sensor readings showing the expected sensor status. If a sensor appears failed, it shall not be
   considered in the descent engine termination decision.
3) Touchdown determination shall be based on two sequential reads of a single sensor indicating
   touchdown.

Flight software requirements:
a. The lander flight software shall cyclically check the state of each of the three touchdown
   sensors (one per leg) at 100 Hz during EDL.
b. The lander flight software shall be able to cyclically check the touchdown event state with or
   without touchdown event generation enabled.
c. Upon enabling touchdown event generation, the lander flight software shall attempt to detect
   failed sensors by marking the sensor as bad when the sensor indicates "touchdown state" on two
   consecutive reads.
d. The lander flight software shall generate the landing event based on two consecutive reads
   indicating touchdown from any one of the "good" touchdown sensors.
FIGURE 11-5. Incorrect Allocation of Requirements for the Mars Polar Lander Mission. Part
of system requirement #1 isn't allocated to any flight software requirement, and so
the final system doesn't incorporate it. (EDL is entry, descent, and landing.) [Feather
et al., 2003]
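To make the allocation gap concrete, here is an illustrative-only sketch, not the actual Mars Polar Lander flight code, of touchdown-detection logic that follows software requirement d yet, because the 12-meter condition of system requirement #1 was never allocated to software, lets a transient latched during entry, descent, and landing (for instance at leg deployment) terminate descent-engine thrust as soon as the altitude gate opens.

# Illustrative-only reconstruction of the logic gap, not flight software.
class TouchdownLogic:
    def __init__(self, num_sensors: int = 3):
        self.consecutive = [0] * num_sensors   # consecutive "touchdown" reads per sensor
        self.touchdown_latched = False

    def sample_100hz(self, sensor_states):
        """Called cyclically during EDL with one boolean per touchdown sensor."""
        for i, tripped in enumerate(sensor_states):
            self.consecutive[i] = self.consecutive[i] + 1 if tripped else 0
            if self.consecutive[i] >= 2:       # two consecutive reads (software req d)
                self.touchdown_latched = True  # latched even while still far above 12 m

    def allow_engine_cutoff(self, altitude_m: float) -> bool:
        # System requirement #1 said touchdown data must not be used above 12 m,
        # but that condition was never allocated to software. A transient latched
        # earlier therefore cuts the engines the moment the 12 m gate opens.
        return altitude_m <= 12.0 and self.touchdown_latched

Had the 12-meter condition been allocated to an explicit software requirement, a unit-level test that injected a deployment transient would likely have exposed the premature latch.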
TABLE 11-5. Types of Models. We employ a wide range of model types in varying fidelities for
systems analyses throughout the system lifecycle.
Physical models: wind tunnel model; mock-up (various degrees of fidelity); engineering model
(partial or complete); hangar queen; testbed; breadboards and brassboards; prototype;
mass/inertial model; scale model of a section; laser lithographic model; structural test model;
thermal model; acoustic model; trainer

Graphical and schematic models: functional flow charts; behavior diagrams; plus-function flow
charts; NxN charts; PERT charts; logic trees; document trees; time lines; waterfall charts;
floor plans; blueprints; schematics; representative drawings; topographical representations;
computer-aided drafting of systems or components

Analytical models: dynamic motion models; structural analysis (finite element or polynomial
fitting); thermal analysis; vibrational analysis; electrical analysis (wave form or connectivity);
finite element modeling; linear programming; cost modeling; network or nodal analysis; decision
analysis; operational or production throughput analysis; flow field studies; work flow analysis;
hydrodynamics studies; control systems modeling

Operations and process models: Monte Carlo; logistical support; process modeling; manufacturing
layout modeling; sequence estimation
should recognize the need for opportunities to cross-validate models between the
different dimensions. In fact, we ought to seek out such opportunities.
Table 11-6 outlines the model validation activities. The starting point for
model validation depends on when in the lifecycle it occurs. In concept it begins
with an understanding of at least preliminary mission, system, or product
requirements, depending on the system of interest. But models of individual
product-level behavior, especially in the space environment, may depend highly
on mission-level definitions of that environment as reflected in mission-level
models. How we develop these models in response to project needs is beyond the
scope of this discussion. Industry-standard references such as Space Mission
Analysis and Design [Larson and Wertz, 1999] or Space Launch and Transportation
Systems [Larson et al., 2005] contain the detailed technical processes for modeling
spacecraft and launch vehicles respectively at the system and subsystem levels.
Depending on the type of model needed, and its required fidelity, modeling can
cost a lot of time and money. The next section, on product verification, describes
the factors driving project model philosophy. The specific requirements for these
models, as well as the timing of their delivery, are a central aspect of technical
planning reflected in Chapter 13. With requirements and unvalidated models in
hand, model validation begins.
Table 11-6. Model Validation Inputs, Outputs, and Associated Activities. Model validation
activities transform unvalidated models into validated models and provide insights
into system interdependencies and behavioral characteristics.
The FireSAT payload shall have an angular resolution of less than 4.0 × 10⁻⁵ rad at 4.3 micrometers.
Cost and physical constraints usually preclude a true flight-like test of such an
instrument. Instead, we verify lower-level requirements by test, using that
information to validate various models and then rely on analysis to combine the
results to verify the higher-level requirement. Figure 11-7 shows this process. We
first measure the instrument's optical prescription, producing a test-validated
model. We do the same for reaction wheel jitter, the spacecraft's structural finite
element model (FEM), and its thermal model. The resulting test-validated models
feed a simulation of the closed-loop pointing control of the instrument. This
simulation, when convolved with the instrument optical prescription, attempts to
predict the ultimate instrument angular resolution in operation.
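A minimal sketch of how the analysis might combine those results, root-sum-squaring independent error contributors to predict the end-to-end angular resolution, is shown below. All contributor values are illustrative, not FireSAT design numbers.

import math

def predicted_resolution(optical_blur_rad, jitter_rad, thermal_defocus_rad=0.0):
    """Combine independent error contributors by root-sum-square to
    estimate the end-to-end instrument angular resolution."""
    return math.sqrt(optical_blur_rad**2 + jitter_rad**2 + thermal_defocus_rad**2)

# Illustrative contributor values from test-validated models
resolution = predicted_resolution(
    optical_blur_rad=3.2e-5,     # measured optical prescription
    jitter_rad=1.8e-5,           # closed-loop pointing simulation
    thermal_defocus_rad=0.9e-5,  # thermal/structural (FEM) analysis
)
print(f"Predicted resolution: {resolution:.2e} rad (requirement: < 4.0e-5 rad)")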
Due to cost and physical limitations, neither the model used in the pointing
control simulation, nor the final model of instrument response have been validated
by test (short of hanging the integrated spacecraft and instrument in a free-fall
configuration in a vacuum chamber, we don't do true flight-like closed loop imaging
tests on the ground). Instead, we accept the risk of using other methods to validate
those models (e.g., peer-review of the physics and computer code).
The verification storyboard is useful in two respects: 1) It highlights where we
don't have test-validated models, and 2) It highlights where we have "test as you fly"
exceptions. In both cases, this information guides risk assessment, and where
warranted, mitigation (such as additional independent peer-review and cross-checks).
The results of this effort also help define, refine, and validate the requirements levied
on these mission-critical model functions, capabilities, and accuracies.
FIGURE 11-7. Verification Storyboard (FireSAT Example). A verification storyboard such as this
one helps to define a combined sequence of tests and analyses to verify payload
performance requirements. Here a “check” in the box implies that a model has been
or will be test validated. An “A” indicates the model will be validated by analysis only.
We now turn to developing the model validation plan. Most projects include it as
part of the overall system verification or validation activities. But it's useful to at least
conceptually separate this activity given the criticality and long shelf-life of some
models (the Space Shuttle program relies on models for operational planning at the
end of the program that were initially created a generation earlier during system
development). Table 11-7 gives an example of the information in the model validation
matrix. We must also show the specific requirements for each model. The information
generated by test, inspection, or separate analysis is then compared against each
requirement for each model. In some cases, we use models to validate other models.
It's crucial to get at the underlying assumptions behind each model, and to perform
a sanity check against reality before using the results generated by the model.
TABLE 11-7. Examples of FireSAT Model Validation Planning. During FireSAT system design
and development, we need to validate different models using different approaches at
different points in the system lifecycle.
noise. In early project phases, we sometimes compare models with real systems by
modifying the model's capabilities to describe a similar existing system and
analyzing the differences, making adjustments to the model as necessary.
We manage project costs by limiting formal validation efforts to those aspects of
models deemed mission-critical. The systems engineer must decide what level of
credibility is sufficient to meet the needs, and balance that against model cost and
utility requirements. We may decide that we're better off investing project resources
in more comprehensive testing than in spending more to obtain an incremental
improvement in the credibility and utility of a particular model. The space industry
unfortunately has numerous examples of overdependence on what turned out to be
insufficiently accurate models. The failure during the maiden flight of the Ariane V
launch vehicle, for example, illustrates a case of guidance, navigation, and control
models not fully accounting for real-world hardware-in-the-loop issues. When the
navigation code originally developed for the much different Ariane IV rocket was
used on the Ariane V, the guidance system failed, resulting in a total loss of the
vehicle less than 40 seconds after launch. [Ariane 501 Inquiry Board, 1996]
Comparing a model's predictions against test data or the real system has three possible outcomes:
• It may reveal that the model behaves as expected, with negligible errors
• It may reveal that the model warrants modification to more accurately
reflect the real system (followed by revalidation to confirm the correction)
• It may reveal differences between the model and real system that can't be
corrected but are important to quantify (knowledge versus control)
We must understand and document these differences as part of an archive of
model uncertainty factors (MUFs). We capture these MUFs in a model validation
matrix and in summaries of the individual analyses. Model uncertainty factors are
effectively the error bars around the model results and help guide the need for
design or other margin. But documentation activities must go beyond capturing
MUFs. We must also manage the model configuration (as with any other mission-
critical software) to ensure that the model's future users understand the evolution
of changes and idiosyncrasies exposed by the V&V program.
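A hedged sketch of deriving a model uncertainty factor from test-versus-prediction comparisons, and then applying it as an error bar on a new prediction, is shown below; the data values are invented for illustration.

def model_uncertainty_factor(measured, predicted):
    """Worst-case ratio of measured to predicted values across the
    validation cases; a value greater than 1 means the model under-predicts."""
    return max(m / p for m, p in zip(measured, predicted))

# Invented validation data: predicted versus measured temperatures (K)
predicted = [291.0, 305.0, 318.0, 330.0]
measured = [293.5, 309.0, 322.0, 331.5]

muf = model_uncertainty_factor(measured, predicted)
margined_prediction = 335.0 * muf   # apply the MUF as margin on a new prediction
print(f"MUF = {muf:.3f}, margined prediction = {margined_prediction:.1f} K")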
Consider a model that predicts communication links between Earth and a deep
space mission to Pluto, for example. Such a model would be essential early in the
project to guide system design and analysis efforts. We would need it again 12 to 15
years later for operational planning after the spacecraft reaches its destination on the
edge of the solar system. By that time, it's unlikely that any of the original RF
engineers who developed the model would still be readily available. To make
matters worse, software and hardware platforms will have gone through many
generations, making it more difficult to maintain legacy software models. All this
underscores the critical need to first validate and then meticulously document the
results to guide mission design, analysis, and operation throughout the lifecycle.
The processes we use to plan for and execute both types of product verification
are similar, so we focus largely on the second type, formal product verification.
Figure 11-8 gives a big-picture look at the overall flow. Table 11-8 lists the top-level
activities that constitute product verification. These activities fall into two broad
categories, preparing for product verification and implementing product verification. We
start with validated technical requirements, along with enabling products such as
validated models, test equipment, personnel, and other resources (time and money
being the most important). From there, we develop verification requirements based
on the project verification philosophy. We then use the verification requirements to
guide definition of more detailed test and verification requirements (TVRs). These,
together with detailed plans and procedures as well as facilities, equipment, and
trained personnel, come together at verification events (tests, demonstrations,
analyses, inspections, or modeling and simulation, or some combination of these)
along with the end product to be verified.
Verification delivers a verified end product, one that has been shown
objectively through one or more methods to meet the defined requirements. This
result is supported by an as-executed verification plan and a variety of
documentation products including verification completion notices, test reports,
and other critical evidence.
Figure 11-9 illustrates how we implement verification activities throughout the
project lifecycle. At every phase, the project manager (PM) and systems engineer
FIGURE 11-8. Product Verification Process Flow Chart. This flow chart illustrates how technical
requirements are ultimately verified and closed out.
TABLE 11-8. Product Verification Activities. The verification activities provide objective evidence
that the end product meets the system’s requirements.
(SE) must see that product verification activities are performed in accordance with
the verification plan and defined procedures. These procedures collect data on each
requirement, giving special attention to measures of effectiveness (MOEs).
All military leaders learn that "no plan survives contact with the enemy."
Problems are inevitable when product verification moves into high gear. Together
with the stakeholders, the PM and SE must repeat, as necessary, any steps that
weren't compliant with the planned product verification procedures or the planned
environment, including cases of equipment, measurement, or data-capture failures.
Throughout the assembly, integration, and testing phase, we perform periodic
system integration reviews (SIRs) at each major step as identified by the integrated
verification fishbone, described later. The SIR is a formal review to certify that flight
elements and systems are ready to be integrated, including confirmation that
facilities, support personnel, plans, and procedures are ready to support integration
(see Chapter 10). The SIR focuses on the integration of flight elements and systems
following acceptance by the customer. Most missions have several SIRs.
Figure 11-9. Verification and Validation in the Project Lifecycle. This figure illustrates how
verification activities fit within the project lifecycle, beginning immediately after the
system requirements review (SRR). (VRM is verification requirements matrix; VRSD
is verification requirements and specifications document; PDR is preliminary design
review; CDR is critical design review; SAR is system acceptance review; FRR is
flight readiness review; ORR is operational readiness review; TRR is test readiness
review.) (Adapted from NASA (4), [2007].)
TABLE 11-9. Inputs, Outputs, and Activities Supporting Verification Preparation. Early
verification planning is important to assure readiness of test articles, facilities, and other
long-lead items needed for system verification. (GSE is ground support equipment.)
Verification Preparation
Inputs Activities Outputs
The inputs for verification preparation are as described above. The intermediate
outputs all aim at defining what methods and events we need to fully verify each
technical requirement along with detailed plans, procedures, and supporting
requirements. The following sections step through each of the verification preparation
activities.
The project's verification philosophy defines:
• The number and types of models developed for verification and flight
• The number and types of tests, and the appropriate levels
• The project phases and events during which we implement verification
The program verification philosophy is driven by risk tolerance. Less
verification implies (but doesn't necessarily create) more risk. More verification
implies (but doesn't guarantee) less risk. Chapter 8 addresses risk management in
the broad sense. Here we're concerned with risk management in developing the
verification program. In deciding how this program will look, risk managers and
verification engineers must account for:
• System or item type
- Manned versus unmanned
- Experiment versus commercial payload
- Hardware versus software
- Part, component, subsystem, complete vehicle, or system of systems
• Number of systems to be verified
- One of a kind
- Multiple, single-use
- Multiple, multiple re-use
• Re-use of hardware or software
• Use of commercial-off-the-shelf (COTS) or non-developmental items (NDI)
• Cost and schedule constraints
• Available facilities
• Acquisition strategy
"When we consider all these factors, two broad types of verification programs
emerge, traditional and proto-flight, with lots of variations on these basic themes. Ihe
basic difference between them is that the traditional approach makes a clear
distinction between verification activities for qualification and verification for
acceptance. The proto-flight approach normally does not. This difference is reflected
in the number and types of models developed. Before we implement either
approach, we often conduct some level of developmental testing. Developmental
testing is essential for early risk reduction and proof-of-concept. However, because
it's done under laboratory conditions by developers absent rigorous quality
assurance, it normally doesn’t play a role in final close-out of requirements.
In the traditional approach, qualification activities verify the soundness of the
design. Because requirements focus mostly on the design, evidence collected during
a qualification program is usually defined to be the formal close-out of the
requirements, documented in something like a verification completion notice
(VCN). Verification requirements for qualification are associated with the design-to
performance or design-to development specifications. When verified, qualification
requirements establish confidence that the design solution satisfies the functional or
performance specifications (compliance with the functional baseline). We typically
complete flight qualification activities only once or when modifications are made
that may invalidate or not be covered within the original verification scope.
Qualification activities are conducted using dedicated flight-like qualification
hardware with basic flight software as needed to conduct the verification. The
hardware subjected to qualification should be produced from the same drawings,
using the same materials, tooling, manufacturing process, and level of personnel
competency as flight hardware. For existing COTS or NDIs, the qualification item
would ideally be randomly selected from a group of production items (lot testing).
A vehicle or subsystem qualification test article should be fabricated using
qualification units if possible. Modifications are permitted if required to
accommodate benign changes necessary to conduct the test. For example, we
might add instrumentation or access ports to record functional parameters, test
control limits, or design parameters for engineering evaluation.
For qualification, we set test levels with ample margin above expected flight
levels. These conditions include the flight environment and also a maximum time
or number of cycles that certain components are allowed to accumulate in
acceptance testing and retesting. However, qualification activities should not
create conditions that exceed applicable design safety margins or cause unrealistic
modes of failure. For example, at the vehicle level, qualification test levels for a
random vibration test are typically set at four times (+6 dB) the anticipated flight
levels induced by the launch vehicle for two full minutes.
In the traditional approach, we follow a formal qualification program with
separate acceptance of flight hardware. Acceptance verification ensures
conformance to specification requirements and provides quality-control assurance
against workmanship or material deficiencies. Acceptance verifies workmanship,
not design. Acceptance requirements are associated with build-to or product
specifications and readiness for delivery to and acceptance by the customer. They
often include a subset and less extreme set of test requirements derived from the
qualification requirements. When verified, acceptance requirements confirm that
the hardware and software were manufactured in accordance with build-to
requirements and are free of workmanship defects (compliance with certified
baseline). We normally perform acceptance verification for each deliverable end
item. "Acceptance" encompasses delivery from a contractor or vendor to the
customer as well as delivery from a government supplier (e.g., NASA or DoD) to
the project as government furnished equipment.
Acceptance activities are intended to stress items sufficiently to precipitate
incipient failures due to latent defects in parts, materials, and workmanship. They
should not create conditions that exceed appropriate design safety margins or cause
unrealistic modes of failure. Environmental test conditions stress the acceptance
hardware only to the levels expected during flight, with no additional margin.
Continuing the example above, at the vehicle level, flight article acceptance test
levels for a random vibration test typically are set at the anticipated flight level (+0
dB) induced by the launch vehicle for only one minute.
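As a numerical illustration of these margins, the sketch below converts the quoted dB values into multipliers on a random-vibration acceleration spectral density; the flight-level PSD is an assumed example value.

def apply_margin_db(flight_psd_g2_hz: float, margin_db: float) -> float:
    """Scale a random-vibration power spectral density by a dB margin."""
    return flight_psd_g2_hz * 10.0 ** (margin_db / 10.0)

flight_psd = 0.02   # g^2/Hz at some frequency, illustrative value only

qual_psd = apply_margin_db(flight_psd, 6.0)        # qualification: +6 dB, two minutes
acceptance_psd = apply_margin_db(flight_psd, 0.0)  # acceptance: +0 dB, one minute

print(f"Qualification PSD = {qual_psd:.3f} g^2/Hz "
      f"({qual_psd / flight_psd:.1f}x flight level)")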
activities and get at the heart of the mission. For example, FireSAT has an
operational requirement for on-orbit sensor calibration before moving to normal
operations. These calibration activities verify that the sensor survived launch and
is operating well enough in the space environment to fulfill the mission. When
verified, operational requirements confirm readiness to proceed to the next phase
of the flight (i.e., normal operations).
We now envision the three-dimensional problem posed by developing a
robust product verification program as illustrated in Figure 11-10. The depth of the
program encompasses the top-to-bottom system breakdown structure. The length
of the program includes the verification activities during different program phases
to build a "portfolio" of verification evidence throughout the product lifecycle.
The width encompasses the various methods of verification at our disposal, with
different methods being more or less appropriate for different parts of the system
at different points in the lifecycle.
requirement. But several practical reasons often preclude this. Usually the pressing
schedule leads planners to defer the detailed consideration of verification.
Furthermore, the person or organization writing the design or performance
requirement may not be an expert in the best verification methods to employ.
Persons representing both sets of expertise—design and verification—should write
the VRs as early as practical in the program.
Let's examine verification requirements in more detail. As Chapter 4
describes, writing good requirements is challenging enough. Writing good VRs
can be even more taxing. They define the method and establish the criteria for
providing evidence that the system complies with the requirements imposed on it.
One technique for writing a verification requirement is to break it into three parts:
1. The method of verification (inspection, analysis, simulation and modeling,
test, or demonstration)
2. A description of the verification work
• For simulation and modeling, the item to be modeled and which
characteristics to simulate
• For an inspection, the item to be reviewed and for what quality
• For an analysis, the source of the required data and description of the
analysis
• For a demonstration, the action to demonstrate
• For a test, the top level test requirements
3. The success criteria that determine when the verification is complete
Writing a verification requirement starts by reviewing the originating
requirement and stating concisely what we're verifying. If we can't, the original
requirement may not be valid and may need to be renegotiated or rewritten! At
this point we mustn't try to converge too quickly on the verification methods.
Simply writing the description and success criteria helps define the most
reasonable methods.
Next, we describe what work or process (at a high level) we need to perform
the verification activity. We state a single top-level procedure for each activity,
though we choose different procedures for different activities. Some example
procedures are:
• Be performed in accordance with x (standard, another requirement,
constraint)
• Include x, y, and z (given partial objectives or sub-methods)
• Using xyz (a specific tool, mode or data, or hardware or software)
The success criteria should define what the customer will accept when we
submit the product with the formal verification completion notice. In the rationale,
we have to capture any additional thoughts about the verification activity and why
we selected certain methods. Then we go back to the first sentence and determine
the methods to use at this verification level. We don't have to use every method for
every VR. We'll probably use additional methods as we decompose the
requirement to lower levels.
Figure 11-11 provides a simple template for writing a verification requirement.
After we write and document them, we summarize the VRs in a verification
requirements matrix (VRM). This allows us to see the source requirement side-by-
side with the verification requirement. It may also include verification levels and
sequencing.
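As an illustration only (the field names below are hypothetical, not taken from any FireSAT documentation), the three-part structure of a verification requirement and its place in a VRM map naturally onto a simple record:

from dataclasses import dataclass

@dataclass
class VerificationRequirement:
    # One row of a verification requirements matrix (VRM); the field names are
    # hypothetical, chosen to mirror the three-part structure described above
    # plus the source requirement and the level at which verification occurs.
    source_requirement: str   # originating design or performance requirement
    method: str               # inspection, analysis, simulation and modeling, test, or demonstration
    description: str          # what verification work will be performed
    success_criteria: str     # when the verification is considered complete
    level: str                # part, assembly, subsystem, vehicle, ...

vrm = [
    VerificationRequirement(
        source_requirement="Space vehicle first-mode natural frequency shall be greater than 20 Hz",
        method="Analysis and test",
        description="Finite element modal analysis plus a sine-sweep modal survey on a vibration table",
        success_criteria="Estimated and measured first mode greater than 25 Hz",
        level="Vehicle",
    ),
]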
FIGURE 11-11. Writing Verification Requirements. A good verification requirement defines the
requirement to be verified and includes the verification method and the success
criteria for determining when verification is complete.
Verification requirements should not be blindly "thrown over the wall" from
designers to verification engineers. Rather, they should reflect a meeting of the
minds between the two legs of the SE "V." And we must be careful not to imbed
implied design requirements or contractor statement of work direction in the
verification requirements. An example of a disconnect between design and
verification is, "The test shall connect diagnostic equipment to the test port..."
when no test port is specified in the design requirements. An example of a
disconnect between engineering and contracting is, "...verified using a
qualification model subjected to flight random vibration levels plus 6 dB..." when
the SOW places no contractual requirement on the vendor to produce a
qualification model. To avoid this tendency, some organizations use more generic
verification requirements that simply state the method to use and rely on the VRM
for the acceptance criteria. Table 11-10 shows a partial VRM from the FireSAT
mission. Verification matrices are useful for summarizing source requirements,
verification requirements, and other information in a compact form.
Table 11-10. Excerpts from the FireSAT Verification Requirements Matrix (VRM). A VRM
displays the originating requirement, the verification requirement (“what,” “when,” and
“how well”), and the level at which verification will occur.

Originating requirement: The space vehicle first-mode natural frequency shall be greater than 20 Hz.
Verification requirement: The space vehicle first-mode natural frequency shall be verified by analysis
and test. The analysis shall develop a multi-node finite element model to estimate natural modes. The
test shall conduct a modal survey (sine sweep) of the vehicle using a vibration table. The analysis and
test shall be considered successful if the estimated and measured first mode is greater than 25 Hz.
Method: Analysis and test. Level: Vehicle.

Originating requirement: Battery charging GSE shall display current state of charge.
Verification requirement: Battery charge GSE state of charge display shall be verified by
demonstration. The demonstration shall show that state of charge is indicated when connected to a
representative load. The demonstration shall be considered successful if state of charge is displayed.
Method: Demonstration. Level: System.

Originating requirement: Mechanical interface between structural assemblies shall use 4-40 stainless
steel fasteners.
Verification requirement: Fastener type shall be verified by inspection. The inspection shall review the
vendor’s records to look for the type and size of fasteners used. The inspection shall also review the
documentation on fastener material. The verification shall be considered successful if all interface
fasteners are 4-40 in size and made from stainless steel.
Method: Inspection. Level: Part.
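For a rough sense of the analysis called out in the first row, a single degree-of-freedom approximation gives the familiar estimate f = sqrt(k/m)/(2*pi); the stiffness and mass below are hypothetical, and a real verification would use the multi-node finite element model named in the requirement.

import math

def first_mode_hz(stiffness_n_per_m: float, mass_kg: float) -> float:
    # Single degree-of-freedom estimate, f = sqrt(k/m) / (2*pi), a crude
    # stand-in for the multi-node finite element model named in the VRM row.
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * math.pi)

# Hypothetical spacecraft-to-adapter stiffness and wet mass
print(first_mode_hz(stiffness_n_per_m=5.0e6, mass_kg=150.0))  # about 29 Hz, above the 25 Hz criterion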
Table 11-11. Verification Methods. The description of each method includes typical applications.

Analysis (including verification by similarity)
Description: The techniques may include engineering analysis, statistics and qualitative analysis,
computer and hardware simulations, and analog modeling. Analysis is appropriate when: (1) rigorous
and accurate analysis is possible, (2) testing is not cost effective, and (3) verification by inspection is
not adequate. Verification by similarity analyzes the system or item requirements for hardware
configuration and application to determine if it is similar in design, manufacturing process, and quality
control to one that has previously been qualified to equivalent or more stringent requirements. We
must avoid duplication of previous tests from this or similar programs. If the previous application is
similar, but not equal or greater in severity, additional qualification tests concentrate on the areas of
new or increased requirements.
Typical language indicating verification by analysis: “... shall be designed to...”, “... shall be developed
to...”, “... shall have a probability of...”

Demonstration
Description: Demonstration determines conformance to system or item requirements through the
operation, adjustment, or reconfiguration of a test article. We use demonstration whenever we have
designed functions under specific scenarios for observing such characteristics as human engineering
features and services, access features, and transportability. The test article may be instrumented and
quantitative limits or performance monitored, but check sheets rather than actual performance data
are recorded. Demonstration is normally part of a test procedure.
Typical language indicating verification by demonstration: “... shall be accessible...”, “... shall take less
than one hour...”, “... shall provide the following displays in the X mode of operation...”

Test
Description: Testing determines conformance to system or item requirements through technical
means such as special equipment, instrumentation, simulation techniques, and the application of
established principles and procedures for evaluating components, subsystems, and systems. Testing
is the preferred method of requirement verification, and we use it when: (1) analytical techniques do
not produce adequate results, (2) failure modes exist that could compromise personal safety,
adversely affect flight systems or payload operation, or result in a loss of mission objectives, or (3) for
any components directly associated with critical system interfaces. The analysis of data derived from
tests is an integral part of the test program and should not be confused with analysis as defined
above. Tests determine quantitative compliance to requirements and produce quantitative results.
Typical language indicating verification by test: “... shall be less than 2.5 mW at DC and less than 100
mW at 1 MHz...”, “... shall remove 98% of the particulates larger than 3 ...”, “... shall not be
permanently deformed more than 0.2% at proof pressure...”
FIGURE 11-12. Generalized Verification Method Selection Process. We determine the preferred
verification method on the basis of data availability—can we acquire the data
visually, by operating the system, by testing the system, or by analysis of system
models?
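Read as pseudocode, the selection logic summarized in Figure 11-12 amounts to a short decision cascade; the predicates and their ordering below are a hedged paraphrase of the figure caption, not a normative algorithm.

def preferred_method(visually_observable: bool,
                     observable_by_operating: bool,
                     measurable_by_test: bool) -> str:
    # Rough paraphrase of the data-availability logic in Figure 11-12: can we
    # get the evidence visually, by operating the system, by testing it, or
    # only by analysis of system models?
    if visually_observable:
        return "inspection"
    if observable_by_operating:
        return "demonstration"
    if measurable_by_test:
        return "test"
    return "analysis (including simulation and modeling)"

print(preferred_method(False, False, True))  # -> "test"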
severe mechanical stresses a spacecraft will ever see. However, about 25% of all
recorded spacecraft anomalies are traceable to interactions with the space
environment. Famous space environment problems include:
• Boeing 702 satellite: contamination of solar array concentrators due to outgassing
• Stardust comet flyby: spacecraft went silent for four days after solar
radiation burst caused the spacecraft to go into standby mode
Table 11-12 summarizes the requirements imposed by each of these along with
typical verification methods and techniques they drive.
TABLE 11-12. Summary of Launch Environment Issues, Design Requirements, and Verification
Methods. The payload-to-launch-vehicle interface requirements and verification
requirements are in an interface control document (ICD) as described in Chapter 15.
Interface verification assures that the spacecraft and launch vehicle work together.
(CAD is computer-aided design; EMC is electromagnetic compatibility; EMI is
electromagnetic interference; RF is radio frequency; N2 is nitrogen.)

Mechanical and electrical interfaces
Design requirements imposed: The interface between the spacecraft and launch vehicle is typically
through a separation system such as a Marmon clamp or light band. Design requirements arising from
the launch vehicle interface include:
• Mechanical interface (bolt hole pattern)
• Static and dynamic envelopes
• Center-of-gravity (CG) and moment of inertia (MOI) ranges
• Electrical interface (connector types, number and assignment of pins, power conditioning,
telecommanding, grounding)
• Deployment ΔV and tip-off torques (spring stiffness and misalignment)
• Accessibility
Verification methods:
• Analysis (clearance, tip-off torques)
• Inspection (of design using CAD or paper mechanical drawings)
• Demonstration (fit check)
• Testing (separation or deployment test, end-to-end electrical tests, CG and MOI spin table test)

Thermal and other environments
Design requirements imposed:
• Temperature (under fairing, pre-launch, and in-flight)
• Aero-thermal flux at fairing separation
• Humidity
• Venting rates
Verification methods:
• Analysis (venting, thermal models)
• Inspection (interface drawings, part of launch vehicle interface deliverables)

Radio frequency and EM environment
Design requirements imposed:
• Electromagnetic (EM) environment (range-to-spacecraft, launch vehicle-to-spacecraft)
Verification methods:
• Analysis (RF flux)
• Inspection (RF inhibit chain)
• Testing (EMC/EMI)

Contamination and cleanliness
Design requirements imposed:
• Cleanliness class (e.g., Class 10,000)
• Need for or availability of N2 purge
Verification methods:
• Inspection (of spacecraft storage requirements)
TABLE 11-13. Space Environment Issues and Associated Design Requirements and
Verification Methods. Different aspects of the space environment place various
constraints on the system design in the form of requirements that we must verify. (UV
is ultraviolet.)
Figure 11-13 gives an example of the coupled planning that must go into
decisions on the number and types of units to be developed and the testing and
levels on each for the FireSAT spacecraft.
Engineering development units are low fidelity and support system design.
Qualification units are high-fidelity models or prototypes used for system
verification. Flight units are subjected to acceptance tests to verify workmanship and
demonstrate fitness for mission operations. Few systems in the real world rely
purely on qualification or proto-flight models. Most systems are hybrid
combinations of some components developed using dedicated qualification
hardware and others relying more on proto-flight hardware.
During and between each of the above events, we also do hardware
inspections and functional tests. These routine check-outs verify that nothing has
been broken during a verification event or while moving or configuring an item for
an event. Table 11-14 summarizes the major environmental tests, their purpose,
needed equipment or facilities, and the basic processes to use.
Figure 11-14 shows a hypothetical environmental test campaign sequence for
the FireSAT spacecraft at the vehicle level (assuming all events occur at the same
test facility). Each event uses a specific system or item configuration (hardware or
software) being verified along with some support equipment and implements
detailed procedures designed to perform the event precisely. The result of each
event is captured in a verification report and other documentation that serves as a
formal closeout (e.g., a verification completion notice) of that requirement.
Maximizing the number of TVRs for each event is more efficient, so careful
planning is important. One useful planning tool is to prepare a logical flow of all the
activities to better group them into events. The integrated verification fishbone (IVF)
defines and documents the integration-related TVRs (e.g., project-to-project,
project-to-program) associated with the different stages of mission integration. It
uses a "top down" and "leftward" approach. The IVF is similar to a PERT diagram
in that it illustrates logical relationships between integration TVRs. It may also
include additional information not usually included on a PERT diagram, such as
program milestones, facility locations, etc. IVFs help planners focus on the end item
and encompass all activities associated with final mission integration, such as:
The IVF also does not illustrate timing per se; rather it shows the logical order
of things. All qualification and acceptance activities must be complete before an
end item is fully verified. Figure 11-15 illustrates an example IVF for FireSAT, with
emphasis on the logical flow of events for the propulsion module.
Figure 11-13. Example Development and Test Campaign Planning Diagram for FireSAT. A diagram such as this describes the evolution
and pedigree of the myriad models developed throughout a project. (EMC/EMI is electromagnetic compatibility/electromagnetic
interference; EDU is engineering development units.)
TABLE 11-14. Summary of Major Space Environmental Tests. We do this testing at the system
component or box level, and on higher-level assemblies within facility capabilities.
Environmental verification at the system level is usually done by analysis on the basis
of component or box testing results. (EM is electromagnetic; EMC/EMI is
electromagnetic compatibility/electromagnetic interference.)

Vibration and shock testing
Purpose:
• Make sure that product will survive launch
• Comply with launch authority’s requirements
• Validate structural models
Equipment or facilities required:
• Vibration table and fixture enabling 3-axis testing
• Acoustic chamber
Process:
• Do low-level vibration survey (modal survey) to determine vibration modes and establish baseline
• Do high-level random vibration following profile provided by launch vehicle to prescribed levels
(qualification or acceptance)
• Repeat low-level survey to look for changes
• Compare results to model

Thermal and vacuum (TVAC) testing
Purpose:
• Induce and measure outgassing to ensure compliance with mission requirements
• Be sure that product performs in a vacuum under extreme flight temperatures
• Validate thermal models
Equipment or facilities required:
• Thermal or thermal/vacuum chamber
• Equipment to detect outgassing (e.g., cold-finger or gas analyzer)
• Instrumentation to measure temperatures at key points on product (e.g., batteries)
Process:
• Operate and characterize performance at room temperature and pressure
• Operate in thermal or thermal vacuum chamber during hot- and cold-soak conditions
• Oscillate between hot and cold conditions and monitor performance
• Compare results to model

Electromagnetic compatibility/electromagnetic interference (EMC/EMI) testing
Purpose:
• Make certain that product doesn’t generate EM energy that may interfere with other spacecraft
components or with launch vehicle or range safety signals
• Verify that the product is not susceptible to the range or launch EM environment
Equipment or facilities required:
• Radiated test: sensitive receiver, anechoic chamber, antenna with known gain
• Conduction susceptibility: matched “box”
Process:
• Detect emitted signals, especially at the harmonics of the clock frequencies
• Check for normal operation while injecting signals or power losses
Next, we develop detailed procedures for each event, along with a list of
needed equipment, personnel, and facilities. The test procedure for each item
should include, as a minimum, descriptions of the following (a minimal bookkeeping sketch appears after the list):
• Criteria, objectives, assumptions, and constraints
• Test setup—equipment list, set-up instructions
• Initialization requirements
• Input data
• Test instrumentation
• Test input or environment levels
• Expected intermediate test results
• Requirements for recording output data
• Expected output data
• Minimum requirements for valid data to consider the test successful
• Pass-fail criteria for evaluating results
• Safety considerations and hazardous conditions
• Step-by-step detailed sequence of instructions for the test operators

FIGURE 11-14. FireSAT Flight-Model Environmental Test Campaign Sequence. This flow chart shows the notional sequence for major events
in the FireSAT environmental test campaign. (EMC is electromagnetic compatibility; EMI is electromagnetic interference; TVAC is
thermal vacuum.)

Figure 11-15. Example Integrated Verification Fishbone (IVF) for FireSAT. This diagram
illustrates the use of an IVF for planning the sequence of assembly, integration, and
verification activities for the FireSAT propulsion module. (He is helium.)
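Returning to the test-procedure checklist above, the sketch below shows one hypothetical way the "minimum requirements for valid data" and "pass-fail criteria" items might be reduced to bookkeeping for a single recorded output channel; the limits and samples are invented values.

def evaluate_channel(samples: list[float],
                     lower_limit: float,
                     upper_limit: float,
                     min_valid_samples: int) -> str:
    # Hypothetical pass-fail bookkeeping for one recorded output channel:
    # enough valid data must exist, and every sample must stay within the
    # pass-fail limits defined in the test procedure.
    if len(samples) < min_valid_samples:
        return "invalid (insufficient data to judge the test)"
    within = all(lower_limit <= s <= upper_limit for s in samples)
    return "pass" if within else "fail"

# Hypothetical transmitter output-power samples (W) against procedure limits
print(evaluate_channel([4.9, 5.1, 5.0], lower_limit=4.5, upper_limit=5.5, min_valid_samples=3))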
We then organize the TVRs into individual events that tie back to the project's
integrated master schedule (IMS). This tells what to verify, how to verify it, and
when. The next logical question is how much. Knowing how much testing is "good
enough" is a major part of the planning process. The best answer is, "It depends/'
For example, the test levels and durations for a random vibration test depend on:
Table 11-15. Representative Environmental Test Levels and Durations at the Vehicle Level.
Depending on the model, industry standards recommend more or less severe test
levels. These levels are representative only. For actual test levels, each project
should refer to its own governing standards. (Adapted from NASA/GSFC [1996],
ECSS [1998], and DoD [1998].) (MEOP is maximum expected operating pressure.)
• Structural loads: Qualification, 1.25 x limit load; Proto-flight, 1.25 x limit load; Acceptance, 1.0 x limit load
• Acoustic level (duration): Qualification, limit level + 6 dB (2 minutes); Proto-flight, limit level + 3 dB (1 minute); Acceptance, limit level (1 minute)
• Random vibration level (duration), each axis: Qualification, limit level + 6 dB (2 minutes); Proto-flight, limit level + 3 dB (1 minute); Acceptance, limit level (1 minute)
• Mechanical shock: Qualification, 2 actuations at 1.4 x limit level, 2x each axis; Proto-flight, 2 actuations at 1.4 x limit level, 1x each axis; Acceptance, 1 actuation at 1.0 x limit level, 1x each axis
• Combined thermal vacuum and thermal cycle: Qualification, 10° C beyond acceptance temperature extremes for a total of 8 thermal cycles; Proto-flight, 10° C beyond acceptance temperature extremes for a total of 4 thermal cycles; Acceptance, 0° C beyond acceptance temperature extremes for a total of 4 thermal vacuum cycles
TABLE 11-16. Master Verification Plan (MVP) Outline. The MVP describes the roles and
responsibilities of project team members in the project verification activities and the
overall verification approach.
• Launch slips—an inevitable part of any mission planning. These slips lead
to widely known, but not officially recognized, changes in required
delivery dates that involve last-minute schedule perturbations.
• Uncontrolled processes—such as set-up in an unknown test facility, or
getting data acquisition systems to talk to each other.
• Hazardous operations—such as propellant and pyrotechnic loading or moving
large, expensive equipment. Schedule doesn't take priority over safety.
FIGURE 11-16. Top-level FireSAT Assembly, Integration, and Test (AIT) Schedule. This Gantt chart illustrates a hypothetical schedule for the
FireSAT AIT phase. We budget about three months for system integration and baseline testing, followed by about four months of
environmental testing. The environmental test sequence follows the flow chart shown in Figure 11-14. (EMC is electromagnetic
compatibility; EMI is electromagnetic interference.)
Table 11-17. Inputs, Outputs, and Activities Associated with Implementing Verification. We
employ the verification methods (analysis, inspection, demonstration, simulation and
modeling, and test) on analytical and physical models for qualification and on flight
articles for acceptance. All the outputs from the previous section serve as inputs to this
phase. The results are verified end products along with the body of evidence that
formally confirms to stakeholders that the item complies with the validated requirements.

Inputs:
• Verification requirements (e.g., verification matrix)
• Master verification plan (as-planned, including incompressible test list)
• Test and verification requirements (TVRs), with associated plans and procedures
• Unverified end products
• Verification enabling products (e.g., validated models, test facilities)
Implementation activities:
• Execute TVR plans and procedures
• Track completion of (or exception to) each TVR
• Audit (spot-check) lower-level verification events
• Identify regression test needs
• Identify test uncertainty factors
• Document results
Outputs:
• Verified end product
• Verification plan (as-executed)
• Verification products (data, reports, verification compliance matrix, verification completion notices,
work products)
Figure 11-17. Product Validation Versus Product Verification. This flowchart illustrates the main
differences between product verification and product validation (adapted from Perttula
[2007]). While product verification closes the loop between product and requirements,
product validation closes the loop between product and customer expectations.
Unfortunately, in the real world these conditions are never true. To begin with,
real-life systems are organic. No matter how many requirements we write, we're
producing only an approximation of the real thing (say at the 95% confidence
level). A perfect description of a design would require an infinite number of
requirements. But it's the missing 5% (often the ones someone assumes without
communicating the assumption) that can spell the difference between mission
success and mission failure.
Furthermore, producing a completely defect-free system is practically
impossible for many reasons. The process of translating customer needs into
verifiable requirements is imperfect, because stakeholders often don't have
completely clear ideas of their true needs, developers don't grasp fully what
stakeholders' expectations truly are, or some combination of both. And it's the
nature of any project that requirements inevitably change as we refine the system
definition and as needs evolve. As this happens, the loop between developer and
stakeholder is not fully closed.
Another issue is that some customer expectations are ultimately subjective
(e.g., astronaut comfort or work-load) no matter how exactingly we try to translate
them into design requirements. They can't be fully validated until exercised during
flight-like operational scenarios as an integrated system. For example, automobiles
undergo validation when test-driven by members of a target market or customer
advocates. Human space systems are subjected to extensive man-in-the-loop test
scenarios before operational deployment.
TABLE 11-18. System Validation Inputs, Outputs, and Associated Activities. System validation
begins with a verified end product and documented stakeholder expectations. The
activity affirms that the system will meet stakeholder expectations.
Figure 11-18. FireSAT Concept of Operations. The mission concept of operations is one of
several key inputs into the validation planning process. We can’t “test like we fly” if we
don’t know how we're going to fly! (TLM is telemetry; UAV is unmanned aerial vehicle;
NOAA is National Oceanic and Atmospheric Administration; RF is radio frequency.)
FIGURE 11-19. FireSAT Fault Tree. We derive a fault tree from an understanding of the mission
concept of operations and the underlying system functionality. The concept of
operations, together with the fault tree, guides overall validation planning. (ADCS is
attitude determination and control subsystem; C&DH is command and data handling
subsystem.)
TABLE 11-19. Prepare for Product Validation. Validation planning determines how to objectively
ensure that the system will meet customer expectations. (FS/GS ICD is flight
system/ground system interface control documents; GSE is ground support equipment.)
FIGURE 11-20. Example Relationship between Science “Goodness” and Instrument Pointing
Jitter. Knowing where the “cliffs” are in system performance is a useful guide to
validation planning. (Science floor is the minimum science return for mission
success.)
to identify “soft” spots using such techniques as fault-tree analysis, state analysis,
or performance sensitivity analysis.
The importance of stress testing and simulation was cited in the WIRE failure
report: "Testing only for correct functional behavior should be augmented with
significant effort in testing for anomalous behavior" [WIRE Mishap Investigation
Board, 1999]. Some projects can't do full fault-injection testing with the flight
system without incurring excessive risks or costs. The solution is software
testbeds. An early verification and validation (V&V) plan identifies the needs for
such testing and the required testbed capabilities.
The Jet Propulsion Laboratory has amassed several decades of lessons learned
from deep-space missions into a set of design principles and flight project practices.
Besides the V&V principles already discussed in this chapter, they address the
following:
• Carry out stress testing involving single faults that cause multiple-fault
symptoms, occurrence of subsequent faults in an already faulted state, etc.
• Perform system-level electrical "plugs-out" tests using the minimum
number of test equipment connections
• In addition to real-time data analysis, perform comprehensive non-real-
time analysis of test data before considering an item validated or verified
• Regard testing as the primary, preferred method for design verification and
validation
• Arrange for independent review of any V&V results obtained by modeling,
simulation, or analysis
• Reverify by regression testing any changes made to the system to address
issues found during testing
• Perform verification by visual inspection, particularly for mechanical
clearances and margins (e.g., potential reduced clearances after blanket
expansion in a vacuum), on the final as-built hardware before and after
environmental tests
• Include full-range articulation during verification of all deployable or
movable appendages and mechanisms
• Validate the navigation design by peer review using independent subject
matter experts
• Employ mission operations capabilities (e.g., flight sequences, command
and telemetry data bases, displays) in system testing of the flight system
Tools for product validation are similar to those for product verification
planning. A validation matrix (VaM) maps validation methods against customer
expectations and key operational behaviors. However, unlike verification
planning, which goes back to the design requirements, validation must reach back
further and tap into the original intent of the product. This reaching back
highlights the critical importance of getting early buy-in on the mission need,
goals, and objectives as well as mission-level requirements as described in Chapter
2. These measures of effectiveness (MOEs) or key performance parameters (KPPs),
along with the operations concept and failure modes and effects analysis, guide
validation planners in preparing a cogent "test as you fly" plan. This plan aims at
discovering unintended or unwanted product behaviors in the real-world
environment. Table 11-20 shows a partial VaM for the FireSAT mission with a
cross-section of originating intent.
Because detailed planning for validation so closely parallels that of verification
planning (and may be almost indistinguishable in all but intent), we do not
address it to the same level of detail here. The same tools and techniques easily
apply. Because validation focuses primarily on system-level versus component or
subsystem behavior, we usually don't need to develop IVFs as we do for assembly,
integration, and test planning. But we must attend to configuration control
TABLE 11-20. Example FireSAT Validation Matrix. This excerpt from the FireSAT validation matrix
shows the information we need in preparing for product validation.
(Chapter 15), because not all flight elements (especially software) are available
during a ground-based validation event.
A major part of this strategic planning is developing the project's
incompressible test list (ITL). Project resources are scarce commodities. As the
launch date looms, schedules get compressed and budgets get depleted so we
must make tough decisions on how best to allocate resources to V&V activities.
This resource reallocation often creates overwhelming pressure to reduce or even
eliminate some planned testing. Recognizing this eventuality, the project leaders
should define an ITL that identifies the absolute minimum set of tests that must
be done before certain project milestones (such as launch) occur, regardless of
schedule or budget pressures. Systems engineering and project management
must approve any changes to this list. We should resist the temptation to
immediately reduce the entire program to this minimum list under the "if the
minimum wasn't good enough, it wouldn't be the minimum" philosophy.
Instead, if properly presented and agreed to at the highest project levels early in
the lifecycle, it's useful ammunition against pressures to arbitrarily reduce or
eliminate verification or validation activities. Test objective considerations for all
tests on the ITL generally fall into one or more of the following:
Table 11-21 gives an example of the critical items included in the FireSAT ITL.
Table 11-21. Example FireSAT Incompressible Test List (ITL). Subject to the demands of
schedule and hardware safety, the flight segment (FS) for the FireSAT spacecraft is the
preferred venue for testing. Testing may be done on the system test bed (STB), or
during flight software flight qualification test (FSWFQT) on the software test bench
(SWTB) as long as the fidelity of the test environment is adequate to demonstrate the
required function or characteristic. (Adapted from the Kepler Project Incompressible
Test List, courtesy of JPL). (GS is ground system; KSC is Kennedy Space Center; LV is
launch vehicle; NOAA is National Oceanic and Atmospheric Administration; EMC/EMI
is electromagnetic compatibility/electromagnetic interference; RF is radio frequency.)
Table 11-22. Input/Output and Activities Associated with Implementing Product Validation.
Using the master validation plan as a guide, the validation implementation activities
transform the verified end product into a validated end product with the objective
rationale underlying that transformation.
conduct of mission operations. It's the technical hand-off between the product's
known characteristics and behavior and the mission operations community, who
must live with them.
FIGURE 11-21. Flight Certification in the Big Picture of Verification and Validation Activities.
This figure illustrates the relationship between flight certification and product
verification and validation.
Flight certification comprises those activities aimed at sifting through the large
body of V&V data to formally—and usually independently—determine if the fully
integrated vehicle (hardware, software, support equipment, people, and
processes) is ready for flight. A certificate of flight readiness after the flight readiness
review confirms that the integrated system has been properly tested and processed
so that it's ready for launch. That is, the system has been successfully integrated
with other major components and performance has been verified; sufficient
objective evidence exists to verify all system requirements and specifications; and
all anomalies or discrepancies identified during product verification have been
satisfactorily dispositioned. Final flight system certification nomenclature differs
widely between organizations. The terminology used here is adapted from DoD
and NASA projects. In any case, the basic intent is universal—formally review the
objective evidence indicating that the flight system is ready to fly.
Flight certification closes the loop between the end product and the operators
or users of the product. It draws upon all the data collected throughout product
verification and validation, as well as a detailed understanding of the mission
operations concept. Table 11-23 shows the primary activities that support flight
certification, along with their inputs and outputs.
Every stage of mission integration generates important data. Data products
include verification plans, data, and reports; problem reports and history;
variances (waivers and deviations); material review board dispositions;
acceptance data packages; etc. Other important build data is also recorded, such as
cycle counts and run time for limited life items, maintenance history, conforming
but "out-of-family" anomalies, etc. For any project to effectively and efficiently
plan, execute, and be sustained throughout its lifecycle, we must maintain this
materiel history and keep it readily retrievable, especially when mission lifetimes
span entire careers.
TABLE 11-23. Flight Certification Inputs, Outputs, and Associated Activities. The flight
certification activities transform the verified and validated products into products that
are certified to successfully perform mission operations. (DCR is design certification
review; FCA is functional configuration audit; PCA is physical configuration audit;
SAR is system acceptance review.)
1.6 Ground data system and mission operations system design reviews
(including mission design and navigation), through the operations
readiness review, including action item closures, are complete
1.7 Hardware and software certification reviews, acceptance data
packages, inspection reports, log books, discrepancy reports, open
analysis items, and problem reports are complete and all open items
closed out
2 Documentation of residual risks to mission success
2.1 Functional and performance requirements for complete and minimum
mission success (including planetary protection) are documented and
are being met
2.2 Matrices showing compliance with institutional requirements, design
principles, and project practices have been audited and approved by
the cognizant independent technical authority
2.3 Verification and validation requirements compliance matrix, including
calibration, alignment and phasing tests and as-run procedures and
test/analysis reports are complete and have been reviewed by the
cognizant independent technical authority
2.4 Testbed certification of equivalence to flight system is complete and
all differences documented and accounted for
2.5 Incompressible test list (ITL) tests, including operational readiness
tests with flight software and sequences, are complete and have been
reviewed, and any deviations have been approved by the cognizant
independent technical authority
2.6 Test-as-you-fly exception list is complete and has been reviewed by
the cognizant independent technical authority
2.7 All safety compliance documents have been approved
2.8 Commissioning activities, flight rules, launch/hold criteria,
idiosyncrasies, and contingency plans are complete, reviewed and
delivered to the flight team
2.9 Waivers and high-risk problem reports have been audited, are
complete, and have been approved
2.10 All external interface (e.g., communication networks, launch vehicle,
foreign partners) design and operational issues have been closed
2.11 Flight hardware has been certified and any shortfalls for critical
events readiness, to allow post-launch development, have been
identified, reviewed, and approved by the cognizant independent
technical authority
2.12 All post-launch development work has been planned, reviewed, and
approved
2.13 All work-to-go to launch activities have been planned, reviewed, and
approved
2.14 Residual risk list has been completed, reviewed, and approved by the
cognizant independent technical authority
In addition to focusing on the prima facie evidence derived from model
validation and product verification efforts, flight certification dictates how the
system should be operated. Early in the project, stakeholders define their
expectations and develop an operations concept. Developers derive detailed
performance specifications from this intent that ultimately flow down to every
system component and part. But users and operators must live with the system as
it actually is, not as it was intended to be.
One example is temperature limits. A mission design decision to operate the
spacecraft in low-Earth orbit largely fixes the thermal conditions the system will
experience. With this in mind, a good systems engineer strives to obtain the widest
possible operational margins by defining survival and operational temperature
range specifications for every component. Verification events, such as thermal node
analysis and thermal-vacuum testing, determine if a given component will operate
reliably in the anticipated thermal environment, plus margin.
This example uses a simple pass/fail criterion. Either the component works
within specifications during and following thermal-vacuum testing, or it doesn't.
For the designer and test engineer, this criterion is perfectly acceptable. But for the
operator (one of the major active stakeholders), this result may not suffice for
optimal operations. A transmitter, say, may perform within specifications between
-20° and +40° C. But optimal performance may be between 5° and 15° C. Such
behavior may even be unique to a single component. The same model on a
different spacecraft may have a slightly different optimal range. Space hardware,
like people, often exhibits different personalities.
Failure to account for model uncertainties when selecting parameters such as
fault-protection alarm limits can result in a spacecraft unnecessarily initiating safe
mode response during critical early operations due to modest temperature
excursions. Validation testing—particularly stress and scenario testing—plays a
valuable role in selecting and certifying the set of key operational parameters.
Flight certification involves translating observed system behaviors into flight or
mission rules to guide the operator in squeezing the best performance out of the
system once it's on orbit.
Other roles involve interactions of subsystems and payloads. System validation
should uncover the "unknown unknowns" of system behavior. For example, we may
find during system validation that for some reason, Payload A generates spurious
data if Payload B is simultaneously in its calibration mode. Instead of redesigning,
rebuilding, and reverifying the entire system, we may decide that an acceptable
operational work-around is to impose a flight rule that says, "Don't collect data from
The as-built baseline may be a separate document from the design-to or build-
to performance specifications. Its limits may initially be the same as defined by the
original specifications, but only because the verification results support those
limits. The as-built baseline may also define "derived" constraints that aren't in the
design or performance specifications. Engineering drawings and models,
analytical tools and models (e.g., thermal, loads), and variances (with associated
constraints and expansions) also constitute part of the as-built baseline. Expansion
of a certification limit or change to the product configuration baseline (e.g.,
fabrication process) requires rigorous examination of all of the components that
make up the certification to determine where additional verification activities,
such as delta qualification or revised analysis, are warranted. Any approved
changes are made to the as-built baseline, not necessarily the design or
performance specification. We only revise hardware or software specifications
when we identify changes to the production or fabrication processes, acceptance
processes, or deliverables associated with acceptance.
Normally each "box" in a given subsystem has a design or performance
specification. For example, the radiator and heat exchanger in a thermal control
subsystem may be certified to different pressure and temperature limits.
Sometimes, specific fabrication processes (e.g., braze welding, curing) for a box
would, if changed, invalidate its certification, requiring requalification. As we
assemble the boxes into a larger assembly, we may need additional operational
constraints at the subsystem or higher levels. For example, the radiator and heat
exchanger may have upper operating temperature limits of 200° C and 180° C,
respectively. But their temperatures may be measured by a single sensor
somewhere between them. To protect subsystem components, we may need to
limit the maximum subsystem operating temperature to 150° C based on testing or
thermal modeling showing that a 150° C temperature sensor reading equates to a
radiator temperature of 200° C during operation. The subsystem certification limit
for temperature thus becomes 150° C, while the radiator and heat exchanger
certification limits remain unchanged. The thermal model we use to define the
constraint is part of the certification baseline. We also may need to establish
vehicle-level operational constraints (e.g., attitude and orientation) to avoid
exceeding system, subsystem, or lower-level certification limits.
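The sketch below restates the radiator and heat exchanger example as a calculation, assuming (purely for illustration) a linear thermal-model relationship in which each component runs a fixed offset above the shared sensor; the offsets are invented values of the kind a thermal model might produce, while the component limits come from the example above.

def derived_sensor_limit(component_limits_c: dict[str, float],
                         model_offsets_c: dict[str, float]) -> float:
    # Derive a single-sensor operating limit from component certification
    # limits, assuming (for illustration only) a linear thermal model of the
    # form component_temperature = sensor_temperature + offset.
    return min(limit - model_offsets_c[name]
               for name, limit in component_limits_c.items())

# Certification limits from the example in the text; the offsets are hypothetical.
limits = {"radiator": 200.0, "heat_exchanger": 180.0}
offsets = {"radiator": 50.0, "heat_exchanger": 20.0}
print(derived_sensor_limit(limits, offsets))  # -> 150.0 (deg C subsystem certification limit)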
The design certification review (DCR) focuses on the qualification and first-use
requirements identified during development of the VRs and TVRs. It's a formal
review chaired by the customer, with other stakeholders represented, to certify that
a configuration item (Cl), contract end item (CEI), or computer software
configuration item (CSCI) has met its specified design-to performance requirements
and that the design is ready for operational use or further integration in the mission
flow. We may hold a DCR for any such item, including software, line-replaceable
unit (LRU), assembly, subsystem, element, system, vehicle, and architecture.
The DCR is an expansion of the functional configuration audit (FCA) process
described in the next paragraph. Its primary objectives are to ensure the following:
• All qualification requirements for the item have been satisfied and verified
or variances approved
• All operating constraints for the item have been baselined and placed under
configuration management (CM) control
• All production or fabrication constraints associated with the item that could
affect qualification and acceptance have been baselined and placed under
CM control
• All maintenance requirements associated with the item's qualification have
been baselined and placed under CM control
• All anomalies and non-conformances have been successfully dispositioned
• All hazards have been dispositioned and successfully mitigated
• The required materiel history associated with the product and certification
baseline has been delivered to the customer and placed under CM control
• Customer and suppliers acknowledge, through a formal certification
process, that all of the above steps have been completed and that the item
being certified is ready for operational use and integration into the next
phase of mission assembly
In contrast, the customer performs the FCA before delivery of an item from a
contractor (after the critical design review (CDR) but before the system acceptance
• The required materiel history associated with the product baseline has been
delivered to the customer and placed under CM control
• The customer and the contractor acknowledge, through a formal
certification, that all of the above steps have been completed and that the
item being certified is ready for operational use and integration into the
next phase of mission assembly
An SAR focuses on completing the manufacturing and assembly processes; it
isn't meant to replace or repeat day-to-day customer oversight and integration. It's
also unlikely that all V&V activities will be complete by the SAR. Many of them,
especially those involving final configuration changes, occur just before launch (e.g.,
installation of thermal blankets, or propellant loading). Chapter 12 addresses these
as part of system transition.
Until now, our discussion approached the challenges of V&V from the
perspective of developing nominally new hardware items. But space systems are
increasingly composed largely of existing items. Furthermore, software differs
from hardware when it comes to V&V. The next section explores the challenges of
commercial-off-the-shelf (COTS) and non-developmental items (NDI). Software
V&V issues are then examined in the section that follows.
Ariane IV launch vehicle. NASA's Lewis spacecraft was lost soon after launch due to
an attitude control problem [Harland and Lorenz, 2005]. Both of these missions relied
on COTS or NDI hardware or software with proven flight heritage. This section
highlights some of the verification challenges that COTS and NDI items pose, in the
hope of preventing similar problems in the future.
From the standpoint of product verification, the widespread availability and
use of COTS and NDI can be either good or bad. The benefit is that they already
exist—we don't have to invent them. They likely have proven performance,
possibly with demonstrated flight heritage. But these facts don't obviate the need
for some level of verification, especially for new applications. This verification may
be complicated by the unwillingness or inability of a commercial vendor to supply
the level and type of detailed design or previous verification data we need. A
further complication is the flow-down requirements that come from the decision
to use a given COTS or NDI item. Any item comes with a pre-defined set of
interfaces that the rest of the system may have to adapt to, entailing more
verification activities.
We may insert COTS into a design anywhere during the flow-down side of the
system design effort. But doing so before PDR is far more likely to maximize
savings while minimizing system redesign efforts. If we use an item as is, we must
still understand the requirements for it in the context of the new application. If we
modify a COTS item, the requirements flow-down drives new requirements for
the modified item to meet.
After we select a COTS or NDI item, derived requirements need to flow to the
other subsystems, e.g., electrical power subsystem, thermal control subsystem,
data handling subsystem. A heritage review (HR), as described in Chapter 9, is
advisable to identify issues with the reuse of a particular component in the new
design. Table 11-25 summarizes a set of HR topics. The purpose of the review, best
done before the preliminary design review (PDR), is to:
• Evaluate the compatibility of the inherited or COTS item with project
requirements
• Assess potential risk associated with item use
• Assess the need for modification or additional testing
• Match the requirements of the inherited design against the requirements of
the target subsystem. A compliance matrix is very useful for this activity.
• Identify changes to current design requirements
The heritage review forces a systematic look at the item's history and
applicability to a new project. At the end of it, an item initially thought to be
technology readiness level (TRL) 9 may be only TRL 7 or 8 for a new application,
necessitating further verification before flight.
The outcome of the HR helps determine the need for, and level of, additional
verification for the item. As we describe in Section 10.7.1, a further complication
with COTS is that we can't apply the same rigor for V&V at the lowest levels of
assembly simply because the vendor probably can't provide the required
traceability, nor the rigor defined in the specifications. Proprietary issues may exist
as well. We must often settle for higher-level verification of the unit and accept
some risk (e.g., run some stressing workmanship, vibration, thermal-vacuum,
radiation, and other tests on some sacrificial units to determine if they're good
enough, even if we don't know exactly what's inside the box). The main point here
is that tougher verification standards at higher levels, such as worst-case analysis;
parts-stress analysis; or failure mode, effects, and criticality analysis, may be
necessary to provide sufficient margins to offset the lack of lower-level knowledge.
By far the biggest risk in the use of COTS and NDI is the decision to fly
terrestrial hardware in space. The launch and space environments pose unique
challenges that otherwise perfectly suitable terrestrial components may not handle
well. And even if they do, we determine this capability only after an extensive
environmental qualification program. After careful analysis, we may decide that
the additional time, cost, and complexity of such a program negate any advantage
that COTS and NDI offer in the first place.
determine the criticality early in the project lifecycle, when we specify the top-level
system design. Thereafter, software classified as safety or mission critical is
developed with a high degree of process rigor. NPR 8715.3b [NASA (3), 2007]
defines mission critical and safety critical as follows:
TABLE 11-25. System Heritage Review (SHR) Topics. This table lists a series of topics that can
be addressed as part of a SHR. A formal system heritage review provides an
opportunity to address the specific design and construction of the item in question
and fully vet its suitability for a new application.
early system design trades seek to reduce this critical software. For modern
non-human-rated space systems, examples of safety and mission critical software include:
Flight Software
• Executable images—The executable binary image derived from the software
source code and persistently stored onboard a spacecraft
• Configuration parameters—Onboard, persistently stored parameter values
used to adjust flight software functionality
• Command blocks—Persistently stored sequences of commands embodying
common software functionality and often used for fault protection responses
Ground Software
• Uplink tools—Any software used to generate, package, or transmit uplink
data once we determine that the data is correct
Figure 11-22. FireSAT Flight and Ground Software Architecture. The complexity of the FireSAT mission software (flight and ground)
illustrates the pervasiveness of software in every aspect of the mission. (GSE is ground support equipment; I/O is input/output;
ADCS is attitude determination and control system; GN&C is guidance, navigation, and control.)
The focus here is on space mission software classified as either safety critical or
mission critical and which thus demands the highest level of process rigor and
corresponding V&V.
These generic qualities in space mission software usually result from the software
development process or from software engineers' experience, both of which are
inadequate, although accepted, for safety- and mission-critical software. These
qualities never have a comprehensive set of requirements defined, and rarely are
more than a few requirements that are critical to mission success defined. This
means we don't have a comprehensive set of tests analogous to hardware testing.
Without adequately defined requirements, traditional verification of these
qualities is problematic. So we must rely on software validation, at multiple levels,
in multiple test venues, and for multiple development phases, to demonstrate the
suitability of the software for its mission application.
Figure 11-23. Waterfall Paths in Software Development. Path A (top to bottom with no loops)
illustrates a traditional single-pass waterfall lifecycle for software development. Path
B shows a lifecycle appropriate for a development with confidence in the software
requirements, architecture, and detailed design but without a delivery into system
test for any increments. Path C shows a lifecycle with multiple, detailed design and
system test phases. This path is common for space mission software development.
Table 11-27. Verification and Validation (V&V) Techniques Applicable for Each Lifecycle
Phase. Different phases employ different V&V techniques. This table highlights the
most useful ones. Some techniques are appropriate during multiple phases.
Implementation:
• Reuse
• Source language restrictions
• Continuous integration
• Static analysis
• Auto-code generation
• Software operational resource assessment
Requirements Analysis
Flaws in requirements analysis, including elicitation and elucidation, often
propagate through the majority of the software development lifecycle only to be
exposed as residual defects during the system test phase or during operations.
Such errors are the most expensive to fix because of their late detection and the
resulting full lifecycle rework needed to resolve them.
In Section 11.1 we discuss requirements validation. Applying the VALID
criteria described there, we have several useful techniques for identifying flaws in
software requirements. These include:
• Early test case generation—Forces system, software, and test engineers to
understand requirements statements so as to codify test cases and thus
uncover weaknesses in a requirement's description
• Bidirectional traceability—Ensures that all requirements and their
implementing components are interrelated. The interrelationships allow us
to identify requirements that have no implementation or, conversely,
implementations that trace to no requirement and are therefore unneeded
(see the sketch after this list).
• Prototyping with customer feedback—Enables the customer to assess the
correctness of the system early in the software development lifecycle. The
prototype should stress end-to-end, customer-facing behavior.
• Incremental delivery—Allows us to perform the full scope of V&V activities,
including system testing, early in the software development schedule. A
root cause analysis of flaws in requirements during the early increments
sometimes reveals systematic errors in the requirements analysis phase.
Subsequent incremental software builds should be able to eliminate these
errors.
• Stress testing—Examines the 'boundary conditions' of requirement statements
to identify misunderstandings and weaknesses in the statements
• Hardware-in-the-loop—Prevents the software development simulation
environment from embodying identical or related requirement errors and
thus hiding flaws in the software product
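To make the bidirectional traceability check concrete, here is a minimal Python sketch. It is illustrative only: the requirement and test-case identifiers are hypothetical, and the in-memory dictionaries stand in for whatever requirements database or tool (DOORS, CORE, Cradle, or similar) a project actually uses.

# Minimal sketch of a bidirectional traceability check (illustrative only).
# The requirement and test-case identifiers below are hypothetical.

def check_bidirectional_trace(req_to_tests, test_to_reqs):
    """Flag requirements with no implementing test and tests with no parent requirement."""
    orphan_reqs = [r for r, tests in req_to_tests.items() if not tests]
    orphan_tests = [t for t, reqs in test_to_reqs.items() if not reqs]
    return orphan_reqs, orphan_tests

if __name__ == "__main__":
    req_to_tests = {
        "SW-101": ["TC-001", "TC-002"],  # covered by two test cases
        "SW-102": [],                    # no test case -> verification gap
    }
    test_to_reqs = {
        "TC-001": ["SW-101"],
        "TC-002": ["SW-101"],
        "TC-099": [],                    # no parent requirement -> possibly unneeded
    }
    gaps, extras = check_bidirectional_trace(req_to_tests, test_to_reqs)
    print("Requirements lacking tests:", gaps)
    print("Tests lacking requirements:", extras)

In practice a project extracts these mappings from its requirements management tool; the value of the check is that it can run automatically at every build, so traceability gaps surface early rather than at a milestone review.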
We don't apply all of these techniques during the requirements phase. Some
may be applied only after the software has reached a certain stage of development.
Numerous software development tools are available to facilitate requirements
analysis, documentation, and V&V. (Tools suitable for requirements analysis
include DOORS by Telelogic, CORE by Vitech Corp, and Cradle by 3SL.) Increased
rigor in requirement specification is particularly useful in eliminating requirement
flaws and fostering correct design.
Architectural and Detailed Design
After the valid requirements are defined, architectural and detailed design
leads to a decomposition of the software into modules. The modules should be
recursively structured and designed depending on their size or complexity. The
primary results of a design are these modules, their interconnectivity, and their
interface specifications. The design must be shown formally to satisfy the
requirements. Techniques to avoid flaws in design include:
• Reuse—Exploits prior designs, particularly architectural designs but also
detailed designs where possible, to reduce and simplify V&V. Adopting an
architecture or design that's unsuitable for the current product introduces
many of the design problems that reuse is meant to avoid, so careful systems
engineering is required to adopt a reuse approach that actually reduces project risk.
• Design constraints—Limits the space of design solutions to facilitate other
aspects of the development. Facilitation could support formal analysis of
the design or implementation, reduce the complexity of testing, adapt to
limitations in software engineer skills, ensure suitable visibility for remote
debugging, or other needs. A common example is to forbid preemptive
multiprocessing and dynamic memory allocation, and thus avoid the types of
implementation errors that result from them.
• Prototyping for qualities—Provides early feedback on the architecture's
strengths and weaknesses relative to the product's generic qualities
(reliability, for example). This kind of prototype focuses not on customer-
facing behavior but on the driving product qualities. For example, a
prototype helps assess a design's ability to meet strict temporal deadline
requirements during the mission's critical events, such as a planetary
landing or orbit insertion.
• Complexity measures—Assesses the design relative to its modular
complexity. A design that a prescribed measure judges complex either
warrants rework or calls for additional software development resources in
later lifecycle phases. Common measures are based on the nature of the
interconnectivity of the modules within the software design (see the sketch
after this list).
• Viability analysis—Uses high-level scenarios derived from the
requirements to ensure that the design meets them. The scenarios
are, at least conceptually, run through the design to confirm the ability to
implement them. This analysis often identifies overlooked modules and
module interrelationships. If the design is executable, we execute the
scenarios directly to yield definitive viability results.
• Formal verification and logic model checkers—If we express requirements in a
semantically rigorous way, we can apply logic model checkers to verify
designs in an implementation-independent manner. (The Spinroot website
has a software tool that can formally verify distributed software systems.)
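The following is a minimal sketch of one such interconnectivity-based complexity measure, module fan-out, computed from a design's interface list. The module names and the threshold value are assumptions for illustration, not a prescribed standard.

# Illustrative design-complexity measure: fan-out per module, derived from a
# notional interface list. Module names and the threshold are assumed values.

from collections import defaultdict

def fan_out(interfaces):
    """Count how many distinct modules each module depends on."""
    deps = defaultdict(set)
    for caller, callee in interfaces:
        deps[caller].add(callee)
    return {module: len(callees) for module, callees in deps.items()}

if __name__ == "__main__":
    # (caller, callee) pairs taken from a notional design
    interfaces = [
        ("adcs", "sensor_io"), ("adcs", "actuator_io"), ("adcs", "telemetry"),
        ("fault_protection", "adcs"), ("fault_protection", "power"),
        ("fault_protection", "telemetry"), ("fault_protection", "thermal"),
    ]
    THRESHOLD = 3  # assumed limit; exceeding it triggers rework or added V&V resources
    for module, count in sorted(fan_out(interfaces).items()):
        status = "REVIEW" if count > THRESHOLD else "ok"
        print(f"{module:17s} fan-out = {count}  [{status}]")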
These design phase V&V activities result in a software design with fewer residual
errors, giving us greater confidence that the implementation phase will produce
correct software code.
Implementation
During the implementation phase the detailed design is converted into a
binary executable code based on software source code compiled for a target
platform. Errors in design and requirements easily propagate into the executable
code. The source code, expressed in a computer language, is prone to syntactic and
semantic errors. Of these, the semantic errors are harder to eliminate. Common
software source languages include C, C++, and Java, although growing numbers of
higher-level languages exist. For example, The Mathworks and Telelogic websites
have more information about their capabilities. Techniques to avoid flaws during
implementation include:
implementation—the missed paths are surely the paths with latent errors.
Furthermore, normal flight code has considerably more conditionals and
non-deterministic external interrupts and multiprocessing task schedules.
In practice, we do true full-path coverage only on subsets of space mission
software. The focus is on the safety critical software or the most important
mission critical software—typically fault protection, resource management,
and attitude control. Other techniques include:
• Independent testers—Software tested by the software developer tends to
contain errors based on the developer's incorrect assumptions. For example,
if a developer assumes an interface contains values from 0 through 7, then the
source code and the test code will handle only 0 to 7, even if the actual values
are from 0 through 9. Independent testers bring different assumptions and
thus produce test cases that expose developer errors.
• Differential testing—If we have a provably correct gold standard
implementation, such as previously existing baseline data or results from
software being reused, we do differential testing by running the same test
cases through the gold standard and the software under test and comparing
the results (see the sketch after this list). Such an implementation exists for
only a small subset of the full software system.
• Boundary condition testing—Boundary conditions lie at the extreme edges of
the anticipated flight conditions. Test cases that exercise all boundary
conditions improve the chance of catching latent errors.
• Random reboots injection—Spacecraft computers, because of space
environmental conditions (primarily charged particles and hardware
interactions), tend to shut down without giving the software a chance to
prepare. Such spontaneous shutdowns leave the software's persistent
storage in an inconsistent and thus corrupted state. Testing that involves
random reboot injections helps identify storage corruption errors.
• Hardware off-line faults—Spacecraft software interacts with a large variety of
hardware sensors, actuators, and controllers. These devices sometimes fail
or have their communication paths interrupted. Testing that simulates
hardware off-line faults at random times identifies software errors related
to hardware trouble.
• Longevity assessment ("soak tests")—Operating the software for long periods
tends to expose subtle resource management bugs. A common error is a
memory leak in which a finite memory resource is eventually exhausted.
Particularly insidious types of resource management errors are those
involving persistent storage; these errors manifest themselves only after a
very long time and perhaps multiple reboots. Unfortunately, longevity
assessments are difficult to implement in practice due to the durations
required and the limited availability of test beds. Long is generally defined
to be a significant fraction of the expected time between computer reboots;
however, for the insidious persistent storage errors it's even longer.
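As a concrete illustration of the differential testing technique above, the sketch below runs the same test cases through a trusted "gold standard" routine and a new implementation and compares the results. Both routines are stand-ins for real flight-software functions, and the seeded defect is contrived for the example.

# Minimal differential-testing sketch (illustrative only). The "gold standard"
# represents previously validated or reused software; the implementation under
# test contains a seeded defect (it silently drops duplicate values).

def gold_standard(values):
    return sorted(values)

def software_under_test(values):
    return sorted(set(values))  # seeded defect: duplicates are lost

def differential_test(cases):
    failures = []
    for case in cases:
        expected = gold_standard(case)
        actual = software_under_test(case)
        if expected != actual:
            failures.append((case, expected, actual))
    return failures

if __name__ == "__main__":
    test_cases = [[3, 1, 2], [], [5, 5, 5], list(range(10, 0, -1))]
    for case, expected, actual in differential_test(test_cases):
        print(f"MISMATCH for {case}: gold={expected}, under test={actual}")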
when explicitly rolled off the project or simply by restricted access to the
system and its test environment.
• Personnel and procedures—The system context uses real test venue support
personnel, operations personnel, and associated operational procedures.
The software developers' assumptions no longer apply. The consequences
of a software error may now be safety critical from the perspective of
human life. Operational procedures may have errors that use the software
in off-nominal, even dangerous, ways.
To avoid these issues we need additional tests within suitable test venues. The
software development team's completion of the software based on their own
qualification testing is only a first step in achieving a qualified software executable
for flight or ground operations.
• Stress tests or risk reduction tests—Ensure that the mission and safety-critical
software is robust and has no hidden behaviors not exposed in prior testing.
The complexity of modern software makes it impossible to completely test all
permutations. The V&V techniques applied in a software context, while
necessary, are not sufficient. Stress testing at the system level on the launch
version of the software helps catch any remaining flaws. Particularly
appropriate test phase techniques are: boundary conditions; longevity
assessment; multiple, simultaneous fault injection; and multi-tasking overload.
carefully, because a missed test case might obscure latent errors that lead to
loss of mission.
• Incomplete launch loads—A fundamental advantage of flight software is that
it is changeable in flight whereas hardware is not. Taking this advantage to
an extreme allows for launching a spacecraft without the full required
functionality and then uploading the software sometime in the future. In
some applications, notably deep space missions with cruise phases lasting
from months to years, the project might exploit this time by deferring
software development. However, the future builds need to be developed
and tested with all the rigor of the launch build.
• Post-launch software builds—A fundamental trade-off in the decision for a
post-launch build is whether post-launch V&V activities only on testbeds
are sufficient. It's crucial for this trade to cross-validate the testbed against
the spacecraft before launch, and as part of that cross-validation to quantify
differences and idiosyncrasies that could otherwise impair the in-flight
V&V. Failing to complete this cross-validation before launch leads to high
risks later in the mission after the spacecraft is gone!
• System safety—Safety critical software is one aspect of overall system safety.
A safe system generally requires multiple inhibits, with the software
counting for one inhibit at most. Other system inhibits may include, for
example, firing pins in propulsion systems. During final stages of launch
preparations, a spacecraft's propulsion system is usually filled with
hazardous material and its deployment ordnance (pyrotechnic initiators)
are installed and armed. From this point onward, any powered test of the
spacecraft carries a risk to personnel and vehicle safety in the form of
accidental firing of the propulsion system or pyrotechnic initiators. By this
time, safety-critical software, including flight, ground, and GSE software,
must be complete and fully qualified so that the software's defined inhibits
preclude accidents.
Let's look at the example test flow for the FireSAT flight software whose top-level
architecture was shown in Figure 11-22. Figure 11-24 illustrates a nominal test flow
for this software from the unit, to module, to integrated functional level. It also
includes placeholders for integrated functional tests, end-to-end information system
(EEIS) tests, and mission scenario tests. Like the IVF presented in Section 11.4, this
diagram helps systems and software engineers integrate their test planning.
FIGURE 11-24. Notional FireSAT Flight Software Test Flow. This figure illustrates the flow of flight software testing from unit to module to
integrated software deliverable items. Included in the diagram are placeholders for integrated spacecraft and ground system
software tests, use of software testbeds, end-to-end information system (EEIS) tests, operational readiness tests, and software
stress tests.
11.8 Document and Iterate
TABLE 11-28. Lesson Learned Example. The NASA Lessons Learned website captures
thousands of lessons such as this one on a vast array of topics. This example
illustrates just one of hundreds on the subject of spacecraft verification. All systems
engineers, especially those involved in verification and validation (V&V), have a duty
to learn from these lessons so as not to repeat previous mistakes.
References:
1. Report on the Mars Polar Lander and Deep Space 2 Missions, JPL Special Review Board (Casani
Report), JPL Internal Document D-18709, 22 March 2000, Sections 3.4 and 5.2.
2. JPL Corrective Action Notice No. Z69164, Mars Program Investigation Results: “System
Engineering/Risk Management/Error Detection,” 1 May 2000.
3. JPL Corrective Action Notice No. Z69165, Mars Program Investigation Results: “Verification and
Validation Process,” 1 May 2000.
Additional Keywords: Entry, Descent, and Landing (EDL), Environmental Test, Fault Protection,
Integration and Test, Risk Assessment, Robust Design, Simulation Accuracy, Software Testing,
Spacecraft Test, Technical Margins, Test and Evaluation, Test Errors, Test Fidelity, Test Planning
Lesson Learned: The process of end-to-end system verification (whether through testing, simulation,
or analysis) may be compromised when it is not consistent with the mission profile (plus margin and
the appropriate off-design parameters).
Recommendations:
1. Enforce the system-level test principle of “test as you fly, and fly as you test." Carefully assess any
planned violations of this principle; if they are necessary, take alternate measures such as
independent validation. Departures from this principle must be reflected in the project risk
management plan, communicated to senior management for concurrence, and reported at
reviews.
2. When using simulations for system-level verification, models must have been validated (e.g.,
supported by test); and sufficient parametric variations in the simulations must be performed to
ensure that adequate margins exist.
References
Ariane 501 Inquiry Board. 1996. Ariane 5 Flight 501 Failure—Report by the Inquiry Board.
URL: sunnyday.mit.edu/accidents/Ariane5accidentreport.html
Arthur, James D. et al. 1999. "Verification and Validation: What Impact Should Project Size
and Complexity have on Attendant V&V Activities and Supporting Infrastructure?"
Proceedings of the IEEE: 1999 Winter Simulation Conference, Institute of Electrical and
Electronic Engineers, P.A. Farrington, ed. pp. 148-155.
Balci, Osman. March 1997. "Principles of Simulation, Model Validation, Verification, and
Testing." Transactions of the Society for Computer Simulation International, 14(1): 3-12.
Department of Defense (DoD). April 10, 1998. MIL-STD-340(AT)—Process For Coating, Pack
Cementation, Chrome Aluminide. Washington, DC: DoD.
European Cooperative for Space Standardization (ECSS). July 14, 2004. ECSS-P-001B —
Glossary of Terms. Noordwijk, The Netherlands: ESA-ESTEC.
ECSS Requirements and Standards Division. November 17, 1998. ECSS-E-10-02-A—Space
Engineering: Verification. Noordwijk, The Netherlands: ESA-ESTEC.
Feather, Martin S., Allen P. Nikora, and Jane Oh. 2003. NASA OSMA SAS Presentation.
URL: www.nasa.gov/centers/iw/ppt/172561main_Fcather_ATS_RDA_v7.ppt
Harland, David Michael and Ralph D. Lorenz, 2005. Space Systems Failures: Disasters and
Rescues of Satellites, Rockets and Space Probes, Berlin: Springer.
JPL Special Review Board (JPL). March 22, 2000. JPL D-18709—Report on the Loss of the Mars
Polar Lander and Deep Space 2 Missions. URL: spaceflight.nasa.gov/spacenews/releases/
2000/mpl/mpl_report_1.pdf
Larson, Wiley J., Robert S. Ryan, Vernon J. Weyers, and Douglas H. Kirkpatrick. 2005. Space
Launch and Transportation Systems. Government Printing Office, Washington, D.C.
Larson, Wiley J. and James R. Wertz (eds.). 1999. Space Mission Analysis and Design.
Dordrecht, The Netherlands: Kluwer Academic Publishers.
National Aeronautics and Space Administration (NASA (1)). March 6, 2007. NPR 7120.5d—
NASA Space Flight Program and Project Management Requirements. Washington, DC: NASA.
NASA (2). March 26, 2007. NPR 7123.1a—NASA Systems Engineering Processes and
Requirements. Washington, DC: NASA.
NASA (3). April 4, 2007. NPR 8715.3b—NASA General Safety Program Requirements.
Washington, DC: NASA.
NASA (4). December 2007. NASA/SP-2007-6105, Rev 1. NASA Systems Engineering
Handbook. Washington, DC: US Government Printing Office.
NASA. March 2004. NASA/JPG 7120.3 Project Management: Systems Engineering & Project
Control Processes and Requirements. Washington DC: NASA.
NASA Goddard Space Flight Center (NASA/GSFC). June 1996. GEVS-SE Rev. A—General
Environmental Verification Specification For STS and ELV Payloads, Subsystems, And
Components. Greenbelt, MD: GSFC.
Perttula, Antti. March 2007. "Challenges and Improvements of Verification and Validation
Activities in Mobile Device Development." 5th Annual Conference on Systems
Engineering Research. Hoboken, NJ: Stevens Institute of Technology.
Sellers, Jerry Jon. 2005. Understanding Space. 3rd Edition. New York, NY: McGraw-Hill
Companies, Inc.
Smith, Moira I., Duncan Hickman, and David J. Murray-Smith. July 1998. "Test,
Verification, and Validation Issues in Modelling a Generic Electro-Optic System."
Proceedings of the SPIE: Infrared Technology and Applications XXIV, 3436: 903-914.
WIRE Mishap Investigation Board. June 8, 1999. WIRE Mishap Investigation Report. Available
at URL http://klabs.org/richcontent/Reports/wiremishap.htm
Youngblood, Simone M. and Dale K. Pace. Apr-Jun 1995. "An Overview of Model and Simulation
Verification, Validation, and Accreditation." Johns Hopkins APL Technical Digest 16(2):
197-206.
Chapter 12
Product Transition
Dr. Randy Liefer, Teaching Science and Technology, Inc.
Dr. Katherine Erlick, The Boeing Company
Jaya Bajpayee, NASA Headquarters
12.1 Plan the Transition
12.2 Verify that the Product is Ready for Transition
12.3 Prepare the Product for Transportation to the
Launch Site
12.4 Transport to the Launch Site
12.5 Unpack and Store the Product
12.6 Integrate With the Launch Vehicle
12.7 Roll Vehicle Out to the Launch Pad
12.8 Integrate With the Launch Pad
12.9 Conduct Launch and Early Operations
12.10 Transfer to the End User
12.11 Document the Transition
TABLE 12-1. A Framework for Organizing Transition Activities. A general process for
product transition is shown here along with which section discusses each
step.
Transitioning a satellite to the end user occurs late in the product lifecycle
(Figure 12-1, [NASA, 2007 (1)]). It begins toward the end of Phase D after the
satellite is integrated and tested and is ready to begin launch and on-orbit
operations. Transition is complete at the post-launch assessment review, which
marks the beginning of Phase E. Depending on mission complexity, this portion
takes about three to six months. NPR 7123.1a [NASA, 2007 (1)] maps transition as
a set of inputs, activities, and outputs as shown in Figure 12-2.
FIGURE 12-1. Project Lifecycle. Here we depict the NASA project lifecycle. Commercial and other government organizations divide the
lifecycle differently and use different names, but the essence is the same. The oval indicates the part of the lifecycle that this
chapter discusses.
Figure 12-2. The Product Transition Process. At the end of this process, the system begins
routine operations. [NASA, 2007 (2)]
these as "end items," with the same calibration and maintenance requirements as
the main product. Because transition requires a supportive infrastructure, support
elements for this equipment must also be transitioned to the end user, including:
• Spares and inventory management
• Personnel requirements for operations and maintenance
• Facilities requirements for operations and storage
In some missions, the end user (e.g., NASA) takes responsibility for integrating
the satellite to the launch vehicle. In this case, we must define the point at which the
end user accepts the satellite. The satellite vendor often tries to define the pre-ship
review as the acceptance review. But the project managers and systems engineers at
NASA's Goddard Space Flight Center always define the point of acceptance as the
post-delivery functional tests at the launch site. This means the vendor is responsible
for proper design of the transportation container, choosing the transportation
method, and transporting the satellite. An end user that accepts the hardware at the
pre-ship review is then responsible for its transportation. The satellite provider
cannot be held responsible for activities outside the contract.
Figure 12-3. Testing a Satellite before Transition. a) A DirecTV satellite goes into the compact
antenna test range, a massive anechoic chamber at Space Systems/Loral where its
broadcast capabilities will be tested, b) Satellite being lowered into thermal vacuum
chamber for testing. (Courtesy of Palo Alto Online)
Other factors include economy and ease of handling and transport, accountability
(by shrink wrapping to keep all parts and manuals together during shipping, for
example), security, and safety of unpacking.
After packaging the product and accompanying documentation (technical
manuals), the sender either moves them to the shipping location or gives access to
their facility to the shipping provider. Interfaces here need to be carefully negotiated
and defined. For example, does the transport provider assume responsibility for the
product at the manufacturer's loading dock or at the aircraft loading ramp?
FIGURE 12-4. Typical Transportation Modes, a) The first MetOp meteorological satellite arrives
on 18 April 2006 in Kazakhstan, on board an Antonov-124 transport plane. It was
launched on 19 October 2006 from the Baikonur Cosmodrome in Kazakhstan, on a
Soyuz ST rocket with a Fregat upper stage. (Source: ESA) b) Lockheed Martin and
NASA personnel at NASA’s Michoud Assembly Facility in New Orleans load the
Space Shuttle’s external tank designated ET-118 onto its seagoing transport, the
NASA ship Liberty Star. (Source: NASA/Michoud Assembly Facility)
TABLE 12-2. Sample Decision Matrix for Selecting Transportation Modes. The choice of
transportation mode can be aided and documented by weighting the 17 criteria listed
here and then scoring the competing transportation modes against these criteria.
[Larson et al., 2005] (Wt. is weight of factor; Sc. is score; Wt.Sc. is Wt. x Sc.)
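A weighted decision matrix like Table 12-2 is straightforward to automate. The sketch below is illustrative only; the criteria, weights, and scores are invented placeholders rather than the table's 17 criteria.

# Illustrative weighted decision matrix in the style of Table 12-2.
# Criteria, weights (Wt.), and scores (Sc.) are invented for the example.

def weighted_total(weights, scores):
    """Sum of Wt. x Sc. over all criteria for one transportation mode."""
    return sum(weights[criterion] * scores[criterion] for criterion in weights)

if __name__ == "__main__":
    weights = {"cost": 3, "schedule": 5, "shock and vibration": 4, "security": 2}
    modes = {
        "air":   {"cost": 2, "schedule": 5, "shock and vibration": 4, "security": 4},
        "truck": {"cost": 4, "schedule": 3, "shock and vibration": 3, "security": 3},
        "ship":  {"cost": 5, "schedule": 1, "shock and vibration": 4, "security": 2},
    }
    # Rank the competing modes by total weighted score, highest first
    for mode, scores in sorted(modes.items(), key=lambda m: -weighted_total(weights, m[1])):
        print(f"{mode:6s} total weighted score = {weighted_total(weights, scores)}")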
Iridium spacecraft to each of the launch sites in the US, China, and Russia. Finally,
they used pathfinders at each launch facility to rehearse loading fuel into the
satellites and loading the satellites onto the launch vehicles. This detailed
planning, pathfinding, and practice contributed to a launch campaign that orbited
67 operational satellites with 15 launches in slightly more than 12 months.
12.5.1 Receive and Check Out the Product at the Launch Site
Now we transport the product with its accompanying documentation, spares,
and support products to the launch site. Trained personnel and support
equipment should be available and prepared; facilities must be ready at the
sending and receiving sites; procedures, roles, and responsibilities must be
defined, negotiated, and practiced.
The launch site usually makes available an assembly and test facility, where
the systems engineers test the product to be sure it wasn't damaged in shipment
and that it still meets all performance requirements. After arrival at the launch site,
the receive and check-out operations occur as follows:
The tests begin with a visual inspection of the container to verify that no visible
damage has occurred during transportation. This check is done almost
immediately after the container is offloaded from the transport carrier (ship, airplane, or truck).
Next, we review the sensor data from accelerometers, temperature monitors,
cleanliness monitors, etc., to determine the real environments that the satellite
endured during transport. Early in the design process, we include the
transportation environment in determining the satellite's design environments.
Ideally, after arrival at the launch site, all the sensor data will show that the
transportation environment was well within the satellite's design environment. If
not, the systems engineer must assemble the engineering team to determine the
impact of these environments on the satellite, what tests must be done to confirm
the integrity of the product for launch, how much disassembly is needed, where to
disassemble it, and whether it should be shipped back to the vendor's facility.
The next step is the visual inspection after opening the container. The systems
engineer hopes to find no dents, discoloration, or loose parts in the box. Otherwise
we have a real headache. Now the systems engineering team must determine what
happened, how it happened, how it affects the product, what needs to be done to
ready the product for launch, and where this should be done.
After unpacking and conducting a more thorough inspection, we place the
satellite in its environmentally controlled facility to begin its functional testing. A
check-out procedure should be part of the acceptance data package, to give
assurance that no damage occurred to the system during transit to the launch site.
This functional test (versus a verification test) usually requires hookup to the
ground support equipment (GSE) that accompanies the space system. If the check
out indicates anomalies, we formally record and track these problems in a problem
and corrective action (PRACA) system, as described in the safety and mission
assurance plan. Depending on the severity of the anomaly, launch site personnel
will attempt resolution per the technical manuals (also in the acceptance data
package) that accompany the product. Chapter 11 also discusses functional testing
and problem resolution during test. Upon successful check-out, the space system
begins integration to the launch vehicle. The GOES-N mission, shown in Figure 12-5,
underwent two months of tests on the imaging system, instrumentation,
communication, and power systems. At the end of this step, the product with its
technical manuals and support equipment is at the launch facility in the
configuration that meets all stakeholders' requirements.
and the data flow system. Ground system readiness also requires that a
security plan, risk management plan, and contingency plans be in place and
that personnel be trained to execute them.
• Mission operations plans are in place and the team has practiced them—
The MOCs must be ready. Hardware and software must be compatible with
that of the orbiting satellite and the mission operations team must be ready.
Ensuring mission operations readiness includes performing simulations of
each phase—launch, separation, deployment, acquisition, maneuver into
operational orbit, normal operations, and contingencies. The MOCs often
perform the maneuver simulations weekly and keep a training matrix to
track each team member's progress. These simulations exercise the entire
ground system and tracking networks. The MOCs perform multiple mission
operations simulations to resolve all issues and to exercise the contingency
and launch scrub procedures.
• Ground tracking networks are ready to support the on-orbit operations—
A number of ground-based receive and tracking sites have been established
around the world to support satellite operations. For example, NOAA
operates Command and Data Acquisition ground stations in Virginia and
Alaska to track satellites in low-Earth orbit. The Deep Space Network, which
tracks and receives from interplanetary and other extremely distant
spacecraft, has stations in California, Spain, and Australia.
Early in the planning phase, mission architects must decide which ground
stations to use based on the orbit and trajectory of the spacecraft and the
availability of potential stations. As the spacecraft flies over a ground site, it
establishes communication with the site, then downlinks data to it. The
satellite controllers send commands to the spacecraft through antennas
located at these sites. The tracking network is ready once we've scheduled the
use of each site, all agreements are in place, and we've performed a
communication test by flowing data to that site.
• Space networks (if applicable) are ready—If a space network, such as
TDRSS, is required, the systems engineers must verify that it's ready to
support on-orbit operations. This means we must have signed agreements
documenting the use schedule and verifying communication through the
space network asset by flowing data from the spacecraft.
• Mission timeline is clear and well understood—This plan includes
documented timelines of pre-launch, ascent, and initial orbital operations.
Often the satellite must be maneuvered after launch into its operational orbit.
We must document the burn sequence for maneuver operations, develop the
software scripts, and practice maneuver operations during end-to-end
testing. The maneuver team documents and tests contingency procedures.
• Contingency philosophy is clear and well understood for all mission
phases, especially launch countdown and early-orbit check-out—During
these mission phases, the time for reacting to anomalies is short. Success
test their electrical functionality using simulators. Technicians verify that the flight
EED has the appropriate safing mechanism installed with "remove before flight"
safety flags.
We conduct every step in this process using a carefully scripted procedure and
in the presence of quality assurance (QA) personnel. Completion of each
procedure step requires the signatures of at least two people—the technician or
engineer performing the step and the QA representative. Repeated functional
checks are done at each step, consistent with the integration and test processes
described in Chapters 10 and 11.
The last step of integration is placing the payload fairing onto the integrated
launch vehicle to cover the payload. After the fairing is placed, access to the
satellite becomes limited. Strategically placed doors allow access to safety critical
items (e.g., arming EEDs, venting high-pressure vessels, disposing of hazardous
materials such as hydrazine) and allow monitoring of satellite temperature and
pressure to ascertain that proper environmental conditions have been maintained.
The series of photos in Figure 12-6 shows ESA's Venus Express Satellite
undergoing launch vehicle integration. In this example, the payload with the
fairing installed is attached to the Soyuz-Fregat launch vehicle in the MIK-40
integration building.
Figure 12-7 shows GOES-N, which was integrated with the launch vehicle on
the launch pad. For this integration, only the payload module was rolled out to the
launch pad, then attached to the launch vehicle.
Figure 12-6. Venus Express. Here we show the Venus Express during launch vehicle
integration, Fall 2005. a) 30 Oct 2005: Venus Express is inspected before
integration. b) 30 Oct 2005: Inspection of the fairing. c) 30 Oct 2005: The Venus
Express spacecraft and Fregat upper stage in the MIK-112 building. d) 4 Nov 2005:
Final integration of the Soyuz launch vehicle with the Venus Express Spacecraft has
taken place in the MIK-40 integration building. The launch vehicle and its payload
are now fully assembled and preparations begin for rollout to the launch pad.
(Source: European Space Agency website)
Figure 12-8. Roll-out to the Pad. Here we show various approaches for transporting the launch
vehicle to the pad. a) Nov 5, 2005: ESA’s Venus Express is transported to the
launch pad horizontally (left). After arriving on the launch pad, it’s raised to the
upright position for launch. (Courtesy of European Space Agency) b) May 26, 2005:
NASA’s Crawler hauls the Space Shuttle Discovery’s launch stack back to the
Vehicle Assembly Building. The external tank is swapped with another, which has
an additional heater to minimize ice debris that could strike Discovery during launch.
(Courtesy of NASA Kennedy Space Center) c) March 9, 2007: Ariane 5 rolls out
with INSAT 4B and Skynet 5A. (Source: EFE, the Spanish News Agency website.)
FIGURE 12-9. Soyuz on the Launcher. The Soyuz launcher is integrated horizontally, then
erected on the pad. a) November 5, 2005: The integrated Soyuz launcher being
prepared to tilt the rocket to vertical position on the launch pad. b) November 5,
2005: Tilting launcher to vertical position, c) November 5, 2005: The fully integrated
Soyuz FG-Fregat vehicle carrying Venus Express stands upright on the launch
platform, secured by the four support arms. (Courtesy of European Space Agency)
• Confirm that each stage of the launch vehicle can ignite and separate at the
right time. We also test the flight termination system to be sure that it
responds to the range safety officer's commands from the control center. Of
course, no ordnance is actually activated during these tests. It's placed in a
safe position by mechanical safe and arm devices, which isolate all ordnance
initiators from the circuit by mechanically rotating them out of line. And safe
plugs are placed on the circuit to short it, which allows for continuity checks.
• Validate blockhouse functionality by establishing that it can send commands,
for example to charge batteries, safe a circuit, turn power off, and receive data.
• Prove that the mission communication system is operational. As the
spacecraft flies over a receive site, it establishes communication with the
site, then dumps data to it. The controllers must be able to send commands
to the spacecraft through their antennas. We verify communication system
performance by flowing data from the satellite to all receive sites around the
world—usually before the satellite is mated to the launch vehicle.
Spacecraft controllers also send commands to each of these sites to see that
the site transmits them.
The ground segment is ready for launch when the TT&C is configured for launch,
and personnel are trained to handle nominal and anomalous situations. In ground
segment functional tests:
Table 12-3 defines the entrance and success criteria for a flight readiness
review.
Table 12-3. Flight Readiness Review (FRR) Entrance and Success Criteria [NASA, 2007 (1)].
Here we show a useful checklist to decide when a system is ready for its FRR and
afterwards, to determine if it is, indeed, ready for flight. (DGA is Designated Governing
Authority.)
Entrance criteria:
1. Certification has been received that flight operations can safely proceed with acceptable risk
2. The system and support elements have been confirmed as properly configured and ready for flight
3. Interfaces are compatible and function as expected
4. The system state supports a launch "go" decision based on go/no-go criteria
5. Flight failures and anomalies from previously completed flights and reviews have been resolved and the results incorporated into all supporting and enabling operational products
6. The system has been configured for flight
Success criteria:
1. The flight vehicle is ready for flight
2. The hardware is deemed acceptably safe for flight (i.e., meeting the established acceptable risk criteria or documented as being accepted by the project manager and DGA)
3. Flight and ground software elements are ready to support flight and flight operations
4. Interfaces are checked and found to be functional
5. Open items and waivers have been examined and found to be acceptable
6. The flight and recovery environmental factors are within constraints
7. All open safety and mission risk items have been addressed
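Teams often track such criteria as a simple status roll-up before convening the review. The sketch below is illustrative; the criteria strings paraphrase the entrance criteria above and the status values are invented for the example.

# Illustrative roll-up of FRR entrance-criteria status (invented status values).

entrance_criteria = {
    "Certification received that flight operations can proceed with acceptable risk": True,
    "System and support elements properly configured and ready for flight": True,
    "Interfaces compatible and functioning as expected": True,
    "System state supports a launch 'go' decision": False,  # still open
}

open_items = [text for text, satisfied in entrance_criteria.items() if not satisfied]
if open_items:
    print(f"Hold: {len(open_items)} entrance criterion(s) still open")
    for item in open_items:
        print(" -", item)
else:
    print("All entrance criteria satisfied; ready to convene the FRR")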
Figure 12-11. Pictorial Depiction of the Launch and Operational Phases. The figure shows five
steps: (1) launch-site control; (2) controlled-ascent profile for launch vehicles;
(3) acquire radio frequency and attitude signals; (4) verify health and safety and
assess system performance; (5) begin activities to meet mission objectives.
Handover to the end user occurs at item 4 in the picture. (Source: NASA)
TABLE 12-4. Description of Launch and Early-orbit (L&EO) Operations Phases. This table lists
the major steps involved in each phase of launch and early operations [Boden and
Larson, 1996].
(normal and contingencies), and that the system is now ready to begin the
operations phase. Below are the conditions needed for a successful PLAR:
• Launch and orbital insertion phase has been completed and documented,
especially variances between planned and actual events. The systems
engineers must have an explanation of the impact of each variance on the
overall mission. For example, if additional maneuvers were required to get
into orbit, the systems engineers must explain why these maneuvers were
required, how much propellant they used, and how much propellant remains.
• The spacecraft bus and payload have successfully passed all functional
tests, including all operational modes. Here again, the systems engineers
must explain all variances in test results and procedures. A list of each test
performed, its results, variances from expected, impact of each variance on
operations, and how lessons learned from each variance are incorporated
Table 12-5 defines the entrance and success criteria for a post-launch
assessment review.
TABLE 12-5. Post-launch Assessment Review Entrance and Success Criteria. This table gives a
useful checklist to determine that a spacecraft is ready to commence normal
operations. [NASA, 2007 (1)]
Entrance criteria:
• The launch and early operations performance, including the early propulsive maneuver results, are available
• The observed spacecraft and science instrument performance, including instrument calibration plans and status, is available
• The launch vehicle performance assessment and mission implications, including launch sequence assessment and launch operations experience with lessons learned, are completed
• The mission operations and ground data system experience, including tracking and data acquisition support and spacecraft telemetry data analysis, is available
• The mission operations organization, including status of staffing, facilities, tools, and mission software (e.g., spacecraft analysis and sequencing), is available
• In-flight anomalies and the responsive actions taken, including any autonomous fault protection actions taken by the spacecraft or any unexplained spacecraft telemetry, including alarms, are documented
• The need for significant changes to procedures, interface agreements, software, and staffing has been documented
• Documentation is updated, including any updates originating from the early operations experience
• Future development and test plans are developed
Success criteria:
• The observed spacecraft and science payload performance agrees with prediction, or if not, is adequately understood so that future behavior can be predicted with confidence
• All anomalies have been adequately documented, and their impact on operations assessed. Further, anomalies affecting spacecraft health and safety or critical flight operations are properly disposed.
• The mission operations capabilities, including staffing and plans, are adequate to accommodate the actual flight performance
• Liens, if any, on operations identified as part of the operational readiness review have been satisfactorily disposed
recurring functions such as trend analysis and mission planning. Certainly they
will need to participate in unscheduled activities such as anomaly resolution and
incident investigations. These sustaining activities are often underestimated or
neglected in the developers' planning and budgeting processes and can be a
significant burden when missions last far longer than originally anticipated.
Voyager operations and the Mars rovers Spirit and Opportunity are classic
examples of this "nice to have" problem.
References
Boden, Daryl G. and Wiley J. Larson. 1996. Cost Effective Space Mission Operations. New York:
McGraw-Hill.
Larson, Wiley J., Robert S. Ryan, Vernon J. Weyers, and Douglas H. Kirkpatrick. 2005. Space
Launch and Transportation Systems. Government Printing Office, Washington, D.C.
Lewis Spacecraft Mission Failure Investigation Board. 12 Feb 1998. Final Report.
Washington, DC: NASA.
National Aeronautics and Space Administration (NASA). January 30, 1995. Lessons
Learned website, Public Lessons Learned Entry 0376.
NASA (1). March 26, 2007. NPR 7123.1a, NASA Systems Engineering Processes and
Requirements. Washington, DC: NASA.
NASA (2). March 6, 2007. NPR 7120.5D, NASA Space Flight Program and Project Management
Requirements. Washington, DC: NASA.
Chapter 13
Plan and Manage the Technical Effort
"It's not the plan, it's the planning." Dwight D. Eisenhower
"The battle plan is obsolete once the first shot is fired." Norman Schwarzkopf
Any plan includes what to do, who will do what, when the work must be
complete, and how many resources will be available to do it. Thus, we have four
main objectives in technical planning, as shown in Table 13-2.
Table 13-2. Matching Technical Planning’s Four Main Objectives to Key Activities. These
activities produce the plan, which we revisit as the need arises.
Our ultimate objective isn't the plan but completing the technical work—for
this book, developing a space system and transitioning it to mission operations.
Technical management activities apply the plan and complete the work, including
adjusting the technical plans to respond to the inevitable deviations between plans
and reality. Figure 13-1 shows these activities, and Sections 13.4 through 13.6
describe them.
FIGURE 13-1. Progression of Technical Planning Activities. This chapter discusses the four
main activities that constitute technical planning, plus the technical management
that implements the planning. The diagram includes the planning products and
shows how these activities depend on each other.
Figure 13-2. Typical System View of an Aircraft in Its Environment of Use [ISO, 2002]. We
can view an entity at any level in a hierarchical structure as a system, a system
element, or a system’s operational environment.
By establishing the system of interest, we limit the scope of this work at any
location because technical planners need to consider only one level up and one
level down in the system hierarchy. For example, in Figure 13-2, technical planners
for the navigation system work within the context, environment, and system
performance set only by the aircraft system; they needn't consider the air transport
system's environment and context. Likewise, technical work for the navigation
system defines the context, environment, and system performance for the
navigation system's elements: the GPS receiver, display, and so forth. But its scope
focuses only on integrating these elements, rather than exhaustively describing
each element's technical work.
We consider also the FireSAT case study. Figure 13-3 shows various system of
interest perspectives, including the overall system (the project), the FireSAT
system (integrated spacecraft and payload), the spacecraft, and the spacecraft's
electrical power subsystem. Indeed, the system of interest for the FireSAT SEMP is
the FireSAT system; other elements of the project, such as the ground systems,
provide context and environmental considerations. Also, the SEMP describes only
one level of refinement in the spacecraft and payload systems. For instance,
whereas it identifies the spacecraft and payload as system elements, it doesn't
address components of the electrical power subsystem. The SEMP's technical plan
encompasses the overall technical work for the electrical power subsystems, but it
defers their details to technical planning.
One consequence of this approach is that we understand more as we step
through levels of refinement in the systems hierarchy. Presumably, a good
understanding of the FireSAT project from Figure 13-3 precedes understanding of
the FireSAT system, consistent with the discussion in Chapter 2. This progression
isn't a strict waterfall, as any systems engineer knows that system definition is
iterative. Rather, technical planning moves sequentially from higher to lower
levels in the system hierarchy. Although concurrency is desirable in that it leads to
a more effective system, it comes at the price of iteration.
FIGURE 13-3. System-of-Interest Diagram for FireSAT. Here we depict the differing system
environments and elements for the FireSAT project, the FireSAT system, and the
FireSAT spacecraft.
the phasing sets its pace. (We define these technical products fully in Section 13.2.)
This historical survey also helps us scope other elements of the technical effort:
• Technical reporting requirements, including measures of effectiveness
(MOEs), measures of performance, and technical performance measures
(Chapter 2 discusses MOE development)
• Key technical events, with technical information needed for reviews or to
satisfy criteria for entering or exiting phases of system lifecycle management
(see our brief discussion in Section 13.2 and full discussion in Chapter 18
concerning technical data packages for technical milestone reviews)
• Product and process measures to gauge technical performance, cost, and
schedule progress. Given the preliminary definition and time phasing of
technical products, we develop resource requirements as described in
Chapter 7 and may then track technical progress against the technical plan,
as described in Section 13.6.
• The approach for collecting and storing data, as well as how to analyze,
report, and if necessary, store measurement data as federal records (Chapters
16 and 17 discuss configuration and data management respectively)
• How we should manage technical risks in planning (Chapter 8)
• Tools and engineering methods the technical effort will use (Chapter 14 tells
how they fit into SEMP development)
• The technical integration approach for integrated systems analysis and
cross-discipline integration (Section 13.2.3)
Though not exhaustive, this list typifies the foundational technical planning we
can complete based on a historical survey. Reviewing technical plans for recently
developed Earth-observing satellites, for example, gives technical planners
significant insight for the FireSAT project. We save time and effort by refining
these basic plans, as described in Section 13.2, rather than generating all the
planning information from scratch.
Figure 13-4. Using NASA Procedural Requirement (NPR) 7123.1a Process Outputs as a
Reference. The process outputs in NPR 7123.1a Appendix C are a point of
departure for identifying FireSAT’s technical artifacts.
Tailoring acquisition to fit various strategies is the rule rather than the
exception. A project's acquisition strategy strongly influences its technical artifacts.
The technical products in Table 13-3 would satisfy the technical work for FireSAT's
system elements, if we were making everything. The bolded items in the table are
what they'd be if we were buying everything. In this case, we would scope the
technical work to get the bolded products from suppliers and then oversee the
prime contractors, who would produce the non-bolded items. Acquisition uses
pure "make" or "buy" only in exceptional circumstances. For example,, TRW was
the prime contractor on NASA's Chandra X-ray Observatory to:
Table 13-3. FireSAT’s Technical Products. Deliverables for the FireSAT Project include plans,
specifications, drawings, and other technical artifacts produced by systems
engineering processes in addition to the operable system.
FIGURE 13-5. The Stage-Gate Process for System Development [Cooper, 1990]. A typical
stage-gate process models system development like manufacturing, with quality
control checkpoints, or gates, between each process stage. The stages shown are:
preliminary assessment; detailed investigation (business case preparation);
development; testing and validation; and full production and market launch.
Stage-gate development dates to at least the 1960s in industry and the 1940s and
1950s in government, particularly the US Department of Defense (DoD). Space
systems development today uses various stage-gate processes. NASA employs four
different ones, depending on the type of system development; for example, the one
for flight systems has seven stages or phases. The DoD's stage-gate process for
system acquisitions has five phases, and IEEE-1220 specifies six phases [IEEE, 2005].
Although the number of phases appears arbitrary, it always depends on the system
Figure 13-6. How the Technical Baseline Evolves. Each successive technical baseline builds
on the preceding ones and is a step toward the operational system. (See Figure 13-7
for acronym definitions.)
Figure 13-7 shows the key technical events within the system's lifecycle, based on
NPR 7123.1a. Appendix G of the NPR describes review objectives, as well as entry
and success criteria. (Chapter 18 gives a full description of technical review and
assessment.)
Figure 13-7. Key Technical Events during the Flight System’s Lifecycle. The project must
pass these reviews successfully to continue through the lifecycle.
Entrance and success criteria for reviews help us refine the technical baseline
by specifying artifacts at each key technical event. For instance, consider the
system requirements review (SRR) for NASA's Ares 1 launch vehicle. Table 13-4
lists the success criteria, adapted from NPR 7123.1a Appendix G, with the technical
artifacts reviewed as objective evidence that the criteria are fully satisfied.
Figure 13-8 shows how to develop notional technical baselines in terms of their
technical artifacts. The as-deployed baseline has three reviews associated with it,
but this depends strongly on the type of system. For instance, Space Shuttle
Table 13-4. Technical Work Products for a System Requirements Review of NASA's Ares 1
[NASA, 2006 (2)]. The first column lists success criteria for the system requirements
review. The second column lists the technical artifacts offered as objective evidence of
having met the criteria.
1. Success criterion: The resulting overall concept is reasonable, feasible, complete, responsive to the mission requirements, and consistent with available resources, such as schedule, mass, or power. Artifacts: Requirements validation matrix, Ares system requirements document, requirements traceability matrix, integrated vehicle design definition, constellation architecture and requirements document.
2. Success criterion: The project uses a sound process for controlling and allocating requirements throughout levels and has a plan to complete the system definition within schedule constraints. Artifacts: Systems engineering management plan, project plan, configuration management plan, data management plan.
3. Success criterion: The project has defined top-level mission requirements, interfaces with external entities, and interfaces between major internal elements. Artifacts: Ares system requirements document, interface requirements documents, functional flow block diagrams, constellation architecture and requirements document, design reference missions.
4. Success criterion: Planners have allocated requirements and flowed down key driving requirements to the elements. Artifacts: Traceability matrix, functional flow block diagrams, Ares system requirements document.
5. Success criterion: System and element design approaches and operational concepts exist and are consistent with requirements. Artifacts: Operations concept document, integrated vehicle design definition, design reference missions.
7. Success criterion: Planners have identified major risks and ways to handle them. Artifacts: Risk management plan, Ares risk list, functional failure mode and effects analyses.
8. Success criterion: The project's technical maturity and planning are adequate to proceed to preliminary design review. Artifacts: Pre-board and board assessments, technical plans.
Applying success criteria for each review yields the technical baseline's
phased evolution, as shown in Figure 13-9. For example, baseline releases of all six
Figure 13-8. Progressive Development of the Technical Baseline. The technical baseline
matures incrementally as a project baselines groups of technical products following
key technical events. (MCR is mission concept review; SRR is system
requirements review; SDR is system definition review; PDR is preliminary design
review; CDR is critical design review; SAR is system acceptance review; ORR is
operational readiness review; FRR is flight readiness review; PLAR is post-launch
assessment review.)
documents for the system baseline must be ready to support the system
requirements review. If approved following their respective reviews, these
technical artifacts then become part of their respective baselines.
This synchronizing of discrete technical baselines, development phases, and
associated key technical events isn't coincidental; it has developed over the years
and converged around logical breaks in the nature of technical activities during a
system's development. The concept of a discretely evolving technical baseline is
important for two reasons:
Figure 13-9. Evolution of FireSAT’s Technical Baseline. Analyzing the success criteria of each
key project milestone against the technical work products enables us to begin
identifying the technical artifacts that will make up the technical baseline. Here we
show only the first four milestone reviews.
FIGURE 13-10. NxN Matrix for Launch Vehicles [Blair et al., 2001]. For thermal analyses and
design, the thermal discipline needs outputs from aerodynamic analyses, including
histories of ascent and entry heating, compartment flow rates, and plume heating
environments.
Figure 13-11. NxN Diagram for the FireSAT System. The diagram includes selected
interdisciplinary interactions for analysis and design of the telecommunication
subsystem.
loads or electrical power consumption. But integrated systems analysis for the
spacecraft requires data and information from the payload technical effort, so
appropriate contracts or technical directives must include provisions for this data.
As the FireSAT example illustrates, the system of interest significantly
influences technical integration, so we must account for it in technical integration
planning. For instance, if the acquisition strategy has suppliers providing
subsystems with relatively intense interactions, we must plan appropriate
integrating mechanisms. We might include technical integration planning in the
contract requirements. Or we might have to consider how much time the project
schedule allows for a design and analysis cycle (DAC). The DAC iterates integrated
engineering analysis across all disciplines, as illustrated by the NxN matrix in
Figure 13-11. Planners typically schedule DACs to end with the project's various
formal reviews, to best support evolution of the technical baseline.
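At its core, an NxN matrix is just a bookkeeping device: row i records what discipline i supplies, and column j records what discipline j needs. A minimal sketch of that bookkeeping appears below; the discipline names and data items are illustrative stand-ins, not entries from the Ares or FireSAT matrices in Figures 13-10 and 13-11.

# Sketch only: an NxN discipline-interaction matrix held as a nested dictionary.
# NXN[row][column] names the data the row discipline supplies to the column
# discipline; the disciplines and data items below are illustrative, not the
# actual Ares or FireSAT entries.

NXN = {
    "Aerodynamics": {
        "Thermal": "ascent and entry heating histories, plume heating environments",
        "Loads": "aerodynamic pressure distributions",
    },
    "Propulsion": {
        "Loads": "engine thrust and gimballing forces",
        "Thermal": "plume radiation",
    },
    "Thermal": {
        "Power": "heater duty cycles",
    },
}

def inputs_to(discipline):
    """Data a discipline needs from the others (reading down its column)."""
    return {row: data[discipline] for row, data in NXN.items() if discipline in data}

def outputs_from(discipline):
    """Data a discipline owes the others (reading across its row)."""
    return dict(NXN.get(discipline, {}))

print(inputs_to("Thermal"))     # what thermal analysis must wait for
print(outputs_from("Thermal"))  # what thermal analysis must deliver

Querying the matrix both ways, for a discipline's required inputs and its owed outputs, supplies exactly the information a planner needs when sequencing the discipline analyses in a design and analysis cycle.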
Figure 13-12 depicts the planned progression of design and analysis cycles
through system development of the Ares 1 launch vehicle. This progression
supports the technical product deliverables for each key technical event, as
described in Section 13.3.4. Brown [1996] describes the DAC process used to
develop the International Space Station.
In summary, planning for technical integration identifies interdependencies
between engineering disciplines and improves project success in three ways:
FIGURE 13-12. Design and Analysis Cycle (DAC) Support of Reviews for the Ares Launch
Vehicle Project [NASA, 2006 (4)]. DACs and verification analysis cycles (VACs)
support key technical events and provide objective evidence for decisions
concerning the technical baseline. A VAC provides results of analytical verification.
(SRR is system requirements review; PDR is preliminary design review; CDR is
critical design review; DCR is design certification review.)
FIGURE 13-14. Basic Test Environment for Flight Software [Pfarr and Obenschain, 2008].
Each software interface to the hardware requires a simulator, including the interface
to the ground system. (T&C is telemetry and command.)
Figure 13-15. System-level WBS for the FireSAT Project. The WBS comprises 1.01 Project management, 1.02 Systems engineering, 1.03 Safety and mission assurance, 1.04 Science and technology, 1.05 Payload, 1.06 Spacecraft, 1.07 Mission operations, 1.08 Launch vehicle and services, 1.09 Ground systems, 1.10 System integration and testing, and 1.11 Education and public outreach, with lower-level elements such as 1.05.7 Electrical power subsystem, 1.05.8 Thermal control, 1.05.9 Structures, 1.05.10 Mechanisms, and 1.05.11 Harness.
A project's WBS contains end product and enabling product elements. The end
product part is based on the physical architectures developed from operational
requirements. The enabling product part identifies the products and services required
to develop, produce, and support the system throughout its lifecycle. The WBS is a
foundation for all project activities, including planning for the project and its technical
activities; defining the event schedule; managing the system's configuration, risk, and
data; preparing specifications and statements of work; reporting status and analyzing
problems; estimating costs; and budgeting [DoD, 2000].
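Because WBS numbering already encodes the hierarchy, a lightweight data structure suffices for many planning queries, such as rolling up everything beneath a given element. The sketch below is illustrative only; the handful of element numbers and titles follow the FireSAT WBS, but the parent links and helper functions are ours.

# Sketch only: a WBS held as element number -> (title, parent element).
# The element numbers and titles follow the FireSAT WBS; the structure and
# helper functions are illustrative.

WBS = {
    "1.0":    ("FireSAT project",     None),
    "1.01":   ("Project management",  "1.0"),
    "1.02":   ("Systems engineering", "1.0"),
    "1.05":   ("Payload",             "1.0"),
    "1.06":   ("Spacecraft",          "1.0"),
    "1.05.8": ("Thermal control",     "1.05"),
    "1.05.9": ("Structures",          "1.05"),
}

def children(element):
    """Immediate children of a WBS element."""
    return sorted(e for e, (_, parent) in WBS.items() if parent == element)

def subtree(element):
    """The element and everything below it, depth first."""
    out = [element]
    for child in children(element):
        out.extend(subtree(child))
    return out

for e in subtree("1.05"):
    print(e, WBS[e][0])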
To illustrate how the WBS organizes a project, we consider how to map the list
of technical products in Table 13-2 to the WBS. The concept of operations and
system requirements document are normally systems engineering products, so we
identify them with WBS 1.02. The manufacturing and assembly plan maps to 1.05
and 1.06, so we have to decide whether to develop separate plans for the payload
and spacecraft or a single plan that addresses both. If we assign a single plan to 1.05
or 1.06, the other WBS element has an input to the plan—identified as one of its
technical products. Figure 13-16 shows part of the mapping of FireSAT's technical
products from Table 13-3 to its WBS from Figure 13-15.
Figure 13-16. Categorizing the FireSAT Project’s Technical Work Products by Work
Breakdown Structure (WBS). The WBS logically groups technical products.
projects, a process is the only deliverable. As with those for development, someone
must decide how to manage them.
It's usually appropriate to have both product- and process-oriented elements
within the organizational structure. The most common management strategy for
processes is to form one organizational element, such as an office, department, or
branch, and then integrate the deliverable products into a coherent, effective
system. A common name for this element is the "systems engineering and
integration office." It integrates the efforts of the individual product teams,
ensuring that they communicate and effectively apply accepted systems
engineering principles to developing the product. The element delivers an
integration process, whose effectiveness we measure by the integrated product's
success or failure to meet total system requirements with the best balance of cost,
schedule, and performance.
A project's acquisition strategy strongly influences organizational structure,
which typically forms around major suppliers' products or processes. The more
diverse the suppliers for end product and enabling product elements (processes),
the more likely they will affect the project's organization. One example is the
Chandra X-ray Observatory acquisition strategy we describe in Section 13.2.
Although the acquiring organization may combine elements into a single
organization to manage suppliers cleanly, the suppliers may refine that structure
to reflect their own distinct elements.
To illustrate, let's consider the project organization for the FireSAT Project. From
FireSAT's SEMP, the acquisition strategy relies on a prime contractor, with NASA's
Goddard Space Flight Center serving as the procuring organization. The prime
contractor produces the integrated FireSAT System (spacecraft and payload) and
support for launch service integration, and the government provides launch
services. The prime contractor also supplies some elements of the mission's
operations and command, control, and communication architecture in a joint
development with the government. The government furnishes the ground element,
including the communication and mission operations infrastructure, and the launch
site, including launch processing and operations. The FireSAT system requires three
distinct acquisition approaches:
• Procure deliverables from a prime contractor
• Get ground systems and launch services from the government
• Employ a prime contractor and the government for operations
Starting with the WBS in Figure 13-15 and using the prime contractor for WBS
elements 1.05 and 1.06, it's appropriate to combine the two into an organizational
element that covers the scope of the government's contract with the prime
contractor. Likewise, with the government providing the ground systems (WBS
1.09) and launch services (WBS 1.08), it makes sense to form organizational
elements to manage those activities. Goddard will acquire FireSAT for the US
Forest Service's firefighting office, and the National Oceanic and Atmospheric
Administration (NOAA) will operate it.
FIGURE 13-17. Office Organization for the FireSAT Project. Each office's project responsibilities
correspond to elements of the work breakdown structure (WBS). (SE&I is systems
engineering and integration.)
activity [PMI, 2004]. We build them at a high level (WBS elements) or a detailed
level (individual work products), and nest them in hierarchies. Developing them
with participating organizations inevitably leads to clearer organizational roles
and responsibilities, especially for process-oriented organizational elements.
Returning to the FireSAT case, we need to create a responsibility assignment
matrix for the technical work products in Table 13-3. Here, we map them to the
organization shown in Figure 13-17. Table 13-5 captures the results. Because this is
a matrix for the technical work products, it shows limited roles for the
management support office. The project office has the approval role for the major
plans and for acceptance of the end products, but the systems engineering and
integration office approves most of the technical work that other offices produce.
The matrix reveals the scale of this office's responsibilities: it must produce most
of the project's key planning documents.
As the procuring organization, Goddard turns to its own directorates for
engineering and for safety and mission assurance to support systems engineering.
These directorates can use a refined responsibility assignment matrix to establish
their matrixed team members' roles and responsibilities (Table 13-6).
Table 13-6. Refined Responsibility Assignment Matrix from the Project Office for Systems Engineering and Integration. The matrix assigns an organizational role for each WBS technical product (beginning with the system-level integrated master plan) across organizations including the project office, the SE&I office, engineering, safety and mission assurance (S&MA), and the prime contractor.
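In practice a responsibility assignment matrix is just a mapping from a (technical product, organization) pair to a role. The sketch below illustrates the idea with invented entries and generic responsible/approve/contribute/informed role codes; it does not reproduce the actual assignments in Tables 13-5 and 13-6.

# Sketch only: a responsibility assignment matrix keyed by (product, organization).
# Role codes: R = responsible (produces), A = approves, C = contributes, I = informed.
# The products and assignments are invented examples, not Tables 13-5 or 13-6.

RAM = {
    ("Systems engineering management plan", "SE&I office"):      "R",
    ("Systems engineering management plan", "Project office"):   "A",
    ("System requirements document",        "Engineering"):      "R",
    ("System requirements document",        "SE&I office"):      "A",
    ("Mass properties report",              "Prime contractor"): "R",
    ("Mass properties report",              "Engineering"):      "I",
}

def responsibilities(organization):
    """Every product an organization touches, with its role."""
    return {product: role for (product, org), role in RAM.items() if org == organization}

def who_has(role, product):
    """Which organizations hold a given role for a given product."""
    return [org for (prod, org), r in RAM.items() if prod == product and r == role]

print(responsibilities("SE&I office"))
print(who_has("R", "System requirements document"))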
Figure 13-18. How the Integrated Master Plan (IMP) and Integrated Master Schedule (IMS)
Relate [DoD, 2005]. The IMP consists of key events and their planned occurrence
dates; the IMS details all the activities required to achieve the IMP.
mission operations. But the critical design reviews (CDRs) for these same
elements come before the project CDR. Exceptions may be necessary. For
example, requirement reviews for long-lead items may occur before those for the
system into which they will integrate. When the sequencing changes, managers
must identify and manage the risks (Chapter 8).
Figure 13-19 shows an integrated master plan for developing NASA's Ares 1
launch vehicle. Technical reviews dominate the milestones, but the project has
other key events:
• Deliveries of test articles, such as development motors one and two for the
first-stage element
• The cold-fire and hot-fire events for the main propulsion test article on the
upper-stage element
• Deliveries of flight elements to support launch integration for test and
operational flights
• Test flights and operational flights
Figure 13-19. Integrated Master Plan (IMP) for NASA’s Ares 1 Launch Vehicle. The IMP
identifies the milestones that managers use to schedule the technical work. (TIM is
technical interchange meeting; RBR is requirements baseline review; SRR is
systems requirements review; NAR is non-advocate review; PDR is preliminary
design review; CDR is critical design review; MPTA CF is main propulsion test
article cold flow test; MPTA HF is main propulsion test article hot flow test; SRB is
standing review board; ICDR is initial critical design review; DCR is design
certification review; DDCR is demonstration design certification review; DM-1 is
development motor #1; QM-1 is qualification motor #1; SWRR is software
requirements review.)
The FireSAT Project has a dedicated series of key milestone events, as will the
FireSAT System, ground system, and mission operations. These events are
scheduled to support each other and the integrated FireSAT Project. Figure 13-20
shows a multi-level IMP that captures acquisition milestones, technical reviews,
launch dates, operational capability milestones, and system decommissioning.
Events in the integrated master plan set the pace for technical work. Because
each event has defined expectations, mapping event descriptions to the associated
technical work products is a good starting point for the IMS. Although the number
of tasks often appears unmanageable, a well-constructed IMS effectively guides
managers in controlling the daily technical work.
To develop an effective IMS, we rely on hierarchical tiers in the integrated
master plan that correspond to the work breakdown structure (WBS). These are
refined by the products for each WBS element. Each product requires tens or even
hundreds of tasks, but the structure makes them clear. The WBS and product
hierarchy also help us identify predecessor and successor relationships between
tasks. For a given product, the lead or manager usually identifies these
relationships. At the next level, managers of all products in a single WBS must
collectively identify them, and so on through the WBS hierarchy.
Given a collection of discrete tasks, along with their precedence and succession
relationships, we build a schedule network to organize the tasks and
interrelationships. Figure 13-21 shows part of a schedule network for FireSAT's
design and analysis cycle, derived from the NxN diagram in Figure 13-11.
Once the schedule network contains the duration of each task, it underpins the
program schedule using one of several techniques: the program evaluation and
review technique, the arrow diagram method (also known as the critical path
method), the precedence diagramming method, or the graphical evaluation and
review technique. Several references cover network schedule methods [PMI, 2004;
Kerzner, 1998; Hillier and Lieberman, 1995]. Figure 13-22 shows an example
schedule for a design and analysis cycle on NASA's Ares 1 launch vehicle.
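To make the critical-path idea concrete, the following sketch runs a simple forward pass over a handful of invented design-and-analysis-cycle tasks and then walks back along the driving predecessors. The task names, durations, and dependencies are assumptions for illustration, not FireSAT's or Ares's actual network.

# Sketch only: a forward-pass schedule and critical path over an invented task
# network. Each task maps to (duration in working days, list of predecessors).

tasks = {
    "Trajectory analysis":    (10, []),
    "Aerodynamic analysis":   (15, ["Trajectory analysis"]),
    "Thermal analysis":       (12, ["Aerodynamic analysis"]),
    "Structural loads":       (20, ["Aerodynamic analysis"]),
    "Mass properties update": (5,  ["Structural loads", "Thermal analysis"]),
}

def schedule(tasks):
    """Earliest start and finish for each task (assumes the network has no cycles)."""
    finish = {}
    def earliest_finish(name):
        if name not in finish:
            duration, preds = tasks[name]
            start = max((earliest_finish(p) for p in preds), default=0)
            finish[name] = start + duration
        return finish[name]
    return {name: (earliest_finish(name) - tasks[name][0], earliest_finish(name))
            for name in tasks}

def critical_path(tasks):
    """Walk back from the latest-finishing task along its driving predecessors."""
    times = schedule(tasks)
    path = [max(times, key=lambda t: times[t][1])]
    while tasks[path[-1]][1]:
        path.append(max(tasks[path[-1]][1], key=lambda p: times[p][1]))
    return list(reversed(path))

for name, (start, end) in schedule(tasks).items():
    print(f"{name}: start day {start}, finish day {end}")
print("Critical path:", " -> ".join(critical_path(tasks)))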
Finally, as with virtually all technical planning, developing the integrated
master plan and master schedule is iterative: the pace of technical work (as
reflected in the master schedule) influences the master plan's timing and content.
Figure 13-21. Schedule Network for FireSAT’s Design and Analysis Cycle. This partial
schedule network supports the preliminary design review and depicts a subset of
the task relationships illustrated in Figure 13-11.
work will be done. We get a first-order definition of the work packages for
technical work from the responsibility assignment matrix. In the FireSAT example,
Goddard Space Flight Center (GSFC) uses its Applied Engineering and
Technology Directorate (AETD) and Safety and Mission Assurance Directorate
(SMA) to do technical work for systems engineering and integration (SE&I), under
the FireSAT project office's management. By refining this matrix, we clarify which
organizations will do which work. Table 13-6 illustrates such a refined matrix.
In this example, the SE&I office has at least three formal work packages: one
each for itself and the two GSFC directorates. We may also want to develop a work
package for each of the four WBS elements assigned to the SE&I office, for a total
of twelve. If some small ones combine with others for the same performing
organization, we may end up with fewer work packages, as shown in Figure 13-23.
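The arithmetic behind those counts is simply the cross product of WBS elements and performing organizations, with small packages merged by organization. The sketch below illustrates it; which four WBS elements the SE&I office actually owns isn't spelled out here, so the element names are placeholders.

# Sketch only: work packages as intersections of WBS elements and performing
# organizations. Which four elements the SE&I office owns is not stated here,
# so the element names below are placeholders.

from itertools import product

wbs_elements = ["SE&I element A", "SE&I element B", "SE&I element C", "SE&I element D"]
organizations = ["SE&I office", "GSFC AETD (engineering)", "GSFC SMA"]

all_packages = list(product(wbs_elements, organizations))   # 4 x 3 = 12 intersections
print(len(all_packages), "work packages before merging")

merged = {}        # merge everything a single organization does into one package
for element, org in all_packages:
    merged.setdefault(org, []).append(element)
print(len(merged), "work packages after merging by organization")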
Although the refined matrix clarifies roles and responsibilities, we need more
detail to define the work package. Continuing with the FireSAT case, we assume
Goddard's organizations for engineering and for safety and mission assurance are
experienced in developing scientific satellites like FireSAT. So engineering needs
only the schedule to produce the system requirements document. On the other
hand, we need to consider the informed role assigned to engineering for the mass
properties reports. This role ranges from a casual review for errors and adherence
to the mass properties plan, to independently analyzing and comparing results.
Required resources vary widely over this spectrum, so we must precisely describe the
work package content. For the systems engineering management plan (SEMP),
FIGURE 13-22. Schedule of the Design and Analysis Cycle for NASA’s Ares Launch Vehicle. This summary schedule illustrates durations
and interdependencies between discipline analyses. The analysis for integrated flight loads has a critical-path dependency on the
results from the computational fluid dynamics (CFD) analysis. The former includes structural loads from engine thrust and
gimballing, sloshing propellant, operating the reaction control system, and so on. The latter is a numerical simulation of
aerodynamics during ascent. (FF is final finite element model; LC is load cycle; AE is acoustic environment.)
Figure 13-23. Work Packages for Systems Engineering and Integration (SE&I) on FireSAT.
The intersections of the work breakdown structure (WBS) elements and the
functional organization elements define the work packages. (GSFC is Goddard
Space Flight Center; SMA is Safety and Mission Assurance; AETD is Applied
Engineering and Technology Directorate.)
engineering also has many potential roles, so project managers must provide
precise detail to make their intentions clear. To illustrate this point, we give the
following examples of misunderstanding of intent between the acquirer and
supplier [Kerzner, 2005]:
• The task description calls for at least fifteen tests to determine a new
substance's material properties. The supplier prices twenty tests to include
some margin, but at the end of the fifteenth test the acquirer demands
fifteen more because the results are inconclusive. Those extra tests result in
a cost overrun of $40,000.
• The task description calls for a prototype to be tested in "water." The
supplier prices testing the prototype in a swimming pool. Unfortunately,
the acquirer (the US Navy in this instance) defines water to be the Atlantic
Ocean. The supplier incurs $1 million in unplanned expenses to transport
test engineers and equipment to the Atlantic Ocean for the test.
As these examples illustrate, it's hard to give a rule of thumb for the detail
needed in a work package description. Misinterpretations of task descriptions are
inevitable no matter how meticulously we prepare them, but clarity and precision
greatly reduce their number and effect. To avoid misinterpretation, we employ the
following strategies:
FIGURE 13-24. Intensity of Resource Use for Systems Engineering Processes by Project
Phase. Analogous to a cellular phone signal, zero bars represent no or very low
resource requirements, and five bars represent the highest use. We must bear in mind
that a number of bars for one process doesn’t imply the same level of resources as
that number of bars for a different process.
The resultant cost estimate invariably leads to changes in the scope of the
technical work. Space systems engineers tend to avoid risk because the systems
they build are complex, must work in an extreme environment, and have many
routes to failure. For instance, given adequate resources, the leader of FireSAT's
systems engineering and integration office wants to scope the "I" (informed) tasks
for engineering and for safety and mission assurance with independent analysis
across the board, even though not all the technical work is equally challenging.
FireSAT's project manager should remember the conservative tendencies of all
office leaders and provide enough resources for this comprehensive analysis. Risk
management (Chapter 8) converges the scope and resource estimates, reducing the
scope whenever necessary to keep the project's risk at acceptable levels.
For FireSAT, we assume we can reasonably develop the satellite using any of
several satellite buses. This assumption greatly simplifies the integrated systems
analysis and decreases the amount of work that engineering must do in reviewing
its reports for consistency with the integrated systems analysis plan. But FireSAT's
payload includes a sensor that may need cryogenic cooling, which none of the
potential satellite buses has used. Therefore, we probably need full independent
system analysis of the payload's cryogenic cooling system, including spacecraft
thermal analysis, to be sure its thermal environment allows the satellite bus to
work properly. If we don't include this detail in the technical work package for
integrated system analysis, the resources needed to complete it won't be in the
technical resource estimate for the appropriate work breakdown structure.
Another example involves the failure of NASA's Lewis spacecraft. The design
for its attitude control system (ACS) derived from a heritage design for the proven
Total Ozone Mapping Spacecraft. But Lewis's ACS design differed from the
heritage design in subtle yet ultimately significant ways. The ACS analysis, based
on the heritage spacecraft's tools and models, inaccurately predicted the control
system's performance. Following a successful launch and orbital injection, the
satellite entered a flat spin that caused a loss of solar power, fatal battery discharge,
and ultimately mission failure [Thomas, 2007].
scale and acquisition strategy. For example, NASA's Constellation program is very
large. At the system requirements review, a key technical event, teams had developed
22 distinct technical plans, plus the SEMP. Although this number may appear
unwieldy, the program's scale and duration suggest that 23 plans aren't too many.
Constellation encompasses a launch vehicle project and spacecraft project to
transport astronauts to the International Space Station and will include other launch
vehicles, spacecraft, and habitats for lunar and Mars exploration through the 2020s.
In contrast, the FireSAT program is much smaller in scale, requiring only 13 technical
plans, including the SEMP. Table 13-7 lists these plans, and the chapters that address
each one.
The number of technical plans for a project is a matter of style and preference.
For example, we could easily merge the 23 technical plans for the Constellation
Program into a single document, though a very thick one. Whether we separately
document an aspect such as technical integration planning or include it in the
SEMP isn't as important as doing the technical planning and doing it well.
Table 13-7. Key Technical Plans for FireSAT. Entries briefly describe 13 discrete technical plans for the FireSAT program and the chapters that discuss them. Some plans contain information that could split out to form a separate plan (e.g., the supportability plan includes the plan for training), whereas other documents might combine into one. (MOE is measure of effectiveness; MOP is measure of performance; TPM is technical performance measure.)

Electromagnetic Compatibility/Interference Control Plan (Chap. 5)—Describes design procedures and techniques to assure electromagnetic compatibility for subsystems and equipment. Includes management processes, design, analysis, and developmental testing.

Mass Properties Control Plan (Chap. 5)—Describes the management procedures to be used for mass properties control and verification during the various development cycle phases.

Risk Management Plan (Chap. 8)—Summarizes the risk management approach for the project, including actions to mitigate risk and program de-scope plans. Includes a technology development plan that may be split off as a stand-alone plan for technology-intensive system developments.

Manufacturing and Assembly Plan (Chap. 9)—Describes the manufacturing strategy, facility requirements, organizational implementation, quality assurance, critical assembly steps, development of the master parts list, and manufacturing data collection, including problem reporting and corrective actions (PRACA).

Master Verification Plan (Chap. 11)—Describes the approach to verification and validation for the assurance of project success. Addresses requirements for hardware and software verification and validation.

Supportability Plan (Chap. 12)—Includes initial establishment of logistics requirements, implementation of supportability analysis activities, and requirements development for supportability validation. Typically integrates the reliability and maintainability plans. Also includes the training plan.

Activation and Check-out Plan (Chap. 12)—Provides information needed for an engineering check-out of the system to determine readiness for full mission operational status. Information includes activation requirements, engineering test plans, and data evaluation tasks.

Spacecraft Systems Analysis Plan (Chap. 13)—Approach to and phasing of integrated engineering analysis of functional/logical and physical system designs, decomposition of functional requirements and allocation of performance requirements, assessments of system effectiveness (MOEs, MOPs, and TPMs), decision support, and managing risk factors and technical margins throughout the systems engineering effort.

Software Management Plan (Chap. 13)—Defines software management processes for the technical team, and describes responsibilities, standards, procedures, and organizational relationships for all software activities.

Safety and Mission Assurance Plan (Chap. 13)—Addresses the activities and steps to ensure mission success and the safety of the general public, the project workforce, and high-value equipment and property used to develop and deploy the system.

Systems Engineering Management Plan (Chap. 14)—The chief technical plan, which describes the project's technical effort and provides the integration framework for all subordinate technical plans.

Configuration Management Plan (Chap. 16)—Describes the structure of the configuration management organization and tools. This plan identifies the methods and procedures for configuration identification, configuration control, interface management, configuration traceability, configuration audits, and configuration status accounting and communications. It also describes how supplier configuration management processes will be integrated with those of the acquiring organization.

Data Management Plan (Chap. 17)—Addresses the data that the project team must capture as well as its availability. It includes plans for data rights and services, addressing issues that often require tradeoffs between the interests of various communities (e.g., acquirer versus supplier, performing versus managing, etc.).
customers must regularly review schedule and budget status to concur with
corrections to actual or anticipated deviations.
For FireSAT, the customer is the US Forest Service's firefighting office, so an
interagency agreement between NASA and the Forest Service should detail how the
customer will participate. To establish clear mutual expectations, the project
manager must work closely with the customer and get approval for the concept of
operations and systems requirements document. The customer should also help
develop and approve the master schedule and budget.
Project managers should discuss with the customer the summaries of results
from the design and analysis cycle to show how FireSAT's evolving design is
expected to meet system requirements. Managers also must engage the customer in
resolving technical issues that could affect system requirements, as well as in
managing risk, so the customer knows about events that might change expectations.
Customers also expect periodic status reports on the schedule and budget.
The active stakeholders for FireSAT include NASA's Goddard Space Flight
Center, the National Oceanic and Atmospheric Administration (NOAA), and the
prime contractor. The contract between NASA and the prime contractor commits
both to the project. Formal interagency agreements secure mutual commitment from
Goddard and NOAA to their roles and responsibilities. The Federal
Communications Commission is a passive stakeholder because its regulations
control how the FireSAT satellite uses the radio frequency spectrum to communicate
with its ground systems. Because commands from the ground to the satellite will be
encrypted, the National Security Agency is also a passive stakeholder.
Finally, organizations build roles and behaviors as they learn from and react to
their successes and failures, as technology advances change processes and
procedures, and as people come and go. Passive stakeholders for technical planning
especially emerge and fade away over time. Changes in active stakeholders are less
common, but stakeholders sometimes switch from active to passive roles.
Therefore, we must continually confirm a stakeholder's commitment, not just
assume we can "check the box" and be done.
acquiring organizations must formally approve the work and jointly issue the
directive.
A contract between an acquiring organization and a prime contractor is
analogous to a technical work directive. But the latter commonly authorizes doing
work and accruing costs within a given organization—for example, between a
project office and a functional matrix organization. Issuing technical work
directives means we've finished technical planning and are now realizing the work
products. Thus, work directives include the work package descriptions as
developed in Section 13.3.5 and cost estimates such as those in Section 13.3.6. The
directive template used by NASA's Marshall Space Flight Center (see Appendix at
the end of this chapter) is fairly typical of the sort used in large organizations.
Information on the work package and resources fills out several fields in this form.
The other fields include such items as
• Labor codes that require a visit to the organization's business office
• Names of people in the requesting and providing organizations that will be
points of contact for the technical work
• Names of approving officials for the requesting and providing organizations
In general, the approving official's level depends on the resources associated with
a technical work directive. Organizations (even small ones) have policies on who
can approve technical work directives, so we must know the policies well in
advance to keep stakeholders engaged. Also, changes to the work directive may
occur at any time before approval to capture revisions to the technical work
description, schedule, and resources. After approval, revisions require formal
changes to the directive.
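The information a technical work directive carries maps naturally onto a small record. The sketch below captures the kinds of fields discussed above; the field names, values, and the revision rule are illustrative choices, not the labels on the Marshall form in the Appendix.

# Sketch only: a technical work directive as a small record. The field names are
# invented stand-ins for the kinds of information on the MSFC form, not its
# actual field labels.

from dataclasses import dataclass, field

@dataclass
class TechnicalWorkDirective:
    requesting_org: str
    providing_org: str
    requesting_poc: str
    providing_poc: str
    work_package: str                 # description per Section 13.3.5
    cost_estimate: float              # resources per Section 13.3.6
    labor_codes: list = field(default_factory=list)
    approved: bool = False
    revisions: list = field(default_factory=list)

    def revise(self, change):
        """Capture changes freely before approval; afterward require a formal change."""
        if self.approved:
            raise RuntimeError("Directive is approved; submit a formal change instead")
        self.revisions.append(change)

directive = TechnicalWorkDirective(
    requesting_org="FireSAT project office", providing_org="GSFC AETD",
    requesting_poc="(name)", providing_poc="(name)",
    work_package="Produce the system requirements document per the SE&I schedule",
    cost_estimate=250_000.0)          # notional value
directive.revise("Add an independent mass properties cross-check")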
In the FireSAT example, the Applied Engineering and Technology Directorate
(AETD) and Safety and Mission Assurance Directorate (SMA) at Goddard receive
technical work directives to perform the work defined in their respective work
packages. Goddard determines the approving officials, e.g., FireSAT's project
manager and the directors of AETD and SMA.
schedule. They consider resource issues only by exception, mainly when they need
more resources to resolve a technical problem.
Technical status reviews differ from the project status reviews in that their
purpose is to surface and solve technical problems before they lead to cost and
schedule variances. In general, the leader of a functional organization is the expert
in that discipline, and these status meetings make this expertise available if
needed. A "quad chart" of the sort depicted in Table 13-8 commonly summarizes
status, with backup material as necessary to elaborate on technical issues or risks.
Astute functional managers also glean a lot of information from what's not on the
quad chart. For example, sketchily planned accomplishments or a dearth of
schedule milestones for the next status point may indicate a poorly conceived plan
or one overcome by events—worthy of attention in either case.
Table 13-8. Quad Chart for Technical Status. This type of chart helps functional managers stay
on top of progress and problems with the technical work.
Humphries, William R., Wayne Holland, and R. Bishop. 1999. Information Flow in the
Launch Vehicle Design/Analysis Process. NASA/TM-1999-209887, Marshall Space Flight
Center (MSFC), Alabama.
Institute of Electrical and Electronics Engineers (IEEE). 2005. IEEE Standard for Application
and Management of the Systems Engineering Process, IEEE Standard 1220-2005.
International Organization for Standardization (ISO). 2002. Systems Engineering—System Life
Cycle Processes, ISO/IEC 15288.
Kerzner, Harold. 1998. Project Management: A Systems Approach to Planning, Scheduling, and
Controlling, 6th ed. Hoboken, New Jersey: Wiley.
Monk, Gregg B. 2002. Integrated Product Team Effectiveness in the Department of Defense.
Master's Thesis, Naval Postgraduate School.
NASA. March 2007. Systems Engineering Processes and Requirements. NPR 7123.1a.
Washington, DC: NASA.
NASA. 2006 (1). Constellation Program System Engineering Management Plan, CxP 70013.
NASA Johnson Space Center (JSC), Texas: Constellation Program Management Office.
NASA. 2006 (2). Constellation Program System Requirements Review (SRR) Process Plan Annex
2.1 Crew Launch Vehicle, CxP 70006-ANX2.1. MSFC, Alabama: Exploration Launch Office.
NASA. 2006 (3). Constellation Program System Integrated Analysis Plan (SIAP) Volume 1, CxP
70009. JSC, Texas: Constellation Program Management Office.
NASA. 2006 (4). Exploration Launch Project Systems Analysis Plan (SAP), CxP 72024. MSFC,
Alabama: Exploration Launch Office.
Pfarr, Barbara, and Rick Obenschain. 2008. Applied Project Management for Space Systems.
Chapter 16, Mission Software Processes and Considerations. New York: McGraw Hill.
Project Management Institute (PMI). 2004. A Guide to the Project Management Body of
Knowledge, 3rd ed. Newtown Square, Pennsylvania: Project Management Institute.
Schaefer, William. 2008. "ISS Hardware/Software Technical Planning." Unpublished
presentation, NASA Johnson Space Center.
Thomas, L. Dale. 2007. "Selected Systems Engineering Process Deficiencies and Their
Consequences," Acta Astronautica 61:406-415.
(The form's fields include points of contact for the requesting and providing organizations, with center, mail code, email, and phone number; participating organizations; a task description, including specifications and cited references where appropriate; data requirements; government-furnished data and equipment with item numbers and delivery dates; and deliverable items.)
APPENDIX A-1. Technical Work Directive for NASA’s Marshall Space Flight Center (page 1 of 4).
APPENDIX A-2. Technical Work Directive for Marshall Space Flight Center (page 2 of 4).
APPENDIX A-3. Technical Work Directive for Marshall Space Flight Center (page 3 of 4).
APPENDIX A-4. Technical Work Directive for Marshall Space Flight Center (page 4 of 4).
FIGURE 14-1. Reasons for Planning. Planning contributes to a better project in four major ways: it eliminates or reduces uncertainty, builds a better understanding of goals, improves efficiency, and establishes a basis to monitor and control.
The SEMP planning team consists of the chief engineer, the lead systems engineer,
and other members of the systems engineering team. (Courtesy of Honourcode, Inc.)
The SEMP designates the roles and responsibilities of the space systems
engineering efforts as well as those of others on the technical team. It must explain
communication activities, including at least vertical and horizontal integration,
team communication, scope of decision making authority, and systems engineering
staffing (when-which domain emphasis). Positive team-based communication is
critical to success.
The SEMP emphasizes different efforts during each phase of the space
program acquisition. Table 14-1 gives an example, listing the technical
activities during the early phases of concept development and during the
system-level testing phase.
TABLE 14-1. Systems Engineering Management Plan Activities Versus Program Phase. The
SEMP is key to understanding each phase of the program.
14.1.2 Timelines
All experienced space systems engineers know that early in a major
acquisition program the master schedule dominates their lives. So the systems
engineering team should emphasize an achievable technical schedule early. The
integrated master plan should be compatible with the initial acquisition schedule,
but not so rigid that the work can't be done in the allotted time. Chapter 13
discusses schedule management. Figure 14-2 shows an example of a schedule for
the Constellation project.
Figure 14-2. Constellation Sequence. Early presentation of the program schedule enables
systems engineers to understand their long-term involvement.
Figure 14-3. Systems Engineering Management Plan (SEMP) Outline. The SEMP is the primary document for planning technical processes on a project. Outline items include: (3) enough detail of the system being developed for context; (4) how the individual technical efforts are to be integrated; (5) discussion of each of the 17 common technical processes as related to this project; (6) the plan to find and keep proper technology over the life of the project; (7) any SE activity or additional processes necessary for this project; (8) coordination, roles, and responsibilities between the Program Management Office and SE, plus resource allocation; (9) proof of permission to deviate from standard processes or methods; and various pieces and types of supporting information.
TABLE 14-2. Programmatic Elements and Questions. Early assessment of key "daunting questions" lowers risk. Note: the items in the right column don't correspond one-to-one with those in the left column.
Programmatic elements:
• Describe the space systems technical baseline
• Identify systems engineering processes
• Identify resources required (funding, organization, facilities)
• Identify key technical tasks and activities
• Identify success criteria for each major task
• Schedule the technical development efforts
• Identify reference documents
• Identify specialty engineering tasks
Questions:
• What are the technical issues and risks?
• Who has responsibility and authority for executing the technical project?
• What processes and tools should we use?
• Which processes must be managed and controlled?
• How does the technical effort link to outside forces?
• How many test programs must the project have?
• What are the major interfaces within each segment of the project?
• Where will all the expertise come from?
System structure. This section describes the work breakdown structure (WBS)
so that we see all technical efforts and recognize all interfaces. It gives the products
of the system structure along with the specification tree and drawing tree. It also
describes the interface specifications as the WBS is prepared to ensure that the
design team covers them.
Figure 14-4. Early FireSAT Systems Image [TSTI, 2006]. An early estimate of shape enhances
the team’s understanding of the problems.
Figure 14-5. FireSAT Concept of Operations. A high-level concept of operations provides the context within which the system will function and depicts system-level interfaces. The diagram shows the FireSAT space element exchanging telemetry and commands with the NOAA ground stations and control center; "911" downlink messages and archival data flow to the ground elements, where the wildfire command center passes "911" messages to regional field offices, which task firefighting assets. (TLM is telemetry; NOAA is National Oceanic and Atmospheric Administration.)
FIGURE 14-6. Architecture Integration Model for Constellation [NASA (2), 2006]. Here we show how to divide a complex project into definable parts. The architecture spans the crew exploration vehicle, lunar surface access module, EVA systems, flight crew equipment, crew launch vehicle, cargo launch vehicle, ground systems, and mission systems, plus future elements for resource use, robotic systems, power systems, surface mobility, habitats, the Mars transfer vehicle, and the descent/ascent vehicle, mapped against traditional systems, the Constellation architecture, and requirements and products. (CSCI is computer system configuration item; ECLS is environmental control and life support; SE&I is systems engineering and integration; T&V is test and verification; OI is operations integration; SR&QA is safety, reliability, and quality assurance; APO is advanced projects office; PP&C is program planning and control; EVA is extravehicular activity.)
verification and validation reports, and (6) who signs off on test results. The real
key is determining early who has the authority over which process (or portions of
a process in case of shared responsibility).
Support integration. This portion of the SEMP describes the integrated
support equipment that sustains the total effort. It includes databases (common
parts, etc.), computer design and manufacturing tools, planning and management
information systems, and modeling and simulation setups.
Common technical processes implementation. The SEMP describes, with
appropriate tailoring, each of the 17 common technical processes. Implementation
includes (1) defining the outcomes that satisfy the entry and exit criteria for each
lifecycle phase and (2) generating the major inputs for other technical processes.
Each process section contains a description of the approach, methods, and tools for the following:
FIGURE 14-7. Technical Effort Integration for Constellation [NASA (2), 2006]. Here we show
how the Constellation Program leaders divided the program into more manageable
tasks. (JSC is Johnson Space Center; ARC is Ames Research Center; GSFC is
Goddard Space Flight Center; SSC is Stennis Space Center; KSC is Kennedy
Space Center; GRC is Glenn Research Center; LaRC is Langley Research Center;
JPL is Jet Propulsion Laboratory; MSFC is Marshall Space Flight Center; DFRC is
Dryden Flight Research Center; S&MA is safety and mission assurance; C&P is
contracts and pricing.)
Table 14-3. Miracles Required. Risk reduction includes understanding the level of
technological maturity of major system components.
Table 14-4. FireSAT Technology Readiness Levels (TRLs). Early identification of maturity levels lessens risk. (Adapted from …)

TRL 9, Actual system flight proven through successful mission operations: In almost all cases, the end of the last "bug fixing" aspect of true system development. This TRL does not include planned product improvement of ongoing or reusable systems. FireSAT example: infrared payloads that have flown on space missions.

TRL 8, Actual system completed and flight qualified through test and demonstration (ground or space): This level is the end of true system development for most technology elements. It might include integration of new technology into an existing system. FireSAT example: space system end-to-end prototype for testing of the FireSAT concept.

TRL 7, System prototype demonstration in a space environment: This is a significant step beyond TRL 6, requiring an actual system prototype demonstration in a space environment. The prototype should be near or at the scale of the planned operational system and the demonstration must happen in space. FireSAT example: total system concept with payload supported by hardware in a communication network.

TRL 6, System or subsystem model or prototype demonstration in a relevant environment (ground or space): A major step in the level of fidelity of the technology demonstration follows the completion of TRL 5. At TRL 6, a representative model or prototype system, which goes well beyond ad hoc, "patch-cord," or discrete component level breadboarding, is tested in a relevant environment. At this level, if the only relevant environment is space, then the model or prototype must be demonstrated in space. FireSAT example: payload optics tested on aircraft flying over forest fires.

TRL 5, Component or breadboard validation in relevant environment: At this level, the fidelity of the component or breadboard being tested has to increase significantly. The basic technological elements must be integrated with reasonably realistic supporting elements so that the total applications (component level, subsystem level, or system level) can be tested in a simulated or somewhat realistic environment. FireSAT example: payload optics tested on laboratory benches with simulated wildfires.

TRL 4, Component or breadboard validation in laboratory environment: Following successful proof-of-concept work, basic technological elements must be integrated to establish that the pieces will work together to achieve concept-enabling levels of performance for a component or breadboard. This validation must support the concept that was formulated earlier, and should also be consistent with the requirements of potential system applications. The validation is relatively low fidelity compared to the eventual system: it could be composed of ad hoc discrete components in a laboratory. FireSAT example: payload optics for fire recognition based upon laboratory parts.

TRL 3, Analytical and experimental critical function or characteristic proof of concept: At this step, active research and development (R&D) is initiated. This must include both analytical studies to set the technology into an appropriate context and laboratory-based studies to physically validate that the analytical predictions are correct. These studies and experiments should constitute proof-of-concept validation of the applications and concepts formulated at TRL 2. FireSAT example: constellation design with coverage timelines and revisit times.

TRL 2, Technology concept or application formulated: Once basic physical principles are observed, practical applications of those characteristics can be invented. The application is still speculative and no experimental proof or detailed analysis exists to support the conjecture. FireSAT example: needs analysis with satellite option.

TRL 1, Basic principles observed and reported: This is the lowest level of technology readiness. Scientific research begins to be translated into applied research and development. FireSAT example: physics of a 1000° C fire being observed from space.
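A TRL assessment ultimately reduces to a lookup against these definitions plus a screen for components that fall below the program's acceptable maturity. The sketch below assumes a minimum of TRL 8, consistent with the FireSAT assumption in Table 14-5; the component list and its TRL values are invented for illustration.

# Sketch only: a TRL lookup plus a screen for components below the program's
# minimum acceptable maturity. The component list and TRL values are invented;
# the TRL 8 threshold follows the FireSAT assumption in Table 14-5.

TRL_NAMES = {
    9: "Actual system flight proven through successful mission operations",
    8: "Actual system completed and flight qualified (ground or space)",
    7: "System prototype demonstration in a space environment",
    6: "Model or prototype demonstration in a relevant environment",
    5: "Component or breadboard validation in relevant environment",
    4: "Component or breadboard validation in laboratory environment",
    3: "Analytical and experimental proof of concept",
    2: "Technology concept or application formulated",
    1: "Basic principles observed and reported",
}

def maturity_gaps(component_trls, minimum_trl=8):
    """Components below the minimum TRL, i.e., where 'miracles' may be required."""
    return {name: trl for name, trl in component_trls.items() if trl < minimum_trl}

firesat_components = {"Infrared payload optics": 9, "Satellite bus": 8, "Cryocooler": 5}
for name, trl in maturity_gaps(firesat_components).items():
    print(f"{name}: TRL {trl} ({TRL_NAMES[trl]})")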
Table 14-5. FireSAT Systems Engineering Management Plan (SEMP). This table describes what goes into a SEMP and gives a FireSAT example.

Section 1, Plan for managing SE. Contents: establish technical content; provide specifics of technical efforts; describe technical processes; illustrate the project organization; establish the technical schedule of key events; serve as the communication bridge between management and the technical team. FireSAT example: Purpose: the purpose of the FireSAT Project is to save lives and property by providing near-real-time detection and notification of wildfires within the US. Mission objectives: "The US needs the means to detect and monitor potentially dangerous wildfires." Technical effort breakout: see FireSAT preliminary architecture and ConOps. FireSAT organization: small program office (two managers, systems engineer, satellite engineer, GrSysEngr, financial). Master schedule: start ASAP, development 5 yrs, DL 6 yrs, SRR within 6 mos. Technically feasible: yes, within the 5-year development schedule—no miracles required!

Section 5, Engineering processes. Contents: each of the 17 common technical processes has a separate subsection that contains the plan for performing the process activities as appropriately tailored. Implementing the processes includes (1) generating the outcomes needed to satisfy the entry and exit criteria of the applicable lifecycle phases and (2) generating the necessary inputs for other technical processes. The section contains a description of the approach, methods, and tools. FireSAT example: during this early phase, we list the 17 processes and the phases in which they dominate. We then draft a preliminary write-up of these processes to include: 1) identifying and obtaining adequate human and other resources for performing the planned process, developing the work products, and providing the process services (we must know the peak activities across the master schedule so we can spread out the effort for ease of technical management); 2) assigning responsibility and authority for performing the process, developing the work products, and providing the services (with dates of delivery for each of these, draft and final); 3) training for the technical staff; 4) designating work products and placing them under configuration management across the master schedule; 5) identifying and involving stakeholders; and 6) monitoring and controlling the processes.

Section 6, Technology insertion. Contents: this section describes the approach and methods for identifying key technologies and their associated risks. It also sets criteria for assessing and inserting technologies, including critical technologies from technology development projects. FireSAT example: we normally use the NASA technology readiness level (TRL) approach to decide whether a technology is ready for the project. We need to determine immediately if any "miracles" are required. For FireSAT at this preliminary stage, we assume that all parts of the system have been developed to at least TRL 8. Integrating the system is the key to the technology development. This integration is definitely a challenge to the System Program Office, but not a technological one.

Section 7, SE activities. Contents: this section describes other areas not specifically included in previous sections but essential for planning and conducting the technical effort. FireSAT example: during the design concept development phase, determining necessary systems engineering tasks is of prime importance. The timing of the needs, the staffing requirements, and the tools and methodologies are required at least for 7.1 System safety, 7.2 Engineering methods and tools, and 7.3 Specialty engineering.

Section 8, Project plan integration. Contents: this section tells how the technical effort will integrate with project management and defines roles and responsibilities. It addresses how technical requirements will be integrated with the project plan to allocate resources, including time, money, and personnel, and how we will coordinate changes to the allocations. FireSAT example: at this early phase, the project plan is being put together with assistance from the chief systems engineer. This person is responsible for seeing that the project plan is consistent with the SEMP. Key to success is consistency across the management team and the engineering team with respect to the WBS, the master schedule, the staffing estimates, the breakout of contractor-internal efforts, and the basic financial estimates.

Section 9, Waivers. Contents: this section contains all approved waivers to the SE implementation plan, required for the SEMP. It also has a subsection that includes any tailored SE NPR requirements that are not related and can't be documented elsewhere in the SEMP. FireSAT example: ascertaining waivers early enables the process to proceed without undue overhead. We do this by reviewing programs similar to FireSAT (similar customers, similar orbits, similar timelines) and reviewing their waivers. We should list all waivers before the first draft of the SEMP because it drives all the processes and many of the project management activities.

Section A, Appendices. Contents: appendices provide a glossary, acronyms and abbreviations, and information published separately for convenience in document maintenance. FireSAT example: ascertaining the cross-program engineering issues early is essential to smooth operations. This effort includes information that references the customer, the needs, the vocabulary, the acronyms, etc. The categories of this information are: information that may be pertinent to multiple topic areas; charts and proprietary data applicable to the technical effort; and a summary of technical plans.
FIGURE 14-8. The Systems Engineering Management Plan (SEMP) Process. The SEMP is
built upon the foundation of responsibilities and authority, bolstered by tailoring and
waivers to established processes, and tied to other technical plans and stakeholder
commitment.
Step 2: Assign a lead and support members. The SEMP writing team is critical
to a project's success. All 17 process owners plus the program managers need
inputs. Deciding on the lead for the SEMP development is crucial.
Step 3: Schedule the SEMP. As in all project activities, setting the due dates is
essential. This schedule includes dates for the outline, section inputs, preliminary
draft, final version for the program manager and chief engineer, and the
presentation to the whole project team, both management and technical.
Step 4: Identify technology insertion needs. This portion reviews the
standard technologies and includes recommendations for inserting higher-risk
technologies into the program. To mitigate risk, we should have a parallel
development plan for the risky technologies.
Step 5: Develop technical schedule. In concert with the program management
team, the technical team must decide when, where, and who will implement the
technical activities. This input to the master schedule should allow flexibility for
dealing with unforeseen problems as well as margin for technical issues. We must
list all the processes and then spread resources across the schedule to support each
one.
Step 6: Specify parallel plans. A major space systems development requires a
whole series of plans. The SEMP should note how each plan works in concert with
it and describe approaches to assure compatibility. This includes, as a minimum, the
program management plan, the test plan, and the verification and validation plan.
Step 7: Determine needed technical expertise. We must ascertain early which
specialties we need during the development program and then schedule them as
necessary. Many times a specialty is needed throughout the program, but only
periodically. Judicious planning and scheduling of specialist time is the best way
to be sure the expertise is available when called for.
Step 8: Conduct draft reviews. Developing the SEMP requires many inputs
from many sources inside and outside of the program office. These sources must
review the SEMP many times during its development to make sure it contains all
necessary inputs before we publish it.
Step 9: Submit the final SEMP. The space program's leadership team must
review the draft SEMP. The final SEMP then goes to the leadership team for
approval.
Step 10: Present the SEMP. We should present the final version of the SEMP
during the month before the SRR. This timing lets all the participants of a technical
program review it to ensure that it meets stakeholder needs.
Step 11: Implement continual improvement. Because the SEMP has many
reviews and updates before all major milestones, we have to continually improve it.
The proven approach for continual improvement entails a single point of contact
responsible for refinements and updates to the SEMP. These changes arrive all the
time from players inside and outside the program office. We insert the updates to the
schedule in the SEMP and the master schedule in a timely manner; updates to the
technical processes require approval from their owners. This approach allows all
players to know where to get the latest version of the SEMP and how to help
improve it. A separate annex should identify a process for SEMP inputs.
Table 14-6. Factors in Tailoring. Tailoring should be applied with judgment and experience.
Tailoring adapts processes to better match the purpose, complexity, and scope
of a space project. It varies from project to project based on complexity,
uncertainty, urgency, and willingness to accept risk. If a large project misses any
steps in the systems engineering processes, it may be extremely costly to recover
or—as in the case of the failed 1999 Mars Climate Orbiter—it may not recover at
all. On the other hand, a small project that uses the full formality of all process
steps will be burdened unnecessarily and may founder in the details of analysis.
Tailoring strikes a balance between cost and risk.
Tailoring for specific project types. NASA's NPR 7120.5 specifies several
classes of space system programs and projects. Depending on the type of project,
some tailoring guidance applies as shown in Table 14-7. We have to consider the
impact of all these factors and more when tailoring, so that the processes are
applicable to the project.
TABLE 14-7. Tailoring Guidance by Project Type. By understanding which SEMP sections are
most relevant to different types of space system projects, the chief systems engineer
sees that appropriate effort is applied.
(Table columns: project type; whether a SEMP is needed; SEMP sections to emphasize; and SEMP sections to de-emphasize.)
FIGURE 14-9. Example of NASA’s Lessons Learned Information System (LLIS). A careful
search of the database helps keep common problems from afflicting the current
project.
TABLE 14-8. Systems Engineering Management Plan (SEMP) Lessons Learned from DoD
Programs [NASA, 1995]. The project leadership team should review this table before
the systems requirements review.
5 Weak systems engineering, or systems engineering placed too low in the organization,
cannot perform the functions as required
6 The systems engineering effort must be skillfully managed and well communicated to all
project participants
7 The systems engineering effort must be responsive to the customers’ and the
contractors’ interests
Systems engineering processes in many forms are all around us. While NASA
has specified forms of systems engineering, other standards have alternate process
forms that may be applicable to the project. Table 14-9 lists some useful standards
documents. All of these references provide alternate processes for tailoring the plan,
TABLE 14-9. Systems Engineering Standard References. Many organizations are sources of
processes, standards, and approaches. (DoD is Department of Defense; FAA is
Federal Aviation Administration; ISO is International Standards Organization; IEC is
International Electrotechnical Commission; ANSI is American National Standards
Institute; GEIA is Government Electronics and Information Association; IEEE is
Institute of Electrical and Electronics Engineers.)
Project Plan
The project plan has the following characteristics:
• It's the overall project management lead document, to which the SEMP is
subordinate
• It details how the technical effort will integrate with project management
and defines roles and responsibilities
• It contains the project's systems engineering scope and approach
• It lists technical standards applicable to the project
• It's the result of the technical planning effort which should be summarized
and provided as input to the technical summary section of the project plan
plan; training plan; in-flight check-out plan; test and evaluation master plan;
disposal plan; technical review plans; technology development plan; launch
operations plan; and payload-to-carrier integration plan. (See Chapter 13.)
The key to coordinating all these other plans is that the SEMP is the dominant
plan for the program's technical processes. Every plan dealing with engineering
should follow the SEMP for the sake of consistency. The SEMP's engineering aspects:
• Control product requirements, product interfaces, technical risks,
configurations, and technical data
• Ensure that common technical process implementations comply with
requirements for software aspects of the system
• Provide the big picture for the technical view
14.4.1 Clarity
Anyone reading the SEMP must be able to understand it. A good SEMP,
therefore, is written in language that is familiar and informative. The language and
grammar should be appropriate to a plan, not a specification. Table 14-10 shows
samples of good and bad writing styles. The SEMP is a formal document and
should be fully reviewed for style. If written in haste with inadequate proofing, the
plan probably won't convey the essential information. Project team members
become confused, or simply don't read it, leading to technical confusion during
execution. Grammar might seem to be a minor issue, but the impact of poor
grammar is anything but minor. The SEMP writing style should feature fairly short
sentences with declarative information in the active voice. Technical writers often
fall into the habit of using passive voice, explanatory phrases, and excessive
attempts at precision. We need to avoid these habits.
Different grammatical tenses serve different purposes. When reciting the
history of a project or the plans, we use past tense. When describing the product,
the project, the plans, or the organization, we use present tense. We do so even
when describing the future configuration of the product, as if that configuration
were in front of the reader. Future tense is appropriate only when describing a set
of actions that are part of the future. Present tense is usually preferable unless it
would mislead the reader.
14.4.2 Conciseness
The SEMP is a working plan, documenting and providing essential
information to guide the technical team. It should structure that information so
readers quickly find and assimilate it. It should focus on the deviations from
TABLE 14-10. Samples of Writing Style. After completing the first draft, we should compare it
against this list.
Preferred: The lead systems engineer is responsible for the content of the SEMP.
Avoid: When necessary for management guidance, this SEMP shall be controlled, managed, implemented, and maintained by that individual assigned, at the time of control, to the responsibility of lead systems engineer.
Comment: This sample is too long, uses passive voice, adds explanatory phrases, and is overly precise.

Preferred: The project completed the concept development phase in January 2006.
Avoid: The project will follow the scheduled phase milestones in Figure 3-1.
Comment: Progression of time will invalidate the future tense as milestones pass.

Preferred: The Earth Observing Satellite system provides geological data to a ground support station.
Avoid: The Earth Observing Satellite system will provide geological data to a ground support station.

Preferred: At each design review, the lead systems engineer is responsible for gathering and disseminating action items.
Avoid: At each design review, the Lead Systems Engineer will be responsible for gathering and disseminating action items.
Comment (for the two pairs above): Extra words in future tense obscure the primary meaning. Even though the product, its delivery, and other plans may be in the future, future tense is not necessary to convey the intended meaning.

Preferred: The satellite will be mated with the launch vehicle during the vehicle integration phase.
Avoid: The satellite is mated with the launch vehicle during the vehicle integration phase.
Comment: This is a specific action to be taken at a definite time in the future.
Table 14-11. Samples of Level of Detail. After we write the draft, we edit the level of detail using
samples such as these.
Appropriate: A technical summary of the project that details the major components, work efforts, and their relationships
Too detailed: A technical summary of the project that details the work effort and products of each person (does not facilitate change)
Too sparse: A technical summary that only mentions the project and its purpose (does not help the reader understand the scope of the work effort)

Appropriate: Organizational charts that display the major contributing organizations and team groups, and assign leadership responsibility
Too detailed: Organizational charts that identify each individual by name (does not facilitate change)
Too sparse: Organizational charts at the organization or center level without team groups depicted (does not effectively assign responsibility)

Appropriate: Technical staffing levels by organization, skill type, and month or quarter
Too detailed: Technical staffing levels by team group, by experience level, by week, or other detailed levels (generates excessive tracking and management burden)
14.4.3 Completeness
A good SEMP is complete; it covers all necessary topics. The outline in NPR
7123.1a identifies the topics. The SEMP should address each topic in the outline, with
sufficient thought to the level of detail and tailoring necessary for the project.
Maintaining the complete primary outline is a good practice. NPR 7123.1a lists
nine major sections with specific content. Each major section in the SEMP should
have a title identical to that in the standard, so that others who read it understand
the content. The major section should cover the topics specified, and in order. Where
the standard specifies a second-level outline (as in Sections 3, 4, and 5), the SEMP
second-level sections should also have titles identical to those in the standard. This
compliance with the standard is a sign of cooperation and experience, and it gives the
reader confidence that the technical plan is worth reading and following.
If a section doesn't apply, or if the technical plan uses only standard processes
for a topic, then the SEMP should document this situation. If the decision
represents a waiver, then we must provide the rationale. Appropriate wording for
such a section might be (1) "The [Name] project uses the standard processes for
technical reviews as specified in MWI 8060.3." or (2) "This section is not applicable
to the [Name] project because..."
Section 5 of the SEMP covers the 17 common technical processes. NPR 7123.1a
Appendix C describes each process in sufficient detail to provide plan guidance.
Authors might be tempted to gloss over this section, or to include duplicative
detail from the standard; neither is appropriate. The section should cover all 17
processes, but be aimed at the unique differences and implementations.
The SEMP may also include appendices. A glossary of terms, acronyms, and
abbreviations is essential. Technical staffing levels are often placed in an appendix
due to the volume of information that may be necessary. Other appendices help
meet the desired level of background detail for specific sections.
14.4.4 Currency
A good SEMP is current and relevant. It's a living document and should be
kept alive. When first created, during the early formulation phase, it may be quite
short—perhaps fewer than ten pages—and many sections may have only
preliminary information. At the end of each development phase, we should
update the SEMP in a "rolling wave" of planning for the upcoming phases. Every
major project review addresses the sufficiency of plans for the next phase, so we
need updates at least at the following reviews: system requirements review or
mission definition review, system definition review, preliminary design review,
critical design review, test readiness review or system acceptance review, flight
readiness review, and operational readiness review. We should also update it in
response to any significant project change, such as a technology breakthrough,
funding change, or change in priority or emphasis.
Each SEMP revision should be documented and approved. A revisions page
following the title page shows the revisions along with their date, reason for
revision, and scope of revision. Approval of the revisions follows the same path as
approval of the original SEMP, including obtaining concurrence from stakeholders
(as described in Section 14.3.5) and signature approval from the designated
governing authority.
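Many teams find it convenient to keep the revisions page as structured data and render it into the document. The snippet below is only an illustrative Python sketch with hypothetical field names and values; it is not a format prescribed by NPR 7123.1a or by this chapter.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SempRevision:
    """One row of the SEMP revisions page (hypothetical field names)."""
    rev: str           # revision letter, e.g., "A", "B"
    revised_on: date
    reason: str        # why the SEMP changed
    scope: str         # which sections were affected
    approved_by: str   # designated governing authority

revisions = [
    SempRevision("A", date(2006, 3, 1), "Initial release at SRR",
                 "All sections", "Designated governing authority"),
    SempRevision("B", date(2006, 9, 15), "Rolling-wave update for PDR",
                 "Sections 3-5", "Designated governing authority"),
]

for r in revisions:
    print(f"Rev {r.rev} | {r.revised_on} | {r.reason} | {r.scope} | {r.approved_by}")
```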
14.4.5 Correctness
Finally, a good SEMP accurately reflects the technical plan. If it's not clear,
concise, complete, and current as we describe in the preceding sections, it's a poor
SEMP regardless of the quality of the technical plan. Conversely, it can be clear,
concise, complete, and current, but mislead the reader because it obscures
weaknesses or gaps in the technical plan. Although in general the adage "garbage
in, garbage out" is true, it's possible to write a SEMP that looks better than the
reality of the underlying technical plan (Table 14-12).
The SEMP author must balance a desire for brevity (since it is a management
plan) against the additional detail needed for correctness. In the words of Albert
Einstein, "Make everything as simple as possible, but not simpler." The author
must include sufficient detail to accurately communicate the technical plan, yet
refer appropriate details to the other peer and subordinate technical plans. To the
extent that the SEMP is correct, the continual improvement process (Step 11 in
Section 14.3.2) focuses on and corrects the weaknesses in the technical plan over
time. Conversely, a SEMP that glosses over weaknesses in the technical plan
hinders and delays corrections.
The SEMP author and its critical readers should ask the following question: How do we know that
the SEMP reflects good technical planning? A good technical plan yields the following answers:
14.5.1 Leadership
In most projects, the systems engineer is not in a position of organizational
authority, but does carry full responsibility for the technical performance of the
system. Technical management must therefore be handled through leadership
rather than through mandated authority. To implement a SEMP well, the lead
systems engineer must help the technical team members cooperate and collaborate
toward common goals. In technical teams, this task is made more difficult by the
nature of the people being led, who are typically engineers. Engineers tend to be
highly intelligent and independent, confident in their own knowledge and
expertise within their field. They also tend to be introverts, more comfortable
working alone than in groups, and their confidence often leads to a technical
arrogance; they're certain their way is the only correct way. These characteristics
make leadership all the more important.
Leadership is more than just telling people what to do. The leader's role is to
induce others to follow, as part of a team. It calls for an attitude of support rather
than an attitude of power. The leader supports and encourages the rest of the
full-time team while creating supportive relationships with others who may
have to provide work for the project. The kinds of support vary and depend on
circumstances, but usually include creating smooth interpersonal interfaces,
resolving conflicts, and making others' jobs easier and more rewarding.
Necessary element for SEMP execution # 1—The technical leader must lead the technical
aspects of the project.
[Figure: technical management (decision data; technical, cost, and schedule trades; technical data; risk and opportunity management; creativity; status) and system design (technical analysis; creative engineering; requirements focus) exchange support, guidance, leadership focus, and technical team relations.]
Any of several different people may perform the technical management role.
Figure 14-11 shows several possible configurations; the right one for any given
project depends on the skill levels and interpersonal relationships of the project
leaders. A strong project manager or a strong systems engineer may perform all
technical management. They may share tasks, or others may be involved. A project
engineer may be assigned specifically to this leadership role. Tools and techniques
appropriate to technical management are: (1) general management methods such
as communicating and negotiating, (2) product skills and knowledge, (3) status
review meetings, (4) technical work directive system, (5) organizational processes,
and (6) technical tracking databases. The technical manager communicates with
technical experts through technical information. The architecture and design at the
system or subsystem level provide the context within which the experts apply
their expertise. Communication between the system and subsystem domains and
the component and part domains determines the system's success. Team
coordination and management are critical. Table 14-13 illustrates some of the
strengths and shortcomings of teams. The technical leader, using the SEMP, must
see that the teams work at their top potential.
[Figure: four arrangements of project management, technical management, and system design: strong program manager; strong systems engineer; shared strength (program manager and systems engineer); and a triad of program manager, project engineer, and systems engineer.]
FIGURE 14-11. Who Performs Technical Management. Any one of several relationships may
succeed. (Courtesy of Honourcode, Inc.)
Table 14-13. Traits of Effective and Ineffective Teams. A key to success is recognizing when a
team is weak.
Necessary element for SEMP execution # 3—The technical leader must excel
within a collaborative environment, and enable it across the project.
If properly planned, the SEMP defines the work required for the project. When
we implement that work through negotiation, technical work directives, and
leadership, the project should be successful. The challenges arise in the realizations
that happen during implementation, because no plan is perfect. Nineteenth-
century German general Helmuth von Moltke said, "No plan survives first
contact," and this applies to technical plans as well. As the project proceeds,
unplanned events occur and unexpected problems arise. Scope control aims to
ensure that the project responds appropriately to these issues without overly
affecting the original plan.
while another wants project costs reduced. These conflicts create the environment
within which scope control is necessary.
Project status "traffic light" reporting. Senior managers sponsor the project
through their organization, but are not involved in its day-to-day workings. These
sponsors feel ownership and keen interest in the project even if their participation
may be minimal for months at a time while things are going well. Still, the technical
manager and project manager must keep the sponsor informed about the project and
its needs. A simple way is to use "traffic light" reporting, as shown in Figure 14-12.
Each major element of the project is reported with a color that keeps the sponsor
informed as to the status and the participation that the project team deems
appropriate. Monthly traffic light reporting is common.
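As a purely illustrative sketch (the element names, states, and color rules below are hypothetical, not from this chapter), a monthly traffic-light report can be generated from very simple status data:

```python
# Minimal sketch of monthly "traffic light" reporting to project sponsors.
# Element names and the mapping from status to color are hypothetical.
STATUS_COLORS = {"on_track": "GREEN", "watch": "YELLOW", "problem": "RED"}

monthly_status = {
    "Spacecraft bus": "on_track",
    "Payload": "watch",              # e.g., sensor delivery slipping
    "Ground segment": "on_track",
    "Launch integration": "problem", # e.g., unresolved interface issue
}

def traffic_light_report(status_by_element):
    """Return one line per major project element with its color code."""
    return [f"{element:20s} {STATUS_COLORS[state]}"
            for element, state in status_by_element.items()]

print("\n".join(traffic_light_report(monthly_status)))
```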
• What processes and tools will we use to address the technical issues and
risks?
• How will that process be managed and controlled?
• How does the technical effort link to the overall management of the
program?
The multiple pages of specific questions are very valuable for any space
development program [DoD (2), 2006]. Table 14-14 gives 5 of the 50 detailed
questions in the SEP Preparation Guide that we use to develop the SEP for
different phases of a program.
Table 14-14. Systems Engineering Plan (SEP) Question Examples. The approach to
developing a government SEP uses questions that have been historically shown to
result in a strong technical plan.
Phase | Topic | Question
Concept refinement | Program requirements | How well does the technical approach reflect the program team's understanding of the user's desired capabilities and concepts?
 | Technical staffing | How well does the technical approach describe how technical authority will be implemented on the program?
Production | Technical management planning | How well does the technical approach describe who is responsible for managing the technical baselines?
 | Technical review planning | How well does the technical approach describe who is responsible for overall management of the technical reviews?
Reduce logistics footprint | Integration with overall management | How well does the technical approach describe how the program manager will use the in-service reviews to manage the technical effort and overall operational and support (O&S) cost containment?
The outline for the SEP is very similar to the SEMP, but it emphasizes the
program management side of the equation.
The list below shows the difference between technical planning in an SEP
versus an SEMP. Both approaches rely on SEMPs at the lower levels of a project
hierarchy. Both have tremendous strengths while emphasizing excellent
management of the systems and discipline engineering aspects of the project. To
achieve this high level of technical leadership, the project must have superb people
and excellent processes. Completing an SEMP or an SEP early in the project is
critical to seeing that the engineering elements are guided wisely.
DoD approach
• Dominant DoD SEP at the System Program Office (SPO) level
• Lower level contractor SEMPs supporting the government SEP
NASA approach
• Tiered SEMPs from government SPO level
• Contractor SEMPs supporting government leads
We must develop the SEMP early and continually improve it during the life of
the program. This chapter illustrates some key elements of the process, along with
a comparison with the DoD SEP methodology. Successful systems engineers focus
on early involvement; concentrate on technical details (especially in requirements),
uncover and resolve schedule issues early, conduct significant trade studies as
appropriate; and pay constant attention to risk tradeoffs. A well-done SEMP
enables the project to move forward smoothly while helping to uncover problems
early enough to resolve them within project constraints.
References
Department of Defense (DoD). 2004. Systems Engineering Plan Preparation Guide, Version
0.85. OUSD(AT&L) Defense Systems/Systems Engineering/Enterprise Development.
DoD. 2006. Systems Engineering Plan Preparation Guide, Version 1.02, OUSD(AT&L) Defense
Systems/Systems Engineering/Enterprise Development, [email protected].
National Aeronautics and Space Administration (NASA). 1995. Systems Engineering
Handbook, SP-6105. Washington, D.C.: NASA Headquarters.
NASA. August 31, 2006. Constellation Program Systems Engineering Management Plan, CxP
70013. NASA.
Teaching Science and Technology, Inc. (TSTI). 2006. FireSAT Systems Image.
Chapter 15
Manage Interfaces
Robert S. Ryan, Engineering Consultant
Joey D. Shelton, TriVector Services
As many systems engineers have said: "Get the interfaces right and everything
else will fall into place." Managing interfaces is crucial to a space system's success
and a fundamental part of design. When we decide how to segment the system, we
automatically choose the interfaces and what they must do. These interfaces take
many forms, including electrical, mechanical, hydraulic, and human-machine.
Our choice also includes communications (electromagnetic signals), as well as
computer (software code) interfaces to other systems. Figure 15-1 shows typical
space system components and their interfaces.
FIGURE 15-1. Apollo’s Top-level Interfaces. Here we show interfaces for human to machine;
computations for guidance, navigation, and control; other controls; and telemetry.
(CSM is command and service module; CM is command module; SM is service module; IMU
is inertial measurement unit; LM is lunar module; VHF is very high frequency.)
Kossiakoff and Sweet [2002] say that managing interfaces consists of "(1)
Identification and description of interfaces as part of system concept definition and
(2) Coordination and control of interfaces to maintain system integrity during
engineering development, production, and subsequent system enhancements."
They go on to assert that defining "internal interfaces is the concern of the systems
engineer because they fall between the responsibility boundaries of engineers
concerned with the individual components." Similarly, according to the Department
of Energy [DOE, 2006]:
To define and apply interfaces, we must often consider design trade-offs that affect
both components. Implied in this idea are internal interactions, as well as external
ones such as transportation, handling with ground support equipment (GSE),
communications, human-machine interactions, and natural and induced
environments.
The management process, outlined in Table 15-1, is critical to product success
and a fundamental part of systems engineering. We recommend NASA
Procedural Requirement (NPR) 7123.1a for interface management requirements.
TABLE 15-1. Process for Managing Interfaces. Several other chapters describe topics
similar to some of these steps, as noted in the “Where Discussed" column.
Step | Definition | Documentation | Where Discussed
3 | List interfaces and prepare initial interface requirements documents (IRDs) | Interface list, IRDs | Section 15.3
4 | Develop NxN and IxI diagrams | NxN and IxI matrices | Section 15.4 and Chap. 13
8 | Design interfaces, iterate, and trade | Analysis reports, IRD and ICD updates | Section 15.8
Figure 15-2. Flow Diagram for Designing Interfaces. Standards, philosophies, and so on
furnish guidance to the process, but we manage it by means of interface control
documents (ICDs). (COQ is certification of qualification) (Source: NASA)
Figure 15-3. Process Flow for Interface Management. Here we define the necessary inputs and
outputs for interface management. The key inputs come from the system, work
breakdown structure, and requirements. The outputs focus on interface design, change
control, and verification. (IRD is interface requirements document.) [NASA, 2007]
connected using a pin connector, which required the parts to move toward each
other to release a coupler that allowed them to separate. But teams wrapped
insulation on each part, so the parts couldn't move, and the coupler never released.
The unseparated cable kept the two stages coupled and thus destroyed the
mission. Many types of stage interfaces are available to offer stiffness and load
paths during one stage burn, as well as a way to separate the two stages.
Decomposition produces various interfaces: from pad hold-down, to fluid or
electrical, to human-machine. Table 15-2 lists the results of decomposing a project
into systems, their requirements, and their interfaces.
Table 15-2. Results of Decomposing the Project into Systems, Their Requirements, and
Interfaces. In the left column we list the project-level requirements, specifications,
standards, and constraints. In the right column we list where we must account for
interfaces within the decomposition.
• Allocated physical constraints for end items | Derived agreements on design constraints for physical interfaces
• Allocated requirements for heat generation and rejection | Allocated requirements and constraints for heat transport; agreements for heat transport
When we decompose the project, we get systems and their interfaces. Figure
15-4 shows a general interface plane and sample quantities that must pass through
from one system to another. Table 15-3 lists the categories of interfaces, their
description, and comments.
We illustrate how to define system interfaces using the FireSAT project. Figure
15-5 charts the FireSAT spacecraft's subsystems and shows how we divide the
spacecraft into a number of configuration items for configuration management.
The spacecraft's implementation plan calls for Acme Aerospace Inc. to be the
[Figure: an interface plane with power, structural/mechanical, fluid, data, and other quantities crossing between the two sides.]
FIGURE 15-4. General Plane for Element-to-element Interfaces. This generic interface plane
passes power, air, data, and other quantities. It must also hold together structurally.
(IMV is intermodule ventilation.)
primary spacecraft integrator. They also furnish the propulsion module as a single
configuration item from their subcontractor, PN Test Systems Ltd. Thus, we must
define carefully the interfaces between the propulsion module and the rest of the
spacecraft. Figure 15-6 shows the location of key interfaces between the propulsion
module and baseplate assembly. The structural-mechanical interface of the
propulsion module and baseplate assembly depends on the agreed-to design of the physical
interface between the two items. Electrical interfaces activate the valves and
pyrotechnic devices, and they measure pressure and temperature for telemetry.
The fluid interface defines how the propellant (monomethyl hydrazine) and
pressurant gas (helium) flow between the propulsion module and the baseplate
(rocket engines). In the next few sections, we look at how to analyze and define
these and other interfaces between the two.
TABLE 15-3. Categories and Functions of Interface Types. This table lists the types of interfaces, their functions, examples, and remarks. (EMI is electromagnetic interference; EMC is electromagnetic compatibility.)
I. Structural and mechanical
   Functions: (1) Structural integrity between elements, subsystems, and components (load paths, stiffness, strength, durability); (2) separation of elements as the mission timeline dictates
   Examples: (1) Flanges, bolts, welds, links, fasteners, adhesives; (2) pyros, springs, hydraulics
   Remarks: (1) Form and fit between interface mating parts is critical and a source of many problems; verification of form and fit as well as structural capability is a major challenge. (2) Malfunctions in separation systems have created many problems; verification of separation systems is a major activity and challenge.
II. Fluid and hydraulic
   Functions: (1) Propellant flow between elements; (2) airflow between elements; (3) control forces and separation forces
   Examples: (1) Duct and flanges; (2) duct and flanges; (3) actuators and links
   Remarks: (1) Prevention of leaks with ability to separate as required; (2) prevention of leaks with ability to separate as required; (3) ability to handle point loads and varying dynamics
III. Electrical
   Functions: (1) Transmit power between elements; (2) communication between elements; (3) information flow between elements
   Examples: 1-, 2-, and 3-pin connectors; wires; busses; transmission waves
   Remarks: (1) Provide adequate power with the ability to separate; (2) provide clear communications with the ability to separate; (3) provide information with the ability to separate
IV. Environmental
   Functions: (1) EMI and EMC; (2) natural environments; (3) induced environments
   Examples: (1) Wire shielding and separation; (2) system's ability to function in the surrounding environment; (3) system's ability to control, manage, and function in the created environment
   Remarks: (1) Design for integrity of electronic or electrical signals; (2) system must function in the environment (examples: temperature, winds); (3) system must function in the environment it produces (examples: thruster plume, thermal-protection system, icing)
FIGURE 15-5. Configuration Management Chart for the FireSAT Spacecraft. The diagram shows
the FireSAT spacecraft's major elements as hardware configuration items (HWCI).
FIGURE 15-6. FireSAT’s Propulsion System and Its Interfaces. The electrical interface
connects the batteries to electrical systems. Flow from the pressure vessels to the
propulsion subsystem requires a fluid interface. The mechanical interface physically
connects the structures.
Table 15-4. Sample Entries from an IxI Matrix. Here we show several examples of diagonal
terms and their influence on other diagonal terms from an IxI matrix for typical
propulsion and structures systems.
The propulsion system to structures
   Requirements: (1) Thrust bearing to transfer thrust load to the upper stage; (2) engine's gimballing ability for control authority (throw angle in degrees); (3) fluid propellant input to the engine (flow rate)
   Description: (1) Engine's induced environments (thermal, acoustic, vibration, thrust); (2) engine's dimensions; (3) mass characteristics; (4) flow rates
IxI matrices are numerous because they're associated with each system or
subsystem divided into lower-tier entities on the tree. (The lower tier comprises the off-
diagonal elements that share interface characteristics and information with the
diagonal elements.) The parts on the tree's last tiers, however, are simple enough to
require no further subdivision. Figure 15-7 shows an IxI matrix for FireSAT.
NxN matrices represent information flow among the design functions and the
discipline functions. The example in this case is the launch vehicle system with its
associated design functions and disciplines. Notice the matrix's upper left element
represents the launch vehicle system's plane, and other diagonal elements
represent the remaining planes for lower design functions.
Including IxI and NxN matrices with the subsystem tree and design function
stacks provides locations or placeholders for the technical information that must flow
among the participants in the design process. (The subsystem tree is the
decomposition of the system into its lower elements; the design function stack
describes the activities associated with the tree.) It also suggests a framework for
electronic information and communication to enable efficient, concurrent interactions.
These diagrams are outstanding tools to define a project's decomposition and
all critical information, interactions, and requirements. They're essential to design
and systems engineering and are the cradle for technical integration activities. We
must define them accurately, review them for completeness, and update them
periodically throughout the project's lifecycle.
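The bookkeeping behind these diagrams is straightforward: diagonal entries are the elements themselves, and an off-diagonal cell records what the row element sends to the column element. The sketch below, in Python with invented element names and interface contents, shows one minimal way to hold and print such a matrix; it is an illustration, not the project's actual tooling. The same structure serves for IxI matrices at lower tiers; only the element list changes.

```python
# Minimal NxN interface matrix sketch. Elements and interface contents
# are illustrative only; a real matrix is built from the subsystem tree.
elements = ["Payload", "ADCS", "EPS", "Propulsion"]

# interfaces[(source, destination)] = what flows across that interface
interfaces = {
    ("Payload", "EPS"): "power demand, telemetry",
    ("EPS", "Payload"): "regulated power",
    ("ADCS", "Propulsion"): "thruster commands",
    ("Propulsion", "ADCS"): "valve status telemetry",
}

def print_nxn(elements, interfaces):
    """Print the matrix: diagonal = element name, off-diagonal = interface content."""
    width = 26
    print("".ljust(width) + "".join(e.ljust(width) for e in elements))
    for src in elements:
        row = [src.ljust(width)]
        for dst in elements:
            cell = src if src == dst else interfaces.get((src, dst), "-")
            row.append(cell.ljust(width))
        print("".join(row))

print_nxn(elements, interfaces)
```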
Let's return to our FireSAT interface example. The propulsion system's main
function is to generate thrust. Figure 15-8 shows its two sub-functions, which are
at a level where we can describe detailed interfaces. We need to analyze the inputs
and outputs between control propulsion and manage fluids, where fluids include
helium pressurant gas and monomethyl hydrazine propellant.
An effective way to analyze the sub-functions is to create an NxN diagram just
for them and study the inputs and outputs to each. Figure 15-9 shows the matrix
with appropriate interactions.
Next we allocate the two sub-functions to spacecraft hardware. In this case,
control propulsion goes to the baseplate assembly, and manage fluids to the propulsion
module. To account for the many inputs and outputs to the two main subfunctions,
we list them in an input-output and allocation table (Table 15-5). Each input and
output function is allocated to one of the hardware configuration items. Figure 15-
10 shows how this allocation looks for control propulsion and manage fluids. To reach
the ICD-IRD inputs from the NxN matrix, we use the allocation table entries to
define links between the baseplate assembly and the propulsion module.
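Conceptually, the step from the NxN matrix to the link diagram is a join: each flow's producing and consuming sub-functions are looked up in the allocation table, and any flow whose two ends land on different hardware configuration items becomes a physical interface to capture in the IRD. A minimal Python sketch (the sub-function names, flows, and hardware items are illustrative only):

```python
# Sketch of deriving interface links from a function allocation table.
# Sub-function names, flows, and hardware items are illustrative only.
allocation = {
    "control propulsion": "baseplate assembly",
    "manage fluids": "propulsion module",
}

# (producing sub-function, consuming sub-function, what flows between them)
flows = [
    ("control propulsion", "manage fluids", "He tank iso-valve open command"),
    ("manage fluids", "control propulsion", "He tank pressure telemetry"),
    ("manage fluids", "control propulsion", "propellant feed (fluid)"),
]

def derive_links(flows, allocation):
    """Any flow whose endpoints sit on different hardware items is an interface link."""
    links = []
    for src_fn, dst_fn, content in flows:
        src_hw, dst_hw = allocation[src_fn], allocation[dst_fn]
        if src_hw != dst_hw:
            links.append((src_hw, dst_hw, content))
    return links

for src, dst, content in derive_links(flows, allocation):
    print(f"{src} -> {dst}: {content}")
```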
The next step is to define interface requirements for each link. A requirements
document for spacecraft-to-propulsion module interfaces captures these
requirements, and controlled interface drawings (Figure 15-11) show details.
Because Acme is the prime contractor, they write interface requirements for their
subcontractor, PN Test Systems. "Will" statements reflect the interfaces that Acme
plans to provide and "shall" statements are requirements that PN must meet.
Interface requirements look like the following examples:
FIGURE 15-7. FireSAT IxI Matrix. The IxI matrix is a useful tool for capturing the interfaces between system elements or between subsystems
within a given system.
Figure 15-8. FireSAT’s Generate Thrust Function. Here we show how to divide the
spacecraft’s propulsion system function—generate thrust—into two sub-functions
that require interfaces.
FIGURE 15-9. NxN Matrix for the FireSAT Subfunctions Control Propulsion and Manage
Fluids. Here we show inputs to the manage fluids subfunction, such as Helium
(He) fill and He tank Iso-valve (isolation valve) open, as well as inputs to the control
propulsion subfunction, such as He drain or He tank pressure telemetry. Also
shown are their outputs (directed to the right from the subfunction boxes) and other
inputs, such as heat and mechanical loads. (TLM is telemetry; He is helium.)
TABLE 15-5. FireSAT Input-output and Allocation Table. This table lists inputs and outputs for
control propulsion and manage fluids, allocated to the baseplate and propulsion
modules. (TLM is telemetry; He is helium.)
FIGURE 15-10. Links between FireSAT’s Baseplate Assembly and Propulsion Module. Here
we show links that represent physical connections between the two hardware
configuration items. Each connection represents an interface taken from the types
listed in Table 15-3. Each interface must have a written requirement that describes
its physical design and function.
Figure 15-11. FireSAT’s Control Drawing for Interfaces between the Baseplate Assembly
and Propulsion Module. This figure shows three interface drawings with hole
placement (mechanical), pin identification (electrical), and fitting design (fluid).
Interface Design and Verification Tasks
Interface control documents
The ICD for an interface typically consists of two parts: the first defines all
interface performance requirements, and the second defines how to apply them.
For example, the ICD for a solid rocket motor (SRM) segment-to-segment
interface might read:
Requirements:
1. Maintain motor pressure without leaking, verification of seal after
assembly
2. Simple assembly joint at launch site
3. Structural load path between segments
4. Continuity of lateral and longitudinal stiffness of SRM case
5. Thermal insulation of joint against motor thrust
Implementation:
1. Redundant O-ring seal using a compound double clevis structural joint,
grooves for O-rings. Pressure O-ring check port or ports.
2. Compound clevis joint with locking pins
3. Compound clevis joint with locking pins
4. Case thickness
5. Joint pressure sealing thermal flap
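The two-part structure above is easier to keep consistent when each requirement is explicitly paired with its implementation. The following Python sketch restates the SRM joint example as such a pairing (wording abridged; this is an illustration, not a prescribed ICD format):

```python
# Sketch pairing each ICD interface requirement with its implementation,
# using the solid rocket motor segment joint example above (wording abridged).
icd_srm_joint = [
    ("Maintain motor pressure without leaking; verify seal after assembly",
     "Redundant O-ring seal in a compound double-clevis joint, with check ports"),
    ("Simple assembly joint at launch site",
     "Compound clevis joint with locking pins"),
    ("Structural load path between segments",
     "Compound clevis joint with locking pins"),
    ("Continuity of lateral and longitudinal stiffness of the SRM case",
     "Case thickness"),
    ("Thermal insulation of the joint",
     "Joint pressure-sealing thermal flap"),
]

# Every requirement has exactly one stated implementation, and vice versa.
for i, (requirement, implementation) in enumerate(icd_srm_joint, start=1):
    print(f"{i}. {requirement}\n   -> {implementation}")
```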
Contents:
• Physical interface definition
• Envelopes
• Dimensions/tolerances
• Interface design drawings
• Fastener locations
• Connector locations
• Connector/coupling definition
• Specifications
• Part numbers
• Pin-outs
• Signal definitions
1.0 INTRODUCTION
1.1 PURPOSE AND SCOPE
State the purpose of this document and briefly identify the interface to be
defined herein. For example, ("This document defines and controls the
interfaces between_______ and______ ").
TABLE 15-6. Derived Interface Requirements. These and other derived functional requirements,
industry standards, and agreements affect interface designs.
1.2 PRECEDENCE
Define the relationship of this document to other program documents and
specify which document is controlling in the event of a conflict.
1.3 RESPONSIBILITY AND CHANGE AUTHORITY
State the responsibilities of the interfacing organizations for development of
this document and its contents. Define document approval authority (including
change approval authority).
2.0 DOCUMENT
2.1 APPLICABLE DOCUMENTS
List binding documents that are invoked to the extent specified within this
document. The latest revision or most recent version should be listed.
[Figure: FireSAT spacecraft software architecture, showing bus software modules (operating system; telecommand receipt and processing; payload I/O; GSE and test I/O; bus command and control; telemetry generation) plus ADCS, GN&C, thermal control, and wildfire message generation units.]
Figure 15-14. FireSAT Software Architecture. This section of Figure 11-22 depicts software
interfaces for the FireSAT spacecraft.
3.0 INTERFACES
3.1 GENERAL
In the subsections that follow, provide the detailed description, responsibilities,
coordinate systems, and numerical requirements as they relate to the interface
plane.
3.1.1 INTERFACE DESCRIPTION
In this section, describe the interface as defined in the System Specification.
Use and refer to figures, drawings, or tables for numerical information.
3.2.1.1 ENVELOPE
Define the linear sizes, areas, volumes, articulating device motions,
and other physical characteristics for both sides of each interface.
3.2.1.4 FLUID
This section of the document includes the derived interface based on
the allocations contained in the applicable segment specification
pertaining to that side of the interface. For example, it should cover
fluid interfaces such as thermal control, liquid O2 and N2 control and
flow, potable and waste water uses, fuel cell water uses, atmospheric
sample control, and all other fluids.
3.2.1.5 ELECTRICAL (POWER)
This section of the document includes the derived interface based on
the allocations contained in the applicable segment specification
pertaining to that side of the interface. For example, it should cover all
the electric currents, voltages, wattages, resistance levels, and all other
electrical interfaces.
3.2.1.8 ENVIRONMENTS
This section of the document includes the derived interface based on
the allocations contained in the applicable segment specification
pertaining to that side of the interface. For example, it should cover the
dynamic envelope measures of the element in English units or the
metric equivalent on this side of the interface.
3.2.1.8.1.3 GROUNDING
The end item 1 to end item 2 interface shall meet the
requirements of SSP 30240, Rev D—Space Station Grounding
Requirements [Jablonski et al., 2002 (1)].
3.2.1.8.1.4 BONDING
The end item 1 to end item 2 structural and mechanical interface
shall meet the requirements of SSP 30245, Rev E—Space Station
Electrical Bonding Requirements [Brueggeman et al., 1999].
3.2.1.8.2 ACOUSTIC
The acoustic noise levels on each side of the interface shall be
required to meet the program or project requirements.
FIGURE 15-15. Typical Process Flow for Interface Management. From the first definition of
system requirements to spacecraft operations, we control the process using
interface control documents based on standards, guidelines, criteria, philosophies,
and lessons learned.
Interface definition and configuration control
FIGURE 15-16. Approach to Controlling Interface Configurations. Here we show the relationships
between the designers (chief engineer and engineering) and the controllers (ICWG
and PRCB). Requirements, standards, and guidelines apply to everyone in a project.
This interface design process requires members to consider not only their own
discipline but also how they interact with others, so the PRCB drives project
balance by developing the right project decisions at the appropriate level. Any
change or decision then goes under configuration control (Chapter 16). In this way,
we successfully design, control, and operate interfaces. Figure 15-17 shows an
example of ICWG membership for the International Space Station.
Figure 15-17. Member Organizations for the International Space Station’s Interface Control
Working Group (ICWG). The ICWG includes members from all important players in
the space station project. (ASI is Agenzia Spaziale Italiana; RSA is Russian Space
Agency; CSA is Canadian Space Agency; JAXA is the Japan Aerospace Exploration
Agency; BSA is Brazilian Space Agency; ESA is European Space Agency; MSFC is
Marshall Space Flight Center; JSC is Johnson Space Center; KSC is Kennedy
Space Center; GSFC is Goddard Space Flight Center; MPLM is multi-purpose
logistics module; MOD is Management Operations Directorate; FCOD is Flight Crew
Operations Directorate; OPS is operations; GFE is government furnished
equipment; CPE is change package engineer.)
The interface control working group (ICWG) coordinates these design activities
and the resulting trades; government or contractor engineers typically complete
them. Normally, the chief engineer contributes heavily toward ensuring that the
product meets requirements and is mature.
[Figure: manufacturing steps from design concepts and analysis development, to concept demonstration at the component level, to manufacturing process development and scale-up, to full-scale structural verification.]
FIGURE 15-18. Manufacturing Process. Here we show the steps for manufacturing using metals
and composites, which includes selecting and characterizing the design approach,
developing and verifying the manufacturing process, designing and building materials
and manufacturing methods, and verifying hardware and processes. (A coupon is a
small sample of the material specially shaped to fit in a testing machine.)
We must not only design the manufacturing approach but also develop the
assembly plans and procedures to ensure adequate mating and so on. Simplicity of
assembly is just as critical as simplicity of manufacturing, inspecting, verifying, and
validating. We must document these processes well and train employees to apply
them. Many problems in space hardware have occurred because workers didn't
know and follow the building and inspecting procedures. For example, in the
above-mentioned mission to place a satellite into GEO, the Inertial Upper Stage
didn't separate its first and second stages because someone overwrapped
insulation on the electrical connection between the stages, preventing them from
separating. Proper training and inspecting may have saved this mission. Each of the
other interface types, such as electrical, pyrotechnics, human-machine interactions,
and communications, follows a similar process.
FIGURE 15-19. Verification Flow. Verification starts at the system level and progresses through the
subsystems, components, and even the piece parts. It must be consistent with
specifications, procedures, and other guidance. It's iterative and is complete when
the system is verified and signed off by certifications of qualification (COQs).
the unknowns we can't define in the uncertainties and sensitivities, and 4) develop
risks around the above three to help manage the system. All design decisions and
changes must have this kind of information. In the early design phases, design
teams, working groups, engineering teams, and the project integration board help
keep the system balanced.
Figure 15-20 outlines the process we use as the project matures and the design
is under strict configuration control. In this case, we prepare a preliminary
interface revision notice (PIRN) containing all information from the sensitivity
analyses—such as risk, cost, and schedule, as well as the changes in design
characteristics. We coordinate the PIRN and its effect on costs throughout the
design and project organization and then send it to the interface working group for
discussion and approval. In many projects, PIRNs go with members to the project
control board for final approval.
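A PIRN is essentially a change record that carries the sensitivity information the working group and control board need. As a hypothetical Python sketch only (the field names and example values are ours, not from this chapter):

```python
from dataclasses import dataclass, field

@dataclass
class PIRN:
    """Preliminary interface revision notice (illustrative fields only)."""
    number: str
    interface: str            # which IRD/ICD it changes
    description: str
    cost_impact: str          # results of the sensitivity analyses
    schedule_impact: str
    risk_impact: str
    approvals: list = field(default_factory=list)  # ICWG, control board, etc.

    def approve(self, board: str):
        self.approvals.append(board)

pirn = PIRN(
    number="PIRN-042",
    interface="Spacecraft-to-propulsion-module fluid interface",
    description="Change pressurant fitting size",
    cost_impact="+$15K (new fittings, requalification)",
    schedule_impact="+2 weeks to CDR",
    risk_impact="Low; reduces pressure-drop risk",
)
pirn.approve("Interface control working group")
pirn.approve("Project control board")
print(pirn)
```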
FIGURE 15-20. Documenting and Controlling the Interface Design. Every change to an interface
design requires a preliminary interface revision notice (PIRN). Assigned team
members must evaluate the PIRN’s effect on cost, risk, performance, and schedule.
Then various working groups must decide whether the change is worthy of passing
along to the Chief Mission Officer (CMO). (SLI-IWG is system-level integration-interface working group.)
After completing all procedures, we must develop a training plan that includes
transportation, assembly, check-out, and mission operations to control interfaces
and ensure mission success. The plan must contain functional characteristics,
limitations, and procedures for performing each function. Training must be
intensive and complete. History has shown that mission success depends on the
training program's quality because human-machine interactions determine most
interface design and functions, whether in building, checking out, or operating
space systems. Figure 15-21 depicts the operations sequence and characteristics.
Summary
We have established the criticality of interface design and management to
successful products. We developed a process for the definition of interfaces, their
design, control, verification, and operations. The secret to interface design is, first
and foremost, communication among the various elements of a system and with
the system, followed by rigid control of the total interface using uncertainties,
sensitivities, margins, and risks, backed by rigid quality control. The process results in
quality interfaces and is essential to project success.
References
Brueggeman, James, Linda Crow, Kreg Rice, Matt McCollum, and Adam Burkey. 15
October 1999. NASDA SSP 30245, Rev E - Space Station Electrical Bonding Requirements.
Fort Belvoir, VA: DAU.
Brueggeman, James, Linda Crow, Kreg Rice, Matt McCollum, and Adam Burkey. 1998.
NASDA "SSP 30243, Rev E - Space Station Requirements for Electromagnetic
Compatibility." Fort Belvoir, VA: DAU.
Buede, Dennis. 1992. The Engineering Design of Systems. New York, NY: John Wiley and Sons,
Inc.
Department of Defense (DoD). 1973. Military Standard (Mil Std) 1553B: Structure for Bus
Controllers and Remote Terminals. Washington, DC: Government Printing Office.
Jablonski, Edward, Rebecca Chaky, Kreg Rice, Matt McCollum, and Cindy George. 31 July
2002 (1). NASDA SSP 30240, Rev D - Space Station Grounding Requirements. Fort
Belvoir, VA: DAU.
Jablonski, Edward, Rebecca Chaky, Kreg Rice, Matt McCollum, and Cindy George. 31 July
2002 (2). NASDA SSP 30242, Rev F - Space Station Cable/Wire Design and Control
Requirements for Electromagnetic Compatibility. Fort Belvoir, VA: DAU.
Kossiakoff, Alexander and William N. Sweet. 2002. Systems Engineering: Principles and
Practices. New York, NY: J. Wiley.
National Space Development Agency of Japan (NASDA), 31 May 1996. SSP 30237, Rev C -
Space Station Electromagnetic Emission and Susceptibility Requirements. Fort Belvoir,
VA: Defense Acquisition University (DAU).
US Department of Energy (DOE). July 2006. DOE P 413.1, Program and Project Management
Policy. Washington, DC: DOE.
Chapter 16
Manage Configuration
Mary Jane Cary, Cyber Guides Business Solutions, Inc.
TABLE 16-1. Configuration Management (CM) Process Steps. Here we list the five major
configuration management processes, and the chapters offering explanations
of each one.
also increased our ability to provide the infrastructure and information needed for
collaboration among project stakeholders.
A key ingredient to mission success is sharing CM roles and responsibilities
among the stakeholders. All team members must accept responsibility for
delivering complete, accurate, and timely information to their colleagues. This
principle is affirmed in several CM performance standards, including the
International Organization for Standardization (ISO) 10007 Quality Management
Guidelines for CM, the companion guide to the ISO 9001 Quality Management System
Standard (both published by ISO); the Software Engineering Institute's Capability
Maturity Model® Integration; and the EIA649A CM National Consensus Standard,
published by the Electronics Industry Association. Each of the five CM processes
relies on systems engineers,
project managers, quality assurance team members, and other stakeholders to
provide complete, accurate, and timely information, as illustrated in Table 16-2.
TABLE 16-2. Configuration Management Processes and Players. This table describes the
contributors to each CM process, providing inputs and creating outputs. Typical lead
contributors for each process are identified in bold. (Cl is configuration item; ECP is
engineering change proposal; CCB is configuration control board; FCA is functional
configuration audit; PCA is physical configuration audit.)
TABLE 16-2. Configuration Management Processes and Players. (Continued) This table
describes the contributors to each CM process, providing inputs and creating outputs.
Typical lead contributors for each process are identified in bold. (Cl is configuration
item; ECP is engineering change proposal; CCB is configuration control board; FCA is
functional configuration audit; PCA is physical configuration audit.)
Lessons Learned
Improve Organizational CM Performance
Lessons Learned
Improve Project CM Performance
Lessons Learned
Using Technical Resources Effectively
• Maintain situation list. Identify conditions when specific technical resources are
frequently required.
• Maintain technical resource network. Publicize and encourage use of technical
expertise, tools, models, and facilities to identify, evaluate, or resolve problems.
Include alternates in Key fields.
• Provide network with news. Keep them informed on project progress, performance,
and trend reporting.
• Provide rewards. Have ways to thank people for their assistance, such as public
recognition, technical collaboration opportunities, and educational conferences.
FIGURE 16-1. Example Organization Structure for a Complex Project. This chart illustrates an
organization that designs complex systems using networked configuration
management (CM). (CCB is configuration control board; QA is quality assurance.)
FIGURE 16-2. FireSAT Example Physical Makeup Diagram. Here we show the FireSAT
spacecraft with its six configuration items as well as the three configuration items on
the baseplate module. (SADA is solar array drive assemblies.)
The FireSAT team uses this method to identify items of significance from a
technical, risk, and resource perspective. The result is an approved and released Cl
listing, which identifies the FireSAT mission system configuration item, and the
hierarchy of subordinate ones, including the satellite and auxiliary system
deliverables. All CIs must be directly traceable to the work breakdown structure.
The FireSAT team's proposed listing identifies a mere 300 CIs, which they will use
to manage a system with over 300,000 components!
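Because every CI must trace directly to the WBS, even a trivial automated check can flag orphaned entries in a CI listing of this size. Here is a minimal Python sketch with invented WBS and CI identifiers:

```python
# Sketch of a CI-to-WBS traceability check. WBS and CI numbers are invented.
wbs_elements = {"1.1", "1.2", "1.2.1", "1.2.2", "1.3"}

ci_listing = {
    "CI-0001": "1.2",    # spacecraft bus
    "CI-0002": "1.2.1",  # propulsion module
    "CI-0003": "1.9",    # no such WBS element: flag it
}

def untraceable_cis(ci_listing, wbs_elements):
    """Return CIs whose assigned WBS element does not exist."""
    return [ci for ci, wbs in ci_listing.items() if wbs not in wbs_elements]

print("CIs not traceable to the WBS:", untraceable_cis(ci_listing, wbs_elements))
```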
Technical documentation for each component in a Cl includes such things as
functional specifications, schematics, and testing requirements, all subject to
configuration control through design and engineering releases. But designating an
item or group of items as a CI creates the requirement for specifications, discrete
identifiers, change approvals, qualification testing, design reviews, approvals,
and baseline releases for the item. These requirements help the team to better
manage the volume and tempo of technical activity.
Lessons Learned
Tips for Establishing CIs
reference drawings and design documentation, while the serial number allows us to
reference quality control and verification documentation unique to a specific part.
Lessons Learned
Tip for Improving Data Clarity
FIGURE 16-3. Engineering Change Proposal (ECP) Process. This flow chart shows the steps and
people involved in initiating, receiving, recording, evaluating, approving, and releasing
ECPs. (CM is configuration management; CCB is configuration control board.)
Lessons Learned
Tips for Effective Priority Setting
Lessons Learned
Tips for Effective Change Impact Analysis
If the CCB rejects or defers an ECP or SPR, the CM technician records this
disposition along with the reasons for rejection or deferral. If the change is rejected,
we record the relevant evaluation documentation for historical purposes. If it's
deferred, we mark documentation for re-examination with the specified date for
tracking purposes. The ECP or SPR number serves as the common thread for
change tracking and traceability, appearing in configuration status accounting lists
and reports, and upon approval and implementation, ultimately on drawings in
the revision block.
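Because that number threads through status accounting lists, reports, and drawing revision blocks, even a simple index keyed on the ECP or SPR number keeps dispositions traceable. The Python sketch below is illustrative only; the disposition states and fields follow the paragraph above rather than any specific CM tool:

```python
# Sketch of change tracking keyed on the ECP/SPR number.
# Disposition states and record fields are illustrative only.
change_log = {}

def record_disposition(number, disposition, rationale, reexamine_on=None):
    """Record the CCB action (approved, rejected, or deferred) for an ECP or SPR."""
    change_log[number] = {
        "disposition": disposition,
        "rationale": rationale,
        "reexamine_on": reexamine_on,  # used only when deferred
    }

record_disposition("ECP-0147", "approved", "Needed to meet thermal margin")
record_disposition("SPR-0031", "deferred", "Wait for qualification test results",
                   reexamine_on="2007-03-01")
record_disposition("ECP-0150", "rejected", "Duplicate of ECP-0147")

# The same number later appears in status accounting reports and revision blocks.
for number, entry in change_log.items():
    print(number, entry)
```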
when the final design phase starts, with the CM manager as the chair, and the
addition of the project manager and integrated logistics support stakeholders. We
must also collaborate with software and other technical review boards or interface
control working groups (ICWGs) to be sure that all technical performance issues
and "used-on" impacts are adequately evaluated.
The CCB chair also arranges board meetings, agenda preparation, and change
package review coordination, and sees that meeting minutes, including
discussions, decisions, and required action items are recorded. Resulting decisions
are issued as a CCB directive, which notifies all stakeholders of the decision and its
implementation. Systems engineers play a major role in developing directive
contents, which include:
• An implementation plan identifying all required actions by responsible
stakeholders, and their sequencing, associated costs, and completion dates
• Effective date (effectivity) of each required change, which specifies when to
incorporate changes to a drawing, document, software, or hardware for
each CI and its contract. We express this as an incorporation date, block
number, lot number, or unit number.
• All documents and drawings affected, including their change effectivity
• Any relevant directives or associated orders that must be issued
When changes affect customer-approved baselines, the customer CCB must also
approve these changes before we can implement the directives.
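As a rough illustration of what a CCB directive must carry, here is a hypothetical record structure; the class names, fields, and example values are invented for the sketch and do not represent any standard directive format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DirectiveAction:
    """One required action in the directive's implementation plan."""
    owner: str          # responsible stakeholder
    description: str
    due: str            # completion date, kept as text for this sketch
    cost: float

@dataclass
class CCBDirective:
    """Skeleton of a CCB directive issued after change approval."""
    change_number: str                      # ECP or SPR number
    effectivity: str                        # date, block, lot, or unit number
    actions: List[DirectiveAction] = field(default_factory=list)
    affected_documents: List[str] = field(default_factory=list)
    related_orders: List[str] = field(default_factory=list)
    customer_approval_required: bool = False

directive = CCBDirective(
    change_number="ECP-0042",
    effectivity="unit FS-002 and subsequent",
    actions=[DirectiveAction("Propulsion team", "Update hinge drawing D-1234",
                             "2025-04-15", 12_000.0)],
    affected_documents=["D-1234 Rev C"],
)
print(directive.change_number, len(directive.actions))
```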
Regularly scheduled CCB meetings can process changes efficiently when the
volume of changes is consistent. The chair may call special meetings to review
urgent changes. The actions of the CM manager and CCB members are a powerful
indicator of an organization's configuration management performance. Board
members who give top priority to meeting attendance, participation, and active
involvement in guiding change evaluations to successful outcomes encourage top
technical performance. Complex, cumbersome, or inefficient CM processes induce
project teams to attempt work-arounds. While the need for effective change
management cannot be over-stated, CM managers must see that their methods
balance formality and simplicity for best results.
Change packages usually include only the affected design document pages, and support documents
are referenced with a required date, to assure availability when needed.
By this point, verification has shown that the system meets all requirements and is ready for the operational readiness review. Functional and
physical configuration audits assure that the project fulfills all of its functional and
physical specifications. Audit activities include reviews of the latest version of the
as-built baseline configuration, all approved changes, completed test units and
results, project schedules, and contract deliverables. After this review, the SEs
publish an ER, releasing the approved as-deployed baseline for use, and
authorizing deployment of FireSAT.
During the remaining lifecycle phases, all changes to components, CIs, and
their documentation require formal CM approval. We may still make changes in
order to correct deficiencies or enhance project performance, subject to the change
order, approval, and release process in the CM plan.
Figure 16-4. Product Number Revision Process. This flow diagram describes the decision
process for retaining or revising an item number.
Unit (serial number) effectivity works on a similar principle, with the change tied
to a specific end-item serial number unit. This method allows us to pre-define the
configuration of each end-item serial number, and separates the configuration
change from the effects of shifting schedules. The configuration manager retains
the change implementation records, including effectivity information, to allow us
to trace specific end-items.
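Conceptually, unit effectivity is just a mapping from end-item serial number to the approved changes incorporated in that unit. A minimal sketch, with hypothetical serial numbers and ECP identifiers:

```python
# Map each end-item serial number to the changes incorporated in it.
effectivity = {
    "FS-001": ["ECP-0042"],
    "FS-002": ["ECP-0042", "ECP-0057"],
}

def configuration_of(serial: str) -> list:
    """Return the approved changes that define this unit's configuration."""
    return effectivity.get(serial, [])

print(configuration_of("FS-002"))   # -> ['ECP-0042', 'ECP-0057']
```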
TABLE 16-3. Typical Sources and Reports for Technical Processes. Here we list example
sources of CM data and common status outputs. (CI is configuration item; ECP is
engineering change proposal; SPR is software problem report.)
• Focus time and effort on the few most important configuration management
objectives
• Show a trend, with a simple and clear cause-and-effect relationship
• Support meaningful understanding of CM processes among stakeholders
• Provide insight quickly and easily, with data that's easy to gather, update,
and understand
We must have buy-in from process users to secure lasting improvements.
Metrics focusing on the weakest links in our CM processes allow us to isolate
problems and identify their root causes. Removing non-value-added process steps,
performing tasks in parallel, and improving communication accuracy and speed
often result in improved performance. But we must clearly document the lessons
learned, including the reasons for change, to assure continued improvement. We
should record such process changes in the CM plan, thus transforming this
document into a powerful working tool for translating plans into reality.
The result of CSA activities is a reliable source of configuration information to
support all project activities, including project management, systems engineering,
hardware and software development, fabrication, testing, and logistics. A high-
performance CSA process provides project team members with the information
they need to carry out any configuration management activity quickly, accurately,
and efficiently.
We normally perform the functional configuration audit (FCA) once for each CI, group
of CIs, or system, but may perform FCA-like activities at
other times. A successful FCA certification package includes these two validations:
• Verification procedures were followed, results are accurate, and the design
meets all configuration item (CI) requirements
• Drawings and documentation used to procure long-lead-time products
satisfy all design requirements
The physical configuration audit (PCA) takes place in conjunction with, or after,
the FCA. It ensures that the CI's as-built configuration, including the physical
location of components and their versions, conforms to its as-designed
configuration, including validation of all relevant tools and documentation.
Software data library compliance inspections verify and validate that all software
versions are correct, that software and documentation releases have been
completed according to procedure, and that all changes have been accurately
identified, controlled, and tracked. The audit and project teams validate the
proven match through joint reviews of the engineering specifications, released
engineering documentation, and QA records.
Because project specifications evolve with the project, any differences in
documentation or product between the FCA and PCA audits require explanation
and validation. Some projects conduct these audits simultaneously to avoid this
situation. If simultaneous audits are impractical, we can minimize the complexity
by incorporating the following four activities into the FCA:
1. Compare the FCA audited configuration with the PCA configuration,
noting any differences. Include a review of FCA minutes for discrepancies
requiring action
2. Record in the PCA minutes any differences between documentation and
product, or between FCA and PCA configurations
3. Determine the validity of the previous FCA and current PCA, and any
impact on current activities
4. Accept any differences resulting from approved and tested changes that are
compatible with approved specifications and recorded in engineering data.
Differences caused only by test instrumentation are acceptable as well.
A successful PCA certification package normally addresses these ten validations:
• The CI product baseline specification accurately defines the CI, and its
required testing, transportability, and packaging requirements
• Drawings describing equipment are complete and accurate
• The software specification listing matches the delivered software media
• Acceptance test procedures and results satisfy all specification requirements
• The software version description document completely and accurately
describes all required documentation for software operation and support
• Software media satisfy contract requirements for their intended purpose
Lessons Learned
Tips for Successful Audits
The audit team documents each finding and assigns it to
the appropriate project team members. These members review and analyze the issue
for root cause, recommend resolution plans, obtain any required approvals, and
implement corrective actions. Resolution may be as simple as offering additional
information that clarifies and resolves the issue, or may result in disagreement with
the finding. Configuration management change control processes apply here,
including approvals and oversight by the system CCB or similar audit executive
panel, to assure satisfactory closure of all discrepancies.
Most FCA and PCA audits today are iterative and hierarchical. Audit teams
frequently sequence these audits as "rolling" reviews, with the lowest CI levels
audited first, and higher levels audited progressively as they are produced,
continuing through the system-level CI audits. This method of planned, sequential
reviews is a more productive method for participating project teams, and less
disruptive to system lifecycle management.
While we focus on configuration audits to be sure that CIs fulfill their project
requirements, others carry out configuration management process audits to ensure
adequate CM performance and to guide continuous improvement efforts. These
audits may be done as self-assessments by QA team members, by customer or
independent configuration management auditors, or some combination of
these. But the purpose remains the same—to verify that our CM procedures are
sufficient to meet the organization's contract and policy obligations.
Process audits focus on one of the four CM processes—request, change, build
and test, or release. A written audit plan defines each audit, identifying the
purpose, scope, timing, and project team members involved. Audit activities
include publishing audit minutes, with identification of any discrepancies, along
with their resolution. Corrective actions may include revisions to CM policies and
procedures with subsequent training recommended to ensure continued CM
quality performance.
Chapter 17
Manage Technical Data
Table 17-1. Managing Technical Data for Applied Space Systems Engineering. This table lists
the five main steps for managing technical data related to space systems engineering
and matches them to discussion sections in this chapter.
Organizing processes into a hierarchy helps us clearly understand SSE processes and their relationships to one
another. Figure 17-1 exemplifies this approach. In this manner, systems engineers
define and communicate the TDM process and infrastructure that supports their
projects. These processes underpin the digital-engineering environment in which
engineers operate, create, and share specifications and engineering artifacts, and
collaborate to produce the best designs and products.
FIGURE 17-1. Architecture for Space Systems Engineering (SSE). Here we show a sample
hierarchy of SSE processes for a space operations company. It illustrates the broad
array of processes surrounding mainstream engineering activities (2.0 Engineering
column). Each block has artifacts that document the process results; the technical-
data management system stores these artifacts and makes them accessible. (The
figures and tables in this chapter are based on Engineering Policies and Procedures
of United Space Alliance, LLC. Copyright © 2007. Used with permission.)
TABLE 17-2. Tailoring Guide for Space Systems Engineering. This matrix contains criteria for
project visibility, approval, and risk that allow engineers to determine how much rigor
(tailoring) they must apply to developing, reviewing, and maintaining artifacts during a
project’s lifecycle. The level of tailoring determined here is used in Table 17-3 to establish
requirements for managing technical data.
Using Table 17-2, we can assess the amount of rigor required in the SSE effort.
The highest factor level determines an activity's SSE level. Table 17-3 defines
requirements for managing technical data at each level of SSE outlined in the
tailoring guide.
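The "highest factor level wins" rule is simple enough to express directly. The sketch below assumes three hypothetical tailoring factors rated 1 to 3; the factor names and ratings are invented for illustration.

```python
# Hypothetical factor ratings from a tailoring assessment like Table 17-2.
# Each factor (visibility, approval authority, risk) gets a level of 1-3;
# the activity's overall SSE level is the highest of them.
factors = {"project_visibility": 2, "approval_level": 3, "risk": 1}

sse_level = max(factors.values())
print(f"Apply level-{sse_level} technical-data management requirements")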
TABLE 17-3. Requirements to Manage Technical Data for Space Systems Engineering. This matrix identifies requirements for stakeholder involvement, documentation and control, and progress reporting at each SSE tailoring level (lowest, intermediate, and highest rigor).

Stakeholder involvement (It's a good idea to use a checklist similar to that in Table 17-4.)
• Lowest rigor: Coordination of engineering artifacts before approval is within a department-level organization, including customer counterparts.
• Intermediate rigor: Distribute engineering artifacts to cross-functional stakeholders for review before submitting them to the board or management review. Some stakeholders may be members of development teams.
• Highest rigor: Make all key stakeholders members of development teams. They must know the engineering artifacts before the board or management acts.

Product documentation and control
• Lowest rigor: Systems engineering artifacts might not be formally documented.
• Intermediate rigor: Formally document systems engineering artifacts only as much as required for board or management approval (if that approval is necessary). Systems engineering includes informal plans for assessing and mitigating risk.
• Highest rigor: Formally document all systems engineering artifacts, including interim results, rationale, risk assessment and mitigation plan, and evidence of applying lessons learned.

Process documentation and control
• Lowest rigor: Systems engineering process might not be formally documented.
• Intermediate rigor: Document the systems engineering process. Changes to engineering processes require formal approval.
• Highest rigor: Document the systems engineering process and place it under the program manager's control. Process changes require the program manager's or designee's approval.

Progress reporting
• Lowest rigor: Little review of systems engineering status.
• Intermediate rigor: Regularly do an informal review of systems engineering status with all stakeholders.
• Highest rigor: Regularly do a formal review of systems engineering status with all stakeholders.
[Stakeholder checklist matrix: design engineer, manufacturing engineer, logistics specialist, maintenance engineer, maintenance technician, safety engineer, flight operations engineer, software engineer, financial analyst, technical manager, program integration, reliability engineer, quality engineer, systems engineer, quality control, human factors, and materials and processes, each marked against the columns of the original table.]
Relevant stakeholders
R-1 Project manager
R-2 Project management staff
R-n
Inputs
I-1 Common process inputs [PS 0 common processes]
I-2 Statement of work [external]
I-n Specifications
Outputs
O-1 Allocated requirements
O-2 Project plan (PP)
O-n
1.1.n
Supporting information
A. Resources
B. Process flow
C. Artifacts
D. Measures
FIGURE 17-2. Sample Specification for a Standard Process. This type of specification helps us
define processes during the space systems engineering lifecycle, as well as their
artifacts and flows. (PS is process specification.)
PS 2.0 ENGINEERING
PS 2.2 DESIGN
The design process specification (PS) translates detailed functional requirements into a
design concept (the high-level design), transforms this concept into a detailed design, and
finalizes the plans for implementation.
Relevant stakeholders
Definitions:
D-1. Project team—Members of the team that conducts the project as defined in the project
plan
D-2. Engineers—Individuals assigned responsibility for designing, implementing, and
sustaining the engineering work products
D-3. Additional definitions
Responsibilities:
R-1. Project team—Create the high-level and detailed design documents; do peer reviews,
conduct the preliminary design review (PDR) and critical design review (CDR); and
update the technical data package
R-2. Engineers—Work as part of the project team to create the high-level and detailed
designs
R-3. Additional responsibilities
Inputs
I-1. Approved project plan
I-2. Approved system requirements specification
I-3. Additional inputs
Outputs
O-1. Updated work request
O-2. Updated software design document, preliminary and detailed, as required
O-3. Additional outputs
Process requirements
2.2.1. Common processes—Perform the common process activities
2.2.2. Generate preliminary design (high-level design)—The project team analyzes the
requirements specification, refines the operations concept, develops detailed
alternate solutions, and prototypes and develops or updates the design concept
2.2.3. Conduct peer reviews—If required by the project plan, the project team does a peer
review of the high-level design
2.2.4. Conduct a PDR—If required by the project plan, the project team does a PDR
2.2.5. Additional process requirements
FIGURE 17-3. Specification for a Sample Design Process. This sample uses the process
specification framework from Figure 17-2 for the architecture shown in Figure 17-1.
(PS is process specification.)
Figure 17-4. Flow Diagram for a Sample Design Process. We uniquely identify each step and
denote its outputs. For example, the preliminary design review in step 2.2-3
produces review minutes and triggers updating of the project plan in step 2.2-5 and
the test plan in step 2.2-6. (SDD is system design document; SRS is system
requirements specification; RTM is requirements traceability matrix; ICD is interface
control document; DD is design document.)
To show how we apply this process flow, we consider FireSAT's critical design
review (CDR, step 2.2-9 in Figure 17-4). To specify this review process, we define
steps for transmitting draft artifacts to reviewers; reviewing draft content;
submitting review item discrepancies (RID); reviewing, disposing of, and
incorporating RIDs; and approving and releasing the final design documentation.
The systems engineer developing this process can take any of several approaches,
including those based on workflow, serial reviews, collaboration (with concurrent
reviews), ad-hoc (relying on independent reviews and due dates), or some
combination. The process decisions that the FireSAT team makes to review such
items as the system requirements document or the system design document (see
the project's document tree in Chapters 13 and 19) become a major driver for
managing the technical data.
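One way to picture the RID bookkeeping behind such a review is a small record per discrepancy. This sketch is illustrative only; the fields and statuses are assumptions rather than FireSAT's actual review tooling.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ReviewItemDiscrepancy:
    """One review item discrepancy (RID) raised against a draft artifact."""
    rid_id: str
    artifact: str                 # e.g., "FireSAT system design document"
    originator: str
    description: str
    disposition: Optional[str] = None   # accept | reject | defer
    incorporated: bool = False

def open_rids(rids: List[ReviewItemDiscrepancy]) -> List[ReviewItemDiscrepancy]:
    """RIDs that still block approval and release of the final documentation."""
    return [r for r in rids if not r.incorporated and r.disposition != "reject"]

rids = [ReviewItemDiscrepancy("RID-007", "System design document", "thermal lead",
                              "Radiator sizing assumption not traced to requirement")]
print(len(open_rids(rids)))   # -> 1
```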
Artifacts are products of space systems engineering. They're the plans,
specifications, drawings, models, analyses, review records and other documents,
datasets, and content that define and describe the system or components we're
engineering and the lifecycle model we're using. Table 17-5 shows a sample set of
required artifacts for the design process in Figure 17-4. Table 17-6 defines the
required artifacts for the design-review package in detail.
Depending on the organization's definition of the engineering lifecycle,
processes may include manufacturing and producing the system and its components.
TABLE 17-5. A Sample Set of Required Artifacts for the Design Process in Figure 17-4. We
annotate each artifact with the main practitioner responsible for producing it. We retain
all these artifacts for the life of the system plus two years. (SDD is system design
document; PDR is preliminary design review; CDR is critical design review; SRS is
system requirements specification; ICD is interface control document; RTM is
requirements traceability matrix.)
Table 17-6. A Sample Set of Artifacts Required for the Design Review Package. We annotate
each artifact with the main practitioner responsible for producing it and with how it
applies to the PDR and CDR. (PDR is preliminary design review; CDR is critical
design review; PM is project manager; SDE is systems design engineer; DE is design
engineer; A/R is as required; MAE is mission assurance engineer; LP&S is logistics
planning and supportability; X means the artifact applies.)
For example, project managers and project leads need to see design progress weekly to
address factors that may prevent achieving baselined milestones. To meet this need, we
monitor design progress based on achieving major and minor milestones, using two
indicators: variance in achieving major milestones and variance in achieving minor
milestones. If any variance is more than 7 days (one reporting period), we identify root
causes and assign corrective actions. If a major-milestone variance is more than 14 days
(two reporting periods), we report the variance to all stakeholders and assemble them to
evaluate corrective actions and how the delay will affect the project.
Measurement targets:
• Schedule dates and work-in-progress status of deliverables
Descriptive measures:
• Milestone type
• Planned completion date
• Actual completion date
Collection techniques:
• Record planned and actual dates for milestones in the project's schedule
Calculations:
• Compare planned to actual dates to determine achievement variance (+/- days)
Results:
• Variance in achieving major milestone (+/- days)
• Variance in achieving minor milestone (+/- days)
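The variance calculation and its decision criteria are easy to express directly. The sketch below assumes the 7-day reporting period and 14-day escalation threshold from the example above; the function names are hypothetical.

```python
from datetime import date

def milestone_variance(planned: date, actual: date) -> int:
    """Achievement variance in days; positive means the milestone was late."""
    return (actual - planned).days

def escalation(variance_days: int, major: bool) -> str:
    """Apply the example decision criteria: one reporting period = 7 days."""
    if major and variance_days > 14:
        return "report to all stakeholders and convene them"
    if variance_days > 7:
        return "identify root causes and assign corrective actions"
    return "no action"

v = milestone_variance(date(2025, 6, 1), date(2025, 6, 10))
print(v, escalation(v, major=False))   # -> 9 identify root causes ...
```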
[Figure 17-6 content includes project- and organization-level factors such as contract terms and conditions, and organization goals and objectives.]
FIGURE 17-6. Rights, Obligations, and Commitments for Technical-data Items. Systems
engineers who define policies and procedures for data governance must consider
factors ranging from very broad to very specific. (TDM is technical-data management.)
The strategy also entails partitioning into layers the software tools that
constitute the digital-engineering environment and the TDMS. The tools interact
through the data and content stored in the logical data repository, and an
enterprise-wide messaging bus (not shown) allows tools to communicate with
each other when necessary. This strategy loosely couples the tools and data, in that
the data is independent of a particular tool.
The strategy outlined in Figure 17-7 aims to minimize lifecycle costs by
favoring tools based on non-proprietary or open formats. This approach fosters
competition and usually keeps proprietary vendors from locking the project in. It also reduces
upgrade costs over the system's lifespan by enabling projects to replace higher-
cost tools with less expensive ones as competition drives down prices. This
strategy doesn't require costly migration of data and content from one proprietary
format or data structure to another. Of course, the lower-cost tool must still deliver
the necessary functions, supportability, and performance.
FIGURE 17-7. A Data-storage Strategy Based on a Common Logical Repository for Data and
Content. This physically distributed arrangement offers many benefits. Successful
data-storage strategies must account for lifecycle costs, flexibility, responsiveness
to change, and risk.
Flexibility means being able to add or subtract physical locations (part of the
architecture's design) and tools as needs dictate. Responsiveness derives from
loosely coupling tools and data, adopting open or industry standards, and using
metadata scoped to the enterprise, which allows the organization to see data and
content as they use it. The strategy in Figure 17-7 also limits risk in several ways:
• More diversification and redundancy among multiple physical repositories
to cover system failure at a single location
• Diversification by selecting tools from multiple vendors, while avoiding the
risk of poor integration and support if the number of vendors becomes
unwieldy or difficult to manage
17.1.7 Describe the Strategy for the Methods, Tools, and Metrics
Used During the Technical Effort and for Managing
Technical Data
We can't build a strategy for storing and accessing data, as outlined in Section
17.1.6, without considering tools for creating, managing, and using data and content.
So we must also evaluate and select tools, based on the organization's style. We begin
with the organization's belief in developing its own tools (make), buying commercial-
off-the-shelf (COTS) tools from software vendors (buy), or buying COTS and
modifying them to its own requirements and preferences (modified COTS).
Once an organization has decided on whether to make or buy tools, it must
define requirements in some way, ranging from formal functional requirements,
to checklists of "musts" and "wants," to loosely assembled marketing materials in
which vendors feature various products. The organization's style and practices
also influence these requirements. We recommend structured, formal definition
processes because ad hoc, informal approaches tend to produce subjective results
that depend on the biases and preferences of those involved.
After defining requirements for tools, we can use several forms of decision
making to select them, ranging from a decision by one or more organizational
executives; to formal decision-analysis techniques, such as Kepner-Tregoe
Decision Analysis, SWOT (strengths, weaknesses, opportunities, and threats)
analysis, or decision trees; to group-decision techniques such as unanimity,
majority vote, range-voting, multi-voting, consensus building, or the step-ladder
method. Regardless of the process, the final selection leads to applying and
deploying the digital-engineering environment and technical-data management
system (TDMS).
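As a flavor of the simpler end of these techniques, the sketch below scores two hypothetical candidate tools against weighted criteria. The criteria, weights, and scores are invented for illustration and are not a recommended selection method.

```python
# Hypothetical criteria weights and candidate tool scores (1-5 scale).
weights = {"open_format": 0.4, "lifecycle_cost": 0.35, "functionality": 0.25}

candidates = {
    "Tool A (COTS)":          {"open_format": 3, "lifecycle_cost": 4, "functionality": 5},
    "Tool B (modified COTS)": {"open_format": 4, "lifecycle_cost": 3, "functionality": 4},
}

def weighted_score(scores: dict) -> float:
    """Sum each criterion score times its weight."""
    return sum(weights[c] * scores[c] for c in weights)

best = max(candidates, key=lambda name: weighted_score(candidates[name]))
for name, scores in candidates.items():
    print(f"{name}: {weighted_score(scores):.2f}")
print("Selected:", best)
```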
Some organizations formally specify the selection process. For example,
Federal Acquisition Regulations (FARs) determine contracting processes for
federal acquisition in the United States. Part 15 Subpart 3 prescribes how to select
sources of products that represent the best value. In other situations, the
techniques for make-versus-buy decisions apply. In many instances, the
alternatives are structured so that details of the approach reside in a single process
for decision making. All decisions ultimately involve evaluating alternatives
against criteria or requirements, whether the process is deterministic, heuristic, or
subjective. It's often tempting to skimp on or omit requirements definition, but
doing so dramatically increases the risk that the digital-engineering environment
and TDMS will fall far short of their intended goals. Chapter 4 discusses
requirements definition in detail.
A project may also inherit baselines for legacy systems that it must decommission or
demolish to make way for the current project. Sometimes these
systems are very old and their engineering data isn't even in electronic form.
Project managers and leads must consider the form and format of baselines
targeted for update with respect to the tools and capabilities of the organization's
digital-engineering environment and TDMS. Artifact baselines, such as CAD
drawings, 2-D and 3-D engineering models, or simulations, may have used earlier
versions of the toolset or tools that are no longer in use. So converting to the current
version may require a lot of money and effort. In fact, we may have to re-create the
baselined artifacts using current tools. Project plans and cost estimates must
account for this potential data translation to avoid unpleasant surprises when
engineers try to use or update these artifacts.
In one approach, designated document owners approve content and instruction changes, and we assign approval for process changes
to the owners of those processes.
In another approach, we might post a policy or procedure requiring review on
a wiki-based internal web site and then assign parties to update the policy during
a review window. Once the review window closes, we could open an approval
window, so approvers could view the changes and sign them off online. A third
approach involves electronic workflows triggered by annual review timers that
route policies and procedures to identified parties for review, update, and release.
Business Process Execution Language (BPEL) workflows can also bring the digital-engineering environment into the review and release flow by creating web services
that capture its interaction with the flow. Because BPEL is based on standards, it
easily flows processes across organizations, companies, and agencies that employ
different infrastructures for information technology.
Figure 17-8 illustrates a conceptual flow for reviewing and releasing
engineering artifacts using workflow or BPEL. It's a sequence diagram using the
Unified Modeling Language, which shows how review and release processes
interact. It shows each participant with a lifeline (running vertically), the messages
targeted to and from the participant (running horizontally), and in certain cases, the
parameters that messages carry.
The sequence in Figure 17-8 models a simplified, success-oriented scenario.
But models for a robust production process must cover all scenarios and
incorporate ways to process errors, handle exceptions, iterate, and account for
many other details, such as
• How to identify artifacts for revision
• Criteria to determine who the reviewers and approvers are
• Mechanics of incorporating notices and comments
• Whether artifacts will carry effectivity—and if so, what type (date, lot, serial
number, and so on)
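One way to reason about such a flow before committing it to BPEL or a workflow engine is as a small state machine. The sketch below models only the simplified, success-oriented scenario plus one rejection path; the states and events are hypothetical.

```python
# Allowed transitions in a simplified review-and-release workflow.
# This is an illustrative state machine, not the BPEL process itself.
TRANSITIONS = {
    "draft":     {"submit": "in_review"},
    "in_review": {"approve": "released", "reject": "draft"},
    "released":  {},
}

def advance(state: str, event: str) -> str:
    """Return the next state, or raise if the event isn't allowed here."""
    try:
        return TRANSITIONS[state][event]
    except KeyError:
        raise ValueError(f"event '{event}' not allowed in state '{state}'")

state = "draft"
for event in ("submit", "reject", "submit", "approve"):
    state = advance(state, event)
print(state)   # -> released
```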
FIGURE 17-8. A Sample Process for Reviewing and Releasing Engineering Data, Expressed
as a Sequence Diagram in the Unified Modeling Language. The diagram lists
each participant across the top and specifies interactions among participants as
messages to and from them. (CM is configuration management.)
FIGURE 17-9. A Sample Process for Updating and Releasing Engineering Artifacts. This
sequence diagram in the Unified Modeling Language is nearly identical to the
process flow for creating artifacts in Figure 17-8. We could easily apply it using a
services-oriented architecture. (CM is configuration management.)
Figure 17-10. Access Controls Can Use Any of Several Control Models. In this example,
access-control lists associate users—individually or by groups—to technical-data
resources, such as artifacts, datasets, or tools. These lists also detail the rights and
privileges that each user has to the resource. The key symbol represents lockable
access, requiring approval. The number 1 denotes a multiplicity of exactly one; the
asterisk denotes a multiplicity of zero or more between the boxes.
Figure 17-10 has four associations; three are bi-directional, one is uni-directional.
The asterisks mean "zero or more"; the "1" means "one only." The arrow at the end
of an association shows its direction. Bi-directional associations have arrows at both
ends. For example, the bi-directional association between group and user at left
signifies that from the group perspective "a group has zero or more users," and from
the user perspective "a user is a member of zero or more groups."
The example illustrated in Figure 17-10 is one of many access-control
configurations possible with today's digital-engineering environments and
TDMSs, but we must use a robust scheme to manage technical data successfully.
So we should treat access control as a separate systems engineering project.
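A minimal sketch of the access-control idea in Figure 17-10, with hypothetical users, groups, resources, and rights, might look like this:

```python
from typing import Dict, Set

# Hypothetical access-control lists: resource -> {principal: rights}.
acls: Dict[str, Dict[str, Set[str]]] = {
    "FireSAT_SDD_v3": {"group:design_team": {"read", "update"},
                       "user:qa_lead":      {"read"}},
}

group_members = {"group:design_team": {"user:alice", "user:bob"}}

def has_right(user: str, resource: str, right: str) -> bool:
    """Check direct grants and grants inherited through group membership."""
    acl = acls.get(resource, {})
    if right in acl.get(user, set()):
        return True
    return any(right in rights
               for principal, rights in acl.items()
               if user in group_members.get(principal, set()))

print(has_right("user:alice", "FireSAT_SDD_v3", "update"))    # -> True
print(has_right("user:qa_lead", "FireSAT_SDD_v3", "update"))  # -> False
```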
Hot standby and failover sites require careful design of alternate support elements. Generally, hot standby and failover greatly
increase the cost of the TDMS. Analyzing costs versus benefits and return on
investment helps determine how aggressively to employ these solutions.
Content and data recovery. Having backups in place and hot standby or failover
sites ready to step in is meaningless if we don't know how to use them. A sound
DRCOP details recovery procedures step by step to show how to fail over a system
or recover part or all of it. Best are procedures that segregate the steps by role and
offer an integration process that coordinates actions among the roles.
Test scripts for content and data recovery. Using a recovery procedure for the
first time when we need it is very risky. The unexpected often trips up best-laid
plans. The solution is to develop test scripts that exercise recovery procedures
before we need them. This approach also trains system operators and support staff
on recovering the system, builds their proficiency in applying the procedures,
decreases the time needed to recover, and gives system users some confidence in it.
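A recovery test script can be as simple as restoring an artifact to a scratch area and confirming it matches the original. The sketch below shows one such check; the file paths are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    """Checksum used to confirm a restored artifact matches the original."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restore(original: Path, restored: Path) -> bool:
    """One step of a scripted recovery drill: the restored copy must match."""
    return sha256(original) == sha256(restored)

# In a drill we would restore from backup into a scratch area, then verify:
# assert verify_restore(Path("vault/SDD_v3.pdf"), Path("scratch/SDD_v3.pdf"))
```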
Archiving raises several considerations, including:
• The format of an archived record, which generally has a long life with
respect to the technology used to create and access it
• The archived record's location, usually an independent repository dedicated
to archiving and set up for long-term reliability of data-storage media
• Procedures used to access archived records, which normally involve a
librarian department, organization, or function that creates and maintains
the archive
We must write procedures for archiving and make sure that the digital-
engineering environment or TDMS supports it. Archiving requirements depend
on several perspectives, including the organization's goals and objectives, the
contract's terms and conditions, regulations from government or industry, and the
organization's policies on retaining records to reduce litigation and risk. One such
regulation is the Code of Federal Regulations (CFR) Title 36 Part 1234.30 —
Standards for the Creation, Use, Preservation, and Disposition of Electronic Records.
We must balance the future usefulness of archiving a project's history and
artifacts against the cost to create and maintain the archive. Because knowledge
changes rapidly in space systems engineering, we're unlikely to apply much of an
archive to future projects. But lessons learned, data acquired, and historical
preservation still justify it.
Figure 17-11. A Web-based Portal that Gives Systems Engineers a Reference Index to
Engineering Processes, Internal Operating Procedures, Templates, and Forms.
Each procedure, template, and form callout is a hyperlink that brings up the item.
Hyperlinks in the leftmost column offer easy access to other systems engineering
portals, documents, artifacts, online training, tools, and related websites.
• To organizations
- A consolidated view of who has access to what
- A framework for control procedures such as account-approval workflows,
periodic revalidation of system access, and license management in cases
where access requires licensed commercial-off-the-shelf (COTS) software
- A way to establish access rights, limits, obligations, and commitments
with users through agreements they enter into when accessing or
registering for accounts—essential in certain government settings and
useful in others
FIGURE 17-12. Identifying a User for Automated User Access Registration. The identity
management system captures or displays attribute data that uniquely identifies a
person. For existing users, the system simply fills in this information.
FIGURE 17-13. Continuing Registration. The process continues as users select an application or
facility and provide information on their roles, profiles, groups, and justification.
FIGURE 17-14. Submitting the Request for Processing. By clicking the “submit” button, the user
electronically acknowledges an understanding of the policies and procedures that
must be followed.
FIGURE 17-15. Revalidating User Accounts and Access Grants. Having an automated user
interface simplifies this process and is essential when we must periodically
revalidate thousands of accounts and access grants.
In most large organizations, an internal audit group does the audits. They focus
on business processes to ensure compliance with all standards and procedures, as
well as generally accepted practices. They also evaluate supporting systems and
tools to be sure the artifacts produced are correct and reliable. Organizations that
don't have an internal audit group can contract with external auditors.
Audits of the systems engineering lifecycle and TDMS evaluate SE practices,
procedures, and systems to confirm that required technical data is appropriately
captured, managed, and distributed to satisfy the needs of project teams and their
customers. Audits also verify that lifecycles follow established procedures,
directives and agreements. They typically carry out these eight actions:
• Review content and data submissions against technical data management
system quality controls and standards, which evaluates the accuracy and
integrity of content and data, and gauges the effectiveness of input edit
checks and validation features
• Review exception-handling processes to gauge how effectively the system
detects and resolves exceptions
• Review data backups and the DRCOP's recovery procedures; review the
results of periodic testing under the recovery plan
• Review users' accounts and evaluate how effectively the system manages
them; randomly sample approvals and responses to computer users'
requests to gauge how effectively the system provisions and de-provisions
accounts
• Review access controls and mechanisms; take random samples of access
controls to gauge the effectiveness of these processes
• Review all privileged-user accounts for rationale and need
• Document all audit results and findings
• Track findings through resolution and retesting
An audit assesses the correctness, reliability, and security of data and content
stored and managed by the TDMS. It provides a level of confidence to stakeholders
of the TDMS's services and functions.
FIGURE 17-16. A Website for Integrated Data Management Based on Visual Search and
Retrieval Techniques. This portal supports searching different data and content
domains by establishing a visual context and setting filters to narrow the search
results.
Processes, systems, and techniques for managing technical data result from an
evolution that began with the computer age. It planted the seeds for an
infrastructure that has substituted integrated digital design environments for
drafting boards and computer-based modeling systems for slide rules. These
modeling systems can accurately analyze virtually every physical, economic, and
managerial aspect of a space system's lifecycle. Engineering has steadily shifted
from paper artifacts to digital ones. Functionally aligned organizations whose
members interact face-to-face have given way to virtual project teams spread
across continents, collaborating through web-based tools. These changes have
radically changed how space systems engineering operates and how we create,
manage, and use its products and deliverables.
FIGURE 17-17. Web Portal for Integrated Data Management (IDM). This IDM portal searches
collections of hardware baselines, ground processing facilities, and mission-
operations topics for the ISS. Users can drill into each baseline through hyperlinks
or hotspots on the web page.
FIGURE 17-18. Baseline for Hardware on the International Space Station (ISS). The baseline for
ISS hardware enables users to drill down into any modules or assemblies on orbit or
being processed for launch.
Figure 17-19. Data Search Results for the International Space Station (ISS). Search results
are in a grid with data attributes stored in the system that describe each item. The
item’s thumbnail enables users to launch the viewer associated with this content.
Chapter 18
Technical Assessment
and Reviews
Michael C. Pennotti, Ph.D., Stevens Institute of Technology
Mark P. Saunders, NASA Langley Research Center
18.1 Define the Subject and Scope of the Review
18.2 Establish the Entrance Criteria
18.3 Identify and Invite the Review Team
18.4 Conduct the Review
18.5 Ensure that the System Meets Success Criteria
18.6 Specify and Document the Decisions
18.7 Clarify and Document the Action Items
18.8 Baseline the Design
18.9 Improve the Process for Technical Reviews
Table 18-1. Technical Assessment Process. Here we list the process steps for
assessing a project’s progress during its lifecycle.
For example, at the system definition review for the Mars Science Laboratory,
the independent review team was relatively comfortable even though they knew the
teams hadn't fully assessed entry, descent, and landing risks and might not have
verified and validated their designs adequately, because the schedule allowed plenty
of time to address these issues before the preliminary design review. If the same
deficiencies had remained at the critical design review, a more drastic set of actions
would have been warranted.
Different systems engineering processes define technical reviews that all
projects must pass through. Processes differ, and some organizations use different
words to mean the same thing, but their intent is the same: to ensure the orderly
spending of resources to produce what users want within cost and schedule
constraints. This chapter focuses on the key reviews in NPR 7123.1a [NASA, 2007
(1)] concerning the mission concept, system requirements, system definition,
preliminary design, and critical design reviews (Figure 18-1). We mention other
terms where appropriate.
FIGURE 18-1. Major Technical Reviews. NASA’s lifecycle model (from NPR 7123.1a) specifies
technical reviews at major development points. (MCR is mission concept review;
SRR is system requirements review; SDR is system definition review; PDR is
preliminary design review; CDR is critical design review.)
In Section 18.8 we discuss other formal technical reviews, including those for
test readiness, system acceptance, flight readiness, operational readiness, and
decommissioning. We also describe informal technical reviews that project teams
may hold at any time they consider appropriate. But first we describe the rest of
the review process.
For example, the Orion Project held its system definition review as scheduled
in August 2007, even though the spacecraft architecture couldn't meet its mass
allocation. So the review had to remain open for five more months until they met
that milestone. This extension increased cost, delayed the decision to proceed to the
next phase, and left uncertain how design changes would affect other projects.
Project teams must produce required artifacts early enough so reviewers have
plenty of time to digest them before the review meeting.
Too often, those who define and manage development processes focus more on
activities than on outputs. They would be far more effective if they concentrated on
deliverables for each stage and on thoroughly reviewing these deliverables at the
next gate. Otherwise, the teams' activities may be unfocused, and they may show
up for the subsequent review with a long story about how they spent their time
instead of the products they were supposed to deliver. The purpose of a technical
review is not to listen to such stories, but to assess the design's maturity and its
fitness for the next development stage.
Table 18-2. Sample Review Criteria. The entrance and success criteria for a mission concept
review are specific and clearly defined. (MOE is measure of effectiveness; MOP is
measure of performance.)
A formal review team typically includes members with expertise in areas such as design,
manufacturing, logistics support, or training. Other experts not directly involved in the project should offer
independent perspectives [Starr and Zimmerman, 2002]. Examples are:
• Someone with general design expertise
• Someone familiar with the technology
• A program manager from another, unrelated program
• An expert on the application domain
• Someone with general knowledge of architectural principles, whom we
count on to ask questions that are a little "off-the-wall"
For example, the review board for the Mars Science Laboratory includes experts
in the technical disciplines of guidance, navigation and control; propulsion; nuclear
power; telecommunications; robotics; science instrumentation; systems engineering;
and other technical disciplines, as well as experts in mission operations; project
management; cost, schedule and resource analysis; and mission assurance.
In some cases, sharp reviewers from outside the project team join a review and
recognize technical problems. For example, during the system definition review
for the Space Interferometry Mission, deficiencies emerged in the two-dimensional
metrology approach to establish the mirrors' relative positions. But it wasn't an
optics expert who pointed them out—a reviewer only partly familiar with the
technology found them. So, inviting outside reviewers often brings valuable talent
to the review process.
Formal reviews must also include those who can commit the resources
required for the next development stage [Cooper, 2001]. These are usually senior
managers or administrators with demanding schedules who are often difficult to
engage. But if they don't participate, decisions made during the review won't be
binding, and their absence suggests to other reviewers and the development team
that the review isn't important. On the other hand, their participation greatly
strengthens a formal review's effect and its contribution to project success.
At one review, for example, a board member declared, "This
project will not go forward until I say so." Despite prompt action to temper his
decree, this remark harmed the relationship between the project team and review
board for months. It became much more difficult for the review board to get
accurate, timely information from the project team, and the project team resisted
the board's suggestions and recommendations. Both wasted valuable time and
energy worrying about each other when they should have focused on designing
the mission. Although the situation gradually improved, it would have been far
better if the relationship hadn't been damaged in the first place.
Reviewers can help prevent adversarial relationships. For example,
mentioning all the flaws in a presentation seldom builds trust and a collaborative
spirit. Most teams produce some ideas worthy of praise, so reviewers should first
say what they like. Acknowledging the positive elements reinforces the team's
accomplishments, helps keep these contributions in subsequent work, and makes
the presenters more receptive to constructive criticism. Couching that criticism as
"what else we'd like you to do" instead of "what's wrong with what you've done"
also maintains a positive atmosphere. Teams can always do more, no matter how
well they've done. A comment from this perspective emphasizes what needs to
happen in the future, which the team can influence, not what's wrong with the
past, which the team can't change.
Though important for an effective review, including the senior managers
creates a special challenge because hierarchical relationships between reviewers
and presenters may stifle debate and minority views [Surowiecki, 2004]. This
happens when a leader, intentionally or unintentionally, expresses opinions that
others fear to challenge. Effective leaders encourage debate, minority opinions,
objective evidence, and all possible interpretations before deciding. They clearly
value others' opinions, even if those opinions contradict their own or if they decide
to go in a different direction. All types of disasters have occurred when reviews
didn't foster such debate. In fact, that's the reason NASA's Space Flight Program
and Project Management Requirements [NASA, 2007 (2)] now require a way to
document dissenting opinions and carry them forward.
Nearly all reviewers should address material presented to them, but it's often
far more difficult to critique what isn't presented—and that may be the most
important contribution they make. An effective review bores under the surface to:
• Question the critical assumptions that underlie a design
• Make sure that we've considered a broad range of options before selecting
the preferred one
• Examine every assertion to ascertain the facts and data behind it
• Explore relationships among the system's elements and between different
presentations
• Identify blind spots that people immersed in the details may have missed
Example: Mars Pathfinder, the first successful Mars landing since Viking in the
1970s, relied on a parachute to control the spacecraft's final descent to the surface.
At the critical design review, one of the reviewers questioned whether the Kalman
filters in the radar altimeter properly accounted for fluctuations from the
spacecraft's swinging beneath the chute as it approached the uneven terrain
below. The question produced a strong response—an extensive test program to be
sure that the altimeter worked properly.
Under aggressive schedules, reviewers may have a strong incentive to choose this option,
especially if "recycle" will result in a slip in a major project milestone or a missed
market opportunity. But it's a dangerous choice. Too often, despite best intentions,
no one revises the deficient deliverables. Having passed through the gate, and under
intense schedule pressure, the development team may simply move forward
without satisfactorily completing some design element that reviewers considered
critical. Unless a disciplined process holds the development team strictly
accountable, "proceed with conditions" can thwart the review's purpose.
For example, at the preliminary design review (PDR) for Mars Science
Laboratory, the independent review team said the authorized funds weren't
enough to complete the project as planned, but the development team decided to
proceed with only a modest increase. At the critical design review, three major
systems weren't at the required maturity level and authorized funds were still
inadequate. As a result, the project received authorization to proceed but with a
smaller scope. If that decision had been made at PDR, it would have left more time
to respond.
If we meet these success criteria, the mission baseline is defined and the system
moves to concept development. For a complete list of entrance and success criteria
for the mission concept review, see Table 18-2.
Success criteria for assessing the SRR's deliverables emphasize five main areas:
• Have we satisfactorily completed all action items from the MCR?
• Are the system requirements complete, clearly written, and traceable to
stakeholder requirements in the mission baseline? Is each system requirement
verifiable and do we have a preliminary plan for its verification?
• Can the proposed functional architecture accomplish what the system must
do? Is the technical approach reasonable? Do the timing and sequencing of
lower tasks match higher functional requirements?
• Have we done enough technical planning to go to the next development
phase? Have we identified risks and planned how to handle them?
If we've met these criteria, the functional baseline is complete and preliminary
design continues. Table 19-27 gives a full list of entrance and success criteria for the
system definition review.
Example: The Jet Propulsion Lab (JPL) is designing the Mars Science
Laboratory Mission to send the next-generation rover to Mars in 2009. They did a
preliminary mission and systems review (PMSR)—a different name for the system
definition review—in December 2006. The independent review team (IRT)
consisted of system experts from JPL, Goddard Space Flight Center, and other
external organizations. The project team presented their design and development
plans to the IRT, and then responded to the IRT's questions and requests for action.
As part of this review, the IRT found several technical areas to improve
through further project analysis and work. For example, they believed the project
team hadn't finished assessing the risks of entry, descent, and landing (EDL) at
Mars and had taken a potentially faulty approach to verification and validation.
The IRT also thought the design appeared to carry more single-string systems than
was warranted for a $1.5 billion mission and recommended that the project
reexamine both areas.
At the next milestone—the preliminary design review (PDR)—the IRT
reassessed the project for entrance and success criteria mentioned above. Having
taken the PMSR's issues and actions seriously, the project offered analyses and
system changes as part of their PDR presentations. The project showed the IRT the
results of their EDL analyses and convinced the IRT that their validation and
verification approach was adequate to demonstrate the EDL performance. The
project had concurred with the IRT's assessment of single-string systems and had
added avionics hardware to increase the spacecraft's redundancy.
The IRT was satisfied with these changes and found the technical design to be
in good shape, but they had noted in the previous review that the schedule looked
extremely tight. During this later review, they strongly recommended the project
increase their schedule margin by pulling forward their critical design review
(CDR), moving their software deliveries to their test beds earlier, and making their
actuator deliveries as early as possible, along with several other suggestions. The
project leader concurred.
At the CDR the independent review team again assessed the project's progress
and their plans for development and operations, as specified in paragraph 18.8.5.
The IRT said the end-to-end system design held together with adequate (though
tight) technical margins, but several critical systems weren't yet at the CDR level.
Just before CDR, the project discovered that the actuator design wouldn't meet the
design specifications so they had started developing a new design, Coupled with
late software development, design issues with the sample handling equipment
chain, and a few other late designs, this actuator problem had added schedule
pressure to the planetary-constrained launch window. The IRT suggested several
unique ways to resolve this schedule pressure, which the project agreed to
investigate as part of their continuing planning for workarounds.
Finally, the independent review team recognized that the project understood
the risks they faced and had plans in place to address them, but they didn't have
enough money to complete the spacecraft development. The review team
communicated this important fact to management in their report, so the latter
could correct the problem.
References
Cooper, R. G. 2001. Winning at New Products: Accelerating the Process from Idea to Launch.
Cambridge, MA: Perseus Publishing.
NASA. March 26, 2007 (1). NPR 7123.1a—Systems Engineering Processes and Requirements.
Washington, DC: NASA.
NASA. March 6, 2007 (2). NPR 7120.5d—Space Flight Program and Project Management
Requirements. Washington, DC: NASA.
Naval Air Systems Command (NAVAIR). April 10, 2006. NAVAIRINST 4355.19C - (AIR-
4.0/5.0/6.0) Systems Engineering Technical Review Process. Patuxent River, MD:
Department of the Navy.
Starr, Daniel, and Gus Zimmerman. July/August, 2002. "A Blueprint for Success:
Implementing an Architectural Review System," STQE Magazine.
Surowiecki, J. 2004. The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How
Collective Wisdom Shapes Business, Economies, Societies and Nations. New York, NY:
Doubleday.
Chapter 19
FireSAT End-to-End Case Study
FireSAT is arguably the most famous space mission that never flew. The idea
for this fictitious mission was first developed in Space Mission Analysis and Design
(SMAD) [Larson and Wertz, 1999] as a running example throughout each chapter
to highlight how theory applies to a concrete problem. Later, Understanding Space:
An Introduction to Astronautics (US) [Sellers, 2004] adapted the FireSAT idea to
illustrate concepts of space systems engineering and subsystem design. FireSAT
has since become a workhorse in other books and in classrooms around the world.
The incarnation of FireSAT in this chapter will look familiar to anyone who has
encountered the mission in either SMAD or US, but the authors took academic
license with the mission and system definition for the sake of consistency and to
illustrate different technical points.
This chapter uses FireSAT to highlight significant ideas and issues in both the
science and art of space systems engineering. The science issues are underscored
through numerous exhibits of systems engineering products such as requirements
documents and verification plans. For each technical baseline, we present issues
and anecdotes to give a flavor for the art of systems engineering. These examples
come from real-world experiences but "the names have been changed to protect
the innocent." The reader may ask why certain products or examples are discussed
at one point in the lifecycle but not others. Of course, one of the biggest challenges
of space systems engineering (and any book on the subject) is that in the real world,
many things are happening at once. Life happens in parallel but books can only be
written one page at a time. So the authors chose to highlight various issues at
different points in the lifecycle to even out the discussion.
It's impossible to fully plumb the depths of a real-life mission in a single book,
let alone a single chapter. This forced the authors to pick and choose the products
and issues they presented. Our aim is to provide enough depth to give a feel for
the real complexity of the problem (often not appreciated until one really gets into
this business) without overwhelming the reader with detail. Table 19-1 lists the
activities described in this chapter along with cross-references to other chapters.
Unlike the rest of the book, which takes a process view of space systems
engineering, this chapter takes us longitudinally through a project lifecycle to
show how to apply those processes at each technical baseline. The objectives of this
chapter are to:
• Tie together all 17 systems engineering processes by illustrating their
application throughout a project lifecycle
• Provide a stand-alone systems engineering case study
Table 19-1. Chapter Activities. This is a list of activities addressed in this chapter
along with cross-references to other relevant chapters.
Here we try to get a grasp of the daunting challenge of taking a space mission
from a blank piece of paper through launch and early orbit operations. Of course,
such a story, even for a relatively simple system such as FireSAT, would span years
and the systems engineering products would fill volumes. Our approach is to give
the entire project a broad brush to cover the cradle-to-grave activities and products
in an inclusive way, zooming in on selected items, exhibits, and anecdotes to give
the reader enough concrete examples to get through the tough spots of project
development.
We've organized this chapter around the seven technical baselines that all
systems engineering processes support. As Chapter 13 describes, these baselines
serve as official "lines in the sand" that fully describe the current state of the
system. The baselines follow the mission lifecycle, and so each successive one
reflects greater design maturity. Table 19-2 shows all these baselines.
Table 19-2. Major Project Technical Baselines. These are the seven major
technical baselines in the project lifecycle and the major reviews at
which they’re defined.
We start our story at the end of FireSAT design and development, which is the
beginning of transition to normal operations. We then travel back in time to the
outset of the project to trace its evolution from initial customer need to delivery for
launch.
FIGURE 19-1. FireSAT in Deployed Configuration Doing its Mission. This shows the deployed
configuration of the FireSAT spacecraft along with a ground track illustrating the
zone of coverage for the satellite as it searches for wildfires.
With all that behind us, we're ready to "turn over the keys" of the system to
day-to-day operators. Now is a good time to reflect on the effort that brought us to
this point and look at some of the issues that we may still face.
Table 19-3. Entrance and Success Criteria for the Post-launch Assessment Review (PLAR).
Included are the supporting documents needed to complete each success criterion.
Adapted from NASA [2007].
Entrance criteria:
• The launch and early operations performance, including (when appropriate) the early propulsive maneuver results, are available
• The observed spacecraft and science instrument performance, including instrument calibration plans and status, are available
• The launch vehicle performance assessment and mission implications, including launch sequence assessment, and launch operations experience with lessons learned, are completed
• The mission operations and ground data system experience, including tracking and data acquisition support as well as spacecraft telemetry data analysis, is available
• The mission operations organization, including status of staffing, facilities, tools, and mission software (e.g., spacecraft analysis and sequencing), is available
• In-flight anomalies and the responsive actions taken, including any autonomous fault protection actions taken by the spacecraft or any unexplained spacecraft telemetry, including alarms, are documented
• The need for any significant changes to procedures, interface agreements, software, and staffing has been documented
• Documentation is updated, including any updates originating from the early operations experience
• Future development and test plans have been made

Success criteria:
• The observed spacecraft and science payload performance agrees with prediction, or if not, is adequately understood so that future behavior can be predicted with confidence
• All anomalies have been adequately documented, and their impact on operations assessed. Further, anomalies impacting spacecraft health and safety or critical flight operations have been properly disposed.
• The mission operations capabilities, including staffing and plans, are adequate to accommodate the flight performance
• Liens, if any, on operations, identified as part of the operational readiness review, have been satisfactorily disposed
Table 19-4 summarizes each of the documents that support the success criteria
and defines their top-level contents.
Table 19-4. Top-Level Contents for Key Post-launch Assessment Review (PLAR)
Documents. This table outlines the top-level contents for each of the documents
needed to address PLAR success criteria.
Product Contents
TABLE 19-5. Post-launch Assessment Review (PLAR) Systems Engineering Process Relative
Level of Effort. This table illustrates the relative level of effort for each of the 17
systems engineering processes after the PLAR. Analogous to a cell phone signal, zero
bars represents no or very low resource requirements and five bars represents
maximum resource use. Resource requirements apply for a particular process only; a
given number of bars for one process does not imply the same level of resources as
the same number of bars for a different process.
Process Comments
16. Technical assessment Formal technical assessment mostly ends with PLAR, but periodic project reviews continue at some level through final disposal
17. Decision analysis By this point, key decisions are embodied in flight rules and other procedures for the operators; additional decision analysis is on a contingency basis
engineering change proposal for a long-term fix to the software, requiring a short
recovery time between torque rod firing and magnetometer data collection.
19.1.5 Conclusions
The above is only a small example of the kinds of issues that project managers
and engineers must grapple with both during the early stages of on-orbit
operations and throughout the mission life. We have to recognize how much the
decisions made during operations depend on the vast knowledge gained about the
system over many years of development, integration, and testing. At each formal
baseline, the knowledge about the system grows.
At the PLAR, the decision authorities had to review all the information
included in the project exhibits, giving special attention to the above issue. They
then had to decide whether it was prudent to accept the short-term operational
workaround and put FireSAT online for the US Forest Service. Once in full service,
FireSAT could finally fulfill its intended purpose, answering the need for a more
effective means to detect and monitor wildfires throughout the US.
But how did we get here? How did we start with this need and turn it into this
complex system, ready to go online to search for wildfires? Smokey Bear meets
Buck Rogers? In the next sections, we turn back the clock on the project to see how
it progressed from concept to launch.
critical decision gate as described in Chapter 13. The gate keepers vary by
organization and by the scope of the potential project. NASA, the European Space
Agency, and DoD use a mission concept review (MCR) to pass from Pre-phase A
to a full-fledged Phase A project.
TABLE 19-6. Mission Concept Review (MCR) Entrance and Success Criteria. These
considerations are in addition to the programmatic matters faced by a newly-
approved project. Adapted from NASA [2007].
Entrance criteria:
1. Mission goals and objectives.
2. Analysis of alternative concepts to show at least one is feasible.
3. Concept of operations.
4. Preliminary mission descope options.
5. Preliminary risk assessment, including technologies and associated risk management/mitigation strategies and options.
6. Conceptual test and evaluation strategy.
7. Preliminary technical plans to achieve next phase.
8. Defined MOEs and MOPs.
9. Conceptual lifecycle support strategies (logistics, manufacturing, and operation).

Success criteria:
1. Mission objectives are clearly defined and stated and are unambiguous and internally consistent.
2. The preliminary set of requirements satisfactorily provides a system that will meet the mission objectives.
3. The mission is feasible. A solution has been identified that is technically feasible. A rough cost estimate is within an acceptable cost range.
4. The concept evaluation criteria to be used in candidate systems evaluation have been identified and prioritized.
5. The need for the mission has been clearly identified.
6. The cost and schedule estimates are credible.
7. An updated technical search was done to identify existing assets or products that could satisfy the mission or parts of the mission.
8. Technical planning is sufficient to proceed to the next phase.
9. Risk and mitigation strategies have been identified and are acceptable based on technical risk assessments.
19.2.4 Results
The key results of Pre-phase A closely follow the major documents list
described in Table 19-8. We start with the one document that underlies the entire
definition of the mission: the scope document. We then look at the results captured
in the FireSAT mission design report, the draft operations concepts document, the
draft risk management plan, the SEMP focused on the project document tree, and
finally the plan to achieve SRR. The remaining documents, while important to a
good MCR, are more important in later phases; for now we hold off on describing
their detailed contents. The following subsections detail the main results.
TABLE 19-7. Mission Concept Review (MCR) Success Criteria Versus Supporting Documents.
From this brief analysis, we can put together the list of paper products that we’ll need to
develop throughout the course of the study with their top-level contents identified,
including the items specifically called for in the entrance criteria.
1. Mission objectives are clearly defined and stated and are unambiguous and internally consistent. Supporting document: FireSAT scope document.
2. The preliminary set of requirements satisfactorily provides a system which will meet the mission objectives. Supporting document: FireSAT mission design study report, including analysis of alternatives and cost estimates.
3. The mission is feasible. A solution has been identified which is technically feasible. A rough cost estimate is within an acceptable cost range. Supporting documents: FireSAT mission design study report; draft mission operations concept document.
4. The concept evaluation criteria to be used in candidate systems evaluation have been identified and prioritized. Supporting document: FireSAT mission design study report.
5. The need for the mission has been clearly identified. Supporting document: FireSAT scope document.
6. The cost and schedule estimates are credible. Supporting document: FireSAT mission design study report (credibility to be judged by the review board).
7. A technical search was done to identify existing assets or products that could satisfy the mission or parts of the mission. Supporting document: FireSAT mission design study report.
the customer and other stakeholders, the requirements engineering effort will
simply chase its tail, never able to converge on a clear picture of what everyone
really wants.
To define the project scope we follow the processes described in Chapters 2 and
3. We start by capturing the customer expectations as the need we're trying to
address along with associated goals and objectives (collectively called need, goals,
and objectives—NGOs). After extensive discussions with the FireSAT project
customer—the USFS—we identify the need statement, goals, and objectives, the
basis for the project scope document as summarized in Table 19-10. These goals and
objectives form the basis for the mission-level measures of effectiveness (MOEs).
In developing the NGOs, we also have to identify the relevant stakeholders.
Chapter 2 describes this fully, and Table 19-11 identifies the FireSAT stakeholders.
With our understanding of the customer's expectations in the form of NGOs,
and appreciation for the project stakeholders in mind, we turn our attention to the
concept of operations. First we must get our arms around the pieces of the problem
we want to solve, then look at how the pieces fit together. As Chapter 3 describes,
Table 19-8. Major Mission Concept Review (MCR) Products and Their Contents. These
documents are associated with the review’s success criteria. (MOE is measure of
effectiveness.)
context diagrams define the boundaries of the system of interest and help us focus
on the inputs (both the ones we can control and the ones we can't) and the outputs
that the customer is paying for. Figure 19-3 provides a simple view of the FireSAT
system context.
6. Product integration □□□□□ Only minimum effort expended on this process, mainly to identify enabling products for integration that may affect lifecycle cost
7. Product verification □□□□□ Only minimum effort expended on this process, mainly to identify enabling products for verification that may affect lifecycle cost
8. Product validation ■□□□□ This is a critical but often overlooked process that needs to be addressed to some level during Pre-phase A. While we have the attention of the stakeholders to define need, goals, and objectives, we must address ultimate project success criteria. These form the basis for eventual system validation.
9. Product transition ■□□□□ Only minimum effort expended on this process, mainly to identify enabling products for transition that may impact lifecycle cost
10. Technical planning ■■□□□ For any project going on to Phase A, the preliminary technical planning products are another important hand-over. They form the basis for defining the nature of the project to be formally established in Phase A. Pre-phase A is especially important in defining the initial integrated master plan (IMP), systems engineering management plan (SEMP) (also known as a systems engineering plan (SEP)), and master verification plan (MVP) (also known as the test and evaluation master plan (TEMP)). The only reason this effort isn't 100% is that until a project becomes "real," it's not prudent to invest too much in detailed planning.
11. Requirements management ■□□□□ Little or no effort is made to formally manage requirements until a project officially kicks off in Phase A—only enough to keep study teams in synch
12. Interface management □□□□□ Little or no effort is made to formally manage interface requirements until a project officially kicks off in Phase A—only enough to keep study teams in synch
13. Technical risk management ■□□□□ During Pre-phase A, top-level programmatic and technical risks should be fully identified to avoid kicking off a project that has unforeseen and unmitigated risks. But little formal effort is needed here to actively mitigate these risks.
14. Configuration management ■□□□□ Little or no effort is made to formally manage configuration until a project officially kicks off in Phase A—only enough to keep study teams in synch
15. Technical data management ■□□□□ Little or no effort is made to formally manage technical data until a project officially kicks off in Phase A—only enough to keep study teams in synch
16. Technical assessment ■□□□□ Pre-phase A technical assessment culminates in the mission concept review, so we need to prepare for it.
17. Decision analysis ■■□□□ Rigorous technical decision analysis is key to developing credible Pre-phase A results. It's impossible to do all the myriad trade studies that may be identified, so good decision analysis selects the most critical ones.
This context diagram shows clearly that the FireSAT System will take inputs
from wildfires and deliver information to the USFS. This is a good start, and as
obvious as it may seem, it's extremely useful to get all stakeholders to agree even
to this much. Of course, this simple view barely scratches the surface of the
problem. To dig deeper, we look at a top-level mission architecture description. As
discussed in Space Mission Analysis and Design (SMAD) [Larson and Wertz, 1999]
and illustrated in Figure 19-4, space missions share a common set of architectural
elements. Expanding on this approach, we lay out a notional FireSAT mission
architecture block diagram, as shown in Figure 19-5.
TABLE 19-10. FireSAT Need, Goals, and Objectives. This is the beginning of the process of
understanding stakeholder expectations.
Goals and objectives:
1. Provide timely detection and notification of potentially dangerous wildfires
   1.1 Detect a potentially dangerous wildfire in less than 1 day (threshold), 12 hours (objective)
   1.2 Provide notification to USFS within 1 hour of detection (threshold), 30 minutes (objective)
2. Provide continuous monitoring of dangerous and potentially dangerous wildfires
   2.1 Provide 24/7 monitoring of high priority dangerous and potentially dangerous wildfires
3. Reduce the economic impact of wildfires
   3.1 Reduce the average annual cost of fighting wildfires by 20% from the 2006 average baseline
   3.2 Reduce the annual property losses due to wildfires by 25% over the 2006 baseline
4. Reduce the risk to firefighting personnel
   4.1 Reduce the average size of fire at first contact by firefighters by 20% from the 2006 average baseline
   4.2 Develop a wildfire notification system with greater than 90% user satisfaction rating
5. Collect statistical data on the outbreak, spread, speed, and duration of wildfires
6. Detect and monitor wildfires in other countries
7. Collect other forest management data
8. Demonstrate to the public that positive action is underway to contain wildfires
TABLE 19-11. Stakeholders and Their Roles for the FireSAT Mission. Sponsors
provide funding and may be either active or passive.
Stakeholder Type
NOAA Active
NASA Passive
Prime contractor Passive
Taxpayer Sponsor, Passive
People living near forests Active
FIGURE 19-3. A Simple Context Diagram for the FireSAT System. A context diagram reflects
the boundary of the system of interest and the active stakeholders.
Figure 19-4. Space Mission Architecture. All space missions include these basic elements.
See text for definitions. Requirements for the system flow from the operator, end
user, and developer and are allocated to the mission elements.
Besides understanding what we have, we must also be clear about any rules of
engagement that constrain what we're allowed to do. Brainstorming and
imaginative out-of-the-box thinking may come up with any number of innovative
ways to improve on the existing capability. But harsh realities—technical, political,
FIGURE 19-5. FireSAT System of Systems Architecture Diagram. The FireSAT mission
architecture, or “system of systems,” comprises a number of interrelated elements
that must all work together to achieve the mission. (USFS is US Forest Service.)
Figure 19-6. Context Diagram for the FireSAT System, Including Likely Reference System
Elements. If we have a reference architecture for the system of interest, the context
diagram can reflect it, as shown here.
Figure 19-7. Current Fire Detection Concept of Operations. This figure gives us a view of the
current USFS fire detection concept of operations. Knowing where we are helps
define where we want to go with the FireSAT system.
and budgetary—constrain the possible to the practical. For FireSAT, the customer
and other stakeholders set the following constraints:
• Cl: The FireSAT System shall achieve initial operating capability (IOC)
within 5 years of authority to proceed (ATP), and full operating capability
(FOC) within 6 years of ATP
• C2: The FireSAT system total lifecycle cost, including 5 years of on-orbit
operations, shall not exceed $200M (in FY 2008 dollars)
• C3: The FireSAT System shall use existing NOAA ground stations at
Wallops Island, Virginia and Fairbanks, Alaska for all mission command
and control. Detailed technical interface is defined in NOAA GS-ISD-XYX.
With this background, and our understanding of the notional architecture, we
capture the alternatives for the new mission concept as summarized in Table 19-12.
This analysis helps us focus on a finite set of realistic options, which then goes
into a new concept of operations picture. The new FireSAT OV-1 is shown in
Figure 19-8.
Establishing the basic concept of operations prepares us to move on to a more
formal description of the interfaces between the different mission functions, as
shown in Figure 19-9, and different physical elements, as shown in Figure 19-8. In
Table 19-12. FireSAT Mission Concept Analysis. (Adapted from SMAD Table 2-1 [Larson and
Wertz, 1999].) The mission concept is defined by major decisions in each of these
four areas. This shows the options for FireSAT. (IOC is initial operating capability; RFI
is request for information)
Mission concept element: Data delivery
Definition: How mission and housekeeping data are generated or collected, distributed, and used
FireSAT issues: 1. How are wildfires detected? 2. How are the results transmitted to the firefighter in the field?
Alternatives: 1.1 Identified by satellite, OR 1.2 Identified by ground from raw satellite data; 2.1 Fire notifications sent via NOAA ground station, OR 2.2 Fire notifications sent directly to USFS field offices

Mission concept element: Mission timeline
Definition: The overall schedule for planning, building, deployment, operations, replacement, and end-of-life
FireSAT issues: 1. When will the first FireSAT become operational? 2. What is the schedule for satellite replenishment?
Alternatives: 1. IOC dictated by customer; 2. Outside of project scope
many ways, systems engineering is "all about the interfaces," so defining how the
elements of the system are linked is one of the first steps in understanding the
system. Focusing just on the space element for illustration, Figure 19-9 shows how
information and energy move in and out, from where and to where. Chapter 5
describes how these inputs and outputs form the basis for more detailed functional
analysis of the space element as a system in its own right. With this firm foundation,
the project turns its attention to the more detailed technical definition of the mission
as detailed in the mission design study report.
911"
“
Telemetry
(packetized Messages
(packetized
data, RF link) data, RF link)
Telemetry
Commands (packetized
(packetized data, secure
data, RF link) internet link)
Commands Archival data
(packetized (storage
data, secure media, e.g.,
internet link) DVD)
Recommend
ations and
requests
(email)
Archival data Taskings
requests
(email) (email)
FIGURE 19-9. An Ixl Diagram Showing the Connectivity Among Envisioned System
Elements. This diagram further clarifies the interfaces among the system elements.
Ixl matrices are explained in Chapter 14. Items in regular font describe “what” is
interfaced between elements. Items in italics tell “how.” (RF is radio frequency.)
1. Detection. The FireSAT system shall detect potentially dangerous wildfires (defined to be greater than 150 m in any linear dimension) with a confidence interval of 95%. Rationale: The US Forest Service (USFS) has determined that a 95% confidence interval is sufficient for the scope of FireSAT.
2. Coverage. The FireSAT system shall cover the entire United States, including Alaska and Hawaii. Rationale: For a US Government funded program, coverage of all 50 states is a political necessity.
3. Persistence. The FireSAT system shall monitor the coverage area for potentially dangerous wildfires at least once per 12-hour period. Rationale: The USFS set a mission objective to detect fires within 24 hours (threshold) and within 12 hours (objective). By requiring the coverage area to be monitored at least once per 12-hour period, the maximum time between observations will be 24 hours.
4. Timeliness. The FireSAT system shall send fire notifications to users within 30 minutes of fire detection (objective), 1 hour (threshold). Rationale: The USFS has determined that a 1-hour to 30-minute notification time is sufficient to meet mission objectives for the available budget.
5. Geo-location. The FireSAT system shall geo-locate potentially dangerous wildfires to within 500 m (objective), 5 km (threshold). Rationale: The USFS has determined that a 500-m to 5-km geo-location accuracy on detected wildfires will support the goal of reducing firefighting costs.
7. Design Life. The FireSAT system shall have an operational on-orbit lifetime of 5 years. The system should have an operational on-orbit lifetime of 7 years. Rationale: The USFS has determined that a minimum 5-year design life is technically feasible. Seven years is a design objective.
8. Initial/Full Operational Capability. The FireSAT system initial operational capability (IOC) shall be within 5 years of authority to proceed (ATP), with full operational capability within 6 years of ATP. Rationale: The ongoing cost of fighting wildfires demands a capability as soon as possible. A 5-year IOC was deemed to be reasonable given the scope of the FireSAT system compared to other spacecraft of similar complexity.
9. End of Life Disposal. FireSAT space elements shall have sufficient end-of-life delta-V margin to de-orbit to a mean altitude of <200 km (for low-Earth orbit missions) or >450 km above the geostationary belt (for GEO missions). Rationale: End-of-life disposal of satellites is required by NASA policy.
10. Ground System Interface. The FireSAT system shall use existing NOAA ground stations at Wallops Island, Virginia and Fairbanks, Alaska for all mission command and control. Detailed technical interface is defined in NOAA GS-ISD-XYX. Rationale: The NOAA ground stations represent a considerable investment in infrastructure. By using these existing assets the FireSAT project saves time, money, and effort.
11. Budget. The FireSAT system total mission lifecycle cost, including 5 years of on-orbit operations, shall not exceed $200M (in FY 2008 dollars). Rationale: This is the budget constraint levied on the project based on projected funding availability.
FIGURE 19-10. FireSAT GEO Fire Tower Concept. This concept features a single satellite in
geostationary orbit to provide round-the-clock coverage of the United States.
Figure 19-11. FireSAT LEO Fire Scout Concept. This concept features two satellites in an
inclined orbit, providing periodic revisit of the United States.
TABLE 19-14. Summary of Competing FireSAT Mission Concepts. This table summarizes the
primary differences between the GEO Fire Tower and LEO Fire Scout mission concepts.
Concept    Data Delivery    Tasking, Scheduling, and Controlling    Communications Architecture
flowing from there. Figure 19-12 illustrates the highest level of trade-off, followed
by the more detailed architectural trade for the GEO Fire Tower. Figure 19-13
illustrates the detailed trade-offs for the LEO Fire Scout.
For both options, some elements of the trade tree (e.g., operations concepts) have
no significant trade space, as they flow directly from the choice of mission concept.
Other elements, such as launch vehicle selection, have several discrete options. Orbit
altitude, on the other hand, in the case of the LEO Fire Scout, has an infinite number
of options between the lowest practical altitude (~300 km) and the edge of the lower
Van Allen radiation belt (~1200 km). For this trade, we examine several options
within this range to identify trends or performance plateaus. As for the wildfires,
characterizing them involves a separate trade study to look at smoke, atmospheric
FIGURE 19-12. FireSAT Trade Tree Part 1. This part of the FireSAT mission architecture trade tree
shows the top-level trade-offs between the two competing mission concepts (top).
At bottom is the detailed trade tree for the GEO Fire Tower option.
Figure 19-13. FireSAT Trade Tree Part 2. This part of the FireSAT mission architecture trade tree
shows the detailed trade-offs for the LEO Fire Scout option. (RAAN is right
ascension of the ascending node.)
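The bookkeeping behind a trade tree like this is straightforward to automate. Below is a minimal sketch, in Python, of enumerating candidate Fire Scout architectures; the specific option sets are illustrative assumptions (the chapter only states that altitude is sampled across roughly 300 to 1200 km and that some elements offer a few discrete choices), not the project's actual trade matrix.

```python
# A minimal sketch of enumerating trade-tree branches for the LEO Fire Scout
# option. The option sets below are illustrative assumptions.
from itertools import product

altitudes_km = [400, 500, 600, 700, 800, 900, 1000]   # sampled from ~300-1200 km
num_satellites = [1, 2, 3, 4]                          # per-plane options (assumed)
launch_vehicles = ["LV-A", "LV-B", "LV-C"]             # hypothetical placeholders

branches = list(product(altitudes_km, num_satellites, launch_vehicles))
print(f"{len(branches)} candidate architectures to characterize")

# Each branch would then be scored against the key performance parameters
# (coverage, persistence, timeliness, cost) before down-selecting.
for alt, n_sats, lv in branches[:3]:
    print(f"altitude={alt} km, satellites={n_sats}, launch vehicle={lv}")
```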
Spacecraft size (and hence cost) is driven by payload size (see Chapter 7). The
payload sizing model depends on the necessary aperture, which in turn is derived
from the basic laws of optics (the bigger the lens, the better you can see). For
sufficient spatial resolution to detect 150 m sized wildfires, the payload aperture at
geosynchronous altitude would be over 2.4 m. That's the size of the Hubble Space
Telescope! Simple analogous cost modeling has priced the Fire Tower system at
several billion dollars, far beyond the available budget. In addition, such a large-
scale project probably wouldn't meet the IOC requirement. Thus, the Fire Tower
concept is eliminated from further consideration.
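The arithmetic behind the 2.4 m figure is worth seeing. The sketch below assumes the SMAD-style diffraction-limited sizing relation, ground resolution x ≈ 2.44 λ h / D, and treats detection of a 150 m fire as needing one resolution element across the fire; both simplifications are ours, made only to illustrate why the GEO Fire Tower aperture balloons.

```python
# A minimal sketch of diffraction-limited aperture sizing, assuming the
# relation: ground resolution x ~= 2.44 * wavelength * altitude / D.
import math

WAVELENGTH_M = 4.2e-6      # mid-wave IR band used in the FireSAT payload trade
RESOLUTION_M = 150.0       # minimum fire dimension from the detection KPP

def required_aperture(altitude_m: float, resolution_m: float = RESOLUTION_M) -> float:
    """Aperture diameter (m) needed for the given ground resolution."""
    return 2.44 * WAVELENGTH_M * altitude_m / resolution_m

GEO_ALT_M = 35_786e3
LEO_ALT_M = 700e3

print(f"GEO aperture: {required_aperture(GEO_ALT_M):.2f} m")    # ~2.4 m
print(f"700 km aperture: {required_aperture(LEO_ALT_M):.3f} m")  # a few cm

# The Pre-phase A LEO design carries a 0.26 m aperture (Table 19-23), which
# under this same relation corresponds to roughly 30 m ground resolution,
# i.e., several pixels across a 150 m fire. That finer resolution requirement is
# our assumption about the sizing rationale, not something stated in the text.
```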
After the down-select to the Fire Scout concept, engineers continue to iterate
through the branches of the trade-tree to characterize each option. They focus
primarily on trade-offs between orbit altitude and the number and size of satellites
needed to do the mission. But the constrained time and resources of Pre-phase A
also force them to make numerous assumptions, including some implementation
details. All these assumptions must be analyzed and second-guessed once the
project begins for real in Phase A. The next section describes in more detail the
nature of these trade-offs, and details of the decisions, as part of the requirements
engineering process.
To estimate the cost of the Fire Scout option, project managers rely on
parametric cost estimating relationships (CERs) appropriate for a satellite of this
size. Table 19-16 shows these CERs. Applying them to the design data developed
during the Pre-phase A study results in the lifecycle cost estimates shown in Table
19-17. These figures are in FY2000 dollars (the year the CERs were validated for).
With an inflation factor of 1.148 (SMAD), the estimated lifecycle cost is $157.9M in
FY2007 dollars. So the Fire Scout option gives the project a nearly 25% cost margin.
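The inflation and margin arithmetic behind that statement is simple. The sketch below is hedged: the CER coefficients are placeholders (SSCM-style CERs are broadly power-law in a driving parameter X, but the actual coefficients from Table 19-16 are not reproduced here); only the 1.148 inflation factor, the $157.9M FY2007 estimate, and the $200M cap come from the text.

```python
# A minimal sketch of the parametric cost roll-up and inflation step.
def cer_cost_fy00k(a: float, b: float, x: float) -> float:
    """Generic power-law CER: subsystem cost in FY00$K as a function of X."""
    return a * x ** b

# Example only: hypothetical coefficients and driver value.
example_subsystem_cost = cer_cost_fy00k(a=110.0, b=0.8, x=35.0)  # FY00$K
print(f"Example CER output: {example_subsystem_cost:.0f} FY00$K (hypothetical)")

INFLATION_FY00_TO_FY07 = 1.148
LIFECYCLE_FY07_M = 157.9          # quoted estimate, FY2007 $M
BUDGET_CAP_M = 200.0              # constraint C2

lifecycle_fy00_m = LIFECYCLE_FY07_M / INFLATION_FY00_TO_FY07
margin_vs_estimate = (BUDGET_CAP_M - LIFECYCLE_FY07_M) / LIFECYCLE_FY07_M
margin_vs_cap = (BUDGET_CAP_M - LIFECYCLE_FY07_M) / BUDGET_CAP_M

print(f"Implied FY2000 subtotal: ${lifecycle_fy00_m:.1f}M")
print(f"Margin relative to the estimate: {margin_vs_estimate:.1%}")  # ~27%
print(f"Margin relative to the cap:      {margin_vs_cap:.1%}")       # ~21%
# The chapter rounds this to "nearly 25%", depending on which base is used.
```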
TABLE 19-16. Cost Estimating Relationships (CERs) for Earth-orbiting Small Satellites Including
Research, Development, Test, and Evaluation (RDT&E) and Theoretical First
Unit. Total subsystem cost in FY00$M is a function of the independent variable X
[Larson and Wertz, 1999]. (BOL is beginning of life; EOL is end of life.)
Cost Component    Parameter, X (Units)    Input Data Range    Subsystem Cost CER (FY00$K)    Standard Error SE (FY00$K)
Table 19-16. Cost Estimating Relationships (CERs) for Earth-orbiting Small Satellites Including
Research, Development, Test, and Evaluation (RDT&E) and Theoretical First
Unit. (Continued) Total subsystem cost in FY00$M is a function of the independent
variable X [Larson and Wertz, 1999]. (BOL is beginning of life; EOL is end of life.)
Cost Component    Parameter, X (Units)    Input Data Range    Subsystem Cost CER (FY00$K)    Standard Error SE (FY00$K)
TABLE 19-17. Estimated FireSAT Lifecycle Cost. The table shows estimated FireSAT mission
lifecycle costs based on the Fire Scout concept. Estimates use Aerospace Corp.
Small Satellite Cost Model (SSCM 8.0). [Larson and Wertz, 1999]. (RDT&E is
research, development, test, and evaluation; ADCS is attitude determination and
control system; C&DH is command and data handling; TT&C is telemetry, tracking,
and control.)
veteran operators from NOAA, NASA, USFS, and industry, begin to expand on
the basic concept of operations proposed at the beginning of the study. This
operational insight proves invaluable to the design team and helps them to better
derive and allocate requirements for all mission elements. We document these
results in the operations concept document.
FIGURE 19-15. FireSAT Design Reference Mission (DRM). The DRM embodies the operations
concept. The one for FireSAT is fairly standard.
The DRM definition bounds the operational activities of the mission from
cradle (launch) to grave (disposal). However, with time and resources being short
in Pre-phase A, most of the operational planning focuses on what everyone hopes
will be the bulk of the mission: normal operations. Drilling down on normal
operations, the focus turns to the mission's main function—to detect wildfires. The
integrated design and operations team starts by sketching a storyboard of what the
detection scenario might look like, as shown in Figure 19-16. From there, they
define a more formal sequence of functions, shown in Figure 19-17, along with the
hand-offs between space and ground elements as illustrated on the context
diagram in Figure 19-18. Finally, they build upon this operational insight to
expand on the basic scenario timeline in Figure 19-16, to develop the much more
detailed timeline shown in Figure 19-19. This detailed timeline allocates the 30-
minute notification time to the major elements of the architecture. These derived
response times will later be driving requirements for various mission elements.
Based on these and other analyses, the draft operations concept document derives
additional requirements for ground systems, including command and control
infrastructure at the operations center, necessary staff levels, and training plans.
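The allocation exercise behind Figure 19-19 amounts to budget bookkeeping. The sketch below checks a set of segment allocations against the 30-minute notification objective; the individual segment durations are hypothetical placeholders, since the actual allocations live in the figure.

```python
# A minimal sketch of checking a notification-timeline allocation against the
# 30-minute objective. The segment durations are hypothetical placeholders;
# the real allocations are those shown in Figure 19-19.
NOTIFICATION_OBJECTIVE_MIN = 30.0

allocation_min = {
    "onboard detection and processing": 5.0,
    "wait for broadcast opportunity":   10.0,
    "downlink / '911' broadcast":        1.0,
    "field-office processing":           9.0,
    "margin":                            5.0,
}

total = sum(allocation_min.values())
print(f"Allocated total: {total:.1f} min (objective {NOTIFICATION_OBJECTIVE_MIN} min)")
assert total <= NOTIFICATION_OBJECTIVE_MIN, "allocation busts the timeliness objective"
```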
Storyboard sequence: fire starts → fire becomes large enough to detect → fire detected on next pass → "911" warning message to regional field offices → warnings to wildfire command center for action → detailed observation data to NOAA ground station → data received by mission control for further action and archiving.
FIGURE 19-16. Simple Storyboard of Timeline for “Detect Wildfire” Capability. Pictures quickly
capture a simple scenario that shows how a fire starts, becomes larger, and is
detected by the Fire SAT system, which then alerts NOAA and the firefighters.
FIGURE 19-17. Block Diagram for an Operational Scenario That Responds to the “Detect
Wildfire” Capability. We use many informal and formal graphical methods to
describe operational scenarios.
Figure 19-18. Tracing an Operational Scenario on a Context Diagram. This type of diagram
brings users and systems engineers together in understanding required capabilities
and the system that will realize them.
Figure 19-19. Fire Detection and Notification Timeline. This is a more detailed timeline analysis
for the operational scenario. This input is important for assessing and applying
implementation concepts for the system elements. Time segment 2 is the second
part of Time segment 1 (Adapted from SMAD [Larson and Wertz, 1999]).
TABLE 19-18. FireSAT Project Document Tree. Here we see the principal project documents and how they mature through major reviews. (MCR
Document Name Number MCR SRR SDR PDR CDR SAR PLAR
Table 19-18. FireSAT Project Document Tree. (Continued) Here we see the principal project documents and how they mature through major
Document Name Number MCR SRR SDR PDR CDR SAR PLAR
Ground system to mission operations interface requirement and control document FS-8x060 Draft Baseline Final
Natural environment definition document FS-8x070 Draft Baseline Update Update Final
Configuration management plan FS-70060 Draft Baseline Update Update Update Final
Data management plan FS-70070 Draft Baseline Update Update Update Final
Electromagnetic compatibility and FS-70080 Draft Baseline Update Final
interference control plan
Mass properties control plan FS-70090 Draft Baseline Update Final
Mass properties reports FS-70091 Update Update Update Update Final
Manufacturing and assembly plan FS-70075 Draft Baseline Update Final
Spacecraft systems analysis plan FS-70076 Draft Baseline Update Update Final
Spacecraft systems analysis reports FS-70077 Submit Update Update Update Update Update
System configuration document FS-89000 Draft Baseline Update Update Final
Engineering drawings FS-89xxxx Draft Baseline Update
Risk management plan FS-70780 Draft Baseline Update Update Update Final
Risk management reports FS-70781 Submit Update Update Update Update Update
Reliability, maintainability, and FS-70073 Draft Baseline Update Update Update Update Update
supportability plan
Instrumentation and command list FS-98000 Draft Baseline Final
Master verification plan FS-70002 Draft Baseline Update Update Final
Verification compliance reports FS-90001 Submit
Acceptance data package FS-95000 Submit
FIGURE 19-20. Preliminary FireSAT Risk Assessment. We easily see the top five risks, as well as
their likelihood, consequences, and status. (USFS is US Forest Service; NOAA is
National Oceanic and Atmospheric Administration; MOA is memorandum of
agreement.)
technical standpoint, it's not necessary to have all the system answers by the end
of Pre-phase A. The real achievement is identifying the major questions. A rank-
ordered list of trade-offs to do in Phase A guides the planning and execution of the
subsequent design and analysis cycles (DACs). If we plan another DAC to support
development of products for SRR, this list drives requirements for DAC resources
(e.g., models, personnel, and development hardware and software).
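One common way to produce such a rank-ordered list is a weighted scoring matrix. The sketch below is purely illustrative: the candidate trades, criteria, weights, and scores are all hypothetical, and a real project would calibrate them through its decision analysis process.

```python
# A minimal sketch of rank-ordering candidate Phase A trade studies with a
# weighted scoring matrix. Criteria, weights, candidates, and scores are all
# hypothetical; they only illustrate the bookkeeping.
criteria_weights = {"KPP impact": 0.5, "schedule criticality": 0.3, "resource cost": 0.2}

# Scores: 1 (low) to 5 (high); for "resource cost" a higher score means cheaper.
candidate_trades = {
    "orbit altitude vs. constellation size": {"KPP impact": 5, "schedule criticality": 4, "resource cost": 3},
    "payload aperture vs. detector choice":  {"KPP impact": 4, "schedule criticality": 3, "resource cost": 2},
    "downlink vs. direct-broadcast comm":    {"KPP impact": 4, "schedule criticality": 5, "resource cost": 4},
}

def weighted_score(scores: dict) -> float:
    return sum(criteria_weights[c] * s for c, s in scores.items())

ranked = sorted(candidate_trades.items(), key=lambda kv: weighted_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{weighted_score(scores):.2f}  {name}")
```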
Since the system requirements document is one of the key documents
baselined at SRR, many analyses and trade studies focus on assessing assumptions
from the Pre-phase A study and on deriving additional system and subsystem
level requirements. One aim is to identify and assess technology readiness levels
for enabling system technologies to craft early developmental testing or other risk
reduction strategies.
Finally, because much of the emphasis during the early part of Phase A is on
requirements development, we must have plans to manage these requirements,
and all the other technical data that will be created at a faster pace beginning at the
start of the phase. Chapter 4 discusses requirements management. Details of the
bigger picture of technical data management are in Chapter 17.
procure such facilities. But preliminary orbit analysis indicates that with only two
locations in Alaska and Virginia, this constraint will limit the system’s ability to
meet the 30- to 60-minute notification requirement. For example, a FireSAT
spacecraft on a descending node pass over Hawaii won't be in contact with either
NOAA site. Thus, if a fire is detected, it will be more than one orbit later before the
data can be sent to one of the two stations, violating the 60-minute requirement.
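The Hawaii example can be checked with a quick spherical-geometry visibility calculation. The sketch below assumes a spherical Earth, a 700 km circular orbit, a 0° minimum elevation angle, and approximate station and sub-satellite coordinates; it simply asks whether either NOAA site is above the horizon for a spacecraft over Hawaii.

```python
# A minimal sketch of the visibility argument: is a spacecraft whose
# sub-satellite point is near Hawaii in view of either NOAA station?
import math

RE_KM, ALT_KM = 6378.0, 700.0

def central_angle_deg(lat1, lon1, lat2, lon2):
    """Great-circle central angle between two lat/lon points, in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    return math.degrees(math.acos(
        math.sin(p1) * math.sin(p2) + math.cos(p1) * math.cos(p2) * math.cos(dlon)))

def max_visible_angle_deg(min_elev_deg=0.0):
    """Largest Earth central angle at which the satellite is above min elevation."""
    eps = math.radians(min_elev_deg)
    eta = math.asin(math.cos(eps) * RE_KM / (RE_KM + ALT_KM))  # nadir angle
    return 90.0 - min_elev_deg - math.degrees(eta)

subsat = (21.3, -157.8)                    # near Hawaii (approximate)
stations = {"Wallops": (37.9, -75.5), "Fairbanks": (64.8, -147.7)}

limit = max_visible_angle_deg()            # ~25.7 deg at 700 km
for name, (lat, lon) in stations.items():
    angle = central_angle_deg(*subsat, lat, lon)
    print(f"{name}: central angle {angle:.1f} deg, visible: {angle <= limit}")
```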
Furthermore, initial analyses of the data formats reveal that it might not be
necessary to send complete images to the ground. Rather, the spacecraft could
generate simple "911" fire warnings with time and location information for each
fire detected. This would allow the USFS to immediately cue local fire detection
assets for rapid confirmation of each fire warning. Such a simple, low data rate
message could easily be received by regional field offices scattered all over the US
with relatively inexpensive equipment. Thus, any time FireSAT was in view of the
US, it would also be in view of one or more USFS field offices. The requirement
to force this simple data package to first pass through one of only two NOAA sites
appears to impose unnecessary overhead and delay on a time-critical message. All
stakeholders agree. So this interface requirement is defined as follows:
• The FireSAT spacecraft shall broadcast fire notification warnings so they
can be received by USFS field offices. Detailed technical interface is defined
in USFS GS-ISD-XYX.
The requirement does force all USFS field offices to be outfitted with a simple
FireSAT warning reception station. Fortunately, the cost for these stations is
estimated to be well within the project budget.
19.2.6 Conclusions
With great relief, the FireSAT study team walks out of the mission concept
review. They have convinced the milestone decision authorities that the FireSAT
system is feasible. It has a high probability of achieving its KPPs within the
allowable budget. The joint project board, comprising senior NASA, NOAA, and
USFS personnel, has stamped the project APPROVED FOR NEW START. But this
is only the first small step on the long road toward the PLAR described at the
beginning of the chapter. The next critical steps take place in Phase A.
needed for the review. These products will be analyzed by the SRR teams with
inputs collected via review item dispositions. In this phase:
• We further refine the operations concept to derive operational requirements
• Operational requirements drive creation of new requirements and
refinement of existing system-level ones
• We develop an operational model to capture high-level flows that identify
operations performed by various operational nodes
• We develop a functional model for each system and tie them together in a
system-of-systems (SoS) level functional model
• We capture functional and physical interfaces
• We generate draft interface requirements documents for each system-to-
system pairing (a simple enumeration sketch follows this list)
• We generate draft system requirements documents
• We put together draft SoS-level and system-level verification plans,
containing verification requirements and configurations, support equipment
and facilities, and verification schedules
• We draft risk management plans for each element
• We produce technology plans for each element—which identify technology
maturity, and plans to mature key technologies as needed
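Enumerating those system-to-system pairings is trivial but easy to get wrong by hand. The sketch below assumes the mission elements of Figure 19-5 are the systems being paired; in practice, pairings with no physical or functional interface would be pruned.

```python
# A minimal sketch of enumerating candidate system-to-system pairings for
# draft interface requirements documents (IRDs). The element list follows the
# mission architecture of Figure 19-5; pairings with no real interface would
# be pruned by the interface management process.
from itertools import combinations

elements = [
    "Space element",
    "Launch element",
    "Ground element",
    "Mission operations element",
    "Orbit and trajectory element",
]

ird_candidates = [f"{a} <-> {b} IRD" for a, b in combinations(elements, 2)]
print(f"{len(ird_candidates)} candidate IRDs")
for ird in ird_candidates:
    print(" ", ird)
```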
extremely difficult contracting issues. For our discussion, we assume that a single
prime contractor, Acme Astronautics Corp., is put on contract early in Phase A after
a brief (and uncontested) bidding and award process. Throughout our discussion,
we stick largely to engineering-oriented issues, avoiding the inevitable contracting
issues that also arise. However, the reader should not conclude that the contracting
challenges of FireSAT, or any space project, are trivial—quite the opposite.
Contracting decisions largely shape the systems engineering task.
TABLE 19-19. System Requirements Review (SRR) Entrance and Success Criteria. These
considerations are in addition to the programmatic matters faced by a newly-
approved project. Adapted from NASA [2007].
Entrance criteria:
• The project has successfully completed the mission concept review (MCR) and responded to all MCR requests for action and review item discrepancies
• A preliminary SRR agenda, success criteria, and charge to the board have been agreed to by the technical team, project manager, and review chair before the SRR
• The following technical products for hardware and software system elements are available to the review participants in advance:
  a. System requirements document
  b. System software functionality description
  c. Updated concept of operations
  d. Updated mission requirements, if applicable
  e. Baselined systems engineering management plan
  f. Risk management plan
  g. Preliminary system requirements allocation to the next lower level system
  h. Updated cost estimate
  i. Technology development maturity assessment plan
  j. Updated risk assessment and mitigations (including probabilistic risk assessment as applicable)
  k. Logistics documentation (e.g., preliminary maintenance plan)
  l. Preliminary human rating plan, if applicable
  m. Software development plan
  n. System safety and mission assurance plan
  o. Configuration management plan
  p. Initial document tree
  q. Verification and validation approach
  r. Preliminary system safety analysis
  s. Other specialty disciplines, as required

Success criteria:
• The project uses a sound process for allocating and controlling requirements throughout all levels, and has a plan to complete the definition activity on schedule
• Requirements definition is complete with respect to top-level mission and science requirements, and interfaces with external entities and between major internal elements have been defined
• Requirements allocation and flow-down of key driving requirements have been defined down to subsystems
• Preliminary approaches have been determined for verifying and validating requirements down to the subsystem level
• Major risks have been identified and technically assessed, and viable mitigation strategies have been defined
As we did for the MCR, we begin by laying out success criteria versus
products to answer the mail on each criterion, as shown in Table 19-20. From this
analysis, we put together the list of paper products that need to be developed
throughout the first part of Phase A along with their top-level contents. These
include the items identified above as well as specific items called for in the
entrance criteria, as shown in Table 19-21.
Table 19-20. System Requirements Review (SRR) Success Criteria Versus Supporting Documents. Each criterion
calls for one or more documents as evidence that the project meets the criterion.
1. The project uses a sound process for allocating and controlling requirements through all levels, and has a plan to complete the definition activity on schedule. Supporting documents: Systems engineering management plan (SEMP); software development plan.
2. Requirements definition is complete with respect to top-level mission and science requirements, and interfaces with external entities and between major internal elements have been defined. Supporting documents: Mission design study report; system design study report; mission-level system requirements document (SRD) and interface requirements documents (IRDs); operations concept document.
3. Requirements allocation and flow-down of key driving requirements have been defined down to subsystems. Supporting documents: System-level SRDs; draft hardware, software, and ground support equipment SRDs.
4. Preliminary approaches have been determined for verifying and validating requirements down to the subsystem level. Supporting documents: Mission-level SRD; system-level SRD; master verification plan.
5. Major risks have been identified, and viable mitigation strategies have been defined. Supporting document: Risk management plan.
19.3.4 Results
In some cases, Phase A starts with the momentum already gained during Pre-
phase A. This momentum can be maintained if the mission scope stays the same and
scope creep doesn't fundamentally change the validity of the Pre-phase A study
results. If it does, Phase A starts from behind, trying to play catch-up by reanalyzing
results from the concept study to determine which results are still valid and which
mission concepts need to be reexamined. An insidious type of scope or requirements
creep that can easily sneak in during Phase A (or almost any time for that matter) is
what the authors call "justs." These take the form of "We like all of your Pre-phase
A ideas; just use XYZ existing spacecraft bus instead." Or "The current operations
concept is great; just add the ability to downlink anywhere in the world." These
seemingly innocent additions can become major constraints on the system and may
necessitate a complete redesign.
TABLE 19-21. Top-level Products for System Requirements Review (SRR) Major Contents. This
table details the contents of the documents listed in Table 19-20. (TBD/TBR is to be
determined/to be resolved.)
TABLE 19-21. Top-level Products for System Requirements Review (SRR) Major Contents.
(Continued) This table details the contents of the documents listed in Table 19-20.
(TBD/TBR is to be determined/to be resolved.)
Table 19-22. Systems Engineering Process Level of Effort Leading to System Requirements
Review (SRR). Analogous to a cell phone signal, zero bars represents no or very low
resource use and five bars represents maximum resource use. Resource
requirements apply to a particular process only; a given number of bars for one
process does not imply the same level of resources as the same number of bars for a
different process.
Process    Level of Effort    Comments
TABLE 19-22. Systems Engineering Process Level of Effort Leading to System Requirements
Review (SRR). (Continued) Analogous to a cell phone signal, zero bars represents
no or very low resource use and five bars represents maximum resource use.
Resource requirements apply to a particular process only; a given number of bars for
one process does not imply the same level of resources as the same number of bars
for a different process.
Process    Level of Effort    Comments
4. Physical solution ■■□□□ As the logical decomposition matures, allocation of functionality drives the definition of the physical solution. Many trades begin and continue past SRR. Special emphasis goes to preventing a premature definition of the physical architecture. The logical decomposition leads the physical solution design activities.
5. Product implementation ■□□□□ Identifying existing assets and assessing the cost-benefit and risk-reduction aspects of existing systems are standard trades during this phase. The project must also carefully determine the need for long-lead procurement items.
6. Product integration ■□□□□ As a complement to the logical decomposition process, this takes into account all lifecycle phases for each system element. We identify and coordinate with stakeholders to ensure that we ascertain all requirements and interfaces.
7. Product verification ■□□□□ Verification planning is crucial to assure that requirements defined during this phase are testable and that cost and schedule estimates account for this critical phase. Since the physical solution is not completely fixed in this phase, we can't fully define the verification program.
8. Product validation ■■□□□ Validation planning is matured in this phase
9. Product transition □□□□□ Only minimum effort expended on this process, mainly to identify enabling products for transition that may impact lifecycle cost
10. Technical planning ■■■■■ These processes must be thoroughly established for a successful SRR, as they determine how the program will be executed in the next phase
11. Requirements management ■■■□□ The requirements management process must be mature and in place to control the emerging set of requirements
12. Interface management ■□□□□ Interface management is addressed in the requirements management processes
13. Technical risk management ■■□□□ Risk planning and management remain at a vigorous level
14. Configuration management ■□□□□ End item control and requirements management controls must be in place
15. Technical data management ■■□□□ Tools and repositories must be fully deployed along with the processes to guide engineering teams in concurrently defining systems
16. Technical assessment ■■□□□ Initial planning to support the upcoming SRR and subsequent reviews begins
17. Decision analysis ■■■■■ Rigorous technical decision analysis is key to developing credible Phase A results. It's impossible to complete all the myriad trade studies that may be identified, so good decision analysis identifies the most critical ones.
FIGURE 19-21. Context Diagram for FireSAT System with Focus on the Space Element. This
expanded context diagram allows us to focus on the space element and understand
the major interfaces between it and other elements of the mission. (GPS is Global
Positioning System.)
FIGURE 19-22. Relationship between the Wildfire Definition Trade Study and Requirements.
The last three requirements are derived from the first three.
requires some coordination and confirmation by the ground to reduce the number
of false-positive warnings. Thus, we must further refine these mission-level
requirements and allocate them down to the system level. Let's pick out two threads
to examine further—detection and geo-location accuracy.
We start with the detection KPP, where wildfire size (150 m) and confidence
level (95%) were determined by the trade study. This KPP is a good starting point,
but engineers need to further decompose it to a design-to requirement for the
spacecraft payload sensor. This trade study determines that to meet the KPP we
need to derive two separate payload requirements, one for spatial resolution (the
smallest thing the payload can see), and one for spectral resolution (the signature
energy wavelength of a wildfire). This relationship is illustrated in Figure 19-23.
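The spectral-resolution side of this decomposition can be sanity-checked with Wien's displacement law. In the sketch below, the nominal flame temperatures are our assumptions, used only to show why a mid-wave infrared band near 4 µm is a sensible place to look for wildfires.

```python
# A minimal sketch of the spectral side of the detection KPP: Wien's
# displacement law gives the blackbody emission peak for a hot source. The
# nominal flame temperatures are assumptions for illustration.
WIEN_B_UM_K = 2897.8          # Wien's displacement constant, um*K

def peak_wavelength_um(temp_k: float) -> float:
    return WIEN_B_UM_K / temp_k

for temp_k in (600.0, 700.0, 1000.0):     # plausible wildfire flame temperatures
    print(f"T = {temp_k:.0f} K -> peak near {peak_wavelength_um(temp_k):.1f} um")
# A ~700 K source peaks near 4.1 um, consistent with the 4.2 um band carried
# in the later payload trade (Table 19-23).
```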
Now we turn to the second thread that derives from the requirement to
accurately geo-locate the wildfires. As described in SMAD, satellite remote sensing
geolocation accuracy is a function of:
• Satellite position knowledge in 3 axes
• Satellite pointing (attitude) knowledge
• Timing accuracy
• Inherent altitude error for ground targets (how good our 3-dimensional
map of the Earth's surface is)
The last one depends on the accuracy of the map database. This requirement
can't even be allocated within the FireSAT system of systems, as the USFS depends
on maps provided by the US Geological Survey. Fortunately, most of these maps
are highly accurate and so constitute only a small source of error, but it's one that's
beyond the control of the FireSAT project.
Now let's look at the first element listed above, satellite position knowledge in
3 axes. Even if the mission plans on using onboard GPS-derived navigation data,
a fairly common practice, we would still need mission control to do some oversight
and backup using ground tracking. Therefore, part of the accuracy requirement
must be allocated to both the spacecraft and ground systems.
The middle two error sources, pointing knowledge and timing, arguably
depend solely on the spacecraft. But let's drill down deeper on pointing
knowledge to make a point. Onboard pointing knowledge is determined by a
software estimator such as a Kalman Filter, which in turn is driven by inputs from
a suite of attitude sensors (some collection of star sensors, Sun sensors, Earth
horizon sensors, gyroscopes, and magnetometers). So whatever requirement is
derived for onboard pointing knowledge must ultimately be further allocated
down to the software and hardware elements that make up the attitude
determination and control subsystem. This requirement allocation from mission-
level to system-level, to subsystem-level, and finally to detailed specifications at
the component level is illustrated in Figure 19-24.
FIGURE 19-24. Example Requirements Allocation Tree for Geo-position Error Requirement.
This illustrates how a single requirement, geo-location accuracy, can be allocated
first to space or ground elements, then down to hardware or software components.
(Adapted from Space Mission Analysis and Design [Larson and Wertz, 1999].)
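To make the allocation in Figure 19-24 concrete, the sketch below rolls representative sub-allocations (the ones carried later in Table 19-23) up into a single geolocation error estimate. It assumes a simplified nadir-viewing geometry at 700 km, neglects the azimuth and target-altitude terms, and combines independent error sources by root-sum-square; it illustrates the roll-up, not the project's actual error budget.

```python
# A minimal sketch of an RSS geolocation error budget for a nadir-viewing case
# at 700 km, using the sub-allocations carried in Table 19-23. Off-nadir
# geometry, azimuth knowledge, and target-altitude error are neglected here.
import math

RE_KM, ALT_KM, MU = 6378.0, 700.0, 398600.4

a_km = RE_KM + ALT_KM
ground_speed_kms = math.sqrt(MU / a_km) * RE_KM / a_km   # ~6.8 km/s

contributions_km = {
    "orbit knowledge, along-track": 0.2,
    "orbit knowledge, cross-track": 0.2,
    "attitude knowledge (0.03 deg nadir)": ALT_KM * math.tan(math.radians(0.03)),
    "timing (0.5 s)": 0.5 * ground_speed_kms,
}

rss_km = math.sqrt(sum(v**2 for v in contributions_km.values()))
for name, v in contributions_km.items():
    print(f"{name:38s} {v:5.2f} km")
print(f"{'RSS total':38s} {rss_km:5.2f} km  (threshold 5 km, objective 0.5 km)")
```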
These examples illustrate but two threads in the dozens that made up the
complete FireSAT spacecraft system and subsystem functional requirements.
Similarly, we need to develop additional threads for the nonfunctional
requirements such as launch loads, thermal conductivity, memory capacity, mass,
cost, and redundancy architecture. Building on the results from Pre-phase A, and
applying the techniques described in Chapters 2 and 4, systems engineers seek to
capture all of the requirements for every element of the mission architecture. By
developing the detailed mission operations concept we derive additional
functional and nonfunctional requirements to address the usability of the system.
Table 19-23 summarizes some of the trade-offs, derived requirements, and
FireSAT design decisions. Each technical requirement comes with a number of
critical attributes that must be managed along with it. These include:
• Rationale
• Source
• Trace
• Verification method
• Verification phase
• Priority
• Increment
• Requirement owner
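In practice these attributes live in a requirements-management tool, but a minimal record structure makes the point. In the sketch below, the requirement text is the broadcast requirement from Section 19.2; every other attribute value is a hypothetical placeholder.

```python
# A minimal sketch of a requirement record carrying the attributes listed
# above. The requirement text is the broadcast requirement from Section 19.2;
# the other attribute values are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    text: str
    rationale: str
    source: str
    trace: str                # parent requirement or need it flows from
    verification_method: str  # e.g., test, analysis, inspection, demonstration
    verification_phase: str
    priority: str
    increment: str
    owner: str

broadcast_req = Requirement(
    req_id="FS-SYS-041",      # placeholder identifier
    text=("The FireSAT spacecraft shall broadcast fire notification warnings "
          "so they can be received by USFS field offices."),
    rationale="Direct broadcast avoids the latency of relaying through two NOAA sites.",
    source="Operations concept analysis",
    trace="Mission requirement 4 (Timeliness)",
    verification_method="Demonstration",
    verification_phase="System-level verification",
    priority="High",
    increment="1",
    owner="FireSAT systems engineering",
)
print(broadcast_req.req_id, "->", broadcast_req.trace)
```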
With all these requirements and their attributes defined, derived, and modified
early in Phase A, it's critical to organize, manage, and communicate them efficiently.
The defining requirements document is usually called the system requirements
document (SRD) or the System/Segment Specification (SSS). Different organizations
have their own standards or practices for writing these documents. For example,
Mil-Std-961C is widely used throughout the aerospace industry mainly because of
the strong military roots in even civil space programs. FireSAT, like many projects,
has tailored these guidelines to develop the following outline for the FireSAT space
segment SRD:
1.0 Scope
2.0 Applicable documents
3.0 Requirements
3.1 Prime item definition
3.2 System capabilities
3.3 System characteristics
3.4 Design and construction
3.5 Logistics
3.6 Personnel and training
3.7 Subsystem requirements
3.8 Precedence
TABLE 19-23. Summary of FireSAT Design Decisions Based on Preliminary and Derived Requirements. These trade-offs help flesh out the derived system requirements.
1. Detection
   Description: The FireSAT spacecraft shall detect potentially dangerous wildfires with a 95% confidence.
   Derived system requirements: Spectral resolution; aperture size.
   Issue or trade: Spectral resolution depends on wildfire temperature, atmospheric windows, and aperture diffraction limitation.
   FireSAT design decision: λ = 4.2 μm; aperture = 0.26 m for assumed altitude of 700 km.
2. Coverage
   Description: The FireSAT spacecraft shall cover the entire United States, including Alaska and Hawaii, to a latitude of 70°.
   Derived system requirements: Orbit inclination.
   Issue or trade: Higher inclination reduces launch vehicle capacity.
   FireSAT design decision: Altitude = 700 km; orbit inclination = 55°.
3. Persistence
   Description: The FireSAT spacecraft shall monitor the coverage area on a daily basis.
   Derived system requirements: Orbit altitude; number of satellites.
   Issue or trade: Higher altitude gives greater persistence but drives up sensor size; more satellites drive up cost, complexity, and number of launches.
   FireSAT design decision: h = 700 km; 2 satellites in one plane separated 180° in true anomaly.
4. Timeliness
   Description: The FireSAT spacecraft shall provide fire notifications within 60 minutes (threshold), 30 minutes (objective) of fire detection.
   Derived system requirements: Ground station locations.
   Issue or trade: Using only two NOAA ground stations limits access for downlink; direct downlink to USFS field offices decreases latency.
   FireSAT design decision: Require FireSAT spacecraft to broadcast "911" messages to USFS field offices; all field offices to be equipped with the necessary receiver.
5. Geo-location
   Description: The system shall provide geolocation information on fires to within 5000 meters (threshold), 500 meters (objective), 3 sigma.
   Derived system requirements: Attitude determination; orbit determination; timing; target altitude error.
   Issue or trade: Geolocation is a function of orbit determination accuracy, attitude determination accuracy, timing error, and target altitude error.
   FireSAT design decision: Derived requirements for orbit determination < 0.2 km along track, 0.2 km across track, 0.1 km radial; attitude determination < 0.06° azimuth, 0.03° nadir; time error < 0.5 s; target altitude error < 1 km.
6. Minimum elevation angle
   Description: The spacecraft shall support payload operations out to 20° elevation angle.
   Derived system requirements: Sensor field of regard (FOR); field of view (FOV).
   Issue or trade: Greater FOR gives better coverage and persistence; higher FOV increases sensor data rate and focal plane complexity.
   FireSAT design decision: Sensor field of regard = 115°.
7. Reliability
   Description: The system shall have an overall design reliability of 95% (threshold), 98% (objective).
   Derived system requirements: Flow down to system.
   Issue or trade: Higher reliability drives up cost and complexity.
   FireSAT design decision: Flow requirement down to subsystem level; perform rigorous FMEA consistent with a Class B NASA mission as per NASA NPR 8705.4.
8. Design life
   Description: The system shall have an operational on-orbit lifetime of 5 years (threshold), 7 years (objective).
   Derived system requirements: Flow down to system.
   Issue or trade: Longer lifetime increases consumables.
   FireSAT design decision: Delta-V budget and power budget include 5-year life with margin.
9. End of life disposal
   Description: The system shall have sufficient end-of-life delta-V margin to de-orbit to a mean altitude of <200 km.
   Derived system requirements: Delta-V.
   Issue or trade: Increased delta-V drives up loaded mass and propulsion system complexity.
   FireSAT design decision: Total delta-V budget = 513 m/s.
10. Ground station (GS) interface
   Description: FireSAT shall use existing NOAA ground stations at Wallops Island, Virginia and Fairbanks, Alaska for all mission command and control. The detailed technical interface is defined in NOAA GS-ISD-XYX.
   Derived system requirements: Uplink and downlink frequencies, data rates, modulation schemes, GS EIRP, system noise figure.
   Issue or trade: Constrains communication subsystem technology options.
   FireSAT design decision: Use compatible uplink/downlink hardware; flow down to spacecraft communication subsystem requirements.
3.9 Qualification
3.10 Standard sample
4.0 Verification requirements
5.0 Notes
Appendices
The scope described in Section 1.0 of this document reiterates the project need, goals,
objectives, and concept of operations, as well as the scope of the document itself. Thus, it
replaces the FireSAT scope document baselined in Pre-phase A. This provides important
project guidance and one-stop shopping for managing scope along with the requirements.
As discussed earlier, nothing derails requirements faster than scope creep.
Table 19-24 shows an example requirements matrix with samples from each of
the sections of the SRD except 3.7: Subsystem requirements and 4.0: Verification
requirements. The subsystem requirements are addressed in more detail in the
functional baseline discussion. Verification requirements are one of the subjects of
the build-to baseline discussion later in this chapter. The matrix does not show
requirements traceability. However, by using requirements management tools,
such as DOORS, or a systems engineering tool, such as CORE or CRADLE, we can
readily track the detailed level-by-level traceability, as shown in the example back
in Figure 19-23.
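For projects without a dedicated tool, even a spreadsheet export can be checked for broken traces. The sketch below is a minimal, tool-agnostic example; the requirement IDs, text, and trace fields are hypothetical, not FireSAT data.

```python
# Hypothetical requirement records with parent traces, similar to what a
# requirements tool (e.g., DOORS) might export. IDs and traces are made up.
requirements = {
    "MR-5":     {"text": "Provide geolocation to within 5 km (3-sigma)", "trace": None},
    "SYS-042":  {"text": "Spacecraft attitude knowledge < 0.06 deg azimuth", "trace": "MR-5"},
    "ADCS-007": {"text": "Star sensor accuracy < 0.02 deg per axis", "trace": "SYS-042"},
    "ADCS-099": {"text": "Orphan requirement with no parent", "trace": "SYS-999"},
}

def check_traceability(reqs):
    """Flag requirements whose parent trace is missing or dangling."""
    problems = []
    for rid, rec in reqs.items():
        parent = rec["trace"]
        if parent is None and not rid.startswith("MR-"):
            problems.append(f"{rid}: no trace to a higher-level requirement")
        elif parent is not None and parent not in reqs:
            problems.append(f"{rid}: traces to unknown parent {parent}")
    return problems

for issue in check_traceability(requirements):
    print(issue)   # -> ADCS-099: traces to unknown parent SYS-999
```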
Each requirement in the SRD must also be validated before being baselined. A
major part of the SRR is critically evaluating each requirement against the VALID
criteria. This assessment determines if each requirement, as written, is:
• Verifiable
• Achievable
• Logical
• Integral
• Definitive
An example of this requirements validation exercise for FireSAT is described
in Chapter 11. Most of this effort, of course, has to be completed before the formal
SRR, so the results are presented along with open items still to be resolved. As
described in Chapter 4, it's not uncommon to have numerous TBRs and TBDs
included within formal requirements statements at this point. But a realistic
strategy to resolve these open issues in a timely manner must be part of the
technical planning effort presented at the review.
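A simple way to keep the open-item burn-down visible is to count TBD/TBR tags directly in the requirement text. The snippet below is a minimal sketch; the SRD statements shown are made up for illustration.

```python
import re

# Hypothetical SRD statements; the TBD/TBR content is illustrative only.
srd_statements = [
    "3.2.1 The space vehicle shall have a spatial resolution of <= 2.14e-4 rad.",
    "3.3.6 The space vehicle shall survive a radiation dose of TBD krad.",
    "3.7.4 The downlink data rate shall be 2 Mbps (TBR).",
]

# Collect (section, tag) pairs for every TBD or TBR found in a statement.
open_items = [(s.split()[0], tag) for s in srd_statements
              for tag in re.findall(r"\b(TBD|TBR)\b", s)]

print(f"{len(open_items)} open TBD/TBR items:")
for section, tag in open_items:
    print(f"  {section}: {tag}")
```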
The SRD and the other documents summarized above, as well as the mission
and system study reports, the SEMP, the operations concept, and the risk analysis
report, are all featured at the SRR. There may be a temptation to simply "recycle"
some of the documents, especially the SEMP, at each major review. We should
avoid this and make a clean scrub of all these documents to bring them in line with
the evolving state of the design and other inevitable project changes. Another
document called out at the SRR is the software development plan. Like the others,
TABLE 19-24. FireSAT Requirements Matrix. (SV is FireSAT space vehicle; TRL is technology readiness level; PDR is preliminary design review; OPR is office of primary responsibility.)
3.1.3 Prime item definition
   Requirement: The space vehicle will consist of 1) the wildfire detection payload and 2) the spacecraft bus.
   Rationale: This is an accepted architecture definition for a spacecraft.
   Source: FS-10005. Priority: A. OPR: SE.
3.2.1 Spatial resolution
   Requirement: The space vehicle shall have a spatial resolution of less than or equal to 2.14 x 10^-4 radians (0.0123°).
   Rationale: This requirement comes from the FireSAT wildfire definition trade study and corresponds to a 150 m resolution at 700 km.
   Source: FS-TS-004A. Priority: A. OPR: USFS.
3.2.2 Spectral resolution
   Requirement: The space vehicle shall detect wildfire emissions in the wavelength range of 4.2 μm +/- 0.2 μm.
   Rationale: This requirement comes from the FireSAT wildfire definition trade study and corresponds to a wildfire temperature of 690 K. The optimum wavelength for a fire at 1000 K would be 2.9 μm; however, there's no atmospheric window at that wavelength, so 4.2 μm represents a good compromise.
   Source: FS-TS-004A. Priority: A. OPR: USFS.
3.2.7 Availability
   Requirement: The space vehicle shall have an operational availability of 98%, excluding outages due to weather, with a maximum continuous outage of no more than 72 hours.
   Rationale: The USFS has determined that an average availability of 98% suffices to meet project goals. Some planned outage is necessary to provide for software updates and station-keeping maneuvers. 72 hours is short enough to minimize impact on the mission, especially if the time corresponds to seasons of relatively low wildfire risk.
   Source: USFS-TR-086. Priority: B. OPR: USFS.
3.3.1 Space vehicle mass
   Requirement: The space vehicle loaded mass shall not exceed 250 kg.
   Rationale: This is a maximum mass budget based on launching two satellites on a single Pegasus launch into a parking orbit of 150 km at 55° inclination.
   Source: GSFC-TIM-043. Priority: B. OPR: SE.
3.3.2 Launch environment
   Requirement: The space vehicle shall meet its requirements after exposure to the induced launch environment as defined in the Pegasus User's Handbook.
   Rationale: The induced environment during launch is typically the most severe from a mechanical loads standpoint. The SV must be able to function after being subjected to this environment.
   Source: GSFC-TIM-021. Priority: A. OPR: SE.
TABLE 19-24. FireSAT Requirements Matrix. (Continued) (SV is FireSAT space vehicle; TRL is technology readiness level; PDR is preliminary design review.)
3.3.2.1 Space vehicle natural frequency
   Requirement: The space vehicle first-mode natural frequency shall be greater than 20 Hz.
   Rationale: This constraint is based on the launch vehicle induced environment. The SV natural frequency must be above the induced environment frequency to avoid resonance problems.
   Source: FS-8x030. Priority: A. OPR: SE.
3.3.4 On-orbit design lifetime
   Requirement: The space vehicle shall be designed for an operational on-orbit lifetime of 5 years (threshold), 7 years (objective).
   Rationale: The USFS has determined that a minimum 5-year design life is technically feasible. Seven years is a design objective.
   Source: USFS-TR-054. Priority: B. OPR: SE.
3.3.8 Space environment
   Requirement: The SV shall meet its requirements after exposure to the natural space environment at 700 km mean altitude, 55° inclination, as defined in the NASA Space Environment Handbook.
   Rationale: The spacecraft must be designed and constructed to operate in the anticipated space environment to successfully fulfill its mission.
   Source: GSFC-TIM-021. Priority: A. OPR: SE.
3.4.1 Space environment effects on material selection
   Requirement: Thorough evaluation of the environmental effects of the trajectory paths and orbits shall be assessed for the impact on space vehicle materials selection and design.
   Rationale: Understanding the trajectory and orbital environmental effects (e.g., electrostatic discharge, radiation, atomic oxygen) on the spacecraft will eliminate costly redesign and fixes, and minimize on-orbit failures due to environmental interaction with spacecraft materials.
   Source: GSFC-STD-1000. Priority: A. OPR: GSFC.
3.4.6 Maturity of new technologies
   Requirement: All space vehicle technologies shall achieve TRL 6 by PDR. This doesn't apply to technology demonstration opportunities.
   Rationale: The use of new and unproven technologies requires a thorough qualification program to reduce risk to an acceptable level.
   Source: GSFC-STD-1000. Priority: A. OPR: GSFC.
3.5.1 Supply
   Requirement: The project shall define a plan for required spare units (including spare EEE parts) that is compatible with available resources and acceptable risk.
   Rationale: An inadequate spare parts program leads to part shortages during the development phase and has a direct impact on potential workarounds or retrofit plans.
   Source: GSFC-STD-1000. Priority: B. OPR: GSFC.
3.6.1 Maintenance personnel
   Requirement: The space vehicle shall be maintainable by a Class 2 engineering technician as defined in KSC WP-3556, Technician Qualifications Manual.
   Rationale: This is the usual level of training for KSC personnel.
   Source: LV-TIM-003. Priority: C. OPR: LV.
3.7 Subsystem requirements
   Addressed in the next section.
3.8.1 Precedence
   Requirement: When in conflict, FireSAT project-level requirements shall take precedence over system-level requirements.
   Rationale: All project-level requirements are overriding.
   Source: Project manager. Priority: B. OPR: PM.
3.9.1 Qualification of heritage flight hardware
   Requirement: All space vehicle heritage flight hardware shall be fully qualified and verified for use in its new application. This qualification shall take into consideration necessary design modifications, changes to expected environments, and differences in operational use.
   Rationale: All hardware, whether heritage or not, needs to be qualified for its expected environment and operational uses.
   Source: GSFC-STD-1000. Priority: B. OPR: GSFC.
3.9.3 Structural qualification
   Requirement: Structural tests that demonstrate that SV flight hardware is compatible with expected mission environments shall be conducted in compliance with the NASA/GSFC General Environmental Verification Specifications (GEVS-SE Rev A, 1996).
   Rationale: Demonstration of structural requirements is a key risk-reduction activity during mission development.
   Source: GSFC-STD-1000. Priority: A. OPR: GSFC.
3.10.1 Standard sample
   Requirement: Two flight model space vehicles shall be delivered.
   Rationale: Architecture analysis indicates a two-satellite constellation is needed to perform the mission.
   Source: Project management. Priority: A. OPR: PM.
it may have been drafted during Pre-phase A. It's another useful tool for laying out
the system software effort. Because software issues come to a head during build up
to the SDR, we defer their discussion until the next section.
Table 19-25. Initial Allocation of Organization Roles and Responsibilities. Culled from the
original list of FireSAT stakeholders, these four have the most direct roles in the
project.
FIGURE 19-25. Project-Level Work Breakdown Structure. The project-level WBS includes both programmatic and technical elements.
FIGURE 19-26. Payload Work Breakdown Structure. This figure depicts the second-level payload element, all the third-level elements associated with it, and the fourth-level elements under "Payload subsystem development."
This work package has been assigned to the prime contractor, Acme Astronautics Corp., with NASA/GSFC as the lead agency. The NASA chief systems
engineer for the project, working with the FireSAT team cadre from both NASA and
Acme, explores some existing sensor technologies that have a high degree of flight
heritage. They want hardware with the highest possible technology readiness level
(TRL) to reduce risk. The issue begins when the other project stakeholder, NOAA, formally asks to lead the sensor effort instead. They already have an ongoing
development effort for a sensor with nearly identical characteristics for a weather
instrument on an NPOESS follow-on mission. Inserting this technology into FireSAT
promises to increase its sensitivity and scan agility while removing a large part of the
payload R&D cost from the FireSAT budget. NOAA argues this as a win-win
situation where the FireSAT project can improve performance and reduce cost,
while NPOESS benefits from an early risk-reduction flight of their new instrument.
The FireSAT lead systems engineer at NASA knows that when something
sounds too good to be true, it probably is. The downside of this arrangement is accepting significant downstream cost, schedule, and performance risks. The project
will incur these risks if the promised hardware isn't delivered on time (a greater risk
with lower TRL technology) or if the promised higher performance isn't achieved.
The motto "Better is the enemy of good/' crosses the minds of the systems
engineering team. But with the giddiness and optimism usually present early in a
new project (and a fair amount of political influence from the headquarters of both
agencies and interested Congressional delegations), they decide to restructure the
project to baseline the NOAA-provided instrument. The caveat is that the
instrument must be at TRL 6 or above by the preliminary design review. So they
redefine the SEMP roles and responsibilities, as shown in Table 19-26.
TABLE 19-26. Revised Allocation of Organization Roles and Responsibilities Based on NOAA
Offer to Lead Payload Development. At this point in the project lifecycle, the
systems engineering team judges the benefits of going with the NOAA instrument to
outweigh the risks.
Cognizant of the potential risks of this decision (and smarting from being
relegated to the role of integrator rather than lead designer for the payload), Acme
Astronautics musters a small amount of internal funding to continue sensor
development activities in parallel, albeit at a lower level than previously planned.
They make this decision with the tacit support of the NASA engineering team. The
NOAA team, while they consider this parallel effort a wasteful distraction, aren't
in a position to officially object.
As with any project, especially a relatively small one like FireSAT, cost is
always in the forefront. A combination of analogous systems cost models and
parametric cost models based on historical data guides project budget estimates
throughout Pre-phase A and well into Phase A. The decision to assign project
responsibility for payload development to NOAA has a significant impact on the
project bottom line. The Small Satellite Cost Model (SSCM) developed by the
Aerospace Corp. [Larson and Wertz, 1999], which was used to develop the initial
project's cost estimate, indicates fractional lifecycle costs, as shown in Table 19-27.
Table 19-27. Fractional Spacecraft Costs Based on Small Satellite Cost Model [Larson and Wertz, 1999]. Costs for a space project include far more than what gets launched into space. (TT&C is telemetry, tracking, and control; C&DH is command and data handling; ADCS is attitude determination and control system.)
As the table shows, the payload represents 40% of the total spacecraft cost. So
with a 60/40 split between non-recurring and recurring engineering on the
payload, the FireSAT project can potentially save around 24% of the cost of the first
spacecraft (60% of 40%) by assigning the payload development responsibility to
NOAA. Headquarters budget watchers want to immediately take this much out of
the project budget. But the FireSAT project manager argues that since insertion of
the NOAA payload is contingent on it being at TRL 6 by the preliminary design
review, the budget shouldn't be reduced until the project meets that milestone.
Unfortunately, he loses that argument and the funding is removed, with a promise that "every effort" will be made to add back funding later if there is a problem.
19.3.6 Conclusions
Leaving the system requirements review, the project management and
systems engineering teams are only slightly battered and bruised. Fortunately, the
groundwork laid down in Pre-phase A has helped make the transition from
project concept to project reality less painful. This is only possible because no
stakeholders attempt to significantly redirect the project scope. Building on the
Pre-phase A concept, the mission and system-level requirements can be baselined
at the SRR, giving the project a sound basis for moving forward to the next major
milestone in Phase A: the functional baseline.
Table 19-28. System Definition Review Entrance and Success Criteria. The functional domain
receives special emphasis during this part of the lifecycle. Adapted from NASA [2007].
Entrance criteria:
1. Complete the SRR and respond to all requests for action and review item discrepancies
2. The technical team, project manager, and review chair agree to a preliminary SDR agenda, success criteria, and charge to the board
3. Make SDR technical products listed below available to the relevant participants in advance:
   a. System architecture
   b. Preferred system solution definition, including major tradeoffs and options
   c. Updated baselined documentation, as required
   d. Preliminary functional baseline (with supporting trade-off analyses and data)
   e. Preliminary system software functional requirements
   f. Changes to the systems engineering management plan, if any
   g. Updated risk management plan
   h. Updated risk assessment and mitigations (including probabilistic risk analyses, as applicable)
   i. Updated technology development, maturity, and assessment plan
   j. Updated cost and schedule data
   k. Updated logistics documentation
   l. Updated human rating plan, if applicable
   m. Software test plan
   n. Software requirements documents
   o. Interface requirements documents (including software)
   p. Technical resource use estimates and margins
   q. Updated safety and mission assurance plan
   r. Updated preliminary safety analysis
Success criteria:
1. Systems requirements, including mission success criteria and any sponsor-imposed constraints, are defined and form the basis for the proposed conceptual design
2. All technical requirements are allocated and the flow down to subsystems is adequate. The requirements, design approaches, and conceptual design will fulfill the mission needs consistent with the available resources (cost, schedule, mass, and power).
3. The requirements process is sound and can reasonably be expected to continue to identify and flow detailed requirements in a manner timely for development
4. The technical approach is credible and responsive to the identified requirements
5. Technical plans have been updated, as necessary
6. The tradeoffs are complete, and those planned for Phase B adequately address the option space
7. Significant development, mission, and safety risks are identified and technically assessed, and we have a process and the resources to manage the risks
8. We've planned adequately for the development of any enabling new technology
9. The operations concept is consistent with proposed design concepts and aligns with the mission requirements
19.4.5 Results
For this discussion, the system of interest at the system definition review is the
FireSAT space element. Here we focus on a few of the principal documents that
define it as the project nears the end of Phase A: 1) the spacecraft system design
study report, 2) the software development plan, and 3) key sections of the systems
engineering management plan (SEMP).
Table 19-29. Supporting Documents for System Definition Review Success Criteria. This
table expands on the second column of Table 19-28 above. These documents are
essential for fulfilling their associated criteria.
Table 19-30. Contents for Primary System Definition Review (SDR) Supporting Documents. Here we show the most important contents of each document.
FireSAT system design study report (for each system element within the mission architecture, as appropriate) (BASELINE):
• Analysis of alternatives trade-tree, including (for each system in the architecture) spacecraft, ground system, etc.
• Alternative system concepts
• Identification of system drivers for each concept or architecture
• Results of characterization for each alternative
• Critical requirements identified or derived
• Concept or architecture utility assessment, including evaluation criteria such as cost, technical maturity, and measures of effectiveness
• Detailed functional analysis
• Proposed baseline system configuration with functional allocation
Systems engineering management plan (SEMP) (UPDATE):
• Plan for managing the SE effort
• Documents driving the SEMP
• Technical summary
• Integration technical effort (including conceptual lifecycle support strategies, logistics, manufacturing, operation, etc.)
• The 17 systems engineering processes
• Technology insertion
• SE activities
• Project plan integration
• Waivers
• Appendices
Risk management plan (UPDATE):
• Preliminary risk assessment
• Technologies and associated risk management and mitigation strategies and options
FireSAT mission-level and systems-level system requirements documents (SRD) (UPDATE):
• Scope
• Applicable documents
• Requirements
  - Prime item definition
  - System capabilities
  - System characteristics
  - Design and construction
  - Logistics
  - Personnel and training
  - Subsystem requirements
  - Precedence
  - Qualification
  - Standard sample
• Verification requirements
• Notes
• Appendices
Table 19-31. Systems Engineering Process Level of Effort Leading up to System Definition
Review (SDR). Analogous to a cell phone signal, zero bars represents no or very low
resource requirements and five bars represent maximum resource utilization. Resource
requirements are for a particular process only; a given number of bars for one process
does not imply the same level of resources as that number of bars for a different process.
(SRR is system requirements review.)
1. Stakeholder expectation definition (■■■□□): We continually revisit stakeholder expectations as we develop technical requirements. This serves as a requirements validation.
2. Technical requirements definition (■■■■■): We refine requirements to lower levels as a result of functional analysis.
3. Logical decomposition (■■■■■): This is a main focus for preparation for SDR.
4. Physical solution (■■■□□): As the logical decomposition matures, allocation of functionality drives the definition of the physical solution. We complete additional trade studies leading up to SDR. The logical decomposition leads the physical solution design activities.
5. Product implementation (■□□□□): Identifying existing assets and the assessment of the cost-benefit and risk-reduction aspects of existing systems are standard trades during this phase. We need to carefully consider requirements for long-lead procurement items.
6. Product integration (■□□□□): During the logical decomposition process, we consider all lifecycle phases for each system element. We identify stakeholders and coordinate with them to be sure that we recognize all requirements and interfaces (such as to existing and planned facilities).
7. Product verification (■■□□□): Verification planning is key to assuring that requirements defined during this phase are testable and that cost and schedule estimates account for this critical phase. Since the physical solution isn't fully defined in this phase, we can't completely fix the verification program. But it's essential to begin to identify long-lead requirements such as test facilities and ground support equipment.
8. Product validation (■■□□□): Validation planning is matured in this phase.
9. Product transition (□□□□□): Only minimum effort expended on this process, mainly to identify enabling products for transition that may affect lifecycle cost.
10. Technical planning (■■■■■): These processes must be fully defined to complete SRR successfully, as they determine how we execute the program in the next phase.
11. Requirements management (■■■□□): The requirements management process must be mature and in place to control the emerging set of requirements.
12. Interface management (■□□□□): Interface management is addressed in the requirements management processes.
Table 19-31. Systems Engineering Process Level of Effort Leading up to System Definition
Review (SDR). (Continued) Analogous to a cell phone signal, zero bars represents no
or very low resource requirements and five bars represent maximum resource utilization.
Resource requirements are for a particular process only; a given number of bars for one
process does not imply the same level of resources as that number of bars for a different
process. (SRR is system requirements review.)
13. Technical risk management (■■□□□): Risk planning and management remain at a vigorous level during this phase.
14. Configuration management (■□□□□): End item control and requirements management controls must be in place.
15. Technical data management (■■□□□): Tools and repositories must be fully deployed along with the processes to guide engineering teams in concurrently defining systems.
16. Technical assessment (■■□□□): Initial planning to support the upcoming SRR and subsequent reviews begins.
17. Decision analysis (■■■■■): Rigorous technical decision analysis is crucial to developing credible Phase A results. It's impossible to complete all the myriad possible trade studies, so good decision analysis identifies the most critical ones.
FIGURE 19-28. FireSAT Space Element Context Diagram. This simple system context diagram
for the FireSAT space element is a starting point for functional analysis based on
system inputs and outputs.
[Figure 19-29 diagram: function 0, "Perform space segment I/O functions," decomposes into function 1, "Handle inputs," and function 2, "Produce outputs."]
FIGURE 19-29. Space Element Functional Decomposition Using Inputs and Outputs. This view
of the FireSAT spacecraft functional architecture focuses on system inputs and
outputs. (I/O is input/output; IR is infrared; GPS is Global Positioning System.)
FIGURE 19-30. Functional Flow Block Diagram of the Wildfire “911” Thread. This functional flow
diagram illustrates the roles played by the elements that constitute the FireSAT
spacecraft functional architecture in executing the wildfire “911” notifications. We can
develop similar diagrams for all of the functional threads performed by the system.
Here links external to the spacecraft are shown as solid lines. Internal links are
shown as dashed lines. (GPS is Global Positioning System; IR is infrared; USFS is
US Forest Service.)
Figure 19-31 shows an enhanced functional flow block diagram for the functions identified by the I/O functional hierarchy. The input functions occur in parallel
with one another, as do the output functions. But input and output functions occur
in series as part of an ongoing loop of space operations. And the triggers that go
into and out of each sub-function provide insight as to how the inputs are handled
and how important outputs are generated.
FIGURE 19-31. FireSAT Enhanced Functional Flow Block Diagram (EFFBD). The EFFBD
provides a variety of insights into the FireSAT spacecraft functions. (LP is loop; LE is
loop end; IR is infrared; GPS is Global Positioning System.)
Let's take a more detailed look at the critical function of producing the "911"
messages. We consider what such a message must contain: detection of a fire and
its location and time. So the "911" message has two components, the detection
portion and the geolocation/timing function. Notionally, the first could be
allocated to the payload while the second could be allocated to the bus. Refining
these two functions still further, as shown in Figure 19-32, we see the additional
complexity needed to perform each one.
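To make the two components concrete, a notification record might carry fields like those sketched below. The structure and field names are illustrative assumptions, not the FireSAT message format.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical "911" notification structure; field names are illustrative,
# not a FireSAT interface definition.
@dataclass
class Fire911Message:
    fire_detected: bool       # detection component (notionally allocated to the payload)
    confidence: float         # detection confidence, 0..1
    latitude_deg: float       # geolocation component (notionally allocated to the bus)
    longitude_deg: float
    detection_time_utc: datetime

msg = Fire911Message(
    fire_detected=True,
    confidence=0.97,
    latitude_deg=38.84,
    longitude_deg=-104.82,
    detection_time_utc=datetime(2009, 7, 4, 18, 30, tzinfo=timezone.utc),
)
print(msg)
```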
This understanding is important when we look further at how best to allocate
these functions to the correct physical piece of the system (many functions are
performed in software). Turning from the functional architecture to the physical,
we can start with a very simple, top-level decomposition of the spacecraft into bus
and payload, as illustrated in Figure 19-33.
Now, our goal is to allocate functions to the spacecraft physical architecture.
Looking at the collection of spacecraft functions, some logical groupings emerge.
Handling wildfire IR inputs is logically the function of the spacecraft payload.
Handling environmental inputs, commands, and GPS signals traditionally goes to
the spacecraft bus. On the output side, producing the "911" notifications is
handled jointly by the payload and bus. This decomposition and allocation task is
aided by analyzing the necessary functional behavior of the system in increasing
detail, and then assigning these sub-functions to various parts of the system.
Figure 19-32. Functional Decomposition of the Detect Fire and Determine Geo-position Data Functions. Both detection and location (in
space and time) are critical to notification. (IR is infrared; GPS is Global Positioning System.)
[Figure 19-33 diagram: system 2.0, "FireSAT spacecraft," decomposes into 2.1, "Spacecraft bus," and 2.2, "Payload."]
Figure 19-33. FireSAT Spacecraft Top-level Physical Architecture. We traditionally divide
spacecraft into two parts, bus and payload. The payload deals with all purely
mission functions (e.g., generate mission data), while the bus takes care of all the
“overhead” (power, thermal control, etc.).
Figure 19-34. Spacecraft Physical Architecture, Bus, and Payload. Here we decompose the
FireSAT spacecraft into the subsystems that constitute the bus and payload.
We can see the allocation of the functional architecture to the physical system
by substituting our physical system for the functions described earlier and looking
again at the inputs and outputs—but this time from a subsystem perspective, as
illustrated in Figure 19-35. This helps us identify the links between subsystems that
we will rigorously define as the system-to-system and subsystem-to-subsystem
FIGURE 19-35. Allocated Functions. This diagram illustrates how the subsystems that constitute
the spacecraft bus and payload handle the inputs and outputs that the functional
analysis identifies. This way, we can see the allocation of functional to physical. The
links between the system and external elements, as well as the internal links, form
the basis for more rigorous interface descriptions. Here links external to the
spacecraft are shown as solid lines. Internal links are shown as dashed lines.
The final important set of results in the system design report presented at the
SDR is the margins associated with key performance parameters (KPPs) and
technical performance measures (TPMs). Let's look at an example of each to see
how to compute and track them, starting with the KPP for geolocation accuracy.
As stated above, satellite remote sensing geolocation accuracy is a function of:
• Satellite position knowledge (in 3 axes)
• Satellite pointing knowledge (in 2 axes)
• Satellite timing accuracy (how good the onboard clock is)
• Inherent altitude error for ground targets (how good our 3-dimensional
map of the Earth's surface is)
To meet the 5 km requirement, we've made several assumptions, some of which
have become derived requirements, for all of these parameters. There's a
straightforward set of analytic equations that relate errors in each of these parameters
to total geo-location error. Table 19-32 summarizes these results as part of the analysis
in the system design report. It shows that the best estimate indicates a little over 1 km
margin in this KPP, assuming that each of the contributing error sources can be kept
at or below its assumed value.
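The full SMAD mapping equations account for viewing geometry and Earth curvature; the simplified nadir-viewing sketch below only shows how the allocations from Table 19-23 combine by root-sum-square into a total that can be compared against the 5 km requirement. The ground-speed and small-angle approximations are assumptions, not the project's actual analysis.

```python
import math

R_EARTH = 6378.0   # km
ALT = 700.0        # km, FireSAT orbit altitude
MU = 398600.4      # km^3/s^2, Earth gravitational parameter

# Allocations from Table 19-23 (3-sigma)
orbit_err = math.sqrt(0.2**2 + 0.2**2 + 0.1**2)   # km: along-track, cross-track, radial
att_err_deg = math.sqrt(0.06**2 + 0.03**2)        # deg: azimuth, nadir
time_err = 0.5                                    # s
target_alt_err = 1.0                              # km, map altitude error

# Simplified nadir-viewing mappings to ground distance
v_orbit = math.sqrt(MU / (R_EARTH + ALT))              # km/s, circular orbit speed
v_ground = v_orbit * R_EARTH / (R_EARTH + ALT)         # sub-satellite point speed
pointing_err = ALT * math.radians(att_err_deg)         # small-angle approximation
timing_err = v_ground * time_err                       # position of ground track vs. clock error

total = math.sqrt(orbit_err**2 + pointing_err**2 + timing_err**2 + target_alt_err**2)
print(f"Estimated geolocation error: {total:.1f} km vs. 5 km requirement")
```

This crude estimate lands near the roughly 1 km of margin quoted above, which is reassuring but is no substitute for the full analysis in the system design report.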
The system design report captures similar results for technical performance
measures (TPMs) such as mass, power, and link margin. During conceptual
design, the payload size was estimated using parametric scaling rules for optical
instruments [Larson and Wertz, 1999]. The spacecraft mass was then computed
based on similar scaling rules for spacecraft of comparable size. This also provided
estimates for allocated mass and power for each subsystem. All calculations
included 25% mass, 30% power, and 15% propellant margin. Table 19-33
summarizes these conceptual design results.
Of course, these results beg the question of how much margin is enough at this
point in the design cycle. While there are several schools of thought in this area, the
GSFC "Gold Standards" [GSFC, 2005] provides guidelines. Per this directive, the
resource margins are found to be adequate for the end of Phase A, as shown in
Table 19-34.
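A margin check of this kind is easy to automate. In the sketch below, the carried margins come from the text, but the guideline thresholds are placeholders; the real values belong to the GSFC standard and should be taken from it, not from this example.

```python
# Margins carried in the FireSAT conceptual design (from the text)
carried = {"mass": 0.25, "power": 0.30, "propellant": 0.15}

# Hypothetical end-of-Phase-A guideline minimums; the actual GSFC "Gold
# Standards" values should be taken from the standard, not from this sketch.
guideline = {"mass": 0.20, "power": 0.25, "propellant": 0.10}

for resource, margin in carried.items():
    status = "OK" if margin >= guideline[resource] else "SHORT"
    print(f"{resource:10s} carried {margin:.0%} vs. guideline {guideline[resource]:.0%} -> {status}")
```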
All these results further refine the system and the derived subsystem requirements. So another important document update presented at the system definition review is the updated system requirements document.
Table 19-33. FireSAT Conceptual Design Results. These indicate mass, power, and propellant margins and initial allocated mass and power for each subsystem. (IR is infrared; GPS is Global Positioning System.)
Component and results (mass in kg, average power in W):
• Orbital maneuvers: 49.8 kg
• Attitude control: 2.5 kg
• Margin: 7.8 kg
• Residual: 1.2 kg
• Total spacecraft loaded mass: 237.1 kg
• Spacecraft average power: 137.0 W
TABLE 19-34. Technical Resource Margins. All values are given at the end of the phase [GSFC, 2005].
* At launch, there shall be 10% predicted power margin for mission-critical, cruise, and safing modes, and to accommodate in-flight operational uncertainties.
† The three-sigma variation is due to the following: 1) worst-case spacecraft mass properties; 2) 3σ low performance of the launch vehicle; 3) 3σ low performance of the propulsion subsystem (thruster performance or alignment, propellant residuals); 4) 3σ flight dynamics errors and constraints; 5) thruster failure (applies only to single-fault-tolerant systems).
‡ Telemetry and command hardware channels read data from such hardware as thermistors, heaters, switches, motors, etc.
Where should the sensor data be analyzed and assessed for the critical "Yes/No" fire detection decision? The choice is further complicated by the earlier programmatic decision to have NOAA provide the payload separate from NASA and the prime contractor, Acme. But any potential
political squabbling over this decision is trumped by the simple technical limitation
of transferring the necessary data over the spacecraft data harness at a rate sufficient
to meet the requirements. So from a technical standpoint, it's expedient to perform
this sub-function within the payload image processing software. We document this
rationale in the SDP section 4.2.4—Handling Critical Requirements.
Section 5.2.2 of the SDP establishes requirements for a dedicated software test
bed that uses simulated infrared imagery to verify the necessary digital signal
processing algorithms to detect fires with high confidence. Sections 5.3 through 5.5
deconstruct the problem from the payload focal plane level through the data
capture and analysis software design. At FireSAT's orbital velocity, using detector
technology to achieve infrared sensitivity in the 4.2-micrometer range at a
resolution of 30-180 m, the sensor data rate could reach 100 megabits per second.
This is an enormous volume of data to sift through, leading more than one daunted
engineer to speak of needles and US-sized haystacks.
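The 100 megabits per second figure can be sanity-checked with a first-order push-broom estimate. Every input in the sketch below (swath width, ground sample distance, bit depth) is an assumption chosen only to show the form of the calculation.

```python
# First-order push-broom data-rate estimate (all inputs are illustrative).
swath_km = 2800.0        # assumed sensor swath width at 700 km altitude
pixel_m = 50.0           # assumed ground sample distance (within the 30-180 m range)
bits_per_pixel = 12      # assumed quantization
ground_speed_kms = 6.76  # approximate sub-satellite point speed at 700 km

pixels_cross_track = swath_km * 1000 / pixel_m          # pixels per scan line
lines_per_second = ground_speed_kms * 1000 / pixel_m    # lines imaged per second
rate_bps = pixels_cross_track * lines_per_second * bits_per_pixel
print(f"Estimated raw sensor data rate: {rate_bps/1e6:.0f} Mbps")
```

With these assumed inputs the estimate comes out on the order of 90 Mbps, which is consistent with the magnitude quoted in the text.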
Fortunately, rather than having to completely reinvent the wheel to solve this
problem, the systems engineers and software designers are able to adapt
unclassified digital signal processing algorithms that were originally developed
for the USAF Defense Support Program to detect enemy missile launches from
space. With these pieces of the software plan in place, the systems engineers can
move confidently forward to the SDR.
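The adapted DSP algorithms themselves are not described here. As a stand-in, the toy detector below flags pixels that exceed the local background by a fixed number of standard deviations, which conveys the kind of per-pixel decision the payload software must make; the threshold and synthetic data are invented for illustration.

```python
import numpy as np

def detect_hot_pixels(ir_frame, background, threshold_sigma=5.0):
    """Toy fire detector: flag pixels whose 4.2-um radiance exceeds the
    background by more than threshold_sigma standard deviations. The real
    FireSAT algorithms (adapted from DSP heritage) are far more sophisticated."""
    residual = ir_frame - background
    sigma = residual.std()
    return np.argwhere(residual > threshold_sigma * sigma)

# Synthetic 100x100 frame with one injected hot spot
rng = np.random.default_rng(0)
frame = rng.normal(300.0, 2.0, size=(100, 100))   # background radiance counts
frame[42, 17] += 50.0                             # simulated wildfire signature
print(detect_hot_pixels(frame, background=np.full_like(frame, 300.0)))
```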
Table 19-35. Example Subsystem Requirements. These go into the updated spacecraft system requirements document at the system definition review. (OPR is office of primary responsibility.)
3.1.3.1 Payload Definition
   Requirement: The wildfire detection payload will consist of: 1) attitude determination and pointing, 2) command and control, 3) calibration sources, 4) data handling subsystem, 5) telescope and optics, 6) electrical power subsystem, 7) thermal control, 8) structures and mechanisms, and 9) harness.
   Rationale: This payload architecture is derived from the functional-to-physical allocation in the conceptual design.
   Source: FS-10005. Priority: B. OPR: SE.
3.1.3.2 Spacecraft Bus Definition
   Requirement: The spacecraft bus will consist of: 1) attitude determination and control subsystem, 2) guidance, navigation, and control subsystem, 3) communication subsystem, 4) data handling subsystem, 5) flight software, 6) electrical power subsystem, 7) thermal control, 8) structures and mechanisms, 9) propulsion, and 10) harness.
   Rationale: This is an accepted subsystem architecture definition for a spacecraft and is consistent with the project's WBS.
   Source: FS-10005. Priority: B. OPR: PM.
3.7.1.2 Attitude Determination and Control Subsystem
   Requirement: The attitude determination and control subsystem (ADCS) shall determine attitude to within +/- 0.06 degrees in the azimuth axis and within 0.03 degrees in the nadir axis (3 sigma).
   Rationale: This requirement was derived from an analysis of the mission-level geolocation accuracy requirement. It represents an allocated error budget to the ADCS.
   Source: FS-3045. Priority: B. OPR: ADCS Eng.
Table 19-36. Software Development Plan Template. The FireSAT team uses this template to
begin organizing the project software tasks. (CSCI is computer software configuration
item; HWCI is hardware configuration item.) Adapted from NWSC [2005].
FIGURE 19-37. Logical Decomposition of the System-Level “Detect Fires” Function. Here we
see the “Detect Fires” function decomposed into a series of sub-functions. A critical
one allocated to software is the “Determine Fire Yes/No?” function. (IR is infrared.)
For example, engineers performing structural analysis need the latest coupled loads calculations and system configuration information. Analysis done with incorrect assumptions or erroneous data
wastes time and contributes to cost, schedule, and performance risk. The DAC
plan for FireSAT, as contained in the project SEMP, is shown in Table 19-37. Along
with the simple DAC schedule is a summary of the focus and types of tools for each
DAC. This information is essential for resource planning to ensure that the
necessary tools are available when needed.
Table 19-37. FireSAT Design and Analysis Cycle (DAC) Planning. As the project progresses
through its lifecycle, the DACs become increasingly comprehensive. (COTS is
commercial-off-the-shelf; STK is Satellite Tool Kit.)
DAC 0. Focus: mission concept and architecture trade studies; requirements sensitivity analysis; mission effectiveness. Tools: COTS software tools (e.g., STK, MS Office); simple custom analytic tools (e.g., spreadsheets, Matlab).
The trade studies have identified some risk items to put on the
watch list, and actions are assigned to mitigate those risks. Among them is the
interface to the NOAA ground stations. One of the FireSAT system requirements is
that the satellite be able to communicate with the NOAA ground stations.
In allocating the reliability requirement to the telemetry, tracking, and
commanding subsystem, and with an eye toward reducing lifecycle operations
costs, the lead systems engineer decides to make this a largely autonomous
function within the spacecraft bus software. Unfortunately, the NOAA ground stations require the operator to physically initiate a request before each spacecraft-to-ground station downlink operation. Thus, the FireSAT requirements
are incompatible with the ground station interface requirement. FireSAT can either
operate autonomously, or it can meet the requirement to use the NOAA ground
stations as is.
Since the matter primarily affects NOAA, a NOAA-led working group is
assigned to resolve this conflict. Neither choice impacts the system hardware
development. However, because the external interface would change, this further
delays software development. The NOAA team also finds a significant disconnect in
the payload development. Because they are treating the FireSAT payload as a risk
reduction flight for a future NOAA project, they've been using their own work
breakdown structure, logical decomposition, and software development plan. As a
result, their technical products don't align with the technical requirements for the
rest of the FireSAT project. The payload development doesn't break out the payload
functions to the level where software, firmware, and hardware allocations can be
tracked and managed. This leads to some uncertainty in capturing the functions that
FireSAT has to allocate to the payload.
The concern is that some "glueware" will be needed to patch the payload to
the FireSAT interface once integration starts. For this reason, the FireSAT project
manager flags this interface as a high risk item to be tracked. As part of the risk
mitigation, the configuration control effort on the payload-to-bus interface is
elevated and weekly technical tag-ups between the bus and payload teams are
mandated. Neither team welcomes additional meetings and reports, as they're
already laboring to stay on schedule. But they recognize the reasons behind the
decision, and so redouble their efforts.
19.4.7 Conclusions
The adrenaline rush leading up to the SDR fades as the team celebrates another
successful review, and systems engineers and project managers reflect on what
they've accomplished and what work remains. Enormous efforts during Pre-phase
A have served the project well in Phase A by completing a rigorous analysis of
alternatives to identify a solid technical solution. Early in Phase A, this enabled
engineers to focus less on redesign and more on analysis to confirm or modify
basic assumptions to derive a valid set of requirements baselined at SRR. And the
rigorous functional analysis performed during the DAC following the SRR helped
define the system architecture and allocate critical functions to hardware and
software. As the team prepares to enter Phase B, all these results, along with the
collective skills of the entire team, will be needed to complete the system
preliminary design leading to the preliminary design review (PDR).
TABLE 19-38. Preliminary Design Review (PDR) Entrance and Success Criteria. (SRR is system requirements review; SDR is system definition review; MDR is mission definition review; RID is review item discrepancy; TPM is technical performance measure; RFA is request for action; EEE is electronic, electrical, and electromechanical; EMI is electromagnetic interference; PRA is probability risk analysis; EMC is electromagnetic compatibility; S&MA is safety and mission assurance.) Adapted from NASA [2007].
Entrance criteria:
1. Project has successfully completed the SDR and SRR or MDR and responded to all RFAs and RIDs, or has a timely closure plan for those remaining open
2. A preliminary PDR agenda, success criteria, and charge to the board have been agreed to by the technical team, project manager, and review chair before the PDR
3. PDR technical products listed below for both hardware and software system elements have been made available to the cognizant participants before the review:
   a. Updated baselined documentation, as required
   b. Preliminary subsystem design specifications for each configuration item (hardware and software), with supporting trade-off analyses and data, as required. The preliminary software design specification should include a completed definition of the software architecture and a preliminary database design description, as applicable.
   c. Updated technology development maturity assessment plan
   d. Updated risk assessment and mitigation
   e. Updated cost and schedule data
   f. Updated logistics documentation, as required
   g. Applicable technical plans (e.g., technical performance measurement plan, contamination control plan, parts management plan, environments control plan, EMI/EMC control plan, payload-to-carrier integration plan, producibility/manufacturability program plan, reliability program plan, quality assurance plan)
   h. Applicable standards
   i. Safety analyses and plans
   j. Engineering drawing tree
   k. Interface control documents
   l. Verification and validation plan
   m. Plans to respond to regulatory requirements (e.g., environmental impact statement), as required
   n. Disposal plan
   o. Technical resource use estimates and margins
   p. System-level safety analysis
   q. Preliminary limited life items list (LLIL)
Success criteria:
1. The top-level requirements (including mission success criteria, TPMs, and any sponsor-imposed constraints) are agreed upon, finalized, stated clearly, and consistent with the preliminary design
2. The flow down of verifiable requirements is complete and proper or, if not, we have an adequate plan for timely resolution of open items. Requirements are traceable to mission goals and objectives.
3. The preliminary design is expected to meet the requirements at an acceptable level of risk
4. Definition of the technical interfaces is consistent with the overall technical maturity and provides an acceptable level of risk
5. We have adequate technical margins with respect to TPMs
6. Any required new technology has been developed to an adequate state of readiness, or we have viable back-up options
7. The project risks are understood and have been credibly assessed, and we have plans, a process, and resources to effectively manage them
8. Safety and mission assurance (e.g., safety, reliability, maintainability, quality, and EEE parts) have been adequately addressed in preliminary designs and any applicable S&MA products (e.g., PRA, system safety analysis, and failure modes and effects analysis) have been approved
9. The operational concept is technically sound, includes (where appropriate) human factors, and includes the flow-down of requirements for its execution
Table 19-39. Supporting Documents for Preliminary Design Review Success Criteria. (TPM is technical performance measure; GSE is ground support equipment; SEMP is systems engineering management plan; EEE is electronic, electrical, and electromechanical; PRA is probability risk analysis; S&MA is safety and mission assurance.)
Table 19-40. Contents for Primary Preliminary Design Review (PDR) Supporting Documents. (MOE is measure of effectiveness; GSE is ground support equipment; CDR is critical design review; TBD/TBR is to be determined and to be resolved.)
TABLE 19-40. Contents for Primary Preliminary Design Review (PDR) Supporting Documents.
(Continued) (MOE is measure of effectiveness; GSE is ground support equipment;
CDR is critical design review; TBD/TBR is to be determined and to be resolved.)
TABLE 19-41. Systems Engineering Process Level of Effort Leading up to PDR. Analogous to a
cell phone signal, zero bars represents no or very low resource requirements and five
bars represents maximum resource use. Resource requirements apply for a particular
process only; a given number of bars for one process does not imply the same level of
resources as the same number of bars for a different process.
4. Physical solution (■■■□□): During preliminary design, the final form of the physical solution begins to emerge.
5. Product implementation (■□□□□): Identifying existing assets and assessing the cost-benefit and risk-reduction aspects of existing systems are standard during this phase. We also plan for necessary long-lead procurement items.
6. Product integration (■■□□□): As part of the logical decomposition process, we consider all lifecycle phases for each system element. We identify and coordinate with stakeholders to ensure that all requirements and interfaces (such as to existing or planned facilities) are captured.
TABLE 19-41. Systems Engineering Process Level of Effort Leading up to PDR. (Continued)
Analogous to a cell phone signal, zero bars represents no or very low resource
requirements and five bars represents maximum resource use. Resource requirements
apply for a particular process only; a given number of bars for one process does not
imply the same level of resources as the same number of bars for a different process.
8. Product validation (■■□□□): Validation planning continues at a low level as a trade-off during design.
9. Product transition (■□□□□): Only minimum effort on this process, mainly to identify enabling products for transition and how they may impact lifecycle cost.
10. Technical planning (■■■□□): These processes must be fully defined for a successful completion of PDR as they determine how the program will be executed in the next phase.
16. Technical assessment (■■■□□): Planning to support the upcoming PDR and subsequent reviews must begin.
17. Decision analysis (■■■□□): Rigorous technical decision analysis is key to developing credible Phase B results. It's impossible to complete all the trade studies that we identify, so good decision analysis identifies the most critical ones.
19.5.4 Results
During Phase B, FireSAT project engineers complete the first major design and
analysis cycle (DAC), focused on subsystem design and interactive system
behavior. The Pre-phase A and Phase A DACs were more conceptual in nature.
That level of analysis often features gross assumptions, and first-order analytic
models tend to be the rule. In Phase B, we start questioning some of these
assumptions in earnest as second-order analysis tools and techniques begin to
uncover unexpected results and unintended consequences of initial design choices.
To understand this further, in this section we look at several exhibits from the system preliminary design.
FIGURE 19-38. FireSAT Spacecraft Architecture. The system definition review yielded this
architecture, which undergoes further refinement and definition in Phase B.
FIGURE 19-39. FireSAT Spacecraft External Configuration, Deployed and Stowed. Conceptual
drawings complement diagrams and tables. They help everyone involved in the
project to get an idea of the final system.
Table 19-42. FireSAT Spacecraft Equipment List. We roll up the component masses to get the
subsystem masses, and these to arrive at the system mass. (MON is mixed oxides of
nitrogen; ISO is isolation; OD is outside diameter; ADCS is attitude determination and
control system; GPS is Global Positioning System; GNC is guidance, navigation, and
control; EPS is electrical power system; HPA is high-power antenna; Rx is receive;
Tx is transmit; He is helium.)
Item (quantity, mass in kg):
Payload (1, 28.1 kg)
Bus:
ADCS and GNC (9.74 kg total)
  ADCS control electronics (1, 2.00 kg)
  Reaction wheels (4, 0.79 kg)
  Magnetorquers (3, 1.50 kg)
  Sun sensor (1, 0.60 kg)
  Earth sensor (1, 2.75 kg)
  Magnetometer (1, 0.85 kg)
  GPS receiver (1, 1.00 kg)
  GPS antenna (1, 0.25 kg)
EPS (41.68 kg total)
  Battery, 24 cells (1, 9.31 kg)
Propulsion (44.14 kg total)
  Propulsion electronics (1, 1.00 kg)
  Propellant tank (1, 20.00 kg)
  Pressurant tank, including gas (1, 10.00 kg)
  Orbit control engine (1, 1.00 kg)
  Reaction control engines (6, 3.00 kg)
  He fill and drain valve (1, 0.25 kg)
  He regulator (1, 1.00 kg)
  He tank ISO valve (2, 1.00 kg)
  MON fill and drain valve (1, 0.50 kg)
  MON tank ISO valve (2, 1.00 kg)
  Lines and fittings, 6.35 mm OD line (1, 5.39 kg)
Structures and mechanisms (33.37 kg total)
TABLE 19-42. FireSAT Spacecraft Equipment List. (Continued) We roll up the component
masses to get the subsystem masses, and these to arrive at the system mass. (MON
is mixed oxides of nitrogen; ISO is isolation; OD is outside diameter; ADCS is attitude
determination and control system; GPS is Global Positioning System; GNC is
guidance, navigation, and control; EPS is electrical power system; HPA is high-power
antenna; Rx is receive; Tx is transmit; He is helium.)
Power control unit (1, 5.96 kg)
Transmitter, including HPA (1, 2.10 kg)
Tx antenna (1, 0.08 kg)
Primary structure (1, 18.87 kg)
Dry mass with margin: 168.89 kg
Once we know what's inside, we have to lay out the internal configuration,
what goes where. This is both a creative and an analytic activity, since the
quantifiable requirements (component A connects to component B) and the "-ility"
requirements (such as maintainability) all need to be met. The "-ilities" are difficult
to quantify but easy to spot when there is a problem. The book Spacecraft Structures
and Mechanisms [Sarafin and Larson, 1995] provides guidelines for achieving an
optimal spacecraft configuration. For FireSAT, the internal configuration is laid
out with integration in mind (more on this later) and can be seen in Figure 19-40.
With the internal configuration understood, the preliminary design effort
focuses on how all of the pieces interact. We begin with the instantiated physical
block diagram as described in Chapter 5 and depicted in Figure 19-41. This
diagram helps us focus on the physical interfaces between the subsystems and
between the system and the outside world.
Another useful tool for capturing and defining these interactions is the IxI matrix as described in Chapter 15. Figure 19-42 shows the IxI matrix for the FireSAT spacecraft. The matrix is read clockwise; that is, items come out of the subsystems clockwise and into the associated subsystem on the diagonals.
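A lightweight way to capture the same information outside a drawing tool is to key interface items by the (producing, receiving) subsystem pair, as sketched below. The subsystem names and interface items are illustrative, not the complete matrix of Figure 19-42.

```python
# Minimal interaction-matrix representation: the key is the
# (producing subsystem, receiving subsystem) pair, the value lists what
# flows across that interface. Entries are illustrative only.
ixi = {
    ("EPS", "Payload"): ["regulated 28 V power"],
    ("Payload", "C&DH"): ["image data", "health telemetry"],
    ("ADCS", "C&DH"): ["attitude estimate", "GPS time"],
    ("C&DH", "Comm"): ["downlink frames"],
}

def outputs_of(subsystem):
    """Everything this subsystem sends to the others."""
    return {dst: items for (src, dst), items in ixi.items() if src == subsystem}

print(outputs_of("Payload"))   # -> {'C&DH': ['image data', 'health telemetry']}
```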
Figure 19-40. FireSAT Spacecraft Internal Configuration. The inside of a spacecraft is crowded!
We have to keep interfaces in mind when arranging the subsystems and
components.
[Diagram of spacecraft hardware configuration items: element 0, "Spacecraft," decomposes into HWCIs 1 "Baseplate module," 2 "Launch vehicle interface module," 3 "Propulsion module," 4 "Solar panels," 5 "Payload module," and 6 "Solar array w/SADA (2x)."]
Engineering Drawings
"If you can't draw it, you can't build it" is a familiar quip on the shop floor. The
goal of critical design is to completely specify the system to sufficient detail that
someone can actually obtain all the pieces by building, reusing, or buying them
from some source. All these hardware end-item specifications appear as drawings
or electronic schematics. Creating these drawings is a huge effort that begins to
ramp up during preliminary design.
In planning the engineering drawing tasks, the first step is to bound the
problem by constructing a drawing tree. A drawing tree lays out the hierarchical
relationship among all of the drawings for the system of interest and defines a
drawing number methodology. One useful way to delineate the drawing tree
structure is to start with the hardware configuration items defined in the
configuration control process (the next section discusses how this is also useful for
integration). Figure 19-44 depicts the FireSAT spacecraft drawing tree.
The completion status of planned drawings serves as an important metric of overall design completeness: the percentage of drawings signed and released represents earned value against the overall effort needed to finish the design. Preliminary, top-level drawings are made and released during Phase B for
planning and design purposes. Figure 19-45 shows the stowed configuration
drawing. At this point we may also begin drawings, especially interface control
drawings, on long-lead items such as solar arrays. As a general rule of thumb,
projects aim for about 40% of the drawings, mostly top-level and interface
drawings, to be completed by the PDR.
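Tracking that percentage is trivial once the drawing tree exists. The short sketch below is a hypothetical example (the non-payload drawing numbers and all status values are made up) that computes the released fraction and compares it with the rough 40-percent guideline mentioned above.

```python
# Sketch of a drawing-completion metric: fraction of planned drawings that
# have been signed and released. Drawing numbers and statuses are illustrative.
planned_drawings = {
    "100000": "released",     # spacecraft top assembly (hypothetical number)
    "200000": "released",     # interface control drawing (hypothetical number)
    "540000": "in work",      # payload module
    "541000": "in work",      # sensor
    "542000": "not started",  # payload structural interface
}

released = sum(1 for status in planned_drawings.values() if status == "released")
fraction = released / len(planned_drawings)

print(f"Drawings released: {released}/{len(planned_drawings)} ({fraction:.0%})")
if fraction < 0.40:   # rough PDR rule of thumb cited in the text
    print("Below the ~40% guideline for PDR readiness")
```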
(Drawing tree excerpt: 540000 payload module; 541000 sensor; 542000 payload structural interface.)
Figure 19-44. FireSAT Spacecraft Drawing Tree. In FireSAT, we group the planned drawings according to the diagram of the hardware
configuration items shown in Figure 19-43 and define an overall drawing numbering strategy for the entire system. (SADA is solar
array drive assembly; GPS is Global Positioning System; ACS is attitude control subsystem; TX is transmit; RX is receive; ADCS
is attitude determination and control subsystem.)
FIGURE 19-45. FireSAT Spacecraft Stowed Configuration Drawing. This is a top-level drawing, but more detailed than a conceptual picture. For example, it includes several spacecraft dimensions.
developed for a different NOAA program. At the time, it was stipulated that the
NOAA infrared payload must meet Technology Readiness Level (TRL) 6 before
the FireSAT PDR. One month before the PDR, a non-advocacy board for
technology readiness conducts a formal review of the state of the payload
development. Unfortunately, it doesn't rate the development at TRL 6. To
complete the additional tests mandated by the board, NOAA estimates that their
payload team will need an additional three months.
Rather than slip the PDR, the project manager, in consultation with the lead
systems engineer, decides to hold the PDR as scheduled and deal with the
consequences of the payload schedule slip separately. At the PDR, two competing
get-well arguments are put forth.
When the decision was made to give NOAA responsibility for the payload,
Acme committed internal research and development funds to continue their own
instrument development in-house. NASA strongly supported this effort as a risk
reduction effort. Acme now argues that since the NOAA development has failed
to meet the required milestone, the project office should reassign payload project
responsibility back to Acme as originally planned. According to their estimates, a
modest change in contract scope, with funding in line with their original proposal,
could accelerate their internally-developed payload to reach PDR-level within two
months. Their instrument design is based on existing, proven technology with a
notional TRL of 7 or better. Bringing the design to the required level of detail is
only a matter of applying sufficient engineering manpower.
As expected, the primary team, led by NOAA, disagrees strongly with Acme's
argument and puts forward one of their own. They feel that the new payload they've
developed will be a significant improvement in quality over the existing technology
favored by Acme. It would offer better temperature resolution, and require only 50%
of the power necessary to support the Acme design, which is based on older detector
technology. In addition, NOAA (and some members of Congress) are applying
political pressure not to abandon the NOAA design, as its use on FireSAT would be
an important risk reduction for a much higher profile program.
The FireSAT project manager and the systems engineer face a tough decision.
Continuing to bet on the NOAA instrument will lead to at least a three-month
schedule slip, assuming their estimates are correct. If they fail to meet their goal, or
more likely, get very close and then request even more time, the project could face
uncontrolled delays. But switching to the Acme design means having to add back
funding that, while available at the start of the project, has been reallocated after the
decision to use the NOAA payload. And even that option will take at least an
additional two months, assuming the funds can even be found and added to the
contract immediately (unlikely in the world of government contracting).
In the end, the managers choose to leave both options open as long as possible.
They schedule a Delta-PDR for three months after the first PDR and also delay the
official transition to Phase C. Because NOAA doesn't want to be responsible for
delaying a project to combat wildfires (and Congressmen from wildfire-prone
western states are starting to ask tough questions), they agree to find stop-gap
funding for Acme to continue their effort on the back-up payload. Meanwhile, a
tiger team of industry and government "grey beards" is brought together to find
ways to accelerate the NOAA payload development.
While this compromise decision allows for continued development of both
payload options, the project still faces a delay of at least three months, and incurs
extra schedule, cost, and performance risk. While this decision doesn't satisfy the
Acme team, the project manager feels that the potential payoff of better resolution
and lower power is worth the risk. The argument that these benefits aren't
necessary to meet the current requirements is true, but the added resolution could
find smaller wildfires, ultimately saving more lives and property. Furthermore,
the long payload duty cycle means that the potential power savings of the NOAA
payload could be significant.
19.5.6 Conclusions
Preliminary design review epilogue: the risk pays off. The combination of the tiger team's expertise and the competition from Acme spurs on the NOAA payload team, which manages to resolve all open issues from the technology readiness review board. At the Delta-PDR, the preliminary designs for both payload
options are presented. Both options can do the job, but the NOAA option with
increased performance at lower power (and outside the FireSAT project budget)
wins the day. Although disappointed, the Acme team manager assures his
superiors that their investment of IR&D is more than returned in the form of good
will among all members of the project team, including NOAA (and positions them
to bid for a commercial mission that could leverage the same sensor technology).
While hypothetically in competition, Acme engineers actively worked with the
NOAA team to help them resolve technical issues, and some of the innovative
solutions Acme had developed for their own payload have found their way into the
NOAA design. Furthermore, because of their efforts at payload design, the Acme
team is intimately familiar with both sides of the interface, making the eventual
integration of bus and payload that much easier. With this major drama behind
them, and the design-to baseline firmly established, the FireSAT team is ready to
move into the critical design phase.
of the system. The PDR establishes a design-to baseline defined by a strong set of
requirements and other supporting project documents; the CDR has to achieve a
build-to baseline with sufficient technical detail for contracting officers to start
buying components and parts and for machinists to start bending metal. During
this phase, the project:
• Develops a product build-to specification for each hardware and software
system element, with supporting trades and data
• Completes all technical data packages, including IRDs, integrated
schematics, spares provisioning list, specifications, and drawings
• Updates baselined documents, such as the SEMP
• Completes software design documents
• Finalizes the master verification plan and supporting documents
• Develops a launch site operations plan
• Finalizes an on-orbit check-out and activation plan
• Updates risk assessments, cost and schedule data, and logistics documentation
TABLE 19-44. Critical Design Review (CDR) Inputs and Success Criteria. Phase C takes the
project to CDR, after which the project is ready for actual building. (RFA is request for
action; RID is review item disposition; EEE is electronic, electrical, and
electromechanical; PRA is probability risk analysis.) Adapted from NASA [2007].
Inputs:
1. The project has successfully completed the PDR and responded to all PDR RFAs and RIDs, or has a timely closure plan for those remaining open
2. A preliminary CDR agenda, success criteria, and charge to the board have been agreed to by the technical team, project manager, and review chair before the CDR
3. The CDR technical work products listed below for both hardware and software system elements have been made available to the cognizant participants before the review:
   a. Updated baselined documents, as required
   b. Product build-to specifications for each hardware and software configuration item, along with supporting trade-off analyses and data
   c. Fabrication, assembly, integration, and test plans and procedures
   d. Technical data package (e.g., integrated schematics, spares provisioning list, interface control documents, engineering analyses, and specifications)
   e. Operational limits and constraints
   f. Technical resource use estimates and margins
   g. Acceptance criteria
   h. Command and telemetry list
   i. Verification plan (including requirements and specification)
   j. Validation plan
   k. Launch site operations plan
   l. Check-out and activation plan
   m. Disposal plan (including decommissioning or termination)
   n. Updated technology development maturity assessment plan
   o. Updated risk assessment and mitigation plan
   p. Updated reliability analyses and assessments
   q. Updated cost and schedule data
   r. Updated logistics documentation
   s. Software design documents (including interface design documents)
   t. Updated limited life items list
   u. Subsystem-level and preliminary operations safety analyses
   v. System and subsystem certification plans and requirements (as needed)
   w. System safety analysis with associated verifications

Success criteria:
1. The detailed design is expected to meet the requirements with adequate margins at an acceptable level of risk
2. Interface control documents are sufficiently mature to proceed with fabrication, assembly, integration, and test, and plans are in place to manage any open items
3. We have high confidence in the product baseline, and we have (or will soon have) adequate documentation to allow proceeding with fabrication, assembly, integration, and test
4. The product verification and product validation requirements and plans are complete
5. The testing approach is comprehensive, and the planning for system assembly, integration, test, and launch site and mission operations is sufficient to progress into the next phase
6. There are adequate technical and programmatic margins and resources to complete the development within budget, schedule, and risk constraints
7. Risks to mission success are understood and credibly assessed, and we have the plans and resources to effectively manage them
8. Safety and mission assurance (S&MA) (e.g., safety, reliability, maintainability, quality, and EEE parts) have been adequately addressed in system and operational designs, and any applicable S&MA products (e.g., PRA, system safety analysis, and failure modes and effects analysis) have been approved
design-to baseline to the component and part level. As we develop these products,
this new information triggers updates in the SEMP; the technology development,
maturity, and assessment plan; risk assessment and mitigation planning, etc. Table
19-46 shows more detailed contents of some of these documents.
Table 19-45. Products to Satisfy the Critical Design Review (CDR) Success Criteria. Here we
list the documentation supporting the success criteria enumerated in Table 19-44.
(GSE is ground support equipment; EEE is electronic, electrical, and
electromechanical; PRA is probability risk analysis.)
1. The detailed design is expected to meet the requirements with adequate margins at an acceptable level of risk: • Technical measurement plan and reports • Requirements validation matrices
2. Interface control documents are sufficiently mature to proceed with fabrication, assembly, integration, and test, and plans are in place to manage any open items: • Interface requirements and control documentation • Baseline configuration document
3. We have high confidence in the product baseline, and have (or will soon have) adequate documentation to proceed with fabrication, assembly, integration, and test: • System design reports • Software development plan • Released engineering drawings • Flight hardware, software, and ground support equipment specifications
4. Product verification and validation requirements and plans are complete: • System requirements documents (hardware, software, and GSE) • Master verification plan
5. The testing approach is comprehensive, and the planning for system assembly, integration, test, and launch site and mission operations is sufficient to progress into the next phase: • Master verification plan • Manufacturing and assembly plan
Table 19-46 (excerpt):

FireSAT system design study report (for each system element within the mission architecture as appropriate) (UPDATE):
• Analysis of alternatives trade-tree including (for each system in the architecture, e.g., spacecraft, ground system, etc.)
• Alternate system concepts
• Identification of system drivers for each concept or architecture
• Results of characterization for each alternative
• Critical requirements identified or derived
• Concept and architecture utility assessment including evaluation criteria, e.g., cost, technical maturity, MOEs
• Detailed functional analysis
• Proposed baseline system configuration with functional allocation

Master verification plan, test, and verification requirements:
• See discussion below

Spacecraft-to-launch vehicle, spacecraft-to-ground systems, spacecraft-to-user interface control documents (ICDs) (BASELINE):
• See discussion in Section 19.5

FireSAT mission-level and systems-level system requirements documents (SRD) (UPDATE):
• Scope
• Applicable documents
• Requirements
  - Prime item definition
  - System capabilities
  - System characteristics
  - Design and construction
  - Logistics
  - Personnel and training
  - Subsystem requirements
  - Precedence
  - Qualification
  - Standard sample
• Verification requirements
• Notes and appendices

Software development plan (SDP) (UPDATE):
• See discussion in Section 19.5

Detailed engineering release drawings:
• See discussion below

Plan to achieve SAR readiness:
• Rank-ordered list of studies and trade-offs
• Forward work mapped to program schedule
• Technology gap closure plan mapped to program schedule
• Requirements update and management plan
• TBD/TBR burn-down plan
• CDR issue resolution plan

Manufacture and assembly plan:
• See discussion in SAR section
Table 19-47. Systems Engineering Process Level of Effort Leading up to CDR. Analogous to a cell phone signal, zero bars represent no or very low resource requirements and five bars represent maximum resource use. Resource requirements are for a particular process only; a given number of bars for one process does not imply the same level of resources as the same number of bars for a different process.

1. Stakeholder expectation definition (■□□□□): Stakeholder expectations should now be adequately captured in technical requirements. More negotiations may be needed as we uncover lower-level trade-offs.
2. Technical requirements definition (■■■□□): Major system-level requirements should now be well-defined. Some refinement may be needed as we make new trade-offs during detailed design.
3. Logical decomposition (■□□□□): Largely completed during earlier phases but still useful in refining lower-level specifications, especially for software.
4. Physical solution (■■■■■): During critical design, the final form of the physical solution is defined.
5. Product implementation (■■■□□): Identifying existing assets and assessing the cost-benefit and risk-reduction aspects of existing systems are standard trades during this phase.
6. Product integration (■■□□□): Product integration planning and scheduling trade-offs are important considerations during detailed design.
7. Product verification (■■□□□): Verification planning moves to high gear during this phase as plans are finalized.
8. Product validation (■■□□□): Validation planning continues apace with verification planning.
9. Product transition (■□□□□): Only minimum effort on this process, mainly to identify enabling products for transition and their impact on lifecycle cost.
10. Technical planning (■■□□□): Technical planning continues, largely as replanning when unforeseen issues arise.
11. Requirements management (■■■■■): The requirements management process must be fully defined and in place to control the evolving and expanding set of requirements and detail specifications.
12. Interface management (■■■■■): Interface management efforts move into high gear as part of detailed design.
13. Technical risk management (■■■■□): Risk planning and management remain at a vigorous level.
TABLE 19-47. Systems Engineering Process Level of Effort Leading up to CDR. (Continued) Analogous to a cell phone signal, zero bars represent no or very low resource requirements and five bars represent maximum resource use. Resource requirements are for a particular process only; a given number of bars for one process does not imply the same level of resources as the same number of bars for a different process.

14. Configuration management (■■■■■): End item configuration control processes are critical throughout detailed design.
15. Technical data management (■■■■□): Tools and repositories must be fully deployed along with the processes to guide engineering teams in concurrently defining systems.
16. Technical assessment (■■■■□): Planning to support the upcoming CDR and subsequent reviews must begin.
17. Decision analysis (■■□□□): Decision analysis processes continue but at a lower level as (we hope) most major project issues have been resolved by this point.
19.6.4 Results
As the name implies, the build-to (and code-to) baselines define all the people,
processes, and design definition products needed to "turn the crank" to produce
the necessary hardware and software items that make up the system. But equally
important is knowing whether, once the crank has been turned, we've actually
built the system we wanted. For FireSAT Phase C results, we focus on these two
aspects of the system lifecycle—design and verification. The top-level view of the
design, including trade-offs, architecture, and configuration are in the system
design report described in earlier sections. In this section, we concentrate first on
how the engineering drawings capture the design details. We then turn our
attention to verification planning, looking at three key documents, the system
requirements document (SRD Section 4), the master verification plan, and the test and
verification requirements.
• Instructions on how to create the assembly from its component parts along
with detailed procedure steps, required tooling, torque values, etc.
We organize the drawings in a drawing tree as described in the last section.
Manufacturing drawings must have enough information for a machinist to
fabricate a part from raw materials, or include detailed process descriptions for
adhesive bonding or other techniques. Electrical schematics must provide sufficient
detail that the metal traces can be laid on the printed circuit board and surface
mount components added (either by hand or by machine). Figure 19-46 shows the
manufacturing drawing for the FireSAT Sun sensor bracket.
Of course, these days it's not likely that parts will actually be hand-machined
by a skilled machinist. Rather, the drawing details are programmed into a
computer-aided mill that machines the part precisely with less touch-labor. But a
skilled machinist must still translate the CAD drawing, or three-dimensional CAD
model, into sensible computer-aided manufacturing routines. The same is true of
printed circuit board (PCB) traces. This process is now almost completely
automated, as many small companies specialize in turning out PCBs based on
standard e-CAD files. For low production-rate surface mount boards, all the
soldering of miniscule parts is done under a microscope by skilled technicians (who
typically must hold NASA, European Space Agency, or similar certifications). One
manufacturing requirement that NASA has levied on the spacecraft is that all
electronic flight hardware be assembled by NASA-certified technicians.
For FireSAT, like most spacecraft, a few parts are custom-made. These include
primary and secondary structural parts and some spacecraft-specific PCBs (such
as the payload avionics). The vast majority of piece parts, including fasteners,
batteries, and connectors, come from vendors. The same is true as we work our
way up the assembly chain. The prime contractor procures many major
components (such as transmitters) or even whole assemblies (such as solar arrays)
as single items from subcontractors. These subcontractors, in turn, procure lower-
level components or parts from other vendors. But at some level of the supply
chain, someone actually has to make something. So manufacturing drawings or
similar detailed specifications must exist for every single part. These specifications
are mostly not controlled at the project level. Instead, the prime contractor relies
on military or industry standards (such as ISO-9002) or other methods to ensure
high quality from the bottom of the supply chain to the top.
Once we have the parts, they must be assembled into components and other
higher-level assemblies. Figure 19-47 shows the assembly drawing for the Sun
sensor. After all the components are assembled, the spacecraft is ready for final
integration. Figures 19-48, 19-49, and 19-50 are all part of the single assembly drawing for the spacecraft; next to the views in Figure 19-50 is a blow-up of the parts list. Final
spacecraft assembly can only take place when all parts are ready for integration.
Ideally, all the detail drawings, schematics, and software unit design would be
completed by the CDR. But this rarely happens. The usual rule of thumb is to try
to have the drawings about 80-90% complete. Which 80-90% (and the potential
unknown problems lurking in the other 10-20%) is one of the risks to assess during the CDR.
FIGURE 19-46. FireSAT Sun Sensor Manufacturing Drawing. The drawing must give enough detail to manufacture the part or component.
FIGURE 19-47. FireSAT Sun Sensor Assembly Drawing. The project procures many components from vendors and subcontractors. We must
have drawings for all of these.
FIGURE 19-48. FireSAT Spacecraft Assembly Drawing Part 1. Project engineers must produce the drawings that are specific to the spacecraft
we're building.
FIGURE 19-49. FireSAT Spacecraft Assembly Drawing Part 2. This figure complements Figure 19-48.
FIGURE 19-50. FireSAT Assembly Drawing Part 3 and the Parts List. Every view drawn provides essential information for spacecraft
integration.
Verification Documents
Fully specifying the FireSAT spacecraft and support system designs through
drawings, schematics, and software state models is only half the battle. The other
half is deciding how to know that we actually get what we asked for. That's the
purpose of verification. Chapter 11 describes how to go from an initial design
requirement through final close-out of that requirement via a verification closure
notice (VCN) or similar document. So how do we know the system has been built
right? To illustrate these processes, we take the example of a single system-level
requirement and trace how the verification planning is captured in three key
documents: 1) the system requirements document, 2) the master verification plan,
and 3) the test and verification requirements document. Each document features a
different level of detail, and we need all of them to fully define the big picture. All
these documents were drafted or baselined at earlier reviews, but we've delayed
focusing on them until the discussion on the CDR, when they must be finalized
before being implemented in the next phase.
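One way to keep that thread from requirement to closure intact is to carry each requirement as a single traceability record pointing at its verification requirement, methods, planned events, and eventual VCN status. The data structure below is a hypothetical sketch, not a format the FireSAT project prescribes; the example values echo Tables 19-48 and 19-49.

```python
# Sketch of a requirement-to-verification traceability record. Field names are
# hypothetical; example values follow the natural frequency requirement used
# in Tables 19-48 and 19-49.
from dataclasses import dataclass

@dataclass
class VerificationRecord:
    requirement_id: str          # e.g., "3.3.2" in the SRD (Section 3)
    requirement_text: str
    verification_req_id: str     # matching SRD Section 4 verification requirement
    methods: list                # analysis, test, inspection, demonstration
    planned_events: list         # events drawn from the MVP and TVRs
    vcn_status: str = "open"     # becomes "closed" when the VCN is signed

record = VerificationRecord(
    requirement_id="3.3.2",
    requirement_text="Space vehicle first-mode natural frequency shall be "
                     "greater than 20 Hz",
    verification_req_id="4.3.2",
    methods=["analysis", "test"],
    planned_events=["finite element analysis",
                    "qualification model modal survey",
                    "flight model modal survey"],
)

record.vcn_status = "closed"   # signed off once test and analysis results are in
print(record.requirement_id, record.vcn_status)
```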
TABLE 19-48. Excerpt from the Requirement Matrix. This matrix is in Section 3 of the system
requirements document.
Table 19-49. Excerpt from the Verification Requirements Matrix. This matrix is in Section 4 of
the system requirements document.
Verification Requirement: 4.3.2 Space Vehicle Natural Frequency Verification
Source Requirement: 3.3.2 Space vehicle natural frequency: Space vehicle first-mode natural frequency shall be greater than 20 Hz
Description: The space vehicle first-mode natural frequency shall be verified by analysis and test. The analysis shall develop a multi-node finite element model to estimate natural modes. The test shall conduct a modal survey (sine sweep) of the vehicle using a vibration table. The analysis and test shall be considered successful if the estimated and measured first mode is greater than 20 Hz.
Verification Rationale: Finite element models are a proven analysis technique for predicting the natural frequency of complex structures. The design cycle will depend on the results of this analysis through Phase C. Modal survey is an industry-standard test method for measuring natural frequency as per GEVS-XX-Y.
Units: Qualification model, flight model
Table 19-50. Master Verification Plan Contents. Planning for verification must begin in Phase A.
requirements for facilities and equipment. Sections 1 through 4 of the plan lay out the responsibilities ("who") across the project for managing and conducting these activities. Sections 5 through 13 define "what" is to be done, "when," and "using what" in terms of facilities and equipment. One important aspect of the
MVP is the description of the project verification methodology, sometimes called
model philosophy. This discussion defines the fundamental approach to risk
management and verification that the project will take. Figure 19-51 illustrates the
approach used for the FireSAT spacecraft.
As captured by the verification requirement, the design requirement for
natural frequency will be verified by both test and analysis. The test will use a
modal survey. Let's trace the verification plan for the natural frequency
requirement (allocated to the spacecraft structure) in Figure 19-51 to see where and
with what model the modal survey tests occur. We first see a modal survey for the
structural test model under Engineering Development Units. This is a planned risk
reduction effort and doesn't formally close out the requirement. A second modal
survey is planned for the qualification model and a third with the flight model. The
verification closure notice for this requirement is signed off after the qualification
model test and analyses are complete.
While it emphasizes the test flow for the flight model, the MVP also lays out how
to sequence the tests in Figure 19-51. This sequence is shown in Figure 19-52. Here we
see the baseline modal survey, the test that verifies the requirement along with the
analysis results, in sequence 2 after installation on the vibration test stand. In standard
practice, modal surveys are repeated after each static load or random vibration
sequence to determine if any change has occurred in the structure (a change in the frequency of the modes indicates something has been broken by overtest).

Figure 19-51. Example Development and Test Campaign Planning Diagram for FireSAT. This figure illustrates the number and type of engineering development, qualification, and flight or proto-flight units to be built, their evolution, and the type of verification activities they'll be subjected to. To verify the natural frequency requirement, for example, we conduct modal surveys on all three types of units shown at the top of the figure. (EMC/EMI is electromagnetic compatibility/electromagnetic interference.)
Consider next the test and verification requirement (TVR) for the baseline modal survey. From Figure 19-52 we know that the baseline modal survey
is one part of a series of tests using a vibration test stand. The vibration test stand
simulates the mechanical environment (static load and random vibration) that the
spacecraft will experience on the launch vehicle. It also provides inputs of known
frequency to measure vehicle response (indicating natural frequency). So it makes
sense to group together into one TVR all of the requirements associated with the
induced launch environment that will be verified using the vibration test stand.
This way, the TVR provides a logical grouping of verification requirements into
single large campaigns made up of similar events (or at least events using similar
equipment). This streamlines our planning, helps make the most efficient use of
expensive test equipment, and minimizes the number of transitions needed on
flight hardware. Table 19-51 shows the organization for a sample TVR.
Table 19-51. Test and Verification Requirement Example for the Induced Environment
Requirements. Judicious grouping of tests helps maximize test and verification
efficiency.
TVR ID: 1.0
Source Requirements: 3.3.2 Space vehicle (SV) shall meet its requirements after exposure to the induced launch environment as defined in the Pegasus User's Handbook
Hardware Assembly Level: FireSAT SV Flight Model (PN 2001-1), Flight Software Version 1.1.2
TVR Office of Primary Responsibility: FireSAT Project Office
TVR Execution: Acme Astro Corp.
Input Products: • FireSAT finite element analysis report • FireSAT qualification model vibration campaign test plan (as executed) • FireSAT qualification model vibration test campaign report • Vehicle modal survey configuration • Vibration test stand • Vibration test stand instrumentation • SV-to-vibration test stand adapter
Output Products: • FireSAT flight model vibration campaign test plan (including detailed test procedures) • FireSAT flight model vibration test campaign report
Success Criteria: The analysis and test shall be considered successful if the estimated and measured first mode is greater than 20 Hz.
FIGURE 19-52. FireSAT Environmental Test Campaign Sequence. This shows where the modal survey (sequence 2) fits into the complete test
sequence. (TVAC is thermal vacuum.)
Our discussion has focused on only one requirement, natural frequency. This
is only one of hundreds of requirements for the FireSAT spacecraft. For each one,
the project defines (and validates) a verification requirement. Together these form
part of the development of the MVP and TVRs. All this meticulous planning pays
off both during the initial planning phases (Phase A, B, and C) and when it comes
time to implement during Phase D. During the planning phases, the focus on
verification helps derive requirements for test facilities and specialized handling
equipment with long lead times. More important, it fosters detailed discussions
between the system designers and the system verifiers on the practical aspects of
how to conduct specific tests. These discussions lead to many changes to the
design itself to add in testing "hooks," both physical and in software, that make many tests feasible. During Phase D, when the team starts implementing the
plan, the planning effort also proves invaluable, as we describe in the next section.
3.3.2 Launch Environment: The space vehicle (SV) shall meet its requirements
after exposure to the induced launch environment as defined in the Falcon-1 User's
Handbook.
Both the axial and lateral accelerations are lower on the Falcon-1 than on the
Pegasus XL, but the natural frequency requirement for the spacecraft has increased
from 20 Hz to 25 Hz, entailing a change in this requirement:
3.3.2.1 SV Natural Frequency: SV first-mode natural frequency shall be
greater than 25 Hz.
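To see roughly why the higher frequency requirement translates into added stiffness, a single-degree-of-freedom idealization is enough (this is only a back-of-the-envelope illustration, not the coupled loads analysis the project actually needs):

```latex
f_n = \frac{1}{2\pi}\sqrt{\frac{k}{m}}
\quad\Rightarrow\quad
\frac{k_{25\,\mathrm{Hz}}}{k_{20\,\mathrm{Hz}}} = \left(\frac{25}{20}\right)^{2} \approx 1.56
```

For the same mass, meeting 25 Hz instead of 20 Hz implies roughly 56 percent more effective stiffness.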
The change demands more detailed analysis to show that the current design will
meet this increase in stiffness, and a change to the verification events is necessary to
ensure compliance with this requirement. Fortunately, based on initial results from
the structural test modal survey completed only a week before the CDR, the margin
in the current design turns out to be sufficient to easily meet the 25 Hz requirement
for each vehicle. It still remains to be shown that the vertically stacked, dual satellite
configuration is sufficiently stiff. To better understand this problem, a detailed
coupled loads analysis is required that will add six months to the schedule.
In addition, the mass of the two spacecraft plus payload adapter must be under
540 kg. The best estimate at this time is within this requirement, but with a 40 kg
payload adapter, mass growth from this point forward will have to be carefully
monitored and controlled. This adds complexity to the design change process and
to the risk of a work package cost overrun during the next phase.
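A rough bookkeeping check, assuming the Table 19-42 dry mass with margin (168.89 kg) applies to each of the two vehicles and before any propellant is counted:

```latex
2 \times 168.89\ \mathrm{kg} + 40\ \mathrm{kg} = 377.78\ \mathrm{kg},
\qquad
540\ \mathrm{kg} - 377.78\ \mathrm{kg} \approx 162\ \mathrm{kg}
```

That remaining 162 kg or so has to cover propellant for both spacecraft plus any further mass growth, which is why the growth must be watched so closely.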
There is one silver lining in the cloud. While incorporating these changes into
the project plan will cause a six month slip in the planned initial operating
capability (IOC), it will actually have little effect on the final operating capability
(FOC) and might actually move it ahead. Because the new launch campaign plan
involves only one launch instead of two, the additional time between launches,
predicted to be at least six months, is eliminated from the schedule, along with the
considerable expense of a second launch campaign.
19.6.6 Conclusions
Somewhat the worse for wear, the FireSAT systems engineering team walks
away from the Delta-CDR confident that they've laid to rest all of the open issues
precipitated by the last-minute launch vehicle change. They're thankful for the
efforts they made back in Phase A to put in place rigorous requirements processes
and technical data management systems. These tools make it easier, and less risky,
to trace out the impact of the top-level induced environment requirement change
on all the child, grandchild, and great-grandchild requirements. Risk-reduction
testing during Phase C with various engineering models also gives the team
confidence in their test procedures and verification teams. With the final go-ahead
from the milestone authority, they're ready to start bending metal!
This means that any open items left over from the CDR are either closed or have a timely closure plan defined.
Table 19-52. System Acceptance Review (SAR) Entrance and Success Criteria. This review
closes out Phase D; successful completion permits the project to continue to the
Operational phase.
Entrance criteria:
1. A preliminary agenda has been coordinated (nominally) before the SAR
2. The following SAR technical products have been made available to the cognizant participants before the review:
   a. results of the SARs conducted at the major suppliers
   b. transition to production and manufacturing plan
   c. product verification results
   d. product validation results
   e. documentation that the delivered system complies with the established acceptance criteria
   f. documentation that the system will perform properly in the expected operational environment
   g. technical data package updated to include all test results
   h. certification package
   i. updated risk assessment and mitigation
   j. successfully completed previous milestone reviews
   k. remaining liens or unclosed actions and plans for closure

Success criteria:
1. Required tests and analyses are complete and indicate that the system will perform properly in the expected operational environment
2. Risks are known and manageable
3. System meets the established acceptance criteria
4. Required safe shipping, handling, check-out, and operational plans and procedures are complete and ready for use
5. Technical data package is complete and reflects the delivered system
6. All applicable lessons learned for organizational improvement and system operations are captured
Mapping these criteria into products yields the results shown in Table 19-53.
In Phase D, we have a major shift from planning- and design-oriented "this is what we think..." documents to factual reports focused on "this is what we know..." Now we exercise all those arduously developed plans and report on the results.
More detailed contents of some of these key documents are shown in Table 19-54.
Table 19-53. Products to Satisfy the System Acceptance Review (SAR) Success Criteria.
The documents listed give evidence of meeting the associated success criteria.
6. All applicable lessons learned for organizational improvement and system operations are captured: • Project lessons learned report
19.7.4 Results
In Phase D we build and test the end items of the project. First we focus on the
manufacturing and assembly plan. We then look at documented results from the
assembly, integration, and test (AIT) campaign.
Table 19-54. Contents for Primary System Acceptance Review (SAR) Supporting
Documents. Details of some of the documents listed in Table 19-52 are shown here.
(TBD is to be determined; TBR is to be resolved; FRR is flight readiness review.)
TABLE 19-55. Systems Engineering Process Level of Effort Leading to System Acceptance Review (SAR). Analogous to a cell phone signal, zero bars represent no or very low resource requirements and five bars represent maximum resource use. Resource requirements are for a particular process only; a given number of bars for one process does not imply the same level of resources as the same number of bars for a different process.

1. Stakeholder expectation definition (■□□□□): By now, the technical requirements should fully capture stakeholder expectations. Any changes would be too late to fix.
2. Technical requirements definition (■□□□□): Major system-level requirements should now be well-defined. Requirements changes, except to fix flaws in the system uncovered during verification and validation, should not be accepted.
3. Logical decomposition (□□□□□): Largely completed during earlier phases but still useful in refining lower-level specifications, especially for software.
4. Physical solution (■■□□□): As the system completes final assembly, we define the final physical form of the system.
5. Product implementation (■■■■■): This phase is the final realization of all implementation planning.
6. Product integration (■■■■■): Product integration activities peak during this phase.
7. Product verification (■■■■■): Verification activities peak during this phase as plans are executed.
8. Product validation (■■■■■): Validation activities peak during this phase, once sufficient system assembly maturity is achieved.
9. Product transition (■■■■■): Transition activities peak during this phase as plans are executed.
10. Technical planning (■□□□□): Technical planning continues, largely as replanning when unforeseen issues arise.
11. Requirements management (■■■■□): The requirements management process is used to manage requirements changes required to fix system flaws.
12. Interface management (■□□□□): Interface management activities peak during this phase as all system interfaces are finally put together.
13. Technical risk management (■■□□□): Risk planning and management remain at a vigorous level during this phase in response to contingencies.
14. Configuration management (■■■■■): End item configuration control processes are critical throughout final assembly, integration, and verification.
15. Technical data management (■■■■■): Ready access to the most current data is critical to smooth assembly, integration, and verification.
FIGURE 19-54. FireSAT Spacecraft Hardware Configuration Item (HWCI) Definition. This
includes a CAD model of the major items. (SADA is solar array drive assembly.)
FIGURE 19-55. Integrated Verification Fishbone Logic Diagram for FireSAT Propulsion
Module. The other four modules have similar integration sequences.
Table 19-56. FireSAT Spacecraft Integration Plan. The fishbone diagrams show the AIT
sequence; the integration plan gives the schedule. (ACS is attitude control
subsystem; GPS is Global Positioning System; TX is transmit; RX is receive.)
FIGURE 19-56. Vibration Test Stand Configuration. Callouts: QM, adapter ring, Marmon band, ring fixture, plate fixture, shaker table. Specifications for the test configuration must be clear and unambiguous.
Figure 19-57. Accelerometer locations and labels for vibration testing. The arrows indicate
the directions in which acceleration is to be measured.
TABLE 19-57. Minimum Data to be Monitored during Vibration Testing. This table corresponds
to Figure 19-57 above. (M is monitored.)
Topcenter-X M M
Topcenter-Y M M
Topcenter-Z M
+Xpanel-X M M
+Ypanel-Y M M
Bottomcorner-X M M
Bottomcorner-Y M M
Bottomcorner-Z M M M
Columntop-X M M
Columntop-Y M M
Columntop-Z M
Batterytop-X M
Batterytop-Y M
Batterytop-Z M
Data shall be collected over a frequency range of 20 Hz to 2000 Hz. For sine
sweep and sine burst testing, data shall be plotted on a log-log plot of acceleration
in g versus frequency. For random vibration testing, data shall be plotted on a log-
log plot of acceleration power spectral density (PSD) in g2/Hz versus frequency.
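A hedged sketch of how such a plot might be generated from recorded data; the spectrum here is synthetic and stands in for actual accelerometer output, and only the 20-2000 Hz range and the log-log PSD axes come from the requirement above.

```python
# Sketch: plot acceleration PSD (g^2/Hz) versus frequency on log-log axes.
# The data below are synthetic placeholders standing in for accelerometer output.
import numpy as np
import matplotlib.pyplot as plt

freq_hz = np.logspace(np.log10(20), np.log10(2000), 200)   # 20 Hz to 2000 Hz
psd_g2_hz = 0.01 * (freq_hz / 100.0) ** -0.5               # made-up spectrum shape

plt.loglog(freq_hz, psd_g2_hz)
plt.xlabel("Frequency (Hz)")
plt.ylabel("Acceleration PSD (g$^2$/Hz)")
plt.title("Random vibration response (illustrative data)")
plt.grid(True, which="both")
plt.show()
```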
• Activity description
• Test article configuration
• Support equipment used
• Test results and performance data
• Summary of deviations, problems, failures, and retests
• Copy of as-run procedures
• Summary and authentication
In addition to detailed test reports for each test, the FireSAT project finds it
useful to create a "quick-look" summary verification report for each detailed
report. These provide an executive summary of the test along with top-level
results. Figure 19-58 shows an example for the flight model baseline modal survey
test. Results for this test are not conclusive, so the pass/fail decision is deferred. The details of this issue are discussed in the next section.
Figure 19-58. Summary Verification Report from FireSAT Spacecraft Modal Survey Test.
FireSAT, like many projects, develops some custom report forms tailored to the
project.
while it's the "same" box, it's now being manufactured by a different supplier. The
systems engineer orders additional inspections and tests to ensure that the materials
and processes are unchanged. This delay causes the spacecraft to be unavailable to
load the new C&C software with the customer's additional autonomy. Rather than
delay the SAR, the project manager and systems engineer elect to hold the review as
scheduled, and add this issue to the traveling open-work list. The contractor will
(Verification compliance matrix column headings: Requirement; Verification Methods; Compliance Data; Non-Conformance; Status.)
Figure 19-59. Drawing Tree and Integration. The nature of the design process produces
drawings from the top down. However, manufacturing and assembly occurs from
the bottom up. This creates a tension between the design team working in one
direction and the manufacturing and assembly team, who are waiting to start from
the other direction. (SADA is solar array drive assembly; GPS is Global Positioning
System; ACS is attitude control subsystem; TX is transmit; RX is receive; ADCS is
attitude determination and control subsystem.)
that the natural frequency was within 2 Hz of the actual flight configuration. So the
requirement for natural frequency to be greater than 25 Hz is not conclusively
confirmed. The launch provider has to be consulted and then convinced that this
potential 24 Hz spacecraft stack is within the acceptable margin for the Falcon-1.
After several meetings, the launch provider accepts the new limits.
The final issue involves the transition plan. Early in the project, the US Department of Defense (DoD), which works cooperatively with NOAA on the NPOESS program, agreed to provide C-17 aircraft transportation for the spacecraft to the launch site for integration. But world events have increased demand for these assets to support the US Army, and Air Force officials have threatened to back away from commitments to deliver the spacecraft. Several options are available, but all would impact schedule and slip the delivery date by up to two weeks, putting both the customer and contractor in a bad light with the congressional subcommittee following the program. The program is also dangerously close to the schedule growth that would automatically trigger a congressional subcommittee review under congressionally mandated project trip wires.
Fortunately, this being the first-ever "big" satellite contract for Acme
Astronautics Corp., and one of the first-ever for the great state of Iowa, the
governor of the state steps in and offers Air National Guard C-130 transport
aircraft as part of a "training exercise." This offer makes great headlines. But the
landing loads on the C-130 are significantly different from those on the C-17 for
which the spacecraft transportation container has been designed. Engineers must
quickly design, build, and qualify a shock-absorbing vibration damper to fit on the underside of the container. Despite the last-minute scramble, the spacecraft are
delivered on time for integration.
The culmination of the SAR and final delivery is the signing and presentation
of the DD-250 receiving form, shown in Figure 19-60. Not only does this mark a major milestone in the project, it also means that the prime contractor can receive near-final payment for their hard work!
19.7.6 Conclusions
As the C-130 rolls down the runway, the champagne corks start to fly. The long
road from mission concept to system delivery has passed another milestone. There's
still much to do. Many of the key project personnel only sip at their glasses, as they have to get across the airport to catch commercial airline connections and meet the spacecraft on the other end to start final preparations for launch. Once unpacked and
checked out at the launch site, flight batteries must be charged and propellant loaded
before the spacecraft can be integrated onto the launch vehicle. While much risk still
looms ahead, the team has good reason to celebrate their accomplishments. From
those heady, optimistic days of Pre-phase A, through the cold realities of Phase D,
the team has steadfastly adhered to rigorous systems engineering discipline. A
skillful mix of the science and art of systems engineering has been crucial to finally
deliver a verified system that meets the customer's expectations while keeping the project on schedule and within cost, with manageable risk.
Epilogue
Exactly 9 years, 4 months, 3 days, and 7 hours after launch of the first FireSAT
spacecraft, some of the original designers are on hand in the mission control center
when the final command is sent to the spacecraft. Days before, controllers ordered
it to fire its thrusters one last time to deorbit down to a perigee altitude of 100 km.
Now, it will only be a matter of hours before the hardworking little satellite burns
up somewhere over the Pacific Ocean. It has already operated more than two years
past its design life and, along with its sibling launched at the same time (and still
going strong), has far exceeded the customer's expectations. FireSAT has helped to
bring firefighting into the Space Age. But while these pioneers have blazed the
trail, the FireSAT block II satellites, launched three years earlier with vastly
improved sensors, are setting a new standard in fire detection and tracking. The
hard-learned lessons from the first FireSAT systems engineering processes helped
the team deliver the block II versions on time and well within budget. With the
FIGURE 19-60. FireSAT DD-250. This form marks the transition of the spacecraft from the builder to
the user.
team still somewhat exhausted from the block II effort, there’s word on the street
that the USFS is ready to put out a request for information for a new and improved
version. A Pre-Phase A study is scheduled to kick off next month. Time to dig out
those systems engineering notes again.
References
Boden, Daryl G., Gael Squibb, and Wiley J. Larson. 2006. Cost-effective Space Mission Operations. 2nd Ed. Boston, MA: McGraw-Hill.
GSFC. 2005. Rules for Design, Development, Verification and Operation of Flight Systems,
GSFC-STD-1000, May 30, 2005.
Kelly, Thomas J. 2001. Moon Lander. New York, NY: Harper Collins.
Larson, Wiley J. and James R. Wertz. 1999. Space Mission Analysis and Design. 3rd Ed.
Dordrecht, Netherlands: Kluwer Academic Publishers.
NASA. 2007. NPR 7123.1a — NASA Systems Engineering Processes and Requirements.
Washington, DC: NASA.
NWSC. 2005. Software Development Plan Template. TM-SFP-02 v2.0, Space and Naval
Warfare Systems Center, April 5, 2005.
Sarafin, Thomas and Wiley J. Larson. 1995. Spacecraft Structures and Mechanisms. New York,
NY: McGraw Hill.
Sellers, Jerry Jon. 2004. Understanding Space. 3rd Ed. New York, NY: McGraw Hill.
Index
Decision making 205-211, 214, 217-219
   cognitive biases in 219-221
   complexity 207-208
   consequences 208, 221-222
   cultural bias in 220
   outcomes 202, 206-207, 209-210, 220, 223
   stress and 220-221
   weights, weighing 203-204, 211-216, 229, 231
Decision making process 201-203, 207, 210, 218-219, 222-223
   formal 202-203, 211, 219-220
Decision matrix for transportation 491
Decision selection methods 219
Decision traps 220-222
Decision tree 211-213, 305-306, 309, 311-313
Decisions 201-223
   business-critical 208
   financial-critical 208
   mission-critical 208
   safety-critical 208, 217
   time-critical 208
Decommissioning review 729
Decomposition 147, 154-158, 305, 310, 330, 355, 373, 399, 460, 464, 467, 610, 804, 806, 814
Deep Space 2 476
Defect-free requirements 95, 129
Defective requirements 97, 102, 130
Defense Acquisition Guidebook 27
Delfi-C3 359
Deliverables 198, 333, 334, 335, 380, 418, 446, 449, 451, 544
Deliverables for reviews 721-730
Delta 268
Delta II 287
Demonstration 120, 126, 373
Department of Defense Architecture Framework 85, 87-88
Derived requirements 780, 785-786
Design 3, 9, 13, 14, 145, 147, 186, 330, 335, 338, 343-345, 367, 370-371, 374, 380-382, 396-397, 408-409, 420, 437, 439, 450, 453, 456, 464
   definition 8-9
Design and analysis cycle 522-525, 541-542, 760, 770, 795, 814-815, 822
Design and development 245, 246, 252, 256
Design baseline 728
Design certification review 450
Design constraints 465
Design functions 616
Design parameters 119
Design process 676-678, 680
Design reference mission 264, 764-765
Design requirements 411, 417-420, 439, 446, 453
Design, development, test, and evaluation 235, 239, 288, 290
   costs 245, 256, 257
Design, Risk and 296, 298-300, 313, 315, 320
Design-to baseline 402, 517, 735, 817
Design-to requirements 411, 450-451
Detailed engineering estimating 234
Detailed packaging review 343
Development costs 236, 245-257
Development model 367, 373
Deviations 333, 334, 338, 447, 653, 660
Differential testing 468
Digital content 669-670, 674, 681, 683-688, 690, 694-697
Digital-engineering environment training 687
DirecTV 574
Disaster recovery and continuity of operations plan 698-700
Discipline functions 616
Doctrine 71, 73, 74, 75, 86
Document tree 765, 768-769
Documentation (configuration management) 648, 650-652, 657-658, 661-666
DoDAF See Department of Defense Architecture Framework
Drawing tree 828-830, 869
DRCOP See Disaster recovery and continuity of operations plan
Drivers
   performance 74, 79, 86, 89
   schedule 74
DRM See Design reference mission
DS2 See Deep Space 2
Dynamic analysis 190
Dynamic library 658
Mission design study report 745, 776
Mission goals See Goals
Mission need See Need
Mission objectives See Objectives
Mission operations center 494-495
Mission requirements See also Stakeholder requirements 39-40, 47-51, 57-59, 61-63, 264
   priorities 57-58
   validation 62
Mission risks 296, 301-303, 312
Mission scenario tests 437, 440, 471
Mission scope See also Scope 67, 69, 70-71
Mission support training and simulation results report 738
Mission type 252
Mission-critical decisions 208
MLD See Master logic diagram
MM/OD See Space environments/Micrometeoroid and orbital debris
MOC See Mission operations center
Mock-up 367, 373
Modal survey 865-867
Model philosophy 366, 372-373, 377, 398
Model uncertainty factors 403
Model validation 385, 388, 397-403
Model validation matrix 401, 403
Model-based systems engineering 131, 796, 853
Models 358, 361, 366-367, 372-373, 393, 395-403, 408, 422-424, 427, 448, 458
   architecture 143, 152, 176, 197
   system 146-147, 166, 170, 188-190
MOE See Measures of effectiveness
Molniya orbit See Orbits, Molniya
Monte Carlo simulation 291, 303, 306-308, 310, 313, 397
MOP See Measure of performance
Morphological box 181-182
MPL See Mars Polar Lander
MRB See Material review board
MTTF See Mean time to failure
MTTR See Mean time to repair
MUF See Model uncertainty factors
Multiple, simultaneous fault injection 469, 472
Multi-tasking overload 469, 472
MVP See Master verification plan

N
NxN diagrams 92
NxN matrix 522-524, 615-618
NASA Systems Engineering Processes and Requirements See NPR 7123.1a
NBL See Neutral Buoyancy Laboratory
NDI See Non-developmental items
Need 9, 37, 41, 42, 43, 99, 101, 107
Need statement 101
Need, goals, and objectives 43, 63, 96, 99, 101-102, 748
Neutral Buoyancy Laboratory 225-227, 229
NGO See Need, goals, and objectives
NOAA ground stations 495
Non-conformances 332, 334, 337-338, 343, 344, 347
Non-developmental items 409, 452-455
Nonfunctional requirements 48, 49, 110, 113, 116-117, 119, 191-193
Nonrecurring cost 259, 260, 271, 288
Normal operations 501
NPR 7123.1a 18, 27, 98, 404
NROC See iNtegrated RMAT and OCM

O
O&S See Operations and support
Objective data 210
Objective evaluation sheet 212
Objectives 43, 68, 99, 101-102, 203-204, 211-212, 225-226, 396
Object-oriented design 147
OCM See Conceptual Operations Manpower Estimating Tool/Operations Cost Model
OOD See Object-oriented design
Operability 113, 635
Operating modes 157, 164
Operational architecture 65-93
Operational deficiency 40
Operational environment 43, 44, 49, 71-74, 112
Operational readiness review 352, 494, 519-520, 729
Operational readiness tests 437, 440, 447, 471, 474
Operational requirements 44, 46, 48-50, 53, 56-57, 112, 410-411, 418, 529
Operational risks 295, 320
Operational scenarios 143, 151, 157, 164-167
Operational view 1 749, 754
Operational views 87-88
Operations 105-106, 113, 635
Operations and maintenance 265, 266, 272, 282
Operations and support 235, 272, 288
   cost elements 258-259
Operations and support cost drivers 260-263
   flight rate 263
   operations concept 262
   reliability, maintainability, and supportability 262
   support policy 262
   system design 261
Operations and support cost elements 260
Operations and support costs 236, 256-284, 288, 290
   analysis process 263-268
   elements 270
   ground operations 271, 277, 282
Operations concept 265, 269, 285
Operations concept document 745, 764, 776, 801
Operations cost model 267
Operations design 635
Opportunities 206, 208
Optical prescription model 397
Optimizing techniques 211, 216
Orbit planning model 402
Orbital replacement units 316-317
Orbits 81-82
   geosynchronous 74, 82, 84
   low-Earth 73, 73-74, 82, 85
   Molniya 82
   sun-synchronous 82
Organizational structure 160, 164, 532
Orion 73, 718
ORR See Operational readiness review
ORT See Operational readiness test
ORU See Orbital replacement unit
OV-1 See Operational view 1

P
Package concept review 343
Packaging, handling, storage, and transportation 485
Pairwise comparison 214-215
PanAmSat6 417
Parametric cost estimating 234, 235, 306-307
   cost models 263
Parametric cost models 237, 267, 290
Partitioning 141, 143-146, 149, 156-160, 162, 166, 172-175
   functional 151, 154, 159, 162, 164-167, 170-171
   physical 175-176, 179, 184, 186
Partnerships 329
Passive stakeholders 46, 48, 49, 51, 100, 104
Pathfinding 490
Payload calibration results report 738
PCA See Physical configuration audit
PDF See Probability density function
PDR See Preliminary design review
PDT See Process development team
Peer review 402
Percent new design 245, 247, 254
Performance parameters 364-365, 371-373, 376
Performance requirements 330, 335, 401, 410, 447, 451
Performance test 441
PFM See Proto-flight model
PFR See Problem/failure report
Phases See Project lifecycle 17
Phoenix Mars Mission 719-720
PHS&T See Packaging, handling, storage, and transportation
Physical block diagram 147, 175
Physical configuration audit 665
Physical models 366-367, 398, 430
Piece-wise validation 402
Pioneer 574
Pioneer Venus 417
Proto-flight 366, 408, 410, 427
Proto-flight model 366
Prototype 366
Prototype, Prototyping 464-465
Pure risk 209

Q

QFD See Quality function deployment
QRAS See Quantitative Risk Assessment System
Quad chart for technical status 555
Qualification 338, 340, 346, 366-367, 373, 408-410, 422, 427, 430, 449-451
Qualification model 366-367, 373
Qualification program 409, 425, 454-455
    delta 455
    full 455
Qualification requirements 408-409, 455
Qualification units 409, 422, 423, 850
Qualitative analytic models 210
Quality attribute characteristics method 187-188
Quality function deployment 117-119
Quality requirements 338
Quantitative analytic models 210
Quantitative Risk Assessment System 309
Questions, Decision making See also Problems, Decision making; Issues, Decision making 202-206, 208-210, 216-217, 220-221, 223, 230

R

RACI 533
Random reboots injection 468
Range safety
    variables for cost estimating relationships 244
Rank reciprocal 204, 229-230
Rank sum 203-204
Rate effects 249
RDBMS See Relational database management system
Reaction control
    variables for cost estimating relationships 243
Real systems test 210
Recurring costs 235, 256, 259, 260, 271, 280
REDSTAR 239
Regression testing 365, 370-371, 379, 431, 439, 444
Regulatory risk 295
Relational database management system 696-697
Relevant stakeholders 37, 46, 100, 132
Reliability 113, 189
Reliability and Maintainability Analysis Tool/Life Cycle Analysis 267
Reliability models 309, 312-315, 320
Reliability, maintainability, and supportability 263
Requirement voices 51-52, 55, 56
Requirements 9, 95-139, 142, 149, 176, 186, 192-194, 198, 298, 302, 319, 322, 330, 334-336, 338-339, 343-345, 347, 358, 365, 372, 378-379
    achievable 392
    allocation 10, 12, 127-129, 133-134, 174, 191, 193, 393, 396-397
    attributes 129, 133-135
    author 130, 134
    baselining 120, 129-132, 136
    changes 125, 132, 135-138
    characteristics of 122
    child 128, 134, 135
    defect-free 95, 129
    defective 97, 102, 130
    definitive 392
    derived 143, 191, 193-194, 197
    developing 96, 104, 106-110, 112, 115-116, 121
    inspections 130-131
    integral 392
    iteration 107
    logical 392
    management 133-137
    owner 134
    parent 127-128, 134-135
    priority 119, 134-136
    rationale 125-126, 135, 137, 394-395, 412
    risk 132, 134-138
    statement 59, 118, 123-126, 137
    syntax 123-124
    tracing 143, 174, 191-197
    VALID 392
    verifiable 392
    writing 97, 103, 122, 124-125, 130, 133
Requirements baseline 724
Requirements management 195
Requirements management tools 194-196
Requirements matrix 411
Requirements traceability and verification matrix 118, 194, 197
Requirements validation 385, 388, 391-392, 394-395
Requirements verification matrix 118, 120, 197
Reserves 302, 306, 319, 321, 333, 335
Residual risk 209
Resource analysis 190
Resource estimate 545
Responsibility assignment matrix 533-534, 536
Reusable launch vehicles 238, 239, 258, 268
Reuse 331, 332, 340, 347, 453, 465-466, 472
Review boards 644
Review team See also Independent review team 718-719
Reviews 333, 334, 338, 345-346, 374, 410, 446, 461-462, 677
    conducting 719-721
    decisions 721-722
    scope 716
Risk 120, 143, 191, 204-207, 234, 293-295, 299-301, 306-308, 313-315, 318-319, 321-322, 324, 354, 366, 374-375, 408, 410, 437, 447, 448, 453-454, 462, 472-473, 475, 476, 574, 684, 699-700
    as a vector 293, 295
    business 209
    definition 293
    identification 294, 297, 299-300
    monitoring, tracking 294, 321-324
    position 294, 321, 325
    pure 209
    reporting 294-295, 324
    residual 209
    secondary 209
    sources of 296, 300, 314
    uncertainty in 294
Risk assessment 294, 301, 400, 466, 477
    qualitative 301-302
    quantitative 301-303, 306
Risk aversion 298, 321
Risk classification 302-303
Risk classification matrix 304
Risk handling approach 319
    accept 319
    mitigate 319
    research 319
    watch 319
Risk management 296-298
Risk management plan 296, 299, 301-302, 549, 586, 745, 766, 770, 776, 800
Risk manager 294, 299-301, 307, 311, 316, 321-3
Risk mitigation See also Risk reduction 295, 297, 301, 304, 319-321, 324, 333, 400, 410, 473
Risk owner 294, 300, 319, 324
Risk reduction See also Risk mitigation 354, 371, 408, 472
Risk status 321-324
RM See Requirements matrix
RMAT/LCA See Reliability and Maintainability Analysis Tool/Life Cycle Analysis
Routine operations, normal operations 483, 501-502, 504
RTVM See Requirements traceability and verification matrix
RVM See Requirements verification matrix

S

SAAD See Structured analysis and design
Safety and mission assurance plan 549, 551
Safety criticality 455, 472, 473
Safety risks 295, 320
SAPHIRE See Systems Analysis Programs for Hands-on Integrated Reliability
SAR See System acceptance review
SATCOM See Satellite communication
Satellite communication 40, 44
Scenario test 437, 441
Scenario tracing 166-172
Scenarios 65-67, 69, 74-76, 79, 81, 84, 86, 89, 91, 99, 103, 110, 166-167, 170-172, 187, 192, 263, 265, 309, 312-314, 317-318, 433, 438, 465
System design report 823, 840
System design study report 776, 799-800, 820, 838
System element 66-67, 69, 72, 75-77, 81, 86, 92
    command, control and communication 72
    ground-based 72, 78, 89
    launch 72, 89
    mission-operations 72, 78, 89
    software 73
    space-based 68, 72, 81, 84, 85, 89
System integration 73, 351-382
    definition 352
    goal 354
    map 353
    process 355, 360, 376, 378
System integration review 405
System of interest 46, 62, 63, 68, 69, 80, 106, 110, 121, 127-128, 150, 152, 155, 176, 179, 509-510, 512, 556
    aircraft example 510
    definition 32
System of systems 751, 780
System owner 376
System requirements 96, 106, 108-110, 115, 118-122, 141, 145, 149, 160, 176, 198, 334-335, 724
    allocation 143
    stakeholder expectations and 146, 176, 194
System requirements documents 777, 779, 784, 800, 813, 820, 838, 847-848
System requirements review 120, 129-133, 393, 406, 519-520, 534, 548, 724-725, 735, 766, 774-777, 779
System safety 574
System tester 376
System validation 432-435, 437, 448
System views 87, 88
Systems acceptance review 729
Systems Analysis Programs for Hands-on Integrated Reliability 309
Systems engineering 15-20, 22, 25, 37-39, 45, 47, 59, 63, 106-107, 117, 143, 147, 233, 234, 305, 343, 344, 353, 355, 368, 381, 386, 427, 465, 472, 513, 515, 582-585, 588, 598, 605, 669-673, 691, 693, 708
    art and science 3-15
    definition 3
    engine (the systems engineering "V") 29
    framework 22-28
    process 351-352, 356
    processes 14-15, 25, 27-29, 31, 142
    scope 4-5
Systems engineering and integration 532, 535, 543
Systems engineering artifacts 670, 677-679, 687-693, 700, 708
Systems engineering domains 795-796
Systems engineering management plan 136, 507, 547-550, 556, 565-601, 608, 745, 765, 776, 791, 814, 820
    contents 567-579
    purpose 566
    reviews 581, 587
    technical summary 568, 577
    writing 580, 589-590
Systems engineering plan 598-601
Systems engineering process level of effort
    critical design review 839-840
    post-launch assessment review 739
    preliminary design review 821-822
    Pre-phase A 746-747
    system acceptance review 860
    system definition review 802-803
    system requirements review 777-778
Systems engineers 2, 9, 14, 42, 56, 61, 236, 302, 313-314, 319, 327, 343, 350, 371, 373, 376, 427, 430, 448, 475-476, 603-604, 670-671, 681-682, 699
    capabilities 3, 15, 28-31
    characteristics 3, 5-8
    performance levels 31
    thought process 9
Systems integration review 374
Systems management 3
    definition 4

T

TAA See Technical assistance agreement
TIC See Theil's Inequality Coefficient
Time phasing 235, 236, 288, 290
Timeline analysis 190
Timelines 74, 77, 79-81, 86
Time-on-pad 287
Titan 270, 287
Titan IVB 267
To Be Determined 102, 136-137
TOP See Time-on-pad
Total dose test 368
Total flight-operations head count 280
TPM See Technical performance measures
Trace analysis 396
Traceability 120, 127-128, 134, 135, 136, 138, 143, 193-195, 197-198, 378-379, 431, 661
Tracing requirements 143, 193-195, 197-198
Tracking Data Relay Satellite 730
Trade studies 69, 79, 96, 98, 119, 134, 141, 146, 181, 265, 300, 311, 313, 319, 764, 779, 781
Trade trees 84-85, 757-759
Trades, Trade-offs 82, 116, 121, 136, 177, 203, 207, 211
Traffic light reporting 597
Training 484, 495, 634-636
Transit environment 493
Transition See Product transition
Transition to normal operations (L&EO) 504
Transportation 488-491, 635
TRL See Technology readiness levels
TRR See Test readiness review
TT&C See Telemetry, tracking, and command system
TUF See Test uncertainty factors
TV-1 See Technical views
TVAC See Thermal and vacuum testing
TVR See Test and verification requirements
Type III error 203

U

Uncertainty 206-207, 294-295, 305-306, 308, 310, 313, 318-319
Unforeseen problems 354, 358, 381
Unified Modeling Language 692
Unknown unknowns 290, 448, 475
User requirements 47
Users See also Stakeholders 146, 151, 159, 164, 167

V

V&V See Verification and validation
VAC See Verification analysis cycles
VALID 391-395, 464, 787
Validation 6, 367, 387-388, 394-395, 402-403, 434-439, 444, 448, 473, 477, 665
Validation matrix See also FireSAT, validation matrix 439, 444
Validation of requirements 129-131, 133-134
    scope 129
Validation plan 400-401, 435, 437, 444
Validation requirements 436, 444
Values 206-207, 217, 218, 220-221
VaM See Validation matrix
Variable costs 259
VCM See Verification compliance matrix
VCN See Verification completion notice
Vehicle processing 280, 282
Verification 6, 188, 194, 197, 339, 341, 345, 347, 355, 365-367, 371-373, 380, 388, 392, 400, 404, 407-414, 418, 421-422, 427-432, 435, 439, 444, 448-449, 452-454, 475, 476, 664-665, 847, 863
    definition 365
Verification analysis cycle 525
Verification and validation See also individual terms 60, 336, 345, 371-372, 385-390, 393, 400, 432, 444-445, 463, 472, 475
    deep space missions 387, 403
    operations 390
    pre-launch 390
    process 388, 475
Verification completion notice 408, 412, 422, 444
Verification compliance matrix See also Compliance matrix 405, 430, 432, 866, 868
Verification methods 412-420, 430, 434, 868
    analysis 120, 126, 387, 394, 397, 412, 414-420, 424
    demonstration 120, 126, 412, 414-415, 417, 419
    inspection 120, 126, 395, 412, 414, 417, 419-420, 439
    simulation and modeling 120, 404, 412
    test 120, 126, 395, 415, 417-420
Verification of requirements 96, 118, 120, 125-126, 131, 138
Verification plan See also Master verification plan 405, 431
Verification program 400, 407-408, 411
Verification report 422, 431, 867
Verification requirements 404, 408, 411-414, 420, 847-848, 850-851
Verification requirements matrix See also FireSAT, Verification requirements matrix 406, 413-414, 848
Verification storyboard 401
Viability analysis 465
Vibration and shock testing 424
Vibration test 370
Vision Spaceport Strategic Planning Tool 267
VOC See Voice of the Customer
Voice of the Customer 45, 50-52, 55
Voyager 574
Vozdukh ESD 310
VR See Verification requirements
VRM See Verification requirements matrix

W

Waivers 333, 334, 338, 574, 579, 582, 653, 660-661
Waterfall model 355-356, 460-462
WBS See Work breakdown structure
Wear-out failures 314
Wide Field InfraRed Explorer 387, 438
Wiener process 308
WIRE See Wide Field InfraRed Explorer
Work breakdown structure 21, 233-239, 245, 255, 288, 353, 529, 531, 533, 543, 556, 621, 792
Work package 539, 541, 543-545, 556
Wraps 236, 271, 282
Wright curves 249

X

XYZ factor 252

Y

Year of technology freeze 252

Z

Zachman Framework 88-90