Paris Avgeriou • John Grundy • Jon G. Hall
Editors

Relating Software Requirements and Architectures
Editors

Paris Avgeriou
Department of Mathematics and Computing Science
University of Groningen
9747 AG Groningen
The Netherlands
[email protected]

John Grundy
Swinburne University of Technology
Hawthorn, VIC 3122
Australia
[email protected]

Jon G. Hall
Open University
Milton Keynes MK7 6AA
United Kingdom
[email protected]

Patricia Lago
Vrije Universiteit
Dept. Computer Science
De Boelelaan 1081 A
1081 HV Amsterdam
Netherlands
[email protected]

Ivan Mistrík
Werderstr. 45
69120 Heidelberg
Germany
[email protected]
Why have a book about the relation between requirements and architecture? Requirements provide the function for a system and the architecture provides the form; after all, to quote Louis Sullivan, “form follows function.” It turns out not to be so simple. When Louis Sullivan was talking about function, he was referring to the flow of people in a building or the properties of the spaces on the various floors. These are what we in the software engineering field would call quality attributes, not functionality. Understanding the relation between requirements and architecture is important because the requirements, whether explicit or implicit, do represent the function and the architecture does determine the form. If the two are disconnected, then there is a fundamental problem with the system being constructed.
The figure below gives some indication of why the problem of relating requirements and architecture is a difficult one (Fig. 1).
• There is a collection of stakeholders, all of whom have their own agendas and interests. Reconciling the stakeholders is a difficult problem.
• The set of requirements is shown as a cloud because some requirements are explicitly considered in a requirements specification document and other requirements, equally or more important, remain implicit. Determining all of the requirements pertaining to a system is difficult.
• The double-headed arrow between the requirements and the architecture reflects the fact that requirements are constrained by what is possible or what is easy. Not all requirements can be realized within the time and budget allotted for a project.
• The feedback from the architecture to the stakeholders means that stakeholders will be affected by what they see from various versions of the system being constructed and this will result in changes to the requirements. Accommodating those changes complicates any development process.
• The environment in which the system is being developed will change. Technology changes, the legal environment changes, the social environment changes, and the competitive environment changes. The project team must decide the extent to which the system will be able to accommodate changes in the environment.
Fig. 1 Stakeholders help specify the requirements (explicit or implicit) and use/construct systems derived from them; the requirements and the architecture (application, middleware, platform) design/constrain one another.
The authors in this book address the issues mentioned as well as some additional ones. But, as is characteristic of any rich problem area, they are not definitive. Some thoughts about areas that need further research are:
• Why not ask requirements engineers and architects what they see as the largest problems? This is not to say that the opinions expressed by the architects or requirements engineers would be more than an enumeration of symptoms for some deeper problem, but the symptoms will provide a yardstick to measure research.
• What is the impact of scale? Architecture is most useful for large systems. To what extent are techniques for bridging the requirements/architecture gap dependent on scale? This leads to the question of how to validate any assertions. Assertions about software engineering methods or techniques are very difficult to validate because of the scale of the systems being constructed. One measure is whether a method or technique has been used by other than its author. This, at least, indicates that the method or technique is transferable and has some “face” validity.
• What aspects of the requirements/architecture gap are best spanned by tools and which by humans? Clearly tools are needed to manage the volume of requirements and to provide traceability, and humans are needed to perform design work and elicit requirements from the stakeholders. But to what extent can tools support the human elements of the construction process that have to do with turning requirements into designs?
• Under what circumstances should the design/constrain arrow above point to the right and under what circumstances should it point to the left? Intuitively both directions have appeal, but we ought to be able to make more precise statements about when one does synthesis to generate a design and when one does analysis to determine the constraints imposed by the architectural decisions already made.
These are just some thoughts about the general problem. The chapters in this book provide much more detailed thoughts. The field is wide open; enjoy reading this volume, and I encourage you to help contribute to bridging the requirements/architecture gap.
Len Bass
Software Engineering Institute
Pittsburgh, PA, USA
Foreword
It must be the unique nature of software that necessitates the publication of a book
such as this – in what other engineering discipline would it be necessary to argue for
the intertwining of problem and solution, of requirements and architectures?
The descriptive nature of software may be one reason – software in its most basic
form is a collection of descriptions, be they descriptions of computation or descrip-
tions of the problems that the computation solves. After many years of focusing on
software programs and their specifications, software engineering research began
to untangle these descriptions, for example separating what a customer wants from
how the software engineer delivers it. This resulted in at least two disciplines of
description – requirements engineering and software architecture – each with their
own representations, processes, and development tools. Recent years have seen
researchers re-visit these two disciplines with a view to better understand the rich
and complex relationships between them, and between the artifacts that they
generate. Many of the contributions in this book address these relationships reflectively and practically, offering researchers and practicing software engineers tools to
enrich and support software development in many ways: from the traditional way in
which requirements – allegedly – precede design, to the pragmatic way in which
existing solutions constrain what requirements can – cost-effectively – be met. And,
of course, there is the messy world in between, where the customer needs change,
where technology changes, and where knowledge about these evolves.
Amid these developments in software engineering, the interpretation of software
has also widened – software is rarely regarded as simply a description that executes
on a computer. Software permeates a wide variety of technical, socio-technical, and
social systems, and while its role has never been more important, it no longer serves
to solve the precise pre-defined problems that it once did. The problems that
software solves often depend on what existing technology can offer, and this
same technology can determine what problems can or should be solved. Indeed,
existing technology may offer opportunities for solving problems that users never
envisaged.
It is in this volatile and yet exciting context that this book seeks to make a
particularly novel contribution. An understanding of the relationships between the
problem world – populated by people and their needs – and the solution world –
populated by systems and technology – is a pre-requisite for effective software
engineering, and, more importantly, for delivering value to users.
The chapters in this book rightly focus on two sides of the equation – require-
ments and architectures – and the relationships between them. Requirements
embody a range of problem world artifacts and considerations, from stakeholder
goals to precise descriptions of the world that stakeholders seek to change. Simi-
larly, architectures denote solutions – from the small executable program to the full
structure and behavior of a software-intensive product line.
The ability to recognize, represent, analyze, and maintain the relationships
between these worlds is, in many ways, the primary focus of this book. The editors
have done an excellent job in assembling a range of contributions, rich in
the breadth of their coverage of the research area, yet deep in addressing some of
the fundamental research issues raised by ‘real’ world applications. And it is in this
consideration of applications that their book differs from other research contri-
butions in the area. The reason that relating requirements and architectures is an
important research problem is that it is a problem that has its origins in the
application world: requirements can rarely be expressed correctly or completely
at the first attempt, and technical solutions play a large part in helping to articulate
problems and to add value where value was hard to pre-determine. It is this ‘messy’
and changing real world that necessitates research that helps software engineers to
deliver systems that satisfy, delight and even (pleasantly) surprise their customers.
I am confident that, in this book, the editors have delivered a scholarly contribution
that will evoke the same feelings of satisfaction, delight and surprise in its readers.
Preface

This book brings together representative views of recent research and practice in
the area of relating software requirements and software architectures. We believe
that all practicing requirements engineers and software architects, all researchers
advancing our understanding and support for the relationship between software
requirements and software architectures, and all students wishing to gain a deeper
appreciation of underpinning theories, issues and practices within this domain will
benefit from this book.
Introduction
tangentially impacting on the other. Some of the contributions in this book describe
processes, methods and techniques for representing, eliciting or discovering these
relationships. Some describe techniques and tool support for the management of
these relationships through the software lifecycle. Other contributions describe case
studies of managing and using such relationships in practice. Still others identify
state of the art approaches and propose novel solutions to currently difficult or even
intractable problems.
Book Overview
We have divided this book into four parts, with a general editorial chapter providing
a more detailed review of the domain of software engineering and the place of
relating software requirements and architecture. We received a large number of
submissions in response to our call for papers and invitations for this edited
book from many leading research groups and well-known practitioners of leading
collaborative software engineering techniques. After a rigorous review process, 15 submissions were accepted for this publication. We begin with a review of the history and concept of software engineering itself, including a brief review of the discipline’s genesis and key fundamental challenges, and we define the main issues in relating these two areas of software engineering.
Part I contains three chapters addressing the issue of requirements change
management in architectural design through traceability and reasoning. Part II
contains five chapters presenting approaches, tools and techniques for bridging
the gap between software requirements and architecture. Part III contains four
chapters presenting industrial case studies and artefact management in software
engineering. Part IV contains three chapters addressing various issues such as
synthesizing architecture from requirements, the relationship between software architecture and system requirements, and the role of middleware in architecting for
non-functional requirements. We finish with a conclusions chapter identifying
key contributions and outstanding areas for future research and improvement of
practice. In the sections below we briefly outline the contributions in each part of
this book.
The three chapters in this section identify a range of themes around requirements engineering. Collectively they build a theoretical framework that will assist readers to understand requirements, architecture, and the diverse range of relationships between them. They address key themes in the domain including change management, ontological reasoning, and tracing between requirements elements and architecture elements, at multiple levels of detail.
The five chapters in this section identify a range of themes around tools and
techniques. Good tool support is essential to managing complex requirements and
architectures on large projects. These tools must also be underpinned by techniques
that provide demonstrable added value to developers. Techniques used range from goal-directed inference, uncertainty management, problem frames, and service compositions to quality attribute refinement from business goals.
Chapter 7 presents a goal-oriented software architecting approach, where func-
tional requirements (FRs) and non-functional requirements (NFRs) are treated as
goals to be achieved. These goals are then refined and used to explore achievement
alternatives. The chosen alternatives and the goal model are then used to derive a
concrete architecture by applying an architectural style and architectural patterns
chosen based on the NFRs. The approach has been applied in an empirical study
based on the well-known 1992 London ambulance dispatch system.
Chapter 8 describes a commitment uncertainty approach in which linguistic and
domain-specific indicators are used to prompt for the documentation of perceived
uncertainty. The authors provide structure and advice on the development process
so that engineers have a clear concept of progress that can be made to reduce technical risk. A key contribution is in the evaluation of the technique in the engine
control domain. They show that the technique is able to suggest valid design
approaches and that supported flexibility does accommodate subsequent changes
to requirements. The authors’ aim is not to replace the process of creating a suitable
architecture but to provide a framework that emphasizes constructive design
actions.
Chapter 9 presents a method to systematically derive software architectures from
problem descriptions. The problem descriptions are set up using Jackson’s problem
frame approach. They include a context diagram describing the overall problem
situation and a set of problem diagrams that describe sub-problems of the overall
software development problem. The different sub-problems are instances of problem frames, which are patterns for simple software development problems. Beginning from these pattern-based problem definitions, the authors derive a
software architecture in three steps: an initial architecture contains one component
for each sub-problem; they then apply different architectural and design patterns
and introduce coordinator and facade components; and finally the components of
the intermediate architecture are re-arranged to form a layered architecture and
interface and driver components added. All artefacts are expressed using UML
diagrams with specifically defined UML profiles. Their tool supports checking of
different semantic integrity conditions concerning the coherence of different dia-
grams. The authors illustrate the method by deriving an architecture for an auto-
mated teller machine.
Chapter 10 proposes a solution to the problem of having a clear link between
actual applications – also referred to as service compositions – and requirements the
applications are supposed to meet. Their technique also stipulates that captured
requirements must properly state how an application can evolve and adapt at run-
time. The solution proposed in this chapter is to extend classical goal models to provide an innovative means to represent both conventional (functional and non-functional) requirements and dynamic adaptation policies. To increase support for dynamism, the proposal distinguishes between crisp goals, whose satisfiability is boolean, and fuzzy goals, which can be satisfied to different degrees.
Adaptation goals are used to render adaptation policies. The information provided
in the goal model is then used to automatically devise the application’s architecture
(i.e., composition) and its adaptation capabilities. The goal model becomes a live,
runtime entity whose evolution helps govern the actual adaptation of the applica-
tion. The key elements of this approach are demonstrated by using a service-based
news provider as an exemplar application.
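As a rough illustration of the crisp/fuzzy distinction, the following sketch (our own construction; the goal names and membership functions are invented and are not the chapter's notation) scores a crisp goal as 0 or 1 and a fuzzy goal as a degree in [0, 1]:

from dataclasses import dataclass
from typing import Callable

@dataclass
class CrispGoal:
    name: str
    predicate: Callable[[float], bool]   # satisfied or not

    def satisfaction(self, value: float) -> float:
        return 1.0 if self.predicate(value) else 0.0

@dataclass
class FuzzyGoal:
    name: str
    membership: Callable[[float], float]  # degree of satisfaction

    def satisfaction(self, value: float) -> float:
        return max(0.0, min(1.0, self.membership(value)))

# News-provider flavoured examples: responding within 2 s is crisp,
# while content "freshness" degrades gradually with article age.
respond_fast = CrispGoal("respond within 2 s", lambda t: t < 2.0)
fresh_content = FuzzyGoal("fresh content", lambda age_h: 1.0 - age_h / 24.0)

print(respond_fast.satisfaction(1.4))    # 1.0: satisfied
print(fresh_content.satisfaction(6.0))   # 0.75: satisfied to degree 0.75

An adaptation policy can then, for instance, trigger recomposition when the aggregate degree of satisfaction falls below some threshold.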
Chapter 11 presents a set of canonical business goals for organizations that can
be used to elicit domain-specific business goals from various stakeholders. These
business goals, once elicited, are used to derive quality attribute requirements for a
software system. The results are expressed in a common syntax that presents the
goal, the stakeholders for whom the goal applies, and the “pedigree” of the goal.
The authors present a body of knowledge about business goals and then discuss several possible engagement methods that use this knowledge to elicit business goals and their relation to software architectural requirements.
The four chapters in this section present several industrial case studies of relating
software requirements and architecture. These range from developing next-generation consumer electronics devices with embedded software controllers, IT security
software, a travel booking system, and various large corporate systems.
Chapter 13 addresses the problems in designing consumer electronics (CE)
products where architectural description is required from an early stage in develop-
ment. The creation of this description is hampered by the lack of consensus on high-
level architectural concepts for the CE domain and the rate at which novel features
are added to products. This means that old descriptions cannot simply be reused.
This chapter describes both the development of a reference architecture that
addresses these problems and the process by which the requirements and architec-
ture are refined together. The reference architecture is independent of specific
functionality and is designed to be readily adopted. The architecture is informed
by information mined from previous developments and organized to be reusable in
different contexts. The integrity between the roles of requirements engineer and
architect, mediated through the reference architecture, is described and illustrated
with an example of integrating a new feature into a mobile phone.
Chapter 14 presents a view-based, model-driven approach for ensuring compliance with ICT security requirements in a business process of a large European company.
Compliance in service-architectures means complying with laws and regulations
applying to distributed software systems. The research question of this chapter is to
investigate whether the authors’ model-driven, view-based approach is appropriate
in the context of this domain. This example domain can easily be generalized to
many other problems of requirements that are hard to specify formally, such as com-
pliance requirements, in other business domains. To this end, the authors present
lessons learned as well as metrics for measuring the achieved degree of separation of
concerns and reduced complexity via their approach.
Chapter 15 proposes an approach to artifact management in software engineer-
ing that uses an artifact matrix to structure the artifact space of a project along
stakeholder viewpoints and realization levels. This matrix structure provides a basis
on top of which relationships between artifacts, such as consistency constraints,
traceability links and model transformations, can be defined. The management of
all project artifacts and their relationships supports collaboration across different
roles in the development process, change management and agile development
approaches. The authors’ approach is configurable to facilitate adaptation to differ-
ent development methods and processes. It provides a basis to develop and/or to
integrate generic tools that can flexibly support different methods. In particular,
it can be leveraged to improve the transition from requirements analysis to architectural design.
The three chapters in this section address some emerging issues in the domain of
relating software requirements and architecture. These issues include approaches to
synthesizing candidate architectures from formal requirements descriptions, explicit,
bi-directional constraint of requirements and architectural elements, and middleware-based, economically informed definition of requirements.
Chapter 18 studies the generation of candidate software architectures from
requirements using genetic algorithms. Architectural styles and patterns are used
as mutations to an initial architecture and several architectural heuristics as fitness
tests. The input for the genetic algorithm is a rudimentary architecture representing
a very basic functional decomposition of the system, obtained as a refinement from
use cases. This is augmented with specific modifiability requirements in the form of
allowable change scenarios. Using a fitness function tuned for desired weights of
simplicity, efficiency and modifiability, the technique produces a set of candidate
architectural styles and patterns that satisfy the requirements. The quality of the
produced architectures has been studied empirically by comparing generated archi-
tectures with ones produced by undergraduate students.
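To make the weighted-fitness idea concrete, here is a minimal, hypothetical sketch; the metric encoding and mutation operator are invented stand-ins, whereas the chapter's real candidates are architecture models and its mutations apply named styles and patterns:

import random

def fitness(arch, weights=(1.0, 1.0, 1.0)):
    # Weighted sum of the three heuristics named above; higher is fitter.
    ws, we, wm = weights
    return (ws * arch["simplicity"] + we * arch["efficiency"]
            + wm * arch["modifiability"])

def mutate(arch):
    # Stand-in for applying an architectural style or pattern: in the real
    # technique a mutation rewrites the architecture, changing its scores.
    child = dict(arch)
    key = random.choice(list(child))
    child[key] = max(0.0, child[key] + random.uniform(-0.1, 0.1))
    return child

def evolve(population, generations=100, weights=(1.0, 1.0, 1.0)):
    # Generate offspring by mutation and keep the fittest candidates.
    for _ in range(generations):
        offspring = [mutate(random.choice(population)) for _ in population]
        population = sorted(population + offspring,
                            key=lambda a: fitness(a, weights),
                            reverse=True)[:len(population)]
    return population

seed = [{"simplicity": 0.5, "efficiency": 0.5, "modifiability": 0.5}
        for _ in range(10)]
best = evolve(seed, weights=(1.0, 0.5, 2.0))[0]  # favour modifiability

Tuning the weights steers the search, which is how a fitness function of this kind expresses the desired balance of simplicity, efficiency and modifiability.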
In Chap. 19 the authors claim that the relationship of a system’s requirements
and its architectural design is not a simple one. Previous thought has been that the
requirements drive the architecture and the architecture is designed in order to meet
requirements. In contrast, their experience is that a much more dynamic relation-
ship needs to be achieved between these key activities within the system design
lifecycle. This would then allow the architecture to constrain the requirements to an
achievable set of possibilities, frame the requirements by making their implications
on architecture and design clearer, and inspire new requirements from the capabil-
ities of the system’s architecture. The authors describe this rich interrelationship;
illustrate it with a case study drawn from their experience; and present some lessons
learned that they believe will be valuable for other software architects.
We conclude this book with a chapter drawing conclusions from the preceding
15 chapters. We note that many approaches adopt a goal-oriented paradigm for bridging
the gap between requirements and architecture. Similarly, many techniques and
tools attempt to address the problem of traceability between requirements elements
and architectural decisions. The advance of reference architectures, patterns, and
successful requirements models in many domains has assisted the development of
complex requirements and architectures.
We observe, as have some of the authors of chapters in this book, that the
waterfall software process heritage of “requirements followed by architecture”
has probably always been something of a fallacy, and is probably more so with today’s
complex and evolving heterogeneous software systems. Instead, viewing software
requirements and software architecture as different viewpoints on the same problem
may be a more useful future direction. Appropriate representation of architectural
requirements and designs is still an area of emerging research and practice. This
includes decisions, knowledge, abstractions, evolution and implications, not only
technical but economic and social as well. To this end, some chapters and approaches
touch on the alignment of business processes and economic drivers with technical
requirements and architectural design decisions. While enterprise systems engineer-
ing has been an area of active practice and research for 30+ years, the relationship
between business drivers and technical solutions is still ripe for further enhancement.
The editors would like to sincerely thank the many authors who contributed their
works to this collection. The international team of anonymous reviewers gave
detailed feedback on early versions of chapters and helped us to improve both the
presentation and accessibility of the work. Finally we would like to thank the
Springer management and editorial teams for the opportunity to produce this unique
collection of articles covering the wide range of issues in the domain of relating
software requirements and architecture.
Contributors

Christine Choppy, Université Paris 13, LIPN, CNRS UMR 7030, Villetaneuse 93430, France, [email protected]

Jon G. Hall, Open University, Milton Keynes MK7 6AA, United Kingdom, [email protected]

Bashar Nuseibeh, Lero – the Irish Software Engineering Research Centre, University of Limerick, Limerick, Ireland, and The Open University (UK), [email protected]
1 Introduction: Relating Requirements and Architectures

This book describes current understanding and use of the relationship between
software requirements and software architectures.
Requirements and architectures have risen to be preeminent as the basis of
modern software; their relationship one to the other is the basis of the modern software development process.
Their case was not always clear cut, and neither was their preeminence
guaranteed. Indeed, many tools and techniques have held the spotlight in software
development at various times, and we will discuss some of them in this chapter.
Now, however, it is clear that requirements and architectures are more than just
software development fashion: used together they have helped software developers
build the largest, most complex and flexible systems that exist today. They are
trusted, in the right hands, as the basis of efficiency, effectiveness and value
creation in most industries, in business and in the public sector. Many fast-moving
areas, such as manufacturing, chemical and electronic engineering, and finance owe
much of their success to their use of software!
It has been a virtuous spiral with the relationship between requirements and
architectures driving progress.
In the light of this, this chapter reflects on two strands to which the relationship
in question has made a defining contribution.
The first strand considers software developers’ management of growing code
complexity, a topic that led to requirements and architectures, as well as
motivating the need to understand their relationship. Specifically, we look at
systems whose context of operation is well-known, bounded and stable. Examples
are taken from the familiar class of embedded systems – for instance, smart phone
operating systems, mechanical and chemical plant controllers, and cockpit and
engine management systems. As will be clear, embedded systems are by no
means unsophisticated, nor are the needs that such systems should fulfill simple:
some, such as air traffic control, contain complex hardware and exist within
complex socio-technical settings. Indeed, embedded system complexity has
grown with the confidence of the software developer in the complexity of the
problem that they can solve. For the purposes of this chapter, however, their
defining characteristic is that the context of the system is more or less stable so that
change that necessitates redevelopment is the exception rather than the norm. The
relationship between requirements and architectures for such systems is well-
known – it is recounted in Sect. 1.1 – and its uses, efficiency and effectiveness are the topic of much fruitful research reported elsewhere in this book.
The second strand deals with systems for which a changing context is the norm,
and that are enabled by the flexibility – some might say the agility – of the software
development process that links requirements and architectures. It describes how
developers work within a volatile context – the domain of application – and with the
volatile requirements to be addressed within that domain.1 Software systems typical
of this class are those which underpin business processes in support of the enterprise
(whence our examples), with problem volatility being simply the volatility of the
business context and its needs. The relationship between requirements and
architectures in the face of volatility is less well explored and we look at the source
of some deficiencies in Sect. 1.2. New thinking, techniques and tools for improving the efficiency and effectiveness of the treatment of volatility are the topic of other leading-edge research reported elsewhere in this book.
Complexity and volatility are now known to be key developmental risk
indicators for software development (see, for instance, [1]). And it is the successful
treatment of complexity and volatility that places software requirements and soft-
ware architectures as preeminent in linking problem and solution domains: as they
are the key in the management of developmental risk, so most mainstream current
practice is based on their relationship.
This chapter is organised as follows: after this introduction, we consider the rise
of complexity in systems, providing an historic perspective that explains the current
importance of the relationship between requirements and architectures. We con-
sider next the rise of volatility in the light of the widely held critique of software
project failure. The last section makes two observations, and from them motivates
the continuing and future importance of the relationship between software
requirements and architectures.
The themes of this chapter are the developmental management of complexity and of
volatility. Figure 1.1 (with Table 1.1) classifies various example systems according
to their characteristics. In the figure, software complexity (measured by the proxy of
lines of code2) is related to context volatility.
1 We note that context and requirements may change independently; as they define the problem that should be solved, we refer to problem volatility as indicative of these changes.
2 The precise details of this relationship between complexity and lines of code have been subject to many debates over the years; we do not replay them here.
Fig. 1.1 Relating requirements and architectures: the rise of complexity and volatility
The line labelled ‘System limit’ in the figure provides a bound based on the
observed laws on the growth of computing power. The most relevant (and most
widely known) is Moore’s observation that, essentially, computing power doubles
every 2 years [2].
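Read as a rough growth law (one conventional rendering of the observation, not a formula from this chapter), the computing power P available t years after a baseline t0 is

P(t) = P(t0) · 2^((t − t0)/2)

so that, for example, a decade compounds to a factor of 2^5 = 32.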
Also overlaid on the figure are various ‘Developmental structuring mechanisms.’
These are referred to in the text, with the figure serving to delimit the approximate
scope of their applicability.
Manchester University housed the first stored program computer, the Small-Scale
Experimental Machine (SSEM in Fig. 1.1) – aka Baby – which ran its first program
on 21 June 1948. Baby’s first run was on a 17-line program3 that calculated:
[...] the highest factor of an integer. We selected this problem to program because it used all seven instructions [in the SSEM instruction set]. It was of no mathematical significance, but everybody understood it.
Tom Kilburn [4]

3 See [3] for an intriguing reconstruction of that first program, together with a transcript of the notes accompanying its design.
This trivial problem was, arguably, the beginning of the modern information
age. At that time, a simple software process was sufficient: the program above was
simply crafted by inspection of the problem and debugged until it was correct.
As higher problem complexity was encountered, code complexity increased.
One response to the need to manage higher code complexity was to provide richer
and richer structures within the programming languages in which to implement it.
High-level programming languages were developed to provide greater abstraction,
often with concepts from the problem domain allowing more complex code to
service more complex problems. There was a steady trend in sophistication leading
to the ‘fifth-generation languages’ (5GLs) and ‘domain-specific languages.’
However, in 1992, Perry and Wolf [5] brought ‘elements,’ ‘forms’ and ‘rationale’ together and so placed the new topic of ‘software architecture’ at the heart of code design. For, with them, Perry and Wolf captured the repeated code and
thought structures used to code and to justify code. Shaw and Garlan’s book [6]
laid the groundwork for the development of software architecture as a discipline.
From our vantage point and by their lasting popularity and utility, architectures
(in their various guises) are the correct abstraction for software: they are program-
ming language independent and serve to structure large systems4 [7].
With architectures came a growing confidence in the extent to which complexity
could be handled and this led to a need for techniques and tools for managing
complex (if stable) problem domains.
Even though the term software engineering had been defined in 1968, coincident
with the rise of architectures was a growing realisation of the importance of an
engineering approach to software. The traditional engineering disciplines had
already been a rich source of ideas. The successful construction and manufacturing
process model has, for instance, inspired the early software process model, today known as the Waterfall model, as illustrated in Fig. 1.2.
With architecture at heart of program design, software requirements began to be
explored through software requirements engineering (for instance, [9]) which
managed problem complexity through phases of elicitation, modelling, analysis,
validation and verification. As required by the waterfall, software requirements
engineering led from system requirements to software requirements specifications
on which architectural analysis could be performed.
Things did not go smoothly, however; already in 1970, Royce [8] had suggested an amended model based on his extensive experience of the development of embedded software ‘[...] for spacecraft mission planning, commanding and post-flight analysis’, which had found it wanting. In particular, Royce saw the need to iterate between software requirements and program design (and between program design and testing): for software, the early stages of development could not always be completed before commencement of the later stages.

4 Their rise has also been coincident with a waning in the popularity of high-level programming languages; always wary of cum hoc ergo propter hoc.

Fig. 1.2 The traditional ‘Waterfall Model’ for software: a successful development process from construction and manufacturing (Adapted from [8])
It was Royce’s enhanced model, illustrated in Fig. 1.3, that inspired a generation
of software processes. And it is this model, and its many derivatives, that characterise
the massively successful relationship between requirements and architectures, at least
for embedded systems.
The waterfall model made one other assumption that is not true of software, which
is that a stable problem exists from which to begin development. It is the lifting of
this assumption that we explore in this section.
Of course, there were many early business uses of computers: a very early one
started in 1951 when the Lyons5 Electronic Office (LEO, [10]) was used for order
control and payroll administration. These early applications typically involved
scheduling, reporting and number crunching and, although they provided many technical challenges to be solved that were complex for that time, what amounts to first-mover advantage [11] meant Lyons enjoyed a stable environment in which to explore their business use of computers.

5 J. Lyons and Co., a UK food manufacturer in the mid 20th century.

Fig. 1.3 Royce’s updated waterfall for embedded system development (Adapted from [8])
The wish for more sophisticated business computing use meant developers
dealing with the volatility of the business world. Any connection between the
informal organisational world and the (essentially) formal world of the computer
[12] must link two volatile targets: the world of enterprise and the world of
technology. As the move from real-world to formal world is unavoidable [12],
development must both initially cross and, as the needs exist in the real-world,
subsequently recross the boundary as development continues.
Due to volatility, the resulting relationship between requirements and
architectures is an order of magnitude more complex than for stable systems, and the treatment of it offered by traditional techniques, such as those derived from Royce’s processes, is increasingly seen as inadequate. Indeed, software develop-
ment in the face of such volatility is now recognised as one of the greatest
challenges it faces.
Unfortunately, only the magnitude of the challenge is understood fully: a recently published study of 214 large IT projects6 [13] concluded that 23.8%
were cancelled before delivery, with 42% (of those that ran to completion)
overrunning in time and/or cost. According to the study, the total cost across the
European Union of Information Systems failure was estimated to be €142 billion
in 2004.
With the best minds on the problem of developing software in volatile contexts,
exploration is in progress; indeed, a sizeable proportion of this book describes state
of the art thinking for this problem.
It is, of course, possible to imagine that the changes that are wrought to software
processes in response to volatile contexts will make redundant both requirements
and architectures, and so knowledge of their relationship. It may, for instance, be
that the need to treat volatility will lead back to high-level language development.
There are already some small trends that lead away from requirements and
architectures to self-structuring or multi-agent systems that, for instance, learn
what is needed and then supply it autonomously. Who knows from whence the
breakthrough will come!
There are, however, a number of observations of the systems we have considered
in this chapter that lead us, with some certainty, to the conclusion that requirements
and architectures will remain topics of importance for many, many years, and that
their relationship will remain the basis of mainstream, especially organisational,
software development.
Firstly, even if change is the norm, the scope of change in organisations is not the
whole organisational system. For instance, there are general organising principles –
the need for users to access a system, the need for the business processes conducted
by the organisation to be represented, the need for organisational data to be
captured and stored – which will remain for the foreseeable future unless organisations change from their current form beyond recognition. As such, their representation in software will always be amenable to an architectural treatment much as today.
Secondly, most organisations lack embedded software expertise sufficient to
meet their own recognised software needs so that software development will,
typically, always be outsourced to specialist development organisations. As the
primary or driving need exists within the organisation, there is no alternative to accessing the stakeholders within the commissioning organisation to understand their needs and develop the software-intensive solution. As such, the need to capture and process requirements between organisations will always be necessary.
One happy conclusion is that the relationship between requirements and
architectures is here to stay, and that the work of this book and that like it will
grow in relevance!
Acknowledgments The first author wishes to thank Lucia Rapanotti for her careful reading and
detailed critique of this chapter.
2 Theoretical Underpinnings and Reviews

Requirements are fundamental to any engineered system. They capture the key
stakeholder functional needs, constraints on the operation of the system, and often
form a basis for contracting, testing and acceptance [1, 2]. Architecture captures the
structuring of software solutions, incorporating not just functional properties of a
system but design rationale, multi-layer abstractions and architectural knowledge
[3, 4]. One cannot exist without the other. Requirements need to be realized in a
software system, described in essence by appropriate software architectures. Archi-
tecture must deliver on specified functional and non-functional requirements in
order for the software system to be at all useful.
A major challenge to requirements engineers and architects has been keeping
requirements and architecture consistent under change [5, 6]. Traditionally when
requirements change, architecture (and implementation derived from architecture)
must be updated. Likewise changes to the system architecture result in refining or
constraining the current requirements set. More recently, however, the bi-directional interaction between requirements and architecture changes; new software development processes such as agile, open source and outsourcing; new paradigms such as service-orientation and self-organizing systems; and the need to constrain requirements to what is feasible with, for example, rapidly emerging hardware technologies such as smartphones and embedded sensor devices have complicated this change management process.
A further challenge to developers has been more effective traceability and
knowledge management in software architecture [7–9]. It has become recognized
that architectural knowledge management, including relationships to requirements,
is very complicated and challenging, particularly in large organizations and in
single-organisation multi-site, outsourcing and open source projects. Traceability
between documentation and source code has been a research topic for many years.
Tracing between requirements, architecture and code has become a key area to
advance management of the relationship between requirements and architecture.
Unfortunately recovering high value traceability links is not at all straightforward.
Different stakeholders usually require different support and management for both
traceability support and knowledge management.
A major research and practice trend has been to recognize that requirements and architecture co-evolve in most systems. To this end, not only does traceability need to be supported, but also the recovery of architectures and requirements from legacy systems [10]. Separating architectural concerns and knowledge has been implicitly
practiced for many years in these activities, needing to be supported by more
explicit approaches [11].
The three chapters in this part of the book identify a range of fundamental
themes around requirements engineering and software architecture. They help to build a theoretical framework that will assist readers to understand software requirements engineering, software architecture, and the diverse range of relationships between the two. They address key themes across the domain of this
book including the need for appropriate change management processes, supporting
ontological reasoning about the meaning of requirements and architectural ele-
ments, and the need to be able to trace between requirements elements and archi-
tecture elements, at varying levels of detail.
Chapter 3, co-authored by Soo Ling Lim and Anthony Finkelstein, proposes
Change-oriented Requirements Engineering (CoRE). This is a new method to anti-
cipate change by separating requirements into layers that change at relatively
different rates. From the most stable to the most volatile, the authors identify
several layers that need to be accommodated. Some of these layers include
patterns, functional constraints, and business policies and rules. CoRE is empiri-
cally evaluated by applying it to a large-scale software system and then studying the
observed requirements changes from development to maintenance. The results of
this evaluation show that their approach accurately anticipates the relative volatility
of the requirements. It can thus help developers to manage both requirements evolution and the architectural changes that derive from it.
Chapter 4, co-authored by Antony Tang, Peng Liang, Viktor Clerc and Hans van
Vliet, introduces a general-purpose ontology to address the problem of co-evolving
requirements and architecture descriptions of a software system. The authors
developed this ontology to address the issue of knowledge management in complex
software architecture domains, including supporting requirements capture and
relating requirements elements and architectural abstractions. They demonstrate
an implementation of a semantic wiki that supports traceability between elements in a co-evolving requirements specification and a corresponding architecture design.
They demonstrate their approach using a reuse scenario and a requirements change
scenario.
Chapter 5, co-authored by Inah Omoronyia, Guttorm Sindre, Stefan Biffl and
Tor Stålhane, investigates homogeneous and heterogeneous requirements traceabil-
ity networks. The authors synthesize these networks from a combination of event-
based traceability and call graphs. Both of these traces are harvested during the
running of a software project using appropriate tool support. These traceability
networks are used to help understand some of the architectural styles being observed
during work on a software project. The authors demonstrate the utility of their
traceability networks to monitor initial system decisions and identify bottlenecks in
an example project.
3 Anticipating Change in Requirements Engineering

3.1 Introduction
Without the notion of future changes, the documentation mixes stable and volatile requirements, as the section on existing methods will show. Existing change anticipa-
tion approaches are guidelines that rely on domain experts and experienced
requirements engineers [29], who may be absent in some projects. Finally, existing
literature is almost entirely qualitative: there is no empirical study on the accuracy
of these guidelines in real projects.
To address these problems, this chapter proposes Change-oriented Requirements
Engineering (CoRE), an expert independent method to anticipate requirements
change. CoRE separates requirements into layers that change at relatively different
rates during requirements documentation. This informs architecture to separate
components that realise volatile requirements from components that realise stable
requirements. By doing so, software design and implementation prepare for change, thus minimising the disruptive effect of changing requirements on the architecture.
CoRE is empirically evaluated on its accuracy in anticipating requirements
change, by first applying it to the access control system project at University
College London, and then studying the number of requirements changes in each
layer and the rate of change over a period of 3.5 years, from development to
maintenance. This study is one of the first empirical studies of requirements change
over a system’s lifecycle. The results show that CoRE accurately anticipates the
relative volatility of the requirements.
The rest of the chapter is organised as follows. Section 3.2 reviews existing
methods in requirements elicitation and change anticipation. Section 3.3 introduces
the idea behind CoRE. Section 3.4 describes CoRE and Sect. 3.5 evaluates it on a real
software project. Section 3.6 discusses the limitations of the study before concluding.
Goal modelling (e.g., KAOS [32] and GBRAM [1]) captures the intent of the system
as goals, which are incrementally refined into a goal-subgoal structure. High-level
goals are, in general, more stable than lower-level ones [33]. Nevertheless, goals at the
same level can have different volatility. For example, the goal “to maintain authorised
access” can be refined into two subgoals: “to verify cardholder access rights” and “to
match cardholder appearance with digital photo.” The second subgoal is more volatile
than the first as it is a capability required only in some access control systems.
CoRE adopts the concept of shearing layers from building architecture. This concept
was created by British architect Frank Duffy who refers to buildings as composed of
several layers of change [5]. The layers, from the most stable to most volatile, are site,
structure, skin, services, space plan, and “stuff” or furniture (Fig. 3.1). For example,
services (the wiring, plumbing, and heating) evolve faster than skin (the exterior
surface), which evolves faster than structure (the foundation). The concept was
elaborated by Brand [5], who observed that buildings that are more adaptable to
change allow the “slippage” of layers, such that faster layers are not obstructed by
slower ones. The concept is simple: designers avoid building furniture into the walls
because they expect tenants to move and change furniture frequently. They also avoid
solving a 5-min problem with a 50-year solution, and vice versa.
The shearing layer concept is based on the observation of ecologists [22] and systems theorists [26] that some processes in nature operate in different timescales.
CoRE separates requirements into four layers of different volatility and cause of
change. From the most stable to the most volatile, the layers are: patterns, func-
tional constraints, non-functional constraints, and business policies and rules
(Fig. 3.2). Knowledge about patterns and functional constraints can help design
and implement the system such that non-functional constraints, business policies
and rules can be changed without affecting the rest of the system.
3.4.1.1 Patterns
pattern illustrated in Fig. 3.3 (a) with functionalities such as making reservations,
adding and finding products. These functionalities have existed long before software
systems and are likely to remain unchanged. Different patterns can be found in
different domains, e.g., patterns in the medical domain revolve around patients,
doctors, patient records [28] and patterns in the business domain revolve around
products, customers, inventory [2]. In the business domain, Arlow and Neustadt [2]
developed a set of patterns which they named enterprise archetypes as the patterns
are universal and pervasive in enterprises1. Their catalogue consists of various business-related patterns, including the ones in Fig. 3.3.

1 From this point on, the word “archetype” is used when referring specifically to the patterns by Arlow and Neustadt [2].
way to confirm that a party is who they say they are [2]. A functional constraint on the
achievement of this goal is that the system must display the digital photo of the
cardholder when the card is scanned, in order to allow security guards to do visual
checks.
CoRE is based on goal modelling methods [8, 35]. To separate requirements into
the shearing layers, CoRE applies five steps to the goals elicited from stakeholders
(Fig. 3.4). The access control system for a university is used as a running example to demonstrate the method.
Step 1: Assign patterns to goals. CoRE starts by checking if there is a pattern for each goal. A pattern can be assigned to a goal if and only if the operation(s) in the pattern is capable of achieving the goal. There are two ways for this to happen. First, the goal is a direct match to a functionality in the pattern. For example, the goal of searching for a person by name can be directly mapped to the functionality to find a person by ID or name in the PartyManager archetype [2]. Second, the goal can be refined into subgoals that form a subset of the operations in the pattern. For example, the goal to manage people information centrally can be refined into subgoals such as to add or delete a person, and to find a person by ID or name. These subgoals are a subset of the operations in the PartyManager archetype. If no patterns can be assigned to the goal, proceed to Step 2 with the goal. Otherwise, proceed to Step 3.
Step 2: Refine goals. This step refines high-level goals into subgoals and repeats Step 1 for each subgoal. To refine a goal, the KAOS goal refinement strategy [8] is used, where a goal is refined if achieving a subgoal, and possibly other subgoals, is among the alternative ways of achieving the goal. For a complete refinement, the subgoals must meet two conditions: (1) they must be distinct and disjoint; and (2) together they must reach the target condition in the parent goal. For example, the goal to control access to university buildings and resources is refined into three subgoals: to maintain up-to-date and accurate person information, assign access rights to staff, students, and visitors, and verify the identity of a person requesting access. If these three subgoals are met, then their parent goal is met.
As the refinement aims towards mapping the subgoals to archetypes, the patterns are used to guide the refinement. For example, the leaf goal2 to maintain up-to-date and accurate person information is partly met by the PartyManager archetype that manages a collection of people. Hence, the leaf goal is refined into two subgoals: to manage people information centrally, and to automate entries and updates of person information. Some goals have no matching patterns however they are refined; for example, the goal to assign access rights to staff, students, and visitors has no matching patterns, as access rights are business specific. In that case, proceed to Step 4.

2 A leaf goal is a goal without subgoals.
Step 3: Identify functional constraints. For each pattern that is assigned to a goal, this step identifies functional constraints on the achievement of the goal. This involves asking users of the system about the tasks they depend on the system to carry out, also known as task dependency in i* [35]. These tasks should be significant enough to warrant attention. For example, one of the security guard’s tasks is to compare the cardholders’ appearance with their digital photos as they scan their cards. This feature constrains acceptable implementations of the PartyAuthentication archetype to those that enable visual checks.
Step 4: Identify business policies and rules. The goals that cannot be further
refined are assigned to business policies and rules. This involves searching for
policies and rules in the organisation that support the achievement of the goal [21].
For example, the goal to assign access rights to staff, students, and visitors is
supported by UCL access policies for these user categories. These policies form
the basis for access rules that specify the buildings and access times for each of
these user categories and their subcategories. For example, undergraduates and
postgraduates have different access rights to university resources.
Step 5: Identify non-functional constraints. The final step involves identifying
non-functional constraints for all the goals. If a goal is annotated with a non-
functional constraint, all its subgoals are also subjected to the same constraint. As
such, to avoid annotating a goal and its subgoal with the same constraint, higher-level
goals are considered first. For example, the access control system takes people data
from other UCL systems, such as the student system and human resource system.
As such, for the goal to maintain up-to-date and accurate person information,
these systems impose data compatibility constraints on the access control system.
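To summarise the flow of Steps 1, 2 and 4, the sketch below casts the layering as a small Python program. It is only an illustration under our own assumptions: the goal texts, pattern catalogue and helper names are invented, CoRE itself is a manual, analyst-driven method, and Steps 3 and 5 would annotate the classified goals with constraints afterwards.

from dataclasses import dataclass, field

@dataclass
class Goal:
    text: str
    subgoals: list = field(default_factory=list)

# Step 1 consults a pattern catalogue (e.g., Arlow and Neustadt's archetypes).
PATTERNS = {
    "manage people information centrally": "PartyManager",
    "verify the identity of a person requesting access": "PartyAuthentication",
}

def classify(goal, layers=None):
    # Step 1: assign a pattern where one achieves the goal; Step 2: otherwise
    # refine and recurse; Step 4: business-specific leaf goals fall through
    # to business policies and rules.
    if layers is None:
        layers = {"patterns": [], "business policies and rules": []}
    if goal.text in PATTERNS:
        layers["patterns"].append((goal.text, PATTERNS[goal.text]))
    elif goal.subgoals:
        for sub in goal.subgoals:
            classify(sub, layers)
    else:
        layers["business policies and rules"].append(goal.text)
    return layers

root = Goal("control access to university buildings and resources", [
    Goal("maintain up-to-date and accurate person information",
         [Goal("manage people information centrally")]),
    Goal("assign access rights to staff, students, and visitors"),
    Goal("verify the identity of a person requesting access"),
])

print(classify(root))
# The pattern-backed goals land in the most stable layer; the business-
# specific access-rights goal falls into the most volatile one.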
The output of the CoRE method is a list of requirements that are separated into
the four shearing layers. A visual representation of its output is illustrated in
Fig. 3.5. This representation is adopted from the KAOS [8] and i* methods [35].
For example, the goal refinement link means that the three subgoals should together
achieve the parent goal, and the means-end link means that the element (functional
constraint, non-functional constraint, or pattern) is a means to achieve the goal.
[Figure: goal tree rooted at “To control access to university buildings and resources” (annotated with the non-functional constraints Compatibility and Availability), refined into “To manage people information centrally” and “To automate entries and updates of person information”; the PartyManager and PartyAuthentication patterns and the functional constraints “Display person photo when card scanned” and “Store, print and export person photo” are attached by means-end links. Goals without patterns are linked to their related business policies and rules.]
Fig. 3.5 Partial CoRE output for the university access control system
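To make the structure of Fig. 3.5 concrete, the sketch below models a fragment of the CoRE output as a small goal tree in Python. The class names and this particular representation are illustrative only, not part of CoRE’s definition; a minimal sketch assuming goals carry refinement links to subgoals and means-end links to elements.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Element:
        """A means-end element: a pattern, functional, or non-functional constraint."""
        name: str
        kind: str  # "pattern" | "functional constraint" | "non-functional constraint"

    @dataclass
    class Goal:
        description: str
        subgoals: List["Goal"] = field(default_factory=list)  # goal refinement links
        means: List[Element] = field(default_factory=list)    # means-end links

    # Fragment of the Fig. 3.5 model
    root = Goal("Control access to university buildings and resources",
                means=[Element("Compatibility", "non-functional constraint"),
                       Element("Availability", "non-functional constraint")])
    root.subgoals = [
        Goal("Manage people information centrally",
             means=[Element("PartyManager", "pattern")]),
        Goal("Automate entries and updates of person information"),
    ]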
3.5 Evaluation
CoRE’s goal is to separate requirements into layers that change at different rates.
The evaluation asks if CoRE can be used to separate requirements into the shearing
layers, and if it accurately anticipates the volatility of each shearing layer. The
access control system project in University College London is used as a case study
to evaluate CoRE. First, CoRE is used to model the initial requirements for the
project. Then, requirements changes to the system are recorded over a period of
3.5 years, from the development of the system to after its deployment. The results
are used to determine the volatility of each layer and when changes in each layer
occur in the system lifecycle.
RALIC (Replacement Access, Library and ID Card) was the access control system
project at University College London (UCL). RALIC was initiated to replace the
existing access control systems at UCL, and consolidate the new system with
identification, library access and borrowing. RALIC was a combination of devel-
opment and customisation of an off-the-shelf system. The objectives of RALIC
included replacing existing access card readers, printing reliable access cards,
managing cardholder information, providing access control, and automating the
provision and suspension of access and library borrowing rights.
RALIC was selected as the case study to evaluate CoRE for the following
reasons. First, the stakeholders and project documentation were accessible as the
system was developed, deployed, and maintained at UCL. Second, RALIC was a
well-documented project: the initial requirements and subsequent changes were
well-documented. Third, the system development spanned over 2 years and the
system has been deployed for more than 2 years, providing sufficient time to study
change during development as well as maintenance. Finally, RALIC was a large-
scale project with many stakeholders [19] in a changing environment, providing
sufficient data in terms of requirements and their changes to validate CoRE.
The CoRE method was used to separate the requirements for the RALIC project
into the shearing layers. The initial requirements model was built using the
requirements documentation signed off by the client as the baseline. The initial
requirements model for RALIC consists of 26 elements from the CoRE layers: 3
patterns, 5 functional constraints, 4 non-functional constraints, 5 business policies,
and 9 business rules.
Fig. 3.6 An excerpt of RALIC’s meeting minutes on card design. Names have been anonymised
for reasons of privacy
the same changes occurred because changes discussed in team meetings can be
subsequently reported in board meetings, reflected in functional specification,
cascaded into technical specification and finally into workplans. Interviews were
also conducted with the project team to understand the project context, clarify
uncertainties or ambiguities in the documentation, and verify the findings.
Some statements extracted from the documentation belong to more than one CoRE
layer. For example, the statement “for identification and access control using a single
combined card” consists of two patterns (i.e., Person and PartyAuthen-
tication) and a functional constraint (i.e., combined card). In such cases, the
statements are split into their respective CoRE layers.
Although the difference between pattern, functional constraint, and non-functional
constraint is clear-cut, policies and rules can sometimes be difficult to distinguish.
This is because high-level policies can be composed of several lower-level policies
[3, 21]. For example, the statement “Access Systems and Library staff shall require
leaver reports to identify people who will be leaving on a particular day” is a policy
rather than a rule, because it describes the purpose of the leaver reports but not how
the reports should be generated. Sometimes, a statement can consist of both rules and
policies. For example, “HR has changed the staff organisation structure; changes
were made from level 60 to 65.” Interviews with the stakeholders revealed that UCL
had restructured the departments of two faculties from a two-tier to a three-tier
hierarchy. This is a UCL policy change, which affected the specific rule for
RALIC, namely to display department titles from level 65 of the hierarchy onwards.
Each change was recorded by the date it was approved, a description, the type of
change, and its CoRE layer (or N/A if it does not belong to any layer). Table 3.2
illustrates the change records, where the type of change is abbreviated as A for
addition, M for modification, and D for deletion, and the CoRE layers are
abbreviated as P for pattern, FC for functional constraint, NFC for non-functional
constraint, BP for business policies, BR for business rules, and N/A if it does not
belong to any layer. There were a total of 97 changes and all requirements can be
exclusively classified into one of the four layers.
To evaluate if CoRE accurately anticipates the volatility of each shearing layer, the
change records for RALIC (Table 3.2) are used to calculate each layer’s volatility.
The volatility of a layer is the total number of requirements changes divided by the
initial number of requirements in the layer. The volatility ratio formula from Stark
et al. [31] is used:

    Volatility = (Added + Deleted + Modified) / Total    (3.1)

where Added is the number of added requirements, Deleted is the number of deleted
requirements, Modified is the number of modified requirements, and Total is the
total number of initial requirements for the system. Volatility is greater than 1 when
there are more changes than initial requirements.
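For illustration, Eq. 3.1 can be computed per layer as in the sketch below. The change counts shown are placeholders, not the RALIC figures from Table 3.2.

    def volatility(added: int, deleted: int, modified: int, total_initial: int) -> float:
        """Volatility ratio (Eq. 3.1): total changes over initial layer size."""
        if total_initial == 0:
            raise ValueError("layer has no initial requirements")
        return (added + deleted + modified) / total_initial

    # Placeholder change counts per layer: (added, deleted, modified, initial)
    layers = {"patterns": (0, 0, 0, 3),
              "functional constraints": (1, 1, 1, 5)}
    for name, counts in layers.items():
        print(name, volatility(*counts))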
Using Eq. 3.1 to calculate the volatility for each layer enables the comparison of
their relative volatility. As expected, patterns are the most stable, with no changes
over 3.5 years. This is followed by functional constraints with a volatility ratio of
0.6, non-functional constraints with a volatility ratio of 2.0, and business policies
and rules with a volatility ratio of 6.4. The volatility ratio between each layer is also
significantly different, showing a clear boundary between the layers. Business
policies and business rules have similar volatility when considered separately:
policies have a volatility ratio of 6.40 and rules 6.44.
The volatility ratio indicates the overall volatility of a layer. To understand when
the changes occur, the number of quarterly changes for each layer is plotted over the
duration of the requirements change study, as illustrated in Fig. 3.7.
[Fig. 3.7: number of changes per quarter, Oct–Dec 05 to Jan–Mar 09, plotted separately for patterns, functional constraints, non-functional constraints, business rules, and business policies; markers indicate the system going live for new staff (May 06) and for the whole of UCL (Mar 07)]
The quarter Oct–Dec 05 has the highest number of changes for functional
constraints, non-functional constraints, business policies and rules because the
requirements elicitation and documentation were still in progress. The project
board had signed off the high-level requirements, but the details surrounding access
rights and card processing were still being worked out. Many of the changes stemmed
from a better understanding of the project and from efforts to improve the
completeness of the requirements.
Consistent with the existing literature (e.g., [7]), missing requirements surfaced
from change requests after the system was deployed. The system went live first for
new staff in May 06 and then for the whole of UCL in March 07. Each time it went
live, the number of requirements changes increased in the following quarters.
A rise in policy change in quarters Oct–Dec 05 and Jan–Mar 07 was followed by
a rise in rule change in the following quarters, because business rules are based on
business policies. As more than one rule can be based on the same policy, the
number of changes in rules is naturally higher than that of policies. Nevertheless,
after the system went live, policy changes did not always cascade into rule changes.
For example, application of the access control policy to new buildings required only
the reapplication of existing rules.
Interestingly, the quarterly changes for business rules resemble a decaying
exponential: the number of changes was initially large but decreased rapidly.
In contrast, the quarterly changes for business policies show signs of
continuous change into the future. Rules suffered a high number of changes to
start with, as the various UCL divisions were still formulating and modifying the
rules for the new access control system. After the system went live, the changes
reduced to one per quarter for three quarters, and none thereafter. One exception is
in quarter Jul–Sep 08, where UCL faculty restructuring had caused the business
processes to change, which affected the rules. Nevertheless, these changes were
due to the environment of the system rather than missing requirements.
3.5.5 Implications
CoRE produces requirements models that are adequate without unnecessary details
because leaf goals are either mapped to archetypes, which are the essence of the
system, or to business policies and rules, ensuring that business specific require-
ments are supported by business reasons. CoRE does not rely on domain experts
because the archetypes capture requirements that are pervasive in the domain. The
requirements models are complete and pertinent because all the requirements in
RALIC can be classified into the four CoRE layers. Also, RALIC stakeholders
could readily provide feedback on CoRE models (e.g., Fig. 3.5), showing that the
model is easy to understand.
As CoRE is based on goal modelling, it inherits the multi-level, open, evolvable,
and traceable features of goal models. CoRE captures the system at different levels of
abstraction and precision to enable stepwise elaboration and validation. The AND/
OR refinements enable the documentation and consideration of alternative options.
As CoRE separates requirements based on their relative volatility, most changes
occur in business policies and rules. The rationale of a requirement is traceable
by traversing up the goal tree. The source of a requirement can be traced to the
stakeholder who defined the goal leading to the requirement.
Finally, CoRE externalises volatile business policies and rules. As such, the
system can be designed such that software architecture components that implement
these volatile requirements are loosely coupled with the rest of the system. For
example, in service-oriented architecture, these components can be implemented
such that changes in business policies are reflected in the system as configuration
changes, and changes in business rules are reflected as changes in service com-
position [18]. This minimises the disruptive effect of changing requirements on
the architecture.
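The sketch below illustrates this externalisation in a hypothetical form, not drawn from the RALIC implementation: access rules live in a configuration file, so a policy or rule change alters data rather than code. The file format and function names are invented.

    import json

    # Hypothetical externalised rules (access_rules.json), editable without code changes:
    # {"undergraduate": {"buildings": ["library"]},
    #  "staff": {"buildings": ["library", "labs"]}}

    def load_rules(path: str) -> dict:
        with open(path) as f:
            return json.load(f)

    def may_enter(rules: dict, category: str, building: str) -> bool:
        """A rule change is a data change here, not a change to this code."""
        entry = rules.get(category)
        return entry is not None and building in entry.get("buildings", [])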
The study is based on a single project, hence there must be some caution in
generalising the results to other projects, organisations, and domains. Also, the
study assumed that all requirements and all changes are documented. Future work
should evaluate CoRE on projects from different domains, and in a forward looking
manner, i.e., anticipate the change and see if it happens. As RALIC is a business
system, enterprise archetype patterns were used. Future work should investigate
the use of software patterns in other domains, such as manufacturing or medical
domains. Finally, future work should also investigate the extent of CoRE’s support
for requirements engineers who are less experienced.
The requirements changes that CoRE anticipates are limited to those caused by
the business environment and stakeholder needs. But requirements changes can be
influenced by other factors. For example, some requirements may be more volatile
than others because they cost less to change. In addition, CoRE does not consider
changes due to corrections, improvements, adaptations or uncertainties. Future
work should consider a richer model that accounts for these possibilities, as well
as provide guidance for managing volatile requirements.
CoRE anticipates change at the level of a shearing layer. But among the elements
in the same layer, it is unclear which is more volatile. For example, using CoRE,
business rules are more volatile than functional constraints, but it is unclear which
rules are more likely to change. Future work should explore a predictive model that
can anticipate individual requirements change and the timing of the change. This
could be done by learning from various attributes for each requirement such as
the number of discussions about the requirement, the stakeholders involved in the
discussion and their influence in the project, and the importance of the requirement
to the stakeholders. Much of this data for RALIC has been gathered in previous
work [17].
3.7 Conclusion
This chapter has described CoRE, a novel expert independent method that classifies
requirements into layers that change at different rates. The empirical results show
that CoRE accurately anticipates the volatility of each layer. From the most stable
to the most volatile, the layers are patterns, functional constraints, non-functional
constraints, and business policies and rules.
CoRE is a simple but reliable method to anticipate change. CoRE has been
used in the Software Database project3 to build a UCL wide software inventory
system. Feedback from the project team revealed that CoRE helped the team
better structure their requirements, and gave them insight into which requirements
were likely to change. As a result, their design and implementation prepared for
future changes, thus minimising the disruptive effect of changing requirements on
their architecture.
Acknowledgments The authors would like to thank members of the Estates and Facilities
Division and Information Services Division at University College London for the RALIC project
documentation, their discussions and feedback on the requirements models, as well as Peter
Bentley, Fuyuki Ishikawa, Emmanuel Letier, and Eric Platon for their feedback on the work.
3 http://www.ucl.ac.uk/isd/community/projects/azlist-projects
References
1. Anton AI (1996) Goal-based requirements analysis. In: Proceedings of the 2nd international
conference on requirements engineering. Colorado Springs, pp 136–144
2. Arlow J, Neustadt I (2003) Enterprise patterns and MDA: building better software with
archetype patterns and UML. Addison-Wesley, Boston
3. Berenbach B, Paulish DJ, Kazmeier J, Daniel P, Rudorfer A (2009) Software systems
requirements engineering: in practice. McGraw-Hill Osborne Media, New York
4. Boehm BW (1981) Software engineering economics. Prentice Hall, Englewood Cliffs
5. Brand S (1995) How buildings learn: what happens after they’re built. Penguin Books,
New York
6. Chung L, Nixon BA, Yu E, Mylopoulos J (1999) Non-functional requirements in software
engineering. Springer, Berlin
7. Cockburn A (2002) Writing effective use cases. Addison-Wesley, Boston
8. Dardenne A, van Lamsweerde A, Fickas S (1993) Goal-directed requirements acquisition.
Sci Comput Programming 20(1–2):3–50
9. Foote B, Yoder J (2000) Big ball of mud. Pattern Lang Program Des 4(99):654–692
10. Harker SDP, Eason KD, Dobson JE (1993) The change and evolution of requirements as
a challenge to the practice of software engineering. In: Proceedings of the IEEE international
symposium on requirements engineering, San Diego, pp 266–272
11. IEEE Computer Society (2000) IEEE recommended practice for software requirements
specifications, Los Alamitos
12. International Organization for Standardization (2001) Software engineering – product quality,
ISO/IEC TR 9126, Part 1–4
13. Jacobson I (1995) The use-case construct in object-oriented software engineering. In:
Scenario-based design: envisioning work and technology in system development. Wiley,
New York, pp 309–336
14. Jones C (1996) Strategies for managing requirements creep. Computer 29(6):92–94
15. Kotonya G, Sommerville I (1998) Requirements engineering. Wiley, West Sussex
16. Leffingwell D (1997) Calculating the return on investment from more effective requirements
management. Am Program 10(4):13–16
17. Lim SL (2010) Social networks and collaborative filtering for large-scale requirements
elicitation. PhD thesis, University of New South Wales. Available at http://www.cs.ucl.ac.uk/staff/S.Lim/phd/thesis_soolinglim.pdf, accessed on 14 December 2010
18. Lim SL, Ishikawa F, Platon E, Cox K (2008) Towards agile service-oriented business systems:
a directive-oriented pattern analysis approach. In: Proceedings of the 2008 IEEE international
conference on services computing, Honolulu, Hawaii, vol 2. pp 231–238
19. Lim SL, Quercia D, Finkelstein A (2010) StakeNet: using social networks to analyse the
stakeholders of large-scale software projects. In: Proceedings of the 32nd international
conference on software engineering, vol 1. New York, pp 295–304
20. Mens T, Galal G (2002) 4th workshop on object-oriented architectural evolution. Springer,
Berlin, pp 150–164
21. Object Management Group (2006) Business motivation model (BMM) specification. Tech-
nical Report dtc/060803
22. O’Neill RV, DeAngelis DL, Waide JB, Allen TFH (1986) A hierarchical concept of
ecosystems. Princeton University Press, Princeton
23. Papantoniou B, Nathanael D, Marmaras N (2003) Moving target: designing for evolving
practice. In: HCI international. MIT Press, Cambridge
24. Robertson S, Robertson J (2006) Mastering the requirements process. Addison-Wesley,
Reading
25. Ross RG (2003) Principles of the business rule approach. Addison-Wesley, Boston
26. Salthe SN (1993) Development and evolution: complexity and change in biology. MIT Press,
Cambridge, MA
27. Simmonds I, Ing D (2000) A shearing layers approach to information systems development,
IBM Research Report RC21694. Technical report, IBM
28. Sommerville I (2004) Software engineering, 7th edn. Addison-Wesley, Boston
29. Sommerville I, Sawyer P (1997) Requirements engineering: a good practice guide. Wiley,
Chichester
30. Standish Group (1994) The CHAOS report
31. Stark GE, Oman P, Skillicorn A, Ameele A (1999) An examination of the effects of
requirements changes on software maintenance releases. J Softw Maint Res Pr 11(5):293–309
32. van Lamsweerde A (2001) Goal-oriented requirements engineering: a guided tour. In:
Proceedings of the 5th IEEE international symposium on requirements engineering. Toronto,
pp 249–262
33. van Lamsweerde A (2009) Requirements engineering: from system goals to UML models to
software specifications. Wiley, Chichester
34. Wiegers K (2003) Software requirements, 2nd edn. Microsoft Press, Redmond
35. Yu ESK (1997) Towards modelling and reasoning support for early-phase requirements
engineering. In: Proceedings of the 3rd IEEE international symposium on requirements
engineering. Annapolis, pp 226–235
Chapter 4
Traceability in the Co-evolution of Architectural
Requirements and Design
Antony Tang, Peng Liang, Viktor Clerc, and Hans van Vliet
4.1 Introduction
In this scenario, many people are involved in the development of the system, and
the knowledge used in the development is discovered incrementally over time.
Common phenomena such as this occur every day in software development. Three
problematic situations often arise that lead to knowledge communication issues in
software design.
approach to the traceability of product line and product levels [5]. However,
this approach is not suitable for general purpose traceability of requirements to
architecture design.
In this research, we investigate how requirements and design relationships can
become traceable when requirements and design objects are both incomplete and
evolving simultaneously, and the static trace links used by conventional traceability
methods are insufficient and out-of-date. Our work provides a general ontological
model to support the traceability of co-evolving architectural requirements and
design. Based on this ontology, we have applied semantic wikis to support trace-
ability and reasoning in requirements development and architecture design.
The remainder of this chapter is organized as follows. Section 4.2 describes
the issues in current traceability management from requirements to architecture
design. Section 4.3 presents the traceability use cases for co-evolving architec-
ture requirements and design with a metamodel that supports this traceability.
Section 4.4 introduces the implementation of Software Engineering Wiki (SE-
Wiki), a prototype tool that supports the dynamic traceability with an underlying
ontology based on the traceability metamodel. Section 4.5 presents three concrete
examples of using SE-Wiki to perform the traceability use cases. We conclude this
chapter in Section 4.6.
Requirements traceability is the ability to describe and follow the life of require-
ments [1]. Ideally, such traceability would enable architects and designers to find
all relevant requirements and design concerns for a particular aspect of software
and system design, and it would enable users and business analysts to find out
how requirements are satisfied. A survey of a number of systems by Ramesh
and Jarke [2] indicates that requirements, design, and implementation ought to be
traceable to ensure continued alignment between stakeholder requirements and
various outputs of the system development process. The IEEE standards recom-
mend that requirements should be allocated, or traced, to software and hardware
items [6, 7].
On the other hand, [1] distinguishes two types of traceability: pre-requirements
specification and post-requirements specification. The difference between these
two traceability types lies in when requirements are specified in a document.
With the emergence of agile software development and the use of architecture
frameworks, the process of requirements specification and design becomes more
iterative. As a result, the boundary between pre- and post-requirement traceability
is harder to define because of the evolving nature of requirements specification
activity.
In this section, we examine the knowledge that needs to be traced and the
challenges of using conventional requirements traceability methods, which are based
on static information, in an environment where information changes rapidly and
the capability to trace such dynamic requirements information must improve.
During the development life cycle, architects and designers typically use specifications
of business requirements, functional requirements, and architecture design. Traceabil-
ity across these artifacts is typically established as a static relationship between entities.
An example would be to cross-reference requirement R13.4 which is realized by
module M_comm(). It is argued by [3] that relating these pieces of information helps
the designers to maintain the system effectively and accurately, and it can lead to better
quality assurance, change management, and software maintenance. There are different
ways in which such traceability between requirements and architecture design can be
5. The architect proposes one or more alternative options to address these new
issues.
6. The architect evaluates and selects one architectural design decision from
alternative options. One of the evaluation criteria is that the selected decision
should not violate existing architectural design decisions and it should satisfy the
changing requirement.
7. The architect evaluates whether the new architectural design outcome can still
satisfy those non-functional requirements related to the changing functional
requirement.
Scenario 3 – Design Impact Evaluation An architect wants to evaluate the
impact a changing requirement may have on the architecture design across versions
of this requirement.
Problem The architect needs to understand and assess how the changing
requirement impacts the architecture design.
Solution The architect finds all the components that are used to implement the
changing requirement in different versions, and evaluates the impact of the chang-
ing requirement on the architecture design.
Scenario description
1. The architect extracts all the components that realize or satisfy the changing
requirement in different versions, functional or non-functional.
2. The architect finds all the interrelated requirements in the same version and the
components that implement them.
3. The architect evaluates how the changes between different versions of the
requirement impact the architecture design, and can also recover the decision
made for addressing the changing requirement.
In order to support these traceability scenarios, a dynamic traceability approach
is needed. This approach would require the traceability relationships to remain up-
to-date with evolving documentation, especially when the stakeholders work with
different documents and some stakeholders do not know what others are doing.
In summary, the following traceability functions need to be provided for such an
approach to work effectively:
• Support the update of trace links when specification evolves – this function
requires that as documents are updated, known concepts from the ontology are
used automatically to index the keywords in the updated documents, thereby
providing up-to-date trace information.
• Support flexible definition of trace relationships – the traceability relationships
should not be fixed when the system is implemented. The application domain
and its vocabulary can change and the ways designers choose to trace informa-
tion may also change. Thus the trace relationships should be flexible to accom-
modate such changes without requiring all previously defined relationships to be
manually updated.
• Support traceability based on conceptual relationships – certain concepts have
hierarchical relationships. For instance, performance is a quality requirement,
[Figure: entities Architectural Requirement, Position (alternatives), Decision, Design Outcome, Component, and Architecture Structure spanning the Problem and Solution Space, connected by depend on, address, relate to, and support/object to relationships]
Fig. 4.1 Traceability metamodel for co-evolving architectural requirements and design
Stakeholder: refers to anyone who has direct or indirect interest in the system.
A Requirement normally is proposed by a specific Stakeholder, which is the
original source of requirements.
Requirement: represents any requirement statements proposed by a specific Stake-
holder, and a Requirement can relate to other Requirements. There are
generally two types of requirements: Functional Requirements and Non-Func-
tional Requirements, and a Requirement is realized by a set of Design
Outcomes. Note that the general relationship relate to between Requirements
can be detailed further according to the use case scenarios supported.
Architectural Requirement: is a kind of Requirement, and Architectural
Requirements are those requirements that impact the architecture design. An
Architecture Requirement can also relate to other Architectural Requirements,
and the relate to relationship is inherited from its superclass Requirement.
Issue: represents a specific problem to be addressed by alternative solutions
(Positions). It is often stated as a question, e.g., what does the data transport layer
consist of?
Position: is an alternative solution proposed to address an Issue. Normally one or
more potential alternative solutions are proposed, and one of them is to be
selected as a Decision.
Argument: represents the pros and cons argument that either support or
object to a Position.
Decision: is a kind of Position that is selected from available Positions
depending on certain Requirements (including Architectural Requirements),
and a Decision can also relate to other Decisions [35]. For instance,
a Decision may select some products that constrain how the application software
can be implemented.
Design Outcome: represents an architecture design artifact that results from an
architecture design Decision.
Component and Architecture Structure: represent two types of Design Outcomes:
an Architecture Structure can be some form of layers, interconnected modules,
etc., while individual Components are the basic building blocks of the system.
The concepts in this metamodel can be classified according to the Problem and
Solution Space in system development. The Problem and Solution Space overlap:
Architectural Requirement and Decision, for example, belong to both spaces.
The metamodel depicted in Fig. 4.1 shows the conceptual model and the relationships
between the key entities in the Problem and Solution Space. This conceptual model,
or metamodel, requires an ontological interpretation to define the semantics of the
concepts it represents. In this section, we describe the ontology of our model to
support the use cases of co-evolving architectural requirements and design.
An ontology defines a common vocabulary for those who need to share infor-
mation in a given domain. It provides machine-interpretable definitions of basic
concepts in that domain and the relations among them [36]. In software develop-
ment, architects and designers often do not use consistent terminology. Many
terms can refer to the same concept, i.e., synonyms, or the same term is used for
different concepts, i.e., homonyms. In searching through software specifications,
these inconsistencies can cause a low recall rate and low precision rate, respec-
tively [30].
An ontology provides a means to explicitly define and relate the use of software
and application domain related terms such as design and requirements concepts.
The general knowledge about an application domain can be distinguished from the
specific knowledge of its software implementation. For instance, system throughput
is a general, measurable concept about quality requirements; it can be
represented as a subclass in the hierarchy of the quality requirements class. In an
application system, say a bank teller system, its throughput is a specific instance of
a performance measure. An ontology that defines these relationships enables
effective searching and analysis of the knowledge embedded in software
documents.
Ontology defines concepts in terms of classes. A class can have subclasses.
For instance, the throughput class is a subclass of efficiency, meaning that through-
put is a kind of performance measure. A throughput class can have instances that
relate to what is happening in the real-world. Some examples from an application
system are: the application can process 500 transactions per second or an operator
can process one deposit every 10 s.
A class can be related to another class through some defined relationships. For
instance, a bank teller system satisfies a defined throughput rate. In this case,
satisfies is a property of the bank teller system. The property satisfies links a specific
requirement to a specific throughput.
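The class hierarchy and the satisfies property just described can be sketched as RDF triples. The sketch below uses the Python rdflib library; the example.org namespace and all URI names are invented for illustration and are not the SE-Wiki ontology itself.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, RDFS

    EX = Namespace("http://example.org/se-ontology#")
    g = Graph()

    # Class hierarchy: throughput is a kind of efficiency, a quality requirement
    g.add((EX.Throughput, RDFS.subClassOf, EX.Efficiency))
    g.add((EX.Efficiency, RDFS.subClassOf, EX.QualityRequirement))

    # A real-world instance: the bank teller system's measured throughput
    g.add((EX.tellerThroughput, RDF.type, EX.Throughput))
    g.add((EX.tellerThroughput, RDFS.comment, Literal("500 transactions per second")))

    # The satisfies property links the system to a specific throughput requirement
    g.add((EX.bankTellerSystem, EX.satisfies, EX.tellerThroughput))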
[Figure: traceability ontology with Dublin Core metadata attributes (title, subject, description, type, source, relation, creator, contributor, date, format, identifier) attached to Requirement (req_id, req_descr, is_proposed_by), its subclasses Functional Requirement and Non-functional Requirement (with quality attributes such as Efficiency, Usability, and other QAs), and Decision (design_decision, decision_issue, arguments), related through is proposed by, identifies, depends_on, req_is_related_to, qual_is_related_to, relation_supercede, realized_by, satisfies, and results_in]
what components are used to realize a specific requirement and why, for
instance.
• Functional Requirement is_realized_by an architecture design. Designers,
programmers, and testers often need to know the implementation relationships.
If a decision has been documented and included in the ontology, then this
relationship can be inferred from the original requirement. However, design
decisions are often omitted, and so the implied realization link between
requirements and design outcomes becomes unavailable. In order to circumvent
this issue, we choose to establish a direct relationship between requirements and
architecture.
• Architecture Design satisfies some non-functional requirements. This rela-
tionship shows that an architecture design can satisfy the non-functional
requirements.
Together these relationships mark and annotate the texts in requirements and
architecture specifications, providing the semantic meaning to enable architects
and analysts to query and trace these documents in a meaningful way. Each trace
link is an instance of the ontology relationships. Traceability is implemented by
a semantic wiki that supports querying and traversal.
1 http://www.mediawiki.org/
2 http://www.wikipedia.org/
get the query results interactively. For example, query input [[Category:Functional
Requirement]][[is proposed by::Stakeholder A]] will return all the functional
requirements proposed by Stakeholder A. Semantic search suits ad hoc queries
that vary from time to time. An in-line query is a query expression embedded
in a wiki page in order to dynamically include query results into pages.
Consider this in-line query: {{#ask: [[Category:Requirement]][[is proposed
by::Stakeholder A]] | ?is proposed by}}. It asks for all the requirements proposed by
Stakeholder A. In-line query is more appropriate in supporting dynamic traceability
between software artifacts, e.g., when a functional requirement proposed by Stake-
holder A is removed from a requirements specification, the requirements list in the
wiki page of Stakeholder A will be updated automatically and dynamically.
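For illustration, such queries could also be issued programmatically. The sketch below assumes a Semantic MediaWiki instance exposing the standard ask API module; the endpoint URL is hypothetical.

    import requests

    WIKI_API = "http://sewiki.example.org/api.php"  # hypothetical SE-Wiki endpoint

    def ask(query: str) -> dict:
        """Run a Semantic MediaWiki ask query via the MediaWiki API."""
        resp = requests.get(WIKI_API, params={"action": "ask",
                                              "query": query,
                                              "format": "json"})
        resp.raise_for_status()
        return resp.json()["query"]["results"]

    # All functional requirements proposed by Stakeholder A
    results = ask("[[Category:Functional Requirement]]"
                  "[[is proposed by::Stakeholder A]]|?is proposed by")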
Example uses of these semantic features supported in SE-Wiki for the trace-
ability use cases are further described in the next section.
Portal project. In order to implement the use cases, all the relevant requirements and
architecture specifications must be semantically annotated based on the traceability
ontology specified in Sect. 4.4.1. For example, in the sample requirement statement
“Student would like to download course slides from course website”, Student is
annotated as an instance of the concept Stakeholder, would like to as an instance of
the concept relationship is_proposed_by, and download course slides from course
website as an instance of the concept Requirement.
These semantic annotations are performed by business analysts and architects
as they document the specifications. The main difference between this method and
some other requirements traceability methods is that individual requirements and
designs are semantically annotated, and their traceability is enabled by reasoning
with the ontology concepts.
3 Resources in the NIHR Portal project refer to all the information maintained by the Portal, e.g.,
sources of funding for different types of research.
reused or not for the implementation of the new functional requirement Track
Usage. This query is composed of two parts: the query input in the upper left of
Fig. 4.3 [[Category:Design Outcome]][[realizes::Change User Access]] extracts
all the Design Outcomes that realize Change User Access requirement, i.e., REST
Structure and SOA Structure, which are directly related with Change User Access
requirement; the query input in the upper right ?satisfies [[Category:Non-Functional
Requirement]] returns all the Non-Functional Requirements, i.e., Integration
Requirement and Interoperability Requirement, which are indirectly related with
Change User Access requirement through the Design Outcomes.
With all the Non-Functional Requirements and their associated Design
Outcomes related to Change User Access requirement, which are all shown in
one wiki page, the architect can have an overall view of the implementation context
of the new functional requirement Track Usage, and assess the compatibility of
these Non-Functional Requirements with the Non-Functional Requirements related
to the new functional requirement. With this information, the architect will decide
whether or not to reuse these Design Outcomes for the implementation of the new
functional requirement Track Usage.
When new Design Outcomes are added to realize a requirement, in this case
the requirement Change User Access, the semantic query will return the latest
results (i.e., updated Design Outcomes realizing Change User Access). This allows
SE-Wiki to support dynamic changes to requirements and architecture design
which normal wikis cannot achieve with static trace links.
Under the current ontology definition, other possible software reuse scenarios
can be supported by SE-Wiki, some of them are:
• Find all components that support a particular kind of quality requirements, and
satisfy some quality requirements thresholds.
• Find all components that are influenced by two specific quality requirements
simultaneously.
• Find the architecture design and all the components within it that support an
application sub-system.
• Trace all components that are influenced by a design decision to assess if the
components are reusable when the decision changes.
In Scenario 2, the architect evaluates and finds that related requirement Track
Usage is not affected by the change of requirement Change User Access. But the
architect finds an issue Access Control by Identity caused by the changing require-
ment. To address this issue, a design option Identity Management: Provide an
identity management infrastructure in portal personalization management is
selected by the architect and documented as a Decision. A design outcome Identity
Management Component is designed to realize the changing requirement
Change User Access. All these updates on related artifacts are recorded in this
requirement wiki page through incoming and outgoing traces as shown in Fig. 4.5.
With the information found in this page, the architect can further evaluate
whether the newly-added decision Identity Management is compatible with other
existing Designs, e.g., Portal Personalization, and whether the updated Design
Outcomes still satisfy those related Non-Functional Requirements, e.g., Inte-
gration Requirement. The Decisions and Design Outcomes may change accordingly
based on these further evaluations.
A number of other use cases similar to this changing-requirement scenario can
also be supported by SE-Wiki:
Fig. 4.5 Updated results of scenario 2 through In-line semantic query in SE-Wiki
• An architect wants to get a list of open quality requirements for which architec-
tural decisions are needed.
• An architect wants to evaluate and detect the soundness of the software artifacts,
e.g., a design decision is wanted when an architecture is used to realize a
functional requirement.
• An architect can identify the architecture design components that have been
changed from the previous software version.
• Analysts or architects can find the latest changes to a requirement or a design of
interest.
• Analysts or architects can find changes that have been made by certain people or
within a certain period of time.
4.6 Conclusions
Acknowledgments This research has been partially sponsored by the Dutch “Regeling
Kenniswerkers”, project KWR09164, Stephenson: Architecture knowledge sharing practices in
software product lines for print systems, the Natural Science Foundation of China (NSFC) under
Grant No. 60950110352, STAND: Semantic-enabled collaboration Towards Analysis, Negotia-
tion and Documentation on distributed requirements engineering, and NSFC under Grant
No.60903034, QuASAK: Quality Assurance in Software architecting process using Architectural
Knowledge.
References
1. Gotel OCZ, Finkelstein ACW (1994) An analysis of the requirements traceability problem.
In: IEEE International Symposium on Requirements Engineering (RE), 94–101
2. Ramesh B, Jarke M (2001) Towards reference models for requirements traceability. IEEE
Trans Software Eng 27(1):58–93
3. Spanoudakis G, Zisman A, Perez-Minana E, Krause P (2004) Rule-based generation of
requirements traceability relations. J Syst Softw 72(2):105–127
4. Egyed A, Grunbacher P (2005) Supporting software understanding with automated
requirements traceability. Int J Software Engineer Knowledge Engineer 15(5):783–810
5. Lago P, Muccini H, van Vliet H (2009) A scoped approach to traceability management. J Syst
Softw 82(1):168–182
6. IEEE (1996) IEEE/EIA Standard – Industry Implementation of ISO/IEC 12207:1995, Infor-
mation Technology – Software life cycle processes (IEEE/EIA Std 12207.0–1996)
7. IEEE (1997) IEEE/EIA Guide – Industry Implementation of ISO/IEC 12207:1995, Standard
for Information Technology – Software life cycle processes – Life cycle data (IEEE/EIA Std
12207.1–1997)
8. Farenhorst R, Izaks R, Lago P, van Vliet H (2008) A just-in-time architectural knowledge
sharing portal. In: 7th Working IEEE/IFIP Conference on Software Architecture (WICSA),
125–134
9. Bass L, Clements P, Kazman R (2003) Software architecture in practice, 2nd edn. Addison
Wesley, Boston
10. Ali-Babar M, de Boer RC, Dingsøyr T, Farenhorst R (2007) Architectural knowledge man-
agement strategies: approaches in research and industry. In: 2nd Workshop on SHAring and
Reusing architectural Knowledge – Architecture, Rationale, and Design Intent (SHARK/ADI)
11. Rus I, Lindvall M (2002) Knowledge management in software engineering. IEEE Softw 19(3):
26–38
12. Hansen MT, Nohria N, Tierney T (1999) What’s your strategy for managing knowledge?
Harv Bus Rev 77(2):106–116
13. Robertson S, Robertson J (1999) Mastering the requirements process. Addison-Wesley,
Harlow
14. Tang A, Jin Y, Han J (2007) A rationale-based architecture model for design traceability and
reasoning. J Syst Softw 80(6):918–934
15. IBM (2010) Rational DOORS – A requirements management tool for systems and advanced
IT applications. http://www-01.ibm.com/software/awdtools/doors/, accessed on 2010-3-20
16. IBM (2004) Rational RequisitePro - A requirements management tool. http://www-01.ibm.
com/software/awdtools/reqpro/, accessed on 2010-3-20
17. Hayes JH, Dekhtyar A, Osborne J (2003) Improving Requirements Tracing via Information
Retrieval. In: 11th IEEE International Conference on Requirements Engineering (RE),
138–147
18. Assawamekin N, Sunetnanta T, Pluempitiwiriyawej C (2009) Mupret: an ontology-driven
traceability tool for multiperspective requirements artifacts. In: ACIS-ICIS, pp 943–948
19. Hayes JH, Dekhtyar A, Sundaram SK (2006) Advancing candidate link generation for
requirements tracing: the study of methods. IEEE Trans Software Eng 32(1):4–19
20. Cleland-Huang J, Chang CK, Christensen M (2003) Event-based traceability for managing
evolutionary change. IEEE Trans Software Eng 29(9):796–810
21. Mistrík I, Grundy J, Hoek A, Whitehead J (2010) Collaborative software engineering.
Springer, Berlin
22. Schaffert S, Bry F, Baumeister J, Kiesel M (2008) Semantic wikis. IEEE Softw 25(4):8–11
23. Liang P, Avgeriou P, Clerc V (2009) Requirements reasoning for distributed requirements
analysis using semantic wiki. In: 4th IEEE International Conference on Global Software
Engineering (ICGSE), 388–393
24. Louridas P (2006) Using wikis in software development. IEEE Softw 23(2):88–91
25. Hoenderboom B, Liang P (2009) A survey of semantic wikis for requirements engineer-
ing. SEARCH http://www.cs.rug.nl/search/uploads/Publications/hoenderboom2009ssw.pdf,
accessed on 2010-3-20
26. Bachmann F, Merson P (2005) Experience using the web-based tool wiki for architecture
documentation. Technical Note CMU, SEI-2005-TN-041
27. Lassila O, Swick R (1999) Resource Description Framework (RDF) Model and Syntax.
http://www.w3.org/TR/WD-rdfsyntax, accessed on 2010-3-20
28. McGuinness D, van Harmelen F (2004) OWL web ontology language overview. W3C
recommendation 10:2004–03
29. Aguiar A, David G (2005) Wikiwiki weaving heterogeneous software artifacts. In: inter-
national symposium on wikis, WikiSym, pp 67–74
30. Geisser M, Happel HJ, Hildenbrand T, Korthaus A, Seedorf S (2008) New applications for
wikis in software engineering. In: PRIMIUM 145–160
31. Riechert T, Lohmann S (2007) Mapping cognitive models to social semantic spaces-collabo-
rative development of project ontologies. In: 1st Conference on Social Semantic Web (CSSW)
91–98
32. Domges R, Pohl K (1998) Adapting traceability environments to project-specific needs.
Commun ACM 41(12):54–62
33. Lago P, Farenhorst R, Avgeriou P, Boer R, Clerc V, Jansen A, van Vliet H (2010) The
GRIFFIN collaborative virtual community for architectural knowledge management. In:
Mistrík I, Grundy J, van der Hoek A, Whitehead J (eds) Collaborative software engineering.
Springer, Berlin
34. Kunz W, Rittel H (1970) Issues as elements of information systems. Center for Planning and
Development Research, University of California, Berkeley
35. Kruchten P (2004) An ontology of architectural design decisions in software-intensive
systems. In: 2nd Groningen Workshop on Software Variability Management (SVM)
36. Noy N, McGuinness D (2001) Ontology development 101: a guide to creating your first
ontology
37. Powell A, Nilsson M, Naeve A, Johnston P (2007) Dublin Core Metadata Initiative – Abstract
Model. http://dublincore.org/documents/abstract-model, accessed on 2010-3-20
38. Krotzsch M, Vrandecic D, Volkel M (2006) Semantic Mediawiki. In: 5th International
Semantic Web Conference (ISWC), 935–942
39. Prud’Hommeaux E, Seaborne A (2006) SPARQL Query Language for RDF. W3C working
draft 20
40. NIHR (2006) NIHR Information Systems Portal User Requirements Specification. http://www.nihr.ac.uk/files/pdfs/NIHR4.2 Portal URS002 v3.pdf, accessed on 2010-3-20
Chapter 5
Understanding Architectural Elements
from Requirements Traceability Networks
5.1 Introduction
Thus, the main research question is how software requirements evolution impacts
the underlying architecture of the system. This question will be addressed by
investigating how traceability relations between software requirements and different
components in a system reveal its architectural implications. Turner et al. [3] des-
cribe a requirement feature as “a coherent and identifiable bundle of system func-
tionality that helps characterize the system from the user perspective.” We envisage
a scenario where decisions are previously taken on the desired architecture to be
used in implementing a specified feature in the system. We subsequently harvest
homogeneous and heterogeneous requirements traceability networks. Such traceabi-
lity networks can also represent semantic graphs from which the actual architectural
representation of the system can be inferred. The aim then is to compare and validate
the desired architecture against the real-time inferred system architecture used
to implement a desired user feature.
In the remaining part of this chapter, Sect. 5.2 first provides the background on
requirements traceability and software architectures and discusses the architectural
information needs of different stakeholders. Section 5.3 presents the automated
requirements traceability mechanism that is used to realize our traceability
networks. A system architectural inference mechanism based on extracted require-
ments traceability networks is explained. Section 5.4 presents an evaluation of our
approach based on an implemented prototype. Section 5.5 presents related work
and subsequently our conclusion and further work in Sect. 5.6.
the project, architectural decisions are made to implement specific features of the
system. Such features can further generate a set of use cases and more concrete
requirements that achieve the use case (Fig. 5.1a). Subsequently, different
stakeholders use a set of components to achieve a specified system feature as
shown in Fig. 5.1b. Thus, the main concern here is deriving some real time
architectural insight based on the trace links generated between system features
or use cases, components and stakeholders.
It is not straightforward to achieve architectural insight from a tangible
representation of the system and harvested traceability links, since different
components of a system can be associated with multiple desired features of the
system and are in most cases worked on by different stakeholders, from varying
perspectives, to achieve different tasks or features. Hence, traces between system
components, features and stakeholders result in a complex web from which
the core challenge is to infer an earlier guiding design rationale. The aim of this
research is to reveal architectural rationale from such a real-time traceability
viewpoint. This is achieved by mining homogeneous and heterogeneous require-
ments traceability networks. In this chapter, we focus on a subset of possible
architectural insights classified either as explicit or implicit. Explicit insight can
be directly inferred from generated traceability networks, e.g., the architecture style
revealed by real-time links between project entities. Implicit architectural insight is
the additional information that can be assumed based on the interaction between
the different project entities, e.g., development model, feature and requirements
breakdown structure, decomposition and allocation of responsibility, assignment
to processes and threads, etc.
– Ruben’s implementation of the Browse Movies use case involved the creation
and further updating of MovieCatalog.java, Cinema.java, and Movie.java.
Ruben also viewed Ticket.java a number of times.
Traceability links can be homogeneous, e.g., a code component being related to
another code component, or heterogeneous, e.g., a relationship between a developer
and a code component, or between a use case and a component. The detailed
interaction event trails are shown in Fig. 5.2. Any selected time-point corresponds to at least one
event associated with a use case, a developer, and a code artefact. For instance, at
time-point 1, a create event associated with Account.java was executed by Amy
while working on the Purchase Tickets use case. Similarly, time-point 7 has two
events: Ruben updated Cinema.java (absolute update delta 50 [magnitude of the
update based on character difference]) while working on Browse Movies, and Bill
viewed Account.java as he worked on Purchase Tickets.
In this scenario, the Purchase Tickets use case is associated with Bill, Amy and
a number of code artefacts. Also, MovieCatalog.java is associated with the three
developers as well as the two use cases. On the whole, within such a rather small
and seemingly uncomplicated scenario involving only two use cases, three
developers and eight code artefacts, 27 different traceability links can be identified.
To make sense of such a number of dependencies, they must be ranked for relevance.
For instance, the relevance of traceability links between a use case and developer is
dependent on the number of interaction events generated over time by the developer
in achieving the use case. A relevance measure of trace links between two entities is
non-symmetric. This is because the relevance measure is firstly dependent on the
number of other entity instances a selected entity can be traced to, and secondly the
amount of interaction events generated as a result of each trace link.
Fig. 5.2 Monitored interaction trails used to achieve TickX across 25 time-points
This demonstrated scenario poses a number of questions. Firstly, what are the
possible automated methods for harvesting a traceability network? Secondly, how
can the system’s architectural representations be revealed by these networks? In
addressing the first question, this research investigates an event based mechanism
for retrieving interaction events for the subsequent generation of traceability
networks. This involves capturing navigation and change events that are generated
by the developers. The advantage of this approach is the opportunity to automati-
cally harvest real-time trace links. The event based approach also provides a basis
for inferring real-time architectural representations. Some ideas for such an
approach have been presented in earlier work [10]. In this section, we present an
event based linear mechanism to generate traceability links and rank their rele-
vance. Since event based approaches are sometimes prone to generating false
positives, we also use call graphs to validate the event based networks.
[Figure: directed work-context graphs for TickX linking the Purchase Tickets use case, the developers Amy and Ruben, and code artefacts including MovieCatalog.java, Ticket.java, Cinema.java, and Customer.java]
number of unique entity instances directly associated with it divided by the number
of unique entity instances in the whole collaboration space. For the motivating
example the SOI ratio of Amy is 6/9 (entities in Amy’s work context/total number
of entities – two use cases and seven classes).
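As a minimal sketch of this ratio (the entity sets below are illustrative):

    def soi_ratio(work_context_entities: set, all_entities: set) -> float:
        """Sphere-of-influence ratio: share of the collaboration space touched."""
        return len(work_context_entities) / len(all_entities)

    # Amy's work context covers 6 of the 9 entities in the TickX example
    print(soi_ratio(set(range(6)), set(range(9))))  # 0.666...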
The concepts of interaction events combined with the SOI ratio form the basis for
deriving trace networks with semantic insight into the centrality of the involved entity
instances. Figure 5.3 shows three directed graphs. In general, a graph G has a set
of nodes E = {e1, e2, ..., en} and a set of arcs L = {l1, l2, ..., lm}, which are ordered
pairs of distinct entities lk = <ei, ej>. The arc <ei, ej> is directed from ei to ej.
Thus, <ei, ej> ≠ <ej, ei>. In our usage, the graphs are three-partite since their
entities E can be partitioned into three subsets Ec, Ed and Ea (use cases, developers
and code artefacts). All arcs connect entities in different subsets.
The weight attribute of each arc is specified by the accumulative linear combination
of weights gained as a result of events associated with that arc and the sphere of
influence of the entity that forms the perspective of the work context. More formally,
the cumulative weight x associated with an arc <ei, ej> in response to an event is
given by

    x_n = x_{n-1} + t * s    (5.1)

where t is the weight for the type of event (possible values shown in Table 1), s the
SOI ratio of ei, and n the total number of interactions associated with the arc
<ei, ej>. Thus, the weight attributed to the arc <ei, ej> after n interactions is its
previous value plus the value of the last interaction multiplied by the SOI ratio of ei.
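A direct transcription of (5.1) as code follows; the event-type weights are placeholders, since the chapter's Table 1 of event values is not reproduced here.

    # Placeholder event-type weights; the chapter's Table 1 defines the actual values.
    EVENT_WEIGHT = {"create": 1.0, "update": 0.75, "view": 0.25}

    def updated_arc_weight(previous: float, event_type: str, soi_ratio: float) -> float:
        """Eq. 5.1: x_n = x_{n-1} + t * s, accumulating one interaction event."""
        return previous + EVENT_WEIGHT[event_type] * soi_ratio

    # Amy (SOI ratio 6/9) creates Account.java while working on Purchase Tickets
    w = updated_arc_weight(0.0, "create", 6 / 9)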
[Figure: weighted traceability network fragment with nodes including MovieCatalog, Movie, Ticket, and Customer]
In network analysis, centrality indices are normally used to convey the intuitive
feeling that in most networks some vertices or edges are more central than others
[14, 15]. A centrality index which suits the requirements traceability networks
definition is the Markov centrality, which can be applied to directed and weighted
graphs. To obtain the centrality of entities in this research, the weighted
requirements traceability network shown in Fig. 5.4 is viewed as a Markov chain.
White and Smyth [16] described a Markov chain as a single ‘token’ traversing
a graph in a stochastic manner for an infinitely long time, and the next node (state)
that the token moves to is a stochastic function of the properties of the current node.
They also interpreted the fraction of time (sojourn time) that the token spends at any
single node as being proportional to an estimate of the global importance or
centrality of the node relative to all other nodes in the graph. From the viewpoint
of this research, a Markov chain enables the characterisation of a token moving
from a developer to a selected use case as an indication of the relative importance of
the use case instance to the developer. Similarly, a token moving from a use case
instance to a code artefact indicates the importance of the artefact instance in
achieving the use case.
Centrality is calculated by deriving a transition matrix from the weighted
requirements traceability network, assuming that the likelihood of a token traversal
between two nodes is proportional to the weight associated with the arc linking the
nodes. The weights in a traceability network are then converted to transition
probability weights by normalising the weights on arcs associating entities with
a work context to one. Thus, transition probability is dependent on each arc weight
value and the total number of entities within a work context. Figure 5.6 gives the transition matrix for TickX. The transition probability of a token from Ticket to the Browse Movies use case is 0.0339, while the reverse probability is 0.0044. Each of the rows in the transition matrix sums to one. The algorithm and computational processes for the derivation of the transition matrix and the subsequent centrality of entities were implemented using the Java network/graph framework (JUNG) [17].
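The following self-contained sketch illustrates the computation just described: arc weights are row-normalised into transition probabilities, and the stationary distribution of the resulting chain – approximated here by power iteration – estimates each entity’s Markov centrality. It is an independent illustration of the idea, not the JUNG API, and the toy weights are invented.

import java.util.Arrays;

// Sketch: Markov centrality of a weighted traceability network.
public class MarkovCentrality {

    public static double[] centrality(double[][] w, int iterations) {
        int n = w.length;
        double[][] p = new double[n][n];
        for (int i = 0; i < n; i++) {               // row-normalise weights so
            double rowSum = 0;                       // each row sums to one
            for (double v : w[i]) rowSum += v;
            for (int j = 0; j < n; j++) p[i][j] = rowSum == 0 ? 1.0 / n : w[i][j] / rowSum;
        }
        double[] pi = new double[n];
        Arrays.fill(pi, 1.0 / n);                    // start from a uniform distribution
        for (int it = 0; it < iterations; it++) {    // pi <- pi * P
            double[] next = new double[n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++) next[j] += pi[i] * p[i][j];
            pi = next;
        }
        return pi;                                   // sojourn-time estimate per node
    }

    public static void main(String[] args) {
        // Toy 3-node network: developer, use case, artefact (assumed weights).
        double[][] w = { {0, 2, 1}, {1, 0, 3}, {1, 2, 0} };
        System.out.println(Arrays.toString(centrality(w, 100)));
    }
}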
Figure 5.8 shows a graph for TickX where the size of each entity is proportional
to its Markov centrality. This figure shows the relatively higher centrality that
MovieCatalog.java has achieved in the collaboration space.
project. The log.event component is the clearing centre and data warehouse for all events generated by the project collaborators. The messaging layer carries out asynchronous processing of request/response messages from the server. The offline.emulator component emulates the server-end functions of the model and event layers while a developer is generating interaction events in offline mode. Finally, the RCP layer resides only on the client end, and provides the minimal set of components required to build a rich client application in Eclipse.
Figure 5.7 shows a snapshot of an Eclipse view of the visualisation.rcp component. System developers can open, activate and deactivate their use cases of interest
by using the popup menu labelled 7 in Fig. 5.7. All events generated by the
developer are traced to the work context of an activated use case. The RCP layer
is also responsible for generating visualisations of requirements traceability
networks of developers, artefacts and use cases. A system developer using the
button labelled 3 in Fig. 5.7 triggers the generation of the traceability network
shown in Fig. 5.8. The size of each node corresponds to its centrality in the
traceability network. A selected node in the network can be moved around within
the visual interface to enhance clarity of trace relations for increasingly complex
trace networks.
The workflow requires that each time a developer wants to carry out a coding activity, they log in and activate an existing use case located in the central repository or create a new one. For each client workstation, only one use case can be active at a time; working on another use case requires that the developer activate the new use case, which automatically deactivates the previous one. Similarly, the active code artefact is the artefact currently being viewed, updated or created. Switching to another artefact automatically deactivates the previous artefact. This workflow enables cross-cutting relations amongst artefacts, developers and use cases since, over their lifetime, and as they are used to achieve different aspects of a project, each can be associated with any number of other instances. As events generated by the developer are traced to the work context of the active use case and artefact on the server, the centrality value of each entity instance involved in the traceability network is recalculated.
Interviews were conducted with eight of the participants (the two remaining participants were unavoidably absent). The interviews were personalised based on the use cases/system features and code artefacts that the participant had worked on. All data were anonymised for analysis and presentation. Feedback from participants suggested that the tool captured between 60% and 90% of the interaction events carried out over the study period. The remainder of this section first presents how traceability networks are used to provide insight into architectural styles, then how they help validate initial system decisions and identify potentially overloaded components, critical bottlenecks and information centres, with the ensuing architectural implications.
Our expectation is that the layouts of architectural styles unfold and are realised with the accumulation of trace events generated by stakeholders. Thus, if a traceability network can be harvested from the events associated with the achievement of system features and desired requirements, then it is also possible to infer the architectural style used to realize the specified feature or system requirement.
Insights were obtained from our initial study on the inference of architectural styles from event-based traceability networks. Figure 5.9 demonstrates a traceability network for the feature ‘File Demo’ in Gizmoball (a feature requirement that users should be able to load gizmos from file). The figure shows the different code artefacts that the developer ‘Tony’ used to realize the desired feature and the trace links between the artefacts.
Fig. 5.9 Revealed architectural styles associated with the achievement of the Gizmoball feature – ‘File Demo’
A visual arrangement and repositioning of the artefacts
in the traceability network reveals that a 2-tier architectural style is being used by
Tony to achieve ‘File Demo.’ Furthermore, each of the tiers reveals a possible
blackboard approach. This example demonstrates how different architectural styles
can be combined to achieve a specified system feature.
It is important to note that we do not claim here that the discovery and combination of architectural styles is trivial. While some styles, such as n-tier or batch sequential, are more easily recognised from the visualisation of traceability networks, other styles, such as the blackboard, require more investigation. Also, in non-trivial cases call graphs do not provide the information needed to infer styles. An example is where communication between the clients of a blackboard and the blackboard itself could be via data sharing, middleware, or network communication.
Secondly, the traceability network in Fig. 5.9 demonstrates the pivotal role played by the artefacts FileHandler and GameModel in realising the architectural style associated with File Demo. The two artefacts are responsible for linking the two different blackboard styles to reveal a 2-tier architectural style. This becomes obvious due to our use of different node sizes based on centrality, demonstrating the advantage of this visualisation. Furthermore, for every new link amongst artefacts that is subsequently introduced by collaborators to the network, the trace network reveals the corresponding adaptation required in the initial architectural rationale for the associated feature of the system.
This study also reveals that traceability networks for non-trivial projects can be overwhelming, with hundreds or thousands of components. The implied architectural style used to achieve ‘File Demo’ was revealed by a simple manual visual rearrangement of existing nodes in the network. To support bigger projects, further work is needed on the automatic machine learning of architectural styles from a given traceability network.
One of the important lessons learned from the repository of event-based traceability networks during the six-week study relates to the information that can be derived from an entity’s centrality measures. An entity’s centrality is useful in revealing a number of latent properties in the trace relation between requirements, code components and the underlying system/software architecture. For instance, a high centrality measure for a developer may suggest that they are working with many parts of the system. Such high centrality for developers can further suggest that the components and system features they are working on are crucial to achieving the system and hence are central to the development process.
The study showed that stakeholders built a perception of their expected centrality measures for entities in the trace network. These expectations are envisaged
based on previous decisions made on achieving the system. Such expectations are
then used to monitor the state of the system. An example is the traceability network shown in Fig. 5.10, involving collaboration between Greg, Boris and Blair to achieve Gizmoball. Two project stages were identified – ‘From Demo to Final’ (translate game demo to final mode) and ‘JUnit Tests’ (generate test cases for each gizmo object). Forty-five artefacts were identified as being used to achieve these use cases. While the major responsibility for achieving ‘From Demo to Final’ was assigned to Boris, the responsibility for JUnit Tests was mainly assigned to Blair.
A snippet from Boris demonstrating insight he obtained while navigating the
traceability network generated as a result of their collaboration (Fig. 5.10) is
shown below:
Boris: …If we have done ‘JUnit Test’ how come it only relates to Gizmo.java, Square.java and GizmoModel.java…? Because I know that it should be looking at virtually all of the code… …there is more work to be done in ‘JUnit Tests’
This feedback suggests that Boris was expecting JUnit Tests to have a higher centrality in the network. He also expected the use case to be related to more code artefacts. This is because they had decided to use test-driven development and as such needed every code artefact to be assigned a test case. While they had agreed and documented their decision on test-driven development in their previous group meeting, the traceability network of the current state of the project suggested that there was still much work to be done to achieve their agreed objective.
Fig. 5.11 Requirements traceability network involving 92 code artefacts, five system features and
three developers
There are some methods and guidance available that help in developing and tracing system requirements into an architecture satisfying those requirements. The work presented by Grünbacher et al. [19, 20] on CBSP (Component, Bus, System, Property) focuses on reconciling requirements and system architectures. Grünbacher et al.’s approach has been applied to the EasyWinWin requirements negotiation technique and the C2 architectural models. The approach taken in our work differs from Grünbacher et al. as our focus is rather on using a requirements traceability approach to help collaborating developers understand the architectural implications of each action they perform.
A closely related work is that presented on architectural design recovery by Jakobac et al. [21–23]. The main motivation for their work is the frequent deviation of developers from the original architecture, causing architectural erosion – a phenomenon in which the initial architecture of an application is (arbitrarily) modified to the point where its key properties no longer hold. The approach assumes that a given system’s implementation is available, while the architecturally relevant information either does not exist, is incomplete, or is unreliable. Jakobac et al. then used source code analysis techniques for architectural recovery from the system’s implementation. Finally, architectural styles were then leveraged to identify and reconcile any mismatch between existing and recovered architectural models. A distinction of our work from Jakobac et al.’s approach is the association of requirement use cases or desired system features with the subsequent tangible architectural style used to realize the feature or use case. Furthermore, our traceability links are harvested in real time as the system is being realized. Harvested traces are subsequently used to provide developers with information about the revealed architecture based on the work currently being carried out. We provide pointers to potential bottlenecks and information centres that exist as a result of an initial architectural rationale.
There are a number of other reverse engineering approaches by which the architectures of software systems can be recovered. For instance, IBIS and Compendium, originating from the work of Kunz and Rittel [24], provide the capability to facilitate the management of architectural arguments. Mendonça and Kramer [25] presented an exploratory reverse engineering approach called X-ray to aid programmers in recovering architectural runtime information from a distributed system’s existing software artefacts. Also, Guo et al. [26] used static analysis to recover software architectures. Guo et al.’s approach extracted software architecture based on program slicing and parameter analysis, and dependencies between the objects based on relation partition algebra. However, these approaches do not directly focus on how such extracted architectures are related to stakeholders’ requirements of the system. Again, there are different approaches to harvesting traceability networks. This research has focused on an event-based approach for the automated harvesting of heterogeneous relations, and on call graphs to retrieve homogeneous trace links between the components achieving the system. Other automated mechanisms for harvesting traceability networks include the use of information retrieval mechanisms and scenario-driven approaches. Traceability networks generated from information retrieval techniques are based on the similarity of terms used in expressing requirements and design artefacts [27–29]. The scenario-driven approach is accomplished by observing the runtime behaviour of test scenarios. Observed behaviour is then translated into a graph structure to indicate commonalities among entities associated with the behaviour [30].
Mäder et al. [24] proposed an approach for the automated update of existing traceability relations during the evolution and refinement of UML analysis and design models. The approach observes elementary changes applied to UML models, recognises the broader development activities and triggers the automated update of impacted traceability relations. The elementary change events on model elements include add, delete and modify. The broader development activity is recognised using a set of rules that help in associating an elementary change with the constituent parts of an intentional development activity. The key similarity between the approach in this research and Mäder et al.’s approach is the focus on maintaining up-to-date post-requirements traceability relations. In addition, our approach provides a perception of the centrality of traced entities.
References
25. Mendonça N, Kramer J (2001) An approach for recovering distributed system architectures. Automated Softw Eng 8(3–4):311–354
26. Guo J, Liao Y, Pamula R (2006) Static analysis based software architecture recovery. In: Computational science and its applications – ICCSA 2006
27. Antoniol G et al (2002) Recovering traceability links between code and documentation. IEEE Trans Softw Eng 28(10):970–983
28. Oliveto R (2008) Traceability management meets information retrieval methods: strengths and limitations. In: Proceedings of the 12th European conference on software maintenance and reengineering. IEEE Computer Society, Athens
29. Lormans M, van Deursen A (2005) Reconstructing requirements coverage views from design and test using traceability recovery via LSI. In: Proceedings of the 3rd international workshop on traceability in emerging forms of software engineering. ACM, Long Beach
30. Egyed A (2006) Tailoring software traceability to value-based needs. In: Biffl S, Aurum A, Boehm B, Erdogmus H, Grünbacher P (eds) Value-based software engineering. Springer, Heidelberg, pp 287–308
Part II
Tools and Techniques
Chapter 6
Tools and Techniques
In software engineering, tools and techniques are essential for many purposes. They can provide guidance for following a certain software development process or a selected software lifecycle model. They can support various stakeholders in validating the compliance of development results with quality criteria spanning from technical non-functional requirements to business/organizational strategies. Finally, tools and techniques may help various types of stakeholders codify and retrieve the knowledge necessary for decision making throughout their development journey, hence providing reasonable confidence that the resulting software systems will execute correctly, fulfill customer requirements, and cost-effectively accommodate future changes.
Tools and techniques supporting various activities in requirements engineering [1, 2] and software architecting [3, 4] have been devised in both industry and academia. Looking at the state of the art and practice, we draw two observations. First, independently of the software lifecycle phase they cover, most existing tools and techniques concentrate on delivering a satisfactory solution, this being a requirements specification (the result of engineering requirements), an architectural model (the result of architecture design), or the implemented software system (the result of coding). Each of us can certainly think of tens of examples of development environments, CASE tools or stand-alone applications that fall into this category. It is also true that recent research and development efforts have been and are being dedicated to supporting the reasoning process and decision-making leading to such solutions. Here, too, we can identify a number of tools aimed at supporting various stakeholders with the knowledge they need as input (or produce as output) to come up with the satisfactory solution mentioned above [5]. Unfortunately, we see a gap not yet filled: techniques and tools supporting the actual decision-making process as such [6]. For instance, modeling notations focus on modeling the resulting software system (the chosen solution) at different levels of abstraction. They are insufficient to help practitioners make logical decisions (choices based on logical and sound reasoning) because developers can be subject to cognitive biases [7], e.g. making decisions driven by available expertise instead of optimally solving the problem at hand. In other words, as emphasized in [7],
Last but not least, Chap. 11 by Len Bass and Paul Clements delves into business goals as a well-known source of requirements that is seldom captured in an explicit manner. As a consequence, system architectures end up being misaligned with such business goals, potentially leading to IT project failures. The proposed technique includes a reference classification of business goals as a tool to drive elicitation, and a method to help architects elicit business goals, relevant associated quality requirements and architecture drivers.
References
7.1 Introduction
Software architecture (SA) defines the structure and organization by which system components interact [22] to meet both functional and non-functional requirements (FRs and NFRs). However, multiple architectures can be defined for the same FRs, each with different quality attributes. As a result, software architecting is often driven by quality, or NFRs [15].
However, designing software architectures to meet both FRs and NFRs is not trivial. A number of architectural design methods have been introduced to provide guidelines for the three general architectural design activities [15]: (1) architectural analysis, an activity that identifies architecturally significant requirements (ASRs); (2) architectural synthesis, an activity that designs or obtains one or more candidate architectures; and (3) architectural evaluation, an activity that evaluates and selects the architecture that best meets the ASRs.
[Fig. 7.1 (fragment): steps 1.2.2/1.3.2 ‘Explore alternative tasks’/‘Explore architectural decisions’ and steps 1.2.3/1.3.3 ‘Select tasks’/‘Select architectural decisions’]
In this step, FRs are captured as hardgoals and NFRs as softgoals to be achieved. FRs are treated as hardgoals as they generally have clear-cut achievement criteria. For instance, a hardgoal of having an ambulance dispatched is satisfied when an ambulance has been dispatched, an absolute proposition. On the other hand, NFRs are treated as softgoals in this approach since many of them have less clear-cut definitions and achievement criteria. For instance, it is difficult to precisely define security without using other NFR terms, such as confidentiality and integrity, which in turn will have to be defined. It is also difficult to determine concrete criteria for security, since it may be unacceptable to allow a certain number of compromises per period of time while, on the other hand, it is impossible to guarantee that a system will never be compromised. Therefore, oftentimes, a softgoal can only be satisficed, referring to the notion of “good enough” satisfaction [6].
To achieve the goals, alternatives are explored, evaluated, and selected based on
tradeoff analyses in consideration of NFRs that are originally posted and those
identified as side-effects [6, 25, 33]. Each selected means is then assigned to be
fulfilled by an external agent or the system under development [11].
[Fig. 7.2 diagram: the hardgoal Ambulance arrived at scene is AND-decomposed into Ambulance request handled, Ambulance dispatched, Ambulance transported and Ambulance tracked; the softgoal Quality [Ambulance arrived at scene] is refined into softgoals such as Development cost, Resilience, Development time and Response time; alternatives such as Primary-backup and Clustering carry contribution links (Make/++, Help/+, Hurt/–, Break/––) and a claim ‘More difficult to design and develop’. The legend covers hardgoals, tasks, softgoals, achievement means, claims, external entities, target hardware/software, AND-decomposition, means-end and side-effect links, agents, responsibility assignments, and achieved/denied labels (weakly/critically)]
Using an ambulance dispatch system as an example, Fig. 7.2 shows a partial goal model that may be produced during the goal-oriented requirements analysis step. Parts of the model are omitted for brevity where denoted by “…”.
On the left-hand side of Fig. 7.2, corresponding to step 1.2.1 in Fig. 7.1, Ambulance arrived at scene is the ultimate hardgoal of the system; it is refined using an AND-decomposition into four sub-goals, including Ambulance tracked, a goal that is further AND-decomposed into the Ambulance status tracked and Ambulance location tracked sub-goals. Each sub-goal can in turn be further decomposed as necessary until the goal is sufficiently refined and agreeable to the stakeholders. Corresponding to step 1.2.2, each leaf goal is then used as a goal to explore alternative tasks needed to operationalize it. For instance, the Ambulance location tracked goal may be achieved by either the Voice call to report the location or the Automatically track the location alternative tasks. Corresponding to step 1.2.3, the latter alternative is chosen for its more favorable contribution towards the Response time softgoal, as its Make/++ contribution towards the goal is preferred to the Hurt/– contribution of the former alternative.
Consider the right-hand side of the goal model for a moment. Corresponding to step 1.3.1, the Quality [Ambulance arrived at scene] softgoal represents the overall desirable quality in the context of the FRs as represented by the root hardgoal.
7 Goal-Oriented Software Architecting 95
A relationship between a hardgoal and a domain entity is defined in terms of the role that the goal plays in relation to the entity. Two roles are determined using the following rules:
R1: Producer Goal. A hardgoal is considered a producer goal of a domain entity if the fulfillment of the goal necessitates changes to the domain entity. This relationship is represented by a uni-directional line from the producer goal towards the domain entity.
R2: Consumer Goal. A hardgoal is considered a consumer goal of a domain entity if the fulfillment of the goal necessitates use of information from the domain entity.
[Fig. 7.4 diagram: Ambulance tracked as producer goal (R1) and Resource assigned as consumer goal (R2) of the Resource entity; rule R3.1 derives the ResourceMgr component, and rule R3.2 derives its operations recordCurrentLocation(), identifyClosestResource() and reserveResource() from the tasks Record current location, Identify closest resource and Reserve resource assigned to the Ambulance Dispatch System]
This step defines architectural components and their inter-dependencies using the goal model and goal-entity relationships. Two kinds of components are derived: process and interface components. A process component provides services related to the corresponding entity, while an interface component handles the communication between the system and an external entity. The services provided by a component are represented by component operations. Process components and their operations are determined by the following rules:
R3: Process Component Derivation.
R3.1: Define a process component using the nomenclature “<domain-entity>Mgr” for each domain entity that is associated with both a producer and a consumer hardgoal.
R3.2: For each derived process component, define an operation for each selected
task that is assigned to the system under development.
Figure 7.4 shows examples of applying rules R1, R2, and R3. Here, the ResourceMgr process component is derived because the Resource entity is associated with both Ambulance tracked as a producer goal and Resource assigned as a consumer goal. Three operations – recordCurrentLocation, identifyClosestResource, and reserveResource – are derived from the three tasks assigned to the Ambulance Dispatch System, the system under development.
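A minimal sketch of how rules R3.1 and R3.2 could be applied mechanically is shown below; the entity-to-task mapping is taken from the Fig. 7.4 example, and the output format is ours, not a prescribed notation.

import java.util.List;
import java.util.Map;

// Sketch of rules R3.1/R3.2: each domain entity with both a producer and a
// consumer hardgoal yields a "<domain-entity>Mgr" process component, with one
// operation per selected task assigned to the system under development.
public class ProcessComponentDerivation {

    public static void main(String[] args) {
        // Entity -> tasks assigned to the Ambulance Dispatch System (from Fig. 7.4).
        Map<String, List<String>> entityTasks = Map.of(
                "Resource", List.of("recordCurrentLocation",
                                    "identifyClosestResource",
                                    "reserveResource"));

        entityTasks.forEach((entity, ops) -> {        // R3.1: name the component
            System.out.println("component " + entity + "Mgr");
            ops.forEach(op ->                          // R3.2: one operation per task
                    System.out.println("  + " + op + "()"));
        });
    }
}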
The rules for determining interface components and their operations are:
R4: Interface Component Derivation.
R4.1: Define an interface component using the nomenclature “<external-agent>Interface” for each external agent that is assigned to carry out one or more tasks.
R4.2: For each derived interface component, define an operation for each task that is assigned to the agent.
[Fig. 7.5 diagram – an example of rule R4 application: for the external agent AVLS, rule R4.1 derives the AVLSInterface component and rule R4.2 derives its identifyCurrentLocation() operation from the Identify current location task supporting the Ambulance location tracked goal]
[Figure: ResourceMgr and DispatchMgr process components, derived via R3.1 for the Resource and Dispatch entities, with a component dependency derived via R5]
In this step, the logical architecture is refined using an architectural style to map
component dependencies to concrete connectors and using architectural patterns
to realize the architectural decisions made during the goal-oriented require-
ments analysis.
[Figure: a component dependency in the logical architecture – ResourceMgr (depender component), derived via R3.1 from the Ambulance location tracked and Resource assigned goals, depends on AVLSInterface (dependee component), derived via R4.1, through the Automatically track location and Identify current location tasks]
The notions of architectural styles and patterns are often used interchangeably [17] to achieve NFRs [13]. But in this approach, they are used for two subtly different purposes: a style is used to affect how all components interact to achieve the desirable quality attributes [28], while a pattern is used to affect other aspects of the architecture in a less predominant manner, for instance to achieve NFRs such as concurrency and persistency [4].
In this step, each component dependency in the logical architecture is refined using an architectural style chosen through a qualitative goal-oriented tradeoff analysis. Figure 7.8 shows a goal-oriented tradeoff analysis for selecting among several styles, including Abstract Data Type, Shared Data, Implicit Invocation, and Pipe & Filter, in consideration of the Comprehensibility, Modifiability, Performance, and Reusability softgoals [8].
The selection is made by evaluating each alternative in turn to determine whether its overall impacts on the softgoals are acceptable. An impact on a softgoal is determined by applying the label propagation procedure [7]. For example, Abstract Data Type is labeled “satisficed” (denoted by a checkmark) to represent the selection. The label is propagated to a “Weakly Satisficed” (W+) label on the Time performance softgoal over the Help/+ contribution, while the same label is propagated over a Break/–– contribution to a “denied” label (denoted by a cross) on Extensibility [Function]. In this example, Abstract Data Type provides the most acceptable compromises based on the weakly satisficed labels (denoted by W+) on more critical softgoals (Time performance, Modifiability [Data rep] and Modifiability [Function]), and the weakly denied labels on less critical softgoals (Modifiability [Process], Extensibility [Function], Space performance).
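The following sketch illustrates the label propagation step for a single selection, using the usual NFR Framework mapping from contribution types to propagated labels (Make to satisficed, Help to W+, Hurt to W–, Break to denied); it is a simplification of the full procedure in [7].

import java.util.List;

// Sketch: propagating a "satisficed" selection label over contribution links.
public class LabelPropagation {

    enum Label { SATISFICED, WEAKLY_SATISFICED, WEAKLY_DENIED, DENIED }
    enum Contribution { MAKE, HELP, HURT, BREAK } // ++, +, -, --

    record Link(String softgoal, Contribution contribution) { }

    static Label propagate(Contribution c) {
        return switch (c) {
            case MAKE -> Label.SATISFICED;          // ++ : fully satisficed
            case HELP -> Label.WEAKLY_SATISFICED;   // +  : W+
            case HURT -> Label.WEAKLY_DENIED;       // -  : W-
            case BREAK -> Label.DENIED;             // -- : denied
        };
    }

    public static void main(String[] args) {
        // Selecting "Abstract Data Type" with two of its contributions from Fig. 7.8:
        List<Link> links = List.of(
                new Link("Time performance", Contribution.HELP),
                new Link("Extensibility [Function]", Contribution.BREAK));
        for (Link l : links)
            System.out.println(l.softgoal() + " -> " + propagate(l.contribution()));
    }
}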
The selected architectural style is then applied to each component dependency in the logical architecture. Figure 7.9 shows the application of the abstract data type and pipe & filter styles.
Fig. 7.8 A goal-oriented tradeoff analysis for selecting a desirable architectural style (the softgoal graph weighs Comprehensibility, Modifiability [System, Function, Process, Data Rep], Extensibility [Function], Updatability and Deletability [Function], Space and Time Performance, and Reusability)
Fig. 7.9 Applying the abstract data type style (a) and the pipe & filter style (b) using UML
The architectural decisions made during the goal-oriented analysis are applied to the logical architecture in this step. For instance, the decision to use hot-standby backup to achieve the Resilience softgoal in Fig. 7.2 may be applied by using a primary-backup architectural pattern. Figure 7.10 shows the resulting system architecture, in which two copies of the system (DispatchSystem.1 and DispatchSystem.2) are operational and both simultaneously receive inputs from external entities (CallTaker and AVLS). If the primary copy fails, the hot-standby copy immediately takes over as the new primary. The failed copy is then repaired, brought back into operation and marked as the new hot-standby copy. If multiple patterns are considered, they may be chosen using a goal-oriented tradeoff analysis similar to that shown in Fig. 7.8.
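A minimal sketch of the hot-standby behaviour just described is given below: both copies receive every input, only the primary responds, and on failure the backup is promoted. Failure detection (e.g. heartbeating) and state synchronisation are omitted, and all names are illustrative.

import java.util.List;

// Sketch of the primary-backup (hot-standby) pattern.
public class HotStandby {

    static class DispatchSystem {
        final String name;
        boolean primary;
        DispatchSystem(String name, boolean primary) { this.name = name; this.primary = primary; }
        void receive(String input) {                // both copies stay in sync,
            if (primary) System.out.println(name + " handles: " + input); // one responds
        }
    }

    public static void main(String[] args) {
        DispatchSystem s1 = new DispatchSystem("DispatchSystem.1", true);
        DispatchSystem s2 = new DispatchSystem("DispatchSystem.2", false);
        List<DispatchSystem> copies = List.of(s1, s2);

        copies.forEach(s -> s.receive("ambulance request #1"));

        s1.primary = false;                          // primary fails ...
        s2.primary = true;                           // ... hot-standby takes over
        copies.forEach(s -> s.receive("ambulance request #2"));
    }
}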
The goal-oriented software architecting (GOSA) approach has been applied to the 1992 London ambulance dispatch system [26] in an empirical study – a controlled experiment performed by the authors using publicly available case reports (e.g. [26] and [20]) – with the following objectives: (1) to study how the approach can be used to systematically design a reasonable candidate software architecture from requirements, and (2) to study how the approach helps capture and maintain relationships between requirements and the resulting software architecture, as well as the design rationale.
7.4 Results
Figure 7.11 shows the final software architecture in UML, which consists of 10 components and 10 connectors. The components are four process components, whose names end with the “Mgr” suffix, and six interface components, whose names end with the “Interface” suffix. The connectors are represented by 10 coupled UML “provided” and “required” interfaces: three process-to-process component connectors (those among RequestMgr, IncidentMgr, and DispatchMgr) and seven process-to-interface component connectors (those connecting CallTakerInterface, ResourceAllocatorInterface, AVLSInterface, MapServerInterface, StationPrinterInterface, and MDTInterface). The relationships between the requirements and the resulting architecture are the relationships among the elements in the goal model (hardgoals, softgoals, and domain entities), the logical architecture (components, component dependencies), the concrete architecture (components and connectors), and the rules used, as described in Sect. 7.2.
Based on the case study research [19], threats to the validity of the empirical study are discussed in this section in terms of construct validity, internal validity, external validity, and experimental reliability.
Fig. 7.11 The final software architecture for the empirical study
Internal validity, or the causal relationship between the input and output of the GOSA process, is illustrated by the artifacts produced from applying the GOSA process. The process and artifacts shown in Sect. 7.2 illustrate the step-by-step transition from goals to alternatives, to a logical architecture, and finally to a concrete architecture through model refinements and rule applications. Through this detailed process, each element in the architecture is traceable back to the source FRs and NFRs, along with the design rationales.
External validity, or the domains to which the study’s findings can be generalized, is suggested by the application domain in the study. The London ambulance dispatch system may be considered a typical information system where the system interacts with the users and manipulates business objects. To help support this observation, we have also applied the GOSA approach to two other information systems: an airfare pricing system and a catering pricing system. These studies also produced architectures with similar architectural characteristics. However, the results from those supplemental studies were inconclusive due to the lack of control architectures, as they were proprietary software systems. For other application domains, such as environmental control domains where the system controls the environment based on pre-defined rules, this study neither confirms nor invalidates the applicability of the GOSA approach.
With respect to experimental reliability, or the ability to repeat the study, the main factor affecting reliability appears to be the subject who carries out the study. Any subject applying GOSA should produce architectures with similar characteristics. However, the goal-oriented analysis skill of the subject and his or her subjectiveness in making tradeoff analyses may affect the resulting architectures. The possible variations are due to the subjectiveness in interpreting textual subject matter literature, since text is generally imprecise by nature and is oftentimes ambiguous, incomplete, or conflicting. As a result, it is difficult for different subjects to arrive at the same interpretation. Similarly, NFRs by nature can be subjective and oftentimes conflicting. Different subjects may arrive at different tradeoff decisions. Such unreliability elevates the need for explicit and (semi-)formal representation of requirements, for instance using a goal-oriented approach, so that the interpretation and decision making are more transparent and agreeable among the stakeholders. To help achieve reliable and repeatable goal-oriented analysis, patterns of goals, alternatives, and their tradeoffs may be used for similar or applicable domains [30].
As a goal-oriented method, the GOSA approach adopts and integrates three well-known goal-oriented methods in a complementary manner: the i* Framework [33] for analyzing FRs, the NFR Framework [7] for analyzing NFRs, and KAOS [11] for representing agent responsibility assignments. The GOSA approach further extends the three underlying methods beyond goal-oriented requirements engineering to goal-oriented software architecting.
Deriving software architectures from system goals has been proposed in an extension of the KAOS method [32] and the Preskriptor process [5], the extended KAOS being the most closely related to our approach. At a high level, both the extended KAOS and our GOSA approach capture requirements as goals to be achieved, which are then used to derive a logical architecture and subsequently a concrete architecture. However, the goal model in the extended KAOS approach is more formal and is mainly concerned with functional goals, while the goal model in the GOSA method deals with both functional and non-functional goals. Non-functional goals are used in the GOSA method for explicit qualitative tradeoff analysis. Furthermore, the GOSA method derives process and interface components, and their operations, from the goal model.
A software architecture is generally defined with a number of architectural constituents, including components, connectors, ports, and roles. In the GOSA approach, components and connectors are derived from the goal model and the goal-entity relationships using the mapping rules and architectural style and pattern applications. Since component ports and roles are not generally determined directly by requirements, they are defined in the patterns and are not derived from the requirements.
A number of industrial methods have been used to support the various architectural design activities: architectural analysis, synthesis, and evaluation. Examples of methods that support architectural synthesis include the Attribute-Driven Design (ADD) method [1], the RUP 4+1 Views [21], the Siemens Four-Views (S4V) [16], the BAPO/CAFCR method [15], the Architectural Separation of Concerns (ASC) [27], and The Open Group Architecture Framework (TOGAF) [31] methods. Many of these methods provide high-level, notation-neutral processes with common features such as scenario-driven and view-based architectural design. Our GOSA approach shares a number of these features. For instance, similar to the ADD method, the GOSA approach explores and evaluates different architectural tactics and uses styles and patterns during architectural design. However, the alternatives are represented and reasoned about more formally using goal-oriented modeling and analysis.
For the architectural evaluation activity, many methods use scenarios to provide context for the evaluation. These methods include the Scenario-Based Architecture Analysis Method (SAAM) [18], the Architecture Trade-off Analysis Method (ATAM) [9], and the Scenario-Based Architecture Reengineering (SBAR) method [2]. In these approaches, NFRs are generally defined by metrics to
7.6 Conclusion
Acknowledgments The authors would like to thank the anonymous reviewers for their valuable
comments that greatly helped improve the presentation of this chapter.
References
23. Monroe B, Garlan D, Wile D (1996) ACME BNF and examples. In: Microsoft component-based software development workshop
24. Mylopoulos J, Chung L, Nixon BA (1992) Representing and using nonfunctional requirements: a process-oriented approach. IEEE Trans Softw Eng 18(6):483–497
25. Mylopoulos J (2006) Goal-oriented requirements engineering, part II. Keynote. In: 14th IEEE international requirements engineering conference
26. Page D, Williams P, Boyd D (1993) Report of the inquiry into the London Ambulance Service. South West Thames Regional Health Authority
27. Ran A (2000) ARES conceptual framework for software architecture. In: Jazayeri M, Ran A, van der Linden F (eds) Software architecture for product families: principles and practice. Addison-Wesley, Boston
28. Shaw M, Garlan D (1996) Software architecture: perspectives on an emerging discipline. Prentice-Hall, Upper Saddle River
29. Supakkul S, Chung L (2005) A UML profile for goal-oriented and use case-driven representation of NFRs and FRs. In: Proceedings of the 3rd international conference on software engineering research, management and applications (SERA05)
30. Supakkul S, Hill T, Chung L, Tun TT, Leite JCSP (2010) An NFR pattern approach to dealing with NFRs. In: 18th IEEE international requirements engineering conference, pp 179–188
31. The Open Group (2003) The Open Group Architecture Framework v8.1. http://www.opengroup.org/architecture/togaf8-doc/arch, accessed 19 Dec 2003
32. van Lamsweerde A (2003) From system goals to software architecture. In: Formal methods for software architectures. LNCS 2804:25–43
33. Yu E, Mylopoulos J (1994) Understanding “why” in software process modelling, analysis, and design. In: Proceedings of the 16th international conference on software engineering, pp 159–168
34. Zhao J (1997) Using dependence analysis to support software architecture understanding. In: Li M (ed) New technologies on computer software. International Academic Publishers, pp 135–142
Chapter 8
Product-Line Models to Address Requirements
Uncertainty, Volatility and Risk
8.1 Introduction
products, and feature model semantics [8] in which the underlying propositional
structure of the feature model is examined. The former is useful in uncertainty
analysis as it provides a way for the design to track the gradual evolution of changes
to the requirements; the latter is problematic because it generally normalises the
propositional selection structure into a canonical form. Such a normalised model
could make it difficult to precisely identify and manage variation points that exist
because of uncertainty.
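To make the propositional view concrete, the following sketch keeps each feature-model constraint as a separate, named predicate over a configuration, so each variation point remains individually identifiable – precisely what normalisation into a canonical form would obscure. The feature names are invented for illustration.

import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;

// Sketch of the propositional semantics of a feature model: dependencies and
// mutual exclusions become constraints, and a configuration is valid only if
// every constraint holds.
public class FeatureModelCheck {

    record Constraint(String description, Predicate<Set<String>> holds) { }

    public static void main(String[] args) {
        Map<String, Constraint> constraints = Map.of(
                "c1", new Constraint("OilTempReport requires ARINC",
                        sel -> !sel.contains("OilTempReport") || sel.contains("ARINC")),
                "c2", new Constraint("RawValue excludes StatusValue",
                        sel -> !(sel.contains("RawValue") && sel.contains("StatusValue"))));

        Set<String> configuration = Set.of("OilTempReport", "ARINC", "RawValue");
        constraints.forEach((id, c) ->
                System.out.println(id + " (" + c.description() + "): "
                        + c.holds().test(configuration)));
    }
}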
Domain-specific languages [9] often complement feature models; while a fea-
ture tree is good for simple dependencies and mutual exclusions, a domain-specific
language is better able to cope with multiple parameters with many possible values.
Domain-specific languages are typically used along with automated code genera-
tion and assembly of predefined artefacts [10]. Given suitable experience and tool
support for small-scale domain-specific languages, it may be feasible to use such
approaches in making a commitment to a design for uncertain requirements.
In addition to these techniques, some more general architectural strategies are
often used with product-line engineering, and would be suitable candidates for
design decisions for uncertain requirements. These include explicitly-defined
abstract interfaces that constrain the interactions of a component; decoupling of
components that relate to different concerns; and provision of component parameters
that specialise and customise components to fit the surrounding context [11].
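The sketch below illustrates two of these strategies in miniature – an explicitly-defined abstract interface constraining how a component is used, and a component parameter (here a valid signal range) that specialises the component to its context. The names and values are ours, not from any product line discussed here.

// Sketch: abstract interface plus component parameterisation.
public class ParameterisedComponent {

    interface SignalValidator {                      // abstract interface
        boolean validate(double rawSignal);
    }

    // The valid range is a component parameter supplied by the surrounding
    // context, so the same component fits different deployments.
    record RangeValidator(double min, double max) implements SignalValidator {
        public boolean validate(double rawSignal) {
            return rawSignal >= min && rawSignal <= max;
        }
    }

    public static void main(String[] args) {
        SignalValidator v = new RangeValidator(-40.0, 150.0);
        System.out.println(v.validate(95.0));   // true
        System.out.println(v.validate(200.0));  // false
    }
}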
[Fig. 8.1 diagram: uncertainty prompts (p1) feed ‘Assess requirements issues’, which exchanges clarifications with the requirements writer context; the identified uncertainties, with volatile/non-volatile decoupling (p2), feed ‘Derive flexibility requirements’; the resulting flexibility requirements, with a design prompt list (p3), feed ‘Determine appropriate design strategy’]
Fig. 8.1 Overview of the commitment uncertainty process. Specific materials on the left-hand
side are explained in this chapter. The clarification process on the right-hand side is outside of the
scope of this chapter
The checklist of requirements issues is shown in Table 8.1. The list contains issues related to the linguistic structure of the requirement as well as issues that relate to its technical content. In this analysis technique, we recommend an explicit record of uncertainty, linking to relevant supporting information, to enable effective impact analysis. This is similar to the use of domain models and traceability in product-line engineering, and is essential to effective uncertainty risk management.
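One possible shape for such an explicit record of uncertainty is sketched below: each entry ties a requirement to the observed issue and to supporting information for later impact analysis. The fields and link identifiers are illustrative assumptions.

import java.util.List;

// Sketch of an explicit record of uncertainty for impact analysis.
public class UncertaintyRecord {

    record Uncertainty(String requirementId, String issue,
                       List<String> supportingLinks) { }

    public static void main(String[] args) {
        Uncertainty u = new Uncertainty(
                "110",
                "'indication of ... status' is redundant or poorly defined",
                List.of("interface-control-doc#P100-IDG", "review-notes"));
        System.out.println(u);
    }
}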
In practice, a single requirement may include many forms of uncertainty, which
may interact with one another. Rather than trying to classify all such interactions,
the analysis prompts the reader into thinking about the requirement from different
standpoints. In some situations, no particular reason for the uncertainty will emerge.
116 Z. Stephenson et al.
A pragmatic trade-off must be made between the effort of explaining the uncer-
tainty and the costs saved by performing the analysis.
As an example, consider this sample requirement:
110. The system shall provide an indication of the IDG oil temperature status to the aircraft
via ARINC.
This requirement suffers from (at least) two different uncertainties. Firstly, the
phrase “an indication of the IDG oil temperature status” contains words (“indica-
tion”, “status”) that are either redundant (meaning ‘provide the IDG oil tempera-
ture’) or poorly-defined (meaning to get the status of the oil temperature, and then
transmit an indication of the status).
In all likelihood, the real requirement is the following:
110a. The system shall provide its measurement of the IDG oil temperature to the aircraft
via ARINC.
The second issue is that the requirement does not directly identify where to find
out information about the message format or value encoding. The link could be
written into the requirement:
110b. The system shall provide its measurement of the IDG oil temperature to the aircraft
via ARINC according to protocol P100-IDG.
It is more likely that the link would be provided through a tool-specific trace-
ability mechanism to an interface control document.
We recognise that there is also value in trying to understand the context within
which a requirement is written; given a set of such requirements, it may be possible
to infer details of the context and hence suggest specific actions to take to address
uncertainty. Table 8.2 explains possible reasons for problems appearing in
requirements. Rather than capturing every requirements issue, we instead present
possible issues for the engineer to consider when deciding on how much informa-
tion to feed back to the requirements writer and when. We make no claim at this
stage that the table is complete; however, it covers a number of different aspects
that might not ordinarily be considered, and on that basis we feel it should be
considered at least potentially useful.
In practice, it will rarely be possible to obtain a credible picture of the
requirements writer context. Nevertheless, it can be useful to consider the possible
context to at least try to understand and accommodate delays in the requirements.
Explicitly recording assumptions about the writer also facilitates useful discussion
among different engineers, particularly if there are actually multiple issues behind the
problems in a particular requirement. Finally, the overall benefit of this identification
step is that it gives clear tasks for the engineer, reducing the so-called “task uncer-
tainty” and improving the ability to make useful progress against the requirements.
With the requirement volatility captured, the requirement is then restated into
the same form as the original requirement. This is the shadow requirement, and
represents what the engineer will actually work to. The final step is to double-check
the result by checking that the original requirement, as stated, is one possible
instantiation of the shadow requirement.
Consider the sample requirement 110 again:
110. The system shall provide an indication of the IDG oil temperature status to the aircraft
via ARINC.
[Fig. 8.2 diagram: product-line skills (e1) and the captured volatility (e2) feed ‘Create shadow requirement’; the resulting shadow requirement is then checked in ‘Validate shadow requirement’ (e3)]
Fig. 8.2 Overview of deriving flexibility requirements. Volatility (expected changes) is specified
and then factored into the requirement to create a family of related requirements
Instantiations:
110a. The system shall provide its measurement of IDG oil temperature to the aircraft via
ARINC.
110b. The system shall provide a count of current IDG oil temperature faults to the aircraft
via ARINC.
110c. The system shall provide IDG oil temperature, operating limits and rate of change to
the aircraft via ARINC.
One possible shadow requirement for the IDG oil temperature requirement
would be:
110x. The system shall provide (IDG oil temperature|IDG oil temperature faults|IDG oil
temperature presence/absence of faults|IDG oil temperature, operating limits and rate of
change) to the aircraft via ARINC [using protocol (P)].
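Viewed mechanically, a shadow requirement such as 110x is a small requirement family: a list of alternatives plus an optional protocol parameter, each combination yielding one concrete requirement. The sketch below expands such a template; the rendering is ours, not a prescribed notation.

import java.util.List;

// Sketch: expanding a shadow requirement into its instantiations.
public class ShadowRequirement {

    static List<String> instantiate(List<String> alternatives, String protocol) {
        return alternatives.stream()
                .map(alt -> "The system shall provide " + alt + " to the aircraft via ARINC"
                        + (protocol == null ? "" : " using protocol " + protocol) + ".")
                .toList();
    }

    public static void main(String[] args) {
        List<String> alts = List.of(
                "IDG oil temperature",
                "IDG oil temperature faults",
                "IDG oil temperature presence/absence of faults",
                "IDG oil temperature, operating limits and rate of change");
        instantiate(alts, "P100-IDG").forEach(System.out::println);
    }
}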
It is not intended that the commitment uncertainty approach should constrain the
type of implementation chosen to accommodate the identified uncertainty. Never-
theless, it is useful to give advice on the type of design approach that is likely to be
successful, as a way to further overcome task uncertainty and improve the likeli-
hood of quickly arriving at a suitable design.
The advice is based on a recognition that a design approach will typically
respond to a group of requirements. We take as input the scale of the volatility in
that group of requirements and produce a suggested list of design approaches to
consider, in a particular order. The intent is not to restrict the engineer to these
design approaches, nor to constrain the engineer to select the first approach for
which a design is possible; the aim is simply to help the engineer to quickly arrive at
something that is likely to be useful.
Our approach is therefore much coarser than more considered and involved
approaches such as those of Kruchten [11] or Bosch [3]. The mapping is shown in
Table 8.3. In this table, the scale of design volatility is broadly categorised as
“parameter” when the volatility is in a single parameterisable part of the requirements;
“function” when the behaviour changes in the volatile area; and “system” when the
volatility is in the existence or structure of a whole system. The engineer is encouraged to choose whichever of these designations best matches the volatility, and then use his or her existing engineering skills to arrive at designs prompted by the entries under that heading: “Parameterisation” to include suitable data types and parameters in the design; “Interfaces and Components” to consider larger-scale
In software, major design decisions are traded off and captured in the software
architecture; functionality is then implemented with respect to this architecture.
In some complex design domains, however, there are multiple competing design
dependencies that can be difficult to resolve. To assist in making progress in this
context, we provide a framework that tracks design dependencies and resolves design
decisions hierarchically to produce the complete design. The intended effect is that the
areas that are most volatile are those that are least fundamental to the structure of the
design. This technique makes use of product-line concepts to represent optionality.
In addition to dependencies between design decisions and (parts of) requirements,
any part of a design may depend on part of an existing design commitment. This
includes both communicating with an existing design element and reusing or
extending an existing element. These dependencies are the easiest to accommodate
with indirection and well-defined interfaces. Contextual domain information is also
important, and most design commitments are strongly related to domain information.
The dependency on domain information can be managed through parameterisation
or indirection. The process of uncertainty analysis provides additional exposure
of contextual issues, helping to reduce the risk associated with missing context.
In our prototype modelling approach, we explicitly represent dependencies
between design elements in a graphical notation, shown in Fig. 8.3. This example
represents the decision to add the IDG oil temperature parameter to the ARINC
table as defined in the interface control document. The decision is prompted by the
requirement to send the information, the availability of the information, and the
availability of a suitable communication mechanism. The result of the decision is
a new entry in a data table to allow the communication of the appropriate value.
While this example is perhaps trivial, it illustrates the important distinction between
decisions (processes that the user may engage in) and designs (the artefacts that
result from design activity).
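A minimal rendering of that distinction is sketched below: a decision records the prompts that led to it and the design artefacts it produces. The structure is illustrative only and is not the prototype’s actual metamodel.

import java.util.List;

// Sketch: a design decision as a node with prompts and resulting artefacts.
public class DecisionTrace {

    record Decision(String name, List<String> prompts, List<String> results) { }

    public static void main(String[] args) {
        Decision addParameter = new Decision(
                "Add IDG oil temperature to ARINC table",
                List.of("Req 110 (send oil temperature)",
                        "Oil temperature available",
                        "ARINC communication mechanism available"),
                List.of("New entry in ARINC data table"));
        System.out.println(addParameter);
    }
}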
Commitment uncertainty analysis associates volatility with context, require-
ments and design. This may be annotated alongside the decision tracing diagram.
To retain familiarity and compatibility with existing approaches, we base this
representation on conventional feature modelling notations, as shown in Fig. 8.4. A feature model view of design decision volatility is a powerful visual tool to help appreciate the impact of volatility on the design approach. It is expected that this
Fig. 8.4 Representing volatility with design decisions. The annotation on the left-hand side shows features (rectangles) with dependencies (arcs) and selection constraints (arc label <1–*> in this example)
In this section, we present the design and results of an experiment to test the theoretical effectiveness of the commitment uncertainty approach. For this experiment, we used four instances of the requirements from an engine controller project: a preliminary (draft) document P and issued requirements I1, I2 and I3, from which we elicited the changes made to the requirements over time. Since the requirements in this domain are generally expressed at a relatively low level – particularly with respect to architectural structure – we consider that the requirements are, for the purposes of this experiment, equivalent to the design.
In the experiment, we created two more flexible versions of P: Pt, using conventional architectural trade-off techniques, and Pu, using commitment uncertainty techniques. The hypothesis is that, when faced with the changes represented by I1–I3, Pu accommodates those changes better than Pt, and both Pu and Pt accommodate changes better than P. We consider a change to be accommodated if the architecture of the design provides flexibility that may be used to implement the required change. Table 8.4 shows the data for Pu, and Table 8.5 the equivalent data for Pt. In each table, the ID column gives the associated requirement ID, then
the Scope column identifies whether the requirement was in scope for commitment
uncertainty and the Derived column indicates whether a derived requirement was
produced from the analysis. The remaining columns evaluate the two sets of
requirements – the original set and the set augmented with derived requirements
from the additional analysis. At the bottom of the table, the totals are presented as
both raw totals and a filtered total that excludes the requirements that were outside
the scope of architectural flexibility provision.
Table 8.6 Summary of uncertainty analysis against original requirements. Data show significant (χ², p < 0.05) improvement over the original system

           Y   N            Y   N            Y   N
I1–P      20   8   I2–P    20   8   I3–P    19   9
I1–Pu     28   0   I2–Pu   28   0   I3–Pu   27   1
Table 8.8 Summary of uncertainty analysis against trade-off analysis. Data show significant (χ², p < 0.05) improvement of uncertainty analysis over trade-off analysis

           Y   N            Y   N            Y   N
I1–Pu     28   0   I2–Pu   28   0   I3–Pu   27   1
I1–Pt     20   6   I2–Pt   20   6   I3–Pt   19   7
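As a check on the reported statistics, the sketch below computes a plain Pearson chi-squared value for the I1 column of Table 8.6 (P: 20 Y/8 N versus Pu: 28 Y/0 N) and compares it with the 3.841 critical value for one degree of freedom at p < 0.05. Whether the original analysis applied a continuity correction is not stated, so this is only an approximate reconstruction.

// Sketch: Pearson chi-squared for a 2x2 contingency table.
public class ChiSquared {

    static double chi2(int a, int b, int c, int d) {
        double n = a + b + c + d;
        double num = n * Math.pow((double) a * d - (double) b * c, 2);
        double den = (double) (a + b) * (c + d) * (a + c) * (b + d);
        return num / den;
    }

    public static void main(String[] args) {
        double x = chi2(20, 8, 28, 0);  // I1-P: 20 Y / 8 N, I1-Pu: 28 Y / 0 N
        System.out.printf("chi^2 = %.3f, significant at p<0.05: %b%n",
                x, x > 3.841);          // prints about 9.333, true
    }
}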
In this study, we investigated the ability of the design prompt sequence approach to correctly identify appropriate design targets for the implementation of uncertainty-handling mechanisms. The study is based on an internal assessment of technical risk across a number of engine projects conducted in 2008. We extracted eight areas that had been identified as technical risks and that were in scope for being addressed with architectural mechanisms. For each identified technical risk, we elicited uncertainties and then used the design prompt sequence to generate design options. Finally, we chose a particular design from the list and recorded both the position of the design in the list and the match between the chosen design and the final version of the requirements.
As an example, consider the anonymised risk table entry in Table 8.9. The individual uncertainties for this particular instance are elaborated and documented in a custom tabular format shown in Table 8.10. The design prompts for “Function, Concurrent” are presented in the order: components/interfaces, parameterisation, and then auto-generation.
The options identified are:
1. Use an abstract signal validation component with interfaces that force the
designer to consider raw and validated input signals, power interrupts and
signals for faults. This design ensures that each component encapsulates any
uncertainty regarding its own response to power interrupt.
Uncertainties
  Type:        Function, concurrent
  Definition:  Required signal validation after a power interrupt
  Rationale:   Derivation from the technical risk concept “requirements for reaction to power supply interrupts”
changes between early project requirements and final issued project requirements.
Several questions still remain unanswered, however. Most importantly, how much
effort is involved in creating the derived requirements and flexible design versus
the time taken to refactor the design at a later stage? It is this comparison that is most
likely to persuade engineers that the technique has merit. In support of this,
we emphasise the positive results that have been obtained so far and focus on the
practical aspects of the approach – its lightweight nature and the ability to apply
it only where immediate risks present themselves. Similarly, in how many cases is
flexibility added to the design but never used later on? The presence of a large amount
of unused functionality may be a concern particularly in the aerospace industry and
especially if it prevents the engineer from obtaining adequate test coverage.
For future work in this area, we have identified four themes. First, we are interested in integrating the concepts of commitment uncertainty into a suitable metamodelling framework such as decision traces [28] or Archium [29]. Second,
it would be interesting to deploy appropriate tool support based on modern meta-
modelling [30]. Third, there is potential benefit in a richer linguistic framework
to support more detailed uncertainty analysis and feedback to requirements stakeholders. Finally, further experimentation is needed to understand the nature of appropriate design advice, design patterns and commitment-uncertainty metrics.
Acknowledgment The research presented in this chapter was conducted at the Rolls-Royce
University Technology Centre in Systems and Software Engineering, based at the University of
York Department of Computer Science. We are grateful to Rolls-Royce plc. for supporting this
research.
References
1. Stokes DA (1990) Requirements analysis. In: McDermid J (ed) Software engineer’s reference
book. Butterworth-Heinemann, Oxford
2. Schmid K, Gacek C (2000) Implementation issues in product line scoping. In: Frakes WB (ed)
ICSR6: software reuse: advances in software reusability. Proceedings of the 6th international
conference on software reusability. (Lecture notes in computer science), vol 1844. Springer,
Heidelberg, pp 170–189
3. Bosch J (2000) Design and use of software architectures. Addison-Wesley, Reading
4. Weiss D, Ardis M (1997) Defining families: the commonality analysis. In: ICSE’97:
proceedings of the 19th international conference on software engineering, Boston, May
1997. ACM Press, New York
5. Kang K, Cohen S, Hess J et al (1990) Feature-Oriented Domain Analysis (FODA) feasibility
study. Technical Report CMU/SEI-90-TR-21, Carnegie-Mellon University Software Engi-
neering Institute
6. Czarnecki K, Kim CHP, Kalleberg KT (2006) Feature models are views on ontologies.
In: SPLC2006: proceedings of the 10th international conference on software product lines.
Baltimore, Maryland, USA. IEEE Computer Society Press, Los Alamitos, California, USA
7. Czarnecki K, Helsen S, Eisenecker U (2004) Staged configuration using feature models.
In: Nord R L (ed) SPLC 2004: proceedings of the 9th international conference on software
product lines, Boston. (Lecture notes in computer science), vol 3154. Springer, Heidelberg
Chapter 9
Systematic Architectural Design Based on Problem Patterns
9.1 Introduction
steps that are equipped with validation conditions. The method works for different
types of systems, e.g., for embedded systems, web-applications, and distributed
systems as well as standalone ones. The method is based on different kinds of
patterns. On the one hand, it makes use of problem frames [1], which are patterns to
classify simple software development problems. On the other hand, it builds on
architectural and design patterns.
The starting point of the method is a set of diagrams that are set up during
requirements analysis. In particular, a context diagram describes how the software
to be developed (called machine) is embedded in its environment. Furthermore,
the overall software development problem must be decomposed into simple
subproblems, which are represented by problem diagrams. The different
subproblems should be instances of problem frames.
From these pattern-based problem descriptions, we derive a software architec-
ture that is suitable to solve the software development problem described by
the problem descriptions. The problem descriptions as well as the software
architectures are represented as UML diagrams, extended by stereotypes. The
stereotypes are defined in profiles that extend the UML metamodel [2].
The method to derive software architectures from problem descriptions consists of
three steps. In the first step, an initial architecture is set up. It contains one component
for each subproblem. The overall machine component has the same interface as
described in the context diagram. All connections between components are described
by stereotypes (e.g., ‹‹call_and_return››, ‹‹shared_memory››, ‹‹event››, ‹‹ui››).
In the second step, we apply different architectural and design patterns. We
introduce coordinator and facade components and specify them. A facade compo-
nent is necessary if several internal components are connected to one external
interface. A coordinator component must be added if the interactions of the
machine with its environment must be performed in a certain order. For different
problem frames, specific architectural patterns are applied.
In the final step, the components of the intermediate architecture are re-arranged
to form a layered architecture, and interface and driver components are added. This
process is driven by the stereotypes introduced in the first step. For example,
a connection stereotype ‹‹ui›› motivates the introduction of a user interface component.
Of course, a layered architecture is not the only possible way to structure the
software, but a very convenient one. We have chosen it because a layered architec-
ture makes it possible to divide platform-dependent from platform-independent
parts, because different layered systems can be combined in a systematic way, and
because other architectural styles can be incorporated in such an architecture.
Furthermore, layered architectures have proven useful in practice.
Our method exploits the subproblem structure and the classification of sub-
problems by problem frames. Additionally, most interfaces can be derived from
the problem descriptions [3]. Stereotypes guide the introduction of new com-
ponents. They also can be used to generate adapter components automatically.
The re-use of components is supported, as well.
The method is tool-supported. We extended an existing UML tool by providing
two new profiles for it. The first UML profile allows us to express the different
models occurring in the problem frame approach using UML diagrams. The second
one allows us to annotate composite structure diagrams with information on
components and connectors. In order to automatically validate the semantic integ-
rity and coherence of the different models, we provide a number of validation
conditions. The underlying tool itself, which is called UML4PF, is based on the
Eclipse development environment [4], extended by an EMF-based [5] UML tool,
in our case, Papyrus UML [6].
In the following, we first discuss the basic concepts of our method, namely
problem frames and architectural styles (Sect. 9.2). Subsequently, we describe the
problem descriptions that form the input to our method in Sect. 9.3. In Sect. 9.4, we
introduce the UML profile for architectural descriptions that we have developed
and which provides the notational elements for the architectures we derive.
In Sect. 9.5, we describe our method in detail. Not only do we give guidance
on how to perform the three steps, but we also give detailed validation conditions
that help to detect errors as early as possible. As a running example, we apply our
method to derive a software architecture for an automated teller machine.
In Sect. 9.6, we describe the tool that supports developers in applying the method.
Sect. 9.7 discusses related work, and in Sect. 9.8, we give a summary of our
achievements and point out directions for future work.
Our work makes use of problem frames to analyse software development problems
and architectural styles to express software architectures. These two concepts are
briefly described in the following.
¹ In the following, since we use UML tools to draw problem frame diagrams (see Fig. 9.4), all requirement references will be represented by dashed lines with arrows and the stereotypes ‹‹refersTo››, or ‹‹constrains›› when it is a constraining reference.
we must develop a machine, which controls this domain accordingly. In Fig. 9.1,
the Display domain is constrained, because the Answering machine changes
it on behalf of Enquiry operator commands to satisfy the required Answer
rules.
The Commanded Information frame in Fig. 9.1 is a variant of the Information Display frame; in the latter, there is no operator, and information about the states and behaviour of some parts of the physical world is continuously needed. We present
in Fig. 9.4 the Commanded Behaviour frame in UML notation. That frame
addresses the issue of controlling the behaviour of the controlled domain according
to the commands of the operator. The Required Behaviour frame is similar but
without an operator; the control of the behaviour has to be achieved in accordance
with some rules. Other basic problem frames are the Transformation frame in
Fig. 9.2 that addresses the production of required outputs from some inputs, and
the Simple Workpieces frame in Fig. 9.3 that corresponds to tools for creating and editing computer-processable text, graphic objects, etc.
Software development with problem frames proceeds as follows: first, the
environment in which the machine will operate is represented by a context diagram.
Like a frame diagram, a context diagram consists of domains and interfaces.
However, a context diagram contains no requirements. Then, the problem is
decomposed into subproblems. Whenever possible, the decomposition is done in
such a way that the subproblems fit to given problem frames. To fit a subproblem to
a problem frame, one must instantiate its frame diagram, i.e., provide instances
for its domains, interfaces, and requirement. The instantiated frame diagram is
called a problem diagram.
Besides problem frames, there are other elaborate methods to perform require-
ments engineering, such as i* [7], Tropos [8], and KAOS [9]. These methods are
goal-oriented. Each requirement is elaborated by setting up a goal structure. Such
a goal structure refines the goal into subgoals and assigns responsibilities to actors
for achieving the goal. We have chosen problem frames and not one of the goal-
oriented requirements engineering methods to derive architectures, because the
elements of problem frames, namely domains, may be mapped to components of
an architecture in a fairly straightforward way.
² The mentioned architectural styles are described in [11].
To support problem analysis according to Jackson [1] with UML [2], we created
a new UML profile. In this profile stereotypes are defined. A stereotype extends
a UML meta-class from the UML meta-model, such as Association or Class
[12].
In the following subsections, we describe our extensions to the problem analysis
approach of Jackson (Sect. 9.3.1), we explain how the different diagrams can be
created with UML and our profile (Sect. 9.3.2), we describe our approach to express
connections between domains (Sect. 9.3.3), and we enumerate the documents that
form the starting point for our architectural design method in Sect. 9.3.4. We
illustrate these concepts on an ATM example in Sect. 9.3.5.
9.3.1 Extensions
The different diagram types make use of the same basic notational elements. As a
result, it is necessary to explicitly state the type of diagram by appropriate stereo-
types. In our case, the stereotypes are ‹‹ContextDiagram››, ‹‹ProblemDiagram››,
‹‹ProblemFrame››, and ‹‹TechnicalContextDiagram››. These stereotypes extend
(some of them indirectly) the meta-class Package in the UML meta-model.
According to the UML superstructure specification [2], it is not possible for one UML element to be part of several packages. For example, a class Customer may need to be in the context diagram package and also in some problem diagram packages.³
Nevertheless, several UML tools allow one to put the same UML element into
several packages within graphical representations. We want to make use of this
information from graphical representations and add it to the model (using stereo-
types of the profile). Thus, we have to relate the elements inside a package
explicitly to the package. This can be achieved with a dependency stereotype
‹‹isPart›› from the package to all included elements (e.g., classes, interfaces,
comments, dependencies, associations).
The context diagram (see e.g., Fig. 9.8) contains the machine domain(s), the
relevant domains in the environment, and the interfaces between them. Domains are
represented by classes with the stereotype ‹‹Domain››, and the machine is marked
by the stereotype ‹‹Machine››. Instead of ‹‹Domain››, more specific stereotypes
such as ‹‹BiddableDomain››, ‹‹LexicalDomain›› or ‹‹CausalDomain›› can be used.
Since some of the domain types are not disjoint, more than one stereotype can be
applied on one class.
In a problem diagram (see e.g., Fig. 9.9), the knowledge about a sub-problem
described by a set of requirements is represented. A problem diagram consists of
sub-machines of the machines given in the context diagram, the relevant domains,
the connections between these domains and a requirement (possibly composed of
several related requirements), as well as the relation between the requirement and
the involved domains. A requirement refers to some domains and constrains at least
one domain. This is expressed using the stereotypes ‹‹refersTo›› and ‹‹constrains››.
They extend the UML meta-class Dependency. Domain knowledge and
requirements are special statements. Furthermore, any domain knowledge is either
a fact (e.g., physical law) or an assumption (usually about a user’s behaviour).
The problem frames (patterns for problem diagrams) have the same kind of
elements as problem diagrams. To instantiate a problem frame, its domains, require-
ment and connections have to be replaced by concrete ones. Figure 9.4 shows
the commanded behaviour problem frame in UML notation, using our profile.
³ Alternatively, we could create several Customer classes, but these would have to have different names.
stereotype ‹‹connection›› to indicate that there are shared phenomena between the
associated domains. The AdminDisplay controls the phenomenon showLog. In
general, the name of the association contains the phenomena and the controlling
domain. We represent different sets of shared phenomena with a different control
direction between two domains by a second interface class.
Jackson’s phenomena can be represented as operations in UML interface clas-
ses. The interface classes support the transition from problem analysis to problem
solution. Some of the interface classes in problem diagrams become external
interfaces of the architecture. In case of lexical domains, they may also be internal
interfaces of the architecture. A ‹‹connection›› can be transformed into an interface
class controlled by a domain and observed by other domains. To this end, the
stereotypes ‹‹observes›› and ‹‹controls›› are defined to extend the meta-class
Dependency in the UML meta-model. The interface should contain all phenom-
ena as operations. We use the name of the association as name for the interface
class. Figure 9.6 illustrates how the connection given in Fig. 9.5 can be transformed
into such an interface class.
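To make the transformation concrete, here is an illustrative sketch in Python (our choice of notation; the authors work on UML models, and all data structures and the example association name below are invented for illustration):

from dataclasses import dataclass

@dataclass
class Connection:
    name: str                  # association name, e.g. "AD!{showLog}" (illustrative)
    controlling_domain: str    # the domain that controls the shared phenomena
    observing_domains: list    # the domains that observe them
    phenomena: list            # the shared phenomena

@dataclass
class InterfaceClass:
    name: str            # reuses the association name
    operations: list     # one operation per shared phenomenon
    controlled_by: str   # rendered as a <<controls>> dependency
    observed_by: list    # rendered as <<observes>> dependencies

def connection_to_interface(conn: Connection) -> InterfaceClass:
    # A <<connection>> becomes an interface class controlled by one domain
    # and observed by the others; all phenomena become operations.
    return InterfaceClass(conn.name, list(conn.phenomena),
                          conn.controlling_domain, list(conn.observing_domains))

print(connection_to_interface(
    Connection("AD!{showLog}", "AdminDisplay", ["Admin"], ["showLog"])))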
To support a systematic architectural design, more specific connection types can
be annotated in problem descriptions. Examples of such stereotypes which can be
used instead of ‹‹connection›› are, e.g., ‹‹network_connection›› for network
connections, ‹‹physical›› or ‹‹electrical›› for physical connections, and ‹‹ui›› for
user interfaces (see e.g., Fig. 9.8). Our physical connection can be specialised
into hydraulic flow or hot air flow. These flow types are defined in SysML [13].
For the control signal flow type in SysML, depending on the desired realisation, the
stereotypes ‹‹network_connection››, ‹‹event››, ‹‹call_return››, or ‹‹stream›› can be
used. Figure 9.7 shows a hierarchy of stereotypes for connections. This hierarchy
can be easily extended by new stereotypes.
For these stereotypes, more specialised stereotypes (not shown in Fig. 9.7) can
be defined that consider the technical realisation, e.g. events (indicated with
the stereotype ‹‹event››) can be implemented using Windows Message Queues
(‹‹wmq››), Java Events (‹‹java_events››), or by a number of other techniques.
Network connections (‹‹network_connection››) can be realised, e.g., by HTTP
(‹‹http››) or the low-level networking protocol TCP (‹‹tcp››).
⁴ The technical context diagram is identical to the context diagram, because it is not necessary to describe new connection domains representing the platform or the operating system.
Fig. 9.9 Problem diagram for the card reader controller in UML notation
Our method leads the way from problem descriptions to software architectures
in a systematic way, which is furthermore enhanced with quality assurance
measures and tool support (see Sect. 9.6).
The purpose of this first step is to collect the necessary information for the
architectural design from the requirements analysis phase, to determine which
component has to be connected to which external port, to make coordination
problems explicit (e.g. several components are connected to the same external
domain), and to decide on the machine type and to verify that it is appropriate
(considering the connections). At this stage, the submachine components are not yet
coordinated.
The inputs for this step are the technical context diagram and the problem
diagrams. The output is an initial architecture, represented by a composite structure
diagram. It is set up as follows. There is one component for a machine with
stereotype ‹‹machine››, and it is equipped with ports corresponding to the interfaces
of the machine in the technical context diagram, see Fig. 9.11.
Inside this component, there is one component for each submachine identified in
the problem diagrams, equipped with ports corresponding to the interfaces in the
problem diagrams, and typed with a class. This class has required and provided
interfaces. A controlled interface in a problem diagram becomes a required inter-
face of the corresponding component in the architecture. Usually, an observed
interface of the machine in the problem diagram will become a provided interface
of the corresponding component in the architecture. However, if the interface
connects a lexical domain, it will be a required interface containing operations
with return values (see [14, Sect. 9.3.1]). The ports of the components should be
connected to the ports of the machine, and stereotypes describing the technical
realisation of these connectors are added. A stereotype describing the type of the
machine (local, distributed, process, task) is added, as well as stereotypes
‹‹ReusedComponent›› or ‹‹Component›› to all components. If appropriate, stereotypes describing the type of the components (local, distributed, process, task) are also added.
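The following Python sketch condenses these rules; it is a paraphrase for illustration only (the method operates on UML composite structure diagrams, and every attribute name below is invented):

def initial_architecture(machine_name, technical_context, problem_diagrams):
    # One outer component for the machine, with ports taken from the
    # machine interfaces of the technical context diagram.
    arch = {
        "name": machine_name,
        "stereotypes": ["machine", "local"],  # machine type decided here
        "ports": [i.name for i in technical_context.machine_interfaces],
        "components": [],
    }
    # One inner component per submachine identified in the problem diagrams.
    for pd in problem_diagrams:
        comp = {"name": pd.submachine, "stereotypes": ["Component"],
                "provided": [], "required": []}
        for iface in pd.machine_interfaces:
            if iface.controlled_by_machine:
                # controlled interfaces become required interfaces
                comp["required"].append(iface.name)
            elif iface.connects_lexical_domain:
                # observed, but connected to a lexical domain: required
                # interface whose operations have return values
                comp["required"].append(iface.name)
            else:
                # observed interfaces normally become provided interfaces
                comp["provided"].append(iface.name)
        arch["components"].append(comp)
    return arch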
The initial architecture of the ATM is given in Fig. 9.11. Starting from the
technical context diagram in Fig. 9.8, and the problem diagrams (including the
ones given in Figs. 9.9 and 9.10), the initial ATM architecture has one component,
ATM, with stereotype ‹‹machine, local›› and the ports (typed with :PAdmin, :PAccount, :PCustomer, :PMS_C, :PCardReader) that correspond to the interfaces of the machine in the technical context diagram. The components (CardReaderController, BankInformationMachine, MoneyCaseController, and AccountHandler) correspond to the submachines identified for this case study (e.g., CardReaderController in Fig. 9.9, and BankInformationMachine in Fig. 9.10). Phenomena at the machine interface
⁵ Our method does not rely on how these restrictions are represented. Possible representations are sequence diagrams, state machines, or grammars.
⁶ See the corresponding design pattern by Gamma et al. [16]: “Provide a unified interface to a set of interfaces in a subsystem. Facade defines a higher-level interface that makes the subsystems easier to use.”
internal port. To ensure the interaction restrictions, a state machine can be used
inside the component. Typically, coordinator components are needed for interfaces
connected to biddable domains (also via connection domains). This is because
often, a user must do things in a certain order. In our example, a user must first
authenticate before being allowed to enter a request to withdraw money. Therefore,
‹‹gui››, we need a component handling the input from the user. For ‹‹physical››
connections, we introduce appropriate driver components, which are often re-used.
We arrange the components in three layers. The highest layer is the application
layer. It implements the core functionality of the software, and its interfaces mostly
correspond to high-level phenomena, as they are used in the context diagram. The
lowest layer establishes the connection of the software to the outside world.
It consists of user interface components and hardware abstraction layer (HAL)
components, i.e., the driver components establishing the connections to hardware
components. The low-level interfaces can mostly be obtained from the technical
context diagram. The middle layer consists of adapter components that translate
low-level signals from the hardware drivers to high-level signals of the application
components and vice versa. If the machine sends signals to some hardware, then
these signals are contained in a required interface of the application component,
connected to an adapter component. If the machine receives signals from some
hardware, then these signals are contained in a provided interface of the application
component, connected to an adapter component.
The inputs to this step are the intermediate architecture, the context diagram, the
technical context diagram, and the interaction restrictions. The output is a layered
architecture. It is annotated with the stereotype ‹‹layered_architecture›› to distin-
guish it from the intermediate architecture. Note, however, that a layered architec-
ture can only be defined for a machine or component with the stereotype ‹‹local››,
‹‹process›› or ‹‹task››. For a distributed machine, a layered architecture will be
defined for each local component.
To obtain the layered architecture, we assign all components from the interme-
diate architecture to one of the layers. The submachine components as well as the
facade components will belong to the application layer. Coordinator components
for biddable domains should be part of the corresponding (usually: user) interface
component, whereas coordinator components for physical connections belong to the application layer. As already mentioned, connection stereotypes guide the
introduction of new components, namely user interface and driver components.
All component interfaces must be defined; guidance is provided by the
context diagram (application layer) and the technical context diagram (external
interfaces).
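A sketch of the layer assignment, again in illustrative Python; the mapping from connection stereotypes to introduced components is our distillation of the rules above, not the tool's actual implementation:

def assign_layers(intermediate_components, connector_stereotypes):
    # Submachine and facade components go to the application layer;
    # coordinator components for biddable domains would be folded into the
    # corresponding user interface component instead.
    layers = {"application": list(intermediate_components),
              "adapter": [], "low_level": []}
    for stereo in connector_stereotypes:
        if stereo == "ui":
            layers["low_level"].append("UserInterface")
        elif stereo in ("physical", "electrical"):
            # driver (HAL) component plus an adapter translating low-level
            # signals to high-level application signals and vice versa
            layers["low_level"].append("HAL_Driver")
            layers["adapter"].append("Adapter")
        elif stereo == "network_connection":
            layers["low_level"].append("Network_HAL")  # cf. DB_HAL in the ATM
    return layers

print(assign_layers(["AccountHandler", "CardReaderController"],
                    ["ui", "physical", "network_connection"]))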
The final software architecture of the ATM is given in Fig. 9.15. Note that we
have two independent application components, one for the administrator and the
other handling the interaction with the customers. This is possible, because there
are no interaction restrictions between the corresponding subproblems. However,
both applications need to access the log storage. Therefore, the component
LogStorage does not belong to one of the application components. Each of the
biddable domains Admin and Customer is equipped with a corresponding user
interface. For the physical connections to the card reader and the money supply
case, corresponding HAL and adapter components are introduced. Because the
connection to the account data was defined to be a ‹‹network_connection›› already
in the initial architecture, the final architecture contains a DB_HAL component.
The validation conditions to be checked for the layered architecture are similar
to the validation conditions for the intermediate architecture. Condition VM.3 must also hold for the layered architecture, and conditions VM.1 and VM.2 become:
VL.1 All components of the intermediate architecture must be contained in the
layered architecture.
VL.2 The connectors connected to the ports in the layered architecture must
have the same stereotypes or more specific ones than in the intermediate
architecture.
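Paraphrased as executable checks, these conditions could look as follows (the tool formalises them as OCL constraints over the UML model; the model encoding below is invented for illustration):

def check_VL1(intermediate, layered):
    # VL.1: every component of the intermediate architecture must be
    # contained in the layered architecture.
    missing = ({c["name"] for c in intermediate["components"]}
               - {c["name"] for c in layered["components"]})
    return not missing, missing

def check_VL2(intermediate, layered, is_more_specific):
    # VL.2: connectors on the ports of the layered architecture must carry
    # the same stereotypes as in the intermediate architecture, or more
    # specific ones (per the stereotype hierarchy of Fig. 9.7).
    violations = []
    for port, stereo in layered["connector_stereotypes"].items():
        old = intermediate["connector_stereotypes"].get(port)
        if old is not None and stereo != old and not is_more_specific(stereo, old):
            violations.append(port)
    return not violations, violations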
This final step could be carried out in a different way – resulting in a different
final architecture – for other types of systems, e.g., when domain-specific languages
are used.
The basis of our tool called UML4PF [17] is the Eclipse platform [4] together with
its plug-ins EMF [5] and OCL [18]. Our UML-profiles described in Sects. 9.3
and 9.4 are conceived as an Eclipse plug-in, extending the EMF meta-model. We
store all our OCL constraints (which formalise the validation conditions given
in Sect. 9.5) in one file in XML-format. With these constraints, we check the
validity and consistency of the different models we set up during the requirements
analysis and architectural design phases. An overview of the context of our tool is
provided in Fig. 9.16. Gray boxes denote re-used components, whereas white boxes
describe those components that we created.
The functionality of our tool UML4PF comprises the following:
• It checks if the developed models are valid and consistent by using our OCL
constraints.
• It returns the location of invalid parts of the model.
• It automatically generates model elements, e.g., it generates observed and
controlled interfaces from association names as well as dependencies with
stereotype ‹‹isPart›› for all domains and statements being inside a package in
the graphical representation of the model.
The graphical representation of the different diagram types can be manipulated
by using any EMF-based editor. We selected Papyrus UML [6], because it is
available as an Eclipse plug-in, open-source, and EMF-based. Papyrus stores the
model (containing requirements models and architectures) with references to the
UML-profiles in one XML-file using EMF. The XML format created by EMF
allows developers to exchange models between several UML tools. The graphical
representation of the model is stored in a separate file. Since UML4PF is based on
EMF, it inherits all strengths and limitations of this platform. To use Papyrus with
UML4PF to develop an architecture, developers have to draw the context diagram
and the problem diagrams (see Sect. 9.3). Then they can proceed with deriving the
specification, after UML4PF has generated the necessary model elements. Next, the
requirements models are automatically checked with UML4PF.
Re-using model elements from the requirements models, developers create the
architectures as described in Sect. 9.5. After each step, the model can be automati-
cally validated. UML4PF indicates which model elements are not used correctly or
which parts of the model are not consistent. Figure 9.13 shows a screenshot
of UML4PF. As can be seen below the architectural diagram, several kinds of
diagrams are available for display. When selecting the OCL validator, the validation
conditions are checked, and the results are displayed as shown at the bottom of the
figure. Fulfilled validation conditions are displayed in green, violated ones in red.
All in all, we have defined about 80 OCL validation conditions, including
17 conditions concerning architectural descriptions. The time needed for checking
only depends on EMF and is about half a second per validation condition.
The influence of the model size on the checking time is less than linear. About 9,800
lines of code have been written to implement UML4PF.
The tool UML4PF is still under development and evaluation. Currently it is used
in a software engineering class at the University of Duisburg-Essen with about 100
participants. In this class, the problem frame approach and the method for architec-
tural design described in this chapter are taught and applied to the development of
a web application. The experience gained from the class will be used to assess (and
possibly improve) the user-friendliness of the tool.
Moreover, UML4PF will be integrated into the tool WorkBench of the European
network of excellence NESSoS (see http://www.nessos-project.eu/). With this
integration, we will reach a wider audience in the future. Finally, the tool is
available for download at http://swe.uni-due.de/en/research/tool/index.php.
Since our approach heavily relies on the use of patterns, our work is related to
research on problem frames and architectural styles. However, we are not aware of
similar methods that provide such a detailed guidance for developing software
architectures, together with the associated validation conditions.
Lencastre et al. [19] define a meta-model for problem frames using UML. Their
meta-model considers Jackson’s whole software development approach based on
context diagrams, problem frames, and problem decomposition. In contrast to our
meta-model, it only consists of a UML class model without OCL integrity
constraints. Moreover, their approach does not qualify for a meta-model in terms
of MDA because, e.g., the class Domain has subclasses Biddable and Given,
but an object cannot belong to two classes at the same time (cf. Figs. 5 and 11 in
[19]).
Hall et al. [20] provide a formal semantics for the problem frame approach. They
introduce a formal specification language to describe problem frames and problem
diagrams. However, their approach does not consider integrity conditions.
Seater et al. [21] present a meta-model for problem frame instances. In addition
to the diagram elements formalised in our meta-model, they formalise requirements
and specifications. Consequently, their integrity conditions (“wellformedness pred-
icate”) focus on correctly deriving specifications from requirements. In contrast,
our meta-model concentrates on the structure of problem frames and the different
domain and phenomena types.
Colombo et al. [22] model problem frames and problem diagrams with SysML
[13]. They state that “UML is too oriented to software design; it does not support
a seamless representation of characteristics of the real world like time, phenomena
sharing [. . .]”. We do not agree with this statement. So far, we have been able to
model all necessary means of the requirements engineering process using UML.
Charfi et al. [23] use a modelling framework, Gaspard2, to design high-
performance embedded systems-on-chip. They use model transformations to move
from one level of abstraction to the next. To validate that their transformations were
performed correctly, they use the OCL language to specify the properties that must
be checked in order to be considered as correct with respect to Gaspard2. We have
been inspired by this approach. However, we do not focus on high-performance
embedded systems-on-chip. Instead, we target general software development
challenges.
Choppy and Heisel give heuristics for the transition from problem frames to
architectural styles. In [24], they give criteria for choosing between architectural
styles that could be associated with a given problem frame. In [25], a proposal for
the development of information systems is given using update and query problem
frames. A component-based architecture reflecting the repository architectural style
is used for the design and integration of the different system parts. In [26], the
authors of this paper propose architectural patterns for each basic problem
frame proposed by Jackson [1]. In a follow-up paper [27], the authors show how
to merge the different sub-architectures obtained according to the patterns
presented in [26], based on the relationship between the subproblems. Hatebur
and Heisel [3] show how interface descriptions for layered architectures can be
derived from problem descriptions.
Barroca et al. [28] extend the problem frame approach with coordination
concepts. This leads to a description of coordination interfaces in terms of services
and events together with required properties, and the use of coordination rules to
describe the machine behaviour.
Lavazza and Del Bianco [29] also represent problem diagrams in a UML
notation. They use component diagrams (and not stereotyped class diagrams) to
represent domains. Jackson's interfaces are directly transformed into used/required classes (and not into observe and control stereotypes that are translated in the architectural phase). In a later paper, Del Bianco and Lavazza [30] suggest enhancing problem frames with scenarios and timing.
Hall, Rapanotti, and Jackson [31] describe a formal approach for transforming
requirements into specifications. This specification is then transformed into
the detailed specifications of an architecture. We intentionally left out deriving the
specification describing the dynamic behaviour within this chapter and focus on the
static aspects of the requirements and architecture.
Acknowledgments We would like to thank our anonymous reviewers for their careful reading
and constructive comments.
References
6. Papyrus UML Modelling Tool (2010) Jan 2010. http://www.papyrusuml.org, last checked:
2011-06-14
7. Yu E (1997) Towards modelling and reasoning support for early-phase requirements engi-
neering. In: Proceedings of the 3rd IEEE Intern. Symposium on RE, pp 226–235
8. Bresciani P, Perini A, Giorgini P, Giunchiglia F, Mylopoulos J (2004) Tropos: an agent
oriented software development methodology. Auton Agents Multi-Agent Syst 8(3):203–236
9. Bertrand P, Darimont R, Delor E, Massonet P, van Lamsweerde A (1998) GRAIL/KAOS: an
environment for goal driven requirements engineering. In ICSE’98 – 20th International
Conference on Software Engineering, New York
10. Bass L, Clements P, Kazman R (1998) Software architecture in practice. Addison-Wesley,
Massachusetts, first edition
11. Shaw M, Garlan D (1996) Software architecture. Perspectives on an emerging discipline.
Prentice-Hall, Upper Saddle River
12. UML Revision Task Force (2009) OMG Unified Modeling Language: Infrastructure. Available at http://www.omg.org/spec/OCL/2.0/, last checked: 2011-06-14
13. SysML Partners (2005) Systems Modeling Language (SysML) Specification. see http://www.
sysml.org, last checked: 2011-06-14
14. Côté I, Hatebur D, Heisel M, Schmidt H, Wentzlaff I (2008) A systematic account of problem
frames. In: Proceedings of the European conference on pattern languages of programs
(EuroPLoP) Universitätsverlag Konstanz, pp 749–767
15. Choppy C, Hatebur D, Heisel M (2010) Systematic architectural design based on problem
patterns (technical report). Universität Duisburg-Essen
16. Gamma E, Helm R, Johnson R, Vlissides J (1995) Design patterns – elements of reusable
object-oriented Software. Addison Wesley, Reading, MA
17. UML4PF (2010) http://swe.uni-due.de/en/research/tool/index.php, last checked: 2011-06-14
18. UML Revision Task Force (2006) OMG Object Constraint Language: Reference. http://www.
omg.org/docs/formal/06-05-01.pdf
19. Lencastre M, Botelho J, Clericuzzi P, Araújo J (2005) A meta-model for the problem frames
approach. In: WiSME’05: 4th workshop in software modeling engineering
20. Hall JG, Rapanotti L, Jackson MA (2005) Problem frame semantics for software develop-
ment. Softw Syst Model 4(2):189–198
21. Seater R, Jackson D, Gheyi R (2007) Requirement progression in problem frames: deriving
specifications from requirements. Requirements Eng 12(2):77–102
22. Colombo P, del Bianco V, Lavazza L (2008) Towards the integration of SysML and problem
frames. In: IWAAPF’08: Proceedings of the 3rd international workshop on applications and
advances of problem frames, New York, ACM, pp 1–8
23. Charfi A, Gamatié A, Honoré A, Dekeyser J-L, Abid M (2008) Validation de modèles dans un
cadre d’IDM dédié à la conception de systèmes sur puce. In: 4èmes Journées sur l’Ingénierie
Dirigée par les Modèles (IDM 08)
24. Choppy C, Heisel M (2003) Use of patterns in formal development: systematic transition
from problems to architectural designs. In: recent trends in algebraic development techniques,
16th WADT, Selected Papers, LNCS 2755, Springer Verlag, pp 205–220
25. Choppy C, Heisel M (2004) Une approche à base de “patrons” pour la spécification et le
développement de systèmes d’information. In: Proceedings approches formelles dans l’assis-
tance au développement de Logiciels – AFADL’2004, pp 61–76
26. Choppy C, Hatebur D, Heisel M (2005) Architectural patterns for problem frames. IEE Proc
Softw Spec Issue Relating Softw Requirements Architectures 152(4):198–208
27. Choppy C, Hatebur D, Heisel M (2006) Component composition through architectural
patterns for problem frames. In: Proc. XIII Asia Pacific Software Engineering Conference
(APSEC), IEEE, pp 27–34
28. Barroca L, Fiadeiro JL, Jackson M, Laney RC, Nuseibeh B (2004) Problem frames: a case
for coordination. In: coordination models and languages, Proc. 6th international conference
COORDINATION, pp 5–19
29. Lavazza L, Bianco VD (2006) Combining problem frames and UML in the description of
software requirements. Fundamental approaches to software engineering. Lecture Notes in
Computer Science 3922, Springer, pp 199–21
30. Lavazza L, Bianco VD (2008) Enhancing problem frames with scenarios and histories in
UML-based software development. Expert Systems 25:28–53
31. Hall JG, Rapanotti L, Jackson M (2008) Problem oriented software engineering. Solving
package router control problem, IEEE Transactions on Software Engineering 34(2):226–241
Chapter 10
Adaptation Goals for Adaptive Service-Oriented
Architectures*
10.1 Introduction
In recent years, Service-oriented Architecture (SoA) has proven its ability to support
modern, dynamic business processes. The architectural paradigm fosters the provi-
sion of complex functionality by assembling disparate services, whose ownership –
and evolution – is often distributed. The composition, oftentimes rendered in BPEL
[18], does not provide a single integrated entity, but it only interacts with services
that are deployed on remote servers. This way of working fosters reusability by
gluing existing services, but it also allows one to handle new business needs by
adding, removing, or substituting the partner services to obtain (completely)
different solutions.
* This research has been funded by the European Commission, Programmes: IDEAS-ERC, Project 227977 SMScom, and FP7/2007–2013, Projects 215483 S-Cube (Network of Excellence).
So far the research in this direction has been focused on
proposing more and more dynamic service compositions, neglecting the actual
motivations behind them. How to implement a service-based application has been
much more important than understanding what the solution has to provide and
maybe how it is supposed to evolve and adapt. A clear link between the actual
architectures – also referred to as service compositions – and the requirements they
are supposed to meet is still missing. This gap obscures the understanding of the
actual technological infrastructure that must be deployed to allow the application to
provide its functionality in a robust and reliable way, but it also hampers the
maintenance of these applications and the alignment of their functionality with the
actual business needs. These considerations motivated the work presented in this
chapter. We firmly believe that service-based applications must be conceived from
clearly stated requirements, which in turn must be unambiguously linked to the
services that implement them. Adaptation must be conceived as a requirement in
itself and must be properly supported through the whole lifecycle of the system. It
must cope with both the intrinsic unreliability of services and the changes imposed by
new business perspectives. To this aim, we extend a classical goal model to provide
an innovative means to represent both conventional (functional and non-functional)
requirements and adaptation policies. The proposal distinguishes between crisp goals,
the satisfiability of which is boolean, and fuzzy goals, which can be satisfied at
different degrees; adaptation goals are used to render adaptation policies.
The information provided in the goal model is then used to automatically devise
the application’s architecture (i.e., the composition) and its adaptation capabilities.
We assume the availability of a suitable infrastructure [5] – based on a BPEL
engine – to execute service compositions. Goals are translated into a set of abstract
processes (a-la BPEL) able to achieve the objectives stated in the goal model; the
designer is in charge of selecting the actual composition that best fits stated
requirements. Adaptation is supported by performing supervision activities that
comprise data collection, to gather execution data, analysis, to assess the appli-
cation’s behavior, and reaction – if needed – to keep the application on track. This
strict link between architecture and requirements and the need for continuous
adaptation led us to consider the goal model a full-fledged runtime entity. Runtime
data trigger the countermeasures embedded in adaptation goals, and thus activate
changes in the goal model, and then in the applications themselves.
The chapter is organized as follows. Section 10.2 presents the goal model to
express the requirements of the systems. Section 10.3 describes how goals are
translated into service-based applications. Section 10.4 illustrates some preliminary
evaluation; Sect. 10.5 surveys related works and Sect. 10.6 concludes the chapter.
expressing the requirements of adaptive systems. The goal model is also augmented
with a new kind of goals, adaptation goals, which specify how the model can adapt
to changes. The whole proposal is illustrated through the definition of a news
provider called Z.com [8]. The provider wants to offer graphical news to its
customers with a reasonable response time, but it also wants to keep the cost of
the server pool aligned with its operating budget. Furthermore, in case of spikes in requests that it cannot serve adequately, the provider switches to textual content to supply its customers with basic information within an acceptable delay.
10.2.1 KAOS
The main features provided by KAOS are goal refinement and formalization. Goal
refinement allows one to decompose a goal into several conjoined sub-goals (AND-
refinement) or into alternative sub-goals (OR-refinement). The satisfaction of the
parent goal depends on the achievement of all (for AND-refinement) or at least
one (for OR-refinement) of its underlying sub-goals. The refinement of a goal
terminates when it can be “operationalized,” that is, it can be decomposed into
a set of operations. Figure 10.1 shows the KAOS goal model of the news provider.
The general objective is to provide news to its customers (G1), which is AND-
refined into the following sub-goals: Find news (G1.1), Show news to requestors
(G1.2), Provide high quality service (G1.3), and Maintain low provisioning costs
(G1.4) (in terms of the number of servers used to provide the service). News can be
provided both in textual and graphical mode (see OR-refinement of goal G1.1 into
G1.1.1 and G1.1.2). Textual mode consumes less bandwidth and performs better in
case of many requests. Customer satisfaction is increased by providing news in
a nice format and within short response times (see AND-refinement of goal G1.3
into G1.3.1 and G1.3.2). G1.3.1 is a soft goal since there is not a clear-cut criterion
to assess it, that is, whether news is provided in a nice way.
Goals are associated with a priority depending on their criticality. For example, goal G1.1.1 has lower priority (p = 2) than goal G1.1.2 (p = 3), since providing news in graphical mode is more important than providing news in text mode. Goals
can contribute (either positively or negatively) to the satisfaction of other goals.
This is represented in the goal model through contribution links – dashed lines in
Fig. 10.1 – and an indication of the contribution (x ∈ [−1,1]). For example, although the graphical mode is slower, it positively contributes to customer satisfaction
(contribution link between goals G1.1.2 and G1.3.1). Short response times may
require the adoption of the text mode to provide the news (see the negative link
between goal G1.3.2 and goal G1.1.2) or may increase the provisioning costs since
they may require a higher number of servers in the pool (see the link between goal
G1.3.2 and G1.4).
Goals are formalized in Linear Temporal Logic¹ (LTL) [21] or First Order Logic
(FOL). The definition of the leaf goals of Fig. 10.1 is reported in Table 10.1. For
example, goal G1.1.2 states that if the system receives a request for a given
keyword and date, it must provide related news within x time units. Provided
news must be about supplied keyword and date, and must come with images.
Note that we cannot provide a formal definition for goal G1.3.1, since it is soft.
Instead, the satisfaction of this goal can be inferred from its incoming contribution
links, by performing an arithmetic mean on the satisfaction of each contributing
goal weighted by the value given to each contribution link. The formalism beneath
the goals of Fig. 10.1 is a preliminary attempt to bridge the gap between the non-
formal world of the stakeholders and the formal world of the machine. This
¹ For this example, we use the operator sometimes in the future (◊).
formalism relies on background knowledge of the domain and may have several
nuances of meaning.
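For illustration, the inference of a soft goal's satisfaction from its incoming contribution links, described above, could be computed as follows in Python; the normalisation over the absolute weights and the clamping into [0, 1] are our assumptions, since the chapter does not spell them out:

def soft_goal_satisfaction(contributions):
    """contributions: list of (satisfaction in [0,1], link weight in [-1,1])."""
    total = sum(abs(w) for _, w in contributions)
    if total == 0:
        return 0.0
    weighted = sum(s * w for s, w in contributions)
    return max(0.0, min(1.0, weighted / total))  # clamp into [0, 1]

# e.g. G1.3.1: a positive contribution from G1.1.2 and a weaker negative
# one from G1.3.2 (weights are illustrative, not taken from Fig. 10.1)
print(soft_goal_satisfaction([(1.0, 0.8), (0.6, -0.4)]))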
Operationalization [14] is the process that allows one to (semi-automatically)
infer the operations that “implement” goals, and thus in our work that partner services
must provide. An operation is defined through name, input and output values, and
pre- and post-conditions. Required preconditions (ReqPre) define when the operation
can be executed. Triggering conditions (TrigPre) define how the operation is acti-
vated. Required post-conditions (ReqPost) define additional conditions that must be
true after execution. Domain pre- (DomPre) and post-conditions (DomPost) define
the effects of the operation on the domain. Table 10.22 shows the result of the
operationalization applied to the case study (except for operation Find Text Content).
For example, operation Find Graphical Content moves the system from a state in
which a container for the news that have to be collected is initialized (DomPre) to
a state in which a set of suitable news is available (DomPost). This operation is
triggered as soon as the collection of news matching provided keyword and date is
started (TrigPre). The effect of this operation is to collect a set of news that matches
the date and keyword provided by the user (ReqPost). The definition of operation
Find Text Content is similar to operation Find Graphical Content except for the
required post-condition that is specified as follows:
The goal model also specifies a set of agents able to perform one or more
operations. According to our point of view, agents represent the providers of the
services that will be used in the composition. For example, agent A1 is the user
issuing the requests and to whom news must be shown. Agent A3 can find news in
both text and graphical mode, while agent A2 can only find news in text mode.
The definition of goals through LTL formulae allows one to assess whether a goal is
satisfied, but there is no way to say if it is only satisfied partially. For example, the
definition of goal G1.3.2 only allows one to assess whether the global response time
does not exceed the maximum threshold (RTMAX), but it provides no information
about the distance between the actual value and RTMAX. Furthermore the definition of
goal G1.4 only specifies whether the number of servers is lower than a certain value
NMAX, but it says nothing about the actual number of servers used in the pool. These
are only a couple of examples that made us introduce fuzzy goals, and express their
satisfaction level through real numbers between 0 and 1. Fuzzy goals are rendered
through the operators already introduced in RELAX [27] to represent non-critical
requirements: AS EARLY/LATE AS POSSIBLE f, for temporal quantities, AS CLOSE
AS POSSIBLE TO q f, to assess the proximity of quantities or frequencies (f) to
a certain value (q), AS MANY/FEW AS POSSIBLE f, for quantities (f). This way
goals G1.3.2 and G1.4 can be redefined in terms of these operators as follows:
G1.3.2: AS EARLY AS POSSIBLE t
G1.4: AS FEW AS POSSIBLE servers
Goal G1.3.2 now says that the response time t must be as short as possible, while
goal G1.4 says that the number of servers must be as low as possible. The
assessment of goals G1.3.2 and G1.4 is guided by the membership functions
shown in Fig. 10.2 that assign a satisfaction value between 0 and 1, depending on
Fig. 10.2 Membership functions for goals G1.3.2 (a) and G1.4 (b)
the actual response time (Fig. 10.2a) and the number of used servers (Fig. 10.2b),
respectively. For example, as for goal G1.3.2 if the response time is less than 3 s, the
satisfaction is 1, if the response time is between 3 s and 7 s the satisfaction has
a value between 0 and 1, and if the response time is greater than 7 s the satisfaction
is 0. These functions are limited³ and, in general, have a triangular or trapezoidal
shape. The severity of membership functions can be measured in terms of the
gradient of the inclined sides. The severity can be tuned according to the priority
of a goal (the higher the priority is, the steeper the membership function becomes).
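The membership function of Fig. 10.2a follows directly from the values given above; a minimal Python sketch:

def g132_satisfaction(response_time_s: float) -> float:
    # Full satisfaction up to 3 s, linear decay between 3 s and 7 s,
    # zero beyond 7 s. A steeper slope would model a higher-priority goal.
    if response_time_s <= 3.0:
        return 1.0
    if response_time_s >= 7.0:
        return 0.0
    return (7.0 - response_time_s) / (7.0 - 3.0)

for t in (2.0, 5.0, 8.0):
    print(t, g132_satisfaction(t))  # -> 1.0, 0.5, 0.0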
Adaptation goals augment the KAOS model to describe and tune the adaptation
capabilities associated with the system-to-be that are necessary to react to changes
or to the low satisfaction of conventional goals. An adaptation goal defines
a sequence of corrective actions to preserve the overall objective of the system.
Each adaptation goal is associated with a trigger and a set of conditions. The trigger
states when the adaptation goal must be activated. Conditions specify further
necessary restrictions that must be true to allow the corresponding adaptation
actions to be executed. Conditions may refer to properties of the system (e.g.,
satisfaction levels and priorities of other goals, or adaptation goals already
performed) or domain assumptions. Each adaptation goal is operationalized through the following actions:
• Add, remove, or modify a conventional goal;
• Add, remove, or modify an adaptation goal;
• Add or remove an operation;
• Add or remove an entity;
• Perform an operation: moves the process execution to the activity in which the operation, provided as parameter, starts to be performed (i.e., the first activity in the process flow associated with that operation);
• Perform a goal: moves the process execution to the activity in which the goal, provided as parameter, starts to be active (i.e., the first activity in the process flow associated with the first operation of the goal);
• Substitute agent.
Adaptation actions can be applied globally, on all (next/running) process
instances, or locally (only on the application instance for which the triggers and
conditions of that adaptation goal are satisfied). Adaptation goals may also conflict
when they are associated with conflicting goals (i.e., a pair of goals linked by
a contribution link with a negative weight). In this case, we trigger the adaptation
goal associated with the goal with the highest priority.
³ Membership functions do not continue to be greater than 0 when the response time is infinite.
The adaptation goals envisioned for our example are shown in Fig. 10.3. Adap-
tation goals AG1 and AG2 are triggered when goal G1.1.2 is violated (i.e., its
satisfaction is less than 1). AG1 is performed when the satisfaction of goal G1.1.2 is
less than 0.7 and comprises two basic actions: it replaces the agent that performs operation Find Graphical Content with another one (e.g., A5) and executes the
same operation. These actions are applied locally, only for the instance of the goal
model (and indeed, the process instance) for which the triggers and conditions hold
true. The objective of this countermeasure is to enforce the satisfaction of goal
G1.1.2. Adaptation goal AG2 is applied when the satisfaction of goal G1.1.2 is less
than 0.7 and AG1 has been already applied. AG2 performs operation Find Text
Content, and enforces a modified version of goal G1.1.2 (i.e., enforces goal G1.1.1
instead of G1.1.2). AG2 is also applied locally. Adaptation goals AG3 and AG4 are
triggered when goal G1.3.2 is violated. In particular they are applied when the
average value of the end-to-end response time of the news provider is greater than
3 s (conditions). AG3 enforces the satisfaction of goal G1.3.2 by switching to
textual news (i.e., it substitutes goal G1.1.2 with goal G1.1.1 and operation Find
Graphical Content with Find Text Content). AG3 is applied globally on all process
instances. If AG3 is not able to reach its objective, AG4 is applied; it instead tries to enforce the satisfaction of goal G1.3.2 by incrementing the number of servers in the pool according to the severity of the violation (it performs operation Increment
Servers). Operation Increment Servers can only be performed by agent A4 and
modifies the number of servers used by the load balancer. AG4 is also applied on all
process instances. Adaptation goal AG1 is in conflict with AG3 and AG4 since
they try to enforce conflicting goals. According to our policy, AG3 and AG4 are
triggered first, since they are associated with goal G1.3.2, which has higher priority (p = 5) than G1.1.2 (p = 3).
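An illustrative encoding of two of these adaptation goals and of the priority-based conflict policy is sketched below; the data structures are invented, and the triggers and conditions are simplified renderings of those just described, not taken from the authors' infrastructure:

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class AdaptationGoal:
    name: str
    guarded_goal: str               # conventional goal it protects
    priority: int                   # priority of that goal
    trigger: Callable[[dict], bool]
    conditions: Callable[[dict], bool]
    actions: List[str]
    scope: str = "local"            # "local" or "global"

def select(adaptation_goals, state):
    eligible = [g for g in adaptation_goals
                if g.trigger(state) and g.conditions(state)]
    # conflicting adaptation goals: trigger the one whose guarded
    # conventional goal has the highest priority
    return max(eligible, key=lambda g: g.priority, default=None)

ag1 = AdaptationGoal("AG1", "G1.1.2", 3,
                     lambda s: s["G1.1.2"] < 1.0,
                     lambda s: s["G1.1.2"] < 0.7,
                     ["substitute agent (e.g., A5)",
                      "perform Find Graphical Content"])
ag3 = AdaptationGoal("AG3", "G1.3.2", 5,
                     lambda s: s["G1.3.2"] < 1.0,
                     lambda s: s["avg_response_time"] > 3.0,
                     ["replace G1.1.2 with G1.1.1",
                      "replace Find Graphical Content with Find Text Content"],
                     scope="global")

chosen = select([ag1, ag3], {"G1.1.2": 0.5, "G1.3.2": 0.4,
                             "avg_response_time": 4.2})
print(chosen.name if chosen else None)  # -> AG3 (higher priority)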
This section illustrates our proposal to transform the goal model into running, self-
adaptive service-oriented compositions. The operationalization of conventional
goals is used to derive suitable compositions, while adaptation goals help deploy
probes needed to collect enough data for the runtime evaluation of goals’ satisfac-
tion. They are also in charge of adaptation actions.
The runtime infrastructure works at two different levels of abstraction: the process and the goal level.
• The process level provides a BPEL engine to support the execution of the
process instances. It also performs data collection and adaptation activities.
Data collection activities gather the runtime data needed to update the state of
entities, detect events, and evaluate the satisfaction of goals. Data to be collected
can be internal (they belong to the process state), or external (they belong to
the environment, and are retrieved by invoking external probes). Adaptation
activities apply the actions associated with adaptation goals. Different probes
and adaptation components can be easily plugged in to obtain a complete
execution platform.
• The goal level keeps a live goal model for each process instance, and updates it
by means of the data collected at process level. Every time an instance of the
goal model is updated, the infrastructure re-computes the satisfaction of con-
ventional goals. Specific analyzers can be plugged in when necessary,
depending on the kind of constraint (i.e., LTL, FOL, fuzzy) that must be
evaluated to assess a goal. The goal level also evaluates the triggers and
conditions of the adaptation goals and decides when adaptation must be
performed. Adaptation actions can affect both the goal model and the process
instances. The interplay between the goal and process levels is supported in the
infrastructure by a bidirectional mapping between the elements of the two levels.
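A minimal sketch of such a bidirectional mapping, assuming goal-model elements are identified by strings and process-level elements by XPath expressions into the BPEL document (the class and method names are invented, not the infrastructure's actual API):

import java.util.HashMap;
import java.util.Map;

// Sketch of the bidirectional goal/process mapping: each goal-model
// element is linked to a process-level element, and lookups work in
// both directions, so monitoring data can flow up to the goal level
// and adaptation actions can flow down to the process level.
public class LevelMapping {
  private final Map<String, String> goalToProcess = new HashMap<>();
  private final Map<String, String> processToGoal = new HashMap<>();

  public void link(String goalElementId, String bpelXPath) {
    goalToProcess.put(goalElementId, bpelXPath);
    processToGoal.put(bpelXPath, goalElementId);
  }

  public String processElementFor(String goalElementId) {
    return goalToProcess.get(goalElementId);
  }

  public String goalElementFor(String bpelXPath) {
    return processToGoal.get(bpelXPath);
  }
}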
Figure 10.4 shows the overall architecture of the runtime infrastructure. The
BPEL Engine is an instance of ActiveBPEL Community Edition Engine [1] aug-
mented with aspects [13] to collect internal data and start/stop the process’ execu-
tion when necessary. The Data Collector coordinates the different probes; the
Adaptation Farm oversees the activities of recovery components. The Supervision
Manager, based on the JBoss rule engine [22], receives data from the process level and triggers the updates of the goal level. The Goal Reasoner is also based on the JBoss rule engine: for each running process instance it keeps a goal model in its working
memory and updates it. The Goal Reasoner asks the Analysis Farm, which
coordinates analyzers, to (re-)compute the (degree of) satisfaction of the different
leaf goals every time new data from the process level feed the goal model. The Goal
Reasoner evaluates the triggers and conditions associated with adaptation goals and
initiates their execution if needed. This means that the Goal Reasoner can modify
the goal model and propagate the effects of adaptation at the process level. These
effects are then applied onto the process instances by using the Recovery Farm,
through the Supervision Manager.
Service compositions are rendered as BPEL processes. Their activities, events, and
partner services have a direct mapping onto the operations, entities, and agents of
the goal model. Our assumption is that all operations associated with the same goal
define a sequence and are not interleaved with the operations associated with other
goals. The definition of a complete process requires the composition of these
sequences and the transformation of their operations into the “corresponding”
BPEL activities. Each sequence is defined by encoding the operations associated
with each goal in Alloy to check whether there exists a possible sequence of
operations whose execution guarantees the satisfaction of the corresponding goal.
Interested readers can refer to [19] for a complete presentation. In general,
a sequence s1 can unconditionally precede s2 if the ending operation of s1, op1, and the starting operation of s2, op2, satisfy (10.1), while a sequence s1 conditionally precedes s2 if op1 and op2 satisfy (10.2). In this last case, an if activity is inserted in the BPEL process between s1 and s2, and its condition must correspond to the required precondition of op2.
Fig. 10.5 (a) Two possible sequences of operations and (b) an abstract BPEL process
(domPost(op1) → domPre(op2)) ∧ (reqPost(op1) → reqPost(op2) ∧ trigPre(op2))    (10.1)

(domPost(op1) → domPre(op2)) ∧ (trigPre(op1) → trigPre(op2)) ∧ (reqPre(op2) → reqPost(op1))    (10.2)
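Read operationally, (10.1) and (10.2) are entailment checks over the pre- and postconditions of the boundary operations of the two sequences. A sketch of how these checks could be coded, assuming the entailment test itself is delegated elsewhere (the authors encode the operations in Alloy [19]; the Entailment interface below is a hypothetical stand-in):

// Sketch of the precedence checks of (10.1) and (10.2). Cond is an
// opaque placeholder for a pre-/postcondition; implies(...) stands
// in for the real entailment analysis.
public class PrecedenceChecker {

  public interface Cond {}

  // Entailment between conditions; a stub standing in for the Alloy check.
  public interface Entailment {
    boolean implies(Cond antecedent, Cond consequent);
    Cond and(Cond a, Cond b);
  }

  public static class Op {
    Cond domPre, domPost, trigPre, reqPre, reqPost;
  }

  private final Entailment e;

  public PrecedenceChecker(Entailment e) { this.e = e; }

  // (10.1): s1 can unconditionally precede s2.
  public boolean unconditionallyPrecedes(Op op1, Op op2) {
    return e.implies(op1.domPost, op2.domPre)
        && e.implies(op1.reqPost, e.and(op2.reqPost, op2.trigPre));
  }

  // (10.2): s1 conditionally precedes s2; the generated BPEL "if"
  // condition corresponds to the required precondition of op2.
  public boolean conditionallyPrecedes(Op op1, Op op2) {
    return e.implies(op1.domPost, op2.domPre)
        && e.implies(op1.trigPre, op2.trigPre)
        && e.implies(op2.reqPre, op1.reqPost);
  }
}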
For example, Fig. 10.5a shows two possible sequences of operations. Since operations Find Text Content and Find Graphical Content are mutually exclusive, we select Find Graphical Content, which satisfies goal G1.1.2 (p = 3). This is because it is more critical than goal G1.1.1 (p = 2), which is associated with operation Find Text Content. The generation of BPEL activities is semi-automatic. When an operation,
in the goal domain, is translated into different sequences of BPEL activities, the
user must select the most appropriate. Rules for translating operations into BPEL
activities are the following:
1. If a required postcondition only contains an event, we generate one of the
following activities: invoke, invoke-receive, or reply.
2. If the triggering precondition does not contain any event and the required
postcondition changes some entities, we generate an assign for each change.
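These two rules amount to a case analysis on an operation's conditions. A schematic rendering, with invented predicate names, might be:

// Schematic rendering of the translation rules: rule 1 maps an
// event-only required postcondition to a messaging activity (the
// final choice among invoke, invoke-receive and reply is manual);
// rule 2 maps entity changes, absent a triggering event, to assigns.
public class BpelActivitySelector {

  enum ActivityKind { INVOKE, INVOKE_RECEIVE, REPLY, ASSIGN }

  public static java.util.List<ActivityKind> translate(
      boolean reqPostOnlyContainsEvent,
      boolean trigPreContainsEvent,
      int entityChangesInReqPost) {
    java.util.List<ActivityKind> result = new java.util.ArrayList<>();
    if (reqPostOnlyContainsEvent) {
      // Rule 1: placeholder; the user selects the appropriate variant.
      result.add(ActivityKind.INVOKE);
    } else if (!trigPreContainsEvent && entityChangesInReqPost > 0) {
      // Rule 2: one assign per changed entity.
      for (int i = 0; i < entityChangesInReqPost; i++) {
        result.add(ActivityKind.ASSIGN);
      }
    }
    return result;
  }
}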
10.3.3 Adaptation
The interplay between the process and goal levels is supported by the mapping of
Table 10.3. Each conventional goal, which represents a functional requirement (i.e.,
it is operationalized), is mapped onto the corresponding sequence activity in the
BPEL process (XPath expression). If the goal represents a non-functional require-
ment, but its nearest ancestor goal is operationalized, it is associated with the same
sequence as its parent goal. The XPath expression provides the scope for both
possible adaptation actions and for assessing the satisfaction of the goal (i.e., it
defines the activities that must be probed to collect relevant data). Each operation is associated with its first and last BPEL activities through two XPath expressions. Each agent is associated with a partner service; the user
manually inserts the actual binding. All events are mapped to an XPath pointing
to the corresponding activity in the BPEL process. This activity must represent
an interaction of the process with its partner services (e.g., invoke, pick, receive).
Each adaptation goal is associated with a set of actions that must be performed at
process level.
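To make the mapping concrete, the entries of Table 10.3 could be populated along the following lines, reusing the LevelMapping sketch above (the identifiers and XPath expressions are invented for illustration):

// Illustrative population of the goal/process mapping of Table 10.3.
LevelMapping mapping = new LevelMapping();
mapping.link("goal:G1.1.2", "//sequence[@name='FindGraphicalContent']");
mapping.link("op:FindGraphicalContent:first", "//invoke[@name='FindGraphicalContent'][1]");
mapping.link("event:NewsReceived", "//receive[@name='ReceiveNews']");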
Data collection specifies the variables that must be collected at runtime to update
the live instance of the goal model associated with the process instance. Data are
collected by a set of probes that mainly differ on how (push/pull mode), and when
(periodically/when certain events take place) data must be collected. If data are
collected in push mode, the Supervision Manager just receives them from the
corresponding probes, while if they are collected in pull mode, the Supervision
Manager must activate the collection (periodically or at specific execution points)
through dedicated rules. To evaluate the degree of satisfaction of each goal, its
formal definition must be properly translated to be evaluated by the selected
analyzer. The infrastructure provides analyzers for FOL and LTL expressions, for
crisp goals, and also provides analyzers to evaluate the actual satisfaction level of
fuzzy goals. To this end, we built on our previous work and exploit the monitoring
components provided by ALBERT [3], for LTL expressions, and Dynamo [4] for
both FOL expressions and fuzzy membership functions.
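For fuzzy goals, the analyzer evaluates a membership function rather than a boolean formula. A sketch of such a function for an end-to-end response-time goal, with invented thresholds (cf. Footnote 3 above, which requires membership to drop to 0 as the response time grows):

// Sketch of a fuzzy membership function for a response-time goal:
// fully satisfied up to 2 s, linearly decreasing to 0 at 4 s, and 0
// beyond, so membership does not stay above 0 as the response time
// grows without bound. The thresholds are illustrative only.
public final class ResponseTimeMembership {

  private static final double FULL = 2.0; // seconds, satisfaction = 1
  private static final double ZERO = 4.0; // seconds, satisfaction = 0

  public static double satisfaction(double responseTimeSeconds) {
    if (responseTimeSeconds <= FULL) return 1.0;
    if (responseTimeSeconds >= ZERO) return 0.0;
    return (ZERO - responseTimeSeconds) / (ZERO - FULL);
  }
}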
To enact adaptation goals at runtime, the Goal Reasoner evaluates a set of rules
on the live instances of the goal model available in its working memory. Each
adaptation goal is associated with three kinds of JBoss rules. A triggering rule activates the evaluation of the trigger associated with the goal. A condition rule
evaluates the conditions linked to the goal. If the two previous rules provide
positive feedback, an activation rule is in charge of the actual execution of the adaptation actions. These are performed when an adaptation goal can potentially fire (i.e., the corresponding Activation fact is available in the working memory) and is selected by the rule engine, among the other adaptation goals that could fire as well. The activation rule then executes the actions associated with that adaptation
goal. For example, the triggering rule associated with AG1 is the following:
when
Goal(id=="G1.1.2", satisfaction < 1, $pid: pID)
then
wm.insert(new Trigger("TrigAG1", $pid));
It is activated when the satisfaction of goal G1.1.2 is less than 1. This rule inserts
a new Trigger fact in the working memory of the Goal Reasoner, indicating that the
trigger associated with adaptation goal AG1 is satisfied for process instance pid.
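A condition rule matching the description below could be sketched as follows; the fact and field names (Trigger, Activation, numOfActivations, maxNumAct) are assumptions chosen for consistency with the triggering and action rules shown here:

when
Goal(id=="G1.1.2", satisfaction < 0.7, $pid: pID)
Trigger(name=="TrigAG1", pID == $pid)
AdaptationGoal(name=="AG1", pID == $pid, numOfActivations < maxNumAct)
then
wm.insert(new Activation("AG1", $pid));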
It is activated when the condition associated with AG1 (the satisfaction of goal
G1.1.2 is less than 0.7) is satisfied, the trigger of AG1 has taken place, and AG1 has
been tried less than a maximum number of times (maxNumAct). It inserts a new fact
in the working memory (Activation), to assert that the adaptation actions associated
with goal AG1, for the process instance and the goal model corresponding to pid
can be performed.4 The action rule is:
salience 3
activation-group recovery
when
$a: Activation(name == "AG1", $pid: pID)
$ag: AdaptationGoal(name == "AG1", pID == $pid)
then
List<Action> actions = new ArrayList<Action>();
actions.add(new SubstituteAgent("A3", "A5"));
actions.add(new Perform("Find Graphical Content"));
$ag.numOfActivations++;
Adaptation adapt = new Adaptation("AG1", actions, "instance", $pid);
adapt.perform(); wm.remove($a);
Action rules have a priority (salience) equal to that of the goal they refer to
(G1.1.2, in this case) and are always associated with activation-group recovery.
This means that, once the rule with the highest priority fires, it automatically
cancels the execution of the other adaptation goals that could be performed at the
same time. Adaptation actions are performed when the triggers and conditions of the adaptation goal are satisfied (i.e., the corresponding activation object ($a) is asserted in the working memory). The example rule performs the adaptation actions (adapt.perform()) on process instance ($pid). Finally, it removes the object ($a) that activated this adaptation.
Footnote 4: Note that if an adaptation goal is applied globally, there is no need to identify the process instance on which adaptation must be performed.
Fig. 10.6 The BPEL process with an alternative execution path
The adaptation actions modify the process instance by introducing an alternative execution path, shown in Fig. 10.6. Then, the process execution proceeds, performing the activities of the alternative execution path.
The validity of the proposed goal model has been evaluated by representing some
example applications commonly used by other approaches proposed to model self-
adaptive systems: an intelligent laundry [6], a book itinerary management system
[23], and a garbage cleaner [16]. These experiments showed that our goal model is expressive enough to represent the main functionality of these systems together with their adaptation scenarios.
In the first case study, a laundry system must distribute assignments to the
available washing machines and activate their execution. The system must also
guarantee a set of fuzzy requirements stating that the energy consumed must not exceed a given maximum and that the number of clothes waiting to be washed must be low. These requirements are fuzzy since their satisfaction depends on the amount of energy consumed and the number of clothes to be washed, respectively. The
satisfaction level of the energy consumed allows us to tune the duration of the
washing programs accordingly. The adaptation goals devised for this case study also
allow us to detect transient failures (e.g. the washing machine turns off suddenly)
and activate an action that performs an operation to restart a washing cycle.
The itinerary booking system must help business travellers book their travels and
receive updated notifications about the travel status (e.g., delays, cancelled flights).
These notifications can be sent via email or SMS depending on the device the
customer is actually using (i.e., laptop or mobile phone). Since sending an SMS is
the most convenient option, we decided to adopt it in the base goal model of this
case study. Suitable adaptation goals allow us to detect, through a trigger (i.e.,
whether the mobile phone is turned off) and a condition (i.e., whether the email of
the customer’s secretary is part of the information provided by the customer), when
the customer’s mobile phone is turned off, and apply an adaptation action that sends
an email to him/her.
In the cleaner agent scenario, each agent is equipped with a sensor to detect the
presence of dust and its driving direction. In case an agent finds a dirty cell, it must
clean it, putting the dust in its embedded dustbin. The adaptation goals envisioned
for this example allow the cleaner agent to recharge its battery when the charge level is low. Furthermore, they allow us to cover a set of failure prevention scenarios.
For example, adaptation goals can detect known symptoms of battery degenera-
tion (e.g., suddenly reduced lifetime or voltage) and perform an operation to alert
a technician, or get a new battery. Adaptation goals can also detect the presence of
obstacles in the driving direction of an agent and activate two actions: stop the agent
and change the driving direction, when possible.
These exercises proved very useful in highlighting both the advantages and disadvantages of our approach. We can perform accurate and precise
adaptations by assessing the satisfaction degree of soft goals and tuning the
adaptation parameters accordingly, as described before. The usage of triggers and
conditions makes it possible to react after system failures or context changes,
and also model preventive adaptations to avoid a failure when known symptoms
take place.
We adopt a priority-based mechanism to solve conflicts among adaptations that
can be triggered at the same time. This mechanism is still too simplistic in certain
situations. For example a vicious cycle may exist when a countermeasure A has
a negative side effect on another goal, and that goal’s countermeasure B has
a negative side effect on the first goal as well. These cases can be handled by tuning the conditions of the countermeasures involved, but the conditions can then become quite complex. For this reason, other decision-making mechanisms should be adopted, such as trade-off optimization functions. Finally, our goal model does not provide any reasoning mechanism to automatically identify possible adaptations in advance, when changes in the context or in the stakeholders' requirements take place.
take place when a task performed by an actor depends on another task performed by
a different actor. Dependencies are then translated into message sequences
exchanged between actors, and objectives into sets of local activities performed in
each actor’s domain. The authors also propose a methodology to modify choreogra-
phy according to changes in the business needs (dependencies between actors and
local objectives). Although this approach traces changes at requirements level, it
does not provide explicit policies to apply these changes at runtime.
The idea of monitoring requirements was originally proposed by Fickas et al.
[9]. The authors adopt a manual approach to derive monitors able to verify
requirements’ satisfaction at runtime. Wang et al. [26] use the generation of log
data to infer the denial of requirements and detect problematic components. Diag-
nosis is inferred automatically after stating explicitly what requirements can fail.
Robinson [24] distinguishes between the design-time models, where business goals
and their possible obstacles are defined, and the runtime model, where logical
monitors are automatically derived from the obstacles and are applied onto the
running system. This approach requires that diagnostic formulae be generated
manually from obstacle analysis. Although a lot of work has focused on monitoring requirements, only a few approaches provide reconciliation mechanisms for when requirements are violated. Wang et al. [26] generate system reconfigurations guided by
OR-refinements of goals. They choose the configuration that contributes most
positively to the non-functional requirements of the system and also has the lowest
impact on the current configuration. To ensure the continuous satisfaction of
requirements, one needs to adapt the specification of the system-to-be according
to changes in the context. This idea was originally proposed by Salifu et al. [25] and
was extensively exploited in different works [2, 20] that handled context variability
through the explicit modeling of alternatives. Penserini et al. [20] model the
availability of execution plans to achieve a goal (called ability), and the set of
pre-conditions and context-conditions that can trigger those plans (called oppor-
tunities). Dalpiaz et al. [2] explicitly detect the parameters coming from the external
environment (context) that stimulate the need for changing the system’s behavior.
These changes are represented in terms of alternative execution plans. Moreover, the
authors also provide precise mechanisms to monitor the context. All these works
are interesting since they address adaptation at requirements level, but they mainly
target context-aware applications and adaptation. They do not consider adaptations
that may be required by the system itself because some goals cannot be satisfied
anymore, or new goals are added. We foresee a wider set of adaptation strategies
and provide mechanisms to solve conflicts among them.
While our solution is more tailored to service-based applications, many works [11, 16] focus on multi-agent systems (MAS). Morandini et al. [16], like us, start from a goal model, Tropos4AS [17], which enriches TROPOS with soft goals,
environment entities, conditions relating entities and state transitions, and unde-
sired error states. The goal model is adopted to implement the alternative system
behaviors that can be selected given some context conditions. Huhns et al. [11]
exploit agents to support software redundancy, in terms of different implemen-
tations, and provide software adaptation. The advantage here is that agents can be
10.6 Conclusions
References
11.1 Introduction
Computer systems are constructed to satisfy business goals. Yet, even knowing this,
determining the business goals for a system is not an easy chore. First, there are
multiple stakeholders for a system, all of whom usually have distinct business
goals. Second, there are business goals that are not explicitly articulated and
must be actively elicited. Finally, the initial statements of some business goals are unreasonably strict.
If systems are constructed to satisfy business goals, then it is imperative that the requirements that drive the design of the system reflect those business goals. Knowing how to make trade-offs is one important reason the architect needs to be familiar
with the business goals for a system no matter which organization originated them.
We surveyed the business literature to understand the types of business goals that
organizations have with respect to a particular system. The literature is silent on the
business goals for specific projects and so we use the business goals for
organizations as a basis for our work. Our literature survey is summarized in [3,
4] where we discuss the process of developing the list. Some of the most important
citations are [1, 2, 5–9, 11]. The following business goal categories resulted from
applying an affinity grouping to the categories harvested from our survey:
1. Growth and continuity of the organization. How does the system being
developed (or acquired) contribute to the growth and continuity of the organi-
zation? In one experience using these categories, the system being developed
was the sole reason for the existence of the organization. If the system was not
successful, the organization would cease to exist. Other topics that might come
up in this category deal with market share, creation and success of a product
line, and international sales.
2. Meeting financial objectives. This category includes revenue generated or
saved by the system. The system may be for sale, either in stand-alone form or
by providing a service, in which case it generates revenue. A customer might be
hoping to save money with the system, perhaps because its operators will
require less training than before, or the system will make processes more
efficient. Also in this category is the cost of development, deployment, and
operation of the system. But this category can also include financial objectives
of individuals – a manager hoping for a raise, for example, or a shareholder
expecting a dividend.
3. Meeting personal objectives. Individuals have various goals associated with
the construction of a system. Depending on the individual, they may range from
“I want my company to lead the industry” to “I want to enhance my reputation
by the success of this system” to “I want to learn new technologies” to “I want
to gain experience with a different portion of the development process than in
the past.” In any case, it is possible that technical decisions are influenced by
personal objectives.
4. Meeting responsibility to employees. In this category, the employees concerned are usually those involved in development or those involved in operation.
Responsibility to employees involved in development might include ensuring
that certain types of employees have a role in the development of this system or
it might include providing employees the opportunities to learn new skills.
Responsibility to employees involved in operating the system might include
safety considerations, workload considerations, or skill considerations.
Since quality attribute requirements play an important role in designing the architecture, we not only need to elicit business goals but also relate those business goals to quality attribute needs. In essence, we attempt to fill out the following Table 11.1.
Not all cells in the table will be filled out; a particular business goal may
not manifest in requirements for every quality attribute of interest. Moreover,
cells in the table may be filled in vaguely at first, prompting the architect to have
extended discussions about the precise nature of the quality attribute requirements
precipitated by a business goal.
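An illustrative fragment of such a table, with invented entries, might look like this (rows are business goals, columns are quality attributes):

Business goal                              | Performance                             | Availability
Growth and continuity of the organization  | sub-second response in target markets   | 24/7 operation for international sales
Meeting financial objectives               | (none identified)                       | downtime bounded to limit lost revenue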
But armed with a table like this, the architect can now make informed design trade-offs, especially when combined with information about the value of achieving each business goal, thus prioritizing the needs.
Capturing business goals and then expressing them in a standard form will let them
be discussed, analyzed, argued over, rejected, improved, reviewed – in short, all of
the same activities that result from capturing any kind of requirement. One syntax
that could be used to describe business goals is that produced under the guidance of
TOGAF [10]. We use a syntax more focused on the relation between business goal
and architecture, however. Our business goal scenario template has six parts. They
all relate to the system under development, the identity of which is implicit. The
parts are:
1. Goal-subject. This is the stakeholder who owns the goal, who wishes it to be
true. The stakeholder might be an individual, an individual in an identified
organization if more than one organization is in play, or (in the case of a goal
that has no one owner and has been assimilated into an organization) the
organization itself.
2. Goal-object. This is the entity to which the goal applies. A goal-object will
typically be one of: individual, system, portfolio, organization’s employees,
organization’s shareholders, organization, nation, or society.
3. Environment. This is the context for this goal. Osterwalder and Pigneur [9]
identify social, legal, competitive, customer, and technological environments.
Sometimes the political environment is key; this is a kind of social factor.
4. Goal. This is any business goal articulated by the person being interviewed.
The body of knowledge we have presented provides the basis for eliciting architec-
turally relevant business goals from the stakeholders. This elicitation could be
executed in a variety of different engagement models. The engagement model
that we have direct experience with is having a workshop with a variety of
stakeholders; we have reported previously on our results using this engagement
model [3, 4]. In this section, we discuss a variety of different potential engagement
models and then we discuss some considerations that must be balanced when
choosing an engagement model. (All engagement models can include beforehand
inspection and analysis of relevant documentation.)
A face to face workshop. A face to face workshop can involve all of the
stakeholders or just stakeholders from one of the involved organizations. It has
the properties of openness, at least among the attendees, immediacy, and group
dynamics. It has the drawbacks of travel costs and scheduling difficulty. Finding
a time when all of the stakeholders can come together has proven to be a difficult
task, even without consideration of cost.
11 Business Goals and Architecture 191
Phone interviews. Phone interviews can be one-on-one or group oriented. They can
be mediated by one of the modern collaboration tools or can just be telephone
based. Phone interviews avoid travel cost but, if they are done with a group, have
the same scheduling problems. Group teleconferences have the problem of keeping attendees' attention. Interruptions are much easier with phone interviews
than with face to face interactions.
Questionnaires and forms. Questionnaires and forms to fill out allow the
stakeholders to contribute at their convenience. Constructing questionnaires
and forms that are unambiguous without hands-on guidance by the elicitor is very difficult and time-consuming. Questionnaires and forms are also difficult to
follow up to gather additional information.
Hybrid. It is possible to have hybrid engagements, as well. For example, one might
use a form or questionnaire to gather initial information and follow up with a
phone conference at some later date.
Some of the considerations that go into choosing an engagement model for
eliciting business goals are:
The cost to stakeholders. Each stakeholder from whom goals are to be elicited must
spend a non-trivial amount of time providing input. This time can be sequential
or broken up. Sequential time is frequently difficult to arrange but reduces the
context switching cost for the stakeholder. Another cost to the stakeholders is
travel time if the elicitation is performed in a remote location.
The cost to those performing the elicitation. If every relevant stakeholder is
interviewed individually, the time for those performing the elicitation is very
high. If relevant stakeholders are interviewed face to face, the travel costs for
those performing the elicitation is also high.
The immediacy of the elicitation. The elicitation can be done synchronously either
face to face or utilizing various communication technologies. It could also be
done asynchronously through questionnaires or forms. Synchronous interactions
enable the elicitor to direct a conversation and use follow-up questions to achieve
clarity. Asynchronous interactions allow the stakeholder to provide information
at their convenience.
Openness. The elicitation could be done with all of the stakeholders having access
to the input provided by any of the stakeholders or with the input of stakeholders
being kept confidential. Some stakeholders may not wish to have other
stakeholders aware of their business goals. This argues for confidentiality. The
architect, on the other hand, must justify decisions in terms of business goals and
this argues for openness.
Multiplicity. The elicitation could be done one-on-one or with a group. One-on-one
elicitations provide the maximum opportunity for confidentiality and openness
to the elicitors. Group dynamics often result in participants adding items they
may otherwise not have brought up.
No one engagement method is best for all situations and, in general, the choice of
engagement method is usually a matter of determining the least problematic
method.
11.8 PALM
quality and reputation of their products may very well lead to (for example)
security, availability, and performance requirements that otherwise might not
have been considered.
Second, PALM can be used to inform the architect of business goals that directly
affect the architecture without precipitating new requirements. One example is
a system requiring a database in order to utilize the database team. There is no
standard name for this property (“full-employment-ability?”) nor would it be
expected to show up as a “requirement.” Similarly, if an organization has the
ambition to use the product as the first offering in a new product line, this might
not affect any of the requirements for that product (and therefore not merit
a mention in the project’s requirements specification). But this is a crucial piece
of information that the architect needs to know early so it can be accommodated in
the design.
Third, PALM can be used to discover and carry along additional information
about existing requirements. For example, a business goal might be to produce
a product that out-competes a rival’s market entry. This might precipitate a perfor-
mance requirement for, say, half-second turnaround when the rival features one-
second turnaround. But if the competitor releases a new product with half-second
turnaround, then what does our requirement become? A conventional requirements
document will continue to carry the half-second requirement, but the goal-savvy
architect will know that the real requirement is to beat the competitor, which
may mean even faster performance is needed.
Fourth, PALM can be used to examine particularly difficult quality attribute
requirements to see if they can be relaxed. We know of more than one system where
a quality attribute requirement proved quite expensive to provide, and only after
great effort, money, and time were expended trying to meet it was it revealed
that the requirement had no analytic basis, but was merely someone’s best guess
or fond wish at the time.
Fifth, different stakeholders have different business goals for any individual
system being constructed. The acquirer may want to use the system to support their
mission; the developer may want to use the system to launch a new product line.
PALM provides a forum for these competing goals to be aired and resolved.
PALM can be of use to developing organizations as well as to acquiring organizations. Acquirers can use PALM to sort out their own goals for acquiring
a system, which will help them to write a more complete request for proposals
(RFP). Developing organizations can use PALM to make sure their goals are
aligned with the goals of their customers.
We do not see PALM as anointing architects to be the arbiter of requirements,
unilaterally introducing new ones and discarding vexing ones. The purpose of
PALM is to empower the architect to gather necessary information in a systematic
fashion.
Finally, we would hope and expect that PALM (or something like it) would
be adopted by the requirements engineering community, and that within an organi-
zation requirements engineers would be the ones to carry it out.
We applied PALM to a system being developed by the Air Traffic Management unit
of a major U.S. aerospace firm. To preserve confidentiality, we will call this system
The System Under Consideration (TSUC) and summarize the exercise in brief.
TSUC will provide certain on-line services to the airline companies to help
improve the efficiency of their fleet. Thus, there are two classes of stakeholders for
TSUC – the aerospace firm and the airline companies. The stakeholders present
when we used PALM were the chief architect and the project manager for TSUC.
Some of the main goals that were uncovered during this use of PALM were:
• How the system was designed to affect the user community and the developer community.
• The fact that TSUC was viewed as the first system in a future product line.
• Issues regarding the lifetime of TSUC in light of future directions of regulations affecting air traffic.
• The possibility of TSUC being sold to additional markets.
• Issues related to the governance strategy for the TSUC product.
The exercise helped the chief architect and the project manager share the same
vision for TSUC, such as its place as the first instance in a product line and the
architectural and look-and-feel issues that flow from that decision.
The ten canonical business goals ended up bringing about discussions that were wide-ranging and, we assert, raised important issues unlikely to have been thought of otherwise. Even though the goal categories are quite abstract and unfocussed,
they were successful in triggering discussions that were relevant to TSUC. The
result of each of these discussions was the capture of a specific business goal
relevant to TSUC.
11.10 Conclusions
Knowing the business goals for a system is important to enable an architect to make
choices appropriate to the context for the system. Each organization involved in the
construction and operation of the system will have its own set of goals that may
conflict.
We presented a body of knowledge suitable for eliciting the business goals from
stakeholders. The assumption is that the canonical list of goals will act as the
beginning of a conversation that will result in the elicitation of multiple business
goals. Capturing these goals in a fixed syntax ensures that all of the information for
each goal has been recorded and provides a common format for the reader of the
goals.
Based on this body of knowledge, there are a variety of different engagement models that will allow the elicitor to obtain the business goals from the stakeholders.
References
1. Antón A (1997) Goal identification and refinement in the specification of information systems. Ph.D. Thesis, Georgia Institute of Technology
2. Carroll AB (1991) The pyramid of corporate social responsibility: toward the moral management of organizational stakeholders. Bus Horiz 34:39–48
3. Clements P, Bass L (2010) Relating business goals to architecturally significant requirements for software. Technical report CMU/SEI-2009-TN-026
4. Clements P, Bass L (2010) Using business goals to inform software architecture. In: Proceedings requirements engineering 2010, Sydney, Sept 2010
5. Fulmer RM (1978) American Management Association [Questions on CEO Succession]
6. Hofstede G, van Deusen CA, Mueller CB, Charles TA (2002) What goals do business leaders pursue? A study in fifteen countries. J Int Bus Stud 33(4):785–803, Palgrave Macmillan Journals
7. McWilliams A, Siegel D (2001) Corporate social responsibility: a theory of the firm perspective. Acad Manage Rev 26(1):117–127
8. Mitchell RK, Agle BR, Wood DJ (1997) Toward a theory of stakeholder identification and salience: defining the principle of who and what really counts. Acad Manage Rev 22(4):853–886, Academy of Management. http://www.jstor.org/stable/259247
9. Osterwalder A, Pigneur Y (2004) An ontology for e-business models. In: Currie W (ed) Value creation from e-business models. Butterworth-Heinemann, Oxford, pp 65–97
10. TOGAF (2009) The open group architecture framework, Version 9. http://www.opengroup.org/architecture/togaf9-doc/arch
11. Usunier J-C, Furrer O, Perrinjaquet A (2008) Business goals compatibility: a comparative study. Institut de Recherche en Management (IRM), University of Lausanne, Ecole des HEC, Switzerland. Working paper: http://www.hec.unil.ch/irm/Research/Working Papers
Part III
Experiences from Industrial Projects
Chapter 12
Experiences from Industrial Projects
The aim of the book is to develop the bridge between two ‘islands’: Software
Architecture and Requirements Engineering. However, in software engineering,
there is another gap that needs to be bridged between two different types of
communities: industry and academia. Industrial practitioners work under hard
constraints and often do not have the luxury of trying out research results,
let alone embedding them in their everyday practice. In contrast, academic
researchers face the pressure of ‘publish or perish’ and often struggle with finding
the right industrial context in which to validate their work. Nevertheless, when the
right synergy is established between the two communities, there can be substantial progress in the state of the art.
In the Software Architecture field, the results of a successful partnership between industry and academia are manifold. Examples include the IEEE Recommended
Practice for Architectural Description of Software-Intensive Systems [1] and its
successor, the upcoming ISO-IEC Std. 42010 [2]; they constitute a common
conceptual framework respected in both communities. Methods for architecture
design [3], as well as evaluation [4] have been derived in both environments and
have been successfully applied in practice. The reusability of architecture design
has been boosted with the publication of numerous architecture [5] and design
patterns [6] that were mined in academic and industrial developmental
environments. Finally, the most recent advance in the field, the management of
Architecture Knowledge, has sprung out from academic research [7] but has
quickly had an impact in industrial practice.
Similarly, in the Requirements Engineering field, again standards have arrived,
including IEEE 830–1998 [8], which describes both possible and desirable
structures, content and qualities of software requirements. Other academic/industry
requirements tools have become ubiquitous both in software and further afield,
including use cases [9], requirements elicitation, specification and analysis [10],
and scenario analysis [11].
As well as bridging requirements and architectures, then, this third part of the book bridges academia and industry at the same time, relating successful collaborations of industrial and academic stakeholders in four chapters concerning various aspects of industrial experience in the field of relating requirements and architecture.
Chapters 13 and 16 relate examples of pragmatic approaches that build on the
experience gained by industrial projects: Chapter 13 derives design solutions to
recurring problems that are packaged as a reference architecture for design reuse;
Chapter 16 documents a number of best practices and rules of thumb in architecting
large and complex systems, thus promoting process reuse.
Chapter 14 presents a detailed academic approach of checking architecture
compliance that has been validated in an industrial system, providing evidence
for the validity of the approach in a real-world case.
Finally, Chapter 15 elaborates on a theoretical research result that has come out
of an industrial research environment: an approach to the management of artifact
traceability at the meta-model level, illustrated with realistic examples.
In more detail:
Chapter 13 by Tim Trew, Goetz Botterweck and Bashar Nuseibeh presents an
approach that supports requirements engineers and architects in jointly tackling the
hard challenges of a particular demanding domain: consumer electronics. The
authors have mined architectures from a number of existing systems in this domain
in order to derive a reference architecture. The latter attempts to establish
a common conceptual foundation that is generic and reusable across different
systems and contexts. It can be used to derive concrete product architectures by enabling architects and requirements engineers to successively and concurrently refine the problem and solution elements and make informed design decisions. The
process of developing a reference architecture by mining from existing systems,
issues encountered and design decisions that resolve them, is simple yet effective
for domains where little architecture standardization exists.
Chapter 14 by Huy Tran, Ta’id Holmes, Uwe Zdun, and Schahram Dustdar
presents a case study in the challenging field of checking compliance of the
architecture (in the SOA domain) to a set of requirements (ICT security issues).
The proposed solution has two main characteristics: it defines multiple views
to capture the varying stakeholder concerns about business processes, and it is
model-driven facilitating the linkage between requirements, architecture, and the
actual implementation through traces between the corresponding meta-models. The
approach results in semi-formal business processes from the perspective of
stakeholders and the checking of compliance of requirements through appropriate
links. It is validated in an industrial case study concerning a SOA-based banking
system.
Chapter 15 by Jochen M. Küster, Hagen Völzer, and Olaf Zimmermann
proposes a holistic approach to relate the different software engineering activities
by establishing relations between their artifacts. The approach can be particularly
targeted towards relating artifacts from requirements engineering and architecture,
as exemplified by their case study. The main idea behind the approach is a matrix
that structures artifacts along dimensions that are custom-built for the software
engineering process at hand. The authors propose two default dimensions as
a minimum: stakeholder viewpoints (in the sense of [1]) and realization levels.
Artifacts are classified into the cells of the matrix according to their nature
and subsequently links are established between them that express traceability,
consistency or actual model transformation between the artifacts. The approach
can be used during method and tool definition across the software engineering
lifecycle by utilizing the generic meta-model in order to decide upon the exact
relationships to be established between the various artifacts.
Chapter 16 by Michael Stal presents a pragmatic architecting process that has
been derived from a number of industrial projects. The process is informally
presented as a set of best practices and rules of thumb, as well as a method comprised
of a series of steps. The process is based on the principle of eliciting, specifying and
prioritizing requirements and subsequently using them as key drivers for
architecting. Besides the rooting in the problem space, the approach combines
some commonly accepted tenets: piecemeal refinement, risk mitigation, reuse,
review, refactoring. The architecture activities are prioritized into four phases
(functionality, distribution and concurrency, runtime and design-time qualities)
and the system is scoped into three levels (system, subsystem, component).
References
13.1 Introduction
Consumer electronics (CE) products, such as TVs, smart phones and in-car enter-
tainment, must be appealing to customers, have features that distinguish them in the
market and be priced competitively. Despite falling hardware costs, the resources
available for their implementation, such as processing power, memory capacity and
speed, and dedicated hardware elements, are still limited. These constraints may
restrict the features that can be offered, reduce their capabilities or limit their
concurrent availability. Requirements engineers and architects must work together
to specify an attractive product within these constraints, which requires an archi-
tectural description from the beginning of development.
Given the rate at which novel features are added to product categories such as
mobile phones, requirements engineers cannot only rely on past experience.
Instead, they have to reason from first principles about how a new feature might
be used, how it might interfere with other features, whether implementations
developed for other product categories would be acceptable and how requirements
may have to be adapted for the feature to be integrated into the overall product at
acceptable cost and risk.
This reasoning is hampered by the lack of consensus on high-level architectural
concepts for the CE domain. In contrast, the information processing domain has
widely-recognized industry standards and concepts, such as transactions and the
transparencies supported by distributed processing middleware [1], which are
rarely relevant for embedded software. As an example of the differences, a transac-
tion, a core concept in information processing, is rarely used in the lower levels of
embedded software. This is because any software action that changes the state of
the hardware may be immediately observable by end-users and cannot be rolled-
back unnoticed.
In this chapter, we describe a reference architecture for CE products, which
facilitates the creation of concrete architectures for specific products, and show how
this reference architecture can be used by requirements engineers to ensure that
products are both attractive for consumers and commercially viable. The reference
architecture was developed within Philips Electronics and NXP Semiconductors,
Philips’ former semiconductor division. Philips/NXP developed both software-
intensive CE products and the semiconductor devices that are critical elements of
their hardware. These semiconductor devices are associated with considerable
amounts of software (over one million lines of code for an advanced TV), and are
sold on the open market.
The reference architecture addresses the limited consensus on concepts in this
domain, and avoids the need for architects to become familiar with many abstract
concepts before it can be used. It does this by proposing recommended design solutions for each element in the architectural structure, each followed by a process for reviewing the design decisions in the light of the specific product requirements.
The content of the reference architecture is based on the experience of many
product developments, and the granularity of its structure is determined by the
architectural choices that must be made. Since architects must map new features
onto the reference structure, they are confronted with architectural choices and their
consequences for the requirements from the outset. We present a process for the
concurrent refinement of requirements and architecture.
In Sect. 13.2, we describe the architectural concerns of requirements engineers,
with examples of requirements and architectural choices that should be refined
together and the types of architectural information required in each case. This
includes the requirements for both complete products and individual COTS
components, which may have to be integrated into a variety of architectures.
Section 13.3 discusses how reference architectures and other forms of architec-
tural knowledge have been used in software development and considers how they
can address a broad application domain, independent of specific functionality.
Section 13.4 describes the scope of the domain of CE products and identifies
some of the characteristics and requirements of the domain that distinguish it from
others. Section 13.5 describes how, given the limited consensus on high-level
concepts relevant to the architecture of embedded software, appropriate informa-
tion was mined from earlier developments. It then describes how the informa-
tion was organised into our reference architecture. Finally, Sect. 13.6 describes
a process in which the requirements engineer and architect use the reference archi-
tecture to refine the requirements and gives an example of its use in the integration
of a novel feature into a mobile phone.
Nuseibeh’s “Twin Peaks” model describes both a concurrent process for requirements
engineering and architecting and the relationship between requirements, architecture
and design artefacts [2]. While this provides an overall framework, on its own it is
not concrete enough to provide guidance for the development of CE products. Our
reference architecture aims to pre-integrate elements of design and architecture,
so that, when developing the architecture for a new product, it will both provide
guidance on the decisions that must be made and give insight into the refinement of
the requirements.
As a first step in the development of a reference architecture, we consider
the types of architectural information that are relevant when establishing the
requirements for a CE product. Specifying these products requires a careful balance
between functionality and product cost, while meeting the constraints of perfor-
mance, quality and power consumption. The company that is first to introduce
a new feature, at the price-point acceptable for the mainstream range of a product
category, can achieve substantial sales.
Balance between functionality and price – Achieving the balance between func-
tionality and selling price requires early insight into alternatives for how a feature
might be implemented, and the hardware consequences for each of the options.
Many CE products have real-time requirements, both firm performance require-
ments for signal processing (e.g., audio/video processing or software-defined radio), and soft ones to ensure that the product appears responsive to the user.
If it is only found late in development that these requirements cannot be met,
then features may have to be dropped or downgraded, undermining the value
proposition for the product. As reported by Ran, by the mid-1980s embedded
products had already reached a level of complexity where it was no longer possible
to reason about their performance without architectural models that characterise
their behaviour [3]. Therefore, there should be an architectural model from the very
outset as the basis for refining requirements and architecture together, to specify
a product that makes the best use of its resources.
Visibility and resolution of resource conflicts – The requirements engineer must
be able to ensure that the resource management policies that resolve conflicts
between features result in a consistent style of user interaction. CE products often
have several features active concurrently, which not only impacts performance, but
can also result in contention for non-sharable resources and thereby feature inter-
action [4]. Although resource management policies might be considered to be
a purely architectural issue, they can affect the behaviour at the user interface.
Therefore, the requirements engineer must be able to understand both the nature of
the resource conflicts, to be able to anticipate that feature interaction could occur,
and the options for their resolution.
An example of this is the muting of TV audio, which is used by several features,
such as the user mute, automatic muting while changing channels or installing TV
channels, and the child lock, in which the screen is blanked and the sound muted
when a programme’s age rating is too high [5]. Since these features can be active
concurrently, feature interaction will result, and it must be possible to articulate
policies across the features. These policies should be directly traceable to the
architecture to ensure that they are correctly implemented.
Consequently, it must be possible to map features in the requirements specifi-
cation onto elements of the software architecture to ascertain which features can
co-exist and for the software architecture to be able to represent the different
resource management policies for resolving resource conflicts.
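As a concrete sketch of how such a policy might look in software, consider the TV-audio mute discussed above: several features can request muting concurrently, and the policy must define the audible result. The class below is illustrative only; the names and the specific policy (audio remains muted while any feature holds a mute request) are assumptions:

import java.util.HashSet;
import java.util.Set;

// Sketch of a resource-management policy for a shared hardware state
// (TV audio mute). Several features (user mute, channel change, child
// lock) can request muting concurrently; the policy here keeps the
// audio muted while at least one requester holds a mute, so releasing
// one feature's mute does not unmute the set while another feature
// still requires silence.
public class MutePolicyManager {

  private final Set<String> activeMutes = new HashSet<>();

  public synchronized void requestMute(String featureId) {
    activeMutes.add(featureId);
    apply();
  }

  public synchronized void releaseMute(String featureId) {
    activeMutes.remove(featureId);
    apply();
  }

  private void apply() {
    boolean muted = !activeMutes.isEmpty();
    // A hardware abstraction call would go here, e.g. audioDriver.setMuted(muted).
    System.out.println("audio muted = " + muted);
  }
}

Making the policy explicit in a single component like this is what allows the requirements engineer to trace user-visible muting behaviour directly to the architecture.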
Major CE companies used to develop all their software in-house in order to have
complete control over its requirements and to be able to fully optimize the imple-
mentation for their hardware architectures. However, this became uneconomic as
the number of product features increased and they now purchase software for non-
differentiating features. While features available on the open market are usually
governed by standards, these standards largely focus on the interface between the
product and its environment and rarely address the APIs between the implementa-
tion of this feature and the remainder of the product.1 For instance, the standards for
Conditional Access systems for controlling viewing in pay-TV systems specify how a TV signal is encrypted and the interface to the user's smart card, but not the particular functions that should be called to activate decryption in the TV.
Footnote 1: There have been many industry standardization initiatives for particular product categories for interfaces below the application layer, such as LiMo for mobile phones [6], the MPEG Multimedia Middleware (M3W) for audio/video platforms [7] and OpenMAX for media processing libraries [8]. However, to date, none has been widely adopted in the market. Contributors to this lack of adoption are the high degree of technical and market innovation in the CE domain and the unstable structure of the industry, which is in a transition away from vertically-integrated CE companies [9].
Consequently, each supplier develops their own API, with the expectation that it
can be integrated into the architecture of any of their customers’ products, but with
little knowledge of how the architectures may differ between customers. Although
integration problems can arise from many sources, a significant class of problems
result from the dynamic behaviour of the software, particularly since multiple
threads often have to be used to satisfy performance requirements. Different
companies may adopt different policies for aspects such as scheduling, synchroni-
zation, communication, error handling and resource management. Failure of the
supplier and integrator to achieve a mutual understanding of their policies can lead
to complex integration problems. A second source of problems is when the func-
tionality of components from different suppliers overlaps, so that the components
do not readily co-exist. The requirements engineer of the component supplier
should be aware of these potential problems, which will be described in more detail
shortly. Conversely, the requirements engineer of the product integrator should be
aware that such mismatches can occur and that these might be too risky to resolve if
there is a tight deadline for delivery.
The reference architecture should allow the compatibility between a component
and the remainder of the product to be assessed at an early stage. Ruling out a COTS
component on these grounds, despite having an attractive feature list, allows the
requirements engineer to focus on less risky alternatives. This may require
abandoning low-priority requirements that are only supported by that component.
To be able to detect incompatibilities, the options for the behaviour of the software
must be explicit in the reference architecture, while being described independently
of the structure of the software. This independence is required because the lack of
API standardisation results in suppliers using different structural decompositions
for the same functionality. We therefore capture alternative behavioural policies
as architectural texture, which Ran describes as the “recurring microstructure” of
the architecture [3] and which van der Linden characterizes as “the collection of
common development rules for realising the system” [10]. Kruchten’s architectural
mechanisms for persistency and communication [11] are concrete implementations
of behavioural policies. The identification of the alternative policies to include
in our reference architecture and the structuring of its architectural texture are
described in Sect. 13.5.
These issues are described in greater detail in the following paragraphs:
Policies for error handling and resource management – From a requirements
perspective, policies for error handling and resource management can affect the
behaviour observed by the end-user and, hence, must be assessed for their accept-
ability. For example, the vendor of a TV electronic programme guide may have
a policy of displaying their logo when the guide is activated, but the overall product
may support the restart of a specific feature if it fails. This restart would aim to
restore the feature to as close to its previous state as possible, and with minimal
disturbance to the user. However, this recovery would be compromised if the guide
also displays the logo during this restart.
Degree of control by supplied components – Another source of incompatibility
with supplied components is the scope of their functionality and the degree of
control that they expect to have over the platform. Multi-function CE devices may
integrate best-of-breed functionality from several suppliers. Problems can occur if
the required interfaces of a component are at too low a level, so that the component
encapsulates the control of the hardware resources it requires, or if its provided
interfaces are at too high a level.
The first case can cause two types of problems: either it is not possible for this
feature to execute concurrently with another that should share the same resource, or
it is not possible to achieve a smooth transition between features that require
exclusive access to the resource. The required interfaces of these components
should always be high enough that it is possible to insert a resource management
mechanism below them. However, new features are often originally conceived for
products in which they would always have exclusive access. Then provisioning for
an additional layer might have appeared to be an unnecessary overhead and an
additional source of complexity. It may only be years later, when the product is
extended with functionality from another category, that the problem emerges.
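The following minimal Java sketch (ours; the interfaces are hypothetical) illustrates the point. Because the feature depends only on an abstract decoder interface, an arbitrating resource manager can later be inserted below it without modifying the feature itself.

interface VideoDecoder { void decode(String stream); }

// Direct driver access: adequate while a feature has exclusive use of the hardware.
class DecoderDriver implements VideoDecoder {
    public void decode(String stream) { System.out.println("decoding " + stream); }
}

// Inserted later, below the required interface, when features must share the decoder.
class ManagedDecoder implements VideoDecoder {
    private final VideoDecoder driver;
    private String owner;

    ManagedDecoder(VideoDecoder driver) { this.driver = driver; }

    synchronized boolean acquire(String feature) {
        if (owner == null) { owner = feature; return true; }
        return false;   // caller must wait or degrade gracefully
    }

    synchronized void release(String feature) {
        if (feature.equals(owner)) { owner = null; }
    }

    public void decode(String stream) { driver.decode(stream); }
}

public class DecoderDemo {
    public static void main(String[] args) {
        ManagedDecoder decoder = new ManagedDecoder(new DecoderDriver());
        System.out.println(decoder.acquire("broadcast-TV"));   // true
        System.out.println(decoder.acquire("wifi-browser"));   // false: decoder busy
        decoder.release("broadcast-TV");
        System.out.println(decoder.acquire("wifi-browser"));   // true
        decoder.decode("stream-1");
    }
}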
As an example of restrictions on concurrent execution, consider the potential
conflicts between interactive services and video recording in a TV. Both features
must be able both to monitor broadcast data continuously and to select new stations.
Both features must be active continuously and must be able to share the resources.
However, they may not have been designed with that in mind. All terrestrial digital
TVs in the UK have supported interactive services from the outset. However, it was
only a decade later that digital video recording was integrated into TVs. In planning
the extension of the TV to include the recording functionality, it may have been
thought that it is only necessary to add the new feature, whereas it may also have
been necessary to acquire a new interactive TV engine from a different source and
to develop a resource manager. If the TV is scheduled for launch within a tight
window, the additional risk associated with this change may result in the introduction
of the recording feature being deferred.
Even if features are not to be active concurrently, excessively low-level required
interfaces can impair the end-user experience. For instance, as Wi-Fi home
networks became common, stand-alone adapters were developed to allow
consumers to browse for content on their PCs or the Internet and then to decode
the video streams, which could then be fed to a conventional TV. When this
functionality was later integrated within the TV, it was desirable to reuse the
same software components. However, previously these had exclusive control of
the video decoders, but in the new context this control has to be handed over to the
conventional broadcast TV receiver. If the Wi-Fi browser does not have provision
for this, it may be necessary to reinitialize the component whenever the feature is
selected, causing it to lose internal state information and taking an excessive time.
The requirements engineer must be aware of such consequences of reusing a proven
component in this new context to be able to decide whether the resulting behaviour
will be acceptable for the end-user.
Having provided interfaces at too high a level can compromise the consistency
of the user interface. The supplier of a component of a resource-intensive feature
has to ensure that it can operate reliably with the available hardware resources,
e.g. memory capacity, processing power or interconnect bandwidth, and possibly
with minimal power dissipation. This is most easily achieved with resource
managers that are not only aware of the current global state of the component,
but also of the desired future state of the component so that state transitions can be
planned in ways that avoid transient exhaustion of resources. For instance, in
a product with multi-stream audio/video processing, the semiconductor supplier
may wish to have complete control of the processing of these streams and of the
transitions between different stream configurations. This can be achieved by raising
the level of the provided interface, so that the client only makes a single declarative
request for a new configuration, much in the style of SOA. This enables the supplier
to both provide a component that can be fully-validated, independent of the
behaviour of the customer’s software, and allows the supplier to innovate by
evolving their hardware/software tradeoffs without affecting their customers’
code. These properties of dependability and evolvability are important non-
functional attributes for the component supplier, but they can lead to two problems
for the product integrator. Firstly, in this example, the integrator may be reluctant to
reveal the stream configurations that it plans to use and, secondly, the supplier’s
state transition strategy may differ from that used in other features, resulting in
inconsistent overall product behaviour.
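A minimal sketch of such a declarative interface might look as follows (our illustration; the names and types are hypothetical, and a real subsystem would plan genuine teardown and set-up steps). The client states only the desired configuration; sequencing the transition remains supplier-internal.

import java.util.List;

record StreamConfiguration(List<String> streams) {}

class StreamSubsystem {
    private StreamConfiguration current = new StreamConfiguration(List.of());

    // Single declarative request; the supplier sees both current and desired
    // state, so it can plan transitions that avoid transient resource exhaustion.
    void request(StreamConfiguration desired) {
        System.out.println("planning transition " + current.streams()
                           + " -> " + desired.streams());
        current = desired;
    }
}

public class DeclarativeDemo {
    public static void main(String[] args) {
        StreamSubsystem subsystem = new StreamSubsystem();
        subsystem.request(new StreamConfiguration(List.of("mainTV")));
        subsystem.request(new StreamConfiguration(List.of("mainTV", "recording")));
    }
}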
An architectural model is required that allows this tension to be discussed
without either party exposing critical intellectual property (IP), possibly providing
the motivation for the parties to enter a closer commercial partnership where
requirements can be discussed more freely. Therefore, the architecture should
represent the responsibilities of the components, while being independent of their
specific functionality.
This section has identified some situations in which requirements and architec-
tural choices should be refined together, both (1) to achieve a satisfactory balance
between functionality and product cost and (2) to ensure that resource management
policies result in a consistent user interface. It has also addressed the selection of
COTS components, both by identifying policies that have some influence on the
behaviour observed by the end-user, and by rapidly screening components for
architectural compatibility, so that unsuitable components can be disregarded at
an early stage. In each case, the types of architectural information required have been
highlighted, including identifying the resources used by any feature and the options
for managing resource conflict, and representing the scope of different COTS
components, in terms of the levels of their provided and required interfaces. This
must be done with a reference architecture which is abstracted from the concrete
product line architecture, both to allow these decisions to be made at an early stage
in development, before a refined architecture is available, and to protect the IP of
the parties.
Having described the support that a reference architecture should provide to the
requirements engineer, the remainder of the chapter describes how such an archi-
tecture was developed for the CE domain and illustrates how it can be used in
practice. As a first step in this, the next section reviews how industry develops and
uses reference architectures in general.
Before describing how our reference architecture was developed and how it can be
applied, we will introduce the form and use of reference architectures in some more
mature application domains and what lessons can be learnt for the development of
our architecture.
The role of reference architectures in software development is well-established;
the Rational Unified Process uses them to capture elements of existing architectures,
which have been proven in particular contexts, for reuse in subsequent developments
[11, 12]. Reference architectures can exist at many levels of abstraction and can take
many forms, depending on the context in which they are to be applied. The Open
Group Architecture Framework (TOGAF) introduces the architecture continuum to
describe the degree of abstraction a reference architecture has from an organisation-
specific architecture [13]. TOGAF describes the characteristics of potential
architectures in this continuum, ranging from Foundation Architectures to Organi-
zation-Specific Architectures and provides a Technical Reference Model (TRM) as
an example Foundation Architecture. The TRM is a taxonomy of applications,
services and service qualities. The service taxonomy is specific to information
processing applications. For our purposes, we require a model that is less specific
to an application domain, since the definition of services changes rapidly as the
functionality supported by a product category evolves. We also require a model that
provides more technical guidance, while being independent of the functionality
being supported.
The OASIS reference architecture for service-oriented architecture (SOA) [14]
is an example of such an architecture, since it captures the information that is
important for a successful SOA, independent of its functionality. In this case, the
overall structure of the SOA relates to the structure of the business, so that there is
a straightforward mapping from the requirements to the structure of the software
architecture. This mapping is more complex in an embedded system, with aspects
of a particular feature being implemented at different layers in the system, e.g. based
on the need to support variability of requirements and hardware and to separate
operations with different temporal granularity [15]. Therefore, in contrast to the
SOA reference architecture, our reference architecture for CE products should
provide guidance on structuring the software.
Eeles and Cripps’ classification of architectural assets [16] uses axes of granu-
larity and level of articulation (or implementation). At a fine grain, they identify
Architectural Styles, Architectural Patterns, Design Patterns and Idioms, which are
at a suitable level of articulation for our purposes. However, at a coarser grain, their
Reference Model is more domain-specific, being comparable to the TOGAF TRM.
We seek a reference architecture that can aggregate the fine grain architectural
information and provide guidance on its use, while still being independent of the
application domain.
POSA4 [17] presents an extensive pattern language that addresses these aims.
This provided inspiration for some aspects of our reference architecture but it does
not provide sufficient guidance for determining the overall structure of an embed-
ded system, for two reasons. Firstly, rather than giving specific guidance, it raises
a set of general questions about the behaviour of an application, the variability that
it must support and its life expectancy, and then describes the characteristics of the
architectural styles and patterns. It relies on the insights of the architects, who must
be familiar with a wide range of concepts before the language can be applied.
Secondly, developing the structure of embedded software is particularly challeng-
ing because it is usually a hybrid of architectural styles. For example, in the
structure in Fig. 13.5, the software is largely structured as layers, but the operating
system may be orthogonal to these, being accessible by all layers. Moreover,
different architectural styles may be used in different layers, e.g. the media/signal
processing may employ pipes and filters and the user applications may use model-
view-controller.
POSA4 addresses the problem of how to interpret the general questions in the
context of a specific application by preceding its pattern language with an extensive
example of the development of a warehouse management system. This approach of
using a running example is also used by Moore et al. in their B2B e-commerce
reference architecture [18]. Here, the reader can draw parallels between these
examples and their own applications by using the widely-accepted concepts of
the information processing domain. This approach is less effective for embedded
software because of the limited consensus on higher-level concepts.
Considering how better support might be given to architects, Kruchten states that
“architecture encompasses significant decisions” about the software [11], therefore
we might expect that the reference architecture will have made some decisions,
which are applicable throughout its scope, and identify decision topics that have to
be addressed for the current system. In their model of architectural knowledge de
Boer et al. state, “decision making is viewed as proposing and ranking Alternatives,
and selecting the alternative that has the highest rank . . . based on multiple criteria
(i.e. Concerns)” [19]. The reference architecture should provide guidance for
making such decisions.
Reed provides an example of a reference architecture for information pro-
cessing, using an N-tier architecture and identifying the decision topics for each
tier [12]. Here the decision criteria can be described concisely and unambiguously
since they are based on widely-understood concepts and established technology
standards. While this example is a valuable illustration of the role of reference
architectures in supporting the creation of a wide variety of applications, many
more decisions are required to cover the whole information processing domain.
When developing the reference architecture, we have to address the scope of the
CE domain to be covered by the architecture and the identification of appropriate
viewpoints. These can be considered in relation to the business aims that motivated
the development of the architecture, which go beyond the needs of the requirements
engineer. Indeed, the search for the form and content of the reference architecture
was driven by the desire to avoid integration problems. The overall set of business
aims were as follows:
• To enable requirements engineers to ensure that the product makes the best
use of its resources and to ensure that resource management policies result in
a consistent user interface, as introduced in Sect. 13.2.1. This is particularly
important the first time that a feature is incorporated into a product category.
• To support requirements engineers in the selection of software components from
external suppliers, as introduced in Sect. 13.2.2.
• To support software component suppliers in establishing the requirements for
components to be supplied to CE manufacturers, as introduced in Sect. 13.1.
• To enable architects to exchange best practices across different product
categories, having different concrete architectures. This is particularly to avoid
problems during component integration and to improve maintainability.
• To support internal standardization to facilitate reuse as the requirements of
different product categories converge.
Note that these aims do not include aspects, such as hardware-software co-
design, where specialised analytical models, such as Synchronous Data Flow
[26], specific to the nature of the processing, are used to optimise the system
architecture. While such optimisation is critical to the success of the product, it
normally addresses only a small proportion of the code. The overall software
architecture must ensure that the remainder of the software does not compromise
the performance of these critical elements.
In selecting the application domain to be addressed by the reference architecture,
we have taken the broad domain of CE products, rather than developing separate
reference architectures for each product category, such as TVs and mobile phones.
This is for several reasons. During the requirements phase, we need to be able to
handle the expansion in the scope of functionality supported by a product category,
whether with an entirely novel feature or a feature that was originally developed for
another category. By abstracting from specific functionality we are able to provide
support for feature combinations that had not been anticipated. A broad scope is
also required to exchange best practices and to promote reuse between product
categories. Without a reference architecture, the differences in requirements and
business models can obscure their commonalities. A broader reference architecture
will be exposed to a wider range of design choices, which makes it more effective
when incorporating COTS components, and will be more satisfactory for architects
to use since it cannot be overly prescriptive. Finally, the effort of developing
a broadly scoped architecture can be recouped over more development projects.
The scope of the CE application domain is characterised in two ways:
1. An abstract context diagram for a generic CE product, shown in Fig. 13.1. This
informal diagram is annotated with examples of actors and protocols.
2. A list of the general capabilities of these products, namely:
[Fig. 13.1 Context diagram for a generic CE product, annotated with examples of actors and protocols: user I/O (TV on-screen display; remote control; mobile phone LCD and keypad), peripheral I/O (PictBridge, I2C, USB, PC synchronization/upgrade) and host communication (connection to a main processor when the system is a co-processor).]
The following are the general non-functional requirements and constraints of the
products in this domain:
Requirements: The products must meet firm and soft real-time performance
constraints. Their user interfaces must be responsive, even when actions are
inherently time-consuming. Most actions should be interruptible, with the system
transitioning smoothly to respond to the new command. Actions can be triggered by
both user commands and spontaneous changes in a product’s environment. Several
features may be active concurrently. Products are usually members of a product
line, whose members may address different geographic regions or ranges of
features.
Constraints: The products have limited resources, such as application-specific
hardware and processing power and memory. They have limited user interfaces,
Given that few concepts from information processing can be applied to embedded
software, generally-recognised concepts for embedded software are only found at
the level of state machines and the services offered by real-time operating systems.
These concepts are too low-level for the early stages of architectural development.
Furthermore, the software architectures of concrete embedded products are highly
influenced by the associated hardware architecture and the diversity that this
introduces obscures the commonalities across product categories that could form
a reference architecture to support early decision-making.
The main challenges in the development of our reference architecture were
ascertaining what information should be documented and how it should be
structured. For instance, while the inclusion of a structural model in a reference
architecture is uncontentious, what should it contain? The definition of “architec-
ture” in IEEE 1471 includes “the fundamental organization of a system embodied
in its components, their relationships to each other . . .” [32] but what should be the
semantics of a component in a model abstracted from any specific product?
It was even less certain a priori what decision topics and other information
should be included in the architectural texture. However, Kruchten states that one of
the purposes of the architectural description is to “be able to understand how the
[Fig. 13.2 Information flow: existing product line architectures yield guidelines and checklists, which inform the reference architecture, which in turn supports new product line architectures.]
system works” and to “be able to work on one piece of the system” [11]. Conse-
quently, one way of identifying the necessary information is through the study of
the root causes of failures that occurred during integration, recording and
abstracting those that resulted from insufficient architectural information. Further-
more, architectures should support evolution and a similar approach can be taken
with components that have poor maintainability.
Our reference architecture was therefore developed in two phases. Firstly the
problems encountered during the integration and maintenance of existing products
were studied to obtain guidelines and checklists that could be used to review new
concrete architectures. Secondly, the understanding gained here was used in the
construction of a reference architecture that would support the creation of concrete
architectures, providing support from the earliest phase of the development. The
information flow is illustrated in Fig. 13.2.
As noted by Jackson, “the lessons that can be learned from engineering failures are
most effective when they are highly specific” [33]. Therefore, when setting
expectations for what will be achieved from the study of previous projects, we
expect much more than merely reiterating that decisions should be recorded for
each of the aspects identified in COPA, e.g. initialization, termination, fault
handling [27]. However, it is challenging to gain insights on developments
incorporating COTS components, given the limited information that is usually
provided and the reluctance of suppliers to discuss problems in detail. Therefore,
in searching for decision topics to include in the architectural texture, we first
exploited the experience of multi-site development within a single company.
Here, while architectural decisions must be explicit because of the limited commu-
nication between the sites, we were not hampered by IP issues. It was only after
[Fig. 13.3 Framework for reasoning about policies for communication and synchronisation, relating generic intents, interaction contexts, specific intents, non-functional requirements and design patterns to the policies recorded in the product line architecture.]
Table 13.1 Example intents and their specializations for specific interaction contexts

Interaction context     Intents
Generic                 Variables must be initialized before they are used.
                        Designs should be insensitive to the order of completion of unconstrained activities.
Notification handling   Variables that will be read by a notification handler must be set before the handler executes.
                        The notifications of a specific event should be generated and delivered in the same order.
Power-up                Avoid cyclic dependencies between sub-systems during initialization.
configurability. Rather than making a single decision for the entire product, a policy
that favours performance may be selected for the lower levels of the software and
a policy giving greater configurability may be used at higher levels. The reference
architecture is a vehicle for the requirements engineer to understand how the
efficiency of different parts of the software contributes towards the overall product
performance.
Figure 13.3 shows how, for a concrete product line, the choice of policy would
be recorded in the architecture. In contrast, the reference architecture would
contain a decision topic with the design alternatives.
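As an illustration of the difference, the following Java sketch (ours; the scoring scheme and names are hypothetical) represents a decision topic as a set of alternatives ranked against concerns, in the spirit of de Boer et al.'s model, so that different layers can select different policies.

import java.util.Comparator;
import java.util.List;
import java.util.Map;

record Alternative(String name, Map<String, Integer> support) {}

record DecisionTopic(String topic, List<Alternative> alternatives) {
    // Rank the alternatives by their support for one concern.
    Alternative best(String concern) {
        return alternatives.stream()
            .max(Comparator.comparingInt(a -> a.support().getOrDefault(concern, 0)))
            .orElseThrow();
    }
}

public class DecisionDemo {
    public static void main(String[] args) {
        DecisionTopic topic = new DecisionTopic("notification handling", List.of(
            new Alternative("synchronous callback",
                Map.of("performance", 3, "configurability", 1)),
            new Alternative("queued events",
                Map.of("performance", 1, "configurability", 3))));
        // Lower layers may favour performance, higher layers configurability.
        System.out.println(topic.best("performance").name());
        System.out.println(topic.best("configurability").name());
    }
}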
Having developed this understanding of integration issues and established
a framework for structuring decision topics, several analyses were undertaken of
the integration of software from external suppliers [35]. Since this software had
been developed with no knowledge of the architecture of the products in which
it was to be integrated, these studies revealed a much larger range of policy
[Figure: organisation of the reference architecture, pairing the architectural structure (from User Applications and Autonomous Applications down to Hardware Drivers) with architectural texture in the form of recommended designs, decision topics and design patterns.]
[Fig. 13.5 Structural model of the reference architecture with an example role description. The layers comprise an application manager, user-interface management, user applications (e.g. CD favourite track selection, timed recordings, interactive TV apps – MHP, MHEG-5), autonomous applications (e.g. partial reboot after failure, phone call reception), a domain model, state-maintaining, procedural and synchronous services (e.g. TV program installation, electronic program guide, content browsing), a platform API and a connection manager. Example role description: components each manage a specific hardware element that has activities that execute asynchronously from the higher layers.]
[Fig. 13.6 Recommended design for the application roles and their interfaces to the services, highlighting one of the three domain-specific patterns used in this design. The design combines a UIMS, view controllers and autonomous controllers with per-resource resource managers; the highlighted Resource Manager pattern mediates between applications for control of prioritized resources, knowing the relative priorities of the applications wishing to use a resource and, given this prioritization, how their requests should be combined to control the resource. Other elements include a service coordinator, plug-ins, guarded Soll-Ist models and workers.]
the rules or decision topics for behaviour. Some examples of the categories of
behavioural guidelines are:
Synchronization: This category includes the rules or decision topics identified
in the studies of integration failures described in Sect. 13.5.1. Here the rules or
decision topics are classified according to their interaction contexts and are there-
fore reusable throughout the architecture.
State behaviour: This extends the taxonomy of modal behaviour in POSA4 [17] to
cover the much larger set of state-related patterns referenced in the architecture,
providing both consistency in their description and a broader perspective on the
options available to architects.
Resource management: The different classes of resources are an important
concept to help the requirements engineer to understand how architectural choices
affect feature interaction. The category identifies three different classes of resource
management policies, namely synchronized, prioritized and virtual, and the issues
that must be considered with each class. These definitions and arguments are
widely-referenced throughout the architecture.
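For illustration, a minimal Java sketch of the prioritized class of policy might look as follows (our own sketch, not the architecture's definition; a real manager would also support waiting, combining requests and graceful revocation).

public class PrioritizedResourceManager {
    private String holder;
    private int holderPriority = Integer.MIN_VALUE;

    // Grants the resource to a higher-priority requester, preempting the holder.
    synchronized boolean request(String app, int priority) {
        if (holder == null || priority > holderPriority) {
            if (holder != null) { System.out.println("revoking from " + holder); }
            holder = app;
            holderPriority = priority;
            return true;
        }
        return false;   // lower priority: denied while the resource is held
    }

    synchronized void release(String app) {
        if (app.equals(holder)) { holder = null; holderPriority = Integer.MIN_VALUE; }
    }

    public static void main(String[] args) {
        PrioritizedResourceManager tuner = new PrioritizedResourceManager();
        System.out.println(tuner.request("interactive-service", 1));   // true
        System.out.println(tuner.request("recording", 2));             // true, revokes
        System.out.println(tuner.request("interactive-service", 1));   // false
    }
}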
Having described the top level of the architecture, we will describe how the
structural model is linked to more detailed recommendations and guidance on
pre-existing components. A UML activity diagram is used, showing the tasks and
resulting work products. Each task in this model is hyperlinked to decision topics,
such as the alternative policies for handling notifications, introduced in Sect. 13.5.1.
Each topic has a detailed discussion of the forces involved and examples of
decisions taken in earlier product developments, obtained from the studies
described in Sect. 13.5.1. Throughout the guidance, hyperlinks are made to defini-
tions and discussions in the architectural texture, where the issues are described in
a more general context. This both allows consistent decisions to be made through-
out the product and reduces the amount of material that must be presented in the
context of each of the individual roles in the architecture.
Finally, the design rationale for each decision topic is presented using our
extension of Gross and Yu’s approach to selecting between design alternatives,
based on their support for different NFRs [23]. As described in Sect. 13.5.1, we add
the intent as a goal that must be satisfied by all design alternatives.
Early trials of the use of the architecture confirmed that the approach of
beginning with a recommended design had a shallower learning curve compared
with that of a pure pattern language, such as that in POSA4 [17], in which there are
no default decisions. Such pattern languages require that the architect has a good
initial grasp of many abstract concepts.
A general principle behind the use of web pages to document the reference
architecture is that a user should be provided with the essence of recommendations
in the first instance, but that it is easy to drill down and get more details when
required. For example, Fig. 13.6 shows both tooltips and dynamic content, used for
the overlay of different design patterns. The latter provides navigation to other
pages through hyperlinks. Indeed, the ability to provide details on demand is the
key to presenting a full design rationale in a compact and comprehensible form.
Having described the development and organisation of our architecture, the
following section describes how it can be used during requirements engineering.
[Fig. 13.8 Role of the reference architecture in requirements-related tasks; tasks involving requirements engineers have a solid outline. Functional requirements are mapped onto the reference architecture's structural model (T1), yielding mapped requirements; further tasks include T5 (revise concurrently active features) and T10 (revise feasible functional requirements), with outcomes labelled A–C and results recorded in the concrete architectural texture and concrete structural model.]
A. Revise the requirements, having reviewed the architectural decisions that would
be required to satisfy the related NFRs. This might discard requirements with
a high technical risk.
B. Identify cases where contention for resources restricts the concurrent avail-
ability of features. Where features cannot be active concurrently, new
requirements may be added relating to the transition between those features.
C. Where third-party components are to be used, remove low-priority requirements
that are only supported by components architecturally incompatible with the
remainder of the product.
The use of the reference architecture will be illustrated by an example of the
integration of the PictBridge protocol into a mobile phone. This will show how
feature interaction can be detected and resource management policies assessed.
PictBridge [40] is a standard that allows a user to select, crop and print
photographs, using a camera connected directly to a printer, without requiring
a PC. The standard only addresses the protocol between the camera and printer,
and not the camera’s user interface or how the feature should be implemented in the
camera.
Consider establishing the requirements the first time that this feature was
integrated into a mobile phone, where it is to be implemented by a COTS compo-
nent that has previously only been integrated in a conventional camera. The
requirements engineer must:
• Determine a complete set of end-user requirements for the PictBridge feature.
• Identify potential feature interaction with the remainder of the phone’s features
and identify how they can be resolved satisfactorily.
These aims are addressed, with reference to Fig. 13.8, with the following
sequence of activities:
• T1: Map the functional requirements of the PictBridge feature onto the roles in
the structural model. The component implementing the protocol is an example
of a procedural service (see Fig. 13.5), which is one that executes a series of
actions, normally running to completion. The PictBridge component will need
access to the hardware drivers for the USB interface and the memory in which
the photographs are stored. In addition, the feature will require a user interface.
• T4: Map the scope of candidate PictBridge COTS components onto the struc-
tural model.
– Survey the COTS components that implement the PictBridge feature.
– The scope of each promising candidate is identified from studying the
features it supports and its interface specification. For instance, are its
provided interfaces restricted to the PictBridge protocol, or does the compo-
nent also provide some user interface functionality?
• T7: Compare the scope of the candidate PictBridge COTS components with
those of other features. Will it be possible to maintain a consistent user interface
across all features? If not then T9: discard incompatible candidate components.
If these components also support some unique functionality that cannot other-
wise be implemented, T10: revise the feasible functional requirements.
• T2: Identify resource conflicts. Identify the features that could be active concur-
rently and detect feature interaction. Since the user interface of most phones only
permits one application to be selected at a time, we are primarily concerned with
interference from features of the phone that are autonomous applications (see
Fig. 13.5), i.e., features that make calls to the services without having been
explicitly selected through the user interface. For a phone, these are incoming
telephone calls and text messages. How should the product react when a call or
message is received when the PictBridge feature is active?
– What are the consequences for the user interface? PictBridge implementations
on cameras normally retain control of the user interface while the photos are
being printed. Would it be possible to continue printing in the background on a
phone, so that it could continue to be used for other purposes? The architectural
guidance for the user applications and their interface to the services includes a
recommended design for managing the transfer of resources between
applications, shown earlier in Fig. 13.6. Do the available components have
the necessary synchronization functions to implement such design patterns?
– Considering the lower levels of the structural model, can both features be
active concurrently? Does the file system support concurrent access from
multiple applications and are there sufficient memory and processor
resources to support both features?
– Based on this analysis, T5: revise the requirements for concurrently active
features.
• T8: Review the COTS components for mismatched policies. The policies for how
the feature should be activated and terminated should be consistent with those of
other features. Many cameras activate the PictBridge feature only when the
camera is switched on while connected to a printer, whereas a phone user would
not expect to have to switch the phone off and on in the same way. Mismatches
often occur in state behaviour when components developed for one category of
product are integrated into a product of another category. Mismatches may also
occur in the selection of design alternatives, such as those for handling
notifications, introduced in Sect. 13.5.1. Such mismatches can be detected by
architects during the later steps in Fig. 13.7 and may require unacceptably
complex glue code to integrate the component into the remainder of the system.
Again, following this analysis, T9: discard incompatible candidate components
and, if necessary, T10: revise the feasible functional requirements to remove
those only supported by the discarded components.
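To suggest how the scope-screening steps (T4 and T7) might be mechanised, the following Java sketch (ours; the layer names are hypothetical simplifications of the structural model) flags candidate components whose scopes overlap those of existing features.

import java.util.EnumSet;
import java.util.Set;

public class ScopeCheck {
    enum Layer { USER_INTERFACE, USER_APPLICATION, SERVICE, HARDWARE_DRIVER }

    record Component(String name, Set<Layer> scope) {
        boolean overlaps(Component other) {
            EnumSet<Layer> shared = EnumSet.copyOf(scope);
            shared.retainAll(other.scope());
            return !shared.isEmpty();
        }
    }

    public static void main(String[] args) {
        // A PictBridge stack that brings its own user interface would clash
        // with the phone's user-interface layer.
        Component pictBridge = new Component("PictBridge",
            EnumSet.of(Layer.USER_INTERFACE, Layer.SERVICE));
        Component phoneUi = new Component("PhoneUI",
            EnumSet.of(Layer.USER_INTERFACE, Layer.USER_APPLICATION));
        System.out.println("conflict: " + pictBridge.overlaps(phoneUi));   // true
    }
}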
A benefit of using the reference architecture, even when a concrete archi-
tecture already exists for the mobile phone, is that it supports the comparison
of the scopes of COTS components implementing different features. This makes
it easier to detect feature interaction and identify requirements for resource
management.
13.7 Conclusions
for detailed discussions with customers without either party exposing their IP,
which will be of increasing value as the CE industry transitions away from
vertically-integrated companies towards supply chains or ecosystems.
References
20. Zimmermann O, Zdun U, Gschwind T, Leymann F (2008) Combining pattern languages and
reusable architectural decision models into a comprehensive and comprehensible design
method. Paper presented at the seventh working IEEE/IFIP conference on software archi-
tecture, Vancouver, 18–22 Feb 2008
21. Arsanjani A (2004) Service-oriented modeling and architecture: how to identify, specify, and
realize services for your SOA. http://www.ibm.com/developerworks/library/ws-soa-design1/.
Accessed 7 June 2010
22. Clerc V, Lago P, van Vliet H (2007) Assessing a multi-site development organization for
architectural compliance. Paper presented at the sixth working IEEE/IFIP conference on
software architecture, Mumbai, 6–9 Jan 2007
23. Gross D, Yu E (2001) From non-functional requirements to design through patterns.
Requirements Engineering 6(1):18–32
24. Chung L, Nixon BA, Yu E, Mylopoulos J (2000) Non-functional requirements in software
engineering. Kluwer, Boston
25. Muller G, Hole E (2007) Reference architectures; why, what and how. System architecture
forum. http://www.architectingforum.org/whitepapers/SAF_WhitePaper_2007_4.pdf
26. Lee EA, Messerschmitt DG (1987) Static scheduling of synchronous data flow programs for
digital signal processing. IEEE Transactions on Computers 36(1):23–35
27. Obbink H, Müller J, America P, van Ommering R (2000) COPA: a component-oriented
platform architecting method for families of software-intensive electronic products. Paper
presented at the software product line conference, Denver, 28–31 Aug 2000
28. van der Linden F, Schmid K, Rommes E (2007) Software product lines in action: the best
industrial practice in product line engineering. Springer, Berlin
29. Pohl K, Weyer T (2005) Documenting variability in requirements artefacts. In: Pohl K,
Böckle G, van der Linden F (eds) Software product line engineering. Springer, Berlin, pp 89–113
30. van Dinther Y, Schijfs W, van den Berk F, Rijnierse K (2001) Architectural modeling, introducing
the architecture metamodel. Paper presented at the landelijk architectuur congress, Utrecht
31. Kruchten P (1995) Architectural blueprints – the “4+1” view model of software architecture.
IEEE Software 12(6):42–50
32. The Institute of Electrical and Electronics Engineers, Inc. (2000) IEEE Std 1471–2000,
IEEE recommended practice for architectural description of software-intensive systems.
The Institute of Electrical and Electronics Engineers, Inc, New York
33. Jackson M (2010) Engineering and software. In: Nuseibeh B, Zave P (eds) Software
requirements and design: the work of Michael Jackson. Good Friends Publishing, Chatham
34. Trew T (2005) Enabling the smooth integration of core assets: defining and packaging
architectural rules for a family of embedded products. Paper presented at the software product
line conference, Rennes, 26–29 Sept 2005
35. Trew T, Soepenberg G (2006) Identifying technical risks in third-party software for embedded
products. Paper presented at the fifth international conference on COTS-based software
systems, Orlando, 13–16 Feb 2006
36. Martin RC (2003) Agile software development: principles, patterns and practices. Prentice
Hall, Upper Saddle River
37. Wirfs-Brock R, McKean A (2003) Object design: roles, responsibilities, and collaborations.
Addison-Wesley, Boston
38. van Ommering R (2003) Horizontal communication: a style to compose control software.
Software Practice and Experience 33(12):1117–1150
39. Jackson M (2001) Problem frames: analyzing and structuring software development
problems. Addison-Wesley, Harlow
40. Camera and Imaging Products Association (2007) White paper on CIPA DC-001-2003 Rev
2.0: Digital photo solutions for imaging devices. Camera and Imaging Products Association.
Tokyo, Japan
Chapter 14
Using Model-Driven Views and Trace Links
to Relate Requirements and Architecture:
A Case Study
14.1 Introduction
Throughout this study, we use a loan approval process of a large European banking
company to illustrate the application of our approach in the domain of process-
driven SOAs. The banking domain must enforce security and must be in conformity
with the regulations in effect. Particular measures like separation of duties, secure
logging of events, non-repudiable action, digital signature, etc., need to be
1 http://www.omg.org/spec/BPMN/1.1
2 http://www.bis.org/publ/bcbs107.htm
3 http://www.senat.fr/leg/pjl02-166.html
4 http://www.fsa.gov.uk/pages/About/What/International/mifid
5 http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=107_cong_bills&docid=f:h3763enr.tst.pdf
[Fig. 14.1 BPMN model of the loan approval process, spanning roles including customer, credit broker, supervisor, manager and loan provider: receiving the loan request via the access portal, creating the loan file, checking suspended privileges and loan conditions, requesting credit information, checking credit worthiness, evaluating loan risk, initializing and sending the loan contract, receiving the signed contract (subject to a legal delay of 7 days and a legal time-out of 2 months), performing the loan settlement, closing the loan approval and notifying the customer; declines occur due to suspended privilege, unsatisfied conditions, bad credit worthiness or high risk.]
disclosure. Nevertheless, laws and regulations are just one example of compliance
concerns that might occur in process-driven SOAs. There are many other rules,
policies, and constraints in a SOA that have similar characteristics. Some examples
are service composition and deployment policies, service execution order con-
straints, information exchange policies, security policies, quality of service (QoS)
constraints, and so on.
Compliance concerns stemming from regulations or other compliance sources
can be realized using various controls. A control is any measure designed to assure
a compliance requirement is met. For instance, an intrusion detection system or
a business process implementing separation of duty requirements are both controls
for ensuring system security. As regulations are not very concrete on how to
realize the controls, the regulations are usually mapped to established norms and
standards describing more concretely how to realize the controls for a regulation.
Controls can be realized in a number of different ways, including manual controls,
reports, or automated controls (see Fig. 14.2). Table 14.1 depicts some relevant
[Fig. 14.2 Overview of the view-based, model-driven approach for supporting compliance in SOAs: laws and regulations are realized as automated controls and compliance reports, and a code generator produces executable processes and services.]
compliance requirements that the company must implement in the loan approval
process in order to comply with the applicable laws and regulations.
Our view-based, model-driven approach has been proposed for addressing the
complexity and fostering flexibility, extensibility, and adaptability in process-driven
SOA modeling and development [23].
various tangled concerns such as the control flow, data processing, service
invocations, event handling, human interactions, transactions, to name but a few.
The entanglement of those concerns increases the complexity of process-driven SOA
development and maintenance as the number of involved services and processes
grows. Our approach exploits the notion of architectural views to describe the
various SOA concerns. Each view model is a (semi)-formalized representation of
a particular SOA or compliance concern. In other words, the view model specifies
entities and their relationships that can appear in the corresponding view.
Figure 14.3 depicts some process-driven SOA concerns formulated using VbMF
view models. All VbMF view models are built upon the fundamental concepts of the
Core model shown in Fig. 14.4. Using the view extension mechanisms described in
[23], the developers can add a new concern by using a New-Concern-View model
that extends the basic concepts of the Core model (see Fig. 14.4) and defines
additional concepts of that concern. The new requirements meta-data view, which
is presented in Sect. 14.4.2, is derived using VbMF extension mechanisms for
[Fig. 14.3 VbMF view models: the Core model underpins views along a vertical dimension bridging abstraction levels (high-level to low-level) and a horizontal dimension mastering the complexity of tangled process concerns, with FlowView, CollaborationView, InformationView, HumanView and New-Concern-View models.]
[Fig. 14.4 Core model – the foundation for VbMF extension and integration: a class diagram in which View, Process, Service and Element specialize NamedElement (name: String, nsURI: String); a Process provides and requires Services, and each View belongs to a Process and contains Elements.]
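For readers who prefer code to class diagrams, the following plain-Java sketch mirrors the Core model of Fig. 14.4 and the extension mechanism (illustrative only; in VbMF these are metamodel elements, e.g. defined in Ecore, and the ComplianceMetadataView subclass is our hypothetical example of a New-Concern-View).

import java.util.ArrayList;
import java.util.List;

class NamedElement {
    String name;
    String nsURI;
    NamedElement(String name) { this.name = name; }
}
class Element extends NamedElement { Element(String name) { super(name); } }
class View extends NamedElement {
    final List<Element> elements = new ArrayList<>();   // a view contains elements
    View(String name) { super(name); }
}
class Service extends NamedElement { Service(String name) { super(name); } }
// Note: this Process shadows java.lang.Process within this file only.
class Process extends NamedElement {
    final List<Service> provides = new ArrayList<>();
    final List<Service> requires = new ArrayList<>();
    final List<View> views = new ArrayList<>();
    Process(String name) { super(name); }
}
// A new concern extends the Core concepts via the extension mechanism.
class ComplianceMetadataView extends View {
    ComplianceMetadataView() { super("ComplianceMetadata"); }
}

public class CoreModelDemo {
    public static void main(String[] args) {
        Process loanApproval = new Process("LoanApproval");
        loanApproval.views.add(new ComplianceMetadataView());
        loanApproval.provides.add(new Service("LoanDecision"));
        System.out.println(loanApproval.name + " has "
                           + loanApproval.views.size() + " view(s)");
    }
}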
Fig. 14.5 The loan approval process development in VbMF: (1) The FlowView, (2–3) The high-
level collaborationView and informationView, and (4–5) The low-level, technology-specific
BpelCollaborationView and BpelInformationView
Figure 14.5 shows the loan approval process implemented using VbMF. These
views are inter-related implicitly via the integration points from the Core model
[23]. The details of these views, as well as their aforementioned relationships, will be
clarified in Sect. 14.4.3 on the trace dependencies between VbMF views.
In this section, we present a Compliance Meta-data view for linking parts of the
requirements and the design views of a SOA system. On the one hand, this view
enables stakeholders such as business and compliance experts to represent compli-
ance requirements originating from some compliance sources. On the other hand, it
allows process-driven SOA elements described using VbMF to be annotated (e.g., the
ones shown in Fig. 14.5) with the elicited compliance requirements. That is,
we want to implement a compliance control for, e.g., a compliance regulation,
standard, or norm, using a process or service.
The Compliance Meta-data view provides domain-specific architectural knowl-
edge (AK) for the domain of a process-driven SOA for compliance: It describes
which parts of the SOA, i.e., which services and processes, have which roles in the
compliance architecture (i.e., are they compliance controls?) and to which compli-
ance requirements they are linked. This knowledge describes important architec-
tural decisions, e.g., why certain services and processes are assembled in a certain
architectural configuration. In addition, the Compliance Meta-data view offers
other useful aspects to the case study project: From it, we can automatically
generate compliance documentation for off-line use (i.e., PDF documents) and
for online use. Online compliance documentation is, for instance, used in monitor-
ing applications that can explain the architectural configuration and rationale
behind it, when a compliance violation occurs, making it easier for the operator
to inspect and understand the violation.
A compliance requirement may directly relate to a process, a service, a business
concern, or a business entity. Nonetheless, compliance requirements not only
introduce new concerns but are also orthogonal to these: although usually related
to process-driven SOA elements, they are often pervasive throughout the SOA and
express independent concerns. In particular, compliance requirements can be
formulated independently until applied to a SOA. As a consequence, compliance
requirements can be reused, e.g., for different processes or process elements.
Figure 14.6 shows our proposed Compliance Meta-data view model. Annotation
of specific SOA elements with compliance meta-data is done using compliance
Controls that relate to concrete implementations such as a process or service (these
are defined in other VbMF views). A Control often realizes a number of Complian-
ceRequirements that relate to ComplianceDocuments such as a Regulation, Legis-
lation, or InternalPolicy. Such RegulatoryDocuments can be mapped to Standards
that represent another type of ComplianceDocument. When a compliance require-
ment exists, it usually comes with Risks that arise from a violation of it. For
documentation purposes, i.e., off-line uses, and for the implementation of compli-
ance controls, the ControlStandardAttributes help to specify general meta-data for
compliance controls, e.g., if the control is automated or manual (isAutoma-
tedManual). Besides these standard attributes, individual ControlAttributes can be
defined for a compliance control within a certain ControlAttributeGroup.
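The following plain-Java sketch (ours; in VbMF these concepts are model elements rather than runtime classes, and all instance data is hypothetical) renders the main concepts and their relations, echoing the instance shown in Fig. 14.7.

import java.util.ArrayList;
import java.util.List;

class ComplianceDocument {
    String title;
    ComplianceDocument(String title) { this.title = title; }
}
class Legislation extends ComplianceDocument { Legislation(String title) { super(title); } }
class Risk {
    String description, impact;
    Risk(String description, String impact) { this.description = description; this.impact = impact; }
}
class ComplianceRequirement {
    ComplianceDocument follows;   // the document the requirement follows
    Risk has;                     // the risk arising from a violation
    ComplianceRequirement(ComplianceDocument follows, Risk has) { this.follows = follows; this.has = has; }
}
class Control {
    String description;
    final List<ComplianceRequirement> fulfills = new ArrayList<>();
    final List<String> implementedBy = new ArrayList<>();   // names of processes/services
    Control(String description) { this.description = description; }
}

public class ComplianceDemo {
    public static void main(String[] args) {
        Legislation eu = new Legislation("EU Directive 95/46/EC Individual Protection");
        Risk abuse = new Risk("Abuse of individual-related data", "HIGH");
        Control c1 = new Control("Secure transmission of individual-related data");
        c1.fulfills.add(new ComplianceRequirement(eu, abuse));
        c1.implementedBy.add("LoanApproval");   // process
        c1.implementedBy.add("CreditBureau");   // service
        System.out.println(c1.description + " fulfills "
                           + c1.fulfills.size() + " requirement(s)");
    }
}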
To provide for extensibility, we have realized a generic modeling solution:
a NamedElement from the Core model can implement a Control. This way not
only Services and Processes can realize a compliance control but, as the View-based
Modeling Framework is extended, other NamedElements can also be specified to
implement a Control. In order to restrict arbitrary use, an OCL constraint is
attached to the Control that can be adapted if necessary (i.e., the set of the
getTargetClasses operation is extended with a new concept that can implement
a Control).
context Control
static def: getTargetClasses : Set = Set{'core::Service', 'core::Process', 'core::Element'}
inv: self.implementedBy->forAll(e | getTargetClasses->exists(v | e.oclIsKindOf(v)))
[Fig. 14.7 Excerpt of the compliance meta-data view of the loan approval process. Control C1 ("Secure transmission of individual-related data through secure protocol connectors") is implemented by the LoanApproval process, the CreateLoanFile atomic task and the CreditBureau and CustomerDatabase services. C1 fulfills compliance requirement CR1, which follows the legislation EU-Directive-95/46/EC ("EU Directive 95/46/EC Individual Protection", European Parliament, Council, 1995-10-24, http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31995L0046:EN:NOT) and has the associated AbuseRisk ("Abuse of individual-related data", impact HIGH, likelihood LOW).]
Figure 14.7 shows an excerpt of the Compliance Meta-data view of the loan
approval process that illustrates a directive from the European Union on the
protection of individuals with regard to the processing of personal data. The
compliance control C1, which fulfills the requirements CR1, is implemented by
the elements of the loan approval process such as the process named LoanApproval,
the task named CreateLoanFile, and the services named CreditBureau and
CustomerDatabase. Those elements are modeled in VbMF as presented in
Fig. 14.5. The compliance requirement CR1 follows the legislative document and
is associated with an AbuseRisk.
In this way, the views in Fig. 14.5 provide the architectural configuration of
the processes and services whilst Fig. 14.7 provides the compliance-related ratio-
nale for the design of this configuration. Using the Compliance Meta-data view, it is
possible to specify compliance statements such as CR1 is a compliance requirement
that follows the EU Directive 95/46/EC on Individual Protection6 and is imple-
mented by the loan approval process within VbMF.
The aforementioned information is useful for the project in terms of compliance
documentation, and hence likely to be maintained and kept up-to-date by the
developers and users of the system, because it can be used for generating the
compliance documentation that is required for auditing purposes. If this documen-
tation is the authoritative source for compliance stakeholders then it is also likely
that they have an interest in keeping this information up to date. In doing so they
may be further supported with, e.g., imports from other data sources. But important
AK is also maintained in this model: in particular, the requirements for the
process and the services that implement the control are recorded. That is, this
information can be used to explain the architectural configuration of the process
and the services connected via secure protocol connectors. Hence, in this parti-
cular case this documented AK is likely to be kept consistent with the implemented
system and, at the same time, the rationale of the architectural decision to use secure
protocol connectors does not get lost.
6 http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=CELEX:31995L0046:EN:HTML
[Fig. 14.8 The traceability view model: a TraceabilityModel contains TraceLinks (name, id) that may have subLinks and be annotated with a TraceRationale (description); ArtifactTraces connect source and target ArtifactReferences (ViewArtifacts and CodeArtifacts, with location, nsURI, id and uuid), while ElementTraces – specialized as ViewElementTrace, ViewCodeTrace and CodeElementTrace, with link kinds ViewToView, ViewToCode and CodeToCode – connect ElementReferences such as View Elements and Code Fragments (xpath), qualified by Role and RelationType.]
view models and view elements, ViewToCodes elicit the traceability from VbMF to
process implementations, and finally, CodeToCodes describe the relationships
between the generated schematic code and the associated individual code. Along
with these refined trace links between process development artifacts, we also extend
the ElementTrace concept with fine-grained trace link types between elements such as
ViewElementTrace, ViewCodeTrace, and CodeElementTrace. Last but not least,
formal constraints in OCL have been defined in order to ensure the integrity and
support the verification of the views instantiated from the traceability view
model [27]. In the subsequent sections, we present a number of working scenarios
to demonstrate how VbTrace can help establish and maintain trace
dependencies.
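As a rough illustration of how such links can be exploited, the following Java sketch (ours; the link kinds are simplified relative to VbTrace) stores trace links and follows a traceability path transitively from a requirement down to code.

import java.util.ArrayList;
import java.util.List;

record TraceLink(String source, String target, String kind) {}

class TraceabilityModel {
    final List<TraceLink> links = new ArrayList<>();

    // Transitively collect every artifact reachable from an element.
    List<String> impactOf(String element) {
        List<String> result = new ArrayList<>();
        for (TraceLink link : links) {
            if (link.source().equals(element)) {
                result.add(link.target());
                result.addAll(impactOf(link.target()));
            }
        }
        return result;
    }
}

public class TraceDemo {
    public static void main(String[] args) {
        TraceabilityModel model = new TraceabilityModel();
        model.links.add(new TraceLink("CR1",
            "CreateLoanFile:AtomicTask", "ViewElementTrace"));
        model.links.add(new TraceLink("CreateLoanFile:AtomicTask",
            "CreateLoanFile:Interaction", "ViewToView"));
        model.links.add(new TraceLink("CreateLoanFile:Interaction",
            "LoanApproval.bpel#invoke", "ViewToCode"));
        System.out.println(model.impactOf("CR1"));
    }
}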
Fig. 14.9 Illustration of trace links between the FlowView (left) and BpelCollaborationView
(right) of the loan approval process
Fig. 14.10 Illustration of trace links between the high- (left) and low-level CollaborationView
(right) of the loan approval process
Fig. 14.11 Illustration of trace links between the views (left) and generated BPEL code (right) of
the loan approval process
[Fig. 14.12 Illustration of a traceability path from requirements through architectural views to code: the EU-Directive-95/46/EC legislation is linked to the CreateLoanFile atomic task in the LoanApproval CollaborationView, traced via ArtifactTraces and ElementTraces through the CreateLoanFile interaction in the BpelCollaborationView down to the LoanApproval.bpel code.]
elements to be changed and manipulate them. This is time-consuming and error-prone
because there are no explicit links between the requirements and process imple-
mentations. Moreover, the stakeholders have to traverse numerous dependencies
between various tangled concerns, some of which might not be relevant to the
stakeholders' expertise.
better analyze and manipulate business processes by using the VbMF abstract views,
such as the FlowView, CollaborationView, InformationView, Compliance Meta-data
View, etc. The IT experts, who mostly work on either technology-specific views or
process code, can better analyze and assess coarse-grained or fine-grained effects of
these changes based on the traceability path.
So far we have presented a case study based on the development life cycle of an
industrial business process that qualitatively illustrates the major contributions
achieved by using our approach. To summarize, these are in particular: First,
a business process model is (semi-)formally described from different perspectives
that can be tailored and adapted to the particular expertise and interests of the
stakeholders involved. Second, parts of the requirements are explicitly linked to the system
architecture and code by a special (semi-)formalized meta-data view. Third, our
view-based traceability approach can help reducing the complexity of dependency
management and improving traceability in process development. In addition, we
also conducted a quantitative evaluation to support the assessment of our approach.
The degree of separation of concerns and the complexity of business process
models are measured because they are considered predictors of many
important software quality attributes such as the understandability, adaptability,
maintainability, and reusability [5]. This evaluation focuses on the view-based
approach as the foundation of our approach and provides evidence supporting our
claims regarding the above-mentioned software quality attributes.
14.5.1 Evaluation
14.5.1.1 Complexity
The higher the complexity, the harder it is to analyze and understand the system [5]. The complex-
ity used in our study is a variant of Lange’s model size metrics [16], which is
extended to support specific concepts of process-driven SOAs and the MDD para-
digm. It measures the complexity based on the number of the model’s elements and
the relationships between them.
In addition to the loan approval process (LAP) presented in Sect. 14.2, we
perform the evaluation of complexity on four other use cases extracted from
industrial processes, including a travel agency process (TAP) from the domain of
tourism, an order handling process (OHP) from the domain of online retailing,
a billing renewal process (BRP) and a CRM fulfillment process (CFP) from the
domain of Internet service provisioning. We apply the above-mentioned model-
based size metric for each main VbMF view such as the FlowView (FV), high-level
and low-level CollaborationViews (CV/BCV), and high-level and low-level
InformationViews (IV/BIV). Even though the correlation of views are implicit
performed via the name-based matching mechanism [23], the name-based integra-
tion points between high-level views (IPhigh) and low-level views (IPlow) are
calculated because these indicates the cost of separation of concerns principle
realized in VbMF. Table 14.2 shows the comparison of these metrics of VbMF
views to those of process implementation in BPEL technology, which is widely
used in practice for describing business processes. Note that the concerns of process
implementation are not naturally separated but rather intrinsically scatted and
tangled. We apply the same method to calculate the size metric of the process
implementation based on its elements and relationships with respect to the corres-
ponding concepts of VbMF views.
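As an illustration of the name-based matching mechanism, the following sketch counts integration points between a high-level and a low-level view by intersecting their element names. Reducing a view to a set of names is a simplification of ours, and the element names are invented for the example.

import java.util.*;

final class NameBasedMatcher {
    // Counts the name-based integration points between a high-level and a
    // low-level view, each reduced here to the set of its element names.
    static int integrationPoints(Set<String> highLevel, Set<String> lowLevel) {
        Set<String> shared = new HashSet<>(highLevel);
        shared.retainAll(lowLevel);
        return shared.size();
    }

    public static void main(String[] args) {
        Set<String> cv = Set.of("CreateLoanFile", "CheckCredit", "ApproveLoan");
        Set<String> bcv = Set.of("CreateLoanFile", "ApproveLoan", "LoanApprovalPT");
        System.out.println(integrationPoints(cv, bcv)); // 2
    }
}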
The results show that the complexity of each VbMF view is lower than that of the
process implementation. These results indicate that our approach reduces the
complexity of business process models through the notion of (semi-)formalized
views. We also measure a high-level representation of the process, obtained by
integrating the VbMF abstract views, and a low-level representation, obtained by
integrating the VbMF technology-specific views. The numbers show that the
complexity of the high-level representation is much lower than, and that of the
low-level representation comparable to, the complexity of the process implementation.
The overhead of integration points occurs in both of these integrated representations.
Our approach also supports the extraction of view models from existing business
process descriptions. However, manual intervention by stakeholders is still needed to
analyze and improve the extracted views and, sometimes, the corresponding
traceability models. On the other hand, this also implies that, compared to a
non-view-based or non-model-driven approach, additional effort and tool support
are required for managing the consistency of views and traceability models, as these
can be manipulated by different stakeholders, and for enabling change propagation
among them. Nonetheless, the maintenance of trace dependencies between views
can be enhanced by using hierarchical or ontology-based matching and advanced
trace link recovery techniques [1].
However, the project can benefit in the long term from reduced maintenance costs
due to enhanced understandability, adaptability, and traceability, as well as
consistently preserved AK. Nonetheless, it is possible to introduce our approach
into a non-model-driven project (e.g., as a first step into model-driven development).
To do this, at least a way to identify the existing architectural elements, such as
components and connectors, must be found. This would, however, be considerably
more work than adding the view to an existing model-driven project.
In our study, we applied our prior work, the view-based, model-driven approach
for process-driven SOAs [23–27], in the field of compliance to regulatory provisions.
More in-depth comparisons and discussions of the related work can therefore be
found elsewhere: for the view-based, model-driven approach in [23–26], for the
name-based view integration mechanism in [28], and for the view-based traceability
approach in [27]. Here, we merely discuss the major related works in the area of
bridging requirements, architecture, and code.
A number of efforts provide modeling-level viewpoint models for software
architecture, for instance [20], the 4+1 view model [14], and the IEEE 1471
standard [11], each concentrating on various kinds of viewpoints. While some
viewpoints in these works overlap with those of VbMF, a general difference is that
VbMF operates at a more detailed abstraction level, from which source code can be
generated via MDD. Previous work on better support for codifying AK has been
done in the area of architectural decision modeling. Jansen et al. [12] see software
architecture as being composed of a set of design decisions. They introduce a generic
meta-model to capture decisions, including elements such as problems, solutions,
and attributes of the AK. Another, more detailed generic meta-model has been
proposed by [31]. Tyree and Ackerman [29] proposed a highly detailed, generic
template for capturing architectural decisions.
De Boer et al. [2] propose a core model for software architectural knowledge
(AK). This core model is a high-level model of the elements and actions of AK and
14.7 Conclusion
Acknowledgments The authors would like to thank the anonymous reviewers for providing
constructive, insightful comments that greatly helped to improve this chapter. This work was
supported by the European Union FP7 project COMPAS, grant no. 215175, and the European
Union FP7 project INDENICA, grant no. 257483.
References
Chapter 15
Managing Artifacts with a Viewpoint-Realization Level Matrix
Jochen M. Küster, Hagen Völzer, and Olaf Zimmermann
15.1 Introduction
The state of the art in requirements engineering and software architecture has
advanced significantly in recent years. Mature requirements and software engineering
methods such as the Unified Process (UP) [18] specify processes to be followed and
artifacts to be created in application development and integration projects. In
requirements engineering, informal and formal notations as well as modeling
techniques are available to the domain analyst, for example vision statements, strategy
maps, business process models, user stories, and use cases [30]. In software architec-
ture, both lightweight and full-fledged architecture design processes exist. Architec-
tural tactics, patterns, and styles help architects to create designs from previously
gained experience that is captured in reusable assets [2, 6]. Techniques and tools for
architectural decision capturing and sharing have become available [19, 32].
When applying these or other state-of-the-art methods, techniques, and tools on
large projects, requirements engineers and architects often produce a large number
of rather diverse artifacts. If this is the case, it is challenging to maintain the
consistency of these artifacts and to promote their reuse. Likewise, the large body
of artifacts raises the question of how requirements and design tools are organized
and how they interface with each other. E.g., are the same tools used by both roles?
If so, are different UML profiles used? How can the required architectural decisions
be identified in requirements artifacts?
To address these two general problems, we propose an integrated, model-driven
artifact management approach. This general approach can be leveraged to specifi-
cally improve the transition from requirements engineering to architecture design.
At the center of our approach is the Artifact and Model Transformation (AMT)
Matrix, which provides a structure for organizing and maintaining artifacts and
their dependencies. The default incarnation of our AMT Matrix is based on
stakeholder viewpoints and analysis/design refinement levels, which we call reali-
zation levels, two common denominators of contemporary analysis and design
methods. Relative to the two general artifact management problems, our AMT
approach contributes the following:
1. Method definition: With the AMT matrix, our approach provides a generic
structure to support the definition of methods. It contributes a metamodel that
formalizes the AMT Matrix and its relationship to software engineering artifacts.
2. Tool design: With the AMT matrix, our approach provides a foundational
structure for designing and integrating tools that provide transformations
between different artifacts, create traceability links automatically, validate con-
sistency, and support cross-role and cross-viewpoint collaboration.
Via instantiation and specialization, these two general contributions can be
leveraged to solve our original concrete problem of better aligning and linking
requirements engineering and architecture design artifacts (e.g., user stories, use
cases, logical components, and architectural decisions).
The remainder of the chapter is structured in the following way. In the next
section, we clarify fundamental literature terms such as viewpoint and realization
level and introduce our general concepts on an informal level. After that, we
formalize these concepts in a metamodel. The concepts and their formalization
allow us to define cross-viewpoint transformations and traceability links between
requirements engineering and architecture design artifacts. These solution building
blocks form the core contribution of this chapter. In the remaining sections of the
chapter, we provide an example of how our concepts can be applied in practice,
outline the implementation of a tool prototype, and discuss related work. We
conclude with a summary and a discussion of open issues.
1 Note that Eeles and Cripps [8] use the terms 'logical' and 'physical' for realization levels, whereas the 4+1 viewpoint model in UP uses them for particular viewpoints.
Fig. 15.1 Artifact and Model Transformation (AMT) matrix structure with entry content
Each entry in the matrix serves one particular, well-defined analysis or design
purpose. In the horizontal dimension, the stakeholder viewpoints differ in terms
of analysis/design concerns addressed as well as notations used and education/
experience required to create artifacts. In the vertical dimension, each level has a
certain depth associated with it, such as platform-independent conceptual modeling,
technology platform-specific but not yet vendor-specific modeling, and executable/
installable platform-specific modeling and code. Both informal and formal specifi-
cations may be present in each matrix entry; both static structure and dynamic
behavior of the system under construction are covered.
Our rationale for making discrete viewpoints a default matrix dimension is
the following: Each such viewpoint takes the perspective of a single stakeholder
with a particular concern. This makes the artifacts of a viewpoint, i.e., diagrams and
models, consumable as it hides unnecessary details without sacrificing end-to-end
consistency.2
We decided for realization levels as our second dimension because they allow
elaborating analysis results and design artifacts in an iterative and incremental
fashion without losing the information from early iterations when refining the
artifacts. This is different from versioning a single artifact to keep track of editorial
changes, i.e., its evolution in time (in the absence of dedicated artifact management
concepts such as those presented in this chapter, the current state of the art is
to define naming conventions and use document/file/model versioning to manage
artifacts and organize the model space in a project). The same notation can be
used when switching from one realization level to another, but more details may be
added. For instance, a UML class diagram on a logical refinement level might
model conceptual patterns and therefore not specify as many UML classes and
associations as a Java class diagram residing on the physical realization level.
Furthermore, different sets of stereotypes might be used on the two respective
levels although both take a functional design viewpoint.
Realization levels support an iterative and incremental work organization which
helps to manage risk by avoiding common pitfalls such as big design upfront (a.k.a.
analysis paralysis or waterfall) but also the other extreme, ad hoc modeling and
developer anarchy.3 Instances of this concept can be found in many places. For
instance, database design evolves from the conceptual to the logical to the physical
level. Moreover, the Catalysis approach and Fowler in UML Distilled [14] promote
similar approaches for UML modeling (from analysis to specification to implemen-
tation models). To give a third example, an IBM course on architectural thinking
recommends the same three-step refinement for the IT infrastructure (deployment)
2 Cross-cutting viewpoints such as security and performance have different characteristics; as they typically work with multiple artifacts, they are less suited to serve as matrix dimensions. However, such viewpoints can be represented as slices (projections) through an AMT matrix, e.g., with the help of keyword tags that are attached to the matrix entries.
3 This extreme can sometimes be observed when teams claim to be agile without having digested the intent and nature of agile practices.
viewpoint dealing with data center locations, hardware nodes, software images,
and network equipment. Finally, the distinction between platform-independent
and platform-specific models in Model-Driven Architecture can be seen as an
instance of the general approach of refinement levels as well. Additional rationale
for selecting viewpoints and realization levels as primary structuring means can be
found in the literature [8, 32].
Step 2: Position method-specific artifact types in AMT matrix entries. Our
second step is performed by method creators and tool engineers. Each method
and each analysis or design tool supporting such a method is envisioned to populate
the AMT matrix structure from step 1 with artifact types for combinations of
viewpoints and realization levels. It is not required to fully populate the matrix in
this step; it rather serves as a structuring means. However, gaps should not be
introduced accidentally; they should rather result from conscious engineering
decisions made by the method creator or tool engineer.
To give an example, we now combine agile practices, OOAD, and CBD; our
concepts are designed to work equally well for other methods and notations.
Figure 15.2 shows an exemplary AMT matrix; its viewpoints stem from three
references [8, 18, 31] and the realization levels are taken from two references
[8, 15]. The three AMT matrix entries in the figure belong to the requirements
viewpoint (Req-0, Req-1) and the functional design viewpoint (Fun-1). User
stories, use cases, and component models (specified as a combination of UML
class and sequence diagrams) are the selected artifact types in this example.
The AMT matrix for a method produced in steps 1 and 2 answers the following
questions:
1. Which viewpoints to use and how many realization levels to foresee (step 1)?
2. Which artifact type(s) to use in each matrix entry, and for what purpose (step 2)?
3. Which notation to select for each artifact and how to customize the selected ones
for a particular matrix entry, e.g., syntax profiling (step 2)?
4. Which role to assign ownership of AMT matrix entries to and when in the
process to create, read, and update the matrix entries (step 2)?
5. Which techniques and best practices heuristics from the literature to recommend
for manual artifact creation and model transformation development (step 2)?
6. Which commercial, open source, in house, or homegrown tools to use for artifact
creation, e.g., editors and transformation frameworks (step 2)?
As these questions can be answered differently depending on modeling preferences
and method tailoring, the AMT matrix content for a method differentiates practitioner
communities (e.g., a practice in a professional services firm or a special interest group
forming an open source project).
Having completed steps 1 and 2, the AMT matrix is ready for project use (step 3).
Step 3: Populate a project-specific AMT matrix instance with artifacts. In this
step, the AMT matrix structures the project document repository (model space) and
is populated with actual artifacts (e.g., models and code) throughout the project.
We continue the presentation of step 3 in the section after next.
Before that, we formalize the concepts presented so far in a metamodel for artifact
management.
In this section, we present a metamodel for the AMT matrix. This metamodel may
serve as a reference for tool development.
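As a rough illustration of what such a reference metamodel might look like in a tool, the following Java sketch models viewpoints, realization levels, matrix entries, and artifact types. It is a simplified reading of Fig. 15.3 under our own naming, not the authors' published metamodel.

import java.util.*;

record Viewpoint(String name) {}
record RealizationLevel(int depth, String name) {}
record ArtifactType(String name, String notation) {}

// A matrix entry holds the artifact types for one (viewpoint, level) pair.
final class MatrixEntry {
    final Viewpoint viewpoint;
    final RealizationLevel level;
    final List<ArtifactType> artifactTypes = new ArrayList<>();
    MatrixEntry(Viewpoint vp, RealizationLevel rl) { viewpoint = vp; level = rl; }
}

final class AmtMatrix {
    private final Map<String, MatrixEntry> entries = new LinkedHashMap<>();

    // One entry per (viewpoint, realization level) combination.
    MatrixEntry entry(Viewpoint vp, RealizationLevel rl) {
        return entries.computeIfAbsent(vp.name() + "-" + rl.depth(),
                k -> new MatrixEntry(vp, rl));
    }

    public static void main(String[] args) {
        AmtMatrix m = new AmtMatrix();
        Viewpoint req = new Viewpoint("Requirements");
        Viewpoint fun = new Viewpoint("FunctionalDesign");
        RealizationLevel l1 = new RealizationLevel(1, "logical");
        m.entry(req, l1).artifactTypes.add(new ArtifactType("Use case model", "text"));
        m.entry(fun, l1).artifactTypes.add(new ArtifactType("Component model", "UML class diagram"));
    }
}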
15.3.1 Overview
As our approach targets different audiences (i.e., method creators and tool builders,
but also project practitioners), we distinguish between an AMT metamodel, a type-
level AMT model and an instantiated AMT model. The involved models and
diagrams can be summarized as:
1. AMT metamodel, shown in Fig. 15.3.
2. Type-level AMT model, created by a method creator by instantiating the AMT
metamodel. Such a type-level AMT model is created in step 2 of our approach.
3. Instantiated AMT model, created by a project team in step 3 by populating a
type-level AMT model with concrete artifacts.
Fig. 15.3 The metamodel for the AMT matrix (represented as a UML class diagram)
Fig. 15.4 Example of a type-level AMT model (UML object instance diagram)
An exemplary epic and user story set might then be 'modernize and accelerate home
loan processing' and 'as a retail bank client, I want to be able to apply for a loan
online and be able to receive a quote immediately so that I can save time and am able
to compare competing quotes'. These artifacts can be related by transformations that
generate parts of artifacts (e.g., an initial set of epics and user stories may be obtained
from the heat map). Transformations may also generate traceability links between
artifact elements (e.g., to record that a user story has been derived from a heat map).
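The following sketch illustrates such a transformation in Java: it derives one user-story stub per heat-map capability and records a 'derivedFrom' traceability link for each derivation. The types and the naive string-based derivation are our own illustration; a real transformation would operate on models rather than strings.

import java.util.*;

record TraceLink(String from, String to, String type) {}

final class HeatMapToStories {
    // Derives one user-story stub per heat-map capability and records a
    // traceability link documenting the derivation.
    static List<TraceLink> transform(List<String> capabilities, List<String> storiesOut) {
        List<TraceLink> links = new ArrayList<>();
        for (String capability : capabilities) {
            String story = "As a stakeholder, I want support for '" + capability + "'";
            storiesOut.add(story);
            links.add(new TraceLink(story, capability, "derivedFrom"));
        }
        return links;
    }

    public static void main(String[] args) {
        List<String> stories = new ArrayList<>();
        List<TraceLink> links = transform(
                List.of("modernize and accelerate home loan processing"), stories);
        System.out.println(stories);
        System.out.println(links);
    }
}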
In addition to the AMT metamodel itself, constraints can be formulated on different
levels. Such constraints can pose requirements on a type-level AMT model and also
on instantiated AMT models, such as that each configured viewpoint must be
represented. Constraints can also be formulated to require a certain relationship in a
populated AMT Matrix instance, e.g., that certain traceability links between model
elements have to exist. It is also possible to formulate constraints without tying them to
a specific AMT Matrix. Such a constraint can state a requirement for an AMT matrix
and could require a certain number of artifact types or transformation types to exist.
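A constraint of the first kind could be checked as in the following sketch, which verifies that every configured viewpoint is represented by at least one artifact in a populated matrix instance; the simplified types stand in for the metamodel concepts.

import java.util.*;

final class AmtConstraintCheck {
    // Instance-level constraint: every configured viewpoint must be
    // represented by at least one artifact in the populated matrix.
    static List<String> violations(Set<String> configuredViewpoints,
                                   Map<String, Integer> artifactCountPerViewpoint) {
        List<String> result = new ArrayList<>();
        for (String vp : configuredViewpoints)
            if (artifactCountPerViewpoint.getOrDefault(vp, 0) == 0)
                result.add("Viewpoint not represented: " + vp);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(violations(
                Set.of("Requirements", "FunctionalDesign", "Operational"),
                Map.of("Requirements", 4, "FunctionalDesign", 2)));
        // prints [Viewpoint not represented: Operational]
    }
}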
As a second example, we continue the OOAD/CBD example from step 2 in
Sect. 15.2 and discuss an AMT matrix configuration where software architects are
advised to analyze the use cases to create an initial component model for the
solution under construction. We now perform this design activity in an example
and derive an AMT Matrix along the way. We decided to apply the component
modeling heuristics for this design step from the literature, specifically ‘UML
Components’ by Cheesman/Daniels [7]. This book defines a mature, state-of-the-art
OOAD method that is widely adopted in practice by a large population of require-
ments engineers and architects. Alternatively, other techniques or heuristics could be
applied in this activity as well.4 With the help of our metamodel, these heuristics can
be expressed as model transformations.
In their method, Cheesman/Daniels defined a ‘specification workflow’ which
begins with a step called 'component identification'. The business concepts and
the use case model are among the inputs to this step (besides existing interfaces
and existing assets). Among other artifacts, it yields 'component specs &
architecture', captured in a component model artifact [8]. In our exemplary AMT
matrix (Fig. 15.2), use case models reside in entry Req-1 and component models in
entry Fun-1. Under 'Identifying Interfaces', Cheesman/Daniels advise calling out
user interface, user dialog, system services, and business services components and
tracing them back to requirements artifacts. These components offer one operation
per use case step.
As the next step, they recommend adding a UML interface to the design for
each use case step; for each use case, a dialog type component should be added to
the architecture. This scheme can be followed by an architect/UML modeler, or
partially automated in a tool targeting the analysis-design interface (or, more
specifically, the Req-1 to Fun-1 matrix entry boundary in an AMT matrix).
4 For instance, domain- and style-specific literature, e.g., on service modeling and SOA design, can further assist with this work (see the Schloss Dagstuhl Seminar on Software Service Engineering (January 2009) and [29] for examples).
Table 15.1 Artifact and Model Transformation (AMT) matrix for OOAD/CBD (developed in step 2 of our approach)

Matrix entry               | Requirements engineering, level 1 (Req-1) | Functional solution design, level 1 (Fun-1)
Metamodel concept          | Role: domain analyst           | Role: software architect
Artifact type (structural) | Use case model                 | Component model (UML class diagram)
Artifact type (behavioral) | Use case scenarios             | Component interaction diagrams (UML sequence diagrams) for sunny day and error scenarios
Artifact element type      | Use case                       | Components as defined in reference architecture
                           | Use case step                  | Component responsibilities expressed as initial operations in initial system interface
                           | Precondition                   | Assertion in component specification
                           | Postcondition                  | Out value of operation plus optional assertion
Traceability link type     | From component back to use case; from operation back to use case step
Transformation type        | From use cases to functional components (realization level 1): user interface, user dialog, system services, business services
Table 15.1 maps the AMT metamodel classes from Fig. 15.3 to the artifact types
in the OOAD/CBD example from Fig. 15.2 (Sect. 15.2). It also lists an exemplary
model transformation implementing the Cheesman/Daniels heuristics for the tran-
sition from use cases to components. A full version of this table would provide the
complete output of step 2 of our approach for OOAD/CBD.
Once a type-level AMT model has been created (e.g., the OOAD/CBD one from
Sects. 15.2 and 15.3), one instance of the type-level AMT model is created per
project that employs the method described by the AMT matrix (here: OOAD/CBD).
This instantiated AMT model is populated by the project team.
AMT matrix instance for Travel Booking System (applying step 3). In the Travel
Booking System example in [7], a ‘Make a Reservation’ use case (for a hotel
room as part of a travel itinerary) appears. The use case has the steps ‘Identify
room requirements’, ‘System provides price’, and ‘Request a reservation’ (among
others). The component identification advice given in the book is to create one
dialog type component called ‘Make Reservation’ (derived from use case) and one
system interface component providing initial operations such as ‘getHotelDetails()’,
‘getRoomInfo()’, and ‘makeReservation()’ (derived from the use case steps).
Use case model and component model for the Travel Booking System are examples
of artifacts; the four use cases, the use case steps, and the two components are
examples of artifact elements that are linked with traceability links. The transition
from the use case to the initial component model can be implemented as a model
transformation. Figure 15.5 summarizes these examples of model artifacts, artifact
elements, traceability links, and transformations.
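A minimal sketch of this transition, using the operation names from the book's example, might look as follows in Java. The mapping table and the printed trace records are a deliberately naive stand-in for the architect's (or a model transformation's) work.

import java.util.*;

final class ComponentIdentification {
    public static void main(String[] args) {
        String useCase = "Make a Reservation";
        // Use case steps and the initial operations derived from them; the
        // operation names follow the book's example, while the mapping logic
        // here is a simplified illustration.
        Map<String, String> stepToOperation = new LinkedHashMap<>();
        stepToOperation.put("Identify room requirements", "getHotelDetails()");
        stepToOperation.put("System provides price", "getRoomInfo()");
        stepToOperation.put("Request a reservation", "makeReservation()");

        // One dialog-type component per use case, traced back to it.
        System.out.println("Dialog component 'Make Reservation' -> traces to use case '"
                + useCase + "'");
        // One system-interface operation per use case step, traced back to it.
        stepToOperation.forEach((step, operation) ->
                System.out.println("Operation " + operation + " -> traces to step '"
                        + step + "'"));
    }
}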
All component design activities discussed so far take place on a conceptual,
platform-independent realization level; in the Cheesman/Daniels method, component
realization (covered in a separate chapter called 'Provisioning and Assembly')
represents the transition from realization level 1 to realization level 2 (or from logical
to physical design). All design advice in the book pertains to the functional
viewpoint; the operational/deployment viewpoint is not covered. Advice on how to
place deployment units that implement the specified and realized functional
components, e.g., in a Java Enterprise Edition (JEE) technology infrastructure from
an application server middleware vendor, is available elsewhere in the literature.
Observations in exemplary application of AMT concepts (discussion). The
examples in this section demonstrate that our artifact management ideas are in
line with those in established methods and recognized text books; our concepts add
value to these assets as they organize the artifacts produced and consumed in a
structured, interlinked fashion. They also provide a formal underpinning for such
assets that tool builders can use to create method-aware tools of higher value than
current Create, Read, Update, Delete (CRUD) editors on metamodel instances.
As a side effect, our examples also indicate that it often makes sense (or even
is required) to combine methods and techniques on application development and
integration projects. An integrated approach to artifact management facilitates such
a best-of-breed approach to method engineering.
5 Depending on the positioning of BPMN in the method used to create and configure the type-level AMT model.
6 Such transformations are algorithms/functions that accept one or more models as input and return the same or another set of models as output.
Applying our presented approach requires some effort from method engineers and
project teams. For instance, matrix dimensions, viewpoints, and realization levels
have to be defined and artifact types and artifacts have to be assigned to matrix
entries. However, much of this effort is spent anyway, even without our approach
being available (if this is not the case, project teams run the risk of creating unneces-
sary artifacts). Furthermore, well-established engineering practices such as separating
platform-independent from platform-specific concerns and distinguishing stakeholder-specific concerns are easily violated if established concepts such as viewpoints
and realization levels are not applied. Our AMT matrix approach supports these well-
established concepts natively and combines them to their mutual benefit. What is
minimally required from the method engineer and the project team is to reflect and
clearly articulate the purpose of each artifact type/artifact in terms of its viewpoint
and realization levels. We believe that the necessary effort of following our approach
is justified by its various benefits: (1) it enables checking the completeness of artifact
types/artifacts across all required stakeholder concerns across the software engineer-
ing lifecycle, (2) it enables checking the absence of redundancy across artifact types/
artifacts and (3) it allows method engineers and project teams to specify and reason
about traceability and consistency of artifacts/artifact types in a more conscious and
disciplined way.
Furthermore, we believe that the required effort can be minimized through good
tool design. For instance, a tool that supports role-specific configuration of its user
interface and is aware of method definitions, e.g., of requirements analysis and
architecture design methods, can automatically tag artifacts according to their
location within the matrix. This assumes that the tool has been configured with an
AMT matrix.
management during a project. The ideas presented by Milanovic et al. are similar;
however, their focus is on the common repository. It is not described how such
a common repository can be technically realized. Our AMT Matrix concept and the
metamodel can be used to structure the common repository when realizing an
integrated tool infrastructure. Recent work by Broy et al. [5] criticizes the current
state-of-the art of tool integration and proposes different ways to improve the
situation. We believe that our AMT matrix can be used as one means and basis
for seamless model-based development.
Earlier work in the area of consistency management has already recognized
that consistency constraints depend on the development process and the
application domain. Küster [20] describes a method for defining consistency
management dependent on the process and application domain. Further, the large
body of work on consistency checking and resolution (see, e.g., [9, 17, 23]) must be
integrated and adapted by tools working with the AMT Matrix in order to achieve
its full benefit. This also applies to the work on traceability, which establishes
traceability links based on, e.g., transformations, and to the work on defining and
validating model transformations (see, e.g., [21, 22]).
15.7 Conclusion
In this chapter, we presented the AMT Matrix and a metamodel which can be used
as a reference model for integrating AMT matrix concepts into tools. We also
presented an exemplary OOAD/CBD instantiation of the AMT metamodel and
applied it to a sample project. Finally, we discussed the usage scenarios, benefits,
and implications of our approach.
In future work, we plan to evaluate further our approach, e.g., via personal
involvement in industry projects (action research) and via joint work with
developers of commercial requirements engineering and architecture design tools.
Further research includes the elaboration of how consistency and traceability
management can be defined on the basis of the AMT matrix as well as adapting
existing modeling tools to support the AMT matrix when defining modeling
artifacts. Cross-cutting viewpoints also require further study; tagging the matrix
entries that address cross-cutting design concerns such as performance and security
to slice AMT instances seems a particularly promising direction in this regard.
References
Chapter 16
Onions, Pyramids & Loops – From Requirements to Software Architecture
Michael Stal
16.1 Introduction
provide the optimal architecture solution due to such changes, they will at least be
able to create solutions with sufficient quality and traceability.
It is important to note that the introduced architecture design process does
not claim to be the only possible approach to systematic architecture design.
Nor does the sequence of steps and activities define the only possible order. Rather,
the process introduces best practices from real and successful system development
projects at SIEMENS.
16.2.1 Loops
Fig. 16.2 Iterations are introduced in the architecture creation process to help master complexity
and to avoid accidental complexity. Piecemeal growth is ensured by iterative loops
Fig. 16.3 The Onion model structures the architecture and design process into four different phases
The Onion model (see Fig. 16.3) is supposed to introduce a systematic model for
all necessary architectural design activities.
Its core introduces all functional aspects of a software system, thus representing
the problem domain functionality software engineers need to implement. All other
requirements, no matter whether of operational, developmental, or infrastructural
type, can only be addressed with these functional responsibilities in mind. For
instance, it is not useful to build a flexible or high performance system without
any knowledge about the actual parts that need to be flexible or well performing.
Of course, we must have previously assigned unique priorities to all functional
requirements – and likewise to all other requirements. Consequently, designers can
move step by step from the functional requirement with the highest priority down
to the one with the lowest priority. This does not imply that all functional or other
requirements must be known upfront nor does it mean we need to cover them in full
detail. It only suggests we should at least know a sufficient subset of high priority
requirements in advance, where "sufficient" heavily depends on influential factors
such as forces from the problem and solution space, which is why it proves
(almost) impossible to specify a rule of thumb. When applying use cases as
a means for functional design, we could also adhere to the concept of piecemeal
growth by first considering the main scenarios (also known as “Happy Day”
scenarios), followed by alternative or exceptional flows (“Rainy Day” scenarios).
After designing the functional core of the software system with a priority- and
requirements-driven design process, the two subsequent phases consist of first
addressing infrastructural and then operational requirements. Note that we could
also view infrastructural requirements as a subset of operational qualities. However,
infrastructure prerequisites, such as the necessity of distributing logic or executing
it concurrently, typically reveal an indispensable precondition which operational
(and developmental) qualities must take into account. Therefore, it often turns out
to be useful to separate infrastructural qualities with respect to distribution and
concurrency from operational qualities and to assign them the highest priorities.
Thus, the first sub-step comprises determining an appropriate distribution and
concurrency infrastructure. Architecture patterns such as Broker or Half-Sync/Half-
Async help find the main constituents of this infrastructure and can be further
refined using design patterns such as Proxy or Monitor Object. In this process,
architects should integrate the software architecture they have already designed as
functional core with the distribution and concurrency infrastructure. This way, they
are beginning to mix in, step by step, elements from the solution space into
architectural concepts from the problem space.
In the third phase, software architects integrate the remaining operational qualities,
such as security, performance, or reliability, ordered by descending priority. This
activity resembles the previous one for introducing the distribution and concurrency
infrastructure. Designers take care of the operational quality, conceive an architec-
tural infrastructure to handle the quality, and then integrate the current architecture,
i.e. functional core plus infrastructure, with the architecture entities imposed by the
current quality. If architecture and design patterns exist for the quality under consid-
eration, designers should leverage existing best practices instead of reinventing the
wheel. In the Onion model each operational quality represents an additional onion
layer.
As all architecture refinements and extensions are priority-driven, more impor-
tant requirements will have more impact on the software architecture than less
important requirements. For instance, if security is considered more important than
performance, we will end up with a different software system than if performance
were considered the predominant quality. Thus, it is an essential prerequisite to
assign unique priorities to each requirement. Otherwise, architects would face
ambiguous precedence rules when making design decisions.
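The following Java sketch illustrates this prerequisite: it rejects requirement sets with ambiguous priorities and returns the processing order that a priority-driven design relies on. The Requirement record and its fields are our own simplification.

import java.util.*;

record Requirement(String name, String category, int priority) {}

final class PriorityDrivenDesign {
    // Returns the requirements ordered for Onion-style processing; priorities
    // must be unique so that design decisions never face ambiguous precedence.
    static List<Requirement> ordered(List<Requirement> requirements) {
        Set<Integer> seen = new HashSet<>();
        for (Requirement r : requirements)
            if (!seen.add(r.priority()))
                throw new IllegalArgumentException("Ambiguous priority: " + r);
        List<Requirement> sorted = new ArrayList<>(requirements);
        sorted.sort(Comparator.comparingInt(Requirement::priority)); // 1 = highest
        return sorted;
    }

    public static void main(String[] args) {
        System.out.println(ordered(List.of(
                new Requirement("Store item", "functional", 1),
                new Requirement("Security", "operational", 3),
                new Requirement("Performance", "operational", 2))));
    }
}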
The fourth phase in the Onion model addresses developmental qualities such
as maintainability, manageability or modifiability. Again, designers handle these
qualities in descending priority order. This activity happens after designing the
operational qualities, because developmental properties need to operate on func-
tional and operational entities in the software architecture. In other words, the
architecture parts that we expect to modify must already be available. For example,
modifiability could imply the exchange of a specific functionality such as the
communication protocols used. It could also mean configuring algorithms or the
sizes of thread pools for performance improvement.
As the Onion model illustrates, the obvious way to create software architecture is
via a top–down, breadth-first approach. However, only a few software development
projects start as green field projects. In most project contexts various constraints
limit the freedom of designers. For example, in the following cases:
• Usage of specific operating systems, legacy components, middleware, standard
information systems, or tools is required.
• In Embedded and especially in Real-Time systems, resources such as CPU time
or memory are only available in limited quantity.
• For the integration into an existing environment, the software system under
development needs to provide interfaces to external systems or use external
interfaces.
How can designers cope with such bottom–up requirements? An appropriate
solution requires stakeholders to assign priorities to these requirements, just as to
all other requirements, and to use them as input for the kind of top–down design
promoted by the Onion model. With this strategy, we can ensure that bottom–up
constraints have the necessary impact on the software architecture and its
implementation.
Of course, stringent limitations such as restricted main memory must get the highest
possible priorities as they directly affect the feasibility of all other architecture and
technology decisions.
• System as a whole: In this abstraction layer the system is considered a black box
that is in charge of a set of responsibilities and that lives in an environment with
which it interoperates.
• Subsystems: If we leave the black-box view and enter a grey-box view,
the system consists of major interrelated building blocks, typically
called subsystems, which are responsible for implementing the system's core
responsibilities.
• Components: Subsystems are usually too coarse-grained to understand
how requirements such as use cases are mapped to the software architecture.
Thus, architects introduce interrelated components as constituents of subsystems
in order to reveal a more detailed design. In contrast to subsystems, which
represent a kind of artificial abstraction level, programming platforms often
support the notion of components.
Note that terms such as subsystem and component in this context denote
architectural entities at different abstraction levels. While subsystems introduce a
coarse-grained partitioning of system responsibilities (in the ideal case, each
subsystem represents one responsibility), components define the fundamental
finer-grained building blocks that implement the subsystems.
This model of abstraction is not prescriptive. In a pure Service-Oriented Archi-
tecture we would rather introduce system, services and components as abstraction
layers, while for small systems abstraction layers such as system, component, and
class might be more appropriate. Architects choose whatever model is most suitable
with respect to their concrete problem context.
The Onion and the Pyramid model add important means to the tool and value set of
software architects, as these models provide guidance on how to map requirements to
architecture decisions and where to set the boundary between software architecture
and design. Both models focus on a coarse-grained level of establishing architecture
decisions systematically from requirements. However, they do not define an
adequate architecture design process with more detailed steps. To establish the
coarse-grained feedback model for software architecture design introduced in
Fig. 16.2, different alternative solutions are possible. Nonetheless, we can identify
some common practices software engineers should employ to create the design
efficiently and effectively, no matter what concrete problem context they are
actually facing. To illustrate the conceptual idea, let us introduce a kind of process
pattern that is based upon these practices, which it instantiates and concretizes in
a concrete process model (see Fig. 16.5).
Fig. 16.5 Architects and developers need to execute several steps to create the implementation (the figure shows activities such as performing user requirements elicitation, creating the domain model, modeling the dynamics of scenarios, and determining scope & boundaries)
Our assumptions and preconditions for the applicability of the process pattern are
as follows:
• In the context of the system development project, a development process model
has been already defined or prescribed by the organization. In the ideal case, the
development process uses an iterative-incremental or agile process model, but
the suggested process pattern may also be applied in an iterative waterfall model.
• The software architects and developers know the business goals of their organi-
zation. Business aspects are important for dealing with architecture decisions
and risks.
• We also assume familiarity of the development team with the requirements and
other forces and constraints that drive system development, but we don’t expect
a frozen and complete set of requirements, nor their availability in all details.
Even if the set of requirements is not complete, there needs to be a sufficient
number of core requirements to start architecture creation, for example high-priority
use cases and infrastructural/operational requirements. There are no metrics for
quantifying "sufficient," as this heavily depends on various factors such as the
maturity of the problem domain or the available expertise and knowledge.
The available requirements should have been prioritized by all stakeholders to
allow architects and developers to decide which direction to take in all decisions. In
addition, the requirements and other forces should have been specified with the
necessary quality, i.e., they need to be cohesive, complete, correct, consistent,
current, unambiguous, verifiable, feasible, mandatory, and externally observable.
• Software architects have already taken the system requirements and figured out
the architecturally relevant requirements.
The following sections explain the details of the activities that are introduced in
Fig. 16.5.
In the first step architects consider only functional aspects. They create a set of use
cases with high priorities together with those stakeholders who are in charge of
requirements engineering. High priority use cases are covered first because they
need to have the most significant impact on the functional core (as described in the
Onion model). Use cases in this set should be available as textual use case
descriptions (see Fig. 16.6), not as Use Case Diagrams, because Use Case diagrams
provide an overview of use cases without offering any details or explanations.
If the number of high-priority use cases is very large, architects should partition
each use case into a success scenario and alternative flows as an appropriate
strategy for mastering complexity. They could consider the success scenarios first
before continuing with the alternative flows.
With a (subset of the) problem domain model and prioritized and partitioned use
cases in place, software architects can map the black-box use cases to white-box
scenarios relying solely on problem domain concepts. This helps further refine the
problem domain model.
After the previous steps, software architects are able to define the architecture
baseline. For this purpose, they follow the Onion model and the Pyramid model.
Fig. 16.9 Use case diagrams and context diagrams help identify the system’s boundaries
The functional core is extended step by step with different layers representing
infrastructural, operational or developmental requirements. Thus, concepts of the
solution domain are introduced to the functional entities. At the end of this step,
a conceptual high-level software architecture design is available.
It is up to the software architects whether and to what extent they take develop-
mental qualities into account. One alternative strategy might be to cover mandatory
and important developmental qualities in this step while postponing all other devel-
opmental qualities to the later step refining the architecture baseline through end-
to-end slices. Another alternative could consist of addressing all developmental
qualities in the later refinement phase. Which approach they choose depends on
the criticality, complexity, and relevance of the developmental qualities. For
example, in a product line engineering organization, variability management
denotes an important issue for domain engineering and should already be addressed
in this architecture creation step.
Figure 16.10 shows an original sketch of the conceptual architecture baseline of the
Warehouse Management system, derived from the problem domain object model by
applying the Onion model approach. The Pyramid model helped to constrain the
architecture to three levels of architecture abstraction (system, subsystem,
component). Using architecture and design patterns helped with providing the
infrastructural, operational, and developmental layers.
Fig. 16.10 With the onion model and the pyramid model, software architects can derive an initial
architecture baseline
Fig. 16.11 In the warehouse management system, the Layers pattern helps create a three-tier
architecture
Fig. 16.12 Architecture patterns such as Layers help improve internal software quality and
support future evolution and maintenance
Fig. 16.13 Deployment denotes the mapping of software architecture artifacts to physical entities
After availability of the initial architecture baseline, software architects must make
additional preparations for the later implementation steps. In order to employ some
architecture governance, they are supposed to introduce principles and guidelines
developers must adhere to. Otherwise, the refinement of the architecture baseline
can lead to a lack of internal architecture qualities, for instance to insufficient
symmetry, but also to a failure to meet operational or developmental qualities.
Typical examples are crosscutting concerns such as coding styles, exception
handling, or logging. When every developer introduces her own coding style,
expressiveness and simplicity will inevitably suffer. Such principles and guidelines
comprise (but are not limited to) the following areas:
• Coding Guidelines
• Application of specific software patterns
• Handling of cross-cutting concerns such as error handling, logging, auditing,
session management, internationalization
• Design tactics at least for the most critical operational and developmental
qualities
• Idioms for applying specific technologies and tools
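To illustrate the kind of guideline artifact meant here, the following Java sketch shows a single project-wide idiom for exception handling and logging; the helper and its names are our own example, not code from the Siemens projects mentioned above.

import java.util.function.Supplier;
import java.util.logging.Level;
import java.util.logging.Logger;

final class GuardedCall {
    private static final Logger LOG = Logger.getLogger(GuardedCall.class.getName());

    // Project-wide idiom: execute an operation, log failures uniformly, and
    // return a declared fallback instead of letting ad hoc handling spread.
    static <T> T run(String operation, Supplier<T> body, T fallback) {
        try {
            return body.get();
        } catch (RuntimeException e) {
            LOG.log(Level.SEVERE, "Operation failed: " + operation, e);
            return fallback;
        }
    }

    public static void main(String[] args) {
        int stock = run("query stock", () -> {
            throw new IllegalStateException("database down");
        }, 0);
        System.out.println(stock); // 0, and the failure was logged uniformly
    }
}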
For the most critical topics, software development projects might assign responsi-
bilities to subsystem architects or whole teams who are in charge of providing and
enforcing guidelines on how developers are expected to handle particular concerns.
Such guidelines do not necessarily resemble formal specifications; they might also
include tutorials (Fig. 16.14).
The last but most extensive phase consists of the iterative-incremental refinement of
the architecture baseline. Again, this step is driven by the Onion model. In contrast
to strategic design, it is not limited by the Pyramid model but introduces whole
end-to-end slices of the system, eventually resulting in implementation prototypes.
The testing strategy gains significant importance for quality assurance, as does the
application of architecture analysis and the use of appropriate tools.
Figure 16.15 illustrates the iterative and incremental fine design of the SIEMENS
Warehouse Management system. The diagram shows an intermediate state of the
system after a couple of iterations. For the sake of brevity, only a part of the system
is presented.
Fig. 16.15 During the last phase, software engineers create the implementation step by step
or solution domain and cannot be easily generalized. But there are also some rules
of thumb, patterns, and practices we can leverage independently of our domains.
This section provides some common hints and tips without striving for complete-
ness. These guidelines are all taken from my experiences in various projects.
The architecture design process introduced here works well for software
development projects, no matter whether they start from scratch or can build on
existing artifacts. Sometimes, the organization has already gained some familiarity
with the problem and solution domain. In this case, a reference architecture might be
available that reflects knowledge and experience of how to build a software system in
the given domain(s). If software architects can leverage a reference architecture, their
job is reduced to refining and instantiating the reference architecture with the
project-specific forces, i.e., requirements, constraints, and business goals. In the ideal
case, the organization has already matured into a product line organization [6, 7]
and can instantiate the whole software system by configuration and adaptation of core
assets. Even if such systematic re-use isn't established, a common practice is to re-use
design or implementation artifacts to reduce costs and time. Software patterns such as
design patterns and architecture patterns help introduce proven solutions for problems
in specific contexts instead of reinventing the wheel.
As already explained, creating a software system should not consist only of adding
new features. In all iterations, architects need to assess the content and quality of the
software architecture and get rid of architectural deficiencies early. Otherwise,
top–down design will continuously extend the architecture, making it increasingly
difficult, or even impossible and too costly, to revise earlier decisions. This is where
software architectures begin suffering from design erosion until they eventually have
to be completely reengineered or rewritten. To identify architecture smells such
as dependency cycles, unnecessary abstractions, excessive coupling or insufficient
cohesion, or inadequate design decisions, architects should apply a method for
architecture evaluation. Several methods for software architecture review have been
proposed [8–10]. For regular evaluation, an experience-based flash review or an
ADR (Architecture Design Review) will suffice. Tools for CQM (Code Quality
Management) or software architecture assessment show many benefits in this
context, keeping designers up to date about the state of the software architecture.
After software architects have identified deficiencies, they should apply software
architecture and code refactoring. However, this activity needs to obey the same
principles as systematic software architecture design. For instance, all issues with
respect to strategic requirements have to be resolved before the tactical issues are
tackled, and inadequate architecture decisions regarding high-priority requirements
should be addressed before those concerning requirements with low priority.
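As an illustration of tool support for one such smell, the following Java sketch detects dependency cycles between components with a depth-first search over a dependency graph; the component names and the graph representation are invented for the example.

import java.util.*;

final class CycleCheck {
    // Architecture-smell check: detect dependency cycles between components
    // via depth-first search over the component dependency graph.
    static boolean hasCycle(Map<String, List<String>> deps) {
        Set<String> done = new HashSet<>(), onPath = new HashSet<>();
        for (String n : deps.keySet())
            if (dfs(n, deps, done, onPath)) return true;
        return false;
    }

    private static boolean dfs(String n, Map<String, List<String>> deps,
                               Set<String> done, Set<String> onPath) {
        if (onPath.contains(n)) return true;   // back edge found: cycle
        if (!done.add(n)) return false;        // already fully explored
        onPath.add(n);
        for (String m : deps.getOrDefault(n, List.of()))
            if (dfs(m, deps, done, onPath)) return true;
        onPath.remove(n);
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasCycle(Map.of(
                "WarehouseMgmt", List.of("Scheduling"),
                "Scheduling", List.of("WarehouseMgmt")))); // true
    }
}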
16.3 Outroduction
References
11. Clements P, Northrop L (2001) Software product lines: practices and patterns. Addison-
Wesley, New York
12. Constantine L, Lockwood L (1999) Software for use: a practical guide to the models and
methods of usage-centred design. Addison-Wesley, New York
13. Hohmann L (2003) Beyond software architecture – creating and sustaining winning solutions.
Addison-Wesley, New York
14. Maier MW, Rechtin E (2000) The art of systems architecting. CRC Press, Boca Raton
15. Fowler M, Beck K, Brant J, Opdyke W, Roberts D (1999) Refactoring: improving the design
of existing code. Addison-Wesley, New York
16. Beck K (1999) Extreme programming explained: embrace the change. Addison-Wesley,
New York
17. Kruchten P (2000) The rational unified process, an introduction. Addison-Wesley, New York
18. Alexander C (1979) The timeless way of building. Oxford University Press, Oxford, UK
19. Gabriel RP (1996) Patterns of software. Oxford University Press, Oxford, UK
20. Gamma E, Helm R, Johnson R, Vlissides J (1995) Design patterns – elements of reusable
object-oriented software. Addison-Wesley, New York
21. Pattern languages of program design (1995, 1996, 1997, 1999), vols 1–4. Addison-Wesley,
New York
22. Rozanski N, Woods E (2008) Software systems architecture. Addison-Wesley, New York
Part IV
Emerging Issues in Relating Software
Requirements and Architecture
Chapter 17
Emerging Issues in Relating Software
Requirements and Architecture
often organic and emerge from user communities over time. Fields with rapid
technology change, such as consumer appliance engineering and cloud computing,
similarly have architectures and technologies changing rapidly during development.
Some applications have requirements that are difficult to anticipate during
development, e.g. social networking applications and mash-ups, where an application
may be composed of parts and the "requirements" are both dynamic and emergent at
run-time from stakeholders.
A number of new development processes and methods have emerged in recent
years which make the relationship between requirements and architecture a chal-
lenge to elicit or maintain. Model-driven engineering and Genetic Algorithms (GA)
have been applied to taking high-level models and architectural information
and synthesizing code and configurations of systems [1, 10]. Could they be used
to generate architectures from suitable requirements models? In a related way,
could requirements models be usefully extracted from existing architectural models?
Agile processes have received much attention and they bring a very different focus
to both capturing and using requirements and working with architectures [7]. In
many agile methods architecture is emergent and requirements prioritization and
alignment with architecture is highly volatile. Open sourcing, as noted above, is often
characterized by highly distributed, heterogeneous teams and both requirements
and architecture are often emergent. No one group of stakeholders owns either
the requirements or the technology solutions used to realize them. Outsourcing
typically requires well-defined architectures to work within along with well-defined
requirements. How can more agile processes or domains with emergent requirements
and/or architectures use outsourcing approaches? End user computing allows non-
technical experts to modify their systems to configure them to their needs. By
definition, requirements are highly volatile and emergent and even architectures can
be vastly different depending on mash-up integrations.
A variety of new technologies and applications are impacting both requirements
and architecture and their relationship. Cloud computing and self-architecting
platforms offer a new way to architect systems but also impact overall system
and application requirements, particularly quality-of-service attributes and non-
functional requirements [8]. Similarly, ubiquitous computing systems are made
up of a wide variety of components and dynamically evolving architectures. They
have a range of users and are often context-dependent. This dramatically impacts
on requirements and architecture volatility. In a similar way, social networking
applications with mash-ups and user-selected applications, and mobile smart
phones and iPods with dynamic application content similarly result in highly
evolving technology platforms and application sets. These are, in contrast to
traditional software systems, heavily user-driven in terms of evolving requirements
and necessary architectural solutions, often very heterogeneous, embodied in user-
selected and configured diverse applications.
Development process and tool support for requirements engineering and software
architecting has become very sophisticated. A number of previous chapters in this
book have highlighted this. However, support is still rather limited for managing the
diversity, complexity, and relationships between software requirements and
architectures [9]; examples include issues of non-functional constraints and their
realization by appropriate architecture middleware choices [4], and the economics of
evolving requirements on architectural decisions [5]. The emerging issues above
will further compound these challenges. This final section of our book contains
three very interesting chapters addressing different areas of these emerging issues
of relating software requirements and architectures.
Chapter 18 by Outi Räihä, Hadaytullah, Kai Koskimies and Erkki Mäkinen
looks at a new approach to generating candidate software architectures for a system
based on a set of requirements models. One can think of this as a form of “model
driven engineering” where the model is a set of requirements for a software system
and the output a candidate architecture for it. A genetic algorithm is used to produce
the architecture, working from an initial simplistic model and refining it using fitness functions. These evaluate the candidates produced by the genetic algorithm against such measures as simplicity, efficiency and modifiability. Their
approach then produces a proposal for a new software architecture for the specified
target system. The candidate architecture makes use of various architectural
styles and patterns to constrain the result. They study the quality of the candidate
architectures by comparing the generated solutions to ones produced by undergrad-
uate students. The results of these comparisons are very interesting!
Chapter 19 by Eoin Woods and Nick Rozanski describes how software architec-
ture can be used to frame, constrain and indeed to inspire the requirements of
a system. They observe that historically a system’s requirements and its architecture
have been viewed as having a simple relationship. Typically this was assumed to be
the requirements driving the architecture and the architecture was designed in order
to meet the requirements. They report that in their experience a much more dynamic
relationship exists and indeed must be achieved between these activities. They
present a model that allows the architecture of a system to be used to constrain the
requirements to an achievable set of possibilities, rather than be a “pipe dream” set
of possibly unattainable features. The architectural characteristics can be used to
frame these requirements making their implications clearer. New requirements may
even be inspired by consideration of a system’s architecture.
Chapter 20 by Rami Bahsoon and Wolfgang Emmerich addresses a very differ-
ent aspect of requirements and architecture relationships, namely economics in the
presence of complex middleware capabilities. They observe that most systems now
adopt some form of middleware technology platform in order to more easily and
reliably achieve many non-functional requirements for a software system, such
as scalability, openness, heterogeneity, availability, reliability and fault-tolerance.
An issue is how to evolve non-functional requirements while being able to analyse
the impact of these changes on software architectures induced by middleware.
Specifically they look at economics and stability implications and the role of
middleware in architecture in achieving non-functional requirements in a software
architecture. They propose a technique to elicit not only the current non-functional requirements for a system but also their likely evolution over the lifetime of the system and the economic impact of that evolution. This then allows the choice of appropriate middleware solutions that will both achieve the non-functional requirements when used to realise an architecture and remain stable as those requirements evolve.
18.1 Introduction
18.2 Background
Meta-heuristics [12] are commonly used for combinatorial optimization, where the
search space can become especially large. Many practically important problems are
NP-hard, making exact algorithms infeasible. Heuristic search algorithms treat an optimization problem as the task of finding a “good enough” solution among all possible solutions to a given problem, while meta-heuristic algorithms can address even the general class of problems behind a particular problem. Ideally, a search ends at a global optimum of the search space, but at the very least it will yield some local optimum, i.e., a solution that is “better” than the alternative solutions nearby.
nearby. A solution given by a heuristic search algorithm can be taken as a starting
point for further searches or be taken as the final solution, if its quality is considered
high enough.
We have used genetic algorithms, which were invented by John Holland in the
1960s. Holland’s original goal was not to design application specific algorithms, but
rather to formally study the ways of evolution and adaptation in nature and develop
ways to import them into computer science. Holland [6] presents the genetic algorithm as a formalization of this kind of adaptive process.
something about the frequency and resource consumption of the operations. For
example, if an operation that is frequently needed is activated via a message
dispatcher, there is a performance cost because of the increased message traffic.
To allow the evaluation of modifiability and efficiency, the operations can be
annotated with this kind of optional information. If this information is insufficient,
the method may produce less satisfactory results than with the additional informa-
tion. However, no actual “hints” on how the GA should proceed in the design
process are given. The null architecture gives a skeleton for the system and does not
give any finer details regarding the architectures. The information regarding the
operations merely helps in evaluating the solutions but does not directly influence the choices of the GA.
The specific quality requirements of a system are represented in two ways. First,
the fitness function used in the GA is basically a weighted sum of the values of
individual quality attributes. By changing the weights the user can emphasize or
downplay some quality attributes, or remove completely certain quality attributes
as requirements. Second, the user can optionally provide more specific quality
requirements using so-called scenarios. The scenario concept is inspired by the
ATAM architecture evaluation method [10], where scenarios are imaginary
situations or sequences of events serving as test cases for the fulfilment of a certain
quality requirement. In principle, scenarios could be used for any quality attribute,
but their formalization is a major research issue outside the scope of this work. Here
we have used only modifiability scenarios, which are fairly easy to formalize. For
example, in our case a scenario could be: “With 50% probability operation T needs
to be realized in different versions that can be changed dynamically.” This is
expressed for the GA tool using a simple formal convention covering the most common types of change scenario content.
Figure 18.1 depicts the overall synthesis process. The functional requirements
are expressed as use cases, which are refined into sequence diagrams. This is done
manually by exploiting knowledge of the major logical domain entities having
functional responsibilities. The null architecture, a class diagram, is derived mechani-
cally from the sequence diagrams. The quality requirements are encoded for the GA
as a fitness function, which is used to evaluate the produced architectures. Weights
can be given as parameters to emphasize certain quality attributes, and scenarios can
be used for more specific quality (modifiability) requirements. When the evolution
begins, the null architecture is used by the GA to first create an initial population of
architectures and then, after generations of evolution, the final architecture proposal is
presented as the best individual of the last generation. New generations are produced
by applying a fixed library of standard architectural solutions (styles, patterns, etc.) as
mutations, and crossover operations to combine architectures. The probabilities of
mutations and crossover can be given as parameters as well. The GA part is discussed
in more detail in Sect. 18.4.
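To make the shape of this process concrete, the following Python sketch shows a generic loop of this kind. It is only an illustration, not the authors' implementation: the fitness, mutation and crossover functions are passed in as parameters, and the rank-dependent crossover probability described later in Sect. 18.4 is simplified to a constant.

    import copy
    import random

    def genetic_synthesis(population, fitness, mutations, probabilities,
                          crossover, generations=250, seed=1):
        # Generic GA loop in the shape of Fig. 18.1: roulette-selected
        # mutations (applied in place to copies) and occasional crossover
        # produce offspring; parents stay in the selection pool, and the
        # best individual of the last generation becomes the proposal.
        rng = random.Random(seed)
        size = len(population)
        for _ in range(generations):
            offspring = []
            for individual in population:
                child = copy.deepcopy(individual)
                mutate = rng.choices(mutations, weights=probabilities, k=1)[0]
                mutate(child, rng)
                if rng.random() < 0.1:   # illustrative crossover probability
                    child, _ = crossover(child, rng.choice(population), rng)
                offspring.append(child)
            population = sorted(population + offspring,
                                key=fitness, reverse=True)[:size]
        return max(population, key=fitness)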
The influence of human input is present in defining the use cases which lead to
the null architecture and in giving the parameters for the GA. The use cases must be
defined manually, as they depict the functional requirements of a system: automati-
cally deciding what a system is needed for is not sensible. Giving the parameters for the GA, in turn, is necessary for the algorithm to operate.

[Fig. 18.1 Overall synthesis process: quality requirements (sub-fitness weights and scenarios) and the null architecture feed the genetic architecture synthesis, which creates an initial population, applies mutations drawn from the solution base, and produces the software architecture result.]

It is possible to leave
everything for the algorithm, and give each mutation the same probability and each
part of the fitness function the same weight. In this case, the GA will not favor any
design choice or quality aspect over another. If, however, the human architect has
a vision that certain design solutions would be more beneficial for a certain type of
system or feels that one quality aspect is more important than some other, it is
possible to take these into account when defining the parameters.
Thus, the human restricts the GA by defining the system functionality and guides the GA by setting its parameters. Additionally, the GA is
restricted by the solution base. The human can influence the solution base by
“removing solutions,” that is, by giving them probability 0, and thus making it
impossible for the GA to use them. But in any case the GA cannot move beyond the
solution base: if a pattern is not defined in the solution base, it cannot be used, and
thus the design choices are limited to those that can be achieved as a combination
of the specified solutions. Currently the patterns must be added to the solution base
by manual coding.
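For illustration, such a solution base might simply map each mutation to a probability (hypothetical names and values):

    # Hypothetical solution base: mutations in add/remove pairs, each with
    # a probability. Probability 0 "removes" a solution: the GA can then
    # never apply it, so that pattern is excluded from the design space.
    solution_base = {
        "introduce_dispatcher": 0.05, "remove_dispatcher":      0.02,
        "add_client_server":    0.05, "remove_client_server":   0.02,
        "add_strategy":         0.20, "remove_strategy":        0.05,
        "add_adapter":          0.15, "remove_adapter":         0.05,
        "add_template_method":  0.15, "remove_template_method": 0.05,
        "add_facade":           0.00, "remove_facade":          0.00,  # disabled
        "null_mutation":        0.21,
    }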
Unlike the other approaches discussed, this tool uses simulated annealing. O’Keeffe and Ó Cinnéide
[28, 29] have continued their research by constructing a tool for refactoring object-
oriented programs to conform more closely to a given design quality model. This
tool can be configured to operate using various subsets of its available automated
refactorings, various search techniques, and various evaluation functions based on
combinations of established metrics.
Seng et al. [25, 26] and O’Keeffe and Ó Cinnéide [27–29] make more substantial
design modifications than, e.g., Simons and Parmee [21, 22], and are thus closer to
our level of abstraction, but they work clearly from the re-engineering point of view,
as a well designed architecture is needed as a starting point. Also, modifications to
class hierarchies and structures are still at a lower abstraction level than the design
patterns and styles we use, as we need to consider larger parts of the system (or even
the whole system). The metrics used by Seng et al. [25, 26] and O’Keeffe and
Ó Cinnéide are also simpler, as they directly calculate, e.g., the number of methods
per class or the levels of abstraction.
Mancoridis et al. [30] have created the Bunch tool for automatic modularization.
Bunch uses hill climbing and GA to aid its clustering algorithms. A hierarchical
view of the system organization is created based on the components and relation-
ships that exist in the source code. The system modules and the module-level
relationships are represented as a module dependency graph (MDG). The goal of
the software modularization process is to automatically partition the components of
a system into clusters (subsystems) so that the resultant organization concurrently
minimizes inter-connectivity while maximizing intra-connectivity.
Di Penta et al. [31] build on these results and present a software renovation
framework (SRF) which covers several aspects of software renovation, such as
removing unused objects and code clones, and refactoring existing libraries into
smaller ones. Refactoring has been implemented in the SRF using a hybrid
approach based on hierarchical clustering, GAs and hill climbing, and it also
takes into account the developer’s feedback. Most of the SRF activities deal with
analyzing dependencies among software artifacts, which can be represented with a
dependency graph.
The studies by Mancoridis et al. [30] and Di Penta et al. [31] again differ from ours in the direction of design, as they concentrate on re-engineering and do not aim to
produce an architecture from requirements. Also they operate on different design
levels: clustering in the case of Mancoridis et al. [30] is on a higher abstraction level,
while, e.g., removing code clones in the case of Di Penta et al.’s [31] study is on
a much more detailed level than our work.
In the self-adaptation approach presented by Menascé et al. [32], an existing
SOA based system is adapted to a changing environment by inserting fault-tolerance
and load balancing patterns into the architecture at run time. The new adapted
architecture is found by a hill climbing algorithm. This work is close to ours in the
use of architecture-level patterns and heuristic search, but this approach – like other self-adaptation approaches – uses specific run-time information as the basis of architectural transformations, whereas we aim at synthesizing the architecture based on
requirements.
To summarize, most of the approaches discussed above are different from ours
in terms of the level of detail and overall aim: we are especially interested in shaping the overall architecture genetically, while the works discussed above consider the
problem of improving an existing architecture in terms of fairly fine-grained
mechanisms.
The genetic algorithm makes use of two kinds of information regarding each
operation appearing in the null architecture. First, the basic input contains the call
relationships of the operations taken from the sequence diagrams, as well as other
attributes like estimated parameter size, frequency and variability sensitiveness,
and the null architecture class it is initially placed in. Second, the information gives
the position of the operation with respect to other structures: the interface it
implements and the design patterns [24] and styles [33] it is a part of. The latter
data is produced by the genetic algorithm.
We will discuss the patterns used in this work in Sect. 18.4.2. The message
dispatcher architecture style is encoded by recording the message dispatcher the
operation uses and the responsibilities it communicates with through the dispatcher.
Other patterns are encoded as instances that contain all relevant information regard-
ing the pattern: operations involved, classes and interfaces involved, and whether
additional classes are needed for the pattern (as in the case of Façade, Mediator
and Adapter). All this data regarding an operation is encoded as a supergene.
An example of a supergene representing one operation is given in Fig. 18.2.
The chromosome handled by the genetic algorithm is gained by collecting the
supergenes, i.e., all data regarding all operations, thus representing a whole view
of the architecture. The null architecture is automatically encoded into the chromo-
some format on the basis of the sequence diagrams. An example of a chromosome
is presented in Fig. 18.3. A more detailed specification of the architecture represen-
tation is given by Räihä et al. [34, 35].
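As a rough sketch (the field names are our assumptions based on the description above; the published encoding is specified in [34, 35]), the supergene and chromosome could be modelled as follows:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class SuperGene:
        # All data regarding one operation of the null architecture.
        operation: str
        calls: List[str] = field(default_factory=list)  # call relationships
        parameter_size: int = 0    # estimated amount of data passed
        frequency: int = 0         # estimated call frequency
        variability: int = 0       # sensitiveness for variation
        clazz: str = ""            # class the operation is placed in
        interface: Optional[str] = None   # interface it implements, if any
        pattern: Optional[str] = None     # pattern instance it is part of
        dispatcher: Optional[str] = None  # dispatcher it communicates through

    # A chromosome collects the supergenes of all operations and thus
    # represents a whole view of the architecture.
    Chromosome = List[SuperGene]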
The initial population is generated by first encoding the null architecture into
the chromosome form and creating the desired number of individuals. A random
pattern is then inserted into each individual (in a randomly selected place). In
addition, a special individual is left in the population where no pattern is initially
inserted; this ensures versatility in the population.
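In code, creating the initial population could look roughly like this (a sketch; pattern_mutations stands for the "add pattern" mutations of the solution base):

    import copy
    import random

    def initial_population(null_chromosome, size, pattern_mutations,
                           rng=random.Random(0)):
        # Copy the encoded null architecture `size` times and insert one
        # random pattern into each copy, except for one special individual
        # that is left pattern-free to ensure versatility.
        population = []
        for i in range(size):
            individual = copy.deepcopy(null_chromosome)
            if i > 0:  # individual 0 keeps the plain null architecture
                mutate = rng.choice(pattern_mutations)
                mutate(individual, rng)  # inserts a pattern at a random place
            population.append(individual)
        return population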
As discussed above, the actual design is made by adding patterns to the architecture.
The patterns have been chosen so that there are very high-level architectural styles
(message dispatcher and client-server), medium-level design patterns (Façade and
Mediator), and low-level design patterns (Strategy, Adapter and Template Method).
The particular patterns were chosen also because they mostly deal with structure
and need very little or no information of the semantics of the operations involved.
The mutations are implemented in pairs: one introducing a specific pattern and one removing it. The dispatcher architecture style makes a small exception to this rule: the
actual dispatcher must first be introduced to the system, after which the components
can communicate through it.
Preconditions are used to check that a pattern is applicable. If, for example, the
“add Strategy”-mutation is chosen for operation oi, it is checked that oi is called by
some other operation in the same class c and that it is not a part of another pattern
already (pattern field is empty). Then, a Strategy pattern instance spi is created.
It contains information of the new class(es) sci where the different version(s) of the
operation are placed, and the common interface sii they implement. It also contains
information of all the classes and operations that are dependent on oi, and thus use
the Strategy interface. Then, the value in the class field in the supergene sgi
(representing oi) would be changed from c to sci, the interface field would be
given value sii and the pattern field the value spi. Adding other patterns is done
similarly. Removing a pattern is done in reverse: the operation placed in a “pattern
class” would be returned to its original null architecture class, and the pattern
found in the supergene’s pattern field would be deleted, as well as any classes
and interfaces related to it.
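Building on the SuperGene sketch above, the "add Strategy" mutation could be approximated as follows (a simplified sketch only; the real tool records considerably more information per pattern instance):

    def add_strategy(chromosome, rng):
        # "Add Strategy" mutation: pick a random operation oi and, if the
        # preconditions hold, place it behind a Strategy interface.
        sg = rng.choice(chromosome)
        called_in_class = any(g.clazz == sg.clazz and sg.operation in g.calls
                              for g in chromosome if g is not sg)
        if not called_in_class or sg.pattern is not None:
            return  # preconditions failed: oi must be called within its own
                    # class and must not already be part of another pattern
        sg.pattern = f"strategy_{sg.operation}"   # pattern instance spi
        sg.clazz = f"{sg.operation}Strategy"      # new class sci for versions
        sg.interface = f"I{sg.operation}"         # common interface sii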
The crossover is implemented as a traditional one-point crossover. That is, given
chromosomes ch1 and ch2 that are selected for breeding, a crossover point p is first
chosen at random, so that 0 < p < n, if the system has n operations. The supergenes
sg1. . .sgp from chromosome ch1 and supergenes sgp+1. . . sgn from ch2 will form one
child, and supergenes sg1. . .sgp from chromosome ch2 and supergenes sgp+1. . . sgn
from ch1 another child.
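As a sketch, the one-point crossover over supergenes could read:

    import copy
    import random

    def one_point_crossover(ch1, ch2, rng):
        # Traditional one-point crossover: pick 0 < p < n and swap the
        # supergene tails of the two parent chromosomes.
        n = len(ch1)
        p = rng.randint(1, n - 1)
        child_a = copy.deepcopy(ch1[:p] + ch2[p:])
        child_b = copy.deepcopy(ch2[:p] + ch1[p:])
        return child_a, child_b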
A corrective function is added to ensure that the architectures stay coherent,
as patterns may be broken by overlapping mutations. In addition to ensuring that
the patterns present in the system stay coherent and “legal,” the corrective function
also checks that no anomalies are brought to the design, such as interfaces without
any users.
The mutation points are also selected randomly. However, we have taken advantage of the variability property of operations with the Strategy, Adapter and
dispatcher communication mutations. The chances of a gene being subjected to
these mutations increase with respect to the variability value of the corresponding
operation. This should favor highly variable operations.
The actual mutation probabilities are given as input. The mutation is selected with a “roulette wheel” selection [36], where the size of each slice of the wheel is in proportion to the given probability of the respective mutation. Null
mutation and crossover are also included in the wheel. The crossover probability
increases linearly in relation to the fitness rank of an individual, which causes the
probabilities of mutations to decrease in order to fit the larger crossover slice to the
wheel. Also, after crossover, the parents are kept in the population for selection.
These actions favor keeping strong individuals intact through the generations. Each
individual has a chance of reproducing in each generation: if the first roulette
selection lands on a mutation, another selection is performed after the mutation
has been administered. If the second selection lands on the crossover slice, the
individual may produce offspring. In any other case, the second selection is not
taken into account, i.e., the individual is not mutated twice.
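The roulette wheel itself is simple to sketch: each option owns a slice of the interval [0, total) proportional to its probability. (Python's random.choices implements the same idea.)

    import random
    from bisect import bisect

    def roulette_select(options, probabilities, rng=random.Random(0)):
        # Roulette-wheel selection [36]: the size of each slice of the
        # wheel is proportional to the given probability of the option.
        cumulative, total = [], 0.0
        for p in probabilities:
            total += p
            cumulative.append(total)
        spin = rng.random() * total
        return options[bisect(cumulative, spin)]

    # e.g. roulette_select(["add_strategy", "crossover", "null_mutation"],
    #                      [0.5, 0.3, 0.2])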
The fitness function needs to produce a numerical value, and is thus composed of
software metrics [37, 38]. In particular, the metrics introduced by Chidamber and Kemerer [9] have been used as a starting point for the fitness function, and have been
further developed and grouped to achieve clear “sub-functions” for modifiability
and efficiency, both of which are measured with a set of positive and negative
metrics. The most significant modifications to the basic metrics include taking
into account the positive effect of interfaces and the dispatcher and client-server
architecture styles in terms of modifiability, as well as the negative effect of the
dispatcher and server in terms of efficiency. A simplicity metric is added to penalize
having many classes and interfaces.
Dividing the fitness function into sub-functions gives the possibility to empha-
size certain quality attributes and downplay others by assigning different weights
for different sub-functions. These weights are set by the human user in order to
guide the GA in case one quality aspect is considered more favorable than some
other. Denoting the weight for the respective sub-function sfi with wi, the core
fitness function fc(x) for architecture x can be expressed as

fc(x) = w1 sf1 - w2 sf2 + w3 sf3 - w4 sf4 - w5 sf5.
Here, sf1 measures positive modifiability, sf2 negative modifiability, sf3 positive
efficiency, sf4 negative efficiency and sf5 complexity. The sub-fitness functions are
each composed of a set of such metrics (|X| denotes the cardinality of a set X).
Adding the scenario sub-fitness function sfs, weighted by ws, to the core fitness function results in the overall fitness f(x) = fc(x) + ws sfs.
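Putting the pieces together, and with the signs inferred from the positive and negative roles of the sub-functions described above, the fitness computation amounts to the following sketch:

    def overall_fitness(sf, w, sfs, ws):
        # sf = [sf1..sf5]: positive modifiability, negative modifiability,
        # positive efficiency, negative efficiency, complexity.
        # w = [w1..w5]: the user-set weights; ws weights the scenario
        # sub-fitness sfs. The negatively-acting terms are subtracted.
        sf1, sf2, sf3, sf4, sf5 = sf
        w1, w2, w3, w4, w5 = w
        fc = w1 * sf1 - w2 * sf2 + w3 * sf3 - w4 * sf4 - w5 * sf5
        return fc + ws * sfs

    # e.g. overall_fitness([10, 4, 6, 3, 5], [1, 1, 1, 1, 1], sfs=2, ws=1)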
18.5 Application
As an example system, we will use the control system for a computerized home,
called ehome. Use cases for this system are assumed to consist of logging in,
changing the room temperature, changing the unit of temperature, making coffee,
moving drapes, and playing music. In Fig. 18.4, the coffee making use case has
been refined into a sequence diagram.
Since we are here focusing on the architecture of the actual control system, we
ignore user interface issues and follow a simple convention that the user interface is
represented by a single (subsystem) participant that can receive use case requests.
Accordingly, in the null architecture the user interface is in this example repre-
sented by a single component that has the use cases as operations.
To refine this use case, we observe that we need further components. The main
unit for controlling the coffee machine is introduced as CoffeeManager; addition-
ally, there is a separate component for managing water, WaterManager. If a
component has a significant state or it manages a significant data entity (like, say, a database), this is added to the participant box. In this case, CoffeeManager and
WaterManager are assumed to have significant state information.
The null architecture in Fig. 18.5 (made by hand in this study) for the ehome
system can be mechanically derived from the use case sequence diagrams. The
null architecture only contains use relationships, as no more detail is given for the
algorithm at this point. The null architecture represents the basic functional decom-
position of the system.
After the operations are derived from the use cases, some properties of the
operations can be estimated to support the genetic synthesis, regarding the amount
of data an operation needs, frequency of calls, and sensitiveness for variation.
For example, it is likely that the coffee machine status can be shown in several
different ways, and thus it is more sensitive to variation than ringing the buzzer
when the coffee is done. Measuring the position of drapes requires more infor-
mation than running the drape motor, and playing music quite likely has a higher
frequency than changing the password for the system. Relative values for the
chosen properties can similarly be estimated for all operations. This optional
information, together with operation call dependencies, is included in the infor-
mation subjected to encoding.
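For illustration only, such relative estimates might be recorded as follows (hypothetical operation names and values, following the examples just given):

    # (estimated parameter size, call frequency, sensitiveness for
    # variation), on an illustrative relative scale of 1-5.
    operation_properties = {
        "showCoffeeStatus": (2, 3, 5),  # many plausible ways to show it
        "ringBuzzer":       (1, 3, 1),  # unlikely to ever vary
        "measureDrapes":    (4, 2, 2),  # needs more data than...
        "runDrapeMotor":    (1, 2, 2),  # ...running the drape motor
        "playMusic":        (3, 5, 3),  # high call frequency
        "changePassword":   (2, 1, 1),  # rarely called
    }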
Finally, different stakeholders’ viewpoints are considered regarding how the
system might evolve in the future, and modifiability scenarios are formulated
accordingly. For example, change scenarios for the ehome system include:
• The user should be able to change the way the music list is shown (90%)
• The developer should be able to change the way water is connected to the coffee
machine (50%)
• The developer should be able to add another way of showing the coffee machine
status (60%).
A total of 15 scenarios were given for the ehome system.
18.5.2 Experiment
In our experiment, we used a population of 100 and 250 generations. The fitness
curve presented is an average of 10 test runs, where the actual y-value is the average
of the 10 best individuals in a given population. The weights and probabilities for the
tests were chosen based on previous experiments [34, 35, 40].
We first set all the weights to 1, i.e., did not favor any quality factor over another.
The architecture achieved this way was quite simple. There were fairly well-placed
instances of all low-level patterns (Adapter, Template Method and Strategy), and the
client-server architecture style was also applied. Strikingly, however, the message
dispatcher was not used as the general style, which we would have expected for this
type of system. Consequently, we calibrated the weights by emphasizing positive
modifiability over other quality attributes. At the same time, negative efficiency was given a smaller than usual weight, to indicate that the possible performance penalty of solutions increasing modifiability is not crucial. The fitness curve for this experiment
is given in Fig. 18.6. As can be seen, the fitness curve develops steadily, and most
improvement takes place between 1 and 100 generations, which is expected, as the
architecture is still simple enough that applying the different mutations is easy.
An example solution with increased modifiability weight is depicted in Fig. 18.7.
Now, the dispatcher architecture style is present, and there are also more Strategy
patterns than in the solution where all quality factors were equally weighted. This is
a natural consequence of the weighting: the dispatcher has a significant positive effect on modifiability, and since it is not punished too much for inefficiency, it is fairly heavily used as a communication pattern. The same applies to Strategy, although on a smaller scale.

[Fig. 18.6 Fitness development for the modifiability-weighted run: average fitness of 10 test runs (y-axis, 0–12,000) over generations 1–250 (x-axis).]
[Fig. 18.7 Example architecture for ehome when modifiability is weighted over other quality factors.]

18.6.1 Setup

The students taking part in the experiment were third year Software Systems majors from Tampere University of Technology,
having participated in a course on software architectures.
The students were given essentially the same information that is used as input for
the GA, that is, the null architecture, the scenarios, and information about the expected
frequencies of operations. In addition, students were given a brief explanation of
the purpose and functionality of the system. They were asked to design the architec-
ture for the system, using only the same architecture styles (message dispatcher and client-server) and design patterns that were available to the genetic synthesizer.
After the students had returned their designs, the assistant teacher for the course
(impartial to the GA research) was asked to grade the designs as test answers on a
scale of 1–5, five being the highest. The solutions were then categorized according
to the points they achieved. From the categories of 1, 3 and 5, one solution for each
category was randomly selected. These architectures were presented as grading
examples to four software engineering experts. The experts were researchers and
teachers at the Department of Software Systems at Tampere University of Technol-
ogy. They all had an M.Sc. or a Ph.D. degree in Software Systems or in a closely related discipline and several years of expertise in software architectures, gained through research or teaching.
In the actual experiment, the experts were given ten pairs of architectures. One
solution in each pair was a student solution, selected randomly from the set of
student solutions, and one was a synthesized solution. The solutions were edited in
such a way that it was not possible for the experts to know which solution was
synthesized. The experts were then asked to give each solution 1, 3 or 5 points.
They were given the same information as the students regarding the requirements.
The experts were not told how the solutions were achieved, i.e., that they were
a combination of student and synthesized solutions. They were merely asked to help in evaluating how good the solutions produced by a synthesizer could be.
18.6.2 Results
The scores given by the experts (e1–e4) to all the automatically synthesized architectures (a1–a10) and the architectures produced manually by the students (m1–m10) are shown in Table 18.1. The points in Table 18.1 are organized so
that the points given to the synthesized and human-made solutions of the same pair
(ai, mi) are put next to each other, so that the pairwise points are easily seen. The result of each comparison is one of the following:
Table 18.1 Points for synthesized solutions and solutions produced by the students
a1 m1 a2 m2 a3 m3 a4 m4 a5 m5 a6 m6 a7 m7 a8 m8 a9 m9 a10 m10
e1 3 3 1 3 5 3 1 5 3 1 1 3 3 3 5 3 3 5 3 3
e2 5 1 3 3 5 1 1 1 3 3 3 5 1 1 3 1 1 1 5 1
e3 3 3 3 5 3 3 1 3 3 1 3 1 1 3 1 1 3 3 3 1
e4 3 1 5 3 3 5 3 1 5 1 5 3 3 3 3 1 3 3 5 1
• The synthesized solution is considered better (ai > mi, denoted later by +)
• The human-made solution is considered better (mi > ai, denoted later by −), or
• The solutions are considered equal (ai = mi, denoted later by 0).
By doing so, we lose some information because one of the solutions is consid-
ered simply “better” even in the situation when it receives 5 points while the other
receives 1 point. As can be seen in Table 18.1, this happens six times in total. In five
of these six cases the synthesized solution is considered clearly better than the
human-made solution, and only once vice versa. As our goal is to show that the
synthesized solutions are at least as good as the human-made solutions, this loss of information does not bias the results.
The best synthesized solutions appear to be a3 and a10, with two 3’s and two 5’s.
In solution a3 the message dispatcher was used, and there were quite few patterns,
so the design seemed easily understandable while still being modifiable. However,
a10 was quite the opposite: the message dispatcher was not used, and there were as many as eight instances of the Strategy pattern, whereas a3 had only two.
There were also several Template Method and Adapter pattern instances. In this
case the solution was highly modifiable, but not nearly as good in terms of
simplicity. This demonstrates how very different solutions can be highly valued
with the same evaluation criteria, when the criteria are conflicting: it is impossible
to achieve a solution that is at the same time optimally efficient, modifiable and still
understandable.
The worst synthesized solution was considered to be a4, with three 1’s and one 3.
This solution used the message dispatcher but also the client-server style was
eagerly applied. There were not very many patterns, and the ones that existed
were quite poorly applied. Among the human-made solutions, three (m5, m8, and m10) received equally low scores.
Table 18.2 shows the numbers of the preferences of the experts, with “+” indicating that the synthesized proposal was considered better than the student proposal, “−” indicating the opposite, and “0” indicating a tie. Only one (e1) of the four experts
preferred the human-made solutions slightly more often than the synthesized solution,
while two experts (e2 and e4) preferred the synthesized solutions clearly more often
than the human-made solutions. The fourth expert (e3) preferred both types of solu-
tions equally. In total, there were 17 pairs of solutions with a better score for the synthesized solution, nine pairs preferring the human-made solution, and 14 ties.
The above crude analysis clearly indicates that in our simple experiment, the
synthesized solutions were ranked at least as high as student-made solutions. In
order to get more exact information about the preferences, and to find confirmation even for the hypothesis that the synthesized solutions are significantly better
than student-made solutions, it would be possible to use an appropriate statistical
test (e.g., counting the Kendall coefficient of agreement). However, we omit such
studies due to the small number of both experts and architecture proposals consid-
ered. At this stage, it is enough to notice that the synthesized solutions are
competitive with those produced by third year software engineering students.
We acknowledge that there are several threats and limitations in the presented
experiment. Firstly, as the solutions for evaluations were selected randomly out of
all the 38 student (and synthesized) solutions, it is theoretically possible that the
solutions selected for the experiment do not give a true representation of the entire
solution group. However, we argue that as all experts were able to find solutions they
judged worthy of 5 points as well as solutions worth only 1 point, and the majority of solutions were given 3 points, it is unlikely that the solutions subjected to evaluation would be so biased that the outcome of the experiment would be substantially affected.
Secondly, the pairing of solutions could be questioned. A more diverse evaluation could have been achieved if the experts had been given the solutions in different pairs (e.g., for expert e1 the solution a1 would have been paired with m5 instead of m1). One
might also ask if the outcome would be different with different pairing. We argue
that as the overall points are better for the synthesized solutions, different pairing
would not significantly change the outcome. Also, the experts were not actually told
to evaluate the solutions as pairs – the pairing was simply done in order to ease the
evaluation and analysis processes.
Thirdly, the actual evaluations made by the experts should be considered.
Naturally, having more experts would have strengthened the results. However,
the evaluations were quite uniform. There were very few cases where three experts
considered the synthesized solution better or equal to the student solution (or the
student solution better or equal to the synthesized one) and the fourth evaluation was completely contradictory. In fact, there were only three cases where such a
contradiction occurred (pairs 2, 3 and 4), and the contradicting expert was always
the same (e4). Thus we argue that the consensus between experts is sufficiently
good, and increasing the number of evaluations would not substantially alter the
outcome of the experiment in its current form.
Finally, the task setup was limited in the sense that architecture design was
restricted to a given selection of patterns. Giving such a selection to the students
may both improve the designs (as the students know that these patterns are
potentially applicable) and worsen the designs (due to overuse of the patterns).
Unfortunately, this limitation is due to the genetic synthesizer in its current stage,
and could not be avoided.
18.7 Conclusions
We have presented a method for using genetic algorithms for producing software
architectures, given a certain representation of functional and quality requirements.
We have focused on three quality attributes: modifiability, efficiency and simpli-
city. The approach is evaluated with an empirical study, in which the produced architectures were given to experts for evaluation alongside student solutions for the same design problem.
The empirical study suggests that, with the assumptions given in Sect. 18.1, it is
possible to synthesize software architectures that are roughly at the level of an
undergraduate student. In addition to the automation aspect, major strengths of
the presented approach are its versatility and options for expansion. Theoretically, an unlimited number of patterns can be used in the solution library, while a human
designer typically considers only a fairly limited set of standard solutions. The
genetic synthesis is also not tied to prejudices, and is able to produce fresh, unbiased
solutions that a human architect might not even think of. On the other hand, the
current research setup and experiments are still quite limited. Obviously, the
relatively simple architecture design task given in the experiment is still far from
real-life software architecture design, with all its complications.
The main challenge in this approach is the specification of the fitness function.
As it turned out in the experiment, even experts can disagree on what is a good
architecture. Obviously, the fitness function can only approximate the idea of archi-
tectural quality. Also, tuning the parameters (fitness weights and mutation proba-
bilities) is nontrivial and may require calibration for a particular type of system. To alleviate the problem of tuning the weights of different quality attributes, we are currently exploring the use of Pareto optimality [41] to produce multiple architecture proposals with different emphases on the quality attributes, instead of a single one.
In the future we will focus on potential applications of genetic software archi-
tecture synthesis. A particularly attractive application field of this technology is
self-adapting systems (e.g., Cheng et al. [42]), where systems are really expected
to “redesign” themselves without human intervention. Self-adaptation is required particularly in systems that are hard to maintain in a traditional way, like constantly
running embedded systems or highly distributed web systems. We see the genetic
technique proposed in this paper as a promising approach to give systems the ability
to reconsider their architectural solutions based on some changes in their require-
ments or environment.
Acknowledgments We wish to thank the students and experts for participating in the experiment.
The anonymous reviewers have significantly helped to improve the paper. This work has been
funded by the Academy of Finland (project Darwin).
References
1. Denning P, Comer DE, Gries D, Mulder MC, Tucker A, Turner AJ, Young PR (1989)
Computing as a discipline, Commun. ACM 32(1):9–23
2. Brown WJ, Malveau C, McCormick HW, Mowbray TJ (1998) Antipatterns – refactoring
software, architectures, and projects in crisis. Wiley
3. S3 (2008) Proceedings of the 2008 workshop on self-sustaining systems. S3’2008, Potsdam,
Germany, 15–16 May 2008. Lecture notes in computer science, vol 5146. Springer-Verlag,
Heidelberg
4. Diaz-Pace A, Kim H, Bass L, Bianco P, Bachmann F (2008) Integrating quality-attribute
reasoning frameworks in the ArchE design assistant. In: Becker S, Plasil F, Reussner R (eds)
Proceedings of the 4th international conference on quality of software-architectures: models
and architectures. Lecture notes in computer science, vol 5281. Springer, Karlsruhe,
Germany, p 171
5. Buschmann F, Meunier R, Rohnert H, Sommerland P, Stal M (1996) A system of patterns –
pattern-oriented software architecture. John Wiley & Sons, West Sussex, England
6. Holland JH (1975) Adaptation in natural and artificial systems. University of Michigan Press, Ann Arbor, Michigan, USA
7. Mitchell M (1996) An introduction to genetic algorithms. MIT Press, Cambridge
8. ISO (2001) Software engineering – product quality – part 1: quality model. ISO/IEC 9126-1:2001
9. Chidamber SR, Kemerer CF (1994) A metrics suite for object oriented design. IEEE T
Software Eng 20(6):476–492
10. Clements P, Kazman R, Klein M (2002) Evaluating software architectures. Addison-Wesley,
Reading
11. Clarke J, Dolado JJ, Harman M, Hierons R, Jones MB, Lumkin M, Mitchell B, Mancoridis S,
Rees K, Roper M, Shepperd M (2003) Reformulating software engineering as a search
problem. IEE Proc – Softw 150(3):161–175
12. Glover FW, Kochenberger GA (eds) (2003) Handbook of metaheuristics, vol 57, International
series in operations research & management science. Springer, Heidelberg
13. Salomon R (1998) Short notes on the schema theorem and the building block hypothesis
in genetic algorithms. In: Porto VW, Saravanan N, Waagen D, Eiben AE (eds) Evolutionary
programming VII, 7th international conference, EP98, California, USA. Lecture notes in
computer science, vol 1447. Springer, Berlin, p 113
14. Babar MA, Dingsoyr T, Lago P, van Vliet H (eds) (2009) Software architecture knowledge
management – theory and practice establishing and managing knowledge sharing networks.
Springer, Heidelberg
15. ISO (2010) Systems and software engineering – architecture description. ISO/IEC CD1
42010: 1–51
16. Kruchten P (1995) Architectural blueprints – the “4 + 1” view model of software architec-
ture. IEEE Softw 12(6):42–50
17. Selonen P, Koskimies K, Systä T (2001) Generating structured implementation schemes from
UML sequence diagrams. In: QiaYun L, Riehle R, Pour G, Meyer B (eds) Proceedings of
TOOLS 2001. IEEE CS Press, California, USA, p 317
18. Harman M, Mansouri SA, Zhang Y (2009) Search based software engineering: a comprehen-
sive review of trends, techniques and applications. Technical report TR-09-03, King's College London
19. Räihä O (2010) A survey on search-based software design. Comput Sci Rev 4(4):203–249
20. Bowman M, Briand LC, Labiche Y (2007) Solving the class responsibility assignment
problem in object-oriented analysis with multi-objective genetic algorithms. Technical report
SCE-07-02, Carleton University
21. Simons CL, Parmee IC (2007a) Single and multi-objective genetic operators in object-
oriented conceptual software design. In: Proceedings of the genetic and evolutionary compu-
tation conference (GECCO’07). ACM Press, London, UK, p 1957
22. Simons CL, Parmee IC (2007) A cross-disciplinary technology transfer for search-based
evolutionary computing: from engineering design to software engineering design. Eng Optim
39(5):631–648
23. Amoui M, Mirarab S, Ansari S, Lucas C (2006) A GA approach to design evolution using
design pattern transformation. Int J Inform Technol Intell Comput 1:235–245
24. Gamma E, Helm R, Johnson R, Vlissides J (1995) Design patterns, elements of reusable
object-oriented software. Addison-Wesley, Reading
25. Seng O, Bauyer M, Biehl M, Pache G (2005) Search-based improvement of subsystem
decomposition. In: Proceedings of the genetic and evolutionary computation conference
(GECCO’05). ACM Press, Mannheim, Germany, p 1045
26. Seng O, Stammel J, Burkhart D (2006) Search-based determination of refactorings for
improving the class structure of object-oriented systems. In: Proceedings of the genetic
and evolutionary computation conference (GECCO’06). ACM Press, Washington, USA,
p 1909
27. O’Keeffe M, Ó Cinnéide M (2004) Towards automated design improvements through com-
binatorial optimization. In: Workshop on directions in software engineering environments
(WoDiSEE2004), Workshop at ICSE’04, 26th international conference on software engineer-
ing. Edinburgh, Scotland, p 75
28. O’Keeffe M, Ó Cinnéide M (2006) Search-based software maintenance. In: Proceedings of
conference on software maintenance and reengineering. IEEE CS Press, Bari, Italy, p 249
29. O’Keeffe M, Ó Cinnéide M (2008) Search-based refactoring for software maintenance. J Syst
Software 81(4):502–516
30. Mancoridis S, Mitchell BS, Rorres C, Chen YF, Gansner ER (1998) Using automatic
clustering to produce high-level system organizations of source code. In: Proceedings of the
international workshop on program comprehension (IWPC’98). Silver Spring, p 45
31. Di Penta M, Neteler M, Antoniol G, Merlo E (2005) A language-independent software
renovation framework. J Syst Software 77:225–240
32. Menascé DA, Sousa JP, Malek S, Gomaa H (2010) QoS architectural patterns for self-
architecting software systems. In: Proceedings of the 7th international conference on auto-
nomic computing and communications. ACM Press, Washington DC, USA, p 195
33. Shaw M, Garlan D (1996) Software architecture – perspectives on an emerging discipline.
Prentice Hall, Englewood Cliffs
34. Räihä O, Koskimies K, Mäkinen E (2008) Genetic synthesis of software architecture. In: Li X
et al. (eds) Proceedings of the 7th international conference on simulated evolution and
learning (SEAL’08). Lecture notes in computer science, vol 5361. Springer, Melbourne,
Australia, p 565
35. Räihä O, Koskimies K, Mäkinen E, Systä T (2008) Pattern-based genetic model refinements in
MDA. Nordic J Comput 14(4):338–355
36. Michalewicz Z (1992) Genetic algorithms + data structures = evolution programs.
Springer, New York
37. Losavio F, Chirinos L, Matteo A, Lévy N, Ramdane-Cherif A (2004) ISO quality standards
measuring architectures. J Syst Software 72:209–223
38. Mens T, Demeyer S (2001) Future trends in evolution metrics. In: Proceedings of international
workshop on principles of software evolution. ACM Press, Vienna, Austria, p 83
39. Bass L, Clements P, Kazman R (1998) Software architecture in practice. Addison-Wesley,
Boston
19.1 Introduction
inherent in the underlying business drivers that the system aims to meet. This
process is part of a wider three-way interaction between requirements, architecture
and project management. In this chapter we focus on the interaction between the
requirements and architectural design processes, while touching on the relationship
that both have with project management.
We start by examining the classical relationship between requirements and
architectural design, before moving on to describe how to achieve a richer, more
positive, interaction between requirements and architecture. We then illustrate the
approach with a real case study from the retail sector. In so doing, we hope to show
how architects need to look beyond the requirements that they are given and work
creatively and collaboratively with requirements analysts and project managers in
order to meet the system’s business goals in the most effective way.
In this work, we focus on the architecture and requirements of information
systems, as opposed to real-time or embedded systems. We have done this because
that is the area in which we have both gained our architectural experience and
applied the techniques we describe in the case study.
While both are very familiar concepts, given the number of interpretations that exist
of the terms “system requirements” and “software architecture” it is worth briefly
defining both in order to clearly set the scene for our discussion.
To avoid confusion over basic definitions, we use widely accepted standard
definitions of both concepts, and they serve our purposes perfectly well.
Following the lead of the IEEE [7] we define systems requirements as being “(1)
a condition or capability needed by a user to solve a problem or achieve an
objective; (2) a condition or capability that must be met or possessed by a system
or system component to satisfy a contract, standard, specification, or other formally
imposed document; or a documented representation of a condition or capability as
in (1) or (2).”
For software architecture, we use the standard definition from the ISO 42010
standard [8], which is “the fundamental organization of a system embodied in its
components, their relationships to each other, and to the environment, and the
principles guiding its design and evolution.”
So for the purposes of our discussion we view requirements as being the
definition of the capabilities that the system must deliver, and the architecture
of the system being its structure and organization that should allow the system to
provide the capabilities described by its requirements.
Gathering requirements is a complicated, subtle and varied task and for large
systems the primary responsibility for this usually lies with a specialist require-
ments analyst (or “requirements engineer” depending on the domain and termino-
logy in force). The task is usually divided into understanding the functions that the
system must provide (its functional requirements) and the qualities that it must exhibit (its non-functional requirements).
Recognition of the importance of the relationship between the requirements and the
design of a system is not a recent insight and software development methods have
been relating the two for a long time.
The classical “Waterfall” method for software development [18] places
requirements analysis quite clearly at the start of the development process and
then proceeds to system design, where design and architecture work would take
place. There is a direct linkage between the activities, with the output of the requirements analysis process being a (complete) set of specifications that the
system must meet, which form the input to the design process. The design process is
in turn a problem-solving activity to identify a complete design for the system that
will allow it to meet that specification. The waterfall method is not really used in
practice due to its rigidity and the resulting high cost of mistakes (as no validation of
the system can be performed until it is all complete) but it does form a kind of cultural
backdrop upon which other more sophisticated approaches are layered and compared.
From our current perspective, the interesting thing about the waterfall approach is that
although primitive, it does recognise the close relationship of requirements analysis
and system architecture. The major limitation of the model is that it is a one-way
relationship with the requirements being fed into the design process as its primary
input, but with no feedback from the design to the requirements.
The well-known “spiral” model of software development [3] is one of the better-
known early attempts at addressing the obvious limitations of the waterfall
approach and has informed a number of later lifecycles such as Rational’s RUP
method [12]. The spiral model recognises that systems cannot be successfully
delivered using a simple set of forward-looking activities but that an iterative
approach, with each iteration of the system involving some requirements analysis,
design, implementation and review, is a much more effective and lower-risk way to
deliver a complicated system. The approach reorganises the waterfall process into a
series of risk-driven, linked iterations (or “spirals”), each of which attempts to
identify and address one or more areas of the system’s design. The spiral model
emphasises early feedback to the development team by way of reviews of all work products at the end of each iteration, including system prototypes that can
be evaluated by the system’s main stakeholders. Fundamentally the spiral model
focuses on managing risk in the development process by ensuring that the main
risks facing the project are addressed in order of priority via an iterative prototyping
process and this approach is used to prioritise and guide all of the system design
activities.
A well-known development of the spiral model is the “Twin Peaks” model of
software development, as defined by Bashar Nuseibeh [15] which attempts to
address some limitations of the spiral model by organising the development process
so that the system’s requirements and the system’s architecture are developed in
parallel. Rather than each iteration defining the requirements and then defining
(or refining) the architecture to meet them, the Twin Peaks model suggests that the
two should be developed alongside each other because “candidate architectures
can constrain designers from meeting particular requirements, and the choice of
requirements can influence the architecture that designers select or develop.”
While the spiral model’s approach to reducing risk is to feedback to the require-
ments process regularly, the Twin Peaks model’s refinement of this is to make the
feedback immediate during the development of the two. By running concurrent,
interlinked requirements and architecture processes in this way the approach aims
to address some particular concerns in the development lifecycle, in particular
“I will know it when I see it” (users not knowing what their requirements are until
something is built), using large COTS components within systems and rapid
requirements change.
Most recently, the emergence of Agile software development approaches [14] has
provided yet another perspective on the classical relationship between requirements
and design. Agile approaches stress the importance of constant communication,
working closely with the system’s end users (the “on-site customer”) throughout
the development process, and regular delivery of valuable working software to allow
it to be used, its value assessed and for the “velocity” (productivity) of the develop-
ment team to be measured in a simple and tangible way. An important difference to
note between the Agile and spiral approaches is that the spiral model assumes that the
early deliveries will be prototype software whereas an Agile approach encourages the
software to be fully developed for a minimal feature set and delivered to production
and used (so maximising the value that people get from it, as early as possible). In an
agile project, requirements and design artefacts tend to be informal and lightweight,
with short “user stories” taking the place of detailed requirements documentation
and informal (often short-lived) sketches taking the place of more rigorous and
lengthy design documentation. The interplay between requirements and design is
quite informal in the Agile approach, with requirements certainly driving design
choices as in other approaches, and the emphasis being on the design emerging from
the process of adding functions to the system, rather than “upfront” design. Feedback
from the design to the requirements is often implicit: a designer may realize the
difficulty of adding a new feature (and so a major piece of re-design – refactoring – is
required), or spot the ability to extend the system in a new way, given the system’s
potential capabilities, and suggest this to one of the customers who may decide to
write a new “user story.”
In summary, the last 20 years have seen significant advances in the approach
taken to relating requirements and design, with the emphasis on having design work
inform the requirements process as early as possible, rather than leaving this until
the system is nearly complete. However the remaining problem that we see with all
of these approaches is that architecture is implicitly seen as the servant of the
requirements process. Our experience suggests that in fact it is better to treat these
two activities as equal parts of the system definition process, where architecture is
not simply constrained and driven by the system’s requirements but has a more
fundamental role in helping to scope, support and inspire them.
We have found that the basis for defining a fruitful relationship between require-
ments and architecture needs to start with a consideration of the business drivers
that cause the project to be undertaken in the first place. We consider the business
drivers to be the underlying external forces acting on the project, and they capture
the fundamental motivations and rationale for creating the system. Business drivers
answer the fundamental “why” questions that underpin the project: Why is developing the system going to benefit the organization? Why has it chosen to focus its energies and investment in this area rather than elsewhere? What is changing in the wider environment that makes this system necessary or useful? Requirements
capture and architectural design, on the other hand, tend to answer (in different
ways) the “what” and “how” questions about the system.
It is widely accepted that business drivers provide context, scope and focus for the requirements process; however, we have also found them to be an important input into architectural design, by allowing design principles to be identified and
justified by reference to them. Of course the requirements are also a key input to
the architectural design, defining the capabilities that the architecture will need
to support, but the business drivers provide another dimension, which helps the
architect to understand the wider goals that the architecture will be expected to
support. These relationships are illustrated by the informal diagram in Fig. 19.1.
Having a shared underlying set of drivers gives the requirements and architec-
ture activities a common context and helps to ensure that the two are compatible
and mutually supportive (after all, if they are both trying to support the same
business drivers then they should be in broad alignment). However, it is important
to understand that while the same set of drivers informs both processes, they may be
used in quite different ways.
Some business drivers will tend to influence the requirements work more directly,
while others will tend to influence architectural design more. For example, in a retail
environment the need to respond to expansion from a single region where the
organisation has a dense footprint of stores into new geographical areas is likely to
have an effect on both the requirements and the architecture. It is clear that such drivers
could lead to new requirements in the area of legislative flexibility, logistics and
distribution, the ability to have multiple concurrent merchandising strategies, infor-
mation availability, scalability with respect to stores and sales etc. However, while
these requirements would certainly influence the architecture, what is perhaps less
obvious is that the underlying business driver could directly influence the architecture
[Figs. 19.1 and 19.2 (informal diagrams): Fig. 19.1 shows the business drivers supplying motivation, business context and priorities to both requirements capture and architectural design, with the two activities exchanging needs, aspirations, dependencies and priorities against technical context, constraints and possibilities. Fig. 19.2 shows the three-way negotiation between the project manager, requirements analyst and software architect over scope, cost and time.]
in other ways, such as needing to ensure that the system is able to cope with relatively
high network latency between its components or the need to provide automated and/or
remote management of certain system components (e.g. in-store servers). These
architectural decisions and constraints then in turn influence the requirements that
the system can meet and may also suggest completely new possibilities. For example,
the ability to cope with high network latencies could both limit requirements, perhaps
constraining user interface options, and also open up new possibilities, such as the
ability for stores to be able to operate in offline mode, disconnected from the data
center, while continuing to sell goods and accept payments.
The other non-technical dimension to bear in mind is that this new relationship
between requirements and architecture will also have an effect on the decision-
making in the project. Whereas traditionally, the project manager and requirements
engineer/analyst took many of the decisions with respect to system scope and
function, this now becomes a more creative three-way tension between project
manager, requirements engineer and software architect as illustrated by the infor-
mal diagram in Fig. 19.2.
All three members of the project team are involved in the key decisions for the
project and so there should be a significant amount of detailed interaction between
them. The project manager is particularly interested in the impact of the architect’s
and requirements analyst’s decisions on scope, cost and time, and the requirements
analyst and architect negotiate and challenge each other on the system’s scope,
qualities and the possibilities offered by the emerging architecture.
[Fig. 19.3: the Twin Peaks model, in which requirements (implementation-independent) and architecture (implementation-dependent) are developed in parallel, each moving from a general to a detailed level of specification.]
The classical approach feeds the output of the requirements analysis activity into the architectural design, and the requirements are one of the
architect’s primary inputs. But rather than being a simple one-way relationship, we
would suggest that it is better to aim for a more intertwined relationship in the spirit
of Twin Peaks, but developing this theme to the point where architecture frames,
constrains and inspires the requirements as both are being developed. So we need to
consider how this process works in more detail.
Bashar Nuseibeh's "Twin Peaks" model, depicted in Fig. 19.3, shows how the
requirements definition and architectural design activities can be intertwined so
that the two are developed in parallel. This allows the requirements to inform the
architecture as they are gathered and the architecture to guide the requirements
elicitation process as it is developed. The process of requirements analysis informing
architectural design is widely accepted and well understood, as the requirements are a
primary input to the architectural design process and an important part of architec-
tural design is making sure that the requirements can be met. What is of interest to us
here is how the architectural design process influences the requirements-gathering
activity and we have found that there are three main ways in which this influence
manifests itself.
Starting with the simplest case, we can consider the situation where the architecture
frames one or more requirements. This can be considered to be the classical case,
where the requirements are identified and defined by the requirements analysis
process and then addressed by the architecture. However, when the two are being
developed in parallel then this has the advantage that the architecture provides
context and boundaries for the requirements during their development rather than
waiting for them to be completed. Such context provides the requirements analyst
with insights into the difficulty, risk and cost of implementing the requirements and
so helps them to balance these factors against the likely benefits that implementing
the requirement would provide. If this sort of contextual information is not provided
when eliciting requirements for a system then there is the distinct danger that “blue
sky” requirements are specified without any reference to the difficulty of providing
them. When the architecture is developed in parallel with the requirements, the architect can challenge requirements that would be expensive to implement and, where such requirements turn out to be of high value, consider early modifications or extensions to the system's architecture that allow them to be achieved at lower cost or risk.
For example, while a “surface” style user interface might well allow new and
innovative functions to be specified for a system, such devices are relatively imma-
ture, complicated to support, difficult to deploy and expensive to buy, so it would be
reasonable for any requirements that require such interfaces to be challenged on the
grounds of cost effectiveness and technical risk. The architecture doesn’t prevent this
requirement from being met, but investigating its implementation feasibility in
parallel with defining the requirement allows its true costs and risks to be understood.
In other situations, the architect may realize that a requirement is likely to be very expensive, risky or time-consuming to implement using
any credible architecture that they can identify. In such cases, we say that the
architecture constrains the requirements, forcing the requirements analyst to focus
on addressing the underlying business drivers in a more practical way.
To take an extreme example, while it is certainly true that instant visibility of
totally consistent information across the globe would provide a new set of capabilities
for many systems, it is not possible to achieve this using today’s information systems
technology. It is therefore important that a requirements analyst respects this con-
straint and specifies a set of requirements that do not require such a facility in order to
operate. In this case, understanding the implementation possibilities while the
requirements are being developed allows a requirement to be highlighted as impossi-
ble to meet with a credible architecture, so allowing it to be removed or reworked
early in the process.
Finally, there are those situations where the architectural design process actually
inspires new aspects of the emerging requirements, or “the solution drives the
problem” as David Garlan observed [5]. However while Garlan was commenting
on this being possible in the case of product families (where the possible solutions
are already understood from the existing architecture of the product family),
we have also seen this happen when designing new systems from scratch. As
the architecture is developed, both from the requirements and the underlying
business drivers, it is often the case that architectural mechanisms need to be
introduced which can have many potential uses and could support many types
of system function. While they have been introduced to meet one requirement,
there is often no reason why they cannot then also be used to support another,
which perhaps had not been considered by the requirements analyst or the
system’s users.
An example of this is an architectural design decision to deliver the user
interface of the system via a web browser, which might be motivated by business
drivers around geographical location, ease of access or low administrative overhead
for the client devices. However, once this decision has been made, it opens up
a number of new possibilities including user interface layer integration with other
systems (e.g. via portals and mash ups), the delivery of the interface onto a much
wider variety of devices than was previously envisaged (e.g. home computers
as well as organisational ones) and accessing the interface from a wider variety
of locations (e.g. Internet cafes when employees are travelling as well as office and
home based locations). These possibilities can be fed back into the requirements
process and can inspire new and exciting applications of the system. This effec-
tively “creates” new requirements by extending the system’s possible usage into
new areas.
So, as can be seen, there is great potential for a rich and fruitful set of interactions
between requirements analysis and architectural design, leading to a lot of design
synergy, if the two can be performed in parallel, based on a set of underlying
business principles.
In practice, achieving these valuable interactions between requirements and
architectural design means that the requirements analysts and the architects must
work closely together in order to make sure that each has good visibility and
understanding of the other’s work and that there is a constant flow of information
and ideas between them.
As we said in the previous section, it is also important that the project
manager is involved in this process. While we do not have space here to discuss
the interaction with the project manager in detail, it is important to remember
that the project manager is ultimately responsible for the cost, risk and schedule
for the project. It is easy for the interplay between requirements and architecture
to suggest many possibilities that the current project timescales, budget and risk
appetite do not allow for, so it is important that the project manager is involved
in order to ensure that sound prioritisation is used to decide what is achievable
at the current time, and what needs to be recorded and deferred for future
consideration.
A major clothing retailer was experiencing problems with stock accuracy of size-
complex items in its stores, leading to lost sales and a negative customer perception.
A size-complex item such as a men's suit is an expensive product which is sold in many
different size permutations. A store will typically only have a small stock of each size
permutation (for example, 44-Regular) on display or in its stockroom, since larger
stock levels take up valuable store space and drive up costs in the supply chain.
Manual counting of size-complex items is laborious and error-prone. Even a small counting error can lead to a critical stock inaccuracy, where the store
believes it has some items of a particular size in stock but in fact has none. Critical
inaccuracies lead to lost sales when customers cannot find the right size of an item
they want to buy.
According to inventory management systems, the retailer’s stock availability
was around 85% for size-complex lines (that is, only 15% were sold out at any one
time). However, stock sampling indicated that real availability was as low as 45%
for some lines, and that critical inaccuracy (where the line is sold out but the stock
management system reports that there is stock available in store) was running as
high as 15%. This was costing millions of pounds in lost sales, and also driving
customer dissatisfaction up and customer conversion down (so customers were
leaving stores without buying anything).
The goal of the project was to drive a 3–5% upturn in sales of size-complex lines by
replacing error-prone and time-consuming manual stock counting with a more
accurate and efficient automated system. By reducing the time taken to do a stock
count from hours to a few minutes, the retailer expected to:
• Increase the accuracy of the stock count;
• Reduce the level of critical inaccuracy to near zero;
• Drive more effective replenishment;
• Provide timely and accurate information to head office management.
The new system was subject to some significant constraints because of the environment in which it was to be deployed and used.
• The in-store equipment had to be simple to use by relatively unskilled staff with
only brief training.
• The in-store equipment had to be robust and highly reliable. The cost of repair or
replacement of units in stores was high and would eat away at much of the
expected profits.
• The system had to be compatible with the infrastructure currently installed in
stores. This was highly standardised for all stores and comprised: a store server
running a fairly old but very stable version of Microsoft Windows; a wireless
network, with some bandwidth and signal availability constraints in older stores
because of their physical layout; and a low-bandwidth private stores WAN
carrying mainly HTTP and IBM MQ traffic.
• The system had to be compatible with the infrastructure and systems which ran
in Head Office and in partner organisations.
• The solution had to minimise the impact on the existing distribution channels
and minimise any changes that needed to be made by suppliers and distributors.
• The solution had to minimise any increase in material or production costs.
Some further constraints emerged during the early stages of the project as
described below.
Further investigation revealed that there were two types of RFID tag available for
use, read-only (write once) and read-write (write many times). Read-write tags
[Fig. 19.5: the read-only tag solution. At MANUFACTURE, tagged garments are scanned by a portable reader and serial-to-UPC mappings written to the mapping table; at STOCK COUNTING, tag serials read in store are resolved against the mappings and counts passed to the central stock management system, which feeds the operational and information systems.]
would be required for the solution above, since the UPC would need to be written to
the tag when it was attached to the garment, rather than when the RFID tag was
manufactured. However, read-write tags were significantly more expensive and less
reliable, so this approach was ruled out.
Ruling out read-write tags was a fairly significant change of direction, and was
led primarily by cost and architectural concerns. However, since it had a significant
impact on the production and logistics processes, the decision (which was led
by architects) required the participation of a fairly wide range of business and IT
stakeholders.
Since each RFID tag has a world-unique serial number, a second model was
produced in which the serial number of a read-only tag would be used to derive the
UPC of the garment. Once the tag was physically attached to the garment, the
mapping between the tag’s serial number and the garment’s UPC would be written
to a mapping table in a database in the retailer's data center (Fig. 19.5).
There were again some significant implications to this approach, which required
the architects to lead various discussions with store leaders, garment manufacturers,
logistics partners and technology vendors. For example, it was necessary to develop
a special scanning device for use by manufacturers. This would scan the RFID
serial number using an RFID scanner, capture the garment’s UPC from its label
using a barcode scanner, and transmit the data reliably to the mapping system at the
retailer’s data center. Since manufacturers were often located in the Far East or
other distant locations, the device had to be simple, reliable and resilient to network
connectivity failures.
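As an illustration of that resilience requirement, the store-and-forward sketch below shows the kind of logic such a device might use; it is our own illustration with assumed names, and the transmission call is a placeholder rather than the real protocol.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Illustrative sketch only: a store-and-forward queue so that captured
    // serial-to-UPC mappings survive network connectivity failures.
    public final class MappingUploader {

        record Mapping(String rfidSerial, String upc) {}

        private final Deque<Mapping> pending = new ArrayDeque<>();

        // Called after the device reads a garment's RFID serial and barcode UPC.
        public void scanned(String rfidSerial, String upc) {
            pending.addLast(new Mapping(rfidSerial, upc)); // a real device would persist this
            flush();
        }

        // Drain the queue in order; stop and retry later on the first failed send.
        private void flush() {
            while (!pending.isEmpty()) {
                if (!trySend(pending.peekFirst())) {
                    return; // network down: keep the record for the next attempt
                }
                pending.removeFirst();
            }
        }

        // Placeholder for acknowledged transmission to the retailer's mapping system.
        private boolean trySend(Mapping m) {
            return true;
        }

        public static void main(String[] args) {
            MappingUploader uploader = new MappingUploader();
            uploader.scanned("TAG-0001", "UPC-44R"); // queued, then forwarded when possible
        }
    }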
This iteration illustrated the constraining relationship between architecture and
requirements. The immaturity of the read-write RFID technology, and the potential
cost implications of this approach led to a solution that was more complex, and
imposed some significant new requirements on garment manufacturers, but would
be significantly more reliable and cheaper to operate.
It was initially planned to derive the UPC of the counted garment at the time that the
tag serial numbers were captured by the in-store reader. However, the reader was a
relatively low-power device, and did not have the processing or storage capacity to
do this. An application was therefore required to run on the store server, which
maintained a copy of the mapping data and performed the required collation before
the counts were sent off (Fig. 19.6).
This iteration also illustrated the constraining relationship between architecture
and requirements.
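A minimal sketch may make the collation step concrete; the names and types below are ours, not the retailer's. Tag serials read on the shop floor are resolved to UPCs against the store server's copy of the mapping table and aggregated into per-UPC counts before being sent on:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative sketch only: the store-server application resolves scanned tag
    // serials to UPCs via its local copy of the mapping table and accumulates
    // per-UPC counts before they go to the central stock management system.
    public final class StockCountCollator {

        private final Map<String, String> serialToUpc; // local copy of the mapping table

        public StockCountCollator(Map<String, String> serialToUpc) {
            this.serialToUpc = serialToUpc;
        }

        public Map<String, Integer> collate(List<String> scannedSerials) {
            Map<String, Integer> countsByUpc = new HashMap<>();
            for (String serial : scannedSerials) {
                String upc = serialToUpc.get(serial);
                if (upc != null) { // tags with no mapping (e.g. foreign stock) are skipped
                    countsByUpc.merge(upc, 1, Integer::sum);
                }
            }
            return countsByUpc;
        }

        public static void main(String[] args) {
            Map<String, String> mapping = Map.of("TAG-0001", "UPC-44R", "TAG-0002", "UPC-44R");
            System.out.println(new StockCountCollator(mapping)
                    .collate(List.of("TAG-0001", "TAG-0002", "TAG-9999")));
            // prints {UPC-44R=2}; TAG-9999 has no mapping and is ignored
        }
    }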
The next consideration was product returns, an important value proposition for this
retailer. If a customer were to return a product for resale, then any existing tag
[Fig. 19.6: as Fig. 19.5, but with a store server between the portable reader and the central stock management system; the store server collates the captured tag IDs against a copy of the mapping data before the counts are sent on.]
[Fig. 19.7: as Fig. 19.6, with a dedicated tag reader for RETURNS added alongside MANUFACTURE and STOCK COUNTING, feeding updated mappings into the mapping table.]
would need to be removed, since it might have been damaged, a new tag attached,
and the mapping table updated before the item was returned to the shop floor. This
required a special tag reader on the shop floor, and also at the retailer’s distribution
centers.
This led to the third major iteration of the solution architecture as shown in
Fig. 19.7.
This iteration illustrated the inspiring relationship between architecture and
requirements. It was primarily the consideration of the architecture that prompted
the addition of specific returns-processing capabilities to the solution, especially the
provision of the specialized tag readers for this purpose.
Discussions were also held with the team that managed the stock system. It already
had the capability to enter stock corrections through a data entry screen, but an
automated interface would need to be added and it was necessary to confirm that the
system could deal with the expected volume of updates once the system was rolled
out to all stores. This became an architectural and a scheduling dependency that was
managed through discussions between the architects and project managers in both
teams.
After surveying the marketplace it became clear that the reader would have to be
custom-built. Manufacture was handed off to a third party but software architecture
considerations drove aspects of the reader design. It needed to be portable, with its
own battery, and had to shut down gracefully, without losing data, if the battery ran
out. It also had to present a simple touch-screen user interface.
Another constraint which emerged in the early stages of the project was around
customer privacy. The retailer was very keen to protect its brand and its reputation,
and the use of RFID to tag clothing was becoming controversial. In particular there
was a concern amongst privacy groups that retailers would be able to scan clothes
when a customer entered a store, and use the information to identify the customer
and track their movements.
To allay any customer concerns, the tag was designed so that it could be removed
from the garment and discarded after purchase. The retailer also met with the
privacy groups to explain that their fears were unfounded, but needed to be careful
not to reveal too much detail of the solution before its launch. Architects were
involved in the technical aspects of this and also supported the retailer’s Marketing
Department who produced a leaflet for customers to explain the tags.
The system was very successful. The initial pilot showed a consistent uplift in sales
for counted lines, which exceeded the project goals. It was popular with staff and
business users, and there was no customer resistance to the tags since they could be
removed once the item had been paid for.
There were some further lessons from the pilot that were incorporated into the
solution. For example, the readers proved to have a significantly larger operating
range than expected and care needed to be taken not to count stock that was in the
stock room rather than on the shop floor.
There are some weaknesses to this approach, however. There will be a significant
amount of uncertainty and change in the early stages of the lifecycle, and if ongoing
changes to the requirements or architecture are not communicated to everyone
concerned, then there is a significant risk that the wrong solution will be developed.
Also, many stakeholders are uncomfortable with this level of uncertainty. Users
may insist that the requirements are correct as initially specified, and object to
changes they view as being IT-driven rather than user-driven. Developers, on the
other hand, may struggle to design a system whose requirements are fluid or
unclear.
Finally, an iterative approach can be viewed by senior management as extending
the project duration. Explaining that this approach is likely to lead to an earlier
successful delivery can be difficult.
All of these problems require careful management, oversight and discipline on
the part of the project manager, the architect and the requirements analyst.
• Get Involved Early – engage with the requirements work from the very start of the project, rather than waiting for a completed specification; find the requirements analysts and work together with them in order to develop the requirements and the architecture simultaneously with reference to each other.
• Work Collaboratively – as well as working in parallel and referencing each
other’s work, this approach needs a collaborative mindset on the parts of
the requirements analyst and the architect, so aim to establish this early and
try to work in this way. Of course, “it takes two to tango” so you may not always
be successful if the requirements analyst is unused to working in this way, but in
many cases if the architect starts to work in an open and collaborative way,
others are happy to do so too.
• Challenge Where Needed – as requirements start to emerge, do not be afraid
to challenge them if you feel that they will be prohibitively expensive or risky to
implement, or you spot fundamental incompatibilities between different sets of
requirements that will cause severe implementation difficulties. It is exactly this
sort of early feedback that is one of the most valuable outputs of this style of
working.
• Understand Costs and Risks – during the early stages of a large project you are in
a unique position to understand the costs and risks of implementing proposed
requirements, as it is unlikely that a project manager or a requirements analyst
will have a deep knowledge of the design possibilities for the system. You
should work with the project manager to understand the costs and risks inherent
in the emerging requirements, and then explain them to the other decision
makers on the project to allow more informed decisions to be made.
• Look for Opportunities – the other dimension that you can bring to the project in
its early stages is an understanding of opportunities provided by each of the
candidate architectural designs for the system. Your slightly different perspec-
tive on the business drivers, away from their purely functional implications,
allows a broader view that often results in an architecture that has capabilities
within it that can be used in many different ways. By pointing these capabilities
out to the requirements analyst, you may well inspire valuable new system
capabilities, even if they cannot be immediately implemented.
In short, our experience suggests that rather than accepting a set of completed
system requirements, architects need to get involved in projects early and work positively and collaboratively with the requirements analysts and project managers
to shape the project in order to increase its chances of successful delivery, while
making the most of the potential that its underlying architecture can offer.
As already noted, many other people working in the software architecture field have
explored the close relationship between requirements and software architecture.
The SEI’s Architecture Trade-off Analysis Method – or ATAM – is a structured
approach to assessing how well a system is likely to meet its requirements, based on
the characteristics of its architecture [10]. In the approach, a set of key scenarios are
identified and analysed to understand the risks and trade-offs inherent in the design
and how the system will meet its requirements. When applied early in the lifecycle,
ATAM can provide feedback into the requirements process.
A related SEI method is the Quality Attribute Workshop – or QAW – which is a
method for identifying the critical quality attributes of a system (such as performance,
security, availability and so on) from the business goals of the acquiring organisation
[1]. The focus of QAW is the identification of critical requirements rather than how the system will meet them, and so it is often used as a precursor to ATAM reviews.
Global Analysis [16] is another technique used to relate requirements and
software architecture by structuring the analysis of a range of factors that influence
the form of software architectures (including organisational constraints, technical
constraints and product requirements). The aim of the process is to identify a set
of system-wide strategies that guide the software design to meet the constraints that
it faces, so helping to bridge the gap between requirements and architectural design.
As well as architecture centric approaches, there have also been a number of
novel attempts to relate the problem domain and the solution domain from the
requirements engineering community.
One example is Michael Jackson’s Problem Frames approach [9], which
encourages the requirements analyst to consider the different domains of interest
within the overall problem domain, how these domains are inter-related via shared
phenomena and how they affect the system being built. We view techniques like
problem frames as being very complementary to the ideas we have developed here, as they encourage the requirements analyst to delve deeply into the problem domain and uncover the key requirements that the architecture will need to meet.
Another technique from the requirements engineering community is the KAOS
method, developed at the universities of Oregon and Louvain [13]. Like Problem
Frames, KAOS encourages the requirements analyst to understand all of the
problem domain, not just the part that interfaces with the system being built, and
shows how to link requirements back to business goals. Again we view this as
a complementary approach, as the use of a method like KAOS is likely to help the requirements analyst and architect align their work more quickly than would
otherwise be the case, as well as understand the problem domain more deeply.
19.10 Summary
In this chapter we have explained how our experience has led us to realise that
a much richer relationship can exist between requirements gathering and architec-
tural design than their classical relationship would suggest. Rather than passively
accepting requirements into the design process, much better systems are created
when the requirements analyst and architect work together to allow architecture
to constrain the requirements to an achievable set of possibilities, frame the
requirements making their implications clearer, and inspire new requirements
from the capabilities of the system’s architecture.
The requirements that drive the decision towards building a distributed system
architecture are usually of a non-functional nature. Scalability, openness, heteroge-
neity, and fault-tolerance are just examples. The current trend is to build distributed
systems architectures with middleware technologies such as Java 2 Enterprise Edition
(J2EE) [1] and the Common Object Request Broker Architecture (CORBA) [2].
COTS middleware simplifies the construction of distributed systems by providing high-level primitives, which shield the application engineers from distribution complexities, manage system resources, and implement low-level details, such as concurrency control, transaction management, and network communication. These primitives are often responsible for realizing many of the non-functional requirements in the induced architecture of the software system. Despite the fact
that architectures and middleware address different phases of software develop-
ment, the usage of middleware can influence the architecture of the system being
developed. Conversely, specific architectural choices constrain the selection of the
underlying middleware [3]. Once a particular middleware system has been chosen
for a software architecture, it is extremely expensive to revert that choice and adopt
a different middleware or a different architecture. The choice is influenced by the
non-functional requirements. Unfortunately, these requirements tend to be unstable: they evolve over time and threaten the stability of the architecture. Non-functional requirements often change with the setting in which the system is embedded, for example when new hardware or operating system platforms are added as a result of a merger, or when scalability requirements change due to a sudden increase in users, as is the case with successful e-commerce systems.
Adopting a flexible COTS middleware-induced architecture that is capable of accommodating future changes in non-functional requirements, while leaving the architecture intact, is important. Unfortunately, such viability and sustainability of the choice comes at a price. This is often a matter of how valuable this flexibility will be in the future relative to the likely changes in non-functional requirements. When such flexibility ceases to add value and becomes a liability, watch out: the "beauty" is becoming wild and intolerable. Alternatively, a middleware with limited flexibility may still realize the change through "cosmetic" solutions of an ad-hoc or proprietary nature, such as modifying part of the middleware; extending the middleware primitives; implementing additional interfaces; and so forth. These solutions could seem costly, problematic, and unacceptable; yet they may turn out to be more cost-effective in the long run.
As a motivating example, consider a distributed software architecture that is to
be used for providing the back-end services of an organization. This architecture
will be built on middleware. Depending on which COTS middleware is chosen,
different architectures may be induced [3]. These architectures will have differences
in how well the system is going to cope with changes. For example, a CORBA-based
solution might meet the functional requirements of a system in the same way as
a distributed component-based solution that is based on a J2EE application server.
A notable difference between these two architectures will be that increasing scalabi-
lity demands might be easily accommodated in the J2EE architecture because J2EE
primitives for replication of Enterprise Java Beans can be used, while the CORBA-
based architecture may not easily scale. The choice is not straightforward as the
J2EE-based infrastructures usually incur significant upfront license costs. Thus, when
selecting an architecture, the question arises whether an organization wants to invest in a J2EE application server and its implementation within the organization,
or whether it would be better off implementing a CORBA solution. Answering
this question without taking into account the flexibility that the J2EE solution
provides and how valuable this flexibility will be in the future might lead to making
the wrong choice.
The chapter is organised as follows: Sect. 20.2 reports on a case study, which
demonstrates how economics-driven approaches can inform the selection of more
stable middleware-induced architectures and discusses observations resulting from its application. Section 20.3 discusses closely related work. Section 20.4 concludes.
The case study demonstrates a novel application of real options theory and its
fitness for informing the selection of a more “stable” middleware-induced architec-
ture. The observations derived from conducting the case aim at advancing our understanding of the architectural stability problem, when addressed in relation to
middleware. The case simulates the selection process inspired by economics-driven
approaches and highlights possible insights that could derive from the application
of real options theory to the selection problem. In particular, the case study extends
the confidence in the following specific claims: (1) the uncertainty, attributed to the
likelihood of change(s), makes real options theory superior to other valuation
techniques, which fall short in dealing with the value of architectural flexibility
under uncertainty; (2) the flexibility of a middleware-induced architecture in face
of likely changes in requirements creates value in the form of real options; (3) the problem of finding a potentially stable middleware-induced architecture
requires finding a solution that maximizes the yield in the added value, relative to
some likely future changes in requirements. If we assume that the added value is
attributed to flexibility, the problem becomes maximizing the yield in the embed-
ded or adapted flexibility provided by the selected middleware-induced architecture
relative to these changes; and (4) the decision of selecting a potentially
stable architecture has to maximize the value added relative to some valuation
points of view: we take savings in future maintainability as one dimension of value
to illustrate the approach.
We note that case studies have been extensively used to empirically assess
software engineering approaches. When performed in real situations, case studies
provide practical and empirical evidence that a method is appropriate to solve
a particular class of problems. According to Dawson et al. [4], conducting con-
trolled and repeatable experiments in software engineering is quite difficult, if not
impossible to accomplish. This is mainly because the way software engineering
methods are applied varies across different contexts and involves variables that
cannot be fully controlled. We note that the primary aim of this case study is to
show how economics-driven approaches can provide an effective alternative to
inform the selection of middleware-induced architectures. Under no circumstances should the results be regarded as a definitive distinction of the merit of one technology over the other, as we have only used "flavors" of CORBA and J2EE.
20.2.2 Setting
Fig. 20.1 The architecture of the Duke's Bank (a web client accessing an application whose Account, Customer and Transaction components are backed by DB servers)
Fig. 20.2 The goal-oriented refinement for achieving scalability through replication (scalability refines into load balancing and fault tolerance, with subgoals including server and client transparency, support for dynamic operations, dynamic load distribution, system dependability, administrative tasks, minimal overhead, load metrics and balancing, interoperability and portability, logging and recovery, fault management, and replication management)
induced by a particular middleware. In more abstract terms, the guidance was given
through the knowledge of the domain; vendor’s specification [1, 2]; related design
and implementation experience, mainly that of Othman et al. [8, 9]. We note that
different architectural mechanisms may operationalise the scalability goal. As an operationalisation alternative, we use replication as a way of achieving scalability. This is because both CORBA and J2EE provide the primitives or guidelines for scaling a software system using replication, which makes the comparison between the two versions feasible. In particular, the Object Management Group's CORBA specification [2] defines fault tolerance and load balancing support which, when combined, provide the core capability for implementing scalability through replication. Similarly, J2EE provides the primitives for scaling the software system through replication. Hence, the refinement and its corresponding operationalisation are guided by the solution domain (i.e., the middleware). The refinement of the scalability goal is depicted in Fig. 20.2. We then estimate the structural
impact and the SLOC to be added upon achieving scalability on both versions.
The flexibility of each version relative to the change is then valued, following ArchOptions, as a portfolio of real options, where Ie denotes the investment in the architecture, xiV the expected value of the architectural potential relative to future change i, and Cei the estimated cost of exercising that change:

Ie + Σⁿᵢ₌₀ E[max(xiV − Cei, 0)]     (20.1)
The savings, however, are uncertain and differ with the number of hosts, as the
replicas may need to be run on different hosts. Such uncertainty makes it even more
appealing to use "options thinking". The valuation using ArchOptions is flexible enough to result in a comprehensive solution that incorporates multiple valuation techniques,
some with subjective estimates, and others based on market data, when available.
We term the problem of how to guide the estimation in this setting the multiple perspectives valuation problem. To introduce discipline into this setting and capture the value from different perspectives, we suggested valuation points of view (i.e., market or subjective estimates) as a solution. The framework
is comprehensive enough to account for the economic ramifications of the change,
its global impact on the architecture, and on other architectural qualities. The
solution aims to promote flexibility through incorporating both subjective estimates
and/or explicit market value, when available. It is worth noting that for this case
we only report on the structural maintainability value point of view. Nevertheless,
the valuation could be extended to include other valuation dimensions.
Calculating the volatility (σ). Volatility is a quantitative expression of risk.
Volatility is often measured by standard deviation of the rate of return on an asset
price S (i.e., xiV) over time. Unlike with financial options, in real options the
volatility of the underlying asset’s value cannot be observed and must be estimated.
During the evaluation of architectural stability, it is anticipated and even expected
that stakeholders might undervalue or overvalue the architectural potential xiV relative to the change in requirement(s). In other words, stakeholders tend to be uncertain
about such value. For example, returning to the motivating example of Sect. 20.1,
suppose that the value of the architectural potential of inducing an architecture
with J2EE and not CORBA (or perhaps vice versa) take the form of relative savings
in development and configuration effort, if the future change in scalability needs to
be exercised on the induced structure: estimating such savings may vary from one
architect to another within the firm. It differs with the architect’s experience, the
novelty of the situation; consequently, it could be overvalued or undervalued.
The variation in the future savings, hence, determines the “cone of uncertainty” in
the future value of the architectural potential for embarking on a J2EE-induced
architecture relative to the CORBA one. Thus, it is reasonable to consider the
uncertainty of the architectural potential to correspond to the volatility of the
stock price. In short, the volatility σ tends to provide a measure of how uncertain the stakeholders are about the value of the architectural potential relative to change; it tends to measure the fluctuation in the value of the estimated xiV. We take the percentage of the standard deviation of the three xiV estimates – the optimistic, likely, and pessimistic values – to calculate σ.
The risk-free interest rate (r) and exercise time (t). The risk-free rate is a theoretical interest rate at which an investment may earn interest without incurring any risk. An increase in the risk-free interest rate leads to an increase in the value of the option. Finding the correspondence of this parameter is not straightforward, for the concept of interest in the architectural context does not hold strongly (as it does in the financial world)
and is situation dependent. In our analogy, we set the risk-free interest rate to zero
assuming that the value of the architectural potential is not affected by factors that
could lead to either earning or depreciation in interest. That is, the value of
architectural potential today is that of the time of exercising the flexibility option.
However, we note that it is still possible for the analyst to account for this value,
when applicable. For example, if the architectural platform is correlated in a way
with the market, then the value of the architectural potential may increase or
decrease with the market performance of the said platform. We set the exercise
time to 1 year, assuming that the Duke’s Bank needs to accommodate the change in
1 year's time. We set the risk-free interest rate to zero, assuming the value of money
today is the same as that of 1 year’s time.
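The parameters just described – the volatility σ, the exercise time t and the risk-free rate r – are those of the Black-Scholes option pricing model [27]. Assuming a Black-Scholes valuation of each option (our inference from the chapter; the sketch below is illustrative and not the authors' tool), a few lines of Java reproduce the optimistic "overall" row of Table 20.2:

    // Illustrative sketch only: Black-Scholes valuation [27] of one flexibility
    // option, with S = xiV, X = Cei, r = 0 and t = 1 as set out above.
    public final class ArchOptionValue {

        // Abramowitz-Stegun polynomial approximation of the standard normal CDF.
        static double normCdf(double x) {
            double t = 1.0 / (1.0 + 0.2316419 * Math.abs(x));
            double d = Math.exp(-x * x / 2.0) / Math.sqrt(2.0 * Math.PI);
            double p = d * t * (0.319381530 + t * (-0.356563782 + t * (1.781477937
                    + t * (-1.821255978 + t * 1.330274429))));
            return x >= 0 ? 1.0 - p : p;
        }

        // Black-Scholes value of a call option: the expected payoff E[max(S - X, 0)].
        static double callValue(double s, double x, double r, double sigma, double t) {
            double d1 = (Math.log(s / x) + (r + sigma * sigma / 2.0) * t)
                    / (sigma * Math.sqrt(t));
            double d2 = d1 - sigma * Math.sqrt(t);
            return s * normCdf(d1) - x * Math.exp(-r * t) * normCdf(d2);
        }

        public static void main(String[] args) {
            // Optimistic "overall" estimates: xiV = 96,450, Cei = 1,558, sigma = 22.7%.
            System.out.printf("option value = %.0f%n",
                    callValue(96_450, 1_558, 0.0, 0.227, 1.0));
        }
    }

Because xiV is so far above Cei, the option is deep in the money and its value is essentially xiV − Cei = 96,450 − 1,558 = 94,892, matching Table 20.2.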
Table 20.2 The options in ($) on S1 relative to S0 for one host, with S1 license cost (Clicesh) = 0; for each group σ = 22.7 and t = 1

                Estimate      Cei     xiVS1/S0   Options
Overall         Optimistic    1,558   96,450     94,892
                Likely        1,948   120,563    118,615
                Pessimistic   2,435   150,704    148,269
Development     Optimistic    0       96,481     96,481
                Likely        0       120,602    120,602
                Pessimistic   0       150,753    150,753
Configuration   Optimistic    1,558   31         0
                Likely        1,948   39         0
                Pessimistic   2,435   49         0
Deployment      Optimistic    0       0          0
                Likely        0       0          0
                Pessimistic   0       0          0
Thus, S1 induced by M1 is likely to add more value in the form of options
relative to the change, when compared to S0. Note that, though S1 is flexible relative to the scalability change, this does not necessarily mean that it is flexible with respect to other changes. Obviously, J2EE does provide the primitives for scaling the software system, which results in making the architecture of the software system more flexible in accommodating the change in scalability, as compared to the CORBA version. Calculating the options of S0 relative to S1, S0 is said to be out of the money for this change: the CORBA version has not added value relative to J2EE, as the cost of implementing the change was too significant for the options to pay off.
Scenario 2. We use WebLogic server [http://www.bea.com/] as M1 with an
average upfront payable license cost Clicesh ¼ $25,000/host. As an upfront license
fee is incurred, increasing the number of hosts may carry unnecessary expenditures
that could be avoided, if we adopt M0 instead. However, M0 does also incur costs
to scale the system, due to the development of the load balancing and the fault
tolerance services. Such a cost, however, may be "diluted" as the number of hosts
increases. The cost will be distributed across the hosts and incurred once, as the
developed services can be reused across other hosts. An additional configuration
and deployment cost materializes per host and sums up to Ce (2), when an additional
host is needed to run a replica.
We calculate xiVS0/S1 using (3) and then the options of S0 relative to S1. We
adjust the options by subtracting the upfront expenditure of developing both
services on M0, as reported in Table 20.3. The adjusted options reveal situations
in which S0 is likely to add value relative to S1, when the upfront cost is considered.
These results may provide us with insights on the cost effectiveness of implementing
fault tolerance and load balancing support to scale the software system relative to
S1, where a licensing cost is incurred per host.
Therefore, a question of interest is: when is it cost effective to use M0 instead
of M1? When does the flexibility of M1 cease to create value relative to M0? We
assume that for any k hosts, S0 and S1 are said to support UkS0 and UkS1 concurrent
users, respectively, with UkS0 equal to or different from UkS1. For the non-adjusted
options results of Table 20.3, if we benchmark these option values against the
cost of developing the load balancing and fault tolerance services (i.e., the upfront
Table 20.3 Options in ($) on S0 relative to S1 with (Clicesh) = $25,000, σ = 22.7, and pessimistic Cei

h   Cei      xiVS0/S1   Options   Adjusted options   Conc. users
1   2,386    25,049     2,343     0                  U1S0 vs U1S1
2   4,772    50,049     4,772     0                  U2S0 vs U2S1
3   7,158    75,049     67,891    0                  U3S0 vs U3S1
4   9,544    100,049    90,505    0                  U4S0 vs U4S1
5   11,930   125,049    113,119   0                  U5S0 vs U5S1
6   14,316   150,049    135,733   0                  U6S0 vs U6S1
7   16,702   175,049    158,347   7,643              U7S0 vs U7S1
cost), we can see that the payoff from developing these services is far from breaking even for fewer than 7 hosts. Once we adjust the options to take care of the upfront cost of investing to implement both services, the adjusted options for S0 relative to S1 report values in the money for the case of seven or more hosts, as shown in Table 20.3. For hosts ≥ 7, M0 appears to be a better choice under the condition that UnS0 ≥ UnS1. This is due to the fact that the expenditure on M1 licenses increases with the number of hosts; hence, the savings in adopting M1 cease to exist. For hosts < 7, M1 has better potential and appears to be more cost-effective under the condition that UnS1 ≥ UnS0.
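To make the breakeven concrete – the arithmetic below is our reading of Table 20.3, not a calculation reported by the authors – the adjusted options simply net off the one-off cost Cdev of developing the load balancing and fault tolerance services:

    adjusted options(h) = max(options(h) − Cdev, 0)

From the h = 7 row, Cdev ≈ 158,347 − 7,643 = 150,704; and since options(h) never exceeds 135,733 for h ≤ 6, the adjusted options stay at zero until the seventh host, which is exactly where M0 starts to pay off.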
20.3 Discussion
The change impact analysis has shown that the architectural structure of S1 is left
intact when the scalability change needs to be accommodated. However, the
structure of S0 has undergone some changes, mostly on the architectural infrastruc-
ture level to accommodate the scalability requirements. From a value-based per-
spective, the search for a potentially stable architecture requires finding an
architecture that maximizes the yield in the added value, relative to some future
changes in requirements. As we are assuming that the added value is attributed to
flexibility, the problem becomes selecting an architecture that maximizes the yield in the embedded or adapted flexibility in a software architecture relative to these changes. Even if we accept that modifying the architecture or the infrastructure is the only solution towards accommodating the change, analyzing the impact of the change and its economics becomes necessary to see how much we are expending to "re-maintain" or "re-achieve" architectural stability relative to the change. Though it might be appealing to intuition that the "intactness" of the structure is the definitive criterion for selecting a "more" stable architecture, practice reveals a different trend; it boils down to the potential added value upon exercising the change.
For the first scenario, the flexibility has yielded a better payoff for S1 than for S0,
while leaving S1 intact. However, the situation and the analysis have differed upon
varying the number of hosts and upon factoring in a license cost for S1. Though S0
has undergone some structural changes to accommodate the change, the case has
shown that it is still acceptable to modify the architecture and to realize added value under the condition that UnS0 ≥ UnS1 for seven or more hosts (Table 20.3, Fig. 20.3). Hence, what matters is the added value upon either embarking on a "more" flexible architecture, or investing to enhance flexibility, which is the case for implementing load balancing and fault tolerance on S0. For the case of WebLogic, though M1 is in principle more flexible, the flexibility comes at a price: it turned out to be a liability rather than a value for seven or more hosts, as compared with JacORB, under the condition that UnS0 ≥ UnS1.
[Fig. 20.3: the adjusted options ($, 0–80,000) on S0 relative to S1 plotted against the number of hosts (1–7).]
The “coupling” between the middleware and the architecture becomes of higher
interest in the case of developing and analyzing software systems for evolution. This is because the solution domain can guide the development and evolution of the software system, provide more pragmatic and deterministic knowledge on the potential success (or failure) of evolution, and consequently assist in understanding
the stability of the software architectures from a pragmatic perspective.
Observation 2. Understanding architectural stability and evolution in relation to
styles.
Following the definition of Di Nitto and Rosenblum [3], a style defines a set of
general rules that describe or constrain the structure of architectures and the way
their components interact. Styles are a mechanism for categorizing architectures
and for defining their common characteristics. Though S1 and S0 have exhibited
similar styles (i.e., three-tier), they have differed in the way they cope with the
change in scalability. The difference was not only due to the architectural style, but also due to the primitives that are built into the middleware to facilitate scaling the software system. The governing factor, hence, appears to be to a large extent dependent on the flexibility of the middleware (e.g., through its built-in primitives)
in supporting the change. The intuition and the preliminary observations, therefore,
suggest that the style by itself is not revealing for the stability of the software
architecture when the non-functional requirements evolve. It is, however, a factor
of the extent to which the middleware primitives can support the change in non-
functional requirements. Interestingly, Sullivan et al. [19] claim that for a system to be implemented in a straightforward manner on top of a middleware, the corresponding architecture has to be compliant with the architectural constraints imposed by the middleware. They support this claim by demonstrating that a style that in principle seems easily implementable using the COM middleware is actually incompatible with it. Following a similar argument, adopting an architectural style that in principle appears to be suitable for realizing the non-functional requirement and supporting its evolution may not be compliant with the middleware
in the first place. And if the architectural style happens to be compliant with the
middleware, there are still uncertainties in the ability of the middleware primitives
to support the change. In fact, the middleware primitives realize much of the
Options-based analysis of modular designs can help a designer to reason about both investment in modularity and how much to spend searching for alternatives.
Architectural evaluation. Existing methods for architectural evaluation have ignored economic considerations, with CBAM [24] being the notable exception. The evaluation decisions using these methods tend to be driven in ways that are not connected to, and usually not optimal for, value creation. Factors such as flexibility, time to market, cost and risk reduction often have higher impacts on value creation. Hence, flexibility is of the essence. In our work, we link flexibility to value, as a way to make the value of stability tangible.
Relating CBAM to our work, the following distinctions can be made: with the motivation to analyse the costs and benefits of architectural strategies, where an architectural strategy is a subset of changes gathered from stakeholders, CBAM does not address stability. Further, CBAM does not tend to capture the long-term and strategic value of the specified strategy. ArchOptions, in contrast, views stability as a strategic architectural quality that adds to the architecture value in the form of growth options. Although CBAM complements ATAM [25] in reasoning about qualities related to change, such as modifiability, it does not supply a rigorous predictive basis for valuing such impact. Plausible improvements of the existing CBAM include the adoption of real options theory to reason about the value of postponing investment decisions. CBAM uses real options theory to calculate the value of the option to defer the investment in an architectural strategy. The delay is based on
cost and benefit information. In the context of the real options theory, CBAM tends
to reason about the option to delay the investment in a specific strategy until more
information becomes available as other strategies are met. ArchOptions, in contrast,
uses real options to value the flexibility provided by the architecture to expand
in the face of evolutionary requirements, henceforth referred to as the options to
expand or growth options.
20.5 Conclusion
Though the reported observations reveal a trend that agrees with intuition, research, and the state of practice, confirming the validity of the observations is still subject to careful further empirical studies. These studies may need to consider other non-functional requirements, their concurrent evolution, and their corresponding change impact on different architectural styles and middleware. As a limitation, we have not considered the change impact of scaling up the software system on other non-functional requirements such as security, availability and reliability. However, we note that the analysis might get complex upon accounting for the impact of the change on other non-functional requirements and their interactions. Note that the change could positively or negatively impact other non-functional requirements; understanding the cost implications is not straightforward and is worth a separate empirical investigation.
gation. In this context, utilizing the NFR framework [26] could be promising to
model the interaction of various non-functional requirements, their corresponding
References
1. Sun Microsystems Inc (2002) Enterprise JavaBeans specification v2.1. Copyright © 2002 Sun Microsystems, Inc.
2. Object Management Group (2000) The common object request broker: architecture and
specification, 24th edn. OMG, Framingham, MA, USA
3. Di Nitto E, Rosenblum D (1999) Exploiting ADLs to specify architectural styles induced by
middleware infrastructures. In: Proceedings of the 21st International conference on software
engineering. IEEE Computer Society Press, Los Angeles, pp 13–22
4. Dawson R, Bones P, Oates B, Brereton P, Azuma M, Jackson M (2003) Empirical
methodologies in software engineering. In: Eleventh annual International workshop on soft-
ware technology and engineering practice, IEEE CS Press, pp 52–58
5. Emmerich W (2000) Software engineering and middleware: a road map. In: Finkelstein A (ed)
Future of software engineering. Limerick, Ireland
6. Dardenne A, van Lamsweerde A, Fickas S (1993) Goal-directed requirements acquisition.
Sci Comput Program 20:3–50
7. Anton A (1996) Goal-based requirements analysis. In: Proceedings of the 2nd IEEE International conference on requirements engineering, Orlando
8. Othman O, O'Ryan C, Schmidt DC (2001) Designing an adaptive CORBA load balancing service using TAO. IEEE Distrib Syst Online 2(4)
9. Othman O, O'Ryan C, Schmidt DC (2001) Strategies for CORBA middleware-based load balancing. IEEE Distrib Syst Online 2(3)
10. Bahsoon R, Emmerich W, Macke J (2005) Using real options to select stable middleware-
induced software architectures. IEE Proc Softw Spec issue relating software requirements
architectures 152(4):153–167, IEE press, ISSN 1462–5970
11. Erdogmus H, Boehm B, Harrison W, Reifer DJ, Sullivan KJ (2002) Software engineering economics: background, current practices, and future directions. In: Proceedings of the 24th
International conference on software engineering. ACM Press, Orlando
12. Erdogmus H (2000) Value of commercial software development under technology risk.
Financier 7(1–4):101–114
13. Schwartz S, Trigeorgis L (2000) Real options and investment under uncertainty: classical
readings and recent contributions. MIT Press, Cambridge
14. Bahsoon R, Emmerich W (2004) Evaluating architectural stability with real options
theory. In: Proceedings of the 20th IEEE International conference on software maintenance.
IEEE CS Press, Chicago
15. Bahsoon R, Emmerich W (2003) ArchOptions: a real options-based model for predicting the
stability of software architecture. In: Proceedings of the Fifth ICSE workshop on economics-
driven software engineering research, Portland
16. Boehm B, Clark B, Horowitz E, Madachy R, Selby R, Westland C (1995) The COCOMO 2.0 software cost estimation model. In: International society of parametric analysts
17. Medvidovic N, Dashofy E, Taylor R (2003) On the role of middleware in architecture-based software development. Int J Softw Eng Knowl Eng 13(4):367–393
18. Rapanotti L, Hall J, Jackson M, Nuseibeh B (2004) Architecture driven problem decomposi-
tion. In: Proceedings of 12th IEEE International requirements engineering conference
(RE’04). IEEE Computer Society Press, Kyoto
19. Sullivan KJ, Socha J, Marchukov M (1997) Using formal methods to reason about architec-
tural standards. In: Proceedings of the 19th International conference on software engineering,
Boston
20. Boehm B, Sullivan KJ (2000) Software economics: a roadmap. In: Finkelstein A (ed) The future of software engineering. ACM Press, Limerick, Ireland
21. Baldwin CY, Clark KB (2001) Design rules – the power of modularity. MIT Press, Cambridge
22. Sullivan KJ, Chalasani P, Jha S, Sazawal V (1999) Software design as an investment activity:
a real options perspective. In: Trigeorgis L (ed) Real options and business strategy:
applications to decision-making. Risk Books, London
23. Sullivan KJ, Griswold W, Cai Y, Hallen B (2001) The structure and value of modularity in software design. In: Proceedings of the ninth ESEC/FSE, Vienna, pp 99–108
24. Asundi J, Kazman R (2001) A foundation for the economic analysis of software architectures. In: Proceedings of the third workshop on economics-driven software engineering research
25. Kazman R, Klein M, Barbacci M, Lipson H, Longstaff T, Carrière SJ (1998) The architecture
tradeoff analysis method. In: Proceedings of fourth International conference on engineering of
complex computer systems (ICECCS ‘98). IEEE CS Press, Monterey, pp 68–78
26. Mylopoulos J, Chung L, Nixon B (1992) Representing and using nonfunctional requirements:
a process-oriented approach. IEEE Trans Software Eng 18(6):483–497
27. Black F, Scholes M (1973) The pricing of options and corporate liabilities. J Polit Econ 81(3):
637–654
28. Emmerich W (2002) Distributed component technologies and their software engineering
implications. In: Proceedings of the 24th International conference on software engineering.
ACM Press, Orlando, pp 537–546
29. Nuseibeh B (2001) Weaving the software development process between requirements and architectures. In: Proceedings of STRAW'01, the first International workshop from software requirements to architectures, Toronto
Chapter 21
Conclusions
First, in spite of the wealth of research and available solutions, industrial needs are still quite unfulfilled. Practitioners typically codify information about requirements in a fuzzy way: notations are informal, and natural language (occasionally with some standardized guidelines) is the most common practice. This “information gap” makes it impossible to precisely relate requirements and design, the latter being much more detailed and formalized. Hence, approaches that assume the availability of precise information do not fit industrial practice. We therefore think that future efforts should strive for ways to relate just the core pieces of information, such as only those requirements and design decisions that are most costly to change and hence need to be monitored, or those that recur most frequently in one's specific industrial domain.
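As a rough sketch of this filtering idea, one might record all requirement-to-decision links but monitor only those that are costly to change or that recur frequently. The names, figures, and thresholds below are hypothetical illustrations.

```python
# Lightweight traceability filter: a minimal sketch of relating only the "core"
# requirement/design-decision links. Names, figures, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Link:
    requirement: str
    decision: str
    cost_to_change: float  # e.g., estimated person-days to revisit the decision
    recurrences: int       # how often this pairing recurs in the domain

def core_links(links: list[Link], cost_threshold: float, recur_threshold: int) -> list[Link]:
    """Keep only the links worth monitoring: costly to change or frequently recurring."""
    return [l for l in links
            if l.cost_to_change >= cost_threshold or l.recurrences >= recur_threshold]

links = [
    Link("Handle 10x peak load", "Adopt message-queue middleware", 120.0, 3),
    Link("Support legacy clients", "Expose a SOAP facade", 15.0, 1),
    Link("Sub-second response time", "Cache at the application tier", 60.0, 5),
]

for l in core_links(links, cost_threshold=50.0, recur_threshold=4):
    print(f"MONITOR: {l.requirement!r} -> {l.decision!r}")
```

Keeping only such high-stakes or recurring links sidesteps the information gap: the few links retained are precisely the ones worth formalizing and tracking.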
Secondly, links between requirements and architecture design decisions are multi-dimensional (one requirement influences multiple decisions, and vice versa), often indirect (one requirement might influence a decision that in turn might reveal a new requirement, and so on), and of a different nature [2]. This complex network of dependencies, well known in both the requirements engineering and software architecture fields, makes the problem addressed by this book difficult to solve in ways that are also applicable in industrial practice. Industrial applicability should hence become a must-have for future research in both the requirements engineering and software architecture fields, as well as when relating software requirements and architecture.
We further observe increasing attention dedicated to the notion and role of 'context' (the environment in which a software system lives). In his paper on the past, present, and future of software architecture, Garlan [4] identified three major trends influencing the way architecture is perceived: the market shifting engineering practices from build to buy; pervasive computing challenging architecture with an environment of diverse and dynamically changing devices; and the Internet transforming software from closed systems to open and dynamically configurable components and services. These trends, among many others, make software contexts ever smarter and increasingly complex, hence posing challenging research questions on the co-evolution of contextual (external) and system (internal) requirements and architectures.
Last but not least, there is one research area that has focused since its inception on the relation between requirements and architecture: enterprise architecture has been evolving for three decades around the premise of aligning business processes with the technical world. Business processes are supported by software systems, which in turn pose constraints and generate new requirements for the former. However, in spite of the great deal of research carried out since the 1970s, business-IT alignment remains a major issue [5]. The current trend is to tackle the problem through governance of the software development process, as a mechanism to guarantee meeting the business goals and mitigating the associated risks through policies, controls, and measures. Governance works at the interface between the structure of the business organization and the structure of the software, and thus includes both requirements and architecture within its scope. Governance is particularly relevant in distributed development environments, which face increased challenges, as requirements and architecture are often produced at different sites.
References
1. Abrahamsson P, Ali Babar M, Kruchten P (2010) Agility and architecture: can they coexist?
IEEE Softw 27(2):16–22
2. van den Berg M, Tang A, Farenhorst R (2009) A constraint-oriented approach to software architecture design. In: Proceedings of the quality software international conference (QSIC 2009). IEEE, Jeju, pp 396–405
3. de Boer RC, van Vliet H (2009) On the similarity between requirements and architecture. J Syst
Softw 82(3):544–550
4. Garlan D (2000) Software architecture: a roadmap. In: Proceedings of the conference on the future of software engineering. ACM, Limerick, Ireland, pp 91–101. doi:10.1145/336512.336537
5. Luftman J, Papp R, Brier T (1999) Enablers and inhibitors of business-IT alignment. Commun
AIS 1(3es):1–32