Ebook - Advances in Software Maintenance Management Technologies and Solutions
Advances in Software
Maintenance Management:
Technologies and Solutions
Macario Polo
Universidad de Castilla-La Mancha, Spain
Mario Piattini
Escuela Superior de Informatica, Spain
Francisco Ruiz
Escuela Superior de Informatica, Spain
Copyright © 2003 by Idea Group Inc. All rights reserved. No part of this book may be reproduced in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher.
Advances in Software
Maintenance Management:
Technologies and Solutions
Table of Contents
Preface .................................................................................................. vi
Macario Polo, Escuela Superior de Informatica, Spain
Mario Piattini, Escuela Superior de Informatica, Spain
Francisco Ruiz, Escuela Superior de Informatica, Spain
Chapter I.
Software Maintenance and Organizational Health and Fitness ............ 1
Ned Chapin, InfoSci Inc., USA
Chapter II.
Problem Management within Corrective Maintenance ....................... 32
Mira Kajko-Mattsson, Stockholm University &
Royal Institute of Technology, Sweden
Chapter III.
The Impact of eXtreme Programming on Maintenance ...................... 75
Fabrizio Fioravanti, Finsystem s.r.l., Italy
Chapter IV.
Patterns in Software Maintenance: Learning from Experience .......... 93
Perdita Stevens, University of Edinburgh, Scotland
Chapter V.
Enhancing Software Maintainability by Unifying and Integrating
Standards ............................................................................................. 114
William C. Chu, Tunghai University, Taiwan
Chih-Hung Chang, Feng Chia University, Taiwan
Chih-Wei Lu, Feng Chia University, Taiwan
Hongji Yang, De Montfort University, England
Hewijin Christine Jiau, National Cheng Kung University, Taiwan
Yeh-Ching Chung, Feng Chia University, Taiwan
Bing Qiao, De Montfort University, England
Chapter VI.
Migrating Legacy System to the Web: A Business Process
Reengineering Oriented Approach ..................................................... 151
Lerina Aversano, University of Sannio, Italy
Gerardo Canfora, University of Sannio, Italy
Andrea De Lucia, University of Sannio, Italy
Chapter VII.
Requirements Risk and Maintainability ............................................ 182
Norman F. Schneidewind, Naval Postgraduate School, USA
Chapter VIII.
Software Maintenance Cost Estimation ............................................. 201
Harry M. Sneed, University of Regensburg, Germany
Chapter IX.
A Methodology for Software Maintenance ........................................ 228
Macario Polo, Escuela Superior de Informatica, Spain
Mario Piattini, Escuela Superior de Informatica, Spain
Francisco Ruiz, Escuela Superior de Informatica, Spain
Chapter X.
Environment for Managing Software Maintenance Projects ............ 255
Francisco Ruiz, Escuela Superior de Informatica, Spain
Félix García, Escuela Superior de Informatica, Spain
Mario Piattini, Escuela Superior de Informatica, Spain
Macario Polo, Escuela Superior de Informatica, Spain
Preface
It is a good thing to practice some bad habits, such as smoking, eating pork, drinking over your limit or not doing any physical exercise, so that if one day you fall ill, your doctor, to make you recover, will have something to ban. But if you are all virtue, you will have no further room for improvement, and falling ill will take you to your death bed.
Luis Landero, in Games of the Late Age.
The choice of the quotation that opens this preface was not casual. In fact, software in operation suffers from many bad habits that, fortunately for software services companies, produce more and more work every year. From the point of view of software maintenance, such imperfections have their origin in the software itself, when defects must be removed; in users, when they ask for new functionalities to be added; and in the changing technological environment, when the software must adapt to a new environment. On the other hand, mapping Lehman's laws (Lehman, 1980) onto the last sentence of the quotation, non-changing software is non-used, dead software.
According to the ISO/IEC (1995) terminology, the software maintenance process is activated “when the software product undergoes modifications to code and associated documentation due to a problem or the need for improvement or adaptation.” In spite of this definition, which is very similar to that of ANSI-IEEE (1990), ignorance of maintenance activities may lead to underestimating their importance, since there is a tendency to associate software maintenance only with corrective activities. However, several authors (McKee, 1984; Frazer, 1992; Basili et al., 1996; Polo, Piattini, & Ruiz, 2001) have shown that perfective interventions receive the most maintenance effort.
Since the seventies, software maintenance has been the most costly stage of the software life cycle (see Table 1), and there are no reasons to think that the situation will change, since novel environments and technologies require great maintenance efforts to keep software products in operation. For Brereton, Budgen, and Hamilton (1999), maintenance of hypertext documents will become a serious problem that requires immediate action, since they share many characteristics (structure, development process, economic value) with classical software products. According to Lear (2000), many legacy applications written in COBOL are being adapted to be integrated with current technologies, such as e-commerce.
There are organizations that devote almost all their resources to maintenance, which impedes new development. Moreover, maintenance needs increase as more software is produced (Hanna, 1993), and software production has always shown a growing tendency. On the other hand, big programs are never complete, but are always in evolution (Lehman, 1980). Ramil, Lehman, and Sandler (2001) confirmed this old theory 21 years later.
PROBLEM CAUSES
In spite of this, software organizations still pay more attention to software development than to maintenance. In fact, most techniques, methods, and methodologies are devoted to the development of new software products, setting aside the maintenance of legacy ones. This problem is also common among programmers, for whom maintenance is “less creative” than development; in fact, many legacy systems use old and boring programming environments, file systems, etc., whereas programmers prefer working with new, powerful visual environments. However, the same software evolves and must continue to evolve over the years and, to their regret, programmers devote 61% of their professional lives to maintenance, and only 39% to new development (Singer, 1998).
The lack of methodologies may be due to the lack of a definition of the software maintenance process. For Basili et al. (1996), the proposal and validation of new methodologies that take maintenance characteristics into account are a must. Also, Pigoski (1996) notes that there is little literature regarding maintenance organizations.
PROPOSED SOLUTIONS
It is clear that maintenance organizations require methodologies and techniques that facilitate software maintenance, decreasing costs and difficulties. Different types of partial solutions for software maintenance exist. Depending on their nature, they can be classified into:
• Technical solutions, which assist at certain moments of maintenance interventions. Reengineering, reverse engineering, and restructuring techniques are some examples.
REFERENCES
ANSI-IEEE (1990). ANSI/IEEE Standard 610: IEEE standard glossary of soft-
ware engineering terminology. New York: The Institute of Electrical and Elec-
tronics Engineers, Inc.
Basili, V., Briand, L., Condon, S., Kim, Y., Melo, W. & Valett, J.D. (1996).
Understanding and predicting the process of software maintenance releases. In
Proceedings of the International Conference on Software Engineering,
(pp. 464-474). Los Alamitos, CA: IEEE Computer Society.
Brereton, P., Budgen, D. & Hamilton, G. (1999). Hypertext: The next mainte-
nance mountain. Computer, 31(12), 49-55.
Frazer, A. (1992). Reverse engineering: Hype, hope or here? In P.A.V. Hall (Ed.), Software Reuse and Reverse Engineering in Practice (pp. 209-243). Chapman & Hall.
Hanna, M. (1993, April). Maintenance burden begging for a remedy. Datamation,
53-63.
ISO/IEC (1995). International Organization for Standardization/International Electrotechnical Commission. ISO/IEC 12207: Information Technology - Software Life Cycle Processes. Geneva, Switzerland.
Lear, A.C. (2000). Cobol programmers could be key to new IT. Computer,
33(4), 19.
Lehman, M. M. (1980). Programs, life cycles and laws of software evolution.
Proceedings of the IEEE, 68(9), 1060-1076.
Lientz, B.P. & Swanson, E.B. (1980). Software Maintenance Management. Reading, MA: Addison Wesley.
McKee, J.R. (1984). Maintenance as a function of design. In Proceedings of the AFIPS National Computer Conference, Las Vegas (pp. 187-193).
Pigoski, T. M. (1996). Practical Software Maintenance. Best Practices for
Managing Your Investment. New York: John Wiley & Sons.
Polo, M., Piattini, M. & Ruiz, F. (2001). Using code metrics to predict mainte-
nance of legacy programs: A case study. Proceedings of the International
Conference on Software Maintenance. Los Alamitos, CA: IEEE Computer
Society.
Ramil, J.F., Lehman, M.M. & Sandler, U. (2001). An approach to modelling
long-term growth trends in large software systems. Proceedings of the Inter-
national Conference on Software Maintenance. Los Alamitos, CA: IEEE
Computer Society.
Schach, S.R. (1990). Software Engineering. Boston, MA: Irwin & Aksen.
Singer, J. (1998). Practices of software maintenance. In Khoshgoftaar & Bennet
(Eds.), Proceedings of the International Conference on Software Mainte-
nance, (pp. 139-145) Los Alamitos, CA: IEEE Computer Society.
ACKNOWLEDGMENTS
We would like to thank all the authors for their excellent contributions. We also want to thank Michele Rossi, our Development Editor, for her guidance, motivation, and patience, and the Idea Group Publishing staff for their continuous support of this project.
Chapter I
Software Maintenance
and Organizational
Health and Fitness
Ned Chapin
InfoSci Inc., USA
This chapter sets out a foundation for the management of software maintenance. The foundation highlights a fundamental requirement for managing software maintenance effectively: recognition of the connection between managing change in organizations and the organizations’ health and fitness. The changes arising as a consequence of software maintenance nearly always either impair or improve that health and fitness. The basis for that impact lies in how business rules contribute to the performance of systems, and in how systems contribute to the performance of organizations. Understanding how software maintenance changes to business rules and systems affect the performance of organizations provides a basis for management action. With that understanding, managers can reduce the impairment of their organizations’ health and fitness and can increase positive action to improve it via software maintenance.
INTRODUCTION
Software maintenance provides a vital path for management in preserving and
building organizational health and fitness. Since software maintenance also is the
usual means for implementing software evolution, that vital path takes on additional
significance for management. As explained later in this chapter, that vital path
provided by software maintenance is a main way of making intentional changes in
how organizations work. In the contexts in which they operate, how organizations
work is manifested in their organizational health and fitness.1
A person may be, and usually is, a member of or a component in more than one suborganization.
Although no analogy is perfect, an organization can be regarded as if it were
a natural living biological entity. Eight activities and attributes of organizations are
also activities and attributes of living biological entities:
• They enclose themselves within boundaries that separate the self from its environment and milieu.2 For biological entities, the boundaries may take such physical forms as membrane, hide, cell wall, bark, skin, scales, etc. In addition, some animal entities mark territorial boundaries, as by scents, scratch marks, etc. In organizations, the boundary may take physical forms, such as a building’s walls, or territorial forms, such as signs, different cultural practices, and different ways of doing things (such as the Navy way, the Army way, etc.).
• They preserve boundary integrity to reduce breaches of the boundaries.
For biological entities, this may take behavioral forms such as posturing,
grooming, threatening, hardening of the boundary material, refreshing mark-
ings, etc. Organizations use many ways to try to preserve their boundary
integrity, such as advertising, using spokespersons, holding meetings, seeking
endorsements, screening applicants, etc.
• They acquire materials from the environment to incorporate within the self.
These are inward-flowing transports across the boundaries. For a biological
entity, the materials are its diet and its sources of energy, such as sunlight.
Organizations take in materials (such as money, electricity, iron ore, card-
board, etc.) both from the environment and from its milieu such as from other
CONCLUSIONS
In this chapter, we have proposed XUM, an XML-based unified model that integrates and unifies a set of well-accepted standards into a single model represented in a standard, well-accepted language, XML. A survey of these adopted standards and their roles in the software life cycle is presented in this chapter as well.
The XUM can facilitate the following tasks (see the sketch after this list):
1) Capturing the modeling information of models and transforming it into XUM views.
2) Two-way mapping of modeling information between models and XUM views.
3) Integrating and unifying the modeling information of different views in XUM.
4) Supporting systematic manipulation.
5) Consistency checking of the views represented in XUM.
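The chapter's actual XUM schema is not reproduced in this excerpt, so the following Python sketch uses a hypothetical element layout purely to illustrate task 5, consistency checking across views: it verifies that every class referenced in a sequence-diagram view is declared in the class-diagram view of the same XUM document.

import xml.etree.ElementTree as ET

# Hypothetical XUM document: two views of one system unified in one XML model.
XUM_DOC = """
<xum>
  <view kind="class-diagram">
    <class name="Order"/>
    <class name="Customer"/>
  </view>
  <view kind="sequence-diagram">
    <message from="Customer" to="Order"/>
    <message from="Order" to="Invoice"/>
  </view>
</xum>
"""

def check_view_consistency(xum_xml):
    """Return classes referenced in sequence views but missing from class views."""
    root = ET.fromstring(xum_xml)
    declared = {c.get("name")
                for view in root.findall("view[@kind='class-diagram']")
                for c in view.findall("class")}
    referenced = {m.get(end)
                  for view in root.findall("view[@kind='sequence-diagram']")
                  for m in view.findall("message")
                  for end in ("from", "to")}
    return sorted(referenced - declared)

print(check_view_consistency(XUM_DOC))   # ['Invoice'] -- an inconsistency between views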
with the people often providing much of the sensory apparatus. But the links to the typically wide selection of responses tend to be inconsistent and erratic, because they rely on people to provide or perform the linkages. For example, because of the voluntary sudden departure of an employee, the other employees have to change what they do and when and how they do it, to pick up on the fly the workload change caused by the departure until a replacement is hired and broken in, if the organization’s operational performance effectiveness is to be preserved.
• They avoid illness and infirmity to the extent they do the seven activities
listed above. Some degree of success is needed in all seven for the entity to
have good health and fitness, although partial substitutions are sometimes
effective. For biological entities, animals and insects have more opportunity
to act to improve their health and fitness than do plants, because plants
typically have much less mobility. Organizations have the most opportunity to
improve their health and fitness—i.e., their operational performance—as
explained later in this chapter.
Organizations normally enjoy one attribute that biological entities do not have—an ability to shape-shift and function-shift at will. Biological entities do shape-shift and function-shift, but in a set sequence and on an inflexible schedule. For example, a tomato plant may take the shape of a tomato seed, of a tomato seedling, and of a fruiting tomato plant, but only in that sequence and only among those forms and their associated functions. Vertebrate animals take the shape of a gamete (egg or sperm), fetus, youngster and adult, but only in that sequence and only among those forms and their associated functions. Many bacteria have two forms and two sequences, cell to two separate cells, or cell to spore to cell, but only in those sequences and only among those forms and their associated functions.
An organization, by contrast, has much more latitude to shape-shift and
function-shift, within limits as noted later. An organization can select its own
constituent components and how they are to interact, and change both at any time.
An organization can change its internal processes at any time. An organization can
choose the form, substance, and position of its boundaries, and change them at any
time. An organization can select the products or services it produces and
distributes, and how and when it produces and distributes them. An organization
can select what means of self-protection to deploy, and how and when it deploys
and engages them at any time. And an organization can determine what it will treat
as stimuli and the linkages to select what responses to make, when, and how, and
can change them at any time.
The common limitations on an organization’s ability to shape-shift and function-shift arise from two main sources: access to materials (resources), and management decision-making. The usual resource limitations are prices, capital, personnel skills
• The most common way is to use interacting systems to accomplish the creation
and distribution of products and services along with the associated overhead.
For example, to produce electric motors, the production department uses a
full enterprise resource planning (ERP) system or a combination of inventory
systems (raw material, work in process, and finished goods), bills of materials
system, production control system, logistics system, quality control system,
plant and equipment support system, purchasing and supply chain system, and
cost accounting system. Stimulus-response links with feedback are common, and the timing of action is normally important in such systems; for
information systems, a common implementation form is online transaction
processing (OLTP).
• A second way is to use interacting systems (primarily information systems) to
implement in the organization the equivalent of a (biological) nervous system
for providing responses to stimuli other than those directly involved in the
product/service systems. These more general systems deal with data relevant
for line and staff functions in an organization, such as building repair status,
accounts receivable, human resources, advertising, and site security. Such
systems involve a coordinated use of data acquisition, data reduction, data
storage (in information systems, databases and files), data analysis, data
processing, and data reporting to provide data on a periodic or an as-needed
basis to destinations within the organization (such as personnel and machines)
and without (such as external stakeholders, regulatory agencies, etc.).
• A third and less common way is to use specialized computerized information
systems for helping management in policy determination and planning for the
Changing a System
The same management action that specifies a system can also specify a change
in a system. A management-specified change in a system has the effect of making
a change in a system’s operational performance. Furthermore, such a change in a
system’s operational performance also results in a change in the organization’s
operational performance. Such management-specified changes are typically based
on the external (management visible) characteristics of a system. Those are external
with respect to the specific system, not necessarily external to the organization.
Some of the boundaries of a system may coincide with some of the boundaries of
an organization, as when an information system used by an organization accepts as
a stimulus something from another organization in its milieu or from the organization’s
environment. Otherwise, the boundaries of a system normally coincide with parts
of the boundaries of some of the organization’s other systems where those
boundaries are internal to the organization’s boundaries.
A management-specified change in a system often necessitates, in the judgment of systems personnel, other changes that support or facilitate it. An information system example is when management specifies that products are hereafter to be shown in value-of-turnover (sales) sequence in an existing report. The systems personnel recognize that this will require separate sort processing for each territory, something missing from the management specification. Hence the systems personnel generate consequential changes to make the sorted data available in order to implement the management-specified change in report content and format.
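As an illustration only (the chapter names no particular tools or data), the consequential change might amount to something like the following Python sketch, in which hypothetical sales records are grouped by territory and each group is separately sorted by turnover before the report is produced.

from itertools import groupby
from operator import itemgetter

# Hypothetical sales records feeding the existing report.
sales = [
    {"territory": "North", "product": "Motor A", "turnover": 120_000},
    {"territory": "North", "product": "Motor B", "turnover": 310_000},
    {"territory": "South", "product": "Motor A", "turnover": 95_000},
    {"territory": "South", "product": "Motor C", "turnover": 140_000},
]

# Consequential change: a separate sort for each territory, so that products
# appear in value-of-turnover sequence within the report.
sales.sort(key=itemgetter("territory"))        # groupby needs grouped input
for territory, rows in groupby(sales, key=itemgetter("territory")):
    for row in sorted(rows, key=itemgetter("turnover"), reverse=True):
        print(territory, row["product"], row["turnover"])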
Because the changes, whether direct (management-specified) or consequential, affect a system, any or all of the four services in a system may be affected. A change in one service usually ripples, leaving some other existing service no longer a good fit. If the fit is left impaired, then the organization’s operational performance is also impaired. For example, assume that management requires that the currently
manually applied rules to assess customer creditworthiness are to be replaced with a different set of rules effective over the weekend. Furthermore, management
requires that the new rules are to be applied by a computer at the time of each
transaction with a customer, that equivocal cases are to be flagged for personnel
review, and that the review results are to be given to the computer before the
transaction is allowed to be completed. The new requirement changes personnel
roles and shifts the skills repertoire the personnel need. It also changes the timing,
both for the personnel services and for the computer services, and introduces a new
dependency between the computer and personnel services. How are both the
computer and personnel business rule changes to be accomplished over the
weekend to make the required system change without impairing the operational
performance of the organization?
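To make the shape of such a business-rule change concrete, here is a minimal Python sketch with invented thresholds and field names (the chapter specifies none): the rules are applied by the computer at transaction time, equivocal cases are flagged for personnel review, and the transaction completes only after a decision is returned.

APPROVE, DECLINE, REVIEW = "approve", "decline", "review"

def assess_credit(balance_after_sale, credit_limit, days_overdue):
    """Decision portion of the business rule (thresholds are invented)."""
    if days_overdue > 90:
        return DECLINE
    if balance_after_sale <= 0.5 * credit_limit and days_overdue == 0:
        return APPROVE
    return REVIEW                        # equivocal case: flag for personnel review

def personnel_review(customer, amount):
    """Stub: the real system would block here until a person reviews the case."""
    print(f"flagged for review: {customer['name']}, amount {amount}")
    return APPROVE

def complete_transaction(customer, amount):
    """Action portion: the transaction completes only after a decision exists."""
    decision = assess_credit(customer["balance"] + amount,
                             customer["limit"], customer["days_overdue"])
    if decision == REVIEW:
        decision = personnel_review(customer, amount)
    return decision == APPROVE

customer = {"name": "Acme", "balance": 4_000.0, "limit": 10_000.0, "days_overdue": 10}
print(complete_transaction(customer, 2_500.0))   # flagged, then approved: True

The weekend cut-over question in the text is precisely about deploying such a rule change together with the personnel-side changes it implies.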
To implement each change in an organization requires changes to be made in at least one system and usually changes in several systems. To implement each change in
a system requires changes to be made in at least one business rule and usually in
several business rules. At each level, boundary changes may be made.
These change processes also work in the other direction, but with more
opportunity for shortfalls, oversights, and impaired organizational performance.
Thus, each change made in a business rule makes changes in at least one system and
may result in consequential changes in other systems. Each change made in a system
makes changes in at least one organization and may result in consequential changes
in other organizations. At each level, boundary changes may be made. Introducing
a change in a business rule is often regarded in practice as all that is needed to change
an organization’s operational performance. While only making such a change
usually does make a difference in an organization’s operational performance, it
often has unintended side effects because of ripple changes warranted but not made
in the combination of services in the system or in the boundaries. These unintended
side effects often result in some impairment in an organization’s operational
performance—i.e., its health and fitness.
The most common unintended side effects are on the service of personnel. Any
change in business rules has the potential to change what people do or how and
when they do it in the system, and with whom they interact. When the potential
becomes an actual change affecting people, then some of those people are losers
and some are winners in the organization. Much of the internal political chaos and
advocacy for change or for status quo in organizations springs from people seeking
to be winners or seeking to avoid being losers. To make anyone in an organization
be a winner or a loser, their personal performance in applying business rules in
systems in the organization has to change relative to other persons’, or the business
rules or systems have to change.
The winners usually accept the change as appropriate and worth doing.
Almost all people in organizations have experienced that to gain in position, prestige,
recognition, promotion, importance, desirable transfer, upgrade, more favorable
associates, increased remuneration, work space, access to resources, power, or
other “perks,” specific changes have to be made to the business rules or systems
(i.e., the way the organization works) in order to give them a more significant role.
For example, consider the case of a man working as a receptionist and assigned to do routine data entry of inventory corrections as background fill work. A systems change is proposed that would have him keep the appointment books for a group of executives and move part of the data entry work to a clerical pool. The man welcomes this proposal because he wants a better basis for bidding later on moving up to being a “gofer” executive assistant to one of the executives in the group, which would give him a less sedentary job with more variety.
The losers normally resist the change and fight it both passively and actively,
directly and indirectly. The losses can take many forms, many of which are important
to some people, but are not important to others. For example, a particular change
extends an organization’s use of an enterprise resource planning (ERP) system,
eliminating on a week’s notice the job of one of the assistant purchasing agents. The
affected man is offered, with no change in salary, a transfer to an incoming quality
inspection group or an opportunity to train to become a sales associate. The man
regards both choices as “a slap in the face.” He regards the training offer as being
forced out of a group he knows and feels comfortable with, into associating with a
bunch of people he believes he has little in common with, all without any assurance
of passing the training. He regards the transfer offer as a move from indoor office
work to work in a warehouse, where he is taking orders and working with things
instead of what he likes to do—trying to get people to do things his way. In an
attempt to avoid the choices, he brings up with his present supervisor how the
purchasing department will need help in adapting to the ERP system and adapting
the ERP system to the needs of the department—and requests he be assigned to
that work (and thus initiates a proposal for additional consequential change).
All that has been noted previously in this chapter about changes to organiza-
tions, systems, and business rules applies also to software maintenance. This is
because software is used in implementing systems that use computers for part of
their machine services. One trend in the software used in systems that interact with
people, such as point-of-sale systems and accounting systems, is facilitating the
human interface. The proportion of the software devoted to that interface has been
increasing and now often exceeds half of the software in a system. Changing the
software to facilitate or improve the friendliness of the human interface is often
treated as a means of modifying the services of personnel in the system. However,
strengthening or facilitating the human interface may actually be like “frosting on the
cake” with respect to the needed changes in personnel services in the system.
The processes of software maintenance are not limited to modifying just the
software:
• Modifying the software typically modifies part of the machine services in a
system. This in turn typically requires rebalancing the contributions from the
non-computer service providers in the system, if the organization’s opera-
tional performance is to be preserved or improved. Failure to make the
rebalancing or readjusting changes appropriately typically results in some
impairment of the organization’s operational performance. Software mainte-
nance is more than tinkering with the code; rather, it is managing change in the
organization (Canning, 1972). If management would preserve or improve the
organization’s operational performance, then management has a stake in how
the software maintenance is directed and what is achieved.
• Not all software maintenance involves making changes only to the software
implementations of business rules. Some software maintenance work,
especially that initiated by the systems personnel, results in modifying software properties usually undiscernible to management. Examples are changes in the
speed of system execution, the amount of computer storage used, the
involvement of operating system services, the arrangement of data in a
database, the transition to a new version of a directory, etc. Such changes
affect primarily the methods, machines and materials services in the system,
but rarely result in changes to the personnel services.
• Some software maintenance changes the system’s documentation. Except for
user manual changes, such changes usually directly affect relatively few people
in an organization other than the systems personnel. When the documentation
is changed to be more accurate, current and relevant, the work of the people directly affected is nearly always improved; indirectly, through faster, better and lower cost changes to systems, the organization’s operational performance is subsequently improved.
• The largest part of software maintenance does not modify the software—i.e.,
it leaves the code and the documentation unchanged. What it does is use the
code and the documentation to:
1. build or refresh the systems personnel’s knowledge and understanding
of the system and its software;
2. support consultation with users and other affected personnel about the
use and characteristics of the system and its software, such as about its
contribution in the organization, how this system fits with other systems,
how various classes of transactions are handled by the system, etc.;
3. validate or test the software as a whole to confirm its operational
performance (regression testing is an example);
4. provide a basis for answering questions from management about the
system and its software, such as “What would be the likely time and cost
to make a particular change in this system?”;
5. act as a vehicle or aid in the training or refreshing of people in the use of
the system and its software, such as by teaching, on-site tutoring, staffing
a “help desk,” etc.; and
6. serve as a basis in helping management devise ways to give the
organization new or changed functionality (function shifting) through
different and changed uses of the existing systems.
More systems personnel time goes into these six activities than into any other
of the activities and processes in doing software maintenance. Rebalancing
services, making changes in systems’ properties and functionalities, upgrading
the documentation, and the six activities just noted above, can all contribute
to preserving and improving an organization’s operational performance
capability.
Software maintenance is also the operative vehicle for accomplishing software
evolution (Chapin et al., 2001). The activities and processes in software mainte-
nance that implement changes in the business rules are the primary means of
accomplishing software evolution. This is because software evolution results in new
versions of computerized information systems provided for use by the personnel in
the organization.
Management often directs, on an ad hoc basis, that software maintenance be
done to implement changes in business rules. The usual result of such ad hoc
direction is unplanned, uncoordinated, and episodic changes and growth in the
software and systems used by the organization. Taken together, such “random
walk” changes result in unmanaged software evolution and hence in unmanaged
system evolution. Haphazard improvement or impairment in the organization’s
operational performance may result. Alternatively, management can forecast and
specify a deliberate planned and targeted set of steps to guide the software evolution
among associates and other managers that the organization should pare the
money put into that “rat hole” of existing systems (Chapin, 1993). A starved,
neglected, ignored, and invisible or frowned upon activity can help motivate
an organization’s personnel to look for other ways to preserve or improve an
organization’s operational performance.
4. Management accepts the view that the proper province of software mainte-
nance is only diddling with the code. Management rejects the view that
software maintenance is the management of change in the organization
(Canning, 1972). The usual consequence is that the organization’s opera-
tional performance is impaired, and management usually blames either the
current systems as being incompetent to some degree, or the software
maintenance as not being properly done, or both.
5. Management manages software maintenance reactively (Chapin, 1985). Management regards software maintenance requests or needs as unwelcome, unsought situations that are best minimized and handled by firefighting and damage control. Management might consider having maintenance done on something broken or worn out, to return it to its prior level of functionality. Unlike machines, however, software does not wear out and very rarely gets broken; hence, in management’s view, it should very rarely receive any maintenance attention. And if it should become obsolete, then the appropriate management action is to replace it and gain the benefits of improved technology.
6. Management sets and enforces deadlines for the completion of software
maintenance work based upon the requesters’ expressed preferences.
Management typically treats the systems personnel’s estimates of the sched-
uling, time, effort, and resources needed to do the requested work as self-
serving padded data unworthy of serious consideration. To reduce any gap
between the deadlines proposed by requesters and by the systems personnel,
management applies pressure to the system personnel to meet the requesters’
deadlines (Couger & Colter, 1985).
7. Management declines to recognize that software maintenance is a more demanding job than software development, in spite of supporting evidence (Chapin, 1987). Hence, management often assigns junior-level and new-hire personnel to software maintenance and sets the pay scale to pay less to personnel normally assigned to software maintenance than to software development. The result is an effectively higher unit cost for most quality levels of software maintenance done, which in turn discourages management from having software maintenance done.
8. Management views software maintenance narrowly, such as making corrections or enhancements, or both, to the source code. Management regards such
Positives
The major activities with positive potential relating organizational health and fitness with software maintenance are also not new—they too are some of the lessons learned over the years by organizations doing software maintenance. While often overwhelmed in practice by the negative potentials, the positive potential activities have contributed to achieving some effective software maintenance and improving the operational performance of some organizations, or to offsetting or defeating some of the negatives to some degree. The positives get far less management attention than the negatives, primarily because of the typical short planning horizon most managers currently support (“We have to look good this quarter.”) and because of personnel pressure to permit and reward solo stars rather than competent, stable teams of personnel. Ten activities with positive potential are:
1. Management acts to eliminate, neutralize, or reverse the activities with negative potential, to clear the way for the activities with positive potential to be effective in contributing to preserving or improving the organization’s operational effectiveness.
2. Management takes a proactive software evolution approach toward software management. This involves the relationships between the management
personnel and the systems personnel and the personnel using the system. Communication and consultation among users, management, and the systems personnel about anticipating changes and defining needs and their timing are usually helpful. Planning out the staging and transitions for a series of changes helps reduce the personnel’s feeling of chaos and “fud” (fear, uncertainty, and doubt) and builds feelings of assurance and confidence among the personnel affected by the changes.
3. Management publicizes the stable low-cost, personnel-empowering way that
change via software maintenance is handled in the organization, as well as the
actual track record from past and current software maintenance projects.
Management also contrasts this with the typical chaotic, high-cost fud
characteristics and the poor industry track record of discarding existing
systems and replacing them with new systems. In systems change, as the
software is maintained (or replaced), the winners rarely need their gains
glorified, but the losses for the losers have to be minimized, as by skill-
upgrading training, attractive assignments, etc., to avoid the performance
impairments from personnel turnover and reduced morale or motivation.
4. Management encourages maintenance teams and the use of repeatable
processes. The teams may be small, as in pair programming, to ten members
or more (Biggs, 2000; Humphrey, 2000; Prowell et al., 1999). But the
processes the teams use are known and understood by all of the team
members, even when some of the team members may not be expert in their
use. The processes are also repeatable, so that nearly identical results are
achieved regardless of which team does the work. The teams apply quality
assurance techniques and methods built into the ways they work, rather than
leaving quality assurance as a separate “may be done well” step near the end
of the maintenance project. An independent, separate software quality
assurance (SQA) activity may also contribute value to some software
maintenance projects.
5. Management encourages the fractal-like characteristic of business rules,
systems, and organizations to help evaluate the quality of the results of
software maintenance processes (Mills, Linger, & Hevner, 1986). Data
about this fractal-like characteristic has been scattered throughout this chapter
(for example, compare Figures 1, 3, and 5). Deviation from the fractal-like
characteristic appears to be associated with inelegance and inefficiency in the
level at which it appears and in any levels above. The impairment that arises
stems from the deviation’s effect upon the software maintenance processes,
and tends to be cumulative. However, software maintenance can be done in
ways that encourage and utilize the fractal-like characteristic.
to the local standard operating procedure (SOP). This usually includes data
consistently named and described, with clear and meaningful annotation
placed appropriately in the code, and with accurate and frequent cross-references between the code and the non-code documentation. Full test plans
with their test description, test directions, test input data or how to find them,
and correct test output data for test run comparison or how to find them are
often included as part of well-groomed software. Well-groomed software is
easier for the systems personnel to understand and quickly find in it what they
want to find, thus reducing unit cost for software maintenance, speeding
maintenance, and improving the quality of the software maintenance.
10. With each request for software maintenance, management requires from the requester a quantitative estimate (in monetary and other quantitative terms) of the value to the organization of the requested software maintenance. Excluded from this estimate is the cost to the organization of actually doing the requested software maintenance, since that estimate is made separately by the systems personnel and is used to get the net value of the requested maintenance. Then the organization management and the systems personnel compare the value estimate with the cost estimate to get a benefit/cost estimate, including the time value of money, to assist management in deciding whether or not to do the requested software maintenance (a worked sketch follows). If the maintenance is done, then about a year after the initial use of the modified software, an audit is made to determine the actual realized benefit/cost. The realized benefit/cost data assist in making future estimates of value and cost, and establish the real quantitative value to the organization of doing software maintenance.
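A worked sketch of such a benefit/cost estimate, with invented figures, might look like the following Python fragment; it discounts the requester's yearly value estimates to present value, reflecting the time value of money, before forming the ratio against the systems personnel's cost estimate.

def present_value(cash_flows, rate):
    """Discount yearly cash flows (received at the end of years 1, 2, ...)."""
    return sum(cf / (1 + rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# Invented figures for one maintenance request.
value_per_year = [40_000, 40_000, 40_000]    # requester's value estimate
cost_now = 60_000                            # systems personnel's cost estimate
discount_rate = 0.08                         # time value of money

benefit = present_value(value_per_year, discount_rate)
print(f"benefit {benefit:,.0f} vs. cost {cost_now:,}: "
      f"benefit/cost = {benefit / cost_now:.2f}")
# Prints a ratio of about 1.72; a ratio above 1.0 argues for doing the work,
# and the audit a year later checks the realized ratio against this estimate.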
Maintenance Maturity
The twenty activities just listed, half positive and half negative, really are just
some highlights or aspects of what is often called “process maturity.” The general
concept of process maturity as applied to information systems has been most widely
publicized in the work of the Software Engineering Institute (SEI) with its Capability
Maturity Model (CMM) for software development (Humphrey, 1995). The SEI
CMM gives scant attention to software maintenance in any of its five levels of maturity.
This oversight is being rectified by others; for example, for corrective maintenance, see
the CM3 described elsewhere in this volume by Dr. Mira Kajko-Mattsson.
In summary and in generic form, an organization operating at a low level of maturity in software maintenance usually displays most of the following characteristics:
• the program and system documentation is weak, incomplete, not up-to-date, inaccurate, or hard to understand, and user manuals usually share the same state;
• the personnel find they rely mostly on the source code and their own memories
to help them understand the software;
• the personnel usually work solo but are often specialized to certain software
or certain processes (such as preparing test runs), commonly with little variety;
• work is unplanned and often done informally in response to verbal requests,
and a separate quality assurance activity is rarely used;
• deadlines drive the work, and methods and techniques are used in an ad hoc
manner with little repeatability;
• software configuration management and software quality assurance are
absent or inconsistently applied;
• the personnel career paths are ambiguous, and supervisors rarely spend time with subordinates except to ask about the status of meeting deadlines;
• personnel performance is assessed in qualitative informal terms, and contact
with the software’s stakeholders is fleeting and rare; and
• in general, management has only a weak ability to affect outcomes.
CONCLUSION
In this chapter, “organization” has been the term for a human-created, human-
operated entity for a human-designated purpose. Organizations usually have
suborganizations as components. Although such parallels are perilous, comparing the eight activities and attributes of living things and of organizations makes us aware of some
key characteristics of organizations, and of what contributes to health and fitness as
manifested in the operational performance of the entity or organization.
In organizations, systems determine how recurrent situations get handled by
the methods that combine the services of personnel, machines and materials in order
to get them to operate together. Information systems are of special interest in this
chapter since they often involve software that implements the methods. In a system,
a key part of the methods is management’s choice of the business rules. These have
two main portions, decision and action. Business rules typically are nested and
implemented within systems, and systems typically have subsystems as compo-
nents. The systems and their business rules are the primary contributors to the
operational performance capability of an organization—i.e., to the organization’s
health and fitness.
To change an organization’s health and fitness, management has to specify
changes to the existing repertoire of systems and how those systems operate.
Typically, changing an existing system requires changing the business rules within the
system, often with supporting or consequential changes in the services provided by
personnel, machines, materials, and methods. The personnel changes are typically
the most sensitive and complicated to handle, and can result from making changes
to the business rules, to a system, and to an organization, due to the fractal-like
characteristics of systems and their components.
Management has a key role in selecting, directing, and specifying change in an
organization, a system, or a business rule. Implementing management-required
changes often requires the systems personnel to make additional ripple, supporting,
or consequential changes. In information systems, the systems personnel normally
take care of these additional changes that may sometimes be extensive. Comput-
erized information systems get changed mostly through software maintenance
processes, yet information systems involve more than software—they also involve
personnel, machines (in addition to computers), materials, and methods. This
emphasizes that the management of software maintenance involves more than just
managing the changing of software—software maintenance makes changes to a
system’s and hence also to an organization’s operational performance. In closing,
this chapter highlighted twenty lessons learned. Ten are software maintenance activities with negative potential, and ten are software maintenance activities with positive potential, for the health and fitness of an organization.
ENDNOTES
1. Except where otherwise indicated, this chapter draws primarily upon the author’s observations and experiences, and upon these sources: Chapin, 1955; Chapin, 1971; and Chapin, 1965–1990.
2. The intended distinction between “environment” and “milieu” in this chapter is that the environment is the natural physical surrounding context of the entity, and the milieu is the human-dependent, cultural and social-institution surrounding context of the entity.
3. This is a minor variation on an engineering definition. A discussion of some of the varied meanings of the term “system” can be found in Klir (1991).
4. This involves many technical considerations. Two examples are Chapin (1978) and Chapin (1999).
REFERENCES
Biggs, M. (2000). Pair programming: Development times two. InfoWorld, 22(30)
(July 24), 62, 64.
Canning, R.G. (1972). That maintenance ‘iceberg.’ EDP Analyzer, 10(10), 1–14.
Chapin, N. (1955 & 1963). An Introduction to Automatic Computers. Princeton,
NJ: Van Nostrand Co.
Chapin, N. (1965–1990). Training Manual Series. Menlo Park, CA: InfoSci
Inc.
Chapin, N. (1971). Computers: A Systems Approach. New York: Van
Nostrand Reinhold Co.
Chapin, N. (1971). Flowcharts. Princeton, NJ: Auerbach Publishers.
Chapin, N. (1974). New format for flowcharts. Software Practice and Experi-
ence, 4(4), 341–357.
Chapin, N. (1978). Function parsing in structured design. Structured Analysis
and Design Volume 2, (pp. 25–42) Maidenhead, UK: Infotech Interna-
tional, Ltd.
Chapin, N. (1985). Software maintenance: A different view. AFIPS Proceedings
of the National Computer Conference (Vol. 54, pp. 507–513) Reston
VA: AFIPS Press.
Chapin, N. (1987). The job of software maintenance. Proceedings Conference
on Software Maintenance–1987 (pp. 4–12) Los Alamitos, CA: IEEE
Computer Society Press.
Chapin, N. (1993). Software maintenance characteristics and effective manage-
ment. Journal of Software Maintenance, 5(2), 91–100.
Chapin, N. (1999). Coupling and strength, a la Harlan D. Mills. Science and
Engineering in Software Development: A Recognition of Harlan D.
Mills’ Legacy (pp. 4–13) Los Alamitos, CA: IEEE Computer Society Press.
Chapin, N., Hale, J.E., Khan, K.M., Ramil, J.F., & Tan, W.-G. (2001). Types
of software evolution and software maintenance. Journal of Software
Maintenance and Evolution, 13(1), 3–30.
Couger, J.D., & Colter, M.A. (1985). Maintenance Programming. Englewood
Cliffs, NJ: Prentice-Hall.
Drucker, P.F. (1973a). Management: Tasks, Responsibility, Practice (pp. 95–102, 517–602). New York: Harper & Row.
Drucker, P.F. (1973b). Management: Tasks, Responsibility, Practice (pp. 430–442). New York: Harper & Row.
Humphrey, W.S. (1995). A Discipline for Software Engineering. Reading,
MA: Addison Wesley.
Humphrey, W.S. (2000). Introduction to the Team Software Process. Upper
Saddle River, NJ: Prentice-Hall.
Jones, T.C. (1993). Assessment and Control of Software Risks. Englewood
Cliffs, NJ: Prentice-Hall International.
Kitchenham, B.A., Travassos, G.H., Mayrhauser, A.v., Niessink, F., Schneidewind,
N.F., Singer, J., Takada, S., Vehvilainen, R., & Yang, H. (1999). Towards
an ontology of software maintenance. Journal of Software Maintenance,
11(6), 365–389.
Klir, G.J. (1991). Facets of System Science (pp. 3–17, 327–329) New York:
Plenum Press.
Lehman, M.M., & Belady, L.A. (1985). Program Evolution: The Process of
Software Change. New York: Academic Press.
Li, W., & Henry, S. (1995). An empirical study of maintenance activities in two
object-oriented systems. Journal of Software Maintenance, 7(2), 131–
147.
Lientz, B.P., & Swanson, E.B. (1980). Software Maintenance Management.
Reading, MA: Addison Wesley.
McClure, C.L. (2001). Software Reuse: A Standards Based Guide. Los
Alamitos, CA: IEEE Computer Society Press.
Mills, H.D., Linger, R.C., & Hevner, A.R. (1986). Principles of Information
Systems Analysis and Design. Orlando, FL: Academic Press.
Prowell, S.J., Trammell, C.J., Linger, R.C., & Poore, J.H. (1999). Cleanroom
Software Engineering: Technology and Process. Reading, MA: Addison
Wesley Longman, Inc.
Rumbaugh, J., Jacobson, I., & Booch, G. (1999). The Unified Modeling
Language Reference Manual. Reading, MA: Addison Wesley.
Swanson, E.B. (1976). The dimensions of maintenance. Proceedings of the 2nd
International Conference on Software Engineering (pp. 221–226) Long
Beach, CA: IEEE Computer Society Press.
Chapter II
Problem Management within Corrective Maintenance
Mira Kajko-Mattsson
Stockholm University & Royal Institute of Technology, Sweden
INTRODUCTION
Corrective maintenance is a very important process for achieving process maturity. It not only handles the resolution of software problems, but also provides a basis for quantitative feedback that is important for assessing product quality and reliability, crucial for continuous process analysis and improvement, essential for defect prevention, and necessary for making different kinds of decisions. Yet the domain of corrective maintenance has received very little attention in the curricula of academia and industry. Current literature provides very coarse-grained descriptions of it. Extant process models dedicate at most one or two pages to describing it.
Software Maintenance
What is corrective maintenance, and how does it relate to other types of maintenance work? To explain this, we must place corrective maintenance within total maintenance and identify its relation to other maintenance categories. The majority of the software community follows the IEEE suggestion for defining and categorizing maintenance. As depicted in Table 1, the IEEE defines software maintenance as “the process of modifying a software system or component after delivery to correct faults, improve performance or other attributes, or to adapt to a changed environment” (ANSI/IEEE STD-610.12, 1990).
Not everybody agrees on this definition today. Controversy prevails over the choice of maintenance scope, its constituents and time span, and over drawing a dividing line between software development and software maintenance (Chapin, 2001; Kajko-Mattsson, 2001d; Kajko-Mattsson, 2001f; Parikh, 1986; Schneidewind, 1987; Swanson, 1999). This is clearly visible in the widely varying cost estimates of software maintenance, which span 40%-90% of the total software life cycle cost (Arthur, 1988; Boehm, 1973; Cashman & Holt, 1980; DeRoze & Nyman, 1978; Glass & Noiseux, 1981; Jones, 1994; Mills, 1976).
According to Martin and McClure (1983), Pigoski (1997), and Schneidewind (1987), the definition suggested by the IEEE states that maintenance is entirely a post-delivery activity.
Corrective Maintenance
Corrective maintenance is the process of attending to software problems as
reported by the users of the affected software systems. During corrective mainte-
nance, we attend to all kinds of software problems: requirement problems, design
problems, software code problems, user documentation problems, test case
specification problems, and so forth. The main process within corrective maintenance
is a problem management process encompassing the totality of activities required for
reporting, analyzing and resolving software problems (Kajko-Mattsson, 2001a).
A software problem may be encountered either externally by an end-user
customer or internally by anybody within a maintenance organization. A Problem
Submitter, usually the person or organization who has encountered a problem, is
obliged to report the problem to the maintenance organization. The maintenance
organization must then attend to the reported problem and deliver the corrected
version of the software system to the affected customer(s).
Due to the high cost of corrective maintenance, it is not an economical use of resources to attend to only one problem at a time. Instead, as depicted in Figure 1, maintenance organizations lump several problems together and attend to them as a group. Before presenting the problem management process, we describe the basic terms relevant within the context of corrective maintenance: defect, fault, failure, software problem, and problem cause. For better visualization, we also model them in Figure 2.
[Figure 2: problem reports and their underlying causes, i.e., faults and other defects.]
Software Problem
A software problem is a human encounter or experience with software that
causes a doubt, difficulty, uncertainty in the use or examination of the software, or
an encounter with a software failure (Florac, 1992). Other terms for a software
problem are an incident or a trouble. Examples of a problem are the experiencing
of a system crash, an encounter with a wrong value computed by a function, or an
inability to understand documentation. It is software problems, not defects, that we
report to the maintenance organization. At the time of the problem encounter, we do not always know their underlying defects.
A software problem may be encountered dynamically during the execution of
a software system. If so, then it is caused by a failure. A problem may also be
encountered in a static environment when inspecting code or reading system
documentation. In this case, a problem is the inability to understand code or
documentation. An underlying defect behind this problem might be misleading or unclear instructions.
Problem Cause
For the most part, problems are the consequences of defects. Sometimes, however, problems may be due to something else, such as a misunderstanding, the misuse of a software system, or a number of other factors that are not related to the software product being used or examined (Florac, 1992). Additionally, in embedded systems, a problem could arise due to anomalies found in the hardware. Most software organizations today need to record the causes of problems irrespective of their origin. For this reason, we define a cause as the origin of an imperfection or flaw in a product (a hardware or software defect) or of a flaw in the operation of a product. Examples of problem causes are a worn-out engine, the choice of an inappropriate sequence of commands, or a software defect.
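The terms defined above can be summarized in a small data model. The following Python sketch (the field names are ours, not the chapter's) mirrors the distinction between a reported problem and its eventual cause, which is often unknown at the problem encounter.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class CauseKind(Enum):
    SOFTWARE_DEFECT = "software defect"      # e.g., a fault in the code
    HARDWARE_DEFECT = "hardware defect"      # e.g., a worn-out engine
    OPERATION_FLAW = "operation flaw"        # e.g., an inappropriate command sequence

@dataclass
class ProblemReport:
    """A human encounter with software, as reported by a Problem Submitter."""
    description: str
    encountered_in_version: str
    dynamic: bool                            # True if met during execution (a failure)
    cause: Optional[CauseKind] = None        # often unknown at the problem encounter

report = ProblemReport(
    description="wrong value computed by the interest function",
    encountered_in_version="3",
    dynamic=True,
)
report.cause = CauseKind.SOFTWARE_DEFECT     # established later, during investigation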
Maintenance Organizations
What does a corrective maintenance organization look like? Actually, this
varies a lot. The scenarios may be the following:
1. Corrective maintenance is performed by one or several departments within a
software organization. In this case, the maintenance department(s) perform
different types of maintenance on the software product whose major parts
have been developed by the same software organization.2
2. Corrective maintenance is conducted by several collaborating organizations,
usually belonging to one and the same organizational group. These organiza-
tions both develop and maintain major parts of their software.
3. Corrective maintenance is outsourced to one or several organizations. These
organizations do nothing but maintenance.
Problem Reporting
The PSP organizations should mainly deal with managing software problems.
They should confirm that a software problem exists, check whether the problem is
a duplicate, and if not, transfer it on for attendance to the maintenance execution
level. The main role of the maintainers at the PSP level is to filter out all duplicate
problem reports and only to report on unique problems to the maintenance
execution level.
It is very important that the maintainers at the PSP level report on only unique
software problems to the maintenance execution process level. Transferring
duplicate problems places an enormous burden on the maintenance execution process
level. Instead of concentrating on conducting changes to software systems, the
maintenance engineers must manage duplicate problem reports. This disrupts the
continuity of their work and adversely affects their productivity.
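To make the filtering idea concrete, the following is a minimal sketch in Java. It is illustrative only; the names (ProblemReport, signature(), forwardIfUnique()) are our assumptions and not part of any process model discussed in this chapter, and a real help desk would match duplicates on much richer criteria than the simple signature used here.

    import java.util.HashMap;
    import java.util.Map;

    // A reported problem, reduced to the fields needed for duplicate matching.
    class ProblemReport {
        final String product;
        final String release;
        final String symptom;

        ProblemReport(String product, String release, String symptom) {
            this.product = product;
            this.release = release;
            this.symptom = symptom;
        }

        // A crude matching key; real matching criteria would be richer.
        String signature() {
            return product + "|" + release + "|" + symptom;
        }
    }

    // Filtering at the problem-reporting level: only unique problems are
    // forwarded to the maintenance execution level.
    class DuplicateFilter {
        private final Map<String, ProblemReport> known = new HashMap<>();

        // Returns true if the report is unique and should be forwarded;
        // duplicates are recorded against the original report instead.
        boolean forwardIfUnique(ProblemReport r) {
            if (known.containsKey(r.signature())) {
                return false; // duplicate: attach to the existing report
            }
            known.put(r.signature(), r);
            return true;
        }
    }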
Upfront maintenance has received little attention from either academia or industry; hence, it has been little formalized (Bouman et al., 1999; Hart,
1993). To the knowledge of the author of this chapter, there are few models suitable for
this domain (Bouman et al., 1999; Kajko-Mattsson, 2001b; Niessink, 2000).
To conduct upfront maintenance is a difficult and demanding task. The same
problem in one product may appear in different environments, under different
guises, and with different frequencies. To efficiently help customers, the upfront
maintenance engineers must have sufficient knowledge of the products, their
environments, the problems hitherto encountered in these products, and even the
knowledge of customers and their businesses.
By keeping track of the customers, their products and problems, the upfront
maintenance process contributes to the optimization of the product, the optimization
of customer satisfaction, and optimization of the development and maintenance
processes. The customer profits from the upfront maintainers’ know-how, and the
maintenance organization gains data on product quality, adherence to user expec-
tations, product use patterns, and the requirements for future improvements or
enhancements (Hart, 1993; Zuther, 1998).
Product Perspective
Where exactly in a software product do we make corrective changes? Let us
have a look at Figure 5. We identify two phases during which the corrective changes
are implemented (Kajko-Mattsson, 2001a). They are the following:
Figure 5. Corrective maintenance—product perspective (Kajko-Mattsson,
2001a).
[Figure 5 shows corrective maintenance from the product perspective: Version 1 receives immediate (direct) corrective maintenance in its revisions; the problems corrected in Revisions k.1-k.3 then flow, together with perfective and adaptive maintenance, through the requirement specification and the development process into Version 2 as composite (indirect) corrective maintenance.]
[Figure: a problem report is reported in version 3, investigated in version 4.2, and its solution is to be implemented in versions 5 and 6.]
The choice of the version in which the problem is recreated depends on many factors. Usually, the maintenance
engineer attempts to do it on the version in which the problem was reported and the
latest released version of the software product. In our context, it would be Version
3 and Revision 4.2. If the problem can be recreated in Revision 4.2, change actions
must then be suggested and implemented. In our simplified scenario, changes would
be implemented in Revision 4.3 and Version 5. It may happen, however, that the
customer does not have an appropriate environment to install Revision 4.3. In this
case, the maintenance organization either creates a new Revision 3.1 or advises the
customer to make an environmental upgrade (Kajko-Mattsson, 2001a).
Some organizations, however, manage only one version of a software product.
In that case, the choice of the version in which the problem solution will be
implemented is not an issue.
Predelivery/Prerelease Phase
According to some authors (Jones, 1994; Martin & McClure, 1983; Pigoski,
1997; Schneidewind, 1987; Swanson, 1999), postdelivery maintenance phases
are greatly influenced by the level of maintainers’ involvement during the predelivery
stage. Irrespective of who is going to maintain the software system (either the
development organization or a separate maintenance organization), there is a need
for a predelivery/prerelease maintenance phase. Certain maintenance activities
must be conducted then. These activities are for the most part of a preparatory
nature for the transition and postdelivery/postrelease phases. Examples of these
activities are the designation and creation of maintenance organization(s), their
involvement during this phase, creation and realization of a maintainability plan and
of a maintenance plan, preparation for the transition phase, and other things (Kajko-
Mattsson, 2001e). From the corrective maintenance perspective, it is important to
build maintainability into the product. The more maintainable the product, the easier
it is to correct it, and the easier it is to manage the impact of change.
Transition Phase
Software transition is a controlled and coordinated sequence of activities
during which a newly developed software system is transferred from the organiza-
tion that conducted development to both the customer and maintenance organiza-
tions, or a modified (corrected, in our case) software system is transferred from the
organization that has conducted modification to both the customer and maintenance
organizations. From the corrective maintenance perspective, these activities are
conducted each time an organization releases a corrected version of a software
system, or a new enhanced version containing the corrections from the earlier
version’s revisions. Please observe that the organization that has developed new
software, the organization that has modified the software, and the user organization
may be one and the same organization.
Postdelivery/Postrelease Phase
Postdelivery begins after the software product has been delivered to the
customer and runs all the way to the retirement activity. It includes the sum of
activities to conduct changes to software systems. The changes to products are
implemented incrementally in subsequent product releases (see Figure 7). Hence,
postdelivery consists of a series of postrelease phases.
Usually, the creation of one major software release corresponds to the
modification of software due to modification requests for either enhancements and/
or adaptations. When implementing these modifications, however, we should not
forget to treat this subphase as some kind of a predelivery phase for the next release.
From the corrective maintenance perspective, it is very important that in this phase
we preserve maintainability of the product when infusing changes. To distinguish this
phase from the original development, we call it a prerelease phase.
Transition Transition
...
Roles
Various roles are involved in problem management. They are the following
(Kajko-Mattsson, 2001a):
• Problem Report Submitter (PRS) reports on software problems. A problem
submitter is either a customer organization or a maintenance engineer on any
organizational process level, as depicted in Figure 3.
Process Phases
• Problem Reporting Phase: During this phase, problems get reported to the
maintenance execution process level. External software problems are trans-
ferred from upfront corrective maintenance, whereas internal software prob-
lems are directly reported to this level by any role within the maintenance
organization. This phase is conducted on the SPAP level of Figure 3.
• Problem Analysis Phase: During this phase, maintenance engineers attempt
to recreate the reported problems and identify their cause(s). For some
problems, they also attempt to find root causes – deficiencies in the develop-
ment or maintenance processes, or deficiencies in the resources or products
that have led to the reported software problem. We divide this phase into the
following subphases:
Problem Analysis I, Report Quality Control and Maintenance Team
Assignment: In this subphase, the Problem Report Administrator (PRA)
conducts a preliminary quality control check of a problem report and, if it is
satisfactory, assigns the report to the relevant maintenance team. In some
organizations, this subphase is automatically performed by the Problem
Report Repository and Tracking System (Kajko-Mattsson, 1999a). Many
organizations, however, follow this procedure manually. This is because it is
not always easy to automatically identify the relevant maintenance team. This
phase is conducted on the SPAP level of Figure 3.
Problem Analysis II, Problem Administration and Problem Report
Engineer Assignment: By this problem analysis subphase, the problem
report has been assigned to a maintenance team and a Problem Report
Owner (PRO). The Problem Report Owner administers the reported
software problem. The goal of this stage is to make a preliminary analysis of
the problem report in order to determine whether it has been reported to the
appropriate maintenance team, and to start planning for the problem analysis
and resolution phases by analyzing the nature of the problem, by preliminarily
Problem Reports
All information on software problems is recorded in the maintenance database
managed by the Problem Report Repository and Tracking System. This information
is communicated with the help of special reports called Problem Reports. These
reports consist of a substantial number of forms, each dedicated to a specific
process step and process role. All the forms are linked with each other providing
detailed and summarized data about problem management and regular feedback to
all groups concerned within the software organization. Examples of these data are
problem reporting data, project management information, status of problem
management process, status and quality of software products and their releases,
and experience gained during corrective maintenance. In Figure 10, we present one
form utilized for reporting software problems at ABB Robotics AB in Sweden.
Interested readers are welcome to study other problem report forms presented in
Kajko-Mattsson et al. (2000).
Maintenance organizations distinguish between external and internal problem
reports:
• External problem reports—problems submitted externally by end-user
customers.
• Internal problem reports—problems submitted internally by software
developers, software maintainers, testers, and anybody else within a software
organization.
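As a rough illustration of the kind of information such a repository holds, consider the sketch below; the field names are our own assumptions, not the actual CM3 or ABB report layout described in this chapter.

    // Illustrative only: a minimal problem report record together with its
    // external/internal classification.
    enum ReportOrigin { EXTERNAL, INTERNAL }

    class ProblemReportRecord {
        int id;                   // unique report number
        ReportOrigin origin;      // submitted by a customer or internally
        String submitter;         // the Problem Report Submitter
        String product;           // affected product
        String reportedInVersion; // e.g., "3"
        String status;            // e.g., "reported", "under analysis", "resolved"
        String description;      // free-text description of the problem
    }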
The classification of problem reports into external and internal ones is very
important. The goal is to discover as many software problems as possible before
releasing software products to the customers. External problem reports are very
costly. This is mainly because many upfront maintenance and customer organiza-
tions may be involved in managing them. Another factor contributing to this immense
cost is the fact that certain problems may lead to changes in several releases. All
these releases will have to be tested, regression tested, and delivered to customers.
For these reasons, the software organizations work hard towards minimizing the
number of external problem reports and maximizing the number of internal ones
(Kajko-Mattsson, 2000a).
Poor problem report quality is one of the reasons for the very high cost of problem management. At its worst, it may lead to several
months of hard labor for the maintainer to recreate the problem.
If the problem report is satisfactory, the Problem Report Administrator
assigns it to an appropriate maintenance team. It may happen that the problem is
so comprehensive that it must be attended to by several collaborating teams, or
even organizations. The identification of the maintenance team and/or organization
is not always straightforward when reporting on external software problems.
Some systems are very complex. They may be embedded systems, integrated with
other systems, and their subsystems may not always be produced by the same
organization. In such a case, it is not always easy to identify the affected subsystem
and its release at the problem encounter. The organizations possessing such
subsystems must have a special procedure for system identification and mainte-
nance team assignment. Usually, representatives from different systems or system
parts have a meeting or a series of meetings during which they attempt to analyze
the problem and localize it within such a complex system.
Because little is known about the problem at this stage, the plan for implementing the problem solution is very coarse-
grained. It merely consists of the designation of the software version(s) in which the
problem will be investigated and possibly implemented, and the designation of an
activity, such as Problem Investigation. When doing this, she assigns the problem
report to one of her team members, the Problem Report Engineer.
delineating the process steps to be conducted by these two maintenance levels (Kajko-
Mattsson, 2001a).
After problem investigation, the Problem Report Engineer has a more or less
clear picture of the software problem. However, this picture is not clear enough for
designing a problem solution. Just as the Problem Report Owner does, the PRE
attempts to create or refine a preliminary mental picture of the problem solution. She
also refines its preliminary evaluation and the preliminary plans. In these plans, she
suggests Problem Cause Identification Activity and identifies the versions in
which the problem causes (defects) should be identified. She reports all this to the
Problem Report Owner, who in turn decides whether to continue with the
problem management.
During modification design, the engineer first becomes acquainted with (if the investigation was done by another engineer) or reacquainted with the software
problem and its causes. After that, the engineer may start designing problem
solutions. A common mistake is that a maintenance engineer spends substantial time
on investigating a problem and its causes, and then immediately starts making
changes to the software system without taking time to design a solution. Even if the
problem does not appear complex and difficult to resolve, it still requires a design.
A good design helps in the implementation of change and in preserving the
maintainability of the software.
When designing problem solutions, the engineer must verify the choice of the
software releases in which these solutions will be implemented. As we have already
mentioned, one and the same problem may have to be resolved in several releases.
The maintainer must determine firm requirements for the change, that is, identify
where exactly these changes should be infused. She must also study the history of
the changes to the components now earmarked for change, so that the suggested
changes are compatible with the earlier changes. If the Problem Report Engineer
makes several modification designs (problem solutions), she must evaluate and
rank each design for its optimality with respect to short-term and long-term benefits,
drawbacks, costs, risks, and the like.
For each modification design, the Problem Report Engineer creates a plan
for its realization. Such a plan contains (1) identification of the version(s) of the
software product in which the software problem will be resolved, (2) designation
of the activities to be taken, (3) identification of all the resources required for
implementing the problem solution (equipment, personnel resources), (4) identifi-
cation of the prerequisites for implementing the problem solution, and (5) determi-
nation of the schedule for conducting the activities/tasks.
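The five elements of such a plan can be summarized in a small sketch (again in Java, with names that are illustrative assumptions only):

    import java.util.List;

    // Container for the five plan elements listed above (illustrative).
    class ModificationPlan {
        List<String> targetVersions; // (1) versions in which the problem is resolved
        List<String> activities;     // (2) activities/tasks to be taken
        List<String> resources;      // (3) equipment and personnel required
        List<String> prerequisites;  // (4) preconditions for implementing the solution
        String schedule;             // (5) schedule for conducting the activities
    }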
She should link the changed components with the unchanged ones to ensure the traceability of the
system, and she should check whether all changes in software are traceable from
problem report and vice versa to ensure the traceability of change.
Testing
During problem resolution, the Problem Report Engineer has implemented
changes to the affected software components. She has not, however, thoroughly
tested these changes. Usually, she conducts unit testing and/or component testing
(testing of a set of strongly related software and/or hardware units). During these
tests, she repeats the test cases that have been conducted during the problem
investigation for recreating the software problem. She may also design new
complementary test cases to cover the additional changes made during modification
design. Finally, she must conduct regression testing, that is, repeat the suite of all the
former tests for those units. She must do it to make sure that the new changes have
not affected other parts of the software component. Before regression testing,
however, she should clean up the regression test repository. It may happen that the
new changes have made the old test cases obsolete.
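A minimal JUnit 4 sketch of such a regression suite follows; all class and test names are hypothetical. The point is that the problem-recreation tests from the investigation and the new complementary tests are grouped so that the whole former suite can be rerun after every change:

    import static org.junit.Assert.assertTrue;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        ComponentRegressionSuite.ProblemRecreationTests.class,
        ComponentRegressionSuite.NewChangeTests.class
    })
    public class ComponentRegressionSuite {

        // Tests that recreated the reported problem during investigation.
        public static class ProblemRecreationTests {
            @Test
            public void reportedFailureNoLongerOccurs() {
                assertTrue(true); // placeholder for the recreated-problem check
            }
        }

        // Complementary tests covering the additional changes made during
        // modification design; obsolete cases should be pruned beforehand.
        public static class NewChangeTests {
            @Test
            public void additionalChangeIsCovered() {
                assertTrue(true); // placeholder for a new complementary case
            }
        }
    }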
The integration tests are usually conducted by the departments responsible for
the product components, whereas the system tests are conducted by the separate
testers within the organization. Unfortunately, many maintenance organizations do
not system test the releases that result from corrective changes. When they
do, the testing teams cooperate tightly with the upfront maintainers. It is the upfront
maintainers who know best how their customers use the product. Hence, they are
an important resource for testing the adherence to user expectations, product use
patterns, problems, and other matters.
ME-Process-MI-3.1.1: Conduct a mini version of the Problem Investigation process. Make sure that you understand the problem.
ME-Process-MI-3.1.2: Conduct a mini version of the Cause Identification process. Make sure that all causes are identified and
understood.
ME-Process-MI-3.1.3: Study the software system. Make sure that you understand it.
ME-Process-MI-3.1.4: Study the chosen modification suggestion.
ME-Process-MI-3.1.5: Make the changes. For each component (documentation item) to be modified (at any software system
documentation level), do the following:
ME-Process-MI-3.1.5.1: Make the necessary change(s).
ME-Process-MI-3.1.5.2: Test the change(s) (unit testing).
ME-Process-MI-3.1.5.3: Check that the component is changed according to the documentation/coding standards as defined by the
organisation.
ME-Process-MI-3.1.5.4: Identify and record the parts of the component that have been changed down to the line/figure level.
ME-Process-MI-3.1.5.5: Record the reason for the change (corrective, non-corrective change).
ME-Process-MI-3.1.5.6: Ensure the traceability of the changed documentation item to the other (modified and unmodified)
documentation items.
ME-Process-MI-3.1.5.7: Ensure the traceability of change from the modified documentation item to the problem report and vice
versa.
Process Flexibility
So far, we have described the main phases of problem management process
as suggested by CM3: Problem Management (Kajko-Mattsson, 2001a). The
choice of these phases and their subphases may, however, vary for each problem
management process instance. Below, we illustrate some of the process variances:
• Problem Investigation may occur from zero to several times. In the first case
(zero times), the Problem Report Submitter is a maintenance engineer who
has encountered the problem. She already knows the problem. Therefore, for
some less serious problems, she may directly start conducting Problem
Cause Identification activity. In the second case (several times), the
Problem Report Engineer must check whether the problem exists in several
releases. This may be relevant in cases when different customers use different
releases of the product. It is then important to identify all those releases and
check whether they contain the reported problem.
• Problem Cause Identification may occur from zero to several times. In the
first case (zero times), the Problem Report Engineer (PRE) might have been
able to localize a minor problem cause by herself when investigating some
other problem. In this case, this PRE becomes a Problem Report Submitter.
It is also she who later attends to this problem. The Problem Report
Submitter might also be a tester or another maintenance engineer who has
encountered the problem during testing or attending to some other problem.
Her duty is to give a thorough report on what the problem is and identify the
problem cause(s), if possible. In cases where the problem cause has been
identified by somebody else, the maintenance engineer attending to the
problem should not rely on somebody else’s data. She should identify the
problem cause(s) by herself before designing any problem solutions. It may
also happen that the Problem Report Engineer must identify all the possible
releases in which the problem and its cause(s) have been identified.
• The process phases such as Problem Investigation, Problem Cause
Identification, Modification Design, and Modification Implementation
may merge into one and the same phase. This is the case when the maintenance
engineers attend to some minor (cosmetic) problems. Instead of going through
all these phases, they may implement the solutions directly, as soon as they have
identified the problems. We recommend, however, that these problems and their
solution designs be presented to the Change Control Board (CCB), irrespective of their triviality. One
should not forget to evaluate the impact of change as well. The presentation
to the CCB may occur after the problem has been resolved. However, if the
CCB rejects some solution, then the changes must be unmade.
• One problem may be attended to by many teams. Even so, we suggest
that one team should be responsible for the management of the whole
problem.
• Modification Decision (CCB decision) may take place earlier than after
Modification Design. Some problems are very serious and/or urgent.
Hence, they must be supervised by the CCB or some other authority within
the organization immediately after being reported. They may also be under
continuous supervision during the whole problem management process
(Kajko-Mattsson, 2001a).
• The attendance to the software problem may terminate anywhere within the
problem management process. Below, we identify some of the scenarios
during which the problem management may be terminated.
During the first Problem Analysis subphase, Report Quality Control
and Maintenance Team Assignment, when checking the correctness of the
reported data, the Problem Report Administrator may discover that the
reported problem was not a problem at all; it was a misunderstanding.
The problem cannot be recreated. This can easily arise in the case of a
distributed system or a real-time embedded system. The maintenance organization
may temporarily postpone its resolution until more information is acquired about it.
The customer must agree to live with the problem.
It is too expensive to attend to the problem, and the problem may have
to be lived with and/or worked around.
It is too risky to resolve the problem. For instance, it may require
substantial reorganization of the system, retraining of all customers, too many
changes in the customer documentation, and/or operational changes.
Structure of CM3
As depicted in Figure 14, each constituent CM3 process model is based on
CM3: Definition of Maintenance and Corrective Maintenance. In addition,
each such process model has the following structure: (1) CM3: Taxonomy of
Activities listing a set of activities relevant for the process; (2) CM3: Conceptual
Model defining concepts carrying information about the process; (3) CM3:
Maintenance Elements explaining and motivating the maintenance process activi-
ties; (4) CM3: Roles of individuals executing the process; (5) CM3: Process Phases
structuring the CM3 processes into several process phases; (6) CM3: Maturity
Levels structuring the CM3 constituent processes into three maturity levels (Initial,
Defined, and Optimal); and (7) CM3: Roadmaps aiding in the navigation through
the CM3 processes.
[Figure 14: the structure of CM3. On the base of M3 & CM3 rest the constituent parts: CM3: Taxonomy of Activities, CM3: Conceptual Model, CM3: Maintenance Elements, CM3: Roles, CM3: Process Phases, CM3: Maturity Levels, and CM3: Roadmaps.]
With this structure, we aim towards providing maximal visibility into corrective
maintenance and towards decreasing the subjectivity in the understanding and
measurement of corrective maintenance processes. We hope that the CM3 model
will constitute a framework for researchers and industrial organizations for building
process and measurement models, and serve as a pedagogical tool for universities
and industrial organizations in the education of their students/engineers within the
area of corrective maintenance.
The problem management process is very complex in itself. It may not always
be easy for maintenance organizations to know which process issues to implement
first. For this reason, we have divided our CM3 problem management process
model into three levels, where each level provides relevant guidance on what to
implement. Below, we briefly describe our levels.
[Figure: the problem management process receives a problem and takes it through its activities, among them Modification Design, Modification Decision, and Modification Implementation, until the problem is solved.]
Epilogue
In this chapter, we have presented a problem management process utilized
within corrective maintenance. The theory presented here has been extracted from
a recently defended PhD thesis titled Corrective Maintenance Maturity Model:
Problem Management, which suggests a detailed problem management process
model (Kajko-Mattsson, 2001a). By concentrating our efforts on a limited domain,
we were able to scrutinize it meticulously, establish its definition, scope, and borders
with other processes and, most importantly, to identify all the important process
activities necessary for efficient management of software problems.
CM3: Problem Management has been built by the Software Maintenance
Laboratory, a cooperation between Stockholm University/Royal Institute of Technology
in Stockholm and ABB in Västerås, Sweden. CM3: Problem Management has
been developed primarily in the ABB context. However, it is targeted at all software
organizations involved in building or improving their corrective maintenance
processes. It has been evaluated against 28 industrial non-ABB processes. The
evaluation results have shown that our model is realistic, down-to-earth, and that it
appropriately reflects the current industrial reality. CM3: Problem Management
does the following:
ENDNOTES
1. It is expressed in the Entity-Relationship Model (Powersoft, 1997). See the
Appendix for an explanation of the modelling constructs.
2. The remaining parts may be COTS products, or products developed by
subcontractors.
3. For a detailed description of problem description templates, please read
Kajko-Mattsson (2001a).
REFERENCES
American National Standard Institute/IEEE (ANSI/IEEE STD-982.2). (1988). Guide
for the Use of IEEE Standard Dictionary of Measures to Produce Reliable
Software. Los Alamitos, CA: Computer Society Press.
American National Standard Institute/IEEE (ANSI/IEEE STD-610.12). (1990).
Standard Glossary of Software Engineering Terminology. Los Alamitos,
CA: Computer Society Press.
APPENDIX
MODELING CONSTRUCTS
[The appendix figure illustrates the modelling constructs used in this chapter: entity name, attribute name, generalisation, relationship, and cardinality, expressed as a minimum and a maximum (e.g., optional, at most one).]
Chapter III
EXTREME PROGRAMMING:
A GENTLE INTRODUCTION
One of the emerging techniques for managing software projects is eXtreme
Programming (Beck, 1999 and 2000). XP surely changes the way in which we
develop and manage software, but in this chapter we will also explore how it can
change the way we maintain software. The most interesting feature of eXtreme
Programming is the fact that it is “human oriented” (Brooks, 1995). XP considers the
human factor as the main component for steering a project towards a success story.
It is important to point out that XP is guided by programming rules, and its more
interesting aspects deal with the values that are the real guides for the design,
development and management processes.
The values on which XP is based are:
• Communication: it is the first value, since XP’s main objective is to keep the
communication channel always open and to maintain a correct information
flow. All the programming rules I will describe later cannot work without
communication among all the people involved in the project: customers,
management, and developers. One of the roles of management is to keep
communication up at all times (Blair, 1995).
• Simplicity: the first and only question that you have to answer when
encountering a problem is: “What is the simplest thing that can work?” As an
XP team member, you are a gambler, and the bet is: “It is better to keep it
simple today and to pay a little bit tomorrow for changing it, instead of creating
a very complicated module today that will never be used in the future.” In real
projects with real customers, requirements change often and deeply; it is crazy
to make a complex design today that has to be completely rewritten in one
month. If you keep it simple initially, when the customer changes his mind or
when the market evolution requires a change, you will be able to modify
software at a lower cost.
• Feedback: continuous feedback about the project status is an added value
that can drastically reduce the price that you must pay when changes have to
be made. Feedback is communication among management, customers and
developers, that is, direct feedback; feedback is also tracking and measuring,
so that you can maintain control of the project with indirect measures on the
code, on its complexity, and on the estimated time to make changes.
Feedback is not only communication but also deals with problems of software
measurement and hence with metrics.
• Courage: courage is related to people, but it is also related to the way in
which you develop your project. XP is like an optimization algorithm, such as
simulated annealing, in which you slightly modify your project in order to reach
an optimal point. But when the optimum you reach is only local, you need the
courage to make a strong change in order to find the global optimum. XP with
its rules encourages you to refactor when your project is stalled, but in order
to refactor deeply you need courage.
These values are the bases for the 12 XP rules. The following is a short
description of these rules.
Planning Game
During software development, commercial needs and technical considerations
have to steer the project with equal weight. For these reasons, the planning of an
XP project is decomposed into several steps. Starting from a general metaphor, for
each functionality a story is written; a story summarizes what the function has to do
and what results have to be obtained from it. Each story is then divided into a
variable number of tasks, which are, for example, classes or parts of classes. Stories
are then sorted by relevance, where relevance is related to the benefit to the user and to the
added value obtained by the implementation of the story for the whole project.
Within each story, tasks are then sorted by relevance, and the most important ones are
selected by the manager in order to choose the subfunctionality to be implemented
first. These are the ingredients of the recipe called the planning game.
Planning has to be treated as a game in which each person plays a role and has
a specific assignment to be carried out. Managers have to make decisions about
priorities, production deadlines, and release dates, while developers have to:
estimate the time for completing assigned tasks, choose the functionality that has to
be implemented first among those selected by the customer for being implemented
in the current release, and organize the work and the workspace in order to live
better.
Small Releases
Each software release should be given out as soon as possible, with the
functionality of greatest value correctly working. A single release is comprised of
several subreleases that implement single functionalities, which in XP are called tasks.
Several tasks collected together, forming a more complex and self-contained
functionality, make up a story that can be implemented during a release. Note that,
working this way, it is not possible to implement only half of a
functionality; it is only possible to shorten the release time.
Metaphor
Each XP project is summarized as a metaphor or a collection of metaphors that
are shared among customers, management, and developers. The metaphor helps the
customer describe in natural language what he wants and, if refined by technical
persons, it can be a good starting point for defining requirements. The metaphor that
can be assumed, for example, for an XML editor is that of a word processor (that
means cut, copy and paste operations plus file loading/saving and the capability of
writing something) that is able to write only tags with text inside. The vocabulary of
the editor is the schema or DTD set of tags, the spell checker is the XML well-
formedness and the grammatical check is the validation against DTD or schema.
Tests
Programmers write unit tests in order to be confident that the program will
cover all the needed functionality; customers write functional tests in order to verify
high-level functionality. You do not necessarily have to write one unit test for each
method, but you have to write a test for each part of the system that could possibly
break now or in the future. Remember also to address the problem of monkey testing,
in which the system has to deal with crazy input data.
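As a minimal sketch of this practice, here is a JUnit-style unit test written against a hypothetical Account class; the test, including a small monkey test feeding crazy input, defines the behaviour the code must satisfy before the code itself is fleshed out:

    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.fail;

    import org.junit.Test;

    public class AccountTest {

        @Test
        public void depositIncreasesBalance() {
            Account a = new Account();
            a.deposit(100);
            assertEquals(100, a.balance());
        }

        // Monkey test: the system must deal sensibly with crazy input data.
        @Test
        public void rejectsCrazyInput() {
            Account a = new Account();
            try {
                a.deposit(-999999);
                fail("a negative deposit should be rejected");
            } catch (IllegalArgumentException expected) {
                // the crazy input was handled as required
            }
        }

        // Hypothetical class under test; at write-tests time this would be
        // only a skeleton filled with "dumb code".
        static class Account {
            private int balance;
            void deposit(int amount) {
                if (amount < 0) throw new IllegalArgumentException("negative amount");
                balance += amount;
            }
            int balance() { return balance; }
        }
    }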
Simple Design
Each part of the system has to justify its presence by means of the functionality
actually used. In general, the simplest design at any given time (i) makes all the tests
run, (ii) has no code or class duplication, (iii) is quite self-documenting, adopting
explanatory names and commenting on all the functionality, and (iv) has the least
possible number of classes and methods to correctly implement the functionality.
Refactoring
Before adding a new functionality, verify whether the system can be simplified in order
to reduce the work needed to add it. Before test phases, programmers ask themselves whether it
is possible to simplify the old system in order to make it smarter. These techniques
are at the basis of continuous refactoring. Keep in mind that refactoring is carried out to
simplify the system and not to implement new functions that are not useful.
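A tiny before-and-after sketch (the code is invented for illustration) shows the flavour of such a simplification: the duplicated discount logic is factored out before any new discount kind is added.

    // Before: two methods repeat the same computation.
    class OrderBefore {
        double priceWithMemberDiscount(double price) { return price - price * 0.10; }
        double priceWithSeasonDiscount(double price) { return price - price * 0.25; }
    }

    // After: the duplication is removed, so adding the next discount kind
    // becomes a one-line change instead of another copied method.
    class OrderAfter {
        private double discounted(double price, double rate) {
            return price - price * rate;
        }
        double priceWithMemberDiscount(double price) { return discounted(price, 0.10); }
        double priceWithSeasonDiscount(double price) { return discounted(price, 0.25); }
    }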
Pair Programming
All the code must be written by two programmers in front of the same computer
(Spinellis, 2001; Williams, Kessler, Cunningham, & Jeffries, 2000). This can be a
good way to instruct new team members, reducing skill shortage and, in all cases,
is a good practice to increase productivity and reduce error probability, since one
person is continuously stimulated by the other to produce quality code and to
correct the errors that the other inserts in the program. Even a not-so-skilled
programmer can be a valid aid in pair programming, since he can ask questions of
the more skilled person, who can then produce a better testing suite. Feedback helps the
less skilled to improve his system knowledge, and the right question at the right moment
can help the more skilled find a solution that simplifies the code.
Continuous Integration
The code is tested and integrated several times a week and in some cases a
few times a day. This process improves the probability of detecting hidden faults introduced in
other parts of the system by the planned modification. The integration ends only
when all the previous and the new tests work correctly.
Collective Ownership
Whoever sees the possibility to simplify a part of the code has to do it. With
individual code ownership, only the person who has developed a part is able to modify
it, and thus the knowledge of the system decreases rapidly. Consider the possibility
that the code-owner accepts a job with another company; in that case … you are
really in trouble without the adoption of collective ownership.
On-site Customer
The customer has to be present in the workgroup and has to give feedback to
developers. He also has to write functional tests in order to verify that the system
works correctly and has to select the functions that add greater value to the system
in order to choose which functionalities should be present in the next release of the
software.
40-Hour Weeks
You cannot work for 60 hours a week for a long time. You’ll become tired and
stressed and your code will have a lower quality. In addition, your capability to
interact with other people will decrease (Brooks, 1995).
Coding Standards
The adoption of a coding standard, so that all programmers will have the same
way to write code, minimizes the shock of seeing code not formatted to your
personal standards. In particular, comments, especially those for automated
documentation generation, have to be standardized; parentheses and indentation
should be uniform among all the code in order to facilitate comprehension and
collective ownership.
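The following fragment sketches what such a standard might prescribe for a Java team: a fixed Javadoc format that feeds the automatically generated documentation, plus uniform brace placement and indentation. The class itself is invented for illustration.

    /**
     * Formats monetary amounts for invoices.
     * Every file carries the same header fields, per the team standard.
     */
    public class InvoiceFormatter {

        /**
         * Formats an amount given in cents as a printable string.
         *
         * @param cents the amount in cents; must be non-negative
         * @return the amount formatted as "x.yy"
         */
        public String format(int cents) {
            if (cents < 0) {
                throw new IllegalArgumentException("cents must be non-negative");
            }
            return (cents / 100) + "." + String.format("%02d", cents % 100);
        }
    }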
In the following, these rules will be discussed from the point of view of
maintenance, showing that XP can help achieve a constant maintenance cost over
time.
The XP rules and values have a strong impact on the life cycle of the project,
and the life cycle itself drives the maintenance activities, as will be shown in the
following. For that reason, it is necessary to introduce the XP life cycle and to
consider its impact on maintenance.
Considering that each task partially overlaps the previous one and
the next, a graph such as that reported in Figure 1 can be drawn for effort assignment.
The life cycle of an XP task is slightly different because of the approach to
program development using XP rules. The phases of task development in XP are:
• Requirements and analysis: these tasks are usually performed together since
the customer is on-site, and the feedback about functionalities to be implemented
is faster and more precise. This reduces the effort and the time for
these operations compared to a more classical approach.
• Design: in an XP project, we never design for tomorrow; we only address
the problems related to the task selected by the on-site customer. This reduces
the time-window, so that the effort for this phase can be better employed for
other task phases.
• Refactoring: continuous refactoring is one of the rules of XP and a refactoring
phase has to be performed before any modification is introduced in the system.
During this phase, the system is analyzed in order to determine if the possibility
of system simplification exists. This task can take more time than the design
phase since all the subsystems related with the task under development have
to be revised and eventually improved and/or simplified. Since small releases
are a must in XP, the refactoring activities are very close to each other,
approximating a continuous refactoring.
• Write-Tests: during this phase, the classes/functions of the systems needed to
implement the new functionalities are realized in the form of a skeleton, and the
code inserted is only for the purpose of satisfying, as with dumb code, the unit
tests we have to write in this phase. Unit tests must cover all the aspects that
have the probability of not working.
[Figure: effort (person/days) over time (days 1 to 17) for the requirements, analysis, design, code, test, and integration phases, comparing the XP and evolutionary life cycles.]
In Figures 6 and 7, the testing points for measuring the maintenance
effort needed by each approach are marked by arrows. For each
point, the maintenance effort is calculated by supposing that an error is detected
at that time. With respect to the time scale reported in the figures, we adopt the
following notation: ERR@t means that we discover an error at time t. A detailed
description of the error’s impact on the previous phases is given in the
discussion of Tables 1 and 2.
Considering Figure 6, the errors detected, and the phases they impact for a
certain percentage of the phase effort, are shown in Table 1.
An explanation of the table is needed in order for the reader to better
understand the reported numbers:
• ERR@3: An error detected in the middle of the analysis phase partially impacts
(50%) a review of the requirements, and also the analysis already performed
(for which a 25% impact has been supposed).
• ERR@5: As for the analysis phase, we can suppose that an error detected
during the design impacts the maintenance effort in a similar manner: 50%
of requirements and analysis, and 25% of design.
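To make these percentages concrete, consider a small illustrative calculation (the effort figures are our assumptions, not data from the study): if, by time 3, the requirements phase has consumed 4 person/days and the analysis phase 2 person/days, then the maintenance effort for ERR@3 is approximately

    0.50 × 4 + 0.25 × 2 = 2.5 person/days,

that is, half of the requirements effort plus a quarter of the analysis effort already spent.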
[Figure: task timeline (days 1 to 17) across the requirements, analysis, design, coding, testing, and integration phases.]
Table 1: Maintenance effort with respect to the effort already spent, expressed
as a function of the time instant at which an error is identified, for the classical
evolutionary life cycle.
The phases preceding a successful test/integration phase can be considered stable. This
assumption holds since at each test/integration phase all the unit and functional tests
for the system are executed, and thus the written code and the analysis-design
phases can be considered well done.
With XP, errors can be discovered during the design and refactoring phases
and during the test-integration phases; therefore, considering the phases before
a successful test as stable, Table 2 can be drawn.
Also in this case, an explanation of the table is needed for the reader to better
understand the reported numbers:
• ERR@3: An error detected in the middle of the design phase partially impacts
(50%) a review of the requirements and of the analysis, and also the
design already performed (for which a 25% impact has been supposed).
• ERR@5: As for the design phase, we can suppose that an error detected
during refactoring impacts the maintenance effort in a similar manner:
50% of the requirements, analysis and design phases, and, on average, 25%
of the refactoring.
• ERR@8: The same considerations made above can be
repeated for errors detected during this phase, noting also that all the
testing work already performed should be repeated in order to be sure of
delivering a stable version of the re-factored system.
• ERR@10, ERR@12, ERR@14, ERR@16: In this case (ERR@10), the
error is detected during a test phase. Since the re-factored code can be
considered stable because of the already performed tests, the error has
probably been inserted in the last phase of coding. This means that the written
code since the last successful test phase should be partially (50%) revised, and
the test phase has to be completely redone. Similar considerations can be
repeated for testing points at times 12, 14, and 16.
Table 2: Maintenance effort with respect to the effort already spent, expressed
as a function of the time instant at which an error is identified, for the XP life
cycle. R&A means Requirements and Analysis; D means Design; R means Re-
Factoring; WT means Write Tests; TI means Test Integration; Cn is the nth
Coding Subphase; I&Tn the nth Integration and Test phase.
Figure 9: Maintenance costs in the XP life cycle (the y-axis scale is the same
as that of Figure 8).
XP IMPACT ON MAINTENANCE
In this section, the impact of the XP rules and practices on maintenance will be
highlighted. The rules, and the values on which the rules are based, will be examined in
order to bring out the added value that each practice can give to the maintenance
process.
Communication
Good communication means that all team members know exactly what is
happening on the code; that is, each person can modify each part of the system. If
he/she is not able to do so with his partner, he/she will look for another partner for
the specific task. The impact of this value on maintenance is evident—the reduction
of time-to-production for each team member. Communication value strongly helps
to push down the modification cost curve, since no proprietary part of code exists.
Communication between the team and the customers also helps to identify errors
and misunderstandings about the project as soon as possible, minimizing the time
spent on useless or unnecessary parts of code.
Feedback
Feedback between the customer and those who understand the software
enhances the probability of identifying the greatest part of the problems. The direct
contact with customers, who also write functional tests, is a strong push in the
direction of a continuous preventive maintenance activity that allows a reduction in
global costs of the whole maintenance process.
Courage
Courage, in the sense of XP, can improve the simplicity of a system and
therefore reduce the maintenance cost and improve the effectiveness of mainte-
nance activities.
Courage, especially in the refactoring phase, allows a marked reduction, as
previously shown, in future maintenance costs.
Planning Game
Smart management is always an aid during all the phases of software
development, and therefore during maintenance. Consider also the fact that the
functionality with the highest added value is developed first and then tested in many
different operating conditions, allowing the delivery of more stable code for at least
the main functionalities. The planning game (small tasks to be implemented and
tested in sequence) impacts strongly on the life cycle, and hence on maintenance costs.
Small Releases
Adding only a few functionalities at a time, and verifying each release with the
customer, reduces the risk that corrective maintenance will have to be carried out.
Smaller releases, closer together in time, improve problem solving in the sense
that, if a fault is present, it will be eliminated more quickly, reducing the time to market
of a stable software version.
Metaphor
A general guideline that is clear for all the persons involved in the software
development and test activities helps to focus the project aims and scope. Metaphor
has no direct consequences on maintenance, but the idea of a metaphor to be
implemented as a story and as tasks changes the life cycle of a project in significant ways.
Tests
It is obvious that testing reduces the probability of undiscovered errors
appearing in the future. From the point of view of maintenance, automated unit and
functional tests allow the identification of problems introduced by the changes
made to the code. Regression testing is universally recognized as an activity to
reduce future maintenance cost during corrective maintenance, and regression
testing is the usual practice in XP.
Refactoring
Maintenance for the sake of maintenance, also called perfective maintenance,
is not universally recognized as a good practice. When it is used to enforce project
simplicity and to reduce the probability of future corrective maintenance activities,
however, it has to be considered a good rule to follow. XP continuously refactors with this
aim.
Pair Programming
It is evident that four eyes see better than two, but the greater advantage of pair
programming is the sharing of knowledge among the team members. The reduction
of maintenance costs is obtained by diminishing the time needed for programmers
to understand a part of code written by other people, since all the advantages of
collective code ownership are a direct consequence of this particular development
technique.
Continuous Integration
The addition of small parts of code to a stable version reduces the probability
of inserting errors since the code can be integrated only when all the previous tests
continue to work correctly. This modus operandi reduces the probability of
inserting indirect errors in other parts of the system and thus curtails the costs related
to corrective maintenance, since these activities are performed directly during
integration whenever the regression testing fails.
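The rule can be pictured as a small integration gate; the sketch below uses JUnit's programmatic runner, and the test class name is hypothetical (here it reuses the illustrative AccountTest from the Tests section):

    import org.junit.runner.JUnitCore;
    import org.junit.runner.Result;

    public class IntegrationGate {
        public static void main(String[] args) {
            // Rerun the previous and the new tests before accepting the code.
            Result result = JUnitCore.runClasses(AccountTest.class);
            if (result.wasSuccessful()) {
                System.out.println("All tests pass: integration may proceed.");
            } else {
                System.out.println(result.getFailureCount()
                        + " failure(s): fix them before integrating.");
            }
        }
    }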
Collective Ownership
Collective ownership of code obviously facilitates the sharing of knowledge,
but the main advantage with respect to maintenance is due to the need to adopt a
revision control system for storing the code. In the case of an error, the project can be
reverted to the last stable release of the problematic module, giving the customer
the impression that maintenance activities can be performed without a great impact
on the installed systems and with minimal reduction in functionality.
On-site Customer
The presence of the customer on-site improves the maintenance factors
related to functional tests and feedback. The customer selects the most relevant
stories for the project. These stories are realized first, and then they are tested and
re-factored many times. These activities give more stability and simplicity to the
functions that are more relevant for the customer, increasing customer satisfaction
and reducing the probability of project rejection.
40-Hour Weeks
Extra work is not good for code quality, nor for maintenance activities. This
can be trivial, but it is true.
Coding Standards
A better comprehension of the source code simplifies the change process and
thus reduces the time needed for maintenance activities. A standardized way of
writing comments also enables automatic documentation, which is improved each time a
comment line is added.
CONCLUSION
From all the examined features, it can easily be seen that maintenance
costs can be kept under control if all the previous rules are applied correctly and
with the right aim. Constant maintenance costs during the software life cycle are no
longer a chimera; they are not easy to achieve, but with the adoption of the right
programming practices, a good approximation can be reached. XP has no “silver
bullets” for reducing maintenance costs; however, each rule reinforces all the others,
creating an environment in which maintenance is the normal status of a project, and
in which all the people involved work to reduce the costs related to the main activity as much
as possible.
The major disadvantage of XP concerns its applicability to industrial projects. From
my experience and point of view, a large effort is needed to convince programmers
to work in pairs, exploiting pair programming, and to change the way in which
they program, i.e., writing tests before implementing code and standardizing the
way in which they write code comments and documentation. One of the rules of
thumb for the success of XP techniques is cooperative work among people, and
from my experience, cooperative work can be successfully achieved in a group no
larger than 10 programmers. Thus, you can control maintenance costs with the
adoption of XP, but you must be very careful to deploy XP techniques in a group
that is ready to apply its rules.
REFERENCES
Beck, K. (1999). Embracing change with extreme programming. IEEE Com-
puter.
Beck, K. (2000). Extreme Programming Explained: Embrace Change.
Boston, MA: Addison Wesley Longman.
Blair, G. M. (1995). Starting to manage: The essential skills. IEEE Engineers
Guide to Business Series.
Boehm, B. (1981). Software Engineering Economics. Englewood Cliffs, NJ:
Prentice-Hall.
Brooks, F. (1995). The Mythical Man-Month. Reading, MA: Addison-Wesley.
DeMarco, T. (1982). Controlling Software Projects. Englewood Cliffs: Yourdon
Press.
Chapter IV
Patterns in Software
Maintenance: Learning
from Experience
Perdita Stevens
University of Edinburgh, Scotland
INTRODUCTION
Software maintenance is widely seen as the “poor relation” of the software
engineering world. The skills that it requires are less widely recognized than those
needed by more glamorous areas such as design or even testing. Few books on
software maintenance are available, and those that exist are seldom seen on
bookshops’ shelves. Software maintenance is gradually being recognized as an
important area to cover in software engineering degree courses, but even now it is
often covered in just a few hours of teaching. The overall effect is that maintainers
of software generally acquire their skills almost entirely on the job.
in the field of software maintenance, and in the following section we discuss how this
application can be done in the context of an organization’s processes. Finally, we
discuss future trends and conclude.
BACKGROUND
Patterns
First, we give a little more detail concerning patterns, their origin in architecture,
and their adoption in software design.
The term pattern in the sense meant here was coined by the architect
Christopher Alexander. In books such as A Pattern Language: Towns, Build-
ings, Construction and The Timeless Way of Building (Alexander, 1979;
Alexander et al., 1977) he collected solution parts that he had seen used
successfully on several occasions, and wrote them up in a standard, easy to consult
form. [A summary of the form used by Alexander can be seen in Figure 1, where
all quotations are from Alexander et al. (1977, pp. x-xi and 833-837).] He explored
the process of solving recurring problems by balancing different considerations
(sometimes here, and often later, called forces) that anybody looking for a good
solution in a constrained solution space faces. For example, in his pattern Window
Place, he describes the tendency for people to want to be near the window in any
room and points out that this may conflict with the desire to sit in a comfortable sitting
area within the room. The solution suggested—which shares with many successful
patterns an “obvious once you see it’’ quality—is to make sure that there is a
comfortable sitting area near the window. Individually, Alexander’s patterns were
not new insights; they were tried-and-tested solutions that had been used many
times by different architects. Alexander’s great contribution was to collect them
together and to explain them and their connections in a standard way that made it
easy for people to browse and consult his books. He intended that the patterns
should form a pattern language; that they should be generative in the sense that
once the designer has chosen a suitable subset of the available patterns, the design
should in some sense emerge from them. Note that Alexander also insisted that one
could use the same pattern many times without doing the same thing twice, so there
is nothing mechanistic about the process of getting from a list of pattern names to
a design.
Even at the time, Alexander was conscious of trying to introduce a significantly
different approach to knowledge codification and transfer. However, in that field
patterns never achieved the prominence that they now enjoy in software design. It
would be interesting to study why that is; I suspect that it has to do, at least partly,
with the different perceptions of individual creativity in the two fields.
Patterns in software design. It is in object-oriented software design that
patterns have become a widely used and understood technique for recording and
Figure 1: A summary of the form used by Alexander.
1. A short, descriptive name for the pattern. May be adorned with one or two
asterisks denoting increasing confidence in the pattern.
2. A picture showing “an archetypal example” of the pattern
3. A short paragraph on how the pattern “helps to complete certain large pat-
terns”
4. A graphical divider (three diamonds)
5. A sentence (occasionally two) describing the problem, set in bold type. E.g.,
“Everybody loves window seats, bay windows, and big windows with low sills
and comfortable chairs drawn up to them.”
6. The main section, discussing what the pattern solution is and how, why, and
when it works. May include photographs and/or diagrams.
7. Keyword “Therefore:”
8. A sentence (or two) summarizing the solution, again set in bold type. E.g., “In
every room where you spend any length of time during the day, make at least
one window into a ‘window place.’”
9. A summary diagram.
10. A graphical divider (three diamonds).
11. A short paragraph on “smaller” patterns that may work with this pattern.
transferring problem-solving expertise. One pioneer was Erich Gamma, who later
wrote with Richard Helm, Ralph Johnson and John Vlissides (1995) what is still the
most widely used book of software design patterns; this is the book often known
as the Gang of Four or GoF book. Another was Kent Beck, whose book (Beck,
1996) included patterns that recorded good habits of coding, as well as some
concerning higher level design.
By comparison with Alexander’s intentions for patterns, generativity is played
down in the software design world. While the idea that software designs would be
made entirely of patterns did appear — that is, some people believed (and may still
believe) that there are object-oriented systems in which every class and every
relationship between classes can be defined by the roles it plays in the patterns in
which it takes part — this has never been a mainstream view. It is more common
for a system to include the use of only a few patterns, deployed to solve particular
problems that arise.
Is this simply because not enough patterns have been written down? Perhaps,
but to answer this question we would have to understand what we mean by
“enough.” The object-oriented design world was not short of methodologies before
the arrival of patterns. There were plenty of books describing in general terms how
to identify a good set of classes and define the interactions between objects of those
classes. What was missing was more particular—descriptions of how to avoid the
pitfalls that could occur even when following such a methodology, the pitfalls that
experienced designers knew how to steer around. This was the gap that design
patterns filled.
It is worth mentioning that we can already start to see the relevance of patterns
to software maintenance. In many cases, the problems that design patterns solved
were not so much problems of how to design a system in the first place, but of how
to design a maintainable system. Patterns such as State, Strategy, Visitor (Gamma
et al., 1995) and many others are concerned with localizing hotspots in the design,
that is, ensuring that certain kinds of anticipated change would require minimal
alterations to the system.

Several formats have been used for recording design
patterns. In general, the formats tend to be more structured than Alexander’s, with
a larger number of set parts; the pattern typically includes fragments of design
diagrams, and often also fragments of code. Figure 2 describes the structure of the
GoF patterns.
Whatever the format, a pattern is a structured, named description of a good
solution to a common problem in context. Its name gives it a handle that can be used
in conversation among people who are familiar with the pattern; people can say
“Shall we use Visitor?” rather than having to explain a design at length. The use of
an illustrative example helps the reader to understand; this is not to be confused with
the examples of real use of a pattern that often occur later in the pattern description.
(Ideally, a pattern should have been used at least three times by different
organizations before being considered mature; this is the Rule of Three.) It includes
a discussion of the pros and cons of the solution, helping the reader to consider
alternatives and choose the best way to proceed. In software design, a solution
normally takes the form of a fragment of design; however, in software maintenance
it is common for the most vital information to be about the process of arriving at a
new situation, rather than about the new situation itself.

Figure 2. The structure of a GoF pattern

Name: Short; often descriptive, but at least memorable. Annotated with the name of the
category into which this pattern falls, e.g., Object Behavioral.
Intent: A sentence or two summarizing what the pattern achieves.
Also Known As: Aliases, since several design patterns are known under different names
for historical reasons.
Motivation: The description of the overall problem; often includes a simple example.
Applicability: “Use the Name pattern when” and a list of pointers to its applicability, e.g.,
“many related classes differ only in their behavior.”
Structure: One or more diagrams.
Participants: Brief description of the role played by each class in the pattern.
Collaborations: How the pattern works, dynamically.
Consequences: Description of the benefits of using the pattern where it is applicable, and
of the drawbacks of doing so.
Implementation: Description of implementation issues, i.e., lower level considerations
than were discussed under Consequences.
Sample code: With explanations; typically, the code corresponds to the simple example
introduced under Motivation.
Known uses: Comments on real systems that used this pattern.
Related patterns: May include alternatives to this pattern, as well as “larger” and “smaller”
patterns that may interact well with this one.
Process patterns. The term process pattern covers patterns in which the
solution is not a fragment of an artifact, such as a design, but a way of carrying out
a process. A variety of process pattern languages have been written that, while not
specific to software maintenance, are relevant to this field.
For example, Alistair Cockburn has written a collection of patterns describing
how to manage risks that arise in teamwork (Cockburn, 2001). His pattern
Sacrifice one person applies in circumstances where members of the team are
often interrupted by urgent tasks (for example, requests for progress reports,
support on earlier products, etc.). Whereas it is natural—and in some circum-
stances optimal—to try to share out these interruptions so that no one member of
the team gets unduly behind in their main task, Cockburn suggests that in certain
circumstances it is better to designate one member of the team as the interface to
the outside world and to have this person handle all interruptions so that the rest of
the team may continue unhindered. Although the sacrificial victim’s “real” work will
have to be carried by the rest of the team, this can be more efficient than allowing
everyone to be interrupted. Some discussion by Cockburn and others follows,
mentioning, for example, the need to be aware of the feelings of the person who is
sacrificed. Other examples of pattern languages relevant but not exclusive to
software maintenance include Brad Appleton’s Patterns for Conducting Process
Improvement (Appleton, 1997) and Scott Ambler’s Reuse Patterns and
Antipatterns. (An antipattern is a description of a bad way of doing something.)
A very useful web page collecting resources on process patterns is Ambler’s
Process Patterns Resource Page at http://www.ambysoft.com/processPatternsPage.html.1
Refactoring
Refactoring is a simple technique for making improvements in the design of
an existing body of code, while minimizing the chances of introducing new bugs in
the process. It has come to prominence recently in the context of extreme
programming (XP); see for example Beck (2000).
As with patterns, refactoring makes explicit good practice that has been
developed many times over by individual developers, rather than creating a radical
new technique. It applies particularly when one attempts to add a feature to a system
whose design is not well adapted to support that feature, whether that happens in
initial development of the system or in perfective maintenance. The natural, naive
approaches are either to ignore the deficiencies of the design and hack the feature
in somehow, or, at the next level of professionalism, to improve the design at the
same time as adding the feature. The idea of refactoring is to observe that the latter
procedure is very error prone. One often—even normally—finds that the new
feature does not work as intended, and then faces the task of determining whether
it is the understanding of the redesign or the understanding of the new feature that
is at fault. Worse is the case where the new feature works but regression testing
shows that something old is now broken. Is the feature responsible in some
interesting way for breaking something that previously worked, or is it just that the
redesign has not been properly conceived or implemented? Complex debugging
tasks like these can easily lead developers back to the original “hack it in somehow”
solution; symptoms include people saying things like “I daren’t touch this code,
anything I do to it breaks it.”
More experienced developers will recognize that the problem is not attempting
to redesign parts of the underlying system, as such, but rather, trying to do too much
at once, and especially trying to make changes to a system without a completely
clear understanding of what the resulting behavior should be. Refactoring captures
this idea. A refactoring is a change, usually a small change, made to a system that
is not intended to change its behavior; that is, it is a change that contributes to
improving the design of a system, for example, getting the system to a point where
its design supports a new feature smoothly and easily.
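As a minimal illustration of such a change, consider extracting a duplicated banner into a helper method (an Extract Method; all names here are hypothetical); the observable behavior is identical before and after, which is what keeps the regression tests meaningful:

    // Before the refactoring, printOwing() printed the three banner lines
    // inline; after Extract Method, the banner has its own named method.
    class Invoice {
        private double amount;

        void printOwing() {
            printBanner();                           // was: three println calls inline
            System.out.println("amount: " + amount);
        }

        private void printBanner() {
            System.out.println("**************************");
            System.out.println("***** Customer Owes ******");
            System.out.println("**************************");
        }
    }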
This tends to be a counterintuitive idea for new developers, who, especially
under time pressure, often consider that they haven’t time to change the design of
a system at all, let alone to do so in a succession of small steps, compiling and testing
after each step. It is true that to work this way requires a supportive infrastructure
of useful tests and usually the ability to compile and test a subsystem separately from
the rest of the system; it will not work in a situation where the only way to test your
code is to build an entire system that takes many hours to compile! Usually,
however, the impression that this will be an inefficient way to work proves to be an
illusion. It is human nature to discount the possibility that one will make mistakes,
and so people tend to see the extra time that repeated compilations and test running
will take and not see the extra time that the alternative, more error-prone procedure
will take in debugging and bugfixing.
The general idea of refactoring, like the general idea of using design patterns,
applies to any system in any paradigm. Again, however, it is in the object-oriented
world that the idea has come to prominence. Martin Fowler has written a book
(Fowler, 1999) that describes some of the commonest design improvements that
can be made to object-oriented systems. He writes them up in a structured form
reminiscent of that used for design patterns, describing explicitly what steps should
be performed, in what order, and at what stages to run regression tests.
[Figure 3. The Strategy pattern: the Context class and the Strategy class hierarchy.]
The new Context class contains an attribute that is a reference to an object of some
subclass of Strategy, and provides code for switching algorithms. The new abstract
Strategy class defines the interface that all algorithms are required to satisfy, and
may also define common supporting methods or data. The specific subclasses of
Strategy implement the unique parts of each algorithm. Figure 3 illustrates.
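A minimal sketch of that structure in code, with hypothetical method names, might look as follows:

    // Context holds a reference to an object of some subclass of Strategy
    // and provides the code for switching algorithms.
    class Context {
        private Strategy strategy;

        void setStrategy(Strategy s) {
            this.strategy = s;
        }

        int run(int input) {
            return strategy.algorithm(input);   // delegate to the current algorithm
        }
    }

    // The abstract Strategy defines the interface that all algorithms are
    // required to satisfy; common supporting methods or data could live here.
    abstract class Strategy {
        abstract int algorithm(int input);
    }

    // The specific subclasses implement the unique parts of each algorithm.
    class ConcreteStrategyA extends Strategy {
        int algorithm(int input) { return input + 1; }
    }

    class ConcreteStrategyB extends Strategy {
        int algorithm(int input) { return input * 2; }
    }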
The result is that, although the class structure of the resulting system is more
complex, it is easier to find the relevant pieces of code for a given purpose. More
important for our purpose here, it is straightforward to add, remove, or modify an
algorithm during maintenance; for example, adding a new algorithm requires
creating a new subclass of Strategy and modifying the Context code that selects an
object of that class as the current strategy.
Whether the additional structural complexity is worth accepting depends on
the situation. It is more likely to be acceptable if there are expected to be many
algorithms or if they have a complex structure of sharing pieces of code. Even in such
an apparently technical problem, non-technical factors may be important. For
example, if several people are to be involved in implementing the functionality of
Context, it is far more likely to be worth using the Strategy pattern than if the whole
is to be implemented by one developer, because splitting up the functionality in a
clear, easy-to-understand way will then be important.
Please note that this description has been extremely abbreviated; the full
discussion in Gamma et al. (1995) occupies nine pages.
An anecdote told by Michael Karasick when he presented his very interesting
paper (Karasick, 1998) at a recent Foundations of Software Engineering confer-
ence is also pertinent. The group was working on a development environment that
was intended to be long lived and very flexible; it was intended that a variety of tools
to perform different kinds of analysis and manipulation of the structure of the user’s
program would be added in future. This is, in classical design patterns terms, an
obvious situation in which to be using the Visitor pattern. However, the elegant
solution proposed by this pattern involves very complex interactions and is
notoriously difficult for people to “get their heads around.” Moreover, the devel-
opment team had a high rate of turnover; new recruits had to be constantly educated
in the architecture of the system. In the end, this was the major factor that convinced
the team that it was not sensible to use Visitor. They used instead a simpler approach
that involved the use of run-time type information. For them, the drawbacks of this
solution—for example, the possibility of type errors in the tool’s code that the
compiler would not be able to detect—were acceptable, given the difficulty of
making sure that everybody who needed to be was happy with the Visitor solution.
This is not, of course, to say that another group or the same group at a different time
would have come to the same conclusion. For example, design patterns are much
more widely used and taught now than they were then. If the probability that new
recruits would already be familiar with Visitor had been a little higher, the IBM
team’s decision might have been different.
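To make the trade-off concrete, here is a rough sketch of the two designs; the node and tool names are hypothetical rather than taken from Karasick’s system:

    // Visitor: each node accepts a visitor, and double dispatch routes control
    // to the matching visit method. Adding a tool means adding one Visitor
    // implementation, but the indirect control flow takes effort to learn.
    interface Visitor {
        void visitClassNode(ClassNode n);
        void visitMethodNode(MethodNode n);
    }

    interface Node {
        void accept(Visitor v);
    }

    class ClassNode implements Node {
        public void accept(Visitor v) { v.visitClassNode(this); }
    }

    class MethodNode implements Node {
        public void accept(Visitor v) { v.visitMethodNode(this); }
    }

    // The run-time type information alternative: a tool simply branches on the
    // type of each node. The flow is obvious, but a forgotten branch is a
    // silent gap that the compiler cannot detect.
    class PrintingTool {
        void process(Node n) {
            if (n instanceof ClassNode) {
                System.out.println("class node");
            } else if (n instanceof MethodNode) {
                System.out.println("method node");
            }
            // a new Node subtype would fall through here unnoticed
        }
    }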
Whether and how to introduce a design pattern. So, if a maintenance team
inherits a system that does not make use of a design pattern in what seems like an
obvious place for one, should it introduce the pattern? And are the questions to be
answered the same as those for the original development team, or are they different
in character?
The maintenance team may be better placed than the original development
team was to see what flexibility the design needs. The real requirements on a system
are generally much clearer once the system is in use; whereas the original
development team may have guessed that a certain kind of flexibility would be
required in future, the maintenance team may be in a position to know that it will be.
If there are several change requests in the pipeline that would be easier to fulfill if
a particular design pattern were in place, then it may be sensible to adopt the design
pattern when the first such change request is actioned.
Let us return to our Strategy example. Suppose that the system currently
incorporates two different algorithms, and that a boolean attribute of a monolithic
context class ParagraphSplitter controls which one is in use. Several methods of this
class have the form
    if (usingAlgorithmB) {
        // do algorithm B stuff
        ...
    } else {
        // do algorithm A stuff
    }
The change request being considered is to add a third algorithm, C.
When only two algorithms were available, Strategy, if it was considered at all,
probably seemed like overkill. This judgment may or may not have been “correct”;
in any case, it was an educated guess based on judgments about exactly such
unknowns as whether it would be necessary to add new algorithms in future.
The maintainer has several options:
1. Keep usingAlgorithmB as it is, add another boolean usingAlgorithmC, and
insert a new if/else statement inside the existing else case.
2. Replace the boolean usingAlgorithmB by a variable, say algorithm, of an
enumerated type A,B,C, and replace the if/else by a three-way case
statement.
3. Use Strategy; that is, split off most of ParagraphSplitter’s functionality into
new classes for each of the three algorithms, together with an abstract
algorithm class.
The first is likely to be a mistake; it fails to make explicit the invariant that at most
one of the booleans usingAlgorithmB and usingAlgorithmC should be true, and the
code structure will be confusing. (The exceptional case where this might be
acceptable would be when algorithm C was in fact a variant of algorithm B, so that
the branch “Are we using a B-type algorithm? If yes, then is it, more specifically,
algorithm C?’’ would reflect the domain.) To decide between the last two options,
we would need to know more about the problem, for example, how complicated
the algorithms are. A rule of thumb is that case statements are unlikely to be
confusing if the whole of the case statement can be seen on one screen, so that in
such a case, the second option might be reasonable.
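A sketch of what the second option looks like follows; it uses a modern Java enum, which postdates the chapter’s original code, and hypothetical method names. Note that the invariant that exactly one algorithm is selected now holds by construction:

    enum Algorithm { A, B, C }

    class ParagraphSplitter {
        private Algorithm algorithm = Algorithm.A;  // exactly one algorithm at a time

        void splitStep() {
            switch (algorithm) {
                case A:
                    // do algorithm A stuff
                    break;
                case B:
                    // do algorithm B stuff
                    break;
                case C:
                    // do algorithm C stuff
                    break;
            }
        }
    }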
If the third option is chosen, the best way to introduce the Strategy pattern is
by a sequence of refactorings. First, the system should be changed so that it uses
the Strategy pattern with only the two existing algorithms. That can itself be done
in several stages; most obviously, one could first separate out a new Strategy class
from ParagraphSplitter, moving the flag usingAlgorithmB and the code for the two
algorithms into that class, and creating an appropriate interface for ParagraphSplitter
to use to communicate with its Strategy. Then the Strategy class itself can be split
into an abstract base class with common utility methods, and two concrete
subclasses AlgorithmA and AlgorithmB, each implementing the algorithm interface
agreed upon at the previous stage. Most care is needed in deciding how the creation
and swapping of the Strategy objects is achieved, since this will introduce some
dependencies on the concrete subclasses of Strategy and these should be localized
and limited. Whichever resolution of that problem is used, the system should be
back in working order, passing its regression tests, before the new algorithm is
added; this separates concerns about the new algorithm from more general
concerns about the architecture of the system.
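A rough sketch of the end state of this sequence, with hypothetical names beyond those given in the text:

    import java.util.Arrays;
    import java.util.List;

    // Stage two: the extracted Strategy class has been split into an abstract
    // base class plus one concrete subclass per algorithm.
    abstract class SplitStrategy {
        abstract List<String> split(String paragraph);  // interface agreed earlier
        // common utility methods shared by the algorithms would live here
    }

    class AlgorithmA extends SplitStrategy {
        List<String> split(String paragraph) {          // formerly the else branch
            return Arrays.asList(paragraph.split("\n"));
        }
    }

    class AlgorithmB extends SplitStrategy {
        List<String> split(String paragraph) {          // formerly the if branch
            return Arrays.asList(paragraph.split("\\. "));
        }
    }

    // ParagraphSplitter keeps only the code that selects and uses a strategy;
    // dependencies on the concrete subclasses are localized here.
    class ParagraphSplitter {
        private SplitStrategy strategy = new AlgorithmA();

        void selectAlgorithm(String name) {
            strategy = name.equals("B") ? new AlgorithmB() : new AlgorithmA();
        }

        List<String> split(String paragraph) {
            return strategy.split(paragraph);
        }
    }

Once the regression tests pass on this two-algorithm version, adding algorithm C is then a matter of one new subclass plus a localized change in selectAlgorithm.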
least a starting point and some initial record of best practice. For example, the
pattern Deprecation discusses the important, common, but often overlooked
problem of how to manage the evolution of a library (of functions, components, or
whatever) that is being widely used. The basic technique is that when elements must
be dropped from the library, they are as a first step labelled “deprecated” before
being dropped in a later release of the library. There are, however, several
considerations—such as the nature of the applications that use the library, the willingness
of the library’s users to upgrade, etc.—that affect the success of such a strategy and
which are easily overlooked.
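In Java, for instance, the first step of that technique might look like the following sketch; the library class, methods, and release numbers are all hypothetical:

    public class ReportLibrary {
        /**
         * @deprecated As of release 2.0, replaced by {@link #renderReport(String)};
         *             scheduled to be dropped in release 3.0.
         */
        @Deprecated
        public String printReport(String data) {
            return renderReport(data);   // forward to the replacement meanwhile
        }

        public String renderReport(String data) {
            return "Report: " + data;
        }
    }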
For more information, see the websites of the groups at
http://www.reengineering.ed.ac.uk and http://www.iam.unibe.ch/~famoos/patterns/,
respectively.
Anthony Lauder and Stuart Kent at the University of Kent have pursued a
program of work grounded in Lauder’s work over several years with Electronic
Data Processing PLC, a software company specializing in software for sales and
distribution. Lauder worked with EDP to develop parallel catalogues of petrifying
patterns and productive patterns. Essentially, this approach separates consider-
ation of commonly occurring problems from discussion of tried-and-tested solu-
tions. “Petrifying pattern” is Lauder’s coinage. The term captures both the tendency
of unsuccessful practices to reduce flexibility, turning parts of a system to stone, and
the effect on the unprepared developer of having to deal with the results of such
practices! Petrifying patterns are not patterns in the classical sense, but they can be
illuminating, especially when paired with productive patterns. Similar descriptions of
problems have elsewhere been called anti-patterns.
As an example, we will consider the petrifying pattern Tower of Babel and its
antidote productive pattern Babel Fish, both from Lauder (2001). As before, the
reader should be aware that what follows are necessarily summaries of much longer
descriptions.
Tower of Babel. The problem described in this pattern is the difficulty of
making applications written in different languages interoperate. This can, for
example, lock developers into a legacy system. If new functionality needs to be
added to the legacy system and the developers do not perceive that there is any easy
way of making the legacy system interact with a new, separate component that
provides the new functionality, they are driven to implement the functionality inside
the legacy system, perhaps in a legacy language. Conversely, migration of function-
ality out of a legacy system is also hampered. One of the concrete examples given
by Lauder of how this problem arose in EDP concerns its sales order processing
system that interacted with its legacy financials package. EDP wanted to replace the
legacy financials package with a better third-party financials package, but found this
impractical given that the systems were in different languages (Commercial Basic,
Currently the best source of information on Lauder’s work is his recent PhD
thesis (Lauder, 2001); earlier work in the program is reported by Lauder and Kent
(2000, 2001).
discussed in the next section, but eventually, it could build up a pattern or, if the
problem is better split into parts, a pattern language that might be useful to new
members of the team. What might not be so obvious at the outset is that there seem
to be benefits even to experienced team members in attempting to codify their
knowledge in this way; for example, interesting discussions are likely to arise out of
considering why a particular instance of the problem was solved in a different way
from most.
Most software development organizations routinely make use of the Web, and
it is natural to use this medium for dissemination of the codified expertise, whether
in pattern form or some other. This has many advantages, such as the ease of
producing patterns with standard HTML authoring tools, easier version control
than with paper-based solutions, and the familiarity of the interface. There are
a number of disadvantages as well: while it is easy for an individual to search
purposefully for a particular pattern in a web-based tool, it is less easy for a team
meeting to consult its pattern collection during its discussions, and people will be
unable to use patterns as their bedtime reading. Experimenting with providing a
“printer-friendly” version (with version information) might be worthwhile.
that play the standard roles in the pattern. Figures 4 and 5 illustrate two variants of
the notation in the case of the Strategy pattern example; for more details, see OMG
(2001) or a good UML book.
Learning a repertoire of patterns. To a large extent, becoming familiar with
collections of patterns is an essentially individual activity; little seems to be gained
by attempting to mandate the use of patterns. Managers can facilitate the process
by purchasing books of patterns and perhaps by putting items such as “any relevant
patterns” onto meeting agendas and document templates. For example, a team
could routinely ask whether any pattern is applicable when it decides how to tackle
a difficult modification request. It is to be expected that the answer will frequently
be that the team does not know any relevant pattern; once the problem is solved,
if the solution seems generally applicable, the team might consider summarizing the
solution it found in pattern form.
Many pattern-reading groups exist; typically, the group meets for lunch or over
a drink to discuss one pattern or a small number of patterns. Teams that are
interested in getting into patterns might consider starting their own small group.
There are definite advantages in a team of people being familiar with the same
collection of patterns, because then the pattern names can be used in conversation
as a kind of high-level (system or process) design vocabulary.
Writing patterns. It may well be impossible to identify dedicated effort for
writing patterns, and integrating pattern development and use into the process
organically may be the only way to proceed. Even if it is possible to allocate effort
specifically to pattern writing, an organization’s pattern catalogue needs to be a
living artifact if it is to succeed; a one-off effort may help to start the process, but
it will not suffice on its own.
Therefore, an organization that wishes to write patterns needs to identify the
existing opportunities for doing so. For example, if there are regular project
meetings to assess progress and discuss problems, then perhaps pattern writing
should be a standing agenda item. Writing good patterns is not easy and is an art
rather than a science, so it should not be expected that a pattern will be produced
in its final form at such a meeting. There are several “pattern languages for pattern
writing” available on the Web that may help a team new to pattern writing to get
started; in book form, John Vlissides’ short book (Vlissides, 1998) is well worth
reading for this purpose, even though it concentrates on design patterns rather than
process patterns. It is important to remember that, provided that there is some way
of identifying how much or how little reliance can safely be put on a pattern (Alexander’s
asterisks, for example), it does not matter if a particular pattern is just a rough draft;
if it proves useful but incomplete, it can be improved at a later date. In such cases
it is especially important that readers of the draft pattern know whom to contact for
more information and discussion.
Notice that patterns written in this way need not be organization-specific.
Solutions that are gleaned from other sources such as books, papers, or conver-
sations might be recorded in this way for the benefit of team members who are not
familiar with the original source. In such a context it may be sensible to write very
abbreviated patterns that might not describe the solution in detail, but that would,
rather, provide a pointer to the full source, such as “see pages 73-78 of Bloggs’
book Software Maintenance in the team room.”
FUTURE TRENDS
What are the Next Steps for Pattern Use in
Software Maintenance?
Design patterns and process patterns that are not specific to software
maintenance will continue to be written; it is to be hoped that their visibility to all
those who may find them useful will continue to increase. In my view, one of the signs
of the increasing maturity of the patterns world is the gradual diminution of the
“hype” surrounding patterns. At this stage, it is important that we investigate the
limits of what patterns can do and understand where they are and are not
appropriate. Part of this investigation involves an understanding of what it is that
people get out of patterns. I have hinted at a belief that patterns are not, in fact,
mainly used as cookbook solutions to current problems. I suspect that patterns have
several more important roles, for example, as learning material and as thought
experiments.
Patterns specific to software maintenance or, yet more specifically, to software
reengineering have a more uncertain future. My own experience leads me to suspect
that the use of such patterns by a maintenance organization to document the
solutions that work in its own context will ultimately prove to be more fruitful than
the documentation of solutions that apply to a wide class of organizations. It seems
to me that maintenance is so organization-dependent that it is difficult to codify, in
pattern form or in any other way, much that can genuinely be of use in many
organizations.
On a cynical note, one might also argue that the use of a fashionable technique
such as patterns might help to attract more attention to the Cinderella discipline of
software maintenance.
CONCLUSIONS
In this chapter we have introduced the idea of using patterns in software
maintenance. We have covered three main types of pattern use: the use of design
patterns, the use of existing process patterns, and the writing and use of
organization-specific patterns. We have briefly discussed some practical ways in
which patterns of these different kinds may be relevant, and how they can be
introduced into a maintenance organization’s practice.
A chapter such as this can only be an introduction, but the references that
follow and the URLs in the text should provide leads into the wider literature for any
reader who would like to explore further.
ENDNOTES
1. This, like all URLs in this chapter, was last checked on 1 December 2001.
REFERENCES
Alexander, C. (1979). The Timeless Way of Building. Oxford, UK: Oxford University
Press.
Alexander, C., Ishikawa, S., Silverstein, M., Jacobson, M., Fiksdahl-King, I., & Angel,
S. (1977). A Pattern Language: Towns, Buildings, Construction. New York, NY:
Oxford University Press.
Chapter V
Enhancing Software
Maintainability by Unifying
and Integrating Standards
William C. Chu
Tunghai University, Taiwan
Software standards are highly recommended because they promise faster and
more efficient ways for software development with proven techniques and standard
notations. However, designers who adopt standards such as UML and design
patterns to construct models and designs during development suffer from a lack
of communication and integration among the various models and designs. Moreover,
the implicit inconsistencies caused by making changes to components of those
models and designs significantly increase the cost of, and the errors introduced
during, maintenance. In this chapter, an XML-based unified model is proposed to help
solve the problems and to improve both software development and maintenance
through unification and integration.
INTRODUCTION
Software systems need to reach the market fast and to be evolvable, interoperable,
reusable, cross-platform, and much more. Maintaining software systems is now
facing more challenges than ever before, due to 1) the rapid changes of hardware
platforms, such as PDA, WAP phone, Information Appliance, etc., 2) emerging
software technologies, such as Object Oriented, Java, middleware, groupware,
etc., and 3) new services, such as E-commerce, mobile commerce, services for
Application Service Provider (ASP), services for Internet Content Provider (ICP),
etc.
Due to the high complexity of software systems, development and mainte-
nance usually involve teamwork and high cost. However, most systems are
developed in an ad hoc manner with very limited standards enforcement, which
makes software maintenance very difficult. De facto standards, such as Unified
Modeling Language (UML) (OMG, 2001), or XML (Connolly, 2001; Lear,
1999), are used to reduce communication expenses during the software life cycle
and to increase maintainability and reusability. Design Patterns (Gamma, Helm,
Johnson, & Vlissides, 1995) are reusable solutions to recurring problems that occur
during software development (Booch, 1991; Chu, Lu, Yang, & He, 2000; Holland,
1993; Johnson & Foote, 1988; Lano & Malik, 1997; Meyer, 1990; Ossher,
Kaplan, Harrison, Katz, & Kruskal, 1995; Paul & Prakash, 1994; Xiao, 1994).
However, each of these standards usually covers only some phases of the software
process. For example, UML provides standard notation for modeling software
analysis and design but lacks support for the implementation and maintenance
phases; design patterns help in the design phase; and component-based
technologies focus on the implementation phase.
In other words, these standards do not currently talk to each other, and therefore
designers must spend a great deal of manual effort mapping and integrating them
while crossing each phase of the software life cycle. The activities
of software maintenance involve the whole software life cycle, including require-
ment, design, implementation, testing, and maintenance phases. Not only the model
used in each phase, but also the mapping and integration of models between phases
will affect the efficiency of software maintenance.
Without unifying and integrating these standards, the consistency of the models
cannot be maintained, and the extent of possible automation is very limited. This chapter
proposes an XML-based meta-model to unify and integrate these well-accepted
standards in order to improve maintainability of the software systems.
This chapter will discuss the adopted standards, including UML, design
patterns (Gamma et al., 1995), component-based framework, and XML. A
comparison and mapping of these standards will be presented. An XML-based
unified model is used to unify and integrate these various models.
BACKGROUND
Developing and maintaining a software system are very difficult tasks due to
their high complexity. Various models and standards have been proposed to reduce
the complexity and the cost of software development. However, no standard or
model can cover all dimensions of the software process.
In order to understand the activities of software development and mainte-
nance, we should first understand the process of the software life cycle. Figure 1
shows a typical process of the software life cycle, which includes analysis, design,
implementation, and maintenance phases (Sommerville, 1996).
During the process of system development, various kinds of domain knowl-
edge, methodologies, and modeling standards will be adopted according to the
specific requirements and individual expertise. As a result, the differences between
methodologies and standards make development and maintenance difficult, and
information can be lost in the transformation from one phase to another. In the
following, related methodologies, software standards, and studies are surveyed to
disclose the problem itself, as well as some noteworthy efforts responding to that
demand.
[Figure 1. A typical process of the software life cycle: requirement analysis, design, implementation, and maintenance.]
Formalization of OOA/D
A formal method is a systematic and/or mathematical approach to software
development; it begins with the construction of a formal specification describing the
system under development (Bourdeau & Cheng, 1995). Formal specifications
document software requirements or software designs using a formal language. A
formal specification can be rigorously manipulated to allow the designer to assess
the consistency, completeness, and robustness of a design before it is implemented.
The advantages of using formal methods are obvious in that the uses of notations
are both precise and verifiable, and the facilitation of automated processing is
feasible. Much progress was made in formal methods, in both theory and practice,
in the early 1990s, most notably the development of the Z notation (Spivey,
1992). Z notation models the data of a system using mathematical data types and
describes the effects of system operations using predicate logic; supporting tools
such as ZTC (Jia, 1998b) and ZANS (Jia, 1998a) are available. Combining
the strengths of object-orientation and formal methods therefore seems attractive. However, many
aspects of object-oriented analysis and design models still remain informal or semi-
formal, such as the data and operation specifications (Jia, 1997). The informal
methods and approaches are seductive, inviting their use to enable the rapid
construction of a system model using intuitive graphics and user-friendly languages;
yet they are often ambiguous, resulting in diagrams that are easily misinterpreted
(Cheng, Campbell, & Wang, 2000).
Some researchers have tried to formalize object-oriented modeling, enabling
developers to construct object-oriented models of requirements and designs and
then to generate formal specifications for the diagrams automatically (Cheng et al.,
2000). Other projects
have explored the addition of formal syntax and/or semantics to informal modeling
techniques (Hartrum & Bailor, 1994; Hayes & Coleman, 1991; Moreira & Clark,
1996; Shroff & France, 1997). In Jia and Skevoulis (1998), a prototype tool,
Venus, is proposed to integrate the popular object-oriented modeling notation
UML and a popular formal notation Z (Jia, 1997). Wang and Cheng (1998) give
a brief overview of the process that has been developed to support a systematic
development of the object-oriented models from requirements to design. But such
studies consider only some of the phases of development, or address issues
only in a specific domain or application.
all the activities for software development; however, in most cases only
domain-specific applications of limited scale benefit.
Design patterns and frameworks are two approaches to solving the problem of
enabling certain elements of designs to be recorded and reused, but they differ
significantly. We will compare them in terms of what they provide and how they
are intended to be used. First, design patterns and frameworks differ in size:
inside a framework, more than one design pattern may be applicable. Second, they
have different contents: a framework defines a complete architecture that is a
mixture of design and code, whereas a design pattern is a pure design idea that can
be adapted and implemented in various ways according to the choices of the
developers. From these comparisons, we can see that design patterns are more
portable than frameworks.
Component Based Software Engineering (CBSE). As stated in Szyperski
and Pfister (1997), a software component is a unit of composition with contrac-
tually specified interfaces and explicit context dependencies only. A software
component can be deployed independently and is subject to composition by third
parties. By using the well-proven software parts, reusable component software
engineering has the advantages of fast development, easy configuration, stable
operation, and convenient evolution. Component-based software reuse is consid-
ered one of the most effective approaches to improve software productivity. The
research of software reuse covers the scope of software components identification,
extraction, representation, retrieval, adaptation, and integration. Only a few
researchers (Chu et al., 2000) have addressed the whole process. In practice,
component-based software engineering still suffers from some problems. Smith,
Szyperski, Meyer, and Pour (2000) summarize the critical problems as follows: lack
of a unified interconnection standard; lack of a proven economic model for selling
components; and lack of a standard for component specification. Many problems
remain to be solved in this field before reusable component software engineering
can become more practical.
Traditional software development usually produces custom-made software
that usually has the significant advantage of being optimally adapted to the user’s
business model, and it can fulfill all the requirements of the in-house proprietary
knowledge or practices. But it also has the severe disadvantage of huge cost.
A software component is what is actually deployed, like an isolated part of a
system. It differs from an object, which is almost never sold, bought, or deployed
on its own. A component could just as well use a totally different implementation
technology, such as pure functions or assembly language following some special
flow, with nothing in it that looks like objects. So even though in some books and
papers the terms “component” and “object” are used interchangeably, they are
quite different. But when we talk about reuse, we have to deal with both of them
together.
[Figure: the phases of the software life cycle (requirement analysis, design, implementation, maintenance) and the standards that cover them: UML, design patterns, frameworks, and CBSE.]
[Figure: the structure of XUM, with sub-models (SM1 ... SMn) for the class diagram, collaboration diagram, design patterns, and other views, related by associations and transformations and linked to the source code.]

The ComponentType schema describes the basic information of a component. The
XML schema of ComponentType is defined as follows:

<xsd:complexType name="ComponentType">
<xsd:annotation>
<xsd:documentation>Definition of primitive element: Component</xsd:documentation>
</xsd:annotation>
<xsd:attribute name="name" type="xsd:string" use="required"/>
<xsd:attribute name="id" type="xsd:string" use="optional"/>
</xsd:complexType>
The AssociationType schema defines the information and the types needed to
describe the relationships between components. The XML schema of
AssociationType is defined as follows:
<xsd:complexType name="AssociationType">
<xsd:annotation>
<xsd:documentation>Definition of primitive element: Association</xsd:documentation>
</xsd:annotation>
<xsd:attribute name="from" type="xsd:string"/>
<xsd:attribute name="to" type="xsd:string"/>
</xsd:complexType>
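For illustration only, an element declared with this type (such as the Association element in the appendix schema) might be instantiated along the following lines; the attribute values echo the library example used later in this chapter:

<Association from="ReservationMediator" to="Book">
  <Integration_link xlink:href="ReservationMediator_Book"/>
</Association>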
Abstraction_Link
...
<xsd:complexType name="Unification_linkType">
<xsd:annotation>
<xsd:documentation>Definition of primitive element: Unification_link</xsd:documentation>
</xsd:annotation>
<xsd:attribute name="xlink:type" type="(locator)" use="fixed" value=""/>
<xsd:attribute name="xlink:arcrole" type="xsd:CDATA" use="required"/>
<xsd:attribute name="xlink:href" type="xsd:string" use="required"/>
<xsd:attribute name="xlink:title" type="xsd:CDATA"/>
<xsd:attribute name="xlink:from" type="xsd:NMTOKEN"/>
<xsd:attribute name="xlink:to" type="xsd:NMTOKEN"/>
</xsd:complexType>
The Sourcecode_link represents the link between the component and its
corresponding source code and is defined as follows:
represented can be integrated and unified in XUM. Therefore, when a model (view)
gets changed, the changes can be reflected to other related models (views).
Each model adopted from standards has its corresponding XUM represen-
tation and its schema is defined in XUMM. Transforming modeling information into
XUM is not a difficult task. Due to the space limitation, we only show the structure
of XUMM in Figure 5 and its mapping rules in Table 1. The detailed schema of
XUMM is shown in the Appendix.
AN EXAMPLE
In this section, to demonstrate the feasibility of our XUM approach, we have
prepared the following example: the development and maintenance of a library
subsystem for book-loan management.
[Figure 5. The structure of XUMM as a class diagram: an analysis phase with Actor, Use Case, and Relationship (from, to, type); a design phase with Class, Design Pattern, Class_Association (from, to, type), and Collaboration_Association (from, to, sequence, type, message); and an implementation phase, connected by <<uses>> relationships.]
[Figure: part of the use case diagram of the library subsystem; the Manager uses the Maintain Book use case.]
The design pattern Mediator has been applied to this design, which happens
to cover the same set of classes shown in Figure 9 in this example. The XUM
representation of Mediator is shown in Figure 11. Figure 12 shows the collabora-
tion diagram of the system, and its XUM representation is shown in Figure 13.
The capturing of modeling information from models and transforming them into
XUM is quite systematic and straightforward. From previous diagrams and their
corresponding XUM representations, each model adopted from standards has its
corresponding XUM view. The view in XUM explicitly and faithfully represents
the semantics of components and their relations, which may be only implicitly
represented in the adopted standard model. The naming of elements in models and
XUM views is the same. Therefore, the two-way mapping between models and
views has been constructed in our XUM approach.
<Class_Diagram>
<Class name="Mediator">
<Integration_link xlink:href="D_Mediator"/>
<Class name="ReservationMediator">
……
<Class name="Book">
……
<Class name="Book_Borrower">
……
<Class name="Reservation">
……
<Class name="Colleague">
……
<Class_Association from="Mediator" to="ReservationMediator" type="generalization" client="1">
<Integration_link xlink:title="Mediator_ReservationMediator"
  xlink:label="Association of Mediator_ReservationMediator"
  xlink:href="Mediator_ReservationMediator"
  xlink:from="D_Mediator" xlink:to="D_ReservationMediator"/>
<Class_Association from="ReservationMediator" to="Reservation" type="dependency" client="0..n">
……
<Class_Association from="ReservationMediator" to="Book" type="dependency" client="1">
……
<Class_Association from="ReservationMediator" to="Book_Borrower" type="dependency" client="1">
……
<Class_Association from="Book" to="Colleague" type="generalization" client="1">
……
<Class_Association from="Book_Borrower" to="Colleague" type="generalization" client="1">
……
<Class_Association from="Reservation" to="Colleague" type="generalization" client="1">
……
<Class_Association from="Colleague" to="Mediator" type="dependency" client="1">
……
</Class_Diagram>
[Figure 12. Collaboration diagram of the system (book-return scenario), including message 3: updateBookState() to :Book and message 4: updateBorrowerState() to :Book_Borrower.]
<Collaboration_Diagram>
<Class name="Mediator">
<Integration_link xlink:href="D_Mediator"/>
</Class>
<Class name="ReservationMediator">
……
<Class name="Book">
……
<Class name="Book_Borrower">
……
<Class name="Reservation">
……
<Class name="Colleague">
……
<Collaboration_Association from="ReservationMediator" to="Reservation" sequence="1" message="returnBook()">
<Integration_link xlink:href="ReservationMediator_Reservation"/>
</Collaboration_Association>
<Collaboration_Association from="ReservationMediator" to="Book" sequence="2" message="updateReservation()">
……
<Collaboration_Association from="ReservationMediator" to="Book_Borrower" …>
……
</Collaboration_Diagram>
scale systems, the maintenance problems may become a disaster and unmanageable.
Therefore, the costs of software maintenance are usually much greater than those of
developing new, similar software (Sommerville, 1996). There are several reasons for this:
• Maintenance staff are often inexperienced and unfamiliar with the software.
• Suitable tools are lacking.
• The software structure is difficult to understand.
• The related documentation is usually unreliable and inconsistent, offering very
limited assistance to software maintenance.
One of the difficulties in software maintenance is maintaining the consistency
of the various documents, including requirement documents, design documents,
comments in the source code, and the source code itself. However, these documents
are often missing or inconsistent; the source code is usually the only reliable source
of up-to-date information. Without a mechanism to enforce the existence and
consistency of these documents, the software maintenance problem cannot be
solved.
Here we use the same example to show how XUM can help with software
maintenance. We assume a new function that notifies users who have reserved
a book when it is returned to the library. The component Notification is added to
the collaboration diagram, as shown in Figures 12 and 14. The resulting XUM of
the modified collaboration diagram is shown in Figure 15. A new <Class>,
Notification, is specified. However, its <Integration_link>, which is supposed to
point to a class specification in XUM, is undefined at the current stage, since its
class specification has not been written yet. A new <Collaboration_Association>
is defined for ReservationMediator and Notification.
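A sketch of the relevant fragment of Figure 15, following the chapter's notation (the sequence number and message name are taken from Figure 14; the rest is assumed):

<Class name="Notification">
  <!-- Integration_link undefined here: no class specification exists yet -->
</Class>
<Collaboration_Association from="ReservationMediator" to="Notification"
    sequence="5" message="sendMail()"/>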
From the missing <Integration_link> to the class specification, the change can be
reflected to the class diagram view shown in Figures 10a and 10b. Figure 10a does
not have the specification of class Notification; Figure 10b is also missing the
<Class> and <Class_Association> entries for class Notification. In order to
[Figure 14. The modified collaboration diagram, viewed with the constraint of the Mediator pattern: the additional class :Notification receives message 5: sendMail(), alongside 3: updateBookState() to :Book and 4: updateBorrowerState() to :Book_Borrower.]
FUTURE TRENDS
According to the discussion and validation above, we believe that the XUM
approach proposed in this chapter can be extended and benefit more activities in
various processes for software development, especially software maintenance. We
suggest the future directions based on our unified model and list them in the
following:
1. Linking reuse technologies into the software development process with
unified model
Applying reuse at an earlier phase of the software life cycle can reduce the
software cost and increase the software productivity greatly. Without inte-
grating and unifying the models used in each phase, the links from requirements
through design to implementation are missing. A model reused at an early phase
has no way to link to its corresponding source code. With the
support of the unified model, the integration with software reuse technologies
can be another direction that needs to be studied further.
2. The modeling of design patterns
Most of the available design patterns are not yet formally specified and
modeled. Although we have tried to capture the information of the design
patterns in our model, they are not well specified yet. In order to accomplish
CONCLUSIONS
In this chapter, we have proposed an XML-based unified model that can
integrate and unify a set of well-accepted standards into a unified model represented
in a standard and well-accepted language, XML. The survey of these adopted
standards and their roles in the software life cycle are presented in this chapter as
well.
The XUM can facilitate the following tasks:
1) The capturing of modeling information from models and its transformation into
views of XUM.
2) The two-way mapping of modeling information among models and XUM
views.
3) The integration and unification of modeling information of different views in
XUM.
4) The support of systematic manipulation.
5) The consistency checking of views represented in XUM.
REFERENCES
Aho, A.V., Kernighan, B.W., & Weinberger, P.J. (1979). Awk—A pattern scanning
and processing language. Software-Practice and Experience, 9(4), 267-280.
Atlee, J.M., & Gannon, J. (1993). State-based model checking of event-driven system
requirements. IEEE Transactions on Software Engineering, 19(1), 24-40.
Booch, G. (1991). Object-oriented design with applications. Redwood City, CA:
Benjamin/Cummings.
Booch, G. (1994). Object-oriented analysis and design with applications, 2nd ed.
Redwood City, CA: Benjamin/Cummings.
Bourdeau, R.H., & Cheng, B.H.C. (1995). A formal semantics for object model
diagrams. IEEE Transactions on Software Engineering, 21(10), 799-821.
Chen, D.J., & Chen, T. K. (1994, May). An experimental study of using reusable
software design frameworks to achieve software reuse. Journal of Object-
Oriented Programming, 7(2), 56-67.
Cheng, B.H.C., Campbell, L.A., & Wang, E.Y. (2000). Enabling automated analysis
through the formalization of object-oriented modeling diagrams. In the Proceed-
ings International Conference on Dependable Systems and Networks 2000
(DSN 2000), IEEE, 305-314.
Chu, W.C., Lu, C.W., Chang, C.H., & Chung, Y.C. (2001). Pattern based software
re-engineering. Handbook of Software Engineering and Knowledge Engi-
neering, Vol. 1. Skokie, IL: Knowledge Systems Institute.
Chu, W.C., Lu, C.W, Yang, H., & He, X. (2000). A formal approach to component
retrieval and integration. Journal of Software Maintenance, 12(6), 325-342.
Connolly, D. (2001). The extensible markup language (XML). The World Wide Web
Consortium. Retrieved August 21, 2001 from http://www.w3.org/XML.
Deitel, H., Deitel, P., Nieto, T., Lin, T., & Sadhu, P. (2001). XML How To Program.
Upper Saddle River, NJ: Prentice Hall.
Do-Hyoung, K., & Kiwon, C. (1996). A method of checking errors and consistency
in the process of object-oriented analysis. In Proceedings of the 3rd Asia-
Pacific Software Engineering Conference (APSEC ’96), IEEE, 208-216.
Gamma, E., Helm, R., Johnson, R., & Vlissides, J. (1995). Design Patterns: Elements
of Reusable Object-Oriented Software. Reading, MA: Addison-Wesley.
Gunter, C.A., Gunter, E.L., Jackson, M., & Zave, P. (2000, May/June). A reference
model for requirements and specifications. IEEE Software, 17(3), 37-43.
Hartrum, T.C., & Bailor, P. D. (1994). Teaching formal extensions of informal-based
object-oriented analysis methodologies. In the Proceedings of Computer
Science Education, 389-409.
Hayes, F., & Coleman, D. (1991). Coherent models for object-oriented analysis. In
Proceedings on ACM OOPSLA’91. ACM, 171-183.
Holland, I.M. (1993). The design and representation of object-oriented compo-
nents. PhD thesis, Northeastern University. Retrieved March 20, 1996 from
http://www.ccs.neu.edu/home/lieber/theses-index.html.
Holzmann, G.J. (1997). The model checker SPIN. IEEE Transactions on Software
Engineering, 23(5), 279-295.
Jia, X. (1997). A pragmatic approach to formalizing object-oriented modeling and
development. In Proceedings of the COMPSAC ’97—21st International
Computer Software and Applications Conference. IEEE, 240-245.
Jia, X. (1998a). A Tutorial of ZANS—A Z Animation System. Retrieved February 25,
1998 from http://venus.cs.depaul.edu/fm/zans.html.
Jia, X. (1998b). ZTC: A Type Checker for Z Notation, User’s Guide, Version 2.03.
Retrieved August 12, 1998 from http://venus.cs.depaul.edu/fm/ztc.html.
Jia, X., & Skevoulis, S. (1998). VENUS: A Code Generation Tool, User Guide,
Version 0.1. Retrieved August 25, 1998 from http://venus.cs.depaul.edu/fm/venus.html.
Johnson, R.E., & Foote, B. (1988). Designing reusable classes. Journal of Object-
Oriented Programming, 1(2), 22-35.
Koskimies, K., Systä, T., & Tuomi, J. (1998). Automated support for modeling OO
software. IEEE Software, 15(1), 87-94.
Lano, K., & Malik, N. (1997). Reengineering legacy applications using design
patterns. In the Proceedings of the 8th International Workshop on
Software Technology and Engineering Practice, IEEE, 326-338.
Lear, A.C. (1999). XML seen as integral to application integration. IT Professional,
2(5), 12-16.
Meyer, B. (1990). Tools for the new culture: Lessons from the design of the Eiffel
libraries. Communications of the ACM, 33(9), 68-88.
Moreira, A.M.D., & Clark, R.G. (1996). Adding rigour to object-oriented
analysis. Software Engineering Journal, 11(5), 270-280.
Moser, S., & Nierstrasz, O. (1996). The effect of object-oriented frameworks on
developer productivity. IEEE Computer, 29(9), 45-51.
Murphy, G.C., Notkin, D., & Sullivan, K.J. (2001). Software reflexion models:
Bridging the gap between design and implementation. IEEE Transactions on
Software Engineering, 27(4), 364-380.
APPENDIX
XUMM SCHEMA
<?xml version="1.0" encoding="UTF-8"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2000/10/XMLSchema"
    xmlns:xlink="http://www.w3.org/1999/xlink" elementFormDefault="qualified">
<xsd:complexType name="ComponentType">
<xsd:annotation>
<xsd:documentation>Definition of primitive element: Component</xsd:documentation>
</xsd:annotation>
<xsd:attribute name="name" type="xsd:string" use="required"/>
<xsd:attribute name="id" type="xsd:string" use="optional"/>
</xsd:complexType>
<xsd:complexType name="AssociationType">
<xsd:annotation>
<xsd:documentation>Definition of primitive element: Association</xsd:documentation>
</xsd:annotation>
<xsd:attribute name="from" type="xsd:string"/>
<xsd:attribute name="to" type="xsd:string"/>
</xsd:complexType>
<xsd:complexType name="Unification_linkType">
<xsd:annotation>
<xsd:documentation>Definition of primitive element: Unification_link</xsd:documentation>
</xsd:annotation>
<xsd:attribute name="xlink:type" type="(locator)" use="fixed" value=""/>
<xsd:attribute name="xlink:arcrole" type="xsd:CDATA" use="required"/>
<xsd:attribute name="xlink:href" type="xsd:string" use="required"/>
<xsd:attribute name="xlink:title" type="xsd:CDATA"/>
<xsd:attribute name="xlink:from" type="xsd:NMTOKEN"/>
<xsd:attribute name="xlink:to" type="xsd:NMTOKEN"/>
</xsd:complexType>
<xsd:element name="XUMM">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Requirement">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="UseCase_Diagram">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Actor" type="ComponentType" minOccurs="0" maxOccurs="unbounded">
<xsd:annotation>
<xsd:annotation>
<xsd:documentation>Element(component) of XUM for design: Class</xsd:documentation>
</xsd:annotation>
<xsd:complexType>
<xsd:complexContent>
<xsd:extension base="ComponentType">
<xsd:sequence>
<xsd:element name="Attributes" minOccurs="0" maxOccurs="unbounded">
<xsd:complexType>
<xsd:complexContent>
<xsd:extension base="ComponentType">
<xsd:attribute name="type" type="xsd:string"/>
<xsd:attribute name="limit">
<xsd:simpleType>
<xsd:restriction base="xsd:NMTOKEN">
<xsd:enumeration value="public"/>
<xsd:enumeration value="protect"/>
<xsd:enumeration value="private"/>
</xsd:restriction>
</xsd:simpleType>
</xsd:attribute>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
</xsd:element>
<xsd:element name="Operations" minOccurs="0" maxOccurs="unbounded">
<xsd:complexType>
<xsd:complexContent>
<xsd:extension base="ComponentType">
<xsd:attribute name="limit">
<xsd:simpleType>
<xsd:restriction base="xsd:NMTOKEN">
<xsd:enumeration value="public"/>
<xsd:enumeration value="protect"/>
<xsd:enumeration value="private"/>
</xsd:restriction>
</xsd:simpleType>
</xsd:attribute>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
</xsd:element>
<xsd:element name="Integration_link" type="Unification_linkType" maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>A link to indicate the unification relationship to share/refer a class in different models</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="Sourcecode_link" type="Unification_linkType">
<xsd:annotation>
<xsd:documentation>A link to connect the corresponding source code to its class in design</xsd:documentation>
</xsd:annotation>
</xsd:element>
<xsd:element name="Abstraction_link" type="Unification_linkType" minOccurs="0" maxOccurs="unbounded"/>
</xsd:sequence>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
</xsd:element>
<xsd:element name="Association" maxOccurs="unbounded">
<xsd:complexType>
<xsd:complexContent>
<xsd:extension base="AssociationType">
<xsd:sequence>
<xsd:element name="Integration_link" type="Unification_linkType" maxOccurs="unbounded"/>
</xsd:sequence>
<xsd:attribute name="id" type="xsd:string"/>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
</xsd:element>
<xsd:element name="Class_Diagram" maxOccurs="unbounded">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Class" maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>Element(component) of XUM for design: UML Class Diagram class</xsd:documentation>
</xsd:annotation>
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Integration_link" type="Unification_linkType" minOccurs="0">
<xsd:annotation>
<xsd:documentation>A link to indicate the unification relationship to share/refer a class in different models</xsd:documentation>
</xsd:annotation>
</xsd:element>
</xsd:sequence>
<xsd:attribute name="name" type="xsd:string"/>
</xsd:complexType>
</xsd:element>
<xsd:element name="Class_Association" maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>Element(association) of XUM for design: UML Class Diagram association</xsd:documentation>
</xsd:annotation>
<xsd:complexType>
<xsd:complexContent>
<xsd:extension base="AssociationType">
<xsd:sequence>
<xsd:element name="Integration_link" type="Unification_linkType"/>
</xsd:sequence>
<xsd:attribute name="type">
<xsd:simpleType>
<xsd:restriction base="xsd:NMTOKEN">
<xsd:enumeration value="association"/>
<xsd:enumeration value="composition"/>
<xsd:enumeration value="generalization"/>
<xsd:enumeration value="dependency"/>
</xsd:restriction>
</xsd:simpleType>
</xsd:attribute>
<xsd:attribute name="client">
<xsd:simpleType>
<xsd:restriction base="xsd:NMTOKEN">
<xsd:enumeration value="0"/>
<xsd:enumeration value="1"/>
<xsd:enumeration value="n"/>
<xsd:enumeration value="0..1"/>
<xsd:enumeration value="0..n"/>
<xsd:enumeration value="1..n"/>
</xsd:restriction>
</xsd:simpleType>
</xsd:attribute>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
</xsd:element>
<xsd:element name="Abstraction_link" type="Unification_linkType" minOccurs="0"/>
</xsd:sequence>
<xsd:attribute name="dominator" type="xsd:string" use="required"/>
</xsd:complexType>
</xsd:element>
<xsd:element name="Collaboration_Diagram" minOccurs="0" maxOccurs="unbounded">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Class" maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>Element(component) of XUM for design: UML Collaboration Diagram class</xsd:documentation>
</xsd:annotation>
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Integration_link" type="Unification_linkType">
<xsd:annotation>
<xsd:documentation>A link to indicate the unification relationship to share/refer a class in different models</xsd:documentation>
</xsd:annotation>
</xsd:element>
</xsd:sequence>
<xsd:attribute name="name" type="xsd:string"/>
</xsd:complexType>
</xsd:element>
<xsd:element name="Collaboration_Association" maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>Element(association) of XUM for design: UML Collaboration Diagram association</xsd:documentation>
</xsd:annotation>
<xsd:complexType>
<xsd:complexContent>
<xsd:extension base="AssociationType">
<xsd:sequence>
<xsd:element name="Integration_link" type="Unification_linkType"/>
</xsd:sequence>
<xsd:attribute name="sequence" type="xsd:string"/>
<xsd:attribute name="message" type="xsd:string"/>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
</xsd:element>
<xsd:element name="Abstraction_link" type="Unification_linkType" minOccurs="0"/>
</xsd:sequence>
<xsd:attribute name="dominator" type="xsd:string" use="required"/>
</xsd:complexType>
</xsd:element>
<xsd:element name="Design_Pattern" minOccurs="0" maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>Element(component) of XUM for design: Design Pattern</xsd:documentation>
</xsd:annotation>
<xsd:complexType>
<xsd:complexContent>
<xsd:extension base="ComponentType">
<xsd:sequence>
<xsd:element name="Participation" maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>Element(component) of XUM for design: Design Pattern: Participation</xsd:documentation>
</xsd:annotation>
<xsd:complexType>
<xsd:complexContent>
<xsd:extension base="ComponentType">
<xsd:sequence>
<xsd:element name="Integration_link" type="Unification_linkType">
<xsd:annotation>
<xsd:documentation>A link to indicate the unification relationship to share/refer a common class in different models</xsd:documentation>
</xsd:annotation>
</xsd:element>
</xsd:sequence>
<xsd:attribute name="role" type="xsd:string"/>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
</xsd:element>
<xsd:element name="Structure" maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>Element(association) of XUM for design: Design Pattern structure association</xsd:documentation>
</xsd:annotation>
<xsd:complexType>
<xsd:complexContent>
<xsd:extension base="AssociationType">
<xsd:sequence>
<xsd:element name="Integration_link" type="Unification_linkType"/>
</xsd:sequence>
<xsd:attribute name="type">
<xsd:simpleType>
<xsd:restriction base="xsd:NMTOKEN">
<xsd:enumeration value="association"/>
<xsd:enumeration value="composition"/>
<xsd:enumeration value="generalization"/>
<xsd:enumeration value="dependency"/>
</xsd:restriction>
</xsd:simpleType>
</xsd:attribute>
<xsd:attribute name="client">
<xsd:simpleType>
<xsd:restriction base="xsd:NMTOKEN">
<xsd:enumeration value="0"/>
<xsd:enumeration value="1"/>
<xsd:enumeration value="n"/>
<xsd:enumeration value="0..1"/>
<xsd:enumeration value="0..n"/>
<xsd:enumeration value="1..n"/>
</xsd:restriction>
</xsd:simpleType>
</xsd:attribute>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
</xsd:element>
<xsd:element name="Collaboration" maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>Element(association) of XUM for design: Design Pattern collaboration association</xsd:documentation>
</xsd:annotation>
<xsd:complexType>
<xsd:complexContent>
<xsd:extension base="AssociationType">
<xsd:sequence>
<xsd:element name="Integration_link" type="Unification_linkType"/>
</xsd:sequence>
<xsd:attribute name="sequence" type="xsd:string"/>
<xsd:attribute name="message" type="xsd:string"/>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
</xsd:element>
<xsd:element name="Abstraction_link" type="Unification_linkType" minOccurs="0"/>
</xsd:sequence>
<xsd:attribute name="dominator" type="xsd:string"/>
</xsd:extension>
</xsd:complexContent>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
<xsd:element name="Implementation">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Framework" minOccurs="0" maxOccurs="unbounded">
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Sourcecode" maxOccurs="unbounded">
<xsd:annotation>
<xsd:documentation>Element(component) of XUM for implementation: source code</xsd:documentation>
</xsd:annotation>
<xsd:complexType>
<xsd:sequence>
<xsd:element name="Sourcecode_link" type="Unification_linkType">
<xsd:annotation>
<xsd:documentation>A link to connect a class in the design to its corresponding code</xsd:documentation>
</xsd:annotation>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:sequence>
</xsd:complexType>
</xsd:element>
</xsd:schema>
Chapter VI
Migrating Legacy System to the Web:
A Business Process Reengineering Oriented Approach
Lerina Aversano, Gerardo Canfora, and Andrea De Lucia
University of Sannio, Italy
The Internet is an extremely important new technology that is changing the way
in which organizations conduct their business and interact with their partners and
customers. To take advantage of the Internet open architecture, most companies
are applying business reengineering with the aim of moving from hierarchical
centralized structures to networked decentralized business units cooperating with
one another. As a consequence, the way in which software information systems are
conceived, designed, and built is changing too. Monolithic, mainframe-based
systems are being replaced by distributed, Web-centric, component-based sys-
tems with an open architecture.
Ideally, business process reengineering should entail the adoption of new
software systems designed to satisfy the new needs of the redesigned business.
However, economic and technical constraints make it impossible in most cases to
discard the existing and legacy systems and develop replacement systems from
scratch. Therefore, legacy system migration strategies are often preferred to
replacement. This entails that a balance must be struck between the constraints
imposed by the existing legacy systems and the opportunities offered by the
reengineering of the business processes.
This chapter discusses a strategy for migrating business processes and the
supporting legacy systems to an open, networked, Web-centric architecture. The
overall strategy comprises modelling the existing business processes and assessing
the business and technical value of the supporting software systems. A decisional
framework helps software management to make informed decisions. This is
followed by the migration of the legacy systems, which can be in turn enacted with
different approaches. The chapter discusses a short-term migration approach and
applies it to an industrial pilot project.
INTRODUCTION
Convergence between telecommunications and computing and the explosion
of the Internet suggest new ways of conceiving, designing, and running businesses
and enterprises. More and more companies are moving towards a virtual organi-
zation model, where independent institutions, departments, and groups of special-
ized individuals converge in a temporary network with the aim of utilizing a
competitive advantage or solving a specific problem.
Information and communication technology is a primary enabler of virtual
organizations, as people and institutions in a network make substantially more use
of computer-mediated channels than physical presence to interact and cooperate
in order to achieve their objectives. However, technology is not the only factor:
taking advantage of the Internet and its open architecture requires that the way in
which business processes are organized and enacted be profoundly changed.
Business Process Reengineering (BPR) is defined as “the fundamental rethinking
and radical redesign of business processes to achieve significant improvements of
the performances, such as cost, quality, service, and speed” (Hammer & Champy,
1993). Most BPR projects aim at converting business organizations from hierar-
chical centralized structures to networked decentralized business units cooperating
with one another. This conversion is assuming a strategic relevance as the Internet
is radically changing business processes, not only because they are purposely
reengineered, but also because the Internet and, in general, the information and
communication technology, offer clients and customers more convenient means of
fulfilling their requirements.
Current business processes have been profoundly fitted to the available
hardware and software. The technologies involved in process execution impact the
way businesses are conceived and conducted. Abstractly, reengineering business
processes should entail discarding the existing and legacy systems to develop new
software systems that meet the new business needs. This is superficially attractive
and humanly appealing. However, in most cases, legacy systems cannot be simply
discarded because they are crucial systems to the business they support (most
legacy systems hold terabytes of live data) and encapsulate a great deal of
Legacy System 153
knowledge and expertise of the application domain. Sometimes, the legacy code
is the only place where domain knowledge and business rules are recorded, and this
entails that even the development of a new replacement system may have to rely on
knowledge that is encapsulated in the old system. In summary, existing systems are
the result of large investments and represent a patrimony to be salvaged (Bennett,
1995). In addition, developing a replacement system from scratch often requires
excessive resources and entails unaffordable costs and risks. Even where these
resources are available, it would take years before the functions provided by a
legacy system could be taken up by new reliable components. Legacy system
migration strategies are often preferred to replacement (Brodie & Stonebraker,
1995). Therefore, to satisfy the goals of a BPR project, a trade-off must be sought
between the constraints imposed by the existing legacy systems and the chosen
BPR strategy.
In this chapter, we discuss a strategy for migrating business processes and the
supporting legacy systems to an open, networked, Web-centric architecture. The
overall strategy comprises modelling the existing business processes and assessing
the business and quality value of the supporting software systems. This is followed
by the migration of the legacy systems, which can in turn be enacted with different
strategies. The initial step consists of understanding and modelling the business
processes together with the involved documents and software systems. The
analysis of the existing processes is required to obtain an inventory of the activities
performed, compare them with best practices, and redesign and/or reengineer
them.
Jacobson, Ericsson, and Jacobson (1995) introduce the term process
reverse engineering to denote the activities aimed at understanding and modelling
existing business processes and define a two-step method:
• use case modeling, which produces a model of the existing business processes
in terms of actors and use cases; and
• object modeling, which produces an object model of the processes.
The aim is to identify a basic process model in order to prioritize the processes
that are most critical for further investigations. Therefore, the model produced tends
to be at a very high level of abstraction; the authors recommend that “you should
be careful not to create models that are too detailed” (Jacobson et al., 1995). In
addition, the authors do not give insights on how the information and data needed
to create the models should be collected.
We have developed a method for business process understanding and
modelling and have applied it in two different areas, redesigning administrative
processes in the public organizations (Aversano, Canfora, De Lucia, & Gallucci,
2002) and providing automatic support for the management of maintenance
processes within a virtual software factory (Aversano, Canfora, & Stefanucci,
154 Aversano, Canfora, and De Lucia
2001b; Aversano, De Lucia, & Stefanucci, 2001c). The main difference with the
work by Jacobson et al. (1995) is that we aim at producing process models that are
sufficiently detailed to be readily used for process automation with a workflow
management system (Workflow Management Coalition, 1994). The method
addresses the modelling of a business process from different perspectives:
• the activities comprised in the process, the flow of control among them, and
the decisions involved;
• the roles that take part in the process; and
• the flow and the structure of information and documents processed and/or
produced in the process.
The method also addresses the analysis and the assessment of the software
systems that support and automate the process.
Different formalisms and languages for process modelling have been proposed
in the literature (Casati, Ceri, Pernici, & Pozzi, 1995; Van der Aalst, 1998;
Winograd & Flores, 1986). Our approach is different: we do not introduce a new
modelling language; rather, we use the standard UML (Booch, Rumbaugh, &
Jacobson, 1999), which has also been used as a process and workflow modelling
language (Cantone, 2000; Loos & Allweyer, 1998). In particular, Loos and
Allweyer (1998) show how a suitable workflow model can be achieved through the
joint use of different UML diagrams (activity diagrams, use-case diagrams, and
interaction diagrams), although some deficiencies are pointed out. In our opinion,
a reasonable high-level workflow model can be produced using a combination of
UML diagrams, and refined using the specific process definition language con-
structs of the selected workflow management system.
There are also several alternative formalisms to approach the modelling of the
set of documents involved in a process and the actors producing/processing them.
An example is Information Control Nets (Salminen, Lytikainen, & Tiitinen,
2000), which provide all the characteristics and notations necessary to represent,
describe, and automate the flow of documents. We use UML diagrams to model
the structure, the information content, and the flows of documents, too. An
advantage is that the processes and the associated documents are represented in
an integrated way.
Our overall approach for business process understanding and modelling
consists of five main phases, each comprising several steps: Phases 1 and 2 define
the context and collect the data needed to describe the existing processes, with
related activities, documents, and software systems; Phases 3, 4, and 5 are
executed in parallel and focus on workflow modelling, document modelling, and
legacy system analysis and assessment, respectively. Figure 1 summarizes the main
aspects of the different phases. The first four phases of our approach are discussed
in detail in references (Aversano et al., 2002; Aversano et al., 2001b). In this
Legacy System 155
chapter we focus on the final phase and discuss how the business and technical value
of a legacy system can be assessed and how the results of the assessment can be
used to select a system migration strategy.
Context definition. The first phase consists of two steps: scope definition and process
map definition. The goal of the scope definition step is the identification of two key
elements: the context of the analyzed organization (i.e., the units it comprises and the other
entities it interacts with) and the products (i.e., the objects handled within the processes
of the organization). In the process map definition step, a description of the process at
different levels of abstraction is produced according to a top-down approach. Defining
a process map helps to identify the key processes, the involved documents, and software
systems.
Data collection and process description. The goal of this phase is to collect all the
information needed to describe and assess the existing processes and the involved
documents and systems. This information can be gathered in different ways, for example,
through observations, questionnaires, and interviews conducted with process owners and
key users. The collected information about workflows, documents, and software systems
is included in a structured process description document.
Workflow modelling. Workflow modelling involves two separate steps: activity map
definition and activity documentation. The first step aims to produce a semi-formal
graphical model of the analyzed process. We use UML activity diagrams to model the flow
of the process activities, including decisions and synchronizations, use cases to model
organizational aspects, i.e., which actors (roles) participate in which use case (activity or
group of activities), and interaction (sequence and collaboration) diagrams to depict
dynamic aspects within a use case. In the activity documentation step, the workflow model
produced in the previous step is completed with detailed documentation.
Document modelling. This phase produces a model of the content, structure, and mutual
relationships of the documents involved in the workflow. It is based on two steps:
document class definition and document life-cycle modeling. Initially, the documents are
partitioned into classes on the basis of their content, and the relationships existing
between document classes are identified. The result is a document-relationships model
modelled through a UML class diagram, where nodes represent document classes and
edges depict mutual relationships. The second step consists of describing the life cycle
for each document class, i.e., its dynamic behavior, through UML state diagrams.
Legacy systems analysis and assessment. This phase aims to assess the existing software
systems involved in the business process to identify the most feasible Web migration
strategy. Usually, a legacy system is assessed from two points of view: a business
dimension and a technical dimension. Most of the information for the evaluation of the
business value is collected through interviews and questionnaires. The technical value
of a legacy system can be assessed through different quality attributes, such as the
obsolescence of the hardware/software platforms, the level of decomposability, the
maintainability, and the deterioration. This information can be obtained by combining
interviews and questionnaires with the analysis of the legacy source code and of the
available documentation.
156 Aversano, Canfora, and De Lucia
The chapter is organized as follows. The next section discusses the assessment
of legacy systems and presents decisional frameworks to guide and support
management in making decisions about a system. The decisional frameworks
presented analyze the legacy systems from both the business and the technical
points of view. Then, the chapter introduces a strategy to migrate legacy systems
as a consequence of a BPR project and discusses the main enabling technologies.
The application of the strategy in an industrial pilot project is then discussed. Finally,
the chapter summarizes the work and gives concluding remarks.
[Figure: Decisional framework: candidate strategies (elimination/replacement; redevelopment/reengineering/migration) plotted against business value, from low to high.]
view of an incremental migration. In addition, the main technical factor affecting the
possibility of renewing the legacy system into the Web-based infrastructure of the
reengineered business process is the system decomposability, which therefore is
assumed as a decision dimension rather than as a component of the technical value.
Two different kinds of decomposability can be considered:
• vertical decomposability, which refers to the possibility of decomposing a
system into major architectural layers; and
• horizontal decomposability, which refers to the possibility of decomposing a
system into independent functional modules.
[Figure: Decisional framework variant: elimination/replacement (low business value) versus reengineering/redevelopment (high business value).]
[Figure: Example of vertical decomposability: a banking system decomposed into user interface, data management, and network interface layers.]
A MIGRATION STRATEGY
The migration of a legacy system entails the reuse of the system components
while moving the system toward newer and more modern technology infrastructure.
Brodie and Stonebraker (1995) propose an incremental approach named Chicken
Little, based on 11 steps to migrate a legacy information system using gateways.
Each step requires a relatively small resource allocation and takes a short time; it
produces a specific, small result toward the desired goal corresponding to an
increment. The Chicken Little steps are: analyzing the legacy system, decomposing
the legacy system structure, designing the target interfaces, designing the target
application, designing the target database, installing the target environment, creating
and installing the necessary gateway, migrating the legacy database, migrating the
legacy applications, migrating the legacy interfaces, and cutting over to the target
system. Using this strategy, it is possible to control the risk involved and determine
the size of each part of the legacy system to be migrated. If a step fails, it is not
necessary to repair the entire process but only to restore the failed step. A different
approach, proposed by Wu et al. (1997), is the Butterfly methodology, which
eliminates the need to access both the legacy and the new database during the
migration process, thus avoiding the complexity induced by the introduction of
gateways to maintain the consistency of the data.
The Chicken Little and the Butterfly methodologies aim at migrating a legacy
system mainly based on its vertical decomposability. Migration strategies have also
been proposed that take into account the horizontal decomposability of a legacy
system. In Canfora et al. (1999) a strategy for incrementally migrating legacy
systems to object-oriented platforms is presented. The process consists of six
sequential phases: static analysis of legacy code, decomposing interactive pro-
grams, abstracting an object-oriented model, packing the identified object methods
into new programs, encapsulating existing objects using wrappers, and incremen-
tally replacing wrapped objects. Wrapping is the core of the migration strategy: it
makes new systems able to exploit existing resources, thus allowing an incremental
and selective replacement of the identified objects. Serrano, Montes de Oca, and
Carter (1999) propose a similar approach. The main difference between the two
migration strategies is the method used to identify objects. Serrano et al. (1999)
exploit data mining techniques (Montes de Oca & Carver, 1998), while Canfora
et al. (1999) use a clustering method based on design metrics (Cimitile et al., 1999).
In this chapter we propose a Web-centric, short-term migration strategy
based on vertical decomposability and on the use of wrappers. As discussed in the
previous section, this strategy applies to legacy systems with high business value and
a high vertical decomposability level. Figure 5 shows the main phases of the
migration process. The first phase is the decomposition and restructuring of the
legacy system to a client-server style, where the user interface (client side) controls
[Figure 5: The migration process: the legacy system is restructured and decomposed into a legacy user interface component and a legacy server component; the user interface component is reengineered or redeveloped for the Web, the server component is wrapped, and the two are integrated into a (partially migrated) Web-based system.]
the execution of the application logic and database components (server side). The
complexity of this phase depends on the decomposability of the system.
Nondecomposable systems are the most difficult to restructure, because
decoupling the user interface from the application logic and database components
can be very challenging and risky; slicing algorithms are required to identify the
statements that implement the user-interface component, while restructuring the
legacy programs to a client-server style is human intensive (Canfora et al., 2000).
and s18. It is worth noting that the subroutines s, s1, s3, and s4 are now colored
grey, as they only contribute to implement the user-interface component due to the
calls to the subroutines s2, s8, and s13. The new call graph can be decomposed by
an ideal borderline in two parts: the subroutines of the client component (in grey)
are in the higher part of the call hierarchy, while the lower part contains the
subroutines of the server components (in white). The border line crosses all the
edges corresponding to calls between subroutines in the client component and
subroutines in the server components; these calls, depicted with the dashed lines,
will have to be converted into service requests.
Canfora et al. (2000) have proposed an interprocedural slicing algorithm
based on control dependencies to identify the statements implementing the user-
interface components. These statements are identified by traversing the control
dependencies of the legacy programs backward starting from the I/O statements.
The authors also propose an interactive tool to restructure the logic of each program
to a client-server style. The tool automatically identifies database accesses that need
to be extracted from the user-interface part and encapsulated into separate
subroutines. The tool also allows a software engineer to select code fragments
implementing application logic components and automatically encapsulates them
into separate subroutines; the user is only asked to assign a name to the subroutine.
Extracting these subroutines allows restructuring of the program with a call graph
such as the one in Figure 6(a) to a client-server style with a call graph similar to the
one in Figure 6(b). Once a program has been decomposed and restructured in this
way, it is ready for migration to a Web architecture. The user interface will be
reengineered or redeveloped using Web technology, while the subroutines of the
server component invoked by the legacy user interface will be wrapped. These two
issues will be discussed further in the next sections.
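To make the idea of the backward traversal concrete, here is one possible implementation of the marking step as a minimal Python sketch; the graph encoding is hypothetical, and this is not the actual algorithm or tool of Canfora et al. (2000).

from collections import deque

def mark_user_interface(control_dep, io_statements):
    # control_dep maps each statement to the set of statements it is
    # control-dependent on. Starting from the I/O statements, traverse
    # the control dependencies backward and mark every statement
    # reached: the marked set approximates the user-interface component.
    marked = set(io_statements)
    worklist = deque(io_statements)
    while worklist:
        stmt = worklist.popleft()
        for controller in control_dep.get(stmt, ()):
            if controller not in marked:
                marked.add(controller)
                worklist.append(controller)
    return marked

# Toy example: s8 performs I/O and is control-dependent on s2,
# which is in turn control-dependent on s1.
deps = {"s8": {"s2"}, "s2": {"s1"}, "s9": {"s2"}}
print(mark_user_interface(deps, {"s8"}))  # {'s8', 's2', 's1'}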
[Figure 6: Call graphs of a sample program before (a) and after (b) decomposition and restructuring (subroutines s1 to s18); the subroutines above the borderline form the client (user interface) component, those below form the server components, and the dashed edges crossing the borderline represent calls to be converted into service requests.]
The legacy user interface is reengineered using Web-based technologies. The
maturity of the Web offers different
solutions to reimplement the user interface of the system. Several scripting
languages, such as VBScript and JavaScript, have been introduced to enhance
HTML and quickly develop interactive Web pages. Scripts can be executed on the
client to dynamically modify the user interface, driven by the occurrence of
particular events. Scripts can also be executed on the Web server, in particular to
generate dynamic HTML pages from the data received from the interaction with
external resources, such as database and applications.
Concerning the reengineering methodologies, several approaches to the
problem of user interface reengineering that can be customized for the Web have
been presented in the literature. Merlo et al. (1995) propose a technique for
reengineering CICS-based user interfaces in COBOL programs into graphical
interfaces for client-server architectures. Character-based interface components
are extracted and converted into specifications in the Abstract User Interface
Description Language (AUIDL); then they are reengineered into graphical AUIDL
specifications and used to generate the graphical user interfaces. The authors outline
the need for investigating slicing and dependence analysis to integrate the new
interface code into the original system. Also, this approach could be extended and
used to convert a character-based user interface into a Web-based user interface.
Van Sickle, Liu, and Ballantyne (1993) propose a technique for converting user
interface components in large minicomputer applications, written in COBOL, to run
under CICS on an IBM mainframe.
A more general approach, called MORPH, has been proposed in references
(Moore, 1998; Moore & Moshkina, 2000; Moore & Rugaber, 1997). MORPH
exploits a knowledge base for representing an abstract model of the interaction
objects detected in the code of the legacy system. Transformation rules are
exploited to restructure and transform the original abstract model into a new
abstract model used to generate the target graphical user interface. MORPH was
originally developed for reengineering character-based user interfaces to graphical
user interfaces; it has also been used to reengineer graphical user interfaces from one
platform to another (Moore & Rugaber, 1997) and character-based user interfaces
to Web interfaces (Moore & Moshkina, 2000). We have also used these guidelines
to reengineer graphical user interfaces to Web interfaces (Aversano, Cimitile,
Canfora, & De Lucia, 2001a).
The MORPH method entails three steps, named detection, representation,
and transformation. In the first step, a static analysis is conducted to identify and
extract the user interface implementation patterns from the source code. The
representation step aims at building a hierarchical abstract model where the
identified user interface coding patterns are the leaves and higher level conceptual
interaction tasks and attributes are abstracted from the lower level patterns. This
abstract model is stored in the MORPH knowledge base. The final step defines a
set of transformation rules used to move the abstract model into a concrete
implementation with a particular GUI technology.
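As a rough illustration of what the transformation step does (the rule and pattern names below are invented; MORPH's actual knowledge base and rule language are not shown in the chapter), a rule set can be viewed as a table mapping abstract interaction objects onto widgets of the target technology:

# Hypothetical transformation table: abstract interaction objects
# (detected in the legacy code) mapped onto Web interface widgets.
RULES = {
    "menu": "HTML select list",
    "text_entry_field": "HTML text input",
    "protected_field": "read-only HTML text",
    "function_key": "HTML submit button",
}

def transform(abstract_objects):
    # Apply the rules; objects without a matching rule are flagged
    # for manual mapping by the software engineer.
    return {obj: RULES.get(obj, "manual mapping required")
            for obj in abstract_objects}

print(transform(["menu", "text_entry_field", "light_pen_selection"]))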
The transformation of the legacy user interface into the new Web interface
should also be driven by the business needs. For example, if the new interface has
to be used by the old users, a goal could be minimizing the need for retraining. In
this case, in addition to establishing a one-to-one correspondence between the
legacy and the new interaction objects, the mapping should be built maintaining a
correspondence between the legacy panels and the new Web pages and forms
(Aversano et al., 2001a). However, in many cases the BPR process introduces new
roles (for example, a legacy system in the reengineered process might be required
to be accessed directly by a customer); in this case, the new interface must be
radically redesigned to meet the new user’s needs (Sneed, 2000).
EXPERIENCE
The Web migration strategy outlined in the previous section has been applied
in a pilot project concerned with a COBOL system, named Overmillion, produced
by a small Italian software house. Overmillion is an online analytical processing
system aimed at querying large archives of data for decision support through
statistical and explorative analyses. It uses a proprietary form of inverted file, named
FOPA (attribute-based file organization), which achieves very short response
times (typically a few seconds, even for archives with millions of records) for this
kind of analysis.
The system has evolved over the past 15 years from a centralized mainframe
version with a character-based user interface to a client-server version with a
graphical user interface running on a PC with the Microsoft Windows operating
system. Only the user interface is separated from the rest of the system, while the
application logic and the database services (server part) are not separated. Besides
the original mainframe version based on VSAM files, the server part has been
specialized to different platforms, such as IBM AS/400 and Aix, and different
database management systems, such as the native AS/400 database and the DB2
relational database. Batch utilities convert the archive/database content to the
FOPA format.
In the current version, the system provides the final user with two alternative
work modalities: PC stand-alone mode and remote host mode. Whenever the PC
stand-alone mode is selected, the user works on a local image of the FOPA archive. In
the remote mode, the user interface interacts with a set of COBOL programs
distributed between the client and the server, which implement a socket-based
connection with the remote host. The code implementing the querying and analysis
of the FOPA archives is duplicated in different versions for the PC and the different
hosts.
Several reasons motivated the need to migrate Overmillion to the Web. The
first and most important is that several banks use the system to make decisions at
both central and peripheral levels. In the old architecture, the PC-host connection
layer had to be replicated on each client installation, and this increased the
application ownership costs. Most banks nowadays use Internet/Intranet
technologies for their information system infrastructure, and this pushed in the
direction of Web migration. In addition, the new architecture adds flexibility; in
particular, it opens the way to the migration of the stand-alone PC version towards
a distributed version that federates several host archive/database sources; the
querying and analysis software based on the FOPA organization will reside
only on the middle tier, while the remote hosts will act as database servers for
production transactions. This will eliminate the need to maintain the different
versions of the querying and analysis software based on FOPA archives.
[Figure 7: Software layers of the system, including the GUI component and the user interface software.]
The software on the remote host does not need to be modified to enable the
migration of the system to the Web.
Besides the graphical user interface component, the migrated part of the system
consisted of 218 COBOL programs and more than 130 KLOC. Table 1 shows
a summary of the analyzed programs, classified according to the software layers of
the architecture depicted in Figure 7. The low-level service software layer (last row
in the table) consists of a set of programs that implement functions used by programs
of other layers, in particular the user interface and the local archive access layers.
The semidecomposable architecture of the system meant that little effort was
required to decompose it. Call graph restructuring was reduced to a
minimum, as very few server programs issued calls to programs belonging to the
user interface. More extensive restructuring activities were performed to transform
global data areas used for the communication between the user interface and the
other programs into parameters exchanged through LINKAGE SECTIONs; data
flow analysis techniques (Cimitile et al., 1998) were extensively used to implement
this task.
One of the aims of our pilot project was to achieve a balance between the effort
required to reengineer or redevelop the legacy programs and the effort required to
wrap them. For example, most of the programs that implemented the user interface
component were very small (on the average 500 LOC including the DATA
DIVISION); therefore, it was more convenient to redevelop this software layer
using the same Web technology adopted for reengineering the user interface. With
this approach, the minimization of the wrapping effort was achieved by wrapping
only the programs of the Routing and Utility layer and the PC-Host lower level
connection layer. In particular, the second group of programs was wrapped to
enable the communication with the new graphical user interface, while the
interactions with the programs of the high communication layer are still
implemented through the LINKAGE SECTION of the COBOL programs.
[Figure 8: Target architecture: a Web browser client; HTML pages and ASP access software on the Web server; remote archive access software behind them.]
The target architecture of the system is shown in Figure 8. The two object
wrappers, one for the routing software and the utilities and the other for the lower
level connection layer, are realized as dynamic load libraries written in Microfocus
Object COBOL; they are loaded into the Microsoft Internet Information Server
(IIS) and are accessed by the user interface through VBScript functions embedded
into ASP pages.
The structure of the wrapper methods is shown in Figure 9. The wrapper
receives messages from the user interface through the VBScript functions
embedded into the ASP pages. The messages received by the wrapper are
converted into calls to the programs that realize the required services.
The communication between the user interface and the wrapper is realized
through strings. The VBScript functions build an input string instring from the data
contained in the HTML form and pass it as a parameter to the object wrapper
method (in the formal parameter VarIn); the string instring also contains information
needed to identify the user and the session that issued the call. Figure 10 shows the
fragment of VBScript function that invokes the method MSomma. The sequences
of data concatenated in the input and output strings are mapped onto the record
structures of the invoked COBOL method.
The invoked method first extracts the data from the string VarIn into the fields
of the local storage record APP-INFO2; then, relevant data is extracted from the
fields of record APP-INFO2 and copied into the actual parameters to be passed
to the legacy program. Finally, the wrapper method concatenates the parameters
returned by the program into the string VarOut and returns it to the invoking scripting
function. Special characters are used to separate record fields in the output string
with a convention that uses different characters to identify the different nesting levels
of the record fields (in the input string this is not required, as the concatenated fields
have a fixed size). For example, the character “#” is used to separate fields of the
outermost nesting level, while “=” is used for the second level of nesting. The
VBScript code repeatedly splits the data contained in the returned string from the
outermost to the innermost nesting level (see Figure 10) and produces the output
for the appropriate client.
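The separator convention is easy to mimic. The following Python sketch shows both directions of the mapping between strings and nested record fields; the field names, widths, and values are invented for illustration and are not taken from the Overmillion system.

def build_instring(fields):
    # Input string: fixed-width fields are simply concatenated,
    # so no separator characters are needed.
    return "".join(value.ljust(width)[:width] for value, width in fields)

def split_outstring(out):
    # Output string: "#" separates fields at the outermost nesting
    # level, "=" separates fields at the second level.
    return [group.split("=") for group in out.split("#")]

instring = build_instring([("USER01", 8), ("SESS42", 8), ("SUM", 4)])
print(split_outstring("12500#3#100=7#250=9"))
# [['12500'], ['3'], ['100', '7'], ['250', '9']]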
Figure 9. Fragment of the object wrapper method MSomma:
method-id. "MSomma".
local-storage Section.
01 APP-INFO2.
......
linkage Section.
01 VarIn PIC X(1920).
01 VarOut PIC X(1920).
......
procedure division using by reference VarIn
returning VarOut.
....extracting parameters from string VarIn
CALL "C:\OMWeb\Bin\SCSOMMA" USING .....
....concatenating parameters into string VarOut
CONCLUSION
Current business processes have been profoundly fitted to the available
hardware and software. Traditionally, software systems have been developed with
a mainframe-oriented, centralized architecture, and this has impacted the way
businesses were conceived and conducted. Nowadays, the convergence between
telecommunications and computing and the explosion of the Internet enables new
ways of interaction between an organization and its customers and partners. As a
consequence, moving to Web-centric, open architectures is widely recognized as
a key to staying competitive in the dynamic business world. However, the Internet and
the Web are only enabling factors. To take advantage of the Internet’s open
architecture, most companies are applying business reengineering with the aim of
moving from hierarchical centralized structures to networked decentralized busi-
ness units cooperating with one another. Abstractly, reengineering business pro-
cesses should entail discarding the existing and legacy systems to develop new
software systems that meet the new business needs and have a distributed, Web-
centric, component-based, open architecture. In practice, discarding existing and
legacy systems is, in most cases, infeasible because they represent economic and
knowledge investments to be salvaged, and the risks and costs associated with
replacement cannot be afforded. As a consequence, the migration of the legacy
systems to the new business and technological platform is often the only viable
alternative. This means that a balance must be struck between the constraints
imposed by the legacy systems and the opportunities offered by BPR.
Figure 10. Fragment of the VBScript function that invokes the wrapper method MSomma:
......
result = Split (dataSource.MSomma (instring), "#")
Response.Write("<BR>Soggetti contati..............." & Cstr (CLng (result (0))))
Response.Write("<BR>Soggetti dell'applicazione....." & Cstr (CLng (result (1))))
Response.Write("<BR>Data ultimo caricamento........" & result (2))
Response.Write("<BR>Tempo di elaborazione.........." & result (3))
Response.Write("<BR><BR><HR>")
for i = 4 to ubound (result) - 2
res2 = Split (result(i), "=")
s = sepDigit (Cstr (CDbl (res2(0))))
m = sepDigit (Cstr (CDbl (res2(1))))
if (fil(i - 4) <> "") then
Response.Write("<BR>" & Session ("nameArray")(Cint(fil(i - 4))) & _
"<BR>SOMMA = " & s & "<BR>MEDIA = " & m & "<HR>")
end if
next
......
This chapter has proposed a strategy for migrating business processes and the
supporting legacy systems toward an open, networked, Web-centric architecture.
The initial step consists of modelling the existing business processes and assessing
the business and technical value of the supporting software systems. The analysis
of the existing processes is required to obtain an inventory of the activities performed,
compare them with best practices, and redesign and/or reengineer them. The
assessment of the legacy systems aims at evaluating their technical and business
quality, to devise the most appropriate evolution approach. Based on the assess-
ment, a decisional framework assists software managers to make informed
decisions. This is followed by the migration of the legacy system, which can in turn
be enacted with different approaches. The chapter proposed a short-term system
migration strategy that decomposes and reengineers the system into its client and
server components and uses wrappers to encapsulate the server components.
Reverse engineering is used to abstract a model of the client components and, in
particular, the user interface, and redevelop them. Enabling technologies for each
of the activities involved in the migration process have been overviewed in the
chapter.
Finally, the chapter has discussed the application of the overall migration
strategy in an industrial pilot project. The project aimed at integrating an existing
COBOL system into a Web-enabled architecture. The need for migrating the
system was imposed by the changes in the underlying business processes. After
assessing the system, we opted for a short-term migration strategy with the aim of
reducing to a minimum the time needed to have a working version of the new, Web-
enabled system and the extent of the changes to be made to the existing code. This
was achieved essentially through an extensive use of reverse engineering to design
the new user-interface, and wrapping to implement the server side. In particular, the
presentation layer of the original system was separated from the application logic
and database components and reimplemented using ASP and VBScript; the
MORPH approach (Moore, 1998) was used to map the components of the existing
interface onto the new Web-based interface. The application logic and database
functions of the legacy system were wrapped into a single server component using
dynamic load libraries written in Microfocus Object COBOL and accessed
through VBScript functions embedded into ASP pages.
The migration project consumed approximately eight man-months over a
period of five calendar months. Overall, five people were involved in the project:
two software engineers from the company that had developed the original system,
two junior researchers from the university, and a senior researcher with the role of
coordination and advising. Assessing the existing system and decomposing it were
performed jointly by an engineer of the company (who had also been involved in the
development of the original system) and a junior researcher. Another joint team was
in charge of selecting the enabling technologies, based on an assessment of the
company’s expertise and capabilities, and designing the new overall architecture of
the system. At first, the development of the server and the client proceeded
independently, with the company’s engineers working on the COBOL wrapper
and the university junior researchers working on the ASP integration layer and the
client, including the reverse engineering of the original user interface. Server stubs
were used to incrementally test the client and, as new features were integrated into
the client, to test the corresponding server functions. This strategy of independent
development was chosen because team members were expert in either COBOL
or HTML and ASP, but not both. However, the last six calendar weeks of the
project were spent on system testing, including fine-tuning the integration of client
and server, and required a close cooperation of all the members of the team.
While the short-term strategy was successful in reducing the time and the costs
of migrating the system to adapt it to the new business scenario, it did not change
the quality of the server programs, which remain essentially unchanged; therefore,
no benefits are expected in the future maintenance and evolution of the
system. This is in contrast with other approaches that reverse engineer meaningful
objects from the legacy code and wrap each of them separately (see upper-left
quadrant in Figure 3). While more costly, such approaches offer more long-term
benefits as they enable incremental replacement strategies.
REFERENCES
Arranga, E., & Coyle, F. (1996). Object oriented COBOL. New York: SIGS
Books.
Aversano, L., Canfora, G., De Lucia, A., & Gallucci, P. (2002). Business process
reengineering and workflow automation: A technology transfer experience.
The Journal of Systems and Software, to appear.
Aversano, L., Canfora, G., & Stefanucci, S. (2001b). Understanding and improv-
ing the maintenance process: A method and two case studies. In Proceedings
of the 9th International Workshop on Program Comprehension, (pp.
199-208) Toronto, Canada. New York: IEEE Computer Society.
Aversano, L., Cimitile, A., Canfora, G., & De Lucia, A. (2001a). Migrating legacy
systems to the Web. In Proceedings of the Conference on Software
Maintenance and Reengineering, (pp. 148-157) Lisbon, Portugal. New
York: IEEE Computer Society.
Aversano, L., De Lucia, A., & Stefanucci, S. (2001c). Introducing workflow
management in software maintenance processes. In Proceedings of the
International Conference on Software Maintenance, (pp. 441-450)
Florence, Italy. New York: IEEE Computer Society.
De Lucia, A., Fasolino, A.R., & Pompella, E. (2001). A decisional framework for
legacy system management. Proceedings of the International Conference
on Software Maintenance, (pp. 642-651) Florence, Italy: IEEE Computer
Society.
Dietrich, W.C., Nackman, L.R., & Gracer, F. (1989). Saving a legacy with
objects. Proceedings of the Conference on Object Oriented Program-
ming Systems Languages and Applications, 77-88.
Dunn, M.F., & Knight, J.C. (1993). Automating the detection of reusable parts in
existing software. In Proceedings of 15th International Conference on
Software Engineering, (pp. 381-390) Baltimore, MD: IEEE Computer
Society.
Hammer, M., & Champy, J. (1993). Reengineering the Corporation: A Mani-
festo for Business Revolution, New York: HarperCollins.
Jacobson, I., Ericsson, M., & Jacobson, A. (1995). The Object Advantage:
Business Process Reengineering with Object Technology, ACM Press,
Reading, MA: Addison-Wesley.
Kim, H.S., & Kwon, Y.R. (1994). Restructuring programs through program
slicing. International Journal of Software Engineering and Knowledge
Engineering, 4(3), 349-368.
Lanubile, F., & Visaggio, G. (1997). Extracting reusable functions by flow graph-
based program slicing. IEEE Transactions on Software Engineering,
23(4), 246-259.
Lindig, C., & Snelting, G. (1997). Assessing modular structure of legacy code
based on mathematical concept analysis. Proceedings of 19th International
Conference on Software Engineering, (pp. 349-359) Boston, MA: ACM
Press.
Liu, S., & Wilde, N. (1990). Identifying objects in a conventional procedural
language: An example of data design recovery. Proceedings of Interna-
tional Conference on Software Maintenance, (pp. 266-271) San Diego,
CA, IEEE Computer Society.
Livadas, P.E., & Johnson, T. (1994). A new approach to finding objects in
programs. Journal of Software Maintenance: Research and Practice, 6,
249-260.
Loos, P., & Allweyer, T. (1998). Object orientation in business process
modelling through applying event driven process chains (EPC) in UML.
Proceedings of 2nd International Workshop on Enterprise Distributed
Object Computing, (pp. 102-112) San Diego, CA: IEEE Computer
Society.
Markosian, L., Newcomb, P., Brand, R., Burson, S., & Kitzmiller, T. (1994).
Using an enabling technology to reengineer legacy systems. Communications
of the ACM, 37(5), 58-70.
Chapter VII
Requirements Risk
and Maintainability
Norman F. Schneidewind
Naval Postgraduate School, USA
INTRODUCTION
While software design and code metrics have enjoyed some success as
predictors of software quality attributes such as reliability and maintainability
(Khoshgoftaar & Allen, 1998; Khoshgoftaar, Allen, Halstead, & Trio, 1996a;
Khoshgoftaar, Allen, Kalaichelvan, & Goel, 1996b; Lanning & Khoshgoftaar,
1995; Munson & Werries, 1996; Ohlsson & Wohlin, 1998; Ohlsson & Alberg,
1996), the measurement field is stuck at this level of achievement. If measurement
is to advance to a higher level, we must shift our attention to the front-end of the
development process, because it is during system conceptualization that errors in
specifying requirements are inserted into the process and adversely affect our ability
to develop and maintain the software. A requirements change may induce ambiguity
and uncertainty in the development process that cause errors in implementing the
change, and these errors affect reliability and maintainability. At the same time, a
complex requirements change will affect the size and complexity of the code that
will, in turn, have deleterious effects on reliability and maintainability.
OBJECTIVES
Our overall objective is to identify the attributes of software requirements that
cause the software to be unreliable and difficult to maintain. Furthermore, we seek
to quantify the relationship between requirements risk and reliability and maintain-
ability. If these attributes can be identified, then policies can be recommended to the
software engineering community for recognizing these risks and avoiding or
mitigating them during development and maintenance. The objective of these policy
changes is to prevent the propagation of high-risk requirements through the various
phases of software development and maintenance.
Given the lack of emphasis in measurement research on the critical role of
requirements, we are motivated to discuss the following issues:
• What is the relationship between requirements attributes and reliability and
maintainability? That is, are there requirements attributes that are strongly
related to the occurrence of defects and failures in the software?
• What is the relationship between requirements attributes and software
attributes like complexity and size? That is, are there requirements attributes
that are strongly related to the complexity and size of software?
• Is it feasible to use requirements attributes as predictors of reliability and
maintainability? That is, can static requirements change attributes like the size
of the change be used to predict reliability in execution (e.g., failure occur-
rence) and the maintainability of the code?
• Are there requirements attributes that can discriminate between high and low
reliability and maintainability, thus qualifying these attributes as predictors of
reliability and maintainability?
• Which requirements attributes pose the greatest risk to reliability and main-
tainability?
An additional objective is to provide a framework that researchers and
practitioners could use for the following: 1) to analyze the relationships among
requirements changes, complexity, reliability, and maintainability, and 2) to assess
and predict reliability and maintainability risk as a function of requirements changes.
METHODS
Our approach involves postulating several hypotheses concerning how re-
quirements attributes affect reliability and maintainability and then conducting
experiments to accept or reject the hypotheses. Various statistical methods can be
used to identify the major risk factors contributing to unreliable and non-maintainable
software. We illustrate selected methods using requirements and reliability data
from the NASA Space Shuttle.
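As a sketch of the kind of statistical method involved, the following Python fragment fits a logistic regression that discriminates failure-prone from non-failure-prone requirements changes using two change attributes. The data and the attribute choice are invented for illustration; this is not the Shuttle data set or the chapter's actual analysis.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a requirements change: [size of change, number of
# modifications (mods)]; label 1 means the change later led to a
# failure, 0 means it did not. All values are invented.
X = np.array([[10, 1], [15, 2], [30, 2], [120, 6], [200, 8], [300, 9]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

# Estimated failure risk of a new requirements change.
print(model.predict_proba([[150, 7]])[0, 1])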
Several projects have demonstrated the validity and applicability of applying
metrics to identify fault prone software at the code level (Khoshgoftaar & Allen,
1998; Khoshgoftaar et al., 1996a; Khoshgoftaar et al., 1996b; Schneidewind,
2000). Now, we apply this approach at the requirements level to allow for early
detection of reliability and maintainability problems. Once high-risk areas of the
software have been identified, they would be subject to detailed tracking throughout
the development and maintenance process.
This chapter is organized as follows: background, selected measurement
research projects, approach to analyzing requirements risk, risk factors, solutions
to risk analysis example, future trends, and conclusions.
BACKGROUND
This topic is significant because the field of software engineering lacks the
capability to quantitatively assess and predict the effect of a requirements change
on the reliability and maintainability of the software. Much of the research and
literature in software metrics concerns the measurement of code characteristics
(Munson & Werries, 1996; Nikora, Schneidewind, & Munson, 1998). This is
satisfactory for evaluating product quality and process effectiveness once the code
is written. However, if organizations use measurement plans that are limited to
measuring code, they will be deficient in the following ways: incomplete, lacking
coverage (e.g., no requirements analysis and design), and starting too late in the
process. For a measurement plan to be effective, it must start with requirements and
continue through to operation and maintenance. Since requirements characteristics
directly affect code characteristics and hence reliability and maintainability, it is
important to assess their impact on reliability and maintainability when requirements
are specified. We show that it is feasible to quantify the risks to reliability and
maintainability of requirements changes — either new requirements or changes to
existing requirements.
Once we are able to identify requirements attributes that portend high-risk for
the operational reliability and maintainability of the software, it is then possible to
suggest changes in the development and maintenance process of the organization.
To illustrate, a possible recommendation is that any requirements change to mission
critical software — either new requirements or changes to existing requirements —
would be subjected to a quantitative risk analysis. In addition to stating that a risk
analysis would be performed, the policy would specify the risk factors to be
analyzed (e.g., number of modifications of a requirement or mod level) and their
threshold or critical values. We have demonstrated the validity and applicability of
identifying critical values of metrics to identify fault prone software at the code level
(Schneidewind, 2000). For example, on the Space Shuttle, rigorous inspections of
requirements, design documentation, and code have contributed more to achieving
high reliability and maintainability than any other process factor. Thus, it would be
prudent to consider adapting this process technology to other NASA projects,
DoD, and other space and defense organizations because the potential payoff in
increased reliability and maintainability would be significant. The objective of these
policy changes is to prevent the propagation of high-risk requirements through the
various phases of software development and maintenance. The payoff to these
organizations would be to reduce the risk of mission critical software not meeting
its reliability and maintainability goals during operation. For example, if the risk
analysis identifies requirements that appear risky, measurements could be made on
a prototype of the design and code to verify whether this is indeed the case. If the
risk is confirmed through rapid prototyping, countermeasures could be considered
such as modularizing or simplifying the requirements.
SELECTED MEASUREMENT
RESEARCH PROJECTS
A number of reliability and maintenance measurement projects have been reported in the literature. For example, Briand, Basili, and Kim (1994) developed a process to characterize software maintenance projects. They present a qualitative
and inductive methodology for performing objective project characterizations to
identify maintenance problems and needs. This methodology aids in determining
causal links between maintenance problems and flaws in the maintenance organi-
zation and process. Although the authors have related ineffective maintenance
practices to organizational and process problems, they have not made a linkage to
risk assessment.
Pearse and Oman (1995) applied a maintenance metrics index to measure the
maintainability of C source code before and after maintenance activities. This
technique allowed the project engineers to track the “health” of the code as it was
being maintained. Maintainability is assessed but not in terms of risk assessment.
Pigoski and Nelson (1994) collected and analyzed metrics on size, trouble
reports, change proposals, staffing, and trouble report and change proposal
completion times. A major benefit of this project was the use of trends to identify
the relationship between the productivity of the maintenance organization and
staffing levels. Although productivity was addressed, risk assessment was not
considered.
Sneed (1996) reengineered a client maintenance process to conform to the
ANSI/IEEE Standard 1219, Standard for Software Maintenance. This project is
a good example of how a standard can provide a basic framework for a process
and can be tailored to the characteristics of the project environment. Although
applying a standard is an appropriate element of a good process, risk assessment
was not addressed.
Stark (1996) collected and analyzed metrics in the categories of customer
satisfaction, cost, and schedule with the objective of focusing management’s
attention on improvement areas and tracking improvements over time. This
approach aided management in deciding whether to include changes in the current
release, with possible schedule slippage, or include the changes in the next release.
However, the author did not relate these metrics to risk assessment.
In an indication of the back seat that software risk assessment takes to hardware, Fragola (1996) reports on probabilistic risk management for the Space Shuttle.
Interestingly, he says: “The shuttle risk is embodied in the performance of its
hardware, the careful preparation activities that its ground support staff take
between flights to ensure this performance during a flight, and the procedural and
management constraints in place to control their activities.” There is not a word in
this statement or in his article about software! Another hardware-only risk
assessment is by Maggio (1996), who says, “The current effort is the first integrated
quantitative assessment of the risk of the loss of the shuttle vehicle from 3 seconds
prior to lift-off to wheel-stop at mission end.” Again, not a word about software.
Pfleeger lays out a roadmap for assessing project risk that includes risk prioritization
(Pfleeger, 1998), a step that we address with the degree of confidence in the
statistical analysis of risk.
APPROACH TO ANALYZING
REQUIREMENTS RISK
Our approach involves conducting experiments to see whether it is feasible to
develop a mapping from changes in requirements to changes in software complexity and then to changes in reliability and maintainability. In other words, we
investigate whether the following implications hold, where R represents require-
ments, C represents complexity, F represents failure occurrence (i.e., reliability),
and M represents maintainability: ∆R⇒∆C⇒∆F, ∆M. We include changes in size
and documentation in changes in complexity. We are able to judge whether the
approach is a success by assessing whether this mapping can be achieved with the
desired degree of statistical significance.
By retrospectively analyzing the relationship between requirements and reli-
ability and maintainability, we are able to identify those risk factors that are
associated with reliability and maintainability, and we are able to prioritize them
based on the degree to which the relationship is statistically significant. In order to
quantify the effect of a requirements change, we use various risk factors that are
defined as the attribute of a requirement change that can induce adverse effects on
reliability (e.g., failure incidence), maintainability (e.g., size and complexity of the
code), and project management (e.g., personnel resources). Examples of Space
Shuttle risk factors are shown in the RISK FACTORS section.
Table 1 shows the Change Request Hierarchy of the Space Shuttle, involving
change requests (i.e., a request for a new requirement or modification of an existing
requirement), discrepancy reports (i.e., reports that document deviations between
specified and observed software behavior), and failures. We analyzed Categories
1 versus 2 with respect to risk factors as discriminants of the categories.
(Tables 2, 3, and 4 appear here in the original.)
• Metrics data for 1400 Space Shuttle modules, each with 26 metrics. An
example of a partial set of metric data is shown in Table 4.
Table 5 shows the definition of the Change Request samples that were used
in the analysis. Sample sizes are small due to the high reliability of the Space Shuttle.
However, sample size is one of the parameters accounted for in the statistical tests
that produced statistically significant results in certain cases (see the SOLUTIONS
TO RISK ANALYSIS EXAMPLE).
To minimize the effects of a large number of variables that interact in some
cases, a statistical categorical data analysis was performed incrementally. We used
only one category of risk factor at a time to observe the effect of adding an additional
risk factor on the ability to correctly classify change requests that have No
Discrepancy Reports versus change requests that have ((Discrepancy Reports
Only) or (Discrepancy Reports and Failures)). The Mann-Whitney Test for
difference in medians between categories was used because no assumption need
be made about statistical distribution. In addition, some risk factors are ordinal scale
quantities (e.g., modification level); thus, the median is an appropriate statistic to use. Furthermore, because some risk factors are ordinal scale quantities, rank correlation (i.e., correlation coefficients are computed based on rank) was used to check for risk factor dependencies.

Table 5. Definition of the Change Request samples.
Sample                                            Size
Total CRs                                           24
CRs with no DRs                                     14
CRs with (DRs only) or (DRs and Failures)           10
CRs with modules that caused failures                6
Note: CRs can have multiple DRs, failures, and modules that caused failures. CR: Change Request. DR: Discrepancy Report.
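As a minimal sketch of this analysis, assuming hypothetical sample values rather than the actual Shuttle data, the Mann-Whitney Test and the rank correlation check could be computed with SciPy as follows:

    from scipy.stats import mannwhitneyu, spearmanr

    # Hypothetical "mods" values for the two Change Request categories.
    mods_no_dr = [1, 1, 2, 1, 3, 2, 1, 2]
    mods_with_dr = [3, 4, 2, 5, 4, 3, 6, 4]

    # Mann-Whitney test for a difference in medians; no distributional
    # assumption is needed, which suits ordinal-scale risk factors.
    stat, p = mannwhitneyu(mods_no_dr, mods_with_dr, alternative="two-sided")
    print(f"U = {stat}, p = {p:.4f}")  # significant if p <= .05

    # Rank correlation to check for dependencies between two risk factors.
    sloc = [120, 80, 300, 95, 400, 150, 60, 210]  # hypothetical "sloc" values
    rho, p_rho = spearmanr(mods_no_dr, sloc)
    print(f"Spearman rho = {rho:.2f}, p = {p_rho:.4f}")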
RISK FACTORS
One of the software process problems of the NASA Space Shuttle Flight
Software organization is to evaluate the risk of implementing requirements changes.
These changes can affect the reliability and maintainability of the software. To assess
the risk of change, the software development contractor uses a number of risk
factors, which are described below. The risk factors were identified by agreement
between NASA and the development contractor based on assumptions about the
risk involved in making changes to the software. This formal process is called a risk
assessment. No requirements change is approved by the change control board
without an accompanying risk assessment. During risk assessment, the develop-
ment contractor will attempt to answer such questions as “Is this change highly
complex relative to other software changes that have been made on the Space
Shuttle?” If this were the case, a high-risk value would be assigned for the
complexity criterion. To date, this qualitative risk assessment has proven useful for
identifying possible risky requirements changes or, conversely, providing assurance
that there are no unacceptable risks in making a change. However, there has been
no quantitative evaluation to determine whether, for example, high-risk factor
software was really less reliable and maintainable than low-risk factor software. In
addition, there is no model for predicting the reliability of the software if the change
is implemented. We address both of these issues.
We had considered using requirements attributes like completeness, consis-
tency, correctness, etc., as risk factors (Davis, 1990). While these are useful
generic concepts, they are difficult to quantify. Although some of the following risk
factors also have qualitative values assigned, there are a number of quantitative risk
factors, and many of the risk factors deal with the execution behavior of the software
(i.e., reliability) and its maintainability, which is our primary interest.
Complexity Factors
• Qualitative assessment of complexity of change (e.g., very complex); “com-
plexity.” Not significant.
- Is this change highly complex relative to other software changes that
have been made on the Space Shuttle?
• Number of modifications or iterations on the proposed change; “mods.”
Significant.
- How many times must the change be modified or presented to the
Change Control Board (CCB) before it is approved?
Size Factors
• Number of source lines of code affected by the change; “sloc.” Significant.
- How many source lines of code must be changed to implement the
change request?
• Number of modules changed; “mod chg.” Not significant.
- Is the number of changes to modules excessive?
Performance Factors
• Amount of memory space required to implement the change; “space.”
Significant.
- Will the change use memory to the extent that other functions will not
have sufficient memory to operate effectively?
• Effect on CPU performance; “cpu.” (insufficient data)
- Will the change use CPU cycles to the extent that other functions will
not have sufficient CPU capacity to operate effectively?
The statistically significant risk factors were “mods,” “sloc,” “issues,” and “space.” We use the value of alpha in Table 6 as a means to prioritize the use of risk factors, with low values meaning high priority. The priority order is: “issues,” “space,” “mods,” and “sloc.”
The significant risk factors would be used to predict reliability and maintain-
ability problems for this set of data and this version of the software. Whether these
results would hold for future versions of the software would be determined in
validation tests on subsequent Operational Increments. The finding regarding
“mods” does confirm the software developer’s view that this is an important risk
factor. This is the case because if there are many iterations of the change request,
it implies that it is complex and difficult to understand. Therefore, the change is likely
to lead to reliability and maintainability problems. It is not surprising that the size of
the change “sloc” is significant because our previous studies of Space Shuttle
metrics have shown it to be an important determinant of software quality (Schneidewind,
2000). Conflicting requirements “issues” could result in reliability and maintainabil-
ity problems when the change is implemented. The on-board computer memory
required to implement the change “space” is critical to reliability and maintainability
because unlike commercial systems, the Space Shuttle does not have the luxury of
large physical memory, virtual memory, and disk memory to hold its programs and
data. Any increased requirement on its small memory to implement a requirements
change comes at the price of demands from competing functions.
In addition to identifying predictive risk factors, we must also identify thresh-
olds for predicting when the number of failures would become excessive (i.e., rise
rapidly with the risk factor). An example is shown in Figure 1, where cumulative
failures are plotted against cumulative issues. The figure shows that when issues
reach 286, failures reach 3 (obtained by querying the data point) and climb rapidly
thereafter. Thus, an issues count of 286 would be the best estimate of the threshold
to use in controlling the quality of the next version of the software. This process
would be repeated across versions with the threshold being updated as more data
is gathered. Thresholds would be identified for each risk factor in Table 6. This would provide multiple alerts that the quality of the software is going bad (i.e., the reliability and maintainability of the software would degrade as the number of alerts increases).

Table 6. Statistically significant results (alpha ≤ .05). CRs with no DRs vs. ((DRs only) or (DRs and Failures)). Mann-Whitney Test.
Figure 1. Cumulative failures plotted against cumulative issues.
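A minimal sketch of this threshold identification, assuming hypothetical cumulative series rather than the actual Shuttle data, is the following:

    # Find the point after which failures climb rapidly with issues.
    cum_issues = [50, 120, 200, 286, 320, 350, 380]
    cum_failures = [0, 1, 2, 3, 5, 6, 7]

    def find_threshold(xs, ys, slope_limit=0.02):
        """Return the first x after which the failure rate per issue
        exceeds slope_limit; None if the series never steepens."""
        for i in range(1, len(xs)):
            slope = (ys[i] - ys[i - 1]) / (xs[i] - xs[i - 1])
            if slope > slope_limit:
                return xs[i - 1]
        return None

    print(find_threshold(cum_issues, cum_failures))  # 286 with these data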
FUTURE TRENDS
Requirements risk analysis is another project in our series of software
measurement projects that has included software reliability modeling and predic-
tion, metrics analysis, and maintenance stability analysis (Schneidewind, 1998b;
Schneidewind, 1997b). We have been involved in the development and application
of software reliability models for many years (Schneidewind, 1993, 1997c;
Schneidewind & Keller, 1992). Our models, as is the case in general in software
reliability, use failure data as the driver. This approach has the advantage of using
a metric that represents the dynamic behavior of the software. However, this data
is not available until the test phase. Predictions at this phase are useful but it would
be much more useful to predict at an earlier phase—preferably during requirements
analysis—when the cost of error correction is relatively low. Thus, there is great
interest in the software reliability and metrics field in using static attributes of
software in reliability and maintainability modeling and prediction. Presently, the
software engineering field does not have the capability to make early predictions of
reliability and maintainability problems. Early predictions would allow errors to be
discovered and corrected when the cost of correction is relatively low. In addition,
early detection would prevent poor quality software from getting into the hands of
the user. As a future trend, the focus in research will be to identify the attributes of
software requirements that cause the software to be unreliable and difficult to
maintain.
Based on the premise that no one model suffices for all prediction applications,
a future research goal is to develop an integrated suite of models for various
applications. One type of model and its predictions have been described. Other
members of our suite are quality metrics (Schneidewind, 1997a) and process
stability models (Schneidewind, 1998a). We recommend that other researchers
change their emphasis to the prediction of reliability and maintainability at the earliest
possible time in the development process — to the requirements analysis phase —
that heretofore has been unattainable. In doing so, researchers would have the
opportunity to determine whether there exists a “standard” set of risk factors that
could be applied in a variety of applications to reliability and maintainability
prediction.
CONCLUSIONS
Our objective has been to improve the safety of software systems, particularly
safety critical systems, by reducing the risk of failures attributed to software. By
improving the reliability and maintainability of the software, where the reliability and
maintainability measurements and predictions are directly related to safety, we
contribute to improving system safety.
Risk factors that are statistically significant can be used to
make decisions about the risk of making changes. These changes
affect the reliability and maintainability of the software. Risk factors
that are not statistically significant should not be used; they do not
provide useful information for decision-making and cost money and
time to collect and process. Statistically significant results were
found for CRs with no DRs vs. ((DRs only) or (DRs and Failures)).
The number of requirements issues (“issues”), the amount of memory required to implement a change (“space”), the number of modifications (“mods”), and the size of the change (“sloc”) were the statistically significant risk factors.
ACKNOWLEDGMENTS
We acknowledge the technical support received from Julie Barnard, Boyce
Reeves, and Patti Thornton of United Space Alliance. We also acknowledge the
funding support received from Dr. Allen Nikora of the Jet Propulsion Laboratory.
REFERENCES
Boehm, B. W. (1991). Software risk management: Principles and practices. IEEE
Software, 8 (1), 32-41.
Briand, L. C., Basili, V. R., & Kim, Y.-M. (1994). Change analysis process to
characterize software maintenance projects. Proceedings of the International
Conference on Software Maintenance, Victoria, British Columbia, Canada,
38-49.
Davis, A. (1990). Software requirements: Analysis and specifications, Englewood
Cliffs, NJ: Prentice Hall.
Elbaum, S. G. & Munson, J. C. (1998). Getting a handle on the fault injection process:
Validation of measurement tools. Proceedings of the Fifth International
Software Metrics Symposium, Bethesda, MD, 133-141.
Fragola, J. R. (1996). Space shuttle program risk management. Proceedings of the
Annual Reliability and Maintainability Symposium, 133-142.
Requirements Risk and Maintainability 199
Khoshgoftaar, T. M., & Allen, E. B. (1998). Predicting the order of fault-prone modules
in legacy software. Proceedings of the Ninth International Symposium on
Software Reliability Engineering, Paderborn, Germany, 7, 344-353.
Khoshgoftaar, T. M., Allen, E. B., Halstead, R., & Trio, G. P. (1996a). Detection of
fault-prone software modules during a spiral life cycle. Proceedings of the
International Conference on Software Maintenance, Monterey, CA, 69-76.
Khoshgoftaar, T. M., Allen, E. B., Kalaichelvan, K., & Goel, N. (1996b). Early quality
prediction: A case study in telecommunications. IEEE Software, 13 (1), 65-71.
Lanning, D. & Khoshgoftaar, T. (1995). The impact of software enhancement on
software reliability. IEEE Transactions on Reliability, 44 (4), 677-682.
Maggio, G. (1996). Space shuttle probabilistic risk assessment methodology and
application. Proceedings of the Annual Reliability and Maintainability Sym-
posium, 121-132.
Munson, J. C. & Werries, D. S. (1996). Measuring software evolution. Proceedings
of the Third International Software Metrics Symposium, Berlin, Germany,
41-51.
Nikora, A. P., Schneidewind, N. F., & Munson, J. C. (1998). IV&V issues in achieving high reliability and safety in critical control software, final report, Vol. 1; Measuring and evaluating the software maintenance process and metrics-based software quality control, Vol. 2; Measuring defect insertion rates and risk of exposure to residual defects in evolving software systems, Vol. 3; and Appendices. Pasadena, CA: Jet Propulsion Laboratory, National Aeronautics and Space Administration.
Ohlsson, M. C., & Wohlin, C. (1998). Identification of green, yellow, and red legacy
components. Proceedings of the International Conference on Software
Maintenance, Bethesda, MD, 6-15.
Ohlsson, N. & Alberg, H. (1996). Predicting fault-prone software modules in telephone
switches. IEEE Transactions on Software Engineering, 22(12), 886-894.
Pearse, T. & Oman, P. (1995). Maintainability measurements on industrial source code
maintenance activities. Proceedings of the International Conference on
Software Maintenance, Opio (Nice), France, 295-303.
Pfleeger, S. L. (1998). Assessing project risk. Software Tech News, DoD Data
Analysis Center for Software, 2(2), 5-8.
Pigoski, T. M. & Nelson, L. E. (1994). Software maintenance metrics: A case study.
Proceedings of the International Conference on Software Maintenance,
Victoria, British Columbia, Canada, 392-401.
Schneidewind, N. F. (1993). Software reliability model with optimal selection of
failure data. IEEE Transactions on Software Engineering, 19(11), 1095-
1104.
Schneidewind, N. F. (1997a). Software metrics model for integrating quality
control and prediction. Proceedings of the Eighth International Symposium
Chapter VIII
Software Maintenance
Cost Estimation
Harry M. Sneed
Institut für Wirtschaftsinformatik, University of Regensburg, Bavaria
This chapter deals with the subject of estimating the costs of software
maintenance. It reviews the existing literature on the subject and summarises the
various approaches taken to estimate maintenance costs, starting with the original COCOMO approach in 1981. It then deals with the subject of impact analysis and
why it is essential to estimate the scope of maintenance projects. Examples are given
to illustrate this. It then goes on to describe some of the tools the author has
developed in the past ten years to support his practice of maintenance project
estimation including the tools SoftCalc and MainCost. For both of these tools
empirical studies of industrial experiments are presented as proof of the need to
automate the estimation process.
The owner of a defective automobile can postpone a repair as long as it is not a safety hazard. Whether he has it repaired or not depends on the
time and costs. In any case, he will want to know what it will cost before he does
anything. If he is prudent, he will visit several repair shops to get different estimates.
If he happens to be in a foreign country and the repair will take a week, he may
decide to wait until he returns home. Or, if the cost is too high, he may decide to wait
until he has the money to pay for it.
The time and cost play an even greater role in the case of enhancements and
perfection. It is similar to the situation of a home owner who wants to either add a
bathroom on to the house – enhancement – or renovate an existing bathroom –
perfection. In neither case is he compelled to act now. He can postpone this task
indefinitely depending on his budget. Everything is a question of costs and benefits
and priorities in the light of the current economic situation.
Software system owners, like owners of unfinished homes, have a number of options for what they could do next. They could upgrade the user interface, add new system interfaces, add new functionality, or restructure the code. What actions they take depends on their current priorities and the costs of implementing the actions. There is always more that could be done than the owner has the capacity or money to have done. If the owner is rational, he will weigh the costs and benefits of each alternative action and select among them on the basis of their cost/benefit relationship.
Therefore, except for critical errors and unavoidable adaptations, knowledge
of the costs is essential. The owner of the system must know what a given
maintenance task will cost in order to make a rational decision whether to pay for
the task or not. It is up to the maintainers of the software to give him a cost and time
offer, just as the car repair shop is obligated to make an offer for repair to the
automobile owner or the plumber is obligated to make an offer for a new bath to
the home owner. This is such a natural phenomenon that it is astounding to note how uncommon it is in the software business. One can only attribute the lack of a proper customer service relationship to the immaturity of the software field.
RESEARCH ON MAINTENANCE
COST ESTIMATION
The lack of attention to a proper cost/benefit analysis of maintenance tasks is
reflected in the pertinent literature. There has been very little research published on
the subject, and what research there has been has had little impact on industrial
practice. One of the first attempts to estimate the costs of software maintenance is
described by Barry Boehm in his book Software Engineering Economics (1981). Boehm maintained that annual maintenance costs can be derived from the initial development effort (DevEffort) and the annual change traffic (ACT), adjusted by a multiplication factor for the system type (Type).
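Written out, the relationship takes roughly the following form (a reconstruction from the description above; the exact multipliers depend on the COCOMO mode and cost drivers):

Annual-Maintenance-Effort = ACT * DevEffort * Type

where ACT, the annual change traffic, is the fraction of the software's source code that undergoes change during a year.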
based on real data and has to be accepted. It is doubtful, however, if business data
processing shops would be willing to pay so much for maintenance.
Another study which was directed toward assessing maintenance costs was
that of Vessey and Weber (1993) in Australia. These authors studied 447
commercial COBOL programs to determine to what degree program complexity,
programming style, programmer productivity, and the number of releases affects
maintenance costs. Surprisingly, they came to the conclusion that program com-
plexity has only a limited impact on repair costs. They also discovered that
programming style is only significant in the case of larger programs. The number of
releases only affected the costs of adaptive maintenance but not repair maintenance.
The more a program is changed, the more difficult it is to adapt. This reinforced the
conclusion that increasing complexity drives up adaptive maintenance costs. Thus,
it is important to assess program complexity, whereas programming style, e.g., structuredness and modularity, is not as relevant as many would believe.
Dieter Rombach (1987) also studied factors influencing maintenance costs at
the University of Kaiserslautern in Germany. He concluded that the average effort
in staff hours per maintenance task is best explained or predicted by those combined
complexity metrics which measure external complexity by information flow and
which measure internal complexity by length or number of structural units, i.e.,
nodes and edges. This means that the complexity of the target software should
definitely be considered when estimating the costs of maintenance tasks.
This author has examined these effort estimation equations in the light of
reengineering of existing software that could reduce complexity and increase
quality, thus lowering maintenance costs. However, there were limits to this.
Complexity could only be reduced by up to 25% and quality could be
increased by no more than 33% thus bringing about a maximum effort reduction of
30%! This conclusion is supported by the fact that the product itself is only one of
four maintenance cost drivers:
• the product being maintained,
• the environment in which the product is being maintained,
• the maintenance process, and
• the personnel doing the maintenance, (Sneed, 1991).
More recent studies have been made by Lanning and Khoshgoftaar (1994) in
relating source code complexity to maintenance difficulty and by Coleman, Ash,
Lowther and Oman (1994) to establish metrics for evaluating software maintain-
ability. The latter compute a maintainability coefficient to rate programs as either
highly maintainable, i.e., above 0.85, moderately maintainable, i.e., above 0.65, or
difficult to maintain, i.e., those under 0.65. Both studies come to the conclusion that
complexity and maintainability are related.
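A minimal sketch of this three-band rating, using the thresholds just cited:

    def rate_maintainability(coefficient):
        # Thresholds from Coleman, Ash, Lowther, and Oman (1994).
        if coefficient > 0.85:
            return "highly maintainable"
        if coefficient > 0.65:
            return "moderately maintainable"
        return "difficult to maintain"

    print(rate_maintainability(0.72))  # moderately maintainable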
In recent years, the first studies have been published on the maintainability of
object-oriented software. Chidamber and Kemerer (1994) have developed a set
of six metrics for sizing object-oriented software:
• weighted number of methods per class,
• depth of inheritance tree,
• number of subordinate classes per base class,
• coupling between object classes, i.e., the number of messages,
• number of possible responses per message, and
• cohesion between methods in a class, i.e., the number of commonly used data
attributes.
These metrics are very important in finding a way to size object-oriented
software. Equally important are metrics for determining the complexity of object-
oriented software. Wilde and Huitt (1992) have identified those features of object-
oriented software, such as polymorphism, inheritance and dynamic binding, which
complicate maintenance tasks. They maintain that the unrestricted use of these
features tends to drive maintenance costs up by increasing the complexity and
decreasing the transparency of the software.
These and the many other studies on the relationship of software product size,
complexity, and quality to maintenance costs must be considered when developing
a maintenance costing model. This is, however, only the product side. Besides the maintenance process, with the steps required to process a change request or error report as described by Mira Kajko-Mattsson (1999), there is also the environment side to be considered: features of the maintenance environment, such as source accessibility, browsing, editing, compiling and testing facilities, and communication facilities, all influence maintenance costs. Here, studies have been made by
Figure 1: Cost Drivers in Software Maintenance (the maintenance environment, the maintenance personnel, the maintenance process, and the maintenance product).
Thadhani (1984), Lambert (1984), and Butterline (1992), which show that
maintenance productivity can be increased by up to 37% by improvements to the maintenance environment, in particular the use of a dedicated maintenance workbench as proposed by this author in another paper (Sneed, 1995).
All of this goes to show how difficult it is to create a comprehensive model for
estimating software maintenance costs. It is not only necessary to consider the
product to be maintained, but also the environment in which it is to be maintained.
Finally, the quality of the maintenance personnel has to be considered. These are
the four general cost drivers in software maintenance as depicted in Figure 1.
In Step 3, the size of the impacted domain is measured in two or more of the
following size metrics:
• lines of code,
• statements,
• function-points,
• data-points, and
• object-points.
In Step 4, the size measure is adjusted by a complexity factor calculated using
a code auditor.
In Step 5, the size measure is adjusted by the external and internal quality
factors. The latter, which reflects the software maintainability, is also obtained
automatically from a code audit.
In Step 6, the size measure is adjusted by a productivity influence factor
depending on the estimation method.
In Step 7, the adjusted size measure is transposed into maintenance effort by
means of a productivity table.
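A minimal sketch of Steps 4 through 7, with hypothetical adjustment factors and a hypothetical productivity table mapping adjusted size to person-days:

    def estimate_effort(raw_size, complexity, quality_inv, influence, table):
        adjusted = raw_size * complexity * quality_inv * influence  # Steps 4-6
        # Step 7: interpolate linearly in the productivity table.
        sizes = sorted(table)
        for lo, hi in zip(sizes, sizes[1:]):
            if lo <= adjusted <= hi:
                frac = (adjusted - lo) / (hi - lo)
                return table[lo] + frac * (table[hi] - table[lo])
        return adjusted / 10.0  # fallback rate outside the table's range

    productivity = {100: 5.0, 500: 30.0, 1000: 70.0}  # person-days, hypothetical
    print(estimate_effort(300, 1.1, 0.8, 1.05, productivity))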
One option is to measure the size of the existing system in data-points. Data-points are a measure for the number of data
elements processed, the number of user views, the number of data fields in the
views, and the complexity of the interfaces (Sneed, 1990). Data elements and data
fields are weighted with 1, relational tables with 4, database accesses with 2, and
user views with 4. User views are adjusted by a complexity factor of 0.75 to 1.25.
The sum of the adjusted data-points is a reliable indicator for the size of 4GL
systems which connect user interfaces to relational databases.
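A minimal sketch of this count, using the weights just given (data elements and fields x 1, relational tables x 4, database accesses x 2, user views x 4 adjusted by a complexity factor of 0.75 to 1.25); the example values are hypothetical:

    def data_points(elements, fields, tables, accesses, view_factors):
        dp = elements + fields + tables * 4 + accesses * 2
        # Each user view counts 4 points, adjusted by its complexity factor.
        dp += sum(4 * factor for factor in view_factors)
        return dp

    print(data_points(120, 45, 8, 30, [0.75, 1.0, 1.25]))  # 269.0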
For object software, the most appropriate measure of size is the object-point.
Object-points are derived from the class structures, the messages, and the
processes or use cases (Sneed, 1995).
In the case of classes, the number of class-points for each class is adjusted by
the reusability rate. It is important to note that inherited attributes and methods are
only counted once, at the base or super class level.
Class-points and message-points can be derived automatically from the code
itself. Process-points must be counted in the user documentation. The object-point
count is the sum of the class-points, message-points, and process-points. As such,
it is a union of internal and external views of the software.
Function-points are a universal measurement of software size for all kinds of
systems. They are obtained by counting system inputs, system outputs, system
queries, databases, and import/export files (Albrecht & Gaffney, 1983). System
inputs are weighted from 3 to 6; system outputs are weighted from 4 to 7; system
queries are weighted from 3 to 6; databases are weighted from 7 to 15, and import/
export files are weighted from 7 to 10. It is difficult to obtain the Function-point
count by analyzing the code. The question is what counts as an input and an output. Counting
READ’s and WRITE’s, SEND’s and RECEIVE’s, DISPLAY’s and ACCEPT’s
is misleading, since there is no direct relationship between program I/O operations
and system inputs and outputs. The actual number of system inputs and outputs is
a subset of the program inputs and outputs. This subset has to be selected manually
from the superset derived automatically by comparison with the system documen-
tation. The database function-point count can be derived by counting physical
databases or relational tables in the database schema. Import/Export Files can be
found in the Interface Documentation. In light of the strong reliance on documen-
tation rather than code, there is much more human effort involved in counting
function-points, unless this count can be obtained from a CASE Repository.
Independently of what unit of measure is used, the result of the system sizing
task is an absolute metric representative of the size of the system as a whole. The
next step is to determine what portion of this size measurement is affected by the
planned maintenance operation. This is the goal in impact analysis.
Figure 3: Percent change factor example. A transaction-processing change request touches user views of 3, 5, and 6 function-points and a database of 16 function-points; at a 25% change rate, the database counts as 4 function-points (16 x .25).
others. The object of the impact analysis depends on how the software has been
sized. If it is sized in function-points, it will be necessary to analyze the system
documentation with the following questions in mind:
• What inputs and outputs must be added?
• What inputs and outputs must be altered?
• What queries must be added?
• What queries must be altered?
• What import/export files must be added?
• What import/export files must be altered?
• What files or databases must be added?
• What files or databases must be altered?
If inputs, outputs, queries, import/export files, or databases are to be added,
then their function-points must be counted in full. If they are to be altered, then the
question is to what extent. Here, the analyst must exercise judgement in coming up
with a percent change factor. One method is to count the number of data elements
to be changed or added relative to the sum of the data elements in that particular
input/output interface or file. (See Figure 3.)
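A minimal sketch of this method; the element counts are hypothetical:

    def impacted_fp(total_fp, changed_elements, total_elements):
        change_rate = changed_elements / total_elements
        return total_fp * change_rate

    # A database of 16 function-points with 4 of its 16 data elements
    # affected yields 4 impacted function-points, as in Figure 3.
    print(impacted_fp(16, 4, 16))  # 4.0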
If the system is sized in object-points, it is the goal of the analyst to first establish
which classes have to be added or altered. If a class is to be added, then its
attributes, relationships, and methods are counted in full. If it is to be altered, then
only those attributes, relationships and methods which are affected by the change
are counted. The same applies to messages. New messages are counted in full;
altered messages are counted relative to the proportion of parameters, senders, and
receivers affected. Finally, new processes are counted in full, but altered processes
are only counted if there is a new variant to be implemented. The sum of the impacted
object-points is then some portion of the total number of object-points. This will be the
most likely method of sizing maintenance actions on object-oriented software.
In case the system has been sized in data-points, the focus of the impact
analysis is on the data model. Here, it is required to answer the following questions:
• What data entities must be added?
• What data elements in existing data entities must be added or changed?
• What relationships between existing data entities must be added or changed?
• What user views must be added?
• What data fields in existing user views must be added or changed?
• What relationships between existing data entities and existing user views must
be added or altered?
New entities and user views are counted in full. Altered entities and user views
are counted in proportion to the number of their attributes and relationships affected
by the planned maintenance action. The result is some portion of the total system
size in data-points.
This factor is used to adjust the size of the impacted software by complexity. Thus, a complexity factor of 1.1 will cause a 10% increase in the size of the impact domain.

Complexity-Adjusted-Size = Raw-Size * Complexity-Factor

For example, given a raw size of 20 function-points, a complexity factor of 1.1, and an inverted internal quality factor of 0.8, the adjusted function-point count will be:

20 * 1.1 * 0.8 = 17.6
One project involved the automatic conversion of a customer's Assembler system to COBOL-85. The new COBOL versions were due to go into production by the end of the year. However, due to changes in the tax laws, the programs had
to be altered before going into production. The changes were first built by the
customer programmers into the original Assembler programs and tested. Fortu-
nately, they were also marked, so it was possible to identify them.
After the original Assembler programs had been validated, the statements in
the COBOL programs which corresponded to the marked Assembler statements
were identified. The changes were significant: in all, some 372 Assembler statements, or 5% of the total code, had been altered. This corresponded to precisely
410 COBOL statements.
The complexity measurements of the Assembler code derived by the
ASMAUDIT tool were:
Data Complexity = 0.743
Data Usage Complexity = 0.892
Control Flow Complexity = 0.617
Decisional Complexity = 0.509
Interface Complexity = 0.421
giving an average complexity for the Assembler programs of 0.636. This was
transposed to a complexity factor of 1.14. (See Figure 4.)
Figure 4: System Summary Report (ASMAUDIT)

QUANTITY METRICS
DATA SIZE METRICS
NUMBER OF FILES/DATABASE 11
NUMBER OF DATA OBJECTS 18
NUMBER OF DATA DECLARED 2473
NUMBER OF DATA REFERENCES 1805
NUMBER OF ARGUMENTS 3008
NUMBER OF RESULTS 2632
NUMBER OF PREDICATES 1428
NUMBER OF COPY/MACROS 38
NUMBER OF DATA-POINTS 712
PROCEDURE SIZE METRICS
NUMBER OF STATEMENTS 7521
NUMBER OF MODULES 9
NUMBER OF BRANCHES 2933
NUMBER OF GO TO BRANCHES 1254
NUMBER OF SUBROUTINE CALLS 207
NUMBER OF MODULE CALLS 19
NUMBER OF DATA REFERENCES 4961
NUMBER OF FUNCTION POINTS 55
NUMBER OF LINES OF CODE 9994
NUMBER OF COMMENT LINES 1093
COMPLEXITY METRICS
DATA COMPLEXITY 0.743
DATA FLOW COMPLEXITY 0.892
INTERFACE COMPLEXITY 0.421
CONTROL FLOW COMPLEXITY 0.617
DECISIONAL COMPLEXITY 0.509
INTERNAL COMPLEXITY 0.636
QUALITY METRICS
MODULARITY 0.339
PORTABILITY 0.214
MAINTAINABILITY 0.364
TESTABILITY 0.472
CONFORMITY 0.702
INTERNAL QUALITY 0.418
Applying the complexity and quality adjustments gave an adjusted Assembler statement count of 458. At this point, the size of the COBOL impact domain was still larger.
The project influence factors had also improved as a result of the reengineering.
It was now possible to edit, compile, and test the programs on a PC-workstation.
The product of the cost drivers was 1.26 for the Assembler Code. For the COBOL
Code, it was 0.98. Thus, whereas the final adjusted Assembler statement count
came out to be 577, the adjusted COBOL statement count was 21% less at 456.
At a productivity rate of 20 Assembler statements or 30 adjusted statements
per man-day, it took some 19 man-days to adapt and retest the original Assembler
programs. On the COBOL side, it took only 10 man-days to duplicate the changes
and retest the programs giving a maintenance productivity rate of 45 adjusted
statements or 41 real statements per man-day. The effort of specifying the changes
was not included on either side, but it would have been the same for both.
This is a good example of how reengineering can reduce maintenance costs.
Even though the automatic conversion of Assembler to COBOL resulted in more
code, it was still possible to alter the COBOL code quicker and cheaper than the
original Assembler code. This was due primarily to the better support offered by
the Microfocus COBOL Workbench which underlies the impact of the environ-
ment on maintenance productivity.
(Repository schema figure: modules, classes, methods, and attributes connected by the relationships isModuleOf, isClassOf, isSubclassOf, isMethodOf, isAttributeOf, refers, includes, uses, and calls, with links to concepts, functions, and test cases.)
To perform an impact analysis, the user first specifies which type of software entity is affected. The tool comes back with a list of all entity
occurrences of that type, e.g., a list of all classes. Then the user selects that entity
or those entities directly affected by the change. These are the first level impacted
source entities. They are passed as parameters to a backend component which then
searches through the repository for all entities to which they are related. These are
the second level impacted entities. This recursive search algorithm goes on up to a
maximum closure level defined by the user.
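A minimal sketch of this recursive search, modeling the repository as an adjacency map from each entity to its related entities (the entity names are illustrative):

    def impact_tree(repository, seeds, max_level):
        # Returns {entity: level}; level 1 holds the directly affected
        # entities, searched up to the user-defined closure level.
        impacted = {name: 1 for name in seeds}
        frontier, level = set(seeds), 1
        while frontier and level < max_level:
            level += 1
            frontier = {related
                        for entity in frontier
                        for related in repository.get(entity, [])
                        if related not in impacted}
            for name in frontier:
                impacted[name] = level
        return impacted

    repo = {"ClassA": ["ModuleX", "ClassB"], "ClassB": ["ModuleY"]}
    print(impact_tree(repo, ["ClassA"], max_level=3))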
The result of the impact analysis is a tree of impacted source elements identified
by name and level. The level corresponds to their distance from the base element,
measured in hierarchical levels. This tree is the output of CodeScan and the input
to CodeCalc. CodeCalc processes the tree node by node, searching out the source text of the particular entity. Each of the source texts referenced is measured in terms
of its size, quality, and complexity. For size, statements, data-points, function-points, and object-points are counted. Complexity is measured here in terms of data structure, data flow, data access, interface (fan-in/fan-out), control flow (cyclomatic complexity), decisional depth, branching level, inheritance depth, and language volume. Quality is measured in terms of portability, modularity, flexibility, testability,
readability, maintainability, security, and conformity to standards. These metrics for
each impacted element are stored in a table and accumulated (see Figure 7).
The sizes of the impacted elements are adjusted by two factors. One is the tree
level or distance from the point of impact. The size metrics are divided by the level
number thus reducing their impact depending on their distance from the base
element. The other factor is the change rate given by the user. It is a percentage of
the total size adjusted by the distance to the point of impact. It takes only that
percentage of the size affected by the change.
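A minimal sketch of this double adjustment; the sizes and levels are hypothetical:

    def adjusted_impact_size(elements, change_rate):
        # elements: (size, level) pairs from the impact tree; each size is
        # divided by its distance from the point of impact, then scaled
        # by the user-supplied change rate.
        return sum(size / level * change_rate for size, level in elements)

    print(adjusted_impact_size([(200, 1), (120, 2), (90, 3)], 0.25))  # 72.5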
Figure 7: Impact Analysis Paths (from a change request through components and modules).
The resulting adjusted size is then adjusted again by the complexity and the
quality of that particular source element. In the end, the adjusted sizes are
aggregated to give a total adjusted size of the impact domain. It is this total size which
is then compared to the existing productivity tables to be converted via interpolation
into a number of person-months based on the experience with previous mainte-
nance projects. It is important that the maintenance productivity table be kept up
to date. So after every completed maintenance task, the size of the actual impact
domain and the actual effort flow back into the productivity table by means of a
change analysis and the effort reporting system (see Figure 8).
MAINCOST has proven to be a very effective tool in terms of time and cost estimates. An experienced project planner can start an estimation and receive a range of estimated efforts within minutes. That means a customer requiring an offer to implement some change or enhancement can get an answer within a few hours. This is only possible with the support of an extensive software repository, impact analysis tools, source measurement tools and, last but not least, a set of accurate productivity statistics.
REFERENCES
Albrecht, A. J. & Gaffney, J. E. (1983). Software function, source lines of code, and
development effort prediction – A software science validation. IEEE Transac-
tions on S. E., Vol. SE-9, No. 6, p. 639.
Anquetil, N. (2000). A comparison of graphs of concept for reverse engineering. In
Proceedings of 8th IWPC-2000, Limerick, pp. 231-240. New York: IEEE
Press.
ANSI/IEEE. (1998). Standard 1219 – Standard for software maintenance, IEEE
Standards, Vol. 2. New York: IEEE Press.
Chapter IX
Software maintenance is the most expensive stage of the software life cycle.
However, most software organizations do not use any methodology for maintenance, although they do use one for new developments. In this article, a methodology for managing the software maintenance process is presented.
The methodology defines clearly and rigorously all the activities and tasks to
be executed during the process and provides different sets of activities for five
different types of maintenance (urgent and non-urgent corrective, perfective,
preventive, and adaptive). In order to help in the execution of tasks, some
techniques have been defined in the methodology. Also, several activities and tasks
for establishing and ending outsourcing relationships are proposed, as well as
several metrics to assess the maintainability of databases and their influence on the
rest of the Information System.
This methodology is being applied by Atos ODS, a multinational organization whose primary business activities include the outsourcing of software maintenance.
INTRODUCTION
Software Maintenance has traditionally been the most expensive stage of the software life cycle (see Table 1), and it will continue to grow and become the main work of the software industry (Jones, 1994). In fact, new products and technologies increase maintenance effort, both corrective and perfective (for hypertext maintenance, for example, as reported in Brereton, Budgen, & Hamilton, 1999) and adaptive (for adapting old applications to new environments, such as client/server, as discussed in Jahnke and Wadsack, 1999). With this in mind, it is natural that the software evolution laws announced by Lehman (1980) have recently been confirmed (Lehman, Perry, & Romil, 1998).
In spite of this, a study conducted by Atos ODS in Europe has shown that most software organizations do not use any methodology for software maintenance, although they do use one for new developments. This is surprising, above all if we take into account that 61% of the professional life of programmers is devoted to maintenance work, and only 39% to new developments (Singer, 1998).
So, we agree with Basili et al. (1996) in the sense that “we need to define and validate methodologies that take into account the specific characteristics of a software maintenance organization and its processes.” Furthermore, as these same
authors express, the improvement of the maintenance process is very interesting due
to the large number of legacy systems currently used.
Usual solutions for software maintenance can be divided into two groups:
technical, which, among others, encompasses reengineering, reverse engineering, and restructuring; and management solutions, characterized by having quality assurance procedures, structured management, use of human resources specialized
in maintenance, change documentation, etc. However, whereas every year new
technical solutions are proposed, very little has been researched about management
solutions. The consequences of this have been denounced by some authors:
Pressman (1993), for example, affirms that there are rarely formal maintenance organizations, which implies that maintenance is done “willy-nilly.” Baxter and Pidgeon (1997) show that incomplete or out-of-date documentation is one of the four main problems of software maintenance. For Griswold and Notkin (1993), successive software modifications make maintenance ever more expensive.
For Pigoski, there is a lack of definition of the maintenance process which,
furthermore, hampers the development and use of CASE tools for helping in its
management. Also, excepting some works (Brooks, 1995; McConnell, 1997), the
modeling of organizations is a neglected area of research in Software Engineering
in general. Fugetta (1999) states that most of techniques and methods for
in general. Fuggetta (1999) states that most of the techniques and methods for
WORKING METHOD
During the development of MANTEMA, we followed the recommendations of Avison, Lau, Myers, and Nielsen (1999), who say that “to make academic research relevant, researchers should try out their theories with practitioners in real situations and real organizations,” and we decided to select Action Research as an appropriate method for working jointly with Atos ODS. Action Research is a qualitative research method; such methods have recently received special attention (Seaman, 1999). According to McTaggart (1991), Action Research is “the way groups of people can organize the conditions under which they can learn from their own experiences and make this experience accessible to others.”
Basically, Action Research is an iterative research method which refines, after each
cycle, the generated research products. In fact, Padak and Padak (1998) identify
four cyclic phases in research projects carried out through Action Research (Figure
1).
As Wadsworth (1998) states, every refined solution provided after each cycle helps “to develop deeper understandings and more useful and more powerful theory about the matters we are researching, in order to produce new knowledge which can inform improved action or practice.”
Figure 1. The four cyclic phases of Action Research: (1) identifying questions to guide the research, (2) collecting information to answer the questions, (3) analyzing the information that has been collected, and (4) sharing results with others, leading to refined solutions.
A key idea is the classification of modification requests depending on whether or not they are due to an error, their degree of severity, etc. The IEEE Standard 1219 also recommends classifying modification requests as corrective, perfective, preventive, and adaptive, and integrating them into sets that share the same design areas (IEEE, 1992). This idea is also mentioned in ISO/IEC 12207 but, as in IEEE 1219, the activities to be followed depending on the type are not specified.
In MANTEMA, the following five types of maintenance are distinguished and
precisely defined (Polo et al., 1999c):
1) Urgent corrective, when there is an error in the system which blocks it and
must be corrected immediately.
2) Non-urgent corrective, when there is an error in the system which is not
currently blocking it.
3) Perfective, when the addition of new functionalities is required.
4) Preventive, when some internal attributes of the software are going to be
changed (maintainability, cyclomatic complexity, etc.), but with no change of
either the functionality or the form of use of the system.
5) Adaptive, when the system is going to be adapted to a new operational
environment.
This distinction allowed for the building of different technical guides for each type, which were progressively refined. However, in the final version of MANTEMA we have grouped the last four types into one, since the practical application of the methodology revealed that the treatment and ways of execution of these types of maintenance were very similar. Therefore, they have been grouped under a unique denomination, planneable maintenance, leaving urgent corrective as non-planneable maintenance.
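A minimal sketch of this grouping as a data structure (the naming is illustrative, not part of MANTEMA itself):

    from enum import Enum

    class MaintenanceType(Enum):
        URGENT_CORRECTIVE = "urgent corrective"
        NON_URGENT_CORRECTIVE = "non-urgent corrective"
        PERFECTIVE = "perfective"
        PREVENTIVE = "preventive"
        ADAPTIVE = "adaptive"

    def is_planneable(mtype):
        # Only urgent corrective maintenance is non-planneable.
        return mtype is not MaintenanceType.URGENT_CORRECTIVE

    print(is_planneable(MaintenanceType.PERFECTIVE))  # True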
On the other hand, a method for establishing and ending outsourcing relationships has been incorporated into MANTEMA in response to the growing importance that this area is nowadays acquiring in many sectors influenced by Information Technologies (Rao, Nam, & Chaudhury, 1996). Hoffman (1997)
shows, for example, that 40% of the biggest companies in the United States have
outsourced at least one of the major pieces of their operations.
Figure 2. The basic maintenance process: definition of the maintenance process, execution of the intervention, and migration and retirement.
STRUCTURE OF MANTEMA
From the ideas exposed in the previous section, the basis of the maintenance
process depicted in Figure 2 must be extended according to the different types of
maintenance identified and the outsourcing activities. Figure 3 illustrates the results
of these considerations.
Figure 3. The maintenance process extended with the types of maintenance: non-planneable activities and tasks (urgent corrective) and planneable activities and tasks (non-urgent corrective, perfective, preventive, and adaptive).
4) People responsible, who must belong to a set of profiles defined for the
maintenance process.
5) Interfaces with other processes, such as Configuration Management, in order to
execute operations inside the organization’s framework.
Then, the whole process can be seen as the concatenation of elements like
those shown in Figure 4, which illustrates the schema of a task. As it is seen, some
metrics must also be collected from the execution of every task, in order to have the
process under control.
Figure 6. Minimal content of the Maintenance Proposal:
1) Introduction.
2) General results of the analysis applied to the applications.
3) Service proposal:
3.1 Technical report defining: goals, bounds, bindings, responsibilities, and contractual
parameters.
3.2 For every application inside the (future) maintenance contract, maintenance types and
their corresponding service indicators must be set.
3.3 Proposal of contract (that will have the same format as the Definitive contract).
4) Economic offer.
Cost estimation techniques are a possible exception at this point; however, they do not consider risks in such estimations (although maybe the table of weight assignment of Albrecht's (1979) function-points could be considered as a light way of weighing risks) or, if they do, they are left to an expert's judgement (Briand, El-Emam, & Bomarius, 1998). In Polo, Piattini, and Ruiz (2002), the complete method for identifying and estimating risks in Software Maintenance projects is presented in detail.
These results are used to agree upon the values of the Service Level Agreements
between the two parties involved when there is an outsourcing relationship. Later
in this chapter, the use of these agreements for planning the resources needed for
urgent-corrective maintenance is shown.
With all these data, a Maintenance Proposal can be filled in; Figure 6 provides
the minimal content of such a proposal.
[Table fragment — Techniques: Observation, Interview, Estimation, Configuration
management, Cross references.]
Planneable Maintenance
As was previously mentioned, under this denomination we have grouped the
non-urgent corrective, perfective, preventive, and adaptive types of maintenance,
because they share a large portion of their activities and tasks. However, each of
them has certain particularities that distinguish it from the others. The different
task paths for each of these types of maintenance are shown in Figure 8, and we
detail them in Tables 4 and 5.
[Table 4. Planneable maintenance (continued). Only fragments of this flattened
table are recoverable: rows indicating the interfaces with other processes
(Verification, Configuration Management, Quality Assurance) for the successive
tasks.]
[Table 5. Planneable maintenance (continued). This flattened table cannot be fully
reconstructed from the extracted text. It covers tasks of the Modification Request
Analysis and Intervention and Tests activities and records, for each task: the
applicable maintenance types (non-urgent corrective, perfective, preventive,
adaptive); the inputs (e.g., the software product in operation, the modification
request, the MR in the waiting queue, the error diagnostic (DOC9), the
implementation alternatives (DOC10), the list of software elements and properties
to improve (DOC8), the document of intervention (DOC7/11/13), the project
documentation); the outputs (e.g., the MR in the waiting queue, the intervention
calendar, the selected alternative, the corrected or modified software product, the
adapted copy, the document of intervention (DOC13), the document of unitary tests
(DOC12), the product measures (DOC16a), the unitarily tested software product, the
schedule estimation, the resources availability); the techniques (portfolio
analysis, source code analysis, project management, project documentation
analysis, query to the historical DB, codification, redocumentation, test
techniques); the metrics (time dedicated to the task, number of affected FP, error
origin and cause); the responsible role (the Maintainer in every task); and the
interfacing processes (Quality Assurance and Configuration Management).]
Process Metrics
In this section, some metrics for keeping the process under control are shown:
a) Number of received MRs (NRMR)
b) Ratio of rejected/received MRs
c) Ratio of mutually incompatible MRs
d) Number of urgent-corrective MRs/NRMR
e) Number of non-urgent corrective MRs/NRMR
f) Number of perfective MRs/NRMR
g) Number of preventive MRs/NRMR
h) Number of adaptive MRs/NRMR
i) Number of MRs due to legal changes/NRMR
j) Number of MRs due to business evolution/NRMR
k) Number of MRs due to changes of business rules/NRMR
l) Number of replanned MRs per period and area, per period and NRMR, and per
period and functional domain
m) Service indicators:
• Response time
• Adherence to planning
n) Flexibility with no replanning: number of person-months available without
replanning or allocating new resources
o) Number of persons for each project and month
p) Mean degree of replanning: number of hours replanned in each period.
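Most of these metrics are simple counts and ratios over the log of received MRs,
so they can be derived mechanically. A minimal sketch, assuming a hypothetical
record format for the MR log:

    from collections import Counter

    def process_metrics(mrs: list[dict]) -> dict:
        # Each MR record is assumed to look like:
        # {"type": "perfective", "rejected": False, "cause": "business evolution"}
        nrmr = len(mrs)                                   # a) number of received MRs
        if nrmr == 0:
            return {"NRMR": 0}
        by_type = Counter(mr["type"] for mr in mrs)
        return {
            "NRMR": nrmr,
            "rejected_ratio": sum(mr["rejected"] for mr in mrs) / nrmr,       # b)
            "urgent_corrective_ratio": by_type["urgent corrective"] / nrmr,   # d)
            "perfective_ratio": by_type["perfective"] / nrmr,                 # f)
            "adaptive_ratio": by_type["adaptive"] / nrmr,                     # h)
        }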
Documents
During the maintenance process, a large quantity of documentation is generated.
MANTEMA provides standard templates for these documents, as shown in Figure 9
(some of which have been presented in Figures 5, 6, and 7).
Automatic Support
In general, there are many tools that help in the maintenance process,
although most of them were initially designed for other goals (Quang, 1993).
However, nearly all of these are vertical tools, in the sense that they only help
in some of the maintenance tasks (cost estimation, reengineering, reverse
engineering, restructuring), and there are few horizontal tools suitable for use
along the entire process; moreover, the use of horizontal tools is limited to
certain areas of the maintenance process, such as configuration management
(Pigoski, 1997).
CONCLUSIONS
In this chapter we have presented MANTEMA, a methodology for supporting
the maintenance process. We believe that this methodology fills a gap in Software
Engineering, providing a complete guide for carrying out the process through the
rigorous definition of the different types of maintenance, all of them detailed at
the task level.
Such a process architecture allows the methodology to be easily translated into
an automated environment, which facilitates both its application and the control
of the process.
REFERENCES
Albrecht, A. J. (1979). Measuring application development productivity. Proceedings
of the IBM Application Development Symposium, Monterey, CA.
Avison, D., Lau, F., Myers, M., & Nielsen, A. (1999). Action research. Communi-
cations of the ACM, 42(1), 94-97.
Basili, V., Briand, L., Condon, S., Kim, Y., Melo, W., & Valett, J.D. (1996).
Understanding and Predicting the Process of Software Maintenance Releases.
Proceedings of the International Conference on Software Engineering.
IEEE.
Baxter, I.D. & Pidgeon, W.D. (1997). Software Change Through Design Mainte-
nance, Proceedings of the International Conference on Software Engineer-
ing, 250-259. IEEE Computer Society, Los Alamitos, California.
Bourke, T. (1999). Seven majors ICT companies join the European Commission
to work towards closing the skills gap in Europe. Conclusions of the
Conference on “New Partnerships to Close Europe’s Information and Commu-
nication Technologies.” Available online: http://www.career-space.com/
whats_new/press_rel.doc. Accessed January 2, 2000.
252 Polo, Piattini, and Ruiz
Brereton, P., Budgen, D., & Hamilton, G. (1999). Hypertext: The next maintenance
mountain. Computer, 31(12), 49-55.
Briand, L., Kim, Y., Melo, W., Seaman, C., & Basili, V.R. (1998). Q-MOPP:
Qualitative evaluation of maintenance organizations, processes and products.
Software Maintenance: Research and Practice, 10, 249-278.
Briand, L.C., El Emam, K., & Bomarius, F. (1998). COBRA: A hybrid method for
software cost estimation, benchmarking and risk assessment. Proceedings of the
20th International Conference on Software Engineering. Washington D.C:
IEEE Computer Society Press.
Brooks, F. P. (1995). The mythical man-month: Essays on software engineering
(Anniversary ed.). Reading, MA: Addison-Wesley.
Calzolari, F., Tonella, P., & Antoniol, G. (1998). Modelling maintenance effort by
means of dynamic systems. Proceedings of the Third European Conference
on Software Maintenance and Reengineering. Amsterdam (The Netherlands),
IEEE Computer Society, Los Alamitos, CA, USA.
De Vogel, M. (1999). Outsourcing and metrics. Proceedings of the 2nd European
Measurement Conference, FESMA’99. Federation of European Software
Metrics Associations / Technologisch Instituut.
Euromethod Project (1996). Euromethod Version 1.
Frazer, A. (1992). Reverse engineering: Hype, hope, or here? In P.A.V. Hall (Ed.),
Software reuse and reverse engineering in practice (pp. 209-243). Chapman & Hall.
Fuggetta, A. (1999). Rethinking the models of software engineering research. The
Journal of Systems and Software, (47), 133-138.
Griswold, W.G. & Notkin, D. (1993). Automated assistance for program restructuring.
ACM Transactions on Software Engineering and Methodology, 2(3), 228-
269.
Hoffman, T. (1997). Users say move quickly when outsourcing your personnel.
Computer World, March, p.77.
IEEE Std. 1074-1991 (1991). Standard for Developing Software Life Cycle Pro-
cesses. IEEE Computer Society Press.
IEEE Std. 1219-1992 (1992). Standard for Software Maintenance. New York:
IEEE Computer Society Press.
ISO - International Organization for Standardization (1995). ISO/IEC 12207.
Information Technology - Software Life Cycle Processes.
Jahnke, J.H. & Wadsack, J. (1999). Integration of Analysis and Redesign Activities in
Information System Reengineering. Proceedings of the Third European Con-
ference on Software Maintenance and Reengineering, Amsterdam. Los
Alamitos, CA: IEEE Computer Society.
Jones, C. (1994). Assessment and Control of Software Risks. New York: McGraw-
Hill.
Chapter X
In the area of Software Maintenance (SM), there are still a number of matters
to study and research (Bennett & Rajlich, 2000). One of the most important is the
development of tools and environments to support methodologies and to facilitate
the reuse of processes (Harrison, Ossher, & Tarr, 2000). A Software Engineering
Environment (SEE) is quite useful to manage the complexity of SM projects, since
it can provide the needed services. The SEE must be capable of managing data and
metadata of the different production processes – in our case, the Software
Maintenance Process (SMP) – at different detail and abstraction levels. The SEE
should be based, for this purpose, upon a conceptual multilevel architecture,
allowing all the information of processes to be shared among all available tools. This
last need is satisfied using a repository manager that saves data and metadata of
processes using an open and portable format.
In this chapter, a conceptual multilevel architecture is presented that makes it
possible to integrate all the tools available for managing SM projects into a
single, integrated environment.
Such a SEE helps to approach the inherent complexity of the SMP from a
perspective that is broader than the merely technological one.
A SEE must satisfy several requirements to reach the aforementioned general
goal. The two most meaningful requirements are the following: it must be process-
oriented, and it must permit work with different models and metamodels of the
software processes involved in the SM projects.
Of the different aspects worth highlighting in these environments, in this chapter
we pay most attention to those more directly related to the goal of helping to
manage complexity. The first section presents a proposal to approach the SMP from
a wide perspective of business processes, integrating technological and management
aspects. The use of a Process-sensitive Software Engineering Environment (PSEE)
to reach such a goal is justified in the second section, and its architecture is
presented.
The importance of using a multilevel conceptual architecture is justified in the
third section, where the application of the Meta-Object Facility (MOF) Standard
to SM is also discussed. The MANTIS proposal of an integral environment for the
management of SM projects is presented in the following section. Lastly, the main
components of MANTIS are described in the remaining sections: conceptual
tools (multilevel architecture, involved processes, ontologies, and metamodels);
methodological tools (methodology and interfaces with organizational and
managerial processes); and technical tools (horizontal and vertical software tools,
repository, and interaction with process enactment software tools).
SOFTWARE MAINTENANCE
AS A BUSINESS PROCESS
In recent years, everything that occurs once a software product has been
delivered to users and clients has been receiving much more attention, because of
its significant economic importance for the information technology industry.
Proof of this are the recent cases of the Year 2000 effect and the Euro
adaptation. In the same line, Rajlich and Bennett (2000) have made a new proposal
of a software life cycle oriented towards increasing the importance of SM. These
authors consider that, from a business point of view, a software product passes
through the following five distinct stages:
• Initial development: engineers build the first functioning version of the
software product to satisfy initial requirements.
• Evolution: engineers extend the capabilities of the software product to meet
user needs. Iterative changes, modifications, and deletions to functionality
occur.
• Servicing (saturation): engineers make minor defect repairs and simple
functional changes. During this stage, changes are both difficult and expensive
because an appropriate architecture and a skilled work team are lacking.
• Phase-out (decline): the company decides not to undertake any more
servicing, seeking to generate revenue, or other benefits, from the unchanged
software product as long as possible.
• Closedown: the company shuts down the product and directs users to a
replacement product, if one exists.
Several characteristics change substantially from one stage to another, includ-
ing staff expertise, software architecture, software decay (the positive feedback,
the loss of team expertise, and the loss of architectural coherence) and economic
benefits. From the point of view of the SMP, another important difference between
one stage and another is the different frequency with which each type of mainte-
nance is carried out. Corrective maintenance (correcting errors) is more usual in the
servicing stage, while perfective maintenance (making changes to functionality) is
more frequent in the evolution stage. The other two types of maintenance, as defined
in the ISO 14764 (1998b) Standard—adaptive (changing the environment) and
preventive (making changes to improve the quality properties and to avoid future
problems)—are usually considerably less frequent.
Whilst the initial development stage is well documented using numerous
recognized methods, techniques and tools, the other four stages (which correspond
to the SM) have been studied and analysed to a lesser degree. The attention paid
to the development of tools and environments, adapted to the special characteristics
of the SMP, has been significantly low.
PROCESS-SENSITIVE SOFTWARE
ENGINEERING ENVIRONMENTS
Manufacturing processes are special cases of business processes. Although
SM is not a typical manufacturing process (due to peculiarities such as creative
human participation), it does share with manufacturing the double facet of
“production vs. management.” Therefore, it can be useful to highlight the
similarities, more than the differences, in order to understand the activities
involved in SM from a more global perspective. As with a manufacturing project,
a SM project consists of two main, interrelated processes: the production process
and the management process. In our case, the production process is the software
engineering process that we know as SM, whereas the management process provides
the resources needed by the production process and controls it. This is possible
if the SMP returns information about its behaviour to the management process.
[Figure: The management process controls the SM process and receives feedback
from it; a PSEE supports both, exploiting management technology, process
technology, and SM technology, which it provides to the users. The PSEE
architecture comprises user workspaces connected through a communication layer
to a process engine, import/export channels, and a repository of products, process
models, and process status.]
relationship, relational, etc.). In the latter, the abstract concepts used in the M1 layer
are defined: entity, table, attribute, activity, role, etc. Similarly, the M3 layer can
describe many other metamodels that in turn represent other kinds of metadata.
Summarizing, this four-layer metadata architecture has the following principal
advantages:
• It can support practically all kinds of imaginable meta-information.
• It allows different kinds of metadata to be related.
• It allows the interchange of both metadata (models) and meta-metadata
(metamodels).
The M3 layer is composed of the MOF Model, which constitutes the meta-language
for metamodel definition. The main modelling concepts provided by MOF
are similar to the corresponding ones in UML, although with slight differences
(a sketch of the layering follows the list):
• Classes, which model MOF meta-objects. Classes can have attributes and
operations.
• Associations, which model binary relationships between meta-objects.
• Data Types, which model other data (e.g., primitive types, external types,
etc.).
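The layering can be made concrete with a small sketch. The names below (MofClass
and the example model elements) are hypothetical illustrations of how each layer
instantiates the one above it, not MOF API calls:

    from dataclasses import dataclass, field

    @dataclass
    class MofClass:                       # M3: a meta-language construct
        name: str
        attributes: list[str] = field(default_factory=list)

    # M2: an SMP metamodel concept defined as an instance of an M3 construct.
    activity_meta = MofClass("Activity", attributes=["name", "responsible"])

    # M1: a MANTEMA model element that instantiates the M2 concept.
    mr_analysis = {"metaclass": activity_meta,
                   "name": "Modification Request Analysis",
                   "responsible": "Maintainer"}

    # M0: real project data that instantiates the M1 element.
    m0_record = {"model_element": mr_analysis,
                 "instance": "Analysis of a concrete modification request"}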
[Figure: The four MOF layers as used in MANTIS. The M3 layer contains the MOF
Model (the meta-metamodel); the metamodel (M2) and model (M1) layers lie below
it; the M0 layer contains the projects’ data (user objects).]
General Characteristics
Probably, the principal advantage of MANTIS is that it integrates practically
all the aspects that must be taken into account for directing, controlling, and
managing SM projects under one conceptual framework. This advantage is built
upon the following features:
• a MOF-based conceptual architecture that facilitates working with the
significant complexity inherent in the management of SM projects; specifically,
the M3 layer allows work with the different metamodels needed in MANTIS:
of processes, of product, of organization, etc.;
• the integration of methods and techniques that have been specially developed
for the SMP, such as the MANTEMA methodology (Polo, Piattini, Ruiz, &
Calero, 1999a), or adapted from the development process;
Components
All the MANTIS components are categorized as tools of three different types:
conceptual, methodological, and technical (i.e., CASE software tools). A summary
of the components that currently make up the MANTIS Big-E Environment is
shown in Figure 4. The principal components are described in the following
paragraphs, highlighting the most original aspects and those of most interest to
organizations that must undertake SM projects.
[Figure 4: Components of the MANTIS Big-E Environment: conceptual tools;
methodological tools, with organizational interfaces (Improvement, based on the
Niessink proposal; Measurement, a suite of specific metrics for the SMP) and
managerial interfaces (Management and Project Management, based on the PMI
proposal; Risk Management, a special set of risk factors); and technical tools.]
CONCEPTUAL TOOLS
Conceptual tools are used to represent the inherent complexity of SM
projects. A level-based conceptual architecture is necessary to be able to work at
different detail levels. A software life cycle process framework is useful for
knowing which software processes are related to the maintenance process. To make sure
that all the concepts are correctly defined, used, and represented, a generic
ontology for the SM is used. Moreover, in the MANTIS Big-E Environment, two
different but complementary points of view of SMP are considered:
• a real-world SMP, that includes all the real activities needed to carry out the
maintenance project; and
• an SMP metamodel, which is a representation of the real-world activities for
steering, enforcing or automating parts of that SMP.
A real-world SMP is defined with a Workflow ontology based on workflow
technology. A Measure ontology for SM project management has been defined to
estimate and improve this process by measuring what is happening in those
real-world projects.
In conclusion, an adequate software process generic metamodel is required to
represent the different ontologies.
Conceptual Architecture
Four conceptual levels that are based on MOF have been defined. These four
levels of the MOF architecture and their adaptation to MANTIS (Ruiz, Piattini, &
Polo, 2001b) can be seen in Table 1.
Level M0 has the data of real and specific SM projects with concrete time and
cost restrictions. The data handled at this level are instances of the concepts defined
at the higher M1 level. The most important specific models that are used at level M1
are based on the MANTEMA methodology and a group of techniques adapted to
the special characteristics of the SM. Level M2 corresponds to the SMP
metamodel, which will be discussed later in more detail. For example, the generic
concept of Activity used in M2 is instanced in the Modification Request Analysis
concept in M1 and these, in turn, appear in level M0 as Analysis of the modification
[Figure: Software life cycle process framework (after ISO 12207), situating
MAINTENANCE among the other processes: primary processes (Customer-Supplier:
Acquisition, etc.; Engineering: Development, Maintenance), supporting processes
(Documentation, etc.), and organizational processes (Organization, Management,
Organizational alignment, Improvement, Project Management, Human Resource
Management, Quality Management, Infrastructure, Risk Management, Measurement,
Reuse).]
models and metamodels that have been used in the problem domain (SMP
management) based on the same conceptualisation (the set of objects, concepts,
entities, and relationships among them that are assumed to exist in the domain)
are also required. Moreover, an explicit specification of such a conceptualisation
is also required; that is, an ontology must be built (Gruber, 1995).
For the above-mentioned reasons, the elaboration of an ontology common to all
the components of the Big-E Environment is a secondary goal of MANTIS. For
this purpose, the informal ontology proposed by Kitchenham et al. (1999) is
adequate. A certain level of formalization is required in order to represent the
ontology by means of objects of the aforementioned conceptual levels, as well as
to build the tools that keep and manage models and metamodels. MANTIS uses UML
to formalize ontologies. The proposal of Kitchenham et al. is useful for everyone
working in the field of SM and, of course, also for defining and building
maintenance-oriented SEEs.
[Figure 6: Summarized and integrated view of the partial ontologies of Kitchenham
et al. (1999), as a UML class diagram. Recoverable classes include Software
Resource, Human Resource, Procedure, Paradigm, Maintenance Activity, Management
Activity, Modification Activity, Investigation Activity, Enhancement, Correction,
Configuration Management, Changed Requirements, Maintenance Event, Maintenance
Management, Product Upgrade, Investigation Report, Product, Maintenance
Organisation Structure, Change Control, Client Organisation, Maintenance
Organization, Engineer, Manager, and Customer, with relationships such as
automates, employs, constrains, performs, negotiates_with, is_output_to,
receives, delivers, produces, defines, supports, uses, has, and approves.]
This proposal is structured in several partial subontologies focused on the
Activities, the Products, the Peopleware, and the Processes. In the MOF-based
conceptual architecture used in MANTIS, each of these ontologies can be
represented as a partial M2-level metamodel that operates like a MOF package at
the M3 level, allowing these metamodels to be reused.
Figure 6 shows a summarized and integrated view of these partial ontologies
(in UML class diagram format). Due to the size of this proposal, we only present
the partial ontologies that compose it, and we recommend that readers refer to
Kitchenham et al. (1999) for more information. In short, each of these
ontologies represents the following aspects of SM:
• Products ontology: how the software product is maintained and how it
evolves with time.
• Activities ontology: how to organize activities for maintaining software and
what kinds of activities they may be.
• Processes ontology: approaches the SMP from two different perspectives,
defining a sub-ontology for each one:
– Procedures sub-ontology: how the methods, techniques, and tools (either
specific or shared with the development process) can be applied to the
activities, and how the resources are used in order to carry out these
activities.
– Process Organization sub-ontology: how the supporting and organizational
processes (of ISO 12207) are related to the SMP activities, how the
maintainer is organized, and what its contractual obligations are.
• Peopleware ontology: what skills and roles are necessary in order to carry
out the activities, what the responsibilities of each one are and how the
organizations that intervene in the process (maintainer, customer and user)
relate to each other.
Workflows Ontology
Recently, some authors (Ocampo & Botella, 1998) have suggested the
possibility of using workflows for dealing with software processes, taking advan-
tage of the existing similarity between the two technologies. The value of Workflow
Management Systems (WFMS) in the automation of business processes has been
clearly demonstrated and, given that SMP can be considered as part of a wider
business process, it is reasonable to consider that workflow technology will be able
to contribute a broader perspective to SMP (which we could call Process
Technology) in line with the objectives of our MANTIS Big-E Environment.
These reasons have led us to integrate the workflow technology in the generic
MANTIS ontology. We have incorporated aspects of the Workflow Reference
Activities Specification
Activities specification includes how activities can be broken down into simpler
activities, their relationships and control flow, and the possibility of automatic
execution. The different roles that can be played in each activity are also
included.
[Figure 7: Workflow specification classes. The SMP is managed by the MANTIS
Environment and creates SMP Instances, which are registered in a Historical of
SMP Instances; the SMP is defined using a Workflow Specification composed of
Diagram Nodes, with Control Flows beginning and ending in Diagram Nodes.]
The following objects and relationships are incorporated:
• The SMP itself is seen as a business process, managed using the MANTIS
Environment.
• The Task Type Specification class contains the properties that can be
abstracted from an activity or task, and that are independent from workflows;
i.e., it includes the general properties of a set of tasks of the same type. There
is a distinction between each Task Type Specification and the node represent-
ing it in the workflow (Workflow Activity).
• A Task Type Specification can be atomic or nested. The former have no internal
structure and are represented through the Simple Task class; the latter have
an internal structure that is represented using a Workflow Specification. The
execution of a nested task implies the execution of the workflow underlying it.
• Recursivity is used to represent the task hierarchy. A maintenance project is
modelled as a single main Task Type Specification. This main task has an
associated Workflow Specification, which includes different Workflow
Activities (appearing as Diagram Nodes), each one being defined by a Task
Type Specification. In turn, each of these specifications has an associated
Workflow Specification, which includes other Workflow Activities, and so on. The
number of recursivity levels is given by the levels used in the decomposition
structure of the project. Recursivity finishes when a Task Type Specification
corresponds to a Simple Task.
• Each Workflow Specification is represented through the Diagram Node,
Control Flow, Transition Condition and Workflow Activity classes. The
workflow model used is based on the proposal of Sadiq and Orlowska
(1999).
• A Workflow Specification is represented using workflow diagrams (see
Figure 8); that is, a set of Diagram Nodes interconnected by means of Control
Flows (arrows). Diagram Nodes can be Workflow Activities or Transition
Conditions. A condition can be either Or-split or Or-join; Or-split and Or-join
conditions respectively allow the representation of branches (choice) and fusions
(merge), that is, of optional execution paths. To represent concurrent execution
paths, activities with more than one control flow beginning in them (beginning of
concurrency) or more than one control flow finishing in them (end of concurrency,
or synchronization) are used.
• The Workflow Activity class has specializations of Manual Activity and
Automated Activity, in order to check that automated activities can be carried
out.
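The recursive structure of Task Type Specifications and Workflow Specifications
described above can be sketched as follows (hypothetical class and function names;
a real PSEE would add control flows, conditions, and roles):

    from __future__ import annotations
    from dataclasses import dataclass, field

    @dataclass
    class WorkflowSpecification:
        activities: list[TaskTypeSpecification] = field(default_factory=list)

    @dataclass
    class TaskTypeSpecification:
        # Atomic (Simple Task) when workflow is None; nested otherwise.
        name: str
        workflow: WorkflowSpecification | None = None

    def execute(task: TaskTypeSpecification) -> None:
        if task.workflow is None:
            print(f"performing simple task: {task.name}")
        else:
            # Executing a nested task implies executing its underlying workflow.
            for activity in task.workflow.activities:
                execute(activity)

    # A maintenance project modelled as a single main Task Type Specification:
    project = TaskTypeSpecification("Maintenance project", WorkflowSpecification([
        TaskTypeSpecification("Modification Request Analysis"),
        TaskTypeSpecification("Intervention and tests"),
    ]))
    execute(project)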
Process Enactment
The dynamic aspects related to the execution are represented through the
following objects and relationships:
[Figure 8: Example workflow diagram. Numbered Diagram Nodes (1-9) illustrate an
Initial Activity, a SEQUENCE, a CHOICE (Or-split condition) and a MERGE (Or-join
condition), CONCURRENCY and SYNCHRONIZATION, an Atomic Activity, a Nested
Activity, and a Final Activity, interconnected by CONTROL FLOWS.]
Measure Ontology
A fundamental aspect of managing and controlling any project is the availability
of a set of metrics that allow measuring both the product being produced or
maintained and the way the project is being executed. Both aspects are
fundamental for quality assurance and for process assessment or improvement. For
these reasons, and for software engineering to be considered as such, it is
essential to be able to measure what is being done. For this purpose, the measure
ontology of MANTIS includes the concepts of measure, metric, value, and attribute
associated with the activities, artifacts, and resources, bearing in mind that
measurements can refer to both product and process aspects. This measure ontology
of MANTIS is based on the proposal of the Fraunhofer Institut für Experimentelles
Software Engineering (IESE), which represents a generic schema for the modelling
of software processes in which measurement aspects are very important
(Becker-Kornstaedt & Webby, 1999). The following concepts are taken into
consideration:
• The activities, artifacts, resources, and actors are “appraisable elements.” In
order to be able to measure process enactment, the run-time instances of
SMP and activity, and the work items are also appraisable elements.
• An appraisable element has “attributes” that are susceptible to measurement.
For example, the duration of an activity or the length of a code module (an
artifact type).
• Each attribute has a specific “attribute type,” with the possibility that “sub-
types” may exist. For example, the “duration of an activity” is a subtype of
“quantity of time.”
• A “metric” is a formula for measuring certain types of attributes. A “measure”
is an association between a specific attribute and a metric. Its principal
property is the “value” obtained.
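A minimal sketch of these concepts, with hypothetical names (not the actual IESE
schema):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AttributeType:
        name: str
        parent: Optional["AttributeType"] = None  # e.g., "duration of an activity"
                                                  # is a subtype of "quantity of time"

    @dataclass
    class Attribute:
        element: str        # an appraisable element: activity, artifact, resource...
        type: AttributeType

    @dataclass
    class Measure:
        # Association between a specific attribute and a metric (a formula);
        # its principal property is the value obtained.
        attribute: Attribute
        metric: str
        value: float

    time_qty = AttributeType("quantity of time")
    duration = AttributeType("duration of an activity", parent=time_qty)
    m = Measure(Attribute("Modification Request Analysis", duration),
                metric="elapsed hours", value=3.5)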
Process Metamodel
In order to be able to represent, in an integrated way, the aforementioned
ontologies, a generic software process metamodel, which is represented at level
276 Ruiz, García, Piattini, and Polo
Process Modelling
This metamodel is prepared from the necessary constructors to define models
of software processes. For it, the Element abstract class is the root class, from which
Entity and Relationship classes are defined by inheritance (in a similar manner to the
use of MOF-class and MOF-association in the MOF model).
[Figure 9: Generic software process metamodel (UML class diagram). The Element
root class splits into Entity and Relationship; the Process Modelling part
relates Project, Activity, Resource (Equipment, Room, Tool), Artifact, and Agent
through relationships such as contains, owns, consumes, produces, modifies,
involves, and is assigned to; the Measurement part attaches Attributes, Attribute
Types, Values, and Expressions to Elements; and the Human Resources part relates
Agent, Role, Organization, Person, and Position through relationships such as
assumes, fills, belongs to, has, and reports to.]
Measurement
In order to represent the aforementioned measure ontology, we must include
the possibility of defining indicators for any element in the metamodel. These
indicators must allow us to check the quality of the processes, for example, in
order to be able to apply improvement plans. The partial measurement metamodel
(shown in Figure 9) includes the Attribute, Attribute Type, Value, and Expression
classes. Each Element of the model can have certain associated Attributes. In
turn, each Attribute has an Attribute Type (integer, date, string, etc.) and a
Value. Examples of attributes are the duration of an activity, the number of
errors detected in a module, and the cost of a resource. In addition, the
calculation of an attribute value can be based on an Expression. The same
expression can be used by various attributes.
Human Resources
This partial metamodel stands out within the generic metamodel, since human
resources are the key element for successfully performing software processes. With
this partial metamodel, it is possible to model who performs the activities. The
following classes stand out:
METHODOLOGICAL TOOLS
Because maintenance is the most expensive stage of the software life cycle, it
is important to have methodologies and tools that allow this problem to be
approached in the best possible way. Since the necessities of the development and
maintenance phases are quite different (see Figure 10), it becomes necessary to
grant SM the importance that it has, keeping in mind its special characteristics
and its differences from the development phase. The MANTEMA methodology was
developed with this objective in mind (Polo et al., 1999a). For these reasons,
MANTEMA 2.0 is the methodology proposed in the MANTIS Environment; a MANTEMA
process model is included at the M1 level of the conceptual architecture.
For an adequate management of the SMP, MANTIS considers interfaces with
several organizational and managerial processes (see Figure 5): improvement,
measurement, management, project management, and risk management. The models of
these different processes can also be represented at the M1 level of the
conceptual architecture.
[Figure 10: Different necessities of development and maintenance. The development
phases listed include preliminary conception, detailed conception, codification,
unitary test, and formation/installation.]
MANTEMA Methodology
A whole chapter of this book is dedicated to MANTEMA; therefore, this section
only explains the role model used, given its importance in process management
issues. MANTEMA classifies the roles depending on the organization to which they
belong (Polo et al., 1999c):
• Customer organization, which owns the software and requires the maintenance
service. Its roles can be:
– The Petitioner, who promotes a modification request, establishes the
requirements needed for its implementation, and informs the maintainer.
– The System organization, the department that has a good knowledge
of the system that will be maintained.
– The Help-Desk, the department that attends to users. It also reports the
incidents sent by users to the Petitioner in order to generate the
modification requests.
• Maintainer Organization supplies the maintenance service. Its roles can be:
– The Maintenance-Request Manager decides whether the modification
requests are accepted or rejected and what type of maintenance should
be applied. He/she gives every modification request to the Scheduler.
– The Scheduler must plan the queue of accepted modification requests.
– The Maintenance Team is the group of people who implement the
accepted modification request. They take modification requests from
the queue.
– The Head of Maintenance prepares the maintenance stage. He/she
also establishes the standards and procedures to be followed with the
maintenance methodology used.
• User Organization, which uses the maintained software. Its roles can be:
– The User, who makes use of the maintained software and communicates
incidents to the Help-Desk.
This enumeration is not intended to be rigid, since it may be tailored to each
particular case, including new roles or modifying the existing ones. Each of the
three organisations listed above may be a different organisation, but this is not
always so; sometimes two or more different roles may coincide in the same actor
(person or organizational unit).
Organizational Interfaces
SMP improvement can be managed with MANTIS from the two different
perspectives proposed by Niessink (2000):
• Measurement-based, where measurement is used as an enabler of improvement
activities (see Figure 11); and
• Maturity-based, where the organization or its processes are compared with a
reference framework that is assumed to contain the correct activities for the
organization or processes. The best-known examples of such reference
frameworks are the Software Capability Maturity Model (CMM) and the
ISO 15504 Standard.
In MANTIS, we have used the ISO 15504 as a model for the assessment and
improvement of the SMP (García, Ruiz, Piattini, & Polo, 2002). For example,
Table 2 shows the integration of the ISO 15504 assessment model into the
MANTIS conceptual architecture.
The multilevel conceptual architecture of MANTIS enables us to work with
different models of one process, which is a requirement in order to be able to
manage the process improvement.
On the other hand, managing the measurement process is also possible with
MANTIS, thanks to the inclusion of the previously mentioned Measure metamodel,
[Figure 11: Measurement-based improvement, relating analysis to possible causes
and implementation to possible solutions.]
and to the definition of a set of metrics that are suitable for the management of the
SMP. In the MANTEMA methodology, metrics are enumerated for each task.
Furthermore, some of these metrics are used to define Service Level Agreements
(SLA) in SM contracts and to verify their later fulfilment:
• Time of Resolution of Critical Anomalies (TRCA): the maximum time that
the maintenance organization may take to fix a critical anomaly without
being sanctioned.
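Verifying such an agreement reduces to comparing the observed resolution time of
each critical anomaly against the contracted TRCA. A sketch (the 24-hour value is
an invented example, not a MANTEMA figure):

    from datetime import datetime, timedelta

    TRCA = timedelta(hours=24)   # hypothetical contracted value

    def trca_violated(reported: datetime, fixed: datetime) -> bool:
        # True when fixing the critical anomaly took longer than the agreed
        # TRCA, i.e., the maintenance organization is subject to the sanction.
        return fixed - reported > TRCA

    # An anomaly reported at 09:00 and fixed 30 hours later violates the SLA:
    assert trca_violated(datetime(2002, 5, 1, 9, 0), datetime(2002, 5, 2, 15, 0))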
Table 2. Mapping between generic metamodel and assessment model.
Managerial Interfaces
The management and project management process models are based on the
ISO 16326 proposal (ISO/IEC, 1998c). Additionally, as in the aforementioned
Workflows ontology, a Work Item is defined as the smallest unit of work that is
undertaken and that is controlled, managed, and assigned to an actor to carry it
out. This method for controlling the execution of work is based on the norm of
the Project Management Institute (PMI, 2000).
Knowledge of the main risks associated with SM projects is required in order
to manage the risk management process with MANTIS. For this, a tailoring has been
made of the Control Objectives for Information and Related Technology (CobiT)
proposal, published by the Information Systems Audit and Control Foundation
(ISACF, 1998). The high-level control objectives list of CobiT has been modified,
substituting Manage Changes with Manage the Software Maintenance Process, and the
resulting list of detailed control objectives has also been restructured in terms
of the SMP activities and tasks. The result is the set of high-level and detailed
control objectives (Ruiz et al., 2000) shown in Table 3.
TECHNICAL TOOLS
Although the development of software tools is not the main goal of the
MANTIS project, some have been defined and constructed in order to support the
automation of SM project management. The most significant are mentioned below.
Horizontal Tool
MANTIS Tool is a software tool whose functionality is to offer an integrated
user interface for using the different vertical tools, as well as any other
software used
(WFMS, etc.). Figure 12 shows a summary of its interaction with other types of
software tools used in MANTIS.
Vertical Tools
MANTOOL allows the management of modification requests (MR) from their
inception until the end of their respective sets of activities and tasks, according to
the five maintenance types and the different stages defined in the MANTEMA
methodology. Templates of MR and of other documents generated during the
maintenance process are carefully detailed. The tracking of every MR is done on a
screen such as that in Figure 13, which has a tab for every task belonging to the
maintenance type of the MR; the inputs, outputs, and metrics of the task are
specified on these tabs. There is also an additional tab (the one in foreground in
Figure 13) that shows the general information of the MR (application, date and time
of presentation, number of modification request, last executed task, the user who
presented it, and a description of the error and error messages). There is also a
graph with nodes associated with every stage. Nodes change their colour when their
respective tasks are executed in order to provide a quick idea of the MR status.
[Figure 12: Interaction of MANTIS Tool with the MANTIS repository and with
process enactment software (projects).]
The data saved in MANTOOL can be used to extract different kinds of
reports and to make estimates of future maintenance interventions: summaries of
the different data saved for every application in every type of maintenance; data of
maintenance requests related to a definite type of maintenance; dedication of
personnel; deviations (difference between the time initially scheduled for executing
an MR and the real time dedicated); tendency (evolution of the different metrics
both of modules and of routines); etc.
MANTICA is another tool of the MANTIS environment, developed to define
and register quality metrics of relational, object-relational, conceptual, or UML
schemas. For other types of metrics, those developed by other authors have been
selected.
METAMOD is a MOF-based tool for representing and managing software
process models and metamodels (Ruiz et al., 2001c). The application is composed
of a metamodel administrator as its principal component and a graphical user
interface that allows a visual description of the classes that make up the core
of the MOF model (Package, Class, DataType, Attribute, Operation, Reference,
AssociationEnd, and Constraint). The metamodel administrator has a three-level
structure, as does the MOF model: a package contains classes and associations,
a class contains attributes and operations, an association contains restrictions,
etc. In Figure 14, the window associated with the MOF-Class definition is visible.
Repository Manager
A basic aspect for the effectiveness of MANTIS is the existence of a common
repository for all the environment components. In order to have an open format for
the storage of the data and metadata, the MANTIS repository uses XML
Metadata Interchange (XMI). It is essential that both the metamodels defined
with the MANTIS tools and the metamodels defined with other tools that support
XMI are usable. XMI constitutes the integrating element for metadata originating
from different sources, as it represents a common specification. Therefore, to a
great extent, the RepManager component provides a tool with support enabling it
to interchange metadata with other repositories or tools that support XMI. These
functions are supported via a set of calls to the system and basically are:
1) Storage of MOF models in the local metadata repository using XMI
documents; and
2) Importation/exportation of models and metamodels.
[Figure: Example of XMI storage, where M0 project data instantiate the M1
MANTEMA model:]
    <MANTEMA:UrgentCorrectiveIntervention name="Intervention number 36"
                                          date="31/12/1999"/>
    <MANTEMA:MaintenanceTeam name="A Team" participants="7"/>
REFERENCES
Aversano, L., Betti, S., De Lucia, A., & Stefanucci, S. (2001). Introducing
workflow management in software maintenance processes. IEEE Interna-
tional Conference on Software Maintenance (ICSM), pp. 441-450.
Becker-Kornstaedt, U. & Webby, R. (1999). Comprehensive schema integrating
software process modeling and software measurement. Fraunhofer Institute,
IESE Report No. 047.99/E, v. 1.2.
Bennett, K. & Rajlich, V. (2000). Software maintenance and evolution: A
roadmap. International Conference on Software Engineering (ICSE) -
Future of SE Track, pp. 73-87.
Cockburn, A. (2000). Selecting a project's methodology. IEEE Software,
July/August, pp. 64-71.
Derniame, J.C., Kaba, B.A., & Warboys, B. (1999). The software process:
Modelling and technology. In Derniame et al. (Eds.), Software Process:
Principles, Methodology and Technology (LNCS 1500, pp. 1-13). New York:
Springer-Verlag.
Falbo, R.A., Menezes, C.S., & Rocha, A.R. (1998). Using ontologies to improve
knowledge integration in software engineering environments. Proceedings of
the 4th International Conference on Information Systems Analysis and
Synthesis, SCI’98/ISAS’98, Orlando, FL. USA, July.
García, F., Ruiz, F., Piattini, M., & Polo, M. (2002) Conceptual architecture for
the assessment and improvement of software maintenance. Proceedings of
the 4th International Conference on Enterprise Information Systems
(ICEIS’02). Ciudad Real (Spain), April.
Gruber, T. (1995). Towards principles for the design of ontologies used for
knowledge sharing. International Journal of Human-Computer Studies,
43(5/6), pp. 907-928.
Harrison, W., Ossher, H., & Tarr, P. (2000). Software engineering tools and
environments: A roadmap. International Conference on Software Engi-
neering (ICSE) - Future of SE Track, pp. 261-277.
ISACF (1998). CobiT: Governance, Control and Audit for Information and
Related Technology, 2nd edition. Information Systems Audit and Control
Foundation. Rolling Meadows, IL.
ISO/IEC (1995). IS 12207 Information technology - Software life cycle
processes. Geneva, Switzerland: International Organization for Standards.
ISO/IEC (1998a). TR 15504-2 Information technology - Software process
assessment - Part 2: A reference model for processes and process
capability, August.
ISO/IEC (1998b). FDIS 14764 Software engineering - Software mainte-
nance, December.
ISO/IEC (1998c). DTR 16326 Software engineering – Project management,
December.
ISO/IEC (2000). JTC1/SC7/WG4 15940 Information technology - Software
engineering environment services, July.
ISO/IEC (2001). FDIS 15474-1 Software Engineering - CDIF Framework -
Part 1: Overview, working draft 5, March 2001.
Kitchenham, B.A., Travassos, G.H., von Mayrhauser, A., Niessink, F., Schneidewind,
N.F., Singer, J., Takada, S., Vehvilainen, R., & Yang, H. (1999). Towards
an ontology of software maintenance. Journal of Software Maintenance:
Research and Practice, 11, pp. 365-389.
Liu, C., Lin, X., Zhou, X., & Orlowska, M. (1999). Building a repository for
workflow systems, Proceedings of the 31st International Conference on
Technology of Object-Oriented Language and Systems. IEEE Computer
Society Press, 1999, pp. 348-357.
MDC (1999). Meta Data Coalition, Open Information Model, v.1.0, August.
Niessink, F. (2000). Perspectives on improving software maintenance. PhD
Thesis, Vrije Universiteit, The Netherlands. Available at
http://www.opencontent.org/openpub/.
Niessink, F. & Vliet, H.v. (1999). Measurements should generate value, rather
than data. Proceedings of the Sixth IEEE International Symposium on
Software Metrics (METRICS’99). Boca Raton, FL. pp. 31-38. New York:
IEEE Computer Society Press.
Ocampo, C. & Botella, P. (1998). Some Reflections on Applying Workflow
Technology to Software Processes. TR-LSI-98-5-R, UPC, Barcelona,
Spain.
OMG (2000a). Object Management Group, Meta Object Facility (MOF)
Specification, v. 1.3 RTF, March. Available at http://www.omg.org.
OMG (2000b). Object Management Group, XML Metadata Interchange (XMI),
v. 1.1, November.
OMG (2001). Object Management Group, Software Process Engineering
Metamodel (SPEM) Specification, December.
Pigoski, T.M. (1996). Practical software maintenance. Best practices for
managing your investment. New York: John Wiley & Sons.
PMI (2000): A guide to the project management body of knowledge, 2000
edition. Newtown Squares, PA: Project Management Institute Communica-
tions.
Polo, M., Piattini, M., & Ruiz, F. (2001). MANTOOL: A tool for supporting the
software maintenance process. Journal of Software Maintenance and
Evolution: Research and Practice, 13(2), pp. 77-95.
Polo, M., Piattini, M., Ruiz, F., & Calero, C. (1999a). MANTEMA: A complete
and rigorous methodology for supporting maintenance based on the ISO/IEC
12207 Standard. Third Euromicro Conference on Software Maintenance
and Reengineering (CSMR'99), Amsterdam (The Netherlands), pp. 178-181.
New York: IEEE Computer Society Press.
Polo, M., Piattini, M., Ruiz, F., & Calero, C. (1999b). Using the ISO/IEC tailoring
process for defining a maintenance process. IEEE Conference on Standardization
and Innovation in Information Technology, Aachen, Germany, pp. 205-210. New York:
IEEE Computer Society Press.
Polo, M., Piattini, M., Ruiz, F., & Calero, C. (1999c). Roles in the maintenance
process. ACM Software Engineering Notes, 24(4), pp. 84-86.
Rajlich, V.T. & Bennett, K.H. (2000). A staged model for the software life cycle.
IEEE Computer, July, pp. 66-71.
Randall, R. & Ett, W. (1995). Using process to integrate software engineering
environments. Proceedings of the Software Technology Conference, Salt
Lake City, UT. In http://www.asset.com/stars/loral/pubs/stc95/psee95/
psee.htm.
Ruiz, F., Piattini, M., Polo, M., & Calero, C. (2000). Audit of the software
maintenance process. In Auditing Information Systems (pp. 67-108). Hershey, PA:
Idea Group Publishing.
Ruiz, F., Piattini, M., & Polo, M. (2001a). Using metamodels and workflows in a
About the Authors
Ned Chapin is an information systems consultant with InfoSci Inc., USA. His
decades of experience include all phases of the software life cycle and cover
industrial, business, financial, non-profit, and governmental organizations. He has
also served in roles from lecturer to professor of Information Systems at various
universities. Ned’s interests include software maintenance and evolution, database
technology, systems analysis and design, and software management. He is a
member of the ACM, ACH, AITP, IEEE Computer Society, and Sigma Xi—the
Scientific Research Society of America. Ned is a Registered Professional Engineer,
and a Certified Information Systems Auditor. His MBA is from the Graduate
School of Business of the University of Chicago, and his PhD is from Illinois Institute
of Technology. Ned currently is the co-editor of the Journal of Software
Maintenance and Evolution. He can be contacted at: [email protected].
Andrea De Lucia received the Laurea degree in Computer Science from the
University of Salerno, Italy, in 1991, the MSc degree in Computer Science from
the University of Durham, UK, in 1995, and the PhD degree in Electronic
Engineering and Computer Science from the University of Naples “Federico II,”
Italy, in 1996. He is currently an associate professor of Computer Science at the
Faculty of Engineering of the University of Sannio in Benevento, Italy. He serves on
the program and organising committees of several international conferences and
was program co-chair of the 2001 International Workshop on Program Compre-
hension. His research interests include software maintenance, reverse engineering,
reuse, reengineering, migration, program comprehension, workflow management,
document management, and visual languages.