DOCTOR OF PHILOSOPHY
in
COMPUTER SCIENCE AND ENGINEERING
By
Amit Mishra
I hereby declare that the work contained in this thesis titled “DESIGN AND EVALUATION OF A MODEL FOR MULTISTAGE LEGACY-CRISIS DETECTION IN IT-SOLUTIONS OF ENTERPRISES” is original. I have followed the standards of research ethics to the best of my abilities. I have acknowledged all the sources of information that I have used in the thesis. I have completed all the pre-submission requirements as mentioned in the UGC mandate and the GBU Ph.D. ordinance.
Amit Mishra
Enrolment No.: PHD/ICT/1303
Department of Computer Science and Engineering
School of Information and Communication Technology
Gautam Buddha University, Greater Noida, 201310
Gautam Buddha Nagar, Uttar Pradesh, India
CERTIFICATE
This is to certify that Mr. Amit Mishra has worked on the research work entitled “Design and Evaluation of a Model for Multistage Legacy-Crisis Detection in IT-Solutions of Enterprises” under my supervision and guidance. The content of the thesis being submitted to the Department of Computer Science and Engineering, School of Information and Communication Technology, Gautam Buddha University for the award of the degree of Doctor of Philosophy in Computer Science and Engineering is original and has been carried out by the candidate himself. This thesis has not been submitted, in full or in part, for the award of any other degree or diploma to this or any other university.
Countersigned by
(Dr. AK Gautam)
Dean of School
School of Information and Communication Technology
Gautam Buddha University, Greater Noida.
ACKNOWLEDGEMENTS
This is by the grace of GOD, who provided me the courage, family, friends, teachers and colleagues motivating and supporting me to accomplish this work. First of all, I express my gratitude to my father, who still keeps interacting with me and taking the status of my studies as he used to do during my childhood. This reminded me every day to stay focused on the research work and was the necessary fuel to complete the research journey alongside my demanding job as a working professional. The blessings of my mother always helped me stay oriented towards this work while balancing it with professional work.
ICIAICT-2012, the 1st international conference on innovations and advancements held at GBU, was the trigger point, when Mr. Nripendra Mishra motivated me to present two papers in the conference. This gave me the opportunity to interact with Dr. Anurag Singh Baghel, Dr. Pradeep Tomar and Dr. Gurjit Kaur while presenting two papers as an industry representative, and it encouraged me to initiate the conceptualization of this research work.
I express my special gratitude to my research supervisor Dr. Anurag Singh Baghel, who consistently guided me to follow the right path and methodology for this research work. Even though we are almost equal in age, he never hesitated to ask me to work hard. He cooperated with me and made himself available as per my needs, requirements and schedule. Without this kind of unconditional support, this work would have been impossible for me to accomplish as a working professional.
Dr. Pradeep Tomar coached me in good paper writing; he made himself available for discussion and drafting of research papers, sharing his experience in Software Engineering, often on Saturdays, to get the desired direction and results for this work. I express my gratitude to Prof. Anil Kumar Gautam and Dr. Rajesh Mishra for giving valuable feedback during review presentations, helping to attain comprehensiveness in this work. My brother Dr. Anurag Mishra and his wife Dr. Namita Mishra shared their experiences of research, and those were key inputs to my work. My brother Anoop supported me in discussing software re-engineering topics, many times countering my arguments; as a critic he helped to get the work into its final shape.
I owe a very deep sense of gratitude to my wife, Mrs. Rekha Mishra, who has been supportive throughout my work. Sacrificing countless weekends and vacations, she encouraged me to devote my free time to the research work. My daughters Apoorva and Ayushi and my son Abhyuday teased me for still studying like a student, but supported me in concentrating on my work.
Special thanks to the ST-Microelectronics management team, process working group and my colleagues, who provided me the real environment and data to accomplish the work. I cannot forget the contribution of Mr. Sanjay Davar for giving key inputs on the architectural analysis part of this work. Mr. Vipul Goyal, Mr. Rishi Agarwal and my team supported me by backing me up whenever I needed days off to dedicate to this work. Special thanks to Dr. Manad Lal Verma 'Krant', who guided me towards writing and publishing work, and to Dr. Vivek Agnihotri, who reviewed my publication drafts from technical and grammatical perspectives to improve the quality of the research and the publications.
I would also like to express my sincere thanks to all faculty members and research scholars of Gautam Buddha University, who helped me take an academic view of my work. Special thanks to Mr. Ankur Chaudhary, Mr. Sonu Lal Gupta and Miss Priyanka Goyal for providing me the study materials and peer reviewing the work. Thanks also to Dr. Kapil Sharma, who motivated me to attend good conferences and guided me in writing impactful research papers. Dr. Sandeep Sharma assisted me in thesis drafting; his suggestions were a real help for me. Last but not the least, I thank all my friends in Greater Noida who accepted my excuses and unavailability on several occasions so that I could devote my time to this work. Mr. Ashish Gangwar took over several of my personal duties during this tenure to give me the bandwidth for this work.
With countless helps, blessings and the grace of God, this work has been concluded smoothly.
“सकल , सक लक ”|
ABSTRACT
This thesis deals with legacy transformation and related issues. Primarily, the work focuses on the detection of a crisis situation in maintaining legacy software applications. This point is a clear indication to start thinking about transforming or phasing out a legacy application. This research work has the peculiarity of bringing industrial aspects into academics. The study has included real data and real-world applications: large applications with a big user-base in the software industry. The academic research approach of the study helped to achieve a quality outcome, while the industrial view has highlighted the importance of softer issues in legacy transformation.
Practical approaches to cope with the softer issues are also a result of bringing industry experience into the study. All the practical approaches described were used to define the checklist and tools to handle issues in legacy transformation. The literature review and study have been performed from two perspectives to define the problem area: an industrial perspective and an academic perspective. “Legacy attributes and their weights” is a specific chapter in this thesis which describes the situations in which a legacy enters into crisis. It also determines the impact of each attribute contributing to legacy crisis and discusses the symptomatic parameters which indicate the degree of legacy crisis.
Further, a model is developed for the “Multistage Legacy Software Crisis Detection Matrix”, where stage-1, the Legacy Crisis Symptom Score (LCSS) computation, is based on monthly Key Performance Indicators (KPIs), while stage-2 works more at the design entropy level. In the second stage, an architectural analysis is made with respect to the SOA and CLOUD industry-standard architecture paradigms. Threshold values are determined empirically for both stages to decide the degree of legacy crisis. The model developed here is a unique improvement over the existing models and methods for the determination of a legacy crisis situation. The peculiarities of this model are its agreement with practical results, its ease of implementation, its objectivity, and its being a first attempt to develop a standardized and proven approach. This work contributes to saving cost by giving organizations a tool to determine the legacy crisis situation and to take proactive measures to deal with it. Organizations may save consultancy cost by applying the model themselves. Application of the model on a legacy software gives a recommendation whether or not to go for transformation. Validation of the model is done by applying it on already transformed applications, and the results were validated against the old gut-feel model, which is based on decisions by expert teams and people having deep domain knowledge and experience. For the validation and threshold value determination, the model has been applied on 204 applications representing 7 business solution groups across the globe.
The models developed in this work are useful for industry in a practical sense. They are most suitable for organizations where legacy applications are being used. The models can be further automated by researchers to make their application simpler for LCSS and Soft Architectural Distance (SAD) computation.
List of Publications
This section provides a list of publications that have been derived from the work presented in this thesis.
[1] Amit Mishra, Pradeep Tomar and Anurag Singh Baghel, “Multistage Legacy
Software Crisis Detection Matrix,” IJCTA Journal, International Science Press,
vol. 9(10), pp. 453-461, 2016.
[2] Amit Mishra, Pradeep Tomar and Anurag Singh Baghel, “Transforming from Legacy to Packaged/Standard S/W Solutions: For Big Enterprises,” Journal of Software Engineering Tools and Technology Trends, vol. 3(1), pp. 5-11, 2016.
[3] Amit Mishra, Pradeep Tomar and Anurag Singh Baghel, “Soft Architectural Distance (SAD): How Far is the Legacy Architecture from SOA Architecture Principle?,” IOSR Journal of Computer Engineering (IOSR-JCE), vol. 1(1), pp. 7-12, 2016.
[4] Amit Mishra, Pradeep Tomar and Anurag Singh Baghel, “Softer Impediments in Legacy Software Transformation: Challenges and Lessons Learnt,” International Journal of Control Theory and Applications, 3rd International Conference on Computing Sciences, April 2016. (Accepted: Scopus-indexed journal)
[5] Amit Mishra, Pradeep Tomar and Anurag Singh Baghel, “Technological Advances
in Computing and it is Coherence with Functional and Business Needs,”
Communication and Computing Systems - CRC Press, ICCCS-16, pp. 197-200,
2016.
[6] Amit Mishra, Pradeep Tomar and Anurag Singh Baghel, “Improvement and
Evaluation of Software Development Process through Contextualization for
Maintenance of Legacy Systems,” Elsevier, Materials Today Proceedings,
International Conference on Recent Trends in Engineering and Materials Science
(ICEMS-2016), 2016 (Accepted)
Communicated
[7] Amit Mishra, Pradeep Tomar and Anurag Singh Baghel, “Coping with Soft Issues
in Legacy Software Transformation,” Communicated to: Indian Journal of Science
and Technology. (SCOPUS Indexed)
[8] Amit Mishra, “A Technical Report on Software Maintenance, Key Performance
Indicators: Legacy Software Health Report,” Communicated to: ACM Transactions
on Software Engineering and Methodology (TOSEM).
CONTENTS
Self-Declaration I
Certificate II
Acknowledgements III
Abstract V
List of Publications VII
List of Figures XI
List of Tables XII
List of Abbreviations XV
Chapter 1: Introduction 01
1.1. Software Re-engineering …..………………………………………... 04
1.2. Problem Identification .…………..……………………………………….. 08
1.3 Legacy Case Study: 4th Decimal Place in Unit Price.........………………. 09
1.4. Legacy crisis attributes …………………………………………………... 13
1.5. Research Objectives …..………………………………………………….. 15
1.6. Research Methodologies and Validation Model………………………… 16
1.7 Detailed Approach ……………………………………………………… 19
1.8. Thesis Organization …………………………………………..………… 21
4.2. Multi-Stage: Legacy Crisis Detection Model………….………………… 50
4.3. STAGE-I: Legacy crisis detection via operational matrices…...………… 51
4.4. Determination of Threshold Value for LCSS.............................................. 55
4.5. STAGE-II: Legacy Crisis Detection via Architectural Analysis................. 61
4.6. Legacy Architecture Questionnaire Determination...................................... 63
4.7. Applying the SAD computation w.r.t. CLOUD Architecture Principle….. 67
References…......................................................................................................... 137
Appendix-A: Data-sources, Techniques and Models…………….. 148
Appendix-B: A Technical Report on Software Maintenance, Key Performance
Indicators: Legacy Software Health Report ……..…………………………………. 152
List of Figures
Fig. No. Description Page No.
1.1. Horseshoe model of reengineering………………..…………………........ 04
1.2. Service Oriented Architecture …………………………………………… 07
1.3. 'n-why' Analysis for Root Cause Identification ……..………………....... 11
1.4. Legacy Crisis Attributes ..…..…….…………………………………….... 14
1.5. Outline of Research Work …....….……………………………………..... 17
1.6. Validation Model ……………………….......………….….…............…... 18
2.1. Legacy Transformation Strategy ……………..…..…...................... 21
2.2. Legacy Transformation Approaches vs. Effort Dimensions …………….. 27
4.1. Flowchart for LCSS Computation ……………………….........…………. 53
4.2. Delphi Technique for Questionnaire Determination .……..............……... 63
5.1. Flow Chart-Softer Challenges Determination ...………………................. 74
5.2. Program Board Structure with IT and Business Accountable..................... 94
6.1. Incident Arrival Trend (Legacy - SO and Recent – DRP)….……... 100
6.2. Monthly Question Trend (Legacy - SO and Recent – DRP)………………..…... 100
6.3. Monthly SLO/SLA Compliance Trend for Application SO…............................ 101
6.4. Monthly Reported Problems Trend For Legacy and Recent Application ……... 101
List of Tables
5.2. Soft Issues Classification (Organizational and Individual)……………. 76
6.1. User Counts and Normalization Factors for Applications ...……..……... 104
6.7. LCSS Computation for Order Scheduling (AS, MS, SWAP)…………… 110
6.9. Answer Architecture Questionnaire for Legacy – SO: SOA …………… 113
6.11. Answer Architecture Questionnaire for Legacy – SO: CLOUD …..……. 116
6.13. Answer Architecture Questionnaire for Legacy – BM: SOA ……..……. 119
6.18. SAD computation for Order Scheduling: SOA ………………………… 127
List of Abbreviations
KPI Key Performance Indicators
LCSS Legacy Crisis Symptom Score
LES Logistics Execution System
LSHR Legacy Software Health Report
MS Manual Scheduling
NF Normalization Factor
NFS Normalization Factor on Size
NFU Normalization Factor on User-base
NOAC No Action
OR Operations Reviews
PEM Proactive Evolution Model
REM Reactive Evolution Model
ROI Return on Investment
SAD Soft Architectural Distance
SDLC Software Development Lifecycle
SDP Software Development Process
SEPG Software Engineering Practice Group
SGA Sales General Agreement
SLA Service Level Agreement
SLO Service Level Objectives
SnM Sales and Marketing
SO Sales Order
SOA Service Oriented Architecture
SPI Software Process Improvement
TGD Technical General Design
WM Warehouse Management
1. Introduction
Technology evolution reshapes the way IT is done, offering a wide variety of development technologies and languages, packaged solutions, cloud solutions, infrastructure options and hosting options, with a high speed of implementation at acceptable costs. This evolution also results in a high speed of redundancy of the existing IT solutions of enterprises, as support and the availability of experienced staff tend to dwindle, forcing enterprises to evolve and keep abreast of the evolution.
Organizations always intend to acquire and implement best-in-class solution approaches and architectural designs for their IT solutions. But as time passes, requirements change, new technologies emerge, the customer base broadens and compatibility with old systems becomes a challenge. To cope with the increased requirements on the software and with the changed environment (users, clients, interfacing, platform etc.), maintenance becomes increasingly tedious, both monetarily and in terms of effort. At some point, this maintenance effort (time and cost) increases so much that it starts to eat most of the IT budget and resources. This is the point at which the legacy system starts entering a crisis situation.
From an industrial perspective, business requirements are also evolving and they need more flexible, robust and agile systems. In this situation, companies cannot rely on their legacy systems any further. It becomes difficult to maintain the legacy, and the knowledge around it starts diminishing. In most cases, either there is no documentation or it is insufficient and unauthentic. But when enterprises define a legacy transformation and its roadmap, the core issue often becomes identifying the state at which one can say that now is the time to start the legacy transformation. At this stage the maintenance of the software becomes very costly. The increased number of regression bugs from even a small change deters developers from touching the code. In general, legacy software usually suffers from: lack of packaging, redundancy in software components leading to discrepancies, being out of support or warranty, issues in collaborating with partners who are already advanced, scarcity of skilled resources on old technologies, software distribution and WEB support, and increasing maintenance and support cost. In a nutshell, at the crisis point, called the 'legacy crisis', maintenance starts to eat most of the ICT budget and resources, leaving no room for innovation [1], [2].
Against this backdrop, big organizations often face a challenge to transform their systems to modern-day packaged/standard solutions. The broad area of this work is software re-engineering, which is the process of analyzing an existing software system and re-implementing it for a specific objective [1], [3]. Most of the work already done in this area concerns legacy transformation, approaches to legacy transformation, and challenges with their solutions [1], [2], [3], [4]. But this work is one step back, at the level of the decision to determine whether the situation has arisen when the legacy should be transformed, i.e., when maintaining the legacy is no longer economically and technically viable. This work develops a model for legacy crisis detection.
In fact, this work identifies legacy as a multistage phenomenon. Based on this analysis, the work focuses on developing a multistage 'Legacy Crisis Detection Matrix', which in the first stage gives a symptom-based detection for the management to initiate the transformation process. The next level of detection then goes deeper, to the architectural and code level, to assess the legacy software crisis stage. Both, when combined, give the elements needed to take the right decision about the legacy transformation.
The study in this thesis reveals that legacy systems are not only about IT, but also involve business and organizational aspects. Apart from technical challenges, soft issues have a much deeper-rooted impact [5]. Soft issues are non-technical problems related to behavioral, psychological, environmental, and managerial matters. They are often hidden, but they may still have a deep-rooted effect on quality and output [6]. The academic perspective leans more towards the technical aspects of legacy systems, while the industry perspective is more about the business value of the system. At the same time, the business world does not bother until the legacy starts impacting and disturbing business processes [7]. This is the reason why ICT proactively proposes to modernize legacy systems, but business does not show interest in sponsoring the transformation. Without approval, sponsorship and ownership from business, such initiatives cannot be launched. Even if they are launched, they are set to fail [8]. Failures are not only due to technical reasons; organizational aspects also play a big role [9], [10].
This thesis covers these aspects and, more importantly, tries to answer the following questions: When to go for legacy transformation? Is this the situation where maintaining the legacy is not economically viable? Has the legacy crisis arrived, or is it about to arrive? All these important questions are at the center of this work.
The following section discusses the key concepts and definitions used in this research work.
R. Kazman et al. gave another definition of re-engineering in 1998 [6]. As per these authors, software reengineering is the process of re-implementing the system in a new form: a new technology, a new platform, or new languages may be used. Reuse of components needs to be decided judiciously in reengineering [11]. Generally, reengineering is a two-step process. In the first step, reverse engineering is applied to obtain an abstract description of the system, and then forward engineering is applied to create the reengineered system [12]. So, reengineering can be represented by Equation (1.1) as the culmination of both reverse and forward engineering in sequence. In current scenarios, SOA principles are put in focus to achieve sustainable reengineering [13].
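A plausible reading of Equation (1.1), consistent with the two-step description above, is

    Reengineering = Forward Engineering ∘ Reverse Engineering        (1.1)

that is, an abstract description of the existing system is first recovered through reverse engineering and then used by forward engineering to produce the reengineered system.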
Reengineering is explained via the horseshoe model [3], [14] shown in Fig. 1.1. It has
three broad steps -
If applications do not evolve and do not respond to changing technology and business, they become legacy. Legacy software are old software which are technologically outdated; normally it is not known how to cope with them, but they are still useful for the organization. Legacy software was written years ago using outdated techniques, yet it continues to do useful work [5]. To harness the methodological framework with proactive capabilities, its roadmap is suggested to be defined with a highly iterative process model. Each iteration forces the process and system architects to re-establish alignment with the new business process requirements that emerge from new strategic directions, causing the birth of new legacy enterprise systems [4], [16].
While legacy systems become outdated, new concepts continually emerge. Applications may become outdated with respect to current trends in architecture as well [4]. To confirm the point of legacy crisis, the evaluation of the state of the architecture plays a key role. In the present work, Service Oriented Architecture (SOA) and CLOUD architecture are taken as the reference or benchmark for the study. The legacy architecture is compared against each attribute of SOA and evaluated on SOA architecture principles.
The SOA implementation is focused on a front-end application that uses one or more services. These services are published in service registries, and a service bus is arranged to communicate with the services [13]. From the business perspective [14], [16], SOA is a concept of business architecture. In this architecture, business functionalities and application logic are made available to SOA consumers as services, published through well-defined service contracts.
[Fig. 1.2: Service Oriented Architecture - a service exposes a service contract and interfaces over its business logic, implementation and data.]
Having multiple un-packaged applications and modules in the legacy makes the issues manifold. In the case referred to here as well, the organization took a big hit on business. Enablement of 4th-digit pricing had to be implemented in all impacted applications across the whole landscape. But, as the different applications in the legacy are tightly coupled, an evolution specific to one application impacts others and makes it more complex. Even after the implementation went live, it was not smooth. It is very relevant to include this case study here to illustrate the legacy crisis. The following section discusses a real industry case where a small change caused huge impacts and disruptions to the running applications.
1.3.1 Problem Description & Containment Actions: (Event occurred: 3rd June 2012)
The problem summary is stated in Table 1.1. These problems were quantified by the concerned departments and reported in the Remedy tool, which is used to report the issues faced by users of all software systems in ST-Microelectronics. Table 1.1 is itself a result of the 8-D analysis tool, under the D-2 section (see Appendix A). In this case study, HP3K with E1 (Esicom1) is the legacy machine, managed with a COBOL application. These problems caused huge discrepancies in invoices to end customers and intercompany customers, and recovering the normal situation and the confidence of users and customers took considerable effort.
Under D-4 of the 8-D analysis, the root cause of the 4th-decimal problem is identified. This is done using the 5-why method to reach the lowest level (leaf node), as shown in Fig. 1.3. Preventive actions are aimed at never repeating a similar problem in any area under the domain. For this case study, Table 1.2 lists the identified preventive actions.
[Fig. 1.3: 'n-why' analysis for root cause identification. The why-chain traces the problem: the E1 programs did not handle the price-change event well because the E1 program logic was incorrect, as it is complex to manage Esicom1 changes in COBOL; the field change from 3 to 4 digits was an unintentional change that was not detected, since test coverage was limited; and the SI local program was not tuned for the 4th decimal because it was not in the impact analysis, the E1 program component repository, or the ICT portfolio.]
This case study shows the severity of legacy management and transformation issues. Organizations intending to replace their legacy systems with new sustainable solutions generally apply best-in-class solution approaches and architectural designs, but still face a lot of challenges. The target of this research work is to prevent organizations from facing such a legacy crisis by detecting the crisis point scientifically, and not after facing issues. This work is targeted at developing a multistage 'Legacy Crisis Detection Matrix', which in the first stage gives a symptom-based detection for the management to initiate the transformation process.
This thesis further explores and develops an overall practical approach for supporting legacy transformation according to industry needs, with a special focus on the dimensions which have been least discussed or explored so far in legacy transformation. Considerations are made not only for technical issues, but also for the business matters of continuity and for softer issues, including people issues and organizational issues. Some of the organizational issues are discussed in [26], [27], [28]. The thesis also proposes a model which can be used to analyze the current architecture and code to give legacy crisis levels. Apart from technical issues, the thesis also presents the issues related to behaviors [9], [29], highlights empirical studies of software engineering, and the way they might be combined with quantitative methods. Finally, the softer issues of legacy transformation are explored, with industry-validated ways to address them.
A qualitative research methodology is applied to incorporate empirical data and results, descriptions of situations, and different views [32]. The determination of softer issues, and ways of coping with them, were studied using post-mortem analysis of legacy software releases. Post-deployment screening helped in the identification of these attributes. The flow chart in Fig. 1.5 depicts the various stages of this research work.
Problem/gap identification
Development/modification of models
Implementation of models
The validation of the developed model and the legacy transformation matrix is done in the following manner, as depicted in Fig. 1.6 -
1- The model for detection of legacy crisis was applied in different application domains of some organizations, and the results were compared with today's gut-feel model and the Delphi decision support model (see Appendix-A).
[Fig. 1.6: Validation Model - apply the model, validate intermediate results, and validate against the results of already transformed applications.]
1- Real issues faced by industry, in terms of the poor health of legacy applications, the large amount of resources consumed by legacy maintenance, and the 4th-decimal crisis (Section 1.3), were the trigger point and motivation of the research.
2- The first stage of legacy crisis detection is based on the operational attributes of legacy crisis that were taken from the ICT top-page document (see Appendix-A). Seven purely operational attributes were taken directly from the top-page.
3- Gaps were identified while working with only the above 7 operational attributes. Therefore 3 more attributes were added, concerning maintenance releases over the legacy, using the performance attributes from the release database and the Delphi method (see Appendix-A).
Chapter 2 covers the literature review done around legacy transformation, issues with legacy systems, software reliability factors, and managing issues in legacy transformation. The literature study is divided into two broad categories: the industrial perspective and the academic perspective. The industrial perspective is covered by white papers published by some top Information Technology (IT) companies and consulting organizations, taken from internet sources. The academic perspective is addressed through publications in reputed journals and proceedings of conference papers. The chapter also discusses the various international standards of IT for software product evaluation and the quality attributes of software. The adopted strategies of legacy modernization, with their pros and cons, are also discussed.
Chapter 3 describes the 'Legacy Crisis' situation. It discusses the different reasons behind applications becoming legacy. This chapter explores the different attributes of legacy and the magnitude of their effects, i.e., the determination of the weight of each attribute. The symptomatic parameters, which indicate the degree of legacy crisis, are also discussed in this chapter.
Chapter 4 describes the two stages of legacy crisis detection: Stage-I for LCSS computation, which is based on monthly Key Performance Indicators (KPIs), and a second stage which determines if the legacy is in a state of crisis. This model gives organizations the flexibility to define their respective threshold points for declaring a legacy crisis situation during SAD computation. A derivation of the weight factors of the transformation matrix, giving a notional value for the degree of legacy crisis, is also presented in this chapter.
Chapter 5 discusses the identification of soft issues and the correlation among them. This chapter also discusses practical approaches to cope with the different soft issues and ways to handle them.
Two appendices are also attached. Appendix-A introduces the different theories used in the thesis, e.g. the Delphi technique for group decision making, and the 8-D tool and Ishikawa (fishbone) analysis used to determine legacy crisis attributes as a derivation from real industry problems. All data sources and tools used in the research work are explained in this appendix.
This chapter summarizes the literature and the work already done in the field of legacy transformation and software re-engineering. Besides literature from academia, white papers from companies and published results and indicators from industry are also used as an important source of information. In particular, the production data of legacy applications being used in several wings of ST-Microelectronics, a hi-tech company with a worldwide presence on all continents, is used. The 8-D tool and Ishikawa (fishbone) analysis on real industry problems are used to determine legacy crisis attributes (see Appendix A). The author participated in several industry workshops dealing with different aspects of legacy applications and software engineering practices through the software engineering database [37]. The proceedings and findings of these workshops and special task forces/workgroups added further to the study.
Authors and their definitions of legacy software:
Bennett (1995): Legacy software are software which have grown large, were written years ago, use outdated technology; organizations don't know how to cope with them, but they are vital for organization function.
Wu et al. (1997): Large and complex software as a result of continuous evolution, for which no modification or evolution is easy any more.
'The Business Value of Legacy Modernization (Microsoft)' [39], a white paper from Microsoft-supported programs in 2007, highlighted a review of different programs that can help to find the best partner(s) and guidance for modernizing legacy systems. This white paper examines the issue of legacy modernization from a business standpoint [39]. It is designed for top management and executives, and concludes with guidance on how to create a successful strategy for legacy modernization. This legacy transformation strategy is depicted in Fig. 2.1. The application features, business requirements and cost pressure determine the transformation strategy. For example, if applications do not meet current business needs and the types of changes are business-differentiating, then the strategy is to go for redevelopment of the application.
There are essentially three steps in determining the right approach for any organization looking for legacy transformation [39]; these steps are followed by most of the organizations going for transformation.
According to this work, the average cost of attempting reuse can be formulated, as in Equation (2.1), in terms of the probability of finding the needed component in the library and the costs of searching and of fresh development. To favour reuse, there must be adequate coverage of the library (large p), and developers must be able to quickly either find the component they need or be fairly confident that it does not exist.
In the context of white-box reuse, the developers must compare the cost of fresh development against the effort of searching for and reusing an existing component, including any modification required. The average cost of development with the intent to reuse can be formulated through Equation (2.3). Here, 'p' is the probability of finding the component in the database, 'q' is the probability of finding a component that is a satisfactory approximation, and 'SCapprox' is the approximate-search cost. Similar to Equation (2.2), Equation (2.4) is derived to determine if reuse is viable.
The core logic used to determine whether reuse or redevelopment is better is based on the cost factor. In the study, this determination was applied on an approximation, given by Equation (2.5). This model considers only one factor, i.e. software component reuse; similar models can be developed for all the variables contributing to legacy crisis. It is a cost-based, approximate formula built on the probability of a component search hit.
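The following Python sketch is only an illustrative rendering of the reuse cost trade-off described above, not the cited paper's exact formulation. The symbols p, q and SCapprox follow the text; the names search_cost, dev_cost and adapt_cost for the searching, fresh-development and adaptation costs are assumed for illustration.

# Illustrative sketch of the reuse cost trade-off (not the cited paper's exact formulas).
# p = probability of finding the needed component, q = probability of finding a
# satisfactory approximation, sc_approx = approximate-search cost; other names assumed.

def black_box_reuse_cost(p, search_cost, dev_cost):
    """Average cost when an exact component is reused as-is if found."""
    # Found with probability p: pay only the search; otherwise search, then develop.
    return p * search_cost + (1 - p) * (search_cost + dev_cost)

def white_box_reuse_cost(p, q, search_cost, sc_approx, adapt_cost, dev_cost):
    """Average cost when an approximate component may be found and adapted."""
    exact = p * search_cost
    approximate = q * (sc_approx + adapt_cost)
    not_found = (1 - p - q) * (sc_approx + dev_cost)
    return exact + approximate + not_found

dev_cost = 100.0      # cost of developing the component from scratch (assumed)
search_cost = 5.0     # cost of an exact library search (assumed)
sc_approx = 8.0       # cost of searching for an approximation (assumed)
adapt_cost = 30.0     # cost of adapting an approximate component (assumed)
p, q = 0.4, 0.3       # hit probabilities (assumed)

for label, cost in (("black-box", black_box_reuse_cost(p, search_cost, dev_cost)),
                    ("white-box", white_box_reuse_cost(p, q, search_cost,
                                                       sc_approx, adapt_cost, dev_cost))):
    verdict = "reuse viable" if cost < dev_cost else "redevelop"
    print(f"{label}: average cost {cost:.1f} vs fresh development {dev_cost:.1f} -> {verdict}")

In both cases reuse is favoured only while the expected cost stays below the cost of fresh development, which is the viability check the text attributes to Equations (2.2) and (2.4).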
S. K. Mishra et al. presented their work in 2012 [42], focusing on reverse engineering and legacy transformation to SOA architecture. The details of creating reusable software components from an object-oriented legacy system through reverse engineering are discussed there. The authors developed a model named Component Oriented Reverse Engineering (CORE) for the identification and creation of reusable components.
Oladipo et al. [51] presented a model in 2011 based on code pattern matching in both dimensions, technical and functional. The legacy modernization approach discussed there is a reverse engineering methodology on a transformation paradigm. It is aimed at preserving capital investments and saving production and maintenance costs. The transformation approach involves retaining and extending the value of the investments in the legacy system through migration and modernization of the subject system. Modernization generally transforms a legacy system in three phases: Initialization, Extraction, and Modernization [52], [53]. The authors presented a multi-level legacy modernization roadmap that involves information extraction, artifact gathering from many sources, knowledge organization, analysis, and information abstraction. Aggregation of components is done in such a way that redefining the relationships, abstractions, and hierarchical mental models does not alter the original system [54]. Using the mental models, additional knowledge about the system is produced; this is normally needed because of the lack of documentation about legacy systems [55]. The paper also mentions that since the legacy application is already part of today's business operations, a smooth transition is vital, but it gives no indication of how the smooth transition will take place.
Ricardo Perez-Castillo et al. [56], in 2012, demonstrated ways to improve the return on investment from legacy by extending its lifespan with smooth functioning. This is achieved by reducing the development effort via reuse of components as much as possible. This brings two main advantages. First, with reuse, the development cost is lower compared to fresh development without reuse. The second advantage is an increase in the lifespan of the legacy systems, resulting in an improved return on investment (ROI). The main concept demonstrated in this study is to expose data stored in legacy databases via web services. The advantage of this approach is exposing common components and data using web services, which is the prime focus of the SOA approach [48], [49], thereby facilitating legacy modernization towards SOA architecture. The paper also explores 'Architecture Driven Modernization (ADM)', an approach based on all the aspects of the current system architecture: a target architecture is defined first and then the current architecture is transformed into the target architecture. Under the ADM modernization approach discussed in [57], all software assets and applications are restored in their current architectural form and the transformation is done considering a to-be architecture. The ADM approach has the following advantages -
Easy integration with other systems and other environments like SOA.
Paper [58] discusses a tool, PRECISO, that follows a model-driven approach. It is a semi-automated tool which facilitates the publication, search, and deployment of web services from legacy relational databases, although some manual steps are still involved.
B. Schmerl et al. [59] presented an interesting approach which uses pattern matching algorithms to detect the architecture of a running system. The key concept is to exploit implementation regularities and knowledge of the architectural styles. These architectural styles are used to create a mapping which can be applied to any system that conforms to the implementation conventions. This mapping is useful for aggregating low-level behavior specifications into architecturally significant modules. The approach can be applied to any system that can be monitored at runtime [60], [61].
Literature about the behavioral aspects of legacy transformation was also explored through the available publications. Carolyn B. Seaman touched upon this topic in 1999 [62], where the impact of non-technical issues was studied. Shikun Zhou et al. [63], also in 1999, discussed softer challenges indirectly in reverse engineering and legacy transformation. The study was around reverse engineering software metrics and problems concerning these metrics. A systematic research approach was introduced to develop software metrics for reverse engineering [64]. The core of the work was to develop a classification of software metrics for reverse engineering. They classified these reverse engineering measures into five categories: abstractness indicators, complexity indicators, economic indicators, object-orientation indicators, and reusability indicators. However, the real application of this approach will not be seen until the reverse engineering tool has industrial strength and substantial industrial experiments have been carried out. Hence, in this thesis, similar
In 2008, a case study [65] was presented demonstrating that, very often, implementation problems are not only complex technical matters but also involve behavioral aspects, organizational aspects and internal conflicts. So, the softer aspects of issues also need to be understood and addressed. Papers [66] - [72] indirectly touch upon non-technical challenges. However, these papers do not cover behavioral and other aspects comprehensively, nor do they propose solutions to these softer aspects. This is very significant for legacy transformation projects, and the study can be further extended and completed to benefit enterprises.
Legacy transformation to SOA and CLOUD is discussed in [43] - [50] and [73] - [76], where the aim was to systematically present the existing research around SOA and CLOUD migrations, classify these studies, and compare them. It reveals that cloud migration research, despite being in the early stages of maturity, provides clues around architectural aspects. Many of these papers suggested the need for a migration framework to help improve the maturity levels of, and trust in, SOA and CLOUD migration approaches.
Oracle Corporation published a white paper on CLOUD architecture in 2012 [73]. This Oracle white paper highlighted the enterprise ecosystem for cloud solutions. It also talks about a broad portfolio of complete and integrated products to build Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and it elaborates the cloud architecture elements. But it does not highlight architectural gaps with respect to legacy transformation. Pooyan Jamshidi et al. [74] elaborated apprehensions about cloud architecture. Their work suggests a lack of tool support to automate migration tasks, along with the need for architectural adaptations to implement self-adaptive cloud-enabled systems. In [75], the authors also referred to several reference white papers published by the National Institute of Standards and Technology (NIST) around the 'Cloud Computing Reference Architecture' (2014). The focus was on cloud architecture, standards and implementation. The activities of two optional cloud players, the cloud broker and the cloud auditor, were reviewed, as their services are necessary in some business circumstances for the delivery of cloud services. The authors discussed the fact that softer issues are much more important than technical aspects. However, the work does not elaborate on tool support to automate migration tasks. Overall, architectural adaptation and self-adaptive cloud-enabled systems are lacking in existing solutions and were not considered by the authors for legacy transformation.
The study explored the existing work around softer challenges and referred to published case studies, challenges, non-technical issues, empirical models and qualitative analyses during legacy transformation [76] - [90]. Papers [76] and [77] focus more on defining strategy, while Richards et al. [78] and Pfleeger S. L. et al. [79], [80] focus on empirical models in their studies. Papers [81] and [82] touch upon some softer challenges, but only for a particular phase of development. Papers [83] to [85] focus on planning and management aspects in legacy transformation, and [86] - [90] highlight challenges faced in transformation. Business-rule-based approaches are discussed by Len Erlikh et al. in [90] and [91]. [92] focused on a future vision, and Simone Kaplan presented financial considerations in [93].
This chapter deals with the identification of legacy attributes and their weights from an operational and industrial perspective. The weights of attributes help organizations to define their strategy for coping with them. Weights also help in identifying the legacy crisis point; the higher the weight, the higher the impact on the crisis situation.
The following sections describe the situation when the 'Legacy Crisis' arrives, how it arrives, the factors contributing to the legacy crisis situation, and their impacts. Further, the symptomatic parameters which indicate the degree of legacy crisis, through application health data reported in Key Performance Indicators (KPIs), are also described. Reference [94] suggested migrating legacy applications before the crisis becomes unmanageable. The need arises for numerous reasons, e.g. solutions using software which is no longer supported, difficulty in handling new business concepts, collaboration with other enterprises for integration and exchange of information, consolidation of functions and data, understanding the precise key functions of the software and their impact on data, and challenges in mergers and acquisitions due to heterogeneous systems. When mergers and acquisitions among two or more companies happen, IT has a big role to play in getting data, systems, and processes integrated. If all participating organizations are not using standard software solutions and one or more are still operating with legacy systems, integration becomes difficult [95], [96].
Legacy software reaches a level where its maintenance starts costing a lot, and it also becomes difficult to find expert resources to maintain it due to technology obsolescence [97]. Engineers and programmers working on the legacy become almost irreplaceable due to the non-availability of new resources with these skills [98]. This poses the bigger challenge of handling softer issues with human resources. The work in this thesis attempts to provide a structured approach, with practical industry experience, to manage legacy transformation. The work incorporates learning from about 20 years of industry experience in managing legacy software solutions at ST-Microelectronics, and in doing so it uses the applications and release data of seven globally distributed Business Solution Groups (BSGs) of the organization. These seven groups own the applications of their respective business domains. For example, the Sales and Marketing (S&M) BSG owns applications used by the company's sales and marketing department users, such as applications for customer-service quotation desks, managing contracts, taking customer orders, confirming orders and responding back to customers, while the Finance BSG deals with all invoice and debit/credit related applications.
It is natural that, with different communities requesting changes in an application and different people working to develop them, different approaches may be taken [99]. After a certain level of stabilization in the solution and resources, engineers move to other projects. Lack of design and other documents makes the situation more complex [100]. Subsequent change requests (CRs) for further maintenance of the legacy software are carried out by people who do not understand the functionality in its totality, contributing to making the systems difficult to maintain. Different modules of the software talk less to each other; only a minimal level of integration remains, and no standard ways like SOA for loose coupling are used. Finally, to reach a break-even, the management has to take the decision to replace the legacy.
Effort vs. CR size (man-days / function points) in the last release: Legacy in a poor state takes more effort to develop even a simple change.
Number of defects in the 1st month of new version deployment: Any new release of legacy is usually followed by a lot of problems and requires a stabilization period.
Regression defects reported in the 1st month after a new version: New releases of legacy result in defects even in previously working functionality where the changes were not intended.
This addition was based on group consensus in the PWG. The group opinion was based on attributes derived from organization-specific measures monitored quarterly and monthly. The attributes determined by the Delphi technique are listed in Table 3.1, which also lists the relationship between the KPIs, the respective attributes, and the data source where they are reported.
1. ICT Service dashboard - consists of all issues reported in live applications from the Remedy tool.
2. Time Logging System (TLS) reports - to report efforts on different ICT activities, including KLO.
3. Release DB - Master Release (quarterly) data of ICT applications for the past 5 years.
4. HR DB - Human Resource DB, having data of all recruits and resignations.
Data sources 1 and 3 are published on ST-Intranet sites, while sources 2 and 4 are accessible to managers for their perimeter; however, reports and outcomes of the HR-DB on resignations and of TLS are shared among teams (see Appendix-A).
These are symptomatic attributes only, which are used solely as qualifying criteria to enter the architectural evaluation. Hence, in this work, these attributes have been picked directly from the organization's top page. Several types of issues and attributes that make legacy unmanageable are discussed in [102], [103] and [104].
• Workgroup members were asked to assign rank-1 to rank-3 on a paper slip among these 10 attributes. Each of the 30 workgroup participants gave ranks 1 to 3.
• Rank-based weights are determined using Equation (3.1), whose rationale is to assign the highest weights to attributes ranked 1 by participants and the lowest to rank-3 attributes. As the same attribute may be ranked 1, 2 or 3 by different group members, a notional weight formula is used to assign the weights:

Wi = Fn × (3·ρ1 + 2·ρ2 + 1·ρ3)        (3.1)

where ρ1, ρ2 and ρ3 are the counts of rank-1, rank-2 and rank-3 votes respectively, and Fn is the normalization factor that makes the weights total 100; in this case it was 100/180, i.e. 5/9. Truncation towards infinity is applied to avoid decimal values in the attribute weights. The computation is presented in Table 3.2.
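A minimal sketch of this rank-based weight computation, in Python, is given below. It follows the description above (multipliers 3, 2 and 1 for rank-1, rank-2 and rank-3 votes, Fn = 100/180, and "truncation towards infinity" interpreted here as taking the ceiling); the vote counts shown are placeholders, not the actual Table 3.2 data.

import math

FN = 100 / 180  # 30 participants x (3 + 2 + 1) rank points = 180

def attribute_weight(rho1, rho2, rho3):
    """Notional weight of one attribute from its rank-1/2/3 vote counts (Equation 3.1)."""
    raw = 3 * rho1 + 2 * rho2 + 1 * rho3
    return math.ceil(FN * raw)   # truncation towards infinity

# Hypothetical vote counts (rho1, rho2, rho3) for three attributes.
votes = {
    "Effort vs. CR size": (8, 5, 4),
    "Regression defects": (6, 7, 3),
    "Production issues": (5, 4, 6),
}
for attribute, (r1, r2, r3) in votes.items():
    print(attribute, attribute_weight(r1, r2, r3))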
These computed weights are the base for the Legacy Crisis Symptom Score (LCSS) computation. LCSS is stage-I of the multistage legacy crisis detection matrix. Stage-I considers only the operational matrices, hence only the ICT top-page KPIs are used here. The concept developed for LCSS computation is presented in Chapter 4 of this thesis, where a threshold value for LCSS is also determined. If this threshold value is crossed by a legacy application, it enters the second stage of crisis evaluation, which is based on architectural aspects.
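Purely as an illustration of how such a threshold check might look, the sketch below assumes that LCSS aggregates weighted, normalized KPI values; the actual LCSS formulation and the empirically determined threshold are those given in Chapter 4, and the KPI names, values and threshold used here are hypothetical.

def lcss(kpi_values, weights):
    """Weighted aggregation of normalized KPI values (assumed form, for illustration only)."""
    return sum(weights[k] * kpi_values[k] for k in weights)

# Hypothetical attribute weights and normalized monthly KPI readings in [0, 1].
weights = {"incidents": 15, "effort_per_cr": 20, "regression_defects": 10}
kpis = {"incidents": 0.8, "effort_per_cr": 0.6, "regression_defects": 0.9}
THRESHOLD = 25  # hypothetical; the real value is determined empirically in Chapter 4

score = lcss(kpis, weights)
if score > THRESHOLD:
    print(f"LCSS = {score:.1f} exceeds {THRESHOLD}: proceed to stage-II architectural analysis")
else:
    print(f"LCSS = {score:.1f}: no legacy-crisis symptom detected at stage-I")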
[Table 3.2 columns: ICT Top Page KPI; counts of Rank 1, Rank 2 and Rank 3; Rank 1 × 3; Rank 2 × 2; Rank 3 × 1; Weight (Normalized).]
Delphi Round-1: In this round, each BSG had to provide 10 to 15 attributes contributing to legacy crisis, based on their experience of legacy releases over the past five years. The attributes mentioned here were not to be the symptomatic or operational attributes; they should primarily consider architecture, design, coding, programming language, technologies used and other implementation challenges faced. Raw inputs were collected from the 7 BSGs, namely Sales and Marketing (S&M), Global Logistics and Warehousing (GLWO), Finance, Purchasing, Manufacturing, Human Resources (HR) and Business Intelligence (BI). For convenience of presentation, these inputs are listed in Table 3.3, Table 3.4 and Table 3.5.
These inputs from the 7 BSGs were consolidated, resulting in 43 distinct attributes. Attributes with the same name or meaning have been considered only once; for example, GLWO gave "n-Tier architecture" and the Manufacturing BSG gave "3-Tier architecture", and both have been consolidated into the single attribute "n-Tier architecture".
The BI BSG did not give a sufficient list of attributes; they gave only 5 attributes against the 10 to 15 required from each BSG. The workgroup considered this reasonable, as the BI solution landscape is in general isolated from the application context: only the data part is used by BI, since their portfolio contains only reporting applications. The Manufacturing BSG gave issues very specific to their particular experience; all those attributes were potential targets for filtering in round-2, as they were not common issues for the other BSGs.
Once these 43 distinct issues had been consolidated, they were all presented to the full group for the next round of determination and further filtering. These attributes are listed in Table 3.6. Similar to this exercise, C. P. Holland and B. Light [105] tried to elaborate critical success attributes during legacy transformation program execution, and paper [106] highlighted issues in the SAP ERP package. A few of the issues reported above and in [107] are also reported in this thesis under Table 3.6.
SN Attribute
1 GUI or batch mode
2 Technology used
3 Age of application
4 Interfaces technology
5 Database used
6 Central or local application
7 Logic complexity
8 Object orientation
9 Shared library usage
10 Design documents: FGD/DD and TGD available
11 Home grown application or purchased package
12 Hand held device enabled
13 User base
14 EDI / Rosetta Net / B2B / EAI
15 Authentication LDAP or application specific
16 SAP or not
17 Package or not
18 CLOUD solution?
19 Follows SOA principles of architecture
20 Language used
21 Platform used
22 n-Tier architecture
23 Human language specific display
24 Application linked with h/w device
25 Regional flavour in application implementation
26 Algorithm available and extendable
27 Monolithic or modular
28 Reliability matters
29 Encrypted data exchange
30 Availability ensured
31 Security features
32 Web enabled
33 Portable application
34 Component-based software engineering (CBSE)
35 Reuse of components in place
36 CAD/CAM software
37 Concurrency / multithreading
38 EAI based communication
39 Authentication
40 Scheduler in use
41 Integration with ERP (SAP)
42 Mode: Online/offline
43 Monolithic or modular
Delphi Round-2: As observed in round-1, many of these 43 attributes were very specific to a particular BSG and not significant in general at the ICT level. Also, defining a model with 43 parameters would have been complex to deal with. It was therefore necessary to further narrow down the list of attributes for the model by eliminating some of them, with the constraint of not losing any attribute which might significantly impact the determination of the legacy crisis situation. With these considerations and constraints, the model has been developed to work with the ten most significant attributes. Each BSG was then asked to mark its top-10 attributes out of the 43. After the rating from all BSGs for the top-10, a table was built with the consolidated input, and the top-10 score for each attribute was counted in the last column. Finally, the attributes were arranged according to their top-10 count and the 10 most significant attributes were selected, as listed in Table 3.7.
Table 3.7: TOP-10 (workshop inputs), consolidated from the SnM, GLWO, Finance, Purchase, Manufacturing, HR and BI BSGs, with the overall top-10 count for each attribute:
Age of application - 4
Shared library usage - 4
Design documents: FGD, DD - 4
Home grown application or purchased - 5
EDI / Rosetta Net / B2B / EAI - 4
Package or not - 6
Follows SOA principles of architecture - 4
Platform/Language used - 5
n-Tier architecture - 5
Monolithic or modular - 6
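The round-2 consolidation just described can be sketched as a simple tally, as in the illustrative Python below: each BSG marks its top-10 attributes, the marks are counted per attribute, and the ten attributes with the highest overall counts are retained. The BSG names follow the text, but the marks shown are hypothetical, not the full Table 3.7 data.

from collections import Counter

# Hypothetical top-10 marks from three of the seven BSGs (each list would
# normally contain ten attribute names).
bsg_top10 = {
    "SnM": ["Package or not", "Monolithic or modular", "Age of application"],
    "GLWO": ["Package or not", "n-Tier architecture", "Shared library usage"],
    "Finance": ["Monolithic or modular", "Platform/Language used", "Package or not"],
}

# Count how many BSGs marked each attribute, then keep the ten highest counts.
tally = Counter(attr for marks in bsg_top10.values() for attr in marks)
selected = [attr for attr, count in tally.most_common(10)]
print(selected)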
The first three SOA attributes are "fundamental" in nature and the others are derived from these fundamental attributes. As summarized in Table 3.8, the weights of the 3 fundamental attributes are assigned as 20, while most of the derived attributes have the value 5 and Encapsulation has the value 10.
SOA Attribute - Weight Score (Wi)
Abstraction - 20
Coarse Grained Nature of Services - 20
Loose Coupling - 20
Compliance - 5
Interoperability - 5
Search-ability - 5
Replace-ability - 5
Encapsulation - 10
Law of Composition - 5
Service Autonomy - 5
Based on the above two sets of legacy attribute determinations, legacy crisis issues can be categorized as described in Table 3.1.
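For illustration only, one way the Table 3.8 weights could be combined with per-attribute compliance judgments is as a weighted gap, sketched below. The thesis's actual Soft Architectural Distance (SAD) computation, driven by the architecture questionnaire, is developed in Chapter 4; the compliance scores used here (0 = not met, 1 = fully met) are hypothetical.

# Table 3.8 weights for the SOA attributes (they total 100).
SOA_WEIGHTS = {
    "Abstraction": 20, "Coarse Grained Nature of Services": 20, "Loose Coupling": 20,
    "Compliance": 5, "Interoperability": 5, "Search-ability": 5, "Replace-ability": 5,
    "Encapsulation": 10, "Law of Composition": 5, "Service Autonomy": 5,
}

def weighted_gap(compliance):
    """Weighted distance of a legacy application from the SOA reference (assumed form)."""
    # Attributes missing from `compliance` are treated as not met (score 0).
    return sum(w * (1 - compliance.get(attr, 0.0)) for attr, w in SOA_WEIGHTS.items())

# Hypothetical questionnaire-derived compliance scores for one legacy application.
legacy_compliance = {"Abstraction": 0.2, "Loose Coupling": 0.1, "Encapsulation": 0.5}
print("Weighted gap w.r.t. SOA attributes:", weighted_gap(legacy_compliance))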
3.5.2 Effort vs. CR size: Carrying out development on legacy systems is costly. New software development has different benchmarks than modifications to very old legacy. New development, or a Change Request (CR) on a recent (non-legacy) application, is comparatively economical.
In the case of legacy applications, different modules do not talk to each other easily, and hence one module needs to replicate the functionality of other modules, causing redundancy and finally resulting in an overall less robust solution. This also contributes to post-go-live failures, where a change is implemented in one module but missed in one or more other impacted modules.
3.5.5 Design Entropy (No Object Orientation): Legacy software is the result of continuous development over old software. This results in poor design and, finally, bad design entropy.
3.5.6 Regression Defects: The ins and outs of the legacy are not known to anyone, and in most cases no documentation exists. Hence any development or change in the legacy results in regression defects: even if what is implemented is correct, something else starts misbehaving because it has been impacted without intentional changes. Service management dashboards clearly show an increase in incidents and defects with every SO release. The average number of issues increases by 50% to 60% for a big release with an effort of more than 100 man-days.
3.5.7 User Interface not In-line with Technology Evolution: Legacy software uses old Graphical User Interfaces (GUIs) which are outdated with respect to current technologies. COBOL and VB6 user interfaces, still in use, make the applications less intuitive, and such applications drift towards a natural death. This is a key factor in legacy crisis. Applications with such interfaces cause integration issues with upstream and downstream process applications. The non-uniformity is also evident to end users: different functions require the use of multiple applications, possibly with different logons, causing loss of productivity.
3.5.8 Production Issues (Incidents, ITPs and Problems): Legacy is well known for its production issues and for unavailability whenever any change is applied in the ecosystem. A high number of incidents reported by users in the production environment is one factor. Unforeseen unavailability is another example of a production issue and is reported as an Interrupt to Production (ITP). Recurrent incidents are caused by inherent problems in the application.
3.5.9 Scarcity of Skilled Resources: With time it is becoming difficult to find skilled resources for the old, obsolete technologies used by legacy applications. Engineers are not interested in learning and working on those technologies due to fewer job opportunities.
3.5.10 Reduced Vendor Support / Out of Warranty: Most legacy systems now run on technology which is outdated and out of support, so it is risky to remain on such a solution. COBOL-based systems on HP3K machines are one such example.
A two-stage legacy crisis detection model is proposed in this chapter. The model developed here is the only model available to date for legacy crisis detection. The first stage is based on monthly Key Performance Indicators (KPIs), while the second stage is a technical and architectural evaluation of the legacy application code. The first-stage evaluation is a symptomatic assessment, and the second stage fully validates whether the legacy is in a state of crisis. The model gives organizations the flexibility to define their own threshold values for declaring a legacy crisis situation. In summary, the chapter deals with the design and development of the 'Multistage Legacy Crisis Detection Matrix' model using legacy attributes. It also derives the weight factors of the transformation matrix, giving a notional value to represent the degree of legacy crisis.
4.1 Introduction
Legacy software is not easy to replace while in production use, due to heavy dependencies and tight coupling among different applications. It requires a lot of manual support and intervention to keep the systems running, which increases the cost of KLO. Gradually the situation worsens until the direct and indirect costs become unaffordably high. When the viability of maintenance vanishes, the application reaches its crisis point. Taking proactive measures to avoid this legacy crisis is the key objective of this thesis.
The studies by Anthony Lauder et al. [94] and by Amey Stone [96] paved the way for determining the issues in legacy systems and coping with them. They provided two choices: either to maintain the legacy or to transform it. However, neither of them addressed the dilemma of choosing between maintaining and transforming legacy. There are two main bases for the legacy transformation decision. The first basis is a strategic decision, when the enterprise phases out all old and legacy software and deploys a new suite of applications or packages. In strategic decisions, the maintenance cost and operational issues do not play a big role; the decision is based on the poor symptoms of the legacy application and is hence also referred to as symptomatic analysis. The second basis for the legacy transformation decision is driven by operational performance metrics.
These metrics are usually operational in nature and are periodically monitored by management in monthly Operations Reviews (ORs). When the organization's technology roadmap is superimposed on the operational reports, it gives the Legacy Crisis Symptom Score (LCSS).
The LCSS threshold limit is determined as the 80th percentile over 204 applications (legacy and non-legacy). When the LCSS score of a particular legacy application crosses this threshold limit, crisis detection enters stage-II. Stage-II evaluates the existing technical architecture, programming languages, platform integrations, software source code, etc. At this stage the model again gives a numeric notional value to indicate the degree of crisis in the legacy software. This notional value of architectural evaluation is named the Soft Architectural Distance (SAD) [108]. The degree of SADness is the key parameter of stage-II in the legacy crisis detection matrix.
The benchmarked application performance average for attribute j over the m benchmarked applications is

Y_j = (1/m) · Σ_{i=1}^{m} BP_ij

where BP_ij is the performance score of the i-th benchmarked application on the j-th attribute (m = 3 in Fig. 4.1). The value of NF is calculated with the help of Equations (4.2), (4.3) and (4.4).
The outcome of threshold determination [35] suggests that any LCSS score above 300 (empirical) is sufficient to mark a legacy application for stage-II evaluation of legacy crisis detection (i.e., architectural analysis). A score of 300, in fact, indicates that the legacy performance is three times worse than that of the non-legacy benchmarked applications.
For ST-Microelectronics, three benchmarked applications were used with ten legacy crisis attributes in the proposed model. The algorithm to compute LCSS in this specific case is explained through Fig. 4.1 below.
LCSS Computation

Attribute | Weight / Factor % (W) | Legacy score (L) | BAP1 Score (M) | BAP2 Score (N) | BAP3 Score (O) | BAP Avg. (Y) | Legacy Factor (Z = L/Y) | Weighted Factor (W·Z/NF)
Number of Interrupts to Production (ITPs) in a quarter | 5 | L1 | M1 | N1 | O1 | (M1+N1+O1)/3 | L1/Y1 | W1·Z1/NF
Number of incidents reported (per month) | 15 | L2 | M2 | N2 | O2 | (M2+N2+O2)/3 | L2/Y2 | W2·Z2/NF
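The computation outlined in Fig. 4.1 can be sketched as below. The attribute values and the normalization factor are illustrative placeholders, and it is assumed, following the table, that LCSS is the sum of the weighted factors W·Z/NF.

```python
# Minimal sketch of the stage-I LCSS computation outlined in Fig. 4.1.
# The attribute weights, scores and NF below are illustrative placeholders,
# not the actual ST-Microelectronics data.

def lcss(attributes, nf):
    """attributes: list of (weight W, legacy score L, [benchmark scores M, N, O])."""
    total = 0.0
    for weight, legacy_score, benchmark_scores in attributes:
        bap_avg = sum(benchmark_scores) / len(benchmark_scores)   # Y = (M+N+O)/3
        legacy_factor = legacy_score / bap_avg                    # Z = L/Y
        total += weight * legacy_factor / nf                      # W*Z/NF
    return total

sample_attributes = [
    (5,  9,  [2, 3, 4]),     # e.g. ITPs in a quarter
    (15, 60, [18, 22, 20]),  # e.g. incidents reported per month
]
score = lcss(sample_attributes, nf=1.2)
print(f"LCSS = {score:.1f}")
# Per the threshold discussion, an LCSS above 300 would send the
# application to stage-II (architectural) evaluation.
```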
Application ID | Application Name | LCSS | Rank | Percentile
155 | E-SAMPLE | 721.2 | 1 | 100.00%
172 | PRICE LIST (DCPL,LB,MPP,ER) | 476.3 | 2 | 99.50%
191 | S2S PLANNING SCHEDULE (DELFOR LOADER, RESPONSE) | 461.7 | 3 | 99.00%
22 | CLS - MANUFACTURING INTEGRATION | 459.3 | 4 | 97.00%
23 | CLS - PICKING | 459.3 | 4 | 97.00%
24 | CLS - WORLDWIDE SHIPMENT | 459.3 | 4 | 97.00%
25 | CLS (GLOBAL STOCK MGMNT) | 459.3 | 4 | 97.00%
161 | I2 SCHEDULING | 451.2 | 8 | 96.00%
171 | ORDER SCHEDULING (AS,MS,SWAP) | 451.2 | 8 | 96.00%
83 | DM (DEMAND MANAGEMENT IN I2) | 432.6 | 10 | 94.00%
90 | MFS | 432.6 | 10 | 94.00%
92 | MPS | 432.6 | 10 | 94.00%
131 | BACKLOG ENGINE | 432.6 | 10 | 94.00%
187 | RMS (RETURN MANAGEMENT SYSTEM) | 421 | 14 | 93.50%
193 | SGA (SALES GENERAL AGREEMENT) | 408.2 | 15 | 91.60%
194 | SGA AUTHORIZATION | 408.2 | 15 | 91.60%
195 | SGA KEY CUSTOMER CONTRACT (FOR SGA06) | 408.2 | 15 | 91.60%
196 | SGA PRICE CONTRACT (FOR SGA01) | 408.2 | 15 | 91.60%
198 | SHIP AND DEBIT/CLAIM(CL) | 407.4 | 19 | 91.10%
199 | SO (SALES ORDER) | 392.5 | 20 | 90.60%
186 | PRMIS | 389.2 | 21 | 90.10%
145 | DESIGN WIN | 378.2 | 22 | 89.60%
201 | SO-BATCH (SALES ORDER) | 376.4 | 23 | 89.10%
192 | S2S PO CHANGE | 372.1 | 24 | 88.60%
190 | S2S ORDERS (INCLUDING SBI, CONSUMPTION) | 369.3 | 25 | 88.10%
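A minimal sketch of how the 80-percentile LCSS threshold over all application scores could be derived is given below. It uses a simple nearest-rank percentile and a handful of illustrative scores rather than the full set of 204 applications; the thesis does not prescribe a particular percentile method, so this is only an assumed implementation.

```python
# Sketch: derive the LCSS threshold as the 80th percentile of all application
# scores (legacy and non-legacy). Scores below are illustrative placeholders.
lcss_scores = {
    "E-SAMPLE": 721.2, "PRICE LIST": 476.3, "SO (SALES ORDER)": 392.5,
    "APP-A": 120.4, "APP-B": 95.0, "APP-C": 240.7, "APP-D": 310.2,
    "APP-E": 150.9, "APP-F": 88.3, "APP-G": 205.5,
}

ordered = sorted(lcss_scores.values())
# Index of the 80th percentile using the nearest-rank method.
idx = max(0, int(round(0.8 * len(ordered))) - 1)
threshold = ordered[idx]

flagged = [app for app, s in lcss_scores.items() if s >= threshold]
print(f"80-percentile threshold: {threshold}")
print("Candidates for stage-II evaluation:", flagged)
```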
The degree of SADness may also be calculated for the CLOUD architecture, using the same equation and method but with the CLOUD architecture weight matrix.
A high SAD score represents a poor state of architecture. An organization-specific threshold for this score may be derived to decide the legacy crisis point. A value closer to 1 represents bad architecture, while lower values represent standard industry architecture. The algorithm established in this thesis for SAD computation is represented in the flowchart given in Fig. 4.2.
Figure 4.2: Flowchart for SAD Computation
For each answer 'YES' in Table 4.4, 1 is filled in the weight matrix, and for 'NO', 0 is filled. All other weights are standard values from the legacy design compendium document [35]. The bottom row is the sum of all weights for the questions answered 'YES'. These aggregates are the observed values of the variable (Oi) in the equation.
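A minimal sketch of this aggregation is given below: each question answered 'YES' contributes its per-attribute values, and the column totals are the observed scores Oi. The question rows and values are illustrative placeholders, not the full Table 4.4.

```python
# Sketch: aggregate observed scores (Oi) per SOA attribute from the AS-IS
# questionnaire. Only questions answered YES (1) contribute; NO (0) adds nothing.
attributes = ["Abstraction", "Coarse Grained", "Loose Coupling", "Compliance",
              "Interoperability", "Searchability", "Replaceability",
              "Encapsulation", "Law of Composition", "Service Autonomy"]

# (answer, per-attribute contributions) -- illustrative rows, not the real matrix.
questionnaire = [
    (0, [0.8, 0.5, 0.5, 0.2, 0.2, 0.0, 0.0, 0.5, 0.1, 0.1]),  # Packaged solution?
    (1, [0.3, 0.1, 0.1, 0.1, 0.1, 0.0, 0.0, 0.1, 0.1, 0.1]),  # Home grown application?
    (1, [0.6, 0.3, 0.3, 0.3, 0.2, 0.2, 0.2, 0.2, 0.4, 0.3]),  # 3-tier architecture?
]

observed = [0.0] * len(attributes)
for answer, contributions in questionnaire:
    if answer == 1:                      # only YES answers are counted
        for i, value in enumerate(contributions):
            observed[i] += value

for name, oi in zip(attributes, observed):
    print(f"{name}: Oi = {oi:.1f}")
```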
Q. Sr. No. | AS-IS architecture analysis Questions | Answer: Y/N? | Abstraction (W=20) | Coarse Grained Nature of Services (W=20) | Loose Coupling (W=20) | Compliance (W=5) | Interoperability (W=5) | Searchability (W=5) | Replaceability (W=5) | Encapsulation (W=10) | Law of Composition (W=5) | Service Autonomy (W=5)
1 | Packaged solution? | 0 | 0.8 | 0.5 | 0.5 | 0.2 | 0.2 | 0 | 0 | 0.5 | 0.1 | 0.1
2 | Home grown applications? | 1 | 0.3 | 0.1 | 0.1 | 0.1 | 0.1 | 0 | 0 | 0.1 | 0.1 | 0.1
3 | Development is done using framework? | 0 | 0.5 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.2 | 0.5 | 0.4 | 0.1
4 | Batch Programs in 3 GL? | 1 | 0.3 | 0.2 | 0.1 | 0.1 | 0 | 0.1 | 0.1 | 0.2 | 0.1 | 0.1
5 | 3-Tier architecture to keep presentation and biz layer separated? | 1 | 0.6 | 0.3 | 0.3 | 0.3 | 0.2 | 0.2 | 0.2 | 0.2 | 0.4 | 0.3
6 | Monolithic structure? | 1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0 | 0 | 0.1 | 0.1 | 0.1
7 | Ecosystem is heterogeneous (e.g. C, JAVA, SAP, DB scripts etc.)? | 0 | 0.4 | 0.4 | 0.3 | 0.2 | 0.5 | 0.6 | 0.6 | 0.2 | 0.2 | 0.2
8 | Nature of services is database centric? | 1 | 0.5 | 0.5 | 0.4 | 0.3 | 0.2 | 0.2 | 0.2 | 0.2 | 0.4 | 0.3
9 | Nature of services is functionality/logic centric? | 0 | 0.7 | 0.5 | 0.3 | 0.3 | 0.3 | 0.2 | 0.2 | 0.2 | 0.2 | 0.4
10 | Age of LEGACY in range of 20 years? | 1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0 | 0 | 0.1 | 0.1 | 0.1
11 | Age of LEGACY in range of 10 years not older? | 0 | 0.3 | 0.2 | 0.1 | 0.1 | 0 | 0.1 | 0.1 | 0.2 | 0.1 | 0.1
12 | Age of LEGACY in range of 5 years not older? | 0 | 0.6 | 0.4 | 0.3 | 0.3 | 0.3 | 0.4 | 0.6 | 0.5 | 0.5 | 0.6
13 | Services used by SELF/mother application only? | 1 | 0.3 | 0.3 | 0.4 | 0.6 | 0.5 | 0.5 | 0.6 | 0.4 | 0.4 | 0.4
14 | Services published and used by self and others? | 1 | 0.8 | 0.7 | 0.7 | 0.3 | 0.4 | 0.4 | 0.6 | 0.5 | 0.5 | 0.6
15 | Application/service design is in the form of functions for one logical set of actions? | 1 | 0.8 | 0.6 | 0.5 | 0.4 | 0.5 | 0.5 | 0.6 | 0.4 | 0.4 | 0.4
16 | Services / functions follow international level standards? | 0 | 1 | 1 | 1 | 0.8 | 0.8 | 0.6 | 0.8 | 0.8 | 0.8 | 0.8
17 | Services do not follow international standards but follow at organization level standards? | 1 | 0.8 | 0.6 | 0.5 | 0.7 | 0.6 | 0.6 | 0.6 | 0.7 | 0.4 | 0.6
18 | Functions and Services: signatures clearly defined? | 1 | 0.6 | 0.5 | 0.5 | 0.7 | 0.6 | 0.9 | 0.9 | 0.5 | 0.4 | 0.6
19 | Functions do not use global or static variables? | 0 | 0.5 | 0.5 | 0.5 | 0.4 | 0.6 | 0.6 | 0.9 | 0.5 | 0.4 | 0.6
20 | Functions/Services signatures are published at common place? | 0 | 0.6 | 0.5 | 0.5 | 0.7 | 0.6 | 0.9 | 0.9 | 0.5 | 0.4 | 0.6
21 | User Requirements Document (URD) exists at product level? | 0 | 0.3 | 0.4 | 0.4 | 0.1 | 0 | 0.1 | 0.1 | 0.2 | 0.1 | 0.1
22 | User Requirements Document (FGD) exists at product level? | 1 | 0.6 | 0.5 | 0.5 | 0.2 | 0.2 | 0.9 | 0.3 | 0.3 | 0.2 | 0.3
23 | User Requirements Document (DD) exists at product level? | 0 | 0.3 | 0.4 | 0.4 | 0.1 | 0.2 | 0.1 | 0.1 | 0.2 | 0.1 | 0.1
24 | User Requirements Document (TGD) exists at product level? | 1 | 0.3 | 0.4 | 0.4 | 0.1 | 0.2 | 0.1 | 0.1 | 0.2 | 0.1 | 0.1
25 | Service names represent the purpose correctly (brief description compliments it)? | 0 | 0.3 | 0.4 | 0.4 | 0.1 | 0.1 | 0.1 | 0.1 | 0.2 | 0.1 | 0.1
SOA Attributes | Weight Score (Wi) | Observed Score (Oi) | (Wi - Oi) | Wi·(Wi - Oi) | Wi·Wi
Abstraction | 20 | 3.9 | 16.1 | 322 | 400
Coarse Grained Nature of Services | 20 | 3.1 | 16.9 | 338 | 400
Loose Coupling | 20 | 2.9 | 17.1 | 342 | 400
Compliance | 5 | 3.1 | 1.9 | 9.5 | 25
Interoperability | 5 | 2.6 | 2.4 | 12 | 25
Search-ability | 5 | 2.6 | 2.4 | 12 | 25
Replace-ability | 5 | 2.7 | 2.3 | 11.5 | 25
Encapsulation | 10 | 2.7 | 7.3 | 73 | 100
Law of Composition | 5 | 2.5 | 2.5 | 12.5 | 25
Service Autonomy | 5 | 2.7 | 2.3 | 11.5 | 25
Totals | 100 | | Sum Wi·(Wi - Oi) | 1144 | 1450
Soft Architectural Distance (SAD) score using Equation (4.5): 0.789
As illustrated in Table 4.5, for the Sales Order application, Equation (4.5) gives a SAD score of 0.789 on the SOA architecture benchmark. This score is close to 1, indicating that the application is in an architecturally poor condition.
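Assuming, consistently with the totals of Table 4.5 (1144 / 1450 ≈ 0.789), that Equation (4.5) computes SAD as Σ Wi·(Wi - Oi) / Σ Wi·Wi, a minimal sketch of the computation is:

```python
# Sketch of the SAD computation consistent with Table 4.5:
# SAD = sum(Wi * (Wi - Oi)) / sum(Wi * Wi).
soa_weights = {"Abstraction": 20, "Coarse Grained": 20, "Loose Coupling": 20,
               "Compliance": 5, "Interoperability": 5, "Searchability": 5,
               "Replaceability": 5, "Encapsulation": 10,
               "Law of Composition": 5, "Service Autonomy": 5}
observed    = {"Abstraction": 3.9, "Coarse Grained": 3.1, "Loose Coupling": 2.9,
               "Compliance": 3.1, "Interoperability": 2.6, "Searchability": 2.6,
               "Replaceability": 2.7, "Encapsulation": 2.7,
               "Law of Composition": 2.5, "Service Autonomy": 2.7}

numerator   = sum(w * (w - observed[a]) for a, w in soa_weights.items())
denominator = sum(w * w for w in soa_weights.values())
sad = numerator / denominator
print(f"SAD = {sad:.3f}")   # ~0.789 for the Sales Order figures above
```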
Q. Sr. No. | AS-IS architecture analysis Questions | Answer for SO: YES=1 / NO=0 | On-demand self-service (W=20) | Broad network access (W=20) | Resource pooling (W=20) | Scalable and Elasticity (W=20) | Measured service (W=20)
1 Packaged solution 0 1 0.8 0.8 1 1
2 Home grown applications? 1 0.3 0.3 0.3 0.2 0.3
3 Development is done using framework? 0 0.5 0.4 0.5 0.6 0.4
4 Batch Programs in 3 GL? 1 0.3 0.2 0.2 0.2 0.6
3-Tier architecture to keep
5 1 0.6 0.3 0.6 0.3 0.2
presentation and biz layer separated?
6 Monolithic structure? 1 0.2 0.2 0.1 0.3 0.2
Ecosystem is heterogeneous (e.g. C,
7 0 0.4 0.4 0.3 0.7 0.5
JAVA, SAP, DB scripts etc.)?
8 Nature of services is database centric? 1 0.5 0.5 0.4 0.3 0.2
Nature of services is functionality/logic
9 0 0.4 0.5 0.6 0.5 0.5
centric?
Age of LEGACY in range of
10 1 0.1 0.1 0.1 0.1 0.2
20 years?
Age of LEGACY in range of
11 0 0.3 0.3 0.3 0.3 0.2
10 years not older?
Age of LEGACY in range of
12 0 0.6 0.4 0.6 0.6 0.3
5 years not older?
Services used by SELF/mother
13 1 0.3 0.3 0.4 0.6 0.5
application only?
Services published and used by self and
14 0 0.8 0.7 0.7 0.7 0.8
others?
Application/service design is in the form
15 0 0.8 0.6 0.5 0.4 0.5
of functions for one logical set of actions?
Services / functions follow international
16 0 1 1 1 0.8 0.8
level standards?
Services do not follow international
17 standards but follow at organization level 1 0.5 0.6 0.5 0.7 0.6
standards?
Functions and Services: signatures clearly
18 1 0.6 0.5 0.5 0.7 0.6
defined?
Functions do not use global or static
19 0 0.5 0.5 0.5 0.4 0.6
variables?
Functions/Services signatures are
20 0 0.7 0.5 0.5 0.7 0.6
published at common place?
User Requirements Document (URD)
21 0 0.3 0.4 0.4 0.1 0
exists at product level?
User Requirements Document (FGD)
22 0 0.4 0.5 0.5 0.2 0.2
exists at product level?
User Requirements Document (DD)
23 0 0.3 0.4 0.4 0.1 0.2
exists at product level?
User Requirements Document (TGD)
24 1 0.3 0.4 0.4 0.1 0.2
exists at product level?
Service names represent the purpose
25 correctly (brief description compliments 0 0.7 0.6 0.6 0.5 0.6
it)?
This model was applied to detect the legacy crisis symptoms indicated in the metrics, using Remedy tool reports for the ST-Microelectronics legacy applications in ICT. The LCSS score for the Sales Order application was computed to be above the threshold value of 300, so the application was submitted to stage-II for SAD score computation.
Similarly, the threshold values for the degree of SADness based on SOA and CLOUD were empirically determined and consented by 'Archo-2015' [35] as follows. As stage-II is the final evaluation and not only an indicator, it is not a purely single numerical threshold. Any SAD score below 0.6 is a clear indicator of sustainable architecture, while a score above 0.8 is a clear indication of bad architecture. Values in the range of 0.6 to 0.8 require human intelligence, and in this case other softer factors may play a role in decision making. Stage-II is thus a manual decision that takes the SAD computation as input. The degree of SADness for Sales Order came out to be 0.789 with respect to the SOA architecture and 0.823 with respect to the CLOUD architecture. Both stages therefore qualify it for legacy crisis. The LCSS score correctly gave the indication that most of the IT budget was consumed by KLO activities, leaving no room to think about transformation. The results of stage-I were supported by the stage-II results, and management took the decision to include the Sales Order legacy application in the transformation roadmap.
The LCSS model is implemented through monthly dashboard reports, which perform a systematic LCSS computation for every application presented (legacy or non-legacy) on a monthly basis.
The term "Soft Issues" was introduced in Chapter 1. The current chapter details the identification of soft issues in legacy transformation programs, which are less visible upfront but have a major impact on the success of the transformation program. This chapter also establishes the interdependencies, or correlations, among these soft issues and presents the measures to be taken to cope with them. This study, based on post-mortem analysis of real programs, is unique in revealing the soft issues experienced through legacy transformation programs [34]. A similar attempt on a smaller scale was made by Kim Man Lui in [107].
Legacy software applications are well known for their importance to the organization but are less maintainable due to the huge incremental changes made over the years. A situation arrives when managing the legacy becomes a crisis, costing the organization dearly. This forces organizations to go for legacy modernization. In this transformation journey it is often seen that the focus remains on the technical solutions and challenges, yet several programs fail miserably because the softer issues with the stakeholders of the program are ignored [107], [109]. This work focuses on coping with such softer issues: human psychology, organizational challenges, the role of management, and stakeholders' confidence and contribution.
Several studies have been done via post-mortem analysis of legacy transformation programs. Runeson et al. in 2012 [65] presented a case study dealing empirically with issues in software engineering; the authors showed how a program that was heading towards failure was converted into a success. In [102], Herman Tromp and Ghislain Hoffman suggested a methodology following certain steps to make the transformation successful. The authors suggested having a correct and agreed 'as-is' description of the current environment and architecture in the form of a standard document. The AS-IS document describes the current implementation, while the TO-BE document elaborates the blueprint of the future implementation. Hence, the work also suggested separately determining the 'to-be' technical and deployment architecture. A critical evaluation was then suggested between the 'as-is' and 'to-be' situations. While doing this, the authors also touched upon the softer issues and risks in the study and hinted at having a management perspective. However, the work mentioned nothing about coping with the softer challenges. The studies presented in [31] and [103] focus on lessons learnt from programs that failed; these are also helpful for avoiding similar mistakes while running transformation programs. A few studies were performed based on post-mortems of successfully executed programs [53], [55]. In [107], Kim Man Lui and Keith C. C. Chan focused on team structure and team composition to bring effectiveness to program execution.
All the above studies were intended to find shortcomings or areas of improvement in technical and behavioural aspects.
Interdependency determination using correlation analysis
Note: Table 5.1 is a partial presentation of the data. In total 55 people responded, and many did not list all 10 factors. All attributes which received a score of twenty or more in the TOP-10 count are put in this table. The complete data can be referred to in [116].
1- Strongly Disagree
2- Disagree
3- Somewhat agree
4- Agree
5- Strongly Agree
Natural Inertia:
a- Is switching my profile from one role to another a value add for me?
b- Is switching from one software language, package or platform to another good for my career?
The following sections determine the correlation among the soft issues and elaborate the meaning of the interdependencies experienced among the various soft issues under consideration.
The responses of all 55 experts were captured in Table 5.3. To find the interdependency among these issues, correlation analysis was applied. The correlation applied here is Pearson's correlation coefficient ('R') between two variables [113], Equation (5.1).
R = [N·Σ(XY) - (ΣX)·(ΣY)] / √{[N·ΣX² - (ΣX)²] · [N·ΣY² - (ΣY)²]}        …(5.1)
where N is the number of pairs of data, and X and Y are the two variables between which the correlation is being calculated.
Correlation analysis was carried out on the 110 responses recorded in Table 5.3. The result of this computation is given in Table 5.4.
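A minimal sketch of Equation (5.1) applied to two illustrative response vectors (not the actual survey data) is given below.

```python
import math

def pearson_r(x, y):
    """Pearson's correlation coefficient, as in Equation (5.1)."""
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xy = sum(a * b for a, b in zip(x, y))
    sum_x2 = sum(a * a for a in x)
    sum_y2 = sum(b * b for b in y)
    numerator = n * sum_xy - sum_x * sum_y
    denominator = math.sqrt((n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))
    return numerator / denominator

# Illustrative Likert responses (1-5) for two soft issues from a few respondents.
natural_inertia     = [4, 5, 3, 4, 2, 5, 4]
fear_of_losing_jobs = [4, 4, 3, 5, 2, 4, 3]
print(f"R = {pearson_r(natural_inertia, fear_of_losing_jobs):.4f}")
```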
Looking at the rows and columns of the correlation results, it is evident that there are strong dependencies between:
a) 'Natural Inertia' and 'Fear of Losing Jobs', with coefficient = 0.6164; and
b- Was the current ICT organization structure suitable to run a transformation program?
a- Were you made aware by the program structure about the outcomes beforehand?
b- Were you correctly told what you would see today after the transformation?
b- Were you requested to customize the packaged solution by adding many functions of the legacy?
This is true for the ICT resources who manage and maintain the legacy, as they are comfortable knowing all the ins and outs of the applications. The same is also true for business users who are accustomed to the legacy software [114].
Coping with Fear of Being Outdated and Losing Jobs: The following are ways to cope with this factor:
Fitment in the New Context: Finding and committing to a job-profile fitment for specific experts in the new context ensures their support upfront. This is much more relevant for applications with a stronger business orientation. The change in job profile should be well communicated; ultimately, a good mix of previous experience and a change in profile most of the time returns good energy and motivation in employees. It generates new synergy contributing to the success of the transformation.
Although several practices like reverse engineering of legacy systems are generally used, they still need the commitment and time of IT legacy experts and of the key business users running day-to-day operations on the legacy. Engaging existing legacy experts and end users becomes key in such circumstances, and managing their motivation is equally important. Key legacy experts and end users need to be relieved from their operational work and assigned to the transformation. Although various reverse engineering practices and automation tools could help to extract partial specifications from the legacy code itself, heavy manual effort is still needed to reconstruct and complete the 'as-is' state and specifications. Together with the legacy experts and end users, transformation teams need to resurrect and complete the 'as-is' state from the available content (such as help documents, change request tickets, release documentation, incident tickets, etc.), complementing the content extracted by tools.
Train the legacy experts on reverse engineering tools: This helps not only in generating the base of the documentation work, by listing the services, functions, classes or components, but also in increasing the motivation of the legacy experts.
The presence of a desire to achieve the results, the willingness to accept the impact of doing the work, and the resolve to follow through and complete the endeavor are the ingredients that make the transformation successful. This is evaluated through an assessment matrix, Table 5.7. Active discussions and brainstorming sessions are required to fill this assessment matrix. A well-determined organization is better able to execute the transformation.
The readiness factors significant for the transformation are determined via brainstorming sessions with experts from different domains. These factors are then evaluated on three dimensions: urgency, readiness status, and the degree of difficulty to fix an issue. Combining these three dimensions, the degree of favor is defined as follows.
It can be intuitively argued (and has the general consent of the group of 55 experts) that the degree of urgency directly and positively affects the business return value of the transformation program, which may be represented on a scale called the degree of favor (Df). The readiness status affects it in the same way. However, the degree of difficulty to fix affects it adversely; that is, if the difficulty is higher, the factor attracts less favor.
Let X be the degree of urgency, Y the readiness status, and Z the degree of difficulty. Then the degree of favor is

Df = (X · Y) / (15 · Z)        …(5.2)
where the division by 15 is done to map the maximum degree to 1 while keeping all other values non-negative.
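A minimal sketch of Equation (5.2), with illustrative ratings for one readiness factor, is given below.

```python
def degree_of_favor(urgency, readiness, difficulty):
    """Df = (X * Y) / (15 * Z), following Equation (5.2)."""
    return (urgency * readiness) / (15.0 * difficulty)

# Illustrative ratings for one readiness factor (not taken from the assessment data).
print(f"Df = {degree_of_favor(urgency=4, readiness=3, difficulty=2):.2f}")
```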
This is somewhat subjective, may vary with organizational specifics, and can be fixed by management and architects. Df is a good representation of the assessment of legacy transformation readiness. This assessment gives a systematic way of evaluating the organization's confidence in its ability to run transformation programs. The exercise should also include business representatives inside the assessment team.
Most organizations have a fairly strict, project-based organization, which adds considerably to their latency and inability to adapt. It is clear that in such an environment there will be a lot of resistance towards new developments.
If the program comprises multiple projects, each of them too must have a business project manager and an IT project manager. The IT project managers maintain the project plans at their track level and share them for consolidation into the global plan at the program management level. This is typically represented in Fig. 5.2.
Figure 5.2: Program organization structure with the program management office.
The legacy code should be frozen: only the minimal changes needed for business continuity or production bug fixes should be allowed in the legacy software, which should otherwise be declared frozen.
If migrating to a new package, educate business users and insist that they go for the standard solutions offered by the new package rather than bringing in fancy features of the old applications, which may be costly to maintain later.
Awareness sessions and trainings on the new solution should be arranged for all stakeholders.
Engaging Business: The acceptability of the new solution cannot be taken for granted unless the end users who will switch over from operating the legacy solution to the new one are adequately engaged, informed, and trained at the relevant stages of the transformation program. Apart from keeping them in the program board and project management organization, the business organization must equally be made aware of the features, possibilities, and ways of using the new solution, with high-level trainings on it.
End user engagement: Selected end-user representatives from various sites and local/regional offices must be engaged during the solution selection, architecture, and roadmap definition stages and assigned key roles as part of the business team in the transformation project. They should have the responsibility to define business requirements and functional specifications and should be the ones to approve and sign off the new solution.
Change agents and spreading the knowledge in business: Selected end users should also be groomed to lead the end-user trainings and act as change catalysts in the business organizations. The role of the change catalyst is very important, as they work as a bridge between the IT and business organizations. The IT organization neither needs to
It often happens that various stakeholders build very high expectations of the transformation results and modernization. There is a need to be careful and realistic while making commitments and presenting the transformation program [74], [77]. On one hand, business users may be dreaming of full automation, no old pains, bug-free software from day one, magical response times, etc. On the other hand, IT management may be dreaming of excellent customer satisfaction after deploying the new solution [99]. Their expectations may be very high, and they may get disappointed after the realization of the transformation.
Setting the right expectations for the legacy transformation: The following are the key considerations used to overcome this problem:
Setting up 'goal models' with stakeholders: Here a form of modeling is done to represent the stakeholder goals graphically along with their inter-dependencies. Goals are decomposed into sub-goals and their synthesis. In [99], soft goals are analyzed as qualitative objectives such as 'improve profits' or 'keep customers happy'. Subsequently, the goal model goes through refinements. The root goal is represented as a collection of leaf goals; to achieve the system-level root goals, work and controls are applied on the individual leaf goals.
Beta validation with the right participants: Key users who are real practitioners in the business must be involved in the Business IT Alignment (BITA) validation and sign-off. A combined validation cycle with IT analysts and business will help to arrive at a final version faster. This step should be marked as an important milestone and a key pre-go-live achievement. It is suggested that all the documentation and release notes be published and shared by this milestone. Clear tracking and addressing of the questions and bugs highlighted during BITA is key to a smooth go-live. BITA results must be formally recorded and reviewed, and appropriate time must be spent in BITA. While doing the validation, a regular image of LIVE data must be reflected in the beta environment.
Legacy functions with target evolution: Reverse-engineered specifications merged with new adaptations are discussed, agreed, and signed off. Being realistic here, with an authority able to wake people from their dreams, will help. The 'to-be' picture should be made absolutely clear. It is suggested to discuss unrealistic change requests when they are raised instead of leaving them in a grey zone. ICT should not make any verbal or false commitment. However, be sensitive to capture and integrate the sincere evolutions to make the maximum out of the transformation program, and try to address as many of the users' pains as possible.
Choosing the right authorities from each stakeholder community: This helps to enforce well-thought-out decisions even if they may have certain drawbacks.
Soft issues are often ignored during legacy transformation. Experience shows that they are crucial and considerably impact the transformation program; hence appropriate handling is mandatory. The ways of coping with them described in this chapter have been tried in real scenarios and found to be working.
This chapter discusses the data details, the result summary, and any deviation of the hypothesis from the proven results. The chapter also highlights the contribution of this research work to society, the research community, and the software industry, especially to the organizations and people dealing with legacy software.
The higher the number of questions asked, the lower the clarity among users. This costs support effort and disturbs the solution and support teams. It also carries a fairly high weightage of 10% as set by this work. Fig. 6.2 depicts such a comparison with respect to the questions asked to the support team by users: on average, SO consistently receives two to three times as many questions as DRP.
Figure 6.2: Monthly question-ticket trend for the legacy application SO (Sales Order) and the recent application DRP.
Figure 6.4: Monthly Reported Problems Trend For Legacy and Recent Application.
S. No. | Legacy Application | Normalization Factor (NF) | LCSS | Remarks
1 | SO | 1.2 | 392.5 | Applied the developed model; the result qualified stage-I for legacy transformation.
2 | VLOG | 2 | 459.3 | Already transformed; the model was applied on old data (2013-14) and the result qualified stage-I for transformation.
3 | BP | 1.05 | 252.1 | The model did not qualify stage-I for transformation, but the application was actually transformed. The decision to transform BP was not because of legacy crisis but to adopt a new futuristic solution, Customer Master Data Management (CMDM), mandated by the ICT architecture roadmap.
4 | BM | 0.75 | 508.2 | Already transformed; the model was applied on old data (2013-14) and the result qualified stage-I for transformation.
5 | SGA (Price) | 1.1 | 408.2 | Already transformed; the model was applied on old data (2011-12) and the result qualified stage-I for transformation.
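As a small illustration, applying the empirical stage-I threshold (LCSS above 300) to the summary figures above can be sketched as follows.

```python
# Sketch: check which applications from the summary table qualify stage-I,
# using the empirical LCSS threshold of 300 discussed in Chapter 4.
results = {"SO": 392.5, "VLOG": 459.3, "BP": 252.1, "BM": 508.2, "SGA (Price)": 408.2}
THRESHOLD = 300

for app, lcss_score in results.items():
    verdict = "qualifies for stage-II" if lcss_score > THRESHOLD else "does not qualify"
    print(f"{app}: LCSS={lcss_score} -> {verdict}")
```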
Q. Sr. No. | AS-IS architecture analysis Questions | Answer: Y/N? | Abstraction (W=20) | Coarse Grained Nature of Services (W=20) | Loose Coupling (W=20) | Compliance (W=5) | Interoperability (W=5) | Searchability (W=5) | Replaceability (W=5) | Encapsulation (W=10) | Law of Composition (W=5) | Service Autonomy (W=5)
1 Packaged solution? 0 0.8 0.5 0.5 0.2 0.2 0 0 0.5 0.1 0.1
2 Home grown applications? 1 0.3 0.1 0.1 0.1 0.1 0 0 0.1 0.1 0.1
3 Development is done using framework? 0 0.5 0.2 0.2 0.2 0.2 0.2 0.2 0.5 0.4 0.1
4 Batch Programs in 3 GL? 1 0.3 0.2 0.1 0.1 0 0.1 0.1 0.2 0.1 0.1
3-Tier architecture to keep
5 1 0.6 0.3 0.3 0.3 0.2 0.2 0.2 0.2 0.4 0.3
presentation and biz layer separated?
6 Monolithic structure? 1 0.1 0.1 0.1 0.1 0.1 0 0 0.1 0.1 0.1
Ecosystem is heterogeneous (e.g. C, JAVA, SAP, DB scripts
7 0 0.4 0.4 0.3 0.2 0.5 0.6 0.6 0.2 0.2 0.2
etc.)?
8 Nature of services is database centric? 1 0.5 0.5 0.4 0.3 0.2 0.2 0.2 0.2 0.4 0.3
9 Nature of services is functionality/logic centric? 0 0.7 0.5 0.3 0.3 0.3 0.2 0.2 0.2 0.2 0.4
Age of LEGACY in range of
10 1 0.1 0.1 0.1 0.1 0.1 0 0 0.1 0.1 0.1
20 years?
Age of LEGACY in range of
11 0 0.3 0.2 0.1 0.1 0 0.1 0.1 0.2 0.1 0.1
10 years not older?
Age of LEGACY in range of
12 0 0.6 0.4 0.3 0.3 0.3 0.4 0.6 0.5 0.5 0.6
5 years not older?
13 Services used by SELF/mother application only? 1 0.3 0.3 0.4 0.6 0.5 0.5 0.6 0.4 0.4 0.4
14 Services published and used by self and others? 0 0.8 0.7 0.7 0.3 0.4 0.4 0.6 0.5 0.5 0.6
Application/service design is in the form of functions for one
15 0 0.8 0.6 0.5 0.4 0.5 0.5 0.6 0.4 0.4 0.4
logical set of actions?
16 Services / functions follow international level standards? 0 1 1 1 0.8 0.8 0.6 0.8 0.8 0.8 0.8
18 Functions and Services: signatures clearly defined? 1 0.6 0.5 0.5 0.7 0.6 0.9 0.9 0.5 0.4 0.6
19 Functions do not use global or static variables? 0 0.5 0.5 0.5 0.4 0.6 0.6 0.9 0.5 0.4 0.6
Functions/Services signatures are published at common
20 0 0.6 0.5 0.5 0.7 0.6 0.9 0.9 0.5 0.4 0.6
place?
21 User Requirements Document (URD) exists at product level? 0 0.3 0.4 0.4 0.1 0 0.1 0.1 0.2 0.1 0.1
22 User Requirements Document (FGD) exists at product level? 0 0.6 0.5 0.5 0.2 0.2 0.9 0.3 0.3 0.2 0.3
23 User Requirements Document (DD) exists at product level? 0 0.3 0.4 0.4 0.1 0.2 0.1 0.1 0.2 0.1 0.1
24 User Requirements Document (TGD) exists at product level? 1 0.3 0.4 0.4 0.1 0.2 0.1 0.1 0.2 0.1 0.1
Service names represent the purpose correctly (brief
25 0 0.3 0.4 0.4 0.1 0.1 0.1 0.1 0.2 0.1 0.1
description compliments)?
Q. Sr. No. | AS-IS architecture analysis Questions | Answer: YES=1 / NO=0 | On-demand self-service (W=20) | Broad network access (W=20) | Resource pooling (W=20) | Scalable and Elasticity (W=20) | Measured service (W=20)
1 Packaged solution 0 1 0.8 0.8 1 1
2 Home grown applications? 1 0.3 0.3 0.3 0.2 0.3
3 Development is done using framework? 0 0.5 0.4 0.5 0.6 0.4
4 Batch Programs in 3 GL? 1 0.3 0.2 0.2 0.2 0.6
3-Tier architecture to keep
5 1 0.6 0.3 0.6 0.3 0.2
presentation and biz layer separated?
6 Monolithic structure? 1 0.2 0.2 0.1 0.3 0.2
Ecosystem is heterogeneous (e.g. C, JAVA, SAP,
7 0 0.4 0.4 0.3 0.7 0.5
DB scripts etc.)?
8 Nature of services is database centric? 1 0.5 0.5 0.4 0.3 0.2
9 Nature of services is functionality/logic centric? 0 0.4 0.5 0.6 0.5 0.5
Age of LEGACY in range of
10 1 0.1 0.1 0.1 0.1 0.2
20 years?
Age of LEGACY in range of
11 0 0.3 0.3 0.3 0.3 0.2
10 years not older?
Age of LEGACY in range of
12 0 0.6 0.4 0.6 0.6 0.3
5 years not older?
13 Services used by SELF/mother application only? 1 0.3 0.3 0.4 0.6 0.5
14 Services published and used by self and others? 0 0.8 0.7 0.7 0.7 0.8
Q. Sr. No. | AS-IS architecture analysis Questions | Answer: YES=1 / NO=0 | Abstraction (W=20) | Coarse Grained Nature of Services (W=20) | Loose Coupling (W=20) | Compliance (W=5) | Interoperability (W=5) | Searchability (W=5) | Replaceability (W=5) | Encapsulation (W=10) | Law of Composition (W=5) | Service Autonomy (W=5)
1 Packaged solution? 0 0.8 0.5 0.5 0.2 0.2 0 0 0.5 0.1 0.1
2 Home grown applications? 1 0.3 0.1 0.1 0.1 0.1 0 0 0.1 0.1 0.1
3 Development is done using framework? 0 0.5 0.2 0.2 0.2 0.2 0.2 0.2 0.5 0.4 0.1
4 Batch Programs in 3 GL? 1 0.3 0.2 0.1 0.1 0 0.1 0.1 0.2 0.1 0.1
3-Tier architecture to keep
5 1 0.6 0.3 0.3 0.3 0.2 0.2 0.2 0.2 0.4 0.3
presentation and biz layer separated?
6 Monolithic structure? 1 0.1 0.1 0.1 0.1 0.1 0 0 0.1 0.1 0.1
Ecosystem is heterogeneous (e.g. C,JAVA, SAP, DB scripts
7 0 0.4 0.4 0.3 0.2 0.5 0.6 0.6 0.2 0.2 0.2
etc.) ?
8 Nature of services is database centric? 1 0.5 0.5 0.4 0.3 0.2 0.2 0.2 0.2 0.4 0.3
9 Nature of services is functionality/logic centric? 0 0.7 0.5 0.3 0.3 0.3 0.2 0.2 0.2 0.2 0.4
Age of LEGACY in range of
10 1 0.1 0.1 0.1 0.1 0.1 0 0 0.1 0.1 0.1
20 years?
Age of LEGACY in range of
11 0 0.3 0.2 0.1 0.1 0 0.1 0.1 0.2 0.1 0.1
10 years not older?
Age of LEGACY in range of
12 0 0.6 0.4 0.3 0.3 0.3 0.4 0.6 0.5 0.5 0.6
5 years not older?
13 Services used by SELF/mother application only? 1 0.3 0.3 0.4 0.6 0.5 0.5 0.6 0.4 0.4 0.4
14 Services published and used by self and others? 1 0.8 0.7 0.7 0.3 0.4 0.4 0.6 0.5 0.5 0.6
Application/service design is in the form of functions for one
15 1 0.8 0.6 0.5 0.4 0.5 0.5 0.6 0.4 0.4 0.4
logical set of actions?
16 Services / functions follow international level standards? 0 1 1 1 0.8 0.8 0.6 0.8 0.8 0.8 0.8
Services do not follow international standards but follow at
17 1 0.8 0.6 0.5 0.7 0.6 0.6 0.6 0.7 0.4 0.6
organization level standards?
18 Functions and Services: signatures clearly defined? 1 0.6 0.5 0.5 0.7 0.6 0.9 0.9 0.5 0.4 0.6
19 Functions do not use global or static variables? 0 0.5 0.5 0.5 0.4 0.6 0.6 0.9 0.5 0.4 0.6
Functions/Services signatures are published at common
20 0 0.6 0.5 0.5 0.7 0.6 0.9 0.9 0.5 0.4 0.6
place?
User Requirements Document (URD) exists at product
21 0 0.3 0.4 0.4 0.1 0 0.1 0.1 0.2 0.1 0.1
level?
22 User Requirements Document (FGD) exists at product level? 1 0.6 0.5 0.5 0.2 0.2 0.9 0.3 0.3 0.2 0.3
23 User Requirements Document (DD) exists at product level? 0 0.3 0.4 0.4 0.1 0.2 0.1 0.1 0.2 0.1 0.1
User Requirements Document (TGD) exists at product
24 1 0.3 0.4 0.4 0.1 0.2 0.1 0.1 0.2 0.1 0.1
level?
Service names represent the purpose correctly (brief
25 0 0.3 0.4 0.4 0.1 0.1 0.1 0.1 0.2 0.1 0.1
description compliments it)?
Q. Sr. No. | AS-IS architecture analysis Questions | Answer: Y/N? | Abstraction (W=20) | Coarse Grained Nature of Services (W=20) | Loose Coupling (W=20) | Compliance (W=5) | Interoperability (W=5) | Searchability (W=5) | Replaceability (W=5) | Encapsulation (W=10) | Law of Composition (W=5) | Service Autonomy (W=5)
1 Packaged solution? 0 0.8 0.5 0.5 0.2 0.2 0 0 0.5 0.1 0.1
2 Home grown applications? 1 0.3 0.1 0.1 0.1 0.1 0 0 0.1 0.1 0.1
3 Development is done using framework? 0 0.5 0.2 0.2 0.2 0.2 0.2 0.2 0.5 0.4 0.1
4 Batch Programs in 3 GL? 1 0.3 0.2 0.1 0.1 0 0.1 0.1 0.2 0.1 0.1
6 Monolithic structure? 1 0.1 0.1 0.1 0.1 0.1 0 0 0.1 0.1 0.1
7 Ecosystem is heterogeneous (e.g. C, JAVA, SAP, DB scripts etc.)? 0 0.4 0.4 0.3 0.2 0.5 0.6 0.6 0.2 0.2 0.2
8 Services are database centric? 1 0.5 0.5 0.4 0.3 0.2 0.2 0.2 0.2 0.4 0.3
9 Services is functionality centric? 0 0.7 0.5 0.3 0.3 0.3 0.2 0.2 0.2 0.2 0.4
13 Services used by SELF/mother application only? 1 0.3 0.3 0.4 0.6 0.5 0.5 0.6 0.4 0.4 0.4
14 Services published and used by self and others? 1 0.8 0.7 0.7 0.3 0.4 0.4 0.6 0.5 0.5 0.6
16 Services / functions follow international level standards? 0 1 1 1 0.8 0.8 0.6 0.8 0.8 0.8 0.8
18 Functions and Services: signatures clearly defined? 1 0.6 0.5 0.5 0.7 0.6 0.9 0.9 0.5 0.4 0.6
19 Functions do not use global or static variables? 0 0.5 0.5 0.5 0.4 0.6 0.6 0.9 0.5 0.4 0.6
20 Functions/Services signatures are published at common place? 0 0.6 0.5 0.5 0.7 0.6 0.9 0.9 0.5 0.4 0.6
21 User Requirements Document (URD) exists at product level? 0 0.3 0.4 0.4 0.1 0 0.1 0.1 0.2 0.1 0.1
22 User Requirements Document (FGD) exists at product level? 1 0.6 0.5 0.5 0.2 0.2 0.9 0.3 0.3 0.2 0.3
23 User Requirements Document (DD) exists at product level? 0 0.3 0.4 0.4 0.1 0.2 0.1 0.1 0.2 0.1 0.1
24 User Requirements Document (TGD) exists at product level? 1 0.3 0.4 0.4 0.1 0.2 0.1 0.1 0.2 0.1 0.1
25 Service names represent the purpose correctly? 0 0.3 0.4 0.4 0.1 0.1 0.1 0.1 0.2 0.1 0.1
Cloud Attributes | Weight Score (Wi) | Observed Score (Oi) | (Wi - Oi) | Wi·(Wi - Oi) | Wi·Wi
On-demand self-service | 20 | 9 | 11 | 220 | 400
Broad network access | 20 | 8.1 | 11.9 | 238 | 400
Resource pooling | 20 | 8.3 | 11.7 | 234 | 400
Scalable and Elasticity | 20 | 7.5 | 12.5 | 250 | 400
Measured service | 20 | 7.6 | 12.4 | 248 | 400
Totals | 100 | | Sum Wi·(Wi - Oi) | 1190 | 2000
Soft Architectural Distance (SAD) score using Equation (4.5): 0.595
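The same computation against the CLOUD benchmark, where each of the five cloud attributes carries weight 20, can be sketched as below; it reproduces the 0.595 figure from the table.

```python
# Sketch: the SAD formula applied against the CLOUD benchmark,
# where each of the five cloud attributes carries weight 20.
cloud_weights  = {"On-demand self-service": 20, "Broad network access": 20,
                  "Resource pooling": 20, "Scalable and Elasticity": 20,
                  "Measured service": 20}
cloud_observed = {"On-demand self-service": 9, "Broad network access": 8.1,
                  "Resource pooling": 8.3, "Scalable and Elasticity": 7.5,
                  "Measured service": 7.6}

numerator   = sum(w * (w - cloud_observed[a]) for a, w in cloud_weights.items())
denominator = sum(w * w for w in cloud_weights.values())
print(f"CLOUD SAD = {numerator / denominator:.3f}")   # 0.595 for the figures above
```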
6.5 Conclusions
The findings of this work are summarized below:
A model was developed to detect the legacy crisis point; it has two stages. The first stage is based on operational symptoms and the second stage is based on architectural issues.
The developed methodology was applied on 204 applications belonging to 7 BSGs. These 204 applications include 4 applications which had already been transformed; all these 4 applications are recommended by the methodology when it is applied on pre-transformation production data and the old architecture.
Out of the 204 applications, the LCSS 80th percentile gives 42 applications; these 42 applications are confirmed with a poor rating and identified as the target for the next set of applications under transformation.
Legacy software after a certain stage enters into a crisis situation; this situation depends on the attributing factors identified in this study. Legacy crisis attributes are almost similar for all sorts of legacy, but their weightage may differ specific to the organization. The model developed in this study helps to decide the weight factor for each attribute. The legacy crisis symptom can be determined by the LCSS score as per the developed model. This LCSS is based entirely on the operational metrics and constitutes the first stage of the legacy crisis determination process.
Future research is expected to focus more on how to automate the LCSS value computation. A plug-in needs to be developed which can interface with any support tool, such as Remedy, to apply the LCSS algorithm to applications against benchmarked recent applications.
A code-scan tool can be developed which uses all the architectural aspects to automatically determine the SAD score, taking the basic answers to the questions and the organization-specific parameter matrix as inputs.
A fuzzy model can be developed to interpret the results of the SAD computations, especially when the SAD score takes no extreme value, i.e., does not clearly indicate good or bad architecture.
In this study the methodology applied is generic, but the determination of factors is organization specific. Building on this methodology, future work could release a factor/weight matrix generic to the industry. Organizations always have time constraints in every project they do, so the best solution for them is a ready-to-use generic matrix. Also, if the methodology is not used correctly, they may arrive at wrong factors.
[1] Seacord, Robert C., et al.: “Legacy modernization strategies. Technical Report
CMUSEI-2001-TR-025,” Carnegie Mellon University, Pittsburgh, 2001.
[2] Seacord, Robert C., Plakosh,D., Lewis,G.A.: “Modernizing Legacy Systems.
Carnegie Mellon,” SEI. Addison-Wesley, Reading, 2003.
[3] Bergey J., Smith D., Weiderman N., Wood S.,“Option Analysis for
Reengineering (OAR): Issues and Conceptual Approach, Software Engineering
Institute,” Carnegie Mellon University, Tech. Note CMU/SEI-99-TN-014, 1999.
[4] Van den Heuvel,Willem-Jan., “Integrating Modern Business Applications with
Legacy Systems: A Software Component Perspective,” MIT Press, Cambridge,
2007.
[5] Bennett, K., “Legacy systems: coping with stress. Software”, IEEE Transactions,
vol. 12(1), pp. 19-23, 1995.
[6] R. Kazman, S. G. Woods, and S. J. Carriere, “Requirements for Integrating
Software Architecture and Reengineering Models: CORUM II,” IEEE, In
Proceedings of WCRE-98, Honolulu, pp. 154-163, 1998.
[7] Amit Mishra, “Legacy Software Tragedy: with 4th Decimal Pricing, a Case
Study,” ST-Microelectronics, 2012
[8] Runeson, P., & Host, M., “Guidelines for conducting and reporting case study
research in software engineering.” Empirical Software Engineering, vol. 14(2),
pp. 131-164, 2008.
[9] Seaman, C. B., “Qualitative methods in empirical studies of software
engineering. Software Engineering”, IEEE Transactions on, vol. 25(4), pp. 557-
572, 1999.
[10] Seacord, R. C., Comella-Dorda, S., Lewis, G., Place, P., & Plakosh, D., “Legacy
System Modernization Strategies (No. CMU/SEI-2001-TR-025).”, Carnegie-
Mellon University Pittsburgh Pa Software Engineering Inst., 2001
[11] Sneed, H. M., “Encapsulation of legacy software: A technique for reusing legacy
software components,” Annals of Software Engineering, vol. 9(1-2), pp. 293-313,
2001.
[95] Gabriele Bavota, Rocco Oliveto, Malcom Gethers, Denys Poshyvanyk, Andrea
De Lucia, "Methodbook: Recommending Move Method Refactorings via
Relational Topic Models", IEEE Transactions on Software Engineering, vol.
40(7), pp. 671-694, 2014
[96] K. Bennett, "Legacy Systems, Coping with Success," IEEE Software, vol. 12(1), pp. 19-23, 1995.
This Appendix presents references and required details for all the data sources, techniques and methods used in this thesis.
Data Sources
In the present work, software engineering data of all seven BSGs of ST-Microelectronics Pvt. Ltd. were used extensively. These seven BSGs are:
• Finance
• Purchasing
• Manufacturing
• GLWO
• SnM (Sales and Marketing)
• HR
• BI
[1] ST-Microelectronics PVT. LTD., ICT – BSGs Quarterly Master Releases KPI data,
http://best-collab.st.com/ws/SQA_SCC_Quality_Assurance/Measurement%20
and%20 Analysis/Forms/AllItems.aspx from year 2011 till 2015.
[2] ST-Microelectronics PVT. LTD., ICT-BSGs Operation and Support Service
Dashboard, http://best-collab.st.com/ws/Service_Management/SitePages/Home.
aspx and http://best.st.com/docshare/ict/ict-dashboard/Forms/AllItems.aspx,with
rolling 1 year data from year 2011 till 2015
[3] ST-Microelectronics PVT. LTD., ICT-TOP Page data for all seven BSGs,
http://best.st.com/Corporate/ICT/Documents/ICT-Top-Page-2015-V18.pdf from
year 2011 till 2015.
[4] ST-Microelectronics PVT. LTD., ICT-Remedy Database (Change Requests,
Incidents, Questions, Requests and Problems) for all seven BSGs,
ICT_Health_Evolution Business Object (BO) Universe, past 5 years, from 2011 till
2015.
[5] ST-Microelectronics PVT. LTD., ICT 8D repository and fishbone / Ishikawa diagram analysis reports covering critical Interrupts to Production (ITPs), http://8d.sgp.st.com/, past 5 years, from 2011 till 2015.
[6] ST-Microelectronics PVT. LTD., ICT Applications Portfolio Analysis (APA) by
Cap-Gemini as benchmarking exercise on legacy transformation, http://best-
collab.st.com/ws/ICT/Reporting%20and%20Dashboards/Forms/AllItems.aspx?Roo
tFolder=%2Fws%2FICT%2FReporting%20and%20Dashboards%2FSnM%5FSPS
G%5FServiceManagement, 2013.
[7] ST-Microelectronics PVT. LTD., ICT, Time Logging System (TLS),
http://tls.sgp.st.com/ to report efforts on different ICT activities including KLO:
past 5 years, from 2011 till 2015.
[8] ST-Microelectronics PVT. LTD., ICT, Human Resources Data (limited access), http://ehr.st.com/, for the employee attrition rate of those working in the legacy area.
Delphi Technique
The Delphi technique [101] is used in this work for deriving parameter values and, in some cases, the weights of these parameters applicable to one organization. It is a well-accepted method for gathering data from respondents within their domain of expertise. The technique is designed as a group communication process which aims to achieve a convergence of opinion on a specific real-world issue. Fig. A-1 describes the iterations in the Delphi technique. The ST-Microelectronics Process Working Group (PWG) was instrumental in the determination of the final set of attributes contributing to legacy crisis from the operational and architectural perspectives. The PWG is constituted of one senior representative from each BSG and three members of the Software Engineering Practice Group (SEPG).
Figure A-1: Iterations in the Delphi technique (PWG brainstorming, identification of common issues, PWG approval, and management agreement).
- implement relevant preventive actions related to the system root causes to avoid re-occurrence.
D2: Explore all dimensions to detail the problem and its gravity.
D5/D6: In D5 the team lists the corrective actions linked to the technical root causes (occurrence and escape). These actions must be verified to ensure they are effective and do not cause other undesired effects for the customer (risk assessment). In D6 the permanent, selected corrective actions are implemented.
D7: The D7 objective is to prevent any recurrence. The team lists and implements all the preventive actions linked to the potential and system root causes (occurrence and escape) and the cross-fertilization needed to avoid the same issue happening again. These actions must be verified to ensure they are effective and do not cause other undesired effects for the customer (risk assessment).
D8: This part is dedicated to the lessons learned, the assessment of the 8D, and the closure of the 8D. The 'Lessons Learned' and the assessment are performed by the management during the 8D board or the 8D closure meeting. It concerns not only the lessons learned about the issue itself but also about the way it was managed and solved: what worked well, what did not work so well, etc. are reviewed.
This Appendix presents real industry data on software maintenance and software engineering to complement the work presented in this thesis. Based on this real software data, a technical report is presented to extract meaningful information. The software maintenance data is taken from ST-Microelectronics operational dashboards. Here the information processing is based on different flavors of software application support. The data reported in this section belongs to seven Business Solution Groups (BSGs) involving 204 applications running live. The data horizon is the past three years, typically from 2012 to 2015. The software maintenance data is taken from corporate tools (e.g., Remedy) and the reports and dashboards based on these tools (see Appendix A). A systematic study is applied to the incidents, problems, service requests, change requests, questions asked by users about application behavior, interrupts to production faced during execution, etc. The framework used for the operations and support (OnS) processes here is based on the IT Infrastructure Library (ITIL). Suitable Key Performance Indicators (KPIs) for software maintenance are discussed to emphasize the importance of management by measurement. KPIs help to make the performance measurable and comparable. Trend analysis of the KPIs and their meaning is also elaborated, indicating the actions to take for quality improvement. Software Performance Indicators (SPIs) are derived by combining the KPIs from Information and Communication Technologies (ICT) service dashboards. This information is expected to be helpful for the software industry catering mainly to software maintenance and support of legacy or old software. The study presented here is also a good insight for researchers contributing to software engineering and software maintenance.
Software applications, once developed and deployed, have regular maintenance and a definite life cycle like other operational assets. Although there is no natural wear and tear on software code as there is on working machines, applications start performing poorly due to repeated maintenance over the existing code. All this makes the software less robust over time.
Some of the KPI figures for a subset of the Sales and Marketing BSG applications are shared below and represented in the graphs of Fig. B.2 to Fig. B.9.
Number of Incidents Reported (Per Month): This is a record of any misbehavior of an application faced by users. Full details of the tickets are available in the dashboard, but a summary chart is shared on one sheet to give a quick view of the trend. Any abnormal increasing trend signals an unusual event; this needs to be explained, or else a deeper analysis and a solution to the issue is initiated.
Number of Problems Reported (Per Month): This is a record of consistent misbehavior of an application function. Each problem may have multiple linked reported incidents.
Incident trend for sales process applications, 07-2015 to 06-2016. Total incidents arrived per month: 105, 97, 100, 390, 206, 180, 151, 184, 203, 190, 160, 65; open incidents: 21, 24, 38, 115, 120, 137, 171, 194, 155, 171, 156, 156.
Figure B.2: Monthly incident trend for sales process applications (under SnM BSG)
Problem trend, 07-2015 to 06-2016. Total problems arrived per month: 12, 14, 19, 54, 27, 36, 17, 30, 43, 29, 18, 15; open problems: 146, 95, 104, 149, 165, 144, 162, 181, 240, 239, 235, 250.
Figure B.3: Monthly Problem Trend for Sales Process Applications (Under SnM BSG)
Figure B.4: Monthly service request and question trend for sales process applications
Number of Change Requests Reported (Per Month): Changes requested for application evolution.
Change trend, 07-2015 to 06-2016. Total change requests arrived per month: 18, 13, 16, 16, 10, 23, 14, 25, 31, 34, 27, 9; open change requests: 77, 78, 69, 76, 65, 56, 59, 81, 122, 86, 97, 104.
Figure B.5: Monthly change request trend for sales process applications
Service Level Objectives and Service Level Agreement (SLA):
This is the measurement of the percentage of tickets meeting the agreed time to resolve the ticket. Acknowledgement is the first response to the requester, while resolution is the completion of the request with an appropriate solution. The targets for the organization under study are 87% for acknowledgement and 90% for resolution.
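A minimal sketch of this SLO compliance measure, with illustrative ticket counts, is given below.

```python
# Sketch: SLO compliance as the percentage of tickets acknowledged (or resolved)
# within the agreed time. Ticket counts are illustrative, not dashboard data.
def slo_compliance(met_within_sla, total_tickets):
    return 100.0 * met_within_sla / total_tickets

ack_compliance = slo_compliance(met_within_sla=78, total_tickets=90)
res_compliance = slo_compliance(met_within_sla=81, total_tickets=90)
print(f"Acknowledgement SLO: {ack_compliance:.2f}% (target 87%)")
print(f"Resolution SLO: {res_compliance:.2f}% (target 90%)")
```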
Chart: monthly acknowledgement SLO compliance against the 87% target, with monthly values of 92.86%, 86.15%, 82.28%, 77.97%, 77.01% and 70.93% across the period.
Chart: monthly resolution SLO compliance against the 90% target for 2016-01 to 2016-06, with values of 96.43%, 85.29%, 81.13%, 78.72%, 73.33% and 73.26% across the period.
Acknowledgments
Special thanks to ST-Microelectronics for providing the author the opportunity to work as a process working group member and to participate in various workshops and initiatives. The worldwide exposure of working with software engineering data made this technical report more meaningful and practical. Thanks also to our management for permitting the use of non-sensitive data in this report, with a view to benefiting industry and researchers.