Component-Based Software Engineering: Methods and Metrics
Tiwari and Kumar, 2021
Reasonable efforts have been made to publish reliable data and information, but the author and publisher
cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and
publishers have attempted to trace the copyright holders of all material reproduced in this publication and
apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright
material has not been acknowledged please write and let us know so we may rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted,
or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented,
including photocopying, microfilming, and recording, or in any information storage or retrieval system,
without written permission from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com or
contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400.
For works that are not available on CCC please contact [email protected]
Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only
for identification and explanation without intent to infringe.
Contents
2.4 X Model......................................................................................................................... 44
2.4.1 Key Findings.................................................................................................... 46
2.4.2 Critiques........................................................................................................... 46
2.5 Umbrella Model........................................................................................................... 46
2.5.1 Key Findings.................................................................................................... 47
2.5.2 Critique............................................................................................................. 47
2.6 Knot Model................................................................................................................... 48
2.6.1 Key Findings.................................................................................................... 49
2.6.2 Critiques........................................................................................................... 49
2.7 Elite Model.................................................................................................................... 49
2.7.1 Key Findings.................................................................................................... 51
2.7.2 Critiques......................................................................................................... 51
Summary................................................................................................................................... 51
References................................................................................................................................ 51
Index.............................................................................................................................................. 203
Figures and Tables
Figures
1.1 Modern software.......................................................................................................................................2
1.2 Traditional software engineering paradigms.......................................................................................3
1.3 Basic waterfall development paradigm.................................................................................................4
1.4 The incremental development paradigm..............................................................................................6
1.5 The rapid application development paradigm....................................................................................8
1.6 The prototyping development paradigm............................................................................................10
1.7 The spiral development paradigm.......................................................................................................12
1.8 Advanced software engineering paradigms......................................................................................14
1.9 One increment of cleanroom software development........................................................................19
1.10 Component-based software engineering framework........................................................................22
1.11 Concurrent component-based software development processes....................................................23
1.12 Componentization vs integration cost.................................................................................................26
2.1 Y model for component-based software.............................................................................................38
2.2 BRIDGE model for component-based software.................................................................................40
2.3 X model for component-based software.............................................................................................44
2.4 Umbrella model for component-based software...............................................................................46
2.5 The Knot model for component-based software................................................................................48
2.6 The Elite model for component-based software................................................................................49
4.1 User login page.....................................................................................................................................120
4.2 Reusability graph when components contain new and reused function points.........................123
4.3 Reusability graph when components contain reused function points only................................124
4.4 Reusability graph when components contain adaptable function points only...........................124
5.1 Interactions between two components..............................................................................................129
5.2 Interactions among four components................................................................................................134
5.3 Interaction flow graph of two components......................................................................................137
5.4 Interaction flow graph of three components with 2 CRs................................................................140
5.5 Interaction flow graph of three components with 4 CRs................................................................141
5.6 Interaction flow graph of four components with 3 CRs.................................................................142
5.7 Interaction flow graph of four components with 7 CRs.................................................................144
5.8 Interaction flow graph of five components with 10 CRs................................................................145
6.1 Black-box component testing..............................................................................................................150
6.2 White-box component testing.............................................................................................................151
6.3 Testing of individual components......................................................................................................154
6.4 Integration-effect graph of two components....................................................................................155
6.5 Interactions among three components..............................................................................................159
6.6 Integration-effect graph of three components..................................................................................160
6.7 Interactions among four components................................................................................................162
6.8 Integration-effect graph of four components...................................................................................163
6.9 Interactions among five components.................................................................................................166
6.10 Integration-effect graph of five components....................................................................................167
6.11 Interaction flow graph of six components........................................................................................173
7.1 Interactions among five components.................................................................................................180
7.2 Hardware reliability.............................................................................................................................188
7.3 Software reliability................................................................................................................................188
7.4 Component interaction graph.............................................................................................................196
Tables
3.1 Summary of Reuse and Reusability Issues.........................................................................................60
3.2 Summary of Interaction and Integration Complexities....................................................................76
3.3 Decision Table.........................................................................................................................................83
3.4 Summary of Testing Issues in Component-Based Software.............................................................86
3.5 Summary of Reliability Issues in Component-Based Software.......................................................91
4.1 Reusability Matrix................................................................................................................................119
4.2 Partially Qualified, Fully Qualified and Off-the-Shelf Function Points of
Five Candidate Components..............................................................................................................121
4.3 Reused, Adaptable and New Function Points of Five Candidate Components.........................121
4.4 Reusability Matrix when New and Reused Function Points are Involved..................................121
4.5 Reusability Matrix when only Reused Function Points of the
Component are Involved.....................................................................................................................122
4.6 Reusability Matrix when only Adaptable Function Points of the
Component are Involved.....................................................................................................................122
5.1 Number of In-Out Interactions Between Components C1 and C2................................................132
5.2 Number of In-Out Interactions among Four Components.............................................................134
6.1 Probable Values of Integration-Effect Matrix of Two Components...............................................155
6.2 Integration-Effect Matrix of Two Components Without Faults.....................................................157
6.3 Integration-Effect Matrix of Two Components With Faults...........................................................158
6.4 Integration-Effect Matrix of Two Components................................................................................158
6.5 Probable Values of Integration-Effect Matrix of Three Components............................................161
6.6 Actual Values of Integration-Effect Matrix of Three Components................................................161
6.7 Probable Values of Integration-Effect Matrix of Four Components..............................................165
6.8 Actual Values of Integration-Effect Matrix of Four Components..................................................165
6.9 Probable Values of Integration-Effect Matrix of Five Components...............................................170
6.10 Actual Values of Integration-Effect Matrix of Five Components..................................................170
6.11 Comparison of Black-Box Test Cases.................................................................................................174
6.12 Comparison of White-Box Test Cases................................................................................................174
7.1 Inputs of Path P1...................................................................................................................................183
7.2 Interaction Metrics of Path P1.............................................................................................................183
7.3 Inputs of Path P2...................................................................................................................................184
7.4 Interaction Metrics of Path P2.............................................................................................................184
7.5 Inputs of Path P3...................................................................................................................................185
7.6 Interaction Metrics of Path P3.............................................................................................................185
7.7 Inputs of Path P4...................................................................................................................................185
7.8 Interaction Metrics of Path P4.............................................................................................................185
7.9 Actual Execution Time of Four Different Paths...............................................................................186
7.10 Inputs of Path P1...................................................................................................................................197
7.11 Probability of Component Interaction in Path P1.............................................................................197
7.12 Inputs of Path P2...................................................................................................................................198
7.13 Probability of Component Interaction in Path P2.............................................................................199
7.14 Inputs of Path P3...................................................................................................................................199
7.15 Probability of Component Interaction in Path P3.............................................................................199
7.16 Execution Time and Reliabilities of Paths.........................................................................................200
7.17 Actual Execution Time and Reliability of CBS Application...........................................................200
Preface
In the current software development environment, the “divide and conquer” approach is
used in development of very large-scale and complex software. Software is divided into
components and then individual components are either developed by the development team or purchased from a third party. Finally, developed/purchased components are
integrated to design the software according to the requirements. This development
approach is known as component-based software engineering (CBSE). Component-based
software engineering, a distinctive software engineering paradigm, offers the feature of
reusability. It promotes the development of software systems by picking appropriate pre-
existing, pre-built, pre-tested and reusable software work products called components.
These components are assembled and integrated within a well-defined architectural
design. Rather than focusing solely on coding, component-based software development
enables application developers to concentrate on better design and optimized solutions to
problems, since coding objects are available in the repository in the form of components.
Component-based software engineering accentuates development with reuse as well as
development for reuse. Reusable components interact with each other to provide and
access functionalities and services. The interactions and integrations of heterogeneous
components raise issues, including the suitable and efficient reusability of components,
complexities produced during the interaction among components, testing of component-
based software and the overall reliability of the application under development.
This book addresses four core issues in the context of component-based software: reus-
ability, interaction/integration of components, testing and reliability. In addition, the major
research issues addressed in this book are:
• Lack of testing procedures and methodologies for individual components and inte-
grated CBSE software
• Reliability and sensitivity to changes.
In this book, these issues are addressed using a model-driven approach to analyze and
show the results of proposed measures and metrics. This approach is a very simple, suit-
able and comparatively efficient way to resolve them effectively. Appropriate scenarios
and case studies are constructed to model the component-based environment. Comparative
results are reported in each chapter, demonstrating the efficiency of the proposed metrics.
Intended Audience
This book focuses on a specialized branch of the vast domain of software engineering:
component-based software engineering. It is designed for Ph.D. research scholars, M. Tech. (computer science and engineering) scholars, M. Tech. (information technology) scholars, B. Tech. (computer science and engineering) students, B. Tech. (information technology) students and students on other higher education courses, as well as for researchers in areas where software engineering is in the syllabus.
Chapter Organization
This book contains seven chapters, organized as follows:
Chapter 1 introduces the basic concepts of software engineering in terms of traditional
development and advanced development paradigms. Further, it defines the context of
component-based software engineering (CBSE), including definitions of components and
CBSE, evolution of the CBSE paradigm, basic characteristics of CBSE, related problems
and observations.
Chapter 2 provides comparative review and analysis of various component-based
development models proposed and used so far. These models include the BRIDGE,
Umbrella, X, Y and other models.
Chapter 3 explores the major issues in developing component-based software, focusing
on reusability, interaction/integration complexities, design and coding, testing and reli-
ability. The chapter provides a detailed comparative literature review of these issues.
Chapter 4 focuses on the reusability features of components and the role of reusability
in selection and verification of components. In this chapter metrics for reusability are
developed at component level as well as at system level. Components are categorized according to their degree of reusability as new, adaptable or off-the-shelf.
Chapter 5 defines some simple and effective interaction and integration metrics to
assess the complexities of component-based software. These interaction metrics are divided
into two categories: black-box and white-box. For black-box components, integration met-
rics are developed and for white-box components, cyclomatic complexity metrics are
constructed.
Chapter 6 focuses on testing and test-case generation issues in component-based soft-
ware. Two types of technique are defined in this chapter: one for components whose code
is not accessible and the other for those whose code is available. In the context of CBSE,
testing and the assessment of reliability is one of the crucial issues.
Chapter 7 illustrates some measures to assess the execution time and reliability of component-based applications using the reusability features of the components. To enhance the reliability of component-based software, interaction metrics and reusability metrics are used as reliability estimation factors.
Acknowledgements
No creation in this world is a solo effort, and neither is this work for which we are credited.
From the person who inspired us to write this book to the person who approved the work,
everyone has a role. We have benefited from various resource materials and ideas on com-
ponent-based software engineering, and we are gratified that over the entire period of the
compilation of this book, many people have helped and supported us.
It is our pleasure to express our gratitude to Prof. (Dr.) Kamal Kumar Ghanshala, Hon.
President, Graphic Era (Deemed to be University), Dehradun for encouraging us and pro-
viding the resources and facilities to conduct this work. We express our profound gratitude
to Prof. (Dr.) R. C. Joshi, Chancellor, Graphic Era (Deemed to be University), Dehradun for
his wise suggestions, never-ending inspiration and deep interest in this book. We express
our thanks to Prof. (Dr.) Rakesh K. Sharma, Vice Chancellor, Graphic Era (Deemed to be
University), Dehradun and Prof. (Dr.) H. N. Nagaraja, Pro-Vice Chancellor, Graphic Era
(Deemed to be University), Dehradun for their guidance and support.
We are very thankful to Dr. D. R. Gangodkar, Dr. Pravin P. Patil, Dr. Bhasker Pant,
Dr. Devesh Pratap Singh and Mr. Manish Mahajan for their valuable advice and continu-
ous evaluation of this book. We are especially grateful to Prof. Dibyahash Bordoloi, Prof.
(Dr.) S.C. Dimri, Dr. Priya Matta and Dr. Kamlesh Purohit for their academic, administra-
tive and moral support. We thank all our dear colleagues who helped us unconditionally
and selflessly during the writing of this work.
We would like to thank students, readers of this book and practitioners, and hope that
this work will be helpful for them in their research in the field of component-based soft-
ware engineering. We would also like to thank Ms. Balambigai, Senior Project Manager,
SPi Global, Ms. Aastha Sharma and Ms. Shikha Garg at CRC Press/Taylor & Francis Group
for their continuous support and cooperation throughout the publication process.
We express our deep sense of appreciation to our families for their unconditional love,
support and care.
1 Introduction to Software Engineering and Component-Based Software Engineering
1.1 Introduction
In the last few years, the nature of software as well as the software development process has
changed significantly. Development processes have come a long way from ad-hoc develop-
ment to a customer-specific agile development process, from simple, single-task programs
to complex and extremely large component-based software, from mere number-crunching
mathematical calculations to real-life, problem-specific and dynamic solutions. In today’s
era, software is not merely a product: it’s a medium for delivering products, it’s a service.
In computer science, two fundamental constructs of developing a program or software are algorithms and data (structures). Generally, algorithms and logic are developed to create or manipulate data. These algorithms are non-executable code that is translated into
executable form, commonly known as programs. Further, these executable programs are
collectively grouped together to form the executable software.
In the modern era, however, software is more than a collection of executable programs.
Today’s software is a combination of executable programs that perform defined tasks, data
and data structures on which these programs will operate, operating procedures, and
associated documentation to help users and customers understand the software (see
Figure 1.1).
1.2 Software Engineering
Software engineering is both an art and a science. It is the process of constructing accept-
able artifacts with scientific verifications and validations within the limitations of time and
budget. The term “software engineering” came into existence in the 1960s when NATO’s
Science Committee organized conferences to address the “software crisis.” Krueger (1992)
describes the software crisis as “the problem of building large, reliable software systems in
a reliable, cost-effective way.” Previously, the industry, the research community and
academia had concentrated on the development of capable and competent hardware. The
result of this effort was the availability of powerful and cheaper machines. Now, there was
the requirement of large, functionally efficient software to fully utilize the capability of the
available hardware machines and other resources.
FIGURE 1.1
Modern software: a combination of executable programs (.exe), data (structures), operating procedures, documentation and other artefacts.
Thus, the focus of the community shifted from small, number-crunching programs to developing software that could address
real-world problems. The outcome of these conferences was the emergence of the term
“software engineering.”
Software is a set of executable programs, related documentation, operational manuals
and data structures. Engineering is the step-by-step evolution of constructs in an orga-
nized, well-defined and disciplined manner. Some leading definitions of software engi-
neering produced by eminent researchers and practitioners are as follows:
The conclusion of all these definitions is that the ultimate goal of software engineering is to
develop quality software within reasonable time limits and at affordable cost. Software engi-
neering is an engineering domain that seeks to implement standard ways of developing and
maintaining software, through standard methods using standard tools and techniques.
FIGURE 1.2
Traditional software engineering paradigms (including throwaway and exploratory prototyping).
1.4.1.1.1 Key Findings
Key features of this paradigm are
• One of the first paradigms based on the software development life cycle, having a
defined set of software development phases.
• The waterfall paradigm freezes all customer requirements at a very early stage, so the
selection of the development team, and acquisition of software and hardware
resources is done before the actual development begins.
• Waterfall is a very simple paradigm and comparatively easy to implement.
FIGURE 1.3
Basic waterfall development paradigm: software requirement analysis, design, implementation, testing, deployment and maintenance in sequence.
• Development phases are organized in such a way that they do not overlap, that is, the
new phase commences only after the conclusion of the previous phase.
• Overall development takes less time as this paradigm requires all requirements to be
listed in a clear and predefined manner. Once all the requirements are documented,
developers can focus solely on core development.
• The development process is manageable and always under control, as the team’s full
emphasis is on one phase at a time.
• Each phase has a defined and definite deliverable. After the accomplishment of each
phase the work product can easily be reviewed.
This paradigm is best suited to projects where all the requirements are clear and defined,
and the development team is sure about the tools and technologies required for the devel-
opment. The waterfall model is suitable where reengineering of old software is required,
and is performed with the help of new technologies and on new platforms. This model is used when the development team has enough experience of developing software of a similar nature.
1.4.1.1.2 Critiques
• It is rare for all software requirements to be available at the very early stages of devel-
opment. Sometimes, the customer is not actually aware of or clear about the require-
ments, in which case the development process cannot proceed.
• Customers or users are not involved during the development process of their own
software other than in the first phase, so they are required to be patient till the last
phase, which is quite difficult.
• It is a rigid paradigm which does not allow for flexibility, or changing or enhancing
requirements in between the development phases.
• The sequential nature of the waterfall paradigm produces “blocking states” (Bradac
et al. 1994), that is, the development team cannot start the next phase until they have
finished the current phase. Parallel development phases are not described in the
waterfall paradigm. This sequential approach results in coders and testers remaining
idle for a long time.
• Errors made by systems analysts, designers or coders are not detected till the final
phase of development, as the testing phase comes almost at the end of the paradigm.
A high level of uncertainty and risk is therefore involved in the waterfall paradigm.
• Wastage of resources is quite possible as all the resources are acquired long before the
commencement of actual development.
• This paradigm is not suitable for innovative projects where the requirements are not
very clear, or uncertainty or risk in the project is likely to occur.
1.4.2.1.1 Key Findings
• Huge and complex software may be componentized or modularized and necessary
modules can be delivered with available resources.
• Development can be started with an initial or most prioritized set of requirements,
without the need to freeze all the requirements in advance.
• The customer obtains operational software with basic functionalities within a short
time.
• On the basis of customer feedback, the subsequent increments can be better planned,
and developers have the time to produce better-quality deliverables that satisfy the
customer.
FIGURE 1.4
The incremental development paradigm: requirements, design, coding, testing and deployment repeated from the initial core product through the nth (final) increment, with customer feedback after each increment. (Adapted from Pressman 2005.)
1.4.2.1.2 Critiques
• Increments should be planned and designed so that each increment is organized in a
defined manner. Each increment should add functionality rather than cost overheads
and development time.
• The incremental paradigm may be started with the highest-priority requirements,
but prioritization of requirements is a complex task. The core product may become
irrelevant if the customer’s priorities have changed in the next increment.
• Cost and time to incorporate new functionalities in the working software may be
higher than in the waterfall paradigm.
• The integration of new or changed requirements in the operational software may lead
to new and unforeseen errors, which may cause the budget and development sched-
ule to overrun.
1.4.2.2.1 Key Findings
• The RAD paradigm includes advanced modeling tools and techniques. Its design
phase consists of three modeling concepts: business modeling or logic building, data
modeling and process modeling.
• RAD supports auto code generation mechanisms and code reusability.
FIGURE 1.5
The rapid application development paradigm: overall system design is partitioned into modules that separate teams analyze, design, code and test in parallel, followed by integration of modules (unit and integration testing) and deployment with feedback and support. (Adapted from Pressman 2005.)
• RAD is one of the first paradigms to apply the partitioning of large problems into
smaller sub-problems. RAD follows the “divide and conquer” approach to
development.
• Componentization and modularization are among the fundamental attributes of the
RAD paradigm. Large and complex software is divided into modules and these mod-
ules are allotted to different development teams.
• Parallel development of different modules results in a shorter life cycle for the overall
software. Basically, RAD is used when the customer needs large-scale software
quickly. In general, RAD is a “high-speed” version of classic life-cycle paradigms.
• It is the most suitable model for business-oriented products where market-bounded
time constraints govern product launches. Some authors have therefore used the
terms “business modeling” and “data modeling” in place of analysis and design.
• RAD offers enough flexibility to accommodate changes in or extensions to the cus-
tomer’s requirements. As it is an incremental paradigm, changes can be incorporated
and delivered in increments.
1.4.2.2.2 Critiques
• RAD uses extensive human resources, as it requires various teams to develop differ-
ent modules.
• Integration of different modules developed by different teams with varying levels of
technical skill is a major issue. If not handled properly, integration and interaction of
modules may generate unmanageable complexities.
• When the assembled modules are tested as a complete system, integration testing
and regression testing may be time-consuming as well as costly.
• Coordination among development teams is a critical issue. Different teams have dif-
ferent technological expertise. Combining their work on a common platform and uti-
lizing their skills to achieve a common goal (customer requirements) may present a
real challenge.
• RAD requires experienced and skilled systems analysts and designers. Modularization
of software is a risky and complex job. If division of modules is not done properly, or
not according to the requirements, it will create design faults, which may ultimately
result in faulty software.
• The initial cost of the RAD paradigm is comparatively high, as it requires various
development teams, and each team requires resources.
FIGURE 1.6
The prototyping development paradigm: identification and refinement of requirements, quick analysis and planning, quick design, prototype development, evaluation and feedback, then either discarding the prototype (throwaway) for fresh implementation or evolving it (exploratory) through testing, delivery and maintenance.
Depending on the implementation method and the customer requirements, there are
two approaches to prototyping: throwaway prototyping and exploratory prototyping.
i. Throwaway prototyping
Once the development team and the customer finalize the prototype, it is put to one
side. Taking the final prototype as the basis for the model, a new SRS with detailed
configurations is written. A rapid new design is built using new tools and technolo-
gies. The finalized prototype is discarded and exactly the same operational software
is developed. The discarded prototype works as a dummy guide used to refine the
requirements, as shown in Figure 1.6.
ii. Exploratory prototyping
In exploratory prototyping, the finalized prototype is not discarded; rather it is
treated as the semi-finished product. New features and additional functionalities are
added to make the prototype operational. After reviews by the customer and testing
by the developer it is deployed to the customer’s site.
1.4.3.1.1 Key Findings
• Prototyping is the most suitable paradigm for eliciting requirements from the cus-
tomer. Non-technical customers may be unaware of or uncertain about their software
requirements; after seeing working or dummy prototypes of similar products, cus-
tomers can explore more specific requirements.
• Once the development team and customer finally agree on a particular prototype, the
development of the final product takes minimum time compared with other software
development paradigms.
• Analysis, design or coding errors by the development team can be identified early in
the development phases, and can be rectified without delay. It improves the overall
life cycle of the software.
• Of all the traditional development paradigms, customer involvement is at its highest
level in the prototyping model. In all iterations, customer feedback is considered and
incorporated in the next prototype iteration.
• Prototyping is one of the most flexible development paradigms in terms of incorpo-
rating changes or extensions to the requirements.
1.4.3.1.2 Critiques
• One of the major criticisms of the prototype paradigm concerns long-run quality
issues. In the rush of delivering operational software, quality is sometimes compro-
mised. To fulfill the customer’s demands as soon as possible, inefficient, incompetent
or poor programming languages can be chosen or inappropriate databases selected,
or the developer may choose technologies for their own convenience rather than to
meet the demands of the proposed software.
• If the customer’s demands change too frequently, then the time taken to develop the
prototype is longer than the time to develop the actual software.
• Sometimes, it is difficult to convince other stakeholders about the prototype develop-
ment paradigm. If after a while the customer is informed that the product developed so
far is only a prototype, they may raise objections regarding the time and cost invested.
• For large and complex projects, the prototype model may become unmanageable.
After the completion of each spiral cycle, some milestones are achieved and new mile-
stones are established (Figure 1.7).
FIGURE 1.7
The spiral development paradigm: each complete cycle moves from requirements gathering and planning through risk analysis and prototyping to development, ending in milestones and a plan for the next cycle. (Adapted from Boehm 1986.)
1.4.3.2.1 Key Findings
• The spiral paradigm is especially useful when the cost factor as well as the risk factor
in the software is high. It assesses the risk involved in the software in each iterative
cycle. Risk assessment early on in development may save many final-stage unfore-
seen events, as any risk can delay the schedule or even take the software over
budget.
• The spiral paradigm is one of the most interactive development paradigms, with the
involvement of the customer highly appreciated and encouraged. Customers are
involved in almost every phase of the life cycle of their product. They not only provide inputs but are also entitled to review the product and record their feedback.
• The spiral paradigm provides flexibility in terms of adding and/or changing require-
ments in every cycle. New requirements or changes in the previous requirements can
be easily accommodated in the next development cycle.
• The spiral paradigm is a never-ending paradigm. Even after the delivery of the soft-
ware, it is still adaptable and can accept changes, modifications and enhancements in
the running software.
1.4.3.2.2 Critiques
• To assess and mitigate the risk, experts are required. If risk is not evaluated properly
or identified in time, it may generate serious consequences.
• It is not suitable for small-scale projects. The spiral paradigm requires a sufficiently
large amount of both time and cost to plan, review and assess each cycle, which may
not be cost-effective for small projects.
• The spiral paradigm may generate a huge amount of paperwork, as each phase con-
sists of various changes, prototypes and similar intermediate versions.
• Sometimes it is hard not only to establish realistic milestones but also to achieve
them, as the spiral paradigm supports continuous extensions and changes in the
software.
• It is a comparatively costly development paradigm, due to the involvement of risk
assessment features, the building and evaluation of prototypes in each cycle, and
continuous changes suggested by the customer.
FIGURE 1.8
Advanced software engineering paradigms.
i. “Individuals and interactions over processes and tools,” that is, people and their interactions are given preference over development processes and tools.
ii. “Working software over comprehensive documentation,” that is, a focus on giving
working and operational deliverables rather than generating documents.
iii. “Customer collaborations over contract negotiations,” that is, the customer is
involved in the project and developers have ownership of the software rather than
just developing and delivering the software to the customer.
iv. “Responding to change over following a plan,” that is, changes in the software at any
stage are welcomed and adaptations made rather than waiting for the next phase
after the review, as is the case in traditional development paradigms.
To attain and realize the agile paradigm in the best way, the “Manifesto for Agile
Development” describes the 12 principles of agility (Agile Alliance 2003, Fowler and
Highsmith 2001), as:
i. “Highest priority of agile paradigm is to satisfy the customer through early and con-
tinuous delivery of valuable software.”
ii. “Agile paradigm welcomes changing requirements, even late in development. Agile
processes harness change for the customer’s competitive advantage.”
iii. “Deliver working software frequently, from a couple of weeks to a couple of months,
with a preference to the shorter timescale.”
iv. “Business people and developers must work together daily throughout the
project.”
v. “Build projects around motivated individuals. Give them the environment and sup-
port they need, and trust them to get the job done.”
Introduction to Software Engineering 15
vi. “The most efficient and effective method of conveying information to and within a
development team is face-to-face conversation.”
vii. “Working software is the primary measure of progress.”
viii. “Agile processes promote sustainable development. The sponsors, developers, and
users should be able to maintain a constant pace indefinitely.”
ix. “Continuous attention to technical excellence and good design enhances agility.”
x. “Simplicity, the art of maximizing the amount of work not done, is essential.”
xi. “The best architectures, requirements, and designs emerge from self–organizing
teams.”
xii. “At regular intervals, the team reflects on how to become more effective, then tunes
and adjusts its behaviour accordingly.”
Agility is a philosophy rather than just a development paradigm. It is not about simply
following the defined plan or predefined activities and operations. The agile paradigm
defines the process of development as a collaborative endeavor among stakeholders of the
software, self-organizing and motivated teams working in a healthy environment to
achieve a common goal: customer satisfaction.
In the agile development paradigm, large and complex software is divided into smaller
and manageable increments to avoid the rush of delivering the full-fledged software at the
final stage. Quick design and rapid implementation are performed by expert teams mak-
ing the deliverable available for customer review.
The agile paradigm is actually a methodology, a philosophy of development. There are
various frameworks available to implement this methodology. Some of the best-known
frameworks are:
• Scrum
• Extreme Programming (XP)
• Crystal
• Lean Software Development (LSD)
• Adaptive Software Development (ASD)
• Dynamic Systems Development Method (DSDM)
• Feature-Driven Development (FDD)
1.5.1.1 Key Findings
• Unlike traditional paradigm teams consisting of analysts, designers and developers,
in the agile development paradigm, the team consists of all the stakeholders of the
software, that is, the customer or the customer’s representative is now an integral
part of the development team.
• “Customer’s stories” are replaced by “customer’s voice.” Rather than documenting
the requirements of the customer, here the customer is involved in the elicitation,
planning and analysis of the requirements as a member of the development team.
• “Customer satisfaction” is at the heart of the agile development paradigm. If the cus-
tomer is satisfied with the deliverable product, its cost and the schedule, then the
agile paradigm is on the right track.
1.5.1.2 Critiques
• All the emphasis is on customer satisfaction, adaptability to change and developer
skills, but there is little focus on developers’ or software engineers’ motivation or, in
Alistair Cockburn’s words, “the agile alliance forgets the frailties of the people who
build computer software” (Cockburn 2002).
• While delivering rapid increments, sometimes it is difficult to assess the actual effort
and resources required for large and complex software.
• With the main emphasis on adaptation to changes suggested by the customer, imple-
mentation of those changes, and rapid delivery, detailed design and detailed docu-
mentation may be neglected.
• If the development team or the customer is not clear about the scope of the require-
ments, or the customer is not satisfied with the increments and makes changes quite
frequently, agile may become an unmanageable or even interminable process.
• Agile demands experts and experienced designers and developers in the develop-
ment team, who can design and implement new or changed requirements efficiently
and as quickly as possible. The success of the agile development paradigm depends
to a considerable extent on the availability of these domain experts.
• For low-budget or small-scale projects, agile is not the right development paradigm.
1. One of the major reasons for growth in software size is the repetition of functionalities in every component or module of the software. Sometimes it is a requirement of the application
that some functions are required in all components of the software, such as “memory
management,” “logging,” “resource sharing,” “error handling,” “exception handling,”
“business rule,” “transaction processing,” and “real-time constraints.” The problem
arises when these repeated functions require regular changes and updates to every
module in which they appear. Because of the increase in complexity and size, these small function codes are neglected, causing problems in the software.
2. Sometimes adding or modifying a single functionality/module/component requires
almost all the software modules to be changed.
In the aspect-oriented paradigm, these elements are called “aspects” or “concerns.” The
aim of aspect-oriented software is to modularize or isolate these complex “crosscutting”
concerns or aspects. The aspect-oriented development paradigm is a set of processes
and techniques for identifying, defining, specifying, designing and constructing
“aspects.” “Aspect oriented development paradigm is the mechanisms beyond subrou-
tines and inheritance for localizing the expression of a crosscutting concern” (Elrad,
Filman and Bader 2001).
Common terms used in the implementation of the aspect-oriented paradigm are as
follows:
• Aspect: From the implementation point of view an aspect can be a module or a class
that contains crosscutting concerns from different modules.
• Joinpoint: Joinpoints are well-defined points in the program that act as a junction of two functionalities or functions, such as a method call, the entry point of a method, the exit point of a method, throwing an exception, and so on.
• Advice: Advice is an action or set of instructions that are executed on a joinpoint and
initiated by an aspect.
• Pointcut: Pointcuts are groups of joinpoints that are checked before an advice is
executed.
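To ground these terms, the sketch below isolates a crosscutting logging concern in plain Java using a dynamic proxy. It is only an illustrative approximation, and the names AccountService, AccountServiceImpl and LoggingAspect are hypothetical: dedicated aspect-oriented tools such as AspectJ weave advice into joinpoints at compile or load time, whereas the proxy here intercepts calls at run time to similar effect.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// A business component; note that logging is NOT written into it.
interface AccountService {
    void transfer(String from, String to, double amount);
}

class AccountServiceImpl implements AccountService {
    public void transfer(String from, String to, double amount) {
        System.out.printf("transferring %.2f from %s to %s%n", amount, from, to);
    }
}

// The "aspect": the logging concern lives in one module instead of being
// scattered across components. Every intercepted method call is a joinpoint;
// the statements before and after the call play the role of the advice.
class LoggingAspect implements InvocationHandler {
    private final Object target;
    LoggingAspect(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        System.out.println("ENTER " + method.getName());  // before-advice
        Object result = method.invoke(target, args);      // proceed at the joinpoint
        System.out.println("EXIT  " + method.getName());  // after-advice
        return result;
    }
}

public class AopSketch {
    public static void main(String[] args) {
        AccountService service = (AccountService) Proxy.newProxyInstance(
            AccountService.class.getClassLoader(),
            new Class<?>[] { AccountService.class },
            new LoggingAspect(new AccountServiceImpl()));
        service.transfer("A", "B", 100.0);  // logging applied without touching the component
    }
}

In this analogy, a pointcut would be a rule selecting which of the intercepted methods the handler actually instruments (here, all of them).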
1.5.2.1 Key Findings
• Aspect-oriented development helps to isolate core concerns from other concerns. It is
helpful to identify important and fundamental areas of code where crosscutting
aspects can be identified and designed.
• Aspect identification and design is not limited to the aspect-oriented paradigm;
rather, it can be used with any other paradigm to enhance the efficiency of the code.
• It helps to modularize the overall architecture of the design which ultimately helps to
maintain the software.
• This paradigm encourages the reuse of coding constructs, so that the time and cost of development can be significantly reduced.
• It complements and extends the object-oriented development paradigm concepts.
• It promotes the philosophy of “separation of concerns.”
1.5.2.2 Critiques
• It is difficult to identify and define concerns and aspects that actually assist with
crosscutting or modularization, hence an expert team is required.
• Since this paradigm is still evolving, many concepts and concerns remain unresolved.
• If concerns are not specified and designed well, this paradigm may create overheads.
Even where aspects are well designed, it is still a time-consuming process to write,
track, modify and maintain concerns.
• For small-scale applications it is easy to specify and design concerns, but for large
and complex software it is very difficult to manage aspects.
• The aspect-oriented paradigm provides little documentation support; in practice, concerns and aspects are confined to the code.
• Planning: Planning is undertaken with special care, such that available increments
can be easily adapted and integrated into the current increment.
• Requirement analysis: Requirements can be collected from any traditional or
advanced methods and tools.
• Box-structure specification: Specifications are defined using box structures.
According to Hevner and Mills (1993), box structures “isolate and separate the cre-
ative definition of behavior, data, and procedures at each level of refinement.” Three
types of box structure are used to specify the software in the cleanroom paradigm:
1. Black box specifies the behavior of the system when a function is applied to the set of inputs. Here control is focused on the inputs provided and the outputs received.
2. State boxes are similar to encapsulated objects that put input data and transition operations together. They are used to describe the states of the system.
3. Clear box describes the operations that transmit the inputs to the outputs.
A minimal code sketch illustrating these three box structures follows this list.
• Formal design and correctness verification: Designs are achieved using formal spec-
ification methods in addition to traditional design methods. Formal design methods are used at architectural as well as component level. The cleanroom paradigm rigorously focuses on verification of the design. Mathematical proofs and logic are used to establish the correctness of the proposed design.
FIGURE 1.9
One increment of cleanroom software development: planning of the software/increment; formal design with correctness verification; code generation with walkthroughs/code inspection; statistical testing with empirical data/samples; and certification of the increment, all governed by a test plan.
• Code generation, inspection and walkthroughs: The cleanroom paradigm translates
the corrected design into appropriate programming languages. Code inspections, deci-
sional charts and technical reviews are conducted to check the correctness of the code.
• Statistical testing with empirical data: In the cleanroom software development para-
digm, testing is conducted using statistical data sets and samples. Testing methods
are not only standard but also mathematically verified. Metrics support mathemati-
cal validations. Exhaustive and regression testing is performed rigorously.
• Certification of increments: Once all the phases are tested and verified, the certifica-
tion of the increment by the development team begins. All the necessary tests are
conducted and the increment is released so that it can be integrated with other certi-
fied increments.
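The following minimal Java sketch, promised in the box-structure list above, shows how the three refinement levels can be read onto ordinary code. The running-average component and the names RunningAverage and WindowedAverage are hypothetical illustrations, not taken from the cleanroom literature: the interface is the black box (inputs to observable responses), the encapsulated fields and their transitions form the state box, and the method body is the clear-box procedure.

import java.util.ArrayDeque;
import java.util.Deque;

// Black box: only the stimulus-to-response behavior is specified.
interface RunningAverage {
    double accept(double nextValue);  // input -> observable output
}

// State box: refines the black box by making the retained state explicit
// (the window of recent values) together with the transition on each input.
class WindowedAverage implements RunningAverage {
    private final Deque<Double> window = new ArrayDeque<>();  // encapsulated state
    private final int capacity;
    private double sum = 0.0;

    WindowedAverage(int capacity) { this.capacity = capacity; }

    // Clear box: the explicit procedure that transmits inputs to outputs.
    @Override
    public double accept(double nextValue) {
        window.addLast(nextValue);        // state transition
        sum += nextValue;
        if (window.size() > capacity) {
            sum -= window.removeFirst();  // discard the oldest value
        }
        return sum / window.size();       // observable response
    }
}

Each level refines the previous one without changing the externally observable behavior, which is what makes stepwise correctness verification of the design tractable.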
1.5.3.1 Key Findings
• In the cleanroom development paradigm, test plans are prepared just after the
requirements have been collected and continuously throughout the development
phases. It focuses on error prevention from the beginning.
• Regressive correctness verifications and code inspections are performed on the
design and coding of the proposed software to provide a certified level of reliability
to the software.
• The mathematical basis of the paradigm makes the developed software less prone to
small errors.
• It reduces the effort as well as the cost of testing, which occurs in almost the final
stages of traditional development paradigms.
• The use of box structures makes the paradigm more reliable.
1.5.3.2 Critiques
• To apply the cleanroom software paradigm, trained and experienced people are
required who have a clear understanding of formal methods of development.
• It is a time-consuming methodology compared with other paradigms, since formal
specification, inspections, peer review and statistical testing take a considerable
time.
• It can be difficult to convince customers of how this paradigm works. The concepts of
mathematical proofs and box-structure representations are hard to understand.
• It is comparatively less practiced by software engineers due to its complex structure
and mathematical verification methods.
1.5.4.1 Key Findings
• In the component-based paradigm, the emphasis on reusability ultimately results in
a shorter development life cycle. Component-based development supports modular-
ity, or componentization, which helps control the overall design of the software and
ultimately provides better maintainability to the developed software.
• Due to the nature of componentization, the component-based paradigm supports
parallel construction of components to increase productivity in the development.
• The component-based paradigm supports reusability not only during but also after
development. Components are deposited in the repository for future use.
1.5.4.2 Critiques
• Component-based development supports assembly of pre-developed components.
Assembly and integration is a major challenge in this paradigm. No standard mecha-
nism has been defined for the integration of components into the software architec-
ture. The only mechanism available is the design document.
• Interaction among components is both crucial and time-consuming. Interaction
issues are complex and generate complexity in the overall design of the software.
• Testing is comparatively time-consuming and costly as the concept is based on inte-
gration and interaction of components. The main focus is on integration testing of
components.
• Components can be sourced from different vendors or from a third party. Component
quality may therefore vary, ultimately affecting the overall quality of the developed
software.
• Experts are required to identify and integrate components according to the architec-
tural design and requirements of the software.
For further discussion of the component-based development paradigm, see Section 1.6.
FIGURE 1.10
Component-based software engineering framework: components provided by third parties and newly developed components are brought together to form the application.
CBSE builds software from pre-existing, reusable components which are integrated through error-free interfaces (Shepperd 1988, Heineman
and Councill 2001, Lin and Xu 2003, Capretz 2005, Jiangou et al. 2009). The objectives of
CBSE are to develop extremely large and complex software systems by integrating com-
mercial off-the-shelf components (COTS), third-party contractual components and newly
developed components to minimize development time, effort and cost (Boehm et al. 1984,
Brereton and Budgen 2000). CBSE offers an improved and enhanced reuse of software
components with additional properties including flexibility, extendibility and better ser-
vice quality to meet the needs of the end user (Basili and Boehm 2001, Jianguo et al. 2011,
Vitharana 2003, Kirti and Sharma 2012).
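As a rough illustration of the interface-based integration this paragraph describes, the sketch below composes an application from interchangeable components; the names SpellChecker, CotsSpellChecker and Editor are hypothetical. A COTS component, a third-party contractual component or a newly developed one can be substituted freely because the application depends only on the published interface.

import java.util.List;

// The component contract: services are exposed only through an interface.
interface SpellChecker {
    List<String> findMisspellings(String text);
}

// A COTS or third-party component: its internals are a black box to the integrator.
class CotsSpellChecker implements SpellChecker {
    @Override
    public List<String> findMisspellings(String text) {
        // Vendor-supplied logic would live here; stubbed for the sketch.
        return List.of();
    }
}

// The application is composed from components rather than coded from scratch.
class Editor {
    private final SpellChecker checker;  // supplied at integration time

    Editor(SpellChecker checker) { this.checker = checker; }

    void check(String document) {
        for (String word : checker.findMisspellings(document)) {
            System.out.println("misspelled: " + word);
        }
    }
}

Replacing CotsSpellChecker with a newly developed implementation requires no change to Editor, which is the reuse-with-substitution property CBSE aims for.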
The CBSE development paradigm is used to develop generalized as well as specific
components. Four parallel processes—new component development, selection of pre-
existing components from the repository, integration of components, and control and
management—are involved in the creation of more than one application concurrently, as
described in Figure 1.11. With each process there must be a feedback method to address
problems and errors arising in component selection, new component development, inter-
action and integration errors among the components, and their side effects. To manage all
these parallel activities, there must be a control procedure or management procedure
which will not only assess the development process but will also manage the requirement
analysis, selection of components, integration of components and, most importantly, the
quality of components submitted to the repository for future reuse.
Jacobson, Griss and Jonsson (1997) define four similar concurrent processes associated
with development of various applications in CBSE. These processes are creation, man-
agement, support and reuse. The creation process involves developing new applications
FIGURE 1.11
Concurrent component-based software development processes (user's/customer's requirements feeding a components repository).
by reusing existing software components. The management process manages the activi-
ties of the development, including selection of components according to requirements,
cost of component selection and schedule of new application development. The support
process provides help and maintenance activities in the development of new applica-
tions and provides existing components from the repository. The reuse process collects
the requirements and analyzes them to select components from the repository, and is
responsible for the actual development of components through the property of
reusability.
A series of workshops was organized to define approaches, challenges and implications of CBSE. The first workshop on CBSE was
held in Tokyo in 1998. The major achievements of the initial phase are as follows (Heineman
and Councill 2001):
a. Specific and concrete terms in the context of CBSE were defined, including “compo-
nent architecture,” “component specification,” “component adaptation,” “compo-
nent acquisition,” and so on.
b. These defined topics created new research areas and research groups.
c. The number of academic and research publications increased and organizations such
as ACM started to publish in the area of CBSE.
• Reusability: Reusability is the focal property of CBSE. Krueger (1992) defines soft-
ware reusability as “the process of creating software systems from existing software
rather than building them from scratch.” Software reuse is the process of integrating
predefined specifications, design architectures, tested code, or test plans with the
proposed software (Johnson and Harris 1991, Maiden and Sutcliffe 1991, Berssoff and
Davis 1991). CBSE relies on reusing these artifacts rather than re-developing them.
Components are developed in such a way that they can be heterogeneously used and
then reused in various environments.
• Composability: One of the fundamental characteristics of CBSE is that the compo-
nents are the reusable, composable software entities, that is, applications are com-
posed of different individual components. These individual reusable components are
designed in such a way that they can be reused in composition with other compo-
nents in various applications with minimum or no fabrication. Components are com-
posed of components, that is, a component is itself made up of components, which
are further made up of other components, and so on (Atkinson et al. 2002). A compo-
nent can be a part of one or more components; a small sketch of this recursive
composition follows this list.
• Shorter development cycle: The component-based development paradigm follows
the “divide, solve and conquer” approach. In this paradigm, complex and bulky
applications are divided into smaller, more manageable units or modules. Then,
rather than starting coding of a complete module from the first line, existing elements
are sought and assembled that satisfy the requirements of the module under consid-
eration. This increases software development speed. In addition, several modules
can be implemented concurrently, regardless of location or context. Thus, develop-
ment time is saved and the development cycle becomes shorter.
• Maintainability: The effort required to add new functionalities to the application, or
to modify, update or remove old features from the software, is referred to as mainte-
nance. Since CBSE-based software is made up of reusable and replaceable compo-
nents, we can add, update, remove or replace components according to the
requirements of the software. Maintaining composable and independent components
is much easier than maintaining monolithic software.
• Improved quality and reliability: Component-based technology ensures superior
quality as the CBSE integrates and couples pre-tested and qualified components.
These components are specifically tested at least at the unit level. During their inte-
gration with other pre-tested components, the developer performs integration as
well as system tests (Brown and Wallnau 1998). This repeated, regression-style testing makes
component-based applications more robust and improves the quality of the product.
More broadly, the effort, cost and time of testing are noticeably reduced. Components
are independently developed, deployed in various contexts at the same time with
minimal or no fabrication and integrated according to the predefined architecture;
hence, it is assumed that there are no unwanted interactions among the components
to make them unreliable. All the interaction paths are predefined so the reliability
and predictability of components is increased.
• Flexibility and extendibility: Software developers have the choice of customizing,
assembling and integrating the components from a set of available components accord-
ing to their requirements. Replaceable and composable components are easy to add,
update, modify or remove from the application without modifying other components.
Error navigation and fixing are relatively easy, as they are limited to the component level only.
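To make the recursive composition mentioned above concrete, here is a minimal sketch (ours, not from the literature cited in this section; all component names are hypothetical) of a component assembled from sub-components:

```python
# Minimal sketch of recursive component composition: a component may
# itself be assembled from sub-components (names are illustrative only).

class Component:
    def __init__(self, name, sub_components=None):
        self.name = name
        self.sub_components = list(sub_components or [])

    def flatten(self):
        """Yield this component and, recursively, every sub-component."""
        yield self
        for sub in self.sub_components:
            yield from sub.flatten()

logging = Component("Logging")
validation = Component("CardValidation", [logging])
payment = Component("Payment", [validation, Component("ReceiptPrinter")])

print([c.name for c in payment.flatten()])
# -> ['Payment', 'CardValidation', 'Logging', 'ReceiptPrinter']
```

Note that "Logging" appears inside "CardValidation" but could equally be composed into other components, reflecting the point that a component can be part of one or more components.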
1.6.3 Componentization
Componentization is the method of determining the number of components in a specific
application developed through component-based development. Componentization
addresses the issue of maintaining a balance between the number of components and the
complexity factors of the system. Essentially, the level of componentization is equivalent to
the level of requirement sufficiency. In determining the level of requirement sufficiency, we
consider as many components as are adequate to solve the software application’s inten-
tion. Figure 1.12 illustrates componentization vs integration cost.
If we divide the problem into a large quantity of components providing small function-
alities, it will increase both the cost of integration and the interaction effort. Not only the
cost but also the number of interactions, the coding complexity, testing effort and number
of duplicate test cases will also increase. If an application is componentized with fewer
components each providing a number of functionalities, it will cost in terms of testing as
well as maintenance. It is desirable to achieve a minimum cost region so that cost and effort
can be balanced against the number of components.
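The trade-off can be made concrete with a toy cost model (our illustration; the coefficients are arbitrary and are not taken from the book or from Pressman): integration cost grows roughly with the number of potential pairwise interactions, while the cost of testing and maintaining each oversized, multi-function component shrinks as the system is split further.

```python
# Toy model of the Figure 1.12 trade-off (coefficients are arbitrary):
# total cost = integration cost, growing with pairwise interactions,
# plus the cost of testing/maintaining bulky components, which shrinks
# as the split gets finer.

def total_cost(n, interaction_unit=2.0, functionality_cost=1500.0):
    integration = interaction_unit * n * (n - 1) / 2  # potential interactions
    per_component = functionality_cost / n            # bulky components cost more
    return integration + per_component

costs = {n: total_cost(n) for n in range(2, 60)}
best = min(costs, key=costs.get)
print(f"minimum-cost region near {best} components, cost {costs[best]:.1f}")
```

With these illustrative coefficients the minimum falls near nine components; the point is only that some intermediate level of componentization minimizes total cost.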
FIGURE 1.12
Componentization vs integration cost: as the number of components grows, integration effort, number of interactions and complexity cost per reused component rise, while development and testing effort per component fall, yielding a minimum cost region. (Adapted from Pressman 2005.)
1.7 Components
Dictionaries define a “component” variously as a constituent part, element or ingredient of a larger whole.
The core element of CBSE is the component itself. Components are the basic unit of reusabil-
ity and the building blocks of component-based software. They are the focal point and prime
operative part of the component-based development paradigm. In the literature, components
have been defined by various researchers in a variety of ways. Some leading definitions are:
Definition 11. Meyer: “A component is a software element (modular unit) satisfying the
conditions: a) it can be used by other software elements, b) it possesses an official
usage description, which is sufficient for a client author to use, and c) it is not tied to
any fixed set of clients” (Meyer 2003).
From an analysis of these prominent and diverse definitions, one can deduce that they all
contain a number of universal points. In the context of this work and in the light of these
universal directives, we can identify a general definition of components as follows:
1.7.1 Types of Components
A component is an identifiable and functionally reusable unit that can be reused at various
levels with different degrees of reusability. Bennatan (2003) describes three classes of
component:
1.7.2 Characteristics of Components
To gain as well as to provide valuable information, components require three fundamental
properties: interfaces, services and deployment techniques.
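As a rough illustration (ours; the field and component names are hypothetical), a component description carrying these three properties might look like the following:

```python
# A skeletal illustration of the three fundamental properties of a
# component: a provided interface, the services behind it, and
# deployment metadata describing where/how the component runs.

from dataclasses import dataclass, field

@dataclass
class ComponentSpec:
    name: str
    interfaces: list = field(default_factory=list)   # operations exposed to clients
    services: dict = field(default_factory=dict)     # interface -> implementation notes
    deployment: dict = field(default_factory=dict)   # packaging/runtime requirements

spec = ComponentSpec(
    name="CurrencyConverter",
    interfaces=["convert(amount, from_code, to_code)"],
    services={"convert": "table-driven rate lookup"},
    deployment={"packaging": "jar", "runtime": "JVM >= 11"},
)
print(spec.name, spec.interfaces)
```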
1.7.3 Component Repository
In CBSE, reusability not only applies at the time of development but is also maintained
after development. When components are developed by the development team, devel-
oped/contracted by the third party or purchased, they must be stored in the repository for
future use. The repository is the component database which contains the following:
1.7.4 Component Selection
The selection of appropriate components for the proposed software is crucial in many
respects. Proper and accurate selection of components makes the software economic, helps
to shorten development time, increases reliability and makes the overall development much
smoother. However, in order to select one component from a number that are available with
similar intentions, selection and verification criteria are needed. At present the only criteria
available for component selection are the customer requirements. Developers generally
look for a component that fits into the software design without considering other selection
parameters. Apart from the requirements, major issues in component selection are:
1.7.5 Component Interaction
In component-based software development, interaction among components is necessary
as they are integrated to provide features and functionalities for the proposed software.
Components are designed to interact with each other according to the requirements of the
software. Component interaction is not only necessary but plays a crucial role in the design
of the software architecture. Components are assumed to be context independent, lan-
guage independent and vendor independent. Researchers have used various methods,
including graph theory notation, to show the interaction among components. UML
includes component interaction diagrams to denote such interactions.
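A hedged sketch of the graph-theoretic view follows (component names are hypothetical): a directed edge u → v records that component u invokes services of component v, and fan-out/fan-in then give a crude indicator of the coupling discussed further in Section 1.7.6.

```python
# A small sketch of the graph view of component interaction: a directed
# edge u -> v means component u invokes services of component v.

from collections import defaultdict

interactions = {
    "UI":        ["Auth", "Catalog"],
    "Auth":      ["UserStore"],
    "Catalog":   ["UserStore", "Pricing"],
    "Pricing":   [],
    "UserStore": [],
}

fan_in = defaultdict(int)
for src, targets in interactions.items():
    for dst in targets:
        fan_in[dst] += 1

for comp in interactions:
    print(f"{comp}: fan-out={len(interactions[comp])}, fan-in={fan_in[comp]}")
```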
Some fundamental issues with the interaction of components are as follows:
1.7.6 Component Dependency
Interaction and integration among components create dependencies among them.
Components depend on each other for functionalities and services. Component depen-
dency governs the level of coupling among the components. As the dependency increases,
interaction among components will also increase. These interaction dependencies generate
complexity in the software architecture. Components can be dependent on each other for
various reasons, including:
1.7.7 Component Composition
Composition defines the capability of the component to be integrated with other compo-
nents. Composition among components results in component-based software. The prop-
erty of composability defines the usability of the component in the software. It is one of the
fundamental properties of the component. Overall behavior of the software depends on
the flexibility or rigidity of the components in respect of composability. Composition
allows the component to behave according to the structure of the design. The composition
of a component is its individual property. In component-based software development,
components are assumed to be self-composable as well as composable independently.
Some basic issues regarding component composition are:
Summary
This chapter is divided into two sections: the first focuses on the basics of software engi-
neering; and the second is devoted to component-based software engineering.
The first section covers software engineering definitions used in academia and the
industry. Various models suggested by researchers are discussed in this section. We have
divided these models into two categories: traditional and advanced engineering models.
The second section of this chapter considers the basics of component-based software as
well as current issues in this domain. The evolution of component-based software engi-
neering is discussed in detail, with its advantages and disadvantages. This chapter also
focuses on the features and attributes of the basic unit of this domain, that is, the
component.
References
Agile Alliance. 2003. Principles behind the Agile Manifesto. http://agilemanifesto.org/principles.html.
(Accessed on March 15, 2019.)
Atkinson, C. et al. 2002. Component-Based Product-Line Engineering with UML. Addison-Wesley,
London.
Basili, V. R. and B. Boehm. 2001. “Cots-Based Systems Top 10 List.” IEEE Computer, 34(5): 91–93.
Bauer, F. et al. 1968. Software Engineering: A Report on a Conference Sponsored by NATO Science Committee.
NATO, Brussels.
Beck, K. et al. 2001. “Manifesto for Agile Software Development.” www.agilemanifesto.org/.
(Accessed on April 22, 2019.)
Bennatan, E. M. 2003. Software Project Management: A Practitioner’s Approach. McGraw-Hill, New
York.
Berssoff, E. H. and A. M. Davis. 1991. “Impacts of Life Cycle Models on Software Configuration
Management.” Communications of the ACM, 8(34): 104–118.
Boehm, B. W. 1981. Software Engineering Economics. Prentice Hall, Englewood Cliffs, NJ.
Boehm, B. W. 1986. “A Spiral Model for Software Development and Enhancement,” ACM Software
Engineering Notes, 14–24.
Boehm, B. W., M. Pendo, A. Pyster, E. D. Stuckle, and R. D. William. 1984. “An Environment for
Improving Software Productivity.” IEEE Computer, 17(6): 30–44.
Booch, G. 1987. Software Components with Ada: Structures, Tools and Subsystems. Benjamin-Cummings,
Redwood, CA.
Bradac, M., D. Perry, and L. Votta. 1994. “Prototyping a Process Monitoring Experiment.” IEEE
Transactions on Software Engineering, 20(10): 774–784.
Meyer, B. 2003. “The grand challenge of trusted components.” In Proceedings of IEEE ICSE, Portland,
OR, 660–667.
Nadeesha, G. R. 2001. History of Component Based Development. infoeng.ee.ic.ac.uk/~malikz/surprise2001/nr99e/article1. (Accessed on April 7, 2019.)
Object Management Group. 2000. “Unified Modeling Language Specification. Version 1.3.” Retrieved
from https://www.omg.org/spec/UML/1.3/PDF. (Accessed on March 20, 2019.)
Oxford Dictionary. 2018. https://www.oxfordlearnersdictionaries.com/definition/american_english/
component. (Accessed on June 4, 2018.)
The Oxford English Dictionary. 2018. https://www.lexico.com/definition/component. (Accessed on
June 4, 2018.)
Pressman, S. R. 2005. Software Engineering: A Practitioner’s Approach, 6th edn. TMH International
Edition, Boston, MA.
Royce, W. W. 1970. “Managing the Development of Large Software Systems: Concepts and
Techniques.” In Proceedings of WESCON, Los Angeles, CA.
Schach, S. 1990. Software Engineering. Vanderbilt University, Aksen Association.
Shepperd, M. 1988. “A Critique of Cyclomatic Complexity as Software Metric.” Software Engineering
Journal, 3(2): 30–36.
Sparling, M. 2000. “Lessons Learned—Through Six Years of Component-Based Development.”
Communications of the ACM Journal, 43(10): 47–53.
Szyperski, C. 1999. Component Software—Beyond Object-Oriented Programming. Addison-Wesley,
Boston, MA.
Szyperski, C. and C. Pfister. 1997. “Component-Oriented Programming.” In Mühlhäuser, ed.,
WCOP’96 Workshop Report, dPunkt.Verlag, 127–130.
Tiwari, U. K. and S. Kumar. 2014. “Cyclomatic Complexity Metric for Component Based Software.”
ACM SIGSOFT Software Engineering Notes, 39(1): 1–6.
Tiwari, U. K. and S. Kumar. 2016. “Components Integration-Effect Graph: A Black Box Testing and
Test Case Generation Technique for Component-Based Software.” International Journal of
Systems Assurance Engineering and Management, 8(2): 393–407.
Vitharana, P. 2003. “Design Retrieval and Assembly in Component-Based Software Development.”
Communications of the ACM, 46(11): 97–102.
Webster’s Dictionary. 2018. https://www.merriam-webster.com/dictionary/component. (Accessed on
June 4, 2018.)
2
Component-Based Development Models
2.1 Introduction
This chapter discusses component-based software engineering development models pro-
posed by eminent people from the fields of industry, research and academia. These models
provide a broad understanding of the component-based development process. In compo-
nent-based software development, huge and complex software is divided up into modules
or components based on functionality, services or similar criteria. Almost every model
focuses on components being reused and then deposited in the repository for future use.
As we know, components and especially reusable components are the building blocks of
component-based development. Reusable components are assembled and integrated to
fulfill the requirements of the proposed software. Our focus here is on utilizing reusability
rather than new development from scratch.
• Phase 1: Domain Engineering—The first phase of the Y model, which starts with the
commencement of the software, is domain engineering, conducted not only to con-
figure essential requirements and constraints, but also to analyze the domain of the
application. In this phase basic entities and their functions are identified and defined,
reusability and other component features are sought, and an abstract model reflect-
ing a real-world solution is depicted.
• Phase 2: Frameworking—In this phase the attempt is to define a generic architecture
for the proposed software.

FIGURE 2.1
Y model for component-based software (domain engineering, frameworking, assembly, archiving, system analysis, design, implementation, testing, deployment and maintenance, with selection/adaptation and catalog/storage activities). (Adapted from Capretz 2005.)

The authors try to define the structure of the application and
also the generic structure for components so that they can be reused in more than one
application. The frameworking phase attempts to:
a. Identify reusable components fit for the application.
b. Select candidate components.
c. Establish the semantic relationship among components.
d. Tailor components to fit into the framework of the software.
• Phase 3: Assembly—In this phase component selection or framework selection is
made based on the requirements of the specific application’s domain. Components
are selected, based on their reusability, from archived components stored in the
repository for reuse.
• Phase 4: Archiving—This is the process of submitting components to a repository for
future use. This model supports the philosophy of reusability in two ways: exploring
reusability during development and preserving reusability after development.
Archiving involves the following activities:
a. Promoting reusability during development by using reusable components.
b. Depositing reusable components in the repository for future use.
2.2.2 Critiques
• Although phases are defined and sub-activities are clearly mentioned, implementa-
tion details are not discussed and there is no mention of how these activities will
take place.
• Overlapping of activities in different phases is quite common.
• Component selection criteria are not defined.
2.3 BRIDGE Model
Ardhendu Mandal (2009) proposed a component-based model named BRIDGE contain-
ing 13 phases (Figure 2.2). His model contains almost all the development phases that a
model should contain. The BRIDGE model starts with requirements analysis, passing
through feasibility and risk analysis, architecture design, detailed design, pattern and
component search, coding, system building, validation, deployment and on-site testing,
and ending with the maintenance phase. Detailed descriptions of every phase are
provided.
• Phase 1: This is the initial phase of the model, which focuses on three broad activities:
requirement analysis, verification and specification.
• Requirement analysis: This activity starts with the gathering of requirements from cus-
tomers and uses standard requirement-gathering techniques. The requirements are
analyzed to remove deficiencies such as redundancy, ambiguity and inconsistency.
• Verification: After gathering and analyzing, the requirements are verified and the
exact requirements configured.
FIGURE 2.2
BRIDGE model for component-based software, with project management activities spanning all phases. (Adapted from Mandal 2009.)
• Specification: After verification, the requirements are well specified and docu-
mented in a software requirement specification that can be referenced in future.
This specification document works as an intermediary between developer and
customer.
• Phase 2: The focus of this phase is on proving the suitability of the application and
considering alternate solutions. The four activities in this phase are feasibility analy-
sis, risk analysis, verification and specification.
• Feasibility analysis: Feasibility analysis covers all types of feasibility, such as eco-
nomic, technical and operational feasibility. Systems analysts and the customer
representatives jointly finalize the constraints and suitability of the proposed
software.
• Risk analysis: Risks that may occur during and after development are identified
and recorded in the risk specification document.
• Verification: Verification of the feasibility and risk analyses is carried out to ensure
the optimal solution is achieved.
• Specification: The final activity in this phase is the preparation of documents
related to the issues identified and verified. Feasibility reports covering all types
of feasibility and risk specification documents are generated for future
reference.
• Phase 3: Phase 3, the beginning of the design phase, concentrates on designing the
high-level, abstract architecture of the application. This phase is divided into three
major activities: software architecture design (SAD), verification and specification.
There are no implementation-related issues in this phase.
• SAD: This is a high-level design architecture activity which focuses on identifying
the sub-systems and/or components of the proposed system. Communication
interfaces for these components and building blocks are also configured, so that
an abstract architecture of the complete application can be prepared. This archi-
tecture design should contain all the necessary functional requirements as defined
in the SRS document. Different stakeholders of the application are involved in
this activity.
• Verification: The design must now be verified to ensure it matches the require-
ments of the proposed system.
• Specification: The output of this phase is the SAD document, which includes all the
details of the abstract architecture design.
• Phase 4: In the BRIDGE model, the design phase is divided into two parts: abstract
design and detailed design. Phase 4 covers the detailed design of the proposed sys-
tem and includes the following activities:
• Detailed design: Covers the low-level design including the implementation details
of the application. The high-level or abstract design is translated into low-level
implementation. Algorithms can be implemented, and data structures identified
and defined in this phase.
• Verification: The detailed design is verified, using the SRS document to identify
any mismatch.
• Specification: Finally the verified design is documented in the software design
document, which is used as the basis for later phases.
• Phase 5: This phase covers the identification and selection of suitable components
from the repository. Activities performed in this phase are:
• Component selection: The goal of this activity is to select components according to
the proposed system design. Specific components fulfilling the software require-
ments as well as generalized components providing more than one function are
identified.
• Verification: Verification is carried out after component selection.
• Specification: Specifications are documented in the component specification
document.
• Phase 6: This phase is devoted to coding and testing the proposed software. The
major activities are:
• Coding: For modules and functionalities for which proper components are not
available, coding is performed using standard guidelines.
• Unit testing: During coding, components are tested individually. Unit testing is
the default testing performed on the components. These components must work
according to their specifications.
• Component submission: Newly developed components should be submitted to the
repository so that they can be reused.
• Verification: After testing, components are verified against the design documents.
They must demonstrate compatible behavior as defined in the design
documents.
• Specification: The end of this phase produces testing documents in the form of
unit-level test cases. Specifications of newly developed components are also
included.
• Phase 7: This phase defines the development of the system as a unit from the indi-
vidual sub-systems of previously developed components and newly developed
components or similar parts. This phase is time-consuming as it includes the follow-
ing activities:
• System building: Individual components are grouped together, according to the
design, to form the proposed application.
• Component integration: When different components are integrated, a number of
issues may arise, which are resolved in this phase.
• Integration testing: It may be that components working individually do not
work well when they are integrated. Such problems should be dealt with in
this phase.
• System testing: When the complete system is integrated, the highest level of test-
ing is performed to validate the behavior of the overall application. This type of
testing is called system testing.
• Verification: Verification is carried out on the basis of documents from previous
phases.
• Specification: Finally, documents are generated in the form of test plans, test
designs and test cases.
• Phase 8: This phase provides activities related to system validation, that is, the com-
patibility of the proposed application with the functional requirements mentioned in
the SRS document. Quality factors are also measured in this phase. At the end of this
phase a validation report is generated and verified.
• Phase 9: Phase 9 covers issues related to deployment and implementation of the
developed system at the customer’s site, including:
• Deployment and implementation: Deliver the system to the client and implement it
at their site. This installation and deployment may require help and support to be
provided to the client. During deployment, necessary changes should be made
in the application according to the customer’s requirements.
• Documentation and training: After the deployment of the application, clients and/
or users may need proper training and guidance. This activity emphasizes train-
ing and appropriate documentation for the client.
• Phase 10: The newly deployed system may generate new errors after installation, or
environmental factors like the customer’s hardware or other software may prove
incompatible with the software. Provision should be made to deal with such prob-
lems. This phase focuses on on-site system testing to deal with such unforeseen
events. The output of the phase is an on-site system testing report.
• Phase 11: This is the maintenance and user support phase, which starts after installa-
tion of the system at the client’s site. Maintenance may include various types of activ-
ities that help the client after deployment. Maintenance reports are generated so that
appropriate modifications can be performed at the client’s wish.
• Phase 12: This phase, the configuration management phase, includes tools and tech-
niques to manage and control modifications made to the proposed software in any
phase. Changes suggested by the customer are documented and made, cost-effec-
tively, in the next version. Documents generated during development of previous
phases are used to keep track of the proposed changes.
• Phase 13: The final phase of this development model is the project management
phase. Although it is mentioned last, it is in fact implemented from the beginning.
Project planning, monitoring and similar activities are performed to keep control and
manage the development process.
2.3.2 Critique
Mandal also provides the following critiques of his model:
• The model lacks implementation details. The development phases are mostly general
theoretical concepts.
• Pattern matching and component selection are mentioned as important activities, but
the matching criteria and component selection techniques are not discussed.
• Validation activity is mentioned but validation methods are not defined.
• This model appears to be both complex and time-consuming.
2.4 X Model
Gill and Tomar (2010) proposed a modified development process model for component-
based software. This model supports the concept of reusability during and after develop-
ment. The X model divides development into two broad approaches: composition based
and generation based (Figure 2.3).
This model contains eight phases: domain engineering, domain analysis and specifica-
tion, component and system architecture, design, coding and archiving, component test-
ing, assembly and system testing.
• Phase 1: Domain Engineering—Like the Y model, this model starts with the domain
engineering phase, devoted to analyzing the specific application’s domain so that com-
ponents can be identified, cataloged, developed or reused in the proposed application.
FIGURE 2.3
X model for component-based software, showing composition-based development (software development with reuse) and generation-based development (component development with and without modification) around a shared repository. (Adapted from Gill and Tomar 2010.)
If components are identified early enough, they can be reused in a variety of ways in
varied types of applications. Requirements of the system, constraints and other func-
tionalities are also considered in this stage.
• Phase 2: Domain analysis and specification—Gathered requirements as well as the appli-
cation's domain are analyzed to identify the general scope of the reusability of compo-
nents. In parallel, specifications are generated to record the findings of these two
phases.
• Phase 3: Component and system architecture—This phase includes two major activities:
defining the architecture of components and defining the architecture of the overall
application. The components’ architecture is designed in such a way that they can be
reused in many applications.
• Phase 4: Design—The design phase describes the implementation details of the
component as well as the overall system. These details include descriptions of
methods, functions, algorithms, data design, structure representation and similar
activities. This phase also includes the design or selection of interfaces used for
component interaction. The system design includes not only the selection of com-
ponents but also the design of components which are not available and need to be
developed.
• Phase 5: Coding and archiving—This phase includes the following activities:
• Implementing components selected from the repository.
• Making changes in the selected components according to the design requirements
of the proposed application, if required.
• Coding of components that are not available and that are designed by the devel-
opment team as a fresh component.
• Submission of newly developed components to the component repository for
future reuse in other applications.
• Phase 6: Component testing—Each component is tested individually in this phase.
Although components are tested by the developer during coding, at the end of
component development they are tested in regard to the requirements of the cus-
tomer or to match the requirement specification. All types of components are
tested in this phase, whether picked from the repository, modified or newly
developed.
• Phase 7: Assembly—The next activity is to assemble the components according to the
predefined design. All the components providing various functionalities are inte-
grated according to the system design. Well-defined interfaces are used to integrate
these components. During the assembly phase all documents generated in previous
phases are considered.
• Phase 8: System Testing—Integrated components are tested to ensure they behave
according to the pre-defined architecture. When components are assembled, however,
they may generate new and different types of problems. Components may perform
well individually, but after integration they may create errors or faulty outputs.
Components may be incompatible with each other or with the interfaces. Aspects of
the customer’s environment, such as hardware or other applications, may be incom-
patible with the proposed software. These types of issues are resolved in the final
phase of the component-based development model.
2.4.2 Critiques
• Implementation details are not discussed. Though phases are defined and sub-activi-
ties are clearly mentioned, there is no discussion of how these activities will take place.
• Selection criteria for components are not defined.
• No risk analysis activity is defined.
2.5 Umbrella Model
Dixit and Saxena (2011) proposed a component-based model named the Umbrella
model (Figure 2.4). The phases of the Umbrella model are similar to those of other
proposed component-based models.
FIGURE 2.4
Umbrella model for component-based software: requirement analysis, component specification and component selection carried out beneath an umbrella of testing or verification activities. (Adapted from Dixit and Saxena 2011.)
The Umbrella model divides software development into three steps: design, integration,
and deployment or run-time.
2.5.2 Critique
• This model lacks implementation detail. Most of the development phases are general
theoretical concepts.
• Component selection is mentioned as an important activity, but criteria for matching
and techniques of component selection are not discussed.
• Risks associated with developing software or components are not discussed at all.
There is no risk analysis phase in the model.
2.6 Knot Model
Chhillar and Kajla (2011) suggested a development model for component-based software
that emphasizes risk analysis in addition to reusability, modularity and feedback in every
phase (Figure 2.5). The model includes four phases: reusable component pool, new compo-
nent development, modification of existing components and development of component-
based software. When components are developed, they are submitted to the component
pool for future reuse. Components can be reused with or without modifications as per the
requirements of the software.
• Reusable component pool phase: All the components are stored and managed in the
component repository. When components are developed, they are submitted to the
pool for reuse in other applications. This pool contains all types of components,
including modified and newly developed.
• New component development phase: When requirements are not fulfilled by exist-
ing or modified components, new development is carried out. We can use any tradi-
tional model to develop these components. New component development starts with
cost estimation and risk analysis, and is followed by design and coding activities and
software analysis. Component testing and feedback conclude the development of
new components.
• Modification of existing components phase: Existing components can be modified
when they fail to provide the desired features. Components are selected from the repos-
itory, their adaptability is checked and risk analysis is carried out. Coding is then per-
formed according to the requirements, and finally the component is tested. After
modification the new version of the component is submitted to the pool so that it can
be reused.
FIGURE 2.5
The Knot model for component-based software: a reusable repository of components linking new component development, modification of existing components and development of component-based software. (Adapted from Chhillar and Kajla 2011.)
2.6.2 Critiques
• The component pool may become too large to manage if components and developed
software products are stored after development.
• The Knot model does not suggest any method for storing or pooling of components
or software products in the repository.
• No criteria for component selection are included in the model.
2.7 Elite Model
Nautiyal et al. (2012) proposed an Elite model for development of component-based soft-
ware (Figure 2.6). The Elite model focuses on reusability of components during develop-
ment and parallel development of components in software. It uses the Rapid Application
Development model of component development. The Elite model consists of nine phases:
requirement, scope elaboration, reusing existing components (without modifications),
reusing existing components (with modifications), developing new components, integration
of components, component testing, release and customer evaluation (Figure 2.6).

FIGURE 2.6
The Elite model for component-based software (requirement analysis and scope elaboration, component selection, new component development, component integration and testing, and deployment, with feedback). (Adapted from Nautiyal et al. 2012.)
• Requirement: The Elite model starts with the requirement-gathering and analysis
phase. Requirements are identified for the existing application or system domain.
“The Requirement phase involves carrying out enough business/application/sys-
tem modeling to define a meaningful build scope. A build delivers a well-defined set
of business functionalities that end-users can use to do real work.” (Nautiyal et al.
2012). The major objectives of this phase are:
• Identifying the problem domain, with the emphasis on understanding the domain
as well as the problem areas.
• Categorizing and prioritizing requirements.
• Setting the aims and objectives of the proposed software, considering previously
defined priorities.
• Identifying the boundaries of the proposed software, considering the limitations
of available resources.
• Scope elaboration: This phase “emphasises on determining the illustrations of build
requirements in the form of technical, behavioural and structural specifications.”
(Nautiyal et al. 2012). Requirements are prioritized, and three types of specifications are
generated: technical specifications including resources required, together with their limi-
tations; functionalities required by the proposed software; and architectural specifica-
tions of the proposed system.
• Reusing existing components (without modifications): After building the specifica-
tions, the next step is to identify components which can be reused as they are, with-
out any modifications. Interfaces are identified and minor modifications are carried
out as required.
• Reusing existing components (with modifications): It is possible that some compo-
nents may be reused with minor or major modifications. But these modifications
must conform to the scope of the proposed application. This step is used to identify
such modifiable components that can later be reused.
• Developing new components: Some components may be required to be developed
from scratch. These components should have defined interfaces and must be designed
and implemented according to the specifications.
• Integration of components: When all components are identified that fit the design of
the software, they must be integrated and tested. Components are integrated in a
bottom-up approach, and then clusters of components are created. These clusters are
then integrated to develop the software.
• Component testing: Component testing emphasizes testing individual as well as
integrated components. For testing we can use black-box and white-box testing
methods.
• Release: The next phase is the deployment of the system, including fixing bugs
encountered during deployment. Release includes user-oriented documentation and
training.
• Customer evaluation: When the software is released and deployed to the customer’s
site, the customer is free to evaluate it. Evaluation is the process of taking and giving
feedback from/to the customer. Necessary changes are made to satisfy the customer.
2.7.2 Critiques
• No actual implementation is available for this model. All the phases and methods are
theoretical.
• Testing methodologies and techniques are not defined.
• There is no risk analysis for any part or component of the proposed software.
• Integration issues are not covered at all.
Summary
This chapter covers development models suggested by eminent researchers in the context
of the component-based software environment. Development models are critically dis-
cussed so that their advantages and disadvantages can be clearly determined. Every model
is suitable for specific scenarios. The major focus of every model is on domain engineering,
component repository, development with reuse and reusability after development, and
these development phases are commonly covered in each model. Our discussion includes
models which are specific to component-based software development.
References
Capretz, L. F. 2005. “Y: A New Component-Based Software Life Cycle Model.” Journal of Computer
Science, 1(1): 76–82, ISSN: 1549-3636.
Chhillar, R. S. and P. Kajla. 2011. “A New-Knot Model for Component Based Software Development.”
International Journal of Computer Science Issues, 8(3): 480–484, ISSN: 1694-0814.
Dixit, A. and P. C. Saxena. 2011. “Umbrella: A New Component-Based Software Development
Model.” International Conference on Computer Engineering and Applications, IPCSIT, Vol. 2. IACSIT
Press, Singapore.
Gill, N. S. and P. Tomar. 2010. “Modified Development Process of Component-Based Software
Engineering.” ACM SIGSOFT Software Engineering Notes, 35(2): 1–6.
Mandal, A. 2009. “BRIDGE: A Model for Modern Software Development Process to Cater the Present
Software Crisis.” IEEE International Advance Computing Conference (IACC 2009), Patiala, India,
6–7.
Nautiyal, L. et al. 2012. “Elite: A New Component-Based Software Development Model.” International
Journal of Computer Technology & Applications, 3(1): 119–124, ISSN: 2229-6093.
3
Major Issues in Component-Based Software
Engineering
3.1 Introduction
Component-based software engineering addresses the expectations and requirements of
customers and users, just as other branches of software engineering do. It follows the same
development steps and phases as other development paradigms. Standard software engi-
neering principles apply to applications developed through component-based software
engineering. Reusability gives the development team the opportunity to concentrate on
the quality aspects of the software (Naur and Randell 1969). The principle of reusability is
applied to development not only of the whole system but also of individual components.
Development with reuse focuses on the identification, selection and composition of reus-
able components. Development for reuse is concerned with the development of such com-
ponents as may be used and then reused in many applications, in similar and heterogeneous
contexts.
This chapter discusses a number of issues in the context of component-based software
development: reuse and reusability of components, integration and integration complexities,
issues related to testing, reliability issues and quality issues.
According to Poulin (1996) and Prieto-Diaz and Freeman (1987), reusability assessment
techniques can be divided into two approaches, empirical and qualitative. Empirical
approaches rely on objective data to define reusability attributes and characteristics,
whereas qualitative approaches use qualitative software assessment techniques, including
identifying and defining subjective guidelines and standards.
Prieto-Diaz and Freeman (1987) define attributes of a program and related metrics to
compute reusability. They propose that reuse depends on size, program structure, docu-
mentation, programming language and reuse experience. Lines of code measure size;
cyclomatic complexity measures program structure; documentation is rated from 0 to 10;
inter-module language dependency is used to estimate the difficulty of modification; and
reuse experience is captured by experience of using the same module. Two levels of reuse are identified: reuse of idea and
knowledge, and reuse of artefacts and components. The authors define a reusability model
using the approach “to provide an environment that helps locate components, and that
estimates the adaptation and conversion effort based on the evaluation of their suitability
for reuse.” (Prieto-Diaz and Freeman 1987).
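To make the scheme concrete, here is a hedged sketch (the weights, scales and their combination are our own illustration, not Prieto-Diaz and Freeman's published scheme) of scoring a candidate module on the five attributes:

```python
# Hedged sketch: combine the five attributes (size, structure,
# documentation, language dependency, reuse experience) into a single
# relative effort score. Lower = easier to reuse. Weights are arbitrary.

def reuse_effort_estimate(loc, cyclomatic, documentation_0_10,
                          language_dependency_0_1, times_reused):
    size_penalty = loc / 100.0                      # larger modules cost more
    structure_penalty = cyclomatic                  # complex structure costs more
    dependency_penalty = 10.0 * language_dependency_0_1
    doc_bonus = documentation_0_10                  # good docs reduce effort
    experience_bonus = min(times_reused, 10)        # prior reuse reduces effort
    score = (size_penalty + structure_penalty + dependency_penalty
             - doc_bonus - experience_bonus)
    return score

print(reuse_effort_estimate(loc=250, cyclomatic=7,
                            documentation_0_10=8,
                            language_dependency_0_1=0.2,
                            times_reused=4))  # lower = easier to reuse
```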
The evaluation scheme is based on the following assumptions:
Caldiera and Basili (1991) proposed one of the earliest methods of identifying and qualify-
ing reusable components. They define cost, usability and quality as the three factors affect-
ing reusability. Measures and metrics are used to identify qualifying components so that
reusability can be identified and the exact component extracted. Component reusability is
characterized using four metrics: (i) volume of operands and operators, using Halstead
Software Science Indicators; (ii) cyclomatic complexity using McCabe’s method; (iii) regu-
larity, which measures the component’s implementation economy; and (iv) reuse fre-
quency, the indirect measure of its functional usefulness. The method also defines an
organization reuse framework in which two organizations are identified: a “project organi-
zation” and an “experience factory.”
The project organization constructs projects using and reusing currently developed and
previously developed components. The experience factory is an organization that builds
and packages the component. The activities of the experience factory are of two types:
synchronous and asynchronous. The basic idea behind both activities is to enhance reus-
ability through or after development.
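The four indicators above can be sketched under common textbook formulas (our simplification; Caldiera and Basili's exact formulations differ in detail):

```python
# Sketch of the four Caldiera-Basili indicators under common textbook
# formulas: Halstead volume, McCabe cyclomatic complexity, regularity as
# estimated-vs-actual length, and reuse frequency relative to a baseline.

import math

def halstead_volume(n1, n2, N1, N2):
    """n1/n2: distinct operators/operands; N1/N2: total occurrences."""
    return (N1 + N2) * math.log2(n1 + n2)

def cyclomatic_complexity(decision_points):
    """McCabe: number of decision points plus one (single-entry routine)."""
    return decision_points + 1

def regularity(n1, n2, N1, N2):
    """Estimated vs. actual length; values near 1 suggest 'regular' code."""
    estimated = n1 * math.log2(n1) + n2 * math.log2(n2)
    return estimated / (N1 + N2)

def reuse_frequency(component_uses, baseline_uses):
    """Uses of the component relative to a baseline component's uses."""
    return component_uses / baseline_uses

print(halstead_volume(10, 15, 40, 60),   # ~464.4
      cyclomatic_complexity(6),          # 7
      regularity(10, 15, 40, 60),        # ~0.92
      reuse_frequency(12, 4))            # 3.0
```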
The extraction process of components from the repository is also defined. Components
to be extracted must first be identified in a three-phase process:
1. Reusability requirements
2. Component extraction
3. Application
Caldiera and Basili (1991) define a range of reusability factors associated with a varied
range of values to measure these attributes. These reusability factors are:
• Cost: This defines the cost of reusability. It includes the cost of searching for, selecting
and adapting components.
• Usefulness: It includes common features of component behavior represented in
other applications as well as various functionalities provided by the component.
• Quality: This defines the basic set of attributes directly or indirectly related to the
component. It includes correctness, readability, testability, ease of modification and
performance.
Barnard (1998) reused components developed in C++, Java and Eiffel to show his exper-
imental findings. His work was based on estimations of the component’s attributes, such
as simplicity, genericity and understandability, through available properties, methods and
defined interfaces. Barnard’s work lists reusability factors in the context of traditional,
object-oriented and component-based software, as:
• Cohesion: Cohesion defines the binding level of sub-components within the particu-
lar component. Better cohesion increases the reusability of the component.
• Coupling: Defines the requirement for a particular component to perform its task,
that is, level of interdependency. Reusability increases as coupling decreases.
• Complexity: Defines ease of use of the component. As complexity decreases, the level
of reusability increases.
• Instance and class variables: These are the variables used by the methods, classes or
components. Fewer variables means an increased level of reusability.
• Depth of inheritance: Depth is defined in terms of reusability. As the depth decreases,
reusability increases.
• Number of children: As the number of children of a component or class increases,
reusability also increases.
• Correctness: Correct components are more reusable. Components should possess the
property of correctness in a defined environment.
• Program size: Size should be justifiable. As the size of the component increases, it
becomes less usable.
• Program documentation: All types of documentation should be prepared for the
component. Documentation increases the understandability of the component.
Proper documentation increases the level of reusability.
• Functions: Defines the number of functions provided by the method. For better reus-
ability it should be minimal, that is, one.
• Complexity: Relates to the complexity of the method's interface.
• Robustness: This is the property that defines techniques for handling exceptions by
the component.
• Portability: Defines the level of ease with which a class or component can be reused
in different environments.
Boxall and Araban (2004) find that the reuse level of a component is greatly affected by
its understandability regarding the interfaces it uses. They derive the value of understand-
ability using the attributes of the component's interfaces. Boxall and Araban consider inter-
face size, argument counts, number of repetitions, scale of the repetitions and similar
attributes to suggest some metrics for understandability. They use the interfaces of
12 components to provide data for their metrics. Tools used in their metrics are tested for
component interfaces that are developed in C and C++ only. Metrics defined include inter-
face size, the ratio of argument count and procedure counts, where argument count is the
number of arguments included in public procedures and procedure count is the number of
procedures declared publicly by an interface. Other metrics are distinct argument count,
argument repetition scale, identifier length and mean string commonality. Their study is based
on black-box components whose source code is not available. In cases where not much
information is available, interfaces play a vital role in enhancing understandability of
components.
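Counts like these can be computed from the public interface alone. The following sketch (our rendering with hypothetical procedure names, not Boxall and Araban's tooling) illustrates a few of the metrics named above:

```python
# Rough sketch of interface-understandability counts for a black-box
# component whose only visible surface is its public procedures.

interface = {
    # procedure name -> argument names
    "open_account":  ["owner", "currency"],
    "deposit":       ["account_id", "amount", "currency"],
    "withdraw":      ["account_id", "amount", "currency"],
}

procedure_count = len(interface)
argument_count = sum(len(args) for args in interface.values())
distinct_arguments = {a for args in interface.values() for a in args}

print("arguments per procedure:", argument_count / procedure_count)
print("distinct argument count:", len(distinct_arguments))
print("mean identifier length:",
      sum(map(len, interface)) / procedure_count)
```

Repeated argument names such as "currency" lower the distinct argument count relative to the total, which is the kind of repetition their metrics are designed to capture.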
Washizaki et al. (2003) suggested a reusability model for efficient reuse of black-box
components. Factors affecting reusability include functions offered, and adaptability to
new environments and varied requirements. The authors propose metrics for better under-
standability, adaptability level and portability features of components. Since these are
black-box components, their code is unavailable. The authors use only available static
information to define their set of metrics. The criteria in their metric model are:
The authors also define a set of metrics for the existence of meta-information, rate of com-
ponent observability, rate of component customizability, self-completeness of the compo-
nent’s return value and self-completeness of the component’s parameters.
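Because only static interface information is available, such metrics reduce to simple proportions. The sketch below is our approximation of that ratio style, based on the description above; the exact published definitions differ in detail:

```python
# Hedged sketch of ratio-style metrics in the spirit of Washizaki et
# al.'s model (our approximation, not their exact definitions).

def observability(readable_properties, total_fields):
    return readable_properties / total_fields       # observability ratio

def customizability(writable_properties, total_fields):
    return writable_properties / total_fields       # customizability ratio

def self_completeness_return(methods_without_return, total_methods):
    return methods_without_return / total_methods   # return-value ratio

def self_completeness_params(methods_without_params, total_methods):
    return methods_without_params / total_methods   # parameter ratio

print(observability(3, 8), customizability(2, 8),
      self_completeness_return(5, 10), self_completeness_params(4, 10))
```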
Bhattacharya and Perry (2005) focus on the integration contexts of components rather
than just their internal attributes. They propose reusability estimations considering the
integration architecture of the software, as well as component characteristics and architec-
ture compliance metrics to measure issues related to particular properties and integration
issues of components. They suggest “the use of software architecture descriptions as the
context of a software component to use and reuse.” The basic three properties of architec-
tural descriptions are given as:
These authors also propose a set of quantitative metrics for architecture descriptions,
including “architecture compliance metrics and component characteristics metrics.” A set
of sub-metrics is also included. Coefficients used to define these metrics include input and
output data, events related to inputs and outputs, pre-conditions and post-condition com-
pliance coefficients. These are generic metrics applicable to all categories of components.
Gui and Scott’s (2007) study shows that no efficient metrics are available to predict the
effort of modification. They used Java components to suggest metrics to estimate the cohe-
sion properties of intra-components and the coupling properties among them. The authors
claim that these metrics are efficient predictors of the time, effort and amount of changes
required to make a component more useful. Rank correlation and linear regression are
used to evaluate the relative performances of proposed metrics.
Gui and Scott (2008) further define a suite of metrics to compute the reusability of
Java components, and rank them according to their reusability. These metrics are used
to estimate indirect coupling among components as well as to assess the degree of
coupling.
Hristov et al. (2012) categorize reusability metrics into two broad classes: white-box
metrics and black-box metrics. White-box reusability metrics are based on the logic and
coding structure of the software. Black-box reusability metrics are based on interfaces and
other attributes, as code is not available for black-box components. The authors define fac-
tors directly affecting the reusability of a component as:
• Availability: The measure that defines the speed of retrieval of the component. If
availability is high then retrieval is easy.
• Documentation: Documentation helps to understand the component.
• Complexity: Complexity determines the ease of usability of the component. It also
defines the level of ease of using the component in the new context.
• Quality: Quality is the measure of usability of the component in the defined context.
Quality is directly related to the extent of fulfilment of requirements by the compo-
nent as well as error-freeness and bug-freeness of the component.
• Maintainability: This relates to reusability after development and deployment of the
component.
• Adaptability: The level of modification required is the level of adaptability of the com-
ponent. It defines the ease with which the component can be reused in the new system.
• Reuse: It identifies the frequency of reuse. This is the actual reuse of the component
in a similar or dissimilar environment.
• Price: The price of reusing a component also affects its reusability.
Poulin (1994) identifies two basic types of reusability model: empirical and qualitative.
Empirical models use experimental data to estimate complexity, size, reliability and similar
issues, which can be used by automated tools to estimate reusability. Qualitative models rely
on predefined assumptions and guidelines to address issues like quality and
certification.
Chen and Lee (1993) present findings based on a large number of reused
components. They computed size, program volume, program level, difficulty of develop-
ing and effort for all these reused components, concluding that to increase productivity we
should decrease the values of these metrics.
Hislop (1993) defines the theory of function, form and similarity to compute software
reusability. Function defines the actions of a component, form characterizes attributes like
structure and size, and similarity identifies the common properties of components. The
author uses well-defined metrics like McCabe’s complexity metric in his calculation.
Wijayasiriwardhane and Lai (2010) suggest a size-measurement technique named “com-
ponent point” to estimate the size of overall component-based systems. These component
points can be reused as a metric to analyze components for future use. Three classes of
component are defined according to their usability.
Lee and Chang (2000) suggest metrics including complexity and modularity of compo-
nents to predict the reusability and maintainability of object-oriented applications. They
define complexity metrics as “internal–external class complexity,” and modularity metrics
as “class cohesion coupling.”
Cho et al. (2001) propose a component reusability measure: the ratio of the number of
interface methods providing functionality common to the component's domain to the total
number of interface methods. Reusability is assumed to rise as this ratio increases. They
also define a customizability metric as the ratio of customization methods to the total
method count present in the interface of that component.
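As a rough illustration, both ratios can be computed directly from interface method counts. The function names and the example counts below are hypothetical.

```python
def component_reusability(domain_common_methods: int,
                          total_interface_methods: int) -> float:
    """Cho et al.-style ratio: interface methods with domain-common
    functionality over all interface methods (1.0 = fully reusable)."""
    return domain_common_methods / total_interface_methods

def component_customizability(customization_methods: int,
                              total_interface_methods: int) -> float:
    """Ratio of customization methods to all interface methods."""
    return customization_methods / total_interface_methods

# Example: 6 of 10 interface methods are domain-common, 3 allow customization.
print(component_reusability(6, 10))       # 0.6
print(component_customizability(3, 10))   # 0.3
```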
Reusability measures defined in the literature can be analyzed by considering develop-
ment paradigms such as conventional software and programs, object-oriented software,
and component-based software. Table 3.1 provides a summary.
TABLE 3.1
Summary of Reuse and Reusability Issues

Prieto-Diaz and Freeman (1987). Paradigm: conventional software and programs.
• Measures and metrics used: lines of code; cyclomatic complexity; rating from 0 to 10 for documentation; inter-module language dependency to estimate difficulty of modification; experience of using the same module.
• Key findings: defines some attributes of a program and related metrics to compute reusability; proposes that reuse depends on size, program structure, documentation, programming language and reuse experience.
• Factors affecting reusability: size; program structure; documentation; programming language; reuse experience.

Caldiera and Basili (1991). Paradigm: component-based software.
• Measures and metrics used: Halstead Software Science Indicator to find volume; cyclomatic complexity using McCabe's method.
• Key findings: regularity measures the component's implementation economy; reuse frequency is an indirect measure of functional usefulness.
• Factors affecting reusability: cost; usability; quality.

Poulin (1994). Paradigm: component-based software.
• Measures and metrics used: automated tools to estimate reusability; predefined assumptions.
• Key findings: two categories of reusability model; empirical models use experimental data to estimate complexity, size, reliability and similar issues.
• Factors affecting reusability: cost; reliability.

Bhattacharya and Perry (2005). Paradigm: component-based software.
• Measures and metrics used: interaction contexts; properties and attributes of components; CBS architecture.
• Key findings: focused on integration contexts of components rather than just their internal attributes; reusability estimations proposed considering the integration architecture of the software; component characteristics and architecture compliance metrics proposed to measure issues related to particular properties and integration issues of components.
• Factors affecting reusability: integration architecture.

Gui and Scott (2007). Paradigm: object-oriented software.
• Measures and metrics used: coupling and cohesion metrics.
• Key findings: estimate coupling properties among inter-components and cohesion properties of intra-components.
• Factors affecting reusability: complexity of components.

Gui and Scott (2008). Paradigm: component-based software engineering.
• Measures and metrics used: coupling between objects; response for class; coupling factors.
• Key findings: components ranked according to their reusability; metrics to estimate indirect coupling, degree of coupling and functional complexity.
• Factors affecting reusability: number of changes made to the code; time required to carry them out.
• Interaction and integration measures and metrics to capture the complexities of
components in component-based software: When components interact, they have
different configurations, diverse complexity levels, coupling issues and incompatible
interface problems, all of which significantly impact the quality of a component-based
system. Appropriate metrics to quantify the complexities produced by these interactions
and integrations are therefore essential. Most of the interaction, integration and
complexity quantification methods available in the literature are program-oriented, i.e.,
suitable for small-scale software; for large, complex component-based software systems
these metrics are inefficient.
• Level of interactions and integration of components in component-based software:
Component-based software integrates different components providing different
functionalities. Components interact with each other. We need measures and metrics
to assess their level of interaction and integration.
• Number of interactions made by individual components: It is important to evaluate
the number of interactions made by individual components in the system, as these
ultimately affect the complexity of the software. The goal is to keep the number of
interactions to a minimum.
• Component integration effort: Resources (hardware, software or human) required
by the component to provide services should be identified.
• Cost of integration of components: It is quite common for components to be pro-
vided by or purchased from a third party. A particular component’s integration cost
contributes to the overall cost of the system. Integration cost estimation techniques
are required to manage the trade-off between usefulness and cost of the component.
• Effects of integration of components: It is not sufficient merely to integrate compo-
nents: the effects of integration must also be evaluated to assess the behavioral aspects
of the component. Proper assessment methods are required to record the effects of
integration.
• Total time required for integration: Component integration time is a major part of
overall component-based software development. Measures to predict component
integration time are essential.
V(G) = e − n + 2p,

where the 2 is the "result of adding an extra edge from the exit node to the entry node of each
component module graph" (Pressman 2005). In a structured program where we have
predicate nodes, complexity is defined as

V(G) = number of predicate nodes + 1,

where predicate nodes are the nodes having two and only two outgoing edges.
In his implementations, McCabe defined a cyclomatic complexity value of less than 10 for
a program as reasonable. If a program has a hierarchical structure, that is, one subprogram
calls another, the cyclomatic complexity is the summation of the individual complexities
of the two subprograms:

V(G) = V(P1 ∪ P2) = V(P1) + V(P2),
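A minimal sketch of the computation, assuming the control-flow graph has already been extracted as an adjacency list and forms a single connected component:

```python
# Cyclomatic complexity from a control-flow graph given as an adjacency list
# {node: [successor, ...]}. Assumes one connected component (p = 1).
def cyclomatic_complexity(cfg: dict) -> int:
    n = len(cfg)                                   # nodes
    e = sum(len(succs) for succs in cfg.values())  # edges
    p = 1                                          # connected components
    return e - n + 2 * p

# if/else diamond: entry -> a, b; a -> exit; b -> exit
cfg = {"entry": ["a", "b"], "a": ["exit"], "b": ["exit"], "exit": []}
print(cyclomatic_complexity(cfg))  # 2: two independent paths
```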
3.3.1 Key Findings
• Complexity depends not on the size but on the coding structure of the program.
• If a program has only one statement then it has complexity 1. That is, V(G) ≥ 1.
• Cyclomatic complexity V(G) actually defines the number of independent logics/
paths in the program.
3.3.2 Metrics Used
• Lines of code
• Control flow of statements
• Interaction among statements
• Independent paths from source to destination
• Vertices and edges
3.3.4 Critique
• The same program written in different languages or with different coding styles or
structures may have different complexities.
• Intra-module complexity of simple structured programs can be computed easily, but
for inter-module complexity this metric produces misleading output.
Halstead (1977) identified a complete set of metrics to measure the complexity of a
program considering various factors. He drew on established scientific theories to support
his study and applied the metrics to the production of complex software. These metrics
include program vocabulary, length, volume, potential volume and program level. Halstead
proposed methods to compute the total time and effort to develop the software. The metrics
are based on the code of the program, and Halstead also defined the relationships between
these factors and program metrics.
Paradigm:
Conventional program/hierarchical software
Method:
Halstead proposed software science to examine the algorithms developed in ALGOL
and FORTRAN. Halstead considered the algorithms/programs as a collection of “tokens,”
that is, operators and operands. He defined program vocabulary as the count of distinct
operators and distinct operands used in the program. The count of total operators and
operands used in a program is proposed as the program length. The program volume is
defined as the storage volume required to represent the program, and the representation of
the program in the shortest way without repeating operators and operands is known as
potential volume.
Program vocabulary: n = n1 + n2,
where n1 and n2 are the count of unique operators and operands respectively.
Program length: N = N1 + N2,
where N1 and N2 are the count of total operators and operands respectively.
If the program is assumed to have a binary encoding, then its size is defined as the program
volume:

V = N × log2 n.
An algorithm can be implemented in various efficient and compact ways. The most
competent and compact length of the program is defined as the potential volume. For a
program, the potential volume can be attained by specifying the signatures (names and
parameters) of previously defined functions and subprograms, and is formulated as

V* = (2 + n2*) × log2 (2 + n2*),

where 2 represents the two operators (one for the name of the function and the other for the
separator used to distinguish the parameters) and n2* represents the count of input and
output parameter operands.
Next Halstead defined the level of a program, which relates the program's actual size to its
minimum possible size. The level of a program having volume V and potential volume V* is
defined as

Program level: L = V*/V,

where 0 ≤ L ≤ 1; values near 0 denote the maximum possible size and 1 denotes the minimum
possible size of the program.
On the basis of the level of a program, Halstead defined the difficulty of writing a program
as

D = 1/L,

and the effort to develop the program as

E = V/L = D × V.

As the volume and difficulty of the program increase, the development effort increases.
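The chain of definitions can be followed directly in code. The sketch below assumes the operator and operand counts have already been extracted from the source; the counts in the example are illustrative.

```python
import math

# A minimal sketch of Halstead's software science measures, assuming the
# operator/operand counts have already been extracted from the source code.
def halstead(n1, n2, N1, N2, n2_star):
    n = n1 + n2                 # program vocabulary
    N = N1 + N2                 # program length
    V = N * math.log2(n)        # program volume
    V_star = (2 + n2_star) * math.log2(2 + n2_star)  # potential volume
    L = V_star / V              # program level (0 < L <= 1)
    D = 1 / L                   # difficulty
    E = D * V                   # effort
    return {"vocabulary": n, "length": N, "volume": V,
            "potential_volume": V_star, "level": L,
            "difficulty": D, "effort": E}

# Example: 10 distinct operators, 15 distinct operands, 40/35 total
# occurrences, 4 input/output parameters.
for name, value in halstead(10, 15, 40, 35, 4).items():
    print(f"{name}: {value:.2f}")
```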
3.3.5 Key Findings
• A range of complex metrics and their values are achieved using simple measures
including operators, operands and size of the algorithm.
• No in-depth analysis of the logic structure of the code is required; this ease of
computation makes the proposed metrics practical and easily automated.
3.3.6 Metrics Used
• Operators and operands
• Functions and subprograms
• Input/output parameters
3.3.8 Critique
• Originally software science was proposed to investigate the complexity of algo-
rithms, not the programs, therefore these metrics are static measures.
• Halstead tested metrics on small-scale programs of even less than 50 statements.
Applicability to large programs is questionable. These small-scale metrics cannot be
generalized with respect to large, multi-module programs/software.
• In his theory, Halstead counted each occurrence of a GOTO statement as a distinct
operator, whereas all the occurrences of an IF statement were treated as a single
operator. Counting occurrences of similar operators in different ways may create ambiguity.
Albrecht and Gaffney (1983) proposed the function-point technique to measure com-
plexity in terms of size and functionalities provided by the system. This method
addresses the functionality provided by the system from the user’s point of view. To
analyze the software system, Albrecht divided the system into five functional units on
the basis of data consumed and produced. Three complexity weights, high, low and
medium, are associated with these functional units using a set of pre-defined values. In
function-point analysis (FPA), 14 complexity factors have been defined, which have a
rating from 0 to 5. On the basis of these factors, Albrecht calculated the values of unad-
justed function-point, complexity adjustment factors, and finally the value of function
points (Pressman 2005).
Paradigm:
Conventional program/hierarchical software
Method:
FPA categorizes all the functionalities provided by the software in five specific func-
tional units:
• External inputs are the number of distinct data inputs provided to the software or the
control information inputs that modify the data in internal logical files. The same
inputs provided with the same logic are not included in the count for every occur-
rence. All the repeated formats are treated as one count.
• External outputs are the number of distinct data or control outputs provided by the
software. The same outputs achieved with the same logic are not included in the
count for every occurrence. All the repeated formats are treated as one count.
• External inquiries are the number of inputs or outputs provided to or achieved from
the system under consideration without making any change in the internal logical
files. The same inputs/outputs with the same logic are not included in the count for
every occurrence. All the repeated formats are treated as one count.
• Internal logical files present the amount of user data and content residing in the system
or control information produced or used in the application.
• External interface files are the amount of communal data, contents, files or control
information that is accessed, provided or shared among the various applications of
the system.
These five functional units are categorized into three levels of complexity: low/simple,
average/medium or high/complex. Albrecht identified and defined weights for these
complexities with respect to all the five functional units. These functional units and corre-
sponding weights are used to count the unadjusted function points:
UFP = Σ (i = 1 to 5) Σ (j = 1 to 3) (count of functional unit ij × weight ij),

where i denotes the five functional units and j denotes the level of complexity.
Similarly, Albrecht defined the complexity adjustment factors on the basis of 14 com-
plexity factors on a scale of 0–5. Adjustment factors provide an adjustment of ±35% rang-
ing from 0.65 to 1.35. These complexity factors include reliable backup and recovery,
communication requirement, distributed processing, critical performance, operational
environment, online data entry, multiple screen inputs, master file updates, complex func-
tional units, complex internal processing, reused code, conversions, distributed installa-
tions and ease of use. Complexity factors are rated as no influence (0), incidental (1),
moderate (2), average (3), significant (4) and essential (5).
The complexity adjustment factor is defined as

CAF = 0.65 + 0.01 × Σ (k = 1 to 14) Fk,

where Fk is the rating of the kth complexity factor. The function point count is then defined
as the product of the unadjusted function points and the complexity adjustment factor:
FP = UFP × CAF.
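A compact sketch of the computation follows. The weight table holds the standard published FPA weights; the unit counts and factor ratings in the example are illustrative.

```python
# A minimal sketch of Albrecht-style function-point counting.
WEIGHTS = {  # functional unit -> (low, average, high) standard FPA weights
    "external_inputs":     (3, 4, 6),
    "external_outputs":    (4, 5, 7),
    "external_inquiries":  (3, 4, 6),
    "internal_files":      (7, 10, 15),
    "external_interfaces": (5, 7, 10),
}

def function_points(counts: dict, factor_ratings: list) -> float:
    """counts: unit -> (n_low, n_avg, n_high); factor_ratings: 14 values 0-5."""
    ufp = sum(n * w for unit, ns in counts.items()
              for n, w in zip(ns, WEIGHTS[unit]))
    caf = 0.65 + 0.01 * sum(factor_ratings)   # ranges 0.65 .. 1.35
    return ufp * caf

counts = {"external_inputs": (2, 1, 0), "external_outputs": (1, 2, 0),
          "external_inquiries": (2, 0, 0), "internal_files": (0, 1, 0),
          "external_interfaces": (1, 0, 0)}
print(function_points(counts, [3] * 14))  # 14 factors all rated "average"
```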
3.3.9 Key Findings
• The function-point technique does not depend on tools, technologies or languages
used to develop the program or software. Two dissimilar programs having different
lines of code may provide the same number of function points.
• These estimations are not based on lines of code, hence estimations can be made
early in the development phase, even after commencement of the requirements
phase.
3.3.10 Metrics Used
• Count of inputs, outputs, internal logical files, external interfaces and enquiries.
• Weights of corresponding functional unit on the scale of low, medium and high.
• 14 complexity factors on the rating of values 0–5.
3.3.12 Critique
• To compute the correct function-point count, proper analysis of requirements by
trained analysts is required.
• Analysis, counts of functional units and computation of function points are not as
simple as counting lines of code.
Henry and Kafura (1981) proposed a set of complexity computation methods for soft-
ware modules/components. They suggested “software structure metrics based on infor-
mation flow that measures complexity as a function of fan-in and fan-out.” These metrics
are based on the flow of data between the components of the application. Henry and
Kafura defined the length of the module as the procedure length calculated using LOC or
McCabe’s complexity metric. This metric can be computed at a comparatively early stage
of development. The authors used flows of local and global information from UNIX oper-
ating systems modules to define and validate their metrics.
Paradigm:
Conventional programs/hierarchical software
Method:
Henry and Kafura defined three categories of data flow in their work:
• Global flow: When a global data structure is involved between two modules. One
module submits its data to the global data structure and the other accesses the sub-
mitted data from the data structure.
• Direct local flow: Flow of data between two modules is direct local if one module
directly calls another module.
• Indirect local flow: Flow of data between two modules is indirect if one module uses
data as an input returned by another module or both these modules were called by a
third module.
Complexity metrics are defined on the basis of two types of information flow for a par-
ticular module or procedure
• Fan-In: Defines the sum of the number of local flows coming to the module and the
count of data structures used to access the information.
• Fan-Out: Defines the sum of the number of local flows going from the module and
the count of data structures modified by the module.
Henry and Kafura proposed the local flow complexity as "the procedure length multiplied
by the square of fan-in multiplied by fan-out." This method uses the count of "local
information flows" coming to (fan-in) and going from (fan-out) the module. That is:

Cp = length(p) × (fan-in(p) × fan-out(p))^2.

High fan-in and fan-out values indicate high coupling among modules, which leads to
maintainability problems.
Global flow complexity is defined in terms of the possible read, write and read-write
operations made by the procedures of the module on a global data structure. That is:

global flows = (write × read) + (write × read-write) + (read-write × read) + read-write × (read-write − 1),

where read, write and read-write are the counts of procedures that access, update, or both
access and update the global structure.
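A minimal sketch of both measures, assuming the fan-in/fan-out counts, procedure length and global-structure access counts are already available from static analysis:

```python
# Henry and Kafura-style information-flow measures; inputs are assumed to
# have been counted beforehand (e.g. length as LOC or McCabe's V(G)).
def local_flow_complexity(length: int, fan_in: int, fan_out: int) -> int:
    return length * (fan_in * fan_out) ** 2

def global_flows(readers: int, writers: int, read_writers: int) -> int:
    # One flow from every procedure that updates a global structure to every
    # procedure that accesses it (excluding a procedure flowing to itself).
    return (writers * readers + writers * read_writers
            + read_writers * readers + read_writers * (read_writers - 1))

print(local_flow_complexity(length=30, fan_in=3, fan_out=2))  # 30 * 36 = 1080
print(global_flows(readers=4, writers=2, read_writers=1))     # 8 + 2 + 4 + 0 = 14
```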
3.3.13 Key Findings
• The type, nature, number and format of the information which is going to transit
among the software components are identified and defined well before actual imple-
mentation. These metrics can therefore be applied and estimated during the design
phase.
• These design phase metrics can be used to identify shortcomings and flaws in proce-
dure design construction and ultimately in modules.
• Through their metrics the authors argue that the size of the code plays a negligible
role in complexity estimation.
3.3.14 Metrics Used
• Data and information transit among modules.
• Number of parameters used to access and to provide information.
3.3.16 Critique
• The length is computed using McCabe’s formula or Halstead’s formula, that is, the
length of the code plays a vital role in the metrics.
• If the module has no interaction with other modules then the complexity of that mod-
ule becomes zero.
• In the global information flow, only update operations participate in the complexity.
Chidamber and Kemerer (1994) proposed a metric suite for object-oriented software called
the CK metrics suite. This metric suite is one of the most detailed and popular research
works for object-oriented applications. The metric suite is defined in terms of complexity,
coupling, cohesion, depth of inheritance and response set, and is used to assess the
complexity of an individual class as well as the complexity of the entire software system. In
their metrics, Chidamber and Kemerer used the cyclomatic method for the complexity
computation of individual classes.
Paradigm:
Object-oriented program/software
Method:
The authors define six object-oriented design metrics to analytically evaluate the com-
plexity of software and programs. The metrics were tested on more than 2000 classes
developed in C++ and Smalltalk. The authors consider the metrics available in object-ori-
ented design inefficient, as they lack theoretical foundations. They list a set of six metrics:
• Weighted methods per class: Defined as the summation of the complexities of the
individual methods available in the class. That is,

WMC = Σ (i = 1 to n) ci,

where ci is the complexity of method i and n is the number of methods in the class.
• Depth of inheritance tree: Defines the level of inheritance of methods from the deep-
est leaf node to the root in the class. It counts the length of the inheritance hierarchy.
• Number of children: Defines the count of descendent sub-classes directly belonging
to a particular class.
• Coupling between object classes: Denotes the number of other classes with which a
particular class is coupled.
• Response for a class: Defines the set of methods that belong to the class as well as the
set of methods called by a particular method in that class.
• Lack of cohesion in methods: Defines the level of cohesiveness among the methods of
a class.
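Two of these metrics are straightforward to compute once per-method complexities and the inheritance hierarchy are available. The sketch below illustrates WMC and the depth of the inheritance tree; the class hierarchy in the example is hypothetical.

```python
# A minimal sketch of two CK metrics, assuming per-method cyclomatic
# complexities and the class inheritance hierarchy are already extracted.
def wmc(method_complexities: list) -> int:
    """Weighted Methods per Class: sum of the methods' complexities."""
    return sum(method_complexities)

def dit(cls: str, parent: dict) -> int:
    """Depth of Inheritance Tree: edges from the class up to the root."""
    depth = 0
    while cls in parent:
        cls = parent[cls]
        depth += 1
    return depth

parent = {"Button": "Widget", "Widget": "Object"}   # hypothetical hierarchy
print(wmc([1, 3, 2, 5]))        # 11
print(dit("Button", parent))    # 2
```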
3.3.17 Key Findings
• The count and the complexity of methods in the classes can be used to predict the
time and effort required in pre- and post-implementation of the class.
• A class with a greater number of methods is likely to be more application-specific,
which limits the number of its descendants and reduces its reusability.
• As the depth of the inheritance tree increases, the scope for reusability also
increases. However, a longer inheritance hierarchy results in overheads in design
complexity and testing effort.
Abreu and Carapuca (1994) and Abreu and Melo (1996) proposed a metric set named
“Metrics for Object-Oriented Design.” In this metric suite, two fundamental properties of
object-oriented programming are used: attributes and methods. Metrics are proposed for the
basic structural mechanisms of the object-oriented paradigm: encapsulation, inheritance,
polymorphism and message passing. The suite includes metrics for methods and attributes as
an assessment method for encapsulation.
Cho et al. (2001) developed measures to quantify the quality and complexity of CBSE
components. Using UML diagrams and source code, they define three categories of
measures: the complexity, customizability and reusability of a component. Some of
these measures are applicable in the design phase, while others can be applied after
the component installation phase. The argument is that the component should have cus-
tomization properties in order to increase its reusability. The proposed metrics use
McCabe’s cyclomatic complexity and Albrecht’s function points as the basis to compute
the complexity and reusability of a particular program or method.
Paradigm:
Component-based software
Method:
Cho et al. grouped their quality estimation measures into three categories: complexity,
customizability and reusability.
• Complexity metrics: Four classes of complexity metrics are proposed for compo-
nents—plain, static, dynamic and cyclomatic.
• Plain metrics: These are defined on the basis of number of classes, abstract classes,
interfaces, methods, complexities of individual classes, methods, corresponding
weights, attributes and arguments.
Component plain complexity (CPC) of a component =
(number of external classes) + Σ (number of internal classes × weight of corresponding
classes) + (number of in-out interfaces) + (number of external methods) + Σ (number of
internal methods × weight of corresponding methods) + Σ (complexity of classes in the
component) + Σ (complexity of methods in each class),
where the complexity of a class = (number of single attributes) + Σ (number of complex
attributes × weight of corresponding complex attributes), and the complexity of a
method = (number of single parameters) + Σ (number of complex parameters × weight of
corresponding parameters).
These authors identify two types of classes: internal and external. Internal classes are
defined in the component, whereas external classes are called from other components
or libraries. Similarly, there are two types of methods: internal and external. Internal
methods are defined within the class, whereas external methods are called from other
classes. Weights are only assigned to internal classes and internal methods.
• Static complexity metrics: Static complexity is measured according to the internal
structure of the component, on the basis of the associations among its classes.
Five types of association are identified and assigned weights according to the
order of their precedence, including composition, generalization, aggregation and
dependency. These associations are computed two classes at a time.
• Dynamic complexity metrics: Dynamic complexity is measured by taking into account
the number of messages passed between the classes within the component. This metric
is dynamic in nature, since the number of messages passed depends on the particular
execution.
• Cyclomatic complexity metric: Defined with the help of the developed source code.
The authors use McCabe's cyclomatic complexity, summed as Σ (e − n + 2) over the
methods of the component, to assess the complexity of each method existing in a class.
• Customizability metric: Component customizability is defined as

CC = [(number of methods with customizable attributes) + (number of methods with
customizable behaviour × weight of behaviour) + (number of methods with customizable
workflows × weight of workflows)] / (total number of methods available in the interface).
Reusability at the individual application level can be estimated using either lines
of code or function points.
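A sketch of the customizability ratio follows. Cho et al. attach weights to customizable behaviours and workflows; the weight values and method counts used here are assumptions for illustration.

```python
# A minimal sketch of a Cho et al.-style customizability ratio. The default
# weights are assumptions; the paper assigns weights without fixing
# universal values.
def customizability(attr_methods: int, behaviour_methods: int,
                    workflow_methods: int, total_interface_methods: int,
                    w_behaviour: float = 1.5, w_workflow: float = 2.0) -> float:
    weighted = (attr_methods
                + behaviour_methods * w_behaviour
                + workflow_methods * w_workflow)
    return weighted / total_interface_methods

# Example: 4 attribute setters, 2 customizable behaviours, 1 workflow hook,
# 12 interface methods in total.
print(f"{customizability(4, 2, 1, 12):.2f}")  # 0.75
```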
3.3.18 Key Findings
• The defined metrics cover both static and dynamic aspects of the component and the
application, which are applicable to the design and post-implementation phases of
the development.
• As the value of plain complexity increases, the value of a component’s cyclomatic
complexity increases. Dynamic complexity metrics exhibit more accurate results than
static complexity metrics.
• Size, effort, cost and development time of components and component-based appli-
cations can be measured early and easily in the development phase.
3.3.19 Metrics Used
• Number of internal and external classes, internal and external methods, and in-out
interfaces.
• Weights of internal classes, complex attributes, complex parameters and methods.
• Number of associations and their weights.
• Number and frequency of messages.
3.3.21 Critique
• Dynamic complexities are based on lines of code and function points. These metrics
have their own problems and are heavily criticized by practitioners.
• It is not clear whether the weights associated with different entities during complexity
estimation are to be computed or assigned arbitrarily.
Narasimhan and Hendradjaya (2005) suggest a number of metrics to assess the complexity
of component-based software. The packing density metric captures the density of
constituents per integrated component, where the constituents identified in their work
include lines of code, operations, classes and modules; the interaction density metric is
used to analyze the interactions among components. A set of criticality criteria for
component integration and interaction is also suggested.
Vitharana et al. (2004), using what they term a "business strategy-based component"
approach, developed a method for the fabrication of components based on managerial
factors like cost efficiency, ease of assembly, customization, reusability and maintainability.
These are used to estimate such technical metrics as coupling, cohesion, count, volume and
complexity of components.
Lau and Wang (2007) argue that reusability is not the sole purpose of component
integration; rather, integration is part of a systematic software system construction process.
To fulfil the basic objectives of CBSE, the authors analyze an idealized component life
cycle and suggest that components should be composed according to it. They also point
out that the language of composition should have proper and compatible syntax.
Jain et al. (2008) assess the association and mappings of cause and effect among the sys-
tem requirements, structural design and the complexity of the system integration process.
They argue for fast integration of components so that the complexity impact of integration
on architectural design of components can be controlled. Five major factors are used to
analyze the integration complexity of a software system. These factors are divided into 18
sub-factors, including commonality in hardware and software subsystems, percentage of
familiar technology, physical modularity, level of reliability, interface openness, orthogo-
nality, testability and so on.
Parsons et al. (2008) propose specific dynamic methods for attaining and utilizing
interactions among the components in component-based development. They also pro-
pose component-level interactions that achieve and record communications between
components at runtime and at design time. The authors used Java components in this
work.
Kharb and Singh (2008) propose a set of integration and interaction complexity metrics
to analyze the complexity of component-based software. They argue that interaction
complexity has two implicit sources: interactions within the component, and interactions
with other components. Their complexity metrics include the percentage of component
interactions, interaction percentage metrics for component integration, actual interactions
against interactions performed, and complete interactions in a component-based software
system.
Sharma and Kushwaha (2012) present an integrated method to assess development and
testing efforts by analyzing the “improved requirement based complexity (IRBC)” in the
context of component-based software.
A number of complexity assessment techniques for CBSE are suggested by academics
on the basis of complexity properties including communication among components, pair-
ing, structure and interface. The interaction and integration complexity measures available
in the literature for development paradigms including conventional software and pro-
grams, object-oriented software, and component-based software, are summarized in
Table 3.2.
The methods and metrics proposed so far in the literature are defined on the basis of
interactions among instructions, operations, procedures and functions of individual and
standalone programs and codes. These metrics are appropriate for small-sized codes. Some
measures are also defined for object-oriented software, but they are not adequate for CBSE
applications. In CBSE, components exchange services and functionalities with each other
through connections and communications. Interaction edges denote the connections
among components, with an edge for each requesting communication and an edge for
each responding communication.
TABLE 3.2
Summary of Interaction and Integration Complexities

McCabe (1976). Paradigm: conventional software and programs.
• Measures and metrics used: line of code; interaction among statements; nodes and interactions.
• Key findings: control-flow graph of a program used to compute the cyclomatic complexity; graph-theoretical notation used to draw the control-flow graph, where a graph G has n vertices, e edges and p connected components.
• Factors affecting interaction and integration complexity: conditional statements; loop statements; switch cases.

Halstead (1977). Paradigm: conventional software and programs.
• Measures and metrics used: line of code; operator count; dissimilar operands count; total count of dissimilar operators; total count of dissimilar operands.
• Key findings: proposed a complete set of metrics to measure the complexity of a program considering various factors, such as program vocabulary, program length, program volume and potential volume.
• Factors affecting interaction and integration complexity: program vocabulary; program length; program volume; effort; time.

Albrecht and Gaffney (1983). Paradigm: modular programming.
• Measures and metrics used: external inputs; external outputs; external enquiries; internal logical files; external interface files.
• Key findings: proposed the function-point analysis technique to measure the size of a system in terms of the functionalities provided.
• Factors affecting interaction and integration complexity: 5 functional units; 14 complexity factors; complexity adjustment factors; degree of influence.

Henry and Kafura (1981). Paradigm: modular programming.
• Measures and metrics used: fan-in information.
• Key findings: "software structure metrics based on information flow."
• Factors affecting interaction and integration complexity: number of calls to the module.

Jain et al. (2008). Paradigm: component-based software.
• Measures and metrics used: prioritization of requirements; functional modularity; feasibility; interface complexity; testability.
• Key findings: assesses association and cause-and-effect mappings between system requirements, system architecture and the systems integration procedure; five major factors to analyze the integration complexity of a software system; these factors are divided into 18 sub-factors.
• Factors affecting interaction and integration complexity: commonality in hardware and software subsystems; percentage of familiar technology; physical modularity; level of reliability; interface openness; orthogonality; testability.

Parsons et al. (2008). Paradigm: component-based software.
• Measures and metrics used: static interaction complexity; dynamic interaction complexity.
• Key findings: specific dynamic methods for attaining and utilizing interactions among the components in component-based development; component-level interactions that achieve and record communications between components at runtime and design time.
• Factors affecting interaction and integration complexity: call traces; call graphs; runtime paths; calling context trees.

Kharb and Singh (2008). Paradigm: component-based software.
• Measures and metrics used: interface.
• Key findings: set of integration and interaction complexity metrics.
• Factors affecting interaction and integration complexity: maintainability.
3.4 Complexity Issues
The Cambridge Dictionary defines the term complexity as the "state of being formed of
multiple parts or things that are difficult to understand." In the context of algorithms and
programs, complexity concerns the estimation and prediction of the resources required by a
solution (algorithm or program), and is identified as an indirect measurement (Tiwari and
Kumar 2014). It cannot be calculated as a direct measurement like lines of code or cost.
Complexity is the property of a system that makes it difficult to formulate its overall behav-
ior in a given development environment. Software complexity is a term that encompasses
numerous properties of a piece of software, all of which affect internal as well as external
interactions. Complexity may be characterized as computational, algorithmic or
information-processing. Computational complexity focuses on the resources, such as space
and time, required to execute algorithms, whereas information-processing complexity is a
measure of the total number of units of information transmitted by an object.
In component-based software engineering, complexity is a measure of the interactions of
the various elements or components of the system. Complexity describes interactions among
entities like functions, modules and components. As the number of entities increases, the
number of interactions between them increases according to the software requirements.
Common major issues regarding complexity of component-based software are:
• Terminology for defining complexity and related terms: Specific and appropriate
vocabulary is required to define terminology related to complexity in the context of
component-based software rather than applying meanings of complexity taken from
other fields.
• Identifying and defining complexity factors of components and component-based
software: Some factors increase and some decrease the complexity of software.
Proper mechanisms should be used to identify and address factors that affect the
complexity of components as well as the complexity of component-based software.
• Techniques are required to assess complexity of individual components:
Components produce complexity due to intra-module interactions which may be
necessary to make the component usable. The use of complex components will
increase the complexity of the overall system. There should be an appropriate method
to assess the complexity of the component. The concept of complexity applies equally
to both types of component, black-box and white-box.
• Metrics are required to evaluate the complexity of component-based software:
Complexity measures for component-based software are as important as the mea-
sures for individual components. Component-based development focuses on inter-
component interactions. These inter-component interactions generate new complexity
aspects that do not exist in traditional or small-scale software or applications. New
measures and metrics are required to address this new problem domain.
3.6 Testing Issues
Testing plays a vital role in the development of components and component-based software
engineering. This section reviews the crucial issues in testing components and component-based
software.
Testing is one of the crucial phases of the overall development of software. It verifies the
correctness, precision and compatibility of the software at the individual as well as the
system level. Practitioners have identified that improper testing results in unreliable
products (Elberzhager et al. 2012, Tsai et al. 2003). In today's software development
environment, testing commences just after the finalization of system requirements.
In component-based software engineering, testing starts from the component level and
moves on to the integrated CBS system level (Pressman 2005). Testing is commonly used
to verify and validate software (Gao et al. 2003, Myers 2004, Weyuker 1998). Verifying com-
ponents in CBSE is the collection of procedures that certify the functionalities of compo-
nents at individual level. Validating components is a series of procedures to ensure the
integrity of integrated components according to the architectural design while fulfilling
the needs of the customer. In the literature, there are two groups of testing techniques,
black-box and white-box.
3.6.1 Black-Box Testing
Black-box testing methods focus on assessing the functional behavior of the software
through inputs provided and outputs observed. Black-box testing treats the internal
logic of code as a black box and the testing observations are captured using inputs and
outputs only.
Black-box testing strategies proposed in the literature (Ntafos 1988, Ostrand and
Balcer 1988, Ramamoorthy et al. 1976, Voas 1992, Voas and Miller 1992, 1995, Weyuker
1993) are:
TABLE 3.3
Decision Table: The Four Quadrants

Condition stub | Condition entries
Action stub    | Action entries
In the decision-table-based method, the condition stub and condition entries correspond to an
input condition or set of input conditions. The action stub and action entries correspond to an
output or set of outputs (Voas and Miller 1995, Weyuker 1993).
3.6.1.4 Cause-Effect Graphing
The cause-effect graphing method is also used to combine input conditions so that a more
elaborate assessment can be performed. In this technique, inputs are recognized as causes
and outputs as effects. A Boolean graph known as a cause-effect graph is drawn by joining
these causes (inputs) and effects (outputs). It is a simple graphing technique in which nodes
represent input conditions and edges represent the links between nodes.
3.6.2 White-Box Testing
White-box techniques are used to address the testing requirements of the structural design
and internal code of the software. White-box testing methods ensure that all execution
paths and independent logics of the program or module have been tested adequately. The
logical decisions given in the code must also be tested on their true and false sides, and all
looping and branching codes at the extremes as well as within the boundaries must be
checked at least once. Basis-path testing is used in the white-box technique to analyze the
structure of the software.
McCabe (1976) proposed a method to count the number of test cases of a program. He
proposed cyclomatic complexity based on the structural design of the code. He defined
a control-flow graph having n nodes, e edges and p connected components. The cyclo-
matic complexity is calculated as V(G) = e − n + 2p; here 2 is the “result of adding an
extra edge from the exit node to the entry node of each component module graph”
(Chen 2011).
Henderson-Sellers (Henderson-Sellers and Tegarden 1993) proposed an amendment to
McCabe's formula for computing cyclomatic complexity. Henderson-Sellers' definition,
V(G) = e − n + p + 1, uses the concept of modularity. The altered formula argues that, to
make the modules of a multi-component flow graph strongly connected, an extra edge is
added, reflected in the constant 1. Orso et al. (2001) suggested a technique using component metadata for
regression testing of components and their interfaces. This metadata consists of data and
control dependencies, source code, complexity metrics, security attributes, information
retrieval mechanisms and execution procedures. They also proposed a specification-based
regression test selection technique for CBS.
Michael et al. (2002) proposed that the effectiveness of system testing can be increased by
regulating certain component factors. The factors eligible for optimization are cost,
reliability, effort and similar attributes. Single as well as multiple application systems are
considered for "software component testing resource allocation," and "reliability-growth
curves" are used to model the association between failure rates and the "cost to decrease this
rate." Interaction among components and failure rates of components are used in this
methodology.
Bixin et al. (2005) developed a matrix-based method to compute the dependencies in
component-based software. Categories of dependencies available in component-based
software are defined, including state, cause-effect, input/output, context and interface-event
dependencies.
TABLE 3.4
Summary of Testing Issues in Component-Based Software

McCabe (1976). Paradigm: white-box testing.
• Measures and metrics used: statements of code; interaction among statements; control-flow graph.
• Key findings: graph-theoretic notation to draw the control-flow graph and a formula for cyclomatic complexity; the formula gives the count of test cases and computes the number of independent logic paths in the program code; vertices denote the instructions of the code and edges represent the flow of control among vertices.
• Testing factors: conditional statements; loop statements; switch cases.

Henderson-Sellers and Tegarden (1993). Paradigm: white-box testing.
• Measures and metrics used: statements of code; interaction among statements; multi-component flow graph.
• Key findings: modified formula for computing cyclomatic complexity with the argument of an extra edge.
• Testing factors: conditional statements; loop statements; switch cases.

Orso et al. (2001). Paradigm: white-box testing.
• Measures and metrics used: source code; interaction among components; component specification.
• Key findings: regression testing technique using component metadata; regression test method using software specification.
• Testing factors: metadata including data and control dependencies; source code and complexity metrics; security attributes.

Michael et al. (2002). Paradigm: black-box testing.
• Measures and metrics used: inter-component interaction; single-application and multi-application systems.
• Key findings: methodology to optimize testing schedules subject to reliability constraints; generates optimization opportunities in the testing phase; the effectiveness of system testing can be increased by regulating component factors.
• Testing factors: cost; reliability; failure rate; testing schedule.

Bixin et al. (2005). Paradigm: white-box testing.
• Measures and metrics used: dependency matrix; interaction; source code; adjacency matrix.
• Key findings: component-matrix graph and dependence matrix proposed to assess dependences in component-based software; categories of dependencies defined, including state, cause and effect, input/output, context and interface-event dependence.
• Testing factors: data; control; time; context; state.

Dallal and Sorenson (2006). Paradigm: black-box testing.
• Measures and metrics used: all-transition coverage; pair coverage; full predicate coverage; round-trip path coverage.
• Key findings: all-path state testing method to reduce the number of test cases; black-box and white-box testing methods.
• Testing factors: reusable and newly developed classes.

Gill and Tomar (2007). Paradigm: white-box testing.
• Measures and metrics used: source code; interactions; specification of components.
• Key findings: focused on two aspects of testing component-based software: testing requirements and test-case documentation; source code, functions, compatibility, middleware, interactions and specifications as testing requirements; testing process defined as test strategy, planning, specification, execution, recording, completion and test results.
• Testing factors: guidelines and testing process, step-wise and phase-wise.

Sen and Mall (2012). Paradigm: white-box testing.
• Measures and metrics used: dependency graph; source code; component state; reverse engineering.
• Key findings: regression selection technique to minimize the count of test cases in component-based software; algorithm to automatically generate code from the state model.
• Testing factors: method parameters; operations; conditions (like looping and branching).

Fraser and Arcuri (2013). Paradigm: black-box testing.
• Measures and metrics used: EVOSUITE tool; line of code; number of test cases.
• Key findings: whole-coverage testing technique to cover all testing goals at the same time rather than one at a time; approach claimed to minimize the number of tests; study uses primitive, constructor, field method and assignment statements.
• Testing factors: object-oriented classes; member variables; numeric, Boolean, string and enumeration variables; test suites.

Edwards (2000). Paradigm: white-box testing.
• Measures and metrics used: pre-conditions; post-conditions; fault injection; mutants.
• Key findings: specification-based test-case generation technique; flow graphs defined with the help of specifications.
• Testing factors: component specification; enumerating paths; test cases.

Chang et al. (2013). Paradigm: white-box testing.
• Measures and metrics used: integration among components; UML modeling; reusability.
• Key findings: concept of healing connectors from the developer's point of view; problems defined as exceptions; connectors usually installed on the basis of the integration information available with components.
• Testing factors: proved through case studies.
3.7 Reliability Issues
This section reviews the major issues in the reliability of component-based software.
Reliability is key to a system's quality. It defines not only the correctness but also the
precision attributes of software. The literature includes a number of quality reliability
models, and reliability assessment methodologies have been invented to predict the
reliability of CBSE applications. In the development of quality software, various independent
components have to be integrated according to the architectural design of the software. It is
not sufficient to apply traditional reliability metrics when measuring the reliability of such
software in a heterogeneous environment. Different researchers have categorized software
reliability in diverse ways. Eusgeld et al. (2008) identify three types of software reliability:
a. Black-box reliability
b. Metric-based reliability
c. Architecture-based reliability or white-box reliability
a. Black-box reliability (Farr 1994, Goel 1985, Ramamoorthy and Bastani 1982) assesses
reliability using failure observations of the software over a period of time during
testing. These reliability estimation models treat the system as a whole rather than
modeling its internal structure and intra-component interactions.
b. Software metric-based reliability (Musa 1998) estimates the reliability of the software by
analyzing its static properties, such as lines of code, asymptotic complexity, developer
experience, and the software engineering methods and techniques used for testing.
c. Architecture-based reliability (Krishnamurthy and Mathur 1997) estimates reliability
by considering the internal structure of the software. During reliability prediction
and assessment, integration and interaction among components is represented by
architecture. While calculating the reliability of the software, component reliabili-
ties are also considered.
Additive methods (Everett 1999, Xie and Wohlin 1995) use a non-homogeneous Poisson
process (NHPP) to model the reliabilities of the components. In these models,
software failure is represented in NHPP terms through the number of component
failures and the intensity functions of failures of individual components.
Reussner et al. (2003) propose a prediction method in which the reliability of a
component-based system is calculated as a factor of the usage profile, including the
reliability of external services. This technique is applicable to black-box components and
is applied to open and distributed systems. In this method components follow the Markov
chain property.
Yacoub et al. (2004) propose a “scenario-based” reliability estimation approach to the
reliability of CBS. Dependency graphs are used and algorithms proposed that compute the
reliability of CBSE applications. Algorithms are extended to cover distributed components
and the hierarchy of subsystems for reliability estimations.
Rodrigues et al. (2005) offer a reliability estimation method that considers the structure
of the component and the scenarios present in the component-based application. This
method does not calculate a component’s reliability, but assumes that the component’s reli-
ability is given. In this technique, components follow Markov properties. It is assumed that
component failures are independent of each other.
Gokhale and Trivedi (2006) propose reliability predictions for component-based soft-
ware taking its design structure and the failure behavior of the components into account.
A unifying framework to predict reliability of state-based models as well as architecture-
based software is proposed. Software architecture is considered alongside component fail-
ure behavior and interfaces to produce a composite method which is analyzed to predict
the reliability of the application.
Sharma and Trivedi (2007) developed an "architecture-based unified hierarchical
model" to assess and predict the performance of the software, the reliability of the
application, and the security and cache behavior of the components. Discrete-time Markov
chains (DTMCs) are used to model the software, and equations are defined that assess the
software application on its architectural design and the behavioral properties of the
standalone components. Sharma and Trivedi's hierarchical model covers reliability,
performance, security and "cache-miss" behavior predictions.
Zhang et al. (2008) suggest an approach based on a component-dependence graph. This
is a sub-domain technique based on an architectural-path reliability model; algorithms are
suggested to estimate the reliability of CBSE applications. The software operational profile
is assumed to be known, and control is assumed to flow from one component to another.
This method can re-estimate software application reliability when the operational profile
changes.
Hsu et al. (Hsu and Huang 2011) developed an adaptive method using path-based
prediction for complex CBS systems. They used three techniques to estimate the path
reliability of the whole application: sequence, branch and loop structures. The method also
quantifies the effect of each component's failure on the overall reliability of the application.
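Path-based methods of this family generally multiply component reliabilities along each execution path and weight paths by their usage probabilities. A minimal sketch under the common independent-failure assumption follows; the component reliabilities, paths and probabilities are illustrative, not taken from any of the cited studies.

```python
from math import prod

# Path reliability as the product of the reliabilities of the components on
# the path, assuming components fail independently.
def path_reliability(path: list, component_reliability: dict) -> float:
    return prod(component_reliability[c] for c in path)

def system_reliability(paths: list, path_probabilities: list,
                       component_reliability: dict) -> float:
    """Expected reliability over the profile of execution paths."""
    return sum(p * path_reliability(path, component_reliability)
               for path, p in zip(paths, path_probabilities))

rel = {"A": 0.99, "B": 0.95, "C": 0.98}
paths = [["A", "B"], ["A", "C", "B"]]       # hypothetical execution scenarios
print(f"{system_reliability(paths, [0.7, 0.3], rel):.4f}")
```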
Palviainen et al. (2011) proposed an assessment method for predicting the reliability of
components and CBSE applications. A method is defined for predicting the reliability of
components while they are still under development, early in the software development
phases. The approach also assesses the effect on reliability of selecting the correct
components. Heuristic, model-driven reliability prediction is combined with component-level
and system-level techniques to explore the development of reliable CBS applications.
Fiondella et al. (2013) suggest a reliability assessment technique that uses "correlated
failures." Overall reliability is estimated by considering the reliabilities of individual
components, the correlation of their failures and the architecture of the software. This
method uses a multivariate Bernoulli distribution to compute overall component-based
software reliability.
Reliability methods available in the literature are explored by considering the path-
based estimation paradigm and summarized in Table 3.5.
TABLE 3.5
Summary of Reliability Issues in Component-Based Software

Reussner et al. (2003). Paradigm: path-based.
• Measures and metrics used: control instructions; Markov property; probability of call of components; service reliability; overall reliability.
• Key findings: estimates the reliability of component-based software by composing profiles of component usage and the reliability of the environment; method is applicable to black-box components whose code is not available; components fail independently.
• Reliability estimation factors: usage profiles; reliability of components; failure rates.

Yacoub et al. (2004). Paradigm: scenario-based (path-based).
• Measures and metrics used: component dependency graph; interaction among components; transition probability; transition reliability.
• Key findings: scenario-based reliability estimation method for component-based software; assumes that profiles of execution scenarios are available; stack-based algorithm assesses reliability; methodology can be applied in early phases of software development, since execution scenarios are designed in the design phase.
• Reliability estimation factors: individual reliabilities of components; execution times of components; link reliabilities; execution scenarios; execution time of a scenario.

Rodrigues et al. (2005). Paradigm: path-based.
• Measures and metrics used: execution scenarios; multiple scenarios; Markov property.
• Key findings: reliability estimation method considering the structure of the component and the scenarios present in the component-based application.
• Reliability estimation factors: operational profiles; scenarios; component reliability.
3.8 Quality Issues
Quality is one of the most vital properties of any software construct. Quality is related to
almost every other attribute of the software including better reusability, reduced cost,
usability in the overall software, testability, maintainability, ease of use and reliability.
Quality issues should therefore be properly addressed and resolved. The quality of indi-
vidual components and ultimately of the component-based software is equally crucial.
Major issues related to the quality of components and component-based software are:
• Identifying and defining quality attributes: This is a broad area with a good deal of
scope. Though many quality attributes are already defined in the literature, most of
them are in the context of traditional software or for small-scale applications. Quality
attributes that can contribute to the quality of overall component-based software
require more focus.
• Defining quality parameters and procedures for components and component-
based software: Procedures and parameters need to be defined for developing and
assessing quality components for complex and large-scale software. Available param-
eters and procedures address only a small number of quality parameters.
• Measures and metrics to assess the quality of components and component-based
software: Quality can be defined in many ways and can be assessed through many
measures. Component-based software requires specific approaches. Metrics that can
predict or assess the quality of components as well as the complete system are
decisive.
• Measures to improve the quality of existing components: Quality is important for
existing as well as newly developed software constructs. The issue is how existing
constructs can be reused without compromising the overall quality of the compo-
nent-based software. Maximum reusability should be achieved without compromis-
ing on the quality of the software.
• Predicting the overall cost and quality of component-based software products:
Practitioners have paid comparatively less attention to exploring the hidden attri-
butes of component-based software. Overall cost estimation techniques can be devel-
oped in this domain.
• Measures to identify level and degree of componentization: There are no estab-
lished measures and metrics to find the optimized minimum cost of componentiza-
tion. The trade-off between the number of components and the integration cost is not
yet defined.
• Maintaining the quality of the component repository: Newly developed as well as
modified components are deposited in the component repository for future use in
different applications and in varied contexts. Components provided/purchased by
third parties are also placed in the repository. Managing component repositories
involves various considerations, including:
• Size of the repository
• Cost of maintaining the repository
• Maintenance effort
• Selection criteria for components from the repository
Summary
This chapter discusses critical issues in the development phases of components and com-
ponent-based software, including:
• Reusability
• Integration and interaction
• Complexity
• Designing and coding
• Testing
• Reliability
• Quality.
This chapter includes a critical literature review of these issues, examining in detail the key
areas of work for researchers and academics.
References
Abreu, F. B. 1995. “Design Metrics for Object-Oriented Software System.” In Proceedings Workshop on
Quantitative Methods (COOP), Aarhus, Denmark, August 1995, 1–30.
Abreu, F. B. and R. Carapuca. 1994. “Object-Oriented Software Engineering: Measuring and
Controlling the Development Process.” In 4th International Conference on Software Quality,
McLean, VA, 3–5.
Abreu, F. B. and W. Melo. 1996. “Evaluating the Impact of Object-Oriented Design on Software
Quality.” In 3rd International Software Metrics Symposium, Berlin, Germany.
Albrecht, A. and J. E. Gaffney. 1983. "Software Function, Source Lines of Code, and Development Effort Prediction: A Software Science Validation." IEEE Transactions on Software Engineering, SE-9: 639–648.
Barnard, J. 1998. “A New Reusability Metric for Object-Oriented Software.” Software Quality Journal,
7: 35–50.
Basili, V. R. and D. H. Hutchens. 1983. “An Empirical Study of a Syntactic Complexity Family.” IEEE
Transactions on Software Engineering, 9(6): 664–672.
Bhattacharya, S. and D. E. Perry. 2005. “Contextual Reusability Metrics for Event-Based Architectures.”
In International Symposium on Empirical Software Engineering, Queensland, Australia, 459–468.
Bixin, L., Y. Zhou, Y. Wang, and J. Mo. 2005. “Matrix Based Component Dependence Representation
and Its Applications in Software Quality Assurance.” ACM SIGPLAN Notices, 40(11): 29–36.
Boehm, B. 1996. “Anchoring the Software Process.” IEEE Software, 13(4): 73–82.
Boxall, M. and S. Araban. 2004. “Interface Metrics for Reusability Analysis of Components.” In Proceedings
of Australian Software Engineering Conference (ASWEC’2004), Melbourne, Australia, 40–46.
Caldiera, G. and V. R. Basili. 1991. “Identifying and Qualifying Reusable Software Components.”
IEEE Computer, 24: 61–70.
Chen, J. 2011. “Complexity Metrics for Component-Based Software Systems.” International Journal of
Digital Content Technology and Its Applications, 5(3): 235–244.
Chen, D.-J. and P. J. Lee. 1993. “On the Study of Software Reuse Using Reusable C++ Components.”
Journal of Systems and Software, 20(1): 19–36.
Cheung, R. C. 1980. "A User-Oriented Software Reliability Model." IEEE Transactions on Software Engineering, 6(2): 118–125.
Chidamber, S. and C. Kemerer. 1994. “A Metrics Suite for Object-Oriented Design.” IEEE Transactions
on Software Engineering, 20(6): 476–493.
Cho, E. S., M. S. Kim, and S. D. Kim. 2001. “Component Metrics to Measure Component Quality.” In
Proceedings of Eighth Asia-Pacific on Software Engineering Conference (APSEC ’01), IEEE Computer
Society, Washington, DC, 419–426.
Cortellessa, V., H. Singh, and B. Cukic. 2002. "Early Reliability Assessment of UML-Based Software Models." In Proceedings of 3rd International Workshop on Software and Performance, Rome.
Dallal, J. and P. Sorenson. 2006. “Generating Class-Based Test Cases for Interface Classes of Object-
Oriented Black Box Frameworks.” World Academy of Science, Engineering and Technology, 16:
96–102, ISSN: 1307-6884.
Edwards, H. S. 2000. “Black-Box Testing Using Flowgraphs: An Experimental Assessment of
Effectiveness and Automation Potential.” Software Testing, Verification and Reliability, 10(4):
249–262.
Elberzhager, F., A. Rosbach, J. Münch, and R. Eschbach. 2012. “Reducing Testing Effort: A Systematic
Mapping Study on Existing Approaches.” Information and Software Technology, 54(10):
1092–1106.
Eusgeld, I., F. C. Freiling, and R. Reussner. 2008. Dependability Metrics. LNCS 4909. Springer-Verlag
Berlin/Heidelberg, 104–125.
Everett, W. 1999. "Software Component Reliability Analysis." In Proceedings of IEEE Symposium on
Application-Specific Systems and Software Engineering & Technology (ASSET’99), Richardson,
Texas, 204–211.
Farr, W. 1994. “Software Reliability Modeling Survey.” In Lyu M. R. (ed.) Handbook of Software
Reliability Engineering. McGraw-Hill, New York, 71–117.
Fiondella, L., S. Rajasekaran, and S. Gokhale. 2013. "Efficient Software Reliability Analysis with Correlated Component Failures." IEEE Transactions on Reliability, 62: 244–255.
Fraser, G. and A. Arcuri. 2013. “Whole Test Suite Generation.” IEEE Transactions on Software
Engineering, 39(2): 276–291.
Gao, J. Z., H. S. Tsao, and Y. Wu. 2003. Testing and Quality Assurance for Component-Based Software.
Artech House, Boston, MA.
Gill, N. S. and Balkishan. 2008. “Dependency and Interaction Oriented Complexity Metrics of
Component-Based Systems.” ACM SIGSOFT Software Engineering Notes, 33(2): 1–5.
Gill, N. S. and P. Tomar. 2007. "CBS Testing Requirements and Test Case Process Documentation Revisited." ACM SIGSOFT Software Engineering Notes, 32(2): 1–4.
Goel, A. L. 1985. "Software Reliability Models: Assumptions, Limitations and Applicability." IEEE Transactions on Software Engineering, 11(12): 1411–1423.
Gokhale, S. et al. 1998. “Reliability Simulation of Component Based Software Systems.” In Proceedings
of 9th International Symposium on Software Reliability Engineering (ISSRE 98), Paderborn, Germany,
November 1998, 192–201.
Gokhale, S. S. and K. S. Trivedi. 2006. “Analytical Models for Architecture-Based Software Reliability
Prediction: A Unification Framework.” IEEE Transactions on Reliability, 55(4): 578–590.
Goseva-Popstojanova, K. and K. Trivedi. 2001. "Architecture-Based Approach to Reliability Assessment of Software Systems." Performance Evaluation, 45: 179–204.
Gui, G. and P. D. Scott. 2008. "New Coupling and Cohesion Metrics for Evaluation of Software Component Reusability." In Proceedings of the International Conference for Young Computer Scientists, Hunan, China, 1181–1186.
Gui, G. and P. D. Scott. 2007. “Ranking Reusability of Software Components Using Coupling Metrics.”
The Journal of Systems and Software, 80: 1450–1459.
Halstead, M. H. 1977. Elements of Software Science. Elsevier North Holland, New York.
Henderson-Sellers, B. and D. Tegarden. 1993. “The Application of Cyclomatic Complexity to Multiple
Entry/Exit Modules.” Center for Information Technology Research Report No. 60.
Henry, S. and D. Kafura. 1981. “Software Structure Metrics Based on Information Flow.” IEEE
Transactions on Software Engineering, 7: 510–518.
Chang, H., L. Mariani, and M. Pezzè. 2013. "Exception Handlers for Healing Component-Based Systems." ACM Transactions on Software Engineering and Methodology, 22(4): 1–40.
Hislop, G. W. 1993. “Using Existing Software in a Software Reuse Initiative.” In Sixth Annual Workshop
on Software Reuse (WISR’93), Owego, NY, November 2–4, 1993.
Hristov, D., H. Oliver, M. Huq, and W. Janjic. 2012. “Structuring Software Reusability Metrics for
Component-Based Software Development.” In Proceedings of Seventh International Conference on
Software Engineering Advances ICSEA 2012, IARIA, Lisbon, Portugal, ISBN: 978-1-61208-230-1.
Hsu, C. J. and C. Y. Huang. 2011. "An Adaptive Reliability Analysis Using Path Testing for Complex Component-Based Software Systems." IEEE Transactions on Reliability, 60(1): 158–170.
Jain, R., A. Chandrasekaran, G. Elias, and R. Cloutier. 2008. “Exploring the Impact of Systems
Architecture and Systems Requirements on Systems Integration Complexity.” IEEE Systems
Journal, 2(2): 209–223.
Poulin, J. S. 1994. "Measuring Software Reusability." In Proceedings of Third International Conference on Software Reuse, Rio de Janeiro, Brazil, November 1–4, 1994, 126–138.
Kharb, L. and R. Singh. 2008. “Complexity Metrics for Component-Oriented Software Systems.”
ACM SIGSOFT Software Engineering Notes, 33(2): 1.
Krishnamurthy, S. and A. P. Mathur. 1997. “On the Estimation of Reliability of a Software System
Using Reliabilities of Its Components.” In Proceedings of 8th International Symposium on Software
Reliability Engineering (ISSRE’97), Albuquerque, NM, 146–155.
Kubat, P. 1989. “Assessing Reliability of Modular Software.” Operations Research Letters, 8: 35–41.
Kumari, U. and S. Bhasin. 2011. “A Composite Complexity Measure for Component-Based Systems.”
ACM SIGSOFT Software Engineering Notes, 36(6): 1–5.
Lau, K.-K. and Z. Wang. 2007. “Software Component Models.” IEEE Transactions on Software
Engineering, 33(10): 709–724.
Lee, Y. and K. H. Chang. 2000. “Reusability and Maintainability Metrics for Object-Oriented
Software.” In Proceedings of 38th Annual on Southeast Regional Conference (ACM-SE 38). ACM,
New York, 88–94.
Littlewood, B. 1975. “A Reliability Model for Systems with Markov Structure.” Applied Statistics,
24(2): 172–177.
Littlewood, B. B. and L. Strigini. 1993. “Validation of Ultra-High Dependability for Software-Based
Systems.” Communications of the ACM, 36(11): 69–80.
McCabe, T. 1976. “A Complexity Measure.” IEEE Transactions on Software Engineering, 2(8): 308–320.
Lyu, M. R., S. Rangarajan, and A. P. A. van Moorsel. 2002. "Optimal Allocation of Test Resources for Software Reliability Growth Modeling in Software Development." IEEE Transactions on Reliability, 51(2): 183–192.
Morris, K. 1989. "Metrics for Object-Oriented Software Development." Master's thesis, M.I.T., Sloan
School of Management, Cambridge, MA.
Myers, G. J. 2004. The Art of Software Testing, 2nd edn. John Wiley & Sons, Hoboken, NJ.
Musa, J. 1998. Software Reliability Engineering. McGraw-Hill, New York.
Vitharana, P., H. Jain, and F. Zahedi. 2004. “Strategy-Based Design of Reusable Business Components.”
IEEE Transactions on Systems, Man, and Cybernetics—PART C: Applications and Reviews, 34(4):
460–474.
Voas, J. M. 1992. “A Dynamic Testing Complexity Metric.” Software Quality Journal, 1(2): 101–114.
Voas J. M. and K. W. Miller. 1995. “Software Testability: The New Verification.” IEEE Software, 12(3): 17.
Voas, J. M. and K. W. Miller. 1992. “The Revealing Power of a Test Case.” Journal of Software Testing,
Verification and Reliability, 2(1): 25–42.
Wake, S. and S. Henry. 1988. “A Model Based on Software Quality Factors which Predict
Maintainability.” In Proceedings of Conference on Software Maintenance, Phoenix, Arizona,
382–387.
Washizaki, H., H. Yamamoto, and Y. Fukazawa. 2003. "A Metrics Suite for Measuring Reusability of Software Components." In Proceedings of 9th International Software Metrics Symposium, Sydney, Australia, 211–223.
Weyuker, E. J. 1993. “More Experience with Data Flow Testing.” IEEE Transactions on Software
Engineering, 19(9): 912–919.
Weyuker, E. J. 1998. “Testing Component-Based Software: A Cautionary Tale.” IEEE Software, 15(5):
54–59.
Wijayasiriwardhane, T. and R. Lai. 2010. “Component Point: A System-Level Size Measure for
Component-Based Software Systems.” The Journal of Systems and Software, 83: 2456–2470.
Xie, M. and C. Wohlin. 1995. “An Additive Reliability Model for the Analysis of Modular Software
Failure Data.” In Proceedings of 6th International Symposium on Software Reliability Engineering
(ISSRE’95), Toulouse, France, 188–194.
Yacoub, S., B. Cukic, and H. Ammar. 2004. “A Scenario-Based Reliability Analysis Approach for
Component Based Software.” IEEE Transactions on Reliability, 53(4): 465–480.
Zhang, F., X. Zhou, J. Chen, and Y. Dong. 2008. "A Novel Model for Component-Based Software
Reliability Analysis.” In Proceedings of 11th IEEE High Assurance Systems Engineering Symposium,
Nanjing, China, 303–309.
4
Reusability Metric-Based Selection and
Verification of Components
4.1 Introduction
Effective and well-organized reuse of software deliverables, work products and artifacts offers a return on investment in terms of a shorter development cycle, reduced cost, improved quality and increased productivity. Software reusability is the effective reuse of pre-designed and tested parts of existing, proven software in new applications. This chapter presents a reusability metric to estimate the reusability of components in component-based software engineering. To estimate the reusability metric for the different classes of components, function points (Albrecht and Gaffney 1983) are used as the base metric, and a "reusability matrix" is defined that identifies the component reusability ratio. Where function points are not available for a component, lines of code can be used instead to compute the reusability metric.
McIlroy first proposed constructing software from mass-produced, reusable components, with the ability to modify them, but in a managed and controlled manner (McIlroy 1976). Cooper describes software reuse as the potential repeated use of a formerly designed software component, with minor, major or no modifications (Cooper 1994). Software reusability is the level of usefulness or the extent of reuse. Reusability is defined as the ease with which pre-designed objects can be used in a new context (Biggerstaff and Richter 1989, Prieto-Diaz 1992, Kim and Stohr 1998).
In the literature, software reuse has been defined as a concept or a process, whereas software reusability is defined as the outcome of that process. Reusability is the actual level of implementation of reuse, and it acts as a measurement tool to quantify the proportion of reuse of a pre-designed, implemented and qualified artifact. The present work defines a quantifiable approach to measure the reusability of a component in component-based software.
Reusable software artifacts include, among others:
• Domain knowledge: facts and rules of the domain that are uniformly applicable to all
the solutions of that particular domain.
• Development process: steps and methodology reused to construct new software
through which previous software was implemented successfully.
• Conceptual design and architecture: either in the form of theoretical documents or the
implementation part of the previous application.
• Requirements specification: requirements defined for earlier software that fit a new application, in part or in whole. Specifications include both functional and non-functional requirements.
• Design documents: include graphical interfaces as well as database designs. Design
includes system-level design and detailed design of each component.
• Code segments and code lists: pre-compiled and pre-executed code fragments developed
for previous software. It includes the code of functions, modules or data structures.
• Interfaces: used to interact and integrate various components without compatibility-
or context-related problems.
• Test cases and test plans: developed for and applied to experienced applications and
may be reused in a new context.
• Test data: used as an input in testing of software components.
• User feedback: feedback from earlier software can be an important factor in developing new software that fulfils the needs of the user.
Only some artifacts are included here, though an enormous number of reusable software artifacts can be identified. Reusable work products share certain properties that mutually contribute to exploring and supporting reusability. These characteristics include, but are not limited to, those identified by Biggerstaff and Richter (1989), Matsumoto (1989) and McClure (1989).
Software artifacts can be characterized at various levels, including statement level, func-
tion level, component level or system level (Maxim 2019).
a. Statement-level reusability defines the reuse of single-line statements, such as pre-defined syntactical statements or semantics of syntax.
b. Function-level reusable artifacts are the implementation of single well-defined functions.
c. Component-level reuse describes sub-routines or complete objects of previously developed software reused in a new application.
d. System-level reusability is about reusing an entire application.
4.4 Reusability Metrics
In component-based software development, reused and new components are assem-
bled together to develop component-based applications. The comparison between the
reused and the new components defines the metric of component reusability. In this
chapter, we define a quantifiable approach to measuring the reusability of a component
in component-based software. Using the level and degree of reusability, we categorize
components into four classes: newly developed, fully qualified adaptable, partially
qualified adaptable and off-the-shelf components (Brown and Wallnau 1998, Atkinson
et al. 2002).
a. Newly developed components: These are constructed from scratch according to the requirements of an individual application. These components are developed either by the development team or by some third party with experience in developing reusable components.
b. Partially qualified adaptable components: These need a major degree of alteration in their code to fit into the software under development.
c. Fully qualified adaptable components: These require only a minor degree of adaptation to qualify for the needs of the current application. Their reusability level is much higher than that of partially qualified components.
d. Off-the-shelf components: These are pre-designed, pre-constructed and pre-tested components. They are third-party components, or were built by the development team as part of an earlier project, and can be reused in the current software without any modification or change.
To quantify the size of components, function points are computed from five functional units (Albrecht and Gaffney 1983):
1. External inputs are the number of distinct data inputs provided to the software, or the control-information inputs that modify the data in internal logical files. The same inputs provided with the same logic are not counted for every occurrence; all repeated formats are treated as one count.
2. External outputs are the number of distinct data or control outputs provided by the software. The same outputs achieved with the same logic are not counted for every occurrence; all repeated formats are treated as one count.
3. External inquiries are the number of inputs or outputs provided to or obtained from the system under consideration without making any change in the internal logical files. The same inputs/outputs with the same logic are not counted for every occurrence; all repeated formats are treated as one count.
4. Internal logical files are the number of groups of user data and content residing in the system, or of control information produced or used in the application.
5. External interface files are the number of communal data, contents, files or control information accessed, provided or shared among the various applications of the system.
These five functional units are categorized into three levels of complexity: low/simple, average/medium or high/complex. Albrecht identified and defined weights for these complexities with respect to all five functional units. The functional units and their corresponding weights are used to count the unadjusted function points (UFP):

UFP = Σ(i=1..5) Σ(j=1..3) (count(i,j) × weight(i,j))

where i denotes the five functional units and j denotes the three levels of complexity.
The function-point technique does not depend on the tools, technologies or languages used to develop the program or software: two dissimilar programs with different lines of code may yield the same number of function points. Because these estimates are not based on lines of code, they can be made early in development, as soon as the requirements phase has commenced.
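As an illustration, the unadjusted function-point count can be sketched in a few lines of Python. This is a minimal sketch: the simple/average/complex weights below are the commonly tabulated values for Albrecht's method, and the unit counts are invented for illustration.

WEIGHTS = {                     # functional unit -> (low, average, high) weights
    "external_inputs":      (3, 4, 6),
    "external_outputs":     (4, 5, 7),
    "external_inquiries":   (3, 4, 6),
    "internal_files":       (7, 10, 15),
    "external_interfaces":  (5, 7, 10),
}

def unadjusted_fp(counts):
    """counts: unit -> (n_low, n_average, n_high) occurrences of each complexity."""
    return sum(
        n * w
        for unit, per_level in counts.items()
        for n, w in zip(per_level, WEIGHTS[unit])
    )

counts = {  # hypothetical tallies for one component
    "external_inputs":      (4, 2, 1),
    "external_outputs":     (3, 1, 0),
    "external_inquiries":   (2, 0, 0),
    "internal_files":       (1, 1, 0),
    "external_interfaces":  (0, 1, 0),
}
print(unadjusted_fp(counts))    # 73 for these assumed tallies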
All these metrics can also be defined with the help of lines of code (LOC). The following notation is used:

Ci: component i
CCBS: total number of components in the component-based software
|CNew|: cardinality (count) of new components
|CReused|: cardinality of reused components selected from the repository
|COff-the-shelf|: cardinality of off-the-shelf components
CAdaptable: adaptable/modifiable components
CFull-qual: fully qualified adaptable components
CPart-qual: partially qualified adaptable components
RMCi: reusability metric of component Ci
RMCCBS: reusability metric of the component-based software
RMCFull-Qual: reusability metric of fully qualified components
RMCCBS-Full-Qual: reusability metric of fully qualified components at system level
RMCi-Full-Qual: reusability metric of fully qualified components at component level
RMCPart-Qual: reusability metric of partially qualified components
RMCCBS-Part-Qual: reusability metric of partially qualified components at system level
RMCi-Part-Qual: reusability metric of partially qualified components at component level
RMCOff-the-shelf: reusability metric of off-the-shelf components
RMCCBS-Off-the-shelf: reusability metric of off-the-shelf components at application (system) level
RMCi-Off-the-shelf: reusability metric of off-the-shelf components at component level
For the reusability metric we define four types of function points of a component:
a. Total function points
b. Reused function points
c. Adaptable function points
d. New function points.
a. Total Function Points of a Component
Using Function Points
These are defined as the summation of reused function points and new function points:

FPCi = FPCi-Reused + FPCi-New (4.1)

where FPCi is the total number of function points of a component, FPCi-Reused is the count of reused function points and FPCi-New is the number of new function points.
b. Reused Function Points of a Component
Using Function Points
Reused function points are the summation of the off-the-shelf, fully qualified and partially qualified function points:

FPCi-Reused = FPCi-Off-the-shelf + FPCi-Full-qual + FPCi-Part-qual (4.2)

where FPCi-Reused is the count of reused function points, FPCi-Off-the-shelf is the count of off-the-shelf function points, FPCi-Full-qual is the number of fully qualified function points and FPCi-Part-qual is the number of partially qualified function points.
Using Lines of Code (LOC)

LOCCi-Reused = LOCCi-Off-the-shelf + LOCCi-Full-qual + LOCCi-Part-qual

where LOCCi-Reused is the number of reused lines of code, LOCCi-Off-the-shelf is the number of lines of code that can be reused as they are, LOCCi-Full-qual represents the fully qualified lines of code and LOCCi-Part-qual is the number of lines of code that are partially qualified.
c. Adaptable Function Points of a Component
Using Function Points
Adaptable components include partially qualified as well as fully qualified components:

FPCi-Adaptable = FPCi-Full-qual + FPCi-Part-qual (4.3)

where FPCi-Adaptable is the total number of adaptable function points, FPCi-Full-qual is the number of fully qualified function points and FPCi-Part-qual is the number of partially qualified function points.
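The decomposition in Equations (4.1)-(4.3) can be captured in a small Python sketch; the field values below are illustrative assumptions, not data from the chapter.

from dataclasses import dataclass

@dataclass
class ComponentFP:
    off_the_shelf: int   # FP reused without change
    full_qual: int       # fully qualified adaptable FP
    part_qual: int       # partially qualified adaptable FP
    new: int             # FP developed from scratch

    @property
    def reused(self):        # Equation (4.2)
        return self.off_the_shelf + self.full_qual + self.part_qual

    @property
    def adaptable(self):     # Equation (4.3)
        return self.full_qual + self.part_qual

    @property
    def total(self):         # Equation (4.1)
        return self.reused + self.new

c = ComponentFP(off_the_shelf=120, full_qual=200, part_qual=180, new=100)
print(c.total, c.reused, c.adaptable)   # 600 500 380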
In this chapter, two levels of reusability metrics are proposed: component level and CBS system level. At system level, using the component count, the metric is the ratio of reused components to the total components of the CBS:

RMCCBS = |CReused| / |CCBS| (4.6)

Using Lines of Code (LOC)
On the basis of lines of code (LOC), we can define the metric as:

RMCCBS = LOCReused / LOCCBS
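A minimal sketch of Equation (4.6) follows; the component names are hypothetical.

reused     = {"LoginVerification", "UserLoginDatabase", "Exit"}
components = reused | {"UserLogin", "UserPage"}   # all components of the CBS

rmc_cbs = len(reused) / len(components)           # |C_Reused| / |C_CBS|
print(rmc_cbs)                                    # 0.6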
In the present work, reusability metrics for the following categories of component are
defined:
• Adaptable components, which include fully qualified adaptable and partially qualified adaptable components
• Off-the-shelf components, which can be reused without any modification.
A. Fully qualified components
B. Partially qualified components
In this chapter, the reusability assessment for both is defined in terms of components as
well as of function points.
For fully qualified components at component level, three cases arise:
Case 1: Reusability metric when all parts of the component are involved.
Case 2: Reusability metric when only reused parts of the component are involved.
Case 3: Reusability metric when only adaptable parts of the component are involved.
Every case can be defined using function points as well as lines of code.
i. When all parts of the component are involved
Using function points
Here the ratio of the fully qualified function points of a particular component, Ci, to the total number of function points of that component is defined as:

RMCi-Full-qualified = FPCi-Full-qualified / FPCi (4.8)

where FPCi denotes the total function points of a component, including new, adaptable and reused.
Using lines of code
Analogously:

RMCi-Full-qualified = LOCCi-Full-qualified / LOCCi

where LOCCi denotes the total LOC of a component, including new, customized and reused.
ii. When only reused parts of the component are involved
Using function points
In this context the ratio is taken with respect to only the reused function points of the component: the total number of fully qualified function points is divided by the total number of reused function points. The reusability metric in terms of function points is therefore defined as:

RMCi-Full-qualified = FPCi-Full-qualified / FPCi-Reused (4.9)

where the total reused function points FPCi-Reused represent the collection of off-the-shelf, fully qualified and partially qualified function points from Equation (4.2).
iii. When only adaptable parts of the component are involved
Using function points
This case takes the ratio with respect to the adaptable function points only. The total number of fully qualified function points is divided by the total number of adaptable function points of the component, giving a reusability metric of:

RMCi-Full-qualified = FPCi-Full-qualified / FPCi-Adaptable (4.10)

where the adaptable function points FPCi-Adaptable are the assembly of fully qualified and partially qualified function points only.
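The three cases differ only in the denominator, which the following sketch makes explicit for the fully qualified metric; the partially qualified and off-the-shelf variants follow by swapping the numerator. All function-point counts are assumed for illustration.

fp = {"full_qual": 200, "part_qual": 180, "off_the_shelf": 120, "new": 100}
fp["reused"]    = fp["off_the_shelf"] + fp["full_qual"] + fp["part_qual"]
fp["adaptable"] = fp["full_qual"] + fp["part_qual"]
fp["total"]     = fp["reused"] + fp["new"]

cases = {
    "all parts (4.8)":        fp["total"],
    "reused parts (4.9)":     fp["reused"],
    "adaptable parts (4.10)": fp["adaptable"],
}
for name, denom in cases.items():
    print(name, round(fp["full_qual"] / denom, 3))
# all parts 0.333, reused parts 0.4, adaptable parts 0.526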
The same three cases apply to fully qualified components at the CBS system level:
Case 1: Reusability metric when all parts of the component are involved.
Case 2: Reusability metric when only reused parts of the component are involved.
Case 3: Reusability metric when only adaptable parts of the component are involved.
Each case can be defined using function points as well as lines of code.
i. When all components are involved
In terms of component count
In this case, the ratio is taken in the context of the total number of components involved in the CBS system: the total number of fully qualified reused components is divided by the total number of components involved,

RMCCBS-Full-qualified = CFull-qualified / CCBS (4.11)

In terms of function points

RMCCBS-Full-qualified = FPCCBS-Full-qualified / FPCCBS (4.12)

In terms of lines of code
In this case, we define the reusability metric for fully qualified components as the ratio of the fully qualified lines of code of the CBS to its total lines of code:

RMCCBS-Full-qualified = LOCCCBS-Full-qualified / LOCCCBS
ii. When only reused components are involved
In terms of component count
In this case, the ratio is taken in the context of reused components only. It is defined as the total number of fully qualified reused components divided by the total number of reused components involved in the CBS application development:

RMCCBS-Full-qualified = CFull-qualified / CReused (4.13)

In terms of function points
In this case, the reusability metric is the ratio between the number of fully qualified function points and the reused function points of the component-based software:

RMCCBS-Full-qualified = FPCCBS-Full-qualified / FPCCBS-Reused (4.14)

In terms of lines of code
In this case, the reusability metric is the ratio between the number of fully qualified lines of code and the reused lines of code of the component-based software:

RMCCBS-Full-qualified = LOCCCBS-Full-qualified / LOCCCBS-Reused
iii. When only adaptable components are involved
In terms of component count
In this case, the ratio is taken in the context of adaptable components only: the total number of fully qualified reused components is divided by the total number of adaptable components involved in the development. It is defined as:

RMCCBS-Full-qualified = CFull-qualified / CAdaptable (4.15)

In terms of function points
Here, the reusability metric is defined as the total fully qualified function points divided by the total number of adaptable function points of the CBS:

RMCCBS-Full-qualified = FPCCBS-Full-qualified / FPCCBS-Adaptable (4.16)

In terms of lines of code
Here, the reusability metric is defined as the total fully qualified lines of code divided by the total number of adaptable lines of code of the CBS:

RMCCBS-Full-qualified = LOCCCBS-Full-qualified / LOCCCBS-Adaptable
For partially qualified components at component level, the same three cases apply:
Case 1: Reusability metric when all parts of the component are involved.
Case 2: Reusability metric when only reused parts of the component are involved.
Case 3: Reusability metric when only adaptable parts of the component are involved.
Each case can be defined using function points as well as lines of code.
i. When all parts of the component are involved
Using function points
In this case, we calculate the reusability metric in the context of a particular component when all parts of the component are involved: the total number of partially qualified function points is divided by the total number of function points of the component,

RMCi-Part-qualified = FPCi-Part-qualified / FPCi (4.17)

Using lines of code
Here, the total number of partially qualified reused lines of code is divided by the total number of lines of code of the component. That is:

RMCi-Part-qualified = LOCCi-Part-qualified / LOCCi
ii. When only reused parts of the component are involved
Using function points
In this context the ratio refers only to the reused function points of the component. The total number of partially qualified function points is divided by the total number of reused function points. Hence, it is defined as:

RMCi-Part-qualified = FPCi-Part-qualified / FPCi-Reused (4.18)

Using lines of code (LOC)
In this case the total number of partially qualified lines of code is divided by the total number of reused lines of code. Hence, it is defined as:

RMCi-Part-qualified = LOCCi-Part-qualified / LOCCi-Reused
iii. When only adaptable parts of the component are involved
Using function points
In this case, the ratio refers to the adaptable function points only. The total number of partially qualified function points is divided by the total number of adaptable function points only. Therefore, the reusability metric is defined as:

RMCi-Part-qualified = FPCi-Part-qualified / FPCi-Adaptable (4.19)

Using lines of code
The total number of partially qualified lines of code is divided by the total number of adaptable lines of code only. Therefore, the reusability metric is defined as:

RMCi-Part-qualified = LOCCi-Part-qualified / LOCCi-Adaptable
For partially qualified components at the CBS system level:
Case 1: Reusability metric when all parts of the component are involved.
Case 2: Reusability metric when only reused parts of the component are involved.
Case 3: Reusability metric when only adaptable parts of the component are involved.
Every case can be defined using function points as well as lines of code.
i. When all components are involved
In terms of component count
This is the case when the reusability metric is measured at system level. The total number of partially qualified reused components is divided by the total number of components involved in the development. It is defined as:

RMCCBS-Part-qualified = CPart-qualified / CCBS (4.20)
ii. When only reused components are involved
In terms of component count
In this case, the ratio is taken in the context of reused components only. The total number of partially qualified adaptable components is divided by the total number of reused components involved in the development. Therefore, it is defined as:

RMCCBS-Part-qualified = CPart-qualified / CReused (4.22)
iii. When only adaptable components are involved
In terms of component count
In this case, the reusability metric is taken in the context of adaptable components only. The total number of partially qualified reused components is divided by the total number of adaptable components involved in the development, and defined as:

RMCCBS-Part-qualified = CPart-qualified / CAdaptable (4.24)

In terms of function points
Here, the reusability metric is given as the ratio between the partially qualified and total adaptable function points of the CBS, as:

RMCCBS-Part-qualified = FPCCBS-Part-qualified / FPCCBS-Adaptable (4.25)

In terms of lines of code

RMCCBS-Part-qualified = LOCCCBS-Part-qualified / LOCCCBS-Adaptable
For off-the-shelf components, which are reused without modification, only two cases arise:
Case 1: Reusability metric when all parts of the component are involved.
Case 2: Reusability metric when only reused parts of the component are involved.
Each case can be defined using function points as well as lines of code.
i. When all parts of the component are involved
Using function points
In this context, we compute the reusability metric for a particular component: the total number of off-the-shelf reused function points is divided by the total number of function points of the component. It is defined as:

RMCi-Off-the-shelf = FPCi-Off-the-shelf / FPCi (4.26)

Using lines of code

RMCi-Off-the-shelf = LOCCi-Off-the-shelf / LOCCi
ii. When only reused parts of the component are involved
Using function points
This is when the ratio is taken in the context of the reused function points of the component only: the total number of off-the-shelf function points is divided by the total number of reused function points. It is given as:

RMCi-Off-the-shelf = FPCi-Off-the-shelf / FPCi-Reused (4.27)

Using lines of code

RMCi-Off-the-shelf = LOCCi-Off-the-shelf / LOCCi-Reused
ii. When only reused components are involved
In terms of component count
In this case, the ratio is taken in the context of reused components only: the total number of off-the-shelf reused components is divided by the total number of reused components involved in the development. It is defined as:

RMCCBS-Off-the-shelf = COff-the-shelf / CReused (4.30)

In terms of function points
In terms of function points, the reusability metric is obtained by dividing the total off-the-shelf function points by the total reused function points of the component-based software. It is defined as:

RMCCBS-Off-the-shelf = FPCCBS-Off-the-shelf / FPCCBS-Reused (4.31)
4.5.1 Reusability Matrix
The reusability matrix is a matrix containing the reusability-metric computations of each component in the component-based software. The information contained in the reusability matrix serves as a selection and verification criterion for the candidate components.
The reusability matrix is a row-column matrix containing the values of size estimations of components in the form of function points or lines of code. The columns represent the candidate components' names and the rows represent the reusability-metric values of the corresponding components, as shown in Table 4.1.
TABLE 4.1
Reusability Matrix (columns: candidate components)

Reusability matrix     C1                    C2                    C3                    C4
RMCi-Part-qualified    Partially qualified   Partially qualified   Partially qualified   Partially qualified
                       RM value of C1        RM value of C2        RM value of C3        RM value of C4
RMCi-Full-qualified    Fully qualified RM    Fully qualified RM    Fully qualified RM    Fully qualified RM
                       value of C1           value of C2           value of C3           value of C4
RMCi-Off-the-shelf     Off-the-shelf RM      Off-the-shelf RM      Off-the-shelf RM      Off-the-shelf RM
                       value of C1           value of C2           value of C3           value of C4
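A reusability matrix of this shape can be assembled programmatically, as in the following sketch; all function-point figures are invented for illustration.

candidates = {
    # component: (part_qual_fp, full_qual_fp, off_the_shelf_fp, total_fp)
    "C1": (150, 200, 100, 600),
    "C2": (120, 250, 150, 620),
}

rows = ("RM-part-qualified", "RM-full-qualified", "RM-off-the-shelf")
matrix = {
    name: {c: round(v[i] / v[3], 2) for c, v in candidates.items()}
    for i, name in enumerate(rows)
}
for name, row in matrix.items():
    print(name, row)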
4.5.2 Case Study
This section discusses a case study analyzing the reusability metric with the help of the reusability matrix. The study concerns part of a component-based software system.
Figure 4.1 shows the component diagram of the Login_Process and Exit_Process of software that has five components: UserLoginComponent, LoginVerificationComponent, UserLoginDatabase, UserPageComponent and ExitComponent. The user fills in data/information related to their login in the UserLoginComponent, which is checked and cleared by the LoginVerificationComponent. Verification and clearance are done through the UserLoginDatabase component. If the user data is verified then the UserPageComponent is enabled; otherwise the LoginVerificationComponent redirects to the UserLoginComponent.
Here the aim is to select a suitable UserPageComponent (shown as an oval in Figure 4.1). This UserPageComponent has five candidate components, denoted as C11, C12, C13, C14 and C15, which we have to test for reusability, as shown in Table 4.2. Here the assumption is that function points are available for each candidate component. The total function points of candidate component C11 are 600, C12 has 620 function points, C13 has 580, C14 has 570 and C15 has 480, as shown in Table 4.2. From these function points, the developer can identify the partially qualified, fully qualified and off-the-shelf function points of the component, as required in the software under development. In this case study,
FIGURE 4.1
User login page: component diagram showing the request/response interactions among UserLoginComponent, LoginVerificationComponent, UserLoginDatabase, UserPageComponent and ExitComponent.
TABLE 4.2
Partially Qualified, Fully Qualified and Off-the-Shelf Function Points of Five Candidate Components
Function Points C11 C12 C13 C14 C15
TABLE 4.3
Reused, Adaptable and New Function Points of Five Candidate Components
Function Points C11 C12 C13 C14 C15
the values of partially qualified, fully qualified and off-the-shelf function points of the
component are taken, as shown in Table 4.2. Here, the columns represent the candidate
components’ names, C11, C12, C13, C14 and C15, and the rows represent the function points of
the corresponding components.
Using Table 4.2, and Equations (4.1), (4.2), (4.3) and (4.4) respectively, we can compute
the values of the number of reused, adaptable and new function points of the candidate
components, as shown in Table 4.3.
With the help of Tables 4.2 and 4.3, now we can draw the reusability matrix for five can-
didate components using the reusability-metric method.
TABLE 4.4
Reusability Matrix when New and Reused Function Points are Involved
Reusability Matrix C11 C12 C13 C14 C15
TABLE 4.5
Reusability Matrix when only Reused Function Points of the Component are Involved
Reusability Matrix C11 C12 C13 C14 C15
TABLE 4.6
Reusability Matrix when only Adaptable Function Points of the Component are Involved
Reusability matrix C11 C12 C13 C14 C15
FIGURE 4.2
Reusability graph when components contain new and reused function points (reusability-metric values of RMCi, RMi-part-qualified, RMi-full-qualified and RMi-off-the-shelf for candidate components C11–C15).
4.5.3.1 Selection of Components When All the Parts of the Component Are Considered
From Table 4.4 and Figure 4.2 we note that, when selection is made at component level, the eligible components are those with the highest reusability-metric values.
FIGURE 4.3
Reusability graph when components contain reused function points only (RMi-part-qualified, RMi-full-qualified and RMi-off-the-shelf for candidates C11–C15).
FIGURE 4.4
Reusability graph when components contain adaptable function points only (RMi-part-qualified and RMi-full-qualified for candidates C11–C15).
We can select components according to our criteria and selection level. The selection process for components at system level can be defined in a similar manner: at system level all the values are stored in terms of applications, and all the computations are in terms of function points defined at CBS-system level.
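The selection step itself reduces to choosing the candidate with the highest value of the metric used as the criterion, as this sketch illustrates with hypothetical values in the spirit of Tables 4.4-4.6.

rm_off_the_shelf = {"C11": 0.20, "C12": 0.24, "C13": 0.31, "C14": 0.28, "C15": 0.25}

best = max(rm_off_the_shelf, key=rm_off_the_shelf.get)
print(best, rm_off_the_shelf[best])   # candidate with the highest RM value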
Summary
In this chapter, estimation methods for the reusability of components in the component-based software environment are elaborated for various categories of reusable components. The four classes of components are new, partially qualified, fully qualified and off-the-shelf components. Reusability metrics for these components are defined at two different levels: system level and individual component level. Further, this work defines the reusability matrix for a number of candidate components serving the same purpose. Using the above-defined reusability metric and reusability matrix, one can verify the selection of an individual component from the candidate components.
A case study has been used to model a scenario of five components. These components serve the same purpose, and we have to select one among them for reuse in software that is under development. The results obtained from this case study illustrate that the reusability metric can be used at three different levels for component selection as well as verification. The values of the different reusability matrices can be used to select or reject a candidate component. The results suggest that an increase in the reusability of a component enhances the probability of that component being selected. These metrics and matrices are helpful not only for computing the reusability attributes of components in CBS applications but can also be stored as a performance coefficient along with the component in the repository.
References
Albrecht, A. and J. E. Gaffney. 1983. "Software Function, Source Lines of Code, and Development Effort Prediction: A Software Science Validation." IEEE Transactions on Software Engineering, SE-9: 639–648.
Atkinson, C. et al. 2002. Component-Based Product-Line Engineering with UML. Addison-Wesley,
London.
Basili, V.R. and H. D. Rombach. 1988. “Towards a Comprehensive Framework for Reuse: A Reuse-
Enabling Software Evolution Environment.” Technical Report CS-TR-2158, University of
Maryland.
Bersoff E.H. and A. M. Davis. 1991. “Impacts of Life Cycle Models on Software Configuration
Management.” Communications of the ACM, 8(34): 104–118.
Biggerstaff, T. J. and C. Richter. 1989. Reusability Framework, Assessment, and Directions, Software
Reusability: Concepts and Models. Addison-Wesley Publishing Company, New York, 1–18.
Braun, C. L. 1994. "Reuse." In Marciniak, 1055–1069.
Brown, A. W. and K. C. Wallnau. 1998. “The Current State of CBSE.” IEEE Software, 15(5): 37–46.
Cooper, J. 1994. “Reuse—The Business Implications.” In Marciniak, 1071–1077.
Freeman, P. 1987. “Reusable Software Engineering Concepts and Research Directions.” In Tutorial:
Software Reusability, ed. P Freeman. IEEE Computer Society Press, Los Alamitos, 10–23.
Johnson, L. and D. R. Harris. 1991. "Sharing and Reuse of Requirements Knowledge." In Proceedings of KBSE-91. IEEE Press, Los Alamitos, CA, 57–66.
Kim, Y. and E. A. Stohr. 1998. “Software Reuse: Survey and Research Directions.” Journal of Management
Information Systems, 14: 113–147.
Krueger, C. W. 1992. “Software Reuse.” ACM Computing Surveys (CSUR), 24(2): 131–183.
Lim, W. C. 1994. “Effects of Reuse on Quality, Productivity, and Economics.” IEEE Software, 11(5): 23.
Maiden, N. and A. Sutcliffe. 1991. "Analogical Matching for Specification Reuse." In Proceedings of KBSE-91, IEEE Press, Los Alamitos, CA, 108–116.
Matsumoto, Y. 1989. Some Experiences in Promoting Reusable Software: Presentation in Higher Abstract
Levels, Software Reusability: Concepts and Models. ACM Addison-Wesley Publishing Company,
New York, 157–185.
Maxim, R. B. 2019. “Software Reuse and Component-Based Software Engineering.” CIS 376, UM-Dearborn.
McClure, C. 1989. CASE Is Software Automation. Prentice Hall, Englewood Cliffs, NJ.
McIlroy, M. D. 1976. "Mass Produced Software Components." In J. M. Buxton, P. Naur, and B. Randell, eds., Software Engineering Concepts and Techniques. NATO Conference on Software Engineering, NATO Science Committee, Garmisch, Germany, 88–98.
Prieto-Diaz, R. 1992. Classification of Reusable Modules, Software Reusability: Concepts and Models, J. B.
Ted and J. P. Alan, eds. Addison-Wesley Publication Company, New York, 99–123.
Tracz, W. 1995. Confessions of a Used Program Salesman: Institutionalizing Software Reuse. Addison-
Wesley, Reading, MA.
5
Interaction and Integration Complexity Metrics
for Component-Based Software
5.1 Introduction
In component-based software development, the developer’s emphasis is on the assembly
and integration of pre-constructed, pre-examined, customizable and easily deployable
software components, rather than on developing software from scratch. Complexity
assessment is always an exigent task for the designers of large-scale applications and soft-
ware where software is divided into various components or modules. Complexity accu-
mulates as the size of the software increases. Integration and interactions among the
components follow a well-defined architectural design and should occur according to the
user's requirement specification. Components interact with each other mainly to share and communicate information, data, control and similar resources (IEEE 1987, Schach 1990, IEEE 1990, Gaedke and Rehse 2000, Tiwari and Kumar 2014). In the literature, the complexity of programs and software is treated as a "multidimensional construct" (Boehm 1981, Shepperd 1988).
Interaction and integration complexities of various pieces of code play a vital role in the overall behavior of software. As the amount of code increases, the interaction level of the software also increases in accordance with its requirements.
This chapter aims to suggest and develop competent and proficient interaction-complexity estimation measures and metrics. Two complexity computation techniques are suggested: in-out interaction complexity metrics and cyclomatic complexity metrics for component-based software.
5.2.2 Notations
Graph-theory notation is used throughout this chapter to denote all types of interactions
shared among various components. Components are shown as vertices, and interactions
among components are denoted as edges. Generally, components make interactions to
share and communicate information, data, control or similar resources.
FIGURE 5.1
Interactions between two components: the requesting edge from C1 to C2 is an outgoing interaction (Iout) of C1, and the responding edge from C2 to C1 is an incoming interaction (Iin) of C1.
For in-out interaction metrics, edges are divided into two categories:
• Incoming interactions
• Outgoing interactions
i. Incoming interactions: Edges in the graph that are coming towards the compo-
nent are referred to as incoming interactions. These edges are drawn due to:
• A request sent by any other component to the receiving component, or
• A response from some other component to the request-sending component
Figure 5.1 shows incoming interactions and outgoing interactions. The edge com-
ing from component C2 in the direction of component C1 is termed the incoming
interaction for component C1 and denoted as Iin.
ii. Outgoing interactions: The edges in the graph that are going out from the compo-
nent are referred to as the outgoing interactions. These edges are drawn due to
either:
• A request sent by the component to some other component, or
• A response from the component to the request-sending component.
In Figure 5.1, the edge going from component C1 in the direction of component
C2 is termed the outgoing interaction for component C1 and denoted as Iout.
When we assess in-out interaction complexity, all types of edges are considered: both incoming and outgoing interactions. From Figure 5.1 we can observe that component C1 contains one outgoing interaction (Iout), the requesting edge from C1 to C2, and one incoming interaction (Iin), the response edge coming from C2 to C1. As components C1 and C2 are inter-dependent for requests and responses, evaluation of the in-out interaction complexity must also consider the complexities produced by both components C1 and C2.
The total interactions of a component Ci are defined as:

TICi = Iout + Iin (5.1)

where Iin denotes the incoming interactions and Iout the outgoing interactions of the component.
The total interactions of the component-based software are obtained by summing over all n components:

TICBS = Σ(i=1..n) Iout(i) + Σ(i=1..n) Iin(i) (5.2)
The interaction ratio of a component is the ratio of its outgoing to its incoming interactions:

IRCi = Iout / Iin (5.3)

• If IRCi = 1 then Iin = Iout: the total number of incoming interactions is equal to the total number of outgoing interactions, and we can conclude that there is equivalent dependency among contributing components.
• If IRCi < 1 then Iout < Iin: the total number of incoming interactions is greater than the total number of outgoing interactions, and we can infer that there is high dependency on the responding component.
• If IRCi > 1 then Iout > Iin, implying that there is high dependency on the requesting component.
At system level, the interaction ratio of the component-based software is:

IRCBS = Σ(i=1..n) Iout(i) / Σ(i=1..n) Iin(i) (5.4)
The average interactions of the components are the total interactions divided by the number of components Cn:

AICn = (Iin + Iout) / Cn (5.5)

• If AICn < 1/2, at least half of the interacting components are disjoint,
• If AICn = 1, there is at least one interaction among components, and
• If AICn > 1, the components are highly coupled.
The interaction percentage of a component is its share of the total interactions of the software:

IPCn = (Iin + Iout) / TICBS (5.6)

• If IPCn < 1, it implies an underflow situation, that is, more interactions are possible among the components,
• If IPCn = 1, it shows a balanced situation, and
• If IPCn > 1, it shows an overflow situation, that is, components share heavy interaction, which will increase the complexity.
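All six in-out interaction metrics can be computed directly from per-component (Iin, Iout) counts, as the following minimal sketch shows. The counts used here are those of the two-component case study that follows (Table 5.1); the interaction ratio is skipped where Iin = 0, matching the "not applicable" convention used below.

inout = {"C1": (1, 1), "C2": (0, 0)}          # component -> (I_in, I_out)

ti = {c: i_in + i_out for c, (i_in, i_out) in inout.items()}        # Eq (5.1)
ti_cbs = sum(ti.values())                                           # Eq (5.2)
ir = {c: i_out / i_in                                               # Eq (5.3)
      for c, (i_in, i_out) in inout.items() if i_in > 0}
ir_cbs = (sum(o for _, o in inout.values())
          / sum(i for i, _ in inout.values()))                      # Eq (5.4)
ai = ti_cbs / len(inout)                                            # Eq (5.5)
ip = {c: t / ti_cbs for c, t in ti.items()}                         # Eq (5.6)

print(ti, ti_cbs, ir, ir_cbs, ai, ip)
# {'C1': 2, 'C2': 0} 2 {'C1': 1.0} 1.0 1.0 {'C1': 1.0, 'C2': 0.0}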
5.2.9 Case Study
To implement the in-out interaction complexity, we consider two exemplar case studies,
the first having two components, and the second having four components.
Total Interactions of a Component (TICi):
For the first case study, applying the values given in Table 5.1 to Equation (5.1):

TIC1 = Iout + Iin = 1 + 1 = 2
TIC2 = Iout + Iin = 0 + 0 = 0

Total Interactions of the Component-Based Software (TICBS):
This software consists of two components only, C1 and C2. Applying the values given in Table 5.1 to Equation (5.2), we get:

TICBS = Σ Iout + Σ Iin = (1 + 0) + (1 + 0) = 2
Interaction Ratio of a Component (IRCi):
For component C1, applying Equation (5.3):

IRC1 = Iout / Iin = 1/1 = 1

For component C2, there are no incoming-outgoing interactions. Hence the interaction ratio is not applicable to component C2.
Interaction Ratio of the Component-Based Software (IRCBS):
Case 1 consists of two components, C1 and C2. Applying the values given in Table 5.1 to Equation (5.4), we get:

IRCBS = Σ Iout / Σ Iin = 1/1 = 1
TABLE 5.1
Number of In-Out Interactions between Components C1 and C2

Component    Incoming Interactions (Iin)    Outgoing Interactions (Iout)
C1           1                              1
C2           0                              0
Average Interactions of the Components (AICn):
Applying the values of Table 5.1 to Equation (5.5):

AICn = (Iin + Iout) / Cn = 2/2 = 1

Interaction Percentage of a Component (IPCn):
We apply the values defined in Table 5.1 to Equation (5.6) to compute the interaction percentage.
Interaction Percentage of Component C1:

IPC1 = (Iin + Iout) / TICBS = 2/2 = 1

Interaction Percentage of Component C2:

IPC2 = (Iin + Iout) / TICBS = 0/2 = 0
For the second case study, applying the values given in Table 5.2 to Equation (5.1):

TIC1 = Iout + Iin = 2 + 2 = 4
TIC2 = Iout + Iin = 0 + 0 = 0
TIC3 = Iout + Iin = 2 + 2 = 4
TIC4 = Iout + Iin = 2 + 2 = 4
FIGURE 5.2
Interactions among four components (incoming and outgoing edges among C1, C2, C3 and C4).
TABLE 5.2
Number of In-Out Interactions among Four Components

Component    Incoming Interactions (Iin)    Outgoing Interactions (Iout)
C1           2                              2
C2           0                              0
C3           2                              2
C4           2                              2
Applying the values of Table 5.2 to Equation (5.2):

TICBS = Σ Iout + Σ Iin = (2 + 0 + 2 + 2) + (2 + 0 + 2 + 2) = 6 + 6 = 12

For component C1, applying Equation (5.3):

IRC1 = 2/2 = 1
For component C2, there are no incoming-outgoing interactions. Hence the interaction ratio is not applicable to component C2.
For component C3:

IRC3 = Iout / Iin = 2/2 = 1

For component C4:

IRC4 = Iout / Iin = 2/2 = 1
Interaction Ratio of the Component-Based Software (IRCBS):

IRCBS = Σ Iout / Σ Iin = 6/6 = 1

Interaction Percentage of Component C1:

IPC1 = (Iin + Iout) / TICBS = 4/12 = 0.33
Interaction Percentage of Component C2:

IPC2 = (Iin + Iout) / TICBS = 0/12 = 0

Interaction Percentage of Component C3:

IPC3 = (Iin + Iout) / TICBS = 4/12 = 0.33

Interaction Percentage of Component C4:

IPC4 = (Iin + Iout) / TICBS = 4/12 = 0.33
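Applying the same sketch given earlier to the four-component counts of Table 5.2 reproduces the values derived above:

inout = {"C1": (2, 2), "C2": (0, 0), "C3": (2, 2), "C4": (2, 2)}

ti_cbs = sum(i + o for i, o in inout.values())                 # 12
ip = {c: (i + o) / ti_cbs for c, (i, o) in inout.items()}
print(ti_cbs, {c: round(v, 2) for c, v in ip.items()})
# 12 {'C1': 0.33, 'C2': 0.0, 'C3': 0.33, 'C4': 0.33}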
McCabe's cyclomatic complexity is defined as:

V(G) = e - n + 2p

where e is the number of edges, n the number of nodes and p the number of connected components; the 2 is the "result of adding an extra edge from the exit node to the entry node of each component module graph" (Pressman 2005). In a structured program where we have predicate nodes, complexity is additive over subprograms:

V(P1 ∪ P2) = V(P1) + V(P2)

where P1 and P2 are two subprograms and P1 is calling P2.
Complexity depends not on the size, but on the coding structure of the program. If a
program has only one statement then it has complexity 1. That is, V(G) ≥ 1. Cyclomatic
complexity V(G) actually defines the number of independent logics/paths in the
program.
FIGURE 5.3
Interaction flow graph of two components, C1 and C2, with a requesting edge, a responding edge and one closed region (CR).
• Method 1:
For such multifaceted and multi-component software, cyclomatic complexity is defined as:

V(G) = |E| - |V| + 2 + |P| (5.7)

where |E| denotes the number of edges, |V| denotes the total number of vertices, |P| is the total number of contributing components, and the constant 2 is used to indicate that "the node V contributes to the complexity if its out-degree is 2."
• Method 2:
Another metric for computing the cyclomatic complexity of component-based software is defined in Equation (5.8) as:

V(G) = Σ(i=1..n) ICi + Σ(j=1..m) CRj + OR (5.8)

where ICi is the complexity contributed by the internal flow graph of component i, CRj is the j-th closed region of the interaction flow graph (each counted as 1), OR is the open region, n is the number of components and m is the number of closed regions.
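Both methods are easy to script. The sketch below encodes Equations (5.7) and (5.8) as reconstructed above and checks them against the two-component example of Figure 5.3 (|E| = 10, |V| = 9, |P| = 2; IC1 = 1, IC2 = 2, one closed region, one open region).

def vg_method1(edges, vertices, components):
    """Equation (5.7): V(G) = |E| - |V| + 2 + |P|."""
    return edges - vertices + 2 + components

def vg_method2(ic, cr, open_regions=1):
    """Equation (5.8): sum of component complexities, closed regions and OR."""
    return sum(ic) + sum(cr) + open_regions

print(vg_method1(10, 9, 2))           # 5
print(vg_method2(ic=[1, 2], cr=[1]))  # 5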
5.3.3 Case Study
To illustrate these metrics, example case studies with different scenarios are discussed.
Both the methods defined in Equations (5.7) and (5.8) are applied to all scenarios.
Method 1:
From Figure 5.3, it is noted that |E| = 10, |V| = 9 and |P| = 2; therefore,

V(G) = |E| - |V| + 2 + |P| = 10 - 9 + 2 + 2 = 5
Method 2:
From Figure 5.3, it is noted that n = 2, m = 1, IC1 = 1, IC2 = 2, CR1 = 1 and OR = 1; therefore,

V(G) = Σ ICi + Σ CRj + OR = (1 + 2) + 1 + 1 = 5
Method 1:
From Figure 5.4, it is noted that |E| = 22, |V| = 16 and |P| = 3; therefore,

V(G) = |E| - |V| + 2 + |P| = 22 - 16 + 2 + 3 = 11

Method 2:
From Figure 5.4, it is noted that n = 3, m = 2, IC1 = 2, IC2 = 2, IC3 = 4, each CRj = 1 and OR = 1; therefore,

V(G) = (2 + 2 + 4) + 2 + 1 = 11
FIGURE 5.4
Interaction flow graph of three components with 2 CRs (components C1, C2 and C3, closed regions CR1 and CR2, and one open region OR).
Method 1:
From Figure 5.5, it is noted that |E| = 24, |V| = 16 and |P| = 3; therefore,

V(G) = |E| - |V| + 2 + |P| = 24 - 16 + 2 + 3 = 13
FIGURE 5.5
Interaction flow graph of three components with 4 CRs (components C1, C2 and C3, closed regions CR1–CR4).
Method 2:
From Figure 5.5, it is noted that n = 3, m = 4, IC1 = 2, IC2 = 2, IC3 = 4, each CRj = 1 and OR = 1; therefore,

V(G) = (2 + 2 + 4) + 4 + 1 = 13
FIGURE 5.6
Interaction flow graph of four components with 3 CRs (components C1–C4, closed regions CR1–CR3, and one open region OR).
Method 1:
From Figure 5.6, it is noted that |E| = 28, |V| = 21 and |P| = 4; therefore,

V(G) = |E| - |V| + 2 + |P| = 28 - 21 + 2 + 4 = 13

Method 2:
From Figure 5.6, it is noted that n = 4, m = 3, IC1 = 2, IC2 = 2, IC3 = 4, IC4 = 1, each CRj = 1 and OR = 1; therefore,

V(G) = (2 + 2 + 4 + 1) + 3 + 1 = 13
Method 1:
From Figure 5.7, it is noted that |E| = 32, |V| = 21 and |P| = 4; therefore,

V(G) = |E| - |V| + 2 + |P| = 32 - 21 + 2 + 4 = 17

Method 2:
From Figure 5.7, it is noted that n = 4, m = 7, IC1 = 2, IC2 = 2, IC3 = 4, IC4 = 1, each CRj = 1 and OR = 1; therefore,

V(G) = (2 + 2 + 4 + 1) + 7 + 1 = 17
FIGURE 5.7
Interaction flow graph of four components with 7 CRs (components C1–C4, closed regions CR1–CR7, and one open region OR).
FIGURE 5.8
Interaction flow graph of five components with 10 CRs (components C1–C5, closed regions CR1–CR10, and one open region OR).
Method 1:
From Figure 5.8, it is noted that |E| = 44, |V| = 27 and |P| = 5; therefore,

V(G) = |E| - |V| + 2 + |P| = 44 - 27 + 2 + 5 = 24

Method 2:
From Figure 5.8, it is noted that n = 5, m = 10, IC1 = 2, IC2 = 2, IC3 = 4, IC4 = 1, IC5 = 4, each CRj = 1 and OR = 1; therefore,

V(G) = (2 + 2 + 4 + 1 + 4) + 10 + 1 = 24
Summary
In the context of software engineering, complexity is probably the most crucial factor in software design. The emphasis for researchers is on devising methods and techniques that help to reduce the overall complexity of software. Complexity is expressed as an estimate of how efficiently resources are used and of the level of difficulty in handling the software. Component-based software applications are composed of independently deployable components, assembled with the common intention of contributing their functionalities to the system.
This chapter focuses on three major complexity computation metrics for component-based software: in-out interaction complexity, integration complexity and cyclomatic complexity. Complexity graphs have been used to define these metrics.
The in-out interaction complexity is defined on the basis of the edges coming towards a component as well as the edges leaving it. Six different but related metrics for interactions among components are defined: the total interactions of a component (TICi), the total interactions of the component-based software (TICBS), the interaction ratio of a component (IRCi), the interaction ratio of the component-based software (IRCBS), the average interactions of components (AICn) and the interaction percentage (IPCn).
References
Bauer, F. et al. 1968. Software Engineering: A Report on a Conference Sponsored by NATO Science Committee.
Scientific Affairs Division, NATO, Brussels.
Boehm, B. W. 1981. Software Engineering Economics. Prentice Hall, Englewood Cliffs, NJ.
Gaedke, M. and J. Rehse. 2000. “Supporting Compositional Reuse in Component-Based Web
Engineering.” In Proceedings of ACM symposium on Applied computing (SAC ’00). ACM Press,
New York, 927–933.
IEEE. 1987. Software Engineering Standards. IEEE Press, New York.
IEEE Standard 610.12-1990. 1990. Glossary of Software Engineering Terminology. IEEE, New York, ISBN: 1-55937-079-3.
Krueger, C. W. 1992. “Software Reuse.” ACM Computing Surveys (CSUR), 24(2): 131–183.
McCabe, T. 1976. “A Complexity Measure.” IEEE Transactions on Software Engineering, 2(8): 308–320.
Pressman, R. S. 2005. Software Engineering: A Practitioner's Approach, 6th edn. TMH International Edition, New York.
Schach, S. 1990. “Software Engineering.” Vanderbilt University, Aksen Association.
Shepperd, M. 1988. "A Critique of Cyclomatic Complexity as a Software Metric." Software Engineering Journal, 3(2): 30–36.
Tiwari, U. and S. Kumar. 2014. “Cyclomatic Complexity Metric for Component Based Software.”
ACM SIGSOFT Software Engineering Notes, 39(1): 1–6.
6
Component-Based Software Testing Techniques
and Test-Case Generation Methods
6.1 Introduction
Testing is the process of executing a program with the intent of finding errors
(Myers 1979)
Testing is one of the core activities of the software development process. In component-
based development the approach emphasizes the “use of pre-built and pre-tested compo-
nents.” Here the focus of developers is on black-box testing (functional testing) as well as
white-box testing (structural testing). Black-box testing emphasizes the behavioral attri-
butes of the components when they interact with each other. White-box testing techniques
are used to address the testing of the structural design and internal code of the software.
In this chapter, functional testing and structural testing strategies, and test-case genera-
tion techniques for CBSE are discussed. When components are integrated, they produce
explicit effects called integration effects. Integration-effect methodology is a black-box
technique as it covers the input and output domains only. In this chapter black-box and
white-box test-case generation methods are discussed. These methods are compared only
with the boundary-value analysis method since other black-box techniques require specific
input conditions and the number of test cases depends on those conditions. A white-box
testing technique named cyclomatic complexity is described.
6.2 Testing Techniques
For testing purposes, software constructs are divided into two broad categories:
input/output constructs and process/logic/code constructs. Input/output constructs refer to
inputs provided to the software and outputs produced by it. Normally input/output is
presented in the form of data, information and specific values. Process/logic/code con-
structs refer to the actual processing of software, including coding and internal structure.
Testing techniques are classified into two major classes according to the construct type:
1. Black-box testing
2. White-box testing
6.2.1 Black-Box Testing
Black-box testing techniques only consider software inputs and outputs. These techniques
apply to software whose code is not available or accessible. Input and output domains are
divided into various classes or partitions and tested separately. Black-box emphasizes the
external behavioral attributes of the components when they interact with each other. An
overview of black-box testing is shown in Figure 6.1. Tester control is on input and output
domains only. The internal structure and logic of the program and software are not acces-
sible. The literature describes various categories of black-box testing techniques: bound-
ary-value analysis, equivalence-class partitioning, decision table-based and cause-effect
graphing (see Chapter 3).
Input to a program/software is given in the form of data, information or specific values
as per the requirements. After processing, the achieved outputs are collected and tested.
Since the logic is hidden from the testers, they only control the inputs/outputs of the
software.
6.2.2 White-Box Testing
White-box testing techniques consider the internal logic of the software and apply to its
structural code. In this technique the program statements are checked and errors are
fixed. White-box testing techniques are used to address the testing of the structural
design and internal code of the software. An overview of white-box testing methods is
shown in Figure 6.2. White-box testing not only tests the structural behavior of software
but also considers the control flows, basis paths, data structures (within sub-component
and outside the sub-components), independent paths, logical mistakes, coding errors,
incoming and outgoing interfaces, and semantic errors. Here input provided to the soft-
ware is assumed to be true and the functionalities provided by the software are checked
(see Chapter 3).
FIGURE 6.1
Black-box component testing.
FIGURE 6.2
White-box component testing.
Components are often provided as black boxes, so their internal code is generally not accessible (Weyuker 1998, Harrold et al. 1999), which reduces control over the components and their testability.
Testing levels of component-based software must integrate software testing methods
into a predefined and planned sequence of actions. Testing methodologies must incorpo-
rate planning of testing, test-case design, implementation of testing, testing results and the
assessment of collected data. A component is typically made up of different independent
sub-components from various contexts. Therefore, a testing technique that addresses these
components not only individually but as a whole is required. Conventional testing meth-
odologies are applicable to small-scale programs and software, but they appear inefficient
when applied in the context of component-based software.
6.3.2 Test Specification
Every test has its attributes and characteristics. A test specification defines and specifies the
attributes of a test. It describes each individual test, the cardinality and the cardinality ratio of
test items belonging to each domain, the test elements and their formats, the testing process
for each independent test item, test-case recording methods, test-case semantics, testing dates,
and the minimum and maximum time taken to conduct the tests. In general, a test specifica-
tion provides the complete structure of the testing elements and its corresponding attributes.
6.3.3 Test Plan
The software under consideration has its own requirements and testers try to verify the
software in accordance with those requirements. A test plan is like a manuscript directive
that addresses the verification of the particular software’s requirement and the design
specifications. Testers, developers and sometimes end users contribute inputs to the test
plan, which consists of:
• Testing process
• Basic inputs and final outcomes
• Software elements that should be tested
• Required quality levels
• People involved in the testing process, i.e., tester, developer and end user
6.3.4 Test Cases
In addition to other elements, a component-based test case is a complete document built around three basic constructs. A test case executes the test plans, records the actual outcomes and compares them with the expected ones. Test cases are documents used at various levels of testing.
6.3.5 Test Documentation
During component-level testing the major focus is on input, output, and internal as well as
related and supporting logics of the component (Figure 6.3). Test documentation is a test-
ing repository not only for the individual component but for the complete testing data of
component-based software during integration and system-level testing.
FIGURE 6.3
Testing of individual components. (Adapted from Umesh and Kumar 2017.)
FIGURE 6.4
Integration-effect graph of two components.
TABLE 6.1
Probable Values of Integration-Effect Matrix of Two Components

Components | C1 | Integration Effect | C2 | Integration Effect
C1 | 1 | Effect of (C1): 0/1 | 0 if not connected, 1 if connected | Integration effect of (C1 Λ C2): 0/1
C2 | 0 if not connected, 1 if connected | Integration effect of (C2 Λ C1): 0/1 | 1 | Effect of (C2): 0/1
Step 1. Record the effect of integration of components. If the effect is "1," it will contribute to the count of test cases.
Step 2. Count the "1s" recorded in Step 1.
The following case studies with different scenarios illustrate the method described above.
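Before turning to them, a minimal sketch of the counting rule may help; it assumes each matrix row stores alternating connectivity and integration-effect entries, as in the matrices below, and the helper names are hypothetical.

```python
# Hedged sketch of the counting rule; rows store alternating connectivity and
# integration-effect entries, as in the integration-effect matrices below.

def test_cases_for_component(row) -> int:
    """(Count of 1s in the Integration Effect columns of the row) - 1."""
    return row[1::2].count(1) - 1

def total_test_cases(matrix) -> int:
    return sum(test_cases_for_component(row) for row in matrix)

# Two connected, fault-free components: one test case each, two in total.
assert total_test_cases([[1, 1, 1, 1],
                         [1, 1, 1, 1]]) == 2
```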
Components | C1 | Integration Effect | C2 | Integration Effect
C1 | 1 | 1 | 1 | 1
C2 | 1 | 1 | 1 | 1
Components | C1 | Integration Effect | C2 | Integration Effect
C1 | 1 | 1 | 1 | 1
C2 | 1 | 0 | 1 | 1
TABLE 6.4
Integration-Effect Matrix of Two Components

Components | C1 | Integration Effect | C2 | Integration Effect
C1 | 1 | 1 | 1 | 1
C2 | 1 | 1 | 1 | 1
Test cases for component C2, that is the case when C2 is involved
= (Count of 1s below the "Integration Effect" columns in the second row) − 1
= 2 − 1 = 1
Total test cases generated
= Count of test cases for component C1 + Count of test cases for component C2
= 1 + 1 = 2.
When we apply the boundary-value analysis (BVA) technique to this scenario we get:
If there are n components in the software, the minimum number of test cases produced by BVA is 4n + 1.
In the given scenario we have n = 2.
Then, count of test cases = 4 × n + 1 = 4 × 2 + 1 = 9
FIGURE 6.5
Interactions among three components.
The effect of integration can be computed as:
Integration of (C1 Λ C2 Λ C3)
= Effect of (C1) Λ Effect of (C2) Λ Effect of (C3)
Λ Integration effect of (C1 Λ C2) Λ Integration effect of (C1 Λ C3)
Λ Integration effect of (C2 Λ C1) Λ Integration effect of (C2 Λ C3)
Λ Integration effect of (C3 Λ C1) Λ Integration effect of (C3 Λ C2)
where
Integration of (C1 Λ C2 Λ C3) denotes the integration of 3 given components,
Effect of (C1) denotes the effects in the software produced by C1; it is either 0 or 1
Effect of (C2) denotes the effects in the software produced by C2; it is either 0 or 1
Effect of (C3) denotes the effects in the software produced by C3; it is either 0 or 1
FIGURE 6.6
Integration-effect graph of three components.
Integration effect of (C1 Λ C2) denotes the effect produced by interaction of
components C1 and C2; it is either 0 or 1.
Integration effect of (C2 Λ C3) denotes the effect produced by interaction of com-
ponents C2 and C3; it is either 0 or 1.
Integration effect of (C1 Λ C3) denotes the effect produced by interaction of com-
ponents C1 and C3; it is either 0 or 1.
Λ shows the “AND” operation.
The effect of each component (C1, C2 and C3) is denoted "1" if the component contains no faults, and "0" if it contains any error or fault. The integration value of (C1 Λ C2 Λ C3) in the integration-effect matrix will be 1 if and only if Effect of (C1) = 1, Effect of (C2) = 1, Effect of (C3) = 1, and Integration effect of (C1 Λ C2 Λ C3) = 1. If any of the given effects is 0, the integration value of (C1 Λ C2 Λ C3) will be 0.
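This AND rule lends itself to a one-line check; the sketch below is ours, with a hypothetical helper name.

```python
# One-line sketch of the AND rule described above.
def integration_value(effects, integration_effects) -> int:
    """1 only when every effect and every pairwise integration effect is 1."""
    return int(all(effects) and all(integration_effects))

assert integration_value([1, 1, 1], [1, 1, 1]) == 1
assert integration_value([1, 1, 0], [1, 1, 1]) == 0   # a faulty component zeroes it
```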
TABLE 6.5
Probable Values of Integration-Effect Matrix of Three Components

Components | C1 | Integration Effect | C2 | Integration Effect | C3 | Integration Effect
C1 | 1 | Effect of (C1): 0/1 | 0 or 1 | Integration effect of (C1 Λ C2): 0/1 | 0 or 1 | Integration effect of (C1 Λ C3): 0/1
C2 | 0 or 1 | Integration effect of (C2 Λ C1): 0/1 | 1 | Effect of (C2): 0/1 | 0 or 1 | Integration effect of (C2 Λ C3): 0/1
C3 | 0 or 1 | Integration effect of (C3 Λ C1): 0/1 | 0 or 1 | Integration effect of (C3 Λ C2): 0/1 | 1 | Effect of (C3): 0/1
TABLE 6.6
Actual Values of Integration-Effect Matrix of Three Components

Components | C1 | Integration Effect | C2 | Integration Effect | C3 | Integration Effect
C1 | 1 | 1 | 1 | 1 | 1 | 1
C2 | 1 | 1 | 1 | 1 | 0 | 0
C3 | 1 | 1 | 0 | 0 | 1 | 1
Test cases for component C3, that is the case when C3 is involved
= (Count of 1s below the "Integration Effect" columns in the third row) − 1
= 2 − 1 = 1
Total test cases generated
= Count of test cases for component C1 + Count of test cases for component C2 + Count of test cases for component C3
= 2 + 1 + 1 = 4.
When we apply the boundary-value analysis technique to this scenario we get:
If there are n components in the software, the minimum number of test cases produced by BVA is 4n + 1.
In the given scenario we have n = 3.
Then, count of test cases = 4 × 3 + 1 = 13
FIGURE 6.7
Interactions among four components.
FIGURE 6.8
Integration-effect graph of four components.
The components interact and produce effects, recorded as “e1,” “e2,” “e3,” “e4”
and “e5” respectively.
The effect of integration can be computed as:
Integration of (C1 Λ C2 Λ C3 Λ C4)
= Effect of (C1) Λ Effect of (C2) Λ Effect of (C3) Λ Effect of (C4)
Λ Integration effect of (C1 Λ C2) Λ Integration effect of (C1 Λ C3)
Λ Integration effect of (C1 Λ C4) Λ Integration effect of (C2 Λ C1)
Λ Integration effect of (C2 Λ C3) Λ Integration effect of (C2 Λ C4)
Λ Integration effect of (C3 Λ C1) Λ Integration effect of (C3 Λ C2)
Λ Integration effect of (C3 Λ C4) Λ Integration effect of (C4 Λ C1)
Λ Integration effect of (C4 Λ C2) Λ Integration effect of (C4 Λ C3)
TABLE 6.7
Probable Values of Integration-Effect Matrix of Four Components

Components | C1 | Integration Effect | C2 | Integration Effect | C3 | Integration Effect | C4 | Integration Effect
TABLE 6.8
Actual Values of Integration-Effect Matrix of Four Components

Components | C1 | Integration Effect | C2 | Integration Effect | C3 | Integration Effect | C4 | Integration Effect
C1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
C2 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1
C3 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 1
C4 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
FIGURE 6.9
Interactions among five components.
FIGURE 6.10
Integration-effect graph of five components.
The effect of integration can be computed as:
Integration of (C1 Λ C2 Λ C3 Λ C4 Λ C5)
= Effect of (C1) Λ Effect of (C2) Λ Effect of (C3) Λ Effect of (C4) Λ Effect of (C5)
Λ Integration effect of (C1 Λ C2) Λ Integration effect of (C1 Λ C3)
Λ Integration effect of (C1 Λ C4) Λ Integration effect of (C1 Λ C5)
Λ Integration effect of (C2 Λ C1) Λ Integration effect of (C2 Λ C3)
Λ Integration effect of (C2 Λ C4) Λ Integration effect of (C2 Λ C5)
Λ Integration effect of (C3 Λ C1) Λ Integration effect of (C3 Λ C2)
Λ Integration effect of (C3 Λ C4) Λ Integration effect of (C3 Λ C5)
Λ Integration effect of (C4 Λ C1) Λ Integration effect of (C4 Λ C2)
Λ Integration effect of (C4 Λ C3) Λ Integration effect of (C4 Λ C5)
Λ Integration effect of (C5 Λ C1) Λ Integration effect of (C5 Λ C2)
Λ Integration effect of (C5 Λ C3) Λ Integration effect of (C5 Λ C4)
where
Integration of (C1 Λ C2 Λ C3 Λ C4 Λ C5) denotes the integration of five given
components,
Effect of (C1) denotes the effects in the software produced by C1; it is either
0 or 1
Effect of (C2) denotes the effects in the software produced by C2; it is either
0 or 1
Effect of (C3) denotes the software effects produced by C3; it is either 0 or 1
Effect of (C4) denotes the effects in the software produced by C4; it is either 0 or 1
Effect of (C5) denotes the effects in the software produced by C5; it is either 0 or 1
Integration effect of (C1 Λ C2) denotes the effects produced by interaction of
components C1 and C2; it is either 0 or 1.
Integration effect of (C1 Λ C3) denotes the effects produced by interaction of
components C1 and C3; it is either 0 or 1.
Integration effect of (C1 Λ C4) denotes the effects produced by interaction of
components C1 and C4; it is either 0 or 1.
Integration effect of (C1 Λ C5) denotes the effects produced by interaction of
components C1 and C5; it is either 0 or 1.
Integration effect of (C2 Λ C1) denotes the effects produced by interaction of
components C2 and C1; it is either 0 or 1.
Integration effect of (C2 Λ C3) denotes the effects produced by interaction of
components C2 and C3; it is either 0 or 1.
Integration effect of (C2 Λ C4) denotes the effects produced by interaction of
components C2 and C4; it is either 0 or 1.
Integration effect of (C2 Λ C5) denotes the effects produced by interaction of
components C2 and C5; it is either 0 or 1.
Integration effect of (C3 Λ C1) denotes the effects produced by interaction of
components C3 and C1; it is either 0 or 1.
Integration effect of (C3 Λ C2) denotes the effects produced by interaction of
components C3 and C2; it is either 0 or 1.
Integration effect of (C3 Λ C4) denotes the effects produced by interaction of
components C3 and C4; it is either 0 or 1.
Test cases for component C2, that is the case when C2 is involved
= (Count of 1s below the "Integration Effect" columns in the second row) − 1
= 4 − 1 = 3
TABLE 6.9
Probable Values of Integration-Effect Matrix of Five Components

Components | C1 | Integration Effect | C2 | Integration Effect | C3 | Integration Effect | C4 | Integration Effect | C5 | Integration Effect
C1 | 1 | Effect of (C1): 0/1 | 1 | Integration effect of (C1 Λ C2): 0/1 | 1 | Integration effect of (C1 Λ C3): 0/1 | 0 | Integration effect of (C1 Λ C4): 0/1 | 0 | Integration effect of (C1 Λ C5): 0/1
C2 | 0 | Integration effect of (C2 Λ C1): 0/1 | 1 | Effect of (C2): 0/1 | 0 | Integration effect of (C2 Λ C3): 0/1 | 1 | Integration effect of (C2 Λ C4): 0/1 | 0 | Integration effect of (C2 Λ C5): 0/1
C3 | 0 | Integration effect of (C3 Λ C1): 0/1 | 0 | Integration effect of (C3 Λ C2): 0/1 | 1 | Effect of (C3): 0/1 | 1 | Integration effect of (C3 Λ C4): 0/1 | 1 | Integration effect of (C3 Λ C5): 0/1
C4 | 1 | Integration effect of (C4 Λ C1): 0/1 | 0 | Integration effect of (C4 Λ C2): 0/1 | 0 | Integration effect of (C4 Λ C3): 0/1 | 1 | Effect of (C4): 0/1 | 0 | Integration effect of (C4 Λ C5): 0/1
C5 | 0 | Integration effect of (C5 Λ C1): 0/1 | 0 | Integration effect of (C5 Λ C2): 0/1 | 0 | Integration effect of (C5 Λ C3): 0/1 | 0 | Integration effect of (C5 Λ C4): 0/1 | 1 | Effect of (C5): 0/1
TABLE 6.10
Actual Values of Integration-Effect Matrix of Five Components

Components | C1 | Integration Effect | C2 | Integration Effect | C3 | Integration Effect | C4 | Integration Effect | C5 | Integration Effect
C1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0
C2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0
C3 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
C4 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1
C5 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1
Test cases for component C3, that is the case when C3 is involved
= (Count of 1s below the "Integration Effect" columns in the third row) − 1
= 5 − 1 = 4
Test cases for component C4, that is the case when C4 is involved
= (Count of 1s below the "Integration Effect" columns in the fourth row) − 1
= 5 − 1 = 4
Test cases for component C5, that is the case when C5 is involved
= (Count of 1s below the "Integration Effect" columns in the fifth row) − 1
= 3 − 1 = 2
Total test cases generated
= Count of test cases for component C1 + Count of test cases for component C2 + Count of test cases for component C3 + Count of test cases for component C4 + Count of test cases for component C5
= 3 + 3 + 4 + 4 + 2 = 16.
When we apply the boundary-value analysis technique with n = 5, the count of test cases = 4 × 5 + 1 = 21.
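The whole five-component comparison can be reproduced mechanically. The sketch below is ours: it encodes Table 6.10 with the alternating (connectivity, integration-effect) row convention used above and compares the two counts.

```python
# Reproducing the five-component comparison from Table 6.10.
table_6_10 = [
    [1, 1, 1, 1, 1, 1, 1, 1, 0, 0],  # C1
    [1, 1, 1, 1, 1, 1, 1, 1, 0, 0],  # C2
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],  # C3
    [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],  # C4
    [0, 0, 0, 0, 1, 1, 1, 1, 1, 1],  # C5
]
integration_effect_cases = sum(row[1::2].count(1) - 1 for row in table_6_10)
bva_cases = 4 * len(table_6_10) + 1
print(integration_effect_cases, bva_cases)  # 16 21
```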
• Method 1:
Cyclomatic complexity, denoted V(G), where G is the flow graph, constructed accord-
ing to the flow control of the components, can be defined as:
V(G) = |E| − |V| + 2 + |P|
where
|E| denotes the number of edges, |V| denotes the total number of vertices
|P| is the total count of contributing components
Constant 2 is used to indicate that “the node V contributes to the complexity if its
out-degree is 2”
• Method 2:
In the second way of computing the cyclomatic complexity of component-based soft-
ware, another metric can be used which is defined as:
V(G) = ∑(IC)i + ∑(CR)j + OR  (i = 1 to n, j = 1 to m)
where n is the number of components, m is the number of closed regions, (IC)i is the internal cyclomatic complexity of component i, (CR)j is the contribution of closed region j, and OR is the count of open regions.
6.5.2 Case Study
We need flow graphs to compute the cyclomatic complexity of component-based soft-
ware. Notation for control-flow graphs and the base case of the proposed method are
similar to those discussed in Chapter 5. We now describe a case study having six compo-
nents: C1, C2, C3, C4, C5, and C6 (Figure 6.11). C1 is integrated with C2, C3, C4 and C6,
forming closed regions CR1, CR4, CR5 and CR8, respectively. Component C2 has been
integrated with C3 and that forms a closed region CR2. Component C3 has been linked
with C5 and C6, making closed regions CR10 and CR15. Component C4 is associated with C5 and C6 and constructs closed regions CR9 and CR13. Component C5 is joined with C6, creating closed region CR12. The integration of components C1, C2 and C3 forms closed region CR3. Integration of C1, C3 and C5 forms closed region CR6. Integration of C1, C4 and C5 forms closed region CR7. Integration of C4 and C6 creates closed region CR11. Integration of components C3, C5 and C6 constructs closed region CR14. There is one open region, OR.
From Figure 6.11, it can be seen that the internal cyclomatic complexities of C1, C2, C3, C4, C5 and C6 are 4, 2, 4, 4, 2 and 2, respectively.
Now we compute the number of test cases for the case study defined in Figure 6.11, using McCabe's formula and the method discussed in this chapter, as:
V(G) = e − n + 2p = 70 − 44 + 2 × 6 = 38
FIGURE 6.11
Interaction flow graph of six components.
Applying Method 1:
When the above-defined Method 1 is applied to the case study, we get:
V(G) = |E| − |V| + 2 + |P| = 70 − 44 + 2 + 6 = 34
Applying Method 2:
When the above-defined Method 2 is applied to the case study, we get:
n = 6, m = 15, IC1 = 4, IC2 = 2, IC3 = 4, IC4 = 4, IC5 = 2, IC6 = 2, each CRj = 1, OR = 1, then
V(G) = (4 + 2 + 4 + 4 + 2 + 2) + 15 + 1 = 34.
6.5.3 Performance Results
a. Black-Box Testing: When various case studies for different scenarios are analyzed in
the context of black-box testing, it is observed that using the integration-effect matrix
reduces the number of test cases compared to the boundary-value analysis (black-
box) method, and better results are achieved. The results of both methods for varying
numbers of components are shown in Table 6.11.
b. White-Box Testing: The performance of the cyclomatic complexity metric shown
here is better than that of the methods defined in the literature. Compared with the
McCabe method (white-box), better results are achieved in terms of reduced number
of test cases. The results are shown in Table 6.12.
TABLE 6.11
Comparison of Black-Box Test Cases

Components | Boundary-Value Test Cases | Integration-Effect Metric Test Cases
2 | 9 | 2
3 | 13 | 4
4 | 17 | 10
5 | 21 | 16
TABLE 6.12
Comparison of White-Box Test Cases

Number of Components | McCabe's Complexity | Component-Based Cyclomatic Complexity
2 (1 closed region) | 5 | 5
3 (2 closed regions) | 12 | 11
3 (4 closed regions) | 14 | 13
4 (3 closed regions) | 15 | 13
4 (7 closed regions) | 19 | 17
5 (10 closed regions) | 27 | 24
6 (15 closed regions) | 38 | 34
Summary
Whether the software is traditional, object-oriented or component-based, testing is an essential part of the software development process. Testing is performed not only to make the underlying software error free but also to ensure the development of reliable, high-quality software products. In today's software development environment, testing commences just after the finalization of system requirements. Testing starts at the component
level and moves forward to the integrated complex system level. Testing techniques avail-
able in the literature are divided into two broad categories: black-box and white-box.
Testing is an attempt to make the software under consideration error free. Identifying
and fixing errors in early phases of development is always helpful to minimize overall
development costs. Testing software is expensive in terms of cost, effort and time. Testing
is one of the critical and crucial phases of the overall development of software. Testing is
the fundamental activity to verify the correctness, precision and compatibility of the soft-
ware at individual level as well as system level. Practitioners have identified that improper
testing results in untrustworthy and unreliable products.
Testing is commonly used to verify and validate software. Verifying components in CBSE
constitutes the collection of procedures to certify the functionalities of components at indi-
vidual level. Validating components is the group of procedures that ensures the integrity of
integrated components according to the architectural design and fulfills the needs of the
customer. Testing methods must incorporate test planning, test-case design, implementa-
tion of testing, results and the assessment of collected data. Typically, software is made up
of different parts or sub-components. These subparts are not only independently deployed
but are from various contexts. Therefore, we need testing techniques which address not
only the subparts individually but the whole software.
This chapter includes two broad concepts for component-based software testing: testing
methods and test-case generation techniques for component-based software.
Testing techniques discussed here are further categorized as white-box and black-box
testing techniques. Testing levels for components and component-based software are intro-
duced and the testing process for components and their levels is discussed in detail.
This chapter also includes two types of test-case generation techniques in the context of
component-based software: the integration-effect matrix method for black-box testing and
the cyclomatic complexity method for white-box testing are inter-module techniques
applicable to interaction among various components. The integration-effect matrix is help-
ful for testing and recording the effects of components whose code is not accessible, and
the cyclomatic complexity method is applicable to components for which code is available.
There is a greater degree of predictability in terms of cost, effort, quality and risk if the test-
ability of the software can be predicted properly and early.
References
Basili, V. R. 2001. “Cots-Based Systems Top 10 Lists.” IEEE Computer, 34(5): 91–93.
Chen, J. 2011. “Complexity Metrics for Component-Based Software Systems.” International Journal of
Digital Content Technology and Its Applications, 5(3): 235–244.
Gill, N. S. and Balkishan. 2008. “Dependency and Interaction Oriented Complexity Metrics of
Component Based Systems.” ACM SIGSOFT Software Engineering Notes, 33(2): 1.
176 Component-Based Software Engineering
Gross, H.-G. 2009. Component-Based Software Testing with UML. Springer-Verlag, Berlin Heidelberg.
Harrold, M. J., D. Liang, and S. Sinha. 1999. “An Approach to Analyzing and Testing Component-
Based Systems.” In Workshop on Testing Distributed Component Based Systems (ICSE 1999), Los
Angeles, CA.
Myers, G. 1979. The Art of Software Testing. Wiley, Hoboken, NJ.
Umesh, K. T. and S. Kumar. 2017. "Component Level Testing in Component-Based Software
Development.” International Journal of Innovations & Advancement in Computer Science, 6(1):
69–73.
Weyuker, E. J. 1998. “Testing Component-Based Software: A Cautionary Tale.” IEEE Software, 15(5):
54–59.
Weyuker, E. J. 2001. “The Trouble with Testing Components.” In Heineman, G. T. and Councill, W. T.,
eds., Component-Based Software Engineering. Addison-Wesley, Reading, MA.
7
Estimating Execution Time and Reliability
of Component-Based Software
7.1 Introduction
Component-based software development is a proficient archetype for constructing quality
software products. Reusability is the basic concept behind the component-based software
environment where components interact to share services and information. Reliability of
commercial off-the-shelf (COTS) component-based software systems may depend on the
reliabilities of the individual commercial off-the-shelf (COTS) components that are in the
system (Hamlet et al. 2001). This chapter describes a reliability estimation method that uses
reusability features of components on an execution path or execution history. Function
points are used as the basic metric to estimate the reusability metric of the different catego-
ries of components. In the absence of function points, we can use lines of code to define the
reusability metric.
In this chapter, the focus is on the interaction aspect of components, which can be shown
by an interaction graph. An interaction metric is a measurement of the actual execution
time of individual components in component-based software. Interaction metrics are easy
to compute, and are informative when analyzing the performance of components as well
as component-based software.
7.2 Interaction Metric
In component-based software, components communicate with each other. The manner of
their interaction depends upon various issues, including the pre-specified architectural
path design, implementation set of a particular application, operational profiles of users
and the set of defined inputs (D’Souza and Wills 1998, Michael 2000, Philippe 2003, Hopkins
2000). There is more than one possible path in component-based software applications,
and a particular path is selected from a number of probable paths. Execution of a particular
path requires the invocation of components residing on that path, from source component
to destination component. Components call or are called by each other either sequentially
or using backward and forward loops (Meyer 2003). These invocations and interactions
must be taken into account when estimating the execution time of the overall software.
In the literature, the importance of interactions among components and software con-
structs has been broadly explained in terms of cohesion and coupling, information flow,
cyclomatic complexity, function-point analysis and many other aspects (Vitharana 2003,
Basili and Boehm 2001, Jianguo et al. 2001, Tyagi and Sharma 2012). Distinguished research-
ers have suggested various categories of metrics to compute the interaction complexities of
software. Chidamber and Kemerer proposed a set of metrics to count the number of immediately derived classes, the longest path from the start node, the total methods residing in a class, the number of coupled classes, the ratio of difference in methods in a class and the number of methods executing in response after invoking a class (Jacobson et al. 1997, McIlroy 1976). Misra
(1992) defined a metric to estimate the method-level complexity of a class by weighing up
the internal structure of the method. Fothi et al. (2003) proposed a metric to analyze class
complexity taking control structures, data and the relationship between them into account.
For component-based software, practitioners have proposed various estimation metrics
considering the interaction, coupling and cohesion properties of components (Gill and
Balkishan 2008, Noel 2006, Sengupta and Kanjilal 2011). Gill and Balkishan (2008) sug-
gested a collection of component-based metrics to assess the interaction dependency and
coupling properties of components. Their work includes metrics for component depen-
dency and density of component interaction.
Abbreviation Term
Ci Component
Re-Ci Reliability of a component
RMCi Reusability metric of a component
InvCi Invocation counts of the component
ET-Ci Execution time of component
EPi Execution of a path
ET-Pi Execution time of a path
ET-CBS Execution time of component-based software
EP-Pi Probability of an execution of a path
ReqIi,j Request interaction
ResIi,j Response interaction
IM-Ci,j Interaction metric between two components
PI-Ci,j Probability of interaction between two components
Av-ET-Ci Average execution time of a component
Ac-ET-Pi Actual execution time of path
Ac-ET-CBS Actual execution time of CBS
RMCi-Full-qualified Fully qualified reusability metric
RMCi-Part-qualified Partially qualified adaptable reusability metric
RMCi-Off-the-shelf Off-the-shelf reusability metric
Re-Pi Reliability of a path
Re-CBS Reliability of component-based software
Terms listed above in the context of execution time and reliability estimations can be
defined as follows:
EP1 = C11 → C12 → C13 → … → C1k
EP2 = C21 → C22 → C23 → … → C2k
EP3 = C31 → C32 → C33 → … → C3k
⋮
EPn = Cn1 → Cn2 → Cn3 → … → Cnk
ET-Pi = ∑ (ET-Ci × InvCi), summed over the components i = 1, 2, … lying on path Pi    (7.1)
where (ET-Ci) is the execution time of a component and InvCi is the number of invoca-
tions. Since the execution history may contain backward and forward loops, we have
taken the number of invocations of a component into account. The minimum value of
InvCi is 1, since a component lying on a particular path must be invoked at least once.
k. Response Interaction (ResIi,j): Response interactions are the directed edges incoming
to the component Ci. They show the response to requests or invocations made by
component Cj to component Ci. They are shown as ResIi,j. They contain the total num-
ber of response parameters required to fulfill the response (Figure 7.1).
FIGURE 7.1
Interactions among five components.
IM-Ci,j = ResIi,j / (ReqIi,j + ResIi,j)
m. Probability of Interaction between two components (PI-Ci, j): Suppose there are two
components, Ci and Cj. The interaction probability is the probability of selection of
component Cj for execution, after component Ci has been executed. In other words,
the interaction probability is the probability that component Ci invoked component
Cj to perform its services. The sum of total probability of interactions invoked by a
component to other components is always 1. A predefined execution history prede-
termines the probability of interaction among components as unity.
Av-ET-Ci = ET-Ci × InvCi    (7.2)
where ET-Ci is the execution time of component Ci and InvCi is the count of invocations of
component Ci.
ET-Pi = ∑ (ET-Ci × InvCi), summed over the components i = 1, 2, … lying on path Pi    (7.3)
Since the path may contain backward and forward loops, we have taken the number of
invocations of a component into account. The minimum value of InvCi is 1, since a compo-
nent lying on a particular path must be invoked at least once.
When a component Ci invokes another component Cj, their interaction metric IM-Ci,j is computed. Now we define the actual execution time of a path Pi as:
Ac-ET-Pi = ∑ (ET-Ci × InvCi + IM-Ci,j), summed over the components i = 1 to n lying on path Pi    (7.4)
where Ac represents the actual time. Ci and Cj are two consecutive components lying on
the same path Pi. Component Ci invokes component Cj and Cj responds. There is an inter-
action between Ci and Cj so their interaction metric IM-Ci,j is also calculated. ET-Ci is the
execution time of the component; InvCi is the invocation count. Without loss of generality, we assume that the default value of the interaction metric between two components, IM-Ci,j, is 0.01 if the interaction is one-way.
where i = 1 to n, that is the number of components on path Pi, InvCi is the number of invo-
cations of component i and IM-Ci,j is the interaction metric computed between components
Ci and Cj.
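A small sketch (ours, not the book's implementation) ties Equations (7.2) to (7.4) together; the path data and helper names are hypothetical, and the 0.01 default stands in for one-way interactions as assumed above.

```python
# Sketch of Equations (7.2)-(7.4); path data and helper names are hypothetical.

def interaction_metric(req: int, res: int) -> float:
    """IM-Ci,j = ResIi,j / (ReqIi,j + ResIi,j); defaults to 0.01 if one-way."""
    return res / (req + res) if res else 0.01

def actual_path_time(path) -> float:
    """Ac-ET-Pi: sum over the path of ET-Ci * InvCi plus the hop's IM-Ci,j."""
    return sum(et * inv + im for et, inv, im in path)

# Hypothetical path entries: (ET-Ci, InvCi, IM-Ci,j toward the next component).
p = [
    (4, 5, interaction_metric(10, 8)),  # two-way interaction to the next hop
    (4, 7, interaction_metric(12, 0)),  # one-way interaction (default 0.01)
    (8, 5, 0.0),                        # terminal component on the path
]
print(actual_path_time(p))
```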
P1: C1 → C2 → C5
P2: C1 → C3 → C5
P3: C1 → C2 → C3 → C5
P4: C1 → C4 → C5
TABLE 7.1
Inputs of Path P1
Ci ET-Ci InvCi Av-ET-Ci
C1 4 5 20
C2 4 7 28
C5 8 5 40
TABLE 7.2
Interaction Metrics of Path P1
Ci Cj ReqIi,j ResIj,i IM-Ci,j
C1 C2 10 8 0.4
C2 C5 8 0 0.01
TABLE 7.3
Inputs of Path P2
Ci ET-Ci InvCi Av-ET-Ci
C1 4 9 36
C3 8 10 80
C5 8 7 56
TABLE 7.4
Interaction Metrics of Path P2
Ci Cj ReqIi,j ResIj,i IM-Ci,j
C1 C3 12 6 0.3
C3 C5 10 0 0.01
lying on path P2 are given in Table 7.3. Calculations of requests from one component
to other components, responses from called components and the interaction metrics
are given in Table 7.4.
• Computations of requests, responses and interaction metrics of path P2
Values defined in Tables 7.3 and 7.4 are applied to Equation (7.4) to calculate the
actual execution time of path P2, as
TABLE 7.5
Inputs of Path P3
Ci ET-Ci InvCi Av-ET-Ci
C1 4 5 20
C2 4 7 28
C3 8 5 40
C5 8 5 40
TABLE 7.6
Interaction Metrics of Path P3
Ci Cj ReqIi,j ResIj,i IM-Ci,j
C1→C2 10 8 0.4
C2→C3 12 0 0.01
C3→C5 10 0 0.01
TABLE 7.7
Inputs of Path P4
Ci ET-Ci InvCi Av-ET-Ci
C1 4 1 4
C4 4 5 20
C5 8 9 72
TABLE 7.8
Interaction Metrics of Path P4
Ci Cj ReqIi,j ResIj,i IM-Ci,j
C1 C4 6 0 0.01
C4 C5 12 6 0.3
TABLE 7.9
Actual Execution Time of Four Different Paths

Path (Pi) | Ac-ET-Pi
P1 | 116.41
P2 | 252.31
P3 | 196.42
P4 | 116.31

Ac-ET-CBS = Ac-ET-P1 + Ac-ET-P2 + Ac-ET-P3 + Ac-ET-P4 = 681.45
7.4 Reliability Estimation
In software engineering, reliability is the foundation stone of the software’s quality.
Reliability can be defined as:
the likelihood of execution without failure for some definite interval of natural units or time.
(Musa 1998)
According to IEEE, reliability is "the ability of a system or component to perform its required functions under stated conditions for a specified period of time" (IEEE Standard 610.12-1990).
a. Time-dependent reliability: These techniques assume that reliability is dependent on time. As time passes, the reliability of "things" decreases.
Among the reliability growth models most used in academia and industry are the
exponential model and the logarithmic Poisson model.
• Exponential model: Exponential models rely on the concept that finite failures can
occur in infinite time.
• Logarithmic Poisson model: These models are based on the assumption that infinite
failures can occur in infinite time.
b. Time-independent reliability: This type of reliability model assumes that reliability does not depend on time, especially in the case of software reliability, but on how the software behaves on certain inputs.
• Burn-in Phase: This is the phase when hardware products try to cope with environ-
mental factors. In the earlier stages, errors and failure rates are at their peak. With
time and changes in the product as well as changes in the environment, the failure
rate becomes constant. Hardware products then enter the useful life phase.
• Useful Life Phase: Once initial failures and errors are fixed, the useful life phase of
the product commences, when it can be used without major faults. Minor errors may
be encountered and fixed without major effort.
• Wear-Out Phase: After a certain period of time, due to environmental factors like dust,
regression, temperature, weather, lack of maintenance and similar, the product may
encounter major failures and the need for replacement arises. Hardware products can
become outdated due to environmental maladies, that is, they start to wear out.
FIGURE 7.2
Hardware reliability. (Adapted from Klutke et al. 2003.)
FIGURE 7.3
Software reliability.
But in practice, as time passes, user requirements and demands change, and developers
attempt to modify the software to fulfill them. As the changes are made, new faults and
errors may be introduced and the failure rate rises again. Errors are removed and the fail-
ure rate again settles at a constant level. After a period of time, new requirements may introduce new errors. Again, these are fixed to keep the software usable. In this way software is said to deteriorate rather than to wear out. These repeated activities continue, and maintenance keeps the software usable. This is called the actual curve (Figure 7.3).
The major differences between hardware and software reliabilities are:
a. Rate of Occurrence of Failure (ROCOF): ROCOF is the number of failures that occur in software in a specified period of time. For reliable software there must be as few failures as possible. As the failure count increases, the reliability of the software system decreases.
b. Mean Time to Failure (MTTF): MTTF is the average time between failure occurrences in operational software. Software running for a long time may produce a number of failures. MTTF is the metric used to keep track of the average time between successive failures. As the MTTF value increases, so does the reliability of the software.
c. Mean Time to Repair (MTTR): MTTR is the average time taken to repair a failure and make the software operational again. As the repair time increases, the loss due to failure also increases. Therefore, for better reliability of the software system, the MTTR should be as low as possible.
d. Mean Time between Failures (MTBF): This is defined as the sum of MTTF and MTTR. That is, MTBF = MTTF + MTTR.
e. Probability of Failure on Demand (POFOD): This is defined as the probability of failure of the software when a request for service occurs. It differs from the other metrics in that it does not involve time. If the software is designed in a way that is free from service requests, then such a metric is of no use. But if service requests are quite frequent, then this metric is quite useful for predicting the probability of on-demand failure.
f. Software Availability: This is the metric used to assess the time for which the system is likely to be available for operations over a period of time. It is defined as the ratio of MTBF to the sum of MTBF and MTTR. That is:
Software availability = MTBF / (MTBF + MTTR)
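A quick numeric sketch, with hypothetical figures, shows how these metrics combine:

```python
# Hypothetical figures illustrating the metrics above.
mttf = 480.0                          # mean time to failure (hours)
mttr = 20.0                           # mean time to repair (hours)
mtbf = mttf + mttr                    # MTBF = MTTF + MTTR
availability = mtbf / (mtbf + mttr)   # Software availability
print(mtbf, round(availability, 3))   # 500.0 0.962
```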
Case 1: Reusability metric when all parts of the component are involved
Case 2: Reusability metric when only reused parts of the component are involved
Case 3: Reusability metric when only adaptable parts of the component are involved
Each case can be defined using function points as well as lines of code.
i. When all parts of the component are involved
Using function points:
In this context, the ratio of fully qualified function points of a particular component Ci to the total number of function points of that component is taken. This is defined as:
RMCi-Full-qualified = FPCi-Full-qualified / FPCi
where FPCi denotes the total function points of a component including new,
adaptable and reused.
Using lines of code (LOC):
On the basis of lines of code (LOC), we can define the metric as:
RMCi-Full-qualified = LOCCi-Full-qualified / LOCCi
where LOCCi denotes the total LOC of a component including new, customized
and reused.
ii. When only reused parts of the component are involved
Using function points:
In this context the ratio is taken with respect only to the reused function points of the component. Here the total number of fully qualified function points is divided by the total number of reused function points. Therefore, the reusability metric in terms of function points is defined as:
RMCi-Full-qualified = FPCi-Full-qualified / FPCi-Reused
where total reused function points FPCi − Reused represent the collection of off-
the-shelf, fully qualified and partially qualified adaptable components.
RMCi-Full-qualified = LOCCi-Full-qualified / LOCCi-Adaptable
where adaptable LOCs are the assembly of fully qualified and partially qualified LOCs only.
i. When all parts of the component are involved
Using function points:
In this case, we calculate the reusability metric for a particular component when all parts of the component are involved. Here, the total number of partially qualified reused function points is divided by the total number of function points of the component. That is:
RMCi-Part-qualified = FPCi-Part-qualified / FPCi
Using lines of code (LOC):
RMCi-Part-qualified = LOCCi-Part-qualified / LOCCi
ii. When only reused parts of the component are involved
Using function points:
In this context the ratio is taken with respect to the reused function points of the component only. The total number of partially qualified function points is divided by the total number of reused function points. Hence, it is defined as:
RMCi-Part-qualified = FPCi-Part-qualified / FPCi-Reused
Using lines of code (LOC):
In this case the total number of partially qualified lines of code is divided by the total number of reused lines of code. Hence, it is defined as:
RMCi-Part-qualified = LOCCi-Part-qualified / LOCCi-Reused
iii. When only adaptable parts of the component are involved
Using function points:
In this case, the ratio is taken in the context of adaptable function points only. The total number of partially qualified function points is divided by the total number of adaptable function points. Therefore, the reusability metric is defined as:
RMCi-Part-qualified = FPCi-Part-qualified / FPCi-Adaptable
Using lines of code (LOC):
RMCi-Part-qualified = LOCCi-Part-qualified / LOCCi-Adaptable
Case 1: Reusability metric when all parts of the component are involved
Case 2: Reusability metric when only reused parts of the component are involved
Each case can be defined using function points as well as lines of code.
i. When all parts of the component are involved
Using function points:
In this context, we compute the reusability metric for a particular component. The total number of off-the-shelf reused function points is divided by the total function points of the component. It is defined as:
RMCi-Off-the-shelf = FPCi-Off-the-shelf / FPCi
Using lines of code:
The total number of off-the-shelf reused lines of code is divided by the total number of lines of code of the component. It is defined as:
RMCi-Off-the-shelf = LOCCi-Off-the-shelf / LOCCi
ii. When only reused parts of the component are involved
Using function points:
This is the case when the ratio is taken in the context of only the reused function points of the component. Here, the total number of off-the-shelf function points is divided by the total reused function points. It is given as:
RMCi-Off-the-shelf = FPCi-Off-the-shelf / FPCi-Reused
Using lines of code:
RMCi-Off-the-shelf = LOCCi-Off-the-shelf / LOCCi-Reused
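All of these reusability metrics are plain ratios, so one helper covers every case. The sketch below is ours, with hypothetical function-point counts:

```python
# Sketch of the ratio-based reusability metrics; all counts are hypothetical.

def reusability_metric(qualified: float, base: float) -> float:
    """Generic ratio: qualified function points (or LOC) over the chosen base."""
    return qualified / base

fp_total, fp_reused = 120, 80                 # component Ci totals
fp_full, fp_part, fp_off_the_shelf = 30, 20, 30

rm_full_all = reusability_metric(fp_full, fp_total)          # all parts involved
rm_part_reused = reusability_metric(fp_part, fp_reused)      # reused parts only
rm_ots_all = reusability_metric(fp_off_the_shelf, fp_total)  # off the shelf, all parts
print(rm_full_all, rm_part_reused, rm_ots_all)               # 0.25 0.25 0.25
```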
Re-CBS = ∑ (Re-Pi × EP-Pi), summed over the execution histories Hi = 1, 2, …    (7.7)
where Re-CBS is the reliability of the component-based software, EP-Pi is the probability of
the execution of a path and RMCi is the reusability metric of the component.
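As a check, a minimal sketch of Equation (7.7) with the path values reported later in Table 7.16 reproduces the application-level reliability of Table 7.17:

```python
# Equation (7.7) applied to the path reliabilities (Re-Pi) and execution
# probabilities (EP-Pi) from Table 7.16.
re_paths = [0.94, 0.90, 0.94]   # Re-Pi
ep_paths = [0.33, 0.33, 0.33]   # EP-Pi
re_cbs = sum(r * p for r, p in zip(re_paths, ep_paths))
print(round(re_cbs, 2))         # 0.92, matching Table 7.17
```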
7.5.5 Case Studies
To illustrate the proposed method, we use a case study with five components. The compo-
nent interaction graph (CIG) of the case study is shown in Figure 7.4. To elaborate the role
of reusability and interaction issues, we have used all three types of component: new,
196 Component-Based Software Engineering
C1 C2
C3 C4
C5
FIGURE 7.4
Component interaction graph.
reused adaptable (both partially qualified and fully qualified) and off the shelf. Component
C3 is the newly developed component, hence its reusability metric is 0.01. Components C1
and C5 are partially qualified components having reusability of 0.18 and 0.2 respectively.
Component C2 is fully qualified, with a reusability metric of 0.8, and component C4 is an
off-the-shelf component having reusability metric 1. When one component invokes another
component through an out-interaction edge, the calling component will get services in
return. We show this type of request through out-interaction edges, and the services in
return are shown as in-interaction edges (Figure 7.4). All the interaction edges among com-
ponents are either in-interaction or out-interaction edges. These interaction edges play an
important role in the computation of the interaction metric of these interacting
components.
The component interaction graph in Figure 7.4 shows three paths of execution from the
starting component C1 to the ending component C2.
Each component, its interaction edge and execution history have corresponding values for
these parameters: execution time of Ci, reliability of Ci, number of invocations of Ci for
execution and reusability metric of Ci. On the basis of these parameters, we calculate the Av-ET-Ci of component i. These values and computations, which take account of the execution history of the CBS application, are shown in Table 7.10.
RGC1 = Re-C1 + (1 − (Re-C1)^(RMC1 × IM-C1)) = 0.6 + (1 − 0.6^(0.18 × 0.6))
RGC1 = 0.65
RGC2 = Re-C2 + (1 − (Re-C2)^(RMC2 × IM-C2)) = 0.8 + (1 − 0.8^(0.8 × 0.5))
RGC2 = 0.89
TABLE 7.10
Inputs of Path P1

Ci | ET-Ci | Re-Ci | InvCi | RMCi | IM-Ci | Av-ET-Ci

TABLE 7.11
Probability of Component Interaction in Path P1

Ci | Cj | PI-Cij
C1 | C1 | 1
C1 | C3 | 0.33
C3 | C5 | 1
RGC3 = Re-C3 + (1 − (Re-C3)^(RMC3 × IM-C3)) = 0.7 + (1 − 0.7^(0.01 × IM-C3))
RGC3 = 0.7
RGC4 = Re-C4 + (1 − (Re-C4)^(RMC4 × IM-C4)) = 0.7 + (1 − 0.7^(1 × 0.5))
RGC4 = 0.86
RGC5 = Re-C5 + (1 − (Re-C5)^(RMC5 × IM-C5)) = 0.9 + (1 − 0.9^(0.2 × 0.33))
RGC5 = 0.91
EP1 = 0.94
EP2 = (0.65 × 1) + (0.65 × 0.8 × 0.33) + ((0.65 × 0.8 × 0.33) × 0.91 × 0.5)
EP2 = 0.90
TABLE 7.12
Inputs of Path P2

Ci | ET-Ci | Re-Ci | InvCi | RMCi | IM-Ci | Av-ET-Ci

TABLE 7.13
Probability of Component Interaction in Path P2

Ci | Cj | PI-Cij
C1 | C1 | 1
C1 | C4 | 0.33
C4 | C5 | 0.5
Applying the values of Tables 7.14 and 7.15 to Equation (7.6), we calculate the reliability of path P3:
EP3 = (0.65 × 1) + (0.65 × 0.8 × 0.33) + ((0.65 × 0.8 × 0.33) × 0.89 × 0.5) + (((0.65 × 0.8 × 0.33) × 0.89 × 0.5) × 0.91 × 0.5)
EP3 = 0.94
TABLE 7.14
Inputs of Path P3

Ci | ET-Ci | Re-Ci | InvCi | RMCi | IM-Ci | Av-ET-Ci

TABLE 7.15
Probability of Component Interaction in Path P3

Ci | Cj | PI-Cij
C1 | C1 | 1
C1 | C2 | 0.33
C2 | C4 | 0.5
C4 | C5 | 0.5
TABLE 7.16
Execution Time and Reliabilities of Paths

Hi | ET-Pi | EP-Pi | Ac-ET-Pi | Re-Pi
H1 | 26 | 0.33 | 27.93 | 0.94
H2 | 36 | 0.33 | 37.43 | 0.90
H3 | 48 | 0.33 | 49.93 | 0.94
TABLE 7.17
Actual Execution Time and Reliability of CBS Application

Application | Ac-ET-CBS | Re-CBS
CBS | 115.29 | 0.92
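Assuming the reliability-growth relation reconstructed above, RGCi = Re-Ci + (1 − (Re-Ci)^(RMCi × IM-Ci)), the per-component values can be reproduced as follows (a sketch with the case study's sample inputs):

```python
# Sketch of the reliability-growth relation, assuming the reconstruction
# RGCi = Re-Ci + (1 - Re-Ci ** (RMCi * IM-Ci)).

def reliability_growth(re_c: float, rm_c: float, im_c: float) -> float:
    return re_c + (1 - re_c ** (rm_c * im_c))

print(round(reliability_growth(0.6, 0.18, 0.6), 2))  # 0.65 (component C1)
print(round(reliability_growth(0.8, 0.8, 0.5), 2))   # 0.89 (component C2)
```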
Summary
This chapter describes methods for estimating execution time and reliability for compo-
nent-based applications. The component interaction graph (CIG) not only contains interac-
tion information about components but also holds useful attributes of each coordinating
component. Estimates of execution time and reliability are based on the CIG. These com-
putations are based on two fundamental properties of component-based software: reus-
ability and interaction. Reusability metrics and interaction metrics are used to explore the
properties of components. The reusability metric of components plays a vital role in the
reliability of their execution history and ultimately in the reliability of the CBS application.
The results obtained from our study suggest that increased reusability of components supports growth in the reliability of the application, because reused components are pre-tested and qualified, unlike newly developed ones. Since interaction among components
through well-defined interfaces contributes to execution time, the interaction metric has
been taken into account in the calculation of the time for execution histories and CBS
applications.
Although there is a nominal growth in execution time due to these interactions, the
interaction metric is only used to assess the interaction of components; it does not restrict
their execution. These ratio metrics are not only helpful in computations of the execution
and reliability of CBS applications but can be used as an attribute to be stored in the reposi-
tory along with the component for future reuse.
References
Basili, V. R. and B. Boehm. 2001. “Cots-Based Systems Top 10 List.” IEEE Computer, 34(5): 91–93.
Chidamber, S. and C. Kemerer. 1994. "A Metrics Suite for Object-Oriented Design." IEEE Transactions
on Software Engineering, 20(6): 476–493.
D’Souza, D. F. and A. C. Wills. 1998. Objects, Components, and Frameworks with UML: The Catalysis
Approach. Addison-Wesley, Reading, MA.
Fothi, A., J. Gaizler, and Z. Porkol. 2003. The Structured Complexity of Object-Oriented Programs.
Mathematical and Computer Modeling, 38: 815–827.
Gill, N.S. and Balkishan. 2008. “Dependency and Interaction Oriented Complexity Metrics of
Component-Based Systems.” ACM SIGSOFT Software Engineering Notes, 33(2): 1–5.
Hamlet, D., D. Mason, and D. Woit. 2001. “Theory of Software Component Reliability.” In Proceedings
of 23rd International Conference on Software Engineering, Toronto, Canada.
Henry, S. and D. Kafura. 1981. “Software Structure Metrics Based on Information Flow.” IEEE
Transactions on Software Engineering, 7: 510–518.
Hopkins, J. 2000. “Component Primer.” Communications of the ACM, 43(10): 28–30.
IEEE Standard 610.12-1990. 1990. Glossary of Software Engineering Terminology. IEEE, New York, ISBN:
1–55937–079–3.
Jacobson, I., M. Griss, and P. Jonsson. 1997. Software Reuse: Architecture, Process and Organization for
Business Success. ACM Press, New York.
Jianguo, C., H. Wang, Y. Zhou, and D. S. Bruda. 2001. “Complexity Metrics for Component-Based
Software Systems.” International Journal of Digital Content Technology and Its Applications, 5(3):
235–244.
Klutke, G., P. C. Kiessler, and M. A. Wortman. 2003. “A Critical Look at the Bathtub Curve.” IEEE
Transactions on Reliability, 52(1): 125–129.
Littlewood, B. and L. Strigini. 1993. “Validation of Ultra-High Dependability for Software-based
Systems.” Communications of the ACM, 36(11): 69–80.
Lyu, M. R. 1996. Handbook of Software Reliability Engineering. McGraw-Hill, New York.
Malaiya, Y. K., M. N. Li, J. M. Bieman, and R. Karcich. 2002. “Software Reliability Growth with Test
Coverage.” IEEE Transactions on Reliability, 51(4): 420–426.
McIlroy, M. D. [1968] 1976. “Mass Produced Software Components.” In J. M. Buxton, P. Naur, and B.
Randell, eds., Software Engineering Concepts and Techniques. NATO Conference on Software
Engineering, NATO Science Committee, Garmisch, Germany, 88–98.
Meyer, B. 2003. “The Grand Challenge of Trusted Components.” In Proceedings of IEEE ICSE, Portland,
OR, 660–667.
Michael, S. 2000. “Lessons Learned–Through Six Years of Component-Based Development.”
Communications of the ACM Journal, 43(10): 47–53.
Misra, K. B. 1992. Reliability Analysis and Prediction. Elsevier, Amsterdam, ISBN: 0–444–89606–6.
Musa, J. 1998. Software Reliability Engineering. McGraw-Hill, New York.
Noel, S. 2006. "Complexity Metrics as Predictors of Maintainability and Integrability of Software
Components.” Journal of Arts and Sciences, 5: 39–50.
Philippe, K. 2003. The Rational Unified Process: An Introduction, 3rd ed. Addison-Wesley, Upper Saddle
River, NJ.
Sengupta, S. and A. Kanjilal. 2011. “Measuring Complexity of Component Based Architecture: A
Graph Based Approach.” ACM SIGSOFT Software Engineering Notes, 36(1): 1–10.
Tyagi, K. and A. Sharma. 2012. “Reliability of Component Based Systems—A Critical Survey.”
WSEAS Transactions on Computers, 11(2): 45–54.
Vitharana, P. 2003. “Design Retrieval and Assembly in Component-Based Software Development.”
Communications of the ACM, 46(11): 97–102.