
Software Engineering 22BCAE3-1

The document provides an overview of Software Engineering, detailing its definition, processes, and the necessity of software evolution. It discusses various software categories, common myths surrounding software development, and outlines a generic view of the software engineering process, including maturity levels and key process areas. The content is structured for a Software Engineering course at Ananda College, aimed at third-year BCA students.


Ananda College

(Accredited with ‘B’ Grade by NAAC)


(Affiliated to Alagappa University)

Devakottai

Subject Code : 22BCAE3


Subject Name : Software Engineering
Class : III BCA
Semester : VI

Staff Name : Mr. J. John Kennedy


Department of Computer Applications

Page 1 of 162
22BCAE3 - Software Engineering

Unit – 1

Introduction to Software Engineering

Software is a program or set of programs containing instructions that provide the desired
functionality. Engineering is the process of designing and building something that serves a particular
purpose and finds a cost-effective solution to problems.

What is Software Engineering?

Software Engineering is the process of designing, developing, testing, and maintaining software. It
is a systematic and disciplined approach to software development that aims to create high-quality,
reliable, and maintainable software.

1. Software engineering includes a variety of techniques, tools, and methodologies, including requirements analysis, design, testing, and maintenance.

2. It is a rapidly evolving field, and new tools and technologies are constantly being developed to
improve the software development process.

3. By following the principles of software engineering and using the appropriate tools and
methodologies, software developers can create high-quality, reliable, and maintainable software
that meets the needs of its users.

4. Software Engineering is mainly used for large projects based on software systems rather than
single programs or applications.

5. The main goal of Software Engineering is to develop software applications for improving
quality, budget, and time efficiency.

6. Software Engineering ensures that the software being built is consistent, correct, delivered on budget and on time, and meets the stated requirements.

The Evolving Role of Software

Software Evolution is a term that refers to the process of developing software initially and then updating it over time for various reasons, e.g., to add new features or to remove obsolete functionality.

What is Software Evolution?

The software evolution process includes fundamental activities of change analysis, release
planning, system implementation, and releasing a system to customers.

1. The cost and impact of these changes are assessed to see how much the system is affected by
the change and how much it might cost to implement the change.

2. If the proposed changes are accepted, a new release of the software system is planned.

3. During release planning, all the proposed changes (fault repair, adaptation, and new
functionality) are considered.

4. A decision is then made on which changes to implement in the next version of the system.

5. The process of change implementation is an iteration of the development process where the
revisions to the system are designed, implemented, and tested.

Necessity of Software Evolution

Software evolution is necessary for the following reasons:

1. Change in requirements with time: Over time, an organization's needs and modus operandi can change substantially, so the tools (software) it uses must also change to maximize performance.

2. Environment change: As the working environment changes, the tools that enable work in that environment must change as well. The same happens in the software world: when the working environment changes, organizations require the reintroduction of old software with updated features and functionality to suit the new environment.

3. Errors and bugs: As deployed software ages, its precision decreases and its ability to bear an increasingly complex workload degrades. It therefore becomes necessary to avoid using obsolete, aged software; all such software needs to undergo the evolution process in order to become robust enough for the workload complexity of the current environment.

4. Security risks: Using outdated software may put an organization at risk of various software-based cyberattacks and could illegally expose confidential data associated with the software in use. It therefore becomes necessary to avoid such security breaches through regular assessment of the security patches and modules used within the software. If the software is not robust enough to withstand current cyberattacks, it must be updated.

5. For new functionality and features: To improve performance, speed up data processing, and add other functionality, an organization needs to evolve its software continuously throughout its life cycle so that the product's stakeholders and clients can work efficiently.

Changing Nature of Software

Nowadays, seven broad categories of computer software present continuing challenges for software engineers, as given below:

1. System Software: System software is a collection of programs written to service other programs. Some system software processes complex but determinate information structures; other system software processes largely indeterminate data. The system software area is characterized by heavy interaction with computer hardware, requiring scheduling, resource sharing, and sophisticated process management.

2. Application Software: Application software is defined as programs that solve a specific business need. Applications in this area process business or technical data in a way that facilitates business operations or management/technical decision-making. In addition to conventional data processing applications, application software is used to control business functions in real time.

3. Engineering and Scientific Software: This software is used to facilitate engineering functions and tasks. However, modern applications within the engineering and scientific area are moving away from conventional numerical algorithms. Computer-aided design, system simulation, and other interactive applications have begun to take on real-time and even system software characteristics.

4. Embedded Software: Embedded software resides within the system or product and is used to
implement and control features and functions for the end-user and for the system itself.
Embedded software can perform limited and esoteric functions or provide significant function
and control capability.

5. Product-line Software: Designed to provide a specific capability for use by many customers,
product-line software can focus on the limited and esoteric marketplace or address the mass
consumer market.

6. Web Application: A web application is a client-server computer program that the client runs in a web browser. In their simplest form, web apps can be little more than a set of linked hypertext files that present information using text and limited graphics. However, as e-commerce and B2B applications grow in importance, web apps are evolving into sophisticated computing environments that not only provide standalone features, computing functions, and content to the end user, but are also integrated with corporate databases and business applications.

7. Artificial Intelligence Software: Artificial intelligence software makes use of nonnumerical algorithms to solve complex problems that are not amenable to computation or straightforward analysis. Applications within this area include robotics, expert systems, pattern recognition, artificial neural networks, theorem proving, and game playing.

Software Myths

Most experienced experts have seen myths or superstitions (false beliefs or interpretations) and misleading attitudes that create major problems for management and technical people. The types of software-related myths are listed below.

i) Management Myths:

Myth 1:

We have all the standards and procedures available for software development.

Fact:

 Software experts do not know all the requirements for the software development.

 All existing processes are incomplete, as each new software development is based on a new and different problem.

Myth 2:

The addition of the latest hardware programs will improve the software development.

Fact:

 The role of the latest hardware in standard software development is not very significant; computer-aided software engineering (CASE) tools are more important than hardware for producing quality and productivity.

 Hence, without the right tools, hardware resources are misused.

Myth 3:

 The addition of more people and programmers to software development can help meet project deadlines (if the project is lagging behind).

Fact:

 If software is late, adding more people will merely make the problem worse. This is because the
people already working on the project now need to spend time educating the newcomers, and
are thus taken away from their work. The newcomers are also far less productive than the
existing software engineers, and so the work put into training them to work on the software
does not immediately meet with an appropriate reduction in work.

(ii)Customer Myths:

The customer can be the direct users of the software, the technical team, the marketing/sales department, or another company. Customer myths lead to false expectations on the customer's part and, as a result, dissatisfaction with the developer.

Myth 1:

A general statement of intent is enough to start software development; details of objectives can be filled in over time.

Fact:

 A formal and detailed description of the information domain, function, behavior, performance, interfaces, design constraints, and validation criteria is essential.

 Unambiguous requirements (usually derived iteratively) are developed only through effective and continuous communication between customer and developer.

Myth 2:

Software requirements continually change, but change can be easily accommodated because software is flexible.

Fact:

 It is true that software requirements change, but the impact of change varies with the time at
which it is introduced. When requirements changes are requested early (before design or code
has been started), the cost impact is relatively small. However, as time passes, the cost impact
grows rapidly—resources have been committed, a design framework has been established, and
change can cause upheaval that requires additional resources and major design modification.

(iii)Practitioner’s Myths:

Myths 1:

Practitioners believe that once the program is written and working, their job is done.

Fact:

 It is true that 60-80% of all effort is expended after the software is first delivered to the customer, i.e., during the maintenance phase.

Myths 2:

Until the program is "running", there is no way to assess its quality.

Fact:

 Formal technical reviews, applied from the inception of a project, are an effective method of software quality verification. These reviews act as quality filters and are more effective than testing for finding certain classes of defects.

Myth 3:

A working program is the only deliverable work product of a successful project.

Fact:

 A working program is not enough; the right documentation is also required to provide guidance for software use and support.

Myth 4:

Software engineering will make us create voluminous and unnecessary documentation and will invariably slow us down.

Fact:

 Software engineering is not about creating documents. It is about creating a quality product. Better quality leads to reduced rework, and reduced rework results in faster delivery times.

A Generic view of process

Software Engineering Process: A set of activities, methods, practices, and transformations that people use to develop and maintain software and the associated products (e.g., project plans, design documents, code, test cases, and user manuals).

Software Engineering - A Layered Technology

Software engineering encompasses a process, the management of activities, technical methods, and the use of tools to develop software products.

 The foundation for software engineering is the process layer. It is the glue that holds the
technology layers together and enables rational and timely development of computer software.
 Process defines a framework that must be established for effective delivery of software
engineering technology.
 The software process forms the basis for management control of software projects and
establishes the context in which technical methods are applied, work products (models,
documents, data, reports, etc.) are produced, milestones are established, quality is ensured, and
change is properly managed.
 Software engineering methods provide the technical "how-to's" for building software.
Methods encompass a broad array of tasks that include communication, requirements analysis, design, coding, testing, and support.
 Software engineering tools provide automated or semi-automated support for the process and
the methods.
 When tools are integrated so that information created by one tool can be used by another, a system for the support of software development, called computer-aided software engineering (CASE), is established.

A Process framework

 Establishes the foundation for a complete software process


 Identifies a number of framework activities applicable to all software projects
 Also include a set of umbrella activities that are applicable across the entire software process.
 Used as a basis for the description of process models
 Generic process activities

- Communication
- Planning
- Modeling
- Construction
- Deployment
Communication activity
Planning activity
Modeling activity
o analysis action
- requirements gathering work task
- elaboration work task
- negotiation work task
- specification work task
- validation work task
o design action
- data design work task
- architectural design work task
- interface design work task
- component-level design work task
Construction activity
Deployment activity
 Umbrella activities (examples):

- software project tracking and control


- risk management
- software quality assurance
- formal technical reviews
- measurement
- s/w configuration management
- reusability management
- work product preparation and production (e.g., models, documents, logs)


CMM Levels

The SEI Capability Maturity Model (CMM) classifies software development organizations into five maturity levels:

Level 1: Initial.
 A software development organization at this level is characterized by ad hoc
activities.
 Very few or no processes are defined and followed.
 Since software production processes are not defined, different engineers follow their
own process and as a result development efforts become chaotic.
 The success of projects depends on individual efforts and heroics.
 Since formal project management practices are not followed, under time
pressure short cuts are tried out leading to low quality.

Level 2: Repeatable

 At this level, the basic project management practices such as tracking cost and
schedule are established.
 Size and cost estimation techniques like function point analysis, COCOMO, etc. are
used.
 The necessary process discipline is in place to repeat earlier success on projects with
similar applications. Opportunity to repeat a process exists only when a company

produces a family of products
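The COCOMO estimation technique named above can be made concrete with a short sketch. The snippet below implements Boehm's basic COCOMO formulas (Effort = a·KLOC^b person-months, Duration = c·Effort^d months) with the standard published coefficients; the 32-KLOC organic project is an invented example input, not data from the text.

```python
# Basic COCOMO estimation -- an illustrative sketch of the cost
# estimation techniques mentioned above. The (a, b, c, d) coefficients
# are the standard basic-COCOMO values; the input size is made up.

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, development time in months)."""
    coefficients = {
        "organic":       (2.4, 1.05, 2.5, 0.38),
        "semi-detached": (3.0, 1.12, 2.5, 0.35),
        "embedded":      (3.6, 1.20, 2.5, 0.32),
    }
    a, b, c, d = coefficients[mode]
    effort = a * kloc ** b      # person-months
    duration = c * effort ** d  # months
    return effort, duration

effort, duration = basic_cocomo(32, "organic")
print(f"Effort: {effort:.1f} person-months, duration: {duration:.1f} months")
```

For an organic 32-KLOC project this yields roughly 91 person-months over about 14 months; a Level 2 organization would compare such estimates against actuals from earlier, similar projects.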

Level 3: Defined

 At this level the processes for both management and development activities are
defined and documented.
 There is a common organization-wide understanding of activities, roles, and
responsibilities.
 Though the processes are defined, the process and product qualities are not measured.
 ISO 9000 aims at achieving this level.

Level 4: Managed

 At this level, the focus is on software metrics.


 Two types of metrics are collected.
o Product metrics measure the characteristics of the product being developed,
such as its size, reliability, time complexity, understandability, etc.
o Process metrics reflect the effectiveness of the process being used, such as
average defect correction time, productivity, average number of defects found
per hour inspection, average number of failures detected during testing per
LOC, etc.
 Quantitative quality goals are set for the products. The software process and product
quality are measured and quantitative quality requirements for the product are met.
 Various tools like Pareto charts, fishbone diagrams, etc. are used to measure the
product and process quality.
 Thus, the results of process measurements are used to evaluate project performance
rather than improve the process.
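The process metrics listed above can be computed directly from project records. The sketch below uses a small, invented defect log (field names and numbers are made up for illustration) to compute two of the metrics named in the text: average defect correction time and defects found per hour of inspection.

```python
# Computing two Level-4 process metrics from a hypothetical defect
# log. All records and field names here are invented for illustration.

defects = [
    {"id": 1, "found_in": "inspection", "fix_hours": 2.0},
    {"id": 2, "found_in": "testing",    "fix_hours": 5.5},
    {"id": 3, "found_in": "inspection", "fix_hours": 1.5},
    {"id": 4, "found_in": "testing",    "fix_hours": 8.0},
]
inspection_hours = 6.0  # total hours spent on formal inspections

# Process metric: average defect correction time.
avg_fix_time = sum(d["fix_hours"] for d in defects) / len(defects)

# Process metric: average number of defects found per hour of inspection.
found_in_inspection = sum(1 for d in defects if d["found_in"] == "inspection")
defects_per_inspection_hour = found_in_inspection / inspection_hours

print(f"Average correction time: {avg_fix_time:.2f} h")
print(f"Defects per inspection hour: {defects_per_inspection_hour:.2f}")
```

At Level 4 such figures feed quantitative quality goals; a Pareto chart would then rank defect causes by frequency to show where improvement effort pays off most.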
Key process areas (KPA):

Each maturity level (except SEI CMM Level 1) is characterized by several Key Process Areas (KPAs), which identify the areas an organization should focus on to improve its software process to the next level.
Process Patterns

 Process patterns define a set of activities, actions, work tasks, work products
and/or related behaviors
 A template is used to define a pattern
 Typical examples:
o Customer communication (a process activity)
o Analysis (an action)
o Requirements gathering (a process task)
o Reviewing a work product (a process task)
o Design model (a work product)

Process Assessment

 The process should be assessed to ensure that it meets a set of basic process criteria that have been shown to be essential for successful software engineering.

The generic process framework – Detailed Activities of each phase

 Communication

 Planning

 Modeling

 Construction

 Deployment

Capability Maturity Model Integration (CMMI)

The Capability Maturity Model Integration (CMMI) is a process and behavioral model that helps
organizations improve their development processes and encourage productive, efficient behaviors.
Developed at Carnegie Mellon University's Software Engineering Institute, CMMI provides a framework
for organizations to assess and enhance their processes, ultimately leading to higher quality products and
services.

Key Aspects of CMMI:

1. Levels of Maturity: CMMI defines five levels of maturity for processes, ranging from Level 1
(Initial) to Level 5 (Optimizing). Each level represents a different degree of process maturity and
capability.

2. Process Areas: CMMI includes various process areas, such as project management, requirements
management, and quality assurance, which are essential for effective process improvement.

3. Continuous Improvement: The model encourages continuous process improvement through


regular assessments and feedback loops.

4. Applicability: While initially developed for software engineering, CMMI has evolved to be
applicable to various industries, including hardware development, service industries, and more.

5. Benchmarking: Organizations can use CMMI to set benchmarks for evaluating their processes
and identifying areas for improvement.

Benefits of Implementing CMMI:

 Enhanced Productivity: By streamlining processes, organizations can achieve higher productivity


and efficiency.

 Reduced Risk: CMMI helps in identifying and mitigating risks early in the development process.

 Improved Quality: The focus on process improvement leads to higher quality products and
services.

 Customer Satisfaction: Meeting customer expectations and delivering high-quality products


enhances customer satisfaction and loyalty.

 Market Competitiveness: Organizations can improve their market value and competitiveness by
adhering to CMMI standards.

Evolution of CMMI:

CMMI has undergone several iterations, with the most recent version (V2.0) being released in 2018. This
version combines previous areas of focus (product and service development, service establishment, and
product and service acquisition) into a single, more user-friendly and adaptable model.

Process Pattern:

As the software team moves through the software process, they encounter problems. It would be very useful if solutions to these problems were readily available so that they could be resolved quickly. A process pattern describes a process-related problem encountered during software engineering work, identifies the environment in which the problem is found, and suggests proven solutions to it. By solving problems, a software team can construct a process that best meets the needs of a project.

Uses of the process pattern

Patterns can be defined at any level of abstraction. In some situations, a pattern can be used to describe a problem and solution associated with a single framework activity; in other situations, it can describe a problem and solution associated with a complete process model.

Template:

 Pattern Name – Meaningful name must be given to a pattern within context of software process
(e.g. Technical Reviews).

 Forces – The issues that make the problem visible and may affect its solution, along with the environment in which the pattern is encountered.

Type:

Process patterns are of three types:

1. Stage pattern – Problems associated with a framework activity for the process are described by a stage pattern. Establishing Communication might be an example of a stage pattern. This pattern would incorporate the task pattern Requirements Gathering and others.

2. Task-pattern – Problems associated with a software engineering action or work task and relevant
to successful software engineering practice (e.g., Requirements Gathering is a task pattern) are
defined by task-pattern.

3. Phase pattern – Defines the sequence of framework activities that occurs within the process, even when the overall flow of activities is iterative in nature. Spiral Model or Prototyping might be an example of a phase pattern.
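The template entries and pattern types above can be sketched as a small data structure. This is purely illustrative: the field set extends the Pattern Name and Forces entries shown above with type, problem, and solution fields, and the example values are invented.

```python
# A process-pattern template represented as a small data structure --
# an illustrative sketch; fields and example values are invented.

from dataclasses import dataclass

@dataclass
class ProcessPattern:
    name: str          # meaningful name within the software process
    forces: str        # issues that make the problem visible
    pattern_type: str  # "stage", "task", or "phase"
    problem: str
    solution: str

technical_reviews = ProcessPattern(
    name="Technical Reviews",
    forces="Defects found late in the process are costly to fix",
    pattern_type="task",
    problem="Work products contain undetected defects",
    solution="Review each work product with a small peer team",
)
print(f"{technical_reviews.name} ({technical_reviews.pattern_type} pattern)")
```

A team could maintain a catalog of such patterns and combine stage, task, and phase patterns to assemble a process suited to a particular project.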

Process Assessment

Software Process Assessment is a disciplined and organized examination of the software process being used by an organization, based on a process model. It covers many areas, including the identification and characterization of current practices, the ability of current practices to control or avoid significant causes of poor quality, cost, and schedule problems, and the identification of the strengths and weaknesses of the software process.

Types of Software Assessment :

 Self Assessment: This is conducted internally by people of the organisation itself.

 Second Party Assessment: This is conducted by an external team, or people of the organisation are supervised by an external team.

 Third Party Assessment: This is conducted by an external, independent party (for example, a customer assessing its supplier).

In an ideal case Software Process Assessment should be performed in a transparent, open and
collaborative environment. This is very important for the improvement of the software and the
development of the product. The results of the Software Process Assessment are confidential and are only
accessible to the company. The assessment team must contain at least one person from the organization
that is being assessed.

Process Models:

[Figure: The software process is examined by software process assessment, which identifies capabilities and risk and identifies modifications to the software process. Assessment leads to capability determination, which motivates software process improvement, which in turn leads back to the software process. Two instantiations of the software process at the personal and team level are PSP and TSP.]

Personal Software Process (PSP)

 Recommends five framework activities:


o Planning
o High-level design
o High-level design review
o Development
o Postmortem
 PSP stresses the need for each software engineer to identify errors early and, as importantly, to understand the types of errors made.
Team Software Process (TSP)
 Each project is "launched" using a "script" that defines the tasks to be accomplished
 Teams are self-directed
 Measurement is encouraged
 Measures are analyzed with the intent of improving the team process

PROCESS MODELS

SDLC Overview
SDLC, the Software Development Life Cycle, is a process used by the software industry to design, develop, and test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates.
 SDLC is the acronym of Software Development Life Cycle.
 It is also called as Software development process.
 The software development life cycle (SDLC) is a framework defining tasks performed at each
step in the software development process.

A typical Software Development life cycle consists of the following stages:

 Stage 1: Planning and Requirement Analysis


 Stage 2: Defining Requirements
 Stage 3: Designing the product architecture
 Stage 4: Building or Developing the Product
 Stage 5: Testing the Product
 Stage 6: Deployment in the Market and Maintenance

Life cycle model


A software life cycle model (also called a process model) is a descriptive and
diagrammatic representation of the software life cycle. A life cycle model represents all the
activities required to make a software product transit through its life cycle phases. It also
captures the order in which these activities are to be undertaken. In other words, a life
cycle model maps the different activities performed on a software product from its
inception to retirement.

Different life cycle models may map the basic development activities to phases in different
ways. Thus, no matter which life cycle model is followed, the basic activities are included
in all life cycle models though the activities may be carried out in different orders in
different life cycle models. During any life cycle phase, more than one activity may also be

carried out.

1. Waterfall Model
2. Prototyping Model
3. Incremental Model
4. RAD Model
5. Spiral Model

Evolutionary Process Model

The evolutionary model is based on the concept of making an initial product and then evolving the
software product over time with iterative and incremental approaches with proper feedback.
In this type of model, the product goes through several iterations, and the final product emerges through these multiple iterations. Development is carried out simultaneously with feedback gathered during development. This model has a number of advantages, such as customer involvement, taking feedback from the customer during development, and building the exact product that the user wants. Because of the multiple iterations, the chances of errors are reduced, and reliability and efficiency increase.

[Figure: Evolutionary Model]

Types of Evolutionary Process Models

1. Iterative Model

2. Incremental Model

3. Spiral Model

1. Waterfall Model or Phased life cycle model

• Oldest software lifecycle model and best understood by upper management


• Used when requirements are well understood and risk is low
• Work flow is in a linear (i.e., sequential) fashion
• Used often with well-defined adaptations or enhancements to current software
• Begins with customer specification of Requirements and progresses through
planning, modeling, construction and deployment

The sequential phases in Waterfall model are:

 Requirement Gathering and analysis: All possible requirements of the system to be developed
are captured in this phase and documented in a requirement specification doc.
 System Design: The requirement specifications from first phase are studied in this phase and
system design is prepared. System Design helps in specifying hardware and system requirements
and also helps in defining overall system architecture.
 Implementation: With inputs from system design, the system is first developed in small
programs called units, which are integrated in the next phase. Each unit is developed and tested
for its functionality which is referred to as Unit Testing.
 Integration and Testing: All the units developed in the implementation phase are integrated
into a system after testing of each unit. Post integration the entire system is tested for any faults
and failures.
 Deployment of system: Once the functional and non-functional testing is done, the product is deployed in the customer environment or released into the market.
 Maintenance: There are some issues which come up in the client environment. To fix those
issues patches are released. Also to enhance the product some better versions are released.
Maintenance is done to deliver these changes in the customer environment.

All these phases are cascaded so that progress flows steadily downwards (like a waterfall) through the phases. The next phase starts only after the defined goals of the previous phase are achieved and signed off; hence the name "Waterfall Model". In this model, phases do not overlap.

Advantages:

 It is very simple

 It divides the large task of building a software system into a series of clearly divided phases.
 Each phase is well documented

Problems

 Doesn't support iteration, so changes can cause confusion

 Difficult for customers to state all requirements explicitly and up front

 Requires customer patience because a working version of the program doesn't occur until the final phase
 Problems can be somewhat alleviated in the model through the addition of
feedback loops

INCREMENTAL PROCESS MODELS

1. Incremental Model

o In this life cycle model, the software is first broken down into several modules which can
be incrementally constructed and delivered.
o Used when requirements are well understood
o Multiple independent deliveries are identified
o Work flow is in a linear (i.e., sequential) fashion within an increment and is staggered
between increments
o Iterative in nature; focuses on an operational product with each increment
o The development team first develops the core modules of the system.
o This initial product skeleton is refined into increasing levels of capability adding new
functionalities in successive versions.
o Each evolutionary version may be developed using an iterative waterfall model of
development.
o Provides a needed set of functionality sooner while delivering optional components
later
o Useful also when staffing is too short for a full-scale development

Iterative Model

This model is most often used in the following scenarios:

 Requirements of the complete system are clearly defined and understood.


 Major requirements must be defined; however, some functionalities or requested enhancements
may evolve with time.
 There is a time-to-market constraint.
 A new technology is being used and is being learnt by the development team while working on the
project.
 Resources with needed skill set are not available and are planned to be used on contract basis for
specific iterations.
 There are some high risk features and goals which may change in the future.

Evolutionary Process Model

The evolutionary model is also known as the successive versions or incremental model. The main
aim of this evolutionary model is to deliver the product in parts over time. It combines
the iterative and incremental models of the software development life cycle (SDLC).

Based on the evolutionary model, we can divide the development into many modules to
help the developer build and deliver incrementally. On the other hand, we can also develop the
skeleton of the initial product. Also, it refines the project to increase levels of capability by adding new
functionalities in successive versions.

Characteristics of the Evolutionary Model

There are so many characteristics of using the evolutionary model in our project. These
characteristics are as follows.

o We can develop the evolutionary model with the help of an iterative waterfall model of
development.

o There are three types of evolutionary models. These are the Iterative model, Incremental model
and Spiral model.

o Considerable requirements gathering and architectural planning must be done to
implement the evolutionary model.

o When the new product version is released, it includes the new functionality and some changes in
the existing product, which are also released with the latest version.

o This model also permits the developer to change the requirement, and the developer can divide
the process into different manageable work modules.

o The development team also has to respond to customer feedback throughout the development
process by frequently altering the product, strategy, or process.

Rapid Application Development (RAD) Model

• RAD is a high-speed adaptation of the linear sequential model. It is characterized by a
very short development life cycle, in which the objective is to accelerate the
development.
• The RAD model follows a component based approach.
• In this approach individual components developed by different people are
assembled to develop a large software system.

The RAD model consist of the following phases

• Business Modeling:
In this phase, define the flow of information within the organization, so that it
covers all the functions. This helps in clearly understanding the nature, type, source
and process of information.
• Data Modeling:
In this phase, convert the components of the information flow into a set of data
objects. Each object is referred to as an Entity.
• Process Modeling:
In this phase, the data objects defined in the previous phase are used to depict
the flow of information. In addition, adding, deleting, modifying and retrieving
the data objects are included in process modeling.
• Application Designing:
In this phase, the generation of the application and coding take place. Using
fourth-generation programming languages or 4GL tools is the preferred choice
for software developers.
• Testing:
In this phase, test the new program components.

The RAD has following advantages

• Due to the emphasis on rapid development, it results in the delivery of a fully
functional project in a short time period.
• It encourages the development of reusable program components.

The RAD has following disadvantages

• It requires dedication and commitment on the part of the developers as well as the
client to meet the deadline. If either party is indifferent to the needs of the other,
the project will run into serious problems.
• It is not suitable for large but scalable projects, as RAD requires sufficient human
resources to create the right number of RAD teams.
• RAD requires developers and customers who are committed to rapid fire
activities
• Its application area is restricted to systems that are modular and reusable in
nature.
• It is not suitable for the applications that have a high degree of technical risk.
• Not all types of applications are appropriate for RAD.

Evolutionary Process Models:

Prototype Models:

A prototype is a toy implementation of the system. A prototype usually exhibits limited
functional capabilities, low reliability, and inefficient performance compared to the actual
software. A prototype is usually built using several shortcuts. The shortcuts might involve
using inefficient, inaccurate, or dummy functions. The shortcut implementation of a
function, for example, may produce the desired results by using a table look-up instead of
performing the actual computations.
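The table look-up shortcut described above can be sketched in a few lines. In this hypothetical example (all names and figures invented), the prototype returns canned answers from a table, while the eventual production version performs the real computation:

```python
# Hypothetical prototype shortcut: return canned answers from a lookup
# table instead of performing the real computation.
SAMPLE_TAX_TABLE = {
    10_000: 500.0,
    25_000: 2_000.0,
    50_000: 6_500.0,
}

def compute_tax_prototype(income: int) -> float:
    """Dummy function: look up a precomputed result; incomes outside
    the demo data simply fall back to 0.0."""
    return SAMPLE_TAX_TABLE.get(income, 0.0)

def compute_tax_real(income: int) -> float:
    """The eventual production logic (simplified progressive slabs)."""
    if income <= 10_000:
        return income * 0.05
    if income <= 25_000:
        return 500.0 + (income - 10_000) * 0.10
    return 2_000.0 + (income - 25_000) * 0.18

print(compute_tax_prototype(25_000))  # 2000.0, good enough for a demo
print(compute_tax_real(25_000))
```

The prototype version is enough to demonstrate screens and reports to the customer, and can be discarded once the requirements are agreed.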

Need for a prototype in software development

There are several uses of a prototype. An important purpose is to illustrate the input data
formats, messages, reports, and the interactive dialogues to the customer. This is a
valuable mechanism for gaining better understanding of the customer’s needs:

• how the screens might look


• how the user interface would behave
• how the system would produce output

• Follows an evolutionary and iterative approach

• Used when requirements are not well understood

• Serves as a mechanism for identifying software requirements

• Focuses on those aspects of the software that are visible to the customer/user

• In this model, product development starts with an initial requirements gathering
phase.
• A quick design is carried out and the prototype is built.

• The developed prototype is submitted to the customer for his evaluation.

• Based on the customer feedback, the requirements are refined and the prototype is
suitably modified.
• This cycle of obtaining customer feedback and modifying the prototype continues till
the customer approves the prototype.
• The actual system is developed using the iterative waterfall approach. However, in
the prototyping model of development, the requirements analysis and specification
phase becomes redundant, as the working prototype approved by the customer
serves as an animated requirements specification.

Disadvantages

 The customer sees a "working version" of the software, wants to stop all
development and then buy the prototype after a "few fixes" are made
 Developers often make implementation compromises to get the software running
quickly (e.g., language choice, user interface, operating system choice, inefficient
algorithms)
 Lesson learned
o Define the rules up front on the final disposition of the prototype before it is
built
o In most circumstances, plan to discard the prototype and engineer the
actual production software with a goal toward quality

Spiral Model

• Invented by Dr. Barry Boehm in 1988


• Follows an evolutionary approach
• Used when requirements are not well understood and risks are high
• Inner spirals focus on identifying software requirements and project risks; may
also incorporate prototyping
• Outer spirals take on a classical waterfall approach after requirements have been
defined, but permit iterative growth of the software
• Operates as a risk-driven model…a go/no-go decision occurs after each
complete spiral in order to react to risk determinations
• Requires considerable expertise in risk assessment

• Serves as a realistic model for large-scale software development

First quadrant (Objective Setting)

• During the first quadrant, the objectives of the phase are identified.

• Examine the risks associated with these objectives.

Second Quadrant (Risk Assessment and Reduction)

• A detailed analysis is carried out for each identified project risk.

• Steps are taken to reduce the risks. For example, if there is a risk that
the requirements are inappropriate, a prototype system may be
developed.

Third Quadrant (Development and Validation)

• Develop and validate the next level of the product after resolving the identified
risks.

Fourth Quadrant (Review and Planning)

• Review the results achieved so far with the customer and plan the
next iteration around the spiral.

• A progressively more complete version of the software gets built with each
iteration around the spiral.

Spiral Model Advantages

• Focuses attention on reuse options.


• It is a realistic approach to the development of large scale systems and software.
• Focuses attention on early error elimination.
• Puts quality objectives up front.
• Integrates development and maintenance.
• Provides a framework for hardware/software development.
Disadvantages:

• Contractual development often specifies the process model and deliverables in
advance.

• Requires risk assessment expertise.

The Unified Process

The Unified Process (UP) is a software development framework used for object-oriented
modeling. The framework is also known as Rational Unified Process (RUP) and the
Open Unified Process (Open UP). Some of the key features of this process include:
 It defines the order of phases.
 It is component-based, meaning a software system is built as a set of software
components. There must be well-defined interfaces between the components for
smooth communication.
 It follows an iterative, incremental, architecture-centric, and use-case driven approach

The phases of the unified process


Inception
The main goal of this phase involves delimiting the project scope. This is where we define
why we are making this product in the first place. It should have the following:
 What are the key features?
 How does this benefit the customers?
 Which methodology will we follow?
 What are the risks involved in executing the project?
 Schedule and cost estimates.
Elaboration
We build the system given the requirements, cost, and time constraints and all the risks
involved. It should include the following:
 Develop with the majority of the functional requirements implemented.
 Finalize the methodology to be used.
 Deal with the significant risks involved.
Construction
This phase is where the development, integration, and testing take place. We build the
complete architecture in this phase and hand the final documentation to the client.
Transition
This phase involves the deployment, multiple iterations, beta releases, and improvements
of the software. The users will test the software, which may raise potential issues. The
development team will then fix those errors.
Conclusion
This method allows us to deal with changing requirements throughout the
development period. The unified process model has various applications, which also
makes it complex in nature. Therefore, it is most suitable for smaller projects and
should be implemented by a team of professionals.

Unit – II

Software Requirements

Introduction to Software Requirements

Software requirements are the specifications and descriptions of the functionality and
constraints of a software system. They are fundamental to the software development process as
they outline what the system should do and how it should perform.

Key Aspects of Software Requirements:

1. Types of Requirements:

 Functional Requirements: These define the specific behaviors and functions that the
software must perform. For example, a functional requirement for a banking application
might be the ability to transfer funds between accounts.

 Non-Functional Requirements: These describe the attributes of the system, such as
performance, security, and usability. For instance, the system must be able to handle
1,000 transactions per second or ensure data is encrypted.

2. Gathering Requirements:

 Stakeholder Interviews: Engaging with stakeholders (clients, users, developers) to
understand their needs and expectations.

 Workshops and Brainstorming Sessions: Collaborative sessions to explore and define
requirements.

 Surveys and Questionnaires: Collecting input from a large group of stakeholders.

3. Documentation:

 Software Requirements Specification (SRS): A detailed document that outlines all the
functional and non-functional requirements of the system. The SRS serves as a reference
for developers, testers, and stakeholders throughout the project lifecycle.

4. Analysis:

 Requirement Analysis: Assessing and refining the gathered requirements to ensure they
are clear, complete, and feasible.

 Prioritization: Determining the importance of each requirement to address the most
critical aspects first.

5. Validation and Verification:

 Validation: Ensuring that the requirements accurately reflect the needs and expectations
of the stakeholders.

 Verification: Confirming that the requirements are feasible and can be implemented
within the constraints of the project.

Importance of Software Requirements:

 Foundation for Development: Requirements provide the foundation for all subsequent
phases of the software development lifecycle. They guide design, implementation, and
testing efforts.

 Communication Tool: Well-documented requirements serve as a communication tool
among stakeholders, ensuring everyone has a clear understanding of what the system
should do.

 Risk Mitigation: Clear and well-defined requirements help in identifying and mitigating
risks early in the project.

 Quality Assurance: Properly defined requirements contribute to the quality of the final
product by ensuring it meets user needs and expectations.

Software requirements are crucial for the successful development and delivery of a software
system, serving as the blueprint that guides the entire project. They ensure that the final
product aligns with the stakeholders' vision and provides the desired functionalities and
performance.

User requirements in software engineering

User requirements in software engineering are essential for understanding and defining what
end-users expect from a software system. These requirements serve as the foundation for
designing, developing, and testing the software to ensure it meets user needs. Here’s an
overview:
Key Aspects of User Requirements:

1. Definition:

o User requirements are statements that describe what the end-users need from the
software system. They focus on user goals, tasks, and interactions with the
system.

2. Types of User Requirements:

o Functional Requirements: These specify the actions that the system must be
able to perform, such as user authentication, data processing, and reporting.

o Non-Functional Requirements: These outline the attributes of the system, like
performance, usability, reliability, and security.

3. Gathering User Requirements:

o Interviews and Surveys: Conducting interviews and surveys with potential users to
gather their needs and expectations.

o Workshops and Focus Groups: Engaging users in interactive sessions to explore and
define their requirements.

o Observation: Observing users in their natural environment to understand their
workflows and pain points.

o Use Cases and Scenarios: Developing use cases and scenarios to illustrate how
users will interact with the system.

4. Documentation:

o User Requirements Specification (URS): A detailed document that captures all
user requirements. It serves as a reference for the development team throughout
the project lifecycle.

o User Stories: Short, simple descriptions of a feature from the perspective of the
end-user, often used in Agile development.

5. Analysis and Validation:

o Requirement Analysis: Assessing and refining the gathered requirements to
ensure they are clear, complete, and feasible.

o Validation: Ensuring that the requirements accurately reflect user needs and are
achievable within the project constraints.

6. Importance of User Requirements:

o Foundation for Design: User requirements guide the design phase by providing a
clear understanding of user needs and expectations.

o Improved User Satisfaction: By focusing on user requirements, the resulting
software is more likely to meet user expectations and improve satisfaction.

o Reduced Development Costs: Early identification of user requirements helps
avoid costly changes and rework later in the development process.

o Enhanced Communication: Clear documentation of user requirements facilitates
better communication among stakeholders, developers, and testers.

Examples of User Requirements:

 Functional Requirement Example: "The system shall allow users to log in using their
email and password."

 Non-Functional Requirement Example: "The system shall load the user dashboard
within 3 seconds under normal load conditions."
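Requirements phrased this precisely translate directly into acceptance checks. The sketch below is illustrative only: the two functions are invented stand-ins for a real login service and dashboard, not an actual implementation.

```python
import time

def login(email: str, password: str) -> bool:
    """Stand-in for the real authentication service."""
    return email == "user@example.com" and password == "pa55word"

def load_dashboard() -> float:
    """Stand-in that returns the time (in seconds) taken to render the dashboard."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate rendering work
    return time.perf_counter() - start

# Functional requirement: users can log in with email and password.
assert login("user@example.com", "pa55word")

# Non-functional requirement: the dashboard loads within 3 seconds.
assert load_dashboard() < 3.0
print("both requirement checks passed")
```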

System requirements

System requirements refer to the specifications and constraints that define the
characteristics and functionality of a software system. These requirements outline what the
system must achieve to meet the needs of users and stakeholders. They serve as a blueprint for
the design, development, testing, and maintenance phases of the software development lifecycle.

Key Aspects of System Requirements:

1. Types of System Requirements:

o Functional Requirements: These describe specific behaviors or functions of the
system. For example, a functional requirement might specify that the system
should allow users to log in using a username and password.

o Non-Functional Requirements: These specify the system's qualities and
constraints, such as performance, usability, reliability, and security. An example of
a non-functional requirement might be that the system must handle 1,000
simultaneous user logins without performance degradation.

2. Gathering System Requirements:

o Stakeholder Interviews: Engaging with stakeholders, including users, clients, and
developers, to understand their needs and expectations.

o Workshops and Brainstorming Sessions: Collaborative sessions to explore and
define system requirements.

o Observation and Analysis: Studying existing systems and processes to identify
requirements for improvement.

3. Documentation:

o System Requirements Specification (SRS): A comprehensive document that
captures all functional and non-functional requirements. The SRS serves as a
reference throughout the development process.

4. Analysis and Validation:

o Requirement Analysis: Evaluating and refining the gathered requirements to
ensure they are clear, complete, and feasible.

o Validation: Ensuring that the requirements accurately reflect the needs and
expectations of the stakeholders.

5. Importance of System Requirements:

o Foundation for Design and Development: System requirements provide the
foundation for designing and developing the software system.

o Guiding Testing Efforts: Clear requirements help in creating effective test cases
to verify that the system meets its intended functionality and performance.

o Risk Mitigation: Well-defined requirements help in identifying and mitigating
risks early in the project.

o Enhanced Communication: Documenting system requirements facilitates better
communication among stakeholders, developers, and testers.

Examples of System Requirements:

 Functional Requirement Example: "The system shall allow users to reset their
passwords via an email verification process."

 Non-Functional Requirement Example: "The system shall load the homepage within 2
seconds under normal operating conditions."

Interface specification

Interface specification is a critical part of software engineering that outlines the details of how
different components of a software system interact with each other. It serves as a contract
between different parts of the system, ensuring that each part knows how to communicate with
the others effectively.

Key Aspects of Interface Specification:

1. Purpose:

o The primary purpose of an interface specification is to define the interactions
between software components, subsystems, or external systems. This ensures
compatibility and facilitates seamless communication.

2. Types of Interfaces:

o User Interfaces (UI): Defines how users interact with the system, including
input methods (e.g., forms, buttons) and output displays (e.g., screens, reports).

o Application Programming Interfaces (API): Defines how software components
interact programmatically, including the methods, protocols, and data formats
used for communication.

o Hardware Interfaces: Defines how software interacts with hardware
components, specifying the communication protocols, signals, and data exchanges
required.

3. Components of an Interface Specification:

o Interface Description: A detailed description of the interface, including its
purpose and the components it connects.

o Data Types and Formats: Defines the types of data exchanged through the
interface, including data structures, formats, and constraints.

o Function Signatures: Describes the functions or methods available through the
interface, including their names, parameters, return types, and expected behavior.

o Protocols and Standards: Specifies the communication protocols, standards, and
conventions used for interaction.

o Error Handling: Outlines how errors are detected, reported, and handled through
the interface.

o Security Considerations: Describes the security measures in place to protect
data and ensure secure communication.

4. Importance of Interface Specification:

o Modularity and Reusability: Well-defined interfaces enable modular design,
allowing components to be developed, tested, and maintained independently. This
promotes reusability and flexibility in the system.

o Clear Communication: Interface specifications provide a clear and unambiguous
contract between components, reducing misunderstandings and integration
issues.

o Scalability: By defining how components interact, interface specifications
support the scalability and extensibility of the system, making it easier to add or
modify components.

o Interoperability: Ensures that different components, possibly developed by
different teams or organizations, can work together seamlessly.

Example of an API Interface Specification:

Interface: UserAuthenticationAPI

Description:

This API provides methods for user authentication, including login and logout functionalities.

Data Types:

- UserCredentials: { username: String, password: String }

Function Signatures:

1. login(credentials: UserCredentials): boolean

- Description: Authenticates the user with the provided credentials.

- Parameters: credentials (UserCredentials) - The user's login credentials.

- Returns: boolean - True if authentication is successful, false otherwise.

2. logout(userId: String): void

- Description: Logs out the specified user.

- Parameters: userId (String) - The ID of the user to log out.

- Returns: void

Error Handling:

- InvalidCredentialsError: Returned when the provided credentials are incorrect.

- UserNotFoundError: Returned when the specified user ID is not found.

Security Considerations:

- All data exchanges must be encrypted using HTTPS.

- Passwords must be hashed before storage and comparison.
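The specification above can be realized in code. The sketch below is one possible Python rendering of UserAuthenticationAPI; the in-memory user store and SHA-256 hashing are illustrative assumptions, and a real system would use a database, salted hashing, and HTTPS transport.

```python
import hashlib

class UserNotFoundError(Exception):
    """Raised when the specified user ID is not found (per the spec)."""

class UserAuthenticationAPI:
    def __init__(self):
        # Hypothetical in-memory store: username -> hashed password.
        self._users = {"alice": self._hash("s3cret")}
        self._logged_in = set()

    @staticmethod
    def _hash(password: str) -> str:
        # Spec: passwords must be hashed before storage and comparison.
        return hashlib.sha256(password.encode()).hexdigest()

    def login(self, username: str, password: str) -> bool:
        """Returns True on success, False otherwise (per the signature);
        a stricter variant could raise InvalidCredentialsError instead."""
        stored = self._users.get(username)
        if stored is None or stored != self._hash(password):
            return False
        self._logged_in.add(username)
        return True

    def logout(self, user_id: str) -> None:
        if user_id not in self._users:
            raise UserNotFoundError(user_id)
        self._logged_in.discard(user_id)

api = UserAuthenticationAPI()
print(api.login("alice", "s3cret"))  # True
```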

Feasibility Study

A feasibility study is a systematic analysis used to assess the practicality, viability, and
potential success of a proposed project or solution. It evaluates various aspects of the project to
determine whether it is achievable within the constraints of resources, time, and technology.
Feasibility studies help stakeholders make informed decisions about proceeding with a project.

Types of Feasibility Studies

1. Technical Feasibility:

o Purpose: Evaluates whether the technology and resources required for the
project are available or can be developed.

o Key Considerations:

 Availability of hardware, software, and technical expertise.

 Scalability and compatibility with existing systems.

 Assessment of technical risks.

2. Economic Feasibility:

o Purpose: Determines whether the project is cost-effective and financially viable.

o Key Considerations:

 Cost-benefit analysis.

 Return on Investment (ROI) and payback period.

 Budget availability and funding requirements.

3. Operational Feasibility:

o Purpose: Assesses whether the project aligns with organizational goals and can
be integrated into current workflows.

o Key Considerations:

 Acceptance by stakeholders and end-users.

 Impact on business operations.

 Ease of implementation and usability.

4. Legal Feasibility:

o Purpose: Ensures the project complies with relevant laws, regulations, and
policies.

o Key Considerations:

 Intellectual property rights.

 Privacy and data protection laws.

 Industry-specific regulations.

5. Schedule Feasibility:

o Purpose: Evaluates whether the project can be completed within the required
time frame.

o Key Considerations:

 Availability of resources and workforce.

 Realistic timelines for each phase of the project.

 Dependencies and potential delays.
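The ROI and payback-period figures used in economic feasibility are simple arithmetic. A minimal sketch with invented cost and benefit numbers:

```python
def roi(total_benefit: float, total_cost: float) -> float:
    """Return on Investment as a fraction: (benefit - cost) / cost."""
    return (total_benefit - total_cost) / total_cost

def payback_period(initial_cost: float, annual_net_benefit: float) -> float:
    """Years needed for cumulative net benefit to recover the initial cost."""
    return initial_cost / annual_net_benefit

# Hypothetical project: 200,000 up-front cost, 80,000 net benefit per year,
# evaluated over a 5-year horizon.
cost = 200_000.0
annual_benefit = 80_000.0
print(f"ROI: {roi(annual_benefit * 5, cost):.0%}")               # ROI: 100%
print(f"Payback: {payback_period(cost, annual_benefit)} years")  # Payback: 2.5 years
```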

Requirement Engineering Process

The Requirement Engineering Process is a critical phase in software engineering that
focuses on defining, documenting, and maintaining the software requirements. It ensures that
the end product meets the needs and expectations of its users and stakeholders. Here's an
overview of the process:

Steps in the Requirement Engineering Process:

1. Elicitation:

o Objective: To gather requirements from stakeholders, including users, clients, and
other relevant parties.

o Techniques: Interviews, surveys, questionnaires, workshops, brainstorming
sessions, observation, and document analysis.

2. Analysis:

o Objective: To refine and structure the gathered requirements to ensure they are
clear, complete, and feasible.

o Techniques: Requirement prioritization, use case analysis, and modeling (e.g., data
flow diagrams, entity-relationship diagrams).

3. Specification:

o Objective: To document the requirements in a formal and structured manner.

o Deliverables: Software Requirements Specification (SRS) document, use cases,
user stories, and functional and non-functional requirement documents.

4. Validation:

o Objective: To ensure that the requirements accurately reflect the needs and
expectations of the stakeholders and are achievable.

o Techniques: Requirement reviews, inspections, walkthroughs, and prototyping.


5. Management:

o Objective: To track and manage changes to the requirements throughout the
software development lifecycle.

o Techniques: Requirement traceability matrices, version control, and change
management processes.
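A requirements traceability matrix can be as simple as a mapping from requirement IDs to the test cases that cover them. A toy sketch (all IDs invented):

```python
# Toy traceability matrix: requirement ID -> covering test cases.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-201"],
    "REQ-003": [],  # no coverage yet; flagged below
}

def uncovered(matrix: dict) -> list:
    """Requirements with no linked test case: a validation gap."""
    return [req for req, tests in matrix.items() if not tests]

for req, tests in traceability.items():
    print(f"{req}: {', '.join(tests) or '-- no tests --'}")
print("Uncovered:", uncovered(traceability))  # Uncovered: ['REQ-003']
```

In practice such a matrix is kept under version control alongside the requirements so that coverage gaps surface whenever a requirement changes.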

System Models

System models are abstract representations that help in understanding, analyzing, and
designing complex software systems. They provide a structured way to visualize the system's
components, their interactions, and the overall architecture. Here are some common types of
system models used in software engineering:

1. Context Models

Context models define the boundaries of the system and its interactions with external entities
such as users, other systems, and external devices. They help in identifying the system’s scope
and its environment.

 Example: A context diagram showing the system at the center with lines connecting it to
external entities, indicating the flow of data or interactions.

2. Behavioral Models

Behavioral models describe how the system behaves in response to internal or external
events. They focus on the dynamics of the system, including workflows, processes, and state
changes.

 Example: Use case diagrams, sequence diagrams, and state machine diagrams in UML
(Unified Modeling Language).
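A state machine diagram of the kind mentioned above can be captured directly as a transition table. The sketch below models a hypothetical order workflow (states and events invented for illustration):

```python
# Transition table: (current_state, event) -> next_state.
TRANSITIONS = {
    ("created", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("shipped", "deliver"): "delivered",
    ("created", "cancel"): "cancelled",
}

def step(state: str, event: str) -> str:
    """Apply one event; anything not in the table is an invalid transition."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")

state = "created"
for event in ("pay", "ship", "deliver"):
    state = step(state, event)
print(state)  # delivered
```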

3. Structural Models

Structural models depict the organization of the system components and their relationships.
They focus on the static aspects, such as system architecture, data structures, and component
hierarchy.

 Example: Class diagrams, component diagrams, and deployment diagrams in UML.

4. Data Models
Data models represent the structure of the data within the system. They define how data is
stored, organized, and manipulated, focusing on entities, attributes, and relationships.

 Example: Entity-Relationship (ER) diagrams, which show entities, their attributes, and
the relationships between them.

5. Functional Models

Functional models describe the functional requirements of the system, including the
processes and activities the system must perform to achieve its goals.

 Example: Data flow diagrams (DFDs) that illustrate the flow of data through the system,
identifying processes, data stores, and data flows.

6. Architectural Models

Architectural models provide a high-level view of the system’s structure, showing the major
components and their interactions. They help in defining the overall system architecture.

 Example: Layered architecture models, client-server models, and microservices
architecture diagrams.

7. Interaction Models

Interaction models focus on the communication between different system components or
between the system and external entities. They illustrate the sequence and flow of messages or
data.

 Example: Sequence diagrams and communication diagrams in UML.

System models are essential tools in software engineering that aid in understanding,
designing, and communicating complex software systems. Each type of model provides a
different perspective, contributing to a comprehensive view of the system’s structure, behavior,
and functionality.

Unit – III

Design Engineering

Design engineering is a critical aspect of the software development process that focuses
on creating the architecture, components, interfaces, and other elements of a software system. It

transforms requirements into a blueprint for constructing the software, ensuring that it meets
functional and non-functional requirements while being maintainable, scalable, and robust.

Design Process

 Architectural Design: Creating the high-level structure of the software system, defining
the major components and their interactions.

 Detailed Design: Defining the internal structure and behavior of each component,
including data structures, algorithms, and control logic.

 Interface Design: Specifying how different components interact, including the methods
and data formats used for communication.

 Design Reviews: Conducting reviews to ensure that the design meets the requirements
and adheres to best practices.

Design Process and Quality

In software engineering, the design process and quality assurance are deeply intertwined. A
well-defined design process ensures that the system is built correctly from the start, and quality
assurance verifies that the final product meets all required standards and performs as expected.
Here’s a comprehensive look at both aspects:

Design Process

1. Requirements Analysis:

o Objective: Understand and gather user needs and system requirements.

o Activities: Stakeholder interviews, surveys, document analysis, and requirements
workshops.

2. High-Level Design (Architectural Design):

o Objective: Define the overall architecture of the system, including major
components and their interactions.

o Deliverables: Architectural diagrams, high-level system flow, and component
interactions.

3. Detailed Design:

o Objective: Specify the internal structure and behavior of each component,
including data structures, algorithms, and interfaces.
o Deliverables: Detailed design documents, class diagrams, sequence diagrams, and
state diagrams.

4. Interface Design:

o Objective: Define how different components and systems interact with each other.

o Deliverables: API specifications, interface protocols, and user interface designs.

5. Prototype Development:

o Objective: Create prototypes to validate design concepts and gather user feedback.

o Activities: Building and testing prototypes, collecting feedback, and refining designs.

6. Design Review and Approval:

o Objective: Ensure the design meets all requirements and is feasible for
implementation.

o Activities: Conducting design reviews with stakeholders, addressing feedback, and obtaining approval.

Quality Assurance

1. Verification and Validation (V&V):

o Objective: Ensure the software meets all specified requirements and performs as
expected.

o Activities:

 Verification: Checking that the system is built correctly according to the design (e.g., code reviews, static analysis).

 Validation: Ensuring the built system meets user needs and requirements
(e.g., user acceptance testing).

2. Testing:

o Unit Testing: Testing individual components or modules for correctness.

o Integration Testing: Testing combined components to ensure they work together as intended.

o System Testing: Testing the entire system for compliance with the requirements.

o Acceptance Testing: Ensuring the system meets user needs and is ready for
deployment.

3. Performance Testing:

o Objective: Ensure the system performs well under expected load conditions.

o Activities: Load testing, stress testing, and scalability testing.

4. Security Testing:

o Objective: Identify and mitigate security vulnerabilities.

o Activities: Penetration testing, vulnerability scanning, and security audits.

5. Usability Testing:

o Objective: Ensure the system is user-friendly and meets user experience standards.

o Activities: User testing sessions, heuristic evaluations, and usability surveys.

6. Quality Metrics and Monitoring:

o Objective: Track and measure quality attributes to ensure continuous improvement.

o Metrics: Defect density, code coverage, mean time to failure (MTTF), and customer
satisfaction scores.

Design Concepts

Design concepts are foundational principles and ideas that guide the software development
process, ensuring that the final product is robust, maintainable, and meets user needs. Here are
some key design concepts in software engineering:

Key Design Concepts:

1. Abstraction:

o Definition: Simplifying complex systems by focusing on the essential features while ignoring unnecessary details.

o Example: Abstracting the details of file handling by providing a simple interface for
reading and writing files.

2. Encapsulation:

o Definition: Bundling data and the methods that operate on the data within a single
unit, typically a class, and restricting access to some of the object's components.

o Example: Hiding the internal implementation of a class and exposing only the
necessary methods to interact with it.
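To make the idea concrete, here is a minimal Python sketch of encapsulation; the BankAccount class and its method names are invented for this note, not taken from any library:

```python
class BankAccount:
    """Encapsulates a balance behind deposit/withdraw methods."""

    def __init__(self, opening_balance=0):
        self._balance = opening_balance  # internal state, not accessed directly

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount

    def balance(self):
        return self._balance
```

Callers never touch `_balance` directly; the class can later change how the balance is stored without breaking its users.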

3. Modularity:

o Definition: Dividing a system into smaller, manageable, and independent modules or components.

o Example: Breaking down a large application into separate modules, such as user
authentication, payment processing, and inventory management.

4. Separation of Concerns:

o Definition: Dividing a system into distinct features that overlap as little as possible,
making the system easier to manage and understand.

o Example: Separating the user interface logic from business logic and data access
layers in a web application.

5. Cohesion and Coupling:

o Cohesion: Refers to how closely related and focused the responsibilities of a single
module are.

o Coupling: Refers to the degree of dependence between modules.

o Example: Aim for high cohesion (e.g., a module handling only database operations)
and low coupling (e.g., modules interacting through well-defined interfaces).

6. Inheritance:

o Definition: A mechanism to create a new class that is based on an existing class, inheriting its attributes and methods.

o Example: Creating a "Manager" class that inherits from an "Employee" class, adding specific attributes and methods relevant to managers.

7. Polymorphism:

o Definition: The ability of different classes to be treated as instances of the same class through a common interface.

o Example: A function that can accept objects of different classes (e.g., circle,
rectangle) but calls the appropriate method based on the object's actual class.
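Inheritance (concept 6) and polymorphism (concept 7) can be sketched together in a few lines of Python; the Employee/Manager classes below flesh out the hypothetical example above:

```python
class Employee:
    def __init__(self, name):
        self.name = name

    def describe(self):
        return f"{self.name} (employee)"


class Manager(Employee):  # Manager inherits Employee's attributes and methods
    def __init__(self, name, team_size):
        super().__init__(name)
        self.team_size = team_size

    def describe(self):  # overridden method: polymorphism in action
        return f"{self.name} (manager of {self.team_size})"


def roster(employees):
    # Works with any Employee subtype; the right describe() is chosen at runtime.
    return [e.describe() for e in employees]
```

`roster()` never checks which concrete class it received; each object supplies its own behavior.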

8. Design Patterns:

o Definition: Reusable solutions to common problems in software design.

o Example: Singleton, Factory, Observer, and Strategy are some commonly used
design patterns.

9. Agile Design:

o Definition: Emphasizing flexibility and iterative development, allowing for continuous feedback and adaptation.

o Example: Using Agile methodologies to frequently refine and improve the design
based on user feedback and changing requirements.

10. User-Centered Design:

o Definition: Focusing on the needs, preferences, and limitations of end-users throughout the design process.

o Example: Conducting usability testing and incorporating user feedback to enhance the user experience.

Design Model

A Design Model in software engineering is a blueprint or a structured representation of how the components and elements of a system are designed and organized to meet the system's requirements. It serves as the bridge between the high-level architectural design and the actual code implementation.

Key Elements of a Design Model

1. Components:

o The design model outlines the various components (or modules) of the system
and how they interact with one another. Each component is responsible for a
specific functionality or a set of functionalities.
o Components can be both software (e.g., classes, libraries) and hardware (e.g.,
devices, servers).

2. Data Design:

o Focuses on defining how the data will be stored, accessed, and manipulated. This
includes designing databases, data structures, and data flows.

3. Control Design:

o Refers to how the flow of control is managed within the system. It specifies how
operations, processes, or services will be coordinated, including the flow of
messages or events.

4. Interface Design:

o Details how different components and systems will communicate. It defines the
inputs and outputs for each component and the protocols for communication (e.g.,
API specifications, message formats).

5. User Interface Design:

o Focuses on designing the system’s interface with users. This includes screens,
forms, and interactive elements that users interact with, ensuring usability and a
good user experience (UX).

6. Behavioral Design:

o Describes the dynamic aspects of the system, including how components behave
during execution, such as sequence diagrams, state diagrams, and activity
diagrams.

Types of Design Models

1. High-Level Design (HLD):

o Describes the system architecture and components in abstract terms, often using
block diagrams or component diagrams.

o Defines major modules, components, and how they interact at a high level.

o It focuses on the overall structure without delving into the implementation details.

Example:

o An online shopping system may have modules like User Management, Order
Processing, Inventory Management, and Payment Gateway.

2. Low-Level Design (LLD):

o Breaks down the high-level design into more detailed designs, focusing on each
module’s implementation.

o It includes class diagrams, database schema designs, and method/function details.

o Often represented using UML diagrams like class diagrams, sequence diagrams, or state diagrams.

Example:

o In the User Management module, the design may include a class for User, with
attributes like username, password, and email, and methods like register(), login(),
and updateProfile().

Creating an Architectural Design

Software architecture

Software Architecture refers to the high-level structuring of a software system. It defines the system's components, their interactions, and the principles guiding its design. A well-designed architecture helps ensure that the software system is scalable, maintainable, efficient, and secure.

Key Principles of Software Architecture

1. Separation of Concerns:

o Dividing a system into distinct sections, each responsible for a specific part of the
functionality, allows for better organization and easier maintenance.

2. Modularity:

o Breaking the system into smaller, independent modules or components, so that each one can be developed, tested, and maintained separately.

3. Scalability:

o Designing the system to handle increased load by adding resources without a
complete redesign (vertical or horizontal scaling).

4. Reusability:

o Creating components or services that can be reused in different parts of the system or even in different projects.

5. Maintainability:

o Ensuring the system can be easily modified to fix bugs, add new features, or
improve performance without major disruptions.

6. Flexibility:

o Designing the system to accommodate changes in technology, business needs, or customer requirements over time.

7. Security:

o Ensuring the system is protected from unauthorized access, data breaches, and
other vulnerabilities.

Types of Software Architecture Styles

1. Layered (N-tier) Architecture:

o Divides the system into layers, where each layer only communicates with the layer
directly below or above it.

o Example: Web applications with a presentation layer, business logic layer, and
data access layer.

2. Client-Server Architecture:

o Divides the system into two main components: clients that request services and
servers that provide them.

o Example: Web browser (client) and web server.

3. Microservices Architecture:

o Breaks down the system into small, independently deployable services, each
focused on a single business capability.

o Example: E-commerce systems where payment, shipping, and inventory are
separate services.

4. Event-Driven Architecture:

o Uses events to trigger communication between components or services. Components are decoupled and respond to events asynchronously.

o Example: Stock market platforms where price changes (events) trigger updates in
trading systems.

5. Service-Oriented Architecture (SOA):

o Uses services as the main building blocks, where each service provides a specific
business function, and services communicate via standardized interfaces (e.g.,
SOAP, REST).

o Example: Enterprise applications where different systems (HR, CRM, Finance) communicate through services.

6. Peer-to-Peer (P2P) Architecture:

o All nodes (peers) in the system are both consumers and providers of services.
There's no centralized server.

o Example: File-sharing systems like BitTorrent.

Data Design

Data Design refers to the process of defining the structure, organization, and
management of data within a system. It ensures that the data supports the system's functional
and performance requirements while being easy to maintain, scalable, and secure.

Key Aspects of Data Design

1. Data Modeling:

o Conceptual Data Model:

 High-level representation focusing on entities, attributes, and relationships.


 Example: Entity-Relationship Diagram (ERD).

o Logical Data Model:

 Detailed representation showing the organization of data without considering physical implementation.

 Includes primary keys, foreign keys, and relationships.

o Physical Data Model:

 Actual implementation of the logical model in a database management system (DBMS).

 Includes tables, columns, data types, and indexes.

2. Normalization:

o Process of organizing data to minimize redundancy and improve integrity.

o Forms:

 1NF (First Normal Form): Eliminate duplicate columns and ensure atomicity.

 2NF (Second Normal Form): Remove partial dependencies.

 3NF (Third Normal Form): Remove transitive dependencies.

3. Data Storage and Access:

o Define how data is stored (e.g., relational databases, NoSQL, flat files).

o Optimize data retrieval using indexing and caching.

4. Data Integrity:

o Ensuring data accuracy and consistency through constraints:

 Entity Integrity: Each table has a primary key.

 Referential Integrity: Foreign keys match primary keys in referenced tables.

 Domain Integrity: Data types and constraints on column values.

5. Security:

o Implement access control, encryption, and audit mechanisms to protect data.

Example: Library Management System

Conceptual Data Model

Entities:

1. Book: Attributes - BookID, Title, Author, Genre.

2. Member: Attributes - MemberID, Name, Email, Phone.

3. Loan: Attributes - LoanID, IssueDate, DueDate, ReturnDate.

Relationships:

 A Member can borrow multiple Books (1:M).

 A Loan records the borrowing of a Book by a Member.

Logical Data Model

Tables:

1. Books:

o BookID (PK), Title, Author, Genre.

2. Members:

o MemberID (PK), Name, Email, Phone.

3. Loans:

o LoanID (PK), BookID (FK), MemberID (FK), IssueDate, DueDate, ReturnDate.
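The logical model above can be exercised directly. The sketch below builds the three tables in an in-memory SQLite database using Python's sqlite3 module; the sample book, member, and loan rows are invented purely for illustration:

```python
import sqlite3

# In-memory database; table and column names follow the logical model above.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce referential integrity
conn.executescript("""
CREATE TABLE Books   (BookID   INTEGER PRIMARY KEY, Title TEXT, Author TEXT, Genre TEXT);
CREATE TABLE Members (MemberID INTEGER PRIMARY KEY, Name TEXT, Email TEXT, Phone TEXT);
CREATE TABLE Loans (
    LoanID    INTEGER PRIMARY KEY,
    BookID    INTEGER REFERENCES Books(BookID),
    MemberID  INTEGER REFERENCES Members(MemberID),
    IssueDate TEXT, DueDate TEXT, ReturnDate TEXT
);
""")
conn.execute("INSERT INTO Books VALUES (1, 'Dune', 'Frank Herbert', 'Sci-Fi')")
conn.execute("INSERT INTO Members VALUES (1, 'Asha', 'asha@example.com', '555-0101')")
conn.execute("INSERT INTO Loans VALUES (1, 1, 1, '2024-01-02', '2024-01-16', NULL)")
```

The foreign keys in Loans realize the relationships: each loan row links exactly one book to one member.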

Architectural styles and patterns

Architectural styles and patterns are fundamental concepts in software engineering that provide standardized solutions to common design problems. They are high-level strategies that define the structure, interactions, and organization of software systems, providing templates and guidelines for organizing components and ensuring scalability, maintainability, and performance. Here is an overview of some key architectural styles and patterns.

Architectural Styles

1. Layered Architecture:

o Description: Divides the system into layers, each with a specific responsibility.

o Example: Presentation Layer, Business Logic Layer, Data Access Layer.

o Use Case: Enterprise applications, web applications.

o Advantages:

 Separation of concerns.

 Easier maintenance and scalability.

o Disadvantages:

 Performance overhead due to multiple layers.

2. Client-Server Architecture:

o Description: Separates the system into clients (requesters) and servers (responders).

o Example: Web browsers (clients) and web servers.

o Use Case: Web applications, email systems.

o Advantages:

 Centralized control.

 Scalability by adding more clients or servers.

o Disadvantages:

 Single point of failure (server).

 Network dependency.

3. Event-Driven Architecture:

o Description: Uses events to trigger communication between components.

o Example: Publish-subscribe systems, real-time notifications.

o Use Case: IoT systems, banking systems.

o Advantages:

 High responsiveness.

 Decouples components.

o Disadvantages:

 Complexity in event management.

 Hard to debug.

4. Microservices Architecture:

o Description: Divides the system into small, independent services communicating via APIs.

o Example: E-commerce platform (payment, product catalog, order management as separate services).

o Use Case: Large-scale distributed systems.

o Advantages:

 Independent deployment and scalability.

 Technology agnostic.

o Disadvantages:

 Increased complexity in communication and deployment.

 Requires robust monitoring.

5. Pipe and Filter Architecture:

o Description: Data flows through a series of processing components (filters)
connected by pipes.

o Example: UNIX shell pipelines, compilers.

o Use Case: Data processing applications.

o Advantages:

 Easy to add or replace filters.

 Reusability of filters.

o Disadvantages:

 Performance bottlenecks in the pipeline.

 Hard to debug errors.
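A pipe-and-filter flow can be sketched with Python generators, where each filter consumes the previous stage's output; the particular filters below (drop blank lines, uppercase) are arbitrary examples:

```python
def read_lines(text):
    """Source: yields one line at a time."""
    yield from text.splitlines()


def strip_blank(lines):
    """Filter 1: drops blank lines."""
    return (ln for ln in lines if ln.strip())


def to_upper(lines):
    """Filter 2: uppercases each line."""
    return (ln.upper() for ln in lines)


def pipeline(text):
    # Each stage consumes the previous stage's output,
    # like pipes connecting filters in a UNIX shell.
    return list(to_upper(strip_blank(read_lines(text))))
```

Filters can be added, removed, or reordered without touching the others, which is the main advantage listed above.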

Architectural Patterns

1. Model-View-Controller (MVC):

o Description: Separates the application into three components:

 Model (business logic and data),

 View (UI),

 Controller (input processing).

o Example: Web frameworks like Django, Ruby on Rails.

o Use Case: Interactive web applications.

o Advantages:

 Separation of concerns.

 Supports multiple views for the same data.

o Disadvantages:

 Increased complexity in connecting components.
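A stripped-down MVC sketch in Python (the task-list domain and class names are invented for illustration):

```python
class TaskModel:
    """Model: holds the data and business rules."""

    def __init__(self):
        self.tasks = []

    def add(self, title):
        self.tasks.append(title)


def render(tasks):
    """View: presentation only, no business logic."""
    return "\n".join(f"- {t}" for t in tasks)


class TaskController:
    """Controller: turns user input into model updates, then refreshes the view."""

    def __init__(self, model):
        self.model = model

    def handle_add(self, title):
        self.model.add(title)
        return render(self.model.tasks)
```

Because the view is a plain function over the model's data, a second view (say, HTML output) could be added without changing the model or controller.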

2. Repository Pattern:

o Description: Centralizes data management with a repository that mediates between the application and the database.
o Example: Data Access Object (DAO) in Java applications.

o Use Case: Systems with complex data queries.

o Advantages:

 Decouples business logic from data logic.

 Easier to switch data sources.

o Disadvantages:

 Extra abstraction layer can add overhead.
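A minimal Python sketch of the Repository pattern, assuming a simple in-memory store in place of a real database:

```python
class InMemoryUserRepository:
    """Mediates between business logic and the storage mechanism."""

    def __init__(self):
        self._users = {}

    def add(self, user_id, name):
        self._users[user_id] = name

    def get(self, user_id):
        return self._users.get(user_id)


def greet(repo, user_id):
    # Business logic depends only on the repository's interface,
    # not on how the data is actually stored.
    name = repo.get(user_id)
    return f"Hello, {name}" if name else "Unknown user"
```

Swapping in a database-backed repository with the same `add`/`get` methods would leave `greet()` unchanged, which is the decoupling advantage noted above.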

3. Singleton Pattern:

o Description: Restricts a class to a single instance and provides a global point of access.

o Example: Logging service.

o Use Case: When exactly one instance is required, like a configuration manager.

o Advantages:

 Controlled access to the instance.

o Disadvantages:

 Harder to test and may lead to anti-patterns.
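In Python, the Singleton pattern is often sketched by overriding `__new__`; the Logger example below is one common illustrative form:

```python
class Logger:
    _instance = None

    def __new__(cls):
        # Create the single instance on first use; reuse it afterwards.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.messages = []
        return cls._instance

    def log(self, msg):
        self.messages.append(msg)
```

Every call to `Logger()` returns the same object, so all parts of the program share one log.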

4. Observer Pattern:

o Description: Defines a one-to-many dependency, where changes in one object notify dependent objects.

o Example: Event listeners in GUI frameworks.

o Use Case: Notification systems.

o Advantages:

 Decouples subject and observers.

o Disadvantages:

 Can lead to performance issues with many observers.
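A small Python sketch of the Observer pattern, using plain callables as observers (the "price-drop" event is an invented example):

```python
class Subject:
    def __init__(self):
        self._observers = []

    def attach(self, callback):
        self._observers.append(callback)

    def notify(self, event):
        # One change fans out to every registered observer.
        for callback in self._observers:
            callback(event)


received = []
subject = Subject()
subject.attach(received.append)                  # observer 1: record the event
subject.attach(lambda e: received.append(e.upper()))  # observer 2: record it uppercased
subject.notify("price-drop")
```

The subject knows nothing about what its observers do with the event, which is the decoupling advantage listed above.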

5. Builder Pattern:
o Description: Constructs complex objects step by step.

o Example: Building a query in SQL or constructing a UI component.

o Use Case: Complex object creation with numerous configurations.

o Advantages:

 Increases code readability.

 Allows reusability of construction code.

o Disadvantages:

 Can become complex if the number of configurations grows.
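The Builder pattern's step-by-step construction can be sketched as a chainable query builder in Python (a simplified illustration, not a real SQL library):

```python
class QueryBuilder:
    """Builds a SELECT statement step by step."""

    def __init__(self, table):
        self._table = table
        self._columns = ["*"]
        self._wheres = []

    def columns(self, *cols):
        self._columns = list(cols)
        return self  # returning self allows method chaining

    def where(self, condition):
        self._wheres.append(condition)
        return self

    def build(self):
        sql = f"SELECT {', '.join(self._columns)} FROM {self._table}"
        if self._wheres:
            sql += " WHERE " + " AND ".join(self._wheres)
        return sql
```

Each configuration step is optional and independent, so complex objects can be assembled readably.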

Architectural design

Architectural design defines the high-level structure of a software system, showing its components, their relationships, and how they interact. It focuses on organizing the system into modules or subsystems and ensuring that the architecture aligns with the functional and non-functional requirements.

Example Scenario: E-commerce Website

System Overview: An e-commerce platform where customers can browse products, place orders,
and make payments.

Architecture Design: Layered Architecture

Layers:

1. Presentation Layer: Handles user interface and user interactions.

2. Business Logic Layer: Contains the core functionality, like processing orders.

3. Data Access Layer: Manages interactions with the database.

4. Database: Stores persistent data, such as products, orders, and users.

Components and Interactions:

1. User Interface (UI):

o Accessible via a web browser or mobile app.

o Enables users to browse products, add items to the cart, and place orders.

2. Business Logic:

o Handles order processing, payment validation, and inventory updates.

3. Database:

o Stores product catalog, user data, order history, and payment records.

4. External Services:

o Payment Gateway for processing payments.

o Shipping Service for order delivery.

Example Workflow:

1. User Action: The customer adds a product to the cart using the UI.

2. Business Logic: The system validates the stock and calculates the price.

3. Data Access: The system retrieves product details and inventory data from the database.

4. External Service: Once the customer places the order, the payment gateway processes
the payment.

5. Response: The system confirms the order and updates the inventory.

Conceptual model of UML


A conceptual model of UML (Unified Modeling Language) describes the basic building
blocks of UML and how they interact. It focuses on concepts rather than implementation,
serving as a blueprint for understanding the structure of a system and its relationships.

Key Elements of the UML Conceptual Model

1. Basic Building Blocks:

o Things: The primary elements of a model.

 Structural Things: Class, Interface, Component, Node.

 Behavioral Things: Use case, Interaction, State machine.

 Grouping Things: Package (for organizing elements).

 Annotational Things: Notes (to add comments or constraints).

o Relationships: Associations, dependencies, generalizations, realizations.

o Diagrams: Visual representations of things and relationships.

2. Common Mechanisms:

o Specifications: Underlying details of an element.

o Adornments: Visual cues like labels or stereotypes.

o Common Divisions: Divide systems into abstract/concrete or static/dynamic aspects.

o Extensibility Mechanisms: Stereotypes, tagged values, and constraints.

3. UML Diagrams: UML divides its diagrams into two main categories:

o Structural Diagrams: Class, Object, Component, Deployment, and Package diagrams.

o Behavioral Diagrams: Use Case, Sequence, Activity, State, Communication, Interaction Overview, and Timing diagrams.

Example: Online Banking System

Conceptual Model Components:

1. Structural Things:

o Class: Customer, Account, Transaction.


o Interface: PaymentProcessor.

o Component: Banking App, ATM System.

o Node: Server, Client Device.

2. Relationships:

o Association: Customer ↔ Account.

o Dependency: Banking App → PaymentProcessor.

3. Diagrams:

o Class Diagram: Shows the structure of the system.

o Use Case Diagram: Depicts functionalities like "Transfer Money" and "Check
Balance."

o Sequence Diagram: Illustrates interactions between the customer, banking app,


and server.

Basic Structural Modeling

Basic Structural Modeling in UML refers to the process of describing the static aspects of
a system, such as its components, relationships, and organization. This focuses on the "things"
within the system, like classes, objects, and their connections.

Key Concepts of Basic Structural Modeling

1. Class: Represents a blueprint for objects, containing attributes and operations.

2. Object: A specific instance of a class with actual data.

3. Interface: Defines a contract or a set of methods that a class must implement.

4. Component: A physical, replaceable part of the system, such as a software module.

5. Node: Represents a physical computing resource, like a server or device.

6. Relationships:

o Association: A link between two or more classes.

o Aggregation: A whole-part relationship (e.g., a Library contains Books).

o Composition: A strong form of aggregation where parts cannot exist without the
whole.

o Generalization: Inheritance (e.g., a Dog is a type of Animal).

o Dependency: Indicates one class depends on another (e.g., uses it temporarily).

Example: Library Management System

1. Class Diagram

A basic structure showing classes and their relationships.

Classes:

 Book: Attributes: title, author, ISBN.

 Member: Attributes: name, memberID, email.

 Library: Attributes: name, location.

 Loan: Attributes: loanID, issueDate, returnDate.

Relationships:

 Library aggregates Books.

 Members borrow Books (association).

 Loan is associated with both Member and Book.

2. Component Diagram

Describes how components interact.

Components:

 Library Management System (LMS)

 Database

 Notification Service

Relationship:

 LMS interacts with the Database to fetch/update book and member details.

 Notification Service sends reminders for due dates.


Structural Modeling Benefits:

1. Clarifies the relationships and static structure of the system.

2. Serves as a blueprint for implementation.

3. Ensures modularity and maintainability.

Class diagram

A class diagram is a type of static structure diagram in UML (Unified Modeling Language) that describes the structure of a system by showing its classes, attributes, operations, and the relationships between objects.

Here’s an example scenario:

Scenario: Online Library Management System

1. Classes:
o Book
o Member
o Librarian
o Loan
2. Class Diagram Representation:

Below is a textual representation of the class diagram:

+---------------------+ +---------------------+ +-----------------+
| Book | | Member | | Librarian |
+---------------------+ +---------------------+ +-----------------+
| - bookID: int | | - memberID: int | | - librarianID: int |
| - title: string | | - name: string | | - name: string |
| - author: string | | - email: string | | - email: string |
| - isAvailable: bool | | - phone: string | | |
+---------------------+ +---------------------+ +-----------------+
| + addBook() | | + registerMember() | | + manageLoans() |
| + removeBook() | | + borrowBook() | | + addBook() |
| | | + returnBook() | | |
+---------------------+ +---------------------+ +-----------------+

+-----------------+
| Loan |
+-----------------+
| - loanID: int |
| - issueDate: Date|
| - returnDate: Date|
| - bookID: int |
| - memberID: int |
+-----------------+
| + issueLoan() |
| + returnLoan() |
+-----------------+

Relationships:
1. **Member** can **borrow many books**, so there is a one-to-many relationship between
Member and Loan.
2. **Librarian** can **manage multiple loans** and add/remove books.
3. Each **Loan** is associated with one **Book** and one **Member**.
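The class diagram above can be mapped to code. The Python sketch below keeps only a few of the attributes and shows the association of each Loan with exactly one Book and one Member (the sample data is invented):

```python
class Book:
    def __init__(self, book_id, title, author):
        self.book_id = book_id
        self.title = title
        self.author = author
        self.is_available = True


class Member:
    def __init__(self, member_id, name):
        self.member_id = member_id
        self.name = name


class Loan:
    # Each Loan links exactly one Book to one Member, as in the diagram.
    def __init__(self, loan_id, book, member):
        self.loan_id = loan_id
        self.book = book
        self.member = member
        book.is_available = False  # borrowing marks the book unavailable


book = Book(1, "Clean Code", "Robert C. Martin")
member = Member(7, "Asha")
loan = Loan(42, book, member)
```

The one-to-many relationship between Member and Loan falls out naturally: many Loan objects may reference the same Member.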

Sequence diagram

A sequence diagram in UML represents how objects interact in a particular scenario of a use case. It shows the sequence of messages exchanged between objects and the order in which these interactions occur.

Example Scenario: Online Library Management System - Borrowing a Book

Actors/Objects:

1. Member

2. Library System

3. Librarian

4. Database

Steps in the Process:

1. The Member requests to borrow a book.

2. The Library System checks the book's availability in the database.

3. If the book is available, the Librarian confirms the request.

4. The Library System records the loan in the database.

5. The Member receives confirmation.

Sequence Diagram Description:

Here is the textual representation of the sequence diagram:

Member Library System Librarian Database

| | | |

|---- Request Borrow(BookID) --------->| |

| |---- Check Availability(BookID) ------>|

| | |<-- Availability---|

| |<--- Book Available/Unavailable -------|

| |---- Confirm Loan ------------------->|

| | |---- Record Loan -->|

| | |<-- Loan Recorded --|

|<--- Loan Confirmed ------------------| |

Key Points:

1. The Member initiates the borrowing process.

2. The Library System acts as the intermediary and handles checking the book's
availability and updating the loan status.

3. The Librarian confirms the loan after verifying availability.

4. The Database stores book availability and loan records.


Collaboration diagrams
Collaboration diagrams, also known as communication diagrams in UML, focus on the
interactions between objects in a system. They emphasize the structural organization of objects
that send and receive messages.

Key Elements of a Collaboration Diagram:

1. Objects: Represented as rectangles with their names.

o Format: ObjectName: ClassName.

2. Links: Lines connecting objects, representing relationships.

3. Messages: Arrows with sequence numbers on the links, showing the flow of messages
between objects.

Example Scenario: Online Shopping - Place Order

Actors/Objects:

1. Customer

2. ShoppingCart

3. OrderSystem

4. PaymentGateway

5. Database

Steps:

1. The Customer adds items to the ShoppingCart.

2. The Customer places the order.

3. The OrderSystem verifies the order details with the Database.

4. The PaymentGateway processes the payment.

5. The OrderSystem confirms the order to the Customer.

Collaboration Diagram (Textual Representation):

Customer ShoppingCart OrderSystem Database PaymentGateway

| | | | |

1: addItem() --------->| | | |

| | | | |

2: placeOrder() ------>| 3: verifyOrder() ------------->| |

| | |<-- confirm ---| |

| | 4: processPayment() --------->| |

| | | |<-- paymentStatus--|

|<-- orderConfirmed ----------------| | |

Key Points in the Diagram:

1. Message Flow: Messages like addItem(), placeOrder(), and processPayment() have sequence numbers to show their order.

2. Object Relationships: The links show direct relationships between the objects involved.

3. Interactions: Each message represents an interaction or method invocation.

Use case Diagrams

A Use Case Diagram is a type of UML diagram that visualizes the functional requirements
of a system by showing its actors, use cases, and their relationships. It provides a high-level view
of what the system does from the perspective of its users.

Key Elements of a Use Case Diagram

1. Actors: Represent the roles interacting with the system (human users or other systems).

o Primary Actor: Directly interacts with the system.

o Secondary Actor: Supports the primary actor or system.

2. Use Cases: Represent the functionalities or services the system provides.

3. System Boundary: Encapsulates all the use cases within the system.

4. Relationships:

o Association: Link between an actor and a use case.

o Include: A use case that is always performed as part of another use case.

o Extend: A use case that adds optional behavior to another use case.

o Generalization: Shows inheritance between use cases or actors.

Example: Online Shopping System

Actors:

 Customer: Browses and purchases products.

 Admin: Manages inventory and user accounts.

 Payment Gateway: Processes payments.

Use Cases:

 Browse Products

 Add to Cart

 Checkout

 Make Payment

 Manage Products

 Generate Reports

Relationships:

 The Customer is associated with Browse Products, Add to Cart, and Checkout.

 Checkout includes Make Payment.

 The Admin is associated with Manage Products and Generate Reports.

 Make Payment extends to interact with the Payment Gateway.

Textual Representation:

Actors:

- Customer

- Admin

- Payment Gateway

Use Cases:

1. Browse Products

2. Add to Cart

3. Checkout

- Includes: Make Payment

4. Make Payment (interacts with Payment Gateway)

5. Manage Products

6. Generate Reports

Example Use Case Diagram Description:

+-------------------------------------+

| Online Shopping System |

| |

| (Customer) -------- (Browse Products) |

| | | |

| |-------- (Add to Cart) ----------- |

| | | Includes |

| --------> (Checkout) -----------------|

| | Extends |

| (Admin) ----- (Manage Products) |

| | |

| ------> (Generate Reports) |

+--------------------------------------------+

Component Diagram

A Component Diagram in UML visualizes the physical and logical components of a system and their relationships. It focuses on how components interact to form a complete system, including software modules, hardware devices, and external systems.

Key Elements of a Component Diagram

1. Components:

o Represented as rectangles with a small rectangle symbol at the top.

o Examples: Software modules, databases, APIs, or external systems.

2. Interfaces:

o Represent the points of interaction between components.

o Noted by a circle (provided interface) or a semicircle (required interface).

3. Relationships:

o Dependency: One component depends on another for functionality.

o Association: A direct relationship between components.

o Realization: A component implements an interface.

4. Nodes:
o Physical hardware devices that host components.

Example: Online Shopping System

Components:

1. Web Application:

o Provides a user interface for browsing and ordering products.

2. Payment Gateway:

o Handles online payment processing.

3. Database:

o Stores product, user, and order information.

4. Inventory Service:

o Manages product stock levels.

Relationships:

1. The Web Application depends on the Payment Gateway for processing payments.

2. The Web Application interacts with the Database to retrieve product details.

3. The Web Application communicates with the Inventory Service to check stock
availability.

Textual Representation of the Component Diagram:

+---------------------------------------+

| Web Application |

| - User Interface |

| - Shopping Cart |

| - Checkout |

+---------------------------------------+

| |
| v

| +-------------------+

| | Payment Gateway |

| | - Process Payment |

| +-------------------+

+------------------+ +--------------------+

| Database |<------>| Inventory Service |

| - User Data | | - Stock Management |

| - Product Info | | |

| - Order Details | +--------------------+

+------------------+

Description of the Relationships:

1. The Web Application depends on the Payment Gateway for financial transactions.

2. The Web Application queries the Database for user and product data.

3. The Inventory Service ensures products are available before confirming an order.

Unit IV
Testing Strategies

Definition - Software Testing :

 It is the process of creating, implementing, and evaluating tests.


 Testing measures software quality
 Testing can find faults. When they are removed, software quality is improved.
 Testing is executing a program with the intent of finding errors, faults, and failures.
 IEEE terminology: an examination of the behavior of a program by executing it on
sample data sets.
 Testing is a process of executing a program with the intent of finding an error.
 A good test case is one that has a high probability of finding an as yet undiscovered error.
 Testing is a process of exercising or evaluating a system component by manual or
automated means to verify that it satisfies specified requirements.

Why testing is important?

 A China Airlines Airbus A300 crashed due to a software bug on April 26, 1994,
killing 264 people.
 Software bugs can potentially cause monetary and human loss, history is full of such
examples

 In 1985, Canada's Therac-25 radiation therapy machine malfunctioned due to a software
bug and delivered lethal radiation doses to patients, leaving 3 people dead and critically
injuring 3 others.
 In April of 1999, a software bug caused the failure of a $1.2 billion military satellite
launch, the costliest accident in history
 In May 1996, a software bug caused the bank accounts of 823 customers of a major
U.S. bank to be credited with 920 million US dollars
 As you see, testing is important because software bugs could be expensive or even
dangerous
 As Paul Ehrlich puts it, "To err is human, but to really foul things up you need a
computer."

What is the objective of software testing?

Software testing has many objectives.


 Software testing helps to make sure that it meets all the requirement it was supposed to
meet.
 It will bring out all the errors, if any, while using the software.
 Software testing helps to confirm that the software being tested is a complete
success.

 Software testing helps to give a quality certification that the software can be used by the
client immediately.
 It ensures quality of the product.

A Strategic approach to software testing

 Many software errors are eliminated before testing begins by conducting effective technical
reviews
 Testing begins at the component level and works outward toward the integration of the
entire computer-based system.
 Different testing techniques are appropriate at different points in time.
 The developer of the software conducts testing and may be assisted by independent test
groups for large projects.
 Testing and debugging are different activities.
 Debugging must be accommodated in any testing strategy.

Verification and Validation

 Make a distinction between verification (are we building the product right?) and validation
(are we building the right product?)
 Software testing is only one element of Software Quality Assurance (SQA)
 Quality must be built in to the development process, you can’t use testing to add quality after
the fact

Organizing for Software Testing

 The role of the Independent Test Group (ITG) is to remove the conflict of interest inherent
when the builder is testing his or her own product.
 Misconceptions regarding the use of independent testing teams
o The developer should do no testing at all
o Software is tossed “over the wall” to people to test it mercilessly
o Testers are not involved with the project until it is time for it to be tested
 The developer and ITG must work together throughout the software project to ensure that
thorough tests will be conducted

Software Testing Strategy

Types of Testing

• Black Box Testing


• White Box Testing
• Grey Box Testing

Black Box Testing


This testing is termed black-box testing because the tester views the program as a black
box; that is, the tester is completely unconcerned with the internal behavior and structure of
the program.

White Box Testing

The tester views the internal behavior and structure of the program. The testing strategy
permits one to examine the internal structure of the program.

Grey Box Testing

Combination of Black box & White box.

Critical or Complex Modules can be tested using White box testing while the rest of the
application is tested using Black Box Testing.
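The distinction can be illustrated with a small sketch. The grade function and its pass mark below are invented for illustration: the black-box tests are derived purely from the specification (inputs, expected outputs, and boundary values), while the white-box test targets a specific branch in the code.

```python
# Hypothetical function under test: converts a mark (0-100) to Pass/Fail.
def grade(mark):
    if mark < 0 or mark > 100:
        raise ValueError("mark out of range")
    return "Pass" if mark >= 50 else "Fail"

# Black-box tests: written from the specification alone, using
# boundary values around the pass mark and the valid range.
assert grade(50) == "Pass"   # lowest passing mark
assert grade(49) == "Fail"   # highest failing mark
assert grade(0) == "Fail"    # edge of the valid range
assert grade(100) == "Pass"  # edge of the valid range

# White-box test: written by looking at the code's structure; it
# exercises the range-check branch explicitly.
try:
    grade(101)
    assert False, "expected ValueError for an out-of-range mark"
except ValueError:
    pass
```

Grey-box testing would mix the two: specification-driven tests overall, plus structure-driven tests for the critical branches.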

Levels of Testing

• Unit Testing
• Integration Testing
• System Testing (FURPS testing)
• Acceptance Testing
• Regression Testing

Unit Testing

 Lowest Level of Testing


 Individual units of the software are tested in isolation from other parts of the program
 It is a testing activity which has to be performed by development team during coding /
after coding is completed.
 What can be a UNIT?
• Screen components.
• Screen / Program.
• Back-end related to a Screen.
• Back-end and the Screen.
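As a sketch, a unit test written with Python's standard unittest framework might exercise one such unit, say a back-end price-calculation function, in isolation from the rest of the program (the function itself is hypothetical):

```python
import unittest

# Hypothetical unit under test: a back-end price calculation.
def cart_total(prices, discount_percent=0):
    subtotal = sum(prices)
    return round(subtotal * (1 - discount_percent / 100), 2)

class CartTotalTest(unittest.TestCase):
    # Each test checks the unit on its own, with no other
    # parts of the program involved.
    def test_total_without_discount(self):
        self.assertEqual(cart_total([10.0, 5.5]), 15.5)

    def test_total_with_discount(self):
        self.assertEqual(cart_total([100.0], discount_percent=10), 90.0)

    def test_empty_cart(self):
        self.assertEqual(cart_total([]), 0)

if __name__ == "__main__":
    unittest.main(argv=["cart_total_tests"], exit=False)
```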

Integration Testing

• Intermediate level of testing


• Progressively unit tested software components are integrated and tested until the
software works as a whole
• Tests that evaluate the interaction and consistency of interacting components

Types of Integration Testing

 Big Bang Testing


 Bottom-Up Testing
 Top-Down Testing

Big Bang Testing

• A type of integration test in which the software components of an application are
combined all at once into an overall system.
• According to this approach, every module is first unit tested in isolation from the
other modules. After each module is tested, all of the modules are integrated together
at once.

Bottom-Up Testing

• Begins construction and testing with atomic modules, i.e., modules at the lowest level
in the program structure
• The terminal modules are tested in isolation first; then the next set of higher-level
modules is tested with the previously tested lower modules.

Top-Down Testing

• The program is merged and tested from the top down


• Modules are integrated by moving downward through the control hierarchy, beginning
with the main control module.
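A minimal sketch of top-down integration, assuming a hypothetical order-handling main module whose lower-level inventory module is not yet integrated; a stub stands in for it until the real module is ready:

```python
# Stub standing in for the unfinished lower-level inventory module:
# it returns a fixed, predictable stock level.
def stub_stock_level(product_id):
    return 5

# Main control module under test. The dependency is passed in so the
# stub can later be swapped for the real lower-level module.
def can_order(product_id, quantity, stock_level=stub_stock_level):
    return quantity <= stock_level(product_id)

# Integration test of the top-level logic against the stub.
assert can_order("P1", 3) is True
assert can_order("P1", 9) is False
```

Bottom-up testing is the mirror image: the lowest-level modules are tested first, and temporary driver code (rather than stubs) calls them.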

System Testing

• Testing conducted on a complete, integrated system to evaluate the system’s compliance


with its specified requirements
• In which the complete software build is made and tested to show that all requirements
are met.
• Here all applicable kinds of testing (functionality, usability, installation,
performance, security, etc.) are expected to be carried out.

Activities of The Testing Team

 Functionality Testing.
 Usability Testing.
 Reliability Testing.
 Performance Testing.
 Scalability Testing.
Functionality Testing

This testing is done to ensure that all the functionalities defined in the requirements are
being implemented correctly.

Usability Testing

The catch phrase “User Friendliness” can be achieved through Usability Testing.

• Ease Of Operability
• Communicativeness
• This test is done keeping in mind the kind of end users who are going to use the product.
Reliability Testing

• These Tests are carried out to assess the system’s capability to handle various scenarios
like failure of a hard drive on the web, database server or communication link failure.

• Software reliability is defined in statistical terms as “the probability of failure-free
operation of a computer program in a specified environment for a specified time”.
Performance Testing

This test is done to ensure that the software/product works the way it is supposed to under
various conditions of load (load testing), stress (stress testing), and volume (volume testing).

Volume Testing

 The purpose is to find weaknesses in the system with respect to its handling of large
amounts of data during short time periods.
Stress Testing

 The purpose is to show that the system has the capacity to handle large numbers of
processing transactions during peak periods.
Performance Testing

• Can be accomplished in parallel with Volume and Stress testing because we want
to assess performance under all conditions.
• System performance is generally assessed in terms of response times and
throughput rates under differing processing and configuration conditions.
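A minimal sketch of measuring response time and throughput, using an invented stand-in operation; a real performance test would drive the deployed system under realistic load:

```python
import time

def operation():
    # Stand-in for a real request handler or transaction.
    return sum(range(1000))

def measure(runs=1000):
    # Time `runs` repetitions and derive response time and throughput.
    start = time.perf_counter()
    for _ in range(runs):
        operation()
    elapsed = time.perf_counter() - start
    return {
        "avg_response_ms": (elapsed / runs) * 1000,
        "throughput_per_s": runs / elapsed if elapsed > 0 else float("inf"),
    }

stats = measure()
# A performance test would then compare these figures against a stated
# requirement, e.g. "average response time below 50 ms".
print(stats)
```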

Scalability Testing

These tests assess the degree to which application/system loads and processes can be
distributed across additional servers and clients.

Acceptance Testing

• Acceptance Testing is defined as the process of formal testing conducted to determine


whether or not the system satisfies its acceptance criteria and to enable the customer to
determine whether or not to accept the system
• The purpose of this testing is to ensure that the customer's requirements and objectives
are met and that all the components are correctly included in the customer package.
Regression Testing

Regression testing is the process of testing changes to computer programs to make sure
that existing functionality still works with the new changes.
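A small sketch of the idea with an invented function: after a new feature (a member discount) is added, the original tests are re-run unchanged to confirm the older behaviour still works:

```python
# Hypothetical function: ticket pricing, recently changed to add
# a member discount.
def ticket_price(age, member=False):
    price = 0 if age < 5 else 8 if age < 18 else 12
    if member:          # the newly added feature
        price *= 0.9
    return round(price, 2)

def regression_suite():
    # Original tests, kept and re-run after every change.
    assert ticket_price(3) == 0
    assert ticket_price(10) == 8
    assert ticket_price(30) == 12

regression_suite()                            # old behaviour unchanged
assert ticket_price(30, member=True) == 10.8  # new behaviour also works
```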

Strategic issues

 Specify product requirements in a quantifiable manner before testing starts.
 Specify testing objectives explicitly.
 Identify categories of users for the software and develop a profile for each.
 Develop a test plan that emphasizes rapid cycle testing.
 Build robust software that is designed to test itself.
 Use effective formal reviews as a filter prior to testing.
 Conduct formal technical reviews to assess the test strategy and test cases.
 Develop a continuous improvement approach for the testing process.

Validation testing

The process of evaluating software during the development process or at the end of the
development process to determine whether it satisfies specified business requirements.
Validation Testing ensures that the product actually meets the client's needs. It can also
be defined as demonstrating that the product fulfills its intended use when deployed in an
appropriate environment.
It answers the question: Are we building the right product?
Validation Testing - Workflow:
Validation testing is best demonstrated using the V-Model. The software/product
under test is evaluated during this type of testing.
Activities:

 Unit Testing
 Integration Testing
 System Testing
 User Acceptance Testing

Verification and Validation

• Software testing is part of a broader group of activities called verification and validation
that are involved in software quality assurance
• Verification (Are the algorithms coded correctly?)
– The set of activities that ensure that software correctly implements a specific
function or algorithm

• Validation (Does it meet user requirements?)
– The set of activities that ensure that the software that has been built is traceable
to customer requirements

Alpha and Beta Testing

• Alpha testing
– Conducted at the developer’s site by end users
– Software is used in a natural setting with developers watching intently
– Testing is conducted in a controlled environment
• Beta testing
– Conducted at end-user sites
– Developer is generally not present
– It serves as a live application of the software in an environment that cannot be
controlled by the developer
– The end-user records all problems that are encountered and reports these to the
developers at regular intervals
After beta testing is complete, software engineers make software modifications and prepare for
release of the software product to the entire customer base.

The art of debugging

What is Debugging ?

Debugging happens as a result of testing. When a test case uncovers an error, debugging
is the process that causes the removal of that error.

The Debugging Process

Debugging is not testing, but it always occurs as a consequence of testing. The debugging
process will have one of two outcomes:

(1) The cause will be found, corrected and removed, or

(2) the cause will not be found.

Why is debugging difficult?

 The symptom and the cause are geographically remote.

 The symptom may disappear when another error is corrected.
 The symptom may actually be the result of non-errors (e.g., round-off inaccuracies).
 The symptom may be caused by a human error that is not easy to find.
 The symptom may be intermittent.
 The symptom may be due to causes that are distributed across a number of tasks
running on different processors.

Debugging Strategies

• Objective of debugging is to find and correct the cause of a software error


• Bugs are found by a combination of systematic evaluation, intuition, and luck
• Debugging methods and tools are not a substitute for careful evaluation based on a
complete design model and clear source code

• There are three main debugging strategies


– Brute force
– Backtracking
– Cause elimination

Brute Force

• Most commonly used and least efficient method


• Used when all else fails
• Involves the use of memory dumps, run-time traces, and output statements
• Leads many times to wasted effort and time

Backtracking

• Can be used successfully in small programs


• The method starts at the location where a symptom has been uncovered
• The source code is then traced backward (manually) until the location of the cause is
found
• In large programs, the number of potential backward paths may become unmanageably
large

Cause Elimination

The third approach to debugging, cause elimination, is manifested by induction or
deduction and introduces the concept of binary partitioning. This approach is also called
induction and deduction. Data related to the error occurrence are organized to isolate potential
causes. A "cause hypothesis" is devised, and the data are used to prove or disprove the
hypothesis. Alternatively, a list of all possible causes is developed and tests are conducted to
eliminate each. If initial tests indicate that a particular cause hypothesis shows promise, the
data are refined in an attempt to isolate the bug.
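The binary-partitioning idea can be sketched as a search over an ordered change history, assuming the earliest version is known to be good and the latest is known to be bad (all names here are illustrative):

```python
# Repeatedly halve the range of changes to isolate the first bad one.
# `is_buggy` stands in for re-running the failing test on a version;
# precondition: versions[0] is good and versions[-1] is bad.
def find_first_bad(versions, is_buggy):
    lo, hi = 0, len(versions) - 1
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if is_buggy(versions[mid]):
            hi = mid   # bug already present: search the earlier half
        else:
            lo = mid   # still good: search the later half
    return versions[hi]

history = list(range(1, 101))  # change numbers 1..100
# Suppose the bug was introduced in change 73:
assert find_first_bad(history, lambda v: v >= 73) == 73
```

Each probe eliminates half of the remaining candidate causes, so 100 changes need only about 7 test runs.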

Product metrics

Product metrics are quantitative measures used to evaluate the characteristics,


performance, and quality of a product. These metrics are crucial in assessing whether a product
meets its intended goals, user needs, and quality standards. They provide insights into areas such
as usability, functionality, performance, and reliability. Product metrics help product teams make
informed decisions, improve product development, and enhance the overall user experience.

Product metrics can be applied across various domains, such as software development,
hardware design, and consumer products. Below are the key categories and examples of product
metrics commonly used in different contexts.

Categories of Product Metrics

1. Functional Metrics
These metrics assess how well the product performs the tasks it is designed for. They
help evaluate the product's core functionality.

o Example Metrics:

 Feature Utilization: The percentage of users who use specific features of


the product.

 Functionality Completion: The percentage of functionality that is


implemented and working as intended.

 Error Rate: The frequency of functional failures or bugs in the product.

o Example: For a software application, the metric might track how often users
successfully complete a specific task (e.g., completing a purchase on an e-
commerce site).

2. Usability Metrics
These metrics focus on how easy and efficient the product is to use from a user’s
perspective. They help determine how user-friendly and intuitive the product is.

o Example Metrics:

 Time on Task: The amount of time a user spends to complete a specific


task using the product.

 Task Success Rate: The percentage of users who successfully complete a


task without encountering errors or issues.

 User Satisfaction: Typically measured through surveys or Net Promoter


Score (NPS) to understand how users feel about using the product.

 Learnability: How easily new users can learn to use the product.

o Example: For a website, a usability metric could measure how long it takes for a
new user to navigate through the site and make a purchase, reflecting how
intuitive the interface is.

3. Performance Metrics
These metrics measure the product's efficiency, speed, and responsiveness. They are
particularly important for digital products, such as websites or software applications.

o Example Metrics:

 Page Load Time: The time it takes for a webpage or application screen to
load.

 Response Time: The time it takes for the system to respond to user input.

 Throughput: The amount of data or number of transactions a system can


handle over a specific period.

o Example: For a cloud-based service, performance metrics could track how quickly
the system processes requests and how well it scales with an increasing number
of users.

4. Reliability Metrics
These metrics focus on how dependable and consistent the product is under normal use.
Reliability is often linked to product defects, downtime, or failures.

o Example Metrics:

 Mean Time Between Failures (MTBF): The average time between


product failures or crashes.

 Mean Time to Repair (MTTR): The average time it takes to fix a failure or
defect.

 Defect Density: The number of defects found per unit of product size (e.g.,
lines of code or components).

o Example: For a piece of hardware like a laptop, reliability metrics might track how
often the device fails over its lifecycle and how long it takes to repair issues.

5. Quality Metrics
These metrics measure how well the product meets predefined quality standards,
focusing on product defects and how often they occur.

o Example Metrics:

 Defect Count: The total number of defects or bugs discovered in the


product.

 Defect Severity: Categorization of defects based on their impact on


product functionality (e.g., critical, major, minor).

 Customer Complaints: The number of customer complaints or support


tickets related to product quality.

o Example: For a mobile app, quality metrics could include how many bugs are
reported by users after the app is released, and whether these bugs affect core
functionality.

6. Adoption Metrics
These metrics measure the extent to which users or customers are adopting the product,
reflecting its market acceptance and popularity.

o Example Metrics:

 Active Users: The number of users who interact with the product within a
given timeframe (e.g., daily, monthly active users).

 Customer Retention Rate: The percentage of customers who continue to


use the product over a specific period.

 Conversion Rate: The percentage of users who take a desired action, such
as signing up or making a purchase.

o Example: For a subscription-based service, adoption metrics might measure how


many new customers sign up each month and how many continue to use the
service after the first three months.

7. Customer Satisfaction Metrics


These metrics focus on how satisfied users are with the product and whether the
product meets their expectations.

o Example Metrics:

 Net Promoter Score (NPS): A score that measures customer loyalty based
on the likelihood of recommending the product to others.

 Customer Satisfaction Score (CSAT): A direct measure of user


satisfaction, typically gathered via surveys.

 Customer Effort Score (CES): Measures how easy it is for customers to


get their issues resolved or complete a specific action.

o Example: After using an e-commerce platform, users may be asked to rate their
satisfaction with the purchasing process on a scale of 1 to 10 (CSAT), or whether
they would recommend the platform to others (NPS).

8. Cost and Revenue Metrics
These metrics are focused on the financial aspects of the product, such as development
costs, operational costs, and revenue generation.

o Example Metrics:

 Cost Per Acquisition (CPA): The cost incurred to acquire a new customer.

 Customer Lifetime Value (CLTV): The total revenue generated from a


customer over their entire relationship with the product.

 Return on Investment (ROI): The ratio of net profit to the cost of


investment.

o Example: For a SaaS company, revenue metrics might measure how much
recurring income is generated per customer, and whether the costs to acquire
and support the customer exceed the revenue generated.
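Several of the reliability and quality metrics above (MTBF, MTTR, defect density) are simple averages and ratios. A sketch with made-up numbers:

```python
# Illustrative data (invented): observed uptimes between failures,
# repair times, and defect counts for a 12 KLOC product.
uptimes_hours = [120.0, 95.0, 145.0]
repair_hours = [2.0, 1.0, 3.0]
defects_found = 18
size_kloc = 12.0

mtbf = sum(uptimes_hours) / len(uptimes_hours)  # Mean Time Between Failures
mttr = sum(repair_hours) / len(repair_hours)    # Mean Time To Repair
availability = mtbf / (mtbf + mttr)             # fraction of time usable
defect_density = defects_found / size_kloc      # defects per KLOC

print(f"MTBF = {mtbf:.1f} h, MTTR = {mttr:.1f} h")
print(f"Availability = {availability:.3f}")
print(f"Defect density = {defect_density:.2f} defects/KLOC")
```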

Software Quality

Software quality is defined as conformance to explicitly stated functional and
performance requirements, documented development standards, and implicit characteristics.

A software product is said to be of good quality if it

• Meets the customer's requirements
• Is completed within the estimated time
• Is completed within the estimated cost

Metrics for analysis model


Metrics for analysis models are quantitative measures used to evaluate and assess the
quality, effectiveness, and efficiency of an analysis model in the software development process.
An analysis model helps in understanding the problem domain and defining requirements before
moving to the design and implementation phases. The metrics help ensure that the model is
accurate, complete, maintainable, and efficient.

Categories of Metrics for Analysis Models

1. Correctness Metrics
These metrics assess whether the analysis model correctly represents the problem
domain and the required functionality. It measures the model's ability to capture all user
requirements and business needs without errors.

o Example Metrics:

 Requirement Traceability: The degree to which each requirement is


linked to a corresponding model element (e.g., use case, class).

 Defect Density: The number of errors or defects found in the analysis


model relative to the size of the model.

 Consistency: Measures whether the analysis model is free from


contradictions or conflicting requirements.

o Example: In a system for managing hospital appointments, correctness would


measure if all user stories (e.g., scheduling, rescheduling, and cancelling
appointments) are accurately represented in the model.

2. Completeness Metrics
These metrics measure whether the analysis model includes all the necessary elements
and relationships to satisfy the problem requirements. A complete model ensures that no
essential detail or feature is overlooked.

o Example Metrics:

 Feature Coverage: The extent to which the analysis model addresses all
functional requirements.

 Requirement Coverage: Percentage of identified requirements covered by


the model.

 Model Size: The number of elements in the model (e.g., use cases, classes,
relationships), which can indicate the depth of the analysis.

o Example: In a banking system, completeness would measure if all features (such


as account creation, withdrawals, transfers, and loan applications) are present in
the analysis model.

3. Cohesion Metrics
These metrics evaluate how closely related the components of the analysis model are
within their respective contexts. A cohesive model ensures that related components are
logically grouped together.

o Example Metrics:

 Functional Cohesion: Measures the extent to which the elements of the


model are grouped according to their purpose.

 Data Cohesion: Ensures that data-related elements (e.g., attributes, data


entities) are logically organized.

 Internal Consistency: Ensures that internal model components work


together logically and without contradictions.

o Example: In an e-commerce system, a cohesive model would group features like


product catalog, shopping cart, and payment gateway under an "Online Shopping"
module.

4. Coupling Metrics
These metrics evaluate the degree to which different components in the analysis model
are dependent on each other. Low coupling is desirable because it means the components
are independent, making the model easier to maintain and modify.

o Example Metrics:

 Data Coupling: The degree to which data flow between components is


minimal and clear.

 Control Coupling: Measures the extent of dependency between
components in terms of control flow (e.g., use cases triggering other use
cases).

 Dependency Metrics: Measures how strongly the elements of the model


are interconnected. A high level of dependency means that changing one
component might affect several others.

o Example: In a library management system, if the "Book" and "Member" classes


are tightly coupled (i.e., if modifying a book-related function requires changes to
member-related functionality), the model might need to be refactored to reduce
coupling.

5. Understandability and Clarity Metrics


These metrics evaluate how easily the analysis model can be understood by stakeholders,
including developers, users, and analysts. The goal is to ensure that the model is clear and
easily communicable.

o Example Metrics:

 Clarity Index: A qualitative measure of how easily the model can be


understood based on how well it is documented and its structure.

 Diagram Complexity: The complexity of diagrams (e.g., use case, class,


sequence diagrams). A simple, well-structured diagram is more
understandable than a complex, cluttered one.

 Stakeholder Satisfaction: User or stakeholder feedback on how easy it is


to interpret the model.

o Example: A user interface (UI) design might be evaluated for clarity based on
whether stakeholders (e.g., business analysts, product owners) can easily
comprehend the system’s functionality from the analysis model.

6. Maintainability Metrics
These metrics evaluate how easily the analysis model can be updated or modified as

requirements evolve. Maintainability ensures the model remains useful and adaptable
over time.

o Example Metrics:

 Modularity: The extent to which the model is divided into independent,


self-contained modules that can be modified independently.

 Change Impact: Measures how changes to one part of the model affect
other parts. A model with low change impact is easier to maintain.

 Refactorability: The ease with which the model can be refactored or


reorganized without disrupting functionality.

o Example: If a change in business logic requires minimal adjustments to the model


without affecting unrelated components, the analysis model has good
maintainability.

7. Verifiability and Testability Metrics


These metrics assess how easily the analysis model can be verified and validated against
the requirements. It ensures that the model can be tested to confirm that it works as
expected.

o Example Metrics:

 Test Coverage: The extent to which the analysis model is testable and can
be mapped to specific test cases.

 Verifiability: Measures whether the model is clear enough to be verified


against business rules or user requirements.

 Consistency with Requirements: The degree to which the analysis model


matches the defined requirements and expectations.

o Example: A software system's requirement to allow users to change passwords


would be verifiable if the analysis model includes this as a specific, testable
scenario (e.g., a use case for password change).



8. Complexity Metrics
These metrics measure the overall complexity of the analysis model, including how
intricate the relationships between components are. A more complex model may be
harder to understand and maintain, so minimizing complexity is often desired.

o Example Metrics:

 Cyclomatic Complexity: Measures the complexity of decision paths in the


model, often used for control flow analysis.

 Model Size: The number of elements in the analysis model (e.g., use cases,
classes, relationships). A very large model can be more difficult to maintain
and understand.

 Depth of Hierarchy: The number of levels of abstraction or inheritance


within the model. More levels can indicate higher complexity.

o Example: A complex financial system analysis model with numerous user roles
and complex transactions may result in higher complexity metrics, which would
indicate the need for simplification or modularization.

Metrics for Design Model

Metrics for Design Models are quantitative measures used to evaluate the quality,
complexity, and effectiveness of a software design. These metrics help in assessing how well the
design meets the requirements, how easy it is to understand, and how maintainable and scalable
the system will be. By analyzing design models using metrics, teams can make informed
decisions about the quality of the system’s architecture and design choices.

Categories of Metrics for Design Models

1. Size Metrics These metrics measure the scale or size of the design model. Larger designs
may indicate higher complexity, but it's important to evaluate whether the size reflects
necessary features or unnecessary complexity.

o Example Metrics:

 Number of Classes: The total number of classes in the design, indicating


the scale of the object-oriented design. Too many classes could signal a
fragmented design.



 Number of Methods/Functions: The total number of methods/functions
in the design. An unusually high number of methods could indicate a design
that’s too granular or has high complexity.

 Number of Relationships: The total number of relationships between


classes, such as associations, dependencies, or inheritance links, showing
how interconnected the design components are.

 Component Count: The number of different components (e.g., modules or


services) in the design. More components might indicate modularity, but
too many can lead to unnecessary complexity.

o Example: A design for an online banking system might have 50 classes and 200
methods, which could be reasonable, but a higher number might suggest the need
for design simplification.

2. Complexity Metrics These metrics evaluate how complex the design is, which can affect
its maintainability, scalability, and understandability. High complexity in the design model
can lead to difficulty in implementation and testing.

o Example Metrics:

 Coupling: Measures the degree to which different components or classes


are dependent on each other. Low coupling is desirable because it suggests
that changes in one component won't require changes in others.

 Example: Class Coupling measures how many classes a given class


depends on. A high number of dependent classes suggests a tightly
coupled design.

 Cohesion: The degree to which elements within a class or module are


related. Higher cohesion within classes and modules generally indicates a
more organized and maintainable design.

 Example: If a class handles both database access and user interface


logic, it might have low cohesion, indicating a need for refactoring.

 Cyclomatic Complexity (for Design): Though cyclomatic complexity is


traditionally a code metric, it can also be applied to design elements like



workflows or decision-making processes, indicating the number of
decision points in the design.

 Inheritance Depth: In object-oriented designs, the depth of inheritance in


a class hierarchy. High inheritance depth can lead to fragile designs that are
difficult to extend or modify.

 Fan-in/Fan-out: These metrics measure how many components depend


on a particular component (fan-in) and how many components a particular
component depends on (fan-out). Higher fan-out indicates that a
component is highly connected, which can lead to higher complexity.

o Example: A highly coupled design in which many modules or classes depend on a


central module could indicate that the system is at risk for fragile changes that
may break many parts of the system.

3. Modularity Metrics Modularity metrics evaluate how well the design is broken down into
independent, reusable, and manageable components or modules. A modular design is
easier to understand, maintain, and extend.

o Example Metrics:

 Modularity: Measures the extent to which the design is divided into distinct modules or components. Higher modularity usually leads to easier maintainability and scalability.

 Module Size: The size of each module in terms of classes, functions, or responsibilities. A large module might indicate that it should be split into smaller, more manageable pieces.

 Number of Interfaces: The number of interfaces between components. Too many interfaces can increase complexity, while too few may limit flexibility.

 Reusability: Measures how many components or classes are reusable across different parts of the system or other projects.

o Example: A modular design for a customer relationship management (CRM) system could have separate modules for user authentication, customer data management, and reporting, ensuring that each module can evolve independently.

4. Maintainability Metrics Maintainability metrics assess how easy it is to change and evolve the design over time. Highly maintainable designs are easy to modify, extend, and debug.

o Example Metrics:

 Change Impact: Measures the potential impact of changes to one component on other components. Lower change impact indicates that the design is modular and loosely coupled.

 Refactorability: The ease with which the design can be refactored to improve structure or performance. A design that requires minimal changes to its internal structure is considered easier to refactor.

 Modifiable Architecture: A measure of how flexible the design is to accommodate future changes without requiring a complete overhaul.

 Documentation Quality: The completeness and clarity of design documentation, which affects how easily others can understand and modify the design.

o Example: In an e-commerce platform, if the payment processing module is designed to be separate from the order management module, it is easier to modify or replace one without affecting the other, making the design more maintainable.

5. Performance Metrics These metrics assess the efficiency and performance implications
of the design. A well-designed system should be optimized to handle the expected
workload without unnecessary overhead.

o Example Metrics:

 Latency: Measures the time it takes for a component or module to respond to an action. Lower latency indicates better performance.

 Resource Usage: Assesses how efficiently the system uses resources such as memory, CPU, and network bandwidth.

 Scalability: The ability of the design to handle increased load or scale efficiently. Designs that are scalable can handle growth without significant changes to the architecture.

 Concurrency: Measures the degree to which the design can handle multiple tasks simultaneously without performance degradation.

o Example: In a cloud-based system, a design that scales efficiently by adding more instances of a service as traffic increases is considered scalable, ensuring that the performance remains optimal under heavy load.

6. Usability Metrics These metrics measure how easy it is for users (or developers) to
interact with the system’s design. For software designs with user interfaces, usability is a
crucial factor in ensuring a positive user experience.

o Example Metrics:

 User Interface Consistency: Measures the consistency of UI elements across the system. A consistent interface makes the system easier for users to navigate and use.

 Responsiveness: Measures how quickly the system responds to user actions.

 User Satisfaction: A qualitative metric often based on user feedback or usability tests, indicating how easy the system is to use.

o Example: A design for a web application might use consistent buttons, icons, and menus throughout, improving usability and reducing the learning curve for users.

7. Security Metrics These metrics assess how well the design incorporates security
principles and practices to protect against threats and vulnerabilities. A secure design
reduces the risk of data breaches, unauthorized access, and other security issues.

o Example Metrics:

 Access Control: The extent to which the design enforces strict access control mechanisms, such as user roles and permissions.

 Data Integrity: Measures how well the design ensures that data is accurate and protected from unauthorized modification.

 Threat Mitigation: Evaluates how the design addresses potential security threats, such as SQL injection, cross-site scripting (XSS), or buffer overflow vulnerabilities.

 Encryption: Measures the use of encryption techniques to protect sensitive data during transmission and storage.

o Example: In a financial application, a design that includes strong encryption for transactions and secure user authentication methods would have a high security metric.

Metrics for source code

Metrics for source code are quantitative measures used to evaluate the quality,
maintainability, and performance of the software's source code. These metrics help developers
and teams understand how well the code is written, how complex it is, how easy it is to maintain,
and whether it adheres to coding standards. The goal is to improve software quality, reduce
errors, and ensure long-term maintainability.

Categories of Metrics for Source Code

1. Size Metrics
Size metrics measure the length or size of the codebase. They are helpful in determining
how large or small a system is and provide insights into complexity, but should be
interpreted with care since larger codebases don’t always mean worse quality.

o Example Metrics:

 Lines of Code (LOC): The total number of lines in the source code,
including comments and blank lines. While it's a simple metric, it provides
an indication of the size of the codebase.



 Effective Lines of Code (ELOC): The number of lines in the code that
contribute to functionality (i.e., excluding comments, blank lines, and white
space).

 Comment Density: The percentage of code lines that are comments, indicating how well-documented the code is.

 File Count: The number of source code files in a project.

o Example: In a large project, a high LOC count may indicate a feature-rich system,
but if the comment density is low, it could suggest that the code might be difficult
for others to understand.
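As a rough sketch of how these size metrics can be computed for a Python source file (assuming only `#`-style line comments; the sample snippet is invented):

```python
# Sketch: LOC, ELOC, and comment density for a Python source string.
def size_metrics(source: str) -> dict:
    lines = source.splitlines()
    loc = len(lines)                                         # total lines
    blank = sum(1 for l in lines if not l.strip())           # blank lines
    comments = sum(1 for l in lines if l.strip().startswith("#"))
    eloc = loc - blank - comments                            # effective LOC
    density = comments / loc * 100 if loc else 0.0           # comment density (%)
    return {"LOC": loc, "ELOC": eloc, "comment_density_pct": round(density, 1)}

sample = "# add two numbers\ndef add(a, b):\n    return a + b\n\nprint(add(2, 3))"
metrics = size_metrics(sample)  # {'LOC': 5, 'ELOC': 3, 'comment_density_pct': 20.0}
```

A real tool would also distinguish docstrings and multi-line comments, which this sketch deliberately ignores.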

2. Complexity Metrics
Complexity metrics are used to evaluate the intricacy or difficulty of understanding and
maintaining the code. High complexity often indicates hard-to-maintain code, which can
be error-prone and difficult to test.

o Example Metrics:

 Cyclomatic Complexity: Measures the number of independent paths through a program's source code. Higher values suggest more complex code with more decision points (e.g., if-else conditions).

 Halstead Complexity Measures: These metrics are based on the number of operators and operands in the code and are used to measure the complexity of a program in terms of its mathematical properties.

 Nesting Depth: The level of nested loops or conditional statements in the code. High nesting can make the code harder to read and debug.

 Fan-in/Fan-out: The number of modules that depend on a particular module (fan-in) and the number of modules or functions that a particular module depends on (fan-out). High fan-out may indicate that the module is too complex or has too many responsibilities.

o Example: A function with a cyclomatic complexity of 20 may be too complicated and could benefit from refactoring into smaller, more manageable functions.
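A minimal approximation of cyclomatic complexity counts decision keywords and adds one. Real tools work on the parsed syntax tree, so treat this keyword-matching version as a sketch only:

```python
import re

# Rough estimate: cyclomatic complexity = 1 + number of decision points,
# counted here by matching decision keywords (a real tool parses the AST).
DECISION_WORDS = r"\b(if|elif|for|while|and|or|case|except)\b"

def cyclomatic_estimate(source: str) -> int:
    return 1 + len(re.findall(DECISION_WORDS, source))

code = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 75:
        return "B"
    elif score >= 60:
        return "C"
    return "D"
"""
# One `if` plus two `elif`s -> 3 decision points -> complexity estimate 4.
```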



3. Maintainability Metrics
These metrics focus on how easy the source code is to maintain, including making
changes, fixing bugs, or adding new features. More maintainable code is generally
modular, well-documented, and easy to understand.

o Example Metrics:

 Maintainability Index: A composite score derived from various metrics like cyclomatic complexity, lines of code, and Halstead metrics. A higher score indicates easier maintenance.

 Code Duplication: The extent to which the same or similar code appears in multiple places, indicating areas where refactoring might be needed.

 Code Churn: The number of lines of code that have been modified over a specific period, helping to identify areas of the codebase that change frequently and may require additional testing or attention.

 Modularity: Measures how well the code is divided into smaller, independent components or modules. Highly modular code is easier to modify, test, and understand.

o Example: If multiple parts of the codebase contain the same logic (code
duplication), it may need to be refactored to create reusable functions or classes,
improving maintainability.
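Code duplication can be spotted mechanically. The sketch below compares sliding three-line windows of whitespace-normalized source to flag repeated blocks; the window size of 3 and the sample lines are arbitrary choices:

```python
# Sketch: flag duplicated 3-line blocks by comparing normalized
# (whitespace-stripped) sliding windows of source lines.
def duplicated_blocks(lines, window=3):
    seen, dupes = set(), set()
    norm = [l.strip() for l in lines]
    for i in range(len(norm) - window + 1):
        key = tuple(norm[i:i + window])
        if all(key):                 # skip windows containing blank lines
            if key in seen:
                dupes.add(key)
            seen.add(key)
    return dupes

src = [
    "total = 0",
    "for x in items:",
    "    total += x",
    "print(total)",
    "total = 0",
    "for x in items:",
    "    total += x",
]
clones = duplicated_blocks(src)   # one repeated 3-line block
```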

4. Quality Metrics
These metrics are used to assess the overall quality of the source code, including
readability, adherence to coding standards, and bug density.

o Example Metrics:

 Defect Density: The number of defects (bugs or issues) found per thousand lines of code. Lower defect density indicates higher-quality code.

 Code Quality Score: A composite score based on various quality checks, such as adherence to coding standards, proper use of design patterns, and the lack of security vulnerabilities.



 Code Review Defects: The number of defects or issues identified during
code review. High numbers suggest the need for improvements in the
coding process or team practices.

o Example: A defect density of 0.5 defects per 1,000 lines of code is considered good,
indicating the code is relatively error-free.
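The defect density arithmetic is straightforward; the defect and line counts below are invented purely to reproduce the 0.5 defects/KLOC figure:

```python
# Sketch: defect density expressed per 1,000 lines of code (KLOC).
def defect_density(defects: int, loc: int) -> float:
    return defects / (loc / 1000)

# Invented counts: 6 defects in a 12,000-line codebase.
density = defect_density(defects=6, loc=12_000)  # 0.5 defects per KLOC
```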

5. Testability Metrics
Testability metrics assess how easily the source code can be tested. Code that is easy to
test is usually modular, with clear separation of concerns, and limited dependencies.

o Example Metrics:

 Test Coverage: The percentage of the codebase that is covered by automated tests. Higher coverage indicates that more parts of the code are being tested.

 Test Case Density: The number of test cases per unit of code (e.g., test
cases per thousand lines of code). This helps measure the thoroughness of
testing.

 Defect Detection Rate: The rate at which defects are found by automated
or manual tests. A high rate can indicate poor quality, but it can also point
to areas of code that need more thorough testing.

o Example: If a project has 85% test coverage, it indicates that most of the
functionality is being tested, which can help identify potential bugs early in the
development cycle.

6. Performance Metrics
These metrics focus on how efficient and optimized the source code is, which can affect
the performance of the application, including speed and resource usage.

o Example Metrics:

 Execution Time: The time it takes for a function or program to execute. Efficient code should minimize execution time, particularly in performance-critical areas.



 Memory Usage: The amount of memory the program consumes.
Inefficient code can lead to excessive memory consumption, causing
performance issues.

 Response Time: In web applications, the time it takes for a system to respond to user requests (e.g., page load time).

o Example: If a database query in the code takes 2 seconds to execute, optimizing the code or the query could reduce the execution time and improve performance.

7. Security Metrics
Security metrics help identify how secure the source code is. These metrics assess
vulnerabilities and security risks present in the code, such as data breaches or insecure
dependencies.

o Example Metrics:

 Vulnerability Density: The number of security vulnerabilities per thousand lines of code. High vulnerability density indicates a need for security improvements.

 Code Review for Security: The number of security issues discovered during code reviews.

 Dependency Risk: Evaluates the risk level of third-party libraries or frameworks used in the codebase (e.g., outdated libraries with known vulnerabilities).

o Example: If a source code review identifies that a critical library is outdated and
has known security vulnerabilities, it should be updated to reduce risk.

Metrics for Testing

Metrics for Testing are quantitative measures used to assess the effectiveness,
efficiency, and coverage of testing activities. These metrics provide insights into the quality of
the testing process, the effectiveness of test cases, and the overall reliability of the software
product. By tracking and analyzing these metrics, teams can identify gaps in testing, improve
test coverage, and ensure that defects are detected early in the development lifecycle.

Categories of Metrics for Testing



1. Test Coverage Metrics These metrics evaluate the extent to which the software has been
tested. Higher test coverage generally leads to fewer undiscovered defects and better
confidence in the quality of the software.

o Example Metrics:

 Code Coverage: The percentage of the source code that is executed during
testing. This can be measured using tools that track the lines of code
executed by test cases.

 Line Coverage: The percentage of code lines tested.

 Branch Coverage: The percentage of decision points (if-else conditions) tested, indicating how thoroughly the logical paths are exercised.

 Path Coverage: The percentage of all possible execution paths through the code that have been tested.

 Condition Coverage: Measures whether each condition (e.g., parts of an if statement) in the code has been tested.

o Example: A system might have 80% line coverage and 70% branch coverage,
indicating that while a large portion of the code has been tested, some decision
points may require more test cases.
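The coverage percentages themselves are simple ratios over counts reported by a coverage tool; the counts below are made up to match the 80% line / 70% branch example:

```python
# Sketch: coverage percentages from raw covered/total counts.
# A coverage tool would supply the counts; these are illustrative.
def coverage_pct(covered: int, total: int) -> float:
    return round(100 * covered / total, 1) if total else 0.0

line_cov = coverage_pct(covered=800, total=1_000)  # 80.0% line coverage
branch_cov = coverage_pct(covered=70, total=100)   # 70.0% branch coverage
```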

2. Test Effectiveness Metrics These metrics assess how effective the testing process is in
identifying defects and ensuring the software behaves as expected.

o Example Metrics:

 Defect Detection Percentage: The percentage of defects found during testing compared to the total number of defects identified (either during testing or post-release). Higher defect detection rates during testing suggest a more effective test suite.

 Defect Discovery Rate: The rate at which defects are found over time.
This can help identify whether testing is catching defects early or if there
are still many undiscovered defects at later stages of testing.



 Test Pass Percentage: The percentage of test cases that pass out of the
total number of test cases executed. A high percentage suggests the system
is stable, while a low percentage may indicate significant issues.

o Example: If 90% of test cases pass and the defect detection percentage is 85%, it
suggests that the testing is effective in identifying defects and that the product is
relatively stable.
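These effectiveness figures are plain percentages; the counts below are hypothetical and chosen to match the example in the text:

```python
# Sketch: test effectiveness percentages from hypothetical counts:
# 85 defects caught in testing out of 100 total (testing + post-release),
# and 90 of 100 executed test cases passing.
def pct(part: int, whole: int) -> float:
    return round(100 * part / whole, 1)

defect_detection = pct(85, 100)  # defect detection percentage: 85.0
pass_rate = pct(90, 100)         # test pass percentage: 90.0
```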

3. Test Execution Metrics These metrics evaluate how efficiently the testing process is
being conducted, including the time and resources spent on testing.

o Example Metrics:

 Test Execution Time: The amount of time it takes to run the entire test
suite. This is an important metric to track for performance testing,
regression testing, and automated testing.

 Test Case Execution Efficiency: The number of test cases executed per
unit of time (e.g., test cases per hour). This metric helps determine how
quickly and efficiently the testing process is running.

 Test Case Completion Rate: The percentage of planned test cases that
were actually executed. A lower rate may indicate that testing is falling
behind schedule.

o Example: If 50 test cases are planned for a release, and 40 test cases have been
executed on time, the completion rate would be 80%. Monitoring this metric helps
keep testing on track and ensures that key tests are not missed.

4. Defect Metrics These metrics assess the number and severity of defects found during
testing. They help teams understand the quality of the product and the areas that may
need more focus.

o Example Metrics:

 Defects Found (by Severity): The number of defects found categorized by severity (e.g., critical, major, minor). This helps prioritize fixing high-severity defects.



 Defect Density: The number of defects per unit of size (e.g., defects per
1,000 lines of code). This helps assess the overall quality of the code and
testing process.

 Defect Resolution Time: The average time taken to fix a defect after it is
identified. This metric is used to assess how quickly issues are resolved
during the testing process.

 Defect Retention Rate: The percentage of defects found during testing that are still unresolved by the end of the testing phase. A high retention rate suggests that defects are not being adequately addressed.

o Example: If a critical bug is found late in the testing cycle, it may indicate the need
for additional testing or that earlier testing was not thorough enough. If there are
numerous minor defects found in the same area, it may indicate the need for a re-
evaluation of that part of the system.

5. Test Case Design Metrics These metrics assess the quality and coverage of the test
cases themselves. Well-designed test cases increase the likelihood of finding defects and
ensure that all critical features are tested.

o Example Metrics:

 Test Case Defect Density: The number of defects found per test case. This
metric can indicate the effectiveness of the test case design.

 Test Case Pass/Fail Rate: The percentage of test cases that pass versus
those that fail. This helps gauge how well the tests are designed and
whether the product meets expectations.

 Test Case Redundancy: The degree to which test cases repeat tests of the
same functionality. A high redundancy rate can indicate inefficiencies in
the test design and unnecessary overlaps.

 Test Case Coverage: The percentage of requirements, features, or code covered by the test cases. Higher coverage typically indicates that the system is being thoroughly tested.



o Example: If a test case for a login feature passes 99% of the time, but fails
intermittently, the test case design might need to be adjusted to account for
specific edge cases or performance issues.

6. Test Automation Metrics These metrics assess the effectiveness and efficiency of test
automation efforts, which are crucial for large-scale projects and continuous
integration/continuous delivery (CI/CD) pipelines.

o Example Metrics:

 Automated Test Coverage: The percentage of test cases that are automated. Higher automated test coverage reduces the time needed for manual testing and increases testing efficiency.

 Automation Test Execution Time: The time it takes for automated tests to run. Shorter execution times improve the efficiency of the overall testing process.

 Automation Pass Rate: The percentage of automated tests that pass during execution. A low pass rate may indicate issues with test scripts or the application.

 Automation Maintenance Effort: The amount of effort required to update or maintain automated test scripts. This metric helps track how easily the test automation framework can adapt to changes in the application.

o Example: If automated tests are being run in a CI/CD pipeline and fail frequently,
it may indicate that the automated tests are poorly designed or that changes in
the application are not being properly accounted for in the test scripts.

7. Risk Metrics These metrics focus on assessing the risks associated with testing, such as
the likelihood of undetected defects, the impact of defects, and whether the testing efforts
are focused on the most critical areas of the application.

o Example Metrics:



 Risk Exposure: The likelihood of a defect occurring in a high-risk area of
the system. Testing efforts should focus on high-risk areas to mitigate the
most critical issues.

 Test Coverage by Risk: Measures the amount of testing coverage for high-
risk features or components. Ensuring adequate coverage of high-risk
areas helps reduce the probability of defects going undetected.

 Failure Rate by Risk Level: The failure rate of tests for different risk levels
(e.g., high, medium, low). This metric helps identify which areas of the
system are most likely to fail under certain conditions.

o Example: If the login system is identified as a high-risk area and test coverage is
only 50%, this indicates a need to focus more testing efforts on the login
functionality to mitigate the potential for defects.

Metrics for Maintenance

Metrics for Maintenance are quantitative measures used to assess the effectiveness,
efficiency, and quality of maintenance activities in software development. Maintenance activities
typically involve correcting defects, updating the software to meet new requirements, improving
performance, and adapting to changes in the environment. These metrics help evaluate how
well maintenance efforts contribute to the software's long-term sustainability, stability, and
performance.

Categories of Metrics for Maintenance

1. Defect-Related Metrics These metrics focus on defects that arise during the maintenance
phase, including how quickly they are identified and resolved, and their impact on the
system.

o Example Metrics:

 Defect Density: The number of defects found during the maintenance phase per unit of size (e.g., defects per 1,000 lines of code). Higher defect density during maintenance may indicate that the system is becoming more fragile and error-prone.

 Example: If a system has a defect density of 5 defects per 1,000 lines of code, this could be high and indicate that the system may require refactoring or more careful testing after maintenance.

 Defect Resolution Time: The average time taken to resolve a defect after it is reported. Shorter resolution times typically indicate a more responsive maintenance process.

 Example: If defects are resolved within 2 days on average, this indicates that the maintenance process is efficient.

 Defect Introduction Rate: The rate at which new defects are introduced during the maintenance process. Ideally, maintenance should not introduce more defects than are fixed.

 Example: A defect introduction rate of 0.1 defects per day indicates that new defects are being introduced at a low rate, which is desirable.
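Average defect resolution time is just the mean of per-defect durations; the durations below are invented for illustration:

```python
# Sketch: average defect resolution time from per-defect durations (hours).
# The five durations are illustrative values.
resolution_hours = [1.5, 3.0, 6.5, 2.0, 7.0]

avg_resolution = sum(resolution_hours) / len(resolution_hours)  # 4.0 hours
```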

2. Cost-Related Metrics These metrics assess the financial impact of maintenance activities
and help organizations manage maintenance budgets and allocate resources effectively.

o Example Metrics:

 Maintenance Cost: The total cost of maintaining a system over a period of time. This includes labor costs, tools, and infrastructure required for maintenance activities.

 Example: If the annual maintenance cost of an application is $200,000, this can be compared to the cost of developing new features or refactoring to assess cost-effectiveness.

 Cost of Maintenance as a Percentage of Development Cost: The ratio of maintenance costs to initial development costs. Higher percentages may indicate that the system requires more ongoing maintenance or that development practices (e.g., poor code quality) were suboptimal.

 Example: If the maintenance cost is 30% of the original development cost, this could be reasonable for a mature product but high for a new application.



 Maintenance Backlog: The accumulated work that has not yet been
completed in the maintenance phase, often measured as the number of
open tickets or issues.

 Example: A backlog of 50 unresolved issues may signal that the maintenance team is overwhelmed or that the system is becoming more difficult to maintain.

3. Performance and Stability Metrics These metrics assess how well the system performs
and remains stable during the maintenance phase, particularly as updates and fixes are
implemented.

o Example Metrics:

 System Downtime: The amount of time the system is unavailable due to maintenance activities or defect resolution. This is a critical metric for ensuring minimal disruption to end users.

 Example: If the system experiences an average downtime of 2 hours per month for maintenance, it may be acceptable in many environments, but excessive downtime may need attention.

 Availability: The percentage of time the system is available for use. High availability is crucial for systems that are mission-critical or provide ongoing services.

 Example: A system with 99.9% availability experiences only 8.77 hours of downtime per year, which is typically considered acceptable for most enterprise applications.

 Performance Degradation: The extent to which system performance (e.g., speed, responsiveness) deteriorates due to the maintenance process. Ideally, maintenance activities should not degrade performance.

 Example: A system may experience a 5% slowdown after maintenance, which might indicate that optimization or refactoring is needed.
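The relationship between availability and annual downtime quoted above can be checked directly. This sketch assumes a 365.25-day year (8,766 hours), which is how the 8.77-hour figure arises:

```python
# Sketch: convert an availability percentage into expected annual downtime.
# Assumes a 365.25-day year (8,766 hours), matching the 8.77-hour figure.
HOURS_PER_YEAR = 365.25 * 24

def annual_downtime_hours(availability_pct: float) -> float:
    return round((1 - availability_pct / 100) * HOURS_PER_YEAR, 2)

downtime_999 = annual_downtime_hours(99.9)    # ~8.77 hours/year
downtime_9995 = annual_downtime_hours(99.95)  # ~4.38 hours/year
```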

4. Change-Related Metrics These metrics track the frequency and scope of changes made to
the software during the maintenance phase and assess the impact of these changes on
the system.
o Example Metrics:

 Number of Changes: The total number of changes made during the maintenance phase, such as defect fixes, feature enhancements, or system upgrades.

 Example: If there are 100 changes made to a system in a quarter, tracking this can help identify how often the system is updated and whether the maintenance workload is manageable.

 Change Volume: The size of changes in terms of the number of lines of code modified or the number of modules affected. Larger changes may introduce more risk and require more testing.

 Example: A change volume of 500 lines of code may require additional testing to ensure that no defects are introduced.

 Change Failure Rate: The percentage of changes that introduce new defects or cause system failures. A high failure rate indicates a need for more rigorous testing and validation before changes are applied.

 Example: If 20% of changes introduce defects, this could suggest inadequate testing or poor change management processes.

5. Software Maintenance Effectiveness Metrics These metrics assess the effectiveness of the
overall maintenance process in terms of quality, timeliness, and the degree to which the
system continues to meet user needs.

o Example Metrics:

 User Satisfaction: A measure of how satisfied end users are with the
system after maintenance activities. Surveys, feedback forms, and usability
studies can be used to gather this data.

 Example: If 90% of users report satisfaction after a new release or update, it suggests that maintenance activities are effectively addressing user needs.

 Service Level Agreement (SLA) Compliance: The percentage of maintenance activities that are completed within the defined SLAs, such as resolving critical defects within 24 hours. Meeting SLAs ensures timely responses to customer needs.

 Example: If 95% of maintenance requests are resolved within the agreed SLA, it indicates that the maintenance process is responsive and effective.

 Maintenance Efficiency: The ratio of effort spent on productive maintenance tasks (e.g., resolving defects, adding enhancements) versus unproductive tasks (e.g., rework, investigation of non-reproducible issues).

 Example: A high maintenance efficiency ratio (e.g., 80% productive vs. 20% unproductive) suggests that the maintenance team is effectively focusing on valuable tasks.

6. Legacy System Maintenance Metrics These metrics are specific to the maintenance of
legacy systems, where challenges such as outdated technology, lack of documentation, and
difficulty in finding skilled personnel can increase maintenance complexity.

o Example Metrics:

 Legacy System Maintenance Cost: The cost of maintaining a legacy system compared to modern alternatives or replacement solutions.

 Example: If maintaining an old mainframe system costs $500,000 annually, while replacing it with a cloud-based system might cost $300,000, this can guide decisions about whether to continue maintenance or migrate to a new system.

 Technical Debt: The amount of rework required in the system due to suboptimal design, outdated code, or accumulated quick fixes. Higher technical debt usually means higher maintenance costs.

 Example: If a system has high technical debt (e.g., poorly documented code, obsolete libraries), the cost of maintenance increases, and it may hinder future enhancements.

 Number of Workarounds: The number of temporary solutions or workarounds used to address defects or limitations in a legacy system. A high number may indicate that the system is becoming increasingly difficult to maintain.
 Example: If 40% of issues are resolved through workarounds, this
suggests that the system might need to be refactored or replaced.

Example of Using Metrics for Maintenance

Consider a customer management system under maintenance:

1. Defect-Related Metrics: The system has a defect density of 3 defects per 1,000 lines of code
and an average defect resolution time of 4 hours. This suggests that defects are being
identified and resolved quickly.

2. Cost-Related Metrics: The annual maintenance cost of the system is $150,000, which is
25% of the original development cost. This is acceptable for a mature system but may
increase if additional features are added.

3. Performance and Stability Metrics: The system experiences an average downtime of 1 hour per month, and its availability rate is 99.95%, indicating high stability and minimal disruption.

4. Change-Related Metrics: 150 changes were made during the last quarter, with a change
failure rate of 5%. This shows that most changes are successful, but there is room for
improvement in testing and change management.

5. Software Maintenance Effectiveness Metrics: User satisfaction is 85%, and SLA compliance is 90%. This suggests that the maintenance team is responsive to user needs and generally meets service expectations.

6. Legacy System Maintenance Metrics: The system has significant technical debt, and 30%
of issues are resolved using workarounds. This may require refactoring to reduce
technical debt and improve future maintenance.
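The worked example above can be folded into a simple scripted health check. The threshold values below are illustrative choices for this example, not industry standards:

```python
# Sketch: checking the example maintenance figures against illustrative
# thresholds (all threshold values are invented for this example).
metrics = {
    "defect_density_per_kloc": 3,
    "availability_pct": 99.95,
    "change_failure_rate_pct": 5,
    "user_satisfaction_pct": 85,
    "sla_compliance_pct": 90,
}

thresholds = {
    "defect_density_per_kloc": ("max", 5),     # at most 5 defects/KLOC
    "availability_pct": ("min", 99.9),         # at least 99.9% uptime
    "change_failure_rate_pct": ("max", 10),    # at most 10% failed changes
    "user_satisfaction_pct": ("min", 80),      # at least 80% satisfied
    "sla_compliance_pct": ("min", 90),         # at least 90% within SLA
}

def health(metrics, thresholds):
    """Return a pass/fail flag per metric against its threshold."""
    ok = {}
    for name, (kind, limit) in thresholds.items():
        value = metrics[name]
        ok[name] = value <= limit if kind == "max" else value >= limit
    return ok

status = health(metrics, thresholds)  # every metric passes in this example
```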

Unit – V

Metrics for Process and Products

Metrics for Process and Products are used to evaluate both the development processes and the
products that result from these processes. These metrics help organizations assess the
efficiency, quality, and effectiveness of the processes used to create software, as well as the
quality, performance, and reliability of the final product. These metrics are essential for
continuous improvement and decision-making in software development.



1. Metrics for Process

Metrics for process focus on how well the software development and maintenance processes are
functioning. They provide insights into the efficiency, effectiveness, and predictability of the
processes, helping organizations streamline operations, improve quality, and reduce costs.

2. Metrics for Products

Metrics for products focus on assessing the quality and performance of the software product
itself. These metrics provide insights into how well the product meets user requirements, how
reliable it is, and how it performs in real-world usage.

Software Measurement

Software Measurement is the process of collecting, analyzing, and using quantitative data to
assess various aspects of the software development and maintenance lifecycle. These metrics are
used to evaluate both the software development process and the software product itself.
Software measurement provides insight into the quality, efficiency, and effectiveness of the
development process, and it helps track progress, identify issues, and make informed decisions.

Types of Software Measurement

Software measurement can be broadly divided into two categories:

1. Process Measurement

2. Product Measurement

1. Process Measurement

Process measurement focuses on assessing the performance of the software development and
maintenance processes. The goal is to evaluate the efficiency, effectiveness, and quality of the
process itself to improve productivity, reduce costs, and enhance overall software quality.

2. Product Measurement

Product measurement evaluates the final software product, assessing its quality, functionality,
performance, and overall value delivered to users. These metrics are focused on ensuring that
the software meets user expectations, performs efficiently, and is reliable over time.

Benefits of Software Measurement



 Improved Decision Making: Accurate data from software measurements helps project
managers and stakeholders make informed decisions about resource allocation, process
improvements, and product development.

 Predictability: By using historical data, software metrics can help predict future
performance, timelines, and resource needs.

 Quality Improvement: Metrics provide insights into potential weaknesses in the software development process and product quality, allowing for targeted improvements.

 Risk Management: Monitoring key metrics can help identify potential risks early in the development lifecycle, enabling proactive mitigation strategies.

 Continuous Improvement: By regularly collecting and analyzing metrics, organizations can continuously improve their processes and products.

Conclusion

Software measurement is a critical practice in software engineering, providing valuable insights into both the development process and the final product. Process metrics help evaluate how
effectively the development team is working, while product metrics assess the quality and
performance of the final product. By leveraging these metrics, software teams can make data-
driven decisions, improve efficiency, enhance quality, and deliver better software products to
users.

Metrics for Software quality

Metrics for Software Quality are used to evaluate the overall quality of the software product,
assessing aspects like functionality, performance, reliability, usability, and maintainability. These
metrics are essential for identifying areas that need improvement, ensuring that the product
meets user expectations, and delivering high-quality software. Below are some of the most
commonly used metrics for software quality:

1. Functionality Metrics

Functionality metrics focus on how well the software meets the specified requirements and
delivers the expected features to the user.

 Defect Density: Measures the number of defects per unit of software, typically per 1,000
lines of code (LOC) or per function point.



o Example: If 20 defects are found in 2,000 lines of code, the defect density is 10
defects per 1,000 LOC.

 Requirement Coverage: Measures the percentage of requirements that have been implemented or covered by the software.

o Example: If 85 out of 100 requirements are implemented, the requirement coverage is 85%.

 Functionality Testing Coverage: Measures how much of the software’s functionality has
been tested.

o Example: If 90 out of 100 functionalities are tested, the coverage is 90%.
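The functionality metrics above are simple ratios, so they are easy to automate. A minimal sketch in Python (the function names are illustrative, not from any standard library):

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects per 1,000 lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

def coverage_percent(covered: int, total: int) -> float:
    """Generic coverage ratio, e.g. requirements implemented or functions tested."""
    return covered / total * 100

# Worked examples from the text:
print(defect_density(20, 2000))   # 10.0 defects per KLOC
print(coverage_percent(85, 100))  # 85.0 (requirement coverage)
print(coverage_percent(90, 100))  # 90.0 (functionality testing coverage)
```

Teams typically compute these per release and watch the trend over time rather than the absolute value.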

2. Reliability Metrics

Reliability metrics assess the software's ability to perform under expected conditions over time.

 Mean Time Between Failures (MTBF): Measures the average time the system operates
without failure. It is used to assess the reliability of the software.

o Example: If a system operates for 1,000 hours and encounters 5 failures, the
MTBF is 200 hours.

 Defect Recovery Time: Measures the average time taken to fix defects after they are
identified.

o Example: If the total time to fix 10 defects is 100 hours, the defect recovery time is
10 hours per defect.

 Failure Rate: The frequency at which failures occur in the system over time.

o Example: If there are 3 failures in 100 hours of operation, the failure rate is 0.03
failures per hour.
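The reliability figures in the examples above follow directly from their definitions; a small Python sketch (the function names are illustrative):

```python
def mtbf(operating_hours: float, failures: int) -> float:
    """Mean Time Between Failures: average hours of operation per failure."""
    return operating_hours / failures

def failure_rate(failures: int, operating_hours: float) -> float:
    """Failures per hour of operation (the reciprocal view of MTBF)."""
    return failures / operating_hours

def defect_recovery_time(total_fix_hours: float, defects_fixed: int) -> float:
    """Average hours spent fixing each defect."""
    return total_fix_hours / defects_fixed

print(mtbf(1000, 5))                  # 200.0 hours
print(failure_rate(3, 100))           # 0.03 failures per hour
print(defect_recovery_time(100, 10))  # 10.0 hours per defect
```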

3. Performance Metrics

Performance metrics evaluate how well the software performs under various conditions,
including speed, scalability, and resource usage.



 Response Time: The time it takes for the system to respond to a user's request. Lower
response time improves user satisfaction.

o Example: If the system responds to 500 requests in 100 seconds, the average
response time is 0.2 seconds.

 Throughput: The number of transactions or operations the system can handle per unit
of time.

o Example: If a system processes 1,000 transactions in 1 minute, the throughput is 1,000 transactions per minute.

 Scalability: The ability of the software to handle an increasing number of users or transactions without performance degradation.

o Metric: Often measured by testing the software under different loads to observe performance degradation as load increases.

o Example: If a system supports 100 users with acceptable performance but fails to handle 200 users efficiently, its scalability is limited.

 Resource Utilization: Measures how efficiently the system uses hardware resources (e.g., CPU, memory, disk space).

o Example: If a system uses 2 GB of RAM out of 8 GB, the resource utilization is 25%.
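Response time can also be measured directly rather than derived from totals. A small sketch using Python's time module — the no-op handler here stands in for a real request handler:

```python
import time

def average_response_time(handler, requests: int) -> float:
    """Time a batch of calls and return the mean seconds per request."""
    start = time.perf_counter()
    for _ in range(requests):
        handler()
    return (time.perf_counter() - start) / requests

def resource_utilization(used: float, capacity: float) -> float:
    """Percentage of a resource (RAM, CPU, disk) in use."""
    return used / capacity * 100

# A no-op stands in for a real request handler in this sketch
print(average_response_time(lambda: None, 500))
print(resource_utilization(2, 8))  # 25.0
```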

4. Usability Metrics

Usability metrics measure how easy it is for users to interact with the software and how well it
meets user needs.

 User Satisfaction: The degree to which users are satisfied with the software, often
measured through surveys or feedback forms.

o Example: If 10 users rate the software with an average score of 4 out of 5, the
user satisfaction score is 4.

 Learnability: Measures how easy it is for new users to learn to use the software. A
system with high learnability has a short learning curve.

o Metric: Typically assessed by the time it takes for new users to complete a set of
basic tasks or through usability testing.



 Task Success Rate: The percentage of users who can successfully complete a given task
using the software.

o Example: If 80 out of 100 users can complete a task, the task success rate is 80%.

5. Maintainability Metrics

Maintainability metrics assess how easy it is to modify, update, and extend the software over
time.

 Code Complexity (Cyclomatic Complexity): Measures the complexity of the software’s control flow. High complexity suggests that the code is harder to understand and maintain.

o Formula:

Cyclomatic Complexity = E − N + 2P

Where:

 E = number of edges in the flow graph

 N = number of nodes in the flow graph

 P = number of connected components in the flow graph

o Example: A cyclomatic complexity score of 10 indicates a moderate level of complexity, whereas a score of 50 suggests that the software might be difficult to maintain.

 Code Churn: Measures how often the code is modified. Frequent changes may indicate
issues with design or requirements instability.

o Example: If 200 lines of code were modified in a software project that has 1,000
lines of code, the code churn is 20%.

 Time to Implement Changes: Measures how long it takes to implement changes or new
features in the system.

o Example: If it takes a total of 5 hours to implement 2 changes, the average time to implement a change is 2.5 hours.
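The maintainability formulas above can be checked in a few lines of Python; the if/else control-flow graph used in the example is illustrative:

```python
def cyclomatic_complexity(edges: int, nodes: int, components: int = 1) -> int:
    """V(G) = E - N + 2P for a control-flow graph."""
    return edges - nodes + 2 * components

def code_churn(modified_lines: int, total_lines: int) -> float:
    """Percentage of the code base modified over a period."""
    return modified_lines / total_lines * 100

# A single if/else: condition, then-branch, else-branch, join -> 4 nodes, 4 edges
print(cyclomatic_complexity(edges=4, nodes=4))  # 2
print(code_churn(200, 1000))                    # 20.0
```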

6. Customer Metrics
These metrics evaluate how well the software meets customer needs and expectations, helping
to assess the overall success of the product in the market.

 Net Promoter Score (NPS): A measure of customer loyalty based on how likely users
are to recommend the product to others. It is calculated by subtracting the percentage of
detractors (users who would not recommend the software) from the percentage of
promoters (users who would recommend it).

 Churn Rate: Measures the percentage of users who stop using the software after a
certain period.

o Formula: Churn Rate = (Users Lost During the Period ÷ Users at the Start of the Period) × 100

o Example: If a software has 200 users and loses 20 users in a month, the churn
rate is 10%.
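Both customer metrics reduce to percentages; a small Python sketch (the NPS survey figures are invented for illustration):

```python
def net_promoter_score(promoters: int, detractors: int, respondents: int) -> float:
    """NPS = % promoters - % detractors; ranges from -100 to +100."""
    return (promoters - detractors) / respondents * 100

def churn_rate(users_lost: int, users_at_start: int) -> float:
    """Percentage of users who stopped using the software in the period."""
    return users_lost / users_at_start * 100

print(churn_rate(20, 200))              # 10.0, matching the example above
print(net_promoter_score(60, 10, 100))  # 50.0
```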

Software quality metrics are essential for assessing the performance, reliability, and overall
value of a software product. These metrics help organizations identify areas for improvement,
ensure that the software meets user needs, and track progress toward quality goals.

Risk Management

A risk is a probable problem; it might happen, or it might not. There are main two
characteristics of risk.

 Uncertainty: the risk may or may not happen, which means no risk is ever 100% certain.

 Loss: If the risk occurs in reality, undesirable results or losses will occur.

What is Risk Management?


Risk Management is a systematic process of recognizing, evaluating, and handling threats or
risks that have an effect on the finances, capital, and overall operations of an organization.
These risks can come from different areas, such as financial instability, legal issues, errors in
strategic planning, accidents, and natural disasters.

Why is risk management important?

Risk management is important because it helps organizations to prepare for unexpected circumstances that can vary from small issues to major crises. By actively understanding, evaluating, and planning for potential risks, organizations can protect their financial health, continued operation, and overall survival.

Let’s understand why risk management is important with an example.

Suppose that in a software development project, one of the key developers unexpectedly falls ill and is unable to contribute to the product for an extended period.

One solution the organization may adopt: the team uses collaborative tools and procedures, such as shared work boards or project management software, to make sure that each member of the team is aware of all tasks and responsibilities, including those of their teammates.

An organization must focus on providing resources to minimize the negative effects of possible
events and maximize positive results in order to reduce risk effectively. Organizations can more
effectively identify, assess, and mitigate major risks by implementing a consistent, systematic,
and integrated approach to risk management.

The risk management process

Risk management is a sequence of steps that help a software team to understand, analyze, and
manage uncertainty. Risk management process consists of

 Risks Identification.

 Risk Assessment.

 Risks Planning.

 Risk Monitoring



Steps in the Risk Management Process

1. Risk Identification

This involves identifying potential risks that might affect the project. These could include:

 Technical Risks: Challenges related to software design, architecture, technology, or integration.

 Project Management Risks: Risks related to scheduling, resource allocation, or team collaboration.

 External Risks: Risks from external factors like market changes, regulatory
requirements, or third-party vendors.

 Operational Risks: Risks in the daily functioning of the system, such as server outages
or bugs.

 Human Risks: Risks from human factors, such as skill shortages, miscommunication, or
team member turnover.

Risk identification can be done using various methods, such as:

 Brainstorming: Gathering the team to discuss potential risks.

 Interviews/Surveys: Consulting with experts or stakeholders.

 SWOT Analysis: Analyzing strengths, weaknesses, opportunities, and threats.

 Checklists: Using pre-established risk checklists based on previous projects.



2. Risk Assessment

Once the risks are identified, the next step is to assess them by evaluating their likelihood and
potential impact. This can be done using the following approaches:

 Qualitative Risk Assessment: Risks are ranked based on their likelihood and impact,
often using a simple scale (e.g., Low, Medium, High).

 Quantitative Risk Assessment: Uses numerical values to assess the probability of a risk
occurring and its potential impact on the project’s objectives. This often involves:

o Expected Monetary Value (EMV): Calculating the financial impact of risks.

o Risk Probability and Impact Matrix: Plotting risks on a matrix to assess their
severity.
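Quantitative assessment with EMV makes the ranking of risks mechanical. A minimal sketch in Python — the risks and their probabilities and costs below are hypothetical, chosen only to illustrate the calculation:

```python
def expected_monetary_value(probability: float, impact_cost: float) -> float:
    """EMV = probability of occurrence x financial impact if the risk occurs."""
    return probability * impact_cost

# (risk name, probability, cost if it occurs) -- hypothetical values
risks = [
    ("Key developer leaves mid-project", 0.30, 50_000),
    ("Third-party API is deprecated",    0.10, 20_000),
    ("Server outage during launch",      0.05, 100_000),
]

# Rank risks by expected cost, highest first, to prioritise mitigation effort
ranked = sorted(risks, key=lambda r: expected_monetary_value(r[1], r[2]), reverse=True)
for name, p, cost in ranked:
    print(f"{name}: EMV = {expected_monetary_value(p, cost):,.0f}")
```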

3. Risk Mitigation and Planning

Once risks are assessed, mitigation strategies must be developed to reduce the probability and
impact of risks. Risk mitigation can include:

 Avoidance: Changing the project plan to eliminate the risk or its impact.

o Example: Choosing a more reliable technology to avoid technical risks.

 Transfer: Shifting the risk to another party, such as outsourcing or using insurance.

o Example: Purchasing insurance to transfer the risk of a potential data breach.

 Mitigation: Reducing the probability or impact of the risk by taking steps to control it.

o Example: Increasing testing efforts to reduce the impact of defects.

 Acceptance: Acknowledging the risk and deciding to live with it, either by preparing
contingency plans or taking no action if the risk is low and unlikely to have a major
impact.

o Example: Deciding to accept minor delays in a feature's development that do not significantly impact the project’s final deadline.

4. Risk Monitoring and Control

Risk monitoring ensures that risks are continuously tracked and controlled throughout the
project. This step involves:



 Regularly Reviewing Risks: Continuously updating the risk register and monitoring
known risks.

 Tracking New Risks: Identifying new risks that may emerge as the project progresses.

 Reviewing Mitigation Plans: Adjusting mitigation strategies as needed and ensuring they are being implemented effectively.

 Monitoring Key Risk Indicators (KRIs): Defining and monitoring risk indicators to
identify when a risk is becoming more likely or impacting the project.

 Risk Audits: Conducting audits to ensure that risk management processes are being
followed and that no major risks are overlooked.

Risk Management Tools and Techniques

 Risk Register: A document or tool that tracks identified risks, their assessment,
mitigation strategies, and the responsible team members.

 Risk Matrix: A matrix that helps visualize the risks in terms of their probability and
impact. It categorizes risks into different levels of severity (e.g., low, medium, high).

 Monte Carlo Simulation: A statistical technique used to assess the probability of various
outcomes in a project by simulating different risk scenarios.

 Failure Mode and Effect Analysis (FMEA): A systematic method for evaluating
potential failures in a system and determining their effects, likelihood, and priority for
mitigation.

 PERT Charts (Program Evaluation and Review Technique): A project management tool used to evaluate and control project schedules, factoring in uncertainties.
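A risk matrix like the one described above can be encoded as a simple lookup. This sketch assumes a 3×3 matrix with ordinal ratings of 1 (low) to 3 (high); the thresholds are a common convention, not a standard:

```python
def risk_level(probability: int, impact: int) -> str:
    """Classify a risk on a 3x3 probability/impact matrix.

    Both inputs are ordinal ratings: 1 = low, 2 = medium, 3 = high.
    The score thresholds below are a common convention, not a standard.
    """
    score = probability * impact
    if score >= 6:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

print(risk_level(3, 3))  # High
print(risk_level(1, 3))  # Medium
print(risk_level(1, 1))  # Low
```

In practice each cell of the matrix maps to a response policy, e.g. "High" risks require a documented mitigation plan before work proceeds.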

Risk Management Strategies

 Proactive Risk Management: Identifying and addressing risks before they occur by
creating preventive strategies and contingency plans.

 Reactive Risk Management: Dealing with risks after they arise by using corrective
actions to minimize their impact.

 Contingency Planning: Developing backup plans to respond to risks that may occur,
ensuring that the project can continue smoothly in case of unexpected issues.
Benefits of Effective Risk Management

1. Improved Decision-Making: By understanding potential risks, project managers can make better decisions regarding resource allocation, timelines, and scope.

2. Cost Control: Effective risk management can help prevent costly issues and delays by
addressing risks early on, minimizing unexpected expenses.

3. Improved Project Success Rate: Proactively managing risks increases the likelihood
that the project will be completed on time, within budget, and meet quality standards.

4. Stakeholder Confidence: Properly managing risks helps build trust with stakeholders,
ensuring that they feel confident in the project's success.

Reactive Vs proactive risk strategies

Reactive vs Proactive Risk Management Strategies are two approaches to handling risks in
projects, including software development. Both strategies have their advantages and
disadvantages, and the choice between them often depends on the nature of the project, the
environment, and the available resources. Here's a detailed comparison of both strategies:

Proactive Risk Management Strategy

Definition:
Proactive risk management involves identifying potential risks before they occur and taking
steps to avoid or mitigate them. The focus is on anticipating problems and implementing
strategies to prevent them from happening.

Key Characteristics:

 Prevention Focused: Proactively addresses risks by anticipating them early in the project.

 Planning Ahead: Involves creating strategies, contingency plans, and controls well before
the risk materializes.

 Predictive: Attempts to foresee problems and issues before they arise based on historical
data, trends, or expert judgment.

Examples:

 Conducting thorough feasibility studies and testing early in the project lifecycle.



 Implementing code reviews, rigorous testing, and other quality assurance practices to
reduce defects before they occur.

 Having a dedicated risk management team or process in place to identify and address
potential issues as early as possible.

Reactive Risk Management Strategy

Definition:
Reactive risk management involves dealing with risks only after they have materialized. This
strategy focuses on responding to problems when they occur, often with corrective actions to
mitigate the impact.

Key Characteristics:

 Response-Focused: Reactive risk management emphasizes managing risks when they have already occurred or are imminent.

 Problem-Solving Approach: The focus is on finding solutions to risks that have already
been identified or are happening in real time.

 Adaptability: Reactive strategies are flexible and can change depending on the risk's
nature, as they focus on dealing with real situations rather than predicting them.

Examples:

 Fixing a critical bug or issue after it is discovered during user acceptance testing or in the
production environment.

 Reacting to schedule delays caused by unforeseen resource shortages or vendor issues.

 Implementing patching or hotfixes after a security vulnerability has been identified.

Comparison Between Proactive and Reactive Risk Management


Criteria | Proactive Risk Management | Reactive Risk Management
Focus | Prevention and mitigation before the risk occurs. | Responding after the risk has occurred.
Timing | Early planning and preparation. | Real-time problem-solving.
Costs | Lower long-term costs by preventing issues. | Higher costs due to fixing issues post-occurrence.
Effectiveness | More effective at reducing the likelihood of risks. | Effective at addressing immediate threats.
Resources Required | Requires more resources upfront for planning. | Fewer resources needed until a problem arises.
Risk Handling | Anticipates and mitigates risks in advance. | Deals with risks after they have materialized.
Flexibility | May be less flexible due to extensive planning. | Highly flexible and adaptable to changes.
Stakeholder Confidence | Builds greater stakeholder confidence in project success. | May reduce confidence if risks are not handled well.

Software Risks

Software Risks refer to potential issues or uncertainties that may arise during the development,
implementation, or maintenance of a software project, and can negatively affect its success.
These risks can lead to project delays, cost overruns, poor quality, or even project failure if not
identified and managed properly.

Software risks can be broadly classified into several categories based on their nature and
source. Understanding and addressing these risks is crucial for ensuring a successful software
project.

Types of Software Risks

1. Technical Risks

o Description: These risks arise from the technical aspects of the software, such as
design, development, testing, or technology.

o Examples:

 Unclear Requirements: If requirements are not well-defined or misunderstood, the software might not meet user expectations.

 Complexity of the Technology: Using unfamiliar or cutting-edge technology can lead to integration issues or difficulties in development.

 Inadequate Architecture: Poorly designed software architecture might cause scalability issues, performance bottlenecks, or maintainability challenges.

 Integration Problems: Difficulty in integrating various subsystems or third-party components, especially when using incompatible technologies.

2. Project Management Risks

o Description: Risks related to project scheduling, budgeting, resource allocation, and overall management.

o Examples:

 Schedule Delays: Delays in development due to underestimating the time required or overestimating team capacity.

 Cost Overruns: Spending more resources than initially planned, which could happen if the scope increases or if risks are not managed efficiently.

 Scope Creep: The uncontrolled expansion of the project’s scope, usually caused by changing or unclear requirements.

 Lack of Resources: Insufficient human, financial, or technical resources leading to bottlenecks or incomplete work.

3. Human Risks

o Description: Risks associated with people involved in the software project, including developers, managers, and stakeholders.

o Examples:

 Skill Shortages: The development team lacks the necessary expertise or experience to address certain technical challenges.

 Team Turnover: High employee turnover, which can lead to a loss of knowledge and experience, affecting continuity and productivity.



 Communication Breakdown: Miscommunication between team members,
stakeholders, or between different teams, leading to misunderstandings and
misaligned objectives.

 Resistance to Change: Teams or stakeholders may resist adopting new technologies or processes that are crucial for the project's success.

4. Business and Market Risks

o Description: Risks arising from external factors, including market conditions, customer needs, or business goals.

o Examples:

 Changing Market Conditions: Shifting customer demands, competitive pressures, or economic downturns may render the software obsolete or irrelevant.

 Misalignment with Business Objectives: The software may fail to address the business needs it was designed for or may not provide the expected value.

 Regulatory Changes: New laws or regulations might require changes to the software, increasing costs or delaying the release.

5. Operational Risks

o Description: Risks related to the software's performance in a production environment, including reliability, security, and support.

o Examples:

 Performance Issues: The software might not perform optimally under real-
world conditions, leading to slow response times, system crashes, or user
dissatisfaction.

 Security Vulnerabilities: Unpatched security flaws that could lead to data breaches, unauthorized access, or other cyber threats.

 Data Integrity: Risks to data quality, loss, or corruption during data processing or storage.



 Operational Failures: Problems with the deployment, hosting, or
maintenance of the software, leading to downtime or service disruptions.

6. Quality Risks

o Description: Risks associated with the quality of the software product, affecting its
functionality, usability, and maintainability.

o Examples:

 Defects and Bugs: Undetected defects that could affect the software’s
functionality or cause failures in certain scenarios.

 Inadequate Testing: Incomplete or insufficient testing could lead to undetected bugs or poor software quality.

 Lack of Documentation: Insufficient or outdated documentation for the software, making it hard to maintain or troubleshoot.

 Usability Issues: The software might be difficult to use, resulting in a poor user experience.

7. Legal and Compliance Risks

o Description: Risks related to legal, ethical, and compliance issues, including intellectual property (IP) rights, licensing, and adherence to industry standards.

o Examples:

 Intellectual Property Violations: Using third-party libraries or code without proper licensing or violating patents.

 Data Privacy Issues: Failing to comply with data protection regulations (e.g., GDPR) when handling sensitive user information.

 Non-Compliance: Not adhering to industry-specific standards, regulations, or best practices could result in penalties, reputational damage, or legal action.

Common Causes of Software Risks



1. Unclear Requirements: If requirements are not well understood or documented, it can
lead to misalignment between the software's functionality and user needs, creating a
high-risk environment.

2. Lack of Stakeholder Engagement: Inadequate involvement of stakeholders, including end-users or business managers, can result in software that does not meet expectations or does not address key business needs.

3. Inexperienced Development Team: An inexperienced team may not be able to anticipate challenges or implement solutions efficiently, leading to increased risks during development.

4. Technological Changes: Rapid technological advancements or adopting new technologies can introduce risks related to compatibility, training, and implementation challenges.

5. Poor Communication: Misunderstandings, unclear objectives, and misaligned priorities between teams or with stakeholders can escalate risks significantly.

Risk Identification

Risk Identification is the process of recognizing potential risks that could affect a project,
including its objectives, timelines, quality, cost, and scope. It is one of the most critical steps in
risk management, as early identification allows teams to address risks before they escalate into
bigger problems.

In software development, risk identification helps in proactively identifying obstacles or uncertainties that could hinder the successful completion of the project. This process is not limited to technical issues but also includes management, operational, and environmental risks.

Steps in Risk Identification

1. Define the Risk Management Process:

o Establish clear objectives for identifying risks, including understanding the scope,
requirements, and constraints of the project.

o Set up a risk management team or identify responsible individuals.

2. Collect Data:



o Gather data from various sources, including project documentation, requirements,
design specifications, historical data from similar projects, and insights from
stakeholders.

o Look for any changes in project scope, team structure, or external conditions that
could lead to new risks.

3. Brainstorming:

o Organize brainstorming sessions with the project team and stakeholders (e.g.,
developers, business analysts, quality assurance team, end-users) to gather
different perspectives on potential risks.

o Use methods such as SWOT Analysis (Strengths, Weaknesses, Opportunities, Threats) or Cause-and-Effect Diagrams (Fishbone) to identify risks systematically.

4. Use Historical Data and Expert Judgment:

o Analyze data from previous projects, similar products, or industry reports to predict potential risks.

o Leverage the experience of senior team members or subject-matter experts who can anticipate common risks in similar situations.

5. Identify Categories of Risks:

o Group risks into specific categories for better organization and understanding.
These could include:

 Technical Risks: Related to software design, architecture, technology, or platform integration.

 Project Management Risks: Time, cost, scope, resource allocation, and team issues.

 Operational Risks: Challenges related to deployment, maintenance, and support.

 Business Risks: Market changes, regulatory compliance, or misalignment with business objectives.



 External Risks: Environmental factors, legal risks, or risks arising from
third-party vendors.

6. Identify Potential Risk Triggers:

o Recognize specific triggers or early-warning signs that indicate the onset of risks.
For instance, delayed deliverables might trigger schedule risks, or new legislative
changes might introduce compliance risks.

7. Involve Stakeholders:

o Engage project stakeholders (clients, users, developers, testers) to identify risks from their perspectives. Stakeholders can provide valuable insights into operational, market, and usability risks.

o Use surveys or interviews to gather inputs from different departments, such as marketing, legal, or finance.

Tools and Techniques for Risk Identification

1. Checklists:

o Use predefined risk checklists, based on industry standards or previous project experiences, to identify common risks associated with software development.

2. Risk Breakdown Structure (RBS):

o A hierarchical framework that categorizes and organizes risks into different levels, helping to systematically identify potential risks across different categories.

o The RBS helps in visualizing how risks are structured and how they relate to each
other.

3. Interviews and Surveys:

o Conduct interviews with subject matter experts, team members, and stakeholders
to uncover hidden risks.

o Surveys can help gather opinions from a larger group of stakeholders on the
potential risks they foresee.

4. Delphi Technique:



o A structured communication method where a panel of experts independently
answers questionnaires, and feedback is provided after each round to refine the
identification process.

5. Flowcharts and Diagrams:

o Use flowcharts or process maps to visually represent software development workflows and highlight where risks may occur.

o Tools like Fishbone Diagrams (Ishikawa) can help identify causes and effects of
potential risks.

6. Risk Register:

o Maintain a Risk Register where all identified risks are recorded, categorized, and
tracked throughout the project. This document is continuously updated and
reviewed.

7. SWOT Analysis:

o Analyzing the project's Strengths, Weaknesses, Opportunities, and Threats helps in identifying internal and external factors that could pose risks.

8. Expert Judgment:

o Seek input from experienced professionals or consultants who have encountered similar challenges in past projects.

9. Mind Mapping:

o Use mind maps to explore the potential risks and their relationships in a non-
linear way, encouraging creative identification of risks across different aspects of
the project.

Common Categories of Software Risks

1. Technical Risks:

o Technology limitations: Using unproven or emerging technologies that may have hidden challenges.

o Integration issues: Difficulty in integrating with existing systems, hardware, or third-party services.
o Design flaws: The software architecture or design may not be scalable or may not
meet user expectations.

o Software defects: Undetected bugs or coding errors that could cause functionality
issues.

2. Management Risks:

o Schedule delays: Missing deadlines due to underestimating task complexity or poor time management.

o Budget overruns: Exceeding the budget due to scope creep, resource mismanagement, or unforeseen technical challenges.

o Scope creep: Uncontrolled changes or additions to the project scope that could
result in delays or budget increases.

3. Operational Risks:

o Deployment failures: Issues arising during the deployment phase that might
prevent the software from going live.

o Maintenance challenges: Post-release maintenance may be more complex or costly than anticipated, leading to ongoing risks.

o Infrastructure problems: Server failures, lack of scalability, or problems with network availability can disrupt software operations.

4. Business Risks:

o Market changes: Changes in market trends, customer demands, or competitor
activity could make the software irrelevant.

o Regulatory changes: New laws or regulations that require modifications to the
software or delay its release.

o User acceptance: Users may resist adopting the software due to usability issues,
which can impact the overall success of the project.

5. External Risks:

o Legal risks: Violations of intellectual property rights or non-compliance with legal
regulations such as data privacy laws.



o Economic risks: Economic downturns or shifts that could affect the funding or
prioritization of the project.

o Vendor risks: Problems with third-party vendors or external dependencies (e.g.,
delays in delivery of essential components or services).

Risk Projection

Risk Projection is the process of predicting the future impact of identified risks and estimating
the potential outcomes or consequences of those risks on a software project. It involves
assessing how a risk might evolve over time and estimating the likelihood of its occurrence, its
potential severity, and how it could affect various aspects of the project (e.g., timeline, budget,
scope, quality). The goal is to prioritize risks based on their potential impact and take
appropriate actions to mitigate them before they affect the project.

Risk projection helps in decision-making by giving stakeholders a clear view of potential future
challenges and by allowing them to allocate resources effectively to address the most critical
risks.

Steps in Risk Projection

1. Risk Estimation:

o Estimate the probability (likelihood) and impact (severity) of each identified
risk. This helps in determining how likely the risk is to occur and how severely it
will affect the project.

o Use quantitative or qualitative methods for estimation:

 Qualitative Risk Estimation: Descriptive terms such as "high," "medium,"
and "low" to classify the likelihood and impact.

 Quantitative Risk Estimation: Use numerical values or probability
distributions (e.g., 30% chance of risk occurrence) to estimate the
likelihood and impact.

2. Risk Prioritization:

o Rank risks based on their severity and probability to determine which risks need
the most attention.



o High-probability and high-impact risks should be given the highest priority, while
low-probability and low-impact risks can be addressed later or monitored
throughout the project.

3. Impact Analysis:

o For each risk, perform a detailed impact analysis to determine how it will affect
different project components (e.g., schedule, resources, costs, quality).

o Consider both direct impacts (e.g., a delay in the development phase) and
indirect impacts (e.g., the effect of delays on the overall project timeline or
customer satisfaction).

4. Time Horizon Projection:

o Estimate when the risks are likely to occur during the project lifecycle. Some risks
may be immediate, while others could arise later in the development, testing, or
deployment stages.

o Use time-based projections to understand the risk’s potential impact over the
short term and long term.

5. Use of Risk Matrices:

o Create a Risk Matrix or Risk Heat Map, which is a visual representation of risk
likelihood versus impact. This helps in understanding the risk profile of the
project:

 X-axis: Likelihood (Low to High)

 Y-axis: Impact (Low to High)

 Each risk is placed in one of the four quadrants: low impact/low likelihood,
low impact/high likelihood, high impact/low likelihood, high impact/high
likelihood.

o The risks that fall into the "high impact/high likelihood" quadrant require
immediate attention.

6. Scenario Modeling:



o Use what-if analysis or Monte Carlo simulations to model different risk
scenarios. This involves creating different project scenarios with varying risk
probabilities to understand the range of potential outcomes.

o Monte Carlo simulation: A technique that uses random sampling to simulate a
range of possible outcomes and calculate the likelihood of different risk scenarios
occurring.

7. Risk Thresholds:

o Establish thresholds for acceptable risk levels. Risks that exceed these thresholds
may require immediate mitigation strategies, while those within acceptable limits
can be monitored.

o For example, a project may have an acceptable cost variance of 5%. If a risk
could cause a cost overrun greater than 5%, it needs to be flagged for
further mitigation.
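The estimation, prioritization, and matrix steps above are often quantified as risk exposure, RE = P × C (probability of the risk times the cost if it occurs). The sketch below is a minimal illustration in Python; all risk names, probabilities, and costs are invented, and the 0.5 quadrant threshold is an assumption, not a standard value.

```python
# Risk exposure and risk-matrix quadrants (illustrative values only).

def risk_exposure(probability, cost):
    """Expected loss attributable to a risk: RE = P x C."""
    return probability * cost

def matrix_quadrant(probability, impact, threshold=0.5):
    """Place a risk in one of the four risk-matrix quadrants."""
    p = "high" if probability >= threshold else "low"
    i = "high" if impact >= threshold else "low"
    return f"{i} impact/{p} likelihood"

risks = [
    # (name, probability 0-1, normalized impact 0-1, cost if it occurs)
    ("Vendor delay",    0.7, 0.6, 20000),
    ("Design flaw",     0.3, 0.9, 50000),
    ("Minor UI defect", 0.4, 0.2,  2000),
]

# Rank by exposure so the highest-priority risks come first.
ranked = sorted(risks, key=lambda r: risk_exposure(r[1], r[3]), reverse=True)
for name, p, impact, cost in ranked:
    print(name, matrix_quadrant(p, impact), risk_exposure(p, cost))
```

Note that a low-probability risk ("Design flaw") can still rank first once its cost is factored in, which is exactly why exposure, not probability alone, drives prioritization.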

Risk Refinement

Risk Refinement is the process of continuously improving and refining the understanding of
risks throughout the software development lifecycle. After initial identification and projection of
risks, risk refinement involves further breaking down, analyzing, and evaluating the risks to
gain a deeper understanding of their potential impact and likelihood. This process helps refine
mitigation strategies, update risk responses, and ensure that risks are managed effectively as
the project progresses.

Risk refinement typically occurs in iterative phases, with the level of detail and accuracy
increasing over time as more information becomes available. It also helps in identifying new
risks that may arise and adjusting existing risk management plans.

Steps in Risk Refinement

1. Review and Reassess Risks:

o After the initial identification and projection phases, continuously review the
identified risks to assess if there have been any changes in the project or
environment that could alter their impact or probability.

o Reassess risks at regular intervals or after significant project milestones, like
design reviews, prototyping, or testing phases.



2. Decompose Risks into Sub-Risks:

o Break down broad, high-level risks into smaller, more manageable sub-risks. This
allows for a better understanding of the factors contributing to the overall risk and
enables more precise mitigation actions.

o For example, if a major technical risk is identified (e.g., integration issues with a
third-party service), it can be refined into specific sub-risks such as "failure to
meet API standards" or "lack of available technical support."

3. Evaluate the Risk Trigger Events:

o Refine the identification of specific risk triggers that indicate when a risk is
becoming more likely to occur. A risk trigger is an event or condition that signals
the possibility of the risk happening.

o For example, for a schedule risk, a potential trigger could be delayed completion of
a critical task in the project schedule.

4. Quantify Risks with More Precision:

o If the risk estimates in the initial stages were qualitative (e.g., high, medium, low),
refine them by applying quantitative measures such as probability distributions
or cost estimates. This allows for a more accurate evaluation of the risk
exposure.

o This can be done using tools like Monte Carlo simulations, which provide
statistical estimates for risk outcomes based on different variables.

5. Update the Risk Matrix:

o Regularly update the Risk Matrix (or Risk Heat Map) based on refined risk
projections. This involves placing risks in a visual matrix with axes for likelihood
and impact, and re-prioritizing them accordingly.

o As new risks are identified or the probability and impact of existing risks change,
the matrix should be updated to reflect the most current state of risk
management.

6. Review Mitigation Plans:



o Revisit the mitigation strategies for each risk and refine them based on new
insights or changes in the project. This may involve adjusting existing plans or
developing new strategies to address risks more effectively.

o For example, if a risk related to project delays becomes more likely, refine the
mitigation plan by allocating additional resources or adjusting timelines.

7. Continuous Monitoring:

o Risk refinement is an ongoing process. Continuously monitor risks throughout
the project to ensure that new risks are identified and that existing risks are
appropriately managed.

o Use tools like risk tracking software, dashboard visualizations, or periodic risk
review meetings to keep stakeholders informed.

8. Evaluate Risk Interdependencies:

o As risks are refined, assess how they are interrelated or whether one risk could
trigger another. Risk interdependencies can magnify or reduce the impact of
certain risks.

o For example, a delay in software development could increase the likelihood of
schedule risks but could also lead to cost overruns, which in turn could trigger
resource allocation risks.

9. Apply Sensitivity Analysis:

o Sensitivity analysis can help determine how changes in certain risk factors will
affect the overall project. For instance, evaluating how sensitive the project is to
delays or cost overruns can help refine mitigation actions and project planning.

o This helps in understanding the “most critical” risks that require the most
attention and how minor adjustments can significantly reduce overall project
risks.

10. Incorporate Lessons Learned:

o If the project is part of an ongoing program, refine risk management by
incorporating lessons learned from previous projects. Historical data, past risk
events, and resolutions can offer valuable insights into potential risks and improve
the overall risk management approach.
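The Monte Carlo simulations mentioned in the projection and refinement steps above can be sketched in a few lines. This is a toy model, not a production scheduler: the base duration, deadline, delay range, and risk probability are all invented figures.

```python
import random

# Monte Carlo sketch: estimate the chance a project misses its deadline
# when one schedule risk adds a random delay with some probability.

def simulate_schedule(base_days=60, deadline=70, risk_prob=0.4,
                      delay_range=(5, 25), trials=10_000, seed=42):
    rng = random.Random(seed)          # seeded for repeatability
    misses = 0
    for _ in range(trials):
        duration = base_days
        if rng.random() < risk_prob:   # the risk materializes in this trial
            duration += rng.uniform(*delay_range)
        if duration > deadline:
            misses += 1
    return misses / trials             # estimated P(miss the deadline)

print(round(simulate_schedule(), 3))
```

Running more trials narrows the estimate; in practice each identified risk would contribute its own delay distribution rather than the single one modeled here.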

RMMM

RMMM stands for Risk Mitigation, Monitoring, and Management. It is a strategy used in
project management, particularly in software development and other engineering fields, to
systematically handle risks throughout a project's lifecycle. The RMMM process ensures that
potential risks are proactively addressed and monitored, minimizing the impact on the project
and enabling the team to maintain control over uncertainties.

Key Components of RMMM

1. Risk Mitigation:

o Risk Mitigation refers to the actions and strategies taken to reduce the likelihood
or impact of identified risks. The goal of mitigation is to reduce the severity of a
risk's effect or prevent it from happening altogether.

o Mitigation strategies can include:

 Preventive actions: Taking proactive steps to avoid the occurrence of the
risk. For example, adopting best practices to avoid software defects or using
proven technologies to reduce technical risks.

 Risk avoidance: Changing project scope or approach to eliminate the risk
or reduce its likelihood of occurrence.

 Risk transfer: Shifting the impact of the risk to another party (e.g.,
outsourcing a high-risk task to a more experienced vendor).

 Risk acceptance: Deciding to accept the risk and its potential impact,
often with the contingency in place to address it if it occurs.

2. Risk Monitoring:

o Risk Monitoring involves tracking identified risks, keeping an eye on potential
new risks, and assessing the effectiveness of mitigation efforts over time.

o Regular monitoring is essential for:

 Tracking risk triggers: Monitoring specific indicators that could suggest
a risk is becoming more likely or severe.



 Updating risk status: Reassessing risk severity and likelihood as the
project progresses, and adjusting mitigation strategies as needed.

 Documentation: Keeping detailed records of risks, mitigation actions, and
status updates for future reference and decision-making.

3. Risk Management:

o Risk Management refers to the overall process of identifying, assessing,
prioritizing, and responding to risks in the context of a project. It involves
planning and implementing actions to mitigate risks and monitoring the project
for any new risks that may arise.

o Risk management includes:

 Risk identification: Identifying and categorizing all possible risks at the
beginning of the project.

 Risk assessment: Estimating the probability, impact, and exposure of
identified risks.

 Risk response planning: Developing strategies to deal with risks, either
by mitigating, avoiding, transferring, or accepting them.

 Risk review: Regularly reviewing risk management strategies and
updating them based on changes in the project environment.

RMMM PLAN

An RMMM Plan (Risk Mitigation, Monitoring, and Management Plan) is a structured
approach to identifying, mitigating, and managing risks throughout the lifecycle of a project. It
ensures that risks are properly handled, reducing their potential negative impacts on the project.
The plan provides clear guidance on how risks will be addressed, monitored, and tracked during
the project's execution.

Components of an RMMM Plan

1. Risk Identification:

o Objective: Identify potential risks that may affect the project’s success.

o Process: This step involves brainstorming, expert judgment, historical data
analysis, and other techniques to identify both internal and external risks.



o Tools/Techniques: Risk workshops, surveys, expert opinions, SWOT analysis,
cause-and-effect diagrams.

o Example: "Risk of third-party vendor delays."

2. Risk Assessment:

o Objective: Assess the likelihood and impact of each identified risk.

o Process: After identifying the risks, each risk is evaluated in terms of its
probability of occurrence and potential impact on the project.

o Risk Assessment Matrix: This matrix ranks risks based on their probability and
impact, often classified as high, medium, or low.

o Tools/Techniques: Probability and Impact Matrix, Risk Assessment Charts.

o Example: "The likelihood of the vendor delay is high, but the impact on the
schedule is medium."

3. Risk Mitigation Strategies:

o Objective: Develop plans and strategies to reduce or eliminate the identified risks.

o Process: For each high-priority risk, define mitigation actions that can reduce the
likelihood or minimize the impact. Mitigation can involve preventive, corrective,
or contingency actions.

o Strategies:

 Risk Avoidance: Alter the project plan to eliminate the risk.

 Risk Reduction: Reduce the impact or likelihood of the risk.

 Risk Transfer: Shift the risk to another party (e.g., outsourcing,
insurance).

 Risk Acceptance: Acknowledge the risk and prepare contingency plans if
it occurs.

o Example: "Mitigate vendor delays by negotiating penalties for late delivery, or find
an alternative supplier."

4. Risk Monitoring:



o Objective: Continuously monitor risks to track any changes in their status, detect
new risks, and evaluate the effectiveness of mitigation measures.

o Process: Regularly review risk status, update risk assessments, and adapt
strategies as necessary. Monitoring also involves identifying risk triggers—events
or conditions that signal the risk is likely to occur.

o Tools/Techniques: Risk Tracking Software, Regular risk review meetings,
Dashboards, Risk Registers.

o Example: "Monitor the vendor’s progress and track any delay in their delivery
schedule. Review risk triggers monthly."

5. Risk Management Plan:

o Objective: Provide the overall framework and governance structure for handling
risks throughout the project lifecycle.

o Components:

 Roles and Responsibilities: Assign risk owners who are responsible for
managing specific risks.

 Risk Tolerance: Define the acceptable level of risk for the project.

 Risk Review Frequency: Determine how often risks will be reviewed
(e.g., weekly, monthly).

 Contingency Planning: Define actions to take if a risk materializes.

o Example: "Project Manager will be responsible for overseeing the implementation
of the risk management process. Risk reviews will be conducted during the
weekly status meetings."

6. Risk Documentation (Risk Register):

o Objective: Maintain a comprehensive record of all identified risks, assessments,
mitigation actions, and monitoring outcomes.

o Process: Document all risks, their status, mitigation actions, owners, and triggers
in a centralized Risk Register.

o Content:

 Risk ID
 Risk Description

 Likelihood and Impact Assessment

 Mitigation Strategies

 Risk Owners

 Status/Progress Updates

o Tools/Techniques: Risk Register, Project Management Software.

o Example: A risk register entry for "Vendor Delay" might look like:

 Risk ID: R1

 Risk Description: Delay in delivery of third-party components

 Likelihood: High

 Impact: Medium

 Mitigation: Establish penalties in the contract and explore alternative
suppliers.

 Risk Owner: Procurement Manager

 Status: Ongoing (reviewed monthly)
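A risk register entry like the one above maps naturally onto a simple record type. The sketch below is illustrative only; the field names mirror the register columns described in the text, and the values are the "Vendor Delay" example:

```python
from dataclasses import dataclass

# Minimal in-memory risk register (illustrative, not a real tool).

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    likelihood: str     # "Low" / "Medium" / "High"
    impact: str
    mitigation: str
    owner: str
    status: str = "Open"

register = [
    RiskEntry(
        risk_id="R1",
        description="Delay in delivery of third-party components",
        likelihood="High",
        impact="Medium",
        mitigation="Contract penalties; explore alternative suppliers",
        owner="Procurement Manager",
        status="Ongoing (reviewed monthly)",
    )
]

# A review meeting might start by listing the high-likelihood entries.
hot = [r.risk_id for r in register if r.likelihood == "High"]
print(hot)
```

In practice the register lives in project-management software or a spreadsheet, but the columns and the "filter by likelihood" query are the same idea.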

Quality Management

Quality Concepts

Quality Management is also called Software Quality Assurance (SQA).

• Serves as an umbrella activity that is applied throughout the software process


• Involves doing the software development correctly versus doing it over again
• Reduces the amount of rework, which results in lower costs and improved time to market

1. Quality
2. Quality Control
3. Quality assurance
4. Cost of quality
Two kinds of quality may be encountered:
 Quality of design
 Quality of conformance

Quality of Design

Quality of design refers to the characteristics that designers specify for an item. The grade
of materials, tolerances, and performance specifications all contribute to quality of design.

Quality of Conformance

Quality of Conformance is the degree to which the design specifications are followed
during manufacturing.

Quality Control (QC)

QC is the series of inspections, reviews, and tests used throughout the development cycle
to ensure that each work product meets the requirements placed upon it.

QC includes a feedback loop to the process that created the work product.

Quality Assurance (QA)

• Consists of a set of auditing and reporting functions that assess the effectiveness and
completeness of quality control activities.
• Provides management personnel with data that provides insight into the quality of the
products.
• Alerts management personnel to quality problems so that they can apply the necessary
resources to resolve quality issues.
Cost of Quality

Includes all costs incurred in the pursuit of quality or in performing quality-related
activities.

Involves various kinds of quality costs.

• Prevention costs
Quality planning, formal technical reviews, test equipment, training

• Appraisal costs
Inspections, equipment calibration and maintenance, testing

• Failure costs – subdivided into internal failure costs and external failure costs



Internal failure costs

• Incurred when an error is detected in a product prior to shipment.


• Include rework, repair, and failure mode analysis.
External failure costs

• Involves defects found after the product has been shipped


• Include complaint resolution, product return and replacement, help line
support, and warranty work.

Software Quality Assurance

Conformance to explicitly stated functional and performance requirements, explicitly
documented development standards, and implicit characteristics that are expected of all
professionally developed software.

The above definition emphasizes three points

(1) Software requirements are the foundation from which quality is measured; lack of
conformance to requirements is lack of quality.

(2) Specified standards define a set of development criteria that guide the manner in
which software is engineered; if the criteria are not followed, lack of quality will
almost surely result.

(3) A set of implicit requirements often goes unmentioned; if software fails to meet
implicit requirements, software quality is suspect.

(i) SQA Activities

• Prepares an SQA plan for a project.


• Participates in the development of the project's software process description.
• Reviews software engineering activities to verify compliance with the defined software
process.
• Audits designated software work products to verify compliance with those defined as part
of the software process.



• Ensures that deviations in software work and work products are documented and
handled according to a documented procedure.
• Records any noncompliance and reports to senior management.
• Coordinates the control and management of change.
• Helps to collect and analyze software metrics.

(ii) Software Reviews

Software reviews are a “filter” for the software engineering process. That is, reviews
are applied at various points during the software development process and serve to
uncover errors that can then be removed.

Software reviews serve to “Purify” the software analysis, design, coding, and testing
activities.

• Catch large classes of errors that escape the originator more than other
practitioners
• Include the formal technical review (also called a walkthrough or inspection)
– Acts as the most effective SQA filter
– Conducted by software engineers for software engineers
– Effectively uncovers errors and improves software quality
– Has been shown to be up to 75% effective in uncovering design flaws
(which constitute 50-65% of all errors in software)
• Require the software engineers to expend time and effort, and the organization
to cover the costs.

Formal Technical Review (FTR)

Formal Technical Review is an SQA activity that is performed by software engineers.

Objectives of FTR are

 To uncover errors in function, logic, or implementation for any representation of the
software.
 To verify that the software under review meets its requirements.
 To ensure that the software has been represented according to predefined standards.
 To achieve software that is developed in a uniform manner.
 To make projects more manageable.

In addition, the FTR serves as a training ground for junior software engineers to observe
different approaches to software analysis, design, and construction.

Promotes backup and continuity because a number of people become familiar with other
parts of the software.

May sometimes be a sample-driven review.

 Project managers must quantify those work products that are the primary targets for
formal technical reviews.
 The sample of products that are reviewed must be representative of the products as a
whole.

1) Review Meeting

• Has the following constraints


– From 3-5 people should be involved
– Advance preparation (i.e., reading) should occur for each participant but should
require no more than two hours apiece and involve only a small subset of
components
– The duration of the meeting should be less than two hours
• Focuses on a specific work product (a software requirements specification, a detailed
design, a source code listing)
• Activities before the meeting
– The producer informs the project manager that a work product is complete and
ready for review
– The project manager contacts a review leader, who evaluates the product for
readiness, generates copies of product materials, and distributes them to the
reviewers for advance preparation
– Each reviewer spends one to two hours reviewing the product and making notes
before the actual review meeting
– The review leader establishes an agenda for the review meeting and schedules the
time and location



• Activities during the meeting
– The meeting is attended by the review leader, all reviewers, and the producer
– One of the reviewers also serves as the recorder for all issues and decisions
concerning the product
– After a brief introduction by the review leader, the producer proceeds to "walk
through" the work product while reviewers ask questions and raise issues
– The recorder notes any valid problems or errors that are discovered; no time or
effort is spent in this meeting to solve any of these problems or errors
• Activities at the conclusion of the meeting
– All attendees must decide whether to
• Accept the product without further modification
• Reject the product due to severe errors (After these errors are corrected,
another review will then occur)
• Accept the product provisionally (Minor errors need to be corrected but
no additional review is required)
– All attendees then complete a sign-off in which they indicate that they took part in
the review and that they concur with the findings
• Activities following the meeting
– The recorder produces a list of review issues that
• Identifies problem areas within the product
• Serves as an action item checklist to guide the producer in making
corrections
– The recorder includes the list in an FTR summary report
• This one to two-page report describes what was reviewed, who reviewed
it, and what were the findings and conclusions
– The review leader follows up on the findings to ensure that the producer makes
the requested corrections

(i) FTR Guidelines

1) Review the product, not the producer.


2) Set an agenda and maintain it.
3) Limit debate and rebuttal; conduct in-depth discussions off-line.
4) Enunciate problem areas, but don't attempt to solve the problem noted.
5) Take written notes; utilize a wall board to capture comments.
6) Limit the number of participants and insist upon advance preparation.
7) Develop a checklist for each product in order to structure and focus the review.
8) Allocate resources and schedule time for FTRs.
9) Conduct meaningful training for all reviewers.
10) Review your earlier reviews to improve the overall review process.

Statistical Software Quality Assurance

Statistical quality assurance implies the following steps.

1) Collect and categorize information (i.e., causes) about software defects that occur.
2) Attempt to trace each defect to its underlying cause (e.g., nonconformance to
specifications, design error, violation of standards, poor communication with the
customer).
3) Using the Pareto principle (80% of defects can be traced to 20% of all causes), isolate the
20%.
Although hundreds of errors are uncovered all can be tracked to one of the following causes.

• Incomplete or erroneous specifications.


• Misinterpretation of customer communication.
• Intentional deviation from specifications.
• Violation of programming standards.
• Errors in data representation.
• Inconsistent component interface.
• Errors in design logic.
• Incomplete or erroneous testing.
• Inaccurate or incomplete documentation.
• Errors in programming language translation of design.
• Ambiguous or inconsistent human/computer interface.
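The Pareto step above, isolating the "vital few" causes behind roughly 80% of defects, can be sketched in a few lines. The defect counts per cause below are invented purely for illustration:

```python
from collections import Counter

# Pareto sketch: find the few causes behind ~80% of defects.
# Cause names come from the list above; the counts are invented.
defect_log = Counter({
    "Incomplete or erroneous specifications": 120,
    "Misinterpretation of customer communication": 70,
    "Errors in design logic": 40,
    "Violation of programming standards": 15,
    "Errors in data representation": 10,
    "Inaccurate or incomplete documentation": 5,
})

total = sum(defect_log.values())
cumulative, vital_few = 0, []
for cause, count in defect_log.most_common():   # largest counts first
    cumulative += count
    vital_few.append(cause)
    if cumulative / total >= 0.8:               # stop at 80% coverage
        break

print(vital_few)
```

With these sample counts, three of the six causes already account for more than 80% of the defects, which is where corrective effort would be focused first.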
Six Sigma
 Popularized by Motorola in the 1980s
 Is the most widely used strategy for statistical quality assurance
 Uses data and statistical analysis to measure and improve a company's operational
performance



 Identifies and eliminates defects in manufacturing and service-related processes
 The name "Six Sigma" refers to six standard deviations (3.4 defects per million opportunities)
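The Six Sigma target is usually expressed as DPMO (defects per million opportunities). The arithmetic is simple; the sample figures below are invented for illustration:

```python
# DPMO: defects per million opportunities, the metric behind the
# Six Sigma target of 3.4 DPMO. Sample figures are invented.

def dpmo(defects, units, opportunities_per_unit):
    return defects / (units * opportunities_per_unit) * 1_000_000

# e.g. 17 defects found in 1,000 modules with 50 defect
# opportunities each -> 340.0 DPMO, far from the 3.4 target.
print(dpmo(17, 1000, 50))
```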

Software reliability

Software Reliability refers to the probability of a software application or system
performing its required functions without failure over a specified period under certain
conditions. It is a key aspect of software quality, ensuring that a system operates consistently
and correctly as intended. The reliability of software is measured in terms of its ability to
perform without encountering defects, errors, or failures that affect its functionality,
performance, or user experience.

Key Concepts of Software Reliability

1. Reliability Definition:

o The reliability of software is the likelihood that the software will function without
failure under normal operational conditions for a defined period. This is usually
expressed as a probability or percentage, with a higher value representing more
reliable software.

2. Failure:

o A failure occurs when the software behaves incorrectly or produces unintended
results due to a defect, error, or unexpected situation. A failure can be caused by
various factors such as coding bugs, hardware failures, environmental issues, or
unanticipated user inputs.

3. Mean Time Between Failures (MTBF):

o MTBF is a commonly used metric for measuring software reliability. It is the
average time between two consecutive failures in the software's operation.

4. Mean Time to Failure (MTTF):

o MTTF is used to predict the time until the first failure of a software system or
component. It is typically applied to non-repairable systems where failure cannot
be fixed, but components may be replaced.

5. Failure Rate:



o The failure rate of a system indicates how often software failures occur. It is the
inverse of MTBF and is typically expressed as failures per unit of time (e.g.,
failures per hour or per day).

6. Software Fault vs. Software Failure:

o A fault is a defect in the software code or design that may potentially lead to a
failure, whereas a failure occurs when the fault causes the software to behave
incorrectly or undesirably during execution.
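The MTBF and failure-rate definitions above reduce to simple arithmetic: MTBF = total operating time / number of failures, and failure rate = 1 / MTBF. A small sketch with a hypothetical failure log:

```python
# Reliability metrics from a (hypothetical) failure log.

failure_times_hours = [120, 310, 515, 790]   # hours at which failures occurred

def mtbf(total_hours, failures):
    """Mean Time Between Failures: operating time per failure."""
    return total_hours / failures

def failure_rate(mtbf_hours):
    """Failure rate is the inverse of MTBF (failures per hour)."""
    return 1 / mtbf_hours

# 800 hours of observed operation, 4 failures -> MTBF of 200 hours,
# i.e. a failure rate of 0.005 failures per hour.
observed_mtbf = mtbf(800, len(failure_times_hours))
print(observed_mtbf, failure_rate(observed_mtbf))
```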

Factors Affecting Software Reliability

1. Code Quality:

o Poorly written code or untested code is more likely to contain defects that lead to
software failures. Using coding standards, code reviews, and unit testing helps
improve code reliability.

2. Software Testing:

o The extent and effectiveness of testing directly impact software reliability.
Comprehensive testing, including functional testing, stress testing, performance
testing, and fault tolerance testing, helps uncover issues before deployment.

o Automated testing tools and continuous integration can help ensure that software
reliability is maintained throughout the development process.

3. Complexity of the System:

o As the software system becomes more complex, the chances of introducing errors
or failures increase. Proper system design, modularization, and maintaining low
complexity can improve reliability.

4. Environmental Factors:

o The conditions under which the software operates (hardware, network
conditions, user environments) can affect reliability. Software that is sensitive to
changes in these conditions may experience failures more often.

5. Error Handling:

o Robust error handling can prevent software from failing when unexpected
conditions arise. Proper logging, exception handling, and fallback mechanisms
can ensure the system continues to function correctly or gracefully handles
errors.

6. Maintenance:

o Regular maintenance and updates to address issues, patch vulnerabilities, and
improve performance contribute to the overall reliability of software. Infrequent
updates can lead to system failures as the software becomes outdated or
incompatible with evolving hardware or operating systems.

The ISO 9000 quality standards

ISO 9000 is a set of international standards on quality management and quality
assurance developed to help companies effectively document the quality system elements to be
implemented to maintain an efficient quality system. They are not specific to any one industry
and can be applied to organizations of any size.

ISO 9000 can help a company satisfy its customers, meet regulatory requirements, and
achieve continual improvement.
ISO 9000 Series standards

The ISO 9000 family contains these standards:

 ISO 9001:2015: Quality management systems - Requirements


 ISO 9000:2015: Quality management systems - Fundamentals and vocabulary
 ISO 9004:2009: Quality management systems – Managing for the sustained success of an
organization (continuous improvement)
 ISO 19011:2011: Guidelines for auditing management systems

ISO 9000 certification

Individuals and organizations cannot be certified to ISO 9000. ISO 9001 is the only
standard within the ISO 9000 family to which organizations can certify.

ISO 9000 principles of quality management

The ISO 9000:2015 and ISO 9001:2015 standards are based on seven quality management
principles that senior management can apply for organizational improvement:



1. Customer focus
o Understand the needs of existing and future customers
o Align organizational objectives with customer needs and expectations
o Meet customer requirements
o Measure customer satisfaction
o Manage customer relationships
o Aim to exceed customer expectations


2. Leadership
o Establish a vision and direction for the organization
o Set challenging goals
o Model organizational values
o Establish trust
o Equip and empower employees
o Recognize employee contributions


3. Engagement of people
o Ensure that people’s abilities are used and valued
o Make people accountable
o Enable participation in continual improvement
o Evaluate individual performance
o Enable learning and knowledge sharing
o Enable open discussion of problems and constraints


4. Process approach
o Manage activities as processes
o Measure the capability of activities
o Identify linkages between activities
o Prioritize improvement opportunities
o Deploy resources effectively




5. Improvement
o Improve organizational performance and capabilities
o Align improvement activities
o Empower people to make improvements
o Measure improvement consistently
o Celebrate improvements


6. Evidence-based decision making


o Ensure the accessibility of accurate and reliable data
o Use appropriate methods to analyze data
o Make decisions based on analysis
o Balance data analysis with practical experience


7. Relationship management
o Identify and select suppliers to manage costs, optimize resources, and create value
o Establish relationships considering both the short and long term
o Share expertise, resources, information, and plans with partners
o Collaborate on improvement and development activities
o Recognize supplier successes

