22BCAE3 - Software Engineering
Unit – 1
Software is a program or set of programs containing instructions that provide the desired
functionality. Engineering is the process of designing and building something that serves a particular
purpose and finds a cost-effective solution to problems.
Software Engineering is the process of designing, developing, testing, and maintaining software. It
is a systematic and disciplined approach to software development that aims to create high-quality,
reliable, and maintainable software.
2. It is a rapidly evolving field, and new tools and technologies are constantly being developed to
improve the software development process.
3. By following the principles of software engineering and using the appropriate tools and
methodologies, software developers can create high-quality, reliable, and maintainable software
that meets the needs of its users.
4. Software Engineering is mainly used for large projects based on software systems rather than
single programs or applications.
5. The main goal of Software Engineering is to develop software applications while improving quality and keeping within budget and time constraints.
6. Software Engineering ensures that the software to be built is consistent and correct, delivered on budget and on time, and meets the stated requirements.
Software Evolution is a term that refers to the process of developing software initially and then updating it over time for various reasons, for example to add new features or to remove obsolete functionality. This section discusses Software Evolution in detail.
What is Software Evolution?
The software evolution process includes fundamental activities of change analysis, release
planning, system implementation, and releasing a system to customers.
1. The cost and impact of these changes are assessed to see how much the system is affected by the change and how much it might cost to implement the change.
2. If the proposed changes are accepted, a new release of the software system is planned.
3. During release planning, all the proposed changes (fault repair, adaptation, and new
functionality) are considered.
4. A decision is then made on which changes to implement in the next version of the system.
5. The process of change implementation is an iteration of the development process where the
revisions to the system are designed, implemented, and tested.
1. Change in requirements with time: With time, an organization's needs and ways of working can change substantially, so the tools (software) it uses must also change to maximize performance.
2. Environment change: As the working environment changes, the tools that enable us to work in that environment must also change. The same happens in the software world: when the working environment changes, organizations require the reintroduction of old software with updated features and functionality to adapt to the new environment.
3. Errors and bugs: As deployed software ages within an organization, its accuracy decreases and its ability to bear an increasing workload and complexity continually degrades. In that case it becomes necessary to avoid the use of obsolete and aged software; all such software needs to undergo the evolution process in order to remain robust against the workload and complexity of the current environment.
4. Security risks: Using outdated software within an organization may put it at the verge of various software-based cyberattacks and could illegally expose confidential data associated with the software in use. It therefore becomes necessary to avoid such security breaches through regular assessment of the security patches/modules used within the software. If the software is not robust enough to withstand current cyberattacks, it must be changed (updated).
5. For new functionality and features: In order to increase performance, speed up data processing, and add other functionality, an organization needs to continuously evolve the software throughout its life cycle so that stakeholders and clients of the product can work efficiently.
Nowadays, seven broad categories of computer software present continuing challenges for software engineers. The main categories are described below:
1. System Software: System software is a collection of programs written to service other programs. Some system software processes complex but determinate information structures; other system applications process largely indeterminate data. The system software area is characterized by heavy interaction with computer hardware that requires scheduling, resource sharing, and sophisticated process management.
2. Application Software: Application software is defined as programs that solve a specific business need. Applications in this area process business or technical data in a way that facilitates business operations or management/technical decision-making. In addition to conventional data processing applications, application software is used to control business functions in real time.
3. Engineering and Scientific Software: This software is used to facilitate engineering functions and tasks. However, modern applications within the engineering and scientific area are moving away from conventional numerical algorithms. Computer-aided design, system simulation, and other interactive applications have begun to take on real-time and even system software characteristics.
4. Embedded Software: Embedded software resides within the system or product and is used to
implement and control features and functions for the end-user and for the system itself.
Embedded software can perform limited and esoteric functions or provide significant function
and control capability.
5. Product-line Software: Designed to provide a specific capability for use by many customers,
product-line software can focus on the limited and esoteric marketplace or address the mass
consumer market.
6. Web Application: A Web application is a client-server computer program in which the client runs in a web browser. In their simplest form, Web apps can be little more than a set of linked hypertext files that present information using text and limited graphics. However, as e-commerce and B2B applications grow in importance, Web apps are evolving into sophisticated computing environments that provide standalone features, computing functions, and content to the end user.
Software Myths
Most experienced experts have seen myths or superstitions (false beliefs or interpretations) and misleading attitudes that create major problems for management and technical people. The types of software-related myths are listed below.
i) Management Myths:
Myth 1:
We have all the standards and procedures available for software development.
Fact:
Software experts do not know all the requirements for software development in advance, and all existing standards and procedures are incomplete because each new software development is based on a new and different problem.
Myth 2:
Adding the latest hardware and tools will improve software development.
Fact:
The latest hardware plays only a small role in standard software development; Computer-Aided Software Engineering (CASE) tools are more important than hardware for producing quality and productivity.
Myth 3:
Adding more people and programmers to software development can help meet project deadlines (if the project is lagging behind).
Fact:
If software is late, adding more people will merely make the problem worse. This is because the people already working on the project must spend time educating the newcomers and are thus taken away from their own work. The newcomers are also far less productive than the existing software engineers, so the effort put into training them does not immediately yield a corresponding increase in output.
(ii)Customer Myths:
The customer can be the direct user of the software, the technical team, the marketing/sales department, or another company. Customers often hold myths that lead to false expectations and, ultimately, dissatisfaction with the developer.
Myth 1:
A general statement of intent is enough to start writing programs (software development); the details of the objectives can be filled in over time.
Fact:
A formal and detailed description of the information domain, function, behavior, performance, interfaces, and validation criteria is essential. Unambiguous requirements (usually derived iteratively) are developed only through effective and continuous communication between customer and developer.
Myth 2:
Software requirements continually change, but change can be easily accommodated because software is flexible.
Fact:
It is true that software requirements change, but the impact of change varies with the time at
which it is introduced. When requirements changes are requested early (before design or code
has been started), the cost impact is relatively small. However, as time passes, the cost impact
grows rapidly—resources have been committed, a design framework has been established, and
change can cause upheaval that requires additional resources and major design modification.
(iii) Practitioner's Myths:
Myth 1:
Practitioners believe that their work is completed once the program is written and working.
Fact:
Industry data indicate that 60-80% of all effort on software is expended after it is first delivered to the customer (i.e., during maintenance and later releases).
Myth 2:
Until the program is running, there is no way to assess its quality.
Fact:
Systematic technical reviews are one of the most effective software quality and verification methods. These reviews act as quality filters and can find certain classes of defects more effectively than testing.
Myth 3:
A working program is the only deliverable of a successful project.
Fact:
A working system alone is not enough; the right documents (guides and manuals) are also required to provide guidance for software use and support.
Myth 4:
Software engineering will make us build voluminous and unnecessary documentation and will invariably slow us down.
Fact:
Software engineering is not about creating documents. It is about creating a quality product. Better quality leads to reduced rework, and reduced rework results in faster delivery times.
The foundation for software engineering is the process layer. It is the glue that holds the
technology layers together and enables rational and timely development of computer software.
Process defines a framework that must be established for effective delivery of software
engineering technology.
The software process forms the basis for management control of software projects and
establishes the context in which technical methods are applied, work products (models,
documents, data, reports, etc.) are produced, milestones are established, quality is ensured, and
change is properly managed.
Software engineering methods provide the technical "how-to's" for building software. Methods encompass a broad array of tasks that include communication, requirements analysis, design, coding, testing, and support.
Software engineering tools provide automated or semi-automated support for the process and
the methods.
When tools are integrated so that information created by one tool can be used by another, a system for the support of software development, called computer-aided software engineering (CASE), is established.
A Process Framework
- Communication
- Planning
- Modeling
- Construction
- Deployment
Within each framework activity there are actions, and within each action there are work tasks. For example:
- Communication activity
- Planning activity
- Modeling activity
  o analysis action
    - requirements gathering work task
    - elaboration work task
    - negotiation work task
    - specification work task
    - validation work task
  o design action
    - data design work task
    - architectural design work task
    - interface design work task
    - component-level design work task
- Construction activity
- Deployment activity
Umbrella activities (for example, software project tracking and control, risk management, software quality assurance, software configuration management, technical reviews, and measurement) are applied throughout the process.
CMM Levels.
Level 1: Initial.
A software development organization at this level is characterized by ad hoc
activities.
Very few or no processes are defined and followed.
Since software production processes are not defined, different engineers follow their
own process and as a result development efforts become chaotic.
The success of projects depends on individual efforts and heroics.
Since formal project management practices are not followed, under time
pressure short cuts are tried out leading to low quality.
Level 2: Repeatable
At this level, the basic project management practices such as tracking cost and
schedule are established.
Size and cost estimation techniques like function point analysis, COCOMO, etc. are
used.
The necessary process discipline is in place to repeat earlier successes on projects with similar applications. The opportunity to repeat a process exists only when a company produces a family of products.
Level 3: Defined
At this level the processes for both management and development activities are
defined and documented.
There is a common organization-wide understanding of activities, roles, and
responsibilities.
Though the processes are defined, the process and product qualities are not measured.
ISO 9000 aims at achieving this level.
Level 4: Managed
At this level, quantitative quality goals are set for products and processes, and process and product metrics are collected and used for management.
Each maturity level (except SEI CMM Level 1) is characterized by several Key Process Areas (KPAs) that identify the areas an organization should focus on to improve its software process to the next level.
Process Patterns
Process patterns define a set of activities, actions, work tasks, work products
and/or related behaviors
A template is used to define a pattern
Typical examples:
o Customer communication (a process activity)
o Analysis (an action)
o Requirements gathering (a process task)
o Reviewing a work product (a process task)
o Design model (a work product)
Process Assessment
The process should be assessed to ensure that it meets a set of basic process criteria that have been shown to be essential for successful software engineering.
Communication
Planning
Modeling
Construction
Deployment
The Capability Maturity Model Integration (CMMI) is a process and behavioral model that helps
organizations improve their development processes and encourage productive, efficient behaviors.
Developed at Carnegie Mellon University's Software Engineering Institute, CMMI provides a framework
for organizations to assess and enhance their processes, ultimately leading to higher quality products and
services.
1. Levels of Maturity: CMMI defines five levels of maturity for processes, ranging from Level 1
(Initial) to Level 5 (Optimizing). Each level represents a different degree of process maturity and
capability.
2. Process Areas: CMMI includes various process areas, such as project management, requirements
management, and quality assurance, which are essential for effective process improvement.
4. Applicability: While initially developed for software engineering, CMMI has evolved to be
applicable to various industries, including hardware development, service industries, and more.
5. Benchmarking: Organizations can use CMMI to set benchmarks for evaluating their processes
and identifying areas for improvement.
Reduced Risk: CMMI helps in identifying and mitigating risks early in the development process.
Improved Quality: The focus on process improvement leads to higher quality products and
services.
Market Competitiveness: Organizations can improve their market value and competitiveness by
adhering to CMMI standards.
Evolution of CMMI:
CMMI has undergone several iterations, with the most recent version (V2.0) being released in 2018. This
version combines previous areas of focus (product and service development, service establishment, and
product and service acquisition) into a single, more user-friendly and adaptable model.
Process Pattern:
As a software team moves through the software process, it encounters problems. It would be very useful if solutions to these problems were readily available so that they can be resolved quickly. A process pattern describes a process-related problem encountered during software engineering work, identifies the environment in which the problem is found, and suggests proven solutions to it. By solving such problems, a software team can construct a process that best meets the needs of a project.
At any level of abstraction, patterns can be defined. They can be used to describe a problem and solution
associated with framework activity in some situations. While in other situations patterns can be used to
describe a problem and solution associated with a complete process model.
Template:
Pattern Name – Meaningful name must be given to a pattern within context of software process
(e.g. Technical Reviews).
Forces – The issues that make the problem visible and may affect its solution, as well as the environment in which the pattern is encountered.
Type:
It is of three types :
1. Stage pattern – Problems associated with a framework activity for the process are described by a stage pattern. Establishing Communication might be an example of a stage pattern; this pattern would incorporate the task pattern Requirements Gathering and others.
2. Task-pattern – Problems associated with a software engineering action or work task and relevant
to successful software engineering practice (e.g., Requirements Gathering is a task pattern) are
defined by task-pattern.
3. Phase pattern – A phase pattern defines the sequence of framework activities that occurs within the process, even when the overall flow of activities is iterative in nature. The Spiral Model or Prototyping might be examples of phase patterns.
Process Assessment
Software Process Assessment is a disciplined and organized examination of the software process being used by an organization, based on a process model. Software Process Assessment covers many areas, such as the identification and characterization of current practices, the ability of current practices to control or avoid significant causes of poor (software) quality, cost, and schedule, and the identification of areas of strength and weakness in the software process.
Self-assessment: This is conducted internally by the people of the organization itself.
Second-party assessment: This is conducted by an external team, or people of the organization are supervised by an external team.
In an ideal case, Software Process Assessment should be performed in a transparent, open, and collaborative environment. This is very important for the improvement of the software and the development of the product. The results of the Software Process Assessment are confidential and accessible only to the company. The assessment team must contain at least one person from the organization being assessed.
Process Models:
[Figure: the software process, including the Personal Software Process (PSP) and the Team Software Process (TSP)]
PROCESS MODELS
SDLC Overview
SDLC, the Software Development Life Cycle, is a process used by the software industry to design, develop, and test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates.
SDLC is the acronym of Software Development Life Cycle.
It is also called as Software development process.
The software development life cycle (SDLC) is a framework defining tasks performed at each
step in the software development process.
Different life cycle models may map the basic development activities to phases in different ways. Thus, no matter which life cycle model is followed, the basic activities are included in all life cycle models, though the activities may be carried out in different orders in different models. During any life cycle phase, more than one activity may also be carried out.
1. Waterfall Model
2. Prototyping Model
3. Incremental Model
4. RAD Model
5. Spiral Model
The evolutionary model is based on the concept of making an initial product and then evolving the
software product over time with iterative and incremental approaches with proper feedback.
In this type of model, the product goes through several iterations, and the final product is built up through multiple iterations. Development is carried out along with feedback gathered during development. This model has a number of advantages, such as customer involvement, taking feedback from the customer during development, and building the exact product that the user wants. Because of the multiple iterations, the chances of errors are reduced, and reliability and efficiency increase.
Evolutionary Model
1. Iterative Model
2. Incremental Model
3. Spiral Model
1. Waterfall Model or Phased life cycle model
Requirement Gathering and analysis: All possible requirements of the system to be developed
are captured in this phase and documented in a requirement specification doc.
System Design: The requirement specifications from first phase are studied in this phase and
system design is prepared. System Design helps in specifying hardware and system requirements
and also helps in defining overall system architecture.
Implementation: With inputs from system design, the system is first developed in small
programs called units, which are integrated in the next phase. Each unit is developed and tested
for its functionality which is referred to as Unit Testing.
Integration and Testing: All the units developed in the implementation phase are integrated
into a system after testing of each unit. Post integration the entire system is tested for any faults
and failures.
Deployment of system: Once the functional and non functional testing is done, the product is
deployed in the customer environment or released into the market.
Maintenance: There are some issues which come up in the client environment. To fix those
issues patches are released. Also to enhance the product some better versions are released.
Maintenance is done to deliver these changes in the customer environment.
All these phases are cascaded to each other in which progress is seen as flowing steadily downwards
(like a waterfall) through the phases. The next phase is started only after the defined set of goals are
achieved for previous phase and it is signed off, so the name "Waterfall Model". In this model phases do
not overlap.
Advantages:
It is very simple
It divides the large task of building a software system into a series of clearly divided phases.
Each phase is well documented
Problems
1. Incremental Model
o In this life cycle model, the software is first broken down into several modules which can
be incrementally constructed and delivered.
o Used when requirements are well understood
o Multiple independent deliveries are identified
o Work flow is in a linear (i.e., sequential) fashion within an increment and is staggered
between increments
o Iterative in nature; focuses on an operational product with each increment
o The development team first develops the core modules of the system.
o This initial product skeleton is refined into increasing levels of capability adding new
functionalities in successive versions.
o Each evolutionary version may be developed using an iterative waterfall model of
development.
o Provides a needed set of functionality sooner while delivering optional components
later
o Useful also when staffing is too short for a full-scale development
Iterative Model
The evolutionary model is also known as the successive versions or incremental model. The main aim of this model is to deliver the product incrementally over time. It combines the iterative and incremental approaches of the software development life cycle (SDLC).
Based on the evolutionary model, we can divide the development into many modules that the developer can build and deliver incrementally. Alternatively, we can develop the skeleton of the initial product first and then refine it to increasing levels of capability by adding new functionality in successive versions.
There are so many characteristics of using the evolutionary model in our project. These
characteristics are as follows.
o We can develop the evolutionary model with the help of an iterative waterfall model of
development.
o There are three types of evolutionary models. These are the Iterative model, Incremental model
and Spiral model.
o When the new product version is released, it includes the new functionality and some changes in
the existing product, which are also released with the latest version.
o This model also permits the developer to change the requirement, and the developer can divide
the process into different manageable work modules.
o The development team also has to respond to customer feedback throughout the development process by frequently altering the product, strategy, or process.
The RAD model consist of the following phases
• Business Modeling:
In this phase, the flow of information within the organization is defined so that it covers all the functions. This helps in clearly understanding the nature, type, source, and processing of information.
• Data Modeling:
In this phase, the components of the information flow are converted into a set of data objects. Each object is referred to as an entity.
• Process Modeling:
In this phase, the data objects defined in the previous phase are used to depict the flow of information. In addition, adding, deleting, modifying, and retrieving the data objects are included in process modeling.
• Application Designing:
In this phase, the generation of the application and coding take place. Fourth-generation programming languages (4GLs) or tools are the preferred choice for the software developers.
• Testing:
In this phase, test the new program components.
The RAD model has the following limitations and requirements:
• It requires dedication and commitment on the part of the developers as well as the client to meet the deadline. If either party is indifferent to the needs of the other, the project will run into serious problems.
• For large but scalable projects It is not suitable as RAD requires sufficient human
resources to create the right number of RAD teams.
• RAD requires developers and customers who are committed to rapid fire
activities
• Its application area is restricted to system that are modular and reusable in
nature.
• It is not suitable for the applications that have a high degree of technical risk.
• Not all types of applications are appropriate for RAD.
• RAD is not appropriate when technical risks are high.
Prototype Models:
There are several uses of a prototype. An important purpose is to illustrate the input data
formats, messages, reports, and the interactive dialogues to the customer. This is a
valuable mechanism for gaining better understanding of the customer’s needs:
• Follows an evolutionary and iterative approach
• Focuses on those aspects of the software that are visible to the customer/user
• Based on the customer feedback, the requirements are refined and the prototype is
suitably modified.
• This cycle of obtaining customer feedback and modifying the prototype continues till
the customer approves the prototype.
• The actual system is developed using the iterative waterfall approach. However, in the prototyping model of development, the requirements analysis and specification phase becomes redundant, as the working prototype approved by the customer serves as an animated requirements specification.
Disadvantages
The customer sees a "working version" of the software, wants to stop all
development and then buy the prototype after a "few fixes" are made
Developers often make implementation compromises to get the software running
quickly (e.g., language choice, user interface, operating system choice, inefficient
algorithms)
Lesson learned
o Define the rules up front on the final disposition of the prototype before it is
built
o In most circumstances, plan to discard the prototype and engineer the
actual production software with a goal toward quality
Spiral Model
• Serves as a realistic model for large-scale software development
• During the first quadrant, the objectives of the phase are identified.
• Steps are taken to reduce the risks. For example, if there is a risk that
the requirements are inappropriate, a prototype system may be
developed.
• Review the results achieved so far with the customer and plan the
next iteration around the spiral.
The Unified Process (UP) is a software development framework used for object-oriented
modeling. The framework is also known as Rational Unified Process (RUP) and the
Open Unified Process (Open UP). Some of the key features of this process include:
It defines the order of phases.
It is component-based, meaning a software system is built as a set of software
components. There must be well-defined interfaces between the components for
smooth communication.
It follows an iterative, incremental, architecture-centric, and use-case driven approach
Unit – II
Software Requirements
Software requirements are the specifications and descriptions of the functionality and
constraints of a software system. They are fundamental to the software development process as
they outline what the system should do and how it should perform.
1. Types of Requirements:
Functional Requirements: These define the specific behaviors and functions that the software must perform. For example, a functional requirement for a banking application might be the ability to transfer funds between accounts.
Non-Functional Requirements: These define quality attributes and constraints, such as performance, security, usability, and reliability (for example, how quickly a page must load).
2. Gathering Requirements:
3. Documentation:
Software Requirements Specification (SRS): A detailed document that outlines all the
functional and non-functional requirements of the system. The SRS serves as a reference
for developers, testers, and stakeholders throughout the project lifecycle.
4. Analysis:
Requirement Analysis: Assessing and refining the gathered requirements to ensure they
are clear, complete, and feasible.
Validation: Ensuring that the requirements accurately reflect the needs and expectations
of the stakeholders.
Verification: Confirming that the requirements are feasible and can be implemented
within the constraints of the project.
Foundation for Development: Requirements provide the foundation for all subsequent
phases of the software development lifecycle. They guide design, implementation, and
testing efforts.
Risk Mitigation: Clear and well-defined requirements help in identifying and mitigating
risks early in the project.
Quality Assurance: Properly defined requirements contribute to the quality of the final
product by ensuring it meets user needs and expectations.
Software requirements are crucial for the successful development and delivery of a software
system, serving as the blueprint that guides the entire project. They ensure that the final
product aligns with the stakeholders' vision and provides the desired functionalities and
performance.
User requirements in software engineering are essential for understanding and defining what
end-users expect from a software system. These requirements serve as the foundation for
designing, developing, and testing the software to ensure it meets user needs. Here’s an
overview:
Key Aspects of User Requirements:
1. Definition:
o User requirements are statements that describe what the end-users need from the
software system. They focus on user goals, tasks, and interactions with the
system.
o Functional Requirements: These specify the actions that the system must be
able to perform, such as user authentication, data processing, and reporting.
o Use Cases and Scenarios: Developing use cases and scenarios to illustrate how
users will interact with the system.
4. Documentation:
o User Stories: Short, simple descriptions of a feature from the perspective of the
end-user, often used in Agile development.
o Validation: Ensuring that the requirements accurately reflect user needs and are
achievable within the project constraints.
o Foundation for Design: User requirements guide the design phase by providing a
clear understanding of user needs and expectations.
Functional Requirement Example: "The system shall allow users to log in using their
email and password."
Non-Functional Requirement Example: "The system shall load the user dashboard
within 3 seconds under normal load conditions."
System requirements
System requirements refer to the specifications and constraints that define the
characteristics and functionality of a software system. These requirements outline what the
system must achieve to meet the needs of users and stakeholders. They serve as a blueprint for
the design, development, testing, and maintenance phases of the software development lifecycle.
For example, a non-functional requirement might be that the system must handle 1,000 simultaneous user logins without performance degradation.
3. Documentation:
o Validation: Ensuring that the requirements accurately reflect the needs and
expectations of the stakeholders.
o Guiding Testing Efforts: Clear requirements help in creating effective test cases
to verify that the system meets its intended functionality and performance.
Functional Requirement Example: "The system shall allow users to reset their
passwords via an email verification process."
Non-Functional Requirement Example: "The system shall load the homepage within 2
seconds under normal operating conditions."
Interface specification
Interface specification is a critical part of software engineering that outlines the details of how
different components of a software system interact with each other. It serves as a contract
between different parts of the system, ensuring that each part knows how to communicate with
the others effectively.
1. Purpose:
2. Types of Interfaces:
o User Interfaces (UI): Defines how users interact with the system, including
input methods (e.g., forms, buttons) and output displays (e.g., screens, reports).
o Data Types and Formats: Defines the types of data exchanged through the
interface, including data structures, formats, and constraints.
o Function Signatures: Describes the functions or methods available through the
interface, including their names, parameters, return types, and expected behavior.
o Error Handling: Outlines how errors are detected, reported, and handled through
the interface.
Interface: UserAuthenticationAPI
Description:
This API provides methods for user authentication, including login and logout functionalities.
Data Types:
Function Signatures:
1. login(credentials: UserCredentials): boolean
- Returns: true if authentication succeeds, false otherwise
Error Handling:
Security Considerations:
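Putting the parts of this example interface together, here is a minimal sketch in Python (the UserCredentials fields and the in-memory session handling are illustrative assumptions, not part of the specification above):

from dataclasses import dataclass

@dataclass
class UserCredentials:
    email: str
    password: str

class UserAuthenticationAPI:
    """Sketch of the authentication interface: login and logout."""

    def __init__(self):
        self._sessions = set()          # emails that currently have a session

    def login(self, credentials: UserCredentials) -> bool:
        # Returns True if authentication succeeds, False otherwise.
        if self._verify(credentials):
            self._sessions.add(credentials.email)
            return True
        return False

    def logout(self, email: str) -> None:
        # Ends the user's session if one exists.
        self._sessions.discard(email)

    def _verify(self, credentials: UserCredentials) -> bool:
        # Placeholder check; a real system would verify a stored password hash.
        return bool(credentials.email) and len(credentials.password) >= 8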
Feasibility Study
A feasibility study is a systematic analysis used to assess the practicality, viability, and
potential success of a proposed project or solution. It evaluates various aspects of the project to
determine whether it is achievable within the constraints of resources, time, and technology.
Feasibility studies help stakeholders make informed decisions about proceeding with a project.
1. Technical Feasibility:
o Purpose: Evaluates whether the technology and resources required for the
project are available or can be developed.
o Key Considerations:
Availability of hardware, software, and technical expertise.
2. Economic Feasibility:
o Key Considerations:
Cost-benefit analysis.
3. Operational Feasibility:
o Purpose: Assesses whether the project aligns with organizational goals and can
be integrated into current workflows.
o Key Considerations:
4. Legal Feasibility:
o Purpose: Ensures the project complies with relevant laws, regulations, and
policies.
o Key Considerations:
Industry-specific regulations.
5. Schedule Feasibility:
o Purpose: Evaluates whether the project can be completed within the required
time frame.
o Key Considerations:
1. Elicitation:
2. Analysis:
o Objective: To refine and structure the gathered requirements to ensure they are
clear, complete, and feasible.
o Techniques: Requirement prioritization, use case analysis, and modeling (e.g., data
flow diagrams, entity-relationship diagrams).
3. Specification:
4. Validation:
o Objective: To ensure that the requirements accurately reflect the needs and
expectations of the stakeholders and are achievable.
System Models
System models are abstract representations that help in understanding, analyzing, and
designing complex software systems. They provide a structured way to visualize the system's
components, their interactions, and the overall architecture. Here are some common types of
system models used in software engineering:
1. Context Models
Context models define the boundaries of the system and its interactions with external entities
such as users, other systems, and external devices. They help in identifying the system’s scope
and its environment.
Example: A context diagram showing the system at the center with lines connecting it to
external entities, indicating the flow of data or interactions.
2. Behavioral Models
Behavioral models describe how the system behaves in response to internal or external
events. They focus on the dynamics of the system, including workflows, processes, and state
changes.
Example: Use case diagrams, sequence diagrams, and state machine diagrams in UML
(Unified Modeling Language).
3. Structural Models
Structural models depict the organization of the system components and their relationships.
They focus on the static aspects, such as system architecture, data structures, and component
hierarchy.
4. Data Models
Data models represent the structure of the data within the system. They define how data is
stored, organized, and manipulated, focusing on entities, attributes, and relationships.
Example: Entity-Relationship (ER) diagrams, which show entities, their attributes, and
the relationships between them.
5. Functional Models
Functional models describe the functional requirements of the system, including the
processes and activities the system must perform to achieve its goals.
Example: Data flow diagrams (DFDs) that illustrate the flow of data through the system,
identifying processes, data stores, and data flows.
6. Architectural Models
Architectural models provide a high-level view of the system’s structure, showing the major
components and their interactions. They help in defining the overall system architecture.
7. Interaction Models
System models are essential tools in software engineering that aid in understanding,
designing, and communicating complex software systems. Each type of model provides a
different perspective, contributing to a comprehensive view of the system’s structure, behavior,
and functionality.
Unit – III
Design Engineering
Design engineering is a critical aspect of the software development process that focuses
on creating the architecture, components, interfaces, and other elements of a software system. It
transforms requirements into a blueprint for constructing the software, ensuring that it meets
functional and non-functional requirements while being maintainable, scalable, and robust.
Design Process
Architectural Design: Creating the high-level structure of the software system, defining
the major components and their interactions.
Detailed Design: Defining the internal structure and behavior of each component,
including data structures, algorithms, and control logic.
Interface Design: Specifying how different components interact, including the methods
and data formats used for communication.
Design Reviews: Conducting reviews to ensure that the design meets the requirements
and adheres to best practices.
In software engineering, the design process and quality assurance are deeply intertwined. A
well-defined design process ensures that the system is built correctly from the start, and quality
assurance verifies that the final product meets all required standards and performs as expected.
Here’s a comprehensive look at both aspects:
Design Process
1. Requirements Analysis:
3. Detailed Design:
4. Interface Design:
o Objective: Define how different components and systems interact with each other.
5. Prototype Development:
o Objective: Create prototypes to validate design concepts and gather user feedback.
o Objective: Ensure the design meets all requirements and is feasible for
implementation.
Quality Assurance
o Objective: Ensure the software meets all specified requirements and performs as
expected.
o Activities:
Validation: Ensuring the built system meets user needs and requirements
(e.g., user acceptance testing).
2. Testing:
o System Testing: Testing the entire system for compliance with the requirements.
o Acceptance Testing: Ensuring the system meets user needs and is ready for
deployment.
3. Performance Testing:
o Objective: Ensure the system performs well under expected load conditions.
4. Security Testing:
5. Usability Testing:
o Metrics: Defect density, code coverage, mean time to failure (MTTF), and customer
satisfaction scores.
Design Concepts
Design concepts are foundational principles and ideas that guide the software development
process, ensuring that the final product is robust, maintainable, and meets user needs. Here are
some key design concepts in software engineering:
1. Abstraction:
o Definition: Focusing on the essential characteristics of an object or operation while hiding unnecessary implementation details.
o Example: Abstracting the details of file handling by providing a simple interface for reading and writing files.
2. Encapsulation:
o Definition: Bundling data and the methods that operate on the data within a single
unit, typically a class, and restricting access to some of the object's components.
o Example: Hiding the internal implementation of a class and exposing only the necessary methods to interact with it (see the code sketch after this list).
3. Modularity:
o Example: Breaking down a large application into separate modules, such as user
authentication, payment processing, and inventory management.
4. Separation of Concerns:
o Definition: Dividing a system into distinct features that overlap as little as possible,
making the system easier to manage and understand.
o Example: Separating the user interface logic from business logic and data access
layers in a web application.
o Cohesion: Refers to how closely related and focused the responsibilities of a single
module are.
o Example: Aim for high cohesion (e.g., a module handling only database operations)
and low coupling (e.g., modules interacting through well-defined interfaces).
6. Inheritance:
7. Polymorphism:
o Example: A function that can accept objects of different classes (e.g., circle, rectangle) but calls the appropriate method based on the object's actual class (see the code sketch after this list).
8. Design Patterns:
o Example: Singleton, Factory, Observer, and Strategy are some commonly used
design patterns.
9. Agile Design:
o Example: Using Agile methodologies to frequently refine and improve the design
based on user feedback and changing requirements.
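To make two of these concepts concrete, here is a minimal Python sketch (the BankAccount, Circle, and Rectangle names are illustrative, not taken from the notes): the account hides its balance behind methods (encapsulation), and the same print_area() call works on different shape classes (polymorphism).

import math

class BankAccount:
    """Encapsulation: the balance is internal state, reached only through methods."""
    def __init__(self, owner: str):
        self.owner = owner
        self._balance = 0.0

    def deposit(self, amount: float) -> None:
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount

    def balance(self) -> float:
        return self._balance

class Circle:
    def __init__(self, radius: float):
        self.radius = radius

    def area(self) -> float:
        return math.pi * self.radius ** 2

class Rectangle:
    def __init__(self, width: float, height: float):
        self.width, self.height = width, height

    def area(self) -> float:
        return self.width * self.height

def print_area(shape) -> None:
    # Polymorphism: the correct area() is chosen by the object's own class.
    print(shape.area())

account = BankAccount("Asha")
account.deposit(500.0)
print(account.balance())         # 500.0
print_area(Circle(1.0))          # 3.141592653589793
print_area(Rectangle(2.0, 3.0))  # 6.0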
Design Model
1. Components:
o The design model outlines the various components (or modules) of the system
and how they interact with one another. Each component is responsible for a
specific functionality or a set of functionalities.
o Components can be both software (e.g., classes, libraries) and hardware (e.g.,
devices, servers).
2. Data Design:
o Focuses on defining how the data will be stored, accessed, and manipulated. This
includes designing databases, data structures, and data flows.
3. Control Design:
o Refers to how the flow of control is managed within the system. It specifies how
operations, processes, or services will be coordinated, including the flow of
messages or events.
4. Interface Design:
o Details how different components and systems will communicate. It defines the
inputs and outputs for each component and the protocols for communication (e.g.,
API specifications, message formats).
o Focuses on designing the system’s interface with users. This includes screens,
forms, and interactive elements that users interact with, ensuring usability and a
good user experience (UX).
6. Behavioral Design:
o Describes the dynamic aspects of the system, including how components behave
during execution, such as sequence diagrams, state diagrams, and activity
diagrams.
High-Level Design:
o Describes the system architecture and components in abstract terms, often using block diagrams or component diagrams.
o Defines major modules, components, and how they interact at a high level.
o It focuses on the overall structure without delving into the implementation details.
Example:
o An online shopping system may have modules like User Management, Order
Processing, Inventory Management, and Payment Gateway.
Detailed Design:
o Breaks down the high-level design into more detailed designs, focusing on each module's implementation.
Example:
o In the User Management module, the design may include a class for User, with
attributes like username, password, and email, and methods like register(), login(),
and updateProfile().
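A minimal Python sketch of the User class described above (the attribute and method names follow the text; the method bodies are illustrative placeholders, not a real implementation):

class User:
    def __init__(self, username: str, password: str, email: str):
        self.username = username
        self.password = password      # a real design would store only a password hash
        self.email = email
        self.registered = False

    def register(self) -> None:
        # Persisting the user to a data store is omitted in this sketch.
        self.registered = True

    def login(self, password: str) -> bool:
        return self.registered and password == self.password

    def updateProfile(self, email: str) -> None:
        self.email = email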
Software architecture
1. Separation of Concerns:
o Dividing a system into distinct sections, each responsible for a specific part of the
functionality, allows for better organization and easier maintenance.
2. Modularity:
3. Scalability:
o Designing the system to handle increased load by adding resources without a
complete redesign (vertical or horizontal scaling).
4. Reusability:
5. Maintainability:
o Ensuring the system can be easily modified to fix bugs, add new features, or
improve performance without major disruptions.
6. Flexibility:
7. Security:
o Ensuring the system is protected from unauthorized access, data breaches, and
other vulnerabilities.
o Divides the system into layers, where each layer only communicates with the layer
directly below or above it.
o Example: Web applications with a presentation layer, business logic layer, and data access layer (see the code sketch after this list).
2. Client-Server Architecture:
o Divides the system into two main components: clients that request services and
servers that provide them.
3. Microservices Architecture:
o Breaks down the system into small, independently deployable services, each
focused on a single business capability.
o Example: E-commerce systems where payment, shipping, and inventory are
separate services.
4. Event-Driven Architecture:
o Example: Stock market platforms where price changes (events) trigger updates in
trading systems.
o Uses services as the main building blocks, where each service provides a specific
business function, and services communicate via standardized interfaces (e.g.,
SOAP, REST).
o All nodes (peers) in the system are both consumers and providers of services.
There's no centralized server.
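As promised under the layered architecture item above, here is a deliberately tiny Python sketch of a three-layer split (the function names and data are illustrative assumptions): each layer calls only the layer directly below it.

# Data access layer: the only code that touches storage.
_DB = {"42": "Introduction to Software Engineering"}

def fetch_title(book_id: str) -> str:
    return _DB.get(book_id, "unknown")

# Business logic layer: talks only to the data access layer.
def describe_book(book_id: str) -> str:
    title = fetch_title(book_id)
    return f"Book {book_id}: {title}"

# Presentation layer: talks only to the business logic layer.
def show_book(book_id: str) -> None:
    print(describe_book(book_id))

show_book("42")    # Book 42: Introduction to Software Engineering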
Data Design
Data Design refers to the process of defining the structure, organization, and
management of data within a system. It ensures that the data supports the system's functional
and performance requirements while being easy to maintain, scalable, and secure.
1. Data Modeling:
2. Normalization:
o Forms:
o Define how data is stored (e.g., relational databases, NoSQL, flat files).
4. Data Integrity:
5. Security:
o Implement access control, encryption, and audit mechanisms to protect data.
Entities:
Relationships:
Tables:
1. Books:
2. Members:
3. Loans:
Architectural styles and patterns are fundamental concepts in software engineering that
provide standardized solutions to common design problems. They help in defining the structure,
interactions, and organization of software systems. Here’s an overview of some key
architectural styles and patterns
Architectural Styles
Architectural styles and patterns are high-level strategies for designing software systems,
providing templates and guidelines for organizing components and their interactions. They are
fundamental to software architecture, ensuring scalability, maintainability, and performance.
1. Layered Architecture:
o Description: Divides the system into layers, each with a specific responsibility.
o Advantages:
Separation of concerns.
o Disadvantages:
2. Client-Server Architecture:
o Advantages:
Centralized control.
o Disadvantages:
Single point of failure (server).
Network dependency.
3. Event-Driven Architecture:
o Advantages:
High responsiveness.
Decouples components.
o Disadvantages:
Hard to debug.
4. Microservices Architecture:
o Advantages:
Technology agnostic.
o Disadvantages:
o Description: Data flows through a series of processing components (filters)
connected by pipes.
o Advantages:
Reusability of filters.
o Disadvantages:
Architectural Patterns
1. Model-View-Controller (MVC):
o Description: Separates the application into the Model (data and business logic), the View (UI), and the Controller (which handles user input and coordinates the Model and View).
o Advantages:
Separation of concerns.
o Disadvantages:
2. Repository Pattern:
o Advantages:
o Disadvantages:
3. Singleton Pattern:
o Use Case: When exactly one instance is required, like a configuration manager (see the code sketch after this list).
o Advantages:
o Disadvantages:
4. Observer Pattern:
o Advantages:
o Disadvantages:
5. Builder Pattern:
o Description: Constructs complex objects step by step.
o Advantages:
o Disadvantages:
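As a concrete illustration of one of these patterns, here is a minimal Singleton sketch in Python (the ConfigurationManager name echoes the use case mentioned above; the __new__-based approach shown is just one common way to implement it):

class ConfigurationManager:
    """Singleton: at most one instance exists application-wide."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.settings = {}
        return cls._instance

a = ConfigurationManager()
b = ConfigurationManager()
a.settings["theme"] = "dark"
assert a is b                        # both names refer to the same instance
assert b.settings["theme"] == "dark"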
Architectural design
Architectural design defines the high-level structure of a software system, showing its
components, their relationships, and how they interact. It focuses on organizing the system into
modules or subsystems and ensuring that the architecture aligns with the functional and non-
functional requirements.
System Overview: An e-commerce platform where customers can browse products, place orders,
and make payments.
Layers:
2. Business Logic Layer: Contains the core functionality, like processing orders.
Components and Interactions:
o Enables users to browse products, add items to the cart, and place orders.
2. Business Logic:
3. Database:
o Stores product catalog, user data, order history, and payment records.
4. External Services:
Example Workflow:
1. User Action: The customer adds a product to the cart using the UI.
2. Business Logic: The system validates the stock and calculates the price.
3. Data Access: The system retrieves product details and inventory data from the database.
4. External Service: Once the customer places the order, the payment gateway processes
the payment.
5. Response: The system confirms the order and updates the inventory.
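A minimal Python sketch of this workflow (the product catalogue, prices, and DummyGateway below are illustrative assumptions; a real system would call the actual payment gateway and database):

PRICES = {"book": 250.0, "pen": 20.0}       # illustrative product catalogue

class DummyGateway:
    """Stand-in for the external payment service."""
    def charge(self, amount: float) -> bool:
        return amount > 0

def place_order(cart: dict, inventory: dict, gateway: DummyGateway) -> str:
    # Business logic: validate stock for every item in the cart.
    for item, qty in cart.items():
        if inventory.get(item, 0) < qty:
            return f"out of stock: {item}"
    # Business logic: calculate the total price.
    total = sum(PRICES[item] * qty for item, qty in cart.items())
    # External service: process the payment.
    if not gateway.charge(total):
        return "payment failed"
    # Confirm the order and update the inventory.
    for item, qty in cart.items():
        inventory[item] -= qty
    return f"order confirmed (total {total})"

inventory = {"book": 3, "pen": 10}
print(place_order({"book": 1, "pen": 2}, inventory, DummyGateway()))
# order confirmed (total 290.0)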
2. Common Mechanisms:
3. UML Diagrams: UML divides its diagrams into two main categories:
1. Structural Things:
2. Relationships:
3. Diagrams:
o Use Case Diagram: Depicts functionalities like "Transfer Money" and "Check
Balance."
Basic Structural Modeling in UML refers to the process of describing the static aspects of
a system, such as its components, relationships, and organization. This focuses on the "things"
within the system, like classes, objects, and their connections.
6. Relationships:
o Composition: A strong form of aggregation where parts cannot exist without the
whole.
1. Class Diagram
Classes:
Relationships:
2. Component Diagram
Components:
Database
Notification Service
Relationship:
LMS interacts with the Database to fetch/update book and member details.
Class diagram
1. Classes:
o Book
o Member
o Librarian
o Loan
2. Class Diagram Representation:
+---------------------+
|        Loan         |
+---------------------+
| - loanID: int       |
| - issueDate: Date   |
| - returnDate: Date  |
| - bookID: int       |
| - memberID: int     |
+---------------------+
| + issueLoan()       |
| + returnLoan()      |
+---------------------+
Relationships:
1. A Member can borrow many books, so there is a one-to-many relationship between Member and Loan.
2. A Librarian can manage multiple loans and add/remove books.
3. Each Loan is associated with one Book and one Member.
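A minimal Python sketch of the Loan class from the diagram above (the attribute and method names follow the diagram; the method bodies are illustrative):

from datetime import date
from typing import Optional

class Loan:
    def __init__(self, loanID: int, bookID: int, memberID: int):
        self.loanID = loanID
        self.bookID = bookID
        self.memberID = memberID
        self.issueDate: Optional[date] = None
        self.returnDate: Optional[date] = None

    def issueLoan(self) -> None:
        # Record the date on which the book is issued to the member.
        self.issueDate = date.today()

    def returnLoan(self) -> None:
        # Record the date on which the book comes back.
        self.returnDate = date.today()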
Sequence diagram
Actors/Objects:
1. Member
2. Library System
3. Librarian
4. Database
1. The Member requests to borrow a book.
2. The Library System checks the book's availability with the Database, which returns the availability status.
Key Points:
2. The Library System acts as the intermediary and handles checking the book's
availability and updating the loan status.
Collaboration diagrams
Collaboration diagrams, also known as communication diagrams in UML, focus on the
interactions between objects in a system. They emphasize the structural organization of objects
that send and receive messages.
3. Messages: Arrows with sequence numbers on the links, showing the flow of messages
between objects.
Actors/Objects:
1. Customer
2. ShoppingCart
3. OrderSystem
4. PaymentGateway
5. Database
Steps:
[Collaboration diagram: Customer, ShoppingCart, OrderSystem, Database, and PaymentGateway exchange numbered messages, e.g., 1: addItem() from Customer to ShoppingCart, and 4: processPayment() from OrderSystem to PaymentGateway, which returns paymentStatus.]
2. Object Relationships: The links show direct relationships between the objects involved.
A Use Case Diagram is a type of UML diagram that visualizes the functional requirements
of a system by showing its actors, use cases, and their relationships. It provides a high-level view
of what the system does from the perspective of its users.
1. Actors: Represent the roles interacting with the system (human users or other systems).
o Primary Actor: Directly interacts with the system.
3. System Boundary: Encapsulates all the use cases within the system.
4. Relationships:
o Include: A use case that is always performed as part of another use case.
o Extend: A use case that adds optional behavior to another use case.
Actors:
Use Cases:
Browse Products
Add to Cart
Checkout
Make Payment
Manage Products
Generate Reports
Relationships:
The Customer is associated with Browse Products, Add to Cart, and Checkout.
The Admin is associated with Manage Products and Generate Reports.
Textual Representation:
Actors:
- Customer
- Admin
- Payment Gateway
Use Cases:
1. Browse Products
2. Add to Cart
3. Checkout
4. Make Payment
5. Manage Products
6. Generate Reports
[Use case diagram: the Customer, Admin, and Payment Gateway actors are connected to the use cases above within the system boundary, with include and extend relationships shown between use cases.]
Component Diagram
1. Components:
2. Interfaces:
3. Relationships:
4. Nodes:
o Physical hardware devices that host components.
Components:
1. Web Application:
2. Payment Gateway:
3. Database:
4. Inventory Service:
Relationships:
1. The Web Application depends on the Payment Gateway for processing payments.
2. The Web Application interacts with the Database to retrieve product details.
3. The Web Application communicates with the Inventory Service to check stock
availability.
+---------------------------------------+
|            Web Application            |
|  - User Interface                     |
|  - Shopping Cart                      |
|  - Checkout                           |
+---------------------------------------+
        |                |              |
        v                v              v
+-------------------+ +----------------+ +-------------------+
|  Payment Gateway  | |    Database    | | Inventory Service |
| - Process Payment | | - Product Info | | - Check Stock     |
+-------------------+ +----------------+ +-------------------+
1. The Web Application depends on the Payment Gateway for financial transactions.
2. The Web Application queries the Database for user and product data.
3. The Inventory Service ensures products are available before confirming an order.
Unit IV
Testing Strategies
On April 26, 1994, a China Airlines Airbus A300 crashed due in part to a software problem, killing 264 people.
Software bugs can potentially cause monetary and human loss; history is full of such examples.
Software testing helps to give a quality certification that the software can be used by the
client immediately.
It ensures quality of the product.
Many software errors are eliminated before testing begins by conducting effective technical
reviews
Testing begins at the component level and works outward toward the integration of the
entire computer-based system.
Different testing techniques are appropriate at different points in time.
The developer of the software conducts testing and may be assisted by independent test
groups for large projects.
Testing and debugging are different activities.
Debugging must be accommodated in any testing strategy.
Make a distinction between verification (are we building the product right?) and validation
(are we building the right product?)
Software testing is only one element of Software Quality Assurance (SQA)
Quality must be built into the development process; you can't use testing to add quality after the fact.
The role of the Independent Test Group (ITG) is to remove the conflict of interest inherent
when the builder is testing his or her own product.
Misconceptions regarding the use of independent testing teams
o The developer should do no testing at all
o Software is tossed “over the wall” to people to test it mercilessly
o Testers are not involved with the project until it is time for it to be tested
The developer and the ITG must work together throughout the software project to ensure that thorough tests will be conducted.
Software Testing Strategy
Types of Testing
White Box Testing: The tester views the internal behavior and structure of the program; this strategy permits one to examine the internal structure of the program and derive test cases from it.
Black Box Testing: The tester focuses on the inputs and expected outputs of the program without regard to its internal structure.
Critical or complex modules can be tested using White Box Testing while the rest of the application is tested using Black Box Testing.
Levels of Testing
• Unit Testing
• Integration Testing
• System Testing (FURPS testing: Functionality, Usability, Reliability, Performance, Scalability)
• Acceptance Testing
• Regression Testing
Unit Testing
Unit testing focuses verification effort on the smallest unit of software design, the individual component or module, which is tested in isolation.
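A minimal unit-test sketch using Python's built-in unittest module; the apply_discount function is a hypothetical unit under test, not part of any system described here:

import unittest

def apply_discount(price, percent):
    # Hypothetical unit under test: return price reduced by percent.
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or discount")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_invalid_discount_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    unittest.main()

Each test exercises the unit in isolation and checks one expected behaviour, which is the essence of unit testing.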
Integration Testing
Integration testing combines individually tested units into larger clusters and verifies that they interact correctly.
Bottom-Up Testing
• Begins construction and testing with atomic modules, i.e., modules at the lowest level in the program structure.
• The terminal module is tested in isolation first; then the next set of higher-level modules is tested with the previously tested lower-level modules.
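A minimal sketch of bottom-up integration, assuming hypothetical compute_tax and invoice_total modules; a simple driver stands in for the higher-level components that are not yet built:

# Lowest-level (atomic) module, tested first in isolation.
def compute_tax(amount, rate=0.18):
    return round(amount * rate, 2)

# Next-level module that depends on the already tested compute_tax.
def invoice_total(line_items, rate=0.18):
    subtotal = sum(line_items)
    return subtotal + compute_tax(subtotal, rate)

# Test driver standing in for higher-level modules that do not exist yet.
def driver():
    assert compute_tax(100.0) == 18.0              # atomic module in isolation
    assert invoice_total([100.0, 50.0]) == 177.0   # cluster: next level plus tested module
    print("bottom-up integration cluster passed")

if __name__ == "__main__":
    driver()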
Top-Down Testing
• Begins construction and testing with the main control module; modules subordinate to it are integrated moving downward through the control hierarchy, with stubs standing in for modules not yet integrated.
System Testing
Functionality Testing.
Usability Testing.
Reliability Testing.
Performance Testing.
Scalability Testing.
Functionality Testing
This testing is done to ensure that all the functionalities defined in the requirements are
being implemented correctly.
Usability Testing
The catch phrase “User Friendliness” can be achieved through Usability Testing.
• Ease Of Operability
• Communicativeness
• This test is done keeping in mind the kind of end users who are going to use the product.
Reliability Testing
• These tests are carried out to assess the system's capability to handle various scenarios such as a hard drive failure on the web server, a database server failure, or a communication link failure.
• Software reliability is defined in statistical terms as “the probability of failure-free
operation of a computer program in a specified environment for a specified time”.
Performance Testing
This test is done to ensure that the software/product works the way it is supposed to under various loads (load testing), stress (stress testing), and volumes (volume testing).
Volume Testing
The purpose is to find weakness in the system with respect to its handling of large
amounts of data during short time periods.
Stress Testing
The purpose is to verify that the system has the capacity to handle large numbers of processing transactions during peak periods.
Performance Testing
• Can be accomplished in parallel with Volume and Stress testing because we want
to assess performance under all conditions.
• System performance is generally assessed in terms of response times and
throughput rates under differing processing and configuration conditions.
Scalability Testing
These tests assess the degree to which the application/system load and processing can be distributed across additional servers and clients.
Acceptance Testing
Acceptance testing is performed by or with the customer to determine whether the system satisfies the agreed acceptance criteria so that the customer can decide whether to accept the product.
Regression Testing
Regression testing is the process of testing changes to computer programs to make sure that the older programming still works with the new changes.
Strategic issues
Specify product requirements in a quantifiable manner before testing starts.
Specify testing objectives explicitly.
Identify categories of users for the software and develop a profile for each.
Develop a test plan that emphasizes rapid cycle testing.
Build robust software that is designed to test itself.
Use effective formal reviews as a filter prior to testing.
Conduct formal technical reviews to assess the test strategy and test cases.
Develop a continuous improvement approach for the testing process.
Validation testing
The process of evaluating software during the development process or at the end of the
development process to determine whether it satisfies specified business requirements.
Validation Testing ensures that the product actually meets the client's needs. It can also
be defined as to demonstrate that the product fulfills its intended use when deployed on
appropriate environment.
It answers the question: Are we building the right product?
Validation Testing - Workflow:
Validation testing can be best demonstrated using V-Model. The Software/product
under test is evaluated during this type of testing.
Activities:
Unit Testing
Integration Testing
System Testing
User Acceptance Testing
• Software testing is part of a broader group of activities called verification and validation
that are involved in software quality assurance
• Verification (Are the algorithms coded correctly?)
– The set of activities that ensure that software correctly implements a specific
function or algorithm
• Validation (Does it meet user requirements?)
– The set of activities that ensure that the software that has been built is traceable
to customer requirements
• Alpha testing
– Conducted at the developer’s site by end users
– Software is used in a natural setting with developers watching intently
– Testing is conducted in a controlled environment
• Beta testing
– Conducted at end-user sites
– Developer is generally not present
– It serves as a live application of the software in an environment that cannot be
controlled by the developer
– The end-user records all problems that are encountered and reports these to the
developers at regular intervals
After beta testing is complete, software engineers make software modifications and prepare for
release of the software product to the entire customer base.
What is Debugging ?
Debugging happens as a result of testing. When a test case uncovers an error, debugging
is the process that causes the removal of that error.
Debugging is not testing, but always happens as a response to testing. The debugging process will have one of two outcomes: (1) the cause will be found and corrected, or (2) the cause will not be found.
The symptom may disappear when another error is corrected.
The symptom may actually be the result of non-errors (e.g., round-off inaccuracies).
The symptom may be caused by a human error that is not easy to find.
The symptom may be intermittent.
The symptom might be due to the causes that are distributed across various tasks on
diverse processes.
Debugging Strategies
Brute Force: memory dumps, run-time traces, and output statements are scattered through the program in the hope that the information produced will point to the cause of the error.
Backtracking: beginning at the site where the symptom is uncovered, the source code is traced backward until the cause is found.
Cause Elimination: data related to the error are organized to isolate potential causes; a hypothesis about the cause is formed and tested until the cause is confirmed or eliminated.
Product metrics
Product metrics can be applied across various domains, such as software development,
hardware design, and consumer products. Below are the key categories and examples of product
metrics commonly used in different contexts.
1. Functional Metrics
These metrics assess how well the product performs the tasks it is designed for. They
help evaluate the product's core functionality.
o Example Metrics:
o Example: For a software application, the metric might track how often users
successfully complete a specific task (e.g., completing a purchase on an e-
commerce site).
2. Usability Metrics
These metrics focus on how easy and efficient the product is to use from a user’s
perspective. They help determine how user-friendly and intuitive the product is.
o Example Metrics:
Learnability: How easily new users can learn to use the product.
o Example: For a website, a usability metric could measure how long it takes for a
new user to navigate through the site and make a purchase, reflecting how
intuitive the interface is.
3. Performance Metrics
These metrics measure the product's efficiency, speed, and responsiveness. They are
particularly important for digital products, such as websites or software applications.
o Example Metrics:
Page Load Time: The time it takes for a webpage or application screen to
load.
Response Time: The time it takes for the system to respond to user input.
o Example: For a cloud-based service, performance metrics could track how quickly
the system processes requests and how well it scales with an increasing number
of users.
4. Reliability Metrics
These metrics focus on how dependable and consistent the product is under normal use.
Reliability is often linked to product defects, downtime, or failures.
o Example Metrics:
Mean Time to Repair (MTTR): The average time it takes to fix a failure or
defect.
Defect Density: The number of defects found per unit of product size (e.g.,
lines of code or components).
o Example: For a piece of hardware like a laptop, reliability metrics might track how
often the device fails over its lifecycle and how long it takes to repair issues.
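A minimal sketch, using made-up repair times and defect counts, of how MTTR and defect density can be computed:

repair_hours = [2.0, 5.5, 1.5, 3.0]   # hypothetical time taken to fix each failure
defects_found = 12                    # hypothetical defect count
size_kloc = 8.0                       # product size in thousands of lines of code

mttr = sum(repair_hours) / len(repair_hours)   # Mean Time to Repair
defect_density = defects_found / size_kloc     # defects per KLOC

print(f"MTTR: {mttr:.2f} hours")
print(f"Defect density: {defect_density:.2f} defects/KLOC")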
5. Quality Metrics
These metrics measure how well the product meets predefined quality standards,
focusing on product defects and how often they occur.
o Example Metrics:
o Example: For a mobile app, quality metrics could include how many bugs are
reported by users after the app is released, and whether these bugs affect core
functionality.
6. Adoption Metrics
These metrics measure the extent to which users or customers are adopting the product,
reflecting its market acceptance and popularity.
o Example Metrics:
Active Users: The number of users who interact with the product within a
given timeframe (e.g., daily, monthly active users).
Conversion Rate: The percentage of users who take a desired action, such
as signing up or making a purchase.
7. Customer Satisfaction Metrics
These metrics measure how satisfied users are with the product and their experience of using it.
o Example Metrics:
Net Promoter Score (NPS): A score that measures customer loyalty based
on the likelihood of recommending the product to others.
o Example: After using an e-commerce platform, users may be asked to rate their
satisfaction with the purchasing process on a scale of 1 to 10 (CSAT), or whether
they would recommend the platform to others (NPS).
8. Cost and Revenue Metrics
These metrics are focused on the financial aspects of the product, such as development
costs, operational costs, and revenue generation.
o Example Metrics:
Cost Per Acquisition (CPA): The cost incurred to acquire a new customer.
o Example: For a SaaS company, revenue metrics might measure how much
recurring income is generated per customer, and whether the costs to acquire
and support the customer exceed the revenue generated.
Metrics for the Analysis Model
Software quality begins with meeting requirements; metrics for the analysis model evaluate how well the requirements (analysis) model captures them.
1. Correctness Metrics
These metrics assess whether the analysis model correctly represents the problem
domain and the required functionality. It measures the model's ability to capture all user
requirements and business needs without errors.
o Example Metrics:
2. Completeness Metrics
These metrics measure whether the analysis model includes all the necessary elements
and relationships to satisfy the problem requirements. A complete model ensures that no
essential detail or feature is overlooked.
o Example Metrics:
Feature Coverage: The extent to which the analysis model addresses all
functional requirements.
Model Size: The number of elements in the model (e.g., use cases, classes,
relationships), which can indicate the depth of the analysis.
3. Cohesion Metrics
These metrics evaluate how closely related the components of the analysis model are
within their respective contexts. A cohesive model ensures that related components are
logically grouped together.
o Example Metrics:
4. Coupling Metrics
These metrics evaluate the degree to which different components in the analysis model
are dependent on each other. Low coupling is desirable because it means the components
are independent, making the model easier to maintain and modify.
o Example Metrics:
Control Coupling: Measures the extent of dependency between
components in terms of control flow (e.g., use cases triggering other use
cases).
5. Understandability Metrics
These metrics assess how easily stakeholders can understand the analysis model.
o Example Metrics:
o Example: A user interface (UI) design might be evaluated for clarity based on
whether stakeholders (e.g., business analysts, product owners) can easily
comprehend the system’s functionality from the analysis model.
6. Maintainability Metrics
These metrics evaluate how easily the analysis model can be updated or modified as
requirements evolve. Maintainability ensures the model remains useful and adaptable
over time.
o Example Metrics:
Change Impact: Measures how changes to one part of the model affect
other parts. A model with low change impact is easier to maintain.
o Example Metrics:
Test Coverage: The extent to which the analysis model is testable and can
be mapped to specific test cases.
o Example Metrics:
Model Size: The number of elements in the analysis model (e.g., use cases,
classes, relationships). A very large model can be more difficult to maintain
and understand.
o Example: A complex financial system analysis model with numerous user roles
and complex transactions may result in higher complexity metrics, which would
indicate the need for simplification or modularization.
Metrics for Design Models are quantitative measures used to evaluate the quality,
complexity, and effectiveness of a software design. These metrics help in assessing how well the
design meets the requirements, how easy it is to understand, and how maintainable and scalable
the system will be. By analyzing design models using metrics, teams can make informed
decisions about the quality of the system’s architecture and design choices.
1. Size Metrics These metrics measure the scale or size of the design model. Larger designs
may indicate higher complexity, but it's important to evaluate whether the size reflects
necessary features or unnecessary complexity.
o Example Metrics:
o Example: A design for an online banking system might have 50 classes and 200
methods, which could be reasonable, but a higher number might suggest the need
for design simplification.
2. Complexity Metrics These metrics evaluate how complex the design is, which can affect
its maintainability, scalability, and understandability. High complexity in the design model
can lead to difficulty in implementation and testing.
o Example Metrics:
3. Modularity Metrics Modularity metrics evaluate how well the design is broken down into
independent, reusable, and manageable components or modules. A modular design is
easier to understand, maintain, and extend.
o Example Metrics:
o Example Metrics:
5. Performance Metrics These metrics assess the efficiency and performance implications
of the design. A well-designed system should be optimized to handle the expected
workload without unnecessary overhead.
o Example Metrics:
6. Usability Metrics These metrics measure how easy it is for users (or developers) to
interact with the system’s design. For software designs with user interfaces, usability is a
crucial factor in ensuring a positive user experience.
o Example Metrics:
o Example: A design for a web application might use consistent buttons, icons, and
menus throughout, improving usability and reducing the learning curve for users.
7. Security Metrics These metrics assess how well the design incorporates security
principles and practices to protect against threats and vulnerabilities. A secure design
reduces the risk of data breaches, unauthorized access, and other security issues.
o Example Metrics:
Data Integrity: Measures how well the design ensures that data is accurate
and protected from unauthorized modification.
Metrics for source code are quantitative measures used to evaluate the quality,
maintainability, and performance of the software's source code. These metrics help developers
and teams understand how well the code is written, how complex it is, how easy it is to maintain,
and whether it adheres to coding standards. The goal is to improve software quality, reduce
errors, and ensure long-term maintainability.
1. Size Metrics
Size metrics measure the length or size of the codebase. They are helpful in determining
how large or small a system is and provide insights into complexity, but should be
interpreted with care since larger codebases don’t always mean worse quality.
o Example Metrics:
Lines of Code (LOC): The total number of lines in the source code,
including comments and blank lines. While it's a simple metric, it provides
an indication of the size of the codebase.
o Example: In a large project, a high LOC count may indicate a feature-rich system,
but if the comment density is low, it could suggest that the code might be difficult
for others to understand.
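A minimal sketch that counts total lines, blank lines, and comment lines for a Python file and derives a comment density; the "#"-prefix rule is a simplification, and the script simply measures itself:

def size_metrics(path):
    total = blank = comments = 0
    with open(path, encoding="utf-8") as src:
        for line in src:
            total += 1
            stripped = line.strip()
            if not stripped:
                blank += 1
            elif stripped.startswith("#"):   # crude heuristic for comment lines
                comments += 1
    code = total - blank - comments
    density = comments / code if code else 0.0
    return {"LOC": total, "code": code, "comments": comments,
            "blank": blank, "comment_density": round(density, 2)}

if __name__ == "__main__":
    print(size_metrics(__file__))   # measure this script as a demonstration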
2. Complexity Metrics
Complexity metrics are used to evaluate the intricacy or difficulty of understanding and
maintaining the code. High complexity often indicates hard-to-maintain code, which can
be error-prone and difficult to test.
o Example Metrics:
3. Maintainability Metrics
These metrics indicate how easily the code can be modified, extended, and kept consistent over time.
o Example Metrics:
Code Duplication: The extent to which the same or similar code appears
in multiple places, indicating areas where refactoring might be needed.
Code Churn: The number of lines of code that have been modified over a
specific period, helping to identify areas of the codebase that change
frequently and may require additional testing or attention.
o Example: If multiple parts of the codebase contain the same logic (code
duplication), it may need to be refactored to create reusable functions or classes,
improving maintainability.
4. Quality Metrics
These metrics are used to assess the overall quality of the source code, including
readability, adherence to coding standards, and bug density.
o Example Metrics:
o Example: A defect density of 0.5 defects per 1,000 lines of code is considered good,
indicating the code is relatively error-free.
5. Testability Metrics
Testability metrics assess how easily the source code can be tested. Code that is easy to
test is usually modular, with clear separation of concerns, and limited dependencies.
o Example Metrics:
Test Case Density: The number of test cases per unit of code (e.g., test
cases per thousand lines of code). This helps measure the thoroughness of
testing.
Defect Detection Rate: The rate at which defects are found by automated
or manual tests. A high rate can indicate poor quality, but it can also point
to areas of code that need more thorough testing.
o Example: If a project has 85% test coverage, it indicates that most of the
functionality is being tested, which can help identify potential bugs early in the
development cycle.
6. Performance Metrics
These metrics focus on how efficient and optimized the source code is, which can affect
the performance of the application, including speed and resource usage.
o Example Metrics:
7. Security Metrics
Security metrics help identify how secure the source code is. These metrics assess
vulnerabilities and security risks present in the code, such as data breaches or insecure
dependencies.
o Example Metrics:
o Example: If a source code review identifies that a critical library is outdated and
has known security vulnerabilities, it should be updated to reduce risk.
Metrics for Testing are quantitative measures used to assess the effectiveness,
efficiency, and coverage of testing activities. These metrics provide insights into the quality of
the testing process, the effectiveness of test cases, and the overall reliability of the software
product. By tracking and analyzing these metrics, teams can identify gaps in testing, improve
test coverage, and ensure that defects are detected early in the development lifecycle.
1. Test Coverage Metrics These metrics measure how much of the code and functionality is exercised by the test suite.
o Example Metrics:
Code Coverage: The percentage of the source code that is executed during
testing. This can be measured using tools that track the lines of code
executed by test cases.
o Example: A system might have 80% line coverage and 70% branch coverage,
indicating that while a large portion of the code has been tested, some decision
points may require more test cases.
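A minimal sketch, with made-up instrumentation counts, showing how the line and branch coverage percentages quoted above are derived:

executed_lines, total_lines = 800, 1000        # hypothetical instrumentation results
executed_branches, total_branches = 140, 200

line_coverage = 100 * executed_lines / total_lines
branch_coverage = 100 * executed_branches / total_branches

print(f"Line coverage:   {line_coverage:.0f}%")    # 80%
print(f"Branch coverage: {branch_coverage:.0f}%")  # 70%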
2. Test Effectiveness Metrics These metrics assess how effective the testing process is in
identifying defects and ensuring the software behaves as expected.
o Example Metrics:
Defect Discovery Rate: The rate at which defects are found over time.
This can help identify whether testing is catching defects early or if there
are still many undiscovered defects at later stages of testing.
o Example: If 90% of test cases pass and the defect detection percentage is 85%, it
suggests that the testing is effective in identifying defects and that the product is
relatively stable.
3. Test Execution Metrics These metrics evaluate how efficiently the testing process is
being conducted, including the time and resources spent on testing.
o Example Metrics:
Test Execution Time: The amount of time it takes to run the entire test
suite. This is an important metric to track for performance testing,
regression testing, and automated testing.
Test Case Execution Efficiency: The number of test cases executed per
unit of time (e.g., test cases per hour). This metric helps determine how
quickly and efficiently the testing process is running.
Test Case Completion Rate: The percentage of planned test cases that
were actually executed. A lower rate may indicate that testing is falling
behind schedule.
o Example: If 50 test cases are planned for a release, and 40 test cases have been
executed on time, the completion rate would be 80%. Monitoring this metric helps
keep testing on track and ensures that key tests are not missed.
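A minimal sketch reproducing the completion-rate example above and adding an execution-efficiency figure; the effort value is hypothetical:

planned_cases, executed_cases = 50, 40   # figures from the example above
execution_hours = 8.0                    # hypothetical effort spent running them

completion_rate = 100 * executed_cases / planned_cases      # 80%
execution_efficiency = executed_cases / execution_hours     # test cases per hour

print(f"Completion rate: {completion_rate:.0f}%")
print(f"Execution efficiency: {execution_efficiency:.1f} test cases/hour")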
4. Defect Metrics These metrics assess the number and severity of defects found during
testing. They help teams understand the quality of the product and the areas that may
need more focus.
o Example Metrics:
Defect Resolution Time: The average time taken to fix a defect after it is
identified. This metric is used to assess how quickly issues are resolved
during the testing process.
o Example: If a critical bug is found late in the testing cycle, it may indicate the need
for additional testing or that earlier testing was not thorough enough. If there are
numerous minor defects found in the same area, it may indicate the need for a re-
evaluation of that part of the system.
5. Test Case Design Metrics These metrics assess the quality and coverage of the test
cases themselves. Well-designed test cases increase the likelihood of finding defects and
ensure that all critical features are tested.
o Example Metrics:
Test Case Defect Density: The number of defects found per test case. This
metric can indicate the effectiveness of the test case design.
Test Case Pass/Fail Rate: The percentage of test cases that pass versus
those that fail. This helps gauge how well the tests are designed and
whether the product meets expectations.
Test Case Redundancy: The degree to which test cases repeat tests of the
same functionality. A high redundancy rate can indicate inefficiencies in
the test design and unnecessary overlaps.
6. Test Automation Metrics These metrics assess the effectiveness and efficiency of test
automation efforts, which are crucial for large-scale projects and continuous
integration/continuous delivery (CI/CD) pipelines.
o Example Metrics:
Automation Test Execution Time: The time it takes for automated tests
to run. Shorter execution times improve the efficiency of the overall
testing process.
o Example: If automated tests are being run in a CI/CD pipeline and fail frequently,
it may indicate that the automated tests are poorly designed or that changes in
the application are not being properly accounted for in the test scripts.
7. Risk Metrics These metrics focus on assessing the risks associated with testing, such as
the likelihood of undetected defects, the impact of defects, and whether the testing efforts
are focused on the most critical areas of the application.
o Example Metrics:
Test Coverage by Risk: Measures the amount of testing coverage for high-
risk features or components. Ensuring adequate coverage of high-risk
areas helps reduce the probability of defects going undetected.
Failure Rate by Risk Level: The failure rate of tests for different risk levels
(e.g., high, medium, low). This metric helps identify which areas of the
system are most likely to fail under certain conditions.
o Example: If the login system is identified as a high-risk area and test coverage is
only 50%, this indicates a need to focus more testing efforts on the login
functionality to mitigate the potential for defects.
Metrics for Maintenance are quantitative measures used to assess the effectiveness,
efficiency, and quality of maintenance activities in software development. Maintenance activities
typically involve correcting defects, updating the software to meet new requirements, improving
performance, and adapting to changes in the environment. These metrics help evaluate how
well maintenance efforts contribute to the software's long-term sustainability, stability, and
performance.
1. Defect-Related Metrics These metrics focus on defects that arise during the maintenance
phase, including how quickly they are identified and resolved, and their impact on the
system.
o Example Metrics:
Defect Resolution Time: The average time taken to resolve a defect after it
is reported. Shorter resolution times typically indicate a more responsive
maintenance process.
Defect Introduction Rate: The rate at which new defects are introduced
during the maintenance process. Ideally, maintenance should not introduce
more defects than are fixed.
2. Cost-Related Metrics These metrics assess the financial impact of maintenance activities
and help organizations manage maintenance budgets and allocate resources effectively.
o Example Metrics:
3. Performance and Stability Metrics These metrics assess how well the system performs
and remains stable during the maintenance phase, particularly as updates and fixes are
implemented.
o Example Metrics:
Availability: The percentage of time the system is available for use. High
availability is crucial for systems that are mission-critical or provide
ongoing services.
4. Change-Related Metrics These metrics track the frequency and scope of changes made to
the software during the maintenance phase and assess the impact of these changes on
the system.
o Example Metrics:
Change Failure Rate: The percentage of changes that introduce new defects
or cause system failures. A high failure rate indicates a need for more
rigorous testing and validation before changes are applied.
5. Software Maintenance Effectiveness Metrics These metrics assess the effectiveness of the
overall maintenance process in terms of quality, timeliness, and the degree to which the
system continues to meet user needs.
o Example Metrics:
User Satisfaction: A measure of how satisfied end users are with the
system after maintenance activities. Surveys, feedback forms, and usability
studies can be used to gather this data.
6. Legacy System Maintenance Metrics These metrics are specific to the maintenance of
legacy systems, where challenges such as outdated technology, lack of documentation, and
difficulty in finding skilled personnel can increase maintenance complexity.
o Example Metrics:
1. Defect-Related Metrics: The system has a defect density of 3 defects per 1,000 lines of code
and an average defect resolution time of 4 hours. This suggests that defects are being
identified and resolved quickly.
2. Cost-Related Metrics: The annual maintenance cost of the system is $150,000, which is
25% of the original development cost. This is acceptable for a mature system but may
increase if additional features are added.
4. Change-Related Metrics: 150 changes were made during the last quarter, with a change
failure rate of 5%. This shows that most changes are successful, but there is room for
improvement in testing and change management.
6. Legacy System Maintenance Metrics: The system has significant technical debt, and 30%
of issues are resolved using workarounds. This may require refactoring to reduce
technical debt and improve future maintenance.
Unit – V
Metrics for Process and Products are used to evaluate both the development processes and the
products that result from these processes. These metrics help organizations assess the
efficiency, quality, and effectiveness of the processes used to create software, as well as the
quality, performance, and reliability of the final product. These metrics are essential for
continuous improvement and decision-making in software development.
Metrics for process focus on how well the software development and maintenance processes are
functioning. They provide insights into the efficiency, effectiveness, and predictability of the
processes, helping organizations streamline operations, improve quality, and reduce costs.
Metrics for products focus on assessing the quality and performance of the software product
itself. These metrics provide insights into how well the product meets user requirements, how
reliable it is, and how it performs in real-world usage.
Software Measurement
Software Measurement is the process of collecting, analyzing, and using quantitative data to
assess various aspects of the software development and maintenance lifecycle. These metrics are
used to evaluate both the software development process and the software product itself.
Software measurement provides insight into the quality, efficiency, and effectiveness of the
development process, and it helps track progress, identify issues, and make informed decisions.
1. Process Measurement
2. Product Measurement
1. Process Measurement
Process measurement focuses on assessing the performance of the software development and
maintenance processes. The goal is to evaluate the efficiency, effectiveness, and quality of the
process itself to improve productivity, reduce costs, and enhance overall software quality.
2. Product Measurement
Product measurement evaluates the final software product, assessing its quality, functionality,
performance, and overall value delivered to users. These metrics are focused on ensuring that
the software meets user expectations, performs efficiently, and is reliable over time.
Predictability: By using historical data, software metrics can help predict future
performance, timelines, and resource needs.
Risk Management: Monitoring key metrics can help identify potential risks early in the
development lifecycle, enabling proactive mitigation strategies.
Conclusion
Measuring both the process and the product gives teams the predictability and early risk visibility described above and supports continuous improvement in software development.
Metrics for Software Quality are used to evaluate the overall quality of the software product,
assessing aspects like functionality, performance, reliability, usability, and maintainability. These
metrics are essential for identifying areas that need improvement, ensuring that the product
meets user expectations, and delivering high-quality software. Below are some of the most
commonly used metrics for software quality:
1. Functionality Metrics
Functionality metrics focus on how well the software meets the specified requirements and
delivers the expected features to the user.
Defect Density: Measures the number of defects per unit of software, typically per 1,000
lines of code (LOC) or per function point.
Functionality Testing Coverage: Measures how much of the software’s functionality has
been tested.
2. Reliability Metrics
Reliability metrics assess the software's ability to perform under expected conditions over time.
Mean Time Between Failures (MTBF): Measures the average time the system operates
without failure. It is used to assess the reliability of the software.
o Example: If a system operates for 1,000 hours and encounters 5 failures, the
MTBF is 200 hours.
Defect Recovery Time: Measures the average time taken to fix defects after they are
identified.
o Example: If the total time to fix 10 defects is 100 hours, the defect recovery time is
10 hours per defect.
Failure Rate: The frequency at which failures occur in the system over time.
o Example: If there are 3 failures in 100 hours of operation, the failure rate is 0.03
failures per hour.
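A minimal sketch reproducing the worked reliability figures above (MTBF, defect recovery time, and failure rate) from the same hypothetical numbers:

operating_hours, failures = 1000, 5          # MTBF example above
total_fix_hours, defects_fixed = 100, 10     # defect recovery example above
obs_hours, obs_failures = 100, 3             # failure rate example above

mtbf = operating_hours / failures                    # 200 hours
recovery_time = total_fix_hours / defects_fixed      # 10 hours per defect
failure_rate = obs_failures / obs_hours              # 0.03 failures per hour

print(f"MTBF: {mtbf:.0f} hours")
print(f"Defect recovery time: {recovery_time:.0f} hours/defect")
print(f"Failure rate: {failure_rate:.2f} failures/hour")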
3. Performance Metrics
Performance metrics evaluate how well the software performs under various conditions,
including speed, scalability, and resource usage.
Response Time: The time the system takes to respond to user requests.
o Example: If the system responds to 500 requests in 100 seconds, the average response time is 0.2 seconds.
Throughput: The number of transactions or operations the system can handle per unit
of time.
Scalability: The software's ability to maintain acceptable performance as the load increases.
o Metric: Often measured by testing the software under different loads to observe performance degradation as load increases.
o Example: If a system supports 100 users with acceptable performance but fails to handle 200 users efficiently, its scalability is limited.
Resource Utilization: Measures how efficiently the system uses hardware resources
(e.g., CPU, memory, disk space).
4. Usability Metrics
Usability metrics measure how easy it is for users to interact with the software and how well it
meets user needs.
User Satisfaction: The degree to which users are satisfied with the software, often
measured through surveys or feedback forms.
o Example: If 10 users rate the software with an average score of 4 out of 5, the
user satisfaction score is 4.
Learnability: Measures how easy it is for new users to learn to use the software. A
system with high learnability has a short learning curve.
o Metric: Typically assessed by the time it takes for new users to complete a set of
basic tasks or through usability testing.
Task Success Rate: The percentage of users who can successfully complete a given task.
o Example: If 80 out of 100 users can complete a task, the task success rate is 80%.
5. Maintainability Metrics
Maintainability metrics assess how easy it is to modify, update, and extend the software over
time.
Code Churn: Measures how often the code is modified. Frequent changes may indicate
issues with design or requirements instability.
o Example: If 200 lines of code were modified in a software project that has 1,000
lines of code, the code churn is 20%.
Time to Implement Changes: Measures how long it takes to implement changes or new
features in the system.
o Example: If it takes a total of 5 hours to implement 2 changes, the average time to implement a change is 2.5 hours.
6. Customer Metrics
These metrics evaluate how well the software meets customer needs and expectations, helping
to assess the overall success of the product in the market.
Net Promoter Score (NPS): A measure of customer loyalty based on how likely users
are to recommend the product to others. It is calculated by subtracting the percentage of
detractors (users who would not recommend the software) from the percentage of
promoters (users who would recommend it).
Churn Rate: Measures the percentage of users who stop using the software after a
certain period.
o Formula: Churn Rate = (Users lost during the period ÷ Users at the start of the period) × 100
o Example: If a software has 200 users and loses 20 users in a month, the churn
rate is 10%.
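A minimal sketch, with made-up survey figures alongside the churn numbers above, of how NPS and churn rate are calculated:

promoters, detractors, respondents = 60, 25, 100   # hypothetical survey results
users_at_start, users_lost = 200, 20               # churn example above

nps = 100 * (promoters - detractors) / respondents   # % promoters minus % detractors
churn_rate = 100 * users_lost / users_at_start       # 10%

print(f"NPS: {nps:.0f}")
print(f"Churn rate: {churn_rate:.0f}%")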
Software quality metrics are essential for assessing the performance, reliability, and overall
value of a software product. These metrics help organizations identify areas for improvement,
ensure that the software meets user needs, and track progress toward quality goals.
Risk Management
A risk is a probable problem; it might happen, or it might not. There are two main characteristics of risk.
Uncertainty: the risk may or may not happen which means there are no 100% risks.
Loss: If the risk occurs in reality, undesirable results or losses will occur.
Suppose In a software development project, one of the key developers unexpectedly falls ill and
is unable to contribute to the product for an extended period.
One solution the organization may adopt: the team uses collaborative tools and procedures, such as shared work boards or project management software, to make sure that each member of the team is aware of all tasks and responsibilities, including those of their teammates.
An organization must focus on providing resources to minimize the negative effects of possible
events and maximize positive results in order to reduce risk effectively. Organizations can more
effectively identify, assess, and mitigate major risks by implementing a consistent, systematic,
and integrated approach to risk management.
Risk management is a sequence of steps that help a software team to understand, analyze, and
manage uncertainty. Risk management process consists of
Risks Identification.
Risk Assessment.
Risks Planning.
Risk Monitoring
1. Risk Identification
This involves identifying potential risks that might affect the project. These could include:
External Risks: Risks from external factors like market changes, regulatory
requirements, or third-party vendors.
Operational Risks: Risks in the daily functioning of the system, such as server outages
or bugs.
Human Risks: Risks from human factors, such as skill shortages, miscommunication, or
team member turnover.
2. Risk Assessment
Once the risks are identified, the next step is to assess them by evaluating their likelihood and potential impact. This can be done using the following approaches:
Qualitative Risk Assessment: Risks are ranked based on their likelihood and impact,
often using a simple scale (e.g., Low, Medium, High).
Quantitative Risk Assessment: Uses numerical values to assess the probability of a risk
occurring and its potential impact on the project’s objectives. This often involves:
o Risk Probability and Impact Matrix: Plotting risks on a matrix to assess their
severity.
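One common quantitative technique is to compute risk exposure as probability multiplied by estimated cost impact and rank risks by it. A minimal sketch with hypothetical risks and figures:

# Each entry: (risk name, probability of occurrence, estimated cost impact in $)
risks = [
    ("Key developer unavailable", 0.30, 40_000),
    ("Vendor delivery delay",     0.50, 25_000),
    ("Requirements change late",  0.20, 60_000),
]

# Risk exposure = probability x cost; higher-exposure risks get attention first.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
for name, prob, cost in ranked:
    print(f"{name:28s} exposure = ${prob * cost:,.0f}")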
3. Risk Planning
Once risks are assessed, mitigation strategies must be developed to reduce the probability and impact of risks. Risk mitigation can include:
Avoidance: Changing the project plan to eliminate the risk or its impact.
Transfer: Shifting the risk to another party, such as outsourcing or using insurance.
Mitigation: Reducing the probability or impact of the risk by taking steps to control it.
Acceptance: Acknowledging the risk and deciding to live with it, either by preparing
contingency plans or taking no action if the risk is low and unlikely to have a major
impact.
4. Risk Monitoring
Risk monitoring ensures that risks are continuously tracked and controlled throughout the project. This step involves:
Tracking New Risks: Identifying new risks that may emerge as the project progresses.
Monitoring Key Risk Indicators (KRIs): Defining and monitoring risk indicators to
identify when a risk is becoming more likely or impacting the project.
Risk Audits: Conducting audits to ensure that risk management processes are being
followed and that no major risks are overlooked.
Risk Management Tools and Techniques
Risk Register: A document or tool that tracks identified risks, their assessment, mitigation strategies, and the responsible team members.
Risk Matrix: A matrix that helps visualize the risks in terms of their probability and
impact. It categorizes risks into different levels of severity (e.g., low, medium, high).
Monte Carlo Simulation: A statistical technique used to assess the probability of various outcomes in a project by simulating different risk scenarios (a small simulation sketch appears after the bullets below).
Failure Mode and Effect Analysis (FMEA): A systematic method for evaluating
potential failures in a system and determining their effects, likelihood, and priority for
mitigation.
Proactive Risk Management: Identifying and addressing risks before they occur by
creating preventive strategies and contingency plans.
Reactive Risk Management: Dealing with risks after they arise by using corrective
actions to minimize their impact.
Contingency Planning: Developing backup plans to respond to risks that may occur,
ensuring that the project can continue smoothly in case of unexpected issues.
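A minimal Monte Carlo sketch for the simulation technique listed above; the baseline cost, risk probabilities, and impact ranges are all invented for illustration:

import random

def simulate_overrun(trials=10_000, budget=100_000):
    # Monte Carlo sketch: sample hypothetical risk impacts and count budget overruns.
    over = 0
    for _ in range(trials):
        cost = 80_000                              # assumed baseline cost
        if random.random() < 0.5:                  # vendor delay, assumed 50% likely
            cost += random.uniform(5_000, 25_000)
        if random.random() < 0.3:                  # staff turnover, assumed 30% likely
            cost += random.uniform(10_000, 40_000)
        if cost > budget:
            over += 1
    return over / trials

if __name__ == "__main__":
    print(f"Estimated probability of exceeding budget: {simulate_overrun():.1%}")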
Benefits of Effective Risk Management
2. Cost Control: Effective risk management can help prevent costly issues and delays by
addressing risks early on, minimizing unexpected expenses.
3. Improved Project Success Rate: Proactively managing risks increases the likelihood
that the project will be completed on time, within budget, and meet quality standards.
4. Stakeholder Confidence: Properly managing risks helps build trust with stakeholders,
ensuring that they feel confident in the project's success.
Reactive vs Proactive Risk Management Strategies are two approaches to handling risks in
projects, including software development. Both strategies have their advantages and
disadvantages, and the choice between them often depends on the nature of the project, the
environment, and the available resources. Here's a detailed comparison of both strategies:
Definition:
Proactive risk management involves identifying potential risks before they occur and taking
steps to avoid or mitigate them. The focus is on anticipating problems and implementing
strategies to prevent them from happening.
Key Characteristics:
Prevention Focused: Proactively addresses risks by anticipating them early in the project.
Planning Ahead: Involves creating strategies, contingency plans, and controls well before
the risk materializes.
Predictive: Attempts to foresee problems and issues before they arise based on historical
data, trends, or expert judgment.
Examples:
Conducting thorough feasibility studies and testing early in the project lifecycle.
Having a dedicated risk management team or process in place to identify and address
potential issues as early as possible.
Definition:
Reactive risk management involves dealing with risks only after they have materialized. This
strategy focuses on responding to problems when they occur, often with corrective actions to
mitigate the impact.
Key Characteristics:
Problem-Solving Approach: The focus is on finding solutions to risks that have already
been identified or are happening in real time.
Adaptability: Reactive strategies are flexible and can change depending on the risk's
nature, as they focus on dealing with real situations rather than predicting them.
Examples:
Fixing a critical bug or issue after it is discovered during user acceptance testing or in the
production environment.
Aspect                 | Proactive Risk Management                                  | Reactive Risk Management
Focus                  | Prevention and mitigation before the risk occurs.          | Responding after the risk has occurred.
Resources Required     | Requires more resources upfront for planning.              | Fewer resources needed until a problem arises.
Risk Handling          | Anticipates and mitigates risks in advance.                 | Deals with risks after they have materialized.
Stakeholder Confidence | Builds greater stakeholder confidence in project success.  | May reduce confidence if risks are not handled well.
Software Risks
Software Risks refer to potential issues or uncertainties that may arise during the development,
implementation, or maintenance of a software project, and can negatively affect its success.
These risks can lead to project delays, cost overruns, poor quality, or even project failure if not
identified and managed properly.
Software risks can be broadly classified into several categories based on their nature and
source. Understanding and addressing these risks is crucial for ensuring a successful software
project.
1. Technical Risks
o Description: These risks arise from the technical aspects of the software, such as
design, development, testing, or technology.
o Examples:
o Examples:
3. Human Risks
o Examples:
o Examples:
5. Operational Risks
o Examples:
Performance Issues: The software might not perform optimally under real-
world conditions, leading to slow response times, system crashes, or user
dissatisfaction.
6. Quality Risks
o Description: Risks associated with the quality of the software product, affecting its
functionality, usability, and maintainability.
o Examples:
Defects and Bugs: Undetected defects that could affect the software’s
functionality or cause failures in certain scenarios.
o Examples:
Risk Identification
Risk Identification is the process of recognizing potential risks that could affect a project,
including its objectives, timelines, quality, cost, and scope. It is one of the most critical steps in
risk management, as early identification allows teams to address risks before they escalate into
bigger problems.
1. Define Objectives:
o Establish clear objectives for identifying risks, including understanding the scope, requirements, and constraints of the project.
2. Collect Data:
o Look for any changes in project scope, team structure, or external conditions that
could lead to new risks.
3. Brainstorming:
o Organize brainstorming sessions with the project team and stakeholders (e.g.,
developers, business analysts, quality assurance team, end-users) to gather
different perspectives on potential risks.
o Group risks into specific categories for better organization and understanding.
These could include:
o Recognize specific triggers or early-warning signs that indicate the onset of risks.
For instance, delayed deliverables might trigger schedule risks, or new legislative
changes might introduce compliance risks.
7. Involve Stakeholders:
1. Checklists:
2. Risk Breakdown Structure (RBS):
o The RBS helps in visualizing how risks are structured and how they relate to each other.
3. Interviews and Surveys:
o Conduct interviews with subject matter experts, team members, and stakeholders to uncover hidden risks.
o Surveys can help gather opinions from a larger group of stakeholders on the
potential risks they foresee.
4. Delphi Technique:
5. Root Cause Analysis:
o Tools like Fishbone Diagrams (Ishikawa) can help identify causes and effects of potential risks.
6. Risk Register:
o Maintain a Risk Register where all identified risks are recorded, categorized, and
tracked throughout the project. This document is continuously updated and
reviewed.
7. SWOT Analysis:
8. Expert Judgment:
9. Mind Mapping:
o Use mind maps to explore the potential risks and their relationships in a non-
linear way, encouraging creative identification of risks across different aspects of
the project.
1. Technical Risks:
o Software defects: Undetected bugs or coding errors that could cause functionality
issues.
2. Management Risks:
o Scope creep: Uncontrolled changes or additions to the project scope that could
result in delays or budget increases.
3. Operational Risks:
o Deployment failures: Issues arising during the deployment phase that might
prevent the software from going live.
4. Business Risks:
o User acceptance: Users may resist adopting the software due to usability issues,
which can impact the overall success of the project.
5. External Risks:
Risk Projection
Risk Projection is the process of predicting the future impact of identified risks and estimating
the potential outcomes or consequences of those risks on a software project. It involves
assessing how a risk might evolve over time and estimating the likelihood of its occurrence, its
potential severity, and how it could affect various aspects of the project (e.g., timeline, budget,
scope, quality). The goal is to prioritize risks based on their potential impact and take
appropriate actions to mitigate them before they affect the project.
Risk projection helps in decision-making by giving stakeholders a clear view of potential future
challenges and by allowing them to allocate resources effectively to address the most critical
risks.
1. Risk Estimation:
o Estimate the likelihood (probability) of each risk occurring and the potential severity of its impact on the project's schedule, budget, scope, and quality.
2. Risk Prioritization:
o Rank risks based on their severity and probability to determine which risks need
the most attention.
3. Impact Analysis:
o For each risk, perform a detailed impact analysis to determine how it will affect
different project components (e.g., schedule, resources, costs, quality).
o Consider both direct impacts (e.g., a delay in the development phase) and
indirect impacts (e.g., the effect of delays on the overall project timeline or
customer satisfaction).
o Estimate when the risks are likely to occur during the project lifecycle. Some risks
may be immediate, while others could arise later in the development, testing, or
deployment stages.
o Use time-based projections to understand the risk’s potential impact over the
short term and long term.
o Create a Risk Matrix or Risk Heat Map, which is a visual representation of risk
likelihood versus impact. This helps in understanding the risk profile of the
project:
Each risk is placed in one of the four quadrants: low impact/low likelihood,
low impact/high likelihood, high impact/low likelihood, high impact/high
likelihood.
o The risks that fall into the "high impact/high likelihood" quadrant require immediate attention (a small categorization sketch follows this numbered list).
6. Scenario Modeling:
7. Risk Thresholds:
o Establish thresholds for acceptable risk levels. Risks that exceed these thresholds
may require immediate mitigation strategies, while those within acceptable limits
can be monitored.
o For example, a project may have an acceptable cost variance of 5%. If a risk could cause a cost overrun greater than 5%, it needs to be flagged for further mitigation.
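A minimal sketch of the risk matrix idea from the steps above: each risk gets a likelihood and impact score and is mapped to a quadrant; the risk names and scores are hypothetical:

def quadrant(likelihood, impact, threshold=0.5):
    # Map 0..1 likelihood/impact scores onto the four risk-matrix quadrants.
    lik = "high" if likelihood >= threshold else "low"
    imp = "high" if impact >= threshold else "low"
    return f"{imp} impact / {lik} likelihood"

risks = {
    "Third-party API change": (0.7, 0.8),
    "Minor UI defect":        (0.6, 0.2),
    "Data centre outage":     (0.1, 0.9),
}

for name, (likelihood, impact) in risks.items():
    print(f"{name:24s} -> {quadrant(likelihood, impact)}")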
Risk Refinement
Risk Refinement is the process of continuously improving and refining the understanding of
risks throughout the software development lifecycle. After initial identification and projection of
risks, risk refinement involves further breaking down, analyzing, and evaluating the risks to
gain a deeper understanding of their potential impact and likelihood. This process helps refine
mitigation strategies, update risk responses, and ensure that risks are managed effectively as
the project progresses.
Risk refinement typically occurs in iterative phases, with the level of detail and accuracy
increasing over time as more information becomes available. It also helps in identifying new
risks that may arise and adjusting existing risk management plans.
o After the initial identification and projection phases, continuously review the
identified risks to assess if there have been any changes in the project or
environment that could alter their impact or probability.
o Break down broad, high-level risks into smaller, more manageable sub-risks. This
allows for a better understanding of the factors contributing to the overall risk and
enables more precise mitigation actions.
o For example, if a major technical risk is identified (e.g., integration issues with a
third-party service), it can be refined into specific sub-risks such as "failure to
meet API standards" or "lack of available technical support."
o Refine the identification of specific risk triggers that indicate when a risk is
becoming more likely to occur. A risk trigger is an event or condition that signals
the possibility of the risk happening.
o For example, for a schedule risk, a potential trigger could be delayed completion of
a critical task in the project schedule.
o If the risk estimates in the initial stages were qualitative (e.g., high, medium, low),
refine them by applying quantitative measures such as probability distributions
or cost estimates. This allows for a more accurate evaluation of the risk
exposure.
o This can be done using tools like Monte Carlo simulations, which provide
statistical estimates for risk outcomes based on different variables.
o Regularly update the Risk Matrix (or Risk Heat Map) based on refined risk
projections. This involves placing risks in a visual matrix with axes for likelihood
and impact, and re-prioritizing them accordingly.
o As new risks are identified or the probability and impact of existing risks change,
the matrix should be updated to reflect the most current state of risk
management.
o For example, if a risk related to project delays becomes more likely, refine the
mitigation plan by allocating additional resources or adjusting timelines.
7. Continuous Monitoring:
o Use tools like risk tracking software, dashboard visualizations, or periodic risk
review meetings to keep stakeholders informed.
o As risks are refined, assess how they are interrelated or whether one risk could
trigger another. Risk interdependencies can magnify or reduce the impact of
certain risks.
o Sensitivity analysis can help determine how changes in certain risk factors will
affect the overall project. For instance, evaluating how sensitive the project is to
delays or cost overruns can help refine mitigation actions and project planning.
o This helps in understanding the “most critical” risks that require the most
attention and how minor adjustments can significantly reduce overall project
risks.
RMMM
RMMM stands for Risk Mitigation, Monitoring, and Management. It is a strategy used in
project management, particularly in software development and other engineering fields, to
systematically handle risks throughout a project's lifecycle. The RMMM process ensures that
potential risks are proactively addressed and monitored, minimizing the impact on the project
and enabling the team to maintain control over uncertainties.
1. Risk Mitigation:
o Risk Mitigation refers to the actions and strategies taken to reduce the likelihood
or impact of identified risks. The goal of mitigation is to reduce the severity of a
risk's effect or prevent it from happening altogether.
Risk avoidance: Changing the plan to eliminate the risk or its impact altogether.
Risk transfer: Shifting the impact of the risk to another party (e.g., outsourcing a high-risk task to a more experienced vendor).
Risk acceptance: Deciding to accept the risk and its potential impact,
often with the contingency in place to address it if it occurs.
2. Risk Monitoring:
o Risk Monitoring involves tracking identified risks, watching for risk triggers, and assessing whether the mitigation steps are working as the project proceeds.
3. Risk Management:
o Risk Management and contingency planning cover the actions taken when a risk becomes a reality, so that its impact on the project is controlled.
RMMM PLAN
1. Risk Identification:
o Objective: Identify potential risks that may affect the project’s success.
2. Risk Assessment:
o Process: After identifying the risks, each risk is evaluated in terms of its
probability of occurrence and potential impact on the project.
o Risk Assessment Matrix: This matrix ranks risks based on their probability and
impact, often classified as high, medium, or low.
o Example: "The likelihood of the vendor delay is high, but the impact on the
schedule is medium."
3. Risk Mitigation Planning:
o Objective: Develop plans and strategies to reduce or eliminate the identified risks.
o Process: For each high-priority risk, define mitigation actions that can reduce the
likelihood or minimize the impact. Mitigation can involve preventive, corrective,
or contingency actions.
o Strategies:
o Example: "Mitigate vendor delays by negotiating penalties for late delivery, or find
an alternative supplier."
4. Risk Monitoring:
o Process: Regularly review risk status, update risk assessments, and adapt
strategies as necessary. Monitoring also involves identifying risk triggers—events
or conditions that signal the risk is likely to occur.
o Example: "Monitor the vendor’s progress and track any delay in their delivery
schedule. Review risk triggers monthly."
5. Risk Management and Governance:
o Objective: Provide the overall framework and governance structure for handling
risks throughout the project lifecycle.
o Components:
Roles and Responsibilities: Assign risk owners who are responsible for
managing specific risks.
Risk Tolerance: Define the acceptable level of risk for the project.
6. Risk Register:
o Process: Document all risks, their status, mitigation actions, owners, and triggers
in a centralized Risk Register.
o Content:
Risk ID
Risk Description
Mitigation Strategies
Risk Owners
Status/Progress Updates
o Example: A risk register entry for "Vendor Delay" might look like:
Risk ID: R1
Likelihood: High
Impact: Medium
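A minimal sketch of how a risk register entry like the one above could be represented programmatically; the field names mirror the content list, and the values are illustrative:

from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    likelihood: str          # e.g., Low / Medium / High
    impact: str              # e.g., Low / Medium / High
    mitigation: str
    owner: str
    status: str = "Open"

register = [
    RiskEntry("R1", "Vendor delay", "High", "Medium",
              "Negotiate late-delivery penalties; identify an alternative supplier",
              "Procurement lead"),
]

for entry in register:
    print(f"{entry.risk_id}: {entry.description} "
          f"(likelihood={entry.likelihood}, impact={entry.impact}, status={entry.status})")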
Quality Management
Quality Concepts
1. Quality
2. Quality Control
3. Quality assurance
4. Cost of quality
Two kinds of quality may be encountered:
Quality of design
Quality of conformance
Quality of Design
Quality of design refers to the characteristics that designers specify for an item. The grade of materials, tolerances, and performance specifications all contribute to the quality of design.
Quality of Conformance
Quality of Conformance is the degree to which the design specifications are followed
during manufacturing.
Quality Control
QC is the series of inspections, reviews, and tests used throughout the development cycle to ensure that each work product meets the requirements placed upon it.
QC includes a feedback loop to the process that created the work product.
Quality Assurance
• Consists of a set of auditing and reporting functions that assess the effectiveness and completeness of quality control activities.
• Provides management personnel with data that provides insight into the quality of the
products.
• Alerts management personnel to quality problems so that they can apply the necessary
resources to resolve quality issues.
Cost of Quality
• Prevention costs
Quality planning, formal technical reviews, test equipment, training
• Appraisal costs
Inspections, equipment calibration and maintenance, testing
• Failure costs – subdivided into internal failure costs and external failure costs
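A minimal sketch, with invented figures, totalling the cost of quality across the prevention, appraisal, and failure categories listed above:

cost_of_quality = {
    "prevention": 12_000,        # quality planning, reviews, training
    "appraisal": 8_000,          # inspections, calibration, testing
    "internal_failure": 15_000,  # rework and retest before shipment
    "external_failure": 30_000,  # defects found after delivery
}

total = sum(cost_of_quality.values())
for category, cost in cost_of_quality.items():
    print(f"{category:17s} ${cost:>7,} ({100 * cost / total:.0f}% of total)")
print(f"{'total':17s} ${total:>7,}")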
(1) Software requirements are the foundation from which quality is measured; lack of
conformance to requirements is lack of quality.
(2) Specified standards define a set of development criteria that guide the manner in
which software is engineered; if the criteria are not followed, lack of quality will
almost surely result.
(3) A set of implicit requirements often goes unmentioned; if software fails to meet
implicit requirements, software quality is suspect.
Software reviews are a “filter” for the software engineering process. That is, reviews are applied at various points during the software development process and serve to uncover errors that can then be removed.
Software reviews serve to “purify” the software analysis, design, coding, and testing activities.
• Catch large classes of errors that escape the originator more than other
practitioners
• Include the formal technical review (also called a walkthrough or inspection)
– Acts as the most effective SQA filter
– Conducted by software engineers for software engineers
– Effectively uncovers errors and improves software quality
– Has been shown to be up to 75% effective in uncovering design flaws
(which constitute 50-65% of all errors in software)
• Require the software engineers to expend time and effort, and the organization
to cover the costs.
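To make the quoted effectiveness figures concrete, the short calculation below assumes an invented total error count; only the 75% and 50–65% figures come from the text.

# Rough illustration of the quoted FTR effectiveness figures (total is an assumption).
total_errors = 200                      # assumed number of errors in a product
design_flaws = 0.5 * total_errors       # 50-65% of all errors are design flaws; use 50%
found_by_ftr = 0.75 * design_flaws      # FTR reported as up to 75% effective
print(f"Design flaws: {design_flaws:.0f}, potentially caught by FTR: {found_by_ftr:.0f}")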
In addition, the FTR serves as a training ground for junior software engineers to observe
different approaches to software analysis, design, and construction.
Promotes backup and continuity because a number of people become familiar with other
parts of the software.
Project managers must quantify those work products that are the primary targets for
formal technical reviews.
The sample of products that are reviewed must be representative of the products as a
whole.
Statistical Software Quality Assurance
Statistical quality assurance implies the following steps:
1) Collect and categorize information (i.e., causes) about software defects that occur.
2) Attempt to trace each defect to its underlying cause (e.g., nonconformance to
specifications, design error, violation of standards, poor communication with the
customer).
3) Using the Pareto principle (80% of defects can be traced to 20% of all causes), isolate the
20%.
Although hundreds of errors may be uncovered, all can be traced to one of a relatively small
number of underlying causes.
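As an illustration of the Pareto step, the sketch below counts defects by cause over a hypothetical defect log; the cause categories and counts are invented, not taken from the text.

# Sketch of isolating the "vital few" defect causes using the Pareto principle.
from collections import Counter

# Hypothetical defect log: each entry is the underlying cause assigned to one defect.
defect_causes = (
    ["incomplete specification"] * 54 + ["misinterpreted communication"] * 31 +
    ["design error"] * 12 + ["violation of standards"] * 8 + ["data handling error"] * 5
)

counts = Counter(defect_causes)
total = sum(counts.values())
running = 0
print("Causes accounting for ~80% of defects:")
for cause, n in counts.most_common():
    running += n
    print(f"  {cause}: {n} ({n / total:.0%})")
    if running / total >= 0.8:
        break

In this sample data, two or three causes account for roughly 80% of the defects, which is where corrective effort would be concentrated first.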
Software reliability
1. Reliability Definition:
o The reliability of software is the likelihood that the software will function without
failure under normal operational conditions for a defined period. This is usually
expressed as a probability or percentage, with a higher value representing more
reliable software.
2. Failure:
o A failure occurs when the software's observed behavior departs from its specified
requirements during execution.
3. Mean Time To Failure (MTTF):
o MTTF is used to predict the time until the first failure of a software system or
component. It is typically applied to non-repairable systems where a failure cannot
be fixed, but components may be replaced.
4. Failure Rate:
o The failure rate is the frequency with which failures occur, usually expressed as the
number of failures per unit of operating time; under a constant-rate (exponential)
model it is the reciprocal of MTTF.
5. Fault vs. Failure:
o A fault is a defect in the software code or design that may potentially lead to a
failure, whereas a failure occurs when the fault causes the software to behave
incorrectly or undesirably during execution.
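The small numerical sketch below relates these quantities, assuming an exponential failure model; the failure times are invented sample data, not measurements from the text.

# Illustrative reliability calculation assuming an exponential failure model
# (the failure times below are invented sample data).
import math

failure_times_hours = [120, 150, 90, 200, 140]         # observed times to failure
mttf = sum(failure_times_hours) / len(failure_times_hours)
failure_rate = 1 / mttf                                 # failures per hour

# Probability of failure-free operation over a 100-hour mission: R(t) = e^(-lambda * t)
t = 100
reliability = math.exp(-failure_rate * t)
print(f"MTTF = {mttf:.0f} hours, failure rate = {failure_rate:.4f} per hour")
print(f"Reliability over {t} hours is about {reliability:.2f}")

With these sample figures the MTTF is 140 hours, so the probability of running 100 hours without failure comes out to roughly 0.49.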
Factors Affecting Software Reliability
1. Code Quality:
o Poorly written or untested code is more likely to contain defects that lead to
software failures. Using coding standards, code reviews, and unit testing helps
improve code reliability.
2. Software Testing:
o Automated testing tools and continuous integration can help ensure that software
reliability is maintained throughout the development process.
3. System Complexity:
o As the software system becomes more complex, the chances of introducing errors
or failures increase. Proper system design, modularization, and maintaining low
complexity can improve reliability.
4. Environmental Factors:
o Differences in operating systems, hardware, and network conditions between the
development and production environments can expose failures that were not seen
during testing.
5. Error Handling:
o Robust error handling can prevent software from failing when unexpected
conditions arise. Proper logging, exception handling, and fallback mechanisms
help keep the system running when errors occur (a small sketch follows this list).
6. Maintenance:
o Changes made during maintenance can introduce new faults; disciplined change
control and regression testing help preserve reliability over time.
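The sketch below illustrates the error-handling ideas from point 5: logging, exception handling, and a fallback path. The fetch_config function, the file name, and the default values are hypothetical names used only for this example.

# Sketch of robust error handling: logging, exception handling, and a fallback path.
# fetch_config and DEFAULT_CONFIG are hypothetical names used only for illustration.
import json
import logging

logging.basicConfig(level=logging.INFO)
DEFAULT_CONFIG = {"timeout_seconds": 30}

def fetch_config(path: str) -> dict:
    """Load configuration, falling back to safe defaults if the file is unusable."""
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError) as exc:
        logging.warning("Could not load %s (%s); using default configuration", path, exc)
        return DEFAULT_CONFIG          # fallback keeps the system running

config = fetch_config("app_config.json")
print(config)

If the file is missing or malformed, the failure is logged and the program continues with safe defaults instead of crashing, which is the behaviour robust error handling aims for.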
ISO 9000 can help a company satisfy its customers, meet regulatory requirements, and
achieve continual improvement.
ISO 9000 Series standards
Individuals and organizations cannot be certified to ISO 9000. ISO 9001 is the only
standard within the ISO 9000 family to which organizations can certify.
The ISO 9000:2015 and ISO 9001:2015 standards are based on seven quality management
principles that senior management can apply for organizational improvement:
1. Customer focus
o Understand the needs of existing and future customers
o Align organizational objectives with customer needs and expectations
o Aim to exceed customer expectations
2. Leadership
o Establish a vision and direction for the organization
o Set challenging goals
o Model organizational values
o Establish trust
o Equip and empower employees
o Recognize employee contributions
3. Engagement of people
o Ensure that people’s abilities are used and valued
o Make people accountable
o Enable participation in continual improvement
o Evaluate individual performance
o Enable learning and knowledge sharing
o Enable open discussion of problems and constraints
4. Process approach
o Manage activities as processes
o Measure the capability of activities
o Identify linkages between activities
o Prioritize improvement opportunities
o Deploy resources effectively
5. Improvement
o Improve organizational performance and capabilities
o Align improvement activities
o Empower people to make improvements
o Measure improvement consistently
o Celebrate improvements
6. Evidence-based decision making
o Ensure the accessibility of accurate and reliable data
o Use appropriate methods to analyze data
o Make decisions based on analysis
7. Relationship management
o Identify and select suppliers to manage costs, optimize resources, and create value
o Establish relationships considering both the short and long term
o Share expertise, resources, information, and plans with partners
o Collaborate on improvement and development activities
o Recognize supplier successes