
SOFTWARE ENGINEERING

UNIT V

PRODUCT METRICS:
SOFTWARE QUALITY

Software quality is defined as a field of study and practice that describes the desirable attributes
of software products. There are two main approaches to software quality:

1. Defect management

2. Quality attributes.

Software quality is the degree of conformance of a software product to requirements and expectations. Business requirements are related to end-user functionality, while expectations refer to behavior that affects the general application, like usability, security, and performance.

Software Quality Management System

A quality management system is the principal means used by an organization to ensure that the
products it develops have the desired quality.

A quality system consists of the following:

Managerial Structure and Individual Responsibilities: A quality system is the responsibility
of the organization as a whole. However, every organization has a separate quality department to
perform various quality system activities. The quality system of an organization should have the
support of top management. Without support for the quality system at a high level in the company,
few members of staff will take the quality system seriously.

Quality System Activities: The quality system activities encompass the following:

1. Auditing of projects
2. Review of the quality system
3. Development of standards, methods, and guidelines, etc.
4. Production of documents for the top management summarizing the effectiveness of the quality
system in the organization.

Factors of Software Quality (Quality Attributes)


The modern view of software quality associates a number of quality factors with software, such as
the following:

1. Portability: A software product is said to be portable if it can easily be made to work
in different operating system environments, on different machines, or with other software
products.
2. Usability: A software product has good usability if different categories of users (i.e.
expert and novice users) can easily invoke the functions of the product.
3. Reusability: A software product has good reusability if different modules of the
product can easily be reused to develop new products.
4. Correctness: A software product is correct if the different requirements as specified in the SRS
document have been correctly implemented.
5. Maintainability: A software product is maintainable if errors can be easily corrected as and when
they show up, new functions can be easily added to the product, and the
functionality of the product can be easily modified.
6. Reliability. Software is more reliable if it has fewer failures. Since software engineers
do not deliberately plan for their software to fail, reliability depends on the number and
type of mistakes they make. Designers can improve reliability by ensuring the software is
easy to implement and change, by testing it thoroughly, and also by ensuring that if failures
occur, the system can handle them or can recover easily.
7. Efficiency. The more efficient software is, the less it uses of CPU-time, memory, disk
space, network bandwidth, and other resources. This is important to customers in order to
reduce their costs of running the software, although with today’s powerful computers, CPU
time, memory and disk usage are less of a concern than in years gone by.
Software Quality Management System
A software quality management system comprises the methods that an organization uses to
develop products having the desired quality.
Managerial Structure
The quality system is the responsibility of the organization as a whole, and every organization has a
managerial structure for it.
Individual Responsibilities
Each individual in the organization must have defined responsibilities that are
reviewed by top management, and each individual in the system must take those responsibilities
seriously.
Quality System Activities
The activities that every quality system must include are:
1. Project auditing.
2. Review of the quality system.
3. Development of methods and guidelines.

Evolution of Quality Management System


Quality systems have evolved over the years. The evolution of a quality management system
has passed through the following stages:
1. Quality control: the main task of quality control is to detect defective products, and it also
helps in finding the causes that lead to the defects. It also helps in the correction of bugs.
2. Quality assurance: quality assurance helps an organization in making good quality products. It
also helps in improving the quality of the product by passing the product through a series of quality checks.
3. Total Quality Management (TQM): TQM checks and ensures that all procedures are
continuously improved through regular process measurements.
Software Measurement
A measurement is a manifestation of the size, quantity, amount, or dimension of a particular
attribute of a product or process.

Software measurement is a quantified attribute of a characteristic of a software product or the
software process. It is an important activity within software engineering.

The software measurement process is defined and governed by ISO standards (notably ISO/IEC 15939).


Software Measurement Principles:

The software measurement process can be characterized by five activities-


1. Formulation: The derivation of software measures and metrics appropriate for the
representation of the software that is being considered.
2. Collection: The mechanism used to accumulate data required to derive the formulated
metrics.
3. Analysis: The computation of metrics and the application of mathematical tools.
4. Interpretation: The evaluation of metric results in order to gain insight into the quality of the
representation.
5. Feedback: Recommendation derived from the interpretation of product metrics
transmitted to the software team.

Needs for Software Measurement:


Software is measured to:
 Assess the quality of the current product or process.
 Anticipate future qualities of the product or process.
 Enhance the quality of a product or process.
 Track the state of the project with respect to budget and schedule.
 Enable data-driven decision-making in project planning and control.
 Identify bottlenecks and areas for improvement to drive process improvement activities.
 Ensure that industry standards and regulations are followed.
 Give software products and processes a quantitative basis for evaluation.
 Enable the ongoing improvement of software development practices.

Classification of Software Measurement:


There are 2 types of software measurement:
1. Direct Measurement: In direct measurement, the product, process, or thing is
measured directly using a standard scale.
2. Indirect Measurement: In indirect measurement, the quantity or quality to be
measured is measured using related parameters i.e. by use of reference.
Software Metrics:
A metric is a measurement of the degree to which any attribute belongs to a system, product, or
process.
Software metrics are a quantifiable or countable assessment of the attributes of a software
product.

There are 4 functions related to software metrics:


1. Planning
2. Organizing
3. Controlling
4. Improving
Characteristics of software Metrics:
1. Quantitative: Metrics must possess quantitative nature. It means metrics can be
expressed in numerical values.
2. Understandable: Metric computation should be easily understood, and the method of
computing metrics should be clearly defined.
3. Applicability: Metrics should be applicable in the initial phases of the development of
the software.
4. Repeatable: When measured repeatedly, the metric values should be the same and
consistent in nature.
5. Economical: The computation of metrics should be economical.
6. Language Independent: Metrics should not depend on any programming language.
Classification of Software Metrics:
There are 3 types of software metrics:
1. Product Metrics: Product metrics are used to evaluate the state of the product, tracking
risks and uncovering prospective problem areas. The ability of the team to control quality
is evaluated. Examples include lines of code, cyclomatic complexity, code coverage, defect
density, and code maintainability index.
2. Process Metrics: Process metrics pay particular attention to enhancing the long-term
process of the team or organization. These metrics are used to optimize the development
process and maintenance activities of software. Examples include effort variance, schedule
variance, defect injection rate, and lead time.
3. Project Metrics: Project metrics describe the characteristics and execution of a
project. Examples include effort estimation accuracy, schedule deviation, cost variance,
and productivity.
They usually measure:
 Number of software developers
 Staffing patterns over the life cycle of the software
 Cost and schedule
 Productivity
Advantages of Software Metrics:
1. Reduction in cost or budget.
2. It helps to identify the particular area for improvising.
3. It helps to increase the product quality.
4. Managing the workloads and teams.
5. Reduction in overall time to produce the product.
6. It helps to determine the complexity of the code and to test the code with appropriate resources.
7. It helps in providing effective planning, controlling and managing of the entire product.

Disadvantages of Software Metrics:


1. It is expensive and difficult to implement the metrics in some cases.
2. Performance of the entire team or an individual from the team can’t be determined.
Only the performance of the product is determined.
3. Sometimes the quality of the product does not meet expectations.
4. It can lead to measuring unwanted data, which is a waste of time.
5. Measuring incorrect data leads to wrong decision-making.

Product Metrics landscape


Product metrics in software engineering refer to the quantifiable measurements used to assess
the characteristics and performance of software products throughout their development and
maintenance lifecycle.
These metrics provide valuable insights into various aspects of software quality, effectiveness,
efficiency, and reliability. By employing a comprehensive product metrics framework, software
engineering teams can gain a deeper understanding of their products, make data-driven decisions,
and continuously improve their software development processes.

Product metrics encompass a wide range of quantitative measurements that evaluate different
dimensions of software systems. These metrics can be classified into various categories, such as
Metrics for Investment & Capacity, Metrics for Quality, Metrics for Process, and Metrics for
Progress.

A robust product metrics strategy involves selecting the appropriate metrics for a given software
project and collecting the necessary data throughout the development process. It is crucial to
choose metrics that align with the project’s objectives and address its specific requirements.
Once the most appropriate metrics are identified, they need to be consistently measured and
analyzed to provide meaningful insights. Tracking product metrics over time allows for trend
analysis and enables comparisons between different releases or versions. By establishing
benchmarks and targets, software engineering teams can set performance goals and track
progress toward achieving them.

Product metrics are software product measures at any stage of their development, from
requirements to established systems. Product metrics are related to software features only.

Product metrics fall into two classes:

1. Dynamic metrics, which are collected by measurements made from a program in execution.
2. Static metrics, which are collected by measurements made from system representations
such as design, programs, or documentation.

Dynamic metrics help in assessing the efficiency and reliability of a program, while static
metrics help in understanding and maintaining the complexity of a software system.

Dynamic metrics are usually quite closely related to software quality attributes. It is relatively
easy to measure the execution time required for particular tasks and to estimate the time
required to start the system; these are directly related to the efficiency of the system. Failures
and the types of failure can be logged and directly related to the reliability of the software. On
the other hand, static metrics have an indirect relationship with quality attributes. A large
number of these metrics have been proposed in an attempt to derive and validate the relationship
between complexity, understandability, and maintainability. Several static metrics that have
been used for assessing quality attributes are given in the table below; a short computational
sketch of two of them follows it. Of these, program or component length and control complexity
seem to be the most reliable predictors of understandability, system complexity, and
maintainability.

Software Product Metrics:

(1) Fan-in / Fan-out: Fan-in is a measure of the number of functions that call some other
function (say X). Fan-out is the number of functions that are called by function X. A high value
for fan-in means that X is tightly coupled to the rest of the design, and changes to X will have
extensive knock-on effects. A high value for fan-out suggests that the overall complexity of the
control logic needed to coordinate the called components is high.

(2) Length of code: This is a measure of the size of a program. Generally, the larger the size of
the code of a program component, the more complex and error-prone that component is likely to be.

(3) Cyclomatic complexity: This is a measure of the control complexity of a program. This control
complexity may be related to program understandability.

(4) Length of identifiers: This is a measure of the average length of distinct identifiers in a
program. The longer the identifiers, the more understandable the program.

(5) Depth of conditional nesting: This is a measure of the depth of nesting of if statements in a
program. Deeply nested if statements are hard to understand and are potentially error-prone.

(6) Fog index: This is a measure of the average length of words and sentences in documents. The
higher the value for the Fog index, the more difficult the document may be to understand.
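As a rough illustration of how two of the metrics above could be computed, the Python sketch below derives fan-in and fan-out from a call graph and approximates cyclomatic complexity as the number of binary decision points plus one. The call graph and the decision count are made-up inputs for the example, not part of the original notes.

# Hypothetical call graph: each function maps to the functions it calls.
call_graph = {
    "parse": ["read_file", "tokenize"],
    "tokenize": ["read_file"],
    "report": ["parse", "tokenize"],
    "read_file": [],
}

def fan_out(func):
    # Number of distinct functions called by func.
    return len(set(call_graph.get(func, [])))

def fan_in(func):
    # Number of distinct functions that call func.
    return sum(1 for callees in call_graph.values() if func in callees)

def cyclomatic_complexity(decision_points):
    # McCabe's V(G) for a single-entry, single-exit routine.
    return decision_points + 1

for f in call_graph:
    print(f, "fan-in:", fan_in(f), "fan-out:", fan_out(f))

print("V(G) for a routine with 4 decisions:", cyclomatic_complexity(4))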

Types of Metrics in Software Engineering

In software engineering, product metrics play a crucial role in evaluating the quality,
performance, and effectiveness of software systems. There are different types of metrics in
software engineering, including:

1. Metrics For Investment & Capacity


2. Metrics for Quality
3. Metrics for Process
4. Metrics for Progress

Software measurement, which encompasses the collection and analysis of product metrics, is
vital in software engineering. It provides objective data to evaluate the performance of software
products and development processes. Software metrics help identify areas for improvement,
track progress, and make data-driven decisions. These metrics assist in assessing the impact of
process changes, evaluating development methodologies, and allocating resources effectively.

Software Metrics For Investment & Capacity

There’s a lot of hype out there around shipping software faster, and there’s no doubt that speed to
market is important. But shipping code quickly is not helpful if you don’t know it’s the right
feature to be building to begin with. For many organizations, the engineering team is the largest
investment your company is making, so it’s important to ensure that this investment is being
pointed in the right direction.
It’s also important to recognize that this investment is not limitless and avoid the trap of
overcommitting engineering resources at the expense of spreading the focus too thin. At the end
of the day, engineering is a zero-sum game. Given the same number of engineers and dollars, the
team cannot build everything, so prioritizing is paramount. Understanding the overall capacity of
your engineering teams and mapping how the work they do breaks down by logical categories or
allocations will help you ensure your organization is providing maximal value to your company and
your customers.

Two KPIs that will help you do that are: Category or Investment Breakdown, also
called Allocation, and Ramp Time.

 Allocation: a way of visualizing how close your team is to that goal by breaking down
the work they do across axes that matter to the business.
 Ramp Time: tracking how long it takes before new software engineers contribute fully (a small calculation sketch follows this list).
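A minimal Python sketch of these two KPIs, assuming effort is logged per work category and that ramp time is proxied by the interval from a hire's start date to a first sustained contribution. The data and the proxy are illustrative assumptions, not prescribed definitions.

from datetime import date

# Hypothetical person-days of effort logged per work category this quarter.
effort_by_category = {"new features": 260, "bugs": 90, "tech debt": 70, "support": 40}
total_days = sum(effort_by_category.values())
allocation = {category: round(100 * days / total_days, 1)
              for category, days in effort_by_category.items()}

# Ramp time: days from a hire's start date to a first sustained contribution
# (here proxied by an assumed "first significant merged change" date).
hires = [
    {"start": date(2024, 1, 8), "first_contribution": date(2024, 2, 12)},
    {"start": date(2024, 2, 5), "first_contribution": date(2024, 3, 4)},
]
ramp_days = [(h["first_contribution"] - h["start"]).days for h in hires]
average_ramp_time = sum(ramp_days) / len(ramp_days)

print(allocation, average_ramp_time)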

Software Quality Metrics

Software quality metrics are essential tools in software quality assurance, providing quantitative
measurements to evaluate the quality of software products. These metrics are designed to assess
various aspects of software systems, such as functionality, reliability, maintainability, and
usability. By utilizing a comprehensive software quality metrics framework, organizations can
systematically measure and track the quality of their software products throughout the
development lifecycle.

Once you know your engineering team is building the right products and features, it’s important
to ensure that the software being developed can provide that value to your customers
consistently. Quality metrics measure that consistency. Any quality problem, whether that is a
bug, a glitch, or something else unforeseen, is a potential threat to successful delivery of value.
Unhappy customers are not simply a problem for the revenue team. Quality issues will
eventually come back to the engineers – your team.

It’s important to monitor quality metrics in order to minimize customer impact and ultimately to
maximize customer satisfaction and retention. Three important KPIs to keep a close eye on are:
Bugs, Time to Resolution, and Uptime.

 Bugs: By understanding the number and severity of bugs that exist per product or
feature, and comparing that with product or feature usage among your customer base, you will
have a better understanding of which bugs to prioritize fixing, and therefore where to devote
your resources.
 Time to Resolution: measure the time it takes to resolve reported bugs, failures, and
other incidents.
 Uptime: measures how consistently your product is delivering value (a small sketch of these three KPI calculations follows this list).
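A minimal sketch of how these three KPIs might be computed from incident records. The record format, field names, and the figures for the observation window are illustrative assumptions rather than a prescribed schema.

from datetime import datetime

# Hypothetical incident log with opened/resolved timestamps and severity.
incidents = [
    {"opened": datetime(2024, 3, 1, 9, 0), "resolved": datetime(2024, 3, 1, 17, 0), "severity": "high"},
    {"opened": datetime(2024, 3, 3, 10, 0), "resolved": datetime(2024, 3, 5, 10, 0), "severity": "low"},
]

# Bugs: how many defects were reported in the period (could also be split by severity).
bug_count = len(incidents)

# Time to Resolution: average hours from report to fix.
hours_to_resolve = [(i["resolved"] - i["opened"]).total_seconds() / 3600 for i in incidents]
mean_time_to_resolution = sum(hours_to_resolve) / len(hours_to_resolve)

# Uptime: percentage of the period during which the product was available.
period_hours = 30 * 24        # a 30-day month (assumed observation window)
downtime_hours = 2.5          # total outage time in the window (assumed)
uptime_percent = 100 * (period_hours - downtime_hours) / period_hours

print(bug_count, round(mean_time_to_resolution, 1), round(uptime_percent, 3))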

By tracking and analyzing software quality metrics, organizations can improve the overall
quality of their software products, enhance customer satisfaction, and reduce maintenance efforts
and costs. Furthermore, these metrics enable organizations to benchmark their software quality
against industry standards and best practices, driving continuous improvement and ensuring
competitiveness in the market.
Process Metrics in Software Engineering

Process metrics in software engineering refer to the quantitative measurements used to evaluate
and monitor the effectiveness, efficiency, and quality of the software development process itself.
These metrics focus on the activities, workflows, and practices employed throughout the
software development lifecycle. By analyzing process metrics, software engineering teams can
identify areas for improvement and make data-driven decisions to enhance their development
processes.

Delivering code faster is where most vendors and vocal members of the tech community focus. But
delivering predictably is important to set proper expectations, drive
alignment across functional teams, and allow for better execution and therefore better penetration
of the market for that value your organization has worked so hard to build. Process metrics for
software engineering teams include Cycle Time & Lead Time, Deployment Frequency, and Task
(or other unit of work) Resolution Rate Over Time.

 Cycle time: Measures the time taken to complete a specific task or process.
 Lead time: Measures the time elapsed from the initiation of a software development task
to its completion
 Deployment Frequency: Tracks how fast and iterative a software engineering team is at
delivering value
 Task Resolution Rate Over Time: Tracks how a team is trending over completing work

These metrics help teams evaluate the efficiency of their development processes, identify
bottlenecks or inefficiencies, and make adjustments to improve overall performance.
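As a hedged illustration of the process metrics just described, the Python sketch below computes cycle time, lead time, and deployment frequency from work-item timestamps. The field names (created, started, finished) and the deployment list are assumptions made for the example.

from datetime import date

# Hypothetical work items with request, start, and completion dates.
items = [
    {"created": date(2024, 5, 1), "started": date(2024, 5, 3), "finished": date(2024, 5, 7)},
    {"created": date(2024, 5, 2), "started": date(2024, 5, 6), "finished": date(2024, 5, 9)},
]

# Cycle time: start of work to completion. Lead time: request to completion.
cycle_times = [(i["finished"] - i["started"]).days for i in items]
lead_times = [(i["finished"] - i["created"]).days for i in items]
average_cycle_time = sum(cycle_times) / len(cycle_times)
average_lead_time = sum(lead_times) / len(lead_times)

# Deployment frequency: deployments per week over an observation window.
deployments = [date(2024, 5, 2), date(2024, 5, 6), date(2024, 5, 8), date(2024, 5, 10)]
observation_weeks = 2
deployment_frequency = len(deployments) / observation_weeks

print(average_cycle_time, average_lead_time, deployment_frequency)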

Project Metrics in Software Engineering


Project metrics in software engineering are the quantifiable measurements used to assess the
performance and progress of software development projects. These metrics provide valuable
insights into various aspects of project management, including schedule adherence, resource
utilization, cost control, and overall project quality. By tracking and analyzing project metrics,
software engineering teams can effectively manage projects and ensure successful project
delivery.

While Sales and Marketing teams (and their leaders) may have a lot of influence with the
executives and board, they all depend on the Engineering teams to deliver. At the end of the day,
these business leaders need to plan around delivery for sales and marketing timelines.

With that in mind, it’s important to be able to measure and report on the progress your team is
making toward that value creation in order to drive alignment across all the teams in your
company.

How you present these topics to business leaders will largely depend on the leaders you work
with and their preferences. But some great KPIs to track and stay on top of things internally are:
Completion / Burn down Percentage, and Predicted Ship Date.
 Completion / Burn down Percentage: measures the trend of work that has been
completed vs. what remains to be done over a certain period of time.
 Predicted Ship Date: an estimate of when a given release, project, feature, or product
will ship (a small calculation sketch follows this list).
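A small sketch of the two KPIs above, assuming story points as the unit of work and a simple average-velocity extrapolation for the predicted ship date. Both the data and the extrapolation rule are illustrative assumptions.

from datetime import date, timedelta

total_story_points = 120      # planned scope for the release (assumed)
completed_points = 78         # work finished so far (assumed)

# Completion / burn-down percentage.
completion_percent = 100 * completed_points / total_story_points

# Predicted ship date: extrapolate the remaining work at the average velocity.
velocity_per_week = 12        # average points completed per week (assumed)
remaining_points = total_story_points - completed_points
weeks_remaining = remaining_points / velocity_per_week
predicted_ship_date = date(2024, 6, 1) + timedelta(weeks=weeks_remaining)

print(round(completion_percent, 1), predicted_ship_date)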

By employing project metrics in project management, software engineering teams can be a
driver in ensuring that your team is getting the product into the hands of users. Better still, if you can
take a data-driven approach that factors in things like scope creep over time, metric averages
from other projects of similar size and scope, and burn down percentage through the life of the
project, you’ll be able to give a more confident estimation. These metrics allow teams to
proactively address quality-related issues and manage risks associated with software
development projects. Advanced Engineering Management Platforms can even incorporate these
factors into a prediction or forecast for each of your deliverables.

Software Reliability in Software Engineering


Software reliability is a critical aspect of software engineering that focuses on the ability of a
software system to consistently perform its intended functions without failure or errors over a
specific period of time. Reliability measures the system’s ability to operate correctly under
various conditions and to withstand the occurrence of software defects or failures. Software
reliability is crucial for ensuring user satisfaction, maintaining system integrity, and minimizing
the impact of software failures on business operations.

Software defect metrics play a key role in assessing and improving software reliability. These
metrics are used to quantify the number and severity of defects or issues within the software
system. Examples of software defect metrics include defect density, which measures the number
of defects per unit of code, and mean time to failure, which calculates the average time between
failures in the system.

These metrics help identify areas of the software that are prone to defects and allow for targeted
efforts to improve reliability.
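To make the two defect metrics mentioned above concrete, here is a minimal Python sketch; the defect counts, code size, and failure timestamps are invented for illustration.

# Defect density: defects per unit of code, commonly per thousand lines (KLOC).
defects_found = 46
lines_of_code = 18_500
defect_density = defects_found / (lines_of_code / 1000)    # defects per KLOC

# Mean time to failure: average operating time between successive failures.
failure_times_hours = [120, 340, 610, 980]                 # cumulative hours at each failure (assumed)
intervals = [later - earlier
             for earlier, later in zip([0] + failure_times_hours, failure_times_hours)]
mean_time_to_failure = sum(intervals) / len(intervals)

print(round(defect_density, 2), "defects/KLOC;", round(mean_time_to_failure, 1), "hours MTTF")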

Improving software reliability requires a proactive approach to defect prevention, detection, and
removal. Software engineering teams utilize various techniques such as rigorous testing and code
review to identify and fix defects early in the development lifecycle. By continuously monitoring
software defect metrics, teams can track the effectiveness of these measures and implement
corrective actions as needed.

Software reliability is essential in industries where system failures can have significant
consequences, such as healthcare, finance, and aviation. Reliability instills trust in the software
product and helps minimize financial losses and reputational damage due to software failures.
Software engineering teams should prioritize software reliability and employ effective defect
metrics to improve the overall quality and dependability of their software systems.

Software Metrics
A software metric is a measure of software characteristics which are measurable or countable.
Software metrics are valuable for many reasons, including measuring software performance,
planning work items, measuring productivity, and many other uses.
Within the software development process, there are many metrics that are all connected. Software
metrics are similar to the four functions of management: planning, organization, control, and
improvement.

Classification of Software Metrics


Software metrics can be classified into two types as follows:

1. Product Metrics: These are the measures of various characteristics of the software product.
The two important software characteristics are:

i. Size and complexity of software.


ii. Quality and reliability of software.

2. Process Metrics: These are the measures of various characteristics of the software
development process. For example, the efficiency of fault detection. They are used to measure
the characteristics of methods, techniques, and tools that are used for developing software.

Types of Metrics
Internal metrics: Internal metrics are the metrics used for measuring properties that are viewed
to be of greater importance to a software developer. For example, Lines of Code (LOC) measure.

External metrics: External metrics are the metrics used for measuring properties that are viewed
to be of greater importance to the user, e.g., portability, reliability, functionality, usability, etc.

Hybrid metrics: Hybrid metrics are the metrics that combine product, process, and resource
metrics. For example, cost per FP where FP stands for Function Point Metric.

Project metrics: Project metrics are the metrics used by the project manager to check the
project's progress. Data from past projects are used to collect various metrics, like time and
cost; these estimates are used as a baseline for new software. Note that as the project proceeds, the
project manager will check its progress from time-to-time and will compare the effort, cost, and
time with the original effort, cost and time. Also understand that these metrics are used to
decrease the development costs, time efforts and risks. The project quality can also be improved.
As quality improves, the number of errors and time, as well as cost required, is also reduced.

Advantage of Software Metrics


i. Comparative study of various design methodology of software systems.
ii. For analysis, comparison, and critical study of different programming languages
with respect to their characteristics.
iii. In comparing and evaluating the capabilities and productivity of people involved
in software development.
iv. In the preparation of software quality specifications.
v. In the verification of compliance of software systems requirements and
specifications.
vi. In making inferences about the effort to be put into the design and development of
the software systems.
vii. In getting an idea about the complexity of the code.
viii. In deciding whether further division of a complex module should be done or not.
ix. In guiding resource managers toward proper resource utilization.
x. In comparison and making design tradeoffs between software development and
maintenance cost.
xi. In providing feedback to software managers about the progress and quality during
various phases of the software development life cycle.
xii. In the allocation of testing resources for testing the code.

Disadvantage of Software Metrics


i. The application of software metrics is not always easy, and in some cases, it is difficult
and costly.
ii. The verification and justification of software metrics are based on historical/empirical
data whose validity is difficult to verify.
iii. These are useful for managing software products but not for evaluating the performance
of the technical staff.
iv. The definition and derivation of software metrics are usually based on assumptions
that are not standardized and may depend upon the tools available and the working environment.
v. Most of the predictive models rely on estimates of certain variables which are often not
known precisely.

Function points are one of the most commonly used measures of software size and
complexity. In this section, we'll take a look at what function points are, how they're
calculated, and some of the benefits and drawbacks of using them. Whether you're
just getting started in software engineering or you've been doing it for years,
understanding them can be a valuable tool in your development arsenal.
Defining function points and how they are used in software engineering

A function point is a unit of measurement used to quantify the amount of business functionality
being delivered by a software application. Function points allow software engineers to better
measure the size of a project, identify areas in need of optimization, and analyze development
performance benchmarks over time. Due to its accuracy and flexibility, function point analysis
has become a standardized tool for studying application complexity. Through the evaluation of
data elements, transaction types, external inputs, outputs and inquiries within an application,
calculations are performed to translate user requirements into uniform standards which can then
be measured against industry baselines. This generalized method for measuring virtually any
type of IT system ensures engineers have the data needed to correctly assess the scope of a
project for cost estimation and process improvement purposes.
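As a hedged sketch of how an unadjusted function point count is typically produced, the Python snippet below weights counts of the five standard element types (external inputs, outputs, inquiries, internal logical files, external interface files) by commonly cited average-complexity weights, then applies a value adjustment factor. The weights, counts, and ratings are illustrative and should be checked against the IFPUG counting manual before use.

# Commonly cited average-complexity weights for the five function point element
# types (illustrative; check the IFPUG counting manual for authoritative values).
AVERAGE_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_logical_files": 10,
    "external_interface_files": 7,
}

# Hypothetical counts taken from a requirements analysis of the application.
counts = {
    "external_inputs": 24,
    "external_outputs": 16,
    "external_inquiries": 22,
    "internal_logical_files": 4,
    "external_interface_files": 2,
}

unadjusted_fp = sum(AVERAGE_WEIGHTS[k] * counts[k] for k in AVERAGE_WEIGHTS)

# The unadjusted count is then scaled by a value adjustment factor derived from
# 14 general system characteristics, each rated 0..5: VAF = 0.65 + 0.01 * sum.
gsc_rating_total = 38          # assumed total of the 14 ratings
value_adjustment_factor = 0.65 + 0.01 * gsc_rating_total
adjusted_fp = unadjusted_fp * value_adjustment_factor

print(unadjusted_fp, round(adjusted_fp, 1))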

The benefits of using function points to measure software size

Function point analysis is an effective method for measuring software size. By focusing on the
features and functions that a user can access and use, this metric can accurately determine the
complexity of an application. They are especially useful when comparing projects of different
sizes; they provide a consistent method for assessing and measuring project scope, instead of
asking developers to estimate the number of lines of code each project requires. Additionally,
because they take into account differences between programming languages, function points are
particularly beneficial during development or design changes. Not only do they allow insightful
comparisons between systems of differing sizes and structures, but also provide a wide range of
other benefits such as identifying development bottlenecks or highlighting discrepancies in
customer specifications. Ultimately, they offer an effective way to measure the size of software
projects at any stage in their lifecycle, from planning to implementation.

Comparing and contrasting with other measures of software size, such as lines of code

Function points serve as an effective measure for determining the relative size of a software
system as it allows for quantifying the efforts associated with developing and maintaining the
system. Initially limited to only certain languages, such as COBOL, function points have since
become more dynamic and flexible in accommodating other widely used languages like Java.

In comparison to lines of code (LOC) metrics, function points are capable of providing a more
holistic view in terms of evaluating the complexity of software development that takes into
account key factors such as data elements, data files, user inputs and outputs. Moreover, since
function points take into consideration the type of each element instead of merely counting them,
it helps to assign a proper “size” metric even if the code varies drastically in length.

Thus, while both approaches are essential in measuring software size, function point analysis
offers more varied and specific aspects which makes it more advantageous compared to LOC
metrics.

Some examples of how they can be used to estimate the cost and effort required for a
software project

Function points can be a valuable tool for determining the expense, time, and effort needed to
successfully complete a software project. The cost calculation of such projects can vary
depending on their scope and complexity, making function points an advantageous tool for
estimating the relative cost. For example, the Inputs Model is an effective method for calculating
time and effort which uses values associated with the number of user inputs to determine the
project’s cost. The Outputs Model also takes into account factors such as output record types,
inquiries, interface files, external programs and reports to better determine total project cost.
These models highlight how function points effectively identify even subtle elements of a
software project in order to make accurate estimations.
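One common, admittedly simplistic way to turn a function point count into effort and cost estimates is to multiply it by an organization's historical productivity and labor rates, as in the sketch below. The figures are placeholders for illustration, not benchmarks.

function_points = 318            # e.g. the unadjusted count from the earlier sketch (assumed)
hours_per_function_point = 6.5   # historical productivity for this team (assumed)
cost_per_hour = 55.0             # blended labor rate (assumed)

estimated_effort_hours = function_points * hours_per_function_point
estimated_cost = estimated_effort_hours * cost_per_hour

print(estimated_effort_hours, estimated_cost)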

Some of the drawbacks of using Function Points

Function points are widely used as a measure of software size, but their use also comes with
potential drawbacks. The difficulty of assigning accurate values to the complexity factors makes
function point analysis unreliable. Further, installing and running a software sizing tool can be
expensive, making it impractical for smaller teams or projects.

Moreover, since different types of user experiences require different weights when assessing the
complexity of an application, manually determining the weights is both time-consuming and
subjective.

Finally, it should also be noted that parameters like bugs or maintenance support have no
representation, which further reduces their effectiveness as an absolute measure of software size.

In conclusion, function points have emerged as a key tool for measuring the size of a software
product. Not only do they provide an objective metric for sizing software, but they are also easy
to count and allow for accurate comparison between different programs. Furthermore, the use of
function points facilitates faster and more accurate estimates of cost and effort associated with a
software project. However, it’s important to remember that the use of function points isn’t
without its drawbacks; potential miscalculations may impact on both finance and timeline
estimates.


What Is Risk Management in Software Engineering


Risk management in software engineering is both the process and the strategy for removing or
minimizing risk involved in software projects.

It entails assessing, identifying, and prioritizing project risks or uncertainties. With it comes
planning, monitoring, managing, and mitigating risks. Even meticulously planned projects come
with their fair share of risk factors.

That’s why project managers need to account for these potential risks in their planning process,
by including project scope and cost estimates to stay on track. Also, they need mitigation
strategies to handle risks that arise throughout the project lifecycle.
Principles of Risk Management in Software Engineering
Risk management can be broken down into five key principles:

1. Risk Identification

2. Risk Assessment

3. Risk Handling

4. Risk Mitigation and Control

5. Risk Monitoring and Review

Risk Identification
The first step to handling or mitigating risks is to identify the risks at play.

This is an active process that should be taken care of by a dedicated project manager or other
member of the team. Their job is to unpack the project’s details and identify any and all potential
risks that could be a setback.

In general, it’s important to identify three main types of risk that crop up frequently in
software projects:

1. Technical risk – which involves the choice of technology, integration of tools and
software, hardware setup, security, data protection, dependencies, and any other technical
factor that could potentially derail the project or significantly change its scope or costs.

2. Project risk – that includes the software project’s timeline, budget, resources, scope, and
software requirements. This could stem from technical risks or other factors like internal
resource constraints, stakeholder changes, or scheduling conflicts.

3. Business risk – which includes any risk associated with the business outcome of the
project; be it the costs of development, business requirements, marketability, or cost to
operate.
Using a checklist or systematic analysis, there should be a workflow for identifying as many
risks as possible, upfront in the project. At this stage, they only need to be potential risks, and
they can be of any size or severity.

Risk Assessment
Once the risks have been identified and broadly categorized, you’ll need a framework to assess
them further, and prioritize the actions and resources needed to address them.

Internal vs. External Risks


First, you’ll want to further categorize each risk according to whether it’s
an internal or external risk factor.

Internal Risks
Internal risks are within reach of the organization. They can either be handled directly as part of
the project planning; or you may need to take mitigation, control, or monitoring steps to reduce
the risk, if unable to eliminate it entirely.

Examples include:

 Technical issues: incompatible technology, systems, or frameworks

 Resource management: organizational turnover, or lack of budget and technical equipment

 Project management: shifting project schedule, metrics, or budgets

 Requirements management: lack of approvals, ambiguous software testing requirements, or overly broad specifications

External risks
External risks originate outside, and are beyond the organization’s (or project team’s) control.
In most cases, external risks can only be managed (or minimized) through mitigation, control, or
monitoring strategies. In some cases, the risk is simply unavoidable.

External risks include:

 Market risks: unruly market trends or competitive environments


 Legal and regulatory risks: changing laws or regulations

 Vendor risks: failure to deliver or meet requirements, bankruptcy, or personnel issues

At this stage, you should combine both categories in order to group your software development
risks by type and origin.

This is important because it will help you determine whether you should consider risk handling
to address your risks, or risk mitigation in an attempt to shield the project from the impacts of
your risks.

Risk Impact
Next, you’ll want to score each risk according to its potential impact on the project.

You can do this by assessing each risk according to two factors:

1. Severity – the size or magnitude of the impact this risk could have on the overall project.

2. Probability of occurrence – how likely this risk is to occur and/or have the expected
outcome.
Risk Prioritization & Planning
Finally, you can use the risk impact assessment to prioritize those risks that require action.
Ultimately, each one of these risks should have a prioritization and an action plan.

Prioritize solutions for the highest impact risks — which are most likely to happen, and have the
greatest effect on the success of your project.

Then work backwards. The lowest-impact risks may be acceptable risks that require no further
action.

For each risk, you'll then need to determine the best course of action. In broad strokes, most
risks will require one of three types of action plan:

1. Risk handling: direct actions taken to eliminate the risk or greatly reduce its likelihood
or impact

2. Risk mitigation and control: systemic actions to reduce or eliminate the risk

3. Risk monitoring and review: ongoing actions to assess or measure the risk

Risk Handling
First up: handle the risk outright.

Whenever possible, project plans should include actionable steps to address any potential risks
that are internal and addressable through targeted intervention.

In other words: fix the stuff you can, before it becomes a problem.

For example, one of the technical risks you identify could be that your team is planning to
implement a new technology they’ve never deployed for this type of project. That would
represent a significant risk.

A simple risk-handling measure would be to choose a proven technology instead; or conduct a
deeper analysis to confirm the viability of this new tech before committing to its use.

Team members can get creative with risk handling. Sometimes it’s helpful to conduct an old-
fashioned brainstorming session where developers and stakeholders ideate on ways to eliminate
or resolve potential roadblocks.

But this is just one piece of the software risk management process.

Risk Mitigation and Control

Not every risk can be stopped or handled.

Many risks are either external (you have minimal control over them) or simply unresolvable.

In addition to handling the risks that have been identified through proactive measures, the next
step of an effective risk management strategy is mitigation and control; i.e. taking steps in order
to minimize, either the impact or the probability of the risk. (Or both!)

Usually, this involves implementing new policies, procedures, or processes that address the risk
at a systemic level.

Key example: project risks such as budget overages or missed deadlines are simply
unstoppable. It’s not a switch; you can’t just stop things from taking longer than they’re meant
to.

But these risks can be mitigated.


One risk mitigation strategy is to employ time tracking across the organization.

Using historical project and time data as part of the software project management process
enables smarter, data-driven decision making. Software development projects are more likely to
stay on track, and on budget, if the planning process is rooted in empirical data collected from
past projects, rather than generalized estimates and forecasts.

This improves the accuracy of the project scope, budgets, and timelines.

Again, this doesn’t eliminate the risk; but it introduces mechanisms that reduce the risk’s impact:

 Lesser probability of occurrence: estimated work hours, budgets, and timelines are
more likely to be accurate and less likely to become a significant risk factor.

 Lesser severity: in cases where estimates are inaccurate, they’re more likely to be within
an acceptable margin of error if based on real-world data.

The key here is to uncover strategies that allow you to contain, minimize, and manage risks using
levers that are within your control.

Other mitigation strategies would include things like transferring the risk. This could mean
outsourcing to a third-party contractor, using an insurance policy to cover potential loss,
partnering with another company, or using a penalty clause in a vendor agreement.

Risk Monitoring and Review

Some risks are entirely unmanageable and cannot be mitigated. The best you can do is to monitor
the situation.

Once you’ve implemented risk handling and risk mitigation strategies for all of the risks that can
be addressed through one of these two processes, the remaining risks should be thoroughly
documented, and monitoring plans should be set in place.

For example, if you’re unable to draw from historical developer time data as part of your
mitigation strategy, you can instead choose to deploy time tracking for the duration of the
project. In that case, you end up with a monitoring strategy.

You’ll be able to measure and track the timeline of the project better, and intervene if the data
suggests that the project is at risk.
Principle of Risk Management

1. Global Perspective: In this, we review the bigger system description, design, and
implementation. We look at the chance and the impact the risk is going to have.
2. Take a forward-looking view: Consider the threat which may appear in the future and
create future plans for directing the next events.
3. Open Communication: This is to allow the free flow of communications between the
client and the team members so that they have certainty about the risks.
4. Integrated management: In this method risk management is made an integral part of
project management.
5. Continuous process: In this phase, the risks are tracked continuously throughout the risk
management paradigm.

Risk Management Activities


Risk management consists of three main activities.
Risk Assessment
The objective of risk assessment is to rank the risks in terms of their damage-causing
potential. For risk assessment, first, every risk should be rated in two ways:

o The likelihood of the risk becoming true (denoted as r).

o The consequence of the problems associated with that risk (denoted as s).

Based on these two factors, the priority of each risk can be estimated:

p = r * s

where p is the priority with which the risk must be controlled, r is the probability of the risk
becoming true, and s is the severity of loss caused when the risk becomes true. If all identified
risks are prioritized, then the most likely and most damaging risks can be handled first, and more
comprehensive risk abatement methods can be designed for these risks.
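A minimal Python sketch of this prioritization using the p = r * s rule above; the risk entries and their ratings are invented for the example.

# Each risk carries r (probability of becoming true) and s (severity of loss).
risks = [
    {"name": "key developer leaves", "r": 0.3, "s": 9},
    {"name": "requirements change late", "r": 0.6, "s": 6},
    {"name": "third-party component underperforms", "r": 0.2, "s": 4},
]

for risk in risks:
    risk["priority"] = risk["r"] * risk["s"]       # p = r * s

# Handle the most likely and most damaging risks first.
for risk in sorted(risks, key=lambda item: item["priority"], reverse=True):
    print(risk["name"], "priority =", round(risk["priority"], 2))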

1. Risk Identification: The project organizer needs to anticipate the risk in the project as early
as possible so that the impact of risk can be reduced by making effective risk management
planning.

A project can be affected by a large variety of risks. To identify the significant risks that
might affect a project, it is necessary to categorize risks into different classes.

There are different types of risks which can affect a software project:
1. Technology risks: Risks that arise from the software or hardware technologies that are
used to develop the system.
2. People risks: Risks that are associated with the people in the development team.
3. Organizational risks: Risks that arise from the organizational environment where the
software is being developed.
4. Tools risks: Risks that arise from the software tools and other support software used to
create the system.
5. Requirement risks: Risks that arise from changes to the customer requirements and
the process of managing the requirements change.
6. Estimation risks: Risks that arise from the management estimates of the resources
required to build the system.

2. Risk Analysis: During the risk analysis process, you have to consider every identified risk and
make a perception of the probability and seriousness of that risk.

There is no simple way to do this. You have to rely on your perception and experience of
previous projects and the problems that arise in them.

It is not possible to make an exact numerical estimate of the probability and seriousness of
each risk. Instead, you should assign the risk to one of several bands (a small classification
sketch follows this list):

1. The probability of the risk might be determined as very low (0-10%), low (10-25%),
moderate (25-50%), high (50-75%) or very high (+75%).
2. The effect of the risk might be determined as catastrophic (threaten the survival of the
plan), serious (would cause significant delays), tolerable (delays are within allowed
contingency), or insignificant.
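As a small illustration of assigning these bands, the helper below maps a percentage estimate to the probability band used in the notes; the thresholds follow the list above, and the effect bands are listed alongside for reference.

def probability_band(percent):
    # Map an estimated probability (0-100%) to the bands listed above.
    if percent <= 10:
        return "very low"
    if percent <= 25:
        return "low"
    if percent <= 50:
        return "moderate"
    if percent <= 75:
        return "high"
    return "very high"

EFFECT_BANDS = ["catastrophic", "serious", "tolerable", "insignificant"]

print(probability_band(40))    # -> "moderate"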

Risk Control

It is the process of managing risks to achieve the desired outcomes. After all the identified risks of a
project have been assessed, plans must be made to contain the most harmful and the most likely
risks. Different risks need different containment methods. In fact, most risks need ingenuity on
the part of the project manager in tackling the risk.

There are three main methods to plan for risk management:

1. Avoid the risk: This may take several forms, such as discussing with the client to change
the requirements to reduce the scope of the work, giving incentives to engineers to
avoid the risk of staff turnover, etc.
2. Transfer the risk: This method involves getting the risky element developed by a third
party, buying insurance cover, etc.
3. Risk reduction: This means planning ways to contain the loss due to the risk. For
instance, if there is a risk that some key personnel might leave, new recruitment can be
planned.

Risk Leverage: To choose between the various methods of handling a risk, the project manager must
consider the cost of controlling the risk and the corresponding reduction in risk. For this, the
risk leverage of the various risks can be computed.

Risk leverage is the difference in risk exposure divided by the cost of reducing the risk.

Risk leverage = (risk exposure before reduction - risk exposure after reduction) / (cost of
reduction)
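A short illustration of the risk leverage formula above. Risk exposure is commonly computed as probability times loss; the numbers here are made up.

def risk_exposure(probability, loss):
    # Risk exposure = probability of the risk occurring * loss if it does.
    return probability * loss

def risk_leverage(exposure_before, exposure_after, cost_of_reduction):
    # (risk exposure before reduction - risk exposure after reduction) / cost of reduction
    return (exposure_before - exposure_after) / cost_of_reduction

# Example: a mitigation costing 10 units cuts exposure from 60 to 20.
before = risk_exposure(0.6, 100)
after = risk_exposure(0.2, 100)
print(risk_leverage(before, after, 10))    # -> 4.0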

Risk planning: The risk planning process considers each of the key risks that have been
identified and develops ways to contain those risks.

For each of the risks, you have to think of the actions that you might take to minimize the
disruption to the plan if the issue identified in the risk occurs.

You also should think about data that you might need to collect while monitoring the plan so that
issues can be anticipated.

Again, there is no easy process that can be followed for contingency planning. It relies on the
judgment and experience of the project manager.

Risk Monitoring: Risk monitoring is the process of checking that your assumptions about the
product, process, and business risks have not changed.

Reactive risk management: Tries to reduce the damage of potential threats and speed
up an organization's recovery from them, but assumes that those threats will happen eventually.

Proactive risk management: identifies possible threats and aims to prevent those
events from ever happening in the first place.

Proactive and Reactive: What’s the Difference?

Reactive risk management could mean the following:

 Preventing potential risks from becoming incidents


 Mitigating damage from incidents
 Stopping small threats from worsening
 Continuing critical business functions despite incidents
 Evaluating each incident to solve its root cause
 Monitoring to assure that the incident does not recur

On the other hand, proactive risk management strategies include:

 Identifying existing risks to the enterprise, business unit, or project


 Developing a risk response
 Prioritizing identified risks according to the magnitude of their threat
 Analyzing risks to determine the best treatment for each
 Implementing controls necessary to prevent hazards from becoming threats or incidents
 Monitoring the threat environment continuously

Using the terms “proactive” and “reactive” when discussing risk management can confuse
people, but that shouldn’t be so; proactive and reactive risk management are different things.
Understanding the difference between the two is crucial to developing effective risk mitigation
strategies. So let’s understand the nuances of each approach in detail.

The basics are simple. Reactive risk management tries to reduce the damage of potential threats
and speed up an organization’s recovery from them, but assumes that those threats will happen
eventually. Proactive risk management identifies possible threats and aims to prevent those
events from ever happening in the first place.

Each strategy has activities, metrics, and behaviors useful in risk analysis.

Reactive Risk Management

One fundamental point about reactive risk management is that the disaster or threat must occur
before management responds. In contrast, proactive risk management is about taking
preventative measures before the event to decrease its severity. That’s a good thing to do.

At the same time, however, organizations should develop reactive risk management plans that
can be deployed after the event – because many times, the unwanted event will happen. If
management hasn’t developed reactive risk management plans, then executives end up making
decisions about how to respond as the event happens; that can be costly and stressful.
There is one Catch-22 with reactive risk management: Although this approach gives you time to
understand the risk before acting, you’re still one step behind the unfolding threat. Other projects
will lag as you attend to the problem at hand.

Helping to Withstand Future Risks

The reactive approach learns from past (or current) events and prepares for future events. For
example, businesses can purchase cyber security insurance to cover the costs of a security
disruption.

This strategy assumes that a breach will happen at some point. But once that breach does occur,
the business might understand more about how to avoid future violations and perhaps could even
tailor its insurance policies accordingly.

Proactive Risk Management

As the name suggests, proactive risk management means that you identify risks before they
happen and figure out ways to avoid or alleviate the risk. It seeks to reduce the hazard’s risk
potential or (even better) prevent the threat altogether.

A good example is vulnerability testing and remediation. Any organization of appreciable size is
likely to have vulnerabilities in its software that attackers could find and exploit. So regular
testing can find and patch those vulnerabilities to eliminate that threat.

Allows for More Control over Risk Management

A proactive management strategy gives you more control over your risk management. For
example, you can decide which issues should be top priorities and what potential damage you
will accept.

Proactive management also involves constantly monitoring your systems, risk processes, cyber
security, competition, business trends, and so forth. Understanding the level of risk before an
event allows you to instruct your employees on how to mitigate them.

A proactive approach, however, implies that each risk is constantly monitored. It also requires
regular risk reviews to update your current risk profile and to identify new risks affecting the
company. This approach drives management to be constantly aware of the direction of those
risks.

What about Predictive Risk Management?

Predictive risk management is about predicting future risks, outcomes, and threats. Some predictive
components may sound similar to proactive or reactive strategies.

Predictive risk management attempts to:

 Identify the probability of risk in a situation based on one or more variables


 Anticipate potential future risks and their probability
 Anticipate necessary risk controls

Five Risk Management Strategies with Examples


Now that we understand the two main types of risk management strategies, let’s review how
companies implement these strategies in the real world. The following real-world examples
might not be conclusive, but they can guide your risk management strategy.

1. MVP or experiment development

Instead of launching a full product line or entering a new market, companies can launch products
in a lean, iterative fashion (the 'minimum viable product') to a small market subsection. This
way, companies can test their products’ operational and financial elements and mitigate the
market-related risks before they launch to a broader audience.

For example, an airline could test facial recognition technology to make security checks faster,
but might want to validate privacy and data security concerns first by trying it out at one airport
before going nationwide.

2. Risk isolation
Companies can isolate potential threats to their business model by separating specific parts of
their infrastructure to protect them from external threats.

For example, some companies might restrict access to critical parts of their software ecosystem
by requiring engineers to work at a specific location instead of working remotely (which opens
the door to potential cyber threats).

3. Risk-reward analysis

Companies may undertake specific initiatives to understand the opportunity cost of entering a
new market or the risk of possibly gaining market share in a saturated market. Before taking the
initiatives at a broader level, the analysis would help them understand the market forces and their
ability to induce or reduce risks with what-if scenarios.

For example, a direct-to-consumer delivery company might want to project the anticipated
demand for entering the market with faster medical supply delivery.

4. Data projection

Companies can analyze data with the help of machine learning techniques to understand specific
behavioral or threat patterns in their ways of working. These data analysis efforts might also help
them understand what second-order effects are lurking because of inefficient processes or lax
attention on certain parts of the business.

For example, a large retailer might use data analysis to find inefficiencies in its supply chain to
reduce last-mile delivery times and get an edge over the competition.

5. Certification

To stay relevant and retain customer trust, companies could also obtain safety and security
certifications to prove they are a resilient brand that can sustain and mitigate significant
operational risks.

For example, a new fintech company might get certified for PCI-DSS security standards before
scaling in a new market to build trust with its customers.

Software risk analysis in software development is a systematic process that involves
identifying and evaluating any problem that might happen during the creation, implementation,
and maintenance of software systems. It helps ensure that projects are finished on schedule,
within budget, and with the appropriate quality. It is a crucial component of software
development.

What is Software Risk Analysis in Software Development?


Software risk analysis in software development involves identifying which application risks
should be tested first. Risk is the possible loss or harm that an organization might face. Risks
can include issues like project management, technical challenges, resource constraints, changes
in requirements, and more. Finding every possible risk and estimating its impact are the two
goals of risk analysis. When creating a test plan, think about the potential consequences each
risk could have on your software. Risk detection during the production phase can be costly;
therefore, risk analysis during testing is the best way to figure out what could go wrong before
going into production.
Why perform software risk analysis?
Software developers use different technologies to add new features. As the technology grows,
so do the software system's vulnerabilities, making software products more likely to malfunction
or perform poorly.
Many factors, including schedule delays, inaccurate cost projections, a lack of resources, and
security hazards, contribute to the risks associated with software development.

Certain risks are unavoidable; some of them are as follows:

 The limited amount of time set aside for testing.
 Defect leakage in complicated or large-scale applications.
 The client's immediate requirement to finish the job.
 Inadequate specifications.
Therefore, it is critical to identify, prioritize, and reduce risks, or to take proactive preventative
action, during the software development process rather than merely monitoring risk possibilities.

Types of Software Risk


The following summarizes each type of risk, its typical impact, and examples:
Technical risks
Description: Risks arising from technical challenges or limitations in the software development process.
Impact: Technical risks can lead to delays, cost overruns, and even software failure if not properly managed.
Examples: incomplete or inaccurate requirements; unforeseen technical complexities; integration issues with third-party systems; inadequate testing and quality assurance.

Security risks
Description: Risks related to vulnerabilities in the software that could allow unauthorized access or data breaches.
Impact: Security risks can lead to financial losses, reputational damage, and legal liabilities.
Examples: insecure coding practices; lack of proper access controls; vulnerabilities in third-party libraries; insufficient data security measures.

Scalability risks
Description: Risks associated with the software's ability to handle increasing workloads or user demands.
Impact: Scalability risks can lead to performance bottlenecks, outages, and lost revenue.
Examples: inadequate infrastructure capacity; inefficient algorithms or data structures; lack of scalability testing; poorly designed architecture.

Performance risks
Description: Risks related to the software's ability to meet performance expectations in terms of speed, responsiveness, and resource utilization.
Impact: Performance risks can lead to user dissatisfaction, lost productivity, and competitive disadvantage.
Examples: inefficient algorithms or data structures; excessive memory or CPU usage; poor database performance; network latency issues.

Budgetary risks
Description: Risks associated with exceeding the project's budget or financial constraints.
Impact: Budgetary risks can lead to financial strain, project delays, and even cancellation.
Examples: unrealistic cost estimates; scope creep or changes in requirements; unforeseen expenses, such as third-party licenses or hardware upgrades; inefficient resource utilization.

Contractual & legal risks
Description: Risks arising from legal or contractual obligations that are not properly understood or managed.
Impact: Contractual and legal risks can lead to disputes, delays, and even legal action.
Examples: unclear or ambiguous contract terms; failure to comply with intellectual property laws; data privacy violations; lack of proper documentation and record-keeping.

Operational risks
Description: Risks associated with the ongoing operation and maintenance of the software system.
Impact: Operational risks can lead to downtime, outages, and data loss.
Examples: inadequate monitoring and alerting systems; lack of proper disaster recovery plans; insufficient training for operational staff; poor change management practices.

Schedule risks
Description: Risks related to delays in the software development process or missed deadlines.
Impact: Schedule risks can lead to increased costs, pressure on resources, and missed market opportunities.
Examples: unrealistic timelines or milestones; underestimation of task complexity; resource dependencies or conflicts; unforeseen events or delays.

How to perform software risk analysis in Software Development


To conduct risk analysis in software development, first evaluate the source code in detail to
understand its components. This evaluation identifies the components of the code and maps their
interactions. With the help of the map, transactions can be detected and assessed. The map is then
checked against structural and architectural guidelines in order to recognize and understand the
primary software defects. The following are the steps to perform software risk analysis.

Risk Assessment
The purpose of the risk assessment is to identify and prioritize the risks at the earliest stage and
avoid losing time and money.

Under risk assessment, you will go through:


 Risk identification: It is crucial to detect the type of risk as early as possible and
address them. The risk types are classified into
 People risks: related to the people in the software development team
 Tools risks: related to using tools and other software
 Estimation risks: related to estimates of the resources required to build the
software
 Technology risks: are related to the usage of hardware or software technologies
required to build the software
 Organizational risks: are related to the organizational environment where the
software is being created.
 Risk analysis: Experienced developers analyze the identified risks based on their
experience gained from previous software projects. In the next phase, the development
team estimates the probability of the risk occurring and its seriousness.
 Risk prioritization: The risk priority can be identified using the formula below (a minimal
calculation sketch follows this section):
p = r × s
where
p stands for priority,
r stands for the probability of the risk occurring, and
s stands for the severity of the risk.
After identifying the risks, those with a higher probability of occurring and a higher potential loss
must be prioritized and controlled first.
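
As an illustration only, here is a minimal Python sketch of this prioritization; the risk names,
probabilities, and severity values are hypothetical, with severity assumed on a 1-10 scale:

# Hypothetical risks: (name, probability r in 0..1, severity s on a 1-10 scale)
risks = [
    ("Key developer may leave mid-project", 0.40, 9),
    ("Third-party API may change",          0.70, 5),
    ("Requirements may grow significantly", 0.60, 6),
]

# Priority p = r * s; risks with higher p are handled first.
prioritized = sorted(risks, key=lambda risk: risk[1] * risk[2], reverse=True)

for name, r, s in prioritized:
    print(f"{name:40} p = {r * s:.1f}")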
Risk control
Risk control is performed to manage the risks and obtain desired results. Once identified, the
risks can be classified into the most and least harmful.
Under risk control, you will go through:
 Risk management planning: You can leverage three main strategies to plan risk
management.
 Reduce the risk: This method involves planning to reduce the loss caused by the
risk. For instance, planning to hire new employees to replace employees serving
notice.
 Transfer the risk: This method involves buying insurance or hiring a third-party
organization to solve a challenging problem that might pose harmful risks
 Avoid the risk: This method involves implementing various strategies, such as
incentivizing underpaid, hardworking engineers who might quit the organization
 Risk monitoring: It includes tracking and evaluating different levels of risk in the
software development team. After completing the risk monitoring process, the findings can
be utilized to devise new strategies to update ineffective methods
 Risk resolution: It involves eliminating the overall risk or finding solutions. This
method includes techniques such as design to cost approach, simulating the prototype,
benchmarking, etc.
Key Benefits of Software Risk Analysis
There are multiple benefits to using software risk analysis techniques within your software
development process, ultimately leading you to complete your projects while successfully
navigating obstacles along the way. Some of the most positive outcomes you can expect when
using this framework include:
 Better decision-making: When you have the right information in front of you, it is
much easier to make good decisions. Data-driven decision-making is one of the best ways
to ensure the successful completion of a project, which can have knock-on benefits such as
cost savings and faster turnaround times.
 Early warning: If you are aware of an issue before it affects your software and
operations, then you will be able to prevent expensive and time-draining fixes from being
necessary.
 Reduced software costs and time: Addressing potential risks ahead of time can help
reduce software costs and time by avoiding costly rework or delays due to unexpected
issues.
 Improved software quality: Risk analysis can help identify potential quality issues
and ensure that software quality is maintained throughout the development process.
 Increased stakeholder confidence: Conducting risk analysis can increase stakeholder
confidence in the software development process by demonstrating that potential risks are
managed proactively.
 Compliance with regulations: Risk analysis can help ensure compliance with industry
regulations and standards.
Best Tools for Software Risk Analysis
Some of the most commonly used tools for software risk analysis are as follows:
 Failure Mode and Effects Analysis (FMEA)
 FMEA is an organized method for locating, evaluating, and ranking possible
flaws in a process or system. It is a qualitative method that evaluates the
possibility and seriousness of prospective failures using the opinion of experts.
When risks are found and addressed early in the software development lifecycle,
FMEA is a useful technique.
 Fault Tree Analysis (FTA)
 FTA is a logical method for assessing system failure reasons. It begins with an
undesirable occurrence at the highest level and proceeds downward to find the
lower-level events that may have contributed to the event. FTA is a helpful tool
for comprehending the intricate connections that exist between various system
hazards.
 Risk Matrix
 Prioritizing risks according to likelihood and impact may be done easily with a
risk matrix. A likelihood rating and an impact rating are given to each risk, and the
two ratings are then multiplied to produce a risk score. Risks with high risk scores
are prioritized for further analysis and mitigation.
 Decision Tree
 A decision tree is a diagram that represents a series of decisions and their
possible outcomes. Decision trees are helpful in weighing the advantages and
disadvantages of various options.
 Monte Carlo Simulation
 Monte Carlo simulation is a quantitative technique for estimating the probability
of different outcomes. It involves running a computer simulation many times,
using random values as input each time. The results of these simulations can be
used to estimate the chances of different outcomes.
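
As an illustration only, the following minimal Python sketch estimates the probability that a small
project overruns a schedule target; the task names, duration ranges, and target are hypothetical:

import random

# Hypothetical three-task project; each task duration is sampled uniformly
# between an optimistic and a pessimistic estimate (in days).
tasks = {"design": (5, 10), "coding": (15, 30), "testing": (10, 20)}
target = 50     # assumed schedule target in days
runs = 10000    # number of simulation runs

overruns = 0
for _ in range(runs):
    total = sum(random.uniform(lo, hi) for lo, hi in tasks.values())
    if total > target:
        overruns += 1

print(f"Estimated probability of exceeding {target} days: {overruns / runs:.2%}")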

Risk identification is a systematic attempt to specify threats to the project plan (estimates,
schedule, resource loading, etc.). By identifying known and predictable risks, the project
manager takes a first step toward avoiding them when possible and controlling them when
necessary.

There are two distinct types of risks: generic risks and product-specific risks. Generic risks are
a potential threat to every software project. Product-specific risks can be identified only by those
with a clear understanding of the technology, the people, and the environment that is specific to
the project at hand. To identify product-specific risks, the project plan and the software statement
of scope are examined and an answer to the following question is developed: "What special
characteristics of this product may threaten our project plan?"

One method for identifying risks is to create a risk item checklist. The checklist can be used for
risk identification and focuses on some subset of known and predictable risks in the following
generic subcategories:

• Product size: risks associated with the overall size of the software to be built or modified.

• Business impact: risks associated with constraints imposed by management or the


marketplace.
• Customer characteristics: risks associated with the sophistication of the customer and the
developer's ability to communicate with the customer in a timely manner.

• Process definition: risks associated with the degree to which the software process has been
defined and is followed by the development organization.

• Development environment: risks associated with the availability and quality of the tools to be
used to build the product.

• Technology to be built: risks associated with the complexity of the system to be built and the
"newness" of the technology that is packaged by the system.

• Staff size and experience: risks associated with the overall technical and project experience of
the software engineers who will do the work.

The risk item checklist can be organized in different ways. Questions relevant to each of the
topics can be answered for each software project. The answers to these questions allow the
planner to estimate the impact of risk. A different risk item checklist format simply lists
characteristics that are relevant to each generic subcategory. Finally, a set of “risk components
and drivers” is listed along with their probability of occurrence. Drivers for performance,
support, cost, and schedule are discussed later in this section.

A number of comprehensive checklists for software project risk have been proposed in the
literature. These provide useful insight into generic risks for software projects and should be
used whenever risk analysis and management is instituted. However, a relatively short list of
questions can be used to provide a preliminary indication of whether a project is “at risk.”

Assessing Overall Project Risk

The following questions have been derived from risk data obtained by surveying experienced
software project managers in different parts of the world. The questions are ordered by their
relative importance to the success of a project.
1. Have top software and customer managers formally committed to support the project?
2. Are end-users enthusiastically committed to the project and the system/product to be built?
3. Are requirements fully understood by the software engineering team and their customers?
4. Have customers been involved fully in the definition of requirements?
5. Do end-users have realistic expectations?
6. Is project scope stable?
7. Does the software engineering team have the right mix of skills?
8. Are project requirements stable?
9. Does the project team have experience with the technology to be implemented?
10. Is the number of people on the project team adequate to do the job?
11. Do all customer/user constituencies agree on the importance of the project and on the
requirements for the system/product to be built?

If any one of these questions is answered negatively, mitigation, monitoring, and management
steps should be instituted without fail. The degree to which the project is at risk is directly
proportional to the number of negative responses to these questions.
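
A minimal Python sketch of how a planner might tally these answers (the answers shown are
hypothetical, and the equal weighting of all eleven questions is an assumption):

# Hypothetical yes/no answers to the eleven questions above (True = "yes").
answers = [True, True, False, True, False, True, True, False, True, True, True]

negatives = answers.count(False)
print(f"Negative responses: {negatives} of {len(answers)}")
if negatives == 0:
    print("Project appears to be at relatively low risk.")
else:
    print("Institute mitigation, monitoring, and management steps; "
          "the project risk grows with the number of negative responses.")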

Risk Components and Drivers

The U.S. Air Force has written a pamphlet that contains excellent guidelines for software risk
identification and abatement. The Air Force approach requires that the project manager identify
the risk drivers that affect software risk components—performance, cost, support, and schedule.
In the context of this discussion,

The risk components are defined in the following manner:

• Performance risk: the degree of uncertainty that the product will meet its
requirements and be fit for its intended use.

• Cost risk: the degree of uncertainty that the project budget will be
maintained.

• Support risk: the degree of uncertainty that the resultant software will be
easy to correct, adapt, and enhance.

• Schedule risk: the degree of uncertainty that the project schedule will be
maintained and that the product will be delivered on time.

The impact of each risk driver on the risk component is divided into one of four impact
categories—negligible, marginal, critical, or catastrophic.

Risk projection: also called risk estimation, attempts to rate each risk in two ways—the
likelihood or probability that the risk is real and the consequences of the problems associated
with the risk, should it occur.

Risk Projection (aka Risk Estimation)

Attempts to rate each risk in two ways

i. The probability that the risk is real

ii. The consequences of the problems associated with the risk, should it occur.
Project planner, along with other managers and technical staff, performs four risk projection
activities:
(1) Establish a measure that reflects the perceived likelihood of a risk

(2) Delineate the consequences of the risk

(3) Estimate the impact of the risk on the project and the product

(4) Note the overall accuracy of the risk projection so that there will be no
misunderstandings.

Developing a Risk Table

A risk table provides a project manager with a simple technique for risk projection.
Steps in Setting up Risk Table

(1) The project team begins by listing all risks in the first column of the table. This is
accomplished with the help of risk item checklists.

(2) Each risk is categorized in the second column

(e.g., PS implies a project size risk, BU implies a business risk).

(3) The probability of occurrence of each risk is entered in the next column of the table. The
probability value for each risk can be estimated by team members individually.

(4) Individual team members are polled in round-robin fashion until their assessment of risk
probability begins to converge.
Assessing Impact of Each Risk

(1) Each risk component is assessed using the Risk Characterization Table (Figure 1) and impact
category is determined.

(2) Categories for each of the four risk components—performance, support, cost, and
schedule—are averaged to determine an overall impact value.

i. A risk that is 100 percent probable is a constraint on the software project.

ii. The risk table should be implemented as a spreadsheet model. This


enables easy manipulation and sorting of the entries.

iii. A weighted average can be used if one risk component has more
significance for the project.

(3) Once the first four columns of the risk table have been completed, the table is sorted by
probability and by impact.

 High-probability, high-impact risks percolate to the top of the table, and low-
probability risks drop to the bottom.

(4) Project manager studies the resultant sorted table and defines a cutoff line.

 Cutoff line (drawn horizontally at some point in the table) implies that only risks
that lie above the line will be given further attention.

 Risks below the line are re-evaluated to accomplish second-order prioritization.

 Risk impact and probability have a distinct influence on management concern.

i. Risk factor with a high impact but a very low probability of occurrence
should not absorb a significant amount of management time.
ii. High-impact risks with moderate to high probability and low-impact risks
with high probability should be carried forward into the risk analysis
steps that follow.

All risks that lie above the cutoff line must be managed.

The column labeled RMMM contains a pointer into a Risk Mitigation, Monitoring
and Management Plan
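
As an illustration only, the spreadsheet model mentioned above could be sketched in Python as
follows; the risk names, category codes, probabilities, impact values, and the cutoff are all
hypothetical (impact scale assumed: 1 = catastrophic, 2 = critical, 3 = marginal, 4 = negligible):

# Each entry: name, category (PS = project size, BU = business), probability, impact.
risks = [
    {"name": "Size estimate may be significantly low", "cat": "PS", "prob": 0.60, "impact": 2},
    {"name": "Larger number of users than planned",    "cat": "PS", "prob": 0.30, "impact": 3},
    {"name": "End users resist the system",            "cat": "BU", "prob": 0.40, "impact": 2},
    {"name": "Funding for the project may be lost",    "cat": "BU", "prob": 0.70, "impact": 1},
]

# Sort by probability (descending), then by impact (ascending, since 1 is the worst),
# so that high-probability, high-impact risks percolate to the top of the table.
risks.sort(key=lambda r: (-r["prob"], r["impact"]))

CUTOFF = 0.35  # assumed cutoff: risks above this probability receive further attention
for r in risks:
    action = "manage (above cutoff)" if r["prob"] >= CUTOFF else "re-evaluate (below cutoff)"
    print(f'{r["name"]:42} {r["cat"]}  p={r["prob"]:.2f}  impact={r["impact"]}  -> {action}')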

Assessing Risk Impact

Three factors determine the consequences if a risk occurs:

i. Nature of the risk - the problems that are likely if it occurs.

e.g., a poorly defined external interface to customer hardware (a technical risk)


will preclude early design and testing and will likely lead to system integration
problems late in a project.

ii. Scope of a risk - combines the severity with its overall distribution (how much of the
project will be affected or how many customers are harmed?).

iii. Timing of a risk - when and how long the impact will be felt.

Steps recommended to determine the overall consequences of a risk:

1. Determine the average probability of occurrence value for each risk component.
2. Using Figure 1, determine the impact for each component based on the criteria shown.
3. Complete the risk table and analyze the results as described in the preceding sections.
Overall risk exposure, RE, is determined using:
RE = P × C
where P is the probability of occurrence for a risk and
C is the cost to the project should the risk occur.

Example

Assume the software team defines a project risk in the following manner:
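
A representative worked example (all figures here are illustrative assumptions): suppose the team
identifies the risk that only 70 percent of the software components scheduled for reuse will
actually be integrated into the application, so the remaining functionality will have to be custom
developed. The team estimates the risk probability as 80 percent (likely). If 60 reusable
components were planned, about 18 components would have to be built from scratch; assuming
an average component size of 100 LOC and a local cost of $14 per LOC, the cost impact is
C = 18 × 100 × 14 = $25,200. The risk exposure is then
RE = P × C = 0.80 × $25,200 = $20,160.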

Risk Refinement
A risk may be stated generally during the early stages of project planning.
With time, more is learned about the project, and it may be possible to refine the risk into a
set of more detailed risks.

Represent risk in condition-transition-consequence (CTC) format.


Stated in the following form:

Given that <condition> then there is concern that (possibly) <consequence>
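
For example (an illustrative refinement): Given that all reusable software components must
conform to specific design standards, and that some do not conform, then there is concern that
(possibly) only 70 percent of the planned reusable modules may actually be integrated into the
system, and the remaining functionality will have to be custom developed.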

The RMMM Plan

Risk Mitigation, Monitoring and Management Plan (RMMM) - documents all work
performed as part of risk analysis and is used by the project manager as part of the overall
project plan.

RMMM stands for risk mitigation, monitoring, and management. An effective strategy for
handling risk must consider three issues:

1. Risk Avoidance
2. Risk Monitoring
3. Risk Management

Risk Mitigation
Risk mitigation means preventing the risk from occurring (risk avoidance). The following steps
should be taken for mitigating risks.

1. Communicate with the concerned staff to find out probable risks.
2. Find out and eliminate all those causes that can create risk before the project starts.
3. Develop a policy in the organization that will help the project continue even if some
staff leave the organization.
4. Everybody in the project team should be acquainted with the current development
activity.
5. Maintain the corresponding documents in a timely manner.
6. Conduct timely reviews in order to speed up work.
7. Provide additional staff, if required, for conducting every critical activity during
software development.

Risk Monitoring
In the risk monitoring process, the project manager must monitor the following:

1. The approach and behavior of team members as project pressure varies.
2. The degree to which the team performs with a spirit of teamwork.
3. The type of cooperation between the team members.
4. The types of problems that occur among team members.
5. The availability of jobs within and outside the organization.

The objectives of risk monitoring are:

1. To check whether the predicted risks actually occur or not.
2. To ensure that the steps defined to avoid the risk are applied properly.
3. To gather information that can be useful for analyzing the risk.

Risk Management
The project manager performs this task when a risk becomes a reality. If the project manager has
applied risk mitigation effectively, it becomes much easier to manage the risks.
For example,
consider a scenario in which many people are leaving the organization. If sufficient additional
staff is available, the current development activity is known to everybody in the team, and
up-to-date, systematic documentation is available, then any newcomer can easily understand the
current development activity. This ultimately helps the work continue without interruption.

Alternative to RMMM - risk information sheet (RIS)


RIS is maintained using a database system, so that creation and information entry, priority
ordering, searches, and other analysis may be accomplished easily.

Risk monitoring is a project tracking activity


Three primary objectives:
i. Assess whether predicted risks do, in fact, occur
ii. Ensure that risk aversion steps defined for the risk are being properly applied
iii. Collect information that can be used for future risk analysis.
Problems that occur during a project can often be traced to more than one risk.
Another job of risk monitoring is to attempt to allocate origin (which risk(s) caused which
problems throughout the project).

A risk management strategy is usually included in the software project plan in the form of a
Risk Mitigation, Monitoring, and Management (RMMM) plan. This plan documents all work
performed as part of risk analysis, and the project manager generally uses it as part of the overall
project plan.
In some software teams, risk is documented with the help of a Risk Information Sheet (RIS).
The RIS is maintained using a database system for easier management of information, i.e.,
creation, priority ordering, searching, and other analyses. After the RMMM is documented and
the project starts, risk mitigation and monitoring steps begin.

Risk Mitigation:
It is an activity used to avoid problems (Risk Avoidance).
Steps for mitigating the risks are as follows.
Finding out the risks.
Removing the causes that create the risks.
Controlling the corresponding documents from time to time.
Conducting timely reviews to speed up the work.

Risk Monitoring:
It is an activity used for project tracking.
It has the following primary objectives as follows.

To check if predicted risks occur or not.


To ensure proper application of risk aversion steps defined for risk.
To collect data for future risk analysis.
To allocate what problems are caused by which risks throughout the project.

Risk Management and planning:


It assumes that the mitigation activity failed and the risk is a reality. This task is done by the
project manager when a risk becomes a reality and causes severe problems. If the project manager
has effectively used mitigation to remove risks, it is easier to manage the remaining risks. The
plan shows the response that the manager will take for each risk. The main output of the risk
management plan is the risk register, which describes and focuses on the predicted threats to a
software project.

Example:
Let us understand RMMM with the help of an example of high staff turnover.

Risk Mitigation:
To mitigate this risk, project management must develop a strategy for reducing turnover. The
possible steps to be taken are:
Meet the current staff to determine causes for turnover (e.g., poor working conditions, low pay,
and competitive job market).
Mitigate those causes that are under our control before the project starts.
Once the project commences, assume turnover will occur and develop techniques to ensure
continuity when people leave.
Organize project teams so that information about each development activity is widely dispersed.
Define documentation standards and establish mechanisms to ensure that documents are
developed in a timely manner.
Assign a backup staff member for every critical technologist.

Risk Monitoring:
As the project proceeds, risk monitoring activities commence. The project manager monitors
factors that may provide an indication of whether the risk is becoming more or less likely. In the
case of high staff turnover, the following factors can be monitored:
General attitude of team members based on project pressures.
Interpersonal relationships among team members.
Potential problems with compensation and benefits.
The availability of jobs within the company and outside it.

Risk Management:
Risk management and contingency planning assumes that mitigation efforts have failed and that
the risk has become a reality. Continuing the example, the project is well underway, and a
number of people announce that they will be leaving. If the mitigation strategy has been
followed, backup is available, information is documented, and knowledge has been dispersed
across the team. In addition, the project manager may temporarily refocus resources (and
readjust the project schedule) to those functions that are fully staffed, enabling newcomers who
must be added to the team to “get up to speed”.

Drawbacks of RMMM:
It incurs additional project costs.
It takes additional time.
For larger projects, implementing an RMMM may itself turn out to be another tedious project.
RMMM does not guarantee a risk-free project, in fact, risks may also come up after the project is
delivered.
