SOFTWARE ENGINEERING UNIT V NOTES
UNIT V
PRODUCT METRICS:
SOFTWARE QUALITY
Software quality is defined as a field of study and practice that describes the desirable attributes
of software products. There are two main approaches to software quality:
1. Defect management
2. Quality attributes.
A quality management system is the principal mechanism used by organizations to ensure that the
products they develop have the desired quality.
Quality System Activities: The quality system activities encompass the following:
1. Auditing of projects
2. Review of the quality system
3. Development of standards, methods, and guidelines, etc.
4. Production of documents for top management summarizing the effectiveness of the quality
system in the organization.
A robust product metrics strategy involves selecting the appropriate metrics for a given software
project and collecting the necessary data throughout the development process. It is crucial to
choose metrics that align with the project’s objectives and address its specific requirements.
Once the most appropriate metrics are identified, they need to be consistently measured and
analyzed to provide meaningful insights. Tracking product metrics over time allows for trend
analysis and enables comparisons between different releases or versions. By establishing
benchmarks and targets, software engineering teams can set performance goals and track
progress toward achieving them.
Product metrics are software product measures at any stage of their development, from
requirements to established systems. Product metrics are related to software features only.
Dynamic metrics help in assessing the efficiency and reliability of a program, while static
metrics help in understanding and maintaining the complexity of a software system.
Dynamic metrics are usually quite closely related to software quality attributes. It is relatively
easy to measure the execution time required for particular tasks and to estimate the time
required to start the system. These are directly related to the efficiency of the system. Failures
and the type of failure can be logged and directly related to the reliability of the software. On
the other hand, static metrics have an indirect relationship with quality attributes. A large
number of these metrics have been proposed to try to derive and validate the relationship
between complexity, understandability, and maintainability. Several static metrics have been
used for assessing quality attributes, as shown in the table below. Of these, program or component
length and control complexity seem to be the most reliable predictors of understandability,
system complexity, and maintainability.
S.No.  Metric                 Description
2      Length of code         This is a measure of the size of a program. Generally, the larger
                              the size of the code of a program component, the more complex and
                              error-prone that component is likely to be.
3      Cyclomatic complexity  This is a measure of the control complexity of a program. This
                              control complexity may be related to program understandability.
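The two static metrics in the table can be sketched in a few lines of Python. This is an illustrative sketch, not a standard tool: it counts non-blank, non-comment lines as "length of code" and approximates cyclomatic complexity as 1 plus the number of decision points (if/for/while/except and boolean operators), which is one common way the measure is computed for Python source.

```python
import ast

def loc_and_cyclomatic(source: str) -> tuple[int, int]:
    """Return (lines of code, approximate cyclomatic complexity).

    Complexity is approximated as 1 + the number of decision points.
    """
    # Length of code: non-blank lines that are not pure comments.
    lines = [l for l in source.splitlines()
             if l.strip() and not l.strip().startswith("#")]
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While, ast.ExceptHandler)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # "a and b and c" contributes two decision points.
            decisions += len(node.values) - 1
    return len(lines), decisions + 1

snippet = """
def classify(x):
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"
"""
print(loc_and_cyclomatic(snippet))  # → (6, 3)
```

The two `if` statements give two decision points, so the snippet's cyclomatic complexity is 3.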
In software engineering, product metrics play a crucial role in evaluating the quality,
performance, and effectiveness of software systems. There are different types of metrics in
software engineering, including product metrics, process metrics, and project metrics.
Software measurement, which encompasses the collection and analysis of product metrics, is
vital in software engineering. It provides objective data to evaluate the performance of software
products and development processes. Software metrics help identify areas for improvement,
track progress, and make data-driven decisions. These metrics assist in assessing the impact of
process changes, evaluating development methodologies, and allocating resources effectively.
There’s a lot of hype out there around shipping software faster, and there’s no doubt that speed to
market is important. But shipping code quickly is not helpful if you don’t know it’s the right
feature to be building to begin with. For many organizations, the engineering team is the largest
investment your company is making, so it’s important to ensure that this investment is being
pointed in the right direction.
It’s also important to recognize that this investment is not limitless and avoid the trap of
overcommitting engineering resources at the expense of spreading the focus too thin. At the end
of the day, engineering is a zero-sum game. Given the same number of engineers and dollars, the
team cannot build everything, so prioritizing is paramount. Understanding the overall capacity of
your engineering teams and mapping how the work they do breaks down by logical categories or
allocations will help you ensure your organization's investment stays pointed in the right direction.
Two KPIs that will help you do that are: Category or Investment Breakdown, also
called Allocation, and Ramp Time.
Allocation: a way of visualizing how close your team is to that goal by breaking down
the work they do across axes that matter to the business.
Ramp Time: tracking how long it takes before new software engineers contribute fully.
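An allocation breakdown like the one described above is just a percentage split of effort across categories. The sketch below assumes a hypothetical list of (category, hours) work items; the shape of the data and the category names are illustrative, not from any particular tool.

```python
from collections import Counter

def allocation(work_items):
    """Percentage of engineering effort per investment category.

    work_items: iterable of (category, hours) pairs (hypothetical shape).
    """
    totals = Counter()
    for category, hours in work_items:
        totals[category] += hours
    grand = sum(totals.values())
    return {cat: round(100 * h / grand, 1) for cat, h in totals.items()}

items = [("new features", 60), ("bugs", 25), ("tech debt", 15)]
print(allocation(items))
# → {'new features': 60.0, 'bugs': 25.0, 'tech debt': 15.0}
```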
Software quality metrics are essential tools in software quality assurance, providing quantitative
measurements to evaluate the quality of software products. These metrics are designed to assess
various aspects of software systems, such as functionality, reliability, maintainability, and
usability. By utilizing a comprehensive software quality metrics framework, organizations can
systematically measure and track the quality of their software products throughout the
development lifecycle.
Once you know your engineering team is building the right products and features, it’s important
to ensure that the software being developed can provide that value to your customers
consistently. Quality metrics measure that consistency. Any quality problem, whether that is a
bug, a glitch, or something else unforeseen, is a potential threat to successful delivery of value.
Unhappy customers are not simply a problem for the revenue team. Quality issues will
eventually come back to the engineers – your team.
It’s important to monitor quality metrics in order to minimize customer impact and ultimately to
maximize customer satisfaction and retention. Three important KPIs to keep a close eye on are:
Bugs, Time to Resolution, and Uptime.
Bugs: By understanding the number and severity of bugs that exist per product or
feature, and comparing that with product or feature usage among your customer base, you will
have a better understanding of which bugs to prioritize fixing, and therefore where to devote
your resources.
Time to Resolution: measure the time it takes to resolve reported bugs, failures, and
other incidents.
Uptime: Measure how consistently your product is delivering value
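Two of these KPIs reduce to simple arithmetic over incident records. The sketch below is a minimal illustration, assuming outage durations and (reported, resolved) timestamp pairs are already available; real monitoring systems record far richer data.

```python
from datetime import datetime, timedelta

def uptime_percent(period: timedelta, outages: list[timedelta]) -> float:
    """Uptime = (period - total downtime) / period * 100."""
    down = sum(outages, timedelta())
    return round(100 * (period - down) / period, 3)

def mean_time_to_resolution(incidents):
    """incidents: list of (reported, resolved) datetime pairs."""
    total = sum((resolved - reported for reported, resolved in incidents),
                timedelta())
    return total / len(incidents)

month = timedelta(days=30)
print(uptime_percent(month, [timedelta(hours=2), timedelta(minutes=30)]))
# → 99.653
```

A 2.5-hour total outage in a 30-day month yields roughly "two and a half nines" of uptime.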
By tracking and analyzing software quality metrics, organizations can improve the overall
quality of their software products, enhance customer satisfaction, and reduce maintenance efforts
and costs. Furthermore, these metrics enable organizations to benchmark their software quality
against industry standards and best practices, driving continuous improvement and ensuring
competitiveness in the market.
Process Metrics in Software Engineering
Process metrics in software engineering refer to the quantitative measurements used to evaluate
and monitor the effectiveness, efficiency, and quality of the software development process itself.
These metrics focus on the activities, workflows, and practices employed throughout the
software development lifecycle. By analyzing process metrics, software engineering teams can
identify areas for improvement and make data-driven decisions to enhance their development
processes.
Delivering code faster is where most vendors and vocal members of the
tech community focus. But delivering predictably is important to set proper expectations, drive
alignment across functional teams, and allow for better execution and therefore better penetration
of the market for that value your organization has worked so hard to build. Process metrics for
software engineering teams include Cycle Time & Lead Time, Deployment Frequency, and Task
(or other unit of work) Resolution Rate Over Time.
Cycle time: Measures the time taken to complete a specific task or process.
Lead time: Measures the time elapsed from the initiation of a software development task
to its completion
Deployment Frequency: Tracks how fast and iterative a software engineering team is at
delivering value
Task Resolution Rate Over Time: Tracks how a team is trending over completing work
These metrics help teams evaluate the efficiency of their development processes, identify
bottlenecks or inefficiencies, and make adjustments to improve overall performance.
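The process metrics above can be sketched as small date calculations. The function names and scales below are illustrative assumptions; teams typically pull these timestamps from an issue tracker or deployment pipeline.

```python
from datetime import date

def cycle_time_days(started: date, finished: date) -> int:
    """Cycle time: from when work started to when it finished."""
    return (finished - started).days

def lead_time_days(requested: date, finished: date) -> int:
    """Lead time: from when the task was requested to when it finished."""
    return (finished - requested).days

def deployment_frequency(deploy_dates: list[date], period_days: int) -> float:
    """Average deployments per week over the observation period."""
    return round(len(deploy_dates) / (period_days / 7), 2)

print(cycle_time_days(date(2024, 3, 4), date(2024, 3, 8)))  # → 4
deploys = [date(2024, 3, d) for d in (1, 5, 12, 19, 26)]
print(deployment_frequency(deploys, 28))  # → 1.25
```

Lead time is always at least as long as cycle time, since a task is requested before work on it starts.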
While Sales and Marketing teams (and their leaders) may have a lot of influence with the
executives and board, they all depend on the Engineering teams to deliver. At the end of the day,
these business leaders need to plan around delivery for sales and marketing timelines.
With that in mind, it’s important to be able to measure and report on the progress your team is
making toward that value creation in order to drive alignment across all the teams in your
company.
How you present these topics to business leaders will largely depend on the leaders you work
with and their preferences. But some great KPIs to track and stay on top of things internally are:
Completion / Burn down Percentage, and Predicted Ship Date.
Completion / Burn down Percentage: measures the trend of work that has been
completed vs. what remains to be done over a certain period of time.
Predicted Ship Date: An estimate as to when a given release, project, feature, or product
will ship
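These two KPIs can be approximated with a naive linear projection, sketched below. The assumption that future velocity equals recent velocity is a simplification; real forecasting usually accounts for scope changes and uncertainty ranges.

```python
from datetime import date, timedelta

def completion_percent(done: int, total: int) -> float:
    """Burndown percentage: share of planned work items completed."""
    return round(100 * done / total, 1)

def predicted_ship_date(today: date, remaining: int,
                        velocity_per_week: float) -> date:
    """Linear projection: remaining work divided by recent velocity.

    velocity_per_week is assumed to be measured from past sprints.
    """
    weeks_left = remaining / velocity_per_week
    return today + timedelta(weeks=weeks_left)

print(completion_percent(30, 80))  # → 37.5
print(predicted_ship_date(date(2024, 6, 3), remaining=50,
                          velocity_per_week=10))  # → 2024-07-08
```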
Software defect metrics play a key role in assessing and improving software reliability. These
metrics are used to quantify the number and severity of defects or issues within the software
system. Examples of software defect metrics include defect density, which measures the number
of defects per unit of code, and mean time to failure, which calculates the average time between
failures in the system.
These metrics help identify areas of the software that are prone to defects and allow for targeted
efforts to improve reliability.
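Both defect metrics named above are straightforward to compute. The sketch below assumes defect counts per KLOC (thousand lines of code) and a list of cumulative failure times in operating hours; the data shapes are illustrative.

```python
def defect_density(defects: int, kloc: float) -> float:
    """Defects per thousand lines of code."""
    return round(defects / kloc, 2)

def mean_time_to_failure(failure_times_hours: list[float]) -> float:
    """Average operating time between successive failures."""
    gaps = [t2 - t1 for t1, t2 in
            zip(failure_times_hours, failure_times_hours[1:])]
    return sum(gaps) / len(gaps)

print(defect_density(45, 12.5))                  # → 3.6
print(mean_time_to_failure([0, 100, 250, 450]))  # → 150.0
```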
Improving software reliability requires a proactive approach to defect prevention, detection, and
removal. Software engineering teams utilize various techniques such as rigorous testing and code
review to identify and fix defects early in the development lifecycle. By continuously monitoring
software defect metrics, teams can track the effectiveness of these measures and implement
corrective actions as needed.
Software reliability is essential in industries where system failures can have significant
consequences, such as healthcare, finance, and aviation. Reliability instills trust in the software
product and helps minimize financial losses and reputational damage due to software failures.
Software engineering teams should prioritize software reliability and employ effective defect
metrics to improve the overall quality and dependability of their software systems.
Software Metrics
A software metric is a measure of software characteristics which are measurable or countable.
Software metrics are valuable for many reasons, including measuring software performance,
planning work items, measuring productivity, and many other uses.
Within the software development process, there are many metrics that are all connected. Software
metrics are similar to the four functions of management: planning, organization, control, and
improvement.
1. Product Metrics: These are the measures of various characteristics of the software product.
The two important software characteristics are the size and complexity of the software, and its
quality and reliability.
2. Process Metrics: These are the measures of various characteristics of the software
development process. For example, the efficiency of fault detection. They are used to measure
the characteristics of methods, techniques, and tools that are used for developing software.
Types of Metrics
Internal metrics: Internal metrics are the metrics used for measuring properties that are viewed
to be of greater importance to a software developer. For example, Lines of Code (LOC) measure.
External metrics: External metrics are the metrics used for measuring properties that are viewed
to be of greater importance to the user, e.g., portability, reliability, functionality, usability, etc.
Hybrid metrics: Hybrid metrics are the metrics that combine product, process, and resource
metrics. For example, cost per FP where FP stands for Function Point Metric.
Project metrics: Project metrics are the metrics used by the project manager to check the
project's progress. Data from past projects are used to collect various metrics, like time and
cost; these estimates are used as a baseline for new projects. As the project proceeds, the
project manager will check its progress from time to time and compare the actual effort, cost, and
time with the original estimates. These metrics are used to decrease development costs, time,
effort, and risks. Project quality can also be improved; as quality improves, the number of
errors, as well as the time and cost required, is reduced.
Function points are one of the most commonly used measures of software size and
complexity. In this section, we'll take a look at what function points are, how they're
calculated, and some of the benefits and drawbacks of using them. Whether you're
just getting started in software engineering or you've been doing it for years,
understanding function points can be a valuable tool in your development arsenal.
Defining function points and how they are used in software engineering
A function point is a unit of measurement used to quantify the amount of business functionality
being delivered by a software application. Function points allow software engineers to better
measure the size of a project, identify areas in need of optimization, and analyze development
performance benchmarks over time. Due to its accuracy and flexibility, function point analysis
has become a standardized tool for studying application complexity. Through the evaluation of
data elements, transaction types, external inputs, outputs and inquiries within an application,
calculations are performed to translate user requirements into uniform standards which can then
be measured against industry baselines. This generalized method for measuring virtually any
type of IT system ensures engineers have the data needed to correctly assess the scope of a
project for cost estimation and process improvement purposes.
Function point analysis is an effective method for measuring software size. By focusing on the
features and functions that a user can access and use, this metric can accurately determine the
complexity of an application. They are especially useful when comparing projects of different
sizes; they provide a consistent method for assessing and measuring project scope, instead of
asking developers to estimate the number of lines of code each project requires. Additionally,
because they take into account differences between programming languages, function points are
particularly beneficial during development or design changes. Not only do they allow insightful
comparisons between systems of differing sizes and structures, but also provide a wide range of
other benefits such as identifying development bottlenecks or highlighting discrepancies in
customer specifications. Ultimately, they offer an effective way to measure the size of software
projects at any stage of their lifecycle, from planning to implementation.
Comparing and contrasting with other measures of software size, such as lines of code
Function points serve as an effective measure for determining the relative size of a software
system as it allows for quantifying the efforts associated with developing and maintaining the
system. Initially limited to only certain languages, such as COBOL, function points have since
become more dynamic and flexible in accommodating other widely used languages like Java.
In comparison to lines of code (LOC) metrics, function points are capable of providing a more
holistic view in terms of evaluating the complexity of software development that takes into
account key factors such as data elements, data files, user inputs and outputs. Moreover, since
function points take into consideration the type of each element instead of merely counting them,
it helps to assign a proper “size” metric even if the code varies drastically in length.
Thus, while both approaches are essential in measuring software size, function point analysis
offers more varied and specific aspects which makes it more advantageous compared to LOC
metrics.
Some examples of how they can be used to estimate the cost and effort required for a
software project
Function points can be a valuable tool for determining the expense, time, and effort needed to
successfully complete a software project. The cost calculation of such projects can vary
depending on their scope and complexity, making function points an advantageous tool for
estimating the relative cost. For example, the Inputs Model is an effective method for calculating
time and effort which uses values associated with the number of user inputs to determine the
project’s cost. The Outputs Model also takes into account factors such as output record types,
inquiries, interface files, external programs and reports to better determine total project cost.
These models highlight how function points effectively identify even subtle elements of a
software project in order to make accurate estimations.
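The counting described above can be sketched as a weighted sum. The weights below follow the commonly cited "average complexity" values from standard function point analysis (EI=4, EO=5, EQ=4, ILF=10, EIF=7); a full count assigns low/average/high weights per item and rates 14 general system characteristics, so treat this as a simplified illustration.

```python
# Average-complexity weights from standard function point analysis.
WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "internal_files": 10,
    "external_interfaces": 7,
}

def unadjusted_function_points(counts: dict) -> int:
    """Weighted sum of the five function types."""
    return sum(counts[k] * w for k, w in WEIGHTS.items())

def adjusted_function_points(ufp: int, influence_sum: int) -> float:
    """Apply the value adjustment factor:
    VAF = 0.65 + 0.01 * (sum of the 14 characteristic ratings, 0-5 each)."""
    return ufp * (0.65 + 0.01 * influence_sum)

counts = {
    "external_inputs": 10,
    "external_outputs": 8,
    "external_inquiries": 6,
    "internal_files": 4,
    "external_interfaces": 2,
}
ufp = unadjusted_function_points(counts)
print(ufp)                                          # → 158
print(round(adjusted_function_points(ufp, 42), 2))  # → 169.06
```

Because the weights are language-independent, the same count applies whether the system is written in COBOL or Java, which is exactly why function points support cross-project comparison.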
Function points are widely used as a measure of software size, but their use also comes with
potential drawbacks. The difficulty of assigning accurate values to the complexity factors can
make function point analysis unreliable. Further, installing and running a software sizing tool
can be expensive, making it impractical for smaller teams or projects.
Moreover, since different types of user experiences require different weights when assessing the
complexity of an application, manually determining the weights is both time-consuming and
subjective.
Finally, it should also be noted that parameters like bugs or maintenance support have no
representation, which further reduces their effectiveness as an absolute measure of software size.
In conclusion, function points have emerged as a key tool for measuring the size of a software
product. Not only do they provide an objective metric for sizing software, but they are also easy
to count and allow for accurate comparison between different programs. Furthermore, the use of
function points facilitates faster and more accurate estimates of cost and effort associated with a
software project. However, it's important to remember that the use of function points isn't
without its drawbacks; potential miscalculations may impact both budget and timeline
estimates.
Risk management entails assessing, identifying, and prioritizing project risks or uncertainties.
With it comes planning, monitoring, managing, and mitigating risks. Even meticulously planned
projects come with their fair share of risk factors.
That’s why project managers need to account for these potential risks in their planning process,
by including project scope and cost estimates to stay on track. Also, they need mitigation
strategies to handle risks that arise throughout the project lifecycle.
Principles of Risk Management in Software Engineering
Risk management can be broken down into five key principles:
1. Risk Identification
2. Risk Assessment
3. Risk Handling
4. Risk Mitigation and Control
5. Risk Monitoring and Review
Risk Identification
The first step to handling or mitigating risks is to identify the risks at play.
This is an active process that should be taken care of by a dedicated project manager or other
member of the team. Their job is to unpack the project’s details and identify any and all potential
risks that could be a setback.
In general, it’s important to identify three main types of risk that crop up frequently in
software projects:
1. Technical risk – which involves the choice of technology, integration of tools and
software, hardware setup, security, data protection, dependencies, and any other technical
factor that could potentially derail the project or significantly change its scope or costs.
2. Project risk – that includes the software project’s timeline, budget, resources, scope, and
software requirements. This could stem from technical risks or other factors like internal
resource constraints, stakeholder changes, or scheduling conflicts.
3. Business risk – which includes any risk associated with the business outcome of the
project; be it the costs of development, business requirements, marketability, or cost to
operate.
Using a checklist or systematic analysis, there should be a workflow for identifying as many
risks as possible, upfront in the project. At this stage, they only need to be potential risks, and
they can be of any size or severity.
Risk Assessment
Once the risks have been identified and broadly categorized, you’ll need a framework to assess
them further, and prioritize the actions and resources needed to address them.
Internal Risks
Internal risks are within reach of the organization. They can either be handled directly as part of
the project planning; or you may need to take mitigation, control, or monitoring steps to reduce
the risk, if unable to eliminate it entirely.
Examples include:
External risks
External risks originate outside, and are beyond the organization’s (or project team’s) control.
In most cases, external risks can only be managed (or minimized) through mitigation, control, or
monitoring strategies. In some cases, the risk is simply unavoidable.
At this stage, you should combine both categories in order to group your software development
risks by type and origin.
This is important because it will help you determine whether you should consider risk handling
to address your risks, or risk mitigation in an attempt to shield the project from the impacts of
your risks.
Risk Impact
Next, you’ll want to score each risk according to its potential impact on the project.
1. Severity – the size or magnitude of the impact this risk could have on the overall project.
2. Probability of occurrence – how likely this risk is to occur and/or have the expected
outcome.
Risk Prioritization & Planning
Finally, you can use the risk impact assessment to prioritize those risks that require action.
Ultimately, each one of these risks should have a prioritization and an action plan.
Prioritize solutions for the highest impact risks — which are most likely to happen, and have the
greatest effect on the success of your project.
Then work backwards. The lowest-impact risks may be acceptable risks that require no further
action.
For each risk, you’ll then need to determine the best course of action. In broad strokes,
1. Risk handling: direct actions taken to eliminate the risk or greatly reduce its likelihood
or impact
2. Risk mitigation and control: systemic actions to reduce or eliminate the risk
3. Risk monitoring and review: ongoing actions to assess or measure the risk
Risk Handling
First up: handle the risk outright.
Whenever possible, project plans should include actionable steps to address any potential risks
that are internal and addressable through targeted intervention.
In other words: fix the stuff you can, before it becomes a problem.
For example, one of the technical risks you identify could be that your team is planning to
implement a new technology they’ve never deployed for this type of project. That would
represent a significant risk.
Team members can get creative with risk handling. Sometimes it’s helpful to conduct an old-
fashioned brainstorming session where developers and stakeholders ideate on ways to eliminate
or resolve potential roadblocks.
But this is just one piece of the software risk management process.
Many risks are either external (you have minimal control over them) or simply unresolvable.
In addition to handling the risks that have been identified through proactive measures, the next
step of an effective risk management strategy is mitigation and control; i.e. taking steps in order
to minimize, either the impact or the probability of the risk. (Or both!)
Usually, this involves implementing new policies, procedures, or processes that address the risk
at a systemic level.
Key example: project risks such as budget overages or missed deadlines cannot simply be
eliminated outright. It's not a switch; you can't just stop tasks from taking longer than
planned.
Using historical project and time data as part of the software project management process
enables smarter, data-driven decision making. Software development projects are more likely to
stay on track, and on budget, if the planning process is rooted in empirical data collected from
past projects, rather than generalized estimates and forecasts.
This improves the accuracy of the project scope, budgets, and timelines.
Again, this doesn’t eliminate the risk; but it introduces mechanisms that reduce the risk’s impact:
Lesser probability of occurrence: estimated work hours, budgets, and timelines are
more likely to be accurate and less likely to become a significant risk factor.
Lesser severity: in cases where estimates are inaccurate, they’re more likely to be within
an acceptable margin of error if based on real-world data.
The key here is to uncover strategies that allow you to contain, minimize, and manage risks using
levers that are within your control.
Other mitigation strategies would include things like transferring the risk. This could mean
outsourcing to a third-party contractor, using an insurance policy to cover potential loss,
partnering with another company, or using a penalty clause in a vendor agreement.
Some risks are entirely unmanageable and cannot be mitigated. The best you can do is to monitor
the situation.
Once you’ve implemented risk handling and risk mitigation strategies for all of the risks that can
be addressed through one of these two processes, the remaining risks should be thoroughly
documented, and monitoring plans should be set in place.
For example, if you're unable to draw from historical developer time data as part of your
mitigation strategy, you can instead choose to deploy time tracking for the duration of the
project, turning it into a monitoring strategy.
You’ll be able to measure and track the timeline of the project better, and intervene if the data
suggests that the project is at risk.
Principle of Risk Management
1. Global Perspective: In this, we review the bigger system description, design, and
implementation. We look at the chance and the impact the risk is going to have.
2. Take a forward-looking view: Consider the threat which may appear in the future and
create future plans for directing the next events.
3. Open Communication: This is to allow the free flow of communications between the
client and the team members so that they have certainty about the risks.
4. Integrated management: In this method risk management is made an integral part of
project management.
5. Continuous process: In this phase, the risks are tracked continuously throughout the risk
management paradigm.
Based on these two factors, the priority of each risk can be estimated:
p = r * s
where p is the priority with which the risk must be controlled, r is the probability of the risk
becoming true, and s is the severity of the loss caused if the risk becomes true. If all identified
risks are prioritized in this way, then the most likely and most damaging risks can be controlled
first, and more comprehensive risk abatement methods can be designed for those risks.
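The p = r * s prioritization above can be sketched directly. The risk names and the 0-1 probability / 1-10 severity scales below are illustrative assumptions; any consistent scales work, since only the ranking matters.

```python
def prioritize(risks):
    """Rank risks by p = r * s (probability × severity), highest first.

    risks: iterable of (name, probability 0-1, severity 1-10) tuples.
    """
    scored = [(name, round(r * s, 2)) for name, r, s in risks]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

risks = [
    ("key developer leaves", 0.3, 8),
    ("requirements change", 0.6, 5),
    ("hardware delay", 0.1, 9),
]
print(prioritize(risks))
# → [('requirements change', 3.0), ('key developer leaves', 2.4),
#    ('hardware delay', 0.9)]
```

Note that a severe but unlikely risk ("hardware delay") can rank below a moderate but likely one, which is exactly what the p = r * s rule encodes.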
1. Risk Identification: The project organizer needs to anticipate the risks in the project as
early as possible so that the impact of each risk can be reduced by effective risk management
planning.
A project can be affected by a large variety of risks. To identify the significant risks that
might affect a project, it is necessary to categorize risks into different classes.
There are different types of risks which can affect a software project:
1. Technology risks: Risks that arise from the software or hardware technologies used to
develop the system.
2. People risks: Risks that are associated with the people in the development team.
3. Organizational risks: Risks that arise from the organizational environment where the
software is being developed.
4. Tools risks: Risks that arise from the software tools and other support software used to
create the system.
5. Requirement risks: Risks that arise from changes to the customer requirements and the
process of managing the requirements change.
6. Estimation risks: Risks that arise from the management estimates of the resources
required to build the system.
2. Risk Analysis: During the risk analysis process, you have to consider every identified risk and
make a judgment about the probability and seriousness of that risk.
There is no simple way to do this. You have to rely on your own judgment and experience of
previous projects and the problems that arose in them.
It is not possible to make an exact numerical estimate of the probability and seriousness of
each risk. Instead, you should assign the risk to one of several bands:
1. The probability of the risk might be determined as very low (0-10%), low (10-25%),
moderate (25-50%), high (50-75%) or very high (+75%).
2. The effect of the risk might be determined as catastrophic (threaten the survival of the
plan), serious (would cause significant delays), tolerable (delays are within allowed
contingency), or insignificant.
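The probability bands above map directly onto threshold checks. This sketch assumes probability estimates expressed as fractions between 0 and 1:

```python
def probability_band(p: float) -> str:
    """Map a probability estimate (0-1) to the bands defined in the text."""
    if p < 0.10:
        return "very low"
    if p < 0.25:
        return "low"
    if p < 0.50:
        return "moderate"
    if p < 0.75:
        return "high"
    return "very high"

print([probability_band(p) for p in (0.05, 0.2, 0.4, 0.6, 0.9)])
# → ['very low', 'low', 'moderate', 'high', 'very high']
```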
Risk Control
It is the process of managing risks to achieve the desired outcomes. After all the identified
risks of a project are assessed, plans must be made to contain the most harmful and the most
likely risks. Different risks need different containment methods. In fact, most risks need
ingenuity on the part of the project manager in tackling them.
1. Avoid the risk: This may take several ways such as discussing with the client to change
the requirements to decrease the scope of the work, giving incentives to the engineers to
avoid the risk of human resources turnover, etc.
2. Transfer the risk: This method involves getting the risky element developed by a third
party, buying insurance cover, etc.
3. Risk reduction: This means planning ways to contain the loss due to the risk. For
instance, if there is a risk that some key personnel might leave, new recruitment can be
planned.
Risk Leverage: To choose between the various methods of handling a risk, the project plan must
weigh the cost of controlling the risk against the corresponding reduction in risk exposure. For
this, the risk leverage of the various risks can be estimated.
Risk leverage is the difference in risk exposure divided by the cost of reducing the risk.
Risk leverage = (risk exposure before reduction - risk exposure after reduction) / (cost of
reduction)
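The formula above can be sketched in a few lines. The dollar figures below are purely illustrative; risk exposure is commonly computed as probability times expected loss:

```python
def risk_leverage(exposure_before: float, exposure_after: float,
                  cost: float) -> float:
    """Risk leverage = (exposure before - exposure after) / cost of reduction.

    A leverage greater than 1 means the reduction in exposure
    outweighs the cost of achieving it.
    """
    return (exposure_before - exposure_after) / cost

before = 40_000.0  # e.g. 40% chance of a 100k loss
after = 10_000.0   # exposure after a mitigation step, e.g. cross-training
print(risk_leverage(before, after, cost=15_000))  # → 2.0
```

Here, spending 15,000 to cut expected losses by 30,000 gives a leverage of 2.0, so the mitigation pays for itself twice over.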
Risk planning: The risk planning method considers each of the key risks that have been
identified and develops ways to manage these risks.
For each of the risks, you have to think of the behavior that you may take to minimize the
disruption to the plan if the issue identified in the risk occurs.
You also should think about data that you might need to collect while monitoring the plan so that
issues can be anticipated.
Again, there is no easy process that can be followed for contingency planning. It relies on the
judgment and experience of the project manager.
Risk Monitoring: Risk monitoring is the process of checking that your assumptions about the
product, process, and business risks have not changed.
Using the terms “proactive” and “reactive” when discussing risk management can confuse
people, but that shouldn’t be so; proactive and reactive risk management are different things.
Understanding the difference between the two is crucial to developing effective risk mitigation
strategies. So let’s understand the nuances of each approach in detail.
Each strategy has activities, metrics, and behaviors useful in risk analysis.
One fundamental point about reactive risk management is that the disaster or threat must occur
before management responds. In contrast, proactive risk management is about taking
preventative measures before the event to decrease its severity. That’s a good thing to do.
At the same time, however, organizations should develop reactive risk management plans that
can be deployed after the event – because many times, the unwanted event will happen. If
management hasn’t developed reactive risk management plans, then executives end up making
decisions about how to respond as the event happens; that can be costly and stressful.
There is one Catch-22 with reactive risk management: Although this approach gives you time to
understand the risk before acting, you’re still one step behind the unfolding threat. Other projects
will lag as you attend to the problem at hand.
The reactive approach learns from past (or current) events and prepares for future events. For
example, businesses can purchase cyber security insurance to cover the costs of a security
disruption.
This strategy assumes that a breach will happen at some point. But once that breach does occur,
the business might understand more about how to avoid future violations and perhaps could even
tailor its insurance policies accordingly.
As the name suggests, proactive risk management means that you identify risks before they
happen and figure out ways to avoid or alleviate the risk. It seeks to reduce the hazard’s risk
potential or (even better) prevent the threat altogether.
A good example is vulnerability testing and remediation. Any organization of appreciable size is
likely to have vulnerabilities in its software that attackers could find and exploit. So regular
testing can find and patch those vulnerabilities to eliminate that threat.
A proactive management strategy gives you more control over your risk management. For
example, you can decide which issues should be top priorities and what potential damage you
will accept.
Proactive management also involves constantly monitoring your systems, risk processes, cyber
security, competition, business trends, and so forth. Understanding the level of risk before an
event allows you to instruct your employees on how to mitigate those risks.
A proactive approach, however, implies that each risk is constantly monitored. It also requires
regular risk reviews to update your current risk profile and to identify new risks affecting the
company. This approach drives management to be constantly aware of the direction of those
risks.
Predictive risk management is about predicting future risks, outcomes, and threats. Some predictive
components may sound similar to proactive or reactive strategies. Common predictive techniques include the following.
1. Lean experimentation
Instead of launching a full product line or entering a new market, companies can launch products
in a lean, iterative fashion- the ‘minimum viable product’ – to a small market subsection. This
way, companies can test their products’ operational and financial elements and mitigate the
market-related risks before they launch to a broader audience.
For example, an airline could test facial recognition technology to make security checks faster,
but might want to validate privacy and data security concerns first by trying it out at one airport
before going nationwide.
2. Risk isolation
Companies can isolate potential threats to their business model by separating specific parts of
their infrastructure to protect them from external threats.
For example, some companies might restrict access to critical parts of their software ecosystem
by requiring engineers to work at a specific location instead of working remotely (which opens
the door to potential cyber threats).
3. Risk-reward analysis
Companies may undertake specific initiatives to understand the opportunity cost of entering a
new market or the risk of possibly gaining market share in a saturated market. Before taking the
initiatives at a broader level, the analysis would help them understand the market forces and their
ability to induce or reduce risks with what-if scenarios.
For example, a direct-to-consumer delivery company might want to project the anticipated
demand for entering the market with faster medical supply delivery.
4. Data projection
Companies can analyze data with the help of machine learning techniques to understand specific
behavioral or threat patterns in their ways of working. These data analysis efforts might also help
them understand what second-order effects are lurking because of inefficient processes or lax
attention on certain parts of the business.
For example, a large retailer might use data analysis to find inefficiencies in its supply chain to
reduce last-mile delivery times and get an edge over the competition.
5. Certification
To stay relevant and retain customer trust, companies could also obtain safety and security
certifications to prove they are a resilient brand that can sustain and mitigate significant
operational risks.
For example, a new fintech company might get certified for PCI-DSS security standards before
scaling in a new market to build trust with its customers.
Types of software project risks:
Security risks
Description: Risks related to vulnerabilities in the software that could allow unauthorized access or data breaches.
Impact: Security risks can lead to financial losses, reputational damage, and legal liabilities.
Common causes: insecure coding practices; lack of proper access controls; vulnerabilities in third-party libraries; insufficient data security measures.
Scalability risks
Description: Risks associated with the software's ability to handle increasing workloads or user demands.
Impact: Scalability risks can lead to performance bottlenecks, outages, and lost revenue.
Common causes: inadequate infrastructure capacity; inefficient algorithms or data structures; lack of scalability testing; poorly designed architecture.
Budgetary risks
Description: Risks associated with exceeding the project's budget or financial constraints.
Impact: Budgetary risks can lead to financial strain, project delays, and even cancellation.
Common causes: unrealistic cost estimates; scope creep or changes in requirements; unforeseen expenses, such as third-party licenses or hardware upgrades; inefficient resource utilization.
Operational risks
Description: Risks associated with the ongoing operation and maintenance of the software system.
Impact: Operational risks can lead to downtime, outages, and data loss.
Common causes: inadequate monitoring and alerting systems; lack of proper disaster recovery plans; insufficient training for operational staff; poor change management practices.
Schedule risks
Description: Risks related to delays in the software development process or missed deadlines.
Impact: Schedule risks can lead to increased costs, pressure on resources, and missed market opportunities.
Common causes: unrealistic timelines or milestones; underestimation of task complexity; resource dependencies or conflicts; unforeseen events or delays.
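One way a team can operationalize this taxonomy is as a common-cause checklist keyed by risk category. A minimal Python sketch follows; the dictionary layout is an illustrative assumption, populated from the causes listed above:

```python
# Illustrative risk-category checklist; the structure is an assumption,
# not a standard format. Causes follow the taxonomy in the notes above.
RISK_CATEGORIES = {
    "security": [
        "insecure coding practices",
        "lack of proper access controls",
        "vulnerabilities in third-party libraries",
        "insufficient data security measures",
    ],
    "scalability": [
        "inadequate infrastructure capacity",
        "inefficient algorithms or data structures",
        "lack of scalability testing",
        "poorly designed architecture",
    ],
    "budgetary": [
        "unrealistic cost estimates",
        "scope creep or changes in requirements",
        "unforeseen expenses (licenses, hardware upgrades)",
        "inefficient resource utilization",
    ],
    "operational": [
        "inadequate monitoring and alerting systems",
        "lack of proper disaster recovery plans",
        "insufficient training for operational staff",
        "poor change management practices",
    ],
    "schedule": [
        "unrealistic timelines or milestones",
        "underestimation of task complexity",
        "resource dependencies or conflicts",
        "unforeseen events or delays",
    ],
}

def checklist_for(category):
    """Return the common-cause checklist for a given risk category."""
    return RISK_CATEGORIES[category]

print(len(RISK_CATEGORIES))  # 5
```

A checklist like this can seed the risk identification step described in the next section.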
Risk Assessment
The purpose of the risk assessment is to identify and prioritize the risks at the earliest stage and
avoid losing time and money.
Risk identification is a systematic attempt to specify threats to the project plan (estimates,
schedule, resource loading, etc.). By identifying known and predictable risks, the project
manager takes a first step toward avoiding them when possible and controlling them when
necessary.
There are two distinct types of risks: generic risks and product-specific risks. Generic risks are
a potential threat to every software project. Product-specific risks can be identified only by those
with a clear understanding of the technology, the people, and the environment that is specific to
the project at hand. To identify product-specific risks, the project plan and the software statement
of scope are examined and an answer to the following question is developed: "What special
characteristics of this product may threaten our project plan?"
One method for identifying risks is to create a risk item checklist. The checklist can be used for
risk identification and focuses on some subset of known and predictable risks in the following
generic subcategories:
• Product size: risks associated with the overall size of the software to be built or modified.
• Process definition: risks associated with the degree to which the software process has been
defined and is followed by the development organization.
• Development environment: risks associated with the availability and quality of the tools to be
used to build the product.
• Technology to be built: risks associated with the complexity of the system to be built and the
"newness" of the technology that is packaged by the system.
• Staff size and experience: risks associated with the overall technical and project experience of
the software engineers who will do the work.
The risk item checklist can be organized in different ways. Questions relevant to each of the
topics can be answered for each software project. The answers to these questions allow the
planner to estimate the impact of risk. A different risk item checklist format simply lists
characteristics that are relevant to each generic subcategory. Finally, a set of “risk components
and drivers" are listed along with their probability of occurrence. Drivers for performance,
support, cost, and schedule are discussed in answer to later questions.
A number of comprehensive checklists for software project risk have been proposed in the
literature. These provide useful insight into generic risks for software projects and should be
used whenever risk analysis and management is instituted. However, a relatively short list of
questions can be used to provide a preliminary indication of whether a project is “at risk.”
The following questions have been derived from risk data obtained by surveying experienced software
project managers in different parts of the world. The questions are ordered by their relative
importance to the success of a project.
1. Have top software and customer managers formally committed to support the project?
2. Are end-users enthusiastically committed to the project and the system/product to be built?
3. Are requirements fully understood by the software engineering team and their customers?
4. Have customers been involved fully in the definition of requirements?
5. Do end-users have realistic expectations?
6. Is project scope stable?
7. Does the software engineering team have the right mix of skills?
8. Are project requirements stable?
9. Does the project team have experience with the technology to be implemented?
10. Is the number of people on the project team adequate to do the job?
11. Do all customer/user constituencies agree on the importance of the project and on the
requirements for the system/product to be built?
If any one of these questions is answered negatively, mitigation, monitoring, and management
steps should be instituted without fail. The degree to which the project is at risk is directly
proportional to the number of negative responses to these questions.
The U.S. Air Force has written a pamphlet that contains excellent guidelines for software risk
identification and abatement. The Air Force approach requires that the project manager identify
the risk drivers that affect software risk components—performance, cost, support, and schedule.
In the context of this discussion,
• Performance risk: the degree of uncertainty that the product will meet its
requirements and be fit for its intended use.
• Cost risk: the degree of uncertainty that the project budget will be
maintained.
• Support risk: the degree of uncertainty that the resultant software will be
easy to correct, adapt, and enhance.
• Schedule risk: the degree of uncertainty that the project schedule will be
maintained and that the product will be delivered on time.
The impact of each risk driver on the risk component is divided into one of four impact
categories—negligible, marginal, critical, or catastrophic.
Risk projection, also called risk estimation, attempts to rate each risk in two ways:
i. The likelihood or probability that the risk is real.
ii. The consequences of the problems associated with the risk, should it occur.
The project planner, along with other managers and technical staff, performs four risk projection
activities:
(1) Establish a measure that reflects the perceived likelihood of a risk.
(2) Delineate the consequences of the risk.
(3) Estimate the impact of the risk on the project and the product.
(4) Note the overall accuracy of the risk projection so that there will be no
misunderstandings.
Risk table provides a project manager with a simple technique for risk projection
Steps in Setting up Risk Table
(1) The project team begins by listing all risks in the first column of the table. This can be
accomplished with the help of the risk item checklists.
(2) Each risk is categorized in the second column (e.g., project size risk, business risk).
(3) The probability of occurrence of each risk is entered in the next column of the table. The
probability value for each risk can be estimated by team members individually.
(4) Individual team members are polled in round-robin fashion until their assessments of risk
probability begin to converge.
Assessing Impact of Each Risk
(1) Each risk component is assessed using the Risk Characterization Table (Figure 1) and impact
category is determined.
(2) Categories for each of the four risk components—performance, support, cost, and
schedule—are averaged to determine an overall impact value.
A weighted average can be used if one risk component has more
significance for the project.
(3) Once the first four columns of the risk table have been completed, the table is sorted by
probability and by impact.
High-probability, high-impact risks percolate to the top of the table, and low-
probability risks drop to the bottom.
(4) Project manager studies the resultant sorted table and defines a cutoff line.
Cutoff line (drawn horizontally at some point in the table) implies that only risks
that lie above the line will be given further attention.
i. Risk factor with a high impact but a very low probability of occurrence
should not absorb a significant amount of management time.
ii. High-impact risks with moderate to high probability and low-impact risks
with high probability should be carried forward into the risk analysis
steps that follow.
All risks that lie above the cutoff line must be managed.
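The sorting and cutoff steps above can be sketched as a small Python routine. The risks, probabilities, and impact values below are hypothetical, with impact encoded on a 1 (catastrophic) to 4 (negligible) scale:

```python
# Hypothetical risk table entries: (risk, probability, impact category).
# Impact scale: 1 = catastrophic, 2 = critical, 3 = marginal, 4 = negligible.
risks = [
    ("Size estimate may be significantly low", 0.60, 2),
    ("Larger number of users than planned",    0.30, 3),
    ("End users resist system",                0.40, 2),
    ("Funding will be lost",                   0.40, 1),
    ("Lack of training on tools",              0.80, 3),
]

# Sort by probability (descending), then impact (ascending, since
# 1 = catastrophic is most severe): high-probability, high-impact
# risks percolate to the top of the table.
table = sorted(risks, key=lambda r: (-r[1], r[2]))

# An example manager-defined cutoff line: manage only risks with
# probability >= 0.40 AND impact of critical (2) or worse.
managed = [r for r in table if r[1] >= 0.40 and r[2] <= 2]
for name, p, impact in managed:
    print(f"{name}: P={p}, impact category {impact}")
```

The cutoff predicate is a management decision; a different team might carry forward low-impact risks with very high probability as well, as the notes suggest.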
The column labeled RMMM contains a pointer into a Risk Mitigation, Monitoring
and Management Plan.
The impact of a risk is governed by three factors:
i. Nature of a risk - the problems that are likely if the risk occurs.
ii. Scope of a risk - combines the severity with its overall distribution (how much of the
project will be affected or how many customers are harmed?).
iii. Timing of a risk - when and for how long the impact will be felt.
1. Determine the average probability of occurrence value for each risk component.
2. Using Figure 1, determine the impact for each component based on the criteria shown.
3. Complete the risk table and analyze the results as described in the preceding sections.
Overall risk exposure, RE, is determined using:
RE = P x C
where P is the probability of occurrence for a risk, and C is the cost to the project should the
risk occur.
Example
Assume the software team defines a project risk in the following manner:
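A hedged sketch of the RE computation with illustrative figures (the risk scenario, probability, and cost values below are assumptions, not the original worked example):

```python
# Hypothetical risk: only some of the components scheduled for reuse
# will actually be integrated, so the rest must be custom developed.
probability = 0.80          # estimated likelihood the risk occurs (P)
components_affected = 18    # components needing custom development
cost_per_component = 1_400  # average cost per component (illustrative)

cost_if_risk_occurs = components_affected * cost_per_component  # C
risk_exposure = probability * cost_if_risk_occurs               # RE = P x C
print(round(risk_exposure))  # 20160
```

Summing RE over all risks above the cutoff line gives a rough contingency figure for the project budget.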
Risk Refinement
A risk may be stated generally during the early stages of project planning.
As time passes and more is learned about the project, it may be possible to refine the risk into a
set of more detailed risks.
Risk Mitigation, Monitoring and Management Plan (RMMM) - documents all work
performed as part of risk analysis and is used by the project manager as part of the overall
project plan.
RMMM stands for risk mitigation, monitoring, and management. The strategy for handling risk
has three components:
1. Risk Avoidance
2. Risk Monitoring
3. Risk Management
Risk Mitigation
Risk mitigation means preventing the risk from occurring (risk avoidance).
Risk Monitoring
During risk monitoring, the project manager must monitor the following:
1. The approach and behavior of the team members as the pressure of the project varies.
2. The degree to which the team performs with a spirit of teamwork.
3. The type of cooperation among the team members.
4. The types of problems that occur among team members.
5. The availability of jobs within and outside the organization.
Risk Management
The project manager performs this task when a risk becomes a reality. If the project manager has
applied risk mitigation effectively, it becomes much easier to manage the risks.
For example,
Consider a scenario in which many people are leaving the organization. If sufficient additional
staff is available, the current development activity is known to everybody on the team, and
up-to-date, systematic documentation is available, then any newcomer can easily understand the
current development activity. This ultimately helps the work continue without interruption.
A risk management strategy is usually included in the software project plan, in the form of a
Risk Mitigation, Monitoring, and Management (RMMM) plan. This plan documents all work done
as part of risk analysis, and the project manager generally uses it as part of the overall
project plan.
In some software teams, risk is documented with the help of a Risk Information Sheet (RIS).
This RIS is controlled by using a database system for easier management of information i.e.
creation, priority ordering, searching, and other analysis. After documentation of RMMM and
start of a project, risk mitigation and monitoring steps will start.
Risk Mitigation:
It is an activity used to avoid problems (Risk Avoidance).
Steps for mitigating the risks are as follows.
Finding out the risk.
Removing causes that are the reason for risk creation.
Controlling the corresponding documents from time to time.
Conducting timely reviews to speed up the work.
Risk Monitoring:
It is an activity used for project tracking.
It has the following primary objectives:
To assess whether predicted risks do, in fact, occur.
To ensure that risk aversion steps defined for the risk are being properly applied.
To collect information that can be used for future risk analysis.
Example:
Let us understand RMMM with the help of an example of high staff turnover.
Risk Mitigation:
To mitigate this risk, project management must develop a strategy for reducing turnover. The
possible steps to be taken are:
Meet the current staff to determine causes for turnover (e.g., poor working conditions, low pay,
and competitive job market).
Mitigate those causes that are under our control before the project starts.
Once the project commences, assume turnover will occur and develop techniques to ensure
continuity when people leave.
Organize project teams so that information about each development activity is widely dispersed.
Define documentation standards and establish mechanisms to ensure that documents are
developed in a timely manner.
Assign a backup staff member for every critical technologist.
Risk Monitoring:
As the project proceeds, risk monitoring activities commence. The project manager monitors
factors that may provide an indication of whether the risk is becoming more or less likely. In the
case of high staff turnover, the following factors can be monitored:
General attitude of team members based on project pressures.
Interpersonal relationships among team members.
Potential problems with compensation and benefits.
The availability of jobs within the company and outside it.
Risk Management:
Risk management and contingency planning assumes that mitigation efforts have failed and that
the risk has become a reality. Continuing the example, the project is well underway, and a
number of people announce that they will be leaving. If the mitigation strategy has been
followed, backup is available, information is documented, and knowledge has been dispersed
across the team. In addition, the project manager may temporarily refocus resources (and
readjust the project schedule) to those functions that are fully staffed, enabling newcomers who
must be added to the team to "get up to speed".
Drawbacks of RMMM:
It incurs additional project costs.
It takes additional time.
For larger projects, implementing an RMMM may itself turn out to be another tedious project.
RMMM does not guarantee a risk-free project; in fact, risks may also arise after the project is
delivered.