
Software Quality Engineering
"You can’t control what you can’t measure." – Tom DeMarco
KCA-035
Unit-3 Contents

▪ Software Quality Management and Models: Modeling Process


▪ Software Reliability Models: The Rayleigh Model
▪ Exponential Distribution Model
▪ Software Reliability Growth Models
▪ Software Reliability Allocation Models
▪ Criteria for Model Evaluation
▪ Software Quality Assessment Models
▪ Hierarchical Model of Software Quality Assessment.
Software Quality Management

From Wikipedia, the free encyclopedia


Software quality management (SQM) is a management process that aims to
develop and manage the quality of software so as to best ensure that the product
meets the quality standards expected by the customer while also meeting any
applicable regulatory and developer requirements. Quality management comprises
the following activities: quality control (QC), quality assurance (QA), and
quality planning (QP).
Software Quality Management
Effective quality management is indicated by a high QA maturity level. In turn,
mature quality assurance is impossible without clear quality planning and
thorough quality control activities. Test Maturity Model Integration (TMMi),
the most popular QA maturity model, provides a clear set of QA, QP and QC
activities.
Software Reliability

According to ANSI, software reliability is defined as: the probability of
failure-free software operation for a specified period of time in a specified
environment.

[Figures: Bathtub curve for hardware reliability; Revised bathtub curve for software reliability]
Software Reliability Models

Reliability modeling is the process of predicting or understanding the
reliability of a component or system prior to its implementation.
Software reliability models are statistical models which can be used to make
predictions about a software system's failure rate, given the failure history
of the system. The models make assumptions about the fault discovery and
removal process. These assumptions determine the form of the model and the
meaning of the model's parameters.
Software Reliability Models
Prediction models vs. estimation models:
• Data reference: prediction models use historical information; estimation models use data from the current software development effort.
• When used in the development cycle: prediction models are usually made prior to the coding and test phases and can be used as early as the requirement phase; estimation models are usually made later in the life cycle, during testing (after some data have been collected).
• Time frame: prediction models predict reliability at some future time; estimation models estimate reliability at either the present or some future time.
• Example: prediction, exponential distribution models; estimation, Musa's execution time model.
Software Reliability Models
Jelinski and Moranda Model (J-M Model):
Assumptions:
⁓ The program contains N initial faults, where N is an unknown but fixed constant.
⁓ Each fault in the program is independent and equally likely to cause a failure
during a test.
⁓ Time intervals between occurrences of failures are independent of each other.
⁓ Whenever a failure occurs, the corresponding fault is removed with certainty.
⁓ The fault that causes a failure is assumed to be removed instantaneously, and
no new faults are inserted during the removal of the detected fault.
⁓ The software failure rate during a failure interval is constant and is
proportional to the number of faults remaining in the program.
Jelinski and Moranda Model (J-M Model):
The failure rate during the ith failure interval is given by
λ(tᵢ) = φ[N − (i − 1)],  i = 1, 2, …, N
where
• φ = a proportionality constant, the contribution any one fault makes to the
overall failure rate
• N = the number of initial faults in the program
• tᵢ = the time between the (i−1)th and the ith failures.
For example, the initial failure intensity is λ(t₁) = φN, and after the first
failure the failure intensity decreases to λ(t₂) = φ(N − 1), and so on.
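A minimal sketch of this calculation; the values for N and φ are illustrative assumptions, not figures from the slides:

```python
# Jelinski-Moranda failure rate per interval: lambda_i = phi * (N - (i - 1)).
N = 50        # assumed number of initial faults
phi = 0.02    # assumed contribution of each fault to the failure rate

for i in range(1, 6):
    rate = phi * (N - (i - 1))
    print(f"failure rate in interval {i}: {rate:.2f}")
# Starts at phi * N = 1.00 and drops by phi = 0.02 after each fault is removed.
```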
Software Reliability Models
Musa's Basic Execution Time Model
• The model focuses on failure intensity; the failure intensity decreases with
time, i.e., as execution time increases, the failure intensity decreases.
• Each failure causes the same amount of decrement in the failure intensity,
i.e., the failure intensity decreases at a constant rate with the number of
failures.
• The failure intensity (number of failures per unit time) as a function of the
number of failures experienced is
λ(μ) = λ0(1 − μ/v0)
where
• λ0 is the initial failure intensity at the start of execution (i.e., at time t = 0)
• μ is the expected number of failures by the given time t
• v0 is the total number of failures occurring over an infinite time period.
Note: Reliability growth relies on the assumption that faults are removed
immediately after being discovered.
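A small sketch of the linear decrease in failure intensity; λ0 and v0 below are assumed, illustrative values:

```python
# Basic execution time model: lambda(mu) = lambda0 * (1 - mu / v0).
lambda0 = 10.0   # assumed initial failure intensity (failures per CPU hour)
v0 = 100.0       # assumed total failures over infinite execution time

def failure_intensity(mu):
    """Failure intensity after mu failures have been experienced."""
    return lambda0 * (1 - mu / v0)

for mu in (0, 25, 50, 75, 100):
    print(mu, failure_intensity(mu))   # falls linearly from 10.0 to 0.0
```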
Software Reliability Models
Basic Execution Time Model
Logarithmic Poisson Execution Time Model
• This model was also developed by Musa et al. (MUSA79).
• The failure intensity function is different from that of the basic model: it
assumes infinite failures.
• The failure intensity function (the decrement per failure) decreases
exponentially, whereas it is constant for the basic model.
• The failure intensity function is
λ(μ) = λ0 exp(−θμ)
where
• λ0 is the initial failure intensity at the start of execution (i.e., at time t = 0)
• θ is the failure intensity decay parameter
• μ is the expected number of failures by the given time t.
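For comparison with the basic model, a sketch of the exponentially decaying failure intensity; λ0 and θ are assumed values:

```python
import math

# Logarithmic Poisson model: lambda(mu) = lambda0 * exp(-theta * mu).
lambda0 = 10.0   # assumed initial failure intensity
theta = 0.05     # assumed failure intensity decay parameter

def failure_intensity(mu):
    return lambda0 * math.exp(-theta * mu)

for mu in (0, 10, 20, 40, 80):
    print(mu, round(failure_intensity(mu), 3))   # decays exponentially, never reaching zero
```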
Software Reliability Models
The Bug Seeding Model
Mills' error seeding model proposed an error seeding method to estimate the
number of errors in a program by introducing seeded errors into the program.
From the debugging data, which consist of inherent errors and induced (seeded)
errors, the unknown number of inherent errors can be estimated. If both
inherent errors and induced errors are equally likely to be detected, then the
probability of finding k induced errors among r removed errors follows a
hypergeometric distribution:
P(k) = C(n1, k) · C(N, r − k) / C(N + n1, r)
where
• N = total number of inherent errors
• n1 = total number of induced errors
• r = total number of errors removed during debugging
• k = total number of induced errors in the r removed errors
• r − k = total number of inherent errors in the r removed errors.
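A rough sketch of the seeding arithmetic with made-up counts; the ratio estimate of N at the end is the usual Mills-style estimator, shown for illustration only:

```python
from math import comb

# Assumed, illustrative counts.
N = 100    # inherent errors (unknown in practice; fixed here to evaluate the probability)
n1 = 20    # induced (seeded) errors
r = 30     # errors removed during debugging
k = 5      # induced errors found among the r removed errors

# Hypergeometric probability of finding exactly k induced errors among r removals.
p_k = comb(n1, k) * comb(N, r - k) / comb(N + n1, r)
print(f"P(k = {k}) = {p_k:.4f}")

# Ratio estimate of the inherent errors from the observed counts: N ≈ n1 * (r - k) / k.
print("estimated inherent errors:", n1 * (r - k) / k)   # 20 * 25 / 5 = 100
```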
Software Reliability Models
Shooman Model
The model proposed by Shooman in 1972 is based on the following assumptions:
• The total number of machine-language instructions in the software program is
constant.
• The number of errors at the start of integration testing is constant and
decreases directly as errors are corrected. No new errors are introduced during
the process of testing.
• The difference between the errors initially present and the cumulative errors
corrected represents the residual errors.
• The failure rate is proportional to the number of residual errors.
Based on these assumptions,
er(x) = e(0) − ec(x)
where
• x is the debugging time since the start of execution
• e(0) is the number of errors present at time x = 0
• ec(x) is the cumulative number of errors corrected by time x
• er(x) is the number of residual errors.
Littlewood-Verrall Model
The Littlewood-Verrall model is a Bayesian software reliability model proposed
by Littlewood and Verrall in 1973. It is often presented alongside the
Jelinski-Moranda (J-M) model, whose deterministic treatment of the failure rate
it relaxes. The assumptions in this model include the following:
• Failures are independent and identically distributed.
• Failure rates vary over time due to changes in usage patterns and the software environment.
• The failure rate function λ(t) is non-negative over time.
• The time intervals between failures are exponentially distributed.
• The size of the software system under test is known and remains constant over time.
The formula for the software reliability function in the Littlewood-Verrall model is given by
F(t) = λ(t) · S(t)
where
• F(t) = number of failures up to time t
• S(t) = size of the software system being tested up to time t
• λ(t) = time-varying failure rate.
Software Reliability Models
Goel-Okumoto Model
The Goel-Okumoto model (also called the exponential NHPP model) is based on the
following assumptions:
• All faults in a program are mutually independent from the failure detection
point of view.
• The number of failures detected at any time is proportional to the current
number of faults in the program. This means that the probability of a fault
actually occurring, i.e., being detected, is constant.
• The isolated faults are removed prior to future test occasions.
• Each time a software failure occurs, the software error which caused it is
immediately removed, and no new errors are introduced.
The mean value function m(t) (the predicted number of defects detected by time t)
satisfies the differential equation
dm(t)/dt = b[a − m(t)]
where
• a is the expected total number of faults that exist in the software before testing
• b is the rate at which the failure rate decreases
• t is the execution time / number of tests.
The solution of this differential equation is
m(t) = a(1 − e^(−bt))
Software Reliability Models
Musa-Okumoto Model
The Musa-Okumoto logarithmic Poisson model is a software reliability model that
treats failures as a non-homogeneous Poisson process in which the failure
intensity decreases exponentially with the expected number of failures
experienced. The following assumptions underpin the model:
• Faults fixed early in testing reduce the failure intensity more than later
fixes, so the decrement per failure shrinks as testing proceeds.
• The expected number of failures over infinite execution time is unbounded
(an infinite-failures model).
• In terms of execution time τ, the mean value function and failure intensity are
μ(τ) = (1/θ) ln(λ0 θ τ + 1) and λ(τ) = λ0 / (λ0 θ τ + 1)
where λ0 is the initial failure intensity and θ is the failure intensity decay
parameter.
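A sketch of the logarithmic mean value function and failure intensity under assumed parameter values:

```python
import math

# Musa-Okumoto logarithmic Poisson model with assumed parameters.
lambda0 = 10.0   # assumed initial failure intensity (failures per CPU hour)
theta = 0.02     # assumed failure intensity decay parameter

def expected_failures(tau):
    """mu(tau) = (1 / theta) * ln(lambda0 * theta * tau + 1)."""
    return (1 / theta) * math.log(lambda0 * theta * tau + 1)

def failure_intensity(tau):
    """lambda(tau) = lambda0 / (lambda0 * theta * tau + 1)."""
    return lambda0 / (lambda0 * theta * tau + 1)

for tau in (0, 10, 50, 200, 1000):
    print(tau, round(expected_failures(tau), 1), round(failure_intensity(tau), 3))
```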
Rayleigh Model
The Rayleigh framework serves as the basis for a quality improvement strategy,
especially one centered on defect prevention and early defect detection and
removal. As an in-process tool, it provides data that indicate the direction of
product quality.

It articulates the points on defect prevention and early defect removal. Based
on the model, if the error injection rate is reduced, the entire area under the
Rayleigh curve becomes smaller, leading to a smaller projected field defect
rate. Also, more defect removal at the front end of the development process
will lead to a lower defect rate at later testing phases and during
maintenance. Both scenarios aim to lower the defects in the later testing
phases, which in turn leads to fewer defects in the field.
Rayleigh Model
The goal is to shift the peak of the Rayleigh curve to the left while lowering
it as much as possible. The strategy is to achieve the defect injection/removal
pattern represented by the lowest curve, the one with the lowest error
injection rate. In the figure on the right, the Y-axis represents the defect
rate. The development phases represented by the X-axis are high-level design
review (I0), low-level design review (I1), code inspection (I2), unit test
(UT), component test (CT), system test (ST), and product general availability
(GA).
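A minimal sketch of a Rayleigh defect-arrival curve, assuming the common form rate(t) = K·(t/t_m²)·exp(−t²/(2·t_m²)); K, t_m and the mapping of phases onto t = 1..7 are illustrative assumptions:

```python
import math

# Rayleigh defect-arrival curve with assumed parameters.
K = 500     # assumed total defects over the life cycle (area under the curve)
tm = 2.0    # assumed location of the peak (e.g., around low-level design / coding)

def defect_rate(t):
    return K * (t / tm**2) * math.exp(-(t**2) / (2 * tm**2))

# Phases I0, I1, I2, UT, CT, ST, GA mapped onto t = 1..7 for illustration only.
for t in range(1, 8):
    print(t, round(defect_rate(t), 1))
# Lowering K (less error injection) or shifting tm to the left (earlier removal)
# shrinks the tail of the curve, i.e., the defects expected in late test phases
# and in the field.
```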
Exponential Distribution Model
In software reliability, the exponential distribution is one of the better-known
software reliability growth models and is often the basis of many other
software reliability growth models. The following should be taken into
consideration when applying the exponential distribution for reliability
projection or for estimating the number of software defects:
• As with all types of modeling and estimation, the more accurate and precise
the input data, the better the outcome.
• Data tracking for software reliability estimation is done either in terms of
precise CPU execution time or on a calendar-time basis.
• Normally, execution-time tracking is used for small projects or special
reliability studies; calendar-time tracking is common for commercial
development. When calendar-time data are used, a basic assumption for the
exponential model is that the testing effort is homogeneous throughout the
testing phase.
Exponential Distribution Model
For instance, in the example shown in the figures on the right, the testing
effort remained consistently high and homogeneous throughout the system test
phase; a separate team of testers worked intensively based on a predetermined
test plan. The product was also large (>100 KLOC), and therefore the trend of
the defect arrival rates tended to be stable even though no execution-time data
were available.
To verify the assumption, indicators of the testing effort, such as the
person-hours in testing for each time unit (e.g., day or week), test cases run,
or the number of variations executed, are needed. If the testing effort is
clearly not homogeneous, some sort of normalization must be made. Otherwise,
models other than the exponential distribution should be considered.

[Figures: Exponential Model Density Distribution; Exponential Model Cumulative Distribution]

Exponential Distribution Model
The exponential model can be regarded as the basic form of the software
reliability growth model.
The reliability function for the exponential distribution is
R(t) = e^(−λt)
Given a failure rate λ, we can calculate the probability of failure-free
operation over time t.

Example 1: Consider that we have data on 1,650 units that have operated for an
average of 400 hours. Overall, there have been 145 failures. Calculate the
reliability at 850 hours.
Exponential Distribution Model
We are given 1,650 units that ran for an average of 400 hours, thus
400 × 1,650 = 660,000 hours of total operating time.
λ = 145 / 660,000 ≈ 0.00022 failures per hour
With this failure rate we can calculate the reliability at 850 hours:
R(850) = e^(−0.00022 × 850) ≈ 0.83

Example 2: For λ = 0.001, i.e., 1 failure per 1,000 hours, the reliability (R) is around
_____ for 8 hours of operation.
Answer: R(8) = e^(−0.001 × 8) ≈ 0.992
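Both examples can be reproduced with a few lines of Python; this is just a sketch of the arithmetic above:

```python
import math

def reliability(failure_rate, t):
    """R(t) = exp(-lambda * t): probability of failure-free operation for time t."""
    return math.exp(-failure_rate * t)

# Example 1: 1,650 units, 400 hours of operation on average, 145 failures observed.
total_hours = 1650 * 400                 # 660,000 unit-hours
failure_rate = 145 / total_hours         # ~0.00022 failures per hour
print(round(reliability(failure_rate, 850), 3))   # ~0.830

# Example 2: lambda = 0.001 (1 failure per 1,000 hours), 8 hours of operation.
print(round(reliability(0.001, 8), 3))            # ~0.992
```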
Advantages of Reliability Growth Models
• Predicting Reliability: Reliability growth models are used to predict the reliability
of a system over time, which can help organizations to make informed decisions
about the allocation of resources and the prioritization of improvements to the
system.
• Guiding the Testing Process: Reliability growth models can be used to guide the
testing process, by helping organizations to determine which tests should be run,
and when they should be run, in order to maximize the improvement of the
system’s reliability.
• Improving the Allocation of Resources: Reliability growth models can help
organizations to make informed decisions about the allocation of resources, by
providing an estimate of the expected reliability of the system over time, and by
helping to prioritize improvements to the system.
• Identifying Problem Areas: Reliability growth models can help organizations to
identify problem areas in the system, and to focus their efforts on improving these
areas in order to improve the overall reliability of the system.
Disadvantages of Reliability Growth Models
• Predictive Accuracy: Reliability growth models are only predictions, and
actual results may differ from the predictions. Factors such as changes in the
system, changes in the environment, and unexpected failures can impact the
accuracy of the predictions.
• Model Complexity: Reliability growth models can be complex and may
require a high level of technical expertise to understand and use effectively.
• Data Availability: Reliability growth models require data on the system’s
reliability, which may not be available or may be difficult to obtain.
Reliability Allocation Models

Reliability Allocation deals with the setting of reliability goals for individual
subsystems such that a specified reliability goal is met, and the hardware and
software subsystem goals are well-balanced among themselves. It involves a
balancing act of determining how to allocate reliability to the components in the
system so the system will meet its reliability goal while at the same time
ensuring that the system meets all of the other associated performance
specifications.
Three allocation methods: equal allocation, weighted reliability allocation and
cost optimization allocation.
Reliability Allocation Models

A reliability block diagram can be created to model complex, advanced systems.
A reliability block diagram represents the reliability relationships of the
components in a system and can comprise blocks in any combination of series,
parallel, stand-by and load-sharing configurations. After constructing the
reliability model of a complex system, we can optimize the allocation of
reliability to the different components in order to meet a reliability
specification while considering the cost of the allocation.
Of the three methods, the simplest is equal reliability allocation, which
distributes the reliability requirement uniformly among all components. For
example, suppose a system with five components in series has a reliability
objective of 90% for a given operating time. The uniform allocation of the
objective requires each component to have a reliability of 0.9^(1/5) ≈ 98% for
the specified operating time.
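The series example works out as in this small sketch:

```python
# Equal reliability allocation for n components in series:
# each component must achieve R_component = R_system ** (1 / n).
n = 5
system_goal = 0.90
component_goal = system_goal ** (1 / n)
print(round(component_goal, 3))   # ~0.979, i.e., roughly 98% per component

# Check: the series system reliability is the product of the component reliabilities.
print(round(component_goal ** n, 2))   # 0.9
```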
Reliability Allocation Models

Weighted reliability allocation: The reliability optimization process begins with


the development of a model that represents the entire system. This is
accomplished with the construction of a system reliability block diagram that
represents the reliability relationships of the components in the system. From this
model, the system reliability impact of different component modifications can be
estimated and considered alongside the costs that would be incurred in the
process of making those modifications. It is then possible to perform an
optimization analysis for this problem, finding the best combination of
component reliability improvements that meet or exceed the performance goals
at the lowest cost.
Reliability Allocation Models

In some systems, even raising an individual component's reliability to a
hypothetical value of 1 (100% reliability, implying that the component never
fails) is not enough: the overall system reliability goal cannot be met by
improving the reliability of just one component. The next logical step is to
try to increase the reliability of two components.
Reliability Allocation Models
Consider, for example, a system with three components connected
reliability-wise in parallel. The reliabilities of the components for a given
time are R1 = 0.6, R2 = 0.7 and R3 = 0.8. A reliability goal of RG = 0.99 is
required for this system. The initial system reliability is:
RS = 1 − (1 − 0.6)⋅(1 − 0.7)⋅(1 − 0.8) = 0.976
The current system reliability is inadequate to meet the goal. From the figure
on the right, it can be seen that the reliability goal can be reached by
improving Component 1, Component 2 or Component 3.
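A quick sketch of the parallel-system calculation; the improved value of 0.9 for Component 1 is an illustrative assumption:

```python
from functools import reduce

# Reliability of components in parallel: R_S = 1 - product of (1 - R_i).
reliabilities = [0.6, 0.7, 0.8]
system = 1 - reduce(lambda acc, r: acc * (1 - r), reliabilities, 1.0)
print(round(system, 3))   # 0.976, below the 0.99 goal

# Improving a single component (here Component 1 to an assumed 0.9) can close the gap.
improved = 1 - (1 - 0.9) * (1 - 0.7) * (1 - 0.8)
print(round(improved, 3))   # 0.994, which meets the 0.99 goal
```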
Reliability Allocation Models

Cost/Penalty optimization allocation: Developing the "cost of reliability"


relationship will give the engineer an understanding of which components to
improve and how to best concentrate the effort and allocate resources in doing
so. The first step will be to obtain a relationship between the cost of
improvement and reliability.
Quantifying the Cost/Penalty Function
One needs to quantify a cost function for each component, Ci, in terms of the
reliability, Ri, of each component, or:
Ci=f(Ri)
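As a minimal sketch of quantifying Ci = f(Ri), the function below is a hypothetical cost-of-improvement curve (not from the slides) that grows steeply as a component approaches an assumed maximum achievable reliability:

```python
# Hypothetical cost function: zero at the component's current reliability and
# increasingly expensive as the target approaches the maximum achievable value.
def improvement_cost(target_r, current_r, max_r=0.9999):
    if target_r <= current_r:
        return 0.0
    return (target_r - current_r) / (max_r - target_r)

# Comparing two assumed components shows where a given improvement is cheapest.
for name, current in (("component A", 0.90), ("component B", 0.98)):
    print(name, round(improvement_cost(0.99, current), 2))
```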
Model Evaluation

Criteria for Model Evaluation


• Predictive validity
• Capability
• Quality of assumptions
• Applicability
• Simplicity
For reliability models, in 1984 a group of experts (Iannino et al., 1984)
devised a set of criteria for model assessment and comparison. The criteria are
listed as follows, in order of importance as determined by the group:
• Predictive validity: The capability of the model to predict failure behavior or
the number of defects for a specified time period based on the current data in
the model.
• Capability: The ability of the model to estimate with satisfactory accuracy
quantities needed by software managers, engineers, and users in planning and
managing software development projects or controlling change in operational
software systems.
• Quality of assumptions: The likelihood that the model assumptions can be
met, and the assumptions' plausibility from the viewpoint of logical consistency
and software engineering experience.
• Applicability: The model's degree of applicability across different software
products (size, structure, functions, etc.).
• Simplicity: A model should be simple in three aspects: (1) simple and
inexpensive to collect data, (2) simple in concept, not requiring an extensive
mathematical background for software development practitioners to comprehend,
and (3) readily implemented by computer programs.
Software Quality Assessment Models
Generalized models for quality assessment require little or no
project-specific data. The three categories are:
1. Overall model: provides a single estimate of overall product quality.
2. Segmented model: provides different quality estimates for different
industrial segments.
3. Dynamic model: provides the quality trend or distribution over time or over
the development process.
Software Quality Assessment Models

Overall model: the most general subtype of generalized quality models.
• Provides a rough estimate of product quality, e.g.,
defect density = total defects / product size
• Lumps all products together: an abstraction of commonly observed facts about
quality that hold generally across all kinds of application domains, e.g.,
⁓ the 80:20 rule, which states that 80% of defects are concentrated in 20% of
product modules/components
⁓ linkages between software defects, risk, and process maturity on the one hand
and quality on the other.
Software Quality Assessment Models

Segmented model: an abstraction of commonly observed facts about quality over
product market segments, e.g., reliability levels:
1. Safety-critical software
2. Commercial software
3. Auxiliary software
Software Quality Assessment Models

Dynamic model: provides information about quality over time or over development
phases, e.g.,
1. Defect distribution over development phases
2. Putnam model: effort and defect profiles over time
3. Reliability growth during product testing
Product-Specific Models
Product-specific models provide more precise quality assessments using
product-specific data. Three categories:
1. Semi-customized model
2. Observation-based model
3. Measurement-driven predictive model
Software Quality Assessment Models

Semi-Customized Model:
• Use general characteristics and historical information about product,
process or environment
• Example: Defect Removal Models (DRMs) provide defect distribution
profile over development phases based on previous releases of the same
product
• Information from these semi-customized models can be directly used to
predict defect distribution for the current release
Software Quality Assessment Models

Observation-based models:
• Relate observations of the software system behavior to information about
related activities for more precise quality assessments
• Examples of such models include various software reliability growth
models (SRGMs)
• Usually use data from current projects
Software Quality Assessment Models
Measurement-driven predictive model:
• Establishes predictive relations between quality and other measurements based
on historical data
• Provides early predictions of quality
• Identifies problems early for timely actions
• Uses statistical analysis techniques / learning algorithms
• Developers use predictive relations from this model to focus their inspection
effort on selected modules in order to effectively utilize limited resources
(see the sketch below).
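As an illustration only, a predictive relation of this kind might be fit as follows; the module metrics are synthetic and the use of scikit-learn's logistic regression is an assumption, not something prescribed by the model:

```python
from sklearn.linear_model import LogisticRegression

# Synthetic historical data: [size in KLOC, cyclomatic complexity] per module,
# and whether the module turned out to be fault-prone in past releases.
X = [[1.2, 4], [3.0, 12], [0.8, 2], [5.6, 25], [2.1, 9], [4.5, 18]]
y = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# Rank new modules by predicted fault-proneness so inspection effort can be
# concentrated on the riskiest ones.
new_modules = [[1.0, 3], [5.0, 22]]
print(model.predict_proba(new_modules)[:, 1])
```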
Software Reliability Engineering Process.
End of Unit-3
