UNIT I
Introduction to Software Testing and Quality Assurance
Introduction to Software Testing: Nature of errors and the need for testing
Definition of Quality and Quality Assurance: Understanding quality in software development,
Distinction between Quality Assurance (QA), Quality Control (QC), Quality Management (QM), and
Software Quality Assurance (SQA)
Software Development Life Cycle (SDLC): Overview of SDLC phases and their relationship to
testing, Role of testing in each phase, Software quality factors and their impact on testing
Verification and Validation (V&V): Definition of V&V and its significance in software
development, Different types of V&V mechanisms, Concepts of Software Reviews, Inspection, and
Walkthrough
======================================================================
Software testing is the art of investigating software to ensure that its quality is in line with the requirements of the client. Software testing is carried out in a systematic manner with the intent of finding defects in a system. It is required for evaluating the system.
Regular testing ensures that the software is developed as per the requirement of the client.
However, if the software is shipped with bugs embedded in it, you never know when they will create a problem, and rectifying a defect then becomes very difficult, because scanning hundreds of thousands of lines of code to fix a bug is not an easy task. Moreover, while fixing one bug you may unknowingly introduce another bug into the system.
Software testing is now a very significant and integral part of software development. Ideally, it is best to introduce software testing in every phase of the software development life cycle. In fact, a majority of software development time is now spent on testing.
Page 1 of 89
TYCS SEM V USCS5032 Software Testing & Quality Assurance
Testing objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet-undiscovered error.
3. A successful test is one that uncovers an as-yet-undiscovered error.
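These objectives can be illustrated with a minimal, hypothetical sketch: the function below and its deliberate bug are invented for this example, and the "good" test case is the one that probes where the error is likely to hide.

```python
# Hypothetical example: a test case designed to uncover a latent error.
# The function is assumed for illustration; it contains a deliberate
# off-by-one bug in its upper bound check.

def is_valid_percentage(value):
    """Intended: return True for values in the inclusive range 0..100."""
    return 0 <= value < 100  # BUG: should be <= 100

def run_tests():
    results = {}
    # An "easy" test that passes and therefore reveals nothing new:
    results["mid_range"] = is_valid_percentage(50) is True
    # A good test case probes the boundary, where errors are likely:
    results["upper_bound"] = is_valid_percentage(100) is True
    return results

if __name__ == "__main__":
    for name, passed in run_tests().items():
        print(f"{name}: {'PASS' if passed else 'FAIL'}")
```

Here the mid-range test "succeeds" in the weak sense of passing, but only the boundary test is a successful test in the sense above: it uncovers the as-yet-undiscovered error.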
Testing Principles
Before applying methods to design effective test cases, a software engineer must understand the
basic principles that guide software testing.
• All tests should be traceable to customer requirements. As we have seen, the objective of software
testing is to uncover errors. It follows that the most severe defects (from the customer’s point of
view) are those that cause the program to fail to meet its requirements.
• Tests should be planned long before testing begins. Test planning can begin as soon as the
requirements model is complete.
Detailed definition of test cases can begin as soon as the design model has been solidified. Therefore,
all tests can be planned and designed before any code has been generated.
• The Pareto principle applies to software testing. Stated simply, the Pareto principle implies that 80
percent of all errors uncovered during testing will likely be traceable to 20 percent of all program
components. The problem, of course, is to isolate these suspect components and to thoroughly test
them.
• Testing should begin “in the small” and progress toward testing “in the large.” The first tests
planned and executed generally focus on individual components. As testing progresses, focus shifts
in an attempt to find errors in integrated clusters of components and ultimately in the entire system.
• Exhaustive testing is not possible. The number of path permutations for even a moderately sized
program is exceptionally large. For this reason, it is impossible to execute every combination of
paths during testing. It is possible, however, to adequately cover program logic and to ensure that all
conditions in the component-level design have been exercised.
• To be most effective, testing should be conducted by an independent third party. By most
effective, we mean testing that has the highest probability of finding errors (the primary objective of
testing).
For reasons that have been introduced earlier in this chapter and are considered in more detail later, the software engineer who created the system is not the best person to conduct all tests for the software.
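The path explosion behind the exhaustive-testing principle above can be sketched with simple arithmetic: assuming a program with n independent two-way decisions, the number of distinct execution paths can reach 2**n.

```python
# A rough, illustrative calculation of why exhaustive path testing is
# infeasible: a program with n independent two-way decisions has up to
# 2**n distinct execution paths.

def path_count(decisions):
    """Upper bound on execution paths for `decisions` independent branches."""
    return 2 ** decisions

if __name__ == "__main__":
    for n in (10, 20, 60):
        print(f"{n} decisions -> up to {path_count(n):,} paths")
```

Even a modest 60 decisions yields more paths than could ever be executed, which is why coverage of program logic and conditions is used instead.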
It would be useful to know which kinds of faults are most common, because then we could look for them during verification. Regrettably, the data is inconclusive and it is only possible to make vague statements about these things.
Specifications are a common source of faults. A software system has an overall specification, derived
from requirements analysis. In addition, each component of the software ideally has an individual
specification that is derived from architectural design. The specification for a component can be:
• ambiguous (unclear)
• incomplete
• faulty.
Any such problems should, of course, be detected and remedied by verification of the specification
prior to development of the component, but, of course, this verification cannot and will not be totally
effective. So there are often problems with a component specification.
This is not all – there are other problems with specifications. During programming, the developer of
a component may misunderstand the component specification.
The next type of error is where a component contains faults so that it does not meet its specification.
This may be due to two kinds of problem:
1. errors in the logic of the code – an error of commission
2. code that fails to meet all aspects of the specification – an error of omission.
This second type of error is where the programmer has failed to appreciate and correctly understand
all the detail of the specification and has therefore omitted some necessary code.
Finally, the kinds of errors that can arise in the coding of a component are:
• data not initialized
• loops repeated an incorrect number of times
• boundary value errors
Boundary values are values of the data at or near critical values. For example, suppose a component has to decide whether a person can vote or not, depending on their age. The voting age is 18. Then the boundary values, near the critical value, are 17, 18 and 19.
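The voting example can be sketched as a boundary value test; can_vote below is a hypothetical implementation used only to show the idea.

```python
# A minimal sketch of boundary value testing for the voting example in the
# text. can_vote is a hypothetical implementation; the voting age is 18.

def can_vote(age):
    return age >= 18

def boundary_tests():
    # Test at and around the critical value 18, as the text suggests.
    cases = {17: False, 18: True, 19: True}
    return {age: can_vote(age) == expected for age, expected in cases.items()}

if __name__ == "__main__":
    print(boundary_tests())
```

If the implementation mistakenly used `age > 18`, the test at exactly 18 would fail, which is precisely the kind of error boundary values are chosen to catch.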
As we have seen, there are many things that can go wrong, and perhaps therefore it is no surprise that verification is such a time-consuming activity.
#1) Functionality Errors: Functionality is the way the software is intended to behave. Software has a functionality error if something that you expect it to do is hard, awkward, confusing, or impossible.
For example, the expected functionality of a Cancel button in a 'Create new project' window is that the window should close and none of the changes should be saved (i.e., no new project must be created). If the Cancel button is not clickable, then it is a functionality error.
#2) Communication Errors: These errors occur in communication from software to end-user.
Anything that the end user needs to know in order to use the software should be made available on
screen.
A few examples of communication errors are: no Help instructions/menu provided, features that are part of the release but are not documented in the help menu, a button named 'Save' that erases a file, etc.
#3) Missing command errors: This occurs when an expected command is missing. For example, suppose a 'Create new project' window opens but there is no option for the user to exit from this window without creating the project. Since a 'Cancel' option/button is not provided to the user, this is a missing command error.
#4) Syntactic Errors: Syntactic errors are misspelled words or grammatically incorrect sentences and are very evident while testing the software GUI. Please note that we are NOT referring to syntax errors in code; the compiler will warn the developer about any syntax errors that occur in the code.
#5) Error handling errors: Any errors that occur while the user is interacting with the software need to be handled in a clear and meaningful manner. If they are not, it is called an error handling error.
For example, suppose an error message gives no indication of what the error actually is: is it a missing mandatory field, a saving error, a page loading error, or a system error? That is an error handling error.
#6) Calculation Errors: These errors occur due to any of the following reasons:
• Bad logic
• Incorrect formulae
• Data type mismatch
• Coding errors
• Function call issues, etc.
In 1999, NASA lost its Mars climate orbiter because one of the subcontractors NASA employed had
used English units instead of the intended metric system, which caused the orbiter’s thrusters to work
incorrectly. Due to this bug, the orbiter crashed almost immediately when it arrived at Mars.
#7) Control flow errors: The control flow of software describes what it will do next and on what condition.
For example, consider a system where user has to fill in a form and the options available to user are:
Save, Save and Close, and Cancel. If a user clicks on ‘Save and Close’ button, the user information
in the form should be saved and the form should close. If clicking on the button does not close the
form, then it is a control flow error.
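The Save-and-Close example can be sketched without any real GUI; the Form class below is a hypothetical stand-in used only to show how the expected control flow would be checked.

```python
# An illustrative sketch (not a real GUI) of checking the control flow
# described above: clicking "Save and Close" must both save the data and
# close the form. The Form class is hypothetical.

class Form:
    def __init__(self):
        self.saved = False
        self.open = True

    def save_and_close(self):
        self.saved = True
        self.open = False  # omitting this line would be a control flow error

def test_save_and_close():
    form = Form()
    form.save_and_close()
    # Both effects must occur, in line with the expected control flow.
    return form.saved and not form.open

if __name__ == "__main__":
    print("control flow OK" if test_save_and_close() else "control flow error")
```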
We have listed some of the vital differences between bug, defect, error, fault, and failure below.
Raised by:
• Bug: The test engineers submit the bug.
• Defect: The testers identify the defect, and it is solved by the developer in the development phase or stage.
• Error: The developers and automation test engineers raise the error.
• Fault: Human mistakes cause the fault.
• Failure: The failure is found by the manual test engineer through the development cycle.

Different types:
• Bugs: logic bugs, algorithmic bugs, resource bugs.
• Defects: based on priority: high, medium, low; based on severity: critical, major, minor, trivial.
• Errors: syntactic error, user interface error, flow control error, error handling error, calculation error, hardware error, testing error.
• Faults: business logic faults, functional and logical faults, faulty GUI, performance faults, security faults, software/hardware faults.
• Failures: none listed.

Ways to prevent:
• Bugs: test-driven development.
• Errors: enhance the software quality.
• Faults: peer review and assessment of the functional requirements.
• Failures: confirm re-testing of the functionality.
Regression Testing
Regression testing is a method of testing that is used to ensure that changes made to the software do
not introduce new bugs or cause existing functionality to break. It is typically done after changes
have been made to the code, such as bug fixes or new features, and is used to verify that the software
still works as intended.
Regression testing can be performed in different ways, such as:
Retesting: This involves testing the entire application or specific functionality that was affected by
the changes.
Re-execution: This involves running a previously executed test suite to ensure that the changes did
not break any existing functionality.
Comparison: This involves comparing the current version of the software with a previous version to
ensure that the changes did not break any existing functionality.
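The re-execution and comparison approaches can be sketched as follows; the discount function and its recorded suite are invented for this illustration.

```python
# A minimal sketch of the re-execution approach: a previously executed test
# suite is stored as (input, expected-output) pairs and re-run after every
# change. The function under test and the suite are hypothetical.

def discount(price, customer_type):
    """Version under test: 10% off for members, otherwise full price."""
    if customer_type == "member":
        return round(price * 0.9, 2)
    return price

# Recorded from an earlier, known-good run:
REGRESSION_SUITE = [
    ((100.0, "member"), 90.0),
    ((100.0, "guest"), 100.0),
    ((19.99, "member"), 17.99),
]

def run_regression():
    """Compare current outputs against recorded outputs; return mismatches."""
    failures = []
    for (price, ctype), expected in REGRESSION_SUITE:
        actual = discount(price, ctype)
        if actual != expected:
            failures.append((price, ctype, expected, actual))
    return failures

if __name__ == "__main__":
    failures = run_regression()
    print("no regressions" if not failures else f"regressions: {failures}")
```

If a later change to discount broke the guest path, run_regression would report exactly which recorded case no longer matches, which is the point of regression testing.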
Smoke Testing
This test is done to make sure that the software under test is ready or stable for further testing.
It is called a smoke test because an initial pass is done to check whether the build "catches fire or smoke" when it is first switched on.
Example:
If the project has two modules, then before going to module 2, make sure that module 1 works properly.
Alpha Testing
This is a type of validation testing. It is a type of acceptance testing which is done before the product
is released to customers. It is typically done by QA people.
Example:
When software testing is performed internally within the organization.
Beta Testing
The beta test is conducted at one or more customer sites by the end users of the software. This version is released to a limited number of users for testing in a real-time environment.
Example:
When software testing is performed by a limited number of end users.
System Testing
System Testing is carried out on the whole system in the context of the system requirement specifications, the functional requirement specifications, or both. The software is tested so that it works correctly on different operating systems. It is covered under the black box testing technique: we focus only on the required input and output without looking at the internal working.
System testing includes security testing, recovery testing, stress testing, and performance testing.
Example:
This includes functional as well as non-functional testing
Stress Testing
In this, we subject the system to unfavourable conditions and check how it performs under those conditions.
Example:
(a) Test cases that require maximum memory or other resources are executed
(b) Test cases that may cause thrashing in a virtual operating system
(c) Test cases that may cause excessive disk requirement
Performance Testing
It is designed to test the run-time performance of software within the context of an integrated system.
It is used to test the speed and effectiveness of the program. It is also called load testing. In it, we check what the performance of the system is under a given load.
Example:
Checking several processor cycles.
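A minimal sketch of measuring run-time performance follows; the workload function is a hypothetical stand-in for a real operation under load, and a real performance test would use representative loads and environments.

```python
# A small sketch of run-time performance measurement, in the spirit of the
# load/performance testing described above. The workload is hypothetical.
import time

def workload(n):
    """Stand-in for the operation whose speed we want to measure."""
    return sum(i * i for i in range(n))

def measure(n, repeats=3):
    """Return the best wall-clock time (seconds) over several runs."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        workload(n)
        best = min(best, time.perf_counter() - start)
    return best

if __name__ == "__main__":
    elapsed = measure(100_000)
    print(f"workload(100_000): {elapsed:.6f} s")
```

Taking the best of several runs reduces noise from other processes, a common practice when timing small operations.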
Acceptance Testing
Acceptance testing is done by the customers to check whether the delivered products perform the
desired tasks or not, as stated in requirements.
We focus on the management issues and the process-specific activities that enable a software
organization to ensure that it does “the right things at the right time in the right way”
Quality
The American Heritage Dictionary defines quality as “a characteristic or attribute of something.” As
an attribute of an item, quality refers to measurable characteristics— things we are able to compare
to known standards such as length, color, electrical properties, and malleability. However, software,
largely an intellectual entity, is more challenging to characterize than physical objects.
Nevertheless, measures of a program’s characteristics do exist. These properties include cyclomatic
complexity, cohesion, number of function points, lines of code, and many others. When we examine
an item based on its measurable characteristics, two kinds of quality may be encountered: quality of
design and quality of conformance.
Quality of design refers to the characteristics that designers specify for an item. The grade of
materials, tolerances, and performance specifications all contribute to the quality of design. As
higher-grade materials are used, tighter tolerances and greater levels of performance are specified,
the design quality of a product increases, if the product is manufactured according to specifications.
Quality of conformance is the degree to which the design specifications are followed during
manufacturing. Again, the greater the degree of conformance, the higher is the level of quality of
conformance.
In software development, quality of design encompasses requirements, specifications, and the design
of the system. Quality of conformance is an issue focused primarily on implementation. If the
implementation follows the design and the resulting system meets its requirements and performance
goals, conformance quality is high.
But are quality of design and quality of conformance the only issues that software engineers must
consider? Robert Glass argues that a more “intuitive” relationship is in order:
User satisfaction = compliant product + good quality + delivery within budget and schedule
At the bottom line, Glass contends that quality is important, but if the user isn’t satisfied, nothing
else really matters. DeMarco reinforces this view when he states:
“A product’s quality is a function of how much it changes the world for the better.”
This view of quality contends that if a software product provides substantial benefit to its end-users,
they may be willing to tolerate occasional reliability or performance problems.
Quality Control
Variation control may be equated to quality control. But how do we achieve quality control? Quality
control involves the series of inspections, reviews, and tests used throughout the software process to
ensure each work product meets the requirements placed upon it. Quality control includes a feedback
loop to the process that created the work product. The combination of measurement and feedback
allows us to tune the process when the work products created fail to meet their specifications. This
approach views quality control as part of the manufacturing process. Quality control activities may
be fully automated, entirely manual, or a combination of automated tools and human interaction. A
key concept of quality control is that all work products have defined, measurable specifications to
which we may compare the output of each process. The feedback loop is essential to minimize the
defects produced.
Quality Assurance
Quality assurance consists of the auditing and reporting functions of management. The goal of
quality assurance is to provide management with the data necessary to be informed about product
quality, thereby gaining insight and confidence that product quality is meeting its goals. Of course, if
the data provided through quality assurance identify problems, it is management’s responsibility to
address the problems and apply the necessary resources to resolve quality issues.
Below are the differences between Quality Assurance and Quality Control:
• QA focuses on providing assurance that the quality requested will be achieved; QC focuses on fulfilling the quality requested.
• QA is involved during the development phase; QC is not involved during the development phase.
• QA does not include the execution of the program; QC always includes the execution of the program.
• The aim of QA is to prevent defects; the aim of QC is to identify and fix defects.
• QA is responsible for the entire software development life cycle; QC is responsible for the software testing life cycle.
• QA's main focus is on the intermediate process; QC's primary focus is on the final product.
• In QA, all team members of the project are involved; in QC, generally only the testing team of the project is involved.
• The Statistical Process Control (SPC) statistical technique is applied in QA; the Statistical Quality Control (SQC) statistical technique is applied in QC.
Cost of Quality
The cost of quality includes all costs incurred in the pursuit of quality or in performing quality-
related activities. Cost of quality studies are conducted to provide a baseline for the current cost of
quality, identify opportunities for reducing the cost of quality, and provide a normalized basis of
comparison. The basis of normalization is almost always dollars. Once we have normalized quality
costs on a dollar basis, we have the necessary data to evaluate where the opportunities lie to improve
our processes. Furthermore, we can evaluate the effect of changes in dollar-based terms.
Quality costs may be divided into costs associated with prevention, appraisal, and failure. Prevention
costs include
• quality planning
• formal technical reviews
• test equipment
• training
Appraisal costs include activities to gain insight into product condition the “first time through” each
process. Examples of appraisal costs include
• in-process and interprocess inspection
• equipment calibration and maintenance
• testing
Failure costs are those that would disappear if no defects appeared before shipping a product to
customers. Failure costs may be subdivided into internal failure costs and external failure costs.
Internal failure costs are incurred when we detect a defect in our product prior to shipment. Internal
failure costs include
• rework
• repair
• failure mode analysis
External failure costs are associated with defects found after the product has been shipped to the
customer. Examples of external failure costs are
• complaint resolution
• product return and replacement
• help line support
• warranty work
As expected, the relative costs to find and repair a defect increase dramatically as we go from
prevention to detection to internal failure to external failure costs
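The four cost categories can be illustrated with a small calculation; the dollar figures below are invented for the example, and a real cost-of-quality study would substitute measured costs.

```python
# An illustrative cost-of-quality calculation. All dollar amounts are
# hypothetical; the categories follow the text above.

COSTS = {
    "prevention": {"quality planning": 5000, "reviews": 8000, "training": 4000},
    "appraisal": {"inspection": 6000, "calibration": 1500, "testing": 12000},
    "internal failure": {"rework": 20000, "repair": 7000},
    "external failure": {"complaint resolution": 15000, "warranty work": 30000},
}

def totals(costs):
    """Sum each category, normalized to dollars as the text describes."""
    return {category: sum(items.values()) for category, items in costs.items()}

if __name__ == "__main__":
    for category, amount in totals(COSTS).items():
        print(f"{category:17s} ${amount:,}")
```

In this invented data set, external failure is the most expensive category, consistent with the observation that costs escalate from prevention through external failure.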
Many definitions of software quality have been proposed in the literature. For our purposes, software
quality is defined as
“Conformance to explicitly stated functional and performance requirements, explicitly
documented development standards, and implicit characteristics that are expected of all
professionally developed software.”
The definition serves to emphasize three important points:
1. Software requirements are the foundation from which quality is measured. Lack of conformance
to requirements is lack of quality.
2. Specified standards define a set of development criteria that guide the manner in which software
is engineered. If the criteria are not followed, lack of quality will almost surely result.
3. A set of implicit requirements often goes unmentioned (e.g., the desire for ease of use and good
maintainability). If software conforms to its explicit requirements but fails to meet implicit
requirements, software quality is suspect.
Software quality assurance (SQA) is an umbrella activity that is applied throughout the software
process. SQA encompasses
1. a quality management approach,
2. effective software engineering technology (methods and tools),
3. formal technical reviews that are applied throughout the software process,
4. a multitiered testing strategy,
5. control of software documentation and the changes made to it,
6. a procedure to ensure compliance with software development standards
7. measurement and reporting mechanisms
Software quality assurance is a "planned and systematic pattern of actions" [SCH98] that are
required to ensure high quality in software. The implication for software is that many different
constituencies have software quality assurance responsibility—software engineers, project managers,
customers, salespeople, and the individuals who serve within an SQA group.
The Software Engineering Institute recommends a set of SQA activities that address quality
assurance planning, oversight, record keeping, analysis, and reporting. These activities are performed
(or facilitated) by an independent SQA group that:
Prepares an SQA plan for a project. The plan is developed during project planning and is reviewed by all interested parties. Quality assurance activities performed by the software engineering team and the SQA group are governed by the plan. The plan identifies
• evaluations to be performed
• audits and reviews to be performed
• standards that are applicable to the project
• procedures for error reporting and tracking
• documents to be produced by the SQA group
• amount of feedback provided to the software project team
Participates in the development of the project’s software process description. The software team
selects a process for the work to be performed. The SQA group reviews the process description for
compliance with organizational policy, internal software standards, externally imposed standards
(e.g., ISO-9001), and other parts of the software project plan.
Reviews software engineering activities to verify compliance with the defined software process.
The SQA group identifies, documents, and tracks deviations from the process and verifies that
corrections have been made.
Audits designated software work products to verify compliance with those defined as part of
the software process. The SQA group reviews selected work products; identifies, documents, and
tracks deviations; verifies that corrections have been made; and periodically reports the results of its
work to the project manager.
Ensures that deviations in software work and work products are documented and handled
according to a documented procedure. Deviations may be encountered in the project plan, process
description, applicable standards, or technical work products.
Records any noncompliance and reports to senior management. Noncompliance items are tracked until they are resolved.
Software Development Life Cycle (SDLC) is a process used by the software industry to design, develop, and test high-quality software. The SDLC aims to produce high-quality software that meets or exceeds customer expectations and reaches completion within time and cost estimates.
What is SDLC?
SDLC is a process followed for a software project, within a software organization. It consists of a
detailed plan describing how to develop, maintain, replace and alter or enhance specific software.
The life cycle defines a methodology for improving the quality of software and the overall
development process.
The following figure is a graphical representation of the various stages of a typical SDLC.
McCall's model classifies all software requirements into 11 software quality factors. The 11 factors are
grouped into three categories – product operation, product revision, and product transition factors.
• Product operation factors − Correctness, Reliability, Efficiency, Integrity, Usability.
• Product revision factors − Maintainability, Flexibility, Testability.
• Product transition factors − Portability, Reusability, Interoperability.
Reliability
Reliability requirements deal with service failure. They determine the maximum allowed failure rate
of the software system, and can refer to the entire system or to one or more of its separate functions.
Efficiency
This factor deals with the hardware resources needed to perform the different functions of the software system. It includes processing capabilities (given in MHz), storage capacity (given in MB or GB), and data communication capability (given in Mbps or Gbps). It also deals with the time between recharging of the system's portable units, such as information system units located in portable computers or meteorological units placed outdoors.
Integrity
This factor deals with software system security, that is, preventing access by unauthorized persons and distinguishing between groups of people who are to be given read permission as well as write permission.
Usability- Usability requirements deal with the staff resources needed to train a new employee and to
operate the software system.
Product Revision Quality Factors
According to McCall’s model, three software quality factors are included in the product revision
category. These factors are as follows −
Maintainability -This factor considers the efforts that will be needed by users and maintenance
personnel to identify the reasons for software failures, to correct the failures, and to verify the
success of the corrections.
Flexibility- This factor deals with the capabilities and efforts required to support adaptive
maintenance activities of the software. These include adapting the current software to additional
circumstances and customers without changing the software. This factor’s requirements also support
perfective maintenance activities, such as changes and additions to the software in order to improve
its service and to adapt it to changes in the firm’s technical or commercial environment.
Testability -Testability requirements deal with the testing of the software system as well as with its
operation. It includes predefined intermediate results, log files, and also the automatic diagnostics
performed by the software system prior to starting the system, to find out whether all components of
the system are in working order and to obtain a report about the detected faults. Another type of
these requirements deals with automatic diagnostic checks applied by the maintenance technicians to
detect the causes of software failures.
Reusability - This factor deals with the use of software modules originally designed for one project in a new software project. Reuse is expected to save development resources, shorten the development period, and provide higher quality modules.
Interoperability
Interoperability requirements focus on creating interfaces with other software systems or with other
equipment firmware. For example, the firmware of the production machinery and testing equipment
interfaces with the production control software.
Verification and Validation (V&V): Definition of V&V and its significance in software development,
Different types of V&V mechanisms, Concepts of Software Reviews, Inspection, and Walkthrough.
Verification is the process of checking that the software achieves its goal without any bugs. It is the process of ensuring whether the product that is developed is right or not, and it verifies whether the developed product fulfills the requirements that we have. Verification is static testing.
Validation is the process of checking whether the software product is up to the mark or, in other words, whether the product meets the high-level requirements. It is the process of checking the validity of the product, i.e., it checks that what we are developing is the right product. It is validation of the actual product against the expected product. Validation is dynamic testing.
Verification:
• The verifying process includes checking documents, design, code, and program.
• It does not involve executing the code.
• It uses methods like reviews, walkthroughs, inspections, and desk-checking.
• It checks whether the software conforms to its specification.

Validation:
• It is a dynamic mechanism of testing and validating the actual product.
• It always involves executing the code.
• It uses methods like black box testing, white box testing, and non-functional testing.
• It checks whether the software meets the requirements and expectations of the customer.
What Is Verification?
Verification is achieved through static techniques such as reviews, walkthroughs, inspections, and desk-checking.
What Is Validation?
Validation demonstrates that a software or systems product is fit for purpose. That is, it satisfies all the customer's stated and implied needs (the Wright brothers needed to fly).
Validation can be performed progressively throughout the development life cycle. For example,
written user requirements can be validated by creating a model or prototype and asking the user to
confirm (or validate) that the demonstrated functionality meets their needs. System testing is a major
validation event where a system is validated against the user's statement of requirement. It aims to
show that all faults which could degrade system performance have been removed before the system
is operated. Validation is not complete however until the end user formally agrees that the
operational system is fit for purpose.
V-Model
The V-Model is also referred to as the Verification and Validation Model. In this model, each phase of the SDLC must complete before the next phase starts. It follows a sequential design process, the same as the waterfall model. Testing is planned in parallel with the corresponding stage of development.
Verification: It involves static analysis methods (reviews) done without executing code. It is the
process of evaluating the product development process to determine whether the specified
requirements are met.
The V-Model therefore contains the verification phases on one side and the validation phases on the
other, joined by the coding phase at the bottom of the V. This V shape is what gives the model its
name.
A software review is a systematic inspection of software by one or more individuals who work
together to find and resolve errors and defects during the early stages of the Software Development
Life Cycle (SDLC). Software review is an essential part of the SDLC that helps software engineers
validate the quality, functionality, and other vital features and components of the software. It is a
complete process that includes examining the software product and making sure that it meets the
requirements stated by the client.
Usually performed manually, software review is used to verify various documents like requirements,
system designs, codes, test plans and test cases.
Objectives of Software Review:
The objectives of software review are:
1. To improve the productivity of the development team.
2. To make the testing process time and cost effective.
3. To make the final software with fewer defects.
4. To eliminate the inadequacies.
In addition, the formal technical review (FTR) serves as a training ground, enabling junior engineers
to observe different approaches to software analysis, design, and implementation. The FTR also
promotes backup and continuity, because a number of people become familiar with parts of the
software that they might not otherwise have seen. The FTR is actually a class of reviews that includes
walkthroughs, inspections, round-robin reviews, and other small-group technical assessments of
software. Each FTR is conducted as a meeting and will be successful only if it is properly planned,
controlled, and attended.
Structured walkthroughs
This is simply the term for an organized meeting at which a program (or some other product) is
examined by a group of colleagues. The major aim of the meeting is to try to find bugs which might
otherwise go undetected for some time. The word “structured” simply means “well organized”. The
term “walkthrough” means the activity of the programmer explaining step by step the working of
his/her program. The reasoning behind structured walk-throughs is just this: that by letting other
people look at your program, errors will be found much more quickly.
To walkthrough a program you need only:
• the specification
• the text of the program on paper.
In carrying out a walkthrough, a good approach is to study the program one method at a time. Some
of the checks are fairly straightforward:
• variables initialized
• loops correctly initialized and terminated
• method calls have the correct parameters.
Another check depends on the logic of the method. Pretend to execute the method as if you were a
computer, without following any calls into other methods. Check that:
• the logic of the method achieves its desired purpose.
During inspection you can also check that:
• variable and method names are meaningful
• indentation is clear and consistent.
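As a small sketch of the kind of reasoning these checks involve, consider a short method (the function below is a hypothetical example, not code from the notes):

```python
def average(values):
    """Return the mean of a non-empty list of numbers."""
    total = 0                   # check: variable initialized before the loop
    for v in values:            # check: loop starts and terminates correctly
        total += v
    return total / len(values)  # check: the logic achieves its desired purpose

# "Pretend to execute the method as if you were a computer":
# for [2, 4, 6], total becomes 0, 2, 6, 12, and 12 / 3 = 4.0
print(average([2, 4, 6]))  # 4.0
```

Stepping through code on paper this way, against the specification, is how a reviewer catches an uninitialized variable or an incorrectly terminated loop before the program is ever run.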
The prime goal of a walkthrough is to find bugs, but checking for a weakness in style may point to a
bug. The evidence from controlled experiments suggests that walkthroughs are a very effective way
of finding errors. In fact walkthroughs are at least as good a way of identifying bugs as actually
running the program (doing testing).
Although structured walkthroughs were initially used to find bugs in program code, the technique is
valuable for reviewing the products at every stage of development – the requirements specification, a
software specification, architectural design, component design, the code, the test data, the results of
testing, the documentation.
Some practical guidelines for conducting a walkthrough meeting:
• gauge the size and membership of the group carefully, so that there are plenty of ideas but
everyone is fully involved
• expect participants to study the material prior to the meeting
• concentrate attention on the product rather than the person, to avoid criticizing the author
• limit the length of the meeting, so that everyone knows that it is business-like
• control the meeting with the aid of agreed rules and an assertive chairperson
• restrict the activity to identifying problems, not solving them
• briefly document the faults (not the cures) for later reference.
Inspections
These are similar to structured walkthroughs – a group of people meet to review a piece of work. But
they are different from walkthroughs in several respects. Checklists are used to ensure that no
relevant considerations are ignored. Errors that are discovered are classified according to type and
carefully recorded on forms. Statistics on errors are computed, for example in terms of errors per
1,000 lines of code. Thus inspections are not just well organized; they are completely formal. In
addition, management is informed of the results of inspections, though usually they do not attend the
meeting. Thus inspections are potentially more threatening to the programmer than walkthroughs.
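The error statistics recorded by an inspection might be computed as in this sketch (the counts are invented for illustration):

```python
# Defect density of the kind reported to management after an inspection.
errors_found = 12      # errors classified and recorded on forms during the meeting
lines_of_code = 4800   # size of the inspected component

errors_per_kloc = errors_found / (lines_of_code / 1000)
print(errors_per_kloc)  # 2.5 errors per 1,000 lines of code
```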
There are other, minor, differences between inspections and walkthroughs. Normally there are only
four members in an inspection team:
• the moderator, who co-ordinates activities
• the person who designed the program component being inspected
• the programmer
• the tester – a person who acts as someone who will be responsible for testing the
component.
The essence of inspections is that the study of products is carried out under close management
supervision. Thus inspections are overtly a mechanism for increased control over programmers’
work, similar to the way that quality control is carried out on a factory floor. Some programmers
might feel threatened in this situation and become defensive, perhaps trying to hide their mistakes.
Perhaps this makes the discovery of errors more painful, and programming a less enjoyable activity.
*******************************************************************************
UNIT II
Software Testing Techniques and Strategies
Testing Fundamentals: Basics of software testing process, Test case design principles and
techniques, Test execution, reporting, and documentation
White Box Testing and Black Box Testing: Functional/Specification based Testing as Black Box,
Black box: Equivalence Partitioning, Boundary Value Analysis, Decision Table Testing, State
Transition Testing. Structural Testing as White Box, White Box: Statement testing, Branch testing.
Experience-based: Error guessing, exploratory testing, Checklist-based testing.
Software Testing Strategies: Strategic approach to software testing Unit Testing: purpose,
techniques, and best practices, Integration Testing: approaches and challenges, Validation Testing:
ensuring adherence to user requirements, System Testing: comprehensive end-to-end testing
Software Metrics: Concept of software metrics and their importance, Developing and utilizing
different types of metrics, Complexity metrics and their significance in testing
========================================================================
Testing presents an interesting anomaly for the software engineer. During earlier software
engineering activities, the engineer attempts to build software from an abstract concept to a tangible
product. Now comes testing. The engineer creates a series of test cases that are intended to
"demolish" the software that has been built. In fact, testing is the one step in the software process that
could be viewed (psychologically, at least) as destructive rather than constructive. Software
engineers are by their nature constructive people. Testing requires that the developer discard
preconceived notions of the "correctness" of software just developed and overcome a conflict of
interest that occurs when errors are uncovered.
Testing plays an important role in achieving and assessing the quality of a software product. On the
one hand, we improve the quality of the products as we repeat a test–find defects–fix cycle during
development. On the other hand, we assess how good our system is when we perform system-level
tests before releasing a product. Thus, software testing is a verification process for software quality
assessment and improvement. Generally speaking, the activities for software quality assessment can
be divided into two broad categories, namely, static analysis and dynamic analysis.
• Static Analysis: As the term “static” suggests, it is based on the examination of a number of
documents, namely requirements documents, software models, design documents, and source code.
Traditional static analysis includes code review, inspection, walk-through, algorithm analysis, and
proof of correctness. It does not involve actual execution of the code under development. Instead, it
examines code and reasons over all possible behaviors that might arise during run time. Compiler
optimizations are a standard example of static analysis.
• Dynamic Analysis: Dynamic analysis of a software system involves actual program execution in
order to expose possible program failures. The behavioral and performance properties of the program
are also observed. Programs are executed with both typical and carefully chosen input values. Often,
the input set of a program can be impractically large. However, for practical considerations, a finite
subset of the input set can be selected. Therefore, in testing, we observe some representative program
behaviors and reach a conclusion about the quality of the system. Careful selection of a finite test set
is crucial to reaching a reliable conclusion.
OBJECTIVES OF TESTING
By performing static and dynamic analyses, practitioners want to identify as many faults as possible
so that those faults are fixed at an early stage of the software development. Static analysis and
dynamic analysis are complementary in nature, and for better effectiveness, both must be performed
repeatedly and alternated. Practitioners and researchers need to remove the boundaries between static
and dynamic analysis and create a hybrid analysis that combines the strengths of both approaches.
The stakeholders in a test process are the programmers, the test engineers, the project managers, and
the customers. A stakeholder is a person or an organization who influences a system’s behaviors or
who is impacted by that system. Different stakeholders view a test process from different
perspectives as explained below:
• It does work: While implementing a program unit, the programmer may want to test whether
or not the unit works in normal circumstances. The programmer gets much confidence if the
unit works to his or her satisfaction. The same idea applies to an entire system as well—once
a system has been integrated, the developers may want to test whether or not the system
performs the basic functions. Here, for psychological reasons, the objective of testing is to
show that the system works, rather than to show that it does not work.
• It does not work: Once the programmer (or the development team) is satisfied that a unit (or
the system) works to a certain degree, more tests are conducted with the objective of finding
faults in the unit (or the system). Here, the idea is to try to make the unit (or the system) fail
• Reduce the risk of failure: Most complex software systems contain faults, which
cause the system to fail from time to time. This concept of “failing from time to time” gives
rise to the notion of failure rate. As faults are discovered and fixed while performing more
and more tests, the failure rate of a system generally decreases. Thus, a higher level objective
of performing tests is to bring down the risk of failing to an acceptable level.
• Reduce the cost of testing: The different kinds of costs associated with a test process include
the cost of designing, maintaining, and executing test cases, the cost of analyzing the result of
executing each test case, the cost of documenting the test cases, and the cost of actually
executing the system and documenting it.
Therefore, the fewer the test cases designed, the lower the associated cost of testing. However,
producing a small number of arbitrary test cases is not a good way of saving cost. The highest-level
objective of performing tests is to produce low-risk software with a small number of test cases. This
idea leads us to the concept of the effectiveness of test cases. Test engineers must therefore
judiciously select fewer, more effective test cases.
TEST CASE DESIGN
The design of tests for software and other engineered products can be as challenging as the initial
design of the product itself. Yet, for reasons that we have already discussed, software engineers often
treat testing as an afterthought, developing test cases that may "feel right" but have little assurance of
being complete. Recalling the objectives of testing, we must design tests that have the highest
likelihood of finding the most errors with a minimum amount of time and effort.
A rich variety of test case design methods have evolved for software. These methods provide the
developer with a systematic approach to testing. More important, methods provide a mechanism that
can help to ensure the completeness of tests and provide the highest likelihood for uncovering errors
in software.
Any engineered product (and most other things) can be tested in one of two ways:
(1) Knowing the specified function that a product has been designed to perform, tests can be
conducted that demonstrate each function is fully operational while at the same time searching for
errors in each function;
(2) Knowing the internal workings of a product, tests can be conducted to ensure that "all gears
mesh," that is, internal operations are performed according to specifications and all internal
components have been adequately exercised. The first test approach is called black-box testing and
the second, white-box testing.
When computer software is considered, black-box testing alludes to tests that are conducted at the
software interface. Although they are designed to uncover errors, black-box tests are used to
demonstrate that software functions are operational, that input is properly accepted and output is
correctly produced, and that the integrity of external information (e.g., a database) is maintained. A
black-box test examines some fundamental aspect of a system with little regard for the internal
logical structure of the software.
White-box testing of software is predicated on close examination of procedural detail. Logical paths
through the software are tested by providing test cases that exercise specific sets of conditions and/or
loops. The "status of the program" may be examined at various points to determine if the expected or
asserted status corresponds to the actual status.
At first glance it would seem that very thorough white-box testing would lead to "100 percent correct
programs." All we need do is define all logical paths, develop test cases to exercise them, and
evaluate results, that is, generate test cases to exercise program logic exhaustively. Unfortunately,
exhaustive testing presents certain logistical problems. For even small programs, the number of
possible logical paths can be very large. For example, consider a 100-line program in the language
C. After some basic data declarations, the program contains two nested loops that execute from 1 to
20 times each, depending on conditions specified at input. Inside the interior loop, four if-then-else
constructs are required. There are approximately 10^14 possible paths that may be executed in this
program!
To put this number in perspective, we assume that a magic test processor ("magic" because no such
processor exists) has been developed for exhaustive testing. The processor can develop a test case,
execute it, and evaluate the results in one millisecond. Working 24 hours a day, 365 days a year, the
processor would work for 3170 years to test the program. This would, undeniably, cause havoc in
most development schedules. Exhaustive testing is impossible for large software systems.
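The arithmetic behind that estimate can be checked directly (assuming, as above, one test case per millisecond):

```python
# Time to exhaustively test ~10^14 paths at one test case per millisecond.
paths = 10 ** 14
seconds_total = paths * 0.001          # one millisecond per test case
seconds_per_year = 365 * 24 * 60 * 60  # working 24 hours a day, 365 days a year

years = seconds_total / seconds_per_year
print(round(years))  # about 3170 years, as quoted above
```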
White-box testing should not, however, be dismissed as impractical. A limited number of important
logical paths can be selected and exercised. Important data structures can be probed for validity. The
attributes of both black- and white-box testing can be combined to provide an approach that validates
the software interface and selectively ensures that the internal workings of the software are correct.
TEST CASE
A test case is defined as a group of conditions under which a tester determines whether a software
application is working as per the customer's requirements or not. Test case design includes
preconditions, case name, input conditions, and expected result. A test case is a first-level action
derived from test scenarios.
It is a detailed document that contains all possible inputs (positive as well as negative) and the
navigation steps used during the test execution process. Writing test cases is a one-time effort that
can be reused in the future at the time of regression testing.
A test case gives detailed information about the testing strategy, testing process, preconditions, and
expected output. Test cases are executed during the testing process to check whether the software
application is performing the task for which it was developed.
A test case helps the tester in defect reporting by linking the defect with the test case ID. Detailed
test case documentation works as a foolproof guard for the testing team, because if the developer
missed something, it can be caught during execution of these test cases.
To write the test case, we must have the requirements to derive the inputs, and the test scenarios must
be written so that we do not miss out on any features for testing. Then we should have the test case
template to maintain the uniformity, or every test engineer follows the same approach to prepare the
test document.
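One way to enforce such a uniform template is to represent each test case as a structured record. The sketch below uses field names drawn from the template fields described in these notes; the exact field set is an assumption and varies by company:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """A minimal test case record; field names follow the template in these notes."""
    case_id: str             # used to link a reported defect to the test case
    description: str         # brief description of the feature under test
    precondition: str        # setup that must hold before execution
    steps: list              # numbered navigation steps
    test_data: dict          # input values required by the precondition
    expected_result: str
    severity: str = "major"  # critical / major / minor
    actual_result: str = ""  # filled in after execution
    status: str = "Not Run"  # Pass / Fail after execution

tc = TestCase(
    case_id="TC_LOGIN_01",
    description="Verify login with a valid username and password",
    precondition="User A is registered",
    steps=["Open login page", "Enter username", "Enter password", "Click Login"],
    test_data={"username": "userA", "password": "secret123"},
    expected_result="User is navigated to the homepage",
    severity="critical",
)
print(tc.case_id, tc.status)  # TC_LOGIN_01 Not Run
```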
Generally, we write the test cases while the developer is busy writing the code.
o When the customer gives the business needs then, the developer starts developing and says
that they need 3.5 months to build this product.
o In the meantime, the testing team will start writing the test cases.
o Once they are done, they are sent to the Test Lead for the review process.
o And when the developers finish developing the product, it is handed over to the testing team.
o The test engineers then test the product against the test case document rather than the raw
requirement, so that testing is consistent and does not depend on the mood or memory of an
individual test engineer.
To ensure consistency in test case execution: we refer to the test case and test the application the
same way every time.
To ensure better test coverage: we should cover all possible scenarios and document them, so that
we need not recall all the scenarios again and again.
It depends on the process rather than on a person: Suppose a test engineer tested an application
during the first and second releases and left the company at the time of the third release. That
engineer understood a module and tested the application thoroughly by deriving many values. If the
person is not there for the third release, it becomes difficult for the new person. Hence, all the
derived values are documented so that they can be used in the future.
To avoid giving training for every new test engineer on the product: When the test engineer
leaves, he/she leaves with a lot of knowledge and scenarios. Those scenarios should be documented
so that the new test engineer can test with the given scenarios and also can write the new scenarios.
The primary purpose of writing test cases is to make the testing of the application efficient.
As we know, the actual result is written after the test case execution, and most of the time it will be
the same as the expected result. But if a test step fails, it will be different. So, the actual result field
can be skipped, and in the Comments section, we can write about the bugs.
Also, the Input field can be removed, and this information can be added to the Description field.
The template we discussed above is not a standard one, because it can differ for each company and
for each application, depending on the test engineer and the test lead. But for testing one application,
all the test engineers should follow a common, agreed template.
The test case should be written in simple language so that a new test engineer can also understand
and execute it.
Step number: It is also essential, because if step number 20 fails, we can document it in the bug
report, prioritize the work accordingly, and decide whether it is a critical bug.
Test case type: It can be a functional, integration, or system test case, and either positive, negative,
or combined positive-and-negative.
Pre-condition: These are the necessary conditions that must be satisfied by every test engineer
before starting the test execution process, or the data configuration/data setup that needs to be
created for the testing.
For example: in an application, we write test cases to add users, edit users, and delete users. The
precondition is that user A must be added before it can be edited or removed.
Test data: These are the values or inputs we need to create as per the precondition.
The test lead may provide the test data, such as a username or password, to test the application, or
the test engineers may generate the username and password themselves.
Severity
The severity can be critical, major, or minor; the severity in a test case indicates the importance of
that particular test case. The test execution process always depends on the severity of the test cases.
We can choose the severity based on the module. A module includes many features; even if only one
element is critical, we mark that test case as critical. It depends on the functions for which we are
writing the test case.
For example, we will take the Gmail application and let us see the severity based on the modules:
Modules Severity
Login Critical
Help Minor
Compose mail Critical
Setting Minor
Inbox Critical
Sent items Major
Logout Critical
Similarly, for a banking application:
Modules Severity
Amount transfer Critical
Feedback Minor
Brief description
The test engineer writes a test case for a particular feature. If he/she later comes back and reads the
test case, he/she may not know which feature it was written for. So, the brief description indicates
which feature the test case covers.
Here, we are writing a test case for the ICICI application’s Login module:
The functional test case for amount transfer module is in the below Excel file:
Integration test case: In this, we should not write anything that is already covered in the functional
test cases, and anything written in the integration test cases should not be written again in the
system test cases.
White Box Testing and Black Box Testing: Functional/Specification based Testing as Black Box,
Black box: Equivalence Partitioning, Boundary Value Analysis, Decision Table Testing, State
Transition Testing. Structural Testing as White Box, White Box: Statement testing, Branch testing.
Experience-based: Error guessing, exploratory testing, Checklist-based testing.
Specification-based testing is a black-box testing technique that uses the specifications of a system
to derive test cases. The specifications can be functional or non-functional and can be at different
levels of abstraction, such as user requirements, system requirements, or design specifications.
1. State Transition: State transition testing is a testing technique used to uncover errors in the
transition of the system from one state to another. It is used to test the behavior of a system.
2. Decision Table: This technique is based on the idea that a system can have a number of
different inputs, and that each input can have a number of different values. The different
combinations of inputs and values are known as decision points.
3. Equivalence Partitioning: This technique is based on the idea that a system can have a number
of different inputs, and that each input can be divided into a number of different equivalence
classes. An equivalence class is a set of inputs that are expected to produce the same output from a
system.
4. Boundary Value Analysis: Test cases are designed at and around the boundaries of input ranges.
When carrying out the test, the tester inputs the different boundary values into the system and
observes the system’s response.
5. All-Pairs Testing
6. Classification Tree Method
7. Use Case Testing
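To make the boundary value analysis idea (item 4 above) concrete, consider a hypothetical field that accepts integers from 1 to 100; the range and validator below are assumptions for illustration. Tests are chosen at, just inside, and just outside each boundary:

```python
def accepts(value, low=1, high=100):
    """Hypothetical validator: accepts integers in the range [low, high]."""
    return low <= value <= high

# Boundary values: minimum, minimum +/- 1, maximum, maximum +/- 1
for v in [0, 1, 2, 99, 100, 101]:
    print(v, accepts(v))
# 0 and 101 lie just outside the range and should be rejected;
# 1, 2, 99, and 100 lie on or just inside the boundaries and should be accepted.
```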
Equivalence Partitioning
Equivalence partitioning is a black-box testing method that divides the input domain of a program
into classes of data from which test cases can be derived. An ideal test case single-handedly uncovers
a class of errors (e.g., incorrect processing of all character data) that might otherwise require many
cases to be executed before the general error is observed. Equivalence partitioning strives to define a
test case that uncovers classes of errors, thereby reducing the total number of test cases that must be
developed.
Test case design for equivalence partitioning is based on an evaluation of equivalence classes for an
input condition. Using concepts introduced in the preceding section, if a set of objects can be linked
by relationships that are symmetric, transitive, and reflexive, an equivalence class is present. An
equivalence class represents a set of valid or invalid states for input conditions. Typically, an input
condition is a specific numeric value, a range of values, a set of related values, or a Boolean
condition. Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are
defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are
defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
As an example, consider data maintained as part of an automated banking application. The user can
access the bank using a personal computer, provide a six-digit password, and follow with a series of
typed commands that trigger various banking functions. During the log-on sequence, the software
supplied for the banking application accepts data in the form
area code—blank or three-digit number
prefix—three-digit number not beginning with 0 or 1
suffix—four-digit number
password—six digit alphanumeric string
commands—check, deposit, bill pay, and the like
The input conditions associated with each data element for the banking application can be specified
as
area code: Input condition, Boolean—the area code may or may not be present.
Input condition, range—values defined between 200 and 999, with specific exceptions.
prefix: Input condition, range—specified value >200.
suffix: Input condition, value—four-digit length.
password: Input condition, Boolean—a password may or may not be present.
Input condition, value—six-character string.
command: Input condition, set—containing the commands noted previously.
Applying the guidelines for the derivation of equivalence classes, test cases for each input domain
data item can be developed and executed. Test cases are selected so that the largest number of
attributes of an equivalence class is exercised at once.
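For the password field above, for instance, the guidelines yield one valid class (exactly six alphanumeric characters) and invalid classes for wrong length and wrong character content. A sketch, with a hypothetical validator and one representative input per class:

```python
def valid_password(pw):
    """Hypothetical check for the six-character alphanumeric password field."""
    return len(pw) == 6 and pw.isalnum()

# One representative per equivalence class, instead of many arbitrary inputs.
representatives = {
    "valid: six alphanumeric chars": ("ab12cd",  True),
    "invalid: too short":            ("ab1",     False),
    "invalid: too long":             ("ab12cd9", False),
    "invalid: non-alphanumeric":     ("ab!2cd",  False),
}
for label, (pw, expected) in representatives.items():
    assert valid_password(pw) == expected, label
print("each equivalence class behaves as expected")
```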
To develop a decision table:
1. List all actions that can be associated with a specific procedure (or module).
2. List all conditions (or decisions made) during execution of the procedure.
3. Associate specific sets of conditions with specific actions, eliminating impossible combinations of
conditions; alternatively, develop every possible permutation of conditions.
4. Define rules by indicating what action(s) occurs for a set of conditions.
The condition is simple: if the user provides the correct username and password, the user will be
redirected to the homepage. If any of the inputs is wrong, an error message will be displayed.
• Case 1 – Username and password both were wrong. The user is shown an error
message.
• Case 2 – Username was correct, but the password was wrong. The user is shown an error
message.
• Case 3 – Username was wrong, but the password was correct. The user is shown an error
message.
• Case 4 – Username and password both were correct, and the user is navigated to the homepage.
• Enter the correct username and correct password and click on login, and the expected result
will be the user should be navigated to the homepage
• Enter wrong username and wrong password and click on login, and the expected result will
be the user should get an error message
• Enter correct username and wrong password and click on login, and the expected result will
be the user should get an error message
• Enter wrong username and correct password and click on login, and the expected result will
be the user should get an error message
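The four cases above form a decision table with two conditions (username correct? password correct?) and two actions. A minimal sketch, with illustrative credentials and messages:

```python
# Decision table: each (username_ok, password_ok) combination maps to an action.
DECISION_TABLE = {
    (True,  True):  "navigate to homepage",
    (True,  False): "show error message",
    (False, True):  "show error message",
    (False, False): "show error message",
}

def login(username, password, valid_user="admin", valid_pass="secret"):
    """Hypothetical login: evaluates the conditions and looks up the action."""
    rule = (username == valid_user, password == valid_pass)
    return DECISION_TABLE[rule]

# Each column of the decision table becomes one test case:
assert login("admin", "secret") == "navigate to homepage"  # Case 4
assert login("admin", "wrong")  == "show error message"    # Case 2
assert login("wrong", "secret") == "show error message"    # Case 3
assert login("wrong", "wrong")  == "show error message"    # Case 1
print("all four decision-table rules verified")
```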
⚫A black-box test design technique in which test cases are designed to execute the combinations of
inputs and their corresponding behaviour shown in a decision table.
⚫This technique is used to ensure that the system behaves correctly as it moves from one state to
another.
⚫It is referred to as a ’cause-effect’ table
⚫Each column can then be considered as a test case of a business rule
⚫It is good for testing business rules or combinations
⚫Contains conditions (Input) and actions (Outputs)
For example, consider a login screen of a web application. It has two states: “not logged in” and
“logged in”. There are two transitions between these states: “login” and “logout”. In the “not logged
in” state, the user can only see the login form, and in the “logged in” state, the user can see the main
application interface. To test this, we would create a set of test cases that exercise each transition,
such as logging in with valid credentials and verifying that the user is taken to the main interface, or
logging out and verifying that the user is returned to the login form.
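The two-state example above can be written down directly as a transition table and exercised one transition at a time (the state and event names come from the example; the code itself is an illustrative sketch):

```python
# Transition table for the login screen: (current state, event) -> next state.
TRANSITIONS = {
    ("not logged in", "login"):  "logged in",
    ("logged in",     "logout"): "not logged in",
}

def next_state(state, event):
    """Return the next state; an invalid event leaves the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Exercise each valid transition...
state = "not logged in"
state = next_state(state, "login")
assert state == "logged in"         # user sees the main application interface
state = next_state(state, "logout")
assert state == "not logged in"     # user is returned to the login form

# ...and a negative case: "logout" is not a valid event before logging in.
assert next_state("not logged in", "logout") == "not logged in"
print("state transitions behave as modelled")
```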
Deals with:
• sequences of events
• handling of events depending on the events and conditions that occurred in the past.
Important terms
State diagram:
⚫ It represents the states that a component can assume and shows the events that cause and/or
result from a change from one state to another.
⚫ State diagrams are a powerful tool in state transition testing, as they provide a clear and
intuitive visual representation of the different states and transitions within a system, allowing for
more effective and efficient testing.
⚫ A state diagram helps to identify the different possible states of a system and the conditions
under which the system moves from one state to another.
State Transition Testing is a black box testing technique in which changes made in input
conditions cause state changes or output changes in the Application under Test (AUT). State
transition testing helps to analyze the behaviour of an application for different input conditions.
Testers can provide positive and negative input test values and record the system's behavior.
The state transition model is the model on which the system and the tests are based. Any system
where you get a different output for the same input, depending on what has happened before, is a
finite state system.
State Transition Testing Technique is helpful where you need to test different system transitions.
When to Use State Transition?
⚫This can be used when a tester is testing the application for a finite set of input values.
⚫When the tester is trying to test a sequence of events that occur in the application under test, i.e.,
when the tester wants to test the application's behavior for a sequence of input values.
⚫When the system under test has a dependency on the events/values in the past.
A state transition model has four basic parts:
1) The states that the software may occupy (e.g., open/closed, or funded/insufficient funds)
2) The transitions from one state to another (not all transitions are allowed)
3) The events that cause a transition (such as closing a file or withdrawing money)
4) Actions that result from a transition (an error message or being given the cash)
KEYPOINTS:
State Transition testing is defined as the testing technique in which changes in input conditions
cause state changes in the Application under Test.
⚫In Software Engineering, the State Transition Testing Technique is helpful where you need to test
different system transitions.
⚫There are two main ways to represent or design state transitions: the state transition diagram and
the state transition table.
⚫In a state transition diagram the states are shown in boxes, and the transitions are represented by
arrows.
⚫In a state transition table all the states are listed on the left side, and the events are described along
the top.
⚫The main advantage of this testing technique is that it provides a pictorial or tabular
representation of system behavior, which helps the tester cover and understand the system
behavior efficiently.
⚫The main disadvantage of this testing technique is that we cannot rely on it in every situation.
What is Structural Testing?
Structural testing, also known as glass box testing or white box testing, is an approach where the
tests are derived from knowledge of the software's structure or internal implementation.
Other names for structural testing include clear box testing, open box testing, logic driven testing,
and path driven testing.
Path Coverage = (Number of paths exercised / Total number of paths in the program) x 100%
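The formula can be checked with a one-line helper; the numbers below are illustrative.

```python
def path_coverage(paths_exercised, total_paths):
    """Path Coverage = (paths exercised / total paths) x 100 %."""
    return paths_exercised / total_paths * 100

# A program with 8 distinct paths, of which the test suite exercises 6:
print(path_coverage(6, 8))  # 75.0
```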
We have gone through black-box testing techniques like equivalence partitioning and boundary
value analysis. These approaches are more structured, and there is a clearly defined way to apply
them. If multiple testers apply the same technique to a requirement, they will come up with a
similar set of test cases.
When applying experience based test techniques, the test cases are derived from the tester’s skill and
intuition. Their past works with similar applications and technologies also play a role in this. These
techniques can be helpful in looking out for tests that were not easily identified by other structured
ones. Depending on the tester’s approach, it may achieve widely varying degrees of coverage and
effectiveness. Coverage can be difficult to assess and may not be measurable with these techniques.
Once the structured testing is complete, there can be an opportunity to use this testing technique. It
will ensure the testing of important and error-prone areas of applications.
Experience is a precious tool of the tester, and it is required in performing all types of testing.
However, there are certain circumstances when a tester is left with his/her past experience as the
only resource to carry out the testing. Some such situations are as follows:
Apart from the above situations, software products involving lower risk can also be tested through
this approach. Software products carrying higher risk may also be tested through this technique, but
only when accompanied by formal and effective testing procedures and documentation.
Intuition, too, plays a key role in this technique. For example, consider your favorite web or mobile
app that you use – E.g. Amazon. You did not get any requirements or a document on how to use a
functionality. Yet, most of you are aware of the features and functionalities that they offer. How did
you do that? By exploring the application on your own. This is a classic example of knowing the
application by using it over time.
A similar thought applies to a tester as well. It’s just that they are trained to look for finer details and
to look out for defects. Over a period, they know the areas that can be buggy.
For example, if you test an eCommerce application like Amazon, there are some scenarios that a
tester would know from experience that a casual user might not try, such as:
These types of scenarios (and many more) come from experience, and they would apply to most of
the ecommerce applications.
Scenarios to avoid
While we can always use this testing along with the structured techniques, in some cases, we cannot
use it stand alone. At times there are contractual requirements where we need to present test
coverage and specific test matrices. Experience based techniques cannot measure the test coverage.
Hence, we should avoid them in such cases.
Error Guessing – The tester applies his experience to guess the areas of the application that are
prone to error.
It's a simple technique of guessing and detecting the potential defects that may creep into the
software product. In this technique, a tester makes use of his skills, acquired knowledge and past
experience to identify the vulnerable areas of the software product that are likely to be affected by
the bugs.
The error guessing technique may be considered a risk analysis method, in which an experienced
tester applies his wisdom and gained experience to spot the areas or functionalities of the software
product that are likely to be tainted with potential defects. Thereafter, the tester classifies each area
as a low-risk, medium-risk, or high-risk defect-prone area, and accordingly prepares test cases to
locate these defects.
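The risk-ranking step described above can be sketched as follows; the guessed areas and their risk levels are purely illustrative.

```python
# Error-guessing sketch: rank suspect areas by guessed defect risk, then
# prepare and run test cases for the highest-risk areas first.
RISK_ORDER = {"high": 0, "medium": 1, "low": 2}

# Hypothetical areas an experienced tester might flag.
guessed_areas = [
    ("happy-path login",       "low"),
    ("empty input field",      "high"),
    ("very long input string", "medium"),
    ("divide by zero",         "high"),
]

def prioritise(areas):
    """Order the areas so that high-risk ones are tested first."""
    return sorted(areas, key=lambda area: RISK_ORDER[area[1]])

for name, risk in prioritise(guessed_areas):
    print(f"[{risk}] test: {name}")
```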
What cannot be found through formal testing techniques may be spotted through error guessing.
However, it is preferred that formal testing techniques are applied first, followed by the error
guessing technique.
Exploratory Testing – As the name implies, the tester explores the application and uses his
experience to navigate through different functionalities.
Exploratory testing is best used when specifications and requirements are inadequate and time is
severely limited.
Checklist Based Testing– In this technique, we apply a tester's experience to create a checklist of
different functionalities and use cases for testing.
In this technique, an experienced tester prepares the checklist based on his past experience, and it
works as a manual to direct the testing process. The checklist is high level and standardized, and it
consistently reminds the tester of what is to be tested. The checklist prepared by a tester is not a
static, final list; changes may be made to it in proportion to the needs and requirements that occur
during the course of testing. Further, it is pertinent to mention that the checklist is the only tool
that ensures complete test coverage in this testing.
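A checklist of this kind can be kept as a simple living data structure that the tester updates during the session; the items below are hypothetical.

```python
# Checklist-based testing sketch: items are ticked off as they are covered,
# and new items may be added while testing proceeds.
checklist = {
    "login with valid credentials": False,
    "login with invalid password":  False,
    "password reset email sent":    False,
}

def mark_done(item):
    """Record that a checklist item has been tested."""
    checklist[item] = True

def remaining():
    """The items the checklist still reminds the tester to cover."""
    return [item for item, done in checklist.items() if not done]

mark_done("login with valid credentials")
print(remaining())  # the two items still to be tested
```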
Common Pitfalls
As the name suggests, this testing technique is solely based on the experience of the tester. The
quality of testing depends on the tester, and it would differ from person to person.
If the tester's experience is insufficient, this approach might result in a very poorly tested
application and can leave bugs in it.
In some cases, the tester has the experience but the domain is new, e.g., the application is in the
banking domain, but the tester has worked on eCommerce applications. In such cases, experience-
based testing would not work as well.
Hence, it’s extremely important that we use this technique when a tester has been working in the
same domain or on the application for a long time.
Software Testing Strategies: Strategic approach to software testing Unit Testing: purpose,
techniques, and best practices, Integration Testing: approaches and challenges, Validation Testing:
ensuring adherence to user requirements, System Testing: comprehensive end-to-end testing
A software testing strategy is the set of steps that need to be done to assure the highest possible
quality of an end product. It is a plan of actions that an in-house QA department or an outsourced
QA team follows to provide the level of quality set by you. If you choose a strategy that does not
fit your project, you waste time and resources for nothing.
1. Waterfall testing strategy
2. Agile testing strategy
3. DevOps testing strategy
4. Risk based testing strategy
5. Exploratory testing strategy
6. Alpha beta testing strategy
7. Regression testing strategy
To make it clearer: if the test plan is a destination, then the QA test strategy is the map to reach
that destination.
The classical strategy for testing computer software begins with “testing in the small” and works
outward toward “testing in the large.” Stated in the jargon of software testing, we begin with unit
testing, then progress toward integration testing, and culminate with validation and system testing.
Unit Testing
In conventional applications, unit testing focuses on the smallest compilable program unit, the
subprogram (e.g., module, subroutine, procedure, component). Once each of these units has been
tested individually, it is integrated into a program structure while a series of regression tests are run
to uncover errors due to interfacing between the modules and side effects caused by the addition of
new units. Finally, the system as a whole is tested to ensure that errors in requirements are
uncovered.
Unit testing refers to testing program units in isolation. However, there is no consensus on the
definition of a unit. Some examples of commonly understood units are functions, procedures, or
methods. Even a class in an object-oriented programming language can be considered as a program
unit. Syntactically, a program unit is a piece of code, such as a function or method of class, that is
invoked from outside the unit and that can invoke other program units. Moreover, a program unit is
assumed to implement a well-defined function providing a certain level of abstraction to the
implementation of higher level functions. The function performed by a program unit may not have a
direct association with a system-level function. Thus, a program unit may be viewed as a piece of
code implementing a “low”-level function
Most software consists of a number of components, each the size of a small program. How do we test
each component? One answer is to create an environment to test each component in isolation (Figure
shown below). This is termed unit testing. A driver component makes method calls on the
component under test. Any methods that the component uses are simulated as stubs. These stubs are
rudimentary replacements for missing methods. A stub does one of the following:
• carries out an easily written simulation of the mission of the component
• displays a message indicating that the component has been executed
• nothing
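A minimal sketch of the driver-and-stub arrangement, assuming a hypothetical `notify_user` component whose email-sending collaborator is not yet written:

```python
def send_email_stub(address, body):
    """Stub: a rudimentary replacement that simulates the missing component."""
    print(f"stub: would send mail to {address}")
    return True

def notify_user(address, message, send_email):
    """Component under test: relies on an email-sending collaborator."""
    if not address:
        return False
    return send_email(address, message)

def driver():
    """Driver: makes the method calls on the component under test."""
    assert notify_user("a@example.com", "hi", send_email_stub) is True
    assert notify_user("", "hi", send_email_stub) is False
    return "unit tests passed"

print(driver())
```

Because the collaborator is passed in as a parameter, the stub can later be replaced by the real email module without changing the component under test.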
Now, given that a program unit implements a function, it is only natural to test the unit before it is
integrated with other units. Thus, a program unit is tested in isolation, that is, in a stand-alone
manner. There are two reasons for testing a unit in a stand-alone manner.
First, errors found during testing can be attributed to a specific unit so that it can be easily fixed.
Moreover, unit testing removes dependencies on other program units.
Second, during unit testing it is desirable to verify that each distinct execution of a program unit
produces the expected result. In terms of code details, a distinct execution refers to a distinct path in
the unit. Unit testing has a limited scope. A programmer will need to verify whether or not a code
works correctly by performing unit-level testing. Intuitively, a programmer needs to test a unit as
follows:
• Execute every line of code. This is desirable because the programmer needs to know what
happens when a line of code is executed. In the absence of such basic observations, surprises at
a later stage can be expensive.
• Execute every predicate in the unit to evaluate them to true and false separately.
• Observe that the unit performs its intended function and ensure that it contains no known
errors.
In spite of the above tests, there is no guarantee that a satisfactorily tested unit is functionally
correct from a systemwide perspective. Not everything pertinent to a unit can be tested in isolation
because of the limitations of testing in isolation. This means that some errors in a program unit can
only be found later, when the unit is integrated with other units in the integration testing and
system testing phases. Even though it is not possible to find all errors in a program unit in
isolation, it is still necessary to ensure that a unit performs satisfactorily before it is used by other
program units. It serves no purpose to integrate an erroneous unit with other units for the following
reasons: (i) many of the subsequent tests will be a waste of resources and (ii) finding the root
causes of failures in an integrated system is more resource consuming.
Unit testing is performed by the programmer who writes the program unit because the programmer
is intimately familiar with the internal details of the unit. The objective for the programmer is to be
satisfied that the unit works as expected. Since a programmer is supposed to construct a unit with
no errors in it, a unit test is performed by him or her to their satisfaction in the beginning and to the
satisfaction of other programmers when the unit is integrated with other units. This means that all
programmers are accountable for the quality of their own work, which may include both new code
and modifications to the existing code. The idea here is to push the quality concept down to the
lowest level of the organization and empower each programmer to be responsible for his or her
own quality. Therefore, it is in the best interest of the programmer to take preventive actions to
minimize the number of defects in the code. The defects found during unit testing are internal to
the software development group and are not reported up the personnel hierarchy to be counted in
quality measurement metrics. The source code of a unit is not used for interfacing by other group
members until the programmer completes unit testing and checks in the unit to the version control
system.
1. Unit tests help to fix bugs early in the development cycle and save costs.
2. It helps the developers to understand the testing code base and enables them to make changes
quickly
3. Good unit tests serve as project documentation
4. Unit tests help with code re-use. Migrate both your code and your tests to your new project.
Tweak the code until the tests run again.
Unit testing techniques are mainly categorized into three parts: black box testing, which involves
testing the user interface along with input and output; white box testing, which involves testing the
functional behaviour of the software application; and gray box testing, which is used to execute
test suites, test methods, and test cases, and to perform risk analysis.
• Statement Coverage
• Decision Coverage
• Branch Coverage
• Condition Coverage
• Finite State Machine Coverage
• Test Driven Development (TDD) & Unit Testing
Unit testing in TDD involves an extensive use of testing frameworks. A unit test framework is used
in order to create automated unit tests. Unit testing frameworks are not unique to TDD, but they are
essential to it. Below we look at some of what TDD brings to the world of unit testing:
• Bugs identified during unit testing must be fixed before proceeding to the next phase in
SDLC
• Adopt a “test as you code” approach. The more code you write without testing, the more
paths you have to check for errors.
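A TDD-style unit test using Python's standard unittest framework might look like the sketch below; `classify_triangle` is an illustrative function, not part of any real project, and in TDD its tests would be written first.

```python
import unittest

def classify_triangle(a, b, c):
    """Code written just after the tests below, with only enough logic to pass them."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TriangleTests(unittest.TestCase):
    def test_equilateral(self):
        self.assertEqual(classify_triangle(2, 2, 2), "equilateral")

    def test_isosceles(self):
        self.assertEqual(classify_triangle(2, 2, 3), "isosceles")

    def test_scalene(self):
        self.assertEqual(classify_triangle(2, 3, 4), "scalene")

# Run the suite programmatically so the result can be inspected.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TriangleTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```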
Integration Testing
Integration testing is known as the second level of the software testing process, following unit
testing. Integration testing involves combining the individual components or units of a software
project and testing them to expose defects and problems and verify that they work together as
designed.
As a rule, the usual software project consists of numerous software modules, many of them built by
different programmers. Integration testing shows the team how well these disparate elements work
together. After all, each unit may function perfectly on its own, but the pressing question is, “But can
they be brought together and work smoothly?”
So, integration testing is the way we find out if the various parts of a software application can play
well with others!
Big Bang Method
This method involves integrating all the modules and components and testing them at once as a
single unit. This method is also known as non-incremental integration testing.
Bottom-Up Method
This method requires testing the lower-level modules first, which are then used to facilitate the
higher module testing. The process continues until every top-level module is tested. Once all the
lower-level modules are successfully tested and integrated, the next level of modules is formed.
Hybrid Method
This method is also called "sandwich testing." It involves simultaneously testing top-level modules
with lower-level modules and integrating lower-level modules with top-level modules, and testing
them as a system. So, this process is, in essence, a fusion of the bottom-up and top-down testing
types.
Incremental Approach
This approach integrates two or more logically related modules, then tests them. After this, other
related modules are gradually introduced and integrated until all the logically related modules are
successfully tested. The tester can use either the top-down or bottom-up methods.
Stubs and Drivers
These elements are dummy programs used in integration testing to facilitate software testing activity,
acting as substitutes for any missing modules in the testing process. These programs do not
implement the missing software module's entire programming logic, but they do simulate the
everyday data communication with the calling module.
Top-Down Approach
Unlike the bottom-up method, the top-down approach tests the higher-level modules first, working
the way down to the lower-level modules. Testers can use stubs if any lower-level modules aren’t
ready.
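The top-down idea can be sketched as follows: the higher-level module is tested first with a stub standing in for the unfinished lower-level module, which is later swapped for the real implementation. All names and the tax rate are hypothetical.

```python
def tax_stub(amount):
    """Stub for the lower-level tax module, which is not ready yet."""
    return 0.0

def tax_real(amount):
    """The real lower-level module, integrated once it is complete."""
    return round(amount * 0.18, 2)

def invoice_total(amount, tax_fn):
    """Higher-level module under test; delegates tax to the lower-level module."""
    return round(amount + tax_fn(amount), 2)

print(invoice_total(100.0, tax_stub))  # 100.0 while the tax module is stubbed
print(invoice_total(100.0, tax_real))  # 118.0 once the real module is integrated
```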
Validation Testing
At the culmination of integration testing, software is completely assembled as a package,
interfacing errors have been uncovered and corrected, and a final series of software tests—
validation testing—may begin. Validation can be defined in many ways, but a simple (albeit harsh)
definition is that validation succeeds when software functions in a manner that can be reasonably
expected by the customer. At this point a battle-hardened software developer might protest: "Who
or what is the arbiter of reasonable expectations?"
Reasonable expectations are defined in the Software Requirements Specification— a document
that describes all user-visible attributes of the software. The specification contains a section called
Validation Criteria. Information contained in that section forms the basis for a validation testing
approach.
Validation Test Criteria Software validation is achieved through a series of black-box tests that
demonstrate conformity with requirements. A test plan outlines the classes of tests to be conducted
and a test procedure defines specific test cases that will be used to demonstrate conformity with
requirements. Both the plan and procedure are designed to ensure that all functional requirements
are satisfied, all behavioral characteristics are achieved, all performance requirements are attained,
documentation is correct, and human engineered and other requirements are met (e.g.,
transportability, compatibility, error recovery, maintainability).
After each validation test case has been conducted, one of two possible conditions exists:
(1) the function or performance characteristics conform to specification and are accepted,
or
(2) a deviation from specification is uncovered and a deficiency list is created.
Deviations or errors discovered at this stage in a project can rarely be corrected prior to scheduled
delivery. It is often necessary to negotiate with the customer to establish a method for resolving
deficiencies.
Alpha and Beta Testing It is virtually impossible for a software developer to foresee how the
customer will really use a program. Instructions for use may be misinterpreted; strange combinations
of data may be regularly used; output that seemed clear to the tester may be unintelligible to a user in
the field.
When custom software is built for one customer, a series of acceptance tests are conducted to enable
the customer to validate all requirements. Conducted by the end user rather than software engineers,
an acceptance test can range from an informal "test drive" to a planned and systematically executed
series of tests. In fact, acceptance testing can be conducted over a period of weeks or months, thereby
uncovering cumulative errors that might degrade the system over time.
If software is developed as a product to be used by many customers, it is impractical to perform
formal acceptance tests with each one. Most software product builders use a process called alpha and
beta testing to uncover errors that only the end-user seems able to find.
The alpha test is conducted at the developer's site by a customer. The software is used in a natural
setting with the developer "looking over the shoulder" of the user and recording errors and usage
problems. Alpha tests are conducted in a controlled environment.
The beta test is conducted at one or more customer sites by the end-user of the software. Unlike
alpha testing, the developer is generally not present. Therefore, the beta test is a "live" application of
the software in an environment that cannot be controlled by the developer. The customer records all
problems (real or imagined) that are encountered during beta testing and reports these to the
developer at regular intervals. As a result of problems reported during beta tests, software engineers
make modifications and then prepare for release of the software product to the entire customer base.
SYSTEM TESTING
Software is only one element of a larger computer-based system. Ultimately, software is
incorporated with other system elements (e.g., hardware, people, information), and a series of system
integration and validation tests are conducted. These tests fall outside the scope of the software
process and are not conducted solely by software engineers. However, steps taken during software
design and testing can greatly improve the probability of successful software integration in the larger
system. A classic system testing problem is "finger-pointing." This occurs when an error is
uncovered, and each system element developer blames the other for the problem. Rather than
indulging in such nonsense, the software engineer should anticipate potential interfacing problems
and
(1) Design error-handling paths that test all information coming from other elements of the system,
(2) Conduct a series of tests that simulate bad data or other potential errors at the software interface,
(3) Record the results of tests to use as "evidence" if finger-pointing does occur, and
(4) Participate in planning and design of system tests to ensure that software is adequately tested.
System testing is actually a series of different tests whose primary purpose is to fully exercise the
computer-based system. Although each test has a different purpose, all work to verify that system
elements have been properly integrated and perform allocated functions. In the sections that follow,
we discuss the types of system tests that are worthwhile for software-based systems.
Recovery Testing Many computer-based systems must recover from faults and resume processing
within a pre-specified time. In some cases, a system must be fault tolerant; that is, processing faults
must not cause overall system function to cease. In other cases, a system failure must be corrected
within a specified period of time or severe economic damage will occur.
Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that
recovery is properly performed. If recovery is automatic (performed by the system itself),
reinitialization, checkpointing mechanisms, data recovery, and restart are evaluated for correctness.
If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to determine
whether it is within acceptable limits.
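The MTTR check can be sketched as below; the repair times and the acceptable limit are illustrative.

```python
def mttr(repair_times):
    """Mean time to repair = total repair time / number of repairs."""
    return sum(repair_times) / len(repair_times)

repairs = [30, 45, 15, 30]   # minutes to recover from four forced failures
acceptable_limit = 40        # acceptable MTTR in minutes, per the test plan

value = mttr(repairs)
print(value, value <= acceptable_limit)  # 30.0 True
```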
Security Testing Any computer-based system that manages sensitive information or causes actions
that can improperly harm (or benefit) individuals is a target for improper or illegal penetration.
Penetration spans a broad range of activities: hackers who attempt to penetrate systems for sport;
disgruntled employees who attempt to penetrate for revenge; dishonest individuals who attempt to
penetrate for illicit personal gain. Security testing attempts to verify that protection mechanisms built
into a system will, in fact, protect it from improper penetration. To quote: "The system's security
must, of course, be tested for invulnerability from frontal attack—but must also be tested for
invulnerability from flank or rear attack." During security testing, the tester plays the role(s) of the
individual who desires to penetrate the system. Anything goes! The tester may attempt to acquire
passwords through external clerical means; may attack the system with custom software designed to
break down any defenses that have been constructed; may overwhelm the system, thereby denying
service to others; may purposely cause system errors, hoping to penetrate during recovery; may
browse through insecure data, hoping to find the key to system entry. Given enough time and
resources, good security testing will ultimately penetrate a system. The role of the system designer is
to make penetration cost more than the value of the information that will be obtained.
Stress testing executes a system in a manner that demands resources in abnormal quantity,
frequency, or volume. For example, (1) special tests may be designed that generate ten interrupts per
second, when one or two is the average rate, (2) input data rates may be increased by an order of
magnitude to determine how input functions will respond, (3) test cases that require maximum
memory or other resources are executed, (4) test cases that may cause thrashing in a virtual operating
system are designed, (5) test cases that may cause excessive hunting for disk-resident data are
created. Essentially, the tester attempts to break the program. A variation of stress testing is a
technique called sensitivity testing. In some situations (the most common occur in mathematical
algorithms), a very small range of data contained within the bounds of valid data for a program may
cause extreme and even erroneous processing or profound performance degradation. Sensitivity
testing attempts to uncover data combinations within valid input classes that may cause instability or
improper processing.
Performance tests are often coupled with stress testing and usually require both hardware and
software instrumentation. That is, it is often necessary to measure resource utilization (e.g., processor
cycles) in an exacting fashion. External instrumentation can monitor execution intervals, log events
(e.g., interrupts) as they occur, and sample machine states on a regular basis. By instrumenting a
system, the tester can uncover situations that lead to degradation and possible system failure.
SOFTWARE METRICS
The simplest measure of software is its size. Two possible metrics are the size in bytes and the size in
number of statements. The size in statements is often termed LOCs (lines of code), sometimes
SLOCs (source lines of code). The size in bytes obviously affects the main memory and disk space
requirements and affects performance. The size measured in statements relates to development effort
and maintenance costs. But a longer program does not necessarily take longer to develop than a
shorter program, because the complexity of the software also has an effect. A metric such as LOCs
takes no account of complexity. (We shall see shortly how complexity can be measured.)
There are different ways of interpreting even a simple metric like LOCs, since it is possible to
exclude, or include, comments, data declaration statements, and so on. Arguably, blank lines are not
included in the count.
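One possible LOC-counting convention, excluding blank lines and comment-only lines as the paragraph above notes is arguable, can be sketched as:

```python
def count_loc(source):
    """Count lines of code, excluding blank lines and comment-only lines."""
    loc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            loc += 1
    return loc

sample = """
# compute the square of x
def square(x):

    return x * x
"""
print(count_loc(sample))  # 2
```

A different convention (including comments, or including declarations) would simply change the filter, which is exactly why LOC figures from different organizations are hard to compare.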
The second major metric is person months, a measure of developer effort. Since people’s time is the
major factor in software development, person months usually determine cost. If an organization
measures the development time for components, the information can be used to predict the time of
future developments. It can also be used to gauge the effectiveness of new techniques that are used.
The third basic metric is the number of bugs. As a component is being developed, a log can be kept
of the bugs that are found. In week 1 there might be 27, in week 2 there might be 13, and so on. As
we shall see later, this helps predict how many bugs remain at the end of the development. These
figures can also be used to assess how good new techniques are. Collecting and documenting quality
information can be seen as threatening to developers but a supportive culture can help.
Complexity metric
In the early days of programming, main memory was small and processors were slow. It was
considered normal to try hard to make programs efficient. One effect of this was that programmers
often used tricks. Nowadays the situation is rather different – the pressure is on to reduce the
development time of programs and ease the burden of maintenance. So the emphasis is on writing
programs that are clear and simple, and therefore easy to check, understand and modify. What are the
arguments for simplicity?
■ it is quicker to debug simple software
■ it is quicker to test simple software
■ simple software is more likely to be reliable
■ it is quicker to modify simple software.
If we look at the world of design engineering, a good engineer insists on maintaining a complete
understanding and control over every aspect of the project. The more difficult the project the more
firmly the insistence on simplicity – without it no one can understand what is going on. Software
designers and programmers have sometimes been accused of exhibiting the exact opposite
characteristic; they deliberately avoid simple solutions and gain satisfaction from the complexities of
their designs.
However, many software designers and programmers today strive to make their software as clear and
simple as possible. A programmer finishes a program and is satisfied both that it works correctly and
that it is clearly written.
Arguably what we perceive as clarity or complexity is an issue for psychology. It is concerned with
how the brain works. We cannot establish a measure of complexity – for example, the number of
statements in a program – without investigating how such a measure corresponds with programmers’
perceptions and experiences. We now describe one attempt to establish a meaningful measure of
complexity. One aim of such work is to guide programmers in selecting clear program structures and
rejecting unclear structures, either during design or afterwards.
The approach taken is to hypothesize about what factors affect program complexity. For example, we
might conjecture that program length, the number of alternative paths through the program and the
number of references to data might all affect complexity.
Amongst several attempts to measure complexity is McCabe’s cyclomatic complexity. McCabe
suggests that complexity does not depend on the number of statements. Instead it depends only on
the decision structure of the program – the number of if, while and similar statements. To calculate
the cyclomatic complexity of a program, count the number of conditions and add one. For example, a
program fragment containing a single if-then-else statement
has a complexity of 2, because there are two independent paths through the program. Similarly a
while and a repeat each count one towards the complexity count. Compound conditions like:
if a > b and c > d then
counts two, because this if statement could be rewritten as two nested if statements. Note that a
program that consists only of a sequence of statements has a cyclomatic complexity of 1, however
long it is. Thus the smallest value of this metric is 1.
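The counting rule above can be sketched as follows. This is a crude approximation that scans for decision keywords; real tools build a control-flow graph instead, and the keyword list here is illustrative:

```python
import re

# Decision keywords that each add one to the count (illustrative list).
DECISIONS = ("if", "elif", "while", "for", "case")

def cyclomatic_complexity(code: str) -> int:
    """Crude sketch of McCabe's measure: number of conditions plus one.

    Compound conditions joined by and/or count extra, since
    'if a > b and c > d' could be written as two nested if statements.
    """
    tokens = re.findall(r"[A-Za-z_]+", code)
    count = sum(1 for t in tokens if t in DECISIONS)
    count += sum(1 for t in tokens if t in ("and", "or"))
    return count + 1

straight_line = "a = 1\nb = a + 2\nprint(b)"
print(cyclomatic_complexity(straight_line))   # 1: a pure sequence of statements

branching = "if a > b and c > d:\n    x = 1\nwhile x < 10:\n    x += 1"
print(cyclomatic_complexity(branching))       # 4: if + and + while, plus one
```

Note how the straight-line program scores 1 regardless of its length, which is precisely the criticism raised below that the measure ignores sheer module size.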
There are two ways of using McCabe’s measure. First, if we had two algorithms that solve the same
problem, we could use this measure to select the simpler. Second, McCabe suggests that if the
cyclomatic complexity of a component is greater than 10, then it is too complex. In such a case, it
should either be rewritten or else broken down into several smaller components.
Cyclomatic complexity is a useful attempt to quantify complexity, and it is claimed that it has been
successfully applied. It is, however, open to several criticisms as follows.
First, why is the value of 10 adopted as the limit? This figure for the maximum allowed complexity
is somewhat arbitrary and unscientific.
Second, the measure makes no allowance for the sheer length of a module, so that a one-page
module (with no decisions) is rated as equally complex as a thousand-page module (with no
decisions).
Third, the measure depends only on control flow, ignoring, for example, references to data. One
program might only act upon a few items of data, while another might involve operations on a
variety of complex objects. (Indirect references to data, say via pointers, are an extreme case.)
Finally, there is no evidence to fully correlate McCabe’s measure with the complexity of a module as
perceived by human beings.
So McCabe’s measure is a crude attempt to quantify the complexity of a software component. But it
suffers from obvious flaws and there are various suggestions for devising an improved measure.
However, McCabe’s complexity measure has become famous and influential as a starting point for
work on metrics.
UNIT III
Defect Management and Software Quality Assurance
Defect Management: Definition of defects and their lifecycle, Defect management process,
including defect reporting and tracking, Metrics related to defects and their utilization for process
improvement
Software Quality Assurance: Understanding quality concepts and the Quality Movement:
Background issues and challenges in SQA, Activities and approaches in Software Quality Assurance,
Software Reviews: Formal Technical Reviews and their benefits, Statistical Quality Assurance and
Software Reliability
Statistical process control techniques for quality assurance: Software reliability measurement and
improvement, The ISO 9000 Quality Standards and their application in software development
Quality Improvement Techniques: Introduction to quality improvement methodologies, Utilizing
quality costs for decision-making, Introduction to quality improvement tools: Pareto Diagrams,
Cause-effect Diagrams, Scatter Diagrams, Run chart
What is a Defect?
A software bug arises when the expected results do not match the actual results. A bug can also be an
error, flaw, failure, or fault in a computer program. Most bugs arise from mistakes and errors made
by developers or architects.
The following methods help prevent programmers from introducing bugs during development:
• Programming Techniques adopted
• Software Development methodologies
• Peer Review
• Code Analysis
Defect Status
Defect Status or Bug Status in the defect life cycle is the present state that a defect or bug is
currently in. The goal of defect status is to precisely convey the current state or progress of
a defect or bug in order to better track and understand the actual progress of the defect life cycle.
Defect States
#1) New: This is the first state of a defect in the Defect Life Cycle. When any new defect is found, it
falls in a ‘New’ state, and validations & testing are performed on this defect in the later stages of the
Defect Life Cycle.
#2) Assigned: In this stage, a newly created defect is assigned to the development team to work on
the defect. This is assigned by the project lead or the manager of the testing team to a developer.
#3) Open/Active: Here, the developer starts the process of analyzing the defect and works on fixing
it, if required.
If the developer feels that the defect is not appropriate then it may get transferred to any of the below
four states, namely Duplicate, Deferred, Rejected, or Not a Bug, based upon a specific reason. We
will discuss these four states in a while.
#4) Fixed: When the developer finishes the task of fixing a defect by making the required changes
then he can mark the status of the defect as “Fixed”.
#5) Pending Retest: After fixing the defect, the developer assigns the defect to the tester to retest the
defect at their end, and until the tester works on retesting the defect, the state of the defect remains in
“Pending Retest”.
#6) Retest: At this point, the tester starts the task of retesting the defect to verify if the defect is fixed
accurately by the developer as per the requirements or not.
#7) Reopen: If any issue persists in the defect, then it will be assigned to the developer again for
fixing and the status of the defect gets changed to ‘Reopen’.
#8) Verified: If the tester finds no remaining issue after the defect has been retested, and is satisfied
that the defect has been fixed accurately, then the status of the defect gets
assigned to ‘Verified’.
#9) Closed: When the defect does not exist any longer, then the tester changes the status of the
defect to “Closed”.
A Few More:
• Rejected: If the defect is not considered a genuine defect by the developer then it is marked
as “Rejected” by the developer.
• Duplicate: If the developer finds that the defect is the same as another defect, or if the concept of
the defect matches another defect, then the status of the defect is changed to ‘Duplicate’ by
the developer.
• Deferred: If the developer feels that the defect is not of high priority and can be
fixed in a later release, he can change the status of the defect to
‘Deferred’.
• Not a Bug: If the defect does not have an impact on the functionality of the application, then
the status of the defect gets changed to “Not a Bug”.
1. Defect Prevention
The first stage of the defect management process is defect prevention. In this stage, the execution
of procedures, methodology, and standard approaches decreases the risk of defects. Removing defects
at the initial phase is the best approach in order to reduce their impact,
because fixing or resolving defects in the initial phase is less expensive, and the impact can also
be diminished.
But in later phases, identifying faults and then fixing them is an expensive process, and the effects
of a defect can also be amplified.
The defect prevention stage includes the following significant steps:
o Estimate Predictable Impact
o Minimize expected impact
o Identify Critical Risk
2. Deliverable Baseline
The second stage of the defect management process is the Deliverable baseline. Here, the
deliverable defines the system, documents, or product.
We can say that the deliverable is a baseline as soon as a deliverable reaches its pre-defined
milestone.
In this stage, the deliverable is carried from one step to another; the system's existing defects also
move forward to the next step or a milestone.
In other words, we can say that once a deliverable is baselined, any additional changes are
controlled.
3. Defect Discovery
The next stage of the defect management process is defect discovery. Discovering defects at an early
stage of the defect management process is very significant; a defect discovered later can cause greater damage.
A defect is considered discovered only once developers have acknowledged and documented it as a
valid one.
As we understood that, it is practically impossible to eliminate each defect from the system and make
a system defect-free. But we can detect the defects early before they become expensive to the
project.
The defect discovery stage includes the following phases; let's understand them in
detail:
o Identify a defect
o Report a defect
o Acknowledge Defect
4. Defect Resolution
Once the defect discovery stage has been completed successfully, we move to the next step of the
defect management process, Defect Resolution.
Defect Resolution is a step-by-step procedure for fixing the defects; this
process also helps specify and track the defects.
The process begins with handing over the defects to the development team. The developers need to
proceed with the resolution of the defects and fix them based on priority.
Once the defect has been fixed, the developer sends a resolution report to the test
manager's testing team.
The defect resolution process also involves a notification back to the test engineer to confirm that
the resolution is verified.
We need to follow the below steps in order to accomplish the defect resolution stage.
o Prioritize the risk
o Fix the defect
o Report the Resolution
In the second step, the developer will fix the defects as per the priority, which implies that the higher
priority defects are resolved first. Then the developer will fix the lower priority defects.
Step3: Report the Resolution
In the last step of defect resolution, the developer needs to send a report of the fixed defects. It is the
responsibility of the development team to make sure that the testing team is well aware of when the
defects are going to be fixed and how each fault has been fixed.
This step helps the testing team understand the root cause of the defect.
5. Process Improvement
In the above stage (defect resolution), the defects have been arranged and fixed.
Now, in the process improvement phase, we will look into the lower-priority defects, because these
defects are also significant and impact the system.
From the process improvement perspective, every acknowledged defect is treated like a critical defect
and needs to be fixed.
The people involved in this particular stage need to recall and check from where the defect was
initiated.
Depending on that, we can make modifications in the validation process, base-lining document,
review process that may find the flaws early in the process, and make the process less costly.
These minor defects allow us to learn how we can enhance the process and avoid the existence of
any kind of defect that may affect the system or cause product failure in the future.
6. Management Reporting
Management reporting is the last stage of the defect management process. It is a significant and
essential part of the defect management process. Management reporting is required to make sure
that the generated reports serve an objective and improve the defect management process.
In simple words, we can say that the evaluation and reporting of defect information support
organization and risk management, process improvement, and project management.
The information collected on specific defects by the project teams is the root of the management
reporting. Therefore, every organization needs to consider the information gathered throughout the
defect management process and the grouping of individual defects.
Defect Workflow and States
Many organizations perform software testing with the help of a tool that keeps track of the
defects during the bug/defect lifecycle and also contains defect reports.
Generally, there is one owner of the defect report at each state of the defect lifecycle, responsible for finishing
a task that would move the defect report to the successive state.
Sometimes a defect report may not have an owner in the last phases of the defect lifecycle, for example
in the following situations:
o If the defect is invalid, then the Defect report is cancelled.
o The defect report is considered deferred if the defect won't be fixed as part of the project.
o If the fault cannot be detected anymore, hence defect report is regarded as not reproducible.
o The defect report is considered closed if the defect has been fixed and tested.
Defect States
If defects are identified during testing, the testing team must manage them through the following
three states:
o Initial state
o Returned state
o Confirmation state
1. Initial State
o The first state of a defect is the initial state, in which the test engineer reports the defect
and supplies the information needed to reproduce it. This state is also referred to as the
open state.
2. Returned state
o The second state of defect is returned state. In this, the person receiving the test report
rejects and asks the report creator to provide further information.
o In a returned state, the test engineers can provide more information or accept the rejection of
the report.
o If various reports are rejected, the test manager should look out for faults in the initial
information collection process itself.
o The returned state is also referred to as the clarification state or the rejected state.
3. Confirmation state
o The last state of a defect is the confirmation state, where the test engineer performs
confirmation testing to make sure that the defect has been fixed.
o It is achieved by repeating the steps that found the defect at the time of testing.
Confirm Resolution
o The defect management process also helps us make sure that the resolution of defects is
tracked.
o One of the most significant procedures of the defect management process is the defect or
bug tracking process.
o For defect tracking, we have various automation tools available in the market, which can help
us to track the defect in the early stages.
o These days, various tools are available in order to track different types of
defects. For example,
o Software Tools: These types of tools are used to identify or track non-technical
problems.
o User-facing Tools: These types of tools help us to discover the defects which
are related to production.
o The defect management process also offers us valuable defect metrics together with
automation tools.
o These valuable defect metrics help us in reporting and continuous improvement.
Software test metrics are used to quantify the process of software testing. The quality, growth, or
improvement needed at a stage can be measured using the metrics, so that whatever is lagging
in this cycle can be improved in the next cycle. Metrics provide a comparative measure of a
process. Metrics also help to determine software's quality and the measures required to create a
defect-free quality product. The average time taken to fix a bug is a better parameter than the
time allocated for the same.
A software testing metric indicates the degree to which a process, component, or tool is efficient.
Here we have a quantitative measure of the effectiveness of an approach. Let's say we automated
certain test cases and we have 50% test coverage using tool A. Next time the tool was changed
and manual testing was also used so test coverage was 66%. Now we have solid data to justify
why manual testing in combination with automation would be better to test the product.
1. Defect density
2. Defect severity
3. Defect aging
4. Defect reopen rate
5. Defect trend analysis
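The first few of these metrics can be computed directly once defect counts are available. The formulas below are common definitions; the function names and sample figures are ours, for illustration:

```python
def defect_density(defects_found: int, size_kloc: float) -> float:
    """Defect density: defects per thousand lines of code (KLOC)."""
    return defects_found / size_kloc

def defect_reopen_rate(reopened: int, total_fixed: int) -> float:
    """Fraction of fixed defects that later had to be reopened."""
    return reopened / total_fixed

def average_defect_age(ages_in_days):
    """Defect aging: mean time a defect stays open, in days."""
    return sum(ages_in_days) / len(ages_in_days)

print(defect_density(30, 15.0))        # 2.0 defects per KLOC
print(defect_reopen_rate(5, 50))       # 0.1: 10% of fixes were reopened
print(average_defect_age([2, 4, 9]))   # 5.0 days open on average
```

Tracking these figures across releases is what turns raw defect counts into the comparative measure the text describes.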
Software Quality Assurance: Understanding quality concepts and the Quality Movement:
Background issues and challenges in SQA, Activities and approaches in Software Quality Assurance,
Software Reviews: Formal Technical Reviews and their benefits, Statistical Quality Assurance and
Software Reliability
When the expression “quality” is used, we usually think in terms of an excellent product or service
that fulfills or exceeds our expectations. These expectations are based on the intended use and the
selling price. For example, a customer expects a different performance from a plain steel washer than
from a chrome-plated steel washer because they are a different grade. When a product surpasses our
expectations, we consider that quality. Thus, it is somewhat of an intangible based on perception.
Quality can be quantified as follows:
Q=P/E
where, Q = quality
P = performance
E = expectation
If Q is greater than 1.0, then the customer has a good feeling about the product or service. Of course,
the determination of P and E will most likely be based on perception with the organization
determining performance and the customer determining expectations.
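The Q = P/E relation can be checked with a tiny calculation; the numbers here are invented for illustration:

```python
def quality(performance: float, expectation: float) -> float:
    """Q = P / E: quality as the ratio of performance (determined by
    the organization) to expectation (determined by the customer)."""
    return performance / expectation

# Performance exactly meets expectation: Q = 1.0
print(quality(8.0, 8.0))   # 1.0

# Performance exceeds expectation: Q > 1.0, the customer is pleased
print(quality(9.0, 7.5))   # 1.2
```

Since both P and E rest on perception, the ratio is a way of reasoning about quality rather than a precise physical measurement.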
A more definitive definition of quality is given in ISO 9000: 2000. It is defined as the degree to
which a set of inherent characteristics fulfills requirements. Degree means that quality can be used
with adjectives such as poor, good, and excellent. Inherent is defined as existing in something,
especially as a permanent characteristic. Characteristics can be quantitative or qualitative.
Requirement is a need or expectation that is stated; generally implied by the organization, its
customers, and other interested parties; or obligatory.
Quality has nine different dimensions. Table below shows these nine dimensions of quality with
their meanings and explanations in terms of a slide projector.
These dimensions are somewhat independent; therefore, a product can be excellent in one dimension
and average or poor in another. Very few, if any, products excel in all nine dimensions. For
example, the Japanese were cited for high-quality cars in the 1970s based only on the dimensions
of reliability, conformance, and aesthetics. Therefore, quality products can be determined by
using a few of the dimensions of quality.
Marketing has the responsibility of identifying the relative importance of each dimension of quality.
These dimensions are then translated into the requirements for the development of a new product or the
improvement of an existing one.
Quality software refers to a software which is reasonably bug or defect free, is delivered in time and
within the specified budget, meets the requirements and/or expectations, and is maintainable. In the
software engineering context, software quality reflects both functional quality as well as structural
quality.
• Software Functional Quality − It reflects how well it satisfies a given design, based on the
functional requirements or specifications.
• Software Structural Quality − It deals with the handling of non-functional requirements that
support the delivery of the functional requirements, such as robustness or maintainability, and
the degree to which the software was produced correctly.
• Software Quality Assurance − Software Quality Assurance (SQA) is a set of activities to ensure the
quality in software engineering processes that ultimately result in quality software products. The
activities establish and evaluate the processes that produce products. It involves process-focused
action.
• Software Quality Control − Software Quality Control (SQC) is a set of activities to ensure the
quality in software products. These activities focus on determining the defects in the actual
products produced. It involves product-focused action.
In the software industry, the developers will never declare that the software is free of defects, unlike
other industrial product manufacturers usually do. This difference is due to the following reasons.
1. Changing requirements
2. Time and resource constraints
3. Complexity of software systems
4. Emerging technologies and platforms
5. Global software development
6. Lack of SQA expertise
7. Cost vs. quality tradeoff
Software Quality Assurance involves a range of activities and approaches aimed at ensuring that
software products meet the desired quality standards.
SQA Encompasses
o A quality management approach
o Effective Software engineering technology (methods and tools)
o Formal technical reviews that are applied throughout the software process
o A multitier testing strategy
o Control of software documentation and the changes made to it.
o A procedure to ensure compliances with software development standards
Formal Technical Review (FTR) is a software quality control activity performed by software
engineers.
In addition, the purpose of FTR is to enable junior engineers to observe the analysis, design, coding
and testing approach more closely. FTR also serves to promote backup and continuity, because a number
of people become familiar with parts of the software they might not have seen otherwise. Actually, FTR is a class of
reviews that includes walkthroughs, inspections, round robin reviews and other small group technical
assessments of software. Each FTR is conducted as a meeting and is considered successful only if it is
properly planned, controlled and attended.
EXAMPLE: Suppose that during the development of software without FTR, design costs 10 units,
coding costs 15 units and testing costs 10 units; the total cost so far is 35 units, without
maintenance. Now suppose there is a quality issue because of bad design; to fix it we have to redesign
the software, and the final cost becomes 70 units. That is why FTR is so helpful while developing
software.
The review meeting: Each review meeting should be held considering the following constraints-
Involvement of people:
1. Between three and five people should be involved in the review.
2. Advance preparation should occur, but it should require at most two hours of
work for each person.
3. The duration of the review meeting should be less than two hours. Given these
constraints, it should be clear that an FTR focuses on a specific (and small) part of the
overall software.
At the end of the review, all attendees of FTR must decide what to do.
1. Accept the product without any modification.
2. Reject the product due to severe errors (once corrected, another review must be performed),
or
3. Accept the product provisionally (minor errors are encountered and should be corrected, but
no additional review will be required).
Once the decision is made, all FTR attendees complete a sign-off indicating their participation in
the review and their agreement with the findings of the review team.
Review guidelines: Guidelines for conducting formal technical reviews should be
established in advance. These guidelines must be distributed to all reviewers, agreed upon, and then
followed. A review that is uncontrolled can often be worse than no review at all, so a minimum set of
guidelines for FTR should be followed.
BENEFITS: The primary objective of formal technical reviews is to find errors during the process so
that they do not become defects after release of the software. The obvious benefit of formal technical
reviews is the early discovery of errors so that they do not propagate to the next step in the software
process.
To illustrate, program X is estimated to have a reliability of 0.96 over eight elapsed processing hours.
In other words, if program X were to be executed 100 times and require eight hours of elapsed
processing time (execution time), it is likely to operate correctly (without failure) 96 times out of
100.
MTTF is described as the time interval between two successive failures. An MTTF of 200 means
that one failure can be expected every 200 time units. The time units are entirely dependent on the
system, and can even be stated in numbers of transactions. MTTF is relevant for systems with
long transactions.
Once a failure occurs, some time is required to fix the error. MTTR measures the average time it takes
to track down the errors causing the failure and to fix them.
We can combine the MTTF & MTTR metrics to get the MTBF metric: MTBF = MTTF + MTTR.
Thus, an MTBF of 300 denotes that once a failure appears, the next failure is expected to appear
only after 300 hours. In this method, the time measurements are real time & not the execution time
as in MTTF.
ROCOF is the number of failures appearing in a unit time interval, i.e. the number of unexpected events over a
specific period of operation. ROCOF is the frequency with which unexpected behaviour is
likely to appear. A ROCOF of 0.02 means that two failures are likely to occur in each 100
operational time unit steps. It is also called the failure intensity metric.
POFOD is described as the probability that the system will fail when a service is requested: the
number of system failures given a number of system inputs.
A POFOD of 0.1 means that one out of ten service requests may fail. POFOD is an essential
measure for safety-critical systems, and is relevant for protection systems where services are
demanded occasionally.
Availability (AVAIL)
Availability is the probability that the system is available for use at a given time. It takes into
account the repair time & the restart time for the system. An availability of 0.995 means that in every
1000 time units, the system is likely to be available for 995 of these. It is the percentage of time that a
system is available for use, taking into account planned and unplanned downtime. If a system is
down an average of four hours out of every 100 hours of operation, its AVAIL is 96%.
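The POFOD and AVAIL figures quoted above can be reproduced with two one-line calculations; the function names are ours:

```python
def pofod(failed_requests: int, total_requests: int) -> float:
    """Probability of failure on demand: failed service requests
    divided by total service requests."""
    return failed_requests / total_requests

def availability(uptime_hours: float, downtime_hours: float) -> float:
    """Fraction of total time the system is usable, counting both
    planned and unplanned downtime."""
    return uptime_hours / (uptime_hours + downtime_hours)

print(pofod(1, 10))         # 0.1: one in ten service requests fails
print(availability(96, 4))  # 0.96: down 4 hours out of every 100
```

Note the difference in denominator: POFOD is per service request, while availability is per unit of elapsed time.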
Statistical process control techniques for quality assurance: Software reliability measurement and
improvement, The ISO 9000 Quality Standards and their application in software development
Quality Improvement Techniques: Introduction to quality improvement methodologies, Utilizing
quality costs for decision-making, Introduction to quality improvement tools: Pareto Diagrams,
Cause-effect Diagrams, Scatter Diagrams, Run chart
Statistical process control is a simple way to encourage continuous improvement. When a process is
continuously monitored and controlled, managers can ensure that it works at its full potential,
resulting in consistent, quality manufacturing.
Statistical process control monitors the quality of products without compromising on safety. Today,
the worldwide manufacturing sector uses statistical process control extensively.
Modern manufacturing companies have to deal with constantly fluctuating prices of raw materials
and a great deal of competition. Companies cannot control these factors, but they can control the
quality of their products and processes. They need to constantly work on improving the quality,
efficiency, and cost margins to be a market leader.
Inspection continues to be the key form of detecting quality issues for most companies, but its
efficacy is debatable. With statistical process control, an organization can shift from being detection-
based to prevention-based. With the constant monitoring of process performance, operators can
detect changing trends or processes before performance is affected.
Cause-and-effect diagrams
Also known as the Ishikawa diagram or the fishbone diagram. Cause-and-effect diagrams are used to
identify several causes of any problem. When created, the diagrams look like a fishbone, with each
main bone stretching out into smaller branches that go deeper into each cause.
Check sheets
These are simple, ready-to-use forms that can be collected and then analyzed. Check sheets are
especially good for data that is repeatedly under observation and collected by the same person or in
the same location.
Histograms
Histograms look like bar charts and are graphs that represent frequency distributions. They are ideal
for numbered data. A histogram is a visual representation of how the outputs of a product or process
vary, and it aids in process analysis by demonstrating the capabilities of a process.
Pareto charts
These are bar graphs that represent time and money or frequency and cost. Pareto charts are
particularly useful to measure problem frequency. They show the 80/20 Pareto principle: addressing
20 percent of the processes will resolve 80 percent of the problems.
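The 80/20 idea can be demonstrated with a tiny tally over a hypothetical defect log; the module names and counts here are invented for illustration:

```python
from collections import Counter

# Hypothetical defect log: which module each reported defect came from.
defects = (["ui"] * 45 + ["parser"] * 30 + ["db"] * 12 +
           ["auth"] * 8 + ["logging"] * 5)

counts = Counter(defects)
total = sum(counts.values())

# Sort categories by frequency and accumulate their share of all defects,
# which is exactly what a Pareto chart plots.
cumulative = 0.0
for module, n in counts.most_common():
    cumulative += 100.0 * n / total
    print(f"{module:8s} {n:3d} defects, cumulative {cumulative:5.1f}%")
```

In this made-up data, the two busiest modules out of five account for 75% of all defects, which is the concentration pattern a Pareto chart makes visible.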
Scatter diagrams
Also known as an X-Y graph. Scatter diagrams work best when paired with numerical data.
Stratification
This is a tool to separate data that simplifies pattern identification. Stratification is a process that
sorts objects, people, and related data into layers or specific groups. It is perfect for data from
different sources.
Control charts
These are the oldest and most well-known statistical process tools. Control charts use graphics
to explain how a process's variability changes over time. They can reveal irregularities and unusual
variations when used to track the operation.
Data stratification
A slight difference from the stratification tool in the quality control tools.
Defect maps
These are maps that visualize and track a product’s defects, focusing on physical locations and flaws.
Every defect is identified on the map.
Event logs
These are standardized records of key software and hardware events.
Process flowcharts
Process flowcharts are a snapshot of steps in a process, displayed in the order they occur.
Progress centers
When decisions need to be made, progress centers are centralized locations that allow businesses to
monitor progress and collect data.
Randomization
This tool determines the number of individuals or events to include in a statistical analysis.
The ISO 9000 series of standards is based on the assumption that if a proper process is followed for
production, then good-quality products are bound to follow automatically. The types of industries to
which the various ISO standards apply are as follows.
1. ISO 9001: This standard applies to the organizations engaged in design, development,
production, and servicing of goods. This is the standard that applies to most software
development organizations.
2. ISO 9002: This standard applies to organizations that do not design products but are only
involved in production. Examples include steel and car manufacturing industries, which buy
product and plant designs from external sources and are engaged only in manufacturing those
products. Therefore, ISO 9002 does not apply to software development organizations.
3. ISO 9003: This standard applies to organizations that are involved only in the installation and
testing of products, for example, gas companies.
ISO 9001 places several requirements on a software development organization, including the following.
• Document control –
All documents concerned with the development of a software product should be properly
managed and controlled.
• Planning –
Proper plans should be prepared and monitored.
• Review –
All important documents across all phases should be independently checked and reviewed
for effectiveness and correctness.
• Testing –
The product should be tested against specification.
• Organizational Aspects –
Various organizational aspects should be addressed e.g., management reporting of the
quality team.
Advantages of ISO 9000 Certification:
Some of the advantages of the ISO 9000 certification process are as follows:
• ISO 9000 certification forces a corporation to focus on "how it is doing business". Each
procedure and work instruction must be documented and thus becomes a springboard for
continuous improvement.
• Employee morale increases as employees are asked to take control of their processes and to
document their work processes.
• Better products and services result from the continuous improvement process.
• Increased employee participation, involvement, awareness, and systematic employee
training reduce problems.
Shortcomings of ISO 9000 Certification:
Some of the shortcomings of the ISO 9000 certification process are as follows:
• ISO 9000 does not give any guideline for defining an appropriate process and does not
guarantee a high-quality process.
• No international accreditation agency exists for the ISO 9000 certification process.
Widely used quality improvement methodologies include the following:
1. Six Sigma
2. Lean Manufacturing
3. TQM or Reengineering
4. Kaizen
5. Agile Methodologies
6. PDSA
PDSA: The basic Plan-Do-Study-Act (PDSA) cycle was first developed by Shewhart and then
modified by Deming. It is an effective improvement technique.
The four steps in the cycle are exactly as stated. First, plan carefully what is to be done. Next, carry
out the plan (do it). Third, study the results—did the plan work as intended, or were the results
different? Finally, act on the results by identifying what worked as planned and what didn’t. Using
the knowledge learned, develop an improved plan and repeat the cycle.
Kaizen : Kaizen is a Japanese word for the philosophy that defines management’s role in
continuously encouraging and implementing small improvements involving everyone. It is the
process of continuous improvement in small increments that make the process more efficient,
effective, under control, and adaptable. Improvements are usually accomplished at little or no
expense, without sophisticated techniques or expensive equipment. It focuses on simplification by
breaking down complex processes into their sub-processes and then improving them.
Six Sigma: Six Sigma is the process of producing high-quality, improved output. This is done in
two phases: identification and elimination. The causes of defects are identified, and appropriate
elimination is carried out, which reduces variation across the whole process. A Six Sigma process is
one in which 99.99966% of all products produced are free from defects.
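The 99.99966% figure can be translated into the commonly quoted defect rate with a quick calculation:

```python
# Six Sigma quality level: 99.99966% of outputs are defect-free
yield_fraction = 99.99966 / 100

# Defects per million opportunities (DPMO) implied by that yield
dpmo = (1 - yield_fraction) * 1_000_000
print(f"DPMO at Six Sigma: {dpmo:.1f}")  # about 3.4 defects per million
```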
These five phases make up the Six Sigma DMADV approach, used when designing new processes:
1. Define
2. Measure
3. Analyze
4. Design
5. Verify
Agile Methodologies: Agile methodologies such as Scrum or Kanban are iterative and
collaborative approaches to software development. While their primary focus is on project
management and product delivery, agile methodologies also emphasize quality. Agile teams
frequently inspect and adapt their processes, collaborate closely with stakeholders, and prioritize
customer satisfaction. Continuous feedback and improvement are integral parts of agile
methodologies.
Cost of quality (COQ) is defined as a methodology that allows an organization to determine the
extent to which its resources are used for activities that prevent poor quality, that appraise the
quality of the organization’s products or services, and that result from internal and external
failures. Having such information allows an organization to determine the potential savings to be
gained by implementing process improvements.
Cost of poor quality (COPQ) is defined as the costs associated with providing poor quality products
or services. There are three categories:
1. Appraisal costs are costs incurred to determine the degree of conformance to quality
requirements.
2. Internal failure costs are costs associated with defects found before the customer receives
the product or service.
3. External failure costs are costs associated with defects found after the customer receives
the product or service.
Quality-related activities that incur costs may be divided into prevention costs, appraisal costs, and
internal and external failure costs.
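As a minimal sketch with hypothetical dollar figures, an organization might tally its quality costs by these four categories to see where its resources actually go:

```python
# Hypothetical quality-cost records: (category, amount in dollars)
costs = [
    ("prevention", 12_000),        # e.g., quality planning, training
    ("appraisal", 18_000),         # e.g., verification, quality audits
    ("internal_failure", 25_000),  # e.g., scrap, rework
    ("external_failure", 45_000),  # e.g., warranty claims, returns
]

total = sum(amount for _, amount in costs)

# Report each category's share of the total cost of quality
for category, amount in costs:
    share = 100 * amount / total
    print(f"{category:17s} ${amount:>7,}  ({share:4.1f}% of COQ)")
```

A breakdown like this makes the potential savings visible: a large external-failure share suggests that investing more in prevention and appraisal would pay for itself.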
Appraisal costs
Appraisal costs are associated with measuring and monitoring activities related to quality. These
costs are associated with the suppliers’ and customers’ evaluation of purchased materials, processes,
products, and services to ensure that they conform to specifications. They could include:
• Verification: Checking of incoming material, process setup, and products against agreed
specifications
• Quality audits: Confirmation that the quality system is functioning correctly
• Supplier rating: Assessment and approval of suppliers of products and services
Internal failure costs are incurred to remedy defects discovered before the product or service is
delivered to the customer. These costs occur when the results of work fail to reach design quality
standards and are detected before they are transferred to the customer. They could include:
• Waste: Performance of unnecessary work or holding of stock as a result of errors, poor
organization, or communication
• Scrap: Defective product or material that cannot be repaired, used, or sold
• Rework or rectification: Correction of defective material or errors
• Failure analysis: Activity required to establish the causes of internal product or service failure
External failure costs are incurred to remedy defects discovered by customers. These costs occur
when products or services that fail to reach design quality standards are not detected until after
transfer to the customer. They could include:
• Repairs and servicing: Of both returned products and those in the field
• Warranty claims: Failed products that are replaced or services that are re-performed under a
guarantee
• Complaints: All work and costs associated with handling and servicing customers’
complaints
• Returns: Handling and investigation of rejected or recalled products, including transport costs
PREVENTION COSTS
Prevention costs are incurred to prevent or avoid quality problems. These costs are associated with
the design, implementation, and maintenance of the quality management system. They are planned
and incurred before actual operation, and they could include:
• Product or service requirements: Establishment of specifications for incoming materials,
processes, finished products, and services
• Quality planning: Creation of plans for quality, reliability, operations, production, and inspection
• Quality assurance: Creation and maintenance of the quality system
• Training: Development, preparation, and maintenance of programs
To utilize quality costs for decision-making, organizations can follow these steps:
1. Cost Analysis
2. Root Cause Analysis
3. Prioritization
4. Decision making
5. Continuous Improvement
Vilfredo Pareto (1848–1923) conducted extensive studies of the distribution of wealth in Europe.
He found that there were a few people with a lot of money and many people with little money.
This unequal distribution of wealth became an integral part of economic theory. Dr. Joseph Juran
recognized this concept as a universal that could be applied to many fields. He coined the phrases
vital few and useful many. A Pareto diagram is a graph that ranks data classifications in
descending order from left to right, as shown below. In this case, the data classifications are types
of coating machines. Other possible data classifications are problems, complaints, causes, types of
nonconformities, and so forth. The vital few are on the left, and the useful many are on the right.
It is sometimes necessary to combine some of the useful many into one classification called
“other”. When this category is used, it is placed on the far right.
The vertical scale is dollars (or frequency), and the percent of each category can be placed above
the column. In this case, Pareto diagrams were constructed for both frequency and dollars. As can
be seen from the figure, machine 35 has the greatest number of nonconformities, but machine 51
has the greatest dollar value. Pareto diagrams can be distinguished from histograms (to be
discussed) by the fact that the horizontal scale of a Pareto diagram is categorical, whereas the
scale for the histogram is numerical.
Pareto diagrams are used to identify the most important problems. Usually, 75% of the total
results from 25% of the items. This fact is shown in the figure, where coating machines 35 and 51
account for about 75% of the total.
Actually, the most important items could be identified by listing them in descending order.
However, the graph has the advantage of providing a visual impact, showing those vital few
characteristics that need attention. Resources are then directed to take the necessary corrective
action.
1. Determine the method of classifying the data: by problem, cause, nonconformity, and so forth.
2. Decide if dollars (best), frequency, or both are to be used to rank the characteristics.
3. Collect data for an appropriate time interval or use historical data.
4. Summarize the data and rank order categories from largest to smallest.
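The ranking and cumulative-percentage steps above can be sketched as follows. The machine names echo the text's example, but the counts are hypothetical, chosen so that machines 35 and 51 account for about 75% of the total:

```python
# Hypothetical nonconformity counts per coating machine
counts = {"machine 35": 110, "machine 51": 78, "machine 20": 35,
          "machine 7": 17, "other": 10}

# Rank categories from largest to smallest
ranked = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

total = sum(counts.values())
cumulative = 0
for name, count in ranked:
    cumulative += count
    print(f"{name:10s} {count:4d}  cumulative {100 * cumulative / total:5.1f}%")
```

Reading down the cumulative column identifies the vital few: the first two machines already cover about 75% of all nonconformities, so corrective action starts there.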
Note that a quality improvement of the vital few, say, 50%, is a much greater return on investment
than a 50% improvement of the useful many. Also, experience has shown that it is easier to make
a 50% improvement in the vital few. The use of a Pareto diagram is a never-ending process. For
example, let’s assume that coating machine 51 is the target for correction in the improvement
program. A project team is assigned to investigate and make improvements. The next time a
Pareto analysis is made, another machine, say, 35 becomes the target for correction, and the
improvement process continues until coating machine nonconformities become an insignificant
quality problem. The Pareto diagram is a powerful quality improvement tool. It is applicable to
problem identification and the measurement of progress.
Cause-and-Effect (C&E) Diagrams
A C&E diagram is a picture composed of lines and symbols designed to represent a meaningful
relationship between an effect and its causes; it is sometimes called an Ishikawa or fishbone
diagram. The figure above illustrates a C&E diagram with the effect on the right and causes on the left.
The effect is the quality characteristic that needs improvement. Causes are sometimes broken
down into the major causes of work methods, materials, measurement, people, equipment, and the
environment.
Each major cause is further subdivided into numerous minor causes. For example, under work
methods, we might have training, knowledge, ability, physical characteristics, and so forth. C&E
diagrams are the means of picturing all these major and minor causes. Figure below shows a C&E
diagram for house paint peeling using four major causes.
The first step in the construction of a C&E diagram is for the project team to identify the effect or
quality problem. It is placed on the right side of a large piece of paper by the team leader. Next,
the major causes are identified and placed on the diagram. Determining all the minor causes
requires brainstorming by the project team. Brainstorming is an idea generating technique that is
well suited to the C&E diagram. It uses the creative thinking capacity of the team.
Attention to a few essentials will provide a more accurate and usable result:
1. Participation by every member of the team is facilitated by each member taking a turn giving
one idea at a time. If a member cannot think of a minor cause, he or she passes for that round.
Another idea may occur at a later round. Following this procedure prevents one or two individuals
from dominating the brainstorming session.
fig B
2. Quantity of ideas, rather than quality, is encouraged. One person’s idea will trigger someone
else’s idea, and a chain reaction occurs. Frequently, a trivial, or “dumb,” idea will lead to the best
solution.
3. Criticism of an idea is not allowed. There should be a freewheeling exchange of ideas that builds
enthusiasm within the group.
4. Visibility of the diagram is a primary factor of participation. In order to have space for all the
minor causes, a 2-foot by 3-foot piece of paper is recommended. It should be taped to a wall for
maximum visibility.
5. Create a solution-oriented atmosphere and not a gripe session. Focus on solving a problem
rather than discussing how it began. The team leader should ask questions using the why, what,
where, when, who, and how techniques.
6. Let the ideas incubate for a period of time (at least overnight) and then have another
brainstorming session. Provide team members with a copy of the ideas after the first session.
When no more ideas are generated, the brainstorming activity is terminated.
Once the C&E diagram is complete, it must be evaluated to determine the most likely causes. This
activity is accomplished in a separate session. The procedure is to have each person vote on the
minor causes. Team members may vote on more than one cause. Those causes with the most
votes are circled, as shown in Fig B, and the four or five most likely causes of the effect are
determined.
Solutions are developed to correct the causes and improve the process. Criteria for judging the
possible solutions include cost, feasibility, resistance to change, consequences, training, and so
forth. Once the team agrees on solutions, testing and implementation follow.
Diagrams are posted in key locations to stimulate continued reference as similar or new problems
arise. The diagrams are revised as solutions are found and improvements are made.
The C&E diagram has nearly unlimited application in research, manufacturing, marketing, office
operations, service, and so forth. One of its strongest assets is the participation and contribution of
everyone involved in the brainstorming process. The diagrams are useful to
1. Analyze actual conditions for the purpose of product or service quality improvement, more
efficient use of resources, and reduced costs.
2. Eliminate conditions causing nonconformities and customer complaints.
3. Standardize existing and proposed operations.
4. Educate and train personnel in decision-making and corrective-action activities.
Scatter Diagrams
The simplest way to determine if a cause-and-effect relationship exists between two variables
is to plot a scatter diagram. Figure C shows the relationship between automotive speed and gas
mileage. The figure
fig C
shows that as speed increases, gas mileage decreases. Automotive speed is plotted on the x-axis
and is the independent variable. The independent variable is usually controllable. Gas mileage is
on the y-axis and is the dependent, or response, variable. Other examples of relationships are as
follows:
There are a few simple steps for constructing a scatter diagram. Data are collected as ordered
pairs (x, y). The automotive speed (cause) is controlled and the gas mileage (effect) is measured.
The table shows the resulting x, y paired data. The horizontal and vertical scales are constructed with the higher
values on the right for the x-axis and on the top for the y-axis. After the scales are labeled, the
data are plotted. Using dotted lines, the technique of plotting sample number 1 (30, 38) is
illustrated in Fig C. The x-value is 30, and the y-value is 38. Sample numbers 2 through 16 are
plotted, and the scatter diagram is complete. If two points are identical, the technique illustrated at
60 mi/h can be used. Once the scatter diagram is complete, the relationship or correlation between
the two variables can be evaluated. Figure D shows different patterns and their interpretation. At
(a), there is a positive correlation between the two variables, because as x increases, y increases.
At (b), there is a negative correlation between
Fig D
the two variables, because as x increases, y decreases. At (c), there is no correlation, and this pattern
is sometimes referred to as a shotgun pattern. The patterns described in (a), (b), and (c) are easy to
understand; however, those described in (d), (e), and (f) are more difficult. At (d), there may or may
not be a relationship between the two variables. There appears to be a negative relationship between
x and y, but it is not too strong. Further statistical analysis is needed to evaluate this pattern. At (e),
we have stratified the data to represent different causes for the same effect. Some examples are gas
mileage with the wind versus against the wind, two different suppliers of material, and two different
machines. One cause is plotted with a small solid circle, and the other cause is plotted with an open
triangle. When the data are separated, we see that there is a strong correlation. At (f), we have a
curvilinear relationship rather than a linear one.
When all the plotted points fall on a straight line, we have a perfect correlation. Because of variations
in the experiment and measurement error, this perfect situation will rarely, if ever, occur.
It is sometimes desirable to fit a straight line to the data in order to write a prediction equation. For
example, we may wish to estimate the gas mileage at 42 mi/h. A line can be placed on the scatter
diagram by sight or mathematically using least squares analysis. In either approach, the idea is to
make the deviation of the points on each side of the line equal. Where the line is extended beyond the
data, a dashed line is used, because there are no data in that area.
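A least squares fit of the kind described can be sketched as follows. Only the pair (30, 38) comes from the text; the remaining speed/mileage pairs are hypothetical:

```python
# Hypothetical (speed in mi/h, gas mileage) ordered pairs
pairs = [(30, 38), (35, 36), (40, 33), (45, 31), (50, 28), (55, 26), (60, 24)]

n = len(pairs)
sx = sum(x for x, _ in pairs)
sy = sum(y for _, y in pairs)
sxy = sum(x * y for x, y in pairs)
sxx = sum(x * x for x, _ in pairs)

# Least squares slope and intercept for the line y = a + b*x
b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
a = (sy - b * sx) / n

# Use the fitted line as a prediction equation, e.g., mileage at 42 mi/h
print(f"mileage at 42 mi/h ≈ {a + b * 42:.1f}")
```

The negative slope confirms the negative correlation: as speed increases, gas mileage decreases. Predictions outside the range of the data (below 30 or above 60 mi/h here) correspond to the dashed portion of the line and should be treated with caution.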
RUN CHART
A run chart, which is shown in Figure D, is a very simple technique for analyzing the process in the
development stage or, for that matter, when other charting techniques are not applicable. The
important point is to draw a picture of the process and let it “talk” to you. A picture is worth a
thousand words, provided someone is listening. Plotting the data points is a very effective way of
finding out about the process. This activity should be done as the first step in data analysis. Without a
run chart, other data analysis tools—such as the average, sample standard deviation, and histogram—
can lead to erroneous conclusions.
Fig D
The particular run chart shown in Figure D is referred to as an X̄ (X-bar) chart and is used to record the
variation in the average value of samples. Other charts, such as the R chart (range) or p chart
(proportion) would have also served for explanation purposes. The horizontal axis is labeled
“Subgroup Number,” which identifies a particular sample consisting of a fixed number of
observations. These subgroups are plotted by order of production, with the first one inspected being
1 and the last one on this chart being 25. The vertical axis of the graph is the variable, which in this
particular case is weight measured in kilograms.
Each small solid diamond represents the average value within a subgroup. Thus, subgroup number 5
consists of, say, four observations, 3.46, 3.49, 3.45, and 3.44, and their average is 3.46 kg. This value
is the one posted on the chart for subgroup number 5. Averages are used on control charts rather than
individual observations because average values will indicate a change in variation much faster. Also,
with two or more observations in a sample, a measure of the dispersion can be obtained for a
particular subgroup.
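The subgroup averaging described above can be sketched as follows. Subgroup 5's four observations come from the text; the other subgroups are hypothetical:

```python
# Subgroups of four weight observations (kg); the first is subgroup 5 from the text
subgroups = [
    [3.46, 3.49, 3.45, 3.44],  # average 3.46 kg, the value posted on the chart
    [3.48, 3.47, 3.46, 3.47],
    [3.45, 3.44, 3.46, 3.45],
]

# Each plotted point on the X-bar chart is a subgroup average
averages = [sum(s) / len(s) for s in subgroups]

# Center line: the average of the averages ("X-double bar")
x_double_bar = sum(averages) / len(averages)
print([round(a, 3) for a in averages], round(x_double_bar, 4))
```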
The solid line in the center of the chart can have three different interpretations, depending on the
available data. First, it can be the average of the plotted points, which in the case of an X̄ chart is
the average of the averages, or "X-double bar". Second, it can be a standard or reference value, X̄0,
based on representative prior data, an economic value based on production costs or service needs,
or an aimed-at value based on specifications. Third, it can be the population mean, μ, if that value is
known.