Software Testing and Quality Assurance

Unit 1

Software Testing Basics

● Testing is an engineering activity


Software systems are becoming more challenging to build. They play an
increasingly important role in society. People with software development skills are in
demand, and there is pressure on software development professionals to focus on
quality issues. Poor-quality software that can cause loss of life or property is no longer
acceptable to society; failure can result in catastrophic losses.

These conditions demand software development staff with interest and training in the
areas of software product and process quality. Highly qualified staff ensure that
software products are built on time, within budget, and are of the highest quality.

Quality is determined by attributes such as reliability, correctness, usability, and the
ability to meet all user requirements.

The education and training of engineers in each engineering discipline is based on the
teaching of related scientific principles as shown in Figure 1.1.

A joint task force has been formed to define a body of knowledge that covers the
software engineering discipline, to discuss the nature of education for this new
profession, and to define a code of ethics for the discipline. The members of the joint
task force are the IEEE Computer Society and the Association for Computing
Machinery (ACM).

Using an engineering approach to software development means the following:

1. The development process is well understood.

2. Projects are planned.

3. Life cycle models are defined and adhered to.

4. Standards are in place for product and process.

5. Measurements are employed to evaluate product and process quality.

6. Components are reused.


Validation and verification processes play a key role in quality determination. Engineers
should have proper education, training, and certification.

A test specialist is one whose education is based on the principles, practices and
processes that constitute the software engineering discipline and whose specific focus
is on one area of that discipline, software testing.

A test specialist who is trained as an engineer should have knowledge on the following

· Test related principles,

· Processes,

· Measurements,

· Standards,

· Plans,

· Tools and methods

· How to apply them to the testing tasks.

Testing is not an isolated collection of technical and managerial activities; these
activities should be integrated within the context of a quality testing process, one that
grows in competency and uses engineering principles to guide improvement and growth.

The Role of Process in Software Quality


The need for software products of high quality has pressured those in the software
profession to identify and quantify quality factors such as usability, testability,
maintainability and reliability and to identify engineering practices that support the
production of quality products having these favourable attributes. Among the practices
identified as contributing to the development of high-quality software are:

· Project planning,

· Requirements management,

· Development of formal specifications,

· Structured design with use of information hiding and encapsulation,

· Design and code reuse,

· Inspections and reviews,

· Product and process measurements,

· Education and training of software professionals,

· Development and application of CASE tools,

· Use of effective testing techniques,

· Integration of testing activities into the entire life cycle.

Process, in the software engineering domain, is the set of methods, practices,
standards, documents, activities, policies, and procedures that software engineers use
to develop and maintain a software system and its associated artifacts, such as project
and test plans, design documents, code, and manuals.

Adding individual practices to an existing software development process in an ad hoc
way is not satisfactory. The software development process is similar to any other
engineering activity: it must be engineered. It must be

· Designed

· Implemented

· Evaluated

· Maintained

Like other engineering processes, a software development process must evolve in a
consistent and predictable manner, and the best technical and managerial practices
must be integrated in a systematic way. Most of the software process improvement
models accepted by industry are high-level models: they focus on the software process
as a whole and do not support the specific development of any subprocess such as
design or testing.

Component of an Engineering Process

Testing as a process
The software development process is described as a series of phases, procedures, and
steps that result in the production of software products. Embedded within the software
development process are several other processes, including testing.

Testing is related to two other processes called verification and validation.

Validation is the process of evaluating a software system or component during, or at the end of,
the development cycle to determine whether it satisfies specified requirements. Validation is
usually associated with traditional execution-based testing, that is, exercising the code with
test cases.

Verification is the process of evaluating a software system or component to
determine whether the products of a given development phase satisfy the
conditions imposed at the start of that phase. Verification is usually associated with
inspections and reviews of software deliverables.

Two definitions of testing are:

Testing is described as a group of procedures carried out to evaluate
some aspect of a piece of software.

Testing can be described as a process used for revealing defects in
software, and for establishing that the software has attained a specified
degree of quality with respect to selected attributes.

Testing covers both validation and verification activities. Testing includes the
following,

· Technical reviews,

· Test planning,

· Test tracking,

· Test case design,

· Unit test,

· Integration test,

· System test,

· Acceptance test, and

· Usability test.

Testing can also be described as a dual-purpose process. It reveals defects and
evaluates quality attributes of the software such as
· Reliability,

· Security,

· Usability, and

· Correctness.

The debugging process begins after testing has been carried out and the tester has
noted that the software is not behaving as specified.

Debugging is the process of

1. Locating the fault or defect,

2. Repairing the code,

3. Retesting the code.

Testing has economic, technical and managerial aspects. Testing must be managed.
Organizational policy for testing must be defined and documented.

Basic Definitions
Many of the definitions generally used in testing are based on the terms described in the IEEE
Standards Collection for Software Engineering. The standards collection includes the IEEE
Standard Glossary of Software Engineering Terminology, which is a dictionary of software
engineering vocabulary.

Errors
An error is a mistake, misconception, or misunderstanding on the part of a software developer.

Faults (Defects)
A fault (defect) is introduced into the software as the result of an error. It is an irregularity in the
software that may cause it to behave incorrectly, and not according to its specification.
Failures
A failure is the inability of a software system or component to perform its required functions
within specified performance requirements.
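The chain from error to fault to failure can be illustrated with a small sketch. The code below is hypothetical, not from the source; it shows a misconception (error) encoded as an incorrect divisor (fault) that produces wrong observable output (failure).

```python
# Hypothetical illustration of the error -> fault -> failure chain.
# Error:   the developer misunderstands how an average is computed.
# Fault:   the misconception is encoded in the source as an incorrect divisor.
# Failure: the observable incorrect output when the faulty code executes.

def average(values):
    # Fault: the divisor should be len(values), not len(values) + 1
    return sum(values) / (len(values) + 1)

print(average([10, 20, 30]))  # prints 15.0 -- a failure; the required result is 20.0
```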

Test Cases
To detect defects in a piece of software the tester selects a set of input data and then executes
the software with the input data under a particular set of conditions.

A test case is a test-related item which contains the following information:

1. A set of test inputs. These are data items received from an external source
by the code under test. The external source can be hardware, software, or
human.

2. Execution conditions. These are conditions required for running the test, for
example, a certain state of a database, or a configuration of a hardware device.

3. Expected outputs. These are the specified results to be produced by the
code under test.
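As a sketch, the three pieces of information above could be recorded together in one test-case record. The field and function names below are illustrative, not from any standard.

```python
# Minimal sketch of a test case holding inputs, execution conditions,
# and the expected output, plus a pass/fail check against the code under test.
from dataclasses import dataclass

@dataclass
class TestCase:
    test_inputs: dict           # data items supplied to the code under test
    execution_conditions: dict  # e.g., a required database state or device config
    expected_output: object     # the specified result for these inputs

tc = TestCase(
    test_inputs={"a": 2, "b": 3},
    execution_conditions={"db_state": "empty"},
    expected_output=5,
)

def add(a, b):                  # hypothetical code under test
    return a + b

actual = add(**tc.test_inputs)
assert actual == tc.expected_output  # pass/fail decision for this test case
```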

Test
A test is a group of related test cases, or a group of related test cases and test procedures.

Test Oracle
A test oracle is a document, or piece of software that allows testers to determine whether a test
has been passed or failed.

Test Bed
A test bed is an environment that contains all the hardware and software needed to test a
software component or a software system.

Software Quality
Software quality can be defined in either of two ways:

1. Quality relates to the degree to which a system, system component, or
process meets specified requirements.

2. Quality relates to the degree to which a system, system component, or
process meets customer or user needs or expectations.

Metric
A metric is a quantitative measure of the degree to which a system, system component, or
process has a given attribute.

Quality Metric
A quality metric is a quantitative measurement of the degree to which an item possesses a
given quality attribute. Some examples of quality metrics are:

1. Correctness—the degree to which the system performs its intended function.

2. Reliability—the degree to which the software is expected to perform its
required functions under stated conditions for a stated period of time.

3. Usability—relates to the degree of effort needed to learn, operate, prepare
input for, and interpret output of the software.

4. Integrity—relates to the system’s ability to withstand both intentional and
accidental attacks.

5. Portability—relates to the ability of the software to be transferred from one
environment to another.

6. Maintainability—the effort needed to make changes in the software.

7. Interoperability—the effort needed to link or couple one system to another.

Software Quality Assurance Group


The software quality assurance (SQA) group is a team of people with the necessary training and
skills to ensure that all necessary actions are taken during the development process so that the
resulting software conforms to established technical requirements.

Reviews
A review is a group meeting whose purpose is to evaluate a software artifact or a set of software
artifacts.

Software Testing Principles


Testing principles are important to test specialists and engineers because they are the
foundation for developing testing knowledge and acquiring testing skills. They also
provide guidance for defining testing activities. A principle can be defined as,

1. A general or fundamental law.

2. A rule or code of conduct.

3. The laws or facts of nature underlying the working of an artificial device.

In the software domain, principles may also refer to rules or codes of conduct relating to
professionals who design, develop, test, and maintain software systems. The following
are a set of testing principles,

Principle 1. Testing is the process of exercising a software component using a selected
set of test cases, with the intent of revealing defects and evaluating quality.

This principle supports testing as an execution-based activity to detect defects. It also
supports the separation of testing from debugging, since the intent of debugging is to
locate defects and repair the software.

The term “software component” means any unit of software ranging in size and
complexity from an individual procedure or method, to an entire software system.

The term “defects” represents any deviations in the software that have a negative
impact on its functionality, performance, reliability, security, and/or any other of its
specified quality attributes.

Principle 2. When the test objective is to detect defects, a good test case is one
that has a high probability of revealing an as yet undetected defect.

Testers must carry out testing in the same way that scientists carry out experiments.
Testers need to create a hypothesis and work towards proving or disproving it; that is,
they must demonstrate the presence or absence of a particular type of defect.

Principle 3. Test results should be inspected meticulously.

Testers need to carefully inspect and interpret test results. Several erroneous and costly
scenarios may occur if care is not taken.

A failure may be overlooked, and the test may be granted a “pass” status when in reality
the software has failed the test. Testing may continue based on erroneous test results.
The defect may be revealed at some later stage of testing, but in that case it may be
more costly and difficult to locate and repair.

Principle 4. A test case must contain the expected output or result.

The test case is of no value unless there is an explicit statement of the expected outputs
or results. Expected outputs allow the tester to determine

· Whether a defect has been revealed,

· Pass/fail status for the test.

It is very important to have a correct statement of the output so that time is not spent
due to misconceptions about the outcome of a test. The specification of test inputs and
outputs should be part of test design activities.

Principle 5. Test cases should be developed for both valid and invalid input conditions.

A tester must not assume that the software under test will always be provided with valid
inputs. Inputs may be incorrect for several reasons.

Software users may have misunderstandings, or lack information about the nature of
the inputs. They often make typographical errors even when complete/correct
information is available. Devices may also provide invalid inputs due to erroneous
conditions and malfunctions.
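Principle 5 can be sketched against a hypothetical input-validating function (the function name and the range chosen are illustrative): test cases cover one valid input and several invalid ones, including a typographical error and out-of-range values.

```python
# Sketch: test cases for both valid and invalid input conditions.
def parse_age(text):
    """Return the age as an int, or raise ValueError for invalid input."""
    age = int(text)                  # raises ValueError for non-numeric text
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

# Valid input condition
assert parse_age("42") == 42

# Invalid input conditions: a typo ("4z") and out-of-range values
for bad in ["4z", "-1", "999"]:
    try:
        parse_age(bad)
    except ValueError:
        pass                         # expected: the invalid input is rejected
    else:
        raise AssertionError(f"invalid input {bad!r} was accepted")
```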

Principle 6. The probability of the existence of additional defects in a software
component is proportional to the number of defects already detected in that component.

The higher the number of defects already detected in a component, the more likely it is
to have additional defects when it undergoes further testing.

If there are two components A and B, and testers have found 20 defects in A and 3
defects in B, then the probability of the existence of additional defects in A is higher than
B.

Principle 7. Testing should be carried out by a group that is independent of the
development group.

This principle holds for psychological as well as practical reasons. It is difficult for
developers to admit that software they have created and developed can be faulty.
Testers must realize that

· Developers take great pride in their work,

· Practically, it is difficult for the developer to conceptualize where
defects could be found.

Principle 8. Tests must be repeatable and reusable.

The tester needs to record the exact conditions of the test, any special events that
occurred, the equipment used, and carefully note the results. This information is very
useful to the developers when the code is returned for debugging, so that they can
duplicate test conditions. It is also useful for tests that need to be repeated after defect
repair.

Principle 9. Testing should be planned.

Test plans should be developed for each level of testing. The objective for each level
should be described in the associated plan. The objectives should be stated as
quantitatively as possible.

Principle 10. Testing activities should be integrated into the software life cycle.

Testing activity should be integrated into the software life cycle starting as early as in the
requirements analysis phase, and continue on throughout the software life cycle in
parallel with development activities.

Principle 11. Testing is a creative and challenging task.

Difficulties and challenges for the tester include the following:

· A tester needs to have good knowledge of the software engineering
discipline.

· A tester needs to have knowledge, from both experience and education,
of how software is specified, designed, and developed.

· A tester needs to be able to manage many details.

· A tester needs to have knowledge of fault types and where faults of a
certain type might occur in code construction.

· A tester needs to reason like a scientist and make hypotheses that relate
to presence of specific types of defects.
· A tester needs to have a good understanding of the problem domain of
the software that he/she is testing. Familiarity with a domain may come from
educational, training, and work-related experiences.

· A tester needs to create and document test cases. To design the test
cases the tester must select inputs, often from a very wide domain. The
selected test cases should have the highest probability of revealing a defect.
Familiarity with the domain is essential.

· A tester needs to design and record test procedures for running the tests.

· A tester needs to plan for testing and allocate proper resources.

· A tester needs to execute the tests and is responsible for recording
results.

· A tester needs to analyse test results and decide on success or failure for
a test. This involves understanding and keeping track of a huge amount of
detailed information.

· A tester needs to learn to use tools and keep up to date with the newest
test tools.

· A tester needs to work and cooperate with requirements engineers,
designers, and developers, and often must establish a working relationship
with clients and users.

· A tester needs to be educated and trained in this specialized area and
often will be required to update his/her knowledge on a regular basis due to
changing technologies.

The Tester’s Role in a Software Development Organization

The tester’s job is to

· Reveal defects,

· Find weak points,

· Identify inconsistent behaviour,

· Find circumstances where the software does not work as expected.

It is difficult for developers to effectively test their own code. A tester needs very good
programming experience in order to understand how code is constructed, and to know
where, and what types of, defects could occur.

A tester should work with the developers to produce high-quality software that meets
the customers’ requirements.

Teams of testers and developers are very common in industry, and projects should have
a correct developer/tester ratio. The ratio will vary depending on

· Available resources,

· Type of project,

· TMM level.

· Nature of the project

· Project Schedules

Testers also need to work with requirements engineers to make sure that
requirements are testable, and to plan for system and acceptance tests.

Testers also need to work with designers to plan for integration and unit test.

Test managers need to cooperate with project managers in order to develop
reasonable test plans, and with upper management to provide input for the
development and maintenance of organizational

· Testing standards,

· Policies,

· Goals.

Testers also need to cooperate with software quality assurance staff and software
engineering process group members.

Testers are a part of the development group. They concentrate on testing. They may be
part of the software quality assurance group. Testers are specialists, their main function
is to plan, execute, record, and analyse tests. They do not debug software. When
defects are detected during testing, software should be returned to the developers.

The developers locate the defect and repair the code. The developers have a detailed
understanding of the code, and they can perform debugging better.

Testers need the support of management. Testers ensure that developers release code
with few or no defects, and that marketers can deliver software that satisfies the
customers’ requirements, and is reliable, usable, and correct.

Origins of Defects
Defects have negative effects on software users. Software engineers work very hard to produce
high-quality software with a low number of defects.

Figure 1.4 Origins of Defects

1. Education: The software engineer did not have the proper educational
background to prepare the software artifact.

2. Communication: The software engineer was not informed about something by a
colleague.

3. Oversight: The software engineer omitted to do something.

4. Transcription: The software engineer knows what to do, but makes a mistake in
doing it.

5. Process: The process used by the software engineer misdirected his/her actions.

The impact of a defect on the user ranges from a minor inconvenience to rendering the
software unfit for use. Testers have to discover these defects before the software is put into
operation. The results of the tests are analysed to determine whether the software has
behaved correctly.

In this scenario a tester develops hypotheses about possible defects. Test cases are then
designed based on the hypotheses. The hypotheses are used to,

· Design test cases.

· Design test procedures.

· Assemble test sets.

· Select the testing levels suitable for the tests.

· Evaluate the results of the tests.

1. Fault Model

A fault (defect) model can be described as a link between the error made, and the fault/defect in
the software.

2. Defect Repository
To increase the effectiveness of their testing and debugging processes, software organizations
need to initiate the creation of a defect database, or defect repository. The defect repository
supports storage and retrieval of defect data from all projects in a centrally accessible location.
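A defect repository record might be sketched as follows; the field names and values are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of a defect repository: records from all projects stored
# in one place, with retrieval by attribute for test planning.
from dataclasses import dataclass

@dataclass
class DefectRecord:
    project: str
    defect_class: str    # e.g., "requirements", "design", "code", "testing"
    origin: str          # e.g., "education", "communication", "oversight"
    phase_injected: str
    phase_detected: str
    description: str

repository = []          # centrally accessible store; here simply a list

repository.append(DefectRecord(
    project="payroll", defect_class="code", origin="oversight",
    phase_injected="coding", phase_detected="unit test",
    description="loop iterates one time too many",
))

# Retrieval across projects, e.g., all code defects to guide test design
code_defects = [d for d in repository if d.defect_class == "code"]
print(len(code_defects))  # prints 1
```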

Defect Classes, the Defect Repository, and Test Design
Defects can be classified in many ways. It is important for an organization to follow a single
classification scheme and apply it to all projects.

Some defects will fit into more than one class or category. Because of this problem, developers,
testers, and SQA staff should try to be as consistent as possible when recording defect data.

The defect types and frequency of occurrence should be used in test planning, and test design.
Execution-based testing strategies should be selected that have the strongest possibility of
detecting particular types of defects. The four classes of defects are as follows,

· Requirements and specifications defects,

· Design defects,

· Code defects,

· Testing defects

1. Requirements and Specifications Defects


The beginning of the software life cycle is important for ensuring high quality in the software
being developed. Defects injected in early phases can be very difficult to remove in later
phases. Since many requirements documents are written using a natural language
representation, they may become

· Ambiguous,

· Contradictory,

· Unclear,

· Redundant,

· Imprecise.

Some specific requirements/specification defects are:

1.1 Functional Description Defects


The overall description of what the product does, and how it should behave (inputs/outputs), is
incorrect, ambiguous, and/or incomplete.

1.2 Feature Defects


A feature is described as a distinguishing characteristic of a software component or system.
Feature defects are due to feature descriptions that are missing, incorrect, incomplete, or
unnecessary.

1.3 Feature Interaction Defects


These are due to an incorrect description of how the features should interact with each other.

1.4 Interface Description Defects


These are defects that occur in the description of how the target software is to interface with
external software, hardware, and users.

2. Design Defects
Design defects occur when the following are incorrectly designed,

· System components,

· Interactions between system components,


· Interactions between the components and outside software/hardware, or
users.

Design defects include defects in the design of algorithms, control, logic, data elements,
module interface descriptions, and external software/hardware/user interface descriptions.
The design defect classes are:

2.1 Algorithmic and Processing Defects


These occur when the processing steps in the algorithm as described by the pseudo code are
incorrect.

2.2 Control, Logic, and Sequence Defects


Control defects occur when logic flow in the pseudo code is not correct.

2.3 Data Defects


These are associated with incorrect design of data structures.

2.4 Module Interface Description Defects


These defects occur because of incorrect or inconsistent usage of parameter types, incorrect
number of parameters or incorrect ordering of parameters.

2.5 Functional Description Defects


The defects in this category include incorrect, missing, or unclear design elements.

2.6 External Interface Description Defects


These are derived from incorrect design descriptions for interfaces with COTS components,
external software systems, databases, and hardware devices.

3. Coding Defects
Coding defects are derived from errors in implementing the code. Coding defects classes are
similar to design defect classes. Some coding defects come from a failure to understand
programming language constructs, and miscommunication with the designers.

3.1 Algorithmic and Processing Defects


Code related algorithm and processing defects include
· Unchecked overflow and underflow conditions,

· Comparing inappropriate data types,

· Converting one data type to another,

· Incorrect ordering of arithmetic operators,

· Misuse or omission of parentheses,

· Precision loss,

· Incorrect use of signs.

3.2 Control, Logic and Sequence Defects


These defects include incorrect expression of case statements, incorrect iteration of loops,
and missing paths.

3.3 Typographical Defects


These are mainly syntax errors, for example, incorrect spelling of a variable name, that are
usually detected by a compiler, self-reviews, or peer reviews.

3.4 Initialization Defects


These defects occur when initialization statements are omitted or are incorrect. This may
occur because of misunderstandings or lack of communication between programmers, or
between programmers and designers, carelessness, or misunderstanding of the programming
environment.

3.5 Data-Flow Defects


Data-Flow defects occur when the code does not follow the necessary data-flow conditions.

3.6 Data Defects


These are indicated by incorrect implementation of data structures.

3.7 Module Interface Defects


Module interface defects occur because of incorrect or inconsistent parameter types, an
incorrect number of parameters, or improper ordering of parameters.

3.8 Code Documentation Defects


When the code documentation does not describe what the program actually does, or is
incomplete or ambiguous, it is called a code documentation defect.

3.9 External Hardware, Software Interfaces Defects


These defects occur because of problems related to

· System calls,

· Links to databases,

· Input/output sequences,

· Memory usage,

· Resource usage,

· Interrupts and exception handling,

· Data exchanges with hardware,

· Protocols,

· Formats,

· Interfaces with build files,

· Timing sequences.

4. Testing Defects
Test plans, test cases, test harnesses, and test procedures can also contain defects. These
defects are called testing defects. Defects in test plans are best detected using review
techniques.

4.1 Test Harness Defects


In order to test software, at the unit and integration levels, auxiliary code must be developed.
This is called the test harness or scaffolding code. The test harness code should be carefully
designed, implemented, and tested since it is a work product and this code can be reused when
new releases of the software are developed.
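Test harness code usually takes the form of "drivers" that call the unit under test and "stubs" that stand in for modules not yet built. The sketch below is a hypothetical illustration of both; all names are invented for the example.

```python
# Sketch of scaffolding code: a stub replaces a collaborator, a driver
# feeds inputs to the unit under test and checks results.

def tax_rate_stub(region):
    # Stub: stands in for the real tax-rate lookup module during unit test
    return 0.10

def compute_total(price, region, rate_lookup):
    # Unit under test: depends on a rate-lookup collaborator
    return price + price * rate_lookup(region)

def driver():
    # Driver: runs the test cases and compares actual vs. expected results
    cases = [((100.0, "north"), 110.0), ((50.0, "south"), 55.0)]
    for (price, region), expected in cases:
        actual = compute_total(price, region, tax_rate_stub)
        assert abs(actual - expected) < 1e-9, f"failed for {price}, {region}"

driver()
print("harness run complete")
```

Because the harness is itself a work product, checks like the tolerance comparison above should be designed and reviewed as carefully as the production code.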

4.2 Test Case Design and Test Procedure Defects


These consist of incorrect, incomplete, missing, or inappropriate test cases and test procedures.

Developer/Tester Support for Developing a Defect Repository
Software engineers and test specialists should follow the examples of engineers in
other disciplines who make use of defect data. A requirement for repository
development should be a part of testing and/or debugging policy statements.

Forms and templates should be designed to collect the data. Each defect and frequency
of occurrence must be recorded after testing.

Defect monitoring should be done for each on-going project. The distribution of defects
will change when changes are made to the process.

Figure 1.5 The Defect Repository and Support for TMM Maturity Goals

The defect data is useful for test planning, a TMM level 2 maturity goal. It helps a
tester to select applicable testing techniques, design the test cases, and allocate the
amount of resources needed to detect and remove defects. This allows testers to
estimate testing schedules and costs.

The defect data can support debugging activities also. A defect repository can help in
implementing several TMM maturity goals including

· Controlling and monitoring of test,

· Software quality evaluation and control,

· Test measurement,

· Test process improvement.


Unit 2

Testing Techniques and Levels of Testing:

Using White Box Approach to Test design– Static Testing Vs. Structural
Testing, Code Functional Testing, Coverage and Control Flow Graphs,
Using Black Box Approaches to Test Case Design, Random Testing,
Requirements based testing, Decision tables, State-based testing,
Cause-effect graphing, Error guessing, Compatibility testing, Levels of
Testing -Unit Testing, Integration Testing, Defect Bash Elimination. System
Testing - Usability and Accessibility Testing, Configuration Testing,
Compatibility Testing.

White Box Testing

White Box Testing is a testing technique in which the software’s internal
structure, design, and coding are tested to verify input-output flow and
improve design, usability, and security. In white box testing, code is
visible to testers, so it is also called clear box testing, open box testing,
transparent box testing, code-based testing, and glass box testing.

It is one of the two parts of the box testing approach to software testing. Its
counterpart, black box testing, involves testing from an external or
end-user perspective. White box testing, in contrast, is based on the inner
workings of an application and revolves around internal testing.

The term “WhiteBox” was used because of the see-through box
concept. The clear box or WhiteBox name symbolizes the ability to see
through the software’s outer shell (or “box”) into its inner workings.
Likewise, the “black box” in “Black Box Testing” symbolizes not being
able to see the inner workings of the software, so that only the end-user
experience can be tested.

What do you verify in White Box Testing?

White box testing involves the testing of the software code for the
following:

● Internal security holes

● Broken or poorly structured paths in the coding processes

● The flow of specific inputs through the code

● Expected output

● The functionality of conditional loops

● Testing of each statement, object, and function on an individual basis

The testing can be done at the system, integration, and unit levels of
software development. One of the basic goals of white box testing is to
verify a working flow for an application. It involves testing a series of
predefined inputs against expected or desired outputs, so that when a
specific input does not result in the expected output, you have
encountered a bug.

How do you perform White Box Testing?

We have divided it into two basic steps to give you a simplified
explanation of white box testing. This is what testers do when testing an
application using the white box testing technique:

STEP 1) UNDERSTAND THE SOURCE CODE

The first thing a tester will often do is learn and understand the source
code of the application. Since white box testing involves the testing of
the inner workings of an application, the tester must be very
knowledgeable in the programming languages used in the applications
they are testing. Also, the testing person must be highly aware of secure
coding practices. Security is often one of the primary objectives of
testing software. The tester should be able to find security issues and
prevent attacks from hackers and naive users who might inject
malicious code into the application either knowingly or unknowingly.

STEP 2) CREATE TEST CASES AND EXECUTE


The second basic step to white box testing involves testing the
application’s source code for proper flow and structure. One way is by
writing more code to test the application’s source code. The tester will
develop little tests for each process or series of processes in the
application. This method requires that the tester have intimate knowledge of the code, and it is often done by the developer. Other methods include manual testing, trial-and-error testing, and the use of testing tools, as we will explain further on in this article.

The goal of WhiteBox testing in software engineering is to verify all the decision branches, loops, and statements in the code.

Consider, for example, a function that adds two inputs A and B and reports “Positive” if the sum is greater than zero and “Negative” otherwise. To exercise both branches of that function, WhiteBox test cases would be:

● A = 1, B = 1 (exercises the positive branch)
● A = -1, B = -3 (exercises the negative branch)
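A minimal sketch of such a function (assumed for illustration; the original example code is not shown in this text) makes the branch coverage of these two test cases concrete:

```python
# Hypothetical function under test: it adds two inputs and
# takes one of two branches depending on the sign of the sum.
def classify_sum(a, b):
    result = a + b
    if result > 0:          # decision branch 1
        return "Positive"
    else:                   # decision branch 2
        return "Negative"

# A = 1, B = 1 exercises the "Positive" branch.
assert classify_sum(1, 1) == "Positive"
# A = -1, B = -3 exercises the "Negative" branch.
assert classify_sum(-1, -3) == "Negative"
```

Together the two cases achieve 100% branch coverage of the single decision in the function.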

White Box Testing Techniques

A major white box testing technique is Code Coverage analysis. Code Coverage analysis eliminates gaps in a test case suite. It identifies areas of a program that are not exercised by a set of test cases. Once gaps are identified, you create test cases to verify untested parts of the code, thereby increasing the quality of the software product.

There are automated tools available to perform code coverage analysis. Below are a few coverage analysis techniques a white box tester can use:

Statement Coverage – This technique requires every possible statement in the code to be tested at least once during the testing process of software engineering.

Branch Coverage – This technique checks every possible path (if-else and other conditional loops) of a software application.

Apart from the above, there are numerous coverage types such as Condition Coverage, Multiple Condition Coverage, Path Coverage, Function Coverage, etc. Each technique has its own merits and attempts to test (cover) all parts of the software code. Using Statement and Branch coverage, you can generally attain 80-90% code coverage, which is usually sufficient.

Following are important WhiteBox Testing Techniques:

● Statement Coverage
● Decision Coverage
● Branch Coverage
● Condition Coverage
● Multiple Condition Coverage
● Finite State Machine Coverage
● Path Coverage
● Control flow testing
● Data flow testing

Types of White Box Testing

White box testing encompasses several testing types used to evaluate the usability of an application, a block of code, or a specific software package. They are listed below:

● Unit Testing: It is often the first type of testing done on an application. Unit Testing is performed on each unit or block of code as it is developed. Unit Testing is essentially done by the programmer. As a software developer, you develop a few lines of code, a single function, or an object and test it to make sure it works before continuing. Unit Testing helps identify the majority of bugs early in the software development lifecycle. Bugs identified at this stage are cheaper and easier to fix.
● Testing for Memory Leaks: Memory leaks are leading causes of
slower running applications. A QA specialist who is experienced at
detecting memory leaks is essential in cases where you have a
slow running software application.
Apart from the above, a few testing types are part of both black box and
white box testing. They are listed below

● White Box Penetration Testing: In this testing, the tester/developer has full information about the application’s source code, detailed network information, the IP addresses involved, and all the server information the application runs on. The aim is to attack the code from several angles to expose security threats.
● White Box Mutation Testing: Mutation testing is often used to
discover the best coding techniques to use for expanding a
software solution.

White Box Testing Tools

Below is a list of top white box testing tools.

● EclEmma
● NUnit
● PyUnit
● HTMLUnit
● CppUnit

Advantages of White Box Testing

● Code optimization by finding hidden errors.
● White box test cases can be easily automated.
● Testing is more thorough, as all code paths are usually covered.
● Testing can start early in the SDLC, even if the GUI is not available.

Disadvantages of WhiteBox Testing

● White box testing can be quite complex and expensive.
● Developers, who usually execute white box test cases, often dislike it; white box testing performed by developers may not be detailed and can lead to production errors.
● White box testing requires professional resources with a detailed understanding of programming and implementation.
● White box testing is time-consuming; larger applications take considerable time to test fully.

Conclusion:

● White box testing can be quite complex. The complexity involved has a lot to do with the application being tested. A small application that performs a single simple operation could be white box tested in a few minutes, while larger applications take days, weeks, or even longer to test fully.
● White box testing should be performed on a software application as it is being developed, after it is written, and again after each modification.

What is Static Testing?


Static Testing is a software testing technique in which the software is tested without executing the code. It has two parts, as listed below:
· Review - Typically used to find and eliminate errors or ambiguities in documents such as requirements, design, test cases, etc.
· Static analysis - The code written by developers is analysed (usually by tools) for structural defects that may lead to failures.

Types of Reviews:
The main types of review are informal review, walkthrough, technical review, and inspection.

Static Analysis - By Tools:

Following are the types of defects found by the tools during static analysis:
· A variable with an undefined value
· Inconsistent interface between modules and components
· Variables that are declared but never used
· Unreachable code (or) Dead Code
· Programming standards violations
· Security vulnerabilities
· Syntax violations
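As an illustration (a contrived snippet, not from the original text), the fragment below contains two of the defect types listed: a variable that is declared but never used, and unreachable (dead) code. Static analysis tools such as pylint or flake8 would flag both without running the program.

```python
def apply_discount(price):
    unused_rate = 0.5            # declared but never used -> flagged by static analysis
    if price < 0:
        return 0
        print("negative price")  # unreachable (dead) code -> flagged by static analysis
    return price * 0.9

# The code still executes; static analysis finds these defects without running it.
assert apply_discount(100) == 90.0
assert apply_discount(-5) == 0
```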

What is Structural Testing?


Structural testing, also known as glass box testing or white box testing, is an approach where the tests are derived from knowledge of the software's structure or internal implementation.
Other names for structural testing include clear box testing, open box testing, logic-driven testing and path-driven testing.

Structural Testing Techniques:


· Statement Coverage - This technique is aimed at exercising all
programming statements with minimal tests.
· Branch Coverage - This technique is running a series of tests to
ensure that all branches are tested at least once.
· Path Coverage - This technique corresponds to testing all possible
paths which means that each statement and branch are covered.

Calculating Structural Testing Effectiveness:


Statement Coverage = (Number of Statements Exercised / Total Number of Statements) x 100%

Branch Coverage = (Number of Decision Outcomes Tested / Total Number of Decision Outcomes) x 100%

Path Coverage = (Number of Paths Exercised / Total Number of Paths in the Program) x 100%
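These formulas are straightforward to apply; the sketch below uses illustrative numbers (not taken from the text) for each metric:

```python
def coverage_percent(exercised, total):
    """Generic coverage formula: (exercised / total) x 100%."""
    return exercised / total * 100

# e.g. 45 of 50 statements, 12 of 16 decision outcomes, 3 of 8 paths:
assert coverage_percent(45, 50) == 90.0   # statement coverage
assert coverage_percent(12, 16) == 75.0   # branch coverage
assert coverage_percent(3, 8) == 37.5     # path coverage
```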

Advantages of Structural Testing:


· Forces test developer to reason carefully about implementation
· Reveals errors in "hidden" code
· Spots the Dead Code or other issues with respect to best
programming practices.

Disadvantages of Structural Testing:

· Expensive, as one has to spend both time and money to perform white box testing.
· There is a possibility that a few lines of code are missed accidentally.
· In-depth knowledge of the programming language is necessary to perform white box testing.

Functional Testing
Before proceeding to functional testing, we should understand what testing is.

What is testing?
In simple terms, testing is comparing the actual result with the expected result. Testing is done to identify whether all functions are working as expected.

What is Software Testing?

Software testing is a technique to check whether the actual result matches the expected result and to ensure that the software is free of defects and bugs.

Software testing ensures that the application has no defects and that no requirement is missing relative to the actual need. Software testing can be done either manually or through automation.
Software testing is also defined as verification of the application under test (AUT).

There are two types of testing, functional and non-functional:

Functional Testing:
It is a type of software testing used to verify the functionality of the software application, i.e., whether each function is working according to the requirement specification. In functional testing, each function is tested by supplying input values, determining the output, and comparing the actual output with the expected value. Functional testing is performed as black-box testing, which is carried out to confirm that the functionality of an application or system behaves as we expect. It is done to verify the functionality of the application.

Functional testing is also called black-box testing because it focuses on the application specification rather than the actual code: the tester tests the program's externally visible behavior rather than its internal structure.

Goal of functional testing

The purpose of functional testing is to check the primary entry functions, the essential usable functions, and the flow of the GUI screens. Functional testing also verifies that error messages are displayed so that the user can easily navigate throughout the application.

What is the process of functional testing?

Testers follow these steps in functional testing:

● The tester verifies the requirement specification of the software application.
● After analyzing the requirement specification, the tester makes a test plan.
● After planning the tests, the tester designs the test cases.
● After designing the test cases, the tester prepares a traceability matrix document.
● The tester executes the test case design.
● Coverage is analyzed to examine which areas of the application have been tested.
● Defect management is carried out to manage defect resolution.

What to test in functional testing? Explain

The main objective of functional testing is checking the functionality of the software system. It concentrates on:

● Basic Usability: Functional testing involves the usability testing of the system. It checks whether a user can navigate freely, without any difficulty, through the screens.
● Accessibility: Functional testing tests the accessibility of the functions.
● Mainline functions: It focuses on testing the main features.
● Error Conditions: Functional testing is used to check error conditions. It checks whether appropriate error messages are displayed.

Explain the complete process to perform functional testing.


There are the following steps to perform functional testing:

● There is a need to understand the software requirement.


● Identify test input data
● Compute the expected outcome with the selected input values.
● Execute test cases
● Comparison between the actual and the computed result

Explain the types of functional testing.

The main objective of functional testing is to test the functionality of the component.

Functional testing is divided into multiple types, commonly including unit testing, smoke testing, sanity testing, integration testing, regression testing and user acceptance testing.

What are the advantages of Functional Testing?


Advantages of functional testing are:

● It produces a defect-free product.


● It ensures that the customer is satisfied.
● It ensures that all requirements are met.
● It ensures the proper working of all the functionality of an application/software/product.
● It ensures that the software/product works as expected.
● It ensures security and safety.
● It improves the quality of the product.

Example: Here, we are giving an example of banking software. When money is transferred from bank A to bank B, several things can go wrong: bank B does not receive the correct amount, the wrong fee is applied, the money is not converted into the correct currency, the transfer is recorded incorrectly, or bank A does not receive statement advice from bank B that the payment has been received. These issues are critical and can be avoided by proper functional testing.

What are the disadvantages of functional testing?


Disadvantages of functional testing are:

● Functional testing can miss critical and logical errors in the system.
● This testing alone is not a guarantee that the software is ready to go live.
● The possibility of conducting redundant testing is high in functional
testing.

Coverage and Control Flow Graphs

The application of coverage analysis is typically associated with the use of control and data flow models to represent program structural elements and data. The logic elements most commonly considered for coverage are based on the flow of control in a unit of code. For example,

(i) program statements;
(ii) decisions/branches (these influence the program flow of control);
(iii) conditions (expressions that evaluate to true/false, and do not contain any other true/false-valued expressions);
(iv) combinations of decisions and conditions;
(v) paths (node sequences in flow graphs).

These logical elements are rooted in the concept of a program prime. A program prime is an atomic programming unit. All structured programs can be built from three basic primes: sequential (e.g., assignment statements), decision (e.g., if/then/else statements), and iterative (e.g., while and for loops). Graphical representations for these three primes are shown in Figure 5.1.
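In code form (a sketch for illustration; Figure 5.1 itself is not reproduced here), the three primes look like this:

```python
def sum_of_positives(values):
    total = 0             # sequential prime: an assignment statement
    for v in values:      # iterative prime: a loop
        if v > 0:         # decision prime: if/then/else
            total += v
    return total

# All three primes combine to build this small structured program.
assert sum_of_positives([3, -1, 4]) == 7
```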

What is black box testing?


Black box testing is a powerful technique to check the application under test
from the user’s perspective. Black box testing is used to test the system
against external factors responsible for software failures. This testing
approach focuses on the input that goes into the software, and the output
that is produced. The testing team does not cover the inside details such
as code, server logic, and development method.
Black box testing is based on the requirements and checks the system to
validate against predefined requirements.

Various parameters checked in black box testing are:


· Accurate actions performed by users
· System’s interaction with the inputs
· The response time of the system
· Use of data structures
· Issues in the user interface
· Usability issues
· Performance issues
· Abrupt application failure, or inability to start or finish

Types of Black Box Testing


There are many different types of Black Box Testing, some of them are
given below:
· Functional testing – This is a type of black box testing which is related to the
functional requirements of a system; Functional testing is concerned only with
the functional requirements of a system and covers how well the system executes
its functions.
· Non-functional testing – This black box testing type is not related to testing
of specific functionality, Nonfunctional testing is concerned with the
non-functional requirements and is designed specifically to evaluate the
readiness of a system according to the various criteria which are not covered by
functional testing.
· Regression testing – Regression testing is performed after code fixes, upgrades, or any other system maintenance to check that the new changes have not affected any existing functionality.

Black box testing example:


A simple login screen of software or a web application will be tested for
seamless user login. The login screen has two fields, username and
password as an input and the output will be to enable access to the
system.
Black box testing will not consider the specifics of the code; it will test that a valid username and password log the user in to the right account. This form of testing technique checks the input and output:
· A user is logged in when they input an existing username and the correct password
· A user receives an error message when they enter a username with an incorrect password
Black box testing is also known as opaque, closed box, or function-centric testing. It emphasizes the behavior of the software. Black box testing checks scenarios where the system can break.
For example, a user might enter the password in the wrong format, and a
user might not receive an error message on entering an incorrect
password.

When do we do Black Box testing?

· Unlike traditional white box testing, black box testing is beneficial for testing software usability.
· To test the overall functionality of the system under test.
· Black box testing gives you a broader picture of the software.
· This testing approach sees an application from a user’s perspective.
· To test the software as a whole system rather than as different modules.

Various approaches to black-box testing

There is a set of approaches for black-box testing.
Manual UI Testing: In this approach, a tester checks the system as a user would, checking and verifying the user data and error messages.
Automated UI Testing: In this approach, user interaction with the system is recorded to find errors and glitches. Testers can replay the recordings on demand or on a schedule.
Documentation Testing: In this approach, a tester purely checks the input
and output of the software. Testers consider what system should perform
rather than how. It is a manual approach to testing.

What are Black Box testing techniques?


There are various test case design techniques applied for black-box testing:
1. Boundary Value Analysis
2. Equivalence partitioning
3. State Transition Testing
4. Decision Table Testing
5. Graph-Based Testing
6. Error Guessing Technique
1- Boundary Value Analysis
It is the most widely used black-box testing technique, and it is also the basis for equivalence testing. Boundary value analysis tests the software with test cases built from extreme values of the test data. BVA is used to identify flaws or errors that arise at the limits of the input data.
For example: an age field in a test case should accept valid data anywhere between 1 and 100. According to boundary value analysis, the software will be tested against four test values, 0, 1, 100, and 101, to check the system’s response at the boundary values.
2- Equivalence partitioning
This test case design technique checks the input and output by dividing the input into equivalence classes. Data from each class must be tested at least once to ensure maximum test coverage; the technique reduces the redundancy of inputs and avoids exhaustive testing.
For example: the age input mentioned above has three classes, from each of which one value is tested.
Valid class: 1 to 100 (any number); invalid class: -1 (a representative value below the range); invalid class: 101 (a representative value above the range).
3- State Transition Testing
This testing technique uses the inputs, outputs, and the state of the
system during the testing phase. It checks the software against the
sequence of transitions or events among the test data.
Based on the type of software that is tested, it checks for the behavioral
changes of a system in a particular state or another state while
maintaining the same inputs.
For example, a login page will let you input a username and password for up to three attempts. Each incorrect password sends the user back to the login page. After the third failed attempt, the user is sent to an error page. The state transition method considers the various states of the system and the inputs, and passes only the right sequence during testing.
4- Decision Table Testing
This approach creates test cases based on various possibilities. It
considers multiple test cases in a decision table format where each
condition is checked and fulfilled, to pass the test and provide accurate
output. It is preferred in case of various input combinations and multiple
possibilities.
For example, a food delivery application will check various payment modes as input to place the order, making the decision based on the table.
Case 1: If the end-user has a card, the system will not check for cash or a coupon and will take the action to place the order.
Case 2: If the end-user has a coupon, the card and cash will not be checked, and the action will be taken.
Case 3: If the end-user has cash, the action will be taken.
Case 4: If the end-user has none of these, no action will be taken.
5- Graph-Based Testing:
It is similar to the decision-table approach to test case design, where the relationships between links and input cases are considered.
6- Error Guessing Technique:
This method of designing test cases is about anticipating the inputs and outputs that might expose errors in the system. It depends on the skills and judgment of the tester.
Comparison testing
This method uses two different versions of the same software to compare and validate the results.
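The boundary value and equivalence partitioning examples above can be sketched as follows, assuming a hypothetical validator for the 1-100 age range:

```python
def is_valid_age(age):
    # Hypothetical validator: accepts ages 1..100 inclusive.
    return 1 <= age <= 100

# Boundary value analysis: values just outside and just inside each limit.
assert is_valid_age(0) is False     # just below the minimum
assert is_valid_age(1) is True      # the minimum
assert is_valid_age(100) is True    # the maximum
assert is_valid_age(101) is False   # just above the maximum

# Equivalence partitioning: one representative value per class suffices.
assert is_valid_age(47) is True     # valid class: 1..100
assert is_valid_age(-1) is False    # invalid class: below the range
assert is_valid_age(150) is False   # invalid class: above the range
```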

How to do Black Box testing?

Once you have a basic understanding of black-box testing, the next question that comes to mind is: how do you perform it? Below are the steps to perform this testing:
· The first step to black-box testing is to understand the requirement
specifications of the application under test. An accurate and precise SRS
document should be there.
· The next step is to evaluate the set of valid inputs and test scenarios to test the
software. The goal is to save time and get good test coverage.
· Prepare the test cases to cover a maximum range of inputs.
· The test cases are run in the system to generate output, which is validated
with the expected outcome to mark pass or fail.
· The failed steps are marked and sent to the development team to fix them.
· Retest the system using various testing techniques to verify whether the defect recurs or the fix now passes.
The black box testing can be easily used to check and validate the entire
software development life cycle. It can be used at various stages such as
unit, integration, acceptance, system, and regression to evaluate the
product.

What are the benefits of Black Box testing?
· The tester doesn’t need any technical knowledge to test the system. It is
essential to understand the user’s perspective.
· Testing is performed after development, and both the activities are
independent of each other.
· It works for a more extensive coverage which is usually missed out by testers
as they fail to see the bigger picture of the software.
· Test cases can be generated before development and right after specification.
· Black box testing methodology is close to agile.

Conclusion
Black box testing helps to find the gaps in functionality, usability, and
other features. This form of testing gives an overview of software
performance and its output. It improves software quality and reduces the
time to market. This form of testing mitigates the risk of software failures
at the user’s end.

Random Testing in Software Testing


Random testing is software testing in which the system is tested with the help of randomly generated, independent inputs and test cases. Random testing is also named monkey testing. It is a black-box test design technique in which the tests are chosen randomly and the results are compared against the software specification to check whether the output is correct or incorrect.
Some important points about Random Testing:
1. Random testing was first examined by Melvin Breuer in the year 1971.
2. This testing was initially assessed by Pratima and Vishwani Agrawal in the
year 1975 to check the successful output of the software.
3. For random testing, there is also a book that contains formulas for the number of tests that can be run and the numbers of successful and failed results.

Working Random Testing:


Step-1: Identify the input domain
Step-2: Select test inputs independently/randomly from the input domain
Step-3: Test the system on these inputs, forming a random test set
Step-4: Compare the results with the system specification
Step-5: If a test fails, take the necessary action.
Figure: Working of Random Testing.
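The steps above can be sketched as follows. This is a toy example under stated assumptions: Python's built-in sorted() stands in for the "system specification" (the oracle), and my_sort is the assumed implementation under test.

```python
import random

def my_sort(xs):
    # Stand-in for the implementation under test.
    return sorted(xs)

random.seed(42)  # reproducible random inputs
for _ in range(100):
    # Steps 1-2: select test inputs randomly from the input domain.
    data = [random.randint(-50, 50) for _ in range(random.randint(0, 10))]
    # Steps 3-5: run the system and compare the result with the specification.
    assert my_sort(data) == sorted(data), f"failed on input {data}"
```

If any randomly chosen input produced a result that differed from the specification, the assertion would fail and the necessary action (defect reporting) would follow.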

Types of Random Testing


1. Random input sequence generation: Also known as a Random Number Generator (RNG), in which a random sequence of numbers or symbols is generated that cannot be predicted in advance.
2. Random sequence of data inputs: In this, all the data to be used as inputs during the testing are selected randomly.
3. Random data selection from an existing database: Data can be selected for testing only from the existing records; additional data that is not in the record cannot be added.

Characteristics of Random Testing:


1. Random testing is implemented when the bugs in an application have not yet been identified.
2. It is used to check the system’s execution and dependability.
3. It saves time and does not need much extra effort.
4. Random testing is less costly; it doesn’t need extra knowledge for testing the program.

Methods to Implement Random Testing:


To implement random testing, four basic steps are applied:
1. The user input domain is analyzed.
2. From that domain, test input data are chosen independently.
3. The test is executed with these inputs, which form a random set of tests.
4. The outcomes are compared with the system specification. A test fails if an input does not produce the output required by the specification; otherwise it passes.

Advantages of Random Testing


1. It is very cheap, so anyone can use it.
2. It doesn’t need any special intelligence to exercise the program during the tests.
3. Errors can be traced easily; bugs can be detected throughout the testing.
4. Random testing is free of bias: it spreads the test inputs evenly and avoids repeatedly checking the same errors, since the code may change throughout the testing process.

Disadvantages of Random Testing


1. It tends to find only shallow, easily triggered errors.
2. Some tests are impractical and may be of no use for a long time.
3. Much of the time is consumed by analyzing all the tests.
4. New tests cannot be formed if their data is not available during testing.

Tools used for Random Testing


1. QuickCheck: A well-known testing tool, originally introduced for Haskell and now available in many different languages. It generates random sequences of API calls based on a model, and checks the properties of the system after every test.
2. Randoop: This tool generates sequences of method and constructor invocations for the classes under test and emits them as JUnit tests.
3. Simulant: A Clojure tool that simulates the behavior of a model according to the system’s specifications.
4. GramTest: A grammar-based random testing tool written in Java; it uses BNF notation to specify the grammar that is used as input during testing.
What is Requirements based Testing?
Requirements-based testing is a testing approach in which test cases, conditions and data are derived from requirements. It includes functional tests and also non-functional attributes such as performance, reliability or usability.

Stages in Requirements based Testing:


· Defining Test Completion Criteria - Testing is completed only when
all the functional and non-functional testing is complete.
· Design Test Cases - A Test case has five parameters namely the
initial state or precondition, data setup, the inputs, expected
outcomes and actual outcomes.
· Execute Tests - Execute the test cases against the system under
test and document the results.
· Verify Test Results - Verify if the expected and actual results match
each other.
· Verify Test Coverage - Verify if the tests cover both functional and
non-functional aspects of the requirement.
· Track and Manage Defects - Any defects detected during the testing process go through the defect life cycle and are tracked to resolution. Defect statistics are maintained, which give us the overall status of the project.
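The five test-case parameters named above can be sketched as a simple record (the field names are illustrative, not prescribed by the text):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    precondition: str        # initial state
    data_setup: str
    inputs: dict
    expected_outcome: str
    actual_outcome: str = "" # filled in after execution

    def passed(self):
        # "Verify Test Results": expected and actual results must match.
        return self.expected_outcome == self.actual_outcome

tc = TestCase("user account exists", "load user fixture",
              {"username": "alice"}, expected_outcome="login succeeds")
tc.actual_outcome = "login succeeds"  # recorded after executing the test
assert tc.passed()
```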

Requirements Testing process:


· Testing must be carried out in a timely manner.
· Testing process should add value to the software life cycle, hence
it needs to be effective.
· Testing the system exhaustively is impossible hence the testing
process needs to be efficient as well.
· Testing must provide the overall status of the project, hence it
should be manageable.
Decision tables
Decision table testing is a software testing technique used to test system behavior for different input combinations. It is a systematic approach where the different input combinations and their corresponding system behavior (output) are captured in tabular form. That is why it is also called a Cause-Effect table, where causes and effects are captured for better test coverage.
A Decision Table is a tabular representation of inputs versus rules/cases/test
conditions. It is a very effective tool used for both complex software testing
and requirements management. A decision table helps to check all possible
combinations of conditions for testing and testers can also identify missed
conditions easily. The conditions are indicated as True(T) and False(F) values.
Example 1: How to make a Decision Table for a Login Screen
Let’s create a decision table for a login screen.
The condition is simple: if the user provides the correct username and password, the user is redirected to the homepage. If either input is wrong, an error message is displayed.
Conditions Rule 1 Rule 2 Rule 3 Rule 4

Username (T/F) F T F T

Password (T/F) F F T T

Output (E/H) E E E H

Legend:
● T – Correct username/password
● F – Wrong username/password
● E – Error message is displayed
● H – Home screen is displayed
Interpretation:
● Case 1 – Username and password both were wrong. The user is shown
an error message.
● Case 2 – Username was correct, but the password was wrong. The
user is shown an error message.
● Case 3 – Username was wrong, but the password was correct. The
user is shown an error message.
● Case 4 – Username and password both were correct, and the user navigated to the homepage.

While converting this to test cases, we can create two scenarios:

● Enter the correct username and correct password and click on login; the expected result is that the user is navigated to the homepage.
And one of the three scenarios below (only one is needed, as they essentially test the same rule):
● Enter a wrong username and wrong password and click on login; the expected result is that the user gets an error message.
● Enter a correct username and wrong password and click on login; the expected result is that the user gets an error message.
● Enter a wrong username and correct password and click on login; the expected result is that the user gets an error message.
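The four rules of this decision table can be sketched as code (a hypothetical helper, assumed for illustration; E = error message, H = home screen):

```python
def login_outcome(username_ok, password_ok):
    # Rule 4: both correct -> home screen; Rules 1-3 -> error message.
    if username_ok and password_ok:
        return "H"
    return "E"

assert login_outcome(False, False) == "E"  # Rule 1
assert login_outcome(True,  False) == "E"  # Rule 2
assert login_outcome(False, True)  == "E"  # Rule 3
assert login_outcome(True,  True)  == "H"  # Rule 4
```

The assertions cover every column of the table, which is exactly what decision table testing asks for.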

Templates

You use templates to modify decision table values in the business process rules
manager. The templates are designed in IBM® Integration Designer and
contained in the business rule definition. The templates determine which
aspects of a decision table you can modify and provide a list of valid values to
choose from. You create new rows or columns in the table or new actions
based on the templates defined for that decision table, and you modify existing
conditions or actions that were created with the template. Decision table
templates are not shared between decision tables.

Initialization action rules

Decision tables support the use of an initialization action rule, which runs
before the decision table is started and allows for preprocessing, such as for
creating business objects or setting initial values. You can modify an
initialization action rule in the business process rules manager, provided that
the business rule definition was designed in IBM Integration Designer with an
initialization action.

Although only one initialization action rule can be created from a single
template, the action rule can have multiple action expressions in it, so it can
perform multiple actions. If an initialization rule template is defined for a
particular decision table, it can only be used in that table.

Otherwise conditions

The otherwise condition is a special condition that will be entered by default if no other condition in the decision table is applicable.

The otherwise condition will only display in the business process rules
manager if it is included in the decision table definition that was designed in
IBM Integration Designer. You cannot add or remove it dynamically in the
business process rules manager.

However, you can incorporate actions defined with templates for the otherwise
condition. The otherwise condition can be used zero or one time for any
condition being checked.

The following figure shows a decision table with an initialization action rule that sets the default member type to Silver, and otherwise conditions that apply to gold and silver customers spending less than $500. The conditions PurchaseAmount and MemberType run along the first and second rows, and the action Discount runs along the third row. The orientation of conditions and actions is shown by arrows.
Figure 1. Decision table

The example shows that gold customers spending $500 - $1999 get an 8%
discount while silver customers spending $500 - $2000 get a 3% discount.
Gold customers spending $2000 or more get a 10% discount while silver
customers spending $2000 or more get a 5% discount. Gold customers
spending less than $500 get a 2% discount while silver customers spending
less than $500 get a 0% discount.

● Creating decision table entries


You create a new decision table entry by copying an existing decision table
entry and modifying its values.
● Special actions menu
The Decision Table page has a Special actions menu to edit the values in a
decision table or modify the structure and variables of a template.
● Modifying decision table entries
You edit a decision table by directly entering the new value into the
appropriate input field or by selecting a value from the field's list box options.
● Modifying template values of decision tables
You modify the structure and values of a decision table template by using the
Special actions menu and by directly entering values into the appropriate
input fields.
Unit 3

Software Test Automation and Quality Metrics:

Software Test Automation, Skills needed for Automation, Scope of Automation, Design and Architecture for Automation, Requirements for a Test Tool, Challenges in Automation, Tracking the Bug, Debugging.

Testing Software System Security - Six-Sigma, TQM - Complexity Metrics and Models, Quality Management Metrics, Availability Metrics, Defect Removal Effectiveness, FMEA, Quality Function Deployment, Taguchi Quality Loss Function, Cost of Quality.

Six Sigma
Six Sigma is the process of improving the quality of output by identifying and eliminating the causes of defects and reducing variability in manufacturing and business processes. The maturity of a manufacturing process can be described by a sigma rating indicating the percentage of defect-free products it creates. A six sigma process is one in which 99.99966% of all opportunities to produce some feature of a part are statistically expected to be free of defects (3.4 defective features per million opportunities).
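The 3.4 defects-per-million figure can be checked numerically. It relies on the conventional 1.5-sigma long-term process shift used in Six Sigma practice; the sketch below computes the one-sided normal tail probability with only the standard library:

```python
import math

def dpmo(sigma_level, shift=1.5):
    """Defects per million opportunities for a given sigma level,
    assuming the conventional 1.5-sigma long-term process shift."""
    z = sigma_level - shift
    tail = 0.5 * math.erfc(z / math.sqrt(2))  # P(Z > z) for standard normal
    return tail * 1_000_000

print(round(dpmo(6), 1))                      # 3.4 defects per million
print(round(100 * (1 - dpmo(6) / 1e6), 5))    # 99.99966 percent defect-free
```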
History of Six Sigma
Six Sigma is a set of methods and tools for process improvement. It was introduced by engineer Bill Smith while working at Motorola in 1986. In the 1980s, Motorola was producing Quasar televisions, which were popular, but at the time many defects arose due to picture quality and sound variations.

Using the same raw materials, machinery, and workforce, a Japanese firm took over Quasar television production, and within a few months it was producing Quasar TV sets with far fewer defects. This was achieved through improved management techniques.

Six Sigma was adopted by Bob Galvin, the CEO of Motorola, in 1986 and registered as a Motorola trademark on December 28, 1993; Motorola subsequently became known as a quality leader.

Characteristics of Six Sigma


The Characteristics of Six Sigma are as follows:
1. Statistical Quality Control: Six Sigma is named after the Greek letter σ (sigma), which is used to denote standard deviation in statistics. Standard deviation is used to measure variation, which is an essential tool for measuring non-conformance as far as the quality of output is concerned.

2. Methodical Approach: Six Sigma is not merely a quality improvement strategy in theory; it features a well-defined, systematic approach of application in DMAIC and DMADV, which can be used to improve the quality of production. DMAIC is an acronym for Define-Measure-Analyze-Improve-Control. The alternative method DMADV stands for Define-Measure-Analyze-Design-Verify.

3. Fact and Data-Based Approach: The statistical and methodical aspect of Six
Sigma shows the scientific basis of the technique. This accentuates essential
elements of the Six Sigma that is a fact and data-based.

4. Project and Objective-Based Focus: The Six Sigma process is implemented for an organization's projects, tailored to their specifications and requirements. The process is flexed to suit the requirements and conditions in which the projects operate to get the best results.

5. Customer Focus: The customer focus is fundamental to the Six Sigma approach.
The quality improvement and control standards are based on specific customer
requirements.

6. Teamwork Approach to Quality Management: The Six Sigma process requires organizations to get organized when it comes to controlling and improving quality. Six Sigma involves a lot of training depending on the role of an individual in the Quality Management team.

Six Sigma Methodologies


Six Sigma projects follow two project methodologies:
1. DMAIC

2. DMADV

DMAIC

It specifies a data-driven quality strategy for improving processes. This methodology is used to enhance an existing business process.

The DMAIC project methodology has five phases:


1. Define: It covers the process mapping and flow-charting, project charter
development, problem-solving tools, and so-called 7-M tools.

2. Measure: It includes the principles of measurement, continuous and discrete data, and scales of measurement; an overview of the principle of variation; and repeatability and reproducibility (R&R) studies for continuous and discrete data.

3. Analyze: It covers establishing a process baseline; how to determine process improvement goals; knowledge discovery, including descriptive and exploratory data analysis and data mining tools; the basic principles of Statistical Process Control (SPC); specialized control charts; process capability analysis; correlation and regression analysis; analysis of categorical data; and non-parametric statistical methods.

4. Improve: It covers project management, risk assessment, process simulation, design of experiments (DOE), robust design concepts, and process optimization.

5. Control: It covers process control planning, using SPC for operational control and
PRE-Control.

DMADV

It specifies a data-driven quality strategy for designing products and processes. This method is used to create new product or process designs in such a way that the result is more predictable, mature, and defect-free performance.

The DMADV project methodology has five phases:


1. Define: It defines the problem or project goal that needs to be addressed.

2. Measure: It measures and determines the customer's needs and specifications.

3. Analyze: It analyzes the process to meet customer needs.

4. Design: It can design a process that will meet customer needs.

5. Verify: It can verify the design performance and ability to meet customer needs.

Quality Management Techniques:


Six Sigma
● Six Sigma is a project-based approach for improving effectiveness and efficiency.
● It is a disciplined, customer-focused, data-driven approach to improving the performance of processes, products, or services. The term Six Sigma can refer to a philosophy, a performance metric, or a methodology.
● As a philosophy, Six Sigma strives for perfection in achieving effectiveness and efficiency in meeting customer and business requirements.
● Six Sigma is proactive and prevention-based rather than reactive and detection-based. As a performance metric, Six Sigma refers to a level of quality that is near perfection.
● It strives for a defect level of no more than 3.4 parts per million.
● As a methodology, Six Sigma refers to DMAIC, a strategy for improvement named after its five stages: define, measure, analyze, improve, and control.

Total Quality Management


● Total Quality Management is a customer-oriented process that aims for continuous improvement of business operations.
● It ensures that all related work (especially the work of employees) is directed toward the common goals of improving product or service quality, as well as improving the production process or the process of rendering services.
● The emphasis is placed on fact-based decision making, with the use of performance metrics to monitor progress.
Unit 4

Fundamentals of Software Quality Assurance:

SQA basics, Components of the Software Quality Assurance System, software quality in business context, planning for software quality assurance, product quality and process quality, software process models, 7 QC Tools and Modern Tools.

What is Quality?
Quality is defined as the product or services that should be "fit for use and
purpose."

Quality is all about meeting the needs and expectations of customers


concerning functionality, design, reliability, durability, and price of the
product.

What is Assurance?
Assurance is a positive declaration about a product or service. It is all about the product working well: it provides a guarantee that the product will work without any problems, according to expectations and requirements.

What is Quality Assurance (QA)?


Quality Assurance is also known as QA Testing. QA is defined as an
activity to ensure that an organization is providing the best product or
service to the customers.

Software Quality Assurance may seem to be all about the evaluation of software based on functionality, performance, and adaptability; however, software quality assurance goes beyond the quality of the software itself: it also includes the quality of the process used to develop, test, and release the software.

Software quality assurance is all about the software development lifecycle, which includes requirements management, software design, coding, testing, and release management.
Quality Assurance is the set of activities that defines the procedures and
standards to develop the product.

Quality Assurance is a systematic way of creating an environment to
ensure that the software product being developed meets the quality
requirements. This process is controlled and determined at the managerial
level. It is a preventive process whose aim is to establish the correct
methodology and standard to provide a quality environment to the product
being developed. Quality Assurance focuses on process standard, projects
audit, and procedures for development. QA is also known as a set of
activities designed to evaluate the process by which products are
manufactured.

QA focused on improving the processes to deliver Quality Products.

What are the Quality Attributes of Software?


The following six characteristics can define the quality of the software:

1. Functionality

The quality of software is defined by how effectively the software interacts with the other components of the system. The software must provide appropriate functions as per the requirements, and these functions must be implemented correctly.

2. Reliability

It is defined as the capability of the software to perform under specific conditions for a specified duration.

3. Usability

Usability of software is defined as its ease of use. The quality of the software is also identified by how easily a user can understand the functions of the software and how much effort is required to use its features.

4. Efficiency
The efficiency of the software is dependent on the architecture and coding
practice followed during development.

5. Maintainability

Maintainability is also one of the significant factors defining the quality of the software. It refers to how easily a fault in the software can be identified and fixed. The software should remain stable when changes are made.

6. Portability

Portability of the software is defined as how easily a system adapts to changes in its specifications. The quality of the software is also determined by the portability of the system: how easy it is to install the software and how easy it is to replace a component of the system in a given environment.

Software Quality Assurance:-


Software Quality Assurance is a set of activities for ensuring quality in software processes. It ensures that the developed software meets and complies with the defined or standardized quality specifications. SQA is an ongoing process within the Software Development Life Cycle (SDLC) that regularly checks the developed software to ensure it meets the desired quality measures.

SQA practices are implemented in most kinds of software development, regardless of the underlying software development model being used. SQA incorporates and implements software testing methodologies to test the software. Rather than checking for quality after completion, SQA processes test for quality in each phase of development, until the software is complete. With SQA, the software development process moves into the next stage only once the current/previous stage complies with the required quality standards.

Components of SQA System:


1. Pre-project components
2. Project life cycle components
3. Infrastructure error prevention and improvement components
4. Software quality management components
5. Standardization, certification, and SQA system assessment
component
6. Organizing for SQA — the human components

1. Pre-project Plan

The pre-project plan ensures that the resources required for the project, the schedule, and the budget are clearly defined, and that the plans for development and for ensuring quality have been determined.

The components include:

● Required Resources (Hardware and Human resources)
● Development plan
● Schedules
● Risk evaluation
● Quality plan
● Project methodology

2. Project lifecycle component

A project lifecycle usually comprises two stages:


1. Development Stage

In the development stage, Software Quality Assurance components help to identify design and programming errors. These components are divided into the following sub-classes: Reviews, Expert Opinions, and Software Testing.

2. Operation Maintenance Stage

In the operation-maintenance stage, the Software Quality Assurance components include the development lifecycle components along with specialized components whose aim is to improve the maintenance tasks.

3. Infrastructure error prevention and improvement components

The aim of this class of components is to prevent software faults and minimize the rate of errors.

These components include:

● Procedure and work instructions
● Templates and Checklists
● Staff Training, Retraining and Certification
● Preventive and Corrective Actions
● Configuration Management
● Documentation Control

4. Software Quality Management Components

This class of components controls development and maintenance activities. These components establish the managerial control of software development projects. The management components aim to prevent the project from going over budget and falling behind schedule.

The management components include:

● Project Progress Control
● Software Quality Metrics
● Software Quality Costs

5. Standardization, Certification, and SQA assessment components

The aim of these components is to implement international managerial and professional standards within the organization. These components help to improve coordination among the organizational quality systems and establish standards for the project process. The components include:

● Quality management standards
● Project process standard

6. Organizing for Software Quality Assurance — the human components

The main aim of this class of components is to initiate and support the implementation of Software Quality Assurance components, identify any deviations from the predefined Software Quality Assurance procedures and methods, and recommend improvements. The Software Quality Assurance organizational team includes test managers, testers, the SQA unit, the SQA committee, and SQA forum members.

Software quality in business context


Consider these five ways software quality can impact your business.

1. Predictability
Software quality drives predictability. Do it once and do it right, and there will be less re-work, less variation in productivity, and better performance overall. Products get delivered on time, and they get built more productively. Poor quality is much more difficult to manage. Predictability decreases as re-work grows, and the likelihood of a late, lower-quality product increases.

2. Reputation
Some companies have a reputation for building quality software. It becomes who they are, a part of their brand, and ultimately it is the expectation people have of them. Customers seek them out because of it. A good, solid reputation is hard to establish and easy to lose, but when your company has it, it's a powerful business driver. A few mistakes and that reputation can be gone, creating major obstacles to sales and, consequently, your bottom line.

3. Employee Morale
The most productive and happy employees take pride in their work. Enabling employees to build quality software will drive a much higher level of morale and productivity. On the other hand, poor products, lots of re-work, unhappy customers, and difficulty making deadlines have the opposite effect, leading to expensive turnover and a less productive workforce.

4. Customer Satisfaction
A quality product satisfies the customer. A satisfied customer comes back for more and provides positive referrals. Customer loyalty is heavily driven by the quality of the software you produce and the service you provide. And let's not forget that with the explosion of social media channels such as Twitter and Facebook, positive referrals can spread quickly. Of course, that means poor quality and dissatisfaction can be communicated just as quickly, if not quicker.

5. Bottom Line
It all drives the bottom line. Predictable and productive performance, a stellar reputation, happy employees, and satisfied customers are the formula for a successful software business.


Software Quality Assurance Plan

In software, a quality assurance plan is the set of procedures, tools, and techniques
that testers can use to ensure that an app or service meets the software requirements.

These plans typically include some or all of the following sections:

● Purpose: Defines the reasoning behind the document, the scope of the testing,
and what stages of the software lifecycle it covers.
● Reference: Any references or other materials used to create this plan can go in
here.
● Management: In this section, outline (broadly) the organizational structure and
personnel necessary to implement the quality assurance plan.
● Problem Reporting: Define what happens when testers encounter issues (are
they tracked as bugs, is it an email that gets sent, or does the tester report it in
some other manner?).
● Tools, Technologies, and Methodologies: What will the testers use to carry out
the plan?
● Code Control: How will this plan ensure that harmful code doesn't make it to production (perhaps the testers check it out on an integration system before pushing it to prod)?
● Records Maintenance: What records will testers maintain, and will they ever
destroy them? This section is often critical for government software quality
assurance plans.
● Testing Methodology: What methods will the testers themselves employ? Will
they run through a rubric of test cases once a day? Or will they perform
continuous testing on the prod system?

The idea is to lay out a comprehensive set of rules and procedures that will ensure all the code written and checked in, and the resulting systems, are of the highest quality.
Difference between Product and Process:-

| S. No. | Product | Process |
|--------|---------|---------|
| 1. | Product is the final production of the project. | A process is a set of sequential steps that have to be followed to create a project. |
| 2. | A product focuses on the final result. | A process focuses on completing each step being developed. |
| 3. | In the case of products, the firm's guidelines are followed. | A process consistently follows guidelines. |
| 4. | A product tends to be short-term. | A process tends to be long-term. |
| 5. | The main goal of the product is to complete the work successfully. | The purpose of the process is to make the quality of the project better. |
| 6. | A product is created based on the needs and expectations of the customers. | A process serves as a model for producing various goods in a similar way. |
| 7. | A product layout is a style of layout design in which the materials required to make the product are placed in a single line depending on the order of operations. | When resources with similar processes or functions are grouped together, it is referred to as a process layout. |
| 8. | Product patents are thought to offer a greater level of protection than process patents. | A process patent provides the inventor only limited protection. |
Product quality and Process quality
Product quality and process quality are two important concepts in quality
management. Product quality is the degree to which a product meets its
specifications. Process quality is the degree to which a process meets its
specifications.
There are several methods for measuring product quality, such as inspection,
testing, and quality audits. Process quality can be measured with process
audits and process control charts.
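As a minimal sketch of a control chart calculation, the 3-sigma control limits for a measured process attribute can be computed as follows; the sample data (defects found per weekly build) is invented purely for illustration:

```python
import statistics

# Hypothetical process measurements: defects found per weekly build.
samples = [4, 6, 5, 7, 5, 4, 6, 5, 8, 5]

mean = statistics.mean(samples)
sigma = statistics.pstdev(samples)

ucl = mean + 3 * sigma            # upper control limit
lcl = max(0.0, mean - 3 * sigma)  # lower control limit (cannot be negative)

# Points outside the limits signal that the process may be out of control.
out_of_control = [x for x in samples if not lcl <= x <= ucl]
print(f"mean={mean:.1f}, UCL={ucl:.2f}, LCL={lcl:.2f}, outliers={out_of_control}")
```

Here every sample falls inside the limits, so the process would be judged statistically stable.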
Product quality is important because it affects the functionality of the product
and the satisfaction of the customer. Process quality is important because it
affects the efficiency of the process and the quality of the product.
We have a variety of work products in the software development process,
such as requirement specifications, software design, software code, user
documentation, and so on. Any of these work products can be tested to see if
they meet quality standards by measuring their attributes. These processes
can be improved if we audit them.
Quality assurance is the level of implementation of, and adherence to, an accepted process, such as the measurements and quality criteria that yield these artifacts. Designing software requires a systematic process. The goal of process and product quality assurance is to monitor software engineering processes and methods to ensure that quality is maintained. It is the process of confirming and verifying whether or not a service or product meets the customer's needs.
Quality products result in customer loyalty that generates leads for the
company. When a customer discovers a product or service that they are
completely satisfied with, they return, make repeat purchases, and
recommend it to others.
Software process model
A software process model is an abstraction of the software development process.
The models specify the stages and order of a process. So, think of this as a
representation of the order of activities of the process and the sequence in which
they are performed.
A model will define the following:
● The tasks to be performed
● The input and output of each task
● The pre and post conditions for each task
● The flow and sequence of each task

The goal of a software process model is to provide guidance for controlling and coordinating the tasks to achieve the end product and objectives as effectively as possible.
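The four elements above can be sketched as a minimal, hypothetical process-model structure; the task names, inputs, outputs, and conditions below are illustrative only:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    """One task in a process model: name, I/O artifacts, and a precondition."""
    name: str
    inputs: list
    outputs: list
    precondition: Callable[[set], bool]

def run_process(tasks):
    """Run tasks in sequence, enforcing each task's precondition on the
    set of artifacts produced so far (the flow and sequence of tasks)."""
    produced = set()
    for task in tasks:
        assert task.precondition(produced), f"precondition failed: {task.name}"
        produced.update(task.outputs)
    return produced

# A toy waterfall-style sequence of tasks.
tasks = [
    Task("requirements", [], ["spec"], lambda p: True),
    Task("design", ["spec"], ["design-doc"], lambda p: "spec" in p),
    Task("implementation", ["design-doc"], ["code"], lambda p: "design-doc" in p),
]
print(sorted(run_process(tasks)))  # ['code', 'design-doc', 'spec']
```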

(Figure: a general software process model. Source: Omar Elgabry)


There are many kinds of process models for meeting different requirements. We
refer to these as SDLC models (Software Development Life Cycle models). The
most popular and important SDLC models are as follows:
● Waterfall model
● V model
● Incremental model
● RAD model
● Agile model
● Iterative model
● Prototype model
● Spiral model

Factors in choosing a software process


Choosing the right software process model for your project can be difficult. If you
know your requirements well, it will be easier to select a model that best matches
your needs. You need to keep the following factors in mind when selecting your
software process model:

Project requirements
Before you choose a model, take some time to go through the project requirements
and clarify them alongside your organization’s or team’s expectations. Will the user
need to specify requirements in detail after each iterative session? Will the
requirements change during the development process?

Project size
Consider the size of the project you will be working on. Larger projects mean bigger
teams, so you’ll need more extensive and elaborate project management plans.

Project complexity
Complex projects may not have clear requirements. The requirements may change
often, and the cost of delay is high. Ask yourself if the project requires constant
monitoring or feedback from the client.

Cost of delay
Is the project highly time-bound with a huge cost of delay, or are the timelines
flexible?

Customer involvement
Do you need to consult the customers during the process? Does the user need to
participate in all phases?

Familiarity with technology


This involves the developers’ knowledge and experience with the project domain,
software tools, language, and methods needed for development.

Project resources
This involves the amount and availability of funds, staff, and other resources.

Types of software process models


As we mentioned before, there are multiple kinds of software process models that
each meet different requirements. Below, we will look at the top seven types of
software process models that you should know.

Waterfall Model
The waterfall model is a sequential, plan driven-process where you must plan and
schedule all your activities before starting the project. Each activity in the waterfall
model is represented as a separate phase arranged in linear order.
It has the following phases:
● Requirements
● Design
● Implementation
● Testing
● Deployment
● Maintenance
Each of these phases produces one or more documents that need to be approved
before the next phase begins. However, in practice, these phases are very likely to
overlap and may feed information to one another.

The software process isn’t linear, so the documents produced


may need to be modified to reflect changes.
The waterfall model is easy to understand and follow. It doesn’t require a lot of
customer involvement after the specification is done. Since it’s inflexible, it can’t
adapt to changes. There is no way to see or try the software until the last phase.
The waterfall model has a rigid structure, so it should be used in cases where the
requirements are understood completely and unlikely to radically change.

V Model
The V model (Verification and Validation model) is an extension of the waterfall
model. All the requirements are gathered at the start and cannot be changed. You
have a corresponding testing activity for each stage. For every phase in the
development cycle, there is an associated testing phase.

The corresponding testing phase of each development phase is planned in parallel, as you can see above.
The V model is highly disciplined, easy to understand, and makes project management easier. However, it isn't good for complex projects or projects that have unclear or changing requirements. Its discipline makes the V model a good choice for software where downtime and failure are unacceptable.

Incremental Model
The incremental model divides the system’s functionality into small increments that
are delivered one after the other in quick succession. The most important
functionality is implemented in the initial increments.
The subsequent increments expand on the previous ones until everything has been
updated and implemented.
Incremental development is based on developing an initial implementation, exposing
it to user feedback, and evolving it through new versions. The process’ activities are
interwoven by feedback.

Each iteration passes through the requirements, design, coding, and testing stages.
The incremental model lets stakeholders and developers see results with the first
increment. If the stakeholders don’t like anything, everyone finds out a lot sooner. It
is efficient as the developers only focus on what is important and bugs are fixed as
they arise, but you need a clear and complete definition of the whole system
before you start.
The incremental model is great for projects that have loosely-coupled parts and
projects with complete and clear requirements.

Iterative Model
The iterative development model develops a system through building small
portions of all the features. This helps to meet initial scope quickly and release it for
feedback.
In the iterative model, you start off by implementing a small set of the software
requirements. These are then enhanced iteratively in the evolving versions until
the system is completed. This process model starts with part of the software, which
is then implemented and reviewed to identify further requirements.
Like the incremental model, the iterative model allows you to see the results at the
early stages of development. This makes it easy to identify and fix any functional
or design flaws. It also makes it easier to manage risk and change requirements.
The deadline and budget may change throughout the development process,
especially for large complex projects. The iterative model is a good choice for large
software that can be easily broken down into modules.

RAD Model
The Rapid Application Development (RAD model) is based on iterative development
and prototyping with little planning involved. You develop functional modules in
parallel for faster product delivery. It involves the following phases:
1. Business modeling
2. Data modeling
3. Process modeling
4. Application generation
5. Testing and turnover

The RAD concept focuses on gathering requirements using focus groups and workshops, reusing software components, and informal communication.
The RAD model accommodates changing requirements, reduces development time,
and increases the reusability of components. But it can be complex to manage.
Therefore, the RAD model is great for systems that need to be produced in a short
time and have known requirements.

Spiral Model
The spiral model is a risk driven iterative software process model. The spiral model
delivers projects in loops. Unlike other process models, its steps aren’t activities but
phases for addressing whatever problem has the greatest risk of causing a failure.

It was designed to include the best features of the waterfall model and introduces risk assessment.
You have the following phases for each cycle:
1. Address the highest-risk problem and determine the objective and alternate
solutions
2. Evaluate the alternatives and identify the risks involved and possible solutions
3. Develop a solution and verify if it’s acceptable
4. Plan for the next cycle
You develop the concept in the first few cycles, and then it evolves into an
implementation. Though this model is great for managing uncertainty, it can be
difficult to have stable documentation. The spiral model can be used for projects with
unclear needs or projects still in research and development.

Agile model
The agile process model encourages continuous iterations of development and
testing. Each incremental part is developed over an iteration, and each iteration is
designed to be small and manageable so it can be completed within a few weeks.
Each iteration focuses on implementing a small set of features completely. It
involves customers in the development process and minimizes documentation by
using informal communication.

Agile development considers the following:


● Requirements are assumed to change
● The system evolves over a series of short iterations
● Customers are involved during each iteration
● Documentation is done only when needed
Though agile provides a very realistic approach to software development, it isn’t
great for complex projects. It can also present challenges during transfers as there is
very little documentation. Agile is great for projects with changing requirements.
Some commonly used agile methodologies include:
● Scrum: One of the most popular agile models, Scrum consists of iterations
called sprints. Each sprint is between 2 to 4 weeks long and is preceded by
planning. You cannot make changes after the sprint activities have been
defined.
● Extreme Programming (XP): With Extreme Programming, an iteration can
last between 1 to 2 weeks. XP uses pair programming, continuous integration,
test-driven development and test automation, small releases, and simple
software design.
● Kanban: Kanban focuses on visualizations, and if any iterations are used
they are kept very short. You use the Kanban Board that has a clear
representation of all project activities and their numbers, responsible people,
and progress.

Software Crisis
1. Size: Software is becoming more expensive and more complex as the expectations placed on it grow. For example, the amount of code in consumer products is doubling every couple of years.
2. Quality: Many software products have poor quality, i.e., defects surface after the software is put into use because of ineffective testing techniques. For example, software testing typically finds 25 errors per 1000 lines of code.
3. Cost: Software development is costly, both in the time taken to develop and the money involved. For example, development of the FAA's Advanced Automation System cost over $700 per line of code.
4. Delayed Delivery: Serious schedule overruns are common. Very often the software takes longer than estimated to develop, which in turn drives costs up. For example, one in four large-scale development projects is never completed.


7 basic quality tools

1. Stratification
2. Histogram
3. Check sheet (tally sheet)
4. Cause and effect diagram (fishbone or Ishikawa diagram)
5. Pareto chart (80-20 rule)
6. Scatter diagram
7. Control chart (Shewhart chart)

1. Stratification

Stratification analysis is a quality assurance tool used to sort data, objects, and people into separate and distinct groups. Separating your data using stratification can help you determine its meaning, revealing patterns that might not otherwise be visible when it has been lumped together.

Whether you're looking at equipment, products, shifts, materials, or even days of the week, stratification analysis lets you make sense of your data before, during, and after its collection.

To get the most out of the stratification process, consider which information about your data's sources may affect the end results of your data analysis. Make sure to set up your data collection so that this information is included.
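The grouping step can be sketched in a few lines of Python; the shift names and defect counts below are invented illustration data, not from any real dataset:

```python
from collections import defaultdict

# Hypothetical defect records: (shift, defects found) pairs.
records = [
    ("morning", 3), ("evening", 7), ("morning", 2),
    ("night", 9), ("evening", 6), ("night", 11),
]

# Stratify: group the defect counts by the shift they were observed on.
strata = defaultdict(list)
for shift, count in records:
    strata[shift].append(count)

# Summarize each stratum; patterns hidden in the pooled data become visible.
averages = {s: sum(c) / len(c) for s, c in strata.items()}
```

Pooled together, this data averages about 6.3 defects per observation, which hides that the night shift averages four times as many defects as the morning shift.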
2. Histogram

Quality professionals are often tasked with analyzing and interpreting the behavior of different groups of data in an effort to manage quality. This is where quality control tools like the histogram come into play.

The histogram can help you represent the frequency distribution of data clearly and concisely among different groups of a sample, allowing you to quickly and easily identify areas of improvement within your processes. With a structure similar to a bar graph, each bar within a histogram represents a group, while the height of the bar represents the frequency of data within that group.

Histograms are particularly helpful when breaking down the frequency of your data into categories such as age, days of the week, physical measurements, or any other category that can be listed in chronological or numerical order.
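The frequency counting behind a histogram can be sketched in Python; the part weights and the 5-gram bin width below are hypothetical illustration values:

```python
from collections import Counter

# Hypothetical sample of part weights in grams.
weights = [12, 14, 15, 17, 18, 18, 21, 22, 22, 23, 27, 31]
BIN_WIDTH = 5  # arbitrary bin width chosen for illustration

# Assign each measurement to the bin it falls in, then count per bin.
bins = Counter((w // BIN_WIDTH) * BIN_WIDTH for w in weights)

# Render a text histogram: bar length equals the bin's frequency.
for start in sorted(bins):
    print(f"{start:>2}-{start + BIN_WIDTH - 1:<2} | {'#' * bins[start]}")
```

Reading the bar lengths immediately shows which weight ranges dominate the sample.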
3. Check sheet (or tally sheet)

Check sheets can be used to collect quantitative or qualitative data. When used to collect quantitative data, they can be called a tally sheet. A check sheet collects data in the form of check or tally marks that indicate how many times a particular value has occurred, allowing you to quickly zero in on defects or errors within your process or product, defect patterns, and even causes of specific defects.

With its simple setup and easy-to-read graphics, a check sheet makes it easy to record preliminary frequency distribution data when measuring your processes. This particular graphic can be used as a preliminary data collection tool when creating histograms, bar graphs, and other quality tools.
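As a sketch, a tally sheet reduces to counting occurrences; the defect names below are made up for illustration:

```python
from collections import Counter

# Hypothetical check-sheet observations: one entry per defect recorded.
observations = [
    "scratch", "dent", "scratch", "misalignment",
    "scratch", "dent", "scratch",
]

# Tally how many times each defect type occurred.
tally = Counter(observations)

# Print the sheet with tally marks, most frequent defect first.
for defect, count in tally.most_common():
    print(f"{defect:<14} {'|' * count}  ({count})")
```

The resulting counts feed directly into a histogram or Pareto chart.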

4. Cause-and-effect diagram (also known as a fishbone or Ishikawa diagram)

Introduced by Kaoru Ishikawa, the fishbone diagram helps users identify the various factors (or causes) leading to an effect, usually depicted as a problem to be solved. Named for its resemblance to a fishbone, this quality management tool works by defining a quality-related problem on the right-hand side of the diagram, with individual root causes and subcauses branching off to its left.

A fishbone diagram's causes and subcauses are usually grouped into six main categories: measurements, materials, personnel, environment, methods, and machines. These categories can help you identify the probable source of your problem while keeping your diagram structured and orderly.


5. Pareto chart (80-20 rule)

As a quality control tool, the Pareto chart operates according to the 80-20 rule. This rule assumes that in any process, 80% of a process's or system's problems are caused by 20% of the contributing factors, often referred to as the "vital few." The remaining 20% of problems are caused by the other 80% of factors.

A combination of a bar and line graph, the Pareto chart depicts individual values in descending order using bars, while the cumulative total is represented by the line.

The goal of the Pareto chart is to highlight the relative importance of a variety of parameters, allowing you to identify and focus your efforts on the factors with the biggest impact on a specific part of a process or system.
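The "vital few" can be identified by sorting causes and accumulating their share of the total; the defect counts below are hypothetical:

```python
# Hypothetical defect counts attributed to each cause.
defects = {"wrong config": 48, "typo": 30, "race condition": 12,
           "missing doc": 6, "UI glitch": 4}

total = sum(defects.values())
cumulative = 0
vital_few = []
# Walk the causes in descending order, accumulating their share of all defects.
for cause, count in sorted(defects.items(), key=lambda kv: -kv[1]):
    cumulative += count
    vital_few.append(cause)
    if cumulative / total >= 0.8:  # stop once ~80% of problems are covered
        break
```

Here three of the five causes account for 90% of the defects, so improvement effort is focused on them first.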
6. Scatter diagram

Out of the seven quality tools, the scatter diagram is the most useful for depicting the relationship between two variables, which is ideal for quality assurance professionals trying to identify cause-and-effect relationships.

With dependent values on the diagram's Y-axis and independent values on the X-axis, each dot represents a common intersection point. When joined, these dots can highlight the relationship between the two variables. The stronger the correlation in your diagram, the stronger the relationship between the variables.

Scatter diagrams can prove useful as a quality control tool when used to define relationships between quality defects and possible causes such as environment, activity, personnel, and other variables. Once the relationship between a particular defect and its cause has been established, you can implement focused solutions with (hopefully) better outcomes.
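The strength of the relationship a scatter diagram shows can be quantified with the Pearson correlation coefficient; the temperature and defect-rate pairs below are invented for illustration:

```python
import math

# Hypothetical paired data: ambient temperature (x, degrees C) vs.
# defect rate per 1000 units (y).
x = [18, 20, 22, 24, 26, 28]
y = [2.1, 2.4, 3.0, 3.1, 3.8, 4.0]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
# Pearson r: covariance divided by the product of the standard deviations.
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
r = cov / math.sqrt(sum((a - mean_x) ** 2 for a in x)
                    * sum((b - mean_y) ** 2 for b in y))
```

An r close to +1 or -1 indicates a strong linear relationship; for this data r is roughly 0.99, a strong positive correlation.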
7. Control chart (also called a Shewhart chart)

Named after Walter A. Shewhart, this quality improvement tool can help quality assurance professionals determine whether or not a process is stable and predictable, making it easy for you to identify factors that might lead to variations or defects.
Control charts use a central line to depict an average or mean, as well
as an upper and lower line to depict upper and lower control limits
based on historical data. By comparing historical data to data
collected from your current process, you can determine whether your
current process is controlled or affected by specific variations.

Using a control chart can save your organization time and money by
predicting process performance, particularly in terms of what your
customer or organization expects in your final product.
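The control limits can be computed from historical data as the mean plus or minus three standard deviations (the conventional Shewhart 3-sigma limits); the measurements below are hypothetical:

```python
# Hypothetical daily measurements from a historically stable process.
history = [50.1, 49.8, 50.3, 50.0, 49.9, 50.2, 49.7, 50.0]

n = len(history)
mean = sum(history) / n
# Sample standard deviation of the historical data.
sigma = (sum((x - mean) ** 2 for x in history) / (n - 1)) ** 0.5
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # 3-sigma control limits

# Flag current-run points that fall outside the control limits.
current = [50.1, 49.9, 51.5, 50.0]
out_of_control = [x for x in current if not lcl <= x <= ucl]
```

Points outside the limits (here 51.5, against limits of roughly 49.4 and 50.6) signal special-cause variation worth investigating.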


Bonus: Flowcharts

Some sources swap out stratification and instead include flowcharts as one of the seven basic QC tools. Flowcharts are most commonly used to document organizational structures and process flows, making them ideal for identifying bottlenecks and unnecessary steps within your process or system.

Mapping out your current process can help you more effectively pinpoint which activities are completed when and by whom, how processes flow from one department or task to another, and which steps can be eliminated to streamline your process.
Unit 5
Software Assurance Models:

Models for Quality Assurance, ISO-9000 series, CMM, CMMI, Test Maturity
Models, SPICE, Malcolm Baldrige Model- PCMM.

Software Quality Assurance Trends: Software Process- PSP and TSP, OO Methodology, Cleanroom software engineering, Defect Injection and prevention, Internal Auditing and Assessments, Inspections & Walkthroughs, CASE Tools and their effect on Software Quality.

Models for Quality Assurance:


1. ISO-9000 series

The ISO 9000 series of standards is based on the premise that if a proper process is followed for production, then good-quality products are bound to follow automatically. The types of industries to which the various ISO standards apply are as follows.

1. ISO 9001: This standard applies to organizations engaged in the design, development, production, and servicing of goods. This is the standard that applies to most software development organizations.

2. ISO 9002: This standard applies to organizations that do not design products but are only involved in production. Examples in this category include steel and car manufacturing companies that buy product and plant designs from external sources and are engaged only in manufacturing those products. Therefore, ISO 9002 does not apply to software development organizations.

3. ISO 9003: This standard applies to organizations involved only in the installation and testing of products. For example, gas companies.
How to get ISO 9000 Certification?
An organization that decides to obtain ISO 9000 certification applies to a registrar for registration. The process consists of the following stages:

1. Application: Once an organization decides to go for ISO certification, it applies to the registrar for registration.

2. Pre-Assessment: During this stage, the registrar makes a rough assessment of the organization.

3. Document review and adequacy audit: During this stage, the registrar reviews the documents submitted by the organization and suggests improvements.

4. Compliance Audit: During this stage, the registrar checks whether the organization has complied with the suggestions made during the document review.

5. Registration: The registrar awards the ISO certification after the successful completion of all the phases.

6. Continued Inspection: The registrar continues to monitor the organization from time to time.

Some other ISO standards:


ISO 14001 – Environmental Management
ISO 27001 – Information Security Management

2. CMM

The Software Capability Maturity Model (CMM) is a model developed by the SEI (Software Engineering Institute). It is a generic quality model and fits all types of software development organizations.
It has five maturity levels that represent the maturity of software organizations and describes each level in terms of the KPAs (Key Process Areas) that need to be addressed for that maturity level. Each KPA is defined by its goals and key practices.
3. CMMI

The Capability Maturity Model defines five different maturity levels. CMMI (Capability Maturity
Model Integration) is a quality standard and process improvement model. It helps organizations
to improve their performance and strive for continuous improvement.
The CMMI Levels are as follows:
CMMI Level 1 -> Initial
CMMI Level 2 -> Managed
CMMI Level 3 -> Defined
CMMI Level 4 -> Quantitatively Managed
CMMI Level 5 -> Optimizing

CMMI Level 1 -> Initial

This is the initial level. Process followed by the organization is chaotic and uncontrolled or
poorly controlled. The process followed by the organization is undocumented and the
environment is unstable.
CMMI Level 2 -> Managed

The process followed by the organization is managed to some extent, but it is largely undocumented. The process is generally reactive, and little effort is made to control or assure quality.
CMMI Level 3 -> Defined

The process followed by the organization is managed, efforts are made to control and assure
quality. Risks in the organization are managed. Incidents are resolved and prevented from
reoccurrence.
CMMI Level 4 -> Quantitatively Managed

The process followed by the organization is measured, controlled, and documented. Quality is
controlled and assured, incidents are prevented. The organization also emphasizes customer
satisfaction.
CMMI Level 5 -> Optimizing

The main focus for the organization at this level is on optimizing the process. The organization
strives to continually improve the process.

4. Test Maturity Models

This model assesses the maturity of processes in a Testing Environment.


Even this model has 5 levels, defined below-
Level 1 – Initial: There is no quality standard followed for testing processes and
only ad-hoc methods are used at this level
Level 2 – Definition: Defined process. Preparation of test strategy, plans, test cases
are done.
Level 3 – Integration: Testing is carried out throughout the software development
lifecycle (SDLC) – which is nothing but integration with the development activities,
E.g., V- Model.
Level 4 – Management and Measurement: Reviews of requirements and designs take place at this level, and criteria have been set for each level of testing
Level 5 – Optimization: Many preventive techniques are used for testing
processes, and tool support(Automation) is used to improve the testing standards
and processes.

5. Malcolm Baldrige Model- PCMM

People Capability Maturity Model (PCMM)


PCMM is a maturity structure that focuses on continuously improving the management
and development of the human assets of an organization.

It defines an evolutionary improvement path from ad hoc, inconsistently performed practices to a mature, disciplined, and continuously improving development of the knowledge, skills, and motivation of the workforce that enhances strategic business performance.

The People Capability Maturity Model (PCMM) is a framework that helps the
organization successfully address their critical people issues. Based on the best current
study in fields such as human resources, knowledge management, and organizational
development, the PCMM guides organizations in improving their steps for managing
and developing their workforces.
The People CMM defines an evolutionary improvement path from ad hoc, inconsistently performed workforce practices to a mature infrastructure of practices for continuously elevating workforce capability.

The PCMM consists of five maturity levels that lay successive foundations for continuously improving talent, developing effective methods, and successfully directing the people assets of the organization. Each maturity level is a well-defined evolutionary plateau that institutionalizes a level of capability for developing the talent within the organization.

The five steps of the People CMM framework are:

Initial Level: Maturity Level 1

The Initial Level of maturity includes no process areas. Although the workforce practices performed in Maturity Level 1 organizations tend to be inconsistent or ritualistic, virtually all of these organizations perform processes that are defined in the Maturity Level 2 process areas.

Managed Level: Maturity Level 2

To achieve the Managed Level, Maturity Level 2, managers start to perform basic people management practices such as staffing, managing performance, and adjusting compensation as a repeatable management discipline. The organization establishes a culture, focused at the unit level, for ensuring that people can meet their work commitments. In achieving Maturity Level 2, the organization develops the capability to manage skills and performance at the unit level. The process areas at Maturity Level 2 are Staffing, Communication and Coordination, Work Environment, Performance Management, Training and Development, and Compensation.

Defined Level: Maturity Level 3

The fundamental objective of the Defined Level is to help an organization gain a competitive benefit from developing the various competencies that must be combined in its workforce to accomplish its business activities. Because these workforce competencies represent critical pillars supporting current and future business objectives, the improved workforce practices implemented at Maturity Level 3 become crucial enablers of business strategy.

Predictable Level: Maturity Level 4

At the Predictable Level, the organization manages and exploits the capability developed by its framework of workforce competencies. The organization is now able to manage its capacity and performance quantitatively. The organization can predict its capability for performing work because it can quantify the ability of its workforce and of the competency-based methods they use in performing their assignments.

Optimizing Level: Maturity Level 5

At the Optimizing Level, the entire organization is focused on continual improvement. These improvements are made to the capability of individuals and workgroups, to the performance of competency-based processes, and to workforce practices and activities.

Baldrige assessment criteria

The Baldrige criteria for performance excellence are a framework that any organization can use to improve its overall performance. The assessment criteria consist of seven categories:
● Leadership – examines how senior executives lead the organization and
how the organization handles its responsibilities to the public and to the
environment in which it is inserted.
● Strategy – examines how an organization sets strategic directions and
how it determines key plans of action.
● Customers – examines how an organization determines requirements and
expectations of customers and markets, how it builds relations with
customers, and how it acquires, satisfies and retains customers.
● Measurement, analysis and knowledge management – examines
management, effective use, analysis and enhancement of data and
information to provide support for key processes at the organization and
for the organization’s performance management system.
● Workforce – examines how an organization allows its workforce to
develop their full potential and how the workforce is aligned with the
organization’s objectives.
● Operations – examines aspects of how key production/delivery and
support processes are designed, managed and enhanced.
● Results – examines an organization’s performance and improvement in its
main business areas: customer satisfaction, financial and market
performance, human resources, performance of suppliers and partners,
operational performance, governance and social responsibility. This
category also looks at an organization’s performance in relation to its
competitors.

Software Quality Assurance Trends: Software Process

PSP and TSP

Software is the set of instructions in the form of programs to govern the computer
system and process the hardware components. To produce a software product a
set of activities is used. This set is called a software process.
In this article, we will see a difference between PSP and TSP.

PSP: The Personal Software Process is focused on individuals improving their own performance. The PSP is an individual process and a bottom-up approach to software process improvement. The PSP is a prescriptive process: a mature methodology with a well-defined set of tools and techniques.
TSP: The Team Software Process is a team-based process focused on team productivity. It is essentially a top-down approach. The TSP is an adaptive process and a process management methodology.

PSP vs. TSP

● PSP is a process for the individual engineer; TSP is a process for self-directed teams.
● PSP is a bottom-up approach to software process improvement; TSP is a top-down approach that builds on PSP.
● PSP focuses on personal planning, estimating, size/time/defect measurement, and defect management; TSP adds team launch, planning, tracking, and quality management practices.
● PSP is suited to work carried out by a single developer; TSP is suited to projects developed by teams, including large projects.
● In PSP, the engineer collects and analyzes his or her own data; in TSP, individual data are consolidated so the team can manage quality and schedule at the project level.
● TSP assumes that team members have already been trained in PSP.

Cleanroom Software Engineering


● It is an engineering approach used to build correctness into the developed software.
● The main concept behind cleanroom software engineering is to remove the dependency on costly defect-removal processes such as debugging.
● Cleanroom software engineering takes the quality approach of writing correct code from the beginning, with the increments finally gathered into a complete system.

Following tasks occur in cleanroom engineering:

1. Incremental planning
● In this task, the incremental plan is developed.
● The functionality of each increment, projected size of the increment and
the cleanroom development schedule is created.
● The care is to be taken that each increment is certified and integrated in
proper time according to the plan.

2. Requirements gathering
● Requirements are gathered and refined using traditional analysis techniques.
● A more detailed description of the customer-level requirements is developed.

3. Box structure specification


● The specification method uses box structures.
● A box structure is used to describe the functional specification.
● The box structure separates and isolates the behavior, data, and procedures in each increment.

4. Formal design
● Cleanroom design is a natural extension of specification, using the box structure approach.
● The specifications are called state boxes, and the component-level designs are called clear boxes.

5. Correctness verification
● The cleanroom team conducts rigorous correctness verification activities on the design and then the code.
● Verification starts with the highest-level box structure and then moves toward the design detail and code.
● The first level of correctness verification takes place by applying a set of 'correctness questions'.
● More formal mathematical methods of verification are used when the correctness questions are not sufficient to demonstrate that the specification is correct.

6. Code generation, inspection and verification


● The box structure specification is represented in a specialized language
and these are translated into the appropriate programming language.
● Use the technical reviews for the syntactic correctness of the code.

7. Statistical test planning
● The projected usage of the software is analyzed, planned, and designed.
● This cleanroom activity is conducted in parallel with specification, verification, and code generation.

8. Statistical use testing
● Exhaustive testing of computer software is impossible, so it is necessary to design a limited number of test cases.
● The statistical use technique executes a series of tests derived from a statistical sample of all possible program executions.
● The samples are based on usage data collected from users in a targeted population.
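The sampling idea can be sketched by drawing test cases in proportion to an operational profile; the operations and usage fractions below are hypothetical, not from any real system:

```python
import random

# Hypothetical operational profile: the estimated fraction of real-world
# usage that each operation receives.
profile = {"search": 0.6, "checkout": 0.3, "export": 0.1}

random.seed(42)  # fixed seed so the illustration is reproducible
# Draw a limited sample of test cases weighted by expected usage.
sample = random.choices(list(profile), weights=list(profile.values()), k=100)

# Testing effort concentrates on the operations users exercise most.
counts = {op: sample.count(op) for op in profile}
```

Because the sample follows the profile, the reliability observed during testing approximates the reliability users will experience in operation.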

9. Certification
● After the verification, inspection and correctness of all errors, the
increments are certified and ready for integration.

Cleanroom process model


● The modeling approach in cleanroom software engineering uses a
method called box structure specification.
● A 'box' contains the system or the aspect of the system in detail.
● The information in each box specification is sufficient to define its
refinement without depending on the implementation of other boxes.

The cleanroom process model uses three types of boxes as follows:

1. Black box
● The black box identifies the behavior of a system.
● The system responds to specific events by applying the set of transition
rules.

2. State box
● The state box consists of state data and operations, similar to objects.
● The state box represents the history of the black box, i.e., the data contained in the state box must be maintained across all transitions.

3. Clear box
● The transition function used by the state box is defined in the clear box.
● It simply states that a clear box includes the procedural design for the
state box.
Object Oriented Methodology
Object Oriented Methodology (OOM) is a system development approach encouraging and
facilitating re-use of software components. With this methodology, a computer system can be
developed on a component basis which enables the effective re-use of existing components and
facilitates the sharing of its components by other systems.

Documents of OOM
This document aims at introducing briefly to the readers the Object Oriented Methodology
(OOM). Information covered in the document includes a brief overview of the OOM, its benefits,
the processes and some of the major techniques in OOM.

OOM is a system development approach encouraging and facilitating re-use of software components. With this methodology, a computer system can be developed on a component basis which enables the effective re-use of existing components and facilitates the sharing of its components by other systems. Through the adoption of OOM, higher productivity, lower maintenance cost and better quality can be achieved.

This methodology employs the international standard Unified Modeling Language (UML) from
the Object Management Group (OMG). UML is a modeling standard for OO analysis and design
which has been widely adopted in the IT industry.

The OOM life cycle consists of six stages. These stages are the business planning stage, the
business architecture definition stage, the technical architecture definition stage, the incremental
delivery planning stage, the incremental design and build stage, and the deployment stage.

OOM Procedures Manual


1. Purpose

The objectives of this document are to describe the Object Oriented Methodology (OOM)
process structure and to detail the procedures involved in conducting OOM projects.
2. Scope

The intended readers of this manual are the practitioners of OOM projects. This manual is
structured according to the stages of the OOM process. For each task in the OOM stages, the
following information is documented :

● Objective
● Description
● Prerequisites
● Deliverables
● Guidelines
● Techniques
● Sub-tasks

For ease of reference, part of the contents are extracted from the full manual and summarized
in the following sections.

3. OOM Overview

Object Oriented Methodology (OOM) is a system development approach encouraging and facilitating re-use of software components. With this methodology, a computer system can be developed on a component basis which enables the effective re-use of existing components and facilitates the sharing of its components by other systems. Through the adoption of OOM, higher productivity, lower maintenance cost and better quality can be achieved.

The OOM life cycle consists of six stages :

● Business planning;
● Business architecture definition;
● Technical architecture definition;
● Incremental delivery planning;
● Incremental design and build;
● Deployment.
OOM Documentation Manual
1. Purpose

The purpose of this manual is to define the documentation which will be produced by a project
using the Object Oriented Methodology (OOM). It describes the purpose, contents and
preparation guidelines of each document. Examples are also provided, as appropriate, for
reference.

2. Scope

The intended readers of this manual are the practitioners of OOM projects.

For each of the document, the following will be documented :

Purpose - To define the purpose of producing the documentation.

Guidelines - States when the documentation is to be created, modified, and used. Guidelines in
producing the documents, if any, will also be provided.

Contents - States the suggested contents, and where appropriate, examples will be given.

3. Summary of deliverables to be produced in each stage
