SQA PDF Notes

Chapter 1 introduces software quality assurance (SQA) and its significance in software development, outlining the definition of software and the classification of software errors. It discusses various causes of software errors, including faulty requirements and coding mistakes, and emphasizes the importance of quality factors in ensuring software meets both technical and user expectations. The chapter also distinguishes between software quality assurance and quality control, highlighting the objectives of SQA activities in both development and maintenance phases.


Chapter 1: Introduction to

software quality assurance


Reference
This chapter draws on the following books:
Mastering Software Quality Assurance: Best Practices, Tools
and Techniques for Software Developers
Introduction to Software Testing
Outline
1.1. What is software?
1.2. Classification of the causes of software errors
1.3. Software quality assurance – definition and
objective
1.4. Software quality factors.
1.5. Factors affecting intensity of quality assurance
What is software?
Defining software is not simple. Is it simply code?
According to the IEEE:
🠶 Software is: Computer programs, procedures, and possibly
associated documentation and data pertaining to the operation of
a computer system.
🠶 A similar definition comes from ISO:
ISO definition (from ISO 9000-3) lists four components necessary to
assure the quality of the software development process and years
of maintenance:
🠶 Computer programs (code)
🠶 Procedures
🠶 Documentation
🠶 Data necessary for operating the software system.
Overview of Software
There are two basic types of software:
🠶 Systems Software is a set of programs that support the computer
system by coordinating the activities of the hardware and the
applications. Systems software is written for a specific set of
hardware, most particularly the CPU.
🠶 Application Software is a set of programs that solve specific user-
oriented problems.
Operating Systems
🠶 An Operating System is a set of computer programs that control the
computer hardware and act as an interface with application
programs.
🠶 Operating System Activities:
🠶 Perform common computer hardware functions like storing data
on disk
🠶 Provide the user interface like the Windows XP Graphical User
Interface
🠶 Provide hardware independence by serving as the interface
between the application program and the hardware
🠶 Manage system memory to control how memory is accessed and
used
🠶 Manage processing tasks like enabling the user to run more than
one application (multitasking)
🠶 Control access to system resources by providing functions like
password protection
Types of Application Software
Outline
1.1. What is software?
1.2. Classification of the causes of software errors
1.3. Software quality assurance – definition and
objective
1.4. Software quality factors.
1.5. Factors affecting intensity of quality assurance
Classification of the causes of software errors
🠶 Software Error – made by the programmer
🠶 Syntax (grammatical) error
🠶 Logic error (e.g., multiplying instead of adding two operands)
🠶 Software Fault
🠶 Not all software errors cause software faults
🠶 The faulty part of the software may never be executed
🠶 (An error is present but not encountered…)
🠶 Software Failure – here is where the interest lies.
🠶 A software fault becomes a software failure when/if it is
activated.
🠶 Whether a fault is ever activated depends on the way the
software is executed and on other constraints on the software’s
execution, such as execution options.
Classification of the causes of software errors

Software development process

software error → software fault → software failure
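The error → fault → failure chain can be illustrated with a small sketch (hypothetical Python, not from the text): the programmer's error plants a fault in the code, but a failure occurs only when the faulty branch is actually executed.

```python
def apply_discount(price, quantity):
    """Return the total price; bulk orders (10 or more items) get 10% off."""
    total = price * quantity
    if quantity >= 10:
        # FAULT: the programmer's ERROR (0.9 mistyped as 0.09) lives
        # here, dormant until this branch is executed.
        total = total * 0.09
    return total

# Small orders never execute the faulty branch, so no FAILURE is observed:
print(apply_discount(5.0, 2))    # 10.0 -- correct despite the fault

# A bulk order activates the fault, and it becomes a FAILURE:
print(apply_discount(5.0, 10))   # 4.5 instead of the expected 45.0
```

Testing only small orders would never reveal the fault, which is exactly why "an error is present but not encountered" is dangerous.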
Causes of Software Error
1. Faulty requirements definition
• Usually considered the root cause of software errors
• Incorrect requirement definitions
• Simply stated, ‘wrong’ definitions (formulas, etc.)
• Incomplete definitions
• Unclear or implied requirements
• Missing requirements
• Just flat-out ‘missing.’ (e.g. Program Element Code)
• Inclusion of unneeded requirements
• (Many projects have gone amok by including far too
many requirements that will never be used.)
• Impacts budgets, complexity, development time, …
Causes of Software Error
2. Client-developer communication failures
• Misunderstanding of instructions in requirements
documentation
• Misunderstanding of written changes during development.
• Misunderstanding of oral changes during development.
• Lack of attention
• to client messages referring to requirement
changes, and
• to client responses to questions raised by the
developers
• Very often, these very talented individuals seem to come from
different planets.
• Clients represent the users; developers sometimes represent an
entirely different mindset!
Causes of Software Error
3. Deliberate deviations from software requirements
• Developer reuses previous / similar work to save time.
• Often reused code needs modification which it may not
get or contains unneeded / unusable extraneous code.
• Book suggests developer(s) may overtly omit functionality due
to time / budget pressures.
• Another BAD choice; System testing will uncover these
problems to everyone’s dismay!
• I have never seen this done intentionally!
• Developer inserting unapproved ‘enhancements’ (perfective
coding; a slick new sort / search….); may also ignore some
seemingly minor features, which sometimes are quite major.
• Have seen this and it too causes problems and
embarrassment during reviews.
Causes of Software Error
🠶 4. Logical design errors
• Definitions that represent software requirements by means of
erroneous algorithms.
• Yep! Wrong formulas; Wrong Decision Logic Tables;
incorrect text; wrong operators / operands…
• Process definitions: procedures specified by the systems
analyst are not an accurate reflection of the business process.
• Note: not all errors are necessarily software errors.
• This seems like a procedural error, and likely not a part of
the software system…
• Erroneous definition of boundary conditions – a common
source of errors
• The “absolutes” like “no more than,” “fewer than,” “n times
or more,” “the first time,” etc.
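The boundary "absolutes" above are a classic trap. A minimal sketch (the free-shipping requirement and function names are hypothetical) shows how "n or more" gets mis-coded:

```python
# Requirement (illustrative): "orders of 100 units or more get free shipping".

def free_shipping_wrong(units):
    return units > 100     # boundary error: excludes exactly 100 units

def free_shipping_right(units):
    return units >= 100    # "n or more" includes the boundary value itself

print(free_shipping_wrong(100))   # False -- the failure sits on the boundary
print(free_shipping_right(100))   # True
```

Both versions agree everywhere except at the single boundary value, which is why boundary values deserve their own test cases.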
Causes of Software Error
🠶 4. Logical design errors (continued)
• Omission of required software system states
• If rank is >= O1 and RPI is numeric, then… It is easy to miss an
action based on the software system state.
• Omission of definitions concerning reactions to illegal
operation of the software system
• e.g., including code to detect an illegal operation but failing to
design the software’s reaction to it: gracefully terminate,
sound an alarm, etc.
Causes of Software Error
5. Coding errors
• Too many to try to list.
• Syntax errors (grammatical errors)
• Logic errors (program runs; results wrong)
• Run-time errors (crash during execution)
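A minimal sketch of the two runnable categories (illustrative code, not from the text); a syntax error, the third category, would prevent the program from running at all:

```python
def average(values):
    """Logic error: the program runs, but the result is wrong."""
    return sum(values) / (len(values) + 1)   # wrong denominator

def inverse(x):
    """Run-time error: crashes during execution when x == 0."""
    return 1 / x

print(average([3, 4, 5]))   # prints 3.0; the correct average is 4.0
try:
    inverse(0)
except ZeroDivisionError:
    print("run-time error: division by zero")
```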
Causes of Software Error
6. Non-compliance with documentation & coding instructions
• Non-compliance with published templates (structure)
• Non-compliance with coding standards (attribute names…)
• (Standards and Integration Branch)
• Size of program;
• Other programs must be able to run in environment!
• Data Elements and Codes: AFM 300-4;
• Required documentation manuals and operating
instructions; AFDSDCM 300-8, etc…
• SQA Team: testing not only execution software but coding
standards; manuals, messages displayed; resources needed;
resources named (file names, program names,…)
Causes of Software Error
7. Shortcomings of the Testing Process
• Likely the part of the development process cut short most
frequently!
• Incomplete test plans
• Parts of the application not tested, or not tested thoroughly!
• Failure to document, report detected errors and faults
• So many levels of testing….we will cover.
• Failure to quickly correct detected faults due to unclear
indications that there ‘was’ a fault
• Failure to fix the errors due to time constraints
• Many philosophies here depending on severity of the error.
Causes of Software Error
8. User interface and procedure errors

9. Documentation errors
• Errors in the design documents
• Trouble for subsequent redesign and reuse
• Errors in the documentation within the software for the User
Manuals
• Errors in on-line help, if available.
• Listing of non-existing software functions
• Planned early but dropped; remain in documentation!
• Many error messages are totally meaningless
Causes of Software Error
The nine causes of software errors are:
1. Faulty requirements definition
2. Client-developer communication failures
3. Deliberate deviations from software requirements
4. Logical design errors
5. Coding errors
6. Non-compliance with documentation and coding instructions
7. Shortcomings of the testing process
8. User interface and procedure errors
9. Documentation errors

🠶 You should be conversant with these


Outline
1.1. What is software?
1.2. Classification of the causes of software errors
1.3. Software quality assurance – definition and
objective
1.4. Software quality factors.
1.5. Factors affecting intensity of quality assurance
Software Quality
🠶 Software quality is:
(1)The degree to which a system, component, or process meets
specified requirements.
🠶 by Philip Crosby
(2)The degree to which a system, component, or process meets
customer or user needs or expectations.
🠶 by Joseph M. Juran

Now, more closely…


Software Quality
Software quality is:
(1)The degree to which a system, component, or process meets
specified requirements.
🠶 Seems to emphasize the specification, assuming the customer has
articulated all that is needed in the specs AND that if the specs are
met, the customer will be satisfied.
🠶 I have found that this is not necessarily the case; in fact,
often ‘austere’ systems are deployed first (with errors
discovered in the specs, sometimes very serious ones);
🠶 customers acquiesce to the deployment with understanding of a
follow-on deployment.
Software Quality
Software quality is: (Joseph Juran)
(2)The degree to which a system, component, or process meets
customer or user needs or expectations.
Here, the emphasis is on a satisfied customer, whatever it takes. This
implies the specs may need corrections.
But this seems to free the customer from ‘professional responsibility’ for
the accuracy and completeness of the specs!
Assumption is that real needs can be articulated during development.
This may occur, but in fact major problems can be discovered quite
late. Not a happy customer!
Software Quality
Software Quality Assurance is:
1. A planned and systematic pattern of all actions necessary to
provide adequate confidence that an item or product conforms
to established technical requirements.
2. A set of activities designed to evaluate the process by which the
products are developed or manufactured. Contrast with: quality
control.
More closely:
SQA – IEEE definition
Says to plan and implement systematically!
Shows progress and instills confidence that the software is coming along.
Refers to a software development process – a methodology; a way
of doing things.
Refers to the specification of technical requirements – must have
these.
Note that SQA must include the process not only for development but
also for (hopefully) years of maintenance. So, we need to fold quality
issues affecting not only development but also maintenance into the
overall SQA concept.
SQA activities must also include scheduling and budgeting.
SQA must address issues that arise when time constraints are encountered
– are features eliminated? Budget constraints may force compromise.
SQA - Expanded Definition
Software quality assurance is:
A systematic, planned set of actions necessary to provide
adequate confidence that the software development process or
the maintenance process of a software system product conforms
to established functional technical requirements as well as to the
managerial requirements of keeping to the schedule and operating
within the budgetary confines.
Software Quality Assurance vs. Software Quality
Control – different objectives
🠶 Quality Control is defined as a set of activities designed to evaluate
the quality of a developed or manufactured product.
🠶 We have QC inspections during development and before
deployment.
🠶 QC activities are only a part of the total range of QA activities.
🠶 Quality Assurance’s objective is to minimize the cost of guaranteeing
quality by a variety of activities performed throughout the
development / manufacturing processes and stages.
🠶 These activities prevent causes of errors, and detect and correct
them early in the development process.
🠶 QA substantially reduces the rate of products that do not qualify for
shipment and, at the same time, reduces the costs of guaranteeing
quality in most cases.
The objectives of SQA activities
in Software Development
(1) Assuring an acceptable level of confidence that the software will
conform to functional technical requirements.
(2) Assuring an acceptable level of confidence that the software will
conform to managerial scheduling and budgetary requirements.
(3) Initiation and management of activities for the improvement and
greater efficiency of software development and SQA activities.
The objectives of SQA activities
in Software Maintenance (product-oriented)
(1) Assuring an acceptable level of confidence that the software
maintenance activities will conform to the functional technical
requirements.
(2) Assuring an acceptable level of confidence that the software
maintenance activities will conform to managerial scheduling and
budgetary requirements.
(3) Initiating and managing activities to improve and increase the
efficiency of software maintenance and SQA activities.
Outline
1.1. What is software?
1.2. Classification of the causes of software errors
1.3. Software quality assurance – definition and
objective
1.4. Software quality factors.
1.5. Factors affecting intensity of quality assurance
The Requirements Document
• Requirement Documentation (Specification) is one of the
most important elements for achieving software quality
• Need to explore what constitutes a good software
requirements document.
• Some SQA Models suggest 11-15 factors categorized; some
fewer; some more
• Want to become familiar with these quality factors, and
• Who is really interested in them.
• The need for comprehensive software quality requirements
is demonstrated by numerous case studies (see a few in this
chapter).
• (Where do the quality factors go??)
Need for Comprehensive Software
Quality Requirements
• Need for improving poor requirements documents is widespread
• Frequently lack quality factors such as: usability, reusability,
maintainability, …
• Software industry groups the long list of related attributes into what
we call quality factors. (Sometimes non-functional requirements)
• Natural to assume an unequal emphasis on all quality factors.
• Emphasis varies from project to project
– Scalability; maintainability; reliability; portability; etc.
• Let’s look at some of the categories…
Extra Thoughts
• It seems that in Software Engineering we concentrate on
capturing, designing, implementing, and deploying with
emphasis on the functional requirements.
• Little (but not no!) emphasis on the non-functional
requirements (quality factors).
• More and more emphasis is now placed on quality
factors.
• They can be a critical factor in satisfying the overall
requirements.
• In the RUP, non-functional requirements are captured in the
Software Requirements Specification (SRS); functional
requirements are usually captured in Use Case stories.
McCall’s Quality Factors
• McCall has 11 factors and groups them into categories.
– Proposed in 1977; others have added factors since, but these still prevail.
• Three categories:
– Product Operation Factors
•How well it runs….
•Correctness, reliability, efficiency, integrity, and usability
– Product Revision Factors
•How well it can be changed, tested, and redeployed.
•Maintainability; flexibility; testability
– Product Transition Factors
•How well it can be moved to different platforms and
interface with other systems
•Portability; Reusability; Interoperability
• Since these underpin the notion of quality factors – and later
authors merely reword them or add one or two – we will spend
time on these factors.
McCall’s Quality Factors
Product operation factors

• Correctness
• Reliability
• Efficiency
• Integrity
• Usability

How well does it run, and how easy is it to use?


McCall’s Quality Factors
Category: Product Operation Factors
1. Correctness.
• Please note that we are asserting that ‘correctness’ issues are
arising from the requirements documentation and the specification
of the outputs…
• Examples include:
– Specifying accuracies for correct outputs at, say, NLT <1% errors,
that could be affected by inaccurate data or faulty
calculations;
– Specifying the completeness of the outputs provided, which
can be impacted by incomplete data (often done)
– Specifying the timeliness of the output (time between event and
its consideration by the software system)
– Specifying the standards for coding and documenting the
software system
– we have talked about this: standards and integration;
Essential!!
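An accuracy spec like the "<1% errors" example above can be checked mechanically during acceptance. A minimal sketch, where the function name, counts, and threshold are all illustrative assumptions:

```python
# Hypothetical acceptance check for a correctness spec such as
# "no more than 1% erroneous outputs"; the 1% threshold is an assumption.

def meets_accuracy_spec(total_outputs, erroneous_outputs, max_error_rate=0.01):
    """Return True if the observed error rate is within the specified limit."""
    return erroneous_outputs / total_outputs <= max_error_rate

print(meets_accuracy_spec(10_000, 80))    # 0.8% error rate -> True
print(meets_accuracy_spec(10_000, 150))   # 1.5% error rate -> False
```

The point is that a correctness requirement only becomes testable once the acceptable error rate is stated as a number in the specs.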
McCall’s Quality Factors
Category: Product Operation Factors
2. Reliability Requirements. (remember, this quality factor is
specified in the specs!)
• Reliability requirements deal with the failure to provide service.
–Address failure rates either overall or to required functions.
• Example specs:
–A heart monitoring system must have a failure rate of less than
one per million cases.
–Downtime for a system will not be more than ten minutes per
month (me)
–MTBF and MTTR – old engineering measures, but still applicable.
3. Efficiency Requirements. Deals with the hardware resources
needed to perform the functions of the software.
–Here we consider MIPS, MHz (cycles per second); data storage
capabilities measured in MB or TB; communication lines (usually
measured in KBPS, MBPS, or GBPS).
–Example spec: simply very slow communications…
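The MTBF/MTTR figures mentioned above combine into the classic steady-state availability ratio, availability = MTBF / (MTBF + MTTR). A minimal sketch with illustrative numbers (not from the text):

```python
# Steady-state availability from the classic reliability figures:
#     availability = MTBF / (MTBF + MTTR)

def availability(mtbf_hours, mttr_hours):
    """Fraction of time the system is expected to be operational."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Example: MTBF of 500 hours with a 2-hour mean time to repair:
print(f"{availability(500, 2):.2%}")   # roughly 99.6% availability
```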
McCall’s Quality Factors
Category: Product Operation Factors
4. Integrity Requirements – deal with system security, that is,
preventing access by unauthorized persons.
• Huge nowadays: cyber security, Internet security,
network security, and more. These are certainly
not the same!

5. Usability Requirements – deal with the scope of staff
resources needed to train new employees and to operate the
software system.
– Deals with learnability, utility, usability, and more. (me)
– Example spec: A staff member should be able to
process n transactions per unit time. (me)
Product revision factors

• Maintainability
• Flexibility
• Testability

Can I fix it easily, retest, version it, and deploy it


easily?
McCall’s Quality Factors
Category: Product Revision Software Factors
These deal with requirements that affect the complete range of
software maintenance activities:
– corrective maintenance,
– adaptive maintenance, and
– perfective maintenance
– KNOW THE DIFFERENCES!
• 1. Maintainability Requirements
– The degree of effort needed to identify reasons (find the
problem) for software failure and to correct failures and to
verify the success of the corrections.
– Deals with the modular structure of the software, internal
program documentation, programmer manual, architectural
and detail design and corresponding documentation
– Example specs: size of module <= 30 statements.
– Refactoring...
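A spec like "size of module <= 30 statements" can be checked automatically. A minimal sketch, assuming Python modules; the checker name and the sample source are hypothetical:

```python
# Hypothetical checker for a maintainability spec like the one above
# ("size of module <= 30 statements"); the limit is an assumption.
import ast

def statement_count(source):
    """Count the statements in a Python module's source text."""
    tree = ast.parse(source)
    return sum(isinstance(node, ast.stmt) for node in ast.walk(tree))

module_source = "x = 1\ny = 2\nprint(x + y)\n"
count = statement_count(module_source)
print(count, "statements; within the 30-statement limit:", count <= 30)
```

Checks like this are exactly what coding-standard tooling automates, so the maintainability requirement can be enforced rather than merely stated.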
McCall’s Quality Factors
Category: Product Revision Software Factors
2. Flexibility Requirements – deal with the resources needed to
change (adapt) the software for different types of customers who
may use the app a little differently;
– May also involve a little perfective maintenance, to perhaps
do a little better in a customer’s slightly more
robust environment.
– Different clients exercise software differently. This is big!

3. Testability Requirements –
– Are intermediate results of computations predefined to assist
testing?
– Are log files created? Backup?
– Does the software diagnose itself prior to and perhaps during
operations?
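The "predefined intermediate results" idea can be sketched as follows (the billing example and all names are illustrative assumptions): each computation step is logged so testers can verify it against expected values in the log file.

```python
# Sketch of logging intermediate results for testability; the billing
# example and names are illustrative, not from the text.
import logging

logging.basicConfig(level=logging.DEBUG, format="%(name)s %(levelname)s %(message)s")
log = logging.getLogger("billing")

def net_price(gross, discount_rate, tax_rate):
    discounted = gross * (1 - discount_rate)
    log.debug("discounted=%.2f", discounted)   # intermediate result for testers
    taxed = discounted * (1 + tax_rate)
    log.debug("taxed=%.2f", taxed)             # intermediate result for testers
    return round(taxed, 2)

print(net_price(100.0, 0.10, 0.20))   # 108.0
```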
Product transition factors

• Portability
• Reusability
• Interoperability

Can I move the app to different hardware?


Interface easily with different hardware / software
systems; can I reuse major portions of the code
with little modification to develop new apps?
McCall’s Quality Factors
Category: Product Transition Software
Quality Factors
1. Portability Requirements: If the software must be ported to
different environments (different hardware, operating systems,
…) and still maintain an existing environment, then portability
is a must.

2. Reusability Requirements: Are we able to reuse parts of the
app for new applications?
– Can save immense development costs, since the reused parts
have already been tested and had their errors found.
– Certainly results in higher quality software and faster
development.
– A very big deal nowadays.
McCall’s Quality Factors
Category: Product Transition Software Quality
Factors
3. Interoperability Requirements: Does the application need
to interface with other existing systems?
– Frequently these will be known ahead of time, and plans
can be made to provide for this requirement at
design time.
– Sometimes these systems can be quite different: different
platforms, different databases, and more.
– Also, industry or standard application structures in certain
areas can be specified as requirements.
Alternatives
• Some other SQA professionals have offered essentially
renamed quality factors.
• One has offered 12 factors; another 15 factors.
• In total, five new factors were suggested:
• Evans and Marciniak offer two ‘new’ ones:
– Verifiability and Expandability
• Deutsch and Willis offer three ‘new’ ones:
– Safety,
– Manageability, and
– Survivability
McCall's factor model
and alternative models
Outline
1.1. What is software?
1.2. Classification of the causes of software errors
1.3. Software quality assurance – definition and
objective
1.4. Software quality factors.
1.5. Factors affecting intensity of quality assurance
Factors affecting Intensity of SQA Activities
• SQA activities are linked to the completion of a project phase
– requirements, design, etc.
• The SQA activities need to be integrated into a
development plan that implements one or more software
development models, such as the waterfall, prototyping,
spiral, …
• They need to be treated as activities just like other, more
traditional activities – entered in the plan, scheduled, etc.
Factors affecting Intensity of SQA Activities
•SQA planners need to determine
– A list of SQA activities needed for the project
– And then for each activity, they need to decide on
• Timing
• Type of QA activity to be applied (there are several)
• Who performs the activity and resources required.
– Important to note that many participate in SQA
activities
» Development team
» Department staff members
» Independent bodies
• Resources required for the removal of defects and
introduction of changes.
Factors affecting Intensity of SQA Activities
• It is sad testimony that few want to allocate the necessary
time for SQA activities.
– This means time for the SQA activities themselves and then
time for the subsequent removal of defects.
– Often, there is no time for the follow-on work!!
• These activities are not simply cranked in and absorbed!
• So, time for SQA activities and defect correction actions
needs to be examined.
Factors Affecting the Required
Intensity of QA Activity
• Project Factors
– Magnitude of the project – how big is it?
– Technical complexity and difficulty
– Extent of reusable software components – a real factor
– Severity of failure outcomes if project fails – essential!
• Team Factors
– Professional qualifications of the team members
– Team acquaintance w/ project and experience in the
area
– Availability of staff members who can professionally
support the team, and
– Percentage of new staff members in the team.
Chapter 2: Integrating quality
activities in the project life cycle
Reference
This chapter draws on the following books:
Mastering Software Quality Assurance: Best Practices, Tools
and Techniques for Software Developers
Introduction to Software Testing
Outline

2.1. Software development methodologies


2.2. Testing level
2.2.1. Introduction
2.2.2. Unit test
2.2.3. Integration test
2.2.4. System test
2.2.5. Acceptance test
2.3. A model of SQA defect removal effectiveness
and cost.
2.4. Create SQA plan
The SDLC

🠶 The ‘classic model.’


🠶 Still in use today.
🠶 Captures the major building blocks in development
🠶 Linear sequence
🠶 Highly structured; plan-driven; Heavy-weight process
🠶 Product delivered for evaluation and deployment at the
end of development and testing
🠶 Big bang approach
🠶 Used for major projects of length
🠶 But serves as a framework for other models…
Prototyping Model
🠶 Replaces some parts of the SDLC with an evolutionary
and iterative process.
🠶 Software prototypes are repeatedly provided to the customer for
evaluation and feedback.
🠶 Primarily iterates design and implementation.
🠶 The development team is provided with the requirements.
🠶 Ultimately, the product reaches a satisfactory completion.
🠶 Then, the remainder of the process is carried out in the context
of another model, such as the SDLC.
Spiral Model
🠶 Uses an iterative approach designed to address each phase of
development by obtaining customer comments and changes,
risk analysis, and resolution.
🠶 The spiral model typically has a ‘spiral’ for each of the traditional
development phases.
🠶 Within a cycle, specific engineering (design, development, etc.)
can take place using any other model, such as the SDLC or prototyping.
🠶 The Spiral Model (Barry Boehm) is a risk-centered development
model where each spiral includes major risk activities and
assessments.
🠶 Was developed after the SDLC in response to the delayed
discovery of risk in the SDLC.
🠶 Like the SDLC, it is considered a heavy-weight, plan-driven
methodology and is highly structured.
The Object-Oriented Model
🠶 The emphasis here is reusability via reusable objects and
components.
🠶 Component-based software development.
🠶 For components that are not available, the developer may
🠶 prototype needed modules,
🠶 use an SDLC approach,
🠶 purchase libraries of objects,
🠶 develop his or her own, etc.
The SDLC
🠶 Requirements Definition: done
by customers
🠶 Analysis: analyze requirements
to form an initial software
model
🠶 Design: Detailed definition of
inputs/outputs and processes
including data structures,
software structure, etc.
The SDLC
🠶 Coding: Design translated
into code.
🠶 Coding includes SQA
activities such as
inspections, unit tests and
integration tests
🠶 Many takeoffs from this:
These tests done by
developers: individual
(unit), group or team
(integration tests….)
The SDLC
🠶 System Tests: Goal: to
discover errors / correct errors
to achieve an acceptable
level of quality. Carried out by
developers prior to delivery.
🠶 Sometimes ‘acceptance tests’
carried out by customer or in
conjunction with developer
The SDLC
🠶 Installation / Conversion:
🠶 After testing, system is
installed and/or replaces an
existing system;
🠶 Requires software / data
conversion
🠶 Important to not interrupt
daily activities during
conversion process.
🠶 Install incrementally, run in
parallel; turn switch and live
with it, etc.
The SDLC
🠶 Operations and Maintenance:
🠶 Hopefully done for years.
🠶 Maintenance:
🠶 Corrective
🠶 Adaptive
🠶 Perfective

🠶 Lots of variations on the classic
SDLC, many in response to
problems….
🠶 Notice the feedback loops?
The SDLC
“V” Model
The Prototyping Model
🠶 One main idea behind prototyping is the rapid development
of prototypes combined with customer availability for feedback.
🠶 Often prototyping tools are used to help.
🠶 Developers respond to feedback and add additional
parts as the application evolves into an acceptable product.
🠶 Recognize that this process can be inserted into the SDLC or
other models.
The Prototyping Model

A good approach for small to


medium-sized projects.
Very important: customer involvement.
The Spiral Model
🠶 A heavy-weight, plan-driven, highly structured
approach for large projects.
🠶 Especially designed for those with higher chances of
failure.
🠶 Combines the iterative model with emphases on risk
assessment, customer participation, prototyping, and
more.
🠶 Definitely an iterative process.
The Spiral Model
Can see each spiral
includes:

Planning

Risk Analysis / Resolution

Engineering activities
(design, code, test…)

Customer Evaluation
(errors, changes, new
requirements…)

Source: After Boehm 1988 (© 1988 IEEE)


The Spiral Model
The revised Spiral Model provides
the customer with improved chances
for changes, and
the developer better chances to stay
within budget and on time.
This is done by increased emphasis on
customer participation and
on engineering activities:
extra sections of the spiral are
dedicated to customer actions
and developer engineering.

Source: After Boehm 1988 (© 1988 IEEE)


The Object-Oriented Model
🠶 Easy integration of existing software modules (objects /
components) into newly developed software systems.
🠶 Process begins with OOA and OOD
🠶 Then, acquire suitable components from reusable
software component libraries (or purchase them).
🠶 Otherwise, develop as needed.
🠶 Can involve adding to repertoire of library
components.
🠶 Economy: integrating reusable components; much
lower cost than developing
🠶 Improved quality – using tested components
🠶 Shorter development times: integration of reusable
software components.
Object Oriented Development Model
Disciplines, Phases, and Iterations
[Figure: use-case work across the phases – Inception: identify most of the
use cases to define scope and detail the critical ones (10%); Elaboration:
detail the use cases (80% of the requirements); Construction: identify and
detail the remaining use cases; Transition: track and capture requirements
changes.]
Inception Phase

🠶 Overriding goal is obtaining buy-in from all interested parties


🠶 Initial requirements capture
🠶 Cost-benefit analysis
🠶 Initial risk analysis
🠶 Project scope definition
🠶 Defining a candidate architecture
🠶 Development of a disposable prototype
🠶 Initial use case model (10%-20% complete)
🠶 First pass at a domain model

Elaboration Phase
🠶 Requirements Analysis and Capture
🠶 Use Case Analysis
🠶 Use Cases (80% written and reviewed by end of phase)
🠶 Use Case Model (80% done)
🠶 Scenarios
🠶 Sequence and Collaboration Diagrams
🠶 Class, Activity, Component, State Diagrams
🠶 Glossary (so users and developers can speak common vocabulary)
🠶 Domain Model
🠶 To understand the problem: the system’s requirements as they
exist within the context of the problem domain
🠶 Risk Assessment Plan revised
🠶 Architecture Document
Construction Phase

🠶 Focus is on implementation of the design


🠶 Cumulative increase in functionality
🠶 Greater depth of implementation (stubs fleshed out)
🠶 Greater stability begins to appear
🠶 Implement all details, not only those of central architectural
value
🠶 Analysis continues, but design and coding predominate

25
Transition Phase

🠶 The transition phase consists of the transfer of the system to the user
community
🠶 Includes manufacturing, shipping, installation, training, technical
support, and maintenance
🠶 Development team begins to shrink
🠶 Control is moved to maintenance team
🠶 Alpha, Beta, and final releases
🠶 Software updates
🠶 Integration with existing systems (legacy, existing versions…)

26
Agile

27
Outline

2.1. Software development methodologies


2.2. Testing level
2.2.1. Introduction
2.2.2. Unit test
2.2.3. Integration test
2.2.4. System test
2.2.5. Acceptance test
2.3. A model of SQA defect removal effectiveness
and cost.
2.4. Create SQA plan
Quality Assurance vs Testing

(Figures: diagrams relating Quality Assurance and Testing; testing is
one of the many quality assurance activities.)
Quality Assurance

Multiple activities throughout the dev process


🠶 Development standards
🠶 Version control
🠶 Change/Configuration management
🠶 Release management
🠶 Testing
🠶 Quality measurement
🠶 Defect analysis
🠶 Training
Testing

Also consists of multiple activities


🠶 Unit testing
🠶 Whitebox Testing
🠶 Blackbox Testing
🠶 Data boundary testing
🠶 Code coverage analysis
🠶 Exploratory testing
🠶 Ad-hoc testing
🠶 …
Testing Axioms

🠶 Testing cannot show that bugs do not exist

🠶 Exhaustive testing is impossible for non-trivial applications

🠶 Software Testing is a Risk-Based Exercise. Testing is done differently


in different contexts, i.e. safety-critical software is tested differently
from an e-commerce site.

🠶 Testing should start as early as possible in the software


development life cycle

🠶 The More Bugs you find, the More bugs there are.
Common Error Categories

🠶 Boundary-Related
🠶 Calculation/Algorithmic
🠶 Control flow
🠶 Errors in handling/interpreting data
🠶 User Interface
🠶 Exception handling errors
🠶 Version control errors
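Boundary-related errors top this list, and they are cheap to probe for: test each boundary of a valid range plus the values immediately outside it. A minimal sketch in Python, using a hypothetical `validate_age` function with an illustrative 18–65 valid range (neither is from the slides):

```python
# Hypothetical function under test: accepts integers in the inclusive
# range 18..65. The function and range are illustrative only.
def validate_age(age: int) -> bool:
    return 18 <= age <= 65

# Test each boundary plus the values just outside it -- the region
# where boundary-related defects (e.g. using < instead of <=) hide.
boundary_cases = {
    17: False,  # just below lower bound
    18: True,   # lower bound
    19: True,   # just above lower bound
    64: True,   # just below upper bound
    65: True,   # upper bound
    66: False,  # just above upper bound
}
for value, expected in boundary_cases.items():
    assert validate_age(value) == expected, f"boundary defect at {value}"
```

The same six-value pattern (bound, bound ± 1, for each end of the range) applies to any range-constrained input.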
Testing Principles

🠶 All tests should be traceable to customer requirements


🠶 The objective of software testing is to uncover errors.
🠶 The most severe defects are those that cause the program to fail to
meet its requirements.

🠶 Tests should be planned long before testing begins


🠶 Detailed tests can be defined as soon as the system design is
complete

🠶 Tests should be prioritised by risk since it is impossible to


exhaustively test a system.
🠶 Pareto principle holds true in testing as well.
What do we test? When do we test it?

🠶 All artefacts, throughout the development life cycle.


🠶 Requirements
🠶 Are they complete?
🠶 Do they conflict?
🠶 Are they reasonable?
🠶 Are they testable?
What do we test? When do we test it?

🠶 Design
🠶 Does this satisfy the specification?
🠶 Does it conform to the required criteria?
🠶 Will this facilitate integration with existing systems?
🠶 Implemented Systems
🠶 Does the system do what it is supposed to do?
🠶 Documentation
🠶 Is this documentation accurate?
🠶 Is it up to date?
🠶 Does it convey the information that it is meant to convey?
The Testing Process
🠶 Test Planning
🠶 Test Design and Specification
🠶 Test Implementation (if automated)
🠶 Test Result Analysis and Reporting
🠶 Test Control, Management and Review
Test Planning

🠶 Test planning involves the establishment of a test plan


🠶 Common test plan elements:
🠶 Entry criteria
🠶 Testing activities and schedule
🠶 Testing tasks assignments
🠶 Selected test strategy and techniques
🠶 Required tools, environment, resources
🠶 Problem tracking and reporting
🠶 Exit criteria
Test Design and Specification
🠶 Review the test basis (requirements, architecture, design, etc)
🠶 Evaluate the testability of the requirements of a system
🠶 Identifying test conditions and required test data
🠶 Design the test cases
🠶 Identifier
🠶 Short description
🠶 Priority of the test case
🠶 Preconditions
🠶 Execution
🠶 Post conditions
🠶 Design the test environment setup (Software, Hardware, Network
Architecture, Database, etc)
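The test case fields listed above can be captured in a lightweight structure. The sketch below is illustrative only; the class and field names are not prescribed by any standard:

```python
from dataclasses import dataclass, field

# One record per designed test case, mirroring the fields listed above:
# identifier, short description, priority, preconditions, execution
# steps, and postconditions.
@dataclass
class TestCase:
    identifier: str
    description: str
    priority: int                              # e.g. 1 = highest
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)  # the "execution" section
    postconditions: list = field(default_factory=list)

# A hypothetical example test case.
tc = TestCase(
    identifier="TC-LOGIN-001",
    description="Valid user can log in",
    priority=1,
    preconditions=["User account exists", "System is reachable"],
    steps=["Open login page", "Enter valid credentials", "Submit"],
    postconditions=["User session is created"],
)
```

Keeping test cases in a structured form like this makes it easy to sort by priority, generate documentation, or feed them to an automated runner later.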
Test Execution

🠶 Verify that the environment is properly set up


🠶 Execute test cases
🠶 Record results of tests (PASS | FAIL | NOT EXECUTED)
🠶 Repeat test activities
🠶 Regression testing
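Recording each result as PASS, FAIL, or NOT EXECUTED makes the regression set easy to extract. A minimal sketch (the test IDs are made up for illustration):

```python
from enum import Enum

# The three execution outcomes listed above.
class Result(Enum):
    PASS = "PASS"
    FAIL = "FAIL"
    NOT_EXECUTED = "NOT EXECUTED"

# Recorded results of one execution cycle (illustrative test IDs).
results = {
    "TC-001": Result.PASS,
    "TC-002": Result.FAIL,
    "TC-003": Result.NOT_EXECUTED,
}

# Failed cases are the candidates for re-execution (regression testing)
# once the underlying defects are fixed; unexecuted ones still need a run.
rerun_queue = [tc for tc, r in results.items() if r is not Result.PASS]
```

In practice a test management tool keeps this log, but the underlying data is exactly this: a case-to-outcome mapping per execution cycle.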
Result Analysis and Reporting

🠶 Reporting problems
🠶 Short Description
🠶 Where the problem was found
🠶 How to reproduce it
🠶 Severity
🠶 Priority
🠶 Can this problem lead to new test case ideas?
Test Control, Management and Review

🠶 Exit criteria should be used to determine when


testing should stop. Criteria may include:
🠶 Coverage analysis
🠶 Faults pending
🠶 Time
🠶 Cost
🠶 Tasks in this stage include
🠶 Checking test logs against exit criteria
🠶 Assessing if more tests are needed
🠶 Write a test summary report for stakeholders
Levels of Testing

• User Acceptance Testing
• System Testing
• Integration Testing
• Unit Testing
System Testing

(Figure: Components A, B, C and the database exercised together as a
complete system)
Integration Testing

(Figure: interacting components, together with the database, tested as
combined groups)
Unit Testing

(Figure: a single component tested in isolation from the other
components and the database)
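A unit test exercises one component in isolation, replacing its collaborators (such as the database in the diagram) with stubs. A minimal sketch, with a hypothetical `ComponentC` and a hand-written stub:

```python
# Hypothetical component under test; in a unit test its database
# dependency is replaced by a stub so only ComponentC's logic runs.
class ComponentC:
    def __init__(self, db):
        self.db = db

    def active_user_count(self):
        return sum(1 for u in self.db.fetch_users() if u["active"])

class StubDB:
    """Stands in for the real database during the unit test."""
    def fetch_users(self):
        return [
            {"name": "alice", "active": True},
            {"name": "bob", "active": False},
        ]

# The unit test itself: fixed stub input, assertion on the component's
# behavior -- no real database, network, or other components involved.
assert ComponentC(StubDB()).active_user_count() == 1
```

In practice the same test would be written with a framework such as `unittest` or `pytest`, which also handles result recording and regression reruns.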
Outline

2.1. Software development methodologies


2.2. Testing level
2.2.1. Introduction
2.2.2. Unit test
2.2.3. Integration test
2.2.4. System test
2.2.5. Acceptance test
2.3. A model of SQA defect removal effectiveness
and cost.
2.4. Create SQA plan
Model for SQA defect removal
effectiveness and cost

The model addresses two quantitative aspects of an SQA plan that
comprises several defect detection activities:

a. The SQA plan's total effectiveness in removing project defects, and

b. The total cost of removing project defects.

Note again that SQA activities must be integrated within the


project’s development plan.
Model for Defect Removal
The data:

Model based on three types of data:

Defect origin distribution


in which phase did the defects occur

Defect removal effectiveness, and


how effective are we at removal of
defects?

Cost of defect removal.


how much does it cost per defect per
phase
Defect Origin Distribution

• Very consistent over many years:


• Distribution of Defects:
– Requirements Specs 15%
– Design 35%
– Coding / integration 40%
– Documentation 10%
Defect Removal Effectiveness

• Generally speaking, the percentage of removed defects


is lower than the percentage of detected defects,
because some corrections are ineffective or inadequate.
• We simply miss some!!
• Others are undetected and uncorrected and passed on
to successive development phases.
• Lingering defects coupled with introduced defects in
current development phase add up!!!
• For discussion purposes, we will assume the filtering
effectiveness of accumulated defects of each quality
assurance activity is not less than 40%, that is, each activity
removes at least 40% of the incoming defects.
Defect Removal Effectiveness
Removal effectiveness: average defect filtering effectiveness rate
per QA activity
🠶 requirements specs review 50%
🠶 design inspection 60%
🠶 design review 50%
🠶 code inspections 65%
🠶 unit test 50%
🠶 unit test performed after code inspection 30%
🠶 integration test 50%
🠶 system tests / acceptance 50%
🠶 documentation review 50%
Cost Removal

• Removal of defects differs very significantly by


development phase.
• Cost are MUCH greater in later development phases.

• Note: In general, defect removal data is not


commonly available.
• Most agree with the data based on key studies. (next
slide)
The Model

Model is based on following assumptions:


• Development process is linear, sequential following waterfall
model
• A number of new defects is introduced in each phase
• Review and test software quality assurance activities serve as
filters, removing a percentage of defects and letting the rest pass
to the next development phase, as we saw three slides back.
• At each phase, the incoming defects are the sum of those not yet
removed plus the new defects introduced in the current phase
• Cost of defect removal is calculated for each SQA activity by
multiplying the number of defects removed by the relative cost of
removing a defect. (see table, previous slide)
• Remaining defects are passed to the customer. (this is the
heaviest cost for defect removal)
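The assumptions above can be turned into a small calculation. The sketch below is illustrative only: the new-defect counts loosely follow the origin distribution quoted earlier, but the filtering rates and relative costs are placeholder numbers, not figures from the model.

```python
# Each phase: (name, new defects introduced, filtering rate of the
# phase's QA activity, relative cost per defect removed in that phase).
# All numbers are illustrative placeholders.
phases = [
    ("requirements", 15, 0.50, 1),
    ("design",       35, 0.50, 2.5),
    ("coding",       40, 0.50, 6.5),
    ("system test",   0, 0.50, 16),
]
CUSTOMER_COST = 40  # relative cost of a defect found by the customer

passed = 0          # defects escaping the previous phase's filter
total_cost = 0.0
for name, new, rate, cost in phases:
    incoming = passed + new           # accumulated defects entering this filter
    removed = incoming * rate         # defects caught by this QA activity
    total_cost += removed * cost
    passed = incoming - removed       # the residue flows to the next phase

total_cost += passed * CUSTOMER_COST  # remaining defects reach the customer
print(f"escaped to customer: {passed:.1f}, total relative cost: {total_cost:.1f}")
```

Raising a filtering rate early in the chain shrinks both the escaped-defect count and the total cost, which is the quantitative argument for the comprehensive plan discussed next.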
Costs of Defect Removal
• But we can do better using a comprehensive quality assurance
plan with more activities, and hence better filtering.

• The comprehensive quality assurance plan (comprehensive


defects filtering system) accomplishes the following:

• 1. Adds two quality assurance activities, so that two activities are
performed in the design phase as well as in the coding phase:
– A Design Inspection and a Design Review, in place of a Design
Review alone, and
– Code Inspections and a unit test, in place of a simple Unit Test alone.
• 2. Improves the ‘filtering’ effectiveness of other quality
assurance activities.
Outline

2.1. Software development methodologies


2.2. Testing level
2.2.1. Introduction
2.2.2. Unit test
2.2.3. Integration test
2.2.4. System test
2.2.5. Acceptance test
2.3. A model of SQA defect removal effectiveness
and cost.
2.4. Create SQA plan
SQA Plan
🠶 The software quality assurance plan is one of the most important
plans that should be prepared before embarking on a software
development project.
🠶 The following details are recorded in the software quality
assurance plan:
1. Standards—Include coding guidelines, design guidelines, testing
guidelines, etc. selected for use in the project. These standards
ensure a minimum level of quality in software development as well
as uniformity of output from the project resources.
2. Quality control activities—Proposed activities for the project
include code walkthrough, requirements and design review, and
tests (unit testing, integration testing, functional testing, negative
testing, end-to-end testing, system testing, acceptance testing,
etc.).
SQA Plan

4. Procedures and events that trigger causal analysis—Include


failures, defects, and successes.
5. Audits—To analyze the exceptions in the project so that necessary
corrective and preventive actions are taken to ensure the exceptions
do not recur in the project.
6. Institute of Electrical and Electronics Engineers Standard 730—Gives
details on how to prepare a quality assurance plan, including a
suggested template.
SQA Plan

Details of the following standards should be included in the software


quality assurance plan to guide project personnel in carrying out their
assignments effectively and with the desired levels of productivity and
quality:
🠶 Coding standards for the programming languages used in the
project
🠶 Database design standards
🠶 Graphical user interface design standards
🠶 Test case design standards
🠶 Testing standards
🠶 Review standards
🠶 Organizational process reference
SQA Plan

The following specifications of quality levels (quality metrics) for the


project should be stated in the software quality assurance plan:
🠶 Defect injection rate
🠶 Defect density
🠶 Defect removal efficiency for various quality assurance activities
🠶 Productivity for various artifacts of the project
🠶 Schedule variances
SQA Plan

The following quality control activities proposed to be implemented in


the project should be included in the software quality assurance plan:
🠶 Code walkthrough
🠶 Peer review
🠶 Formal review
🠶 Various types of software tests that would be carried out during
project execution, which at a minimum should include the following:
▪ Unit testing
▪ Integration testing
▪ System testing
▪ Acceptance testing
SQA Plan

It also should contain the schedules for the following audits proposed
for the project:
🠶 Periodic conformance audits
🠶 Phase-end audits
🠶 Investigative audits (and criteria)
🠶 Delivery audits
Chapter 3: Reviews
Reference
This chapter refers from the book:
Mastering Software Quality Assurance: Best Practices, Tools
and Techniques for Software Developers
Introduction to Software Testing
Outline

3.1. Review objectives


3.2. Formal review
3.3. Peer reviews
3.4. Implementation of reviews activity in projects
3.4.1. Reviewing requirement specification
3.4.2. Reviewing design document
3.4.3. Reviewing development plan and SQA plan
3.4 Exercise
Outline

3.1. Review objectives


3.2. Formal review
3.3. Peer reviews
3.4. Implementation of reviews activity in projects
3.4.1. Reviewing requirement specification
3.4.2. Reviewing design document
3.4.3. Reviewing development plan and SQA plan
3.4 Exercise
Review

🠶 The design document is key.

🠶 It is checked repeatedly in the development process.

🠶 Typically, reviewed many times before getting a


stamp of approval to proceed with development.

🠶 Unfortunately, we often don’t find our own errors and


thus we need others for reviews.

🠶 Different stakeholders with different viewpoints are


used in the review process.
Review
🠶 A review process is : “a process or meeting during which a
work product or set of work products is presented to project
personnel, managers, users, customers, or other interested
parties for comment or approval.” (IEEE)

🠶 Essential to detect / correct errors in these earlier work


products because the cost of errors downstream is very
expensive!!

🠶 Review Choices:
🠶 Formal Design Reviews (FDR)
🠶 Peer reviews (inspections and walkthroughs)
🠶 Used especially in design and coding phase
Review objectives
Direct objectives – Deal with the current project

a. To detect analysis and design errors as well as subjects


where corrections, changes and completions are required

b. To identify new risks likely to affect the project.

c. To locate deviations from templates, style procedures and


conventions.

d. To approve the analysis or design product. Approval allows the


team to continue on to the next development phase.

Indirect objectives – are more general in nature.

a. To provide an informal meeting place for exchange of


professional knowledge about methods, tools and techniques.

b. To record analysis and design errors that will serve as a basis for
future corrective actions. (very important)
Review objectives
🠶 Many different kinds of reviews that apply to different objectives.
🠶 Reviews are not randomly thrown together.
🠶 Well-planned and orchestrated.
🠶 Objectives, roles, actions, participation, …. Very involved tasks.
🠶 Participants are expected to contribute in their area of expertise.

🠶 Idea behind reviews is to discover problems, NOT to fix them.


🠶 Typically fixed after review and ‘offline’ so to speak.

🠶 Very common to review design documents.


🠶 Thus they are usually well prepared prior to review.
Outline

3.1. Review objectives


3.2. Formal review
3.3. Peer reviews
3.4. Implementation of reviews activity in projects
3.4.1. Reviewing requirement specification
3.4.2. Reviewing design document
3.4.3. Reviewing development plan and SQA plan
3.4 Exercise
Formal Design Reviews

DPR – Development Plan Review


SRSR – Software Requirement Specification Review
PDR – Preliminary Design Review
DDR – Detailed Design Review
DBDR – Data Base Design Review
TPR – Test Plan Review
STPR – Software Test Procedure Review
VDR – Version Description Review
OMR – Operator Manual Review
SMR – Support Manual Review
TRR – Test Readiness Review
PRR – Product Release Review
IPR – Installation Plan Review

Important to note that a design review can take place any time an
analysis or design document is produced, regardless of whether that
document is a requirement specification or an installation
document.
Formal Design Reviews

🠶 Personal experience in:


🠶 Operators’ Manual
🠶 Users’ Manual

🠶 Program Maintenance Manual


🠶 Staff Users Manual

🠶 Installation / Conversion Plan


🠶 Test Plan(s)

🠶 Depend on size of project!


Participants in the Review
Review Leader
🠶 Needs to be external to the project team.
🠶 Must have knowledge and experience in
development of the review type
🠶 Seniority at least as high as the project leader.
🠶 Good relationship with project leader and team

🠶 Generally, this person really needs to know the


project’s material and should NOT be the project
leader.
🠶 Would impact objectivity!
🠶 Would miss things entirely!
Participants in the Review

Review Team
🠶 Needs to be from senior members of the team
with other senior members from other
departments.
🠶 Why?
🠶 Escalation?
🠶 Team size should be 3-5 members.
🠶 Too large, and we have too much coordination.
🠶 This is a time for serious business – not
lollygagging!
Preparations for the Design
Review
🠶 Participants in the review include the
🠶 Review leader,
🠶 Review team, and
🠶 Development team.
🠶 Review Leader:
🠶 appoint team members;
🠶 establishes schedule,
🠶 distribute design documents, and more
Preparations for the Design
Review
🠶 Review Team preparation:
🠶 review document; list comments PRIOR to review session.
🠶 For large design docs, leader may assign parts to individuals;
🠶 Complete a checklist
🠶 Development Team preparations:
🠶 Prepare a short presentation of the document.
🠶 Focus on main issues rather than describing the process.
🠶 This assumes review team has read document and is familiar
with project’s outlines.
The Design Review Session

🠶 Review Leader is key person for sure!

🠶 Start with short presentation (development team)

🠶 Comments by members (review team)

🠶 Comments discussed to determine required actions team must


perform (review and development teams)
The Design Review Session

Decisions regarding design product – tells project’s progress.


🠶 Full approval - Continue on to next phase (may have minor
corrections)
🠶 Partial approval
🠶 Continue to next phase for some parts of project; major
action items needed for remainder of project.
🠶 Continuation of these parts granted only after satisfactory
completion of action items
🠶 Granted by review team member assigned to review
completed actions or by the full review team or special
review team or by some other forum
🠶 Denial of Approval – demands a repeat of Design Review
🠶 Project has major defects, critical defects
The Design Review Report

🠶 This is the Review Leader’s primary responsibility following the


review.
🠶 Important to perform corrections early and minimize delays to
project schedule.
🠶 Report contains:
🠶 Summary of review discussions
🠶 Decision about project continuation
🠶 Full list of action items – corrections, changes, additions the
project team must perform. For each item, anticipated
completion date and responsible person is listed.
🠶 Name of review team member assigned for follow up.
Follow up Process

🠶 The ‘follow-up person’ may be the review team leader


him/herself

🠶 Required to verify each action item fixed as condition for


allowing project to continue to next phase.

🠶 Follow up must be fully documented to enable


clarification of the corrections in the future, if any.
Oftentimes Problems

🠶 Sometimes entire parts of DR are often worthless due to


🠶 inadequately prepared review team, or
🠶 intentional evasion of a thorough review.

🠶 Tipoff:
🠶 Very short report – limited to documented approval of
the design and listing few, if any, defects
🠶 Short report approving continuation to next project
phase in full - listing several minor defects but no action
items.
🠶 A report listing several action items of varied severity but
no indication of follow-up (no correction schedule, no
documented activities, …)
Guidelines for Design Review

Design Review Infrastructure


• Develop checklists for common types of design documents.
• Train senior professionals to serve as a reservoir for Design
Review teams.

• Periodically analyze past Design Review effectiveness.

• Schedule the Design Reviews as part of the project plan

The Design Review Team


• Review team size should be limited, with 3–5 members
being the optimum.
Guidelines for Design Review

The Design Review Session


• Discuss professional issues in a constructive way refraining
from personalizing the issues.
• Keep to the review agenda.
• Focus on detection of defects by verifying and validating the
participants' comments.
• Refrain from discussing possible solutions.
• If disagreement about an error - end the debate by
noting the issue and shifting its discussion to another forum.
• Properly document discussed comments, and the results of
their verification and validation.
• Review session should not exceed two hours.
Guidelines for Design Review

Post-Review Activities

• Prepare the review report, including the action items


• Establish follow-up to ensure the satisfactory performance of
all the list of action items
Outline

3.1. Review objectives


3.2. Formal review
3.3. Peer reviews
3.4. Implementation of reviews activity in projects
3.4.1. Reviewing requirement specification
3.4.2. Reviewing design document
3.4.3. Reviewing development plan and SQA plan
3.4 Exercise
Peer Reviews

🠶 Will discuss
🠶 1. inspections and
🠶 2. walkthroughs.
🠶 Difference between formal design reviews and peer
reviews is really in both their participants and authority.
🠶 DRs: most participants hold superior positions to the
project leaders and customer reps;
🠶 Peer reviews, we have equals
🠶 members of his/her department and other units.
Peer Reviews

🠶 Other major difference is


🠶 degree of authority and
🠶 objective of each review method.

🠶 FDRs: authorized to approved design doc


🠶 work can now continue in project.
🠶 Not granted in peer reviews
🠶 main objectives lie in
🠶 detecting errors and
🠶 deviations from standards.
Participants of Peer Reviews

Inspections:
🠶 Review leader
🠶 The author
🠶 Specialized professionals:
  🠶 Designer
  🠶 Coder or implementer
  🠶 Tester

Walkthroughs:
🠶 Review leader
🠶 The author
🠶 Specialized professionals:
  🠶 Standards enforcer
  🠶 Maintenance expert
  🠶 User representative
Peer Reviews:
Inspections / Walk-Throughs

🠶 Walkthroughs and inspection differ in formality –


🠶 Inspections emphasize the objective of corrective
action; more formal
🠶 Walkthroughs limited to comments on document
reviewed.

🠶 Inspections also look to improve methods as well.


🠶 Inspections are considered to contribute more to
general level of SQA.
Peer Reviews: Inspections
Inspections usually based on a comprehensive infrastructure:
🠶 Development of inspection checklists for each type of
design document as well as coding languages, which are
periodically updated.
🠶 Development of typical defect type frequency tables,
based on past findings to direct inspectors to potential
‘defect concentration areas.’
🠶 Training of competent professionals in inspection process
issues – making it possible for them to serve as inspection
leaders (moderators) or inspection team members
🠶 Periodic analysis of the effectiveness of past inspections to
improve the inspection methodology
🠶 Introduction of scheduled inspections into project activity
plan and allocation of the required resources, including
resources for corrections
(Figure notes: in an inspection, the author is not the presenter;
a walkthrough is much less formal, and the author is the presenter.)
Focus on Peer Reviews:

🠶 So for these two peer review methods, we will look at:


🠶 Participants of peer reviews
🠶 Preparation for peer reviews (some major
differences)
🠶 The peer review session (presenters and emphases
are different)
🠶 Post peer-review activities (differ considerably)
🠶 Peer review efficiency (arguable)
🠶 The principles that we talk about can apply to both
design peer reviews and code peer reviews.
Participants of Peer Reviews
🠶 Optimally, 3-4 participants
🠶 Should be peers of the software system designer-author.
🠶 This allows for free discussion without any intimidation.
🠶 Need a good blend of individual types: a review leader, the author,
and specialized professionals as needed for the focus of the review.
🠶 Review Leader
🠶 Moderator in inspections; Coordinator in walkthroughs.
🠶 Must be well-versed in project development and current
technologies
🠶 Have good relationships with author and development team
🠶 Come from outside the project team
🠶 History of proven experience in coordination / leadership settings
like this.
🠶 For inspections, training as a moderator is required.
Participants of Peer Reviews

🠶 Specialized Professionals – note these are experienced folks.


🠶 These differ by review type: inspections / walkthroughs
🠶 Inspections:
🠶 A designer – generally the systems analyst responsible for
analysis and design of software system reviewed
🠶 A coder or implementer – one who is thoroughly acquainted
with coding tasks, preferably the leader of the designated
coding team.
🠶 Able to detect defects leading to coding errors and other
software implementation issues.
🠶 A tester – experienced professional, preferably leader of
assigned testing team who focuses on identification of design
errors usually detected during the testing phase.
Participants of Peer Reviews

Walk Throughs:
🠶 A standards enforcer – team member specialized in development
standards and procedures;
🠶 locate deviations from these standards and procedures.
🠶 These problems substantially affect the team’s long-term
effectiveness for both development and follow-on
maintenance.
🠶 A maintenance expert – focus on maintainability / testability issues
to detect design defects that may hinder bug correction and
impact performance of future changes.
🠶 A maintenance expert - Focuses also on documentation
(completeness / correctness) vital for maintenance activity.
🠶 A user representative – an internal user (if the customer is in the
unit) or an external representative; adds to the review’s validity due
to his/her point of view as user-customer rather than designer-supplier.
Participants of Peer Reviews
Team Assignments
🠶 Presenter:
🠶 For inspections:
🠶 The presenter of document; chosen by the moderator;
should not be document’s author
🠶 Sometimes the software coder serves as presenter due to
the familiarity of the design logic and its implications for
coding.
🠶 For walk-throughs:
🠶 Author most familiar with the document should be chosen
to present it to the group.
🠶 Some argue that a neutral person should be used.
🠶 Scribe:
🠶 Team leader will often serve as the scribe and record noted
defects to be corrected.
Preparation for a Peer Review Session

🠶 Peer Review Leader’s Preparation for the session:


🠶 For Design Document:
🠶 Select the most difficult / complex sections; sections
prone to defects.
🠶 The most critical sections / where defect can cause
severe damage
🠶 Select team members
🠶 Limit review session to two hours – absolutely.
🠶 Schedule up to two sessions a day if review tasks is sizable.
🠶 Schedule right after the document is ready for inspection.
Don’t wait….
🠶 Distribute the document to the team members prior to the
review session.
Preparation for a Peer Review Session

🠶 Peer review team’s preparations for review session:


🠶 For inspections: team members, preparation is quite thorough;
🠶 For walkthrough brief.
🠶 Inspection: participants must read document and list comments
before inspection begins.
🠶 In overview meeting, the author provides inspection team members
with the necessary background for reviewing chosen document,
project in general, logic, processes, outputs, inputs, interfaces.
🠶 Tool for inspector’s review: checklist for specific documents.
🠶 For walkthroughs: team briefly reads materials for general overview of
project
🠶 Generally they lack detailed knowledge of the project and its substantive area.
🠶 In most cases, team participants not required to prepare advance
comments.
The Peer Review Session

🠶 Procedurally, presenter reads document section and may


add an explanation.
🠶 Participants may offer comments on doc or on comments
🠶 Restrict discussion to identification of errors – no solutions.

🠶 Presenter in walkthrough provides an overview


🠶 Walkthrough Scribe records each error (location,
description, type) – incorrect, missing, etc.

🠶 Inspection scribe will add estimated severity of each


defect, a factor to be used in the statistical analysis of
defects found and for the foundation of preventive /
corrective actions.
The Peer Review Session

Session Documentation
🠶 For inspections – much more comprehensive
🠶 Inspection Session Findings Report – produced by scribe
🠶 Inspection Session Summary Report – compiled by
inspection leader after session or series of sessions dealing
with the same document
🠶 Report summarizes inspection findings and resources
invested in the inspections
🠶 Report serves as inputs for analysis aimed at inspection
process improvement and corrective actions that go
beyond the specific document or project.
🠶 For walkthroughs – copies of the error documentation should
be provided to the development team and the session
participants.
The Post Review Session

🠶 Here is the most fundamental differentiating element


between the two peer review methods.

🠶 Inspection:
🠶 Does not end with a review session or distribution of
reports
🠶 Post inspection activities are conducted to attest to:
🠶 Prompt, effective correction / reworking of all errors
🠶 Transmission of the inspection reports to controlling
authority for analysis
The Efficiency of Peer Reviews

🠶 These activities are under constant debate.


🠶 Some of the more common metrics applied to estimate
the effectiveness of peer reviews, suggested by literature:

🠶 Peer review detection efficiency (average hours


worked per defect detected)
🠶 Peer review defect detection density (average
number of defects detected per page of the design
document)
🠶 Internal peer review effectiveness (defects detected by the peer
review as a percentage of the total defects detected by the
developer)
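These metrics are simple ratios. A sketch with made-up session numbers (all inputs are illustrative, not data from the literature):

```python
# Hypothetical inputs for one review session.
hours_invested = 12                    # total preparation + session hours
defects_found_in_review = 8
pages_reviewed = 40
defects_found_by_developer_total = 20  # review + later testing, etc.

# The three metrics listed above.
detection_efficiency = hours_invested / defects_found_in_review   # hours per defect
detection_density = defects_found_in_review / pages_reviewed      # defects per page
internal_effectiveness = (
    100 * defects_found_in_review / defects_found_by_developer_total
)                                                                 # percent

print(f"{detection_efficiency:.1f} h/defect, "
      f"{detection_density:.2f} defects/page, "
      f"{internal_effectiveness:.0f}% of developer-detected defects")
```

Tracked over many reviews, these ratios let an organization compare review methods and decide where review effort pays off most.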
The Efficiency of Peer Reviews

🠶 (Not a lot of data on findings) An interesting study undertaken by


Cusumano reports results of a study on the effectiveness of design
review, code inspection, and testing at Fujitsu from 1977 to 1982.
🠶 Findings are still of interest, as data shows substantial improvement
in software quality associated with an increased share of code
inspection and design reviews and a reduced share of software
testing.
🠶 Software quality measured here by the number of defects per
1000 lines of maintained code, detected by the users during the
first six months of regular software system use.
🠶 The results only refer to the inspection method; one guesses a
similar result would apply to walkthrough methods.
Outline

3.1. Review objectives


3.2. Formal review
3.3. Peer reviews
3.4. Implementation of reviews activity in projects
3.4.1. Reviewing requirement specification
3.4.2. Reviewing design document
3.4.3. Reviewing development plan and SQA plan
3.4 Exercise
Review Checklist for Software Project
Planning
🠶 Is the software scope unambiguously defined and bounded?
🠶 Is terminology clear?
🠶 Are resources adequate for the scope?
🠶 Are resources readily available?
🠶 Are tasks properly defined and sequenced?
🠶 Is the basis for cost estimation reasonable? Has it been
developed using two different sources?
🠶 Have historical productivity and quality data been used?
🠶 Have differences in estimates been reconciled?
🠶 Are pre-established budgets and deadlines realistic?
🠶 Is the schedule consistent?
Review Checklist for Software Requirements
Analysis
🠶 Is the information domain analysis complete, consistent, and
accurate?
🠶 Is problem partitioning complete?
🠶 Are external and internal interfaces properly defined?
🠶 Are all requirements traceable to the system level?
🠶 Is prototyping conducted for the customer?
🠶 Is performance achievable with constraints imposed by other
system elements?
🠶 Are requirements consistent with schedule, resources, and
budget?
🠶 Are validation criteria complete?
Review Checklist for Software Design
(Preliminary Design Review)

🠶 Are software requirements reflected in the software


architecture?
🠶 Is effective modularity achieved? Are modules functionally
independent?
🠶 Is program architecture factored?
🠶 Are interfaces defined for modules and external system
elements?
🠶 Is data structure consistent with software requirements?
🠶 Has maintainability been considered?
Review Checklist for Software Design
(Design Walkthrough)
🠶 Does the algorithm accomplish the desired function?
🠶 Is the algorithm logically correct?
🠶 Is the interface consistent with architectural design?
🠶 Is logical complexity reasonable?
🠶 Have error handling and “antibugging” been specified?
🠶 Is local data structure properly defined?
🠶 Are structured programming constructs used throughout?
🠶 Is design detail amenable to the implementation language?
🠶 Are operating system or language-dependent features used?
🠶 Is compound or inverse logic used?
🠶 Has maintainability been considered?
Review Checklist for Coding
🠶 Is the design properly translated into code? (The results of the
procedural design should be available at this review)
🠶 Are there misspellings or typos?
🠶 Has proper use of language conventions been made?
🠶 Is there compliance with coding standards for language style,
comments, module prologue?
🠶 Are incorrect or ambiguous comments present?
🠶 Are typing and data declaration proper?
🠶 Are physical constraints correct?
🠶 Have all items on the design walkthrough checklist been
reapplied (as required)?
Review Checklist for Software Testing (Test Plan)
🠶 Have major test phases been properly identified and
sequenced?
🠶 Has traceability to validation criteria/requirements been
established as part of software requirements analysis?
🠶 Are major functions demonstrated early?
🠶 Is the test plan consistent with the overall project plan?
🠶 Has a test schedule been explicitly defined?
🠶 Are test resources and tools identified and available?
🠶 Has a test recordkeeping mechanism been established?
🠶 Have test drivers and stubs been identified, and has work to
develop them been scheduled?
🠶 Has stress testing for software been specified?
Review Checklist for Software Testing
(Test Procedure)
🠶 Have both white and black box tests been specified?
🠶 Have all independent logic paths been tested?
🠶 Have test cases been identified and listed with expected
results?
🠶 Is error handling to be tested?
🠶 Are boundary values to be tested?
🠶 Are timing and performance to be tested?
🠶 Has acceptable variation from expected results been
specified?
Review Checklist for Maintenance
🠶 Have side effects associated with change been considered?
🠶 Has the request for change been documented, evaluated,
and approved?
🠶 Has the change, once made, been documented and
reported to interested parties?
🠶 Have appropriate FTRs been conducted?
🠶 Has a final acceptance review been conducted to assure
that all software has been properly updated, tested, and
replaced?
Formal Technical Review (FTR)
🠶 Software quality assurance activity that is performed by software
engineering practitioners
🠶 Uncover errors in function, logic, or implementation for any
representation of the software
🠶 Verify that the software under review meets its requirements
🠶 Assure that the software has been represented according to
predefined standards
🠶 Achieve software that is developed in a uniform manner
🠶 Make projects more manageable
🠶 FTR is actually a class of reviews
🠶 Walkthroughs
🠶 Inspections
🠶 Round-robin reviews
🠶 Other small group technical assessments of the software
The Review Meeting
🠶 Constraints
🠶 Between 3 and 5 people (typically) are involved
🠶 Advance preparation should occur, but should involve no more
than 2 hours of work for each person
🠶 Duration should be less than two hours
🠶 Components
🠶 Product - A component of software to be reviewed
🠶 Producer - The individual who developed the product
🠶 Review leader - Appointed by the project leader; evaluates the
product for readiness, generates copies of product materials, and
distributes them to 2 or 3 reviewers
🠶 Reviewers - Spend between 1 and 2 hours reviewing the product,
making notes, and otherwise becoming familiar with the work
🠶 Recorder - The individual who records (in writing) all important
issues raised during the review
Review Reporting and Recordkeeping
🠶 Review Summary Report
🠶 What was reviewed?
🠶 Who reviewed it?
🠶 What were the findings and conclusions?
🠶 Review Issues List
🠶 Identify the problem areas within the product
🠶 Serve as an action item checklist that guides the producer
as corrections are made
Guidelines
🠶 Review the product, not the producer
🠶 Set an agenda and maintain it
🠶 Limit debate and rebuttal
🠶 Enunciate the problem areas, but don’t attempt to solve every
problem that is noted
🠶 Take written notes
🠶 Limit the number of participants and insist upon advance
preparation
🠶 Develop a checklist for each product that is likely to be reviewed
🠶 Allocate resources and time schedules for FTRs
🠶 Conduct meaningful training for all reviewers
🠶 Review your earlier reviews (if any)
Reviewer’s Preparation
🠶 Be sure that you understand the context of the material
🠶 Skim all product material to understand the location and the
format of information
🠶 Read the product material and annotate a hardcopy
🠶 Pose your written comments as questions
🠶 Avoid issues of style
🠶 Inform the review leader if you cannot prepare
Results of the Review Meeting
🠶 All attendees of the FTR must make a decision
🠶 Accept the product without further modification
🠶 Reject the product due to severe errors (and perform
another review after corrections have been made)
🠶 Accept the product provisionally (minor corrections are
needed, but no further reviews are required)
🠶 A sign-off is completed, indicating participation and
concurrence with the review team’s findings
Chapter 4: Blackbox testing
Reference
This chapter refers from the book:
Mastering Software Quality Assurance: Best Practices, Tools
and Techniques for Software Developers
Introduction to Software Testing
Outline
4.1. Definition and objectives
4.2. Software testing process
4.3. Blackbox testing techniques
4.3.1. Equivalence partitioning technique
4.3.2. Boundary value analysis technique
4.3.3. Decision table technique
4.3.4. State transition testing technique
4.3.5. Pairwise testing technique
Outline
4.1. Definition and objectives
4.2. Software testing process
4.3. Blackbox testing techniques
4.3.1. Equivalence partitioning technique
4.3.2. Boundary value analysis technique
4.3.3. Decision table technique
4.3.4. State transition testing technique
4.3.5. Pairwise testing technique
Software testing
🠶 Software testing is a way to assess the quality of the software and to reduce the risk of
software failure in operation.
🠶 Software testing is a process which includes many different activities.
🠶 The test process also includes activities such as test planning, analyzing, designing,
and implementing tests, reporting test progress and results, and evaluating the quality
of a test object.
🠶 Dynamic test & static test
🠶 Dynamic testing: testing that involves the execution of the component or system
being tested
🠶 Static testing: testing that does not involve the execution of the component or
system being tested; e.g., reviewing work products such as requirements, user
stories, and source code.
🠶 Verification & validation:
🠶 Verification: checking whether the system meets specified requirements
🠶 Validation: checking whether the system will meet user and other stakeholder needs in
its operational environment(s)
Software testing objectives
🠶 To prevent defects by evaluating work products such as requirements,
user stories, design, and code
🠶 To verify whether all specified requirements have been fulfilled
🠶 To check whether the test object is complete and validate if it works as
the users and other stakeholders expect
🠶 To build confidence in the level of quality of the test object
🠶 To find defects and failures thus reduce the level of risk of inadequate
software quality
🠶 To provide sufficient information to stakeholders to allow them to make
informed decisions, especially regarding the level of quality of the test
object
🠶 To comply with contractual, legal, or regulatory requirements or
standards, and/or to verify the test object’s compliance with such
requirements or standards
Errors, Defects, and Failures
🠶 A person can make an error (mistake), which can lead to the introduction of a
defect (fault or bug) in the software code or in some other related work
product.
🠶 An error that leads to the introduction of a defect in one work product can trigger an
error that leads to the introduction of a defect in a related work product.
🠶 If a defect in the code is executed, this may cause a failure, but not necessarily
in all circumstances.
🠶 failures can also be caused by environmental conditions. For example, radiation,
electromagnetic fields, and pollution can cause defects in firmware or influence the
execution of software by changing hardware conditions.
🠶 Not all unexpected test results are failures. False positives may occur due to errors in
the way tests were executed, or due to defects in the test data, the test
environment, or other testware, or for other reasons. The inverse situation can also
occur, where similar errors or defects lead to false negatives. False positives are
reported as defects but are not actually defects; false negatives are tests that fail to
detect defects that they should have detected.
Errors, Defects, and Failures (cont.)
🠶 Errors may occur for many reasons, including:
🠶 Time pressure
🠶 Human fallibility
🠶 Inexperienced or insufficiently skilled project participants
🠶 Miscommunication between project participants, including miscommunication
about requirements and design
🠶 Complexity of the code, design, architecture, the underlying problem to be
solved, and/or the technologies used
🠶 Misunderstandings about intra-system and inter-system interfaces, especially
when such intra-system and inter-system interactions are large in number
🠶 New, unfamiliar technologies
Seven Testing Principles
1. Testing shows the presence of defects, not their absence
🠶 Testing reduces the probability of undiscovered defects remaining in the software but,
even if no defects are found, testing is not a proof of correctness.
2. Exhaustive testing is impossible
🠶 Rather than attempting to test exhaustively, risk analysis, test techniques, and priorities
should be used to focus test efforts.
3. Early testing saves time and money
🠶 Early testing is sometimes referred to as shift left
4. Defects cluster together
🠶 A small number of modules usually contains most of the defects discovered during pre-
release testing, or is responsible for most of the operational failures.
🠶 Predicted defect clusters, and the actual observed defect clusters in test or operation,
are an important input into a risk analysis used to focus the test effort
Seven Testing Principles (cont.)
5. Beware of the pesticide paradox
🠶 If the same tests are repeated over and over again, eventually these tests no longer find
any new defects. To detect new defects, existing tests and test data may need changing,
and new tests may need to be written.
🠶 In some cases, such as automated regression testing, the pesticide paradox has a
beneficial outcome, which is the relatively low number of regression defects.
6. Testing is context dependent
🠶 Testing is done differently in different contexts. For example, safety-critical industrial control
software is tested differently from an e-commerce mobile app. As another example,
testing in an Agile project is done differently than testing in a sequential software
development lifecycle project
7. Absence-of-errors is a fallacy
🠶 it is a fallacy (i.e., a mistaken belief) to expect that just finding and fixing a large number of
defects will ensure the success of a system. For example, thoroughly testing all specified
requirements and fixing all defects found could still produce a system that is difficult to use,
that does not fulfill the users’ needs and expectations, or that is inferior compared to other
competing systems.
Outline
4.1. Definition and objectives
4.2. Software testing process
4.3. Blackbox testing techniques
4.3.1. Equivalence partitioning technique
4.3.2. Boundary value analysis technique
4.3.3. Decision table technique
4.3.4. State transition testing technique
4.3.5. Pairwise testing technique
Test Process
🠶 There is no one universal software test process, but there are common sets
of test activities without which testing will be less likely to achieve its
established objectives.
🠶 These sets of test activities are a test process.
🠶 Which test activities are involved in this test process, how these activities
are implemented, and when these activities occur may be discussed in
an organization’s test strategy
Test Process in Context
Contextual factors that influence the test process for an organization, include,
but are not limited to:
🠶 Software development lifecycle model and project methodologies being
used
🠶 Test levels and test types being considered
🠶 Product and project risks
🠶 Business domain
🠶 Operational constraints, including but not limited to:
🠶 Budgets and resources
🠶 Timescales
🠶 Complexity
🠶 Contractual and regulatory requirements
🠶 Organizational policies and practices
🠶 Required internal and external standards
Test Activities and Tasks
A test process consists of the following main groups of activities:
🠶 Test planning
🠶 Test monitoring and control
🠶 Test analysis
🠶 Test design
🠶 Test implementation
🠶 Test execution
🠶 Test completion
Test Activities and Tasks (cont.)
Test planning
🠶 Test planning involves activities that define the objectives of testing and the
approach for meeting test objectives within constraints imposed by the
context (e.g., specifying suitable test techniques and tasks, and formulating
a test schedule for meeting a deadline).
🠶 Test plans may be revisited based on feedback from monitoring and
control activities.
Test Activities and Tasks (cont.)
Test monitoring and control
🠶 Test monitoring involves the on-going comparison of actual progress
against planned progress using any test monitoring metrics defined in the
test plan
🠶 Test control involves taking actions necessary to meet the objectives of the
test plan (which may be updated over time)
🠶 Test monitoring and control are supported by the evaluation of exit criteria,
which are referred to as the definition of done in some software
development lifecycle models
🠶 Test progress against the plan is communicated to stakeholders in test
progress reports, including deviations from the plan and information to
support any decision to stop testing.
Test Activities and Tasks (cont.)
Test analysis
🠶 During test analysis, the test basis is analyzed to identify testable features and define
associated test conditions (determines “what to test”)
Test analysis includes the following major activities:
🠶 Analyzing the test basis appropriate to the test level being considered, for example:
🠶 Requirement specifications, such as business requirements, functional requirements, system
requirements, user stories, epics, use cases, or similar work products that specify desired
functional and non-functional component or system behavior
🠶 Design and implementation information, such as system or software architecture diagrams or
documents, design specifications, call flow graphs, modelling diagrams (e.g., UML or entity-
relationship diagrams), interface specifications, or similar work products that specify
component or system structure
🠶 The implementation of the component or system itself, including code, database metadata
and queries, and interfaces
🠶 Risk analysis reports, which may consider functional, non-functional, and structural aspects of
the component or system
Test Activities and Tasks (cont.)
Test analysis
🠶 Evaluating the test basis and test items to identify defects of various types,
such as:
🠶 Ambiguities
🠶 Omissions
🠶 Inconsistencies
🠶 Inaccuracies
🠶 Contradictions
🠶 Superfluous statements
Test Activities and Tasks (cont.)
Test analysis
🠶 Evaluating the test basis and test items to identify defects of various types, such as:
🠶 Ambiguities
🠶 Omissions
🠶 Inconsistencies
🠶 Inaccuracies
🠶 Contradictions
🠶 Superfluous statements
🠶 Identifying features and sets of features to be tested
🠶 Defining and prioritizing test conditions for each feature based on analysis of the test basis,
and considering functional, non-functional, and structural characteristics, other business
and technical factors, and levels of risks
🠶 Capturing bi-directional traceability between each element of the test basis and the
associated test conditions
Test Activities and Tasks (cont.)
Test analysis
🠶 In some cases, test analysis produces test conditions which are to be used
as test objectives in test charters.
🠶 Test charters are typical work products in some types of experience-based
testing. When these test objectives are traceable to the test basis, coverage
achieved during such experience-based testing can be measured.
Test Activities and Tasks (cont.)
Test design
🠶 the test conditions are elaborated into high-level test cases, sets of high-level
test cases, and other testware. So, test analysis answers the question “what to
test?” while test design answers the question “how to test?”
🠶 Test design includes the following major activities:
🠶 Designing and prioritizing test cases and sets of test cases
🠶 Identifying necessary test data to support test conditions and test cases
🠶 Designing the test environment and identifying any required infrastructure and tools
🠶 Capturing bi-directional traceability between the test basis, test conditions, & test
cases
🠶 As with test analysis, test design may also result in the identification of similar
types of defects in the test basis. Also, as with test analysis, the identification of
defects during test design is an important potential benefit.
Test Activities and Tasks (cont.)
Test implementation
🠶 the testware necessary for test execution is created and/or completed, including
sequencing the test cases into test procedures. So, implementation answers the question
“do we now have everything in place to run the tests?”
🠶 Test implementation includes the following major activities:
🠶 Developing and prioritizing test procedures, and, potentially, creating automated test scripts
🠶 Creating test suites from the test procedures and (if any) automated test scripts
🠶 Arranging the test suites within a test execution schedule in a way that results in efficient test
execution (see section 5.2.4)
🠶 Building the test environment (including, potentially, test harnesses, service virtualization,
simulators, and other infrastructure items) and verifying that everything needed has been set
up correctly
🠶 Preparing test data and ensuring it is properly loaded in the test environment
🠶 Verifying and updating bi-directional traceability between the test basis, test conditions, test
cases, test procedures, and test suites (see section 1.4.4)
🠶 Test design and test implementation tasks are often combined.
Test Activities and Tasks (cont.)
Test execution
🠶 During test execution, test suites are run in accordance with the test execution schedule.
🠶 Test execution includes the following major activities:
🠶 Recording the IDs and versions of the test item(s) or test object, test tool(s), and testware
🠶 Executing tests either manually or by using test execution tools
🠶 Comparing actual results with expected results
🠶 Analyzing anomalies to establish their likely causes (e.g., failures may occur due to defects in
the code, but false positives may also occur; see section 1.2.3)
🠶 Reporting defects based on the failures observed (see section 5.6)
🠶 Logging the outcome of test execution (e.g., pass, fail, blocked)
🠶 Repeating test activities either as a result of action taken for an anomaly, or as part of the
planned testing (e.g., execution of a corrected test, confirmation testing, and/or regression
testing)
🠶 Verifying and updating bi-directional traceability between the test basis, test conditions, test
cases, test procedures, and test results.
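The "comparing actual results with expected results" and "logging the outcome" steps above can be sketched in a few lines. This is an illustrative sketch, not from the source material; `run_case` and the TC-01/TC-02 data are hypothetical placeholders:

```python
# Minimal sketch of comparing actual vs. expected results and
# logging the outcome of test execution (pass/fail).
# run_case and the TC-xx test data are hypothetical.

def run_case(case_id: str, actual, expected) -> dict:
    """Compare actual vs. expected and return a log entry."""
    outcome = "pass" if actual == expected else "fail"
    return {"id": case_id, "expected": expected, "actual": actual,
            "outcome": outcome}

execution_log = [
    run_case("TC-01", actual=4, expected=4),  # matches -> pass
    run_case("TC-02", actual=5, expected=4),  # mismatch -> fail, report a defect
]
for entry in execution_log:
    print(entry["id"], entry["outcome"])
```

A real test management tool would also record test item IDs and versions, and a "blocked" outcome for tests that could not run.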
Test Activities and Tasks (cont.)
Test completion
🠶 Test completion activities collect data from completed test activities to consolidate
experience, testware, and any other relevant information. Test completion activities occur
at project milestones such as when a software system is released, a test project is
completed (or cancelled)
🠶 Test completion includes the following major activities:
🠶 Checking whether all defect reports are closed, entering change requests or product
backlog items for any defects that remain unresolved at the end of test execution
🠶 Creating a test summary report to be communicated to stakeholders
🠶 Finalizing and archiving the test environment, the test data, the test infrastructure, and other
testware for later reuse
🠶 Handing over the testware to the maintenance teams, other project teams, and/or other
stakeholders who could benefit from its use
🠶 Analyzing lessons learned from the completed test activities to determine changes needed
for future iterations, releases, and projects
🠶 Using the information gathered to improve test process maturity
Bug management
Outline
4.1. Definition and objectives
4.2. Software testing process
4.3. Blackbox testing techniques
4.3.1. Equivalence partitioning technique
4.3.2. Boundary value analysis technique
4.3.3. Decision table technique
4.3.4. State transition testing technique
4.3.5. Pairwise testing technique
Blackbox testing
• Blackbox testing is a technique for testing without
knowledge of the software's source code.
• Blackbox test techniques (also called behavioral or
behavior-based techniques) are based on an analysis of the
appropriate test basis (e.g., formal requirements
documents, …).
• They are applicable to both functional and non-functional
testing.
• They concentrate on the inputs and outputs of the test
object without reference to its internal structure.
Test Techniques
🠶 The purpose of a test technique is to help in identifying test conditions,
test cases, and test data.
🠶 The choice of which test techniques to use depends on a number of factors:
🠶 Component or system complexity
🠶 Regulatory standards
🠶 Customer or contractual requirements
🠶 Risk levels and types
🠶 Available documentation
🠶 Tester knowledge and skills
🠶 Available tools
🠶 Time and budget
🠶 Software development lifecycle model
🠶 The types of defects expected in the component or system
Test Techniques (cont.)
🠶 Some techniques are more applicable to certain situations and test levels;
others are applicable to all test levels.
🠶 When creating test cases, testers generally use a combination of test
techniques to achieve the best results from the test effort.
🠶 The use of test techniques in the test analysis, test design, and test
implementation activities can range from very informal (little to no
documentation) to very formal.
🠶 The appropriate level of formality depends on the context of testing, including
the maturity of test and development processes, time constraints, safety or
regulatory requirements, the knowledge and skills of the people involved, and
the software development lifecycle model being followed.
Blackbox test design technique
a. Equivalence Class Partitioning.
b. Boundary value analysis.
c. Decision Tables.
d. State Transition.
e. Pairwise testing.
Outline
4.1. Definition and objectives
4.2. Software testing process
4.3. Blackbox testing techniques
4.3.1. Equivalence partitioning technique
4.3.2. Boundary value analysis technique
4.3.3. Decision table technique
4.3.4. State transition testing technique
4.3.5. Pairwise testing technique
Black-box Test Techniques
Equivalence Partitioning
🠶 Equivalence partitioning divides data into partitions (also known as
equivalence classes)
🠶 all the members of a given partition are expected to be processed in the
same way.
🠶 Coverage is measured as the number of equivalence partitions tested
by at least one value, divided by the total number of identified
equivalence partitions, normally expressed as a percentage.
🠶 Equivalence partitioning is applicable at all test levels.
Black-box Test Techniques
Equivalence Partitioning (cont.)
🠶 There are equivalence partitions for both valid and invalid values.
🠶 Valid values are values that should be accepted by the component or
system.
🠶 “valid equivalence partition”: equivalence partition containing valid values
🠶 Invalid values are values that should be rejected by the component or
system.
🠶 “invalid equivalence partition”: equivalence partition containing invalid values
🠶 Partitions can be identified for any data element related to the test object.
🠶 Including: inputs, outputs, internal values, time-related values (e.g., before or after
an event), interface parameters (e.g., integrated components being tested during
integration testing).
🠶 Any partition may be divided into sub partitions if required.
🠶 Each value must belong to one and only one equivalence partition.
🠶 Invalid equivalence partitions should be tested individually, i.e., not
combined with other invalid equivalence partitions, to ensure that failures are
not masked.
Black-box Test Techniques
Equivalence Partitioning: examples
❖ The input condition is a range of values.
▪ For example, “The value of x can only range from 0 to
100”.
o valid equivalence class is: 0 <=x <= 100
o the two invalid equivalence classes are: x < 0 and x >
100.
❖ The input condition specifies a count.
▪ Example: “Only one to six people can be registered”.
o valid equivalence class: “one to six applicants”
o two invalid equivalence classes: “no one
registered” and “more than six people registered”.
Black-box Test Techniques
Equivalence Partitioning: examples
❖ The input condition is a set of values: we define each value in
that set as a valid equivalence class.
▪ For example: “Registered vehicles are buses, coaches, trucks, taxis and
motorbikes”.
o 5 valid equivalence classes corresponding to 5 vehicle types
o 1 invalid equivalence class: a vehicle other than those
listed above, e.g., “bicycle”.
❖ The input condition is a special (“must be”) condition.
▪ Example: “The first character must be an alphanumeric character”
o 1 valid equivalence class: “the first character is alphanumeric”
o 1 invalid equivalence class: “the first character is not an
alphanumeric character (e.g., a special character)”.
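As a sketch, the 0-to-100 range example above can be turned into partition-based test cases with one representative value per partition; `validate_x` is a hypothetical function standing in for the system under test:

```python
def validate_x(x: int) -> bool:
    """Hypothetical system under test: accept x only if 0 <= x <= 100."""
    return 0 <= x <= 100

# One representative value per equivalence partition:
partitions = [
    ("invalid: x < 0",       -5,  False),
    ("valid: 0 <= x <= 100", 50,  True),
    ("invalid: x > 100",     150, False),
]

tested = 0
for name, value, expected in partitions:
    assert validate_x(value) == expected, name
    tested += 1

# Coverage = partitions tested / total identified partitions, as a percentage.
coverage = 100.0 * tested / len(partitions)
print(f"equivalence-partition coverage: {coverage:.0f}%")  # prints 100%
```

Note that each invalid partition is exercised by its own test case, so one rejection cannot mask the other.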
Outline
4.1. Definition and objectives
4.2. Software testing process
4.3. Blackbox testing techniques
4.3.1. Equivalence partitioning technique
4.3.2. Boundary value analysis technique
4.3.3. Decision table technique
4.3.4. State transition testing technique
4.3.5. Pairwise testing technique
Black-box Test Techniques
Boundary Value Analysis
🠶 Boundary value analysis (BVA) is an extension of equivalence partitioning,
but can only be used when the partition is ordered, consisting of numeric
or sequential data. The minimum and maximum values (or first and last
values) of a partition are its boundary values (see Beizer 1990).
🠶 Behavior at the boundaries of equivalence partitions is more likely to be
incorrect than behavior within the partitions.
🠶 Boundary value analysis can be applied at all test levels. This technique is
generally used to test requirements that call for a range of numbers
(including dates and times).
🠶 Boundary coverage for a partition is measured as the number of boundary
values tested, divided by the total number of identified boundary test
values, expressed as a percentage.
Black-box Test Techniques
Boundary Value Analysis: example
🠶 Suppose an input field accepts a single integer value as an input, using a keypad
to limit inputs so that non-integer inputs are impossible. The valid range is from 1 to
5, inclusive. So, there are three equivalence partitions: invalid (too low); valid;
invalid (too high). For the valid equivalence partition, the boundary values are 1
and 5. For the invalid (too high) partition, the boundary value is 6. For the invalid
(too low) partition, there is only one boundary value, 0, because this is a partition
with only one member.
🠶 Some variations of this technique identify three boundary values per boundary:
the values before, at, and just over the boundary. In the previous example, using
three-point boundary values, the lower boundary test values are 0, 1, and 2, and
the upper boundary test values are 4, 5, and 6 (see Jorgensen 2014).
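The 1-to-5 keypad example can be sketched as boundary-value test cases; `accept` is a hypothetical stand-in for the input field, and the expected values follow the two-point and three-point variants described above:

```python
def accept(value: int) -> bool:
    """Hypothetical input field: accepts only integers 1..5 inclusive."""
    return 1 <= value <= 5

# Two-point boundary values from the example: 0 and 1 (lower), 5 and 6 (upper).
two_point = {0: False, 1: True, 5: True, 6: False}

# Three-point variation: before, at, and just over each boundary.
three_point = {0: False, 1: True, 2: True, 4: True, 5: True, 6: False}

for value, expected in {**two_point, **three_point}.items():
    assert accept(value) == expected, f"boundary value {value}"
```

An off-by-one defect (e.g., `value < 5` instead of `value <= 5`) would be caught by the boundary value 5, which is exactly why boundaries get dedicated test values.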
Outline
4.1. Definition and objectives
4.2. Software testing process
4.3. Blackbox testing techniques
4.3.1. Equivalence partitioning technique
4.3.2. Boundary value analysis technique
4.3.3. Decision table technique
4.3.4. State transition testing technique
4.3.5. Pairwise testing technique
Black-box Test Techniques
Decision Table Testing
🠶 Decision tables are a good way to record complex business rules that a system
must implement.
🠶 When creating decision tables, the tester identifies conditions (often inputs) and
the resulting actions (often outputs) of the system.
🠶 These form the rows of the table, usually with the conditions at the top and the
actions at the bottom.
🠶 Each column corresponds to a decision rule that defines a unique combination of
conditions which results in the execution of the actions associated with that rule.
🠶 The values of the conditions and actions are usually shown as Boolean values (true or
false) or discrete values (e.g., red, green, blue), but can also be numbers or ranges of
numbers.
🠶 These different types of conditions and actions might be found together in the same
table.
Black-box Test Techniques
Decision Table Testing (cont.)
🠶 The common notation in decision tables is as
follows:
🠶 For conditions:
🠶 Y means the condition is true (may also be shown as T or 1)
🠶 N means the condition is false (may also be shown as F or 0)
🠶 — means the value of the condition doesn’t matter (may also be
shown as N/A)
🠶 For actions:
🠶 X means the action should occur (may also be shown as Y or T or
1)
🠶 Blank means the action should not occur (may also be shown as
– or N or F or 0)
Black-box Test Techniques
Decision Table Testing (cont.)
🠶 A full decision table has enough columns (test cases) to cover every
combination of conditions.
🠶 The common minimum coverage standard for decision table testing is to
have at least one test case per decision rule in the table. This typically
involves covering all combinations of conditions.
🠶 Coverage is measured as the number of decision rules tested by at least one
test case, divided by the total number of decision rules, expressed as a
percentage.
🠶 The strength of decision table testing :
🠶 helps to identify all the important combinations of conditions, some of which
might otherwise be overlooked.
🠶 helps in finding any gaps in the requirements.
🠶 may be applied to all situations in which the behavior of the software
depends on a combination of conditions, at any test level.
Black-box Test Techniques
Decision Table Testing: Example
A promotion applies to car owners who meet at least one of two
conditions: married and/or good student. Each input is a Boolean
value, so the decision table needs only 4 columns, describing 4
different rules:
                 Rule 1   Rule 2   Rule 3   Rule 4
Conditions
  Married?         Yes      Yes      No       No
  Good Student?    Yes      No       Yes      No
Actions
  Discount ($)     60       25       50       0
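The table can be exercised with the minimum coverage standard (one test case per decision rule); `discount` is a hypothetical implementation of the promotion logic, with the dollar amounts taken from the table above:

```python
def discount(married: bool, good_student: bool) -> int:
    """Discount in $ for each of the four decision rules in the table."""
    if married and good_student:
        return 60   # Rule 1
    if married:
        return 25   # Rule 2
    if good_student:
        return 50   # Rule 3
    return 0        # Rule 4

# Minimum coverage standard: at least one test case per decision rule.
rules = [
    (True,  True,  60),   # Rule 1
    (True,  False, 25),   # Rule 2
    (False, True,  50),   # Rule 3
    (False, False, 0),    # Rule 4
]
for married, good_student, expected in rules:
    assert discount(married, good_student) == expected
```

With two Boolean conditions the full table has 2² = 4 rules, so rule coverage here also happens to be full combination coverage.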
Outline
4.1. Definition and objectives
4.2. Software testing process
4.3. Blackbox testing techniques
4.3.1. Equivalence partitioning technique
4.3.2. Boundary value analysis technique
4.3.3. Decision table technique
4.3.4. State transition testing technique
4.3.5. Pairwise testing technique
Black-box Test Techniques
State Transition Testing
🠶 A state transition diagram shows the possible software states, as well as how the
software enters, exits, and transitions between states.
🠶 A transition is initiated by an event (e.g., user input of a value into a field).
🠶 The same event can result in two or more different transitions from the same state.
🠶 The state change may result in the software taking an action (e.g., outputting a
calculation or error message).
🠶 A state transition table shows all valid transitions and potentially invalid
transitions between states, as well as the events, and resulting actions for valid
transitions.
🠶 State transition diagrams normally show only the valid transitions and exclude
the invalid transitions.
Black-box Test Techniques
State Transition Testing (cont.)
🠶 Tests can be designed to cover a typical sequence of states, to exercise all
states, to exercise every transition, to exercise specific sequences of transitions,
or to test invalid transitions.
🠶 Application:
🠶 for menu-based applications and is widely used within the embedded software
industry.
🠶 modeling a business scenario having specific states or for testing screen navigation.
The concept of a state is abstract – it may represent a few lines of code or an entire
business process.
🠶 Coverage is commonly measured as the number of identified states or
transitions tested, divided by the total number of identified states or transitions
in the test object, expressed as a percentage.
Black-box Test Techniques
State Transition Testing: Example
An air ticket booking module has 6 statuses (state transition diagram shown as a figure).
Black-box Test Techniques
State Transition Testing: Example (cont.)
• Coverage level 1: create test cases such that each state occurs at least once. For example, 3 paths will reach level 1 coverage.
Black-box Test Techniques
State Transition Testing: Example (cont.)
• Coverage level 2: create test cases such that each event occurs at least once. For example, 3 paths reach level 2 coverage.
Black-box Test Techniques
State Transition Testing: Example (cont.)
• Coverage level 3: create paths such that all transition paths are tested. A transition path is a defined state transition sequence, starting from the input state and ending at the end state.
• This gives the best coverage because it exhausts all possibilities, but it is not feasible when the transition paths contain loops.
• Coverage level 4: create paths such that all linear transition paths are tested.
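A state transition table like the one described above can be represented as a set of (state, event) → next-state entries; anything absent from the table is an invalid transition. The states and events below are hypothetical (the slide's 6 booking statuses appear only in a figure), just to illustrate the mechanics:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal state-transition-table sketch with hypothetical booking states.
public class BookingStateMachine {
    private static final Map<String, String> TRANSITIONS = new HashMap<>();
    static {
        // Key: "currentState:event", value: next state (valid transitions only).
        TRANSITIONS.put("New:reserve",      "Reserved");
        TRANSITIONS.put("Reserved:pay",     "Paid");
        TRANSITIONS.put("Reserved:cancel",  "Cancelled");
        TRANSITIONS.put("Paid:issueTicket", "Ticketed");
    }

    // Returns the next state, or null for an invalid transition.
    static String next(String state, String event) {
        return TRANSITIONS.get(state + ":" + event);
    }
}
```

A level-2 suite would fire every event at least once; testing invalid transitions means asserting that (state, event) pairs absent from the table are rejected (here: `next` returns null).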
Black-box Test Techniques
Pairwise testing
🠶 In practice, most errors are triggered by the combination of the values of a pair of input parameters.
🠶 Method:
• Select the input parameters and their corresponding values
• List the combinations (pairs) of values between every 2 parameters
• Build the test set so that it covers all the pairs defined above
Black-box Test Techniques
Pairwise testing: Example
• Consider the
View options tab
from a version of
Microsoft
Powerpoint
software
Black-box Test Techniques
Pairwise testing: Example (cont.)
• The View_preference tab has 7 attributes, and each attribute has several values: Vertical_Ruler (Visible, InVisible), Ruler_units (Inches, Centimeters, Points, Picas), Default_View (Normal, Slide, Outline), Ss_Navigator (Popup, None), End_With_Black (Yes1, No1), Always_Mirror (Yes2, No2), Warn_Before (Yes3, No3).
• Testing all combinations would require 2*4*3*2*2*2*2 = 384 test cases.
• The possible pairs: (Visible, Inches), (Visible, Centimeters), …, (No2, No3)
• A single test case (Visible, Centimeters, Normal, None, No1, Yes2, Yes3) covers 21 pairs: (Visible, Centimeters), (Visible, Normal), ..., (Yes2, Yes3)
Black-box Test Techniques
Pairwise testing: Example (cont.)
• A test suite that covers all possible pairs (created by the pairwise tool PICT: http://download.microsoft.com/download/f/5/5/f55484df-8494-48fa-8dbd-8c6f76cc014b/pict33.msi)
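As a sanity check on the numbers in the example: the number of value pairs a pairwise suite must cover is the sum of |Vi|·|Vj| over all parameter pairs, while a single test case covers C(7,2) = 21 pairs. A small sketch (value counts taken from the example above):

```java
// Counts the value pairs to cover for the 7 View_preference attributes.
public class PairCount {
    // Values per attribute: Vertical_Ruler, Ruler_units, Default_View,
    // Ss_Navigator, End_With_Black, Always_Mirror, Warn_Before.
    static final int[] SIZES = {2, 4, 3, 2, 2, 2, 2};

    // Exhaustive test cases: product of all value counts.
    static int exhaustive() {
        int product = 1;
        for (int s : SIZES) product *= s;
        return product;
    }

    // Distinct value pairs a pairwise suite must cover.
    static int pairsToCover() {
        int total = 0;
        for (int i = 0; i < SIZES.length; i++)
            for (int j = i + 1; j < SIZES.length; j++)
                total += SIZES[i] * SIZES[j];
        return total;
    }
}
```

This gives 384 exhaustive cases but only 122 distinct pairs; since one test case covers 21 pairs, a pairwise suite needs far fewer cases than the exhaustive 384, which is why tools like PICT pay off.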
Chapter 5: Whitebox testing
Reference
This chapter refers from the book:
Mastering Software Quality Assurance: Best Practices, Tools
and Techniques for Software Developers
Introduction to Software Testing
Outline
5.1. Whitebox testing introduction
5.2. Whitebox testing technique
5.2.1. Control flow testing technique
5.2.2. Data flow testing technique
5.3. Automation unit test
5.4. Exercise
2. White-box Testing (WBT) introduction
🠶 WBT relies on the specific algorithm and the internal data structure of the module under test to determine whether the module is performing correctly.
🠶 Therefore, a WBT tester must have the skills and knowledge to understand the code under test in detail.
🠶 WBT usually takes a lot of time and effort
🠶 For important modules, which perform the main computation of the
system, this approach is necessary.
🠶 White box testing methods:
• Control flow testing
• Data flow testing
5.2.1. Control flow testing technique
Some definitions
🠶 Execution path: an execution scenario of a software unit: the ordered list of instructions executed in a particular run of the unit. It starts at the entry point of the software unit and stops at its exit point.
🠶 Goal of control flow testing: to ensure that all execution paths of the software unit under test run correctly. Unfortunately, in practice, the effort and time required to achieve this goal are very large, even for small software units.
Example
1: WHILE NOT EOF LOOP
2:   Read Record;
2:   IF field1 equals 0 THEN
3:     Add field1 to Total
3:     Increment Counter
4:   ELSE
4:     IF field2 equals 0 THEN
5:       Print Total, Counter
5:       Reset Counter
6:     ELSE
6:       Subtract field2 from Total
7:     END IF
8:   END IF
8:   Print "End Record"
9: END LOOP
9: Print Counter
Examples of execution paths:
1, 9
1, 2, 3, 8, 1, 9
1, 2, 4, 5, 7, 8, 1, 9
1, 2, 4, 6, 7, 8, 1, 9
Discussion (1)
🠶 For the code:
for (i=1; i<=1000; i++)
  for (j=1; j<=1000; j++)
    for (k=1; k<=1000; k++)
      doSomethingWith(i,j,k);
there is only 1 execution path, but its length is 1000*1000*1000 = 1 billion executions of doSomethingWith(i,j,k).
🠶 For the code:
if (c1) s11 else s12;
if (c2) s21 else s22;
if (c3) s31 else s32;
...
if (c32) s321 else s322;
there are 2^32 ≈ 4 billion execution paths.
Discussion (2)

• Some execution paths may be missed:
if (a>0) doIsGreater();
if (a==0) doIsEqual();
// missing case a < 0 - if (a<0) doIsLess();
• An execution path may pass with one test case and still be incorrect with other test cases:
int blech (int a, int b) { return a/b; }
If we only select b != 0 when testing, the result is correct.
However, when b = 0 the function fails.
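The blech example can be run directly: the same execution path succeeds for b != 0 and throws for b = 0, which is why a path that passes one test case is not thereby proven correct. A runnable sketch:

```java
// One execution path, two very different outcomes depending on the data.
public class Blech {
    static int blech(int a, int b) {
        return a / b; // fails when b == 0
    }

    public static void main(String[] args) {
        System.out.println(blech(6, 2)); // path "passes" for b != 0
        try {
            blech(6, 0);                 // same path, division by zero
        } catch (ArithmeticException e) {
            System.out.println("ArithmeticException caught");
        }
    }
}
```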
Testing coverage
• From the discussion above, we should look for the minimum number of test cases that yields maximum confidence. But how can we determine that minimum number?
• Coverage: the ratio of the components actually exercised by the selected test cases to the total population of components. The larger the coverage, the higher the reliability.
• The component involved can be a statement, a decision point, a subcondition, an execution path, or a combination thereof.
Coverage level 0 & 1
🠶 Level 0: test freely; the users will test the remaining parts. This level of testing is not really responsible.
🠶 Level 1: each statement must be executed at least one time. E.g., for the function foo:

1.  float foo(int a, int b, int c, int d) {
2.    float e;
3.    if (a==0)
4.      return 0;
5.    int x = 0;
6.    if ((a==b) || ((c==d) && bug(a)))
7.      x = 1;
8.    e = 1/x;
9.    return e;
10. }

🠶 For function foo, the 2 following test cases obtain 100% level 1 coverage:
🠶 TC1. foo(0,0,0,0), returns 0
🠶 TC2. foo(1,1,1,1), returns 1
🠶 However, they cannot find the divide-by-zero error at line 8.
Coverage level 2
• Each decision must be executed at least once for the true case, and at least once for the false case. This is called branch coverage.

Line  Predicate                         True                           False
3     (a == 0)                          TC1 foo(0, 0, 0, 0) returns 0  TC2 foo(1, 1, 1, 1) returns 1
6     ((a==b) OR ((c==d) AND bug(a)))   TC2 foo(1, 1, 1, 1) returns 1  TC3 foo(1, 2, 1, 2) division by zero!

🠶 With TC1 and TC2 from the previous slide, we obtain 3/4 = 75% branch coverage. Adding test case 3:
🠶 TC3. foo(1,2,1,2), we obtain 100% branch coverage.
Coverage level 3
• Each subcondition must be executed at least once for the true case and at least once for the false case. This is called subcondition coverage.

Predicate  True                           False
a==0       TC1 foo(0, 0, 0, 0) returns 0  TC2 foo(1, 1, 1, 1) returns 1
a==b       TC2 foo(1, 1, 1, 1) returns 1  TC3 foo(1, 2, 1, 2) division by zero!
c==d       (not covered)                  TC3 foo(1, 2, 1, 2) division by zero!
bug(a)     (not covered)                  (not covered)
Coverage level 4
• Each subcondition of each decision must be executed at least once for the true case and at least once for the false case, and this true (false) outcome must affect the result of the decision.
• This is called branch & subcondition coverage.
Basis Path Testing (by Tom McCabe)
🠶 Guideline:
• From the module under test, create its control flow graph G.
• Compute the cyclomatic complexity of G (called C):
o V(G) = E - N + 2, where E is the number of edges and N the number of nodes.
o V(G) = P + 1, where P is the number of decision nodes.
o Note: if V(G) > 10, we should divide the module into submodules to reduce the probability of error.
• Create C linearly independent basis paths to be tested.
• Create a test case for each basis path.
Basis Path Testing (cont.)
Create C linearly independent basis paths to be tested:
1. Create the first basis path. It should be the most popular path.
2. For the second path, change the edge taken at the first decision node and try to keep as much of the first basis path as possible.
3. For the 3rd path, change the edge taken at the second decision node and try to keep as much of the first basis path as possible.
4. Continue with the 4th, 5th… paths until all decision nodes have been considered (C paths in total).
Example
🠶 1: WHILE NOT EOF LOOP
🠶 2:   Read Record;
🠶 2:   IF field1 equals 0 THEN
🠶 3:     Add field1 to Total
🠶 3:     Increment Counter
🠶 4:   ELSE
🠶 4:     IF field2 equals 0 THEN
🠶 5:       Print Total, Counter
🠶 5:       Reset Counter
🠶 6:     ELSE
🠶 6:       Subtract field2 from Total
🠶 7:     END IF
🠶 8:   END IF
🠶 8:   Print "End Record"
🠶 9: END LOOP
🠶 9: Print Counter

🠶 We have C = 3 + 1 = 4 basis paths:
🠶 1, 9
🠶 1, 2, 3, 8, 1, 9
🠶 1, 2, 4, 5, 7, 8, 1, 9
🠶 1, 2, 4, 6, 7, 8, 1, 9
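Both cyclomatic-complexity formulas can be cross-checked mechanically for this graph. The edge list below is read off the flow graph (nodes 1–9, decision nodes 1, 2, and 4):

```java
// Cross-checks V(G) = E - N + 2 against V(G) = P + 1 for the example graph.
public class Cyclomatic {
    // Edges of the control flow graph from the basis path example.
    static final int[][] EDGES = {
        {1, 2}, {1, 9},          // WHILE decision
        {2, 3}, {2, 4},          // IF field1 decision
        {3, 8},
        {4, 5}, {4, 6},          // IF field2 decision
        {5, 7}, {6, 7}, {7, 8},
        {8, 1}                   // loop back to WHILE
    };
    static final int NODES = 9;
    static final int DECISION_NODES = 3; // nodes 1, 2, 4

    static int byEdges()     { return EDGES.length - NODES + 2; }
    static int byDecisions() { return DECISION_NODES + 1; }
}
```

Both formulas give V(G) = 11 - 9 + 2 = 3 + 1 = 4, matching the four basis paths listed above.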
Loop testing
• Focuses on checking the validity of the loop structure.
• For a single loop:
o Skip the loop
o Loop 1 time
o Loop k times (k < n)
o Loop n, n + 1 times
with n the maximum number of iterations.
(Loop classes: single loop, nested loops, sequence loops, unstructured loops.)
Loop testing (cont.)
• For nested loops:
o Start with the innermost loop. Set the iteration parameters of the outer loops to their minimum values.
o Check the min+1 parameter, a typical value, max-1, and max for the inner loop while the iteration parameters of the outer loops are at their minimum.
o Continue with the outer loops until all loops are tested.
Loop testing (cont.)
• For sequence loops:
o Do similarly to nested loops.
o For unstructured loops?
Loop testing: Example
// LOOP TESTING EXAMPLE PROGRAM
import java.io.*;

class LoopTestExampleApp {
    // ------------------ FIELDS ----------------------
    public static BufferedReader keyboardInput =
        new BufferedReader(new InputStreamReader(System.in));
    private static final int MINIMUM = 1;
    private static final int MAXIMUM = 10;

    // ------------------ METHODS ---------------------
    /* Main method */
    public static void main(String[] args) throws IOException {
        System.out.println("Input an integer value:");
        int input = new Integer(keyboardInput.readLine()).intValue();
        int numberOfIterations = 0;
        for (int index = input; index >= MINIMUM && index <= MAXIMUM; index++) {
            numberOfIterations++;
        }
        // Output and end
        System.out.println("Number of iterations = " + numberOfIterations);
    }
}
Loop testing: Example (cont.)
Input   Result
11      0 (skip loop)
10      1 (loop 1 time)
5       6 (loop k times, k < n)
1       10 (loop n times)
0       0 (skip loop)
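The loop logic from the example can be factored into a helper so the boundary cases can be asserted automatically (`countIterations` is a name introduced here, not from the slides; note that an input of 5 runs the loop 6 times, for index values 5 through 10 inclusive):

```java
// Extracted loop logic from LoopTestExampleApp, made unit-testable.
public class LoopBoundaries {
    private static final int MINIMUM = 1;
    private static final int MAXIMUM = 10;

    static int countIterations(int input) {
        int numberOfIterations = 0;
        for (int index = input; index >= MINIMUM && index <= MAXIMUM; index++) {
            numberOfIterations++;
        }
        return numberOfIterations;
    }
}
```

The boundary cases are then: skip (11 and 0), one pass (10), k passes (5), and n passes (1).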
Data flow testing
🠶 Data flow testing is an effective method for identifying issues with variables:
🠶 the statement assigning or entering data into a variable is incorrect;
🠶 a variable definition is missing before use;
🠶 a variable holds the wrong value because execution followed the wrong path...
🠶 Each variable should have a proper life cycle through a 3-step sequence: created, used, and deleted.
🠶 Only statements within the scope of the variable can access/process the variable.
🠶 Scope: global, local
Data flow testing (cont.)
Analysis of the life cycle of a variable
🠶 Statements access a variable through one of the following three actions:
🠶 d (define): defines the variable, assigning a specified value to it (entering data into a variable is also an operation that assigns a value to the variable).
🠶 r (reference): refers to the value of the variable (usually through an expression).
🠶 u (undefine): cancels (deletes) the variable.
🠶 Thus, if the symbol ~ describes the state in which the variable does not yet exist, we have the first three possibilities for processing a variable:
🠶 ~d: the variable does not exist and is defined with the specified value.
🠶 ~r: the variable does not exist and is used immediately (with what value?).
🠶 ~u: the variable does not exist and is destroyed (unusual).
Data flow testing (cont.)
Analysis of the life cycle of a variable (cont.)
🠶 The 3 variable actions combine to create 9 pairs:
🠶 dd: defined and then defined again: a bit strange; may be correct and acceptable, but can also be a programming error.
🠶 dr: defined then used: the correct and normal sequence.
🠶 du: defined and then deleted: a bit strange; may be correct and acceptable, but can also be a programming error.
🠶 rd: used, then defined with a new value: reasonable.
🠶 rr: used and then used again: reasonable.
🠶 ru: used and then destroyed: reasonable.
🠶 ud: deleted then redefined: acceptable.
🠶 ur: deleted and then used: an error.
🠶 uu: deleted and then deleted again: probably a programming error.
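Two of these pairs can be seen in ordinary code. The sketch below (a hypothetical method, not from the slides) shows a dd pair, where the first definition of `total` is overwritten without ever being referenced, a likely programming error, followed by the normal dr sequence:

```java
// Illustrates a dd anomaly: the first definition of 'total' is overwritten
// before it is ever referenced, so the first computation is silently lost.
public class DataFlowAnomaly {
    static int ddAnomaly(int[] values) {
        int total = values[0] * 2; // d: define 'total'
        total = 0;                 // d again: dd pair, first value never read
        for (int v : values) {
            total += v;            // r then d: normal interleaving
        }
        return total;              // r: dr sequence, normal use
    }
}
```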
Data flow testing (cont.)
Data flow graph
• Describes the different life scenarios of the variables.
• Creating a data flow graph is similar to creating the control flow graph of the module under test, plus labeling the variable actions.
Data flow testing (cont.)
Data flow testing process
• Create the control flow graph, then convert it to a data flow graph.
• Compute the cyclomatic complexity of the graph (C = P + 1).
• Create C basis paths.
• Test the life cycle of each data variable:
o Each variable has at most C scenarios of variable actions.
o For each scenario, look for abnormal action pairs.
Data flow testing: Example.
• The graph has 2 decision nodes, so C = 2 + 1 = 3.
• The function has 4 input parameters and 2 local variables: a, b, c, d, e, x.
• Scenario 1: ~dduk
• Scenario 2: ~dduk (like scenario 1).
• Scenario 3: ~dk
Scenarios 1 & 2 contain the abnormal pair dd. So, we must determine whether it is related to an error or not.
Tool

🠶 http://pathcrawler-online.com:8080
• PathCrawler's principal functionality, and the one demonstrated in
this online version, is to automate structural unit testing by
this online version, is to automate structural unit testing by
generating test inputs for full coverage of the C function under test.
Full coverage can mean all feasible execution paths or k-path
coverage which restricts the all-path criterion to paths with at most
k consecutive loop iterations. These are available in this online
version. PathCrawler can also be used to satisfy other coverage
criteria (such as branch coverage, MC-DC,...), to generate
supplementary tests to improve coverage of an existing functional
test suite or to generate just the tests necessary to cover the part of
the code which has been impacted by a modification.
Exercise
🠶 Execute the tool http://pathcrawler-online.com:8080 and find:
🠶 coverage results
🠶 test cases
🠶 unit code corresponding to the test cases
🠶 Study unit testing with JUnit.
Chapter 6: SQA tools
Reference
This chapter refers from the book:
Mastering Software Quality Assurance: Best Practices, Tools
and Techniques for Software Developers
Introduction to Software Testing
Outline
6.1. QA management tools
6.2. Unit testing tool
6.3. Automated functional testing tool
6.4. Performance testing tool
6.5. Security testing tool
6.6. Applying SQA tools in projects
6.1. QA management tools

🠶 QA management tools assist testers in doing their tasks effectively. These tasks include:
1. Creating and maintaining release/project cycle/component information.
2. Creating and maintaining the test artifacts specific to each release/cycle, for which we have requirements, test cases, etc.
3. Establishing traceability and coverage of the test assets.
4. Test execution support – test suite creation, test execution status capture, etc.
5. Metric collection/report-graph generation for analysis.
6. Bug tracking/defect management.
Documentation support software

🠶 Writing reports, user manuals, designing and building test cases, and other
documentation-related tasks frequently employ documentation support
software.
🠶 The most commonly used software are:
∙ Microsoft Office Word.
∙ Microsoft Office Exel.
∙ Microsoft Office Project.
∙ Microsoft Office Power Point.
Bug tracking system
🠶 A bug tracking system is required for software projects in order to track and
report defects while the software is being developed.
Bug tracking system (cont.)
🠶 Following are some popular bug tracking tools:
🠶 Bugzilla: http://www.bugzilla.org/
🠶 Redmine: http://www.redmine.org/
🠶 Atlassian JIRA : http://www.atlassian.com
Bug tracking system: Bugzilla
🠶 Bugzilla is server software designed to help you manage software development.
🠶 Key features of Bugzilla include:
• Advanced search capabilities
• E-mail Notifications
• Modify/file Bugs by e-mail
• Time tracking
• Strong security
• Customization
• Localization
Bug tracking system: Redmine
🠶 Redmine is a flexible project management web application written using the Ruby on Rails framework.
🠶 Some of the main features of Redmine are:
• Multiple projects support
• Flexible role based access control
• Flexible issue tracking system
• Gantt chart and calendar
• News, documents & files management
• Feeds & email notifications
• Per project wiki
• Per project forums
• Time tracking
• Custom fields for issues, time-entries, projects and users
• SCM integration (SVN, CVS, Git, Mercurial and Bazaar)
• Issue creation via email
• …
Bug tracking system: Atlassian JIRA
🠶 Built for every member of agile teams and beyond to plan, track, and ship world-
class software.
🠶 Use cases
• Agile teams
• Bug tracking
• Project management
• Product management
• Process management
• Task management
• Software development
• Requirements & test case management
6.2. Unit testing tool
🠶 Unit testing tools are used by the developers to test the source code of the application.
JUNIT5 introduction

🠶 JUnit is an open-source unit testing framework for Java. It helps Java developers write and run repeatable tests.
A first test case
import static org.junit.jupiter.api.Assertions.assertEquals;
import example.util.Calculator;
import org.junit.jupiter.api.Test;
class MyFirstJUnitJupiterTests {
private final Calculator calculator = new Calculator();
@Test
void addition() {
assertEquals(2, calculator.add(1, 1));
}
}
From: http://junit.org/junit5/docs/current/user-guide/
Junit Anotation
🠶 JUnit Jupiter uses annotations for configuring tests and extending the framework.
🠶 Examples:
Annotation Description
@Test Denotes that a method is a test method. Unlike JUnit 4’s @Test annotation, this annotation does
not declare any attributes, since test extensions in JUnit Jupiter operate based on their own
dedicated annotations. Such methods are inherited unless they are overridden.
@BeforeEach Denotes that the annotated method should be
executed before each @Test, @RepeatedTest, @ParameterizedTest, or @TestFactory method
in the current class; analogous to JUnit 4’s @Before. Such methods are inherited – unless they
are overridden or superseded (i.e., replaced based on signature only, irrespective of Java’s
visibility rules).
@AfterEach Denotes that the annotated method should be
executed after each @Test, @RepeatedTest, @ParameterizedTest, or @TestFactory method in
the current class; analogous to JUnit 4’s @After. Such methods are inherited – unless they
are overridden or superseded (i.e., replaced based on signature only, irrespective of Java’s
visibility rules).
A standard test class
import static org.junit.jupiter.api.Assertions.fail;
import static org.junit.jupiter.api.Assumptions.assumeTrue;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

class StandardTests {

    @BeforeAll
    static void initAll() { }

    @BeforeEach
    void init() { }

    @Test
    @Disabled("for demonstration purposes")
    void skippedTest() { // not executed
    }

    @Test
    void abortedTest() {
        assumeTrue("abc".contains("Z"));
        fail("test should have been aborted");
    }

    @AfterEach
    void tearDown() { }

    @AfterAll
    static void tearDownAll() { }
}
Class Assertions
Assertions is a collection of utility methods that support asserting conditions in tests.
Modifier and Type   Method                                          Description
static void         assertEquals(byte expected, byte actual)        Assert that expected and actual are equal.
static void         assertNotEquals(byte unexpected, byte actual)   Assert that expected and actual are not equal.
static void         assertFalse(boolean condition)                  Assert that the supplied condition is false.
static void         assertTrue(boolean condition)                   Assert that the supplied condition is true.
6.3. Automated functional testing tool
🠶 There are many automated functional testing tools, both free and paid, for example:
🠶 Selenium: a free tool for website testing
🠶 Unified Functional Testing (UFT): a paid tool for testing websites and desktop software
🠶 Appium: a free tool for mobile app testing
🠶 Among the above tools, Selenium is widely used because it is a powerful free tool with a strong support community.
Selenium introduction
🠶 Selenium (https://www.selenium.dev ) is an open-source and a portable
automated software testing tool for testing web applications.
🠶 Selenium is a set of tools that helps testers to automate web-based applications
more efficiently:
🠶 Selenium IDE: the Selenium Integrated Development Environment, a Firefox plugin that enables testers to record their actions as they go through the process they need to test.
🠶 Selenium RC: Selenium Remote Control, the pioneering testing framework that allowed more than just straightforward browser activities and linear execution.
🠶 Selenium WebDriver: the replacement for Selenium RC, which delivers commands to the browser directly and returns results.
🠶 Selenium Grid: a technology that reduces execution time by running parallel tests across multiple computers and browsers at once.
6.4. Performance testing tool
🠶 Performance testing is executed to find out how quickly a system can handle a specific workload. It can also be used to confirm and validate the system's other quality characteristics, such as scalability, dependability, and resource efficiency.
Some performance testing tools
🠶 LoadRunner: a load/stress testing tool for websites and other applications, supporting a wide range of application environments, platforms, and databases. Link to homepage: https://www.microfocus.com/en-us/products/loadrunner-enterprise/overview
🠶 Apache JMeter: can be used for performance testing of both static and
dynamic resources. Link to homepage: http://jmeter.apache.org/
Performance testing tool: Loadrunner

🠶 Support performance testing for the widest range of protocols and more than
50 technologies and application environments
🠶 Quickly identify the most likely causes of performance issues with a patented
auto-correlation engine.
🠶 Accurately predict application scalability and capacity with accurate
emulation of realistic loads.
🠶 Flexible Test Scenarios: Run high-scale tests with minimal hardware and
leverage the public cloud to scale up or down.
🠶 Scripting: Easily create, record, correlate, replay, and enhance scripts for better
load testing.
🠶 Continuous Testing: Built-in integrations include IDE, CI/CD, open source test
automation, monitoring, and source code management tools.
Performance testing tool: Apache JMeter:
🠶 Apache JMeter features include:
🠶 Ability to load and performance test many different applications/server/protocol types.
🠶 Web - HTTP, HTTPS (Java, NodeJS, PHP, ASP.NET, …)
🠶 Full featured Test IDE that allows fast Test Plan recording (from Browsers or native
applications), building and debugging.
🠶 CLI mode (Command-line mode (previously called Non GUI) / headless mode) to load
test from any Java compatible OS (Linux, Windows, Mac OSX, …)
🠶 A complete and ready to present dynamic HTML report
🠶 Easy correlation through ability to extract data from most popular response
formats, HTML, JSON , XML or any textual format
🠶 Complete portability and 100% Java purity.
🠶 Full multi-threading framework allows concurrent sampling by many threads and
simultaneous sampling of different functions by separate thread groups.
🠶 Caching and offline analysis/replaying of test results.
🠶 Highly Extensible core:
6.5. Security testing tool
🠶 We utilize security testing to ensure that the data in an information system remains secure and cannot be accessed by unauthorized individuals. Software that has undergone successful security testing is shielded against harmful malware and other dangers that could cause it to crash or behave in an unexpected way.
🠶 To check for vulnerabilities and defects in the software, numerous free, commercial, and open-source solutions are available. Aside from being free, the nicest thing about open-source tools is that we can modify them to meet our unique needs.
Some security testing tools
🠶 Zed Attack Proxy (ZAP): a multi-platform, open-source web application security testing tool created by OWASP (Open Web Application Security Project). ZAP is used to identify a variety of security flaws in a web application during both the development and testing phases.
🠶 Wapiti: an open-source project from SourceForge and devloop that is free to use. To check web applications for security vulnerabilities, Wapiti employs black-box testing. It is a command-line application.
🠶 W3af: one of the most well-known frameworks for web application security testing, created using Python. The program enables testers to identify more than 200 different kinds of security flaws in web applications.
6.6. Applying QA tools in projects
Software testing tools by type: functional test automation, mobile test automation, performance testing, test management, security testing, API testing. (Example tools for each type were shown in a table in the original slides.)
Chapter 7: Standards in SQA
management
Reference
This chapter refers from the book:
Mastering Software Quality Assurance: Best Practices, Tools
and Techniques for Software Developers
Introduction to Software Testing
Outline
7.1. The scope of quality management standards
7.2. SQA in ISO standard
7.3. SQA in CMM, CMMI standard
Why are Standards Important?
• Standards provide encapsulation of best, or at least most appropriate, practice
• Standards provide a framework around which the quality assurance process may be implemented
• Standards assist in continuity of work when it's carried out by different people throughout the software product lifecycle
Standards should not be avoided. If they are too extensive for the task at hand, then they should be tailored.
A Simplistic Approach
In most mature organizations:
• ISO is not the only source of SDS
• Process and product standards are derived independently
• Product standards are not created by SQA
Process Standards
Process standards: standards that define the process which should be followed during software development.
(Figure: sources ISO, CMM, and CMMI feeding organizational standards — quality manual, organizational SD process standards, IPDS — and project standards — project SD process standards, SQP (SDP, IP, Method Sheets), SCMP.)
Product Standards
Product standards: standards that apply to the software product being developed.
(Figure: sources ISO and MIL/industry standards feeding organizational product standards, COTS standards, and project product standards (SDP, IP, Method Sheets).)
ISO 9000
🠶 ISO (International Standards Organization):
🠶 a consortium of 63 countries established to formulate
and foster standardization.
🠶 ISO published its 9000 series of standards in 1987.
🠶 A set of guidelines for the production process.
🠶 not directly concerned about the product it self.
🠶 a series of three standards: ISO 9001, ISO 9002, and ISO
9003.
🠶 Based on the premise: if a proper process is followed for
production, good quality products are bound to follow.
The ISO 9000:2000 Software Quality
Standard
🠶 The international organization ISO has developed a series of
standards for quality assurance and quality management,
collectively known as the ISO 9000.
🠶 The ISO 9000 standards are reviewed and updated once every 5-8
years. The standards released in the year 2000 are known as ISO
9000:2000.
🠶 There are three components of the ISO 9000:2000 standard.
🠶 ISO 9000: Fundamentals and vocabulary
🠶 ISO 9001: Requirements
🠶 ISO 9004: Guidelines for performance improvements
🠶 Note: ISO 9002 and ISO 9003 were parts of ISO 9000:1994, but they are no longer parts of ISO 9000:2000.
ISO 9000 for Software Industry
🠶 ISO 9000 is a generic standard:
🠶 applicable to many industries,
🠶 starting from a steel manufacturing industry to a
service rendering company.
🠶 Many clauses of ISO 9000 documents:
🠶 use generic terminologies
🠶 very difficult to interpret them in the context of
software organizations.
🠶 Very difficult to interpret many clauses for software
industry:
🠶 software development is radically different from
development of other products.
Software vs. other industries (cont.)
🠶 Software is intangible
🠶 therefore difficult to control.
🠶 It is difficult to control anything that we cannot see
and feel.
🠶 In contrast, in a car manufacturing unit the work products are tangible and visible at every stage.
🠶 Software project management is an altogether different
ball game.
🠶 During software development: the only raw material
consumed is data.
🠶 For any other product development: Lot of raw materials
consumed.
🠶 ISO 9000 standards have many clauses corresponding to
raw material control .
-> not relevant to software organizations.
The ISO 9000:2000 Software Quality Standard
ISO 9000:2000 Fundamentals:
🠶 This is based on eight principles.
🠶 Principle 1: Customer focus
🠶 Principle 2: Leadership
🠶 Principle 3: Involvement of people
🠶 Principle 4: Process approach
🠶 Principle 5: System approach to management
🠶 Principle 6: Continual improvement
🠶 Principle 7: Factual approach to decision making
🠶 Principle 8: Mutually beneficial supplier relationships
The ISO 9000:2000 Software Quality Standard
ISO 9001:2000 Requirements
🠶 The five major parts of this document are as follows.
🠶 Part 4. Systemic requirements
🠶 Part 5. Management requirements
🠶 Part 6. Resource requirements
🠶 Part 7. Realization requirements
🠶 Part 8. Remedial requirements
The ISO 9000:2000 Software Quality Standard
ISO 9001:2000 Requirements
🠶 Part 4. Systemic requirements (partial)
🠶 Document the organizational policies and goals.
🠶 Document all quality processes and their interrelationship.
🠶 Implement a mechanism to approve documents before they are distributed.
🠶 Part 5: Management requirements (partial)
🠶 Generate an awareness for quality to meet a variety of requirements,
such as customer, regulatory, and statutory.
🠶 Focus on customers by identifying and meeting their requirements in
order to satisfy them.
🠶 Develop a quality policy to meet the customer’s needs.
🠶 Clearly define individual responsibilities and authorities concerning the implementation of quality policies.
The ISO 9000:2000 Software Quality Standard
ISO 9001:2000 Requirements
🠶 Part 6. Resource requirements (partial)
🠶 Identify and provide resources required to support the
organizational quality policy to realize the quality objectives.
🠶 Allocate quality personnel resource to projects.
🠶 Provide a mechanism to enhance the quality level of personnel.
🠶 Part 7: Realization requirements (partial)
🠶 Develop a plan to realize a product from its requirements.
🠶 Gather customer requirements. Classify those requirements.
🠶 Review the requirements.
🠶 Follow a defined purchasing process by evaluating potential suppliers based on a number of factors, such as ability to meet requirements and price.
The ISO 9000:2000 Software Quality Standard
ISO 9001:2000 Requirements
🠶 Part 8. Remedial requirements (partial)
🠶 Measure and track the customer’s satisfaction level.
🠶 Perform internal audit.
🠶 Example: Find out whether or not personnel with
adequate education, experience, and skill have
been assigned to a project.
🠶 Monitor processes by using a set of key performance
indicators.
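The remedial requirements above lend themselves to simple automation. As a minimal illustrative sketch (not part of the standard; every KPI name and threshold below is hypothetical), an organization might track a few key performance indicators against targets and flag the ones that need corrective action:

```python
# Hypothetical Part 8-style monitoring: compare measured KPIs against
# target thresholds and list the KPIs that call for remedial action.
# All indicator names and numbers are invented for illustration.

KPI_TARGETS = {
    "customer_satisfaction": 4.0,   # minimum average survey score (1-5)
    "on_time_delivery_rate": 0.90,  # minimum fraction delivered on time
    "audit_findings_closed": 0.95,  # minimum fraction of findings closed
}

def flag_remedial_actions(measured: dict) -> list:
    """Return the KPIs whose measured value falls below its target."""
    return [name for name, target in KPI_TARGETS.items()
            if measured.get(name, 0.0) < target]

measured = {
    "customer_satisfaction": 3.6,
    "on_time_delivery_rate": 0.93,
    "audit_findings_closed": 0.88,
}
print(flag_remedial_actions(measured))
# ['customer_satisfaction', 'audit_findings_closed']
```

The point is not the code but the discipline: each KPI has a documented target, measurements are taken regularly, and a shortfall triggers a recorded remedial action.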
Outline
7.1. The scope of quality management standards
7.2. SQA in ISO standard
7.3. SQA in CMM, CMMI standard
Capability Maturity Model (CMM)
The Software Engineering Institute Capability Maturity Model (SEI-CMM) is a model for judging the following:
🠶 Judging the maturity of the software processes of an
organization.
🠶 Identifying the key practices that are required to increase
the maturity of these processes.
🠶 Describes the principles and practices underlying software process maturity, and is intended to help software organizations improve the maturity of their software processes along an evolutionary path from ad hoc, chaotic processes to mature, disciplined software processes.
Capability Maturity Model (CMM)
The CMM is organized into five maturity levels
Level 1 : Initial
🠶 The software process is characterized as ad hoc, few processes
are defined and success depends on individual efforts.
🠶 This period is chaotic without any procedure and process
established for software development and testing.
Capability Maturity Model (CMM)
Level 2 : Repeatable
🠶 Track cost, schedule, and functionality.
🠶 During this phase, measures and metrics will be reviewed to
include percentage compliance with various processes,
percentage of allocated requirements delivered, number of
changes to requirements, number of changes to project plan,
and variance between estimated and actual size of deliverables.
🠶 The following are the key process activities during Level 2:
❑ Software configuration management
❑ Software quality assurance
❑ Software subcontract management
❑ Software project tracking and oversight
❑ Software project planning
❑ Requirements management
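One of the Level 2 measures listed above, variance between estimated and actual size of deliverables, can be computed in a few lines. This is only an illustrative sketch; the function name and the deliverable data are hypothetical:

```python
# Illustrative Level 2-style tracking: percentage variance between the
# estimated and actual size of each deliverable. Data is invented.

def size_variance_pct(estimated: float, actual: float) -> float:
    """Percentage deviation of the actual size from the estimate."""
    return (actual - estimated) / estimated * 100.0

# (deliverable, estimated LOC, actual LOC)
deliverables = [
    ("login module", 2000, 2600),
    ("report module", 1500, 1400),
]
for name, est, act in deliverables:
    print(f"{name}: {size_variance_pct(est, act):+.1f}%")
# login module: +30.0%
# report module: -6.7%
```

Reviewing such variances at the end of each project is what makes Level 2 "repeatable": the next estimate is calibrated against recorded history rather than guesswork.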
Capability Maturity Model (CMM)
Level 3: Defined
🠶 The software process for management and engineering activities
is documented, standardized and integrated into a standard
software process for the organization.
🠶 All projects use an approved version of the organization's standard
software process for developing and maintaining software.
🠶 The following are the key process activities during Level 3:
❑ Peer reviews
❑ Intergroup coordination
❑ Software product engineering
❑ Integrated software management
❑ Training Program
❑ Organization process definition
❑ Organization process focus
Capability Maturity Model (CMM)
Level 4: Managed
🠶 Detailed measures of the software process and product quality are
collected and both are understood and controlled.
🠶 This phase denotes that the processes are well defined and
proficiently managed.
🠶 The quality standards are on an upswing.
🠶 With sound quality processes in place, the organization is better equipped to meet customer expectations of high-quality, high-performance software at reasonable cost with committed deliveries.
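The Level 4 idea of collecting detailed measures and keeping them "understood and controlled" is commonly operationalized with statistical process control. A hedged sketch, assuming a simple 3-sigma control-limit rule and invented sample data:

```python
# Hypothetical Level 4-style quantitative control: put a process measure
# (defects found per code review) under mean +/- 3 standard deviation
# control limits and flag samples that fall outside them.
from statistics import mean, pstdev

def control_limits(samples):
    """Return (lower, upper) 3-sigma control limits for the samples."""
    m, s = mean(samples), pstdev(samples)
    return m - 3 * s, m + 3 * s

defects_per_review = [4, 5, 3, 6, 4, 5, 4]
low, high = control_limits(defects_per_review)
outliers = [x for x in defects_per_review if not (low <= x <= high)]
print(outliers)  # an empty list suggests the process is statistically stable
```

A review whose defect count falls outside the limits would trigger a causal analysis, which is exactly the kind of data-driven control that distinguishes Level 4 from Level 3.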
Capability Maturity Model (CMM)
Level 5: Optimizing
🠶 Continuous process improvement is enabled by quantitative feedback from the process and from piloting new ideas and technologies.
🠶 Continuous emphasis on process improvement and defect reduction avoids process stagnation and ensures continual improvement, translating into improved productivity. Tracing requirements across each development phase improves the completeness of the software, reduces rework, and simplifies maintenance. Verification and validation activities are planned and executed to reduce defect leakage. Customers have access to the project plan, receive regular status reports, and their feedback is sought and used for process tuning.
What is a CMMi?
🠶 Capability Maturity Model Integration (CMMI) is a reference model of mature practices in a specified discipline, used to improve PROCESS at work
🠶 The result of adopting CMMI is much better product or process quality.
🠶 Before we focus on CMMI we need to understand the meaning of a PROCESS

So what is a PROCESS?
What is a process
🠶 A process is a series of actions or steps taken in order to
achieve a particular end in the form of a product or
service
🠶 We may not realize it, but processes are everywhere and
in every aspect of our leisure and work. A few examples of
processes might include:
🠶 Preparing breakfast
🠶 Placing an order
🠶 Developing a budget
🠶 Writing a computer program
🠶 Obtaining application requirements
🠶 And so on
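The definition above can be made concrete with a toy sketch: a process modeled as an ordered list of named steps applied in sequence to a work item. Everything here (the process, the step names, the data) is invented purely for illustration:

```python
# Toy model of "a series of actions or steps taken in order to achieve
# a particular end": a process is an ordered list of (name, step) pairs
# applied one after another to a work item.

def run_process(steps, work_item):
    """Apply each step to the work item, in order, and return the result."""
    for name, step in steps:
        work_item = step(work_item)
    return work_item

# A hypothetical "placing an order" process from the examples above.
order_process = [
    ("place order",   lambda o: {**o, "placed": True}),
    ("confirm order", lambda o: {**o, "confirmed": True}),
]
result = run_process(order_process, {"item": "coffee"})
print(result)  # {'item': 'coffee', 'placed': True, 'confirmed': True}
```

The ordering matters: swapping or skipping steps changes the outcome, which is why process definitions in CMMI spell out both the steps and their sequence.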
Process Improvement
• The quality of a system is highly influenced by the quality of the process used to acquire, develop, and maintain it.
• Everyone realizes the importance of having a motivated, quality work force and the latest technology, but even the finest people can’t perform at their best when the process is not understood or operating at its best.
• This premise implies a focus on processes as well as on products.
CMMI for Process Improvement
The aim of CMMI is to improve processes so they can be performed in the best manner at the least cost.
Use CMMI in process improvement activities as a
🠶 collection of best practices
🠶 framework for organizing and prioritizing activities
🠶 support for the coordination of multi-disciplined activities that
might be required to successfully build a product
🠶 means to emphasize the alignment of the process
improvement objectives with organizational business
objectives
Ad Hoc Processes (Not using CMMi)
Processes are ad hoc and improvised by practitioners and their management
Process descriptions are not rigorously followed or enforced
Performance is highly dependent on current practitioners
Understanding of the current status of a project is limited
Immature processes result in fighting fires:
🠶 There is no time to improve – instead,
practitioners are constantly reacting
🠶 Firefighters get burned
🠶 Embers might rekindle later
Improved Processes (Using CMMi)
🠶 Process descriptions are consistent with the way work actually is done
🠶 They are defined, documented and continuously
improved
🠶 Processes are supported visibly by management and
others
🠶 They are well controlled – process fidelity is evaluated and
enforced
🠶 There is constructive use of product and process measurement
🠶 Technology is introduced in a disciplined manner
… So what is CMMI?
🠶 In the same way, high-quality software organizations are different from low-quality organizations.
🠶 CMMI tries to capture and describe these differences.
🠶 CMMI strives to create software development organizations that are “mature”, or more mature than before applying CMMI.
How CMMI Helps?
CMMI provides guidance for improving an organization’s processes and ability to manage the development, acquisition and maintenance of products or services.
CMMI places proven approaches into a structure that helps an organization:
- appraise its organizational maturity or process area
capability
- establish priorities for improvement
- implement these improvements
Summary of levels
🠶 Level 1 – Initial. Anything at all. Ad hoc and chaotic. Will have some successes, but will also have failures and badly missed deadlines.
🠶 Level 2 – Repeatable. SW processes are defined, documented, practiced, and people are trained in them. Groups across an organization may use different processes.
Summary of levels
🠶 Level 3 – Defined. SW processes are consistent and known across the whole organization.
🠶 Level 4 – Managed. SW processes and results are measured quantitatively, and processes are evaluated with this data.
🠶 Level 5 – Optimizing. Continuous process improvement. Experimenting with new methods and technologies. Processes are changed when something that works better is found.
Level 1 – Initial
🠶 Team tackles projects in different ways each time
🠶 Can have strong successes, but may not repeat
🠶 Some time/cost estimates are accurate, many far off
🠶 Success comes from smart people doing the right things
🠶 Hard to recover from good people leaving
🠶 Frequent crises and "firefighting.” (Many believe this is
standard for SW development. CMM says NO.)
🠶 Most SW development organizations are Level 1.
Level 2 – Repeatable
Key areas
🠶 Requirements management
🠶 Software project planning
🠶 Project tracking and oversight
🠶 Subcontracts management
🠶 Quality assurance
🠶 Configuration management
Level 3 – Defined
Key areas. Level 2, plus…
🠶 Organization-wide process focus
🠶 Organization-wide process definition
🠶 Training program in above
🠶 Integrated software management (above applied per
project)
🠶 Software product engineering (coding, etc.)
🠶 Inter-group coordination
🠶 Peer reviews
Level 4 – Managed
Key areas. Level 3, plus…
🠶 Quantitative process management (data gathering)
🠶 Quality management (data-driven quality improvement)
Level 5 – Optimizing
Key areas. Level 4, plus…
🠶 Defect prevention
🠶 Technology change management (bring in new methods)
🠶 Process change management (improve processes)
🠶 The optimizing level (Level 5) is not the destination of process management.
🠶 The destination is better products for a better price:
economic survival
🠶 The optimizing level is a foundation for building an ever-
improving capability.