
Software Testing Polytech 28 Feb 2024

The document outlines the principles of software testing, emphasizing its role in measuring software quality by identifying defects in both functional and non-functional requirements. It discusses the importance of integrating testing into the software development lifecycle to reduce risks and improve quality through lessons learned and process improvements. Additionally, it covers various testing methodologies, terminologies, and the distinction between quality assurance and quality control.


ISTQB

International Software Testing Qualifications Board
Course Faculty:
Sachin N. Pardeshi.
M.E. (Comp. Engg.)
Software Quality
• Testing can help us measure the Quality of software
• Quality - ‘The degree to which a component, system or
process meets specified requirements and/or user/customer
needs and expectations’
• This is measured in terms of defects found
• Defects covering:
– functional software requirements and characteristics
– and non-functional software requirements and
characteristics (e.g. reliability, usability, efficiency, portability
and maintainability)
• Testing can give confidence in the Quality of the software if it
finds few or no defects

SNP - ISTQB Testing Foundation Course RCPIT,Shirpur Slide 2 • SNP Internal


Software Quality
• A properly designed test that passes reduces the overall level of risk in a system
• Risk – ‘A factor that could result in future negative consequences; usually expressed as impact and likelihood’
• When testing does find defects, the quality of the software system increases when those defects are fixed
• The quality of systems can be improved through lessons learned from previous projects
• Analysis of the root causes of defects found in other projects can lead to process improvement
• Process improvement can prevent those defects recurring
• This, in turn, can improve the quality of future systems
• Testing should be integrated as one of the quality assurance activities



Definition of Software Testing

Software testing is a process of evaluating the functionality of a software application, with the intent of determining whether the developed software meets its specified requirements and of identifying defects, so that a quality product can be produced.
The Role of Testing
• Rigorous testing of systems and documentation can:
– reduce the risk of problems occurring in an operational environment
– contribute to the quality of the software system
• How? By finding and correcting defects before the system is released for operational use
• Software testing may also be required to meet contractual or legal requirements, or industry-specific standards



1.2: Failure, Error, Fault, Defect, Bug Terminology
“A mistake in coding is called an error; an error found by a tester is called a defect; a defect accepted by the development team is called a bug; and a build that does not meet the requirements is a failure.”
What is a defect?
The variation between the actual results and the expected results is known as a defect.
If a developer finds an issue and corrects it himself during the development phase, it is called a defect.
What is a bug?
If testers find a mismatch in the application/system during the testing phase, they call it a bug.
As mentioned above, there is some contradiction in the usage of “bug” and “defect”; “bug” is widely used as an informal name for a defect.
1.2: Failure, Error, Fault, Defect, Bug Terminology Contd …

What is an error?
A coding mistake that prevents a program from compiling or running. If a developer is unable to successfully compile or run a program, they call it an error.

What is a failure?
Once the product is deployed, any issue that customers find makes it a failed product. If an end user finds an issue after release, that particular issue is called a failure.
1.3 Testing Objectives
• The objectives of testing can vary depending on the stage of testing being conducted, e.g.:
– Development testing (e.g. component, integration and system testing): to cause as many failures as possible so that defects in the software are identified and can be fixed
– Acceptance testing: to confirm that the system works as expected and to gain confidence that it has met the requirements
– Maintenance testing: often includes testing that no new errors have been introduced during development of the changes
– Operational testing: may be to assess system characteristics such as reliability or availability
• Reduce the risk of failure
• Reduce the cost of failure

1.4 Test Case
A Test Case is a set of actions executed to verify a particular
feature or functionality of your software application.
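As an illustration, a test case for a hypothetical discount feature pairs a concrete action with an expected result. The function name and values below are invented for this sketch, not taken from the course material:

```python
def apply_discount(price, percent):
    """Hypothetical feature under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# Test case TC-01: verify the discount feature.
# Action: invoke the feature with known inputs.
actual = apply_discount(200.0, 10)
# Expected result: 10% off 200.0 is 180.0.
expected = 180.0
assert actual == expected, f"expected {expected}, got {actual}"
```

A real test case would also record preconditions, test data and postconditions alongside the action and expected result.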
1.5 When to Start and Stop Testing of Software (Entry and Exit Criteria)
When to Start Testing:
• An early start to testing reduces the cost and time of rework
• It also depends on the development model being used, for example the Waterfall model
• Testing is done in different forms at every phase of the SDLC:
– During the requirement gathering phase, the analysis and verification of requirements are also considered testing.
– Reviewing the design in the design phase with the intent of improving it is also considered testing.
– Testing performed by a developer on completion of the code is also categorized as testing.
1.5 When to Start and Stop Testing of Software (Entry and Exit Criteria)
When to Stop Testing?
It is difficult to determine when to stop testing, as testing is a never-ending process and no one can claim that software is 100% tested. The following aspects should be considered when stopping the testing process:
• Testing deadlines
• Completion of test case execution
• Completion of functional and code coverage to a certain point
• Bug rate falls below a certain level and no high-priority bugs are identified
• Management decision
1.6 Skills for a Software Tester
Non-technical skills:
• Analytical skills
• Communication skills
• Time management and organization skills
• A great attitude
• Passion
Technical skills (as mentioned earlier, technical domain skill is important for testing):
• Project life cycle
• Testing concepts
• Knowledge of testing types
• Familiarity with programming languages
• Database concepts
• Test plan ideas
• Ability to analyze requirements
• Documentation skills
• Testing tools
1.7 Quality Assurance
What is assurance?
Assurance provides a guarantee that the product will work without any problems, as per the expectations or requirements.
What is Quality Assurance?
Quality Assurance, popularly known as QA testing, is an activity to ensure that an organization is providing the best possible product or service to customers.
Quality assurance has a defined cycle called the PDCA cycle or Deming cycle. The phases of this cycle are Plan, Do, Check and Act.
Quality Control

• A process used to ensure quality in a product or a service.
• The main aim of quality control is to check whether the products meet the specifications and requirements of the customer.
• QC also evaluates people on their quality-level skill sets and imparts training and certifications. This evaluation is required for service-based organizations and helps provide "perfect" service to the customers.
Difference between Quality Control and Quality Assurance?
Examples of QC and QA activities are as follows:

Quality Control Activities      Quality Assurance Activities
Walkthrough                     Quality Audit
Testing                         Defining Process
Inspection                      Tool Identification and Selection
Checkpoint review               Training on Quality Standards and Processes
Verification and Validation
1. Verification includes checking documents, design, code and program. Validation is a dynamic mechanism of testing and validating the actual product.
2. Verification does not involve executing the code. Validation always involves executing the code.
3. Verification uses methods like reviews, walkthroughs, inspections and desk-checking. Validation uses methods like black-box testing, white-box testing and non-functional testing.
4. Verification checks whether the software conforms to its specification. Validation checks whether the software meets the requirements and expectations of the customer.
5. Verification finds bugs early in the development cycle. Validation can find bugs that the verification process cannot catch.
6. The target of verification is the software architecture, specification, complete design, high-level and database design, etc. The target of validation is the actual product.
7. The QA team does verification and makes sure that the software meets the requirements in the SRS document. Validation is executed on the software code with the involvement of the testing team.
8. Verification comes before validation. Validation comes after verification.

VERIFICATION AND VALIDATION MODEL OR V MODEL

One of the major handicaps of the waterfall SDLC model was that defects were found at a very late stage of the development process.

Verification: Verification is a static analysis technique. In this technique, testing is done without executing the code. Examples include reviews, inspections and walkthroughs.
Validation: Validation is a dynamic analysis technique, where testing is done by executing the code. Examples include functional and non-functional testing techniques.

Requirement analysis: In this phase the requirements are collected, analyzed and studied. Brainstorming sessions, walkthroughs and interviews are conducted to make the objectives clear.
Verification activities: requirements reviews.
Validation activities: creation of UAT (user acceptance test) test cases.
Artifacts produced: requirements understanding document, UAT test cases.

System requirements / High-level design: In this phase a high-level design of the software is built. The team studies and investigates how the requirements could be implemented. The technical feasibility of the requirements is also studied, and the team decides on the modules to be created, their dependencies, and hardware/software needs.
Verification activities: design reviews.
Validation activities: creation of the system test plan and test cases, creation of the traceability matrix.
Artifacts produced: system test cases, feasibility reports, system test plan, hardware/software requirements, modules to be created, etc.

Architectural design: In this phase, based on the high-level design, the software architecture is created. The modules, their relationships and dependencies, architectural diagrams, database tables and technology details are all finalized in this phase.
Verification activities: design reviews.
Validation activities: integration test plan and test cases.
Artifacts produced: design documents, integration test plan and test cases, database table designs, etc.

Module design / Low-level design: In this phase each module or software component is designed individually. Methods, classes, interfaces, data types, etc. are all finalized in this phase.
Verification activities: design reviews.
Validation activities: creation and review of unit test cases.
Artifacts produced: unit test cases.

Implementation / Code: In this phase, the actual coding is done.
Verification activities: code review, test case review.
Validation activities: creation of functional test cases.
Artifacts produced: test cases, review checklist.

The right-hand side of the V demonstrates the testing activities, or the validation phase. We start from the bottom.
Unit testing: In this phase all the unit test cases created in the low-level design phase are executed.
Unit testing is a white-box testing technique, where a piece of code is written which invokes a method (or any other piece of code) to test whether the code snippet gives the expected output or not. This testing is performed mainly by the development team. In case of any anomaly, defects are logged and tracked.
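The mechanics described above, a piece of code that invokes a method and checks its output, can be sketched with Python's unittest module. The `add` function here is an invented stand-in for any unit under test:

```python
import unittest

def add(a, b):
    """Hypothetical unit under test."""
    return a + b

class TestAdd(unittest.TestCase):
    # Each test invokes the method and checks the expected output;
    # a failing assertion is the "anomaly" that would be logged as a defect.
    def test_positive_sum(self):
        self.assertEqual(add(3, 9), 12)

    def test_negative_sum(self):
        self.assertEqual(add(-3, -9), -12)

# Run the test cases and collect the result object.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAdd)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In practice these tests would live in a separate test module and run automatically on every build.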

Integration testing: In this phase the integration test cases created in the architectural design phase are executed. In case of any anomalies, defects are logged and tracked.
Integration testing is a technique where the unit-tested modules are integrated and tested to check whether the integrated modules render the expected results. In simpler words, it validates whether the components of the application work together as expected.
Artifacts produced: integration test results.
System testing: In this phase all the system test cases, functional test cases and non-functional test cases are executed. In other words, the actual, full-fledged testing of the application takes place here. Defects are logged and tracked to closure. Progress reporting is also a major part of this phase. The traceability matrix is updated to check coverage and the risks mitigated.
Artifacts produced: test results, test logs, defect report, test summary report and updated traceability matrices.
V MODEL PROS AND CONS
Pros:
– Development and progress are very organized and systematic.
– Works well for small to medium-sized projects.
– Testing starts from the beginning, so ambiguities are identified from the beginning.
– Easy to manage, as each phase has well-defined objectives and goals.
Cons:
– Not suitable for bigger and complex projects.
– Not suitable if the requirements are not consistent.
– No working software is produced at an intermediate stage.
– No provision for risk analysis, so uncertainty and risks remain.
Advantages of the V-model:
• Simple and easy to use.
• Testing activities like planning and test design happen well before coding. This saves a lot of time and gives a higher chance of success over the waterfall model.
• Proactive defect tracking: defects are found at an early stage.
• Avoids the downward flow of defects.
• Works well for small projects where requirements are easily understood.

Disadvantages of the V-model:
• Very rigid and least flexible.
• Software is developed during the implementation phase, so no early prototypes of the software are produced.
• If any changes happen midway, then the test documents along with the requirement documents have to be updated.

When to use the V-model:
• The V-shaped model should be used for small to medium-sized projects where requirements are clearly defined and fixed.
• The V-shaped model should be chosen when ample technical resources with the needed technical expertise are available.
End of Chapter 1
Chapter 2: Types of Testing

Static Testing:
1. Inspection
2. Structured Walkthroughs
3. Technical Reviews
Static Techniques
Static Testing – a Definition
• Static testing techniques involve examination of the project’s documentation, software and other information about the software products without executing them
• Static testing includes both reviews (e.g. of documentation) and static analysis of code
• Reviews, static analysis and dynamic testing have the same objective: identifying defects
• Static testing and dynamic testing are complementary
• Each technique can find different types of defects effectively and efficiently

Dynamic testing: testing that involves the execution of the software of a component or system.



Reviews and the Test Process
Why Review – The Cost of Errors

[Chart: the relative cost multiple of fixing an error rises steeply the later it is found – from requirements, through design, code and unit test, to acceptance and operational use.]


Reviews and the Test Process
Why Review – Where Errors Are Introduced

[Chart: percentage of errors introduced at each phase – requirements, functional design, logical design, code and other.]


Reviews and the Test Process
What Do Reviews Find?
• In contrast to dynamic testing, reviews find defects rather than failures
• Typical defects that are easier to find in reviews than in dynamic testing are:
– deviations from standards
– requirement defects
– design defects
– insufficient maintainability
– incorrect interface specifications



S/W INSPECTION PROCESS
S/W INSPECTION OBJECTIVES
– To detect and identify defects in a software element. An inspection is conducted by peers, comprising 3 to 6 participants.
– Defects are collected and stored in an inspection database.
SPECIAL RESPONSIBILITIES:
1. Moderator: the chief planner and meeting manager for the inspection process.
2. Reader: leads the inspection team through the software element in a comprehensive and logical fashion.
3. Recorder: responsible for recording the inspection data required for process analysis.
4. Inspector: identifies and describes defects in the software element. An inspector must have knowledge of the inspection process and should represent a different viewpoint at the meeting.
5. Author: responsible for meeting the inspection entry criteria and for performing any rework required.
S/W INSPECTION INPUT includes:
• The software element to be inspected
• Standards and guidelines
• The approved software element specification and inspection checklist
• Any inspection reporting forms
ENTRY CRITERIA:
Authorization: inspections are planned for and documented in project planning documents.
Initiating event: the software inspection process can be triggered by the following:
• Software element availability
• Project plan compliance
• SVVP schedule compliance
• A request from management
Review Process
Formal Review Types – Inspections
• Formal, systematic reviews of material; the primary purpose is fault finding and process improvement
• Led by an independent trained moderator (not the author)
• Attended by the author and the author's peers (usually 3 to 6) acting in defined roles
• Pre-meeting preparation is essential
• Follows a strict format:
– stated entry criteria and exit criteria
– seeking and recording defects
– standardised rules, checklists and techniques
– recorded metrics
• Formal follow-up process
• Optionally, process improvement considerations are part of the review
• Weaknesses: expensive and time-consuming, but a high defect yield!



Formal Review Process
• Planning
• Kick-off
• Review Overview (optional)
• Preparation
• Review Meeting
• Rework
• Follow-up
• Repeat Review (optional)

Formal Review Process – Planning
• Define entry and exit criteria (for most formal reviews)
• Ensure that the volume of material to be reviewed is appropriate
• Identify roles and participants, and establish a time and place for the review



Formal Review Process – Kick-off
• Distribute the material to the participants
• Explain the objectives, process and material to be reviewed
• Obtain copies of the relevant review and report templates
• Create checklists of areas to cover and distribute them
– checklists can make reviews more effective and efficient
– e.g. a checklist based on perspectives such as user, maintainer, tester or operations
– or a checklist of typical requirements problems to focus on
• Make sure the entry criteria have been/will be met



Formal Review Process – Review Overview (optional)
• Required for new or difficult material
• Overviews:
– educate the participants
– allow participants to focus on technical content
– describe where the material fits in the system and in the development process
– focus on any complex functionality
– highlight any changes and explain the need for these changes



Formal Review Process – Preparation
• Each participant reviews the material to:
– learn about the material
– note suspected defects
– record questions
• In some circumstances, depending on the expertise of the participants, the moderator may ask certain participants to concentrate on particular aspects of the material during preparation



Formal Review Process – Review Meeting
• The material is read to the participants by the reader
• Defects are raised by the participants and recorded by the recorder
• Participants may make decisions about categorising and even handling the defects, though they usually avoid ‘solutionising’
• Deliverables may include meeting minutes
• For inspections, pass/fail and repeat-review decisions are usually made by the moderator
• The preparation time and the actual meeting time may be recorded



Formal Review Process – Rework
• The author must resolve all defects found during the review by reworking the material as recommended by the review report
• Note: the cost of rework is NOT included in the cost of reviews



Formal Review Process – Follow-up
• Check the corrections to the material and account for all recorded defects
• If necessary, schedule a repeat review for the corrected material
• Inform management of the status of the corrected material
• Add the error data from the review to the project statistics database – this enables process improvement!
• Complete and sign the review report and forms (inspections)
• Ensure the exit criteria are met



Formal Review Process – Repeat Review (optional)
• If the material has been passed as-is, or if the rework is minor, no further reviews are required
• If a repeat review is required (e.g. if significant rework was needed), it must be scheduled with the same participants to verify the revised material



Review Process
Formal Review Roles and Responsibilities
Manager: decides on the execution of reviews, allocates time in project schedules and determines whether the review objectives have been met.
Moderator: the person who leads, plans and runs the review. May mediate between the various points of view, and is often the person upon whom the success of the review rests.
Author: the writer, or the person with chief responsibility for the document(s) to be reviewed.
Reviewers: individuals with a specific technical or business background (also called checkers or inspectors). They identify and describe findings (e.g. defects) and take part in any review meetings.
Scribe/Recorder: documents all the issues, problems and open points that were identified during the meeting.



WALKTHROUGH PROCESS
Walkthrough objectives:
• Evaluate a software element for defects, omissions and contradictions, and consider alternative implementations
• The author makes an overview presentation of the software element under review
Special responsibilities:
• Moderator: responsible for conducting a specific walkthrough
• Recorder: responsible for writing down all comments made during the walkthrough about defects, omissions, contradictions, suggestions or alternative approaches
• Author: responsible for the overview presentation of the software element
Walkthrough input:
• Statement of objectives for the walkthrough
• The software element under examination
• Standards
• Specification of the software element
EXIT CRITERIA
– The process is complete when the entire software element has been walked through in detail
– All deficiencies, omissions and suggestions for improvement have been noted
– The walkthrough report has been issued

WALKTHROUGH OUTPUT
The walkthrough report contains:
– The statement of objectives
– The list of noted omissions, deficiencies, contradictions and suggestions

ENTRY CRITERIA
Authorization: walkthroughs are planned for and documented in project planning documents.
Initiating event: a walkthrough is conducted when the author indicates that the software element is ready and a moderator has been appointed.

WALKTHROUGH PROCEDURE
Planning: identify the walkthrough team, schedule the meeting and distribute all necessary input material.
Overview: a presentation is made by the author.
Preparation: participants review the input material.
Examination:
– The author walks through the specific software element
– The walkthrough team asks questions regarding the software element
Review Process
Formal Review Types – Walkthroughs
• A walkthrough is a review of authored material led by the author and attended by a group of the author's peers (typically 2 to 6). The primary purpose is education
• The material is presented by the author to the peer group, who focus on learning about the material, improving it and recording defects
• The peer group should include development and operations representatives, the target audience, etc.
• Examples are dry runs or scenario playing to validate the product
• Sessions can be formal or informal
• Review sessions are often open-ended (not time-boxed)
• Pre-meeting preparation is often involved
• Weaknesses: walkthroughs do not find as many defects as technical reviews and inspections



Review Process
Formal Review Types – Technical Reviews
• May be performed as peer reviews without management participation
• Preferably led by a trained moderator (not the author)
• Pre-meeting preparation is required
• The primary purpose is to:
– discuss
– make decisions
– evaluate alternatives
– find defects
– solve technical problems and check conformance to specifications and standards
• The degree of formality varies
• Reviewers bring a list of technical issues to the review
• Optional use of checklists and a review report
• During the meeting, reviewers raise objections, ambiguities or inconsistencies in the design or technical aspects being discussed
• Problems are clarified and documented; solutions are sought after the review has concluded
• Weaknesses: technical reviews do not find as many faults as inspections



Benefits of Reviews
• Detect faults as they are introduced, i.e. early detection and correction
• Reduce the risk of error propagation
• Detect errors that dynamic test execution is unlikely to find, e.g. requirement specification errors
• Shorten development timescales
• Reduce fault levels in delivered software
• Lower costs and shorten testing timeframes
• Lower cost over the life of the software
• Create development productivity improvements
• Reliably evaluate progress and capability (1)
• Educate and train participants (1)
• Improve communication between project teams

(1) – The Complete Guide to Software Testing – Bill Hetzel



2. Structural Testing
1. Code Functional Testing
2. Code Coverage Testing
3. Code Complexity Testing

2. Structural Testing
• Referred to as white-box, glass-box or clear-box testing
• Requires knowledge of the code
• Structural testing is used at all levels:
– Integration testing: code coverage, loops executed
– Acceptance testing: menu options

2.2.1 Code Functional Testing
• Done before submitting code for more extensive phases like coverage testing and complexity testing
• Involves debugging-type activities that require knowledge of the code: loop iterations, conditions, inputs and the corresponding outputs
• Methods of code functional testing:
– We know certain obvious errors for some inputs, so at first we perform those tests repeatedly to gain confidence
– For more complex logic and conditions, add print statements in loops to validate test cases; after validation, remove the print statements
– We can use IDEs or debugging tools to resolve defects early in the product or module; these allow running statement by statement and adding breakpoints
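The second method above, temporary print statements inside a loop that are removed once the logic is validated, might look like this sketch. The `factorial` function and the DEBUG flag are illustrative only:

```python
DEBUG = True  # flip to False (or delete the prints) once the logic is validated

def factorial(n):
    result = 1
    for i in range(1, n + 1):
        result *= i
        if DEBUG:
            # temporary check of each loop iteration
            print(f"iteration {i}: result = {result}")
    return result

print("factorial(4) =", factorial(4))
```

A debugger with a breakpoint inside the loop serves the same purpose without editing the code.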
2.2.2 Code Coverage Testing
• Program statement and line coverage
• Branch coverage
• Condition coverage

2.2.2 Code Coverage Testing
• Inspects the code directly
• Collects information from the running program
• Code coverage is part of a feedback loop covering aspects of the code
• Executing each line of code, following every logic and decision path through the software product, and examining these details is called code-coverage analysis
• The analysis is the process of discovering code within a program that is not being exercised by test cases
2.2.2 Code Coverage Testing Contd ..
a) Statement Coverage and Line Coverage
The aim is to show that all executable statements have been run at least once.
The coverage measurement is: statement coverage = (number of executed statements / total number of statements) × 100%.
The tester concentrates on the internal workings of the source code, using control-flow graphs or flow charts.
Generally, if we look at the source code of any software, there will be a wide variety of elements: operators, functions, loops, exception handlers, etc. Depending on the input to the program, some of the code statements may not be executed. The goal of statement coverage is to cover all possible paths, lines and statements in the code.
Example:

PrintSum(int a, int b) {        // PrintSum is a function
    int result = a + b;
    if (result > 0)
        Print("Positive", result);
    else
        Print("Negative", result);
}                               // end of the source code

Scenario 1: if A = 3, B = 9
The positive branch is executed.
Number of executed statements = 5; total number of statements = 7.
Statement coverage: 5/7 = 71%

Scenario 2: if A = -3, B = -9
The negative branch is executed.
Number of executed statements = 6; total number of statements = 7.
Statement coverage: 6/7 ≈ 86%

Overall, all the statements are covered once both scenarios are considered, so we can conclude that the overall statement coverage is 100%.
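One way to see this measurement concretely is to trace which lines actually execute. The sketch below uses Python's sys.settrace to record executed lines per input; the function mirrors the PrintSum example, and the helper name is invented for this illustration:

```python
import sys

def print_sum(a, b):
    result = a + b
    if result > 0:
        return ("Positive", result)
    else:
        return ("Negative", result)

def executed_lines(func, *args):
    """Record which source lines of func run for the given inputs."""
    lines, code = set(), func.__code__
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is code:
            lines.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return lines

scenario1 = executed_lines(print_sum, 3, 9)    # positive branch only
scenario2 = executed_lines(print_sum, -3, -9)  # negative branch only
both = scenario1 | scenario2                   # together they cover every executable line
```

Real tools such as coverage.py automate exactly this bookkeeping across a whole test suite.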
White-Box Test Techniques
Statement Testing – Example 1
1. Read vehicle
2. Read colour
3. If vehicle = ‘Car’ Then
4.   If colour = ‘Red’ Then
5.     Print “Fast”
6.   End If
7. End If
[Control-flow graph: nodes 1–7 in sequence, with the two If statements branching past their bodies.]
White-Box Test Techniques
Statement Testing – Example 2
1. Read A
2. If A > 40 Then
3.   A = A * 2
4. End If
5. If A > 100 Then
6.   A =
[Control-flow graph: nodes 1–7, with each If branching past its body.]
White-Box Test Techniques
Statement Testing – Example 3
1. Read bread
2. Read filling
3. If bread = ‘Roll’ Then
4.   If filling = ‘Tuna’ Then
5.     Price = 1.50
6.   Else
7.     Price = 1.00
8.   End If
9. Else
10.   Price = 0.75
11. End If
[Control-flow graph: nodes 1–11, with branches at statements 3 and 4.]
b) BRANCH COVERAGE
• In branch coverage, every outcome from a code module is tested. For example,
if the outcomes are binary, you need to test both the True and the False
outcome.
• Using the branch coverage method, you can also measure the fraction of
independent code segments, and find out which sections of code don't have any
branches.
• The formula to calculate branch coverage:
  Branch Coverage = (number of executed branches / total branches) x 100
• To learn branch coverage, let's consider the same style of example used
earlier. Consider the following code:
Demo(int a) {
    if (a > 5)
        a = a * 3;
    print(a);
}
Branch coverage considers unconditional branches as well:

Test Case | Value of a | Output | Decision Coverage | Branch Coverage
    1     |     2      |   2    |        50%        |       33%
    2     |     6      |   18   |        50%        |       67%
Branch coverage testing offers the following advantages:
• Allows you to validate all the branches in the code
• Helps you ensure that no branch leads to abnormal behaviour of the program
• Removes issues which can happen with statement coverage testing
• Allows you to find areas which are not tested by other testing methods
• Gives you a quantitative measure of code coverage
Limitation: branch coverage ignores branches inside Boolean expressions.
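A hedged Python sketch of the Demo example above: branch coverage records each decision outcome (True and False) rather than just executed statements. The outcome-tracking set is an illustrative addition, not part of the original code:

```python
# Set of (decision, outcome) pairs observed while running the code under test.
branch_outcomes = set()

def demo(a):
    # mirrors the Demo pseudocode; the bookkeeping lines are for illustration
    if a > 5:
        branch_outcomes.add(("a > 5", True))
        a = a * 3
    else:
        branch_outcomes.add(("a > 5", False))
    return a

def branch_coverage(total_outcomes):
    return 100 * len(branch_outcomes) / total_outcomes

demo(2)                      # exercises only the False outcome
half = branch_coverage(2)    # 50%
demo(6)                      # now the True outcome is also taken
full = branch_coverage(2)    # 100%
```

Calling demo(2) alone leaves the True outcome untested; adding demo(6) brings branch coverage to 100%.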
c) Condition Coverage
• In this coverage, only expressions with logical operands are considered.
• For example, if an expression uses Boolean operators such as AND, OR or XOR,
the number of operands indicates the total possible combinations.
• The formula to calculate condition coverage:
  Condition Coverage = (number of tested operand combinations / total
  combinations) x 100
Example: for an expression with two operands, such as (A AND B), we have
4 possible combinations:
• TT, TF
• FT, FF
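The four combinations above can be enumerated mechanically. A small Python sketch (the helper names are ours, not from the slides):

```python
from itertools import product

def combinations(n_operands):
    # all True/False assignments for n Boolean operands
    return list(product([True, False], repeat=n_operands))

def condition_coverage(tested, n_operands):
    # percentage of operand combinations that have been exercised
    return 100 * len(set(tested)) / len(combinations(n_operands))

combos = combinations(2)   # the TT, TF, FT, FF cases for (A AND B)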
2.2.3 Code Complexity Testing
• System design and system coding are verified
• Complexity testing performs verification through reviews, inspections or
walkthroughs
• Why it matters:
• Complex programs are difficult to maintain in future; a complex loop, when
executed, may introduce new and undefined defects
• Complex code or complex designs are more subject to failure
• Complexity leads to complex testing, and a complex decision may give a
wrong output
• Such complexity must be balanced by optimization and simplification by the
designer, which must then be implemented by the developers
McCabe's Cyclomatic Complexity:
• Cyclomatic complexity is a software metric used to measure the complexity of
a program.
• It can be calculated with respect to the functions, modules, methods or
classes within a program.
• This metric was developed by Thomas J. McCabe in 1976 and is based on a
control flow representation of the program.
• In the graph, nodes represent processing tasks while edges represent control
flow between the nodes.
Flow graph notation for a program:
A flow graph is defined as several nodes connected through edges. Flow
diagrams exist for constructs such as if-else, while, until and the normal
sequence of statements.
How to Calculate Cyclomatic Complexity
Mathematically, it is the size of a set of independent paths through the graph
diagram. The code complexity of the program can be defined using the formula:

V(G) = E - N + 2

where E is the number of edges and N is the number of nodes, or equivalently

V(G) = P + 1

where P is the number of predicate nodes (nodes that contain a condition).

Example: for a graph with E = 11, N = 10 and P = 2:
V(G) = 11 - 10 + 2 = 3
V(G) = 2 + 1 = 3
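The two formulas can be checked against the worked example (E = 11, N = 10, P = 2); a minimal Python sketch:

```python
def cyclomatic_from_graph(edges, nodes):
    # V(G) = E - N + 2
    return edges - nodes + 2

def cyclomatic_from_predicates(predicates):
    # V(G) = P + 1
    return predicates + 1

v1 = cyclomatic_from_graph(11, 10)   # 3
v2 = cyclomatic_from_predicates(2)   # 3
```

Both formulas agree, as they must for a well-formed control flow graph.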
Properties of Cyclomatic complexity:
• V(G) is the maximum number of independent paths in the graph
• V(G) >= 1
• G will have only one path if V(G) = 1
• Complexity should ideally be minimized to 10 or below
2.3 BLACK BOX TESTING
BLACK BOX TESTING, also known as Behavioral Testing, is a
software testing method in which the internal structure/design/implementation of the
item being tested is not known to the tester. These tests can be functional or non-
functional, though usually functional.
BLACK BOX TESTING Contd..
This method attempts to find errors in the following categories:
• Incorrect or missing functions
• Interface errors
• Errors in data structures or external database access
• Behavior or performance errors
• Initialization and termination errors
BLACK BOX TESTING Contd..
Advantages
• Tests are done from a user's point of view and will help in exposing
discrepancies in the specifications.
• Testers need not know programming languages or how the software has been
implemented.
• Tests can be conducted by a body independent from the developers, allowing
for an objective perspective and the avoidance of developer bias.
• Test cases can be designed as soon as the specifications are complete.
BLACK BOX TESTING Contd..
Disadvantages
• Only a small number of possible inputs can be tested, and many program paths
will be left untested.
• Without clear specifications, which is the situation in many projects, test
cases are difficult to design.
• Tests can be redundant if the software designer/developer has already run a
test case.
BLACK BOX TESTING Contd..
Types of Black Box Testing
1. Requirement Based Testing
2. Positive and Negative Testing
3. Boundary Value Analysis
4. Decision Tables
5. Equivalence Partitioning
6. User Documentation Testing
7. Graph Based Testing
Requirement Based Testing
Testing in which test cases, conditions and data are derived from the
requirements. It includes functional tests and also non-functional attributes.
Stages in Requirements based Testing:
• Defining Test Completion Criteria
• Designing Test Cases
• Executing Tests
• Verifying Test Results
• Verifying Test Coverage
• Tracking and Managing Defects
Requirement Based Testing
• Testing must be carried out in a timely manner.
• The testing process should add value to the software life cycle, hence it
needs to be effective.
• Testing the system exhaustively is impossible, hence the testing process
needs to be efficient as well.
• Testing must provide the overall status of the project, hence it should be
manageable.
Positive and Negative Testing
Testing of an application can be carried out in two different ways: positive
testing and negative testing.
Positive Testing
• The application is validated against valid input data.
• Such testing is carried out from a positive point of view, executing only
the positive scenarios.
• It tests the normal day-to-day scenarios and checks the expected behavior of
the application.
Positive and Negative Testing
Negative Testing
• The application is validated against invalid input data.
• The main intention of this testing is to check that the software application
does not show an error when it is not supposed to, and shows an error when it
is supposed to.
• It checks the stability of the software application against a variety of
incorrect input data sets.
Negative Testing Contd..
Negative testing helps to improve the test coverage of your software
application under test. Both positive and negative testing approaches are
equally important for making your application more reliable and stable.
Let's take another example of positive and negative testing scenarios:
suppose the requirement says that a password text field should accept 6-20
characters, and only alphanumeric characters.
Positive Test Scenarios:
• Password textbox should accept 6 characters
• Password textbox should accept up to 20 characters
• Password textbox should accept any value between 6 and 20 characters long
• Password textbox should accept all numeric and alphabetic values
Negative Test Scenarios:
• Password textbox should not accept fewer than 6 characters
• Password textbox should not accept more than 20 characters
• Password textbox should not accept special characters
Black Box Test Techniques
Boundary Value Analysis
• Boundary Value Analysis (BVA) uses the same analysis of partitions as EP and
is usually used in combination with EP in test case design
• As with EP, it can be used at all test levels
• BVA operates on the basis that experience shows errors are most likely to
exist at the boundaries between partitions, and in doing so it incorporates a
degree of negative testing into the test design
• BVA test cases are designed to exercise the software on, and at either side
of, boundary values
Black Box Test Techniques - Boundary Value Analysis

For the valid range 1-100, the boundary test values are:
Value = 0 (invalid), Value = 1 (valid), Value = 2 (valid)
Value = 99 (valid), Value = 100 (valid), Value = 101 (invalid)

Black Box Test Techniques - Boundary Value Analysis
• Find the boundary, then test one value above and one below it
• This always results in two test cases per boundary for valid inputs and
three test cases per boundary for all inputs
• Inputs should use the smallest significant values for the boundary (e.g. the
boundary of 'a > 10.0' should result in test values of 10.0, 10.1 and 10.2)
• Only applicable to numeric (and date) fields
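The "on and either side of each boundary" rule can be expressed as a small helper; a sketch (the function name and step handling are our assumptions):

```python
def boundary_values(low, high, step=1):
    # values on each boundary of a closed range, plus one step either side
    return sorted({low - step, low, low + step,
                   high - step, high, high + step})

values = boundary_values(1, 100)   # [0, 1, 2, 99, 100, 101]
```

For the 1-100 range this reproduces exactly the six test values shown above; a date or decimal field would use a different step (e.g. 0.1 or one day).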

Black Box Test Techniques
Decision Table Testing
• A table-based technique where:
  - inputs to the system are recorded
  - outputs from the system are defined
• Inputs are usually defined in terms of conditions which are Boolean (true or
false)
• Outputs are recorded against each unique combination of inputs
• Using the decision table, the relationships between the inputs and the
possible outputs are mapped together
• As with state transition testing, it is an excellent tool to capture certain
types of system requirements and to document internal system design, and as
such can be used at a number of test levels
• Especially useful for complex business rules
Black Box Test Techniques - Decision Table Structure

Inputs / Actions         Test 1      Test 2      Test 3
Input 1                    T           T           F
Input 2                    T           T           F
Input 3                    T       DON'T CARE      F
Input 4                    T           F           T
Output /   Response 1      Y           Y           N
Response   Response 2      Y           N           Y
           Response 3      N           Y           N
Each column of the table corresponds to a business rule that defines a unique
combination of conditions that result in the execution of the actions
associated with that rule.

The strength of decision table testing is that it creates combinations of
conditions that might not otherwise have been exercised during testing.
Black Box Test Techniques - Decision Table Example

Conditions                     Test 1   Test 2   Test 3
> 55 yrs old                     F        T        T
Smoker                           F        T        F
Exercises 3 times a week +       T        F        F
History of Heart Attacks         F        T        F
Actions
Insure                           Y        N        Y
Offer 10% Discount               N        N        N
Offer 30% Discount               Y        N        N

What will be the outcome of the following scenarios?
• Joe is a 22 year old non-smoker who goes to the gym 4 times a week and has
no history of heart attacks in his family
• Kevin is a 62 year old non-smoker who swims twice a week and plays tennis.
He has no history of heart attacks in his family
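The insurance decision table above can be held as data and looked up per rule. A Python sketch; note the exact T/F entries are reconstructed from a partly garbled slide, so treat them as illustrative:

```python
# Each rule is one column of the decision table: conditions -> actions.
RULES = [
    {"over_55": False, "smoker": False, "exercises": True,  "history": False,
     "insure": True,  "discount": 30},
    {"over_55": True,  "smoker": True,  "exercises": False, "history": True,
     "insure": False, "discount": 0},
    {"over_55": True,  "smoker": False, "exercises": False, "history": False,
     "insure": True,  "discount": 0},
]

def decide(over_55, smoker, exercises, history):
    key = (over_55, smoker, exercises, history)
    for rule in RULES:
        cond = (rule["over_55"], rule["smoker"],
                rule["exercises"], rule["history"])
        if cond == key:
            return rule["insure"], rule["discount"]
    return None  # combination not covered by a rule in the table

# Joe: 22, non-smoker, gym 4x/week, no history -> matches the first rule
joe = decide(False, False, True, False)
```

A None result flags a condition combination the table never exercised, which is itself useful test-design feedback.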
Black Box Test Techniques
Equivalence Partitioning
• The aim is to treat groups of inputs as equivalent and to select one
representative input to test them all
• Best shown in the following example:
  - If we wanted to test the following IF statement:
    'IF VALUE is between 1 and 100 (inclusive) (e.g. VALUE >= 1 and
    VALUE <= 100) THEN ...'
  - We could put a range of numbers through test cases, as shown next
Black Box Test Techniques - Equivalence Partitioning

IF Value >= 1 AND Value <= 100 THEN ...

Out of range: -1, 0
In range: 1, 19, 37, 48, 53, 65, 87, 99, 100
Out of range: 101
Black Box Test Techniques - Equivalence Partitioning
• The numbers fall into partitions where each would have the same, or
equivalent, result, i.e. an Equivalence Partition (EP) or Equivalence Class
• EP says that by testing just one value we have tested the partition
(typically a mid-point value is used). It assumes that:
  1) if one value finds a bug, the others probably will too
  2) if one doesn't find a bug, the others probably won't either
Black Box Test Techniques - Equivalence Partitioning
• In EP we must identify valid equivalence partitions and invalid equivalence
partitions where applicable (typically in range tests)
• The valid partition is bounded by the values 1 and 100
• In addition, there are 2 invalid partitions
Black Box Test Techniques - Equivalence Partitioning

IF Value >= 1 AND Value <= 100 THEN ...

'Invalid' partition: -1, 0
'Valid' partition: 1, 19, 37, 48, 53, 65, 87, 99, 100
'Invalid' partition: 101, 1000
Black Box Test Techniques - Equivalence Partitioning
• Time would be wasted by specifying test cases that covered a range of values
within each of the three partitions, unless the code was designed in an
unusual way
• There are more effective techniques that can be used to find bugs in such
circumstances (such as code inspection)
• EP can help reduce the number of tests from a list of all possible inputs to
a minimum set that would still test each partition
Black Box Test Techniques - Equivalence Partitioning
• If the tester chooses the right partitions, the testing will be accurate and
efficient
• If the tester mistakenly thinks of two partitions as equivalent when they
are not, a test situation will be missed
• On the other hand, if the tester thinks two objects are different when they
are not, the tests will be redundant
• EP can be used at all levels of testing
• EP is used to achieve good input and output coverage, knowing exhaustive
testing is often impossible
• It can be applied to human input, input via interfaces to a system, or
interface parameters in integration testing
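An EP sketch for the 1-100 range discussed above: classify any input into its partition and keep one representative per partition (the names and representative values are ours):

```python
def partition(value):
    # classify an input into one of the three partitions for 1 <= value <= 100
    if value < 1:
        return "invalid_low"
    if value > 100:
        return "invalid_high"
    return "valid"

# one representative value per partition; a mid-point is typical for "valid"
REPRESENTATIVES = {"invalid_low": -1, "valid": 50, "invalid_high": 1000}

test_inputs = [REPRESENTATIVES[name] for name in
               ("invalid_low", "valid", "invalid_high")]
```

Three representative inputs replace the entire integer range while still touching every partition, which is exactly the reduction EP promises.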
User Documentation Testing

Covers all the manuals, user guides, installation guide, setup guide, readme
files, software release notes and online help.

Objectives:
1. To check that what is stated in the document is available in the software
2. To check that what is available in the product is explained in the document
3. Documentation testing can save time, effort and money
4. Documentation testing can start at the very beginning of the software
process, giving the project a high level of maturity
Graph Based Testing
It is also known as State Based Testing.
• State Transition Testing uses the following terms:
– state diagram: A diagram that depicts the states that a component or system can assume,
and shows the events or circumstances that cause and/or result from a change from one state
to another. [IEEE 610]
– state table: A grid showing the resulting transitions for each state combined with each
possible event, showing both valid and invalid transitions.
– state transition: A transition between two states of a component or system.
– state transition testing: A black box test design technique in which test cases are designed
to execute valid and invalid state transitions. Also known as N-switch testing.
• An excellent tool to capture certain types of system requirements and to document internal system
design. As such can be used for a number of test levels
• Often used in testing:
  - Screen dialogues
  - Web site transitions
Black Box Test Techniques - State Transition Diagram

A state transition diagram shows:
• State A - the starting state
• An event/action pair - the transition between the start and end states
• The event arrives from outside the system
• State B - the end state, with the action triggered by the event
Black Box Test Techniques - State Transition Example: Simplified Car Gears

States: Reverse, Neutral, 1st Gear, 2nd Gear, 3rd Gear
Transitions (event/action):
• Neutral -> Reverse: Change Down / Move Back
• Reverse -> Neutral: Change Up / Accelerate
• Neutral -> 1st: Change Up / Accelerate; 1st -> Neutral: Change Down / Decelerate
• 1st -> 2nd: Change Up / Accelerate; 2nd -> 1st: Change Down / Decelerate
• 2nd -> 3rd: Change Up / Accelerate; 3rd -> 2nd: Change Down / Decelerate
Black Box Test Techniques - State Transition: Switch Coverage
• Switch coverage is a method of determining the number of tests based on the
number of "hops" between transitions. Sometimes known as Chow's coverage
measure.

0-Switch Coverage (1 hop): R-N, N-R, N-1, 1-N, 1-2, 2-1, 2-3, 3-2
1-Switch Coverage (2 hops): R-N-1, R-N-R, N-R-N, N-1-2, N-1-N, 1-N-R, etc.

These hops can be used to determine the VALID tests.
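The 0-switch and 1-switch sequences above can be derived mechanically from the valid transitions. A Python sketch (gears abbreviated R, N, 1, 2, 3; the helper name is ours):

```python
# valid gear transitions from the simplified car-gears example
TRANSITIONS = {"R": ["N"], "N": ["R", "1"], "1": ["N", "2"],
               "2": ["1", "3"], "3": ["2"]}

def n_switch(n):
    # n-switch coverage needs sequences of n+1 consecutive transitions
    seqs = [[s, t] for s, targets in TRANSITIONS.items() for t in targets]
    for _ in range(n):
        seqs = [seq + [t] for seq in seqs for t in TRANSITIONS[seq[-1]]]
    return seqs

zero = n_switch(0)   # single transitions, e.g. ['R', 'N']
one = n_switch(1)    # transition pairs, e.g. ['R', 'N', '1']
```

For this model there are 8 single transitions and 14 two-hop sequences, so 1-switch coverage demands noticeably more tests than 0-switch coverage.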
Black Box Test Techniques - State Transition: State Table

While switch testing helps determine the valid tests, we also need to look for
the invalid tests.

State   Change Up         Change Down
R       Acc / Neutral     Null
N       Acc / 1st Gear    Dec / Reverse
1st     Acc / 2nd Gear    Dec / Neutral
2nd     Acc / 3rd Gear    Dec / 1st Gear
3rd     Null              Dec / 2nd Gear

Invalid tests are those identified by a null output, in this case:
• Changing down from Reverse
• Changing up from 3rd
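The gear state table maps naturally onto a dictionary, with None marking the null (invalid) transitions that negative tests target. A Python sketch:

```python
# state -> event -> (next state, action); None marks a null transition
TABLE = {
    "R":   {"up": ("N", "Acc"),   "down": None},
    "N":   {"up": ("1st", "Acc"), "down": ("R", "Dec")},
    "1st": {"up": ("2nd", "Acc"), "down": ("N", "Dec")},
    "2nd": {"up": ("3rd", "Acc"), "down": ("1st", "Dec")},
    "3rd": {"up": None,           "down": ("2nd", "Dec")},
}

def change(state, event):
    result = TABLE[state][event]
    if result is None:
        # the null cells are exactly what invalid-transition tests exercise
        raise ValueError(f"invalid transition: {event} from {state}")
    return result
```

Valid tests assert the returned (state, action) pairs; invalid tests assert that changing down from Reverse or up from 3rd is rejected.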
Black Box Test Techniques - State Transition: Another Example, a Theatre Show
Reservation

States: Show Options provided, Show selected, Show Reservation Made, Show
Reservation Paid For, Ticket Received, Reservation Cancelled
Transitions (event/action):
• Show Options provided -> Show selected: Request Show Options / Choose Show
• Show selected -> Show Reservation Made: Reserve Show
• Show Reservation Made -> Show Options provided: Change Mind / Cancel, Return
to reservation Options
• Show Reservation Made -> Show Reservation Paid For: Pay for Show
• Show Reservation Paid For -> Ticket Received: Issue Ticket
• Show Reservation Made -> Reservation Cancelled: Cancel reservation
• Show Reservation Paid For -> Reservation Cancelled: Cancel reservation /
Issue Refund
• Ticket Received -> Reservation Cancelled: Cancel reservation (return
ticket) / Issue Refund
Chapter 3
Levels of Testing and Special Tests

3.1 Unit Testing
UNIT Testing is defined as a type of software testing where individual units/
components of a software are tested.

Why Unit Testing?
1. Unit tests fix bugs early in the development cycle and save costs.
2. They help developers understand the code base and enable them to make
changes quickly.
3. Good unit tests serve as project documentation.
4. Unit tests help with code re-use: migrate both your code and your tests to
your new project.
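A minimal unit test in Python's built-in unittest style (the add function is a stand-in unit, not from the slides; the in-memory stream just keeps the runner quiet):

```python
import io
import unittest

def add(a, b):
    # the unit under test
    return a + b

class AddTests(unittest.TestCase):
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative(self):
        self.assertEqual(add(-2, -3), -5)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddTests)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```

Each test method checks one behaviour of the unit in isolation, which is what makes failures easy to localize later.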
Why Unit Testing? Contd..
Suppose you have two units and you do not want to test them individually but
as an integrated system, to save time. Once the system is integrated and you
find an error, it becomes difficult to tell which unit the error occurred in,
so unit testing is mandatory before integrating the units.
When a developer is coding the software, it may happen that the dependent
modules are not complete. In such cases developers use stubs and drivers to
simulate the called (stub) and calling (driver) units.
STUBS:
Assume you have 3 modules: Module A, Module B and Module C. Module A is ready
and we need to test it, but Module A calls functions from Modules B and C,
which are not ready. The developer writes a dummy module which simulates B and
C and returns values to Module A. This dummy module code is known as a stub.
DRIVERS:
Now suppose you have Modules B and C ready, but Module A, which calls
functions from Modules B and C, is not ready. The developer writes a dummy
piece of code for Module A which calls Modules B and C. This dummy piece of
code is known as a driver.
Drivers (example):
Suppose you are testing software that requires a large amount of data entry
for the execution of test cases. With a few hardware modifications and
software tools, you could replace the keyboard or mouse of the system being
tested with an additional computer acting as a test driver. You could write a
sample program which automatically generates keystrokes and mouse movements.
Stubs (example):
Stubs are the opposite of drivers: they don't drive the software being
tested; instead they receive or respond to data that the software sends.
For example: you are testing software that sends data to a printer; a stub
can stand in for the printer and record what it receives.
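The Module A/B/C scenario above can be sketched in Python: stubs return canned values for the unfinished modules, and a driver stands in for the not-yet-written caller (all module and function names here are hypothetical):

```python
def stub_b_compute():
    # canned value standing in for the unfinished Module B
    return 10

def stub_c_compute():
    # canned value standing in for the unfinished Module C
    return 5

def module_a(b_compute, c_compute):
    # Module A, the unit under test; its collaborators are injected so
    # that stubs can replace the real modules during unit testing
    return b_compute() + c_compute()

def driver():
    # a driver exercises the unit in place of the real (absent) caller
    return module_a(stub_b_compute, stub_c_compute)
```

When the real B and C arrive, the same module_a is called with the real functions and the stubs are discarded.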
3.2 Integration Testing
• Integration Testing - testing performed to expose defects in the interfaces
and in the interactions between integrated components or systems
• Components may be code modules, operating systems, hardware and even
complete systems
What is Integration Testing?
Integration Testing is defined as a type of testing where software modules are
integrated logically and tested as a group.
A typical software project consists of multiple software modules, coded by
different programmers. Integration testing focuses on checking data
communication amongst these modules.
Why do Integration Testing?
Although each software module is unit tested, defects still exist for various
reasons:
• A module, in general, is designed by an individual software developer whose
understanding and programming logic may differ from other programmers'.
Integration testing becomes necessary to verify the software modules work in
unity.
• At the time of module development, there are wide chances of changes in
requirements by the clients. These new requirements may not be unit tested,
and hence system integration testing becomes necessary.
• Interfaces of the software modules with the database could be erroneous.
• External hardware interfaces, if any, could be erroneous.
• Inadequate exception handling could cause issues.
How to do Integration Testing?
The integration test procedure, irrespective of the software testing strategy:
1. Prepare the integration test plan.
2. Design the test scenarios, cases, and scripts.
3. Execute the test cases, followed by reporting the defects.
4. Track and re-test the defects.
5. Steps 3 and 4 are repeated until the completion of integration is
successful.
Component Integration Testing - Top-down testing

Component under test: P, at the top of the hierarchy.
P calls Q and R; Q calls S and T; R calls U and V.
The lower-level components S, T, U and V are replaced by stubs.

Top Down Integration Testing
Advantages:
• Fault localization is easier.
• Possibility to obtain an early prototype.
• Critical modules are tested on priority.
• Major design flaws can be found and fixed first.
Disadvantages:
• Needs many stubs.
• Modules at a lower level are tested inadequately.
Component Integration Testing - Bottom-up testing

Components under test: the lowest-level modules S, T, U and V.
P is the driver for components Q and R; likewise Q and R act as drivers for
the components below them (S, T, U and V).
Bottom-up Integration
Advantages:
• Fault localization is easier.
• No time is wasted waiting for all modules to be developed, unlike the
Big Bang approach.
Disadvantages:
• Critical modules (at the top level of the software architecture) which
control the flow of the application are tested last and may be prone to
defects.
• An early prototype is not possible.
Bi-directional integration
Bidirectional integration is also called sandwich integration testing.
The sandwich/hybrid strategy is a combination of the top-down and bottom-up
approaches. Here, top modules are tested with lower modules at the same time
as lower modules are integrated with top modules and tested. This strategy
makes use of stubs as well as drivers.
Advantages of Sandwich Testing
• The sandwich approach is useful for very large projects having several
subprojects. When development follows a spiral model and the module itself is
as large as a system, one can use sandwich testing.
• Both top-down and bottom-up approaches start at the same time as per the
development schedule. Units are tested and brought together to make a system.
Integration is done downwards.
• It needs more resources and big teams that perform both bottom-up and
top-down methods of testing at a time or one after the other.
Disadvantages of Sandwich Testing
• It requires a very high cost for testing, because one part uses the top-down
approach while another part uses the bottom-up approach.
• It cannot be used for smaller systems with huge interdependence between
modules. It makes sense when each individual subsystem is as good as a
complete system.
• Different skill sets are required for testers at different levels, as the
modules are separate systems handling separate domains, like ERP products
with modules representing different functional areas.
Incremental Integration
• Each module, i.e. M1, M2, M3, etc., is tested individually as part of unit
testing
• Modules are combined incrementally, i.e. one by one, and tested for
successful interaction
• First, Module M1 and Module M2 are combined and tested
• Then Module M3 is added and tested
• Then Module M4 is added, and testing is done to make sure everything works
together successfully
• The rest of the modules are also added incrementally at each step and tested
for successful integration
Objective of Incremental Testing
• To ensure that different modules work together successfully after
integration
• To identify defects earlier, in each phase. This gives developers an edge in
locating the problem: if testing after M1 and M2 are integrated is successful,
but the test fails when M3 is added, this helps the developer isolate the
issue to M3's integration
• Issues can be fixed in an early phase without much rework and at less cost
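The incremental steps above can be sketched with three toy modules, integrated one at a time so that a failure localizes to the step that introduced it (the module behaviour is invented for illustration):

```python
def m1():
    # Module M1: produces some data
    return "data"

def m2(x):
    # Module M2: transforms M1's output
    return x.upper()

def m3(x):
    # Module M3: consumes M2's output
    return x + "!"

# Step 1: integrate M1 + M2 and check the interaction
step1 = m2(m1())

# Step 2: add M3 on top of the already-verified M1 + M2 pair
step2 = m3(m2(m1()))
```

If step 1 passes but step 2 fails, the defect almost certainly lies in M3 or in the M2-to-M3 interface, which is exactly the localization benefit described above.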
Non-Incremental Integration
We go for this testing whenever we don't have a clear relationship between the
modules. This type of testing is also known as the Big Bang method: we create
the data in one module and "bang" it against all the other modules to check
the data flow.

Big Bang integration testing is an integration testing strategy wherein all
units are linked at once, resulting in a complete system. When this strategy
is adopted, it is difficult to isolate any errors found, because attention is
not paid to verifying the interfaces across individual units.

Disadvantages of Big-Bang Testing
• Defects present at the interfaces of components are identified at very late stage as all components
are integrated in one shot.
• It is very difficult to isolate the defects found.
• There is high probability of missing some critical defects, which might pop up in the production
environment.
• It is very difficult to cover all the cases for integration testing without missing even a single
scenario.
3.3 System Testing
System Testing is the testing of a complete and fully integrated software product.
Usually, software is only one element of a larger computer-based system.
Ultimately, software is interfaced with other software/hardware systems. System
Testing is actually a series of different tests whose sole purpose is to exercise the
full computer-based system.
When is it performed?
• System Testing is the third level of software testing, performed after
Integration Testing and before Acceptance Testing.
Who performs it?
• Normally, independent testers perform System Testing.
Recovery Testing
Recovery testing is a non-functional testing technique performed in order to
determine how quickly the system can recover after it has gone through a
system crash or hardware failure. Recovery testing is the forced failure of
the software to verify that the recovery is successful.
Recovery Plan - Steps:
•Determining the feasibility of the recovery process.
•Verification of the backup facilities.
•Ensuring proper steps are documented to verify the compatibility of backup facilities.
•Providing Training within the team.
•Demonstrating the ability of the organization to recover from all critical failures.
•Maintaining and updating the recovery plan at regular intervals.
Security Testing
• Ensures software systems and applications are free from any vulnerabilities,
threats or risks that may cause a big loss.
• Involves finding all possible loopholes and weaknesses of the system which
might result in a loss of information, revenue or repute at the hands of
employees or outsiders of the organization.
• The goal of security testing is to identify the threats in the system and
measure its potential vulnerabilities, so the system does not stop
functioning and cannot be exploited.
Types of Security Testing:
• Vulnerability Scanning: This is done through automated software to scan a
system against known vulnerability signatures.
• Security Scanning: It involves identifying network and system weaknesses,
and later provides solutions for reducing these risks. This scanning can be performed
for both Manual and Automated scanning.
• Penetration testing: This kind of testing simulates an attack from a malicious
hacker. This testing involves analysis of a particular system to check for potential
vulnerabilities to an external hacking attempt.
• Risk Assessment: This testing involves analysis of security risks observed in the
organization. Risks are classified as Low, Medium and High. This testing recommends
controls and measures to reduce the risk.
• Security Auditing: This is an internal inspection of Applications and Operating
systems for security flaws. An audit can also be done via line by line inspection of
code
• Ethical hacking: hacking an organization's software systems. Unlike
malicious hackers, who steal for their own gains, the intent is to expose
security flaws in the system.
• Posture Assessment: This combines Security scanning, Ethical Hacking and Risk
Assessments to show an overall security posture of an organization.
Example Test Scenarios for Security Testing:
Sample Test scenarios to give you a glimpse of security test cases -
•A password should be in encrypted format
•Application or System should not allow invalid users
•Check cookies and session time for application
•For financial sites, the Browser back button should not work.
Performance Testing
• Performance Testing is defined as a type of software testing to ensure
software applications will perform well under their expected workload.
• The goal of performance testing is not to find bugs but to eliminate
performance bottlenecks.
• The focus of performance testing is checking a software program's:
• Speed - determines whether the application responds quickly.
• Scalability - determines the maximum user load the software application can
handle.
• Stability - determines if the application is stable under varying loads.
Types of Performance Testing
•Load testing - checks the application's ability to perform under anticipated user loads.
The objective is to identify performance bottlenecks before the software application goes
live.
•Stress testing - involves testing an application under extreme workloads to see how it
handles high traffic or data processing. The objective is to identify the breaking point of
an application.
•Endurance testing - is done to make sure the software can handle the expected load
over a long period of time.
•Spike testing - tests the software's reaction to sudden large spikes in the load
generated by users.
•Volume testing - Under volume testing, a large amount of data is populated in
a database and the overall software system's behavior is monitored. The
objective is to check the software application's performance under varying
database volumes.
•Scalability testing - The objective of scalability testing is to determine the software
application's effectiveness in "scaling up" to support an increase in user load. It helps
plan capacity addition to your software system.
Example Performance Test Cases
• Verify response time is not more than 4 secs when 1000 users access the website
simultaneously.
• Verify response time of the Application Under Load is within an acceptable range when
the network connectivity is slow
• Check the maximum number of users that the application can handle before it crashes.
• Check database execution time when 500 records are read/written simultaneously.
• Check CPU and memory usage of the application and the database server under peak
load conditions
• Verify response time of the application under low, normal, moderate and heavy load
conditions.
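Test cases like these are usually automated with load tools, but the core idea — fire concurrent requests and check response times against a threshold — can be sketched in plain Python. The handle_request function below is a hypothetical stand-in for a real HTTP call, and all numbers are illustrative:

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Stand-in for one user's HTTP request; replace with a real call."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server work
    return time.perf_counter() - start

def run_load_test(num_users=50):
    """Fire num_users concurrent 'requests' and collect response times."""
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        times = sorted(pool.map(handle_request, range(num_users)))
    return {
        "max": times[-1],                         # worst response time
        "avg": statistics.mean(times),            # average response time
        "p95": times[int(0.95 * len(times)) - 1], # 95th percentile
    }
```

A pass/fail check mirroring the first test case above would then be `assert run_load_test(1000)["max"] < 4.0`.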
Load Testing

Load testing is a kind of Performance Testing which determines a system's performance under real-life load conditions. This testing helps determine how the application behaves when multiple users access it simultaneously.
Following are the advantages of Load testing:
•Performance bottlenecks identification before production
•Improves the scalability of the system
•Minimize risk related to system downtime
•Reduced costs of failure
•Increase customer satisfaction

Disadvantages of Load testing:


•Programming knowledge is needed to use load testing tools.
•Tools can be expensive as pricing depends on the number of virtual users
supported.
Stress Testing

This test mainly evaluates the system's robustness and error handling under extremely heavy load conditions.
The most prominent use of stress testing is to determine the limit at which the system, software or hardware breaks. It also checks whether the system demonstrates effective error management under extreme conditions.
Need for Stress Testing
Consider the following scenarios -
•During festival time, an online shopping site may witness a spike in traffic, or when it
announces a sale.
•When a blog is mentioned in a leading newspaper, it experiences a sudden surge in
traffic.
It is imperative to perform Stress Testing to accommodate such abnormal traffic spikes. Failure to accommodate this sudden traffic may result in loss of revenue and reputation.

Stress testing is also extremely valuable for the following reasons:


• To check whether the system works under abnormal conditions.
• To verify that appropriate error messages are displayed when the system is under stress.
• System failure under extreme conditions could result in enormous revenue loss.
• It is better to be prepared for extreme conditions by executing Stress Testing.
Stress Testing - Scenarios:
• Monitor the system behaviour when the maximum number of users are logged in at the same time.
• All users performing critical operations at the same time.
• All users accessing the same file at the same time.
• Hardware issues, such as the database server being down or some servers in a server park crashing.
Difference between Load and Stress testing:

Load Testing
Load testing identifies the bottlenecks in the system under various workloads
and checks how the system reacts when the load is gradually increased

Stress Testing
Stress Testing determines the breaking point of the system, i.e. the maximum load after which it fails.
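The "breaking point" idea can be illustrated with a toy sketch. The service function and its capacity of 120 are invented for illustration; a real stress test would ramp virtual users against a live system:

```python
def service(load):
    """Toy system under test: fails once load exceeds its capacity."""
    CAPACITY = 120  # invented figure for this sketch
    if load > CAPACITY:
        raise RuntimeError("overloaded")
    return "ok"

def find_breaking_point(step=10, limit=1000):
    """Ramp the load upward, as a stress test does, until the system fails."""
    load = 0
    while load <= limit:
        try:
            service(load)
        except RuntimeError:
            return load  # first load level at which the system broke
        load += step
    return None  # no breaking point found within the limit
```

Here the ramp finds that the toy system survives a load of 120 but fails at 130, which is exactly the number a stress test report would record.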
Usability Testing

Usability Testing is defined as a type of software testing where a small set of target end-users of a software system "use" it to expose usability defects. This testing mainly focuses on how easily the user can use the application, flexibility in handling controls and the ability of the system to meet its objectives. It is also called User Experience (UX) Testing.
There are many software applications/websites that fail miserably once launched, because users are left asking –

•Where do I click next?


•Which page needs to be navigated?
•Which Icon or Jargon represents what?
•Error messages are not consistent or effectively displayed
•Session time not sufficient.
The goal of this testing is to satisfy users and it mainly concentrates on the following
parameters of a system:

• The effectiveness of the system, e.g., Is the system easy to learn?
• Efficiency, e.g., Navigation required to reach the desired screen/webpage should be minimal; scrollbars shouldn't be needed frequently.
• Accuracy, e.g., No broken links should be present.
• User Friendliness
Usability Testing Advantages
As with anything in life, usability testing has its merits and de-merits. Let's look at
them
• It helps uncover usability issues before the product is marketed.
• It helps improve end-user satisfaction
• It makes your system highly effective and efficient
• It helps gather true feedback from your target audience who actually use your
system during a usability test. You do not need to rely on "opinions" from
random people.
Usability Testing Disadvantages
• Cost is a major consideration in usability testing. It takes lots of resources to set
up a Usability Test Lab. Recruiting and management of usability testers can also
be expensive
• However, these costs pay for themselves in the form of higher customer satisfaction, retention and repeat business. Usability testing is therefore highly recommended.
What is Compatibility Testing?
Compatibility Testing is a type of Software testing to check whether your software is
capable of running on different hardware, operating systems, applications, network
environments or Mobile devices.

What is Compatibility?
Compatibility is nothing but the capability of existing or living together. In normal life, Oil
is not compatible with water, but milk can be easily combined with water.
Types of Compatibility Tests
• Hardware: It checks software to be compatible with different hardware configurations.
• Operating Systems: It checks your software to be compatible with different Operating
Systems like Windows, Unix, Mac OS etc.
• Software: It checks your developed software to be compatible with other software. For
example, MS Word application should be compatible with other software like MS Outlook, MS
Excel, VBA etc.
• Network: Evaluation of performance of a system in a network with varying parameters such
as Bandwidth, Operating speed, Capacity. It also checks application in different networks with
all parameters.
• Browser: It checks the compatibility of your website with different browsers like Firefox,
Google Chrome, Internet Explorer etc.
• Devices: It checks compatibility of your software with different devices like USB port devices, printers and scanners, other media devices and Bluetooth.
• Mobile: Checking your software is compatible with mobile platforms like Android, iOS etc.
• Versions of the software: It verifies your software application to be compatible with different versions of the software. For instance, checking your Microsoft Word to be compatible with Windows 7 and its service packs.
There are two types of version checking

Backward compatibility Testing verifies the behavior of the developed hardware/software with older versions of the hardware/software.
Forward compatibility Testing verifies the behavior of the developed hardware/software with newer versions of the hardware/software.
How to do Compatibility Testing
1.The initial phase of compatibility testing is to define the set of environments or
platforms the application is expected to work on.
2.The tester should have enough knowledge of the platforms/software/ hardware to
understand the expected application behavior under different configurations.
3.The environment needs to be set-up for testing with different platforms, devices,
networks to check whether your application runs well under different configurations.
4.Report the bugs. Fix the defects. Re-test to confirm Defect fixing.
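Step 1 — defining the set of environments — often produces a test matrix. A minimal sketch, in which the browser and OS lists are example values rather than a prescribed set:

```python
import itertools

# Example target environments; a real matrix comes from the test plan.
BROWSERS = ["Chrome", "Firefox", "Edge"]
OPERATING_SYSTEMS = ["Windows", "macOS", "Linux"]

def build_test_matrix():
    """Enumerate every browser/OS pair the application must be checked on."""
    return list(itertools.product(BROWSERS, OPERATING_SYSTEMS))
```

Each pair in the matrix then becomes one configuration to set up in step 3 (here, 3 browsers × 3 operating systems = 9 configurations).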
3.4 ACCEPTANCE TESTING

What is Acceptance Testing?


Acceptance testing is a testing technique performed to determine whether or not the software system has met the requirement specifications. The main purpose of this test is to evaluate the system's compliance with the business requirements and verify whether it has met the required criteria for delivery to end users.
Acceptance Criteria

Acceptance criteria are defined on the basis of the following attributes


• Functional Correctness and Completeness
• Data Integrity
• Data Conversion
• Usability
• Performance
• Timeliness
• Confidentiality and Availability
• Installability and Upgradability
• Scalability
• Documentation
Why Acceptance Tests?
Though System testing has been completed successfully, the
Acceptance test is demanded by the customer. Tests conducted
here are repetitive, as they would have been covered in System
testing.
Then, why is this testing conducted by customers?
This is because:
•To gain confidence in the product that is getting released to the market.
•To ensure that the product is working the way it has to.
•To ensure that the product matches current market standards and is competitive.
Types:
1) User Acceptance Testing (UAT)
UAT assesses whether the Product works correctly for the user and the intended usage.

2) Business Acceptance Testing (BAT)


This is to assess whether the Product meets the business goals and purposes or not.

3) Contract Acceptance Testing (CAT)


This is a contract which specifies that once the Product goes live, within a predetermined period,
the acceptance test must be performed and it should pass all the acceptance use
cases.
4) Regulations/Compliance Acceptance Testing (RAT)
This is to assess whether the Product violates the rules and regulations that are
defined by the government of the country where it is being released. This may be
unintentional but will impact negatively on the business.

5) Operational Acceptance Testing (OAT)


This is to assess the operational readiness of the Product and is a form of non-functional testing. It mainly includes testing of recovery, compatibility, maintainability, technical support availability, reliability, fail-over, localization, etc.
What is Alpha Testing?
Alpha testing is a type of acceptance testing, performed to identify all possible issues/bugs before releasing the product to everyday users or the public. The focus of this testing is to simulate real users using black box and white box techniques. The aim is to carry out the tasks that a typical user might perform. Alpha testing is carried out in a lab environment and usually the testers are internal employees of the organization. To put it as simply as possible, this kind of testing is called alpha only because it is done early on, near the end of the development of the software, and before beta testing.
What is Beta Testing?
Beta Testing of a product is performed by "real users" of the software application in a
"real environment" and can be considered as a form of external User Acceptance
Testing.

Beta version of the software is released to a limited number of end-users of the product
to obtain feedback on the product quality. Beta testing reduces product failure risks and
provides increased quality of the product through customer validation.

It is the final test before shipping a product to the customers. Direct feedback from customers is a major advantage of Beta Testing. This testing helps to test the product in the customer's environment.
Alpha Testing vs Beta Testing

• Alpha testing is performed by testers who are usually internal employees of the organization; Beta testing is performed by clients or end users who are not employees of the organization.
• Alpha testing is performed at the developer's site; Beta testing is performed at a client location or by the end user of the product.
• Reliability and Security Testing are not performed in-depth during Alpha testing; Reliability, Security and Robustness are checked during Beta testing.
• Alpha testing involves both white box and black box techniques; Beta testing typically uses black box testing.
• Alpha testing requires a lab or testing environment; Beta testing doesn't require any lab environment - the software is made available to the public in a real-time environment.
• A long execution cycle may be required for Alpha testing; only a few weeks of execution are required for Beta testing.
• Critical issues or fixes can be addressed by developers immediately in Alpha testing; most issues or feedback collected from Beta testing will be implemented in future versions of the product.
• Alpha testing ensures the quality of the product before moving to Beta testing; Beta testing also concentrates on product quality, but gathers user input and ensures that the product is ready for real-time users.
3.6 Special Tests:

What is Smoke Testing in Software Testing?


Smoke Testing is done to make sure the build we received from the development team is testable. It is also called the "Day 0" check.
It is done at the "build level".
It prevents wasting testing time on the whole application when the key features don't work or the key bugs have not been fixed yet. Here our focus will be on the primary and core application workflow.
How to Conduct Smoke Testing?
• To conduct smoke testing, we don’t write test cases. We just pick
the necessary test cases from already written test cases.
• Pick the test cases from our test suite which cover the major functionality of the application.
• In general, we pick minimal number of test cases that won’t take more
than half an hour to execute.
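Picking the smoke subset from an existing suite can be as simple as filtering by a tag. A hypothetical sketch — the test-case records and the "smoke" tag are invented here; pytest users would typically use a custom marker such as @pytest.mark.smoke and run `pytest -m smoke`:

```python
# Hypothetical test-case records; in practice these might come from a
# test management tool or from markers on the automated tests.
TEST_SUITE = [
    {"id": "TC-01", "title": "User can log in",       "tags": {"smoke", "auth"}},
    {"id": "TC-02", "title": "Password reset email",  "tags": {"auth"}},
    {"id": "TC-03", "title": "Add item to cart",      "tags": {"smoke", "cart"}},
    {"id": "TC-04", "title": "Apply discount coupon", "tags": {"cart"}},
    {"id": "TC-05", "title": "Checkout completes",    "tags": {"smoke", "payment"}},
]

def select_smoke_tests(suite):
    """Keep only the cases covering the core workflow (tagged 'smoke')."""
    return [tc["id"] for tc in suite if "smoke" in tc["tags"]]
```

Running the selector on the suite above keeps only the login, cart and checkout cases — the minimal set that proves the build is testable.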
What is Sanity Testing in Software Testing?

Sanity Testing is done during the release phase to check the main functionalities of the application without going deeper. It is also considered a subset of Regression testing.
It is done at the "release level".

At times due to release time constraints rigorous regression testing can’t be done to the
build, sanity testing does that part by checking main functionalities.

Most of the time, we don't get enough time to complete the whole testing. Especially in Agile methodology, we get pressure from the Product Owners to complete testing in a few hours or by the end of the day. In such scenarios we choose Sanity Testing; it plays a key role in these kinds of situations.
How to Conduct Sanity Testing?

• We just pick the necessary test cases from already written test cases.
• When it comes to Sanity testing, the main focus is to make sure
whether the planned functionality is working as expected.

• Real Time Example: Assume you are working on an eCommerce site. A new feature related to the Search functionality is released. Here your main focus should be on the Search functionality. Once you make sure that Search is working fine, you move on to other major functionality such as the payment flow.
SMOKE TESTING vs SANITY TESTING

• Smoke Testing is done to make sure the build we received from the development team is testable; Sanity Testing is done during the release phase to check the main functionalities of the application without going deeper.
• Smoke Testing is performed by both developers and testers; Sanity Testing is performed by testers alone.
• Smoke Testing exercises the entire application from end to end; Sanity Testing exercises only a particular component of the entire application.
• In Smoke Testing, the build may be either stable or unstable; in Sanity Testing, the build is relatively stable.

What is Regression Testing?

• Regression Testing verifies that a recent program or code change has not adversely affected existing features.

• Regression Testing is nothing but a full or partial selection of already executed


test cases which are re-executed to ensure existing functionalities work fine.

• This testing is done to make sure that new code changes should not have side
effects on the existing functionalities.

• It ensures that the old code still works once the new code changes are done.
Need of Regression Testing
Regression Testing is required when there is a
•Change in requirements and code is modified according to the requirement
•New feature is added to the software
•Defect fixing
•Performance issue fix
Retest All
All the tests in the existing test bucket or suite are re-executed. This is very expensive as it requires huge time and resources.

Regression Test Selection


Instead of re-executing the entire test suite, it is better to select part of the test suite to be run. The selected test cases can be categorized as
1) Reusable Test Cases 2) Obsolete Test Cases.
• Re-usable Test cases can be used in succeeding regression cycles.
• Obsolete Test Cases can't be used in succeeding cycles.

Prioritization of Test Cases


Prioritize the test cases depending on business impact, critical & frequently used
functionalities. Selection of test cases based on priority will greatly reduce the
regression test suite.
Selecting test cases for regression testing
Last-minute bug fixes can create side effects, so selecting test cases for regression testing is an art and not that easy.

Effective Regression Tests can be done by selecting the following test cases -
• Test cases which have frequent defects
• Functionalities which are more visible to the users
• Test cases which verify core features of the product
• Test cases of Functionalities which has undergone more and recent changes
• All Integration Test Cases
• All Complex Test Cases
• Boundary value test cases
• A sample of Successful test cases
• A sample of Failure test cases
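A sketch of such a selection, ranking hypothetical candidates by recent change, defect history and business impact. All field names, IDs and scores are invented for illustration; a real tool would pull this data from defect and usage records:

```python
# Hypothetical regression candidates with the attributes the criteria above
# suggest: defect history, business impact, and whether the area changed.
CANDIDATES = [
    {"id": "TC-10", "defects_found": 5, "business_impact": 3, "recently_changed": True},
    {"id": "TC-11", "defects_found": 0, "business_impact": 1, "recently_changed": False},
    {"id": "TC-12", "defects_found": 2, "business_impact": 5, "recently_changed": True},
    {"id": "TC-13", "defects_found": 1, "business_impact": 2, "recently_changed": False},
]

def select_regression_suite(candidates, budget):
    """Rank by (recently changed, defects found, business impact) and keep
    only as many cases as the regression budget allows."""
    ranked = sorted(
        candidates,
        key=lambda tc: (tc["recently_changed"],
                        tc["defects_found"],
                        tc["business_impact"]),
        reverse=True,
    )
    return [tc["id"] for tc in ranked[:budget]]
```

With a budget of two, the recently changed, defect-prone cases TC-10 and TC-12 are chosen first, which greatly reduces the regression suite as the slide describes.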
GUI Testing
What is GUI?
There are two types of interfaces for a computer application. Command Line Interface
is where you type text and computer responds to that command.

GUI stands for Graphical User Interface where you interact with the computer using
images rather than text.
For example, to GUI-test a web page we first check that the images are completely visible in different browsers.

Also, the links should be available, and the buttons should work when clicked.

Also, if the user resizes the screen, neither images nor content should shrink, crop or overlap.
Why do GUI testing?
Is it really needed?
Is testing the functionality and logic of the application not enough? Why waste time on UI testing?

To get the answer, think as a user, not as a tester. A user doesn't have any knowledge of XYZ software/application. It is the UI of the application which decides whether a user is going to use the application further or not.

A normal user first observes the design and look of the application/software and how easy it is to understand the UI. If a user is not comfortable with the interface or finds the application complex to understand, he will never use that application again. That's why the GUI is a matter of concern, and proper testing should be carried out in order to make sure that the GUI is free of bugs.
GUI Testing Approaches:
GUI testing can be performed in the following three ways:

1.Manual GUI Testing: Like any traditional manual testing


approach, this approach is very simple where the graphical screens
are manually checked by the tester and compared with the prototype
screens or the test cases as prepared against the business
requirement documents.
2. Record and Replay (Test Automation):
• We can automate the GUI testing with the help of tools such as QTP,
Selenium, Sikuli, etc.
• The record and play approach is provided by many test automation tools such as QTP, Selenium IDE, etc.
• During record and play, the tool itself generates the code as part of the test automation scripts.
• Otherwise, the tester can use APIs such as Selenium WebDriver, Sikuli, etc. and write the test scripts on his own in different programming languages such as Java, Ruby, Groovy, PHP, Python, etc. Such test scripts can automate the test scenarios to test the graphical elements present on the screen under test.
Model-based testing:
A graphical description of the behavior of the system is known as a Model. A model helps us to
determine the system behavior under test. We use the system requirements in order to generate
the efficient test cases with the help of a Model.
Given below is an overview of a model-based testing.
• Develop a model.
• Determine various inputs for this model.
• Determine expected output for this model.
• Execute the tests.
• Compare the actual result against the expected output.
• Make the decision on the action on the model.
Charts and Decision tables are two common modeling techniques which can be used for deriving
the test cases. Charts represent the system state and check the state against some inputs.
Decision tables are a comparison approach where the results are compared in a tabular manner
i.e. actual result against expected output.
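A decision table can drive test execution directly. In this sketch, can_check_out is a toy system under test and the table rows are the model-derived cases; the rule, inputs and expected outputs are all invented for illustration:

```python
# Toy system under test: the rule the model describes.
def can_check_out(logged_in, cart_items):
    return logged_in and cart_items > 0

# Decision table: each row pairs model inputs with the expected output.
DECISION_TABLE = [
    {"logged_in": True,  "cart_items": 2, "expected": True},
    {"logged_in": True,  "cart_items": 0, "expected": False},
    {"logged_in": False, "cart_items": 2, "expected": False},
    {"logged_in": False, "cart_items": 0, "expected": False},
]

def run_model_tests():
    """Execute every row and return the rows where actual != expected."""
    failures = []
    for row in DECISION_TABLE:
        actual = can_check_out(row["logged_in"], row["cart_items"])
        if actual != row["expected"]:
            failures.append(row)
    return failures
```

An empty failures list means the implementation agrees with the model on every tabulated case; any non-empty row pinpoints exactly which input combination deviates.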
Object Oriented Application Testing

Client Server Testing:

Client Server Software :


• Testing covers failure scenarios such as servers going down, record locks, I/O (Input/Output) errors and lost messages.
• Testing addresses system performance and scalability by understanding how systems respond to increased workloads and what causes them to fail.
There are two distinct approaches when creating software tests. There is black box testing
and white or glass box testing.
Black box testing is also referred to as functional testing; black box tests focus on I/O, for example. The testers know the input and predicted output, but they do not know how the program arrives at its conclusions. Code is not examined; only specifications are.
White box testing focuses on the internal workings of the program and uses programming code to examine outputs. Furthermore, the tester must know what the program is supposed to do and how it's supposed to do it. Then, the tester can see if the program strays from its proposed goal.
Client-Server Testing Techniques:

Risk Driven Testing and Performance Testing

Risk-driven testing aims to find the most important bugs early on, because testing is never allocated enough time or resources: companies want to get their products out as soon as possible.
Risk Driven: The prioritization of risks or potential errors is the engine behind risk driven
testing. (categories of error impact and likelihood)
Performance Testing: such as resource utilization, response time, and transaction rates.
It is also called load testing or stress testing.
Testing Aspects:

Unit testing, Integration testing, and System testing.

• These three are particularly relevant to client-server applications.


• A unit is the smallest testable component of a program. In object oriented
programming, which is increasingly influencing client-server applications, the smallest
unit is a class. Modules are made up of units.
• Unit testing is only the first phase of this three-layer testing.

Integration testing

System testing
Client Server Testing in Different Layers

A) Client Side: Graphical User Interface

i) Cross Platform nature


ii) Event Driven nature

B) Server Side: Application Testing


1. Client/Server loading test
2. Volume Testing
3. Stress Testing
4. Performance Testing
5. Data Testing
Client Server Testing in Different Layers

C) Networked Application Testing


i. Application response time
ii. Application Functionality
iii. Configuration and sizing
iv. Stress Testing
v. Performance Testing
vi. Reliability Testing

D) Security Testing
Web Based Testing
Web application testing is a software testing technique exclusively adopted to test applications that are hosted on the web, in which the application's interfaces and other functionalities are tested.

Web Application Testing - Techniques:


1. Functionality Testing - Below are some of the checks performed, not limited to this list:
• Verify there is no dead page or invalid redirects.
• First check all the validations on each field.
• Provide wrong inputs to perform negative testing.
• Verify the workflow of the system.
• Verify the data integrity.
2. Usability testing - To verify how easy the application is to use.
• Test the navigation and controls.
• Content checking.
• Check for user intuition.

3. Interface testing - Performed to verify the interface and the dataflow from one
system to other.

4. Compatibility testing- Compatibility testing is performed based on the context of


the application.
• Browser compatibility
• Operating system compatibility
• Compatible to various devices like notebook, mobile, etc.
5. Performance testing - Performed to verify the server response time and throughput under various
load conditions.

Load testing - It is the simplest form of testing, conducted to understand the behaviour of the system under a specific load. Load testing measures important business-critical transactions, while the load on the database, application server, etc. is also monitored.

Stress testing - It is performed to find the upper limit capacity of the system and also to determine
how the system performs if the current load goes well above the expected maximum.

Soak testing - Soak Testing also known as endurance testing, is performed to determine the system
parameters under continuous expected load. During soak tests the parameters such as memory
utilization is monitored to detect memory leaks or other performance issues. The main aim is to
discover the system's performance under sustained use.

Spike testing - Spike testing is performed by increasing the number of users suddenly by a very large
amount and measuring the performance of the system. The main aim is to determine whether the
system will be able to sustain the work load.
6. Security testing - Performed to verify that the application is secure on the web, as data theft and unauthorized access are common issues. Below are some of the areas checked to verify the security level of the system (they mirror the well-known OWASP Top 10 risks):

• Injection
• Broken Authentication and Session Management
• Cross-Site Scripting (XSS)
• Insecure Direct Object References
• Security Misconfiguration
• Sensitive Data Exposure
• Missing Function Level Access Control
• Cross-Site Request Forgery (CSRF)
• Using Components with Known Vulnerabilities
• Unvalidated Redirects and Forwards
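A security test for the first item, injection, can be demonstrated with Python's built-in sqlite3 module: the same attack payload leaks every row through a string-concatenated query but nothing through a parameterized one. The table and data are invented for illustration:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Vulnerable: attacker-controlled input concatenated into the SQL string.
    return conn.execute(
        "SELECT id FROM users WHERE name = '%s'" % name).fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)).fetchall()

def demo():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)",
                     [(1, "alice"), (2, "bob")])
    payload = "' OR '1'='1"  # classic injection payload
    return find_user_unsafe(conn, payload), find_user_safe(conn, payload)
```

A security test case would therefore feed such payloads into every input field and assert that no unexpected data comes back.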
Chapter 4: Test Management
1. Test Planning
 Preparing Test Plan
8 Steps are used to prepare a test plan
1. Analyze the product
2. Develop test strategy
3. Define objective of test
4. Define test criteria
5. Planning the resources
6. Plan test environment
7. Schedule and Cost
8. Test Deliverables
A test plan typically includes:
1. Scope
2. Methodology
3. Requirements
4. Criteria for Pass and fail
5. Schedule
Test Plan Types:
•Master Test Plan
•Testing Level Specific test plan
•Unit Test Plan
•Integration test plan
•System Test Plan
•Acceptance Test Plan
• Testing Type Specific Test Plan
Standard Template for Test Plan:
1. Test Plan Identifier
2. References
3. Introduction
4. Test Items
5. Software Risk Issues
6. Features to be tested
7. Features not to be tested
8. Approach
9. Item Pass/Fail criteria
10. Suspension Criteria and Resumptions requirements
11. Test Deliverables
12. Remaining Test Task
13. Environmental Needs
14. Staffing and Training Needs
15. Responsibilities
16. Schedule
17. Planning Risk and Contingencies
18. Approvals
19. Glossary
Risk Management during Test Planning

List everything that could go wrong
 Quantify and prioritize all the risks
 Find solutions and plan how to handle these risks
4.1.2 Scope Management

In general Scope includes:


 Objectives and requirements of the project
 Constraints or limitations of the project like time, budget, resources, legal,
technological and management
 Assumptions that are considered at the time of planning
 Risks involved in the project implementation
4.1.3 Deciding Test Approach

Requirement-based (analytical) strategies – planning, estimating and designing tests from the requirements
 Model-based strategies – tests derived from a formal or informal model, e.g. if the SUT behaves as the model predicts, it is correct
 Methodical strategies – pre-planned, systematically developed sets of tests, e.g. based on ISO 9126
 Process- or standard-compliant strategies – an externally developed approach to testing
 Dynamic/heuristic strategies – e.g. exploratory testing, to find as many defects as possible
 Regression-averse strategies – usually automated, to find regression defects


4.1.4 Setting up criteria for Testing
4.1.5 Identify Responsibilities

• List the responsibilities of each team/role/individual

• Example: Shivram performs acceptance testing

• Managing, designing, preparing, executing and resolving different types of test activities

• Identify people related to the test environment

• Developers, tester, operations staff, testing services etc.


4.1.6 Staffing and Training Needs

• Training
• Product Training provided to test analyst on the application or system
• Test design techniques provided to business users involved in UAT
• Training for use of test executions and reporting tools provided to all users
• Staffing Needs
• Test team size and number of resources required
• Number of individuals required for each role, and whether certain roles require multiple individuals
• It is important to state when and for how long each resource is required
4.1.7 Resource Requirements
4.1.8 Test Deliverables and Milestone
4.1.9 Testing Tasks

• Identify the various tasks
• Arrange tasks hierarchically (identify required skills)
• From size estimations we can determine the amount of testing to be done
• Effort estimations (number of person-days, months or years)
• Schedule estimates translate effort into specific time frames
4.2 Test Management

• Traditional tools – Pen and Paper, Word Processors, Spreadsheets


• Larger testing efforts may use test management solutions like Spreadsheet or
database or commercial test management applications
• Generally, test management tools allow different teams to plan, develop, execute and assess all testing activities
4.2.1 Choice of Standards

External standards are made by an entity other than the organization
 Standards defined by the customer lie in the business requirements of a certain product
 Standards defined by regulatory authorities apply to both producer and customer, with associated legal consequences
 Standards defined globally apply to all producers and customers, e.g. ISO, IEEE
 Internal standards are developed by an organization for its own use
4.2.1 Choice of Standards Contd..
4.2.2 Test Infrastructure Management
4.2.2 Test Infrastructure Management Contd..
4.2.2 Test Infrastructure Management Contd..

Components of Testing Infrastructure


• Required Hardware Infrastructure
• Required Software Infrastructure
• Required Automated Testing tool infrastructure
4.2.3 Test People Management Contd..
Project success depends largely on the efficient and effective use of management techniques
 Bugs and defects are found based on the skills of the testers
 Bridge the skill gap if necessary
 Teams need to work together, follow the process and deliver the work products

Test Lead Skills

• Lead a team of testers with full efficiency
• This is required to meet the product goal and ultimately the organizational goal


4.2.3 Test People Management
Contd..
Test Lead Responsibility
• Identify how the test team is formed and where it fits in the organization
• Decide the roadmap for project success
• Identify the scope of testing using the requirement documents
• Prepare the Test Plan and get it approved by the management and development team
• Identify the required metrics and work to have them in place
• Calculate the project size for effort estimation
• Check what skills are required and available, and balance them
• Identify whether a skill gap is present; if it is large, plan training sessions
• Identify automation tools that may be required and provide training to the team
• Create a healthy environment for all resources to gain maximum throughput
Test Team Management
Consideration for test team management
a) Understand the tester
 Understanding the tester's mindset is important for test leads and management; with experience, testers learn to break code and bring out hidden defects
 Learning different strategies and creative testing takes time
b) Testers' work environment
 Testers often work under pressure; projects are frequently delayed, and even when on time the product may not be verified properly
 The test team tries to flag issues, but management does not always take them seriously, which causes frustration
c) Role of the Test Team
 100% testing is not possible, so there is a chance of defects surfacing in the customer environment
4.2.4 Integrating with Product Release

• The testing team is responsible for reducing the risk of releasing software with defects
• Success depends upon the ultimate integration of development and testing
• The project plan must include separate plans for development and testing
• Provide a separate plan for each phase, e.g. unit, integration, etc.
• A service level agreement should be defined between the development team and the testing team, e.g. how long the testing team will test the application
• Define severities and priorities
• A proper communication channel should be formed, e.g. documentation
4.3 Test Process
1. Test Planning and Control
2. Test Analysis and Design
3. Test Implementation and Execution
4. Evaluation of exit criteria and Test reporting
5. Test Closure Activities
4.3.1 Baseline Test Plan
• Involves the important process of validating the documents and specifications from which test cases will be planned and designed
• Baseline: a line that forms the base for performing further activities such as construction, measurement, comparison and calculation
• Developed by competent people and then given to higher authorities for approval
• If anything changes, we first make the change in the baseline plan, get it approved, and then the process continues; using a baseline, major issues can be resolved
4.3.2 Test Case Specification
• Test plan: which units will be tested and what approach we should use
• A test case specification states the requirement to be satisfied (test case: a set of inputs, execution conditions and a pass/fail criterion)
• First determine the features to be tested in the selected unit, using the approach specified in the test plan
• The success of the product depends upon the quality of the test cases
4.3.3 Update of Traceability Matrix
• Associates requirements with their work products and test cases
• The advantage of the matrix is that it ensures the completeness of requirement coverage
• It acts as a tool to verify that every requirement is tested
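One simple way to realize the matrix is as a mapping from requirement IDs to the test cases that cover them; uncovered requirements then fall out directly. A minimal sketch (the IDs are illustrative, not taken from any specific project):

```python
# Requirements traceability matrix (RTM) as a dict:
# each requirement ID maps to the test cases covering it.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],            # no test case yet: a coverage gap
}

# Completeness check: flag requirements with no covering test case.
uncovered = [req for req, tcs in rtm.items() if not tcs]
print("Uncovered requirements:", uncovered)  # -> ['REQ-003']
```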
4.3.4 Executing Test Cases
Test execution has the following major tasks:
• Follow the test procedure to execute test suites and individual test cases
• Do confirmation testing (retesting)
• Log the results (in a log report with version, executor, procedure, pass/fail status, etc.)
• Compare actual results with expected results
• If they differ, a defect is reported
• The defect database must be updated; it is the communication mechanism between developer and tester
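The tasks above can be sketched as a small execution loop: run each case, compare actual with expected, log pass/fail, and flag a defect on mismatch. The case format and `run_case` helper are hypothetical stand-ins, not from any tool:

```python
def run_case(case):
    """Run one test case and return its actual result."""
    return case["func"](*case["inputs"])

def execute_suite(cases):
    """Execute all cases, logging id, actual result and pass/fail status."""
    log = []
    for case in cases:
        actual = run_case(case)
        status = "PASS" if actual == case["expected"] else "FAIL"
        log.append({"id": case["id"], "actual": actual, "status": status})
        if status == "FAIL":
            # a FAIL would be raised as a defect in the defect database
            print(f"Defect: {case['id']} expected {case['expected']}, got {actual}")
    return log

cases = [
    {"id": "TC-01", "func": lambda a, b: a + b, "inputs": (2, 3), "expected": 5},
    {"id": "TC-02", "func": lambda a, b: a + b, "inputs": (2, 3), "expected": 6},
]
log = execute_suite(cases)   # TC-01 passes, TC-02 is reported as a defect
```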
Homework:
4.3.6 Preparing Test Summary Report
Step 1: Purpose of the document
Gives a short description of the objectives of the document
Step 2: Overview of the application
Describes the application to be tested
Step 3: In scope, out of scope, items not tested
Step 4: Metrics
Execution results, status of test cases and defects, etc.
4.3.6 Preparing Test Summary Report Contd ..
Step 5: Types of Testing Performed:
a. Smoke Testing
b. System Integration Testing
c. Regression Testing
4.3.6 Preparing Test Summary Report Contd ..
Step 6: Information on Test Environments and Tools:
Step 7: Lessons Learned:
4.3.6 Preparing Test Summary Report Contd ..
Step 8: Recommendations (Suggestions):
Step 9: Best Practices
Step 10: Exit Criteria
Step 11: Conclusion / Sign-off
Step 12: Definitions / Acronyms / Abbreviations
4.4 Test Reporting
When testing is completed, the tester generates a final report.
4.4.1 Recommending Product Release:
Testing can never prove there are no defects; instead it provides evidence of what kinds of defects are present.
The job of testing is to provide defect information to senior management and the product release team.
This information includes:
• What kinds of defects the product has
• What is the impact/severity of the defects
• What would be the risk of releasing the product with the existing defects
Based on the above, senior management can take the release decision.
Chapter 5
Defect Management
5.1 Introduction
• Software defects are expensive, and the defect identification process is also expensive
• It is not possible to eliminate all defects, but we can minimize their number and impact
• We need to implement a defect management process that focuses on preventing defects, catching defects early, and minimizing their impact
5.1.1 Defect Classification:
Fig C5.1 Defect Classification
5.1.1 Defect Classification:
Severity Wise:
• Major: an observable product failure or departure from requirements
• Minor: a defect that will not cause a failure in execution of the product
• Fatal: causes the system to crash or close abruptly, or affects other applications
Work Product Wise:
• SSD: defect from the System Study Document
• FSD: Functional Specification Document
• ADS: Architectural Design Document
• DDS: Detail Design Document
• Source Code
• Test Plan / Test Cases
• User Documentation (User Manual)
5.1.1 Defect Classification:
Type of Error Wise:
• Comments: inadequate, incorrect, misleading or missing comments in source code
• Computational Error: improper computation of a formula
• Data Error: incorrect data population/update in the database
• Missing Design: design features/approach missing or not documented in the design document
• Inadequate or Suboptimal Design: design features/approach needs additional input to be complete, or the design feature described does not provide the best approach
• Incorrect Design: wrong or inaccurate design
• Ambiguous Design
• Boundary Conditions Neglected
• Interface Error
5.1.1 Defect Classification:
Type of Error Wise:
• Logic Error: missing, inadequate or irrelevant functionality in source code
• Message Error: missing, inadequate or irrelevant error messages in source code
• Navigation Error
• Performance Error
• Missing Requirements
• Inadequate Requirements
• Incorrect Requirements
• Ambiguous Requirements
• Sequencing / Timing Error
• Standards
• System Error: hardware, OS, memory leak and related errors
• Test Plan/Cases Error: missing or inadequate test plan
• Typographical Error
• Variable Declaration Error: type mismatch errors in source code
5.1.1 Defect Classification:
Status Wise:
• Open
• Closed
• Deferred
• Cancelled
5.1.2 Defect Management Process
The process of finding defects and reducing them at the lowest cost is called the defect management process.
1. Defect Prevention: implementation of techniques, methodologies and standard processes to reduce risk
2. Deliverable Baseline: establishment of milestones where deliverables will be considered complete
3. Defect Discovery: identification and reporting of defects
4. Defect Resolution: work by the development team to prioritize, schedule and fix a defect, and document the resolution
5. Process Improvement: analyze and identify ways to improve the process to prevent similar defects from occurring in future
Defect Prevention Cycle:
5.1.2 Defect Prevention Process
• Prevention is better than cure (a software industry process)
• What is a defect?
• Benefits of early detection
• Defect prevention is a continuous process of collecting defect data, doing root cause analysis, determining and implementing corrective actions, and sharing lessons learned to avoid future defects
• Analyze defects to their root causes
5.1.2 Defect Prevention Process Contd…
Root Cause Analysis:
Initiated by the team leader or manager
Applying 3 key principles:
1. Reducing defects to improve quality
2. Applying best expertise
3. Targeting systematic errors: mistakes that tend to be repeated
Embedding procedures into the development process:
1. Monthly status of defects by severity
2. Monthly meeting for defect awareness in the team
3. Adding defect prevention measures to the software development life cycle
4. Learning from previous projects
5. Monitoring defect trends (increasing/decreasing)
5.2 Defect Life Cycle
5.2 Defect Life Cycle Contd ..
• A fixed defect is verified in the next build
• If the developer has not resolved the defect, it may carry one of the following statuses:
• Won't fix / Can't fix
• Can't reproduce
• Need more information
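The life cycle can be viewed as a small state machine: each status permits only certain next statuses. A hedged sketch as a transition table; the exact states and flows vary by organization and defect-tracking tool:

```python
# Allowed defect status transitions (illustrative, not a standard).
TRANSITIONS = {
    "New":      {"Open", "Cancelled"},
    "Open":     {"Fixed", "Deferred", "Won't fix", "Can't reproduce",
                 "Need more information"},
    "Fixed":    {"Closed", "Reopened"},   # verified by the tester in the next build
    "Reopened": {"Fixed"},
    "Deferred": {"Open"},
}

def can_move(current, target):
    """Return True if a defect may move from `current` to `target`."""
    return target in TRANSITIONS.get(current, set())

print(can_move("Fixed", "Closed"))   # True: verified fix can be closed
print(can_move("New", "Closed"))     # False: cannot close without triage
```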
5.2.1 Defect Report Template
• Summary: one-line description of the defect
• Category: functionality, specification, UI, etc.
• Business impact: impact of this defect on the business
• Product impact: impact of this defect on the product
• Environment: platform, database, etc.
• Related defects: if this defect is related to other defects, you can add them in a comment
Example (severity vs. priority):
• Pressing 5 keys together crashes the application: a severe defect, but low priority
• Company logo not displayed properly: low severity, but high priority
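The template fields could be captured as a simple record; the field names below are assumptions for illustration, not a standard schema. Note how the crash example carries high severity but low priority:

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    summary: str            # one-line description of the defect
    category: str           # functionality, specification, UI, ...
    severity: str           # impact of the defect on the product
    priority: str           # urgency of the fix for the business
    environment: str        # platform, database, ...
    related: list = field(default_factory=list)  # linked defect IDs

# Severe but low priority: an unlikely 5-key combination crashes the app.
crash = DefectReport(
    summary="Pressing 5 keys together crashes the application",
    category="Functionality", severity="Fatal", priority="Low",
    environment="Windows 10 / MySQL")
```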
5.3.1 Estimate Expected Impact of a Defect
• Defect impact: the degree of impact on development and operation
• Once critical risks are identified, the financial impact of each risk should be estimated
• The expected impact of a risk (E) is calculated as
E = P * I, where
P: probability of the risk becoming a problem
I: impact in dollars if the risk becomes a problem
• Another effective method for estimating the expected impact of a risk is the annual loss expectation (ALE) formula
• An occurrence of a risk is called an event
• Loss per event is defined as the average loss over a sample of events
• The formula states: ALE = loss per event * number of events
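A quick worked example of both formulas; the dollar figures and counts are illustrative only:

```python
# Expected impact of a risk: E = P * I
P = 0.10          # probability the risk becomes a problem
I = 50_000        # impact in dollars if it does
E = P * I
print(E)          # 5000.0 dollars of expected impact

# Annual loss expectation: ALE = loss per event * number of events per year
loss_per_event = 2_000    # average loss over a sample of events
events_per_year = 6
ALE = loss_per_event * events_per_year
print(ALE)        # 12000 dollars expected loss per year
```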
5.3.1 Techniques for Finding Defects
• A defect is found while using the final software product, when the software does not work or fails
5.3.2 Reporting Defects:
• Specify the exact action; do not add confusing statements
• Provide more information, not less
• Stick to facts and avoid emotion
• Do not be impatient when filing a defect report; replicate the defect at least once more to be sure
• Do not submit the report as soon as you write it; review it at least once
Chapter 6:
Testing Tools and Measurements
6.1 Manual Testing
Manual testing is the process of manually testing software for defects.
6.1.2 Limitations of Manual Testing
6.1.3 Automated Testing or Test Automation
All manual testing limitations are overcome by automation testing.
Types of test automation tools:
1. Static test tools
2. Dynamic test tools
6.1.4 Benefits of Automation Testing
• Reduces testing time
• Improves bug finding
• Delivers a quality product
• Allows a test to be run many times with different data
• Frees up more time for test planning
• Saves resources / requires fewer resources
• Automation never tires, and one expert can work with many tools at a time
Advantages of switching to automated testing:
• Efficient testing: automated testing is faster
• Consistency in testing: more reliable
• Better quality software: reduces human and technical risk
• Automated testing is cheaper: more powerful and multipurpose
6.1.5 General Approaches of Automated Test
6.1.6 Need for Automated Testing Tool
6.2 Features of Test Tools
6.2.1 Test Tool:
• Identifying a tool is an overwhelming task
• Start by considering the different phases of software testing:
• White box
• Unit testing
• Integration testing (which tool would be best for each)
• Decide which phases to accomplish using automation tools
• From hundreds of tools, the IT department identifies the right one
• Remember: software testing tools cannot work miracles; they cannot make poorly designed software better, and they cannot fix unrealistic development schedules
6.2 Types of Tools: Static Testing Tools
6.2.2 Static Testing Tool
• Static testing tools are used by developers
• Static analysis tools are an extension of compiler technology
• Besides code, static analysis is also carried out on requirements or websites
• They help in understanding the structure of code and are also used to enforce coding standards
6.2.2 Static Testing Tool Contd ..
6.2.3 Dynamic Testing Tool
• Called dynamic because they require the code to be in a running state
• Car example: in standing position you inspect its comfort; on a test drive you check how the car performs
6.2.3 Dynamic Testing Tool Contd ..
Testing tools are classified into the following categories:
Static test tools:
• Flow analysers: ensure consistency in data flow from input to output
• Path tests: find unused code and code with contradictions
• Coverage analysers: ensure that all logic paths are tested
• Interface analysers: examine the effect of passing variables and data between modules
Dynamic test tools:
• Test drivers: input data into the module under test
• Test beds: simultaneously display source code along with the program under execution
• Emulators: response facilities used to emulate parts of the system not yet developed
• Mutation analysers: errors are deliberately seeded into the code to analyse fault tolerance
6.3 Advantages and Disadvantages of Testing Tools:
Advantages of testing tools:
• Reduction of repetitive work
• Greater consistency and repeatability
• Objective assessment
• Ease of access to information about tests and testing
Disadvantages of testing tools:
• Unrealistic expectations from the tool
• People often underestimate the time, cost and effort for the initial introduction of a tool
• People frequently miscalculate the time and effort needed to achieve significant and continuing benefits from the tool
• People often underestimate the effort required to maintain the test assets generated by the tool
• People depend on the tool too much
6.4 Selecting a Testing Tool:
• Must match the need and solve it in an effective and efficient way
• The tool must address the organization's weaknesses, and the organization must be ready for the changes
• If current practices are not good enough and the organization is not mature, it is always recommended to improve the practices first; otherwise the tool only adds chaos
• We can improve the process and introduce a tool in parallel
• Do not depend on the tool for everything; it is a support for the organization
6.5 Metrics and Measurement of Software Testing:
6.5.1 Metrics : Contd ..
6.5.1 Metrics : Contd ..
Importance of Metrics:
"We cannot improve what we cannot measure"; "We cannot control what we cannot measure".
6.5.1 Metrics : Contd ..
• Base metrics: collected by the Test Analyst, e.g. total number of test cases developed, number of test cases to be executed, pass/fail counts, etc.
• Derived metrics: data derived from base metrics, tracked by the Test Lead/Manager for reporting purposes
6.6.4 Project Metrics
• Project metrics enable a software project manager to:
• Assess the status of an ongoing project
• Track potential risks
• Uncover problem areas before their status becomes critical
• Adjust the workflow or tasks
• Evaluate the project team's ability to control the quality of software work products
• Many of the same metrics are used in both the process and project domains
6.6.4 Progress Metrics
• Tracking progress
• Indicate how the different activities of the project are progressing
• When we track progress, it is related to time or another unit that indicates a schedule
• Most often we use progress metrics to track planned versus actual over time
• What we track depends on the role
• For example, financial people track money spent
• Man hours per test case executed
• Planned hours vs. actual hours
• Test cases executed vs. planned
• Test cases executed vs. defects found
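The planned-versus-actual ratios above reduce to simple arithmetic; a sketch with illustrative counts:

```python
# Illustrative progress-metric inputs (not from a real project).
planned_cases, executed_cases = 200, 150
planned_hours, actual_hours = 80, 95
defects_found = 30

execution_progress = executed_cases / planned_cases * 100   # % of plan executed
schedule_variance  = actual_hours - planned_hours           # hours over plan
defect_rate        = defects_found / executed_cases         # defects per executed case

print(f"Executed: {execution_progress:.0f}% of plan")       # Executed: 75% of plan
print(f"Hours over plan: {schedule_variance}")              # Hours over plan: 15
print(f"Defects per executed case: {defect_rate:.2f}")      # 0.20
```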