STE chapter notes

Unit-1 (Marks-14)

Basics of Software Testing and Testing Methods

1.1 Software Testing, Objective of Testing.

Software Testing:
Testing is executing a system in order to identify any gaps,
errors, or missing requirements in contrast to the actual
requirements.

Objective of Testing:

• Software testing is required to identify defects and errors
during the development phases.
• It is necessary for customer reliability and satisfaction.
• It is necessary to verify the quality of the product.
• It is necessary to deliver to the customer a quality product or
software application that requires lower maintenance cost, as its
results are accurate, consistent and reliable.
• Testing is needed for effective performance of the software
product.

1.2 Failure, Error, Fault, Defect, Bug Terminologies.

Failure:
It is the inability of a system or component to perform a required
function according to its specification.

Error:
It is an issue identified internally or during unit testing.
Normally an error occurs when a human action produces
undesirable results.

Fault:
It is a condition that causes the software to fail to perform its
required function.

Defect:
It is an issue identified by the customer.

Bug:
A bug is the initiation of an error or problem because of which a
fault may occur in the system.

1.3 Test Case, When to Start and Stop Testing of Software
(Entry and Exit Criteria).

Test Case:

• Test cases involve a set of steps, conditions, and inputs that can be
used while performing testing tasks. The main intent of this activity
is to ensure whether software passes or fails in terms of its
functionality and other aspects. There are many types of test cases
such as functional, negative, error, logical test cases, physical test
cases, UI test cases, etc.
• Furthermore, test cases are written to keep track of the testing
coverage of software. Generally, there are no formal templates that
can be used during test case writing. However, the following
components are always available and included in every test case −
• Test case ID
• Product module
• Product version
• Revision history
• Purpose
• Assumptions
• Pre-conditions
• Steps
• Expected outcome
• Actual outcome
• Post-conditions
Many test cases can be derived from a single test scenario. In
addition, sometimes multiple test cases are written for a single
piece of software; such a collection is known as a test suite.
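As an illustration, the components listed above can be captured in a
simple record structure. The following is a minimal Python sketch
whose fields mirror that list; the sample values are hypothetical.

from dataclasses import dataclass, field

@dataclass
class TestCase:
    test_case_id: str
    product_module: str
    product_version: str
    revision_history: list = field(default_factory=list)
    purpose: str = ""
    assumptions: str = ""
    preconditions: str = ""
    steps: list = field(default_factory=list)
    expected_outcome: str = ""
    actual_outcome: str = ""
    postconditions: str = ""

tc = TestCase(
    test_case_id="TC-001",
    product_module="Login",
    product_version="1.0",
    purpose="Verify login with valid credentials",
    steps=["Open login page", "Enter a valid username and password",
           "Click Login"],
    expected_outcome="User lands on the dashboard",
)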

# When to Start and Stop Testing of Software (Entry and Exit
Criteria):

Who Does Testing?

It depends on the process and the associated stakeholders of the
project(s). In the IT industry, large companies have a team with
responsibilities to evaluate the developed software in the context
of the given requirements. Moreover, developers also conduct
testing, which is called Unit Testing. In most cases, the following
professionals are involved in testing a system within their
respective capacities −

• Software Tester
• Software Developer
• Project Lead/Manager
• End User

Different companies have different designations for people who test
the software on the basis of their experience and knowledge, such
as Software Tester, Software Quality Assurance Engineer, QA
Analyst, etc.

When to Start Testing?

An early start to testing reduces the cost and time to rework and
produces error-free software that is delivered to the client. In
the Software Development Life Cycle (SDLC), testing can be started
from the Requirements Gathering phase and continued till the
deployment of the software.

When to Stop Testing?

It is difficult to determine when to stop testing, as testing is a
never-ending process and no one can claim that software is 100%
tested. The following aspects are to be considered for stopping the
testing process −
• Testing Deadlines
• Completion of test case execution
• Completion of functional and code coverage to a certain point
• Bug rate falls below a certain level and no high-priority bugs
are identified
• Management decision

1.4 Verification and Validation (V Model), Quality Assurance
and Quality Control.

Verification and Validation (V Model):

• The V model is known as the Verification and Validation model.
• This model is an extension of the waterfall model.
• In the life cycle of the V-shaped model, processes are executed
sequentially.
• Every phase completes its execution before the execution of the
next phase begins.

Verification and validation are very confusing terms for most
people, who use them interchangeably. The following table
highlights the differences between verification and validation.

Sr. No. | Verification | Validation
1 | Addresses the concern: "Are you building it right?" | Addresses the concern: "Are you building the right thing?"
2 | Ensures that the software system meets all the specified functionality. | Ensures that the functionality meets the intended behavior.
3 | Takes place first and includes checking of documentation, code, etc. | Occurs after verification and mainly involves checking of the overall product.
4 | Done by developers. | Done by testers.
5 | Involves static activities: collecting reviews, walkthroughs, and inspections to verify the software. | Involves dynamic activities: executing the software against the requirements.
6 | An objective process; no subjective decisions should be needed to verify the software. | A subjective process; involves subjective decisions on how well the software works.

Advantages of V-model

• The V-model is easy and simple to use.
• Many testing activities, i.e. planning and test design, are
executed at the start, which saves time.
• Errors are detected at the start of the project; hence there is
less chance of errors occurring at the final phase of testing.
• This model is suitable for small projects where the requirements
are easily understood.

Disadvantages of V-model

• The V-model is not suitable for large and composite projects.
• If the requirements are not constant, then this model is not
acceptable.

Quality Assurance and Quality Control:

Quality Assurance | Quality Control | Testing
Includes activities that ensure the implementation of processes, procedures and standards in the context of verification of the developed software and intended requirements. | Includes activities that ensure the verification of the developed software with respect to documented (or, in some cases, undocumented) requirements. | Includes activities that ensure the identification of bugs/errors/defects in the software.
Focuses on processes and procedures rather than conducting actual testing on the system. | Focuses on actual testing by executing the software, with the aim of identifying bugs/defects through implementation of procedures and processes. | Focuses on actual testing.
Process-oriented activities. | Product-oriented activities. | Product-oriented activities.
Preventive activities. | A corrective process. | A preventive process.
A subset of the Software Test Life Cycle (STLC). | QC can be considered a subset of Quality Assurance. | Testing is a subset of Quality Control.

1.5 Method of Testing: Static and Dynamic Testing.

Static Testing:
• In static testing, testing and identification of defects is
carried out without executing the code.
• This testing is done in verification process. This testing
consists of static analysis and reviewing of documents. For
example, reviewing, walkthrough, inspection, etc.

Dynamic Testing:
• In this testing, the software code is executed and the results
of the tests performed are observed.
• Dynamic testing is done in the validation process, i.e. unit
testing, integration testing, system testing, etc.

Difference between Static and Dynamic testing:

Static Testing | Dynamic Testing
Completed without executing the program. | Completed with the execution of the program.
Executed in the verification stage. | Executed in the validation stage.
Executed before compilation. | Executed after compilation.
Prevents defects. | Finds and fixes defects.
The cost of finding and fixing defects is low. | The cost of finding and fixing defects is high.
Consists of walkthroughs, inspections, reviews, etc. | Consists of specification-based, structure-based and experience-based techniques: unit testing, integration testing, etc.

Static Testing Techniques

The static testing techniques are the informal review, the
walkthrough, the inspection, and the technical review. Reviews
vary from informal to formal.

1.6 The Box Approach: White Box Testing:

Inspection, Walkthroughs, Technical Review, Functional
Testing, Code Coverage Testing, Code Complexity Testing.

1. Inspection:

• A trained moderator guides the inspection. It is the most formal
type of review.
• The reviewers prepare and check the documents before the
meeting.
• In an inspection, separate preparation is done, during which the
product is examined and defects are found. These defects are
documented in an issue log.
• In an inspection, the moderator performs a formal follow-up by
applying exit criteria.

Goals of Inspection

• Assist the author in improving the quality of the document under
inspection.
• Efficiently and rapidly remove defects.
• Generate documents with a higher level of quality, which helps
to improve the product quality.
• Learn from previously found defects and prevent the occurrence
of similar defects.
• Generate common understanding by exchanging information.
2. Walkthroughs:

• In a walkthrough, the author guides the review team through the
document to establish a common understanding and collect feedback.
• A walkthrough is not a formal process.
• In a walkthrough, the review team is not required to do a
detailed study before the meeting, while the author is expected to
be prepared.
• A walkthrough is useful for higher-level documents, i.e.
requirement specifications and architectural documents.

Goals of walkthrough
• Make the document available to stakeholders both outside
and inside the software discipline to collect information
about the topic under documentation.
• Describe and evaluate the content of the document.
• Study and discuss the validity of possible alternatives and
proposed solutions.

3. Technical Review:
• Technical review is a discussion meeting that focuses on
technical content of the document. It is a less formal
review.
• It is guided by a trained moderator or a technical expert.

Goals of technical review

• Evaluate the value of the technical concept in the project
environment.
• Build consistency in the use and representation of technical
concepts.
• Ensure, at an early stage, that technical concepts are used
correctly.
• Notify the participants regarding the technical content of the
document.
4. Functional Testing:

• Functional testing is based on the specified behavior of the
software and is therefore referred to as black box testing.
• This testing focuses on suitability, interoperability, security,
accuracy and compliance.
• The techniques used for functional testing are frequently
specification-based.

Following are the two aspects of testing functionality:

i) Requirement-based testing
ii) Business-process-based testing

i) Requirement-based testing
• In requirement based testing, the requirements are prioritized
depending on the risk criteria.
• It ensures that important and critical tests are included in the
testing efforts.
ii) Business-process-based testing
• In this testing, knowledge of the business processes is used.
• It describes the framework involved in everyday use of the system.

5. Code Coverage Testing:

Code coverage testing determines how much of the code is being
tested. It can be calculated using the formula:

Code Coverage = (Number of lines of code exercised /
Total number of lines of code) * 100%

Following are the types of code coverage analysis:

• Statement coverage and block coverage
• Function coverage
• Function call coverage
• Branch coverage
• Modified condition/decision coverage

Coverage measurement of code is done using tools, and many tools
are available for this task. These tools help with the following:

• They improve the quality and productivity of testing.
• They verify that more structural aspects are tested, which helps
to find defects on those structural paths.
• They help in testing the same structure with different data.
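To make the formula concrete, here is a minimal Python sketch of how
a statement-coverage tool works under the hood, using a line tracer.
It is illustrative only; real projects use dedicated tools (for
example coverage.py). The sample function and inputs are
hypothetical.

import inspect
import sys

def statement_coverage(func, *args):
    # collect the executable lines of `func` (skip blanks, comments,
    # the def line and bare else clauses)
    lines, start = inspect.getsourcelines(func)
    executable = set()
    for i, text in enumerate(lines):
        s = text.strip()
        if s and not s.startswith(("#", "def ", "else")):
            executable.add(start + i)

    executed = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno)
        return tracer

    sys.settrace(tracer)          # record which lines actually run
    try:
        func(*args)
    finally:
        sys.settrace(None)

    # the formula from above: exercised lines / total lines * 100%
    return 100.0 * len(executed & executable) / len(executable)

def sample(a, b, c):
    if a == 10:
        if b > c:
            a = b
        else:
            a = c
    return a

print(statement_coverage(sample, 10, 5, 3))  # 80.0 ("a = c" never runs)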

6. Code Complexity Testing:

Code complexity is a source code complexity measurement that
correlates with the number of coding errors. It is calculated by
developing a control flow graph of the code and measuring the
number of linearly independent paths through a program module.

The lower a program's code complexity, the lower the risk in
modifying it and the easier it is to understand. It can be
represented using the formula below:

Code complexity = E - N + 2*P

where,
E = number of edges in the flow graph,
N = number of nodes in the flow graph,
P = number of connected components (for a single program module
with one exit point, P = 1).

Example:

IF A = 10 THEN
    IF B > C THEN
        A = B
    ELSE
        A = C
    ENDIF
ENDIF
Print A
Print B
Print C

Flow Graph:

The code complexity is calculated from the control flow graph of
this code, which has seven nodes (shapes) and eight edges (lines).
Hence the code complexity is 8 - 7 + 2 = 3.
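The arithmetic can be checked with a one-line helper; a minimal
Python sketch, with the example's edge and node counts plugged in:

def cyclomatic_complexity(edges, nodes, components=1):
    # McCabe's formula from above: E - N + 2*P
    return edges - nodes + 2 * components

print(cyclomatic_complexity(edges=8, nodes=7))  # 3, matching the example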
1.7 Black Box Testing: Requirement Based Testing, Boundary
Value Analysis, Equivalence Partitioning.

1. Requirement Based Testing:

Requirements-based testing is a testing approach in which test
cases, conditions and data are derived from requirements. It
includes functional tests and also non-functional attributes such
as performance, reliability or usability.

Stages in Requirements based Testing:


• Defining Test Completion Criteria - Testing is completed
only when all the functional and non-functional testing is
complete.
• Design Test Cases - A Test case has five parameters namely
the initial state or precondition, data setup, the inputs,
expected outcomes and actual outcomes.
• Execute Tests - Execute the test cases against the system
under test and document the results.
• Verify Test Results - Verify if the expected and actual results
match each other.
• Verify Test Coverage - Verify if the tests cover both
functional and non-functional aspects of the requirement.
• Track and Manage Defects - Any defects detected during the
testing process go through the defect life cycle and are
tracked to resolution. Defect statistics are maintained, which
give the overall status of the project.

Requirements testing process:

• Testing must be carried out in a timely manner.
• The testing process should add value to the software life cycle;
hence it needs to be effective.
• Testing the system exhaustively is impossible; hence the
testing process needs to be efficient as well.
• Testing must provide the overall status of the project; hence it
should be manageable.

2. Boundary Value Analysis:

• Boundary value analysis is a test case design technique to test
boundary values between partitions.
• A boundary value is an input or value on the border of an
equivalence partition.
• It covers start-end, lower-upper, and maximum-minimum values on
the inside and outside boundaries.
• Boundary Value Analysis (BVA) checks the boundary values of
Equivalence Class Partitioning (ECP); hence BVA comes after ECP.
• For example, a tool has user name and password fields and
accepts a minimum of 6 and a maximum of 10 characters. The valid
range is 6-10 characters; the invalid ranges are 5 or fewer
characters and more than 10 characters.
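For the 6-10 character example above, the boundary values can be
generated mechanically. A small Python sketch (the helper name is
hypothetical):

def boundary_values(low, high):
    # values on and around each boundary: low-1, low, low+1 and
    # high-1, high, high+1
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

print(boundary_values(6, 10))  # [5, 6, 7, 9, 10, 11] - 5 and 11 are invalid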

3. Equivalence Partitioning:
• Equivalence partitioning is a software testing technique that
divides the input data of a software unit into partitions of
equivalent data; test cases are then derived from each partition.
• Equivalence partitioning can be applied at any level of testing.
• The software treats all the conditions in one partition as the
same. Hence, equivalence partitioning needs to check only one
condition from each partition.
• Since all the conditions in one partition are the same, if one
condition in a partition works, then we can assume that all the
conditions in that partition work, and vice versa.
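Continuing the 6-10 character example, the three partitions (too
short, valid, too long) each need only one representative test. A
minimal sketch using Python's unittest; the validation function is
hypothetical:

import unittest

def is_valid_username(name):
    # rule from the earlier example: 6 to 10 characters
    return 6 <= len(name) <= 10

class EquivalencePartitionTests(unittest.TestCase):
    def test_too_short_partition(self):    # partition: length < 6
        self.assertFalse(is_valid_username("abc"))

    def test_valid_partition(self):        # partition: length 6..10
        self.assertTrue(is_valid_username("tester1"))

    def test_too_long_partition(self):     # partition: length > 10
        self.assertFalse(is_valid_username("averylongusername"))

if __name__ == "__main__":
    unittest.main()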
Unit-2 (Marks-18)
Types and Levels of Testing
Level of Testing
2.1 Unit Testing: Driver, Stub
Unit testing is a testing technique in which individual modules
are tested by the developer to determine whether there are any
issues. It is concerned with the functional correctness of
standalone modules.
The main aim is to isolate each unit of the system to identify,
analyze and fix defects.

Unit Testing - Advantages:

 Reduces defects in newly developed features and reduces bugs
when changing existing functionality.
 Reduces the cost of testing, as defects are captured at a very
early phase.
 Improves design and allows better refactoring of code.
 Unit tests, when integrated with the build, indicate the
quality of the build as well.
Unit Testing Life Cycle:

STUBS:
Assume you have 3 modules, Module A, Module B and
module C. Module A is ready and we need to test it, but
module A calls functions from Module B and C which are
not ready, so developer will write a dummy module which
simulates B and C and returns values to module A. This
dummy module code is known as stub.
DRIVERS:
Now suppose you have modules B and C ready but module
A which calls functions from module B and C is not ready
so developer will write a dummy piece of code for module
A which will return values to module B and C. This dummy
piece of code is known as driver.
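The two ideas can be shown in a few lines of Python. This is a
minimal sketch with hypothetical module and function names: a stub
stands in for the unfinished Module B, and a driver is throwaway
code that plays the role of the missing caller.

# Module A is under test; it calls a function belonging to Module B,
# which is not ready yet, so a stub stands in for it.
def get_order_status_stub(order_id):
    # stub: returns a canned value instead of real Module B logic
    return "SHIPPED"

def module_a(order_id, get_order_status=get_order_status_stub):
    # the unit under test; its collaborator is injected so that the
    # stub can replace the real Module B function
    return "Order %d: %s" % (order_id, get_order_status(order_id))

# Driver: temporary code that calls module_a, feeds it input and
# checks its output, standing in for the real caller.
if __name__ == "__main__":
    assert module_a(42) == "Order 42: SHIPPED"
    print("module_a passed its unit test")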

Difference between Stubs and Drivers

STUBS | DRIVERS
Used in top-down integration testing. | Used in bottom-up integration testing.
Used when sub-programs are under development. | Used when main programs are under development.
The top-most module is tested first. | The lowest module is tested first.
Simulate the behavior of lower-level modules that are not integrated. | Simulate the behavior of upper-level modules that are not integrated.
Stubs are the called programs. | Drivers are the calling programs.
2.2 Integration Testing: Top-Down Integration, Bottom-
Up Integration, Bi-Directional Integration.

Integration Testing:
Upon completion of unit testing, the units or modules are
integrated, which gives rise to integration testing.
The purpose of integration testing is to verify the functional,
performance, and reliability requirements between the modules
that are integrated.

Objectives of integration testing include:

 To reduce risk
 To verify whether the functional and non-functional
behaviors of the interfaces are as designed and
specified
 To build confidence in the quality of the interfaces
 To find defects (which may be in the interfaces
themselves or within the components or systems)
 To prevent defects from escaping to higher test levels

Top-Down Integration:
In top-down integration testing, testing takes place from top to
bottom. High-level modules are tested first, then low-level
modules, and finally the low-level modules are integrated with the
high-level ones to ensure the system works as intended.

In this type of testing, stubs are used as temporary modules if a
module is not ready for integration testing.

Bottom-Up Integration:

It is the reciprocal of the top-down approach. In bottom-up
integration testing, testing takes place from the bottom up. The
lowest-level modules are tested first, then the high-level modules,
and finally the high-level modules are integrated with the
low-level ones to ensure the system works as intended. Drivers are
used as temporary modules for integration testing.
Bi-Directional Integration:
Bi-Directional or Hybrid integration testing is also known
as Sandwich integration testing. It is the combination of
both Top-down and Bottom-up integration testing.

2.3 Testing and Web Application:

Performance Testing: Load Testing, Stress Testing,
Security Testing, Client Server Testing.

# Load Testing:
Load testing is a performance testing technique in which the
response of the system is measured under various load conditions.
Load testing is performed for normal and peak load conditions.
Load Testing Approach:
 Evaluate performance acceptance criteria
 Identify critical scenarios
 Design the workload model
 Identify the target load levels
 Design the tests
 Execute tests
 Analyze the results

Objectives of Load Testing:
 Response time
 Throughput
 Resource utilization
 Maximum user load
 Business-related metrics
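Several of these objectives (response time, throughput, maximum user
load) can be measured with a very small script. Below is a minimal
Python sketch that fires concurrent requests at a hypothetical URL;
a real load test would use a dedicated tool and a proper workload
model.

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/"  # hypothetical system under test

def timed_request(_):
    # one simulated user action: fetch the page, record response time
    start = time.perf_counter()
    with urllib.request.urlopen(URL) as response:
        response.read()
    return time.perf_counter() - start

def load_test(users, requests_per_user):
    # `users` concurrent threads approximate the target load level
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(timed_request,
                              range(users * requests_per_user)))
    wall = time.perf_counter() - start
    print("requests: %d  avg response: %.3fs  max: %.3fs  "
          "throughput: %.1f req/s"
          % (len(times), sum(times) / len(times), max(times),
             len(times) / wall))

load_test(users=10, requests_per_user=5)  # normal load; raise for peak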
#Stress Testing:
Stress testing is a non-functional testing technique that is
performed as part of performance testing. During stress testing,
the system is monitored after subjecting it to overload to ensure
that the system can sustain the stress.
The recovery of the system from such a phase (after stress) is
very critical, as such a phase is highly likely to occur in the
production environment.
Reasons for conducting stress testing:
 It allows the test team to monitor system performance during
failures.
 To verify whether the system has saved the data before
crashing.
 To verify whether the system prints meaningful error messages
while crashing, or prints random exceptions.
 To verify that unexpected failures do not cause security
issues.
Stress Testing - Scenarios:
 Monitor the system behavior when the maximum number of users
are logged in at the same time.
 All users performing critical operations at the same time.
 All users accessing the same file at the same time.
 Hardware issues, such as a database server going down or some
servers in a server park crashing.
#Security Testing:
Security testing is a testing technique to determine if an
information system protects data and maintains
functionality as intended. It also aims at verifying 6 basic
principles as listed below:
 Confidentiality
 Integrity
 Authentication
 Authorization
 Availability
 Non-repudiation

Security Testing - Techniques:

 Injection
 Broken Authentication and Session Management
 Cross-Site Scripting (XSS)
 Insecure Direct Object References
 Security Misconfiguration
 Sensitive Data Exposure
 Missing Function Level Access Control
 Cross-Site Request Forgery (CSRF)
 Using Components with Known Vulnerabilities
 Unvalidated Redirects and Forwards
Open Source/Free Security Testing Tools:

Product | Vendor | URL
FxCop | Microsoft | https://www.owasp.org/index.php/FxCop
FindBugs | The University of Maryland | http://findbugs.sourceforge.net/
FlawFinder | GPL | http://www.dwheeler.com/flawfinder/
Ramp Ascend | GPL | http://www.deque.com

#Client Server Testing:

This type of testing is usually done for 2-tier applications
(usually developed for a LAN). Here we have a front-end and a
back-end.
In client-server testing we test features of the application such
as the GUI on both sides and the functionality.
The tests performed on these types of applications are:
1. User Interface Testing
2. Functionality Testing
3. Browser Compatibility Testing
4. Load/Stress Testing
5. Interoperability Testing/Intersystem Testing
6. Storage and Data Volume Testing

2.4 Acceptance Testing: Alpha Testing and Beta Testing,
Special Tests: Regression Testing, GUI Testing.

Acceptance testing is a testing technique performed to determine
whether or not the software system has met the requirement
specifications. The main purpose of this test is to evaluate the
system's compliance with the business requirements and verify
whether it has met the required criteria for delivery to end
users.
Types of Acceptance Testing:
#Alpha Testing:
Alpha testing takes place at the developer's site by the
internal teams, before release to external customers. This
testing is performed without the involvement of the
development teams.

Alpha Testing - In SDLC


The following diagram explains the fitment of Alpha testing
in the software development life cycle.
How do we run it?
In the first phase of alpha testing, the software is tested by
in-house developers during which the goal is to catch bugs
quickly.
In the second phase of alpha testing, the software is given
to the software QA team for additional testing.
Alpha testing is often performed for Commercial off-the-
shelf software (COTS) as a form of internal acceptance
testing, before the beta testing is performed.

#Beta Testing:
Beta testing, also known as user testing, takes place at the end
users' site and is carried out by the end users to validate
usability, functionality, compatibility, and reliability.
Beta testing adds value to the software development life cycle as
it allows the "real" customer an opportunity to provide inputs
into the design, functionality, and usability of a product. These
inputs are not only critical to the success of the product but
also an investment into future products when the gathered data is
managed effectively.

Beta Testing - In SDLC


The following diagram explains the fitment of Beta testing
in the software development life cycle:

Beta Testing Dependencies

The success of beta testing depends on a number of factors:
 Test cost
 Number of test participants
 Shipping
 Duration of test
 Demographic coverage
#Regression Testing:
Regression testing is a black box testing technique that consists
of re-executing those tests that are impacted by code changes.
These tests should be executed as often as possible throughout the
software development life cycle.
Types of Regression Tests:
 Final Regression Tests: - "Final regression testing" is
performed to validate a build that hasn't changed for a period
of time. This build is then deployed or shipped to customers.
 Regression Tests: - Normal regression testing is performed to
verify that the build has NOT broken any other parts of the
application through the recent code changes for defect fixing
or enhancement.
Selecting Regression Tests:
 Requires knowledge about the system and how changes affect the
existing functionalities.
 Tests are selected based on the areas of frequent defects.
 Tests are selected to include the areas that have undergone
code changes many times.
 Tests are selected based on the criticality of the features.
Regression Testing Steps:
Regression tests are ideal candidates for automation, which
results in a better Return On Investment (ROI).
 Select the tests for regression.
 Choose the apt tool and automate the regression tests.
 Verify applications with checkpoints.
 Manage regression tests/update when required.
 Schedule the tests.
 Integrate with the builds.
 Analyze the results.
#GUI Testing:
GUI testing is a testing technique in which the application's user
interface is tested to check whether the application behaves as
expected with respect to user interface behavior.
GUI testing covers the application's response to keyboard and
mouse input, and how different GUI objects such as toolbars,
buttons, menu bars, dialog boxes, edit fields and lists respond
to user input.

GUI Testing Guidelines

 Check screen validations
 Verify all navigations
 Check usability conditions
 Verify data integrity
 Verify the object states
 Verify the date field and numeric field formats
GUI Automation Tools
Following are some of the open source GUI automation tools in the
market:

Product | Licensed Under | URL
AutoHotkey | GPL | http://www.autohotkey.com/
Selenium | Apache | http://docs.seleniumhq.org/
Sikuli | MIT | http://sikuli.org
Robot Framework | Apache | www.robotframework.org
watir | BSD | http://www.watir.com/
Dojo Toolkit | BSD | http://dojotoolkit.org/
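As an illustration of GUI automation with one of these tools, here
is a minimal Selenium sketch in Python. The page URL and element
IDs are hypothetical, and a matching browser driver is assumed to
be installed.

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # assumes chromedriver is available
try:
    driver.get("http://localhost:8000/login")  # hypothetical page
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret123")
    driver.find_element(By.ID, "submit").click()
    # verify navigation and object state after the click
    assert "Dashboard" in driver.title
finally:
    driver.quit()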
UNIT-3 Marks-14
Test Management

3.1 Test Planning: Preparing a Test Plan, Deciding Test
Approach, Setting up criteria for testing, Identifying
responsibilities, Staffing, Resource Requirements, Test
Deliverables, Testing Tasks.
3.2 Test Management: Test Infrastructure
Management, Test people Management.
3.3 Test Process: Base Lining a Test Plan, Test Case
Specification.
3.4 Test Reporting: Executing Test Cases, Preparing
Test Summary Report.
3.1 Test Planning:
Test plan: A document describing the scope, approach, resources
and schedule of intended test activities. It identifies, amongst
other things, the test items, the features to be tested, the
testing tasks, who will do each task, the degree of tester
independence, the test environment, the test design techniques,
the entry and exit criteria to be used, the rationale for their
choice, and any risks requiring contingency planning. It is a
record of the test planning process.
Master test plan: A test plan that typically addresses
multiple test levels.
Phase test plan: A test plan that typically addresses
one test phase.
# Preparing a Test Plan:
❖ Determining the scope and the risks that need to be tested and
those that are NOT to be tested.
❖ Documenting the test strategy.
❖ Making sure that the testing activities have been included.
❖ Deciding entry and exit criteria, and evaluating the test
estimate.
❖ Planning when and how to test, deciding how the test results
will be evaluated, and defining the test exit criterion.
❖ Test facts are delivered as part of test execution.
❖ Defining the management information, including the metrics
required, defect resolution and risk issues.
❖ Ensuring that the test documentation generates repeatable test
assets.
# Deciding Test Approach:

There are many strategies that a project can adopt depending on
the context; some of them are:
• Dynamic and heuristic approaches
• Consultative approaches
• Model-based approaches that use statistical information about
failure rates
• Approaches based on risk-based testing, where the entire
development takes place based on the risk
• Methodical approaches, which are based on failures
• Standard-compliant approaches specified by industry-specific
standards

Factors to be considered:
• Risks of the product, risk of failure, the environment and the
company
• Expertise and experience of the people in the proposed tools
and techniques
• Regulatory and legal aspects, such as external and internal
regulations of the development process
• The nature of the product and the domain

# Setting up criteria for testing:

Who does Testing?

It depends on the process and the associated stakeholders of the
project(s). In the IT industry, large companies have a team with
responsibilities to evaluate the developed software in the context
of the given requirements. Moreover, developers also conduct
testing, which is called Unit Testing. In most cases, the
following professionals are involved in testing a system within
their respective capacities −
• Software Tester
• Software Developer
• Project Lead/Manager
• End User
Different companies have different designations for
people who test the software on the basis of their
experience and knowledge such as Software Tester,
Software Quality Assurance Engineer, QA Analyst,
etc.
It is not possible to test the software at any time
during its cycle. The next two sections state when
testing should be started and when to end it during
the SDLC.
When to Start Testing?
An early start to testing reduces the cost and time to rework and
produces error-free software that is delivered to the client.
However, in the Software Development Life Cycle (SDLC), testing
can be started from the Requirements Gathering phase and continued
till the deployment of the software.
It also depends on the development model that is
being used. For example, in the Waterfall model,
formal testing is conducted in the testing phase; but
in the incremental model, testing is performed at the
end of every increment/iteration and the whole
application is tested at the end.
Testing is done in different forms at every phase of
SDLC −
• During the requirement gathering phase, the
analysis and verification of requirements are also
considered as testing.
• Reviewing the design in the design phase with the
intent to improve the design is also considered as
testing.
• Testing performed by a developer on completion of
the code is also categorized as testing.

When to Stop Testing?


It is difficult to determine when to stop testing, as
testing is a never-ending process and no one can
claim that software is 100% tested. The following
aspects are to be considered for stopping the testing
process −
• Testing Deadlines
• Completion of test case execution
• Completion of functional and code coverage to a
certain point
• Bug rate falls below a certain level and no high-
priority bugs are identified
• Management decision

# Identifying responsibilities
Test lead/manager: A test lead is responsible for:
• Defining the testing activities for subordinates –
testers or test engineers.
• All responsibilities of test planning.
• To check if the team has all the necessary
resources to execute the testing activities.
• To check if testing is going hand in hand with the
software development in all phases.
• Prepare the status report of testing activities.
• Required Interactions with customers.
• Updating project manager regularly about the
progress of testing activities.

Test engineers/QA testers/QC testers are responsible for:

• Reading all the documents and understanding what needs to be
tested.
• Based on the information procured in the above step, deciding
how it is to be tested.
• Informing the test lead about what resources will be required
for software testing.
• Developing test cases and prioritizing testing activities.
• Executing all the test cases and reporting defects, defining
severity and priority for each defect.
• Carrying out regression testing every time changes are made to
the code to fix defects.

# Staffing
• Provide information about the test team size and the number of
resources required. The test plan must then give information
about the description and distribution of every task in
high-level terms.
• It should also provide information on the number of individuals
required for each role, and whether multiple roles are required
for a certain number of individuals.
• It is important to state when and for how long each resource
will be required, and to define the resource estimate
calculations accordingly.
# Resource Requirements
A resource requirement is a detailed summary of all types of
resources required to complete the project tasks. Resources could
be the people, equipment and materials needed to complete the
project.
Some of the following factors need to be considered:
• Machine configuration (RAM, processor, disk) needed to run the
product under test.
• Overheads required by test automation tools, if any.
• Supporting tools such as compilers, test data generators and
configuration management tools.
• The different configurations of the supporting software (e.g.
the OS) that must be present.
• Special requirements for running machine-intensive tests such
as load tests and performance tests.
• An appropriate number of licenses for all the software.
# Test Deliverables
List test deliverables, and links to them if available, including
the following:

• Test Plan (this document itself)
• Test Cases
• Test Scripts
• Test Data
• Defect Reports
• Test Reports

# Testing Tasks

There are two main parts of the test management process:

❖ Planning
1. Risk Analysis
Risk analysis is the first step a Test Manager should consider
before starting any project. Because all projects may contain
risks, early risk detection and identification of its solution
will help the Test Manager avoid potential loss in the future
and save on project cost.
2. Test Estimation
An estimate is a forecast or prediction. Test estimation is
approximately determining how long a task would take to
complete. Estimating effort for the test is one of the major
and important tasks in test management.
3. Test Planning
In software testing, a test plan gives detailed testing
information regarding an upcoming testing effort, including:
• Test Strategy
• Test Objective
• Exit/Suspension Criteria
• Resource Planning
• Test Deliverables
4. Test Organization
Now you have a plan, but how will you stick to the plan and
execute it? To answer that question, there is the Test
Organization phase. Generally speaking, you need to organize an
effective testing team: you have to assemble a skilled team to
run the ever-growing testing engine effectively.

❖Execution
5. Test Monitoring and Control
Test Monitoring and Control is the process of
overseeing all the metrics necessary to ensure that
the project is running well, on schedule, and not
out of budget.
Monitoring:
Monitoring is a process of collecting, recording
and reporting information about the project
activity that the project manager and stakeholder
needs to know
Control:
Project Controlling is a process of using data from
monitoring activity to bring actual performance to
planned performance.
In this step, the Test Manager takes action to
correct the deviations from the plan. In some
cases, the plan has to be adjusted according to
project situation.
6. Issue Management
In the life cycle of any project, there will always be unexpected
problems and questions that crop up. For example:
• The company cuts down your project budget.
• Your project team lacks the skills to complete the project.
• The project schedule is too tight for your team to finish the
project by the deadline.
Risks to be avoided while testing:
• Missing the deadline
• Exceeding the project budget
• Losing the customer's trust

7. Test Report and Evaluation
The "Test Evaluation Report" describes the results of testing in
terms of test coverage and exit criteria. The data used in test
evaluation are based on the test results data and the test result
summary.
3.2 Test Management:
# Test Infrastructure Management:
What Is Infrastructure?
IT Infrastructure Ecosystem includes
• Operating Systems platforms (such as Windows,
UNIX, Linux, Mac OS)
• Computer Hardware platforms (such as Dell,
IBM, Sun, HP, Apple)
• Internet platforms (such as Apache, Cisco,
Microsoft IIS, .NET), Data Management and
Storage (such as IBM DB2, Oracle, SQL Server,
MySQL)
• Enterprise Software Applications (such as SAP,
Oracle, Microsoft).
What Is Infrastructure Testing?
Every software requires an infrastructure to perform
its actions. Infrastructure testing is the testing
process that covers hardware, software, and
networks. It reduces the risks of failure.
Why Infrastructure Testing Is Needed?
Infrastructure testing is needed to mitigate the risk of failure
of any hardware or software component. When a new infrastructure
design is prepared for the software, it becomes necessary to
perform this testing to ensure that the new infrastructure
functionality works as intended. Issues are more likely to arise
when a new infrastructure module is integrated with the project.

What Are The Benefits Of Infrastructure Testing?

1. Reduction in production failures.
2. Improvement in defect identification before production
execution; upgrades the quality of infrastructure with zero
defect slippage to production.
3. Quickened test execution, empowering early go
live.
4. It helps in annual cost savings in operations as
well as in business.
5. Confirm that software works in a systematic and
controlled procedure.
6. Reduction in downtime.
7. Improvement in quality of service.
8. Availability of stable environments.
9. Reduction in the cost involved in risks.
10. Better user experience.
Who Can Perform Infrastructure Testing?

When To Perform Infrastructure Testing?


# Test people Management:
People management is an integral part of any project
management and test planning.
People management also requires the ability to hire,
motivate, and retain the right people.
These skills are seldom formally taught.
Testing projects present several additional challenges.
We believe that the success of a testing organization
depends vitally on judicious people management
skills.
Test Lead responsibilities and activities:
• Identify how the test teams formed and aligned
within organization
• Decide the roadmap for the project
• Identify the scope of testing using SRS
documents.
• Discuss test plan, review and approve by
management/ development team.
• Identify required metrics
• Calculate size of project and estimate efforts and
corresponding plan.
• Identify skill gap and balance resources and need
for training education.
• Identify the tools for test reporting, test management and
test automation, and create a healthy environment for all
resources to gain maximum throughput.
Test team responsibilities and activities:
• Initiate the test plan for test case design.
• Conduct review meetings.
• Monitor test progress; check for resources, balancing and
allocation.
• Check for delays in the schedule; discuss and resolve risks,
if any.
• Communicate status to stakeholders and management.
• Bridge the gap between the test team and management.
Consider the following for managing the test team:
• Understand the testers
• The test work environment
• The role of the test team
3.3 Test Process:
#Base Lining a Test Plan
Baseline testing is a type of non-functional testing. This test
measures important characteristics and requirements. A benchmark
analyzes the relative performance of an application.

A baseline is a formal document which acts as a base document for
future work. In layman's terms, to construct a building you
require a base; the same applies to testing. First of all, it is
important to know that baseline testing is non-functional testing,
which means it has nothing to do with testing the functionality of
the application. It acts as a base for whatever future development
follows, be it performance work or test case development.

Advantages of Baseline Testing

• It helps in creating a baseline that acts as a base for
measurements, comparisons and calculations.
• Using this type of testing, many problems related to the
software are easily solved; the process of resolving critical
issues begins from scratch.
• This type of software testing saves a lot of time compared to
other types of performance testing software that take more time
to conduct test cycles.
• It sorts out complex processes.
• Concerning the quality characteristics of specific functions,
the test focuses on quality.
• The tests in model-based testing allow automation, and this
factor adds effectiveness to the testing method.

Importance of Baseline Testing

Baseline testing analyzes benchmarks of performance in relation to
the performance of an application; in other words, it exposes gaps
in the application. A comparison is then made against a new or an
unidentified application.

Baseline Testing Facts

▪ It is a type of non-functional testing.
▪ It refers to the validation of the documents and specifications
on which test cases would be designed. The validation of the
requirement specification is baseline testing.
Baseline testing is a type of testing which follows the
requirements and specifications on the basis of which a tester
writes test cases. It is a non-functional type of testing
performed on the basis of the upper and lower limits of the
application.

#Test Case Specification

A test case is a well-documented procedure designed to test the
functionality of a feature in the system. Designing a test case
requires providing a set of inputs and their corresponding
expected outputs.
Parameters:
1. Test case ID: the identification number given to each test
case.
2. Purpose: defines why the case is being designed.
3. Precondition: any precondition for running the test in the
system can be defined, if required, in the test case.
4. Input: inputs should not be hypothetical. Actual inputs must
be provided, instead of general inputs.
Using the test plan as the basis, the testing team
designs test case specification, which then becomes
the basis for preparing individual test cases. Hence,
a test case specification should clearly identify,
1. The purpose of the test: This lists what features
or part the test is intended for.
2. Items being tested, along with their
version/release numbers as appropriate.
3. Environment that needs to be set up for
running the test cases: This includes the hardware
environment setup, supporting software environment
setup, setup of the product under test.
4. Input data to be used for the test case: The
choice of input data will be dependent on the test
case itself and the technique followed in the test case.
5. Steps to be followed to execute the test: If automated testing
is used, then these steps are translated to the scripting language
of the tool.
6. The expected results that are considered to be
“correct result”.
7. A step to compare the actual result produced
with the expected result:
This step should do an “intelligent” comparison of the
expected and actual results to highlight any
discrepancies.
8. Any relationship between this test and other
test: These can be in the form of dependencies among
the tests or the possibilities of reuse across the tests.
3.4 Test Reporting:
#Executing Test Cases
Test execution is the process of executing the code
and comparing the expected and actual results.
Following factors need to be considered for a test
execution process −
• Based on a risk, select a subset of test suite to be
executed for this cycle.
• Assign the test cases in each test suite to testers
for execution.
• Execute tests, report bugs, and capture test status
continuously.
• Resolve blocking issues as they arise.
• Report status, adjust assignments, and reconsider
plans and priorities daily.
• Report test cycle findings and status.
The following points need to be considered for
Test Execution.
• In this phase, the QA (Quality Assurance) team
performs actual validation of AUT (Application
Under Test) based on prepared test cases and
compares the stepwise result with the expected
result.
• The entry criteria of this phase are the completion of the
Test Plan and the Test Case Development phase; the test data
should also be ready.
• Validation of the test environment setup is always
recommended, through smoke testing, before officially entering
test execution.
• The exit criteria require the successful validation of all
test cases; defects should be closed or deferred, and the test
case execution and defect summary reports should be ready.

#Preparing Test Summary Report

Test reporting is a means of achieving communication through the
testing cycle. There are 3 types of test reporting:
1. Test incident report
2. Test cycle report
3. Test summary report

Test summary report:


The final step in a test cycle is to recommend the
suitability of a product for release. A report that
summarizes the result of a test cycle is the test
summary report. There are two types of test summary
report:
1. Phase wise test summary, which is produced at the
end of every phase
2. Final test summary report.
A summary report should contain:
1. Test summary report identifier.
2. Description: identify the test items being reported in this
report, with test IDs.
3. Variances: mention any deviation from test plans and test
procedures, if any.
4. Summary of results: all the results are mentioned here, with
the resolved incidents and their solutions.
5. Comprehensive assessment and recommendation for release:
should include a fit-for-release assessment and a release
recommendation.

OR
This section includes the summary of testing activity
in general. Information detailed here includes
• The number of test cases executed
• The numbers of test cases pass
• The numbers of test cases fail
• Pass percentage
• Fail percentage
• Comments
This information should be displayed visually by
using color indicator, graph, and highlighted table.
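The pass and fail percentages above are simple arithmetic over the
execution counts. A minimal Python sketch with hypothetical totals:

executed = 170   # test cases executed in the cycle (hypothetical)
passed = 160
failed = executed - passed

print("Executed: %d" % executed)
print("Passed:   %d (%.1f%%)" % (passed, 100.0 * passed / executed))
print("Failed:   %d (%.1f%%)" % (failed, 100.0 * failed / executed))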
Test report is a communication tool between the
Test Manager and the stakeholder. Through the test
report, the stakeholder can understand the project
situation, the quality of product and other things.

The information in such a report can be too abstract. It does not
have any detailed information, and the stakeholders who read it
might be slightly puzzled. They might ask the following sorts of
questions:
• Why were the 30 remaining test cases not executed?
• What are the failed test cases?
• Where is the description of the bugs?
To solve that problem, a good test report should be:

• Detail: You should provide a detailed description


of the testing activity, show which testing you
have performed. Do not put the abstract
information into the report, because the reader
will not understand what you said.
• Clear: All information in the test report should
be short and clearly understandable.
• Standard: The Test Report should follow
the standard template. It is easy for stakeholder
to review and ensure the consistency between
test reports in many projects.
• Specific: Do not write an essay about the project
activity. Describe and summarize the test result
specification and focus on the main point.
UNIT-04 Marks-12

Defect Management
4.1 Defect Classification, Defect Management Process.
4.2 Defect Life Cycle, Defect Template
4.3 Estimate Expected Impact of Defect, Techniques for
finding Defect, Reporting a Defect.

4.1 Defect Classification, Defect Management Process.

Defect Classification
Defects are classified from the QA team's perspective as Priority,
and from the development perspective as Severity (the complexity
of the code needed to fix them). These are two major
classifications that play an important role in the timeframe and
the amount of work that goes into fixing defects.

What is Priority?
Priority is defined as the order in which the defects
should be resolved. The priority status is usually set by
the QA team while raising the defect against the dev
team mentioning the timeframe to fix the defect. The
Priority status is set based on the requirements of the
end users.
For example, if the company logo is incorrectly placed in
the company's web page then the priority is high but it is
of low severity.
Priority Listing
A Priority can be categorized in the following ways −
• Low − This defect can be fixed after the critical ones
are fixed.
• Medium − The defect should be resolved in the
subsequent builds.
• High − The defect must be resolved immediately
because the defect affects the application to a
considerable extent and the relevant modules cannot
be used until it is fixed.
• Urgent − The defect must be resolved immediately
because the defect affects the application or the
product severely and the product cannot be used
until it has been fixed.
What is Severity?
Severity is defined as the impact of the defect on the application
and the complexity of the code needed to fix it, from the
development perspective. It is related to the development aspect
of the product. Severity can be decided based on how bad/crucial
the defect is for the system. The severity status can give an idea
about the deviation in functionality caused by the defect.
Example − For flight operating website, defect in
generating the ticket number against reservation is high
severity and also high priority.
Severity Listing
Severity can be categorized in the following ways −
• Critical /Severity 1 − Defect impacts most crucial
functionality of Application and the QA team cannot
continue with the validation of application under
test without fixing it. For example, App/Product
crash frequently.
• Major / Severity 2 − Defect impacts a functional
module; the QA team cannot test that particular
module but continue with the validation of other
modules. For example, flight reservation is not
working.
• Medium / Severity 3 − Defect has issue with single
screen or related to a single function, but the system
is still functioning. The defect here does not block
any functionality. For example, Ticket# is a
representation which does not follow proper alpha
numeric characters like the first five characters and
the last five as numeric.
• Low / Severity 4 − It does not impact the
functionality. It may be a cosmetic defect, UI
inconsistency for a field or a suggestion to improve
the end user experience from the UI side. For
example, the background color of the Submit button
does not match with that of the Save button.
# Defect Management Process

1. Defect Prevention -- Implementation of techniques, methodology
and standard processes to reduce the risk of defects.

Defect Prevention Process

2. Deliverable Baseline -- Establishment of milestones where
deliverables will be considered complete and ready for further
development work. When a deliverable is baselined, any further
changes are controlled. Errors in a deliverable are not considered
defects until after the deliverable is baselined.
3. Defect Discovery -- Identification and reporting of
defects for development team acknowledgment. A defect
is only termed discovered when it has been documented
and acknowledged as a valid defect by the development
team member(s) responsible for the component(s) in
error.

• Find Defect: Discover defects before they become major
problems.
• Report Defect: Report defects to developers so that they can
be resolved.
• Acknowledge Defect: Obtain development acknowledgement that
the defect is valid and should be addressed.
4. Defect Resolution -- Work by the development team
to prioritize, schedule and fix a defect, and document the
resolution. This also includes notification back to the
tester to ensure that the resolution is verified.

• Prioritize Risk: Developers determine the importance of fixing
a particular defect.
• Schedule Fix and Fix Defect: Developers schedule when to fix a
defect, then fix defects in order of importance.
• Report Resolution: Developers notify all relevant parties how
and when the defect was repaired.

5. Process Improvement -- Identification and analysis of the
process in which a defect originated, to identify ways to improve
the process and prevent future occurrences of similar defects.
Also, the validation process that should have identified the
defect earlier is analyzed to determine ways to strengthen that
process.
6. Management Reporting -- Analysis and reporting of defect
information to assist management with risk management, process
improvement and project management.
4.2 Defect Life Cycle, Defect Template
# Defect Life Cycle

• New - Potential defect that is raised and yet to be


validated.
• Assigned - Assigned against a development team to
address it but not yet resolved.
• Active - The Defect is being addressed by the
developer and investigation is under progress. At
this stage there are two possible outcomes, Deferred
or Rejected.
• Test - The Defect is fixed and ready for testing.
• Verified - The Defect that is retested and the test
has been verified by QA.
• Closed - The final state of the defect that can be
closed after the QA retesting or can be closed if the
defect is duplicate or considered as NOT a defect.
• Reopened - When the defect is NOT fixed, QA
reopens/reactivates the defect.
• Deferred - When a defect cannot be addressed in
that particular cycle it is deferred to future release.
• Rejected - A defect can be rejected for any of three reasons:
duplicate defect, NOT a defect, or non-reproducible.
# Defect Template

ID - Unique identifier given to the defect (usually automated).
Project - Project name.
Product - Product name.
Release Version - Release version of the product (e.g. 1.2.3).
Module - Specific module of the product where the defect was detected.
Detected Build Version - Build version of the product where the defect was detected (e.g. 1.2.3.5).
Summary - Summary of the defect. Keep this clear and concise.
Description - Detailed description of the defect. Describe as much as possible, but without repeating anything or using complex words. Keep it simple but comprehensive.
Steps to Replicate - Step-by-step description of the way to reproduce the defect. Number the steps.
Actual Result - The actual result you received when you followed the steps.
Expected Results - The expected results.
Attachments - Attach any additional information like screenshots and logs.
Remarks - Any additional comments on the defect.
Defect Severity - Severity of the defect.
Defect Priority - Priority of the defect.
Reported By - The name of the person who reported the defect.
Assigned To - The name of the person who is assigned to analyze/fix the defect.
Status - The current status of the defect (e.g. New, Assigned, Closed).
Fixed Build Version - Build version of the product where the defect was fixed (e.g. 1.2.3.9).
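
As a rough illustration, the template above can be modelled as a simple record. The following Python sketch is an assumption-laden example, not the schema of any particular tracking tool.

```python
# A minimal sketch of a defect record based on the template above.
# Field names and defaults are illustrative; real trackers
# (Jira, Bugzilla, etc.) use their own schemas.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Defect:
    defect_id: str
    project: str
    product: str
    release_version: str
    module: str
    detected_build_version: str
    summary: str
    description: str
    steps_to_replicate: List[str]
    actual_result: str
    expected_result: str
    severity: str                 # e.g. High / Medium / Low
    priority: str                 # e.g. High / Medium / Low
    reported_by: str
    assigned_to: str
    status: str = "New"           # initial life-cycle state
    fixed_build_version: str = ""
    attachments: List[str] = field(default_factory=list)
    remarks: str = ""
```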

4.3 Estimate Expected Impact of Defect, Techniques for finding Defect, Reporting a Defect

# Estimate Expected Impact

Once the critical risks are identified, the financial impact of each risk should be estimated. This can be done by assessing the impact, in dollars, if the risk does become a problem, combined with the probability that the risk will become a problem. The product of these two numbers is the expected impact of the risk. The expected impact of a risk (E) is calculated as E = P * I, where:

P = Probability of the risk becoming a problem, and
I = Impact in dollars if the risk becomes a problem.

Once the expected impact of each risk is identified, the risks should be prioritized by the expected impact and the degree to which the expected impact can be reduced. While guesswork will play a major role in producing these numbers, precision is not important. What is important is to identify the risks and determine each risk's order of magnitude.
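
A small Python sketch of this calculation shows how E = P * I drives prioritization; the risk names, probabilities and dollar impacts below are made up for illustration.

```python
# A sketch of expected-impact prioritization using E = P * I.
# All risks and numbers are hypothetical.
risks = [
    {"name": "Missed key requirement",   "P": 0.30, "I": 500_000},
    {"name": "Vendor software fails",    "P": 0.10, "I": 200_000},
    {"name": "Unacceptable performance", "P": 0.25, "I": 150_000},
]

for risk in risks:
    risk["E"] = risk["P"] * risk["I"]   # expected impact in dollars

# Prioritize: highest expected impact first
for risk in sorted(risks, key=lambda r: r["E"], reverse=True):
    print(f"{risk['name']}: E = ${risk['E']:,.0f}")
```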

Large, complex systems will have many critical risks. Whatever can be done to reduce the probability of each individual critical risk becoming a problem to a very small number should be done. Doing this increases the probability of a successful project by increasing the probability that none of the critical risks will become a problem.

One should assume that an individual critical risk has a low probability of becoming a problem only when there is specific knowledge justifying why it is low. For example, the likelihood that an important requirement was missed may be high if developers have not involved users in the project. If users have actively participated in the requirements definition, and the new system is not a radical departure from an existing system or process, the likelihood may be low.
One of the more effective methods for estimating the expected impact of a risk is the annual loss expectation (ALE) formula, discussed below:

• The occurrence of a risk can be called an "event."
• Loss per event can be defined as the average loss for a sample of events.
• The formula states that the ALE equals the loss per event multiplied by the number of events.

For example, if the risk is that the software system will abnormally terminate, then the average cost of correcting an abnormal termination is calculated and multiplied by the expected number of abnormal terminations associated with this risk. For the annual calculation, the number of events should be the number of events per year.
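
A quick worked example of the ALE formula, with hypothetical numbers:

```python
# Worked example of ALE = loss per event * number of events per year.
# Both numbers below are hypothetical.
loss_per_event = 2_500    # average cost ($) of correcting one abnormal termination
events_per_year = 12      # expected abnormal terminations per year

ale = loss_per_event * events_per_year
print(f"Annual loss expectation: ${ale:,}")   # Annual loss expectation: $30,000
```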

# Techniques for finding Defect

The first step in preventing defects is to understand the critical risks facing the project or system. The best way to do this is to identify the types of defects that pose the largest threat. In short, they are the defects that could jeopardize the successful construction, delivery and/or operation of the system. These risks can vary widely from project to project depending on the type of system, the technology, the users of the software, etc. These risks might include:

• Missing a key requirement
• Critical application software that does not function properly
• Vendor-supplied software that does not function properly
• Performance that is unacceptably poor
• Hardware malfunction
• Hardware and/or software that does not integrate properly
• Hardware new to the installation site
• Hardware not delivered on time
• Users unable or unwilling to embrace the new system
• Users' inability to actively participate in the project

It should be emphasized that the purpose of this step is not to identify every conceivable risk, but to identify those critical risks that merit special attention because they could jeopardize the success of the project.

# Reporting a Defect

A Bug Report in Software Testing is a detailed document about bugs found in the software application. A bug report contains every detail about a bug, such as its description, the date when the bug was found, the name of the tester who found it, the name of the developer who fixed it, etc. Bug reports help to identify similar bugs in the future so they can be avoided.

While reporting the bug to the developer, your Bug Report should contain the following information:
• Defect_ID - Unique identification number for the defect.
• Defect Description - Detailed description of the defect, including information about the module in which the defect was found.
• Version - Version of the application in which the defect was found.
• Steps - Detailed steps, along with screenshots, with which the developer can reproduce the defect.
• Date Raised - Date when the defect was raised.
• Reference - References to documents such as requirements, design, architecture, or even screenshots of the error, to help understand the defect.
• Detected By - Name/ID of the tester who raised the defect.
• Status - Status of the defect (more on this later).
• Fixed By - Name/ID of the developer who fixed it.
• Date Closed - Date when the defect was closed.
• Severity - Describes the impact of the defect on the application.
• Priority - Related to defect-fixing urgency. Severity and priority can each be High/Medium/Low, based on the impact of the defect and the urgency with which it should be fixed, respectively.
SAMPLE BUG REPORT

Bug Name: Application crash on clicking the SAVE button while creating a new user.
Bug ID: (It will be automatically created by the bug tracking tool once you save this bug)
Area Path: USERS menu > New Users
Build Number: Version Number 5.0.1
Severity: HIGH (High/Medium/Low) or 1
Priority: HIGH (High/Medium/Low) or 1
Assigned to: Developer-X
Reported By: Your Name
Reported On: Date
Reason: Defect
Status: New/Open/Active (depends on the tool you are using)
Environment: Windows 2003/SQL Server 2005
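
As a rough sketch, a bug report can be checked for completeness before it is submitted. The required-field names below are illustrative assumptions, not a standard.

```python
# A minimal sketch of validating that a bug report carries the fields
# listed above. Field names are illustrative, not any tool's schema.
REQUIRED_FIELDS = [
    "defect_id", "description", "version", "steps",
    "date_raised", "detected_by", "status", "severity", "priority",
]

def validate_bug_report(report: dict) -> list:
    """Return the list of missing required fields (empty if complete)."""
    return [f for f in REQUIRED_FIELDS if not report.get(f)]

report = {
    "defect_id": "DEF-101",
    "description": "Application crash on clicking SAVE while creating a new user",
    "version": "5.0.1",
    "steps": "1. Open USERS menu > New Users  2. Fill the form  3. Click SAVE",
    "date_raised": "2024-01-15",
    "detected_by": "Your Name",
    "status": "New",
    "severity": "High",
    "priority": "High",
}
assert validate_bug_report(report) == []   # report is complete
```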
Unit-5 (Marks-12)
Testing Tools and Measurements
5.1 Manual Testing and need for Automated Testing
Tools
5.2 Advantages and Disadvantages of Using Tools
5.3 Selecting a Testing Tool
5.4 When to use Automated Test Tools, Testing using
Automated Test Tools
5.5 Metrics and Measurements: Types of Metrics,
Product Metrics and Process Metrics, Object Oriented
Metrics in Testing.
5.1 Manual Testing and need for Automated Testing
Tools
Manual testing is the process of testing computer software manually, without using any automation tools.
Limitations of Manual Testing:
• Manual testing requires more time or more resources, and sometimes both.
• Performance testing is impractical in manual testing.
• Less accuracy.
• Executing the same tests again and again is a time-consuming as well as tedious process.
• GUI object size differences, color combinations, etc. are not easy to detect in manual testing.
• Not suitable for large-scale projects and time-bounded projects.
• Batch testing is not possible; for each and every test execution, human user interaction is mandatory.
• Manual test case scope is very limited; if the tests are automated, the scope is unlimited.
• Comparing a large amount of data is impractical.
• Checking the relevance of a search operation is difficult.
• Processing change requests during software maintenance takes more time.

Benefits of Automation Testing:

1. Saves Time/Speed: Due to advanced computing facilities, automation test tools prevail in the speed of processing tests. Automation saves time, as software can execute test cases faster than a human.
2. Reduces the tester's involvement in executing tests: It frees testers to do other work.
3. Repeatability/Consistency: The same tests can be re-run in exactly the same manner, eliminating the risk of human errors such as testers forgetting their exact actions, intentionally omitting steps from the test scripts, or missing out steps from the test script, all of which can result in either defects not being identified or the reporting of invalid bugs (which can again be time-consuming for both developers and testers to reproduce).
4. Simulated Testing: Automated tools can create many concurrent virtual users/data and effectively test the project in the test environment before releasing the product.
5. Test case design: Automated tools can also be used to design test cases. Through automation, better coverage can be guaranteed than if done manually.
6. Reusable: Automated tests can be reused on different versions of the software, even if the interface changes.
7. Avoids human mistakes: Manually executing test cases may introduce errors, but this can be avoided in automation testing.
8. Internal Testing: Testing may require checking for memory leakage or checking the coverage of testing. Automation can do this easily.
9. Cost Reduction: If testing time increases, the cost of the software also increases. Testing tools reduce testing time and therefore cost.
Needs of automation testing:
1. Speed: Think about how long it would take you to manually try a few thousand test cases for the Windows Calculator. You might average a test case every five seconds or so. Automation might be able to run 10, 100, even 1000 times that fast.
2. Efficiency: While an automated tool is running your tests, you are free to be doing anything else. If you have a test tool that reduces the time it takes for you to run your tests, you have more time for test planning and thinking up new tests.
3. Accuracy and Precision: After trying a few hundred test cases, your attention may drop and you will start to make mistakes. A test tool will perform the same test and check the result perfectly, each and every time.
4. Resource Reduction: Sometimes it can be physically impossible to perform a certain test case; the number of people or the amount of equipment required to create the test condition could be prohibitive. A test tool can be used to simulate the real world and greatly reduce the physical resources necessary to perform the testing.
5. Simulation and Emulation: Test tools are used to replace hardware or software that would normally interface to your product, driving it in ways that you choose and in ways that might otherwise be difficult to achieve.
6. Relentlessness: Test tools and automation never tire or give up; they will continuously test the software.
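
To illustrate the repeatability and relentlessness described above, here is a minimal sketch of an automated test using pytest, a widely used Python test framework; the add() function and its test data are stand-ins for real application code.

```python
# A minimal automated, repeatable test using pytest.
# add() is a stand-in for real application code under test.
import pytest

def add(a, b):
    return a + b

@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
    (10**6, 10**6, 2 * 10**6),
])
def test_add(a, b, expected):
    # The same checks run identically on every execution,
    # eliminating the human errors discussed above.
    assert add(a, b) == expected
```

Saved as test_add.py, the whole suite re-runs with a single `pytest` command, as many times as needed.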
Difference between Manual Testing and Automated Testing

Automated Testing:
• If you have to run a set of tests repeatedly, automation is a huge gain.
• Helps in performing "compatibility testing" - testing the software on different configurations.
• Gives you the ability to run automation scenarios to perform regressions in a shorter time.
• Gives you the ability to run regressions on code that is continuously changing.
• It is more expensive to automate; initial investments are bigger than for manual testing.
• You cannot automate everything; some tests still have to be done manually.

Manual Testing:
• If test cases have to be run only a small number of times, it is more likely they will be performed manually.
• Allows the tester to perform more ad-hoc (random) testing.
• Short-term costs are reduced.
• The more time a tester spends testing a module, the greater the odds of finding real user bugs.
• Manual tests can be very time-consuming.
• For every release you must rerun the same set of tests, which can be tiresome.
5.2 Advantages and Disadvantages of Using Tools

Advantages of using testing tools:
1. Reduce the time of testing.
2. Improve bug finding.
3. Deliver quality software/products.
4. Allow tests to be run many times with different data.
5. Give more time for test planning.
6. Save resources or reduce resource requirements.
7. A tool never tires, and an expert can operate many tools at a time.
Disadvantages of using testing tools:
1. It is more expensive to automate; initial investments are bigger than for manual testing.
2. Manual tests can be very time-consuming.
3. You cannot automate everything; some tests still have to be done manually.
4. You cannot always rely on testing tools.

5.3 Selecting a Testing Tool

Criteria for Selecting Test Tools:
The categories for selecting test tools are:
1. Meeting requirements;
2. Technology expectations;
3. Training/skills;
4. Management aspects.

1. Meeting requirements -
There are plenty of tools available in the market, but rarely do they meet all the requirements of a given product or a given organization. Evaluating different tools for different requirements involves significant effort, money, and time. Given the plethora of choices available, a huge delay can be involved in selecting and implementing test tools.

2. Technology expectations -
Test tools in general may not allow test developers to extend/modify the functionality of the framework, so extending the functionality requires going back to the tool vendor and involves additional cost and effort. A good number of test tools require their libraries to be linked with product binaries.

3. Training/skills -
While test tools require plenty of training, very few vendors provide training to the required level. Organization-level training is needed to deploy the test tools, as the users of the test suite are not only the test team but also the development team and other areas like configuration management.
4. Management aspects -
A test tool increases the system requirements and requires the hardware and software to be upgraded. This increases the cost of the already-expensive test tool.

Guidelines for selecting a tool:
1. The tool must match its intended use. Wrong selection of a tool can lead to problems: the efficiency and effectiveness of testing may be lost.
2. Different phases of a life cycle have different quality-factor requirements. Tools required at each stage may differ significantly.
3. Matching a tool with the skills of the testers is also essential. If the testers do not have proper training and skill, they may not be able to work effectively.
4. Select affordable tools. The costs and benefits of various tools must be compared before making the final decision.
5. Backdoor entry of tools must be prevented. Unauthorized entry results in failure of the tool and creates a negative environment for new tool introduction.
5.4 When to use Automated Test Tools, Testing using Automated Test Tools

When to use Automated Test Tools:
• The application has a very vast scope, with a high degree of effort invested in regression testing.
• Costs arising from manual errors need to be optimized.
• The software has multiple versions and releases.
• It is cost-effective in the long run.
• The risk factor is higher for a broader scope of test execution.
• Cost figures and mathematical calculations are included in the software functionality.
• There is a great increase in execution tempo and efficiency, along with software quality.
• There is a shorter turnaround time, even for high-risk software testing.

Testing using Automated Test Tools

Automation testing these days is a must for most software projects, to ensure automatic verification of key functionalities. It also helps teams efficiently run a large number of tests in a short period of time. Listed below are a few tools that help software teams build and execute automated tests:
Examples:
1. Selenium:
Selenium is a popular testing framework used to perform web application testing across various browsers and platforms like Windows, Mac, and Linux. With Selenium, you can create very powerful, browser-centered automation testing scripts which are scalable across different environments. It is compatible with several programming languages and automation testing frameworks.
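
To make this concrete, here is a minimal sketch using Selenium's Python bindings (Selenium 4); the URL, the element name "q", and the title check are assumptions for illustration only.

```python
# A minimal Selenium sketch: open a page, type into a field, check the result.
# The URL, element name and title check are illustrative assumptions.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()   # Selenium 4.6+ can manage the Chrome driver itself
try:
    driver.get("https://example.com/search")          # assumed URL
    box = driver.find_element(By.NAME, "q")           # assumed field name
    box.send_keys("software testing" + Keys.RETURN)
    assert "results" in driver.title.lower()          # assumed page title
finally:
    driver.quit()   # always release the browser
```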
2. Watir:
Watir, pronounced as water, is an open source testing
tool made up of Ruby libraries to automate web
application testing. Loaded with Ruby libraries, it also
supports applications scripted in other languages. You
can link it with databases, export XML files, read files,
spreadsheets, and synchronize code as reusable
libraries. It is a very light-weight open source tool.
3. Ranorex:
Ranorex is a flexible, all-in-one GUI testing tool with which you can execute automated tests flawlessly across all environments and devices. Compared to other GUI testing tools, Ranorex offers a super-smart object-recognition feature that automatically detects any change in the user interface and keeps the test going. Other features include reusable code modules, early bug finding, and integration with other tools.
4. HPE Unified Functional Testing (UFT)
HPE Unified Functional Testing (UFT) software, formerly known as HP QuickTest Professional (QTP), is an automated functional GUI testing tool which allows the automation of user actions on a client-based computer application. It offers features like object recognition, an error-handling mechanism, and automated documentation. It also uses a scripting language to manipulate the objects and controls of the application under test.
5. Tricentis Tosca
Tricentis Tosca is a very popular software testing tool that is used to automate end-to-end testing for software applications. This tool offers a single repository for all functional test artifacts, including requirements, user stories, test data, and virtualization assets. Tosca comes with capabilities like test data provisioning, service virtualization, mobile app testing, and risk coverage.

5.5 Metrics and Measurements: Types of Metrics, Product Metrics and Process Metrics, Object Oriented Metrics in Testing

A metric is a measurement of the degree to which any attribute belongs to a system, product or process. For example, the number of errors per person-hour would be a metric. Thus, software measurement gives rise to software metrics. A measurement is an indication of the size, quantity, amount or dimension of a particular attribute of a product or process. For example, the number of errors in a system is a measurement.

Software measurement is required to:
• Establish the quality of the current product or process.
• Predict future qualities of the product or process.
• Improve the quality of a product or process.
• Determine the state of the project in relation to budget and schedule.
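
To connect the two terms, here is a tiny Python sketch that derives metrics from raw measurements, using the errors-per-person-hour example from the text plus the common defects-per-KLOC density; all numbers are hypothetical.

```python
# Measurements are raw counts; metrics relate one measurement to another.
errors_found = 46         # measurement: number of errors in the system
person_hours = 230        # measurement: effort spent
lines_of_code = 11_500    # measurement: size of the product

errors_per_person_hour = errors_found / person_hours           # metric
defects_per_kloc = errors_found / (lines_of_code / 1000)       # metric (defect density)

print(f"Errors per person-hour: {errors_per_person_hour:.2f}")  # 0.20
print(f"Defects per KLOC: {defects_per_kloc:.2f}")              # 4.00
```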
Types of Metrics:
➢ Process quality
➢ Product quality
➢ Objective Metrics
➢ Subjective Metrics

Process quality:
Activities related to the production of software, tasks or
milestones.
1. Process metrics are collected across all projects and
over long periods of time.
2. They are used for making strategic decisions.
3. The intent is to provide a set of process indicators that
lead to long-term software process improvement.
4. The only way to know how/where to improve any
process is to:
• Measure specific attributes of the process.
• Develop a set of meaningful metrics based on these
attributes.
• Use the metrics to provide indicators that will lead to
a strategy for improvement.
Product quality:
Explicit result of the software development activity: deliverables, products.

1. Product metrics help software engineers to better understand the attributes of models and assess the quality of the software.
2. They help software engineers to gain insight into the
design and construction of the software.
3. Focus on specific attributes of software engineering
work products resulting from analysis, design, coding,
and testing.
4. Provide a systematic way to assess quality based on a
set of clearly defined rules.
5. Provide an “on-the-spot” rather than “after-the-fact”
insight into the software development.

Objective Metrics:

1. They are non-negotiable - that is, the way they are defined does not change with respect to the niche or the type of endeavor they are being applied to.
2. Actual cost (AC) is always the total cost actually incurred in accomplishing a certain activity or a sequence of activities.
