STE chapter notes
Software Testing:
Testing is executing a system in order to identify any gaps,
errors, or missing requirements contrary to the actual
requirements.
Objective of Testing:
Failure:
It is the inability of a system or component to perform required
function according to its specification.
Error:
It is an issue identified internally or during unit testing.
Normally an error occurs when a human action produces
an undesirable result.
Fault:
It is a condition that causes the software to fail to perform its
required function.
Defect:
It is an issue identified by the customer.
Bug:
A bug is the initiation of an error or problem because of which a
fault may occur in the system.
Test Case:
• Test cases involve a set of steps, conditions, and inputs that can be
used while performing testing tasks. The main intent of this activity
is to determine whether the software passes or fails in terms of its
functionality and other aspects. There are many types of test cases,
such as functional, negative, error, logical, physical, and UI test
cases.
• Furthermore, test cases are written to keep track of the testing
coverage of software. Generally, there are no formal templates that
must be used during test case writing. However, the following
components are generally included in every test case −
• Test case ID
• Product module
• Product version
• Revision history
• Purpose
• Assumptions
• Pre-conditions
• Steps
• Expected outcome
• Actual outcome
• Post-conditions
Many test cases can be derived from a single test scenario. In
addition, multiple test cases written for a single piece of
software are collectively known as a test suite.
• Software Tester
• Software Developer
• Project Lead/Manager
• End User
Advantages of V-model
Disadvantages of V-model
Static Testing:
• In static testing, defects are identified without executing
the code.
• This testing is done in the verification process. It consists
of static analysis and the reviewing of documents, for
example reviews, walkthroughs, inspections, etc.
Dynamic Testing:
• In dynamic testing, the software code is executed to show
the results of the tests performed.
• Dynamic testing is done in the validation process, i.e. unit
testing, integration testing, system testing, etc.
1. Inspection:
Goals of Inspection
Goals of walkthrough
• Make the document available for the stakeholders both
outside and inside the software discipline for collecting the
information about the topic under documentation.
• Describe and evaluate the content of the document.
• Study and discuss the validity of possible alternatives and
proposed solutions.
3. Technical Review:
• Technical review is a discussion meeting that focuses on
the technical content of the document. It is a less formal
review.
• It is guided by a trained moderator or a technical expert.
i) Requirement-based testing
ii) Business-process-based testing
i) Requirement-based testing
• In requirement based testing, the requirements are prioritized
depending on the risk criteria.
• It ensures that important and critical tests are included in the
testing efforts.
ii) Business-process-based testing
• In this testing, knowledge of the business processes is used.
• It describes the framework involved in everyday use of the system.
The lower the program's code complexity, the lower the risk in
modifying it and the easier it is to understand. Cyclomatic
complexity can be represented using the formula:
V(G) = E − N + 2
where E is the number of edges and N is the number of nodes in
the program's flow graph.
Example :
IF A = 10 THEN
    IF B > C THEN
        A = B
    ELSE
        A = C
    ENDIF
ENDIF
Print A
Print B
Print C
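To illustrate, the pseudocode above can be translated into Python (the function name compute_a is invented for this sketch). Its two nested decisions give a cyclomatic complexity of 3, so three test cases cover all independent paths:

```python
def compute_a(a, b, c):
    # Python version of the pseudocode: two nested decisions -> V(G) = 3
    if a == 10:        # decision 1
        if b > c:      # decision 2
            a = b
        else:
            a = c
    return a, b, c

# One test case per independent path:
#   path 1: a != 10          -> a is unchanged
#   path 2: a == 10, b > c   -> a becomes b
#   path 3: a == 10, b <= c  -> a becomes c
```

Each of the three test cases exercises a distinct route through the flow graph.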
Flow Graph:
3. Equivalence Partitioning:
• In equivalence partitioning, the input data of a software unit
is divided into partitions of equivalent data, and test cases
are derived from those partitions.
• Equivalence partitioning can be applied at any level of testing.
• The software treats all the conditions in one partition as the
same. Hence, equivalence partitioning needs to check only
one condition from each partition.
• Since all the conditions in one partition are the same, if one
condition in a partition works we can assume that all the
conditions in that partition work, and vice versa.
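As a minimal sketch, assume a hypothetical validation rule that accepts ages from 18 to 60 inclusive (the rule and function name are invented for illustration). The input space splits into three partitions, and one representative value per partition is enough:

```python
def is_valid_age(age):
    # Hypothetical rule: accept ages 18..60 inclusive (illustrative only)
    return 18 <= age <= 60

# Three equivalence partitions, one representative value each:
#   invalid low  : age < 18   -> representative 10
#   valid        : 18..60     -> representative 30
#   invalid high : age > 60   -> representative 75
```

If the representative of a partition passes, every other value in that partition is assumed to behave the same way.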
Unit-2 (Marks-18)
Types and Levels of Testing
Level of Testing
2.1 Unit Testing: Driver, Stub
Unit testing is a technique in which individual modules are
tested by the developer himself to determine whether there
are any issues. It is concerned with the functional
correctness of standalone modules.
The main aim is to isolate each unit of the system to
identify, analyze, and fix defects.
STUBS:
Assume you have 3 modules, Module A, Module B and
module C. Module A is ready and we need to test it, but
module A calls functions from Module B and C which are
not ready, so developer will write a dummy module which
simulates B and C and returns values to module A. This
dummy module code is known as stub.
DRIVERS:
Now suppose you have modules B and C ready but module
A which calls functions from module B and C is not ready
so developer will write a dummy piece of code for module
A which will return values to module B and C. This dummy
piece of code is known as driver.
STUBS − Stubs are used when sub-programs are under development.
DRIVERS − Drivers are used when main programs are under development.
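The stub idea above can be sketched in Python. Here total_cost plays the role of Module A, while the stubs stand in for the unfinished Modules B and C; all names and values are invented for illustration:

```python
# Stubs simulating the unfinished Modules B and C with canned values
def stub_get_price(item):        # stands in for Module B
    return 100                   # hard-coded price

def stub_get_tax(amount):        # stands in for Module C
    return amount // 10          # simplified canned tax

# "Module A" under test: depends on a price function and a tax function
def total_cost(item, get_price, get_tax):
    price = get_price(item)
    return price + get_tax(price)
```

Module A can now be unit-tested in isolation: total_cost("book", stub_get_price, stub_get_tax) returns 110 regardless of whether Modules B and C exist yet.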
Integration Testing:
Upon completion of unit testing, the units or modules are
integrated, which gives rise to integration testing.
The purpose of integration testing is to verify the
functional, performance, and reliability requirements
between the integrated modules.
To reduce risk
To verify whether the functional and non-functional
behaviors of the interfaces are as designed and
specified
To build confidence in the quality of the interfaces
To find defects (which may be in the interfaces
themselves or within the components or systems)
To prevent defects from escaping to higher test levels
Top-Down Integration:
In Top-Down Integration Testing, testing takes place from
top to bottom. High-level modules are tested first, then
low-level modules, and finally the low-level modules are
integrated with the high-level ones to ensure the system
works as intended.
Bottom-Up Integration:
Throughput
Resource utilization
Maximum user load
Business-related metrics
#Stress Testing:
Stress testing is a non-functional testing technique
performed as part of performance testing. During stress
testing, the system is monitored after being subjected to
overload, to ensure that it can sustain the stress.
The recovery of the system from such a phase (after stress)
is very critical, as such a failure is highly likely to happen in
the production environment.
Reasons for conducting Stress Testing:
It allows the test team to monitor system performance
during failures.
To verify whether the system has saved the data before
crashing.
To verify whether the system prints meaningful error
messages while crashing, or prints random exceptions.
To verify if unexpected failures do not cause security
issues.
Stress Testing - Scenarios:
Monitor the system behavior when the maximum number of
users is logged in at the same time.
All users performing the critical operations at the same
time.
All users Accessing the same file at the same time.
Hardware issues, such as the database server going down or
some of the servers in a server farm crashing.
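The "all users performing the critical operation at the same time" scenario can be simulated in miniature with threads. This is a toy sketch, not a real load-testing tool; the user count, operation count, and shared counter are all invented for illustration:

```python
import threading

counter = 0
lock = threading.Lock()

def critical_operation(n_ops):
    """Each simulated user performs n_ops updates on a shared resource."""
    global counter
    for _ in range(n_ops):
        with lock:               # without the lock, updates could be lost
            counter += 1

# Simulate 50 users performing the same critical operation at once
threads = [threading.Thread(target=critical_operation, args=(1000,))
           for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# counter should equal 50 * 1000 if the system sustains the load
```

Checking that no updates were lost under concurrent access is exactly the kind of behavior stress testing monitors.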
#Security Testing:
Security testing is a testing technique to determine if an
information system protects data and maintains
functionality as intended. It also aims at verifying 6 basic
principles as listed below:
Confidentiality
Integrity
Authentication
Authorization
Availability
Non-repudiation
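As a small illustration of the integrity principle in the list above, a SHA-256 digest can reveal whether data was modified in transit (the messages here are invented for the sketch):

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest used to detect tampering (integrity check)."""
    return hashlib.sha256(data).hexdigest()

original = b"transfer 100 to account 42"
stored_digest = digest(original)

# On receipt, recompute and compare: any modification changes the digest.
tampered = b"transfer 900 to account 42"
```

If the recomputed digest does not match the stored one, the data has lost integrity.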
#Beta Testing:
Beta testing, also known as user testing, takes place at the
end users' site and is performed by the end users to validate
the usability, functionality, compatibility, and reliability of
the software.
Beta testing adds value to the software development life
cycle as it allows the "real" customer an opportunity to
provide inputs into the design, functionality, and usability
of a product. These inputs are not only critical to the
success of the product but also an investment into future
products when the gathered data is managed effectively.
# Identifying responsibilities
Test lead/manager: A test lead is responsible for:
• Defining the testing activities for subordinates –
testers or test engineers.
• All responsibilities of test planning.
• To check if the team has all the necessary
resources to execute the testing activities.
• To check if testing is going hand in hand with the
software development in all phases.
• Prepare the status report of testing activities.
• Required Interactions with customers.
• Updating project manager regularly about the
progress of testing activities.
# Staffing
• Provide information about the test team size and the
number of resources required. The test plan must then
describe the distribution of every task in high-level
terms.
• It should also provide the number of individuals
required for each role, and whether multiple roles are
required for a certain number of individuals.
• It is important to state when and for how long each
resource will be required, and to define the resource
estimate calculations accordingly.
# Resource Requirements
Resource requirement is a detailed summary of all
types of resources required to complete project tasks.
Resources could be the people, equipment, and materials
needed to complete a project.
Some of the following factors need to be considered:
• Machine configuration (RAM, processor, disk)
needed to run the product under test.
• Overheads required by test automation tools, if
any
• Supporting tools such as compilers, test data
generators, configuration management tools.
• The different configurations of the supporting
software (e.g. OS) that must be present
• Special requirements for running machine-
intensive tests such as load tests and
performance tests.
• Appropriate number of licenses of all the software
# Test Deliverables
List test deliverables, and links to them if available,
including the following:
# Testing Tasks
• Resource Planning
• Test Deliverables
4. Test Organization
Now you have a Plan, but how will you stick to the
plan and execute it? To answer that question, you
have Test Organization phase.
Generally speaking, you need to organize an
effective Testing Team. You have to assemble a
skilled team to run the ever-growing testing engine
effectively.
❖Execution
5. Test Monitoring and Control
Test Monitoring and Control is the process of
overseeing all the metrics necessary to ensure that
the project is running well, on schedule, and within
budget.
Monitoring:
Monitoring is the process of collecting, recording,
and reporting information about project activity
that the project manager and stakeholders need to
know.
Control:
Project controlling is the process of using data from
the monitoring activity to bring actual performance
in line with planned performance.
In this step, the Test Manager takes action to
correct the deviations from the plan. In some
cases, the plan has to be adjusted according to
project situation.
6. Issue Management
In the life cycle of any project, there will always be
unexpected problems and questions that crop up.
For example:
• The company cuts down your project budget
• Your project team lacks the skills to complete the
project
• The project schedule is too tight for your team to
finish the project by the deadline
Risks to be avoided while testing:
• Missing the deadline
• Exceeding the project budget
• Losing the customer's trust
OR
This section includes the summary of testing activity
in general. Information detailed here includes
• The number of test cases executed
• The number of test cases passed
• The number of test cases failed
• Pass percentage
• Fail percentage
• Comments
This information should be displayed visually using
color indicators, graphs, and highlighted tables.
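The pass and fail percentages above are straightforward to compute; a small sketch (the function name summarize and the sample figures are invented for illustration):

```python
def summarize(executed, passed, failed):
    """Compute the pass/fail percentages reported in a test summary."""
    return {
        "pass_pct": round(100.0 * passed / executed, 1),
        "fail_pct": round(100.0 * failed / executed, 1),
    }

# e.g. 200 test cases executed, 170 passed, 30 failed
report = summarize(200, 170, 30)
```

These figures are what the color indicators and graphs in the report visualize.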
Test report is a communication tool between the
Test Manager and the stakeholder. Through the test
report, the stakeholder can understand the project
situation, the quality of product and other things.
Defect Management
4.1 Defect Classification, Defect Management Process.
4.2 Defect Life Cycle, Defect Template
4.3 Estimate Expected Impact of Defect, Techniques for
finding Defect, Reporting a Defect.
What is Priority?
Priority is defined as the order in which the defects
should be resolved. The priority status is usually set by
the QA team while raising the defect against the dev
team mentioning the timeframe to fix the defect. The
Priority status is set based on the requirements of the
end users.
For example, if the company logo is incorrectly placed on
the company's web page, the priority is high but the
severity is low.
Priority Listing
A Priority can be categorized in the following ways −
• Low − This defect can be fixed after the critical ones
are fixed.
• Medium − The defect should be resolved in the
subsequent builds.
• High − The defect must be resolved immediately
because the defect affects the application to a
considerable extent and the relevant modules cannot
be used until it is fixed.
• Urgent − The defect must be resolved immediately
because the defect affects the application or the
product severely and the product cannot be used
until it has been fixed.
What is Severity?
Severity is defined as the impact of the defect on the
application and the complexity of the code needed to fix it,
from the development perspective. It is related to the
development aspect of the product. Severity can be decided
based on how bad/crucial the defect is for the system. The
severity status can give an idea of the deviation in
functionality caused by the defect.
Example − For flight operating website, defect in
generating the ticket number against reservation is high
severity and also high priority.
Severity Listing
Severity can be categorized in the following ways −
• Critical / Severity 1 − The defect impacts the most crucial
functionality of the application, and the QA team cannot
continue with validation of the application under test
without fixing it. For example, the app/product crashes
frequently.
• Major / Severity 2 − The defect impacts a functional
module; the QA team cannot test that particular
module but can continue with validation of other
modules. For example, flight reservation is not
working.
• Medium / Severity 3 − The defect affects a single screen
or a single function, but the system is still functioning
and no functionality is blocked. For example, the ticket
number does not follow the expected format of five
alphabetic characters followed by five numeric
characters.
• Low / Severity 4 − It does not impact the
functionality. It may be a cosmetic defect, UI
inconsistency for a field or a suggestion to improve
the end user experience from the UI side. For
example, the background color of the Submit button
does not match with that of the Save button.
# Defect Management Process
Module − Specific module of the product where the defect was detected.
Detected Build Version − Build version of the product where the defect was detected (e.g. 1.2.3.5).
Actual Result − The actual result you received when you followed the steps.
Assigned To − The name of the person assigned to analyze/fix the defect.
Status − The current status of the defect.
Fixed Build Version − Build version of the product where the defect was fixed (e.g. 1.2.3.9).
• Hardware new to installation site
o Hardware not delivered on-time
#Reporting a Defect
A Bug Report in Software Testing is a detailed document
about bugs found in the software application. A bug report
contains every detail about a bug, such as its description,
the date it was found, the name of the tester who found it,
the name of the developer who fixed it, etc. Bug reports
help identify similar bugs in the future so they can be
avoided.
While reporting the bug to the developer, your Bug Report
should contain the following information:
• Defect_ID - Unique identification number for the
defect.
• Defect Description - Detailed description of the
Defect including information about the module in
which Defect was found.
• Version - Version of the application in which defect
was found.
• Steps - Detailed steps along with screenshots with
which the developer can reproduce the defects.
• Date Raised - Date when the defect is raised
• Reference − where you provide references to documents
like requirements, design, architecture, or even
screenshots of the error to help understand the
defect
• Detected By - Name/ID of the tester who raised the
defect
• Status − Status of the defect; more on this later
• Fixed by - Name/ID of the developer who fixed it
• Date Closed - Date when the defect is closed
• Severity which describes the impact of the defect on
the application
• Priority which is related to defect-fixing urgency.
Severity and Priority can each be High/Medium/Low,
based respectively on the impact of the defect and the
urgency with which it should be fixed.
SAMPLE BUG REPORT
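A hypothetical example covering the fields above, with every value invented for illustration, might look like:

```
Defect_ID    : D-1042
Description  : Submit button unresponsive on the Login page (Login module)
Version      : 2.1.0
Steps        : 1. Open the Login page  2. Enter valid credentials  3. Click Submit
Date Raised  : 12-Mar-2024
Detected By  : T-07 (tester)
Status       : New
Severity     : Major
Priority     : High
```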
2. Technology expectations-
3. Training/skills-
Process quality:
Activities related to the production of software, tasks or
milestones.
1. Process metrics are collected across all projects and
over long periods of time.
2. They are used for making strategic decisions.
3. The intent is to provide a set of process indicators that
lead to long-term software process improvement.
4. The only way to know how/where to improve any
process is to:
• Measure specific attributes of the process.
• Develop a set of meaningful metrics based on these
attributes.
• Use the metrics to provide indicators that will lead to
a strategy for improvement.
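As one concrete example of measuring a process attribute, defect density (defects per thousand lines of code, KLOC) is commonly tracked across projects and over time; a minimal sketch with invented figures:

```python
def defect_density(defects_found, kloc):
    """Defects per thousand lines of code -- a common process indicator."""
    return defects_found / kloc

# e.g. 45 defects found in a 30 KLOC project
density = defect_density(45, 30)
```

Comparing this indicator across projects and releases shows whether process changes are actually reducing the defect rate.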
Product quality:
Objective Metrics: