
Software Testing

What is software Testing?

• Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.
• Testing is the process of executing the program in order to find errors.
• A successful test is one that finds an error.
• Testing can show the presence of bugs, but not their absence.
What is software Testing?

• Software testing is the process of executing a software system to determine if it matches its specification and executes correctly in its intended environment.
• It is the process of exercising or evaluating a system or system component by manual or automatic means to verify that it satisfies specified requirements or to identify differences between expected and actual results.
• Testing is for finding errors, not to demonstrate that the software works.


Testing Terminologies

Error
An error refers to a human mistake or defect in the software code or design. It is the root cause that leads to incorrect or unexpected behavior in a system.

Examples:
• A typo in the source code (e.g., using = instead of ==).
• Incorrect algorithm implementation (e.g., using an incorrect formula).
• Misunderstanding of requirements by the developer.

Key Point: Errors occur at the development stage.

Testing Terminologies

Fault (Bug/Defect)
A fault is the manifestation/appearance of an error in the software. It is a
defect or bug in the program’s code or design that can potentially cause
the software to behave incorrectly.

Examples:
• Division by zero in a function.
• Array index out of bounds.
• Incorrect business logic implementation.

Key Point: Faults are introduced during coding or design and may not always be detected immediately.
Testing Terminologies

Failure
A failure occurs when the system does not perform its intended function
as specified. It is the actual incorrect output or behavior of the software
observed during execution, caused by a fault.

Examples:
• The application crashes when a user enters invalid input.
• Incorrect data is displayed in the user interface.
• A web service returns the wrong result or error message.

Key Point: Failures are observed during runtime when the system is in use.
Relationship Between Error, Fault, and Failure

• An error made by a developer (human mistake) introduces a fault in the code.
• When the faulty code is executed, it leads to a failure, where the system produces an incorrect or unexpected result.

Example Scenario:
Error: A developer accidentally uses <= instead of < in a conditional check.
Fault: The code now has a logical defect because it does not handle boundary conditions correctly.
Failure: When the software is executed with boundary input, the system does not behave as expected (e.g., it includes an extra item in a list).
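The error, fault, and failure chain above can be illustrated with a short Python sketch (a hypothetical example; the function name and data are illustrative, not from the slides):

```python
def items_below_limit(items, limit):
    # Intended behavior (specification): return items strictly less than limit.
    # Error (human mistake): the developer typed <= instead of <.
    # That mistake introduces a fault in this line of code:
    return [item for item in items if item <= limit]

# Executing the faulty code with a boundary input exposes the failure:
# the specification expects [5], but the program returns [5, 10] --
# an extra boundary item is included in the list.
result = items_below_limit([5, 10, 15], 10)
```

Note that the fault is latent: calling `items_below_limit([5, 15], 10)` returns `[5]` as specified, so only a boundary input turns the fault into an observable failure.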
Software Testing

[Diagram: software testing is divided into methods and strategies; the methods fall into two families, white-box methods and black-box methods.]

Black-Box Testing

[Diagram: input events are fed to the system, and the resulting output is compared against the requirements.]
Black-Box Testing

• Black Box Testing is a software testing method where the tester evaluates the functionality of an application without looking at the internal code, structure, or implementation details.
• Black-box design treats the system as a black box.
• It is also known as Functional Testing, Behavioral Testing, Data-Driven Testing, or Input/Output-Driven Testing.
• Give input to the system, get the output, and compare the output with the specification; if it matches, the system is fine, otherwise there is an error.
Techniques Used in Black Box Testing

The following are the main black-box testing techniques:
• Equivalence Class Partitioning
• Boundary Value Analysis
Techniques Used in Black Box Testing

Equivalence Class Partitioning
Divides input data into different equivalence classes (valid and invalid). Only one test case from each class is tested, as it is assumed that all values within a class will behave similarly.

Example:
For an age input field (0-120), the classes are:
• Valid input class: values 0-120 (e.g., 25)
• Invalid input class 1: values < 0 (below range)
• Invalid input class 2: values > 120 (above range)

Test cases might be 5, -10, and 125 — one representative per class.
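The class-per-test-case idea can be sketched as a small Python test (the `is_valid_age` validator is a hypothetical stand-in for the system under test):

```python
def is_valid_age(age):
    # Hypothetical validator for the 0-120 age input field.
    return 0 <= age <= 120

# One representative value per equivalence class:
assert is_valid_age(5) is True       # valid class (0-120)
assert is_valid_age(-10) is False    # invalid class: below range
assert is_valid_age(125) is False    # invalid class: above range
```

Any other value from the same class (e.g., 25 instead of 5) is assumed to behave the same way, which is what lets partitioning keep the test count small.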


Equivalence Class Partitioning

• Partition system inputs and outputs into 'equivalence sets'; e.g.,
  – If the input is a 5-digit integer between 10,000 and 99,999, then the equivalence partitions are:
    • <10,000
    • 10,000-99,999
    • >99,999
• Choose test cases at the boundaries of these sets:
  – 00000, 09999, 10000, 10001, 99998, 99999, and 100000
Equivalence Partitioning

[Diagram: number of input values — three partitions: less than 4, between 4 and 10, and more than 10, with sample test values 3, 7, and 11.]

[Diagram: input values — three partitions: less than 10000, between 10000 and 99999, and more than 99999, with sample test values 9999, 50000, and 100000.]
Boundary Value Analysis

• For each range [R1, R2] listed in either the input or output specifications, choose five cases:
  – Values less than R1
  – Values equal to R1
  – Values greater than R1 but less than R2
  – Values equal to R2
  – Values greater than R2

Example: For an input range of 1 to 100, test with values like 0, 1, 55, 100, and 101.
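The five boundary-value cases can be written out as a quick Python sketch (the `in_range` check is a hypothetical stand-in for a system whose specification says the input must be in 1-100):

```python
def in_range(value):
    # Hypothetical range check for the specified input range [1, 100].
    return 1 <= value <= 100

# Boundary-value cases for the range [1, 100]:
cases = {
    0: False,    # just below R1
    1: True,     # equal to R1
    55: True,    # between R1 and R2
    100: True,   # equal to R2
    101: False,  # just above R2
}
for value, expected in cases.items():
    assert in_range(value) is expected
```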
Example of Black Box Testing

Scenario: Login Functionality of a Web Application
Let's consider a simple login functionality where the user needs to enter a username and password.
Functionality to Test: A user should be able to log in if they enter the correct username and password.
Test Cases:

TC01: Valid login credentials
  Input: Username: user1, Password: Password123
  Expected Output: Login successful; redirects to homepage
TC02: Invalid username
  Input: Username: invalidUser, Password: password123
  Expected Output: Error message: "Invalid credentials"
TC03: Invalid password
  Input: Username: user1, Password: wrongPass
  Expected Output: Error message: "Invalid credentials"
TC04: Empty username and password
  Input: Username: (empty), Password: (empty)
  Expected Output: Error message: "Fields cannot be empty"
TC05: Password field case sensitivity
  Input: Username: user1, Password: password123
  Expected Output: Error message: "Invalid credentials"
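The test cases above can be sketched as executable black-box checks in Python. The `login` function here is a hypothetical stand-in for the application; in real black-box testing only its observable input/output behavior would be known, not its internals:

```python
# Hypothetical system under test: a login function with one known account.
USERS = {"user1": "Password123"}

def login(username, password):
    if not username or not password:
        return 'Error: "Fields cannot be empty"'
    if USERS.get(username) == password:
        return "Login successful"
    return 'Error: "Invalid credentials"'

# Black-box checks: feed inputs, compare outputs with the specification.
assert login("user1", "Password123") == "Login successful"        # TC01
assert "Invalid" in login("invalidUser", "password123")           # TC02
assert "Invalid" in login("user1", "wrongPass")                   # TC03
assert "empty" in login("", "")                                   # TC04
assert "Invalid" in login("user1", "password123")                 # TC05 (case sensitivity)
```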
Non-Functional Testing

Non-functional testing focuses on the quality attributes of a software application, such as performance, usability, reliability, and scalability. It verifies how the system performs under specific conditions rather than what it does (functionality). The goal is to ensure the system meets predefined non-functional requirements and provides a good user experience.
Non-Functional Testing

1. Performance Testing:
Objective: Assess how the system performs under various loads. Performance testing assesses the application's performance under specific conditions. It focuses on aspects such as response time, load time, and throughput rates under varying levels of user traffic.
Example: Measuring the response time and throughput of the application as user traffic varies.

2. Load Testing:
Objective: Assess how the system performs under peak loads.
Example: Checking if a website can handle 10,000 simultaneous users without significant slowdown.
Non-Functional Testing

3. Stress Testing:
Objective: Stress testing pushes the software beyond its normal operational
capacity, often to a breaking point, to see how it handles extreme conditions. This
helps identify the application's upper limits and how it fails under stress.
Determine the system’s stability by testing it beyond its limits.

Example: Increasing the user load until the application crashes to identify the
breaking point.

4. Volume Testing:
Objective: To ensure that a software program or system can handle a large volume of data.
Example: if the website is developed to handle traffic of 500 users, volume testing will check whether the site is able to handle 500 users or not.
Non-Functional Testing

5. Security Testing:
Objective: Security testing is critical in identifying vulnerabilities,
threats, and risks that could potentially lead to data loss,
breaches, or other security incidents. It ensures that the software
can protect data and maintain functionality as intended.

6. Usability Testing:
Objective: This focuses on the user's ease of using the application,
navigability, and overall user experience. Usability testing aims to
ensure that the software is intuitive and user-friendly.
Non-Functional Testing

7. Recovery Testing:
Objective: To ensure that a software program or system can be recovered from a failure or data loss.
For example, when the application is running and the computer is restarted, check the validity of the application's integrity.

Non-Functional Testing

8. Compatibility Testing:
Objective: To ensure that a software program or system is compatible with other software programs or systems.
For example, the tester checks that the software is compatible with other software, operating systems, etc.

9. Reliability Testing:
Objective: To check that the application can perform failure-free operation for the specified period of time in the given environmental conditions.

Non-Functional Testing

10. Scalability Testing:
Objective: Evaluate the software's ability to scale up or down in response to the application's demands. Ensures the application can handle growth in data, processing capacity, or number of users.

11. Spike Testing:
Objective: Spike testing is a type of performance testing used to evaluate how a system behaves under a sudden, extreme increase (spike) in load. The purpose is to observe how the system handles this abrupt surge in traffic and to see if it can quickly recover once the load drops back to normal levels.
Spike testing is a subset of stress testing, but it specifically focuses on sharp increases in load rather than a gradual increase.
White-Box Testing
• Also known as Structural Testing, Clear-Box Testing, Open-Box Testing, Logic-Driven Testing, Transparent Testing, or Glass-Box Testing.
• It is a software testing method where the internal structure, design, and code of the software are tested.
• Using white-box testing methods, the software engineer can derive test cases that:
  – Guarantee that all independent paths within a module have been exercised at least once.
  – Exercise all logical decisions on their true and false sides.
  – Execute all loops at their boundaries and within their operational bounds.
  – Exercise internal data structures to ensure their validity.
• In white-box testing, the internal structure of the software is taken into account to derive the test cases.
Key Features of White Box Testing

• Knowledge of the internal code: The tester must understand the code logic, data flow, and control flow.
• Focus on code coverage: It aims to cover as much of the code as possible, including all branches, loops, and paths.
• Performed by developers or testers: Since it requires knowledge of the code, it is often conducted by developers or experienced testers.
Code Coverage
• Code coverage is a software testing metric that measures
the percentage of source code executed during testing.
• It helps ensure that your tests exercise as much of the code
as possible, reducing the risk of hidden bugs.
• It provides insights into which parts of the code were
executed and which were not, helping identify areas that
need additional tests.
Coverage Metrics
• Statement coverage
• Branch coverage
• Path coverage
• Condition coverage
Statement Coverage

Choose a test set T such that by executing program P for each test
case in T, each basic statement of P is executed at least once
Statement Coverage

areTheyPositive(int x, int y)
{
  if (x >= 0)
    print("x is positive");
  else
    print("x is negative");
  if (y >= 0)
    print("y is positive");
  else
    print("y is negative");
}

The following test set gives us statement coverage:
T1 = {(x=12, y=5), (x=-1, y=35), (x=115, y=-13), (x=-91, y=-2)}

There is a smaller test set which gives us statement coverage too:
T2 = {(x=12, y=-5), (x=-1, y=35)}

There is a difference between these two test sets, though.
Control Flow Graphs (CFGs)
• Nodes in the control flow graph are basic blocks
  – A basic block is a sequence of statements always entered at the beginning of the block and exited at the end
• Edges in the control flow graph represent the control flow

if (x < y) {
  x = 5 * y;
  x = x + 3;
}
else
  y = 5;
x = x + y;

[CFG: B0 evaluates (x < y); the Y edge leads to B1 {x = 5 * y; x = x + 3}, the N edge leads to B2 {y = 5}; both B1 and B2 flow into B3 {x = x + y}.]

• Each block has a sequence of statements
• No jump from or to the middle of a block
• Once a block starts executing, it will execute till the end
Branch Coverage
• Construct the control flow graph
• Select a test set T such that by executing program P for each test case d in T, each edge of P's control flow graph is traversed at least once

assignAbsolute(int x)
{
  if (x < 0)
    x := -x;
  z := x;
}

[CFG: B0 evaluates (x < 0); the true edge leads to B1 {x := -x}, the false edge leads directly to B2 {z := x}; B1 also flows into B2.]

The test set {x = -1} does not traverse the false edge (B0 to B2), hence it does not give branch coverage.
The test set {x = -1, x = 2} gives both statement and branch coverage.
bool isEqual(int x, int y)
{
  if (x = y)
    z := true;
  else
    z := false;
  return z;
}

int max(int x, int y)
{
  if (x > y)
    return x;
  else
    return y;
}

Generate test cases which will ensure statement coverage as well as branch coverage.
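One possible answer to this exercise, sketched in Python (the function names are translated from the pseudocode): each function contains a single decision, so one test case per branch outcome gives both statement and branch coverage:

```python
def is_equal(x, y):
    # Python translation of isEqual: the decision is (x == y).
    if x == y:
        z = True
    else:
        z = False
    return z

def max_of(x, y):
    # Python translation of max: the decision is (x > y).
    if x > y:
        return x
    else:
        return y

# Two tests per function: one drives the decision true, one drives it false.
assert is_equal(3, 3) is True    # true branch of (x == y)
assert is_equal(3, 4) is False   # false branch of (x == y)
assert max_of(5, 2) == 5         # true branch of (x > y)
assert max_of(2, 5) == 5         # false branch of (x > y)
```

Since every statement lies on one of the two branches, these test sets achieve statement and branch coverage simultaneously.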
Statement vs. Branch Coverage

assignAbsolute(int x)
{
  if (x < 0)
    x := -x;
  z := x;
}

Consider this program segment: the test set T = {x = -1} will give statement coverage, but not branch coverage.

[Control flow graph: B0 evaluates (x < 0); the true edge leads to B1 {x := -x}, the false edge leads directly to B2 {z := x}; B1 also flows into B2. The test set {x = -1} does not execute the false edge, hence it does not give branch coverage.]
Path Coverage
Select a test set T such that by executing program P for each test case d in T, all paths leading from the initial to the final node of P's control flow graph are traversed.

areTheyPositive(int x, int y)
{
  if (x >= 0)
    print("x is positive");
  else
    print("x is negative");
  if (y >= 0)
    print("y is positive");
  else
    print("y is negative");
}

Draw the control flow graph for the given code.
Path Coverage

[CFG of areTheyPositive: B0 evaluates (x >= 0); true leads to B1 {print("x is positive")}, false to B2 {print("x is negative")}; both flow into B3, which evaluates (y >= 0); true leads to B4 {print("y is positive")}, false to B5 {print("y is negative")}; both flow into B6 {return}.]

Test set T2 = {(x=12, y=-5), (x=-1, y=35)} gives both branch and statement coverage, but it does not give path coverage.

Set of all execution paths: {(B0,B1,B3,B4,B6), (B0,B1,B3,B5,B6), (B0,B2,B3,B4,B6), (B0,B2,B3,B5,B6)}
Test set T2 executes only the paths (B0,B1,B3,B5,B6) and (B0,B2,B3,B4,B6).
Path Coverage

Test set T1 = {(x=12, y=5), (x=-1, y=35), (x=115, y=-13), (x=-91, y=-2)} gives statement, branch, and path coverage: its four test cases exercise all four execution paths of areTheyPositive.
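The difference between the coverage criteria can be demonstrated with a small Python sketch that records which path through blocks B0-B6 each test case takes (an illustrative model of the control flow graph above, not production code):

```python
def trace_path(x, y):
    # Returns the sequence of blocks B0..B6 executed for (x, y),
    # following the CFG of areTheyPositive from the slides.
    return ("B0",
            "B1" if x >= 0 else "B2",
            "B3",
            "B4" if y >= 0 else "B5",
            "B6")

def paths_executed(test_set):
    return {trace_path(x, y) for (x, y) in test_set}

T2 = [(12, -5), (-1, 35)]
T1 = [(12, 5), (-1, 35), (115, -13), (-91, -2)]

# T2 touches every block (statement coverage) and every edge (branch
# coverage), yet it exercises only 2 of the 4 execution paths.
assert len(paths_executed(T2)) == 2
# T1's four test cases exercise all four paths: path coverage.
assert len(paths_executed(T1)) == 4
```

This makes concrete why path coverage is strictly stronger than branch coverage: the number of paths grows multiplicatively with the number of decisions.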
Condition Coverage

Condition Coverage, also known as Predicate Coverage, is a testing criterion used to evaluate the logic of conditional statements in software code. It ensures that each individual Boolean condition within a decision statement (such as an if, while, or for statement) has been tested for both true and false values at least once.
Condition Coverage example

Consider the following code:

if (a > 0 && b < 5)
{
  // Some action
}

Here, there are two conditions: a > 0 and b < 5. To achieve condition coverage, we need to test:
• a > 0 as both true and false.
• b < 5 as both true and false.

Testing all four combinations covers both conditions fully:
• Test 1: a = 1, b = 3 (both true)
• Test 2: a = -1, b = 3 (first false, second true)
• Test 3: a = 1, b = 6 (first true, second false)
• Test 4: a = -1, b = 6 (both false)

(Strictly, condition coverage itself is already achieved by Test 2 and Test 3 alone; exercising all four combinations is known as multiple condition coverage.)
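The four tests can be written as a quick Python sketch (the guard is transcribed from the example above; the function name `action_taken` is a hypothetical wrapper):

```python
def action_taken(a, b):
    # Transcription of the guard from the slide: if (a > 0 && b < 5).
    return a > 0 and b < 5

# The four combinations (multiple condition coverage):
assert action_taken(1, 3) is True     # Test 1: both conditions true
assert action_taken(-1, 3) is False   # Test 2: first false, second true
assert action_taken(1, 6) is False    # Test 3: first true, second false
assert action_taken(-1, 6) is False   # Test 4: both false
```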
Loop Coverage

Loop Coverage is a software testing metric that focuses on verifying the execution of loops in a program. It ensures that a loop's behavior is tested across different scenarios, such as when it iterates zero times, exactly once, and multiple times.
Loop Coverage Test Scenarios

For complete Loop Coverage, you typically need to consider the following
scenarios:

• Zero Iterations: The loop does not execute at all.
• One Iteration: The loop executes exactly once.
• Multiple Iterations: The loop executes multiple times (e.g., two or more iterations).
• Maximum Iterations (if applicable): The loop executes up to its maximum limit.
• Boundary Conditions: Tests around the edges of the loop's range (e.g., just before and just after the loop limit).
Example: Loop Coverage

Consider the following code:

for (int i = 0; i < 5; i++)
{
  // Loop body
}

To achieve loop coverage, we would test:
• Zero Iterations: Ensure the loop does not run if the initial condition is not met (e.g., i starts at 5).
• One Iteration: Test with a condition that makes the loop execute exactly once (e.g., if the loop runs from i = 0 to i < 1).
• Multiple Iterations: Test normal execution with several iterations (e.g., i = 0 to i < 5).
• Boundary Condition: Check the behavior when i is just before and after the loop's termination point (e.g., i = 4 and i = 5).
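The loop-coverage scenarios can be exercised with a small Python sketch (the `collect` function mirrors the slide's `for` loop with a parameterized bound; the name is illustrative):

```python
def collect(n):
    # Mirrors: for (int i = 0; i < n; i++) { record i }
    out = []
    for i in range(n):
        out.append(i)
    return out

# Loop coverage scenarios:
assert collect(0) == []               # zero iterations: body never runs
assert collect(1) == [0]              # exactly one iteration
assert collect(5) == [0, 1, 2, 3, 4]  # multiple iterations (boundary: i < 5)
```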
Cyclomatic Complexity

• A software metric used to measure the complexity of software.
• It provides an indication of the number of linearly independent paths through a program's source code.
• This metric helps determine the minimum number of test cases required to achieve full branch coverage.
• Developed by Thomas McCabe.
• The higher the cyclomatic complexity, the more complex the code, making it harder to test, maintain, and understand.
Program/Control Flow Graphs
• Describes the program control flow. Each branch is shown as a separate path, and loops are shown by arrows looping back to the loop condition node.
• The graph has some number of edges (E) and some number of nodes (N).
• Cyclomatic complexity = Number of edges - Number of nodes + 2 = E - N + 2
Example

A = 10
IF B > C THEN
  A = B
ELSE
  A = C
ENDIF
Print A
Print B
Print C

No. of Edges: E = 7
No. of Nodes: N = 7
Cyclomatic Complexity = E - N + 2 = 7 - 7 + 2 = 2
int module1 (int x, int y) {
  while (x != y) {
    if (x > y)
      x = x - y;
    else
      y = y - x;
  }
  return x;
}

[CFG: Start flows into the while condition (x != y); when false, control exits to "return x" and Stop; when true, it reaches the if condition (x > y), whose true edge leads to x = x - y and false edge to y = y - x; both loop back to the while condition.]

No. of Edges = 8
No. of Nodes = 7
CC = 8 - 7 + 2 = 3

If P is the number of predicate nodes in the flow graph, then
Cyclomatic Complexity = P + 1.

In the first example graph, the number of predicate nodes is 1 (the decision B > C), so Cyclomatic Complexity = 1 + 1 = 2.

CC can also be computed as the number of enclosed areas + 1.

[In the referenced flow graph, the number of enclosed areas is 3, so CC = 3 + 1 = 4.]
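The first two formulas can be checked with a few lines of Python against the example graphs above (a minimal sketch; the edge, node, and predicate counts are taken from the slides):

```python
def cyclomatic_complexity(edges, nodes):
    # McCabe's formula: CC = E - N + 2
    return edges - nodes + 2

def cc_from_predicates(predicates):
    # Equivalent formula: CC = P + 1
    return predicates + 1

# The module1 flow graph: 8 edges, 7 nodes, 2 predicate nodes
# (the while condition and the if condition).
assert cyclomatic_complexity(8, 7) == 3
assert cc_from_predicates(2) == 3

# The IF B > C example: 7 edges, 7 nodes, 1 predicate node.
assert cyclomatic_complexity(7, 7) == 2
assert cc_from_predicates(1) == 2
```

For a single-entry, single-exit flow graph the formulas agree, since every predicate node adds exactly one extra edge relative to straight-line code.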
Interpreting Cyclomatic Complexity Values

• 1-10: Low complexity; easy to understand and test.
• 11-20: Moderate complexity; requires careful testing.
• 21-50: High complexity; difficult to test and maintain.
• >50: Very high complexity; code should be refactored.
Levels of Testing

Test process in software development (V-model):

  requirements           <->  acceptance test
  specification          <->  system test
  detailed design        <->  integration test
  implementation (code)  <->  unit test
Unit Testing
• Unit Testing is a type of software testing where individual
components or units of the software are tested in isolation.
• The purpose of unit testing is to validate that each unit of the
software performs as expected.
• A "unit" is typically the smallest part of an application that can
be tested independently, such as a function, method, or class.
Unit Testing
• Involves testing a single isolated module.
• Note that unit testing allows us to isolate errors to a single module:
  – we know that if we find an error during unit testing, it is in the module we are testing.
• Modules in a program are not isolated; they interact with each other. Possible interactions:
  – Calling procedures in other modules
  – Receiving procedure calls from other modules
  – Sharing variables
• For unit testing we need to isolate the module we want to test; we do this using two things: drivers and stubs.
Drivers and Stubs

[Diagram: the Driver makes a procedure call into the Module Under Test, which in turn makes procedure calls into the Stub; the driver also has access to the module's global variables.]

• The driver and stub should have the same interface as the modules they replace.
• The driver and stub should be simpler than the modules they replace.
Drivers and Stubs
• Driver: A program that calls the interface procedures of the module being tested and reports the results.
  – A driver simulates a module that calls the module currently being tested.
• Stub: A program that has the same interface as a module that is being used by the module being tested, but is simpler.
  – A stub simulates a module called by the module currently being tested.
Integration Testing
• Integration Testing is a type of software testing where individual units or components are combined and tested as a group. The purpose of integration testing is to identify issues that arise when multiple modules or units interact with each other.
• Modules that work properly independently may not work when they are integrated.
• It ensures that the integrated components work together correctly, as intended.
Types of Integration Testing

• Big Bang Integration Testing: In this approach, all modules are integrated and tested at once, after all units are complete.
  – Advantages: Simple to implement; no need for intermediate tests.
  – Disadvantages: Difficult to isolate bugs; identifying the source of failures can be challenging.
• Incremental Integration Testing: In this approach, modules are integrated and tested step by step, either one at a time or in small groups.
  – Advantages: Easier to identify bugs; issues are detected earlier.
  – Disadvantages: More complex than Big Bang due to the need for stubs and drivers.

There are three main strategies for Incremental Integration Testing:
• Top-Down Integration Testing: Starts testing from the top-level modules and moves downward. Uses stubs to simulate lower-level modules.
• Bottom-Up Integration Testing: Starts testing from the lower-level modules and moves upward. Uses drivers to simulate higher-level modules.
• Sandwich/Hybrid Integration Testing: Combines both top-down and bottom-up approaches to address their respective limitations.
System Testing
• System testing - Testing of the system as a whole after the integration
phase
• It is a type of Black-Box Testing
• Test cases can be constructed based on the requirements specifications
• Main purpose is to assure that the system meets its requirements
• During system testing, in addition to functional tests:
– performance tests are performed.
System Testing

System testing includes the following types of testing:
• Recovery Testing
• Security Testing
• Performance Testing
  – Robustness Testing
  – Load Testing
  – Stress Testing
  – Volume Testing
  – Spike Testing
• Usability Testing
Acceptance Testing
Acceptance Testing is a type of software testing performed to
determine whether a software application meets the business
requirements and is ready for release.
It is the final level of testing, conducted after unit, integration,
and system testing, and serves as the last step before the
software is delivered to the customer.
The primary goal of acceptance testing is to validate the
software against the end-user requirements and ensure that it
is fit for use in the real world.
Purpose of Acceptance Testing

• Verify Requirements: Confirm that the software satisfies the agreed-upon requirements and specifications.
• Validate Usability: Ensure that the system meets user expectations in terms of functionality, usability, and performance.
• Ensure Readiness for Release: Identify any critical issues or gaps before delivering the software to the end user.
• Build Confidence: Reassure stakeholders that the product is ready for deployment.
Types of Acceptance Testing
1. User Acceptance Testing (UAT):
– Performed by the end users or clients.
– Focuses on verifying that the software works as expected in real-
world scenarios.
– Example: A retail company's employees test a new inventory
management system by simulating day-to-day operations.
2. Business Acceptance Testing (BAT):
– Validates whether the software meets business goals and is aligned
with organizational processes.
– Often conducted by business analysts or product owners.
3. Contract Acceptance Testing:
  – Ensures the software meets the terms and conditions defined in the contract between the vendor and the client.
4. Regulatory Acceptance Testing:
  – Ensures that the software complies with laws, regulations, and standards specific to the industry.
Types of Acceptance Testing
Alpha Testing –
• The Alpha test is conducted in the developer’s
environment by the end-users.
• The environment might be simulated, with the
developer and the typical end-user present for the
testing.
• The end-user uses the software and records the errors
and problems.
• Alpha test is conducted in a controlled environment.
Types of Acceptance Testing

Beta Testing –
• The Beta test is conducted in the end-user's environment.
• The developer is not present for the beta testing. Beta testing always takes place in a real-world environment which is not controlled by the developer.
• The end-users record the problems and report them back to the developer at intervals.
• Based on the results of the beta testing, the software is made ready for the final release to the intended customer base.
Test Plan

A test plan is a document that outlines the strategy, approach, resources, and schedule for testing activities in a project. It serves as a blueprint to guide the testing process and ensure that all aspects of the software are thoroughly evaluated to meet quality standards.
Key Components of a Test Plan

1. Introduction
• Objective: Purpose of the testing effort.
• Scope: Features or functionalities to be tested.
• Out of Scope: What will not be tested.
2. Test Items
• Specific items or components (modules, features, interfaces) that will be
tested.
3. Test Objectives
• Goals to achieve through testing, e.g., identifying defects, verifying
functionality, ensuring performance.
Key Components of a Test Plan

4. Testing Approach
• The testing methodology (e.g., manual or automated).
5. Entry and Exit Criteria
• Entry Criteria: Conditions that must be met before testing begins (e.g., code complete,
environment ready).
• Exit Criteria: Conditions to conclude testing (e.g., no critical defects, test cases
executed successfully).
6. Test Environment
• Details of hardware, software, network configurations, and tools required for testing.
7. Test Deliverables
• Outputs like test cases, test scripts, defect reports, and test summary reports.
Key Components of a Test Plan
8. Resource Allocation
• Test team members, roles, and responsibilities.
9. Schedule
• Timeline for testing activities, including milestones.
10. Risk Management
• Potential risks and their mitigation plans (e.g., lack of resources, delays in
environment setup).
11. Defect Management
• Process for reporting, tracking, and resolving defects.
12. Approval and Sign-off
• Criteria and stakeholders involved in approving the test plan.
Test Plan

Creating a comprehensive test plan ensures efficient resource utilization, reduces errors, enhances the overall quality of the product, and ensures smooth completion of the testing process.

Test Case

A test case is a detailed document that outlines specific inputs, conditions, and expected outcomes to verify that a particular feature or functionality of an application works as intended.
Structure of a Test Case

A well-written test case typically includes the following components:
• Test Case ID: A unique identifier for the test case (e.g., TC001).
• Test Case Title: A brief description of what the test case validates (e.g., "Verify login with valid credentials").
• Objective/Purpose: The goal of the test case (e.g., "Ensure the user can log in successfully with valid credentials").
• Preconditions: Conditions or setup required before executing the test case (e.g., "User account exists in the system").
• Test Steps: Detailed step-by-step instructions for executing the test, e.g.:
  1. Open the login page.
  2. Enter valid username and password.
  3. Click the "Login" button.
Structure of a Test Case

• Test Data: Data inputs required for the test case (e.g., username: user1, password: password123).
• Expected Result: The outcome that should occur if the system functions correctly (e.g., "User is redirected to the dashboard").
• Actual Result: The observed outcome after executing the test case (e.g., "User is redirected to the dashboard").
• Pass/Fail Status: Indicates whether the test case passed or failed based on a comparison of expected and actual results.
• Priority: The importance of the test case (e.g., High, Medium, Low).
• Attachments: Screenshots, logs, or any other evidence related to the test.
Example Testcase

Test Case ID: TC001
Title: Verify login with valid credentials
Objective: Ensure successful login for valid users
Preconditions: User account exists; login page is accessible
Test Steps:
  1. Open the login page
  2. Enter valid username and password
  3. Click "Login" button
Test Data: Username: test_user, Password: 12345
Expected Result: User is redirected to the dashboard
Actual Result: (To be filled after test execution)
Status: Pass / Fail
Priority: High
Environment: OS: Windows 10, Browser: Chrome v114
Example Testcase

TC01: Valid login credentials
  Input: Username: user1, Password: Password123
  Expected Output: Login successful; redirects to homepage
TC02: Invalid username
  Input: Username: invalidUser, Password: password123
  Expected Output: Error message: "Invalid credentials"
TC03: Invalid password
  Input: Username: user1, Password: wrongPass
  Expected Output: Error message: "Invalid credentials"
TC04: Empty username and password
  Input: Username: (empty), Password: (empty)
  Expected Output: Error message: "Fields cannot be empty"
TC05: Password field case sensitivity
  Input: Username: user1, Password: password123
  Expected Output: Error message: "Invalid credentials"

(Actual Output and Pass/Fail are recorded during test execution.)
Test Suite

• A test suite is a collection of related test cases grouped together for testing a specific functionality, feature, module, or an entire system.
• It acts as a container to organize and manage multiple test cases, ensuring a systematic approach to testing.
Components of a Test Suite
1. Test Suite Name:
• A descriptive name indicating the purpose of the suite (e.g., "User Registration Test
Suite").
2. Test Suite Description:
• A brief overview of what the test suite covers.
3. Associated Test Cases:
• A list of test cases included in the suite.
4. Dependencies/Prerequisites:
• Any preconditions required for executing the suite (e.g., environment setup, database
configuration).
5. Execution Details:
• Information about how and when the suite will be executed, and by whom.
6. Result Summary:
• A consolidated report of the results for all test cases in the suite (e.g., Pass, Fail, Blocked).
Example of Test Suite

Test Suite Name: Login Module Test Suite
Description: Validates all login-related functionality.
Preconditions: Application is deployed and accessible.
Test Cases:
  - TC001: Verify login with valid credentials
  - TC002: Verify login with invalid credentials
  - TC003: Verify login with blank fields
Dependencies: Database should contain test user accounts.
Execution Order: TC001 → TC002 → TC003
Result Summary: Pass: 2, Fail: 1, Blocked: 0
Thanks
