

CCS366 SOFTWARE TESTING AND AUTOMATION MCE CSE



UNIT 2

TEST PLAN
TEST PLANNING
A Test Plan is a detailed document that catalogs the test strategies, objectives, schedule,
estimations, deadlines, and resources required to complete a project. Think of it as a blueprint
for running the tests needed to ensure the software works correctly, owned and controlled by test
managers.

 A well-crafted test plan is a dynamic document that changes according to progressions in the
project and stays current at all times.
 It is the point of reference based on which testing activities are executed and coordinated among
a QA team.
 The test plan is also shared with Business Analysts, Project Managers, Dev teams, and anyone
associated with the project. This mainly offers transparency into QA activities so that all
stakeholders know how the software will be tested.

WHY ARE TEST PLANS IMPORTANT?

 They help individuals outside the QA teams (developers, business managers, customer-facing
teams) understand exactly how the website or app will be tested.
 They offer a clear guide for QA engineers to conduct their testing activities.
 They detail aspects such as test scope, test estimation, strategy, etc.
 Collating all this information into a single document makes it easier to review by management
personnel or reuse for other projects.

COMPONENTS OF A TEST PLAN

 Scope: Details the objectives of the particular project. Also, it details user scenarios to be used in
tests. The scope can specify scenarios or issues the project will not cover if necessary.

MEENAKSHI COLLEGE OF ENGINEERING DEPARTMENT OF C.S.E.


 Schedule: Details start dates and deadlines for testers to deliver results.
 Resource Allocation: Details which tester will work on which test.
 Tools: Details what tools will be used for testing, bug reporting, and other relevant activities.
 Defect Management: Details how bugs will be reported, to whom, and what each bug report
needs to be accompanied by. For example, should bugs be reported with screenshots, text logs,
or videos of their occurrence in the code?
 Risk Management: Details what risks may occur during software testing and what risks the
software itself may suffer if released without sufficient testing.
 Exit Parameters: Details when testing activities must stop. This part describes the expected
results from the QA operations, giving testers a benchmark to compare actual results.
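Collectively, these components can be sketched as a simple record. This is only an illustration; the field names below are assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

@dataclass
class TestPlan:
    """Minimal sketch of the test plan components listed above."""
    scope: str                                               # objectives, covered and excluded scenarios
    schedule: dict = field(default_factory=dict)             # activity -> deadline
    resource_allocation: dict = field(default_factory=dict)  # tester -> assigned tests
    tools: list = field(default_factory=list)                # testing and bug-reporting tools
    defect_management: str = ""                              # how and to whom bugs are reported
    risk_management: list = field(default_factory=list)      # risks and mitigations
    exit_parameters: str = ""                                # when testing activities must stop

# Hypothetical example instance:
plan = TestPlan(
    scope="Login and checkout flows; payment gateway excluded",
    tools=["Selenium", "Jira"],
    exit_parameters="95% of test cases pass and no critical defects remain open",
)
```

Keeping the plan in one structured place mirrors the point above: a single document is easier for management to review or to reuse on other projects.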

HOW TO CREATE A TEST PLAN?


Creating an effective Test Plan involves the following steps:

1. Product Analysis
2. Designing Test Strategy
3. Defining Objectives
4. Establish Test Criteria
5. Planning Resource Allocation
6. Planning Setup of Test Environment
7. Determine test schedule and estimation
8. Establish Test Deliverables
1. Product Analysis
Start with learning more about the product being tested, the client, and the end-users of similar
products. Ideally, this phase should focus on answering the following questions:

 Who will use the product?


 What is the primary purpose of this product?
 How does the product work?
 What are the software and hardware specifications?

In this stage, do the following:

 Interview clients, designers, and developers


 Review product and project documentation
 Perform a product walkthrough
2. Designing Test Strategy
The test strategy document is developed by the test manager and defines the following:

 Project objectives and how to achieve them.


 The amount of effort and cost required for testing.

 Scope of Testing: Contains the software components (hardware, software, middleware) to be


tested and those that will not be tested.

 Risks and Issues: Describes all possible risks that may occur during testing – tight deadlines,
poor management, inadequate or erroneous budget estimate – and the effect of these risks on the
product or business.
 Test Logistics: Mentions the names of testers (or their skills) and the tests to be run by them.
This section also includes the tools and the schedule laid out for testing.
3. Defining Objectives
This phase defines the goals and expected results of test execution. Since all testing intends to
identify as many defects as possible, the objectives must include:

 A list of all software features (functionality, GUI, performance standards) that must be tested.
 The ideal result or benchmark for every aspect of the software that needs testing. This is the
benchmark to which all actual results will be compared.
4. Establish Test Criteria
Test Criteria refers to standards or rules governing all activities in a testing project. The two
main test criteria are:


 Suspension Criteria: Defines the benchmarks for suspending all tests. For example, if QA team
members find that 50% of all test cases have failed, then all testing is suspended until the
developers resolve all of the bugs that have been identified so far.
 Exit Criteria: Defines the benchmarks that signify the successful completion of a test phase or
project. The exit criteria are the expected results of tests and must be met before moving on to
the next stage of development. For example, 80% of all test cases must be marked successful
before a feature or portion of the software can be considered suitable for public use.
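The two example thresholds above (suspend at 50% failures, exit at 80% passes) can be expressed as a small gate function. The function name and defaults are assumptions for illustration only.

```python
def evaluate_test_run(passed: int, failed: int,
                      suspend_fail_rate: float = 0.50,
                      exit_pass_rate: float = 0.80) -> str:
    """Apply the suspension and exit criteria described above to a test run."""
    total = passed + failed
    if total == 0:
        return "no results"
    if failed / total >= suspend_fail_rate:
        return "suspend"   # halt testing until developers resolve the defects
    if passed / total >= exit_pass_rate:
        return "exit"      # phase complete; move to the next stage
    return "continue"      # keep testing

print(evaluate_test_run(40, 60))  # 60% failed -> "suspend"
print(evaluate_test_run(85, 15))  # 85% passed -> "exit"
```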
5. Planning Resource Allocation
 This phase creates a detailed breakdown of all resources required for project completion.
Resources include human effort, equipment, and all infrastructure needed for accurate
and comprehensive testing.
 This part of test planning determines the amount of resources (number of testers and
equipment) the project requires. This also helps test managers formulate a correctly
calculated schedule and estimation for the project.

6. Planning Setup of Test Environment

Ideally, test environments should be real devices so testers can monitor software behavior in real
user conditions.

Whether it is manual testing or automation testing, nothing beats real devices: real browsers and
operating systems installed on real hardware are non-negotiable as test environments.

Do not compromise your test results by relying on emulators or simulators alone.

7. Determining Test Schedule and Estimation

Break the testing effort into tasks and estimate the effort each requires. Then create a schedule
to complete these tasks in the designated time with the estimated amount of effort.
Creating the schedule, however, does require input from multiple perspectives:

 Employee availability, number of working days, project deadlines, and daily resource
availability.


 Risks associated with the project, which were evaluated in an earlier stage.
8. Establish Test Deliverables
Test Deliverables refer to a list of documents, tools, and other equipment that must be created,
provided, and maintained to support testing activities in a project.
A different set of deliverables is required before, during, and after testing.
Deliverables required before testing
Documentation on

 Test Plan
 Test Design

Deliverables required during testing

 Simulators or Emulators (in early stages)


 Test Data
 Error and execution logs
 Test Script

Deliverables required after testing

 Test Results
 Release Notes
 Defect Report
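The three phase-wise lists above can be collected into one lookup table. The dictionary layout is just an illustration; the deliverable names are taken directly from the lists.

```python
TEST_DELIVERABLES = {
    "before": ["Test Plan", "Test Design"],
    "during": ["Simulators or Emulators", "Test Data",
               "Error and execution logs", "Test Script"],
    "after":  ["Test Results", "Release Notes", "Defect Report"],
}

def deliverables_for(phase: str) -> list:
    """Look up the deliverables expected in a given testing phase."""
    return TEST_DELIVERABLES.get(phase, [])
```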

Creating a comprehensive test plan is crucial for ensuring the quality and reliability of software.
A test plan outlines the testing approach, scope, objectives, resources, and schedules for a
software testing project. Here are some important concepts to consider when developing a test
plan:

1. Scope and Objectives: Clearly define the scope of the testing effort, including the
features, functions, and components that will be tested. Outline the objectives of testing,
such as identifying defects, validating functionality, and ensuring compliance with
requirements.


2. Test Strategy: Describe the overall approach to testing, including the types of testing
(e.g., unit, integration, system, acceptance) that will be performed. Explain the rationale
behind choosing specific testing techniques and methodologies.
3. Test Environments: Specify the hardware, software, and network configurations needed
to conduct testing effectively. This includes details about development and testing
environments, database versions, operating systems, browsers, etc.
4. Test Deliverables: List the documents and artifacts that will be produced as part of the
testing process, such as test cases, test scripts, test data, defect reports, and test logs.
5. Test Schedule: Outline the timeline for different testing phases, including start and end
dates for each phase, milestones, and dependencies. Consider factors like resource
availability and development progress.
6. Test Resources: Identify the personnel, tools, and infrastructure required for testing. This
includes testers, developers, test automation tools, test management tools, and any
specialized hardware or software.
7. Risk Assessment: Identify potential risks that might impact the testing process or the
software quality. Assess the impact and likelihood of each risk and propose mitigation
strategies.
8. Test Cases and Test Scripts: Define the test cases that will be executed during testing.
Each test case should include the test scenario, input data, expected outcomes, and steps
to reproduce the test. For automated testing, provide the test scripts and tools to be used.
9. Test Data: Describe the data needed for testing, including sample data, test databases,
and any specific data conditions that need to be simulated.
10. Defect Management: Define the process for reporting, tracking, prioritizing, and
resolving defects. Include guidelines for defect classification, severity, and priority.
11. Test Execution: Detail how the testing will be executed, including any manual or
automated procedures, testing sequences, and regression testing strategies.
12. Exit Criteria: Specify the conditions that must be met for each testing phase to be
considered complete. This might include criteria related to test coverage, defect
resolution, and overall system stability.
13. Test Sign-off and Approval: Define the process for obtaining approval to proceed from
one testing phase to another or for releasing the software to production.


14. Documentation: Address how documentation will be managed throughout the testing
process. This includes version control for test plans, test cases, and other related
documents.
15. Change Management: Describe how changes to the software or requirements will be
managed during testing. Address how these changes may impact the test plan and
ongoing testing efforts.
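As a sketch of the defect-management point (item 10), a defect report might carry classification, severity, and priority fields. The enum values and field names below are common conventions assumed for illustration, not mandated by the text.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):   # impact of the defect on the system
    CRITICAL = 1
    MAJOR = 2
    MINOR = 3

class Priority(Enum):   # urgency with which it should be fixed
    HIGH = 1
    MEDIUM = 2
    LOW = 3

@dataclass
class DefectReport:
    defect_id: str
    summary: str
    steps_to_reproduce: list
    severity: Severity
    priority: Priority
    attachments: list   # e.g. screenshots, text logs, or videos

# Hypothetical report:
bug = DefectReport(
    defect_id="BUG-101",
    summary="Checkout button unresponsive on mobile",
    steps_to_reproduce=["Open cart", "Tap Checkout"],
    severity=Severity.MAJOR,
    priority=Priority.HIGH,
    attachments=["screenshot.png"],
)
```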

Remember that a test plan should be tailored to the specific project and organization's needs. It
should be a living document that evolves as the project progresses and new information becomes
available. Regularly review and update the test plan to ensure it remains relevant and aligned
with the project's goals.

HIGH-LEVEL EXPECTATIONS

High-level expectations in a software test plan refer to the overarching goals and outcomes that the testing effort aims to
achieve. These expectations set the tone for the testing process and provide a clear direction for
the testing team. Here are some examples of high-level expectations that could be included in a
software test plan:

1. Defect Identification: The testing process should systematically uncover defects,
anomalies, and discrepancies in the software's functionality, ensuring that these issues are
identified and documented for resolution.
2. Functional Validation: The software's features and functions should be thoroughly
validated against the specified requirements to ensure that they work as intended and
meet user needs.
3. Quality Assurance: The testing process should ensure that the software meets defined
quality standards, including performance, security, usability, and reliability aspects.
4. User Experience: The software should provide a positive and user-friendly experience,
including intuitive navigation, clear interfaces, and responsiveness.
5. Compatibility: The software should work seamlessly across various platforms, browsers,
devices, and operating systems as specified in the project requirements.
6. Regression Prevention: Testing should include regression testing to ensure that new
code changes do not negatively impact existing functionalities.


7. Timely Delivery: The testing process should be conducted efficiently and effectively to
avoid delays in the overall project timeline.
9. Documentation: All test cases, test scripts, defects, and testing outcomes should be
well-documented to provide clear traceability and insights into the testing process.
9. Communication: Regular communication should be maintained between the testing
team, development team, and stakeholders to keep everyone informed about testing
progress and outcomes.
10. Risk Mitigation: The testing process should identify and address potential risks that
could impact the software's quality, stability, or delivery.
11. Continuous Improvement: The testing process should be iterative, and feedback from
testing cycles should be used to improve the testing strategy and quality assurance
practices.
12. Compliance: If applicable, the software should adhere to industry regulations, standards,
and best practices.
13. Stakeholder Satisfaction: The testing effort should contribute to overall stakeholder
satisfaction by ensuring that the software meets or exceeds their expectations.

These high-level expectations should align with the project's objectives, requirements, and the
organization's quality standards. They provide a roadmap for the testing team and help establish
the overall testing strategy that guides the more detailed aspects of the test plan, such as test
cases, schedules, resources, and risk management.

Test Plan | Software Testing


A test plan describes how testing would be accomplished. It is a document that specifies the
purpose, scope, and method of software testing. It determines the testing tasks and the persons
involved in executing those tasks, test items, and the features to be tested. It also describes the
environment for testing and the test design and measurement techniques to be used. Note that a
properly defined test plan is an agreement between testers and users describing the role of
testing in software.

A complete test plan helps people who are not involved in the test group to understand why
product validation is needed and how it is to be performed. However, if the test plan is not
complete, it might not be possible to check how the software operates when installed on
different operating systems or when used with other software. To avoid this problem, IEEE
states some components that should be covered in a test plan. These components are listed in
Table.

Table Components of a Test Plan

 Responsibilities: Assigns responsibilities to different people and keeps them focused.
 Assumptions: Avoids any misinterpretation of schedules.
 Test: Provides an abstract of the entire process and outlines specific tests. The testing
scope, schedule, and duration are also outlined.
 Communication: A communication plan (who, what, when, and how about the people) is
developed.
 Risk analysis: Identifies areas that are critical for success.
 Defect reporting: Specifies the way in which a defect should be documented so that it can
be reproduced, retested, and fixed.
 Environment: Describes the data, interfaces, work area, and the technical environment
used in testing. All this is specified to reduce or eliminate misunderstandings and sources
of potential delay.


A carefully developed test plan facilitates effective test execution, proper analysis of errors, and
preparation of error report. To develop a test plan, a number of steps are followed, as listed
below.

 Set objectives of test plan: Before developing a test plan, it is necessary to understand its
purpose. But, before determining the objectives of a test plan, it is necessary to determine
the objectives of the software. This is because the objectives of a test plan are highly
dependent on that of software. For example, if the objective of the software is to
accomplish all user requirements, then a test plan is generated to meet this objective.
 Develop a test matrix: A test matrix indicates the components of the software that are to
be tested. It also specifies the tests required to check these components. Test matrix is also
used as a test proof to show that a test exists for all components of the software that require
testing. In addition, test matrix is used to indicate the testing method, which is used to test
the entire software.
 Develop test administrative component: A test plan must be prepared within a fixed
time so that software testing can begin as soon as possible. The purpose of administrative
component of a test plan is to specify the time schedule and resources (administrative
people involved while developing the test plan) required to execute the test plan. However,
if the implementation plan (plan that describes how the processes in the software are
carried out) of software changes, the test plan also changes. In this case, the schedule to
execute the test plan also gets affected.
 Write the test plan: The components of a test plan such as its objectives, test matrix, and
administrative component are documented. All these documents are then collected
together to form a complete test plan. These documents are organized either in an informal
or formal manner.
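The test matrix described above can be sketched as a mapping from software components to the tests that check them, together with the "test proof" that every component has at least one test. The component and test names here are hypothetical.

```python
# Hypothetical test matrix: component -> tests that cover it
test_matrix = {
    "login":    ["test_valid_credentials", "test_invalid_password"],
    "checkout": ["test_payment_success"],
    "reports":  [],   # no test yet -> flagged below
}

def untested_components(matrix: dict) -> list:
    """Return components lacking a test; the 'test proof' fails for these."""
    return [component for component, tests in matrix.items() if not tests]

print(untested_components(test_matrix))  # -> ['reports']
```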


A test plan has many sections, which are listed below.

 Overview: Describes the objectives and functions of the software to be performed. It also
describes the objectives of test plan such as defining responsibilities, identifying test
environment and giving a complete detail of the sources from where the information is
gathered to develop the test plan.
 Test scope: Specifies features and combination of features, which are to be tested. These
features may include user manuals or system documents. It also specifies the features and
their combinations that are not to be tested.
 Test methodologies: Specifies the types of tests required for testing features and
combination of these features such as regression tests and stress tests. It also provides
description of sources of test data along with how test data is useful to ensure that testing
is adequate such as selection of boundary or null values. In addition, it describes the
procedure for identifying and recording test results.


 Test phases: Identifies different types of tests such as unit testing, integration testing and
provides a brief description of the process used to perform these tests. Moreover, it
identifies the testers that are responsible for performing testing and provides a detailed
description of the source and type of data to be used. It also describes the procedure of
evaluating test results and describes the work products, which are initiated or completed in
this phase.
 Test environment: Identifies the hardware, software, and automated testing tools to be
used in testing.
 Schedule: Provides detailed schedule of testing activities and defines the responsibilities
to respective people. In addition, it indicates dependencies of testing activities and the time
frames for them.
 Approvals and distribution: Identifies the individuals who approve a test plan and its
results. It also identifies the people to whom the test plan document(s) is distributed.
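The selection of boundary values mentioned under test methodologies above can be illustrated with a hypothetical rule; the age-18 threshold is an invented example, not from the text.

```python
def may_register(age: int) -> bool:
    """Hypothetical rule: registration is allowed from age 18 upward."""
    return age >= 18

# Boundary-value test data probes both sides of the threshold:
assert may_register(17) is False  # just below the boundary
assert may_register(18) is True   # exactly on the boundary
assert may_register(19) is True   # just above the boundary
```

Off-by-one defects cluster at such boundaries, which is why test data is deliberately chosen at and around them rather than at arbitrary points.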

Test Case Design


A test case provides the description of inputs and their expected outputs to observe whether the
software or a part of the software is working correctly. IEEE defines test case as ‘a set of input
values, execution preconditions, expected results and execution post conditions, developed for a
particular objective or test condition such as to exercise a particular program path or to verify
compliance with a specific requirement.’ Generally, a test case is associated with details like
identifier, name, purpose, required inputs, test conditions, and expected outputs.
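These IEEE-listed details map naturally onto a small record type. The class below is a sketch under that reading, not part of any standard library.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    identifier: str
    name: str
    purpose: str
    required_inputs: dict
    test_conditions: str      # execution preconditions
    expected_outputs: dict    # expected results / postconditions

# Hypothetical test case:
tc = TestCase(
    identifier="TC-001",
    name="Valid login",
    purpose="Verify login succeeds with correct credentials",
    required_inputs={"username": "alice", "password": "secret"},
    test_conditions="User 'alice' exists and is active",
    expected_outputs={"status": "logged_in"},
)
```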

Incomplete and incorrect test cases lead to incorrect and erroneous test outputs. To avoid this,
the test cases must be prepared in such a way that they check the software with all possible
inputs. This process is known as exhaustive testing and the test case, which is able to perform
exhaustive testing, is known as ideal test case. Generally, a test case is unable to perform
exhaustive testing; therefore, a test case that gives satisfactory results is selected. In order to
select a test case, certain questions should be addressed.

 How to select a test case?
 On what basis are certain elements of a program included or excluded from a test case?


To provide an answer to these questions, test selection criterion is used that specifies the
conditions to be met by a set of test cases designed for a given program. For example, if the
criterion is to exercise all the control statements of a program at least once, then a set of test
cases, which meets the specified condition should be selected.
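For instance, under the criterion "exercise all control statements at least once", two test cases suffice for this hypothetical function:

```python
def absolute(x: int) -> int:
    if x < 0:        # the control statement to be exercised
        return -x    # executed only for negative input
    return x         # executed only for non-negative input

# Together these two test cases execute every statement at least once:
assert absolute(-5) == 5   # covers the x < 0 branch
assert absolute(3) == 3    # covers the fall-through path
```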

The process of generating test cases helps to identify the problems that exist in the software
requirements and design. For generating a test case, firstly the criterion to evaluate a set of test
cases is specified and then the set of test cases satisfying that criterion is generated. There are
two methods used to generate test cases, which are listed below.

 Code-based test case generation: This approach, also known as structure based test case
generation, is used to assess the entire software code to generate test cases. It considers
only the actual software code to generate test cases and is not concerned with the user
requirements. Test cases developed using this approach are generally used for performing
unit testing. These test cases can easily test statements, branches, special values, and
symbols present in the unit being tested.
 Specification-based test case generation: This approach uses specifications, which
indicate the functions that are produced by the software to generate test cases. In other
words, it considers only the external view of the software to generate test cases. It is
generally used for integration testing and system testing to ensure that the software is
performing the required task. Since this approach considers only the external view of the
software, it does not test the design decisions and may not cover all statements of a
program. Moreover, as test cases are derived from specifications, the errors present in
these specifications may remain uncovered.

Several tools known as test case generators are used for generating test cases. In addition to test
case generation, these tools specify the components of the software that are to be tested. An
example of a test case generator is Astra QuickTest, which captures business processes in a
visual map and generates data-driven tests automatically.

Test Case Specifications

A test plan neither covers the details of testing units nor specifies the test cases to be used for
testing them. Thus, test case specification is done in order to test each unit separately.


Depending on the testing method specified in a test plan, the features of the unit to be tested are
determined. The overall approach stated in the test plan is refined into two parts: specific test
methods and the evaluation criteria. Based on these test methods and the criteria, the test cases
to test the unit are specified.

For each unit being tested, these test case specifications describe the test cases, required inputs
for test cases, test conditions, and the expected outputs from the test cases. Generally, it is
required to specify the test cases before using them for testing. This is because the effectiveness
of testing depends to a great extent on the nature of test cases.

Test case specifications are written in the form of a document. This is because the quality of test
cases is evaluated by performing a test case review, which requires a formal document. The
review of test case document ensures that test cases satisfy the chosen criteria and conform to
the policy specified in the test plan. Another benefit of specifying test cases in a formal
document is that it helps testers to select an effective set of test cases.

Software Testing Strategies – Types of Software Testing Strategies

To perform testing in a planned and systematic manner, a software testing strategy is developed.
A testing strategy is used to identify the levels of testing to be applied along with the
methods, techniques, and tools to be used during testing. This strategy also decides the test
cases and test specifications and puts them together for execution.

Developing a test strategy which efficiently meets the requirements of an organization is
critical to the success of software development in that organization.

The choice of software testing strategy is highly dependent on the nature of the developed
software. For example, if the software is highly data intensive then a strategy that checks
structures and values properly to ensure that all inputs given to the software are correct and
complete should be developed. Similarly, if it is transaction intensive then the strategy should
be such that it is able to check the flow of all the transactions. The design and architecture of the
software are also useful in choosing a testing strategy. A number of software testing strategies are
developed in the testing process. All these strategies provide the tester a template, which is used
for testing. Generally, all testing strategies have following characteristics.

1. Testing proceeds in an outward manner. It starts from testing the individual units,
progresses to integrating these units, and finally, moves to system testing.
2. Testing techniques used during different phases of software development are different.
3. Testing is conducted by the software developer and by an independent test group (ITG).
4. Testing and debugging should not be used synonymously. However, any testing strategy
must accommodate debugging with itself.

Types of Software Testing Strategies

There are different types of software testing strategies, which are selected by the testers
depending upon the nature and size of the software. The commonly used software testing
strategies are listed below.

 Analytic testing strategy: This uses formal and informal techniques to assess and
prioritize risks that arise during software testing. It takes a complete overview of
requirements, design, and implementation of objects to determine the motive of testing.
 Model-based testing strategy: This strategy tests the functionality of the software
according to the real world scenario (like software functioning in an organization). It

recognizes the domain of data and selects suitable test cases according to the probability of
errors in that domain.
 Methodical testing strategy: It tests the functions and status of software according to the
checklist, which is based on user requirements. This strategy is also used to test the
functionality, reliability, usability, and performance of the software.
 Process-oriented testing strategy: It tests the software according to already existing
standards such as the IEEE standards. In addition, it checks the functionality of the
software by using automated testing tools.
 Dynamic testing strategy: This tests the software based on collective decisions of the
testing team. Along with testing, this strategy provides information about the software,
such as the test cases used for testing the errors present in it.
 Philosophical testing strategy: It tests the software assuming that any component of the
software can stop functioning anytime. It takes help from software developers, users and
systems analysts to test the software.

A testing strategy should be developed with the intent to provide the most effective and efficient
way of testing the software. While developing a testing strategy, some questions arise such as:
when and what type of testing is to be done? What are the objectives of testing? Who is
responsible for performing testing? What outputs are produced as a result of testing? The inputs
that should be available while developing a testing strategy are listed below.

 Type of development project


 Complete information about the hardware and software components that are required to
develop the software
 Risks involved
 Description of the resources that are required for testing
 Description of all testing methods that are required to test various phases of SDLC
 Details of all the attributes that the software is unable to provide. For example, software
cannot describe its own limitations.


The output produced by the software testing strategy includes a detailed document, which
indicates the entire test plan including all test cases used during the testing phase. A testing
strategy also specifies a list of testing issues that need to be resolved.

An efficient software testing strategy includes two types of tests, namely, low-level tests and
high-level tests. Low-level tests ensure correct implementation of a small part of the source code
and high-level tests ensure that major software functions are validated according to user
requirements. A testing strategy sets certain milestones for the software such as final date for
completion of testing and the date of delivering the software. These milestones are important
when there is limited time to meet the deadline.
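The distinction between the two levels can be sketched with two tiny checks; the function names and amounts below are invented for illustration, not taken from any real system:

```python
# Illustrative sketch: a low-level test exercises one small piece of source
# code, while a high-level test validates a major, user-visible function
# against its requirement. All names here are hypothetical.

def parse_amount(text):
    """Small unit exercised by a low-level test."""
    return round(float(text.strip()), 2)

def checkout_total(prices):
    """Major function exercised by a high-level test."""
    return round(sum(parse_amount(p) for p in prices), 2)

# Low-level test: correct implementation of a small part of the code.
assert parse_amount(" 19.991 ") == 19.99

# High-level test: the user-visible behaviour matches the requirement.
assert checkout_total(["10.00", "5.50"]) == 15.50
```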

In spite of these advantages, there are certain issues that need to be addressed for successful
implementation of software testing strategy. These issues are discussed here.

 In addition to detecting errors, a good testing strategy should also assess portability and
usability of the software.
 It should specify software requirements in a quantifiable manner, such as the outputs
expected from the software, test effectiveness, and mean time to failure, all of which
should be clearly stated in the test plan.
 It should continuously improve the testing method to make it more effective.
 Test plans that support rapid cycle testing should be developed. The feedback from rapid
cycle testing can be used to control the corresponding strategies.
 It should develop robust software, which is able to test itself using debugging techniques.
 It should conduct formal technical reviews to evaluate the test cases and test strategy. The
formal technical reviews can detect errors and inconsistencies present in the testing
process.

Characteristics of STLC
 STLC is a fundamental part of the SDLC, but it consists of only the testing phases.
 STLC starts as soon as requirements are defined or software requirement document is
shared by stakeholders.
 STLC yields a step-by-step process to ensure quality software.
Phases of STLC


1. Requirement Analysis: Requirement Analysis is the first step of the Software Testing Life
Cycle (STLC). In this phase, the quality assurance team understands the requirements, such as
what is to be tested. If anything is missing or not understandable, the quality assurance team
meets with the stakeholders to gain detailed knowledge of the requirements.
The activities that take place during the Requirement Analysis stage include:
 Reviewing the software requirements document (SRD) and other related documents
 Interviewing stakeholders to gather additional information
 Identifying any ambiguities or inconsistencies in the requirements
 Identifying any missing or incomplete requirements
 Identifying any potential risks or issues that may impact the testing process
 Creating a requirement traceability matrix (RTM) to map requirements to test cases
At the end of this stage, the testing team should have a clear understanding of the software
requirements and should have identified any potential issues that may impact the testing
process. This will help to ensure that the testing process is focused on the most important areas
of the software and that the testing team is able to deliver high-quality results.
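As a rough sketch, the requirement traceability matrix mentioned above can be kept as a simple mapping from requirement IDs to the test cases that cover them; the IDs below are made up for illustration:

```python
# Hypothetical RTM: requirement IDs mapped to the test cases covering them.
rtm = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no coverage yet; a gap the QA team should close
}

def uncovered_requirements(matrix):
    """Return requirement IDs that have no mapped test cases."""
    return [req for req, cases in matrix.items() if not cases]

# Reviewing the matrix flags REQ-003 as untested.
print(uncovered_requirements(rtm))  # -> ['REQ-003']
```

Keeping the matrix in a checkable form like this lets the team spot missing or incomplete coverage before test execution begins.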

2. Test Planning: Test Planning is the phase of the software testing life cycle where all
testing plans are defined. In this phase, the manager of the testing team calculates the
estimated effort and cost of the testing work. This phase starts once the requirement-
gathering phase is completed.
The activities that take place during the Test Planning stage include:
 Identifying the testing objectives and scope
 Developing a test strategy: selecting the testing methods and techniques that will be used
 Identifying the testing environment and resources needed
 Identifying the test cases that will be executed and the test data that will be used
 Estimating the time and cost required for testing
 Identifying the test deliverables and milestones
 Assigning roles and responsibilities to the testing team
 Reviewing and approving the test plan
At the end of this stage, the testing team should have a detailed plan for the testing activities
that will be performed, and a clear understanding of the testing objectives, scope, and


deliverables. This will help to ensure that the testing process is well-organized and that the
testing team is able to deliver high-quality results.

3. Test Case Development: The test case development phase starts once the test planning
phase is completed. In this phase, the testing team writes down the detailed test cases and
prepares the required test data for testing. Once the test cases are prepared, they are
reviewed by the quality assurance team.
The activities that take place during the Test Case Development stage include:
 Identifying the test cases that will be developed
 Writing test cases that are clear, concise, and easy to understand
 Creating test data and test scenarios that will be used in the test cases
 Identifying the expected results for each test case
 Reviewing and validating the test cases
 Updating the requirement traceability matrix (RTM) to map requirements to test cases
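For instance, a detailed test case together with its prepared test data might look like the following sketch; the login behaviour, names, and data are all hypothetical:

```python
# Hypothetical test data prepared by the testing team:
# (username, password, expected outcome)
TEST_DATA = [
    ("alice", "correct-horse", True),   # valid credentials
    ("alice", "wrong-pass", False),     # wrong password
    ("", "any", False),                 # empty username
]

def login(username, password):
    """Stand-in for the system under test."""
    return username == "alice" and password == "correct-horse"

def run_login_cases(data):
    """Execute each case and record whether actual matched expected."""
    return [login(u, p) == expected for u, p, expected in data]

# Every case's actual result should match its expected result.
assert all(run_login_cases(TEST_DATA))
```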


TEST CASE:
A test case is a set of actions performed on a system to determine if it satisfies software
requirements and functions correctly. The purpose of a test case is to determine if different
features within a system are performing as expected and to confirm that the system satisfies all
related standards, guidelines and customer requirements. The process of writing a test case can
also help reveal errors or defects within the system.

Test cases are typically written by members of the quality assurance (QA) team or the
testing team and can be used as step-by-step instructions for each system test. Testing begins
once the development team has finished a system feature or set of features. A sequence or
collection of test cases is called a test suite.

Why Test Cases Are Important:

Test cases define what must be done to test a system, including the steps executed in the
system, the input data values that are entered into the system and the results that are expected
throughout test case execution.

The benefits of an effective test case include:

 Guaranteed good test coverage.

 Reduced maintenance and software support costs.

 Reusable test cases.

 Confirmation that the software satisfies end-user requirements.

 Improved quality of software and user experience.

 Higher quality products lead to more satisfied customers.

 More satisfied customers will increase company profits.


Example of test case format

Test cases must be designed to fully reflect the software application features and functionality
under evaluation. QA engineers should write test cases so only one thing is tested at a time. The
language used to write a test case should be simple and easy to understand, active instead of
passive, and exact and consistent when naming elements.

The components of a test case include:

 Test name. A title that describes the functionality or feature that the test is verifying.

 Test ID. Typically a numeric or alphanumeric identifier that QA engineers and testers use to
group test cases into test suites.

 Objective. Also called the description, this important component describes what the test
intends to verify in one to two sentences.

 References. Links to user stories, design specifications or requirements that the test is
expected to verify.

 Prerequisites. Any conditions that are necessary for the tester or QA engineer to perform the
test.

 Test setup. This component identifies what the test case needs to run correctly, such as app
version, operating system, date and time requirements and security specifications.

 Test steps. Detailed descriptions of the sequential actions that must be taken to complete the
test.

 Expected results. An outline of how the system should respond to each test step.
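One way to capture these components is a plain record whose fields simply mirror the list above; this is a sketch, not a formal standard, and the sample values are invented:

```python
from dataclasses import dataclass, field

# Hedged sketch: the test case components above as a plain record.
@dataclass
class TestCase:
    test_id: str          # identifier used to group cases into suites
    name: str             # title describing what the test verifies
    objective: str        # one-to-two sentence description
    references: list = field(default_factory=list)   # user stories, specs
    prerequisites: list = field(default_factory=list)
    setup: str = ""       # app version, OS, date/time, security needs
    steps: list = field(default_factory=list)        # sequential actions
    expected_results: list = field(default_factory=list)

tc = TestCase(
    test_id="TC-101",
    name="Add product to cart",
    objective="Verify a selected product appears in the cart.",
    steps=["Open product page", "Click 'Add to Cart'", "Open the cart"],
    expected_results=["Cart page opens", "Product is listed in the cart"],
)
```

A structured record like this also makes it straightforward to feed test cases into tooling such as a traceability matrix or a test management system.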

Test Case Writing Best Practices

An effective test case design will be:

 Accurate, or specific about the purpose.

 Economical, meaning no unnecessary steps or words are used.


 Traceable, meaning requirements can be traced.

 Repeatable, meaning the document can be used to perform the test numerous times.

 Reusable, meaning the document can be reused to successfully perform the test again in the
future.

To achieve these goals, QA and testing engineers can use the following best practices:

 Prioritize which test cases to write based on project timelines and the risk factors of the
system or application.

 Create unique test cases and avoid irrelevant or duplicate test cases.

 Confirm that the test suite checks all specified requirements mentioned in the specification
document.

 Write test cases that are transparent and straightforward. The title of each test case should be
short.

 Test case steps should be broken into the smallest possible segments to avoid confusion
when executing.

 Test cases should be written in a way that allows others to easily understand them and
modify the document when necessary.

 Keep the end user in mind whenever a test case is created.

 Do not assume the features and functionality of the system.

 Each test case should be easily identifiable.

 Descriptions should be clear and concise.

TYPES OF TEST CASES


Functionality test cases

This is a type of black box testing that reveals whether an app's interface works with the rest
of the system and its users, by identifying whether the functions the software is expected to
perform succeed or fail. Functionality test cases are based on system specifications or
user stories, allowing tests to be performed without accessing the internal structures of the
software. This test case is usually written by the QA team.

Performance Test Cases

These test cases can help validate response times and confirm the overall effectiveness of
the system. Performance test cases include a very strict set of success criteria and can be used to
understand how the system will operate in the real world. Performance test cases are typically
written by the testing team, but they are often automated because one system can demand
hundreds of thousands of performance tests.
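A minimal sketch of such a strict success criterion, assuming a response-time budget; both the operation and the one-second budget are invented for illustration:

```python
import time

# Hypothetical performance test case: the operation under test must finish
# within a strict response-time budget.
def operation_under_test():
    return sum(range(100_000))

def test_response_time(budget_seconds=1.0):
    start = time.perf_counter()
    operation_under_test()
    elapsed = time.perf_counter() - start
    assert elapsed < budget_seconds, (
        f"took {elapsed:.3f}s, budget {budget_seconds}s")
    return elapsed
```

In practice such cases are automated and run at scale, as noted above, with the budget taken from a measurable requirement rather than picked arbitrarily.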

Unit test cases

Unit testing involves analyzing individual units or components of the software to confirm each
unit performs as expected. A unit is the smallest testable element of software. It often takes a few
inputs to produce a single output.
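A minimal sketch of unit test cases for one such unit; the discount function is an invented example, and each test checks exactly one behaviour:

```python
# A unit: the smallest testable element -- here, one function with a few
# inputs and a single output. The names are illustrative, not from a real
# system.
def discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

# Unit test cases: each verifies one expected behaviour of the unit.
def test_typical_discount():
    assert discount(200.0, 10) == 180.0

def test_zero_discount_returns_price():
    assert discount(99.99, 0) == 99.99

def test_invalid_percent_is_rejected():
    try:
        discount(100.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass
```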

User interface test cases

This type of test case can verify that specific elements of the graphical user interface (GUI) look
and perform as expected. UI test cases can reveal errors in elements that the user interacts with,
such as grammar and spelling errors, broken links and cosmetic inconsistencies. UI tests often
require cross-browser functionality to ensure an app performs consistently across different
browsers. These test cases are usually written by the testing team with some help from the design
team.

Security test cases

These test cases are used to confirm that the system restricts actions and permissions
when necessary to protect data. Security test cases often focus on authentication and encryption
and frequently use security-based tests, such as penetration testing. The security team is
responsible for writing these test cases -- if one exists in the organization.

Integration Test Cases

An integration test case is written to determine how the different software modules
interact with each other. The main purpose of this test case is to confirm that the interfaces
between different modules work correctly. Integration test cases are typically written by the
testing team, with input provided by the development team.

Database Test Cases

This type of test case aims to examine what is happening internally, helping testers
understand where the data is going in the system. Testing teams frequently use SQL queries to
write database test cases.
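A minimal sketch of such a case, using an in-memory SQLite database as a stand-in for the real system; the table and data are invented:

```python
import sqlite3

# Hypothetical database test case: after an action inserts a record, use a
# SQL query to check where the data went and what state it is in.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.execute("INSERT INTO orders (status) VALUES ('placed')")
conn.commit()

# The check: verify the row landed in the table with the expected status.
row = conn.execute("SELECT status FROM orders WHERE id = 1").fetchone()
assert row == ("placed",)
conn.close()
```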

Usability Test Cases

A usability test case can be used to reveal how users naturally approach and use an
application. Instead of providing step-by-step details, a usability test case will provide the tester
with a high-level scenario or task to complete. These test cases are typically written by the
design and testing teams and should be performed before user acceptance testing.

User Acceptance Test Cases

These test cases focus on analyzing the user acceptance testing environment. They are
broad enough to cover the entire system and their purpose is to verify if the application is
acceptable to the user. User acceptance test cases are prepared by the testing team or product
manager and then used by the end user or client. These tests are often the last step before the
system goes to production.


Regression Testing

This test confirms recent code or program changes have not affected existing system
features. Regression testing involves selecting all or some of the executed test cases and running
them again to confirm the software's existing functionalities still perform appropriately.
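A minimal sketch of that idea: keep the previously passing checks and re-run all of them after each change. The function and checks below are invented for illustration:

```python
# Hypothetical function under maintenance.
def add(a, b):
    return a + b

# Regression suite: checks that passed before, re-run after every change
# to confirm existing functionality still performs appropriately.
REGRESSION_SUITE = [
    ("adds small ints", lambda: add(2, 3) == 5),
    ("adds negatives", lambda: add(-1, -1) == -2),
]

def run_regression(suite):
    """Re-run every stored check and report pass/fail per check."""
    return {name: check() for name, check in suite}

assert all(run_regression(REGRESSION_SUITE).values())
```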

The Key Differences Between A Test Case And A Test Scenario Include:

 A test case provides a set of actions performed to verify that specific software features are
performing correctly. A test scenario is any feature that can be tested.

 A test case is beneficial in exhaustive testing -- a software testing approach that involves
testing every possible data combination. A test scenario is more agile and focuses on the end-
to-end functionality of the software.

 A test case looks at what to test and how to test it while a test scenario only identifies what to
test.

 A test case requires more resources and time for test execution than a test scenario.

 A test case includes information such as test steps, expected results and data while a test
scenario only includes the functionality to be tested.

CREATING TEST SCHEDULES

You can configure automated tests or test sets to run without any user interaction by creating
a test schedule. This schedule can be a one-time scheduled run, or it can recur on specific
days of the week.

The Test Schedules screen

1. On the Test Schedules screen, you can perform various tasks with test schedules.
To access this screen, select Test Management > Test Schedules on the main QAComplete
toolbar.


2. Click the Recent Items button to display the items that have been changed lately.
Click an item in that list to go to the corresponding Edit forms and edit its properties.


Search in Test Schedules


To search for a specific schedule, you can do the following:
 Search for a schedule field value in the Search field.
 Select a predefined filter from the Filter drop-down list.
 Create a new filter by clicking the New Filter button.

Additional actions


Test Schedule Reference:


This section describes the fields and drop-down lists available on the Edit Test
Schedule form. You can use them to describe your test schedule.
Note: For some options in the table below, possible values are determined by choice lists.
These options are marked with an asterisk (*). To manage choice lists for your project, go to
Test Management > Test Schedules > Actions > Manage Choice Lists.

Option Description

Date Created The date and time when the test schedule was created.

Note: This field is filled in automatically. You cannot edit it.

Date Updated The date when the test schedule was updated last time.

Note: This field is filled in automatically. You cannot edit it.

Updated By The user who updated the test schedule last time.

Note: This field is filled in automatically. You cannot edit it.

Id The ID of the test schedule.

Note: This field is filled in automatically. You cannot edit it.

Linked Items The items linked to the test schedule.

Created By The user who created the test schedule.

Date Last Launched The date when the test schedule was launched last time.

Host Name The test host used to run tests on schedule.

Title The name of the test schedule.

Run on The days when the test schedule is active.

Start Date The date when the test schedule becomes active.


End Date The date when the test schedule becomes inactive.

Start Time The time when the test run starts according to the test schedule.

Enabled Defines whether the test schedule is currently active.

Agent The automation agent used to run tests for this schedule.

Link Run to Release The release to which the scheduled test run is linked.

Link Run to Configuration The configuration to which the scheduled test run is linked.
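As a rough sketch, the fields above can be modelled as a plain record, with a helper that decides whether the schedule should fire on a given date. QAComplete's internal format is not documented here; the names and sample schedule are illustrative:

```python
from dataclasses import dataclass
from datetime import date, time

# Hedged sketch: the Edit Test Schedule fields as a plain record.
@dataclass
class TestSchedule:
    title: str
    host_name: str
    run_on: tuple        # days of the week the schedule is active
    start_date: date     # date the schedule becomes active
    end_date: date       # date the schedule becomes inactive
    start_time: time     # time the scheduled run starts
    enabled: bool = True

def is_active(schedule, on_date):
    """True if the schedule should fire on the given date."""
    return (schedule.enabled
            and schedule.start_date <= on_date <= schedule.end_date
            and on_date.strftime("%a") in schedule.run_on)

nightly = TestSchedule(
    title="Nightly smoke run",
    host_name="qa-host-01",
    run_on=("Mon", "Tue", "Wed", "Thu", "Fri"),
    start_date=date(2024, 1, 1),
    end_date=date(2024, 12, 31),
    start_time=time(2, 0),
)
```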

WHAT IS A BUG REPORT:


In the course of the QA process, when a bug has been identified, it has to be documented
and sent to developers to be fixed. Given that software is exceptionally complex, layered, and
feature-heavy in the current digital environment, most QA pipelines generate multiple bugs.
Additionally, developers often work on multiple development projects simultaneously, which
means they have a considerable number of bugs requiring attention. They have to operate under
significant pressure and can be overwhelmed without the right resources.
Well-structured and adequately detailed bug reports are one of those resources. Good bug
reports tell developers exactly what needs to be fixed and help them get it done faster. This
prevents software releases from being delayed, offering faster time-to-market without
compromising on quality.

Benefits of a good Bug Report:


A good bug report covers all the crucial information about the bug, which can be used in
the debugging process:

1. It helps with a detailed bug analysis.


2. Gives better visibility about the bug and helps find the right direction and approach
towards debugging.


3. Saves cost and time by helping debug at an earlier stage.


4. Prevents bugs from going into production and disrupting end-user experience.
5. Acts as a guide to help avoid the same bug in future releases.
6. Keeps all the stakeholders informed about the bug, helping them take corrective
measures.

Elements of an Effective Bug Report:


When studying how to create a bug report, start with the question: What does a bug
report need to tell the developer?
A bug report should be able to answer the following questions:

 What is the problem?


 How can the developer reproduce the problem (to see it for themselves)?
 Where in the software (which webpage or feature) has the problem appeared?
 What is the environment (browser, device, OS) in which the problem has occurred?


How to write an Effective Bug Report:


An effective bug report should contain the following:


1. Title/Bug ID

2. Environment
3. Steps to reproduce a Bug

4. Expected Result
5. Actual Result

6. Visual Proof (screenshots, videos, text) of Bug


7. Severity/Priority
1. Title/Bug ID
The title should provide a quick description of the bug. For example, “Distorted Text in FAQ
section on <name> homepage”.
Assigning an ID to the bug also helps to make identification easier.
2. Environment
A bug can appear in a particular environment and not others. For example, a bug appears when
running the website on Firefox, or an app malfunctions only when running on an iPhone X.
These bugs can only be identified with cross browser testing or cross device tests.
When reporting the bug, QAs must specify if the bug is observed in one or more specific
environments. Use the template below for specificity:

 Device Type: Hardware and specific device model


 OS: OS name and version
 Tester: Name of the tester who identified the bug
 Software version: The version of the software which is being tested, and in which the bug has
appeared.
 Connection Strength: If the bug is dependent on the internet connection (4G, 3G, WiFi,
Ethernet), mention its strength at the time of testing.
 Rate of Reproduction: The number of times the bug has been reproduced, with the exact steps
involved in each reproduction.


3. Steps to Reproduce a Bug


Number the steps clearly from first to last so that the developers can quickly and exactly follow
them to see the bug for themselves. Here is an example of how one can reproduce a bug in steps:

1. Click on the “Add to Cart” button on the Homepage (this takes the user to the Cart).
2. Check if the same product is added to the cart.
4. Expected Result
This component of Bug Report describes how the software is supposed to function in the given
scenario. The developer gets to know what the requirement is from the expected results. This
helps them gauge the extent to which the bug is disrupting the user experience.
Describe the ideal end-user scenario, and try to offer as much detail as possible. For the above
example, the expected result should be:
“The selected product should be visible in the cart.”
5. Actual Result
Detail what the bug is actually doing and how it is a distortion of the expected result.

 Elaborate on the issue


 Is the software crashing?
 Is it simply pausing in action?
 Does an error appear?
 Or is it simply unresponsive?
6. Visual Proof of Bug
Screenshots, videos of log files must be attached to clearly depict the occurrence of the bug.
Depending on the nature of the bug, the developer may need video, text, and images.
Testing using BrowserStack can leverage multiple debugging options such as text logs, visual
logs (screenshots), video logs, console logs, and network logs. These make it easy for QAs and
devs to detect exactly where the error has occurred, study the corresponding code and implement
fixes.
BrowserStack’s debugging toolkit makes it possible to easily verify, debug and fix different
aspects of software quality, from UI functionality and usability to performance and network
consumption.


The range of debugging tools offered by BrowserStack’s mobile app and web testing products
are as follows:

 Live: Pre-installed developer tools on all remote desktop browsers and Chrome developer tools
on real mobile devices (exclusive to BrowserStack)
 Automate: Screenshots, Video Recording, Video-Log Sync, Text Logs, Network Logs, Selenium
Logs, Console Logs
 App Live: Real-time Device Logs from Logcat or Console
 App Automate: Screenshots, Video Recording, Video-Log Sync, Text Logs, Network Logs,
Appium Logs, Device Logs, App Profiling
7. Bug Severity
Every bug must be assigned a level of severity and corresponding priority. This reveals
the extent to which the bug affects the system, and in turn, how quickly it needs to be fixed.
Levels of Bug Severity:

 Low: Bug won’t result in any noticeable breakdown of the system


 Minor: Results in some unexpected or undesired behavior, but not enough to disrupt system
function
 Major: Bug capable of collapsing large parts of the system
 Critical: Bug capable of triggering complete system shutdown
Levels of Bug Priority:

 Low: Bug can be fixed at a later date. Other, more serious bugs take priority
 Medium: Bug can be fixed in the normal course of development and testing.
 High: Bug must be resolved at the earliest as it affects the system adversely and renders it
unusable until it is resolved.
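These levels can be kept as enumerations so every reported bug carries both labels; this is a sketch, and the triage rule at the end is an invented example, not a rule from the notes:

```python
from enum import Enum

# Hedged sketch: the severity and priority levels above as enumerations.
class Severity(Enum):
    LOW = 1        # no noticeable breakdown
    MINOR = 2      # unexpected behaviour, system still functions
    MAJOR = 3      # can collapse large parts of the system
    CRITICAL = 4   # can trigger complete system shutdown

class Priority(Enum):
    LOW = 1        # can be fixed at a later date
    MEDIUM = 2     # fixed in the normal course of development
    HIGH = 3       # must be resolved at the earliest

def needs_immediate_fix(severity, priority):
    """Illustrative triage rule: escalate critical or high-priority bugs."""
    return severity is Severity.CRITICAL or priority is Priority.HIGH
```

Attaching both labels to each bug report makes the developer's queue sortable and keeps the most disruptive defects at the top.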
