
UNIT-IV

This unit can be organized into two sections:

Testing Strategies:
• A strategic approach to software testing
• Test strategies for conventional software
• Black-box and white-box testing
• Verification and validation testing
• System testing
• The art of debugging
Testing Objectives
• Testing has been described as the process of executing a
program with the intention of finding errors.
• A good test case is one that has high probability of finding
an undiscovered error.
• A successful test is one that uncovers an as-yet undiscovered
error.
• The major testing objective is to design tests that systematically uncover different classes of errors with minimum time and effort.
• The principal goal of testing is to ensure that users will not be negatively
affected by any software bug. Those bugs are software errors that cause
unexpected or incorrect results. There are many kinds of bugs that we
can divide into various categories. For instance, we can list bugs in terms
of the impact level on the software:
• Low-impact bugs do not cause a lot of problems for users and do not
affect their experience with the app.
• High-impact bugs affect the application's functionality more noticeably, but the application still remains usable.
• Critical bugs break core application functionality, which can make the application impossible to use.
• Regardless of bug type, software testers are the ones that look for them and
try to determine their causes by conducting different tests, and with that
knowledge, help developers resolve the issues.
PRINCIPLES OF SOFTWARE TESTING

Every software engineer must apply the following testing principles while performing software testing.
Why Testing is important?
• Software testing is a critical part of the software development and maintenance process. It helps
ensure software quality, validates requirements, mitigates risks, and enhances user satisfaction.
• Here are some reasons why testing is important in software engineering:
Identify mistakes
• Testing helps software development teams to catch and fix bugs early in the cycle.
Quality control
• Testing ensures software quality and validates requirements.
Ensure software is ready
• Testing is crucial in determining whether or not the software is ready to be released.
Avoid unnecessary costs
• Testing documentation plays a vital role in software testing.
• Defining every process followed in testing software and removing ambiguities helps save the
project's effort, time, and cost.
Improve customer satisfaction
• Testing helps ensure that the final user experience is as smooth as was initially intended when the
software was conceptualized.
Review your code
• Testing validates your code's correctness and helps ensure it is reliable and maintainable.
Software Testing- Definition
Software testing is a formal process carried out by a specialized
testing team in which a software unit, several integrated software
units or an entire software package are examined by running the
programs on a computer. All the associated tests are performed
according to approved test procedures on approved test cases.
1) Testing Strategies:
• A strategy for software testing provides a road map that describes the steps to be conducted as part of testing, when these steps are planned and then undertaken, and how much effort, time, and resources will be required.
• A testing strategy must incorporate test planning, test case design, test execution, and resultant data collection and evaluation.
• A strategy for software testing is developed by the project manager, software engineers, and testing specialists.
• Software is tested to uncover errors that were made inadvertently as it
was designed and constructed.
• As errors are uncovered, they must be diagnosed and corrected using
a process that is called debugging.
STLC (Software Testing Life Cycle)
• Software Testing Life Cycle (STLC) is a sequence of specific activities
conducted during the testing process to ensure software quality goals are met.
• STLC involves both verification and validation activities.
• There are following six major phases in every Software Testing Life Cycle
Model (STLC Model):
Requirement Analysis
Test Planning
Test case development
Test Environment setup
Test Execution
Test Cycle closure
Test Planning
In this phase in which a Senior QA manager determines the test plan
strategy along with efforts and cost estimates for the project. Moreover, the
resources, test environment, test limitations and the testing schedule are also
determined. The Test Plan gets prepared and finalized in the same phase.
Test Planning Activities
• Preparation of test plan/strategy document for various types of testing
• Test tool selection
• Test effort estimation
• Resource planning and determining roles and responsibilities.
• Training requirement
Deliverables of Test Planning
• Test plan /strategy document.
• Effort estimation document.
Test Case Development Phase
The Test Case Development Phase involves the creation,
verification and rework of test cases & test scripts after the test plan is
ready.
Test Case Development Activities
• Create test cases, automation scripts (if applicable)
• Review and baseline test cases and scripts
• Create test data (If Test Environment is available)
Deliverables of Test Case Development
• Test cases/scripts
• Test data
Test Environment Setup
Test environment setup decides the software and hardware conditions under which a work product is tested. It is one of the critical aspects of the testing process and can be done in parallel with the Test Case Development Phase. The test team may not be involved in this activity if the development team provides the test environment; the test team is required to do a readiness check (smoke testing) of the given environment.
Test Environment Setup Activities
• Understand the required architecture, environment set-up and prepare
hardware and software requirement list for the Test Environment.
• Setup test Environment and test data
• Perform smoke test on the build
Deliverables of Test Environment Setup
• Environment ready with test data set up
• Smoke Test Results.
Test Execution Phase
Test execution phase is carried out by the testers in which testing of the software build is done
based on test plans and test cases prepared.
The process consists of test script execution, test script maintenance, and bug reporting. If bugs are found, they are reported back to the development team for correction, and retesting is then performed.
Test Execution Activities
• Execute tests as per plan
• Document test results, and log defects for failed cases
• Map defects to test cases in RTM
• Retest the Defect fixes
• Track the defects to closure
Deliverables of Test Execution
• Completed RTM with the execution status
• Test cases updated with results
• Defect reports
Requirements Traceability Matrix (RTM) is a document used to ensure that the requirements
defined for a system are linked at every point during the verification process.
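To make the RTM concrete, here is a minimal sketch of a traceability matrix as a Python dictionary. The requirement IDs, test-case names, and statuses are hypothetical illustration values, not part of any standard format.

```python
# A hypothetical Requirements Traceability Matrix (RTM): each requirement
# maps to the test cases that verify it and the latest execution status.
rtm = {
    "REQ-001": {"tests": ["TC-01", "TC-02"], "status": "Pass"},
    "REQ-002": {"tests": ["TC-03"], "status": "Fail"},
    "REQ-003": {"tests": [], "status": "Not Covered"},
}

def uncovered_requirements(rtm):
    """Return requirement IDs that no test case traces to."""
    return [req for req, entry in rtm.items() if not entry["tests"]]

print(uncovered_requirements(rtm))  # ['REQ-003']
```

A query like `uncovered_requirements` is the point of the RTM: it exposes requirements that would otherwise ship unverified.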
Test Cycle Closure
Test cycle closure phase is the completion of test execution, which involves several activities like test completion reporting, collection of test completion metrics, and test results.
Testing team members meet, discuss and analyse testing artifacts to identify
strategies that have to be implemented in future, taking lessons from current test cycle.
The idea is to remove process bottlenecks for future test cycles.
Test Cycle Closure Activities
• Evaluate cycle completion criteria based on Time, Test coverage, Cost, Software,
Critical Business Objectives, Quality.
• Prepare test metrics based on the above parameters.
• Document the learning out of the project
• Prepare Test closure report
• Qualitative and quantitative reporting of quality of the work product to the customer.
• Test result analysis to find out the defect distribution by type and severity.
Deliverables of Test Cycle Closure
• Test Closure report
• Test metrics
Verification and Validation (V&V) Testing Strategy
Software testing is often referred to as verification and validation.

• Verification refers to the set of tasks that ensure that software correctly implements a specific function.
  Verification checks whether we are building the product right.

• Validation refers to a different set of tasks that ensure that the software that has been built is traceable to customer requirements.
  Validation checks whether we are building the right product.
Verification
Verification is the process of checking that the software achieves its goal without any bugs. It ensures that the product is being developed in the right way, i.e., that it fulfils the requirements we have. Verification is static testing.
Activities involved in verification:
• Inspections
• Reviews
• Walkthroughs
• Desk-checking
Validation
Validation is the process of checking whether the software product is up to the mark, i.e., whether it satisfies the high-level requirements. It checks that what we are developing is the right product, comparing the actual product against the expected product. Validation is dynamic testing.
Activities involved in validation:
• Black box testing
• White box testing
• Unit testing
• Integration testing
Verification vs. Validation

Verification: The verifying process includes checking documents, design, code, and program.
Validation: It is a dynamic mechanism of testing and validating the actual product.

Verification: It does not involve executing the code.
Validation: It always involves executing the code.

Verification: It uses methods like reviews, walkthroughs, inspections, and desk-checking.
Validation: It uses methods like black-box testing, white-box testing, and non-functional testing.

Verification: It checks whether the software conforms to the specification.
Validation: It checks whether the software meets the requirements and expectations of the customer.

Verification: It finds bugs early in the development cycle.
Validation: It can find bugs that the verification process cannot catch.

Verification: Its targets are application and software architecture, specification, complete design, high-level and database design, etc.
Validation: Its target is the actual product.

Verification: The QA team does verification and makes sure that the software is as per the requirements in the SRS document.
Validation: Validation is executed on software code with the involvement of the testing team.

Verification: It comes before validation.
Validation: It comes after verification.
Example of verification and validation
• Consider the following specification: a clickable button with the name "Submet".
• Verification would check the design document and catch the spelling mistake; otherwise, the development team would build a button labelled "Submet".
• The corrected specification is: a clickable button with the name "Submit".
• Once the code is ready, validation is done. If a validation test finds that the button is not clickable, the development team will, owing to that validation result, make the Submit button clickable.
Test Strategies for Conventional Software
• The software process may be viewed as a spiral.
• A strategy for software testing may also be viewed in the context of the spiral:
i) Unit testing begins at the vertex of the spiral and concentrates on
each unit (e.g., component, class) of the software as implemented in
source code. Tests focus on each component individually, ensuring
that it functions properly as a unit. Hence, the name unit testing.
ii) Integration testing focuses on design and software architecture.
Components must be assembled or integrated to form the complete
software package. Integration testing addresses the issues
associated with the dual problems of verification and program
construction. Test case design techniques that focus on inputs and
outputs are more prevalent during integration.
iii) Validation testing, where requirements established as part of
requirements modelling are validated against the software that has been
constructed. Validation testing provides final assurance that software
meets all informational, functional, behavioural, and performance
requirements.

iv) System testing, where the software and other system elements are
tested as a whole. Software, once validated, must be combined with
other system elements (e.g., hardware, people, databases). System
testing verifies that all elements mesh properly and that overall system
function/performance is achieved.
2) Test Strategies for Conventional Software
• There are many strategies that can be used to test software.
 At one extreme, you can wait until the system is fully constructed and then conduct tests on the overall system in the hope of finding errors. This approach, although appealing, simply does not work; it will result in buggy software that disappoints all stakeholders.
 At the other extreme, you could conduct tests on a daily basis, whenever any part of the system is constructed. This approach, although less appealing to many, can be very effective.
Types of Test strategies

• Unit Testing
• Integration Testing
• Validation Testing
• System Testing
Unit Testing

• Unit testing focuses verification effort on the smallest unit of


software design—the software component or module.
• Unit testing is the process of executing a single unit/module, without affecting other modules present in the software, and comparing the actual outputs with predefined outputs at the component level.
• Developers are responsible for performing unit testing; it is carried out at least once during software development and repeated depending on changes.
Unit-test Considerations:
• The module interface is tested to ensure that information properly
flows into and out of the program unit under test .
• Local data structures are examined to ensure that data stored
temporarily maintains its integrity during all steps in an algorithm’s
execution.
• Boundary conditions are tested to ensure that the module operates
properly at boundaries established to limit or restrict processing.
• All independent paths through the control structure are exercised to
ensure that all statements in a module have been executed at least
once.
• Error-handling paths are tested.

• Test cases should be designed to uncover errors due to erroneous


computations, incorrect comparisons, or improper control flow.

• Unit-test procedures: unit testing is normally considered an adjunct to the coding step. A review of design information provides guidance for establishing test cases. Each test case should be coupled with a set of expected results.
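The considerations above can be illustrated with a short sketch. The function under test is hypothetical; each test case is coupled with an expected result and covers a normal case, a boundary condition, and an error-handling path.

```python
def divide(a, b):
    """Return a / b, raising ValueError on a zero divisor."""
    if b == 0:
        raise ValueError("division by zero")
    return a / b

def test_normal_case():
    assert divide(10, 2) == 5.0          # erroneous computation would fail here

def test_boundary_condition():
    assert divide(0, 5) == 0.0           # numerator at its lower boundary

def test_error_handling_path():
    try:
        divide(1, 0)
        assert False, "expected ValueError"
    except ValueError:
        pass                             # error-handling path exercised

for test in (test_normal_case, test_boundary_condition, test_error_handling_path):
    test()
print("all unit tests passed")
```

In practice these functions would be collected and run by a test runner such as pytest or unittest rather than by a manual loop.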
Integration Testing
• Integration testing is a systematic technique for constructing the
software architecture while at the same time conducting tests to uncover
errors associated with interfacing.
• The objective is to take unit-tested components and build a program
structure that has been dictated by design.
• The entire program is tested as a whole. A set of errors is encountered.
Correction is difficult because isolation of causes is complicated by the
vast expanse of the entire program.
• The program is constructed and tested in small increments, where errors
are easier to isolate and correct; interfaces are more likely to be tested
completely; and a systematic test approach may be applied.
• Even when individual components have already been tested using unit testing and all errors uncovered, integration testing is still needed to ensure that the units work correctly when combined, not just in isolation.
• When modules are integrated, the following problems may arise:
 Data may be lost across an interface
 Individually acceptable imprecision may be magnified to unacceptable levels
 The purpose of a module may not be fulfilled
 One module may adversely affect another
Integration Testing

• Non-Incremental Testing
 Big-Bang
• Incremental Testing
 Top-Down
 Bottom-Up
 Sandwich
 Regression
 Smoke
Top-down integration
• Top-down integration testing is an incremental approach to
construction of the software architecture.
• Modules are integrated by moving downward through the control
hierarchy, beginning with the main control module (main program).
• Modules subordinate to the main control module are incorporated into
the structure in either a
• depth-first or
• breadth-first manner.
Depth-first integration integrates all components on a major control path of the
program structure. For example, selecting the left-hand path, components M1, M2 ,
M5 would be integrated first. Next, M8 or M6 would be integrated. Then, the
central and right-hand control paths are built.
Breadth-first integration incorporates all components directly subordinate at each
level, moving across the structure horizontally. From the figure, components M2,
M3, and M4 would be integrated first. The next control level, M5, M6, and so on,
follows
The integration process is performed in a series of five steps:
1. The main control module is used as a test driver and stubs are
substituted for all components directly subordinate to the main control
module.
2. Depending on the integration approach selected (i.e., depth or breadth
first), subordinate stubs are replaced one at a time with actual
components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the
real component.
5. Regression testing may be conducted to ensure that new errors have not
been introduced
The process continues from step 2 until the entire program structure is
built.
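The role of a stub in steps 1 and 2 can be sketched as follows. The module and function names are hypothetical; the point is that the main control module is exercised while a stub stands in for a subordinate component that is not yet integrated.

```python
def price_lookup_stub(item_id):
    """Stub for the not-yet-integrated pricing component:
    returns a fixed, predictable value instead of real logic."""
    return 10.0

def compute_total(item_ids, price_lookup):
    """Main control module under test; the subordinate
    component is passed in so a stub can replace it."""
    return sum(price_lookup(i) for i in item_ids)

# Test the main module against the stub. Later, the stub is replaced
# with the real component and the same test is re-run.
assert compute_total(["a", "b", "c"], price_lookup_stub) == 30.0
print("top-down test with stub passed")
```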
Bottom-Up Integration Testing
• Bottom-up integration testing, as its name implies, begins construction
and testing with atomic modules (i.e., components at the lowest levels
in the program structure).
• Bottom-up integration strategy may be implemented with the
following steps:
1. Low-level components are combined into clusters that perform a
specific software sub function.
2. A driver (a control program for testing) is written to coordinate test
case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the
program structure.
Integration follows the pattern illustrated in Figure 17.6. Components
are combined to form clusters 1, 2, and 3. Each of the clusters is tested using
a driver (shown as a dashed block).
Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2
are removed and the clusters are interfaced directly to Ma. Similarly, driver
D3 for cluster 3 is removed prior to integration with module Mb. Both Ma
and Mb will ultimately be integrated with component Mc, and so forth
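A driver from step 2 can be sketched as below. The low-level cluster (a parse-and-validate pair of hypothetical components) is exercised by a driver, a small control program written only for testing, which coordinates test-case input and output.

```python
def parse_record(line):
    """Low-level component 1: split a CSV-style line into fields."""
    return line.strip().split(",")

def validate_record(fields):
    """Low-level component 2: a record is valid if it has
    exactly two non-empty fields."""
    return len(fields) == 2 and all(fields)

def cluster_driver(lines):
    """Driver coordinating input and output for the parse+validate cluster."""
    results = []
    for line in lines:
        fields = parse_record(line)
        results.append(validate_record(fields))
    return results

# Exercise the cluster through the driver; the driver is discarded once
# the cluster is integrated into the modules above it.
assert cluster_driver(["a,b\n", "x\n"]) == [True, False]
print("cluster test via driver passed")
```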
Sandwich
• Sandwich testing is the combination of the bottom-up and top-down approaches, so it gains the advantages of both.
• Initially it uses stubs and drivers, where stubs simulate the behaviour of missing components. It is also known as Hybrid Integration Testing.
Regression Testing
• Each time a new module is added as part of integration testing, the
software changes.
• New data flow paths are established, new I/O may occur, and new
control logic is invoked.
• These changes may cause problems with functions that previously
worked flawlessly.
• In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that changes have not propagated unintended side effects.
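A minimal sketch of that idea: a subset of previously passing test cases is kept and re-executed after the software changes. The function and test cases are hypothetical.

```python
def add(a, b):
    """Imagine this module was just modified during integration."""
    return a + b

# Previously passing checks, retained as the regression subset.
regression_suite = [
    (lambda: add(2, 3) == 5, "add small positives"),
    (lambda: add(-1, 1) == 0, "add inverse pair"),
    (lambda: add(0, 0) == 0, "add zeros"),
]

# Re-execute after the change; any failure is an unintended side effect.
failures = [name for check, name in regression_suite if not check()]
assert failures == []
print("regression suite passed")
```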
Smoke Testing

• Smoke Testing is a software testing process that determines whether


the deployed software build is stable or not.
• Smoke testing is a confirmation for QA team to proceed with further
software testing. It consists of a minimal set of tests run on each build
to test software functionalities.
• Smoke testing is also known as “Build Verification Testing” or
“Confidence Testing.”
• In simple terms, smoke tests means verifying the important features
are working and there are no showstoppers in the build that is under
testing.
• It is a simple test that shows the product is ready for testing.
• This helps determine whether the build is so flawed that any further testing would be a waste of time and resources.
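A smoke test can be sketched as a handful of quick checks on the showstopper features of a build. The "build" below is a hypothetical in-memory application object standing in for a deployed build.

```python
class AppBuild:
    """Hypothetical stand-in for a deployed software build."""
    def start(self):
        return True                      # the application launches
    def login(self, user, password):
        return bool(user and password)   # the core login path works
    def home_page(self):
        return "<html>home</html>"       # the main screen renders

def smoke_test(build):
    """Return True only if every showstopper check passes."""
    checks = [
        build.start(),
        build.login("tester", "secret"),
        "home" in build.home_page(),
    ]
    return all(checks)

# If the smoke test fails, the build is rejected before deeper testing begins.
assert smoke_test(AppBuild())
print("smoke test passed: build accepted for further testing")
```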
Non Incremental Testing
• The units tested during unit testing are first combined to form a single software package, and then the package is tested as a whole; such an approach is called non-incremental or big-bang testing.

Advantages:
• The big bang approach can be useful only for small systems.
Disadvantages:
• Testing can commence only after completion of coding.
• Testing may not uncover interface errors.
• Modules critical to the system may not receive the extra testing.
Validation Testing

• Through validation testing, requirements are validated against the software constructed.
• These are high-order tests where validation criteria must be evaluated
to assure that s/w meets all functional, behavioral and performance
requirements.
• It succeeds when the software functions in a manner that can be
reasonably expected by the customer.
Alpha and Beta Testing
• It is virtually impossible for a software developer to foresee how the
customer will really use a program.
• A series of acceptance tests are conducted for end users to validate all
requirements.
• An acceptance test can range from an informal “test drive” to a
planned and systematically executed series of tests.
• Acceptance testing can be conducted over a period of weeks or
months, thereby uncovering cumulative errors that might degrade the
system over time.
• The alpha test is conducted at the developer’s site by a
representative group of end users. Alpha tests are conducted in a
controlled environment.
• The beta test is conducted at one or more end-user sites. Unlike alpha testing, the
developer generally is not present.
• The beta test is a “live” application of the software in an environment that cannot be
controlled by the developer.
• The customer records all problems that are encountered during beta testing and reports
these to the developer at regular intervals.
• As a result of problems reported during beta tests, developers make modifications and then prepare for release of the software product to the entire customer base.
• A variation on beta testing, called customer acceptance testing, is sometimes
performed when custom software is delivered to a customer under contract.
• The customer performs a series of specific tests in an attempt to uncover errors before
accepting the software from the developer.
In simple words
• Alpha testing is done by testers and quality analysts inside the organization whereas
Beta testing is done by real users who will be actually using the software. Alpha testing
takes a longer duration to complete execution while Beta testing gets completed within
a few weeks.
System Testing
System testing is a series of different tests whose primary purpose is to
fully exercise the computer-based system.
• Software is incorporated with other system elements (e.g., hardware, people,
information), and a series of system integration and validation tests are
conducted.
• Testing the software after it has been integrated with these other system elements is called system testing.
• A classic system-testing problem is “finger pointing.” This occurs when an
error is uncovered, and the developers of different system elements blame
each other for the problem.
• Recovery Testing
• Security Testing
• Stress Testing
• Performance Testing
Recovery Testing:
• Recovery testing is a system test that forces the software to fail in a variety
of ways and verifies that recovery is properly performed.
• If recovery is automatic (performed by the system itself), reinitialization,
checkpointing mechanisms, data recovery, and restart are evaluated for
correctness.
• Many computer-based systems must recover from faults and resume
processing.
Security Testing
• Any computer-based system that manages sensitive information or causes
actions that can improperly harm (or benefit) individuals is a target for
improper or illegal penetration.
• Security testing attempts to verify that protection mechanisms built into
a system will, in fact, protect it from improper penetration.
Stress Testing
• Stress testing executes a system in a manner that demands resources in abnormal
quantity, frequency, or volume.
• Stress tests are designed to confront programs with abnormal situations.
• A variation of stress testing is a technique called sensitivity testing.
• Stress Testing is a type of software testing performed to check the robustness of
the system under the varying loads.
Performance Testing
• Performance testing is designed to test the run-time performance of software
within the context of an integrated system.
• Performance testing is a type of software testing carried out to test the speed, scalability, stability, and reliability of a software product or application (e.g., a real-time embedded system).
Performance tests are often coupled with stress testing and usually require both hardware and software instrumentation.
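The coupling of stress and performance testing can be sketched as follows: the same operation is driven at an abnormal volume while its run time is measured. The workload size is an arbitrary illustration value, not a recommended threshold.

```python
import time

def process(items):
    """Hypothetical operation under test."""
    return sorted(items)

# Abnormally large input exercises the system under stress.
payload = list(range(100_000, 0, -1))

start = time.perf_counter()
result = process(payload)
elapsed = time.perf_counter() - start    # run-time measurement (performance)

# The system must remain correct under load, not merely survive it.
assert result[0] == 1 and result[-1] == 100_000
print(f"processed {len(payload)} items in {elapsed:.3f} s")
```

In a real performance test the elapsed time would be compared against a stated requirement (e.g., a response-time budget) rather than merely printed.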
