Software testing is a critical process for assessing the quality of computer software, involving the execution of programs to identify bugs and ensure they meet specifications. It encompasses various testing types, including unit, integration, system, and regression testing, and is distinct from Software Quality Assurance (SQA). Effective testing requires independence, skilled personnel, and thorough documentation to ensure reliability and compliance with user needs.

Uploaded by

VIPUL RASTOGI

Software Testing

Software testing is the process used to assess the quality of computer
software. It is an empirical technical investigation conducted to provide
stakeholders with information about the quality of the product or service
under test, with respect to the context in which it is intended to operate.
This includes executing a program or application with the intent of finding
software bugs. Quality is not an absolute: it is value to some person. With
that in mind, testing can never completely establish the correctness of
arbitrary computer software; it furnishes a criticism or comparison of the
state and behavior of the product against a specification. An important
point is that software testing should be distinguished from the separate
discipline of Software Quality Assurance (SQA), which encompasses all
business process areas, not just testing.
Software testing may be viewed as an important part of the software
quality assurance (SQA) process. In SQA, software process specialists and
auditors take a broader view of software and its development. They
examine and improve the software engineering process itself to reduce the
number of faults that end up in the delivered software. What constitutes an
acceptable defect rate depends on the nature of the software.

Software testing is used in association with verification and validation:

• Verification: Have we built the software right (i.e., does it match the
specification)?
• Validation: Have we built the right software (i.e., is this what the
customer wants)?

Simply stated, quality is very important. Many companies have not learned
that quality is important; they deliver more claimed functionality but at a
lower quality level. Yet it is much easier to explain to a customer why a
feature is missing than to explain why the product lacks quality. A
customer satisfied with the quality of a product will remain loyal and wait
for new functionality in the next version. Quality is a distinguishing
attribute of a system, indicating its degree of excellence.
The Testing Phase: Improve Quality

Phase: Testing
Deliverables:
• Regression Testing
• Internal Testing
• Unit Testing
• Application Testing
• Stress Testing

In many software engineering methodologies, the testing phase is a
separate phase, performed by a different team after the implementation is
completed. There is merit in this approach: it is hard to see one's own
mistakes, and a fresh eye can discover obvious errors much faster than the
person who has read and re-read the material many times. Unfortunately,
delegating testing to another team can lead to a slack attitude toward
quality on the part of the implementation team.
Alternatively, testing can be delegated to the whole organization. If the
teams are to be known as craftsmen, then they should be responsible for
establishing high quality across all phases. Sometimes an attitude change
must take place to guarantee quality.

Regardless of whether testing is done after the fact or continuously, it is
usually based on a regression technique split into several major focuses,
namely internal, unit, application, and stress testing.

TESTING PRINCIPLES
Software testing is an extremely creative and intellectually challenging
task. When testing follows the principles given below, the creative
element of test design and execution rivals any of the preceding software
development steps.
1. Testing must be done by an independent party.
Testing should not be performed by the person or team that developed the
software, since they tend to defend the correctness of the program.
2. Assign best personnel to the task.
Because testing requires high creativity and responsibility, only the best
personnel should be assigned to design, implement, and analyze test cases,
test data, and test results.
3. Testing should not be planned under the tacit assumption that no errors
will be found.
4. Test for invalid and unexpected input conditions as well as valid
conditions.
The program should generate correct messages when an invalid test is
encountered and should generate correct results when the test is valid.
5. The probability of the existence of more errors in a module or group of
modules is directly proportional to the number of errors already found.
6. Testing is the process of executing software with the intent of finding
errors.
7. Keep software static during test.
The program must not be modified while the designed set of test cases is
being executed.
8. Document test cases and test results.
9. Provide expected test results if possible.
A necessary part of test documentation is the specification of expected
results, even when determining them in advance is difficult.

TEST CASE DESIGN

Functional Testing
This is also called black-box testing. It takes an external perspective of
the test object to derive test cases. These tests can be functional or non-
functional, though usually functional. The test designer selects valid and
invalid inputs and determines the correct output; there is no knowledge of
the test object's internal structure.
This method of test design is applicable to all levels of software testing:
unit, integration, system, and acceptance. The higher the level, and hence
the bigger and more complex the box, the more one is forced to use black-
box testing to simplify. While this method can uncover unimplemented
parts of the specification, one cannot be sure that all existing paths are
tested.
Functional testing covers the following types of testing:
• Boundary value analysis
• Equivalence class testing
• Decision table based testing
• Cause-effect graphing technique
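As an illustrative sketch of the first two techniques, consider a hypothetical function that classifies ages and accepts only integers from 0 to 120. The function, its valid range, and all names below are assumptions for the example, not from the text; the test values are chosen purely from the external specification, with no knowledge of the internals.

```python
# Black-box test design for a hypothetical classify_age(age) function
# whose specification says: accept integers 0..120, reject everything else.
def classify_age(age):
    if not isinstance(age, int) or age < 0 or age > 120:
        raise ValueError("age must be an integer in 0..120")
    return "minor" if age < 18 else "adult"

# Equivalence class testing: one representative value per class.
valid_minor = 10      # class: 0 <= age < 18
valid_adult = 40      # class: 18 <= age <= 120
invalid_low = -5      # class: age < 0
invalid_high = 200    # class: age > 120

# Boundary value analysis: probe at and around each boundary.
boundaries = [0, 17, 18, 120]

assert classify_age(valid_minor) == "minor"
assert classify_age(valid_adult) == "adult"
assert [classify_age(b) for b in boundaries] == ["minor", "minor", "adult", "adult"]

for bad in (invalid_low, invalid_high):
    try:
        classify_age(bad)
        raise AssertionError("expected ValueError for invalid input")
    except ValueError:
        pass  # correct message/behavior for an invalid test
print("all black-box cases passed")
```

Note how the invalid classes are exercised as deliberately as the valid ones, in line with principle 4 above.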
RELIABILITY
The most important dynamic characteristic of software is its reliability.
Software reliability is the probability of failure-free software operation
for a specified period of time in a specified environment. Software
reliability is also an important factor affecting system reliability. It
differs from hardware reliability in that it reflects design perfection
rather than manufacturing perfection. The high complexity of software is
the major contributing factor to software reliability problems. Software
reliability is not a function of time, although researchers have come up
with models relating the two. Software reliability modeling is maturing,
but before using any technique we must carefully select the model that
best suits our case. Measurement in software is still in its infancy: no
good quantitative methods have been developed to represent software
reliability without excessive limitations. Various approaches can be used
to improve the reliability of software; however, it is hard to balance
development time and budget against software reliability.

VERIFICATION AND VALIDATION


Verification is a quality process that is used to evaluate whether or not a
product, service, or system complies with a regulation, specification, or
conditions imposed at the start of a development phase. Verification can
be in development, scale-up, or production. This is often an internal
process.
Validation is the process of establishing documented evidence that
provides a high degree of assurance that a product, service, or system
accomplishes its intended requirements. This often involves acceptance
and suitability with external customers.
It is sometimes said that validation ensures that ‘we built the right thing’
and verification ensures that ‘we built it right’. ‘Building the right thing’
refers back to the user’s needs, while ‘building it right’ checks that the
documented development process was followed. In some contexts, it is
required to have written requirements for both as well as formal
procedures or protocols for determining compliance.
V&V is intended to be a systematic and technical evaluation of software
and of the associated products of the development and maintenance
processes. Reviews and tests are done at the end of each phase of the
development process to ensure that software requirements are complete
and testable and that design, code, documentation, and data satisfy those
requirements.

Verification and Validation Planning


V&V is an expensive process. For some large systems, such as real-time
systems with complex non-functional constraints, half the system
development budget may be spent on V&V. Careful planning is needed to
get the most out of inspections and testing and to control the costs of the
verification and validation process. Planning of validation and
verification should start early in the development process. Test planning
is concerned with setting out standards for the testing process rather than
describing product tests. Test plans give staff an overall picture of the
system tests and let them place their own work in this context. Test plans
also identify who is responsible for ensuring that appropriate hardware
and software resources are available to the testing team.
The structure of a software test plan:
• The testing process
• Requirements traceability
• Tested items
• Testing schedule
• Test recording procedures
• Hardware and software requirements
• Constraints
Like other plans, the test plan is not a static document. It should be
revised regularly, as testing is an activity that depends on implementation
being complete: if a part of the system is incomplete, it cannot be
delivered for integration testing.
LEVELS OF TESTING
Testing is an important step in the software development life cycle. It
takes place at various stages of development and is vital because it helps
to identify mistakes and send the program back for correction.
The process is repeated at various stages until the final unit or program
is found to be complete, giving total quality to the development process.

White Box Testing


To perform this testing, the tester must have access to the source code of
the product under test, so it is essential that the person doing white-box
testing has some knowledge of the program being tested. Though not
strictly necessary, it is often most effective when the programmer
performs the white-box testing, since this process requires handling the
source code.
Black Box Testing
This is otherwise called functional testing. In contrast to white-box
testing, the person doing black-box testing need not have programming
knowledge, because the tester assesses the outputs as an end user would
and performs thorough functionality testing to check whether the
developed module or product behaves as it should.
Unit Testing
This testing is done for each module of the program to ensure the validity
of each module. It is usually done by developers, who write test cases for
each scenario of the module and record the results of each step.
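A minimal sketch of such developer-written unit tests, using Python's `unittest` module on a hypothetical `discount` function (the function and all names are illustrative assumptions), with one test method per scenario and the expected result recorded in each:

```python
import unittest

def discount(price, percent):
    """Hypothetical module under test: price reduced by percent (0..100)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (100 - percent) / 100, 2)

class DiscountTest(unittest.TestCase):
    # Each scenario of the module gets its own test case with an
    # explicit expected result, documenting what the module must do.
    def test_typical_discount(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_zero_and_full_discount(self):
        self.assertEqual(discount(80.0, 0), 80.0)
        self.assertEqual(discount(80.0, 100), 0.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(80.0, 150)

# Run this module's suite in isolation, as unit testing requires.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because each module's suite runs on its own, a failure points directly at that module rather than at an interaction between modules.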

Integration Testing
Unit testing each module, as explained above, makes integration testing
as a whole simpler, because correcting mistakes or bugs in each module
makes it easier to integrate all the units into a system and test them. So
why is integration testing needed at all? The answer is simple: unit
testing establishes the correctness of each module in isolation only. It
does not cover how the system behaves, or what errors are reported, when
the modules are integrated. That is covered at the level of integration
testing.

Top-down and Bottom-up Strategies: Top-down integration testing is an
incremental integration testing technique which begins by testing the
top-level module and progressively adds lower-level modules one by one.
Lower-level modules are normally simulated by stubs, which mimic the
functionality of the lower-level modules. As lower-level code is added,
the stubs are replaced with the actual components.
Top-down integration can be performed and tested in a breadth-first or
depth-first manner.
Advantages
• Drivers do not have to be written when top-down testing is used.
• It provides an early working version of the program, so design defects
can be found and corrected early.
Disadvantages
• Stubs have to be written with utmost care, as they simulate the setting
of output parameters.
• It is difficult to have other people or third parties perform this testing;
mostly the developers will have to spend time on it.
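A minimal sketch of how a stub stands in for a lower-level module during top-down integration (all names and values here are illustrative assumptions): the top-level pricing module is exercised first, against a canned tax-lookup stub, which is later swapped for the real component.

```python
# Top-down integration: test the top-level module first; the not-yet-
# integrated lower-level module is simulated by a stub.
def tax_rate_stub(region):
    # Stub mimicking the real lower-level lookup: it only sets the
    # output the top-level module needs, using canned values.
    return {"EU": 0.20, "US": 0.07}.get(region, 0.0)

def total_price(net, region, tax_lookup=tax_rate_stub):
    # Top-level module under test; tax_lookup is replaced by the
    # actual lower-level component once that component is ready.
    return round(net * (1 + tax_lookup(region)), 2)

# Exercise the top level against the stub before any lower-level
# code exists.
assert total_price(100.0, "EU") == 120.0
assert total_price(100.0, "US") == 107.0

def tax_rate_real(region):
    ...  # real lookup, integrated later to replace the stub
```

The stub must set its outputs carefully (the disadvantage noted above): if the canned rates disagree with the eventual real component, the top-level tests prove nothing.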
In bottom-up integration testing, modules at the lowest level are
developed first, and the other modules that lead toward the 'main'
program are integrated and tested one at a time. Bottom-up integration
also uses test drivers to drive the lower-level modules and pass
appropriate data to them. As the code for each higher module becomes
ready, these drivers are replaced with the actual modules.
In this approach, lower-level modules are tested extensively, ensuring
that the most heavily used modules are tested properly.
Advantages
• The behavior of the interaction points is crystal clear, as components
are added in a controlled manner and tested repetitively.
• Appropriate for applications where a bottom-up design methodology is
used.
Disadvantages
• Writing and maintaining test drivers or harnesses is more difficult than
writing stubs.
• This approach is not suitable for software developed using a top-down
approach.
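A corresponding sketch for bottom-up integration (names are illustrative assumptions): a throwaway driver passes data to the lowest-level module and checks it, before any of its eventual callers exist.

```python
# Bottom-up integration: the lowest-level module is integrated and
# tested first, driven by a temporary test driver.
def parse_record(line):
    # Lowest-level module: tested before its callers are written.
    name, qty = line.split(",")
    return name.strip(), int(qty)

def driver():
    # Test driver standing in for the not-yet-written caller; it is
    # replaced by the actual higher-level module once that is ready.
    samples = ["widget, 3", "gadget, 10"]
    results = [parse_record(s) for s in samples]
    assert results == [("widget", 3), ("gadget", 10)]
    return results

driver()
print("lowest-level module exercised by driver")
```

The driver both supplies appropriate data and checks the results, which is why drivers tend to be harder to write and maintain than stubs.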

System Testing
System testing of software or hardware is testing conducted on a
complete, integrated system to evaluate the system's compliance with its
specified requirements. System testing falls within the scope of black-box
testing and, as such, should require no knowledge of the inner design of
the code or logic.
As a rule, system testing takes as its input all of the integrated software
components that have successfully passed integration testing, together
with the software system itself integrated with any applicable hardware
system(s). The purpose of integration testing is to detect any
inconsistencies between the software units that are integrated together
(called assemblages) or between any of the assemblages and the
hardware. System testing is a more limiting type of testing; it seeks to
detect defects both within the 'inter-assemblages' and within the system
as a whole.
System testing is performed on the entire system in the context of a
Functional Requirement Specification (FRS) and/or a System
Requirement Specification (SRS). System testing is an investigatory
testing phase, where the focus is to have an almost destructive attitude
and to test not only the design but also the behavior and even the
believed expectations of the customer. It is also intended to test up to,
and some suggest beyond, the bounds defined in the software/hardware
requirements specification(s), although how this is meaningfully possible
is undefined.

Regression Testing
The development life cycle is subject to continuous change as user
requirements evolve. If there is a change to an existing system that has
already been tested, it is essential to make sure that the new change does
not affect the existing functionality. Regression testing is done to ensure
this.
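A minimal sketch of the idea, assuming a hypothetical `slugify` function and a recorded suite of input/expected pairs (all illustrative): the suite is captured while the existing behavior is known to be correct, then rerun in full after every change to the system.

```python
# Regression testing sketch: a saved suite of (input, expected) pairs
# guards existing behavior whenever the system is changed.
def slugify(title):
    # Existing, already-tested function; any later modification must
    # leave the recorded behavior below intact.
    return "-".join(title.lower().split())

# Suite recorded while the original functionality was being tested:
regression_suite = [
    ("Hello World", "hello-world"),
    ("Software  Testing", "software-testing"),
    ("UPPER", "upper"),
]

# After every change, the whole suite is rerun, not just new tests:
for given, expected in regression_suite:
    assert slugify(given) == expected, f"regression on {given!r}"
print("no regressions detected")
```

Rerunning the whole suite, rather than only tests for the changed code, is what distinguishes regression testing from ordinary retesting.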

Smoke Test
This is also called sanity testing. It is mainly used to identify
environment-related problems and is performed mostly by the test
manager. For any application, it is always necessary to have the
environment checked first for smooth running of the application. In this
testing process the application is run in the environment (technically
called a dry run) to check that it can run without any problem or abend.
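A minimal sketch of such environment checks (the specific checks, and the use of a standard-library module as a stand-in for the application's entry module, are assumptions for illustration):

```python
# Smoke-test sketch: before functional testing begins, quickly verify
# the environment and that the application can start at all.
import importlib
import os
import sys
import tempfile

def smoke_checks():
    problems = []
    # Environment checks: interpreter version and a writable temp dir.
    if sys.version_info < (3, 8):
        problems.append("Python interpreter too old")
    if not os.access(tempfile.gettempdir(), os.W_OK):
        problems.append("temp directory not writable")
    # Dry run: can the application's entry module be imported at all?
    try:
        importlib.import_module("json")  # stand-in for the app module
    except ImportError:
        problems.append("application module missing")
    return problems

issues = smoke_checks()
assert issues == [], issues
print("smoke test passed: environment looks sane")
```

If any check fails, functional testing is pointless until the environment is fixed, which is exactly the problem class smoke testing is meant to catch early.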

Alpha Testing
The different testing processes described above take place at different
stages of development, as required. But a final round of testing is always
done on the fully finished product before it is released to end users; this
is called alpha testing. Alpha testing involves both white-box and black-
box testing, and is therefore carried out in two phases.
Beta Testing
This testing is carried out to further validate the software developed, and
it takes place after alpha testing. Even after the alpha phase, the release
is generally not made to all end users; the product is released to a limited
set of people, and feedback is gathered from them to ensure the validity
of the product. Since the testing here is normally done by a group of end
users, the beta testing phase covers black-box (functionality) testing
only.
