
Unit III: Part B

Unit testing involves testing individual software components in isolation from the rest of the system. The goals of unit testing are to detect defects in units and to test their functionality according to specifications. Unit testing should be planned with unit test cases designed to test different aspects of units like functions, algorithms, data, and logic. Tests are run against the units and results are recorded. Integration testing then tests interactions between units followed by system and acceptance testing.

Uploaded by

Sumathy Jayaram

UNIT III

LEVELS OF TESTING
The need for Levels of Testing – Unit Test – Unit Test Planning – Designing the Unit Tests
– The Test Harness – Running the Unit tests and Recording results – Integration tests –
Designing Integration Tests – Integration Test Planning – Scenario testing – Defect bash
elimination – System Testing – Acceptance testing – Performance testing – Regression
Testing – Internationalization testing – Ad-hoc testing – Alpha, Beta Tests – Testing OO
systems – Usability and Accessibility testing – Configuration testing – Compatibility testing
– Testing the documentation –Website testing
Part B
1. What do you mean by unit testing? Explain in detail about the process of unit testing
and unit test planning. (Apr/May – 2018) (Apr/May – 2017)
2. Write the importance of security testing and explain the consequences of security
breaches; also write the various areas which have to be focused on during security
testing. (Apr/May – 2018)
3. Write notes on configuration testing and its objectives. (Apr/May – 2018)
4. State the need for integration testing in procedural code. (Apr/May – 2018)
5. Explain in detail about test harness. Also write notes on integration test.
6. Explain various system testing approaches in detail. (Nov/Dec – 2016) (Apr/May –
2017)
7. Write notes on regression testing, alpha and beta acceptance testing strategies.
8. Write notes on configuration testing and compatibility testing. (Nov/Dec – 2016)

Different Levels of Software Testing

 Unit testing
 Integration testing
 System testing
 Acceptance testing

The Need for Levels of Software Testing


Unit Test: In unit test a single component is tested
Goal : To detect functional and structural defects in the unit

Integration Test: In the integration level several components are tested as a group
Goal :To investigate component interactions

System Test: In the system level the system as a whole is tested


Goal : To evaluate attributes such as usability, reliability, and performance

Acceptance test: In acceptance test the development organization must show that the
software meets all of the client’s requirements.
Goal : To demonstrate to the client that the software meets the specified requirements so
that the client accepts the delivered system.

The Task Required for Preparing Unit Test by the Developer/Tester


To prepare for unit test, the developer/tester must perform several tasks. They are:
 Plan the general approach to unit testing.
 Design the test cases, and test procedures.
 Define the relationship between the tests.
 Prepare the support code necessary for unit test.
The Tasks Required for Planning of a Unit Test
 Describe unit test approach and risks.
 Identify unit features to be tested.
 Add levels of detail to the plan.

The Components Suitable for Conducting the Unit Test


 Procedure and function
 Class/object and methods.
 Procedure-sized reusable component.

UNIT TESTING
Functions, Procedures, Classes, and Methods as Units
A workable definition for a software unit is as follows:
A unit is the smallest possible testable software component.
It can be characterized in several ways. For example, a unit in a typical procedure-oriented
software system:
• performs a single cohesive function;
• can be compiled separately;
• is a task in a work breakdown structure (from the manager’s point of view);
• contains code that can fit on a single page or screen.
• A unit is traditionally viewed as a function or procedure implemented in a
procedural (imperative) programming language.
• In object-oriented systems both the method and the class/object have been suggested
by researchers as the choice for a unit
• A unit may also be a small-sized COTS component purchased from an outside
vendor that is undergoing evaluation by the purchaser, or a simple module retrieved
from an in-house reuse library.
Some components suitable for unit test

Fig. Some components suitable for unit test.

Advantages of Unit Test:


1. It is easier to design, execute, record, and analyze test results.
2. If a defect is revealed by the tests, it is easier to locate and repair since only
one unit is under consideration.
The principal goal for unit testing
The principal goal for unit testing is to ensure that each individual software unit is
functioning according to its specification. Good testing practice calls for unit tests that are
planned and public. Planning includes designing tests to reveal defects such as functional
description defects, algorithmic defects, data defects, and control logic and sequence defects.
The unit should be tested by an independent tester (someone other than the developer) and
the test results and defects found should be recorded as a part of the unit history.

To prepare for unit test the developer/tester must perform several tasks. These are:
(i) plan the general approach to unit testing;
(ii) design the test cases, and test procedures (these will be attached to the test plan);
(iii) define relationships between the tests;
(iv) prepare the auxiliary code necessary for unit test.
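The tasks above can be illustrated with a minimal sketch using Python's `unittest` framework. The unit under test (`discount`) and its specification are invented for illustration; the test cases are planned up front from that specification.

```python
import unittest

def discount(total):
    """Hypothetical unit under test: 10% off orders of 100 or more."""
    if total < 0:
        raise ValueError("total must be non-negative")
    return total * 0.9 if total >= 100 else total

class TestDiscount(unittest.TestCase):
    """Planned unit test cases derived from the unit's specification."""

    def test_no_discount_below_threshold(self):
        self.assertEqual(discount(99), 99)       # functional behavior

    def test_discount_at_threshold(self):
        self.assertEqual(discount(100), 90.0)    # boundary value

    def test_rejects_negative_input(self):
        with self.assertRaises(ValueError):      # control-logic check
            discount(-1)

if __name__ == "__main__":
    suite = unittest.TestLoader().loadTestsFromTestCase(TestDiscount)
    unittest.TextTestRunner().run(suite)
```

Recording the runner's output as part of the unit's history would complete the "planned and public" practice described above.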

UNIT TEST PLANNING


• A general unit test plan should be prepared. It may be prepared as a component of the
master test plan or as a stand-alone plan.
• It should be developed in conjunction with the master test plan and the project plan for
each project.
• Documents that provide inputs for the unit test plan are the project plan, as well the
requirements, specification, and design documents that describe the target units.
• Components of a unit test plan are described in detail in the IEEE Standard for Software
Unit Testing.

A brief description of a set of development phases for unit test planning is found
below. In each phase a set of activities is assigned based on those found in the IEEE
Standard for Software Unit Testing.
Phase 1: Describe Unit Test Approach and Risks
In this phase of unit testing planning the general approach to unit testing is outlined. The
test planner:
(i) identifies test risks;
(ii) describes techniques to be used for designing the test cases for the units;
(iii) describes techniques to be used for data validation and recording of test results;
(iv) describes the requirements for test harnesses and other software that interfaces
with the units to be tested, for example, any special objects needed for testing object-
oriented units.

Phase 2: Identify Unit Features to be Tested


This phase requires information from the unit specification and detailed design
description. The planner determines which features of each unit will be tested, for
example: functions, performance requirements, states, and state transitions, control
structures, messages, and data flow patterns.

Phase 3: Add Levels of Detail to the Plan


In this phase the planner refines the plan as produced in the previous two phases. The
planner adds new details to the approach, resource, and scheduling portions of the unit
test plan.
As an example, existing test cases that can be reused for this project can be identified in
this phase. Unit availability and integration scheduling information should be included in
the revised version of the test plan. The planner must be sure to include a description of
how test results will be recorded.

The next steps in unit testing consist of designing the set of test cases, developing the
auxiliary code needed for testing, executing the tests, and recording and analyzing the
results.

DESIGNING THE UNIT TESTS


• Part of the preparation work for unit test involves unit test design.
• It is important to specify
(i) the test cases (including input data, and expected outputs for each test case)
(ii) the test procedures (steps required to run the tests).
• As part of the unit test design process, developers/testers should also describe the
relationships between the tests.
• Test suites can be defined that bind related tests together as a group.
• All of this test design information is attached to the unit test plan.
• Test cases, test procedures, and test suites may be reused from past projects if the
organization has been careful to store them so that they are easily retrievable and reusable.
• Test case design approaches for functions and procedures are also useful for designing
tests for the individual methods (member functions) contained in a class. This approach
gives the tester the opportunity to exercise logic structures and/or data flow sequences, or
to use mutation analysis, all with the goal of evaluating the structural integrity of the unit.
• In the case of a smaller-sized COTS component selected for unit testing, a black box
test design approach may be the only option. It should be mentioned that for units that
perform mission/safety/business critical functions, it is often useful and prudent to design
stress, security, and performance tests at the unit level if possible.
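As a sketch of such a design, the test cases below are specified as data (input plus expected output) before being run; the unit (`classify`, a score grader) is hypothetical, and the cases cover each equivalence class and its boundaries.

```python
# Hypothetical unit under test: classify an exam score in [0, 100].
def classify(score):
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    if score < 50:
        return "fail"
    return "merit" if score >= 75 else "pass"

# Test cases specified up front: one per equivalence class plus boundaries.
TEST_CASES = [
    (0, "fail"), (49, "fail"),      # lowest class and its upper boundary
    (50, "pass"), (74, "pass"),     # middle class and both boundaries
    (75, "merit"), (100, "merit"),  # highest class and both boundaries
]

# Test procedure: run every case and record a pass/fail verdict per case.
def run_suite():
    return [(inp, classify(inp) == expected) for inp, expected in TEST_CASES]
```

A table like `TEST_CASES` can be stored with the unit test plan and reused on later projects, as suggested above.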

UNIT TEST ON CLASS / OBJECTS


Unit testing on object oriented systems
The choices are (a) the individual method as a unit or (b) the class as a whole.
•Many developers/testers consider the class to be the component of choice for unit testing.
•The process of testing classes as units is sometimes called component test .

• Testing levels in object oriented systems


– operations associated with objects
• usually not tested in isolation because of encapsulation and size (too small)
– classes -> unit testing
– clusters of cooperating objects -> integration testing
– the complete OO system -> system testing
• Complete test coverage of a class involves
– Testing all operations associated with an object
– Setting and interrogating all object attributes
– Exercising the object in all possible states
• Inheritance makes it more difficult to design object class tests as the information to
be tested is not localised

Challenges/issues of Class Testing


If the class is the selected component, testers may need to address special issues related
to the testing and retesting of these components.
Some of these issues are described below:
• Issue 1: Adequately Testing Classes
The potentially high costs for testing each individual method in a class have been
described. These high costs will be particularly apparent when there are many
methods in a class; the numbers can reach as high as 20 to 30. Finally, a tester
might use a combination of approaches, testing some of the critical methods on an
individual basis as units, and then testing the class as a whole.
• Issue 2: Observation of Object States and State Changes
Methods may not return a specific value to a caller. They may instead change the
state of an object. The state of an object is represented by a specific set of values
for its attributes or state variables.
• Issue 3: Encapsulation
– Difficult to obtain a snapshot of a class without building extra methods
which display the class's state
• Issue 4 :Inheritance
– Each new context of use (subclass) requires re-testing because a method may
be implemented differently (polymorphism).
– Other unaltered methods within the subclass may use the redefined method
and need to be tested
• Issue 5:White box tests
Basis path, condition, data flow and loop tests can all be applied to individual
methods within a class.
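Issue 2 can be made concrete with a small sketch: the hypothetical `Account` class below has a method that returns nothing, so the test must set up a known state, invoke the method, and then interrogate the object's attributes.

```python
class Account:
    """Hypothetical class under test; deposit() changes state, returns None."""
    def __init__(self, balance=0):
        self.balance = balance
        self.open = True

    def deposit(self, amount):
        if not self.open:
            raise RuntimeError("account closed")
        self.balance += amount

    def close(self):
        self.open = False

def test_deposit_changes_state():
    acct = Account(balance=10)   # put the object into a known state
    acct.deposit(5)              # the method under test returns no value
    assert acct.balance == 15    # so the test interrogates the state instead

def test_deposit_rejected_when_closed():
    acct = Account()
    acct.close()                 # exercise the object in a different state
    try:
        acct.deposit(1)
        assert False, "expected RuntimeError"
    except RuntimeError:
        pass
```

Because `balance` is a public attribute here, no extra state-reporting methods are needed; under stricter encapsulation (Issue 3), such methods would have to be added.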

THE TEST HARNESS


The auxiliary code developed to support testing of units and components is called a test
harness. The harness consists of drivers that call the target code and stubs that represent
modules it calls.

Fig. The test harness


Drivers and stubs can be developed at several levels of functionality
Functionality of a driver
(i) call the target unit;
(ii) do 1, and pass inputs parameters from a table;
(iii) do 1, 2, and display parameters;
(iv) do 1, 2, 3 and display results (output parameters)
Functionality of a stub
(i) display a message that it has been called by the target unit;
(ii) do 1, and display any input parameters passed from the target unit;
(iii) do 1, 2, and pass back a result from a table;
(iv) do 1, 2, 3, and display result from table
Drivers and stubs are developed as procedures and functions for traditional imperative-
language based systems. For object-oriented systems, developing drivers and stubs often
means the design and implementation of special classes to perform the required testing tasks.
The higher the degree of functionality of the harness, the more resources it will require to
design, implement, and test. Developers/testers will have to decide, depending on the nature
of the code under test, just how complex the test harness needs to be.
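A minimal harness along these lines might look as follows in Python; the target unit, the `TaxServiceStub`, and the table values are all invented. The stub operates at levels (i)-(iii) above (records the call, logs the input, returns a canned result from a table), and the driver at levels (i)-(iv) (calls the unit with inputs from a table and collects results).

```python
# Target unit: computes a gross price, delegating the tax rate lookup
# to a collaborating module that is not yet available.
def price_with_tax(amount, tax_service):
    return round(amount * (1 + tax_service.rate_for("default")), 2)

class TaxServiceStub:
    """Stub for the missing collaborator: logs calls, answers from a table."""
    def __init__(self, table):
        self.table = table
        self.calls = []                # levels (i)/(ii): record call + input

    def rate_for(self, region):
        self.calls.append(region)
        return self.table[region]      # level (iii): result from a table

def driver(cases):
    """Driver: calls the target unit with inputs from a table (level (ii))
    and returns the results for recording (level (iv))."""
    stub = TaxServiceStub({"default": 0.10})
    results = []
    for amount, expected in cases:
        actual = price_with_tax(amount, stub)   # level (i): call the unit
        results.append((amount, actual, actual == expected))
    return results, stub.calls
```

For object-oriented code the same roles would be played by special test classes, as noted above.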

RUNNING THE UNIT TESTS AND RECORDING RESULTS


Unit tests can begin when
(i) the unit becomes available from the developers (an estimation of availability is
part of the test plan),
(ii) the test cases have been designed and reviewed, and
(iii) the test harness, and any other supplemental supporting tools, are available.
The testers then proceed to run the tests and record results. Documents called test logs
can be used to record the results of specific tests. The status of the test efforts for a
unit, and a summary of the test results, could be recorded in a simple format such as
shown in Table.
It is very important for the tester at any level of testing to carefully record, review, and
check test results. The tester must determine from the results whether the unit has passed
or failed the test. If the test is failed, the nature of the problem should be recorded in what
is sometimes called a test incident report. Differences from expected behavior should be
described in detail. This gives clues to the developers to help them locate any faults.

TABLE- Summary work sheet for unit test results

During testing the tester may determine that additional tests are required. For example, a
tester may observe that a particular coverage goal has not been achieved. The test set will
have to be augmented and the test plan documents should reflect these changes.

Reasons for the failure of a Unit


 a fault in the unit implementation (the code);
 a fault in the test case specification (the input or the output was not specified
correctly);
 a fault in test procedure execution (the test should be rerun);
 a fault in the test environment (perhaps a database was not set up properly);
 a fault in the unit design (the code correctly adheres to the design specification, but
the latter is incorrect).
The causes of the failure should be recorded in a test summary report, which is a summary of
testing activities for all the units covered by the unit test plan.
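A simple structure for recording these results can be sketched as below; the field names are illustrative, loosely following the test incident report and summary worksheet described above.

```python
from dataclasses import dataclass, field

@dataclass
class TestIncidentReport:
    """One failed test: expected vs. observed behavior and suspected cause."""
    test_id: str
    expected: str
    actual: str
    suspected_cause: str  # e.g. "unit code", "test case spec", "environment"

@dataclass
class UnitTestSummary:
    """Summary worksheet for one unit's test results."""
    unit_name: str
    total: int = 0
    passed: int = 0
    incidents: list = field(default_factory=list)

    def record(self, test_id, ok, expected="", actual="", cause=""):
        self.total += 1
        if ok:
            self.passed += 1
        else:
            self.incidents.append(
                TestIncidentReport(test_id, expected, actual, cause))

    def status(self):
        return "PASS" if self.passed == self.total else "FAIL"
```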

INTEGRATION TESTING
Integration testing focuses on testing interfaces that are "implicit and explicit" and
"internal and external."

The Major Goals of Integration Test


Integration test for procedural code has two major goals
Integration testing means testing of interfaces
 To detect defects that occur on the interfaces of units.
 To assemble the individual units into working subsystems and finally a complete
system that is ready for system test.
In unit test the testers attempt to detect defects that are related to the functionality and
structure of the unit. There is some simple testing of unit interfaces when the units interact
with drivers and stubs. However, the interfaces are more adequately tested during
integration test when each unit is finally connected to a full and working implementation of
those units it calls, and those that call it. As a consequence of this assembly or integration
process, software subsystems and finally a completed system is put together during the
integration test. The completed system is then ready for system testing.

Figure 5.1 shows a set of modules and the interfaces associated with them. The solid lines
represent explicit interfaces and the dotted lines represent implicit interfaces, based on the
understanding of architecture, design, or usage.
Figure 5.1 A set of modules and interfaces.

DESIGNING INTEGRATION TESTS


• Testing of groups of components integrated to create a sub-system
• Usually the responsibility of an independent testing team (except sometimes in small
projects)
• Integration testing should be black-box testing with tests derived from the
specification
• A principal goal is to detect defects that occur on the interfaces of units
• Main difficulty is localising errors
• Incremental integration testing (as opposed to big-bang integration testing) reduces
this problem

Test drivers and stubs


• Auxiliary code developed to support testing
• Test drivers
– call the target code
– simulate calling units or a user
– where test procedures and test cases are coded (for automatic test case
execution) or a user interface is created (for manual test case execution)
• Test stubs
– simulate called units
– simulate modules/units/systems called by the target code
Approaches to integration testing
There are several methodologies available to decide the order for integration testing.
These are as follows.
1. Top-down integration
2. Bottom-up integration
3. Bi-directional integration
4. System integration
Top-Down Integration
Top-down integration involves testing the topmost component's interface with the other
components in the same order as you navigate from top to bottom, till all the components
are covered.

Example of top-down integration.

Table : Order of testing interfaces for the example

Step  Interfaces tested
1     1-2
2     1-3
3     1-4
4     1-2-5
5     1-3-6
6     1-3-6-(3-7)
7     (1-2-5)-(1-3-6-(3-7))
8     1-4-8
9     (1-2-5)-(1-3-6-(3-7))-(1-4-8)

The order in which the interfaces are tested may change a bit if different methods of
traversing are used. A breadth first approach will get you component order such as 1–2, 1–3,
1–4 and so on and a depth first order will get you components such as 1–2–5, 1–3–6, and so
on.
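The two traversal orders can be derived mechanically from the module hierarchy. The sketch below encodes the eight-component example above as a call tree (the child lists are read off the interfaces in the table) and computes both orders.

```python
from collections import deque

# Call tree from the example: 1 calls 2, 3, 4; 2 calls 5; 3 calls 6 and 7; ...
CALLS = {1: [2, 3, 4], 2: [5], 3: [6, 7], 4: [8], 5: [], 6: [], 7: [], 8: []}

def breadth_first(root):
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node)
        queue.extend(CALLS[node])  # visit all components at one level first
    return order

def depth_first(root):
    order = [root]
    for child in CALLS[root]:
        order.extend(depth_first(child))  # follow each call chain downward
    return order
```

Here `breadth_first(1)` begins 1, 2, 3, 4 (matching the 1–2, 1–3, 1–4 order), while `depth_first(1)` begins 1, 2, 5, 3, 6 (matching 1–2–5, 1–3–6).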

Bottom-up Integration
Bottom-up integration is just the opposite of top-down integration, where the components
for a new product development become available in reverse order, starting from the bottom.
Example of bottom-up integration. Arrows pointing down depict logic flow; arrows pointing
up indicate integration paths.

The navigation in bottom-up integration starts from component 1 and covers all sub-
systems till component 8 is reached. Order of interfaces tested using bottom-up integration:

Step  Interfaces tested
1     1-5
2     2-6, 3-6
3     2-6-(3-6)
4     4-7
5     1-5-8
6     2-6-(3-6)-8
7     4-7-8
8     (1-5-8)-(2-6-(3-6)-8)-(4-7-8)

The arrows from bottom to top (that is, upward-pointing arrows) indicate integration
approach or integration path. What it means is that the logic flow of the product can be
different from the integration path. It may be easy to say that top-down integration approach
is best suited for the Waterfall and V models and the bottom-up approach for the iterative
and agile methodologies.
Advantages and disadvantages
• Architectural validation
– Top-down integration testing is better at discovering errors in the system
architecture
• System demonstration
– Top-down integration testing allows a limited demonstration at an early
stage in the development
• Test implementation
– Often easier with bottom-up integration testing
• Test observation
– Problems with both approaches. Extra code may be required to observe tests

Bi-Directional Integration
Bi-directional integration is a combination of the top-down and bottom-up integration
approaches used together to derive integration steps.

The individual components 1, 2, 3, 4, and 5 are tested separately and bi-directional


integration is performed initially with the use of stubs and drivers. Drivers are used to
provide upstream connectivity while stubs provide downstream connectivity. A driver is a
function which redirects the requests to some other component and stubs simulate the
behavior of a missing component. After the functionality of these integrated components are
tested, the drivers and stubs are discarded. Once components 6, 7, and 8 become available,
the integration methodology then focuses only on those components, as these are the
components which need focus and are new. This approach is also called "sandwich
integration."

Figure: Bi-directional integration.

Table : Steps for integration using sandwich testing.


Step  Interfaces tested
1     6-2
2     7-3-4
3     8-5
4     (1-6-2)-(1-7-3-4)-(1-8-5)

As you can see from the table, steps 1–3 use a bottom-up integration approach and step 4
uses a top-down integration approach for this example.
System Integration
System integration means that all the components of the system are integrated and tested as a
single unit. Integration testing, which is testing of interfaces, can be divided into two types:
 Components or sub-system integration
 Final integration testing or system integration

Instead of integrating component by component and testing, this approach waits till all
components arrive and one round of integration testing is done. This approach is also
called big-bang integration. It reduces testing effort and removes duplication in testing.

Big bang integration is ideal for a product where the interfaces are stable and have few
defects.

System integration using the big bang approach is well suited in a product development
scenario where the majority of components are already available and stable and very few
components get added or modified.

Cluster Test Plan Used In Integration Testing For OO Systems


• A cluster consists of classes that are related, for example, they may work together
(cooperate) to support a required functionality for the complete system.
• The Cluster Test Plan includes the following items:
o A natural languages description of the function of the cluster to be tested;
o List of classes in the cluster;
o clusters this cluster is dependent on;
o A set of cluster test cases.
INTEGRATION TEST PLANNING
Integration test must be planned. Planning can begin when high-level design is complete so
that the system architecture is defined.
Documents relevant to integration test planning
•system architecture,
•requirements document,
•the user manual, and
•usage scenarios.
Contents of the integration test plan document
 structure charts,
 state charts,
 data dictionaries,
 cross-reference tables,
 module interface descriptions,
 data flow descriptions,
 message and event descriptions
For procedure-oriented systems the order of integration of the units should be defined. This
depends on the strategy selected.
•For object-oriented systems a working definition of a cluster or similar construct must be
described, and relevant test cases must be specified. In addition, testing resources and
schedules for integration should be included in the test plan.

For readers integrating object-oriented systems, Murphy et al. have a detailed description of a
Cluster Test Plan. The plan includes the following items:
(i) clusters this cluster is dependent on;
(ii) a natural language description of the functionality of the cluster to be tested;
(iii) list of classes in the cluster;
(iv) a set of cluster test cases.
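These four items can be sketched as a small structure; the field names and the dependency check are illustrative, not part of the cited plan.

```python
from dataclasses import dataclass, field

@dataclass
class ClusterTestPlan:
    """Sketch of a cluster test plan; fields mirror items (i)-(iv) above."""
    description: str                  # natural-language function of the cluster
    classes: list                     # classes in the cluster
    depends_on: list = field(default_factory=list)  # clusters needed first
    test_cases: list = field(default_factory=list)  # cluster test cases

    def ready(self, completed_clusters):
        """A cluster can be integration-tested once its dependencies are done."""
        return all(dep in completed_clusters for dep in self.depends_on)
```

The `ready` check reflects the scheduling concern above: cluster availability constrains the integration order.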

SCENARIO TESTING
Scenario testing is defined as a “set of realistic user activities that are used for evaluating
the product.” It is also defined as the testing involving customer scenarios.
There are two methods to evolve scenarios.
1. System scenarios
2. Use-case scenarios/role based scenarios
System Scenarios
System scenario is a method whereby the set of activities used for scenario testing covers
several components in the system. The following approaches can be used to develop system
scenarios.
Story line Develop a story line that combines various activities of the product that may be
executed by an end user.

Life cycle/state transition Consider an object, derive the different


transitions/modifications that happen to the object, and derive scenarios to cover them.

Deployment/implementation stories from customer Develop a scenario from known


customer deployment/implementation details and create a set of activities by various users in
that implementation.

Business verticals Visualize how a product/software will be applied to different verticals


and create a set of activities as scenarios to address specific vertical businesses.

Battle ground Create some scenarios to justify that "the product works" and some
scenarios to "try and break the system" to justify "the product doesn't work." This adds
flavor to the scenarios mentioned above.

Coverage is always a big question with respect to functionality in scenario testing.


However, by using a simple technique, some confidence can be gained about the
coverage of activities by scenario testing.

Table : Coverage of activities by scenario testing.

Use Case Scenarios


A use case scenario is a stepwise procedure on how a user intends to use a system, with
different user roles and associated parameters. A use case scenario can include stories,
pictures, and deployment details. Use cases are useful for explaining customer problems and
how the software can solve those problems without any ambiguity.
Use case scenarios term the users with different roles as actors. What the product should
do for a particular activity is termed as system behavior. Intermediaries with a specific role
that interact between the actors and the system are called agents.
Figure :Example of a use case scenario in a bank.
Table : Actor and system response in use case for ATM cash withdrawal.
Actor                                        System response

User wants to withdraw cash and inserts      Requests the password or Personal
the card in the ATM machine                  Identification Number (PIN)

User fills in the password or PIN            Validates the password or PIN; gives a
                                             list containing types of accounts

User selects an account type                 Asks the user for the amount to withdraw

User fills in the amount of cash required    Checks availability of funds; updates
                                             account balance; prepares receipt;
                                             dispenses cash

User retrieves the cash                      Prints receipt


This way of documenting a scenario and testing makes it simple and also realistic for
customer usage. Use cases are not used only for testing; they are also useful for combining
the business perspective and implementation detail and testing them together.
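The actor/response table above translates naturally into a table-driven scenario test. The `ToyATM` below is an invented stand-in for the real system; each scenario step pairs an actor action with the expected system response.

```python
class ToyATM:
    """Invented stand-in for the system under test."""
    def __init__(self, pin="1234", funds=500):
        self.pin, self.funds, self.authorized = pin, funds, False

    def insert_card(self):
        return "request PIN"

    def enter_pin(self, pin):
        self.authorized = (pin == self.pin)
        return "list account types" if self.authorized else "reject"

    def withdraw(self, amount):
        if not self.authorized or amount > self.funds:
            return "decline"
        self.funds -= amount
        return "dispense cash and print receipt"

# Each step: (actor action, expected system response), as in the table above.
SCENARIO = [
    (lambda atm: atm.insert_card(), "request PIN"),
    (lambda atm: atm.enter_pin("1234"), "list account types"),
    (lambda atm: atm.withdraw(100), "dispense cash and print receipt"),
]

def run_scenario():
    atm = ToyATM()
    return all(action(atm) == expected for action, expected in SCENARIO)
```

Keeping the scenario as data makes it easy to add role variations (for example, a wrong PIN followed by a declined withdrawal) without touching the test procedure.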

Defect bash elimination Testing


A defect bash is a form of ad hoc testing, done by people performing different roles at the
same time during the integration testing phase, to bring out all types of defects that may
have been left out by planned testing.

Defect bash is an ad hoc testing where people performing different roles in an organization
test the product together at the same time.
Defect bash brings together plenty of good practices that are popular in testing industry.
They are as follows.
1. Enabling people to "cross boundaries and test beyond assigned areas"
2. Bringing different people performing different roles together in the organization for
testing—"testing isn't for testers alone"
3. Letting everyone in the organization use the product before delivery—"eat your
own dog food"
4. Bringing fresh pairs of eyes to uncover new defects—"fresh eyes have less bias"
5. Bringing in people who have different levels of product understanding to test the
product together randomly—"users of software are not the same"
6. Not letting testing wait for documentation—"does testing
wait till all documentation is done?"
7. Enabling people to say "the system works" as well as enabling them to "break the
system"—"testing isn't to conclude the system works or doesn't work"

Even though it is said that defect bash is an ad hoc testing, not all activities of defect bash
are unplanned. All the activities in the defect bash are planned activities, except for what is
to be tested. It involves several steps.
Step 1 :Choosing the frequency and duration of defect bash
Step 2 :Selecting the right product build
Step 3 :Communicating the objective of each defect bash to everyone
Step 4 :Setting up and monitoring the lab for defect bash
Step 5 :Taking actions and fixing issues
Step 6 :Optimizing the effort involved in defect bash
1.Choosing the Frequency and Duration of Defect Bash
Defect bash is an activity involving a large amount of effort (since it involves a large
number of people) and considerable planning (as is evident from the above steps).
2.Selecting the Right Product Build
Since the defect bash involves a large number of people, effort and planning, a good quality
build is needed for defect bash. A regression tested build would be ideal as all new features
and defect fixes would have been already tested in such a build.
3.Communicating the Objective of Defect Bash
The objective should be to find a large number of uncovered defects, to find out system
requirements (CPU, memory, disk, and so on), or to find the non-reproducible or random
defects, which could be difficult to find through other planned tests.
4.Setting Up and Monitoring the Lab
During defect bash, the product parameters and system resources (CPU, RAM, disk,
network) need to be monitored for defects and also corrected so that users can continue to
use the system for the complete duration of the defect bash.
There are two types of defects that will emerge during a defect bash. The defects that are
in the product, as reported by the users, can be classified as functional defects.
Defects that are unearthed while monitoring the system resources, such as memory leak,
long turnaround time, missed requests, high impact and utilization of system resources, and
so on are called non-functional defects.
5.Taking Actions and Fixing Issues
The last step is to take the necessary corrective action after the defect bash. Getting a large
number of defects from users is the purpose and also the normal end result from a defect
bash. Many defects could be duplicate defects. It is difficult to solve all the problems if they
are taken one by one and fixed in code.
6.Optimizing the Effort Involved in Defect Bash
Having a tested build, keeping the right setup, sharing the objectives, and so on help to
save effort and meet the purpose. Another approach to reduce the defect bash effort is to
conduct "micro level" defect bashes before conducting one on a large scale.

SYSTEM TESTING
The testing conducted on the complete integrated products and solutions to evaluate system
compliance with specified requirements on functional and nonfunctional aspects is called
system testing.
The goal is to ensure that the system performs according to its requirements.
System test evaluates both functional behavior and quality requirements such as reliability,
usability, performance and security.

SEVERAL TYPES OF SYSTEM TESTS


 Functional test
 Performance test
 Stress test
 Configuration test
 Security test
 Recovery test
Fig. Types of system tests

System testing is done for the following reasons.

1. Provide independent perspective in testing


2. Bring in customer perspective in testing
3. Provide a "fresh pair of eyes" to discover defects not found earlier by testing
4. Test product behavior in a holistic, complete, and realistic environment
5. Test both functional and non-functional aspects of the product
6. Build confidence in the product
7. Analyze and reduce the risk of releasing the product
8. Ensure all requirements are met and ready the product for acceptance testing

FUNCTIONAL TESTING
• Ensure that the behavior of the system adheres to the requirements specification
• All functional requirements for the system must be achievable by the system.
• Black-box in nature
• Equivalence class partitioning, boundary-value analysis and state-based testing are
valuable techniques
• Document and track test coverage with a (tests to requirements) traceability matrix
• A defined and documented form should be used for recording test results from
functional and other system tests
• Failures should be reported in test incident reports
– Useful for developers (together with test logs)
– Useful for managers for progress tracking and quality assurance purposes
• The tests should focus on the following goals.
– All types or classes of legal inputs must be accepted by the software.
– All classes of illegal inputs must be rejected (however, the system should
remain available).
– All possible classes of system output must be exercised and examined.
– All effective system states and state transitions must be exercised and
examined.
– All functions must be exercised.
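These goals map directly onto equivalence class partitioning and boundary value analysis. The sketch below assumes a hypothetical validate_age function (valid range 18 to 99) purely to illustrate how legal and illegal input classes become concrete test cases.

```python
def validate_age(age):
    """Hypothetical system function: accepts integer ages 18..99."""
    if not isinstance(age, int):
        return False
    return 18 <= age <= 99

# Equivalence classes: one valid class (18-99) and invalid classes
# (below range, above range, wrong type). Boundary values sit at the
# edges of the valid class.
legal_inputs = [18, 50, 99]          # valid class plus its boundaries
illegal_inputs = [17, 100, -1, "x"]  # invalid classes plus boundaries

for age in legal_inputs:
    assert validate_age(age), f"legal input {age} was rejected"
for age in illegal_inputs:
    assert not validate_age(age), f"illegal input {age} was accepted"
```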

PERFORMANCE TESTING
Requirements document shows that there are two major types of requirements:
1. Functional requirements: Users describe what functions the software should perform.
Testers test for compliance of these requirements at the system level with the functional-
based system tests.
2. Quality requirements: They are non-functional in nature but describe quality levels
expected for the software. One example of a quality requirement is performance level. The
users may have objectives for the software system in terms of memory use, response time,
throughput, and delays.
• Goals:
– See if the software meets the performance requirements
– See whether there are any hardware or software factors that impact the
system's performance
– Provide valuable information to tune the system
– Predict the system's future performance levels
• Performance objectives must be articulated clearly by the users/clients in the
requirements documents, and be stated clearly in the system test plan.
•The objectives must be quantified.
For example, a requirement that the system return a response to a query in "a reasonable
amount of time" is not an acceptable requirement; the time requirement must be specified in
a quantitative way.
•Resources for performance testing must be allocated in the system test plan .
• Results of performance test should be quantified, and the corresponding
environmental conditions should be recorded
• Resources usually needed
– a source of transactions to drive the experiments, typically a load generator
– an experimental test bed that includes hardware and software the system
under test interacts with
– instrumentation of probes that help to collect the performance data (event
logging, counting, sampling, memory allocation counters, etc.)
– a set of tools to collect, store, process and interpret data from probes
– any other special resources needed for a performance test
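As a minimal illustration of a load generator producing quantified results, the following sketch drives a transaction repeatedly and summarizes response-time statistics. Real performance testing would use dedicated tools and probe the actual system under test; the transaction here is a stand-in.

```python
import statistics
import time

def run_load(transaction, n_requests):
    """Drive the transaction n_requests times; return response times (s)."""
    times = []
    for _ in range(n_requests):
        start = time.perf_counter()
        transaction()
        times.append(time.perf_counter() - start)
    return times

def summarize(times):
    """Quantified results, as the performance objectives require."""
    ordered = sorted(times)
    return {
        "mean": statistics.mean(times),
        "p95": ordered[int(0.95 * len(ordered)) - 1],
        "max": ordered[-1],
    }

# Stand-in transaction; in practice, a query against the system under test.
times = run_load(lambda: sum(range(1000)), n_requests=100)
metrics = summarize(times)
assert metrics["mean"] <= metrics["max"]
```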
Stress Testing:
When a system is tested with a load that causes it to allocate its resources in maximum
amounts, it is called stress testing.
Ex.
If an operating system is required to handle 10 interrupts/second and the load causes 20
interrupts/second, the system is being stressed.
•The goal of a stress test is to try to break the system; find the circumstances under which it
will crash. This is sometimes called "breaking the system."
•Stress testing often uncovers race conditions, deadlocks, depletion of resources in unusual
or unplanned patterns, and upsets in normal operation of the software system.
•Stress testing is supported by many of the resources used for performance test

EXAMPLE: The load generator. The testers set the load generator parameters so that load
levels cause stress to the system. For example, in our example of a telecommunication
system, the arrival rate of calls, the length of the calls, the number of misdials, as well as
other system parameters should all be at stress levels. As in the case of performance test,
special equipment and laboratory space may be needed for the stress tests. Examples are
hardware or software probes and event loggers. The tests may need to run for several days.
Planners must ensure resources are available for the long time periods required. The reader
should note that stress tests should also be conducted at the integration, and if applicable at
the unit level, to detect stress-related defects as early as possible in the testing process. This
is especially critical in cases where redesign is needed.
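The interrupt example above can be sketched with a bounded buffer standing in for the system's resource; offering twice the rated load shows how a stress test quantifies dropped events. This is an illustration only, not a real stress harness.

```python
import queue

RATED_LOAD = 10  # the system's rated capacity per cycle, from the example

def stress_system(offered_load, capacity):
    """Offer events beyond capacity and count how many are dropped.

    A bounded queue stands in for the stressed resource; a real stress
    test drives the actual product with a load generator.
    """
    buffer = queue.Queue(maxsize=capacity)
    dropped = 0
    for event in range(offered_load):
        try:
            buffer.put_nowait(event)
        except queue.Full:
            dropped += 1
    return dropped

# At the rated load nothing is dropped; at double the load, half is.
assert stress_system(offered_load=RATED_LOAD, capacity=RATED_LOAD) == 0
assert stress_system(offered_load=2 * RATED_LOAD, capacity=RATED_LOAD) == 10
```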

CONFIGURATION TESTING
• Configuration testing allows developers/testers to evaluate system performance and
availability when hardware exchanges and reconfigurations occur.
• Configuration testing also requires many resources including the multiple hardware
devices used for the tests. If a system does not have specific requirements for device
configuration changes then large-scale configuration testing is not essential.
• Several types of operations should be performed during configuration test. Some
sample operations for testers are
(i) rotate and permutate the positions of devices to ensure physical/ logical device
permutations work for each device (e.g., if there are two printers A and B, exchange
their positions);
(ii) induce malfunctions in each device, to see if the system properly handles the
malfunction;
(iii) induce multiple device malfunctions to see how the system reacts. These
operations will help to reveal problems (defects) relating to hardware/ software
interactions when hardware exchanges, and reconfigurations occur.
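Operation (i) above amounts to enumerating device-to-position permutations, which can be generated systematically rather than by hand. A sketch, with hypothetical device and slot names:

```python
import itertools

def configuration_permutations(devices, slots):
    """Enumerate every assignment of interchangeable devices to slots;
    each assignment is one configuration the testers should exercise."""
    return [dict(zip(slots, perm))
            for perm in itertools.permutations(devices, len(slots))]

# Hypothetical names: two printers and two ports, as in the example.
configs = configuration_permutations(["printer_A", "printer_B"],
                                     ["port_1", "port_2"])
assert len(configs) == 2  # original positions plus the swap
assert {"port_1": "printer_B", "port_2": "printer_A"} in configs
```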
The Objectives of Configuration Testing
 Show that all the configuration changing commands and menus work properly.
 Show that all the interchangeable devices are really interchangeable, and that they
each enter the proper state for the specified conditions.
 Show that the system's performance level is maintained when devices are
interchanged, or when they fail.

SECURITY TESTING
• Security testing evaluates system characteristics that relate to the availability,
integrity, and confidentiality of system data and services.
• Users/clients should make sure their security needs are clearly known at requirements
time, so that security issues can be addressed by designers and testers.
• Computer software and data can be compromised by
– criminals intent on doing damage, stealing data and information, causing denial
of service, invading privacy
– errors on the part of honest developers/maintainers (and users?) who modify,
destroy, or compromise data because of misinformation, misunderstandings,
and/or lack of knowledge
• Both can be perpetrated by those inside and outside of an organization
• Attacks can be random or systematic. Damage can be done through various means
such as:
(i) Viruses; (ii) Trojan horses;
(iii) Trap doors; (iv) illicit channels.
• The effects of security breaches could be extensive and can cause:
(i) loss of information; (ii) corruption of information;
(iii) misinformation; (iv) privacy violations;
(v) denial of service.
• Other Areas to focus on Security Testing:
password checking, legal and illegal entry with passwords, password expiration,
encryption, browsing, trap doors, viruses, …

Areas to focus on during security testing


Password Checking—Test the password checker to ensure that users will select a password
that meets the conditions described in the password checker specification. Equivalence class
partitioning and boundary value analysis based on the rules and conditions that specify a
valid password can be used to design the tests.
Legal and Illegal Entry with Passwords—Test for legal and illegal system/data access via
legal and illegal passwords.
Password Expiration—If it is decided that passwords will expire after a certain time period,
tests should be designed to ensure the expiration period is properly supported and that users
can enter a new and appropriate password.
Encryption—Design test cases to evaluate the correctness of both encryption and
decryption algorithms for systems where data/messages are encoded.
Browsing—Evaluate browsing privileges to ensure that unauthorized browsing does not
occur. Testers should attempt to browse illegally and observe system responses. They should
determine what types of private information can be inferred by both legal and illegal
browsing.
Trap Doors—Identify any unprotected entries into the system that may allow access
through unexpected channels (trap doors). Design tests that attempt to gain illegal entry and
observe results. Testers will need the support of designers and developers for this task. In
many cases an external "tiger team" as described below is hired to attempt such a break into
the system.
Viruses—Design tests to ensure that system virus checkers prevent or curtail entry of viruses
into the system. Testers may attempt to infect the system with various viruses and observe
the system response. If a virus does penetrate the system, testers will want to determine what
has been damaged and to what extent. The best approach to ensure security, if resources
permit, is to hire a so-called "tiger team," which is an outside group of penetration experts
who attempt to breach the system security.
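For the password-checking area above, equivalence class partitioning and boundary value analysis translate into concrete test cases. The rules below (8 to 16 characters, at least one digit) are assumed for illustration; the real rules come from the password checker specification.

```python
import string

def is_valid_password(pw):
    """Assumed rule set for illustration: 8-16 characters, >= 1 digit."""
    if not (8 <= len(pw) <= 16):
        return False
    return any(c in string.digits for c in pw)

# Boundary value analysis on length: test at 7, 8, 16, and 17 characters.
assert not is_valid_password("abcdef1")            # 7: just below minimum
assert is_valid_password("abcdefg1")               # 8: lower boundary
assert is_valid_password("abcdefghijklmno1")       # 16: upper boundary
assert not is_valid_password("abcdefghijklmnop1")  # 17: just above maximum
# Equivalence class of invalid inputs: valid length but no digit.
assert not is_valid_password("abcdefgh")
```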

Although a testing group in the organization can be involved in testing for security breaches,
the tiger team can attack the problem from a different point of view. Before the tiger team
starts its work, the system should be thoroughly tested at all levels.

RECOVERY TESTING
• Recovery testing subjects a system to losses of resources in order to determine if it can
recover properly from these losses.
• Especially important for transaction systems
• Example: loss of a device during a transaction
• Tests would determine if the system could return to a well-known state, and that no
transactions have been compromised
– Systems with automated recovery are designed for this purpose
• They usually have multiple CPUs and/or multiple instances of devices, and
mechanisms to detect the failure of a device. They also have a so-called "checkpoint"
system that meticulously records transactions and system states periodically so that
these are preserved in case of failure. This information allows the system to return to a
known state after the failure.
• The recovery testers must ensure that the device monitoring system and the
checkpoint software are working properly.
• Areas to focus on Recovery Testing:
– Restart – the ability of the system to restart properly from the last checkpoint
after a loss of a device
– Switchover – the ability of the system to switch to a new processor, as a result
of a command or a detection of a faulty processor by a monitor
• In each of these testing situations all transactions and processes must be carefully
examined to detect:
(i) loss of transactions;
(ii) merging of transactions;
(iii) incorrect transactions;
(iv) an unnecessary duplication of a transaction.
A good way to expose such problems is to perform recovery testing under a stressful
load. Transaction inaccuracies and system crashes are likely to occur with the result that
defects and design flaws will be revealed.
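The checkpoint mechanism described above can be sketched in miniature: transactions committed after the last checkpoint are lost on restart, and recovery tests verify that checkpointed transactions are neither lost, merged, nor duplicated. A real system would persist checkpoints to stable storage rather than memory.

```python
import copy

class CheckpointedSystem:
    """Toy transaction system: checkpoint() records a well-known state,
    restart() simulates crash recovery back to that state."""
    def __init__(self):
        self.committed = []
        self._checkpoint = []

    def commit(self, txn):
        self.committed.append(txn)

    def checkpoint(self):
        self._checkpoint = copy.deepcopy(self.committed)

    def restart(self):
        # Return to the last well-known state after a simulated failure.
        self.committed = copy.deepcopy(self._checkpoint)

system = CheckpointedSystem()
system.commit("t1")
system.commit("t2")
system.checkpoint()
system.commit("t3")  # committed after the checkpoint: lost on failure
system.restart()
assert system.committed == ["t1", "t2"]  # no loss, merge, or duplication
```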
Regression Testing
Regression testing is done to ensure that enhancements or defect fixes made to the software
work properly and do not affect the existing functionality. Regression testing
follows a selective re-testing technique. Whenever defect fixes are done, a set of test cases
that need to be run to verify the defect fixes are selected by the test team. An impact analysis
is done to find out what areas may get impacted due to those defect fixes.

TYPES OF REGRESSION TESTING


When internal or external test teams or customers begin using a product, they report defects.
These defects are analyzed by the developers, who make individual defect fixes.
There are two types of regression testing in practice.
1. Regular regression testing
2. Final regression testing
Regular regression testing is done between test cycles to ensure that the defect fixes that are
done and the functionality that was working in the earlier test cycles continue to work. A
regular regression testing can use more than one product build for the test cases to be
executed.
Final regression testing is done to validate the final build before release. The CM engineer
delivers the final build with the media and other contents exactly as it would go to the
customer. The final regression test cycle is conducted for a specific period of duration,
which is mutually agreed upon between the development and testing teams. This is called
the “cook time” for regression testing.

WHEN TO DO REGRESSION TESTING?


It is necessary to perform regression testing when
1. A reasonable amount of initial testing is already carried out.
2. A good number of defects have been fixed.
3. Defect fixes that can produce side-effects are taken care of.

Regression testing may also be performed periodically, as a pro-active measure.


HOW TO DO REGRESSION TESTING?
The methodology here is made of the following steps.
1. Performing an initial "smoke" or "sanity" test
2. Understanding the criteria for selecting the test cases
3. Classifying the test cases into different priorities
4. A methodology for selecting test cases
5. Resetting the test cases for test execution
6. Concluding the results of a regression cycle
Smoke testing consists of
1. Identifying the basic functionality that a product must satisfy;
2. Designing test cases to ensure that this basic functionality works and packaging
them into a smoke test suite;
3. Ensuring that every time a product is built, this suite is run successfully before
anything else is run; and
4. If this suite fails, escalating to the developers to identify the changes and perhaps
change or roll back the changes to a state where the smoke test suite succeeds.
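The four steps above can be sketched as a gate that runs the smoke suite before anything else and escalates on failure. The test names and callables below are stand-ins for real basic-functionality checks.

```python
def run_smoke_suite(smoke_tests):
    """Run the smoke suite before anything else; escalate on failure.

    Each entry is (name, callable); callables return True on pass."""
    failures = [name for name, test in smoke_tests if not test()]
    if failures:
        raise RuntimeError(
            f"smoke suite failed {failures}: escalate to developers "
            "before running anything else")
    return True

# Stand-ins for real basic-functionality checks.
smoke_tests = [
    ("product_starts", lambda: True),
    ("login_works", lambda: True),
]
assert run_smoke_suite(smoke_tests)
```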
Regression methodology can be applied when
1. We need to assess the quality of product between test cycles (both planned and need
based);
2. We are doing a major release of a product, have executed all test cycles, and are
planning a regression test cycle for defect fixes; and
3. We are doing a minor release of a product (support packs, patches, and so on)
having only defect fixes, and we can plan for regression test cycles to take care of
those defect fixes.
Since regression uses test cases that have already been executed more than once, it is expected
that 100% of those test cases pass using the same build, if the defect fixes are done right. In
situations where the pass percentage is not 100, the test manager can compare with the
previous results of the test case to conclude whether regression was successful or not.
Internationalization Testing
In internationalization (I18n), the number 18 indicates that there are 18 characters
between the first "i" and the last "n" in the word "internationalization." The testing that is done in
various phases to ensure that all internationalization activities are done right is called
internationalization testing or I18n testing.
Some important aspects of internationalization testing are
1. Testing the code for how it handles input, strings, and sorting items;
2. Display of messages for various languages; and
3. Processing of messages for various languages and conventions.

Figure 9.1 Major activities in internationalization testing.

ENABLING TESTING
An activity of code review or code inspection mixed with some test cases for unit testing,
with an objective to catch I18n defects is called enabling testing.
 Check the code for APIs/function calls that are not part of the I18n API set (for
example, printf () and scanf () in C are not I18n-safe functions).
 Check the code for hard-coded date, currency formats, ASCII code, or character
constants.
 Check the code to see that there are no computations (addition, subtraction) done on
date variables or a different format forced to the date in the code.
 Check the dialog boxes and screens to see whether they leave at least 0.5 times
more space for expansion (as the translated text can take more space).
 Check that the code does not assume that the language characters can be represented
in 8 bits, 16 bits, or 32 bits.
 If the code uses scrolling of text, then the screen and dialog boxes must allow
adequate provisions for direction change in scrolling such as top to bottom, right to
left, left to right, bottom to top, and so on as conventions are different in different
languages.
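Several of the checks above (non-I18n API calls, hard-coded date and currency formats) lend themselves to a simple static scan of the source during code review. The pattern list below is illustrative, not exhaustive:

```python
import re

# Patterns the enabling-testing checklist flags; extend as needed.
SUSPECT_PATTERNS = {
    "non-I18n API call": re.compile(r"\b(printf|scanf|strcmp)\s*\("),
    "hard-coded date format": re.compile(r"%d/%m/%Y|%m/%d/%Y"),
    "hard-coded currency symbol": re.compile(r"\$%"),
}

def scan_for_i18n_defects(source_lines):
    """Return (line_number, issue) pairs for lines matching a pattern."""
    findings = []
    for lineno, line in enumerate(source_lines, start=1):
        for issue, pattern in SUSPECT_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, issue))
    return findings

code = [
    'printf("total: $%d\\n", total);',
    'fmt = "%d/%m/%Y";',
]
findings = scan_for_i18n_defects(code)
assert (1, "non-I18n API call") in findings
assert (1, "hard-coded currency symbol") in findings
assert (2, "hard-coded date format") in findings
```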

INTERNATIONALIZATION VALIDATION
I18n validation is different from I18n testing. I18n testing is the superset of all I18n test types; I18n validation is a subset of it.
I18n validation is performed with the following objectives.
1. The software is tested for functionality with ASCII, DBCS, and European
characters.
2. The software handles string operations, sorting, sequencing operations as per the
language and characters selected.
3. The software display is consistent with characters which are non-ASCII in GUI and
menus.
4. The software messages are handled properly.
FAKE LANGUAGE TESTING
The fake language translators use English-like target languages, which are easy to
understand and test. This type of testing helps English testers to find the defects that may
otherwise be found only by language experts during localization testing.

Figure 9.5 Fake language testing.


Fake language testing helps in simulating the functionality of the localized product for a
different language, using software translators.

LANGUAGE TESTING
Language testing is the short form of "language compatibility testing." This ensures that
software created in English can work with platforms and environments that are English and
non-English.
LOCALIZATION TESTING
When the software is approaching the release date, messages are consolidated into a separate
file and sent to multilingual experts for translation. A set of build tools consolidates all the
messages and other resources (such as GUI screens, pictures) automatically, and puts them
in separate files.
The following checklist may help in doing localization testing.
 All the messages, documents, pictures, screens are localized to reflect the native
users and the conventions of the country, locale, and language.
 Sorting and case conversions are right as per language convention. For example,
sort order in English is A, B, C, D, E, whereas in Spanish the sort order is A, B,
C, CH, D, E. See Figure 9.8.
 Font sizes and hot keys are working correctly in the translated messages,
documents, and screens.
 Filtering and searching capabilities of the software work as per the language and
locale conventions.
 Addresses, phone numbers, numbers, and postal codes in the localized software
are as per the conventions of the target user.
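The Spanish sort-order point can be made testable without relying on installed OS locales by encoding the traditional convention (where "ch" sorts as a single letter after "c") in a custom sort key. Note that modern Spanish collation no longer treats "ch" as a separate letter; this sketch follows the traditional order cited above.

```python
def traditional_spanish_key(word):
    """Sort key for traditional Spanish collation: "ch" is a single
    letter that orders after every other "c" combination, before "d"."""
    key, i, w = [], 0, word.lower()
    while i < len(w):
        if w[i:i + 2] == "ch":
            key.append(ord("c") + 0.5)  # between "c" (99) and "d" (100)
            i += 2
        else:
            key.append(ord(w[i]))
            i += 1
    return key

words = ["dado", "chico", "cosa"]
# Default (English-style) order puts "chico" before "cosa".
assert sorted(words) == ["chico", "cosa", "dado"]
# Traditional Spanish order puts "ch" words after all other "c" words.
assert sorted(words, key=traditional_spanish_key) == ["cosa", "chico", "dado"]
```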

Sort order in English and Spanish.


TOOLS USED FOR INTERNATIONALIZATION
There are several tools available for internationalization. These largely depend on the
technology and platform used.
Sample tools for internationalization.
Adhoc Testing
When software testing is performed without proper planning and documentation, it is said to
be Adhoc Testing. Such tests are executed only once, unless the testers uncover
defects.
•Adhoc Tests are done after formal testing is performed on the application.
•Adhoc methods are the least formal type of testing as it is NOT a structured
approach.
The success of Adhoc testing depends upon the capability of the tester who carries out the
test. The tester has to find defects without any proper planning and documentation, solely
based on intuition.

Adhoc testing can be performed when there is limited time to do exhaustive testing and
usually performed after the formal test execution. Adhoc testing will be effective only if the
tester has in-depth understanding about the System Under Test.
Forms of Adhoc Testing :
Buddy Testing: Two buddies, one from development team and one from test team mutually
work on identifying defects in the same module. Buddy testing helps the testers develop
better test cases while development team can also make design changes early. This kind of
testing happens usually after completing the unit testing.
Pair Testing: Two testers are assigned the same modules and they share ideas and work on
the same systems to find defects. One tester executes the tests while another tester records
the notes on their findings.
Monkey Testing: Testing is performed randomly without any test cases in order to break the
system.
Adhoc Testing can be made more effective by
 Preparation
 Creating a Rough Idea
 Divide and Rule
 Targeting Critical Functionalities
 Using Tools
 Documenting the findings

ACCEPTANCE TESTING
Acceptance testing is done by the customer or by the representative of the customer to check
whether the product is ready for use in the real-life environment.

Acceptance testing is a phase after system testing that is normally done by the customers or
representatives of the customer. The customer defines a set of test cases that will be executed
to qualify and accept the product. These test cases are executed by the customers themselves
to quickly judge the quality of the product before deciding to buy the product.

Sometimes, acceptance test cases are developed jointly by the customers and product
organization. In this case, the product organization will have complete understanding of
what will be tested by the customer for acceptance testing. In such cases, the product
organization tests those test cases in advance as part of the system test cycle itself to avoid
any later surprises when those test cases are executed by the customer.

Acceptance test cases failing in a customer site may cause the product to be rejected and
may mean financial loss or may mean rework of product involving effort and time.

Acceptance Criteria
Acceptance criteria-Product acceptance
During the requirements phase, each requirement is associated with acceptance criteria. It is
possible that one or more requirements may be mapped to form acceptance criteria (for
example, all high priority requirements should pass 100%). Whenever there are changes to
requirements, the acceptance criteria are accordingly modified and maintained.

Acceptance criteria—Procedure acceptance


Acceptance criteria can be defined based on the procedures followed for delivery.
1. User, administration and troubleshooting documentation should be part of the
release.
2. Along with binary code, the source code of the product with build scripts to be
delivered in a CD.
3. A minimum of 20 employees are trained on the product usage prior to deployment.
These procedural acceptance criteria are verified/tested as part of acceptance testing.

Acceptance criteria–Service level agreements

Service level agreements (SLA) can become part of acceptance criteria. Service level
agreements are generally part of a contract signed by the customer and product organization.
The important contract items are taken and verified as part of acceptance testing. For
example, time limits to resolve defects can be mentioned as part of the SLA, such as

 All major defects that come up during first three months of deployment need to be
fixed free of cost;
 Downtime of the implemented system should be less than 0.1%;
 All major defects are to be fixed within 48 hours of reporting.
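SLA items like these can be verified mechanically during acceptance testing. The sketch below checks the 48-hour fix limit and the 0.1% downtime ceiling against hypothetical defect records:

```python
from datetime import datetime, timedelta

# Thresholds from the sample SLA items above.
FIX_LIMIT = timedelta(hours=48)
MAX_DOWNTIME_FRACTION = 0.001  # 0.1%

def sla_violations(defects, downtime_hours, period_hours):
    """Return identifiers of SLA items that were violated."""
    violations = []
    for d in defects:
        if d["severity"] == "major" and d["fixed"] - d["reported"] > FIX_LIMIT:
            violations.append(d["id"])
    if downtime_hours / period_hours >= MAX_DOWNTIME_FRACTION:
        violations.append("downtime")
    return violations

defects = [
    {"id": "D1", "severity": "major",
     "reported": datetime(2024, 1, 1, 9, 0),
     "fixed": datetime(2024, 1, 2, 9, 0)},   # 24 h: within the SLA
    {"id": "D2", "severity": "major",
     "reported": datetime(2024, 1, 1, 9, 0),
     "fixed": datetime(2024, 1, 4, 9, 0)},   # 72 h: violates the SLA
]
# 1 hour of downtime over 90 days (2160 h) is about 0.046%, under 0.1%.
assert sla_violations(defects, downtime_hours=1, period_hours=2160) == ["D2"]
```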

Selecting Test Cases for Acceptance Testing


1. End-to-end functionality verification Test cases that include the end-to-end
functionality of the product are taken up for acceptance testing. This ensures that all
the business transactions are tested as a whole and those transactions are completed
successfully. Real-life test scenarios are tested when the product is tested end-to-
end.
2. Domain tests Since acceptance tests focus on business scenarios, the product
domain tests are included. Test cases that reflect business domain knowledge are
included.
3. User scenario tests Acceptance tests reflect the real-life user scenario verification.
As a result, test cases that portray them are included.
4. Basic sanity tests Tests that verify the basic existing behavior of the product are
included. These tests ensure that the system performs the basic operations that it
was intended to do. Such tests may gain more attention when a product undergoes
changes or modifications. It is necessary to verify that the existing behavior is
retained without any breaks.
5. New functionality When the product undergoes modifications or changes, the
acceptance test cases focus on verifying the new features.
6. A few non-functional tests Some non-functional tests are included and executed
as part of acceptance testing to double-check that the non-functional aspects of the
product meet the expectations.
7. Tests pertaining to legal obligations and service level agreements Tests that are
written to check if the product complies with certain legal obligations and SLAs are
included in the acceptance test criteria.
8. Acceptance test data Test cases that make use of customer real-life data are
included for acceptance testing.

Executing Acceptance Tests

Defects reported during acceptance tests could be of different priorities. Test teams help
the acceptance test team report defects. Showstopper and high-priority defects are necessarily
fixed before software is released.

In case major defects are identified during acceptance testing, then there is a risk of
missing the release date. When the defect fixes point to scope or requirement changes, then
it may either result in the extension of the release date to include the feature in the current
release or get postponed to subsequent releases.

ALPHA TESTING
– alpha testing – on the developer's site
• Alpha testing takes place at the developer's site by the internal teams,
before release to external customers. This testing is performed without the
involvement of the development teams.
• This test takes place at the developer’s site. A cross-section of potential users and
members of the developer’s organization are invited to use the software. Developers
observe the users and note problems.

 First phase of testing in Customer Validation.


 Performed at the developer's site in a testing environment. Hence, the activities can be
controlled.
 Only functionality and usability are tested. Reliability and security testing are not usually
performed in depth.
 White box and / or Black box testing techniques are involved.
 Build released for Alpha Testing is called Alpha Release.
 System Testing is performed before Alpha Testing.
 Issues / bugs are logged into the identified tool directly and are fixed by developers at
high priority.
 Helps to identify the different views of product usage as different business streams
are involved.

Beta Testing
Developing a product involves a significant amount of effort and time. Delays in product
releases and the product not meeting the customer requirements are common. A product
rejected by the customer after delivery means a huge loss to the organization. There are
many reasons for a product not meeting the customer requirements. They are as follows.

1. There are implicit and explicit requirements for the product. A product not meeting
the implicit requirements (for example, ease of use) may mean rejection by the
customer.
2. Since product development involves a good amount of time, some of the
requirements given at the beginning of the project would have become obsolete or
would have changed by the time the product is delivered.
3. The requirements are high-level statements with a high degree of ambiguity.
Picking up the ambiguous areas and not resolving them with the customer results in
rejection of the product.
4. The understanding of the requirements may be correct but their implementation
could be wrong.
5. Lack of usability and documentation makes it difficult for the customer to use the
product and may also result in rejection.

The list above is only a sub-set of the reasons and there could be many more reasons for
rejection. To reduce the risk, which is the objective of system testing, periodic feedback
is obtained on the product. One of the mechanisms used is sending the product that is
under test to the customers and receiving the feedback. This is called beta testing.

During the entire duration of beta testing, there are various activities that are planned and
executed according to a specific schedule. This is called a beta program.

Some of the activities involved in the beta program are as follows.

1. Collecting the list of customers and their beta testing requirements along with their
expectations on the product.
2. Working out a beta program schedule and informing the customers.
3. Sending some documents for reading in advance and training the customer on
product usage.
4. Testing the product to ensure it meets "beta testing entry criteria."
5. Sending the beta product (with known quality) to the customer and enable them to
carry out their own testing.
6. Collecting the feedback periodically from the customers and prioritizing the defects
for fixing.
7. Responding to customers’ feedback with product fixes or documentation changes
and closing the communication loop with the customers in a timely fashion.
8. Analyzing and concluding whether the beta program met the exit criteria.
9. Communicating the progress and action items to customers and formally closing the
beta program.
10.Incorporating the appropriate changes in the product.

One other challenge in beta programs is the choice of the number of beta customers. If the
number chosen is too few, then the product may not get a sufficient diversity of test
scenarios and test cases. If too many beta customers are chosen, then the engineering
organization may not be able to cope with fixing the reported defects in time. Thus the
number of beta customers should be a delicate balance between providing a diversity of
product usage scenarios and the manageability of being able to handle their reported defects
effectively.

Finally, the success of a beta program depends heavily on the willingness of the beta
customers to exercise the product in various ways, knowing fully well that there may be
defects. Only customers who can be thus motivated and are willing to play the role of trusted
partners in the evolution of the product should participate in the beta program.

ACCESSIBILITY TESTING
Verifying the product usability for physically challenged users is called accessibility
testing.

There are a large number of people who are challenged with vision, hearing, and mobility
related problems—partial or complete. Product usability that does not look into their
requirements would result in lack of acceptance. For such users, alternative methods of using
the product have to be provided. There are several tools that are available to help them with
alternatives. These tools are generally referred as accessibility tools or assistive
technologies.

Accessibility testing involves testing these alternative methods of using the product and
testing the product along with accessibility tools. Accessibility is a subset of usability and
should be included as part of usability test planning.

Accessibility to the product can be provided by two means.

1. Making use of accessibility features provided by the underlying infrastructure (for
example, the operating system), called basic accessibility, and
2. Providing accessibility in the product through standards and guidelines,
called product accessibility.
Basic Accessibility
Keyboard accessibility

A keyboard is the most complex device for vision- and mobility-impaired users. Hence, it
received plenty of attention for accessibility. Some of the accessibility improvements were
done on hardware and some in the operating system.

Similarly, the operating system vendors came up with some more improvements in the
keyboard. Some of those improvements are usage of sticky keys, toggle keys and arrow
keys for mouse.

Sticky keys To explain the sticky keys concept, consider <CTRL><ALT><DEL>, one of
the most complex key sequences for vision-impaired and mobility-impaired users, since it
requires three keys to be held down simultaneously. Sticky keys allow these keys to be
pressed one at a time, in sequence, with the same effect.

Filter keys When keys are pressed for more than a particular duration, they are assumed to
be repeated.

Toggle key sound When toggle keys are enabled, the information typed may be different
from what the user desires.

Sound keys To help vision-impaired users, there is one more mechanism that pronounces
each character as and when they are hit on the keyboard.

Arrow keys to control mouse Mobility-impaired users have problems moving the mouse.
By enabling this feature, such users will be able to use the keyboard arrow keys for mouse
movements. The two buttons of the mouse and their operations too can be directed from the
keyboard.

Narrator Narrator is a utility which provides auditory feedback.

Screen accessibility

Some accessibility features that enhance usability using the screen are as follows.

Visual sound Visual sound is the "wave form" or "graph form" of the sound. These visual
effects inform the user of the events that happen on the system using the screen.

Enabling captions for multimedia All multimedia speech and sound can be enabled with
text equivalents, and they are displayed on the screen when speech and sound are played.

Soft keyboard Some of the mobility-impaired and vision-impaired users find it easier to
use pointing devices instead of the keyboard.

Easy reading with high contrast A toggle option is generally provided by the operating
system to switch to a high-contrast mode. This mode uses strongly contrasting colors and
larger font sizes for all the menus on the screen.
Product Accessibility

A good understanding of the basic accessibility features is needed while providing
accessibility to the product. A product should do everything possible to ensure that the
basic accessibility features are utilized by it.

Sample requirement #1: Text equivalents have to be provided for audio, video, and picture
images.

When users use tools like the Narrator, the associated text is read and produced in audio
form, whereby vision-impaired users are benefited.

Hence text equivalents for audio (captions), audio descriptions for pictures and visuals
become an important requirement for accessibility.

Sample requirement #2: Documents and fields should be organized so that they can be read
without requiring a particular screen resolution or particular templates (known as style
sheets).

Sample requirement #3: User interfaces should be designed so that all information conveyed
with color is also available without color.

Figure 12.4 Color as a method of identification.


Sample requirement #4: Reduce flicker rate, speed of moving text; avoid flashes and
blinking text.

Different people read at different speeds. People with below-average speed in reading may
find it irritating to see text that is blinking and flashing as it further impacts reading speed.

Sample requirement #5: Reduce physical movement requirements for the users when
designing the interface and allow adequate time for user responses.

When designing the user interfaces, adequate care has to be taken to ensure that the
physical movement required to use the product is minimized to assist mobility-impaired
users.

A screen with four fields in the corners.


Table: Sample list of usability and accessibility tools.

Name of the tool        Purpose

JAWS                    For testing accessibility of the product with some
                        assistive technologies.

HTML validator          To validate the HTML source file for usability and
                        accessibility standards.

Style sheet validator   To validate the style sheets (templates) for usability
                        standards set by W3C.

Magnifier               Accessibility tool for the vision challenged (to enable
                        them to enlarge the items displayed on screen).

Narrator                A tool that reads the information displayed on the
                        screen and creates audio descriptions for
                        vision-challenged users.

Soft keyboard           Enables the use of pointing devices in place of the
                        keyboard by displaying the keyboard template on the
                        screen.

Usability Testing
Usability testing is an important aspect of quality control. It is one of the procedures we can
use as testers to evaluate our product to ensure that it meets user requirements on a
fundamental level.
Usability is a quality factor that is related to the effort needed to learn, operate,
prepare input, and interpret the output of a computer program.

Understandability: The amount of effort required to understand the software.


Ease of learning: The degree to which user effort required to understand the software is
minimized.
Operability: The degree to which the operation of the software matches the purpose,
environment, and physiological characteristics of users; this includes ergonomic factors such
as color, shape, sound, font size, etc.
Communicativeness: The degree to which the software is designed in accordance with the
psychological characteristics of the users.
An Approach to Usability Testing
Rubin’s approach to usability testing employs techniques to collect empirical data while
observing a representative group of end users using the software product to perform a set of
tasks that are representative of the proposed usage. He describes four types of tests: (i)
exploratory, (ii) assessment, (iii) validation, and (iv) comparison.
It is important to describe the basic elements of usability testing to show how they are
related to designer, developer, and tester interests.
The elements are
 the development of a test objective (designers, testers),
 use of a representative sample of end users (testers),
 an environment for the test that represents the actual work environment (designers,
testers),
 observations of the users who either review or use a representation of the product (the
latter could be a prototype) (developers, testers),
 the collection, analysis, and summarization of qualitative and quantitative
performance and preference measurement data (designers, developers, and testers)
 recommendations for improvement of the software product (designers, developers)

Assessment Usability Testing
Assessment tests are usually conducted after a high-level design for the software has been
developed. Findings from the exploratory tests are expanded upon; details are filled in. For
these types of tests a functioning prototype should be available, and testers should be able to
evaluate how well a user is able to actually perform realistic tasks.
Typical measurements include:
(i) number of tasks correctly completed per unit time;
(ii) number of help references per unit time;
(iii) number of errors (and error type);
(iv) error recovery time.

Validation Usability Testing
A principal objective of validation usability testing is to evaluate how the product compares
to some predetermined usability standard or benchmark. Testers want to determine whether
the software meets the standards prior to release; if it does not, the reasons for this need to be
established.
Other objectives of validation usability testing include:
1. Initiating usability standards.
2. Evaluating how well user-oriented components of a software system work together.
3. Ensuring that any show-stoppers or fatal defects are not present. If the software is new
and such a defect is revealed by the tests, the development organization may decide to delay
the release of the software

Usability Testing: Resource Requirements
A usability testing laboratory
Trained personnel for:
• selecting the user participants;
• designing, administering, and monitoring the tests;
• developing forms needed to collect relevant data from user participants;
• analyzing, organizing, and distributing data and results to relevant parties;
• making recommendations to development staff and management.
Usability test planning.
Usability Tests and Measurements
Tests designed to measure usability are in some ways more complex than those required for
traditional software testing.
For example a usability test for a word processing program might consist of tasks such as:
(i) open an existing document;
(ii) add text to the document;
(iii) modify the old text;
(iv) change the margins in selected sections;
(v) change the font size in selected sections;
(vi) print the document;
(vii) save the document.
As the user performs these tasks she will be observed by the testers and video cameras. Time
periods for task completion and the performance of the system will be observed and
recorded.
Many of the usability test results will be recorded as subjective evaluations of the software.
Users will be asked to complete questionnaires that state preferences and ranking with
respect to features such as:
(i) usefulness of the software;
(ii) how well it met expectations;
(iii) ease of use;
(iv) ease of learning;
(v) usefulness and availability of help facilities.

Usability testers also collect quantitative measures. For example:


(i) time to complete each task;
(ii) time to access information in the user manual;
(iii) time to access information from on-line help;
(iv) number and percentage of tasks completed correctly;
(v) number or percentage of tasks completed incorrectly;
(vi) time spent in communicating with help desk.
Testers can also count the number of:
(i) errors made;
(ii) incorrect menu choices;
(iii) user manual accesses;
(iv) help accesses;
(v) time units spent in using help;
(vi) incorrect selections;
(vii) negative comments or gestures (captured by video).
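The quantitative measures above can be collected and summarized with a small script. A minimal sketch, assuming a hypothetical observation log (the task names, timings, and record layout are invented for illustration):

```python
from statistics import mean

# Hypothetical observation log: one record per user task, captured by
# the observer or the video review (names and values are illustrative).
observations = [
    {"task": "open document",  "seconds": 12.0, "completed": True,  "errors": 0},
    {"task": "add text",       "seconds": 30.5, "completed": True,  "errors": 1},
    {"task": "change margins", "seconds": 95.0, "completed": False, "errors": 3},
]

def summarize(records):
    """Compute the percentage of tasks completed correctly, the mean
    task time, and the total error count from the observation log."""
    done = [r for r in records if r["completed"]]
    return {
        "pct_completed_correctly": 100.0 * len(done) / len(records),
        "mean_task_time": mean(r["seconds"] for r in records),
        "total_errors": sum(r["errors"] for r in records),
    }

print(summarize(observations))
```

Such a summary feeds directly into the recommendation report described below.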
As a result of the usability tests, all the analyzed data should be used to make
recommendations for actions. In this phase of usability testing designers with a knowledge
of user-centered design, and human factors staff with knowledge of human–computer
interaction can work as part of the recommendation team. A final report should be developed
and distributed to management and the technical staff who are involved in the project.
Table: Roles and responsibilities for usability testing.

Role                      Responsibility

Usability                  Institutionalizing and improving usability across the
architect/consultant        organization
                           Educating, advocating, and obtaining the management
                            commitment required for usability as an initiative area
                            in the organization
                           Creating a communication channel between customers and
                            the organization for getting feedback on usability problems

Usability expert           Providing the technology guidance needed for performing
                            usability testing
                           Owning the usability strategy for testing products for
                            usability

Human factors specialist   Reviewing the screens and other artifacts for usability
                           Ensuring consistency across multiple products

Graphic designer           Creating icons, graphic images, and so on needed for user
                            interfaces
                           Cross-checking the icons and graphic messages in the
                            contexts they are used and verifying whether those images
                            communicate the right meaning

Usability manager/lead     Estimating, scheduling, and tracking all usability testing
                            activities
                           Working with customers to obtain feedback on the current
                            version of the product prior to release

Usability test engineer    Executing usability tests based on scripts, scenarios, and
                            test cases
                           Providing feedback on usability tests from an execution
                            perspective as well as a user perspective, if possible

Testing OO systems
Testing OO systems broadly covers the following topics.
1. Unit testing a class
2. Putting classes to work together (integration testing of classes)
3. System testing
4. Regression testing
5. Tools for testing OO systems
Unit Testing a Set of Classes
As a class is built, before it is "published" for use by others, it has to be tested to see if it
is ready for use. Classes are the building blocks for an entire OO system, so the classes
have to be unit tested.

Why classes have to be tested individually first


In the case of OO systems, it is even more important (than in the case of procedure-oriented
systems) to unit test the building blocks (classes) thoroughly for the following reasons.
1. A class is intended for heavy reuse. A residual defect in a class can, therefore,
potentially affect every instance of reuse.
2. Many defects get introduced at the time a class (that is, its attributes and methods)
gets defined.
3. A class may have different features; different clients of the class may pick up
different pieces of the class. Thus, unless the class is tested as a unit first, there may
be pieces of a class that never get tested.
4. A class is a combination of data and methods. If the data and methods do not work
in sync at a unit test level, it may cause defects that are potentially very difficult to
narrow down later on.

Special considerations for testing classes


One of the methods that is effective for this purpose is the Alpha-Omega method. This
method works on the following principles.
1. Test the object through its life cycle from "birth to death" (that is, from instantiation
to destruction).
2. Test the simple methods first and then the more complex methods.
3. Test the methods from private through public methods.
4. Send a message to every method at least once. This ensures that every method is
tested at least once.

The Alpha-Omega method achieves the above objective by the following steps.
1. Test the constructor methods first.
2. Test the get methods or accessor methods.
3. Test the methods that modify the object variables.
4. Finally, the object has to be destroyed and when the object is destroyed, no further
accidental access should be possible.
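The four Alpha-Omega steps above can be sketched as a unit test driven through an object's life cycle. A minimal sketch (the Account class and its methods are hypothetical examples, not from the text):

```python
class Account:
    """A hypothetical class used to illustrate the Alpha-Omega order."""
    def __init__(self, owner, balance=0):
        self.owner = owner
        self._balance = balance

    def get_balance(self):          # accessor method
        return self._balance

    def deposit(self, amount):      # method that modifies object state
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self._balance += amount


def test_account_lifecycle():
    # Step 1: test the constructor first.
    acct = Account("alice", balance=100)
    # Step 2: test the get/accessor methods.
    assert acct.get_balance() == 100
    # Step 3: test the methods that modify the object variables.
    acct.deposit(50)
    assert acct.get_balance() == 150
    # Step 4: destroy the object; further access must fail.
    del acct
    try:
        acct.get_balance()
        assert False, "access after destruction should fail"
    except NameError:
        pass  # the name is gone, as required

test_account_lifecycle()
print("lifecycle test passed")
```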
There are two other forms of classes and inheritance that pose special challenges for
testing—multiple inheritance and abstract classes.

Putting Classes to Work Together—Integration Testing


A variant of polymorphism called dynamic binding creates special challenges for testing.
The various methods of integration like top-down, bottom-up, big bang, and so on can all be
applicable here. The extra points to be noted about integration testing OO systems are that
1. OO systems are inherently meant to be built out of small, reusable components.
Hence integration testing will be even more critical for OO systems.
2. There is typically more parallelism in the development of the underlying
components of OO systems; thus the need for frequent integration is higher.
3. Given the parallelism in development, the sequence of availability of the classes
will have to be taken into consideration while performing integration testing.

System Testing and Interoperability of OO Systems


Object oriented systems are by design meant to be built using smaller reusable components
(i.e. the classes). This heavy emphasis on reuse of existing building blocks makes system
testing even more important for OO systems than for traditional systems. Some of the
reasons for this added importance are:
1. A class may have different parts, not all of which are used at the same time.
2. Different classes may be combined together by a client and this combination may
lead to new defects that are hitherto uncovered.
3. An instantiated object may not free all its allocated resources, thus causing memory
leaks and related problems, which will show up only in the system testing
phase.
Thus, proper entry and exit criteria should be set for the various test phases before system
testing so as to maximize the effectiveness of system testing.

Regression Testing of OO Systems


Taking the discussion of integration testing further, regression testing becomes very crucial
for OO systems. As a result of the heavy reliance of OO systems on reusable components,
changes to any one component could have potentially unintended side-effects on the clients
that use the component. Hence, frequent integration and regression runs become very
essential for testing OO systems. Also, because of the cascaded effects of changes resulting
from properties like inheritance, it makes sense to catch the defects as early as possible.

Tools for Testing of OO Systems


There are several tools that aid in testing OO systems. Some of these are
1. Use cases
2. Class diagrams
3. Sequence diagrams
4. State charts
Testing methods and tools for key OO concepts

Key OO concept            Testing methods and tools

Object orientation         Tests need to integrate data and methods more tightly

Unit testing of classes    BVA, equivalence partitioning, and so on for testing
                            variables
                           Code coverage methods for methods
                           Alpha-Omega method of exercising methods
                           Activity diagram for testing methods
                           State diagram for testing the states of a class
                           Stress testing to detect memory leaks and similar
                            defects when a class is instantiated and destroyed
                            multiple times

Encapsulation and          Requires unit testing at class level and incremental
inheritance                 class testing when encapsulating
                           Inheritance introduces extra context; each combination
                            of different contexts has to be tested
                           Desk checking and static review are tougher because of
                            the extra context

Abstract classes           Requires re-testing for every new implementation of
                            the abstract class

Polymorphism               Each of the different methods of the same name should
                            be tested separately
                           Maintainability of code may suffer

Dynamic binding            Conventional code coverage has to be modified to be
                            applicable for dynamic binding
                           Possibility of unanticipated run-time defects is higher

Inter-object               Message sequencing
communication via          Sequence diagrams
messages

Object reuse and           Needs more frequent integration tests and regression
parallel development        tests
of objects                 Integration testing and unit testing are not as clearly
                            separated as in the case of a procedure-oriented
                            language
                           Errors in interfaces between objects are likely to be
                            more common in OO systems and hence need thorough
                            interface testing

Configuration Testing
•Configuration testing is the process of checking the operation of the software under test
with all the various types of hardware.

The different configuration possibilities for a standard Windows-based PC


•The PC - Compaq, Dell, Gateway, Hewlett Packard, IBM
•Components - system boards, component cards, and other internal devices such as
disk drives, CD-ROM drives, video, sound, modem, and network cards
•Peripherals - printers, scanners, mice, keyboards, monitors, cameras, joysticks
•Interfaces - ISA, PCI, USB, PS/2, RS/232, and Firewire
•Options and memory - hardware options and memory sizes
•Device Drivers -All components and peripherals communicate with the operating
system and the software applications through low-level software called device drivers.
These drivers are often provided by the hardware device manufacturer and are
installed when you set up the hardware. Although technically they are software, for
testing purposes they are considered part of the hardware configuration.
To start configuration testing on a piece of software, the tester needs to consider which of
these configuration areas would be most closely tied to the program.
Examples:
•A highly graphical computer game will require lots of attention to the video and sound
areas.
•A greeting card program will be especially vulnerable to printer issues.
•A fax or communications program will need to be tested with numerous modems and
network configurations.

Finding Configuration Bugs:


The sure way to tell if a bug is a configuration problem and not just an ordinary bug is to
perform the exact same operation that caused the problem, step by step, on another computer
with a completely different configuration.
 If the bug doesn’t occur, it’s very likely a configuration problem.
 If the bug happens on more than one configuration, it’s probably just a regular
bug.
The general process that the tester should use when planning the configuration testing are:
•Decide the types of hardware needed
•Decide what hardware brands, models, and device drivers are available
•Decide which hardware features, modes, and options are possible
•Pare down the identified hardware configurations to a manageable set
•Identify the software’s unique features that work with the hardware configurations
•Design the test cases to run on each configuration
•Execute the tests on each configuration
•Rerun the tests until the results satisfy the test team
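The "pare down to a manageable set" step above amounts to sampling from the cross product of the configuration dimensions. A rough sketch (the brand and option names are invented, and the sampling rule is a stand-in for a real equivalence-partitioning decision):

```python
from itertools import product

# Hypothetical configuration dimensions (names are illustrative only).
pcs      = ["Dell", "HP", "Lenovo"]
printers = ["LaserJet", "Inkjet"]
video    = ["1080p", "4K"]

# The full cross product: every possible configuration.
all_configs = list(product(pcs, printers, video))
print(len(all_configs))  # 3 * 2 * 2 = 12 configurations

# Pare down to a manageable subset. Here we simply take every fourth
# configuration; in practice the choice would come from equivalence
# partitioning and market data, not a fixed stride.
pared = [cfg for i, cfg in enumerate(all_configs) if i % 4 == 0]
print(pared)
```

Each configuration in the pared-down set would then get the designed test cases run against it.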

Compatibility Testing
Testing done to ensure that the product features work consistently with different
infrastructure components is called compatibility testing. Software compatibility testing
means checking that your software interacts with and shares information correctly with other
software.
This interaction could occur between two programs simultaneously running on the same
computer, or even on different computers connected through the Internet, thousands of
miles apart.

Examples of compatible software


Cutting text from a Web page and pasting it into a document opened in your word processor.
In performing software compatibility testing on a new piece of software, the tester needs to
concentrate on
 Platform and Application Versions (Backward and Forward Compatibility, The
Impact of Testing Multiple Versions)
 Standards and Guidelines (High-Level Standards and Guidelines, Low-Level
Standards and Guidelines
 Data Sharing Compatibility (File save and file load, File export and file import )

The compatibility testing of a product involving parts of itself can be further classified
into two types.

1. Backward compatibility Testing : The testing that ensures the current version of
the product continues to work with the older versions of the same product is called
backward compatibility testing.
2. Forward compatibility testing: There are some provisions for the product to work
with later versions of the product and other infrastructure components, keeping
future requirements in mind. Such requirements are tested as part of forward
compatibility testing.
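Backward and forward compatibility checks on a saved-file format can be sketched as follows (the JSON layout, version numbers, and field names are hypothetical):

```python
import json

def save_v2(doc):
    # The current (v2) writer adds an "author" field.
    return json.dumps({"version": 2, "title": doc["title"],
                       "author": doc.get("author", "unknown")})

def load(raw):
    # Backward compatibility: the current loader must accept v1 files
    # that lack "author". Ignoring unknown fields gives a simple form
    # of forward compatibility with later versions.
    data = json.loads(raw)
    return {"title": data["title"], "author": data.get("author", "unknown")}

# A file written by the old v1 release: no "author" field.
v1_file = '{"version": 1, "title": "report"}'
assert load(v1_file) == {"title": "report", "author": "unknown"}

# A file from a hypothetical future v3 release with an extra field.
v3_file = '{"version": 3, "title": "report", "author": "bob", "tags": []}'
assert load(v3_file)["author"] == "bob"
print("compatibility checks passed")
```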

Platform and Application Versions


Selecting the target platforms or the compatible applications is really a program management
or a marketing task.
Backward compatible:
If something is backward compatible, it will work with previous versions of the software.
Forward compatible :
If something is forward compatible, it will work with future versions of the software.
In compatibility testing a new platform, the tester must check that existing software
applications work correctly with it.

To begin the task of compatibility testing, tester needs to equivalence partition all the
possible software combinations into the smallest, effective set that verifies that the software
interacts properly with other software.
In compatibility testing a new application, the tester may need to test it on multiple
platforms and with multiple applications.
Standards and Guidelines
There are two types of standards:
•High level
•Low level
High-level standards are the ones that guide the product’s general compliance, its look and
feel, its supported features, and so on.
Low-level standards are the nitty-gritty details, such as the file formats and the network
communications protocols.
Data Sharing Compatibility
The sharing of data among applications is what really gives software its power. To be a
truly compatible product, a well-written program that supports and adheres to published
standards must allow users to easily transfer data to and from other software.

Familiar means of transferring data :


1.File save and file load
2.File export and file import
3.Cut, copy, and paste

Testing the documentation


If the software’s documentation consists of nothing but a simple readme file, testing it would
not be a big deal. The tester should make sure that it included all the material that it was
supposed to, that everything was technically accurate, and to run a spell check and a virus
scan on the disk.
Types of Documentation: ( components classified as documentation)
•Packaging text and graphics
•Marketing material, ads, and other inserts
•Warranty/registration
•EULA
•Labels and stickers
•Installation and setup instructions
•User’s manual
•Online help
•Tutorials, wizards, and CBT
•Samples, examples, and templates
•Error messages
Documentation on a disk label

The Importance of Documentation Testing


Good software documentation contributes to the product’s overall quality in three ways
1.It improves usability
2.It improves reliability
3.It lowers support costs
Effective approach to documentation testing :

Approach the documentation as a user would. Read it carefully, follow every step, examine
every figure, and try every example. With this approach, the tester will find bugs both in
the software and the documentation.

Documentation Testing Checklist


1. General Areas
Website Testing
Web site testing encompasses many areas, including
•configuration testing,
•compatibility testing,
•usability testing,
•documentation testing,
•localization testing

Web page features


•Text of different sizes, fonts, and colors
•Graphics and photos
•Hyperlinked text and graphics
•Varying advertisements
•Drop-down selection boxes
•Fields in which the users can enter data

Features that make the Web site much more complex :


 Customizable layout that allows users to change where information is positioned on
screen
 Customizable content that allows users to select what news and information they want
to see
 Dynamic drop-down selection boxes
 Dynamically changing text
 Dynamic layout and optional information based on screen resolution
 Compatibility with different Web browsers, browser versions, and hardware and
software platforms
 Lots of hidden formatting, tagging, and embedded information that enhances the Web
page’s usability
Black-Box Testing
•Treat the Web page or the entire Web site as a black box
•Take some time and explore
 Think about how to approach testing a website
 What would the tester test?
 What would the equivalence partitions be?
 What would the tester choose not to test?

When testing a Web site, the tester first creates a state table, treating each page as a different
state with the hyperlinks as the lines connecting them. A completed state map will give a
better view of the overall task.
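The state-table idea above can be sketched in code: each page is a state, each hyperlink a transition, and the map can then be checked for broken links and unreachable pages. A minimal sketch (the site structure is invented for illustration):

```python
# Hypothetical state table: each page is a state, each hyperlink a transition.
site = {
    "home":           ["products", "about", "contact"],
    "products":       ["home", "product_detail"],
    "about":          ["home"],
    "contact":        ["home"],
    "product_detail": ["products", "missing_page"],  # deliberately broken link
}

def find_broken_links(pages):
    """Return (source, target) pairs whose target page does not exist."""
    return [(src, dst) for src, links in pages.items()
            for dst in links if dst not in pages]

def reachable_from(pages, start):
    """All pages reachable by following links from `start` (depth-first)."""
    seen, stack = set(), [start]
    while stack:
        page = stack.pop()
        if page in seen or page not in pages:
            continue
        seen.add(page)
        stack.extend(pages[page])
    return seen

print(find_broken_links(site))   # [('product_detail', 'missing_page')]
print(sorted(reachable_from(site, "home")))
```

A completed map like this tells the tester which links to exercise and which pages can never be reached.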

The tester should look for the following


•Text
Web page text should be treated just like documentation and tested accordingly. Check the
audience level, the terminology, the content and subject matter, and check the spelling.
•Hyperlinks
Links can be tied to text or graphics. Each link should be checked to make sure that it jumps
to the correct destination and opens in the correct window
•Graphics
Do all graphics load and display properly? If a graphic is missing or is incorrectly named, it
won’t load and the Web page will display an error where the graphic was to be placed.
•Forms
Test forms just as you would if they were fields in a regular software program. Are the fields
the correct size? Do they accept the correct data and reject the wrong data? Is there proper
confirmation when you finally press Enter? Are optional fields truly optional and the
required ones truly required?
•Objects and Other functionality
Take care to identify all the features present on each page. Does it have its own states? Does
it handle data? Could it have ranges or boundaries? What test cases apply and how should
they be equivalence classed?

Gray-Box Testing
 Gray-box testing is a mixture of black-box and white-box testing. Test the software as
a black-box, but supplement the work by taking a peek (not a full look, as in white-
box testing) at what makes the software work.
 HTML and Web pages can be tested as a gray box.
White-Box Testing :
Features of a website tested with a white-box approach are
 Dynamic Content
 Database-Driven Web Pages
 Programmatically Created Web Pages
 Server Performance and Loading
 Security
Configuration and Compatibility Testing :
 Configuration testing is the process of checking the operation of the software with
various types of hardware and software platforms and their different settings.
 Compatibility testing is checking the software’s operation with other software

The possible hardware and software configurations that could affect the operation
or appearance of a web site are:
•Hardware Platform
•Browser Software and Version
•Browser Plug-Ins
•Browser Options
•Video Resolution and Color Depth
•Text Size
•Modem Speeds
Usability Testing :
Following and testing a few basic rules can help make Web sites more usable.
Jakob Nielsen, a respected expert on Web site design and usability, has performed
extensive research on Web site usability.

The Top Ten Mistakes in Web Design


1.Gratuitous Use of Bleeding-Edge Technology
2.Scrolling Text, Marquees, and Constantly Running Animations
3.Long Scrolling Pages
4.Non-Standard Link Colors
5.Outdated Information
6.Overly Long Download Times
7.Lack of Navigation Support
8.Orphan Pages
9.Complex Web Site Addresses
10.Using Frames
Part A
1. What is Bottom-up integration Testing? And what are its advantages? (Apr/May – 2018)
Bottom-up testing: Integrate individual components in levels until the complete system is
created.
Advantages and disadvantages
• Architectural validation
– Top-down integration testing is better at discovering errors in the system architecture
• System demonstration
– Top-down integration testing allows a limited demonstration at an early stage in the
development
• Test implementation
– Often easier with bottom-up integration testing
• Test observation
– Problems with both approaches; extra code may be required to observe tests
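The bottom-up order can be sketched as follows: leaf components are exercised by a temporary driver first, then integrated into the next level up (the parse/total/report modules are invented for illustration):

```python
# Level 0 (leaf) components, tested first in bottom-up integration.
def parse(line):
    """Turn a comma-separated line into a list of integers."""
    return [int(x) for x in line.split(",")]

def total(values):
    """Sum a list of values."""
    return sum(values)

# Temporary driver exercising the leaves before any upper level exists.
assert parse("1,2,3") == [1, 2, 3]
assert total([1, 2, 3]) == 6

# Level 1: integrates the already-tested leaves into a larger unit.
def report(line):
    return f"sum={total(parse(line))}"

# Integration test of the combined levels.
assert report("4,5") == "sum=9"
print("bottom-up integration steps passed")
```

Note that the driver code above becomes unnecessary once the upper level exists, which is the "extra code" cost mentioned in the answer.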
2. Give example for security testing? (Apr/May – 2018)
Security testing evaluates system characteristics that relate to the availability, integrity, and
confidentiality of system data and services.
Security Testing examples: password checking, legal and illegal entry with passwords, password
expiration, encryption, browsing, trap doors, viruses.

3. What is the need for different levels of testing? (Nov/Dec – 2016)


Execution-based software testing, especially for large systems, is usually carried out at
different levels. At each level there are specific testing goals. At the system level the system as a
whole is tested, and a principal goal is to evaluate attributes such as usability, reliability, and
performance. To make sure that all the requirements are fulfilled, different levels of testing are
needed.

4. Brief the importance of test plan in software testing . (Apr/May – 2021)


A test plan is the foundation of every testing effort. It helps set out how the software will
be checked, what specifically will be tested, and who will be performing the test. By creating a
clear test plan all team members can follow, everyone can work together effectively.

5. List the features of risk matrix test tool. . (Apr/May – 2021)


Risk is composed of two factors: the probability of something happening and the (negative) business
impact that it would have. So, if we draw it in a matrix, we will be able to distinguish zones according
to risk, where the extremes will be:

1. Very likely, high impact: We must test it!


2. Unlikely, high impact: We should test it.
3. Very likely, low impact: If there is time, we could test it.
4. Unlikely, low impact: If we want to throw money down the drain, we’ll test this. That is, the test
is too expensive for the value it provides. So, we won’t test it.

Risk matrices are widely used in risk management. Using risk matrices to set priorities and guide
resource allocation is also recommended in several standards and is widespread across areas of
applied risk management, including enterprise risk management (ERM).

6. Define integration testing? (Apr/May – 2017)


Integration testing assembles tested units into working subsystems: one unit at a time is
integrated into a set of previously integrated modules which have passed a set of integration tests.
7. What are the different types of system tests? (Apr/May – 2017)
• Functional testing
• Performance testing
• Stress testing
• Configuration testing
• Security testing
• Recovery testing
8. Define alpha, beta and acceptance tests. (Nov/Dec – 2016)
When software is being developed for a specific client, acceptance tests are carried out after
system testing.
Alpha test: This test takes place at the developer’s site.
Beta test: Beta tests send the software to a cross-section of users who install it and use it
under real-world working conditions.

Define stress Testing. (Apr/May 2019)


When a system is tested with a load that causes it to allocate its resources in maximum
amounts, this is called stress testing. It is important because it can reveal defects in real-time and
other types of systems that the load would otherwise crash. This is sometimes called “breaking the system”.

Define test Harness. (Apr/May 2019), (Nov/Dec 2019).


The auxiliary code developed to support testing of units and components is called a test harness.
The harness consists of drivers that call the target code and stubs that represent the modules it calls.
A test harness enables the automation of tests. It refers to the system test drivers and other supporting
tools required to execute tests. It provides stubs and drivers, which are small programs that
interact with the software under test.
A test harness executes tests using a test library and generates a report. It requires that test
scripts be designed to handle different test scenarios and test data.
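The driver/stub relationship above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`compute_discount` is the unit under test; `pricing_service_stub` stands in for a module it would normally call), not a real harness framework.

```python
# Minimal test-harness sketch: a stub replaces a called module,
# and a driver feeds the unit test inputs and checks its outputs.
# All names are hypothetical.

def pricing_service_stub(item_id):
    """Stub: stands in for the real pricing module the unit calls."""
    return {"A1": 100.0}.get(item_id, 0.0)

def compute_discount(item_id, rate, get_price=pricing_service_stub):
    """Unit under test: applies a discount rate to an item's price."""
    return get_price(item_id) * (1 - rate)

def driver():
    """Driver: calls the target code with test inputs and records results."""
    results = []
    for item_id, rate, expected in [("A1", 0.1, 90.0), ("ZZ", 0.5, 0.0)]:
        results.append((item_id, compute_discount(item_id, rate) == expected))
    return results

print(driver())  # every entry should report True
```

The stub lets the unit be exercised in isolation, before the real pricing module exists; the driver automates running the test cases and collecting pass/fail results.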

Why is it important to design test harness for testing?(Nov/Dec 2017), (Apr/May 2017),
(Apr/May 2019), (Nov/Dec 2019).
A test harness enables the automation of tests. It refers to the system test drivers and other
supporting tools required to execute tests. It provides stubs and drivers, which are small
programs that interact with the software under test.
A test harness executes tests using a test library and generates a report. It requires that
test scripts be designed to handle different test scenarios and test data.

Define alpha and beta testing. (May/June 2014), (Nov/Dec 2016)


Alpha testing takes place at the developer's site and is carried out by internal teams before release
to external customers. It is typically performed by testers who are not part of the development team.
Beta testing, also known as user testing, takes place at the end users' site and is performed by the
end users to validate the usability, functionality, compatibility, and reliability of the software.

Define Unit test. (Nov/Dec 2017)


Unit testing is a software development process in which the smallest testable parts of an
application, called units, are individually and independently scrutinized for proper
operation. Unit testing can be done manually but is often automated.
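As a small illustration of automated unit testing, the sketch below exercises a single cohesive function with Python's standard `unittest` framework. The function and test names are hypothetical.

```python
# Automated unit-test sketch using the standard unittest framework.
# The unit and its tests are illustrative examples.
import unittest

def is_leap_year(year):
    """The unit under test: the smallest testable component,
    a single cohesive function."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    """Each test method independently scrutinizes one behavior of the unit."""
    def test_divisible_by_4(self):
        self.assertTrue(is_leap_year(2024))
    def test_century_not_leap(self):
        self.assertFalse(is_leap_year(1900))
    def test_divisible_by_400(self):
        self.assertTrue(is_leap_year(2000))

# Run the suite programmatically and report overall pass/fail.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LeapYearTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())
```

Because each test method is independent, a failure pinpoints one behavior of one unit, which is exactly what makes defects found at this level cheap to diagnose.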

Define Stress Testing. (Apr/May 2019)


When a system is tested with a load that causes it to allocate its resources in maximum amounts,
this is called stress testing. For example, if an operating system is required to handle 10 interrupts/second
and the load causes 20 interrupts/second, the system is being stressed.

What is security testing? (Apr/May 2018)


Security testing evaluates system characteristics that relate to the availability, integrity, and
confidentiality of system data and services. Users/clients should be encouraged to make sure their security
needs are clearly known at requirements time, so that security issues can be addressed by designers and
testers.

9. What are the difference levels of testing?


The major phases of testing: unit test, integration test, system test, and some type of
acceptance test.
10. What is the importance of acceptance testing?
During acceptance test the development organization must show that the software meets all
of the client’s requirements. Very often final payments for system development depend on the
quality of the software as observed during the acceptance test.
11. What is meant by software unit?
A unit is the smallest possible testable software component.
12. What are the characteristics of software unit?
a. performs a single cohesive function;
b. can be compiled separately;
c. is a task in a work breakdown structure (from the manager’s point of view);
d. contains code that can fit on a single page or screen.
13. What are the steps required to perform unit testing?
(i) plan the general approach to unit testing;
(ii) design the test cases and test procedures (these will be attached to the test plan);
(iii) define relationships between the tests;
(iv) prepare the auxiliary code necessary for unit test.
14. What do you mean by test harness?
The auxiliary code developed to support testing of units and components is called a test
harness. The harness consists of drivers that call the target code and stubs that represent modules it
calls.
15. What are the reasons for unit failure?
a. A fault in the test case specification (the input or the output was not specified correctly);
b. A fault in test procedure execution (the test should be rerun);
c. A fault in the test environment (perhaps a database was not set up properly);
d. A fault in the unit design (the code correctly adheres to the design specification, but the latter is
incorrect).
16. What is the need for test summary report?
This is a valuable document for the groups responsible for integration and system tests. It is
also a valuable component of the project history. Its value lies in the useful data it provides for
test process improvement and defect prevention.
17. What are the goals of integration test?
a. To detect defects that occur on the interfaces of units;
b. To assemble the individual units into working subsystems and finally a complete system that is
ready for system test.
18. What do you mean by clusters?
A cluster consists of classes that are related; for example, they may work together
(cooperate) to support a required functionality for the complete system.
19. What are the documents required for integration test planning?
Requirements document, the user manual, and usage scenarios. These documents contain
structure charts, state charts, data dictionaries, cross-reference tables, module interface
descriptions, data flow descriptions, and message and event descriptions, all necessary to
plan integration tests.
20. Write notes on cluster test plan.
The plan includes the following items:
a. Clusters this cluster is dependent on;
b. A natural language description of the functionality of the cluster to be tested;
c. List of classes in the cluster;
d. A set of cluster test cases.
21. What do you mean by load generator?
An important tool for implementing system tests is a load generator. A load generator is
essential for testing quality requirements such as performance and stress. A load is a series of inputs
that simulates a group of transactions.
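A load generator of this kind can be sketched as a loop that feeds simulated transactions to the system under test and collects the responses. This is a toy illustration; the transaction shape and the `toy_system` stand-in are assumptions, not a real load-testing tool.

```python
# Load-generator sketch: drive the system under test with a series of
# simulated transactions. Transaction fields and the toy system are
# illustrative assumptions.
import random

def make_transaction(rng):
    """Simulate one input transaction."""
    return {"account": rng.randint(1, 100), "amount": rng.uniform(1.0, 500.0)}

def generate_load(system_under_test, n_transactions, seed=0):
    """Feed a stream of simulated transactions to the system and
    collect each response for later analysis."""
    rng = random.Random(seed)  # seeded so the load is reproducible
    responses = []
    for _ in range(n_transactions):
        responses.append(system_under_test(make_transaction(rng)))
    return responses

# Stand-in "system" that just echoes the transaction amount.
def toy_system(tx):
    return tx["amount"]

responses = generate_load(toy_system, 1000)
print(len(responses))  # 1000 responses collected
```

In a real performance or stress test, the loop would also record timing data for each transaction so testers can relate load levels to response times.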
22. Define functional testing.
Functional tests are black box in nature. The focus is on the inputs and proper outputs for
each function. Improper and illegal inputs must also be handled by the system, and system
behavior under these circumstances must be observed. All functions must be tested.
23. What is the goal of performance testing?
The goal of system performance tests is to see if the software meets the performance
requirements. Testers also learn from performance tests whether there are any hardware or software
factors that impact the system’s performance. Performance testing allows testers to tune the
system; that is, to optimize the allocation of system resources.
24. What are the resources required for performance testing?
a. A source of transactions to drive the experiments.
b. An experimental test bed that includes hardware and software the system-under-test interacts
with.
c. Instrumentation or probes that help to collect the performance data.
d. A set of tools to collect, store, process, and interpret the data.
25. What do you mean by regression testing? (Nov/Dec – 2018)
Regression testing is not a level of testing; it is the retesting of software that occurs when
changes are made, to ensure that the new version of the software has retained the capabilities of the
old version and that no new defects have been introduced due to the changes.
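The idea can be sketched by rerunning a retained test suite against a changed function. The function, the change, and the test cases below are all hypothetical.

```python
# Regression-testing sketch: after changing a unit, rerun the tests
# retained from the old version to confirm its capabilities still hold.
# The function and test cases are illustrative assumptions.

def format_name(first, last):
    # New version: now also strips surrounding whitespace (the change).
    return f"{last.strip()}, {first.strip()}"

# Test cases retained from the old version's suite:
old_suite = [
    (("Ada", "Lovelace"), "Lovelace, Ada"),
    (("Alan", "Turing"), "Turing, Alan"),
]

def run_regression(suite):
    """True only if every retained test still passes on the new version."""
    return all(format_name(*args) == expected for args, expected in suite)

print(run_regression(old_suite))  # True: old capabilities retained
```

If any retained test fails, the change has broken an existing capability, which is precisely the class of defect regression testing exists to catch.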
26. List the levels of testing or phases of testing. (Nov/Dec – 2018)
a. Unit Test
b. Integration Test
c. System Test
d. Acceptance Test
27. List the phases of unit test planning.
Unit test planning proceeds through a set of development phases. Phase 1: describe the unit
test approach and risks. Phase 2: identify unit features to be tested. Phase 3: add levels of detail to the plan.
28. What are the steps for top down integration?
a. The main control module is used as a test driver, and stubs are substituted for all components
directly subordinate to the main module.
b. Depending on the integration approach (depth- or breadth-first), subordinate stubs are replaced
one at a time with actual components.
c. Tests are conducted as each component is integrated.
d. On completion of each set of tests, another stub is replaced with the real component.
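The steps above can be sketched as a main module tested first against a stub, which is then swapped for the real component. The module names are hypothetical.

```python
# Top-down integration sketch: the upper-level module is tested first
# with a stub for its subordinate, then the stub is replaced by the
# real component. All names are illustrative assumptions.

def real_tax(amount):
    """The actual subordinate component, integrated later."""
    return amount * 0.2

def stub_tax(amount):
    """Stand-in for the subordinate until it is integrated."""
    return 0.0

def main_module(amount, tax=stub_tax):
    """Upper-level module under test; `tax` starts out as a stub."""
    return amount + tax(amount)

# Step a: test the main module with the stub in place.
print(main_module(100.0))                # 100.0 (stub contributes nothing)
# Steps b-d: replace the stub with the real component and retest.
print(main_module(100.0, tax=real_tax))  # amount plus 20% tax
```

Repeating the replace-and-retest cycle for each subordinate module grows the integrated system one component at a time, which is what lets interface defects be localized to the most recently integrated component.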
29. Define stress Testing.
When a system is tested with a load that causes it to allocate its resources in
maximum amounts, this is called stress testing. It is important because it can reveal defects
in real-time and other types of systems that the load would otherwise crash. This is sometimes
called “breaking the system”.
30. What are the two major requirements in the Performance testing?
a. Functional requirements: Users describe what functions the software should perform. We test
for compliance with these requirements at the system level with function-based system tests.
b. Quality requirements: These are nonfunctional in nature but describe the quality levels
expected of the software.
31. What are the Integration strategies?
a. Top-down: In this strategy, integration of the modules begins with testing the upper-level
modules.
b. Bottom-up: In this strategy, integration of the modules begins with testing the lowest-level
modules.
32. Define Test incident report.
The tester must determine from the test whether the unit has passed or failed the test. If the
test is failed, the nature of the problem should be recorded in what is sometimes called a test incident
report.
33. Define use case.
A use case is a pattern, scenario, or exemplar of usage. It describes a typical interaction
between the software system under development and a user.
34. List the issues of class testing.
Issue1: Adequately testing classes
Issue2: Observation of object states and state changes.
Issue3: The retesting of classes-I
Issue4: The retesting of classes-II
