Unit III: Part B
LEVELS OF TESTING
The need for Levels of Testing – Unit Test – Unit Test Planning – Designing the Unit Tests
– The Test Harness – Running the Unit tests and Recording results – Integration tests –
Designing Integration Tests – Integration Test Planning – Scenario testing – Defect bash
elimination System Testing – Acceptance testing – Performance testing – Regression
Testing – Internationalization testing – Ad-hoc testing – Alpha, Beta Tests – Testing OO
systems – Usability and Accessibility testing – Configuration testing – Compatibility testing
– Testing the documentation –Website testing
Part B
1. What do you mean by unit testing? Explain in detail about the process of unit testing
and unit test planning. (Apr/May – 2018) (Apr/May – 2017)
2. Write the importance of security testing and explain the consequences of security
breaches; also write the various areas which have to be focused on during security
testing. (Apr/May – 2018)
3. Write notes on configuration testing and its objectives. (Apr/May – 2018)
4. State the need for integration testing in procedural code. (Apr/May – 2018)
5. Explain in detail about test harness. Also write notes on integration test.
6. Explain various system testing approaches in detail. (Nov/Dec – 2016) (Apr/May –
2017)
7. Write notes on regression testing, alpha and beta acceptance testing strategies.
8. Write notes on configuration testing and compatibility testing. (Nov/Dec – 2016)
The four levels of testing are:
Unit testing
Integration testing
System testing
Acceptance testing
Integration test: At the integration level, several components are tested as a group.
Goal: To investigate component interactions.
Acceptance test: In the acceptance test, the development organization must show that the
software meets all of the client's requirements.
Goal: To enable the client to determine whether to accept the software as meeting the
agreed requirements.
UNIT TESTING
Functions, Procedures, Classes, and Methods as Units
A workable definition for a software unit is as follows:
A unit is the smallest possible testable software component.
It can be characterized in several ways. For example, a unit in a typical procedure- oriented
software system:
• performs a single cohesive function;
• can be compiled separately;
• is a task in a work breakdown structure (from the manager’s point of view);
• contains code that can fit on a single page or screen.
• A unit is traditionally viewed as a function or procedure implemented in a
procedural (imperative) programming language.
• In object-oriented systems both the method and the class/object have been suggested
by researchers as the choice for a unit
• A unit may also be a small-sized COTS component purchased from an outside
vendor that is undergoing evaluation by the purchaser, or a simple module retrieved
from an in-house reuse library.
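To make the definition concrete, here is a minimal sketch of a unit and its unit test in Python (the `apply_discount` function and the requirements behind it are invented for illustration):

```python
import unittest

# A hypothetical unit: a single cohesive function that can be
# imported and tested in isolation.
def apply_discount(price, percent):
    """Return price reduced by percent; reject invalid input."""
    if price < 0 or not (0 <= percent <= 100):
        raise ValueError("invalid price or percent")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_boundary_zero_percent(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_illegal_input_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)

if __name__ == "__main__":
    # exit=False lets the script continue after the test run
    unittest.main(argv=["unit"], exit=False, verbosity=2)
```

Note that the unit test exercises legal values, a boundary, and an illegal input, which is the pattern the unit test design techniques below formalize.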
Some components suitable for unit test
To prepare for unit test the developer/tester must perform several tasks. These are:
(i) plan the general approach to unit testing;
(ii) design the test cases, and test procedures (these will be attached to the test plan);
(iii) define relationships between the tests;
(iv) prepare the auxiliary code necessary for unit test.
A brief description of a set of development phases for unit test planning is found
below. In each phase a set of activities is assigned based on those found in the IEEE
Standard for Software Unit Testing.
Phase 1: Describe Unit Test Approach and Risks
In this phase of unit testing planning the general approach to unit testing is outlined. The
test planner:
(i) identifies test risks;
(ii) describes techniques to be used for designing the test cases for the units;
(iii) describes techniques to be used for data validation and recording of test results;
(iv) describes the requirements for test harnesses and other software that interfaces
with the units to be tested, for example, any special objects needed for testing object-
oriented units.
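The auxiliary software mentioned in (iv) is typically a test harness of drivers and stubs. A minimal sketch, assuming a unit whose lower-level dependency is not yet implemented (all names here are hypothetical):

```python
# Hypothetical unit under test: it depends on a tax-rate lookup
# component that is not yet available, so the harness supplies a stub.
def total_with_tax(amount, rate_lookup, region):
    rate = rate_lookup(region)      # call into the (stubbed) dependency
    return round(amount * (1 + rate), 2)

# Stub: stands in for the missing lower-level component.
def stub_rate_lookup(region):
    return {"EU": 0.20, "US": 0.07}.get(region, 0.0)

# Driver: calls the unit with planned inputs and records results.
def run_unit_tests():
    results = []
    cases = [("EU", 100.0, 120.0), ("US", 100.0, 107.0), ("XX", 100.0, 100.0)]
    for region, amount, expected in cases:
        actual = total_with_tax(amount, stub_rate_lookup, region)
        results.append((region, actual == expected, actual))
    return results

for region, passed, actual in run_unit_tests():
    print(region, "PASS" if passed else "FAIL", actual)
```

The driver plays the role of a caller of the unit and the stub plays the role of a module the unit calls, which is exactly the auxiliary code a harness must provide.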
The next steps in unit testing consist of designing the set of test cases, developing the
auxiliary code needed for testing, executing the tests, and recording and analyzing the
results.
It is very important for the tester at any level of testing to carefully record, review, and
check test results. The tester must determine from the results whether the unit has passed or
failed the test. If the test is failed, the nature of the problem should be recorded in what is
sometimes called a test incident report. Differences from expected behaviour should be
described in detail. This gives clues to the developers to help them locate any faults. During
testing the tester may determine that additional tests are required. For example, a tester may
observe that a particular coverage goal has not been achieved. The test set will have to be
augmented and the test plan documents should reflect these changes.
INTEGRATION TESTING
Integration testing focuses on testing interfaces that are "implicit and explicit" and
"internal and external."
Figure 5.1 shows a set of modules and the interfaces associated with them. The solid lines
represent explicit interfaces and the dotted lines represent implicit interfaces, based on the
understanding of architecture, design, or usage.
Figure 5.1 A set of modules and interfaces.
Order of interfaces tested using top-down integration:
Step 1: 1-2
Step 2: 1-3
Step 3: 1-4
Step 4: 1-2-5
Step 5: 1-3-6
Step 6: 1-3-6-(3-7)
Step 7: (1-2-5)-(1-3-6-(3-7))
Step 8: 1-4-8
Step 9: (1-2-5)-(1-3-6-(3-7))-(1-4-8)
The order in which the interfaces are tested may change a bit if different methods of
traversal are used. A breadth-first approach gives an order such as 1–2, 1–3, 1–4, and so
on, while a depth-first order gives paths such as 1–2–5, 1–3–6, and so on.
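The two traversal orders can be computed mechanically. A small sketch, assuming the module structure implied by Figure 5.1 (1 calls 2, 3, 4; 2 calls 5; 3 calls 6 and 7; 4 calls 8 — this structure is inferred, not stated explicitly):

```python
from collections import deque

# Assumed call hierarchy from Figure 5.1.
CALLS = {1: [2, 3, 4], 2: [5], 3: [6, 7], 4: [8], 5: [], 6: [], 7: [], 8: []}

def breadth_first_interfaces(root=1):
    """Interfaces in the order a breadth-first top-down integration visits them."""
    order, queue = [], deque([root])
    while queue:
        parent = queue.popleft()
        for child in CALLS[parent]:
            order.append(f"{parent}-{child}")
            queue.append(child)
    return order

def depth_first_paths(root=1, prefix=""):
    """Call paths in the order a depth-first top-down integration covers them."""
    paths = []
    label = f"{prefix}-{root}" if prefix else str(root)
    for child in CALLS[root]:
        paths.append(f"{label}-{child}")
        paths.extend(depth_first_paths(child, label))
    return paths

print(breadth_first_interfaces())  # ['1-2', '1-3', '1-4', '2-5', '3-6', '3-7', '4-8']
print(depth_first_paths())         # ['1-2', '1-2-5', '1-3', '1-3-6', '1-3-7', '1-4', '1-4-8']
```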
Bottom-up Integration
Bottom-up integration is just the opposite of top-down integration, where the components
for a new product development become available in reverse order, starting from the bottom.
Example of bottom-up integration. Arrows pointing down depict logic flow; arrows pointing
up indicate integration paths.
The navigation in bottom-up integration starts from component 1 and covers all sub-
systems until component 8 is reached. Order of interfaces tested using bottom-up integration:
Step 1: 1-5
Step 2: 2-6, 3-6
Step 3: 2-6-(3-6)
Step 4: 4-7
Step 5: 1-5-8
Step 6: 2-6-(3-6)-8
Step 7: 4-7-8
Step 8: (1-5-8)-(2-6-(3-6)-8)-(4-7-8)
The arrows from bottom to top (that is, upward-pointing arrows) indicate integration
approach or integration path. What it means is that the logic flow of the product can be
different from the integration path. It may be easy to say that top-down integration approach
is best suited for the Waterfall and V models and the bottom-up approach for the iterative
and agile methodologies.
Advantages and disadvantages
• Architectural validation
– Top-down integration testing is better at discovering errors in the system
architecture
• System demonstration
– Top-down integration testing allows a limited demonstration at an early
stage in the development
• Test implementation
– Often easier with bottom-up integration testing
• Test observation
– Problems with both approaches. Extra code may be required to observe tests
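The "extra code required to observe tests" can be as simple as a logging wrapper placed around each interface. A minimal sketch (the decorator design and the component names are assumptions, not a standard mechanism):

```python
import functools

# Observation aid: wrap a component so that calls crossing the
# interface are logged for later inspection by the tester.
def observed(log):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            log.append((fn.__name__, args, result))
            return result
        return wrapper
    return decorator

trace = []

@observed(trace)
def format_balance(cents):
    return f"{cents / 100:.2f}"

@observed(trace)
def balance_report(cents):
    return "Balance: " + format_balance(cents)

print(balance_report(12345))          # Balance: 123.45
for name, args, result in trace:      # inner call is logged before the outer one
    print(name, args, "->", result)
```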
Bi-Directional Integration
Bi-directional integration is a combination of the top-down and bottom-up integration
approaches used together to derive integration steps.
Order of interfaces tested using bi-directional integration:
Step 1: 6-2
Step 2: 7-3-4
Step 3: 8-5
Step 4: (1-6-2)-(1-7-3-4)-(1-8-5)
As you can see from the table, steps 1–3 use a bottom-up integration approach and step 4
uses a top-down integration approach for this example.
System Integration
System integration means that all the components of the system are integrated and tested as a
single unit. Integration testing, which is testing of interfaces, can be divided into two types:
Components or sub-system integration
Final integration testing or system integration
Instead of integrating component by component and testing, this approach waits till all
components arrive, and one round of integration testing is done. This approach is also
called big-bang integration. It reduces testing effort and removes duplication in testing.
Big-bang integration is ideal for a product where the interfaces are stable, with fewer
defects.
System integration using the big bang approach is well suited in a product development
scenario where the majority of components are already available and stable and very few
components get added or modified.
For readers integrating object-oriented systems, Murphy et al. give a detailed description
of a Cluster Test Plan. The plan includes the following items:
(i) clusters this cluster is dependent on;
(ii) a natural language description of the functionality of the cluster to be tested;
(iii) list of classes in the cluster;
(iv) a set of cluster test cases.
SCENARIO TESTING
Scenario testing is defined as a “set of realistic user activities that are used for evaluating
the product.” It is also defined as the testing involving customer scenarios.
There are two methods to evolve scenarios.
1. System scenarios
2. Use-case scenarios/role based scenarios
System Scenarios
System scenario is a method whereby the set of activities used for scenario testing covers
several components in the system. The following approaches can be used to develop system
scenarios.
Story line: Develop a story line that combines various activities of the product that may be
executed by an end user.
Battle ground: Create some scenarios to justify that "the product works" and some
scenarios to "try and break the system" to justify that "the product doesn't work." This adds
flavor to the scenarios mentioned above.
User activity: User wants to withdraw cash and inserts the card in the ATM machine.
System response: Request for password or Personal Identification Number (PIN).
User activity: User fills in the amount of cash required.
System response: Check availability of funds; update account balance; prepare receipt;
dispense cash.
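The ATM story line above can be scripted as a scenario test combining a "product works" pass and a battle-ground pass. A toy sketch (the `ATM` model and its response strings are invented for illustration):

```python
# Toy model of the system under test for the ATM scenario.
class ATM:
    def __init__(self, balance, pin):
        self.balance, self.pin, self.authed = balance, pin, False

    def insert_card_and_enter_pin(self, pin):
        self.authed = (pin == self.pin)
        return self.authed

    def withdraw(self, amount):
        if not self.authed:
            return "DECLINED: not authenticated"
        if amount > self.balance:
            return "DECLINED: insufficient funds"
        self.balance -= amount
        return f"DISPENSED {amount}; receipt printed; balance {self.balance}"

atm = ATM(balance=500, pin="4321")

# "The product works" scenario.
assert atm.insert_card_and_enter_pin("4321")
assert atm.withdraw(200).startswith("DISPENSED 200")

# "Try and break the system" (battle-ground) scenario.
assert atm.withdraw(10_000) == "DECLINED: insufficient funds"
```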
DEFECT BASH
Defect bash is ad hoc testing where people performing different roles in an organization
test the product together at the same time.
Defect bash brings together plenty of good practices that are popular in testing industry.
They are as follows.
1. Enabling people to "cross boundaries and test beyond assigned areas."
2. Bringing different people performing different roles in the organization together for
testing: "testing isn't for testers alone."
3. Letting everyone in the organization use the product before delivery: "eat your
own dog food."
4. Bringing fresh pairs of eyes to uncover new defects: "fresh eyes have less bias."
5. Bringing in people who have different levels of product understanding to test the
product together randomly: "users of software are not the same."
6. Not letting testing wait for documentation, or for the time taken to produce it:
"does testing wait till all documentation is done?"
7. Enabling people to say "the system works" as well as enabling them to "break the
system": "testing isn't to conclude the system works or doesn't work."
Even though defect bash is called ad hoc testing, not all of its activities are unplanned.
All the activities in a defect bash are planned, except for what is to be tested. It involves
several steps.
Step 1: Choosing the frequency and duration of defect bash
Step 2: Selecting the right product build
Step 3: Communicating the objective of each defect bash to everyone
Step 4: Setting up and monitoring the lab for defect bash
Step 5: Taking actions and fixing issues
Step 6: Optimizing the effort involved in defect bash
1.Choosing the Frequency and Duration of Defect Bash
Defect bash is an activity involving a large amount of effort (since it involves a large number
of people) and significant planning (as is evident from the above steps).
2.Selecting the Right Product Build
Since the defect bash involves a large number of people, effort and planning, a good quality
build is needed for defect bash. A regression tested build would be ideal as all new features
and defect fixes would have been already tested in such a build.
3.Communicating the Objective of Defect Bash
The objective should be to find a large number of uncovered defects, to find out system
requirements (CPU, memory, disk, and so on), or to find the non-reproducible or random
defects, which could be difficult to find through other planned tests.
4.Setting Up and Monitoring the Lab
During defect bash, the product parameters and system resources (CPU, RAM, disk,
network) need to be monitored for defects and also corrected so that users can continue to
use the system for the complete duration of the defect bash.
There are two types of defects that will emerge during a defect bash. The defects that are
in the product, as reported by the users, can be classified as functional defects.
Defects that are unearthed while monitoring the system resources, such as memory leak,
long turnaround time, missed requests, high impact and utilization of system resources, and
so on are called non-functional defects.
5.Taking Actions and Fixing Issues
The last step is to take the necessary corrective action after the defect bash. Getting a large
number of defects from users is the purpose and also the normal end result from a defect
bash. Many defects could be duplicate defects. It is difficult to solve all the problems if they
are taken one by one and fixed in code.
6.Optimizing the Effort Involved in Defect Bash
Having a tested build, keeping the right setup ready, sharing the objectives in advance, and
so on all help save effort and meet the purpose. Another approach to reduce the defect bash
effort is to conduct "micro level" defect bashes before conducting one on a large scale.
SYSTEM TESTING
The testing conducted on the complete integrated products and solutions to evaluate system
compliance with specified requirements on functional and nonfunctional aspects is called
system testing.
The goal is to ensure that the system performs according to its requirements.
System test evaluates both functional behavior and quality requirements such as reliability,
usability, performance and security.
FUNCTIONAL TESTING
• Ensure that the behavior of the system adheres to the requirements specification
• All functional requirements for the system must be achievable by the system.
• Black-box in nature
• Equivalence class partitioning, boundary-value analysis and state-based testing are
valuable techniques
• Document and track test coverage with a (tests to requirements) traceability matrix
• A defined and documented form should be used for recording test results from
functional and other system tests
• Failures should be reported in test incident reports
– Useful for developers (together with test logs)
– Useful for managers for progress tracking and quality assurance purposes
• The tests should focus on the following goals.
– All types or classes of legal inputs must be accepted by the software.
– All classes of illegal inputs must be rejected (however, the system should
remain available).
– All possible classes of system output must be exercised and examined.
– All effective system states and state transitions must be exercised and
examined.
– All functions must be exercised.
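The legal/illegal input goals above can be exercised with equivalence classes and boundary values. A minimal sketch, assuming a hypothetical field that accepts an age from 18 to 65 inclusive (the range is an assumption for illustration):

```python
# System behavior under test: accept an age in the valid partition.
def accept_age(age):
    return 18 <= age <= 65

# One representative per equivalence class, plus the boundary values.
legal_class = [30]                  # inside the valid partition
illegal_classes = [5, 90]           # below and above the valid partition
boundaries = [17, 18, 65, 66]       # just outside / on each edge

assert all(accept_age(a) for a in legal_class)
assert not any(accept_age(a) for a in illegal_classes)
assert [accept_age(a) for a in boundaries] == [False, True, True, False]
print("all equivalence-class and boundary cases behave as specified")
```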
PERFORMANCE TESTING
Requirements document shows that there are two major types of requirements:
1. Functional requirements: Users describe what functions the software should perform.
Testers test for compliance with these requirements at the system level with the
functional-based system tests.
2. Quality requirements: They are non-functional in nature but describe quality levels
expected for the software. One example of a quality requirement is performance level. The
users may have objectives for the software system in terms of memory use, response time,
throughput, and delays.
• Goals:
– See if the software meets the performance requirements
– See whether there are any hardware or software factors that impact the
system's performance
– Provide valuable information to tune the system
– Predict the system's future performance levels
• Performance objectives must be articulated clearly by the users/clients in the
requirements documents, and be stated clearly in the system test plan.
• The objectives must be quantified.
For example, a requirement that the system return a response to a query in "a reasonable
amount of time" is not an acceptable requirement; the time requirement must be specified
in a quantitative way.
• Resources for performance testing must be allocated in the system test plan.
• Results of performance test should be quantified, and the corresponding
environmental conditions should be recorded
• Resources usually needed
– a source of transactions to drive the experiments, typically a load generator
– an experimental test bed that includes hardware and software the system
under test interacts with
– instrumentation of probes that help to collect the performance data (event
logging, counting, sampling, memory allocation counters, etc.)
– a set of tools to collect, store, process and interpret data from probes
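A load generator and probe can be sketched in a few lines (the request handler, load level, and metrics here are assumptions, not a real performance tool):

```python
import time

# Stand-in for the system under test.
def handle_request(payload):
    return sum(payload)

# Toy load generator: fire N requests and report throughput and
# worst-case response time, the kind of quantified data a
# performance test must record.
def drive_load(n_requests=1000):
    samples = []
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        handle_request(range(100))
        samples.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {"throughput_rps": n_requests / elapsed,
            "worst_response_s": max(samples)}

print(drive_load())
```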
Stress Testing:
When a system is tested with a load that causes it to allocate its resources in maximum
amounts, it is called stress testing.
Ex.
If an operating system is required to handle 10 interrupts/second and the load causes 20
interrupts/second, the system is being stressed.
• The goal of a stress test is to try to break the system: find the circumstances under which
it will crash. This is sometimes called "breaking the system."
•Stress testing often uncovers race conditions, deadlocks, depletion of resources in unusual
or unplanned patterns, and upsets in normal operation of the software system.
• Stress testing is supported by many of the resources used for performance testing.
EXAMPLE: The load generator. The testers set the load generator parameters so that load
levels cause stress to the system. For example, in our example of a telecommunication
system, the arrival rate of calls, the length of the calls, the number of misdials, as well as
other system parameters should all be at stress levels. As in the case of performance test,
special equipment and laboratory space may be needed for the stress tests. Examples are
hardware or software probes and event loggers. The tests may need to run for several days.
Planners must ensure resources are available for the long time periods required. The reader
should note that stress tests should also be conducted at the integration level, and if
applicable at the unit level, to detect stress-related defects as early as possible in the testing
process. This is especially critical in cases where redesign is needed.
CONFIGURATION TESTING
• Configuration testing allows developers/testers to evaluate system performance and
availability when hardware exchanges and reconfigurations occur.
• Configuration testing also requires many resources including the multiple hardware
devices used for the tests. If a system does not have specific requirements for device
configuration changes then large-scale configuration testing is not essential.
• Several types of operations should be performed during configuration test. Some
sample operations for testers are
(i) rotate and permute the positions of devices to ensure physical/logical device
permutations work for each device (e.g., if there are two printers A and B, exchange
their positions);
(ii) induce malfunctions in each device, to see if the system properly handles the
malfunction;
(iii) induce multiple device malfunctions to see how the system reacts. These
operations will help to reveal problems (defects) relating to hardware/software
interactions when hardware exchanges and reconfigurations occur.
The Objectives of Configuration Testing
Show that all the configuration changing commands and menus work properly.
Show that all the interchangeable devices are really interchangeable, and that they
each enter the proper state for the specified conditions.
Show that the system's performance level is maintained when devices are
interchanged, or when they fail.
SECURITY TESTING
• Security testing evaluates system characteristics that relate to the availability,
integrity, and confidentiality of system data and services.
• Users/clients should make sure their security needs are clearly known at requirements
time, so that security issues can be addressed by designers and testers.
• Computer software and data can be compromised by
– criminals intent on doing damage, stealing data and information, causing denial
of service, invading privacy
– errors on the part of honest developers/maintainers (and users?) who modify,
destroy, or compromise data because of misinformation, misunderstandings,
and/or lack of knowledge
• Both can be perpetrated by those inside and outside of an organization
• Attacks can be random or systematic. Damage can be done through various means
such as:
(i) Viruses; (ii) Trojan horses;
(iii) Trap doors; (iv) illicit channels.
• The effects of security breaches could be extensive and can cause:
(i) loss of information; (ii) corruption of information;
(iii) misinformation; (iv) privacy violations;
(v) denial of service.
• Other Areas to focus on Security Testing:
password checking, legal and illegal entry with passwords, password expiration,
encryption, browsing, trap doors, viruses, …
Although a testing group in the organization can be involved in testing for security breaches,
a so-called tiger team (a group engaged specifically to attempt to penetrate the system) can
attack the problem from a different point of view. Before the tiger team starts its work, the
system should be thoroughly tested at all levels.
RECOVERY TESTING
• Recovery testing subjects a system to losses of resources in order to determine if it can
recover properly from these losses.
• Especially important for transaction systems
• Example: loss of a device during a transaction
• Tests would determine if the system could return to a well-known state, and that no
transactions have been compromised
– Systems with automated recovery are designed for this purpose
• They usually have multiple CPUs and/or multiple instances of devices, and
mechanisms to detect the failure of a device. They also have a so-called "checkpoint"
system that meticulously records transactions and system states periodically so that
these are preserved in case of failure. This information allows the system to return to a
known state after the failure.
• The recovery testers must ensure that the device monitoring system and the
checkpoint software are working properly.
• Areas to focus on Recovery Testing:
– Restart – the ability of the system to restart properly on the last checkpoint
after a loss of a device
– Switchover – the ability of the system to switch to a new processor, as a result
of a command or a detection of a faulty processor by a monitor
• In each of these testing situations all transactions and processes must be carefully
examined to detect:
(i) loss of transactions;
(ii) merging of transactions;
(iii) incorrect transactions;
(iv) an unnecessary duplication of a transaction.
A good way to expose such problems is to perform recovery testing under a stressful
load. Transaction inaccuracies and system crashes are likely to occur with the result that
defects and design flaws will be revealed.
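The checkpoint-and-restart behavior described above can be sketched as follows (the ledger model, method names, and crash simulation are invented for illustration):

```python
import copy

# Toy transaction system: record a checkpoint after each committed
# transaction so a restart can return to a known state.
class CheckpointedLedger:
    def __init__(self):
        self.state = {"balance": 0, "applied": []}
        self._checkpoint = copy.deepcopy(self.state)

    def apply(self, txn_id, amount):
        self.state["balance"] += amount
        self.state["applied"].append(txn_id)
        self._checkpoint = copy.deepcopy(self.state)   # checkpoint after commit

    def crash_mid_transaction(self, amount):
        self.state["balance"] += amount                # partial work, never committed

    def restart(self):
        self.state = copy.deepcopy(self._checkpoint)   # back to last known state

ledger = CheckpointedLedger()
ledger.apply("T1", 100)
ledger.crash_mid_transaction(55)     # simulated device loss mid-transaction
ledger.restart()
# Recovery check: no lost, merged, incorrect, or duplicated transactions.
assert ledger.state == {"balance": 100, "applied": ["T1"]}
print("recovered to last checkpoint with transactions intact")
```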
REGRESSION TESTING
Regression testing is done to ensure that enhancements or defect fixes made to the software
work properly and do not affect the existing functionality. Regression testing follows a
selective re-testing technique. Whenever defect fixes are done, the test team selects a set of
test cases that need to be run to verify the fixes. An impact analysis is done to find out what
areas may get impacted by those defect fixes.
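The selective re-testing step can be sketched as a simple impact analysis over a test-to-module coverage map (all test and module names here are hypothetical):

```python
# Which modules each regression test case covers (assumed mapping).
TEST_COVERAGE = {
    "test_login": {"auth", "session"},
    "test_checkout": {"cart", "payment"},
    "test_refund": {"payment", "ledger"},
    "test_search": {"catalog"},
}

def select_regression_tests(changed_modules):
    """Pick only the tests whose covered modules intersect the change set."""
    changed = set(changed_modules)
    return sorted(t for t, mods in TEST_COVERAGE.items() if mods & changed)

print(select_regression_tests(["payment"]))   # ['test_checkout', 'test_refund']
```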
ENABLING TESTING
An activity of code review or code inspection, mixed with some test cases for unit testing,
with the objective of catching I18n defects, is called enabling testing.
Check the code for APIs/function calls that are not part of the I18n API set. For
example, printf() and scanf() are C functions that are not part of the I18n API set.
Check the code for hard-coded date, currency formats, ASCII code, or character
constants.
Check the code to see that there are no computations (addition, subtraction) done on
date variables or a different format forced to the date in the code.
Check the dialog boxes and screens to see whether they leave at least 0.5 times
more space for expansion (as the translated text can take more space).
Check that the code does not assume that the language characters can be represented
in 8 bits, 16 bits, or 32 bits.
If the code uses scrolling of text, then the screen and dialog boxes must allow
adequate provisions for direction change in scrolling such as top to bottom, right to
left, left to right, bottom to top, and so on as conventions are different in different
languages.
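Part of the checklist above can be automated as a simple source scan. A rough sketch (the patterns and the sample C source are assumptions, not a complete enabling-test tool):

```python
import re

# Sample C source to inspect (invented for illustration).
SOURCE = '''
printf("Total: %d\\n", total);
strcpy(buf, "01/02/2023");        /* hard-coded date format */
'''

# Patterns the enabling-test checklist flags (simplified assumptions).
CHECKS = {
    "non-I18n API call": r"\b(printf|scanf|strcpy)\s*\(",
    "hard-coded date": r'"\d{2}/\d{2}/\d{4}"',
}

findings = [name for name, pattern in CHECKS.items()
            if re.search(pattern, SOURCE)]
print(findings)   # ['non-I18n API call', 'hard-coded date']
```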
INTERNATIONALIZATION VALIDATION
I18n validation is different from I18n testing; I18n testing is the superset of all types of I18n tests.
I18n validation is performed with the following objectives.
1. The software is tested for functionality with ASCII, DBCS, and European
characters.
2. The software handles string operations, sorting, sequencing operations as per the
language and characters selected.
3. The software display is consistent with characters which are non-ASCII in GUI and
menus.
4. The software messages are handled properly.
FAKE LANGUAGE TESTING
The fake language translators use English-like target languages, which are easy to
understand and test. This type of testing helps English testers to find the defects that may
otherwise be found only by language experts during localization testing.
LANGUAGE TESTING
Language testing is the short form of "language compatibility testing." It ensures that
software created in English can work with platforms and environments that are both
English and non-English.
LOCALIZATION TESTING
When the software is approaching the release date, messages are consolidated into a separate
file and sent to multilingual experts for translation. A set of build tools consolidates all the
messages and other resources (such as GUI screens, pictures) automatically, and puts them
in separate files
The following checklist may help in doing localization testing.
All the messages, documents, pictures, screens are localized to reflect the native
users and the conventions of the country, locale, and language.
Sorting and case conversions are right as per language convention. For example,
sort order in English is A, B, C, D, E, whereas in Spanish the sort order is A, B,
C, CH, D, E. See Figure 9.8.
Font sizes and hot keys are working correctly in the translated messages,
documents, and screens.
Filtering and searching capabilities of the software work as per the language and
locale conventions.
Addresses, phone numbers, numbers, and postal codes in the localized software
are as per the conventions of the target user.
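The Spanish sort-order check in the list above can be illustrated with a custom collation key. A minimal sketch (treating "ch" as a single letter that sorts after "c" is a simplification of traditional Spanish collation, used here only for illustration):

```python
# Build a comparison key that treats the digraph "ch" as one letter
# sorting after every plain "c..." sequence.
def spanish_sort_key(word):
    key, i, w = [], 0, word.lower()
    while i < len(w):
        if w[i:i+2] == "ch":
            key.append("c\uffff")     # sorts after any plain 'c' sequence
            i += 2
        else:
            key.append(w[i])
            i += 1
    return key

words = ["dado", "chico", "cielo", "cosa"]
# Traditional Spanish order puts "chico" after "cosa" but before "dado".
print(sorted(words, key=spanish_sort_key))   # ['cielo', 'cosa', 'chico', 'dado']
```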
ADHOC TESTING
Adhoc testing can be performed when there is limited time to do exhaustive testing, and it is
usually performed after the formal test execution. Adhoc testing will be effective only if the
tester has an in-depth understanding of the System Under Test.
Forms of Adhoc Testing :
Buddy Testing: Two buddies, one from development team and one from test team mutually
work on identifying defects in the same module. Buddy testing helps the testers develop
better test cases while development team can also make design changes early. This kind of
testing happens usually after completing the unit testing.
Pair Testing: Two testers are assigned the same modules and they share ideas and work on
the same systems to find defects. One tester executes the tests while another tester records
the notes on their findings.
Monkey Testing: Testing is performed randomly without any test cases in order to break the
system.
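Monkey testing can be sketched as a random-input loop that checks only that the system never crashes (the quantity parser here is a hypothetical system under test):

```python
import random
import string

# Hypothetical unit: should reject garbage rather than crash.
def parse_quantity(text):
    text = text.strip()
    if not text.isdigit():
        return None            # reject instead of raising
    return int(text)

random.seed(7)                 # reproducible monkey run
for _ in range(1000):
    junk = "".join(random.choices(string.printable, k=random.randint(0, 12)))
    try:
        parse_quantity(junk)
    except Exception as exc:   # any uncaught exception is a defect
        raise AssertionError(f"crashed on {junk!r}") from exc
print("monkey run survived 1000 random inputs")
```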
Adhoc Testing can be made more effective by
Preparation
Creating a Rough Idea
Divide and Rule
Targeting Critical Functionalities
Using Tools: Documenting the findings
ACCEPTANCE TESTING
Acceptance testing is done by the customer or by the representative of the customer to check
whether the product is ready for use in the real-life environment.
Acceptance testing is a phase after system testing that is normally done by the customers or
representatives of the customer. The customer defines a set of test cases that will be executed
to qualify and accept the product. These test cases are executed by the customers themselves
to quickly judge the quality of the product before deciding to buy the product.
Sometimes, acceptance test cases are developed jointly by the customers and product
organization. In this case, the product organization will have complete understanding of
what will be tested by the customer for acceptance testing. In such cases, the product
organization tests those test cases in advance as part of the system test cycle itself to avoid
any later surprises when those test cases are executed by the customer.
Acceptance test cases failing in a customer site may cause the product to be rejected and
may mean financial loss or may mean rework of product involving effort and time.
Acceptance Criteria
Acceptance criteria: product acceptance
During the requirements phase, each requirement is associated with acceptance criteria. It is
possible that one or more requirements may be mapped to form acceptance criteria (for
example, all high priority requirements should pass 100%). Whenever there are changes to
requirements, the acceptance criteria are accordingly modified and maintained.
Service level agreements (SLA) can become part of acceptance criteria. Service level
agreements are generally part of a contract signed by the customer and product organization.
The important contract items are taken and verified as part of acceptance testing. For
example, time limits to resolve those defects can be mentioned as part of the SLA, such as:
All major defects that come up during first three months of deployment need to be
fixed free of cost;
Downtime of the implemented system should be less than 0.1%;
All major defects are to be fixed within 48 hours of reporting.
Defects reported during acceptance tests could be of different priorities. Test teams help
acceptance test team report defects. Showstopper and high-priority defects are necessarily
fixed before software is released.
In case major defects are identified during acceptance testing, then there is a risk of
missing the release date. When the defect fixes point to scope or requirement changes, then
it may either result in the extension of the release date to include the feature in the current
release or get postponed to subsequent releases.
ALPHA TESTING
• Alpha testing takes place at the developer's site, carried out by internal teams
before release to external customers. It is performed without direct involvement
of the development team itself.
• A cross-section of potential users and members of the developer's organization are
invited to use the software. Developers observe the users and note problems.
BETA TESTING
Developing a product involves a significant amount of effort and time. Delays in product
releases and the product not meeting the customer requirements are common. A product
rejected by the customer after delivery means a huge loss to the organization. There are
many reasons for a product not meeting the customer requirements. They are as follows.
1. There are implicit and explicit requirements for the product. A product not meeting
the implicit requirements (for example, ease of use) may mean rejection by the
customer.
2. Since product development involves a good amount of time, some of the
requirements given at the beginning of the project would have become obsolete or
would have changed by the time the product is delivered.
3. The requirements are high-level statements with a high degree of ambiguity.
Picking up the ambiguous areas and not resolving them with the customer results in
rejection of the product.
4. The understanding of the requirements may be correct but their implementation
could be wrong.
5. Lack of usability and documentation makes it difficult for the customer to use the
product and may also result in rejection.
The list above is only a sub-set of the reasons and there could be many more reasons for
rejection. To reduce the risk, which is the objective of system testing, periodic feedback
is obtained on the product. One of the mechanisms used is sending the product that is
under test to the customers and receiving the feedback. This is called beta testing.
During the entire duration of beta testing, there are various activities that are planned and
executed according to a specific schedule. This is called a beta program.
1. Collecting the list of customers and their beta testing requirements along with their
expectations on the product.
2. Working out a beta program schedule and informing the customers.
3. Sending some documents for reading in advance and training the customer on
product usage.
4. Testing the product to ensure it meets the "beta testing entry criteria."
5. Sending the beta product (of known quality) to the customers and enabling them to
carry out their own testing.
6. Collecting the feedback periodically from the customers and prioritizing the defects
for fixing.
7. Responding to customers’ feedback with product fixes or documentation changes
and closing the communication loop with the customers in a timely fashion.
8. Analyzing and concluding whether the beta program met the exit criteria.
9. Communicating the progress and action items to customers and formally closing the
beta program.
10. Incorporating the appropriate changes in the product.
One other challenge in beta programs is the choice of the number of beta customers. If the
number chosen is too few, the product may not get a sufficient diversity of test scenarios
and test cases. If too many beta customers are chosen, the engineering organization may
not be able to cope with fixing the reported defects in time. The number of beta customers
should therefore strike a delicate balance between providing a diversity of product usage
scenarios and the manageability of handling the reported defects effectively.
Finally, the success of a beta program depends heavily on the willingness of the beta
customers to exercise the product in various ways, knowing fully well that there may be
defects. Only customers who can be thus motivated and are willing to play the role of trusted
partners in the evolution of the product should participate in the beta program.
ACCESSIBILITY TESTING
Verifying the product usability for physically challenged users is called accessibility
testing.
A large number of people have partial or complete vision, hearing, or mobility
impairments. Product usability that does not take their requirements into account results
in lack of acceptance. For such users, alternative methods of using the product have to be
provided. Several tools are available to help with these alternatives; they are generally
referred to as accessibility tools or assistive technologies.
Accessibility testing involves testing these alternative methods of using the product and
testing the product along with accessibility tools. Accessibility is a subset of usability and
should be included as part of usability test planning.
The keyboard is the most complex device for vision- and mobility-impaired users, and
hence has received plenty of accessibility attention. Some of the accessibility
improvements were made in hardware and some in the operating system.
Operating system vendors also added further keyboard improvements, such as sticky
keys, filter keys, toggle key sounds, sound keys, and arrow keys for the mouse.
Sticky keys One of the most complex key sequences for vision-impaired and mobility-
impaired users is <CTRL><ALT><DEL>, since it requires three keys to be held down
together. With sticky keys enabled, a modifier key such as <CTRL> "sticks" when pressed,
so the sequence can be entered one key at a time.
Filter keys When keys are pressed for more than a particular duration, they are assumed to
be repeated.
Toggle key sound When toggle keys are enabled, the information typed may be different
from what the user desires.
Sound keys To help vision-impaired users, there is one more mechanism that pronounces
each character as and when they are hit on the keyboard.
Arrow keys to control mouse Mobility-impaired users have problems moving the mouse.
By enabling this feature, such users will be able to use the keyboard arrow keys for mouse
movements. The two buttons of the mouse and their operations too can be directed from the
keyboard.
Screen accessibility
Some accessibility features that enhance usability using the screen are as follows.
Visual sound Visual sound is the "wave form" or "graph form" of the sound. These visual
effects inform the user, via the screen, of events that happen on the system.
Enabling captions for multimedia All multimedia speech and sound can be enabled with
text equivalents, and they are displayed on the screen when speech and sound are played.
Soft keyboard Some mobility-impaired and vision-impaired users find it easier to use
pointing devices instead of the keyboard; a soft keyboard displays the keys on screen so
they can be selected with a pointing device.
Easy reading with high contrast A toggle option, generally provided by the operating
system, switches to a high-contrast mode. This mode uses strongly contrasting colors and
readable font sizes for all the menus on the screen.
Product Accessibility
Sample requirement #1: Text equivalents have to be provided for audio, video, and picture
images.
When users use tools such as a narrator, the associated text is read and produced in audio
form, benefiting vision-impaired users. Hence text equivalents for audio (captions) and
audio descriptions for pictures and visuals become an important accessibility
requirement.
Sample requirement #2: Documents and fields should be organized so that they can be read
without requiring a particular screen resolution or particular templates (known as style
sheets).
Sample requirement #3: User interfaces should be designed so that all information conveyed
with color is also available without color.
Sample requirement #4: Avoid text that blinks or flashes. Different people read at different
speeds; people with below-average reading speed may find blinking and flashing text
irritating, as it further impacts reading speed.
Sample requirement #5: Reduce physical movement requirements for the users when
designing the interface and allow adequate time for user responses.
When designing the user interfaces, adequate care has to be taken to ensure that the
physical movement required to use the product is minimized to assist mobility-impaired
users.
Usability Testing
Usability testing is an important aspect of quality control. It is one of the procedures we can
use as testers to evaluate our product to ensure that it meets user requirements on a
fundamental level.
Usability is a quality factor that is related to the effort needed to learn, operate,
prepare input, and interpret the output of a computer program.
Assessment Usability Testing
Assessment tests are usually conducted after a high-level design for the software has been
developed. Findings from the exploratory tests are expanded upon; details are filled in. For
these types of tests a functioning prototype should be available, and testers should be able to
evaluate how well a user is able to actually perform realistic tasks.
(i) number of tasks correctly completed/unit time;
(ii) number of help references/unit time;
(iii) number of errors (and error types);
(iv) error recovery time.
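The measurements above can be computed from a simple observation log. The following is a minimal sketch; the event names and the sample session are illustrative assumptions, not from any particular tool.

```python
# Hypothetical sketch: computing usability measurements from a logged
# observation session. Events are (kind, minute-mark) tuples recorded
# by the observer while watching the user.

def usability_metrics(events, session_minutes):
    """Derive the four assessment-test measurements from the event log."""
    tasks_done = sum(1 for kind, _ in events if kind == "task_completed")
    help_refs = sum(1 for kind, _ in events if kind == "help_reference")
    errors = [(kind, t) for kind, t in events if kind == "error"]
    recoveries = [t for kind, t in events if kind == "recovered"]
    # Pair each error with the next recovery to estimate recovery time.
    recovery_times = [r - t for (_, t), r in zip(errors, recoveries)]
    return {
        "tasks_per_minute": tasks_done / session_minutes,
        "help_refs_per_minute": help_refs / session_minutes,
        "error_count": len(errors),
        "avg_recovery_minutes": (sum(recovery_times) / len(recovery_times))
                                if recovery_times else 0.0,
    }

session = [("task_completed", 5), ("error", 7), ("recovered", 9),
           ("help_reference", 12), ("task_completed", 15)]
print(usability_metrics(session, session_minutes=20))
```

In practice these numbers come from video review or instrumented prototypes; the sketch only shows how the raw counts turn into the per-unit-time measures listed above.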
Validation Usability Testing
A principal objective of validation usability testing is to evaluate how the product compares
to some predetermined usability standard or benchmark. Testers want to determine whether
the software meets the standards prior to release; if it does not, the reasons for this need to be
established.
Other objectives of validation usability testing include:
1. Initiating usability standards.
2. Evaluating how well user-oriented components of a software system work together.
3. Ensuring that any show-stoppers or fatal defects are not present. If the software is new
and such a defect is revealed by the tests, the development organization may decide to delay
the release of the software
Usability Testing: Resource Requirements
• A usability testing laboratory
• Trained personnel, responsible for:
– selecting the user participants;
– designing, administering, and monitoring the tests;
– developing forms needed to collect relevant data from user participants;
– analyzing, organizing, and distributing data and results to relevant parties;
– making recommendations to development staff and management.
• Usability test planning
Usability Tests and Measurements
Tests designed to measure usability are in some ways more complex than those required for
traditional software testing.
For example a usability test for a word processing program might consist of tasks such as:
(i) open an existing document;
(ii) add text to the document;
(iii) modify the old text;
(iv) change the margins in selected sections;
(v) change the font size in selected sections;
(vi) print the document;
(vii) save the document.
As the user performs these tasks she will be observed by the testers and video cameras. Time
periods for task completion and the performance of the system will be observed and
recorded.
Many of the usability test results will be recorded as subjective evaluations of the software.
Users will be asked to complete questionnaires that state preferences and ranking with
respect to features such as:
(i) usefulness of the software;
(ii) how well it met expectations;
(iii) ease of use;
(iv) ease of learning;
(v) usefulness and availability of help facilities.
Role: Human factors specialist
Responsibilities: reviewing the screens and other artifacts for usability; ensuring
consistency across multiple products.
Role: Graphic designer
Responsibilities: creating icons, graphic images, and so on, needed for user interfaces;
cross-checking the icons and graphic messages in the contexts they are used and
verifying whether those images communicate the right meaning.
Testing OO systems
Testing OO systems broadly covers the following topics.
1. Unit testing a class
2. Putting classes to work together (integration testing of classes)
3. System testing
4. Regression testing
5. Tools for testing OO systems
Unit Testing a Set of Classes
As a class is built, and before it is "published" for use by others, it has to be tested to see
if it is ready for use. Classes are the building blocks of an entire OO system and hence
have to be unit tested.
The Alpha-Omega method unit tests a class through the following steps.
1. Test the constructor methods first.
2. Test the get methods or accessor methods.
3. Test the methods that modify the object variables.
4. Finally, the object has to be destroyed and when the object is destroyed, no further
accidental access should be possible.
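The Alpha-Omega ordering above can be sketched with Python's unittest; the Account class and its methods are assumptions made purely for illustration.

```python
import unittest

class Account:
    """Hypothetical class under test (an assumption for this example)."""
    def __init__(self, owner, balance=0):
        self.owner = owner
        self.balance = balance

    def get_balance(self):               # accessor ("get") method
        return self.balance

    def deposit(self, amount):           # method that modifies object state
        if amount <= 0:
            raise ValueError("amount must be positive")
        self.balance += amount

class TestAccountAlphaOmega(unittest.TestCase):
    # unittest runs methods in alphabetical order, which here matches
    # the Alpha-Omega sequence: constructor, accessors, modifiers, destruction.
    def test_1_constructor(self):
        acct = Account("alice", 100)
        self.assertEqual(acct.owner, "alice")

    def test_2_accessors(self):
        self.assertEqual(Account("bob").get_balance(), 0)

    def test_3_modifiers(self):
        acct = Account("carol")
        acct.deposit(50)
        self.assertEqual(acct.get_balance(), 50)
        with self.assertRaises(ValueError):
            acct.deposit(-1)

    def test_4_destruction(self):
        acct = Account("dave")
        del acct                          # destroy the object
        with self.assertRaises(NameError):
            acct.get_balance()            # no accidental access afterwards

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestAccountAlphaOmega)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The point is the ordering of the four test methods, not the Account class itself; any class with constructors, accessors, and modifiers could be substituted.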
There are two other forms of classes and inheritance that pose special challenges for
testing—multiple inheritance and abstract classes.
Salient aspects of OO systems and their testing implications are as follows.
• Object orientation: tests need to integrate data and methods more tightly.
• Object reuse and parallel development of objects: these need more frequent
integration tests and regression tests; integration testing and unit testing are not as
clearly separated as in a procedure-oriented language.
• Interfaces between objects: errors here are likely to be more common in OO systems,
and hence thorough interface testing is needed.
Configuration Testing
• Configuration testing is the process of checking the operation of the software under
test with all the various types of hardware.
Compatibility Testing
Testing done to ensure that the product features work consistently with different
infrastructure components is called compatibility testing. Software compatibility testing
means checking that your software interacts with and shares information correctly with other
software.
This interaction could occur between two programs simultaneously running on the same
computer or even on different computers connected through the Internet thousands of miles
apart.
The compatibility testing of a product involving parts of itself can be further classified
into two types.
1. Backward compatibility Testing : The testing that ensures the current version of
the product continues to work with the older versions of the same product is called
backward compatibility testing.
2. Forward compatibility testing: There are some provisions for the product to work
with later versions of the product and other infrastructure components, keeping
future requirements in mind. Such requirements are tested as part of forward
compatibility testing.
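Backward compatibility testing can be illustrated with a small sketch. The save-file format and field names below are hypothetical assumptions: version 2 of the format added an "email" field, and the current loader must still read version 1 files written by the older product.

```python
import json

def load_profile(raw):
    """Current-version loader that must also accept older-format files."""
    data = json.loads(raw)
    version = data.get("version", 1)
    profile = {"name": data["name"]}
    # The "email" field was introduced in format version 2;
    # default it for files written by older versions of the product.
    profile["email"] = data.get("email", "") if version >= 2 else ""
    return profile

# Backward compatibility check: a v1 file (no "email") must still load.
old_file = '{"version": 1, "name": "alice"}'
new_file = '{"version": 2, "name": "bob", "email": "bob@example.com"}'
assert load_profile(old_file) == {"name": "alice", "email": ""}
assert load_profile(new_file)["email"] == "bob@example.com"
```

A real backward-compatibility suite would keep actual files produced by each shipped version and re-run them against every new release.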
To begin the task of compatibility testing, the tester needs to equivalence-partition all the
possible software combinations into the smallest effective set that verifies that the
software interacts properly with other software.
In compatibility testing, a new application may need to be tested on multiple platforms
and with multiple applications.
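The equivalence-partitioning idea can be sketched as follows; the platform and browser names are assumptions for the example, and the partitioning rule shown (cover every value at least once) is only one of several possible reduction strategies.

```python
from itertools import product

platforms = ["Windows", "macOS", "Linux"]
browsers = ["Chrome", "Firefox", "Edge"]

# The full cross-product: every platform paired with every browser.
all_combos = list(product(platforms, browsers))      # 9 combinations

# One simple partitioning rule: ensure every platform and every browser
# appears at least once, instead of testing every pairing.
representative = list(zip(platforms, browsers))      # 3 combinations

print(len(all_combos), "combinations reduced to", len(representative))
```

With more configuration dimensions (OS versions, plug-ins, screen resolutions) the cross-product explodes, which is why reducing it to an effective representative set is the first task of compatibility test planning.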
Standards and Guidelines
There are two types of standards:
•High level
•Low level
High-level standards are the ones that guide the product’s general compliance, its look and
feel, its supported features, and so on.
Low-level standards are the nitty-gritty details, such as the file formats and the network
communications protocols.
Data Sharing Compatibility
The sharing of data among applications is what really gives software its power. A
well-written program that supports and adheres to published standards, and that lets
users easily transfer data to and from other software, makes for a highly compatible
product.
Testing the Documentation
Treat the documentation like a user: read it carefully, follow every step, examine every
figure, and try every example. With this approach, the tester will find bugs both in the
software and in the documentation.
Website Testing
When testing a Web site, the tester first creates a state table, treating each page as a
different state with the hyperlinks as the lines connecting them. A completed state map
gives a better view of the overall task.
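The state-table idea can be sketched as a small reachability check over a page/hyperlink map; the site map below is a made-up example.

```python
# Each page is a state; the list of hyperlinks on it gives the transitions.
site_map = {
    "home":     ["products", "about"],
    "products": ["home", "checkout"],
    "about":    ["home"],
    "checkout": ["home"],
}

def reachable(site, start):
    """Pages reachable from `start` by following hyperlinks (depth-first)."""
    seen, stack = set(), [start]
    while stack:
        page = stack.pop()
        if page not in seen:
            seen.add(page)
            stack.extend(site.get(page, []))
    return seen

# Every page in the map should be reachable from the home page;
# otherwise the state table has exposed an orphaned page.
assert reachable(site_map, "home") == set(site_map)
```

The same map can drive link-checking: any transition pointing to a page not in the table is a broken hyperlink.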
Gray-Box Testing
Gray-box testing is a mixture of black-box and white-box testing: test the software as
a black box, but supplement the work by taking a peek (not a full look, as in white-box
testing) at what makes the software work.
HTML and Web pages can be tested as a gray box
White-Box Testing :
Features of a website tested with a white-box approach are
Dynamic Content
Database-Driven Web Pages
Programmatically Created Web Pages
Server Performance and Loading
Security
Configuration and Compatibility Testing :
Configuration testing is the process of checking the operation of the software with
various types of hardware and software platforms and their different settings.
Compatibility testing is checking the software’s operation with other software
The possible hardware and software configurations that could affect the operation or
appearance of a web site include:
•Hardware Platform
•Browser Software and Version
•Browser Plug-Ins
•Browser Options
•Video Resolution and Color Depth
•Text Size
•Modem Speeds
Usability Testing :
Following and testing a few basic rules can help make Web sites more usable. Respected
experts on Web site design and usability have performed extensive research on Web site
usability and published such rules.
Why is it important to design test harness for testing?(Nov/Dec 2017), (Apr/May 2017),
(Apr/May 2019), (Nov/Dec 2019).
A test harness enables the automation of tests. It refers to the system test drivers and other
supporting tools required to execute tests. It provides stubs and drivers, which are small
programs that interact with the software under test.
Test harnesses execute tests using a test library and generate a report. They require that
the test scripts be designed to handle different test scenarios and test data.
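The stub-and-driver idea can be sketched in Python; all names below (the rates module, the conversion function, the test cases) are illustrative assumptions, not from any particular tool.

```python
def lookup_rate(currency):
    """Stub: stands in for a currency-rates module not yet available."""
    return {"USD": 1.0, "EUR": 0.9}.get(currency, 0.0)

def convert(amount, currency, rate_source=lookup_rate):
    """Unit under test: converts an amount using the supplied rate source."""
    return round(amount * rate_source(currency), 2)

def driver():
    """Driver: feeds test data to the unit and records pass/fail results."""
    cases = [((100, "USD"), 100.0), ((100, "EUR"), 90.0), ((5, "XYZ"), 0.0)]
    results = []
    for (amount, currency), expected in cases:
        actual = convert(amount, currency)
        results.append((amount, currency, actual == expected))
    return results

# Report: the harness runs every case and prints the outcome.
for amount, currency, passed in driver():
    print(f"{amount} {currency}: {'PASS' if passed else 'FAIL'}")
```

The stub lets the unit be exercised before its dependency exists, and the driver supplies inputs and collects results, which is exactly the role a test harness plays at unit and integration levels.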