MODULE 4
Software Testing
Development Testing
Test Driven Development
Release Testing
User Testing
**************************************************************
Testing is intended to show that a program does what it is intended to do and to discover
program defects before it is put into use. When you test software, you execute a program using
artificial data. You check the results of the test run for errors, anomalies, or information about the
program's non-functional attributes. Testing can reveal the presence of errors, but NOT their
absence. Testing is part of a more general verification and validation process, which also
includes static validation techniques.
Goals of software testing:
1. To demonstrate to the developer and the customer that the software meets its requirements.
o Leads to validation testing: you expect the system to perform correctly using a given set of test cases that reflect the system's expected use.
o A successful test shows that the system operates as intended.
2. To discover situations in which the behavior of the software is incorrect, undesirable, or does not conform to its specification.
o Leads to defect testing: the test cases are designed to expose defects; the test cases can be deliberately obscure and need not reflect how the system is normally used.
o A successful test is a test that makes the system perform incorrectly and so exposes a defect in the system.
Testing is part of a broader process of software verification and validation (V & V).
The goal of V & V is to establish confidence that the system is good enough for its intended
use, which depends on:
1. Software purpose: the level of confidence depends on how critical the software is to an
organization.
2. User expectations: users may have low expectations of certain kinds of software.
3. Marketing environment: getting a product to market early may be more important than
finding defects in the program.
Software inspections, in which people examine the source code without executing it, can also be used as part of V & V. During testing, errors can mask (hide) other errors. Because inspection is a static process, you don't have to be concerned with interactions between errors.
Inspections and testing are complementary and not opposing verification techniques. Both
should be used during the V & V process. Inspections can check conformance with a
specification but not conformance with the customer's real requirements. Inspections cannot
check non-functional characteristics such as performance, usability, etc.
The stages of testing in a commercial software process are:
1. Development testing: the system is tested during development to discover bugs and
defects.
2. Release testing: a separate testing team test a complete version of the system before it is
released to users.
3. User testing: users or potential users of a system test the system in their own
environment.
Development testing
Development testing includes all testing activities that are carried out by the team developing the
system:
1. Unit testing: individual program units or object classes are tested; should focus on
testing the functionality of objects or methods.
2. Component testing: several individual units are integrated to create composite
components; should focus on testing component interfaces.
3. System testing: some or all of the components in a system are integrated and the system
is tested as a whole; should focus on testing component interactions.
Unit testing
Unit testing is the process of testing individual components in isolation. It is a defect testing
process. Units may be individual functions or methods within an object, or object classes with
several attributes and methods.
When testing object classes, tests should be designed to provide coverage of all of the features
of the object: you should test all operations associated with the object, set and check the value of
all attributes, and put the object into all possible states.
Whenever possible, unit testing should be automated. An automated unit test has three parts:
1. A setup part, where you initialize the system with the test case, namely the inputs and
expected outputs.
2. A call part, where you call the object or method to be tested.
3. An assertion part, where you compare the result of the call with the expected result. If the
assertion evaluates to true, the test has been successful; if false, it has failed.
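As a minimal illustration (not from the text), the three parts might look like this in Python's unittest framework, using a simple discounted_price function invented for the example:

```python
import unittest

def discounted_price(price, discount_percent):
    """Function under test: apply a percentage discount to a price."""
    return price * (1 - discount_percent / 100)

class DiscountTest(unittest.TestCase):
    def test_ten_percent_discount(self):
        # Setup part: initialize the test case inputs and expected output.
        price, discount, expected = 200.0, 10, 180.0
        # Call part: call the method to be tested.
        result = discounted_price(price, discount)
        # Assertion part: compare the result with the expected result.
        self.assertAlmostEqual(result, expected)

if __name__ == "__main__":
    unittest.main()
```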
Testing is expensive and time consuming, so it is important that you choose effective unit
test cases. Two kinds of test case are needed:
The test cases should show that, when used as expected, the component that you are
testing does what it is supposed to do.
If there are defects in the component, these should be revealed by test cases.
The first of these should reflect normal operation of a program and should show that the
component works as expected.
The other kind of test case should be based on testing experience of where common problems
arise. It should use abnormal inputs to check that these are correctly processed and do not
crash the component. Two strategies can be effective in helping you choose test cases:
1. Partition testing, where you identify groups of inputs that have common characteristics
and should be processed in the same way. You should choose tests from within each of
these groups.
2. Guideline-based testing, where you use testing guidelines to choose test cases. These
guidelines reflect previous experience of the kinds of errors that programmers often make
when developing components.
The input data and output results of a program often fall into a number of different
classes with common characteristics.
Examples of these classes are positive numbers, negative numbers, and menu selections.
That is, if you test a program that does a computation and requires two positive numbers,
then you would expect the program to behave in the same way for all positive numbers.
Because of this equivalent behaviour, these classes are sometimes called equivalence
partitions or domains.
One systematic approach to test case design is based on identifying all input and output
partitions for a system or component.
Test cases are designed so that the inputs or outputs lie within these partitions.
Partition testing can be used to design test cases for both systems and components.
Equivalence partitioning can be pictured as a mapping between two sets: the set of all
possible inputs to the program being tested, and the set of its possible outputs.
A program being tested should process all of the members of an input equivalence
partition in the same way.
Output equivalence partitions are partitions within which all of the outputs have
something in common.
Sometimes there is a 1:1 mapping between input and output equivalence partitions.
Invalid inputs form partitions of their own, which map to the exceptions that the program
raises in response to them.
A good rule of thumb for test case selection is to choose test cases on the boundaries of
the partitions, plus cases close to the midpoint of the partition.
The reason for this is that designers and programmers tend to consider typical values of
inputs when developing a system.
Boundary values are often atypical (e.g., zero may behave differently from other non-negative
numbers) so are sometimes overlooked by developers. Program failures often
occur when processing these atypical values.
You identify partitions by using the program specification or user documentation, and from
experience, where you predict the classes of input value that are likely to detect errors.
For example, say a program specification states that the program accepts 4 to 8 inputs
which are five-digit integers greater than 10,000.
You use this information to identify the input partitions and possible test input values.
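A sketch of the partitions and test values this example suggests (the specific boundary and midpoint choices below are illustrative):

```python
# Partitions and test values for the example: a program that accepts
# 4 to 8 inputs, each a five-digit integer greater than 10,000.

# Partitions on the NUMBER of inputs: fewer than 4 (invalid),
# 4 to 8 (valid), more than 8 (invalid).
counts_to_test = [3, 4, 6, 8, 9]        # boundaries plus a midpoint

# Partitions on each input VALUE: 10,000 or below (invalid),
# 10,001 to 99,999 (valid), 100,000 and above (invalid).
values_to_test = [10_000, 10_001, 55_000, 99_999, 100_000]
```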
When you use the specification of a system to identify equivalence partitions, this is
called ‘black-box testing’. Here, you don’t need any knowledge of how the system works.
However, it may be helpful to supplement the black-box tests with ‘white-box testing’,
where you look at the code of the program to find other possible tests.
For example, your code may include exceptions to handle incorrect inputs. You can use
this knowledge to identify ‘exception partitions’—different ranges where the same
exception handling should be applied.
You can also use testing guidelines to help choose test cases.
Guidelines encapsulate knowledge of what kinds of test cases are effective for
discovering errors.
For example, when you are testing programs with sequences, arrays, or lists, guidelines
that could help reveal defects include:
Derive tests so that the first, middle, and last elements of the sequence are accessed. This
approach reveals problems at partition boundaries.
Test with sequences that have only a single value, and with sequences of zero length.
Use sequences of different sizes in different tests.
Choose inputs that force the system to generate all error messages.
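As an illustrative sketch, here is how these guidelines might be applied to a simple sequence-processing function, with Python's built-in max standing in for the component under test:

```python
import unittest

class SequenceGuidelineTest(unittest.TestCase):
    """Guideline-based tests for a sequence-processing function."""

    def test_first_middle_last_accessed(self):
        # Derive tests so that first, middle, and last elements matter.
        self.assertEqual(max([9, 1, 2]), 9)   # largest element first
        self.assertEqual(max([1, 9, 2]), 9)   # largest element in the middle
        self.assertEqual(max([1, 2, 9]), 9)   # largest element last

    def test_single_element_sequence(self):
        # Test with a sequence that has only a single value.
        self.assertEqual(max([7]), 7)

    def test_empty_sequence_raises(self):
        # Force the component to generate its error response.
        with self.assertRaises(ValueError):
            max([])

if __name__ == "__main__":
    unittest.main()
```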
Component Testing
Software components are often composite components that are made up of several interacting
objects. You access the functionality of these objects through the defined component interface.
Testing composite components should therefore focus on showing that the component interface
behaves according to its specification. Objectives are to detect faults due to interface errors or
invalid assumptions about interfaces. Interface types include parameter interfaces (data passed
from one component to another), shared memory interfaces (a block of memory shared between
components), procedural interfaces (one component encapsulates procedures that can be called
by other components), and message passing interfaces (one component requests a service from
another by passing a message to it).
Interface errors:
1. Interface misuse: a calling component calls another component and makes an error in its
use of its interface, e.g., calling with parameters in the wrong order.
2. Interface misunderstanding: a calling component embeds assumptions about the
behavior of the called component which are incorrect.
3. Timing errors: the called and the calling component operate at different speeds and out-
of-date information is accessed.
General guidelines for interface testing include:
Design tests so that parameters to a called procedure are at the extreme ends of their
ranges.
Always test pointer parameters with null pointers.
Design tests which cause the component to fail.
Use stress testing in message passing systems.
In shared memory systems, vary the order in which components are activated.
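A brief sketch of these guidelines in practice (the BoundedStack component and its interface are hypothetical, invented for this example):

```python
import unittest

class BoundedStack:
    """Hypothetical component with a procedural interface."""
    def __init__(self, capacity):
        if capacity is None or capacity < 0:
            raise ValueError("capacity must be a non-negative integer")
        self._items, self._capacity = [], capacity

    def push(self, item):
        if len(self._items) >= self._capacity:
            raise OverflowError("stack is full")
        self._items.append(item)

class InterfaceTest(unittest.TestCase):
    def test_extreme_parameter_values(self):
        # Parameters at the extreme ends of their ranges.
        BoundedStack(0)
        BoundedStack(10**9)

    def test_null_parameter(self):
        # Always test "pointer" parameters with null (None in Python).
        with self.assertRaises(ValueError):
            BoundedStack(None)

    def test_designed_to_fail(self):
        # Design tests that cause the component to fail.
        stack = BoundedStack(1)
        stack.push("a")
        with self.assertRaises(OverflowError):
            stack.push("b")

if __name__ == "__main__":
    unittest.main()
```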
System testing
System testing during development involves integrating components to create a version
of the system and then testing the integrated system. The focus in system testing
is testing the interactions between components. System testing checks that components
are compatible, interact correctly, and transfer the right data at the right time across their
interfaces. System testing tests the emergent behavior of a system.
During system testing, reusable components that have been separately developed and off-
the-shelf systems may be integrated with newly developed components. The complete
system is then tested. Components developed by different team members or sub-teams
may be integrated at this stage. System testing is a collective rather than an individual
process.
The use cases developed to identify system interactions can be used as a basis for
system testing. Each use case usually involves several system components, so testing the
Information Science & Engineering, SCEM, Adiyar Page 9
Software Engineering-18CS35
use case forces these interactions to occur. The sequence diagrams associated with the
use case document the components and their interactions that are being tested.
Test-driven development
Test-driven development (TDD) is an approach to program development in which
you inter-leave testing and code development. Tests are written before code and
'passing' the tests is the critical driver of development. This is a differentiating feature of
TDD versus writing unit tests after the code is written: it makes the developer focus on
the requirements before writing the code.
The code is developed incrementally, along with a test for that increment. You don't
move on to the next increment until the code that you have developed passes its test.
TDD was introduced as part of agile methods such as Extreme Programming. However, it
can also be used in plan-driven development processes.
The goal of TDD isn't to ensure we write tests by writing them first, but to produce
working software that achieves a targeted set of requirements using simple,
maintainable solutions. To achieve this goal, TDD provides strategies for keeping code
working, simple, relevant, and free of duplication.
Benefits of TDD include:
Code coverage: every code segment that you write has at least one associated test, so
you can be confident that all of the code in the system has actually been executed.
Regression testing: a regression test suite is developed incrementally as a program is
developed.
Simplified debugging: when a test fails, it should be obvious where the problem lies;
the newly written code needs to be checked and modified.
System documentation: the tests themselves are a form of documentation that
describes what the code should be doing.
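A minimal sketch of one TDD increment (the is_leap_year function is a hypothetical example, not from the text): the test is written first and drives the implementation that follows:

```python
import unittest

# TDD: the test for the increment is written BEFORE the code.
class LeapYearTest(unittest.TestCase):
    def test_leap_year_rules(self):
        self.assertTrue(is_leap_year(2000))    # divisible by 400: leap
        self.assertFalse(is_leap_year(1900))   # divisible by 100 only: not leap
        self.assertTrue(is_leap_year(2024))    # divisible by 4: leap
        self.assertFalse(is_leap_year(2023))   # ordinary year: not leap

# Only now is the code written, incrementally, until the test passes.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main()
```

You don't move on to the next increment until this test passes; the test then remains in the regression suite.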
Release testing
Release testing is the process of testing a particular release of a system that is intended for use
outside of the development team. The primary goal of the release testing process is to convince
the customer of the system that it is good enough for use. Release testing, therefore, has to
show that the system delivers its specified functionality, performance and dependability, and that
it does not fail during normal use. Release testing is usually a black-box testing process
where tests are only derived from the system specification.
Release testing is a form of system testing. Important differences:
A separate team that has not been involved in the system development should be
responsible for release testing.
System testing by the development team should focus on discovering bugs in the
system (defect testing). The objective of release testing is to check that the system
meets its requirements and is good enough for external use (validation testing).
Requirements-based testing involves examining each requirement and developing a test or tests
for it. It is validation rather than defect testing: you are trying to demonstrate that the system has
properly implemented its requirements.
Example:
If a patient is known to be allergic to any particular medication, then prescription of that
medication shall result in a warning message being issued to the system user. If a
prescriber chooses to ignore an allergy warning, they shall provide a reason why this has
been ignored.
To check if these requirements have been satisfied, you may need to develop several
related tests:
1. Set up a patient record with no known allergies. Prescribe medication for allergies that
are known to exist. Check that a warning message is not issued by the system.
2. Set up a patient record with a known allergy. Prescribe the medication that the patient
is allergic to, and check that the warning is issued by the system.
3. Set up a patient record in which allergies to two or more drugs are recorded. Prescribe
both of these drugs separately and check that the correct warning for each drug is issued.
4. Prescribe two drugs that the patient is allergic to. Check that two warnings are correctly
issued.
5. Prescribe a drug that issues a warning and overrule that warning. Check that the system
requires the user to provide information explaining why the warning was overruled.
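A sketch of how the first two of these tests might be automated (the PatientRecord class and prescribe function are hypothetical stand-ins, not part of any real patient-management system):

```python
import unittest

class PatientRecord:
    """Hypothetical patient record used to illustrate requirements-based tests."""
    def __init__(self, allergies=()):
        self.allergies = set(allergies)

def prescribe(record, medication):
    """Hypothetical prescribing operation: returns a warning if allergic."""
    if medication in record.allergies:
        return f"WARNING: patient is allergic to {medication}"
    return None

class AllergyWarningTest(unittest.TestCase):
    def test_no_warning_when_no_known_allergies(self):
        # Test 1: record with no known allergies -> no warning issued.
        record = PatientRecord()
        self.assertIsNone(prescribe(record, "penicillin"))

    def test_warning_issued_for_known_allergy(self):
        # Test 2: record with a known allergy -> warning is issued.
        record = PatientRecord(allergies=["penicillin"])
        self.assertIn("WARNING", prescribe(record, "penicillin"))

if __name__ == "__main__":
    unittest.main()
```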
Scenario testing is an approach to release testing where you devise typical scenarios of use and
use these to develop test cases for the system. Scenarios should be realistic and real system users
should be able to relate to them. If you have used scenarios as part of the requirements
engineering process, then you may be able to reuse these as testing scenarios.
Performance Testing
Once a system has been completely integrated, it is possible to test for emergent
properties, such as performance and reliability.
Performance tests have to be designed to ensure that the system can process its intended
load.
This usually involves running a series of tests where you increase the load until the
system performance becomes unacceptable.
To test whether performance requirements are being achieved, you may have to construct
an operational profile.
An operational profile is a set of tests that reflect the actual mix of work that will be
handled by the system.
Therefore, if 90% of the transactions in a system are of type A, 5% of type B, and the
remainder of types C, D, and E, then you have to design the operational profile so that the
vast majority of tests are of type A.
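A sketch of generating a test workload from this operational profile (the exact weights for types C, D, and E are assumed for the example):

```python
import random

# Operational profile from the example: 90% type A, 5% type B,
# and the remaining 5% split (here, arbitrarily) across C, D, and E.
profile = {"A": 0.90, "B": 0.05, "C": 0.02, "D": 0.02, "E": 0.01}

def generate_workload(n_transactions):
    """Draw a mix of transaction types matching the operational profile."""
    types = list(profile)
    weights = list(profile.values())
    return random.choices(types, weights=weights, k=n_transactions)

# A 10,000-transaction test run with roughly 9,000 type-A transactions:
workload = generate_workload(10_000)
```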
Experience has shown that an effective way to discover defects is to design tests around
the limits of the system.
In performance testing, this means stressing the system by making demands that are
outside the design limits of the software. This is known as ‘stress testing’.
For example, say you are testing a transaction processing system that is designed to
process up to 300 transactions per second. You start by testing this system with fewer
than 300 transactions per second. You then gradually increase the load on the system
beyond 300 transactions per second until it is well beyond the maximum design load of
the system and the system fails.
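A sketch of such a stress-test loop (process_transactions is a hypothetical driver function, and the load increments are arbitrary):

```python
def run_stress_test(process_transactions, design_limit_tps=300):
    """Ramp the load from below the design limit to well beyond it,
    returning the load at which the system failed, if it did.
    process_transactions is a hypothetical driver that runs the system
    at a given load and returns True while behaviour is acceptable."""
    load = 100                          # start below the 300 tps design limit
    while load <= 3 * design_limit_tps: # stop well beyond the design load
        try:
            acceptable = process_transactions(transactions_per_second=load)
        except Exception:
            return load                 # the system failed outright
        if not acceptable:
            return load                 # performance became unacceptable
        load += 50                      # gradually increase the load
    return None                         # no failure observed in the tested range
```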
User testing
User or customer testing is a stage in the testing process in which users or customers provide
input and advice on system testing. User testing is essential, even when comprehensive system
and release testing have been carried out. Types of user testing include:
1. Alpha testing: users of the software work with the development team to test the software
at the developer's site.
2. Beta testing: a release of the software is made available to users to allow them to
experiment and to raise problems that they discover with the system developers.
3. Acceptance testing: customers test a system to decide whether or not it is ready to be
accepted from the system developers and deployed in the customer environment.
The acceptance testing process has six stages:
1. Define acceptance criteria: This stage should, ideally, take place early in the process
before the contract for the system is signed. The acceptance criteria should be part of the
system contract and be agreed between the customer and the developer.
2. Plan acceptance testing: This involves deciding on the resources, time, and budget for
acceptance testing and establishing a testing schedule. The acceptance test plan should
also discuss the required coverage of the requirements and the order in which system
features are tested.
3. Derive acceptance tests: Once acceptance criteria have been established, tests have to be
designed to check whether or not a system is acceptable. Acceptance tests should aim to
test both the functional and non-functional characteristics (e.g., performance) of the
system. They should, ideally, provide complete coverage of the system requirements.
4. Run acceptance tests: The agreed acceptance tests are executed on the system. Ideally,
this should take place in the actual environment where the system will be used, but this
may be disruptive and impractical. Therefore, a user testing environment may have to be
set up to run these tests.
5. Negotiate test results: It is unlikely that all of the defined acceptance tests will pass with
no problems. If they do, acceptance testing is complete and the system can be handed
over. More commonly, some problems are discovered, and the developer and customer
have to negotiate whether the system is good enough to be put into use despite them.
6. Reject/accept system: This stage involves a meeting between the developers and the
customer to decide on whether or not the system should be accepted. If the system is not
good enough for use, then further development is required to fix the identified problems.
Once this rework is complete, the acceptance testing phase is repeated.
****************************************************************************
Software Evolution
Evolution Process
Program Evolution Dynamics
Software Maintenance
Legacy System Management
**************************************************************************
A key problem for all organizations is implementing and managing change to their existing
software systems.
Organizations have huge investments in their software systems - they are critical business
assets. To maintain the value of these assets to the business, they must be changed and updated.
The majority of the software budget in large companies is devoted to changing and evolving
existing software rather than developing new software. A spiral model of development and
evolution represents how a software system evolves through a sequence of multiple releases.
Because these systems are critical business assets, companies must invest in system change
to preserve their value; consequently, most large companies spend more on maintaining
existing systems than on new systems development.
Software costs a lot of money, so a company has to use a software system for many years
to get a return on its investment.
Obviously, the requirements of the installed systems change as the business and its
environment change.
Therefore, new releases of the systems, incorporating changes and updates, are usually
created at regular intervals.
You start by creating release 1 of the system. Once delivered, changes are proposed and
the development of release 2 starts almost immediately.
In fact, the need for evolution may become obvious even before the system is deployed
so that later releases of the software may be under development before the current version
has been released.
Rajlich and Bennett proposed an alternative view of the software evolution life cycle, in
which a system moves through stages of evolution, servicing, and phase-out.
Evolution is the phase in which significant changes to the software architecture and
functionality may be made.
During servicing, the only changes that are made are relatively small, essential changes.
During evolution, the software is used successfully and there is a constant stream of
proposed requirements changes.
However, as the software is modified, its structure tends to degrade and changes become
more and more expensive.
This often happens after a few years of use, when other environmental changes, such as
new hardware and operating systems, are also often required.
At some stage in the life cycle, the software reaches a transition point where significant
changes, implementing new requirements, become less and less cost effective.
During the servicing phase, the software is still useful and used but only small tactical
changes are made to it.
During this stage, the company is usually considering how the software can be replaced.
In the final stage, phase-out, the software may still be used but no further changes are
being implemented.
Evolution processes
Software evolution processes depend on the type of software being maintained, the
development processes used in an organization, and the skills and experience of the people
involved.
Proposals for change are the driver for system evolution. These should be linked with
components that are affected by the change, thus allowing the cost and impact of the change to
be estimated. Change identification and evolution continues throughout the system lifetime.
The process includes the fundamental activities of change analysis, release planning,
system implementation, and releasing a system to customers.
The cost and impact of these changes are assessed to see how much of the system is
affected by the change and how much it might cost to implement the change.
If the proposed changes are accepted, a new release of the system is planned. During
release planning, all proposed changes (fault repair, adaptation, and new functionality)
are considered.
A decision is then made on which changes to implement in the next version of the
system.
The changes are implemented and validated, and a new version of the system is released.
The process then iterates with a new set of changes proposed for the next release.
Ideally, the change implementation stage of this process should modify the system
specification, design, and implementation to reflect the changes to the system.
New requirements that reflect the system changes are proposed, analyzed, and validated.
System components are redesigned and implemented and the system is retested.
If appropriate, prototyping of the proposed changes may be carried out as part of the
change analysis process.
Change requests sometimes relate to system problems that have to be tackled urgently:
1. If a serious system fault occurs that has to be repaired to allow normal operation to
continue.
2. If changes to the system's operating environment have unexpected effects that disrupt
normal operation.
3. If there are unanticipated changes to the business running the system, such as the
emergence of new competitors or the introduction of new legislation that affects the
system.
In these cases, the need to make the change quickly means that you may not be able to
follow the formal change analysis process. Rather than modify the requirements and
design, you make an emergency fix to the program to solve the immediate problem.
However, the danger is that the requirements, the software design, and the code become
inconsistent.
You chose a quick and workable solution rather than the best solution as far as system
structure is concerned.
This accelerates the process of software ageing so that future changes become
progressively more difficult and maintenance costs increase.
Lehman and Belady carried out several empirical studies of system change with a view to
understanding more about the characteristics of software evolution. From these studies,
they proposed a set of 'laws' of software evolution.
Lehman and Belady claim these laws are likely to be true for all types of large
organizational software systems (what they call E-type systems).
These are systems in which the requirements are changing to reflect changing business
needs.
New releases of the system are essential for the system to provide business value.
Lehman's laws:
1. Continuing change: A program used in a real-world environment must change, or it
becomes progressively less useful in that environment.
2. Increasing complexity: As an evolving program changes, its structure tends to become
more complex; extra resources must be devoted to preserving and simplifying the structure.
3. Large program evolution: Program evolution is a self-regulating process; system attributes
such as size and the number of changes in each release are approximately invariant for
each system release.
4. Organizational stability: Over a program's lifetime, its rate of development is
approximately constant and independent of the resources devoted to system development.
5. Conservation of familiarity: Over the lifetime of a system, the incremental change in each
release is approximately constant.
6. Continuing growth: The functionality offered by systems has to continually increase to
maintain user satisfaction.
7. Declining quality: The quality of systems will decline unless they are modified to reflect
changes in their operational environment.
8. Feedback system: Evolution processes incorporate multiagent, multiloop feedback systems,
and they have to be treated as feedback systems to achieve significant product improvement.
The first law reflects the fact that, as the system's environment changes, new requirements
emerge and the system must be modified. When the modified system is reintroduced to the
environment, this promotes more environmental changes, so the evolution process starts again.
The second law states that, as a system is changed, its structure is degraded.
You spend time improving the software structure without adding to its functionality.
Obviously, this means additional costs, over and above those of implementing required
system changes.
The third law suggests that large systems have a dynamic of their own that is established
at an early stage in the development process.
This determines the gross trends of the system maintenance process and limits the
number of possible system changes.
Lehman’s fourth law suggests that most large programming projects work in a
‘saturated’ state.
That is, a change to resources or staffing has imperceptible effects on the long-term
evolution of the system.
Lehman’s fifth law is concerned with the change increments in each system release.
The more functionality added in each release, the more faults there will be.
Therefore, a large increment in functionality in one system release means that this will
have to be followed by a further release in which the new system faults are repaired.
Relatively little new functionality should be included in this release. This law suggests
that you should not budget for large functionality increments in each release without
taking into account the need for fault repair.
The sixth and seventh laws are similar and essentially say that users of software will
become increasingly unhappy with it unless it is maintained and new functionality is
added to it.
The final law reflects the most recent work on feedback processes, although it is not yet
clear how this can be applied in practical software development.
Software Maintenance
Software maintenance focuses on modifying a program after it has been put into use. The
term is mostly used for changing custom software. Generic software products are said to evolve
to create new versions. Maintenance does not normally involve major changes to the system's
architecture. Changes are implemented by modifying existing components and adding new
components to the system.
Types of software maintenance include:
1. Fault repairs: Coding errors are usually relatively cheap to correct; design errors are
more expensive as they may involve rewriting several program components.
Requirements errors are the most expensive to repair because of the extensive system
redesign which may be necessary.
2. Environmental adaptation: This type of maintenance is required when some aspect of
the system's environment, such as the hardware, the platform operating system, or other
support software, changes. The application system must be modified to cope with these
environmental changes.
3. Functionality addition: This type of maintenance is necessary when the system
requirements change in response to organizational or business change. The scale of the
changes required to the software is often much greater than for the other types of
maintenance.
In practice, there is not a clear-cut distinction between these types of maintenance.
When you adapt the system to a new environment, you may add functionality to take
advantage of new environmental features.
Software faults are often exposed because users use the system in unanticipated ways.
Changing the system to accommodate their way of working is the best way to fix these
faults.
Different people sometimes give these types of maintenance different names:
‘Corrective maintenance’ is universally used to refer to maintenance for fault repair.
‘Adaptive maintenance’ sometimes means adapting to a new environment and
sometimes means adapting the software to new requirements.
‘Perfective maintenance’ sometimes means perfecting the software by implementing
new requirements; in other cases it means maintaining the functionality of the system but
improving its structure and its performance.
The surveys broadly agree that software maintenance takes up a higher proportion of IT
budgets than new development (roughly two-thirds maintenance, one-third development).
They also agree that more of the maintenance budget is spent on implementing new
requirements than on fixing bugs.
Maintenance costs are usually greater than development costs (2x to 100x depending on the
application). Costs are affected by both technical and non-technical factors; they tend to increase
as software is maintained.
Maintenance corrupts the software structure, making further maintenance more difficult. Aging
software can have high support costs (e.g., old languages, compilers, etc.).
Maintenance cost factors include:
1. Team stability: maintenance costs are reduced if the same staff are involved with them
for some time.
2. Contractual responsibility: the developers of a system may have no contractual
responsibility for maintenance so there is no incentive to design for future change.
3. Staff skills: maintenance staff are often inexperienced and have limited domain
knowledge.
4. Program age and structure: as programs age, their structure is degraded and they
become harder to understand and change.
Maintenance prediction is concerned with assessing which parts of the system may cause
problems and have high maintenance costs. Predicting the number of changes requires an
understanding of the relationships between a system and its environment. Tightly coupled
systems require changes whenever the environment is changed. Factors influencing this
relationship are:
Number and complexity of system interfaces: the larger the number of interfaces
and the more complex these interfaces, the more likely it is that interface changes will
be required as new requirements are proposed.
Number of inherently volatile system requirements: requirements that reflect
organizational policies and procedures are likely to be more volatile than
requirements that are based on stable domain characteristics.
The business processes where the system is used: as business processes evolve, they
generate system change requests. The more business processes that use a system, the
more the demands for system change.
After a system has been put into service, you may be able to use process data to help
predict maintainability.
Examples of process metrics that can be used for assessing maintainability are as follows:
1. Number of requests for corrective maintenance: An increase in the number of bug and
failure reports may indicate that more errors are being introduced into the program than
are being repaired during the maintenance process. This may indicate a decline in
maintainability.
2. Average time required for impact analysis: This reflects the number of program
components that are affected by the change request. If this time increases, it implies more
and more components are affected and maintainability is decreasing.
3. Average time taken to implement a change request: This is the amount of time that you
need to modify the system and its documentation, after you have assessed which
components are affected. An increase in the time needed to implement a change may
indicate a decline in maintainability.
4. Number of outstanding change requests: An increase in this number over time may
imply a decline in maintainability.
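A sketch of tracking one such metric, the average time to implement a change request, across releases (the data is invented for illustration):

```python
# Hypothetical change-request records: (release, days_to_implement).
change_requests = [
    ("R1", 2), ("R1", 3), ("R1", 2),
    ("R2", 4), ("R2", 5),
    ("R3", 6), ("R3", 8), ("R3", 7),
]

def average_implementation_time(records):
    """Average days to implement a change request, per release.
    A rising trend may indicate declining maintainability."""
    totals = {}
    for release, days in records:
        totals.setdefault(release, []).append(days)
    return {release: sum(d) / len(d) for release, d in totals.items()}

print(average_implementation_time(change_requests))
# e.g. {'R1': 2.33, 'R2': 4.5, 'R3': 7.0} -> rising, so maintainability declining
```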
System reengineering refers to restructuring or rewriting part or all of a legacy system without
changing its functionality. It is applicable where some but not all sub-systems of a larger
system require frequent maintenance. Reengineering involves investing effort to make these
sub-systems easier and cheaper to maintain. The advantages of reengineering over replacing
the system with new software are:
Reduced risk: there is a high risk in new software development. There may be
development problems, staffing problems and specification problems.
Reduced cost: the cost of reengineering is often significantly less than the costs of
developing new software.
Legacy system management
When managing legacy systems, organizations have to assess the business value of each
system. Two important factors in this assessment are the system's dependability and the
system's outputs.
System dependability: if a system is not dependable and the problems directly affect the
business customers, or mean that people in the business are diverted from other tasks to
solve these problems, the system has a low business value.
The system outputs: the key issue here is the importance of the system outputs to the
successful functioning of the business. If the business depends on these outputs, then the
system has a high business value. Conversely, if these outputs can be easily generated in
some other way, or if the system produces outputs that are rarely used, then its business
value may be low.