19ecs463 Module 3 STM

Module 3 : Validation Activities
Unit validation testing
Integration testing
Function testing
System testing
Acceptance testing
Regression testing: objectives of regression testing
Regression testing types
Regression testing techniques
Verification Testing

▪ Verification is the process of evaluating the work-products of a development phase to determine whether they meet the specified requirements.
▪ Verification ensures that the product is built according to the requirements and design specifications.
▪ Are we building the product right?
Activities involved
▪ Reviews
▪ Walkthroughs
▪ Inspections
Verification Testing - Workflow

Validation Testing

▪ Validation is the process of evaluating software during or at the end of the development process to determine whether it satisfies the specified business requirements.
▪ Validation testing ensures that the product actually meets the client's needs. It can also be defined as demonstrating that the product fulfils its intended use when deployed in an appropriate environment.
▪ Are we building the right product?
Activities involved
• Unit Testing
• Integration Testing
• System Testing
• User Acceptance Testing
Validation Testing - Workflow
Differences between verification and validation testing

▪ Verification checks whether we are building the product right; validation checks whether the product we have built is right.
▪ Verification is also known as static testing; validation is also known as dynamic testing.
▪ Verification includes methods like inspections, reviews, and walkthroughs; validation includes functional testing, system testing, integration testing, and user acceptance testing.
▪ Verification checks the work-products (not the final product) of a development cycle to decide whether the product meets the specified requirements; validation checks the software during or at the end of the development cycle to decide whether it follows the specified business requirements.
▪ Quality assurance comes under verification testing; quality control comes under validation testing.
▪ Code is not executed during verification testing; in validation testing, the code is executed.
▪ Verification finds bugs early in the development phase of the product; validation finds the bugs that were not caught in the verification process.
▪ Verification is executed by the quality assurance team to make sure that the product is developed according to the customers' requirements; validation is executed by the testing team to test the application.
▪ Verification is done before validation testing; validation takes place after verification.
▪ In verification, we check whether the inputs follow the outputs or not; in validation, we check whether the user accepts the product or not.
V & V Activities

Stubs and Drivers

▪ Stubs and drivers are computer programs that act as substitutes for modules which are not available for testing.
▪ Example: testing the high speed of a car before all of its parts are ready.
▪ Stubs and drivers simulate the functionality of other modules and allow us to perform the testing activity.
Stubs and Drivers - Example

Stub

▪ Stubs belong to the top-down approach of integration testing (when the upper modules are prepared but the bottom modules are not).
▪ We cannot test the upper modules without the bottom modules, so for the testing phase we simulate the bottom modules and check the working of the upper modules.
▪ We call the stubs in place of the original programs.
Drivers

▪ Drivers are programs used to call another program; a driver is used to call any stub.
▪ In other words, a driver is a main program through which other modules are called.
▪ We use a driver in the bottom-up approach, when the bottom-level modules are prepared but the top-level modules are not. In this case, we prepare a driver to perform the task.
UNIT VALIDATION TESTING

▪ A unit is the smallest building block of the software system.
▪ Units must be validated.
▪ Unit tests ensure that the software meets its functionality prior to integration and system testing.
▪ The software is divided into modules.
▪ A module is not an isolated entity; it is not independent, so it cannot be tested in isolation.
Driver Example

▪ Module B is under test.

▪ Module A is a superordinate of module B.

▪ Suppose module A is not ready and B has to be unit tested.

▪ Module B needs inputs from module A.

▪ A driver module simulates module A.

▪ The driver passes the required inputs to module B.
Driver Example
Drivers

▪ A driver supports the code and data.
▪ It provides an environment for testing a part of a system.
▪ A test driver may take the inputs from the user, or it may read the inputs from a file.
▪ A driver can be defined as a software module which invokes a module under test, provides test inputs, controls and monitors execution, and reports the test results.
▪ A test driver initializes the environment desired for testing.
▪ It provides simulated inputs in the required format to the units to be tested (a minimal sketch follows).
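To make this concrete, here is a minimal Python sketch of a test driver; module_b, its inputs, and the expected values are hypothetical stand-ins, not part of any real system.

    # Hypothetical sketch: module A (the superordinate) is not ready, so
    # this driver supplies the inputs that A would normally pass to B,
    # controls execution, and reports the results.

    def module_b(amount, discount_rate):
        """Unit under test: computes the payable amount after discount."""
        return round(amount * (1 - discount_rate), 2)

    def driver():
        # Initialize the environment, feed simulated inputs in the
        # required format, then report pass/fail for each case.
        test_cases = [
            ((100.0, 0.10), 90.0),
            ((250.0, 0.00), 250.0),
        ]
        for args, expected in test_cases:
            actual = module_b(*args)
            status = "PASS" if actual == expected else "FAIL"
            print(f"{status}: module_b{args} -> {actual} (expected {expected})")

    driver()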


STUB

▪ Stubs and drivers are effective tools for demonstrating progress in a business environment.
▪ The module under test may also call some other modules.
▪ The called modules may not be ready at the time of testing; therefore, these modules need to be simulated for testing.
▪ Dummy modules are prepared for these subordinate modules. These dummy modules are called stubs.
STUB

▪ A stub can be defined as a piece of software that works similarly to a unit which is referenced by the unit being tested.
▪ It provides the minimum required behaviour for that unit.
▪ It is a reduced implementation of the actual module.
▪ It does not perform any action of its own.
▪ Display instructions are included as messages in the body of the stub (a minimal sketch follows).
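A minimal Python sketch of a stub, assuming hypothetical names (generate_invoice is the unit under test; fetch_tax_rate is the unready subordinate module):

    # The subordinate module is injected as a parameter so the stub can
    # later be swapped for the real implementation without code edits.

    def fetch_tax_rate_stub(region):
        # Reduced implementation: no real lookup, only the minimum
        # behaviour the caller needs, plus a display message in the body.
        print(f"[stub] fetch_tax_rate called for region={region}")
        return 0.18  # fixed canned value

    def generate_invoice(amount, region, fetch_tax_rate=fetch_tax_rate_stub):
        """Unit under test: calls the (stubbed) subordinate module."""
        return round(amount * (1 + fetch_tax_rate(region)), 2)

    print(generate_invoice(100.0, "IN"))  # 118.0 with the stubbed rate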
STUB

Drivers & Stubs

Drivers and stubs represent

overheads also.

Overhead of designing them may

increase the time and cost of the

entire software system.

Therefore, they must be designed

simple to keep overheads low.

22
Integration Testing & Its Reasons for Performing
▪ Integration is the activity of combining the modules together

▪ Integration aims at constructing a working software system.

▪ modules are not standalone entities.

▪ They are a part of a software system which has many interfaces.

▪ Even if a single interface is mismatched, many modules may be affected.

▪ It exposes inconsistency between the modules such as

▪ Data can be lost across an interface.

▪ module Integration may not give the desired result.

▪ Data types and their valid ranges may mismatch between the modules.

23
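As a hypothetical illustration of such an interface mismatch (all names invented), two modules can each pass their unit tests yet disagree about units and valid ranges at the interface:

    # Module 1 reports distance in metres; module 2 expects kilometres.
    # Each is "correct" alone, but integration exposes the mismatch.

    def sensor_read_distance():
        return 1500  # metres

    def brake_controller(distance_km):
        # Documented valid range: 0-10 km.
        return "BRAKE" if distance_km < 2 else "CRUISE"

    # 1500 m is silently treated as 1500 km: the desired result is lost.
    print(brake_controller(sensor_read_distance()))  # prints CRUISE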
Three approaches for integration testing

DECOMPOSITION-BASED INTEGRATION

▪ The functional decomposition of modules is shown as a tree.
▪ Nodes represent the modules present in the system.
▪ Links/edges between two modules represent the calling sequence.
▪ The nodes on the last level of the tree are leaf nodes.
▪ One method is to integrate all the modules together and then test them (non-incremental).
▪ Another method is to integrate the modules one by one and test them incrementally.
▪ Based on these methods, integration testing methods are classified into two categories:
▪ (a) non-incremental (big-bang integration testing)
▪ (b) incremental
DECOMPOSITION-BASED INTEGRATION

Non-Incremental Integration Testing: Big-Bang Integration Testing

▪ Big-bang integration testing is a testing approach where all components or modules are integrated and tested as a single unit. This is done after all modules have been completed and before any system-level testing is performed.
Drawbacks
▪ The big-bang method cannot be adopted practically.
▪ It requires more work.
▪ It is difficult to localize the errors, since the exact location of bugs cannot be found easily.
Non-Incremental Integration Testing : Big-Bang integration testing

Incremental Integration Testing

▪ Start with one module and unit test it.
▪ Then combine it with another module and perform a test on both the modules.
▪ Modules are combined one by one; incrementally keep adding modules, testing the most recent integrated environment each time.
▪ Thus, an integrated, tested software system is achieved.
Incremental Integration Testing Advantages

▪ It does not require many drivers and stubs.
▪ It is easy to localize the errors.
▪ Thorough testing is performed.
Types of Incremental Integration Testing

Incremental integration testing is divided into two categories:

▪ 1. Top-down integration testing
▪ A. Depth-first integration
▪ B. Breadth-first integration
▪ 2. Bottom-up integration testing

Behavioural testing at the integration level includes:

▪ 3. Call graph-based integration
▪ 4. Pair-wise integration
▪ 5. Path-based integration
Top-down Integration Testing

▪ Start with the top or initial module in the software.
▪ Substitute stubs for all the subordinate modules of the top module. Test the top module.
▪ After testing the top module, the stubs are replaced one at a time with the actual modules for integration.
▪ Perform testing on this recently integrated environment.
▪ Regression testing may be conducted.
▪ Repeat these steps for the whole design hierarchy: look at the design hierarchy from top to bottom, starting with the high-level modules and moving downward. A minimal sketch of these steps follows.
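A minimal sketch of these steps in Python, assuming invented module names; the subordinate is first replaced by a stub and then by the actual module:

    # Step 1: test the top module against a stub for its subordinate.
    def report_stub():
        return "stub-report"            # stands in for the unready module

    def report_actual():
        return "quarterly-report"       # actual module, available later

    def top_module(make_report):
        return f"dashboard[{make_report()}]"

    assert top_module(report_stub) == "dashboard[stub-report]"

    # Step 2: replace the stub with the actual module and retest the
    # recently integrated environment (regression testing).
    assert top_module(report_actual) == "dashboard[quarterly-report]"
    print("top-down integration steps passed")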
Top-down Integration Testing: Depth-First Integration

▪ Modules subordinate to the top module are integrated in the following two ways:
▪ Depth-first integration: modules 1, 2, 6, 7/8 will be integrated first. Next, modules 1, 3, 4/5 will be integrated.
Top-down Integration Testing : Depth first integration

Top-down Integration Testing: Breadth-First Integration

▪ All modules directly subordinate at each level, moving across the design hierarchy horizontally, are integrated first.
▪ Modules 2 and 3 will be integrated first. Next, modules 6, 4, and 5 will be integrated. Modules 7 and 8 will be integrated last.

Drawbacks

▪ Stubs must be prepared as required for testing each module.
▪ Stubs are often more complicated.
Top-down Integration Testing : Breadth first integration

Bottom-up Integration Testing

▪ Begins with the modules at the lowest level in the software structure.
▪ After testing these modules, they are integrated and tested, moving from the bottom to the top level.
▪ Bottom-up integration can be considered the opposite of the top-down approach.
▪ Step 1: Start with the lowest-level modules in the design hierarchy.
▪ Step 2: Look for the superordinate module and design a driver module for it.
▪ Step 3: Test the module selected in step 1 with the driver designed in step 2.
▪ Repeat these steps, moving up in the design hierarchy.
▪ Whenever the actual modules are available, replace the stubs and drivers with the actual ones and test again.
Bottom-up Integration Testing

Comparison Between Top Down & Bottom Up

CALL GRAPH-BASED INTEGRATION

▪ A call graph is a directed graph.
▪ The nodes are either modules or units.
▪ A directed edge from one node to another means that one module calls another module.
▪ The call graph can be captured in matrix form, known as the adjacency matrix. A minimal sketch follows.
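A minimal sketch of capturing a call graph as an adjacency matrix, using an invented four-module system:

    # matrix[i][j] = 1 means module i calls module j.
    modules = ["A", "B", "C", "D"]
    calls = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]

    index = {m: i for i, m in enumerate(modules)}
    matrix = [[0] * len(modules) for _ in modules]
    for caller, callee in calls:
        matrix[index[caller]][index[callee]] = 1

    for name, row in zip(modules, matrix):
        print(name, row)   # e.g. A [0, 1, 1, 0]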
CALL GRAPH-BASED INTEGRATION

▪ Integration testing detects bugs which are structural.
▪ It is also important to detect some behavioural bugs.
▪ If we refine the functional decomposition tree into a module calling graph, then we are moving towards behavioural testing at the integration level.
CALL GRAPH-BASED INTEGRATION
Pair-wise Integration

▪ In pair-wise integration, one test session covers each pair of modules connected by an edge, e.g. pairs 1–10 and 1–11. The resulting set of test sessions corresponds to the set of all edges in the call graph.
▪ Here the number of test sessions is 19, which is equal to the number of edges in the call graph. A minimal sketch follows.
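A minimal sketch, reusing the invented call graph from above: pair-wise integration simply yields one test session per edge.

    calls = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]  # edges

    sessions = [f"session: integrate {u} with {v}" for u, v in calls]
    print(len(sessions))   # number of sessions == number of edges (4 here)
    for s in sessions:
        print(s)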
Pair-wise Integration

Neighbourhood Integration

▪ The neighbourhood of a node can be defined as the set of nodes that are one edge away from the given node.
▪ The neighbourhood of a node consists of its immediate predecessor and immediate successor nodes. A minimal sketch follows.
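A minimal sketch computing the neighbourhood of a node from the same invented edge list:

    calls = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]

    def neighbourhood(node):
        preds = {u for u, v in calls if v == node}   # immediate predecessors
        succs = {v for u, v in calls if u == node}   # immediate successors
        return preds | succs

    print(neighbourhood("B"))   # {'A', 'D'}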
Neighbourhood Integration
PATH-BASED INTEGRATION

▪ Source node: a module at which execution starts or resumes. The nodes to which control is transferred back after calling a module are also source nodes.
▪ Sink node: a module at which the execution terminates. The nodes to which control is transferred are also sink nodes.
▪ Module execution path (MEP): a path consisting of a set of executable statements within a module, as in a flow graph.
▪ Message: when control is transferred from one unit to another unit, the transfer is called a message.
PATH-BASED INTEGRATION

▪ MM-path: a path consisting of MEPs and messages. It shows the sequence of executable statements; an MM-path is a set of MEPs and transfers of control among different units in the form of messages.
▪ MM-path graph: an extended flow graph where the nodes are MEPs and the edges are messages. In this graph, messages are highlighted with thick lines.
PATH-BASED INTEGRATION
FUNCTION TESTING

▪ The functionality stated in the system specification is tested.
▪ A function coverage metric is used to keep a record of function testing.
▪ It detects discrepancies between the functional specification of the software and its actual behaviour.
▪ It verifies the system behaviour against the specification.
FUNCTION TESTING

▪ It measures the quality of the functional components of the system.
▪ The test leader defines the scope, schedule, and deliverables for the function test cycle.
▪ The test plan (document) and the test schedule (work plan) often undergo several revisions.
▪ A functional decomposition of the system is prepared; the testing organization takes responsibility for creating and maintaining the partitions.
FUNCTION TESTING

▪ Requirements need to be itemized under an appropriate functional partition.
▪ Test cases need to be itemized under an appropriate functional partition.
▪ Test cases need to be traced/mapped back to the appropriate requirements.
▪ A function coverage matrix is prepared (a minimal sketch follows).
▪ The appropriate set of test cases is executed and the results of the test cases are recorded.
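A minimal sketch of a function coverage matrix with invented requirements and test-case IDs:

    # Map each functional partition/requirement to the test cases that
    # exercise it, then compute the function coverage metric.
    coverage_matrix = {
        "FR-1 login":    ["TC-01", "TC-02"],
        "FR-2 search":   ["TC-03"],
        "FR-3 checkout": [],          # not yet covered by any test case
    }

    covered = sum(1 for tcs in coverage_matrix.values() if tcs)
    total = len(coverage_matrix)
    print(f"function coverage: {covered}/{total} = {100*covered/total:.0f}%")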
FUNCTION TESTING

System Testing

▪ Tests the whole system on various grounds.
▪ The grounds can be performance, security, maximum load, etc.
▪ It attempts to show where the program or system does not meet its original requirements and objectives as stated in the requirement specification.
System Testing
Recovery Testing

▪ Checks how good the developed software is when it faces a disaster.
▪ A disaster can be unplugging the system, disconnecting the network, stopping the database, or a software crash.
▪ Software systems (e.g. operating systems, database management systems, etc.) must recover from programming errors, hardware failures, data errors, or any disaster in the system.
Testers should work on the following areas during recovery testing:

▪ Restart: if there is a failure and we want to recover and start again, testers must ensure that all transactions have been reconstructed correctly and that all devices are in proper states.
▪ Switchover: the ability of the system to switch to a new component must be tested. If there are standby components, then on failure of one component the standby takes over control.
▪ Maximum load
Security Testing

▪ Customers' data has to be secured, and we must ensure that the security functionality is properly implemented.
▪ If Internet users' personal data/information is not secure, the system loses its accountability.
▪ Security may include: controlling access to data, encrypting data in communication, ensuring secrecy of stored data, and auditing security events.
▪ The effects of security breaches could be extensive and can cause loss of information, corruption of information, misinformation, privacy violations, denial of service, etc.
Security Testing

Many different tasks are performed to manage software security risks, including

▪ Creating security abuse/misuse cases

▪ Listing security requirements

▪ Performing architectural risk analysis

▪ Building risk-based security test plans

▪ Performing security tests

Elements of security testing

▪ Confidentiality: protects against the disclosure of information to unauthorized parties.
▪ Integrity: ensures that the information at the receiver's side is not altered in transit or by anyone other than the sender (see the sketch below).
▪ Authorization: the process of determining that a requester is allowed to receive a service or perform an operation; access control is an example.
▪ Availability: information must be kept available for authorized persons when they need it.
▪ Non-repudiation: prevents the later denial that an action happened or that a communication took place, etc.; it uses authentication information combined with a timestamp.
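As a small illustration of the integrity element, the sketch below uses Python's standard hmac module: the receiver recomputes a keyed hash and rejects a tampered message (the key and the messages are invented).

    import hashlib
    import hmac

    key = b"shared-secret"                     # hypothetical shared key
    message = b"transfer 100 to account 42"
    tag = hmac.new(key, message, hashlib.sha256).digest()

    # An attacker alters the message in transit.
    tampered = b"transfer 900 to account 42"
    ok = hmac.compare_digest(
        tag, hmac.new(key, tampered, hashlib.sha256).digest())
    print("accepted" if ok else "integrity check failed")  # failed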
Performance testing

▪ How many concurrent users are expected for each target system server?
▪ Network appliance configurations and timing requirements must be considered.
▪ The testing team must use realistic databases.
▪ Using realistic-size databases provides the following benefits:
▪ Large datasets require significant disk space and processor power.
▪ For data transfer across a network, bandwidth may also be a consideration.
Performance testing

▫ Develop a high-level plan including requirements, resources, timelines, and milestones.
▫ Develop a detailed performance test plan, including all dependencies and associated timelines.
▫ Choose test tools.
▫ Specify the test data needed.
▫ Configure the test environment.
▫ Execute the tests, probably repeatedly.
Load Testing

▪ It determines the maximum sustainable load the system can handle.
▪ The maximum number of resources is allocated to the system during load testing.
▪ There is a high probability that the system will fail when put under maximum load.
▪ As the load is increased, transactional errors occur.
▪ When many users work simultaneously on a web server, the server responds slowly. A minimal sketch follows.
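A minimal load-test sketch in Python; transaction is a hypothetical stand-in for a real request to the system under test.

    import time
    from concurrent.futures import ThreadPoolExecutor

    def transaction(i):
        time.sleep(0.01)       # stands in for a real request
        return "ok"

    # Step the load up and watch for errors and slower responses.
    for users in (10, 50, 100):
        start = time.time()
        with ThreadPoolExecutor(max_workers=users) as pool:
            results = list(pool.map(transaction, range(users)))
        errors = sum(r != "ok" for r in results)
        print(f"{users} users: {errors} errors in {time.time()-start:.2f}s")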
Stress Testing

▪ It is a type of load testing.
▪ The system must be stress-tested: the system is put under loads beyond its limits so that it breaks.
▪ Thus, stress testing tries to break the system under test by overwhelming its resources in order to find the circumstances under which it will crash.
▪ In real-time systems, all the threshold values and system limits must be noted carefully.
▪ For a real-time defense system, we must stress-test the system; otherwise, there may be loss of equipment as well as life.
Stress Testing: The areas that may be stressed in a system are

▪ Input transactions
▪ Disk space
▪ Output
▪ Communications
▪ Interaction with users
Usability Testing

▪ It identifies discrepancies between the user interfaces of a product and the human engineering requirements of its potential users.
▪ Area experts: they understand usability problems and expectations; they analyse and provide valuable suggestions.
▪ Group meetings: they result in potential customers' comments on what they would like to see in an interface.
▪ Surveys: they yield valuable information about how potential customers would use a software product to accomplish their tasks.
Usability Testing Characteristics

▪ Ease of use: no environmental pressures on the end-user.
▪ Interface steps: they should not be misleading or complex to understand.
▪ Response time: it should not be so high that the user is frustrated or moves to some other option.
▪ Help system: a good help interface should be provided; it should not be redundant, and it should be very precise and easily understood.
▪ Error messages: for every exception in the system there must be an error message; error messages should be clear and meaningful.


Compatibility/Conversion/Configuration Testing

▪ Checks the compatibility of the system being developed with different operating systems and with the hardware and software configurations available.
Acceptance Testing: Formal Testing

▪ Determines whether a software system satisfies its acceptance criteria, in order to accept or reject the system.
▪ Checks that the software built satisfies the user requirements.
▪ The customer/client must be involved; end users provide the development team with feedback.
▪ It is carried out by end-users.
▪ Goals: determine whether the software is fit for the users, make users confident about the product, and check whether the software system satisfies its acceptance criteria.
Acceptance Testing Process

▪ The process completes when the customer and the developer have no further problems.
▪ A well-defined acceptance test plan must be created or reviewed by the customer.
▪ The development team and the customer should work together to:
▪ Prepare the acceptance plan and plan acceptance activities.
▪ Schedule adequate time for the customer to examine and review the product.
▪ Perform formal acceptance testing at delivery.
▪ Make a decision based on the results of acceptance testing.
Acceptance Testing: Entry and Exit Criteria

Entry Criteria

▪ System testing is complete.
▪ Identified defects are either fixed or documented.
▪ The acceptance plan is prepared and resources have been identified.

Exit Criteria

▪ The acceptance decision is made for the software.
▪ In case of any warning, the development team is notified.
Types of Acceptance Testing

Acceptance testing is classified into two categories:

▪ Alpha testing: tests are conducted at the development site by the end users.
▪ Beta testing: tests are conducted at the customer site.
Acceptance Testing: ALPHA TESTING

▪ The product is complete and usable in a test environment, but not necessarily bug-free.
▪ Testers and users together perform this testing.
▪ It is the final chance to get customer verification.
▪ It provides confidence in the software.
▪ It finds major defects or performance issues.
Acceptance testing: Entry & Exit Criteria to Alpha

▪ All features are complete and tested

▪ High bugs on primary platforms are fi xed/verifi ed.

▪ 50% of medium bugs are fi xed/verifi ed.

▪ Usability testing and feedback

▪ Alpha sites are ready for installation.

Exit Criteria from Alpha

▪ Get responses/feedbacks from customers.

▪ Prepare a report of any serious bugs being noticed.

▪ Notify bug-fixing issues to developers.


79
BETA TESTING

▪ The product should be complete and usable in a production environment.
▪ Beta testing is a final 'vote of confidence' from a few customers to help validate the product; after it, the product is ready for volume shipment to all customers.
▪ Versions of the software, known as beta versions, are released to a limited audience outside the company.
▪ Beta versions may also be made available to the open public to increase the feedback.
▪ At this stage the software is close to release.
Entry & Exit Criteria to Beta

▪ Positive responses from alpha sites.

▪ Customer bugs in alpha testing have been addressed.

▪ There are no fatal errors which can affect the functionality of the software.

▪ Beta sites are ready for installation.

Exit Criteria from Beta

▪ Get responses/feedbacks from the beta testers.

▪ Prepare a report of all serious bugs.

▪ Notify bug-fi xing issues to developers.


81
Guidelines for Beta Testing

▪ Don't expect to release new builds to beta testers more than once every two weeks.
▪ Don't plan a beta with fewer than four releases.
▪ If you add a feature during the beta process, the clock goes back and you need another 3–4 releases.
Regression Testing

▪ Defined as a software maintenance task.
▪ It validates the changed parts of the software.
▪ It ensures that the software functions properly, as it did before the changes occurred.
▪ It enhances the quality of the software.
▪ It ensures that bug-fixes and new functionalities introduced in a new version of the software do not adversely affect the previous version, and that the modifications work correctly.
Regression testing

OBJECTIVES OF REGRESSION TESTING

▪ Check whether the bug-fixing has worked or not.
▪ Find other related bugs and validate that the system does not have any related bugs.
▪ Test for the effect on other parts of the program: bug-fixing may have unwanted consequences on other parts of a program.
▪ Check the influence of changes in one part on other parts of the program.
REGRESSION TESTING TYPES

Bug-Fix Regression

▪ It is performed after a bug has been reported and fixed.
▪ It repeats the test cases that exposed the problem.
▪ It may involve retesting a substantial part of the product.

Side-Effect Regression / Stability Regression

▪ The goal is to prove that the change has no effect on the unchanged parts of the program.
▪ It tests the overall integrity of the program.
Regression Testing Techniques: Three

▪ Regression test selection technique: selects a subset of the existing test suite. It reduces the time required to retest a modified program.
▪ Test case prioritization technique: reorders the test suite so that tests with the highest priority are executed earlier than those with lower priority.
▪ There are two types of prioritization:
▪ (a) General test case prioritization: prioritize the test cases in T that will be useful over a succession of subsequent modified versions of P, without any knowledge of the modified version.
Regression Testing Techniques: Three

▪ (b) Version-specific test case prioritization: we prioritize the test cases in T with knowledge of the changes made in P.
▪ Test suite reduction technique: it reduces testing costs by permanently eliminating redundant test cases from test suites. A minimal sketch of prioritization and reduction follows.
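A minimal sketch of coverage-based prioritization and reduction with invented test data: greedily order tests by how many not-yet-covered elements they add; tests that add nothing are redundant and can be dropped.

    tests = {                       # test case -> functions it covers
        "T1": {"f1", "f2"},
        "T2": {"f2"},
        "T3": {"f3", "f4"},
        "T4": {"f1"},
    }

    order, covered, remaining = [], set(), dict(tests)
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            break                   # everything left adds no new coverage
        order.append(best)
        covered |= remaining.pop(best)

    print("prioritized order:", order)                  # ['T1', 'T3']
    print("redundant (reducible):", sorted(remaining))  # ['T2', 'T4']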
Selective Retest Technique (SRT)

▪ Software maintenance accounts for more than 60% of development costs.
▪ The objective of SRT is cost reduction.
▪ It identifies the modified portions of the new version P1.

Characteristic features of SRT:

▪ It minimizes the resources used.
▪ It minimizes the number of test cases applied to the new version.
▪ It analyses the relationship between the test cases and the software elements they cover.
▪ It uses information about the changes to select test cases.
Selective Retest Technique (SRT)
Problems Involved in SRT

▪ Regression test selection problem
▪ Coverage identification problem
▪ Test suite execution problem
▪ Test suite maintenance problem
▪ Test resources are limited, so the selection of test cases is of crucial importance.
Strategy for Test Case Selection: Selection Criteria Based on Code

▪ Test criterion: provides a decision procedure for selecting the test cases. Potential failures can only be detected if the parts of code that can cause faults are executed.
▪ Fault-revealing test cases: a test case t detects a fault in P1 if it causes P1 to fail.
▪ Modification-revealing test cases: a test case t is modification-revealing if it causes the outputs of P and P1 to differ.
▪ Modification-traversing test cases: a test case t is modification-traversing if it executes new or modified code in P1.
Regression Test Selection Techniques

▪ Minimization techniques: attempt to select minimal sets of test cases from T that yield coverage of modified or affected portions of P.
▪ Dataflow techniques: select test cases that exercise data interactions that have been affected by modifications.
▪ Safe techniques: most regression test selection techniques are not designed to be safe; techniques that are not safe can fail to select a test case that would have revealed a fault in the modified program.
▪ Ad hoc/random techniques: used when time constraints prohibit a systematic approach and no test selection tool is available. Another simple approach is to randomly select a predetermined number of test cases from T.
▪ Retest-all technique: simply reuses all existing test cases.

A minimal sketch of a modification-traversing selection follows.
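This sketch uses invented coverage data: keep only the tests whose covered lines intersect the modified lines.

    coverage = {                    # lines each test executes
        "T1": {"mod.py:10", "mod.py:11"},
        "T2": {"util.py:3"},
        "T3": {"mod.py:11", "util.py:7"},
    }
    modified = {"mod.py:11"}        # lines changed in the new version P1

    selected = [t for t, lines in coverage.items() if lines & modified]
    print("selected for retest:", selected)   # ['T1', 'T3']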
Evaluating Regression Test Selection Techniques

▪ Inclusiveness: suppose T contains n tests that are modification-revealing for P and P1, and suppose the technique M selects m of these tests. The inclusiveness of M relative to P, P1, and T is:

inclusiveness = (m / n) × 100%

▪ Precision: it measures the extent to which M omits tests that are non-modification-revealing. Suppose T contains n tests that are non-modification-revealing for P and P1, and suppose M omits m of these tests. The precision of M relative to P, P1, and T is:

precision = (m / n) × 100%

A small worked example follows.
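A small worked check of the two formulas with invented counts:

    # Inclusiveness: M selects 8 of the 10 modification-revealing tests.
    n, m = 10, 8
    print(f"inclusiveness = {100 * m / n:.0f}%")   # 80%

    # Precision: M omits 15 of the 20 non-modification-revealing tests.
    n, m = 20, 15
    print(f"precision = {100 * m / n:.0f}%")       # 75%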
Evaluating Regression Test Selection Techniques

▪ Efficiency: measured in terms of a technique's space and time requirements.
▪ It primarily depends on the test history.
▪ Both space and time efficiency depend on the size of the test suite.
▪ The computational cost depends on the size of the test suite.