Overview of Testing
Key Terms and Definitions
Verification:
- Are we building the product right?
- Looks at Process compliance.
- Preventive in nature
- IEEE/ANSI definition:
The process of evaluating a system or component to determine whether the
products of a given development phase satisfy the conditions imposed at the
start of that phase
Validation:
- Are we building the right product?
- Looks at Product compliance.
- Corrective in nature but can be preventive also.
- IEEE/ANSI definition:
The process of evaluating a system or component during or at the end of the
development process to determine whether it satisfies specified requirements
- Verification and Validation are complementary
Reliability:
- Probability that a given software program performs as expected for a given period of time without error.
Testing:
- Examination of the behavior of a Software program over a set of Sample data.
- The process of executing a system with the intent of finding defects
- Corrective in nature
- General definition:
Testing = Verification + Validation
Requirement:
- Is a condition or capability that is necessary for a system to meet its objectives.
- Can be functional, or non-functional (operational) such as Availability, Efficiency, Performance, Compatibility, Reliability, Quality, Safety, Scalability, Security, Usability, Documentation, Cost, etc.
Quality Assurance:
- A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
- QA activities ensure that the process is defined and appropriate.
- QA is process oriented and is Preventive in nature.
- Quality Assurance makes sure you are doing the right things, the right way.
- Examples are Process Checklists and Quality Audits.
Useful Quotes
- Edsger Dijkstra:
"Program testing can be used to show the presence of bugs, but never to show their absence!"
- Donald Knuth:
"Beware of bugs in the above code; I have only proved it correct, not tried it."
Q&A-1
Objectives of Testing
The Objectives of Software Testing are
- Find Errors.
- Verify Requirements.
- Make predictions about the product(s).
Of the above objectives, the last one is quite difficult. Why? Because it depends on several external factors in addition to the standard factors.
Robustness: Does the software component degrade gracefully as it approaches the limits given in the specification? Neither positive nor negative testing should result in unpredictable system behavior.
Completeness: Does the software solve the problem completely?
Consistency: Does the software component perform consistently, i.e. does it produce the same output each time for the same input(s)?
Usability: Is the software easy to use?
Testability: Is the software easily testable?
Safety: If the software component is safety critical, is it safe to use?
Cost of Testing
Q&A-2
Levels of Testing
Unit/module/component test
- Test individual units separately.
- Deals with finding logic errors, syntax errors, etc.
- Verify that the component adheres to its specification.
- Done by programmers
- Generally all white box
- Automation desirable for repeatability
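The unit-testing points above can be sketched with Python's `unittest` framework. The component under test, `classify_triangle`, is a hypothetical example, not from the source; each test targets a specific branch of its logic (white-box), and the suite is repeatable automation.

```python
# Minimal, repeatable unit-test sketch. `classify_triangle` is a
# hypothetical component under test, included so the example is runnable.
import unittest

def classify_triangle(a, b, c):
    """Return the triangle type for side lengths a, b, c."""
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or b + c <= a or a + c <= b:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TestClassifyTriangle(unittest.TestCase):
    # White-box: each test exercises one branch of the unit's logic.
    def test_invalid(self):
        self.assertEqual(classify_triangle(1, 2, 3), "invalid")  # a + b <= c

    def test_equilateral(self):
        self.assertEqual(classify_triangle(2, 2, 2), "equilateral")

    def test_isosceles(self):
        self.assertEqual(classify_triangle(2, 2, 3), "isosceles")

    def test_scalene(self):
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")

# Run the suite programmatically so the same tests repeat on every build.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestClassifyTriangle)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```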
Integration test
- Verify component interactions to make sure they are correct.
- Find interface defects.
- Must carry out Regression testing and Smoke testing
- Done by programmers as they integrate code into code base
- Generally white box, may be some black box
- Automation desirable for repeatability
System test
- Verify the overall system functionality.
- The target computer system is also exercised, though hardware testing is not in scope.
- Recommended that it is done by an external test group
- Mostly black box so that testing is not ‘corrupted’ by too much knowledge
- Test automation desirable
Alpha testing (Validation)
Integration Testing
- Top-down Integration Test
- Bottom-up Integration Test
Top-down Integration Test
The control program is tested first. Modules are integrated one at a time, with emphasis on interface testing.
Advantages:
- No test drivers needed
- Interface errors are discovered early
- Modular features aid debugging
Disadvantages:
- Test stubs are needed
- Errors in critical modules at low levels are found late.
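The "test stubs are needed" point can be made concrete with a small sketch. All names here (`compute_total`, `fetch_price_stub`) are hypothetical: a stub stands in for a low-level module that is not yet integrated, so the higher-level control logic can be tested first.

```python
# Top-down integration sketch: the control module is exercised first,
# with a stub replacing a low-level module that is not yet integrated.
# All names are hypothetical illustrations.

def fetch_price_stub(item_id):
    # Stub for the real pricing module: returns a canned value so the
    # control logic above it can be driven through its interface.
    return 10.0

def compute_total(item_ids, fetch_price):
    # Control module under test; the price lookup is injected so a stub
    # can replace the real implementation during integration testing.
    return sum(fetch_price(i) for i in item_ids)

# Interface-level check against the stubbed dependency.
assert compute_total(["a", "b", "c"], fetch_price_stub) == 30.0
```

When the real pricing module is integrated, it replaces the stub and the same interface checks are rerun, which is how interface errors surface early.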
Bottom-up Integration Test
System Test:
Conducted on a complete, integrated system to evaluate the system's compliance with its specified
requirements
Falls within the scope of Black box testing
Takes, as its input, all of the "integrated" software components that have successfully passed
Integration testing and also the software system itself integrated with any applicable hardware
system(s)
System testing is the first time that the entire system can be tested against the Functional
Requirement Specification(s) (FRS) and/or the System Requirement Specification (SRS)
The focus is to have an almost destructive attitude and test not only the design, but also the behavior and even the believed expectations of the customer
Is intended to test up to and beyond the bounds defined in the software/hardware requirements
specification(s)
Different types of testing falling under system testing are:
- Functional testing
- User interface test
- Usability test
- Compatibility test
- User help test
- Security test
- Performance test
- Sanity test
- Regression test
- Reliability test
- Recovery test
- Installation test
- Maintenance test
- Accessibility test
Q&A-3
What other quality control activities will help in avoiding more bugs before commencing test activities?
Explain when integration testing can become black-box testing.
Explain why an external test group is recommended to carry out System Testing.
What is the need for further testing beyond System Testing?
Give instances of the need for Top-down and Bottom-up integration tests.
Unit -> Integration -> System testing is the normal order followed. Can we change this order? Can we bypass either of the first two phases?
White-Box Testing
A test case design method that uses the control structure of the procedural design to derive test
cases.
- Guarantee that all independent paths within a module have been exercised at least once.
- Exercise all logical decision on their true and false values.
- Execute all loops at their boundaries and within their operational bounds.
- Exercise internal data structures to assure their validity.
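The bullets above can be illustrated with a tiny unit. `first_negative_index` is a hypothetical function chosen so that its control structure forces test cases for both decision outcomes and for the loop at zero, one, and many iterations.

```python
# White-box sketch: test cases derived from the control structure so the
# decision is exercised on both outcomes and the loop runs zero, one,
# and many times. `first_negative_index` is a hypothetical unit.

def first_negative_index(values):
    for i, v in enumerate(values):   # loop: 0, 1, or many iterations
        if v < 0:                    # decision: true and false branches
            return i
    return -1

# Loop executed zero times (empty input).
assert first_negative_index([]) == -1
# Decision always false (no negatives) - loop runs to completion.
assert first_negative_index([1, 2, 3]) == -1
# Decision true on the very first iteration (loop boundary).
assert first_negative_index([-5]) == 0
# Decision true within the loop's operational bounds.
assert first_negative_index([4, 7, -1, 9]) == 2
```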
Black-Box Testing
Focuses on the functional requirements of the software. It is not an alternative to white-box testing; instead it complements the WB testing technique. It attempts to find errors such as:
- Runtime errors (missing function definitions, etc.).
- Interface errors.
- Performance errors, and
- Initialization and termination errors.
Regression Testing
Test the effects of the newly introduced changes on all the previously integrated code.
The common strategy is to accumulate a comprehensive regression bucket but also to define a
subset.
The full bucket is run only occasionally, but the subset is run against every spin (build).
Disadvantages:
- Deciding how much of a subset to use, and
- which tests to select, is difficult.
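The full-bucket/subset strategy can be sketched as follows. The tests and the tagging scheme are hypothetical illustrations: critical-path tests are tagged for the per-build subset, while the full bucket is reserved for occasional runs.

```python
# Regression-bucket sketch: a comprehensive "full bucket" of test
# callables plus a tagged smoke subset run on every build. The tests
# and tags are hypothetical.

def test_login():      return True
def test_checkout():   return True
def test_reporting():  return True

FULL_BUCKET = {
    test_login:     {"smoke"},   # critical path: in the per-build subset
    test_checkout:  {"smoke"},
    test_reporting: set(),       # full-bucket only, run occasionally
}

def run(tests):
    # Returns True only if every selected test passes.
    return all(t() for t in tests)

smoke_subset = [t for t, tags in FULL_BUCKET.items() if "smoke" in tags]
assert len(smoke_subset) == 2    # the subset run against every spin
assert run(smoke_subset)         # quick per-build regression check
assert run(FULL_BUCKET)          # occasional full-bucket run
```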
Stress and Load Testing
Stress Testing
- Aimed at examining the behavior of the system under varied stress conditions.
- Stress can vary gradually, with definite incremental addition, and reach the maximum allowable limit.
- Normally the system is kept under the chosen stress condition for a longer duration.
Load Testing
- Aimed at examining the behavior of the system under half/full load conditions or any other specific % of full load.
- The load is suddenly applied and the system's response is observed.
Observations may include memory leaks, % utilisation of CPU and memory, etc.
These tests also help in benchmarking the system capacity
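A minimal sketch of the load-testing shape described above: a burst of concurrent requests is applied at once and the response time observed. The handler and the numbers are hypothetical; a real load test would target a deployed system and also record CPU/memory utilisation.

```python
# Load-test sketch (illustrative only): apply a sudden burst of
# concurrent requests to a hypothetical handler and observe the
# elapsed response time.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    time.sleep(0.01)          # stand-in for real request processing
    return n * 2

def apply_load(workers, requests):
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(handle_request, range(requests)))
    elapsed = time.monotonic() - start
    return results, elapsed

# Sudden load: 50 requests pushed through 10 concurrent workers.
results, elapsed = apply_load(workers=10, requests=50)
assert len(results) == 50     # every request was answered
assert elapsed < 5.0          # crude response-time observation
```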
Data Flow Testing
Tests the use of variables along different paths of program execution.
The most common types of errors occur because a variable is used before it is initialized or defined.
Global variables cause more problems than local variables.
Very expensive to perform; used mainly to test high-performance applications and high-risk applications.
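The use-before-definition anomaly mentioned above can be shown in a few lines. `apply_discount` is a hypothetical function with a deliberate defect: one execution path uses a variable that was never assigned, and only a test that drives that path exposes it.

```python
# Data-flow sketch: on the non-member path the variable `rate` is used
# before it is ever defined - a classic define/use anomaly. A test that
# drives that path exposes the defect as an UnboundLocalError.

def apply_discount(price, is_member):
    if is_member:
        rate = 0.9
    # BUG (deliberate): no definition of `rate` on the non-member path.
    return price * rate

# Path where `rate` is defined before use: behaves correctly.
assert apply_discount(100, True) == 90.0

# Path where `rate` is used before definition: the defect surfaces.
try:
    apply_discount(100, False)
    anomaly_found = False
except UnboundLocalError:
    anomaly_found = True
assert anomaly_found
```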
Equivalence Partitioning
- Point is “ON” the rectangle.
- Point is one of the vertices itself (special case of the above).
What should happen in these cases? Have these cases been taken care of by the developer? Boundary testing (BT) helps solve some problems of these types.
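The rectangle cases above can be sketched as boundary tests. `point_in_rect` is a hypothetical implementation that treats the boundary as inside; the tests pin down exactly the "on the edge" and "at a vertex" cases where defects typically hide.

```python
# Boundary-testing sketch for the point-in-rectangle example.
# This hypothetical implementation counts the boundary as inside;
# the tests make that decision explicit.

def point_in_rect(x, y, left, bottom, right, top):
    return left <= x <= right and bottom <= y <= top

# Interior point.
assert point_in_rect(5, 5, 0, 0, 10, 10)
# Point ON an edge of the rectangle.
assert point_in_rect(0, 5, 0, 0, 10, 10)
# Point at a vertex (special case of the above).
assert point_in_rect(10, 10, 0, 0, 10, 10)
# Just outside the boundary.
assert not point_in_rect(10.001, 10, 0, 0, 10, 10)
```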
Random Testing
Positive testing is a black box approach where the tests are chosen according to specs. Simple and
complex combination of VALID test cases and functions are used.
In contrast, stress or negative testing has the goal of showing how the program reacts to abnormal and even unspecified inputs or events. INVALID inputs are used, and exception handling is taxed to the limit. Crash testing and unfriendly-user testing are parts of this approach.
The crash test tries to bring the system down. The environment and test cases are made as abnormal as possible.
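The positive/negative split above can be sketched in a few lines. `parse_age` and its input range are hypothetical; the point is that negative tests deliberately feed INVALID inputs and check that exception handling holds up.

```python
# Positive vs. negative testing sketch. `parse_age` is a hypothetical
# input validator; the 0-150 range is an assumed spec.

def parse_age(text):
    age = int(text)               # raises ValueError on non-numeric input
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

# Positive test: VALID input chosen according to the spec.
assert parse_age("42") == 42

# Negative tests: INVALID inputs must be rejected, not silently accepted.
for bad in ["", "abc", "-1", "999", "4.5"]:
    try:
        parse_age(bad)
        rejected = False          # defect: invalid input was accepted
    except ValueError:
        rejected = True
    assert rejected, bad
```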
Q&A-4
The V-model pairs each development phase with the planning of a corresponding test level:
- Requirements Elicitation -> Acceptance Testing plan
- System Analysis -> System Testing plan
- Design -> Integration Testing plan
- Implementation -> Unit Testing plan
Test Planning
Test Strategy
A test case should contain the following attributes
- Test case Identity
- Title
- Pre-conditions
- Test setup
- Input parameters
- Procedure
- Expected output
- Special observations
- Mapping to requirements (optional)
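The attribute list above can be captured as a simple record. The field names mirror the list; the dataclass shape and the sample values are illustrative, not a standard format.

```python
# Test-case record sketch: field names mirror the attribute list above.
# The structure and sample values are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    tc_id: str                    # test case identity
    title: str
    preconditions: list
    test_setup: str
    input_parameters: dict
    procedure: list
    expected_output: str
    special_observations: str = ""
    requirement_ids: list = field(default_factory=list)  # optional mapping

tc = TestCase(
    tc_id="TC-001",
    title="Login with valid credentials",
    preconditions=["user account exists"],
    test_setup="test server running",
    input_parameters={"user": "alice", "password": "secret"},
    procedure=["open login page", "enter credentials", "submit"],
    expected_output="user is redirected to the dashboard",
    requirement_ids=["REQ-12"],
)
assert tc.tc_id == "TC-001" and tc.requirement_ids == ["REQ-12"]
```

Keeping the requirement mapping as a field makes it easy to trace which test cases cover which requirements when reporting results.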
Information:
- List of test cases with sequence number, TC ID, title, estimated time, actual time taken,
observations made, Procedure, expected output and result
- Version number and the name of the build used for testing
- Version number and the name of the components deployed
- Version number and the name of the other related components used in setting up the test
platform
- Information on the Defect or MR/CR raised against the failed test cases
A typical Test Execution Cycle
Unit Testing (step 1)
- Check-1: If the pass criteria are met, proceed with Check-2; else go to step 1.
- Check-2: If the test readiness review approves, go to step 2; else wait.
Integration Testing (step 2)
- Check-1: Is the condition for suspension met? If yes, wait; else proceed with Check-2.
- Check-2: If the pass criteria are met, proceed with Check-3; else go to step 2.
- Check-3: If the test readiness review approves, go to step 3; else wait.
System Testing (step 3)
- Check-1: Is the condition for suspension met? If yes, wait; else proceed with Check-2.
- Check-2: If the pass criteria are met, proceed with Check-3; else go to step 3.
- Check-3: Are all testing cycles completed? If yes, go to step 4; else repeat step 3.
Acceptance Testing (step 4)
Mitigation comprises the steps taken in advance to reduce the likelihood or impact of a risk.
Contingency is the action to be taken upon occurrence of a risk.
Unit Test Level
- Defect tracking
- Code complexity
- Code coverage – test effectiveness ratio
Statement coverage
Branch/ decision coverage
Decision-condition coverage
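The difference between statement and branch/decision coverage can be seen in a few lines. `sign_label` is a hypothetical unit: one test executes every statement, yet the decision's false outcome is never exercised, so branch coverage demands a second case.

```python
# Coverage-metric sketch: a single test can reach 100% statement
# coverage while leaving branch/decision coverage at 50%, because the
# `if` is only ever taken true. `sign_label` is a hypothetical unit.

def sign_label(x):
    label = "non-positive"
    if x > 0:           # decision with true and false outcomes
        label = "positive"
    return label

# This one input executes every statement (100% statement coverage)...
assert sign_label(5) == "positive"
# ...but the decision's false outcome was never taken. Branch/decision
# coverage requires this second test case as well:
assert sign_label(-3) == "non-positive"
```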
Integration Test Level
- Error rates in design
- Error rates in implementation
- Error rate in test design
- Test execution /progress metrics
- Time/Cost metrics
- Requirements churn metrics
System Test Level
- Test completeness metrics
- Defect arrival rate
- Cumulative defects by status
- Defect closure rate
- Reliability prediction
- Schedule tracking metrics
- Staff and resource tracking metrics
Validation Test Level
- Defect arrival rate
- Cumulative defects by status
- Defect closure rate
- Defect backlog by severity
- Cost of quality metrics (COQ & COPQ)
Qualities of a Good Tester
- Independent
- Customer perspective
- Testing intended functionalities
- Testing unintended functionalities
- Professionalism
Independent
- Independent from the developer. Why? Developers tend to be biased towards their own mistakes.
Customer Perspective
- Must be able to think from a customer's perspective. Why? Ultimately the customer is the one who will use the product and who brings in the revenue.
Testing Unintended Functionalities
- Sometimes called break-it testing (dirty testing). In this process the tester intentionally tries to make the code fail. Helps in detecting some special cases where the code may fail.
Professionalism
- Adhere to the principle of doing the right thing the right way.
- Do not be influenced by oral explanations or justifications given by developers for not reporting a defect.
- Report a defect or bug NOT for the sake of reporting BUT for the sake of getting it rectified.
- Do not make any assumptions in either reporting a bug or hiding it.
- Provide adequate and relevant information regarding the defect so that it becomes helpful for the developers to fix it within the expected time.
- Confirm thoroughly before raising a bug.
Testing - Misconceptions
The End