SE Unit4
1. Unit Testing:
Focuses verification on the smallest unit of software design: the individual module or component.
Addresses issues related to verification and program construction.
2. Integration Testing:
Focuses on design and the construction of the software architecture.
Uncovers errors associated with interfacing.
Two approaches: top-down integration and bottom-up integration.
A combined approach, called the sandwich strategy, is also an option.
3. Validation Testing:
Validates the constructed software against its requirements.
High-order tests ensuring that software meets functional, behavioral,
and performance requirements.
Key elements include:
Validation Test Criteria
Configuration Review
Alpha and Beta Testing
Alpha testing at the developer's site, Beta testing at end-user sites.
4. System Testing:
Tests software and other system elements as a whole.
Involves combining software with hardware, people, and databases.
Types of tests include:
Recovery testing
Security testing
Stress testing
Performance testing
Testing Tactics:
Goal: Find errors.
A good test is one with a high probability of finding errors.
Tests should not be redundant, and they should be neither too simple nor too complex.
Two Major Categories of Software Testing:
Black Box Testing: Examines fundamental aspects of a system, ensuring
each function of the product is operational.
White Box Testing: Examines internal operations and procedural details
of a system.
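To make the two categories concrete, a minimal Python sketch follows; the absolute() function and its test values are illustrative, not part of the syllabus material.

    def absolute(x):
        # Two internal paths: one per branch.
        if x < 0:
            return -x
        return x

    # Black box: check externally visible input/output behavior only.
    assert absolute(5) == 5
    assert absolute(-5) == 5

    # White box: pick inputs so that every internal branch is exercised.
    assert absolute(-1) == 1   # covers the x < 0 branch
    assert absolute(0) == 0    # covers the fall-through branch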
2. Equivalence Partitioning:
Example: If 0.0 <= x <= 1.0, valid test cases include x = 0.0 and x = 1.0, and invalid test cases include x = -0.1 and x = 1.1.
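A minimal Python sketch of this example; is_valid() is a hypothetical stand-in for the component under test.

    def is_valid(x):
        return 0.0 <= x <= 1.0

    # One representative per equivalence class, plus the named boundary values.
    valid_cases = [0.0, 0.5, 1.0]    # inside the valid partition
    invalid_cases = [-0.1, 1.1]      # just outside each boundary

    for x in valid_cases:
        assert is_valid(x), f"{x} should be accepted"
    for x in invalid_cases:
        assert not is_valid(x), f"{x} should be rejected"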
4. Orthogonal Array Testing:
Applied to problems with a relatively small input domain but too
large for exhaustive testing.
Reduces the number of test cases.
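For example, four factors with three values each would need 3^4 = 81 exhaustive test cases, but the standard L9 orthogonal array covers every pairwise combination of factor values in only 9 cases. A minimal Python sketch that hard-codes the L9 array and verifies its pairwise coverage:

    from itertools import combinations, product

    # The standard L9(3^4) orthogonal array: 9 rows over 4 three-valued factors.
    L9 = [
        (1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
        (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
        (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1),
    ]

    # Every pair of factor columns must exhibit all 3 x 3 = 9 value pairs.
    for c1, c2 in combinations(range(4), 2):
        seen = {(row[c1], row[c2]) for row in L9}
        assert seen == set(product((1, 2, 3), repeat=2)), (c1, c2)
    print("All factor pairs covered with 9 of 81 possible test cases.")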
c) Loop Testing:
Focuses on the validity of loop constructs.
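For a simple loop with a maximum of n passes, the classic cases are: skip the loop entirely, one pass, two passes, and n-1, n, and n+1 attempted passes. A minimal Python sketch, where sum_first() is a hypothetical loop under test:

    def sum_first(values, limit):
        # Sums at most 'limit' leading elements; the loop is bounded by 'limit'.
        total = 0
        for i, v in enumerate(values):
            if i >= limit:
                break
            total += v
        return total

    n = 5
    data = [1, 2, 3, 4, 5]
    # 0, 1, 2, n-1, n, and n+1 attempted passes through the loop body.
    for passes in (0, 1, 2, n - 1, n, n + 1):
        assert sum_first(data, passes) == sum(data[:min(passes, n)])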
Validation Testing
Validation testing is a crucial phase in the software development lifecycle,
irrespective of whether it involves conventional software, object-oriented software,
or web applications. The testing strategy remains consistent across these types of
software. When a software requirements specification is in place, it outlines the
validation criteria forming the basis for the validation-testing approach.
1. Test Plan:
Outlines the classes of tests to be conducted.
Defines the scope and objectives of the testing phase.
2. Test Procedure:
Defines specific test cases to ensure:
All functional requirements are satisfied.
Behavioral characteristics are achieved.
Content is accurate and properly presented.
Performance requirements are met.
Documentation is correct.
Usability and other requirements are fulfilled (e.g., transportability,
compatibility, error recovery, maintainability).
3. Validation Test Case Execution:
After each validation test case, one of two conditions exists:
1. The function or performance characteristic is accepted.
2. A deviation from the specification is found, and a deficiency list is
created.
4. Configuration Review (Audit):
An essential element of the validation process.
Ensures that all elements of the software configuration have been properly developed, are cataloged, and have the detail needed to support later maintenance.
1. Alpha Testing:
Conducted at the developer's site by a group of representative users.
Software is used in a natural setting, recording errors and usage
problems.
Conducted in a controlled environment.
Intended to uncover errors that only end users seem able to find.
2. Beta Testing:
Conducted at one or more end-user sites.
The developer generally is not present during beta testing.
A "live" application of the software in a real-world environment.
End-users record all encountered problems and report them to the
developer.
Purpose:
When custom software is delivered to a customer under contract, the customer conducts a series of acceptance tests to uncover errors before accepting the software.
Alpha and beta testing serve the same purpose when the software is a product intended for many customers.
System Testing
System testing is a comprehensive phase that consists of various tests, each serving a specific purpose. The primary goal is to fully exercise the computer-based system, verifying that all system elements have been properly integrated and perform their allocated functions.
1. Recovery Testing:
Objective: Verify that the system can recover from faults and resume
processing with minimal or no downtime.
Requirements: Many systems must be fault tolerant; that is, processing faults must not cause overall system failure.
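A minimal sketch of a recovery test in Python; the FlakyService class, its methods, and the two-second downtime ceiling are illustrative assumptions, not a real API.

    import time

    class FlakyService:
        # A stand-in for a system component that can fail and recover.
        def __init__(self):
            self.up = True
        def crash(self):
            self.up = False
        def restart(self):
            self.up = True        # a real system would also restore state here
        def is_up(self):
            return self.up

    service = FlakyService()
    service.crash()                # force a fault, as recovery testing requires

    start = time.monotonic()
    service.restart()              # exercise the recovery path
    elapsed = time.monotonic() - start

    assert service.is_up(), "system failed to resume processing"
    assert elapsed < 2.0, "recovery exceeded the allowed downtime"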
Debugging
Introduction:
Debugging occurs as a consequence of successful testing: when a test case uncovers an error, debugging is the process that removes it.
Debugging Outcomes:
Cause will be found and corrected.
Cause will not be found; in that case a cause is suspected, a test case is designed to validate the suspicion, and the work proceeds iteratively.
Characteristics of Bugs:
Symptom and cause can be in different locations.
Symptoms may be caused by human error or timing problems.
Human Trait:
Debugging is an innate human trait; some individuals are naturally
adept at it.
Debugging Strategies:
Objective: Find and correct the cause of a software error through
systematic evaluation, intuition, and luck.
Three strategies:
1. Brute Force Method:
Common but least efficient method.
Applied when other methods fail.
Involves memory dumps, run-time traces, and extensive use
of output statements.
Can lead to a waste of time and effort (a sketch of this style appears after the list).
2. Back Tracking:
A common debugging approach, useful for small programs.
Traces the source code backward from the site where the symptom is uncovered to the location of the cause.
3. Cause Elimination:
Uses induction or deduction and introduces the concept of binary partitioning.
A "cause hypothesis" is devised, and error data are used to prove or disprove it (see the second sketch below).
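A minimal sketch of the brute-force style in Python is shown first; average() is a hypothetical buggy function, and the scattered print statements stand in for run-time traces and output statements.

    def average(values):
        total = 0
        for v in values:
            total += v
            print(f"trace: v={v}, running total={total}")   # run-time trace
        print(f"trace: count={len(values)}")                 # output statement
        return total / len(values)                           # fails when values is empty

    try:
        average([])
    except ZeroDivisionError:
        print("trace shows count=0, pointing at the unguarded division")

A second sketch illustrates binary partitioning from cause elimination: the suspect input is repeatedly halved until the smallest failing case remains. The fails() predicate is a hypothetical stand-in for "the symptom appears."

    def fails(data):
        return "corrupt" in data            # stand-in for observing the symptom

    def localize(data):
        # Keep the half that still reproduces the symptom; discard the other.
        while len(data) > 1:
            mid = len(data) // 2
            left, right = data[:mid], data[mid:]
            data = left if fails(left) else right
        return data

    suspect = ["ok1", "ok2", "corrupt", "ok3"]
    print("smallest failing input:", localize(suspect))     # -> ['corrupt']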
Software Measurement
Categorization:
1. Direct Measure:
Software Process: Includes cost and effort.
Software Product: Includes lines of code, execution speed, memory
size, defects per reporting time period.
2. Indirect Measure:
Examines the quality of the software product itself (e.g.,
functionality, complexity, efficiency, reliability, and maintainability).
Reasons for Measurement:
Gain a baseline for future assessments.
Determine status with respect to the plan.
Predict size, cost, and duration estimates.
Improve product quality and process.
Metrics in Software Measurement:
Size-Oriented Metrics:
Derived by normalizing quality and productivity measures over the size of the software produced (see the sketch below).
Inputs include LOC, effort, cost, pages of documentation, errors, defects, and people.
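A minimal Python sketch of size-oriented normalization; every project figure below is a made-up illustrative number.

    loc = 12_100            # delivered lines of code
    effort_pm = 24          # effort in person-months
    cost = 168_000          # total cost
    errors = 134            # errors found before release
    defects = 29            # defects reported after release

    kloc = loc / 1000
    print(f"errors per KLOC:  {errors / kloc:.2f}")
    print(f"defects per KLOC: {defects / kloc:.2f}")
    print(f"cost per LOC:     {cost / loc:.2f}")
    print(f"LOC per person-month: {loc / effort_pm:.0f}")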
Function-Oriented Metrics:
Measure the functionality delivered by the application.
The most widely used metric is the function point (FP), which is independent of programming language (see the sketch below).
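A minimal Python sketch of a function point computation, using the standard formula FP = count total * (0.65 + 0.01 * sum(Fi)); the domain counts, the "average" complexity weights, and the Fi ratings are illustrative assumptions.

    average_weights = {
        "external_inputs": 4,
        "external_outputs": 5,
        "external_inquiries": 4,
        "internal_logical_files": 10,
        "external_interface_files": 7,
    }
    counts = {
        "external_inputs": 20,
        "external_outputs": 12,
        "external_inquiries": 8,
        "internal_logical_files": 4,
        "external_interface_files": 2,
    }

    count_total = sum(counts[k] * average_weights[k] for k in counts)

    # Fourteen value-adjustment factors Fi, each rated 0 (no influence) to 5.
    fi = [3, 4, 2, 5, 3, 3, 2, 4, 3, 3, 2, 4, 3, 3]
    fp = count_total * (0.65 + 0.01 * sum(fi))
    print(f"unadjusted count = {count_total}, FP = {fp:.1f}")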
Object-Oriented Metrics:
Relevant for object-oriented programming.
Based on the number of scenarios, key classes, support classes,
average support classes per key class, and subsystems.
Web-Based Application Metrics:
Measures include:
1. Number of static pages (NSP)
2. Number of dynamic pages (NDP)
3. Customization: C = NSP / (NSP + NDP); C should approach 1.
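Applying the customization ratio given above, with illustrative page counts:

    nsp = 30    # number of static pages (illustrative)
    ndp = 10    # number of dynamic pages (illustrative)

    c = nsp / (nsp + ndp)
    print(f"Customization C = {c:.2f}")   # 0.75 with these counts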