
Unit 5: Levels of Testing
Outline
⚫ Unit Testing
⚫ System Integration Testing
⚫ System Testing
⚫ Acceptance Testing
5.1. Unit Testing
• Unit testing refers to testing program units in isolation.

• A unit may be a function, a procedure, a method, or even a class.

• Unit testing removes dependencies on other program units and helps to verify that the unit alone produces the desired result.

• Intuitively, a programmer tests a unit by:

• Executing every line of code.

• Executing every predicate in the unit so that it evaluates to true and to false separately.

• Observing that the unit performs its intended function and ensuring that it contains no known errors.
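To make the two coverage criteria concrete, here is a minimal sketch; the function is invented for illustration and is not from the slides:

    # Hypothetical unit under test (invented): the tests below execute
    # every line and drive every predicate to true and false.
    def is_leap_year(year: int) -> bool:
        if year % 400 == 0:        # predicate 1
            return True
        if year % 100 == 0:        # predicate 2
            return False
        return year % 4 == 0       # predicate 3

    assert is_leap_year(2000) is True    # predicate 1 true
    assert is_leap_year(1900) is False   # predicate 1 false, predicate 2 true
    assert is_leap_year(2024) is True    # predicate 2 false, predicate 3 true
    assert is_leap_year(2023) is False   # predicate 3 false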

• Unit testing is conducted in two complementary phases:

• Static unit testing: non-execution based, performed via code review.

• Dynamic unit testing: execution based.


Cont.…
Static Unit Testing
• In static unit testing, code is reviewed by applying techniques commonly known as inspection and walkthrough.
Cont.…
Dynamic Unit Testing
• In dynamic unit testing, a program unit is actually executed in isolation.

• The environment for dynamic unit testing is created by emulating the context of the unit under test.

• The context of a unit test consists of two parts:

(i) a caller of the unit (the test driver) and (ii) all the units called by the unit (the stubs).

Test driver: a program that invokes the unit under test with input values.

Stub: a “dummy subprogram” that replaces a unit that is called by the unit under test.
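A minimal sketch of a driver and a stub, assuming a unit compute_total that depends on a called unit for tax computation (all names are invented for illustration):

    import unittest

    # Unit under test (invented): the called unit is injected so that a
    # stub can replace it during dynamic unit testing.
    def compute_total(price: float, tax_service) -> float:
        return price + tax_service.tax_for(price)

    class TaxServiceStub:
        """Stub: a dummy subprogram replacing the real called unit."""
        def tax_for(self, price: float) -> float:
            return 0.10 * price  # canned, predetermined value

    class ComputeTotalDriver(unittest.TestCase):
        """Test driver: invokes the unit under test with input values."""
        def test_total_includes_tax(self):
            self.assertAlmostEqual(compute_total(100.0, TaxServiceStub()), 110.0)

    if __name__ == "__main__":
        unittest.main()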
• Defect prevention: include suitable mechanisms in the code (see the sketch at the end of this section):
• Built-in tracking and tracing mechanisms.
• Handling of exceptions such as division by zero, array index out of bounds, buffer overflow or underflow, file not found, database connection errors, etc.
• Debugging: the process of determining the cause of a program failure and fixing it.
• In unit testing, a module contains the following:
• A sequence of statements.
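A sketch of the defect-prevention ideas above, assuming a small illustrative unit; the logging calls stand in for built-in tracing, and the handled exceptions match the list given earlier:

    import logging

    logging.basicConfig(level=logging.DEBUG)   # built-in tracing mechanism
    log = logging.getLogger(__name__)

    def average_from_file(path: str) -> float:
        # Invented example: guards against file-not-found, bad input,
        # and division by zero, as listed under defect prevention.
        try:
            with open(path) as f:                        # may raise FileNotFoundError
                values = [float(line) for line in f]     # may raise ValueError
            log.debug("read %d values from %s", len(values), path)
            if not values:
                raise ZeroDivisionError("empty file: average undefined")
            return sum(values) / len(values)
        except (FileNotFoundError, ValueError, ZeroDivisionError) as exc:
            log.error("average_from_file failed: %s", exc)
            raise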
5.2. System Integration Testing

• A software module, or component, is a self-contained element of a system.

• Modules have well-defined interfaces with other modules.

• A system is a collection of modules interconnected in a certain way to accomplish a tangible objective.

• System integration testing is a systematic technique for assembling a software system while conducting tests to uncover errors associated with interfacing.

• Integration testing is said to be complete when the system is fully integrated, all the test cases have been executed, all the severe and moderate defects found have been fixed, and the system has been retested.
5.2. System Integration Testing
Different Types of Interfaces

• Modules are interfaced with other modules to realize the system's functional requirements.

• Procedure call interface: a procedure in one module calls a procedure in another module; the caller passes control to the called module.

• Shared memory interface: a block of memory is shared between two modules.

• Message passing interface: one module prepares a message by initializing the fields of a data structure and sends the message to another module.
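A compact sketch of the three interface styles; the module contents are invented, only the mechanisms matter:

    import queue
    import threading

    # Procedure call interface: the caller passes control to the callee.
    def format_record(record: dict) -> str:          # called module
        return f"{record['id']}: {record['name']}"

    text = format_record({"id": 1, "name": "A"})     # caller passes control here

    # Shared memory interface: two modules read/write one memory block.
    shared_buffer = {"latest": None}
    lock = threading.Lock()

    def producer_writes(value):
        with lock:
            shared_buffer["latest"] = value

    producer_writes(7)   # another module would read shared_buffer["latest"]

    # Message passing interface: one module initializes the fields of a
    # data structure and sends it to another module through a channel.
    mailbox = queue.Queue()
    mailbox.put({"type": "EVENT", "payload": 42})    # sender
    message = mailbox.get()                          # receiver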

• Interface Errors

• Construction

• Inadequate Functionality

• Location of Functionality

• Added Functionality

• Misuse of Interface
• Misunderstanding of interface, etc.
The major advantages of conducting system integration testing are as follows:

• Defects are detected early.

• It is easier to fix defects detected earlier.

• We get earlier feedback on the health and acceptability of the individual modules and on the overall system.

• Scheduling of defect fixes is flexible, and it can overlap with development.
Granularity of System Integration Testing

• Intra-system testing: combining the modules to build a cohesive system.

• Inter-system testing: ensuring that the interactions between systems work. Integrating a call control system and a billing system in a telephone network is an example of inter-system testing.

• Pairwise system testing: ensuring that two systems under consideration can function together, for example, testing the communication between a network element (radio node) and the element management system.
5.2. System Integration Testing

System Integration Techniques

• Incremental: a series of test cycles in which new modules are added to the existing build.
• Top-down: the top-level modules are tested first, and integration then proceeds down the hierarchy.

[Figures: top-down integration of modules A and B, with C and D as stubs; top-down integration of modules A, B, and D; top-down integration of modules A, B, D, and C.]
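A hedged sketch of the top-down cycles in the figures: D is first a stub whose return value the tester precomputes, then the real module replaces it (the module bodies are invented):

    # A (top level) calls B; B calls D. Names follow the figures above.
    def module_d_stub(x):
        return 10                  # predetermined value chosen by the tester

    def module_d_real(x):
        return x * 2

    def module_b(x, module_d):
        return module_d(x) + 1

    def module_a(x, module_d=module_d_stub):
        return module_b(x, module_d)

    assert module_a(5) == 11                          # cycle 1: D stubbed
    assert module_a(5, module_d=module_d_real) == 11  # cycle 2: real D integrated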


5.2. System Integration Testing

Bottom-up integration begins with the integration of the lowest-level modules, which invoke no other modules.

[Figure: bottom-up integration of modules E, F, and G.]
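Since the lowest-level modules invoke no other modules, bottom-up integration needs test drivers but no stubs; a minimal sketch with an invented module E:

    import unittest

    def module_e(x):          # lowest-level module: calls nothing below it
        return x + 1

    class ModuleEDriver(unittest.TestCase):
        """Throwaway test driver for the lowest-level module."""
        def test_module_e(self):
            self.assertEqual(module_e(1), 2)

    if __name__ == "__main__":
        unittest.main()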

Sandwich and Big Bang

• The sandwich approach is a mix of the top-down and bottom-up approaches.

• In the big-bang approach, all modules are first tested individually; then all those modules are put together to construct the entire system, which is tested as a whole.


Compare the Top-Down and Bottom-Up approaches
• Validation of Major Design Decisions: The top-level modules contain major design decisions. Faults in design
decisions are detected early if integration is done in a top-down manner. In the bottom-up approach, those faults
are detected toward the end of the integration process.

• Observation of System-Level Functions: One applies test inputs to the top-level module, which is similar to
performing system-level tests in a very limited way in the top-down approach. This gives an opportunity to the
SIT personnel and the development team to observe system-level functions early in the integration process.
However, similar observations can be done in the bottom-up approach only at the end of system integration.

• Difficulty in Designing Test Cases: In the top-down approach, as more and more modules are integrated and
stubs lie farther away from the top-level module, it becomes increasingly difficult to design stub behavior and
test input. This is because stubs return predetermined values, and a test engineer must compute those values for a
given test input at the top level. In the bottom-up approach, by contrast, one can easily design the behavior of a test driver.

• Reusability of Test Cases: In the top-down approach, test cases designed to test the interface of a newly integrated module are reused in performing regression tests in the following iteration. Those test cases are reused as system-level test cases. However, in the bottom-up approach, all the test cases incorporated into test drivers, except for the top-level test driver, cannot be reused. The top-down approach thus saves resources in the form of time and money.
5.3. System Testing
• The objective of system-level testing is to establish whether an implementation conforms to the
requirements specified by the customers.
• Basic tests provide an evidence that the system can be installed, configured,
and brought to an operational state.
• Functionality tests provide comprehensive testing of requirements within the
capabilities of the system.
• Robustness tests determine how well the system recovers from various input
errors and other failure situations.
• Interoperability tests determine whether the system can interoperate with other third-party products.
• Performance tests measure the throughput and response time, under various
conditions.
• Scalability tests determine user scaling, geographic scaling, and resource
scaling.
• Stress tests put a system under stress in order to determine the limitations
of a system and, when it fails, to determine the manner in which the failure
occurs.
• Regression tests determine that the system remains stable as it
cycles through the integration of other subsystems and through maintenance
tasks.
Basic Tests
• The basic tests give evidence that the system is ready for more rigorous tests.
• The objective is to establish that there is sufficient evidence that a system can operate without trying
to perform thorough testing.
• Basic tests are performed to ensure that commonly used functions, not all of which may directly relate
to user-level functions, work to our satisfaction.
• Boot tests are designed to verify that the system can boot up its software image (or build) from the
supported boot options. The boot options include booting from ROM, FLASH card, and PCMCIA
(Personal Computer Memory Card International Association) card.
• Upgrade/downgrade tests are designed to verify that the system software can be upgraded or
downgraded (rollback) in a graceful manner from the previous version to the current version or vice
versa.
• An upgrade process may fail because of a number of different conditions: user-invoked abort (the user interrupts the upgrade process), in-process network disruption (the network environment goes down), in-process system reboot (there is a power glitch), or self-detection of upgrade failure (due to such things as insufficient disk space and version incompatibilities).
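A rough sketch of self-detecting one of these failure conditions and rolling back gracefully; the helper and the threshold are invented, only shutil.disk_usage is a real library call:

    import shutil

    REQUIRED_BYTES = 500 * 1024 * 1024   # illustrative space requirement

    def upgrade(install_step, rollback_step, path="/"):
        # Self-detection of upgrade failure: refuse on low disk space.
        if shutil.disk_usage(path).free < REQUIRED_BYTES:
            raise RuntimeError("upgrade refused: insufficient disk space")
        try:
            install_step()
        except BaseException:        # user abort, disruption, reboot signal
            rollback_step()          # graceful rollback to previous version
            raise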
Cont.…
• The LED (light emitting diode) tests are designed to verify that the system LED status indicators
function as desired.
• The LED tests are designed to ensure that the visual operational status of the system and the
submodules are correct.
Examples of LED tests at the system and subsystem levels are as follows:
• System LED test: green = OK, blinking green = fault, off = no power.
• Ethernet link LED test: green=OK, blinking green=activity, off=fault.
• Cable link LED test: green = OK, blinking green = activity, off = fault.
• User-defined T1 line card LED test: green=OK, blinking green=activity, red = fault, off = no power.

• Diagnostic tests are designed to verify that the hardware components (or modules) of the system
are functioning as desired.
• Diagnostic tests monitor, isolate, and identify system problems without manual troubleshooting.
Some examples of diagnostic tests are as follows:
• Power-On Self-Test(POST), Ethernet Loop-Back Test, Bit Error Test (BERT).

• The CLI (command-line interface) tests are designed to verify that the system can be configured, or provisioned, in specific ways.
Functionality Tests
• Communication systems tests are designed to verify the implementation of the communication systems as specified in the customer requirements specification.
• Module tests are designed to verify that all the modules function individually as desired within the system.
• Logging and tracing tests are designed to verify the configurations and operations of logging and tracing.
• Element management: the EMS tests verify the main functions, which are to manage, monitor, and upgrade the communication system's network elements.
• Management information base: the MIB tests are designed to verify (i) standard MIBs, including MIB II, and (ii) enterprise MIBs specific to the system.
• Graphical user interface: the GUI tests are designed to verify the interface to the users of an application. These tests verify different components (objects) such as icons, menu bars, dialogue boxes, scroll bars, list boxes, and radio buttons.
• Security tests are designed to verify that the system meets the security requirements: confidentiality, integrity, and availability.
Robustness Tests
Robustness means how sensitive a system is to erroneous input and changes in its operational
environment.
• Boundary value tests are designed to cover boundary conditions, special values, and system defaults (a sketch follows this list).
• Power cycling tests are executed to ensure that, when there is a power glitch in a deployment
environment, the system can recover from the glitch to be back in
normal operation after power is restored.
• On-line insertion and removal (OIR) tests are designed to ensure that online insertion and
removal of modules, incurred during both idle and heavy load operations, are gracefully handled
and recovered.
• High-availability tests are designed to verify the redundancy of individual modules, including the
software that controls these modules. The goal is to verify that the system gracefully and quickly
recovers from hardware and software failures without adversely impacting the operation of the
system. The concept of high availability is also known as fault tolerance.
• Degraded node (also known as failure containment) tests verify the operation of a system after a portion of the system becomes nonoperational. This is a useful test for all mission-critical applications.
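A minimal boundary value sketch for the first bullet in this list; the unit and its valid range 0–100 are invented:

    import unittest

    def accept_percentage(value: int) -> bool:
        # Invented unit: valid inputs are 0..100 inclusive.
        return 0 <= value <= 100

    class BoundaryValueTests(unittest.TestCase):
        def test_each_boundary_and_its_neighbors(self):
            # Test at each boundary, just inside it, and just outside it.
            for value, expected in [(-1, False), (0, True), (1, True),
                                    (99, True), (100, True), (101, False)]:
                self.assertEqual(accept_percentage(value), expected)

    if __name__ == "__main__":
        unittest.main()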
5.4. Acceptance Testing
 Acceptance Testing: A product is ready to be delivered to the customer after the system test
group is satisfied with the product by performing system-level tests.

 Customers execute acceptance tests based on their expectations from the product.

 The services offered by a software product may be used by millions of users.

 Acceptance testing is formal testing conducted to determine whether a system satisfies its acceptance criteria, that is, the criteria the system must satisfy to be accepted by the customer.

 It helps the customer to determine whether or not to accept the system.

 Types of Acceptance Testing: There are two categories of acceptance testing:

 User acceptance testing: the UAT is conducted by the customer to ensure that the system satisfies the contractual acceptance criteria before being signed off as meeting user needs.

 Business acceptance testing: the BAT is undertaken within the development organization of the supplier to ensure that the system will eventually pass the user acceptance testing.
5.4.1 Acceptance Criteria
 Acceptance Criteria: The acceptance criteria must be defined and agreed upon between the
supplier and the customer to avoid any kind of protracted arguments.

 The acceptance criteria document is a part of the contract in the case of an outsourced
development under the OEM(original equipment manufacturer) agreement.
 In general, the marketing organization of the buyer defines the acceptance criteria.

 It is useful to focus on the following three major objectives of acceptance testing for pragmatic
reasons:
Confirm that the system meets the agreed-upon criteria.
Identify and resolve discrepancies, if there are any; the sources of the discrepancies and the mechanisms for resolving them must be determined.
Determine the readiness of the system for cut-over to live operations. The final acceptance of a system for deployment is conditioned upon the outcome of the acceptance testing. The acceptance test team produces an acceptance test report which outlines the acceptance conditions.
Cont.….
• Acceptance criteria are defined on the basis of multiple facets of quality attributes. These attributes determine the presence or absence of quality in a system.

• Buyers, or customers, should think through the relevance and relative importance of these attributes in their unique situation at the time of formulating the acceptance criteria.

• There are five views of quality, namely, the transcendental view, the user view, the manufacturing view, the product view, and the value-based view:

 The transcendental view sees quality as something that can be recognized but is difficult to describe.

 The user view sees quality as satisfying the purpose.

 The manufacturing view sees quality as conforming to the specification.

 The product view ties quality to the inherent characteristics of the product.

 The value-based view puts a cost figure on quality: the amount a customer is willing to pay for it.
Acceptance Criteria Quality Attributes:
• Functional Correctness and Completeness
• Accuracy
• Data Integrity
• Backup and Recovery
• Competitive Edge
• Usability
• Performance
• Start-Up Time
• Stress
• Reliability and Availability
• Maintainability and Serviceability
• Robustness
• Timeliness
• Confidentiality and Availability
• Compatibility and Interoperability
• Compliance
• Installability and Upgradability
• Scalability
• Documentation
Selection of Acceptance Criteria
 Selection of Acceptance Criteria: The customer needs to select a subset of the quality
attributes and prioritize them to suit their specific situation.
 Next, the customer identifies the acceptance criteria for each of the selected quality attributes.
When the customer and the software vendor reach an agreement on the acceptance criteria,
both parties must keep in mind that satisfaction of the acceptance criteria is a trade-off
between time, cost, and quality.

 Only business goals and priorities can determine the degree of “less than perfect” that is
acceptable to both the parties. Ultimately, the acceptance criteria must be related to the
business goals of the customer’s organization.

E.g., usability and maintainability take precedence over performance and reliability for word-processing software. On the other hand, it might be the other way around for a real-time operating system or telecommunications software.
Acceptance Test Plan
• Acceptance Test Plan : Planning for acceptance testing begins as soon as the acceptance
criteria are known. Early development of an acceptance test plan (ATP) gives us a good
picture of the final product.

• The purpose of an ATP is to develop a detailed outline of the process to test the system prior to
making a transition to the actual business use of the system.

• Often, the ATP is delivered by the vendor as a contractual agreement, so that the business
acceptance testing can be undertaken within the vendor’s development organization to ensure
that the system eventually passes the acceptance test.

• In developing an ATP, emphasis is put on demonstrating that the system works according to the customer's expectations, rather than on passing a set of comprehensive tests.
• An ATP needs to be written and executed by the customer’s special user group.
Acceptance Testing-Outline of ATP

1. Introduction
2. Acceptance test category. For each category of acceptance criteria:
(a) Operational environment
(b) Test case specification
(i) Test case ID number
(ii) Test title
(iii) Test objective
(iv) Test procedure
3. Schedule
4. Human resources
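One way to make item 2(b) concrete is to record each test case as a small structured record whose fields mirror (i)–(iv); the format below is an illustration, not a prescribed one:

    from dataclasses import dataclass, field

    @dataclass
    class AcceptanceTestCase:
        case_id: str                 # (i)   test case ID number
        title: str                   # (ii)  test title
        objective: str               # (iii) test objective
        procedure: list = field(default_factory=list)   # (iv) test procedure

    example = AcceptanceTestCase(
        case_id="ATC-001",
        title="Operator login",
        objective="Confirm the system meets the agreed login criterion",
        procedure=["Start the client", "Enter valid credentials",
                   "Verify access is granted"],
    )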
Acceptance Test Execution
• Acceptance test execution: the activity includes the following detailed actions:

• The developers train the customer on the usage of the system.

• The developers and the customer coordinate the fixing of any problem discovered during
acceptance testing.

• The developers and the customer resolve the issues arising out of any acceptance criteria
discrepancy.
[Table: information contained in an ACC (acceptance criteria change) document.]

Acceptance Test Report

[Figure: structure of an acceptance test status report.]
Cont.…

Structure of Acceptance Test Summary Report

1. Report identifier
2. Summary
3. Variances
4. Summary of results
5. Evaluation
6. Recommendations
7. Summary of activities
8. Approval
