Qat Notes

ISO-9126

The ISO 9126 Quality Model was developed by the International Organization for
Standardization (ISO) to provide a framework for evaluating software quality. This
model defines six key characteristics that cover different aspects of software quality:
1. Functionality: This characteristic assesses whether the software performs its
intended tasks correctly.
2. Reliability: Reliability measures the consistency of the software's
performance. It examines whether the software can operate without failures
under specified conditions.
3. Usability: Usability evaluates how easy and pleasant the software is to use.
This includes the user interface design, navigation, and overall user
experience.
4. Efficiency: Efficiency looks at the software's performance in relation to the
resources it uses. This involves assessing whether the software can perform its
functions quickly and without consuming excessive memory, processing
power, or battery life.
5. Maintainability: Maintainability considers how easy it is to fix bugs and make
updates to the software. This characteristic is crucial for developers as it
impacts the ease of modifying the software to correct issues, improve
performance, or adapt to new requirements. Software with high
maintainability allows for quick and cost-effective updates and improvements.
6. Portability: Portability measures the software's ability to operate in different
environments. This means the software should be able to run on various
platforms, such as different operating systems or devices, without requiring
significant changes.

Verification and validation:


1. Verification:
• Verification activities assess whether the product of a development
phase (such as requirement specification, design, code, etc.) meets the
requirements established before that phase.
• It focuses on checking the correctness of interim work products and
ensuring consistency, completeness, and correctness at each stage of
system development.
• Verification activities primarily involve static analysis techniques like
inspection, walkthrough, reviews, and sometimes dynamic analysis
through actual program execution.
• The goal of verification is to confirm that the product is being built
correctly according to specifications.
2. Validation:
• Validation activities confirm whether the final product meets its
intended use and customers' expectations.
• It focuses on assessing the product from the user's perspective and
ensuring it meets overall expectations.
• Validation is typically performed towards the end of the development
cycle, although early execution, as seen in methodologies like eXtreme
Programming (XP), can reduce risks and costs.
• Validation involves running the entire system in its real environment
and conducting various tests to ensure it meets customer needs.
• The goal of validation is to confirm that the correct product is being
built, satisfying users' needs and expectations.

Difference between them:


1. Objective:
• Verification: Ensures the product is built correctly.
• Validation: Ensures the correct product is built.
2. Focus:
• Verification: Interim work products.
• Validation: Final product.
3. Timing:
• Verification: Throughout development.
• Validation: Towards the end of development.
4. Approach:
• Verification: Static analysis.
• Validation: Dynamic analysis.
5. Activity:
• Verification: Checks adherence to specifications.
• Validation: Checks user expectations.
6. Purpose:
• Verification: Confirms correctness.
• Validation: Confirms suitability.
7. Feedback:
• Verification: Early detection of defects.
• Validation: Feedback from end-users.
8. Scope:
• Verification: Focuses on developer's viewpoint.
• Validation: Focuses on user's viewpoint.
9. Method:
• Verification: Inspections, walkthroughs, reviews.
• Validation: Testing in real environment.
10. Outcome:
• Verification: Quality of interim deliverables.
• Validation: Satisfaction of end-users.
11. Execution:
• Verification: Mostly performed by developers.
• Validation: Involves end-users or stakeholders.
12. Activities:
• Verification: Reviewing requirements, designs, and code.
• Validation: User acceptance testing, beta testing.

Fault, defect, failure, error:


1. Failure: This occurs when a system, product, or service doesn't perform as
expected or desired. It's the inability of a system or component to perform its
required functions within specified performance requirements.
2. Fault: A fault is an abnormal condition or defect at the component,
equipment, or sub-system level that may lead to a failure. Faults are typically
the root cause of failures.
3. Error: An error is a discrepancy between a computed, observed, or measured
value and the true, specified, or theoretically correct value. Errors can occur
due to various reasons such as human mistakes, environmental conditions, or
system limitations.
4. Defect: Defects are imperfections or flaws in a product or system that can
potentially lead to failures or malfunctions. These could be design defects,
manufacturing defects, or even operational defects.

Types of errors
1. Syntax Errors: Mistakes in the language's syntax.
2. Logic Errors: Flaws in the algorithm or logic.
3. Runtime Errors: Issues that occur while the program is running.
4. Semantic Errors: Errors in the meaning or interpretation of code.
5. Compilation Errors: Problems encountered during code compilation.
6. Arithmetic Errors: Incorrect mathematical calculations.
7. Input Errors: Incorrect or invalid input provided to a system or program.
8. Boundary Errors: Errors that occur at the boundaries of systems or data
ranges.
9. Intermittent Errors: Errors that occur sporadically or under specific conditions.
10. Human Errors: Mistakes made by humans during development or operation.

OBJECTIVES OF TESTING:
The stakeholders in a test process include programmers, test engineers, project
managers, and customers. They have different perspectives on testing:
1. It works: At first, programmers check if each part of the software works as
expected. This builds confidence. When the whole system is ready, they ensure it
performs its basic functions. The main goal is to prove the system works reliably.

2. It doesn't work: After confirming basic functions, they look for weaknesses. They
intentionally try to make parts of the system fail. This helps find areas for
improvement and makes the system stronger.

3. Reduce failure risk: Complex software often has faults causing occasional failures.
By finding and fixing faults during testing, the chance of system failure decreases over
time. The aim is to make sure the system is reliable.
4. Cut testing costs: Testing has many costs like designing tests, analyzing results, and
documenting everything. Efforts are made to do testing efficiently, reducing costs
while still being effective.

Activities of testing:
1. Identify Objective: First, they determine what they want to test, ensuring each
test case has a clear purpose.
2. Select Inputs: Next, they choose inputs based on requirements, code, or
expectations, keeping the test objective in mind.
3. Compute Expected Outcome: They predict the program's expected outcome
based on the chosen inputs and the test objective.
4. Set up Execution Environment: They prepare the environment needed for the
program to run, fulfilling any external requirements like network connections
or database access.
5. Execute the Program: Then, they run the program with the selected inputs,
observing the actual outcome. This may involve coordinating inputs across
different components.
6. Analyze Test Result: Finally, they compare the actual outcome with the
expected one. If they match and the test objective is met, it's a pass. If not, it's
a fail. Sometimes, the result is inconclusive, requiring further testing for a clear
verdict.
In simpler terms, they plan what to test, pick inputs, predict what should happen, set
up the program environment, run the test, compare what happens with what's
expected, and decide if it passed, failed, or needs more testing.
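A minimal, pytest-style sketch of how these six activities map onto a single automated test case. The square_root function and the chosen values are invented for this illustration; they are not taken from the notes.

```python
import math

def square_root(x):
    # Stand-in for the system under test (hypothetical).
    return math.sqrt(x)

def test_square_root_of_positive_number():
    # 1. Identify objective: verify square_root() for a positive input.
    # 2. Select inputs.
    test_input = 9.0
    # 3. Compute expected outcome.
    expected = 3.0
    # 4. Set up execution environment (nothing is needed for this pure function).
    # 5. Execute the program with the selected input and observe the actual outcome.
    actual = square_root(test_input)
    # 6. Analyze the test result: compare the actual outcome with the expected one.
    assert actual == expected, f"expected {expected}, got {actual}"

if __name__ == "__main__":
    test_square_root_of_positive_number()
    print("PASS")
```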

Levels of testing:
Testing is a crucial part of software development, involving various stages and
participants. There are four main stages of testing: unit, integration, system, and
acceptance testing. Each stage serves a specific purpose and involves different
stakeholders.
• Unit Testing: Programmers test individual units of code in isolation, like
functions or classes. The goal is to ensure that each unit works correctly on its
own.
• Integration Testing: Developers and integration test engineers assemble units
to create larger subsystems. The aim is to construct a stable system that can
undergo further testing.
• System Testing: This encompasses a wide range of tests, including
functionality, security, performance, and reliability. It's a critical phase where
the system's readiness for deployment is assessed.
• Acceptance Testing: Conducted by customers, this ensures that the system
meets their expectations and contractual criteria. It focuses on measuring the
quality of the product rather than searching for defects.
Regression testing is another important aspect, performed whenever a system
component is modified. Its goal is to ensure that modifications haven't introduced
new faults. This process is integrated into unit, integration, and system testing
phases.
Acceptance testing comes in two forms: User Acceptance Testing (UAT) and Business
Acceptance Testing (BAT). UAT is performed by customers to validate that the system
meets their needs according to the contract. BAT, conducted internally by the
supplier, ensures that the system is likely to pass UAT.
Overall, testing ensures that software meets quality standards, satisfies user needs,
and functions as expected throughout its development lifecycle.

White box and black box testing:


The process of designing test cases involves gathering information from various
sources like the program's specification, source code, and input-output domains. This
helps in ensuring comprehensive testing. There are two main approaches to testing
based on the sources of information: white-box (structural) testing and black-box
(functional) testing.
• White-box Testing (Structural Testing): This approach involves examining the
source code, focusing on control flow and data flow. Control flow refers to how
instructions lead from one to another, including conditional statements and
function calls. Data flow deals with how values move between variables. It's
like understanding the internal workings of the program.
• Black-box Testing (Functional Testing): In this approach, testers don't look at
the internal code. They treat the program as a black box, only considering
inputs and outputs. Testers apply inputs, observe outputs, and compare them
to expected outcomes based on the program's specification. It's more about
assessing the program's functionality.
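A small, hypothetical illustration of the two viewpoints; the grade function and its pass threshold of 40 are invented for this sketch.

```python
def grade(score):
    # Hypothetical function under test: classify an exam score.
    if score >= 40:
        return "pass"
    return "fail"

# Black-box test: derived from the specification alone (inputs and expected
# outputs), with no reference to the internal code structure.
def test_grade_black_box():
    assert grade(75) == "pass"
    assert grade(10) == "fail"

# White-box test: derived from the control flow of the code, chosen so that
# both branches of the if statement are executed.
def test_grade_white_box_branches():
    assert grade(40) == "pass"   # true branch, at the boundary
    assert grade(39) == "fail"   # false branch
```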
Differences between black-box testing (BBT) and white-box testing (WBT), by topic:
1. Focus:
• White-box Testing: Focuses on internal code structure, logic, and
implementation details.
• Black-box Testing: Focuses on external system behaviour, without
considering internal code structure or implementation.
2. Knowledge Requirement:
• White-box Testing: Requires knowledge of the internal workings of the
software, including code and architecture.
• Black-box Testing: Does not require knowledge of internal code details,
making it accessible to testers with varying technical backgrounds.
3. Test Case Derivation:
• White-box Testing: Test cases are derived from an understanding of the
source code and internal design.
• Black-box Testing: Test cases are derived from requirements,
specifications, or user scenarios.
4. Testing Techniques:
• White-box Testing: Involves techniques like statement coverage, branch
coverage, path coverage, and condition coverage.
• Black-box Testing: Involves techniques like equivalence partitioning,
boundary value analysis, decision tables, and state transition testing.
5. Objective:
• White-box Testing: Objective is to verify the correctness of individual
components, functions, or algorithms based on internal logic.
• Black-box Testing: Objective is to validate the system's functionality
against specified inputs and expected outputs, focusing on user
requirements.
6. Testing Level:
• White-box Testing: Often performed at the unit testing level, focusing
on individual units of code.
• Black-box Testing: Can be performed at various levels, including system
testing, integration testing, and acceptance testing.
7. Suitability:
• White-box Testing: Suitable for uncovering coding errors, ensuring code
maintainability, and assessing code quality.
• Black-box Testing: Suitable for uncovering functional defects, usability
issues, and ensuring that the system meets user expectations.
8. Perspective:
• White-box Testing: Developer-centric perspective, focusing on code
internals and technical aspects.
• Black-box Testing: User-centric perspective, focusing on system
behaviour, interfaces, and user interactions.
9. Tools and Techniques:
• White-box Testing: Involves tools and techniques like code reviews,
static analysis, code coverage tools, and debugging.
• Black-box Testing: Involves tools and techniques like test automation
frameworks, test management tools, and exploratory testing
approaches.
10. Complexity Handling:
• White-box Testing: May struggle with handling complexity in large-scale
systems due to the need for extensive internal knowledge.
• Black-box Testing: Better suited for testing complex or large-scale
systems where internal implementation details are complex or
proprietary.

Software testing and its tools benefits:


Software testing is typically a labour-intensive task, involving manual generation,
execution, and analysis of test cases. However, the use of appropriate tools and test
automation can significantly improve efficiency and effectiveness, leading to several
benefits:
1. Increased Productivity of Testers through Automation
2. Comprehensive Regression Testing Coverage
3. Faster Testing Cycles with Automation
4. Cost Savings and Efficiency Gains
5. Development of More Effective Test Cases
6. Boosted Morale and Skill Development for Test Engineers
7. Consistency and Reproducibility in Test Results
8. Efficient Debugging Enabled by Detailed Test Logs
9. Accelerated Time to Market for Software Products
10. Long-Term Efficiency and Maintenance Cost Reduction
11. Ability to Test Complex Scenarios Easily
12. Retention of Manual Testing for Human-Centric Testing Needs

Outline for control flow testing:


1. Inputs: You start with the source code of a program and a set of criteria for
selecting paths through that code. For example, you might want to ensure that
every statement in the code is executed at least once, or that every possible
outcome of a conditional statement (like an if statement) is tested.
2. Control Flow Graph (CFG): A control flow graph is a visual representation of all
the possible paths through the program. It helps you understand how the
program's logic flows from one statement to another. You create this graph to
see all the potential routes through the code.
3. Selection of Paths: Once you have the CFG, you select paths through it based
on your criteria. You want to make sure you cover all the important parts of
the code, including all the conditional statements.
4. Generation of Test Input Data: For each selected path, you need to find input
values that will cause the program to follow that path. This means finding
values that will make all the conditional statements along the path evaluate to
true or false, depending on the flow of the program.
5. Feasibility Test of a Path: After selecting paths and generating test input data,
you check if each path is feasible. A feasible path is one that can actually be
executed with valid input values. If you find any paths that are infeasible,
meaning they can't be executed with any valid input values, you might need to
adjust your criteria and select different paths.
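The pytest-style sketch below walks through this outline for an invented two-branch function; the node numbering of the control flow graph is purely illustrative.

```python
def maximum(a, b):          # node 1: entry
    if a > b:               # node 2: decision
        result = a          # node 3
    else:
        result = b          # node 4
    return result           # node 5: exit

# Selected paths through the control flow graph, written as node sequences.
PATHS = {
    "true_branch":  [1, 2, 3, 5],
    "false_branch": [1, 2, 4, 5],
}

# Test input data generated so that each selected path is executed.
# Both paths are feasible: valid inputs exist that force each branch outcome.
def test_true_branch_path():
    assert maximum(7, 3) == 7      # a > b is true  -> path 1-2-3-5

def test_false_branch_path():
    assert maximum(2, 9) == 9      # a > b is false -> path 1-2-4-5
```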

Paths in control flow graph:


In a Control Flow Graph (CFG), each node represents a unique point in the program's
execution, and edges between nodes represent the flow of control from one point to
another. An entry node marks the starting point of the program, and an exit node
marks the end.
To identify entry-exit paths in a CFG, we look for sequences of nodes that start from
the entry node and end at the exit node. These paths represent different ways the
program can be executed.
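As a sketch, entry-exit paths of a small acyclic CFG can be enumerated with a depth-first search over an adjacency list; the five-node graph below is hypothetical.

```python
# Hypothetical acyclic control flow graph as an adjacency list.
CFG = {
    1: [2],        # entry node
    2: [3, 4],     # decision node with two outgoing edges
    3: [5],
    4: [5],
    5: [],         # exit node
}

def entry_exit_paths(graph, entry, exit_node):
    """Collect every path from the entry node to the exit node (acyclic graphs only)."""
    paths = []

    def dfs(node, path):
        path = path + [node]
        if node == exit_node:
            paths.append(path)
            return
        for successor in graph[node]:
            dfs(successor, path)

    dfs(entry, [])
    return paths

print(entry_exit_paths(CFG, 1, 5))   # [[1, 2, 3, 5], [1, 2, 4, 5]]
```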

Path selection criteria


The concept of path selection criteria is essential in testing because it helps
programmers focus their testing efforts on a manageable set of paths that are likely
to reveal defects in the code. Here's a breakdown of some well-known path selection
criteria and their advantages:
1. Select all paths:
• This criterion involves testing every single path through the program.
• Advantages:
• Ensures thorough coverage of the code.
• Guarantees that every possible execution scenario is tested.
• However, it may not be practical for programs with a large number of
paths, as testing all paths could be time-consuming and resource-
intensive.
2. Select paths to achieve complete statement coverage:
• This criterion aims to execute every statement in the code at least once.
• Advantages:
• Ensures that every line of code is exercised during testing.
• Helps identify unexecuted or dead code segments.
• However, it may not cover all decision points and branches within the
code.
3. Select paths to achieve complete branch coverage:
• This criterion focuses on executing every possible branch or decision
point in the code.
• Advantages:
• Ensures that every possible outcome of conditional statements is
tested.
• Helps uncover logic errors related to decision-making.
• However, it may not cover all possible paths through nested conditions
or loops.
4. Select paths to achieve predicate coverage:
• Predicate coverage aims to test all Boolean conditions in the code.
• Advantages:
• Ensures that every condition evaluates to both true and false
during testing.
• Helps identify errors related to conditional logic.
• However, it may not cover all possible combinations of conditions,
especially in complex conditional expressions.
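A tiny invented example of the gap between criteria 2 and 3: one test achieves complete statement coverage, but complete branch coverage needs a second test that skips the decision.

```python
def total_price(amount, is_large_order):
    # Hypothetical function: add a flat shipping fee to small orders only.
    price = amount
    if not is_large_order:
        price = amount + 5      # flat shipping fee
    return price

# Complete statement coverage: a single test that takes the true branch of the
# decision executes every statement, yet never exercises the skipped-branch case.
def test_statement_coverage_only():
    assert total_price(100, False) == 105

# Complete branch coverage: both outcomes of the decision are exercised, which
# would also reveal a fault such as the fee being added unconditionally.
def test_branch_coverage():
    assert total_price(100, False) == 105
    assert total_price(100, True) == 100
```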

Terms in data flow:


1. Global c-use: A c-use (computation use) of a variable in a node is considered
global if the variable has been defined before in a node other than the current
one. For instance, if a variable is defined in node 2 and used in node 9, the use
in node 9 is a global c-use.
2. Definition Clear Path (def-clear path): A path from one node to another is
called a def-clear path with respect to a variable if the variable has not been
defined or undefined in any of the intermediate nodes. This path does not
concern the status of the variable at the start and end nodes and may include
loops.
3. Global Definition: A node has a global definition of a variable if it defines the
variable and there is a def-clear path to either a node containing a global c-use
of the variable or an edge containing a predicate use of the variable.
4. Simple Path: A path where all nodes, except possibly the first and last, are
distinct.
5. Loop-Free Path: A path where all nodes are distinct.
6. Complete Path: A path from the entry node to the exit node.
7. Definition-Use Path (du-path): A path from a node with a global definition of a
variable to a node with a global c-use of the variable, or to an edge with a
predicate use of the variable, satisfying specific conditions regarding def-clear
paths.
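A small, invented fragment annotated with some of these terms for the variable total; the node numbers are purely illustrative.

```python
def scale(values, factor):
    total = 0                    # node 1: definition of total
    for v in values:             # node 2: loop condition (treated as a predicate use of values)
        total = total + v        # node 3: c-use of total and v, plus a new definition of total
    result = total * factor      # node 4: global c-use of total defined in node 3
    return result                # node 5: exit

# The path 3 -> 2 -> 4 is a du-path for total: node 3 holds a global definition,
# node 4 holds a global c-use, and no intermediate node redefines total, so the
# path is def-clear with respect to total.
```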
Types of interface errors:
1. Construction Errors: These occur when programmers overlook interface
specifications while writing code, often due to inappropriate use of #include
statements in languages like C.
2. Inadequate Functionality: Errors caused by assumptions that one part of the
system would perform a function, which isn't fulfilled by another part. This can
result from unintentional oversights or misunderstanding of system
functionality.
3. Location of Functionality: Disagreements or misunderstandings about where a
function should reside in the software. This can occur due to design
methodology issues or lack of experience among personnel.
4. Changes in Functionality: Modifying one module without adjusting related
modules accordingly, causing unexpected effects on program behavior.
5. Added Functionality: Errors introduced when new functionality is added
without proper documentation or change requests.
6. Misuse of Interface: Errors arising when one module incorrectly uses the
interface of another module, such as passing the wrong parameters or in the
wrong order.
7. Misunderstanding of Interface: Occurs when a calling module misunderstands
the interface specification of a called module, leading to incorrect assumptions
about input parameters or conditions.
8. Data Structure Alteration: Issues arising from inadequate data structure
design, such as insufficient size or fields to accommodate required
information.
9. Inadequate Error Processing: Failure to handle error codes properly returned
by called modules, leading to errors being overlooked or mishandled.
10. Additions to Error Processing: Errors introduced when changes in other
modules necessitate modifications to error handling, but the handling
techniques are not appropriately updated.
11. Inadequate Postprocessing: Failure to release resources no longer needed,
like memory deallocation, leading to resource leaks.
12. Inadequate Interface Support: When the actual functionality provided does
not adequately support the specified capabilities of the interface, causing
misinterpretation or incorrect results.
13. Initialization/Value Errors: Errors resulting from failure to initialize variables or
assign appropriate values, often due to oversight.
14. Violation of Data Constraints: Occurs when the implementation does not
support specified relationships among data items, typically due to incomplete
design specifications.
15. Timing/Performance Problems: Errors arising from inadequate
synchronization among processes, leading to race conditions or performance
issues.
16. Coordination of Changes: Errors caused by a lack of communication regarding
changes to interrelated modules, leading to inconsistencies or conflicts.
17. Hardware/Software Interfaces: Errors arising from inadequate software
handling of hardware devices, such as failure to synchronize data transfer
rates, resulting in data loss or device malfunction.

GRANULARITY OF SYSTEM INTEGRATION TESTING:


1. Intrasystem Testing:
• Objective: Combining modules within the system incrementally to build
a cohesive system.
• Process: Modules are combined in a step-by-step manner, similar to
constructing and testing successive builds.
• Example: In a client-server system, both the client and server are built
separately and then combined. Test cases are derived from the low-
level design document detailing module specifications.
2. Intersystem Testing:
• Objective: Testing interactions between independently tested systems
at a high level.
• Process: All systems are connected together, and testing is conducted
end-to-end. Only one feature is tested at a time on a limited basis.
• Example: Integrating a client-server system after separately integrating
the client and server modules. Test cases are derived from the high-
level design document detailing system architecture.
3. Pairwise Testing:
• Objective: Testing interactions between two interconnected systems
within the overall system.
• Process: Only two systems are tested at a time, assuming other systems
behave as expected. The network infrastructure must support the
interaction between the two systems.
• Challenges: Unintended side effects, such as a failure in one system
triggering issues in another, can complicate testing.
• Example: Testing communication between a network element and
element management systems within a wireless data network, where
failure in one device may impact the testing process.

Sandwich and Big Bang:


1. Sandwich Approach:
• Imagine a sandwich with three layers: top, middle, and bottom.
• In software integration, we divide the system into three layers too.
• The bottom layer has basic modules that are used a lot. We integrate
these first using the bottom-up approach.
• The top layer has modules with major design decisions. We integrate
these using the top-down approach.
• The middle layer holds the rest of the modules. If it exists, we integrate
it after the top and bottom layers.
• This approach combines the benefits of top-down and bottom-up,
avoiding the need for stubs for low-level modules.
2. Big-Bang Approach:
• Imagine throwing all the ingredients of a dish into a pot and cooking
them together.
• Similarly, in big-bang integration, we test each module individually first.
• Then, we combine all the modules to build the entire system and test it
as a whole.
• This approach is quick but risky for large systems:
• Large systems may have many interface errors, and it's hard to
pinpoint them all at once.
• Fixing errors in a big-bang integrated system can be costly and
time-consuming.
3. Study Results:
• Researchers found that top-down integration strategies are the most
effective for fixing defects.
• Both top-down and big-bang strategies produce more reliable systems.
• Bottom-up strategies are less effective in fixing defects and tend to
create less reliable systems.
• The sandwich approach falls in between, offering moderate reliability
compared to the other methods.

SYSTEM TEST CATEGORIES:


1. Basic tests provide evidence that the system can be installed, configured,
and brought to an operational state.
2. Functionality tests provide comprehensive testing over the full range of
the requirements within the capabilities of the system.
3. Robustness tests determine how well the system recovers from various
input errors and other failure situations.
4. Interoperability tests determine whether the system can interoperate with
other third-party products.
5. Performance tests measure the performance characteristics of the system,
for example, throughput and response time, under various conditions.
6. Scalability tests determine the scaling limits of the system in terms of user
scaling, geographic scaling, and resource scaling.
7. Stress tests put a system under stress in order to determine the limitations
of a system and, when it fails, to determine the manner in which the failure
occurs.
8. Load and stability tests provide evidence that the system remains stable
for a long period of time under full load.
9. Reliability tests measure the ability of the system to keep operating for a
long time without developing failures.
10. Regression tests determine that the system remains stable as it cycles
through the integration of other subsystems and through maintenance tasks.
11. Documentation tests ensure that the system’s user guides are accurate and
usable.
12. Regulatory tests ensure that the system meets the requirements of government
regulatory bodies in the countries where it will be deployed.
Types of tests:
Boot tests:
Boot tests are crucial for ensuring that a system can successfully start up and load its software from various boot options. These tests are designed to validate the system's ability to initialize and operate with different configurations, ensuring reliability and functionality across different setups.
1. Boot Options:
• Boot options refer to the different sources from which a system can
load its software image during startup.
• Common boot options include booting from ROM (Read-Only Memory),
FLASH card (non-volatile memory), and PCMCIA cards (memory cards
commonly used in laptops and other devices).
2. Minimum Configuration:
• The minimum configuration of a system represents the smallest setup
in which the system can operate.
• For example, in the case of a router, the minimum configuration might
consist of only one line card inserted into its slots.
• Boot tests ensure that the system can successfully boot up and function
with this minimal setup, verifying that essential components are
detected and initialized correctly.
3. Maximum Configuration:
• Conversely, the maximum configuration represents the largest possible
setup with all available resources utilized.
• For a router, the maximum configuration would involve filling all
available slots with line cards.
• Boot tests also verify the system's ability to handle this maximum
configuration, ensuring that it can manage the increased hardware load
and still boot up effectively.
4. Testing Process:
• During boot testing, the system is powered on and initialized using each
supported boot option.
• For each boot option, both minimum and maximum configurations are
tested.
• The system's behavior during startup is monitored and verified to
ensure that it successfully loads the software image and initializes all
necessary components.
• Any failures or issues encountered during the boot process are
documented and investigated to identify potential problems or areas
for improvement.
In summary, boot tests are essential for validating a system's ability to boot up and
load its software image from various boot options. By testing both minimum and
maximum configurations, these tests ensure that the system can operate reliably
across different setups and hardware configurations.

Upgrade/Downgrade Tests:
1. Purpose: Upgrade/downgrade tests ensure that the system's software can be
smoothly updated to a newer version or reverted back to a previous version.
2. Upgrade Process: These tests verify that the system can successfully upgrade
from the (n - 1)th version of the software to the nth version without
encountering any issues.
3. Downgrade Process (Rollback): If the upgrade process fails for any reason,
such as user interruption, network disruption, or insufficient disk space, these
tests ensure that the system can roll back to the (n - 1)th version smoothly.
4. Failure Scenarios: Upgrade/downgrade tests cover various failure scenarios,
including user-invoked aborts, network disruptions, system reboots, and self-
detection of upgrade failures due to reasons like disk space constraints or
version incompatibilities.
5. Graceful Handling: The tests verify that the system can handle these failure
scenarios gracefully, ensuring that the system remains stable and functional
even during upgrade or rollback processes.
6. Verification: The tests validate that the upgrade and rollback procedures are
executed correctly, and the system behaves as expected throughout the
process, maintaining data integrity and functionality.

Light Emitting Diode Tests:


1. Purpose: LED tests ensure that the system LED status indicators on the front
panels of the systems function correctly, providing visual indication of the
operational status of various modules.
2. Location: LEDs are located on the front panels of the systems, making them
easily visible to users for monitoring the system's status.
3. System LED Test:
• Green LED indicates that the chassis is operational.
• Blinking green LED indicates a fault, possibly indicating issues with one
or more submodules.
• Off LED indicates that there is no power to the system.
4. Ethernet Link LED Test:
• Green LED indicates that the Ethernet link is operational.
• Blinking green LED indicates network activity.
• Off LED indicates a fault in the Ethernet connection.
5. Cable Link LED Test:
• Green LED indicates that the cable link is operational.
• Blinking green LED indicates activity, such as data transmission.
• Off LED indicates a fault in the cable connection.
6. User-Defined T1 Line Card LED Test:
• Green LED indicates normal operation of the T1 line card.
• Blinking green LED indicates activity on the T1 line.
• Red LED indicates a fault, possibly indicating issues with the T1 line
card.
• Off LED indicates that there is no power supplied to the T1 line card.
7. Verification: LED tests verify that the visual indicators accurately reflect the
operational status of the corresponding modules, ensuring that users can
easily identify any faults or issues with the system or its components.
Diagnostic Tests:
Diagnostic tests, also known as built-in self-tests (BIST), are designed to verify that hardware components or modules of the system are functioning correctly without requiring manual troubleshooting.

Command line interface tests:


The CLI tests are designed to verify that the system can be configured, or provisioned,
in specific ways. This is to ensure that the CLI software module processes the user
commands correctly as documented. This includes accessing the relevant information
from the system using CLI.

FUNCTIONALITY TESTS:
Communication Systems Tests:
Communication systems tests are designed to verify the implementation of the
communication systems as specified in the customer requirements specification.
1. Basic Interconnection Tests:
• These tests check if the system can make simple connections.
• They ensure that devices can link up over networks like Ethernet or Wi-
Fi.
2. Capability Tests:
• Capability tests ensure that the system offers the features it's supposed
to.
• They check if the system can do what it's meant to, like supporting
specific protocols or handling data in certain ways.
3. Behavior Tests:
• Behavior tests look at how the system acts in different situations.
• They check if the system behaves correctly during tasks like sending
data, handling errors, and negotiating with other systems.
4. Systems Resolution Tests:
• These tests give clear answers to specific questions about the system.
• They focus on confirming if the system meets important requirements
or standards, like hitting performance targets or following
communication rules.
Module Tests:
Module tests are focused on ensuring that each individual component, or module, of
a system works correctly on its own. These tests are crucial because the proper
functioning of each module contributes to the overall performance of the entire
system. Here's a breakdown of how module tests work:
1. Purpose: The main goal of module tests is to verify that each module operates
according to its designated functionality as outlined in the system's
requirements.
2. Verification Process: Module tests involve verifying the performance of each
module independently, without considering interactions with other modules.
This ensures that any issues or errors can be isolated and addressed efficiently.
3. Example Scenario: Consider an Internet router comprising various modules
such as line cards, system controllers, power supplies, and fan trays. Module
tests for different components might include:
• Ethernet Line Cards: Tests verify functionalities like autosensing,
latency, collision detection, supported frame types, and acceptable
frame lengths.
• Fan Tray: Tests ensure that the fan status is accurately detected,
reported by the system software, and displayed correctly through LEDs
(e.g., green for "in service" and red for "out of service").
• T1/E1 Line Cards: Tests for T1/E1 line cards typically cover various
aspects such as:
• Clocking mechanisms (e.g., internal source timing and receive
clock recovery).
• Alarms detection, including loss of signal (LOS), loss of frame
(LOF), and alarm indication signal (AIS).
• Line coding methods like alternate mark inversion (AMI), bipolar
8 zero substitution (B8ZS) for T1, and high-density bipolar 3
(HDB3) for E1.
• Framing standards such as Digital Signal 1 (DS1) and E1 framing.
• Channelization capabilities, ensuring the ability to transfer user
traffic across channels multiplexed from different time slots on
T1/E1 links.
Logging and Tracing Tests:
Logging and tracing tests are essential for verifying the proper configuration and functionality of logging and tracing mechanisms within a system.

Graphical user interface tests:


GUI tests are crucial for ensuring that the graphical user interface of an application
functions correctly and provides a seamless experience for users. Here's a simplified
explanation of GUI testing and usability testing:
Verification of Interface Components: GUI tests verify various components of the
interface, such as icons, menu bars, dialogue boxes, scroll bars, list boxes, and radio
buttons. These tests ensure that all elements are displayed correctly and function as
intended.

Security tests:
Security tests are procedures designed to assess whether a software system adheres
to predefined security requirements, which typically include three main aspects:
confidentiality, integrity, and availability.
1. Confidentiality: This aspect ensures that sensitive data and processes are
protected from unauthorized access or disclosure. Security tests evaluate
whether the system effectively prevents unauthorized users from accessing
confidential information.
2. Integrity: Integrity measures whether data and processes are safeguarded
against unauthorized modification or alteration. Security tests aim to verify
that the system maintains the accuracy and consistency of data, preventing
unauthorized changes.
3. Availability: Availability refers to the requirement that data and processes
should remain accessible to authorized users, without disruptions or denial-of-
service attacks.

ROBUSTNESS TESTS:
Robustness means how sensitive a system is to erroneous input and changes in its
operational environment. Tests in this category are designed to verify how gracefully
the system behaves in error situations and in a changed operational environment.
The purpose is to deliberately break the system, not as an end in itself, but as a means to find errors.
Boundary Value Tests:
Boundary value tests are designed to cover boundary conditions, special values, and
system defaults. The tests include providing invalid input data to the system and
observing how the system reacts to the invalid input. The system should respond
with an error message or initiate an error processing routine.

Power cycling tests:


Power cycling tests are executed to ensure that, when there is a power glitch in a
deployment environment, the system can recover from the glitch to be back in
normal operation after power is restored. As an example, verify that the boot test is
successful every time it is executed during power cycling.

On-line insertion and removal tests:


On-line insertion and removal (OIR) tests are crucial for verifying the seamless
handling of module insertion and removal events in a system, whether it's under
minimal or heavy workload conditions. The goal is to ensure that the system
gracefully manages these events and returns to normal operation once the failure
condition is resolved, all without requiring a reboot or causing crashes in other
components. These tests simulate scenarios where faulty modules are replaced,
ensuring that the system continues to function faultlessly throughout the process.
For instance, when replacing an Ethernet card, the system should remain operational
without any disruptions or failures.

High-Availability Tests:
High-availability tests are designed to verify the redundancy of individual modules,
including the software that controls these modules. The goal is to verify that the
system gracefully and quickly recovers from hardware and software failures without
adversely impacting the operation of the system. The concept of high availability is
also known as fault tolerance.

Degraded node tests:


Degraded node tests assess the resilience of a system in the face of partial failures.
For instance, cutting one of several connections between routers evaluates how well
the remaining connections handle the increased load. Similarly, disabling the primary
port on a router examines whether traffic seamlessly shifts to alternative ports
without service interruption. After reactivating the primary port, the test gauges if
the system seamlessly returns to its original state. These tests gauge the system's
ability to adapt to failures while maintaining operational integrity.
EQUIVALENCE CLASS PARTITIONING:
Equivalence partitioning is indeed a powerful technique in software testing. By
dividing the input domain into equivalence classes, testers can ensure thorough
coverage while managing the potentially infinite number of inputs. Each equivalence
class represents a set of inputs that the system should handle in a similar manner,
making it easier to design test cases that cover a wide range of scenarios without
exhaustive testing.
For example, consider a function that calculates the square root of a number. The
input domain consists of all real numbers. However, we can partition this domain into
equivalence classes such as positive numbers, negative numbers, and zero. Testing a
representative value from each equivalence class helps ensure that the function
behaves correctly across different scenarios without needing to test every possible
real number.
By identifying input conditions and grouping inputs accordingly, testers can efficiently
select test cases that provide maximum coverage while minimizing redundancy. This
approach is particularly useful in situations where exhaustive testing is impractical
due to the sheer size of the input domain.
Guidelines to create equivalence classes for different input conditions:
1. Range [a, b]: Suppose the input condition specifies a range [a, b]. We create
one EC for valid inputs within the range, say a ≤ X ≤ b. Additionally, we create
two other ECs for invalid inputs: one for X < a and one for X > b (see the
sketch after this list).
2. Set of values: If the input condition specifies a set of values, we create one EC
for each element of the set and one EC for an invalid member. For instance, if
the input is selected from a set of N items, we create N + 1 ECs: one for each
element of the set {M1}, {M2}, ..., {MN}, and one for elements outside the set
{M1, M2, ..., MN}.
3. Individual values: When each individual value is handled differently by the
system, we create one EC for each valid input. For example, if the input is from
a menu, we create one EC for each menu item.
4. Number of valid values: If the input condition specifies the number of valid
values (say N), we create one EC for the correct number of inputs and two ECs
for invalid inputs: one for zero values and one for more than N values. For
instance, if a program accepts 100 natural numbers for sorting, we create
three ECs: one for 100 valid inputs of natural numbers, one for no input value,
and one for more than 100 natural numbers.
5. "Must-be" value: For an input condition specifying a "must-be" value, we
create one EC for the must-be value and one EC for something that is not a
must-be value. For example, if the first character of a password must be a
numeric character, we generate two ECs: one for valid values where the first
character is numeric and one for invalid values where the first character is not
numeric.
6. Splitting of EC: If elements within an EC are handled differently by the system,
we split the EC into smaller ECs to ensure each element is adequately tested.
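A minimal pytest-style sketch of guideline 1, the range [a, b], using an invented accept_age function whose valid range is assumed to be [18, 60].

```python
def accept_age(age):
    # Hypothetical function under test: valid ages are 18 to 60 inclusive.
    return 18 <= age <= 60

# One representative test input per equivalence class.
EQUIVALENCE_CLASSES = {
    "valid: 18 <= age <= 60": (35, True),
    "invalid: age < 18":      (5, False),
    "invalid: age > 60":      (75, False),
}

def test_one_representative_per_class():
    for name, (representative, expected) in EQUIVALENCE_CLASSES.items():
        assert accept_age(representative) == expected, name
```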

BOUNDARY VALUE ANALYSIS:


Boundary value analysis (BVA) is a testing technique closely related to equivalence
partitioning (EC). While equivalence partitioning focuses on dividing the input
domain into classes of inputs that are treated identically by the system, BVA takes it a
step further by concentrating on the boundaries between these classes.
The central idea of BVA is to select test data near the boundaries of input and output
equivalence classes. By doing so, BVA aims to uncover defects that often occur due to
incorrect handling or implementation of these boundaries. Boundary conditions,
which are predicates that apply directly on or around the boundaries of equivalence
classes, are crucial in BVA.
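Continuing the hypothetical accept_age example from the previous section, a sketch of boundary value selection on and immediately around the [18, 60] range.

```python
def accept_age(age):
    # Same hypothetical function: valid ages are 18 to 60 inclusive.
    return 18 <= age <= 60

BOUNDARY_CASES = [
    (17, False),   # just below the lower boundary
    (18, True),    # on the lower boundary
    (19, True),    # just above the lower boundary
    (59, True),    # just below the upper boundary
    (60, True),    # on the upper boundary
    (61, False),   # just above the upper boundary
]

def test_boundary_values():
    for age, expected in BOUNDARY_CASES:
        assert accept_age(age) == expected, f"age={age}"
```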

Unit 4
1st definition of software reliability
Software reliability is defined as the probability of a software system operating
without failure for a specified time in a specified environment. The key elements of
this definition include:
1. Probability of Failure-Free Operation: This refers to the likelihood that the
software will function without any failures during a specified period.
2. Length of Time of Failure-Free Operation: The duration for which the software
is expected to operate without failure is crucial since users often require the
software to complete tasks within a certain time frame.
3. Execution Environment: This involves the conditions and manner in which
users interact with the software, which can vary significantly.

Software reliability is probabilistic due to unknown faults in large systems, causing unpredictable failures based on usage. Users prioritize minimal disruptions and failure-free operation for a set time.
For example, an inventory system should run from 8 AM to 8 PM without failure. An office
PC, used from 8:30 AM to 4:30 PM, has a reliability of 0.975 if it crashes five times over 200
days.

Execution environment affects reliability. Different users interact with software differently,
leading to varied reliability experiences. A word processor may behave differently for small
versus large documents, invoking different parts and faults.

Understanding the execution environment is key. If a system has ten functions but one group
uses only seven fault-free ones, they perceive higher reliability than another group using all
ten with some faults.

The execution environment crucially impacts software reliability by influencing fault encounters and perceived reliability. The operational profile concept will further explore this.

2nd Definition of Software Reliability


Failure intensity measures the reliability of a software system in a given environment.
Lower failure intensity indicates higher reliability. To represent current reliability, the
failure intensity is stated. For example, if a system in testing experiences 2 failures over 8 hours, its failure intensity is 0.25 failures per hour (2 failures / 8 hours).
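A quick numeric check of the two worked examples above (the office PC from the first definition and the 2-failures-in-8-hours system here); the arithmetic only restates the figures already given.

```python
# First definition: probability of failure-free operation.
# An office PC that fails on 5 of 200 working days:
reliability = (200 - 5) / 200
print(reliability)            # 0.975

# Second definition: failure intensity (failures per unit time).
# A system under test that experiences 2 failures in 8 hours:
failure_intensity = 2 / 8
print(failure_intensity)      # 0.25 failures per hour
```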

Comparing the Definitions of Software Reliability


The first definition of software reliability focuses on the system's ability to complete
transactions without failure for a minimum length of time. For example, an
autonomous underwater robot must run for three days continuously to complete an
exploration round. If it completes 99% of rounds successfully, its reliability is 0.99.
The second definition emphasizes minimizing failures, regardless of operation
duration. The occurrence of any failure is crucial, such as in air-traffic control systems
or telephone switches, where failures can have significant consequences.

Factors influencing software reliability:


A user’s perception of software reliability depends on two main factors: the number
of faults in the software and the ways users operate it, known as the operational
profile. Several factors influence the number of faults in a system:
1. Size and Complexity of Code: Larger and more complex systems with many
lines of code (LOC) tend to have more faults due to more module interfaces
and conditional statements. Economic constraints may prevent thorough
understanding and debugging, leading to fault introduction during
development and maintenance.
2. Development Process Characteristics: Advances in software engineering, such
as using formal methods (SDL, UML) and code review techniques, help reduce
faults. Quality control measures and test tools also contribute to fault
detection and reduction.
3. Personnel Education, Experience, and Training: The rapid growth of the IT
industry has led to many personnel with insufficient training working on large
projects, increasing the likelihood of faults due to a lack of skills.
4. Operational Environment: Fault detection depends on the ability to replicate
the actual operational environment during testing. If test engineers do not
operate the system as users would, faults may go undetected.
Software reliability is influenced by a combination of these factors. Although an ideal mathematical model that captures all influencing parameters does not exist because of this complexity, these factors together determine the reliability that users actually experience.

Comparison of Software Engineering Technologies:


To develop high-quality software cost-effectively, various technologies have been
introduced, including different development processes like the waterfall, spiral,
prototyping, eXtreme Programming, and Scrum models. Numerous techniques also
exist for generating test cases and capturing customer requirements, such as entity
relationship diagrams, data flow diagrams, and UML.
When evaluating new technologies, three key criteria from a management
perspective are:
1. Cost of Adoption: The expense involved in implementing the new technology.
2. Impact on Development Schedule: How the technology affects the project
timeline.
3. Return on Investment: The improvement in software quality resulting from the
new technology.
Software reliability can help assess the effectiveness of new technologies. For
instance, if two technologies, M1 and M2, are used to develop systems S1 and S2 for
the same application, comparing their reliability levels can indicate which technology
produces higher quality software. This comparison aids managers in selecting the
most effective technology for improving software reliability.

Measuring the Progress of System Testing


Measuring software development progress is essential for project management. Key
metrics for monitoring system-level testing progress include:
1. Percentage of test cases executed
2. Percentage of successful execution of high-priority functional tests
Software reliability, specifically failure intensity, can objectively measure testing
progress. By comparing the current failure intensity of the System Under Test (SUT)
with the acceptable failure intensity at release, one can assess the remaining work. A
high current failure intensity indicates more work is needed, while a small difference
suggests nearing the goal. Thus, reliability metrics help track progress in system
testing.

Controlling the System in Operation


System reliability usually decreases due to maintenance work. The larger the changes
made, the more reliability is reduced. For instance, if a system has k faults per 1000
lines of code initially, adding N lines during maintenance statistically introduces
Nk/1000 new faults. This decrease in reliability necessitates extended testing to
identify and fix these faults, thereby improving reliability. Project managers may limit
the extent of maintenance work based on the acceptable temporary reduction in
system reliability, thereby determining the permissible size of maintenance activities.
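A one-line illustration of the Nk/1000 estimate above; the values of k and N are invented for the sketch.

```python
k = 3          # faults per 1000 lines of code in the existing system (assumed)
N = 5000       # lines of code added during maintenance (assumed)
new_faults = N * k / 1000
print(new_faults)   # statistically, about 15 new faults are introduced
```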

Better Insight into Software Development Process


By observing failure intensity at the start of system testing, a test manager can
estimate the duration needed to reduce failure intensity to an acceptable level. This
enables managers to make informed decisions about the software development
process, improving planning and resource allocation.

Operation:
An operation is a major system-level logical task of short duration which returns
control to the initiator when the operation is complete and whose processing is
substantially different from other operations. In the following, the key characteristics
of an operation are explained:
• Major means that an operation is related to a functional requirement or feature of
a software system.
• An operation is a logical concept in the sense that it involves software, hardware,
and user actions. Different actions may exist as different segments of processing. The
different processing time segments can be contiguous, non-contiguous, sequential, or
concurrent.
• Short duration means that a software system is handling hundreds of operations
per hour.
• Substantially different processing means that an operation is considered to be an
entity in the form of some lines of source code, and there is a high probability that
such an entity contains a fault not found in another entity.
FIVE VIEWS OF SOFTWARE QUALITY:
1. Transcendental View: Quality is an ideal recognized through experience but too
complex to define precisely. It suggests that good quality is inherently noticeable.
This view does not use concrete measures to define quality.
2. User View: Quality is determined by how well a product meets user needs and
expectations. It includes both functionality and subjective elements like usability
and reliability. A product is considered high quality if it satisfies many users.
3. Manufacturing View: Quality is seen as meeting specified requirements,
emphasizing consistency and reducing costs. The focus is on manufacturing
correctly the first time to lower expenses. Standards like CMM and ISO 9001 are
based on this view.
4. Product View: Quality is linked to good internal properties leading to good
external qualities. This view explores the relationship between a product's
internal attributes and its performance. For example, high modularity improves
testability and maintainability.
5. Value-Based View: Quality balances excellence and cost, focusing on how much
customers are willing to pay. It considers both the quality and the economic value
of the product. The goal is to achieve a cost-effective level of quality.

The 11 quality factors are:


1. Correctness: A software system should meet both explicitly specified functional
requirements and implicitly expected nonfunctional requirements. A system is
deemed correct if it fulfills these criteria. However, a correct system may still be
unacceptable to customers if it fails to meet unstated requirements like stability,
performance, and scalability. Conversely, users might accept an incorrect system if it
meets their needs.
2. Reliability: Constructing large software systems that are completely correct is
challenging. Some functions may fail in certain scenarios, making the software
technically incorrect. However, customers may still accept the software if these
failures are rare and don't frequently occur during actual use. Customers may
tolerate occasional failures and still perceive the system as reliable if the failure rate
is very low and doesn't impact their mission objectives. Thus, reliability is based on
customer perception, and an incorrect system can still be seen as reliable.
3. Efficiency: Efficiency measures how well a software system uses resources like
computing power, memory, disk space, communication bandwidth, and energy. The
system should minimize resource usage while performing its functions. For instance,
a base station in a cellular network can support more users by using less
communication bandwidth.
4. Integrity: Integrity refers to a system's ability to withstand security attacks and
control unauthorized access to software or data. It is crucial for ensuring that only
authorized users or programs can access the system. Integrity is especially important
in today's network-based applications and multiuser systems.
5. Usability: A software system is considered usable if users find it easy to use. The
user interface plays a crucial role in user satisfaction. Even if a system has many
desirable qualities, a poor user interface can lead to its failure. However, usability
alone isn't enough; the product must also be reliable. Frequent failures cannot be
compensated by a good user interface alone.
6. Maintainability: Maintainability refers to how easily and inexpensively
maintenance tasks can be performed. For software, maintenance activities fall into
three categories:
1. Corrective Maintenance: Fixing defects found after the software has been
released. These defects may have been known at release or introduced during
later updates.
2. Adaptive Maintenance: Modifying the software to keep it compatible with
changes in its operating environment.
3. Perfective Maintenance: Enhancing the software to improve its performance
or other qualities.
Effective maintainability ensures that software can be efficiently updated and
kept in good working condition throughout its lifespan.
7. Testability: Testability refers to the ability to verify that a software system meets
both explicitly stated and expected requirements. At every development stage, it is
essential to consider how each requirement will be tested. For each requirement, the
key questions are: What procedure should be used to test it, and how easily can it be
verified? (A small code sketch illustrating testability and portability follows this list.)
8. Flexibility measures how easily an operational system can be modified. Frequent
changes over time can increase modification costs, especially if the initial design lacks
flexibility. To assess flexibility, ask: How easily can new features be added to the
system?
9. Portability refers to how easily a software system can be adapted to different
execution environments, such as various hardware platforms or operating systems. It
enhances market potential and allows customers to adopt new technologies. Good
design practices, like modularity, facilitate portability by isolating environment-
specific computations.
10. Reusability involves using significant portions of one product in another, with
minor modifications if needed. It saves development and testing costs and time.
11. Interoperability is the ability of different software systems to work together,
where the output of one system is usable as input to another, often across different
computers connected by a network. This is crucial for modern applications like
internet-based and wireless systems. (A minimal data-exchange sketch follows the
definitions table below.)
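
Before the summary table, here is a minimal sketch of how factors 7 (testability) and 9 (portability) can show up in code, assuming a hypothetical session-timeout feature: the environment-specific call (the system clock) is isolated behind a small interface so the core logic can run on any platform and be exercised by a unit test with a fake clock. All class and method names are illustrative, not taken from any particular system.

```python
# Illustrative only: a tiny design that isolates an environment-specific
# dependency (the system clock) behind an interface, which improves both
# portability (swap the clock per platform) and testability (inject a fake).
from abc import ABC, abstractmethod
import time
import unittest


class Clock(ABC):
    """Abstraction over the execution environment's time source."""

    @abstractmethod
    def now(self) -> float:
        ...


class SystemClock(Clock):
    def now(self) -> float:
        return time.time()  # environment-specific call, kept in one place


class FakeClock(Clock):
    def __init__(self, start: float = 0.0):
        self.current = start

    def now(self) -> float:
        return self.current


class SessionTimeoutPolicy:
    """Core logic depends only on the Clock interface, not on the platform."""

    def __init__(self, clock: Clock, timeout_seconds: float):
        self.clock = clock
        self.timeout = timeout_seconds
        self.last_activity = clock.now()

    def touch(self) -> None:
        self.last_activity = self.clock.now()

    def is_expired(self) -> bool:
        return self.clock.now() - self.last_activity > self.timeout


class SessionTimeoutPolicyTest(unittest.TestCase):
    def test_expires_after_timeout(self):
        clock = FakeClock()
        policy = SessionTimeoutPolicy(clock, timeout_seconds=30)
        clock.current = 31  # advance time deterministically
        self.assertTrue(policy.is_expired())


if __name__ == "__main__":
    unittest.main()
```

Because the platform-dependent call lives in one small class, porting the feature to a different environment or testing it on a developer workstation only requires supplying a different Clock implementation.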
Quality Factors and Definitions:

| Quality Factor | Definition |
| --- | --- |
| 1. Correctness | Extent to which a program satisfies its specifications and fulfills the user’s mission objectives |
| 2. Reliability | Extent to which a program can be expected to perform its intended function with required precision |
| 3. Efficiency | Amount of computing resources and code required by a program to perform a function |
| 4. Integrity | Extent to which access to software or data by unauthorized persons can be controlled |
| 5. Usability | Effort required to learn, operate, prepare input, and interpret output of a program |
| 6. Maintainability | Effort required to locate and fix a defect in an operational program |
| 7. Testability | Effort required to test a program to ensure that it performs its intended functions |
| 8. Flexibility | Effort required to modify an operational program |
| 9. Portability | Effort required to transfer a program from one hardware and/or software environment to another |
| 10. Reusability | Extent to which parts of a software system can be reused in other applications |
| 11. Interoperability | Effort required to couple one system with another |
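
Factor 11 (interoperability) is easiest to see with a concrete data exchange. The sketch below is a hedged illustration, with hypothetical billing and reporting components, of one system emitting a record in an agreed JSON format that another system, possibly running on a different machine, consumes as its input.

```python
# Illustrative only: two components interoperate by agreeing on a data format,
# so the output of one is directly usable as the input of the other.
import json


def billing_system_export(customer_id: str, amount: float) -> str:
    """Producer: serialize an invoice record to the agreed JSON format."""
    return json.dumps({"customer_id": customer_id, "amount": amount, "currency": "USD"})


def reporting_system_import(message: str) -> str:
    """Consumer: parse the same format, typically received over a network."""
    record = json.loads(message)
    return f"Customer {record['customer_id']} billed {record['amount']} {record['currency']}"


if __name__ == "__main__":
    wire_message = billing_system_export("C-42", 99.5)
    print(reporting_system_import(wire_message))
    # Customer C-42 billed 99.5 USD
```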

Quality Criteria:
A quality criterion is an attribute of a quality factor relevant to software
development. Examples include:
• Modularity: An attribute of software architecture. High modularity allows
cohesive components to be grouped in one module, enhancing system
maintainability.
• Traceability: An attribute that enables developers to map user requirements
to specific modules, improving system correctness (a minimal sketch follows
below).
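
As a hedged illustration of traceability (the requirement, module, and test identifiers below are hypothetical), the mapping from requirements to modules and test cases can be kept as simple data and checked automatically, so that unmapped or untested requirements are flagged.

```python
# Illustrative traceability matrix: map each requirement to the modules that
# implement it and the test cases that verify it, then flag any gaps.
traceability = {
    "REQ-001": {"modules": ["auth"],    "tests": ["TC-01", "TC-02"]},
    "REQ-002": {"modules": ["billing"], "tests": ["TC-03"]},
    "REQ-003": {"modules": [],          "tests": []},  # gap: nothing mapped yet
}


def report_gaps(matrix):
    """Return requirements that lack an implementing module or a verifying test."""
    gaps = []
    for req_id, links in matrix.items():
        if not links["modules"] or not links["tests"]:
            gaps.append(req_id)
    return gaps


if __name__ == "__main__":
    print("Requirements with traceability gaps:", report_gaps(traceability))
    # Requirements with traceability gaps: ['REQ-003']
```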
ISO 9000:2000 SOFTWARE QUALITY STANDARD:
The International Organization for Standardization (ISO) has developed a series of
standards known as ISO 9000, which focus on quality assurance and management.
Founded in 1946 and based in Geneva, Switzerland, the ISO creates standards
applicable to a wide range of products, from spices to software. These standards are
updated every 5-8 years, with the latest version released in 2000, known as ISO
9000:2000.

ISO 9000:2000 consists of three components:


1. ISO 9000: Fundamentals and vocabulary.
2. ISO 9001: Requirements.
3. ISO 9004: Guidelines for performance improvements.

Previous versions included ISO 9002 and ISO 9003, which addressed quality
assurance in production, installation, and final inspection, but these are not part of
ISO 9000:2000.

Capability Maturity Model (CMM)


CMM assesses an organization's process maturity, indicating its ability to produce
low-cost, high-quality software. The model defines five maturity levels:
1. Initial Level: Processes are often chaotic and ad hoc.
2. Repeatable Level: Basic project management processes are established to track
cost, schedule, and functionality.
3. Defined Level: Processes are documented and standardized across the
organization.
4. Managed Level: Processes are measured and controlled.
5. Optimizing Level: Focus on continuous process improvement.
By advancing through these levels, an organization can incrementally improve its
process capability, resulting in better quality software at lower costs.

Capability Maturity Model Integration (CMMI)


After the development and success of the Capability Maturity Model for Software
(CMM-SW), which was first released in 1991 and updated in 1993, CMMs were
created for various other areas. CMMs are reference models of mature practices in
specific disciplines used to appraise and improve an organization’s capability in that
discipline. Examples include:

1. Software CMM (CMM-SW)
2. Systems Engineering CMM
3. Integrated Product Development CMM
4. Electronic Industries Alliance 731 CMM
5. Software Acquisition CMM
6. People CMM
7. Supplier Sourcing CMM

These CMMs vary by discipline, structure (continuous vs. staged improvements), and
definitions of maturity, reflecting the unique aspects of each field.

Issues with Multiple CMMs


Organizations using multiple CMMs faced several issues:
1. Different structures, maturity measurement methods, and terminologies.
2. Challenges in integrating CMMs to achieve common goals, such as producing
low-cost, high-quality products on schedule.
3. Difficulties in using multiple models for supplier selection and subcontracting.

Emergence of CMMI
To address these issues, the Capability Maturity Model Integration (CMMI) was
developed, incorporating elements from several models, including:

1. Capability Maturity Model for Software (CMM-SW)
2. Integrated Product Development Capability Maturity Model (IPD-CMM)
3. Capability Maturity Model for Systems Engineering (CMM-SE)
4. Capability Maturity Model for Supplier Sourcing (CMM-SS)

Benefits of CMMI
CMMI provides a unified approach to process improvement, which is crucial for:
1. Supplier Evaluation: Ensuring subsystems, often developed by different vendors,
meet maturity standards.
2. Diverse Components: Ensuring interoperability and coexistence of diverse system
components like databases, communications, security, and real-time processing.
3. Specialized Contexts: Developing software in specialized environments, such as
Internet routing software on specialized hardware and operating systems.

By integrating various maturity models, CMMI helps organizations streamline
processes, improve product quality, and maintain consistency across different
domains.

TEST PROCESS IMPROVEMENT


A test process is a particular way of performing the activities related to defect
detection. Some typically desired test activities in software development are as
follows (a small code sketch of a few of these activities follows the list):
1. Identifying Test Goals
2. Preparing a Test Plan
3. Identifying Different Kinds of Tests
4. Hiring Test Personnel
5. Designing Test Cases
6. Setting Up Test Benches
7. Procuring Test Tools
8. Assigning Test Cases to Test Engineers
9. Prioritizing Test Cases for Execution
10. Organizing the Execution of Test Cases into Multiple Test Cycles
11. Preparing a Schedule for Executing Test Cases
12. Executing Test Cases
13. Reporting Defects
14. Tracking Defects While Resolving Them
15. Measuring the Progress of Testing
16. Measuring the Quality Attributes of the Software Under Test
17. Evaluating the Effectiveness of a Test Process
18. Identifying Steps to Improve the Effectiveness of Test Activities
19. Identifying Steps to Reduce the Cost of Testing
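
As a hedged sketch of a few of these activities (all identifiers are hypothetical), the fragment below prioritizes test cases, executes them in one test cycle, records defects for failing cases, and reports testing progress, corresponding roughly to activities 9, 12, 13, and 15 above.

```python
# Illustrative only: a tiny model of prioritizing, executing, and tracking
# test cases within one test cycle.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class TestCase:
    case_id: str
    priority: int            # lower number = run earlier
    run: Callable[[], bool]  # returns True on pass, False on fail


@dataclass
class TestCycle:
    cases: List[TestCase]
    defects: List[str] = field(default_factory=list)
    executed: int = 0
    passed: int = 0

    def execute(self) -> None:
        # Activity 9: prioritize; activity 12: execute.
        for case in sorted(self.cases, key=lambda c: c.priority):
            self.executed += 1
            if case.run():
                self.passed += 1
            else:
                # Activity 13: report a defect against the failing test case.
                self.defects.append(f"Defect found by {case.case_id}")

    def progress(self) -> str:
        # Activity 15: measure the progress of testing.
        return f"{self.executed}/{len(self.cases)} executed, {self.passed} passed"


if __name__ == "__main__":
    cycle = TestCycle(cases=[
        TestCase("TC-02", priority=2, run=lambda: True),
        TestCase("TC-01", priority=1, run=lambda: 2 + 2 == 4),
        TestCase("TC-03", priority=3, run=lambda: 1 / 1 == 2),  # deliberately failing
    ])
    cycle.execute()
    print(cycle.progress())  # 3/3 executed, 2 passed
    print(cycle.defects)     # ['Defect found by TC-03']
```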
