Qat Notes
The ISO 9126 Quality Model was developed by the International Organization for
Standardization (ISO) to provide a framework for evaluating software quality. This
model defines six key characteristics that cover different aspects of software quality:
1. Functionality: This characteristic assesses whether the software performs its
intended tasks correctly.
2. Reliability: Reliability measures the consistency of the software's
performance. It examines whether the software can operate without failures
under specified conditions.
3. Usability: Usability evaluates how easy and pleasant the software is to use.
This includes the user interface design, navigation, and overall user
experience.
4. Efficiency: Efficiency looks at the software's performance in relation to the
resources it uses. This involves assessing whether the software can perform its
functions quickly and without consuming excessive memory, processing
power, or battery life.
5. Maintainability: Maintainability considers how easy it is to fix bugs and make
updates to the software. This characteristic is crucial for developers as it
impacts the ease of modifying the software to correct issues, improve
performance, or adapt to new requirements. Software with high
maintainability allows for quick and cost-effective updates and improvements.
6. Portability: Portability measures the software's ability to operate in different
environments. This means the software should be able to run on various
platforms, such as different operating systems or devices, without requiring
significant changes.
Types of errors
1. Syntax Errors: Mistakes in the language's syntax.
2. Logic Errors: Flaws in the algorithm or logic.
3. Runtime Errors: Issues that occur while the program is running.
4. Semantic Errors: Errors in the meaning or interpretation of code.
5. Compilation Errors: Problems encountered during code compilation.
6. Arithmetic Errors: Incorrect mathematical calculations.
7. Input Errors: Incorrect or invalid input provided to a system or program.
8. Boundary Errors: Errors that occur at the boundaries of systems or data
ranges.
9. Intermittent Errors: Errors that occur sporadically or under specific conditions.
10. Human Errors: Mistakes made by humans during development or operation.
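To make a few of these concrete, here is a minimal Python sketch (all names invented for illustration) showing how a syntax error, a logic error, a runtime error, and a boundary error each surface:

```python
# Illustrative sketch: how a few of the error types above surface in Python.
# All names here are invented for demonstration.

def average(values):
    # Logic error (hypothetical): dividing by len(values) + 1 instead of
    # len(values) would silently produce wrong results, with no exception.
    return sum(values) / len(values)

# Syntax error: caught when the source is compiled, before it ever runs.
try:
    compile("if True print('hi')", "<demo>", "exec")
except SyntaxError as e:
    print("Syntax error detected at compile time:", e.msg)

# Runtime error: occurs while the program is running.
try:
    average([])  # division by zero inside average()
except ZeroDivisionError:
    print("Runtime error: cannot average an empty list")

# Boundary error: behavior at the edge of a data range.
items = [10, 20, 30]
try:
    print(items[3])  # valid indices are 0..2; 3 is just past the boundary
except IndexError:
    print("Boundary error: index 3 is out of range for a 3-element list")
```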
OBJECTIVES OF TESTING:
The stakeholders in a test process include programmers, test engineers, project
managers, and customers. They have different perspectives on testing:
1. It works: At first, programmers check if each part of the software works as
expected. This builds confidence. When the whole system is ready, they ensure it
performs its basic functions. The main goal is to prove the system works reliably.
2. It doesn't work: After confirming basic functions, they look for weaknesses. They
intentionally try to make parts of the system fail. This helps find areas for
improvement and makes the system stronger.
3. Reduce failure risk: Complex software often has faults causing occasional failures.
By finding and fixing faults during testing, the chance of system failure decreases over
time. The aim is to make sure the system is reliable.
4. Cut testing costs: Testing has many costs like designing tests, analyzing results, and
documenting everything. Efforts are made to do testing efficiently, reducing costs
while still being effective.
Activities of testing:
1. Identify Objective: First, they determine what they want to test, ensuring each
test case has a clear purpose.
2. Select Inputs: Next, they choose inputs based on requirements, code, or
expectations, keeping the test objective in mind.
3. Compute Expected Outcome: They predict the program's expected outcome
based on the chosen inputs and the test objective.
4. Set up Execution Environment: They prepare the environment needed for the
program to run, fulfilling any external requirements like network connections
or database access.
5. Execute the Program: Then, they run the program with the selected inputs,
observing the actual outcome. This may involve coordinating inputs across
different components.
6. Analyze Test Result: Finally, they compare the actual outcome with the
expected one. If they match and the test objective is met, it's a pass. If not, it's
a fail. Sometimes, the result is inconclusive, requiring further testing for a clear
verdict.
In simpler terms, they plan what to test, pick inputs, predict what should happen, set
up the program environment, run the test, compare what happens with what's
expected, and decide if it passed, failed, or needs more testing.
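As a hedged sketch, the six activities can be mapped onto a single concrete test case; the function under test, sales_tax, is invented for this illustration:

```python
# Sketch: the six test activities applied to one test case.
# The function under test (sales_tax) is invented for illustration.

def sales_tax(amount, rate):
    """Function under test: compute tax owed on a purchase."""
    return round(amount * rate, 2)

def test_sales_tax_basic():
    # 1. Identify objective: verify tax is computed correctly for a
    #    typical purchase.
    # 2. Select inputs: chosen from the requirement "8% tax on $50.00".
    amount, rate = 50.00, 0.08
    # 3. Compute expected outcome: 50.00 * 0.08 = 4.00.
    expected = 4.00
    # 4. Set up execution environment: nothing external is needed here;
    #    a real test might open a database connection or network session.
    # 5. Execute the program with the selected inputs.
    actual = sales_tax(amount, rate)
    # 6. Analyze the test result: compare actual with expected.
    assert actual == expected, f"expected {expected}, got {actual}"
    print("PASS: test_sales_tax_basic")

test_sales_tax_basic()
```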
Levels of testing:
Testing is a crucial part of software development, involving various stages and
participants. There are four main stages of testing: unit, integration, system, and
acceptance testing. Each stage serves a specific purpose and involves different
stakeholders.
• Unit Testing: Programmers test individual units of code in isolation, like
functions or classes. The goal is to ensure that each unit works correctly on its
own.
• Integration Testing: Developers and integration test engineers assemble units
to create larger subsystems. The aim is to construct a stable system that can
undergo further testing.
• System Testing: This encompasses a wide range of tests, including
functionality, security, performance, and reliability. It's a critical phase where
the system's readiness for deployment is assessed.
• Acceptance Testing: Conducted by customers, this ensures that the system
meets their expectations and contractual criteria. It focuses on measuring the
quality of the product rather than searching for defects.
Regression testing is another important aspect, performed whenever a system
component is modified. Its goal is to ensure that modifications haven't introduced
new faults. This process is integrated into unit, integration, and system testing
phases.
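As a small hedged illustration of the regression idea (parse_port and the bug scenario are invented), a test that pins a past fix stays in the suite so later modifications cannot silently reintroduce the fault:

```python
# Sketch of a regression test (names invented for illustration).
# Suppose parse_port once crashed on input "  8080 " (leading/trailing
# whitespace). After the fix, this test is kept in the suite so the
# fault cannot silently return when the code is modified again.

def parse_port(text):
    return int(text.strip())  # the fix: strip whitespace before parsing

def test_parse_port_regression():
    assert parse_port("  8080 ") == 8080

test_parse_port_regression()
print("regression test passed")
```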
Acceptance testing comes in two forms: User Acceptance Testing (UAT) and Business
Acceptance Testing (BAT). UAT is performed by customers to validate that the system
meets their needs according to the contract. BAT, conducted internally by the
supplier, ensures that the system is likely to pass UAT.
Overall, testing ensures that software meets quality standards, satisfies user needs,
and functions as expected throughout its development lifecycle.
Types of tests:
Boot tests:
1. Boot Options:
• Boot options refer to the different sources from which a system can
load its software image during startup.
• Common boot options include booting from ROM (Read-Only Memory),
FLASH card (non-volatile memory), and PCMCIA cards (memory cards
commonly used in laptops and other devices).
2. Minimum Configuration:
• The minimum configuration of a system represents the smallest setup
in which the system can operate.
• For example, in the case of a router, the minimum configuration might
consist of only one line card inserted into its slots.
• Boot tests ensure that the system can successfully boot up and function
with this minimal setup, verifying that essential components are
detected and initialized correctly.
3. Maximum Configuration:
• Conversely, the maximum configuration represents the largest possible
setup with all available resources utilized.
• For a router, the maximum configuration would involve filling all
available slots with line cards.
• Boot tests also verify the system's ability to handle this maximum
configuration, ensuring that it can manage the increased hardware load
and still boot up effectively.
4. Testing Process:
• During boot testing, the system is powered on and initialized using each
supported boot option.
• For each boot option, both minimum and maximum configurations are
tested.
• The system's behavior during startup is monitored and verified to
ensure that it successfully loads the software image and initializes all
necessary components.
• Any failures or issues encountered during the boot process are
documented and investigated to identify potential problems or areas
for improvement.
In summary, boot tests are essential for validating a system's ability to boot up and
load its software image from various boot options. By testing both minimum and
maximum configurations, these tests ensure that the system can operate reliably
across different setups and hardware configurations.
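A hedged sketch of how such a boot-test matrix might be driven; power_cycle_and_boot and verify_components are hypothetical stand-ins for a real lab harness:

```python
# Sketch of a boot-test matrix (power_cycle_and_boot and
# verify_components are hypothetical; a real lab harness would
# provide equivalents).

BOOT_OPTIONS = ["ROM", "FLASH", "PCMCIA"]
CONFIGURATIONS = {"minimum": 1, "maximum": 16}  # e.g., line cards inserted

def power_cycle_and_boot(option, num_cards):
    """Hypothetical: power on the device, boot from `option`, return status."""
    return {"booted": True, "cards_detected": num_cards}

def verify_components(status, num_cards):
    """Check the image loaded and every inserted card was initialized."""
    return status["booted"] and status["cards_detected"] == num_cards

for option in BOOT_OPTIONS:
    for name, cards in CONFIGURATIONS.items():
        status = power_cycle_and_boot(option, cards)
        result = "PASS" if verify_components(status, cards) else "FAIL"
        print(f"boot from {option}, {name} configuration: {result}")
```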
Upgrade/Downgrade Tests:
1. Purpose: Upgrade/downgrade tests ensure that the system's software can be
smoothly updated to a newer version or reverted back to a previous version.
2. Upgrade Process: These tests verify that the system can successfully upgrade
from the (n - 1)th version of the software to the nth version without
encountering any issues.
3. Downgrade Process (Rollback): If the upgrade process fails for any reason,
such as user interruption, network disruption, or insufficient disk space, these
tests ensure that the system can roll back to the (n - 1)th version smoothly.
4. Failure Scenarios: Upgrade/downgrade tests cover various failure scenarios,
including user-invoked aborts, network disruptions, system reboots, and self-
detection of upgrade failures due to reasons like disk space constraints or
version incompatibilities.
5. Graceful Handling: The tests verify that the system can handle these failure
scenarios gracefully, ensuring that the system remains stable and functional
even during upgrade or rollback processes.
6. Verification: The tests validate that the upgrade and rollback procedures are
executed correctly, and the system behaves as expected throughout the
process, maintaining data integrity and functionality.
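A minimal sketch of an upgrade-with-rollback check, assuming a hypothetical install harness (UpgradeError, install, and the version labels are all invented for this example):

```python
# Sketch of an upgrade-with-rollback check (install and UpgradeError
# are hypothetical stand-ins for a real upgrade harness).

class UpgradeError(Exception):
    pass

def install(version, fail=False):
    """Hypothetical: install the given software version."""
    if fail:
        raise UpgradeError(f"upgrade to {version} failed (e.g., disk full)")
    return version

def test_upgrade_and_rollback(current="n-1", target="n", inject_failure=False):
    try:
        running = install(target, fail=inject_failure)
    except UpgradeError:
        # Failure scenario: roll back gracefully to the previous version.
        running = install(current)
    # Whatever happened, the system must end up on a known-good version.
    assert running in (current, target)
    print(f"system is running version {running!r}")

test_upgrade_and_rollback()                      # happy path: ends on "n"
test_upgrade_and_rollback(inject_failure=True)   # rollback: ends on "n-1"
```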
FUNCTIONALITY TESTS:
Communication Systems Tests:
Communication systems tests are designed to verify the implementation of the
communication systems as specified in the customer requirements specification.
1. Basic Interconnection Tests:
• These tests check if the system can make simple connections.
• They ensure that devices can link up over networks like Ethernet or Wi-
Fi.
2. Capability Tests:
• Capability tests ensure that the system offers the features it's supposed
to.
• They check if the system can do what it's meant to, like supporting
specific protocols or handling data in certain ways.
3. Behavior Tests:
• Behavior tests look at how the system acts in different situations.
• They check if the system behaves correctly during tasks like sending
data, handling errors, and negotiating with other systems.
4. Systems Resolution Tests:
• These tests give clear answers to specific questions about the system.
• They focus on confirming if the system meets important requirements
or standards, like hitting performance targets or following
communication rules.
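For the basic interconnection idea, a minimal self-contained Python sketch: a throwaway echo server on localhost stands in for the remote system, and the test verifies that a connection can be opened and data exchanged over it:

```python
# Sketch of a basic interconnection test: can a client open a TCP
# connection to a peer? A throwaway server on localhost stands in for
# the remote system.

import socket
import threading

def echo_server(sock):
    conn, _ = sock.accept()
    conn.sendall(conn.recv(64))  # echo one message back
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# Basic interconnection test: connect, send, and receive over the link.
with socket.create_connection(("127.0.0.1", port), timeout=5) as client:
    client.sendall(b"ping")
    assert client.recv(64) == b"ping"
print("basic interconnection test passed")
```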
Module Tests:
Module tests are focused on ensuring that each individual component, or module, of
a system works correctly on its own. These tests are crucial because the proper
functioning of each module contributes to the overall performance of the entire
system. Here's a breakdown of how module tests work:
1. Purpose: The main goal of module tests is to verify that each module operates
according to its designated functionality as outlined in the system's
requirements.
2. Verification Process: Module tests involve verifying the performance of each
module independently, without considering interactions with other modules.
This ensures that any issues or errors can be isolated and addressed efficiently.
3. Example Scenario: Consider an Internet router comprising various modules
such as line cards, system controllers, power supplies, and fan trays. Module
tests for different components might include:
• Ethernet Line Cards: Tests verify functionalities like autosensing,
latency, collision detection, supported frame types, and acceptable
frame lengths.
• Fan Tray: Tests ensure that the fan status is accurately detected,
reported by the system software, and displayed correctly through LEDs
(e.g., green for "in service" and red for "out of service").
• T1/E1 Line Cards: Tests for T1/E1 line cards typically cover various
aspects such as:
• Clocking mechanisms (e.g., internal source timing and receive
clock recovery).
• Alarms detection, including loss of signal (LOS), loss of frame
(LOF), and alarm indication signal (AIS).
• Line coding methods like alternate mark inversion (AMI), bipolar
8 zero substitution (B8ZS) for T1, and high-density bipolar 3
(HDB3) for E1.
• Framing standards such as Digital Signal 1 (DS1) and E1 framing.
• Channelization capabilities, ensuring the ability to transfer user
traffic across channels multiplexed from different time slots on
T1/E1 links.
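A hedged sketch of the fan-tray module test in isolation; FanTray and FakeDriver are invented here, whereas a real module test would talk to the actual hardware or a simulator:

```python
# Sketch of a module test for the fan tray in isolation (FanTray and
# its driver interface are hypothetical).

class FanTray:
    def __init__(self, driver):
        self.driver = driver

    def status(self):
        # Map the raw driver reading to the reported state and LED color.
        if self.driver.read_rpm() > 0:
            return ("in service", "green")
        return ("out of service", "red")

class FakeDriver:
    def __init__(self, rpm):
        self.rpm = rpm
    def read_rpm(self):
        return self.rpm

# Module test: the fan status must be detected and reported correctly,
# with the matching LED color, for both healthy and failed fans.
assert FanTray(FakeDriver(rpm=3000)).status() == ("in service", "green")
assert FanTray(FakeDriver(rpm=0)).status() == ("out of service", "red")
print("fan tray module tests passed")
```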
Logging and Tracing Tests:
Logging and tracing tests are essential for verifying the proper configuration and
functionality of logging and tracing mechanisms within a system.
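A minimal sketch of such a test using Python's standard logging and unittest modules; process_order is invented for this example:

```python
# Sketch of a logging test: verify that a component actually emits the
# log record it is supposed to (process_order is invented here).

import logging
import unittest

log = logging.getLogger("orders")

def process_order(order_id):
    log.info("processing order %s", order_id)
    return True

class LoggingTest(unittest.TestCase):
    def test_order_processing_is_logged(self):
        # assertLogs fails the test if no matching record is emitted.
        with self.assertLogs("orders", level="INFO") as captured:
            process_order(42)
        self.assertIn("processing order 42", captured.output[0])

unittest.main(argv=["logging-test"], exit=False)
```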
Security tests:
Security tests are procedures designed to assess whether a software system adheres
to predefined security requirements, which typically include three main aspects:
confidentiality, integrity, and availability.
1. Confidentiality: This aspect ensures that sensitive data and processes are
protected from unauthorized access or disclosure. Security tests evaluate
whether the system effectively prevents unauthorized users from accessing
confidential information.
2. Integrity: Integrity measures whether data and processes are safeguarded
against unauthorized modification or alteration. Security tests aim to verify
that the system maintains the accuracy and consistency of data, preventing
unauthorized changes.
3. Availability: Availability refers to the requirement that data and processes
should remain accessible to authorized users, without disruptions or denial-of-
service attacks.
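A hedged sketch testing the first two aspects against a toy access-control layer (Document and the user names are invented; real security tests would target the system's actual authentication and authorization paths):

```python
# Sketch: confidentiality and integrity tests against a toy
# access-control layer (Document is invented for this example).

class Document:
    def __init__(self, body, owner):
        self.body, self.owner = body, owner

    def read(self, user):
        # Confidentiality: only the owner may read.
        if user != self.owner:
            raise PermissionError("unauthorized read")
        return self.body

    def write(self, user, body):
        # Integrity: only the owner may modify.
        if user != self.owner:
            raise PermissionError("unauthorized write")
        self.body = body

doc = Document("salary data", owner="alice")

# Confidentiality test: an unauthorized user must be refused access.
try:
    doc.read("mallory")
    raise AssertionError("confidentiality violated")
except PermissionError:
    print("confidentiality test passed")

# Integrity test: an unauthorized modification must be rejected.
try:
    doc.write("mallory", "tampered")
    raise AssertionError("integrity violated")
except PermissionError:
    print("integrity test passed")
```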
ROBUSTNESS TESTS:
Robustness refers to how sensitive a system is to erroneous input and to changes in
its operational environment. Tests in this category are designed to verify how
gracefully the system behaves in error situations and in a changed operational
environment. The purpose is to deliberately break the system, not as an end in
itself, but as a means to find errors.
Boundary Value Tests:
Boundary value tests are designed to cover boundary conditions, special values, and
system defaults. The tests include providing invalid input data to the system and
observing how the system reacts to the invalid input. The system should respond
with an error message or initiate an error processing routine.
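A minimal sketch, assuming a field specified to accept values 1 through 100 (set_volume is invented): the tests probe both sides of each boundary and check that invalid input triggers an error:

```python
# Sketch of boundary value tests for a field specified to accept 1..100
# (set_volume is invented for this example).

def set_volume(level):
    if not 1 <= level <= 100:
        raise ValueError(f"volume {level} out of range 1..100")
    return level

for valid in (1, 2, 99, 100):          # on and just inside the boundaries
    assert set_volume(valid) == valid

for invalid in (0, 101):               # just outside the boundaries
    try:
        set_volume(invalid)
        raise AssertionError(f"{invalid} was wrongly accepted")
    except ValueError:
        pass                           # error reported, as required

print("boundary value tests passed")
```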
High-Availability Tests:
High-availability tests are designed to verify the redundancy of individual modules,
including the software that controls these modules. The goal is to verify that the
system gracefully and quickly recovers from hardware and software failures without
adversely impacting the operation of the system. The concept of high availability is
also known as fault tolerance.
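A hedged sketch of a failover check; Module and RedundantPair are invented stand-ins for a redundant hardware/software pair:

```python
# Sketch of a high-availability (failover) test: when the active module
# fails, the standby must take over without losing service.

class Module:
    def __init__(self, name):
        self.name, self.alive = name, True
    def handle(self, request):
        if not self.alive:
            raise RuntimeError(f"{self.name} is down")
        return f"{self.name} served {request}"

class RedundantPair:
    def __init__(self, active, standby):
        self.active, self.standby = active, standby
    def handle(self, request):
        try:
            return self.active.handle(request)
        except RuntimeError:
            # Failover: promote the standby and retry the request.
            self.active = self.standby
            return self.active.handle(request)

pair = RedundantPair(Module("primary"), Module("standby"))
assert "primary" in pair.handle("req-1")
pair.active.alive = False                     # inject a hardware failure
assert "standby" in pair.handle("req-2")      # service continues
print("high-availability failover test passed")
```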
Unit 4
First definition of software reliability:
Software reliability is defined as the probability of a software system operating
without failure for a specified time in a specified environment. The key elements of
this definition include:
1. Probability of Failure-Free Operation: This refers to the likelihood that the
software will function without any failures during a specified period.
2. Length of Time of Failure-Free Operation: The duration for which the software
is expected to operate without failure is crucial since users often require the
software to complete tasks within a certain time frame.
3. Execution Environment: This involves the conditions and manner in which
users interact with the software, which can vary significantly.
Execution environment affects reliability. Different users interact with software differently,
leading to varied reliability experiences. A word processor may behave differently for small
versus large documents, invoking different parts and faults.
Understanding the execution environment is key. If a system has ten functions but one group
uses only seven fault-free ones, they perceive higher reliability than another group using all
ten with some faults.
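The notes stop at the verbal definition; as a standard illustration (not taken from these notes), the exponential model with a constant failure rate λ expresses the same idea quantitatively:

```latex
% Exponential reliability model (a standard illustration, assumed here):
% R(t) = probability of failure-free operation for a duration t,
% given a constant failure rate \lambda.
\[ R(t) = e^{-\lambda t} \]
% Example: with \lambda = 0.01 failures per hour,
% R(10) = e^{-0.1} \approx 0.905, i.e., roughly a 90.5% chance of
% operating for 10 hours without failure.
```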
Operation:
An operation is a major system-level logical task of short duration which returns
control to the initiator when the operation is complete and whose processing is
substantially different from other operations. In the following, the key characteristics
of an operation are explained:
• Major means that an operation is related to a functional requirement or feature of
a software system.
• An operation is a logical concept in the sense that it involves software, hardware,
and user actions. Different actions may exist as different segments of processing. The
different processing time segments can be contiguous, non-contiguous, sequential, or
concurrent.
• Short duration means that an operation completes quickly, typically such that a
software system handles hundreds of operations per hour.
• Substantially different processing means that an operation is considered to be an
entity in the form of some lines of source code, and there is a high probability that
such an entity contains a fault not found in another entity.
FIVE VIEWS OF SOFTWARE QUALITY:
1. Transcendental View: Quality is an ideal recognized through experience but too
complex to define precisely. It suggests that good quality is inherently noticeable.
This view does not use concrete measures to define quality.
2. User View: Quality is determined by how well a product meets user needs and
expectations. It includes both functionality and subjective elements like usability
and reliability. A product is considered high quality if it satisfies many users.
3. Manufacturing View: Quality is seen as meeting specified requirements,
emphasizing consistency and reducing costs. The focus is on manufacturing
correctly the first time to lower expenses. Standards like CMM and ISO 9001 are
based on this view.
4. Product View: Quality is linked to good internal properties leading to good
external qualities. This view explores the relationship between a product's
internal attributes and its performance. For example, high modularity improves
testability and maintainability.
5. Value-Based View: Quality balances excellence and cost, focusing on how much
customers are willing to pay. It considers both the quality and the economic value
of the product. The goal is to achieve a cost-effective level of quality.
Quality Criteria:
A quality criterion is an attribute of a quality factor relevant to software
development. Examples include:
• Modularity: An attribute of software architecture. High modularity allows
cohesive components to be grouped in one module, enhancing system
maintainability.
• Traceability: An attribute that enables developers to map user requirements
to specific modules, improving system correctness.
ISO 9000:2000 SOFTWARE QUALITY STANDARD:
The International Organization for Standardization (ISO) has developed a series of
standards known as ISO 9000, which focus on quality assurance and management.
Founded in 1946 and based in Geneva, Switzerland, the ISO creates standards
applicable to a wide range of products, from spices to software. These standards are
updated every 5-8 years, with the latest version released in 2000, known as ISO
9000:2000.
Previous versions included ISO 9002 and ISO 9003, which addressed quality
assurance in production, installation, and final inspection, but these are not part of
ISO 9000:2000.
Over the years, multiple discipline-specific Capability Maturity Models (CMMs) were
developed. These CMMs vary by discipline, structure (continuous vs. staged
improvements), and definitions of maturity, reflecting the unique aspects of each
field.
Emergence of CMMI
To address these issues, the Capability Maturity Model Integration (CMMI) was
developed, incorporating elements from several models, including the Capability
Maturity Model for Software (SW-CMM), the Systems Engineering Capability Model
(SECM), and the Integrated Product Development CMM (IPD-CMM).
Benefits of CMMI
CMMI provides a unified approach to process improvement, which is crucial for:
1. Supplier Evaluation: Ensuring subsystems, often developed by different vendors,
meet maturity standards.
2. Diverse Components: Ensuring interoperability and coexistence of diverse system
components like databases, communications, security, and real-time processing.
3. Specialized Contexts: Developing software in specialized environments, such as
Internet routing software on specialized hardware and operating systems.