SE - Unit - 4 & 5
Software testing is a method of assessing the functionality of a software program. The
process checks whether the actual software matches the expected requirements and ensures the
software is bug-free. The purpose of software testing is to identify errors, faults, or missing
requirements by comparing the software's behavior against the actual requirements. It mainly
aims at measuring the specification, functionality, and performance of a software program or
application.
Software testing can be divided into two steps:
1. Verification: It refers to the set of tasks that ensure that the software correctly
implements a specific function. It means “Are we building the product right?”.
2. Validation: It refers to a different set of tasks that ensure that the software that has been
built is traceable to customer requirements. It means “Are we building the right
product?”
TYPES OF TESTING:
1. Black Box Testing: A technique in which the tester does not have access to the source
code of the software. Testing is conducted at the software interface, without any
concern for the internal logical structure of the software.
2. White Box Testing: A technique in which the tester is aware of the internal workings
of the product and has access to its source code. Testing is conducted by making sure
that all internal operations are performed according to the specifications.
3. Grey Box Testing: A technique in which the testers have some knowledge of the
implementation, but need not be experts in it.
4. Static Testing: A software testing method performed to check for defects without
actually executing the code of the software application, whereas in dynamic testing
the code is executed to detect defects. Static testing is performed in the early stages
of development, when sources of failure are easier to find and fix. Errors that cannot
be found using dynamic testing can often be found easily by static testing.
5. Structural Testing: Structural testing uses the internal design of the software for
testing; in other words, it is testing performed by a team that knows the development
of the software. Because it is tied to the internal design and implementation, it
involves development team members in the testing team, and it tests different aspects
of the software according to its type. Structural testing is the opposite of behavioral
testing.
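To make the black-box/white-box distinction concrete, here is a minimal Python sketch; the `grade` function is a made-up example, not from any particular system:

```python
def grade(score):
    """Return a letter grade for a score from 0 to 100."""
    if score >= 90:
        return "A"
    if score >= 60:
        return "B"
    return "F"

# Black-box view: only observable input/output pairs are checked,
# with no reference to the source code.
assert grade(95) == "A"
assert grade(70) == "B"
assert grade(10) == "F"

# White-box view: the tester reads the implementation and exercises
# every branch, including the boundary values visible in the code.
assert grade(90) == "A"   # boundary of the first branch
assert grade(60) == "B"   # boundary of the second branch
assert grade(59) == "F"   # falls through both branches
```

The same function is tested both ways; only the tester's knowledge of the code differs.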
CHALLENGES IN WHITE-BOX AND BLACK-BOX TESTING: Black box testing and white
box testing are two distinct approaches to testing software, each with its own set of challenges.
Let's explore the challenges associated with each:
Black Box Testing Challenges:
1. Limited Visibility:
Challenge: Testers have no knowledge of the internal code or logic.
Impact: It might be challenging to identify certain types of defects that require knowledge
of the internal workings of the software.
● Resource Availability:
● Availability of resources (such as databases or external services) for integration
testing may be a challenge.
4. Testing Techniques:
● Top-Down Stubs vs. Bottom-Up Drivers:
● Stubs are used in top-down testing to simulate lower-level modules.
● Drivers are used in bottom-up testing to simulate higher-level modules.
● Functional and Non-functional Testing:
● Both functional and non-functional aspects are considered during integration
testing (e.g., performance, security).
5. Tools and Automation:
● Integration Testing Tools:
● Various tools are available to automate and streamline integration testing
processes.
● Continuous Integration (CI):
● Integration testing is often integrated into continuous integration pipelines for
frequent and automated testing.
6. Verification and Validation:
● Verification:
● Ensures that individual components meet their specifications.
● Validation: Ensures that integrated components work together as intended in the
overall system.
7. Documentation:
● Test Cases and Results:
● Well-documented test cases and results are essential for tracking the integration
testing process.
Integration testing is essential for identifying issues related to the interaction of components early
in the development process, reducing the likelihood of integration-related problems in the later
stages of software development. It plays a crucial role in building a reliable and robust software
system.
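The stub/driver distinction mentioned under Testing Techniques above can be sketched in Python; the `checkout` function and the gateway classes are hypothetical, chosen only to illustrate the idea:

```python
# Module under test: a high-level checkout that depends on a
# lower-level payment gateway module.
def checkout(cart_total, gateway):
    if gateway.charge(cart_total):
        return "confirmed"
    return "failed"

# Top-down testing: a stub stands in for the not-yet-integrated
# lower-level module and returns canned answers.
class GatewayStub:
    def charge(self, amount):
        return True  # no real network call

assert checkout(42.0, GatewayStub()) == "confirmed"

# Bottom-up testing: a driver is a small harness that exercises a
# lower-level module before its real caller exists.
class Gateway:
    def charge(self, amount):
        return amount > 0

def driver():
    gw = Gateway()
    return [gw.charge(10), gw.charge(-5)]

assert driver() == [True, False]
```

The stub simulates the module below the one under test; the driver simulates the caller above it.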
Unit-5
SYSTEM TESTING OVERVIEW
System testing, also referred to as system-level testing or system integration testing, is the
process in which a quality assurance (QA) team evaluates how the various components of an
application interact together in the full, integrated system or application. System testing verifies
that an application performs tasks as designed. It is a type of black box testing that focuses on the
functionality of an application rather than the inner workings of the system, which are the
concern of white box testing.
System testing, for example, might check that every kind of user input produces the
intended output across the application. System testing is the third level of testing in the software
development process. It's typically performed before acceptance testing and after integration
testing.
FUNCTIONAL TESTING VERSUS NONFUNCTIONAL TESTING:
Difference between Functional Testing and Non Functional Testing
Parameters        | Functional testing                                  | Non-functional testing
Usage             | Helps to validate the behavior of the application.  | Helps to validate the performance of the application.
Requirements      | Carried out using the functional specification.     | Carried out using performance specifications.
Manual testing    | Easy to execute by manual testing.                  | Very hard to perform manually.
Functionality     | Describes what the product does.                    | Describes how the product works.
Example test case | Check login functionality.                          | The dashboard should load in 2 seconds.
● Test each function of the application: Functional testing tests each function of the
application by providing the appropriate input and verifying the output against the
functional requirements of the application.
● Test primary entry function: In functional testing, the tester tests each entry function
of the application to check all the entry and exit points.
● Test flow of the GUI screen: In functional testing, the flow of the GUI screen is
checked so that the user can navigate throughout the application.
● Basic Usability: Functional testing involves basic usability testing to check whether
the user can freely navigate through the screens without any difficulty.
● Mainline functions: This involves testing the main feature and functions of the
application.
● Accessibility: This involves testing the accessibility of the system for the user.
● Error Conditions: Functional testing involves checking whether the appropriate
error messages are being displayed or not in case of error conditions.
Functional Testing Process
Functional testing involves the following steps:
1. Identify test input: This step involves identifying the functionality that needs to be
tested. This can range from usability functions and mainline functions to error
conditions.
2. Compute expected outcomes: Create input data based on the specifications of the
function and determine the output based on these specifications.
3. Execute test cases: This step involves executing the designed test cases and
recording the output.
4. Compare the actual and expected output: In this step, the actual output obtained
after executing the test cases is compared with the expected output to determine the
amount of deviation in the results. This step reveals if the system is working as
expected or not.
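The four steps above can be sketched in Python; the `login` function here is a hypothetical system under test, used only to show the shape of the process:

```python
# A hypothetical login check used as the function under test.
def login(username, password):
    return username == "alice" and password == "secret"

# 1. Identify test input.
cases = [("alice", "secret"), ("alice", "wrong"), ("", "")]

# 2. Compute expected outcomes from the specification.
expected = [True, False, False]

# 3. Execute the test cases and record the output.
actual = [login(u, p) for u, p in cases]

# 4. Compare the actual and expected output.
assert actual == expected
```

Any deviation between `actual` and `expected` in the last step indicates the system is not working as specified.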
Type of Functional Testing Techniques
Unit Testing: Unit testing is the type of functional testing technique where the individual
units or modules of the application are tested. It ensures that each module is working
correctly.
Integration Testing: In Integration testing, combined individual units are tested as a group
and expose the faults in the interaction between the integrated units.
Smoke Testing: Smoke testing is a type of functional testing technique where the basic
functionality or features of the application are tested, ensuring that the most important
functions work properly.
User Acceptance Testing: User acceptance testing is done by the client to certify that the
system meets the requirements and works as intended. It is the final phase of testing before
the product release.
Interface Testing: Interface testing is a type of software testing technique that checks the
proper interaction between two different software systems.
Usability Testing: Usability testing is done to measure how easy and user-friendly a
software application is.
System Testing: System testing is a type of software testing that is performed on the
complete integrated system to evaluate the compliance of the system with the
corresponding requirements.
Regression Testing: Regression testing is done to make sure that code changes do not
affect the existing functionality and features of the application. It concentrates on
whether all parts are still working after a change.
Sanity Testing: Sanity testing is a subset of regression testing and is done to make sure
that the code changes introduced are working as expected.
White box Testing: White box testing is a type of software testing that allows the tester to
verify the internal workings of the software system. This includes analyzing the code,
infrastructure, and integrations with the external system.
Black box Testing: Black box testing is a type of software testing where the functionality
of the software system is tested without looking at the internal working or structures of the
software system.
Database Testing: Database testing is a type of software testing that checks the schema,
tables, etc of the database under test.
Ad hoc Testing: Ad hoc testing, also known as monkey testing or random testing, is a type
of software testing that does not follow any documentation or test plan.
Recovery Testing: Recovery testing is a type of software testing that verifies the
software's ability to recover from failures such as hardware failures, software failures,
crashes, etc.
Static Testing: Static testing is a type of software testing which is performed to check the
defects in software without actually executing the code of the software application.
Greybox Testing: Grey box testing is a type of software testing that combines black box
and white box testing.
Component Testing: Component testing also known as program testing or module testing
is a type of software testing that is done after the unit testing. In this, the test objects can
be tested independently as a component without integrating with other components.
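As one small illustration of the first two techniques in the list above, here is a sketch using a made-up shopping-cart module:

```python
# Hypothetical units of a shopping application.
def add_item(cart, item):
    return cart + [item]

def cart_total(cart):
    return sum(price for _, price in cart)

# Unit testing: each module is checked in isolation.
assert add_item([], ("pen", 2)) == [("pen", 2)]
assert cart_total([("pen", 2), ("pad", 3)]) == 5

# Smoke testing: one quick check that the most important
# end-to-end path works before deeper testing begins.
cart = add_item(add_item([], ("pen", 2)), ("pad", 3))
assert cart_total(cart) == 5
```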
Benefits of Functional Testing
● Bug-free product: Functional testing ensures the delivery of a bug-free and high-
quality product.
● Customer satisfaction: It ensures that all requirements are met and ensures that the
customer is satisfied.
● Testing focused on specifications: Functional testing is focused on specifications as
per customer usage.
● Proper working of application: This ensures that the application works as expected
and ensures proper working of all the functionality of the application.
● Improves quality of the product: Functional testing ensures the security and safety
of the product and improves the quality of the product.
Limitations of Functional Testing
● Missed critical errors: There are chances while executing functional tests that critical
and logical errors are missed.
● Redundant testing: There are high chances of performing redundant testing.
● Incomplete requirements: If the requirement is not complete then performing this
testing becomes difficult.
Non Functional Testing:
Non-functional testing in software engineering focuses on aspects of a system that do not
involve specific behaviors or functions. Instead, it evaluates the system's performance, reliability,
scalability, and other qualities that contribute to its overall effectiveness. Here are some key types
of non-functional testing:
Performance Testing:
● Load Testing: Assessing the system's ability to handle a specific amount of load or
concurrent users.
● Stress Testing: Evaluating the system's behavior under extreme conditions to ensure
it can handle unexpected loads.
Reliability Testing:
● Availability Testing: Ensuring that the system is available and accessible whenever
it is needed.
● Reliability Testing: Assessing the system's ability to consistently perform its
functions without failure.
Scalability Testing:
● Vertical Scaling: Evaluating the system's ability to handle an increased load by
adding more resources (e.g., CPU, memory) to a single machine.
● Horizontal Scaling: Assessing the system's ability to handle an increased load by
adding more machines to a network.
Usability Testing:
● User Interface Testing: Evaluating the user interface for ease of use,
responsiveness, and overall user experience.
Compatibility Testing:
● Compatibility Testing: Ensuring that the software works correctly across different
devices, browsers, operating systems, and network environments.
Security Testing:
● Security Testing: Identifying vulnerabilities and weaknesses in the system to
prevent unauthorized access, data breaches, or other security threats.
Maintainability Testing:
● Maintainability Testing: Assessing how easy it is to maintain and update the
software, including code readability, modularity, and ease of fixing defects.
Portability Testing:
● Portability Testing: Ensuring that the software can be easily transferred from one
environment to another without compromising functionality.
Compliance Testing:
● Compliance Testing: Verifying that the software complies with industry standards,
regulations, and legal requirements.
Documentation Testing:
● Documentation Testing: Ensuring that the system documentation is accurate, up-
to-date, and comprehensive.
Non-functional testing is crucial for delivering a reliable and high-quality software product. It
helps identify and address issues related to performance, security, and other critical aspects that
can significantly impact the user experience and overall success of the software.
ACCEPTANCE TESTING AND ITS CRITERIA:
Acceptance testing is a crucial phase in the software development life cycle (SDLC) that ensures
a system meets its specified requirements and is ready for deployment. It involves evaluating the
system's functionality, performance, and other aspects to determine whether it satisfies the
acceptance criteria set by the stakeholders. Here are the key aspects of acceptance testing and its
criteria:
Acceptance Testing Types:
User Acceptance Testing (UAT):
● Purpose: Validates that the system meets the business requirements and is
acceptable to end-users.
● Participants: End-users or business representatives.
● Criteria:
● All critical business processes are functioning correctly.
● User interfaces are intuitive and user-friendly.
● Business workflows align with user expectations.
● System performance meets acceptable standards.
Operational Acceptance Testing (OAT):
● Purpose: Verifies that the system can be operated and maintained in its target
environment.
● Participants: Operations and support teams.
● Criteria:
● System can be installed and configured successfully.
● Monitoring and error-handling mechanisms are effective.
● Backups and recovery procedures are reliable.
Regulatory Acceptance Testing:
● Purpose: Ensures compliance with industry regulations or legal requirements.
● Participants: Regulatory authorities or compliance officers.
● Criteria:
● The system adheres to specified regulations and standards.
● Necessary security measures are in place.
PERFORMANCE TESTING:
Performance testing is a type of testing that evaluates how a system performs under various
conditions and workloads. The goal is to ensure that the software meets specified performance
requirements and can handle the expected user load without degradation in speed or
responsiveness. Performance testing helps identify bottlenecks, assess scalability, and optimize the
overall performance of a system. Here are the key types of performance testing and their objectives:
Load Testing:
● Objective: Determines how the system performs under expected user loads.
● Activities:
● Simulates the expected number of concurrent users.
● Measures response times and throughput under the load.
● Identifies performance bottlenecks.
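A minimal load-test sketch follows; the request handler and its 0.01-second delay are stand-ins for a real system, and real load tests would use dedicated tooling rather than a thread pool in the test script:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in request handler; a real load test would hit the actual system.
def handle_request(i):
    start = time.perf_counter()
    time.sleep(0.01)  # simulated processing time
    return time.perf_counter() - start

# Simulate 50 concurrent users and measure per-request response times.
with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = list(pool.map(handle_request, range(50)))

avg = sum(latencies) / len(latencies)
print(f"requests: {len(latencies)}, average latency: {avg:.3f}s")
assert avg < 1.0  # example acceptance threshold, not a real SLA
```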
Stress Testing:
● Objective: Evaluates the system's behavior under extreme conditions, such as heavy
traffic or resource exhaustion.
● Activities:
● Tests beyond the normal operational capacity.
● Assesses system stability and responsiveness under stress.
● Identifies breaking points and failure conditions.
Soak Testing (Endurance Testing):
● Objective: Checks for system performance and stability over an extended period
under normal load conditions.
● Activities:
● Maintains a steady load for an extended duration.
● Monitors for memory leaks, performance degradation, or other issues over
time.
Scalability Testing:
● Objective: Measures the system's ability to scale with increased user load or
resource demands.
● Activities:
● Tests the system's performance as the user base or data volume grows.
● Assesses how well the system can be expanded to handle increased load.
Volume Testing:
● Objective: Evaluates the system's performance when handling large amounts of
data.
● Activities:
● Tests the software's ability to manage a substantial volume of data.
● Assesses database performance, file handling, and overall data processing
capabilities.
Concurrency Testing:
● Objective: Examines the system's behavior when multiple users access it
simultaneously.
● Activities:
● Simulates concurrent user interactions.
● Identifies and resolves issues related to data integrity and access conflicts.
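A sketch of the kind of data-integrity issue concurrency testing targets, using a hypothetical shared counter; removing the lock is exactly the sort of defect such a test is meant to expose:

```python
import threading

# Shared state touched by several "users" at once; concurrency testing
# looks for lost updates and other data-integrity problems here.
counter = 0
lock = threading.Lock()

def deposit(times):
    global counter
    for _ in range(times):
        with lock:  # without this lock, updates could be lost
            counter += 1

threads = [threading.Thread(target=deposit, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Integrity holds because the updates are serialized by the lock.
assert counter == 40_000
```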
Isolation Testing:
● Objective: Tests the performance of individual components or modules in isolation.
● Activities:
● Focuses on specific functions or components to identify performance issues.
● Helps pinpoint the source of performance problems within the system.
Compatibility Testing:
● Objective: Ensures that the software performs well across different environments,
devices, and configurations.
● Activities:
● Tests performance on various browsers, operating systems, and hardware
setups.
● Verifies that the application meets performance criteria in diverse
environments.
Real User Monitoring (RUM):
● Objective: Monitors and analyzes the actual user experience in real-time.
● Activities:
● Captures and analyzes performance data from real users.
● Provides insights into user interactions and experiences.
Performance testing is a crucial step in the software development life cycle to ensure that the
application can handle expected loads and deliver a satisfactory user experience. It helps identify
and address performance issues before the software is deployed to production.
Factors governing Performance testing:
Several factors govern performance testing, and understanding these factors is essential for
designing effective performance testing strategies. Here are the key factors that influence
performance testing:
System Architecture:
User Load:
● The number of concurrent users accessing the system can have a significant
impact on its performance. Performance testing should simulate realistic user
loads to assess how the system behaves under normal and peak usage conditions.
Scenarios and Use Cases:
● Performance testing should align with the expected usage scenarios and use cases
of the application. Testing should cover common user activities to accurately
reflect real-world conditions.
Network Conditions:
● The performance of a system can be influenced by the speed, bandwidth, and
reliability of the network. Performance testing should consider variations in
network conditions to simulate different user environments.
Data Volume and Complexity:
● The amount and complexity of data processed by the system can impact
performance. Performance testing should evaluate how the system handles
varying data loads, including large datasets and complex transactions.
Concurrency and Load Patterns:
● Understanding how users interact with the system concurrently and the patterns of
load (e.g., bursty, steady-state) is crucial. Different load patterns can reveal how
the system responds under different usage scenarios.
Response Time Requirements:
● Every application has specific response time requirements. Performance testing
should assess whether the system meets these requirements under different
conditions, ensuring a responsive user experience.
Transaction Throughput:
● The rate at which the system can process transactions is a critical performance
metric. Performance testing should measure transaction throughput under varying
loads to ensure it aligns with the application's performance goals.
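Transaction throughput can be measured with a simple fixed-window loop; the transaction here is a hypothetical unit of work, and real measurements would exercise the actual system under load:

```python
import time

# Hypothetical transaction: one unit of work for the system under test.
def process_transaction(n):
    return sum(range(n))

# Measure throughput: transactions completed per second in a fixed window.
start = time.perf_counter()
count = 0
while time.perf_counter() - start < 0.5:  # half-second measurement window
    process_transaction(1000)
    count += 1
elapsed = time.perf_counter() - start
print(f"throughput: {count / elapsed:.0f} transactions/sec")
assert count > 0
```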
Third-Party Integrations:
● If the system integrates with third-party services or APIs, the performance of
these integrations can impact overall system performance. Performance testing
should include scenarios involving third-party interactions.
Hardware Utilization:
● Monitoring hardware resource utilization, such as CPU, memory, and disk I/O,
provides insights into system bottlenecks. Performance testing should assess how
the system utilizes hardware resources under different loads.
Caching and Content Delivery:
● The use of caching mechanisms and content delivery networks can affect
performance. Performance testing should consider scenarios with and without
caching to evaluate the impact on response times and resource utilization.
Security Considerations:
● Security measures, such as encryption and authentication, can introduce overhead.
Performance testing should assess the impact of security features on response
times and overall system performance.
Environment Configuration:
● Differences in test, staging, and production environments can impact
performance. It's essential to replicate production-like conditions in the testing
environment to obtain accurate performance results.
Failure and Recovery Scenarios:
● Performance testing should include scenarios where the system is subjected to
failures or sudden spikes in load. Assessing how the system recovers from such
situations is crucial for overall system resilience.
Regulatory Compliance:
● For systems that must adhere to specific regulations or compliance standards,
performance testing should ensure that the system meets the required performance
criteria while maintaining compliance.
Considering these factors in the performance testing process helps identify potential issues early
in the development life cycle and ensures that the system performs optimally under various
conditions.
WHAT IS REGRESSION TESTING:
Regression testing is a type of software testing that verifies whether recent changes to the
software, such as bug fixes, enhancements, or new features, have adversely affected the existing
functionalities. The primary goal of regression testing is to ensure that the modifications have not
introduced new defects or negatively impacted the software's previously tested features.
Purpose:
● Ensure that the recent changes in the codebase do not introduce new bugs or break
existing functionalities.
● Verify that the overall integrity of the software is maintained after each
modification.
Scope:
● Focus on testing the affected areas of the codebase due to recent changes.
● Involves re-running test cases that cover the modified or related code as well as
critical functionalities of the software.
When to Perform Regression Testing:
● After bug fixes.
● After implementing new features or enhancements.
● After code refactoring.
● After changes in the software's environment or dependencies.
● As a part of continuous integration or continuous deployment processes.
Test Automation:
● Regression testing is often automated to efficiently and quickly execute a large
number of test cases.
● Automated test suites help detect regressions early in the development process,
reducing the time and effort required for manual testing.
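A small automated regression suite might look like the following sketch; `apply_discount` and its tests are hypothetical, and a real project would typically run such tests with a runner like pytest:

```python
# Function under test; imagine validation was just added in a recent change.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return round(price * (1 - percent / 100), 2)

# Existing behavior that must keep working after the change.
def test_no_discount():
    assert apply_discount(100.0, 0) == 100.0

def test_half_discount():
    assert apply_discount(80.0, 50) == 40.0

# New case covering the recent change itself.
def test_invalid_percent():
    try:
        apply_discount(100.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")

# Re-running the whole suite after every change is what catches regressions.
for case in (test_no_discount, test_half_discount, test_invalid_percent):
    case()
```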
Test Suites:
● Regression test suites consist of a set of test cases that cover the critical and
commonly used functionalities of the software.
● The test suite is expanded or modified over time to include new test cases
addressing specific scenarios and features.
Impact Analysis:
● Before regression testing, teams often conduct impact analysis to identify areas of
the codebase that may be affected by recent changes.
● This analysis helps determine the scope of regression testing and ensures that
relevant test cases are executed.
Continuous Integration (CI) and Continuous Deployment (CD):
● In CI/CD pipelines, regression testing is an integral part of the automated testing
process.
● Whenever new code is committed, automated regression tests are executed to
catch regressions early in the development cycle.
Tools:
● Various testing tools are available for automating regression testing, such as
Selenium for web applications, JUnit for Java applications, or pytest for Python
applications.
Manual Regression Testing:
● In some cases, manual testing is necessary, especially for scenarios that are
challenging to automate or for exploratory testing to discover unexpected issues.
Verification of Bug Fixes:
● After fixing a reported bug, regression testing helps ensure that the bug is indeed
resolved without introducing new issues.
Maintenance and Evolution:
● As the software evolves, the regression test suite should be continuously
maintained to reflect changes in the application's features and functionalities.
Traceability:
● Establish traceability between requirements, test cases, and code changes.
● Ensure that each requirement has associated test cases and that test cases cover
the relevant areas affected by code changes.
Risk-Based Regression Testing:
● Prioritize test cases based on the perceived risk associated with specific features
or changes.
● Focus regression testing efforts on areas with the highest risk of regression.
By adopting these best practices, teams can build a robust regression testing process that helps
ensure the ongoing quality and reliability of the software throughout its development lifecycle.