Uploaded by hrithik04350
Software Engineering

Unit4
Software Testing Concepts and Maintenance: Strategic approach to
Software Testing, Test strategies for Conventional Software: Unit testing,
Integration testing Test strategies for Object-Oriented Software:
Validation testing, System testing Software testing fundamentals:
White-Box testing, Basic path testing, Control structure testing,
Black-Box testing, Penetration testing Dependability properties,
Availability and reliability, Safety, Security. Maintenance: types of
maintenance, enhancing maintainability during development
Software Testing
Software testing is an important process in the software
development lifecycle. It involves verifying and validating
that a software application is free of bugs, meets the
technical requirements set by its design and development,
and satisfies user requirements efficiently and effectively.
Software testing can be divided into two steps:

Verification: It refers to the set of tasks that ensure that the software correctly
implements a specific function. It means “Are we building the product right?”.
Focus: It checks if the product is built correctly according to the design
specifications and standards.

Validation: It refers to a different set of tasks that ensure that the software that has
been built is traceable to customer requirements. It means “Are we building the right
product?”.
Focus: It checks if the right product has been built and whether it meets user
expectations and requirements in real-world scenarios.
Strategic approach to Software Testing
Software is tested to uncover errors introduced during design and
construction. Testing often accounts for more project effort than any other
software engineering activity.

A testing strategy provides a road map that describes the steps to be
conducted as part of testing. It should incorporate test planning, test-case
design, test execution, and resultant data collection and evaluation.
A Test Strategy is a high-level plan that guides how software testing will be
carried out. Here's a simplified explanation of key points:
What is a Test Strategy?
● It outlines how testing will be done.
● Explains what parts of the software will be tested, the types of testing
needed, and criteria for starting and ending testing.
● Helps decide whether to automate testing or not and how resources will be
used.
Components of a Test Strategy Document:
1. Scope and Overview: Describes the project, who approves the document, and what
testing activities will be done.
2. Testing Methodology: Explains types of testing (like unit or system testing) and who will
do each task.
3. Testing Environment: Details the setup needed for testing, like hardware, software, and
user access.
4. Testing Tools: Lists the tools needed for test management and automation.
5. Release Control: Ensures versions of the software are tested in an organized way.
6. Risk Analysis: Identifies possible risks and how to handle them.
7. Review and Approval: Indicates who will review and approve the test strategy document.
Testing Strategies for Conventional Software can be viewed as a spiral
consisting of four levels of testing:

1)Unit Testing
2)Integration Testing
3)Validation Testing
4)System Testing
Unit testing

Unit testing is the process where you test the smallest functional unit of
code. Software testing helps ensure code quality, and it's an integral part
of software development. It's a software development best practice to write
software as small, functional units, then write a unit test for each code unit.
You can first write unit tests as code. Then, run that test code automatically
every time you make changes in the software code. This way, if a test fails,
you can quickly isolate the area of the code that has the bug or error.
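As a sketch of this practice, the hypothetical helper `total_price` below (not from the original text) is the "smallest functional unit", and a test case checks it in isolation using Python's standard `unittest` framework:

```python
import unittest

# Hypothetical smallest unit under test: a helper that computes an order total.
def total_price(unit_price, quantity):
    return unit_price * quantity

class TestTotalPrice(unittest.TestCase):
    def test_typical_order(self):
        # A representative input with a known expected output.
        self.assertEqual(total_price(10, 3), 30)

    def test_zero_quantity(self):
        # Boundary case: an empty order costs nothing.
        self.assertEqual(total_price(10, 0), 0)
```

A suite like this can be run automatically (for example with `python -m unittest`) on every code change, so a failing test immediately points to the unit that broke.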
Integration Testing: Integration testing is the process of testing the
interface between two software units or modules. Its focus is on
determining the correctness of the interface. Integration testing aims to
expose faults in the interaction between integrated units. Once all the modules
have been unit tested, integration testing is performed.
Top-Down Integration Testing:
● Testing starts from the top-level modules (main control) and progresses downward
to the lower-level modules.
● Major modules are tested first, and the submodules (called by the major modules)
are integrated gradually.
● Stubs are used as placeholders for lower-level modules that are not yet integrated.
● This approach allows testing of critical modules earlier but may delay finding errors
in lower-level modules.
Bottom-Up Integration Testing:
● Testing starts from the lower-level modules and progresses upward towards the
main control module.
● Submodules are tested first, and gradually the higher-level modules are integrated.
● Drivers are used to simulate higher-level modules that are not yet integrated.
● This method is good for testing lower-level modules early, but the overall system
flow isn't tested until later.
Stubs:
● These are dummy modules used in top-down integration testing.
● They simulate the behavior of lower-level modules that are not yet developed or
integrated.
● For example, if a high-level module calls a function in a lower-level module that
hasn't been created, a stub can be used to mimic its behavior (e.g., returning
predefined values).
Stubs simulate lower-level modules in top-down testing.
Drivers:
● These are dummy modules used in bottom-up integration testing.
● They simulate higher-level modules that are not yet integrated, allowing you to test
lower-level modules in isolation.
● For example, if a low-level module is ready but the module that calls it is not, a
driver is used to provide input and control for testing the lower-level module.
Drivers simulate higher-level modules in bottom-up testing.
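The two roles can be sketched in one example. All module names below (`fetch_sales_stub`, `total_sales`, `discount`) are hypothetical, invented for illustration:

```python
# --- Stub (top-down testing) ---
# Stands in for a lower-level module that is not yet built: it returns
# predefined values instead of querying a real data source.
def fetch_sales_stub(region):
    return [100, 200, 300]

# High-level module under test; it is wired to the stub until the real
# lower-level module is integrated.
def total_sales(region, fetch=fetch_sales_stub):
    return sum(fetch(region))

# --- Driver (bottom-up testing) ---
# Low-level module that is finished before any of its callers exist.
def discount(price):
    return price * 0.9

# The driver plays the role of the missing higher-level caller: it supplies
# inputs and checks outputs so the low-level module is tested in isolation.
def discount_driver():
    assert discount(100) == 90.0
    return "driver: all checks passed"
```

The stub lets `total_sales` be tested from the top down before `fetch_sales` exists; the driver lets `discount` be tested from the bottom up before its caller exists.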
Test strategies for Object-Oriented Software

Validation testing is also known as dynamic testing, where we are ensuring that "we
have developed the right product." It also checks that the software meets the
business needs of the client.
In validation testing, the tester performs both functional and non-functional
testing. Functional testing includes Unit Testing (UT), Integration Testing (IT),
and System Testing (ST), while User Acceptance Testing (UAT) validates the
software against user expectations in real-world scenarios.
System Testing is a type of software testing that is performed on a completely integrated system to
evaluate the compliance of the system with the corresponding requirements. In system testing,
integration testing passed components are taken as input.
System Testing Process
System Testing is performed in the following steps:
● Test Environment Setup: Create the testing environment for better quality testing.
● Create Test Cases: Generate test cases for the testing process.
● Create Test Data: Generate the data that is to be tested.
● Execute Test Cases: Once the test cases and test data are ready, the test cases are
executed.
● Defect Reporting: Defects found in the system are reported.
● Regression Testing: Carried out to check that fixes have not caused side effects
elsewhere in the system.
● Log Defects: Detected defects are logged and tracked until they are fixed.
● Retest: Failed tests are run again after the defects are fixed.
Types of System Testing
● Performance Testing: Performance Testing is a type of software testing that is carried
out to test the speed, scalability, stability and reliability of the software product or
application.
● Load Testing: Load Testing is a type of software Testing which is carried out to
determine the behavior of a system or software product under extreme load.
● Stress Testing: Stress Testing is a type of software testing performed to check the
robustness of the system under the varying loads.
● Scalability Testing: Scalability Testing is a type of software testing which is carried out
to check the performance of a software application or system in terms of its capability to
scale up or scale down the number of user request load.

Different Levels of Software Testing
● Unit Testing: This is the most basic level of testing, where individual components or
units of the software are tested in isolation. It’s like checking each ingredient before
adding it to a dish.
● Integration Testing: Once individual units are tested, they are combined and tested as a
group. Consider blending different ingredients in a mixer to ensure they come together
perfectly.
● System Testing: The entire software system is tested as a whole. It’s similar to baking
the dish in the oven, ensuring every part is cooked evenly and thoroughly.
● Acceptance Testing: This is the final hurdle before the software is delivered to the client
or end-users. It’s like a final taste test to ensure the dish meets the diner’s preferences and
expectations before serving.
TYPES OF TESTING
White-Box testing
White box testing techniques analyze the internal structures of the software:
the data structures used, internal design, code structure, and the working of
the software, rather than just the functionality as in black box testing. It is
also called glass box testing, clear box testing, or structural testing.
White Box Testing is also known as transparent testing or open box
testing.
White box testing is also known as structural testing or code-based
testing, and it is used to test the software’s internal logic, flow, and
structure. The tester creates test cases to examine the code paths and
logic flows to ensure they meet the specified requirements.
White Box Testing Focus On

● Path Checking: Examines the different routes the program can take when it runs.
Ensures that all decisions made by the program are correct, necessary, and efficient.
● Output Validation: Tests different inputs to see if the function gives the right output
each time.
● Security Testing: Uses techniques like static code analysis to find and fix potential
security issues in the software. Ensures the software is developed using secure
practices.
● Loop Testing: Checks the loops in the program to make sure they work correctly and
efficiently. Ensures that loops handle variables properly within their scope.
● Data Flow Testing: Follows the path of variables through the program to ensure they
are declared, initialized, used, and manipulated correctly.
Regression testing is like a software quality checkup after any changes are made. It
involves running tests to make sure that everything still works as it should, even after
updates or tweaks to the code. This ensures that the software remains reliable and
functions properly, maintaining its integrity throughout its development lifecycle.
When to do regression testing?
● When new functionality is added to the system and the code has been modified
to absorb and integrate that functionality with the existing code.
● When some defect has been identified in the software and the code is debugged
to fix it.
● When the code is modified to optimize its working.
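The third case above can be sketched concretely. In this hypothetical scenario (the function and cases are invented for illustration), `is_even` has just been rewritten to use the modulo operator, and the pre-existing suite is re-run to confirm the behavior is unchanged:

```python
# Hypothetical optimized version of a previously tested function.
def is_even(n):
    return n % 2 == 0

# Cases that passed before the change; they must all still pass after it.
regression_cases = {0: True, 7: False, 10: True, -4: True}

def run_regression():
    # Returns the inputs whose behavior changed; an empty list means
    # the modification introduced no regressions.
    return [n for n, want in regression_cases.items() if is_even(n) != want]
```

Running `run_regression()` after every modification gives the "quality checkup" described above: any non-empty result pinpoints exactly which previously working input broke.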
White Box Testing Techniques
Statement Coverage: Ensures that every executable statement in the code is executed at least once
during testing.
Branch Coverage: Focuses on testing all possible branches from decision points (like if statements)
in the code to ensure that all paths are executed.
Path Coverage: Involves testing all possible paths through the code. This technique is more
comprehensive but can become complex with larger codebases.
Condition Coverage: Tests all the boolean expressions in the decision points to ensure both
true and false outcomes are tested.
Loop Coverage: Ensures that loops in the code are tested for various conditions, such as
zero iterations, one iteration, and multiple iterations.
Data Flow Testing: Focuses on the lifecycle of variables, checking the definitions, uses,
and potential issues like unused variables or variable redefinitions.
Control Flow Testing: Analyzes the flow of control through the code, often represented in a control
flow graph. It helps identify unreachable code and potential infinite loops.
Code Complexity Testing: Uses metrics like cyclomatic complexity to identify complex areas of
code that may require more rigorous testing.
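The difference between the first two techniques can be shown on a tiny hypothetical function (invented for illustration, not from the original text):

```python
# Hypothetical function used to contrast statement and branch coverage.
def grade(score):
    result = "fail"
    if score >= 50:
        result = "pass"
    return result

# Statement coverage: grade(60) alone executes every statement (the
# assignment inside the `if` runs), yet the False outcome of the decision
# is never exercised.
# Branch coverage additionally requires a case where the condition is
# False, e.g. grade(40), so both branches of the decision are taken.
coverage_cases = {60: "pass", 40: "fail"}
```

This is why branch coverage is the stronger criterion: a suite can execute 100% of statements while still never testing what happens when a condition is false.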
Basic path testing
Path Coverage
Path coverage is concerned with linearly independent paths through the code. Testers draw a control flow diagram of the
code, such as the example below.

[Figure: control flow diagram used to design tests in a path coverage approach]

In this example, there are several possible paths through the code:

1, 2
1, 3, 4, 5, 6, 8
1, 3, 4, 7, 6, 8
etc.
In a path coverage approach, the tester writes unit tests to execute as many
of the paths through the program's control flow as possible. The objective is
to identify paths that are broken, redundant, or inefficient.
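A small self-contained sketch (the function and inputs are hypothetical, chosen independently of the numbered diagram above) shows how a basis set of independent paths is derived:

```python
# Hypothetical function with two sequential decisions.
def classify(x):
    if x < 0:            # decision 1
        sign = "negative"
    else:
        sign = "non-negative"
    if x % 2 == 0:       # decision 2
        parity = "even"
    else:
        parity = "odd"
    return sign + " " + parity

# With 2 decision points, cyclomatic complexity V(G) = 2 + 1 = 3, so a
# basis set of 3 linearly independent paths covers the control flow, e.g.:
basis_inputs = [-3, 2, 1]  # negative/odd, non-negative/even, non-negative/odd
```

Although the function has four possible paths in total, executing the three basis inputs already traverses every branch, which is the point of basic path testing: test a linearly independent subset rather than every combination.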


Control structure testing
Condition Coverage: Tests all the boolean expressions in the decision points
to ensure both true and false outcomes are tested.
Loop Coverage: Ensures that loops in the code are tested for various
conditions, such as zero iterations, one iteration, and multiple iterations.
Data Flow Testing: Focuses on the lifecycle of variables, checking the
definitions, uses, and potential issues like unused variables or variable
redefinitions.
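The loop coverage criterion above can be sketched on a hypothetical loop (the function `sum_first` is invented for illustration):

```python
# Hypothetical loop under test: sums the first n items of a list.
def sum_first(items, n):
    total = 0
    for value in items[:n]:   # the loop whose iteration counts we vary
        total += value
    return total

# Loop coverage exercises zero, one, and multiple iterations:
loop_cases = [
    (([], 0), 0),             # zero iterations: the loop body never runs
    (([5], 1), 5),            # exactly one iteration
    (([1, 2, 3, 4], 4), 10),  # multiple iterations
]
```

The zero-iteration case is the one most often missed in practice, and it is precisely where off-by-one and uninitialized-accumulator bugs tend to hide.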
Black-Box testing
Black Box Testing is an important part of making sure software works as it
should. Instead of peeking into the code, testers check how the software
behaves from the outside, just like users would. This helps catch any issues
or bugs that might affect how the software works.
Types Of Black Box Testing

The following are the several categories of black box testing:


Functional Testing
Regression Testing
Nonfunctional Testing (NFT)
Functional Testing
Functional testing checks if a software application works according to its
requirements. It verifies each feature by providing inputs and comparing the
actual outputs with expected results. This type of testing looks at user
interfaces, APIs, databases, and overall functionality, without considering
the underlying source code. It can be done manually or through automation.
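A minimal sketch of this idea follows. The scenario is hypothetical: the tester sees only the requirement "return the absolute value of x"; the implementation of `absolute` exists here solely so the example can run, and the test cases never refer to it:

```python
# Implementation hidden from the black-box tester in a real setting.
def absolute(x):
    return x if x >= 0 else -x

# Cases derived purely from the specification: a typical value, the
# boundary value 0, and a negative value.
spec_cases = {5: 5, 0: 0, -7: 7}

def run_black_box_tests():
    # Compare actual outputs with expected outputs, exactly as a user would,
    # without inspecting any internal code paths.
    return all(absolute(inp) == expected for inp, expected in spec_cases.items())
```

Because the cases come from the requirements rather than the code, the same suite remains valid even if the implementation is completely rewritten.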
Regression Testing
Regression testing ensures that changes to the code (like updates or bug
fixes) haven’t introduced new errors. It checks both the modified code and
related parts of the application to confirm that everything still functions
correctly. Essentially, it ensures that new updates don’t break existing
features.
Non-Functional Testing
Non-functional testing evaluates aspects of a software application that aren’t
about specific functions, such as performance, usability, and scalability. It
checks how well the system performs under certain conditions, focusing on
qualities that functional testing doesn’t cover. This type of testing is just as
important as functional testing.
Penetration testing
Penetration testing (or pen testing) is a security exercise where a cyber-security
expert attempts to find and exploit vulnerabilities in a computer system. The
purpose of this simulated attack is to identify any weak spots in a system’s defenses
which attackers could take advantage of.
Penetration testing helps an organization discover vulnerabilities and flaws in their
systems that they might not have otherwise been able to find. This can help stop
attacks before they start, as organizations can fix these vulnerabilities once they
have been identified.
Penetration testing is also required by some data regulations. For instance, PCI DSS
(Payment Card Industry Data Security Standard) version 4.0 emphasizes the
importance of penetration testing to identify vulnerabilities in an organization's
systems that handle cardholder data.
Types of Penetration Tests
1. Open-Box Pen Test
○ The tester receives some information about the company’s security before starting the test.
This helps them focus their efforts.
2. Closed-Box Pen Test
○ Also called a "single-blind" test, the tester only knows the company name and has no other
background information. This simulates an attacker with limited knowledge.
3. External Pen Test
○ The tester targets the company’s external-facing technology, like websites and external
servers. This can be done remotely, without entering the company’s building.
4. Internal Pen Test
○ The tester operates from within the company’s internal network. This helps assess potential
damage that could be caused by an insider, like a disgruntled employee.
Dependability properties
The most important system property is the dependability of the system. The
dependability of a system reflects the user's degree of trust in that system. It reflects
the extent of the user's confidence that it will operate as users expect and that it will
not 'fail' in normal use. Dependability covers the related system attributes of
reliability, availability and security. These are all inter-dependent.
System failures may have widespread effects with large numbers of people affected
by the failure. Systems that are not dependable and are unreliable, unsafe or insecure
may be rejected by their users. The costs of system failure may be very high if the
failure leads to economic losses or physical damage. Undependable systems may
cause information loss with a high consequent recovery cost.
Causes of failure:
Hardware failure
Hardware fails because of design and manufacturing errors or because components have
reached the end of their natural life.
Software failure
Software fails due to errors in its specification, design or implementation.
Operational failure
Human operators make mistakes. Now perhaps the largest single cause of system failures
in socio-technical systems.
Principal properties of dependability:
Principal properties:
● Availability: The probability that the system will be up and running and able to
deliver useful services to users.
● Reliability: The probability that the system will correctly deliver services as
expected by users.
● Safety: A judgment of how likely it is that the system will cause damage to
people or its environment.
● Security: A judgment of how likely it is that the system can resist accidental or
deliberate intrusions.
● Resilience: A judgment of how well a system can maintain the continuity of its
critical services in the presence of disruptive events such as equipment failure
and cyberattacks.
How to achieve dependability?
● Avoid the introduction of accidental errors when developing the system.
● Design V & V processes that are effective in discovering residual errors in the
system.
● Design systems to be fault tolerant so that they can continue in operation when
faults occur.
● Design protection mechanisms that guard against external attacks.
● Configure the system correctly for its operating environment.
● Include system capabilities to recognize and resist cyberattacks.
● Include recovery mechanisms to help restore normal system service after a
failure.
Maintenance
Software is always changing and as long as it is being used, it has to be monitored and maintained
properly. This is partly to adjust for the changes within an organization but is even more important
because technology keeps changing.

Your software may need maintenance for any number of reasons – to keep it up and running, to
enhance features, to rework the system for changes into the future, to move to the Cloud, or any other
changes. Whatever the motivation is for software maintenance, it is vital for the success of your
business.
Types of maintenance

● Corrective Software Maintenance


● Adaptive Software Maintenance
● Perfective Software Maintenance
● Preventive Software Maintenance
Corrective Software Maintenance
Corrective software maintenance is what one would typically associate with the maintenance of any
kind. Corrective software maintenance addresses the errors and faults within software applications
that could impact various parts of your software, including the design, logic, and code. These corrections
usually come from bug reports that were created by users or customers – but corrective software
maintenance can help to spot them before your customers do, which can help your brand’s reputation.

Adaptive Software Maintenance


Adaptive software maintenance becomes important when the environment of your software changes.
This can be brought on by changes to the operating system, hardware, software dependencies,
or Cloud storage. Sometimes, adaptive software
maintenance reflects organizational policies or rules as well. Updating services, making modifications to
vendors, or changing payment processors can all necessitate adaptive software maintenance.
Perfective Software Maintenance
Perfective software maintenance focuses on the evolution of requirements and features that exist in your
system. As users interact with your applications, they may notice things that you did not or suggest new features
that they would like as part of the software, which could become future projects or enhancements. Perfective
software maintenance takes over some of the work, both adding features that can enhance user experience and
removing features that are not effective and functional. This can include features that are not used or those that
do not help you to meet your end goals.

Preventive Software Maintenance


Preventive Software Maintenance helps to make changes and adaptations to your software so that it can
work for a longer period of time. The focus of this type of maintenance is to prevent the deterioration of your
software as it continues to adapt and change. These services can include optimizing code and updating
documentation as needed. Preventive software maintenance helps to reduce the risk associated with operating
software for a long time, helping it to become more stable, understandable, and maintainable.
Enhancing maintainability during development
Modular Design:

○ Break the system into smaller, manageable modules or components. This makes it easier to understand,
test, and modify parts of the system independently.

Clear Documentation:

○ Maintain thorough documentation for both the code and the overall system architecture. This includes
user manuals, API documentation, and design specifications.

Coding Standards:

○ Establish and enforce coding standards to ensure consistency across the codebase. This makes it easier
for developers to read and understand each other's code.

Code Reviews:

○ Implement regular code reviews to identify potential issues early and ensure adherence to best practices.
Peer feedback can significantly improve code quality.
Automated Testing:
○ Develop a comprehensive suite of automated tests (unit, integration, and end-to-end tests).
This helps catch issues early and ensures that changes do not introduce new bugs.
Version Control:
○ Use version control systems to manage code changes. This allows tracking of modifications,
collaboration among developers, and easy rollback to previous versions if needed.
User Feedback:
○ Actively seek and incorporate user feedback throughout the development process to ensure
that the software meets user needs and expectations.
Training and Knowledge Sharing:
○ Foster a culture of knowledge sharing and provide training opportunities for team members.
This helps ensure that everyone is up to date on best practices and tools.
The V-Model
Structure of the V-Model:
● The left side of the "V" is for Verification (planning and development activities).
● The right side is for Validation (testing activities).
● Each development phase corresponds to a testing phase.
Verification (Left Side of the V)
These phases ensure that the software is being built correctly according to requirements.
● Requirements Analysis:
○ Collecting and analyzing user requirements.
○ Output: Requirements Specification (used for Acceptance Testing).
● System Design:
○ Designing the overall system architecture.
○ Output: System Design Document (used for System Testing).
● High-Level Design:
○ Breaking down the system into modules.
○ Output: High-Level Design Document (used for Integration Testing).
● Low-Level Design:
○ Designing individual modules in detail.
○ Output: Detailed Design Document (used for Unit Testing).
● Coding:
○ Writing the actual code based on the design documents.
Validation (Right Side of the V)
These phases ensure that the software meets the user’s needs and works as intended.
● Unit Testing:
○ Testing individual units or components of the code for correctness.
○ Corresponds to Low-Level Design.
● Integration Testing:
○ Testing the interaction between different modules.
○ Corresponds to High-Level Design.
● System Testing:
○ Testing the entire system for functionality, performance, and reliability.
○ Corresponds to System Design.
● Acceptance Testing:
○ Verifying that the system meets the end-user's requirements.
○ Corresponds to Requirements Analysis.
