
UNIT 2

FLOW TESTING
TRANSACTIONAL FLOW TESTING
Transaction flow testing aims to validate every step of a transaction within a system,
particularly in financial or fintech applications. These steps include data entry, processing,
output generation, and validation, and the testing makes sure their integration is seamless,
with no hiccups.
Transactional flow testing is a type of software testing that focuses on verifying the
correctness and reliability of the flow of transactions within an application or system. In this
context, transactions are logical units of work that consist of multiple database operations or
steps, which must be executed as a single, indivisible unit. The goal of transactional flow
testing is to ensure that the integration of these steps is seamless, with no hiccups during the
transaction flow.
Here are some key points about transactional flow testing:
1. Definition: Transaction flow testing targets the validation of every step of a
transaction within a system, specifically in financial or fintech applications. These
steps include data entry, processing, output generation, and validation.
2. Behavioral Model: It creates a behavioral model of the program, which leads to
effective functional testing.
3. Transaction Flow Graphs: These graphs represent a system’s processing and are
used for functional testing. They help specify requirements for complex systems,
especially online systems. For instance, an air traffic control or airline reservation
system may have thousands of different transaction flows represented by relatively
simple flow graphs.
Using Transaction Flow Graphs
A Transaction Flow Graph (TFG) represents the transaction flow in a system and helps you
understand and optimize that flow. It consists of edges that represent individual steps and
nodes that depict data entry and processing activities.
The visual depiction of transaction flow graphs enables QA experts, developers, and
stakeholders to identify bottlenecks and improve system performance.
A thorough examination of these graphs helps identify critical paths, potential
loopholes, shortcomings, and alternative flows.
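As a rough illustration, a TFG can be represented in code as an adjacency list and its
entry-to-exit paths enumerated, one path per candidate test scenario. The sketch below is
hypothetical Python; the node names and graph are invented, not taken from any specific
system:
Python
tfg = {
    "entry":    ["validate"],
    "validate": ["process", "reject"],   # decision node: two outgoing flows
    "process":  ["confirm"],
    "reject":   ["exit"],
    "confirm":  ["exit"],
    "exit":     [],
}

def all_paths(graph, node, goal, path=None):
    """Yield every simple path from node to goal (one per test scenario)."""
    path = (path or []) + [node]
    if node == goal:
        yield path
        return
    for nxt in graph[node]:
        if nxt not in path:              # skip loops in this simple sketch
            yield from all_paths(graph, nxt, goal, path)

for p in all_paths(tfg, "entry", "exit"):
    print(" -> ".join(p))
# entry -> validate -> process -> confirm -> exit
# entry -> validate -> reject -> exit

Enumerating the paths this way makes the alternative flows explicit, which is exactly what
testers need when deriving test scenarios from the graph.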
Primary Use Cases of Transaction Flow Graph

 Identifying Dependencies - The transaction flow graph provides various opportunities
for optimizing resource utilization and parallel processing to improve system
performance.
 Generating Test Scenarios- Testers can easily identify the alternative flows
represented in the TFG to create test cases.
 Validating Test Cases- It also helps testers to adhere to the business rules and
correctly interact with the various system components.
 Enhanced Visualization- The visual representation makes it easy for stakeholders
and QA professionals to get a clear picture of transactions.
Below are the primary objectives for transaction flow testing.

 Consistency and Reliability - Transaction flow testing helps eliminate issues
arising due to external factors and dependencies.
 Accuracy and Integrity - Transactional flow testing ensures that data integrated with
transactions always remain intact across the entire flow. In this way, validating and
handling outputs are always accurate.
 Enhanced Scalability and Performance - Testing transaction flow enables testers to
verify that the system handles transactions consistently without performance hits or
lags during heavy loads and without unexpected errors.
 Exception Management - Testers evaluate and check that error messages are
displayed appropriately, and that transactions are rolled back when necessary to maintain
system integrity (see the rollback sketch below).
Transaction flow testing also involves automation testing techniques and tools to simulate
complex and repetitive test scenarios. It enhances customer experience and avoids
transactional failures, which improves the overall quality of the system.
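To make the rollback idea concrete, here is a hedged sketch of what testers verify, using
Python's built-in sqlite3 module. The table, column names, and amounts are hypothetical;
the point is only that the two balance updates succeed or fail as one unit:
Python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 50.0)])

def transfer(conn, sender, recipient, amount):
    try:
        with conn:  # opens a transaction; commits on success, rolls back on error
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, sender))
            row = conn.execute("SELECT balance FROM accounts WHERE id = ?",
                               (sender,)).fetchone()
            if row[0] < 0:
                raise ValueError("insufficient funds")  # triggers the rollback
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, recipient))
    except ValueError as exc:
        print("Transaction rolled back:", exc)

transfer(conn, sender=1, recipient=2, amount=500.0)             # rolled back
print(conn.execute("SELECT balance FROM accounts").fetchall())  # balances intact

A transaction flow test would assert that after the failed transfer both balances are
unchanged and an appropriate error message was produced.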
Key Strategies for Implementing Transaction Flow Testing
1 Prioritizing End-to-End Testing
2 Load and Performance Testing
3 Data Validation and Management
4 Creating Detailed Test Case Scenario
5 Overcome Challenges in Transaction Flow Testing
6 Thoroughly Understand the Transactional Flow

1 Prioritizing End-to-End Testing

 Highlight and address the bottlenecks or breakdowns which can occur during the
flow.
 With end-to-end testing, it is essential to ensure correct dependencies, proper
communication, and seamless data transfer among components.
 Emphasize testing the entire transaction flow from the start to its completion.
 Assess if the integration between different systems and components is swift and
hassle-free.

2 Load and Performance Testing

 Load and performance testing for transaction flow is essential to evaluate system
performance under a wide range of transaction loads.
 This approach is effective in optimizing system resources by identifying bottlenecks.
 Simulating high-traffic scenarios is extremely important to measure the scalability and
response times.
 Performance and load testing ensures that the transaction flow can handle workloads
without degradation.
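A minimal sketch of how such a load simulation might look in Python, using only the
standard library. Here submit_transaction is a hypothetical stand-in for a real call into
the system under test:
Python
import time
from concurrent.futures import ThreadPoolExecutor

def submit_transaction(txn_id):
    start = time.perf_counter()
    time.sleep(0.01)                     # placeholder for real transaction work
    return time.perf_counter() - start   # response time in seconds

# Fire 500 transactions through 50 concurrent workers and collect timings
with ThreadPoolExecutor(max_workers=50) as pool:
    timings = list(pool.map(submit_transaction, range(500)))

print(f"transactions: {len(timings)}")
print(f"avg response: {sum(timings) / len(timings):.4f}s")
print(f"worst response: {max(timings):.4f}s")

Watching how the worst-case response time grows as the worker count and transaction
volume increase is a simple way to spot the bottlenecks this section describes.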

3 Data Validation and Management

 Data validation and management are essential to verify system integrity and accuracy
at every step.
 Creating realistic test data provides a wide coverage along with edge cases.
 QA professionals should reset data to a consistent state after testing, which
makes it easier to maintain the reliability and repeatability of test cases.
 It is also helpful in determining consistency at every step.

4 Creating Detailed Test Case Scenario

 You must consider different inputs, user interactions, and system states in your test
scenarios for transaction flow testing.
 It is essential to document every step for test cases, specific conditions, and expected
results or assertions.
 Include negative and positive test cases as they are effective in validating expected
and exceptional behaviors.
 Create test case scenarios that encompass every path and condition for the entire
transaction flow.

5 Overcome Challenges in Transaction Flow Testing

 Create a stable test environment that mirrors production, with consistent
network configurations, hardware, software, and data.
 Prioritize transaction flow testing based on risk analysis, negative and positive
scenarios, and user impact.
 Encourage team members to discuss their challenges and clarify requirements without
hesitating to resolve issues efficiently.
 Use realistic test data to simulate actual user behavior.

6 Thoroughly Understand the Transactional Flow

 Find out the right sequence of steps and components required for a seamless end-to-
end flow of transactions.
 Analyze essential documents such as data flow diagrams, architectural diagrams, and
process flows. This helps visualize and understand the intricacies of the transaction
flow.
 Keep monitoring transaction logs and system behavior to identify certain patterns,
exceptions, and areas which need improvement.

Transaction Flow Testing Techniques

Get the Transaction Flows:


 Complicated systems that process many different, complicated transactions should
have explicit representations of the transaction flows, or the equivalent
 Transaction flows are like control flow graphs, and consequently we should expect to
have them in increasing levels of detail
 The system's design documentation should contain an overview section that details
the main transaction flows
 Detailed transaction flows are a mandatory prerequisite to the rational design of a
system's functional test
Inspections, Reviews and Walkthroughs:
 Transaction flows are a natural agenda for system reviews or inspections
 In conducting the walkthrough, you should:
 Discuss enough transaction types to account for 98%-99% of the transactions the
system is expected to process
 Discuss paths through flows in functional rather than technical terms
 Ask the designers to relate every flow to the specification and to show how that
transaction, directly or indirectly, follows from the requirements
 Make transaction flow testing the cornerstone of system functional testing, just as
path testing is the cornerstone of unit testing
 Select additional flow paths for loops, extreme values, and domain boundaries
 Design more test cases to validate all births and deaths (the points where
transactions are created and destroyed)
 Publish and distribute the selected test paths through the transaction flows as early as
possible, so that they will exert the maximum beneficial effect on the project
Path Selection
 Select a set of covering paths (C1 + C2) using criteria analogous to those you used
for structural path testing
 Select a covering set of paths based on functionally sensible transactions, as you
would for control flow graphs
 Try to find the most tortuous, longest, strangest path from the entry to the exit of the
transaction flow

Path Sensitization
 Most of the normal paths are very easy to sensitize; 80% - 95% transaction flow
coverage (C1 + C2) is usually easy to achieve
 The remaining small percentage is often very difficult
 Sensitization is the act of defining the transaction. If there are sensitization problems
on the easy paths, then bet on either a bug in the transaction flows or a design bug
Path Instrumentation
 Instrumentation plays a bigger role in transaction flow testing than in unit path testing
 The information about the path taken for a given transaction must be kept with that
transaction, and can be recorded either by a central transaction dispatcher or by the
individual processing modules
 In some systems, such traces are provided by the operating system or a running log
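A hedged sketch of the dispatcher idea in Python: each transaction carries an ID, and a
central object records the processing steps it passes through. The step names and the toy
process function are invented for illustration:
Python
class Dispatcher:
    def __init__(self):
        self.traces = {}                       # txn_id -> list of visited steps

    def record(self, txn_id, step):
        self.traces.setdefault(txn_id, []).append(step)

dispatcher = Dispatcher()

def process(txn_id, amount):
    dispatcher.record(txn_id, "entry")
    dispatcher.record(txn_id, "validate")
    if amount <= 0:
        dispatcher.record(txn_id, "reject")    # negative flow
    else:
        dispatcher.record(txn_id, "commit")    # positive flow
    dispatcher.record(txn_id, "exit")

process("T1", 25)
process("T2", -5)
print(dispatcher.traces)
# {'T1': ['entry', 'validate', 'commit', 'exit'],
#  'T2': ['entry', 'validate', 'reject', 'exit']}

Comparing the recorded traces against the intended paths through the transaction flow
graph shows which flows the tests actually exercised.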
Design and Maintain Test Database
 Design and maintenance of the test databases constitute about 30% to 40% of the
effort in transaction flow test design
 People are often unaware that a test database needs to be designed
 Test databases must be centrally administered and configuration controlled, with a
comprehensive design plan
 Creating a comprehensive test database is a big project in its own right
 It requires talented, mature, and diplomatic designers who are experienced in both
system design and test design
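As a small illustration of the "designed, centrally administered" idea, the sketch below
keeps the schema and seed data of a hypothetical test database in one place, so every test
run starts from an identical, known state (the table and rows are invented):
Python
import sqlite3

SCHEMA = "CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)"
SEED = [(1, 100.0), (2, 50.0)]

def fresh_test_db():
    """Build a throwaway database from the controlled schema and seed data."""
    conn = sqlite3.connect(":memory:")
    conn.execute(SCHEMA)
    conn.executemany("INSERT INTO accounts VALUES (?, ?)", SEED)
    conn.commit()
    return conn

conn = fresh_test_db()   # each test gets the same reproducible starting state
print(conn.execute("SELECT * FROM accounts").fetchall())

In a real project the schema and seed definitions would live under configuration control,
exactly as the bullets above prescribe.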
Test Execution:
 Commit to automation of test execution if you want to do transaction flow testing for
a system of any size
 If the number of test cases is limited, you need not worry about test execution
automation
 If the number of test cases runs into several hundred, transaction flow testing to
achieve (C1 + C2) requires execution automation, without which you cannot get
it right
Data Flow Testing
Data flow testing is a white-box testing technique that examines the flow of data in a
program. It focuses on the points where variables are defined and used and aims to identify
and eliminate potential anomalies that could disrupt the flow of data, leading to program
malfunctions or erroneous outputs.
Data flow testing operates on two distinct levels: static and dynamic.
Static data flow testing involves analyzing the source code without executing the program. It
constructs a control flow graph, which represents the various paths of execution through the
code. This graph is then analyzed to identify potential data flow anomalies, such as:
 Definition-Use Anomalies: A variable is defined but never used, or vice versa.
 Redundant Definitions: A variable is defined multiple times before being used.
 Uninitialized Use: A variable is used before it has been assigned a value.
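A tiny, hypothetical Python function showing all three anomalies in one place; calling it
would raise a NameError, the runtime symptom of the third:
Python
def example():
    a = 1        # definition-use anomaly: 'a' is defined but never used
    b = 2
    b = 3        # redundant definition: 'b' is redefined before its first use
    print(c)     # uninitialized use: 'c' is read before any assignment
    print(b)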
Dynamic data flow testing, on the other hand, involves executing the program and monitoring
the actual flow of data values through variables. It can detect anomalies related to:
 Data Corruption: A variable’s value is modified unexpectedly, leading to incorrect
program behavior.
 Memory Leaks: Unnecessary memory allocations are not properly released, causing
memory consumption to grow uncontrollably.
 Invalid Data Manipulation: Data is manipulated in an unintended manner, resulting in
erroneous calculations or outputs.
Here’s a real-life example:
Python
def transfer_funds(sender_balance, recipient_balance, transfer_amount):
    # Data flow starts
    temp_sender_balance = sender_balance
    temp_recipient_balance = recipient_balance
    # Check if the sender has sufficient balance
    if temp_sender_balance >= transfer_amount:
        # Deduct the transfer amount from the sender's balance
        temp_sender_balance -= transfer_amount
        # Add the transfer amount to the recipient's balance
        temp_recipient_balance += transfer_amount
    # Data flow ends
    # Return the updated balances
    return temp_sender_balance, temp_recipient_balance
In this example, data flow testing would focus on ensuring that the
variables (temp_sender_balance, temp_recipient_balance, and transfer_amount) are
correctly initialized, manipulated, and reflect the expected values after the fund transfer
operation. It helps identify potential anomalies or defects in the data flow, ensuring the
reliability of the fund transfer functionality.
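A minimal sketch of such test cases, covering one positive path (sufficient balance) and
one negative path (insufficient balance); the concrete amounts are invented:
Python
def test_transfer_funds():
    # Sufficient balance: 30 moves from sender to recipient
    assert transfer_funds(100, 50, 30) == (70, 80)
    # Insufficient balance: both balances must remain intact
    assert transfer_funds(10, 50, 30) == (10, 50)

test_transfer_funds()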
Steps Followed In Data Flow Testing
1: Variable Identification
Identify the relevant variables in the program that represent the data flow. These variables are
the ones that will be tracked throughout the testing process.
2: Control Flow Graph (CFG) Construction
Develop a Control Flow Graph to visualize the flow of control and data within the program.
The CFG will show the different paths that the program can take and how the data flow
changes along each path.
3: Data Flow Analysis
Conduct static data flow analysis by examining the paths of data variables through the
program without executing it. This will help to identify potential problems with the way that
the data is being used, such as variables being used before they have been initialized.
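A hedged sketch of what this analysis computes: given a hypothetical CFG annotated with
the nodes where a variable is defined and used, find each definition's reachable uses along
definition-clear paths (paths with no intervening redefinition). The graph below is invented
for illustration:
Python
cfg = {1: [2], 2: [3, 4], 3: [5], 4: [5], 5: []}   # node -> successor nodes
defs = {1: {"x"}, 3: {"x"}}                        # node -> variables defined there
uses = {2: {"x"}, 5: {"x"}}                        # node -> variables used there

def du_pairs(var):
    """Return (definition node, use node) pairs reachable definition-clear."""
    pairs = set()
    for d in (n for n, vs in defs.items() if var in vs):
        stack, seen = list(cfg[d]), set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            if var in uses.get(node, ()):
                pairs.add((d, node))     # a use this definition can reach
            if var in defs.get(node, ()):
                continue                 # redefinition kills the path here
            stack.extend(cfg[node])
    return pairs

print(sorted(du_pairs("x")))   # [(1, 2), (1, 5), (3, 5)]

Each printed pair is a def-use association that the test cases designed in step 6 should
exercise.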
4: Data Flow Anomaly Identification
Detect potential defects, known as data flow anomalies, arising from incorrect variable
initialization or usage. These anomalies are the problems that the testing process is trying to
find.
5: Dynamic Data Flow Testing
Execute dynamic data flow testing to trace program paths from the source code, gaining
insights into how data variables evolve during runtime. This will help to confirm that the data
is being used correctly in the program.
6: Test Case Design
Design test cases based on identified data flow paths, ensuring comprehensive coverage of
potential data flow issues. These test cases will be used to test the program and make sure
that the data flow problems have been fixed.
7: Test Execution
Execute the designed test cases, actively monitoring data variables to validate their behavior
during program execution. This will help to identify any remaining data flow problems.
8: Anomaly Resolution
Address any anomalies or defects identified during the testing process. This will involve
fixing the code to make sure that the data is being used correctly.
9: Validation
Validate that the corrected program successfully mitigates data flow issues and operates as
intended. This will help to ensure that the data flow problems have been fixed and that the
program is working correctly.
10: Documentation
Document the data flow testing process, including identified anomalies, resolutions, and
validation results for future reference. This will help to ensure that the testing process can be
repeated in the future and that the data flow problems do not recur.
Types of Data Flow Testing
Static Data Flow Testing
Static data flow testing delves into the source code without executing the program. It involves
constructing a control flow graph (CFG), a visual representation of the different paths of
execution through the code. This graph is then analyzed to identify potential data flow
anomalies, such as:
 Definition-Use Anomalies: A variable is defined but never used, or vice versa.
 Redundant Definitions: A variable is defined multiple times before being used.
 Uninitialized Use: A variable is used before it has been assigned a value.
 Data Dependency Anomalies: A variable’s value is modified in an unexpected
manner, leading to incorrect program behavior.
Static data flow testing provides a cost-effective and efficient method for uncovering
potential data flow issues early in the development cycle, reducing the risk of costly defects
later on.
Real-Life Example: Static Data Flow Testing in Action
Consider a simple program that is meant to calculate and print the average of two numbers:
Python
x = int(input("Enter the first number: "))
y = int(input("Enter the second number: "))

average = (x + y) / 2
Static data flow testing would reveal a potential anomaly, as the variable average is defined
but never used. This indicates that the programmer may have intended to print average but
mistakenly omitted it.
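This kind of check can be automated. A hedged sketch using Python's standard ast module:
it reports names that are assigned (Store context) but never read (Load context) in the
snippet above:
Python
import ast

source = '''
x = int(input("Enter the first number: "))
y = int(input("Enter the second number: "))
average = (x + y) / 2
'''

tree = ast.parse(source)
stored = {n.id for n in ast.walk(tree)
          if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
loaded = {n.id for n in ast.walk(tree)
          if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
print("defined but never used:", stored - loaded)   # {'average'}

Real static analyzers (pyflakes, pylint, and similar tools) perform far more careful versions
of this analysis, but the principle is the same.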
Dynamic Data Flow Testing
Dynamic data flow testing, on the other hand, involves executing the program and monitoring
the actual flow of data values through variables. This hands-on approach complements static
data flow testing by identifying anomalies that may not be apparent from mere code analysis.
For instance, dynamic data flow testing can detect anomalies related to:
 Data Corruption: A variable’s value is modified unexpectedly, leading to incorrect
program behavior.
 Memory Leaks: Unnecessary memory allocations are not properly released, causing
memory consumption to grow uncontrollably.
 Invalid Data Manipulation: Data is manipulated in an unintended manner, resulting
in erroneous calculations or outputs.
Dynamic data flow testing provides valuable insights into how data behaves during program
execution, complementing the findings of static data flow testing.
Real-Life Example: Dynamic Data Flow Testing in Action
Consider a program that calculates the factorial of a number:
Python
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

print(factorial(5))
Dynamic data flow testing would identify an anomaly related to the recursive call
to factorial(). If the input is a negative number, the recursion would continue until the
recursion limit is exhausted, producing a stack overflow (a RecursionError in Python).
Static data flow testing, which only analyzes the code without executing it, would not pick
up this anomaly.
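A minimal dynamic check for this, runnable as-is against the function above:
Python
try:
    factorial(-1)
except RecursionError:
    print("negative input exhausts the recursion limit")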
Advantages of Data Flow Testing
Early Bug Detection
Improved Code Quality
Thorough Test Coverage
Enhanced Cooperation
User-Centric Approach
Effective Debugging
Data Flow Testing Limitations/Disadvantages
Not every possible anomaly in data flow can be found every time.
Testing data flow can be costly and time-consuming.
Not all software types can benefit from data flow testing.
Testing for data flow issues might not be able to find every kind of flaw.
Data flow testing is not a substitute for other testing techniques; it should be used alongside
them.
Data Flow Testing Coverage Metrics:

1. All Definitions Coverage: covers a “sub-path” from each definition to at least one of
its respective uses, so that every definition’s path through the code is examined.
2. All Definition-C Use Coverage: covers “sub-paths” from each definition to all of its
respective computational (C) uses.
3. All Definition-P Use Coverage: covers “sub-paths” from each definition to all of its
respective predicate (P) uses.
4. All Uses Coverage: covers “sub-paths” from each definition to every use it can reach,
regardless of type, giving a holistic view of how data variables traverse the code.
5. All Definition-Use Coverage: covers “simple sub-paths” from each definition to every
respective use, streamlining the coverage analysis.
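A hedged sketch of how two of these metrics can be measured, reusing the invented
du-pairs from the earlier analysis sketch plus the node paths the tests actually executed:
Python
du = {(1, 2), (1, 5), (3, 5)}        # (definition node, use node) pairs for x
def_nodes = {1, 3}                   # nodes that (re)define x
executed_paths = [[1, 2, 3, 5]]      # node sequences observed during test runs

covered = set()
for path in executed_paths:
    for i, d in enumerate(path):
        if d not in def_nodes:
            continue
        for u in path[i + 1:]:
            if (d, u) in du:
                covered.add((d, u))  # use reached on a definition-clear segment
            if u in def_nodes:
                break                # a redefinition kills this definition

print("all-uses:", len(covered), "of", len(du), "du-pairs covered")        # 2 of 3
print("all-definitions:", {d for d, _ in covered} == {d for d, _ in du})   # True

Note the outcome: every definition reaches some use (all-definitions is satisfied), while the
pair (1, 5) is missed because x is redefined at node 3 on the executed path, so all-uses
coverage is incomplete. This is exactly the sense in which the later metrics are stronger than
the earlier ones.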

Strategies in Data Flow Testing

All-du Paths (ADUP)
The all-du-paths strategy is the strongest data flow testing strategy.
It requires that every du path from every definition of every variable to every use of that
definition be exercised under some test.
For variable X
In the figure, because variable X is used only on link (1, 3), any test that starts at the entry
satisfies this criterion (for variable X, but not for all variables, as required by the strategy).
For variable Z
The situation for variable Z (see the figure) is more complicated because the variable is
redefined in many places.
For the definition on link (1, 3) we must exercise paths that include sub paths (1, 3, 4) and
(1, 3, 5).
The definition on link (4, 5) is covered by any path that includes (5, 6), such as sub path
(1, 3, 4, 5, 6, ...).
The (5, 6) definition requires paths that include sub paths (5, 6, 7, 4) and (5, 6, 7, 8).

For variable V
Variable V (see the figure) is defined only once, on link (1, 3).
Because V has a predicate use at node 12 and the subsequent path to the end must be forced
for both directions at node 12, the all-du-paths strategy for this variable requires that we
exercise all loop-free entry/exit paths and at least one path that includes the loop caused by
(11, 4).
Note that we must test paths that include both sub paths (3, 4, 5) and (3, 5) even though
neither of these has a V definition.
They must be included because they provide alternate du paths to the V use on link (5, 6).
Although (7, 4) is not used in the test set for variable V, it will be included in the test set that
covers the predicate uses of array variable V() and U.
The all-du-paths strategy is a strong criterion, but it does not take as many tests as it might
seem at first, because any one test simultaneously satisfies the criterion for several
definitions and uses of several different variables.

All Uses Strategy (AU)
The all uses strategy requires that at least one definition-clear path from every definition of
every variable to every use of that definition be exercised under some test.
Just as we reduced our ambitions by stepping down from all paths (P) to branch coverage
(C2), we can reduce the number of test cases by asking that the test set include at least one
path segment from every definition to every use that can be reached by that definition.

For variable V
In the figure, ADUP requires that we include sub paths (3, 4, 5) and (3, 5) in some tests
because subsequent uses of V, such as on link (5, 6), can be reached by either alternative.
In AU, either (3, 4, 5) or (3, 5) can be used to start paths, but we don't have to use both.
Similarly, we can skip the (8, 10) link if we've included the (8, 9, 10) sub path.
Note the hole: we must include (8, 9, 10) in some test cases because that's the only way to
reach the c-use at link (9, 10) - but suppose our bug for variable V is on link (8, 10) after all?
Find a covering set of paths under AU for the figure shown below.
All p-uses/some c-uses strategy (APU + C)
For every variable and every definition of that variable, include at least one definition-clear
path from the definition to every predicate use.
If there are definitions of the variable that are not covered by the above prescription, then
add computational use test cases as required to cover every definition.

For variable Z
In the figure, for APU + C we can select paths that all take the upper link (12, 13); therefore
we do not cover the c-use of Z, but that's okay according to the strategy's definition, because
every definition is covered.
Links (1, 3), (4, 5), (5, 6) and (7, 8) must be included because they contain definitions for
variable Z.
Links (3, 4), (3, 5), (8, 9), (8, 10), (9, 6) and (9, 10) must be included because they contain
predicate uses of Z.
Find a covering set of test cases under APU + C for all variables in this example - it only
takes two tests.
For variable V
In the figure, APU + C is achieved for V by
(1,3,5,6,7,8,10,11,4,5,6,7,8,10,11,12[upper],13,2) and (1,3,5,6,7,8,10,11,12[lower],13,2).
Note that the c-use at (9, 10) need not be included under the APU + C criterion.

All c-uses/some p-uses strategy (ACU + P)
The all c-uses/some p-uses strategy (ACU + P) is to first ensure coverage by computational
use cases.
If any definition is not covered by the previously selected paths, add such predicate use
cases as are needed to assure that every definition is included in some test.
For variable Z
In the figure, ACU + P coverage is achieved for Z by path
(1,3,4,5,6,7,8,10,11,12[lower],13,2), but the predicate uses of several definitions are not
covered.
Specifically, the (1, 3) definition is not covered for the (3, 5) p-use, and the (7, 8) definition
is not covered for the (8, 9), (9, 6) and (9, 10) p-uses.
The above examples imply that APU + C is stronger than branch coverage, but ACU + P
may be weaker than, or incomparable to, branch coverage.

All Definitions Strategy (AD)
The all definitions strategy requires that every definition of every variable be covered by at
least one use of that variable, whether that use is a computational use or a predicate use.
For variables Z and V
Path (1, 3, 4, 5, 6, 7, 8, ...) satisfies this criterion for variable Z, whereas any entry/exit path
satisfies it for variable V.
From the definition of this strategy, we would expect it to be weaker than both ACU + P
and APU + C.

All Predicate Uses (APU), All Computational Uses (ACU) Strategies
The all predicate uses strategy is derived from the APU + C strategy by dropping the
requirement that we include a c-use for the variable if there are no p-uses for the variable.
The all computational uses strategy is derived from the ACU + P strategy by dropping the
requirement that we include a p-use for the variable if there are no c-uses for the variable.
It is obvious that ACU should be weaker than ACU + P, and APU should be weaker than
APU + C.
