ST Unit 2
FLOW TESTING
TRANSACTIONAL FLOW TESTING
Transaction flow testing aims to validate every step of a transaction within a system,
especially in financial or fintech applications. These steps include data entry, processing,
output generation, and validation. It ensures that these steps integrate seamlessly, with no hiccups.
Transactional Flow Testing is a type of software testing that focuses on verifying the
correctness and reliability of the flow of transactions within an application or system. In this
context, transactions refer to logical units of work that consist of multiple database
operations or steps. These steps need to be executed as a single, indivisible unit. The goal of
transactional flow testing is to ensure that the integration of these steps is seamless, with no
hiccups during the transaction flow.
Here are some key points about transactional flow testing:
1. Definition: Transaction flow testing targets the validation of every step of a
transaction within a system, specifically in financial or fintech applications. These
steps include data entry, processing, output generation, and validation.
2. Behavioral Model: It creates a behavioral model of the program, which leads to
effective functional testing.
3. Transaction Flow Graphs: These graphs represent a system’s processing and are
used for functional testing. They help specify requirements for complex systems,
especially online systems. For instance, an air traffic control or airline reservation
system may have thousands of different transaction flows represented by relatively
simple flow graphs.
Using Transaction Flow Graphs
A Transaction Flow Graph (TFG) represents the transaction flow in a system, helping you
understand and optimize that flow. It consists of edges that represent individual steps and
nodes that depict data entry and processing activities.
The visual depiction of transaction flow graphs enables QA experts, developers, and
stakeholders to identify bottlenecks and improve system performance.
Conducting a thorough examination of these graphs helps identify critical paths, potential
loopholes, shortcomings, and alternative flows.
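As a sketch, a transaction flow graph can be modeled as an adjacency list, and a simple traversal can enumerate the distinct transaction flows to examine. The step names below are hypothetical, purely for illustration:

```python
# Minimal sketch of a transaction flow graph (hypothetical step names).
# Each entry-to-exit path is one possible transaction flow.
tfg = {
    "entry": ["validate"],
    "validate": ["process", "reject"],   # decision point: valid vs. invalid input
    "process": ["output"],
    "output": ["exit"],
    "reject": ["exit"],
    "exit": [],
}

def transaction_paths(graph, node="entry", path=None):
    """Enumerate every entry-to-exit path in the flow graph (assumes no cycles)."""
    path = (path or []) + [node]
    if not graph[node]:                  # reached a terminal node
        return [path]
    paths = []
    for nxt in graph[node]:
        paths.extend(transaction_paths(graph, nxt, path))
    return paths

for p in transaction_paths(tfg):
    print(" -> ".join(p))
```

Listing the paths this way makes the "critical paths and alternative flows" mentioned above concrete: each printed path is a candidate test scenario.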
Primary Use Cases of Transaction Flow Graph
Highlight and address the bottlenecks or breakdowns which can occur during the
flow.
With end-to-end testing, it is essential to ensure correct dependencies, proper
communication, and seamless data transfer among components for their proper functioning.
Emphasize testing the entire transaction flow from the start to its completion.
Assess if the integration between different systems and components is swift and
hassle-free.
Load and performance testing for transaction flow is essential to evaluate system
performance under a wide range of transaction loads.
This approach is effective in optimizing system resources by identifying bottlenecks.
Simulating high-traffic scenarios is extremely important to measure the scalability and
response times.
Performance and load testing ensures that the transaction flow can handle workloads
without any degradations.
Data validation and management are essential to verify system integrity and accuracy
at every step.
Creating realistic test data provides a wide coverage along with edge cases.
QA professionals can effectively reset data in a consistent format after testing. It
becomes easier to maintain the reliability and repeatability of test cases.
It is also helpful in determining consistency at every step.
You must consider different inputs, user interactions, and system states in your test
scenarios for transaction flow testing.
It is essential to document every step for test cases, specific conditions, and expected
results or assertions.
Include negative and positive test cases as they are effective in validating expected
and exceptional behaviors.
Create test case scenarios that encompass every path and condition for the entire
transaction flow.
Find out the right sequence of steps and components required for a seamless end-to-
end flow of transactions.
Analyze essential documents such as data flow diagrams, architectural diagrams, and
process flows. This helps visualize and understand the intricacies of transaction
flow testing.
Keep monitoring transaction logs and system behavior to identify certain patterns,
exceptions, and areas which need improvement.
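The documentation points above can be sketched as one documented end-to-end test case with a positive and a negative scenario. The funds-transfer flow and every name in it are hypothetical, chosen only to illustrate the pattern:

```python
# Hypothetical funds-transfer flow for illustration:
# data entry -> validation -> processing -> output.
def transfer(accounts, src, dst, amount):
    """Execute a transfer as a single, indivisible unit."""
    if amount <= 0 or accounts[src] < amount:   # validation step
        raise ValueError("transfer rejected")
    accounts[src] -= amount                     # processing steps
    accounts[dst] += amount
    return accounts                             # output step

# Positive case: the full flow completes and balances stay consistent.
accounts = {"A": 100, "B": 50}
transfer(accounts, "A", "B", 30)
assert accounts == {"A": 70, "B": 80}

# Negative case: an invalid amount must leave both balances untouched.
try:
    transfer(accounts, "A", "B", 999)
except ValueError:
    pass
assert accounts == {"A": 70, "B": 80}
```

Pairing the expected result (the assertion) with each scenario is exactly the "document every step, condition, and expected result" guidance above.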
Real-Life Example: Static Data Flow Testing
Consider a program that computes an average but never uses the result:
Python
x = 10
y = 20
average = (x + y) / 2
Static data flow testing would reveal a potential anomaly: the variable average is defined
but never used. This indicates that the programmer may have intended to print average but
mistakenly omitted the print statement.
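This kind of defined-but-unused check can itself be sketched with Python's `ast` module: collect every assigned name and every read name without executing the code, then report the difference. This is a simplified, single-scope sketch, not a production static analyzer:

```python
import ast

def defined_but_unused(source):
    """Report names that are assigned but never read (single scope, no execution)."""
    tree = ast.parse(source)
    defined, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):   # name is being assigned
                defined.add(node.id)
            elif isinstance(node.ctx, ast.Load):  # name is being read
                used.add(node.id)
    return defined - used

snippet = "x = 10\ny = 20\naverage = (x + y) / 2\n"
print(defined_but_unused(snippet))   # {'average'}
```

Because the code is only parsed, never run, this is a static technique: the anomaly is found from the structure of the source alone.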
Dynamic Data Flow Testing
Dynamic data flow testing, on the other hand, involves executing the program and monitoring
the actual flow of data values through variables. This hands-on approach complements static
data flow testing by identifying anomalies that may not be apparent from mere code analysis.
For instance, dynamic data flow testing can detect anomalies related to:
Data Corruption: A variable’s value is modified unexpectedly, leading to incorrect
program behavior.
Memory Leaks: Unnecessary memory allocations are not properly released, causing
memory consumption to grow uncontrollably.
Invalid Data Manipulation: Data is manipulated in an unintended manner, resulting
in erroneous calculations or outputs.
Dynamic data flow testing provides valuable insights into how data behaves during program
execution, complementing the findings of static data flow testing.
Real-Life Example: Dynamic Data Flow Testing in Action
Consider a program that calculates the factorial of a number:
Python
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)

print(factorial(5))
Dynamic data flow testing would identify an anomaly related to the recursive call
to factorial(). If the input is a negative number, the recursion would continue indefinitely,
leading to a stack overflow error. Static data flow testing, which only analyzes the code
without executing it, would not pick up this anomaly.
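In Python the runaway recursion surfaces as a `RecursionError` rather than a raw stack overflow, so a dynamic check can simply execute the function with a negative input and observe the failure. A minimal sketch:

```python
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

# Dynamic data flow testing actually executes the code: with a negative
# input the termination condition n == 0 is never reached, and Python
# raises RecursionError when the recursion depth limit is exceeded.
try:
    factorial(-1)
    print("no anomaly observed")
except RecursionError:
    print("anomaly detected: unbounded recursion for negative input")
```

The anomaly only appears at run time, which is precisely why dynamic testing complements the static analysis above.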
Advantages of Data Flow Testing
Early Bug Detection
Improved Code Quality
Thorough Test Coverage
Enhanced Cooperation
User-Centric Approach
Effective Debugging
Data Flow Testing Limitations/Disadvantages
Not every possible anomaly in data flow can be found every time.
Testing data flow can be costly and time-consuming.
Not all software types can benefit from data flow testing.
Testing for data flow issues might not be able to find every kind of flaw.
Data flow testing should not be used in place of other testing techniques.
Data Flow Testing Coverage Metrics:
For variable V
In the figure, variable V is defined only once, on link (1, 3)
Because V has a predicate use at node 12 and the subsequent path to the end must be forced
for both directions at node 12, the all-du-paths strategy for this variable requires that we
exercise all loop-free entry/exit paths and at least one path that includes the loop caused by
(11, 4)
Note that we must test paths that include both sub paths (3, 4, 5) and (3, 5) even though
neither of these has a definition of V
They must be included because they provide alternate du paths to the use of V on link (5, 6)
Although (7, 4) is not used in the test set for variable V, it will be included in the test set that
covers the predicate uses of array variable V() and variable U
The all-du-paths strategy is a strong criterion, but it does not take as many tests as it might
seem at first because any one test simultaneously satisfies the criterion for several definitions
and uses of several different variables
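The central notion here, a definition-clear (du) path, can be sketched in code: enumerate loop-free paths from a definition node to a use node that pass through no other definition of the variable. The small flow graph below is hypothetical and is not the figure the text refers to:

```python
# Hypothetical flow graph (not the figure referenced above), given as
# adjacency lists, plus the nodes where variable v is (re)defined.
graph = {1: [2, 3], 2: [4], 3: [4], 4: [5], 5: []}
defs_of_v = {1, 3}          # v is defined at nodes 1 and 3

def du_paths(graph, start, target, redefs, path=None):
    """Loop-free paths from a definition at `start` to a use at `target`
    that contain no intervening redefinition of the variable."""
    path = (path or []) + [start]
    if start == target:
        return [path]
    found = []
    for nxt in graph[start]:
        if nxt in path:                         # keep paths loop-free
            continue
        if nxt != target and nxt in redefs:     # the definition is "killed" here
            continue
        found.extend(du_paths(graph, nxt, target, redefs, path))
    return found

# Definition-clear paths from the definition of v at node 1 to a use at node 5:
print(du_paths(graph, 1, 5, defs_of_v - {1}))
```

Here the path through node 3 is excluded because the redefinition there kills the definition from node 1, which is exactly the pruning the du-path strategies rely on.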
For Variable V
In the figure, ADUP requires that we include sub paths (3,4,5) and (3,5) in some tests
because subsequent uses of V, such as on link (5,6), can be reached by either alternative
In AU, either (3,4,5) or (3,5) can be used to start paths, but we don't have to use both
Similarly, we can skip the (8,10) link if we've included the (8,9,10) sub path
Note the hole: we must include (8,9,10) in some test cases because that's the only way to
reach the c-use at link (9,10) - but suppose our bug for variable V is on link (8,10) after all?
Find a covering set of paths under AU for the figure
All p-uses/some c-uses strategy (APU + C)
For every variable and every definition of that variable, include at least one definition-free
path from the definition to every predicate use
If there are definitions of the variable that are not covered by the above prescription,
then add computational use test cases as required to cover every definition
For variable Z
In the figure, for APU + C we can select paths that all take the upper link (12, 13) and
therefore do not cover the c-use of Z, but that's okay according to the strategy's definition
because every definition is covered
Links (1,3), (4,5), (5,6) and (7,8) must be included because they contain definitions for
variable Z
Links (3,4), (3,5), (8,9), (8,10), (9,6) and (9,10) must be included because they contain
predicate uses of Z
Find a covering set of test cases under APU + C for all variables in this example - it only
takes two tests
For variable V
In the figure, APU + C is achieved for V by (1,3,5,6,7,8,10,11,4,5,6,7,8,10,11,12[upper],13,2)
and (1,3,5,6,7,8,10,11,12[lower],13,2)
Note that the c-use at (9,10) need not be included under the APU + C criterion
For variable Z
In the figure, ACU + P coverage is achieved for Z by path (1,3,4,5,6,7,8,10,11,12[lower],13,2)
but the predicate uses of several definitions are not covered
Specifically, the (1,3) definition is not covered for the (3,5) p-use, and the (7,8) definition
is not covered for the (8,9), (9,6) and (9,10) p-uses
The above examples imply that APU + C is stronger than branch coverage, but ACU + P may
be weaker than, or incomparable to, branch coverage