testing final
(a) Define Software. Differentiate between Software Testing and Software Quality Assurance.
Software Error: An incorrect internal state that is the manifestation of some fault.
Software Failure: External, incorrect behavior with respect to the requirements or some other description of the expected behavior.
Example:
A patient gives a doctor a list of symptoms – these are the failures.
The doctor may look for anomalous internal conditions (high blood pressure, irregular heartbeat, bacteria in the bloodstream) – these are the errors.
(c) Explain five different levels of thinking maturity about software testing.
Level 0 Thinking
- Testing is the same as debugging
- Does not distinguish between incorrect behavior and mistakes in the program
- Does not help develop software that is reliable or safe
Level 1 Thinking
- Purpose of testing is to show correctness
- But correctness is impossible to achieve
Level 2 Thinking
- Purpose of testing is to show failures
- Looking for failures is a negative activity
- Puts testers and developers into an adversarial relationship
- What if there are no failures?
Level 3 Thinking
- Testing can only show the presence of failures, not their absence
- Whenever we use software, we incur some risk
- The risk may be small and the consequences unimportant
- The risk may be great and the consequences catastrophic
- Testers and developers cooperate to reduce risk
Level 4 Thinking
- Testing is a mental discipline that increases quality
- Testing is only one way to increase quality
- Test engineers can become technical leaders of the project
- Their primary responsibility is to measure and improve software quality
- Their expertise should help the developers
Verification: The process of determining whether the products of a given phase of the software
development process fulfill the requirements established during the previous phase.
Validation: The process of evaluating software at the end of software development to ensure
compliance with intended usage.
(e) Draw the block diagram of a software testing strategy and explain it briefly.
(f) Explain 7 principles of software testing.
Testing Shows Presence of Defects: Testing shows the presence of defects in the
software. The goal of testing is to make the software fail. Sufficient testing reduces the
number of defects, but even if testers are unable to find defects after repeated regression
testing, that does not mean the software is bug-free.
Exhaustive Testing is Impossible: Testing all the functionalities using all valid and invalid
inputs and preconditions is known as exhaustive testing. For example, a single 32-bit integer
input alone has 2^32 (over four billion) possible values, so testing every possible condition
would drive execution time and cost far too high. Instead of attempting exhaustive testing,
risks and priorities are taken into consideration when planning tests and estimating testing
effort.
Early Testing: Defects detected in the early phases of the SDLC are less expensive to fix, so
testing early reduces the cost of fixing defects. It is cheaper to change an incorrect
requirement than to fix fully developed functionality that does not work as intended.
Defect Clustering: Defect clustering in software testing means that a small number of modules or
functionalities contain most of the bugs or have the most operational failures. As per
the Pareto Principle (80-20 rule), 80% of issues come from 20% of the modules and the
remaining 20% of issues from the remaining 80% of the modules. Testing effort is therefore
concentrated on the 20% of modules where 80% of the bugs are found.
Pesticide Paradox: When the same test cases are repeated again and again, they eventually
stop finding new bugs; this is called the pesticide paradox in software testing. To overcome
it, the test cases must be reviewed regularly and added to or updated so that they can find
more defects.
Testing is Context Dependent: The testing approach depends on the context of the software
being developed, so the same software is tested differently in different contexts. For example,
an online banking application requires a different testing approach than an e-commerce
site.
Absence of Error – Fallacy: The absence of reported errors does not necessarily mean that the
software is usable. Software that is 99% bug-free may still be unusable if the wrong
requirements were incorporated into it and it does not address the business needs. The
software we build must not only be largely bug-free but must also fulfill the business needs;
otherwise it is unusable. The testing team must start with the hypothesis that there are errors
in the software and use test cases and other methods to try to expose them.
(g) Draw the block diagram of RIPR, V-Model & MDTD and explain them briefly.
(h) Explain the V-Model with the necessary diagram.
The V-Model (Verification and Validation Model) is an extension of the Waterfall Model that
emphasizes the relationship between each development phase and its corresponding testing
phase: requirements map to acceptance testing, system design to system testing, architectural
design to integration testing, and module design to unit testing. It ensures that verification and
validation activities are integrated into every stage of the software development lifecycle.
(i) Differentiate between smoke testing, sanity testing and regression testing.
Smoke Testing: Tests the stability of a new build.
Sanity Testing: Tests the stability of new functionality or code changes in the existing build.
Regression Testing: Tests the functionality of all affected areas after new functionality or code
changes in the existing build.
Alpha Testing
It is one of the most common types of testing used in the software industry. The objective of this
testing is to identify all possible issues or defects before releasing the product to the market or to
the user.
Alpha Testing is carried out at the end of the software development phase but before Beta
Testing; minor design changes may still be made as a result of such testing.
Alpha Testing is conducted at the developer's site, where an in-house virtual user environment
can be created for this type of testing.
Beta Testing
Beta Testing is a formal type of software testing carried out by customers. It is performed in the
real environment before releasing the product to the market for the actual end-users.
Beta Testing is carried out to ensure that there are no major failures in the software or product
and that it satisfies the business requirements from an end-user perspective. This testing is
typically done by end-users or others, and the beta version of the software or product is released
to a limited number of users in a specific area.
End-users actually use the software and share their feedback with the company, which then
takes the necessary actions before releasing the software worldwide.
Load Testing
It is a type of non-functional testing. The objective of Load Testing is to check how much
load, or what maximum workload, a system can handle without performance degradation.
Load Testing helps to find the maximum capacity of the system under a specific load and any
issues that cause software performance degradation.
Load testing is performed using tools like JMeter, LoadRunner, WebLOAD, Silk Performer, etc.
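As a rough illustration of the idea behind load generation only (not a substitute for tools like JMeter), the following sketch fires a fixed number of concurrent HTTP requests and reports how many fail and how long the run took; the target URL, request count, and thread count are placeholder values chosen for this example.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicInteger;

    public class SimpleLoadTest {
        public static void main(String[] args) throws Exception {
            // Placeholder endpoint; replace with the system under test.
            URI target = URI.create("http://localhost:8080/health");
            int totalRequests = 200;      // total load to generate
            int concurrentUsers = 20;     // simulated concurrent users

            HttpClient client = HttpClient.newHttpClient();
            ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
            AtomicInteger failures = new AtomicInteger();

            long start = System.nanoTime();
            for (int i = 0; i < totalRequests; i++) {
                pool.submit(() -> {
                    try {
                        HttpRequest request = HttpRequest.newBuilder(target).GET().build();
                        HttpResponse<String> response =
                                client.send(request, HttpResponse.BodyHandlers.ofString());
                        if (response.statusCode() >= 400) failures.incrementAndGet();
                    } catch (Exception e) {
                        failures.incrementAndGet();   // count errors as failures
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            System.out.printf("%d requests, %d failures, %d ms total%n",
                    totalRequests, failures.get(), elapsedMs);
        }
    }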
Stress Testing
This testing is done when a system is stressed beyond its specified limits in order to check how
and when it fails.
It is performed under heavy load, for example by pushing input beyond the storage capacity,
running complex database queries, or feeding continuous input or heavy database load to the
system.
iii) Black box testing and white box testing
● Functional testing verifies each function/feature of the software whereas Non Functional
testing verifies non-functional aspects like performance, usability, reliability, etc.
● Functional testing can be done manually whereas Non Functional testing is hard to perform
manually.
● Functional testing is based on customer’s requirements whereas Non Functional testing is
based on customer’s expectations.
● Functional testing has a goal to validate software actions whereas Non Functional testing has
a goal to validate the performance of the software.
● A Functional Testing example is to check the login functionality, whereas a Non Functional
testing example is to check that the dashboard loads within 2 seconds (see the sketch after this list).
● Functional describes what the product does whereas Non Functional describes how the
product works.
● Functional testing is performed before the non-functional testing.
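As a minimal illustration of the two goals above, a functional check asserts what the result is, while a non-functional check measures how the behavior is delivered. The login method, the dashboard stand-in, and the 2-second budget are all hypothetical values made up for this sketch.

    public class LoginChecks {
        // Hypothetical function under test: accepts only one known credential pair.
        static boolean login(String user, String password) {
            return "alice".equals(user) && "secret".equals(password);
        }

        // Hypothetical stand-in for loading the dashboard.
        static void loadDashboard() throws InterruptedException {
            Thread.sleep(150); // simulated work
        }

        public static void main(String[] args) throws Exception {
            // Functional check: validates behavior against the requirement.
            if (!login("alice", "secret")) throw new AssertionError("valid credentials rejected");
            if (login("alice", "wrong")) throw new AssertionError("invalid credentials accepted");

            // Non-functional check: validates a performance expectation.
            long start = System.nanoTime();
            loadDashboard();
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            if (elapsedMs > 2000) {
                throw new AssertionError("Dashboard took " + elapsedMs + " ms (budget: 2000 ms)");
            }
            System.out.println("Functional and non-functional checks passed");
        }
    }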
v) Differentiate between Monkey testing and Gorilla testing.
Monkey Testing
★ Monkey Testing is carried out by a tester who acts as if a monkey were using the application,
entering random inputs and values without any knowledge or understanding of the application.
★ The objective of Monkey Testing is to check whether an application or system crashes when
random input values/data are provided (a minimal sketch of this idea appears after the
comparison below).
★ Monkey Testing is performed randomly, no test cases are scripted, and it is not necessary
to be aware of the full functionality of the system.
Gorilla Testing
★ Gorilla Testing is a testing type performed by a tester and sometimes by the developer as
well.
★ In Gorilla Testing, one module or the functionality in the module is tested thoroughly and
heavily.
★ The objective of this testing is to check the robustness of the application.
Monkey Testing is random, chaotic, and broad, aimed at identifying unpredictable issues.
Gorilla Testing is focused, repetitive, and thorough, aimed at ensuring the reliability of a specific
module. Both serve different purposes and complement each other in comprehensive testing
strategies.
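A minimal sketch of monkey-style random-input testing, assuming a hypothetical parseAge function as the code under test; the only check is that no input makes it crash with an unexpected exception.

    import java.util.Random;

    public class MonkeyTestSketch {
        // Hypothetical function under test: parses an age string into a bounded int.
        static int parseAge(String input) {
            int value = Integer.parseInt(input.trim());
            if (value < 0 || value > 150) {
                throw new IllegalArgumentException("age out of range: " + value);
            }
            return value;
        }

        public static void main(String[] args) {
            Random random = new Random(42);   // fixed seed so failures are reproducible
            for (int i = 0; i < 10_000; i++) {
                String input = randomString(random);
                try {
                    parseAge(input);
                } catch (IllegalArgumentException expected) {
                    // Rejecting bad input is fine; we only care about unexpected crashes.
                } catch (RuntimeException unexpected) {
                    System.out.println("Crash on input \"" + input + "\": " + unexpected);
                    return;
                }
            }
            System.out.println("No unexpected crashes in 10,000 random inputs");
        }

        static String randomString(Random random) {
            int length = random.nextInt(8);
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < length; i++) {
                sb.append((char) (32 + random.nextInt(95)));  // printable ASCII
            }
            return sb.toString();
        }
    }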
vi) Functional and non-functional testing.
Functional: Testing to ensure that the software functions according to the specified requirements
and features.
Non-functional: Testing to evaluate attributes that are not tied to a specific function, such as
performance, usability, and reliability.
3. Define:
i) Coverage criterion
A coverage criterion is a set of rules or requirements that a test suite must satisfy to ensure
adequate testing of a software program. It specifies which parts of the software need to be
tested and to what extent.
A test criterion is a standard or rule used to design and evaluate test cases, ensuring that
specific objectives, such as coverage or functionality, are met during the testing process.
Test requirements are specific conditions, configurations, or criteria that must be satisfied during
testing to achieve the desired coverage. These are derived from the coverage criterion and
guide the creation of test cases.
Example:
For edge coverage, test requirements would be the set of all edges in the control flow graph
that must be exercised by the test cases.
Node coverage (also called statement coverage) requires that every node in the control flow
graph is executed at least once during testing.
Example:
For a program with an if-else statement, node coverage requires at least one test that takes the
if branch and at least one that takes the else branch, so that the statements in both branches are
executed (see the sketch below).
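A minimal sketch, assuming a hypothetical classify method; the comments mark the CFG nodes and the inputs needed for node coverage.

    public class NodeCoverageExample {
        // Hypothetical method under test.
        static String classify(int score) {       // node 1: entry / decision (score >= 50)
            String result;
            if (score >= 50) {
                result = "pass";                   // node 2: if branch
            } else {
                result = "fail";                   // node 3: else branch
            }
            return result;                         // node 4: exit
        }

        public static void main(String[] args) {
            // Node coverage needs every node executed at least once:
            // classify(70) covers nodes 1, 2, 4; classify(30) covers nodes 1, 3, 4.
            System.out.println(classify(70));   // expected: pass
            System.out.println(classify(30));   // expected: fail
        }
    }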
v) Edge Coverage
Edge coverage (also called branch coverage) requires that every edge in the control flow
graph is traversed at least once during testing.
Edge pair coverage requires that every pair of connected edges (two consecutive edges) in the
control flow graph is traversed at least once. This tests combinations of paths through the
program.
Complete path coverage requires that all possible paths in the control flow graph are executed
at least once. This is often impractical for complex programs due to the exponential number of
paths, especially in the presence of loops.
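To contrast node and edge coverage, consider a hypothetical method whose if statement has no else branch: a single test can reach every node yet never take the edge that skips the if body.

    public class EdgeCoverageExample {
        // Hypothetical method under test: clamps a negative value to zero.
        static int clampToZero(int x) {   // node 1: decision (x < 0)
            if (x < 0) {
                x = 0;                    // node 2: if body
            }
            return x;                     // node 3: exit
        }
        // CFG edges: (1,2), (2,3), and (1,3) for the case where the if body is skipped.

        public static void main(String[] args) {
            // clampToZero(-5) alone executes nodes 1, 2, 3 -> node coverage is satisfied,
            // but edge (1,3) is never taken, so edge coverage is not.
            System.out.println(clampToZero(-5));  // covers edges (1,2) and (2,3)
            System.out.println(clampToZero(4));   // adds edge (1,3) -> edge coverage satisfied
        }
    }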
4. Explain the basic architecture of a control flow graph. Define simple path and prime
path.
A Control Flow Graph (CFG) is a directed graph that represents the flow of control in a program:
nodes represent statements or basic blocks, and edges represent possible transfers of control
between them.
Components of a CFG:
1. Entry Node:
○ Represents the starting point of the program or function.
○ It has no incoming edges.
2. Exit Node:
○ Represents the end of the program or function.
○ It has no outgoing edges.
3. Decision Nodes:
○ Represent conditional statements (e.g., if, while).
○ Have multiple outgoing edges, typically for true or false branches.
4. Sequential Nodes:
○ Represent straightforward execution from one statement to the next.
5. Back Edges:
○ Represent control flow returning to a previous point in the program, usually
caused by loops.
Simple Path
A simple path is a path that does not traverse any node more than once, except possibly the
starting and ending nodes.
Prime Path
A prime path is a simple path that does not appear as a proper subpath of any other simple path.
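A minimal worked example, assuming a hypothetical countDown method; the comments give its CFG and enumerate the prime paths.

    public class PrimePathExample {
        // Hypothetical method under test: a single while loop.
        static int countDown(int n) {
            int steps = 0;            // node 1: entry (initialization)
            while (n > 0) {           // node 2: decision, loop test
                n--;                  // node 3: loop body
                steps++;
            }
            return steps;             // node 4: exit
        }
        // CFG edges: (1,2), (2,3), (3,2) back edge, (2,4).
        //
        // Prime paths (simple paths that are not proper subpaths of any other simple path):
        //   [1,2,3]  [1,2,4]  [3,2,4]  [2,3,2]  [3,2,3]
        // For example, [1,2] is simple but not prime, because it is a proper subpath
        // of the simple path [1,2,3].

        public static void main(String[] args) {
            // Tours [1,2,4]: the loop never executes.
            System.out.println(countDown(0));   // expected: 0
            // Tours [1,2,3], [2,3,2], [3,2,3], [3,2,4]: the loop executes twice.
            System.out.println(countDown(2));   // expected: 2
        }
    }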
6(a)
Answer:
6(b)
Extra:
9. Determine the def & use for the following code snippet / CFG.
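The original code snippet is not included here. As a purely hypothetical illustration of how defs and uses are identified (all names and the method itself are made up for this sketch), consider:

    public class DefUseExample {
        // Hypothetical method: sums the positive elements of an array.
        static int sumPositive(int[] a) {      // def: a   (parameter definition)
            int sum = 0;                       // def: sum
            int i = 0;                         // def: i
            while (i < a.length) {             // use: i, a
                if (a[i] > 0) {                // use: a, i
                    sum = sum + a[i];          // def: sum; use: sum, a, i
                }
                i = i + 1;                     // def: i; use: i
            }
            return sum;                        // use: sum
        }
        // A def is a location where a variable is assigned a value;
        // a use is a location where a variable's value is read.
        // Example def-use pair: the def of sum at "int sum = 0" reaches the use at
        // "return sum" along any path that never takes the if branch (a def-clear path for sum).

        public static void main(String[] args) {
            System.out.println(sumPositive(new int[] { 3, -1, 4 }));  // expected: 7
        }
    }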