Unit 1 ST PPT
Basic Concepts and Preliminaries - Software Quality, Role of Testing, Verification and
Validation, Failure, Error, Fault, and Defect, Notion of Software Reliability, Objectives
of Testing, What Is a Test Case? Expected Outcome, Concept of Complete Testing,
Central Issue in Testing, Testing Activities, Test Levels, Sources of Information for Test
Case Selection, White-Box, Black-Box and Gray-Box Testing, Test Planning and Design
Static analysis and dynamic analysis are complementary in nature, and for better
effectiveness, both must be performed repeatedly and alternated
Role of Testing……
Dynamic code analysis advantages
• It identifies vulnerabilities in a runtime environment.
• It allows for analysis of applications in which you do not have access to the actual code.
• It identifies vulnerabilities that might have been false negatives in the static code analysis.
• It permits you to validate static code analysis findings.
• It can be conducted against any application.
Dynamic code analysis limitations
• Automated tools provide a false sense of security that everything is being addressed.
• Cannot guarantee the full test coverage of the source code
• Automated tools produce false positives and false negatives.
• Automated tools are only as good as the rules they are using to scan with.
• It is more difficult to trace a vulnerability back to the exact location in the code, so fixing the problem takes longer.
• The goal of both static and dynamic analysis is to identify as many faults as possible, so that those faults can be fixed at an early stage of software development.
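To make the complementarity concrete, here is a small hypothetical Python snippet (not from the slides): it contains one issue a static analyzer can flag without running the program, and one fault that typically surfaces only during dynamic analysis, when a particular input is actually executed.

```python
# Hypothetical example: one statically detectable issue and one runtime fault.

def average(values):
    total = 0
    for v in values:
        total += v
    return total / len(values)   # fault: division by zero when 'values' is empty

def report(values):
    avg = average(values)
    unused = avg * 2             # a static analyzer can warn: variable assigned but never used
    return "average = " + str(avg)

print(report([2, 4, 6]))         # works for typical input
try:
    print(report([]))            # dynamic analysis with this input exposes the runtime failure
except ZeroDivisionError as exc:
    print("failure observed:", exc)
```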
Verification and Validation
• Two similar concepts related to software testing frequently used by practitioners are
verification and validation.
Verification: This kind of activity helps us evaluate a software system by determining whether the product of a given development phase satisfies the requirements established before the start of that phase. Put another way, verification is the process of checking that the software achieves its goal without any bugs: it ensures that the product being developed is right and fulfills the requirements we have. Verification is static testing. Verification asks, "Are we building the product right?"
• The product can be an intermediate product, such as requirement specification,
design specification, code, user manual, or even the final product.
• Activities that check the correctness of a development phase are called verification
activities.
Verification and Validation….
What is Validation?
• Activities of this kind help us in confirming that a product meets its intended use.
• Validation activities aim at confirming that a product meets its customer’s expectations.
• In other words, validation activities focus on the final product, which is extensively tested from the
customer point of view. Validation establishes whether the product meets overall expectations of the
users.
• Executing validation activities late is risky because it leads to higher development cost. Validation activities may therefore also be executed at early stages of the software development cycle.
• An example of early execution of validation activities can be found in the eXtreme Programming (XP)
software development methodology. In the XP methodology, the customer closely interacts with the
software development group and conducts acceptance tests during each development iteration
• Validation is the process of checking whether the software product is up to the mark, that is, whether it satisfies the high-level requirements. It checks that what we are developing is the right product by comparing actual behavior against expected behavior. Validation is dynamic testing. Validation asks, "Are we building the right product?"
Verification activities aim at confirming that one is building the product correctly, whereas validation
activities aim at confirming that one is building the correct product.
Differences between Verification and Validation
• Definition: Verification refers to the set of activities that ensure software correctly implements the specific function. Validation refers to the set of activities that ensure that the software that has been built is traceable to customer requirements.
• Focus: Verification includes checking documents, designs, code, and programs. Validation includes testing and validating the actual product.
• Type of Testing: Verification is static testing. Validation is dynamic testing.
• Execution: Verification does not include the execution of the code. Validation includes the execution of the code.
• Methods Used: Methods used in verification are reviews, walkthroughs, inspections, and desk-checking. Methods used in validation are black-box testing, white-box testing, and non-functional testing.
• Purpose: Verification checks whether the software conforms to specifications or not. Validation checks whether the software meets the requirements and expectations of a customer or not.
• Bug: Verification can find bugs in the early stage of development. Validation can only find the bugs that could not be found by the verification process.
• Human or Computer: Verification consists of checking documents/files and is performed by a human. Validation consists of execution of the program and is performed by a computer.
• Lifecycle: Verification starts after a valid and complete specification is available. Validation begins as soon as the project starts.
• Another Terminology: Verification is also termed white-box testing or static testing, as the work product goes through reviews. Validation can be termed black-box testing or dynamic testing, as the work product is executed.
• Performance: Verification finds about 50 to 60% of the defects. Validation finds about 20 to 30% of the defects.
Failure, Error, Fault, and Defect
Defect is a good synonym for fault, as is bug. Faults can be elusive. An error of omission results in a fault in which something that should be present in the representation is missing. This suggests a useful refinement; we might speak of faults of commission and faults of omission. A fault of commission occurs when we enter something incorrect into a representation. A fault of omission occurs when we fail to enter correct information. Of these two types, faults of omission are more difficult to detect and resolve.
• Failure—A failure occurs when the code corresponding to a fault executes.
Two subtleties arise here: the first is that failures only occur in an executable representation, which is usually taken to be source code or, more precisely, loaded object code; the second is that this definition relates failures only to faults of commission, since a fault of omission has no faulty code to execute.
• Incident—When a failure occurs, it may or may not be readily apparent to the user (or customer or tester). An incident is the symptom associated with a failure that alerts the user to the occurrence of a failure.
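As a small illustrative sketch (hypothetical code, not from the slides), the function below contains a fault of commission; a failure occurs only for inputs that execute the faulty statement, and the wrong output seen by the user is the incident.

```python
def absolute_value(x):
    """Intended behavior: return |x| for any number x."""
    if x < 0:
        return -x
    return -x                    # fault of commission: should be 'return x'

print(absolute_value(-7))        # 7  -- the faulty line does not execute, so no failure
print(absolute_value(7))         # -7 -- the faulty line executes and a failure occurs;
                                 #       the wrong value observed here is the incident
```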
Failure, Error, Fault, and Defect….
Test—Testing is obviously concerned with errors, faults, failures, and incidents. A test is
the act of exercising software with test cases. A test has two distinct goals: to find
failures or to demonstrate correct execution.
Test case—A test case has an identity and is associated with a program behavior. It also
has a set of inputs and expected outputs.
Failure, Error, Fault, and Defect….
Defect report or Bug report consists of the following information:
• Defect ID – Every bug or defect has its unique identification number
• Defect Description – This includes the abstract of the issue.
• Product Version – This includes the product version of the application in which the defect is found.
• Detail Steps – This includes the detailed steps of the issue with the screenshots attached so that developers can
recreate it.
• Date Raised – This includes the Date when the bug is reported
• Reported By – This includes the details of the tester who reported the bug like Name and ID
• Status – This field includes the Status of the defect like New, Assigned, Open, Retest, Verification, Closed, Failed,
Deferred, etc.
• Fixed by – This field includes the details of the developer who fixed it like Name and ID
• Date Closed – This includes the Date when the bug is closed
• Severity – The severity (Critical, Major, or Minor) tells us about the impact of the defect or bug on the
software application
• Priority – Based on the Priority set (High/Medium/Low) the order of fixing the defect can be made.
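A minimal sketch of how such a defect report could be represented in code; the field names follow the list above, while the class name and all values are made up for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DefectReport:
    defect_id: str                     # unique identification number of the bug
    description: str                   # abstract of the issue
    product_version: str               # version in which the defect was found
    detail_steps: List[str]            # detailed steps (with screenshots) to recreate the issue
    date_raised: str
    reported_by: str
    status: str = "New"                # New, Assigned, Open, Retest, Verification, Closed, Failed, Deferred
    fixed_by: Optional[str] = None
    date_closed: Optional[str] = None
    severity: str = "Minor"            # Critical, Major, or Minor
    priority: str = "Low"              # High, Medium, or Low

# Made-up example report:
bug = DefectReport(
    defect_id="DEF-101",
    description="Wrong balance shown after a failed withdrawal",
    product_version="2.3.1",
    detail_steps=["Log in", "Withdraw more than the balance", "Check the displayed balance"],
    date_raised="18-03-2021",
    reported_by="Tester T-42",
    severity="Major",
    priority="High",
)
print(bug.status)   # New
```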
Notion of Software Reliability
• A quantitative measure that is useful in assessing the quality of a software system is
its reliability.
• Software reliability is defined as the probability of failure-free operation of a
software system for a specified time in a specified environment.
• The level of reliability of a system depends on those inputs that cause failures to be observed by the end users. Software reliability can be estimated via random testing.
• Since the notion of reliability is specific to a "specified environment," test data must be drawn from an input distribution that closely resembles the future usage of the system.
• The future usage pattern of a system is captured in a general form called the operational profile.
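A rough sketch of reliability estimation via random testing, assuming a hypothetical system_under_test and a made-up operational profile: reliability is estimated as the fraction of failure-free runs on inputs drawn from that profile.

```python
import random

def system_under_test(request):
    """Hypothetical system: fails intermittently for one class of inputs."""
    if request == "transfer" and random.random() < 0.02:
        raise RuntimeError("fault triggered")
    return "ok"

# Made-up operational profile: relative frequency of each input class in future usage.
operational_profile = {"balance_enquiry": 0.6, "withdrawal": 0.3, "transfer": 0.1}

def estimate_reliability(num_runs=10_000):
    classes = list(operational_profile)
    weights = list(operational_profile.values())
    failures = 0
    for _ in range(num_runs):
        request = random.choices(classes, weights=weights)[0]  # sample from the profile
        try:
            system_under_test(request)
        except RuntimeError:
            failures += 1
    return 1 - failures / num_runs        # estimated probability of failure-free operation

print(f"estimated reliability: {estimate_reliability():.4f}")
```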
Objectives of Testing
Different stakeholders view a test process from different perspectives:
• It does work: While implementing a program unit, the programmer may
want to test whether or not the unit works in normal circumstances. The
programmer gains much confidence if the unit works to his or her satisfaction.
The same idea applies to an entire system as well: once a system has
been integrated, the developers may want to test whether or not the
system performs the basic functions. Here, for psychological reasons,
the objective of testing is to show that the system works, rather than that it
does not work.
• It does not work: Once the programmer (or the development team) is
satisfied that a unit (or the system) works to a certain degree, more tests
are conducted with the objective of finding faults in the unit (or the
system). Here, the idea is to try to make the unit (or the system) fail.
Objectives of Testing….
• Reduce the risk of failure: Most complex software systems contain faults, which cause the
system to fail from time to time.
This concept of "failing from time to time" gives rise to the notion of failure rate.
As faults are discovered and fixed while performing more and more tests, the failure rate of a
system generally decreases.
Thus, a higher-level objective of performing tests is to bring down the risk of failure to an
acceptable level.
• Reduce the cost of testing: The costs associated with a test process include the cost of
designing, maintaining, and executing test cases, the cost of analyzing the result of executing each
test case, the cost of documenting the test cases, and the cost of actually executing the system and
documenting it.
Therefore, the fewer the test cases designed, the lower the associated cost of testing.
However, producing a small number of arbitrary test cases is not a good way of saving cost.
The highest-level objective of performing tests is to produce low-risk software with a small
number of test cases.
This idea leads us to the concept of effectiveness of test cases. Test engineers must therefore
judiciously select a small number of effective test cases.
What is a Test case?
• A test case has an identity and is associated with a program behavior. It also has a set of inputs and expected
outputs.
• In its most basic form, a test case is a simple pair of the form
<input, expected outcome>
State-less systems: A compiler is a stateless system.
– Test cases are very simple, because the outcome depends solely on the current input.
If a program under test is expected to compute the square root of nonnegative numbers, then four
examples of test cases are as shown below.
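The four example test cases themselves are not reproduced in this text; as an illustration only, a set of <input, expected outcome> pairs for such a square-root program could be written as the following pytest-style check (the chosen values are hypothetical, not the slide's).

```python
import math
import pytest

# Illustrative <input, expected outcome> pairs for a square-root program.
square_root_cases = [
    (0, 0.0),
    (1, 1.0),
    (25, 5.0),
    (100, 10.0),
]

@pytest.mark.parametrize("value, expected", square_root_cases)
def test_square_root(value, expected):
    assert math.sqrt(value) == pytest.approx(expected)
```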
Step #9 – Status:
Finally, set the status as Pass or Fail by comparing the actual result against the expected result. If the actual and expected results
are the same, mark the test as Passed; otherwise mark it as Failed. If a test fails, it has to go through the bug life cycle to be fixed.
Example:
Result: Pass
Test case example…
A test case is a set of conditions and criteria that specify how a tester will determine if the system does
what it is expected to do.
Test cases can be manual where a tester follows conditions and steps by hand or automated where a test is
written as a program to run against the system
Project name: ATM
Module Name: Withdrawal
Created By: Manager
Creation date: 18-03-2021
Reviewed by:
Reviewed date:
Columns of the test case template: Test scenario ID | Test scenario description | Test case ID | Test case description | Test steps | Pre-condition | Test data | Post condition | Expected result | Actual result | Status
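As a minimal sketch of the automated variant for the Withdrawal module above, assuming a hypothetical Account class (not part of the slides), one row of the template could be coded as:

```python
# Hypothetical Account class standing in for the ATM withdrawal module.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return amount

def test_withdraw_within_balance():
    account = Account(balance=5000)      # pre-condition: account holds 5000
    dispensed = account.withdraw(2000)   # test step with test data 2000
    assert dispensed == 2000             # expected result: 2000 is dispensed
    assert account.balance == 3000       # post-condition: balance reduced to 3000

test_withdraw_within_balance()
print("status: Pass")
```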
Test Planning and Design….
• Test design is a critical phase of software testing.
• In this phase:
the system requirements are critically studied,
system features to be tested are thoroughly identified, and
the objectives of test cases and
the detailed behaviour of test cases are defined.
• Test objectives are identified from different sources namely, the requirement
specification and the functional specification.
• Each test case is designed as a combination of modular test components
called test steps.
• Test steps are combined to create more complex tests, as sketched below.
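A sketch of this idea, using hypothetical helper functions as the modular test steps and a more complex test case built by combining them:

```python
# Hypothetical modular test steps (illustrative only).
def step_login(session, user):
    session["user"] = user

def step_select_module(session, module):
    session["module"] = module

def step_verify_module(session, expected_module):
    assert session["module"] == expected_module

# A more complex test case created by combining the test steps above.
def test_withdrawal_screen_reachable():
    session = {}
    step_login(session, "alice")
    step_select_module(session, "Withdrawal")
    step_verify_module(session, "Withdrawal")

test_withdrawal_screen_reachable()
print("status: Pass")
```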
Test Planning and Design….
• A new test-centric approach to system development is gradually emerging,
called Test-Driven Development (TDD).
• Here programmers design, develop, and implement test cases before the
production code is written.
• This approach is a key practice in modern agile software development processes.
• The main characteristics of agile software development processes are:
Incremental development
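To illustrate the TDD practice mentioned above with a made-up example: the test is written first (and initially fails because the production code does not yet exist), and only then is just enough production code written to make it pass.

```python
# Step 1 (written first): the test case, which fails until grand_total is implemented.
def test_grand_total():
    assert grand_total([100, 250], tax_rate=0.25) == 437.5

# Step 2: just enough production code to make the test pass.
def grand_total(amounts, tax_rate):
    subtotal = sum(amounts)
    return subtotal * (1 + tax_rate)

test_grand_total()
print("test passed")
```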