Unit 1 ST PPT

Uploaded by

www.abhayk2004

UNIT - 1

Basic Concepts and Preliminaries - Software Quality, Role of Testing, Verification and
Validation, Failure, Error, Fault, and Defect, Notion of Software Reliability, Objectives
of Testing, What Is a Test Case? Expected Outcome, Concept of Complete Testing,
Central Issue in Testing, Testing Activities, Test Levels, Sources of Information for Test
Case Selection, White-Box, Black-Box and Gray-Box Testing, Test Planning and Design

Agile Tool – JIRA


 What is testing?
 Software testing is a process used to identify the correctness,
completeness and quality of developed computer software.
 The process of devising a set of inputs to a given piece of software that
will cause the software to exercise some portion of its code.
 The developer of the software can then check that the results
produced by the software are in accord with his or her expectations.
 Software Quality
• The concept of software quality is first studied in terms of quality factors and quality criteria.
• A quality factor represents a behavioural characteristic of a system.
• Some examples of high-level quality factors are
 correctness
 reliability
 efficiency
 testability
 maintainability
 reusability
• A quality criterion is an attribute of a quality factor that is related to software development.
For example:
 Modularity is an attribute of the architecture of a software system.
 A highly modular software allows designers to put cohesive components in one module,
thereby improving the maintainability of the system.
Five views of quality, described in a comprehensive manner, are as follows:
1. Transcendental View: It envisages quality as something that can be recognized but is difficult to define. The
transcendental view is not specific to software quality alone but has been applied in other complex areas of
everyday life. Or
Quality is something that is understood clearly, but it’s not tangible and can’t be communicated, such as love or
beauty.
2. User View: It perceives quality as fitness for purpose. According to this view, while evaluating the quality of a
product, one must ask the key question: “Does the product satisfy user needs and expectations?”
Or a product has quality if it satisfies the user requirements.
3. Manufacturing View: Here quality is understood as conformance to the specification. The quality level of a product
is determined by the extent to which the product meets its specifications.
Or quality is evident in how well the product conforms to design specifications and manufacturing standards.
4. Product View: quality is viewed as tied to the inherent characteristics of the product. A product’s inherent
characteristics, that is, internal qualities, determine its external qualities.
Or quality refers to the attributes/characteristics or features that a product has.
5. Value-Based View: Quality, in this perspective, depends on the amount a customer is willing to pay for it.
Or if a product is perceived to be offering good value for the price, it possesses good quality.
 Various software quality models
Software quality models have been proposed to define quality and its
related attributes.
 ISO 9126 - International Organization for Standardization (ISO)
characteristics: functionality, reliability, usability, efficiency,
maintainability, and portability.
 CMM - Capability Maturity Model Software Engineering Institute (SEI)
o In the CMM framework, a development process is evaluated on a
scale of level 1 through level 5.
o For example,
Level 1 is called the initial level, whereas
level 5—optimized—is the highest level of process maturity.
 Software testing Models
There are two well-known process models
 Test Process Improvement (TPI) model
In the dynamic world of software development, ensuring top-notch software quality is crucial for the
success of any product. Test Process Improvement (TPI) is a powerful approach that helps
organizations continuously enhance their testing practices, resulting in better software quality and
customer satisfaction
 Test Maturity Model (TMM)
Models allow an organization to assess the current state of their software
testing processes:
o Identify the next logical area for improvement
o Recommend an action plan for test process improvement
The Test Maturity Model (TMM) is a framework designed to assess and improve the maturity of an
organization’s software testing processes. It helps organizations evaluate their current testing
practices, identify weaknesses, and implement structured improvements to enhance their testing
capabilities.
Role of Testing
• Testing plays an important role in achieving and assessing the quality of a software
product
• Software testing is a verification process for software quality assessment and
improvement
• Divided into two broad categories, namely, static analysis and dynamic analysis.
 Static Analysis: It is based on the examination of a number of documents,
namely- requirements documents, software models, design documents, and
source code
– Traditional static analysis includes code review, inspection, walk-through, algorithm analysis,
and proof of correctness
(Inspection and walkthrough are methods of software review with distinct approaches. Inspection is formal, involving detailed
group scrutiny to detect defects early, while walkthrough is informal, where an author presents work to peers for feedback.
Inspection follows structured phases and roles, using checklists, ensuring thoroughness, while walkthroughs are flexible,
focusing on discussion and feedback without predefined steps. Each method offers unique benefits in enhancing software
quality through early detection and collaborative improvement processes.)
– It does not involve actual execution of the code under development. Instead, it examines
code and reasons over all possible behaviors that might arise during run time
– Compiler optimizations are a standard form of static analysis
Advantages of Static Analysis
• It can find weaknesses in the code at their exact location.
• It can be conducted by trained software assurance developers who fully understand the code.
• Source code can be easily understood by other and future developers.
• It allows a quicker turnaround for fixes.
• Weaknesses are found earlier in the development life cycle, reducing the cost to fix, and fewer defects surface in later tests.
• Unique defects are detected that cannot be detected, or can hardly be detected, using dynamic tests.
Example:
o Unreachable code
o Variable use (undeclared, unused)
o Uncalled functions
o Boundary value violations
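Several of the defects listed above (unused variables, unreachable code) can be found without ever running the program. As a minimal, hypothetical sketch (not a production linter; the SOURCE snippet and the function name are invented for illustration), Python's standard ast module can flag names that are assigned but never read:

```python
import ast

SOURCE = """
def area(radius):
    pi = 3.14159
    unused = 42        # assigned but never read
    return pi * radius * radius
"""

def unused_variables(source):
    """Report names that are assigned but never read:
    a classic static-analysis check performed on the code text alone."""
    tree = ast.parse(source)
    assigned, used = set(), set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                assigned.add(node.id)   # name appears on the left of an assignment
            else:
                used.add(node.id)       # name is read somewhere
    return sorted(assigned - used)

print(unused_variables(SOURCE))  # → ['unused']
```

Real static analyzers (compilers, lint tools) perform far deeper analyses, but the principle is the same: the code is examined, never executed.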
 Dynamic Analysis: Dynamic analysis of a software system involves actual
program execution in order to expose possible program failures.
o The behavioral and performance properties of the program are also observed
o Programs are executed with both typical and carefully chosen input values
o For practical considerations, only a finite subset of the input set can be selected. Therefore, in
testing, we observe some representative program behaviors and reach a conclusion about the
quality of the system
o Careful selection of a finite test set is crucial to reaching a reliable conclusion

Static analysis and dynamic analysis are complementary in nature, and for better
effectiveness, both must be performed repeatedly and alternated
Dynamic code analysis advantages
• It identifies vulnerabilities in a runtime environment.
• It allows for analysis of applications in which you do not have access to the actual code.
• It identifies vulnerabilities that might have been false negatives in the static code analysis.
• It permits you to validate static code analysis findings.
• It can be conducted against any application.
Dynamic code analysis limitations
• Automated tools provide a false sense of security that everything is being addressed.
• It cannot guarantee full test coverage of the source code
• Automated tools produce false positives and false negatives.
• Automated tools are only as good as the rules they are using to scan with.
• It is more difficult to trace the vulnerability back to the exact location in the code, taking
longer to fix the problem
• Through static and dynamic analysis, the aim is to identify as many faults as possible
so that they can be fixed at an early stage of the software development.
Verification and Validation
• Two similar concepts related to software testing frequently used by practitioners are
verification and validation.
 Verification: This kind of activity helps us in evaluating a software system by
determining whether the product of a given development phase satisfies the
requirements established before the start of that phase. or
Verification is the process of checking that software achieves its goal without any
bugs. It is the process of ensuring that the product being developed is right, and it
verifies whether the developed product fulfills the requirements that we have.
Verification is static testing. Verification asks, "Are we building the product right?"
• The product can be an intermediate product, such as requirement specification,
design specification, code, user manual, or even the final product.
• Activities that check the correctness of a development phase are called verification
activities.
 What is Validation?
• Activities of this kind help us in confirming that a product meets its intended use.
• Validation activities aim at confirming that a product meets its customer’s expectations.
• In other words, validation activities focus on the final product, which is extensively tested from the
customer point of view. Validation establishes whether the product meets overall expectations of the
users.
• Late execution of validation activities is risky, as it leads to higher development cost. Validation
activities may instead be executed at early stages of the software development cycle.
• An example of early execution of validation activities can be found in the eXtreme Programming (XP)
software development methodology. In the XP methodology, the customer closely interacts with the
software development group and conducts acceptance tests during each development iteration
• Validation is the process of checking whether the software product is up to the mark; in other words,
whether the product meets the high-level requirements. It checks that what we are developing is the
right product, comparing the actual product against the expected one. Validation is dynamic testing.
Validation asks, "Are we building the right product?"
Verification activities aim at confirming that one is building the product correctly, whereas validation
activities aim at confirming that one is building the correct product.
Differences between Verification and Validation

• Definition: Verification refers to the set of activities that ensure software correctly implements the specific function. Validation refers to the set of activities that ensure that the software that has been built is traceable to customer requirements.
• Focus: Verification includes checking documents, designs, codes, and programs. Validation includes testing and validating the actual product.
• Type of Testing: Verification is static testing. Validation is dynamic testing.
• Execution: Verification does not include the execution of the code. Validation includes the execution of the code.
• Methods Used: Methods used in verification are reviews, walkthroughs, inspections, and desk-checking. Methods used in validation are black box testing, white box testing, and non-functional testing.
• Purpose: Verification checks whether the software conforms to specifications or not. Validation checks whether the software meets the requirements and expectations of a customer or not.
• Bug: Verification can find bugs in the early stage of the development. Validation can only find the bugs that could not be found by the verification process.
• Goal: The goal of verification is the application and software architecture and specification. The goal of validation is the actual product.
• Responsibility: The quality assurance team does verification. Validation is executed on software code with the help of the testing team.
• Timing: Verification comes before validation. Validation comes after verification.
• Human or Computer: Verification consists of checking of documents/files and is performed by humans. Validation consists of execution of the program and is performed by computer.
• Lifecycle: Verification starts after a valid and complete specification is available. Validation begins as soon as the project starts.
• Error Focus: Verification is for prevention of errors. Validation is for detection of errors.
• Another Terminology: Verification is also termed white box testing or static testing, as the work product goes through reviews. Validation can be termed black box testing or dynamic testing, as the work product is executed.
• Performance: Verification finds about 50 to 60% of the defects. Validation finds about 20 to 30% of the defects.
• Stability: Verification is based on the opinion of the reviewer and may change from person to person. Validation is based on fact and is often stable.
Failure, Error, Fault, and Defect
• Error— People make errors. A good synonym is mistake. When people make mistakes while coding, we call these
mistakes bugs. Errors tend to propagate; a requirements error may be magnified during design and amplified still
more during coding.
• Fault—A fault is the result of an error. It is more precise to say that a fault is the representation of an error, where
representation is the mode of expression, such as narrative text, Unified Modeling Language diagrams, hierarchy charts,
and source code.

Defect is a good synonym for fault, as is bug. Faults can be elusive. An error of omission results in a fault in which
something is missing that should be present in the representation. This suggests a useful refinement; we might speak
of faults of commission and faults of omission. A fault of commission occurs when we enter something into a
representation that is incorrect. Faults of omission occur when we fail to enter correct information. Of these two
types, faults of omission are more difficult to detect and resolve.
• Failure—A failure occurs when the code corresponding to a fault executes.
Two subtleties arise here: one is that failures only occur in an executable representation, which is usually taken to be
source code or, more precisely, loaded object code; the second is that this definition relates failures only to faults of
commission, since a fault of omission has no corresponding code to execute.

• Incident—When a failure occurs, it may or may not be readily apparent to the user (or customer or tester). An incident
is the symptom associated with a failure that alerts the user to the occurrence of a failure.
Test—Testing is obviously concerned with errors, faults, failures, and incidents. A test is
the act of exercising software with test cases. A test has two distinct goals: to find
failures or to demonstrate correct execution.
Test case—A test case has an identity and is associated with a program behavior. It also
has a set of inputs and expected outputs.
Defect report or Bug report consists of the following information:
• Defect ID – Every bug or defect has its unique identification number
• Defect Description – This includes the abstract of the issue.
• Product Version – This includes the product version of the application in which the defect is found.
• Detail Steps – This includes the detailed steps of the issue with the screenshots attached so that developers can
recreate it.
• Date Raised – This includes the Date when the bug is reported
• Reported By – This includes the details of the tester who reported the bug like Name and ID
• Status – This field includes the Status of the defect like New, Assigned, Open, Retest, Verification, Closed, Failed,
Deferred, etc.
• Fixed by – This field includes the details of the developer who fixed it like Name and ID
• Date Closed – This includes the Date when the bug is closed
• Severity – Based on the severity (Critical, Major or Minor) it tells us about impact of the defect or bug in the
software application
• Priority – Based on the Priority set (High/Medium/Low) the order of fixing the defect can be made.
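The fields above can be collected into a single record. The following Python sketch is hypothetical (the field names, defaults, and example values are illustrative, not the schema of any particular bug-tracking tool):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DefectReport:
    """A minimal defect/bug report carrying the fields described above."""
    defect_id: str          # unique identification number
    description: str        # abstract of the issue
    product_version: str    # version in which the defect was found
    steps: list             # detailed steps to recreate the issue
    date_raised: date
    reported_by: str
    status: str = "New"     # New, Assigned, Open, Retest, Closed, ...
    severity: str = "Minor" # Critical / Major / Minor
    priority: str = "Low"   # High / Medium / Low

report = DefectReport(
    defect_id="BUG-101",
    description="Login button unresponsive on second click",
    product_version="2.3.0",
    steps=["Open login page", "Click Login twice"],
    date_raised=date(2021, 3, 18),
    reported_by="tester-42",
)
print(report.status)  # → New
```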
Notion of Software Reliability
• A quantitative measure that is useful in assessing the quality of a software is
its reliability.
• Software reliability is defined as the probability of failure-free operation of a
software system for a specified time in a specified environment.
• The level of reliability of a system depends on those inputs that cause
failures to be observed by the end users. Software reliability can be
estimated via random testing
• Since the notion of reliability is specific to a "specified environment," test data must
be drawn from the input distribution to closely resemble the future usage of
the system.
• Capturing the future usage pattern of a system in a general sense is
described in a form called the operational profile.
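The definition above suggests a simple estimation procedure: draw random inputs from an assumed operational profile and count the failure-free runs. The sketch below is hypothetical; the faulty unit, the uniform profile over [0, 1000), and the trial count are all invented for illustration:

```python
import random

def faulty_sqrt(x):
    # Hypothetical unit under test: it fails (raises) for a
    # narrow slice of its input domain.
    if 99.0 <= x < 100.0:
        raise ValueError("fault triggered")
    return x ** 0.5

def estimate_reliability(unit, trials=10_000, seed=7):
    """Estimate the probability of failure-free operation by random
    testing against a uniform operational profile over [0, 1000)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        try:
            unit(rng.uniform(0.0, 1000.0))
        except Exception:
            failures += 1
    return 1.0 - failures / trials

print(round(estimate_reliability(faulty_sqrt), 3))
```

With a fault window of width 1 in a domain of width 1000, the estimate comes out close to 0.999; a different operational profile would yield a different reliability figure for the same code, which is exactly why the operational profile matters.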
Objectives of Testing
Different stakeholders view a test process from different perspectives:
• It does work: While implementing a program unit, the programmer may
want to test whether or not the unit works in normal circumstances. The
programmer gains much confidence if the unit works to his or her satisfaction.
The same idea applies to an entire system as well: once a system has
been integrated, the developers may want to test whether or not the
system performs the basic functions. Here, for psychological reasons,
the objective of testing is to show that the system works, rather than that it
does not work.
• It does not work: Once the programmer (or the development team) is
satisfied that a unit (or the system) works to a certain degree, more tests
are conducted with the objective of finding faults in the unit (or the
system). Here, the idea is to try to make the unit (or the system) fail
• Reduce the risk of failure: Most of the complex software systems contain faults, which cause the
system to fail from time to time.
 This concept of “failing from time to time” gives rise to the notion of failure rate.
 As faults are discovered and fixed while performing more and more tests, the failure rate of a
system generally decreases.
 Thus, a higher level objective of performing tests is to bring down the risk of failing to an
acceptable level.
• Reduce the cost of testing: Different kinds of costs associated with a test process include the cost of
designing, maintaining, and executing test cases, the cost of analyzing the result of executing each
test case, the cost of documenting the test cases, and the cost of actually executing the system and
documenting it.
 Therefore, the fewer the test cases designed, the lower the associated cost of
testing.
 However, producing a small number of arbitrary test cases is not a good way of saving cost.
 The highest-level objective of performing tests is to produce low-risk software with a small
number of test cases.
 This idea leads us to the concept of effectiveness of test cases. Test engineers must therefore
judiciously select fewer but effective test cases.
What is a Test case?
• A test case has an identity and is associated with a program behavior. It also has a set of inputs and expected
outputs.
• In its most basic form, a test case is a simple pair
<input, expected outcome>
 State-less systems: A compiler is a stateless system
– Test cases are very simple
• Outcome depends solely on the current input
 If a program under test is expected to compute the square root of nonnegative numbers, then its
test cases are simple <input, expected outcome> pairs such as <25, 5>.

• State-oriented: ATM is a state oriented system


– Test cases are not that simple. A test case may consist of a sequence of <input, expected outcome> pairs
• The outcome depends both on the current state of the system and the current input
• ATM example:
o < check balance, $500.00 >
o < withdraw, "amount?" >
o < $200.00, "$200.00" >
o < check balance, $300.00 >
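The sequence above can be sketched as a tiny Python model. The ATM class below is hypothetical and greatly simplified (no PIN handling, no "amount?" prompt); it only shows that the expected outcome depends on accumulated state, not just the current input:

```python
class ATM:
    """Minimal hypothetical account model: outcomes depend on state + input."""
    def __init__(self, balance):
        self.balance = balance

    def check_balance(self):
        return self.balance

    def withdraw(self, amount):
        if amount > self.balance:
            return "insufficient funds"
        self.balance -= amount
        return amount

atm = ATM(500.00)
# The test is a sequence of <input, expected outcome> pairs; order matters.
assert atm.check_balance() == 500.00
assert atm.withdraw(200.00) == 200.00
assert atm.check_balance() == 300.00   # outcome changed because the state changed
print("state-oriented sequence passed")
```

Running the same check_balance input at two points in the sequence yields two different expected outcomes, which is exactly what distinguishes state-oriented from stateless test cases.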
How To Write Test Cases in Manual Testing
Step #1 – Test Case ID: Each test case should be represented by a unique ID.
It is good practice to follow a naming convention for better understanding and to distinguish test cases.
Step #2 – Test Case Description:
Pick test cases properly from the test scenarios
Example:
Test scenario: Verify the login of Gmail
Test case: Enter a valid username and valid password
Step #3 – Pre-Conditions:
Conditions that need to meet before executing the test case. Mention if any preconditions are available.
Example: Need a valid Gmail account to do login
Step #4 – Test Steps:
To execute a test case, you need to perform some actions. So write proper test steps: mention all the test steps in
detail, in the order in which they would be executed from the end user's perspective.
Example:
Enter Username
Enter Password
Click Login button
Step #5 – Test Data:
You need proper test data to execute the test steps, so gather appropriate data to be used as
input for the test cases.
Username: [email protected]
Password: STM
Step #6 – Expected Result:
The result we expect once the test case is executed. It might be anything, such as the home page, a relevant screen,
an error message, etc.
Example: Successful login

Step #7 – Post Condition:
Conditions that must hold after the test case has been successfully executed.
Example: Gmail inbox is shown

Step #8 – Actual Result:
The result the system shows once the test case is executed. Capture the result after execution. Based on this result
and the expected result, we set the status of the test case.
Example: Redirected to Gmail inbox

Step #9 – Status:
Finally, set the status as Pass or Fail by comparing the actual result against the expected result. If they are the same,
mark the test case as Passed; otherwise mark it as Failed. If a test fails, it has to go through the bug life cycle to be fixed.
Example:
Result: Pass
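The nine steps above can be assembled into one record. The Python sketch below is hypothetical (the field names, the placeholder account user@example.com, and the helper record_result are invented for illustration):

```python
# A hypothetical Gmail-login test case assembled from steps #1-#9 above.
test_case = {
    "id": "TC_LOGIN_001",
    "description": "Enter a valid username and valid password",
    "precondition": "A valid Gmail account exists",
    "steps": ["Enter Username", "Enter Password", "Click Login button"],
    "test_data": {"username": "user@example.com", "password": "STM"},
    "expected_result": "Successful login",
}

def record_result(case, actual_result):
    """Step #8 and #9: capture the actual result and set the status."""
    case["actual_result"] = actual_result
    case["status"] = "Pass" if actual_result == case["expected_result"] else "Fail"
    return case["status"]

print(record_result(test_case, "Successful login"))  # → Pass
```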
Test case example…
A test case is a set of conditions and criteria that specify how a tester will determine if the system does
what it is expected to do.
Test cases can be manual where a tester follows conditions and steps by hand or automated where a test is
written as a program to run against the system

Project name: ATM
Module name: Withdrawal
Created by: Manager
Creation date: 18-03-2021
Reviewed by:
Reviewed date:

Test scenario ID: TS_ATM_001
Test scenario description: verify functionality of ATM (applies to all test cases below)

TC_ATM_001 – Enter valid pin and valid amount
Test steps: 1) enter a valid pin 2) enter a valid amount 3) click on enter button
Pre-condition: Test data | Test data: Valid ATM pin: *****, Amount: 2000
Post condition: User should be able to withdraw the amount
Expected result: Successful | Actual result: Successful | Status: Pass

TC_ATM_002 – Enter invalid pin
Test steps: 1) enter an invalid pin 2) enter a valid amount 3) click on enter button
Pre-condition: Test data | Test data: Valid ATM pin: **, Amount: 2000
Post condition: User should not be able to withdraw the amount
Expected result: Unsuccessful | Actual result: Unsuccessful | Status: Pass

TC_ATM_003 – Enter valid pin and invalid amount
Test steps: 1) enter a valid pin 2) enter an invalid amount 3) click on enter button
Pre-condition: Test data | Test data: Valid ATM pin: *****, Amount: 200000
Post condition: User should not be able to withdraw the amount
Expected result: Unsuccessful | Actual result: Unsuccessful | Status: Pass

TC_ATM_004 – Enter invalid pin more than 3 times
Test steps: 1) enter an invalid pin 2) click on enter button
Pre-condition: Test data | Test data: Valid ATM pin: *****, Amount: 2000
Post condition: User should not be able to withdraw the amount
Expected result: Unsuccessful | Actual result: Unsuccessful | Status: Pass

TC_ATM_005 – Enter an empty pin
Test steps: 1) enter an empty pin 2) click on enter button
Pre-condition: Test data | Test data: Valid ATM pin: (empty), Amount: 2000
Post condition: User should not be able to withdraw the amount
Expected result: Unsuccessful | Actual result: Unsuccessful | Status: Pass
Test Scenario
• Test Scenario gives the idea of what we have to test.
• Test Scenario is like a high-level test case.
Expected Outcome
• An outcome of program execution is a complex entity that may include the
following:
 Values produced by the program: Outputs for local observation (integer, text,
audio, image), Outputs (messages) for remote storage, manipulation, or
observation
 State change: State change of the program, State change of the database
(due to add, delete, and update operations)
 A sequence or set of values which must be interpreted together for the
outcome to be valid
• An important concept in test design is the concept of an oracle.
• An oracle is any entity—program, process, human expert, or body of data—that
tells us the expected outcome of a particular test or set of tests
• A test case is meaningful only if it is possible to decide on the acceptability of the
result produced by the program under test.
• A test oracle is a mechanism that verifies the correctness of program outputs
– Generate expected results for the test inputs
– Compare the expected results with the actual results of execution
(In software testing, a test oracle (or just oracle) is a provider of information that
describes correct output based on the input of a test case. Testing with an oracle
involves comparing actual results of the system under test (SUT) with the expected
results as provided by the oracle.)
• In exceptional cases, where it is extremely difficult, impossible, or even undesirable to
compute a single expected outcome, one should identify expected outcomes by
examining the actual test outcomes, as explained in the following:
1. Execute the program with the selected input.
2. Observe the actual outcome of program execution.
3. Verify that the actual outcome is the expected outcome.
4. Use the verified actual outcome as the expected outcome in subsequent runs of the
test case.
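The four steps above can be sketched in Python. Everything here is hypothetical (the program under test and the dictionary used as a stored oracle); step 3, human verification of the first observed outcome, is assumed to have happened before the outcome is recorded:

```python
def run_with_oracle(program, test_input, oracle_db):
    """Steps 1-4 above: execute, observe, compare against the stored
    expected outcome. On the first run the observed (and, by assumption,
    manually verified) outcome is recorded and reused as the expected
    outcome in subsequent runs of the test case."""
    actual = program(test_input)           # steps 1-2: execute and observe
    if test_input not in oracle_db:
        oracle_db[test_input] = actual     # step 4: verified actual becomes expected
        return "recorded"
    # step 3: verify actual against expected
    return "pass" if actual == oracle_db[test_input] else "fail"

def square(x):   # hypothetical program under test
    return x * x

db = {}
print(run_with_oracle(square, 7, db))  # → recorded
print(run_with_oracle(square, 7, db))  # → pass
```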
The Concept of Complete Testing
• Complete or exhaustive testing means
“There are no undisclosed faults at the end of test phase”
• Complete testing is near impossible for most systems
o The domain of possible inputs of a program is too large
• Valid inputs
• Invalid inputs
o There may be timing constraints on the inputs, that is, an input may be valid at a
certain time and invalid at other times.
o The design issues may be too complex to completely test
• For example, a programmer may use a global variable or a static variable to control program execution.
– It may not be possible to create all possible execution environments of the system.
– This becomes more significant when the behaviour of the software system depends on the real,
outside world, such as weather, temperature, altitude, pressure, and so on.
Central Issue in Testing
• Realize that though the outcome of complete testing, that is, discovering all
faults, is highly desirable, it is a near-impossible task, and it may not be
attempted.
• The next best option is to select a subset of the input domain to test a program.
• Let D be the input domain of a program P.
 We select a subset D1 of D, that is, D1 ⊂D, to test program P.
 It is possible that D1 exercises only a part P1, that is, P1 ⊂P, of the execution
behaviour of P, in which case faults with the other part, P2, will go
undetected.
 By selecting a subset of the input domain D1, the test engineer attempts to
deduce properties of an entire program P by observing the behaviour of a
part P1 of the entire behaviour of P on selected inputs D1.
 Therefore, selection of the subset of the input domain must be done in a systematic and
careful manner so that the deduction is as accurate and complete as possible.
A subset of the input domain exercising a subset of the program behavior:
 Divide the input domain D into D1 and D2
 Select a subset D1 of D to test program P
 It is possible that D1 exercises only a part P1 of P
Testing Activities
Different activities in the testing process:
1) Identify the objective to be tested
2) Select inputs
3) Compute the expected outcome
4) Set up the execution environment of the program
5) Execute the program
6) Analyze the test result
1) Identify the objective to be tested - The objective defines the
intention, or purpose, of designing one or more test cases to ensure
that the program supports the objective.
2) Select inputs - Selection of test inputs can be based on the
requirements specification, the source code, or our expectations.
Test inputs are selected by keeping the test objective in mind.
3) Compute the expected outcome - This can be done from an
overall, high-level understanding of the test objective and the
specification of the program under test.
4) Set up the execution environment of the program: all the assumptions
external to the program must be satisfied. A few examples of assumptions
external to a program are as follows:
• Initialize the local system, external to the program. This may
include making a network connection available, making the right
database system available, and so on.
• Initialize any remote, external system (e.g., remote partner process in
a distributed application.) For example, to test the client code, we may
need to start the server at a remote site.
5) Execute the program: The test engineer executes the program with the
selected inputs and observes the actual outcome of the program.
6) Analyze the test results: compare the actual outcome of program execution with
the expected outcome.
• At the end of the analysis step, a test verdict is assigned to the program.
• There are three major kinds of test verdicts, namely, pass, fail, and inconclusive
• Pass: If the program produces the expected outcome and the purpose of the test
case is satisfied, then a pass verdict is assigned.
• Fail: If the program does not produce the expected outcome, then a fail verdict is
assigned.
• Inconclusive: In some cases it may not be possible to assign a clear pass or fail
verdict. For example
if a timeout occurs while executing a test case on a distributed application, we
may not be in a position to assign a clear pass or fail verdict. In those cases, an
inconclusive test verdict is assigned. An inconclusive test verdict means that
further tests are needed to be done to refine the inconclusive verdict into a clear
pass or fail verdict.
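The three verdicts can be expressed as a small helper. A hypothetical Python sketch (the timed_out flag stands in for any condition, such as a timeout on a distributed application, that prevents a clear comparison):

```python
def assign_verdict(actual, expected, timed_out=False):
    """Assign one of the three test verdicts described above."""
    if timed_out:
        return "inconclusive"   # further tests needed to refine the verdict
    return "pass" if actual == expected else "fail"

print(assign_verdict(42, 42))                    # → pass
print(assign_verdict(41, 42))                    # → fail
print(assign_verdict(None, 42, timed_out=True))  # → inconclusive
```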
• A test report must be written after analyzing the test result.
• The motivation for writing a test report is to get the fault fixed if the test
revealed a fault.
• A test report contains the following items to be informative:
 Explain how to reproduce the failure.
 Analyze the failure to be able to describe it.
 A pointer to the actual outcome and the test case, complete with the
input, the expected outcome, and the execution environment.
Testing Level
Development and testing phases in the V model
• Testing is performed at different levels
involving the complete system.
• A software system goes through four
stages of testing before it is actually
deployed. These four stages are known
as unit, integration, system, and
acceptance level testing.
• The first three levels of testing are
performed by a number of different
stakeholders in the development
organization, whereas acceptance
testing is performed by the customers.
• The four stages of testing have been
illustrated in the form of what is called
the classical V model.
• Unit testing
– Individual program units, such as procedure, methods in isolation
• Integration testing
– Modules are assembled to construct larger subsystems and tested
• System testing
– Includes a wide spectrum of testing such as functionality testing, security testing,
robustness testing, load testing, stability testing, stress testing, performance
testing, and reliability testing.
– System testing comprises a number of distinct activities: creating a test plan, designing
a test suite, preparing test environments, executing the tests by following a clear
strategy, and monitoring the process of test execution.
• Acceptance testing
– A key notion is the customer’s expectations from the system
– The objective of acceptance testing is to measure the quality of the product.
Testing Level….
– Two types of acceptance testing:
• UAT (User Acceptance Testing)
• BAT (Business Acceptance Testing)
– UAT: conducted by the customer to verify that the system satisfies the contractual
acceptance criteria
– BAT: undertaken within the supplier’s development organization to ensure that the
system will eventually pass the user acceptance test
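As a concrete illustration of the lowest level, a unit test exercises a single procedure in isolation. A minimal sketch using Python’s built-in `unittest` module; the function `absolute` is a made-up unit under test:

```python
import unittest

def absolute(x):
    """Unit under test: return the absolute value of x."""
    return x if x >= 0 else -x

class AbsoluteTest(unittest.TestCase):
    """Unit-level test cases for absolute(), run in isolation."""

    def test_positive(self):
        self.assertEqual(absolute(5), 5)

    def test_negative(self):
        self.assertEqual(absolute(-5), 5)

    def test_zero(self):
        self.assertEqual(absolute(0), 0)

if __name__ == "__main__":
    unittest.main()
```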
• Regression testing at different software testing levels
Testing Level….
• Regression testing is another level of testing that is performed throughout the life
cycle of a system.
• It is performed whenever a component of the system is modified.
• The key idea is to ascertain that the modification has not introduced any new faults.
• Regression testing is considered as a sub-phase of unit, integration, and system-level
testing.
• New test cases are not designed
• Tests are selected, prioritized and executed
• To ensure that nothing is broken in the new version of the software
(Regression testing is a type of software testing that ensures that recent changes to a
program or code don't negatively impact existing features. It's a key part of software
development, as it helps to identify and fix bugs early on, reducing risk and the time it
takes to fix defects.)
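The select-prioritize-execute idea can be sketched as follows; the test records and their `covers`/`priority` fields are hypothetical:

```python
# Hypothetical regression suite: each stored test case records which
# modules it covers and a priority (lower number = run earlier).
tests = [
    {"name": "t1", "covers": {"billing"}, "priority": 2, "run": lambda: True},
    {"name": "t2", "covers": {"login"},   "priority": 1, "run": lambda: True},
    {"name": "t3", "covers": {"billing"}, "priority": 1, "run": lambda: True},
]

def select_regression_tests(tests, modified_modules):
    """Select existing tests touching a modified module; no new tests
    are designed. The selection is then prioritized for execution."""
    selected = [t for t in tests if t["covers"] & modified_modules]
    return sorted(selected, key=lambda t: t["priority"])

# The "billing" module was modified: rerun only the tests covering it.
chosen = select_regression_tests(tests, {"billing"})
assert [t["name"] for t in chosen] == ["t3", "t1"]   # t2 is skipped
results = {t["name"]: t["run"]() for t in chosen}     # execute the selection
assert all(results.values())  # nothing is broken in the new version
```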
Source of Information for Test Selection
In order to generate effective tests at a lower cost, test designers analyze the
following sources of information:
1. Requirement and Functional Specifications
2. Source Code
3. Input and output Domain
4. Operational Profile
5. Fault Model
– Error Guessing
– Fault Seeding
– Mutation Analysis
Source of Information for Test Selection….
• Requirement and Functional Specifications:
o The process begins by capturing user needs. The nature and amount of user needs
identified at the beginning of system development will vary depending on the specific
life-cycle model to be followed.
o In the waterfall model, a requirements engineer tries to capture most of the
requirements.
o In an agile software development model, such as XP or Scrum, only a few
requirements are identified in the beginning.
o Whichever life-cycle model is chosen, a test engineer considers all the requirements
the program is expected to meet when testing the program.
• Source Code: describes the actual behaviour of the system
• Input and Output Domain: Some values in the input domain of a program
have special meanings, and hence must be treated separately. For example,
the factorial of a nonnegative integer.
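For the factorial example, the special values at the boundary of the input domain deserve their own test cases; a minimal sketch:

```python
def factorial(n):
    """Factorial of a nonnegative integer; negative input is rejected."""
    if n < 0:
        raise ValueError("factorial is undefined for negative integers")
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Special values in the input domain are treated separately:
assert factorial(0) == 1      # boundary of the legal input domain
assert factorial(5) == 120    # a typical interior value
try:
    factorial(-1)             # outside the legal input domain
except ValueError:
    pass
else:
    raise AssertionError("negative input should be rejected")
```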
Source of Information for Test Selection….
• Operational Profile: is a quantitative characterization of how a system will
be used. It was created to guide test engineers in selecting test cases
(inputs) using samples of system usage.
• Fault Model:
o Previously encountered faults are an excellent source of information in
designing new test cases.
o The known faults are classified into different classes, such as initialization
faults, logic faults, and interface faults, and stored in a repository.
• There are three types of fault-based testing:
1. Error Guessing: A test engineer applies his experience to
(i) assess the situation and guess where and what kinds of faults might
exist, and
(ii) design tests to specifically expose those kinds of faults.
Source of Information for Test Selection….
2. Fault Seeding: known faults are injected into a program, and the test
suite is executed to assess the effectiveness of the test suite.
• fault seeding is based on the idea of fault injection – a fault is
inserted into a program, and an oracle is available to assert that the
inserted fault indeed made the program incorrect.
3. Mutation Analysis: program statements are mutated (slightly modified) in
order to determine the fault detection ability of the test suite. If the test
cases are not capable of revealing such faults, the test engineer may specify
additional test cases to reveal them.
• It is based on the idea of fault simulation – a modification is not
guaranteed to lead to a faulty program; one may even modify an
incorrect program and turn it into a correct program
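A minimal illustration of mutation analysis, with made-up names: one relational operator in `max_of` is mutated, and the test suite is checked to see whether it kills (detects) the mutant:

```python
def max_of(a, b):
    """Original program under test."""
    return a if a >= b else b

def max_of_mutant(a, b):
    """Mutant: the ">=" operator has been mutated to "<="."""
    return a if a <= b else b

def suite_kills(candidate):
    """A test suite kills a mutant if at least one test case fails on it."""
    cases = [((3, 1), 3), ((1, 3), 3), ((2, 2), 2)]
    return any(candidate(*args) != expected for args, expected in cases)

assert not suite_kills(max_of)     # the original program passes every test
assert suite_kills(max_of_mutant)  # the suite detects (kills) the mutant
```

If a mutant survived, the test engineer would add test cases targeted at the mutated statement.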
White-box testing/ Structural Testing
Structural Testing
• Structural Testing is another approach to test case identification. It is also called
White Box, Clear Box, Glass Box, or Open Box testing. A function is
understood only in terms of its implementation, which is used to identify test cases.
• White Box: A strategy in which testing is based on internal parts, structure, and
implementation.
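For instance, a white-box tester reads the branch structure of the code and picks one input per branch outcome; a hedged sketch with a made-up function:

```python
def classify(x):
    """Unit under test with a single two-way branch."""
    if x >= 0:                 # branch under test
        return "nonnegative"
    return "negative"

# White-box test selection: inputs are chosen from the code's control
# flow so that both outcomes of the if-statement are exercised.
assert classify(1) == "nonnegative"   # covers the true branch
assert classify(-1) == "negative"     # covers the false branch
```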
Black box testing/ Functional Testing
• Functional Testing is based on the view that any program can be considered
to be a function that maps values from the input domain to values in the output
range. This notion is used when systems are considered to be black boxes.
• In black box testing, the content or implementation of a black box is not
known, and its function is understood completely in terms of its inputs and
outputs.
Gray Box testing
• Gray box testing is a combination of white box and black box testing. It
can be performed by a person who knows both coding and testing.
• When a single person performs white box as well as black box
testing for the application, it is known as gray box testing.
White-box testing/ structural testing
• Examines source code with focus on:
– Control flow
– Data flow
• Control flow refers to the flow of control from one instruction to another
• Data flow refers to the propagation of values from one variable or constant to another variable
• It is applied to individual units of a program
• Software developers perform structural testing on the individual program units they write
Black-box testing/ functional testing
• Examines the program as it is accessible from outside
• Applies inputs to the program and observes the externally visible outcome
• It is applied to an entire program as well as to individual program units
• It is performed at the external interface level of a system
• It is conducted by a separate software quality assurance group
Test Planning and Design
• The purpose is to get ready and organized for test execution
• A test plan provides a:
– Framework
 A set of ideas, facts or circumstances within which the tests will be
conducted
– Scope
 Outline the domain or extent of the test activities
– Details of resource needed
– Effort required
– Schedule of activities
– Budget
Test Planning and Design….
• Test design is a critical phase of software testing.
• In this phase:
 the system requirements are critically studied,
 system features to be tested are thoroughly identified, and
 the objectives of test cases and
 the detailed behaviour of test cases are defined.
• Test objectives are identified from different sources namely, the requirement
specification and the functional specification.
• Each test case is designed as a combination of modular test components
called test steps.
• Test steps are combined together to create more complex tests.
Test Planning and Design….
• A new test-centric approach to system development is gradually emerging,
called Test-Driven Development (TDD)
• Here programmers design, develop & implement test cases before
production code is written.
• This approach is a key practice in the modern agile s/w development process.
• Main characteristics of agile s/w development process are:
 Incremental development
 Coding of unit & acceptance tests along with customers
 Frequent regression testing
 Writing test code, one test case at a time, before production code
Agile Tool - JIRA
• Connect every team, task, and project together with Jira
Powerful agile boards
• Scrum boards: Scrum boards help agile teams break large, complex projects into
manageable pieces of work so focused teams ship faster.
• Kanban boards: Agile and DevOps teams can use flexible kanban boards to
visualize workflows, limit work-in-progress, and maximize efficiency as a team.
Templates make it easy to get started quickly and customize as you go.
• Choose your own adventure: Jira Software is flexible enough to mold to your
team’s own unique way of working, whether it is Scrum, Kanban, or something in
between.
• Software development teams - Software developers or development teams
primarily use Jira Software, which includes all of Jira Core's features along
with agile functionality.
Agile Tool – JIRA…..
• Jira is the #1 agile project management tool used by teams to plan, track,
release and support world-class software with confidence. It is the single
source of truth for your entire development lifecycle, empowering
autonomous teams with the context to move quickly while staying
connected to the greater business goal.
7 steps to get started in Jira
• Step 1 - Create a project. Log into your Jira site
• Step 2 - Pick a template.
• Step 3 - Set up your columns.
• Step 4 - Create an issue.
• Step 5 - Connect your tools.
• Step 6 - Invite your team.
• Step 7 - Move work forward.